
Change "netdata" to "Netdata" in all docs ()

* First pass of changing netdata to Netdata

* Second pass of netdata -> Netdata

* Starting work on netdata with no whitespace after

* Pass for netdata with no whitespace at the end

* Pass for netdata with no whitespace at the front
Joel Hans 2019-08-13 08:07:17 -07:00 committed by GitHub
parent dc38b1d15d
commit a726c905bd
105 changed files with 788 additions and 794 deletions
.travis
CONTRIBUTING.md
CONTRIBUTORS.md
README.md
REDISTRIBUTED.md
backends
collectors
README.md
apps.plugin
cgroups.plugin
charts.d.plugin
diskspace.plugin
fping.plugin
freebsd.plugin
freeipmi.plugin
ioping.plugin
macos.plugin
nfacct.plugin
node.d.plugin
README.md
fronius
named
sma_webbox
snmp
stiebeleltron
plugins.d
proc.plugin
python.d.plugin
README.md
chrony
dovecot
fail2ban
go_expvar
haproxy
httpcheck
isc_dhcpd
logind
mongodb
oracledb
portcheck
web_log
statsd.plugin
tc.plugin
xenstat.plugin
contrib
daemon
database
docs
health
README.md
notifications
awssns
custom
discord
email
flock
irc
kavenegar
messagebird
pagerduty
pushbullet
pushover
rocketchat
slack
smstools3
syslog
telegram
twilio
web
libnetdata
README.md
adaptive_resortable_list
config
procfile
simple_pattern
storage_number
packaging
registry
streaming
web
README.md
api
README.md
badges
exporters
prometheus
shell
formatters


@@ -54,7 +54,7 @@ At this stage, basically, we build :-)
We do a baseline check of our build artifacts to guarantee they are not broken
Briefly our activities include:
- Verify docker builds successfully
- Run the standard netdata installer, to make sure we build & run properly
- Run the standard Netdata installer, to make sure we build & run properly
- Do the same through 'make dist', as this is our stable channel for our kickstart files
## Artifacts validation
@@ -66,7 +66,7 @@ Briefly we currently evaluate the following activities:
- Basic software unit testing
- Non-containerized build and install on Ubuntu 14.04
- Non-containerized build and install on Ubuntu 18.04
- Running the full netdata lifecycle (install, update, uninstall) on Ubuntu 18.04
- Running the full Netdata lifecycle (install, update, uninstall) on Ubuntu 18.04
- Build and install on CentOS 6
- Build and install on CentOS 7
(More to come)


@@ -15,15 +15,15 @@ This is the minimum open-source users should contribute back to the projects the
### Spread the word
Community growth allows the project to attract new talent willing to contribute. This talent is then developing new features and improves the project. These new features and improvements attract more users and so on. It is a loop. So, post about netdata, present it to local meetups you attend, let your online social network or twitter, facebook, reddit, etc. know you are using it. **The more people involved, the faster the project evolves**.
Community growth allows the project to attract new talent willing to contribute. This talent then develops new features and improves the project. These new features and improvements attract more users and so on. It is a loop. So, post about Netdata, present it to local meetups you attend, and let your online social networks (Twitter, Facebook, Reddit, etc.) know you are using it. **The more people involved, the faster the project evolves**.
### Provide feedback
Is there anything that bothers you about netdata? Did you experience an issue while installing it or using it? Would you like to see it evolve to you need? Let us know. [Open a github issue](https://github.com/netdata/netdata/issues) to discuss it. Feedback is very important for open-source projects. We can't commit we will do everything, but your feedback influences our road-map significantly. **We rely on your feedback to make Netdata better**.
Is there anything that bothers you about Netdata? Did you experience an issue while installing it or using it? Would you like to see it evolve to your needs? Let us know. [Open a github issue](https://github.com/netdata/netdata/issues) to discuss it. Feedback is very important for open-source projects. We can't commit to doing everything, but your feedback influences our road-map significantly. **We rely on your feedback to make Netdata better**.
### Translate some documentation
The [netdata localization project](https://github.com/netdata/localization) contains instructions on how to provide translations for parts of our documentation. Translating the entire documentation is a daunting task, but you can contribute as much as you like, even a single file. The Chinese translation effort has already begun and we are looking forward to more contributions.
The [Netdata localization project](https://github.com/netdata/localization) contains instructions on how to provide translations for parts of our documentation. Translating the entire documentation is a daunting task, but you can contribute as much as you like, even a single file. The Chinese translation effort has already begun and we are looking forward to more contributions.
### Sponsor a part of Netdata
@@ -57,7 +57,7 @@ Netdata delivers alarms via various [notification methods](health/notifications)
### Help other users
As the project grows, an increasing share of our time is spent on supporting this community of users in terms of answering questions, of helping users understand how netdata works and find their way with it. Helping other users is crucial. It allows the developers and maintainers of the project to focus on improving it.
As the project grows, an increasing share of our time is spent on supporting this community of users: answering questions and helping users understand how Netdata works and find their way with it. Helping other users is crucial. It allows the developers and maintainers of the project to focus on improving it.
### Improve documentation
@@ -80,11 +80,11 @@ Of course we appreciate contributions for any other part of the NetData agent, i
#### Code of Conduct and CLA
We expect all contributors to abide by the [Contributor Covenant Code of Conduct](CODE_OF_CONDUCT.md). For a pull request to be accepted, you will also need to accept the [netdata contributors license agreement](CONTRIBUTORS.md), as part of the PR process.
We expect all contributors to abide by the [Contributor Covenant Code of Conduct](CODE_OF_CONDUCT.md). For a pull request to be accepted, you will also need to accept the [Netdata contributors license agreement](CONTRIBUTORS.md), as part of the PR process.
#### Performance and efficiency
Everything on Netdata is about efficiency. We need netdata to always be the most lightweight monitoring solution available. We will reject to merge PRs that are not optimal in resource utilization and efficiency.
Everything on Netdata is about efficiency. We need Netdata to always be the most lightweight monitoring solution available. We will not merge PRs that are not optimal in resource utilization and efficiency.
Of course there are cases where such technical excellence is either not reasonable or not feasible. In these cases, we may require the feature or code submitted to be disabled by default.
@@ -92,9 +92,9 @@ Of course there are cases that such technical excellence is either not reasonabl
Unlike other monitoring solutions, Netdata requires all metrics collected to have some structure attached to them. So, Netdata metrics have a name, units, belong to a chart that has a title, a family, a context, belong to an application, etc.
This structure is what makes netdata different. Most other monitoring solution collect bulk metrics in terms of name-value pairs and then expect their users to give meaning to these metrics during visualization. This does not work. It is neither practical nor reasonable to give to someone 2000 metrics and let him/her visualize them in a meaningful way.
This structure is what makes Netdata different. Most other monitoring solutions collect bulk metrics in terms of name-value pairs and then expect their users to give meaning to these metrics during visualization. This does not work. It is neither practical nor reasonable to give someone 2000 metrics and expect them to visualize the data in a meaningful way.
So, netdata requires all metrics to have a meaning at the time they are collected. We will reject to merge PRs that loosely collect just a "bunch of metrics", but we are very keen to help you fix this.
So, Netdata requires all metrics to have a meaning at the time they are collected. We will not merge PRs that loosely collect just a "bunch of metrics", but we are very keen to help you fix this.
#### Automated Testing
@@ -106,7 +106,7 @@ Of course, manual testing is always required.
#### Netdata is a distributed application
Netdata is a distributed monitoring application. A few basic features can become quite complicated for such applications. We may reject features that alter or influence the nature of netdata, though we usually discuss the requirements with contributors and help them adapt their code to be better suited for Netdata.
Netdata is a distributed monitoring application. A few basic features can become quite complicated for such applications. We may reject features that alter or influence the nature of Netdata, though we usually discuss the requirements with contributors and help them adapt their code to be better suited for Netdata.
#### Operating systems supported


@@ -2,9 +2,9 @@
SPDX-License-Identifier: GPL-3.0-or-later
-->
# netdata contributors license agreement
# Netdata contributors license agreement
**Thank you for contributing to netdata!**
**Thank you for contributing to Netdata!**
This agreement is part of the legal framework of the open-source ecosystem
that adds some red tape, but protects both the contributor and the project.
@@ -17,22 +17,22 @@ contributions for any other purpose.
## copyright license
The Contributor (*you*) grants netdata Inc. a perpetual, worldwide, non-exclusive,
The Contributor (*you*) grants Netdata Inc. a perpetual, worldwide, non-exclusive,
no-charge, royalty-free, irrevocable copyright license to reproduce,
prepare derivative works of, publicly display, publicly perform, sublicense,
and distribute his contributions and such derivative works.
## copyright transfer
The Contributor (*you*) hereby assigns netdata Inc. copyright in his
The Contributor (*you*) hereby assigns Netdata Inc. copyright in his
contributions, to be licensed under the same terms as the rest of the code.
> *Note: this means we may re-license netdata (your contributions included)
> *Note: this means we may re-license Netdata (your contributions included)
> any way we see fit, without asking your permission.
> We intend to keep the netdata agent forever FOSS.
> We intend to keep the Netdata agent forever FOSS.
> But open-source licenses have significant differences and in our attempt to
> help netdata grow we may have to distribute it under a different license.
> For example, CNCF, the Cloud Native Computing Foundation, requires netdata
> help Netdata grow we may have to distribute it under a different license.
> For example, CNCF, the Cloud Native Computing Foundation, requires Netdata
> to be licensed under Apache-2.0 for it to be accepted as a member of the
> Foundation. We want to be free to do it.*
@@ -43,9 +43,9 @@ original creation and that he is legally entitled to grant the above license.
> *Note: if you are committing third party code, please make sure the third party
> license or any other restrictions are also included with your commits.
> netdata includes many third party libraries and tools and this is not a
> Netdata includes many third party libraries and tools and this is not a
> problem, provided that the license of the third party code is compatible with
> the one we use for netdata.*
> the one we use for Netdata.*
## signature
@@ -66,7 +66,7 @@ are subject to this agreement.
> 1. add your github username and name in this file
> 2. commit it to the repo with a PR, using the same github username, or include this change in your first PR.
# netdata contributors
# Netdata contributors
This is the list of contributors that have signed this agreement:


@@ -154,17 +154,17 @@ not just visualize metrics.
Release v1.16.0 contains 40 bug fixes, 31 improvements and 20 documentation updates
**Binary distributions.** To improve the security, speed and reliability of new netdata installations, we are delivering our own, industry standard installation method, with binary package distributions. The RPM binaries for the most common OSs are already available on packagecloud and we'll have the DEB ones available very soon. All distributions are considered in Beta and, as always, we depend on our amazing community for feedback on improvements.
**Binary distributions.** To improve the security, speed and reliability of new Netdata installations, we are delivering our own, industry standard installation method, with binary package distributions. The RPM binaries for the most common OSs are already available on packagecloud and we'll have the DEB ones available very soon. All distributions are considered in Beta and, as always, we depend on our amazing community for feedback on improvements.
- Our stable distributions are at [netdata/netdata @ packagecloud.io](https://packagecloud.io/netdata/netdata)
- The nightly builds are at [netdata/netdata-edge @ packagecloud.io](https://packagecloud.io/netdata/netdata-edge)
**Netdata now supports TLS encryption!** You can secure the communication to the [web server](https://docs.netdata.cloud/web/server/#enabling-tls-support), the [streaming connections from slaves to the master](https://docs.netdata.cloud/streaming/#securing-the-communication) and the connection to an [openTSDB backend](https://docs.netdata.cloud/backends/opentsdb/#https).
**This version also brings two long-awaited features to netdata's health monitoring:**
**This version also brings two long-awaited features to Netdata's health monitoring:**
- The [health management API](https://docs.netdata.cloud/web/api/health/#health-management-api) introduced in v1.12 allowed you to easily disable alarms and/or notifications while netdata was running. However, those changes were not persisted across netdata restarts. Since part of routine maintenance activities may involve completely restarting a monitoring node, netdata now saves these configurations to disk, every time you issue a command to change the silencer settings. The new [LIST command](https://docs.netdata.cloud/web/api/health/#list-silencers) of the API allows you to view at any time which alarms are currently disabled or silenced.
- A way for netdata to [repeatedly send alarm notifications](https://docs.netdata.cloud/health/#alarm-line-repeat) for some, or all active alarms, at a frequency of your choosing. As a result, you will no longer have to worry about missing a notification, forgetting about a raised alarm. The default is still to only send a single notification, so that existing users are not surprised by a different behavior.
- The [health management API](https://docs.netdata.cloud/web/api/health/#health-management-api) introduced in v1.12 allowed you to easily disable alarms and/or notifications while Netdata was running. However, those changes were not persisted across Netdata restarts. Since part of routine maintenance activities may involve completely restarting a monitoring node, Netdata now saves these configurations to disk, every time you issue a command to change the silencer settings. The new [LIST command](https://docs.netdata.cloud/web/api/health/#list-silencers) of the API allows you to view at any time which alarms are currently disabled or silenced.
- A way for Netdata to [repeatedly send alarm notifications](https://docs.netdata.cloud/health/#alarm-line-repeat) for some, or all active alarms, at a frequency of your choosing. As a result, you will no longer have to worry about missing a notification, forgetting about a raised alarm. The default is still to only send a single notification, so that existing users are not surprised by a different behavior.
As always, we've introduced new collectors, 5 of them this time:


@@ -1,16 +1,16 @@
# Redistributed software
netdata copyright info:
Netdata copyright info:
Copyright 2016-2018, Costa Tsaousis.
Copyright 2018, Netdata Inc.
Released under [GPL v3 or later](LICENSE).
netdata uses SPDX license tags to identify the license for its files.
Netdata uses SPDX license tags to identify the license for its files.
Individual licenses referenced in the tags are available on the [SPDX project site](http://spdx.org/licenses/).
netdata redistributes the following third-party software.
Netdata redistributes the following third-party software.
We have decided to redistribute all these, instead of using them
through a CDN, to allow netdata to work in cases where Internet
through a CDN, to allow Netdata to work in cases where Internet
connectivity is not available.
- [Dygraphs](http://dygraphs.com/)


@@ -1,15 +1,15 @@
# Metrics long term archiving
netdata supports backends for archiving the metrics, or providing long term dashboards,
Netdata supports backends for archiving the metrics, or providing long term dashboards,
using Grafana or other tools, like this:
![image](https://cloud.githubusercontent.com/assets/2662304/20649711/29f182ba-b4ce-11e6-97c8-ab2c0ab59833.png)
Since netdata collects thousands of metrics per server per second, which would easily congest any backend
server when several netdata servers are sending data to it, netdata allows sending metrics at a lower
Since Netdata collects thousands of metrics per server per second, which would easily congest any backend
server when several Netdata servers are sending data to it, Netdata allows sending metrics at a lower
frequency, by resampling them.
So, although netdata collects metrics every second, it can send to the backend servers averages or sums every
So, although Netdata collects metrics every second, it can send to the backend servers averages or sums every
X seconds (though, it can send them per second if you need it to).
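As a rough illustration of this resampling, here is a minimal sketch (not Netdata's actual implementation; the function and sample data are invented):

```python
def resample(samples, every, mode="average"):
    """Collapse per-second samples into one value per `every` seconds."""
    out = []
    for i in range(0, len(samples), every):
        window = samples[i:i + every]
        value = sum(window)
        if mode == "average":
            value /= len(window)
        out.append(value)
    return out

per_second = [12.0, 15.0, 11.0, 14.0, 13.0, 10.0, 16.0, 12.0, 13.0, 14.0]
print(resample(per_second, every=10))                # one averaged point
print(resample(per_second, every=10, mode="sum"))    # one summed point
```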
## features
@@ -30,7 +30,7 @@ X seconds (though, it can send them per second if you need it to).
metrics are sent to a document db, `JSON` formatted.
- **prometheus** is described at [prometheus page](prometheus/) since it pulls data from netdata.
- **prometheus** is described at [prometheus page](prometheus/) since it pulls data from Netdata.
- **prometheus remote write** (a binary snappy-compressed protocol buffer encoding over HTTP used by
**Elasticsearch**, **Gnocchi**, **Graphite**, **InfluxDB**, **Kafka**, **OpenTSDB**,
@@ -54,26 +54,26 @@ X seconds (though, it can send them per second if you need it to).
So, counters are sent as counters and gauges are sent as gauges, much like all data collectors do.
For example, to calculate CPU utilization in this format, you need to know how to convert kernel ticks to percentage.
- `average` sends to backends normalized metrics from the netdata database.
In this mode, all metrics are sent as gauges, in the units netdata uses. This abstracts data collection
- `average` sends to backends normalized metrics from the Netdata database.
In this mode, all metrics are sent as gauges, in the units Netdata uses. This abstracts data collection
and simplifies visualization, but you will not be able to copy and paste queries from other sources to convert units.
For example, CPU utilization percentage is calculated by netdata, so netdata will convert ticks to percentage and
For example, CPU utilization percentage is calculated by Netdata, so Netdata will convert ticks to percentage and
send the average percentage to the backend.
- `sum` or `volume`: the sum of the interpolated values shown on the netdata graphs is sent to the backend.
So, if netdata is configured to send data to the backend every 10 seconds, the sum of the 10 values shown on the
netdata charts will be used.
- `sum` or `volume`: the sum of the interpolated values shown on the Netdata graphs is sent to the backend.
So, if Netdata is configured to send data to the backend every 10 seconds, the sum of the 10 values shown on the
Netdata charts will be used.
Time-series databases suggest collecting the raw values (`as-collected`). If you plan to invest in building your monitoring around a time-series database and you already know (or you will invest in learning) how to convert units and normalize the metrics in Grafana or other visualization tools, we suggest using `as-collected`.
If, on the other hand, you just need long term archiving of netdata metrics and you plan to mainly work with netdata, we suggest to use `average`. It decouples visualization from data collection, so it will generally be a lot simpler. Furthermore, if you use `average`, the charts shown in the back-end will match exactly what you see in Netdata, which is not necessarily true for the other modes of operation.
If, on the other hand, you just need long term archiving of Netdata metrics and you plan to mainly work with Netdata, we suggest using `average`. It decouples visualization from data collection, so it will generally be a lot simpler. Furthermore, if you use `average`, the charts shown in the back-end will match exactly what you see in Netdata, which is not necessarily true for the other modes of operation.
5. This code is smart enough, not to slow down netdata, independently of the speed of the backend server.
5. This code is smart enough not to slow down Netdata, independently of the speed of the backend server.
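To make these data sources concrete, here is a small sketch (illustrative only, with invented tick numbers; this is not Netdata's code) contrasting what `as-collected` and `average` would send for CPU time:

```python
# Assume the kernel reports cumulative busy CPU ticks and 100 ticks == 1 second.
ticks_before, ticks_after = 123_400, 123_475   # two readings, 10 seconds apart
interval_s, ticks_per_s = 10, 100

# `as-collected`: the raw counter is sent; the backend must do the math.
as_collected = ticks_after                     # sent as a counter

# `average`: Netdata normalizes to the units shown on its dashboard.
busy_fraction = (ticks_after - ticks_before) / (interval_s * ticks_per_s)
average = busy_fraction * 100                  # 7.5 (% CPU utilization), a gauge
print(as_collected, average)
```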
## configuration
In `/etc/netdata/netdata.conf` you should have something like this (if not, download the latest version
of `netdata.conf` from your netdata):
of `netdata.conf` from your Netdata):
```
[backend]
@@ -82,7 +82,7 @@ of `netdata.conf` from your netdata):
host tags = list of TAG=VALUE
destination = space separated list of [PROTOCOL:]HOST[:PORT] - the first working will be used, or a region for kinesis
data source = average | sum | as collected
prefix = netdata
hostname = my-name
update every = 10
buffer on failures = 10
@@ -122,13 +122,13 @@ of `netdata.conf` from your netdata):
destination = [ffff:...:0001]:2003 10.11.12.1:2003
```
When multiple servers are defined, netdata will try the next one when the first one fails. This allows
you to load-balance different servers: give your backend servers in different order on each netdata.
When multiple servers are defined, Netdata will try the next one when the first one fails. This allows
you to load-balance different servers: give your backend servers in different order on each Netdata.
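A minimal sketch of this failover behavior (in Python for illustration; Netdata's implementation is in C, and the addresses below are examples):

```python
import socket

def pick_destination(destinations, timeout=2.0):
    """Return the first destination that accepts a TCP connection."""
    for host, port in destinations:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return host, port          # the first working server is used
        except OSError:
            continue                       # try the next server on failure
    return None                            # none reachable; buffer and retry later

print(pick_destination([("10.11.12.1", 2003), ("10.11.12.2", 2003)]))
```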
netdata also ships [`nc-backend.sh`](nc-backend.sh),
Netdata also ships [`nc-backend.sh`](nc-backend.sh),
a script that can be used as a fallback backend to save the metrics to disk and push them to the
time-series database when it becomes available again. It can also be used to monitor / trace / debug
the metrics netdata generates.
the metrics Netdata generates.
For kinesis backend `destination` should be set to an AWS region (for example, `us-east-1`).
@@ -138,16 +138,16 @@ of `netdata.conf` from your netdata):
- `hostname = my-name`, is the hostname to be used for sending data to the backend server. By default
this is `[global].hostname`.
- `prefix = netdata`, is the prefix to add to all metrics.
- `update every = 10`, is the number of seconds between sending data to the backend. netdata will add
some randomness to this number, to prevent stressing the backend server when many netdata servers send
- `update every = 10`, is the number of seconds between sending data to the backend. Netdata will add
some randomness to this number, to prevent stressing the backend server when many Netdata servers send
data to the same backend. This randomness does not affect the quality of the data, only the time they
are sent.
- `buffer on failures = 10`, is the number of iterations (each iteration is `[backend].update every` seconds)
to buffer data, when the backend is not available. If the backend fails to receive the data after that
many failures, data loss on the backend is expected (netdata will also log it).
many failures, data loss on the backend is expected (Netdata will also log it).
- `timeout ms = 20000`, is the timeout in milliseconds to wait for the backend server to process the data.
By default this is `2 * update_every * 1000`.
@@ -155,7 +155,7 @@ of `netdata.conf` from your netdata):
- `send hosts matching = localhost *` includes one or more space separated patterns, using ` * ` as wildcard
(any number of times within each pattern). The patterns are checked against the hostname (the localhost
is always checked as `localhost`), allowing us to filter which hosts will be sent to the backend when
this netdata is a central netdata aggregating multiple hosts. A pattern starting with ` ! ` gives a
this Netdata is a central Netdata aggregating multiple hosts. A pattern starting with ` ! ` gives a
negative match. So to match all hosts named `*db*` except hosts containing `*slave*`, use
`!*slave* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive
or negative).
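For illustration, the first-match semantics can be approximated with Python's `fnmatch` (a sketch; Netdata's simple patterns are implemented in C):

```python
from fnmatch import fnmatch

def allowed(hostname, patterns):
    """First matching pattern wins; a leading `!` makes the match negative."""
    for pat in patterns.split():
        negative = pat.startswith("!")
        if negative:
            pat = pat[1:]
        if fnmatch(hostname, pat):
            return not negative
    return False                            # no pattern matched

print(allowed("mydb1", "!*slave* *db*"))    # True: matches *db*
print(allowed("dbslave1", "!*slave* *db*")) # False: the negative pattern wins
```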
@@ -166,8 +166,8 @@ of `netdata.conf` from your netdata):
except charts ending in `*reads`, use `!*reads apps.*` (so, the order is important: the first pattern
matching the chart id or the chart name will be used - positive or negative).
- `send names instead of ids = yes | no` controls the metric names netdata should send to backend.
netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read
- `send names instead of ids = yes | no` controls the metric names Netdata should send to the backend.
Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read
by the system and names are human friendly labels (also unique). Most charts and metrics have the same
ID and name, but in several cases they are different: disks with device-mapper, interrupts, QoS classes,
statsd synthetic charts, etc.
@@ -176,26 +176,26 @@ of `netdata.conf` from your netdata):
These are currently only sent to opentsdb and prometheus. Please use the appropriate format for each
time-series db. For example, opentsdb likes them like `TAG1=VALUE1 TAG2=VALUE2`, but prometheus likes
`tag1="value1",tag2="value2"`. Host tags are mirrored with database replication (streaming of metrics
between netdata servers).
between Netdata servers).
## monitoring operation
netdata provides 5 charts:
Netdata provides 5 charts:
1. **Buffered metrics**, the number of metrics netdata added to the buffer for dispatching them to the
1. **Buffered metrics**, the number of metrics Netdata added to the buffer for dispatching them to the
backend server.
2. **Buffered data size**, the amount of data (in KB) netdata added the buffer.
2. **Buffered data size**, the amount of data (in KB) Netdata added to the buffer.
3. ~~**Backend latency**, the time the backend server needed to process the data netdata sent.
3. ~~**Backend latency**, the time the backend server needed to process the data Netdata sent.
If there was a re-connection involved, this includes the connection time.~~
(this chart has been removed, because it only measures the time netdata needs to give the data
to the O/S - since the backend servers do not ack the reception, netdata does not have any means
(this chart has been removed, because it only measures the time Netdata needs to give the data
to the O/S - since the backend servers do not ack the reception, Netdata does not have any means
to measure this properly).
4. **Backend operations**, the number of operations performed by netdata.
4. **Backend operations**, the number of operations performed by Netdata.
5. **Backend thread CPU usage**, the CPU resources consumed by the netdata thread, that is responsible
5. **Backend thread CPU usage**, the CPU resources consumed by the Netdata thread that is responsible
for sending the metrics to the backend server.
![image](https://cloud.githubusercontent.com/assets/2662304/20463536/eb196084-af3d-11e6-8ee5-ddbd3b4d8449.png)
@@ -204,12 +204,12 @@ netdata provides 5 charts:
The latest version of the alarms configuration for monitoring the backend is [here](../health/health.d/backend.conf)
netdata adds 4 alarms:
Netdata adds 4 alarms:
1. `backend_last_buffering`, number of seconds since the last successful buffering of backend data
2. `backend_metrics_sent`, percentage of metrics sent to the backend server
3. `backend_metrics_lost`, number of metrics lost due to repeating failures to contact the backend server
4. ~~`backend_slow`, the percentage of time between iterations needed by the backend time to process the data sent by netdata~~ (this was misleading and has been removed).
4. ~~`backend_slow`, the percentage of time between iterations needed by the backend to process the data sent by Netdata~~ (this was misleading and has been removed).
![image](https://cloud.githubusercontent.com/assets/2662304/20463779/a46ed1c2-af43-11e6-91a5-07ca4533cac3.png)


@@ -41,7 +41,7 @@ visibility into your application and systems performance.
## Getting Started - Netdata
To begin, let's create the container which we will install Netdata on. We need
to run a container, forward the necessary port that netdata listens on, and
to run a container, forward the necessary port that Netdata listens on, and
attach a tty so we can interact with the bash shell on the container. But
before we do this we want name resolution between the two containers to work.
In order to accomplish this we will create a user-defined network and attach
@@ -68,7 +68,7 @@ be sitting inside the shell of the container.
After we have entered the shell we can install Netdata. This process could not
be easier. If you take a look at [this link](../packaging/installer/#installation), the Netdata devs give us
several one-liners to install netdata. I have not had any issues with these one
several one-liners to install Netdata. I have not had any issues with these one
liners and their bootstrapping scripts so far (If you guys run into anything do
share). Run the following command in your container.
@@ -97,7 +97,7 @@ Netdata dashboard.
![](https://github.com/ldelossa/NetdataTutorial/raw/master/Screen%20Shot%202017-07-28%20at%204.00.45%20PM.png)
This CHART is called system.cpu, the FAMILY is cpu, and the DIMENSION we are
observing is “system”. You can begin to draw links between the charts in netdata
observing is “system”. You can begin to draw links between the charts in Netdata
to the prometheus metrics format in this manner.
## Prometheus


@@ -1,8 +1,8 @@
# Using netdata with AWS Kinesis Data Streams
# Using Netdata with AWS Kinesis Data Streams
## Prerequisites
To use AWS Kinesis as a backend AWS SDK for C++ should be [installed](https://docs.aws.amazon.com/en_us/sdk-for-cpp/v1/developer-guide/setup.html) first. `libcrypto`, `libssl`, and `libcurl` are also required to compile netdata with Kinesis support enabled. Next, netdata should be re-installed from the source. The installer will detect that the required libraries are now available.
To use AWS Kinesis as a backend, the AWS SDK for C++ should be [installed](https://docs.aws.amazon.com/en_us/sdk-for-cpp/v1/developer-guide/setup.html) first. `libcrypto`, `libssl`, and `libcurl` are also required to compile Netdata with Kinesis support enabled. Next, Netdata should be re-installed from the source. The installer will detect that the required libraries are now available.
If the AWS SDK for C++ is being installed from source, it is useful to set `-DBUILD_ONLY="kinesis"`. Otherwise, the building process could take a very long time. Note that the default installation path for the libraries is `/usr/local/lib64`. Many Linux distributions don't include this path as the default one for a library search, so it is advisable to use the following options to `cmake` while building the AWS SDK:
@@ -21,7 +21,7 @@ To enable data sending to the kinesis backend set the following options in `netd
```
set the `destination` option to an AWS region.
In the netdata configuration directory run `./edit-config aws_kinesis.conf` and set AWS credentials and stream name:
In the Netdata configuration directory run `./edit-config aws_kinesis.conf` and set AWS credentials and stream name:
```
# AWS credentials
aws_access_key_id = your_access_key_id
@@ -32,7 +32,7 @@ stream name = your_stream_name
```
Alternatively, AWS credentials can be set for the *netdata* user using AWS SDK for C++ [standard methods](https://docs.aws.amazon.com/sdk-for-cpp/v1/developer-guide/credentials.html).
A partition key for every record is computed automatically by the netdata with the purpose to distribute records across available shards evenly.
A partition key for every record is computed automatically by Netdata to distribute records evenly across the available shards.
[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fbackends%2Faws_kinesis%2FREADME&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)]()


@@ -1,32 +1,32 @@
# Using netdata with Prometheus
# Using Netdata with Prometheus
> IMPORTANT: the format netdata sends metrics to prometheus has changed since netdata v1.7. The new prometheus backend for netdata supports a lot more features and is aligned to the development of the rest of the netdata backends.
> IMPORTANT: the format Netdata sends metrics to prometheus has changed since Netdata v1.7. The new prometheus backend for Netdata supports a lot more features and is aligned to the development of the rest of the Netdata backends.
Prometheus is a distributed monitoring system which offers a very simple setup along with a robust data model. Recently netdata added support for Prometheus. I'm going to quickly show you how to install both netdata and prometheus on the same server. We can then use grafana pointed at Prometheus to obtain long term metrics netdata offers. I'm assuming we are starting at a fresh ubuntu shell (whether you'd like to follow along in a VM or a cloud instance is up to you).
Prometheus is a distributed monitoring system which offers a very simple setup along with a robust data model. Recently Netdata added support for Prometheus. I'm going to quickly show you how to install both Netdata and prometheus on the same server. We can then use grafana pointed at Prometheus to obtain long term metrics Netdata offers. I'm assuming we are starting at a fresh ubuntu shell (whether you'd like to follow along in a VM or a cloud instance is up to you).
## Installing netdata and prometheus
## Installing Netdata and prometheus
### Installing netdata
### Installing Netdata
There are number of ways to install netdata according to [Installation](../../packaging/installer/#installation)
The suggested way of installing the latest netdata and keep it upgrade automatically. Using one line installation:
There are a number of ways to install Netdata according to [Installation](../../packaging/installer/#installation).
The suggested way is to install the latest Netdata and keep it upgraded automatically, using the one-line installation:
```
bash <(curl -Ss https://my-netdata.io/kickstart.sh)
```
At this point we should have netdata listening on port 19999. Attempt to take your browser here:
At this point we should have Netdata listening on port 19999. Point your browser here:
```
http://your.netdata.ip:19999
```
*(replace `your.netdata.ip` with the IP or hostname of the server running netdata)*
*(replace `your.netdata.ip` with the IP or hostname of the server running Netdata)*
### Installing Prometheus
In order to install prometheus we are going to introduce our own systemd startup script along with an example of prometheus.yaml configuration. Prometheus needs to be pointed to your server at a specific target url for it to scrape netdata's api. Prometheus is always a pull model meaning netdata is the passive client within this architecture. Prometheus always initiates the connection with netdata.
In order to install prometheus we are going to introduce our own systemd startup script along with an example of prometheus.yaml configuration. Prometheus needs to be pointed to your server at a specific target url for it to scrape Netdata's api. Prometheus is always a pull model meaning Netdata is the passive client within this architecture. Prometheus always initiates the connection with Netdata.
#### Download Prometheus
@@ -57,7 +57,7 @@ sudo tar -xvf /tmp/prometheus-2.3.2.linux-amd64.tar.gz -C /opt/prometheus --stri
We will use the following `prometheus.yml` file. Save it at `/opt/prometheus/prometheus.yml`.
Make sure to replace `your.netdata.ip` with the IP or hostname of the host running netdata.
Make sure to replace `your.netdata.ip` with the IP or hostname of the host running Netdata.
```yaml
# my global config
@@ -101,7 +101,7 @@ scrape_configs:
#source: [as-collected]
#
# server name for this prometheus - the default is the client IP
# for netdata to uniquely identify it
# for Netdata to uniquely identify it
#server: ['prometheus1']
honor_labels: true
@@ -180,21 +180,21 @@ sudo systemctl enable prometheus
Prometheus should now start and listen on port 9090. Attempt to head there with your browser.
If everything is working correctly when you fetch `http://your.prometheus.ip:9090` you will see a 'Status' tab. Click this and click on 'targets' We should see the netdata host as a scraped target.
If everything is working correctly when you fetch `http://your.prometheus.ip:9090` you will see a 'Status' tab. Click this and click on 'targets'. We should see the Netdata host as a scraped target.
---
## Netdata support for prometheus
> IMPORTANT: the format netdata sends metrics to prometheus has changed since netdata v1.6. The new format allows easier queries for metrics and supports both `as collected` and normalized metrics.
> IMPORTANT: the format Netdata sends metrics to prometheus has changed since Netdata v1.6. The new format allows easier queries for metrics and supports both `as collected` and normalized metrics.
Before explaining the changes, we have to understand the key differences between netdata and prometheus.
Before explaining the changes, we have to understand the key differences between Netdata and prometheus.
### understanding netdata metrics
### understanding Netdata metrics
##### charts
Each chart in netdata has several properties (common to all its metrics):
Each chart in Netdata has several properties (common to all its metrics):
- `chart_id` - uniquely identifies a chart.
@@ -208,32 +208,32 @@ Each chart in netdata has several properties (common to all its metrics):
##### dimensions
Then each netdata chart contains metrics called `dimensions`. All the dimensions of a chart have the same units of measurement, and are contextually in the same category (ie. the metrics for disk bandwidth are `read` and `write` and they are both in the same chart).
Then each Netdata chart contains metrics called `dimensions`. All the dimensions of a chart have the same units of measurement, and are contextually in the same category (i.e. the metrics for disk bandwidth are `read` and `write` and they are both in the same chart).
### netdata data source
### Netdata data source
Netdata can send metrics to prometheus from 3 data sources:
- `as collected` or `raw` - this data source sends the metrics to prometheus as they are collected. No conversion is done by netdata. The latest value for each metric is just given to prometheus. This is the most preferred method by prometheus, but it is also the harder to work with. To work with this data source, you will need to understand how to get meaningful values out of them.
- `as collected` or `raw` - this data source sends the metrics to prometheus as they are collected. No conversion is done by Netdata. The latest value for each metric is just given to prometheus. This is the method most preferred by prometheus, but it is also the hardest to work with. To work with this data source, you will need to understand how to get meaningful values out of them.
The format of the metrics is: `CONTEXT{chart="CHART",family="FAMILY",dimension="DIMENSION"}`.
If the metric is a counter (`incremental` in netdata lingo), `_total` is appended the context.
If the metric is a counter (`incremental` in Netdata lingo), `_total` is appended to the context.
Unlike prometheus, netdata allows each dimension of a chart to have a different algorithm and conversion constants (`multiplier` and `divisor`). In this case, that the dimensions of a charts are heterogeneous, netdata will use this format: `CONTEXT_DIMENSION{chart="CHART",family="FAMILY"}`
Unlike prometheus, Netdata allows each dimension of a chart to have a different algorithm and conversion constants (`multiplier` and `divisor`). In the case that the dimensions of a chart are heterogeneous, Netdata will use this format: `CONTEXT_DIMENSION{chart="CHART",family="FAMILY"}`
- `average` - this data source uses the netdata database to send the metrics to prometheus as they are presented on the netdata dashboard. So, all the metrics are sent as gauges, at the units they are presented in the netdata dashboard charts. This is the easiest to work with.
- `average` - this data source uses the Netdata database to send the metrics to prometheus as they are presented on the Netdata dashboard. So, all the metrics are sent as gauges, at the units they are presented in the Netdata dashboard charts. This is the easiest to work with.
The format of the metrics is: `CONTEXT_UNITS_average{chart="CHART",family="FAMILY",dimension="DIMENSION"}`.
When this source is used, netdata keeps track of the last access time for each prometheus server fetching the metrics. This last access time is used at the subsequent queries of the same prometheus server to identify the time-frame the `average` will be calculated. So, no matter how frequently prometheus scrapes netdata, it will get all the database data. To identify each prometheus server, netdata uses by default the IP of the client fetching the metrics. If there are multiple prometheus servers fetching data from the same netdata, using the same IP, each prometheus server can append `server=NAME` to the URL. Netdata will use this `NAME` to uniquely identify the prometheus server.
When this source is used, Netdata keeps track of the last access time for each prometheus server fetching the metrics. This last access time is used at the subsequent queries of the same prometheus server to identify the time-frame the `average` will be calculated. So, no matter how frequently prometheus scrapes Netdata, it will get all the database data. To identify each prometheus server, Netdata uses by default the IP of the client fetching the metrics. If there are multiple prometheus servers fetching data from the same Netdata, using the same IP, each prometheus server can append `server=NAME` to the URL. Netdata will use this `NAME` to uniquely identify the prometheus server.
- `sum` or `volume`, is like `average` but instead of averaging the values, it sums them.
The format of the metrics is: `CONTEXT_UNITS_sum{chart="CHART",family="FAMILY",dimension="DIMENSION"}`.
All the other operations are the same with `average`.
Keep in mind that early versions of netdata were sending the metrics as: `CHART_DIMENSION{}`.
Keep in mind that early versions of Netdata were sending the metrics as: `CHART_DIMENSION{}`.
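For illustration, here is how one exposition line in the `average` format above could be composed (a sketch with an invented helper; the output shape matches the `system.cpu` examples further down this page):

```python
def prom_line(context, units, chart, family, dimension, value, ts_ms):
    """Compose CONTEXT_UNITS_average{chart=...,family=...,dimension=...} v ts."""
    return (f"{context}_{units}_average"
            f'{{chart="{chart}",family="{family}",dimension="{dimension}"}}'
            f" {value} {ts_ms}")

print(prom_line("netdata_system_cpu", "percentage", "system.cpu",
                "cpu", "user", 0.85, 1500066662000))
```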
### Querying Metrics
@@ -241,11 +241,11 @@ Fetch with your web browser this URL:
`http://your.netdata.ip:19999/api/v1/allmetrics?format=prometheus&help=yes`
*(replace `your.netdata.ip` with the ip or hostname of your netdata server)*
*(replace `your.netdata.ip` with the ip or hostname of your Netdata server)*
netdata will respond with all the metrics it sends to prometheus.
Netdata will respond with all the metrics it sends to prometheus.
If you search that page for `"system.cpu"` you will find all the metrics netdata is exporting to prometheus for this chart. `system.cpu` is the chart name on the netdata dashboard (on the netdata dashboard all charts have a text heading such as : `Total CPU utilization (system.cpu)`. What we are interested here in the chart name: `system.cpu`).
If you search that page for `"system.cpu"` you will find all the metrics Netdata is exporting to prometheus for this chart. `system.cpu` is the chart name on the Netdata dashboard (on the Netdata dashboard all charts have a text heading such as: `Total CPU utilization (system.cpu)`. What we are interested in here is the chart name: `system.cpu`).
Searching for `"system.cpu"` reveals:
@@ -272,7 +272,7 @@ netdata_system_cpu_percentage_average{chart="system.cpu",family="cpu",dimension=
# COMMENT netdata_system_cpu_percentage_average: dimension "idle", value is percentage, gauge, dt 1500066653 to 1500066662 inclusive
netdata_system_cpu_percentage_average{chart="system.cpu",family="cpu",dimension="idle"} 92.3630770 1500066662000
```
*(netdata response for `system.cpu` with source=`average`)*
*(Netdata response for `system.cpu` with source=`average`)*
In `average` or `sum` data sources, all values are normalized and are reported to prometheus as gauges. Now, use the 'expression' text form in prometheus. Begin to type the metrics we are looking for: `netdata_system_cpu`. You should see that the text form begins to auto-fill as prometheus knows about this metric.
@@ -302,13 +302,13 @@ netdata_system_cpu_total{chart="system.cpu",family="cpu",dimension="iowait"} 233
netdata_system_cpu_total{chart="system.cpu",family="cpu",dimension="idle"} 918470 1500066716438
```
*(netdata response for `system.cpu` with source=`as-collected`)*
*(Netdata response for `system.cpu` with source=`as-collected`)*
For more information check the prometheus documentation.
### Streaming data from upstream hosts
The `format=prometheus` parameter only exports the host's netdata metrics. If you are using the master/slave functionality of netdata this ignores any upstream hosts - so you should consider using the below in your **prometheus.yml**:
The `format=prometheus` parameter only exports the host's Netdata metrics. If you are using the master/slave functionality of Netdata, this ignores any upstream hosts - so you should consider using the below in your **prometheus.yml**:
```
metrics_path: '/api/v1/allmetrics'
@@ -321,13 +321,13 @@ This will report all upstream host data, and `honor_labels` will make Prometheus
### Timestamps
To pass the metrics through prometheus pushgateway, netdata supports the option `&timestamps=no` to send the metrics without timestamps.
To pass the metrics through prometheus pushgateway, Netdata supports the option `&timestamps=no` to send the metrics without timestamps.
## Netdata host variables
netdata collects various system configuration metrics, like the max number of TCP sockets supported, the max number of files allowed system-wide, various IPC sizes, etc. These metrics are not exposed to prometheus by default.
Netdata collects various system configuration metrics, like the max number of TCP sockets supported, the max number of files allowed system-wide, various IPC sizes, etc. These metrics are not exposed to prometheus by default.
To expose them, append `variables=yes` to the netdata URL.
To expose them, append `variables=yes` to the Netdata URL.
### TYPE and HELP
@@ -335,7 +335,7 @@ To save bandwidth, and because prometheus does not use them anyway, `# TYPE` and
### Names and IDs
netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names are human friendly labels (also unique).
Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names are human friendly labels (also unique).
Most charts and metrics have the same ID and name, but in several cases they are different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
@@ -353,7 +353,7 @@ You can overwrite it from prometheus, by appending to the URL:
### Filtering metrics sent to prometheus
netdata can filter the metrics it sends to prometheus with this setting:
Netdata can filter the metrics it sends to prometheus with this setting:
```
[backend]
@@ -362,9 +362,9 @@ netdata can filter the metrics it sends to prometheus with this setting:
This setting accepts a space separated list of patterns to match the **charts** to be sent to prometheus. Each pattern can use ` * ` as wildcard, any number of times (e.g. `*a*b*c*` is valid). Patterns starting with ` ! ` give a negative match (e.g. `!*.bad users.* groups.*` will send all the users and groups except the `bad` user and `bad` group). The order is important: the first match (positive or negative), left to right, is used.
### Changing the prefix of netdata metrics
### Changing the prefix of Netdata metrics
netdata sends all metrics prefixed with `netdata_`. You can change this in `netdata.conf`, like this:
Netdata sends all metrics prefixed with `netdata_`. You can change this in `netdata.conf`, like this:
```
[backend]
@@ -383,8 +383,8 @@ To get the metric names as they were before v1.12, append to the URL `&oldunits=
### Accuracy of `average` and `sum` data sources
When the data source is set to `average` or `sum`, netdata remembers the last access of each client accessing prometheus metrics and uses this last access time to respond with the `average` or `sum` of all the entries in the database since that. This means that prometheus servers are not losing data when they access netdata with data source = `average` or `sum`.
When the data source is set to `average` or `sum`, Netdata remembers the last access of each client accessing prometheus metrics and uses this last access time to respond with the `average` or `sum` of all the entries in the database since then. This means that prometheus servers are not losing data when they access Netdata with data source = `average` or `sum`.
To uniquely identify each prometheus server, netdata uses the IP of the client accessing the metrics. If however the IP is not good enough for identifying a single prometheus server (e.g. when prometheus servers are accessing netdata through a web proxy, or when multiple prometheus servers are NATed to a single IP), each prometheus may append `&server=NAME` to the URL. This `NAME` is used by netdata to uniquely identify each prometheus server and keep track of its last access time.
To uniquely identify each prometheus server, Netdata uses the IP of the client accessing the metrics. If however the IP is not good enough for identifying a single prometheus server (e.g. when prometheus servers are accessing Netdata through a web proxy, or when multiple prometheus servers are NATed to a single IP), each prometheus may append `&server=NAME` to the URL. This `NAME` is used by Netdata to uniquely identify each prometheus server and keep track of its last access time.
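A toy model of this per-server bookkeeping (illustrative Python, not Netdata's code):

```python
import time

last_access = {}                          # server name (or client IP) -> epoch

def scrape(server, db):
    """db: (timestamp, value) pairs; answer with the average since last scrape."""
    since = last_access.get(server, 0)
    window = [value for ts, value in db if ts > since]
    last_access[server] = time.time()
    return sum(window) / len(window) if window else None

db = [(time.time() - 9 + i, float(i)) for i in range(10)]
print(scrape("prometheus1", db))          # averages everything stored so far
print(scrape("prometheus1", db))          # nothing newer yet, so returns None
```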
[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fbackends%2Fprometheus%2FREADME&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)]()


@@ -2,7 +2,7 @@
## Prerequisites
To use the prometheus remote write API with [storage providers](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage) [protobuf](https://developers.google.com/protocol-buffers/) and [snappy](https://github.com/google/snappy) libraries should be installed first. Next, netdata should be re-installed from the source. The installer will detect that the required libraries and utilities are now available.
To use the prometheus remote write API with [storage providers](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage), the [protobuf](https://developers.google.com/protocol-buffers/) and [snappy](https://github.com/google/snappy) libraries should be installed first. Next, Netdata should be re-installed from the source. The installer will detect that the required libraries and utilities are now available.
## Configuration


@@ -1,20 +1,20 @@
# Data collection plugins
netdata supports **internal** and **external** data collection plugins:
Netdata supports **internal** and **external** data collection plugins:
- **internal** plugins are written in `C` and run as threads inside the netdata daemon.
- **internal** plugins are written in `C` and run as threads inside the `netdata` daemon.
- **external** plugins may be written in any computer language and are spawn as independent long-running processes by the netdata daemon.
They communicate with the netdata daemon via `pipes` (`stdout` communication).
- **external** plugins may be written in any computer language and are spawned as independent long-running processes by the `netdata` daemon.
They communicate with the `netdata` daemon via `pipes` (`stdout` communication).
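For example, a minimal external plugin is just a long-running process that writes the plugins.d text protocol to its standard output. The sketch below is illustrative only (the chart and dimension names are invented; see [plugins.d](plugins.d/) for the authoritative protocol description):

```python
#!/usr/bin/env python3
import random
import sys
import time

# Declare one chart with one dimension, then send one value per second.
print("CHART example.random '' 'A random number' 'value' example example.random line 90000 1")
print("DIMENSION value '' absolute 1 1")
sys.stdout.flush()

while True:
    print("BEGIN example.random")
    print(f"SET value = {random.randint(0, 100)}")
    print("END")
    sys.stdout.flush()                 # the daemon reads us through a pipe
    time.sleep(1)                      # one iteration per update interval
```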
To minimize the number of processes spawn for data collection, netdata also supports **plugin orchestrators**.
To minimize the number of processes spawned for data collection, Netdata also supports **plugin orchestrators**.
- **plugin orchestrators** are external plugins that do not collect any data by themselves.
Instead they support data collection **modules** written in the language of the orchestrator.
Usually the orchestrator provides a higher level abstraction, making it ideal for writing new
data collection modules with the minimum of code.
Currently netdata provides plugin orchestrators
Currently Netdata provides plugin orchestrators
BASH v4+ [charts.d.plugin](charts.d.plugin/),
node.js [node.d.plugin](node.d.plugin/) and
python v2+ (including v3) [python.d.plugin](python.d.plugin/).
@@ -42,7 +42,7 @@ plugin|lang|O/S|runs as|modular|description
[plugins.d](plugins.d/)|`C`|any|internal|-|implements the **external plugins** API and serves external plugins
[proc.plugin](proc.plugin/)|`C`|linux|internal|yes|collects resource usage and performance data on Linux systems
[python.d.plugin](python.d.plugin/)|`python` v2+|any|external|yes|a **plugin orchestrator** for data collection modules written in `python` v2 or v3 (both are supported).
[statsd.plugin](statsd.plugin/)|`C`|any|internal|-|implements a high performance **statsd** server for netdata
[statsd.plugin](statsd.plugin/)|`C`|any|internal|-|implements a high performance **statsd** server for Netdata
[tc.plugin](tc.plugin/)|`C`|linux|internal|-|collects traffic QoS metrics (`tc`) of Linux network interfaces
## Enabling and Disabling plugins
@@ -59,7 +59,7 @@ All **external plugins** are managed by [plugins.d](plugins.d/), which provides
### Internal Plugins
Each of the internal plugins runs as a thread inside the netdata daemon.
Each of the internal plugins runs as a thread inside the `netdata` daemon.
Once this thread has started, the plugin may spawn additional threads according to its design.
#### Internal Plugins API
@@ -72,7 +72,7 @@ collect_data() {
collected_number collected_value = collect_a_value();
// give the metrics to netdata
// give the metrics to Netdata
static RRDSET *st = NULL; // the chart
static RRDDIM *rd = NULL; // a dimension attached to this chart
@@ -100,20 +100,19 @@ collect_data() {
}
else {
// this chart is already created
// let netdata know we start a new iteration on it
// let Netdata know we start a new iteration on it
rrdset_next(st);
}
// give the collected value(s) to the chart
rrddim_set_by_pointer(st, rd, collected_value);
// signal netdata we are done with this iteration
// signal Netdata we are done with this iteration
rrdset_done(st);
}
```
Of course netdata has a lot of libraries to help you also in collecting the metrics.
The best way to find your way through this, is to examine what other similar plugins do.
Of course, Netdata has a lot of libraries to help you collect the metrics. The best way to find your way through this is to examine what other similar plugins do.
### External Plugins


@@ -5,9 +5,9 @@
To achieve this task, it iterates through the whole process tree, collecting resource usage information
for every process found running.
Since netdata needs to present this information in charts and track them through time,
Since Netdata needs to present this information in charts and track them through time,
instead of presenting a `top` like list, `apps.plugin` uses a pre-defined list of **process groups**
to which it assigns all running processes. This list is [customizable](apps_groups.conf) and netdata
to which it assigns all running processes. This list is [customizable](apps_groups.conf) and Netdata
ships with a good default for most cases (to edit it on your system run `/etc/netdata/edit-config apps_groups.conf`).
So, `apps.plugin` builds a process tree (much like `ps fax` does in Linux), and groups
processes together (evaluating both child and parent processes) so that the resulting charts have
a predefined set of members (of course, only process groups found running are reported).
> If you find that `apps.plugin` categorizes standard applications as `other`, we would be
> glad to accept pull requests improving the [defaults](apps_groups.conf) shipped with Netdata.
Unlike traditional process monitoring tools (like `top`), `apps.plugin` is able to account for the resource
utilization of exited processes. Their utilization is accounted to their currently running parents.
This makes it perfect for monitoring applications that fork/spawn other short-lived processes hundreds of times per second.
`apps.plugin` provides charts for 3 sections:
1. Per application charts as **Applications** at Netdata dashboards
2. Per user charts as **Users** at Netdata dashboards
3. Per user group charts as **User Groups** at Netdata dashboards
Each of these sections provides the same number of charts:
`apps.plugin` is a complex piece of software and has a lot of work to do.
We are proud that `apps.plugin` is a lot faster compared to any other similar tool,
while collecting a lot more information for the processes. However, the fact is that
this plugin requires more CPU resources than the `netdata` daemon itself.
Under Linux, for each process running, `apps.plugin` reads several `/proc` files
per process. Doing this work every second, especially on hosts with several thousands of processes, can be CPU intensive.

The order of the entries in [`apps_groups.conf`](apps_groups.conf) is important: the first that matches a process is used, so put important
ones at the top. Processes not matched by any row will inherit it from their parents or children.
The order also controls the order of the dimensions on the generated charts (although applications started
after apps.plugin is started, will be appended to the existing list of dimensions the `netdata` daemon maintains).
## Permissions
`apps.plugin` requires additional privileges to collect all the information it needs.
The problem is described in issue #157.
When Netdata is installed, `apps.plugin` is given the capabilities `cap_dac_read_search,cap_sys_ptrace+ep`.
If this fails (i.e. `setcap` fails), `apps.plugin` is setuid to `root`.
#### linux capabilities in containers
```sh
chown root:netdata /usr/libexec/netdata/plugins.d/apps.plugin
chmod 4750 /usr/libexec/netdata/plugins.d/apps.plugin
```
You will have to run these commands every time you update Netdata.
## Security
`apps.plugin` performs a hard-coded function of building the process tree in memory,
iterating forever, collecting metrics for each running process and sending them to Netdata.
This is a one-way communication, from `apps.plugin` to Netdata.
So, since `apps.plugin` cannot be instructed by Netdata for the actions it performs,
we think it is pretty safe to allow it to have these increased privileges.
Keep in mind that `apps.plugin` will still run without escalated permissions, but it will not be able to collect all the information it needs for every process.

For more information about badges check [Generating Badges](../../web/api/badges).
## Comparison with console tools
SSH to a server running Netdata and execute this:
```sh
while true; do ls -l /var/run >/dev/null; done
```
#### why does this happen?
All the console tools report usage based on the processes found running *at the moment they
examine the process tree*. So, they see just one `ls` command, which is actually very quick
with minor CPU utilization. But the shell is spawning hundreds of them, one after another
(much like shell scripts do).
#### What does Netdata report?
The total CPU utilization of the system:
![image](https://cloud.githubusercontent.com/assets/2662304/21076212/9198e5a6-bf2e-11e6-9bc0-6bdea25befb2.png)
<br/>_**Figure 1**: The system overview section at Netdata, just a few seconds after the command was run_
And at the applications `apps.plugin` breaks down CPU usage per application:
![image](https://cloud.githubusercontent.com/assets/2662304/21076220/c9687848-bf2e-11e6-8d81-348592c5aca2.png)
<br/>_**Figure 2**: The Applications section at Netdata, just a few seconds after the command was run_
So, the `ssh` session is using 95% CPU time.
Why `ssh`?
`apps.plugin` groups all processes based on its configuration file
[`/etc/netdata/apps_groups.conf`](apps_groups.conf)
(to edit it on your system run `/etc/netdata/edit-config apps_groups.conf`).
The default configuration has nothing for `bash`, but it has for `sshd`, so Netdata accumulates
all ssh sessions to a dimension on the charts, called `ssh`. This includes all the processes in
the process tree of `sshd`, **including the exited children**.
> `apps.plugin` does not use these mechanisms. The process grouping made by `apps.plugin` works
> on any Linux, `systemd` based or not.
#### a more technical description of how Netdata works
Netdata reads `/proc/<pid>/stat` for all processes, once per second and extracts `utime` and
`stime` (user and system cpu utilization), much like all the console tools do.
But it [also extracts `cutime` and `cstime`](https://github.com/netdata/netdata/blob/62596cc6b906b1564657510ca9135c08f6d4cdda/src/apps_plugin.c#L636-L642), so that the utilization of exited children is also accounted to their parents, minus whatever had
been reported for it prior to this iteration.
It is even trickier, because walking through the entire process tree takes some time itself. So,
if you sum the CPU utilization of all processes, you might have more CPU time than the reported
total cpu time of the system. Netdata solves this, by adapting the per process cpu utilization to
the total of the system. [Netdata adds charts that document this normalization](https://london.my-netdata.io/default.html#menu_netdata_submenu_apps_plugin).
[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fcollectors%2Fapps.plugin%2FREADME&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)]()
cgroups (or control groups) are a Linux kernel feature that provides accounting and resource usage limiting for processes.
cgroups are hierarchical, meaning that cgroups can contain child cgroups, which can contain more cgroups, etc. All accounting is reported (and resource usage limits are applied) also in a hierarchical way.
To visualize cgroup metrics Netdata provides configuration for cherry picking the cgroups of interest. By default (without any configuration) Netdata should pick **systemd services**, all kinds of **containers** (lxc, docker, etc) and **virtual machines** spawned by managers that register them with cgroups (qemu, libvirt, etc).
## configuring Netdata for cgroups
For each cgroup available in the system, Netdata provides this configuration:
```
[plugin:cgroups]
enable cgroup NAME = yes | no
```

But it also provides a few patterns to provide a sane default (`yes` or `no`).
Below we see how this works.
### how Netdata finds the available cgroups
Linux exposes resource usage reporting and provides dynamic configuration for cgroups, using virtual files (usually) under `/sys/fs/cgroup`. Netdata reads `/proc/self/mountinfo` to detect the exact mount point of cgroups. Netdata also allows manual configuration of this mount point, using these settings:
```
[plugin:cgroups]
path to /sys/fs/cgroup/devices = /sys/fs/cgroup/devices
```
Netdata rescans these directories for added or removed cgroups every `check for new cgroups every` seconds.
### hierarchical search for cgroups
Since cgroups are hierarchical, for each of the directories shown above, Netdata walks through the subdirectories recursively searching for cgroups (each subdirectory is another cgroup).
For each of the directories found, Netdata provides a configuration variable:
```
[plugin:cgroups]
search for cgroups under PATH = yes | no
```
To provide a sane default for this setting, Netdata uses the following pattern list (patterns starting with `!` give a negative match and their order is important: the first matching a path will be used):
```
[plugin:cgroups]
search for cgroups in subpaths matching = !*/init.scope !*-qemu !/init.scope !/system !/systemd !/user !/user.slice *
```
So, we disable checking for **child cgroups** in systemd internal cgroups ([systemd services are monitored by Netdata](#monitoring-systemd-services)), user cgroups (normally used for desktop and remote user sessions), qemu virtual machines (child cgroups of virtual machines) and `init.scope`. All others are enabled.
### unified cgroups (cgroups v2) support
Unified cgroups use the same name pattern matching as v1 cgroups.
### enabled cgroups
To check if the cgroup is enabled, Netdata uses this setting:
```
[plugin:cgroups]
enable cgroup NAME = yes | no
```
To provide a sane default, Netdata uses the following pattern list (it checks the pattern against the path of the cgroup):
```
[plugin:cgroups]
enable by default cgroups matching = ...
```
The above provides the default `yes` or `no` setting for the cgroup. However, there is an additional step. In many cases the cgroups found in the `/sys/fs/cgroup` hierarchy are just random numbers and in many cases these numbers are ephemeral: they change across reboots or sessions.
So, we need to somehow map the paths of the cgroups to names, to provide consistent Netdata configuration (i.e. there is no point to say `enable cgroup 1234 = yes | no`, if `1234` is a random number that changes over time - we need a name for the cgroup first, so that `enable cgroup NAME = yes | no` will be consistent).
For this mapping, Netdata provides 2 configuration options under `[plugin:cgroups]`: a pattern list that selects the cgroups to rename, and the script to run for them.
The whole point of the additional pattern list is to limit the number of times the script will be called. Without this pattern list, the script might be called thousands of times, depending on the number of cgroups available in the system.
The above pattern list is matched against the path of the cgroup. For matched cgroups, Netdata calls the script [cgroup-name.sh](cgroup-name.sh.in) to get its name. This script queries `docker`, or applies heuristics, to give a name to the cgroup.
### charts with zero metrics
By default, Netdata will enable monitoring metrics only when they are not zero. If they are constantly zero they are ignored. Metrics that will start having values, after Netdata is started, will be detected and charts will be automatically added to the dashboard (a refresh of the dashboard is needed for them to appear though). Set `yes` for a chart instead of `auto` to enable it permanently. For example:
```
[plugin:cgroups]
enable cgroup NAME = yes
```

CPU and memory limits are watched and used to raise alarms.
## Monitoring systemd services
Netdata monitors **systemd services**. Example:
![image](https://cloud.githubusercontent.com/assets/2662304/21964372/20cd7b84-db53-11e6-98a2-b9c986b082c0.png)
```sh
sudo systemctl daemon-reexec
```
(`systemctl daemon-reload` does not reload the configuration of the server - so you have to execute `systemctl daemon-reexec`).
Now, when you run `systemd-cgtop`, services will start reporting usage (if it does not, restart a service - any service - to wake it up). Refresh your Netdata dashboard, and you will have the charts too.
In case memory accounting is missing, you will need to enable it at your kernel, by appending the following kernel boot options and rebooting:
```
cgroup_enable=memory swapaccount=1
```
You can add the above directly at the `linux` line in your `/boot/grub/grub.cfg`, or append them to `GRUB_CMDLINE_LINUX` in `/etc/default/grub` (in which case you will have to run `update-grub` before rebooting). On DigitalOcean debian images you may have to set it at `/etc/default/grub.d/50-cloudimg-settings.cfg`.
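For example (keeping any options you already have there), the line in `/etc/default/grub` could look something like this:

```
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
```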
Which systemd services are monitored by Netdata is determined by the following pattern list:
```
[plugin:cgroups]
cgroups to match as systemd services = ...
```
## Monitoring ephemeral containers
Netdata monitors containers automatically when it is installed at the host, or when it is installed in a container that has access to the `/proc` and `/sys` filesystems of the host.
Netdata prior to v1.6 had 2 issues when such containers were monitored:
1. network interface alarms were triggering when containers were stopped
2. charts were never cleaned up, so after some time dozens of containers were showing up on the dashboard, and they were occupying memory.
### the current Netdata
Network interfaces and cgroups (containers) are now self-cleaned.

So, when a network interface or container stops, Netdata might log a few errors in `error.log` complaining about files it cannot find, but immediately:
1. it will detect this is a removed container or network interface
2. it will freeze/pause all alarms for them
3. it will mark their charts as obsolete
4. obsolete charts are not offered on new dashboard sessions (so hit F5 and the charts are gone)
5. existing dashboard sessions will continue to see them, but of course they will not refresh
6. obsolete charts will be removed from memory, 1 hour after the last user viewed them (configurable with `[global].cleanup obsolete charts after seconds = 3600` at `netdata.conf`).
7. when obsolete charts are removed from memory they are also deleted from disk (configurable with `[global].delete obsolete charts files = yes`)
# charts.d.plugin
`charts.d.plugin` is a Netdata external plugin. It is an **orchestrator** for data collection modules written in `BASH` v4+.

1. It runs as an independent process (`ps fax` shows it)
2. It is started and stopped automatically by Netdata
3. It communicates with Netdata via a unidirectional pipe (sending data to the `netdata` daemon)
4. Supports any number of data collection **modules**
`charts.d.plugin` has been designed so that the actual script that will do data collection will be permanently in memory, collecting data with as little overhead as possible.
For a module called `X`, the following criteria must be met:

1. The module script must be called `X.chart.sh` and placed in `/usr/libexec/netdata/charts.d`.
2. If the module needs a configuration, it should be called `X.conf` and placed in `/etc/netdata/charts.d`.
The configuration file `X.conf` is also a BASH script itself.
To edit the default files supplied by Netdata, run `/etc/netdata/edit-config charts.d/X.conf`,
where `X` is the name of the module.
3. All functions and global variables defined in the script and its configuration must begin with `X_`.
4. The module script must define the following functions:

- `X_check()` - checks whether the data collector can run
   (following the standard Linux command line return codes: 0 = OK, the collector can operate and 1 = FAILED,
   the collector cannot be used).
- `X_create()` - creates the Netdata charts, following the standard Netdata plugin guides as described in
**[External Plugins](../plugins.d/)** (commands `CHART` and `DIMENSION`).
The return value does matter: 0 = OK, 1 = FAILED.
- `X_update()` - collects the values for the defined charts, following the standard Netdata plugin guides
as described in **[External Plugins](../plugins.d/)** (commands `BEGIN`, `SET`, `END`).
The return value also matters: 0 = OK, 1 = FAILED.
The module script may use more functions or variables. But all of them must begin with `X_`.
The standard Netdata plugin variables are also available (check **[External Plugins](../plugins.d/)**).
### X_check()
For example, `X_check()` may try to connect to a local mysql database to find out if it can read the values it needs.
### X_create()
The purpose of the BASH function `X_create()` is to create the charts and dimensions using the standard Netdata
plugin guides (**[External Plugins](../plugins.d/)**).
`X_create()` will be called just once and only after `X_check()` was successful.
A non-zero return value will disable the collector.
### X_update()
`X_update()` will be called repeatedly every `X_update_every` seconds, to collect new values and send them to Netdata,
following the Netdata plugin guides (**[External Plugins](../plugins.d/)**).
The function will be called with one parameter: microseconds since the last time it was run. This value should be
appended to the `BEGIN` statement of every chart updated by the collector script.
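To illustrate the above, here is a minimal sketch of a complete module (the module name `example`, the chart `example.random` and the dimension `random` are all illustrative; the `CHART`/`DIMENSION`/`BEGIN`/`SET`/`END` lines follow the API described in **[External Plugins](../plugins.d/)**):

```sh
# example.chart.sh
# the chart update frequency - if empty, it is inherited from charts.d.plugin
example_update_every=

example_check() {
	# check that data collection can work; 0 = OK, 1 = FAILED
	return 0
}

example_create() {
	# create one chart with one dimension
	cat <<EOF
CHART example.random '' "A random number" "number" random example.random line
DIMENSION random '' absolute 1 1
EOF
	return 0
}

example_update() {
	# $1 is the number of microseconds since the last call
	cat <<EOF
BEGIN example.random $1
SET random = $RANDOM
END
EOF
	return 0
}
```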
Keep in mind that if your configs are not in `/etc/netdata`, you should do the following:

```sh
export NETDATA_USER_CONFIG_DIR="/path/to/etc/netdata"
```
Also, remember that Netdata runs `chart.d.plugin` as user `netdata` (or any other user the `netdata` process is configured to run as).
## Running multiple instances of charts.d.plugin
This is what you need to do:
3. link `/usr/libexec/netdata/plugins.d/charts.d.plugin` to `/usr/libexec/netdata/plugins.d/charts2.d.plugin`.
Netdata will spawn a new charts.d process.
Execute the above in this order, since Netdata will (by default) attempt to start new plugins soon after they are
created in `/usr/libexec/netdata/plugins.d/`.
The `ap` collector visualizes data related to access points.
## Example Netdata charts
![image](https://cloud.githubusercontent.com/assets/2662304/12377654/9f566e88-bd2d-11e5-855a-e0ba96b8fd98.png)
The `apache` collector visualizes key performance data for an apache web server.
## Example Netdata charts
For apache 2.2:
From the apache status output it collects:
- total accesses (incremental value, rendered as requests/s)
- total bandwidth (incremental value, rendered as bandwidth/s)
- requests per second (this appears to be calculated by apache as an average for its lifetime, while the one calculated by Netdata using the total accesses counter is real-time)
- bytes per second (average for the lifetime of the apache server)
- bytes per request (average for the lifetime of the apache server)
- workers by status (`busy` and `idle`)
```
apache_curl_opts=
apache_update_every=
```
The default `apache_update_every` is configured in Netdata.
## Auto-detection
If you are able to run this command successfully by hand:

```sh
curl "http://127.0.0.1:80/server-status?auto"
```
Netdata will be able to do it too.
Notice: You may need to have the default `000-default.conf` website enabled in order for the status mod to work.
The plugin will provide charts for all configured system sensors
> Since this plugin reads the raw, kernel-provided values, it will not perform any corrections on them.
> So, the values graphed are the raw hardware values of the sensors.
The plugin will create Netdata charts for:
1. **Temperature**
2. **Voltage**
Two charts are available for every mount:
Simple patterns can be used to exclude mounts from the shown statistics, based on path or filesystem. By default, read-only mounts are not displayed. To display them, `yes` should be set for a chart instead of `auto`.
By default, Netdata will enable monitoring metrics only when they are not zero. If they are constantly zero they are ignored. Metrics that will start having values, after Netdata is started, will be detected and charts will be automatically added to the dashboard (a refresh of the dashboard is needed for them to appear though). Set `yes` for a chart instead of `auto` to enable it permanently. You can also set the `enable zero metrics` option to `yes` in the `[global]` section which enables charts with zero metrics for all internal Netdata plugins.
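For example, a sketch of enabling zero-valued charts for all internal plugins via the `[global]` option mentioned above, in `netdata.conf`:

```
[global]
enable zero metrics = yes
```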
```sh
fping="/usr/local/bin/fping"
# I suggest to use hostnames and put their IPs in /etc/hosts
hosts="host1 host2 host3"
# override the chart update frequency - the default is inherited from Netdata
update_every=1
# time in milliseconds (1 sec = 1000 ms) to ping the hosts
ping_every=200

fping_opts="-R -b 56 -i 1 -r 0 -t 5000"
```
## alarms
Netdata will automatically attach a few alarms for each host.
Check the [latest versions of the fping alarms](../../health/health.d/fping.conf)
## Additional Tips
You may need to run multiple fping plugins with different settings for different end points.
For example, you may need to ping a few hosts 10 times per second, and others once per second.
Netdata allows you to add as many `fping` plugins as you like.
Follow this procedure:
```sh
cd /usr/libexec/netdata/plugins.d
ln -s fping.plugin fping2.plugin
```
That's it. Netdata will detect the new plugin and start it.
You can name the new plugin any name you like.
Just make sure the plugin and the configuration file have the same name.
Collects resource usage and performance data on FreeBSD systems
By default, Netdata will enable monitoring metrics for disks, memory, and network only when they are not zero. If they are constantly zero they are ignored. Metrics that will start having values, after Netdata is started, will be detected and charts will be automatically added to the dashboard (a refresh of the dashboard is needed for them to appear though). Use `yes` instead of `auto` in plugin configuration sections to enable these charts permanently. You can also set the `enable zero metrics` option to `yes` in the `[global]` section which enables charts with zero metrics for all internal Netdata plugins.
[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fcollectors%2Ffreebsd.plugin%2FREADME&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)]()
Netdata has a [freeipmi](https://www.gnu.org/software/freeipmi/) plugin.
1. install `libipmimonitoring-dev` or `libipmimonitoring-devel` (`freeipmi-devel` on RHEL based OS) using the package manager of your system.
2. re-install Netdata from source. The installer will detect that the required libraries are now available and will also build `freeipmi.plugin`.
Keep in mind IPMI requires root access, so the plugin is setuid to root.
If you just installed the required IPMI tools, please run at least once the command `ipmimonitoring` and verify it returns sensors information. This command initialises IPMI configuration, so that the Netdata plugin will be able to work.
## Netdata use
Append to `command options =` the settings you need.
## Ignoring specific sensors
Specific sensor IDs can be excluded from freeipmi tools by editing `/etc/freeipmi/freeipmi.conf` and setting the IDs to be ignored at `ipmi-sensors-exclude-record-ids`. **However this file is not used by `libipmimonitoring`** (the library used by Netdata's `freeipmi.plugin`).
So, `freeipmi.plugin` supports the option `ignore` that accepts a comma separated list of sensor IDs to ignore. To configure it, edit `/etc/netdata/netdata.conf` and set:
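A sketch of what this could look like, assuming the plugin's section follows the usual `[plugin:...]` naming seen elsewhere in this document, and using placeholder sensor IDs:

```
[plugin:freeipmi]
command options = ignore 1,2,3
```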
If the `kipmi` kernel thread is using a lot of CPU, you can instruct it to sleep, via your kernel module configuration:

```
options ipmi_si kipmid_max_busy_us=10
```
This instructs the kernel IPMI module to pause for a tick between checking IPMI. Querying IPMI will be a lot slower now (e.g. several seconds for IPMI to respond), but `kipmi` will not use any noticeable CPU. You can also use a higher number (this is the number of microseconds to poll IPMI for a response, before waiting for a tick).
If you need to disable IPMI for Netdata, edit `/etc/netdata/netdata.conf` and set:
```
[plugins]
freeipmi = no
```

The supplied plugin can install it by running:
```sh
/usr/libexec/netdata/plugins.d/ioping.plugin install
```
The `-e` option can be supplied to indicate where the Netdata environment file is installed. The default path is `/etc/netdata/.environment`.
The above will download, build and install the right version as `/usr/libexec/netdata/plugins.d/ioping`.
```sh
ioping="/usr/libexec/netdata/plugins.d/ioping"
# set here the directory/file/device, you need to ping
destination="destination"
# override the chart update frequency - the default is inherited from Netdata
update_every="1s"
# the request size in bytes to ping the destination
request_size="4k"

ioping_opts="-T 1000000 -R"
```
## alarms
Netdata will automatically attach a few alarms for each host.
Check the [latest versions of the ioping alarms](../../health/health.d/ioping.conf)
## Multiple ioping Plugins With Different Settings
You may need to run multiple ioping plugins with different settings or different end points.
For example, you may need to ping one destination once per 10 seconds, and another once per second.
Netdata allows you to add as many `ioping` plugins as you like.
Follow this procedure:
```sh
cd /usr/libexec/netdata/plugins.d
ln -s ioping.plugin ioping2.plugin
```
That's it. Netdata will detect the new plugin and start it.
You can name the new plugin any name you like.
Just make sure the plugin and the configuration file have the same name.
Collects resource usage and performance data on MacOS systems
By default, Netdata will enable monitoring metrics for disks, memory, and network only when they are not zero. If they are constantly zero they are ignored. Metrics that will start having values, after Netdata is started, will be detected and charts will be automatically added to the dashboard (a refresh of the dashboard is needed for them to appear though). Use `yes` instead of `auto` in plugin configuration sections to enable these charts permanently. You can also set the `enable zero metrics` option to `yes` in the `[global]` section which enables charts with zero metrics for all internal Netdata plugins.
[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fcollectors%2Fmacos.plugin%2FREADME&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)]()
1. install `libmnl-dev` and `libnetfilter_acct-dev` using the package manager of your system.
2. re-install Netdata from source. The installer will detect that the required libraries are now available and will also build `nfacct.plugin`.
Keep in mind that NFACCT requires root access, so the plugin is setuid to root.
## Configuration
If you need to disable NFACCT for Netdata, edit `/etc/netdata/netdata.conf` and set:
```
[plugins]
nfacct = no
```
# node.d.plugin
`node.d.plugin` is a Netdata external plugin. It is an **orchestrator** for data collection modules written in `node.js`.

1. It runs as an independent process (`ps fax` shows it)
2. It is started and stopped automatically by Netdata
3. It communicates with Netdata via a unidirectional pipe (sending data to the `netdata` daemon)
4. Supports any number of data collection **modules**
5. Allows each **module** to have one or more data collection **jobs**
6. Each **job** is collecting one or more metrics from a single data source
Node.js is perfect for asynchronous operations. It is very fast and quite common (actually the whole web is based on it).
Since data collection is not a CPU intensive task, node.js is an ideal solution for it.
`node.d.plugin` is a Netdata plugin that provides an abstraction layer to allow easy and quick development of data
collectors in node.js. It also manages all its data collectors (placed in `/usr/libexec/netdata/node.d`) using a single
instance of node, thus lowering the memory footprint of data collection.
For more information check the **[[Installation]]** guide.
Unfortunately, `JSON` files do not accept comments. So, the best way to describe them is to have markdown text files
with instructions.
`JSON` has a very strict formatting. If you get errors from Netdata at `/var/log/netdata/error.log` that a certain
configuration file cannot be loaded, we suggest verifying it at [http://jsonlint.com/](http://jsonlint.com/).

The files in this directory provide usable examples for configuring each `node.d.plugin` module.
Your data collection module should be split in 3 parts:

- a function to fetch the data from its source. For HTTP, `node.d.plugin` already provides one,
  so you don't need to do anything about it for http.
- a function to process the fetched data. This function will make a number of calls
  to create charts and dimensions and pass the collected values to Netdata.
  This is the only function you need to write for collecting http JSON data.
- a `configure` and an `update` function, which take care of your module configuration and data refresh.
```js
var mymodule = {
    processResponse: function(service, data) {
        /* send information to the Netdata server here */
    },

    /* ... the rest of the module (configure, update, etc.) ... */
};
```
The configuration file `/etc/netdata/node.d/mymodule.conf` may contain whatever else is needed for your module.
`data` may be `null` or whatever the processor specified in the `service` returned.
The `service` object defines a set of functions to allow you to send information to the Netdata core about:
1. Charts and dimension definitions
2. Updated values, from the collected values
This module collects metrics from the configured solar power installation from Fronius Symo.
**Requirements**
* Configuration file `fronius.conf` in the node.d Netdata config dir (default: `/etc/netdata/node.d/fronius.conf`)
* Fronius Symo with network access (http)
It produces per server:
The plugin has been tested with a single inverter, namely Fronius Symo 8.2-3-M:
Other products and versions may work, but without any guarantees.
Example Netdata configuration for node.d/fronius.conf. Copy this section to fronius.conf and change name/ip.
The module supports any number of servers. Sometimes there is a lag when collecting every 3 seconds, so 5 should be okay too. You can modify this per server.
```json
{
    "enable_autodetect": false,
    "update_every": 5,
    "servers": [{
        "name": "Symo",
        "hostname": "symo.ip.or.dns",
        "update_every": 5,
        "api_path": "/solar_api/v1/GetPowerFlowRealtimeData.fcgi"
    }]
}
```
# ISC Bind Statistics
Using this Netdata collector, you can monitor one or more ISC Bind servers.
## Example Netdata charts
Depending on the number of views your bind has, you may get a large number of charts.
Here this is with just one view:
[SMA Sunny Webbox](http://files.sma.de/dl/4253/WEBBOX-DUS131916W.pdf)
Example Netdata configuration for node.d/sma_webbox.conf
The module supports any number of servers, like this:
# SNMP Data Collector
Using this collector, Netdata can collect data from any SNMP device.
Notable configuration options:
`update_every` is the update frequency for each server, in seconds.
`max_request_size` limits the maximum number of OIDs that will be requested in a single call. The default is 50. Lower this number if you get `TooBig` errors in Netdata's `error.log`.
`family` sets the name of the submenu of the dashboard each chart will appear under.
To test it, you can run:

```sh
/usr/libexec/netdata/plugins.d/node.d.plugin 1 snmp
```
The above will run it on your console and you will be able to see what Netdata sees, but also errors. You can get a very detailed output by appending `debug` to the command line.
If it works, restart Netdata to activate the snmp collector and refresh the dashboard (if your SNMP device responds with a delay, you may need to refresh the dashboard in a few seconds).
## Data collection speed
This module collects metrics from the configured heat pump and hot water installation from Stiebel Eltron ISG web.
**Requirements**
* Configuration file `stiebeleltron.conf` in the node.d Netdata config dir (default: `/etc/netdata/node.d/stiebeleltron.conf`)
* Stiebel Eltron ISG web with network access (http), without password login
The charts are configurable, however, the provided default configuration collects the following:
# External plugins overview
`plugins.d` is the Netdata internal plugin that collects metrics
from external processes, thus allowing Netdata to use **external plugins**.
## Provided External Plugins
plugin|language|O/S|description
:---:|:---:|:---:|:---
[node.d.plugin](../node.d.plugin/)|`node.js`|all|a **plugin orchestrator** for data collection modules written in `node.js`.
[python.d.plugin](../python.d.plugin/)|`python`|all|a **plugin orchestrator** for data collection modules written in `python` v2 or v3 (both are supported).
Plugin orchestrators may also be described as **modular plugins**. They are modular since they accept custom made modules to be included. Writing modules for these plugins is easier than accessing the native Netdata API directly. You will find modules already available for each orchestrator under the directory of the particular modular plugin (e.g. under python.d.plugin for the python orchestrator).
Each of these modular plugins has its own methods for defining modules. Please check the examples and their documentation.
## Motivation
This plugin allows Netdata to use **external plugins** for data collection:
1. external data collection plugins may be written in any computer language.
2. external data collection plugins may use O/S capabilities or `setuid` to
run with escalated privileges (compared to the `netdata` daemon).
The communication between the external plugin and Netdata is unidirectional
(from the plugin to Netdata), so that Netdata cannot manipulate an external
plugin running with escalated privileges.
## Operation
Each of the external plugins is expected to run forever.
Netdata will start it when it starts and stop it when it exits.
If the external plugin exits or crashes, Netdata will log an error.
If the external plugin exits or crashes without pushing metrics to Netdata, Netdata will not start it again.
- Plugins that exit with any value other than zero will be disabled. Plugins that exit with zero will be restarted after some time.
- Plugins may also be disabled by Netdata if they output things that Netdata does not understand.
The `stdout` of external plugins is connected to Netdata to receive metrics,
with the API defined below.
The `stderr` of external plugins is connected to Netdata's `error.log`.
Plugins can create any number of charts with any number of dimensions each. Each chart can have its own characteristics independently of the others generated by the same plugin. For example, one chart may have an update frequency of 1 second, another may have 5 seconds and a third may have 10 seconds.
## Configuration
Netdata will supply the environment variables `NETDATA_USER_CONFIG_DIR` (for user supplied) and `NETDATA_STOCK_CONFIG_DIR` (for Netdata supplied) configuration files to identify the directory where configuration files are stored. It is up to the plugin to read the configuration it needs.
The `[plugins]` section of `netdata.conf` contains a list of all the plugins found at the system where Netdata runs, with a boolean setting to enable or disable them.
Example:
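A sketch (the plugin names shown are illustrative and vary with what is installed):

```
[plugins]
apps = yes
charts.d = yes
node.d = yes
tc = no
```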
Netdata will call the plugin with just one command line parameter: the number of seconds between updates.
Other than the above, the plugin configuration is up to the plugin.
Keep in mind that the user may use Netdata configuration to overwrite chart and dimension parameters. This is transparent to the plugin.
### Autoconfiguration
Plugins should attempt to autoconfigure themselves when possible.
For example, if your plugin wants to monitor `squid`, you can search for it on port `3128` or `8080`. If any succeeds, you can proceed. If it fails you can output an error (on stderr) saying that you cannot find `squid` running and giving instructions about the plugin configuration. Then you can stop (exit with non-zero value), so that Netdata will not attempt to start the plugin again.
## External Plugins API
Any program that can print a few values to its standard output can become a Netdata external plugin.
Netdata parses 7 lines starting with:

- `CHART` - create or update a chart
- `DIMENSION` - add or update a dimension to the chart just created
- `VARIABLE` - define a variable
- `BEGIN` - initialize data collection for a chart
- `SET` - set the value of a dimension for the initialized chart
- `END` - complete data collection for the initialized chart
- `FLUSH` - ignore the last collected values
Charts can be added any time (not just at the beginning).
### command line parameters
The plugin **MUST** accept just **one** parameter: **the number of seconds it is
expected to update the values for its charts**. The value passed by Netdata
to the plugin is controlled via its configuration file (so there is no need
for the plugin to handle this configuration option).
There are a few environment variables that are set by Netdata and are available for the plugin to use.
variable|description
:------:|:----------
`NETDATA_USER_CONFIG_DIR`|The directory where all Netdata-related user configuration should be stored. If the plugin requires custom user configuration, this is the place the user has saved it (normally under `/etc/netdata`).
`NETDATA_STOCK_CONFIG_DIR`|The directory where all Netdata-related stock configuration should be stored. If the plugin is shipped with configuration files, this is the place they can be found (normally under `/usr/lib/netdata/conf.d`).
`NETDATA_PLUGINS_DIR`|The directory where all Netdata plugins are stored.
`NETDATA_WEB_DIR`|The directory where the web files of Netdata are saved.
`NETDATA_CACHE_DIR`|The directory where the cache files of Netdata are stored. Use this directory if the plugin requires a place to store data. A new directory should be created for the plugin for this purpose, inside this directory.
`NETDATA_LOG_DIR`|The directory where the log files are stored. By default the `stderr` output of the plugin will be saved in the `error.log` file of Netdata.
`NETDATA_HOST_PREFIX`|This is used in environments where system directories like `/sys` and `/proc` have to be accessed at a different path.
`NETDATA_DEBUG_FLAGS`|This is a number (probably in hex starting with `0x`), that enables certain Netdata debugging features. Check **[[Tracing Options]]** for more information.
`NETDATA_UPDATE_EVERY`|The minimum number of seconds between chart refreshes. This is like the **internal clock** of Netdata (it is user configurable, defaulting to `1`). There is no meaning for a plugin to update its values more frequently than this number of seconds.
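For example, a shell plugin might use these variables to locate its configuration like this (`myplugin.conf` is a hypothetical file name; the fallback paths are the defaults mentioned in the table above):

```sh
# resolve the user and stock configuration directories provided by Netdata
user_config="${NETDATA_USER_CONFIG_DIR:-/etc/netdata}/myplugin.conf"
stock_config="${NETDATA_STOCK_CONFIG_DIR:-/usr/lib/netdata/conf.d}/myplugin.conf"

# prefer the user configuration, fall back to the stock one
if [ -f "${user_config}" ]; then
	. "${user_config}"
elif [ -f "${stock_config}" ]; then
	. "${stock_config}"
fi
```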
### The output of the plugin
The plugin should output instructions for Netdata to its output (`stdout`). Since this uses pipes, please make sure you flush stdout after every iteration.
#### DISABLE
`DISABLE` will disable this plugin. This will prevent Netdata from restarting the plugin. You can also exit with the value `1` to have the same effect.
#### CHART
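The `CHART` line follows this general template (reconstructed from the parameter descriptions below; parameters in brackets are optional):

```
CHART type.id name title units [family [context [charttype [priority [update_every [options [plugin [module]]]]]]]]
```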
- `options`
a space separated list of options, enclosed in quotes. 4 options are currently supported: `obsolete` to mark a chart as obsolete (Netdata will hide it and delete it after some time), `detail` to mark a chart as insignificant (this may be used by dashboards to make the charts smaller, or somehow visualize properly a less important chart), `store_first` to make Netdata store the first collected value, assuming there was an invisible previous value set to zero (this is used by statsd charts - if the first data collected value of incremental dimensions is not zero based, unrealistic spikes will appear with this option set) and `hidden` to perform all operations on a chart, but do not offer it on dashboards (the chart will be sent to backends). `CHART` options have been added in Netdata v1.7 and the `hidden` option was added in 1.10.
- `plugin` and `module`
both are just names that are used to let the user identify the plugin and the module that generated the chart. If `plugin` is unset or empty, netdata will automatically set the filename of the plugin that generated the chart. `module` has not default.
both are just names that are used to let the user identify the plugin and the module that generated the chart. If `plugin` is unset or empty, Netdata will automatically set the filename of the plugin that generated the chart. `module` has no default.
#### DIMENSION
@ -290,7 +290,7 @@ the template is:
- `options`
a space separated list of options, enclosed in quotes. Options supported: `obsolete` to mark a dimension as obsolete (netdata will delete it after some time) and `hidden` to make this dimension hidden, it will take part in the calculations but will not be presented in the chart.
a space separated list of options, enclosed in quotes. Options supported: `obsolete` to mark a dimension as obsolete (Netdata will delete it after some time) and `hidden` to make this dimension hidden; it will take part in the calculations but will not be presented in the chart.
#### VARIABLE
@ -302,7 +302,7 @@ the template is:
Variables support 2 scopes:
- `GLOBAL` or `HOST` to define the variable at the host level.
- `LOCAL` or `CHART` to define the variable at the chart level. Use chart-local variables when the same variable may exist for different charts (i.e. netdata monitors 2 mysql servers, and you need to set the `max_connections` each server accepts). Using chart-local variables is the ideal to build alarm templates.
- `LOCAL` or `CHART` to define the variable at the chart level. Use chart-local variables when the same variable may exist for different charts (e.g. Netdata monitors 2 mysql servers, and you need to set the `max_connections` each server accepts). Using chart-local variables is ideal for building alarm templates.
The position of the `VARIABLE` line sets its default scope (in case you do not specify a scope). So, defining a `VARIABLE` before any `CHART`, or between `END` and `BEGIN` (outside any chart), sets `GLOBAL` scope, while defining a `VARIABLE` just after a `CHART` or a `DIMENSION`, or within the `BEGIN` - `END` block of a chart, sets `LOCAL` scope.
@ -310,9 +310,9 @@ These variables can be set and updated at any point.
Variable names should use alphanumeric characters, the `.` and the `_`.
The `value` is floating point (netdata used `long double`).
The `value` is floating point (Netdata uses `long double`).
Variables are transferred to upstream netdata servers (streaming and database replication).
Variables are transferred to upstream Netdata servers (streaming and database replication).
## Data collection
@ -329,12 +329,12 @@ data collection is defined as a series of `BEGIN` -> `SET` -> `END` lines
is the number of microseconds since the last update of the chart. It is optional.
Under heavy system load, the system may have some latency transferring
data from the plugins to netdata via the pipe. This number improves
data from the plugins to Netdata via the pipe. This number improves
accuracy significantly, since the plugin is able to calculate the
duration between its iterations better than netdata.
duration between its iterations better than Netdata.
The first time the plugin is started, no microseconds should be given
to netdata.
to Netdata.
> SET id = value
@ -360,15 +360,15 @@ If more charts need to be updated, each chart should have its own
`BEGIN` -> `SET` -> `END` block.
If, for any reason, a plugin has issued a `BEGIN` but wants to cancel it,
it can issue a `FLUSH`. The `FLUSH` command will instruct netdata to ignore
it can issue a `FLUSH`. The `FLUSH` command will instruct Netdata to ignore
all the values collected since the last `BEGIN` command.
If a plugin does not behave properly (outputs invalid lines, or does not
follow these guidelines), will be disabled by netdata.
follow these guidelines), it will be disabled by Netdata.
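Putting the above together, here is a minimal sketch of an external plugin written in `bash` (the chart and dimension names are illustrative):
```sh
#!/usr/bin/env bash
# Netdata passes the update frequency (in seconds) as the first argument
update_every="${1:-1}"

# define the chart and its dimension once, at startup
echo "CHART example.random '' 'A random number' 'value' random random line 90000 ${update_every}"
echo "DIMENSION random '' absolute 1 1"

# then send one BEGIN -> SET -> END block per iteration, forever
while true; do
  echo "BEGIN example.random"
  echo "SET random = ${RANDOM}"
  echo "END"
  sleep "${update_every}"
done
```
Since `echo` is a shell builtin, each line is written (and therefore flushed) immediately, as the protocol requires.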
### collected values
netdata will collect any **signed** value in the 64bit range:
Netdata will collect any **signed** value in the 64bit range:
`-9.223.372.036.854.775.808` to `+9.223.372.036.854.775.807`
If a value is not collected, leave it empty, like this:
@ -381,7 +381,7 @@ or do not output the line at all.
1. **python**, use `python.d.plugin`, there are many examples in the [python.d directory](../python.d.plugin/)
python is ideal for netdata plugins. It is a simple, yet powerful way to collect data, it has a very small memory footprint, although it is not the most CPU efficient way to do it.
Python is ideal for Netdata plugins. It is a simple yet powerful way to collect data, and it has a very small memory footprint, although it is not the most CPU-efficient option.
2. **node.js**, use `node.d.plugin`, there are a few examples in the [node.d directory](../node.d.plugin/)
@ -393,7 +393,7 @@ or do not output the line at all.
4. **C**
Of course, C is the most efficient way of collecting data. This is why netdata itself is written in C.
Of course, C is the most efficient way of collecting data. This is why Netdata itself is written in C.
## Writing Plugins Properly
@ -436,7 +436,7 @@ There are a few rules for writing plugins properly:
/*
* find the time of the next loop
* this makes sure we are always aligned
* with the netdata daemon
* with the Netdata daemon
*/
next_run = now - (now % update_every) + update_every;
@ -461,7 +461,7 @@ There are a few rules for writing plugins properly:
/* do your magic here to collect values */
collectValues();
/* send the collected data to netdata */
/* send the collected data to Netdata */
printValues(dt_since_last_run); /* print BEGIN, SET, END statements */
}
```
@ -22,7 +22,7 @@
- `/sys/class/power_supply` (power supply properties)
- `ipc` (IPC semaphores and message queues)
- `ksm` Kernel Same-Page Merging performance (several files under `/sys/kernel/mm/ksm`).
- `netdata` (internal netdata resources utilization)
- `netdata` (internal Netdata resources utilization)
---
@ -59,35 +59,35 @@ Hopefully, the Linux kernel provides many metrics that can provide deep insights
- **Total I/O time**
The sum of the duration of all completed I/O operations. This number can exceed the interval if the disk is able to execute multiple I/O operations in parallel.
- **Space usage**
For mounted disks, netdata will provide a chart for their space, with 3 dimensions:
For mounted disks, Netdata will provide a chart for their space, with 3 dimensions:
1. free
2. used
3. reserved for root
- **inode usage**
For mounted disks, netdata will provide a chart for their inodes (number of file and directories), with 3 dimensions:
For mounted disks, Netdata will provide a chart for their inodes (number of files and directories), with 3 dimensions:
1. free
2. used
3. reserved for root
### disk names
netdata will automatically set the name of disks on the dashboard, from the mount point they are mounted, of course only when they are mounted. Changes in mount points are not currently detected (you will have to restart netdata to change the name of the disk). To use disk IDs provided by `/dev/disk/by-id`, the `name disks by id` option should be enabled. The `preferred disk ids` simple pattern allows choosing disk IDs to be used in the first place.
Netdata will automatically set the name of disks on the dashboard from the mount point where they are mounted (of course, only while they are mounted). Changes in mount points are not currently detected (you will have to restart Netdata to change the name of the disk). To use disk IDs provided by `/dev/disk/by-id`, the `name disks by id` option should be enabled. The `preferred disk ids` simple pattern allows choosing the disk IDs to be used in the first place.
### performance metrics
By default, Netdata will enable monitoring metrics only when they are not zero. If they are constantly zero they are ignored. Metrics that will start having values, after netdata is started, will be detected and charts will be automatically added to the dashboard (a refresh of the dashboard is needed for them to appear though). Set `yes` for a chart instead of `auto` to enable it permanently. You can also set the `enable zero metrics` option to `yes` in the `[global]` section which enables charts with zero metrics for all internal Netdata plugins.
By default, Netdata will enable monitoring metrics only when they are not zero. If they are constantly zero they are ignored. Metrics that start having values after Netdata is started will be detected, and charts will be automatically added to the dashboard (a refresh of the dashboard is needed for them to appear though). Set `yes` for a chart instead of `auto` to enable it permanently. You can also set the `enable zero metrics` option to `yes` in the `[global]` section, which enables charts with zero metrics for all internal Netdata plugins.
netdata categorizes all block devices in 3 categories:
Netdata categorizes all block devices into 3 categories:
1. physical disks (i.e. block devices that do not have slaves and are not partitions)
2. virtual disks (i.e. block devices that have slaves - like RAID devices)
3. disk partitions (i.e. block devices that are part of a physical disk)
Performance metrics are enabled by default for all disk devices, except partitions and not-mounted virtual disks. Of course, you can enable/disable monitoring any block device by editing the netdata configuration file.
Performance metrics are enabled by default for all disk devices, except partitions and not-mounted virtual disks. Of course, you can enable/disable monitoring any block device by editing the Netdata configuration file.
### netdata configuration
### Netdata configuration
You can get the running netdata configuration using this:
You can get the running Netdata configuration using this:
```sh
cd /etc/netdata
@ -150,7 +150,7 @@ For all configuration options:
Of course, to set options, you will have to uncomment them. The comments show the internal defaults.
After saving `/etc/netdata/netdata.conf`, restart your netdata to apply them.
After saving `/etc/netdata/netdata.conf`, restart your Netdata to apply them.
#### Disabling performance metrics for an individual device and for multiple devices by device type
You can easily disable performance metrics for an individual device.
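For example, a sketch that disables the `sda` device (the device name is illustrative; use the names your system reports):
```
[plugin:proc:/proc/diskstats:sda]
    enable = no
```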
@ -278,7 +278,7 @@ each state.
- **Network Interface Events (events/s)**
The number of packet framing errors, collisions detected on the interface, and carrier losses detected by the device driver.
By default netdata will enable monitoring metrics only when they are not zero. If they are constantly zero they are ignored. Metrics that will start having values, after netdata is started, will be detected and charts will be automatically added to the dashboard (a refresh of the dashboard is needed for them to appear though).
By default Netdata will enable monitoring metrics only when they are not zero. If they are constantly zero they are ignored. Metrics that start having values after Netdata is started will be detected, and charts will be automatically added to the dashboard (a refresh of the dashboard is needed for them to appear though).
#### alarms
@ -340,7 +340,7 @@ Netdata does not enable SYNPROXY. It just uses the SYNPROXY metrics exposed by y
### Real-time monitoring of Linux Anti-DDoS
netdata is able to monitor in real-time (per second updates) the operation of the Linux Anti-DDoS protection.
Netdata is able to monitor in real-time (per second updates) the operation of the Linux Anti-DDoS protection.
It visualizes 4 charts:
@ -353,7 +353,7 @@ Example image:
![ddos](https://cloud.githubusercontent.com/assets/2662304/14398891/6016e3fc-fdf0-11e5-942b-55de6a52cb66.gif)
See Linux Anti-DDoS in action at: **[netdata demo site (with SYNPROXY enabled)](https://registry.my-netdata.io/#menu_netfilter_submenu_synproxy)**
See Linux Anti-DDoS in action at: **[Netdata demo site (with SYNPROXY enabled)](https://registry.my-netdata.io/#menu_netfilter_submenu_synproxy)**
## Linux power supply
@ -1,10 +1,10 @@
# python.d.plugin
`python.d.plugin` is a netdata external plugin. It is an **orchestrator** for data collection modules written in `python`.
`python.d.plugin` is a Netdata external plugin. It is an **orchestrator** for data collection modules written in `python`.
1. It runs as an independent process (`ps fax` shows it)
2. It is started and stopped automatically by netdata
3. It communicates with netdata via a unidirectional pipe (sending data to the netdata daemon)
2. It is started and stopped automatically by Netdata
3. It communicates with Netdata via a unidirectional pipe (sending data to the `netdata` daemon)
4. Supports any number of data collection **modules**
5. Allows each **module** to have one or more data collection **jobs**
6. Each **job** is collecting one or more metrics from a single data source
@ -14,7 +14,7 @@ It produces:
* system time
**Requirements:**
Verify that user netdata can execute `chronyc tracking`. If necessary, update `/etc/chrony.conf`, `cmdallow`.
Verify that the `netdata` user can execute `chronyc tracking`. If necessary, update `/etc/chrony.conf` (the `cmdallow` directive).
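A quick way to verify this, assuming `sudo` is available:
```sh
# should print tracking statistics without an error
sudo -u netdata chronyc tracking
```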
### Configuration
@ -9,7 +9,7 @@ Module isn't compatible with new statistic api (v2.3), but you are still able to
by following [upgrading steps.](https://wiki2.dovecot.org/Upgrading/2.3).
**Requirement:**
Dovecot UNIX socket with R/W permissions for user netdata or Dovecot with configured TCP/IP socket.
Dovecot UNIX socket with R/W permissions for user `netdata` or Dovecot with configured TCP/IP socket.
Module gives information with following charts:
@ -3,7 +3,7 @@
The module monitors the fail2ban log file to show all bans for all active jails
**Requirements:**
* fail2ban.log file MUST BE readable by netdata (A good idea is to add **create 0640 root netdata** to fail2ban conf at logrotate.d)
* fail2ban.log file MUST BE readable by Netdata (a good idea is to add **create 0640 root netdata** to the fail2ban conf at logrotate.d)
It produces one chart with multiple lines (one line per jail)
@ -104,7 +104,7 @@ number of currently running Goroutines and updates these stats every second.
In the next section, we will cover how to monitor and chart these exposed stats with
the use of Netdata's `go_expvar` module.
### Using netdata go_expvar module
### Using Netdata go_expvar module
The `go_expvar` module is disabled by default. To enable it, edit [`python.d.conf`](../python.d.conf)
(to edit it on your system run `/etc/netdata/edit-config python.d.conf`), and change the `go_expvar`
@ -143,7 +143,7 @@ Let's go over each of the defined options:
name: 'app1'
This is the job name that will appear at the netdata dashboard.
This is the job name that will appear at the Netdata dashboard.
If not defined, the job_name (top level key) will be used.
url: 'http://127.0.0.1:8080/debug/vars'
@ -164,7 +164,7 @@ Will be explained in more detail below.
**Note: if `collect_memstats` is disabled and no `extra_charts` are defined, the plugin will
disable itself, as there will be no data to collect!**
Apart from these options, each job supports options inherited from netdata's `python.d.plugin`
Apart from these options, each job supports options inherited from Netdata's `python.d.plugin`
and its base `UrlService` class. These are:
update_every: 1 # the job's data collection frequency
@ -174,21 +174,21 @@ and its base `UrlService` class. These are:
### Monitoring custom vars with go_expvar
Now, memory stats might be useful, but what if you want netdata to monitor some custom values
Now, memory stats might be useful, but what if you want Netdata to monitor some custom values
that your Go application exposes? The `go_expvar` module can do that as well with the use of
the `extra_charts` configuration variable.
The `extra_charts` variable is a YaML list of netdata chart definitions.
The `extra_charts` variable is a YAML list of Netdata chart definitions.
Each chart definition has the following keys:
id: netdata chart ID
id: Netdata chart ID
options: a key-value mapping of chart options
lines: a list of line definitions
**Note: please do not use dots in the chart or line ID field.
See [this issue](https://github.com/netdata/netdata/pull/1902#issuecomment-284494195) for explanation.**
Please see these two links to the official netdata documentation for more information about the values:
Please see these two links to the official Netdata documentation for more information about the values:
- [External plugins - charts](../../plugins.d/#chart)
- [Chart variables](../#global-variables-order-and-chart)
@ -202,9 +202,9 @@ Each line can have the following options:
# mandatory
expvar_key: the name of the expvar as present in the JSON output of /debug/vars endpoint
expvar_type: value type; supported are "float" or "int"
id: the id of this line/dimension in netdata
id: the id of this line/dimension in Netdata
# optional - netdata defaults are used if these options are not defined
# optional - Netdata defaults are used if these options are not defined
name: ''
algorithm: absolute
multiplier: 1
@ -267,7 +267,7 @@ app1:
**Netdata charts example**
The images below show how do the final charts in netdata look.
The images below show how the final charts look in Netdata.
![Memory stats charts](https://cloud.githubusercontent.com/assets/15180106/26762052/62b4af58-493b-11e7-9e69-146705acfc2c.png)
@ -6,7 +6,7 @@ And health metrics such as backend servers status (server check should be used).
The plugin can obtain data from a url **OR** a unix socket.
**Requirement:**
Socket MUST be readable AND writable by netdata user.
Socket MUST be readable AND writable by the `netdata` user.
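A quick permissions check (the socket path is an assumption; take the real one from your `haproxy.cfg`):
```sh
sudo -u netdata test -r /var/run/haproxy/admin.sock -a -w /var/run/haproxy/admin.sock && echo OK
```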
It produces:
@ -33,7 +33,7 @@ server:
### notes
* The status chart is primarily intended for alarms, badges or for access via API.
* A system/service/firewall might block netdata's access if a portscan or
* A system/service/firewall might block Netdata's access if a portscan or
similar is detected.
* This plugin is meant for simple use cases. Currently, the accuracy of the
response time is low and should be used as reference only.
@ -3,7 +3,7 @@
The module monitors the leases database to show all active leases for the given pools.
**Requirements:**
* dhcpd leases file MUST BE readable by netdata
* dhcpd leases file MUST BE readable by Netdata (a quick check is sketched below)
* pools MUST BE in CIDR format
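A quick way to verify read access, assuming the default Debian/Ubuntu leases path (adjust it to your distribution):
```sh
sudo -u netdata test -r /var/lib/dhcp/dhcpd.leases && echo OK
```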
It produces:
@ -19,7 +19,7 @@ It provides the following charts:
### configuration
This module needs no configuration. Just make sure the netdata user
This module needs no configuration. Just make sure the `netdata` user
can run the `loginctl` command and get a session list without having to
specify a path.
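A quick check, assuming `sudo` is available:
```sh
# should print the session list without prompting or erroring
sudo -u netdata loginctl list-sessions
```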
@ -122,7 +122,7 @@ Number of charts depends on mongodb version, storage engine and other features (
* member (time when last heartbeat was received from replica set member)
### prerequisite
Create a read-only user for the netdata in the admin database.
Create a read-only user for Netdata in the admin database.
1. Authenticate as the admin user.
@ -42,7 +42,7 @@ To use the Oracle module do the following:
2. Install Oracle Client libraries ([link](https://cx-oracle.readthedocs.io/en/latest/installation.html#install-oracle-client)).
3. Create a read-only netdata user with proper access to your Oracle Database Server.
3. Create a read-only `netdata` user with proper access to your Oracle Database Server.
Connect to your Oracle database with an administrative user and execute:
@ -28,7 +28,7 @@ server:
### notes
* The error chart is intended for alarms, badges or for access via API.
* A system/service/firewall might block netdata's access if a portscan or
* A system/service/firewall might block Netdata's access if a portscan or
similar is detected.
* Currently, the accuracy of the latency is low and should be used as reference only.
@ -6,7 +6,7 @@ Web server log files exist for more than 20 years. All web servers of all kinds,
Yet, after the appearance of google analytics and similar services, and the recent rise of APM (Application Performance Monitoring) with sophisticated time-series databases that collect and analyze metrics at the application level, all these web server log files are mostly just filling our disks, rotated every night without any use whatsoever.
netdata turns this "useless" log file, into a powerful performance and health monitoring tool, capable of detecting, **in real-time**, most common web server problems, such as:
Netdata turns this "useless" log file into a powerful performance and health monitoring tool, capable of detecting, **in real-time**, the most common web server problems, such as:
- too many redirects (i.e. **oops!** *this should not redirect clients to itself*)
- too many bad requests (i.e. **oops!** *a few files were not uploaded*)
@ -18,7 +18,7 @@ netdata turns this "useless" log file, into a powerful performance and health mo
## Usage
If netdata is installed on a system running a web server, it will detect it and it will automatically present a series of charts, with information obtained from the web server API, like these (*these do not come from the web server log file*):
If Netdata is installed on a system running a web server, it will detect it and it will automatically present a series of charts, with information obtained from the web server API, like these (*these do not come from the web server log file*):
![image](https://cloud.githubusercontent.com/assets/2662304/22900686/e283f636-f237-11e6-93d2-cbdf63de150c.png)
*[**netdata**](https://my-netdata.io/) charts based on metrics collected by querying the `nginx` API (i.e. `/stub_status`).*
@ -197,7 +197,7 @@ alarm|description|minimum<br/>requests|warning|critical
The column `minimum requests` states the minimum number of requests required for the alarm to be evaluated. We found that when the site is receiving requests above this rate, these alarms are pretty accurate (i.e. no false-positives).
[**netdata**](https://my-netdata.io/) alarms are user configurable. Sample config files can be found under directory `health/health.d` of the netdata github repository. So, even [`web_log` alarms can be adapted to your needs](../../../health/health.d/web_log.conf).
[**netdata**](https://my-netdata.io/) alarms are user configurable. Sample config files can be found under directory `health/health.d` of the [Netdata GitHub repository](https://github.com/netdata/netdata/). So, even [`web_log` alarms can be adapted to your needs](../../../health/health.d/web_log.conf).
[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fcollectors%2Fpython.d.plugin%2Fweb_log%2FREADME&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)]()
@ -4,7 +4,7 @@ statsd is a system to collect data from any application. Applications are sendin
There is a [plethora of client libraries](https://github.com/etsy/statsd/wiki#client-implementations) for embedding statsd metrics to any application framework. This makes statsd quite popular for custom application metrics.
netdata is a fully featured statsd server. It can collect statsd formatted metrics, visualize them on its dashboards, stream them to other netdata servers or archive them to backend time-series databases.
Netdata is a fully featured statsd server. It can collect statsd formatted metrics, visualize them on its dashboards, stream them to other Netdata servers or archive them to backend time-series databases.
Netdata statsd is inside Netdata (an internal plugin, running inside the Netdata daemon), it is configured via `netdata.conf` and by-default listens on standard statsd ports (tcp and udp 8125 - yes, Netdata statsd server supports both tcp and udp at the same time).
@ -62,19 +62,19 @@ The application may append `|@sampling_rate`, where `sampling_rate` is a number
#### Overlapping metrics
netdata statsd maintains different indexes for each of the types supported. This means the same metric `name` may exist under different types concurrently.
Netdata's statsd server maintains different indexes for each of the types supported. This means the same metric `name` may exist under different types concurrently.
#### Multiple metrics per packet
netdata accepts multiple metrics per packet if each is terminated with `\n`.
Netdata accepts multiple metrics per packet if each is terminated with `\n`.
#### TCP packets
netdata listens for both TCP and UDP packets. For TCP though, is it important to always append `\n` on each metric. netdata uses this to detect if a metric is split into multiple TCP packets. On disconnect, even the remaining (non terminated with `\n`) buffer, is processed.
Netdata listens for both TCP and UDP packets. For TCP though, it is important to always append `\n` to each metric. Netdata uses this to detect if a metric is split into multiple TCP packets. On disconnect, even the remaining buffer (not terminated with `\n`) is processed.
#### UDP packets
When sending multiple packets over UDP, it is important not to exceed the network MTU (usually 1500 bytes minus a few bytes for the headers). netdata will accept UDP packets up to 9000 bytes, but the underlying network will not exceed MTU.
When sending multiple packets over UDP, it is important not to exceed the network MTU (usually 1500 bytes minus a few bytes for the headers). Netdata will accept UDP packets up to 9000 bytes, but the underlying network will not exceed MTU.
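As a quick test of the above, you can push metrics over UDP with `nc` (a sketch; the `-u` and `-w` flags assume the OpenBSD netcat variant, and the metric names are illustrative):
```sh
# two metrics in one UDP packet, each terminated with \n
printf 'myapp.requests:1|c\nmyapp.memory:512|g\n' | nc -u -w 1 localhost 8125
```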
## configuration
@ -107,7 +107,7 @@ This is the statsd configuration at `/etc/netdata/netdata.conf`:
### statsd main config options
- `enabled = yes|no`
controls if statsd will be enabled for this netdata. The default is enabled.
controls if statsd will be enabled for this Netdata instance. The default is enabled.
- `default port = 8125`
@ -117,15 +117,15 @@ This is the statsd configuration at `/etc/netdata/netdata.conf`:
is a space separated list of IPs and ports to listen to. The format is `PROTOCOL:IP:PORT` - if `PORT` is omitted, the `default port` will be used. If `IP` is IPv6, it needs to be enclosed in `[]`. `IP` can also be ` * ` (to listen on all IPs) or even a hostname.
- `update every (flushInterval) = 1` seconds, controls the frequency statsd will push the collected metrics to netdata charts.
- `update every (flushInterval) = 1` seconds, controls the frequency statsd will push the collected metrics to Netdata charts.
- `decimal detail = 1000` controls the number of fractional digits in gauges and histograms. netdata collects metrics using signed 64 bit integers and their fractional detail is controlled using multipliers and divisors. This setting is used to multiply all collected values to convert them to integers and is also set as the divisors, so that the final data will be a floating point number with this fractional detail (1000 = X.0 - X.999, 10000 = X.0 - X.9999, etc).
- `decimal detail = 1000` controls the number of fractional digits in gauges and histograms. Netdata collects metrics using signed 64 bit integers and their fractional detail is controlled using multipliers and divisors. This setting is used to multiply all collected values to convert them to integers and is also set as the divisor, so that the final data will be a floating point number with this fractional detail (1000 = X.0 - X.999, 10000 = X.0 - X.9999, etc).
The rest of the settings are discussed below.
## statsd charts
netdata can visualize statsd collected metrics in 2 ways:
Netdata can visualize statsd collected metrics in 2 ways:
1. Each metric gets its own **private chart**. This is the default and does not require any configuration (although there are a few options to tweak).
@ -143,11 +143,11 @@ create private charts for metrics matching = !myapp.*.badmetric myapp.*
The default is to render private charts for all metrics.
The `memory mode` of the round robin database and the `history` of private metric charts are controlled with `private charts memory mode` and `private charts history`. The defaults for both settings is to use the global netdata settings. So, you need to edit them only when you want statsd to use different settings compared to the global ones.
The `memory mode` of the round robin database and the `history` of private metric charts are controlled with `private charts memory mode` and `private charts history`. The default for both settings is to use the global Netdata settings. So, you need to edit them only when you want statsd to use different settings compared to the global ones.
If you have thousands of metrics, each with its own private chart, you may notice that your web browser becomes slow when you view the netdata dashboard (this is a web browser issue we need to address at the netdata UI). So, netdata has a protection to stop creating charts when `max private charts allowed = 200` (soft limit) is reached.
If you have thousands of metrics, each with its own private chart, you may notice that your web browser becomes slow when you view the Netdata dashboard (this is a web browser issue we need to address at the Netdata UI). So, Netdata has a protection to stop creating charts when `max private charts allowed = 200` (soft limit) is reached.
The metrics above this soft limit are still processed by netdata and will be available to be sent to backend time-series databases, up to `max private charts hard limit = 1000`. So, between 200 and 1000 charts, netdata will still generate charts, but they will automatically be created with `memory mode = none` (netdata will not maintain a database for them). These metrics will be sent to backend time series databases, if the backend configuration is set to `as collected`.
The metrics above this soft limit are still processed by Netdata and will be available to be sent to backend time-series databases, up to `max private charts hard limit = 1000`. So, between 200 and 1000 charts, Netdata will still generate charts, but they will automatically be created with `memory mode = none` (Netdata will not maintain a database for them). These metrics will be sent to backend time series databases, if the backend configuration is set to `as collected`.
Metrics above the hard limit are still collected, but they can only be used in synthetic charts (once a metric is added to a chart, it will be sent to backend servers too).
@ -217,7 +217,7 @@ Using synthetic charts, you can create dedicated sections on the dashboard to re
Synthetic charts are organized in
- **applications** (i.e. entries at the main menu of the netdata dashboard)
- **applications** (i.e. entries at the main menu of the Netdata dashboard)
- **charts for each application** (grouped in families - i.e. submenus at the dashboard menu)
- **statsd metrics for each chart** (i.e. dimensions of the charts)
@ -257,11 +257,11 @@ Using the above configuration `myapp` should get its own section on the dashboar
`[app]` starts a new application definition. The supported settings in this section are:
- `name` defines the name of the app.
- `metrics` is a netdata simple pattern (space separated patterns, using `*` for wildcard, possibly starting with `!` for negative match). This pattern should match all the possible statsd metrics that will be participating in the application `myapp`.
- `metrics` is a Netdata simple pattern (space separated patterns, using `*` for wildcard, possibly starting with `!` for negative match). This pattern should match all the possible statsd metrics that will be participating in the application `myapp`.
- `private charts = yes|no`, enables or disables private charts for the metrics matched.
- `gaps when not collected = yes|no`, enables or disables gaps on the charts of the application, when metrics are not collected.
- `memory mode` sets the memory mode for all charts of the application. The default is the global default for netdata (not the global default for statsd private charts).
- `history` sets the size of the round robin database for this application. The default is the global default for netdata (not the global default for statsd private charts).
- `memory mode` sets the memory mode for all charts of the application. The default is the global default for Netdata (not the global default for statsd private charts).
- `history` sets the size of the round robin database for this application. The default is the global default for Netdata (not the global default for statsd private charts).
`[dictionary]` defines name-value associations. These are used for renaming metrics when they are added to synthetic charts. Metric names are also defined at each `dimension` line. However, using the dictionary, dimension names can be declared globally for each app, and it is the only way to rename dimensions when using patterns. Of course the dictionary can be empty or missing.
@ -281,7 +281,7 @@ So, the format is this:
dimension = [pattern] METRIC NAME TYPE MULTIPLIER DIVIDER OPTIONS
```
`pattern` is a keyword. When set, `METRIC` is expected to be a netdata simple pattern that will be used to match all the statsd metrics to be added to the chart. So, `pattern` automatically matches any number of statsd metrics, all of which will be added as separate chart dimensions.
`pattern` is a keyword. When set, `METRIC` is expected to be a Netdata simple pattern that will be used to match all the statsd metrics to be added to the chart. So, `pattern` automatically matches any number of statsd metrics, all of which will be added as separate chart dimensions.
`TYPE`, `MULTIPLIER`, `DIVIDER` and `OPTIONS` are optional.
@ -336,13 +336,13 @@ and this synthetic chart:
The `[dictionary]` section accepts any number of `name = value` pairs.
netdata uses this dictionary as follows:
Netdata uses this dictionary as follows:
1. When a `dimension` has a non-empty `NAME`, that name is looked up at the dictionary.
2. If the above lookup gives nothing, or the `dimension` has an empty `NAME`, the original statsd metric name is looked up at the dictionary.
3. If any of the above succeeds, netdata uses the `value` of the dictionary, to set the name of the dimension. The dimensions will have as ID the original statsd metric name, and as name, the dictionary value.
3. If any of the above succeeds, Netdata uses the `value` of the dictionary, to set the name of the dimension. The dimensions will have as ID the original statsd metric name, and as name, the dictionary value.
So, you can use the dictionary in 2 ways:
@ -351,11 +351,11 @@ So, you can use the dictionary in 2 ways:
In both cases, the dimension will be added with ID `myapp.metric1` and will be named `metric1 name`. So, in alarms you can use either of the 2 as `${myapp.metric1}` or `${metric1 name}`.
> keep in mind that if you add multiple times the same statsd metric to a chart, netdata will append `TYPE` to the dimension ID, so `myapp.metric1` will be added as `myapp.metric1_last` or `myapp.metric1_events`, etc. If you add multiple times the same metric with the same `TYPE` to a chart, netdata will also append an incremental counter to the dimension ID, i.e. `myapp.metric1_last1`, `myapp.metric1_last2`, etc.
> keep in mind that if you add the same statsd metric to a chart multiple times, Netdata will append `TYPE` to the dimension ID, so `myapp.metric1` will be added as `myapp.metric1_last` or `myapp.metric1_events`, etc. If you add the same metric with the same `TYPE` to a chart multiple times, Netdata will also append an incremental counter to the dimension ID, i.e. `myapp.metric1_last1`, `myapp.metric1_last2`, etc.
#### dimension patterns
netdata allows adding multiple dimensions to a chart, by matching the statsd metrics with a netdata simple pattern.
Netdata allows adding multiple dimensions to a chart, by matching the statsd metrics with a Netdata simple pattern.
Assume we have an API that provides statsd metrics for each response code per method it supports, like these:
@ -382,7 +382,7 @@ To add all response codes of `myapp.api.get` to a chart use this:
dimension = pattern 'myapp.api.get.*' '' last 1 1
```
The above will add dimension named `200`, `400` and `500` (yes, netdata extracts the wildcarded part of the metric name - so the dimensions will be named with whatever the `*` matched). You can rename the dimensions with this:
The above will add dimensions named `200`, `400` and `500` (yes, Netdata extracts the wildcarded part of the metric name - so the dimensions will be named with whatever the `*` matched). You can rename the dimensions with this:
```
[dictionary]
@ -435,17 +435,17 @@ Using the above, the dimensions will be added as `GET`, `ADD` and `DELETE`.
## interpolation
~~If you send just one value to statsd, you will notice that the chart is created but no value is shown. The reason is that netdata interpolates all values at second boundaries. For incremental values (`counters` and `meters` in statsd terminology), if you send 10 at 00:00:00.500, 20 at 00:00:01.500 and 30 at 00:00:02.500, netdata will show 15 at 00:00:01 and 25 at 00:00:02.~~
~~If you send just one value to statsd, you will notice that the chart is created but no value is shown. The reason is that Netdata interpolates all values at second boundaries. For incremental values (`counters` and `meters` in statsd terminology), if you send 10 at 00:00:00.500, 20 at 00:00:01.500 and 30 at 00:00:02.500, Netdata will show 15 at 00:00:01 and 25 at 00:00:02.~~
~~This interpolation is automatic and global in netdata for all charts, for incremental values. This means that for the chart to start showing values you need to send 2 values across 2 flush intervals.~~
~~This interpolation is automatic and global in Netdata for all charts, for incremental values. This means that for the chart to start showing values you need to send 2 values across 2 flush intervals.~~
~~(although this is required for incremental values, netdata allows mixing incremental and absolute values on the same charts, so this little limitation [i.e. 2 values to start visualization], is applied on all netdata dimensions).~~
~~(although this is required for incremental values, Netdata allows mixing incremental and absolute values on the same charts, so this little limitation [i.e. 2 values to start visualization], is applied on all Netdata dimensions).~~
(statsd metrics do not lose their first data collection due to interpolation anymore - fixed with [PR #2411](https://github.com/netdata/netdata/pull/2411))
## sending statsd metrics from shell scripts
You can send/update statsd metrics from shell scripts. You can use this feature, to visualize in netdata automated jobs you run on your servers.
You can send/update statsd metrics from shell scripts. You can use this feature, to visualize in Netdata automated jobs you run on your servers.
The command you need to run is:
@ -42,7 +42,7 @@ QoS is about 2 features:
1. **Monitoring the bandwidth used by services**
netdata provides wonderful real-time charts, like this one (wait to see the orange `rsync` part):
Netdata provides wonderful real-time charts, like this one (wait to see the orange `rsync` part):
![qos3](https://cloud.githubusercontent.com/assets/2662304/14474189/713ede84-0104-11e6-8c9c-8dca5c2abd63.gif)
@ -62,7 +62,7 @@ QoS is about 2 features:
When your system is under a DDoS attack, it will receive a lot more bandwidth than it can handle and your applications will probably crash. Setting a limit on the inbound traffic using QoS will protect your servers (throttle the requests) and, depending on the size of the attack, may allow your legitimate users to access the server while the attack is taking place.
Using QoS together with a [SYNPROXY](../proc.plugin/README.md#linux-anti-ddos) will provide a great degree of protection against most DDoS attacks. Actually when I wrote that article, a few folks tried to DDoS the netdata demo site to see in real-time the SYNPROXY operation. They did not do it right, but anyway a great deal of requests reached the netdata server. What saved netdata was QoS. The netdata demo server has QoS installed, so the requests were throttled and the server did not even reach the point of resource starvation. Read about it [here](../proc.plugin/README.md#linux-anti-ddos).
Using QoS together with a [SYNPROXY](../proc.plugin/README.md#linux-anti-ddos) will provide a great degree of protection against most DDoS attacks. Actually when I wrote that article, a few folks tried to DDoS the Netdata demo site to see in real-time the SYNPROXY operation. They did not do it right, but anyway a great deal of requests reached the Netdata server. What saved Netdata was QoS. The Netdata demo server has QoS installed, so the requests were throttled and the server did not even reach the point of resource starvation. Read about it [here](../proc.plugin/README.md#linux-anti-ddos).
On top of all these, QoS is extremely light. You will configure it once, and this is it. It will not bother you again and it will not use any noticeable CPU resources, especially on application and database servers.
@ -72,7 +72,7 @@ On top of all these, QoS is extremely light. You will configure it once, and thi
- ensure each end-user connection will get a fair cut of the available bandwidth.
Once **traffic classification** is applied, we can use **[netdata](https://github.com/netdata/netdata)** to visualize the bandwidth consumption per class in real-time (no configuration is needed for netdata - it will figure it out).
Once **traffic classification** is applied, we can use **[netdata](https://github.com/netdata/netdata)** to visualize the bandwidth consumption per class in real-time (no configuration is needed for Netdata - it will figure it out).
QoS is extremely light. You will configure it once, and this is it. It will not bother you again and it will not use any noticeable CPU resources, especially on application and database servers.
@ -115,10 +115,10 @@ To do it the hard way, you can go through the [tc configuration steps](#qos-conf
The **[FireHOL](https://firehol.org/)** package already distributes **[FireQOS](https://firehol.org/tutorial/fireqos-new-user/)**. Check the **[FireQOS tutorial](https://firehol.org/tutorial/fireqos-new-user/)** to learn how to write your own QoS configuration.
With **[FireQOS](https://firehol.org/tutorial/fireqos-new-user/)**, it is **really simple for everyone to use QoS in Linux**. Just install the package `firehol`. It should already be available for your distribution. If not, check the **[FireHOL Installation Guide](https://firehol.org/installing/)**. After that, you will have the `fireqos` command which uses a configuration like the following `/etc/firehol/fireqos.conf`, used at the netdata demo site:
With **[FireQOS](https://firehol.org/tutorial/fireqos-new-user/)**, it is **really simple for everyone to use QoS in Linux**. Just install the package `firehol`. It should already be available for your distribution. If not, check the **[FireHOL Installation Guide](https://firehol.org/installing/)**. After that, you will have the `fireqos` command which uses a configuration like the following `/etc/firehol/fireqos.conf`, used at the Netdata demo site:
```sh
# configure the netdata ports
# configure the Netdata ports
server_netdata_ports="tcp/19999"
interface eth0 world bidirectional ethernet balanced rate 50Mbit
@ -155,7 +155,7 @@ With **[FireQOS](https://firehol.org/tutorial/fireqos-new-user/)**, it is **real
match input src 10.2.3.5
```
Nothing more is needed. You just run `fireqos start` to apply this configuration, restart netdata and you have real-time visualization of the bandwidth consumption of your applications. FireQOS is not a daemon. It will just convert the configuration to `tc` commands. It will run them and it will exit.
Nothing more is needed. You just run `fireqos start` to apply this configuration, restart Netdata and you have real-time visualization of the bandwidth consumption of your applications. FireQOS is not a daemon. It will just convert the configuration to `tc` commands. It will run them and it will exit.
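In practice that is just (use your distribution's service manager to restart Netdata):
```sh
fireqos start                # compile the configuration to tc commands and apply them
systemctl restart netdata    # or: service netdata restart
```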
**IMPORTANT**: If you copy this configuration to apply it to your system, please adapt the speeds - experiment in non-production environments to learn the tool, before applying it on your servers.
Add the following configuration option in `/etc/netdata/netdata.conf`:
Finally, create `/etc/netdata/tc-qos-helper.conf` with this content:
```
tc_show="class"
```
Please note, that by default Netdata will enable monitoring metrics only when they are not zero. If they are constantly zero they are ignored. Metrics that will start having values, after netdata is started, will be detected and charts will be automatically added to the dashboard (a refresh of the dashboard is needed for them to appear though). Set `yes` for a chart instead of `auto` to enable it permanently. You can also set the `enable zero metrics` option to `yes` in the `[global]` section which enables charts with zero metrics for all internal Netdata plugins.
Please note that by default Netdata will enable monitoring metrics only when they are not zero. If they are constantly zero they are ignored. Metrics that start having values after Netdata is started will be detected, and charts will be automatically added to the dashboard (a refresh of the dashboard is needed for them to appear though). Set `yes` for a chart instead of `auto` to enable it permanently. You can also set the `enable zero metrics` option to `yes` in the `[global]` section which enables charts with zero metrics for all internal Netdata plugins.
[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fcollectors%2Ftc.plugin%2FREADME&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)]()
@ -6,7 +6,7 @@
1. install `xen-dom0-libs-devel` and `yajl-devel` using the package manager of your system.
2. re-install netdata from source. The installer will detect that the required libraries are now available and will also build xenstat.plugin.
2. re-install Netdata from source. The installer will detect that the required libraries are now available and will also build xenstat.plugin.
Keep in mind that `libxenstat` requires root access, so the plugin is setuid to root.
@ -25,7 +25,7 @@ Domain:
## Configuration
If you need to disable xenstat for netdata, edit /etc/netdata/netdata.conf and set:
If you need to disable xenstat for Netdata, edit `/etc/netdata/netdata.conf` and set:
```
[plugins]
@ -1,4 +1,4 @@
# netdata contrib
# Netdata contrib
## Building .deb packages
@ -7,8 +7,8 @@ Debian package. It has been tested on Debian Jessie and Wheezy,
but should work, possibly with minor changes, if you have other
dpkg-based systems such as Ubuntu or Mint.
To build netdata for a Debian Jessie system, the debian directory
has to be available in the root of the netdata source. The easiest
To build Netdata for a Debian Jessie system, the debian directory
has to be available in the root of the Netdata source. The easiest
way to do this is with a symlink:
~/netdata$ ln -s contrib/debian
@ -50,9 +50,9 @@ updates first.
Then proceed as in the main instructions above.
### Reinstalling netdata
### Reinstalling Netdata
The recommended way to upgrade netdata packages built from this
The recommended way to upgrade Netdata packages built from this
source is to remove the current package from your system, then
install the new package. Upgrading on wheezy is known to not
work cleanly; Jessie may behave as expected.
@ -1,10 +1,10 @@
# spec to build netdata RPM for sles 11
# Spec to build Netdata RPM for sles 11
Based on [opensuse rpm spec](https://build.opensuse.org/package/show/network/netdata) with some
changes and additions for sles 11 backport, namely:
- init.d script
- run-time dependency on python ordereddict backport
- patch for netdata python.d plugin to work with older python
- patch for Netdata python.d plugin to work with older python
- crude hack of notification script to work with bash 3 (email and syslog only, one destination,
see comments at the top)
@ -2,10 +2,10 @@
## Starting netdata
- You can start netdata by executing it with `/usr/sbin/netdata` (the installer will also start it).
- You can start Netdata by executing it with `/usr/sbin/netdata` (the installer will also start it).
- You can stop netdata by killing it with `killall netdata`.
You can stop and start netdata at any point. Netdata saves on exit its round robbin
- You can stop Netdata by killing it with `killall netdata`.
You can stop and start Netdata at any point. On exit, Netdata saves its round-robin
database to `/var/cache/netdata` so that it will continue from where it stopped the last time.
Access to the web site, for all graphs, is by default on port `19999`, so go to:
@ -16,7 +16,7 @@ Access to the web site, for all graphs, is by default on port `19999`, so go to:
You can get the running config file at any time, by accessing `http://127.0.0.1:19999/netdata.conf`.
### Starting netdata at boot
### Starting Netdata at boot
In the `system` directory you can find scripts and configurations for the various distros.
@ -27,7 +27,7 @@ The installer already installs `netdata.service` if it detects a systemd system.
To install `netdata.service` by hand, run:
```sh
# stop netdata
# stop Netdata
killall netdata
# copy netdata.service to systemd
@ -36,10 +36,10 @@ cp system/netdata.service /etc/systemd/system/
# let systemd know there is a new service
systemctl daemon-reload
# enable netdata at boot
# enable Netdata at boot
systemctl enable netdata
# start netdata
# start Netdata
systemctl start netdata
```
@ -48,7 +48,7 @@ systemctl start netdata
In the system directory you can find `netdata-lsb`. Copy it to the proper place according to your distribution documentation. For Ubuntu, this can be done by running the following commands as root.
```sh
# copy the netdata startup file to /etc/init.d
# copy the Netdata startup file to /etc/init.d
cp system/netdata-lsb /etc/init.d/netdata
# make sure it is executable
@ -67,7 +67,7 @@ In the `system` directory you can find `netdata-openrc`. Copy it to the proper p
For older versions of RHEL/CentOS that don't have systemd, an init script is included in the system directory. This can be installed by running the following commands as root.
```sh
# copy the netdata startup file to /etc/init.d
# copy the Netdata startup file to /etc/init.d
cp system/netdata-init-d /etc/init.d/netdata
# make sure it is executable
_There has been some recent work on the init script, see PR https://github.com/
#### other systems
You can start netdata by running it from `/etc/rc.local` or equivalent.
You can start Netdata by running it from `/etc/rc.local` or equivalent.
## Command line options
@ -97,7 +97,7 @@ netdata -h
The program will print the supported command line parameters.
The command line options of the netdata 1.10.0 version are the following:
The command line options of the Netdata 1.10.0 version are the following:
```
^
@ -182,7 +182,7 @@ The command line options of the netdata 1.10.0 version are the following:
## Log files
netdata uses 3 log files:
Netdata uses 3 log files:
1. `error.log`
2. `access.log`
@ -190,18 +190,18 @@ netdata uses 3 log files:
Any of them can be disabled by setting it to `/dev/null` or `none` in `netdata.conf`.
By default `error.log` and `access.log` are enabled. `debug.log` is only enabled if
debugging/tracing is also enabled (netdata needs to be compiled with debugging enabled).
debugging/tracing is also enabled (Netdata needs to be compiled with debugging enabled).
Log files are stored in `/var/log/netdata/` by default.
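For example, to disable the access and debug logs (a sketch; in this Netdata version these keys live in the `[global]` section of `netdata.conf`):
```
[global]
    access log = none
    debug log = none
```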
#### error.log
The `error.log` is the `stderr` of the netdata daemon and all external plugins run by netdata.
The `error.log` is the `stderr` of the `netdata` daemon and all external plugins run by Netdata.
So if any process, in the netdata process tree, writes anything to its standard error,
So if any process, in the Netdata process tree, writes anything to its standard error,
it will appear in `error.log`.
For most netdata programs (including standard external plugins shipped by netdata), the
For most Netdata programs (including standard external plugins shipped by Netdata), the
following lines may appear:
tag|description
@ -213,7 +213,7 @@ tag|description
So, when auto-detection of data collection fails, `ERROR` lines are logged and the relevant modules
are disabled, but the program continues to run.
When a netdata program cannot run at all, a `FATAL` line is logged.
When a Netdata program cannot run at all, a `FATAL` line is logged.
#### access.log
@ -231,7 +231,7 @@ where:
- `PERCENT_COMPRESSION` is the percentage of traffic saved due to compression.
- `PREP_TIME` is the time in milliseconds needed to prepare the response.
- `SENT_TIME` is the time in milliseconds needed to send the response to the client.
- `TOTAL_TIME` is the total time the request was inside netdata (from the first byte of the request to the last byte of the response).
- `TOTAL_TIME` is the total time the request was inside Netdata (from the first byte of the request to the last byte of the response).
- `ACTION` can be `filecopy`, `options` (used in CORS), `data` (API call).
@ -242,17 +242,17 @@ See [debugging](#debugging).
## OOM Score
netdata runs with `OOMScore = 1000`. This means netdata will be the first to be killed when your
Netdata runs with `OOMScore = 1000`. This means Netdata will be the first to be killed when your
server runs out of memory.
You can set netdata OOMScore in `netdata.conf`, like this:
You can set Netdata OOMScore in `netdata.conf`, like this:
```
[global]
OOM score = 1000
```
netdata logs its OOM score when it starts:
Netdata logs its OOM score when it starts:
```sh
# grep OOM /var/log/netdata/error.log
@ -261,16 +261,16 @@ netdata logs its OOM score when it starts:
#### OOM score and systemd
netdata will not be able to lower its OOM Score below zero, when it is started as the `netdata`
Netdata will not be able to lower its OOM Score below zero when it is started as the `netdata`
user (systemd case).
To allow netdata control its OOM Score in such cases, you will need to edit
To allow Netdata to control its OOM Score in such cases, you will need to edit
`netdata.service` and set:
```
[Service]
# The minimum netdata Out-Of-Memory (OOM) score.
# netdata (via [global].OOM score in netdata.conf) can only increase the value set here.
# The minimum Netdata Out-Of-Memory (OOM) score.
# Netdata (via [global].OOM score in netdata.conf) can only increase the value set here.
# To decrease it, set the minimum here and set the same or a higher value in netdata.conf.
# Valid values: -1000 (never kill netdata) to 1000 (always kill netdata).
OOMScoreAdjust=-1000
@ -278,7 +278,7 @@ OOMScoreAdjust=-1000
Run `systemctl daemon-reload` to reload these changes.
The above, sets and OOMScore for netdata to `-1000`, so that netdata can increase it via
The above sets the OOMScore for Netdata to `-1000`, so that Netdata can increase it via
`netdata.conf`.
If you want to control it entirely via systemd, you can set in `netdata.conf`:
@ -293,9 +293,9 @@ Using the above, whatever OOM Score you have set at `netdata.service` will be ma
## Netdata process scheduling policy
By default netdata runs with the `idle` process scheduling policy, so that it uses CPU resources, only when there is idle CPU to spare. On very busy servers (or weak servers), this can lead to gaps on the charts.
By default Netdata runs with the `idle` process scheduling policy, so that it uses CPU resources only when there is idle CPU to spare. On very busy servers (or weak servers), this can lead to gaps on the charts.
You can set netdata scheduling policy in `netdata.conf`, like this:
You can set Netdata scheduling policy in `netdata.conf`, like this:
```
[global]
@ -306,7 +306,7 @@ You can use the following:
policy|description
:-----:|:--------
`idle`|use CPU only when there is spare - this is lower than nice 19 - it is the default for netdata and it is so low that netdata will run in "slow motion" under extreme system load, resulting in short (1-2 seconds) gaps at the charts.
`idle`|use CPU only when there is spare - this is lower than nice 19 - it is the default for Netdata and it is so low that Netdata will run in "slow motion" under extreme system load, resulting in short (1-2 seconds) gaps at the charts.
`other`<br/>or<br/>`nice`|this is the default policy for all processes under Linux. It provides dynamic priorities based on the `nice` level of each process. Check below for setting this `nice` level for netdata.
`batch`|This policy is similar to `other` in that it schedules the thread according to its dynamic priority (based on the `nice` value). The difference is that this policy will cause the scheduler to always assume that the thread is CPU-intensive. Consequently, the scheduler will apply a small scheduling penalty with respect to wake-up behavior, so that this thread is mildly disfavored in scheduling decisions.
`fifo`|`fifo` can be used only with static priorities higher than 0, which means that when a `fifo` thread becomes runnable, it will always immediately preempt any currently running `other`, `batch`, or `idle` thread. `fifo` is a simple scheduling algorithm without time slicing.
When the policy is set to `other`, `nice`, or `batch`, the `process nice level` option in `netdata.conf` also applies (see the systemd notes below).
## scheduling settings and systemd
Netdata will not be able to set its scheduling policy and priority to more important values when it is started as the `netdata` user (systemd case).
You can set these settings at `/etc/systemd/system/netdata.service`:
```
[Service]
# By default Netdata switches to scheduling policy idle, which makes it use CPU, only
# when there is spare available.
# Valid policies: other (the system default) | batch | idle | fifo | rr
#CPUSchedulingPolicy=other
# This sets the maximum scheduling priority Netdata can set (for policies: rr and fifo).
# Netdata (via [global].process scheduling priority in netdata.conf) can only lower this value.
# Priority gets values 1 (lowest) to 99 (highest).
#CPUSchedulingPriority=1
# For scheduling policy 'other' and 'batch', this sets the lowest niceness of Netdata.
# Netdata (via [global].process nice level in netdata.conf) can only increase the value set here.
#Nice=0
```
Run `systemctl daemon-reload` to reload these changes.
Now, tell Netdata to keep these settings, as set by systemd, by editing `netdata.conf` and setting:
```
[global]
    process scheduling policy = keep
```
Using the above, whatever scheduling settings you have set at `netdata.service` will be maintained by Netdata.
#### Example 1: Netdata with nice -1 on non-systemd systems
On a system that is not based on systemd, to make Netdata run with nice level -1 (a little bit higher to the default for all programs), edit `netdata.conf` and set:
```
[global]
    process scheduling policy = other
    process nice level = -1
```

and restart Netdata with `sudo service netdata restart`.
#### Example 2: Netdata with nice -1 on systemd systems
On a system that is based on systemd, to make Netdata run with nice level -1 (a little bit higher to the default for all programs), edit `netdata.conf` and set:
```
[global]
    process scheduling policy = keep
```

then run `systemctl daemon-reload` and restart Netdata with `sudo systemctl restart netdata`.

## Virtual memory
You may notice that Netdata's virtual memory size, as reported by `ps` or `/proc/pid/status` (or even Netdata's applications virtual memory chart) is unrealistically high.
For example, it may be reported to be 150+MB, even if the resident memory size is just 25MB. Similar values may be reported for Netdata plugins too.
Check this for example: A Netdata installation with default settings on Ubuntu 16.04LTS. The top chart is **real memory used**, while the bottom one is **virtual memory**:
![image](https://cloud.githubusercontent.com/assets/2662304/19013772/5eb7173e-87e3-11e6-8f2b-a2ccfeb06faf.png)
This happens because the system memory allocator allocates virtual memory arenas, based on the
number of threads running.
The system does this for speed. Having a separate memory arena for each thread, allows the
threads to run in parallel in multi-core systems, without any locks between them.
This behaviour is system specific. For example, the chart above when running Netdata on Alpine Linux (that uses **musl** instead of **glibc**) is this:
![image](https://cloud.githubusercontent.com/assets/2662304/19013807/7cf5878e-87e4-11e6-9651-082e68701eab.png)
**Can we do anything to lower it?**
Since Netdata already uses minimal memory allocations while it runs (i.e. it adapts its memory on start, so that while it repeatedly collects data it does not do memory allocations), it already instructs the system memory allocator to minimize the memory arenas for each thread. We have also added [2 configuration options](https://github.com/netdata/netdata/blob/5645b1ee35248d94e6931b64a8688f7f0d865ec6/src/main.c#L410-L418)
to allow you to tweak these settings: `glibc malloc arena max for plugins` and `glibc malloc arena max for netdata`.
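A sketch of those two options in `netdata.conf` (both default to `1`, per the daemon configuration table later in this document):

```
[global]
    glibc malloc arena max for plugins = 1
    glibc malloc arena max for netdata = 1
```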
However, even if we instructed the memory allocator to use just one arena, it seems it allocates an arena per thread.
Netdata also supports `jemalloc` and `tcmalloc`, however both behave exactly the same as the glibc memory allocator in this aspect.
**Is this a problem?**
No, it is not.
Linux reserves real memory (physical RAM) in pages (on x86 machines pages are 4KB each).
So even if the system memory allocator is allocating huge amounts of virtual memory,
only the 4KB pages that are actually used are reserving physical RAM. The **real memory** chart
on the Netdata applications section shows the amount of physical memory these pages occupy (it
accounts for whole pages, even if parts of them are actually used).
## Debugging
When you compile Netdata with debugging:
1. compiler optimizations for your CPU are disabled (Netdata will run somewhat slower)
2. a lot of code is added all over Netdata, to log debug messages to `/var/log/netdata/debug.log`. However, nothing is printed by default. Netdata allows you to select which sections of Netdata you want to trace. Tracing is activated via the config option `debug flags`. It accepts a hex number, to enable or disable specific sections. You can find the options supported at [log.h](../libnetdata/log/log.h). They are the `D_*` defines. The value `0xffffffffffffffff` will enable all possible debug flags.
Once Netdata is compiled with debugging and tracing is enabled for a few sections, the file `/var/log/netdata/debug.log` will contain the messages.
> Do not forget to disable tracing (`debug flags = 0`) when you are done tracing. The file `debug.log` can grow too fast.
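For example, a sketch that enables every debug section (the hex value is the all-flags mask mentioned above):

```
[global]
    debug flags = 0xffffffffffffffff
```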
#### compiling Netdata with debugging
To compile Netdata with debugging, use this:
```sh
# step into the Netdata source directory
cd /usr/src/netdata.git
# run the installer with debugging enabled
CFLAGS="-O1 -ggdb -DNETDATA_INTERNAL_CHECKS=1" ./netdata-installer.sh
```
The above will compile and install Netdata with debugging info embedded. You can now use `debug flags` to set the section(s) you need to trace.
#### debugging crashes
We have done our best to make Netdata crash free. If, however, Netdata crashes on your system, it would be very helpful to provide stack traces of the crash. Without them, it will be almost impossible to find the issue (the code base is quite large to find such an issue by just observing it).
To provide stack traces, **you need to have Netdata compiled with debugging**. There is no need to enable any tracing (`debug flags`).
Then you need to be in one of the following 2 cases:
1. Netdata crashes and you have a core dump
2. you can reproduce the crash
If you are not in one of these cases, you need to find a way to be (i.e. if your system does not produce core dumps, check your distro documentation to enable them).
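For instance, on many Linux systems core dumps can be enabled with something like this (a sketch using standard tools; the core file path is an arbitrary choice):

```sh
# allow core files of unlimited size in the current shell
ulimit -c unlimited
# write cores to /tmp, named after the executable and pid
sysctl -w kernel.core_pattern=/tmp/core.%e.%p
```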
#### Netdata crashes and you have a core dump
> you need to have Netdata compiled with debugging info for this to work (check above)
Run the following command and post the output on a GitHub issue.
```sh
gdb $(which netdata) /path/to/core/dump
```
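If you prefer a non-interactive capture, something like the following should also work (a sketch; `-batch` and `-ex` are standard gdb switches):

```sh
# print a full backtrace and exit
gdb -batch -ex 'bt full' $(which netdata) /path/to/core/dump
```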
#### you can reproduce a Netdata crash on your system
> you need to have Netdata compiled with debugging info for this to work (check above)
Install the package `valgrind` and run:
```sh
valgrind $(which netdata) -D
```
Netdata will start and it will be a lot slower. Now reproduce the crash and `valgrind` will dump on your console the stack trace. Open a new GitHub issue and post the output.
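If the run is unbearably slow, disabling uninitialised-value tracking usually speeds valgrind up considerably (a sketch; `--undef-value-errors=no` is a standard valgrind option):

```sh
valgrind --undef-value-errors=no $(which netdata) -D
```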
[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fdaemon%2FREADME&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)]()

---

This config file **is not needed by default**. Netdata works fine out of the box.
`netdata.conf` has sections stated with `[section]`. You will see the following sections:
1. `[global]` to [configure](#global-section-options) the [Netdata daemon](../).
2. `[web]` to [configure the web server](../../web/server).
3. `[plugins]` to [configure](#plugins-section-options) which [collectors](../../collectors) to use and PATH settings.
4. `[health]` to [configure](#health-section-options) general settings for [health monitoring](../../health)
5. `[registry]` for the [Netdata registry](../../registry).
6. `[backend]` to set up [streaming and replication](../../streaming) options.
7. `[statsd]` for the general settings of the [stats.d.plugin](../../collectors/statsd.plugin).
8. `[plugin:NAME]` sections for each collector plugin, under the comment [Per plugin configuration](#per-plugin-configuration).
9. `[CHART_NAME]` sections for each chart defined, under the comment [Per chart configuration](#per-chart-configuration).
The configuration file is a `name = value` dictionary. Netdata will not complain if you set options unknown to it. When you check the running configuration by accessing the URL `/netdata.conf` on your Netdata server, Netdata will add a comment on settings it does not currently use.
## Applying changes
After `netdata.conf` has been modified, Netdata needs to be restarted for changes to apply:
```bash
sudo service netdata restart
```

Please note that your data history will be lost if you have modified the `history` parameter.

### [global] section options
setting | default | info
:------:|:-------:|:----
process scheduling policy | `keep` | See [Netdata process scheduling policy](../#netdata-process-scheduling-policy)
OOM score | `1000` | See [OOM score](../#oom-score)
glibc malloc arena max for plugins | `1` | See [Virtual memory](../#virtual-memory).
glibc malloc arena max for Netdata | `1` | See [Virtual memory](../#virtual-memory).
hostname | auto-detected | The hostname of the computer running Netdata.
history | `3996` | The number of entries the `netdata` daemon will by default keep in memory for each chart dimension. This setting can also be configured per chart. Check [Memory Requirements](../../database/#database) for more information.
update every | `1` | The frequency in seconds, for data collection. For more information see [Performance](../../docs/Performance.md#performance).
config directory | `/etc/netdata` | The directory configuration files are kept.
stock config directory | `/usr/lib/netdata/conf.d` |
log directory | `/var/log/netdata` | The directory in which the [log files](../#log-files) are kept.
web files directory | `/usr/share/netdata/web` | The directory the web static files are kept.
cache directory | `/var/cache/netdata` | The directory the memory database will be stored if and when Netdata exits. Netdata will re-read the database when it will start again, to continue from the same point.
lib directory | `/var/lib/netdata` | Contains the alarm log and the Netdata instance guid.
home directory | `/var/cache/netdata` | Contains the db files for the collected metrics
plugins directory | `"/usr/libexec/netdata/plugins.d" "/etc/netdata/custom-plugins.d"` | The directory plugin programs are kept. This setting supports multiple directories, space separated. If any directory path contains spaces, enclose it in single or double quotes.
memory mode | `save` | When set to `save` Netdata will save its round robin database on exit and load it on startup. When set to `map` the cache files will be updated in real time (check `man mmap` - do not set this on systems with heavy load or slow disks - the disks will continuously sync the in-memory database of Netdata). When set to `dbengine` it behaves similarly to `map` but with much better disk and memory efficiency, however, with higher overhead. When set to `ram` the round robin database will be temporary and it will be lost when Netdata exits. `none` disables the database at this host. This also disables health monitoring (there cannot be health monitoring without a database).
host access prefix | | This is used in docker environments where /proc, /sys, etc have to be accessed via another path. You may also have to set SYS_PTRACE capability on the docker for this work. Check [issue 43](https://github.com/netdata/netdata/issues/43).
memory deduplication (ksm) | `yes` | When set to `yes`, Netdata will offer its in-memory round robin database to kernel same page merging (KSM) for deduplication. For more information check [Memory Deduplication - Kernel Same Page Merging - KSM](../../database/#ksm)
TZ environment variable | `:/etc/localtime` | Where to find the timezone
timezone | auto-detected | The timezone retrieved from the environment variable
debug flags | `0x0000000000000000` | Bitmap of debug options to enable. For more information check [Tracing Options](../#debugging).
debug log | `/var/log/netdata/debug.log` | The filename to save debug information. This file will not be created if debugging is not enabled. You can also set it to `syslog` to send the debug messages to syslog, or `none` to disable this log. For more information check [Tracing Options](../#debugging).
error log | `/var/log/netdata/error.log` | The filename to save error messages for Netdata daemon and all plugins (`stderr` is sent here for all Netdata programs, including the plugins). You can also set it to `syslog` to send the errors to syslog, or `none` to disable this log.
access log | `/var/log/netdata/access.log` | The filename to save the log of web clients accessing Netdata charts. You can also set it to `syslog` to send the access log to syslog, or `none` to disable this log.
errors flood protection period | `1200` | UNUSED - Length of period (in sec) during which the number of errors should not exceed the `errors to trigger flood protection`.
errors to trigger flood protection | `200` | UNUSED - Number of errors written to the log in `errors flood protection period` sec before flood protection is activated.
run as user | `netdata` | The user Netdata will run as.
pthread stack size | auto-detected |
cleanup obsolete charts after seconds | `3600` | See [monitoring ephemeral containers](../../collectors/cgroups.plugin/#monitoring-ephemeral-containers), also sets the timeout for cleaning up obsolete dimensions
gap when lost iterations above | `1` |
cleanup orphan hosts after seconds | `3600` | How long to wait until automatically removing from the DB a remote Netdata host (slave) that is no longer sending data.
delete obsolete charts files | `yes` | See [monitoring ephemeral containers](../../collectors/cgroups.plugin/#monitoring-ephemeral-containers), also affects the deletion of files for obsolete dimensions
delete orphan hosts files | `yes` | Set to `no` to disable non-responsive host removal.
enable zero metrics | `no` | Set to `yes` to show charts when all their metrics are zero.
### [plugins] section options

setting | default | info
:------:|:-------:|:----
PATH environment variable | `auto-detected` |
PYTHONPATH environment variable | | Used to set a custom python path
enable running new plugins | `yes` | When set to `yes`, Netdata will enable detected plugins, even if they are not configured explicitly. Setting this to `no` will only enable plugins explicitly configured in this file with a `yes`.
check for new plugins every | 60 | The time in seconds to check for new plugins in the plugins directory. This allows having other applications dynamically creating plugins for Netdata.
checks | `no` | This is a debugging plugin for the internal latency
### [health] section options
## Per plugin configuration

The configuration options for plugins appear in sections following the pattern `[plugin:NAME]`.
Most internal plugins will provide additional options. Check [Internal Plugins](../../collectors/) for more information.
Please note, that by default Netdata will enable monitoring metrics for disks, memory, and network only when they are not zero. If they are constantly zero they are ignored. Metrics that will start having values, after Netdata is started, will be detected and charts will be automatically added to the dashboard (a refresh of the dashboard is needed for them to appear though). Use `yes` instead of `auto` in plugin configuration sections to enable these charts permanently. You can also set the `enable zero metrics` option to `yes` in the `[global]` section which enables charts with zero metrics for all internal Netdata plugins.
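For example, a sketch that keeps zero-valued charts visible for all internal plugins (the option appears in the `[global]` table above):

```
[global]
    enable zero metrics = yes
```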
#### External plugins

---

Although `netdata` does all its calculations using `long double`, it stores all values using
a [custom-made 32-bit number](../libnetdata/storage_number/).
So, for each dimension of a chart, Netdata will need: `4 bytes for the value * the entries
of its history`. It will not store any other data for each value in the time series database.
Since all its values are stored in a time series with fixed step, the time to which each value
corresponds can be calculated at run time, using the position of the value in the round robin database.
If you need more history than fits in memory, use the **[Database Engine](engine/)**.
## Memory modes
Currently Netdata supports 6 memory modes:
1. `ram`, data are purely in memory. Data are never saved on disk. This mode uses `mmap()` and
supports [KSM](#ksm).
2. `save`, (the default) data are only in RAM while Netdata runs and are saved to / loaded from
disk on Netdata restart. It also uses `mmap()` and supports [KSM](#ksm).
3. `map`, data are in memory mapped files. This works like the swap. Keep in mind though, this
will have a constant write on your disk. When Netdata writes data on its memory, the Linux kernel
marks the related memory pages as dirty and automatically starts updating them on disk.
Unfortunately we cannot control how frequently this works. The Linux kernel uses exactly the
same algorithm it uses for its swap memory. Check below for additional information on running a
dedicated central Netdata server. This mode uses `mmap()` but does not support [KSM](#ksm).
4. `none`, without a database (collected metrics can only be streamed to another Netdata).
5. `alloc`, like `ram` but it uses `calloc()` and does not support [KSM](#ksm). This mode is the
fallback for all others except `none`.
6. `dbengine`, data are in database files. The amount of history entries is not fixed in this case,
but depends on the configured disk space and the effective compression ratio of the data stored.
For more details see [here](engine/).
You can select the memory mode by editing `netdata.conf` and setting:
```
[global]
    memory mode = save
cache directory = /var/cache/netdata
```
## Running Netdata in embedded devices
Embedded devices usually have very limited RAM resources available.
The default `update every = 1` and `history = 3600` give you an hour of data with per
second updates.
If you set `update every = 2` and `history = 1800`, you will still have an hour of data, but
collected once every 2 seconds. This will **cut in half** both CPU and RAM resources consumed
by Netdata. Of course experiment a bit. On very weak devices you might have to use
`update every = 5` and `history = 720` (still 1 hour of data, but 1/5 of the CPU and RAM resources).
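A sketch of those two settings together in `netdata.conf`:

```
[global]
    update every = 2
    history = 1800
```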
You can also disable [data collection plugins](../collectors) you don't need.
Disabling such plugins will also free both CPU and RAM resources.
## Running a dedicated central Netdata server
Netdata allows streaming data between Netdata nodes. This allows us to have a central Netdata
server that will maintain the entire database for all nodes, and will also run health checks/alarms
for all nodes.
For this central Netdata, memory size can be a problem. Fortunately, Netdata supports several
memory modes. **One interesting option** for this setup is `memory mode = map`.
### map
In this mode, the database of Netdata is stored in memory mapped files. Netdata continues to read
and write the database in memory, but the kernel automatically loads and saves memory pages from/to
disk.
**We suggest _not_ to use this mode on nodes that run other applications.** There will always be
dirty memory to be synced and this syncing process may influence the way other applications work.
This mode however is useful when we need a central Netdata server that would normally need huge
amounts of memory. Using memory mode `map` we can overcome all memory restrictions.
There are a few kernel options that provide finer control on the way this syncing works. But before
explaining them, a brief introduction of how netdata database works is needed.
explaining them, a brief introduction of how Netdata database works is needed.
For each chart, Netdata maps the following files:
1. `chart/main.db`, this is the file that maintains chart information. Every time data are collected
for a chart, this is updated.
2. `chart/dimension_name.db`, this is the file for each dimension. At its beginning there is a
header, followed by the round robin database where metrics are stored.
So, every time Netdata collects data, the following pages will become dirty:
1. the chart file
2. the header part of all dimension files
There are 2 more options to tweak:

1. `dirty_background_ratio`, by default `10`.
2. `dirty_ratio`, by default `20`.
These control the amount of memory that should be dirty for disk syncing to be triggered.
On dedicated Netdata servers, you can use: `80` and `90` respectively, so that all RAM is given
to Netdata.
With these settings, you can expect a little `iowait` spike once every 10 minutes and in case
of system crash, data on disk will be up to 10 minutes old.
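A sketch of applying such values at runtime (standard Linux `sysctl` knobs; add them to `/etc/sysctl.conf` to persist across reboots):

```sh
# start background syncing only when 80% of memory is dirty
sysctl -w vm.dirty_background_ratio=80
# block writers only when 90% of memory is dirty
sysctl -w vm.dirty_ratio=90
```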
**One interesting option** for this setup is `memory mode = dbengine`.
### dbengine
In this mode, the database of Netdata is stored in database files. The [Database Engine](engine/)
works like a traditional database. There is some amount of RAM dedicated to data caching and
indexing and the rest of the data reside compressed on disk. The number of history entries is not
fixed in this case, but depends on the configured disk space and the effective compression ratio
## KSM

Netdata offers all its round robin database to the kernel for deduplication.
In the past KSM has been criticized for consuming a lot of CPU resources.
Although this is true when KSM is used for deduplicating certain applications, it is not true with
Netdata, since the Netdata memory is written very infrequently (if you have 24 hours of metrics in
Netdata, each byte of the in-memory database will be updated just once per day).
KSM is a solution that will provide 60+% memory savings to Netdata.
### Enable KSM in kernel
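A minimal sketch of turning KSM on at runtime, using the standard `/sys/kernel/mm/ksm` interface (assuming the kernel was built with `CONFIG_KSM=y`):

```sh
# start the KSM kernel daemon
echo 1 > /sys/kernel/mm/ksm/run
# have it wake up once per second (low CPU overhead)
echo 1000 > /sys/kernel/mm/ksm/sleep_millisecs
```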

---

The DB engine stores metrics on disk in pairs of datafiles and journalfiles (e.g. `datafile-1-0000000001.ndf` with its `journalfile-1-0000000001.njf`).
They are located under their host's cache directory in the directory `./dbengine`
(e.g. for localhost the default location is `/var/cache/netdata/dbengine/*`). The higher
numbered filenames contain more recent metric data. The user can safely delete some pairs
of files when Netdata is stopped to manually free up some space.
*Users should* **back up** *their `./dbengine` folders if they consider this data to be important.*
## Configuration
There is one DB engine instance per Netdata host/node. That is, there is one `./dbengine` folder
per node, and all charts of `dbengine` memory mode in such a host share the same storage space
and DB engine instance memory state. You can select the memory mode for localhost by editing
`netdata.conf` and setting:
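A sketch (the `32` and `256` MiB values are assumptions mirroring the defaults of the two options described right below):

```
[global]
    memory mode = dbengine
    page cache size = 32
    dbengine disk space = 256
```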
The `page cache size` and `dbengine disk space` options determine the quota. Both numbers are in **MiB**. All DB engine instances will allocate the configured resources
separately.
The `page cache size` option determines the amount of RAM in **MiB** that is dedicated to caching
Netdata metric values themselves.
The `dbengine disk space` option determines the amount of disk space in **MiB** that is dedicated
to storing Netdata metric values and all related metadata describing them.
## Operation
When Netdata collects metrics, they are written to memory pages that are located in
the **Page Cache**.
When those pages fill up they are slowly compressed and flushed to disk.
It can take `4096 / 4 = 1024 seconds = 17 minutes`, for a chart dimension that is being collected
every 1 second, to fill a page. Pages can be cut short when we stop Netdata or the DB engine
instance so as to not lose the data. When we query the DB engine for data we trigger disk read
I/O requests that fill the Page Cache with the requested pages and potentially evict cold
(not recently used) pages.
## Memory requirements
Using memory mode `dbengine` we can overcome most memory restrictions and store a dataset that
is much larger than the available memory.
There are explicit memory requirements **per** DB engine **instance**, meaning **per** Netdata
**node** (e.g. localhost and streaming recipient nodes):
- `page cache size` must be at least `#dimensions-being-collected x 4096 x 2` bytes.
## File descriptor requirements

The Database Engine may need a considerable number of
file descriptors available per `dbengine` instance.
Netdata allocates 25% of the available file descriptors to its Database Engine instances. This means that only 25%
of the file descriptors that are available to the Netdata service are accessible by dbengine instances.
You should take that into account when configuring your service
or system-wide file descriptor limits. You can roughly estimate that the Netdata service needs 2048 file
descriptors for every 10 streaming slave hosts when streaming is configured to use `memory mode = dbengine`.
If for example one wants to allocate 65536 file descriptors to the Netdata service on a systemd system
one needs to override the Netdata service by running `sudo systemctl edit netdata` and creating a
file with contents:
```
[Service]
LimitNOFILE=65536
```

---

application|language|notes|
:---------:|:------:|:----|
retroshare|python<br/>v2 or v3|Connects to multiple retroshare servers (local or remote) to collect real-time performance metrics.<br/>&nbsp;<br/>Netdata plugin: [python.d.plugin](../collectors/python.d.plugin)<br/>plugin module: [retroshare.chart.py](../collectors/python.d.plugin/retroshare)<br/>configuration file: [python.d/retroshare.conf](../collectors/python.d.plugin/retroshare)|
---
application|language|notes|
:---------:|:------:|:----|
squid|python<br/>v2 or v3|Connects to multiple squid servers (local or remote) to collect real-time performance metrics.<br/>&nbsp;<br/>Netdata plugin: [python.d.plugin](../collectors/python.d.plugin)<br/>plugin module: [squid.chart.py](../collectors/python.d.plugin/squid)<br/>configuration file: [python.d/squid.conf](../collectors/python.d.plugin/squid)|
squid|BASH<br/>Shell Script|Connects to a squid server (local or remote) to collect real-time performance metrics.<br/><br/>DEPRECATED IN FAVOR OF THE PYTHON ONE. It is still supplied only as an example module to shell scripting plugins.<br/>&nbsp;<br/>Netdata plugin: [charts.d.plugin](../collectors/charts.d.plugin#chartsdplugin)<br/>plugin module: [squid.chart.sh](../collectors/charts.d.plugin/squid)<br/>configuration file: [charts.d/squid.conf](../collectors/charts.d.plugin/squid)|
application|language|notes|
:---------:|:------:|:----|
NFS Client|`C`|This is handled entirely by the Netdata daemon.<br/>&nbsp;<br/>Configuration: `netdata.conf`, section `[plugin:proc:/proc/net/rpc/nfs]`.
NFS Server|`C`|This is handled entirely by the `netdata` daemon.<br/>&nbsp;<br/>Configuration: `netdata.conf`, section `[plugin:proc:/proc/net/rpc/nfsd]`.
samba|python<br/>v2 or v3|Performance metrics of Samba SMB2 file sharing.<br/>&nbsp;<br/>documentation page: [python.d.plugin module samba](../collectors/python.d.plugin/samba)<br/>Netdata plugin: [python.d.plugin](../collectors/python.d.plugin)<br/>plugin module: [samba.chart.py](../collectors/python.d.plugin/samba)<br/>configuration file: [python.d/samba.conf](../collectors/python.d.plugin/samba)|
---
application|language|notes|
:---------:|:------:|:----|
CUPS|C|Charts metrics of printers, jobs and other cups destinations.<br/>&nbsp;<br/>Netdata plugin: [cups.plugin](../collectors/cups.plugin)
---
application|language|notes|
:---------:|:------:|:----|
xenstat|C|Collects host and domain statistics for XenServer or XCP-ng hypervisors.<br/>&nbsp;<br/>Netdata plugin: [xenstat.plugin](../collectors/xenstat.plugin)
---


If still Netdata does not receive the requests, something is blocking them. A firewall possibly.
</details>&nbsp;<br/>
When you install multiple Netdata servers, all your servers will appear at the node menu at the top left of the dashboard. For this to work, you have to manually access just once, the dashboard of each of your Netdata servers.
The node menu is more than just browser bookmarks. When switching Netdata servers from that menu, any settings of the current view are propagated to the other Netdata server:
- the current charts panning (drag the charts left or right),
- the current charts zooming (`SHIFT` + mouse wheel over a chart),

---

The software is known for its low impact on memory resources, high scalability, and its modular, event-driven architecture.
- Password-protect access to Netdata, until distributed authentication is implemented via the Netdata cloud Sign In mechanism.
- A proxy was necessary to encrypt the communication to Netdata, until v1.16.0, which provided TLS (HTTPS) support.
## Nginx configuration file

---

To ensure anonymity of the stored information, we have configured GTM's GA variables as follows:

|variable|value
|:-------|:----
|page|netdata-dashboard
|hostname|dashboard.my-netdata.io
|anonymizeIp|true
|title|Netdata dashboard
|campaignSource|{{machine_guid}}
|campaignMedium|web
|referrer|http://dashboard.my-netdata.io
|Page Path|/netdata-dashboard
|location|http://dashboard.my-netdata.io
In addition, the Netdata-generated unique machine guid is sent to GA via a custom dimension.
You can verify the effect of these settings by examining the GA `collect` request parameters.
The only thing that's impossible for us to prevent from being **sent** is the URL in the "Referrer" Header of the browser request to GA. However, the settings above ensure that all **stored** URLs and host names are anonymized.

---

Entire plugins can be turned off from the [netdata.conf [plugins]](../daemon/config/#plugins-section-options) section.
##### Show charts with zero metrics
By default, Netdata will enable monitoring metrics for disks, memory, and network only when they are not zero. If they are constantly zero they are ignored. Metrics that will start having values, after Netdata is started, will be detected and charts will be automatically added to the dashboard (a refresh of the dashboard is needed for them to appear though). Use `yes` instead of `auto` in plugin configuration sections to enable these charts permanently. You can also set the `enable zero metrics` option to `yes` in the `[global]` section which enables charts with zero metrics for all internal Netdata plugins.
### Modify alarms and notifications
@ -92,11 +92,11 @@ You have several options under the [netdata.conf [web]](../web/server/#access-li
##### Stop sending info to registry.my-netdata.io
You will need to configure the [registry] section in `netdata.conf`. First read the [registry documentation](../registry/). In it, are instructions on how to [run your own registry](../registry/#run-your-own-registry).
##### Change the IP address/port Netdata listens to
The settings are under `netdata.conf` [web]. Look at the [web server documentation](../web/server/#binding-netdata-to-multiple-ports) for more info.
### System resource usage
The page on [Netdata performance](Performance.md) has an excellent guide on how to reduce Netdata's resource usage.
##### Prevent Netdata from getting immediately killed when my server runs out of memory
You can change the Netdata [OOM score](../daemon/#oom-score) in `netdata.conf` [global].
### Other

---

```sh
iptables -t filter -A netdata -j DROP
iptables -t filter -D INPUT -p tcp --dport ${NETDATA_PORT} -m conntrack --ctstate NEW -j netdata 2>/dev/null
# add the input chain hook (again)
# to send all new Netdata connections to our filtering chain
iptables -t filter -I INPUT -p tcp --dport ${NETDATA_PORT} -m conntrack --ctstate NEW -j netdata
```
_script to allow access to Netdata only from a number of hosts_

---

The menu lists the Netdata servers you have visited. For example, when you jump from one Netdata server to another, the settings of your current view
(like the currently viewed charts, the current zoom and pan operations on the charts, etc.) are propagated to the new server, so that the new dashboard will come with exactly the
same view. The global registry keeps track of 4 entities:
1. **machines**: i.e. the Netdata installations (a random GUID generated by each Netdata the first time it starts; we call this **machine_guid**)
For each Netdata installation (each `machine_guid`) the registry keeps track of the different URLs it is accessed.
2. **persons**: i.e. the web browsers accessing the Netdata installations (a random GUID generated by the registry the first time it sees a new web browser; we call this **person_guid**)
For each person, the registry keeps track of the Netdata installations it has accessed and their URLs.
3. **URLs** of Netdata installations (as seen by the web browsers)
For each URL, the registry keeps the URL and nothing more. Each URL is linked to *persons* and *machines*. The only way to find a URL is to know its **machine_guid** or have a **person_guid** it is linked to it.
4. **accounts**: i.e. the information used to sign-in via one of the available sign-in methods. Depending on the method, this may include an email, an email and a profile picture.
For *persons*/*accounts* and *machines*, the registry keeps links to *URLs*, each link with 2 timestamps (first time seen, last time seen) and a counter (number of times it has been seen).
*machines*, *persons*, and timestamps are stored in the Netdata registry regardless of whether you sign in or not.
If sending this information is against your policies, you can [run your own registry](../registry/#run-your-own-registry).
Note that ND versions with the 'Sign in' feature of the ND Cloud do not use the global registry.

---

# Health monitoring
Each Netdata node runs an independent thread evaluating health monitoring checks.
This thread has lock free access to the database, so that it can operate as a watchdog.
Health checks (alarms) are attached to Netdata charts, allowing Netdata to automatically
activate an alarm as soon as a chart is created. This is very important for
Netdata, since many charts are dynamically created during runtime (for example, the
chart tracking network interface packet drops is automatically created on the first
packet dropped).

Alarms can use expressions combining the latest value of any number of metrics.
## Health configuration reference
Stock Netdata health configuration is in `/usr/lib/netdata/conf.d/health.d`.
These files can be overwritten by copying them and editing them in `/etc/netdata/health.d`
(run `/etc/netdata/edit-config` to edit them).
In `/etc/netdata/health.d` you can also put any number of files (in any number of sub-directories)
with a suffix `.conf` to have them processed by Netdata.
Health configuration can be reloaded at any time, without restarting Netdata.
Just send Netdata the SIGUSR2 signal, like this:
```sh
killall -USR2 netdata
```

The only difference is the label `alarm` or `template`.
Netdata supports overriding **templates** with **alarms**.
For example, when a template is defined for a set of charts, an alarm with exactly the
same name attached to the same chart the template matches, will have higher precedence
(i.e. Netdata will use the alarm on this chart and prevent the template from being applied
to it).
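A sketch of this precedence (the chart, context, and threshold values here are made up for illustration):

```
# applies to every chart with context disk.space ...
template: disk_full
      on: disk.space
    warn: $avail < 10

# ... except this one chart, where the same-named alarm wins
   alarm: disk_full
      on: disk_space._mnt_data
    warn: $avail < 5
```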
### The format
For example, an alarm or template can be limited to certain hosts with a `hosts` line:

    hosts: server1 server2 database* !redis3 redis*
The above says: use this alarm on all hosts named `server1`, `server2`, `database*`, and
all `redis*` except `redis3`.
This is useful when you centralize metrics from multiple hosts, to one Netdata server.
---
Everything is the same with [badges](../web/api/badges/). In short:
- `of DIMENSIONS` is optional and has to be the last parameter. Dimensions have to be separated
by `,` or `|`. The space characters found in dimensions will be kept as-is (a few dimensions
have spaces in their names). This accepts Netdata simple patterns and the `match-ids` and
`match-names` options affect the searches for dimensions.
The result of the lookup will be available as `$this` and `$NAME` in expressions.
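A sketch of a `lookup` line combining those parts (the dimension names are made up):

```
lookup: average -1m unaligned of user,system
```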
Format:

```
exec: SCRIPT
```
The default `SCRIPT` is Netdata's `alarm-notify.sh`, which supports all the notification
methods Netdata supports, including custom hooks.
---
For some alarms we need to compare two time-frames, to detect anomalies.
### Expressions
Netdata has an internal [infix expression parser](../libnetdata/eval).
This parses expressions and creates an internal structure that allows fast execution of them.
These operators are supported `+`, `-`, `*`, `/`, `<`, `<=`, `<>`, `!=`, `>`, `>=`, `&&`, `||`,
`!`, `AND`, `OR`, `NOT`. Boolean operators result in either `1` (true) or `0` (false).
The conditional evaluation operator `?` is supported too. Using this operator IF-THEN-ELSE conditional statements can be specified. The format is: `(condition) ? (true expression) : (false expression)`. So, Netdata will first evaluate the `condition` and based on the result will either evaluate `true expression` or `false expression`.
Example: `($this > 0) ? ($avail * 2) : ($used / 2)`.
Such expressions can also be nested (i.e. `true expression` and `false expression` can contain conditional evaluations).
Expressions also support the `abs()` function.
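For example (a sketch), `abs()` makes a threshold symmetric around zero:

```
warn: abs($this) > 100
```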
The `$status` variable can be used in expressions to choose different critical
or warning thresholds. This usage helps to avoid bogus messages resulting from small
variations in the value when it is varying regularly but staying close to the threshold
value, without needing to delay sending messages at all.
An example of such usage from the default CPU usage alarms bundled with Netdata is:
```
warn: $this > (($status >= $WARNING) ? (75) : (85))
```
Alarms can have the following statuses:
- `REMOVED` - the alarm has been deleted (this happens when a SIGUSR2 is sent to Netdata
to reload health configuration)
- `UNINITIALIZED` - the alarm is not initialized yet
The external script will be called for all status changes.
## Examples
Check the `health/health.d/` directory for all alarms shipped with Netdata.
Here are a few examples:
```
template: apache_last_collected_secs
crit: $this > (10 * $update_every)
```
The above checks that Netdata is able to collect data from apache. In detail:
```
template: apache_last_collected_secs
```

The `lookup` line will calculate the sum of all dropped packets in the last 10 minutes.
The `crit` line will issue a critical alarm if even a single packet has been dropped.
Note that the drops chart does not exist if a network interface has never dropped a single packet.
When Netdata detects a dropped packet, it will add the chart and it will automatically attach this
alarm to it.
## Troubleshooting
You can compile Netdata with [debugging](../daemon#debugging) and then set in `netdata.conf`:
```
[global]
    # set 'debug flags' to the D_HEALTH value from log.h
```

Important: this will generate a lot of output in debug.log.
You can find the context of charts by looking up the chart in either
`http://your.netdata:19999/netdata.conf` or `http://your.netdata:19999/api/v1/charts`.
You can find how Netdata interpreted the expressions by examining the alarm at `http://your.netdata:19999/api/v1/alarms?all`. For each expression, Netdata will return the expression as given in its config file, and the same expression with additional parentheses added to indicate the evaluation flow of the expression.
## Disabling health checks or silencing notifications at runtime

---

To get this working, you will need:
* The Amazon Web Services CLI tools. Most distributions provide these with the package name `awscli`.
* An actual home directory for the user you run Netdata as, instead of just using `/` as a home directory. Setup of this is distribution specific. `/var/lib/netdata` is the recommended directory (because the permissions will already be correct) if you are using a dedicated user (which is how most distributions work).
* An Amazon SNS topic to send notifications to with one or more subscribers. The [Getting Started](https://docs.aws.amazon.com/sns/latest/dg/GettingStarted.html) section of the Amazon SNS documentation covers the basics of how to set this up. Make note of the Topic ARN when you create the topic.
* While not mandatory, it is highly recommended to create a dedicated IAM user on your account for Netdata to send notifications. This user needs to have programmatic access, and should only allow access to SNS. If you're really paranoid, you can create one for each system or group of systems.
Once you have all the above, run the following command as the user Netdata runs under:
```sh
aws configure
```
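Then enable the method in `health_alarm_notify.conf`; a minimal sketch, assuming the usual `SEND_*`/`DEFAULT_RECIPIENT_*` variable naming and a placeholder topic ARN:

```
SEND_AWSSNS="YES"
DEFAULT_RECIPIENT_AWSSNS="arn:aws:sns:us-east-1:123456789012:netdata-alarms"
```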
Notes:
* Netdata's native email notification support is far better in almost all respects than its support through Amazon SNS. If you want email notifications, use the native support, not SNS.
* If you need to change the notification format for SNS notifications, you can do so by specifying the format in `AWSSNS_MESSAGE_FORMAT` in the configuration. This variable supports all the same variables you can use in custom notifications.
* While Amazon SNS supports sending differently formatted messages for different delivery methods, Netdata does not currently support this functionality.

---

Variables available to the custom_sender:
- `${alarm_id}` the unique id of the alarm that generated this event
- `${event_id}` the incremental id of the event, for this alarm id
- `${when}` the timestamp this event occurred
- `${name}` the name of the alarm, as given in Netdata health.d entries
- `${url_name}` same as `${name}` but URL encoded
- `${chart}` the name of the chart (type.id)
- `${url_chart}` same as `${chart}` but URL encoded
- `${old_value_string}` friendly old value (with units)
- `${image}` the URL of an image to represent the status of the alarm
- `${color}` a color in #AABBCC format for the alarm
- `${goto_url}` the URL the user can click to see the netdata dashboard
- `${goto_url}` the URL the user can click to see the Netdata dashboard
- `${calc_expression}` the expression evaluated to provide the value for the alarm
- `${calc_param_values}` the value of the variables in the evaluated expression
- `${total_warnings}` the total number of alarms in WARNING state on the host
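A minimal `custom_sender()` sketch using a few of these variables (the webhook endpoint is a placeholder; `${status_message}` and `${raised_for}` come from parts of the variable list not shown above):

```sh
custom_sender() {
    # build a short, human-readable message from the event variables
    local msg="${host} ${status_message}: ${alarm} ${raised_for}"

    # post it to a hypothetical webhook endpoint
    curl -fsS -X POST --data-urlencode "payload=${msg}" "https://example.com/notify" || return 1
    return 0
}
```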

---

You need:
1. The **incoming webhook URL** as given by Discord. Create a webhook by following the official [Discord documentation](https://support.discordapp.com/hc/en-us/articles/228383668-Intro-to-Webhooks). You can use the same on all your Netdata servers (or you can have multiple if you like - your decision).
2. One or more Discord channels to post the messages to.
Set them in `/etc/netdata/health_alarm_notify.conf` (to edit it on your system run `/etc/netdata/edit-config health_alarm_notify.conf`), like this:
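A sketch of those settings, assuming the usual variable naming of the other notification methods (webhook URL and channel are placeholders):

```
SEND_DISCORD="YES"
DISCORD_WEBHOOK_URL="https://discordapp.com/api/webhooks/XXXXXXXXXXXXX/XXXXXXXXXXXXX"
DEFAULT_RECIPIENT_DISCORD="alarms"
```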

---

You need a working `sendmail` command for email alerts to work. Almost all MTAs provide a `sendmail` interface.
Netdata sends all emails as user `netdata`, so make sure your `sendmail` works for local users.

Email notifications look like this:
You can configure recipients in `/etc/netdata/health_alarm_notify.conf`.
You can also configure per role recipients [in the same file, a few lines below](https://github.com/netdata/netdata/blob/99d44b7d0c4e006b11318a28ba4a7e7d3f9b3bae/conf.d/health_alarm_notify.conf#L313).
Changes to this file do not require a Netdata restart.
You can test your configuration by issuing the commands:
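For example, a quick test could look like this (the `alarm-notify.sh` path depends on your installation):

```sh
# become the user Netdata runs as
sudo su -s /bin/bash netdata

# send test alarm notifications to the configured recipients
/usr/libexec/netdata/plugins.d/alarm-notify.sh test
```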

---

You need:
The **incoming webhook URL** as given by flock.com. You can use the same on all your Netdata servers (or you can have multiple if you like - your decision).
Get them here: https://admin.flock.com/webhooks
Set them in `/etc/netdata/health_alarm_notify.conf` (to edit it on your system run `/etc/netdata/edit-config health_alarm_notify.conf`), like this:

```
SEND_FLOCK="YES"
# Login to flock.com and create an incoming webhook.
# You need only one for all your Netdata servers.
# Without it, Netdata cannot send flock notifications.
FLOCK_WEBHOOK_URL="https://api.flock.com/hooks/sendMessage/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
# if a role recipient is not configured, no notification will be sent
```

---

You need:
1. The `nc` utility. If you do not set the path, Netdata will search for it in your system `$PATH`.
Set the path for `nc` in `/etc/netdata/health_alarm_notify.conf` (to edit it on your system run `/etc/netdata/edit-config health_alarm_notify.conf`), like this:
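A sketch, assuming `nc` lives in `/usr/bin`:

```
# The `nc` binary to use.
# If empty, the system $PATH will be searched for it.
nc="/usr/bin/nc"
```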

---

```
SEND_KAVENEGAR="YES"
# copy your api key. You can generate new API Key too.
# You can find and select kevenegar sender number from this place.
# Without an API key, Netdata cannot send KAVENEGAR text messages.
KAVENEGAR_API_KEY=""
KAVENEGAR_SENDER=""
DEFAULT_RECIPIENT_KAVENEGAR=""
```

---

```
SEND_MESSAGEBIRD="YES"
# to get the API key, click on 'API' in the sidebar, then 'API Access (REST)'
# click 'Add access key' and fill in data (you want a live key to send SMS)
# Without an access key, Netdata cannot send Messagebird text messages.
MESSAGEBIRD_ACCESS_KEY="XXXXXXXX"
MESSAGEBIRD_NUMBER="XXXXXXX"
DEFAULT_RECIPIENT_MESSAGEBIRD="XXXXXXX"
```

---

[PagerDuty](https://www.pagerduty.com/company/) is the enterprise incident resolution service that integrates with ITOps and DevOps monitoring stacks to improve operational reliability and agility. From enriching and aggregating events to correlating them into incidents, PagerDuty streamlines the incident management process by reducing alert noise and resolution times.
Here is an example of a PagerDuty dashboard with Netdata notifications:
![PagerDuty dashboard with Netdata notifications](https://cloud.githubusercontent.com/assets/19278582/21233877/b466a08a-c2a5-11e6-8d66-ee6eed43818f.png)
To have Netdata send notifications to PagerDuty, you'll first need to set up a PagerDuty `Generic API` service and install the PagerDuty agent on the host running Netdata. See the following guide for details:
https://www.pagerduty.com/docs/guides/agent-install-guide/

---

```
SEND_PUSHBULLET="YES"
# not have a pushbullet account, the pushbullet service will send an email
# to that address instead
# Without an access token, Netdata cannot send pushbullet notifications.
PUSHBULLET_ACCESS_TOKEN="o.Sometokenhere"
DEFAULT_RECIPIENT_PUSHBULLET="admin1@example.com admin3@somemail.com"
```

---

pushover.net allows you to receive push notifications on your mobile phone. The service seems free for up to 7,500 messages per month.

Netdata will send warning messages with priority `0` and critical messages with priority `1`. pushover.net allows you to select do-not-disturb hours. The way this is configured, critical notifications will ring and vibrate your phone, even during the do-not-disturb-hours. All other notifications will be delivered silently.
You need:
1. APP TOKEN. You can use the same on all your Netdata servers.
2. USER TOKEN for each user you are going to send notifications to. This is the actual recipient of the notification.
The configuration is like above (slack messages).
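A sketch with placeholder tokens, assuming the usual variable naming:

```
SEND_PUSHOVER="YES"
PUSHOVER_APP_TOKEN="XXXXXXXXXXXXXXXXXXX"
DEFAULT_RECIPIENT_PUSHOVER="USERTOKEN1 USERTOKEN2"
```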

---

This is what you will get:
![Netdata on RocketChat](https://i.imgur.com/Zu4t3j3.png)
You need:
1. The **incoming webhook URL** as given by RocketChat. You can use the same on all your Netdata servers (or you can have multiple if you like - your decision).
2. One or more channels to post the messages to.
Get them here: https://rocket.chat/docs/administrator-guides/integrations/index.html#how-to-create-a-new-incoming-webhook
Set them in `/etc/netdata/health_alarm_notify.conf` (to edit it on your system run `/etc/netdata/edit-config health_alarm_notify.conf`), like this:

```
SEND_ROCKETCHAT="YES"
# Login to rocket.chat and create an incoming webhook. You need only one for all
# your Netdata servers (or you can have one for each of your Netdata).
# Without it, Netdata cannot send rocketchat notifications.
ROCKETCHAT_WEBHOOK_URL="<your_incoming_webhook_url>"
# if a role's recipients are not configured, a notification will be sent to
```

---

You need:
1. The **incoming webhook URL** as given by slack.com. You can use the same on all your Netdata servers (or you can have multiple if you like - your decision).
2. One or more channels to post the messages to.
To get a webhook that works on multiple channels, you will need to login to your slack.com workspace and create an incoming webhook using the [Incoming Webhooks App](https://slack.com/apps/A0F7XDUAZ-incoming-webhooks).
```
DEFAULT_RECIPIENT_SLACK="alarms"
```
You can define multiple recipients like this: `# #alarms systems @myuser`.
This example will send the alarm to:
- The recipient defined in slack for the webhook (not known to Netdata)
- The channel 'alarms'
- The channel 'systems'
- The user @myuser

---

The [SMS Server Tools 3](http://smstools3.kekekasvi.com/) is an SMS gateway software package which can send and receive short messages through GSM modems and mobile phones.

To have Netdata send notifications via SMS Server Tools 3, you'll first need to [install](http://smstools3.kekekasvi.com/index.php?p=compiling) and [configure](http://smstools3.kekekasvi.com/index.php?p=configure) smsd.
Ensure that the user `netdata` can execute `sendsms`. Any user executing `sendsms` needs to:

---

`prefix` defines what the log messages are prefixed with. By default, all lines are prefixed with 'netdata'.
The `facility` and `level` are the standard syslog facility and level options, for more info on them see your local `logger` and `syslog` documentation. By default, Netdata will log to the `local6` facility, with a log level dependent on the type of message (`crit` for CRITICAL, `warning` for WARNING, and `info` for everything else).
You can configure sending directly to remote log servers by specifying a host (and optionally a port). However, this has a somewhat high overhead, so it is much preferred to use your local syslog daemon to handle the forwarding of messages to remote systems (pretty much all of them allow at least simple forwarding, and most of the really popular ones support complex queueing and routing of messages to remote log servers).
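A hypothetical recipient combining these pieces (facility, level, host, port and prefix here are all placeholders, and the exact target syntax is an assumption):

```
SEND_SYSLOG="YES"
DEFAULT_RECIPIENT_SYSLOG="local6.info@loghost:514/netdata"
```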

---

With Telegram, you can send messages, photos, videos and files of any type (doc, zip, mp3, etc), as well as create groups for up to 30,000 people or channels for broadcasting to unlimited audiences. You can write to your phone contacts and find people by their usernames. As a result, Telegram is like SMS and email combined — and can take care of all your personal or business messaging needs.
Netdata will send warning messages without vibration.
You need:

---

```
SEND_TWILIO="YES"
# Then just set the recipients' phone numbers.
# The trial account is only allowed to use the number specified when set up.
# Without an account sid and token, Netdata cannot send Twilio text messages.
TWILIO_ACCOUNT_SID="xxxxxxxxx"
TWILIO_ACCOUNT_TOKEN="xxxxxxxxxx"
TWILIO_NUMBER="xxxxxxxxxxx"
```

---

# Dashboard
The Netdata dashboard shows HTML notifications when it is open.
Such web notifications look like this:
![image](https://cloud.githubusercontent.com/assets/2662304/18407279/82bac6a6-7714-11e6-847e-c2e84eeacbfb.png)

---

# libnetdata
`libnetdata` is a collection of library code that is used by all Netdata `C` programs.

---

# Adaptive Re-sortable List (ARL)
This library allows Netdata to read a series of `name - value` pairs
in the **fastest possible way**.
ARLs are used all over Netdata, as they are the most CPU-efficient way to process `/proc` files. They are used to process both vertical (CSV-like) and horizontal (one pair per line) `name - value` pairs.
Compared to unoptimized code (test No 1: 4.6sec):
- before ARL, Netdata was using test No **7**, with hashing and a custom `str2ull()`, to achieve 602ms.
- the current ARL implementation is test No **9** that needs only 157ms (29 times faster vs unoptimized code, about 4 times faster vs optimized code).
[Check the source code of this test](../../tests/profile/benchmark-value-pairs.c).

---

# Netdata ini config files
Configuration files `netdata.conf` and `stream.conf` are Netdata ini files.
## Motivation
So, we did this:
1. No configuration is required to run Netdata
2. There are plenty of options to tweak
3. There is minimal documentation (or none at all)
file, the default is used. The lookup is made using B-Trees and hashes. Settings can be `my super duper setting that once set to yes, will turn the world upside down = no` - so goodbye to most of the documentation involved.
Next, Netdata can generate a valid configuration for the user to edit.
No need to remember anything or copy and paste settings. Just get the
configuration from the server (`/netdata.conf` on your Netdata server),
edit it and save it.
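For example, a sketch of fetching it (default port assumed):

```sh
# download the live, fully populated configuration of a running Netdata
curl -o netdata.conf http://localhost:19999/netdata.conf
```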
Last, what about options you believe you have set, but you misspelled?

---

To achieve this kind of performance, the library tries to work in batches so that the code
and the data are inside the processor's caches.
This library is extensively used in Netdata and its plugins.

---

## Netdata simple patterns
Unix prefers regular expressions. But they are just too hard, too cryptic
to use, write and understand.
So, Netdata supports **simple patterns**.
Simple patterns are a space-separated list of words that can have `*` as a wildcard. Each word may use any number of `*`.
Simple patterns are quite powerful: `pattern = *foobar* !foo* !*bar *` matches everything containing `foobar`, except strings that start with `foo` or end with `bar`.
You can use the Netdata command line to check simple patterns,
like this:
```sh
RESULT: NOT MATCHED - pattern '*foobar* !foo* !*bar *' does not match 'hello world bar'
RESULT: MATCHED - pattern '*foobar* !foo* !*bar *' matches 'hello world foobar'
```
Netdata stops processing at the first positive or negative match (left to right). If it is not matched by either positive or negative patterns, it is denied at the end.

---

# Netdata storage number
Although `netdata` does all its calculations using `long double`, it stores all values using
a **custom-made 32-bit number**.

---

# Install Netdata with Docker
> :warning: As of Sep 9th, 2018 we ship [new docker builds](https://github.com/netdata/netdata/pull/3995), running Netdata in Docker with an [ENTRYPOINT](https://docs.docker.com/engine/reference/builder/#entrypoint) directive, not a COMMAND directive. Please adapt your execution scripts accordingly. More information about ENTRYPOINT vs COMMAND is presented by goinbigdata [here](http://goinbigdata.com/docker-run-vs-cmd-vs-entrypoint/) and by the Docker docs [here](https://docs.docker.com/engine/reference/builder/#understand-how-cmd-and-entrypoint-interact).
>
> Also, the `latest` is now based on alpine, so **`alpine` is not updated any more** and `armv7hf` is now replaced with `armhf` (to comply with https://github.com/multiarch naming), so **`armv7hf` is not updated** either.
## Limitations
Running Netdata in a container to monitor the whole host can limit its capabilities. Some data is not accessible or is not as detailed as when running Netdata directly on the host.
## Package scrambling in runtime (x86_64 only)
By default, on the x86_64 architecture, our Docker images use Polyverse's Polymorphic Linux package scrambling. For increased security you can enable rescrambling of packages during runtime. To do this, set the environment variable `RESCRAMBLE=true` when starting the Netdata Docker container.

For more information, see the [Polyverse site](https://polyverse.io/how-it-works/).
## Run Netdata with the docker command
Quickly start Netdata with the `docker` command. Netdata is then available at http://host:19999
This is good for an internal network or to quickly analyse a host.
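A minimal sketch of such a command (the mounts give Netdata visibility into the host; the `/etc/passwd` and `/etc/group` mounts are the ones the apps.plugin note below refers to):

```sh
docker run -d --name=netdata \
  -p 19999:19999 \
  -v /etc/passwd:/host/etc/passwd:ro \
  -v /etc/group:/host/etc/group:ro \
  -v /proc:/host/proc:ro \
  -v /sys:/host/sys:ro \
  --cap-add SYS_PTRACE \
  --security-opt apparmor=unconfined \
  netdata/netdata
```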
If you don't want to use the apps.plugin functionality, you can remove the mounts of `/etc/passwd` and `/etc/group`.
### Docker container names resolution
There are a few options for resolving container names within Netdata. Some methods of doing so will allow root access to your machine from within the container. Please read the following carefully.
#### Docker socket proxy (safest option)
Deploy a Docker socket proxy that accepts and filters out requests, using something like [HAProxy](https://docs.netdata.cloud/docs/running-behind-haproxy/), so that it restricts connections to read-only access to the CONTAINERS endpoint.
The reason it's safer to expose the socket to the proxy is because Netdata has a TCP port exposed outside the Docker network. Access to the proxy container is limited to only within the network.
#### Giving group access to the Docker socket (less safe)
**Important Note**: You should seriously consider the necessity of activating this option, as it grants the `netdata` user access to the privileged socket connection of the Docker service, and therefore to your whole machine.
If you want to have your container names resolved by Netdata, make the `netdata` user part of the group that owns the socket.
This group number can be found by running the following:

```sh
grep docker /etc/group | cut -d ':' -f 3
```
#### Running as root (unsafe)
**Important Note**: You should seriously consider the necessity of activating this option, as it grants the `netdata` user access to the privileged socket connection of the Docker service, and therefore to your whole machine.
```yaml
version: '3'
services:
```
### Pass command line options to Netdata
Since we use an [ENTRYPOINT](https://docs.docker.com/engine/reference/builder/#entrypoint) directive, you can provide [Netdata daemon command line options](https://docs.netdata.cloud/daemon/#command-line-options) such as the IP address Netdata will be running on, using the [command instruction](https://docs.docker.com/engine/reference/builder/#cmd).
## Install Netdata using Docker Compose with an SSL/TLS-enabled HTTP proxy
For a permanent installation on a public server, you should [secure your Netdata instance](../../docs/netdata-security.md). This section contains an example of how to install Netdata with an SSL reverse proxy and basic authentication.
You can use the following docker-compose.yml and Caddyfile files to run Netdata with Docker. Replace the domains and email address for [Letsencrypt](https://letsencrypt.org/) before starting.
### Prerequisites
* [Docker](https://docs.docker.com/install/#server)
### docker-compose.yml
After setting up the Caddyfile, run `docker-compose up -d` to have a fully functioning Netdata setup behind an HTTP reverse proxy.
```yaml
version: '3'
```

You can restrict access by following the official Caddy guide.
## Publish a test image to your own repository
At Netdata, we provide multiple ways of testing your Docker images using your own repositories.
You may either use the command line tools available or take advantage of our Travis CI infrastructure.
### Using tools manually from the command line
Then we can run `helm install [path to our helmchart clone]`.

If we make changes to the code, we execute the same `build-test.sh` command, followed by `helm upgrade [name] [path to our helmchart clone]`.
### Inside Netdata organization, using Travis CI
To enable Travis CI integration on your own repositories (Docker and GitHub), you need to be part of the Netdata organization.

Once you have contacted the Netdata owners to set you up on GitHub and Travis, execute the following steps:
- Preparation
- Have Netdata forked on your personal GitHub account
- Get a GITHUB token: Go to GitHub settings -> Developer Settings -> Personal access tokens, and generate a new token with full access to repo_hook, read-only access to admin:org, public_repo, repo_deployment, repo:status and user:email settings enabled. This will be your GITHUB_TOKEN, described later in the instructions, so keep it somewhere safe until it is needed.
- Contact the Netdata team and ask for permissions on https://scan.coverity.com should you require Travis to be able to push your forked code to Coverity for analysis and report. Once you are set up, you will have the email you used in Coverity and a token from them. These will be your COVERITY_SCAN_SUBMIT_EMAIL and COVERITY_SCAN_TOKEN that we will refer to later.
- Have a valid Docker Hub account; the credentials from this account will be the DOCKER_USERNAME and DOCKER_PWD mentioned later
- Setting up Travis CI for your own fork (detailed instructions are provided by the Travis team [here](https://docs.travis-ci.com/user/tutorial/))
- Log in to Travis with your own GitHub credentials (there is OAuth access)
- Go to your profile settings, under the [repositories](https://travis-ci.com/account/repositories) section, and set up your Netdata fork to be built by Travis
- Once the repository has been set up, go to the repository settings within Travis (usually under https://travis-ci.com/NETDATA_DEVELOPER/netdata/settings, where "NETDATA_DEVELOPER" is your GitHub handle) and select your desired settings.
- While in Travis settings, under Netdata repository settings in the Environment Variables section, you need to add the following:
- DOCKER_USERNAME and DOCKER_PWD variables, so that Travis can log in to your Docker Hub account and publish Docker images there.
- The REPOSITORY variable, set to "NETDATA_DEVELOPER/netdata", where NETDATA_DEVELOPER is your GitHub handle again.
- The GITHUB_TOKEN variable, with the token generated in the preparation step, for Travis workflows to function properly.

---

To enable the Netdata service:

```
service netdata config set enable=true
```
To start the Netdata service:
```
service netdata start
```

---

# Uninstalling Netdata
Our self-contained uninstaller is able to remove Netdata installations created with the shell installer. It doesn't need any other Netdata repository files to run. All it needs is an `.environment` file, which is created during installation (with the shell installer) and put in `${NETDATA_USER_CONFIG_DIR}/.environment` (by default `/etc/netdata/.environment`). That file contains some parameters which are passed to our installer and which are needed during the uninstallation process. Mainly, two parameters are needed:
```
NETDATA_PREFIX
NETDATA_ADDED_TO_GROUPS
```
A workflow for uninstallation looks like this:
1. Find your `.environment` file, which is usually `/etc/netdata/.environment` in a default installation.
2. If you cannot find that file and would like to uninstall Netdata, then create a new file with the following content:
```
NETDATA_PREFIX="<installation prefix>" # put what you used as a parameter to the shell installer's `--install` flag; otherwise it should be empty
NETDATA_ADDED_TO_GROUPS="<additional groups>" # additional groups for the user running the Netdata process
```
3. Run `netdata-uninstaller.sh` as follows
```
./netdata-uninstaller.sh --yes --env <environment_file>
```
The default `environment_file` is `/etc/netdata/.environment`.
Note: This uninstallation method assumes a previous installation with `netdata-installer.sh` or the kickstart script. Using it when Netdata was installed by a package manager may work, but can also cause unexpected results.

---

# Updating Netdata after its installation
![image8](https://cloud.githubusercontent.com/assets/2662304/14253735/536f4580-fa95-11e5-9f7b-99112b31a5d7.gif)
We suggest you keep your Netdata updated. We are actively developing it, and you should always update to the latest version.
The update procedure depends on how you installed it:
### Manual update to get the latest git commit
Netdata versions older than `v1.12.0-rc2-52` had a `netdata-updater.sh` script in the root directory of the source code, which has now been deprecated. The manual process that works for all versions to get the latest commit in git is to use the `netdata-installer.sh`. The installer preserves your custom configuration and updates the information of the installation in the `.environment` file under the user configuration directory.
```sh
# go to the git downloaded directory
cd /path/to/git/downloaded/netdata
# update your local copy
git pull
# run the Netdata installer
sudo ./netdata-installer.sh
```
_Netdata will be restarted with the new version._
Keep in mind that Netdata may now have new features, or certain old features may now behave differently. So pay some attention to it after updating.
### Manual update to get the latest nightly build
```sh
bash <(curl -Ss https://my-netdata.io/kickstart.sh) --no-updates
```
_Please, consider the risks of running an auto-update. Something can always go wrong. Keep an eye on your installation, and run a manual update if something ever fails._
Calling the `netdata-installer.sh` with the `--auto-update` or `-u` option will create the `netdata-updater` script under
either `/etc/cron.daily/`, or `/etc/periodic/daily/`. Whenever the `netdata-updater` is executed, it checks if a newer nightly build is available and then handles the download, installation and Netdata restart.
Note that after Jan 2019, the `kickstart.sh` one-liner `bash <(curl -Ss https://my-netdata.io/kickstart.sh)` calls the `netdata-installer.sh` with the auto-update option. So if you just run the one-liner without options once, your Netdata will be kept auto-updated.
## You downloaded a binary package
If you installed it from a binary package, the best way is to **obtain a newer copy** from the source you got it from in the first place. This includes the static binary installation via `kickstart-base64.sh`, which would need to be executed again.
If a newer version of Netdata is not available from the source you got it from, we suggest uninstalling the version you have and following the [installation](README.md) instructions to install a fresh version of Netdata.

---

# Package Maintainers
This page tracks the package maintainers for Netdata, for various operating systems and versions.
> Feel free to update it, so that it reflects the current status.

---

# Netdata static binary build
To build the static binary 64-bit distribution package, run:
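```bash
# a sketch, mirroring the debug build command shown later in this document
$ cd /path/to/netdata.git
$ ./packaging/makeself/build-x86_64-static.sh
```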
The program will:
1. set up a new Docker container with Alpine Linux
2. install the required Alpine packages (the build environment, needed libraries, etc)
3. download and compile third party apps that are packaged with Netdata (`bash`, `curl`, etc)
4. compile Netdata
Once finished, a file named `netdata-vX.X.X-gGITHASH-x86_64-DATE-TIME.run` will be created in the current directory. This is the Netdata binary package that can be run to install Netdata on any other computer.
---
## building binaries with debug info
To build Netdata binaries with debugging / tracing information in them, use:
```bash
$ cd /path/to/netdata.git
$ ./packaging/makeself/build-x86_64-static.sh debug
```
These binaries are not optimized (they are a bit slower), they have certain features disabled (like log flood protection), other features enabled (like `debug flags`), and they are not stripped (the binary files are bigger, since they now include source code tracing information).
#### debugging Netdata binaries
Once you have installed a binary package with debugging info, you will need to install `valgrind` and run this command to start Netdata:
```bash
PATH="/opt/netdata/bin:${PATH}" valgrind --undef-value-errors=no /opt/netdata/bin/srv/netdata -D
```
The above command will run Netdata under `valgrind`. While Netdata runs under `valgrind`, it will be 10x slower and use a lot more memory.
If Netdata crashes, `valgrind` will print a stack trace of the issue. Open a GitHub issue to let us know.
To stop Netdata while it runs under `valgrind`, press Control-C on the console.
> If you omit the parameter `--undef-value-errors=no` to valgrind, you will get hundreds of errors about conditional jumps that depend on uninitialized values. This is normal. Valgrind has heuristics to prevent it from printing such errors for system libraries, but for the static Netdata binary, all the required libraries are built into Netdata. So, valgrind cannot apply its heuristics and prints them.

---

# Registry
The Netdata registry implements the node menu on the top left corner of the Netdata dashboards and enables the Netdata cloud features, such as the node view.
The node menu lists the Netdata servers you have visited. The node view offers a lot of additional features on top of the menu,
[with many more to come](https://blog.netdata.cloud/posts/netdata-cloud-announcement/).
To enable the global Netdata registry and the cloud features, you need to Sign In to Netdata cloud. By signing in, you opt in to let the registry receive and store
the information described [here](#what-data-does-the-registry-store).
You can still get the node menu, but not the cloud features, if you [run your own registry](#run-your-own-registry).
Netdata provides distributed monitoring.
Traditional monitoring solutions centralize all the data to provide unified dashboards across all servers. Before Netdata, this was the standard practice. However, it has a few issues:
1. due to the resources required, the number of metrics collected is limited.
1. for the same reason, the data collection frequency is not that high, at best it will be once every 10 or 15 seconds, at worst every 5 or 10 mins.
Netdata follows a different approach:
1. data collection happens per second
1. thousands of metrics per server are collected
1. data do not leave the server where they are collected
1. Netdata servers do not talk to each other
1. your browser connects all the Netdata servers
Using Netdata, your monitoring infrastructure is embedded on each server, significantly limiting the need for additional resources. Netdata is blazingly fast, very resource efficient, and utilizes server resources that already exist and are spare (on each server). This allows **scaling out** the monitoring infrastructure.
However, the Netdata approach introduces a few new issues that need to be addressed, one being **the list of Netdata servers we have installed**, i.e. the URLs our Netdata servers are listening on.
To solve this, Netdata utilizes a **central registry**. This registry, together with certain browser features, allows Netdata to provide unified cross-server dashboards.
For example, when you jump from server to server using the node menu, several session settings (like the currently viewed charts, the current zoom and pan operations on the charts, etc.) are propagated to the new server, so that the new dashboard will come with exactly the same view.
Netdata cloud has a roadmap to [offer many more features](https://blog.netdata.cloud/posts/netdata-cloud-announcement/) over and above the simple node menu.
## What data does the registry store?
The registry keeps track of 4 entities:
1. **machines**: i.e. the Netdata installations (a random GUID generated by each Netdata the first time it starts; we call this **machine_guid**)
For each Netdata installation (each `machine_guid`) the registry keeps track of the different URLs it is accessed.
2. **persons**: i.e. the web browsers accessing the Netdata installations (a random GUID generated by the registry the first time it sees a new web browser; we call this **person_guid**)
For each person, the registry keeps track of the Netdata installations it has accessed and their URLs.
3. **URLs** of Netdata installations (as seen by the web browsers)
For each URL, the registry keeps the URL and nothing more. Each URL is linked to *persons* and *machines*. The only way to find a URL is to know its **machine_guid** or have a **person_guid** it is linked to.

4. **accounts**: i.e. the information used to sign-in via one of the available sign-in methods. Depending on the method, this may include an email, or an email and a profile picture.
For *persons*/*accounts* and *machines*, the registry keeps links to *URLs*, each link with 2 timestamps (first time seen, last time seen) and a counter (number of times it has been seen).
*machines*, *persons* and timestamps are stored in the Netdata registry regardless of whether you sign in or not.
## Who talks to the registry?
Your web browser **only**! If sending this information is against your policies, you can [run your own registry](#run-your-own-registry).
Your Netdata servers do not talk to the registry. This is a UML diagram of its operation:
![registry](https://cloud.githubusercontent.com/assets/2662304/19448565/11a70632-94ab-11e6-9d80-f410b4acb797.png)
`https://registry.my-netdata.io`, which is currently served by `https://london.my-netdata.io`. This registry listens to both HTTP and HTTPS requests but the default is HTTPS.
`https://netdata.cloud` is the additional registry endpoint, that enables [the cloud features](https://blog.netdata.cloud/posts/netdata-cloud-announcement/). It only accepts HTTPS.
### Can this registry handle the global load of Netdata installations?
Yeap! The registry can handle 50,000 - 100,000 requests **per second per core** (depending on the type of CPU, the computer's memory bandwidth, etc). 50,000 is on a J1900 (Celeron 2GHz).
We believe it can do it...
## Run your own registry
**Every Netdata can be a registry**. Just pick one and configure it.
**To turn any Netdata into a registry**, edit `/etc/netdata/netdata.conf` and set:
```
[registry]
    enabled = yes
registry to announce = http://your.registry:19999
```
Restart your Netdata to activate it.
Then, you need to tell **all your other Netdata servers to advertise your registry**, instead of the default. To do this, on each of your Netdata servers, edit `/etc/netdata/netdata.conf` and set:
```
[registry]
    enabled = no
registry to announce = http://your.registry:19999
```
Note that we have not enabled the registry on the other servers. Only one Netdata (the registry) needs `[registry].enabled = yes`.
This is it. You have your registry now.
You may also want to give your server different names under the node menu (i.e. to have them sorted / grouped). You can change its registry name, by setting on each Netdata server:
```
[registry]
    registry hostname = Group1 - Master DB
```

So this server will appear in the node menu as `Group1 - Master DB`.
### Limiting access to the registry
Netdata v1.9+ supports limiting access to the registry from given IPs, like this:
```
[registry]
allow from = *
```
`allow from` settings are [Netdata simple patterns](../libnetdata/simple_pattern/): string matches that use `*` as wildcard (any number of times) and a `!` prefix for a negative match. So: `allow from = !10.1.2.3 10.*` will allow all IPs in `10.*` except `10.1.2.3`. The order is important: left to right, the first positive or negative match is used.
Keep in mind that connections to Netdata API ports are filtered by `[web].allow connections from`. So, IPs allowed by `[registry].allow from` should also be allowed by `[web].allow connection from`.
### Where is the registry database stored?
There can be up to 2 files:

- `registry-log.db`, the transaction log
- `registry.db`, the database
Every `[registry].registry save db every new entries` entries in `registry-log.db`, Netdata will save its database to `registry.db` and empty `registry-log.db`.
Both files are machine readable text files.
## The future
The registry opens a whole world of new possibilities for Netdata. Check here what we think: https://github.com/netdata/netdata/issues/416
## Troubleshooting the registry
The registry URL should be set to the URL of a Netdata dashboard. This server has to have `[registry].enabled = yes`. So, accessing the registry URL directly with your web browser should present the dashboard of the Netdata operating the registry.
To use the registry, your web browser needs to support **third party cookies**, since the cookies are set by the registry while you are browsing the dashboard of another Netdata server. The first time the registry sees a new web browser, it tries to figure out whether the browser has cookies enabled or not. It does this by setting a cookie and redirecting the browser back to itself, hoping that it will receive the cookie. If it does not receive the cookie, the registry will keep redirecting your web browser back to itself, which after a few redirects will fail with an error like this:
```
ERROR 409: Cannot ACCESS netdata registry: https://registry.my-netdata.io responded with: {"status":"redirect","registry":"https://registry.my-netdata.io"}
```
View file
@ -1,11 +1,10 @@
# Streaming and replication
Each netdata is able to replicate/mirror its database to another netdata, by streaming collected
Each Netdata is able to replicate/mirror its database to another Netdata, by streaming collected
metrics, in real-time to it. This is quite different to [data archiving to third party time-series
databases](../backends).
When a netdata streams metrics to another netdata, the receiving one is able to perform everything
a netdata performs:
When Netdata streams metrics to another Netdata, the receiving one is able to perform everything a Netdata instance is capable of:
- visualize them with a dashboard
- run health checks that trigger alarms and send alarm notifications
@ -13,25 +12,25 @@ a netdata performs:
## Supported configurations
### netdata without a database or web API (headless collector)
### Netdata without a database or web API (headless collector)
Local netdata (`slave`), **without any database or alarms**, collects metrics and sends them to
another netdata (`master`).
Local Netdata (`slave`), **without any database or alarms**, collects metrics and sends them to
another Netdata (`master`).
The node menu shows a list of all "databases streamed to" the master. Clicking one of those links allows the user to view the full dashboard of the `slave` netdata. The URL has the form http://master-host:master-port/host/slave-host/.
The node menu shows a list of all "databases streamed to" the master. Clicking one of those links allows the user to view the full dashboard of the `slave` Netdata. The URL has the form http://master-host:master-port/host/slave-host/.
Alarms for the `slave` are served by the `master`.
In this mode the `slave` is just a plain data collector. It spawns all external plugins, but instead
of maintaining a local database and accepting dashboard requests, it streams all metrics to the
`master`. The memory footprint is reduced significantly, to between 6 MiB and 40 MiB, depending on the enabled plugins. To reduce the memory usage as much as possible, refer to [running netdata in embedded devices](../docs/Performance.md#running-netdata-in-embedded-devices).
`master`. The memory footprint is reduced significantly, to between 6 MiB and 40 MiB, depending on the enabled plugins. To reduce the memory usage as much as possible, refer to [running Netdata in embedded devices](../docs/Performance.md#running-netdata-in-embedded-devices).
The same `master` can collect data for any number of `slaves`.
### database replication
Local netdata (`slave`), **with a local database (and possibly alarms)**, collects metrics and
sends them to another netdata (`master`).
Local Netdata (`slave`), **with a local database (and possibly alarms)**, collects metrics and
sends them to another Netdata (`master`).
The user can use all the functions **at both** http://slave-ip:slave-port/ and
http://master-host:master-port/host/slave-host/.
@ -43,15 +42,15 @@ each can have different alarms configurations or have alarms disabled).
Note that custom chart names configured on the `slave` should be in the form `type.name` to work correctly. The `master` will truncate the `type` part and substitute the original chart `type` to store the name in the database.
### netdata proxies
### Netdata proxies
Local netdata (`slave`), with or without a database, collects metrics and sends them to another
netdata (`proxy`), which may or may not maintain a database, which forwards them to another
netdata (`master`).
Local Netdata (`slave`), with or without a database, collects metrics and sends them to another
Netdata (`proxy`), which may or may not maintain a database, which forwards them to another
Netdata (`master`).
Alarms for the slave can be triggered by any of the involved hosts that maintains a database.
Any number of daisy chaining netdata servers are supported, each with or without a database and
Any number of daisy-chained Netdata servers are supported, each with or without a database and
with or without alarms for the `slave` metrics.
### mix and match with backends
@ -61,17 +60,17 @@ This allows quite complex setups.
Example:
1. netdata `A`, `B` do not maintain a database and stream metrics to netdata `C`(live streaming functionality, i.e. this PR)
2. netdata `C` maintains a database for `A`, `B`, `C` and archives all metrics to `graphite` with 10 second detail (backends functionality)
3. netdata `C` also streams data for `A`, `B`, `C` to netdata `D`, which also collects data from `E`, `F` and `G` from another DMZ (live streaming functionality, i.e. this PR)
4. netdata `D` is just a proxy, without a database, that streams all data to a remote site at netdata `H`
5. netdata `H` maintains a database for `A`, `B`, `C`, `D`, `E`, `F`, `G`, `H` and sends all data to `opentsdb` with 5 seconds detail (backends functionality)
1. Netdata `A`, `B` do not maintain a database and stream metrics to Netdata `C` (live streaming functionality, i.e. this PR)
2. Netdata `C` maintains a database for `A`, `B`, `C` and archives all metrics to `graphite` with 10 second detail (backends functionality)
3. Netdata `C` also streams data for `A`, `B`, `C` to Netdata `D`, which also collects data from `E`, `F` and `G` from another DMZ (live streaming functionality, i.e. this PR)
4. Netdata `D` is just a proxy, without a database, that streams all data to a remote site at Netdata `H`
5. Netdata `H` maintains a database for `A`, `B`, `C`, `D`, `E`, `F`, `G`, `H` and sends all data to `opentsdb` with 5 seconds detail (backends functionality)
6. alarms are triggered by `H` for all hosts
7. users can use all the netdata that maintain a database to view metrics (i.e. at `H` all hosts can be viewed).
7. users can use all the Netdata instances that maintain a database to view metrics (i.e. at `H` all hosts can be viewed).
## Configuration
These are options that affect the operation of netdata in this area:
These are options that affect the operation of Netdata in this area:
```
[global]
@ -87,7 +86,7 @@ monitoring (there cannot be health monitoring without a database).
accept a streaming request every seconds = 0
```
`[web].mode = none` disables the API (netdata will not listen to any ports).
`[web].mode = none` disables the API (Netdata will not listen to any ports).
This also disables the registry (there cannot be a registry without an API).
`accept a streaming request every seconds` can be used to limit how often a master Netdata server will accept streaming requests from its slaves. A value of 0 sets no limit, while 1 means at most one request per second. If this is set, you may see error log entries like "... too busy to accept new streaming request. Will be allowed in X secs".
@ -107,18 +106,18 @@ this host).
A new file is introduced: [stream.conf](stream.conf) (to edit it on your system run
`/etc/netdata/edit-config stream.conf`). This file holds streaming configuration for both the
sending and the receiving netdata.
sending and the receiving Netdata.
API keys are used to authorize the communication of a pair of sending-receiving netdata.
Once the communication is authorized, the sending netdata can push metrics for any number of hosts.
API keys are used to authorize the communication between a pair of sending and receiving Netdata instances.
Once the communication is authorized, the sending Netdata can push metrics for any number of hosts.
You can generate an API key with the command `uuidgen`. API keys are just random GUIDs.
You can use the same API key on all your netdata, or use a different API key for any pair of
sending-receiving netdata.
You can use the same API key on all your Netdata instances, or use a different API key for each
pair of sending-receiving Netdata.
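For example (the GUID printed is, of course, just a sample):

```sh
# generate a random GUID to be used as an API key
uuidgen
# 58e9fd6a-8c3e-4d2a-9f1b-0a2b3c4d5e6f
```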
##### options for the sending node
This is the section for the sending netdata. On the receiving node, `[stream].enabled` can be `no`.
This is the section for the sending Netdata. On the receiving node, `[stream].enabled` can be `no`.
If it is `yes`, the receiving node will also stream the metrics to another node (i.e. it will be
a `proxy`).
@ -170,22 +169,22 @@ You can also add sections like this:
The above is the receiving configuration for a single host, at the receiver end. `MACHINE_GUID` is
the unique id the netdata generating the metrics (i.e. the netdata that originally collects
them `/var/lib/netdata/registry/netdata.unique.id`). So, metrics for netdata `A` that pass through
any number of other netdata, will have the same `MACHINE_GUID`.
the unique id of the Netdata instance generating the metrics (i.e. the Netdata that originally collects
them, `/var/lib/netdata/registry/netdata.unique.id`). So, metrics for Netdata `A` that pass through
any number of other Netdata instances will have the same `MACHINE_GUID`.
You can also use `default memory mode = dbengine` for an API key or `memory mode = dbengine` for
a single host. The additional `page cache size` and `dbengine disk space` configuration options
are inherited from the global netdata configuration.
are inherited from the global Netdata configuration.
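A hedged sketch of such a per-host section in `stream.conf` (the GUID is illustrative):

```
[11111111-2222-3333-4444-555555555555]
    enabled = yes
    memory mode = dbengine
```

The `page cache size` and `dbengine disk space` values would stay in the `[global]` section of the master's `netdata.conf`, as noted above.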
##### allow from
`allow from` settings are [netdata simple patterns](../libnetdata/simple_pattern): string matches
`allow from` settings are [Netdata simple patterns](../libnetdata/simple_pattern): string matches
that use `*` as wildcard (any number of times) and a `!` prefix for a negative match.
So: `allow from = !10.1.2.3 10.*` will allow all IPs in `10.*` except `10.1.2.3`. The order is
important: left to right, the first positive or negative match is used.
`allow from` is available in netdata v1.9+
`allow from` is available in Netdata v1.9+
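For instance, inside a receiving section of `stream.conf` (the API key name and IPs are illustrative):

```
[API_KEY]
    enabled = yes
    allow from = !10.1.2.3 10.*
```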
##### tracing
@ -211,7 +210,7 @@ The receiving end (`proxy` or `master`) logs entries like these:
```
2017-02-25 01:58:14: netdata: INFO : STREAM costa-pc [receive from [10.11.12.11]:33554]: receiving metrics...
```
For netdata v1.9+, streaming can also be monitored via `access.log`.
For Netdata v1.9+, streaming can also be monitored via `access.log`.
### Securing streaming communications
@ -302,7 +301,7 @@ Yes | -/force/optional | Yes | yes | The master-slave stream is encrypted.
## Viewing remote host dashboards, using mirrored databases
On any receiving netdata, that maintains remote databases and has its web server enabled,
On any receiving Netdata instance that maintains remote databases and has its web server enabled,
the node menu will include a list of the mirrored databases.
![image](https://cloud.githubusercontent.com/assets/2662304/24080824/24cd2d3c-0caf-11e7-909d-a8dd1dbb95d7.png)
@ -326,11 +325,11 @@ In auto-scaling, all servers are ephemeral, they live for just a few hours. Ever
So, how can we monitor them? How can we be sure that everything is working as expected on all of them?
### The netdata way
### The Netdata way
We recently made a significant improvement at the core of netdata to support monitoring such setups.
We recently made a significant improvement to the core of Netdata to support monitoring such setups.
Following the netdata way of monitoring, we wanted:
Following the Netdata way of monitoring, we wanted:
1. **real-time performance monitoring**, collecting **_thousands of metrics per server per second_**, visualized in interactive, automatically created dashboards.
2. **real-time alarms**, for all nodes.
@ -339,18 +338,18 @@ Following the netdata way of monitoring, we wanted:
### How it works
All monitoring solutions, including netdata, work like this:
All monitoring solutions, including Netdata, work like this:
1. `collect metrics`, from the system and the running applications
2. `store metrics`, in a time-series database
3. `examine metrics` periodically, for triggering alarms and sending alarm notifications
4. `visualize metrics`, so that users can see what exactly is happening
netdata used to be self-contained, so that all these functions were handled entirely by each server. The changes we made, allow each netdata to be configured independently for each function. So, each netdata can now act as:
Netdata used to be self-contained, so that all these functions were handled entirely by each server. The changes we made allow each Netdata instance to be configured independently for each function. So, each Netdata can now act as:
- a `self contained system`, much like it used to be.
- a `data collector`, that collects metrics from a host and pushes them to another netdata (with or without a local database and alarms).
- a `proxy`, that receives metrics from other hosts and pushes them immediately to other netdata servers. netdata proxies can also be `store and forward proxies` meaning that they are able to maintain a local database for all metrics passing through them (with or without alarms).
- a `data collector`, that collects metrics from a host and pushes them to another Netdata (with or without a local database and alarms).
- a `proxy`, that receives metrics from other hosts and pushes them immediately to other Netdata servers. Netdata proxies can also be `store and forward proxies` meaning that they are able to maintain a local database for all metrics passing through them (with or without alarms).
- a `time-series database` node, where data are kept, alarms are run and queries are served to visualise the metrics.
### Configuring an auto-scaling setup
@ -359,7 +358,7 @@ netdata used to be self-contained, so that all these functions were handled enti
<img src="https://cloud.githubusercontent.com/assets/2662304/23627468/96daf7ba-02b9-11e7-95ac-1f767dd8dab8.png"/>
</p>
You need a netdata `master`. This node should not be ephemeral. It will be the node where all ephemeral nodes (let's call them `slaves`) will be sending their metrics.
You need a Netdata `master`. This node should not be ephemeral. It will be the node where all ephemeral nodes (let's call them `slaves`) will be sending their metrics.
The master needs to authorize the slaves before it will accept their metrics. This is done with an API key.
@ -393,11 +392,11 @@ On the master, edit `/etc/netdata/stream.conf` (to edit it on your system run `/
If you used many API keys, you can add one such section for each API key.
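As a hedged sketch, one such section might look like this (the key and values are illustrative, and the option names are the ones discussed in this document):

```
[11111111-2222-3333-4444-555555555555]
    enabled = yes
    default memory mode = dbengine
    health enabled by default = auto
    allow from = *
```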
When done, restart netdata on the `master` node. It is now ready to receive metrics.
When done, restart Netdata on the `master` node. It is now ready to receive metrics.
Note that `health enabled by default = auto` will still trigger `last_collected` alarms, if a connected slave does not exit gracefully. If the netdata running on the slave is
Note that `health enabled by default = auto` will still trigger `last_collected` alarms if a connected slave does not exit gracefully. If the `netdata` process running on the slave is
stopped, it will close the connection to the master, ensuring that no `last_collected` alarms are triggered. For example, a proper container restart would first terminate
the netdata process, but a system power issue would leave the connection open on the master side. In the second case, you will still receive alarms.
the `netdata` process, but a system power issue would leave the connection open on the master side. In the second case, you will still receive alarms.
#### Configuring the `slaves`
@ -405,7 +404,7 @@ On each of the slaves, edit `/etc/netdata/stream.conf` (to edit it on your syste
```bash
[stream]
# stream metrics to another netdata
# stream metrics to another Netdata
enabled = yes
# the IP and PORT of the master
@ -416,7 +415,7 @@ On each of the slaves, edit `/etc/netdata/stream.conf` (to edit it on your syste
```
*`stream.conf` on slaves, to enable pushing metrics to master at `10.11.12.13:19999`.*
Using just the above configuration, the `slaves` will be pushing their metrics to the `master` netdata, but they will still maintain a local database of the metrics and run health checks. To disable them, edit `/etc/netdata/netdata.conf` and set:
Using just the above configuration, the `slaves` will be pushing their metrics to the `master` Netdata, but they will still maintain a local database of the metrics and run health checks. To disable them, edit `/etc/netdata/netdata.conf` and set:
```bash
[global]
```
@ -431,9 +430,9 @@ Using just the above configuration, the `slaves` will be pushing their metrics t
Keep in mind that setting `memory mode = none` will also force `[health].enabled = no` (health checks require access to a local database). But you can keep the database and disable health checks if you need to. You are, however, sending all the metrics to the master server, which can handle the health checking (`[health].enabled = yes`).
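A hedged sketch of the settings described above, as they would appear in a slave's `netdata.conf`:

```
[global]
    # no local database on the slave
    memory mode = none

[health]
    # the master runs the health checks
    enabled = no
```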
#### netdata unique id
#### Netdata unique id
The file `/var/lib/netdata/registry/netdata.public.unique.id` contains a random GUID that **uniquely identifies each netdata**. This file is automatically generated, by netdata, the first time it is started and remains unaltered forever.
The file `/var/lib/netdata/registry/netdata.public.unique.id` contains a random GUID that **uniquely identifies each Netdata**. This file is automatically generated by Netdata the first time it is started, and remains unaltered forever.
> If you are building an image to be used for automated provisioning of autoscaled VMs, it is important to delete that file from the image, so that each instance of your image will generate its own.
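For example, in an image-build script (a sketch):

```sh
# remove the generated id when baking a VM image, so that each
# provisioned instance generates its own on first start
rm -f /var/lib/netdata/registry/netdata.public.unique.id
```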
@ -469,7 +468,7 @@ and something like this on the slave:
### Archiving to a time-series database
The `master` netdata node can also archive metrics, for all `slaves`, to a time-series database. At the time of this writing, netdata supports:
The `master` Netdata node can also archive metrics, for all `slaves`, to a time-series database. At the time of this writing, Netdata supports:
- graphite
- opentsdb
@ -477,7 +476,7 @@ The `master` netdata node can also archive metrics, for all `slaves`, to a time-
- json document DBs
- anything compatible with the above (e.g. kairosdb, influxdb, etc.)
Check the netdata [backends documentation](../backends) for configuring this.
Check the Netdata [backends documentation](../backends) for configuring this.
This is how such a solution will work:
@ -487,7 +486,7 @@ This is how such a solution will work:
### An advanced setup
netdata also supports `proxies` with and without a local database, and data retention can be different between all nodes.
Netdata also supports `proxies` with and without a local database, and data retention can be different between all nodes.
This means a setup like the following is also possible:
@ -498,16 +497,16 @@ This means a setup like the following is also possible:
## proxies
A proxy is a netdata that is receiving metrics from a netdata, and streams them to another netdata.
A proxy is a Netdata instance that receives metrics from one Netdata instance and streams them to another.
netdata proxies may or may not maintain a database for the metrics passing through them.
Netdata proxies may or may not maintain a database for the metrics passing through them.
When they maintain a database, they can also run health checks (alarms and notifications)
for the remote host that is streaming the metrics.
To configure a proxy, configure it as a receiving and a sending netdata at the same time,
To configure a proxy, configure it as a receiving and a sending Netdata at the same time,
using [stream.conf](stream.conf).
The sending side of a netdata proxy, connects and disconnects to the final destination of the
The sending side of a Netdata proxy connects to and disconnects from the final destination of the
metrics, following the same pattern as the receiving side.
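As a hedged sketch, a proxy's `stream.conf` combines a receiving section with a sending one (the key names and addresses are illustrative):

```
[stream]
    # sending side: push everything to the master
    enabled = yes
    destination = master-host:19999
    api key = MASTER_API_KEY

[SLAVES_API_KEY]
    # receiving side: accept metrics from the slaves
    enabled = yes
```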
For a practical example see [Monitoring ephemeral nodes](#monitoring-ephemeral-nodes).
View file
@ -18,7 +18,7 @@ If you change that file, your changes will be overwritten when Netdata is update
You have to copy the example file under a new name, so that it will not be overwritten with Netdata updates.
To configure your info file set in netdata.conf:
To configure your info file, set in `netdata.conf`:
```
[web]
```
View file
@ -2,13 +2,13 @@
## Netdata REST API
The complete documentation of the netdata API is available at the **[Swagger Editor](https://editor.swagger.io/?url=https://raw.githubusercontent.com/netdata/netdata/master/web/api/netdata-swagger.yaml)**.
The complete documentation of the Netdata API is available at the **[Swagger Editor](https://editor.swagger.io/?url=https://raw.githubusercontent.com/netdata/netdata/master/web/api/netdata-swagger.yaml)**.
If you prefer it over the Swagger Editor, you can also use **[Swagger UI](https://registry.my-netdata.io/swagger/#!/default/get_data)**. This, however, does not provide all the information available.
## Google charts API
netdata is a [Google Visualization API datatable and datasource provider](https://developers.google.com/chart/interactive/docs/reference), so it can directly be used with [Google Charts](https://developers.google.com/chart/interactive/docs/).
Netdata is a [Google Visualization API datatable and datasource provider](https://developers.google.com/chart/interactive/docs/reference), so it can directly be used with [Google Charts](https://developers.google.com/chart/interactive/docs/).
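For instance, a query along these lines (a sketch; the chart name and time window are illustrative, and it assumes the `datasource` output format used by the Google Charts integration) returns data in the Google Visualization wire format:

```
http://localhost:19999/api/v1/data?chart=system.cpu&after=-60&format=datasource
```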
Check this [single chart, jsfiddle example](https://jsfiddle.net/ktsaou/ensu4uws/9/):
View file
@ -6,11 +6,11 @@ Netdata can generate badges for any chart and any dimension at any time-frame. B
**Netdata badges are powerful**!
Given that netdata collects from **1.000** to **5.000** metrics per server (depending on the number of network interfaces, disks, cpu cores, applications running, users logged in, containers running, etc) and that netdata already has data reduction/aggregation functions embedded, the badges can be quite powerful.
Given that Netdata collects from **1.000** to **5.000** metrics per server (depending on the number of network interfaces, disks, cpu cores, applications running, users logged in, containers running, etc) and that Netdata already has data reduction/aggregation functions embedded, the badges can be quite powerful.
For each metric/dimension, and for arbitrary time-frames, badges can show the **min**, **max** or **average** value, but also the **sum** or **incremental-sum**, to report their **volume**.
For example, there is [a chart in netdata that shows the current requests/s of nginx](http://london.my-netdata.io/#nginx_local_nginx). Using this chart alone we can show the following badges (we could add more time-frames, like **today**, **yesterday**, etc):
For example, there is [a chart in Netdata that shows the current requests/s of nginx](http://london.my-netdata.io/#nginx_local_nginx). Using this chart alone we can show the following badges (we could add more time-frames, like **today**, **yesterday**, etc):
<a href="https://registry.my-netdata.io/#nginx_local_nginx"><img src="https://registry.my-netdata.io/api/v1/badge.svg?chart=nginx_local.connections&dimensions=active&value_color=grey:null%7Cblue&label=nginx%20active%20connections%20now&units=null&precision=0"/></a> <a href="https://registry.my-netdata.io/#nginx_local_nginx"><img src="https://registry.my-netdata.io/api/v1/badge.svg?chart=nginx_local.connections&dimensions=active&after=-3600&value_color=orange&label=last%20hour%20average&units=null&options=unaligned&precision=0"/></a> <a href="https://registry.my-netdata.io/#nginx_local_nginx"><img src="https://registry.my-netdata.io/api/v1/badge.svg?chart=nginx_local.connections&dimensions=active&group=max&after=-3600&value_color=red&label=last%20hour%20max&units=null&options=unaligned&precision=0"/></a>
@ -18,9 +18,9 @@ Similarly, there is [a chart that shows outbound bandwidth per class](http://lon
<a href="https://registry.my-netdata.io/#tc_eth0"><img src="https://registry.my-netdata.io/api/v1/badge.svg?chart=tc.world_out&dimensions=web_server&value_color=green&label=web%20server%20sends%20now&units=kbps"/></a> <a href="https://registry.my-netdata.io/#tc_eth0"><img src="https://registry.my-netdata.io/api/v1/badge.svg?chart=tc.world_out&dimensions=web_server&after=-86400&options=unaligned&group=sum&divide=8388608&value_color=blue&label=web%20server%20sent%20today&units=GB"/></a>
The right one is a **volume** calculation. Netdata calculated the total of the last 86.400 seconds (a day) which gives `kilobits`, then divided it by 8 to make it KB, then by 1024 to make it MB and then by 1024 to make it GB. Calculations like this are quite accurate, since for every value collected, every second, netdata interpolates it to second boundary using microsecond calculations.
The right one is a **volume** calculation. Netdata calculated the total of the last 86.400 seconds (a day), which gives `kilobits`; it then divided by 8 to get KB, by 1024 to get MB, and by 1024 again to get GB. Calculations like this are quite accurate, since for every value collected, every second, Netdata interpolates it to the second boundary using microsecond calculations.
Let's see a few more badge examples (they come from the [netdata registry](../../../registry/)):
Let's see a few more badge examples (they come from the [Netdata registry](../../../registry/)):
- **cpu usage of user `root`** (you can pick any user; 100% = 1 core). This will be `green <10%`, `yellow <20%`, `orange <50%`, `blue <100%` (1 core), `red` otherwise (you define thresholds and colors on the URL).
@ -36,7 +36,7 @@ Let's see a few more badge examples (they come from the [netdata registry](../..
---
> So, every single line on the charts of a [netdata dashboard](http://london.my-netdata.io/), can become a badge and this badge can calculate **average**, **min**, **max**, or **volume** for any time-frame! And you can also vary the badge color using conditions on the calculated value.
> So, every single line on the charts of a [Netdata dashboard](http://london.my-netdata.io/), can become a badge and this badge can calculate **average**, **min**, **max**, or **volume** for any time-frame! And you can also vary the badge color using conditions on the calculated value.
---
@ -44,13 +44,13 @@ Let's see a few more badge examples (they come from the [netdata registry](../..
The basic URL is `http://your.netdata:19999/api/v1/badge.svg?option1&option2&option3&...`.
Here is what you can put for `options` (these are standard netdata API options):
Here is what you can put for `options` (these are standard Netdata API options):
- `chart=CHART.NAME`
The chart to get the values from.
**This is the only parameter required** and with just this parameter, netdata will return the sum of the latest values of all chart dimensions.
**This is the only parameter required** and with just this parameter, Netdata will return the sum of the latest values of all chart dimensions.
Example:
@ -76,7 +76,7 @@ Here is what you can put for `options` (these are standard netdata API options):
- `dimensions=DIMENSION1|DIMENSION2|...`
The dimensions of the chart to use. If you don't set any dimension, all will be used. When multiple dimensions are used, netdata will sum their values. You can append `options=absolute` if you want this sum to convert all values to positive before adding them.
The dimensions of the chart to use. If you don't set any dimension, all will be used. When multiple dimensions are used, Netdata will sum their values. You can append `options=absolute` if you want this sum to convert all values to positive before adding them.
Pipes in HTML have to be escaped with `%7C`.
@ -132,7 +132,7 @@ Here is what you can put for `options` (these are standard netdata API options):
- `group=min` or `group=max` or `group=average` (the default) or `group=sum` or `group=incremental-sum`
If netdata will have to reduce (aggregate) the data to calculate the value, which aggregation method to use.
If Netdata will have to reduce (aggregate) the data to calculate the value, which aggregation method to use.
- `max` will find the max value for the timeframe. This works on both positive and negative dimensions. It will find the most extreme value.
@ -156,7 +156,7 @@ Here is what you can put for `options` (these are standard netdata API options):
- `min2max`, when multiple dimensions are given, do not sum them, but take their `max - min`.
- `unaligned`, when data are reduced / aggregated (e.g. the request is about the average of the last minute, or hour), netdata by default aligns them so that the charts will have a constant shape (so average per minute returns always XX:XX:00 - XX:XX:59). Setting the `unaligned` option, netdata will aggregate data without any alignment, so if the request is for 60 seconds, it will aggregate the latest 60 seconds of collected data.
- `unaligned`, when data are reduced / aggregated (e.g. the request is about the average of the last minute, or hour), Netdata by default aligns them so that the charts will have a constant shape (so the average per minute always returns XX:XX:00 - XX:XX:59). With the `unaligned` option set, Netdata will aggregate data without any alignment, so if the request is for 60 seconds, it will aggregate the latest 60 seconds of collected data.
These are options dedicated to badges:
@ -166,9 +166,9 @@ These are options dedicated to badges:
- `units=TEXT`
The units of the badge. If you want to put a `/`, please put a `\`. This is because netdata allows badges parameters to be given as path in URL, instead of query string. You can also use `null` or `empty` to show it without any units.
The units of the badge. If you want to put a `/`, please put a `\` instead. This is because Netdata allows badge parameters to be given as a path in the URL, instead of a query string. You can also use `null` or `empty` to show it without any units.
The units `seconds`, `minutes` and `hours` trigger special formatting. The value has to be in this unit, and netdata will automatically change it to show a more pretty duration.
The units `seconds`, `minutes` and `hours` trigger special formatting. The value has to be in this unit, and Netdata will automatically change it to show a prettier duration.
- `multiply=NUMBER`
@ -180,7 +180,7 @@ These are options dedicated to badges:
- `label_color=COLOR`
The color of the label (the left part). You can use any HTML color, include `#NNN` and `#NNNNNN`. The following colors are defined in netdata (and you can use them by name): `green`, `brightgreen`, `yellow`, `yellowgreen`, `orange`, `red`, `blue`, `grey`, `gray`, `lightgrey`, `lightgray`. These are taken from https://github.com/badges/shields so they are compatible with standard badges.
The color of the label (the left part). You can use any HTML color, including `#NNN` and `#NNNNNN`. The following colors are defined in Netdata (and you can use them by name): `green`, `brightgreen`, `yellow`, `yellowgreen`, `orange`, `red`, `blue`, `grey`, `gray`, `lightgrey`, `lightgray`. These are taken from https://github.com/badges/shields so they are compatible with standard badges.
- `value_color=COLOR:null|COLOR<VALUE|COLOR>VALUE|COLOR>=VALUE|COLOR<=VALUE|...`
@ -188,13 +188,13 @@ These are options dedicated to badges:
Example: `value_color=grey:null|green<10|yellow<100|orange<1000|blue<10000|red`
The above will set `grey` if no value exists (not collected within the `gap when lost iterations above` in netdata.conf for the chart), `green` if the value is less than 10, `yellow` if the value is less than 100, etc up to `red` which will be used if no other conditions match.
The above will set `grey` if no value exists (not collected within the `gap when lost iterations above` in `netdata.conf` for the chart), `green` if the value is less than 10, `yellow` if the value is less than 100, and so on, up to `red`, which will be used if no other conditions match.
The supported operators are `<`, `>`, `<=`, `>=`, `=` (or `:`) and `!=` (or `<>`).
- `precision=NUMBER`
The number of decimal digits of the value. By default netdata will add:
The number of decimal digits of the value. By default Netdata will add:
- no decimal digits for values > 1000
- 1 decimal digit for values > 100
@ -217,7 +217,7 @@ These are options dedicated to badges:
- `refresh=auto` or `refresh=SECONDS`
This option enables auto-refreshing of images. netdata will send the HTTP header `Refresh: SECONDS` to the web browser, thus requesting automatic refresh of the images at regular intervals.
This option enables auto-refreshing of images. Netdata will send the HTTP header `Refresh: SECONDS` to the web browser, thus requesting automatic refresh of the images at regular intervals.
`auto` will calculate the proper `SECONDS` to avoid unnecessary refreshes. If `SECONDS` is zero, this feature is disabled (it is also disabled by default).
@ -227,7 +227,7 @@ These are options dedicated to badges:
```html
<embed src="BADGE_URL" type="image/svg+xml" height="20" />
```
Another way is to use javascript to auto-refresh them. You can auto-refresh all the netdata badges on a page using javascript. You have to add a class to all the netdata badges, like this `<img class="netdata-badge" src="..."/>`. Then add this javascript code to your page (it requires jquery):
Another way is to use javascript to auto-refresh all the Netdata badges on a page. You have to add a class to all the Netdata badges, like this `<img class="netdata-badge" src="..."/>`. Then add this javascript code to your page (it requires jquery):
```html
<script>
```
@ -264,9 +264,9 @@ character|name|escape sequence
## FAQ
#### Is it fast?
On modern hardware, netdata can generate about **2.000 badges per second per core**, before noticing any delays. It generates a badge in about half a millisecond!
On modern hardware, Netdata can generate about **2.000 badges per second per core** before any delays become noticeable. It generates a badge in about half a millisecond!
Of course these timing are for badges that use recent data. If you need badges that do calculations over long durations (a day, or more), timing will differ. netdata logs its timings at its `access.log`, so take a look there before adding a heavy badge on a busy web site. Of course, you can cache such badges or have a cron job get them from netdata and save them at your web server at regular intervals.
Of course these timings are for badges that use recent data. If you need badges that do calculations over long durations (a day, or more), timing will differ. Netdata logs its timings in its `access.log`, so take a look there before adding a heavy badge on a busy web site. Of course, you can cache such badges or have a cron job fetch them from Netdata and save them on your web server at regular intervals.
#### Embedding badges in github
View file
@ -1,5 +1,5 @@
# prometheus exporter
The prometheus exporter for netdata is located at the [backends section for prometheus](../../../../backends/prometheus).
The prometheus exporter for Netdata is located at the [backends section for prometheus](../../../../backends/prometheus).
[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fweb%2Fapi%2Fexporters%2Fprometheus%2FREADME&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)]()
View file
@ -1,18 +1,18 @@
# shell exporter
Shell scripts can now query netdata:
Shell scripts can now query Netdata:
```sh
eval "$(curl -s 'http://localhost:19999/api/v1/allmetrics')"
```
after this command, all the netdata metrics are exposed to shell. Check:
after this command, all the Netdata metrics are exposed to the shell. Check:
```sh
# source the metrics
eval "$(curl -s 'http://localhost:19999/api/v1/allmetrics')"
# let's see if there are variables exposed by netdata for system.cpu
# let's see if there are variables exposed by Netdata for system.cpu
set | grep "^NETDATA_SYSTEM_CPU"
NETDATA_SYSTEM_CPU_GUEST=0
@ -50,7 +50,7 @@ user 0m0,000s
sys 0m0,007s
# it is...
# 0.07 seconds for curl to be loaded, connect to netdata and fetch the response back...
# 0.07 seconds for curl to be loaded, connect to Netdata and fetch the response back...
```
The `_VISIBLETOTAL` variable sums up all the dimensions of each chart.
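For example (assuming the `system.cpu` chart exists on the host):

```sh
# source the metrics, then print the sum of all system.cpu dimensions
eval "$(curl -s 'http://localhost:19999/api/v1/allmetrics')"
echo "${NETDATA_SYSTEM_CPU_VISIBLETOTAL}"
```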
View file
@ -59,9 +59,9 @@ This is such an object:
## Downloading data query result files
Following the [Google Visualization Provider guidelines](https://developers.google.com/chart/interactive/docs/dev/implementing_data_source),
netdata supports parsing `tqx` options.
Netdata supports parsing `tqx` options.
Using these options, any netdata data query can instruct the web browser to download
Using these options, any Netdata data query can instruct the web browser to download
the result and save it under a given filename.
For example, to download a CSV file with the CPU utilization of the last hour, a request along these lines can be used (a sketch; the chart, time window and filename are illustrative):
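```
http://localhost:19999/api/v1/data?chart=system.cpu&after=-3600&format=csv&tqx=outFileName:cpu_last_hour.csv
```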
Some files were not shown because too many files have changed in this diff.