Mirror of https://github.com/netdata/netdata.git, synced 2025-04-17 11:12:42 +00:00
Add chart filtering parameter to the allmetrics API query (#12820)
* Add chart filtering in the allmetrics API call
* Fix compilation warnings
* Remove unnecessary function
* Update the documentation
* Apply suggestions from code review
* Check for filter instead of filter_string
* Do not check both - chart id and name for prometheus and shell formats
* Fix unit tests

Co-authored-by: Ilya Mashchenko <ilya@netdata.cloud>
Parent: 4f3d90a405
Commit: 464695b410

12 changed files with 157 additions and 80 deletions
database/sqlite
exporting
web/api
@@ -198,6 +198,7 @@ bind_fail:
 #else
     UNUSED(host);
     UNUSED(ae);
+    UNUSED(skip_filter);
 #endif
     return 0;
 }
@@ -28,61 +28,82 @@ X seconds (though, it can send them per second if you need it to).
 ## Features
 
-1. The exporting engine uses a number of connectors to send Netdata metrics to external time-series databases. See our
-   [list of supported databases](/docs/export/external-databases.md#supported-databases) for information on which
-   connector to enable and configure for your database of choice.
+### Integration
 
-   - [**AWS Kinesis Data Streams**](/exporting/aws_kinesis/README.md): Metrics are sent to the service in `JSON`
-     format.
-   - [**Google Cloud Pub/Sub Service**](/exporting/pubsub/README.md): Metrics are sent to the service in `JSON`
-     format.
-   - [**Graphite**](/exporting/graphite/README.md): A plaintext interface. Metrics are sent to the database server as
-     `prefix.hostname.chart.dimension`. `prefix` is configured below, `hostname` is the hostname of the machine (can
-     also be configured). Learn more in our guide to [export and visualize Netdata metrics in
-     Graphite](/docs/guides/export/export-netdata-metrics-graphite.md).
-   - [**JSON** document databases](/exporting/json/README.md)
-   - [**OpenTSDB**](/exporting/opentsdb/README.md): Use a plaintext or HTTP interfaces. Metrics are sent to
-     OpenTSDB as `prefix.chart.dimension` with tag `host=hostname`.
-   - [**MongoDB**](/exporting/mongodb/README.md): Metrics are sent to the database in `JSON` format.
-   - [**Prometheus**](/exporting/prometheus/README.md): Use an existing Prometheus installation to scrape metrics
-     from node using the Netdata API.
-   - [**Prometheus remote write**](/exporting/prometheus/remote_write/README.md). A binary snappy-compressed protocol
-     buffer encoding over HTTP. Supports many [storage
-     providers](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage).
-   - [**TimescaleDB**](/exporting/TIMESCALE.md): Use a community-built connector that takes JSON streams from a
-     Netdata client and writes them to a TimescaleDB table.
+The exporting engine uses a number of connectors to send Netdata metrics to external time-series databases. See our
+[list of supported databases](/docs/export/external-databases.md#supported-databases) for information on which
+connector to enable and configure for your database of choice.
 
-2. Netdata can filter metrics (at the chart level), to send only a subset of the collected metrics.
+- [**AWS Kinesis Data Streams**](/exporting/aws_kinesis/README.md): Metrics are sent to the service in `JSON`
+  format.
+- [**Google Cloud Pub/Sub Service**](/exporting/pubsub/README.md): Metrics are sent to the service in `JSON`
+  format.
+- [**Graphite**](/exporting/graphite/README.md): A plaintext interface. Metrics are sent to the database server as
+  `prefix.hostname.chart.dimension`. `prefix` is configured below, `hostname` is the hostname of the machine (can
+  also be configured). Learn more in our guide to [export and visualize Netdata metrics in
+  Graphite](/docs/guides/export/export-netdata-metrics-graphite.md).
+- [**JSON** document databases](/exporting/json/README.md)
+- [**OpenTSDB**](/exporting/opentsdb/README.md): Use a plaintext or HTTP interfaces. Metrics are sent to
+  OpenTSDB as `prefix.chart.dimension` with tag `host=hostname`.
+- [**MongoDB**](/exporting/mongodb/README.md): Metrics are sent to the database in `JSON` format.
+- [**Prometheus**](/exporting/prometheus/README.md): Use an existing Prometheus installation to scrape metrics
+  from node using the Netdata API.
+- [**Prometheus remote write**](/exporting/prometheus/remote_write/README.md). A binary snappy-compressed protocol
+  buffer encoding over HTTP. Supports many [storage
+  providers](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage).
+- [**TimescaleDB**](/exporting/TIMESCALE.md): Use a community-built connector that takes JSON streams from a
+  Netdata client and writes them to a TimescaleDB table.
 
-3. Netdata supports three modes of operation for all exporting connectors:
+### Chart filtering
 
-   - `as-collected` sends to external databases the metrics as they are collected, in the units they are collected.
-     So, counters are sent as counters and gauges are sent as gauges, much like all data collectors do. For example,
-     to calculate CPU utilization in this format, you need to know how to convert kernel ticks to percentage.
+Netdata can filter metrics, to send only a subset of the collected metrics. You can use the
+configuration file
 
-   - `average` sends to external databases normalized metrics from the Netdata database. In this mode, all metrics
-     are sent as gauges, in the units Netdata uses. This abstracts data collection and simplifies visualization, but
-     you will not be able to copy and paste queries from other sources to convert units. For example, CPU utilization
-     percentage is calculated by Netdata, so Netdata will convert ticks to percentage and send the average percentage
-     to the external database.
+```txt
+[prometheus:exporter]
+    send charts matching = system.*
+```
 
-   - `sum` or `volume`: the sum of the interpolated values shown on the Netdata graphs is sent to the external
-     database. So, if Netdata is configured to send data to the database every 10 seconds, the sum of the 10 values
-     shown on the Netdata charts will be used.
+or the URL parameter `filter` in the `allmetrics` API call.
 
-   Time-series databases suggest to collect the raw values (`as-collected`). If you plan to invest on building your
-   monitoring around a time-series database and you already know (or you will invest in learning) how to convert units
-   and normalize the metrics in Grafana or other visualization tools, we suggest to use `as-collected`.
+```txt
+http://localhost:19999/api/v1/allmetrics?format=shell&filter=system.*
+```
 
-   If, on the other hand, you just need long term archiving of Netdata metrics and you plan to mainly work with
-   Netdata, we suggest to use `average`. It decouples visualization from data collection, so it will generally be a lot
-   simpler. Furthermore, if you use `average`, the charts shown in the external service will match exactly what you
-   see in Netdata, which is not necessarily true for the other modes of operation.
+### Operation modes
 
-4. This code is smart enough, not to slow down Netdata, independently of the speed of the external database server. You
-   should keep in mind though that many exporting connector instances can consume a lot of CPU resources if they run
-   their batches at the same time. You can set different update intervals for every exporting connector instance, but
-   even in that case they can occasionally synchronize their batches for a moment.
+Netdata supports three modes of operation for all exporting connectors:
+
+- `as-collected` sends to external databases the metrics as they are collected, in the units they are collected.
+  So, counters are sent as counters and gauges are sent as gauges, much like all data collectors do. For example,
+  to calculate CPU utilization in this format, you need to know how to convert kernel ticks to percentage.
+
+- `average` sends to external databases normalized metrics from the Netdata database. In this mode, all metrics
+  are sent as gauges, in the units Netdata uses. This abstracts data collection and simplifies visualization, but
+  you will not be able to copy and paste queries from other sources to convert units. For example, CPU utilization
+  percentage is calculated by Netdata, so Netdata will convert ticks to percentage and send the average percentage
+  to the external database.
+
+- `sum` or `volume`: the sum of the interpolated values shown on the Netdata graphs is sent to the external
+  database. So, if Netdata is configured to send data to the database every 10 seconds, the sum of the 10 values
+  shown on the Netdata charts will be used.
+
+Time-series databases suggest to collect the raw values (`as-collected`). If you plan to invest on building your
+monitoring around a time-series database and you already know (or you will invest in learning) how to convert units
+and normalize the metrics in Grafana or other visualization tools, we suggest to use `as-collected`.
+
+If, on the other hand, you just need long term archiving of Netdata metrics and you plan to mainly work with
+Netdata, we suggest to use `average`. It decouples visualization from data collection, so it will generally be a lot
+simpler. Furthermore, if you use `average`, the charts shown in the external service will match exactly what you
+see in Netdata, which is not necessarily true for the other modes of operation.
+
+### Independent operation
+
+This code is smart enough, not to slow down Netdata, independently of the speed of the external database server.
+
+> ❗ You should keep in mind though that many exporting connector instances can consume a lot of CPU resources if they
+> run their batches at the same time. You can set different update intervals for every exporting connector instance,
+> but even in that case they can occasionally synchronize their batches for a moment.
 
 ## Configuration

@@ -252,7 +273,8 @@ Configure individual connectors and override any global settings with the following settings:
   within each pattern). The patterns are checked against both chart id and chart name. A pattern starting with `!`
   gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`, use `!*reads
   apps.*` (so, the order is important: the first pattern matching the chart id or the chart name will be used -
-  positive or negative).
+  positive or negative). There is also a URL parameter `filter` that can be used while querying `allmetrics`. The URL
+  parameter has a higher priority than the configuration option.
 
 - `send names instead of ids = yes | no` controls the metric names Netdata should send to the external database.
   Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system
@@ -12,9 +12,10 @@
  *
  * @param instance an instance data structure.
  * @param st a chart.
+ * @param filter a simple pattern to match against.
  * @return Returns 1 if the chart can be sent, 0 otherwise.
  */
-inline int can_send_rrdset(struct instance *instance, RRDSET *st)
+inline int can_send_rrdset(struct instance *instance, RRDSET *st, SIMPLE_PATTERN *filter)
 {
 #ifdef NETDATA_INTERNAL_CHECKS
     RRDHOST *host = st->rrdhost;

@@ -27,19 +28,29 @@ inline int can_send_rrdset(struct instance *instance, RRDSET *st)
     if (unlikely(rrdset_flag_check(st, RRDSET_FLAG_EXPORTING_IGNORE)))
         return 0;
 
-    if (unlikely(!rrdset_flag_check(st, RRDSET_FLAG_EXPORTING_SEND))) {
-        // we have not checked this chart
-        if (simple_pattern_matches(instance->config.charts_pattern, st->id) ||
-            simple_pattern_matches(instance->config.charts_pattern, st->name))
-            rrdset_flag_set(st, RRDSET_FLAG_EXPORTING_SEND);
-        else {
-            rrdset_flag_set(st, RRDSET_FLAG_EXPORTING_IGNORE);
-            debug(
-                D_EXPORTING,
-                "EXPORTING: not sending chart '%s' of host '%s', because it is disabled for exporting.",
-                st->id,
-                host->hostname);
-            return 0;
-        }
-    }
+    if (filter) {
+        if (instance->config.options & EXPORTING_OPTION_SEND_NAMES) {
+            if (!simple_pattern_matches(filter, st->name))
+                return 0;
+        } else {
+            if (!simple_pattern_matches(filter, st->id))
+                return 0;
+        }
+    } else {
+        if (unlikely(!rrdset_flag_check(st, RRDSET_FLAG_EXPORTING_SEND))) {
+            // we have not checked this chart
+            if (simple_pattern_matches(instance->config.charts_pattern, st->id) ||
+                simple_pattern_matches(instance->config.charts_pattern, st->name))
+                rrdset_flag_set(st, RRDSET_FLAG_EXPORTING_SEND);
+            else {
+                rrdset_flag_set(st, RRDSET_FLAG_EXPORTING_IGNORE);
+                debug(
+                    D_EXPORTING,
+                    "EXPORTING: not sending chart '%s' of host '%s', because it is disabled for exporting.",
+                    st->id,
+                    host->hostname);
+                return 0;
+            }
+        }
+    }
 

@@ -480,6 +491,7 @@ static void generate_as_collected_prom_metric(BUFFER *wb, struct gen_parameters
  *
  * @param instance an instance data structure.
  * @param host a data collecting host.
+ * @param filter_string a simple pattern filter.
  * @param wb the buffer to fill with metrics.
  * @param prefix a prefix for every metric.
  * @param exporting_options options to configure what data is exported.

@@ -489,12 +501,14 @@ static void generate_as_collected_prom_metric(BUFFER *wb, struct gen_parameters
 static void rrd_stats_api_v1_charts_allmetrics_prometheus(
     struct instance *instance,
     RRDHOST *host,
+    const char *filter_string,
     BUFFER *wb,
     const char *prefix,
     EXPORTING_OPTIONS exporting_options,
     int allhosts,
     PROMETHEUS_OUTPUT_OPTIONS output_options)
 {
+    SIMPLE_PATTERN *filter = simple_pattern_create(filter_string, NULL, SIMPLE_PATTERN_EXACT);
     rrdhost_rdlock(host);
 
     char hostname[PROMETHEUS_ELEMENT_MAX + 1];

@@ -592,7 +606,7 @@ static void rrd_stats_api_v1_charts_allmetrics_prometheus(
     rrdset_foreach_read(st, host)
     {
 
-        if (likely(can_send_rrdset(instance, st))) {
+        if (likely(can_send_rrdset(instance, st, filter))) {
             rrdset_rdlock(st);
 
             char chart[PROMETHEUS_ELEMENT_MAX + 1];

@@ -777,6 +791,7 @@ static void rrd_stats_api_v1_charts_allmetrics_prometheus(
     }
 
     rrdhost_unlock(host);
+    simple_pattern_free(filter);
 }
 
 /**

@@ -850,6 +865,7 @@ static inline time_t prometheus_preparation(
  * Write metrics and auxiliary information for one host to a buffer.
  *
  * @param host a data collecting host.
+ * @param filter_string a simple pattern filter.
  * @param wb the buffer to write to.
  * @param server the name of a Prometheus server.
  * @param prefix a prefix for every metric.

@@ -858,6 +874,7 @@ static inline time_t prometheus_preparation(
  */
 void rrd_stats_api_v1_charts_allmetrics_prometheus_single_host(
     RRDHOST *host,
+    const char *filter_string,
     BUFFER *wb,
     const char *server,
     const char *prefix,

@@ -880,13 +897,14 @@ void rrd_stats_api_v1_charts_allmetrics_prometheus_single_host(
         output_options);
 
     rrd_stats_api_v1_charts_allmetrics_prometheus(
-        prometheus_exporter_instance, host, wb, prefix, exporting_options, 0, output_options);
+        prometheus_exporter_instance, host, filter_string, wb, prefix, exporting_options, 0, output_options);
 }
 
 /**
  * Write metrics and auxiliary information for all hosts to a buffer.
  *
  * @param host a data collecting host.
+ * @param filter_string a simple pattern filter.
  * @param wb the buffer to write to.
  * @param server the name of a Prometheus server.
  * @param prefix a prefix for every metric.

@@ -895,6 +913,7 @@ void rrd_stats_api_v1_charts_allmetrics_prometheus_single_host(
  */
 void rrd_stats_api_v1_charts_allmetrics_prometheus_all_hosts(
     RRDHOST *host,
+    const char *filter_string,
     BUFFER *wb,
     const char *server,
     const char *prefix,

@@ -920,7 +939,7 @@ void rrd_stats_api_v1_charts_allmetrics_prometheus_all_hosts(
     rrdhost_foreach_read(host)
     {
         rrd_stats_api_v1_charts_allmetrics_prometheus(
-            prometheus_exporter_instance, host, wb, prefix, exporting_options, 1, output_options);
+            prometheus_exporter_instance, host, filter_string, wb, prefix, exporting_options, 1, output_options);
     }
     rrd_unlock();
 }
@@ -23,13 +23,13 @@ typedef enum prometheus_output_flags {
 } PROMETHEUS_OUTPUT_OPTIONS;
 
 extern void rrd_stats_api_v1_charts_allmetrics_prometheus_single_host(
-    RRDHOST *host, BUFFER *wb, const char *server, const char *prefix,
+    RRDHOST *host, const char *filter_string, BUFFER *wb, const char *server, const char *prefix,
     EXPORTING_OPTIONS exporting_options, PROMETHEUS_OUTPUT_OPTIONS output_options);
 extern void rrd_stats_api_v1_charts_allmetrics_prometheus_all_hosts(
-    RRDHOST *host, BUFFER *wb, const char *server, const char *prefix,
+    RRDHOST *host, const char *filter_string, BUFFER *wb, const char *server, const char *prefix,
     EXPORTING_OPTIONS exporting_options, PROMETHEUS_OUTPUT_OPTIONS output_options);
 
-int can_send_rrdset(struct instance *instance, RRDSET *st);
+int can_send_rrdset(struct instance *instance, RRDSET *st, SIMPLE_PATTERN *filter);
 size_t prometheus_name_copy(char *d, const char *s, size_t usable);
 size_t prometheus_label_copy(char *d, const char *s, size_t usable);
 char *prometheus_units_copy(char *d, const char *s, size_t usable, int showoldunits);
@@ -988,21 +988,21 @@ static void test_can_send_rrdset(void **state)
 {
     (void)*state;
 
-    assert_int_equal(can_send_rrdset(prometheus_exporter_instance, localhost->rrdset_root), 1);
+    assert_int_equal(can_send_rrdset(prometheus_exporter_instance, localhost->rrdset_root, NULL), 1);
 
     rrdset_flag_set(localhost->rrdset_root, RRDSET_FLAG_EXPORTING_IGNORE);
-    assert_int_equal(can_send_rrdset(prometheus_exporter_instance, localhost->rrdset_root), 0);
+    assert_int_equal(can_send_rrdset(prometheus_exporter_instance, localhost->rrdset_root, NULL), 0);
     rrdset_flag_clear(localhost->rrdset_root, RRDSET_FLAG_EXPORTING_IGNORE);
 
     // TODO: test with a denying simple pattern
 
     rrdset_flag_set(localhost->rrdset_root, RRDSET_FLAG_OBSOLETE);
-    assert_int_equal(can_send_rrdset(prometheus_exporter_instance, localhost->rrdset_root), 0);
+    assert_int_equal(can_send_rrdset(prometheus_exporter_instance, localhost->rrdset_root, NULL), 0);
     rrdset_flag_clear(localhost->rrdset_root, RRDSET_FLAG_OBSOLETE);
 
     localhost->rrdset_root->rrd_memory_mode = RRD_MEMORY_MODE_NONE;
     prometheus_exporter_instance->config.options |= EXPORTING_SOURCE_DATA_AVERAGE;
-    assert_int_equal(can_send_rrdset(prometheus_exporter_instance, localhost->rrdset_root), 0);
+    assert_int_equal(can_send_rrdset(prometheus_exporter_instance, localhost->rrdset_root, NULL), 0);
 }
 
 static void test_prometheus_name_copy(void **state)

@@ -1067,7 +1067,7 @@ static void rrd_stats_api_v1_charts_allmetrics_prometheus(void **state)
     expect_function_call(__wrap_exporting_calculate_value_from_stored_data);
     will_return(__wrap_exporting_calculate_value_from_stored_data, pack_storage_number(27, SN_DEFAULT_FLAGS));
 
-    rrd_stats_api_v1_charts_allmetrics_prometheus_single_host(localhost, buffer, "test_server", "test_prefix", 0, 0);
+    rrd_stats_api_v1_charts_allmetrics_prometheus_single_host(localhost, NULL, buffer, "test_server", "test_prefix", 0, 0);
 
     assert_string_equal(
         buffer_tostring(buffer),

@@ -1085,7 +1085,7 @@ static void rrd_stats_api_v1_charts_allmetrics_prometheus(void **state)
     will_return(__wrap_exporting_calculate_value_from_stored_data, pack_storage_number(27, SN_DEFAULT_FLAGS));
 
     rrd_stats_api_v1_charts_allmetrics_prometheus_single_host(
-        localhost, buffer, "test_server", "test_prefix", 0, PROMETHEUS_OUTPUT_NAMES | PROMETHEUS_OUTPUT_TYPES);
+        localhost, NULL, buffer, "test_server", "test_prefix", 0, PROMETHEUS_OUTPUT_NAMES | PROMETHEUS_OUTPUT_TYPES);
 
     assert_string_equal(
         buffer_tostring(buffer),

@@ -1103,7 +1103,7 @@ static void rrd_stats_api_v1_charts_allmetrics_prometheus(void **state)
     expect_function_call(__wrap_exporting_calculate_value_from_stored_data);
     will_return(__wrap_exporting_calculate_value_from_stored_data, pack_storage_number(27, SN_DEFAULT_FLAGS));
 
-    rrd_stats_api_v1_charts_allmetrics_prometheus_all_hosts(localhost, buffer, "test_server", "test_prefix", 0, 0);
+    rrd_stats_api_v1_charts_allmetrics_prometheus_all_hosts(localhost, NULL, buffer, "test_server", "test_prefix", 0, 0);
 
     assert_string_equal(
         buffer_tostring(buffer),
@@ -19,6 +19,7 @@ struct prometheus_output_options {
 
 inline int web_client_api_request_v1_allmetrics(RRDHOST *host, struct web_client *w, char *url) {
     int format = ALLMETRICS_SHELL;
+    const char *filter = NULL;
     const char *prometheus_server = w->client_ip;
 
     uint32_t prometheus_exporting_options;

@@ -57,6 +58,9 @@ inline int web_client_api_request_v1_allmetrics(RRDHOST *host, struct web_client
             else
                 format = 0;
         }
+        else if(!strcmp(name, "filter")) {
+            filter = value;
+        }
         else if(!strcmp(name, "server")) {
             prometheus_server = value;
         }

@@ -87,18 +91,19 @@ inline int web_client_api_request_v1_allmetrics(RRDHOST *host, struct web_client
     switch(format) {
         case ALLMETRICS_JSON:
             w->response.data->contenttype = CT_APPLICATION_JSON;
-            rrd_stats_api_v1_charts_allmetrics_json(host, w->response.data);
+            rrd_stats_api_v1_charts_allmetrics_json(host, filter, w->response.data);
             return HTTP_RESP_OK;
 
         case ALLMETRICS_SHELL:
             w->response.data->contenttype = CT_TEXT_PLAIN;
-            rrd_stats_api_v1_charts_allmetrics_shell(host, w->response.data);
+            rrd_stats_api_v1_charts_allmetrics_shell(host, filter, w->response.data);
             return HTTP_RESP_OK;
 
         case ALLMETRICS_PROMETHEUS:
             w->response.data->contenttype = CT_PROMETHEUS;
             rrd_stats_api_v1_charts_allmetrics_prometheus_single_host(
                 host
+                , filter
                 , w->response.data
                 , prometheus_server
                 , prometheus_prefix

@@ -111,6 +116,7 @@ inline int web_client_api_request_v1_allmetrics(RRDHOST *host, struct web_client
             w->response.data->contenttype = CT_PROMETHEUS;
             rrd_stats_api_v1_charts_allmetrics_prometheus_all_hosts(
                 host
+                , filter
                 , w->response.data
                 , prometheus_server
                 , prometheus_prefix
@@ -22,13 +22,17 @@ static inline size_t shell_name_copy(char *d, const char *s, size_t usable) {
 
 #define SHELL_ELEMENT_MAX 100
 
-void rrd_stats_api_v1_charts_allmetrics_shell(RRDHOST *host, BUFFER *wb) {
+void rrd_stats_api_v1_charts_allmetrics_shell(RRDHOST *host, const char *filter_string, BUFFER *wb) {
     analytics_log_shell();
+    SIMPLE_PATTERN *filter = simple_pattern_create(filter_string, NULL, SIMPLE_PATTERN_EXACT);
     rrdhost_rdlock(host);
 
     // for each chart
     RRDSET *st;
     rrdset_foreach_read(st, host) {
+        if (filter && !simple_pattern_matches(filter, st->name))
+            continue;
+
         calculated_number total = 0.0;
         char chart[SHELL_ELEMENT_MAX + 1];
         shell_name_copy(chart, st->name?st->name:st->id, SHELL_ELEMENT_MAX);

@@ -88,12 +92,14 @@ void rrd_stats_api_v1_charts_allmetrics_shell(RRDHOST *host, BUFFER *wb) {
     }
 
     rrdhost_unlock(host);
+    simple_pattern_free(filter);
 }
 
 // ----------------------------------------------------------------------------
 
-void rrd_stats_api_v1_charts_allmetrics_json(RRDHOST *host, BUFFER *wb) {
+void rrd_stats_api_v1_charts_allmetrics_json(RRDHOST *host, const char *filter_string, BUFFER *wb) {
     analytics_log_json();
+    SIMPLE_PATTERN *filter = simple_pattern_create(filter_string, NULL, SIMPLE_PATTERN_EXACT);
     rrdhost_rdlock(host);
 
     buffer_strcat(wb, "{");

@@ -104,6 +110,9 @@ void rrd_stats_api_v1_charts_allmetrics_json(RRDHOST *host, BUFFER *wb) {
     // for each chart
     RRDSET *st;
     rrdset_foreach_read(st, host) {
+        if (filter && !(simple_pattern_matches(filter, st->id) || simple_pattern_matches(filter, st->name)))
+            continue;
+
         if(rrdset_is_available_for_viewers(st)) {
             rrdset_rdlock(st);
 

@@ -160,5 +169,6 @@ void rrd_stats_api_v1_charts_allmetrics_json(RRDHOST *host, BUFFER *wb) {
 
     buffer_strcat(wb, "\n}");
     rrdhost_unlock(host);
+    simple_pattern_free(filter);
 }
 
@@ -15,7 +15,7 @@
 #define ALLMETRICS_JSON 3
 #define ALLMETRICS_PROMETHEUS_ALL_HOSTS 4
 
-extern void rrd_stats_api_v1_charts_allmetrics_json(RRDHOST *host, BUFFER *wb);
-extern void rrd_stats_api_v1_charts_allmetrics_shell(RRDHOST *host, BUFFER *wb);
+extern void rrd_stats_api_v1_charts_allmetrics_json(RRDHOST *host, const char *filter_string, BUFFER *wb);
+extern void rrd_stats_api_v1_charts_allmetrics_shell(RRDHOST *host, const char *filter_string, BUFFER *wb);
 
 #endif //NETDATA_API_ALLMETRICS_SHELL_H
@@ -711,6 +711,16 @@
             "default": "shell"
           }
         },
+        {
+          "name": "filter",
+          "in": "query",
+          "description": "Allows to filter charts out using simple patterns.",
+          "required": false,
+          "schema": {
+            "type": "string",
+            "format": "any text"
+          }
+        },
         {
           "name": "variables",
           "in": "query",
@@ -593,6 +593,13 @@ paths:
               - prometheus_all_hosts
               - json
             default: shell
+        - name: filter
+          in: query
+          description: Allows to filter charts out using simple patterns.
+          required: false
+          schema:
+            type: string
+            format: any text
         - name: variables
           in: query
           description: When enabled, netdata will expose various system
@@ -16,6 +16,7 @@ void free_temporary_host(RRDHOST *host)
 void *__wrap_free_temporary_host(RRDHOST *host)
 {
     (void) host;
     return NULL;
 }
 
@@ -16,6 +16,7 @@ void free_temporary_host(RRDHOST *host)
 void *__wrap_free_temporary_host(RRDHOST *host)
 {
     (void) host;
     return NULL;
 }
 
 RRDHOST *sql_create_host_by_uuid(char *hostname)