# Query formatting
API data queries need to be formatted before being returned to the caller. Using API parameters, the caller may define the format they wish to get back.
The following formats are supported:
| format | module | content type | description |
|---|---|---|---|
| `array` | ssv | application/json | a JSON array |
| `csv` | csv | text/plain | a text table, comma separated, with a header line (dimension names) and `\r\n` at the end of the lines |
| `csvjsonarray` | csv | application/json | a JSON array, with each row as another array (the first row has the dimension names) |
| `datasource` | json | application/json | a Google Visualization Provider datasource javascript callback |
| `datatable` | json | application/json | a Google datatable |
| `html` | csv | text/html | an html table |
| `json` | json | application/json | a JSON object |
| `jsonp` | json | application/json | a JSONP javascript callback |
| `markdown` | csv | text/plain | a markdown table |
| `ssv` | ssv | text/plain | a space separated list of values |
| `ssvcomma` | ssv | text/plain | a comma separated list of values |
| `tsv` | csv | text/plain | a TAB delimited csv (MS Excel flavor) |
For examples of each format, check the documentation of the respective module.
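
As a quick illustration, the same data query can be rendered in different formats by changing only the `format=` parameter. A minimal sketch, assuming the default local dashboard at `localhost:19999` and the `system.cpu` chart (use any chart available on your netdata server):

```bash
# the last minute of system.cpu, as a space separated list of values (ssv)
curl -Ss 'http://localhost:19999/api/v1/data?chart=system.cpu&after=-60&points=10&group=average&format=ssv'

# the same query, rendered as a markdown table
curl -Ss 'http://localhost:19999/api/v1/data?chart=system.cpu&after=-60&points=10&group=average&format=markdown'
```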
## Metadata with the `jsonwrap` option
All data queries can be encapsulated in a JSON object that carries metadata about the query and the results.
This is done by adding `options=jsonwrap` to the API URL (if there are other `options`, append `,jsonwrap` to the existing ones).
Such an object looks like this:
```bash
# curl -Ss 'https://registry.my-netdata.io/api/v1/data?chart=system.cpu&after=-3600&points=6&group=average&format=csv&options=nonzero,jsonwrap'
{
   "api": 1,
   "id": "system.cpu",
   "name": "system.cpu",
   "view_update_every": 600,
   "update_every": 1,
   "first_entry": 1540387074,
   "last_entry": 1540647070,
   "before": 1540647000,
   "after": 1540644000,
   "dimension_names": ["steal", "softirq", "user", "system", "iowait"],
   "dimension_ids": ["steal", "softirq", "user", "system", "iowait"],
   "latest_values": [0, 0.2493766, 1.745636, 0.4987531, 0],
   "view_latest_values": [0.0158314, 0.0516506, 0.866549, 0.7196127, 0.0050002],
   "dimensions": 5,
   "points": 6,
   "format": "csv",
   "result": "time,steal,softirq,user,system,iowait\n2018-10-27 13:30:00,0.0158314,0.0516506,0.866549,0.7196127,0.0050002\n2018-10-27 13:20:00,0.0149856,0.0529183,0.8673155,0.7121144,0.0049979\n2018-10-27 13:10:00,0.0137501,0.053315,0.8578097,0.7197613,0.0054209\n2018-10-27 13:00:00,0.0154252,0.0554688,0.899432,0.7200638,0.0067252\n2018-10-27 12:50:00,0.0145866,0.0495922,0.8404341,0.7011141,0.0041688\n2018-10-27 12:40:00,0.0162366,0.0595954,0.8827475,0.7020573,0.0041636\n",
   "min": 0,
   "max": 0
}
```
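
Since the response is a single JSON object, the metadata fields can be inspected with standard JSON tooling. A minimal sketch, assuming `jq` is installed locally (the query is the same one shown above):

```bash
# print only the per-dimension latest values reported in the jsonwrap metadata
curl -Ss 'https://registry.my-netdata.io/api/v1/data?chart=system.cpu&after=-3600&points=6&group=average&format=csv&options=nonzero,jsonwrap' \
  | jq '.view_latest_values'
```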
## Downloading data query result files
Following the Google Visualization Provider guidelines, netdata supports parsing `tqx` options. Using these options, any netdata data query can instruct the web browser to download the result and save it under a given filename.
This is done by appending `&tqx=outFileName:FILENAME` to any data query. The output will be in the format given with `&format=`. For example, a CSV query for the CPU utilization of the last hour can be turned into a downloadable file this way.
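
A minimal sketch of such a URL, assuming the default local dashboard at `localhost:19999` (the chart and the filename `cpu_last_hour.csv` are only examples); opening it in a browser downloads the CSV result under that filename:

```
http://localhost:19999/api/v1/data?chart=system.cpu&after=-3600&format=csv&options=nonzero&tqx=outFileName:cpu_last_hour.csv
```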