mirror of
https://github.com/netdata/netdata.git
synced 2025-04-03 04:55:33 +00:00
WIP - Netdata v2 (#18125)
* split claiming into multiple files; WIP claiming with api
* pidfile is now dynamically allocated
* netdata_exe_path is now dynamically allocated
* remove ENABLE_CLOUD and ENABLE_ACLK
* fix compilation
* remove ENABLE_HTTPS and ENABLE_OPENSSL
* remove the ability to disable cloud
* remove netdata_cloud_enabled variable; split rooms into a json array
* global libcurl initialization
* detect common claiming errors
* more common claiming errors
* finished claiming via API
* same as before
* same as before
* remove the old claiming logic that runs the claim script
* working claim.conf
* cleanup
* fix log message; default proxy is env
* fix log message
* remove netdata-claim.sh from run.sh
* remove netdata-claim.sh from everywhere, except kickstart scripts
* create cloud.d if it does not exist
* better error handling and logging
* handle proxy disable
* merged master
* fix cmakelists for new files
* left-overs removal
* Include libcurl in required dependencies.
* Fix typo in dependency script.
* Use pkg-config for finding cURL. This properly handles transitive dependencies, unlike the FindCURL module.
* netdata installer writes claiming info to /etc/netdata/claim.conf
* remove claim from netdata
* add libcurl to windows packages
* add libcurl to windows packages
* compile-on-windows.sh installs too
* add NODE_ID streaming back to child and INDIRECT cloud status
* log child kill on windows
* fixes for spawn server on windows to ensure we have a valid pid and the process is properly terminated
* better handling of windows process exit codes
* pass the cloud url from parents to children
* add retries and timeout to claiming curl request
* remove FILE * from plugins.d
* spawn-tester to unit-test spawned-process communication
* spawn-tester now tests FILE pointer I/O
* external plugins run in posix mode
* set blocking I/O on all pipes
* working spawn server on windows
* latest changes in spawn_popen applied to linux tools
* push environment
* repeated tests of fds
* export variable CYGWIN_BASE_PATH
* renamed to NETDATA_CYGWIN_BASE_PATH
* added cmd and help to adapt the command and the information to be presented to users during claiming
* split spawn server versions into files
* restored the libuv-based spawn server
* working libuv-based spawn server
* fixes in libuv for windows
* working spawn server based on posix_spawn()
* fix fd leaks on all spawn servers
* fixed windows spawn server
* fix signal handling to ensure proper cooperation with libuv
* switched windows to posix_spawn()-based spawn server
* improvement on libuv version
* callocz() event loop
* simplification of libuv spawn server
* minor fixes in libuv and spawn tester
* api split into parts and separated by version; introduced /api/v3; no changes to old /api/v1 and /api/v2
* completed APIs splitting
* function renames
* remove dead code
* split basic functions into a directory
* execute external plugins in nofork spawn server with posix_spawn() for improved performance
* reset signals when using posix_spawn()
* fix spawn server logs and log cmdline in posix server
* bearer_get_token() implemented as function
* agent cloud status now exposes parent claim_id in indirect mode
* fixes for node id streaming from parent to children
* extract claimed id to separate file
* claim_id is no longer in host structure; there is a global claim_id for this agent and there are parent and origin claim ids in host structure
* fix issue on older compilers
* implement /api/v3 using calls from v1 and v2
* prevent asan leaks on local-sockets callback
* codacy fixes
* moved claim web api to web/api/v2
* when the agent is offline, prefer indirect connection when available; log a warning when a node changes node id
* improve inheritance of claim id from parent
* claim_id for bearer tokens should match any of the known claim ids
* aclk_connected replaced with functions
* aclk api can now be limited to node information, implementing [cloud].scope = license manager
* comment out most options in stream.conf so that internal defaults will be applied
* respect negative matches for send charts matching
* hidden functions are not accessible via the API; the bearer_get_token function checks that the request is coming from Netdata Cloud
* /api/v3/settings API
* added error logs to settings api
* saving and loading of bearer tokens
* Fix parameter when calling send_to_plugin
* Prevent overflow
* expose struct parser and typedef PARSER to enforce strict type checking on send_to_plugin()
* ensure the parser will not go away randomly from the receiver - it is now cleared when the receiver lock is acquired; also ensure the output sockets are set in the parser as long as the parser runs
* Add newline
* Send parent claim id downstream
* do not send anything when nodeid is zero
* code re-organization and cleanup
* add aclk capabilities, nodes summary, api version and protection to /api/v2,3/info
* added /api/v3/me which returns information about the current user
* make /api/v3/info accessible always
* Partially revert "remove netdata-claim.sh from everywhere, except kickstart scripts". Due to how we handle files in our static builds and local builds, we actually need to continue installing `netdata-claim.sh` to enable a seamless transition to the new claiming mechanism without breaking compatibility with existing installs or existing automation tooling that is directly invoking the claiming script. The script itself will be rewritten in a subsequent commit to simply wrap the new claiming methodology, together with some additional changes to ensure that a warning is issued if the script is invoked by anything other than the kickstart script.
* Rewrite claiming script to use new claiming method.
* Revert "netdata installer writes claiming info to /etc/netdata/claim.conf". Same reasoning as for 2e27bedb3fbf9df523bff407f2e8c8428e350e38. We need to keep the old claiming support code in the kickstart script for the foreseeable future so that existing installs can still be claimed, since the kickstart script is _NOT_ versioned with the agent. A later commit will add native support for the new claiming method and use that in preference to the claiming script if it appears to be available.
* Add support for new claiming method to kickstart.sh. This adds native support to the kickstart script to use the new claiming method without depending on the claiming script, as well as adding a few extra tweaks to the claiming script to enable it to better handle the transition. Expected behavior is for the kickstart script to use the new claiming code path if the claiming script is either not installed, or does not contain the specific string `%%NEW_CLAIMING_METHOD%%`. This way we will skip the claiming script on systems which have the updated copy that uses the new claiming approach, which should keep kickstart behavior consistent with what Netdata itself supports.
* Depend on JSON-C 0.14 as a minimum supported version. Needed for uint64 functions.
* Fix claiming option validation in kickstart script.
* do not cache auth in web client
* reuse bearer tokens when the request to create one matches an existing one
* dictionaries dfe loops now allow using return statement
* bearer token files are now tied to specific agents by including the machine guid of the agent in them
* systemd journal now respects facets and disables the default facets when not given
* fixed commands.c
* restored log for not opening config file
* Fix Netdata group templating for claiming script.
* Warn on failed templating in claiming script.
* Make `--require-cloud` a silent no-op. We don't need to warn users that it does nothing; we should just have it do nothing.
* added debugging info to claiming
* log also the response
* do not send double / at the url
* properly remove keyword from parameters
* disable debug during claiming
* fix log messages
* Update packaging/installer/kickstart.sh
* Update packaging/installer/kickstart.sh
* implemented POST request payload parsing for systemd-journal
* added missing reset of facets in json parsing
* JSON payload does not need hashes any more; it can accept the raw values

---------

Co-authored-by: Ilya Mashchenko <ilya@netdata.cloud>
Co-authored-by: Austin S. Hemmelgarn <austin@netdata.cloud>
Co-authored-by: Stelios Fragkakis <52996999+stelfrag@users.noreply.github.com>
Co-authored-by: Austin S. Hemmelgarn <ahferroin7@gmail.com>
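The kickstart fallback described in the commit message (use the built-in claiming code path when the claiming script is missing, or when it lacks the `%%NEW_CLAIMING_METHOD%%` marker) can be sketched roughly as follows. This is an illustrative sketch only: the function name and layout are hypothetical, not the actual kickstart.sh code.

```shell
#!/bin/sh
# Sketch: should kickstart use its own (new) claiming code path instead of
# invoking netdata-claim.sh? The marker string comes from the commit message;
# the function name is invented for this example.
use_new_claiming_path() {
  script="$1"
  # New path when the claiming script is not installed at all...
  [ -f "${script}" ] || return 0
  # ...or when it does not carry the %%NEW_CLAIMING_METHOD%% marker,
  # i.e. it is an old copy that still implements the legacy claiming.
  grep -q '%%NEW_CLAIMING_METHOD%%' "${script}" && return 1
  return 0
}
```

A script containing the marker is treated as an updated wrapper, so the decision reduces to a file-existence check plus a `grep -q` for the marker.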
This commit is contained in:
parent 4e18df9e21
commit 4c0122063f

271 changed files with 12851 additions and 9416 deletions
Changed files (listing truncated):

.github
.gitignore
CMakeLists.txt
netdata-installer.sh
netdata.spec.in
packaging
src/
  aclk/
    README.md, aclk.c, aclk.h, aclk_capas.c, aclk_capas.h, aclk_otp.c,
    aclk_proxy.c, aclk_query.c, aclk_rx_msgs.c, aclk_tx_msgs.c,
    aclk_util.c, aclk_util.h
  claim/
    README.md, claim-with-api.c, claim.c, claim.h, claim_id.c, claim_id.h,
    cloud-conf.c, cloud-status.c, cloud-status.h, netdata-claim.sh.in
  collectors/
    apps.plugin, cgroups.plugin, diskspace.plugin, freeipmi.plugin,
    plugins.d, proc.plugin, statsd.plugin, systemd-journal.plugin, tc.plugin
  daemon/
    README.md, analytics.c, analytics.h, buildinfo.c, commands.c, commands.h,
    common.c, common.h, config/, daemon.c, daemon.h, environment.c,
    libuv_workers.c, libuv_workers.h, main.c, service.c, signals.c, signals.h,
    static_threads.c, unit_test.c, winsvc.cc
  database/
.github/dockerfiles/Dockerfile.clang (vendored, 2 changes)

@@ -16,4 +16,4 @@ WORKDIR /netdata
 COPY . .

 # Build Netdata
-RUN ./netdata-installer.sh --dont-wait --dont-start-it --disable-go --require-cloud
+RUN ./netdata-installer.sh --dont-wait --dont-start-it --disable-go
.github/workflows/build.yml (vendored, 12 changes)

@@ -930,24 +930,18 @@ jobs:
         id: load
         if: needs.file-check.outputs.run == 'true'
         run: docker load --input image.tar
-      - name: netdata-installer on ${{ matrix.distro }}, disable cloud
-        id: build-no-cloud
-        if: needs.file-check.outputs.run == 'true'
-        run: |
-          docker run --security-opt seccomp=unconfined -w /netdata test:${{ matrix.artifact_key }} \
-            /bin/sh -c './netdata-installer.sh --dont-wait --dont-start-it --disable-cloud --one-time-build ${{ needs.file-check.outputs.skip-go }}'
       - name: netdata-installer on ${{ matrix.distro }}, require cloud
         id: build-cloud
         if: needs.file-check.outputs.run == 'true'
         run: |
           docker run --security-opt seccomp=unconfined -w /netdata test:${{ matrix.artifact_key }} \
-            /bin/sh -c './netdata-installer.sh --dont-wait --dont-start-it --require-cloud --one-time-build ${{ needs.file-check.outputs.skip-go }}'
+            /bin/sh -c './netdata-installer.sh --dont-wait --dont-start-it --one-time-build ${{ needs.file-check.outputs.skip-go }}'
       - name: netdata-installer on ${{ matrix.distro }}, require cloud, no JSON-C
         id: build-no-jsonc
         if: matrix.jsonc_removal != '' && needs.file-check.outputs.run == 'true'
         run: |
           docker run --security-opt seccomp=unconfined -w /netdata test:${{ matrix.artifact_key }} \
-            /bin/sh -c '/rmjsonc.sh && ./netdata-installer.sh --dont-wait --dont-start-it --require-cloud --one-time-build ${{ needs.file-check.outputs.skip-go }}'
+            /bin/sh -c '/rmjsonc.sh && ./netdata-installer.sh --dont-wait --dont-start-it --one-time-build ${{ needs.file-check.outputs.skip-go }}'
       - name: Failure Notification
         uses: rtCamp/action-slack-notify@v2
         env:

@@ -1015,7 +1009,7 @@ jobs:
         id: build-source
         if: needs.file-check.outputs.run == 'true'
         run: |
-          sudo bash ./netdata-installer.sh --install-no-prefix /usr/local/netdata --dont-wait --dont-start-it --require-cloud --one-time-build
+          sudo bash ./netdata-installer.sh --install-no-prefix /usr/local/netdata --dont-wait --dont-start-it --one-time-build
       - name: Test Agent start up
         id: test-agent
         if: needs.file-check.outputs.run == 'true'
.gitignore (vendored, 1 change)

@@ -88,7 +88,6 @@ system/systemd/netdata-updater.service
 !system/systemd/netdata.service.*.in

 src/health/notifications/alarm-notify.sh
-claim/netdata-claim.sh
 src/collectors/cgroups.plugin/cgroup-name.sh
 src/collectors/cgroups.plugin/cgroup-network-helper.sh
 src/collectors/tc.plugin/tc-qos-helper.sh
CMakeLists.txt (168 changes)

@@ -147,8 +147,6 @@ option(DEFAULT_FEATURE_STATE "Specify the default state for most optional featur
 mark_as_advanced(DEFAULT_FEATURE_STATE)

 # High-level features
-option(ENABLE_ACLK "Enable Netdata Cloud support (ACLK)" ${DEFAULT_FEATURE_STATE})
-option(ENABLE_CLOUD "Enable Netdata Cloud by default at runtime" ${DEFAULT_FEATURE_STATE})
 option(ENABLE_ML "Enable machine learning features" ${DEFAULT_FEATURE_STATE})
 option(ENABLE_DBENGINE "Enable dbengine metrics storage" True)

@@ -198,11 +196,7 @@ mark_as_advanced(BUILD_FOR_PACKAGING)
 cmake_dependent_option(FORCE_LEGACY_LIBBPF "Force usage of libbpf 0.0.9 instead of the latest version." False "ENABLE_PLUGIN_EBPF" False)
 mark_as_advanced(FORCE_LEGACY_LIBBPF)

-if(ENABLE_ACLK OR ENABLE_EXPORTER_PROMETHEUS_REMOTE_WRITE)
-  set(NEED_PROTOBUF True)
-else()
-  set(NEED_PROTOBUF False)
-endif()
+set(NEED_PROTOBUF True)

 if(ENABLE_PLUGIN_GO)
   include(NetdataGoTools)

@@ -261,6 +255,9 @@ if(ENABLE_PLUGIN_EBPF)
   netdata_fetch_ebpf_co_re()
 endif()

+pkg_check_modules(CURL libcurl>=7.21 REQUIRED IMPORTED_TARGET)
+set(HAVE_LIBCURL TRUE)
+
 #
 # Libm
 #

@@ -524,7 +521,6 @@ if(OS_FREEBSD OR OS_MACOS)
 endif()

 # openssl/crypto
-set(ENABLE_OPENSSL True)
 pkg_check_modules(TLS IMPORTED_TARGET openssl)

 if(NOT TARGET PkgConfig::TLS)

@@ -743,14 +739,23 @@ set(LIBNETDATA_FILES
 src/libnetdata/os/setenv.h
 src/libnetdata/os/strndup.c
 src/libnetdata/os/strndup.h
 src/libnetdata/spawn_server/spawn_server.c
 src/libnetdata/spawn_server/spawn_server_nofork.c
 src/libnetdata/spawn_server/spawn_server.h
 src/libnetdata/spawn_server/spawn_popen.c
 src/libnetdata/spawn_server/spawn_popen.h
 src/libnetdata/spawn_server/spawn_server_windows.c
 src/libnetdata/spawn_server/spawn_server_internals.h
 src/libnetdata/spawn_server/spawn_server_libuv.c
 src/libnetdata/spawn_server/spawn_server_posix.c
 src/libnetdata/spawn_server/spawn_library.c
 src/libnetdata/spawn_server/spawn_library.h
 src/libnetdata/os/close_range.c
 src/libnetdata/os/close_range.h
 src/libnetdata/os/setproctitle.c
 src/libnetdata/os/setproctitle.h
 src/libnetdata/paths/paths.c
 src/libnetdata/paths/paths.h
 src/libnetdata/json/json-c-parser-inline.c
 )

@@ -849,14 +854,15 @@ set(DAEMON_FILES
 src/daemon/common.h
 src/daemon/daemon.c
 src/daemon/daemon.h
 src/daemon/event_loop.c
 src/daemon/event_loop.h
 src/daemon/libuv_workers.c
 src/daemon/libuv_workers.h
 src/daemon/global_statistics.c
 src/daemon/global_statistics.h
 src/daemon/analytics.c
 src/daemon/analytics.h
 src/daemon/main.c
 src/daemon/main.h
 src/daemon/environment.c
 src/daemon/win_system-info.c
 src/daemon/win_system-info.h
 src/daemon/signals.c

@@ -905,16 +911,65 @@ set(API_PLUGIN_FILES
 src/web/api/web_api_v1.h
 src/web/api/web_api_v2.c
 src/web/api/web_api_v2.h
 src/web/api/web_api_v3.c
 src/web/api/web_api_v3.h
 src/web/api/http_auth.c
 src/web/api/http_auth.h
 src/web/api/http_header.c
 src/web/api/http_header.h
 src/web/api/badges/web_buffer_svg.c
 src/web/api/badges/web_buffer_svg.h
 src/web/api/exporters/allmetrics.c
 src/web/api/exporters/allmetrics.h
 src/web/api/exporters/shell/allmetrics_shell.c
 src/web/api/exporters/shell/allmetrics_shell.h
 src/web/api/maps/rrdr_options.c
 src/web/api/maps/rrdr_options.h
 src/web/api/maps/contexts_options.c
 src/web/api/maps/contexts_options.h
 src/web/api/maps/datasource_formats.c
 src/web/api/maps/datasource_formats.h
 src/web/api/maps/maps.h
 src/web/api/maps/contexts_alert_statuses.c
 src/web/api/maps/contexts_alert_statuses.h
 src/web/api/v1/api_v1_allmetrics.c
 src/web/api/v1/api_v1_badge/web_buffer_svg.c
 src/web/api/v1/api_v1_function.c
 src/web/api/v1/api_v1_manage.c
 src/web/api/v1/api_v1_calls.h
 src/web/api/v1/api_v1_dbengine.c
 src/web/api/v1/api_v1_config.c
 src/web/api/v1/api_v1_functions.c
 src/web/api/v1/api_v1_weights.c
 src/web/api/v1/api_v1_info.c
 src/web/api/v1/api_v1_registry.c
 src/web/api/v1/api_v1_data.c
 src/web/api/v1/api_v1_contexts.c
 src/web/api/v1/api_v1_ml_info.c
 src/web/api/v1/api_v1_aclk.c
 src/web/api/v1/api_v1_context.c
 src/web/api/v1/api_v1_alarms.c
 src/web/api/v1/api_v1_charts.c
 src/web/api/v2/api_v2_info.c
 src/web/api/v2/api_v2_nodes.c
 src/web/api/v2/api_v2_node_instances.c
 src/web/api/v2/api_v2_q.c
 src/web/api/v2/api_v2_versions.c
 src/web/api/v2/api_v2_functions.c
 src/web/api/v2/api_v2_alerts.c
 src/web/api/v2/api_v2_alert_transitions.c
 src/web/api/v2/api_v2_ilove/ilove.c
 src/web/api/v2/api_v2_bearer.c
 src/web/api/v2/api_v2_calls.h
 src/web/api/v2/api_v2_data.c
 src/web/api/v2/api_v2_progress.c
 src/web/api/v2/api_v2_weights.c
 src/web/api/v2/api_v2_alert_config.c
 src/web/api/v2/api_v2_contexts.c
 src/web/api/v2/api_v2_claim.c
 src/web/api/v2/api_v2_webrtc.c
 src/web/api/v3/api_v3_calls.h
 src/web/api/v3/api_v3_settings.c
 src/web/api/functions/functions.c
 src/web/api/functions/functions.h
 src/web/api/functions/function-progress.c
 src/web/api/functions/function-progress.h
 src/web/api/functions/function-streaming.c
 src/web/api/functions/function-streaming.h
 src/web/api/queries/rrdr.c
 src/web/api/queries/rrdr.h
 src/web/api/queries/query.c

@@ -961,10 +1016,11 @@ set(API_PLUGIN_FILES
 src/web/api/formatters/charts2json.h
 src/web/api/formatters/rrdset2json.c
 src/web/api/formatters/rrdset2json.h
 src/web/api/ilove/ilove.c
 src/web/api/ilove/ilove.h
 src/web/rtc/webrtc.c
 src/web/rtc/webrtc.h
 src/web/api/functions/function-bearer_get_token.c
 src/web/api/functions/function-bearer_get_token.h
 src/web/api/v3/api_v3_me.c
 )

 set(EXPORTING_ENGINE_FILES

@@ -1055,8 +1111,14 @@ set(PLUGINSD_PLUGIN_FILES
 )

 set(RRD_PLUGIN_FILES
 src/database/contexts/api_v1.c
 src/database/contexts/api_v2.c
 src/database/contexts/api_v1_contexts.c
 src/database/contexts/api_v2_contexts.c
 src/database/contexts/api_v2_contexts.h
 src/database/contexts/api_v2_contexts_agents.c
 src/database/contexts/api_v2_contexts_alerts.c
 src/database/contexts/api_v2_contexts_alerts.h
 src/database/contexts/api_v2_contexts_alert_transitions.c
 src/database/contexts/api_v2_contexts_alert_config.c
 src/database/contexts/context.c
 src/database/contexts/instance.c
 src/database/contexts/internal.h

@@ -1073,10 +1135,6 @@ set(RRD_PLUGIN_FILES
 src/database/rrdfunctions.h
 src/database/rrdfunctions-inline.c
 src/database/rrdfunctions-inline.h
-src/database/rrdfunctions-progress.c
-src/database/rrdfunctions-progress.h
-src/database/rrdfunctions-streaming.c
-src/database/rrdfunctions-streaming.h
 src/database/rrdhost.c
 src/database/rrdlabels.c
 src/database/rrd.c

@@ -1200,6 +1258,10 @@ set(STREAMING_PLUGIN_FILES
 src/streaming/replication.c
 src/streaming/replication.h
 src/streaming/common.h
+src/streaming/protocol/command-nodeid.c
+src/streaming/protocol/commands.c
+src/streaming/protocol/commands.h
+src/streaming/protocol/command-claimed_id.c
 )

 set(WEB_PLUGIN_FILES

@@ -1216,6 +1278,12 @@ set(WEB_PLUGIN_FILES
 set(CLAIM_PLUGIN_FILES
   src/claim/claim.c
   src/claim/claim.h
+  src/claim/claim_id.c
+  src/claim/claim_id.h
+  src/claim/cloud-conf.c
+  src/claim/claim-with-api.c
+  src/claim/cloud-status.c
+  src/claim/cloud-status.h
 )

 set(ACLK_ALWAYS_BUILD

@@ -1677,10 +1745,7 @@ endif()
 #
 # mqtt library
 #
-if (ENABLE_H2O OR ENABLE_ACLK)
-  set(ENABLE_MQTTWEBSOCKETS True)
-endif()
+set(ENABLE_MQTTWEBSOCKETS True)

 if(ENABLE_MQTTWEBSOCKETS)
   add_library(mqttwebsockets STATIC ${MQTT_WEBSOCKETS_FILES})

@@ -1695,20 +1760,17 @@ if(ENABLE_MQTTWEBSOCKETS)
 endif()

-if(ENABLE_ACLK)
-  #
-  # proto definitions
-  #
-  netdata_protoc_generate_cpp("${CMAKE_SOURCE_DIR}/src/aclk/aclk-schemas"
-      "${CMAKE_SOURCE_DIR}/src/aclk/aclk-schemas"
-      ACLK_PROTO_BUILT_SRCS
-      ACLK_PROTO_BUILT_HDRS
-      ${ACLK_PROTO_DEFS})
+#
+# proto definitions
+#
+netdata_protoc_generate_cpp("${CMAKE_SOURCE_DIR}/src/aclk/aclk-schemas"
+    "${CMAKE_SOURCE_DIR}/src/aclk/aclk-schemas"
+    ACLK_PROTO_BUILT_SRCS
+    ACLK_PROTO_BUILT_HDRS
+    ${ACLK_PROTO_DEFS})

-  list(APPEND ACLK_FILES ${ACLK_PROTO_BUILT_SRCS}
-      ${ACLK_PROTO_BUILT_HDRS})
-endif()
+list(APPEND ACLK_FILES ${ACLK_PROTO_BUILT_SRCS}
+    ${ACLK_PROTO_BUILT_HDRS})

 #
 # build plugins

@@ -1740,6 +1802,9 @@ if(ENABLE_PLUGIN_DEBUGFS)
   endif()
 endif()

+add_executable(spawn-tester src/libnetdata/spawn_server/spawn-tester.c)
+target_link_libraries(spawn-tester libnetdata)
+
 if(ENABLE_PLUGIN_APPS)
   pkg_check_modules(CAP QUIET libcap)

@@ -2164,7 +2229,7 @@ endif()

 add_executable(netdata
     ${NETDATA_FILES}
-    "$<$<BOOL:${ENABLE_ACLK}>:${ACLK_FILES}>"
+    "${ACLK_FILES}"
     "$<$<BOOL:${ENABLE_H2O}>:${H2O_FILES}>"
    "$<$<BOOL:${ENABLE_EXPORTER_MONGODB}>:${MONGODB_EXPORTING_FILES}>"
    "$<$<BOOL:${ENABLE_EXPORTER_PROMETHEUS_REMOTE_WRITE}>:${PROMETHEUS_REMOTE_WRITE_EXPORTING_FILES}>"

@@ -2180,7 +2245,7 @@ target_compile_options(netdata PRIVATE
 )

 target_include_directories(netdata PRIVATE
-    "$<$<BOOL:${ENABLE_ACLK}>:${CMAKE_SOURCE_DIR}/src/aclk/aclk-schemas>"
+    "${CMAKE_SOURCE_DIR}/src/aclk/aclk-schemas"
     "$<$<BOOL:${ENABLE_EXPORTER_MONGODB}>:${MONGOC_INCLUDE_DIRS}>"
     "$<$<BOOL:${ENABLE_EXPORTER_PROMETHEUS_REMOTE_WRITE}>:${SNAPPY_INCLUDE_DIRS}>"
 )

@@ -2196,6 +2261,7 @@ target_link_libraries(netdata PRIVATE
     "$<$<BOOL:${ENABLE_SENTRY}>:sentry>"
     "$<$<BOOL:${ENABLE_WEBRTC}>:LibDataChannel::LibDataChannelStatic>"
     "$<$<BOOL:${ENABLE_H2O}>:h2o>"
+    "$<$<BOOL:${CURL_FOUND}>:PkgConfig::CURL>"
 )

 if(NEED_PROTOBUF)

@@ -2349,19 +2415,7 @@ set(cachedir_POST "${NETDATA_RUNTIME_PREFIX}/var/cache/netdata")
 set(registrydir_POST "${NETDATA_RUNTIME_PREFIX}/var/lib/netdata/registry")
 set(varlibdir_POST "${NETDATA_RUNTIME_PREFIX}/var/lib/netdata")
 set(netdata_user_POST "${NETDATA_USER}")

-# netdata-claim.sh
-if(ENABLE_CLOUD)
-  set(enable_cloud_POST "yes")
-else()
-  set(enable_cloud_POST "no")
-endif()
-
-if(ENABLE_ACLK)
-  set(enable_aclk_POST "yes")
-else()
-  set(enable_aclk_POST "no")
-endif()
 set(netdata_group_POST "${NETDATA_USER}")

 configure_file(src/claim/netdata-claim.sh.in src/claim/netdata-claim.sh @ONLY)
 install(PROGRAMS
@@ -202,12 +202,9 @@ USAGE: ${PROGRAM} [options]
   --nightly-channel          Use most recent nightly updates instead of GitHub releases.
                              This results in more frequent updates.
   --disable-ebpf             Disable eBPF Kernel plugin. Default: enabled.
-  --disable-cloud            Disable all Netdata Cloud functionality.
-  --require-cloud            Fail the install if it can't build Netdata Cloud support.
   --force-legacy-cxx         Force usage of an older C++ standard to allow building on older systems. This will usually be autodetected.
   --enable-plugin-freeipmi   Enable the FreeIPMI plugin. Default: enable it when libipmimonitoring is available.
   --disable-plugin-freeipmi  Explicitly disable the FreeIPMI plugin.
-  --disable-https            Explicitly disable TLS support.
   --disable-dbengine         Explicitly disable DB engine support.
   --enable-plugin-go         Enable the Go plugin. Default: Enabled when possible.
   --disable-plugin-go        Disable the Go plugin.

@@ -257,7 +254,6 @@ NETDATA_ENABLE_ML=""
 ENABLE_DBENGINE=1
 ENABLE_GO=1
 ENABLE_H2O=1
-ENABLE_CLOUD=1
 FORCE_LEGACY_CXX=0
 NETDATA_CMAKE_OPTIONS="${NETDATA_CMAKE_OPTIONS-}"

@@ -279,9 +275,7 @@ while [ -n "${1}" ]; do
     "--enable-plugin-freeipmi") ENABLE_FREEIPMI=1 ;;
     "--disable-plugin-freeipmi") ENABLE_FREEIPMI=0 ;;
     "--disable-https")
-      ENABLE_DBENGINE=0
-      ENABLE_H2O=0
-      ENABLE_CLOUD=0
+      warning "HTTPS cannot be disabled."
       ;;
     "--disable-dbengine") ENABLE_DBENGINE=0 ;;
     "--enable-plugin-go") ENABLE_GO=1 ;;

@@ -328,21 +322,9 @@ while [ -n "${1}" ]; do
       # XXX: No longer supported
       ;;
-    "--disable-cloud")
-      if [ -n "${NETDATA_REQUIRE_CLOUD}" ]; then
-        warning "Cloud explicitly enabled, ignoring --disable-cloud."
-      else
-        ENABLE_CLOUD=0
-        NETDATA_DISABLE_CLOUD=1
-      fi
-      ;;
-    "--require-cloud")
-      if [ -n "${NETDATA_DISABLE_CLOUD}" ]; then
-        warning "Cloud explicitly disabled, ignoring --require-cloud."
-      else
-        ENABLE_CLOUD=1
-        NETDATA_REQUIRE_CLOUD=1
-      fi
-      ;;
+    "--disable-cloud")
+      warning "Cloud cannot be disabled."
+      ;;
+    "--require-cloud") ;;
     "--build-json-c")
       NETDATA_BUILD_JSON_C=1
       ;;
@ -388,8 +388,6 @@ happened, on your systems and applications.
|
|||
%else
|
||||
-DENABLE_EXPORTER_MONGODB=Off \
|
||||
%endif
|
||||
-DENABLE_ACLK=On \
|
||||
-DENABLE_CLOUD=On \
|
||||
-DENABLE_DBENGINE=On \
|
||||
-DENABLE_H2O=On \
|
||||
-DENABLE_PLUGIN_APPS=On \
|
||||
|
|
|
@@ -26,8 +26,6 @@ add_cmake_option() {

 add_cmake_option CMAKE_BUILD_TYPE RelWithDebInfo
 add_cmake_option CMAKE_INSTALL_PREFIX /
-add_cmake_option ENABLE_ACLK On
-add_cmake_option ENABLE_CLOUD On
 add_cmake_option ENABLE_DBENGINE On
 add_cmake_option ENABLE_H2O On
 add_cmake_option ENABLE_ML On
@@ -71,7 +71,7 @@ endfunction()
 # NETDATA_JSONC_* variables for later use.
 macro(netdata_detect_jsonc)
   if(NOT ENABLE_BUNDLED_JSONC)
-    pkg_check_modules(JSONC json-c)
+    pkg_check_modules(JSONC json-c>=0.14)
   endif()

   if(NOT JSONC_FOUND)
@@ -67,6 +67,7 @@
 #cmakedefine HAVE_GETPRIORITY
 #cmakedefine HAVE_SETENV
 #cmakedefine HAVE_DLSYM
+#cmakedefine HAVE_LIBCURL

 #cmakedefine HAVE_BACKTRACE
 #cmakedefine HAVE_CLOSE_RANGE

@@ -103,14 +104,10 @@
 // enabled features

-#cmakedefine ENABLE_OPENSSL
-#cmakedefine ENABLE_CLOUD
-#cmakedefine ENABLE_ACLK
 #cmakedefine ENABLE_ML
 #cmakedefine ENABLE_EXPORTING_MONGODB
 #cmakedefine ENABLE_H2O
 #cmakedefine ENABLE_DBENGINE
-#cmakedefine ENABLE_HTTPS
 #cmakedefine ENABLE_LZ4
 #cmakedefine ENABLE_ZSTD
 #cmakedefine ENABLE_BROTLI

@@ -182,7 +179,6 @@
 // #cmakedefine ENABLE_PROMETHEUS_REMOTE_WRITE

 // /* NSA spy stuff */
-// #define ENABLE_HTTPS 1
 // #cmakedefine01 HAVE_X509_VERIFY_PARAM_set1_host

 #define HAVE_CRYPTO
@@ -345,7 +345,6 @@ def static_build_netdata(
         "--dont-wait",
         "--dont-start-it",
         "--disable-exporting-mongodb",
-        "--require-cloud",
         "--use-system-protobuf",
         "--dont-scrub-cflags-even-though-it-may-break-things",
         "--one-time-build",
@@ -47,7 +47,6 @@ RUN mkdir -p /app/usr/sbin/ \
     mv /var/lib/netdata /app/var/lib/ && \
     mv /etc/netdata /app/etc/ && \
     mv /usr/sbin/netdata /app/usr/sbin/ && \
-    mv /usr/sbin/netdata-claim.sh /app/usr/sbin/ && \
     mv /usr/sbin/netdatacli /app/usr/sbin/ && \
     mv /usr/sbin/systemd-cat-native /app/usr/sbin/ && \
     mv packaging/docker/run.sh /app/usr/sbin/ && \
@@ -110,14 +110,4 @@ if [ -w "/etc/netdata" ]; then
   fi
 fi

-if [ -n "${NETDATA_CLAIM_URL}" ] && [ -n "${NETDATA_CLAIM_TOKEN}" ] && [ ! -f /var/lib/netdata/cloud.d/claimed_id ]; then
-  # shellcheck disable=SC2086
-  /usr/sbin/netdata-claim.sh -token="${NETDATA_CLAIM_TOKEN}" \
-                             -url="${NETDATA_CLAIM_URL}" \
-                             ${NETDATA_CLAIM_ROOMS:+-rooms="${NETDATA_CLAIM_ROOMS}"} \
-                             ${NETDATA_CLAIM_PROXY:+-proxy="${NETDATA_CLAIM_PROXY}"} \
-                             ${NETDATA_EXTRA_CLAIM_OPTS} \
-                             -daemon-not-running
-fi
-
 exec /usr/sbin/netdata -u "${DOCKER_USR}" -D -s /host -p "${NETDATA_LISTENER_PORT}" "$@"
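The claiming block removed from run.sh above relied on the POSIX `${var:+word}` expansion to append optional arguments only when the corresponding environment variable is non-empty. A minimal standalone illustration of that pattern (the helper name is invented for this example):

```shell
#!/bin/sh
# Build a claiming argument string the way the removed run.sh block did:
# mandatory -token/-url, optional -rooms/-proxy only when non-empty.
build_claim_args() {
  token="$1"; url="$2"; rooms="$3"; proxy="$4"
  # shellcheck disable=SC2086
  echo -token="${token}" -url="${url}" \
    ${rooms:+-rooms="${rooms}"} \
    ${proxy:+-proxy="${proxy}"}
}
```

When `rooms` or `proxy` is empty or unset, `${var:+word}` expands to nothing, so no stray empty argument is passed to the claiming tool.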
@@ -341,8 +341,6 @@ prepare_cmake_options() {
   enable_feature PLUGIN_NETWORK_VIEWER "${IS_LINUX}"
   enable_feature PLUGIN_EBPF "${ENABLE_EBPF:-0}"

-  enable_feature ACLK "${ENABLE_CLOUD:-1}"
-  enable_feature CLOUD "${ENABLE_CLOUD:-1}"
   enable_feature BUNDLED_JSONC "${NETDATA_BUILD_JSON_C:-0}"
   enable_feature DBENGINE "${ENABLE_DBENGINE:-1}"
   enable_feature H2O "${ENABLE_H2O:-1}"
@@ -825,6 +825,17 @@ declare -A pkg_libuuid_dev=(
   ['default']=""
 )

+declare -A pkg_libcurl_dev=(
+  ['alpine']="curl-dev"
+  ['arch']="curl"
+  ['clearlinux']="devpkg-curl"
+  ['debian']="libcurl4-openssl-dev"
+  ['gentoo']="net-misc/curl"
+  ['ubuntu']="libcurl4-openssl-dev"
+  ['macos']="curl"
+  ['default']="libcurl-devel"
+)
+
 declare -A pkg_libmnl_dev=(
   ['alpine']="libmnl-dev"
   ['arch']="libmnl"

@@ -1246,6 +1257,7 @@ packages() {
     suitable_package libyaml-dev
     suitable_package libsystemd-dev
     suitable_package pcre2
+    suitable_package libcurl-dev
   fi

   # -------------------------------------------------------------------------
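The `pkg_libcurl_dev` table added above follows the dependency script's pattern of one associative array per dependency, keyed by distro id with a `default` fallback entry. A simplified sketch of that lookup (bash 4+; `pkg_for` is an invented helper, not the real `suitable_package`, which does considerably more):

```shell
#!/usr/bin/env bash
# Per-distro package-name lookup with a 'default' fallback, mirroring the
# pkg_*_dev tables in the dependency script (abbreviated table).
declare -A pkg_libcurl_dev=(
  ['alpine']="curl-dev"
  ['debian']="libcurl4-openssl-dev"
  ['ubuntu']="libcurl4-openssl-dev"
  ['default']="libcurl-devel"
)

pkg_for() {
  local distro="$1"
  # Fall back to the 'default' entry when the distro has no explicit mapping.
  echo "${pkg_libcurl_dev[${distro}]:-${pkg_libcurl_dev['default']}}"
}
```

The `${array[key]:-fallback}` expansion is what lets one table cover both the explicitly-listed distros and everything else.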
@ -53,11 +53,9 @@ INSTALL_PREFIX=""
|
|||
NETDATA_AUTO_UPDATES="default"
|
||||
NETDATA_CLAIM_URL="https://app.netdata.cloud"
|
||||
NETDATA_COMMAND="default"
|
||||
NETDATA_DISABLE_CLOUD=0
|
||||
NETDATA_INSTALLER_OPTIONS=""
|
||||
NETDATA_FORCE_METHOD=""
|
||||
NETDATA_OFFLINE_INSTALL_SOURCE=""
|
||||
NETDATA_REQUIRE_CLOUD=1
|
||||
NETDATA_WARNINGS=""
|
||||
RELEASE_CHANNEL="default"
|
||||
|
||||
|
@@ -149,8 +147,6 @@ main() {

  if [ -n "${NETDATA_CLAIM_TOKEN}" ]; then
    claim
  elif [ "${NETDATA_DISABLE_CLOUD}" -eq 1 ]; then
    soft_disable_cloud
  fi

  set_auto_updates

@@ -185,8 +181,6 @@ USAGE: kickstart.sh [options]
  --native-only                Only install if native binary packages are available.
  --static-only                Only install if a static build is available.
  --build-only                 Only install using a local build.
  --disable-cloud              Disable support for Netdata Cloud (default: detect)
  --require-cloud              Only install if Netdata Cloud can be enabled. Overrides --disable-cloud.
  --install-prefix <path>      Specify an installation prefix for local builds (default: autodetect based on system type).
  --old-install-prefix <path>  Specify an old local builds installation prefix for uninstall/reinstall (if it's not default).
  --install-version <version>  Specify the version of Netdata to install.
@@ -1183,41 +1177,6 @@ handle_existing_install() {
  esac
}

soft_disable_cloud() {
  set_tmpdir

  cloud_prefix="${INSTALL_PREFIX}/var/lib/netdata/cloud.d"

  run_as_root mkdir -p "${cloud_prefix}"

  cat > "${tmpdir}/cloud.conf" << EOF
[global]
  enabled = no
EOF

  run_as_root cp "${tmpdir}/cloud.conf" "${cloud_prefix}/cloud.conf"

  if [ -z "${NETDATA_NO_START}" ]; then
    case "${SYSTYPE}" in
      Darwin) run_as_root launchctl kickstart -k com.github.netdata ;;
      FreeBSD) run_as_root service netdata restart ;;
      Linux)
        initpath="$(run_as_root readlink /proc/1/exe)"

        if command -v service > /dev/null 2>&1; then
          run_as_root service netdata restart
        elif command -v rc-service > /dev/null 2>&1; then
          run_as_root rc-service netdata restart
        elif [ "$(basename "${initpath}" 2> /dev/null)" = "systemd" ]; then
          run_as_root systemctl restart netdata
        elif [ -f /etc/init.d/netdata ]; then
          run_as_root /etc/init.d/netdata restart
        fi
        ;;
    esac
  fi
}

confirm_install_prefix() {
  if [ -n "${INSTALL_PREFIX}" ] && [ "${NETDATA_FORCE_METHOD}" != 'build' ]; then
    fatal "The --install-prefix option is only supported together with the --build-only option." F0204
@@ -1246,10 +1205,9 @@ check_claim_opts() {
  # shellcheck disable=SC2235,SC2030
  if [ -z "${NETDATA_CLAIM_TOKEN}" ] && [ -n "${NETDATA_CLAIM_ROOMS}" ]; then
    fatal "Invalid claiming options, claim rooms may only be specified when a token is specified." F0204
  elif [ -z "${NETDATA_CLAIM_TOKEN}" ] && [ -n "${NETDATA_CLAIM_EXTRA}" ]; then
  elif [ -z "${NETDATA_CLAIM_TOKEN}" ] && [ -n "${NETDATA_CLAIM_EXTRA}${NETDATA_CLAIM_PROXY}${NETDATA_CLAIM_NORELOAD}${NETDATA_CLAIM_INSECURE}" ]; then
    # The above condition checks if _any_ claiming options other than the rooms have been set when the token is unset.
    fatal "Invalid claiming options, a claiming token must be specified." F0204
  elif [ "${NETDATA_DISABLE_CLOUD}" -eq 1 ] && [ -n "${NETDATA_CLAIM_TOKEN}" ]; then
    fatal "Cloud explicitly disabled, but automatic claiming requested. Either enable Netdata Cloud, or remove the --claim-* options." F0204
  fi
}
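The rule this hunk enforces can be isolated in a tiny standalone sketch. The function name `validate_claim_opts` and the echoed messages below are illustrative, not part of kickstart.sh: any claiming option other than the token (rooms, proxy, noreload, insecure) is only valid when a token is also given.

```shell
# Illustrative sketch of the validation rule in check_claim_opts():
# rooms or any extra claiming option without a token is an error.
validate_claim_opts() {
    token="$1"
    rooms="$2"
    extra="$3"
    if [ -z "$token" ] && [ -n "$rooms" ]; then
        echo "error: rooms require a token"
        return 1
    elif [ -z "$token" ] && [ -n "$extra" ]; then
        echo "error: a claiming token must be specified"
        return 1
    fi
    echo "ok"
}

validate_claim_opts "" "room-a,room-b" "" || true   # prints: error: rooms require a token
validate_claim_opts "tok" "room-a" ""               # prints: ok
```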
@@ -1277,6 +1235,93 @@ is_netdata_running() {
  fi
}

write_claim_config() {
  if [ -z "${INSTALL_PREFIX}" ] || [ "${INSTALL_PREFIX}" = "/" ]; then
    config_path="/etc/netdata"
    netdatacli="$(command -v netdatacli)"
  elif [ "${INSTALL_PREFIX}" = "/opt/netdata" ]; then
    config_path="/opt/netdata/etc/netdata"
    netdatacli="/opt/netdata/bin/netdatacli"
  elif [ ! -d "${INSTALL_PREFIX}/netdata" ]; then
    config_path="${INSTALL_PREFIX}/etc/netdata"
    netdatacli="${INSTALL_PREFIX}/usr/sbin/netdatacli"
  else
    config_path="${INSTALL_PREFIX}/netdata/etc/netdata"
    netdatacli="${INSTALL_PREFIX}/netdata/usr/sbin/netdatacli"
  fi

  claim_config="${config_path}/claim.conf"

  if [ "${DRY_RUN}" -eq 1 ]; then
    progress "Would attempt to write claiming configuration to ${claim_config}"
    return 0
  fi

  progress "Writing claiming configuration to ${claim_config}"

  config="[global]"
  config="${config}\n url = ${NETDATA_CLAIM_URL}"
  config="${config}\n token = ${NETDATA_CLAIM_TOKEN}"
  if [ -n "${NETDATA_CLAIM_ROOMS}" ]; then
    config="${config}\n rooms = ${NETDATA_CLAIM_ROOMS}"
  fi
  if [ -n "${NETDATA_CLAIM_PROXY}" ]; then
    config="${config}\n proxy = ${NETDATA_CLAIM_PROXY}"
  fi
  if [ -n "${NETDATA_CLAIM_INSECURE}" ]; then
    config="${config}\n insecure = ${NETDATA_CLAIM_INSECURE}"
  fi

  run_as_root touch "${claim_config}.tmp" || return 1
  run_as_root chmod 0640 "${claim_config}.tmp" || return 1
  run_as_root chown ":${NETDATA_CLAIM_GROUP:-netdata}" "${claim_config}.tmp" || return 1
  run_as_root echo "${config}" > "${claim_config}.tmp" || return 1
  run_as_root mv -f "${claim_config}.tmp" "${claim_config}" || return 1

  if [ -z "${NETDATA_CLAIM_NORELOAD}" ]; then
    run_as_root "${netdatacli}" reload-claiming-state || return 1
  fi
}
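For reference, a `claim.conf` produced by this logic would look like the fragment below. The token, room IDs, and proxy values are placeholders; the optional keys only appear when the corresponding `--claim-*` options were given:

```conf
[global]
 url = https://app.netdata.cloud
 token = YOUR_CLAIM_TOKEN
 rooms = room-id-1,room-id-2
 proxy = http://proxy.example:3128
 insecure = yes
```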
run_claim_script() {
  if [ -n "${NETDATA_CLAIM_NORELOAD}" ]; then
    NETDATA_CLAIM_EXTRA="${NETDATA_CLAIM_EXTRA} -daemon-not-running"
  fi

  if [ -n "${NETDATA_CLAIM_INSECURE}" ]; then
    NETDATA_CLAIM_EXTRA="${NETDATA_CLAIM_EXTRA} -insecure"
  fi

  if [ -n "${NETDATA_CLAIM_PROXY}" ]; then
    if [ "${NETDATA_CLAIM_PROXY}" = "none" ]; then
      NETDATA_CLAIM_EXTRA="${NETDATA_CLAIM_EXTRA} -noproxy"
    else
      NETDATA_CLAIM_EXTRA="${NETDATA_CLAIM_EXTRA} -proxy=${NETDATA_CLAIM_PROXY}"
    fi
  fi

  # shellcheck disable=SC2086
  run_as_root "${NETDATA_CLAIM_PATH}" -token="${NETDATA_CLAIM_TOKEN}" -rooms="${NETDATA_CLAIM_ROOMS}" -url="${NETDATA_CLAIM_URL}" ${NETDATA_CLAIM_EXTRA}
  case $? in
    0) progress "Successfully claimed node" ;;
    1) warning "Unable to claim node due to invalid claiming options. If you are seeing this message, you’ve probably found a bug and should open a bug report at ${AGENT_BUG_REPORT_URL}" ;;
    2) warning "Unable to claim node due to issues creating the claiming directory or preparing the local claiming key. Make sure you have a working openssl command and that ${INSTALL_PREFIX}/var/lib/netdata/cloud.d exists, then try again." ;;
    3) warning "Unable to claim node due to missing dependencies. Usually this means that the Netdata Agent was built without support for Netdata Cloud. If you built the agent from source, please install all needed dependencies for Cloud support. If you used the regular installation script and see this error, please file a bug report at ${AGENT_BUG_REPORT_URL}." ;;
    4) warning "Failed to claim node due to inability to connect to ${NETDATA_CLAIM_URL}. Usually this either means that the specified claiming URL is wrong, or that you are having networking problems." ;;
    5) progress "Successfully claimed node, but was not able to notify the Netdata Agent. You will need to restart the Netdata service on this node before it will show up in the Cloud." ;;
    8) warning "Failed to claim node due to an invalid agent ID. You can usually resolve this by removing ${INSTALL_PREFIX}/var/lib/netdata/registry/netdata.public.unique.id and restarting the agent. Then try to claim it again using the same options." ;;
    9) warning "Failed to claim node due to an invalid node name. This probably means you tried to specify a custom name for this node (for example, using the --claim-hostname option), but the hostname itself was either empty or consisted solely of whitespace. You can resolve this by specifying a valid host name and trying again." ;;
    10) warning "Failed to claim node due to an invalid room ID. This issue is most likely caused by a typo. Please check if the room(s) you are trying to add appear on the list of rooms provided to the --claim-rooms option ('${NETDATA_CLAIM_ROOMS}'). Then verify if the rooms are visible in Netdata Cloud and try again." ;;
    11) warning "Failed to claim node due to an issue with the generated RSA key pair. You can usually resolve this by removing all files in ${INSTALL_PREFIX}/var/lib/netdata/cloud.d and then trying again." ;;
    12) warning "Failed to claim node due to an invalid or expired claiming token. Please check that the token specified with the --claim-token option ('${NETDATA_CLAIM_TOKEN}') matches what you see in the Cloud and try again." ;;
    13) warning "Failed to claim node because the Cloud thinks it is already claimed. If this node was created by cloning a VM or as a container from a template, please remove the file ${INSTALL_PREFIX}/var/lib/netdata/registry/netdata.public.unique.id and restart the agent. Then try to claim it again with the same options. Otherwise, if you are certain this node has never been claimed before, you can use the --claim-id option to specify a new node ID to use for claiming, for example by using the uuidgen command like so: --claim-id \"\$(uuidgen)\"" ;;
    14) warning "Failed to claim node because the node is already in the process of being claimed. You should not need to do anything to resolve this, the node should show up properly in the Cloud soon. If it does not, please report a bug at ${AGENT_BUG_REPORT_URL}." ;;
    15|16|17) warning "Failed to claim node due to an internal server error in the Cloud. Please retry claiming this node later, and if you still see this message file a bug report at ${CLOUD_BUG_REPORT_URL}." ;;
    18) warning "Unable to claim node because this Netdata installation does not have a unique ID yet. Make sure the agent is running and started up correctly, and then try again." ;;
    *) warning "Failed to claim node for an unknown reason. This usually means either networking problems or a bug. Please retry claiming later, and if you still see this message file a bug report at ${AGENT_BUG_REPORT_URL}" ;;
  esac
}

claim() {
  if [ "${DRY_RUN}" -eq 1 ]; then
    progress "Would attempt to claim agent to ${NETDATA_CLAIM_URL}"

@@ -1300,17 +1345,18 @@ claim() {
    NETDATA_CLAIM_PATH="${INSTALL_PREFIX}/netdata/usr/sbin/netdata-claim.sh"
  fi

  method="script"
  err_msg=
  err_code=
  if [ -z "${NETDATA_CLAIM_PATH}" ]; then
    err_msg="Unable to claim node: could not find usable claiming script. Reinstalling Netdata may resolve this."
    err_code=F050B
    method="config"
  elif [ ! -e "${NETDATA_CLAIM_PATH}" ]; then
    err_msg="Unable to claim node: ${NETDATA_CLAIM_PATH} does not exist."
    err_code=F0512
    method="config"
  elif [ ! -f "${NETDATA_CLAIM_PATH}" ]; then
    err_msg="Unable to claim node: ${NETDATA_CLAIM_PATH} is not a file."
    err_code=F0513
  elif grep -q '%%NEW_CLAIMING_METHOD%%' "${NETDATA_CLAIM_PATH}"; then
    method="config"
  elif [ ! -x "${NETDATA_CLAIM_PATH}" ]; then
    err_msg="Unable to claim node: claiming script at ${NETDATA_CLAIM_PATH} is not executable. Reinstalling Netdata may resolve this."
    err_code=F0514
@@ -1326,34 +1372,16 @@ claim() {
  fi

  if ! is_netdata_running; then
    NETDATA_CLAIM_EXTRA="${NETDATA_CLAIM_EXTRA} -daemon-not-running"
    NETDATA_CLAIM_NORELOAD=1
  fi

  # shellcheck disable=SC2086
  run_as_root "${NETDATA_CLAIM_PATH}" -token="${NETDATA_CLAIM_TOKEN}" -rooms="${NETDATA_CLAIM_ROOMS}" -url="${NETDATA_CLAIM_URL}" ${NETDATA_CLAIM_EXTRA}
  case $? in
    0)
      progress "Successfully claimed node"
      return 0
  case ${method} in
    script) run_claim_script ;;
    config)
      if ! write_claim_config; then
        warning "Failed to write claiming configuration. This usually means you do not have permissions to access the configuration directory."
      fi
      ;;
    1) warning "Unable to claim node due to invalid claiming options. If you are seeing this message, you’ve probably found a bug and should open a bug report at ${AGENT_BUG_REPORT_URL}" ;;
    2) warning "Unable to claim node due to issues creating the claiming directory or preparing the local claiming key. Make sure you have a working openssl command and that ${INSTALL_PREFIX}/var/lib/netdata/cloud.d exists, then try again." ;;
    3) warning "Unable to claim node due to missing dependencies. Usually this means that the Netdata Agent was built without support for Netdata Cloud. If you built the agent from source, please install all needed dependencies for Cloud support. If you used the regular installation script and see this error, please file a bug report at ${AGENT_BUG_REPORT_URL}." ;;
    4) warning "Failed to claim node due to inability to connect to ${NETDATA_CLAIM_URL}. Usually this either means that the specified claiming URL is wrong, or that you are having networking problems." ;;
    5)
      progress "Successfully claimed node, but was not able to notify the Netdata Agent. You will need to restart the Netdata service on this node before it will show up in the Cloud."
      return 0
      ;;
    8) warning "Failed to claim node due to an invalid agent ID. You can usually resolve this by removing ${INSTALL_PREFIX}/var/lib/netdata/registry/netdata.public.unique.id and restarting the agent. Then try to claim it again using the same options." ;;
    9) warning "Failed to claim node due to an invalid node name. This probably means you tried to specify a custom name for this node (for example, using the --claim-hostname option), but the hostname itself was either empty or consisted solely of whitespace. You can resolve this by specifying a valid host name and trying again." ;;
    10) warning "Failed to claim node due to an invalid room ID. This issue is most likely caused by a typo. Please check if the room(s) you are trying to add appear on the list of rooms provided to the --claim-rooms option ('${NETDATA_CLAIM_ROOMS}'). Then verify if the rooms are visible in Netdata Cloud and try again." ;;
    11) warning "Failed to claim node due to an issue with the generated RSA key pair. You can usually resolve this by removing all files in ${INSTALL_PREFIX}/var/lib/netdata/cloud.d and then trying again." ;;
    12) warning "Failed to claim node due to an invalid or expired claiming token. Please check that the token specified with the --claim-token option ('${NETDATA_CLAIM_TOKEN}') matches what you see in the Cloud and try again." ;;
    13) warning "Failed to claim node because the Cloud thinks it is already claimed. If this node was created by cloning a VM or as a container from a template, please remove the file ${INSTALL_PREFIX}/var/lib/netdata/registry/netdata.public.unique.id and restart the agent. Then try to claim it again with the same options. Otherwise, if you are certain this node has never been claimed before, you can use the --claim-id option to specify a new node ID to use for claiming, for example by using the uuidgen command like so: --claim-id \"\$(uuidgen)\"" ;;
    14) warning "Failed to claim node because the node is already in the process of being claimed. You should not need to do anything to resolve this, the node should show up properly in the Cloud soon. If it does not, please report a bug at ${AGENT_BUG_REPORT_URL}." ;;
    15|16|17) warning "Failed to claim node due to an internal server error in the Cloud. Please retry claiming this node later, and if you still see this message file a bug report at ${CLOUD_BUG_REPORT_URL}." ;;
    18) warning "Unable to claim node because this Netdata installation does not have a unique ID yet. Make sure the agent is running and started up correctly, and then try again." ;;
    *) warning "Failed to claim node for an unknown reason. This usually means either networking problems or a bug. Please retry claiming later, and if you still see this message file a bug report at ${AGENT_BUG_REPORT_URL}" ;;
  esac

  if [ "${ACTION}" = "claim" ]; then
@@ -1938,12 +1966,6 @@ build_and_install() {
    opts="${opts} --stable-channel"
  fi

  if [ "${NETDATA_REQUIRE_CLOUD}" -eq 1 ]; then
    opts="${opts} --require-cloud"
  elif [ "${NETDATA_DISABLE_CLOUD}" -eq 1 ]; then
    opts="${opts} --disable-cloud"
  fi

  # shellcheck disable=SC2086
  run_script ./netdata-installer.sh ${opts}

@@ -2392,12 +2414,10 @@ parse_args() {
      esac
      ;;
    "--disable-cloud")
      NETDATA_DISABLE_CLOUD=1
      NETDATA_REQUIRE_CLOUD=0
      warning "Cloud cannot be disabled"
      ;;
    "--require-cloud")
      NETDATA_DISABLE_CLOUD=0
      NETDATA_REQUIRE_CLOUD=1
      warning "Cloud is always required"
      ;;
    "--dont-start-it")
      NETDATA_NO_START=1
@@ -2447,26 +2467,21 @@ parse_args() {
    "--native-only") NETDATA_FORCE_METHOD="native" ;;
    "--static-only") NETDATA_FORCE_METHOD="static" ;;
    "--build-only") NETDATA_FORCE_METHOD="build" ;;
    "--claim-token")
      NETDATA_CLAIM_TOKEN="${2}"
      shift 1
      ;;
    "--claim-rooms")
      NETDATA_CLAIM_ROOMS="${2}"
      shift 1
      ;;
    "--claim-url")
      NETDATA_CLAIM_URL="${2}"
      shift 1
      ;;
    "--claim-"*)
      optname="$(echo "${1}" | cut -d '-' -f 4-)"
      case "${optname}" in
        id|proxy|user|hostname)
        token) NETDATA_CLAIM_TOKEN="${2}"; shift 1 ;;
        rooms) NETDATA_CLAIM_ROOMS="${2}"; shift 1 ;;
        url) NETDATA_CLAIM_URL="${2}"; shift 1 ;;
        proxy) NETDATA_CLAIM_PROXY="${2}"; shift 1 ;;
        noproxy) NETDATA_CLAIM_PROXY="none" ;;
        insecure) NETDATA_CLAIM_INSECURE=yes ;;
        noreload) NETDATA_CLAIM_NORELOAD=1 ;;
        id|user|hostname)
          NETDATA_CLAIM_EXTRA="${NETDATA_CLAIM_EXTRA} -${optname}=${2}"
          shift 1
          ;;
        verbose|insecure|noproxy|noreload|daemon-not-running) NETDATA_CLAIM_EXTRA="${NETDATA_CLAIM_EXTRA} -${optname}" ;;
        verbose|daemon-not-running) NETDATA_CLAIM_EXTRA="${NETDATA_CLAIM_EXTRA} -${optname}" ;;
        *) warning "Ignoring unrecognized claiming option ${optname}" ;;
      esac
      ;;
@@ -67,7 +67,7 @@ gunzip netdata*.tar.gz && tar xf netdata*.tar && rm -rf netdata*.tar
Install Netdata in `/opt/netdata`. If you want to enable automatic updates, add `--auto-update` or `-u` to install `netdata-updater` in `cron` (**needs root permission**):

```sh
cd netdata-v* && ./netdata-installer.sh --install-prefix /opt && cp /opt/netdata/usr/sbin/netdata-claim.sh /usr/sbin/
cd netdata-v* && ./netdata-installer.sh --install-prefix /opt
```

You also need to enable the `netdata` service in `/etc/rc.conf`:

@@ -113,9 +113,6 @@ The `kickstart.sh` script accepts a number of optional parameters to control how
- `--native-only`: Only install if native binary packages are available.
- `--static-only`: Only install if a static build is available.
- `--build-only`: Only install using a local build.
- `--disable-cloud`: For local builds, don’t build any of the cloud code at all. For native packages and static builds, use runtime configuration to disable cloud support.
- `--require-cloud`: Only install if Netdata Cloud can be enabled. Overrides `--disable-cloud`.
- `--install-prefix`: Specify an installation prefix for local builds (by default, we use a sane prefix based on the type of system).
- `--install-version`: Specify the version of Netdata to install.
- `--old-install-prefix`: Specify the custom local build's installation prefix that should be removed.

@@ -245,10 +245,6 @@ By default, the kickstart script will provide a Netdata agent installation that
  Specify a proxy to use when connecting to the cloud in the form of `http://[user:pass@]host:ip` for an HTTP(S) proxy. See [connecting through a proxy](/src/claim/README.md#connect-through-a-proxy) for details.
- `--claim-only`
  If there is an existing install, only try to claim it without attempting to update it. If there is no existing install, install and claim Netdata normally.
- `--require-cloud`
  Only install if Netdata Cloud can be enabled.
- `--disable-cloud`
  For local builds, don’t build any of the Netdata Cloud code at all. For native packages and static builds, use runtime configuration to disable Netdata Cloud support.

### anonymous telemetry
@@ -32,7 +32,6 @@ run ./netdata-installer.sh \
  --dont-wait \
  --dont-start-it \
  --disable-exporting-mongodb \
  --require-cloud \
  --use-system-protobuf \
  --dont-scrub-cflags-even-though-it-may-break-things \
  --one-time-build \

packaging/utils/compile-on-windows.sh (new file, 78 lines)

@@ -0,0 +1,78 @@
#!/bin/sh

# On MSYS2, install these dependencies to build netdata:
install_dependencies() {
    pacman -S \
        git cmake ninja base-devel msys2-devel \
        libyaml-devel libzstd-devel libutil-linux libutil-linux-devel \
        mingw-w64-x86_64-toolchain mingw-w64-ucrt-x86_64-toolchain \
        mingw64/mingw-w64-x86_64-mold ucrt64/mingw-w64-ucrt-x86_64-mold \
        msys/gdb ucrt64/mingw-w64-ucrt-x86_64-gdb mingw64/mingw-w64-x86_64-gdb \
        msys/zlib-devel mingw64/mingw-w64-x86_64-zlib ucrt64/mingw-w64-ucrt-x86_64-zlib \
        msys/libuv-devel ucrt64/mingw-w64-ucrt-x86_64-libuv mingw64/mingw-w64-x86_64-libuv \
        liblz4-devel mingw64/mingw-w64-x86_64-lz4 ucrt64/mingw-w64-ucrt-x86_64-lz4 \
        openssl-devel mingw64/mingw-w64-x86_64-openssl ucrt64/mingw-w64-ucrt-x86_64-openssl \
        protobuf-devel mingw64/mingw-w64-x86_64-protobuf ucrt64/mingw-w64-ucrt-x86_64-protobuf \
        msys/pcre2-devel mingw64/mingw-w64-x86_64-pcre2 ucrt64/mingw-w64-ucrt-x86_64-pcre2 \
        msys/brotli-devel mingw64/mingw-w64-x86_64-brotli ucrt64/mingw-w64-ucrt-x86_64-brotli \
        msys/ccache ucrt64/mingw-w64-ucrt-x86_64-ccache mingw64/mingw-w64-x86_64-ccache \
        mingw64/mingw-w64-x86_64-go ucrt64/mingw-w64-ucrt-x86_64-go \
        mingw64/mingw-w64-x86_64-nsis \
        msys/libcurl msys/libcurl-devel
}

if [ "${1}" = "install" ]
then
    install_dependencies || exit 1
    exit 0
fi

BUILD_FOR_PACKAGING="Off"
if [ "${1}" = "package" ]
then
    BUILD_FOR_PACKAGING="On"
fi

export PATH="/usr/local/bin:${PATH}"

WT_ROOT="$(pwd)"
BUILD_TYPE="Debug"
NULL=""

if [ -z "${MSYSTEM}" ]; then
    build="${WT_ROOT}/build-${OSTYPE}"
else
    build="${WT_ROOT}/build-${OSTYPE}-${MSYSTEM}"
fi

if [ "$USER" = "vk" ]; then
    build="${WT_ROOT}/build"
fi

set -exu -o pipefail

if [ -d "${build}" ]
then
    rm -rf "${build}"
fi

/usr/bin/cmake -S "${WT_ROOT}" -B "${build}" \
    -G Ninja \
    -DCMAKE_INSTALL_PREFIX="/opt/netdata" \
    -DCMAKE_BUILD_TYPE="${BUILD_TYPE}" \
    -DCMAKE_C_FLAGS="-fstack-protector-all -O0 -ggdb -Wall -Wextra -Wno-char-subscripts -Wa,-mbig-obj -pipe -DNETDATA_INTERNAL_CHECKS=1 -D_FILE_OFFSET_BITS=64 -D__USE_MINGW_ANSI_STDIO=1" \
    -DBUILD_FOR_PACKAGING=${BUILD_FOR_PACKAGING} \
    -DUSE_MOLD=Off \
    -DNETDATA_USER="${USER}" \
    -DDEFAULT_FEATURE_STATE=Off \
    -DENABLE_H2O=Off \
    -DENABLE_ML=On \
    -DENABLE_BUNDLED_JSONC=On \
    -DENABLE_BUNDLED_PROTOBUF=Off \
    ${NULL}

ninja -v -C "${build}" install || ninja -v -C "${build}" -j 1

echo
echo "Compile with:"
echo "ninja -v -C \"${build}\" install || ninja -v -C \"${build}\" -j 1"
@@ -15,11 +15,23 @@ pacman -S --noconfirm --needed \
    base-devel \
    cmake \
    git \
    ninja \
    python \
    liblz4-devel \
    libutil-linux \
    libutil-linux-devel \
    libyaml-devel \
    libzstd-devel \
    msys2-devel \
    msys/brotli-devel \
    msys/libuv-devel \
    msys/pcre2-devel \
    msys/zlib-devel \
    msys/libcurl-devel \
    openssl-devel \
    protobuf-devel \
    mingw-w64-x86_64-toolchain \
    mingw-w64-ucrt-x86_64-toolchain \
    mingw64/mingw-w64-x86_64-brotli \
    mingw64/mingw-w64-x86_64-go \
    mingw64/mingw-w64-x86_64-libuv \

@@ -29,16 +41,6 @@ pacman -S --noconfirm --needed \
    mingw64/mingw-w64-x86_64-pcre2 \
    mingw64/mingw-w64-x86_64-protobuf \
    mingw64/mingw-w64-x86_64-zlib \
    mingw-w64-ucrt-x86_64-toolchain \
    mingw-w64-x86_64-toolchain \
    msys2-devel \
    msys/brotli-devel \
    msys/libuv-devel \
    msys/pcre2-devel \
    msys/zlib-devel \
    openssl-devel \
    protobuf-devel \
    python \
    ucrt64/mingw-w64-ucrt-x86_64-brotli \
    ucrt64/mingw-w64-ucrt-x86_64-go \
    ucrt64/mingw-w64-ucrt-x86_64-libuv \
@@ -28,18 +28,10 @@ However, to be able to offer the stunning visualizations and advanced functional

## Enable and configure the ACLK

The ACLK is enabled by default, with its settings automatically configured and stored in the Agent's memory. No file is
created at `/var/lib/netdata/cloud.d/cloud.conf` until you either connect a node or create it yourself. The default
configuration uses two settings:

```conf
[global]
    enabled = yes
    cloud base url = https://app.netdata.cloud
```
The ACLK is enabled by default, with its settings automatically configured and stored in the Agent's memory.

If your Agent needs to use a proxy to access the internet, you must [set up a proxy for
connecting to cloud](/src/claim/README.md#connect-through-a-proxy).
connecting to cloud](/src/claim/README.md).

You can configure the following keys in the `netdata.conf` section `[cloud]`:

@@ -50,84 +42,3 @@ You can configure following keys in the `netdata.conf` section `[cloud]`:

- `statistics` enables/disables ACLK related statistics and their charts. You can disable this to save some space in the database and slightly reduce memory usage of Netdata Agent.
- `query thread count` specifies the number of threads to process cloud queries. Increasing this setting is useful for nodes with many children (streaming), which can expect to handle more queries (and/or more complicated queries).
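For example, a `netdata.conf` fragment setting both keys could look like this (the values shown are illustrative, not recommendations):

```conf
[cloud]
    statistics = yes
    query thread count = 2
```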
## Disable the ACLK

You have two options if you prefer to disable the ACLK and not use Netdata Cloud.

### Disable at installation

You can pass the `--disable-cloud` parameter to the Agent installation when using a kickstart script
([kickstart.sh](/packaging/installer/methods/kickstart.md)) or a [manual installation from
Git](/packaging/installer/methods/manual.md).

When you pass this parameter, the installer does not download or compile any extra libraries. Once running, the Agent
kills the thread responsible for the ACLK and connecting behavior, and behaves as though the ACLK, and thus Netdata Cloud,
does not exist.

### Disable at runtime

You can change a runtime setting in your `cloud.conf` file to disable the ACLK. This setting only stops the Agent from
attempting any connection via the ACLK, but does not prevent the installer from downloading and compiling the ACLK's
dependencies.

The file typically exists at `/var/lib/netdata/cloud.d/cloud.conf`, but can change if you set a prefix during
installation. To disable the ACLK, open that file and change the `enabled` setting to `no`:

```conf
[global]
    enabled = no
```

If the file at `/var/lib/netdata/cloud.d/cloud.conf` doesn't exist, you need to create it.

Copy and paste the first two lines from below, which will change your prompt to `cat`.

```bash
cd /var/lib/netdata/cloud.d
cat > cloud.conf << EOF
```

Copy and paste in lines 3-6, and after the final `EOF`, hit **Enter**. The final line must contain only `EOF`. Hit **Enter** again to return to your normal prompt with the newly-created file.

To get your normal prompt back, the final line must contain only `EOF`.

```bash
[global]
enabled = no
cloud base url = https://app.netdata.cloud
EOF
```

You also need to change the file's permissions. Use `grep "run as user" /etc/netdata/netdata.conf` to figure out which
user your Agent runs as (typically `netdata`), and replace `netdata:netdata` as shown below if necessary:

```bash
sudo chmod 0770 cloud.conf
sudo chown netdata:netdata cloud.conf
```

Restart your Agent to disable the ACLK.

### Re-enable the ACLK

If you first disable the ACLK and any Cloud functionality and then decide you would like to use Cloud, you must either
[reinstall Netdata](/packaging/installer/REINSTALL.md) with Cloud enabled or change the runtime setting in your
`cloud.conf` file.

If you passed `--disable-cloud` to `netdata-installer.sh` during installation, you must
[reinstall](/packaging/installer/REINSTALL.md) your Agent. Use the same method as before, but pass `--require-cloud` to
the installer. When installation finishes you can [connect your node](/src/claim/README.md#how-to-connect-a-node).

If you changed the runtime setting in your `/var/lib/netdata/cloud.d/cloud.conf` file, edit the file again and change
`enabled` to `yes`:

```conf
[global]
    enabled = yes
```

Restart your Agent and [connect your node](/src/claim/README.md#how-to-connect-a-node).
|
164
src/aclk/aclk.c
164
src/aclk/aclk.c
|
@ -2,7 +2,6 @@
|
|||
|
||||
#include "aclk.h"
|
||||
|
||||
#ifdef ENABLE_ACLK
|
||||
#include "aclk_stats.h"
|
||||
#include "mqtt_websockets/mqtt_wss_client.h"
|
||||
#include "aclk_otp.h"
|
||||
|
@ -14,7 +13,6 @@
|
|||
#include "https_client.h"
|
||||
#include "schema-wrappers/schema_wrappers.h"
|
||||
#include "aclk_capas.h"
|
||||
|
||||
#include "aclk_proxy.h"
|
||||
|
||||
#ifdef ACLK_LOG_CONVERSATION_DIR
|
||||
|
@ -25,14 +23,35 @@
|
|||
|
||||
#define ACLK_STABLE_TIMEOUT 3 // Minimum delay to mark AGENT as stable
|
||||
|
||||
#endif /* ENABLE_ACLK */
|
||||
|
||||
int aclk_pubacks_per_conn = 0; // How many PubAcks we got since MQTT conn est.
|
||||
int aclk_rcvd_cloud_msgs = 0;
|
||||
int aclk_connection_counter = 0;
|
||||
int disconnect_req = 0;
|
||||
|
||||
int aclk_connected = 0;
|
||||
static bool aclk_connected = false;
|
||||
static inline void aclk_set_connected(void) {
|
||||
__atomic_store_n(&aclk_connected, true, __ATOMIC_RELAXED);
|
||||
}
|
||||
static inline void aclk_set_disconnected(void) {
|
||||
__atomic_store_n(&aclk_connected, false, __ATOMIC_RELAXED);
|
||||
}
|
||||
|
||||
inline bool aclk_online(void) {
|
||||
return __atomic_load_n(&aclk_connected, __ATOMIC_RELAXED);
|
||||
}
|
||||
|
||||
bool aclk_online_for_contexts(void) {
|
||||
return aclk_online() && aclk_query_scope_has(HTTP_ACL_METRICS);
|
||||
}
|
||||
|
||||
bool aclk_online_for_alerts(void) {
|
||||
return aclk_online() && aclk_query_scope_has(HTTP_ACL_ALERTS);
|
||||
}
|
||||
|
||||
bool aclk_online_for_nodes(void) {
|
||||
return aclk_online() && aclk_query_scope_has(HTTP_ACL_NODES);
|
||||
}
int aclk_ctx_based = 0;
int aclk_disable_runtime = 0;
int aclk_stats_enabled;

@@ -49,7 +68,6 @@ float last_backoff_value = 0;

time_t aclk_block_until = 0;

#ifdef ENABLE_ACLK
mqtt_wss_client mqttwss_client;

//netdata_mutex_t aclk_shared_state_mutex = NETDATA_MUTEX_INITIALIZER;

@@ -152,19 +170,6 @@ biofailed:
    return 1;
}

static int wait_till_cloud_enabled()
{
    nd_log(NDLS_DAEMON, NDLP_INFO,
           "Waiting for Cloud to be enabled");

    while (!netdata_cloud_enabled) {
        sleep_usec(USEC_PER_SEC * 1);
        if (!service_running(SERVICE_ACLK))
            return 1;
    }
    return 0;
}

/**
 * Will block until agent is claimed. Returns only if agent claimed
 * or if agent needs to shutdown.

@@ -175,14 +180,13 @@ static int wait_till_cloud_enabled()
static int wait_till_agent_claimed(void)
{
    //TODO prevent malloc and freez
    char *agent_id = get_agent_claimid();
    while (likely(!agent_id)) {
    ND_UUID uuid = claim_id_get_uuid();
    while (likely(UUIDiszero(uuid))) {
        sleep_usec(USEC_PER_SEC * 1);
        if (!service_running(SERVICE_ACLK))
            return 1;
        agent_id = get_agent_claimid();
        uuid = claim_id_get_uuid();
    }
    freez(agent_id);
    return 0;
}

@@ -204,7 +208,7 @@ static int wait_till_agent_claim_ready()

    // The NULL return means the value was never initialised, but this value has been initialized in post_conf_load.
    // We trap the impossible NULL here to keep the linter happy without using a fatal() in the code.
    char *cloud_base_url = appconfig_get(&cloud_config, CONFIG_SECTION_GLOBAL, "cloud base url", NULL);
    const char *cloud_base_url = cloud_config_url_get();
    if (cloud_base_url == NULL) {
        netdata_log_error("Do not move the cloud base url out of post_conf_load!!");
        return 1;

@@ -387,7 +391,7 @@ static inline void mqtt_connected_actions(mqtt_wss_client client)
    mqtt_wss_subscribe(client, topic, 1);

    aclk_stats_upd_online(1);
    aclk_connected = 1;
    aclk_set_connected();
    aclk_pubacks_per_conn = 0;
    aclk_rcvd_cloud_msgs = 0;
    aclk_connection_counter++;
|
||||
|
@ -427,7 +431,7 @@ void aclk_graceful_disconnect(mqtt_wss_client client)
|
|||
|
||||
aclk_stats_upd_online(0);
|
||||
last_disconnect_time = now_realtime_sec();
|
||||
aclk_connected = 0;
|
||||
aclk_set_disconnected();
|
||||
|
||||
nd_log(NDLS_DAEMON, NDLP_DEBUG,
|
||||
"Attempting to gracefully shutdown the MQTT/WSS connection");
|
||||
|
@ -601,7 +605,7 @@ static int aclk_attempt_to_connect(mqtt_wss_client client)
|
|||
#endif
|
||||
|
||||
while (service_running(SERVICE_ACLK)) {
|
||||
aclk_cloud_base_url = appconfig_get(&cloud_config, CONFIG_SECTION_GLOBAL, "cloud base url", NULL);
|
||||
aclk_cloud_base_url = cloud_config_url_get();
|
||||
if (aclk_cloud_base_url == NULL) {
|
||||
error_report("Do not move the cloud base url out of post_conf_load!!");
|
||||
aclk_status = ACLK_STATUS_NO_CLOUD_URL;
|
||||
|
@ -817,18 +821,8 @@ void *aclk_main(void *ptr)
|
|||
|
||||
unsigned int proto_hdl_cnt = aclk_init_rx_msg_handlers();
|
||||
|
||||
#if defined( DISABLE_CLOUD ) || !defined( ENABLE_ACLK )
|
||||
nd_log(NDLS_DAEMON, NDLP_INFO,
|
||||
"Killing ACLK thread -> cloud functionality has been disabled");
|
||||
|
||||
static_thread->enabled = NETDATA_MAIN_THREAD_EXITED;
|
||||
return NULL;
|
||||
#endif
|
||||
query_threads.count = read_query_thread_count();
|
||||
|
||||
if (wait_till_cloud_enabled())
|
||||
goto exit;
|
||||
|
||||
if (wait_till_agent_claim_ready())
|
||||
goto exit;
|
||||
|
||||
|
@ -875,7 +869,7 @@ void *aclk_main(void *ptr)
|
|||
if (handle_connection(mqttwss_client)) {
|
||||
aclk_stats_upd_online(0);
|
||||
last_disconnect_time = now_realtime_sec();
|
||||
aclk_connected = 0;
|
||||
aclk_set_disconnected();
|
||||
nd_log(NDLS_ACCESS, NDLP_WARNING, "ACLK DISCONNECTED");
|
||||
}
|
||||
} while (service_running(SERVICE_ACLK));
|
||||
|
@ -914,11 +908,11 @@ void aclk_host_state_update(RRDHOST *host, int cmd, int queryable)
|
|||
nd_uuid_t node_id;
|
||||
int ret = 0;
|
||||
|
||||
if (!aclk_connected)
|
||||
if (!aclk_online())
|
||||
return;
|
||||
|
||||
if (host->node_id && !uuid_is_null(*host->node_id)) {
|
||||
uuid_copy(node_id, *host->node_id);
|
||||
if (!uuid_is_null(host->node_id)) {
|
||||
uuid_copy(node_id, host->node_id);
|
||||
}
|
||||
else {
|
||||
ret = get_node_id(&host->host_uuid, &node_id);
|
||||
|
@ -931,15 +925,17 @@ void aclk_host_state_update(RRDHOST *host, int cmd, int queryable)
|
|||
// node_id not found
|
||||
aclk_query_t create_query;
|
||||
create_query = aclk_query_new(REGISTER_NODE);
|
||||
rrdhost_aclk_state_lock(localhost);
|
||||
CLAIM_ID claim_id = claim_id_get();
|
||||
|
||||
node_instance_creation_t node_instance_creation = {
|
||||
.claim_id = localhost->aclk_state.claimed_id,
|
||||
.claim_id = claim_id_is_set(claim_id) ? claim_id.str : NULL,
|
||||
.hops = host->system_info->hops,
|
||||
.hostname = rrdhost_hostname(host),
|
||||
.machine_guid = host->machine_guid};
|
||||
|
||||
create_query->data.bin_payload.payload =
|
||||
generate_node_instance_creation(&create_query->data.bin_payload.size, &node_instance_creation);
|
||||
rrdhost_aclk_state_unlock(localhost);
|
||||
|
||||
create_query->data.bin_payload.topic = ACLK_TOPICID_CREATE_NODE;
|
||||
create_query->data.bin_payload.msg_name = "CreateNodeInstance";
|
||||
nd_log(NDLS_DAEMON, NDLP_DEBUG,
|
||||
|
@ -962,10 +958,9 @@ void aclk_host_state_update(RRDHOST *host, int cmd, int queryable)
|
|||
|
||||
node_state_update.capabilities = aclk_get_agent_capas();
|
||||
|
||||
rrdhost_aclk_state_lock(localhost);
|
||||
node_state_update.claim_id = localhost->aclk_state.claimed_id;
|
||||
CLAIM_ID claim_id = claim_id_get();
|
||||
node_state_update.claim_id = claim_id_is_set(claim_id) ? claim_id.str : NULL;
|
||||
query->data.bin_payload.payload = generate_node_instance_connection(&query->data.bin_payload.size, &node_state_update);
|
||||
rrdhost_aclk_state_unlock(localhost);
|
||||
|
||||
nd_log(NDLS_DAEMON, NDLP_DEBUG,
|
||||
"Queuing status update for node=%s, live=%d, hops=%u, queryable=%d",
|
||||
|
@ -1007,10 +1002,9 @@ void aclk_send_node_instances()
|
|||
}
|
||||
node_state_update.capabilities = aclk_get_node_instance_capas(host);
|
||||
|
||||
rrdhost_aclk_state_lock(localhost);
|
||||
node_state_update.claim_id = localhost->aclk_state.claimed_id;
|
||||
CLAIM_ID claim_id = claim_id_get();
|
||||
node_state_update.claim_id = claim_id_is_set(claim_id) ? claim_id.str : NULL;
|
||||
query->data.bin_payload.payload = generate_node_instance_connection(&query->data.bin_payload.size, &node_state_update);
|
||||
rrdhost_aclk_state_unlock(localhost);
|
||||
|
||||
nd_log(NDLS_DAEMON, NDLP_DEBUG,
|
||||
"Queuing status update for node=%s, live=%d, hops=%d, queryable=1",
|
||||
|
@ -1032,10 +1026,10 @@ void aclk_send_node_instances()
|
|||
uuid_unparse_lower(list->host_id, (char*)node_instance_creation.machine_guid);
|
||||
create_query->data.bin_payload.topic = ACLK_TOPICID_CREATE_NODE;
|
||||
create_query->data.bin_payload.msg_name = "CreateNodeInstance";
|
||||
rrdhost_aclk_state_lock(localhost);
|
||||
node_instance_creation.claim_id = localhost->aclk_state.claimed_id,
|
||||
|
||||
CLAIM_ID claim_id = claim_id_get();
|
||||
node_instance_creation.claim_id = claim_id_is_set(claim_id) ? claim_id.str : NULL,
|
||||
create_query->data.bin_payload.payload = generate_node_instance_creation(&create_query->data.bin_payload.size, &node_instance_creation);
|
||||
rrdhost_aclk_state_unlock(localhost);
|
||||
|
||||
nd_log(NDLS_DAEMON, NDLP_DEBUG,
|
||||
"Queuing registration for host=%s, hops=%d",
|
||||
|
@ -1087,16 +1081,15 @@ char *aclk_state(void)
|
|||
);
|
||||
buffer_sprintf(wb, "Protocol Used: Protobuf\nMQTT Version: %d\nClaimed: ", 5);
|
||||
|
||||
char *agent_id = get_agent_claimid();
|
||||
if (agent_id == NULL)
|
||||
CLAIM_ID claim_id = claim_id_get();
|
||||
if (!claim_id_is_set(claim_id))
|
||||
buffer_strcat(wb, "No\n");
|
||||
else {
|
||||
char *cloud_base_url = appconfig_get(&cloud_config, CONFIG_SECTION_GLOBAL, "cloud base url", NULL);
|
||||
buffer_sprintf(wb, "Yes\nClaimed Id: %s\nCloud URL: %s\n", agent_id, cloud_base_url ? cloud_base_url : "null");
|
||||
freez(agent_id);
|
||||
const char *cloud_base_url = cloud_config_url_get();
|
||||
buffer_sprintf(wb, "Yes\nClaimed Id: %s\nCloud URL: %s\n", claim_id.str, cloud_base_url ? cloud_base_url : "null");
|
||||
}
|
||||
|
||||
buffer_sprintf(wb, "Online: %s\nReconnect count: %d\nBanned By Cloud: %s\n", aclk_connected ? "Yes" : "No", aclk_connection_counter > 0 ? (aclk_connection_counter - 1) : 0, aclk_disable_runtime ? "Yes" : "No");
|
||||
buffer_sprintf(wb, "Online: %s\nReconnect count: %d\nBanned By Cloud: %s\n", aclk_online() ? "Yes" : "No", aclk_connection_counter > 0 ? (aclk_connection_counter - 1) : 0, aclk_disable_runtime ? "Yes" : "No");
|
||||
if (last_conn_time_mqtt && (tmptr = localtime_r(&last_conn_time_mqtt, &tmbuf)) ) {
|
||||
char timebuf[26];
|
||||
strftime(timebuf, 26, "%Y-%m-%d %H:%M:%S", tmptr);
|
||||
|
@ -1112,13 +1105,13 @@ char *aclk_state(void)
|
|||
strftime(timebuf, 26, "%Y-%m-%d %H:%M:%S", tmptr);
|
||||
buffer_sprintf(wb, "Last Disconnect Time: %s\n", timebuf);
|
||||
}
|
||||
if (!aclk_connected && next_connection_attempt && (tmptr = localtime_r(&next_connection_attempt, &tmbuf)) ) {
|
||||
if (!aclk_online() && next_connection_attempt && (tmptr = localtime_r(&next_connection_attempt, &tmbuf)) ) {
|
||||
char timebuf[26];
|
||||
strftime(timebuf, 26, "%Y-%m-%d %H:%M:%S", tmptr);
|
||||
buffer_sprintf(wb, "Next Connection Attempt At: %s\nLast Backoff: %.3f", timebuf, last_backoff_value);
|
||||
}
|
||||
|
||||
if (aclk_connected) {
|
||||
if (aclk_online()) {
|
||||
buffer_sprintf(wb, "Received Cloud MQTT Messages: %d\nMQTT Messages Confirmed by Remote Broker (PUBACKs): %d", aclk_rcvd_cloud_msgs, aclk_pubacks_per_conn);
|
||||
|
||||
RRDHOST *host;
|
||||
|
@ -1127,19 +1120,17 @@ char *aclk_state(void)
|
|||
buffer_sprintf(wb, "\n\n> Node Instance for mGUID: \"%s\" hostname \"%s\"\n", host->machine_guid, rrdhost_hostname(host));
|
||||
|
||||
buffer_strcat(wb, "\tClaimed ID: ");
|
||||
rrdhost_aclk_state_lock(host);
|
||||
if (host->aclk_state.claimed_id)
|
||||
buffer_strcat(wb, host->aclk_state.claimed_id);
|
||||
claim_id = rrdhost_claim_id_get(host);
|
||||
if(claim_id_is_set(claim_id))
|
||||
buffer_strcat(wb, claim_id.str);
|
||||
else
|
||||
buffer_strcat(wb, "null");
|
||||
rrdhost_aclk_state_unlock(host);
|
||||
|
||||
|
||||
if (host->node_id == NULL || uuid_is_null(*host->node_id)) {
|
||||
if (uuid_is_null(host->node_id))
|
||||
buffer_strcat(wb, "\n\tNode ID: null\n");
|
||||
} else {
|
||||
else {
|
||||
char node_id[GUID_LEN + 1];
|
||||
uuid_unparse_lower(*host->node_id, node_id);
|
||||
uuid_unparse_lower(host->node_id, node_id);
|
||||
buffer_sprintf(wb, "\n\tNode ID: %s\n", node_id);
|
||||
}
|
||||
|
||||
|
@ -1204,22 +1195,21 @@ char *aclk_state_json(void)
|
|||
json_object_array_add(grp, tmp);
|
||||
json_object_object_add(msg, "protocols-supported", grp);
|
||||
|
||||
char *agent_id = get_agent_claimid();
|
||||
tmp = json_object_new_boolean(agent_id != NULL);
|
||||
CLAIM_ID claim_id = claim_id_get();
|
||||
tmp = json_object_new_boolean(claim_id_is_set(claim_id));
|
||||
json_object_object_add(msg, "agent-claimed", tmp);
|
||||
|
||||
if (agent_id) {
|
||||
tmp = json_object_new_string(agent_id);
|
||||
freez(agent_id);
|
||||
} else
|
||||
if (claim_id_is_set(claim_id))
|
||||
tmp = json_object_new_string(claim_id.str);
|
||||
else
|
||||
tmp = NULL;
|
||||
json_object_object_add(msg, "claimed-id", tmp);
|
||||
|
||||
char *cloud_base_url = appconfig_get(&cloud_config, CONFIG_SECTION_GLOBAL, "cloud base url", NULL);
|
||||
const char *cloud_base_url = cloud_config_url_get();
|
||||
tmp = cloud_base_url ? json_object_new_string(cloud_base_url) : NULL;
|
||||
json_object_object_add(msg, "cloud-url", tmp);
|
||||
|
||||
tmp = json_object_new_boolean(aclk_connected);
|
||||
tmp = json_object_new_boolean(aclk_online());
|
||||
json_object_object_add(msg, "online", tmp);
|
||||
|
||||
tmp = json_object_new_string("Protobuf");
|
||||
|
@ -1240,9 +1230,9 @@ char *aclk_state_json(void)
|
|||
json_object_object_add(msg, "last-connect-time-utc", timestamp_to_json(&last_conn_time_mqtt));
|
||||
json_object_object_add(msg, "last-connect-time-puback-utc", timestamp_to_json(&last_conn_time_appl));
|
||||
json_object_object_add(msg, "last-disconnect-time-utc", timestamp_to_json(&last_disconnect_time));
|
||||
json_object_object_add(msg, "next-connection-attempt-utc", !aclk_connected ? timestamp_to_json(&next_connection_attempt) : NULL);
|
||||
json_object_object_add(msg, "next-connection-attempt-utc", !aclk_online() ? timestamp_to_json(&next_connection_attempt) : NULL);
|
||||
tmp = NULL;
|
||||
if (!aclk_connected && last_backoff_value)
|
||||
if (!aclk_online() && last_backoff_value)
|
||||
tmp = json_object_new_double(last_backoff_value);
|
||||
json_object_object_add(msg, "last-backoff-value", tmp);
|
||||
|
||||
|
@ -1262,19 +1252,18 @@ char *aclk_state_json(void)
|
|||
tmp = json_object_new_string(host->machine_guid);
|
||||
json_object_object_add(nodeinstance, "mguid", tmp);
|
||||
|
||||
rrdhost_aclk_state_lock(host);
|
||||
if (host->aclk_state.claimed_id) {
|
||||
tmp = json_object_new_string(host->aclk_state.claimed_id);
|
||||
claim_id = rrdhost_claim_id_get(host);
|
||||
if(claim_id_is_set(claim_id)) {
|
||||
tmp = json_object_new_string(claim_id.str);
|
||||
json_object_object_add(nodeinstance, "claimed_id", tmp);
|
||||
} else
|
||||
json_object_object_add(nodeinstance, "claimed_id", NULL);
|
||||
rrdhost_aclk_state_unlock(host);
|
||||
|
||||
if (host->node_id == NULL || uuid_is_null(*host->node_id)) {
|
||||
if (uuid_is_null(host->node_id)) {
|
||||
json_object_object_add(nodeinstance, "node-id", NULL);
|
||||
} else {
|
||||
char node_id[GUID_LEN + 1];
|
||||
uuid_unparse_lower(*host->node_id, node_id);
|
||||
uuid_unparse_lower(host->node_id, node_id);
|
||||
tmp = json_object_new_string(node_id);
|
||||
json_object_object_add(nodeinstance, "node-id", tmp);
|
||||
}
|
||||
|
@ -1301,12 +1290,10 @@ char *aclk_state_json(void)
|
|||
json_object_put(msg);
|
||||
return str;
|
||||
}
|
||||
#endif /* ENABLE_ACLK */
|
||||
|
||||
void add_aclk_host_labels(void) {
|
||||
RRDLABELS *labels = localhost->rrdlabels;
|
||||
|
||||
#ifdef ENABLE_ACLK
|
||||
rrdlabels_add(labels, "_aclk_available", "true", RRDLABEL_SRC_AUTO|RRDLABEL_SRC_ACLK);
|
||||
ACLK_PROXY_TYPE aclk_proxy;
|
||||
char *proxy_str;
|
||||
|
@ -1327,9 +1314,6 @@ void add_aclk_host_labels(void) {
|
|||
rrdlabels_add(labels, "_mqtt_version", "5", RRDLABEL_SRC_AUTO);
|
||||
rrdlabels_add(labels, "_aclk_proxy", proxy_str, RRDLABEL_SRC_AUTO);
|
||||
rrdlabels_add(labels, "_aclk_ng_new_cloud_protocol", "true", RRDLABEL_SRC_AUTO|RRDLABEL_SRC_ACLK);
|
||||
#else
|
||||
rrdlabels_add(labels, "_aclk_available", "false", RRDLABEL_SRC_AUTO|RRDLABEL_SRC_ACLK);
|
||||
#endif
|
||||
}
|
||||
|
||||
void aclk_queue_node_info(RRDHOST *host, bool immediate)
|
||||
|
|
|
@@ -4,14 +4,12 @@
 
 #include "daemon/common.h"
 
-#ifdef ENABLE_ACLK
 #include "aclk_util.h"
 #include "aclk_rrdhost_state.h"
 
 // How many MQTT PUBACKs we need to get to consider connection
 // stable for the purposes of TBEB (truncated binary exponential backoff)
 #define ACLK_PUBACKS_CONN_STABLE 3
-#endif /* ENABLE_ACLK */
 
 typedef enum __attribute__((packed)) {
     ACLK_STATUS_CONNECTED = 0,
@@ -39,12 +37,19 @@ extern ACLK_STATUS aclk_status
 extern const char *aclk_cloud_base_url;
 const char *aclk_status_to_string(void);
 
-extern int aclk_connected;
 extern int aclk_ctx_based;
 extern int aclk_disable_runtime;
 extern int aclk_stats_enabled;
 extern int aclk_kill_link;
 
+bool aclk_online(void);
+bool aclk_online_for_contexts(void);
+bool aclk_online_for_alerts(void);
+bool aclk_online_for_nodes(void);
+
+void aclk_config_get_query_scope(void);
+bool aclk_query_scope_has(HTTP_ACL acl);
+
 extern time_t last_conn_time_mqtt;
 extern time_t last_conn_time_appl;
 extern time_t last_disconnect_time;
@@ -59,7 +64,6 @@ extern time_t aclk_block_until
 extern int aclk_connection_counter;
 extern int disconnect_req;
 
-#ifdef ENABLE_ACLK
 void *aclk_main(void *ptr);
 
 extern netdata_mutex_t aclk_shared_state_mutex;
@@ -80,8 +84,6 @@ void aclk_send_node_instances(void)
 
 void aclk_send_bin_msg(char *msg, size_t msg_len, enum aclk_topics subtopic, const char *msgname);
 
-#endif /* ENABLE_ACLK */
-
 char *aclk_state(void);
 char *aclk_state_json(void);
 void add_aclk_host_labels(void);

@@ -6,6 +6,10 @@
 
 #define HTTP_API_V2_VERSION 6
 
+size_t aclk_get_http_api_version(void) {
+    return HTTP_API_V2_VERSION;
+}
+
 const struct capability *aclk_get_agent_capas()
 {
     static struct capability agent_capabilities[] = {

@@ -8,6 +8,7 @@
 
 #include "schema-wrappers/capability.h"
 
+size_t aclk_get_http_api_version(void);
 const struct capability *aclk_get_agent_capas();
 struct capability *aclk_get_node_instance_capas(RRDHOST *host);

@@ -488,16 +488,15 @@ int aclk_get_mqtt_otp(RSA *p_key, char **mqtt_id, char **mqtt_usr, char **mqtt_p
     unsigned char *challenge = NULL;
     int challenge_bytes;
 
-    char *agent_id = get_agent_claimid();
-    if (agent_id == NULL) {
+    CLAIM_ID claim_id = claim_id_get();
+    if (!claim_id_is_set(claim_id)) {
         netdata_log_error("Agent was not claimed - cannot perform challenge/response");
         return 1;
     }
 
     // Get Challenge
-    if (aclk_get_otp_challenge(target, agent_id, &challenge, &challenge_bytes)) {
+    if (aclk_get_otp_challenge(target, claim_id.str, &challenge, &challenge_bytes)) {
         netdata_log_error("Error getting challenge");
-        freez(agent_id);
         return 1;
     }
@@ -508,17 +507,15 @@ int aclk_get_mqtt_otp(RSA *p_key, char **mqtt_id, char **mqtt_usr, char **mqtt_p
         netdata_log_error("Couldn't decrypt the challenge received");
         freez(response_plaintext);
         freez(challenge);
-        freez(agent_id);
         return 1;
     }
     freez(challenge);
 
     // Encode and Send Challenge
     struct auth_data data = { .client_id = NULL, .passwd = NULL, .username = NULL };
-    if (aclk_send_otp_response(agent_id, response_plaintext, response_plaintext_bytes, target, &data)) {
+    if (aclk_send_otp_response(claim_id.str, response_plaintext, response_plaintext_bytes, target, &data)) {
         netdata_log_error("Error getting response");
         freez(response_plaintext);
-        freez(agent_id);
         return 1;
     }
@@ -527,7 +524,6 @@ int aclk_get_mqtt_otp(RSA *p_key, char **mqtt_id, char **mqtt_usr, char **mqtt_p
     *mqtt_id = data.client_id;
 
     freez(response_plaintext);
-    freez(agent_id);
     return 0;
 }
@@ -831,17 +827,14 @@ int aclk_get_env(aclk_env_t *env, const char* aclk_hostname, int aclk_port)
 
     req.request_type = HTTP_REQ_GET;
 
-    char *agent_id = get_agent_claimid();
-    if (agent_id == NULL)
-    {
+    CLAIM_ID claim_id = claim_id_get();
+    if (!claim_id_is_set(claim_id)) {
         netdata_log_error("Agent was not claimed - cannot perform challenge/response");
         buffer_free(buf);
         return 1;
     }
 
-    buffer_sprintf(buf, "/api/v1/env?v=%s&cap=proto,ctx&claim_id=%s", &(NETDATA_VERSION[1]) /* skip 'v' at beginning */, agent_id);
-
-    freez(agent_id);
+    buffer_sprintf(buf, "/api/v1/env?v=%s&cap=proto,ctx&claim_id=%s", &(NETDATA_VERSION[1]) /* skip 'v' at beginning */, claim_id.str);
 
     req.host = (char*)aclk_hostname;
     req.port = aclk_port;

@@ -79,7 +79,7 @@ static inline int check_socks_enviroment(const char **proxy)
 {
     char *tmp = getenv("socks_proxy");
 
-    if (!tmp)
+    if (!tmp || !*tmp)
         return 1;
 
     if (aclk_verify_proxy(tmp) == PROXY_TYPE_SOCKS5) {
@@ -97,7 +97,7 @@ static inline int check_http_enviroment(const char **proxy)
 {
     char *tmp = getenv("http_proxy");
 
-    if (!tmp)
+    if (!tmp || !*tmp)
         return 1;
 
     if (aclk_verify_proxy(tmp) == PROXY_TYPE_HTTP) {
@@ -113,15 +113,11 @@ static inline int check_http_enviroment(const char **proxy)
 
 const char *aclk_lws_wss_get_proxy_setting(ACLK_PROXY_TYPE *type)
 {
-    const char *proxy = appconfig_get(&cloud_config, CONFIG_SECTION_GLOBAL, ACLK_PROXY_CONFIG_VAR, ACLK_PROXY_ENV);
-
-    // backward compatibility: "proxy" was in "netdata.conf"
-    if (config_exists(CONFIG_SECTION_CLOUD, ACLK_PROXY_CONFIG_VAR))
-        proxy = config_get(CONFIG_SECTION_CLOUD, ACLK_PROXY_CONFIG_VAR, ACLK_PROXY_ENV);
+    const char *proxy = cloud_config_proxy_get();
 
     *type = PROXY_DISABLED;
 
-    if (strcmp(proxy, "none") == 0)
+    if (!proxy || !*proxy || strcmp(proxy, "none") == 0)
         return proxy;
 
     if (strcmp(proxy, ACLK_PROXY_ENV) == 0) {

@@ -7,6 +7,8 @@
 
 #define WEB_HDR_ACCEPT_ENC "Accept-Encoding:"
 
+static HTTP_ACL default_aclk_http_acl = HTTP_ACL_ALL_FEATURES;
+
 pthread_cond_t query_cond_wait = PTHREAD_COND_INITIALIZER;
 pthread_mutex_t query_lock_wait = PTHREAD_MUTEX_INITIALIZER;
 #define QUERY_THREAD_LOCK pthread_mutex_lock(&query_lock_wait)
@@ -24,6 +26,16 @@ struct pending_req_list
 static struct pending_req_list *pending_req_list_head = NULL;
 static pthread_mutex_t pending_req_list_lock = PTHREAD_MUTEX_INITIALIZER;
 
+void aclk_config_get_query_scope(void) {
+    const char *s = config_get(CONFIG_SECTION_CLOUD, "scope", "full");
+    if(strcmp(s, "license manager") == 0)
+        default_aclk_http_acl = HTTP_ACL_ACLK_LICENSE_MANAGER;
+}
+
+bool aclk_query_scope_has(HTTP_ACL acl) {
+    return (default_aclk_http_acl & acl) == acl;
+}
+
 static struct pending_req_list *pending_req_list_add(const char *msg_id)
 {
     struct pending_req_list *new = callocz(1, sizeof(struct pending_req_list));
@@ -106,7 +118,7 @@ static int http_api_v2(struct aclk_query_thread *query_thr, aclk_query_t query)
 
     struct web_client *w = web_client_get_from_cache();
     web_client_set_conn_cloud(w);
-    w->port_acl = HTTP_ACL_ACLK | HTTP_ACL_ALL_FEATURES;
+    w->port_acl = HTTP_ACL_ACLK | default_aclk_http_acl;
     w->acl = w->port_acl;
     web_client_set_permissions(w, HTTP_ACCESS_MAP_OLD_MEMBER, HTTP_USER_ROLE_MEMBER, WEB_CLIENT_FLAG_AUTH_CLOUD);

@@ -268,7 +268,7 @@ int create_node_instance_result(const char *msg, size_t msg_len)
         freez(res.node_id);
         return 1;
     }
-    update_node_id(&host_id, &node_id);
+    sql_update_node_id(&host_id, &node_id);
 
     aclk_query_t query = aclk_query_new(NODE_STATE_UPDATE);
     node_instance_connection_t node_state_update = {
@@ -292,10 +292,9 @@ int create_node_instance_result(const char *msg, size_t msg_len)
         node_state_update.capabilities = aclk_get_node_instance_capas(host);
     }
 
-    rrdhost_aclk_state_lock(localhost);
-    node_state_update.claim_id = localhost->aclk_state.claimed_id;
+    CLAIM_ID claim_id = claim_id_get();
+    node_state_update.claim_id = claim_id_is_set(claim_id) ? claim_id.str : NULL;
     query->data.bin_payload.payload = generate_node_instance_connection(&query->data.bin_payload.size, &node_state_update);
-    rrdhost_aclk_state_unlock(localhost);
 
     freez((void *)node_state_update.capabilities);

@@ -219,19 +219,19 @@ uint16_t aclk_send_agent_connection_update(mqtt_wss_client client, int reachable
         .capabilities = aclk_get_agent_capas()
     };
 
-    rrdhost_aclk_state_lock(localhost);
-    if (unlikely(!localhost->aclk_state.claimed_id)) {
+    CLAIM_ID claim_id = claim_id_get();
+    if (unlikely(!claim_id_is_set(claim_id))) {
         netdata_log_error("Internal error. Should not come here if not claimed");
-        rrdhost_aclk_state_unlock(localhost);
         return 0;
     }
-    if (localhost->aclk_state.prev_claimed_id)
-        conn.claim_id = localhost->aclk_state.prev_claimed_id;
+
+    CLAIM_ID previous_claim_id = claim_id_get_last_working();
+    if (claim_id_is_set(previous_claim_id))
+        conn.claim_id = previous_claim_id.str;
     else
-        conn.claim_id = localhost->aclk_state.claimed_id;
+        conn.claim_id = claim_id.str;
 
     char *msg = generate_update_agent_connection(&len, &conn);
-    rrdhost_aclk_state_unlock(localhost);
 
     if (!msg) {
         netdata_log_error("Error generating agent::v1::UpdateAgentConnection payload");
@@ -239,10 +239,9 @@ uint16_t aclk_send_agent_connection_update(mqtt_wss_client client, int reachable
     }
 
     pid = aclk_send_bin_message_subtopic_pid(client, msg, len, ACLK_TOPICID_AGENT_CONN, "UpdateAgentConnection");
-    if (localhost->aclk_state.prev_claimed_id) {
-        freez(localhost->aclk_state.prev_claimed_id);
-        localhost->aclk_state.prev_claimed_id = NULL;
-    }
+    if (claim_id_is_set(previous_claim_id))
+        claim_id_clear_previous_working();
 
     return pid;
 }
@@ -254,16 +253,14 @@ char *aclk_generate_lwt(size_t *size)
         .capabilities = NULL
     };
 
-    rrdhost_aclk_state_lock(localhost);
-    if (unlikely(!localhost->aclk_state.claimed_id)) {
+    CLAIM_ID claim_id = claim_id_get();
+    if(!claim_id_is_set(claim_id)) {
         netdata_log_error("Internal error. Should not come here if not claimed");
-        rrdhost_aclk_state_unlock(localhost);
         return NULL;
     }
-    conn.claim_id = localhost->aclk_state.claimed_id;
+    conn.claim_id = claim_id.str;
 
     char *msg = generate_update_agent_connection(size, &conn);
-    rrdhost_aclk_state_unlock(localhost);
 
     if (!msg)
         netdata_log_error("Error generating agent::v1::UpdateAgentConnection payload for LWT");

@@ -2,8 +2,6 @@
 
 #include "aclk_util.h"
 
-#ifdef ENABLE_ACLK
-
 #include "aclk_proxy.h"
 
 #include "daemon/common.h"
@@ -186,20 +184,18 @@ static void topic_generate_final(struct aclk_topic *t)
     if (!replace_tag)
         return;
 
-    rrdhost_aclk_state_lock(localhost);
-    if (unlikely(!localhost->aclk_state.claimed_id)) {
+    CLAIM_ID claim_id = claim_id_get();
+    if (unlikely(!claim_id_is_set(claim_id))) {
         netdata_log_error("This should never be called if agent not claimed");
-        rrdhost_aclk_state_unlock(localhost);
         return;
     }
 
-    t->topic = mallocz(strlen(t->topic_recvd) + 1 - strlen(CLAIM_ID_REPLACE_TAG) + strlen(localhost->aclk_state.claimed_id));
+    t->topic = mallocz(strlen(t->topic_recvd) + 1 - strlen(CLAIM_ID_REPLACE_TAG) + strlen(claim_id.str));
     memcpy(t->topic, t->topic_recvd, replace_tag - t->topic_recvd);
     dest = t->topic + (replace_tag - t->topic_recvd);
 
-    memcpy(dest, localhost->aclk_state.claimed_id, strlen(localhost->aclk_state.claimed_id));
-    dest += strlen(localhost->aclk_state.claimed_id);
-    rrdhost_aclk_state_unlock(localhost);
+    memcpy(dest, claim_id.str, strlen(claim_id.str));
+    dest += strlen(claim_id.str);
     replace_tag += strlen(CLAIM_ID_REPLACE_TAG);
     strcpy(dest, replace_tag);
     dest += strlen(replace_tag);
@@ -440,7 +436,6 @@ void aclk_set_proxy(char **ohost, int *port, char **uname, char **pwd, enum mqtt
 
     freez(proxy);
 }
-#endif /* ENABLE_ACLK */
 
 #if defined(OPENSSL_VERSION_NUMBER) && OPENSSL_VERSION_NUMBER < OPENSSL_VERSION_110
 static EVP_ENCODE_CTX *EVP_ENCODE_CTX_new(void)

@@ -3,8 +3,6 @@
 #define ACLK_UTIL_H
 
 #include "libnetdata/libnetdata.h"
-
-#ifdef ENABLE_ACLK
 #include "mqtt_websockets/mqtt_wss_client.h"
 
 #define CLOUD_EC_MALFORMED_NODE_ID 1
@@ -114,7 +112,6 @@ unsigned long int aclk_tbeb_delay(int reset, int base, unsigned long int min, un
 #define aclk_tbeb_reset(x) aclk_tbeb_delay(1, 0, 0, 0)
 
 void aclk_set_proxy(char **ohost, int *port, char **uname, char **pwd, enum mqtt_wss_proxy_type *type);
-#endif /* ENABLE_ACLK */
 
 int base64_encode_helper(unsigned char *out, int *outl, const unsigned char *in, int in_len);

@ -11,8 +11,7 @@ features like centralized monitoring and easier collaboration.
|
|||
There are two places in the UI where you can add/connect your Node:
|
||||
|
||||
- **Space/Room settings**: Click the cogwheel (the bottom-left corner or next to the Room name at the top) and
|
||||
select "Nodes." Click the "+" button to add
|
||||
a new node.
|
||||
select "Nodes." Click the "+" button to add a new node.
|
||||
- [**Nodes tab**](/docs/dashboards-and-charts/nodes-tab.md): Click on the "Add nodes" button.
|
||||
|
||||
Netdata Cloud will generate a command that you can execute on your Node to install and claim the Agent. The command is
|
||||
|
@ -28,12 +27,13 @@ Once you've chosen your installation method, follow the provided instructions to
|
|||
|
||||
### Connect an Existing Agent
|
||||
|
||||
There are two methods to connect an already installed Netdata Agent to your Netdata Cloud Space:
|
||||
There are three methods to connect an already installed Netdata Agent to your Netdata Cloud Space:
|
||||
|
||||
- using the Netdata Cloud user interface (UI).
|
||||
- using the claiming script.
|
||||
- Manually, via the UI
|
||||
- Automatically, via a provisioning system (or the command line)
|
||||
- Automatically, via environment variables (e.g. kubernetes, docker, etc)
|
||||
|
||||
#### Using the UI (recommended)
|
||||
#### Manually, via the UI
|
||||
|
||||
The UI method is the easiest and recommended way to connect your Agent. Here's how:
|
||||
|
||||
|
@ -42,36 +42,54 @@ The UI method is the easiest and recommended way to connect your Agent. Here's h
|
|||
3. Click the "Connect" button.
|
||||
4. Follow the on-screen instructions to connect your Agent.
|
||||
|
||||
#### Using claiming script
|
||||
#### Automatically, via a provisioning system or the command line
|
||||
|
||||
You can connect an Agent by running
|
||||
the [netdata-claim.sh](https://github.com/netdata/netdata/blob/master/src/claim/netdata-claim.sh.in) script directly.
|
||||
You can either run it with root privileges using `sudo` or as the user running the Agent (typically `netdata`).
|
||||
|
||||
The claiming script accepts options that control the connection process. You can specify these options using the
|
||||
following format:
|
||||
Netdata Agents can be connected to Netdata Cloud by creating the file `/etc/netdata/claim.conf`
|
||||
(or `/opt/netdata/etc/netdata/claim.conf` depending on your installation), like this:
|
||||
|
||||
```bash
|
||||
netdata-claim.sh -OPTION=VALUE ...
|
||||
[global]
|
||||
url = The Netdata Cloud base URL (optional, defaults to `https://app.netdata.cloud`)
|
||||
token = The claiming token for your Netdata Cloud Space (required)
|
||||
rooms = A comma-separated list of Rooms to add the Agent to (optional)
|
||||
proxy = The URL of a proxy server to use for the connection, or none, or env (optional, defaults to env)
|
||||
insecure = Either yes or no (optional)
|
||||
```
|
||||
|
||||
Claiming script options:
|
||||
- `proxy` can get anything libcurl accepts as proxy, or the keywords `none` and `env`. `none` or just empty disables proxy configuration, while `env` instructs libcurl to use the environment for determining proxy configuration (usually the environment variable `https_proxy`).
|
||||
- `insecure` is a boolean (either `yes`, or `no`) and when set to `yes` it instructs libcurl to disable host verification.
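As a rough sketch of how the `proxy` values are interpreted (the actual resolution happens inside libcurl; the variable names below are illustrative only):

```shell
# illustrative only: how a 'proxy' setting maps to an effective proxy URL
https_proxy="http://myproxy:8080"   # hypothetical environment proxy
proxy_setting="env"

case "$proxy_setting" in
  ""|none) effective_proxy="" ;;                 # proxy disabled
  env)     effective_proxy="${https_proxy:-}" ;; # taken from the environment
  *)       effective_proxy="$proxy_setting" ;;   # explicit URL handed to libcurl
esac

echo "$effective_proxy"   # → http://myproxy:8080
```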
|
||||
|
||||
| Option | Description | Required | Default value |
|
||||
|--------|--------------------------------------------------------------------|:--------:|:------------------------------------------------------|
|
||||
| token | The claiming token for your Netdata Cloud Space. | yes | |
|
||||
| rooms | A comma-separated list of Rooms to add the Agent to. | no | The Agent will be added to the "All nodes" Room only. |
|
||||
| id | The unique identifier of the Agent. | no | The Agent's MACHINE_GUID. |
|
||||
| proxy | The URL of a proxy server to use for the connection, if necessary. | no | |
|
||||
|
||||
Example:
|
||||
example:
|
||||
|
||||
```bash
|
||||
netdata-claim.sh -token=MYTOKEN1234567 -rooms=room1,room2
|
||||
[global]
|
||||
url = https://app.netdata.cloud
|
||||
token = NETDATA_CLOUD_SPACE_TOKEN
|
||||
rooms = ROOM_KEY1,ROOM_KEY2,ROOM_KEY3
|
||||
proxy = http://username:password@myproxy:8080
|
||||
insecure = no
|
||||
```
|
||||
|
||||
This command connects the Agent and adds it to the "room1" and "room2" Rooms using your claiming token
|
||||
MYTOKEN1234567.
|
||||
If the agent is already running, you can either run `netdatacli reload-claiming-state` or restart the agent.
|
||||
Otherwise, the agent will be claimed when it starts.
|
||||
|
||||
If claiming fails for any reason, daemon.log will log the reason (search for `CLAIM`),
|
||||
and `http://ip:19999/api/v2/info` will also state the reason in the `cloud` section of the response.
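For example, you can grep the daemon log for claiming errors (the log excerpt below is fabricated for illustration; your log path may differ):

```shell
# create a sample log excerpt (fabricated) and search it for claiming errors
cat > /tmp/daemon.log.sample <<'EOF'
2024-07-01 12:00:00: netdata ERROR : CLAIM: claiming failed: token is expired, not found, or invalid
EOF
grep -i claim /tmp/daemon.log.sample
```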
|
||||
|
||||
#### Automatically, via environment variables
|
||||
|
||||
Netdata will use the following environment variables:
|
||||
|
||||
- `NETDATA_CLAIM_URL`: The Netdata Cloud base URL (optional, defaults to `https://app.netdata.cloud`)
|
||||
- `NETDATA_CLAIM_TOKEN`: The claiming token for your Netdata Cloud Space (required)
|
||||
- `NETDATA_CLAIM_ROOMS`: A comma-separated list of Rooms to add the Agent to (optional)
|
||||
- `NETDATA_CLAIM_PROXY`: The URL of a proxy server to use for the connection (optional)
|
||||
- `NETDATA_EXTRA_CLAIM_OPTS`: may contain a space-separated list of options. Currently, `-insecure` is the only option used.
|
||||
|
||||
The `NETDATA_CLAIM_TOKEN` alone is enough for triggering the claiming process.
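For example, in a shell or a container entrypoint (the token and room values below are placeholders):

```shell
# placeholder values; NETDATA_CLAIM_TOKEN alone is enough to trigger claiming
export NETDATA_CLAIM_TOKEN="MYTOKEN1234567"
export NETDATA_CLAIM_ROOMS="room1,room2"

# the URL falls back to the default when NETDATA_CLAIM_URL is unset
echo "${NETDATA_CLAIM_URL:-https://app.netdata.cloud}"
```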
|
||||
|
||||
If claiming fails for any reason, daemon.log will log the reason (search for `CLAIM`),
|
||||
and `http://ip:19999/api/v2/info` will also state the reason in the `cloud` section of the response.
|
||||
|
||||
## Reconnect
|
||||
|
||||
|
@ -84,19 +102,12 @@ cd /var/lib/netdata # Replace with your Netdata library directory, if not /var
|
|||
sudo rm -rf cloud.d/
|
||||
```
|
||||
|
||||
> IMPORTANT:<br/>
|
||||
> Keep in mind that the Agent will be **re-claimed automatically** if the environment variables or `claim.conf` exist when the agent is restarted.
|
||||
|
||||
This node no longer has access to the credentials it used when connecting to Netdata Cloud via the ACLK. You will
|
||||
still be able to see this node in your Rooms in an **unreachable** state.
|
||||
|
||||
If you want to reconnect this node, you need to:
|
||||
|
||||
1. Ensure that the `/var/lib/netdata/cloud.d` directory doesn't exist. In some installations, the path
|
||||
is `/opt/netdata/var/lib/netdata/cloud.d`
|
||||
2. Stop the Agent
|
||||
3. Ensure that the `uuidgen-runtime` package is installed. Run `uuidgen` and validate that you get back a UUID
|
||||
4. Copy the kickstart.sh command to add a node from your space and add to the end of it `--claim-id "$(uuidgen)"`. Run
|
||||
the command and look for the message `Node was successfully claimed.`
|
||||
5. Start the Agent
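Step 3 can be verified with a quick shell check (this sketch assumes a typical Linux system and falls back to the kernel's UUID generator if `uuidgen` is missing):

```shell
# generate a UUID and validate its 8-4-4-4-12 format
uuid="$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)"
if echo "$uuid" | grep -Eq '^[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}$'; then
  echo "valid UUID: $uuid"
fi
```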
|
||||
|
||||
### Docker based installations
|
||||
|
||||
To remove a node from your Space in Netdata Cloud and connect it to another Space, follow these steps:
|
||||
|
@ -113,7 +124,6 @@ To remove a node from you Space in Netdata Cloud, and connect it to another Spac
|
|||
|
||||
```bash
|
||||
rm -rf /var/lib/netdata/cloud.d/
|
||||
|
||||
rm /var/lib/netdata/registry/netdata.public.unique.id
|
||||
```
|
||||
|
||||
|
@ -123,7 +133,6 @@ To remove a node from you Space in Netdata Cloud, and connect it to another Spac
|
|||
|
||||
```bash
|
||||
docker stop CONTAINER_NAME
|
||||
|
||||
docker rm CONTAINER_NAME
|
||||
```
|
||||
|
||||
|
@ -163,16 +172,9 @@ Only the administrators of a Space in Netdata Cloud can trigger this action.
|
|||
If you're having trouble connecting a node, this may be because
|
||||
the [ACLK](/src/aclk/README.md) cannot connect to Cloud.
|
||||
|
||||
With the Netdata Agent running, visit `http://NODE:19999/api/v1/info` in your browser, replacing `NODE` with the IP
|
||||
address or hostname of your Agent. The returned JSON contains four keys that will be helpful to diagnose any issues you
|
||||
might be having with the ACLK or connection process.
|
||||
|
||||
```
|
||||
"cloud-enabled"
|
||||
"cloud-available"
|
||||
"agent-claimed"
|
||||
"aclk-available"
|
||||
```
|
||||
With the Netdata Agent running, visit `http://NODE:19999/api/v2/info` in your browser, replacing `NODE` with the IP
|
||||
address or hostname of your Agent. The returned JSON contains a section called `cloud` with helpful information to
|
||||
diagnose any issues you might be having with the ACLK or connection process.
|
||||
|
||||
> **Note**
|
||||
>
|
||||
|
@ -216,28 +218,12 @@ Failed to write new machine GUID. Please make sure you have rights to write to /
|
|||
For successful execution, you will need to run the script with root privileges, or as the user that is running
|
||||
the Agent.
|
||||
|
||||
### bash: netdata-claim.sh: command not found
|
||||
|
||||
If you run the claiming script and see a `command not found` error, you either installed Netdata in a non-standard
|
||||
location or are using an unsupported package. If you installed Netdata in a non-standard path using
|
||||
the `--install-prefix` option, you need to update your `$PATH` or run `netdata-claim.sh` using the full path.
|
||||
|
||||
For example, if you installed Netdata to `/opt/netdata`, use `/opt/netdata/bin/netdata-claim.sh` to run the claiming
|
||||
script.
|
||||
|
||||
> **Note**
|
||||
>
|
||||
> If you are using an unsupported package, such as a third-party `.deb`/`.rpm` package provided by your distribution,
|
||||
> please remove that package and reinstall using
|
||||
>
|
||||
our [recommended kickstart script](/packaging/installer/methods/kickstart.md).
|
||||
|
||||
### Connecting on older distributions (Ubuntu 14.04, Debian 8, CentOS 6)
|
||||
|
||||
If you're running an older Linux distribution or one that has reached EOL, such as Ubuntu 14.04 LTS, Debian 8, or CentOS
|
||||
6, your Agent may not be able to securely connect to Netdata Cloud due to an outdated version of OpenSSL. These old
|
||||
versions of OpenSSL cannot perform [hostname validation](https://wiki.openssl.org/index.php/Hostname_validation), which
|
||||
helps securely encrypt SSL connections.
|
||||
versions of OpenSSL cannot perform [hostname validation](https://wiki.openssl.org/index.php/Hostname_validation),
|
||||
which helps securely encrypt SSL connections.
|
||||
|
||||
We recommend you reinstall Netdata with
|
||||
a [static build](/packaging/installer/methods/kickstart.md#static-builds),
|
||||
|
@ -246,102 +232,3 @@ which uses an up-to-date version of OpenSSL with hostname validation enabled.
|
|||
If you choose to continue using the outdated version of OpenSSL, your node will still connect to Netdata Cloud, albeit
|
||||
with hostname verification disabled. Without verification, your Netdata Cloud connection could be vulnerable to
|
||||
man-in-the-middle attacks.
|
||||
|
||||
### cloud-enabled is false
|
||||
|
||||
If `cloud-enabled` is `false`, you probably ran the installer with `--disable-cloud` option.
|
||||
|
||||
Additionally, check that the `enabled` setting in `/var/lib/netdata/cloud.d/cloud.conf` is set to `true`:
|
||||
|
||||
```conf
|
||||
[global]
|
||||
enabled = true
|
||||
```
|
||||
|
||||
To fix this issue, reinstall Netdata using
|
||||
your [preferred method](/packaging/installer/README.md) and do not add
|
||||
the `--disable-cloud` option.
|
||||
|
||||
### cloud-available is false / ACLK Available: No
|
||||
|
||||
If `cloud-available` is `false` after you verified Cloud is enabled in the previous step, the most likely issue is that
|
||||
Cloud features failed to build during installation.
|
||||
|
||||
If Cloud features fail to build, the installer continues and finishes the process without Cloud functionality as opposed
|
||||
to failing the installation altogether.
|
||||
|
||||
We do this to ensure the Agent will always finish installing.
|
||||
|
||||
If you can't see an explicit error in the installer's output, you can run the installer with the `--require-cloud`
|
||||
option. This option causes the installation to fail if Cloud functionality can't be built and enabled, and the
|
||||
installer's output should give you more error details.
|
||||
|
||||
You may see one of the following error messages during installation:
|
||||
|
||||
- `Failed to build libmosquitto. The install process will continue, but you will not be able to connect this node to Netdata Cloud.`
|
||||
- `Unable to fetch sources for libmosquitto. The install process will continue, but you will not be able to connect this node to Netdata Cloud.`
|
||||
- `Failed to build libwebsockets. The install process will continue, but you may not be able to connect this node to Netdata Cloud.`
|
||||
- `Unable to fetch sources for libwebsockets. The install process will continue, but you may not be able to connect this node to Netdata Cloud.`
|
||||
- `Could not find cmake, which is required to build libwebsockets. The install process will continue, but you may not be able to connect this node to Netdata Cloud.`
|
||||
- `Could not find cmake, which is required to build JSON-C. The install process will continue, but Netdata Cloud support will be disabled.`
|
||||
- `Failed to build JSON-C. Netdata Cloud support will be disabled.`
|
||||
- `Unable to fetch sources for JSON-C. Netdata Cloud support will be disabled.`
|
||||
|
||||
One common cause of the installer failing to build Cloud features is not having one of the following dependencies on
|
||||
your system: `cmake`, `json-c` and `OpenSSL`, including corresponding `devel` packages.
|
||||
|
||||
You can also look for error messages in `/var/log/netdata/error.log`. Try one of the following two commands to search
|
||||
for ACLK-related errors.
|
||||
|
||||
```bash
|
||||
less /var/log/netdata/error.log
|
||||
grep -i ACLK /var/log/netdata/error.log
|
||||
```
|
||||
|
||||
If the installer's output does not help you enable Cloud features, contact us
|
||||
by [creating an issue on GitHub](https://github.com/netdata/netdata/issues/new?assignees=&labels=bug%2Cneeds+triage&template=BUG_REPORT.yml&title=The+installer+failed+to+prepare+the+required+dependencies+for+Netdata+Cloud+functionality)
|
||||
with details about your system and relevant output from `error.log`.
|
||||
|
||||
### agent-claimed is false / Claimed: No
|
||||
|
||||
You must [connect your node](#connect).
|
||||
|
||||
### aclk-available is false / Online: No
|
||||
|
||||
If `aclk-available` is `false` and all other keys are `true`, your Agent is having trouble connecting to the Cloud
|
||||
through the ACLK. Please check your system's firewall.
|
||||
|
||||
If your Agent needs to use a proxy to access the internet, you must set up a proxy for connecting.
|
||||
|
||||
If you are certain firewall and proxy settings are not the issue, you should consult the Agent's `error.log`
|
||||
at `/var/log/netdata/error.log` and contact us
|
||||
by [creating an issue on GitHub](https://github.com/netdata/netdata/issues/new?assignees=&labels=bug%2Cneeds+triage&template=BUG_REPORT.yml&title=ACLK-available-is-false)
|
||||
with details about your system and relevant output from `error.log`.
|
||||
|
||||
## Connecting reference
|
||||
|
||||
In the sections below, you can find reference material for the kickstart script, claiming script, connecting via the
|
||||
Agent's command line tool, and details about the files found in `cloud.d`.
|
||||
|
||||
### The `cloud.conf` file
|
||||
|
||||
This section defines how and whether your Agent connects to Netdata Cloud using
|
||||
the [Agent-Cloud link](/src/aclk/README.md) (ACLK).
|
||||
|
||||
| setting | default | info |
|
||||
|:---------------|:----------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------|
|
||||
| enabled | yes | Controls whether the ACLK is active. Set to no to prevent the Agent from connecting to Netdata Cloud. |
|
||||
| cloud base url | <https://app.netdata.cloud> | The URL for the Netdata Cloud web application. Typically, this should not be changed. |
|
||||
| proxy | env | Specifies the proxy setting for the ACLK. Options: none (no proxy), env (use environment's proxy), or a URL (e.g., `http://proxy.example.com:1080`). |
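Putting these settings together, a minimal `cloud.conf` might look like this (a sketch using the defaults from the table above; your file may contain additional settings):

```conf
[global]
    enabled = yes
    cloud base url = https://app.netdata.cloud
    proxy = env
```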
|
||||
|
||||
### Connection directory
|
||||
|
||||
Netdata stores the Agent's connection-related state in the Netdata library directory under `cloud.d`. For a default
|
||||
installation, this directory exists at `/var/lib/netdata/cloud.d`. The directory and its files should be owned by the
|
||||
user that runs the Agent, which is typically the `netdata` user.
|
||||
|
||||
The `cloud.d/token` file should contain the claiming token and the `cloud.d/rooms` file should contain the list of
|
||||
Rooms you added that node to.
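As a sketch, the split-file layout looks like this (paths and values are illustrative; a default install uses `/var/lib/netdata/cloud.d` owned by the `netdata` user):

```shell
# illustrative layout; real files live in the Netdata library directory
mkdir -p /tmp/cloud.d
printf 'MYTOKEN1234567' > /tmp/cloud.d/token   # claiming token (placeholder)
printf 'room1,room2'    > /tmp/cloud.d/rooms   # comma-separated Room list
cat /tmp/cloud.d/rooms   # → room1,room2
```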
|
||||
|
||||
The user can also put the Cloud endpoint's full certificate chain in `cloud.d/cloud_fullchain.pem` so that the Agent
|
||||
can trust the endpoint if necessary.
|
||||
|
|
491
src/claim/claim-with-api.c
Normal file
|
@ -0,0 +1,491 @@
|
|||
// SPDX-License-Identifier: GPL-3.0-or-later
|
||||
|
||||
#include "claim.h"
|
||||
|
||||
#include "registry/registry.h"
|
||||
|
||||
#include <curl/curl.h>
|
||||
#include <openssl/evp.h>
|
||||
#include <openssl/pem.h>
|
||||
#include <openssl/err.h>
|
||||
|
||||
static bool check_and_generate_certificates() {
|
||||
FILE *fp;
|
||||
EVP_PKEY *pkey = NULL;
|
||||
EVP_PKEY_CTX *pctx = NULL;
|
||||
|
||||
CLEAN_CHAR_P *private_key_file = filename_from_path_entry_strdupz(netdata_configured_cloud_dir, "private.pem");
|
||||
CLEAN_CHAR_P *public_key_file = filename_from_path_entry_strdupz(netdata_configured_cloud_dir, "public.pem");
|
||||
|
||||
// Check if the public key exists; if so, the certificates were generated previously
|
||||
fp = fopen(public_key_file, "r");
|
||||
if (fp) {
|
||||
fclose(fp);
|
||||
return true;
|
||||
}
|
||||
|
||||
// Generate the RSA key
|
||||
pctx = EVP_PKEY_CTX_new_id(EVP_PKEY_RSA, NULL);
|
||||
if (!pctx) {
|
||||
claim_agent_failure_reason_set("Cannot generate RSA key, EVP_PKEY_CTX_new_id() failed");
|
||||
return false;
|
||||
}
|
||||
|
||||
if (EVP_PKEY_keygen_init(pctx) <= 0) {
|
||||
claim_agent_failure_reason_set("Cannot generate RSA key, EVP_PKEY_keygen_init() failed");
|
||||
EVP_PKEY_CTX_free(pctx);
|
||||
return false;
|
||||
}
|
||||
|
||||
if (EVP_PKEY_CTX_set_rsa_keygen_bits(pctx, 2048) <= 0) {
|
||||
claim_agent_failure_reason_set("Cannot generate RSA key, EVP_PKEY_CTX_set_rsa_keygen_bits() failed");
|
||||
EVP_PKEY_CTX_free(pctx);
|
||||
return false;
|
||||
}
|
||||
|
||||
if (EVP_PKEY_keygen(pctx, &pkey) <= 0) {
|
||||
claim_agent_failure_reason_set("Cannot generate RSA key, EVP_PKEY_keygen() failed");
|
||||
EVP_PKEY_CTX_free(pctx);
|
||||
return false;
|
||||
}
|
||||
|
||||
EVP_PKEY_CTX_free(pctx);
|
||||
|
||||
// Save private key
|
||||
fp = fopen(private_key_file, "wb");
|
||||
if (!fp || !PEM_write_PrivateKey(fp, pkey, NULL, NULL, 0, NULL, NULL)) {
|
||||
claim_agent_failure_reason_set("Cannot write private key file: %s", private_key_file);
|
||||
if (fp) fclose(fp);
|
||||
EVP_PKEY_free(pkey);
|
||||
return false;
|
||||
}
|
||||
fclose(fp);
|
||||
|
||||
// Save public key
|
||||
fp = fopen(public_key_file, "wb");
|
||||
if (!fp || !PEM_write_PUBKEY(fp, pkey)) {
|
||||
claim_agent_failure_reason_set("Cannot write public key file: %s", public_key_file);
|
||||
if (fp) fclose(fp);
|
||||
EVP_PKEY_free(pkey);
|
||||
return false;
|
||||
}
|
||||
fclose(fp);
|
||||
|
||||
EVP_PKEY_free(pkey);
|
||||
return true;
|
||||
}
|
||||
|
||||
static size_t response_write_callback(void *ptr, size_t size, size_t nmemb, void *stream) {
|
||||
BUFFER *wb = stream;
|
||||
size_t real_size = size * nmemb;
|
||||
|
||||
buffer_memcat(wb, ptr, real_size);
|
||||
|
||||
return real_size;
|
||||
}
|
||||
|
||||
static const char *curl_add_json_room(BUFFER *wb, const char *start, const char *end) {
|
||||
size_t len = end - start;
|
||||
|
||||
// copy the item to a new buffer and terminate it
|
||||
char buf[len + 1];
|
||||
memcpy(buf, start, len);
|
||||
buf[len] = '\0';
|
||||
|
||||
// add it to the json array
|
||||
const char *trimmed = trim(buf); // remove leading and trailing spaces
|
||||
if(trimmed)
|
||||
buffer_json_add_array_item_string(wb, trimmed);
|
||||
|
||||
// prepare for the next item
|
||||
start = end + 1;
|
||||
|
||||
// skip multiple separators or spaces
|
||||
while(*start == ',' || *start == ' ') start++;
|
||||
|
||||
return start;
|
||||
}
|
||||
|
||||
void curl_add_rooms_json_array(BUFFER *wb, const char *rooms) {
|
||||
buffer_json_member_add_array(wb, "rooms");
|
||||
if(rooms && *rooms) {
|
||||
const char *start = rooms, *end = NULL;
|
||||
|
||||
// Skip initial separators or spaces
|
||||
while (*start == ',' || *start == ' ')
|
||||
start++;
|
||||
|
||||
// Process each item in the comma-separated list
|
||||
while ((end = strchr(start, ',')) != NULL)
|
||||
start = curl_add_json_room(wb, start, end);
|
||||
|
||||
// Process the last item if any
|
||||
if (*start)
|
||||
curl_add_json_room(wb, start, &start[strlen(start)]);
|
||||
}
|
||||
buffer_json_array_close(wb);
|
||||
}
|
||||
|
||||
static int debug_callback(CURL *handle, curl_infotype type, char *data, size_t size, void *userptr) {
|
||||
(void)handle; // Unused
|
||||
(void)userptr; // Unused
|
||||
|
||||
if (type == CURLINFO_TEXT)
|
||||
nd_log(NDLS_DAEMON, NDLP_INFO, "CLAIM: Info: %s", data);
|
||||
else if (type == CURLINFO_HEADER_OUT)
|
||||
nd_log(NDLS_DAEMON, NDLP_INFO, "CLAIM: Send header: %.*s", (int)size, data);
|
||||
else if (type == CURLINFO_DATA_OUT)
|
||||
nd_log(NDLS_DAEMON, NDLP_INFO, "CLAIM: Send data: %.*s", (int)size, data);
|
||||
else if (type == CURLINFO_SSL_DATA_OUT)
|
||||
nd_log(NDLS_DAEMON, NDLP_INFO, "CLAIM: Send SSL data: %.*s", (int)size, data);
|
||||
else if (type == CURLINFO_HEADER_IN)
|
||||
nd_log(NDLS_DAEMON, NDLP_INFO, "CLAIM: Receive header: %.*s", (int)size, data);
|
||||
else if (type == CURLINFO_DATA_IN)
|
||||
nd_log(NDLS_DAEMON, NDLP_INFO, "CLAIM: Receive data: %.*s", (int)size, data);
|
||||
else if (type == CURLINFO_SSL_DATA_IN)
|
||||
nd_log(NDLS_DAEMON, NDLP_INFO, "CLAIM: Receive SSL data: %.*s", (int)size, data);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static bool send_curl_request(const char *machine_guid, const char *hostname, const char *token, const char *rooms, const char *url, const char *proxy, int insecure, bool *can_retry) {
|
||||
CURL *curl;
|
||||
CURLcode res;
|
||||
char target_url[2048];
|
||||
char public_key[2048] = ""; // Adjust size as needed
|
||||
FILE *fp;
|
||||
struct curl_slist *headers = NULL;
|
||||
|
||||
// create a new random claim id
|
||||
nd_uuid_t claimed_id;
|
||||
uuid_generate_random(claimed_id);
|
||||
char claimed_id_str[UUID_STR_LEN];
|
||||
uuid_unparse_lower(claimed_id, claimed_id_str);
|
||||
|
||||
// generate the URL to post
|
||||
snprintf(target_url, sizeof(target_url), "%s%sapi/v1/spaces/nodes/%s",
|
||||
url, strendswith(url, "/") ? "" : "/", claimed_id_str);
|
||||
|
||||
// Read the public key
|
||||
CLEAN_CHAR_P *public_key_file = filename_from_path_entry_strdupz(netdata_configured_cloud_dir, "public.pem");
|
||||
fp = fopen(public_key_file, "r");
|
||||
if (!fp || fread(public_key, 1, sizeof(public_key) - 1, fp) == 0) { // leave room for the NUL terminator (buffer is zero-initialized)
|
||||
claim_agent_failure_reason_set("cannot read public key file '%s'", public_key_file);
|
||||
if (fp) fclose(fp);
|
||||
*can_retry = false;
|
||||
return false;
|
||||
}
|
||||
fclose(fp);
|
||||
|
||||
// check if we have trusted.pem
|
||||
// or cloud_fullchain.pem, for backwards compatibility
|
||||
CLEAN_CHAR_P *trusted_key_file = filename_from_path_entry_strdupz(netdata_configured_cloud_dir, "trusted.pem");
|
||||
fp = fopen(trusted_key_file, "r");
|
||||
if(fp)
|
||||
fclose(fp);
|
||||
else {
|
||||
freez(trusted_key_file);
|
||||
trusted_key_file = filename_from_path_entry_strdupz(netdata_configured_cloud_dir, "cloud_fullchain.pem");
|
||||
fp = fopen(trusted_key_file, "r");
|
||||
if(fp)
|
||||
fclose(fp);
|
||||
else {
|
||||
freez(trusted_key_file);
|
||||
trusted_key_file = NULL;
|
||||
}
|
||||
}
|
||||
|
||||
// generate the JSON request message
|
||||
CLEAN_BUFFER *wb = buffer_create(0, NULL);
|
||||
buffer_json_initialize(wb, "\"", "\"", 0, true, BUFFER_JSON_OPTIONS_MINIFY);
|
||||
|
||||
buffer_json_member_add_object(wb, "node");
|
||||
{
|
||||
buffer_json_member_add_string(wb, "id", claimed_id_str);
|
||||
buffer_json_member_add_string(wb, "hostname", hostname);
|
||||
}
|
||||
buffer_json_object_close(wb); // node
|
||||
|
||||
buffer_json_member_add_string(wb, "token", token);
|
||||
curl_add_rooms_json_array(wb, rooms);
|
||||
buffer_json_member_add_string(wb, "publicKey", public_key);
|
||||
buffer_json_member_add_string(wb, "mGUID", machine_guid);
|
||||
buffer_json_finalize(wb);
|
||||
|
||||
// initialize libcurl
|
||||
curl = curl_easy_init();
|
||||
if(!curl) {
|
||||
claim_agent_failure_reason_set("Cannot initialize request (curl_easy_init() failed)");
|
||||
*can_retry = true;
|
||||
return false;
|
||||
}
|
||||
|
||||
// curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);
|
||||
curl_easy_setopt(curl, CURLOPT_DEBUGFUNCTION, debug_callback);
|
||||
|
||||
// we will receive the response in this
|
||||
CLEAN_BUFFER *response = buffer_create(0, NULL);
|
||||
|
||||
// configure the request
|
||||
headers = curl_slist_append(headers, "Content-Type: application/json");
|
||||
curl_easy_setopt(curl, CURLOPT_URL, target_url);
|
||||
curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "PUT");
|
||||
curl_easy_setopt(curl, CURLOPT_POSTFIELDS, buffer_tostring(wb));
|
||||
curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
|
||||
curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, response_write_callback);
|
||||
curl_easy_setopt(curl, CURLOPT_WRITEDATA, response);
|
||||
|
||||
if(trusted_key_file)
|
||||
curl_easy_setopt(curl, CURLOPT_CAINFO, trusted_key_file);
|
||||
|
||||
// Proxy configuration
|
||||
if (proxy) {
|
||||
if (!*proxy || strcmp(proxy, "none") == 0)
|
||||
// disable proxy configuration in libcurl
|
||||
curl_easy_setopt(curl, CURLOPT_PROXY, "");
|
||||
|
||||
else if (strcmp(proxy, "env") != 0)
|
||||
// set the custom proxy for libcurl
|
||||
curl_easy_setopt(curl, CURLOPT_PROXY, proxy);
|
||||
|
||||
// otherwise, libcurl will use its own proxy environment variables
|
||||
}
|
||||
|
||||
// Insecure option
|
||||
if (insecure) {
|
||||
curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L);
|
||||
curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 0L);
|
||||
}
|
||||
|
||||
// Set timeout options
|
||||
curl_easy_setopt(curl, CURLOPT_TIMEOUT, 10);
|
||||
curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT, 5);
|
||||
|
||||
// execute the request
|
||||
res = curl_easy_perform(curl);
|
||||
if (res != CURLE_OK) {
|
||||
claim_agent_failure_reason_set("Request failed with error: %s", curl_easy_strerror(res));
|
||||
curl_easy_cleanup(curl);
|
||||
*can_retry = true;
|
||||
return false;
|
||||
}
|
||||
|
||||
// Get HTTP response code
|
||||
long http_status_code;
|
||||
curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &http_status_code);
|
||||
|
||||
bool ret = false;
|
||||
if(http_status_code == 204) {
|
||||
if(!cloud_conf_regenerate(claimed_id_str, machine_guid, hostname, token, rooms, url, proxy, insecure)) {
|
||||
claim_agent_failure_reason_set("Failed to save claiming info to disk");
|
||||
}
|
||||
else {
|
||||
claim_agent_failure_reason_set(NULL);
|
||||
ret = true;
|
||||
}
|
||||
|
||||
*can_retry = false;
|
||||
}
|
||||
else if (http_status_code == 422) {
|
||||
if(buffer_strlen(response)) {
|
||||
struct json_object *parsed_json;
|
||||
struct json_object *error_key_obj;
|
||||
const char *error_key = NULL;
|
||||
|
||||
parsed_json = json_tokener_parse(buffer_tostring(response));
|
||||
if(parsed_json) {
|
||||
if (json_object_object_get_ex(parsed_json, "errorMsgKey", &error_key_obj))
|
||||
error_key = json_object_get_string(error_key_obj);
|
||||
|
||||
if (error_key && strcmp(error_key, "ErrInvalidNodeID") == 0)
|
||||
claim_agent_failure_reason_set("Failed: the node id is invalid");
|
||||
else if (strcmp(error_key, "ErrInvalidNodeName") == 0)
|
||||
claim_agent_failure_reason_set("Failed: the node name is invalid");
|
||||
else if (strcmp(error_key, "ErrInvalidRoomID") == 0)
|
||||
claim_agent_failure_reason_set("Failed: one or more room ids are invalid");
|
||||
else if (strcmp(error_key, "ErrInvalidPublicKey") == 0)
|
||||
claim_agent_failure_reason_set("Failed: the public key is invalid");
|
||||
else
|
||||
claim_agent_failure_reason_set("Failed with description '%s'", error_key ? error_key : "(unknown)");
|
||||
|
||||
json_object_put(parsed_json);
|
||||
}
|
||||
else
|
||||
claim_agent_failure_reason_set("Failed with a response code %ld", http_status_code);
|
||||
}
|
||||
else
|
||||
claim_agent_failure_reason_set("Failed with an empty response, code %ld", http_status_code);
|
||||
|
||||
*can_retry = false;
|
||||
}
|
||||
else if(http_status_code == 102) {
|
||||
claim_agent_failure_reason_set("Claiming is in progress");
|
||||
*can_retry = false;
|
||||
}
|
||||
else if(http_status_code == 403) {
|
||||
claim_agent_failure_reason_set("Failed: token is expired, not found, or invalid");
|
||||
*can_retry = false;
|
||||
}
|
||||
else if(http_status_code == 409) {
|
||||
claim_agent_failure_reason_set("Failed: agent is already claimed");
|
||||
*can_retry = false;
|
||||
}
|
||||
else if(http_status_code == 500) {
|
||||
claim_agent_failure_reason_set("Failed: received Internal Server Error");
|
||||
*can_retry = true;
|
||||
}
|
||||
else if(http_status_code == 503) {
|
||||
claim_agent_failure_reason_set("Failed: Netdata Cloud is unavailable");
|
||||
*can_retry = true;
|
||||
}
|
||||
else if(http_status_code == 504) {
|
||||
claim_agent_failure_reason_set("Failed: Gateway Timeout");
|
||||
*can_retry = true;
|
||||
}
|
||||
else {
|
||||
claim_agent_failure_reason_set("Failed with response code %ld", http_status_code);
|
||||
*can_retry = true;
|
||||
}
|
||||
|
||||
curl_easy_cleanup(curl);
|
||||
return ret;
|
||||
}
|
||||
|
||||
bool claim_agent(const char *url, const char *token, const char *rooms, const char *proxy, bool insecure) {
|
||||
static SPINLOCK spinlock = NETDATA_SPINLOCK_INITIALIZER;
|
||||
spinlock_lock(&spinlock);
|
||||
|
||||
if (!check_and_generate_certificates()) {
|
||||
spinlock_unlock(&spinlock);
|
||||
return false;
|
||||
}
|
||||
|
||||
bool done = false, can_retry = true;
|
||||
size_t retries = 0;
|
||||
do {
|
||||
done = send_curl_request(registry_get_this_machine_guid(), registry_get_this_machine_hostname(), token, rooms, url, proxy, insecure, &can_retry);
|
||||
if (done) break;
|
||||
sleep_usec(300 * USEC_PER_MS + 100 * retries * USEC_PER_MS);
|
||||
retries++;
|
||||
} while(can_retry && retries < 5);
|
||||
|
||||
spinlock_unlock(&spinlock);
|
||||
return done;
|
||||
}
|
||||
|
||||
bool claim_agent_from_environment(void) {
|
||||
const char *url = getenv("NETDATA_CLAIM_URL");
|
||||
if(!url || !*url) {
|
||||
url = appconfig_get(&cloud_config, CONFIG_SECTION_GLOBAL, "url", DEFAULT_CLOUD_BASE_URL);
|
||||
if(!url || !*url) return false;
|
||||
}
|
||||
|
||||
const char *token = getenv("NETDATA_CLAIM_TOKEN");
|
||||
if(!token || !*token)
|
||||
return false;
|
||||
|
||||
const char *rooms = getenv("NETDATA_CLAIM_ROOMS");
|
||||
if(!rooms)
|
||||
rooms = "";
|
||||
|
||||
const char *proxy = getenv("NETDATA_CLAIM_PROXY");
|
||||
if(!proxy || !*proxy)
|
||||
proxy = "";
|
||||
|
||||
bool insecure = CONFIG_BOOLEAN_NO;
|
||||
const char *from_env = getenv("NETDATA_EXTRA_CLAIM_OPTS");
|
||||
if(from_env && *from_env && strstr(from_env, "-insecure") != NULL)
|
||||
insecure = CONFIG_BOOLEAN_YES;
|
||||
|
||||
return claim_agent(url, token, rooms, proxy, insecure);
|
||||
}
|
||||
|
||||
bool claim_agent_from_claim_conf(void) {
|
||||
static struct config claim_config = {
|
||||
.first_section = NULL,
|
||||
.last_section = NULL,
|
||||
.mutex = NETDATA_MUTEX_INITIALIZER,
|
||||
.index = {
|
||||
.avl_tree = {
|
||||
.root = NULL,
|
||||
.compar = appconfig_section_compare
|
||||
},
|
||||
.rwlock = AVL_LOCK_INITIALIZER
|
||||
}
|
||||
};
|
||||
static SPINLOCK spinlock = NETDATA_SPINLOCK_INITIALIZER;
|
||||
bool ret = false;
|
||||
|
||||
spinlock_lock(&spinlock);
|
||||
|
||||
errno_clear();
|
||||
char *filename = filename_from_path_entry_strdupz(netdata_configured_user_config_dir, "claim.conf");
|
||||
bool loaded = appconfig_load(&claim_config, filename, 1, NULL);
|
||||
freez(filename);
|
||||
|
||||
if(loaded) {
|
||||
const char *url = appconfig_get(&claim_config, CONFIG_SECTION_GLOBAL, "url", DEFAULT_CLOUD_BASE_URL);
|
||||
const char *token = appconfig_get(&claim_config, CONFIG_SECTION_GLOBAL, "token", "");
|
||||
const char *rooms = appconfig_get(&claim_config, CONFIG_SECTION_GLOBAL, "rooms", "");
|
||||
const char *proxy = appconfig_get(&claim_config, CONFIG_SECTION_GLOBAL, "proxy", "");
|
||||
        bool insecure = appconfig_get_boolean(&claim_config, CONFIG_SECTION_GLOBAL, "insecure", CONFIG_BOOLEAN_NO);

        if(token && *token && url && *url)
            ret = claim_agent(url, token, rooms, proxy, insecure);
    }

    spinlock_unlock(&spinlock);

    return ret;
}

bool claim_agent_from_split_files(void) {
    char filename[FILENAME_MAX + 1];

    snprintfz(filename, sizeof(filename), "%s/token", netdata_configured_cloud_dir);
    long token_len = 0;
    char *token = read_by_filename(filename, &token_len);
    if(!token || !*token)
        return false;

    snprintfz(filename, sizeof(filename), "%s/rooms", netdata_configured_cloud_dir);
    long rooms_len = 0;
    char *rooms = read_by_filename(filename, &rooms_len);
    if(!rooms || !*rooms)
        rooms = NULL;

    bool ret = claim_agent(cloud_config_url_get(), token, rooms, cloud_config_proxy_get(), cloud_config_insecure_get());

    if(ret) {
        snprintfz(filename, sizeof(filename), "%s/token", netdata_configured_cloud_dir);
        unlink(filename);

        snprintfz(filename, sizeof(filename), "%s/rooms", netdata_configured_cloud_dir);
        unlink(filename);
    }

    return ret;
}

bool claim_agent_automatically(void) {
    // Use /etc/netdata/claim.conf

    if(claim_agent_from_claim_conf())
        return true;

    // Users may set NETDATA_CLAIM_TOKEN and NETDATA_CLAIM_ROOMS.
    // A good choice for docker container users.

    if(claim_agent_from_environment())
        return true;

    // Users may store the token and rooms in /var/lib/netdata/cloud.d.
    // This was a bad choice, since users may have to create this directory,
    // which may end up with the wrong permissions, preventing netdata from
    // storing the required information there.

    if(claim_agent_from_split_files())
        return true;

    return false;
}
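claim_agent_from_environment() is referenced above but is not part of this hunk. A minimal self-contained sketch, consistent with the surrounding calls, might read the token and rooms from the environment and hand them to claim_agent(); the `_sketch`/`_stub` names, the NETDATA_CLAIM_URL/NETDATA_CLAIM_PROXY variables, and the default URL are assumptions, not the actual implementation:

```c
#include <stdbool.h>
#include <stdlib.h>

// stand-in for the real claim_agent(), so this sketch compiles on its own;
// it pretends claiming succeeds whenever a non-empty token is supplied
static bool claim_agent_stub(const char *url, const char *token, const char *rooms,
                             const char *proxy, bool insecure) {
    (void)url; (void)rooms; (void)proxy; (void)insecure;
    return token && *token;
}

// hypothetical shape of claim_agent_from_environment()
bool claim_agent_from_environment_sketch(void) {
    const char *token = getenv("NETDATA_CLAIM_TOKEN");
    if(!token || !*token)
        return false; // nothing to do without a token

    const char *rooms = getenv("NETDATA_CLAIM_ROOMS"); // may legitimately be NULL
    const char *url   = getenv("NETDATA_CLAIM_URL");   // hypothetical override
    if(!url || !*url)
        url = "https://app.netdata.cloud";

    const char *proxy = getenv("NETDATA_CLAIM_PROXY"); // hypothetical
    return claim_agent_stub(url, token, rooms, proxy ? proxy : "env", false);
}
```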
@@ -1,134 +1,140 @@
// SPDX-License-Identifier: GPL-3.0-or-later

#include "claim.h"
#include "registry/registry_internals.h"
#include "aclk/aclk.h"
#include "aclk/aclk_proxy.h"

char *claiming_pending_arguments = NULL;

// --------------------------------------------------------------------------------------------------------------------
// keep track of the last claiming failure reason

static char *claiming_errors[] = {
    "Agent claimed successfully",                   // 0
    "Unknown argument",                             // 1
    "Problems with claiming working directory",     // 2
    "Missing dependencies",                         // 3
    "Failure to connect to endpoint",               // 4
    "The CLI didn't work",                          // 5
    "Wrong user",                                   // 6
    "Unknown HTTP error message",                   // 7
    "invalid node id",                              // 8
    "invalid node name",                            // 9
    "invalid room id",                              // 10
    "invalid public key",                           // 11
    "token expired/token not found/invalid token",  // 12
    "already claimed",                              // 13
    "processing claiming",                          // 14
    "Internal Server Error",                        // 15
    "Gateway Timeout",                              // 16
    "Service Unavailable",                          // 17
    "Agent Unique Id Not Readable"                  // 18
};

static char cloud_claim_failure_reason[4096] = "";

/* Retrieve the claim id for the agent.
 * Caller owns the string.
 */
char *get_agent_claimid()
{
    char *result;
    rrdhost_aclk_state_lock(localhost);
    result = (localhost->aclk_state.claimed_id == NULL) ? NULL : strdupz(localhost->aclk_state.claimed_id);
    rrdhost_aclk_state_unlock(localhost);
    return result;

void claim_agent_failure_reason_set(const char *format, ...) {
    if(!format || !*format) {
        cloud_claim_failure_reason[0] = '\0';
        return;
    }

    va_list args;
    va_start(args, format);
    vsnprintf(cloud_claim_failure_reason, sizeof(cloud_claim_failure_reason), format, args);
    va_end(args);

    nd_log(NDLS_DAEMON, NDLP_ERR,
           "CLAIM: %s", cloud_claim_failure_reason);
}
#define CLAIMING_COMMAND_LENGTH 16384
#define CLAIMING_PROXY_LENGTH (CLAIMING_COMMAND_LENGTH/4)

const char *claim_agent_failure_reason_get(void) {
    if(!cloud_claim_failure_reason[0])
        return "Agent is not claimed yet";
    else
        return cloud_claim_failure_reason;
}

/* rrd_init() and post_conf_load() must have been called before this function */
CLAIM_AGENT_RESPONSE claim_agent(const char *claiming_arguments, bool force, const char **msg __maybe_unused)
{
    if (!force || !netdata_cloud_enabled) {
        netdata_log_error("Refusing to claim agent -> cloud functionality has been disabled");
        return CLAIM_AGENT_CLOUD_DISABLED;

// --------------------------------------------------------------------------------------------------------------------
// claimed_id load/save

bool claimed_id_save_to_file(const char *claimed_id_str) {
    bool ret;
    const char *filename = filename_from_path_entry_strdupz(netdata_configured_cloud_dir, "claimed_id");
    FILE *fp = fopen(filename, "w");
    if(fp) {
        fprintf(fp, "%s", claimed_id_str);
        fclose(fp);
        ret = true;
    }
    else {
        nd_log(NDLS_DAEMON, NDLP_ERR,
               "CLAIM: cannot open file '%s' for writing.", filename);
        ret = false;
    }

#ifndef DISABLE_CLOUD
    char command_exec_buffer[CLAIMING_COMMAND_LENGTH + 1];
    char command_line_buffer[CLAIMING_COMMAND_LENGTH + 1];

    freez((void *)filename);
    return ret;
}

    // This is guaranteed to be set early in main via post_conf_load()
    char *cloud_base_url = appconfig_get(&cloud_config, CONFIG_SECTION_GLOBAL, "cloud base url", NULL);
    if (cloud_base_url == NULL) {
        internal_fatal(true, "Do not move the cloud base url out of post_conf_load!!");
        return CLAIM_AGENT_NO_CLOUD_URL;

static ND_UUID claimed_id_parse(const char *claimed_id, const char *source) {
    ND_UUID uuid;

    if(uuid_parse_flexi(claimed_id, uuid.uuid) != 0) {
        uuid = UUID_ZERO;
        nd_log(NDLS_DAEMON, NDLP_ERR,
               "CLAIM: claimed_id '%s' (loaded from '%s'), is not a valid UUID.",
               claimed_id, source);
    }

    const char *proxy_str;
    ACLK_PROXY_TYPE proxy_type;
    char proxy_flag[CLAIMING_PROXY_LENGTH] = "-noproxy";

    return uuid;
}

    proxy_str = aclk_get_proxy(&proxy_type);

static ND_UUID claimed_id_load_from_file(void) {
    ND_UUID uuid;

    if (proxy_type == PROXY_TYPE_SOCKS5 || proxy_type == PROXY_TYPE_HTTP)
        snprintf(proxy_flag, CLAIMING_PROXY_LENGTH, "-proxy=\"%s\"", proxy_str);

    long bytes_read;
    const char *filename = filename_from_path_entry_strdupz(netdata_configured_cloud_dir, "claimed_id");
    char *claimed_id = read_by_filename(filename, &bytes_read);

    snprintfz(command_exec_buffer, CLAIMING_COMMAND_LENGTH,
              "exec \"%s%snetdata-claim.sh\"",
              netdata_exe_path[0] ? netdata_exe_path : "",
              netdata_exe_path[0] ? "/" : ""
    );

    if(!claimed_id)
        uuid = UUID_ZERO;
    else
        uuid = claimed_id_parse(claimed_id, filename);

    snprintfz(command_line_buffer,
              CLAIMING_COMMAND_LENGTH,
              "%s %s -hostname=%s -id=%s -url=%s -noreload %s",
              command_exec_buffer,
              proxy_flag,
              netdata_configured_hostname,
              localhost->machine_guid,
              cloud_base_url,
              claiming_arguments);

    freez(claimed_id);
    freez((void *)filename);
    return uuid;
}

    netdata_log_info("Executing agent claiming command: %s", command_exec_buffer);
    POPEN_INSTANCE *instance = spawn_popen_run(command_line_buffer);
    if(!instance) {
        netdata_log_error("Cannot popen(\"%s\").", command_exec_buffer);
        return CLAIM_AGENT_CANNOT_EXECUTE_CLAIM_SCRIPT;

static ND_UUID claimed_id_get_from_cloud_conf(void) {
    if(appconfig_exists(&cloud_config, CONFIG_SECTION_GLOBAL, "claimed_id")) {
        const char *claimed_id = appconfig_get(&cloud_config, CONFIG_SECTION_GLOBAL, "claimed_id", "");
        if(claimed_id && *claimed_id)
            return claimed_id_parse(claimed_id, "cloud.conf");
    }
    return UUID_ZERO;
}
    netdata_log_info("Waiting for claiming command '%s' to finish.", command_exec_buffer);
    char read_buffer[100 + 1];
    while (fgets(read_buffer, 100, instance->child_stdout_fp) != NULL) ;

static ND_UUID claimed_id_load(void) {
    ND_UUID uuid = claimed_id_get_from_cloud_conf();
    if(UUIDiszero(uuid))
        uuid = claimed_id_load_from_file();

    int exit_code = spawn_popen_wait(instance);
    return uuid;
}

    netdata_log_info("Agent claiming command '%s' returned with code %d", command_exec_buffer, exit_code);
    if (0 == exit_code) {
        load_claiming_state();
        return CLAIM_AGENT_OK;
    }
    if (exit_code < 0) {
        netdata_log_error("Agent claiming command '%s' failed to complete its run", command_exec_buffer);
        return CLAIM_AGENT_CLAIM_SCRIPT_FAILED;
    }
    errno_clear();
    unsigned maximum_known_exit_code = sizeof(claiming_errors) / sizeof(claiming_errors[0]) - 1;

bool is_agent_claimed(void) {
    ND_UUID uuid = claim_id_get_uuid();
    return !UUIDiszero(uuid);
}

    if ((unsigned)exit_code > maximum_known_exit_code) {
        netdata_log_error("Agent failed to be claimed with an unknown error. Cmd: '%s'", command_exec_buffer);
        return CLAIM_AGENT_CLAIM_SCRIPT_RETURNED_INVALID_CODE;
    }

// --------------------------------------------------------------------------------------------------------------------

    netdata_log_error("Agent failed to be claimed using the command '%s' with the following error message: %s",
                      command_exec_buffer, claiming_errors[exit_code]);

bool claim_id_matches(const char *claim_id) {
    ND_UUID this_one = UUID_ZERO;
    if(uuid_parse_flexi(claim_id, this_one.uuid) != 0 || UUIDiszero(this_one))
        return false;

    if(msg) *msg = claiming_errors[exit_code];

    ND_UUID having = claim_id_get_uuid();
    if(!UUIDiszero(having) && UUIDeq(having, this_one))
        return true;

#else
    UNUSED(claiming_arguments);
    UNUSED(claiming_errors);
#endif
    return false;
}

    return CLAIM_AGENT_FAILED_WITH_MESSAGE;

bool claim_id_matches_any(const char *claim_id) {
    ND_UUID this_one = UUID_ZERO;
    if(uuid_parse_flexi(claim_id, this_one.uuid) != 0 || UUIDiszero(this_one))
        return false;

    ND_UUID having = claim_id_get_uuid();
    if(!UUIDiszero(having) && UUIDeq(having, this_one))
        return true;

    having = localhost->aclk.claim_id_of_parent;
    if(!UUIDiszero(having) && UUIDeq(having, this_one))
        return true;

    having = localhost->aclk.claim_id_of_origin;
    if(!UUIDiszero(having) && UUIDeq(having, this_one))
        return true;

    return false;
}

/* Change the claimed state of the agent.
@@ -138,333 +144,66 @@ CLAIM_AGENT_RESPONSE claim_agent(const char *claiming_arguments, bool force, const char **msg __maybe_unused)
 * - after spawning the claim because of a command-line argument
 * If this happens with the ACLK active under an old claim then we MUST KILL THE LINK
 */
void load_claiming_state(void)
{
    // --------------------------------------------------------------------
    // Check if the cloud is enabled
#if defined( DISABLE_CLOUD ) || !defined( ENABLE_ACLK )
    netdata_cloud_enabled = false;
#else
    nd_uuid_t uuid;

    // Propagate into aclk and registry. Be kind of atomic...
    appconfig_get(&cloud_config, CONFIG_SECTION_GLOBAL, "cloud base url", DEFAULT_CLOUD_BASE_URL);

    rrdhost_aclk_state_lock(localhost);
    if (localhost->aclk_state.claimed_id) {
        if (aclk_connected)
            localhost->aclk_state.prev_claimed_id = strdupz(localhost->aclk_state.claimed_id);
        freez(localhost->aclk_state.claimed_id);
        localhost->aclk_state.claimed_id = NULL;
    }
    if (aclk_connected)
    {
        netdata_log_info("Agent was already connected to Cloud - forcing reconnection under new credentials");

bool load_claiming_state(void) {
    if (aclk_online()) {
        nd_log(NDLS_DAEMON, NDLP_ERR,
               "CLAIM: agent was already connected to NC - forcing reconnection under new credentials");
        aclk_kill_link = 1;
    }
    aclk_disable_runtime = 0;

    char filename[FILENAME_MAX + 1];
    snprintfz(filename, FILENAME_MAX, "%s/cloud.d/claimed_id", netdata_configured_varlib_dir);

    long bytes_read;
    char *claimed_id = read_by_filename(filename, &bytes_read);
    if(claimed_id && uuid_parse(claimed_id, uuid)) {
        netdata_log_error("claimed_id \"%s\" doesn't look like a valid UUID", claimed_id);
        freez(claimed_id);
        claimed_id = NULL;

    ND_UUID uuid = claimed_id_load();
    if(UUIDiszero(uuid)) {
        // not found
        if(claim_agent_automatically())
            uuid = claimed_id_load();
    }

    if(claimed_id) {
        localhost->aclk_state.claimed_id = mallocz(UUID_STR_LEN);
        uuid_unparse_lower(uuid, localhost->aclk_state.claimed_id);

    bool have_claimed_id = false;
    if(!UUIDiszero(uuid)) {
        // we got it somehow
        claim_id_set(uuid);
        have_claimed_id = true;
    }

    rrdhost_aclk_state_unlock(localhost);
    invalidate_node_instances(&localhost->host_uuid, claimed_id ? &uuid : NULL);
    metaqueue_store_claim_id(&localhost->host_uuid, claimed_id ? &uuid : NULL);
    invalidate_node_instances(&localhost->host_uuid, have_claimed_id ? &uuid.uuid : NULL);
    metaqueue_store_claim_id(&localhost->host_uuid, have_claimed_id ? &uuid.uuid : NULL);

    if (!claimed_id) {
        netdata_log_info("Unable to load '%s', setting state to AGENT_UNCLAIMED", filename);
        return;
    }

    freez(claimed_id);

    netdata_log_info("File '%s' was found. Setting state to AGENT_CLAIMED.", filename);
    netdata_cloud_enabled = appconfig_get_boolean_ondemand(&cloud_config, CONFIG_SECTION_GLOBAL, "enabled", netdata_cloud_enabled);
#endif
}

struct config cloud_config = { .first_section = NULL,
                               .last_section = NULL,
                               .mutex = NETDATA_MUTEX_INITIALIZER,
                               .index = { .avl_tree = { .root = NULL, .compar = appconfig_section_compare },
                                          .rwlock = AVL_LOCK_INITIALIZER } };

void load_cloud_conf(int silent)
{
    char *nd_disable_cloud = getenv("NETDATA_DISABLE_CLOUD");
    if (nd_disable_cloud && !strncmp(nd_disable_cloud, "1", 1))
        netdata_cloud_enabled = CONFIG_BOOLEAN_NO;

    char *filename;
    errno_clear();

    int ret = 0;

    if (!have_claimed_id)
        nd_log(NDLS_DAEMON, NDLP_ERR,
               "CLAIM: Unable to find our claimed_id, setting state to AGENT_UNCLAIMED");
    else
        nd_log(NDLS_DAEMON, NDLP_INFO,
               "CLAIM: Found a valid claimed_id, setting state to AGENT_CLAIMED");

    filename = strdupz_path_subpath(netdata_configured_varlib_dir, "cloud.d/cloud.conf");

    ret = appconfig_load(&cloud_config, filename, 1, NULL);
    if(!ret && !silent)
        netdata_log_info("CONFIG: cannot load cloud config '%s'. Running with internal defaults.", filename);

    freez(filename);

    // --------------------------------------------------------------------
    // Check if the cloud is enabled

#if defined( DISABLE_CLOUD ) || !defined( ENABLE_ACLK )
    netdata_cloud_enabled = CONFIG_BOOLEAN_NO;
#else
    netdata_cloud_enabled = appconfig_get_boolean_ondemand(&cloud_config, CONFIG_SECTION_GLOBAL, "enabled", netdata_cloud_enabled);
#endif

    // This must be set before any point in the code that accesses it. Do not move it from this function.
    appconfig_get(&cloud_config, CONFIG_SECTION_GLOBAL, "cloud base url", DEFAULT_CLOUD_BASE_URL);

    return have_claimed_id;
}
static char *netdata_random_session_id_filename = NULL;
static nd_uuid_t netdata_random_session_id = { 0 };

CLOUD_STATUS claim_reload_and_wait_online(void) {
    nd_log(NDLS_DAEMON, NDLP_INFO,
           "CLAIM: Reloading Agent Claiming configuration.");

bool netdata_random_session_id_generate(void) {
    static char guid[UUID_STR_LEN] = "";

    uuid_generate_random(netdata_random_session_id);
    uuid_unparse_lower(netdata_random_session_id, guid);

    char filename[FILENAME_MAX + 1];
    snprintfz(filename, FILENAME_MAX, "%s/netdata_random_session_id", netdata_configured_varlib_dir);

    bool ret = true;

    (void)unlink(filename);

    // save it
    int fd = open(filename, O_WRONLY|O_CREAT|O_TRUNC|O_CLOEXEC, 0640);
    if(fd == -1) {
        netdata_log_error("Cannot create random session id file '%s'.", filename);
        ret = false;
    }
    else {
        if (write(fd, guid, UUID_STR_LEN - 1) != UUID_STR_LEN - 1) {
            netdata_log_error("Cannot write the random session id file '%s'.", filename);
            ret = false;
        } else {
            ssize_t bytes = write(fd, "\n", 1);
            UNUSED(bytes);
        }
        close(fd);
    }

    if(ret && (!netdata_random_session_id_filename || strcmp(netdata_random_session_id_filename, filename) != 0)) {
        freez(netdata_random_session_id_filename);
        netdata_random_session_id_filename = strdupz(filename);
    }

    return ret;
}

const char *netdata_random_session_id_get_filename(void) {
    if(!netdata_random_session_id_filename)
        netdata_random_session_id_generate();

    return netdata_random_session_id_filename;
}

bool netdata_random_session_id_matches(const char *guid) {
    if(uuid_is_null(netdata_random_session_id))
        return false;

    nd_uuid_t uuid;

    if(uuid_parse(guid, uuid))
        return false;

    if(uuid_compare(netdata_random_session_id, uuid) == 0)
        return true;

    return false;
}

static bool check_claim_param(const char *s) {
    if(!s || !*s) return true;

    do {
        if(isalnum((uint8_t)*s) || *s == '.' || *s == ',' || *s == '-' || *s == ':' || *s == '/' || *s == '_')
            ;
        else
            return false;

    } while(*++s);

    return true;
}
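check_claim_param() treats NULL and the empty string as valid (the caller decides whether a parameter is required) and otherwise accepts only alphanumerics plus `. , - : / _`, which keeps shell metacharacters and whitespace out of the claiming command line. A standalone copy of the same rule, with the `_demo` suffix added here so it does not clash with the real symbol:

```c
#include <ctype.h>
#include <stdbool.h>
#include <stddef.h>

// same charset rule as check_claim_param() above:
// NULL or empty is accepted; otherwise only alphanumerics and . , - : / _
static bool check_claim_param_demo(const char *s) {
    if(!s || !*s) return true;

    do {
        if(!isalnum((unsigned char)*s) && *s != '.' && *s != ',' && *s != '-' &&
           *s != ':' && *s != '/' && *s != '_')
            return false; // reject spaces, quotes, semicolons, etc.
    } while(*++s);

    return true;
}
```

Note that `:` and `/` are allowed, so URLs such as `https://app.netdata.cloud` pass, while anything containing spaces or shell metacharacters is rejected.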
void claim_reload_all(void) {
    nd_log_limits_unlimited();
    load_claiming_state();
    cloud_conf_load(0);
    bool claimed = load_claiming_state();
    registry_update_cloud_base_url();
    rrdpush_send_claimed_id(localhost);
    rrdpush_sender_send_claimed_id(localhost);
    nd_log_limits_reset();
}

int api_v2_claim(struct web_client *w, char *url) {
    char *key = NULL;
    char *token = NULL;
    char *rooms = NULL;
    char *base_url = NULL;

    while (url) {
        char *value = strsep_skip_consecutive_separators(&url, "&");
        if (!value || !*value) continue;

        char *name = strsep_skip_consecutive_separators(&value, "=");
        if (!name || !*name) continue;
        if (!value || !*value) continue;

        if(!strcmp(name, "key"))
            key = value;
        else if(!strcmp(name, "token"))
            token = value;
        else if(!strcmp(name, "rooms"))
            rooms = value;
        else if(!strcmp(name, "url"))
            base_url = value;
    }

    BUFFER *wb = w->response.data;
    buffer_flush(wb);
    buffer_json_initialize(wb, "\"", "\"", 0, true, BUFFER_JSON_OPTIONS_DEFAULT);

    time_t now_s = now_realtime_sec();
    CLOUD_STATUS status = buffer_json_cloud_status(wb, now_s);

    bool can_be_claimed = false;
    switch(status) {
        case CLOUD_STATUS_AVAILABLE:
        case CLOUD_STATUS_DISABLED:
        case CLOUD_STATUS_OFFLINE:
            can_be_claimed = true;
            break;

        case CLOUD_STATUS_UNAVAILABLE:
        case CLOUD_STATUS_BANNED:
        case CLOUD_STATUS_ONLINE:
            can_be_claimed = false;
            break;
    }

    buffer_json_member_add_boolean(wb, "can_be_claimed", can_be_claimed);

    if(can_be_claimed && key) {
        if(!netdata_random_session_id_matches(key)) {
            buffer_reset(wb);
            buffer_strcat(wb, "invalid key");
            netdata_random_session_id_generate(); // generate a new key, to prevent brute-forcing the previous one
            return HTTP_RESP_FORBIDDEN;
        }

        if(!token || !base_url || !check_claim_param(token) || !check_claim_param(base_url) || (rooms && !check_claim_param(rooms))) {
            buffer_reset(wb);
            buffer_strcat(wb, "invalid parameters");
            netdata_random_session_id_generate(); // generate a new key, to prevent brute-forcing the previous one
            return HTTP_RESP_BAD_REQUEST;
        }

        netdata_random_session_id_generate(); // generate a new key, to prevent brute-forcing the previous one

        netdata_cloud_enabled = CONFIG_BOOLEAN_AUTO;
        appconfig_set_boolean(&cloud_config, CONFIG_SECTION_GLOBAL, "enabled", CONFIG_BOOLEAN_AUTO);
        appconfig_set(&cloud_config, CONFIG_SECTION_GLOBAL, "cloud base url", base_url);
        nd_uuid_t claimed_id;
        uuid_generate_random(claimed_id);
        char claimed_id_str[UUID_STR_LEN];
        uuid_unparse_lower(claimed_id, claimed_id_str);

        BUFFER *t = buffer_create(1024, NULL);
        if(rooms)
            buffer_sprintf(t, "-id=%s -token=%s -rooms=%s", claimed_id_str, token, rooms);
        else
            buffer_sprintf(t, "-id=%s -token=%s", claimed_id_str, token);

        bool success = false;
        const char *msg = NULL;
        CLAIM_AGENT_RESPONSE rc = claim_agent(buffer_tostring(t), true, &msg);
        switch(rc) {
            case CLAIM_AGENT_OK:
                msg = "ok";
                success = true;
                can_be_claimed = false;
                claim_reload_all();
                {
                    int ms = 0;
                    do {
                        status = cloud_status();
                        if (status == CLOUD_STATUS_ONLINE && __atomic_load_n(&localhost->node_id, __ATOMIC_RELAXED))
                            break;

                        sleep_usec(50 * USEC_PER_MS);
                        ms += 50;
                    } while (ms < 10000);
                }
                break;

            case CLAIM_AGENT_NO_CLOUD_URL:
                msg = "No Netdata Cloud URL.";
                break;

            case CLAIM_AGENT_CLAIM_SCRIPT_FAILED:
                msg = "Claiming script failed.";
                break;

            case CLAIM_AGENT_CLOUD_DISABLED:
                msg = "Netdata Cloud is disabled on this agent.";
                break;

            case CLAIM_AGENT_CANNOT_EXECUTE_CLAIM_SCRIPT:
                msg = "Failed to execute claiming script.";
                break;

            case CLAIM_AGENT_CLAIM_SCRIPT_RETURNED_INVALID_CODE:
                msg = "Claiming script returned invalid code.";
                break;

            default:
            case CLAIM_AGENT_FAILED_WITH_MESSAGE:
                if(!msg)
                    msg = "Unknown error";
                break;
        }

        // our status may have changed
        // refresh the status in our output
        buffer_flush(wb);
        buffer_json_initialize(wb, "\"", "\"", 0, true, BUFFER_JSON_OPTIONS_DEFAULT);
        now_s = now_realtime_sec();
        buffer_json_cloud_status(wb, now_s);

        // and this is the status of the claiming command we ran
        buffer_json_member_add_boolean(wb, "success", success);
        buffer_json_member_add_string(wb, "message", msg);
    }

    if(can_be_claimed)
        buffer_json_member_add_string(wb, "key_filename", netdata_random_session_id_get_filename());

    buffer_json_agents_v2(wb, NULL, now_s, false, false);
    buffer_json_finalize(wb);

    return HTTP_RESP_OK;

    CLOUD_STATUS status = cloud_status();
    if(claimed) {
        int ms = 0;
        do {
            status = cloud_status();
            if ((status == CLOUD_STATUS_ONLINE || status == CLOUD_STATUS_INDIRECT) && !uuid_is_null(localhost->host_uuid))
                break;

            sleep_usec(50 * USEC_PER_MS);
            ms += 50;
        } while (ms < 10000);
    }

    return status;
}
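Both claim_reload_and_wait_online() and the CLAIM_AGENT_OK path above use the same wait loop: poll the cloud status every 50ms for up to 10 seconds. The loop can be factored into a generic poll-until helper; `wait_until` and `demo_counter_ready` are names of my own, not netdata API:

```c
#include <stdbool.h>
#include <unistd.h>

// Poll predicate(data) every interval_ms until it returns true or timeout_ms
// elapses; returns whether the predicate became true within the timeout.
// Mirrors the 50ms/10000ms loop in claim_reload_and_wait_online().
static bool wait_until(bool (*predicate)(void *), void *data,
                       int interval_ms, int timeout_ms) {
    int ms = 0;
    do {
        if(predicate(data))
            return true;

        usleep(interval_ms * 1000);
        ms += interval_ms;
    } while (ms < timeout_ms);

    return false;
}

// demo predicate: becomes true once the counter it points to reaches 3
static bool demo_counter_ready(void *data) {
    int *n = (int *)data;
    return ++(*n) >= 3;
}
```

In the real code the predicate would be "cloud_status() is ONLINE (or INDIRECT) and the node id is known".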
@@ -4,29 +4,32 @@
#define NETDATA_CLAIM_H 1

#include "daemon/common.h"
#include "cloud-status.h"
#include "claim_id.h"

const char *claim_agent_failure_reason_get(void);
void claim_agent_failure_reason_set(const char *format, ...) PRINTFLIKE(1, 2);

extern char *claiming_pending_arguments;
extern struct config cloud_config;

typedef enum __attribute__((packed)) {
    CLAIM_AGENT_OK,
    CLAIM_AGENT_CLOUD_DISABLED,
    CLAIM_AGENT_NO_CLOUD_URL,
    CLAIM_AGENT_CANNOT_EXECUTE_CLAIM_SCRIPT,
    CLAIM_AGENT_CLAIM_SCRIPT_FAILED,
    CLAIM_AGENT_CLAIM_SCRIPT_RETURNED_INVALID_CODE,
    CLAIM_AGENT_FAILED_WITH_MESSAGE,
} CLAIM_AGENT_RESPONSE;

bool claim_agent(const char *url, const char *token, const char *rooms, const char *proxy, bool insecure);
bool claim_agent_automatically(void);

CLAIM_AGENT_RESPONSE claim_agent(const char *claiming_arguments, bool force, const char **msg);
char *get_agent_claimid(void);
void load_claiming_state(void);
void load_cloud_conf(int silent);
void claim_reload_all(void);
bool claimed_id_save_to_file(const char *claimed_id_str);

bool netdata_random_session_id_generate(void);
const char *netdata_random_session_id_get_filename(void);
bool netdata_random_session_id_matches(const char *guid);
int api_v2_claim(struct web_client *w, char *url);

bool is_agent_claimed(void);
bool claim_id_matches(const char *claim_id);
bool claim_id_matches_any(const char *claim_id);
bool load_claiming_state(void);
void cloud_conf_load(int silent);
void cloud_conf_init_after_registry(void);
bool cloud_conf_save(void);
bool cloud_conf_regenerate(const char *claimed_id_str, const char *machine_guid, const char *hostname, const char *token, const char *rooms, const char *url, const char *proxy, int insecure);
CLOUD_STATUS claim_reload_and_wait_online(void);

const char *cloud_config_url_get(void);
void cloud_config_url_set(const char *url);
const char *cloud_config_proxy_get(void);
bool cloud_config_insecure_get(void);

#endif //NETDATA_CLAIM_H
src/claim/claim_id.c (new file, 123 lines)
@@ -0,0 +1,123 @@
// SPDX-License-Identifier: GPL-3.0-or-later

#include "claim_id.h"

static struct {
    SPINLOCK spinlock;
    ND_UUID claim_uuid;
    ND_UUID claim_uuid_saved;
} claim = {
    .spinlock = NETDATA_SPINLOCK_INITIALIZER,
};

void claim_id_clear_previous_working(void) {
    spinlock_lock(&claim.spinlock);
    claim.claim_uuid_saved = UUID_ZERO;
    spinlock_unlock(&claim.spinlock);
}

void claim_id_set(ND_UUID new_claim_id) {
    spinlock_lock(&claim.spinlock);

    if(!UUIDiszero(claim.claim_uuid)) {
        if(aclk_online())
            claim.claim_uuid_saved = claim.claim_uuid;
        claim.claim_uuid = UUID_ZERO;
    }

    claim.claim_uuid = new_claim_id;
    if(localhost)
        localhost->aclk.claim_id_of_origin = claim.claim_uuid;

    spinlock_unlock(&claim.spinlock);
}

// returns true when the supplied str is a valid UUID.
// giving NULL, an empty string, or "NULL" is valid.
bool claim_id_set_str(const char *claim_id_str) {
    bool rc;

    ND_UUID uuid;
    if(!claim_id_str || !*claim_id_str || strcmp(claim_id_str, "NULL") == 0) {
        uuid = UUID_ZERO;
        rc = true;
    }
    else
        rc = uuid_parse(claim_id_str, uuid.uuid) == 0;

    claim_id_set(uuid);

    return rc;
}

ND_UUID claim_id_get_uuid(void) {
    static ND_UUID uuid;
    spinlock_lock(&claim.spinlock);
    uuid = claim.claim_uuid;
    spinlock_unlock(&claim.spinlock);
    return uuid;
}
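The claim id is only ever touched under claim.spinlock, and readers take a copy out while holding the lock, so no caller ever observes a half-written UUID or holds the lock afterwards. A portable sketch of the same copy-out-under-lock pattern, using a pthread mutex in place of netdata's SPINLOCK (all `demo_` names are mine):

```c
#include <pthread.h>
#include <string.h>

// a 16-byte id, standing in for ND_UUID
typedef struct { unsigned char b[16]; } demo_uuid_t;

static struct {
    pthread_mutex_t lock;
    demo_uuid_t current;
} demo_claim = { .lock = PTHREAD_MUTEX_INITIALIZER };

// writers replace the id while holding the lock...
void demo_claim_id_set(demo_uuid_t new_id) {
    pthread_mutex_lock(&demo_claim.lock);
    demo_claim.current = new_id;
    pthread_mutex_unlock(&demo_claim.lock);
}

// ...and readers copy it out under the lock, returning the copy by value,
// so the caller never sees a torn id and never keeps the lock
demo_uuid_t demo_claim_id_get(void) {
    pthread_mutex_lock(&demo_claim.lock);
    demo_uuid_t copy = demo_claim.current;
    pthread_mutex_unlock(&demo_claim.lock);
    return copy;
}
```

Returning by value is what makes the lock scope so short: nothing outside the critical section touches shared state.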
void claim_id_get_str(char str[UUID_STR_LEN]) {
    ND_UUID uuid = claim_id_get_uuid();

    if(UUIDiszero(uuid))
        memset(str, 0, UUID_STR_LEN);
    else
        uuid_unparse_lower(uuid.uuid, str);
}

const char *claim_id_get_str_mallocz(void) {
    char *str = mallocz(UUID_STR_LEN);
    claim_id_get_str(str);
    return str;
}

CLAIM_ID claim_id_get(void) {
    CLAIM_ID ret = {
        .uuid = claim_id_get_uuid(),
    };

    if(claim_id_is_set(ret))
        uuid_unparse_lower(ret.uuid.uuid, ret.str);
    else
        ret.str[0] = '\0';

    return ret;
}

CLAIM_ID claim_id_get_last_working(void) {
    CLAIM_ID ret = { 0 };

    spinlock_lock(&claim.spinlock);
    ret.uuid = claim.claim_uuid_saved;
    spinlock_unlock(&claim.spinlock);

    if(claim_id_is_set(ret))
        uuid_unparse_lower(ret.uuid.uuid, ret.str);
    else
        ret.str[0] = '\0';

    return ret;
}

CLAIM_ID rrdhost_claim_id_get(RRDHOST *host) {
    CLAIM_ID ret = { 0 };

    if(host == localhost) {
        ret.uuid = claim_id_get_uuid();
        if(UUIDiszero(ret.uuid))
            ret.uuid = host->aclk.claim_id_of_parent;
    }
    else {
        if (!UUIDiszero(host->aclk.claim_id_of_origin))
            ret.uuid = host->aclk.claim_id_of_origin;
        else
            ret.uuid = host->aclk.claim_id_of_parent;
    }

    if(claim_id_is_set(ret))
        uuid_unparse_lower(ret.uuid.uuid, ret.str);

    return ret;
}
src/claim/claim_id.h (new file, 28 lines)
@@ -0,0 +1,28 @@
// SPDX-License-Identifier: GPL-3.0-or-later

#ifndef NETDATA_CLAIM_ID_H
#define NETDATA_CLAIM_ID_H

#include "claim.h"

void claim_id_keep_current(void);

bool claim_id_set_str(const char *claim_id_str);
void claim_id_set(ND_UUID new_claim_id);
void claim_id_clear_previous_working(void);
ND_UUID claim_id_get_uuid(void);
void claim_id_get_str(char str[UUID_STR_LEN]);
const char *claim_id_get_str_mallocz(void);

typedef struct {
    ND_UUID uuid;
    char str[UUID_STR_LEN];
} CLAIM_ID;

#define claim_id_is_set(claim_id) (!UUIDiszero(claim_id.uuid))

CLAIM_ID claim_id_get(void);
CLAIM_ID claim_id_get_last_working(void);
CLAIM_ID rrdhost_claim_id_get(RRDHOST *host);

#endif //NETDATA_CLAIM_ID_H
src/claim/cloud-conf.c (new file, 128 lines)
@@ -0,0 +1,128 @@
// SPDX-License-Identifier: GPL-3.0-or-later

#include "claim.h"

struct config cloud_config = {
    .first_section = NULL,
    .last_section = NULL,
    .mutex = NETDATA_MUTEX_INITIALIZER,
    .index = {
        .avl_tree = {
            .root = NULL,
            .compar = appconfig_section_compare
        },
        .rwlock = AVL_LOCK_INITIALIZER
    }
};

const char *cloud_config_url_get(void) {
    return appconfig_get(&cloud_config, CONFIG_SECTION_GLOBAL, "url", DEFAULT_CLOUD_BASE_URL);
}

void cloud_config_url_set(const char *url) {
    if(!url || !*url) return;

    const char *existing = cloud_config_url_get();
    if(strcmp(existing, url) != 0)
        appconfig_set(&cloud_config, CONFIG_SECTION_GLOBAL, "url", url);
}

const char *cloud_config_proxy_get(void) {
    // load it from cloud.conf, or use the internal default
    const char *proxy = appconfig_get(&cloud_config, CONFIG_SECTION_GLOBAL, "proxy", "env");

    // backwards compatibility, from when the proxy was in netdata.conf
    // netdata.conf takes precedence
    if (config_exists(CONFIG_SECTION_CLOUD, "proxy")) {
        // get it from netdata.conf
        proxy = config_get(CONFIG_SECTION_CLOUD, "proxy", proxy);

        // update cloud.conf
        proxy = appconfig_set(&cloud_config, CONFIG_SECTION_GLOBAL, "proxy", proxy);
    }
    else {
        // set in netdata.conf the proxy of cloud.conf
        config_set(CONFIG_SECTION_CLOUD, "proxy", proxy);
    }

    return proxy;
}
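The proxy lookup above gives netdata.conf precedence over cloud.conf and syncs whichever file is missing the setting. The decision logic can be isolated with toy stand-ins for the two config stores (all names below are mine, not netdata's):

```c
#include <string.h>

// toy stand-ins for the two config stores
static const char *netdata_conf_proxy = NULL;  // [cloud].proxy in netdata.conf
static char cloud_conf_proxy[256] = "env";     // [global].proxy in cloud.conf

// mirrors the precedence of cloud_config_proxy_get(): netdata.conf wins
// when present, and the effective value is copied to the other store
static const char *effective_proxy(void) {
    if(netdata_conf_proxy) {
        // netdata.conf takes precedence; sync it into cloud.conf
        strncpy(cloud_conf_proxy, netdata_conf_proxy, sizeof(cloud_conf_proxy) - 1);
        cloud_conf_proxy[sizeof(cloud_conf_proxy) - 1] = '\0';
        return cloud_conf_proxy;
    }

    // otherwise propagate cloud.conf's value to netdata.conf
    netdata_conf_proxy = cloud_conf_proxy;
    return cloud_conf_proxy;
}
```

The two-way sync means that after one call both files agree, so users editing either file see a consistent effective proxy.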
|
||||
|
||||
bool cloud_config_insecure_get(void) {
|
||||
// load it from cloud.conf or use internal default
|
||||
return appconfig_get_boolean(&cloud_config, CONFIG_SECTION_GLOBAL, "insecure", CONFIG_BOOLEAN_NO);
|
||||
}

static void cloud_conf_load_defaults(void) {
    appconfig_get(&cloud_config, CONFIG_SECTION_GLOBAL, "url", DEFAULT_CLOUD_BASE_URL);
    appconfig_get(&cloud_config, CONFIG_SECTION_GLOBAL, "proxy", "env");
    appconfig_get(&cloud_config, CONFIG_SECTION_GLOBAL, "token", "");
    appconfig_get(&cloud_config, CONFIG_SECTION_GLOBAL, "rooms", "");
    appconfig_get_boolean(&cloud_config, CONFIG_SECTION_GLOBAL, "insecure", CONFIG_BOOLEAN_NO);
    appconfig_get(&cloud_config, CONFIG_SECTION_GLOBAL, "machine_guid", "");
    appconfig_get(&cloud_config, CONFIG_SECTION_GLOBAL, "claimed_id", "");
    appconfig_get(&cloud_config, CONFIG_SECTION_GLOBAL, "hostname", "");
}

void cloud_conf_load(int silent) {
    errno_clear();
    char *filename = filename_from_path_entry_strdupz(netdata_configured_cloud_dir, "cloud.conf");
    int ret = appconfig_load(&cloud_config, filename, 1, NULL);

    if(!ret && !silent)
        nd_log(NDLS_DAEMON, NDLP_ERR,
               "CLAIM: cannot load cloud config '%s'. Running with internal defaults.", filename);

    freez(filename);

    appconfig_move(&cloud_config,
                   CONFIG_SECTION_GLOBAL, "cloud base url",
                   CONFIG_SECTION_GLOBAL, "url");

    cloud_conf_load_defaults();
}

void cloud_conf_init_after_registry(void) {
    const char *machine_guid = appconfig_get(&cloud_config, CONFIG_SECTION_GLOBAL, "machine_guid", "");
    const char *hostname = appconfig_get(&cloud_config, CONFIG_SECTION_GLOBAL, "hostname", "");

    // for the machine guid and hostname we have to use appconfig_set(), so that they are saved uncommented
    if(!machine_guid || !*machine_guid)
        appconfig_set(&cloud_config, CONFIG_SECTION_GLOBAL, "machine_guid", registry_get_this_machine_guid());

    if(!hostname || !*hostname)
        appconfig_set(&cloud_config, CONFIG_SECTION_GLOBAL, "hostname", registry_get_this_machine_hostname());
}

bool cloud_conf_save(void) {
    char filename[FILENAME_MAX + 1];

    CLEAN_BUFFER *wb = buffer_create(0, NULL);
    appconfig_generate(&cloud_config, wb, false, false);
    snprintfz(filename, sizeof(filename), "%s/cloud.conf", netdata_configured_cloud_dir);
    FILE *fp = fopen(filename, "w");
    if(!fp) {
        nd_log(NDLS_DAEMON, NDLP_ERR, "Cannot open file '%s' for writing.", filename);
        return false;
    }

    fprintf(fp, "%s", buffer_tostring(wb));
    fclose(fp);
    return true;
}

bool cloud_conf_regenerate(const char *claimed_id_str, const char *machine_guid, const char *hostname, const char *token, const char *rooms, const char *url, const char *proxy, int insecure) {
    // for backwards compatibility (older agents), save the claimed_id to its file
    claimed_id_save_to_file(claimed_id_str);

    appconfig_set(&cloud_config, CONFIG_SECTION_GLOBAL, "url", url);
    appconfig_set(&cloud_config, CONFIG_SECTION_GLOBAL, "proxy", proxy ? proxy : "");
    appconfig_set(&cloud_config, CONFIG_SECTION_GLOBAL, "token", token ? token : "");
    appconfig_set(&cloud_config, CONFIG_SECTION_GLOBAL, "rooms", rooms ? rooms : "");
    appconfig_set_boolean(&cloud_config, CONFIG_SECTION_GLOBAL, "insecure", insecure);
    appconfig_set(&cloud_config, CONFIG_SECTION_GLOBAL, "machine_guid", machine_guid ? machine_guid : "");
    appconfig_set(&cloud_config, CONFIG_SECTION_GLOBAL, "claimed_id", claimed_id_str ? claimed_id_str : "");
    appconfig_set(&cloud_config, CONFIG_SECTION_GLOBAL, "hostname", hostname ? hostname : "");

    return cloud_conf_save();
}

src/claim/cloud-status.c (new file, 134 lines)
@ -0,0 +1,134 @@
// SPDX-License-Identifier: GPL-3.0-or-later

#include "claim.h"

const char *cloud_status_to_string(CLOUD_STATUS status) {
    switch(status) {
        default:
        case CLOUD_STATUS_AVAILABLE:
            return "available";

        case CLOUD_STATUS_BANNED:
            return "banned";

        case CLOUD_STATUS_OFFLINE:
            return "offline";

        case CLOUD_STATUS_ONLINE:
            return "online";

        case CLOUD_STATUS_INDIRECT:
            return "indirect";
    }
}

CLOUD_STATUS cloud_status(void) {
    if(unlikely(aclk_disable_runtime))
        return CLOUD_STATUS_BANNED;

    if(likely(aclk_online()))
        return CLOUD_STATUS_ONLINE;

    if(localhost->sender &&
       rrdhost_flag_check(localhost, RRDHOST_FLAG_RRDPUSH_SENDER_READY_4_METRICS) &&
       stream_has_capability(localhost->sender, STREAM_CAP_NODE_ID) &&
       !uuid_is_null(localhost->node_id) &&
       !UUIDiszero(localhost->aclk.claim_id_of_parent))
        return CLOUD_STATUS_INDIRECT;

    if(is_agent_claimed())
        return CLOUD_STATUS_OFFLINE;

    return CLOUD_STATUS_AVAILABLE;
}

time_t cloud_last_change(void) {
    time_t ret = MAX(last_conn_time_mqtt, last_disconnect_time);
    if(!ret) ret = netdata_start_time;
    return ret;
}

time_t cloud_next_connection_attempt(void) {
    return next_connection_attempt;
}

size_t cloud_connection_id(void) {
    return aclk_connection_counter;
}

const char *cloud_status_aclk_offline_reason(void) {
    if(aclk_disable_runtime)
        return "banned";

    return aclk_status_to_string();
}

const char *cloud_status_aclk_base_url(void) {
    return aclk_cloud_base_url;
}

CLOUD_STATUS buffer_json_cloud_status(BUFFER *wb, time_t now_s) {
    CLOUD_STATUS status = cloud_status();

    buffer_json_member_add_object(wb, "cloud");
    {
        size_t id = cloud_connection_id();
        time_t last_change = cloud_last_change();
        time_t next_connect = cloud_next_connection_attempt();
        buffer_json_member_add_uint64(wb, "id", id);
        buffer_json_member_add_string(wb, "status", cloud_status_to_string(status));
        buffer_json_member_add_time_t(wb, "since", last_change);
        buffer_json_member_add_time_t(wb, "age", now_s - last_change);

        switch(status) {
            default:
            case CLOUD_STATUS_AVAILABLE:
                // the agent is not claimed
                buffer_json_member_add_string(wb, "url", cloud_config_url_get());
                buffer_json_member_add_string(wb, "reason", claim_agent_failure_reason_get());
                break;

            case CLOUD_STATUS_BANNED: {
                // the agent is claimed, but has been banned from NC
                CLAIM_ID claim_id = claim_id_get();
                buffer_json_member_add_string(wb, "claim_id", claim_id.str);
                buffer_json_member_add_string(wb, "url", cloud_status_aclk_base_url());
                buffer_json_member_add_string(wb, "reason", "Agent is banned from Netdata Cloud");
                break;
            }

            case CLOUD_STATUS_OFFLINE: {
                // the agent is claimed, but cannot get online
                CLAIM_ID claim_id = claim_id_get();
                buffer_json_member_add_string(wb, "claim_id", claim_id.str);
                buffer_json_member_add_string(wb, "url", cloud_status_aclk_base_url());
                buffer_json_member_add_string(wb, "reason", cloud_status_aclk_offline_reason());
                if (next_connect > now_s) {
                    buffer_json_member_add_time_t(wb, "next_check", next_connect);
                    buffer_json_member_add_time_t(wb, "next_in", next_connect - now_s);
                }
                break;
            }

            case CLOUD_STATUS_ONLINE: {
                // the agent is claimed and online
                CLAIM_ID claim_id = claim_id_get();
                buffer_json_member_add_string(wb, "claim_id", claim_id.str);
                buffer_json_member_add_string(wb, "url", cloud_status_aclk_base_url());
                buffer_json_member_add_string(wb, "reason", "");
                break;
            }

            case CLOUD_STATUS_INDIRECT: {
                CLAIM_ID claim_id = rrdhost_claim_id_get(localhost);
                buffer_json_member_add_string(wb, "claim_id", claim_id.str);
                buffer_json_member_add_string(wb, "url", cloud_config_url_get());
                break;
            }
        }
    }
    buffer_json_object_close(wb); // cloud

    return status;
}

src/claim/cloud-status.h (new file, 26 lines)
@ -0,0 +1,26 @@
// SPDX-License-Identifier: GPL-3.0-or-later

#ifndef NETDATA_CLOUD_STATUS_H
#define NETDATA_CLOUD_STATUS_H

#include "daemon/common.h"

typedef enum __attribute__((packed)) {
    CLOUD_STATUS_AVAILABLE = 1, // cloud and aclk functionality is available, but the agent is not claimed
    CLOUD_STATUS_BANNED,        // the agent has been banned from cloud
    CLOUD_STATUS_OFFLINE,       // the agent tries to connect to cloud, but cannot do it
    CLOUD_STATUS_INDIRECT,      // the agent is connected to cloud via a parent
    CLOUD_STATUS_ONLINE,        // the agent is connected to cloud
} CLOUD_STATUS;

const char *cloud_status_to_string(CLOUD_STATUS status);
CLOUD_STATUS cloud_status(void);

time_t cloud_last_change(void);
time_t cloud_next_connection_attempt(void);
size_t cloud_connection_id(void);
const char *cloud_status_aclk_offline_reason(void);
const char *cloud_status_aclk_base_url(void);
CLOUD_STATUS buffer_json_cloud_status(BUFFER *wb, time_t now_s);

#endif //NETDATA_CLOUD_STATUS_H

@ -1,451 +1,111 @@
#!/usr/bin/env bash
# netdata
# real-time performance and health monitoring, done right!
# (C) 2023 Netdata Inc.
#!/bin/sh
#
# Copyright (c) 2024 Netdata Inc.
# SPDX-License-Identifier: GPL-3.0-or-later

# Exit code: 0 - Success
# Exit code: 1 - Unknown argument
# Exit code: 2 - Problems with claiming working directory
# Exit code: 3 - Missing dependencies
# Exit code: 4 - Failure to connect to endpoint
# Exit code: 5 - The CLI didn't work
# Exit code: 6 - Wrong user
# Exit code: 7 - Unknown HTTP error message
#
# OK: Agent claimed successfully
# HTTP Status code: 204
# Exit code: 0
#
# Unknown HTTP error message
# HTTP Status code: 422
# Exit code: 7
ERROR_KEYS[7]="None"
ERROR_MESSAGES[7]="Unknown HTTP error message"
# %%NEW_CLAIMING_METHOD%%

# Error: The agent id is invalid; it does not fulfill the constraints
# HTTP Status code: 422
# Exit code: 8
ERROR_KEYS[8]="ErrInvalidNodeID"
ERROR_MESSAGES[8]="invalid node id"
set -e

# Error: The agent hostname is invalid; it does not fulfill the constraints
# HTTP Status code: 422
# Exit code: 9
ERROR_KEYS[9]="ErrInvalidNodeName"
ERROR_MESSAGES[9]="invalid node name"

# Error: At least one of the given rooms ids is invalid; it does not fulfill the constraints
# HTTP Status code: 422
# Exit code: 10
ERROR_KEYS[10]="ErrInvalidRoomID"
ERROR_MESSAGES[10]="invalid room id"

# Error: Invalid public key; the public key is empty or not present
# HTTP Status code: 422
# Exit code: 11
ERROR_KEYS[11]="ErrInvalidPublicKey"
ERROR_MESSAGES[11]="invalid public key"
#
# Error: Expired, missing or invalid token
# HTTP Status code: 403
# Exit code: 12
ERROR_KEYS[12]="ErrForbidden"
ERROR_MESSAGES[12]="token expired/token not found/invalid token"

# Error: Duplicate agent id; an agent with the same id is already registered in the cloud
# HTTP Status code: 409
# Exit code: 13
ERROR_KEYS[13]="ErrAlreadyClaimed"
ERROR_MESSAGES[13]="already claimed"

# Error: The node claiming process is still in progress.
# HTTP Status code: 102
# Exit code: 14
ERROR_KEYS[14]="ErrProcessingClaim"
ERROR_MESSAGES[14]="processing claiming"

# Error: Internal server error. Any other unexpected error (DB problems, etc.)
# HTTP Status code: 500
# Exit code: 15
ERROR_KEYS[15]="ErrInternalServerError"
ERROR_MESSAGES[15]="Internal Server Error"

# Error: There was a timeout processing the claim.
# HTTP Status code: 504
# Exit code: 16
ERROR_KEYS[16]="ErrGatewayTimeout"
ERROR_MESSAGES[16]="Gateway Timeout"

# Error: The service cannot handle the claiming request at this time.
# HTTP Status code: 503
# Exit code: 17
ERROR_KEYS[17]="ErrServiceUnavailable"
ERROR_MESSAGES[17]="Service Unavailable"

# Exit code: 18 - Agent unique id is not generated yet.

NETDATA_RUNNING=1

get_config_value() {
  conf_file="${1}"
  section="${2}"
  key_name="${3}"
  if [ "${NETDATA_RUNNING}" -eq 1 ]; then
    config_result=$(@sbindir_POST@/netdatacli 2>/dev/null read-config "$conf_file|$section|$key_name"; exit $?)
    result="$?"
    if [ "${result}" -ne 0 ]; then
      echo >&2 "Unable to communicate with Netdata daemon, querying config from disk instead."
      NETDATA_RUNNING=0
    fi
  fi
  if [ "${NETDATA_RUNNING}" -eq 0 ]; then
    config_result=$(@sbindir_POST@/netdata 2>/dev/null -W get2 "$conf_file" "$section" "$key_name" unknown_default)
  fi
  echo "$config_result"
warning() {
  printf "WARNING: %s\n" "${1}" 1>&2
}
if command -v curl >/dev/null 2>&1 ; then
  URLTOOL="curl"
elif command -v wget >/dev/null 2>&1 ; then
  URLTOOL="wget"
else
  echo >&2 "I need curl or wget to proceed, but neither is available on this system."
  exit 3
fi
if ! command -v openssl >/dev/null 2>&1 ; then
  echo >&2 "I need openssl to proceed, but it is not available on this system."
  exit 3
fi

# shellcheck disable=SC2050
if [ "@enable_cloud_POST@" = "no" ]; then
  echo >&2 "This agent was built with --disable-cloud and cannot be claimed"
  exit 3
fi
# shellcheck disable=SC2050
if [ "@enable_aclk_POST@" != "yes" ]; then
  echo >&2 "This agent was built without the dependencies for Cloud and cannot be claimed"
  exit 3
fi
error() {
  printf "ERROR: %s\n" "${1}" 1>&2
  exit "${2}"
}

# -----------------------------------------------------------------------------
# defaults to allow running this script by hand
get_templated_value() {
  value="$1"
  default="$2"
  override="$3"

[ -z "${NETDATA_VARLIB_DIR}" ] && NETDATA_VARLIB_DIR="@varlibdir_POST@"
MACHINE_GUID_FILE="@registrydir_POST@/netdata.public.unique.id"
CLAIMING_DIR="${NETDATA_VARLIB_DIR}/cloud.d"
TOKEN="unknown"
URL_BASE=$(get_config_value cloud global "cloud base url")
[ -z "$URL_BASE" ] && URL_BASE="https://app.netdata.cloud" # Cover post-install with --dont-start
ID="unknown"
ROOMS=""
[ -z "$HOSTNAME" ] && HOSTNAME=$(hostname)
CLOUD_CERTIFICATE_FILE="${CLAIMING_DIR}/cloud_fullchain.pem"
VERBOSE=0
INSECURE=0
RELOAD=1
NETDATA_USER=$(get_config_value netdata global "run as user")
[ -z "$EUID" ] && EUID="$(id -u)"


gen_id() {
  local id

  if command -v uuidgen > /dev/null 2>&1; then
    id="$(uuidgen | tr '[:upper:]' '[:lower:]')"
  elif [ -r /proc/sys/kernel/random/uuid ]; then
    id="$(cat /proc/sys/kernel/random/uuid)"
  if [ -n "${override}" ]; then
    echo "${override}"
  elif [ -z "${value}" ]; then
    error "Expected templated value not present"
  elif (echo "${value}" | grep -q '@'); then
    echo "${default}"
  else
    echo >&2 "Unable to generate machine ID."
    exit 18
  fi

  if [ "${id}" = "8a795b0c-2311-11e6-8563-000c295076a6" ] || [ "${id}" = "4aed1458-1c3e-11e6-a53f-000c290fc8f5" ]; then
    gen_id
  else
    echo "${id}"
    echo "${value}"
  fi
}

# get the MACHINE_GUID by default
if [ -r "${MACHINE_GUID_FILE}" ]; then
  ID="$(cat "${MACHINE_GUID_FILE}")"
  MGUID=$ID
elif [ -f "${MACHINE_GUID_FILE}" ]; then
  echo >&2 "netdata.public.unique.id is not readable. Please make sure you have rights to read it (Filename: ${MACHINE_GUID_FILE})."
  exit 18
else
  if mkdir -p "${MACHINE_GUID_FILE%/*}" && echo -n "$(gen_id)" > "${MACHINE_GUID_FILE}"; then
    ID="$(cat "${MACHINE_GUID_FILE}")"
    MGUID=$ID
  else
    echo >&2 "Failed to write new machine GUID. Please make sure you have rights to write to ${MACHINE_GUID_FILE}."
    exit 18
  fi
fi
config_dir="$(get_templated_value "@configdir_POST@" "/etc/netdata" "${NETDATA_CLAIM_CONFIG_DIR}")"
claim_config="${config_dir}/claim.conf"
netdatacli="$(get_templated_value "@sbindir_POST@/netdatacli" "$(command -v netdatacli 2>/dev/null)" "${NETDATA_CLAIM_NETDATACLI_PATH}")"
netdata_group="$(get_templated_value "@netdata_group_POST@" "netdata" "${NETDATA_CLAIM_CONFIG_GROUP}")"

# get the token from its file
if [ -r "${CLAIMING_DIR}/token" ]; then
  TOKEN="$(cat "${CLAIMING_DIR}/token")"
fi
write_config() {
  config="[global]"
  config="${config}\n   url = ${NETDATA_CLAIM_URL}"
  config="${config}\n   token = ${NETDATA_CLAIM_TOKEN}"
  if [ -n "${NETDATA_CLAIM_ROOMS}" ]; then
    config="${config}\n   rooms = ${NETDATA_CLAIM_ROOMS}"
  fi
  if [ -n "${NETDATA_CLAIM_PROXY}" ]; then
    config="${config}\n   proxy = ${NETDATA_CLAIM_PROXY}"
  fi
  if [ -n "${NETDATA_CLAIM_INSECURE}" ]; then
    config="${config}\n   insecure = ${NETDATA_CLAIM_INSECURE}"
  fi

# get the rooms from their file
if [ -r "${CLAIMING_DIR}/rooms" ]; then
  ROOMS="$(cat "${CLAIMING_DIR}/rooms")"
fi
  touch "${claim_config}.tmp"
  chmod 0660 "${claim_config}.tmp"
  chown "root:${netdata_group}" "${claim_config}.tmp"
  echo "${config}" > "${claim_config}.tmp"
  chmod 0640 "${claim_config}.tmp"
  mv -f "${claim_config}.tmp" "${claim_config}"
}

reload_claiming() {
  if [ -z "${NORELOAD}" ]; then
    "${netdatacli}" reload-claiming-state
  fi
}

parse_args() {
  while [ -n "${1}" ]; do
    case "${1}" in
      --claim-token) NETDATA_CLAIM_TOKEN="${2}"; shift 1 ;;
      -token=*) NETDATA_CLAIM_TOKEN="$(echo "${1}" | sed 's/^-token=//')" ;;
      --claim-rooms) NETDATA_CLAIM_ROOMS="${2}"; shift 1 ;;
      -rooms=*) NETDATA_CLAIM_ROOMS="$(echo "${1}" | sed 's/^-rooms=//')" ;;
      --claim-url) NETDATA_CLAIM_URL="${2}"; shift 1 ;;
      -url=*) NETDATA_CLAIM_URL="$(echo "${1}" | sed 's/^-url=//')" ;;
      --claim-proxy) NETDATA_CLAIM_PROXY="${2}"; shift 1 ;;
      -proxy=*) NETDATA_CLAIM_PROXY="$(echo "${1}" | sed 's/-proxy=//')" ;;
      -noproxy|--noproxy) NETDATA_CLAIM_PROXY="none" ;;
      -noreload|--noreload) NORELOAD=1 ;;
      -insecure|--insecure) NETDATA_CLAIM_INSECURE=yes ;;
      -verbose) true ;;
      -daemon-not-running) true ;;
      -id=*) warning "-id option is no longer supported. Remove the node ID file instead." ;;
      -hostname=*) warning "-hostname option is no longer supported. Update the main netdata configuration manually instead." ;;
      -user=*) warning "-user option is no longer supported." ;;
      *) warning "Ignoring unrecognized option ${1}";;
    esac

variable_to_set=
for arg in "$@"
do
  if [ -z "$variable_to_set" ]; then
    case $arg in
      --claim-token) variable_to_set="TOKEN" ;;
      --claim-rooms) variable_to_set="ROOMS" ;;
      --claim-url) variable_to_set="URL_BASE" ;;
      -token=*) TOKEN=${arg:7} ;;
      -url=*) [ -n "${arg:5}" ] && URL_BASE=${arg:5} ;;
      -id=*) ID=$(echo "${arg:4}" | tr '[:upper:]' '[:lower:]');;
      -rooms=*) ROOMS=${arg:7} ;;
      -hostname=*) HOSTNAME=${arg:10} ;;
      -verbose) VERBOSE=1 ;;
      -insecure) INSECURE=1 ;;
      -proxy=*) PROXY=${arg:7} ;;
      -noproxy) NOPROXY=yes ;;
      -noreload) RELOAD=0 ;;
      -user=*) NETDATA_USER=${arg:6} ;;
      -daemon-not-running) NETDATA_RUNNING=0 ;;
      *) echo >&2 "Unknown argument ${arg}"
         exit 1 ;;
    esac
  else
    case "$variable_to_set" in
      TOKEN) TOKEN="$arg" ;;
      ROOMS) ROOMS="$arg" ;;
      URL_BASE) URL_BASE="$arg" ;;
    esac
    variable_to_set=
  fi
  shift 1
done
done

if [ "$EUID" != "0" ] && [ "$(whoami)" != "$NETDATA_USER" ]; then
  echo >&2 "This script must be run by the $NETDATA_USER user account"
  exit 6
fi

# if curl is not installed, warn that SOCKS proxies cannot be used
if [[ "${URLTOOL}" != "curl" && "${PROXY:0:5}" = socks ]] ; then
  echo >&2 "wget doesn't support SOCKS. Please install curl or disable SOCKS proxy."
  exit 1
fi

echo >&2 "Token: ****************"
echo >&2 "Base URL: $URL_BASE"
echo >&2 "Id: $ID"
echo >&2 "Rooms: $ROOMS"
echo >&2 "Hostname: $HOSTNAME"
echo >&2 "Proxy: $PROXY"
echo >&2 "Netdata user: $NETDATA_USER"

# create the claiming directory for this user
if [ ! -d "${CLAIMING_DIR}" ] ; then
  mkdir -p "${CLAIMING_DIR}" && chmod 0770 "${CLAIMING_DIR}"
  # shellcheck disable=SC2181
  if [ $? -ne 0 ] ; then
    echo >&2 "Failed to create claiming working directory ${CLAIMING_DIR}"
    exit 2
  fi
fi
if [ ! -w "${CLAIMING_DIR}" ] ; then
  echo >&2 "No write permission in claiming working directory ${CLAIMING_DIR}"
  exit 2
fi

if [ ! -f "${CLAIMING_DIR}/private.pem" ] ; then
  echo >&2 "Generating private/public key for the first time."
  if ! openssl genrsa -out "${CLAIMING_DIR}/private.pem" 2048 ; then
    echo >&2 "Failed to generate private/public key pair."
    exit 2
  fi
fi
if [ ! -f "${CLAIMING_DIR}/public.pem" ] ; then
  echo >&2 "Extracting public key from private key."
  if ! openssl rsa -in "${CLAIMING_DIR}/private.pem" -outform PEM -pubout -out "${CLAIMING_DIR}/public.pem" ; then
    echo >&2 "Failed to extract public key."
    exit 2
  fi
fi

TARGET_URL="${URL_BASE%/}/api/v1/spaces/nodes/${ID}"
# shellcheck disable=SC2002
KEY=$(cat "${CLAIMING_DIR}/public.pem" | tr '\n' '!' | sed -e 's/!/\\n/g')
# shellcheck disable=SC2001
[ -n "$ROOMS" ] && ROOMS=\"$(echo "$ROOMS" | sed s'/,/", "/g')\"

cat > "${CLAIMING_DIR}/tmpin.txt" <<EMBED_JSON
{
    "node": {
        "id": "$ID",
        "hostname": "$HOSTNAME"
    },
    "token": "$TOKEN",
    "rooms" : [ $ROOMS ],
    "publicKey" : "$KEY",
    "mGUID" : "$MGUID"
}
EMBED_JSON

if [ "${VERBOSE}" == 1 ] ; then
  echo "Request to server:"
  cat "${CLAIMING_DIR}/tmpin.txt"
fi

if [ "${URLTOOL}" = "curl" ] ; then
  URLCOMMAND="curl --connect-timeout 30 --retry 0 -s -i -X PUT -d \"@${CLAIMING_DIR}/tmpin.txt\""
  if [ "${NOPROXY}" = "yes" ] ; then
    URLCOMMAND="${URLCOMMAND} -x \"\""
  elif [ -n "${PROXY}" ] ; then
    URLCOMMAND="${URLCOMMAND} -x \"${PROXY}\""
  fi
else
  URLCOMMAND="wget -T 15 -O - -q --server-response --content-on-error=on --method=PUT \
    --body-file=\"${CLAIMING_DIR}/tmpin.txt\""
  if [ "${NOPROXY}" = "yes" ] ; then
    URLCOMMAND="${URLCOMMAND} --no-proxy"
  elif [ "${PROXY:0:4}" = http ] ; then
    URLCOMMAND="export http_proxy=${PROXY}; ${URLCOMMAND}"
  fi
fi

if [ "${INSECURE}" == 1 ] ; then
  if [ "${URLTOOL}" = "curl" ] ; then
    URLCOMMAND="${URLCOMMAND} --insecure"
  else
    URLCOMMAND="${URLCOMMAND} --no-check-certificate"
  if [ -z "${NETDATA_CLAIM_TOKEN}" ]; then
    error "Claim token must be specified" 1
  fi
fi

if [ -r "${CLOUD_CERTIFICATE_FILE}" ] ; then
  if [ "${URLTOOL}" = "curl" ] ; then
    URLCOMMAND="${URLCOMMAND} --cacert \"${CLOUD_CERTIFICATE_FILE}\""
  else
    URLCOMMAND="${URLCOMMAND} --ca-certificate \"${CLOUD_CERTIFICATE_FILE}\""
  fi
fi

if [ "${VERBOSE}" == 1 ]; then
  echo "${URLCOMMAND} \"${TARGET_URL}\""
fi

attempt_contact () {
  if [ "${URLTOOL}" = "curl" ] ; then
    eval "${URLCOMMAND} \"${TARGET_URL}\"" >"${CLAIMING_DIR}/tmpout.txt"
  else
    eval "${URLCOMMAND} \"${TARGET_URL}\"" >"${CLAIMING_DIR}/tmpout.txt" 2>&1
  fi
  URLCOMMAND_EXIT_CODE=$?
  if [ "${URLTOOL}" = "wget" ] && [ "${URLCOMMAND_EXIT_CODE}" -eq 8 ] ; then
    # We consider the server issuing an error response a successful attempt at communicating
    URLCOMMAND_EXIT_CODE=0
  fi

  # Check if URLCOMMAND connected and received a reply
  if [ "${URLCOMMAND_EXIT_CODE}" -ne 0 ] ; then
    echo >&2 "Failed to connect to ${URL_BASE}, return code ${URLCOMMAND_EXIT_CODE}"
    rm -f "${CLAIMING_DIR}/tmpout.txt"
    return 4
  fi

  if [ "${VERBOSE}" == 1 ] ; then
    echo "Response from server:"
    cat "${CLAIMING_DIR}/tmpout.txt"
  fi

  return 0
  if [ -z "${NETDATA_CLAIM_URL}" ]; then
    NETDATA_CLAIM_URL="https://app.netdata.cloud/"
  fi
}

for i in {1..3}
do
  if attempt_contact ; then
    echo "Connection attempt $i successful"
    break
  fi
  echo "Connection attempt $i failed. Retry in ${i}s."
  if [ "$i" -eq 3 ] ; then
    rm -f "${CLAIMING_DIR}/tmpin.txt"
    exit 4
  fi
  sleep "$i"
done

rm -f "${CLAIMING_DIR}/tmpin.txt"

ERROR_KEY=$(grep "\"errorMsgKey\":" "${CLAIMING_DIR}/tmpout.txt" | awk -F "errorMsgKey\":\"" '{print $2}' | awk -F "\"" '{print $1}')
case ${ERROR_KEY} in
  "ErrInvalidNodeID") EXIT_CODE=8 ;;
  "ErrInvalidNodeName") EXIT_CODE=9 ;;
  "ErrInvalidRoomID") EXIT_CODE=10 ;;
  "ErrInvalidPublicKey") EXIT_CODE=11 ;;
  "ErrForbidden") EXIT_CODE=12 ;;
  "ErrAlreadyClaimed") EXIT_CODE=13 ;;
  "ErrProcessingClaim") EXIT_CODE=14 ;;
  "ErrInternalServerError") EXIT_CODE=15 ;;
  "ErrGatewayTimeout") EXIT_CODE=16 ;;
  "ErrServiceUnavailable") EXIT_CODE=17 ;;
  *) EXIT_CODE=7 ;;
esac

HTTP_STATUS_CODE=$(grep "HTTP" "${CLAIMING_DIR}/tmpout.txt" | tail -1 | awk -F " " '{print $2}')
if [ "${HTTP_STATUS_CODE}" = "204" ] ; then
  EXIT_CODE=0
[ -z "$EUID" ] && EUID="$(id -u)"
if [ "${EUID}" != "0" ] && [ ! -w "${config_dir}" ]; then
  error "Script must be run by a user with write access to ${config_dir}." 32
fi

if [ "${HTTP_STATUS_CODE}" = "204" ] || [ "${ERROR_KEY}" = "ErrAlreadyClaimed" ] ; then
  rm -f "${CLAIMING_DIR}/tmpout.txt"
  if [ "${HTTP_STATUS_CODE}" = "204" ] ; then
    echo -n "${ID}" >"${CLAIMING_DIR}/claimed_id" || (echo >&2 "Claiming failed"; set -e; exit 2)
  fi
  rm -f "${CLAIMING_DIR}/token" || (echo >&2 "Claiming failed"; set -e; exit 2)

  # Rewrite the cloud.conf on disk
  cat > "$CLAIMING_DIR/cloud.conf" <<HERE_DOC
[global]
  enabled = yes
  cloud base url = $URL_BASE
${PROXY:+  proxy = $PROXY}
HERE_DOC
  if [ "$EUID" == "0" ]; then
    chown -R "${NETDATA_USER}:${NETDATA_USER}" "${CLAIMING_DIR}" || (echo >&2 "Claiming failed"; set -e; exit 2)
  fi
  if [ "${RELOAD}" == "0" ] ; then
    exit $EXIT_CODE
  fi

  # Update cloud.conf in the agent memory
  @sbindir_POST@/netdatacli write-config 'cloud|global|enabled|yes' && \
    @sbindir_POST@/netdatacli write-config "cloud|global|cloud base url|$URL_BASE" && \
    @sbindir_POST@/netdatacli reload-claiming-state && \
    if [ "${HTTP_STATUS_CODE}" = "204" ] ; then
      echo >&2 "Node was successfully claimed."
    else
      echo >&2 "The agent cloud base url is set to the url provided."
      echo >&2 "The cloud may have different credentials already registered for this agent ID and it cannot be reclaimed under different credentials for security reasons. If you are unable to connect use -id=\$(uuidgen) to overwrite this agent ID with a fresh value if the original credentials cannot be restored."
      echo >&2 "Failed to claim node with the following error message:\"${ERROR_MESSAGES[$EXIT_CODE]}\""
    fi && exit $EXIT_CODE

  if [ "${ERROR_KEY}" = "ErrAlreadyClaimed" ] ; then
    echo >&2 "The cloud may have different credentials already registered for this agent ID and it cannot be reclaimed under different credentials for security reasons. If you are unable to connect use -id=\$(uuidgen) to overwrite this agent ID with a fresh value if the original credentials cannot be restored."
    echo >&2 "Failed to claim node with the following error message:\"${ERROR_MESSAGES[$EXIT_CODE]}\""
    exit $EXIT_CODE
  fi
  echo >&2 "The claim was successful but the agent could not be notified ($?) - it requires a restart to connect to the cloud."
  [ "$NETDATA_RUNNING" -eq 0 ] && exit 0 || exit 5
fi

echo >&2 "Failed to claim node with the following error message:\"${ERROR_MESSAGES[$EXIT_CODE]}\""
if [ "${VERBOSE}" == 1 ]; then
  echo >&2 "Error key was:\"${ERROR_KEYS[$EXIT_CODE]}\""
fi
rm -f "${CLAIMING_DIR}/tmpout.txt"
exit $EXIT_CODE
warning "This script is deprecated and will be officially unsupported in the near future. Please either use the kickstart script with the appropriate '--claim-*' options, or directly write out the claiming configuration instead."
parse_args "${@}"
write_config
reload_claiming

@ -237,7 +237,7 @@ Examples below for process group `sql`:
- Open Pipes
- Open Sockets

For more information about badges check [Generating Badges](/src/web/api/badges/README.md)
For more information about badges check [Generating Badges](/src/web/api/v2/api_v3_badge/README.md)

## Comparison with console tools

@ -188,7 +188,7 @@ static inline void discovery_rename_cgroup(struct cgroup *cg) {
    }

    char buffer[CGROUP_CHARTID_LINE_MAX + 1];
    char *new_name = fgets(buffer, CGROUP_CHARTID_LINE_MAX, instance->child_stdout_fp);
    char *new_name = fgets(buffer, CGROUP_CHARTID_LINE_MAX, spawn_popen_stdout(instance));
    int exit_code = spawn_popen_wait(instance);

    switch (exit_code) {

@ -1101,7 +1101,7 @@ static inline void read_cgroup_network_interfaces(struct cgroup *cg) {

    char *s;
    char buffer[CGROUP_NETWORK_INTERFACE_MAX_LINE + 1];
    while((s = fgets(buffer, CGROUP_NETWORK_INTERFACE_MAX_LINE, instance->child_stdout_fp))) {
    while((s = fgets(buffer, CGROUP_NETWORK_INTERFACE_MAX_LINE, spawn_popen_stdout(instance)))) {
        trim(s);

        if(*s && *s != '\n') {

@ -394,8 +394,8 @@ static inline char *cgroup_chart_type(char *buffer, struct cgroup *cg) {
#define RRDFUNCTIONS_CGTOP_HELP "View running containers"
#define RRDFUNCTIONS_SYSTEMD_SERVICES_HELP "View systemd services"

int cgroup_function_cgroup_top(BUFFER *wb, const char *function);
int cgroup_function_systemd_top(BUFFER *wb, const char *function);
int cgroup_function_cgroup_top(BUFFER *wb, const char *function, BUFFER *payload, const char *source);
int cgroup_function_systemd_top(BUFFER *wb, const char *function, BUFFER *payload, const char *source);

void cgroup_netdev_link_init(void);
const DICTIONARY_ITEM *cgroup_netdev_get(struct cgroup *cg);

@ -518,7 +518,7 @@ void call_the_helper(pid_t pid, const char *cgroup) {
    if(pi) {
        char buffer[CGROUP_NETWORK_INTERFACE_MAX_LINE + 1];
        char *s;
        while((s = fgets(buffer, CGROUP_NETWORK_INTERFACE_MAX_LINE, pi->child_stdout_fp))) {
        while((s = fgets(buffer, CGROUP_NETWORK_INTERFACE_MAX_LINE, spawn_popen_stdout(pi)))) {
            trim(s);

            if(*s && *s != '\n') {
|
||||
|
|
|
@@ -98,7 +98,7 @@ void cgroup_netdev_get_bandwidth(struct cgroup *cg, NETDATA_DOUBLE *received, NE
     *sent = t->sent[slot];
 }

-int cgroup_function_cgroup_top(BUFFER *wb, const char *function __maybe_unused) {
+int cgroup_function_cgroup_top(BUFFER *wb, const char *function __maybe_unused, BUFFER *payload __maybe_unused, const char *source __maybe_unused) {
     buffer_flush(wb);
     wb->content_type = CT_APPLICATION_JSON;
     buffer_json_initialize(wb, "\"", "\"", 0, true, BUFFER_JSON_OPTIONS_DEFAULT);
@@ -341,7 +341,7 @@ int cgroup_function_cgroup_top(BUFFER *wb, const char *function __maybe_unused)
     return HTTP_RESP_OK;
 }

-int cgroup_function_systemd_top(BUFFER *wb, const char *function __maybe_unused) {
+int cgroup_function_systemd_top(BUFFER *wb, const char *function __maybe_unused, BUFFER *payload __maybe_unused, const char *source __maybe_unused) {
     buffer_flush(wb);
     wb->content_type = CT_APPLICATION_JSON;
     buffer_json_initialize(wb, "\"", "\"", 0, true, BUFFER_JSON_OPTIONS_DEFAULT);
@@ -82,7 +82,7 @@ static enum cgroups_systemd_setting cgroups_detect_systemd(const char *exec)
         return retval;

     struct pollfd pfd;
-    pfd.fd = spawn_server_instance_read_fd(pi->si);
+    pfd.fd = spawn_popen_read_fd(pi);
     pfd.events = POLLIN;

     int timeout = 3000; // milliseconds
@@ -93,7 +93,7 @@ static enum cgroups_systemd_setting cgroups_detect_systemd(const char *exec)
     } else if (ret == 0) {
         collector_info("Cannot get the output of \"%s\" within timeout (%d ms)", exec, timeout);
     } else {
-        while (fgets(buf, MAXSIZE_PROC_CMDLINE, pi->child_stdout_fp) != NULL) {
+        while (fgets(buf, MAXSIZE_PROC_CMDLINE, spawn_popen_stdout(pi)) != NULL) {
             if ((begin = strstr(buf, SYSTEMD_HIERARCHY_STRING))) {
                 end = begin = begin + strlen(SYSTEMD_HIERARCHY_STRING);
                 if (!*begin)
@@ -153,18 +153,18 @@ static enum cgroups_type cgroups_try_detect_version()
     int cgroups2_available = 0;

     // 1. check if cgroups2 available on system at all
-    POPEN_INSTANCE *instance = spawn_popen_run("grep cgroup /proc/filesystems");
-    if(!instance) {
+    POPEN_INSTANCE *pi = spawn_popen_run("grep cgroup /proc/filesystems");
+    if(!pi) {
         collector_error("cannot run 'grep cgroup /proc/filesystems'");
         return CGROUPS_AUTODETECT_FAIL;
     }
-    while (fgets(buf, MAXSIZE_PROC_CMDLINE, instance->child_stdout_fp) != NULL) {
+    while (fgets(buf, MAXSIZE_PROC_CMDLINE, spawn_popen_stdout(pi)) != NULL) {
         if (strstr(buf, "cgroup2")) {
             cgroups2_available = 1;
             break;
         }
     }
-    if(spawn_popen_wait(instance) != 0)
+    if(spawn_popen_wait(pi) != 0)
         return CGROUPS_AUTODETECT_FAIL;

     if(!cgroups2_available)
@@ -629,7 +629,7 @@ static void diskspace_main_cleanup(void *pptr) {
 #error WORKER_UTILIZATION_MAX_JOB_TYPES has to be at least 3
 #endif

-int diskspace_function_mount_points(BUFFER *wb, const char *function __maybe_unused) {
+static int diskspace_function_mount_points(BUFFER *wb, const char *function __maybe_unused, BUFFER *payload __maybe_unused, const char *source __maybe_unused) {
     netdata_mutex_lock(&slow_mountinfo_mutex);

     buffer_flush(wb);
@@ -1637,7 +1637,8 @@ close_and_send:
 // ----------------------------------------------------------------------------
 // main, command line arguments parsing

-static NORETURN void plugin_exit(int code) {
+static void plugin_exit(int code) NORETURN;
+static void plugin_exit(int code) {
     fflush(stdout);
     function_plugin_should_exit = true;
     exit(code);
@@ -158,7 +158,7 @@ static void *pluginsd_worker_thread(void *arg) {
                rrdhost_hostname(cd->host), cd->cmd);
         break;
     }
-    cd->unsafe.pid = spawn_server_instance_pid(cd->unsafe.pi->si);
+    cd->unsafe.pid = spawn_popen_pid(cd->unsafe.pi);

     nd_log(NDLS_DAEMON, NDLP_DEBUG,
            "PLUGINSD: 'host:%s' connected to '%s' running on pid %d",
@@ -181,7 +181,10 @@ static void *pluginsd_worker_thread(void *arg) {
     };
     ND_LOG_STACK_PUSH(lgs);

-    count = pluginsd_process(cd->host, cd, cd->unsafe.pi->child_stdin_fp, cd->unsafe.pi->child_stdout_fp, 0);
+    count = pluginsd_process(cd->host, cd,
+                             spawn_popen_read_fd(cd->unsafe.pi),
+                             spawn_popen_write_fd(cd->unsafe.pi),
+                             0);

     nd_log(NDLS_DAEMON, NDLP_DEBUG,
            "PLUGINSD: 'host:%s', '%s' (pid %d) disconnected after %zu successful data collections (ENDs).",
@@ -46,7 +46,10 @@ struct plugind {

 extern struct plugind *pluginsd_root;

-size_t pluginsd_process(RRDHOST *host, struct plugind *cd, FILE *fp_plugin_input, FILE *fp_plugin_output, int trust_durations);
+size_t pluginsd_process(RRDHOST *host, struct plugind *cd, int fd_input, int fd_output, int trust_durations);
+
+struct parser;
+void pluginsd_process_cleanup(struct parser *parser);
 void pluginsd_process_thread_cleanup(void *pptr);

 size_t pluginsd_initialize_plugin_directories();
@@ -2,10 +2,8 @@

 #include "pluginsd_internals.h"

-ssize_t send_to_plugin(const char *txt, void *data) {
-    PARSER *parser = data;
-
-    if(!txt || !*txt)
+ssize_t send_to_plugin(const char *txt, PARSER *parser) {
+    if(!txt || !*txt || !parser)
         return 0;

 #ifdef ENABLE_H2O
@@ -17,7 +15,6 @@ ssize_t send_to_plugin(const char *txt, void *data) {
     spinlock_lock(&parser->writer.spinlock);
     ssize_t bytes = -1;

-#ifdef ENABLE_HTTPS
     NETDATA_SSL *ssl = parser->ssl_output;
     if(ssl) {
@@ -30,29 +27,14 @@ ssize_t send_to_plugin(const char *txt, void *data) {
         spinlock_unlock(&parser->writer.spinlock);
         return bytes;
     }
-#endif

-    if(parser->fp_output) {
-
-        bytes = fprintf(parser->fp_output, "%s", txt);
-        if(bytes <= 0) {
-            netdata_log_error("PLUGINSD: cannot send command (FILE)");
-            bytes = -2;
-        }
-        else
-            fflush(parser->fp_output);
-
-        spinlock_unlock(&parser->writer.spinlock);
-        return bytes;
-    }
-
-    if(parser->fd != -1) {
+    if(parser->fd_output != -1) {
         bytes = 0;
         ssize_t total = (ssize_t)strlen(txt);
         ssize_t sent;

         do {
-            sent = write(parser->fd, &txt[bytes], total - bytes);
+            sent = write(parser->fd_output, &txt[bytes], total - bytes);
             if(sent <= 0) {
                 netdata_log_error("PLUGINSD: cannot send command (fd)");
                 spinlock_unlock(&parser->writer.spinlock);
@@ -100,19 +82,16 @@ void parser_destroy(PARSER *parser) {
 }


-PARSER *parser_init(struct parser_user_object *user, FILE *fp_input, FILE *fp_output, int fd,
+PARSER *parser_init(struct parser_user_object *user, int fd_input, int fd_output,
                     PARSER_INPUT_TYPE flags, void *ssl __maybe_unused) {
     PARSER *parser;

     parser = callocz(1, sizeof(*parser));
     if(user)
         parser->user = *user;
-    parser->fd = fd;
-    parser->fp_input = fp_input;
-    parser->fp_output = fp_output;
-#ifdef ENABLE_HTTPS
+    parser->fd_input = fd_input;
+    parser->fd_output = fd_output;
     parser->ssl_output = ssl;
-#endif
     parser->flags = flags;

     spinlock_init(&parser->writer.spinlock);
@@ -13,7 +13,7 @@

 PARSER_RC PLUGINSD_DISABLE_PLUGIN(PARSER *parser, const char *keyword, const char *msg);

-ssize_t send_to_plugin(const char *txt, void *data);
+ssize_t send_to_plugin(const char *txt, PARSER *parser);

 static inline RRDHOST *pluginsd_require_scope_host(PARSER *parser, const char *cmd) {
     RRDHOST *host = parser->user.host;
@@ -1081,52 +1081,7 @@ static inline PARSER_RC pluginsd_exit(char **words __maybe_unused, size_t num_wo
     return PARSER_RC_STOP;
 }

-static inline PARSER_RC streaming_claimed_id(char **words, size_t num_words, PARSER *parser)
-{
-    const char *host_uuid_str = get_word(words, num_words, 1);
-    const char *claim_id_str = get_word(words, num_words, 2);
-
-    if (!host_uuid_str || !claim_id_str) {
-        netdata_log_error("Command CLAIMED_ID came malformed, uuid = '%s', claim_id = '%s'",
-                          host_uuid_str ? host_uuid_str : "[unset]",
-                          claim_id_str ? claim_id_str : "[unset]");
-        return PARSER_RC_ERROR;
-    }
-
-    nd_uuid_t uuid;
-    RRDHOST *host = parser->user.host;
-
-    // We don't need the parsed UUID
-    // just do it to check the format
-    if(uuid_parse(host_uuid_str, uuid)) {
-        netdata_log_error("1st parameter (host GUID) to CLAIMED_ID command is not valid GUID. Received: \"%s\".", host_uuid_str);
-        return PARSER_RC_ERROR;
-    }
-    if(uuid_parse(claim_id_str, uuid) && strcmp(claim_id_str, "NULL") != 0) {
-        netdata_log_error("2nd parameter (Claim ID) to CLAIMED_ID command is not valid GUID. Received: \"%s\".", claim_id_str);
-        return PARSER_RC_ERROR;
-    }
-
-    if(strcmp(host_uuid_str, host->machine_guid) != 0) {
-        netdata_log_error("Claim ID is for host \"%s\" but it came over connection for \"%s\"", host_uuid_str, host->machine_guid);
-        return PARSER_RC_OK; //the message is OK problem must be somewhere else
-    }
-
-    rrdhost_aclk_state_lock(host);
-
-    if (host->aclk_state.claimed_id)
-        freez(host->aclk_state.claimed_id);
-
-    host->aclk_state.claimed_id = strcmp(claim_id_str, "NULL") ? strdupz(claim_id_str) : NULL;
-
-    rrdhost_aclk_state_unlock(host);
-
-    rrdhost_flag_set(host, RRDHOST_FLAG_METADATA_CLAIMID |RRDHOST_FLAG_METADATA_UPDATE);
-
-    rrdpush_send_claimed_id(host);
-
-    return PARSER_RC_OK;
-}
+PARSER_RC rrdpush_receiver_pluginsd_claimed_id(char **words, size_t num_words, PARSER *parser);

 // ----------------------------------------------------------------------------
@@ -1135,8 +1090,7 @@ void pluginsd_cleanup_v2(PARSER *parser) {
     pluginsd_clear_scope_chart(parser, "THREAD CLEANUP");
 }

-void pluginsd_process_thread_cleanup(void *pptr) {
-    PARSER *parser = CLEANUP_FUNCTION_GET_PTR(pptr);
+void pluginsd_process_cleanup(PARSER *parser) {
     if(!parser) return;

     pluginsd_cleanup_v2(parser);
@@ -1154,6 +1108,11 @@ void pluginsd_process_thread_cleanup(void *pptr) {
     parser_destroy(parser);
 }

+void pluginsd_process_thread_cleanup(void *pptr) {
+    PARSER *parser = CLEANUP_FUNCTION_GET_PTR(pptr);
+    pluginsd_process_cleanup(parser);
+}
+
 bool parser_reconstruct_node(BUFFER *wb, void *ptr) {
     PARSER *parser = ptr;
     if(!parser || !parser->user.host)
@@ -1181,30 +1140,15 @@ bool parser_reconstruct_context(BUFFER *wb, void *ptr) {
     return true;
 }

-inline size_t pluginsd_process(RRDHOST *host, struct plugind *cd, FILE *fp_plugin_input, FILE *fp_plugin_output, int trust_durations)
+inline size_t pluginsd_process(RRDHOST *host, struct plugind *cd, int fd_input, int fd_output, int trust_durations)
 {
     int enabled = cd->unsafe.enabled;

-    if (!fp_plugin_input || !fp_plugin_output || !enabled) {
+    if (fd_input == -1 || fd_output == -1 || !enabled) {
         cd->unsafe.enabled = 0;
         return 0;
     }

-    if (unlikely(fileno(fp_plugin_input) == -1)) {
-        netdata_log_error("input file descriptor given is not a valid stream");
-        cd->serial_failures++;
-        return 0;
-    }
-
-    if (unlikely(fileno(fp_plugin_output) == -1)) {
-        netdata_log_error("output file descriptor given is not a valid stream");
-        cd->serial_failures++;
-        return 0;
-    }
-
-    clearerr(fp_plugin_input);
-    clearerr(fp_plugin_output);
-
     PARSER *parser;
     {
         PARSER_USER_OBJECT user = {
@@ -1214,8 +1158,7 @@ inline size_t pluginsd_process(RRDHOST *host, struct plugind *cd, FILE *fp_plugi
             .trust_durations = trust_durations
         };

-        // fp_plugin_output = our input; fp_plugin_input = our output
-        parser = parser_init(&user, fp_plugin_output, fp_plugin_input, -1, PARSER_INPUT_SPLIT, NULL);
+        parser = parser_init(&user, fd_input, fd_output, PARSER_INPUT_SPLIT, NULL);
     }

     pluginsd_keywords_init(parser, PARSER_INIT_PLUGINSD);
@@ -1240,10 +1183,8 @@ inline size_t pluginsd_process(RRDHOST *host, struct plugind *cd, FILE *fp_plugi

         if(unlikely(!buffered_reader_next_line(&parser->reader, buffer))) {
-            buffered_reader_ret_t ret = buffered_reader_read_timeout(
-                &parser->reader,
-                fileno((FILE *) parser->fp_input),
-                2 * 60 * MSEC_PER_SEC, true
-            );
+            buffered_reader_ret_t ret = buffered_reader_read_timeout(
+                &parser->reader, parser->fd_input,
+                2 * 60 * MSEC_PER_SEC, true);

             if(unlikely(ret != BUFFERED_READER_READ_OK))
                 break;
@@ -1320,7 +1261,7 @@ PARSER_RC parser_execute(PARSER *parser, const PARSER_KEYWORD *keyword, char **w
         case PLUGINSD_KEYWORD_ID_VARIABLE:
             return pluginsd_variable(words, num_words, parser);
         case PLUGINSD_KEYWORD_ID_CLAIMED_ID:
-            return streaming_claimed_id(words, num_words, parser);
+            return rrdpush_receiver_pluginsd_claimed_id(words, num_words, parser);
         case PLUGINSD_KEYWORD_ID_HOST:
             return pluginsd_host(words, num_words, parser);
         case PLUGINSD_KEYWORD_ID_HOST_DEFINE:
@@ -1362,7 +1303,7 @@ void parser_init_repertoire(PARSER *parser, PARSER_REPERTOIRE repertoire) {
 }

 int pluginsd_parser_unittest(void) {
-    PARSER *p = parser_init(NULL, NULL, NULL, -1, PARSER_INPUT_SPLIT, NULL);
+    PARSER *p = parser_init(NULL, -1, -1, PARSER_INPUT_SPLIT, NULL);
     pluginsd_keywords_init(p, PARSER_INIT_PLUGINSD | PARSER_INIT_STREAMING);

     char *lines[] = {
@@ -93,17 +93,15 @@ typedef struct parser_user_object {
     } v2;
 } PARSER_USER_OBJECT;

-typedef struct parser {
+struct parser {
     uint8_t version; // Parser version
     PARSER_REPERTOIRE repertoire;
     uint32_t flags;
-    int fd; // Socket
-    FILE *fp_input; // Input source e.g. stream
-    FILE *fp_output; // Stream to send commands to plugin
+    int fd_input;
+    int fd_output;

 #ifdef ENABLE_HTTPS
     NETDATA_SSL *ssl_output;
 #endif

 #ifdef ENABLE_H2O
     void *h2o_ctx; // if set we use h2o_stream functions to send data
 #endif

@@ -129,10 +127,11 @@ typedef struct parser {
     struct {
         SPINLOCK spinlock;
     } writer;
+};

-} PARSER;
+typedef struct parser PARSER;

-PARSER *parser_init(struct parser_user_object *user, FILE *fp_input, FILE *fp_output, int fd, PARSER_INPUT_TYPE flags, void *ssl);
+PARSER *parser_init(struct parser_user_object *user, int fd_input, int fd_output, PARSER_INPUT_TYPE flags, void *ssl);
 void parser_init_repertoire(PARSER *parser, PARSER_REPERTOIRE repertoire);
 void parser_destroy(PARSER *working_parser);
 void pluginsd_cleanup_v2(PARSER *parser);
@@ -998,7 +998,7 @@ static void disk_labels_cb(RRDSET *st, void *data) {
     add_labels_to_disk(data, st);
 }

-static int diskstats_function_block_devices(BUFFER *wb, const char *function __maybe_unused) {
+static int diskstats_function_block_devices(BUFFER *wb, const char *function __maybe_unused, BUFFER *payload __maybe_unused, const char *source __maybe_unused) {
     buffer_flush(wb);
     wb->content_type = CT_APPLICATION_JSON;
     buffer_json_initialize(wb, "\"", "\"", 0, true, BUFFER_JSON_OPTIONS_DEFAULT);
@@ -473,7 +473,7 @@ static void netdev_rename_this_device(struct netdev *d) {

 // ----------------------------------------------------------------------------

-int netdev_function_net_interfaces(BUFFER *wb, const char *function __maybe_unused) {
+static int netdev_function_net_interfaces(BUFFER *wb, const char *function __maybe_unused, BUFFER *payload __maybe_unused, const char *source __maybe_unused) {
     buffer_flush(wb);
     wb->content_type = CT_APPLICATION_JSON;
     buffer_json_initialize(wb, "\"", "\"", 0, true, BUFFER_JSON_OPTIONS_DEFAULT);
@@ -1283,7 +1283,7 @@ static int statsd_readfile(const char *filename, STATSD_APP *app, STATSD_APP_CHA
                 // find the directory name from the file we already read
                 char *filename2 = strdupz(filename); // copy filename, since dirname() will change it
                 char *dir = dirname(filename2); // find the directory part of the filename
-                tmp = strdupz_path_subpath(dir, s); // compose the new filename to read;
+                tmp = filename_from_path_entry_strdupz(dir, s); // compose the new filename to read;
                 freez(filename2); // free the filename we copied
             }
             statsd_readfile(tmp, app, chart, dict);
@@ -58,6 +58,10 @@ static int systemd_journal_directories_dyncfg_update(BUFFER *result, BUFFER *pay
     struct json_object *journalDirectories;
     json_object_object_get_ex(jobj, JOURNAL_DIRECTORIES_JSON_NODE, &journalDirectories);

+    if (json_object_get_type(journalDirectories) != json_type_array)
+        return dyncfg_default_response(result, HTTP_RESP_BAD_REQUEST,
+                                       "member " JOURNAL_DIRECTORIES_JSON_NODE " is not an array");
+
     size_t n_directories = json_object_array_length(journalDirectories);
     if(n_directories > MAX_JOURNAL_DIRECTORIES)
         return dyncfg_default_response(result, HTTP_RESP_BAD_REQUEST, "too many directories configured");
@@ -1413,7 +1413,7 @@ static int netdata_systemd_journal_query(BUFFER *wb, FACETS *facets, FUNCTION_QU
 }

 static void netdata_systemd_journal_function_help(const char *transaction) {
-    BUFFER *wb = buffer_create(0, NULL);
+    CLEAN_BUFFER *wb = buffer_create(0, NULL);
     buffer_sprintf(wb,
                    "%s / %s\n"
                    "\n"
@@ -1517,17 +1517,400 @@ static void netdata_systemd_journal_function_help(const char *transaction) {
     netdata_mutex_lock(&stdout_mutex);
     pluginsd_function_result_to_stdout(transaction, HTTP_RESP_OK, "text/plain", now_realtime_sec() + 3600, wb);
     netdata_mutex_unlock(&stdout_mutex);
-
-    buffer_free(wb);
 }

+typedef struct {
+    FACET_KEY_OPTIONS default_facet;
+    bool info;
+    bool data_only;
+    bool slice;
+    bool delta;
+    bool tail;
+    time_t after_s;
+    time_t before_s;
+    usec_t anchor;
+    usec_t if_modified_since;
+    size_t last;
+    FACETS_ANCHOR_DIRECTION direction;
+    const char *query;
+    const char *chart;
+    SIMPLE_PATTERN *sources;
+    SD_JOURNAL_FILE_SOURCE_TYPE source_type;
+    size_t filters;
+    size_t sampling;
+} JOURNAL_QUERY;
+
+static SD_JOURNAL_FILE_SOURCE_TYPE get_internal_source_type(const char *value) {
+    if(strcmp(value, SDJF_SOURCE_ALL_NAME) == 0)
+        return SDJF_ALL;
+    else if(strcmp(value, SDJF_SOURCE_LOCAL_NAME) == 0)
+        return SDJF_LOCAL_ALL;
+    else if(strcmp(value, SDJF_SOURCE_REMOTES_NAME) == 0)
+        return SDJF_REMOTE_ALL;
+    else if(strcmp(value, SDJF_SOURCE_NAMESPACES_NAME) == 0)
+        return SDJF_LOCAL_NAMESPACE;
+    else if(strcmp(value, SDJF_SOURCE_LOCAL_SYSTEM_NAME) == 0)
+        return SDJF_LOCAL_SYSTEM;
+    else if(strcmp(value, SDJF_SOURCE_LOCAL_USERS_NAME) == 0)
+        return SDJF_LOCAL_USER;
+    else if(strcmp(value, SDJF_SOURCE_LOCAL_OTHER_NAME) == 0)
+        return SDJF_LOCAL_OTHER;
+
+    return SDJF_NONE;
+}
+
+static FACETS_ANCHOR_DIRECTION get_direction(const char *value) {
+    return strcasecmp(value, "forward") == 0 ? FACETS_ANCHOR_DIRECTION_FORWARD : FACETS_ANCHOR_DIRECTION_BACKWARD;
+}
+
+struct post_query_data {
+    const char *transaction;
+    FACETS *facets;
+    JOURNAL_QUERY *q;
+    BUFFER *wb;
+};
+
+static bool parse_json_payload(json_object *jobj, const char *path, void *data, BUFFER *error) {
+    struct post_query_data *qd = data;
+    JOURNAL_QUERY *q = qd->q;
+    BUFFER *wb = qd->wb;
+    FACETS *facets = qd->facets;
+    // const char *transaction = qd->transaction;
+
+    buffer_flush(error);
+
+    JSONC_PARSE_BOOL_OR_ERROR_AND_RETURN(jobj, path, JOURNAL_PARAMETER_INFO, q->info, error, false);
+    JSONC_PARSE_BOOL_OR_ERROR_AND_RETURN(jobj, path, JOURNAL_PARAMETER_DELTA, q->delta, error, false);
+    JSONC_PARSE_BOOL_OR_ERROR_AND_RETURN(jobj, path, JOURNAL_PARAMETER_TAIL, q->tail, error, false);
+    JSONC_PARSE_BOOL_OR_ERROR_AND_RETURN(jobj, path, JOURNAL_PARAMETER_SLICE, q->slice, error, false);
+    JSONC_PARSE_BOOL_OR_ERROR_AND_RETURN(jobj, path, JOURNAL_PARAMETER_DATA_ONLY, q->data_only, error, false);
+    JSONC_PARSE_UINT64_OR_ERROR_AND_RETURN(jobj, path, JOURNAL_PARAMETER_SAMPLING, q->sampling, error, false);
+    JSONC_PARSE_INT64_OR_ERROR_AND_RETURN(jobj, path, JOURNAL_PARAMETER_AFTER, q->after_s, error, false);
+    JSONC_PARSE_INT64_OR_ERROR_AND_RETURN(jobj, path, JOURNAL_PARAMETER_BEFORE, q->before_s, error, false);
+    JSONC_PARSE_UINT64_OR_ERROR_AND_RETURN(jobj, path, JOURNAL_PARAMETER_IF_MODIFIED_SINCE, q->if_modified_since, error, false);
+    JSONC_PARSE_UINT64_OR_ERROR_AND_RETURN(jobj, path, JOURNAL_PARAMETER_ANCHOR, q->anchor, error, false);
+    JSONC_PARSE_UINT64_OR_ERROR_AND_RETURN(jobj, path, JOURNAL_PARAMETER_LAST, q->last, error, false);
+    JSONC_PARSE_TXT2ENUM_OR_ERROR_AND_RETURN(jobj, path, JOURNAL_PARAMETER_DIRECTION, get_direction, q->direction, error, false);
+    JSONC_PARSE_TXT2STRDUPZ_OR_ERROR_AND_RETURN(jobj, path, JOURNAL_PARAMETER_QUERY, q->query, error, false);
+    JSONC_PARSE_TXT2STRDUPZ_OR_ERROR_AND_RETURN(jobj, path, JOURNAL_PARAMETER_HISTOGRAM, q->chart, error, false);
+
+    json_object *sources;
+    if (json_object_object_get_ex(jobj, JOURNAL_PARAMETER_SOURCE, &sources)) {
+        if (json_object_get_type(sources) != json_type_array) {
+            buffer_sprintf(error, "member '%s' is not an array", JOURNAL_PARAMETER_SOURCE);
+            return false;
+        }
+
+        buffer_json_member_add_array(wb, JOURNAL_PARAMETER_SOURCE);
+
+        CLEAN_BUFFER *sources_list = buffer_create(0, NULL);
+
+        q->source_type = SDJF_NONE;
+
+        size_t sources_len = json_object_array_length(sources);
+        for (size_t i = 0; i < sources_len; i++) {
+            json_object *src = json_object_array_get_idx(sources, i);
+
+            if (json_object_get_type(src) != json_type_string) {
+                buffer_sprintf(error, "sources array item %zu is not a string", i);
+                return false;
+            }
+
+            const char *value = json_object_get_string(src);
+            buffer_json_add_array_item_string(wb, value);
+
+            SD_JOURNAL_FILE_SOURCE_TYPE t = get_internal_source_type(value);
+            if(t != SDJF_NONE) {
+                q->source_type |= t;
+                value = NULL;
+            }
+            else {
+                // else, match the source, whatever it is
+                if(buffer_strlen(sources_list))
+                    buffer_putc(sources_list, '|');
+
+                buffer_strcat(sources_list, value);
+            }
+        }
+
+        if(buffer_strlen(sources_list)) {
+            simple_pattern_free(q->sources);
+            q->sources = simple_pattern_create(buffer_tostring(sources_list), "|", SIMPLE_PATTERN_EXACT, false);
+        }
+
+        buffer_json_array_close(wb); // source
+    }
+
+    json_object *fcts;
+    if (json_object_object_get_ex(jobj, JOURNAL_PARAMETER_FACETS, &fcts)) {
+        if (json_object_get_type(fcts) != json_type_array) {
+            buffer_sprintf(error, "member '%s' is not an array", JOURNAL_PARAMETER_FACETS);
+            return false;
+        }
+
+        q->default_facet = FACET_KEY_OPTION_NONE;
+        facets_reset_and_disable_all_facets(facets);
+
+        buffer_json_member_add_array(wb, JOURNAL_PARAMETER_FACETS);
+
+        size_t facets_len = json_object_array_length(fcts);
+        for (size_t i = 0; i < facets_len; i++) {
+            json_object *fct = json_object_array_get_idx(fcts, i);
+
+            if (json_object_get_type(fct) != json_type_string) {
+                buffer_sprintf(error, "facets array item %zu is not a string", i);
+                return false;
+            }
+
+            const char *value = json_object_get_string(fct);
+            facets_register_facet(facets, value, FACET_KEY_OPTION_FACET|FACET_KEY_OPTION_FTS|FACET_KEY_OPTION_REORDER);
+            buffer_json_add_array_item_string(wb, value);
+        }
+
+        buffer_json_array_close(wb); // facets
+    }
+
+    json_object *selections;
+    if (json_object_object_get_ex(jobj, "selections", &selections)) {
+        if (json_object_get_type(selections) != json_type_object) {
+            buffer_sprintf(error, "member 'selections' is not an object");
+            return false;
+        }
+
+        buffer_json_member_add_object(wb, "selections");
+
+        json_object_object_foreach(selections, key, val) {
+            if (json_object_get_type(val) != json_type_array) {
+                buffer_sprintf(error, "selection '%s' is not an array", key);
+                return false;
+            }
+
+            buffer_json_member_add_array(wb, key);
+
+            size_t values_len = json_object_array_length(val);
+            for (size_t i = 0; i < values_len; i++) {
+                json_object *value_obj = json_object_array_get_idx(val, i);
+
+                if (json_object_get_type(value_obj) != json_type_string) {
+                    buffer_sprintf(error, "selection '%s' array item %zu is not a string", key, i);
+                    return false;
+                }
+
+                const char *value = json_object_get_string(value_obj);
+
+                // Call facets_register_facet_id_filter for each value
+                facets_register_facet_filter(
+                    facets, key, value, FACET_KEY_OPTION_FACET | FACET_KEY_OPTION_FTS | FACET_KEY_OPTION_REORDER);
+
+                buffer_json_add_array_item_string(wb, value);
+                q->filters++;
+            }
+
+            buffer_json_array_close(wb); // key
+        }
+
+        buffer_json_object_close(wb); // selections
+    }
+
+    return true;
+}
+
+static bool parse_post_params(FACETS *facets, JOURNAL_QUERY *q, BUFFER *wb, BUFFER *payload, const char *transaction) {
+    struct post_query_data qd = {
+        .transaction = transaction,
+        .facets = facets,
+        .q = q,
+        .wb = wb,
+    };
+
+    int code;
+    CLEAN_JSON_OBJECT *jobj = json_parse_function_payload_or_error(wb, payload, &code, parse_json_payload, &qd);
+    if(!jobj || code != HTTP_RESP_OK) {
+        netdata_mutex_lock(&stdout_mutex);
+        pluginsd_function_result_to_stdout(transaction, code, "application/json", now_realtime_sec() + 1, wb);
+        netdata_mutex_unlock(&stdout_mutex);
+        return false;
+    }
+
+    return true;
+}
+
+static bool parse_get_params(FACETS *facets, JOURNAL_QUERY *q, BUFFER *wb, char *function, const char *transaction) {
+    buffer_json_member_add_object(wb, "_request");
+
+    char *words[SYSTEMD_JOURNAL_MAX_PARAMS] = { NULL };
+    size_t num_words = quoted_strings_splitter_pluginsd(function, words, SYSTEMD_JOURNAL_MAX_PARAMS);
+    for(int i = 1; i < SYSTEMD_JOURNAL_MAX_PARAMS ;i++) {
+        char *keyword = get_word(words, num_words, i);
+        if(!keyword) break;
+
+        if(strcmp(keyword, JOURNAL_PARAMETER_HELP) == 0) {
+            netdata_systemd_journal_function_help(transaction);
+            return false;
+        }
+        else if(strcmp(keyword, JOURNAL_PARAMETER_INFO) == 0) {
+            q->info = true;
+        }
+        else if(strncmp(keyword, JOURNAL_PARAMETER_DELTA ":", sizeof(JOURNAL_PARAMETER_DELTA ":") - 1) == 0) {
+            char *v = &keyword[sizeof(JOURNAL_PARAMETER_DELTA ":") - 1];
+
+            if(strcmp(v, "false") == 0 || strcmp(v, "no") == 0 || strcmp(v, "0") == 0)
+                q->delta = false;
+            else
+                q->delta = true;
+        }
+        else if(strncmp(keyword, JOURNAL_PARAMETER_TAIL ":", sizeof(JOURNAL_PARAMETER_TAIL ":") - 1) == 0) {
+            char *v = &keyword[sizeof(JOURNAL_PARAMETER_TAIL ":") - 1];
+
+            if(strcmp(v, "false") == 0 || strcmp(v, "no") == 0 || strcmp(v, "0") == 0)
+                q->tail = false;
+            else
+                q->tail = true;
+        }
+        else if(strncmp(keyword, JOURNAL_PARAMETER_SAMPLING ":", sizeof(JOURNAL_PARAMETER_SAMPLING ":") - 1) == 0) {
+            q->sampling = str2ul(&keyword[sizeof(JOURNAL_PARAMETER_SAMPLING ":") - 1]);
+        }
+        else if(strncmp(keyword, JOURNAL_PARAMETER_DATA_ONLY ":", sizeof(JOURNAL_PARAMETER_DATA_ONLY ":") - 1) == 0) {
+            char *v = &keyword[sizeof(JOURNAL_PARAMETER_DATA_ONLY ":") - 1];
+
+            if(strcmp(v, "false") == 0 || strcmp(v, "no") == 0 || strcmp(v, "0") == 0)
+                q->data_only = false;
+            else
+                q->data_only = true;
+        }
+        else if(strncmp(keyword, JOURNAL_PARAMETER_SLICE ":", sizeof(JOURNAL_PARAMETER_SLICE ":") - 1) == 0) {
+            char *v = &keyword[sizeof(JOURNAL_PARAMETER_SLICE ":") - 1];
+
+            if(strcmp(v, "false") == 0 || strcmp(v, "no") == 0 || strcmp(v, "0") == 0)
+                q->slice = false;
+            else
+                q->slice = true;
+        }
+        else if(strncmp(keyword, JOURNAL_PARAMETER_SOURCE ":", sizeof(JOURNAL_PARAMETER_SOURCE ":") - 1) == 0) {
+            const char *value = &keyword[sizeof(JOURNAL_PARAMETER_SOURCE ":") - 1];
+
+            buffer_json_member_add_array(wb, JOURNAL_PARAMETER_SOURCE);
+
+            CLEAN_BUFFER *sources_list = buffer_create(0, NULL);
+
+            q->source_type = SDJF_NONE;
+            while(value) {
+                char *sep = strchr(value, ',');
+                if(sep)
+                    *sep++ = '\0';
+
+                buffer_json_add_array_item_string(wb, value);
+
+                SD_JOURNAL_FILE_SOURCE_TYPE t = get_internal_source_type(value);
+                if(t != SDJF_NONE) {
+                    q->source_type |= t;
+                    value = NULL;
+                }
+                else {
+                    // else, match the source, whatever it is
+                    if(buffer_strlen(sources_list))
+                        buffer_putc(sources_list, '|');
+
+                    buffer_strcat(sources_list, value);
+                }
+
+                value = sep;
+            }
+
+            if(buffer_strlen(sources_list)) {
+                simple_pattern_free(q->sources);
+                q->sources = simple_pattern_create(buffer_tostring(sources_list), "|", SIMPLE_PATTERN_EXACT, false);
+            }
+
+            buffer_json_array_close(wb); // source
+        }
+        else if(strncmp(keyword, JOURNAL_PARAMETER_AFTER ":", sizeof(JOURNAL_PARAMETER_AFTER ":") - 1) == 0) {
+            q->after_s = str2l(&keyword[sizeof(JOURNAL_PARAMETER_AFTER ":") - 1]);
+        }
+        else if(strncmp(keyword, JOURNAL_PARAMETER_BEFORE ":", sizeof(JOURNAL_PARAMETER_BEFORE ":") - 1) == 0) {
+            q->before_s = str2l(&keyword[sizeof(JOURNAL_PARAMETER_BEFORE ":") - 1]);
+        }
+        else if(strncmp(keyword, JOURNAL_PARAMETER_IF_MODIFIED_SINCE ":", sizeof(JOURNAL_PARAMETER_IF_MODIFIED_SINCE ":") - 1) == 0) {
+            q->if_modified_since = str2ull(&keyword[sizeof(JOURNAL_PARAMETER_IF_MODIFIED_SINCE ":") - 1], NULL);
+        }
+        else if(strncmp(keyword, JOURNAL_PARAMETER_ANCHOR ":", sizeof(JOURNAL_PARAMETER_ANCHOR ":") - 1) == 0) {
+            q->anchor = str2ull(&keyword[sizeof(JOURNAL_PARAMETER_ANCHOR ":") - 1], NULL);
+        }
+        else if(strncmp(keyword, JOURNAL_PARAMETER_DIRECTION ":", sizeof(JOURNAL_PARAMETER_DIRECTION ":") - 1) == 0) {
+            q->direction = get_direction(&keyword[sizeof(JOURNAL_PARAMETER_DIRECTION ":") - 1]);
+        }
+        else if(strncmp(keyword, JOURNAL_PARAMETER_LAST ":", sizeof(JOURNAL_PARAMETER_LAST ":") - 1) == 0) {
+            q->last = str2ul(&keyword[sizeof(JOURNAL_PARAMETER_LAST ":") - 1]);
+        }
+        else if(strncmp(keyword, JOURNAL_PARAMETER_QUERY ":", sizeof(JOURNAL_PARAMETER_QUERY ":") - 1) == 0) {
+            freez((void *)q->query);
+            q->query = strdupz(&keyword[sizeof(JOURNAL_PARAMETER_QUERY ":") - 1]);
+        }
+        else if(strncmp(keyword, JOURNAL_PARAMETER_HISTOGRAM ":", sizeof(JOURNAL_PARAMETER_HISTOGRAM ":") - 1) == 0) {
+            freez((void *)q->chart);
+            q->chart = strdupz(&keyword[sizeof(JOURNAL_PARAMETER_HISTOGRAM ":") - 1]);
+        }
+        else if(strncmp(keyword, JOURNAL_PARAMETER_FACETS ":", sizeof(JOURNAL_PARAMETER_FACETS ":") - 1) == 0) {
+            q->default_facet = FACET_KEY_OPTION_NONE;
+            facets_reset_and_disable_all_facets(facets);
+
+            char *value = &keyword[sizeof(JOURNAL_PARAMETER_FACETS ":") - 1];
+            if(*value) {
+                buffer_json_member_add_array(wb, JOURNAL_PARAMETER_FACETS);
+
+                while(value) {
+                    char *sep = strchr(value, ',');
+                    if(sep)
+                        *sep++ = '\0';
+
+                    facets_register_facet_id(facets, value, FACET_KEY_OPTION_FACET|FACET_KEY_OPTION_FTS|FACET_KEY_OPTION_REORDER);
+                    buffer_json_add_array_item_string(wb, value);
+
+                    value = sep;
+                }
+
+                buffer_json_array_close(wb); // JOURNAL_PARAMETER_FACETS
+            }
+        }
+        else {
+            char *value = strchr(keyword, ':');
+            if(value) {
+                *value++ = '\0';
+
+                buffer_json_member_add_array(wb, keyword);
+
+                while(value) {
+                    char *sep = strchr(value, ',');
+                    if(sep)
+                        *sep++ = '\0';
+
+                    facets_register_facet_filter_id(
+                        facets,
+                        keyword,
+                        value,
+                        FACET_KEY_OPTION_FACET | FACET_KEY_OPTION_FTS | FACET_KEY_OPTION_REORDER);
buffer_json_add_array_item_string(wb, value);
|
||||
q->filters++;
|
||||
|
||||
value = sep;
|
||||
}
|
||||
|
||||
buffer_json_array_close(wb); // keyword
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
void function_systemd_journal(const char *transaction, char *function, usec_t *stop_monotonic_ut, bool *cancelled,
-                              BUFFER *payload __maybe_unused, HTTP_ACCESS access __maybe_unused,
+                              BUFFER *payload, HTTP_ACCESS access __maybe_unused,
                               const char *source __maybe_unused, void *data __maybe_unused) {
     fstat_thread_calls = 0;
     fstat_thread_cached_responses = 0;

-    BUFFER *wb = buffer_create(0, NULL);
+    CLEAN_BUFFER *wb = buffer_create(0, NULL);
     buffer_flush(wb);
     buffer_json_initialize(wb, "\"", "\"", 0, true, BUFFER_JSON_OPTIONS_MINIFY);
@@ -1562,296 +1945,152 @@ void function_systemd_journal(const char *transaction, char *function, usec_t *s
     facets_accepted_param(facets, JOURNAL_PARAMETER_SLICE);
#endif // HAVE_SD_JOURNAL_RESTART_FIELDS

    // ------------------------------------------------------------------------
    // parse the parameters

    JOURNAL_QUERY q = {
        .default_facet = FACET_KEY_OPTION_FACET,
        .info = false,
        .data_only = false,
        .slice = JOURNAL_DEFAULT_SLICE_MODE,
        .delta = false,
        .tail = false,
        .after_s = 0,
        .before_s = 0,
        .anchor = 0,
        .if_modified_since = 0,
        .last = 0,
        .direction = JOURNAL_DEFAULT_DIRECTION,
        .query = NULL,
        .chart = NULL,
        .sources = NULL,
        .source_type = SDJF_ALL,
        .filters = 0,
        .sampling = SYSTEMD_JOURNAL_DEFAULT_ITEMS_SAMPLING,
    };

    if( (payload && !parse_post_params(facets, &q, wb, payload, transaction)) ||
        (!payload && !parse_get_params(facets, &q, wb, function, transaction)) )
        goto cleanup;
    // ----------------------------------------------------------------------------------------------------------------
    // register the fields in the order you want them on the dashboard

    facets_register_row_severity(facets, syslog_priority_to_facet_severity, NULL);

-    facets_register_key_name(facets, "_HOSTNAME",
-                             FACET_KEY_OPTION_FACET | FACET_KEY_OPTION_VISIBLE);
+    facets_register_key_name(
+        facets, "_HOSTNAME",
+        q.default_facet | FACET_KEY_OPTION_VISIBLE);

-    facets_register_dynamic_key_name(facets, JOURNAL_KEY_ND_JOURNAL_PROCESS,
-                                     FACET_KEY_OPTION_NEVER_FACET | FACET_KEY_OPTION_VISIBLE,
-                                     netdata_systemd_journal_dynamic_row_id, NULL);
+    facets_register_dynamic_key_name(
+        facets, JOURNAL_KEY_ND_JOURNAL_PROCESS,
+        FACET_KEY_OPTION_NEVER_FACET | FACET_KEY_OPTION_VISIBLE,
+        netdata_systemd_journal_dynamic_row_id, NULL);

-    facets_register_key_name(facets, "MESSAGE",
-                             FACET_KEY_OPTION_NEVER_FACET | FACET_KEY_OPTION_MAIN_TEXT |
-                             FACET_KEY_OPTION_VISIBLE | FACET_KEY_OPTION_FTS);
+    facets_register_key_name(
+        facets, "MESSAGE",
+        FACET_KEY_OPTION_NEVER_FACET | FACET_KEY_OPTION_MAIN_TEXT |
+            FACET_KEY_OPTION_VISIBLE | FACET_KEY_OPTION_FTS);

-    // facets_register_dynamic_key_name(facets, "MESSAGE",
-    //                                  FACET_KEY_OPTION_NEVER_FACET | FACET_KEY_OPTION_MAIN_TEXT | FACET_KEY_OPTION_RICH_TEXT |
-    //                                  FACET_KEY_OPTION_VISIBLE | FACET_KEY_OPTION_FTS,
-    //                                  netdata_systemd_journal_rich_message, NULL);
+    // facets_register_dynamic_key_name(
+    //     facets, "MESSAGE",
+    //     FACET_KEY_OPTION_NEVER_FACET | FACET_KEY_OPTION_MAIN_TEXT | FACET_KEY_OPTION_RICH_TEXT |
+    //         FACET_KEY_OPTION_VISIBLE | FACET_KEY_OPTION_FTS,
+    //     netdata_systemd_journal_rich_message, NULL);

-    facets_register_key_name_transformation(facets, "PRIORITY",
-                                            FACET_KEY_OPTION_FACET | FACET_KEY_OPTION_TRANSFORM_VIEW |
-                                            FACET_KEY_OPTION_EXPANDED_FILTER,
-                                            netdata_systemd_journal_transform_priority, NULL);
+    facets_register_key_name_transformation(
+        facets, "PRIORITY",
+        q.default_facet | FACET_KEY_OPTION_TRANSFORM_VIEW |
+            FACET_KEY_OPTION_EXPANDED_FILTER,
+        netdata_systemd_journal_transform_priority, NULL);

-    facets_register_key_name_transformation(facets, "SYSLOG_FACILITY",
-                                            FACET_KEY_OPTION_FACET | FACET_KEY_OPTION_TRANSFORM_VIEW |
-                                            FACET_KEY_OPTION_EXPANDED_FILTER,
-                                            netdata_systemd_journal_transform_syslog_facility, NULL);
+    facets_register_key_name_transformation(
+        facets, "SYSLOG_FACILITY",
+        q.default_facet | FACET_KEY_OPTION_TRANSFORM_VIEW |
+            FACET_KEY_OPTION_EXPANDED_FILTER,
+        netdata_systemd_journal_transform_syslog_facility, NULL);

-    facets_register_key_name_transformation(facets, "ERRNO",
-                                            FACET_KEY_OPTION_FACET | FACET_KEY_OPTION_TRANSFORM_VIEW,
-                                            netdata_systemd_journal_transform_errno, NULL);
+    facets_register_key_name_transformation(
+        facets, "ERRNO",
+        q.default_facet | FACET_KEY_OPTION_TRANSFORM_VIEW,
+        netdata_systemd_journal_transform_errno, NULL);

-    facets_register_key_name(facets, JOURNAL_KEY_ND_JOURNAL_FILE,
-                             FACET_KEY_OPTION_NEVER_FACET);
+    facets_register_key_name(
+        facets, JOURNAL_KEY_ND_JOURNAL_FILE,
+        FACET_KEY_OPTION_NEVER_FACET);

-    facets_register_key_name(facets, "SYSLOG_IDENTIFIER",
-                             FACET_KEY_OPTION_FACET);
+    facets_register_key_name(
+        facets, "SYSLOG_IDENTIFIER",
+        q.default_facet);

-    facets_register_key_name(facets, "UNIT",
-                             FACET_KEY_OPTION_FACET);
+    facets_register_key_name(
+        facets, "UNIT",
+        q.default_facet);

-    facets_register_key_name(facets, "USER_UNIT",
-                             FACET_KEY_OPTION_FACET);
+    facets_register_key_name(
+        facets, "USER_UNIT",
+        q.default_facet);

-    facets_register_key_name_transformation(facets, "MESSAGE_ID",
-                                            FACET_KEY_OPTION_FACET | FACET_KEY_OPTION_TRANSFORM_VIEW |
-                                            FACET_KEY_OPTION_EXPANDED_FILTER,
-                                            netdata_systemd_journal_transform_message_id, NULL);
+    facets_register_key_name_transformation(
+        facets, "MESSAGE_ID",
+        q.default_facet | FACET_KEY_OPTION_TRANSFORM_VIEW |
+            FACET_KEY_OPTION_EXPANDED_FILTER,
+        netdata_systemd_journal_transform_message_id, NULL);

-    facets_register_key_name_transformation(facets, "_BOOT_ID",
-                                            FACET_KEY_OPTION_FACET | FACET_KEY_OPTION_TRANSFORM_VIEW,
-                                            netdata_systemd_journal_transform_boot_id, NULL);
+    facets_register_key_name_transformation(
+        facets, "_BOOT_ID",
+        q.default_facet | FACET_KEY_OPTION_TRANSFORM_VIEW,
+        netdata_systemd_journal_transform_boot_id, NULL);

-    facets_register_key_name_transformation(facets, "_SYSTEMD_OWNER_UID",
-                                            FACET_KEY_OPTION_FACET | FACET_KEY_OPTION_TRANSFORM_VIEW,
-                                            netdata_systemd_journal_transform_uid, NULL);
+    facets_register_key_name_transformation(
+        facets, "_SYSTEMD_OWNER_UID",
+        q.default_facet | FACET_KEY_OPTION_TRANSFORM_VIEW,
+        netdata_systemd_journal_transform_uid, NULL);

-    facets_register_key_name_transformation(facets, "_UID",
-                                            FACET_KEY_OPTION_FACET | FACET_KEY_OPTION_TRANSFORM_VIEW,
-                                            netdata_systemd_journal_transform_uid, NULL);
+    facets_register_key_name_transformation(
+        facets, "_UID",
+        q.default_facet | FACET_KEY_OPTION_TRANSFORM_VIEW,
+        netdata_systemd_journal_transform_uid, NULL);

-    facets_register_key_name_transformation(facets, "OBJECT_SYSTEMD_OWNER_UID",
-                                            FACET_KEY_OPTION_FACET | FACET_KEY_OPTION_TRANSFORM_VIEW,
-                                            netdata_systemd_journal_transform_uid, NULL);
+    facets_register_key_name_transformation(
+        facets, "OBJECT_SYSTEMD_OWNER_UID",
+        q.default_facet | FACET_KEY_OPTION_TRANSFORM_VIEW,
+        netdata_systemd_journal_transform_uid, NULL);

-    facets_register_key_name_transformation(facets, "OBJECT_UID",
-                                            FACET_KEY_OPTION_FACET | FACET_KEY_OPTION_TRANSFORM_VIEW,
-                                            netdata_systemd_journal_transform_uid, NULL);
+    facets_register_key_name_transformation(
+        facets, "OBJECT_UID",
+        q.default_facet | FACET_KEY_OPTION_TRANSFORM_VIEW,
+        netdata_systemd_journal_transform_uid, NULL);

-    facets_register_key_name_transformation(facets, "_GID",
-                                            FACET_KEY_OPTION_FACET | FACET_KEY_OPTION_TRANSFORM_VIEW,
-                                            netdata_systemd_journal_transform_gid, NULL);
+    facets_register_key_name_transformation(
+        facets, "_GID",
+        q.default_facet | FACET_KEY_OPTION_TRANSFORM_VIEW,
+        netdata_systemd_journal_transform_gid, NULL);

-    facets_register_key_name_transformation(facets, "OBJECT_GID",
-                                            FACET_KEY_OPTION_FACET | FACET_KEY_OPTION_TRANSFORM_VIEW,
-                                            netdata_systemd_journal_transform_gid, NULL);
+    facets_register_key_name_transformation(
+        facets, "OBJECT_GID",
+        q.default_facet | FACET_KEY_OPTION_TRANSFORM_VIEW,
+        netdata_systemd_journal_transform_gid, NULL);

-    facets_register_key_name_transformation(facets, "_CAP_EFFECTIVE",
-                                            FACET_KEY_OPTION_TRANSFORM_VIEW,
-                                            netdata_systemd_journal_transform_cap_effective, NULL);
+    facets_register_key_name_transformation(
+        facets, "_CAP_EFFECTIVE",
+        FACET_KEY_OPTION_TRANSFORM_VIEW,
+        netdata_systemd_journal_transform_cap_effective, NULL);

-    facets_register_key_name_transformation(facets, "_AUDIT_LOGINUID",
-                                            FACET_KEY_OPTION_TRANSFORM_VIEW,
-                                            netdata_systemd_journal_transform_uid, NULL);
+    facets_register_key_name_transformation(
+        facets, "_AUDIT_LOGINUID",
+        FACET_KEY_OPTION_TRANSFORM_VIEW,
+        netdata_systemd_journal_transform_uid, NULL);

-    facets_register_key_name_transformation(facets, "OBJECT_AUDIT_LOGINUID",
-                                            FACET_KEY_OPTION_TRANSFORM_VIEW,
-                                            netdata_systemd_journal_transform_uid, NULL);
+    facets_register_key_name_transformation(
+        facets, "OBJECT_AUDIT_LOGINUID",
+        FACET_KEY_OPTION_TRANSFORM_VIEW,
+        netdata_systemd_journal_transform_uid, NULL);

-    facets_register_key_name_transformation(facets, "_SOURCE_REALTIME_TIMESTAMP",
-                                            FACET_KEY_OPTION_TRANSFORM_VIEW,
-                                            netdata_systemd_journal_transform_timestamp_usec, NULL);
-    // ------------------------------------------------------------------------
-    // parse the parameters
-
-    bool info = false, data_only = false, slice = JOURNAL_DEFAULT_SLICE_MODE, delta = false, tail = false;
-    time_t after_s = 0, before_s = 0;
-    usec_t anchor = 0;
-    usec_t if_modified_since = 0;
-    size_t last = 0;
-    FACETS_ANCHOR_DIRECTION direction = JOURNAL_DEFAULT_DIRECTION;
-    const char *query = NULL;
-    const char *chart = NULL;
-    SIMPLE_PATTERN *sources = NULL;
-    SD_JOURNAL_FILE_SOURCE_TYPE source_type = SDJF_ALL;
-    size_t filters = 0;
-    size_t sampling = SYSTEMD_JOURNAL_DEFAULT_ITEMS_SAMPLING;
-
-    buffer_json_member_add_object(wb, "_request");
-
-    char *words[SYSTEMD_JOURNAL_MAX_PARAMS] = { NULL };
-    size_t num_words = quoted_strings_splitter_pluginsd(function, words, SYSTEMD_JOURNAL_MAX_PARAMS);
-    for(int i = 1; i < SYSTEMD_JOURNAL_MAX_PARAMS ;i++) {
-        char *keyword = get_word(words, num_words, i);
-        if(!keyword) break;
-
-        if(strcmp(keyword, JOURNAL_PARAMETER_HELP) == 0) {
-            netdata_systemd_journal_function_help(transaction);
-            goto cleanup;
-        }
-        else if(strcmp(keyword, JOURNAL_PARAMETER_INFO) == 0) {
-            info = true;
-        }
-        else if(strncmp(keyword, JOURNAL_PARAMETER_DELTA ":", sizeof(JOURNAL_PARAMETER_DELTA ":") - 1) == 0) {
-            char *v = &keyword[sizeof(JOURNAL_PARAMETER_DELTA ":") - 1];
-
-            if(strcmp(v, "false") == 0 || strcmp(v, "no") == 0 || strcmp(v, "0") == 0)
-                delta = false;
-            else
-                delta = true;
-        }
-        else if(strncmp(keyword, JOURNAL_PARAMETER_TAIL ":", sizeof(JOURNAL_PARAMETER_TAIL ":") - 1) == 0) {
-            char *v = &keyword[sizeof(JOURNAL_PARAMETER_TAIL ":") - 1];
-
-            if(strcmp(v, "false") == 0 || strcmp(v, "no") == 0 || strcmp(v, "0") == 0)
-                tail = false;
-            else
-                tail = true;
-        }
-        else if(strncmp(keyword, JOURNAL_PARAMETER_SAMPLING ":", sizeof(JOURNAL_PARAMETER_SAMPLING ":") - 1) == 0) {
-            sampling = str2ul(&keyword[sizeof(JOURNAL_PARAMETER_SAMPLING ":") - 1]);
-        }
-        else if(strncmp(keyword, JOURNAL_PARAMETER_DATA_ONLY ":", sizeof(JOURNAL_PARAMETER_DATA_ONLY ":") - 1) == 0) {
-            char *v = &keyword[sizeof(JOURNAL_PARAMETER_DATA_ONLY ":") - 1];
-
-            if(strcmp(v, "false") == 0 || strcmp(v, "no") == 0 || strcmp(v, "0") == 0)
-                data_only = false;
-            else
-                data_only = true;
-        }
-        else if(strncmp(keyword, JOURNAL_PARAMETER_SLICE ":", sizeof(JOURNAL_PARAMETER_SLICE ":") - 1) == 0) {
-            char *v = &keyword[sizeof(JOURNAL_PARAMETER_SLICE ":") - 1];
-
-            if(strcmp(v, "false") == 0 || strcmp(v, "no") == 0 || strcmp(v, "0") == 0)
-                slice = false;
-            else
-                slice = true;
-        }
-        else if(strncmp(keyword, JOURNAL_PARAMETER_SOURCE ":", sizeof(JOURNAL_PARAMETER_SOURCE ":") - 1) == 0) {
-            const char *value = &keyword[sizeof(JOURNAL_PARAMETER_SOURCE ":") - 1];
-
-            buffer_json_member_add_array(wb, JOURNAL_PARAMETER_SOURCE);
-
-            BUFFER *sources_list = buffer_create(0, NULL);
-
-            source_type = SDJF_NONE;
-            while(value) {
-                char *sep = strchr(value, ',');
-                if(sep)
-                    *sep++ = '\0';
-
-                buffer_json_add_array_item_string(wb, value);
-
-                if(strcmp(value, SDJF_SOURCE_ALL_NAME) == 0) {
-                    source_type |= SDJF_ALL;
-                    value = NULL;
-                }
-                else if(strcmp(value, SDJF_SOURCE_LOCAL_NAME) == 0) {
-                    source_type |= SDJF_LOCAL_ALL;
-                    value = NULL;
-                }
-                else if(strcmp(value, SDJF_SOURCE_REMOTES_NAME) == 0) {
-                    source_type |= SDJF_REMOTE_ALL;
-                    value = NULL;
-                }
-                else if(strcmp(value, SDJF_SOURCE_NAMESPACES_NAME) == 0) {
-                    source_type |= SDJF_LOCAL_NAMESPACE;
-                    value = NULL;
-                }
-                else if(strcmp(value, SDJF_SOURCE_LOCAL_SYSTEM_NAME) == 0) {
-                    source_type |= SDJF_LOCAL_SYSTEM;
-                    value = NULL;
-                }
-                else if(strcmp(value, SDJF_SOURCE_LOCAL_USERS_NAME) == 0) {
-                    source_type |= SDJF_LOCAL_USER;
-                    value = NULL;
-                }
-                else if(strcmp(value, SDJF_SOURCE_LOCAL_OTHER_NAME) == 0) {
-                    source_type |= SDJF_LOCAL_OTHER;
-                    value = NULL;
-                }
-                else {
-                    // else, match the source, whatever it is
-                    if(buffer_strlen(sources_list))
-                        buffer_strcat(sources_list, ",");
-
-                    buffer_strcat(sources_list, value);
-                }
-
-                value = sep;
-            }
-
-            if(buffer_strlen(sources_list)) {
-                simple_pattern_free(sources);
-                sources = simple_pattern_create(buffer_tostring(sources_list), ",", SIMPLE_PATTERN_EXACT, false);
-            }
-
-            buffer_free(sources_list);
-
-            buffer_json_array_close(wb); // source
-        }
-        else if(strncmp(keyword, JOURNAL_PARAMETER_AFTER ":", sizeof(JOURNAL_PARAMETER_AFTER ":") - 1) == 0) {
-            after_s = str2l(&keyword[sizeof(JOURNAL_PARAMETER_AFTER ":") - 1]);
-        }
-        else if(strncmp(keyword, JOURNAL_PARAMETER_BEFORE ":", sizeof(JOURNAL_PARAMETER_BEFORE ":") - 1) == 0) {
-            before_s = str2l(&keyword[sizeof(JOURNAL_PARAMETER_BEFORE ":") - 1]);
-        }
-        else if(strncmp(keyword, JOURNAL_PARAMETER_IF_MODIFIED_SINCE ":", sizeof(JOURNAL_PARAMETER_IF_MODIFIED_SINCE ":") - 1) == 0) {
-            if_modified_since = str2ull(&keyword[sizeof(JOURNAL_PARAMETER_IF_MODIFIED_SINCE ":") - 1], NULL);
-        }
-        else if(strncmp(keyword, JOURNAL_PARAMETER_ANCHOR ":", sizeof(JOURNAL_PARAMETER_ANCHOR ":") - 1) == 0) {
-            anchor = str2ull(&keyword[sizeof(JOURNAL_PARAMETER_ANCHOR ":") - 1], NULL);
-        }
-        else if(strncmp(keyword, JOURNAL_PARAMETER_DIRECTION ":", sizeof(JOURNAL_PARAMETER_DIRECTION ":") - 1) == 0) {
-            direction = strcasecmp(&keyword[sizeof(JOURNAL_PARAMETER_DIRECTION ":") - 1], "forward") == 0 ? FACETS_ANCHOR_DIRECTION_FORWARD : FACETS_ANCHOR_DIRECTION_BACKWARD;
-        }
-        else if(strncmp(keyword, JOURNAL_PARAMETER_LAST ":", sizeof(JOURNAL_PARAMETER_LAST ":") - 1) == 0) {
-            last = str2ul(&keyword[sizeof(JOURNAL_PARAMETER_LAST ":") - 1]);
-        }
-        else if(strncmp(keyword, JOURNAL_PARAMETER_QUERY ":", sizeof(JOURNAL_PARAMETER_QUERY ":") - 1) == 0) {
-            query = &keyword[sizeof(JOURNAL_PARAMETER_QUERY ":") - 1];
-        }
-        else if(strncmp(keyword, JOURNAL_PARAMETER_HISTOGRAM ":", sizeof(JOURNAL_PARAMETER_HISTOGRAM ":") - 1) == 0) {
-            chart = &keyword[sizeof(JOURNAL_PARAMETER_HISTOGRAM ":") - 1];
-        }
-        else if(strncmp(keyword, JOURNAL_PARAMETER_FACETS ":", sizeof(JOURNAL_PARAMETER_FACETS ":") - 1) == 0) {
-            char *value = &keyword[sizeof(JOURNAL_PARAMETER_FACETS ":") - 1];
-            if(*value) {
-                buffer_json_member_add_array(wb, JOURNAL_PARAMETER_FACETS);
-
-                while(value) {
-                    char *sep = strchr(value, ',');
-                    if(sep)
-                        *sep++ = '\0';
-
-                    facets_register_facet_id(facets, value, FACET_KEY_OPTION_FACET|FACET_KEY_OPTION_FTS|FACET_KEY_OPTION_REORDER);
-                    buffer_json_add_array_item_string(wb, value);
-
-                    value = sep;
-                }
-
-                buffer_json_array_close(wb); // JOURNAL_PARAMETER_FACETS
-            }
-        }
-        else {
-            char *value = strchr(keyword, ':');
-            if(value) {
-                *value++ = '\0';
-
-                buffer_json_member_add_array(wb, keyword);
-
-                while(value) {
-                    char *sep = strchr(value, ',');
-                    if(sep)
-                        *sep++ = '\0';
-
-                    facets_register_facet_id_filter(facets, keyword, value, FACET_KEY_OPTION_FACET|FACET_KEY_OPTION_FTS|FACET_KEY_OPTION_REORDER);
-                    buffer_json_add_array_item_string(wb, value);
-                    filters++;
-
-                    value = sep;
-                }
-
-                buffer_json_array_close(wb); // keyword
-            }
-        }
-    }
+    facets_register_key_name_transformation(
+        facets, "_SOURCE_REALTIME_TIMESTAMP",
+        FACET_KEY_OPTION_TRANSFORM_VIEW,
+        netdata_systemd_journal_transform_timestamp_usec, NULL);

    // ------------------------------------------------------------------------
    // put this request into the progress db

@@ -1864,71 +2103,70 @@ void function_systemd_journal(const char *transaction, char *function, usec_t *s
     time_t now_s = now_realtime_sec();
     time_t expires = now_s + 1;

-    if(!after_s && !before_s) {
-        before_s = now_s;
-        after_s = before_s - SYSTEMD_JOURNAL_DEFAULT_QUERY_DURATION;
+    if(!q.after_s && !q.before_s) {
+        q.before_s = now_s;
+        q.after_s = q.before_s - SYSTEMD_JOURNAL_DEFAULT_QUERY_DURATION;
     }
     else
-        rrdr_relative_window_to_absolute(&after_s, &before_s, now_s);
+        rrdr_relative_window_to_absolute(&q.after_s, &q.before_s, now_s);

-    if(after_s > before_s) {
-        time_t tmp = after_s;
-        after_s = before_s;
-        before_s = tmp;
+    if(q.after_s > q.before_s) {
+        time_t tmp = q.after_s;
+        q.after_s = q.before_s;
+        q.before_s = tmp;
     }

-    if(after_s == before_s)
-        after_s = before_s - SYSTEMD_JOURNAL_DEFAULT_QUERY_DURATION;
-
-    if(!last)
-        last = SYSTEMD_JOURNAL_DEFAULT_ITEMS_PER_QUERY;
+    if(q.after_s == q.before_s)
+        q.after_s = q.before_s - SYSTEMD_JOURNAL_DEFAULT_QUERY_DURATION;
+
+    if(!q.last)
+        q.last = SYSTEMD_JOURNAL_DEFAULT_ITEMS_PER_QUERY;

     // ------------------------------------------------------------------------
     // set query time-frame, anchors and direction

     fqs->transaction = transaction;
-    fqs->after_ut = after_s * USEC_PER_SEC;
-    fqs->before_ut = (before_s * USEC_PER_SEC) + USEC_PER_SEC - 1;
-    fqs->if_modified_since = if_modified_since;
-    fqs->data_only = data_only;
-    fqs->delta = (fqs->data_only) ? delta : false;
-    fqs->tail = (fqs->data_only && fqs->if_modified_since) ? tail : false;
-    fqs->sources = sources;
-    fqs->source_type = source_type;
-    fqs->entries = last;
+    fqs->after_ut = q.after_s * USEC_PER_SEC;
+    fqs->before_ut = (q.before_s * USEC_PER_SEC) + USEC_PER_SEC - 1;
+    fqs->if_modified_since = q.if_modified_since;
+    fqs->data_only = q.data_only;
+    fqs->delta = (fqs->data_only) ? q.delta : false;
+    fqs->tail = (fqs->data_only && fqs->if_modified_since) ? q.tail : false;
+    fqs->sources = q.sources;
+    fqs->source_type = q.source_type;
+    fqs->entries = q.last;
     fqs->last_modified = 0;
-    fqs->filters = filters;
-    fqs->query = (query && *query) ? query : NULL;
-    fqs->histogram = (chart && *chart) ? chart : NULL;
-    fqs->direction = direction;
-    fqs->anchor.start_ut = anchor;
+    fqs->filters = q.filters;
+    fqs->query = (q.query && *q.query) ? q.query : NULL;
+    fqs->histogram = (q.chart && *q.chart) ? q.chart : NULL;
+    fqs->direction = q.direction;
+    fqs->anchor.start_ut = q.anchor;
     fqs->anchor.stop_ut = 0;
-    fqs->sampling = sampling;
+    fqs->sampling = q.sampling;

     if(fqs->anchor.start_ut && fqs->tail) {
         // a tail request
         // we need the top X entries from BEFORE
         // but, we need to calculate the facets and the
         // histogram up to the anchor
-        fqs->direction = direction = FACETS_ANCHOR_DIRECTION_BACKWARD;
+        fqs->direction = q.direction = FACETS_ANCHOR_DIRECTION_BACKWARD;
         fqs->anchor.start_ut = 0;
-        fqs->anchor.stop_ut = anchor;
+        fqs->anchor.stop_ut = q.anchor;
     }

-    if(anchor && anchor < fqs->after_ut) {
+    if(q.anchor && q.anchor < fqs->after_ut) {
         log_fqs(fqs, "received anchor is too small for query timeframe, ignoring anchor");
-        anchor = 0;
+        q.anchor = 0;
         fqs->anchor.start_ut = 0;
         fqs->anchor.stop_ut = 0;
-        fqs->direction = direction = FACETS_ANCHOR_DIRECTION_BACKWARD;
+        fqs->direction = q.direction = FACETS_ANCHOR_DIRECTION_BACKWARD;
     }
-    else if(anchor > fqs->before_ut) {
+    else if(q.anchor > fqs->before_ut) {
         log_fqs(fqs, "received anchor is too big for query timeframe, ignoring anchor");
-        anchor = 0;
+        q.anchor = 0;
         fqs->anchor.start_ut = 0;
         fqs->anchor.stop_ut = 0;
-        fqs->direction = direction = FACETS_ANCHOR_DIRECTION_BACKWARD;
+        fqs->direction = q.direction = FACETS_ANCHOR_DIRECTION_BACKWARD;
     }

     facets_set_anchor(facets, fqs->anchor.start_ut, fqs->anchor.stop_ut, fqs->direction);
@@ -1945,8 +2183,8 @@ void function_systemd_journal(const char *transaction, char *function, usec_t *s
     facets_set_query(facets, fqs->query);

#ifdef HAVE_SD_JOURNAL_RESTART_FIELDS
-    fqs->slice = slice;
-    if(slice)
+    fqs->slice = q.slice;
+    if(q.slice)
         facets_enable_slice_mode(facets);
#else
     fqs->slice = false;
@@ -1971,7 +2209,7 @@ void function_systemd_journal(const char *transaction, char *function, usec_t *s
     buffer_json_member_add_uint64(wb, JOURNAL_PARAMETER_AFTER, fqs->after_ut / USEC_PER_SEC);
     buffer_json_member_add_uint64(wb, JOURNAL_PARAMETER_BEFORE, fqs->before_ut / USEC_PER_SEC);
     buffer_json_member_add_uint64(wb, "if_modified_since", fqs->if_modified_since);
-    buffer_json_member_add_uint64(wb, JOURNAL_PARAMETER_ANCHOR, anchor);
+    buffer_json_member_add_uint64(wb, JOURNAL_PARAMETER_ANCHOR, q.anchor);
     buffer_json_member_add_string(wb, JOURNAL_PARAMETER_DIRECTION, fqs->direction == FACETS_ANCHOR_DIRECTION_FORWARD ? "forward" : "backward");
     buffer_json_member_add_uint64(wb, JOURNAL_PARAMETER_LAST, fqs->entries);
     buffer_json_member_add_string(wb, JOURNAL_PARAMETER_QUERY, fqs->query);
@@ -1985,7 +2223,7 @@ void function_systemd_journal(const char *transaction, char *function, usec_t *s

     int response;

-    if(info) {
+    if(q.info) {
         facets_accepted_parameters_to_json_array(facets, wb, false);
         buffer_json_member_add_array(wb, "required_params");
         {
@@ -2033,7 +2271,8 @@ output:
     netdata_mutex_unlock(&stdout_mutex);

 cleanup:
-    simple_pattern_free(sources);
+    freez((void *)q.query);
+    freez((void *)q.chart);
+    simple_pattern_free(q.sources);
     facets_destroy(facets);
-    buffer_free(wb);
 }
@@ -46,7 +46,8 @@ int main(int argc __maybe_unused, char **argv __maybe_unused) {

     bool cancelled = false;
     usec_t stop_monotonic_ut = now_monotonic_usec() + 600 * USEC_PER_SEC;
-    char buf[] = "systemd-journal after:-8640000 before:0 direction:backward last:200 data_only:false slice:true source:all";
+    // char buf[] = "systemd-journal after:-8640000 before:0 direction:backward last:200 data_only:false slice:true source:all";
+    char buf[] = "systemd-journal after:-8640000 before:0 direction:backward last:200 data_only:false slice:true facets: source:all";
     // char buf[] = "systemd-journal after:1695332964 before:1695937764 direction:backward last:100 slice:true source:all DHKucpqUoe1:PtVoyIuX.MU";
     // char buf[] = "systemd-journal after:1694511062 before:1694514662 anchor:1694514122024403";
     function_systemd_journal("123", buf, &stop_monotonic_ut, &cancelled,
@@ -928,7 +928,7 @@ void *tc_main(void *ptr) {
     }

     char buffer[TC_LINE_MAX+1] = "";
-    while(fgets(buffer, TC_LINE_MAX, tc_child_instance->child_stdout_fp) != NULL) {
+    while(fgets(buffer, TC_LINE_MAX, spawn_popen_stdout(tc_child_instance)) != NULL) {
         if(unlikely(!service_running(SERVICE_COLLECTORS))) break;

         buffer[TC_LINE_MAX] = '\0';
@@ -104,9 +104,6 @@ The command line options of the Netdata 1.10.0 version are the following:
  -W simple-pattern pattern string
               Check if string matches pattern and exit.

- -W "claim -token=TOKEN -rooms=ROOM1,ROOM2 url=https://app.netdata.cloud"
-              Connect the agent to the workspace Rooms pointed to by TOKEN and ROOM*.
-
  Signals netdata handles:

  - HUP Close and reopen log files.
@@ -334,7 +334,7 @@ void analytics_alarms_notifications(void)
     if (instance) {
         char line[200 + 1];

-        while (fgets(line, 200, instance->child_stdout_fp) != NULL) {
+        while (fgets(line, 200, spawn_popen_stdout(instance)) != NULL) {
             char *end = line;
             while (*end && *end != '\n')
                 end++;
@@ -375,7 +375,6 @@ static void analytics_get_install_type(struct rrdhost_system_info *system_info)
 void analytics_https(void)
 {
     BUFFER *b = buffer_create(30, NULL);
-#ifdef ENABLE_HTTPS
     analytics_exporting_connectors_ssl(b);

     buffer_strcat(b, netdata_ssl_streaming_sender_ctx &&
@@ -383,9 +382,6 @@ void analytics_https(void)
                       SSL_connection(&localhost->sender->ssl) ? "streaming|" : "|");

     buffer_strcat(b, netdata_ssl_web_server_ctx ? "web" : "");
-#else
-    buffer_strcat(b, "||");
-#endif

     analytics_set_data_str(&analytics_data.netdata_config_https_available, (char *)buffer_tostring(b));
     buffer_free(b);
@@ -468,13 +464,8 @@ void analytics_alarms(void)
 */
 void analytics_misc(void)
 {
-#ifdef ENABLE_ACLK
     analytics_set_data(&analytics_data.netdata_host_cloud_available, "true");
     analytics_set_data_str(&analytics_data.netdata_host_aclk_implementation, "Next Generation");
-#else
-    analytics_set_data(&analytics_data.netdata_host_cloud_available, "false");
-    analytics_set_data_str(&analytics_data.netdata_host_aclk_implementation, "");
-#endif

     analytics_data.exporting_enabled = appconfig_get_boolean(&exporting_config, CONFIG_SECTION_EXPORTING, "enabled", CONFIG_BOOLEAN_NO);
     analytics_set_data(&analytics_data.netdata_config_exporting_enabled, analytics_data.exporting_enabled ? "true" : "false");
@@ -495,13 +486,11 @@ void analytics_misc(void)

 void analytics_aclk(void)
 {
-#ifdef ENABLE_ACLK
-    if (aclk_connected) {
+    if (aclk_online()) {
         analytics_set_data(&analytics_data.netdata_host_aclk_available, "true");
         analytics_set_data_str(&analytics_data.netdata_host_aclk_protocol, "New");
     }
     else
-#endif
         analytics_set_data(&analytics_data.netdata_host_aclk_available, "false");
 }
@@ -535,9 +524,7 @@ void analytics_gather_mutable_meta_data(void)
     analytics_set_data(
         &analytics_data.netdata_config_is_parent, (rrdhost_hosts_available() > 1 || configured_as_parent()) ? "true" : "false");

-    char *claim_id = get_agent_claimid();
-    analytics_set_data(&analytics_data.netdata_host_agent_claimed, claim_id ? "true" : "false");
-    freez(claim_id);
+    analytics_set_data(&analytics_data.netdata_host_agent_claimed, is_agent_claimed() ? "true" : "false");

     {
         char b[21];
@@ -627,46 +614,15 @@ cleanup:
     return NULL;
 }

-static const char *verify_required_directory(const char *dir)
-{
-    if (chdir(dir) == -1)
-        fatal("Cannot change directory to '%s'", dir);
-
-    DIR *d = opendir(dir);
-    if (!d)
-        fatal("Cannot examine the contents of directory '%s'", dir);
-    closedir(d);
-
-    return dir;
-}
-
-static const char *verify_or_create_required_directory(const char *dir) {
-    int result;
-
-    result = mkdir(dir, 0755);
-
-    if (result != 0 && errno != EEXIST)
-        fatal("Cannot create required directory '%s'", dir);
-
-    return verify_required_directory(dir);
-}
-
 /*
  * This is called after the rrdinit
  * These values will be sent on the START event
 */
-void set_late_global_environment(struct rrdhost_system_info *system_info)
+void set_late_analytics_variables(struct rrdhost_system_info *system_info)
 {
     analytics_set_data(&analytics_data.netdata_config_stream_enabled, default_rrdpush_enabled ? "true" : "false");
     analytics_set_data_str(&analytics_data.netdata_config_memory_mode, (char *)rrd_memory_mode_name(default_rrd_memory_mode));

-#ifdef DISABLE_CLOUD
-    analytics_set_data(&analytics_data.netdata_host_cloud_enabled, "false");
-#else
-    analytics_set_data(
-        &analytics_data.netdata_host_cloud_enabled,
-        appconfig_get_boolean_ondemand(&cloud_config, CONFIG_SECTION_GLOBAL, "enabled", netdata_cloud_enabled) ? "true" : "false");
-#endif
+    analytics_set_data(&analytics_data.netdata_host_cloud_enabled, "true");

 #ifdef ENABLE_DBENGINE
 {
@ -679,11 +635,7 @@ void set_late_global_environment(struct rrdhost_system_info *system_info)
|
|||
}
|
||||
#endif
|
||||
|
||||
#ifdef ENABLE_HTTPS
|
||||
analytics_set_data(&analytics_data.netdata_config_https_enabled, "true");
|
||||
#else
|
||||
analytics_set_data(&analytics_data.netdata_config_https_enabled, "false");
|
||||
#endif
|
||||
|
||||
if (web_server_mode == WEB_SERVER_MODE_NONE)
|
||||
analytics_set_data(&analytics_data.netdata_config_web_enabled, "false");
|
||||
|
@ -831,119 +783,6 @@ void get_system_timezone(void)
|
|||
}
|
||||
}
|
||||
|
||||
void set_global_environment() {
|
||||
{
|
||||
char b[16];
|
||||
snprintfz(b, sizeof(b) - 1, "%d", default_rrd_update_every);
|
||||
setenv("NETDATA_UPDATE_EVERY", b, 1);
|
||||
}
|
||||
|
||||
setenv("NETDATA_VERSION", NETDATA_VERSION, 1);
|
||||
setenv("NETDATA_HOSTNAME", netdata_configured_hostname, 1);
|
||||
setenv("NETDATA_CONFIG_DIR", verify_required_directory(netdata_configured_user_config_dir), 1);
|
||||
setenv("NETDATA_USER_CONFIG_DIR", verify_required_directory(netdata_configured_user_config_dir), 1);
|
||||
setenv("NETDATA_STOCK_CONFIG_DIR", verify_required_directory(netdata_configured_stock_config_dir), 1);
|
||||
setenv("NETDATA_PLUGINS_DIR", verify_required_directory(netdata_configured_primary_plugins_dir), 1);
|
||||
setenv("NETDATA_WEB_DIR", verify_required_directory(netdata_configured_web_dir), 1);
|
||||
setenv("NETDATA_CACHE_DIR", verify_or_create_required_directory(netdata_configured_cache_dir), 1);
|
||||
setenv("NETDATA_LIB_DIR", verify_or_create_required_directory(netdata_configured_varlib_dir), 1);
|
||||
setenv("NETDATA_LOCK_DIR", verify_or_create_required_directory(netdata_configured_lock_dir), 1);
|
||||
setenv("NETDATA_LOG_DIR", verify_or_create_required_directory(netdata_configured_log_dir), 1);
|
||||
setenv("NETDATA_HOST_PREFIX", netdata_configured_host_prefix, 1);
|
||||
|
||||
{
|
||||
BUFFER *user_plugins_dirs = buffer_create(FILENAME_MAX, NULL);
|
||||
|
||||
for (size_t i = 1; i < PLUGINSD_MAX_DIRECTORIES && plugin_directories[i]; i++) {
|
||||
if (i > 1)
|
||||
buffer_strcat(user_plugins_dirs, " ");
|
||||
buffer_strcat(user_plugins_dirs, plugin_directories[i]);
|
||||
}
|
||||
|
||||
setenv("NETDATA_USER_PLUGINS_DIRS", buffer_tostring(user_plugins_dirs), 1);
|
||||
|
||||
buffer_free(user_plugins_dirs);
|
||||
}
|
||||
|
||||
analytics_data.data_length = 0;
|
||||
analytics_set_data(&analytics_data.netdata_config_stream_enabled, "null");
|
||||
analytics_set_data(&analytics_data.netdata_config_memory_mode, "null");
|
||||
analytics_set_data(&analytics_data.netdata_config_exporting_enabled, "null");
|
||||
analytics_set_data(&analytics_data.netdata_exporting_connectors, "null");
|
||||
analytics_set_data(&analytics_data.netdata_allmetrics_prometheus_used, "null");
|
||||
analytics_set_data(&analytics_data.netdata_allmetrics_shell_used, "null");
|
||||
analytics_set_data(&analytics_data.netdata_allmetrics_json_used, "null");
|
||||
analytics_set_data(&analytics_data.netdata_dashboard_used, "null");
|
||||
analytics_set_data(&analytics_data.netdata_collectors, "null");
|
||||
analytics_set_data(&analytics_data.netdata_collectors_count, "null");
|
||||
analytics_set_data(&analytics_data.netdata_buildinfo, "null");
|
||||
analytics_set_data(&analytics_data.netdata_config_page_cache_size, "null");
|
||||
analytics_set_data(&analytics_data.netdata_config_multidb_disk_quota, "null");
|
||||
analytics_set_data(&analytics_data.netdata_config_https_enabled, "null");
|
||||
analytics_set_data(&analytics_data.netdata_config_web_enabled, "null");
|
||||
analytics_set_data(&analytics_data.netdata_config_release_channel, "null");
|
||||
analytics_set_data(&analytics_data.netdata_mirrored_host_count, "null");
|
||||
analytics_set_data(&analytics_data.netdata_mirrored_hosts_reachable, "null");
|
||||
analytics_set_data(&analytics_data.netdata_mirrored_hosts_unreachable, "null");
|
||||
analytics_set_data(&analytics_data.netdata_notification_methods, "null");
|
||||
analytics_set_data(&analytics_data.netdata_alarms_normal, "null");
|
||||
analytics_set_data(&analytics_data.netdata_alarms_warning, "null");
|
||||
analytics_set_data(&analytics_data.netdata_alarms_critical, "null");
|
||||
analytics_set_data(&analytics_data.netdata_charts_count, "null");
|
||||
analytics_set_data(&analytics_data.netdata_metrics_count, "null");
|
||||
analytics_set_data(&analytics_data.netdata_config_is_parent, "null");
|
||||
analytics_set_data(&analytics_data.netdata_config_hosts_available, "null");
|
||||
analytics_set_data(&analytics_data.netdata_host_cloud_available, "null");
|
||||
analytics_set_data(&analytics_data.netdata_host_aclk_implementation, "null");
|
||||
analytics_set_data(&analytics_data.netdata_host_aclk_available, "null");
|
||||
analytics_set_data(&analytics_data.netdata_host_aclk_protocol, "null");
|
||||
analytics_set_data(&analytics_data.netdata_host_agent_claimed, "null");
|
||||
analytics_set_data(&analytics_data.netdata_host_cloud_enabled, "null");
|
||||
analytics_set_data(&analytics_data.netdata_config_https_available, "null");
|
||||
analytics_set_data(&analytics_data.netdata_install_type, "null");
|
||||
analytics_set_data(&analytics_data.netdata_config_is_private_registry, "null");
|
||||
analytics_set_data(&analytics_data.netdata_config_use_private_registry, "null");
|
||||
analytics_set_data(&analytics_data.netdata_config_oom_score, "null");
|
||||
analytics_set_data(&analytics_data.netdata_prebuilt_distro, "null");
|
||||
analytics_set_data(&analytics_data.netdata_fail_reason, "null");
|
||||
|
||||
analytics_data.prometheus_hits = 0;
|
||||
analytics_data.shell_hits = 0;
|
||||
analytics_data.json_hits = 0;
|
||||
analytics_data.dashboard_hits = 0;
|
||||
analytics_data.charts_count = 0;
|
||||
analytics_data.metrics_count = 0;
|
||||
analytics_data.exporting_enabled = false;
|
||||
|
||||
char *default_port = appconfig_get(&netdata_config, CONFIG_SECTION_WEB, "default port", NULL);
|
||||
int clean = 0;
|
||||
if (!default_port) {
|
||||
default_port = strdupz("19999");
|
||||
clean = 1;
|
||||
}
|
||||
|
||||
setenv("NETDATA_LISTEN_PORT", default_port, 1);
|
||||
if (clean)
|
||||
freez(default_port);
|
||||
|
||||
// set the path we need
|
||||
char path[4096], *p = getenv("PATH");
|
||||
if (!p) p = "/bin:/usr/bin";
|
||||
snprintfz(path, sizeof(path), "%s:%s", p, "/sbin:/usr/sbin:/usr/local/bin:/usr/local/sbin");
|
||||
setenv("PATH", config_get(CONFIG_SECTION_ENV_VARS, "PATH", path), 1);
|
||||
|
||||
// python options
|
||||
p = getenv("PYTHONPATH");
|
||||
if (!p) p = "";
|
||||
setenv("PYTHONPATH", config_get(CONFIG_SECTION_ENV_VARS, "PYTHONPATH", p), 1);
|
||||
|
||||
// disable buffering for python plugins
|
||||
setenv("PYTHONUNBUFFERED", "1", 1);
|
||||
|
||||
// switch to standard locale for plugins
|
||||
setenv("LC_ALL", "C", 1);
|
||||
}
|
||||
|
||||
void analytics_statistic_send(const analytics_statistic_t *statistic) {
|
||||
if (!statistic)
|
||||
return;
|
||||
|
@ -1053,7 +892,7 @@ void analytics_statistic_send(const analytics_statistic_t *statistic) {
|
|||
POPEN_INSTANCE *instance = spawn_popen_run(command_to_run);
|
||||
if (instance) {
|
||||
char buffer[4 + 1];
|
||||
char *s = fgets(buffer, 4, instance->child_stdout_fp);
|
||||
char *s = fgets(buffer, 4, spawn_popen_stdout(instance));
|
||||
int exit_code = spawn_popen_wait(instance);
|
||||
if (exit_code)
|
||||
|
||||
|
@ -1075,6 +914,58 @@ void analytics_statistic_send(const analytics_statistic_t *statistic) {
|
|||
freez(command_to_run);
|
||||
}
|
||||
|
||||
void analytics_reset(void) {
|
||||
analytics_data.data_length = 0;
|
||||
analytics_set_data(&analytics_data.netdata_config_stream_enabled, "null");
|
||||
analytics_set_data(&analytics_data.netdata_config_memory_mode, "null");
|
||||
analytics_set_data(&analytics_data.netdata_config_exporting_enabled, "null");
|
||||
analytics_set_data(&analytics_data.netdata_exporting_connectors, "null");
|
||||
analytics_set_data(&analytics_data.netdata_allmetrics_prometheus_used, "null");
|
||||
analytics_set_data(&analytics_data.netdata_allmetrics_shell_used, "null");
|
||||
analytics_set_data(&analytics_data.netdata_allmetrics_json_used, "null");
|
||||
analytics_set_data(&analytics_data.netdata_dashboard_used, "null");
|
||||
analytics_set_data(&analytics_data.netdata_collectors, "null");
|
||||
analytics_set_data(&analytics_data.netdata_collectors_count, "null");
|
||||
analytics_set_data(&analytics_data.netdata_buildinfo, "null");
|
||||
analytics_set_data(&analytics_data.netdata_config_page_cache_size, "null");
|
||||
analytics_set_data(&analytics_data.netdata_config_multidb_disk_quota, "null");
|
||||
analytics_set_data(&analytics_data.netdata_config_https_enabled, "null");
|
||||
analytics_set_data(&analytics_data.netdata_config_web_enabled, "null");
|
||||
analytics_set_data(&analytics_data.netdata_config_release_channel, "null");
|
||||
analytics_set_data(&analytics_data.netdata_mirrored_host_count, "null");
|
||||
analytics_set_data(&analytics_data.netdata_mirrored_hosts_reachable, "null");
|
||||
analytics_set_data(&analytics_data.netdata_mirrored_hosts_unreachable, "null");
|
||||
analytics_set_data(&analytics_data.netdata_notification_methods, "null");
|
||||
analytics_set_data(&analytics_data.netdata_alarms_normal, "null");
|
||||
analytics_set_data(&analytics_data.netdata_alarms_warning, "null");
|
||||
analytics_set_data(&analytics_data.netdata_alarms_critical, "null");
|
||||
analytics_set_data(&analytics_data.netdata_charts_count, "null");
|
||||
analytics_set_data(&analytics_data.netdata_metrics_count, "null");
|
||||
analytics_set_data(&analytics_data.netdata_config_is_parent, "null");
|
||||
analytics_set_data(&analytics_data.netdata_config_hosts_available, "null");
|
||||
analytics_set_data(&analytics_data.netdata_host_cloud_available, "null");
|
||||
analytics_set_data(&analytics_data.netdata_host_aclk_implementation, "null");
|
||||
analytics_set_data(&analytics_data.netdata_host_aclk_available, "null");
|
||||
analytics_set_data(&analytics_data.netdata_host_aclk_protocol, "null");
|
||||
analytics_set_data(&analytics_data.netdata_host_agent_claimed, "null");
|
||||
analytics_set_data(&analytics_data.netdata_host_cloud_enabled, "null");
|
||||
analytics_set_data(&analytics_data.netdata_config_https_available, "null");
|
||||
analytics_set_data(&analytics_data.netdata_install_type, "null");
|
||||
analytics_set_data(&analytics_data.netdata_config_is_private_registry, "null");
|
||||
analytics_set_data(&analytics_data.netdata_config_use_private_registry, "null");
|
||||
analytics_set_data(&analytics_data.netdata_config_oom_score, "null");
|
||||
analytics_set_data(&analytics_data.netdata_prebuilt_distro, "null");
|
||||
analytics_set_data(&analytics_data.netdata_fail_reason, "null");
|
||||
|
||||
analytics_data.prometheus_hits = 0;
|
||||
analytics_data.shell_hits = 0;
|
||||
analytics_data.json_hits = 0;
|
||||
analytics_data.dashboard_hits = 0;
|
||||
analytics_data.charts_count = 0;
|
||||
analytics_data.metrics_count = 0;
|
||||
analytics_data.exporting_enabled = false;
|
||||
}
|
||||
|
||||
void analytics_init(void)
|
||||
{
|
||||
spinlock_init(&analytics_data.spinlock);
|
||||
|
|
|
@@ -76,9 +76,8 @@ struct analytics_data {
    bool exporting_enabled;
};

void set_late_global_environment(struct rrdhost_system_info *system_info);
void set_late_analytics_variables(struct rrdhost_system_info *system_info);
void analytics_free_data(void);
void set_global_environment(void);
void analytics_log_shell(void);
void analytics_log_json(void);
void analytics_log_prometheus(void);

@@ -86,6 +85,7 @@ void analytics_log_dashboard(void);
void analytics_gather_mutable_meta_data(void);
void analytics_report_oom_score(long long int score);
void get_system_timezone(void);
void analytics_reset(void);
void analytics_init(void);

typedef struct {

@@ -1069,18 +1069,8 @@ __attribute__((constructor)) void initialize_build_info(void) {
#endif
#endif

#ifdef ENABLE_ACLK
    build_info_set_status(BIB_FEATURE_CLOUD, true);
    build_info_set_status(BIB_CONNECTIVITY_ACLK, true);
#else
    build_info_set_status(BIB_FEATURE_CLOUD, false);
#ifdef DISABLE_CLOUD
    build_info_set_value(BIB_FEATURE_CLOUD, "disabled");
#else
    build_info_set_value(BIB_FEATURE_CLOUD, "unavailable");
#endif
#endif

    build_info_set_status(BIB_FEATURE_HEALTH, true);
    build_info_set_status(BIB_FEATURE_STREAMING, true);
    build_info_set_status(BIB_FEATURE_BACKFILLING, true);

@@ -1126,9 +1116,7 @@ __attribute__((constructor)) void initialize_build_info(void) {
#ifdef ENABLE_WEBRTC
    build_info_set_status(BIB_CONNECTIVITY_WEBRTC, true);
#endif
#ifdef ENABLE_HTTPS
    build_info_set_status(BIB_CONNECTIVITY_NATIVE_HTTPS, true);
#endif
#if defined(HAVE_X509_VERIFY_PARAM_set1_host) && HAVE_X509_VERIFY_PARAM_set1_host == 1
    build_info_set_status(BIB_CONNECTIVITY_TLS_HOST_VERIFY, true);
#endif

@@ -1162,9 +1150,7 @@ __attribute__((constructor)) void initialize_build_info(void) {
#ifdef HAVE_LIBDATACHANNEL
    build_info_set_status(BIB_LIB_LIBDATACHANNEL, true);
#endif
#ifdef ENABLE_OPENSSL
    build_info_set_status(BIB_LIB_OPENSSL, true);
#endif
#ifdef ENABLE_JSONC
    build_info_set_status(BIB_LIB_JSONC, true);
#endif

@@ -47,9 +47,7 @@ static cmd_status_t cmd_ping_execute(char *args, char **message);
static cmd_status_t cmd_aclk_state(char *args, char **message);
static cmd_status_t cmd_version(char *args, char **message);
static cmd_status_t cmd_dumpconfig(char *args, char **message);
#ifdef ENABLE_ACLK
static cmd_status_t cmd_remove_node(char *args, char **message);
#endif

static command_info_t command_info_array[] = {
    {"help", cmd_help_execute, CMD_TYPE_HIGH_PRIORITY}, // show help menu

@@ -65,9 +63,7 @@ static command_info_t command_info_array[] = {
    {"aclk-state", cmd_aclk_state, CMD_TYPE_ORTHOGONAL},
    {"version", cmd_version, CMD_TYPE_ORTHOGONAL},
    {"dumpconfig", cmd_dumpconfig, CMD_TYPE_ORTHOGONAL},
#ifdef ENABLE_ACLK
    {"remove-stale-node", cmd_remove_node, CMD_TYPE_ORTHOGONAL}
#endif
};

/* Mutexes for commands of type CMD_TYPE_ORTHOGONAL */

@@ -135,10 +131,8 @@ static cmd_status_t cmd_help_execute(char *args, char **message)
        " Returns current state of ACLK and Cloud connection. (optionally in json).\n"
        "dumpconfig\n"
        " Returns the current netdata.conf on stdout.\n"
#ifdef ENABLE_ACLK
        "remove-stale-node node_id|machine_guid|hostname|ALL_NODES\n"
        " Unregisters and removes a node from the cloud.\n"
#endif
        "version\n"
        " Returns the netdata version.\n",
        MAX_COMMAND_LENGTH - 1);

@@ -193,17 +187,42 @@ static cmd_status_t cmd_fatal_execute(char *args, char **message)
    return CMD_STATUS_SUCCESS;
}

static cmd_status_t cmd_reload_claiming_state_execute(char *args, char **message)
{
    (void)args;
    (void)message;
#if defined(DISABLE_CLOUD) || !defined(ENABLE_ACLK)
    netdata_log_info("The claiming feature has been explicitly disabled");
    *message = strdupz("This agent cannot be claimed, it was built without support for Cloud");
    return CMD_STATUS_FAILURE;
#endif
    netdata_log_info("COMMAND: Reloading Agent Claiming configuration.");
    claim_reload_all();
static cmd_status_t cmd_reload_claiming_state_execute(char *args __maybe_unused, char **message) {
    char msg[1024];

    CLOUD_STATUS status = claim_reload_and_wait_online();
    switch(status) {
        case CLOUD_STATUS_ONLINE:
            snprintfz(msg, sizeof(msg),
                      "Netdata Agent is claimed to Netdata Cloud and is currently online.");
            break;

        case CLOUD_STATUS_BANNED:
            snprintfz(msg, sizeof(msg),
                      "Netdata Agent is claimed to Netdata Cloud, but it is banned.");
            break;

        default:
        case CLOUD_STATUS_AVAILABLE:
            snprintfz(msg, sizeof(msg),
                      "Netdata Agent is not claimed to Netdata Cloud: %s",
                      claim_agent_failure_reason_get());
            break;

        case CLOUD_STATUS_OFFLINE:
            snprintfz(msg, sizeof(msg),
                      "Netdata Agent is claimed to Netdata Cloud, but it is currently offline: %s",
                      cloud_status_aclk_offline_reason());
            break;

        case CLOUD_STATUS_INDIRECT:
            snprintfz(msg, sizeof(msg),
                      "Netdata Agent is not claimed to Netdata Cloud, but it is currently online via parent.");
            break;
    }

    *message = strdupz(msg);

    return CMD_STATUS_SUCCESS;
}

@@ -306,17 +325,10 @@ static cmd_status_t cmd_ping_execute(char *args, char **message)
static cmd_status_t cmd_aclk_state(char *args, char **message)
{
    netdata_log_info("COMMAND: Reopening aclk/cloud state.");
#ifdef ENABLE_ACLK
    if (strstr(args, "json"))
        *message = aclk_state_json();
    else
        *message = aclk_state();
#else
    if (strstr(args, "json"))
        *message = strdupz("{\"aclk-available\":false}");
    else
        *message = strdupz("ACLK Available: No");
#endif

    return CMD_STATUS_SUCCESS;
}

@@ -338,14 +350,12 @@ static cmd_status_t cmd_dumpconfig(char *args, char **message)
    (void)args;

    BUFFER *wb = buffer_create(1024, NULL);
    config_generate(wb, 0);
    netdata_conf_generate(wb, 0);
    *message = strdupz(buffer_tostring(wb));
    buffer_free(wb);
    return CMD_STATUS_SUCCESS;
}

#ifdef ENABLE_ACLK

static int remove_ephemeral_host(BUFFER *wb, RRDHOST *host, bool report_error)
{
    if (host == localhost) {

@@ -365,8 +375,7 @@ static int remove_ephemeral_host(BUFFER *wb, RRDHOST *host, bool report_error)
    sql_set_host_label(&host->host_uuid, "_is_ephemeral", "true");
    aclk_host_state_update(host, 0, 0);
    unregister_node(host->machine_guid);
    freez(host->node_id);
    host->node_id = NULL;
    uuid_clear(host->node_id);
    buffer_sprintf(wb, "Unregistering node with machine guid %s, hostname = %s", host->machine_guid, rrdhost_hostname(host));
    rrd_wrlock();
    rrdhost_free___while_having_rrd_wrlock(host, true);

@@ -438,7 +447,6 @@ done:
    buffer_free(wb);
    return CMD_STATUS_SUCCESS;
}
#endif

static void cmd_lock_exclusive(unsigned index)
{

@@ -20,9 +20,7 @@ typedef enum cmd {
    CMD_ACLK_STATE,
    CMD_VERSION,
    CMD_DUMPCONFIG,
#ifdef ENABLE_ACLK
    CMD_REMOVE_NODE,
#endif
    CMD_TOTAL_COMMANDS
} cmd_t;

@@ -11,6 +11,7 @@ char *netdata_configured_web_dir = WEB_DIR;
char *netdata_configured_cache_dir = CACHE_DIR;
char *netdata_configured_varlib_dir = VARLIB_DIR;
char *netdata_configured_lock_dir = VARLIB_DIR "/lock";
char *netdata_configured_cloud_dir = VARLIB_DIR "/cloud.d";
char *netdata_configured_home_dir = VARLIB_DIR;
char *netdata_configured_host_prefix = NULL;
char *netdata_configured_timezone = NULL;

@@ -19,12 +20,6 @@ int32_t netdata_configured_utc_offset = 0;

bool netdata_ready = false;

#if defined( DISABLE_CLOUD ) || !defined( ENABLE_ACLK )
int netdata_cloud_enabled = CONFIG_BOOLEAN_NO;
#else
int netdata_cloud_enabled = CONFIG_BOOLEAN_AUTO;
#endif

long get_netdata_cpus(void) {
    static long processors = 0;

@@ -63,135 +58,3 @@ long get_netdata_cpus(void) {

    return processors;
}

const char *cloud_status_to_string(CLOUD_STATUS status) {
    switch(status) {
        default:
        case CLOUD_STATUS_UNAVAILABLE:
            return "unavailable";

        case CLOUD_STATUS_AVAILABLE:
            return "available";

        case CLOUD_STATUS_DISABLED:
            return "disabled";

        case CLOUD_STATUS_BANNED:
            return "banned";

        case CLOUD_STATUS_OFFLINE:
            return "offline";

        case CLOUD_STATUS_ONLINE:
            return "online";
    }
}

CLOUD_STATUS cloud_status(void) {
#ifdef ENABLE_ACLK
    if(aclk_disable_runtime)
        return CLOUD_STATUS_BANNED;

    if(aclk_connected)
        return CLOUD_STATUS_ONLINE;

    if(netdata_cloud_enabled == CONFIG_BOOLEAN_YES) {
        char *agent_id = get_agent_claimid();
        bool claimed = agent_id != NULL;
        freez(agent_id);

        if(claimed)
            return CLOUD_STATUS_OFFLINE;
    }

    if(netdata_cloud_enabled != CONFIG_BOOLEAN_NO)
        return CLOUD_STATUS_AVAILABLE;

    return CLOUD_STATUS_DISABLED;
#else
    return CLOUD_STATUS_UNAVAILABLE;
#endif
}

time_t cloud_last_change(void) {
#ifdef ENABLE_ACLK
    time_t ret = MAX(last_conn_time_mqtt, last_disconnect_time);
    if(!ret) ret = netdata_start_time;
    return ret;
#else
    return netdata_start_time;
#endif
}

time_t cloud_next_connection_attempt(void) {
#ifdef ENABLE_ACLK
    return next_connection_attempt;
#else
    return 0;
#endif
}

size_t cloud_connection_id(void) {
#ifdef ENABLE_ACLK
    return aclk_connection_counter;
#else
    return 0;
#endif
}

const char *cloud_offline_reason() {
#ifdef ENABLE_ACLK
    if(!netdata_cloud_enabled)
        return "disabled";

    if(aclk_disable_runtime)
        return "banned";

    return aclk_status_to_string();
#else
    return "disabled";
#endif
}

const char *cloud_base_url() {
#ifdef ENABLE_ACLK
    return aclk_cloud_base_url;
#else
    return NULL;
#endif
}

CLOUD_STATUS buffer_json_cloud_status(BUFFER *wb, time_t now_s) {
    CLOUD_STATUS status = cloud_status();

    buffer_json_member_add_object(wb, "cloud");
    {
        size_t id = cloud_connection_id();
        time_t last_change = cloud_last_change();
        time_t next_connect = cloud_next_connection_attempt();
        buffer_json_member_add_uint64(wb, "id", id);
        buffer_json_member_add_string(wb, "status", cloud_status_to_string(status));
        buffer_json_member_add_time_t(wb, "since", last_change);
        buffer_json_member_add_time_t(wb, "age", now_s - last_change);

        if (status != CLOUD_STATUS_ONLINE)
            buffer_json_member_add_string(wb, "reason", cloud_offline_reason());

        if (status == CLOUD_STATUS_OFFLINE && next_connect > now_s) {
            buffer_json_member_add_time_t(wb, "next_check", next_connect);
            buffer_json_member_add_time_t(wb, "next_in", next_connect - now_s);
        }

        if (cloud_base_url())
            buffer_json_member_add_string(wb, "url", cloud_base_url());

        char *claim_id = get_agent_claimid();
        if(claim_id) {
            buffer_json_member_add_string(wb, "claim_id", claim_id);
            freez(claim_id);
        }
    }
    buffer_json_object_close(wb); // cloud

    return status;
}

@@ -4,7 +4,7 @@
#define NETDATA_COMMON_H 1

#include "libnetdata/libnetdata.h"
#include "event_loop.h"
#include "libuv_workers.h"

// ----------------------------------------------------------------------------
// shortcuts for the default netdata configuration

@@ -26,7 +26,7 @@
#define config_exists(section, name) appconfig_exists(&netdata_config, section, name)
#define config_move(section_old, name_old, section_new, name_new) appconfig_move(&netdata_config, section_old, name_old, section_new, name_new)

#define config_generate(buffer, only_changed) appconfig_generate(&netdata_config, buffer, only_changed)
#define netdata_conf_generate(buffer, only_changed) appconfig_generate(&netdata_config, buffer, only_changed, true)

#define config_section_destroy(section) appconfig_section_destroy_non_loaded(&netdata_config, section)
#define config_section_option_destroy(section, name) appconfig_section_option_destroy_non_loaded(&netdata_config, section, name)

@@ -34,6 +34,8 @@
// ----------------------------------------------------------------------------
// netdata include files

#include "web/api/maps/maps.h"

#include "daemon/config/dyncfg.h"

#include "global_statistics.h"

@@ -103,6 +105,7 @@ extern char *netdata_configured_web_dir;
extern char *netdata_configured_cache_dir;
extern char *netdata_configured_varlib_dir;
extern char *netdata_configured_lock_dir;
extern char *netdata_configured_cloud_dir;
extern char *netdata_configured_home_dir;
extern char *netdata_configured_host_prefix;
extern char *netdata_configured_timezone;

@@ -111,28 +114,10 @@ extern int32_t netdata_configured_utc_offset;
extern int netdata_anonymous_statistics_enabled;

extern bool netdata_ready;
extern int netdata_cloud_enabled;

extern time_t netdata_start_time;

long get_netdata_cpus(void);

typedef enum __attribute__((packed)) {
    CLOUD_STATUS_UNAVAILABLE = 0, // cloud and aclk functionality is not available on this agent
    CLOUD_STATUS_AVAILABLE,       // cloud and aclk functionality is available, but the agent is not claimed
    CLOUD_STATUS_DISABLED,        // cloud and aclk functionality is available, but it is disabled
    CLOUD_STATUS_BANNED,          // the agent has been banned from cloud
    CLOUD_STATUS_OFFLINE,         // the agent tries to connect to cloud, but cannot do it
    CLOUD_STATUS_ONLINE,          // the agent is connected to cloud
} CLOUD_STATUS;

const char *cloud_status_to_string(CLOUD_STATUS status);
CLOUD_STATUS cloud_status(void);
time_t cloud_last_change(void);
time_t cloud_next_connection_attempt(void);
size_t cloud_connection_id(void);
const char *cloud_offline_reason(void);
const char *cloud_base_url(void);
CLOUD_STATUS buffer_json_cloud_status(BUFFER *wb, time_t now_s);
void set_environment_for_plugins_and_scripts(void);

#endif /* NETDATA_COMMON_H */

@@ -96,7 +96,7 @@ void dyncfg_echo(const DICTIONARY_ITEM *item, DYNCFG *df, const char *id __maybe
        dyncfg_echo_cb, e,
        NULL, NULL,
        NULL, NULL,
        NULL, string2str(df->dyncfg.source));
        NULL, string2str(df->dyncfg.source), false);
}

// ----------------------------------------------------------------------------

@@ -129,7 +129,7 @@ void dyncfg_echo_update(const DICTIONARY_ITEM *item, DYNCFG *df, const char *id)
        dyncfg_echo_cb, e,
        NULL, NULL,
        NULL, NULL,
        df->dyncfg.payload, string2str(df->dyncfg.source));
        df->dyncfg.payload, string2str(df->dyncfg.source), false);
}

// ----------------------------------------------------------------------------

@@ -164,7 +164,7 @@ static void dyncfg_echo_payload_add(const DICTIONARY_ITEM *item_template __maybe
        dyncfg_echo_cb, e,
        NULL, NULL,
        NULL, NULL,
        df_job->dyncfg.payload, string2str(df_job->dyncfg.source));
        df_job->dyncfg.payload, string2str(df_job->dyncfg.source), false);
}

void dyncfg_echo_add(const DICTIONARY_ITEM *item_template, const DICTIONARY_ITEM *item_job, DYNCFG *df_template, DYNCFG *df_job, const char *template_id, const char *job_name) {

@@ -473,7 +473,7 @@ static int dyncfg_unittest_run(const char *cmd, BUFFER *wb, const char *payload,
        NULL, NULL,
        NULL, NULL,
        NULL, NULL,
        pld, source);
        pld, source, false);
    if(!DYNCFG_RESP_SUCCESS(rc)) {
        nd_log(NDLS_DAEMON, NDLP_ERR, "DYNCFG UNITTEST: failed to run: %s; returned code %d", cmd, rc);
        dyncfg_unittest_register_error(NULL, NULL);

@@ -3,34 +3,24 @@
#include "common.h"
#include <sched.h>

char pidfile[FILENAME_MAX + 1] = "";
char claiming_directory[FILENAME_MAX + 1];
char netdata_exe_path[FILENAME_MAX + 1];
char netdata_exe_file[FILENAME_MAX + 1];
char *pidfile = NULL;
char *netdata_exe_path = NULL;

void get_netdata_execution_path(void) {
    int ret;
    size_t exepath_size = 0;
    struct passwd *passwd = NULL;
    char *user = NULL;
    struct passwd *passwd = getpwuid(getuid());
    char *user = (passwd && passwd->pw_name) ? passwd->pw_name : "";

    passwd = getpwuid(getuid());
    user = (passwd && passwd->pw_name) ? passwd->pw_name : "";

    exepath_size = sizeof(netdata_exe_file) - 1;
    ret = uv_exepath(netdata_exe_file, &exepath_size);
    if (0 != ret) {
        netdata_log_error("uv_exepath(\"%s\", %u) (user: %s) failed (%s).", netdata_exe_file, (unsigned)exepath_size, user,
                          uv_strerror(ret));
        fatal("Cannot start netdata without getting execution path.");
    char b[FILENAME_MAX + 1];
    size_t b_size = sizeof(b) - 1;
    int ret = uv_exepath(b, &b_size);
    if (ret != 0) {
        fatal("Cannot start netdata without getting execution path. "
              "(uv_exepath(\"%s\", %zu), user: '%s', failed: %s).",
              b, b_size, user, uv_strerror(ret));
    }
    b[b_size] = '\0';

    netdata_exe_file[exepath_size] = '\0';

    // macOS's dirname(3) does not modify passed string
    char *tmpdir = strdupz(netdata_exe_file);
    strcpy(netdata_exe_path, dirname(tmpdir));
    freez(tmpdir);
    netdata_exe_path = strdupz(b);
}

static void fix_directory_file_permissions(const char *dirname, uid_t uid, gid_t gid, bool recursive)

@@ -89,7 +79,7 @@ static void prepare_required_directories(uid_t uid, gid_t gid) {
    change_dir_ownership(netdata_configured_varlib_dir, uid, gid, false);
    change_dir_ownership(netdata_configured_lock_dir, uid, gid, false);
    change_dir_ownership(netdata_configured_log_dir, uid, gid, false);
    change_dir_ownership(claiming_directory, uid, gid, false);
    change_dir_ownership(netdata_configured_cloud_dir, uid, gid, false);

    char filename[FILENAME_MAX + 1];
    snprintfz(filename, FILENAME_MAX, "%s/registry", netdata_configured_varlib_dir);

@@ -112,7 +102,7 @@ static int become_user(const char *username, int pid_fd) {

    prepare_required_directories(uid, gid);

    if(pidfile[0]) {
    if(pidfile && *pidfile) {
        if(chown(pidfile, uid, gid) == -1)
            netdata_log_error("Cannot chown '%s' to %u:%u", pidfile, (unsigned int)uid, (unsigned int)gid);
    }

@@ -465,7 +455,7 @@ int become_daemon(int dont_fork, const char *user)

    // generate our pid file
    int pidfd = -1;
    if(pidfile[0]) {
    if(pidfile && *pidfile) {
        pidfd = open(pidfile, O_WRONLY | O_CREAT | O_CLOEXEC, 0644);
        if(pidfd >= 0) {
            if(ftruncate(pidfd, 0) != 0)

@@ -490,9 +480,6 @@ int become_daemon(int dont_fork, const char *user)
    // never become a problem
    sched_setscheduler_set();

    // Set claiming directory based on user config directory with correct ownership
    snprintfz(claiming_directory, FILENAME_MAX, "%s/cloud.d", netdata_configured_varlib_dir);

    if(user && *user) {
        if(become_user(user, pidfd) != 0) {
            netdata_log_error("Cannot become user '%s'. Continuing as we are.", user);

@@ -9,8 +9,7 @@ void netdata_cleanup_and_exit(int ret, const char *action, const char *action_re

 void get_netdata_execution_path(void);

-extern char pidfile[];
-extern char netdata_exe_file[];
-extern char netdata_exe_path[];
+extern char *pidfile;
+extern char *netdata_exe_path;

 #endif /* NETDATA_DAEMON_H */
src/daemon/environment.c (new file, 99 lines)
@@ -0,0 +1,99 @@
// SPDX-License-Identifier: GPL-3.0-or-later

#include "common.h"

static const char *verify_required_directory(const char *dir)
{
    if (chdir(dir) == -1)
        fatal("Cannot change directory to '%s'", dir);

    DIR *d = opendir(dir);
    if (!d)
        fatal("Cannot examine the contents of directory '%s'", dir);
    closedir(d);

    return dir;
}

static const char *verify_or_create_required_directory(const char *dir) {
    errno_clear();

    if (mkdir(dir, 0755) != 0 && errno != EEXIST)
        fatal("Cannot create required directory '%s'", dir);

    return verify_required_directory(dir);
}

static const char *verify_or_create_required_private_directory(const char *dir) {
    errno_clear();

    if (mkdir(dir, 0770) != 0 && errno != EEXIST)
        fatal("Cannot create required directory '%s'", dir);

    return verify_required_directory(dir);
}
void set_environment_for_plugins_and_scripts(void) {
    {
        char b[16];
        snprintfz(b, sizeof(b) - 1, "%d", default_rrd_update_every);
        nd_setenv("NETDATA_UPDATE_EVERY", b, 1);
    }

    nd_setenv("NETDATA_VERSION", NETDATA_VERSION, 1);
    nd_setenv("NETDATA_HOSTNAME", netdata_configured_hostname, 1);
    nd_setenv("NETDATA_CONFIG_DIR", verify_required_directory(netdata_configured_user_config_dir), 1);
    nd_setenv("NETDATA_USER_CONFIG_DIR", verify_required_directory(netdata_configured_user_config_dir), 1);
    nd_setenv("NETDATA_STOCK_CONFIG_DIR", verify_required_directory(netdata_configured_stock_config_dir), 1);
    nd_setenv("NETDATA_PLUGINS_DIR", verify_required_directory(netdata_configured_primary_plugins_dir), 1);
    nd_setenv("NETDATA_WEB_DIR", verify_required_directory(netdata_configured_web_dir), 1);
    nd_setenv("NETDATA_CACHE_DIR", verify_or_create_required_directory(netdata_configured_cache_dir), 1);
    nd_setenv("NETDATA_LIB_DIR", verify_or_create_required_directory(netdata_configured_varlib_dir), 1);
    nd_setenv("NETDATA_LOCK_DIR", verify_or_create_required_directory(netdata_configured_lock_dir), 1);
    nd_setenv("NETDATA_LOG_DIR", verify_or_create_required_directory(netdata_configured_log_dir), 1);
    nd_setenv("NETDATA_HOST_PREFIX", netdata_configured_host_prefix, 1);

    nd_setenv("CLAIMING_DIR", verify_or_create_required_private_directory(netdata_configured_cloud_dir), 1);

    {
        BUFFER *user_plugins_dirs = buffer_create(FILENAME_MAX, NULL);

        for (size_t i = 1; i < PLUGINSD_MAX_DIRECTORIES && plugin_directories[i]; i++) {
            if (i > 1)
                buffer_strcat(user_plugins_dirs, " ");
            buffer_strcat(user_plugins_dirs, plugin_directories[i]);
        }

        nd_setenv("NETDATA_USER_PLUGINS_DIRS", buffer_tostring(user_plugins_dirs), 1);

        buffer_free(user_plugins_dirs);
    }

    char *default_port = appconfig_get(&netdata_config, CONFIG_SECTION_WEB, "default port", NULL);
    int clean = 0;
    if (!default_port) {
        default_port = strdupz("19999");
        clean = 1;
    }

    nd_setenv("NETDATA_LISTEN_PORT", default_port, 1);
    if (clean)
        freez(default_port);

    // set the path we need
    char path[4096], *p = getenv("PATH");
    if (!p) p = "/bin:/usr/bin";
    snprintfz(path, sizeof(path), "%s:%s", p, "/sbin:/usr/sbin:/usr/local/bin:/usr/local/sbin");
    setenv("PATH", config_get(CONFIG_SECTION_ENV_VARS, "PATH", path), 1);

    // python options
    p = getenv("PYTHONPATH");
    if (!p) p = "";
    setenv("PYTHONPATH", config_get(CONFIG_SECTION_ENV_VARS, "PYTHONPATH", p), 1);

    // disable buffering for python plugins
    setenv("PYTHONUNBUFFERED", "1", 1);

    // switch to standard locale for plugins
    setenv("LC_ALL", "C", 1);
}
@@ -1,7 +1,7 @@
 // SPDX-License-Identifier: GPL-3.0-or-later

 #include <daemon/main.h>
-#include "event_loop.h"
+#include "libuv_workers.h"

 // Register workers
 void register_libuv_worker_jobs() {
@@ -6,6 +6,7 @@
 #include "static_threads.h"

 #include "database/engine/page_test.h"
+#include <curl/curl.h>

 #ifdef OS_WINDOWS
 #include "win_system-info.h"
@@ -480,15 +481,13 @@ void netdata_cleanup_and_exit(int ret, const char *action, const char *action_re

     // unlink the pid
-    if(pidfile[0]) {
+    if(pidfile && *pidfile) {
         if(unlink(pidfile) != 0)
             netdata_log_error("EXIT: cannot unlink pidfile '%s'.", pidfile);
     }
     watcher_step_complete(WATCHER_STEP_ID_REMOVE_PID_FILE);

-#ifdef ENABLE_HTTPS
     netdata_ssl_cleanup();
-#endif
     watcher_step_complete(WATCHER_STEP_ID_FREE_OPENSSL_STRUCTURES);

     (void) unlink(agent_incomplete_shutdown_file);

@@ -496,6 +495,7 @@ void netdata_cleanup_and_exit(int ret, const char *action, const char *action_re

     watcher_shutdown_end();
     watcher_thread_stop();
+    curl_global_cleanup();

 #ifdef OS_WINDOWS
     return;
@@ -807,8 +807,6 @@ int help(int exitcode) {
            " are enabled or not, in JSON format.\n\n"
            " -W simple-pattern pattern string\n"
            " Check if string matches pattern and exit.\n\n"
-           " -W \"claim -token=TOKEN -rooms=ROOM1,ROOM2\"\n"
-           " Claim the agent to the workspace rooms pointed to by TOKEN and ROOM*.\n\n"
 #ifdef OS_WINDOWS
            " -W perflibdump [key]\n"
            " Dump the Windows Performance Counters Registry in JSON.\n\n"

@@ -825,7 +823,6 @@ int help(int exitcode) {
     return exitcode;
 }

-#ifdef ENABLE_HTTPS
 static void security_init(){
     char filename[FILENAME_MAX + 1];
     snprintfz(filename, FILENAME_MAX, "%s/ssl/key.pem",netdata_configured_user_config_dir);

@@ -839,7 +836,6 @@ static void security_init(){

     netdata_ssl_initialize_openssl();
 }
-#endif

 static void log_init(void) {
     nd_log_set_facility(config_get(CONFIG_SECTION_LOGS, "facility", "daemon"));
@@ -881,21 +877,19 @@ static void log_init(void) {
     snprintfz(filename, FILENAME_MAX, "%s/health.log", netdata_configured_log_dir);
     nd_log_set_user_settings(NDLS_HEALTH, config_get(CONFIG_SECTION_LOGS, "health", filename));

-#ifdef ENABLE_ACLK
     aclklog_enabled = config_get_boolean(CONFIG_SECTION_CLOUD, "conversation log", CONFIG_BOOLEAN_NO);
     if (aclklog_enabled) {
         snprintfz(filename, FILENAME_MAX, "%s/aclk.log", netdata_configured_log_dir);
         nd_log_set_user_settings(NDLS_ACLK, config_get(CONFIG_SECTION_CLOUD, "conversation log file", filename));
     }
-#endif

     aclk_config_get_query_scope();
 }

-char *initialize_lock_directory_path(char *prefix)
-{
+static char *get_varlib_subdir_from_config(const char *prefix, const char *dir) {
     char filename[FILENAME_MAX + 1];
-    snprintfz(filename, FILENAME_MAX, "%s/lock", prefix);
-    return config_get(CONFIG_SECTION_DIRECTORIES, "lock", filename);
+    snprintfz(filename, FILENAME_MAX, "%s/%s", prefix, dir);
+    return config_get(CONFIG_SECTION_DIRECTORIES, dir, filename);
 }
 static void backwards_compatible_config() {
@@ -1175,7 +1169,8 @@ static void get_netdata_configured_variables()
     netdata_configured_cache_dir = config_get(CONFIG_SECTION_DIRECTORIES, "cache", netdata_configured_cache_dir);
     netdata_configured_varlib_dir = config_get(CONFIG_SECTION_DIRECTORIES, "lib", netdata_configured_varlib_dir);

-    netdata_configured_lock_dir = initialize_lock_directory_path(netdata_configured_varlib_dir);
+    netdata_configured_lock_dir = get_varlib_subdir_from_config(netdata_configured_varlib_dir, "lock");
+    netdata_configured_cloud_dir = get_varlib_subdir_from_config(netdata_configured_varlib_dir, "cloud.d");

     {
         pluginsd_initialize_plugin_directories();
@@ -1309,14 +1304,14 @@ static bool load_netdata_conf(char *filename, char overwrite_used, char **user)
         netdata_log_error("CONFIG: cannot load config file '%s'.", filename);
     }
     else {
-        filename = strdupz_path_subpath(netdata_configured_user_config_dir, "netdata.conf");
+        filename = filename_from_path_entry_strdupz(netdata_configured_user_config_dir, "netdata.conf");

         ret = config_load(filename, overwrite_used, NULL);
         if(!ret) {
             netdata_log_info("CONFIG: cannot load user config '%s'. Will try the stock version.", filename);
             freez(filename);

-            filename = strdupz_path_subpath(netdata_configured_stock_config_dir, "netdata.conf");
+            filename = filename_from_path_entry_strdupz(netdata_configured_stock_config_dir, "netdata.conf");
             ret = config_load(filename, overwrite_used, NULL);
             if(!ret)
                 netdata_log_info("CONFIG: cannot load stock config '%s'. Running with internal defaults.", filename);
@@ -1351,7 +1346,7 @@ int get_system_info(struct rrdhost_system_info *system_info) {
         char line[200 + 1];
         // Removed the double strlens, if the Coverity tainted string warning reappears I'll revert.
         // One time init code, but I'm curious about the warning...
-        while (fgets(line, 200, instance->child_stdout_fp) != NULL) {
+        while (fgets(line, 200, spawn_popen_stdout(instance)) != NULL) {
             char *value=line;
             while (*value && *value != '=') value++;
             if (*value=='=') {

@@ -1366,7 +1361,7 @@ int get_system_info(struct rrdhost_system_info *system_info) {
                 if(unlikely(rrdhost_set_system_info_variable(system_info, line, value))) {
                     netdata_log_error("Unexpected environment variable %s=%s", line, value);
                 } else {
-                    setenv(line, value, 1);
+                    nd_setenv(line, value, 1);
                 }
             }
         }
@@ -1405,6 +1400,7 @@ int unittest_rrdpush_compressions(void);
 int uuid_unittest(void);
 int progress_unittest(void);
 int dyncfg_unittest(void);
+bool netdata_random_session_id_generate(void);

 #ifdef OS_WINDOWS
 int windows_perflib_dump(const char *key);
@@ -1455,6 +1451,8 @@ int netdata_main(int argc, char **argv) {
     // set the name for logging
     program_name = "netdata";

+    curl_global_init(CURL_GLOBAL_ALL);
+
     // parse options
     {
         int num_opts = sizeof(option_definitions) / sizeof(struct option_def);
@@ -1483,7 +1481,7 @@ int netdata_main(int argc, char **argv) {
                 }
                 else {
                     netdata_log_debug(D_OPTIONS, "Configuration loaded from %s.", optarg);
-                    load_cloud_conf(1);
+                    cloud_conf_load(1);
                     config_loaded = 1;
                 }
                 break;
@@ -1499,8 +1497,7 @@ int netdata_main(int argc, char **argv) {
                 config_set(CONFIG_SECTION_WEB, "bind to", optarg);
                 break;
             case 'P':
-                strncpy(pidfile, optarg, FILENAME_MAX);
-                pidfile[FILENAME_MAX] = '\0';
+                pidfile = strdupz(optarg);
                 break;
             case 'p':
                 config_set(CONFIG_SECTION_GLOBAL, "default port", optarg);
@@ -1522,7 +1519,6 @@ int netdata_main(int argc, char **argv) {
                 {
                     char* stacksize_string = "stacksize=";
                     char* debug_flags_string = "debug_flags=";
-                    char* claim_string = "claim";
 #ifdef ENABLE_DBENGINE
                     char* createdataset_string = "createdataset=";
                     char* stresstest_string = "stresstest=";
@@ -1870,7 +1866,7 @@ int netdata_main(int argc, char **argv) {
                         if(!config_loaded) {
                             fprintf(stderr, "warning: no configuration file has been loaded. Use -c CONFIG_FILE, before -W get. Using default config.\n");
                             load_netdata_conf(NULL, 0, &user);
-                            load_cloud_conf(1);
+                            cloud_conf_load(1);
                         }

                         get_netdata_configured_variables();
@@ -1884,10 +1880,6 @@ int netdata_main(int argc, char **argv) {
                         printf("%s\n", value);
                         return 0;
                     }
-                    else if(strncmp(optarg, claim_string, strlen(claim_string)) == 0) {
-                        /* will trigger a claiming attempt when the agent is initialized */
-                        claiming_pending_arguments = optarg + strlen(claim_string);
-                    }
                     else if(strcmp(optarg, "buildinfo") == 0) {
                         print_build_info();
                         return 0;
@@ -1919,12 +1911,12 @@ int netdata_main(int argc, char **argv) {
     if (close_open_fds == true) {
         // close all open file descriptors, except the standard ones
         // the caller may have left open files (lxc-attach has this issue)
-        os_close_all_non_std_open_fds_except(NULL, 0);
+        os_close_all_non_std_open_fds_except(NULL, 0, 0);
     }

     if(!config_loaded) {
         load_netdata_conf(NULL, 0, &user);
-        load_cloud_conf(0);
+        cloud_conf_load(0);
     }

     // ------------------------------------------------------------------------
@@ -1970,7 +1962,8 @@ int netdata_main(int argc, char **argv) {

     // prepare configuration environment variables for the plugins
     get_netdata_configured_variables();
-    set_global_environment();
+    set_environment_for_plugins_and_scripts();
+    analytics_reset();

     // work while we are cd into config_dir
     // to allow the plugins refer to their config
@@ -1987,7 +1980,7 @@ int netdata_main(int argc, char **argv) {
     // get the debugging flags from the configuration file

     char *flags = config_get(CONFIG_SECTION_LOGS, "debug flags", "0x0000000000000000");
-    setenv("NETDATA_DEBUG_FLAGS", flags, 1);
+    nd_setenv("NETDATA_DEBUG_FLAGS", flags, 1);

     debug_flags = strtoull(flags, NULL, 0);
     netdata_log_debug(D_OPTIONS, "Debug flags set to '0x%" PRIX64 "'.", debug_flags);
@@ -2021,8 +2014,6 @@ int netdata_main(int argc, char **argv) {

     get_system_timezone();

-    bearer_tokens_init();
-
     replication_initialize();

     rrd_functions_inflight_init();
@@ -2030,9 +2021,7 @@ int netdata_main(int argc, char **argv) {
     // --------------------------------------------------------------------
     // get the certificate and start security

-#ifdef ENABLE_HTTPS
     security_init();
-#endif

     // --------------------------------------------------------------------
     // This is the safest place to start the SILENCERS structure
@@ -2053,8 +2042,7 @@ int netdata_main(int argc, char **argv) {
     // this causes the threads to block signals.

     delta_startup_time("initialize signals");
-    signals_block();
-    signals_init(); // setup the signals we want to use
+    nd_initialize_signals(); // setup the signals we want to use

     // --------------------------------------------------------------------
     // check which threads are enabled and initialize them
@@ -2086,7 +2074,7 @@ int netdata_main(int argc, char **argv) {
             st->init_routine();

         if(st->env_name)
-            setenv(st->env_name, st->enabled?"YES":"NO", 1);
+            nd_setenv(st->env_name, st->enabled?"YES":"NO", 1);

         if(st->global_variable)
             *st->global_variable = (st->enabled) ? true : false;
@@ -2097,7 +2085,7 @@ int netdata_main(int argc, char **argv) {

     delta_startup_time("initialize web server");

-    web_client_api_v1_init();
+    nd_web_api_init();
     web_server_threading_selection();

     if(web_server_mode != WEB_SERVER_MODE_NONE) {
@@ -2165,7 +2153,7 @@ int netdata_main(int argc, char **argv) {
         netdata_configured_home_dir = config_get(CONFIG_SECTION_DIRECTORIES, "home", pw->pw_dir);
     }

-    setenv("HOME", netdata_configured_home_dir, 1);
+    nd_setenv("HOME", netdata_configured_home_dir, 1);

     dyncfg_init(true);
@@ -2178,6 +2166,7 @@ int netdata_main(int argc, char **argv) {
     // initialize internal registry
     delta_startup_time("initialize registry");
     registry_init();
+    cloud_conf_init_after_registry();
     netdata_random_session_id_generate();

     // ------------------------------------------------------------------------
@@ -2203,7 +2192,7 @@ int netdata_main(int argc, char **argv) {
     delta_startup_time("initialize RRD structures");

     if(rrd_init(netdata_configured_hostname, system_info, false)) {
-        set_late_global_environment(system_info);
+        set_late_analytics_variables(system_info);
         fatal("Cannot initialize localhost instance with name '%s'.", netdata_configured_hostname);
     }
@@ -2219,15 +2208,10 @@ int netdata_main(int argc, char **argv) {
         if (fd >= 0)
             close(fd);

-    // ------------------------------------------------------------------------
-    // Claim netdata agent to a cloud endpoint
-
-    delta_startup_time("collect claiming info");
-
-    if (claiming_pending_arguments)
-        claim_agent(claiming_pending_arguments, false, NULL);
-
-    load_claiming_state();

     // ------------------------------------------------------------------------
@@ -2242,11 +2226,13 @@ int netdata_main(int argc, char **argv) {
     // ------------------------------------------------------------------------
     // spawn the threads

+    bearer_tokens_init();
+
     delta_startup_time("start the static threads");

     web_server_config_options();

-    set_late_global_environment(system_info);
+    set_late_analytics_variables(system_info);
     for (i = 0; static_threads[i].name != NULL ; i++) {
         struct netdata_static_thread *st = &static_threads[i];
@@ -2295,28 +2281,7 @@ int netdata_main(int argc, char **argv) {
         }
     }

-    // ------------------------------------------------------------------------
-    // Report ACLK build failure
-#ifndef ENABLE_ACLK
-    netdata_log_error("This agent doesn't have ACLK.");
-    char filename[FILENAME_MAX + 1];
-    snprintfz(filename, FILENAME_MAX, "%s/.aclk_report_sent", netdata_configured_varlib_dir);
-    if (netdata_anonymous_statistics_enabled > 0 && access(filename, F_OK)) { // -1 -> not initialized
-        analytics_statistic_t statistic = { "ACLK_DISABLED", "-", "-" };
-        analytics_statistic_send(&statistic);
-
-        int fd = open(filename, O_WRONLY | O_CREAT | O_TRUNC, 444);
-        if (fd == -1)
-            netdata_log_error("Cannot create file '%s'. Please fix this.", filename);
-        else
-            close(fd);
-    }
-#endif
-
     webrtc_initialize();

-    signals_unblock();
-
     return 10;
 }
@@ -2327,7 +2292,7 @@ int main(int argc, char *argv[])
     if (rc != 10)
         return rc;

-    signals_handle();
+    nd_process_signals();
     return 1;
 }
 #endif
@@ -203,7 +203,7 @@ static void svc_rrd_cleanup_obsolete_charts_from_all_hosts() {
         if (host == localhost)
             continue;

-        netdata_mutex_lock(&host->receiver_lock);
+        spinlock_lock(&host->receiver_lock);

         time_t now = now_realtime_sec();

@@ -215,7 +215,7 @@ static void svc_rrd_cleanup_obsolete_charts_from_all_hosts() {
             host->trigger_chart_obsoletion_check = 0;
         }

-        netdata_mutex_unlock(&host->receiver_lock);
+        spinlock_unlock(&host->receiver_lock);
     }

     rrd_rdunlock();
@@ -247,14 +247,12 @@ restart_after_removal:
             }

             worker_is_busy(WORKER_JOB_FREE_HOST);
-#ifdef ENABLE_ACLK
             // in case we have cloud connection we inform cloud
             // a child disconnected
-            if (netdata_cloud_enabled && force) {
+            if (force) {
                 aclk_host_state_update(host, 0, 0);
                 unregister_node(host->machine_guid);
             }
-#endif
             rrdhost_free___while_having_rrd_wrlock(host, force);
             goto restart_after_removal;
         }
@@ -2,12 +2,6 @@

 #include "common.h"

-/*
- * IMPORTANT: Libuv uv_spawn() uses SIGCHLD internally:
- * https://github.com/libuv/libuv/blob/cc51217a317e96510fbb284721d5e6bc2af31e33/src/unix/process.c#L485
- * Extreme care is needed when mixing and matching POSIX and libuv.
- */
-
 typedef enum signal_action {
     NETDATA_SIGNAL_END_OF_LIST,
     NETDATA_SIGNAL_IGNORE,

@@ -56,24 +50,33 @@ static void signal_handler(int signo) {
     }
 }

-void signals_block(void) {
+// Mask all signals, to ensure they will only be unmasked at the threads that can handle them.
+// This means that all third party libraries (including libuv) cannot use signals anymore.
+// The signals they are interested must be unblocked at their corresponding event loops.
+static void posix_mask_all_signals(void) {
     sigset_t sigset;
     sigfillset(&sigset);

-    if(pthread_sigmask(SIG_BLOCK, &sigset, NULL) == -1)
-        netdata_log_error("SIGNAL: Could not block signals for threads");
+    if(pthread_sigmask(SIG_BLOCK, &sigset, NULL) != 0)
+        netdata_log_error("SIGNAL: cannot mask all signals");
 }

-void signals_unblock(void) {
+// Unmask all signals the netdata main signal handler uses.
+// All other signals remain masked.
+static void posix_unmask_my_signals(void) {
     sigset_t sigset;
-    sigfillset(&sigset);
+    sigemptyset(&sigset);

-    if(pthread_sigmask(SIG_UNBLOCK, &sigset, NULL) == -1) {
-        netdata_log_error("SIGNAL: Could not unblock signals for threads");
-    }
+    for (int i = 0; signals_waiting[i].action != NETDATA_SIGNAL_END_OF_LIST; i++)
+        sigaddset(&sigset, signals_waiting[i].signo);
+
+    if (pthread_sigmask(SIG_UNBLOCK, &sigset, NULL) != 0)
+        netdata_log_error("SIGNAL: cannot unmask netdata signals");
 }

-void signals_init(void) {
+void nd_initialize_signals(void) {
+    posix_mask_all_signals(); // block all signals for all threads
+
     // Catch signals which we want to use
     struct sigaction sa;
     sa.sa_flags = 0;

@@ -97,22 +100,10 @@ void nd_initialize_signals(void) {
     }
 }

-void signals_reset(void) {
-    struct sigaction sa;
-    sigemptyset(&sa.sa_mask);
-    sa.sa_handler = SIG_DFL;
-    sa.sa_flags = 0;
-
-    int i;
-    for (i = 0; signals_waiting[i].action != NETDATA_SIGNAL_END_OF_LIST; i++) {
-        if(sigaction(signals_waiting[i].signo, &sa, NULL) == -1)
-            netdata_log_error("SIGNAL: Failed to reset signal handler for: %s", signals_waiting[i].name);
-    }
-}
-
-void signals_handle(void) {
+void nd_process_signals(void) {
+    posix_unmask_my_signals();
+
     while(1) {

         // pause() causes the calling process (or thread) to sleep until a signal
         // is delivered that either terminates the process or causes the invocation
         // of a signal-catching function.
@@ -3,10 +3,7 @@
 #ifndef NETDATA_SIGNALS_H
 #define NETDATA_SIGNALS_H 1

-void signals_init(void);
-void signals_block(void);
-void signals_unblock(void);
-void signals_reset(void);
-void signals_handle(void) NORETURN;
+void nd_initialize_signals(void);
+void nd_process_signals(void) NORETURN;

 #endif //NETDATA_SIGNALS_H
@@ -133,7 +133,6 @@ const struct netdata_static_thread static_threads_common[] = {
     },
 #endif

-#ifdef ENABLE_ACLK
     {
         .name = "ACLK_MAIN",
         .config_section = NULL,

@@ -143,7 +142,6 @@ const struct netdata_static_thread static_threads_common[] = {
         .init_routine = NULL,
         .start_routine = aclk_main
     },
-#endif

     {
         .name = "RRDCONTEXT",
@@ -1437,8 +1437,8 @@ int check_strdupz_path_subpath() {

     size_t i;
     for(i = 0; checks[i].result ; i++) {
-        char *s = strdupz_path_subpath(checks[i].path, checks[i].subpath);
-        fprintf(stderr, "strdupz_path_subpath(\"%s\", \"%s\") = \"%s\": ", checks[i].path, checks[i].subpath, s);
+        char *s = filename_from_path_entry_strdupz(checks[i].path, checks[i].subpath);
+        fprintf(stderr, "filename_from_path_entry_strdupz(\"%s\", \"%s\") = \"%s\": ", checks[i].path, checks[i].subpath, s);
         if(!s || strcmp(s, checks[i].result) != 0) {
             freez(s);
             fprintf(stderr, "FAILED\n");
@@ -4,7 +4,7 @@ extern "C" {
 #include "libnetdata/libnetdata.h"

 int netdata_main(int argc, char *argv[]);
-void signals_handle(void);
+void nd_process_signals(void);

 }

@@ -231,7 +231,7 @@ int main(int argc, char *argv[])
         if (rc != 10)
             return rc;

-        signals_handle();
+        nd_process_signals();
         return 1;
     }
     else
@@ -399,8 +399,8 @@ int rrdcontexts_to_json(RRDHOST *host, BUFFER *wb, time_t after, time_t before,

     char node_uuid[UUID_STR_LEN] = "";

-    if(host->node_id)
-        uuid_unparse(*host->node_id, node_uuid);
+    if(!uuid_is_null(host->node_id))
+        uuid_unparse_lower(host->node_id, node_uuid);

     if(after != 0 && before != 0)
         rrdr_relative_window_to_absolute_query(&after, &before, NULL, false);

@@ -409,7 +409,8 @@ int rrdcontexts_to_json(RRDHOST *host, BUFFER *wb, time_t after, time_t before,
     buffer_json_member_add_string(wb, "hostname", rrdhost_hostname(host));
     buffer_json_member_add_string(wb, "machine_guid", host->machine_guid);
     buffer_json_member_add_string(wb, "node_id", node_uuid);
-    buffer_json_member_add_string(wb, "claim_id", host->aclk_state.claimed_id ? host->aclk_state.claimed_id : "");
+    CLAIM_ID claim_id = rrdhost_claim_id_get(host);
+    buffer_json_member_add_string(wb, "claim_id", claim_id.str);

     if(options & RRDCONTEXT_OPTION_SHOW_LABELS) {
         buffer_json_member_add_object(wb, "host_labels");
(file diff suppressed because it is too large)

src/database/contexts/api_v2_contexts.c (new file, 1001 lines; diff suppressed because it is too large)

src/database/contexts/api_v2_contexts.h (new file, 98 lines)
@@ -0,0 +1,98 @@
// SPDX-License-Identifier: GPL-3.0-or-later

#ifndef NETDATA_API_V2_CONTEXTS_H
#define NETDATA_API_V2_CONTEXTS_H

#include "internal.h"

typedef enum __attribute__ ((__packed__)) {
    FTS_MATCHED_NONE = 0,
    FTS_MATCHED_HOST,
    FTS_MATCHED_CONTEXT,
    FTS_MATCHED_INSTANCE,
    FTS_MATCHED_DIMENSION,
    FTS_MATCHED_LABEL,
    FTS_MATCHED_ALERT,
    FTS_MATCHED_ALERT_INFO,
    FTS_MATCHED_FAMILY,
    FTS_MATCHED_TITLE,
    FTS_MATCHED_UNITS,
} FTS_MATCH;

typedef struct full_text_search_index {
    size_t searches;
    size_t string_searches;
    size_t char_searches;
} FTS_INDEX;

struct contexts_v2_node {
    size_t ni;
    RRDHOST *host;
};

struct rrdcontext_to_json_v2_data {
    time_t now;

    BUFFER *wb;
    struct api_v2_contexts_request *request;

    CONTEXTS_V2_MODE mode;
    CONTEXTS_OPTIONS options;
    struct query_versions versions;

    struct {
        SIMPLE_PATTERN *scope_pattern;
        SIMPLE_PATTERN *pattern;
        size_t ni;
        DICTIONARY *dict; // the result set
    } nodes;

    struct {
        SIMPLE_PATTERN *scope_pattern;
        SIMPLE_PATTERN *pattern;
        size_t ci;
        DICTIONARY *dict; // the result set
    } contexts;

    struct {
        SIMPLE_PATTERN *alert_name_pattern;
        time_t alarm_id_filter;

        size_t ati;

        DICTIONARY *summary;
        DICTIONARY *alert_instances;

        DICTIONARY *by_type;
        DICTIONARY *by_component;
        DICTIONARY *by_classification;
        DICTIONARY *by_recipient;
        DICTIONARY *by_module;
    } alerts;

    struct {
        FTS_MATCH host_match;
        char host_node_id_str[UUID_STR_LEN];
        SIMPLE_PATTERN *pattern;
        FTS_INDEX fts;
    } q;

    struct {
        DICTIONARY *dict; // the result set
    } functions;

    struct {
        bool enabled;
        bool relative;
        time_t after;
        time_t before;
    } window;

    struct query_timings timings;
};

void agent_capabilities_to_json(BUFFER *wb, RRDHOST *host, const char *key);

#include "api_v2_contexts_alerts.h"

#endif //NETDATA_API_V2_CONTEXTS_H
src/database/contexts/api_v2_contexts_agents.c (new file, 178 lines)
@@ -0,0 +1,178 @@
// SPDX-License-Identifier: GPL-3.0-or-later

#include "api_v2_contexts.h"
#include "aclk/aclk_capas.h"

void build_info_to_json_object(BUFFER *b);

static void convert_seconds_to_dhms(time_t seconds, char *result, int result_size) {
    int days, hours, minutes;

    days = (int) (seconds / (24 * 3600));
    seconds = (int) (seconds % (24 * 3600));
    hours = (int) (seconds / 3600);
    seconds %= 3600;
    minutes = (int) (seconds / 60);
    seconds %= 60;

    // Format the result into the provided string buffer
    BUFFER *buf = buffer_create(128, NULL);
    if (days)
        buffer_sprintf(buf, "%d day%s%s", days, days == 1 ? "" : "s", hours || minutes ? ", " : "");
    if (hours)
        buffer_sprintf(buf, "%d hour%s%s", hours, hours == 1 ? "" : "s", minutes ? ", " : "");
    if (minutes)
        buffer_sprintf(buf, "%d minute%s%s", minutes, minutes == 1 ? "" : "s", seconds ? ", " : "");
    if (seconds)
        buffer_sprintf(buf, "%d second%s", (int) seconds, seconds == 1 ? "" : "s");
    strncpyz(result, buffer_tostring(buf), result_size);
    buffer_free(buf);
}
|
||||

void buffer_json_agents_v2(BUFFER *wb, struct query_timings *timings, time_t now_s, bool info, bool array) {
    if(!now_s)
        now_s = now_realtime_sec();

    if(array) {
        buffer_json_member_add_array(wb, "agents");
        buffer_json_add_array_item_object(wb);
    }
    else
        buffer_json_member_add_object(wb, "agent");

    buffer_json_member_add_string(wb, "mg", localhost->machine_guid);
    buffer_json_member_add_uuid(wb, "nd", localhost->node_id);
    buffer_json_member_add_string(wb, "nm", rrdhost_hostname(localhost));
    buffer_json_member_add_time_t(wb, "now", now_s);

    if(array)
        buffer_json_member_add_uint64(wb, "ai", 0);

    if(info) {
        buffer_json_member_add_object(wb, "application");
        build_info_to_json_object(wb);
        buffer_json_object_close(wb); // application

        buffer_json_cloud_status(wb, now_s);

        buffer_json_member_add_object(wb, "nodes");
        {
            size_t receiving = 0, archived = 0, sending = 0, total = 0;
            RRDHOST *host;
            dfe_start_read(rrdhost_root_index, host) {
                total++;

                if(host == localhost)
                    continue;

                if(rrdhost_state_cloud_emulation(host))
                    receiving++;
                else
                    archived++;

                if(rrdhost_flag_check(host, RRDHOST_FLAG_RRDPUSH_SENDER_CONNECTED))
                    sending++;
            }
            dfe_done(host);

            buffer_json_member_add_uint64(wb, "total", total);
            buffer_json_member_add_uint64(wb, "receiving", receiving);
            buffer_json_member_add_uint64(wb, "sending", sending);
            buffer_json_member_add_uint64(wb, "archived", archived);
        }
        buffer_json_object_close(wb); // nodes

        agent_capabilities_to_json(wb, localhost, "capabilities");

        buffer_json_member_add_object(wb, "api");
        {
            buffer_json_member_add_uint64(wb, "version", aclk_get_http_api_version());
            buffer_json_member_add_boolean(wb, "bearer_protection", netdata_is_protected_by_bearer);
        }
        buffer_json_object_close(wb); // api

        buffer_json_member_add_array(wb, "db_size");
        size_t group_seconds = localhost->rrd_update_every;
        for (size_t tier = 0; tier < storage_tiers; tier++) {
            STORAGE_ENGINE *eng = localhost->db[tier].eng;
            if (!eng) continue;

            group_seconds *= storage_tiers_grouping_iterations[tier];
            uint64_t max = storage_engine_disk_space_max(eng->seb, localhost->db[tier].si);
            uint64_t used = storage_engine_disk_space_used(eng->seb, localhost->db[tier].si);
#ifdef ENABLE_DBENGINE
            if (!max && eng->seb == STORAGE_ENGINE_BACKEND_DBENGINE) {
                max = get_directory_free_bytes_space(multidb_ctx[tier]);
                max += used;
            }
#endif
            time_t first_time_s = storage_engine_global_first_time_s(eng->seb, localhost->db[tier].si);
            size_t currently_collected_metrics = storage_engine_collected_metrics(eng->seb, localhost->db[tier].si);

            NETDATA_DOUBLE percent;
            if (used && max)
                percent = (NETDATA_DOUBLE) used * 100.0 / (NETDATA_DOUBLE) max;
            else
                percent = 0.0;

            buffer_json_add_array_item_object(wb);
            buffer_json_member_add_uint64(wb, "tier", tier);
            char human_retention[128];
            convert_seconds_to_dhms((time_t) group_seconds, human_retention, sizeof(human_retention) - 1);
            buffer_json_member_add_string(wb, "point_every", human_retention);

            buffer_json_member_add_uint64(wb, "metrics", storage_engine_metrics(eng->seb, localhost->db[tier].si));
            buffer_json_member_add_uint64(wb, "samples", storage_engine_samples(eng->seb, localhost->db[tier].si));

            if(used || max) {
                buffer_json_member_add_uint64(wb, "disk_used", used);
                buffer_json_member_add_uint64(wb, "disk_max", max);
                buffer_json_member_add_double(wb, "disk_percent", percent);
            }

            if(first_time_s) {
                time_t retention = now_s - first_time_s;

                buffer_json_member_add_time_t(wb, "from", first_time_s);
                buffer_json_member_add_time_t(wb, "to", now_s);
                buffer_json_member_add_time_t(wb, "retention", retention);

                convert_seconds_to_dhms(retention, human_retention, sizeof(human_retention) - 1);
                buffer_json_member_add_string(wb, "retention_human", human_retention);

                if(used || max) { // we have disk space information
                    time_t time_retention = 0;
#ifdef ENABLE_DBENGINE
                    time_retention = multidb_ctx[tier]->config.max_retention_s;
#endif
                    time_t space_retention = (time_t)((NETDATA_DOUBLE)(now_s - first_time_s) * 100.0 / percent);
                    time_t actual_retention = MIN(space_retention, time_retention ? time_retention : space_retention);

                    if (time_retention) {
                        convert_seconds_to_dhms(time_retention, human_retention, sizeof(human_retention) - 1);
                        buffer_json_member_add_time_t(wb, "requested_retention", time_retention);
                        buffer_json_member_add_string(wb, "requested_retention_human", human_retention);
                    }

                    convert_seconds_to_dhms(actual_retention, human_retention, sizeof(human_retention) - 1);
                    buffer_json_member_add_time_t(wb, "expected_retention", actual_retention);
                    buffer_json_member_add_string(wb, "expected_retention_human", human_retention);
                }
            }

            if(currently_collected_metrics)
                buffer_json_member_add_uint64(wb, "currently_collected_metrics", currently_collected_metrics);

            buffer_json_object_close(wb);
        }
        buffer_json_array_close(wb); // db_size
    }

    if(timings)
        buffer_json_query_timings(wb, "timings", timings);

    buffer_json_object_close(wb);

    if(array)
        buffer_json_array_close(wb);
}

src/database/contexts/api_v2_contexts_alert_config.c (new file, 135 lines)
@@ -0,0 +1,135 @@
// SPDX-License-Identifier: GPL-3.0-or-later

#include "api_v2_contexts_alerts.h"

void contexts_v2_alert_config_to_json_from_sql_alert_config_data(struct sql_alert_config_data *t, void *data) {
    struct alert_transitions_callback_data *d = data;
    BUFFER *wb = d->wb;
    bool debug = d->debug;
    d->configs_added++;

    if(d->only_one_config)
        buffer_json_add_array_item_object(wb); // alert config

    {
        buffer_json_member_add_string(wb, "name", t->name);
        buffer_json_member_add_uuid_ptr(wb, "config_hash_id", t->config_hash_id);

        buffer_json_member_add_object(wb, "selectors");
        {
            bool is_template = t->selectors.on_template && *t->selectors.on_template ? true : false;
            buffer_json_member_add_string(wb, "type", is_template ? "template" : "alarm");
            buffer_json_member_add_string(wb, "on", is_template ? t->selectors.on_template : t->selectors.on_key);

            buffer_json_member_add_string(wb, "families", t->selectors.families);
            buffer_json_member_add_string(wb, "host_labels", t->selectors.host_labels);
            buffer_json_member_add_string(wb, "chart_labels", t->selectors.chart_labels);
        }
        buffer_json_object_close(wb); // selectors

        buffer_json_member_add_object(wb, "value"); // value
        {
            // buffer_json_member_add_string(wb, "every", t->value.every); // does not exist in Netdata Cloud
            buffer_json_member_add_string(wb, "units", t->value.units);
            buffer_json_member_add_uint64(wb, "update_every", t->value.update_every);

            if (t->value.db.after || debug) {
                buffer_json_member_add_object(wb, "db");
                {
                    // buffer_json_member_add_string(wb, "lookup", t->value.db.lookup); // does not exist in Netdata Cloud

                    buffer_json_member_add_time_t(wb, "after", t->value.db.after);
                    buffer_json_member_add_time_t(wb, "before", t->value.db.before);
                    buffer_json_member_add_string(wb, "time_group_condition", alerts_group_conditions_id2txt(t->value.db.time_group_condition));
                    buffer_json_member_add_double(wb, "time_group_value", t->value.db.time_group_value);
                    buffer_json_member_add_string(wb, "dims_group", alerts_dims_grouping_id2group(t->value.db.dims_group));
                    buffer_json_member_add_string(wb, "data_source", alerts_data_source_id2source(t->value.db.data_source));
                    buffer_json_member_add_string(wb, "method", t->value.db.method);
                    buffer_json_member_add_string(wb, "dimensions", t->value.db.dimensions);
                    rrdr_options_to_buffer_json_array(wb, "options", (RRDR_OPTIONS)t->value.db.options);
                }
                buffer_json_object_close(wb); // db
            }

            if (t->value.calc || debug)
                buffer_json_member_add_string(wb, "calc", t->value.calc);
        }
        buffer_json_object_close(wb); // value

        if (t->status.warn || t->status.crit || debug) {
            buffer_json_member_add_object(wb, "status"); // status
            {
                NETDATA_DOUBLE green = t->status.green ? str2ndd(t->status.green, NULL) : NAN;
                NETDATA_DOUBLE red = t->status.red ? str2ndd(t->status.red, NULL) : NAN;

                if (!isnan(green) || debug)
                    buffer_json_member_add_double(wb, "green", green);

                if (!isnan(red) || debug)
                    buffer_json_member_add_double(wb, "red", red);

                if (t->status.warn || debug)
                    buffer_json_member_add_string(wb, "warn", t->status.warn);

                if (t->status.crit || debug)
                    buffer_json_member_add_string(wb, "crit", t->status.crit);
            }
            buffer_json_object_close(wb); // status
        }

        buffer_json_member_add_object(wb, "notification");
        {
            buffer_json_member_add_string(wb, "type", "agent");
            buffer_json_member_add_string(wb, "exec", t->notification.exec ? t->notification.exec : NULL);
            buffer_json_member_add_string(wb, "to", t->notification.to_key ? t->notification.to_key : string2str(localhost->health.health_default_recipient));
            buffer_json_member_add_string(wb, "delay", t->notification.delay);
            buffer_json_member_add_string(wb, "repeat", t->notification.repeat);
            buffer_json_member_add_string(wb, "options", t->notification.options);
        }
        buffer_json_object_close(wb); // notification

        buffer_json_member_add_string(wb, "class", t->classification);
        buffer_json_member_add_string(wb, "component", t->component);
        buffer_json_member_add_string(wb, "type", t->type);
        buffer_json_member_add_string(wb, "info", t->info);
        buffer_json_member_add_string(wb, "summary", t->summary);
        // buffer_json_member_add_string(wb, "source", t->source); // moved to alert instance
    }

    if(d->only_one_config)
        buffer_json_object_close(wb);
}

int contexts_v2_alert_config_to_json(struct web_client *w, const char *config_hash_id) {
    struct alert_transitions_callback_data data = {
        .wb = w->response.data,
        .debug = false,
        .only_one_config = false,
    };
    DICTIONARY *configs = dictionary_create(DICT_OPTION_SINGLE_THREADED | DICT_OPTION_DONT_OVERWRITE_VALUE);
    dictionary_set(configs, config_hash_id, NULL, 0);

    buffer_flush(w->response.data);

    buffer_json_initialize(w->response.data, "\"", "\"", 0, true, BUFFER_JSON_OPTIONS_DEFAULT);

    int added = sql_get_alert_configuration(configs, contexts_v2_alert_config_to_json_from_sql_alert_config_data, &data, false);
    buffer_json_finalize(w->response.data);

    int ret = HTTP_RESP_OK;

    if(added <= 0) {
        buffer_flush(w->response.data);
        w->response.data->content_type = CT_TEXT_PLAIN;
        if(added < 0) {
            buffer_strcat(w->response.data, "Failed to execute SQL query.");
            ret = HTTP_RESP_INTERNAL_SERVER_ERROR;
        }
        else {
            buffer_strcat(w->response.data, "Config is not found.");
            ret = HTTP_RESP_NOT_FOUND;
        }
    }

    return ret;
}

src/database/contexts/api_v2_contexts_alert_transitions.c (new file, 487 lines)
@@ -0,0 +1,487 @@
// SPDX-License-Identifier: GPL-3.0-or-later

#include "api_v2_contexts_alerts.h"

struct alert_transitions_facets alert_transition_facets[] = {
    [ATF_STATUS] = {
        .id = "f_status",
        .name = "Alert Status",
        .query_param = "f_status",
        .order = 1,
    },
    [ATF_TYPE] = {
        .id = "f_type",
        .name = "Alert Type",
        .query_param = "f_type",
        .order = 2,
    },
    [ATF_ROLE] = {
        .id = "f_role",
        .name = "Recipient Role",
        .query_param = "f_role",
        .order = 3,
    },
    [ATF_CLASS] = {
        .id = "f_class",
        .name = "Alert Class",
        .query_param = "f_class",
        .order = 4,
    },
    [ATF_COMPONENT] = {
        .id = "f_component",
        .name = "Alert Component",
        .query_param = "f_component",
        .order = 5,
    },
    [ATF_NODE] = {
        .id = "f_node",
        .name = "Alert Node",
        .query_param = "f_node",
        .order = 6,
    },
    [ATF_ALERT_NAME] = {
        .id = "f_alert",
        .name = "Alert Name",
        .query_param = "f_alert",
        .order = 7,
    },
    [ATF_CHART_NAME] = {
        .id = "f_instance",
        .name = "Instance Name",
        .query_param = "f_instance",
        .order = 8,
    },
    [ATF_CONTEXT] = {
        .id = "f_context",
        .name = "Context",
        .query_param = "f_context",
        .order = 9,
    },

    // terminator
    [ATF_TOTAL_ENTRIES] = {
        .id = NULL,
        .name = NULL,
        .query_param = NULL,
        .order = 9999,
    }
};

#define SQL_TRANSITION_DATA_SMALL_STRING (6 * 8)
#define SQL_TRANSITION_DATA_MEDIUM_STRING (12 * 8)
#define SQL_TRANSITION_DATA_BIG_STRING 512

struct sql_alert_transition_fixed_size {
    usec_t global_id;
    nd_uuid_t transition_id;
    nd_uuid_t host_id;
    nd_uuid_t config_hash_id;
    uint32_t alarm_id;
    char alert_name[SQL_TRANSITION_DATA_SMALL_STRING];
    char chart[RRD_ID_LENGTH_MAX];
    char chart_name[RRD_ID_LENGTH_MAX];
    char chart_context[SQL_TRANSITION_DATA_MEDIUM_STRING];
    char family[SQL_TRANSITION_DATA_SMALL_STRING];
    char recipient[SQL_TRANSITION_DATA_MEDIUM_STRING];
    char units[SQL_TRANSITION_DATA_SMALL_STRING];
    char exec[SQL_TRANSITION_DATA_BIG_STRING];
    char info[SQL_TRANSITION_DATA_BIG_STRING];
    char summary[SQL_TRANSITION_DATA_BIG_STRING];
    char classification[SQL_TRANSITION_DATA_SMALL_STRING];
    char type[SQL_TRANSITION_DATA_SMALL_STRING];
    char component[SQL_TRANSITION_DATA_SMALL_STRING];
    time_t when_key;
    time_t duration;
    time_t non_clear_duration;
    uint64_t flags;
    time_t delay_up_to_timestamp;
    time_t exec_run_timestamp;
    int exec_code;
    int new_status;
    int old_status;
    int delay;
    time_t last_repeat;
    NETDATA_DOUBLE new_value;
    NETDATA_DOUBLE old_value;

    char machine_guid[UUID_STR_LEN];
    struct sql_alert_transition_fixed_size *next;
    struct sql_alert_transition_fixed_size *prev;
};

struct facet_entry {
    uint32_t count;
};

static struct sql_alert_transition_fixed_size *contexts_v2_alert_transition_dup(struct sql_alert_transition_data *t, const char *machine_guid, struct sql_alert_transition_fixed_size *dst) {
    struct sql_alert_transition_fixed_size *n = dst ? dst : mallocz(sizeof(*n));

    n->global_id = t->global_id;
    uuid_copy(n->transition_id, *t->transition_id);
    uuid_copy(n->host_id, *t->host_id);
    uuid_copy(n->config_hash_id, *t->config_hash_id);
    n->alarm_id = t->alarm_id;
    strncpyz(n->alert_name, t->alert_name ? t->alert_name : "", sizeof(n->alert_name) - 1);
    strncpyz(n->chart, t->chart ? t->chart : "", sizeof(n->chart) - 1);
    strncpyz(n->chart_name, t->chart_name ? t->chart_name : n->chart, sizeof(n->chart_name) - 1);
    strncpyz(n->chart_context, t->chart_context ? t->chart_context : "", sizeof(n->chart_context) - 1);
    strncpyz(n->family, t->family ? t->family : "", sizeof(n->family) - 1);
    strncpyz(n->recipient, t->recipient ? t->recipient : "", sizeof(n->recipient) - 1);
    strncpyz(n->units, t->units ? t->units : "", sizeof(n->units) - 1);
    strncpyz(n->exec, t->exec ? t->exec : "", sizeof(n->exec) - 1);
    strncpyz(n->info, t->info ? t->info : "", sizeof(n->info) - 1);
    strncpyz(n->summary, t->summary ? t->summary : "", sizeof(n->summary) - 1);
    strncpyz(n->classification, t->classification ? t->classification : "", sizeof(n->classification) - 1);
    strncpyz(n->type, t->type ? t->type : "", sizeof(n->type) - 1);
    strncpyz(n->component, t->component ? t->component : "", sizeof(n->component) - 1);
    n->when_key = t->when_key;
    n->duration = t->duration;
    n->non_clear_duration = t->non_clear_duration;
    n->flags = t->flags;
    n->delay_up_to_timestamp = t->delay_up_to_timestamp;
    n->exec_run_timestamp = t->exec_run_timestamp;
    n->exec_code = t->exec_code;
    n->new_status = t->new_status;
    n->old_status = t->old_status;
    n->delay = t->delay;
    n->last_repeat = t->last_repeat;
    n->new_value = t->new_value;
    n->old_value = t->old_value;

    memcpy(n->machine_guid, machine_guid, sizeof(n->machine_guid));
    n->next = n->prev = NULL;

    return n;
}

static void contexts_v2_alert_transition_free(struct sql_alert_transition_fixed_size *t) {
    freez(t);
}

static inline void contexts_v2_alert_transition_keep(struct alert_transitions_callback_data *d, struct sql_alert_transition_data *t, const char *machine_guid) {
    d->items_matched++;

    if(unlikely(t->global_id <= d->ctl->request->alerts.global_id_anchor)) {
        // this is in our past, we are not interested
        d->operations.skips_before++;
        return;
    }

    if(unlikely(!d->base)) {
        d->last_added = contexts_v2_alert_transition_dup(t, machine_guid, NULL);
        DOUBLE_LINKED_LIST_APPEND_ITEM_UNSAFE(d->base, d->last_added, prev, next);
        d->items_to_return++;
        d->operations.first++;
        return;
    }

    struct sql_alert_transition_fixed_size *last = d->last_added;
    while(last->prev != d->base->prev && t->global_id > last->prev->global_id) {
        last = last->prev;
        d->operations.backwards++;
    }

    while(last->next && t->global_id < last->next->global_id) {
        last = last->next;
        d->operations.forwards++;
    }

    if(d->items_to_return >= d->max_items_to_return) {
        if(last == d->base->prev && t->global_id < last->global_id) {
            d->operations.skips_after++;
            return;
        }
    }

    d->items_to_return++;

    if(t->global_id > last->global_id) {
        if(d->items_to_return > d->max_items_to_return) {
            d->items_to_return--;
            d->operations.shifts++;
            d->last_added = d->base->prev;
            DOUBLE_LINKED_LIST_REMOVE_ITEM_UNSAFE(d->base, d->last_added, prev, next);
            d->last_added = contexts_v2_alert_transition_dup(t, machine_guid, d->last_added);
        }
        DOUBLE_LINKED_LIST_PREPEND_ITEM_UNSAFE(d->base, d->last_added, prev, next);
        d->operations.prepend++;
    }
    else {
        d->last_added = contexts_v2_alert_transition_dup(t, machine_guid, NULL);
        DOUBLE_LINKED_LIST_APPEND_ITEM_UNSAFE(d->base, d->last_added, prev, next);
        d->operations.append++;
    }

    while(d->items_to_return > d->max_items_to_return) {
        // we have to remove something

        struct sql_alert_transition_fixed_size *tmp = d->base->prev;
        DOUBLE_LINKED_LIST_REMOVE_ITEM_UNSAFE(d->base, tmp, prev, next);
        d->items_to_return--;

        if(unlikely(d->last_added == tmp))
            d->last_added = d->base;

        contexts_v2_alert_transition_free(tmp);

        d->operations.shifts++;
    }
}

static void contexts_v2_alert_transition_callback(struct sql_alert_transition_data *t, void *data) {
    struct alert_transitions_callback_data *d = data;
    d->items_evaluated++;

    char machine_guid[UUID_STR_LEN] = "";
    uuid_unparse_lower(*t->host_id, machine_guid);

    const char *facets[ATF_TOTAL_ENTRIES] = {
        [ATF_STATUS] = rrdcalc_status2string(t->new_status),
        [ATF_CLASS] = t->classification,
        [ATF_TYPE] = t->type,
        [ATF_COMPONENT] = t->component,
        [ATF_ROLE] = t->recipient && *t->recipient ? t->recipient : string2str(localhost->health.health_default_recipient),
        [ATF_NODE] = machine_guid,
        [ATF_ALERT_NAME] = t->alert_name,
        [ATF_CHART_NAME] = t->chart_name,
        [ATF_CONTEXT] = t->chart_context,
    };

    for(size_t i = 0; i < ATF_TOTAL_ENTRIES ;i++) {
        if (!facets[i] || !*facets[i]) facets[i] = "unknown";

        struct facet_entry tmp = {
            .count = 0,
        };
        dictionary_set(d->facets[i].dict, facets[i], &tmp, sizeof(tmp));
    }

    bool selected[ATF_TOTAL_ENTRIES] = { 0 };

    uint32_t selected_by = 0;
    for(size_t i = 0; i < ATF_TOTAL_ENTRIES ;i++) {
        selected[i] = !d->facets[i].pattern || simple_pattern_matches(d->facets[i].pattern, facets[i]);
        if(selected[i])
            selected_by++;
    }

    if(selected_by == ATF_TOTAL_ENTRIES) {
        // this item is selected by all facets
        // put it in our result (if it fits)
        contexts_v2_alert_transition_keep(d, t, machine_guid);
    }

    if(selected_by >= ATF_TOTAL_ENTRIES - 1) {
        // this item is selected by all, or all except one facet
        // in both cases we need to add it to our counters

        for (size_t i = 0; i < ATF_TOTAL_ENTRIES; i++) {
            uint32_t counted_by = selected_by;

            if (counted_by != ATF_TOTAL_ENTRIES) {
                counted_by = 0;
                for (size_t j = 0; j < ATF_TOTAL_ENTRIES; j++) {
                    if (i == j || selected[j])
                        counted_by++;
                }
            }

            if (counted_by == ATF_TOTAL_ENTRIES) {
                // we need to count it on this facet
                struct facet_entry *x = dictionary_get(d->facets[i].dict, facets[i]);
                internal_fatal(!x, "facet is not found");
                if(x)
                    x->count++;
            }
        }
    }
}
|
||||
void contexts_v2_alert_transitions_to_json(BUFFER *wb, struct rrdcontext_to_json_v2_data *ctl, bool debug) {
|
||||
struct alert_transitions_callback_data data = {
|
||||
.wb = wb,
|
||||
.ctl = ctl,
|
||||
.debug = debug,
|
||||
.only_one_config = true,
|
||||
.max_items_to_return = ctl->request->alerts.last,
|
||||
.items_to_return = 0,
|
||||
.base = NULL,
|
||||
};
|
||||
|
||||
for(size_t i = 0; i < ATF_TOTAL_ENTRIES ;i++) {
|
||||
data.facets[i].dict = dictionary_create_advanced(DICT_OPTION_SINGLE_THREADED | DICT_OPTION_FIXED_SIZE | DICT_OPTION_DONT_OVERWRITE_VALUE, NULL, sizeof(struct facet_entry));
|
||||
if(ctl->request->alerts.facets[i])
|
||||
data.facets[i].pattern = simple_pattern_create(ctl->request->alerts.facets[i], ",|", SIMPLE_PATTERN_EXACT, false);
|
||||
}
|
||||
|
||||
sql_alert_transitions(
|
||||
ctl->nodes.dict,
|
||||
ctl->window.after,
|
||||
ctl->window.before,
|
||||
ctl->request->contexts,
|
||||
ctl->request->alerts.alert,
|
||||
ctl->request->alerts.transition,
|
||||
contexts_v2_alert_transition_callback,
|
||||
&data,
|
||||
debug);
|
||||
|
||||
buffer_json_member_add_array(wb, "facets");
|
||||
for (size_t i = 0; i < ATF_TOTAL_ENTRIES; i++) {
|
||||
buffer_json_add_array_item_object(wb);
|
||||
{
|
||||
buffer_json_member_add_string(wb, "id", alert_transition_facets[i].id);
|
||||
buffer_json_member_add_string(wb, "name", alert_transition_facets[i].name);
|
||||
buffer_json_member_add_uint64(wb, "order", alert_transition_facets[i].order);
|
||||
buffer_json_member_add_array(wb, "options");
|
||||
{
|
||||
struct facet_entry *x;
|
||||
dfe_start_read(data.facets[i].dict, x) {
|
||||
buffer_json_add_array_item_object(wb);
|
||||
{
|
||||
buffer_json_member_add_string(wb, "id", x_dfe.name);
|
||||
if (i == ATF_NODE) {
|
||||
RRDHOST *host = rrdhost_find_by_guid(x_dfe.name);
|
||||
if (host)
|
||||
buffer_json_member_add_string(wb, "name", rrdhost_hostname(host));
|
||||
else
|
||||
buffer_json_member_add_string(wb, "name", x_dfe.name);
|
||||
} else
|
||||
buffer_json_member_add_string(wb, "name", x_dfe.name);
|
||||
buffer_json_member_add_uint64(wb, "count", x->count);
|
||||
}
|
||||
buffer_json_object_close(wb);
|
||||
}
|
||||
dfe_done(x);
|
||||
}
|
||||
buffer_json_array_close(wb); // options
|
||||
}
|
||||
buffer_json_object_close(wb); // facet
|
||||
}
|
||||
buffer_json_array_close(wb); // facets
|
||||
|
||||
buffer_json_member_add_array(wb, "transitions");
|
||||
for(struct sql_alert_transition_fixed_size *t = data.base; t ; t = t->next) {
|
||||
buffer_json_add_array_item_object(wb);
|
||||
{
|
||||
RRDHOST *host = rrdhost_find_by_guid(t->machine_guid);
|
||||
|
||||
buffer_json_member_add_uint64(wb, "gi", t->global_id);
|
||||
buffer_json_member_add_uuid(wb, "transition_id", t->transition_id);
|
||||
buffer_json_member_add_uuid(wb, "config_hash_id", t->config_hash_id);
|
||||
buffer_json_member_add_string(wb, "machine_guid", t->machine_guid);
|
||||
|
||||
if(host) {
|
||||
buffer_json_member_add_string(wb, "hostname", rrdhost_hostname(host));
|
||||
|
||||
if(!uuid_is_null(host->node_id))
|
||||
buffer_json_member_add_uuid(wb, "node_id", host->node_id);
|
||||
}
|
||||
|
||||
buffer_json_member_add_string(wb, "alert", *t->alert_name ? t->alert_name : NULL);
|
||||
buffer_json_member_add_string(wb, "instance", *t->chart ? t->chart : NULL);
|
||||
buffer_json_member_add_string(wb, "instance_n", *t->chart_name ? t->chart_name : NULL);
|
||||
buffer_json_member_add_string(wb, "context", *t->chart_context ? t->chart_context : NULL);
|
||||
// buffer_json_member_add_string(wb, "family", *t->family ? t->family : NULL);
|
||||
buffer_json_member_add_string(wb, "component", *t->component ? t->component : NULL);
|
||||
buffer_json_member_add_string(wb, "classification", *t->classification ? t->classification : NULL);
|
||||
buffer_json_member_add_string(wb, "type", *t->type ? t->type : NULL);
|
||||
|
||||
buffer_json_member_add_time_t(wb, "when", t->when_key);
|
||||
buffer_json_member_add_string(wb, "info", *t->info ? t->info : "");
|
||||
buffer_json_member_add_string(wb, "summary", *t->summary ? t->summary : "");
|
||||
buffer_json_member_add_string(wb, "units", *t->units ? t->units : NULL);
|
||||
buffer_json_member_add_object(wb, "new");
|
||||
{
|
||||
buffer_json_member_add_string(wb, "status", rrdcalc_status2string(t->new_status));
|
||||
buffer_json_member_add_double(wb, "value", t->new_value);
|
||||
}
|
||||
buffer_json_object_close(wb); // new
|
||||
buffer_json_member_add_object(wb, "old");
|
||||
{
|
||||
buffer_json_member_add_string(wb, "status", rrdcalc_status2string(t->old_status));
|
||||
buffer_json_member_add_double(wb, "value", t->old_value);
|
||||
buffer_json_member_add_time_t(wb, "duration", t->duration);
|
||||
buffer_json_member_add_time_t(wb, "raised_duration", t->non_clear_duration);
|
||||
}
|
||||
buffer_json_object_close(wb); // old
|
||||
|
||||
buffer_json_member_add_object(wb, "notification");
|
||||
{
|
||||
buffer_json_member_add_time_t(wb, "when", t->exec_run_timestamp);
|
||||
buffer_json_member_add_time_t(wb, "delay", t->delay);
|
||||
buffer_json_member_add_time_t(wb, "delay_up_to_time", t->delay_up_to_timestamp);
|
||||
health_entry_flags_to_json_array(wb, "flags", t->flags);
|
||||
buffer_json_member_add_string(wb, "exec", *t->exec ? t->exec : string2str(localhost->health.health_default_exec));
|
||||
buffer_json_member_add_uint64(wb, "exec_code", t->exec_code);
|
||||
buffer_json_member_add_string(wb, "to", *t->recipient ? t->recipient : string2str(localhost->health.health_default_recipient));
|
||||
}
|
||||
buffer_json_object_close(wb); // notification
|
||||
}
|
||||
buffer_json_object_close(wb); // a transition
|
||||
}
|
||||
buffer_json_array_close(wb); // all transitions
|
||||
|
||||
if(ctl->options & CONTEXTS_OPTION_ALERTS_WITH_CONFIGURATIONS) {
|
||||
DICTIONARY *configs = dictionary_create(DICT_OPTION_SINGLE_THREADED | DICT_OPTION_DONT_OVERWRITE_VALUE);
|
||||
|
||||
for(struct sql_alert_transition_fixed_size *t = data.base; t ; t = t->next) {
|
||||
char guid[UUID_STR_LEN];
|
||||
uuid_unparse_lower(t->config_hash_id, guid);
|
||||
dictionary_set(configs, guid, NULL, 0);
|
||||
}
|
||||
|
||||
buffer_json_member_add_array(wb, "configurations");
|
||||
sql_get_alert_configuration(configs, contexts_v2_alert_config_to_json_from_sql_alert_config_data, &data, debug);
|
||||
buffer_json_array_close(wb);
|
||||
|
||||
dictionary_destroy(configs);
|
||||
}
|
||||
|
||||
while(data.base) {
|
||||
        struct sql_alert_transition_fixed_size *t = data.base;
        DOUBLE_LINKED_LIST_REMOVE_ITEM_UNSAFE(data.base, t, prev, next);
        contexts_v2_alert_transition_free(t);
    }

    for(size_t i = 0; i < ATF_TOTAL_ENTRIES ;i++) {
        dictionary_destroy(data.facets[i].dict);
        simple_pattern_free(data.facets[i].pattern);
    }

    buffer_json_member_add_object(wb, "items");
    {
        // all the items in the window, under the scope_nodes, ignoring the facets (filters)
        buffer_json_member_add_uint64(wb, "evaluated", data.items_evaluated);

        // all the items matching the query (if you didn't put anchor_gi and last, these are all the items you would get back)
        buffer_json_member_add_uint64(wb, "matched", data.items_matched);

        // the items included in this response
        buffer_json_member_add_uint64(wb, "returned", data.items_to_return);

        // same as the last=X parameter
        buffer_json_member_add_uint64(wb, "max_to_return", data.max_items_to_return);

        // items before the first returned; this should be 0 if anchor_gi is not set
        buffer_json_member_add_uint64(wb, "before", data.operations.skips_before);

        // items after the last returned; when this is zero there aren't any items after the current list
        buffer_json_member_add_uint64(wb, "after", data.operations.skips_after + data.operations.shifts);
    }
    buffer_json_object_close(wb); // items

    if(debug) {
        buffer_json_member_add_object(wb, "stats");
        {
            buffer_json_member_add_uint64(wb, "first", data.operations.first);
            buffer_json_member_add_uint64(wb, "prepend", data.operations.prepend);
            buffer_json_member_add_uint64(wb, "append", data.operations.append);
            buffer_json_member_add_uint64(wb, "backwards", data.operations.backwards);
            buffer_json_member_add_uint64(wb, "forwards", data.operations.forwards);
            buffer_json_member_add_uint64(wb, "shifts", data.operations.shifts);
            buffer_json_member_add_uint64(wb, "skips_before", data.operations.skips_before);
            buffer_json_member_add_uint64(wb, "skips_after", data.operations.skips_after);
        }
        buffer_json_object_close(wb);
    }
}
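The `items` accounting above (evaluated / matched / returned / before / after) follows a simple windowing invariant. A self-contained sketch of that arithmetic, with hypothetical helper names (not Netdata code):

```c
#include <stddef.h>

// Of `evaluated` items, `matched` pass the filters; the response returns a
// window of at most `max_to_return` matched items, after skipping `before`
// items ahead of the anchor; `after` is what remains past the window.
struct items_window {
    size_t evaluated, matched, returned;
    size_t max_to_return, before, after;
};

static struct items_window window_account(size_t evaluated, size_t matched,
                                          size_t skip_before, size_t max_to_return) {
    struct items_window w = {
        .evaluated = evaluated,
        .matched = matched,
        .max_to_return = max_to_return,
        .before = skip_before < matched ? skip_before : matched,
    };
    size_t remaining = matched - w.before;
    w.returned = remaining < max_to_return ? remaining : max_to_return;
    w.after = remaining - w.returned; // zero means there is nothing past this page
    return w;
}
```

With 100 items evaluated, 40 matched, an anchor skipping 10 and a page size of 20, the window returns 20 items and reports 10 more after it.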
src/database/contexts/api_v2_contexts_alerts.c (new file, 604 lines)
@@ -0,0 +1,604 @@
// SPDX-License-Identifier: GPL-3.0-or-later

#include "api_v2_contexts.h"

struct alert_counts {
    size_t critical;
    size_t warning;
    size_t clear;
    size_t error;
};

struct alert_v2_entry {
    RRDCALC *tmp;

    STRING *name;
    STRING *summary;
    RRDLABELS *recipient;
    RRDLABELS *classification;
    RRDLABELS *context;
    RRDLABELS *component;
    RRDLABELS *type;

    size_t ati;

    struct alert_counts counts;

    size_t instances;
    DICTIONARY *nodes;
    DICTIONARY *configs;
};

struct alert_by_x_entry {
    struct {
        struct alert_counts counts;
        size_t silent;
        size_t total;
    } running;

    struct {
        size_t available;
    } prototypes;
};

bool rrdcontext_matches_alert(struct rrdcontext_to_json_v2_data *ctl, RRDCONTEXT *rc) {
    size_t matches = 0;
    RRDINSTANCE *ri;
    dfe_start_read(rc->rrdinstances, ri) {
        if(ri->rrdset) {
            RRDSET *st = ri->rrdset;
            rw_spinlock_read_lock(&st->alerts.spinlock);
            for (RRDCALC *rcl = st->alerts.base; rcl; rcl = rcl->next) {
                if(ctl->alerts.alert_name_pattern && !simple_pattern_matches_string(ctl->alerts.alert_name_pattern, rcl->config.name))
                    continue;

                if(ctl->alerts.alarm_id_filter && ctl->alerts.alarm_id_filter != rcl->id)
                    continue;

                size_t m = ctl->request->alerts.status & CONTEXTS_ALERT_STATUSES ? 0 : 1;

                if (!m) {
                    if ((ctl->request->alerts.status & CONTEXT_ALERT_UNINITIALIZED) &&
                        rcl->status == RRDCALC_STATUS_UNINITIALIZED)
                        m++;

                    if ((ctl->request->alerts.status & CONTEXT_ALERT_UNDEFINED) &&
                        rcl->status == RRDCALC_STATUS_UNDEFINED)
                        m++;

                    if ((ctl->request->alerts.status & CONTEXT_ALERT_CLEAR) &&
                        rcl->status == RRDCALC_STATUS_CLEAR)
                        m++;

                    if ((ctl->request->alerts.status & CONTEXT_ALERT_RAISED) &&
                        rcl->status >= RRDCALC_STATUS_RAISED)
                        m++;

                    if ((ctl->request->alerts.status & CONTEXT_ALERT_WARNING) &&
                        rcl->status == RRDCALC_STATUS_WARNING)
                        m++;

                    if ((ctl->request->alerts.status & CONTEXT_ALERT_CRITICAL) &&
                        rcl->status == RRDCALC_STATUS_CRITICAL)
                        m++;

                    if(!m)
                        continue;
                }

                struct alert_v2_entry t = {
                    .tmp = rcl,
                };
                struct alert_v2_entry *a2e =
                    dictionary_set(ctl->alerts.summary, string2str(rcl->config.name),
                                   &t, sizeof(struct alert_v2_entry));
                size_t ati = a2e->ati;
                matches++;

                dictionary_set_advanced(ctl->alerts.by_type,
                                        string2str(rcl->config.type),
                                        (ssize_t)string_strlen(rcl->config.type),
                                        NULL,
                                        sizeof(struct alert_by_x_entry),
                                        rcl);

                dictionary_set_advanced(ctl->alerts.by_component,
                                        string2str(rcl->config.component),
                                        (ssize_t)string_strlen(rcl->config.component),
                                        NULL,
                                        sizeof(struct alert_by_x_entry),
                                        rcl);

                dictionary_set_advanced(ctl->alerts.by_classification,
                                        string2str(rcl->config.classification),
                                        (ssize_t)string_strlen(rcl->config.classification),
                                        NULL,
                                        sizeof(struct alert_by_x_entry),
                                        rcl);

                dictionary_set_advanced(ctl->alerts.by_recipient,
                                        string2str(rcl->config.recipient),
                                        (ssize_t)string_strlen(rcl->config.recipient),
                                        NULL,
                                        sizeof(struct alert_by_x_entry),
                                        rcl);

                char *module = NULL;
                rrdlabels_get_value_strdup_or_null(st->rrdlabels, &module, "_collect_module");
                if(!module || !*module) module = "[unset]";

                dictionary_set_advanced(ctl->alerts.by_module,
                                        module,
                                        -1,
                                        NULL,
                                        sizeof(struct alert_by_x_entry),
                                        rcl);

                if (ctl->options & (CONTEXTS_OPTION_ALERTS_WITH_INSTANCES | CONTEXTS_OPTION_ALERTS_WITH_VALUES)) {
                    char key[20 + 1];
                    snprintfz(key, sizeof(key) - 1, "%p", rcl);

                    struct sql_alert_instance_v2_entry z = {
                        .ati = ati,
                        .tmp = rcl,
                    };
                    dictionary_set(ctl->alerts.alert_instances, key, &z, sizeof(z));
                }
            }
            rw_spinlock_read_unlock(&st->alerts.spinlock);
        }
    }
    dfe_done(ri);

    return matches != 0;
}

static void alert_counts_add(struct alert_counts *t, RRDCALC *rc) {
    switch(rc->status) {
        case RRDCALC_STATUS_CRITICAL:
            t->critical++;
            break;

        case RRDCALC_STATUS_WARNING:
            t->warning++;
            break;

        case RRDCALC_STATUS_CLEAR:
            t->clear++;
            break;

        case RRDCALC_STATUS_REMOVED:
        case RRDCALC_STATUS_UNINITIALIZED:
            break;

        case RRDCALC_STATUS_UNDEFINED:
        default:
            if(!netdata_double_isnumber(rc->value))
                t->error++;

            break;
    }
}
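The status bucketing in `alert_counts_add()` can be exercised in isolation; a minimal stand-in (a toy enum replacing the `RRDCALC_STATUS_*` values, and `isnan()` standing in for `netdata_double_isnumber()`):

```c
#include <stddef.h>
#include <math.h>

// Stand-in status enum mirroring the RRDCALC_STATUS_* values used above
typedef enum {
    ST_REMOVED, ST_UNINITIALIZED, ST_UNDEFINED, ST_CLEAR, ST_WARNING, ST_CRITICAL
} status_t;

struct counts { size_t critical, warning, clear, error; };

// Same branching as alert_counts_add(): REMOVED/UNINITIALIZED are ignored;
// UNDEFINED (or anything unknown) counts as an error only when the value
// is not a number
static void counts_add(struct counts *t, status_t st, double value) {
    switch(st) {
        case ST_CRITICAL: t->critical++; break;
        case ST_WARNING:  t->warning++;  break;
        case ST_CLEAR:    t->clear++;    break;
        case ST_REMOVED:
        case ST_UNINITIALIZED: break;
        case ST_UNDEFINED:
        default:
            if(isnan(value)) t->error++;
            break;
    }
}
```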
static void alerts_v2_add(struct alert_v2_entry *t, RRDCALC *rc) {
    t->instances++;

    alert_counts_add(&t->counts, rc);

    dictionary_set(t->nodes, rc->rrdset->rrdhost->machine_guid, NULL, 0);

    char key[UUID_STR_LEN + 1];
    uuid_unparse_lower(rc->config.hash_id, key);
    dictionary_set(t->configs, key, NULL, 0);
}

static void alerts_by_x_insert_callback(const DICTIONARY_ITEM *item __maybe_unused, void *value, void *data) {
    static STRING *silent = NULL;
    if(unlikely(!silent)) silent = string_strdupz("silent");

    struct alert_by_x_entry *b = value;
    RRDCALC *rc = data;
    if(!rc) {
        // prototype
        b->prototypes.available++;
    }
    else {
        alert_counts_add(&b->running.counts, rc);

        b->running.total++;

        if (rc->config.recipient == silent)
            b->running.silent++;
    }
}

static bool alerts_by_x_conflict_callback(const DICTIONARY_ITEM *item __maybe_unused, void *old_value, void *new_value __maybe_unused, void *data __maybe_unused) {
    alerts_by_x_insert_callback(item, old_value, data);
    return false;
}

static void alerts_v2_insert_callback(const DICTIONARY_ITEM *item __maybe_unused, void *value, void *data) {
    struct rrdcontext_to_json_v2_data *ctl = data;
    struct alert_v2_entry *t = value;
    RRDCALC *rc = t->tmp;
    t->name = rc->config.name;
    t->summary = rc->config.summary; // the original summary
    t->context = rrdlabels_create();
    t->recipient = rrdlabels_create();
    t->classification = rrdlabels_create();
    t->component = rrdlabels_create();
    t->type = rrdlabels_create();
    if (string_strlen(rc->rrdset->context))
        rrdlabels_add(t->context, string2str(rc->rrdset->context), "yes", RRDLABEL_SRC_AUTO);
    if (string_strlen(rc->config.recipient))
        rrdlabels_add(t->recipient, string2str(rc->config.recipient), "yes", RRDLABEL_SRC_AUTO);
    if (string_strlen(rc->config.classification))
        rrdlabels_add(t->classification, string2str(rc->config.classification), "yes", RRDLABEL_SRC_AUTO);
    if (string_strlen(rc->config.component))
        rrdlabels_add(t->component, string2str(rc->config.component), "yes", RRDLABEL_SRC_AUTO);
    if (string_strlen(rc->config.type))
        rrdlabels_add(t->type, string2str(rc->config.type), "yes", RRDLABEL_SRC_AUTO);
    t->ati = ctl->alerts.ati++;

    t->nodes = dictionary_create(DICT_OPTION_SINGLE_THREADED|DICT_OPTION_VALUE_LINK_DONT_CLONE|DICT_OPTION_NAME_LINK_DONT_CLONE);
    t->configs = dictionary_create(DICT_OPTION_SINGLE_THREADED|DICT_OPTION_VALUE_LINK_DONT_CLONE|DICT_OPTION_NAME_LINK_DONT_CLONE);

    alerts_v2_add(t, rc);
}

static bool alerts_v2_conflict_callback(const DICTIONARY_ITEM *item __maybe_unused, void *old_value, void *new_value, void *data __maybe_unused) {
    struct alert_v2_entry *t = old_value, *n = new_value;
    RRDCALC *rc = n->tmp;
    if (string_strlen(rc->rrdset->context))
        rrdlabels_add(t->context, string2str(rc->rrdset->context), "yes", RRDLABEL_SRC_AUTO);
    if (string_strlen(rc->config.recipient))
        rrdlabels_add(t->recipient, string2str(rc->config.recipient), "yes", RRDLABEL_SRC_AUTO);
    if (string_strlen(rc->config.classification))
        rrdlabels_add(t->classification, string2str(rc->config.classification), "yes", RRDLABEL_SRC_AUTO);
    if (string_strlen(rc->config.component))
        rrdlabels_add(t->component, string2str(rc->config.component), "yes", RRDLABEL_SRC_AUTO);
    if (string_strlen(rc->config.type))
        rrdlabels_add(t->type, string2str(rc->config.type), "yes", RRDLABEL_SRC_AUTO);
    alerts_v2_add(t, rc);
    return true;
}

static void alerts_v2_delete_callback(const DICTIONARY_ITEM *item __maybe_unused, void *value, void *data __maybe_unused) {
    struct alert_v2_entry *t = value;

    rrdlabels_destroy(t->context);
    rrdlabels_destroy(t->recipient);
    rrdlabels_destroy(t->classification);
    rrdlabels_destroy(t->component);
    rrdlabels_destroy(t->type);

    dictionary_destroy(t->nodes);
    dictionary_destroy(t->configs);
}

struct alert_instances_callback_data {
    BUFFER *wb;
    struct rrdcontext_to_json_v2_data *ctl;
    bool debug;
};

static int contexts_v2_alert_instance_to_json_callback(const DICTIONARY_ITEM *item __maybe_unused, void *value, void *data) {
    struct sql_alert_instance_v2_entry *t = value;
    struct alert_instances_callback_data *d = data;
    struct rrdcontext_to_json_v2_data *ctl = d->ctl; (void)ctl;
    bool debug = d->debug; (void)debug;
    BUFFER *wb = d->wb;

    buffer_json_add_array_item_object(wb);
    {
        buffer_json_member_add_uint64(wb, "ni", t->ni);

        buffer_json_member_add_string(wb, "nm", string2str(t->name));
        buffer_json_member_add_string(wb, "ch", string2str(t->chart_id));
        buffer_json_member_add_string(wb, "ch_n", string2str(t->chart_name));

        if(ctl->request->options & CONTEXTS_OPTION_ALERTS_WITH_SUMMARY)
            buffer_json_member_add_uint64(wb, "ati", t->ati);

        if(ctl->request->options & CONTEXTS_OPTION_ALERTS_WITH_INSTANCES) {
            buffer_json_member_add_string(wb, "units", string2str(t->units));
            buffer_json_member_add_string(wb, "fami", string2str(t->family));
            buffer_json_member_add_string(wb, "info", string2str(t->info));
            buffer_json_member_add_string(wb, "sum", string2str(t->summary));
            buffer_json_member_add_string(wb, "ctx", string2str(t->context));
            buffer_json_member_add_string(wb, "st", rrdcalc_status2string(t->status));
            buffer_json_member_add_uuid(wb, "tr_i", t->last_transition_id);
            buffer_json_member_add_double(wb, "tr_v", t->last_status_change_value);
            buffer_json_member_add_time_t(wb, "tr_t", t->last_status_change);
            buffer_json_member_add_uuid(wb, "cfg", t->config_hash_id);
            buffer_json_member_add_string(wb, "src", string2str(t->source));

            buffer_json_member_add_string(wb, "to", string2str(t->recipient));
            buffer_json_member_add_string(wb, "tp", string2str(t->type));
            buffer_json_member_add_string(wb, "cm", string2str(t->component));
            buffer_json_member_add_string(wb, "cl", string2str(t->classification));

            // Agent specific fields
            buffer_json_member_add_uint64(wb, "gi", t->global_id);
            // rrdcalc_flags_to_json_array(wb, "flags", t->flags);
        }

        if(ctl->request->options & CONTEXTS_OPTION_ALERTS_WITH_VALUES) {
            // Netdata Cloud fetched these by querying the agents
            buffer_json_member_add_double(wb, "v", t->value);
            buffer_json_member_add_time_t(wb, "t", t->last_updated);
        }
    }
    buffer_json_object_close(wb); // alert instance

    return 1;
}

static void contexts_v2_alerts_by_x_update_prototypes(void *data, STRING *type, STRING *component, STRING *classification, STRING *recipient) {
    struct rrdcontext_to_json_v2_data *ctl = data;

    dictionary_set_advanced(ctl->alerts.by_type, string2str(type), (ssize_t)string_strlen(type), NULL, sizeof(struct alert_by_x_entry), NULL);
    dictionary_set_advanced(ctl->alerts.by_component, string2str(component), (ssize_t)string_strlen(component), NULL, sizeof(struct alert_by_x_entry), NULL);
    dictionary_set_advanced(ctl->alerts.by_classification, string2str(classification), (ssize_t)string_strlen(classification), NULL, sizeof(struct alert_by_x_entry), NULL);
    dictionary_set_advanced(ctl->alerts.by_recipient, string2str(recipient), (ssize_t)string_strlen(recipient), NULL, sizeof(struct alert_by_x_entry), NULL);
}

static void contexts_v2_alerts_by_x_to_json(BUFFER *wb, DICTIONARY *dict, const char *key) {
    buffer_json_member_add_array(wb, key);
    {
        struct alert_by_x_entry *b;
        dfe_start_read(dict, b) {
            buffer_json_add_array_item_object(wb);
            {
                buffer_json_member_add_string(wb, "name", b_dfe.name);
                buffer_json_member_add_uint64(wb, "cr", b->running.counts.critical);
                buffer_json_member_add_uint64(wb, "wr", b->running.counts.warning);
                buffer_json_member_add_uint64(wb, "cl", b->running.counts.clear);
                buffer_json_member_add_uint64(wb, "er", b->running.counts.error);
                buffer_json_member_add_uint64(wb, "running", b->running.total);

                buffer_json_member_add_uint64(wb, "running_silent", b->running.silent);

                if(b->prototypes.available)
                    buffer_json_member_add_uint64(wb, "available", b->prototypes.available);
            }
            buffer_json_object_close(wb);
        }
        dfe_done(b);
    }
    buffer_json_array_close(wb);
}

static void contexts_v2_alert_instances_to_json(BUFFER *wb, const char *key, struct rrdcontext_to_json_v2_data *ctl, bool debug) {
    buffer_json_member_add_array(wb, key);
    {
        struct alert_instances_callback_data data = {
            .wb = wb,
            .ctl = ctl,
            .debug = debug,
        };
        dictionary_walkthrough_rw(ctl->alerts.alert_instances, DICTIONARY_LOCK_READ,
                                  contexts_v2_alert_instance_to_json_callback, &data);
    }
    buffer_json_array_close(wb); // alert_instances
}

void contexts_v2_alerts_to_json(BUFFER *wb, struct rrdcontext_to_json_v2_data *ctl, bool debug) {
    if(ctl->request->options & CONTEXTS_OPTION_ALERTS_WITH_SUMMARY) {
        buffer_json_member_add_array(wb, "alerts");
        {
            struct alert_v2_entry *t;
            dfe_start_read(ctl->alerts.summary, t)
            {
                buffer_json_add_array_item_object(wb);
                {
                    buffer_json_member_add_uint64(wb, "ati", t->ati);

                    buffer_json_member_add_array(wb, "ni");
                    void *host_guid;
                    dfe_start_read(t->nodes, host_guid) {
                        struct contexts_v2_node *cn = dictionary_get(ctl->nodes.dict, host_guid_dfe.name);
                        buffer_json_add_array_item_int64(wb, (int64_t) cn->ni);
                    }
                    dfe_done(host_guid);
                    buffer_json_array_close(wb);

                    buffer_json_member_add_string(wb, "nm", string2str(t->name));
                    buffer_json_member_add_string(wb, "sum", string2str(t->summary));

                    buffer_json_member_add_uint64(wb, "cr", t->counts.critical);
                    buffer_json_member_add_uint64(wb, "wr", t->counts.warning);
                    buffer_json_member_add_uint64(wb, "cl", t->counts.clear);
                    buffer_json_member_add_uint64(wb, "er", t->counts.error);

                    buffer_json_member_add_uint64(wb, "in", t->instances);
                    buffer_json_member_add_uint64(wb, "nd", dictionary_entries(t->nodes));
                    buffer_json_member_add_uint64(wb, "cfg", dictionary_entries(t->configs));

                    buffer_json_member_add_array(wb, "ctx");
                    rrdlabels_key_to_buffer_array_item(t->context, wb);
                    buffer_json_array_close(wb); // ctx

                    buffer_json_member_add_array(wb, "cls");
                    rrdlabels_key_to_buffer_array_item(t->classification, wb);
                    buffer_json_array_close(wb); // classification

                    buffer_json_member_add_array(wb, "cp");
                    rrdlabels_key_to_buffer_array_item(t->component, wb);
                    buffer_json_array_close(wb); // component

                    buffer_json_member_add_array(wb, "ty");
                    rrdlabels_key_to_buffer_array_item(t->type, wb);
                    buffer_json_array_close(wb); // type

                    buffer_json_member_add_array(wb, "to");
                    rrdlabels_key_to_buffer_array_item(t->recipient, wb);
                    buffer_json_array_close(wb); // recipient
                }
                buffer_json_object_close(wb); // alert name
            }
            dfe_done(t);
        }
        buffer_json_array_close(wb); // alerts

        health_prototype_metadata_foreach(ctl, contexts_v2_alerts_by_x_update_prototypes);
        contexts_v2_alerts_by_x_to_json(wb, ctl->alerts.by_type, "alerts_by_type");
        contexts_v2_alerts_by_x_to_json(wb, ctl->alerts.by_component, "alerts_by_component");
        contexts_v2_alerts_by_x_to_json(wb, ctl->alerts.by_classification, "alerts_by_classification");
        contexts_v2_alerts_by_x_to_json(wb, ctl->alerts.by_recipient, "alerts_by_recipient");
        contexts_v2_alerts_by_x_to_json(wb, ctl->alerts.by_module, "alerts_by_module");
    }

    if(ctl->request->options & (CONTEXTS_OPTION_ALERTS_WITH_INSTANCES | CONTEXTS_OPTION_ALERTS_WITH_VALUES)) {
        contexts_v2_alert_instances_to_json(wb, "alert_instances", ctl, debug);
    }
}
static void alert_instances_v2_insert_callback(const DICTIONARY_ITEM *item __maybe_unused, void *value, void *data) {
    struct rrdcontext_to_json_v2_data *ctl = data;
    struct sql_alert_instance_v2_entry *t = value;
    RRDCALC *rc = t->tmp;

    t->context = rc->rrdset->context;
    t->chart_id = rc->rrdset->id;
    t->chart_name = rc->rrdset->name;
    t->family = rc->rrdset->family;
    t->units = rc->config.units;
    t->classification = rc->config.classification;
    t->type = rc->config.type;
    t->recipient = rc->config.recipient;
    t->component = rc->config.component;
    t->name = rc->config.name;
    t->source = rc->config.source;
    t->status = rc->status;
    t->flags = rc->run_flags;
    t->info = rc->config.info;
    t->summary = rc->summary;
    t->value = rc->value;
    t->last_updated = rc->last_updated;
    t->last_status_change = rc->last_status_change;
    t->last_status_change_value = rc->last_status_change_value;
    t->host = rc->rrdset->rrdhost;
    t->alarm_id = rc->id;
    t->ni = ctl->nodes.ni;

    uuid_copy(t->config_hash_id, rc->config.hash_id);
    health_alarm_log_get_global_id_and_transition_id_for_rrdcalc(rc, &t->global_id, &t->last_transition_id);
}

static bool alert_instances_v2_conflict_callback(const DICTIONARY_ITEM *item __maybe_unused, void *old_value __maybe_unused, void *new_value __maybe_unused, void *data __maybe_unused) {
    internal_fatal(true, "This should never happen!");
    return true;
}

static void alert_instances_delete_callback(const DICTIONARY_ITEM *item __maybe_unused, void *value __maybe_unused, void *data __maybe_unused) {
    ;
}

static void rrdcontext_v2_set_transition_filter(const char *machine_guid, const char *context, time_t alarm_id, void *data) {
    struct rrdcontext_to_json_v2_data *ctl = data;

    if(machine_guid && *machine_guid) {
        if(ctl->nodes.scope_pattern)
            simple_pattern_free(ctl->nodes.scope_pattern);

        if(ctl->nodes.pattern)
            simple_pattern_free(ctl->nodes.pattern);

        ctl->nodes.scope_pattern = string_to_simple_pattern(machine_guid);
        ctl->nodes.pattern = NULL;
    }

    if(context && *context) {
        if(ctl->contexts.scope_pattern)
            simple_pattern_free(ctl->contexts.scope_pattern);

        if(ctl->contexts.pattern)
            simple_pattern_free(ctl->contexts.pattern);

        ctl->contexts.scope_pattern = string_to_simple_pattern(context);
        ctl->contexts.pattern = NULL;
    }

    ctl->alerts.alarm_id_filter = alarm_id;
}

bool rrdcontexts_v2_init_alert_dictionaries(struct rrdcontext_to_json_v2_data *ctl, struct api_v2_contexts_request *req) {
    if(req->alerts.transition) {
        ctl->options |= CONTEXTS_OPTION_ALERTS_WITH_INSTANCES | CONTEXTS_OPTION_ALERTS_WITH_VALUES;
        if(!sql_find_alert_transition(req->alerts.transition, rrdcontext_v2_set_transition_filter, ctl))
            return false;
    }

    ctl->alerts.summary = dictionary_create_advanced(
        DICT_OPTION_SINGLE_THREADED | DICT_OPTION_DONT_OVERWRITE_VALUE | DICT_OPTION_FIXED_SIZE,
        NULL,
        sizeof(struct alert_v2_entry));

    dictionary_register_insert_callback(ctl->alerts.summary, alerts_v2_insert_callback, ctl);
    dictionary_register_conflict_callback(ctl->alerts.summary, alerts_v2_conflict_callback, ctl);
    dictionary_register_delete_callback(ctl->alerts.summary, alerts_v2_delete_callback, ctl);

    ctl->alerts.by_type = dictionary_create_advanced(
        DICT_OPTION_SINGLE_THREADED | DICT_OPTION_DONT_OVERWRITE_VALUE | DICT_OPTION_FIXED_SIZE,
        NULL,
        sizeof(struct alert_by_x_entry));

    dictionary_register_insert_callback(ctl->alerts.by_type, alerts_by_x_insert_callback, NULL);
    dictionary_register_conflict_callback(ctl->alerts.by_type, alerts_by_x_conflict_callback, NULL);

    ctl->alerts.by_component = dictionary_create_advanced(
        DICT_OPTION_SINGLE_THREADED | DICT_OPTION_DONT_OVERWRITE_VALUE | DICT_OPTION_FIXED_SIZE,
        NULL,
        sizeof(struct alert_by_x_entry));

    dictionary_register_insert_callback(ctl->alerts.by_component, alerts_by_x_insert_callback, NULL);
    dictionary_register_conflict_callback(ctl->alerts.by_component, alerts_by_x_conflict_callback, NULL);

    ctl->alerts.by_classification = dictionary_create_advanced(
        DICT_OPTION_SINGLE_THREADED | DICT_OPTION_DONT_OVERWRITE_VALUE | DICT_OPTION_FIXED_SIZE,
        NULL,
        sizeof(struct alert_by_x_entry));

    dictionary_register_insert_callback(ctl->alerts.by_classification, alerts_by_x_insert_callback, NULL);
    dictionary_register_conflict_callback(ctl->alerts.by_classification, alerts_by_x_conflict_callback, NULL);

    ctl->alerts.by_recipient = dictionary_create_advanced(
        DICT_OPTION_SINGLE_THREADED | DICT_OPTION_DONT_OVERWRITE_VALUE | DICT_OPTION_FIXED_SIZE,
        NULL,
        sizeof(struct alert_by_x_entry));

    dictionary_register_insert_callback(ctl->alerts.by_recipient, alerts_by_x_insert_callback, NULL);
    dictionary_register_conflict_callback(ctl->alerts.by_recipient, alerts_by_x_conflict_callback, NULL);

    ctl->alerts.by_module = dictionary_create_advanced(
        DICT_OPTION_SINGLE_THREADED | DICT_OPTION_DONT_OVERWRITE_VALUE | DICT_OPTION_FIXED_SIZE,
        NULL,
        sizeof(struct alert_by_x_entry));

    dictionary_register_insert_callback(ctl->alerts.by_module, alerts_by_x_insert_callback, NULL);
    dictionary_register_conflict_callback(ctl->alerts.by_module, alerts_by_x_conflict_callback, NULL);

    if(ctl->options & (CONTEXTS_OPTION_ALERTS_WITH_INSTANCES | CONTEXTS_OPTION_ALERTS_WITH_VALUES)) {
        ctl->alerts.alert_instances = dictionary_create_advanced(
            DICT_OPTION_SINGLE_THREADED | DICT_OPTION_DONT_OVERWRITE_VALUE | DICT_OPTION_FIXED_SIZE,
            NULL, sizeof(struct sql_alert_instance_v2_entry));

        dictionary_register_insert_callback(ctl->alerts.alert_instances, alert_instances_v2_insert_callback, ctl);
        dictionary_register_conflict_callback(ctl->alerts.alert_instances, alert_instances_v2_conflict_callback, ctl);
        dictionary_register_delete_callback(ctl->alerts.alert_instances, alert_instances_delete_callback, ctl);
    }

    return true;
}

void rrdcontexts_v2_alerts_cleanup(struct rrdcontext_to_json_v2_data *ctl) {
    dictionary_destroy(ctl->alerts.summary);
    dictionary_destroy(ctl->alerts.alert_instances);
    dictionary_destroy(ctl->alerts.by_type);
    dictionary_destroy(ctl->alerts.by_component);
    dictionary_destroy(ctl->alerts.by_classification);
    dictionary_destroy(ctl->alerts.by_recipient);
    dictionary_destroy(ctl->alerts.by_module);
}
src/database/contexts/api_v2_contexts_alerts.h (new file, 52 lines)
@@ -0,0 +1,52 @@
// SPDX-License-Identifier: GPL-3.0-or-later

#ifndef NETDATA_API_V2_CONTEXTS_ALERTS_H
#define NETDATA_API_V2_CONTEXTS_ALERTS_H

#include "internal.h"
#include "api_v2_contexts.h"

struct alert_transitions_callback_data {
    struct rrdcontext_to_json_v2_data *ctl;
    BUFFER *wb;
    bool debug;
    bool only_one_config;

    struct {
        SIMPLE_PATTERN *pattern;
        DICTIONARY *dict;
    } facets[ATF_TOTAL_ENTRIES];

    uint32_t max_items_to_return;
    uint32_t items_to_return;

    uint32_t items_evaluated;
    uint32_t items_matched;

    struct sql_alert_transition_fixed_size *base; // double linked list - last item is base->prev
    struct sql_alert_transition_fixed_size *last_added; // the last item added, not the last of the list

    struct {
        size_t first;
        size_t skips_before;
        size_t skips_after;
        size_t backwards;
        size_t forwards;
        size_t prepend;
        size_t append;
        size_t shifts;
    } operations;

    uint32_t configs_added;
};

void contexts_v2_alerts_to_json(BUFFER *wb, struct rrdcontext_to_json_v2_data *ctl, bool debug);
bool rrdcontext_matches_alert(struct rrdcontext_to_json_v2_data *ctl, RRDCONTEXT *rc);
void contexts_v2_alert_config_to_json_from_sql_alert_config_data(struct sql_alert_config_data *t, void *data);
void contexts_v2_alert_transitions_to_json(BUFFER *wb, struct rrdcontext_to_json_v2_data *ctl, bool debug);

bool rrdcontexts_v2_init_alert_dictionaries(struct rrdcontext_to_json_v2_data *ctl, struct api_v2_contexts_request *req);
void rrdcontexts_v2_alerts_cleanup(struct rrdcontext_to_json_v2_data *ctl);

#endif //NETDATA_API_V2_CONTEXTS_ALERTS_H
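The header notes that for the `base` list the "last item is base->prev": the head's `prev` pointer is kept pointing at the tail, so appending is O(1) without a separate tail pointer. A minimal stand-in of that idiom (a hypothetical node type, not the Netdata `DOUBLE_LINKED_LIST_*` macros):

```c
#include <stddef.h>

struct node { int v; struct node *prev, *next; };

// Append keeping the invariant: base->prev always points at the tail,
// while the tail's next stays NULL so forward traversal terminates.
static void list_append(struct node **base, struct node *n) {
    if(!*base) {
        *base = n;
        n->prev = n;          // single element: it is its own tail
        n->next = NULL;
    }
    else {
        n->prev = (*base)->prev;
        n->next = NULL;
        (*base)->prev->next = n;  // old tail links forward to n
        (*base)->prev = n;        // head's prev now points at the new tail
    }
}
```

This is why the transitions code can walk forward from `base` and still reach the last element directly as `base->prev`.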
@@ -18,8 +18,8 @@ ssize_t query_scope_foreach_host(SIMPLE_PATTERN *scope_hosts_sp, SIMPLE_PATTERN
     uint64_t t_hash = 0;
 
     dfe_start_read(rrdhost_root_index, host) {
-        if(host->node_id)
-            uuid_unparse_lower(*host->node_id, host_node_id_str);
+        if(!uuid_is_null(host->node_id))
+            uuid_unparse_lower(host->node_id, host_node_id_str);
         else
             host_node_id_str[0] = '\0';
 
@@ -897,9 +897,9 @@ static ssize_t query_node_add(void *data, RRDHOST *host, bool queryable_host) {
     QUERY_TARGET *qt = qtl->qt;
     QUERY_NODE *qn = query_node_allocate(qt, host);
 
-    if(host->node_id) {
+    if(!uuid_is_null(host->node_id)) {
         if(!qtl->host_node_id_str[0])
-            uuid_unparse_lower(*host->node_id, qn->node_id);
+            uuid_unparse_lower(host->node_id, qn->node_id);
         else
             memcpy(qn->node_id, qtl->host_node_id_str, sizeof(qn->node_id));
     }
@@ -958,7 +958,7 @@ static ssize_t query_node_add(void *data, RRDHOST *host, bool queryable_host) {
 
 void query_target_generate_name(QUERY_TARGET *qt) {
     char options_buffer[100 + 1];
-    web_client_api_request_v1_data_options_to_string(options_buffer, 100, qt->request.options);
+    web_client_api_request_data_vX_options_to_string(options_buffer, 100, qt->request.options);
 
     char resampling_buffer[20 + 1] = "";
     if(qt->request.resampling_time > 1)
@@ -1120,8 +1120,8 @@ QUERY_TARGET *query_target_create(QUERY_TARGET_REQUEST *qtr) {
     }
 
     if(host) {
-        if(host->node_id)
-            uuid_unparse_lower(*host->node_id, qtl.host_node_id_str);
+        if(!uuid_is_null(host->node_id))
+            uuid_unparse_lower(host->node_id, qtl.host_node_id_str);
         else
             qtl.host_node_id_str[0] = '\0';
 
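The hunks above migrate `host->node_id` from a heap-allocated pointer to an embedded UUID value, so "no node id" becomes the all-zero UUID tested with `uuid_is_null()` instead of a NULL-pointer check. A self-contained sketch of that pattern (a stand-in 16-byte type instead of libuuid):

```c
#include <stdio.h>
#include <string.h>

typedef unsigned char my_uuid_t[16];

// With the id stored by value, absence is the all-zero UUID rather than NULL
static int my_uuid_is_null(const my_uuid_t u) {
    static const my_uuid_t zero = {0};
    return memcmp(u, zero, sizeof(my_uuid_t)) == 0;
}

// Mirrors the "unparse or empty string" branching in the hunks above
static void node_id_str(const my_uuid_t id, char out[37]) {
    if(my_uuid_is_null(id)) { out[0] = '\0'; return; }
    snprintf(out, 37,
             "%02x%02x%02x%02x-%02x%02x-%02x%02x-%02x%02x-%02x%02x%02x%02x%02x%02x",
             id[0], id[1], id[2],  id[3],  id[4],  id[5],  id[6],  id[7],
             id[8], id[9], id[10], id[11], id[12], id[13], id[14], id[15]);
}
```

This removes an allocation and a level of indirection per host, at the cost of treating the zero UUID as reserved.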
@@ -198,21 +198,16 @@ int rrdcontext_foreach_instance_with_rrdset_in_context(RRDHOST *host, const char
 // ----------------------------------------------------------------------------
 // ACLK interface
 
-static bool rrdhost_check_our_claim_id(const char *claim_id) {
-    if(!localhost->aclk_state.claimed_id) return false;
-    return (strcasecmp(claim_id, localhost->aclk_state.claimed_id) == 0) ? true : false;
-}
-
 void rrdcontext_hub_checkpoint_command(void *ptr) {
     struct ctxs_checkpoint *cmd = ptr;
 
-    if(!rrdhost_check_our_claim_id(cmd->claim_id)) {
+    if(!claim_id_matches(cmd->claim_id)) {
+        CLAIM_ID claim_id = claim_id_get();
         nd_log(NDLS_DAEMON, NDLP_WARNING,
                "RRDCONTEXT: received checkpoint command for claim_id '%s', node id '%s', "
                "but this is not our claim id. Ours '%s', received '%s'. Ignoring command.",
                cmd->claim_id, cmd->node_id,
-               localhost->aclk_state.claimed_id?localhost->aclk_state.claimed_id:"NOT SET",
-               cmd->claim_id);
+               claim_id.str, cmd->claim_id);
 
         return;
     }
@@ -245,10 +240,9 @@ void rrdcontext_hub_checkpoint_command(void *ptr) {
            "Sending snapshot of all contexts.",
            cmd->version_hash, rrdhost_hostname(host), our_version_hash);
 
-#ifdef ENABLE_ACLK
         // prepare the snapshot
         char uuid[UUID_STR_LEN];
-        uuid_unparse_lower(*host->node_id, uuid);
+        uuid_unparse_lower(host->node_id, uuid);
         contexts_snapshot_t bundle = contexts_snapshot_new(cmd->claim_id, uuid, our_version_hash);
 
         // do a deep scan on every metric of the host to make sure all our data are updated
@@ -262,7 +256,6 @@ void rrdcontext_hub_checkpoint_command(void *ptr) {
 
         // send it
         aclk_send_contexts_snapshot(bundle);
-#endif
     }
 
     nd_log(NDLS_DAEMON, NDLP_DEBUG,
@@ -271,7 +264,7 @@ void rrdcontext_hub_checkpoint_command(void *ptr) {
 
     rrdhost_flag_set(host, RRDHOST_FLAG_ACLK_STREAM_CONTEXTS);
     char node_str[UUID_STR_LEN];
-    uuid_unparse_lower(*host->node_id, node_str);
+    uuid_unparse_lower(host->node_id, node_str);
     nd_log(NDLS_ACCESS, NDLP_DEBUG,
            "ACLK REQ [%s (%s)]: STREAM CONTEXTS ENABLED",
            node_str, rrdhost_hostname(host));
@@ -280,13 +273,13 @@ void rrdcontext_hub_checkpoint_command(void *ptr) {
 void rrdcontext_hub_stop_streaming_command(void *ptr) {
     struct stop_streaming_ctxs *cmd = ptr;
 
-    if(!rrdhost_check_our_claim_id(cmd->claim_id)) {
+    if(!claim_id_matches(cmd->claim_id)) {
+        CLAIM_ID claim_id = claim_id_get();
         nd_log(NDLS_DAEMON, NDLP_WARNING,
                "RRDCONTEXT: received stop streaming command for claim_id '%s', node id '%s', "
                "but this is not our claim id. Ours '%s', received '%s'. Ignoring command.",
                cmd->claim_id, cmd->node_id,
-               localhost->aclk_state.claimed_id?localhost->aclk_state.claimed_id:"NOT SET",
-               cmd->claim_id);
+               claim_id.str, cmd->claim_id);
 
         return;
     }
@@ -623,10 +623,10 @@ struct api_v2_contexts_request {
     char *contexts;
     char *q;
 
-    CONTEXTS_V2_OPTIONS options;
+    CONTEXTS_OPTIONS options;
 
     struct {
-        CONTEXTS_V2_ALERT_STATUS status;
+        CONTEXTS_ALERT_STATUS status;
         char *alert;
         char *transition;
         uint32_t last;
 
@@ -818,7 +818,6 @@ void rrdcontext_message_send_unsafe(RRDCONTEXT *rc, bool snapshot __maybe_unused
     rc->hub.last_time_s = rrd_flag_is_collected(rc) ? 0 : rc->last_time_s;
     rc->hub.deleted = rrd_flag_is_deleted(rc) ? true : false;
 
-#ifdef ENABLE_ACLK
     struct context_updated message = {
         .id = rc->hub.id,
         .version = rc->hub.version,
@@ -840,7 +839,6 @@ void rrdcontext_message_send_unsafe(RRDCONTEXT *rc, bool snapshot __maybe_unused
         else
             contexts_updated_add_ctx_update(bundle, &message);
     }
-#endif
 
     // store it to SQL
 
@@ -956,7 +954,7 @@ static void rrdcontext_dequeue_from_hub_queue(RRDCONTEXT *rc) {
 static void rrdcontext_dispatch_queued_contexts_to_hub(RRDHOST *host, usec_t now_ut) {
 
     // check if we have received a streaming command for this host
-    if(!host->node_id || !rrdhost_flag_check(host, RRDHOST_FLAG_ACLK_STREAM_CONTEXTS) || !aclk_connected || !host->rrdctx.hub_queue)
+    if(uuid_is_null(host->node_id) || !rrdhost_flag_check(host, RRDHOST_FLAG_ACLK_STREAM_CONTEXTS) || !aclk_online_for_contexts() || !host->rrdctx.hub_queue)
         return;
 
     // check if there are queued items to send
@@ -975,9 +973,9 @@ static void rrdcontext_dispatch_queued_contexts_to_hub(RRDHOST *host, usec_t now
 
         worker_is_busy(WORKER_JOB_QUEUED);
         usec_t dispatch_ut = rrdcontext_calculate_queued_dispatch_time_ut(rc, now_ut);
-        char *claim_id = get_agent_claimid();
+        CLAIM_ID claim_id = claim_id_get();
 
-        if(unlikely(now_ut >= dispatch_ut) && claim_id) {
+        if(unlikely(now_ut >= dispatch_ut) && claim_id_is_set(claim_id)) {
             worker_is_busy(WORKER_JOB_CHECK);
 
             rrdcontext_lock(rc);
@@ -985,15 +983,13 @@ static void rrdcontext_dispatch_queued_contexts_to_hub(RRDHOST *host, usec_t now
             if(check_if_cloud_version_changed_unsafe(rc, true)) {
                 worker_is_busy(WORKER_JOB_SEND);
 
-#ifdef ENABLE_ACLK
                 if(!bundle) {
                     // prepare the bundle to send the messages
                     char uuid[UUID_STR_LEN];
-                    uuid_unparse_lower(*host->node_id, uuid);
+                    uuid_unparse_lower(host->node_id, uuid);
 
-                    bundle = contexts_updated_new(claim_id, uuid, 0, now_ut);
+                    bundle = contexts_updated_new(claim_id.str, uuid, 0, now_ut);
                 }
-#endif
                 // update the hub data of the context, give a new version, pack the message
                 // and save an update to SQL
                 rrdcontext_message_send_unsafe(rc, false, bundle);
@@ -1030,11 +1026,9 @@ static void rrdcontext_dispatch_queued_contexts_to_hub(RRDHOST *host, usec_t now
             else
                 rrdcontext_unlock(rc);
         }
-        freez(claim_id);
     }
     dfe_done(rc);
 
-#ifdef ENABLE_ACLK
     if(service_running(SERVICE_CONTEXT) && bundle) {
         // we have a bundle to send messages
 
@@ -1046,7 +1040,6 @@ static void rrdcontext_dispatch_queued_contexts_to_hub(RRDHOST *host, usec_t now
     }
     else if(bundle)
         contexts_updated_delete(bundle);
-#endif
 }
 
@@ -1188,7 +1188,7 @@ struct rrdhost {
     struct rrdhost_system_info *system_info;    // information collected from the host environment
 
     // ------------------------------------------------------------------------
-    // streaming of data to remote hosts - rrdpush sender
+    // streaming of data to remote hosts - rrdpush
 
     struct {
         struct {
@@ -1204,6 +1204,10 @@ struct rrdhost {
 
             uint32_t last_used;                 // the last slot we used for a chart (increments only)
         } pluginsd_chart_slots;
+
+        char *destination;                      // where to send metrics to
+        char *api_key;                          // the api key at the receiving netdata
+        SIMPLE_PATTERN *charts_matching;        // pattern to match the charts to be sent
     } send;
 
     struct {
@@ -1215,11 +1219,8 @@ struct rrdhost {
         } receive;
     } rrdpush;
 
-    char *rrdpush_send_destination;             // where to send metrics to
-    char *rrdpush_send_api_key;                 // the api key at the receiving netdata
     struct rrdpush_destinations *destinations;  // a linked list of possible destinations
     struct rrdpush_destinations *destination;   // the current destination from the above list
-    SIMPLE_PATTERN *rrdpush_send_charts_matching; // pattern to match the charts to be sent
 
     int32_t rrdpush_last_receiver_exit_reason;
     time_t rrdpush_seconds_to_replicate;        // max time we want to replicate from the child
@@ -1247,7 +1248,7 @@ struct rrdhost {
     int connected_children_count;               // number of senders currently streaming
 
     struct receiver_state *receiver;
-    netdata_mutex_t receiver_lock;
+    SPINLOCK receiver_lock;
     int trigger_chart_obsoletion_check;         // set when child connects, will instruct parent to
                                                 // trigger a check for obsoleted charts since previous connect
 
@@ -1307,10 +1308,12 @@ struct rrdhost {
     } retention;
 
     nd_uuid_t host_uuid;                        // Global GUID for this host
-    nd_uuid_t *node_id;                         // Cloud node_id
+    nd_uuid_t node_id;                          // Cloud node_id
 
-    netdata_mutex_t aclk_state_lock;
-    aclk_rrdhost_state aclk_state;
+    struct {
+        ND_UUID claim_id_of_origin;
+        ND_UUID claim_id_of_parent;
+    } aclk;
 
     struct rrdhost *next;
     struct rrdhost *prev;
 
@@ -1325,9 +1328,6 @@ extern RRDHOST *localhost;
 #define rrdhost_program_name(host) string2str((host)->program_name)
 #define rrdhost_program_version(host) string2str((host)->program_version)
 
-#define rrdhost_aclk_state_lock(host) netdata_mutex_lock(&((host)->aclk_state_lock))
-#define rrdhost_aclk_state_unlock(host) netdata_mutex_unlock(&((host)->aclk_state_lock))
-
 #define rrdhost_receiver_replicating_charts(host) (__atomic_load_n(&((host)->rrdpush_receiver_replicating_charts), __ATOMIC_RELAXED))
 #define rrdhost_receiver_replicating_charts_plus_one(host) (__atomic_add_fetch(&((host)->rrdpush_receiver_replicating_charts), 1, __ATOMIC_RELAXED))
 #define rrdhost_receiver_replicating_charts_minus_one(host) (__atomic_sub_fetch(&((host)->rrdpush_receiver_replicating_charts), 1, __ATOMIC_RELAXED))
Some files were not shown because too many files have changed in this diff.