
Enable support for Netdata Cloud.

This PR merges the feature branch that makes Netdata Cloud live. It contains the following work:
Co-authored-by: Andrew Moss <1043609+amoss@users.noreply.github.com>
Co-authored-by: Jacek Kolasa <jacek.kolasa@gmail.com>
Co-authored-by: Austin S. Hemmelgarn <austin@netdata.cloud>
Co-authored-by: James Mills <prologic@shortcircuit.net.au>
Co-authored-by: Markos Fountoulakis <44345837+mfundul@users.noreply.github.com>
Co-authored-by: Timotej S <6674623+underhood@users.noreply.github.com>
Co-authored-by: Stelios Fragkakis <52996999+stelfrag@users.noreply.github.com>
* dashboard with new navbars, v1.0-alpha.9: PR 
* dashboard v1.0.11: 
Co-authored-by: Jacek Kolasa <jacek.kolasa@gmail.com>
* Added installer code to bundle JSON-c if it's not present. PR 
Co-authored-by: James Mills <prologic@shortcircuit.net.au>
* Fix claiming config PR 
* Adds JSON-c as hard dep. for ACLK PR 
* Fix SSL renegotiation errors on old versions of OpenSSL. PR . Also: we have a transient problem with the openSUSE CI, so this PR disables those jobs with a commit from @prologic.
Co-authored-by: James Mills <prologic@shortcircuit.net.au>
* Fix claiming error handling PR 
* Added CI to verify JSON-C bundling code in installer PR 
* Make cloud-enabled flag in web/api/v1/info be independent of ACLK build success PR 
* Reduce ACLK_STABLE_TIMEOUT from 10 to 3 seconds PR 
* Remove the old-cloud-related UI from the old dashboard (now accessible via the /old suffix) PR 
* dashboard v1.0.13 PR 
* dashboard v1.0.14 PR 
* Provide feedback on proxy setting changes PR 
* Change the name of the connect message to update during an ongoing session PR 
* Fetch active alarms from alarm_log PR 
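
For reviewers: the on-disk cloud state this branch introduces moves from the user configuration directory (`/etc/netdata/claim.d`) to the Netdata library directory (`/var/lib/netdata/cloud.d`), and claiming now writes a small `cloud.conf` there (see the claim/ and daemon/ diffs below). A minimal sketch of the resulting layout, assuming default install paths:

```sh
# Sketch only -- default paths assumed; adjust varlibdir for your install.
ls /var/lib/netdata/cloud.d
#   claimed_id  private.pem  cloud.conf  (optionally: token, rooms, cloud_fullchain.pem)

cat /var/lib/netdata/cloud.d/cloud.conf
#   [global]
#     enabled = yes
#     cloud base url = https://app.netdata.cloud
```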
Authored by Andrew Moss on 2020-05-11 08:34:29 +02:00, committed by James Mills
parent fd05e1d877
commit aa3ec552c8
38 changed files with 704 additions and 285 deletions

View file

@@ -7,5 +7,6 @@ ENV PRE=${PRE}
 COPY . /netdata
+RUN chmod +x /netdata/rmjsonc.sh
 RUN /bin/sh /netdata/prep-cmd.sh
 RUN /netdata/packaging/installer/install-required-packages.sh --dont-wait --non-interactive netdata-all

View file

@@ -9,6 +9,7 @@ jobs:
   build:
     name: Build & Install
     strategy:
+      fail-fast: false
       matrix:
         distro:
           - 'alpine:edge'
@@ -35,30 +36,59 @@
       include:
         - distro: 'alpine:edge'
           pre: 'apk add -U bash'
+          rmjsonc: 'apk del json-c-dev'
         - distro: 'alpine:3.11'
           pre: 'apk add -U bash'
+          rmjsonc: 'apk del json-c-dev'
         - distro: 'alpine:3.10'
           pre: 'apk add -U bash'
+          rmjsonc: 'apk del json-c-dev'
         - distro: 'alpine:3.9'
           pre: 'apk add -U bash'
+          rmjsonc: 'apk del json-c-dev'
         - distro: 'archlinux:latest'
           pre: 'pacman --noconfirm -Sy grep libffi'
+        - distro: 'centos:8'
+          rmjsonc: 'dnf remove -y json-c-devel'
         - distro: 'debian:bullseye'
          pre: 'apt-get update'
+          rmjsonc: 'apt-get remove -y libjson-c-dev'
         - distro: 'debian:buster'
           pre: 'apt-get update'
+          rmjsonc: 'apt-get remove -y libjson-c-dev'
         - distro: 'debian:stretch'
           pre: 'apt-get update'
+          rmjsonc: 'apt-get remove -y libjson-c-dev'
+        - distro: 'fedora:32'
+          rmjsonc: 'dnf remove -y json-c-devel'
+        - distro: 'fedora:31'
+          rmjsonc: 'dnf remove -y json-c-devel'
+        - distro: 'fedora:30'
+          rmjsonc: 'dnf remove -y json-c-devel'
+        - distro: 'opensuse/leap:15.2'
+          rmjsonc: 'zypper rm -y libjson-c-devel'
+        - distro: 'opensuse/leap:15.1'
+          rmjsonc: 'zypper rm -y libjson-c-devel'
+        - distro: 'opensuse/tumbleweed:latest'
+          rmjsonc: 'zypper rm -y libjson-c-devel'
         - distro: 'ubuntu:20.04'
           pre: 'apt-get update'
+          rmjsonc: 'apt-get remove -y libjson-c-dev'
         - distro: 'ubuntu:19.10'
           pre: 'apt-get update'
+          rmjsonc: 'apt-get remove -y libjson-c-dev'
         - distro: 'ubuntu:18.04'
           pre: 'apt-get update'
+          rmjsonc: 'apt-get remove -y libjson-c-dev'
         - distro: 'ubuntu:16.04'
           pre: 'apt-get update'
+          rmjsonc: 'apt-get remove -y libjson-c-dev'
     runs-on: ubuntu-latest
     steps:
       - name: Git clone repository
@@ -66,15 +96,22 @@ jobs:
       - name: install-required-packages.sh on ${{ matrix.distro }}
         env:
           PRE: ${{ matrix.pre }}
+          RMJSONC: ${{ matrix.rmjsonc }}
         run: |
           echo $PRE > ./prep-cmd.sh
+          echo $RMJSONC > ./rmjsonc.sh
           docker build . -f .github/dockerfiles/Dockerfile.build_test -t test --build-arg BASE=${{ matrix.distro }}
       - name: Regular build on ${{ matrix.distro }}
         run: |
           docker run -w /netdata test /bin/sh -c 'autoreconf -ivf && ./configure && make -j2'
-      - name: netdata-installer on ${{ matrix.distro }}
+      - name: netdata-installer on ${{ matrix.distro }}, disable cloud
         run: |
           docker run -w /netdata test /bin/sh -c './netdata-installer.sh --dont-wait --dont-start-it --disable-cloud'
-      - name: netdata-installer on ${{ matrix.distro }}
+      - name: netdata-installer on ${{ matrix.distro }}, require cloud
         run: |
           docker run -w /netdata test /bin/sh -c './netdata-installer.sh --dont-wait --dont-start-it --require-cloud'
+      - name: netdata-installer on ${{ matrix.distro }}, require cloud, no JSON-C
+        if: matrix.rmjsonc != ''
+        run: |
+          docker run -w /netdata test \
+            /bin/sh -c '/netdata/rmjsonc.sh && ./netdata-installer.sh --dont-wait --dont-start-it --require-cloud'
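
To reproduce the new bundling check outside CI, the job reduces to the following sketch (Debian picked arbitrarily; `prep-cmd.sh` and `rmjsonc.sh` are the files the workflow writes from the matrix values; assumes Docker and a repo checkout):

```sh
# Approximate the "require cloud, no JSON-C" job locally.
echo 'apt-get update' > ./prep-cmd.sh
echo 'apt-get remove -y libjson-c-dev' > ./rmjsonc.sh
docker build . -f .github/dockerfiles/Dockerfile.build_test -t test --build-arg BASE=debian:buster
docker run -w /netdata test \
  /bin/sh -c '/netdata/rmjsonc.sh && ./netdata-installer.sh --dont-wait --dont-start-it --require-cloud'
```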

View file

@@ -624,6 +624,10 @@ NETDATA_COMMON_LIBS = \
     $(OPTIONAL_EBPF_LIBS) \
     $(NULL)
 
+if LINK_STATIC_JSONC
+NETDATA_COMMON_LIBS += externaldeps/jsonc/libjson-c.a
+endif
+
 NETDATACLI_FILES = \
     daemon/commands.h \
     $(LIBNETDATA_FILES) \

View file

@@ -152,7 +152,6 @@ static void aclk_lws_wss_log_divert(int level, const char *line)
 static int aclk_lws_wss_client_init( char *target_hostname, int target_port)
 {
     static int lws_logging_initialized = 0;
-    struct lws_context_creation_info info;
 
     if (unlikely(!lws_logging_initialized)) {
         lws_set_log_level(LLL_ERR | LLL_WARN, aclk_lws_wss_log_divert);
@@ -167,14 +166,6 @@ static int aclk_lws_wss_client_init( char *target_hostname, int target_port)
     engine_instance->host = target_hostname;
     engine_instance->port = target_port;
 
-    memset(&info, 0, sizeof(struct lws_context_creation_info));
-    info.options = LWS_SERVER_OPTION_DO_SSL_GLOBAL_INIT;
-    info.port = CONTEXT_PORT_NO_LISTEN;
-    info.protocols = protocols;
-
-    engine_instance->lws_context = lws_create_context(&info);
-    if (!engine_instance->lws_context)
-        goto failure_cleanup_2;
 
     aclk_lws_mutex_init(&engine_instance->write_buf_mutex);
     aclk_lws_mutex_init(&engine_instance->read_buf_mutex);
@@ -186,18 +177,27 @@ static int aclk_lws_wss_client_init( char *target_hostname, int target_port)
     return 0;
 
 failure_cleanup:
-    lws_context_destroy(engine_instance->lws_context);
-failure_cleanup_2:
     freez(engine_instance);
     return 1;
 }
 
+void aclk_lws_wss_destroy_context()
+{
+    if (!engine_instance)
+        return;
+    if (!engine_instance->lws_context)
+        return;
+    lws_context_destroy(engine_instance->lws_context);
+    engine_instance->lws_context = NULL;
+}
+
 void aclk_lws_wss_client_destroy()
 {
     if (engine_instance == NULL)
         return;
-    lws_context_destroy(engine_instance->lws_context);
-    engine_instance->lws_context = NULL;
+    aclk_lws_wss_destroy_context();
     engine_instance->lws_wsi = NULL;
 
     aclk_lws_wss_clear_io_buffers(engine_instance);
@@ -267,7 +267,25 @@ int aclk_lws_wss_connect(char *host, int port)
     int n;
 
     if (!engine_instance) {
-        return aclk_lws_wss_client_init(host, port);
+        if (aclk_lws_wss_client_init(host, port))
+            return 1;       // Propagate failure
+    }
+
+    if (!engine_instance->lws_context)
+    {
+        // First time through (on this connection), create the context
+        struct lws_context_creation_info info;
+        memset(&info, 0, sizeof(struct lws_context_creation_info));
+        info.options = LWS_SERVER_OPTION_DO_SSL_GLOBAL_INIT;
+        info.port = CONTEXT_PORT_NO_LISTEN;
+        info.protocols = protocols;
+        engine_instance->lws_context = lws_create_context(&info);
+        if (!engine_instance->lws_context)
+        {
+            error("Failed to create lws_context, ACLK will not function");
+            return 1;
+        }
+        return 0;
         // PROTOCOL_INIT callback will call again.
     }

View file

@@ -70,6 +70,7 @@ struct aclk_lws_wss_engine_instance {
 };
 
 void aclk_lws_wss_client_destroy();
+void aclk_lws_wss_destroy_context();
 
 int aclk_lws_wss_connect(char *host, int port);

View file

@@ -23,6 +23,7 @@ static char *aclk_password = NULL;
 static char *global_base_topic = NULL;
 static int aclk_connecting = 0;
 int aclk_connected = 0;           // Exposed in the web-api
+int aclk_force_reconnect = 0;     // Indication from lower layers
 
 usec_t aclk_session_us = 0;       // Used by the mqtt layer
 time_t aclk_session_sec = 0;      // Used by the mqtt layer
@@ -47,7 +48,7 @@ pthread_mutex_t query_lock_wait = PTHREAD_MUTEX_INITIALIZER;
 #define QUERY_THREAD_WAKEUP pthread_cond_signal(&query_cond_wait)
 
 void lws_wss_check_queues(size_t *write_len, size_t *write_len_bytes, size_t *read_len);
-
+void aclk_lws_wss_destroy_context();
 /*
  * Maintain a list of collectors and chart count
  * If all the charts of a collector are deleted
@@ -149,7 +150,7 @@ static RSA *aclk_private_key = NULL;
 static int create_private_key()
 {
     char filename[FILENAME_MAX + 1];
-    snprintfz(filename, FILENAME_MAX, "%s/claim.d/private.pem", netdata_configured_user_config_dir);
+    snprintfz(filename, FILENAME_MAX, "%s/cloud.d/private.pem", netdata_configured_varlib_dir);
 
     long bytes_read;
     char *private_key = read_by_filename(filename, &bytes_read);
@@ -1336,59 +1337,84 @@ void *aclk_main(void *ptr)
     struct netdata_static_thread *static_thread = (struct netdata_static_thread *)ptr;
     struct netdata_static_thread *query_thread;
 
-    if (!netdata_cloud_setting) {
-        info("Killing ACLK thread -> cloud functionality has been disabled");
-        static_thread->enabled = NETDATA_MAIN_THREAD_EXITED;
-        return NULL;
-    }
+    // This thread is unusual in that it cannot be cancelled by cancel_main_threads()
+    // as it must notify the far end that it shutdown gracefully and avoid the LWT.
+    netdata_thread_disable_cancelability();
+
+#if defined( DISABLE_CLOUD ) || !defined( ENABLE_ACLK)
+    info("Killing ACLK thread -> cloud functionality has been disabled");
+    static_thread->enabled = NETDATA_MAIN_THREAD_EXITED;
+    return NULL;
+#endif
 
     info("Waiting for netdata to be ready");
     while (!netdata_ready) {
         sleep_usec(USEC_PER_MS * 300);
     }
 
+    info("Waiting for Cloud to be enabled");
+    while (!netdata_cloud_setting) {
+        sleep_usec(USEC_PER_SEC * 1);
+        if (netdata_exit) {
+            static_thread->enabled = NETDATA_MAIN_THREAD_EXITED;
+            return NULL;
+        }
+    }
+
     last_init_sequence = now_realtime_sec();
     query_thread = NULL;
     char *aclk_hostname = NULL; // Initializers are over-written but prevent gcc complaining about clobbering.
     char *aclk_port = NULL;
     uint32_t port_num = 0;
-    char *cloud_base_url = config_get(CONFIG_SECTION_CLOUD, "cloud base url", DEFAULT_CLOUD_BASE_URL);
-    if (aclk_decode_base_url(cloud_base_url, &aclk_hostname, &aclk_port)) {
-        error("Configuration error - cannot use agent cloud link");
-        static_thread->enabled = NETDATA_MAIN_THREAD_EXITED;
-        return NULL;
-    }
-    port_num = atoi(aclk_port); // SSL library uses the string, MQTT uses the numeric value
 
     info("Waiting for netdata to be claimed");
     while(1) {
         while (likely(!is_agent_claimed())) {
-            sleep_usec(USEC_PER_SEC * 5);
+            sleep_usec(USEC_PER_SEC * 1);
             if (netdata_exit)
                 goto exited;
         }
 
-        if (!create_private_key() && !_mqtt_lib_init())
-            break;
-
-        if (netdata_exit)
-            goto exited;
-
-        sleep_usec(USEC_PER_SEC * 60);
+        // The NULL return means the value was never initialised, but this value has been initialized in post_conf_load.
+        // We trap the impossible NULL here to keep the linter happy without using a fatal() in the code.
+        char *cloud_base_url = appconfig_get(&cloud_config, CONFIG_SECTION_GLOBAL, "cloud base url", NULL);
+        if (cloud_base_url == NULL) {
+            error("Do not move the cloud base url out of post_conf_load!!");
+            goto exited;
+        }
+
+        if (aclk_decode_base_url(cloud_base_url, &aclk_hostname, &aclk_port)) {
+            error("Agent is claimed but the configuration is invalid, please fix");
+        }
+        else
+        {
+            port_num = atoi(aclk_port); // SSL library uses the string, MQTT uses the numeric value
+            if (!create_private_key() && !_mqtt_lib_init())
+                break;
+        }
+
+        for (int i=0; i<60; i++) {
+            if (netdata_exit)
+                goto exited;
+            sleep_usec(USEC_PER_SEC * 1);
+        }
     }
 
     create_publish_base_topic();
 
     usec_t reconnect_expiry = 0; // In usecs
 
-    netdata_thread_disable_cancelability();
     while (!netdata_exit) {
         static int first_init = 0;
         size_t write_q, write_q_bytes, read_q;
         lws_wss_check_queues(&write_q, &write_q_bytes, &read_q);
 
+        if (aclk_force_reconnect) {
+            aclk_lws_wss_destroy_context();
+            aclk_force_reconnect = 0;
+        }
         //info("loop state first_init_%d connected=%d connecting=%d wq=%zu (%zu-bytes) rq=%zu",
         //     first_init, aclk_connected, aclk_connecting, write_q, write_q_bytes, read_q);
-        if (unlikely(!netdata_exit && !aclk_connected)) {
+        if (unlikely(!netdata_exit && !aclk_connected && !aclk_force_reconnect)) {
             if (unlikely(!first_init)) {
                 aclk_try_to_connect(aclk_hostname, aclk_port, port_num);
                 first_init = 1;
@@ -1414,7 +1440,7 @@ void *aclk_main(void *ptr)
         }
 
         _link_event_loop();
-        if (unlikely(!aclk_connected))
+        if (unlikely(!aclk_connected || aclk_force_reconnect))
             continue;
         /*static int stress_counter = 0;
         if (write_q_bytes==0 && stress_counter ++ >5)
@@ -1550,6 +1576,7 @@ void aclk_disconnect()
     waiting_init = 1;
     aclk_connected = 0;
     aclk_connecting = 0;
+    aclk_force_reconnect = 1;
 }
 
 void aclk_shutdown()
@@ -1598,6 +1625,7 @@ inline void aclk_create_header(BUFFER *dest, char *type, char *msg_id, time_t ts
  *      alarm_log
  *      active alarms
  */
+void health_active_log_alarms_2json(RRDHOST *host, BUFFER *wb);
 void aclk_send_alarm_metadata()
 {
     BUFFER *local_buffer = buffer_create(NETDATA_WEB_RESPONSE_INITIAL_SIZE);
@@ -1618,17 +1646,18 @@ void aclk_send_alarm_metadata()
     aclk_create_header(local_buffer, "connect_alarms", msg_id, aclk_session_sec, aclk_session_us);
     buffer_strcat(local_buffer, ",\n\t\"payload\": ");
 
     buffer_sprintf(local_buffer, "{\n\t \"configured-alarms\" : ");
     health_alarms2json(localhost, local_buffer, 1);
     debug(D_ACLK, "Metadata %s with configured alarms has %zu bytes", msg_id, local_buffer->len);
 
-    buffer_sprintf(local_buffer, ",\n\t \"alarm-log\" : ");
-    health_alarm_log2json(localhost, local_buffer, 0);
-    debug(D_ACLK, "Metadata %s with alarm_log has %zu bytes", msg_id, local_buffer->len);
+    // buffer_sprintf(local_buffer, ",\n\t \"alarm-log\" : ");
+    // health_alarm_log2json(localhost, local_buffer, 0);
+    // debug(D_ACLK, "Metadata %s with alarm_log has %zu bytes", msg_id, local_buffer->len);
 
     buffer_sprintf(local_buffer, ",\n\t \"alarms-active\" : ");
-    health_alarms_values2json(localhost, local_buffer, 0);
-    debug(D_ACLK, "Metadata %s with alarms_active has %zu bytes", msg_id, local_buffer->len);
+    health_active_log_alarms_2json(localhost, local_buffer);
+    //debug(D_ACLK, "Metadata message %s", local_buffer->buffer);
 
     buffer_sprintf(local_buffer, "\n}\n}");
 
     aclk_send_message(ACLK_ALARMS_TOPIC, local_buffer->buffer, msg_id);
@@ -1657,7 +1686,7 @@ int aclk_send_info_metadata()
     // a fake on_connect message then use the real timestamp to indicate it is within the existing
     // session.
     if (aclk_metadata_submitted == ACLK_METADATA_SENT)
-        aclk_create_header(local_buffer, "connect", msg_id, 0, 0);
+        aclk_create_header(local_buffer, "update", msg_id, 0, 0);
     else
         aclk_create_header(local_buffer, "connect", msg_id, aclk_session_sec, aclk_session_us);
     buffer_strcat(local_buffer, ",\n\t\"payload\": ");

View file

@@ -25,7 +25,7 @@
 #define ACLK_MAX_TOPIC 255
 
 #define ACLK_RECONNECT_DELAY 1 // reconnect delay -- with backoff stragegy fow now
-#define ACLK_STABLE_TIMEOUT 10 // Minimum delay to mark AGENT as stable
+#define ACLK_STABLE_TIMEOUT 3  // Minimum delay to mark AGENT as stable
 #define ACLK_DEFAULT_PORT 9002
 #define ACLK_DEFAULT_HOST "localhost"

View file

@@ -29,7 +29,7 @@ void publish_callback(struct mosquitto *mosq, void *obj, int rc)
     UNUSED(mosq);
     UNUSED(obj);
     UNUSED(rc);
-
+    info("Publish_callback: mid=%d", rc);
     // TODO: link this with a msg_id so it can be traced
     return;
 }
@@ -219,7 +219,8 @@ void aclk_lws_connection_data_received()
 
 void aclk_lws_connection_closed()
 {
-    aclk_disconnect(NULL);
+    aclk_disconnect();
 }

View file

@@ -9,6 +9,8 @@
 	    -e 's#[@]registrydir_POST@#$(registrydir)#g' \
 	    -e 's#[@]varlibdir_POST@#$(varlibdir)#g' \
 	    -e 's#[@]webdir_POST@#$(webdir)#g' \
+	    -e 's#[@]can_enable_aclk_POST@#$(can_enable_aclk)#g' \
+	    -e 's#[@]enable_cloud_POST@#$(enable_cloud)#g' \
 	    $< > $@.tmp; then \
 	    mv "$@.tmp" "$@"; \
 	else \

View file

@@ -9,7 +9,7 @@ services:
         - VERSION=current
     image: arch_current_dev:latest
     command: >
-      sh -c "echo -n 00000000-0000-0000-0000-000000000000 >/etc/netdata/claim.d/claimed_id &&
+      sh -c "echo -n 00000000-0000-0000-0000-000000000000 >/var/lib/netdata/cloud.d/claimed_id &&
             echo '[agent_cloud_link]' >>/etc/netdata/netdata.conf &&
             echo ' agent cloud link hostname = vernemq' >>/etc/netdata/netdata.conf &&
             echo ' agent cloud link port = 9002' >>/etc/netdata/netdata.conf &&

View file

@@ -9,7 +9,7 @@ services:
         - VERSION=extras
     image: arch_extras_dev:latest
     command: >
-      sh -c "echo -n 00000000-0000-0000-0000-000000000000 >/etc/netdata/claim.d/claimed_id &&
+      sh -c "echo -n 00000000-0000-0000-0000-000000000000 >/var/lib/netdata/cloud.d/claimed_id &&
            echo '[agent_cloud_link]' >>/etc/netdata/netdata.conf &&
            echo ' agent cloud link hostname = vernemq' >>/etc/netdata/netdata.conf &&
            echo ' agent cloud link port = 9002' >>/etc/netdata/netdata.conf &&

View file

@@ -96,7 +96,7 @@ docker run -d --name=netdata \
   --cap-add SYS_PTRACE \
   --security-opt apparmor=unconfined \
   netdata/netdata \
-  /usr/sbin/netdata -D -W set global "netdata cloud" enable -W set cloud "cloud base url" "https://app.netdata.cloud" -W "claim -token=TOKEN -rooms=ROOM1,ROOM2 -url=https://app.netdata.cloud"
+  /usr/sbin/netdata -D -W set cloud global enabled true -W set cloud global "cloud base url" "https://app.netdata.cloud" -W "claim -token=TOKEN -rooms=ROOM1,ROOM2 -url=https://app.netdata.cloud"
 ```
 
 The container runs in detached mode, so you won't see any output. If the node does not appear in your Space, you can run
@@ -167,11 +167,11 @@ Use these keys and the information below to troubleshoot the ACLK.
 
 If `cloud-enabled` is `false`, you probably ran the installer with `--disable-cloud` option.
 
-Additionally, check that the `netdata cloud` setting in `netdata.conf` is set to `enable`:
+Additionally, check that the `enabled` setting in `var/lib/netdata/cloud.d/cloud.conf` is set to `true`:
 
 ```ini
-[general]
-netadata cloud = enable
+[global]
+enabled = true
 ```
 
 To fix this issue, reinstall Netdata using your [preferred method](/packaging/installer/README.md) and do not add the
@@ -234,23 +234,23 @@ with details about your system and relevant output from `error.log`.
 
 ### Unclaim (remove) an Agent from Netdata Cloud
 
-The best method to remove an Agent from Netdata Cloud is to unclaim it by deleting the `claim.d/` directory in your
-Netdata configuration directory.
+The best method to remove an Agent from Netdata Cloud is to unclaim it by deleting the `cloud.d/` directory in your
+Netdata library directory.
 
 ```bash
-cd /etc/netdata   # Replace with your Netdata configuration directory, if not /etc/netdata/
-rm -rf claim.d/
+cd /var/lib/netdata   # Replace with your Netdata library directory, if not /var/lib/netdata/
+rm -rf cloud.d/
 ```
 
 > You may need to use `sudo` or another method of elevating your privileges.
 
-Once you delete the `claim.d/` directory, the ACLK will not connect to Cloud the next time the Agent starts, and Cloud
+Once you delete the `cloud.d/` directory, the ACLK will not connect to Cloud the next time the Agent starts, and Cloud
 will then remove it from the interface.
 
 ## Claiming reference
 
 In the sections below, you can find reference material for the claiming script, claiming via the Agent's command line
-tool, and details about the files found in `claim.d`.
+tool, and details about the files found in `cloud.d`.
 
 ### Claiming script
@@ -263,7 +263,7 @@ and passing the following arguments:
 -rooms=ROOM1,ROOM2,...
     where ROOMX is the War Room this node should be added to. This list is optional.
 -url=URL_BASE
-    where URL_BASE is the Netdata Cloud endpoint base URL. By default, this is https://netdata.cloud.
+    where URL_BASE is the Netdata Cloud endpoint base URL. By default, this is https://app.netdata.cloud.
 -id=AGENT_ID
     where AGENT_ID is the unique identifier of the Agent. This is the Agent's MACHINE_GUID by default.
 -hostname=HOSTNAME
@@ -306,14 +306,14 @@ If need be, the user can override the Agent's defaults by providing additional a
 
 ### Claiming directory
 
-Netdata stores the agent claiming-related state in the user configuration directory under `claim.d`, e.g. in
-`/etc/netdata/claim.d`. The user can put files in this directory to provide defaults to the `-token` and `-rooms`
+Netdata stores the agent claiming-related state in the Netdata library directory under `cloud.d`, e.g. in
+`/var/lib/netdata/cloud.d`. The user can put files in this directory to provide defaults to the `-token` and `-rooms`
 arguments. These files should be owned **by the `netdata` user**.
 
-The `claim.d/token` file should contain the claiming-token and the `claim.d/rooms` file should contain the list of
+The `cloud.d/token` file should contain the claiming-token and the `cloud.d/rooms` file should contain the list of
 war-rooms.
 
-The user can also put the Cloud endpoint's full certificate chain in `claim.d/cloud_fullchain.pem` so that the Agent
+The user can also put the Cloud endpoint's full certificate chain in `cloud.d/cloud_fullchain.pem` so that the Agent
 can trust the endpoint if necessary.
 
 [![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fclaim%2FREADME&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)](<>)

View file

@@ -12,17 +12,19 @@ static char *claiming_errors[] = {
         "Problems with claiming working directory",     // 2
         "Missing dependencies",                         // 3
         "Failure to connect to endpoint",               // 4
-        "Unknown HTTP error message",                   // 5
-        "invalid node id",                              // 6
-        "invalid node name",                            // 7
-        "invalid room id",                              // 8
-        "invalid public key",                           // 9
-        "token expired/token not found/invalid token",  // 10
-        "already claimed",                              // 11
-        "processing claiming",                          // 12
-        "Internal Server Error",                        // 13
-        "Gateway Timeout",                              // 14
-        "Service Unavailable"                           // 15
+        "The CLI didn't work",                          // 5
+        "Wrong user",                                   // 6
+        "Unknown HTTP error message",                   // 7
+        "invalid node id",                              // 8
+        "invalid node name",                            // 9
+        "invalid room id",                              // 10
+        "invalid public key",                           // 11
+        "token expired/token not found/invalid token",  // 12
+        "already claimed",                              // 13
+        "processing claiming",                          // 14
+        "Internal Server Error",                        // 15
+        "Gateway Timeout",                              // 16
+        "Service Unavailable"                           // 17
 };
 
 static char *claimed_id = NULL;
@@ -37,7 +39,7 @@ char *is_agent_claimed(void)
 
 extern struct registry registry;
 
-/* rrd_init() must have been called before this function */
+/* rrd_init() and post_conf_load() must have been called before this function */
 void claim_agent(char *claiming_arguments)
 {
     if (!netdata_cloud_setting) {
@@ -51,7 +53,10 @@ void claim_agent(char *claiming_arguments)
     char command_buffer[CLAIMING_COMMAND_LENGTH + 1];
     FILE *fp;
 
-    char *cloud_base_url = config_get(CONFIG_SECTION_CLOUD, "cloud base url", DEFAULT_CLOUD_BASE_URL);
+    // This is guaranteed to be set early in main via post_conf_load()
+    char *cloud_base_url = appconfig_get(&cloud_config, CONFIG_SECTION_GLOBAL, "cloud base url", NULL);
+    if (cloud_base_url == NULL)
+        fatal("Do not move the cloud base url out of post_conf_load!!");
 
     const char *proxy_str;
     ACLK_PROXY_TYPE proxy_type;
     char proxy_flag[CLAIMING_PROXY_LENGTH] = "-noproxy";
@@ -111,8 +116,11 @@ void load_claiming_state(void)
         claimed_id = NULL;
     }
 
+    // Propagate into aclk and registry. Be kind of atomic...
+    appconfig_get(&cloud_config, CONFIG_SECTION_GLOBAL, "cloud base url", DEFAULT_CLOUD_BASE_URL);
+
     char filename[FILENAME_MAX + 1];
-    snprintfz(filename, FILENAME_MAX, "%s/claim.d/claimed_id", netdata_configured_user_config_dir);
+    snprintfz(filename, FILENAME_MAX, "%s/cloud.d/claimed_id", netdata_configured_varlib_dir);
 
     long bytes_read;
     claimed_id = read_by_filename(filename, &bytes_read);
@@ -122,4 +130,34 @@ void load_claiming_state(void)
     }
 
     info("File '%s' was found. Setting state to AGENT_CLAIMED.", filename);
+
+    // --------------------------------------------------------------------
+    // Check if the cloud is enabled
+#if defined( DISABLE_CLOUD ) || !defined( ENABLE_ACLK )
+    netdata_cloud_setting = 0;
+#else
+    netdata_cloud_setting = appconfig_get_boolean(&cloud_config, CONFIG_SECTION_GLOBAL, "enabled", 1);
+#endif
+}
+
+struct config cloud_config = { .first_section = NULL,
+                               .last_section = NULL,
+                               .mutex = NETDATA_MUTEX_INITIALIZER,
+                               .index = { .avl_tree = { .root = NULL, .compar = appconfig_section_compare },
+                                          .rwlock = AVL_LOCK_INITIALIZER } };
+
+void load_cloud_conf(int silent)
+{
+    char *filename;
+    errno = 0;
+
+    int ret = 0;
+
+    filename = strdupz_path_subpath(netdata_configured_varlib_dir, "cloud.d/cloud.conf");
+
+    ret = appconfig_load(&cloud_config, filename, 1, NULL);
+    if(!ret && !silent) {
+        info("CONFIG: cannot load cloud config '%s'. Running with internal defaults.", filename);
+    }
+    freez(filename);
 }

View file

@@ -6,9 +6,11 @@
 #include "../daemon/common.h"
 
 extern char *claiming_pending_arguments;
+extern struct config cloud_config;
 
 void claim_agent(char *claiming_arguments);
 char *is_agent_claimed(void);
 void load_claiming_state(void);
+void load_cloud_conf(int silent);
 
 #endif //NETDATA_CLAIM_H

View file

@@ -9,74 +9,92 @@
 # Exit code: 2 - Problems with claiming working directory
 # Exit code: 3 - Missing dependencies
 # Exit code: 4 - Failure to connect to endpoint
-# Exit code: 5 - Unknown HTTP error message
-# Exit code: 6 - The CLI didn't work
-# Exit code: 7 - Wrong user
+# Exit code: 5 - The CLI didn't work
+# Exit code: 6 - Wrong user
+# Exit code: 7 - Unknown HTTP error message
 #
 # OK: Agent claimed successfully
 # HTTP Status code: 204
 # Exit code: 0
 #
+# Unknown HTTP error message
+# HTTP Status code: 422
+# Exit code: 7
+ERROR_KEYS[7]="None"
+ERROR_MESSAGES[7]="Unknown HTTP error message"
+#
 # Error: The agent id is invalid; it does not fulfill the constraints
 # HTTP Status code: 422
-# Error key: "ErrInvalidNodeID"
-# Error message: "invalid node id"
-# Exit code: 6
+# Exit code: 8
+ERROR_KEYS[8]="ErrInvalidNodeID"
+ERROR_MESSAGES[8]="invalid node id"
 #
 # Error: The agent hostname is invalid; it does not fulfill the constraints
 # HTTP Status code: 422
-# Error key: "ErrInvalidNodeName"
-# Error message: "invalid node name"
-# Exit code: 7
+# Exit code: 9
+ERROR_KEYS[9]="ErrInvalidNodeName"
+ERROR_MESSAGES[9]="invalid node name"
 #
 # Error: At least one of the given rooms ids is invalid; it does not fulfill the constraints
 # HTTP Status code: 422
-# Error key: "ErrInvalidRoomID"
-# Error message: "invalid room id"
-# Exit code: 8
+# Exit code: 10
+ERROR_KEYS[10]="ErrInvalidRoomID"
+ERROR_MESSAGES[10]="invalid room id"
 #
 # Error: Invalid public key; the public key is empty or not present
 # HTTP Status code: 422
-# Error key: "ErrInvalidPublicKey"
-# Error message: "invalid public key"
-# Exit code: 9
+# Exit code: 11
+ERROR_KEYS[11]="ErrInvalidPublicKey"
+ERROR_MESSAGES[11]="invalid public key"
 #
 # Error: Expired, missing or invalid token
 # HTTP Status code: 403
-# Error key: "ErrForbidden"
-# Error message: "token expired" | "token not found" | "invalid token"
-# Exit code: 10
+# Exit code: 12
+ERROR_KEYS[12]="ErrForbidden"
+ERROR_MESSAGES[12]="token expired/token not found/invalid token"
 #
 # Error: Duplicate agent id; an agent with the same id is already registered in the cloud
 # HTTP Status code: 409
-# Error key: "ErrAlreadyClaimed"
-# Error message: "already claimed"
-# Exit code: 11
+# Exit code: 13
+ERROR_KEYS[13]="ErrAlreadyClaimed"
+ERROR_MESSAGES[13]="already claimed"
 #
 # Error: The node claiming process is still in progress.
 # HTTP Status code: 102
-# Error key: "ErrProcessingClaim"
-# Error message: "processing claiming"
-# Exit code: 12
+# Exit code: 14
+ERROR_KEYS[14]="ErrProcessingClaim"
+ERROR_MESSAGES[14]="processing claiming"
 #
 # Error: Internal server error. Any other unexpected error (DB problems, etc.)
 # HTTP Status code: 500
-# Error key: "ErrInternalServerError"
-# Error message: "Internal Server Error"
-# Exit code: 13
+# Exit code: 15
+ERROR_KEYS[15]="ErrInternalServerError"
+ERROR_MESSAGES[15]="Internal Server Error"
 #
 # Error: There was a timout processing the claim.
 # HTTP Status code: 504
-# Error key: "ErrGatewayTimeout"
-# Error message: "Gateway Timeout"
-# Exit code: 14
+# Exit code: 16
+ERROR_KEYS[16]="ErrGatewayTimeout"
+ERROR_MESSAGES[16]="Gateway Timeout"
 #
 # Error: The service cannot handle the claiming request at this time.
 # HTTP Status code: 503
-# Error key: "ErrServiceUnavailable"
-# Error message: "Service Unavailable"
-# Exit code: 15
+# Exit code: 17
+ERROR_KEYS[17]="ErrServiceUnavailable"
+ERROR_MESSAGES[17]="Service Unavailable"
+
+get_config_value() {
+  conf_file="${1}"
+  section="${2}"
+  key_name="${3}"
+  config_result=$(@sbindir_POST@/netdatacli 2>/dev/null read-config "$conf_file|$section|$key_name"; exit $?)
+  # shellcheck disable=SC2181
+  if [ "$?" != "0" ]; then
+    echo >&2 "cli failed, assume netdata is not running and query the on-disk config"
+    config_result=$(@sbindir_POST@/netdata 2>/dev/null -W get2 "$conf_file" "$section" "$key_name" unknown_default)
+  fi
+  echo "$config_result"
+}
+
 if command -v curl >/dev/null 2>&1 ; then
   URLTOOL="curl"
 elif command -v wget >/dev/null 2>&1 ; then
@@ -90,15 +108,26 @@ if ! command -v openssl >/dev/null 2>&1 ; then
   exit 3
 fi
 
+# shellcheck disable=SC2050
+if [ "@enable_cloud_POST@" = "no" ]; then
+  echo >&2 "This agent was built with --disable-cloud and cannot be claimed"
+  exit 3
+fi
+# shellcheck disable=SC2050
+if [ "@can_enable_aclk_POST@" != "yes" ]; then
+  echo >&2 "This agent was built without the dependencies for Cloud and cannot be claimed"
+  exit 3
+fi
+
 # -----------------------------------------------------------------------------
 # defaults to allow running this script by hand
 
-[ -z "${NETDATA_USER_CONFIG_DIR}" ] && NETDATA_USER_CONFIG_DIR="@configdir_POST@"
+[ -z "${NETDATA_VARLIB_DIR}" ] && NETDATA_VARLIB_DIR="@varlibdir_POST@"
 MACHINE_GUID_FILE="@registrydir_POST@/netdata.public.unique.id"
-CLAIMING_DIR="${NETDATA_USER_CONFIG_DIR}/claim.d"
+CLAIMING_DIR="${NETDATA_VARLIB_DIR}/cloud.d"
 TOKEN="unknown"
-URL_BASE="https://netdata.cloud"
+URL_BASE=$(get_config_value cloud global "cloud base url")
+[ -z "$URL_BASE" ] && URL_BASE="https://app.netdata.cloud" # Cover post-install with --dont-start
 ID="unknown"
 ROOMS=""
 [ -z "$HOSTNAME" ] && HOSTNAME=$(hostname)
@@ -106,14 +135,9 @@ CLOUD_CERTIFICATE_FILE="${CLAIMING_DIR}/cloud_fullchain.pem"
 VERBOSE=0
 INSECURE=0
 RELOAD=1
-NETDATA_USER=netdata
+NETDATA_USER=$(get_config_value netdata global "run as user")
 [ -z "$EUID" ] && EUID="$(id -u)"
 
-CONF_USER=$(grep '^[ #]*run as user[ ]*=' "${NETDATA_USER_CONFIG_DIR}/netdata.conf" 2>/dev/null)
-if [ -n "$CONF_USER" ]; then
-  NETDATA_USER=$(echo "$CONF_USER" | sed 's/^[^=]*=[ \t]*//' | sed 's/[ \t]*$//')
-fi
 
 # get the MACHINE_GUID by default
 if [ -r "${MACHINE_GUID_FILE}" ]; then
@@ -152,7 +176,7 @@ done
 
 if [ "$EUID" != "0" ] && [ "$(whoami)" != "$NETDATA_USER" ]; then
   echo >&2 "This script must be run by the $NETDATA_USER user account"
-  exit 7
+  exit 6
 fi
 
 # if curl not installed give warning SOCKS can't be used
@@ -279,37 +303,73 @@ if [ "${VERBOSE}" == 1 ] ; then
   cat "${CLAIMING_DIR}/tmpout.txt"
 fi
 
-HTTP_STATUS_CODE=$(grep "HTTP" "${CLAIMING_DIR}/tmpout.txt" | awk -F " " '{print $2}')
+ERROR_KEY=$(grep "\"errorMsgKey\":" "${CLAIMING_DIR}/tmpout.txt" | awk -F "errorMsgKey\":\"" '{print $2}' | awk -F "\"" '{print $1}')
+case ${ERROR_KEY} in
+  "ErrInvalidNodeID") EXIT_CODE=8 ;;
+  "ErrInvalidNodeName") EXIT_CODE=9 ;;
+  "ErrInvalidRoomID") EXIT_CODE=10 ;;
+  "ErrInvalidPublicKey") EXIT_CODE=11 ;;
+  "ErrForbidden") EXIT_CODE=12 ;;
+  "ErrAlreadyClaimed") EXIT_CODE=13 ;;
+  "ErrProcessingClaim") EXIT_CODE=14 ;;
+  "ErrInternalServerError") EXIT_CODE=15 ;;
+  "ErrGatewayTimeout") EXIT_CODE=16 ;;
+  "ErrServiceUnavailable") EXIT_CODE=17 ;;
+  *) EXIT_CODE=7 ;;
+esac
+
+HTTP_STATUS_CODE=$(grep "HTTP" "${CLAIMING_DIR}/tmpout.txt" | awk -F " " '{print $2}')
 if [ "${HTTP_STATUS_CODE}" = "204" ] ; then
+  EXIT_CODE=0
+fi
+
+if [ "${HTTP_STATUS_CODE}" = "204" ] || [ "${ERROR_KEY}" = "ErrAlreadyClaimed" ] ; then
   rm -f "${CLAIMING_DIR}/tmpout.txt"
   echo -n "${ID}" >"${CLAIMING_DIR}/claimed_id" || (echo >&2 "Claiming failed"; set -e; exit 2)
   rm -f "${CLAIMING_DIR}/token" || (echo >&2 "Claiming failed"; set -e; exit 2)
+  # Rewrite the cloud.conf on the disk
+  cat > "$CLAIMING_DIR/cloud.conf" <<HERE_DOC
+[global]
+  enabled = yes
+  cloud base url = $URL_BASE
+HERE_DOC
   if [ "$EUID" == "0" ]; then
     chown -R "${NETDATA_USER}:${NETDATA_USER}" ${CLAIMING_DIR} || (echo >&2 "Claiming failed"; set -e; exit 2)
   fi
   if [ "${RELOAD}" == "0" ] ; then
-    exit 0
+    exit $EXIT_CODE
   fi
-  netdatacli reload-claiming-state && echo >&2 "Node was successfully claimed." && exit 0
-  echo "The claim was successful but the agent could not be notified ($?)- it requires a restart to connect to the cloud"
-  exit 6
+
+  if [ -z "${PROXY}" ]; then
+    PROXYMSG=""
+  else
+    PROXYMSG="You have attempted to claim this node through a proxy - please update your the proxy setting in your netdata.conf to ${PROXY}. "
+  fi
+  # Update cloud.conf in the agent memory
+  @sbindir_POST@/netdatacli write-config 'cloud|global|enabled|yes' && \
+  @sbindir_POST@/netdatacli write-config "cloud|global|cloud base url|$URL_BASE" && \
+  @sbindir_POST@/netdatacli reload-claiming-state && \
+  if [ "${HTTP_STATUS_CODE}" = "204" ] ; then
+    echo >&2 "${PROXYMSG}Node was successfully claimed."
+  else
+    echo >&2 "The agent cloud base url is set to the url provided."
+    echo >&2 "The cloud may have different credentials already registered for this agent ID and it cannot be reclaimed under different credentials for security reasons. If you are unable to connect use -id=\$(uuidgen) to overwrite this agent ID with a fresh value if the original credentials cannot be restored."
+    echo >&2 "${PROXYMSG}Failed to claim node with the following error message:\"${ERROR_MESSAGES[$EXIT_CODE]}\""
+  fi && exit $EXIT_CODE
+
+  if [ "${ERROR_KEY}" = "ErrAlreadyClaimed" ] ; then
+    echo >&2 "The cloud may have different credentials already registered for this agent ID and it cannot be reclaimed under different credentials for security reasons. If you are unable to connect use -id=\$(uuidgen) to overwrite this agent ID with a fresh value if the original credentials cannot be restored."
+    echo >&2 "${PROXYMSG}Failed to claim node with the following error message:\"${ERROR_MESSAGES[$EXIT_CODE]}\""
+    exit $EXIT_CODE
+  fi
+  echo >&2 "${PROXYMSG}The claim was successful but the agent could not be notified ($?)- it requires a restart to connect to the cloud."
+  exit 5
 fi
 
-ERROR_MESSAGE=$(grep "\"errorMsgKey\":" "${CLAIMING_DIR}/tmpout.txt" | awk -F "errorMsgKey\":\"" '{print $2}' | awk -F "\"" '{print $1}')
-case ${ERROR_MESSAGE} in
-  "ErrInvalidNodeID") EXIT_CODE=6 ;;
-  "ErrInvalidNodeName") EXIT_CODE=7 ;;
-  "ErrInvalidRoomID") EXIT_CODE=8 ;;
-  "ErrInvalidPublicKey") EXIT_CODE=9 ;;
-  "ErrForbidden") EXIT_CODE=10 ;;
-  "ErrAlreadyClaimed") EXIT_CODE=11 ;;
-  "ErrProcessingClaim") EXIT_CODE=12 ;;
-  "ErrInternalServerError") EXIT_CODE=13 ;;
-  "ErrGatewayTimeout") EXIT_CODE=14 ;;
-  "ErrServiceUnavailable") EXIT_CODE=15 ;;
-  *) EXIT_CODE=5 ;;
-esac
-echo >&2 "Failed to claim node."
+echo >&2 "Failed to claim node with the following error message:\"${ERROR_MESSAGES[$EXIT_CODE]}\""
+if [ "${VERBOSE}" == 1 ]; then
+  echo >&2 "Error key was:\"${ERROR_KEYS[$EXIT_CODE]}\""
+fi
 rm -f "${CLAIMING_DIR}/tmpout.txt"
 exit $EXIT_CODE

View file

@@ -173,7 +173,6 @@ AC_ARG_ENABLE(
     [ enable_cloud="detect" ]
 )
 
-aclk_required="${enable_cloud}"
 if test "${enable_cloud}" = "no"; then
     AC_DEFINE([DISABLE_CLOUD], [1], [disable netdata cloud functionality])
 fi
@@ -435,6 +434,35 @@ AM_CONDITIONAL([ENABLE_HTTPS], [test "${enable_https}" = "yes"])
 # -----------------------------------------------------------------------------
 # JSON-C
 
+if test "${enable_jsonc}" != "no" -a -z "${JSONC_LIBS}"; then
+    # Try and detect manual static build presence (from netdata-installer.sh)
+    AC_MSG_CHECKING([if statically built json-c is present])
+    HAVE_libjson_c_a="no"
+    if test -f "externaldeps/jsonc/libjson-c.a"; then
+        LIBS_BKP="${LIBS}"
+        LIBS="externaldeps/jsonc/libjson-c.a"
+        AC_LINK_IFELSE([AC_LANG_SOURCE([[#include "externaldeps/jsonc/json-c/json.h"
+                        int main (int argc, char **argv) {
+                            struct json_object *jobj;
+                            char *str = "{ \"msg-type\": \"random\" }";
+                            jobj = json_tokener_parse(str);
+                            json_object_get_type(jobj);
+                        }]])],
+                       [HAVE_libjson_c_a="yes"],
+                       [HAVE_libjson_c_a="no"])
+        LIBS="${LIBS_BKP}"
+    fi
+
+    if test "${HAVE_libjson_c_a}" = "yes"; then
+        AC_DEFINE([LINK_STATIC_JSONC], [1], [static json-c should be used])
+        JSONC_LIBS="static"
+        OPTIONAL_JSONC_STATIC_CFLAGS="-I externaldeps/jsonc"
+    fi
+    AC_MSG_RESULT([${HAVE_libjson_c_a}])
+fi
+
+AM_CONDITIONAL([LINK_STATIC_JSONC], [test "${JSONC_LIBS}" = "static"])
+
 test "${enable_jsonc}" = "yes" -a -z "${JSONC_LIBS}" && \
     AC_MSG_ERROR([JSON-C required but not found. Try installing 'libjson-c-dev' or 'json-c'.])
@@ -577,7 +605,7 @@ if test "$enable_cloud" != "no"; then
     fi
     AC_MSG_RESULT([${HAVE_libwebsockets_a}])
 
-    if test "${build_target}" = "linux" -a "${aclk_required}" != "no"; then
+    if test "${build_target}" = "linux" -a "${enable_cloud}" != "no"; then
         if test "${have_libcap}" = "yes" -a "${with_libcap}" = "no"; then
             AC_MSG_ERROR([agent-cloud-link can't be built without libcap. Disable it by --disable-cloud or enable libcap])
         fi
@@ -586,23 +614,31 @@ if test "$enable_cloud" != "no"; then
     fi
 fi
 
+# next 2 lines are just to have info for ACLK dependencies in common place
+AC_MSG_CHECKING([if json-c available for ACLK])
+AC_MSG_RESULT([${enable_jsonc}])
+test "${enable_cloud}" = "yes" -a "${enable_jsonc}" = "no" && \
+    AC_MSG_ERROR([You have asked for ACLK to be built but no json-c available. ACLK requires json-c])
+
 AC_MSG_CHECKING([if netdata agent-cloud-link can be enabled])
-if test "${HAVE_libmosquitto_a}" = "yes" -a "${HAVE_libwebsockets_a}" = "yes" -a -n "${SSL_LIBS}"; then
+if test "${HAVE_libmosquitto_a}" = "yes" -a "${HAVE_libwebsockets_a}" = "yes" -a -n "${SSL_LIBS}" -a "${enable_jsonc}" = "yes"; then
     can_enable_aclk="yes"
 else
     can_enable_aclk="no"
 fi
 AC_MSG_RESULT([${can_enable_aclk}])
 
-test "${aclk_required}" = "yes" -a "${can_enable_aclk}" = "no" && \
+test "${enable_cloud}" = "yes" -a "${can_enable_aclk}" = "no" && \
     AC_MSG_ERROR([User required agent-cloud-link but it can't be built!])
 
 AC_MSG_CHECKING([if netdata agent-cloud-link should/will be enabled])
-if test "${aclk_required}" = "detect"; then
+if test "${enable_cloud}" = "detect"; then
     enable_aclk=$can_enable_aclk
 else
-    enable_aclk=$aclk_required
+    enable_aclk=$enable_cloud
 fi
 
+AC_SUBST([can_enable_aclk])
+
 if test "${enable_aclk}" = "yes"; then
     AC_DEFINE([ENABLE_ACLK], [1], [netdata ACLK])
@@ -610,6 +646,7 @@ if test "$enable_cloud" != "no"; then
     AC_MSG_RESULT([${enable_aclk}])
 fi
 
+AC_SUBST([enable_cloud])
 AM_CONDITIONAL([ENABLE_ACLK], [test "${enable_aclk}" = "yes"])
 
 # -----------------------------------------------------------------------------
@@ -1216,7 +1253,8 @@ AC_SUBST([webdir])
 
 CFLAGS="${CFLAGS} ${OPTIONAL_MATH_CFLAGS} ${OPTIONAL_NFACCT_CFLAGS} ${OPTIONAL_ZLIB_CFLAGS} ${OPTIONAL_UUID_CFLAGS} \
     ${OPTIONAL_LIBCAP_CFLAGS} ${OPTIONAL_IPMIMONITORING_CFLAGS} ${OPTIONAL_CUPS_CFLAGS} ${OPTIONAL_XENSTAT_FLAGS} \
-    ${OPTIONAL_KINESIS_CFLAGS} ${OPTIONAL_PROMETHEUS_REMOTE_WRITE_CFLAGS} ${OPTIONAL_MONGOC_CFLAGS} ${LWS_CFLAGS}"
+    ${OPTIONAL_KINESIS_CFLAGS} ${OPTIONAL_PROMETHEUS_REMOTE_WRITE_CFLAGS} ${OPTIONAL_MONGOC_CFLAGS} ${LWS_CFLAGS} \
+    ${OPTIONAL_JSONC_STATIC_CFLAGS}"
 
 CXXFLAGS="${CFLAGS} ${CXX11FLAG}"

View file

@ -43,6 +43,8 @@ static cmd_status_t cmd_exit_execute(char *args, char **message);
static cmd_status_t cmd_fatal_execute(char *args, char **message); static cmd_status_t cmd_fatal_execute(char *args, char **message);
static cmd_status_t cmd_reload_claiming_state_execute(char *args, char **message); static cmd_status_t cmd_reload_claiming_state_execute(char *args, char **message);
static cmd_status_t cmd_reload_labels_execute(char *args, char **message); static cmd_status_t cmd_reload_labels_execute(char *args, char **message);
static cmd_status_t cmd_read_config_execute(char *args, char **message);
static cmd_status_t cmd_write_config_execute(char *args, char **message);
static command_info_t command_info_array[] = { static command_info_t command_info_array[] = {
{"help", cmd_help_execute, CMD_TYPE_HIGH_PRIORITY}, // show help menu {"help", cmd_help_execute, CMD_TYPE_HIGH_PRIORITY}, // show help menu
@ -53,6 +55,8 @@ static command_info_t command_info_array[] = {
{"fatal-agent", cmd_fatal_execute, CMD_TYPE_HIGH_PRIORITY}, // exit with fatal error {"fatal-agent", cmd_fatal_execute, CMD_TYPE_HIGH_PRIORITY}, // exit with fatal error
{"reload-claiming-state", cmd_reload_claiming_state_execute, CMD_TYPE_ORTHOGONAL}, // reload claiming state {"reload-claiming-state", cmd_reload_claiming_state_execute, CMD_TYPE_ORTHOGONAL}, // reload claiming state
{"reload-labels", cmd_reload_labels_execute, CMD_TYPE_ORTHOGONAL}, // reload the labels {"reload-labels", cmd_reload_labels_execute, CMD_TYPE_ORTHOGONAL}, // reload the labels
{"read-config", cmd_read_config_execute, CMD_TYPE_CONCURRENT},
{"write-config", cmd_write_config_execute, CMD_TYPE_ORTHOGONAL}
}; };
/* Mutexes for commands of type CMD_TYPE_ORTHOGONAL */ /* Mutexes for commands of type CMD_TYPE_ORTHOGONAL */
@ -185,23 +189,15 @@ static cmd_status_t cmd_reload_claiming_state_execute(char *args, char **message
{ {
(void)args; (void)args;
(void)message; (void)message;
#if defined(DISABLE_CLOUD) || !defined(ENABLE_ACLK)
#ifdef DISABLE_CLOUD info("The claiming feature has been explicitly disabled");
info("The claiming feature has been disabled"); *message = strdupz("This agent cannot be claimed, it was built without support for Cloud");
return CMD_STATUS_FAILURE; return CMD_STATUS_FAILURE;
#endif #endif
#ifndef ENABLE_ACLK
info("Cloud functionality is not enabled because of missing dependencies at build-time.");
return CMD_STATUS_FAILURE;
#endif
if (!netdata_cloud_setting) {
error("Cannot reload claiming status -> cloud functionality has been disabled");
return CMD_STATUS_FAILURE;
}
error_log_limit_unlimited(); error_log_limit_unlimited();
info("COMMAND: Reloading Agent Claiming configuration."); info("COMMAND: Reloading Agent Claiming configuration.");
load_claiming_state(); load_claiming_state();
registry_update_cloud_base_url();
error_log_limit_reset(); error_log_limit_reset();
return CMD_STATUS_SUCCESS; return CMD_STATUS_SUCCESS;
} }
@ -230,6 +226,76 @@ static cmd_status_t cmd_reload_labels_execute(char *args, char **message)
return CMD_STATUS_SUCCESS; return CMD_STATUS_SUCCESS;
} }
static cmd_status_t cmd_read_config_execute(char *args, char **message)
{
size_t n = strlen(args);
char *separator = strchr(args,'|');
if (separator == NULL)
return CMD_STATUS_FAILURE;
char *separator2 = strchr(separator + 1,'|');
if (separator2 == NULL)
return CMD_STATUS_FAILURE;
char *temp = callocz(n + 1, 1);
strcpy(temp, args);
size_t offset = separator - args;
temp[offset] = 0;
size_t offset2 = separator2 - args;
temp[offset2] = 0;
const char *conf_file = temp; /* "cloud" is cloud.conf, otherwise netdata.conf */
struct config *tmp_config = strcmp(conf_file, "cloud") ? &netdata_config : &cloud_config;
char *value = appconfig_get(tmp_config, temp + offset + 1, temp + offset2 + 1, NULL);
if (value == NULL)
{
error("Cannot execute read-config conf_file=%s section=%s / key=%s because no value set", conf_file,
temp + offset + 1, temp + offset2 + 1);
freez(temp);
return CMD_STATUS_FAILURE;
}
else
{
(*message) = strdupz(value);
freez(temp);
return CMD_STATUS_SUCCESS;
}
}
static cmd_status_t cmd_write_config_execute(char *args, char **message)
{
UNUSED(message);
info("write-config %s", args);
size_t n = strlen(args);
char *separator = strchr(args,'|');
if (separator == NULL)
return CMD_STATUS_FAILURE;
char *separator2 = strchr(separator + 1,'|');
if (separator2 == NULL)
return CMD_STATUS_FAILURE;
char *separator3 = strchr(separator2 + 1,'|');
if (separator3 == NULL)
return CMD_STATUS_FAILURE;
char *temp = callocz(n + 1, 1);
strcpy(temp, args);
size_t offset = separator - args;
temp[offset] = 0;
size_t offset2 = separator2 - args;
temp[offset2] = 0;
size_t offset3 = separator3 - args;
temp[offset3] = 0;
const char *conf_file = temp; /* "cloud" is cloud.conf, otherwise netdata.conf */
struct config *tmp_config = strcmp(conf_file, "cloud") ? &netdata_config : &cloud_config;
appconfig_set(tmp_config, temp + offset + 1, temp + offset2 + 1, temp + offset3 + 1);
info("write-config conf_file=%s section=%s key=%s value=%s",conf_file, temp + offset + 1, temp + offset2 + 1,
temp + offset3 + 1);
freez(temp);
return CMD_STATUS_SUCCESS;
}
static void cmd_lock_exclusive(unsigned index) static void cmd_lock_exclusive(unsigned index)
{ {
(void)index; (void)index;
@@ -369,9 +435,11 @@ static void schedule_command(uv_work_t *req)
     cmd_ctx->status = execute_command(cmd_ctx->idx, cmd_ctx->args, &cmd_ctx->message);
 }
 
+/* This will alter the state of the command_info_array.cmd_str
+*/
 static void parse_commands(struct command_context *cmd_ctx)
 {
-    char *message = NULL, *pos;
+    char *message = NULL, *pos, *lstrip, *rstrip;
     cmd_t i;
     cmd_status_t status;
 
@@ -381,9 +449,12 @@ static void parse_commands(struct command_context *cmd_ctx)
     for (pos = cmd_ctx->command_string ; isspace(*pos) && ('\0' != *pos) ; ++pos) {;}
     for (i = 0 ; i < CMD_TOTAL_COMMANDS ; ++i) {
         if (!strncmp(pos, command_info_array[i].cmd_str, strlen(command_info_array[i].cmd_str))) {
+            for (lstrip=pos + strlen(command_info_array[i].cmd_str); isspace(*lstrip) && ('\0' != *lstrip); ++lstrip) {;}
+            for (rstrip=lstrip+strlen(lstrip)-1; rstrip>lstrip && isspace(*rstrip); *(rstrip--) = 0 );
             cmd_ctx->work.data = cmd_ctx;
             cmd_ctx->idx = i;
-            cmd_ctx->args = pos + strlen(command_info_array[i].cmd_str);
+            cmd_ctx->args = lstrip;
             cmd_ctx->message = NULL;
             assert(0 == uv_queue_work(loop, &cmd_ctx->work, schedule_command, after_schedule_command));
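Both new commands take a single argument string in which the target config file, the section, the key and (for write-config) the value are joined with '|'; "cloud" selects cloud.conf and anything else selects netdata.conf. Below is a minimal sketch of how a caller could assemble such a string; it is not part of this patch, and the helper name build_write_config_args is hypothetical.

    #include <stdio.h>

    /* Joins the fields exactly the way cmd_write_config_execute() above splits them,
     * i.e. "conf_file|section|key|value". Returns 0 on success, -1 on truncation. */
    static int build_write_config_args(char *out, size_t len, const char *conf_file,
                                       const char *section, const char *key, const char *value) {
        int n = snprintf(out, len, "%s|%s|%s|%s", conf_file, section, key, value);
        return (n > 0 && (size_t)n < len) ? 0 : -1;
    }

    int main(void) {
        char args[1024];
        if (build_write_config_args(args, sizeof(args), "cloud", "global", "enabled", "yes") == 0)
            printf("write-config %s\n", args);   /* -> write-config cloud|global|enabled|yes */
        return 0;
    }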
@@ -21,6 +21,8 @@ typedef enum cmd {
     CMD_FATAL,
     CMD_RELOAD_CLAIMING_STATE,
     CMD_RELOAD_LABELS,
+    CMD_READ_CONFIG,
+    CMD_WRITE_CONFIG,
     CMD_TOTAL_COMMANDS
 } cmd_t;
@@ -437,7 +437,7 @@ int become_daemon(int dont_fork, const char *user)
     sched_setscheduler_set();
 
     // Set claiming directory based on user config directory with correct ownership
-    snprintfz(claimingdirectory, FILENAME_MAX, "%s/claim.d", netdata_configured_user_config_dir);
+    snprintfz(claimingdirectory, FILENAME_MAX, "%s/cloud.d", netdata_configured_varlib_dir);
 
     if(user && *user) {
         if(become_user(user, pidfd) != 0) {
@@ -577,20 +577,7 @@ static void get_netdata_configured_variables() {
     get_system_cpus();
     get_system_pid_max();
-    // --------------------------------------------------------------------
-    // Check if the cloud is enabled
-#ifdef DISABLE_CLOUD
-    netdata_cloud_setting = 0;
-#else
-    char *cloud = config_get(CONFIG_SECTION_GLOBAL, "netdata cloud", "coming soon");
-    if (!strcmp(cloud, "coming soon")) {
-        netdata_cloud_setting = 0;      // Note: this flips to 1 after the release
-    } else if (!strcmp(cloud, "enable")) {
-        netdata_cloud_setting = 1;
-    } else if (!strcmp(cloud, "disable")) {
-        netdata_cloud_setting = 0;
-    }
-#endif
 }
 
 static void get_system_timezone(void) {
@@ -851,11 +838,41 @@ void set_silencers_filename() {
     silencers_filename = config_get(CONFIG_SECTION_HEALTH, "silencers file", filename);
 }
 
+/* Any config setting that can be accessed without a default value i.e. configget(...,...,NULL) *MUST*
+   be set in this procedure to be called in all the relevant code paths.
+*/
+void post_conf_load(char **user)
+{
+    // --------------------------------------------------------------------
+    // get the user we should run
+
+    // IMPORTANT: this is required before web_files_uid()
+    if(getuid() == 0) {
+        *user = config_get(CONFIG_SECTION_GLOBAL, "run as user", NETDATA_USER);
+    }
+    else {
+        struct passwd *passwd = getpwuid(getuid());
+        *user = config_get(CONFIG_SECTION_GLOBAL, "run as user", (passwd && passwd->pw_name)?passwd->pw_name:"");
+    }
+
+    // --------------------------------------------------------------------
+    // Check if the cloud is enabled
+#if defined( DISABLE_CLOUD ) || !defined( ENABLE_ACLK )
+    netdata_cloud_setting = 0;
+#else
+    netdata_cloud_setting = appconfig_get_boolean(&cloud_config, CONFIG_SECTION_GLOBAL, "enabled", 1);
+#endif
+    // This must be set before any point in the code that accesses it. Do not move it from this function.
+    appconfig_get(&cloud_config, CONFIG_SECTION_GLOBAL, "cloud base url", DEFAULT_CLOUD_BASE_URL);
+}
+
 int main(int argc, char **argv) {
     int i;
     int config_loaded = 0;
     int dont_fork = 0;
     size_t default_stacksize;
+    char *user = NULL;
 
     netdata_ready=0;
 
     // set the name for logging
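post_conf_load() deliberately reads "cloud base url" once with DEFAULT_CLOUD_BASE_URL so that the option always exists afterwards; later code in this commit (see registry_update_cloud_base_url() further down) can then look it up with a NULL default and treat NULL as a hard error. A minimal sketch of that ordering contract, assuming the surrounding Netdata headers; the helper name check_cloud_base_url is hypothetical:

    /* Must only be called after post_conf_load(); a NULL here means the
     * pre-seeding done in post_conf_load() was skipped or moved. */
    static void check_cloud_base_url(void) {
        char *url = appconfig_get(&cloud_config, CONFIG_SECTION_GLOBAL, "cloud base url", NULL);
        if (!url)
            fatal("post_conf_load() did not run before the cloud base url was needed");
    }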
@@ -918,6 +935,8 @@ int main(int argc, char **argv) {
                     }
                     else {
                         debug(D_OPTIONS, "Configuration loaded from %s.", optarg);
+                        post_conf_load(&user);
+                        load_cloud_conf(1);
                         config_loaded = 1;
                     }
                     break;
@@ -965,6 +984,8 @@ int main(int argc, char **argv) {
                     if(strcmp(optarg, "unittest") == 0) {
                         if(unit_test_buffer()) return 1;
                         if(unit_test_str2ld()) return 1;
+                        // No call to load the config file on this code-path
+                        post_conf_load(&user);
                         get_netdata_configured_variables();
                         default_rrd_update_every = 1;
                         default_rrd_memory_mode = RRD_MEMORY_MODE_RAM;
@@ -1065,9 +1086,9 @@ int main(int argc, char **argv) {
                         debug_flags = strtoull(optarg, NULL, 0);
                     }
                     else if(strcmp(optarg, "set") == 0) {
-                        if(optind + 3 > argc) {
-                            fprintf(stderr, "%s", "\nUSAGE: -W set 'section' 'key' 'value'\n\n"
-                                    " Overwrites settings of netdata.conf.\n"
+                        if(optind + 4 > argc) {
+                            fprintf(stderr, "%s", "\nUSAGE: -W set 'conf_file' 'section' 'key' 'value'\n\n"
+                                    " Overwrites settings of netdata.conf or cloud.conf\n"
                                     "\n"
                                    " These options interact with: -c netdata.conf\n"
                                    " If -c netdata.conf is given on the command line,\n"
@@ -1076,21 +1097,24 @@ int main(int argc, char **argv) {
                                    " If -c netdata.conf is given after (or missing)\n"
                                    " -W set... the user cannot overwrite the command line\n"
                                    " parameters."
+                                   " conf_file can be \"cloud\" or \"netdata\".\n"
                                    "\n"
                             );
                             return 1;
                         }
-                        const char *section = argv[optind];
-                        const char *key = argv[optind + 1];
-                        const char *value = argv[optind + 2];
-                        optind += 3;
+                        const char *conf_file = argv[optind];    /* "cloud" is cloud.conf, otherwise netdata.conf */
+                        struct config *tmp_config = strcmp(conf_file, "cloud") ? &netdata_config : &cloud_config;
+                        const char *section = argv[optind + 1];
+                        const char *key = argv[optind + 2];
+                        const char *value = argv[optind + 3];
+                        optind += 4;
 
                         // set this one as the default
                         // only if it is not already set in the config file
                         // so the caller can use -c netdata.conf before or
                        // after this parameter to prevent or allow overwriting
                        // variables at netdata.conf
-                        config_set_default(section, key, value);
+                        appconfig_set_default(tmp_config, section, key, value);
                         // fprintf(stderr, "SET section '%s', key '%s', value '%s'\n", section, key, value);
                     }
@@ -1109,6 +1133,7 @@ int main(int argc, char **argv) {
                         if(!config_loaded) {
                             fprintf(stderr, "warning: no configuration file has been loaded. Use -c CONFIG_FILE, before -W get. Using default config.\n");
                             load_netdata_conf(NULL, 0);
+                            post_conf_load(&user);
                         }
 
                         get_netdata_configured_variables();
@@ -1120,6 +1145,37 @@ int main(int argc, char **argv) {
                         printf("%s\n", value);
                         return 0;
                     }
+                    else if(strcmp(optarg, "get2") == 0) {
+                        if(optind + 4 > argc) {
+                            fprintf(stderr, "%s", "\nUSAGE: -W get2 'conf_file' 'section' 'key' 'value'\n\n"
+                                    " Prints settings of netdata.conf or cloud.conf\n"
+                                    "\n"
+                                    " These options interact with: -c netdata.conf\n"
+                                    " -c netdata.conf has to be given before -W get2.\n"
+                                    " conf_file can be \"cloud\" or \"netdata\".\n"
+                                    "\n"
+                            );
+                            return 1;
+                        }
+
+                        if(!config_loaded) {
+                            fprintf(stderr, "warning: no configuration file has been loaded. Use -c CONFIG_FILE, before -W get. Using default config.\n");
+                            load_netdata_conf(NULL, 0);
+                            post_conf_load(&user);
+                            load_cloud_conf(1);
+                        }
+
+                        get_netdata_configured_variables();
+
+                        const char *conf_file = argv[optind];    /* "cloud" is cloud.conf, otherwise netdata.conf */
+                        struct config *tmp_config = strcmp(conf_file, "cloud") ? &netdata_config : &cloud_config;
+                        const char *section = argv[optind + 1];
+                        const char *key = argv[optind + 2];
+                        const char *def = argv[optind + 3];
+                        const char *value = appconfig_get(tmp_config, section, key, def);
+                        printf("%s\n", value);
+                        return 0;
+                    }
                     else if(strncmp(optarg, claim_string, strlen(claim_string)) == 0) {
                         /* will trigger a claiming attempt when the agent is initialized */
                         claiming_pending_arguments = optarg + strlen(claim_string);
@@ -1149,7 +1205,11 @@ int main(int argc, char **argv) {
 #endif
 
     if(!config_loaded)
+    {
         load_netdata_conf(NULL, 0);
+        post_conf_load(&user);
+        load_cloud_conf(0);
+    }
 
     // ------------------------------------------------------------------------
@@ -1179,8 +1239,6 @@ int main(int argc, char **argv) {
             fatal("Cannot cd to '%s'", netdata_configured_user_config_dir);
     }
 
-    char *user = NULL;
-
     {
         // --------------------------------------------------------------------
         // get the debugging flags from the configuration file
@@ -1246,19 +1304,6 @@ int main(int argc, char **argv) {
         }
 
-        // --------------------------------------------------------------------
-        // get the user we should run
-
-        // IMPORTANT: this is required before web_files_uid()
-        if(getuid() == 0) {
-            user = config_get(CONFIG_SECTION_GLOBAL, "run as user", NETDATA_USER);
-        }
-        else {
-            struct passwd *passwd = getpwuid(getuid());
-            user = config_get(CONFIG_SECTION_GLOBAL, "run as user", (passwd && passwd->pw_name)?passwd->pw_name:"");
-        }
-
         // --------------------------------------------------------------------
         // create the listening sockets
@@ -341,3 +341,28 @@ void health_alarms_values2json(RRDHOST *host, BUFFER *wb, int all) {
     buffer_strcat(wb, "\n\t}\n}\n");
     rrdhost_unlock(host);
 }
+
+void health_active_log_alarms_2json(RRDHOST *host, BUFFER *wb) {
+    netdata_rwlock_rdlock(&host->health_log.alarm_log_rwlock);
+
+    buffer_sprintf(wb, "[\n");
+
+    unsigned int max = host->health_log.max;
+    unsigned int count = 0;
+    ALARM_ENTRY *ae;
+    for(ae = host->health_log.alarms; ae && count < max ; ae = ae->next) {
+        if(likely(!((ae->new_status == RRDCALC_STATUS_WARNING || ae->new_status == RRDCALC_STATUS_CRITICAL)
+                    && (ae->old_status != RRDCALC_STATUS_WARNING || ae->old_status != RRDCALC_STATUS_CRITICAL)
+                    && !ae->updated_by_id)))
+            continue;
+
+        if(likely(count)) buffer_strcat(wb, ",");
+        health_alarm_entry2json_nolock(wb, ae, host);
+        count++;
+    }
+
+    buffer_strcat(wb, "]");
+
+    netdata_rwlock_unlock(&host->health_log.alarm_log_rwlock);
+}
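The filter in the loop above is a double negative; restated as a predicate it reads as below (a sketch, not part of the patch; the function name is hypothetical). Note that the old_status clause is satisfied by every entry, since a status cannot equal both WARNING and CRITICAL at once, so the effective selection is: the latest transition raised the alarm to WARNING or CRITICAL and no later log entry has updated it.

    /* One-to-one restatement of the condition used in health_active_log_alarms_2json(). */
    static int alarm_entry_is_active(ALARM_ENTRY *ae) {
        return (ae->new_status == RRDCALC_STATUS_WARNING || ae->new_status == RRDCALC_STATUS_CRITICAL)
            && (ae->old_status != RRDCALC_STATUS_WARNING || ae->old_status != RRDCALC_STATUS_CRITICAL)
            && !ae->updated_by_id;
    }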
@@ -195,7 +195,7 @@ fi
 [ -z "${NETDATA_STOCK_CONFIG_DIR}" ] && NETDATA_STOCK_CONFIG_DIR="@libconfigdir_POST@"
 [ -z "${NETDATA_CACHE_DIR}" ] && NETDATA_CACHE_DIR="@cachedir_POST@"
 [ -z "${NETDATA_REGISTRY_URL}" ] && NETDATA_REGISTRY_URL="https://registry.my-netdata.io"
-[ -z "${NETDATA_REGISTRY_CLOUD_BASE_URL}" ] && NETDATA_REGISTRY_CLOUD_BASE_URL="https://netdata.cloud"
+[ -z "${NETDATA_REGISTRY_CLOUD_BASE_URL}" ] && NETDATA_REGISTRY_CLOUD_BASE_URL="https://app.netdata.cloud"
 
 # -----------------------------------------------------------------------------
 # parse command line parameters
@@ -289,18 +289,21 @@ char *appconfig_get_by_section(struct section *co, const char *name, const char
 {
     struct config_option *cv;
 
+    // Only calls internal to this file check for a NULL result and they do not supply a NULL arg.
+    // External caller should treat NULL as an error case.
     cv = appconfig_option_index_find(co, name, 0);
-    if(!cv) {
+    if (!cv) {
+        if (!default_value) return NULL;
         cv = appconfig_value_create(co, name, default_value);
-        if(!cv) return NULL;
+        if (!cv) return NULL;
     }
     cv->flags |= CONFIG_VALUE_USED;
 
     if((cv->flags & CONFIG_VALUE_LOADED) || (cv->flags & CONFIG_VALUE_CHANGED)) {
         // this is a loaded value from the config file
-        // if it is different that the default, mark it
+        // if it is different than the default, mark it
         if(!(cv->flags & CONFIG_VALUE_CHECKED)) {
-            if(strcmp(cv->value, default_value) != 0) cv->flags |= CONFIG_VALUE_CHANGED;
+            if(default_value && strcmp(cv->value, default_value) != 0) cv->flags |= CONFIG_VALUE_CHANGED;
             cv->flags |= CONFIG_VALUE_CHECKED;
         }
     }
@@ -308,11 +311,17 @@ char *appconfig_get_by_section(struct section *co, const char *name, const char
     return(cv->value);
 }
 
 char *appconfig_get(struct config *root, const char *section, const char *name, const char *default_value)
 {
-    debug(D_CONFIG, "request to get config in section '%s', name '%s', default_value '%s'", section, name, default_value);
+    if (default_value == NULL)
+        debug(D_CONFIG, "request to get config in section '%s', name '%s' or fail", section, name);
+    else
+        debug(D_CONFIG, "request to get config in section '%s', name '%s', default_value '%s'", section, name, default_value);
 
     struct section *co = appconfig_section_find(root, section);
+    if (!co && !default_value)
+        return NULL;
     if(!co) co = appconfig_section_create(root, section);
 
     return appconfig_get_by_section(co, name, default_value);
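With this change appconfig_get() gains a "fail if unset" mode: passing a NULL default no longer creates the option, it simply returns NULL, while passing a real default keeps the old create-with-default behaviour. A short sketch of the two calling styles, assuming the surrounding Netdata headers and that cloud.conf has already been loaded:

    /* Returns NULL when the key is absent instead of creating it. */
    static char *cloud_base_url_or_null(void) {
        return appconfig_get(&cloud_config, CONFIG_SECTION_GLOBAL, "cloud base url", NULL);
    }

    /* Creates the key with the default when it is absent (previous behaviour). */
    static char *cloud_base_url_or_default(void) {
        return appconfig_get(&cloud_config, CONFIG_SECTION_GLOBAL, "cloud base url", DEFAULT_CLOUD_BASE_URL);
    }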
@@ -323,6 +323,6 @@ extern char *netdata_configured_host_prefix;
 #include "string/utf8.h"
 
 // BEWARE: Outside of the C code this also exists in alarm-notify.sh
-#define DEFAULT_CLOUD_BASE_URL "https://netdata.cloud"
+#define DEFAULT_CLOUD_BASE_URL "https://app.netdata.cloud"
 
 #endif // NETDATA_LIB_H
@@ -296,6 +296,9 @@ while [ -n "${1}" ]; do
         NETDATA_CONFIGURE_OPTIONS="${NETDATA_CONFIGURE_OPTIONS//--enable-cloud/} --enable-cloud"
       fi
       ;;
+    "--build-json-c")
+      NETDATA_BUILD_JSON_C=1
+      ;;
     "--install")
       NETDATA_PREFIX="${2}/netdata"
       shift 1
@@ -621,6 +624,73 @@ bundle_libwebsockets() {
 
 bundle_libwebsockets
 
+# -----------------------------------------------------------------------------
+
+build_jsonc() {
+  pushd "${1}" > /dev/null || exit 1
+  run env CFLAGS= CXXFLAGS= LDFLAGS= cmake -DBUILD_SHARED_LIBS=OFF .
+  run env CFLAGS= CXXFLAGS= LDFLAGS= make
+  popd > /dev/null || exit 1
+}
+
+copy_jsonc() {
+  target_dir="${PWD}/externaldeps/jsonc"
+
+  run mkdir -p "${target_dir}" "${target_dir}/json-c" || return 1
+
+  run cp "${1}/libjson-c.a" "${target_dir}/libjson-c.a" || return 1
+  run cp ${1}/*.h "${target_dir}/json-c" || return 1
+}
+
+bundle_jsonc() {
+  # If --build-json-c flag or not json-c on system, then bundle our own json-c
+  if [ -z "${NETDATA_BUILD_JSON_C}" ] && pkg-config json-c; then
+    return 0
+  fi
+
+  if [ -z "$(command -v cmake)" ]; then
+    run_failed "Could not find cmake, which is required to build JSON-C. The install process will continue, but Netdata Cloud support will be disabled."
+    defer_error_highlighted "Could not find cmake, which is required to build JSON-C. The install process will continue, but Netdata Cloud support will be disabled."
+    return 0
+  fi
+
+  progress "Prepare JSON-C"
+
+  JSONC_PACKAGE_VERSION="$(cat packaging/jsonc.version)"
+
+  tmp="$(mktemp -d -t netdata-jsonc-XXXXXX)"
+  JSONC_PACKAGE_BASENAME="json-c-${JSONC_PACKAGE_VERSION}.tar.gz"
+
+  if fetch_and_verify "jsonc" \
+    "https://github.com/json-c/json-c/archive/${JSONC_PACKAGE_BASENAME}" \
+    "${JSONC_PACKAGE_BASENAME}" \
+    "${tmp}" \
+    "${NETDATA_LOCAL_TARBALL_OVERRIDE_JSONC}"; then
+    if run tar -xf "${tmp}/${JSONC_PACKAGE_BASENAME}" -C "${tmp}" &&
+      build_jsonc "${tmp}/json-c-json-c-${JSONC_PACKAGE_VERSION}" &&
+      copy_jsonc "${tmp}/json-c-json-c-${JSONC_PACKAGE_VERSION}" &&
+      rm -rf "${tmp}"; then
+      run_ok "JSON-C built and prepared."
+    else
+      run_failed "Failed to build JSON-C."
+      if [ -n "${NETDATA_REQUIRE_CLOUD}" ]; then
+        exit 1
+      else
+        defer_error_highlighted "Failed to build JSON-C. Netdata Cloud support will be disabled."
+      fi
+    fi
+  else
+    run_failed "Unable to fetch sources for JSON-C."
+    if [ -n "${NETDATA_REQUIRE_CLOUD}" ]; then
+      exit 1
+    else
+      defer_error_highlighted "Unable to fetch sources for JSON-C. Netdata Cloud support will be disabled."
+    fi
+  fi
+}
+
+bundle_jsonc
+
 # -----------------------------------------------------------------------------
 # If we have the dashboard switching logic, make sure we're on the classic
 # dashboard during the install (updates don't work correctly otherwise).
@@ -853,7 +923,7 @@ NETDATA_LOG_DIR="$(config_option "global" "log directory" "${NETDATA_PREFIX}/var
 NETDATA_USER_CONFIG_DIR="$(config_option "global" "config directory" "${NETDATA_PREFIX}/etc/netdata")"
 NETDATA_STOCK_CONFIG_DIR="$(config_option "global" "stock config directory" "${NETDATA_PREFIX}/usr/lib/netdata/conf.d")"
 NETDATA_RUN_DIR="${NETDATA_PREFIX}/var/run"
-NETDATA_CLAIMING_DIR="${NETDATA_USER_CONFIG_DIR}/claim.d"
+NETDATA_CLAIMING_DIR="${NETDATA_LIB_DIR}/cloud.d"
 
 cat << OPTIONSEOF
@@ -1 +1 @@
-aa0056df0e79720cf9bda68a9e4051cd021f40c482f64bfa46645cdd1243b93e dashboard.tar.gz
+c48c971cef360a08ac82ae43c7f50fdce32da4b0fc78b0f67b296fa154dfd7a6 dashboard.tar.gz
@@ -1 +1 @@
-v0.4.18
+v1.0.14
@@ -98,8 +98,8 @@ RUN \
     /var/cache/netdata \
     /var/lib/netdata \
     /var/log/netdata && \
-  chown -R netdata:netdata /etc/netdata/claim.d && \
-  chmod 0700 /etc/netdata/claim.d && \
+  chown -R netdata:netdata /var/lib/netdata/cloud.d && \
+  chmod 0700 /var/lib/netdata/cloud.d && \
   chmod 0755 /usr/libexec/netdata/plugins.d/*.plugin && \
   chmod 4755 \
     /usr/libexec/netdata/plugins.d/cgroup-network \
@@ -0,0 +1 @@
+ec4eb70e0f6c0d707b9b1ec646cf7c860f4abb3562a90ea6e4d78d177fd95303 json-c-0.14-20200419.tar.gz
packaging/jsonc.version (new file)
@@ -0,0 +1 @@
+0.14-20200419
@@ -129,6 +129,20 @@ static inline int registry_person_url_callback_verify_machine_exists(void *entry
     return 0;
 }
 
+// ----------------------------------------------------------------------------
+// dynamic update of the configuration
+// The registry does not seem to be designed to support this and I cannot see any concurrency protection
+// that could make this safe, so try to be as atomic as possible.
+
+void registry_update_cloud_base_url()
+{
+    // This is guaranteed to be set early in main via post_conf_load()
+    registry.cloud_base_url = appconfig_get(&cloud_config, CONFIG_SECTION_GLOBAL, "cloud base url", NULL);
+    if (registry.cloud_base_url == NULL)
+        fatal("Do not move the cloud base url out of post_conf_load!!");
+
+    setenv("NETDATA_REGISTRY_CLOUD_BASE_URL", registry.cloud_base_url, 1);
+}
+
 // ----------------------------------------------------------------------------
 // public HELLO request
@@ -68,6 +68,9 @@ extern int registry_request_search_json(RRDHOST *host, struct web_client *w, cha
 extern int registry_request_switch_json(RRDHOST *host, struct web_client *w, char *person_guid, char *machine_guid, char *url, char *new_person_guid, time_t when);
 extern int registry_request_hello_json(RRDHOST *host, struct web_client *w);
 
+// update the registry config
+extern void registry_update_cloud_base_url();
+
 // update the registry monitoring charts
 extern void registry_statistics(void);
@@ -40,10 +40,7 @@ int registry_init(void) {
     registry.hostname = config_get(CONFIG_SECTION_REGISTRY, "registry hostname", netdata_configured_hostname);
     registry.verify_cookies_redirects = config_get_boolean(CONFIG_SECTION_REGISTRY, "verify browser cookies support", 1);
 
-    // netdata.cloud configuration, if cloud_base_url == "", cloud functionality is disabled.
-    registry.cloud_base_url = config_get(CONFIG_SECTION_CLOUD, "cloud base url", DEFAULT_CLOUD_BASE_URL);
-    setenv("NETDATA_REGISTRY_CLOUD_BASE_URL", registry.cloud_base_url, 1);
+    registry_update_cloud_base_url();
     setenv("NETDATA_REGISTRY_HOSTNAME", registry.hostname, 1);
     setenv("NETDATA_REGISTRY_URL", registry.registry_to_announce, 1);
@@ -54,4 +54,4 @@
     allow from = *
 
 [cloud]
-    cloud base url = https://netdata.cloud
+    cloud base url = https://app.netdata.cloud
@@ -54,4 +54,4 @@
     allow from = *
 
 [cloud]
-    cloud base url = https://netdata.cloud
+    cloud base url = https://app.netdata.cloud
@@ -865,11 +865,10 @@ inline int web_client_api_request_v1_info_fill_buffer(RRDHOST *host, BUFFER *wb)
 #ifdef DISABLE_CLOUD
     buffer_strcat(wb, "\t\"cloud-enabled\": false,\n");
 #else
-    if (netdata_cloud_setting)
-        buffer_strcat(wb, "\t\"cloud-enabled\": true,\n");
-    else
-        buffer_strcat(wb, "\t\"cloud-enabled\": false,\n");
+    buffer_sprintf(wb, "\t\"cloud-enabled\": %s,\n",
+                   appconfig_get_boolean(&cloud_config, CONFIG_SECTION_GLOBAL, "enabled", 1) ? "true" : "false");
 #endif
 
 #ifdef ENABLE_ACLK
     buffer_strcat(wb, "\t\"cloud-available\": true,\n");
 #else
@@ -792,11 +792,6 @@ function renderMyNetdataMenu(machinesArray) {
     if (!isSignedIn()) {
         html += (
             `<div class="agent-item">
-                <i class="fas fa-tv"></i>
-                <a onClick="openAuthenticatedUrl('console.html');" target="_blank">Nodes<sup class="beta"> beta</sup></a>
-                <div></div>
-            </div>
-            <div class="agent-item">
                 <i class="fas fa-cog""></i>
                 <a href="#" onclick="switchRegistryModalHandler(); return false;">Switch Identity</a>
                 <div></div>
@@ -4807,11 +4802,7 @@ function signInDidClick(e) {
 }
 
 function shouldShowSignInBanner() {
-    if (isSignedIn()) {
-        return false;
-    }
-
-    return localStorage.getItem("signInBannerClosed") != "true";
+    return false;
 }
 
 function closeSignInBanner() {
@@ -4895,43 +4886,6 @@ function signOut() {
     cloudSSOSignOut();
 }
 
-function renderAccountUI() {
-    if (!NETDATA.registry.isCloudEnabled) {
-        return
-    }
-
-    const container = document.getElementById("account-menu-container");
-    if (isSignedIn()) {
-        container.removeAttribute("title");
-        container.removeAttribute("data-original-title");
-        container.removeAttribute("data-placement");
-        container.innerHTML = (
-            `<a href="#" class="dropdown-toggle" data-toggle="dropdown"><span id="amc-account-name"></span> <strong class="caret"></strong></a>
-            <ul id="cloud-menu" class="dropdown-menu scrollable-menu inpagemenu" role="menu">
-                <li>
-                    <a onclick="openAuthenticatedUrl('console.html');" target="_blank" class="btn">
-                        <i class="fas fa-tv"></i>&nbsp;&nbsp;<span class="hidden-sm hidden-md">Nodes<sup class="beta"> beta</sup></span>
-                    </a>
-                </li>
-                <li>
-                    <a href="#" class="btn" onclick="signOutDidClick(event); return false">
-                        <i class="fas fa-sign-out-alt"></i>&nbsp;&nbsp;<span class="hidden-sm hidden-md">Sign Out</span>
-                    </a>
-                </li>
-            </ul>`
-        )
-        document.getElementById("amc-account-name").textContent = cloudAccountName; // Anti-XSS
-    } else {
-        container.setAttribute("data-original-title", "sign in");
-        container.setAttribute("data-placement", "bottom");
-        container.innerHTML = (
-            `<a href="#" class="btn sign-in-btn theme-${netdataTheme}" onclick="signInDidClick(event); return false">
-                <i class="fas fa-sign-in-alt"></i>&nbsp;<span class="hidden-sm hidden-md">Sign In</span>
-            </a>`
-        )
-    }
-}
-
 function handleMessage(e) {
     switch (e.data.type) {
         case "sign-in":
@@ -4964,7 +4918,6 @@ function handleSignInMessage(e) {
 
 function handleSignOutMessage(e) {
     clearCloudVariables();
-    renderAccountUI();
     renderMyNetdataMenu(registryAgents);
 }
 
@@ -5118,7 +5071,6 @@ function initCloud() {
     }
 
     touchAgent();
-    renderAccountUI();
 }
 
 // This callback is called after NETDATA.registry is initialized.
@@ -33,7 +33,7 @@
     <meta name="twitter:description" content="Unparalleled insights, in real-time, of everything happening on your Linux systems and applications, with stunning, interactive web dashboards and powerful performance and health alarms." />
     <meta name="twitter:image" content="https://cloud.githubusercontent.com/assets/2662304/14092712/93b039ea-f551-11e5-822c-beadbf2b2a2e.gif" />
 
-    <script src="../main.js?v20190905-0"></script>
+    <script src="../main.js?v20200429-0"></script>
 </head>
 <body data-spy="scroll" data-target="#sidebar" data-offset="100">
@@ -90,7 +90,6 @@
             </div>
             <nav class="collapse navbar-collapse navbar-right" role="navigation">
                 <ul class="nav navbar-nav">
-                    <li title="Nodes view" data-toggle="tooltip" data-placement="bottom"><a onclick="openAuthenticatedUrl('console.html');" class="btn" target="_blank"><i class="fas fa-tv"></i>&nbsp;<span class="hidden-sm hidden-md">Nodes<sup class="beta"> beta</sup></span></a></li>
                     <li id="alarmsButton" title="check the health monitoring alarms and their log" data-toggle="tooltip" data-placement="bottom"><a href="#" class="btn" data-toggle="modal" data-target="#alarmsModal"><i class="fas fa-bell"></i>&nbsp;<span class="hidden-sm hidden-md">Alarms&nbsp;</span><span id="alarms_count_badge" class="badge"></span></a></li>
                     <li title="change dashboard settings" data-toggle="tooltip" data-placement="bottom"><a href="#" class="btn" data-toggle="modal" data-target="#optionsModal"><i class="fas fa-cog"></i>&nbsp;<span class="hidden-sm hidden-md">Settings</span></a></li>
                     <li title="check for netdata updates<br/>you should keep your netdata updated" data-toggle="tooltip" data-placement="bottom" class="hidden-sm" id="updateButton"><a href="#" class="btn" data-toggle="modal" data-target="#updateModal"><i class="fas fa-cloud-download-alt"></i> <span class="hidden-sm hidden-md">Update </span><span id="update_badge" class="badge"></span></a></li>