
eBPF socket function ()

This commit is contained in:
thiagoftsm 2023-09-14 13:33:59 -03:00 committed by GitHub
parent ce055a9679
commit 8fbb89b1db
29 changed files with 2930 additions and 2442 deletions


@ -261,7 +261,7 @@ You can also enable the following eBPF programs:
- `swap` : This eBPF program creates charts that show information about swap access.
- `mdflush`: This eBPF program creates charts that show information about
- `sync`: Monitor calls to syscalls sync(2), fsync(2), fdatasync(2), syncfs(2), msync(2), and sync_file_range(2).
- `network viewer`: This eBPF program creates charts with information about `TCP` and `UDP` functions, including the
- `socket`: This eBPF program creates charts with information about `TCP` and `UDP` functions, including the
bandwidth consumed by each.
multi-device software flushes.
- `vfs`: This eBPF program creates charts that show information about VFS (Virtual File System) functions.
@ -302,12 +302,13 @@ are divided in the following sections:
#### `[network connections]`
You can configure the information shown on `outbound` and `inbound` charts with the settings in this section.
You can configure the information shown by the `ebpf_socket` function using the settings in this section.
```conf
[network connections]
maximum dimensions = 500
enabled = yes
resolve hostname ips = no
resolve service names = yes
ports = 1-1024 !145 !domain
hostnames = !example.com
ips = !127.0.0.1/8 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 fc00::/7
@ -318,24 +319,23 @@ write `ports = 19999`, Netdata will collect only connections for itself. The `ho
[simple patterns](https://github.com/netdata/netdata/blob/master/libnetdata/simple_pattern/README.md). The `ports` and `ips` settings accept negation (`!`) to deny
specific values, or an asterisk alone to match all values.
In the above example, Netdata will collect metrics for all ports between 1 and 443, with the exception of 53 (domain)
and 145.
In the above example, Netdata will collect metrics for all ports between `1` and `1024`, with the exception of `53` (domain)
and `145`.
The following options are available:
- `enabled`: Enable or disable network connections monitoring. This can directly affect the output of some functions.
- `resolve hostname ips`: Enable resolving IPs to hostnames. It is disabled by default because it can be too slow.
- `resolve service names`: Convert destination ports into service names, for example, port `53` protocol `UDP` becomes `domain`.
All service names are read from `/etc/services`.
- `ports`: Define the destination ports for Netdata to monitor.
- `hostnames`: The list of hostnames that can be resolved to an IP address.
- `ips`: The IP or range of IPs that you want to monitor. You can use IPv4 or IPv6 addresses, use dashes to define a
range of IPs, or use CIDR values. By default, only data for private IP addresses is collected, but this can
be changed with the `ips` setting.
range of IPs, or use CIDR values.
By default, Netdata displays up to 500 dimensions on network connection charts. If there are more possible dimensions,
they will be bundled into the `other` dimension. You can increase the number of shown dimensions by changing
the `maximum dimensions` setting.
The dimensions for the traffic charts are created using the destination IPs of the sockets by default. This can be
changed setting `resolve hostname ips = yes` and restarting Netdata, after this Netdata will create dimensions using
the `hostnames` every time that is possible to resolve IPs to their hostnames.
By default, the traffic table is created using the destination IPs and ports of the sockets. This can be
changed, so that Netdata uses service names (if possible), by specifying `resolve service names = yes` in the
configuration section.
#### `[service name]`
@ -990,13 +990,15 @@ shows how the lockdown module impacts `ebpf.plugin` based on the selected option
If you or your distribution compiled the kernel with the last combination, your system cannot load shared libraries
required to run `ebpf.plugin`.
## Function
## Functions
### ebpf_thread
The eBPF plugin has a [function](https://github.com/netdata/netdata/blob/master/docs/cloud/netdata-functions.md) named
`ebpf_thread` that controls its internal threads and helps to reduce the overhead on the host. Using this function, you
can run the plugin with all threads disabled and enable them only when you want to look at specific areas.
### List threads
#### List threads
To list the status of all threads, you can query the function endpoint directly.
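For example (an illustrative query that follows the same endpoint pattern as the other requests in this section):
`http://localhost:19999/api/v1/function?function=ebpf_thread`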
@ -1006,7 +1008,7 @@ It is also possible to query a specific thread adding keyword `thread` and threa
`http://localhost:19999/api/v1/function?function=ebpf_thread%20thread:mount`
### Enable thread
#### Enable thread
It is possible to enable a specific thread using the keyword `enable`:
@ -1019,14 +1021,14 @@ after the thread name:
In this example, the thread `mount` will run for 600 seconds (10 minutes).
### Disable thread
#### Disable thread
It is also possible to stop any running thread using the keyword `disable`. For example, to disable `cachestat` you can
request:
`http://localhost:19999/api/v1/function?function=ebpf_thread%20disable:cachestat`
### Debugging threads
#### Debugging threads
You can verify the impact of threads on the host by running the
[ebpf_thread_function.sh](https://github.com/netdata/netdata/blob/master/tests/ebpf/ebpf_thread_function.sh)
@ -1036,3 +1038,34 @@ You can check the results of having threads running on your environment in the N
dashboard.
<img src="https://github.com/netdata/netdata/assets/49162938/91823573-114c-4c16-b634-cc46f7bb1bcf" alt="Threads running." />
### ebpf_socket
The eBPF plugin has a [function](https://github.com/netdata/netdata/blob/master/docs/cloud/netdata-functions.md) named
`ebpf_socket` that shows the current status of open sockets on the host.
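Following the same endpoint pattern as `ebpf_thread`, a query without filters (shown here as an illustration) returns the full socket table:
`http://localhost:19999/api/v1/function?function=ebpf_socket`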
#### Families
By default, the plugin shows sockets for both `IPV4` and `IPV6`, but it is possible to select a specific family by
passing it as an argument:
`http://localhost:19999/api/v1/function?function=ebpf_socket%20family:IPV4`
#### Resolve
The plugin resolves ports to service names by default. You can show the port number by disabling the name resolution:
`http://localhost:19999/api/v1/function?function=ebpf_socket%20resolve:NO`
#### CIDR
The plugin shows connections for all possible destination IPs by default. You can limit the range by specifying the CIDR:
`http://localhost:19999/api/v1/function?function=ebpf_socket%20cidr:192.168.1.0/24`
#### PORT
The plugin shows connections for all possible ports by default. You can limit the range by specifying a port or range
of ports:
`http://localhost:19999/api/v1/function?function=ebpf_socket%20port:1-1024`
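Filters can be combined, and each filter can be given only once per request. An illustrative query combining the filters above:
`http://localhost:19999/api/v1/function?function=ebpf_socket%20family:IPV4%20cidr:192.168.1.0/24%20port:1-1024`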

File diff suppressed because it is too large.


@ -26,6 +26,11 @@
#
# The `maps per core` defines if hash tables will be per core or not. This option is ignored on kernels older than 4.6.
#
# The `collect pid` option defines which PID is stored inside the hash tables and accepts the following options:
#  `real parent`: Only stores the real parent PID.
# `parent` : Only stores parent PID.
# `all` : Stores all PIDs used by software. This is the most expensive option.
#
# The `lifetime` defines the time length a thread will run when it is enabled by a function.
#
# Uncomment lines to define specific options for thread.
@ -35,12 +40,12 @@
# cgroups = no
# update every = 10
bandwidth table size = 16384
ipv4 connection table size = 16384
ipv6 connection table size = 16384
socket monitoring table size = 16384
udp connection table size = 4096
ebpf type format = auto
ebpf co-re tracing = trampoline
ebpf co-re tracing = probe
maps per core = no
collect pid = all
lifetime = 300
#
@ -49,11 +54,12 @@
# This is a feature with status WIP (Work in Progress)
#
[network connections]
maximum dimensions = 50
enabled = yes
resolve hostnames = no
resolve service names = no
resolve service names = yes
ports = *
ips = !127.0.0.1/8 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 fc00::/7 !::1/128
# ips = !127.0.0.1/8 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 fc00::/7 !::1/128
ips = *
hostnames = *
[service name]


@ -31,6 +31,7 @@
#include "daemon/main.h"
#include "ebpf_apps.h"
#include "ebpf_functions.h"
#include "ebpf_cgroup.h"
#define NETDATA_EBPF_OLD_CONFIG_FILE "ebpf.conf"
@ -98,6 +99,26 @@ typedef struct netdata_error_report {
int err;
} netdata_error_report_t;
typedef struct netdata_ebpf_judy_pid {
ARAL *pid_table;
// Index for PIDs
struct { // support for multiple indexing engines
Pvoid_t JudyLArray; // the hash table
RW_SPINLOCK rw_spinlock; // protect the index
} index;
} netdata_ebpf_judy_pid_t;
typedef struct netdata_ebpf_judy_pid_stats {
char *cmdline;
// Index for Socket timestamp
struct { // support for multiple indexing engines
Pvoid_t JudyLArray; // the hash table
RW_SPINLOCK rw_spinlock; // protect the index
} socket_stats;
} netdata_ebpf_judy_pid_stats_t;
extern ebpf_module_t ebpf_modules[];
enum ebpf_main_index {
EBPF_MODULE_PROCESS_IDX,
@ -322,10 +343,19 @@ void ebpf_unload_legacy_code(struct bpf_object *objects, struct bpf_link **probe
void ebpf_read_global_table_stats(netdata_idx_t *stats, netdata_idx_t *values, int map_fd,
int maps_per_core, uint32_t begin, uint32_t end);
void **ebpf_judy_insert_unsafe(PPvoid_t arr, Word_t key);
netdata_ebpf_judy_pid_stats_t *ebpf_get_pid_from_judy_unsafe(PPvoid_t judy_array, uint32_t pid);
void parse_network_viewer_section(struct config *cfg);
void ebpf_clean_ip_structure(ebpf_network_viewer_ip_list_t **clean);
void ebpf_clean_port_structure(ebpf_network_viewer_port_list_t **clean);
void ebpf_read_local_addresses_unsafe();
extern ebpf_filesystem_partitions_t localfs[];
extern ebpf_sync_syscalls_t local_syscalls[];
extern int ebpf_exit_plugin;
void ebpf_stop_threads(int sig);
extern netdata_ebpf_judy_pid_t ebpf_judy_pid;
#define EBPF_MAX_SYNCHRONIZATION_TIME 300


@ -375,58 +375,6 @@ int ebpf_read_hash_table(void *ep, int fd, uint32_t pid)
return -1;
}
/**
* Read socket statistic
*
* Read information from kernel ring to user ring.
*
* @param ep the table with all process stats values.
* @param fd the file descriptor mapped from kernel
* @param ef a pointer for the functions mapped from dynamic library
* @param pids the list of pids associated to a target.
*
* @return
*/
size_t read_bandwidth_statistic_using_pid_on_target(ebpf_bandwidth_t **ep, int fd, struct ebpf_pid_on_target *pids)
{
size_t count = 0;
while (pids) {
uint32_t current_pid = pids->pid;
if (!ebpf_read_hash_table(ep[current_pid], fd, current_pid))
count++;
pids = pids->next;
}
return count;
}
/**
* Read bandwidth statistic using hash table
*
* @param out the output tensor that will receive the information.
* @param fd the file descriptor that has the data
* @param bpf_map_lookup_elem a pointer for the function to read the data
* @param bpf_map_get_next_key a pointer fo the function to read the index.
*/
size_t read_bandwidth_statistic_using_hash_table(ebpf_bandwidth_t **out, int fd)
{
size_t count = 0;
uint32_t key = 0;
uint32_t next_key = 0;
while (bpf_map_get_next_key(fd, &key, &next_key) == 0) {
ebpf_bandwidth_t *eps = out[next_key];
if (!eps) {
eps = callocz(1, sizeof(ebpf_process_stat_t));
out[next_key] = eps;
}
ebpf_read_hash_table(eps, fd, next_key);
}
return count;
}
/*****************************************************************
*
* FUNCTIONS CALLED FROM COLLECTORS
@ -887,6 +835,7 @@ static inline int read_proc_pid_cmdline(struct ebpf_pid_stat *p)
{
static char cmdline[MAX_CMDLINE + 1];
int ret = 0;
if (unlikely(!p->cmdline_filename)) {
char filename[FILENAME_MAX + 1];
snprintfz(filename, FILENAME_MAX, "%s/proc/%d/cmdline", netdata_configured_host_prefix, p->pid);
@ -909,20 +858,23 @@ static inline int read_proc_pid_cmdline(struct ebpf_pid_stat *p)
cmdline[i] = ' ';
}
if (p->cmdline)
freez(p->cmdline);
p->cmdline = strdupz(cmdline);
debug_log("Read file '%s' contents: %s", p->cmdline_filename, p->cmdline);
return 1;
ret = 1;
cleanup:
// copy the command to the command line
if (p->cmdline)
freez(p->cmdline);
p->cmdline = strdupz(p->comm);
return 0;
rw_spinlock_write_lock(&ebpf_judy_pid.index.rw_spinlock);
netdata_ebpf_judy_pid_stats_t *pid_ptr = ebpf_get_pid_from_judy_unsafe(&ebpf_judy_pid.index.JudyLArray, p->pid);
if (pid_ptr)
pid_ptr->cmdline = p->cmdline;
rw_spinlock_write_unlock(&ebpf_judy_pid.index.rw_spinlock);
return ret;
}
/**
@ -1238,6 +1190,24 @@ static inline void del_pid_entry(pid_t pid)
freez(p->status_filename);
freez(p->io_filename);
freez(p->cmdline_filename);
rw_spinlock_write_lock(&ebpf_judy_pid.index.rw_spinlock);
netdata_ebpf_judy_pid_stats_t *pid_ptr = ebpf_get_pid_from_judy_unsafe(&ebpf_judy_pid.index.JudyLArray, p->pid);
if (pid_ptr) {
if (pid_ptr->socket_stats.JudyLArray) {
Word_t local_socket = 0;
Pvoid_t *socket_value;
bool first_socket = true;
while ((socket_value = JudyLFirstThenNext(pid_ptr->socket_stats.JudyLArray, &local_socket, &first_socket))) {
netdata_socket_plus_t *socket_clean = *socket_value;
aral_freez(aral_socket_table, socket_clean);
}
JudyLFreeArray(&pid_ptr->socket_stats.JudyLArray, PJE0);
}
JudyLDel(&ebpf_judy_pid.index.JudyLArray, p->pid, PJE0);
}
rw_spinlock_write_unlock(&ebpf_judy_pid.index.rw_spinlock);
freez(p->cmdline);
ebpf_pid_stat_release(p);
@ -1279,12 +1249,6 @@ int get_pid_comm(pid_t pid, size_t n, char *dest)
*/
void cleanup_variables_from_other_threads(uint32_t pid)
{
// Clean socket structures
if (socket_bandwidth_curr) {
ebpf_socket_release(socket_bandwidth_curr[pid]);
socket_bandwidth_curr[pid] = NULL;
}
// Clean cachestat structure
if (cachestat_pid) {
ebpf_cachestat_release(cachestat_pid[pid]);


@ -150,24 +150,6 @@ typedef struct ebpf_process_stat {
uint8_t removeme;
} ebpf_process_stat_t;
typedef struct ebpf_bandwidth {
uint32_t pid;
uint64_t first; // First timestamp
uint64_t ct; // Last timestamp
uint64_t bytes_sent; // Bytes sent
uint64_t bytes_received; // Bytes received
uint64_t call_tcp_sent; // Number of times tcp_sendmsg was called
uint64_t call_tcp_received; // Number of times tcp_cleanup_rbuf was called
uint64_t retransmit; // Number of times tcp_retransmit was called
uint64_t call_udp_sent; // Number of times udp_sendmsg was called
uint64_t call_udp_received; // Number of times udp_recvmsg was called
uint64_t close; // Number of times tcp_close was called
uint64_t drop; // THIS IS NOT USED FOR WHILE, we are in groom section
uint32_t tcp_v4_connection; // Number of times tcp_v4_connection was called.
uint32_t tcp_v6_connection; // Number of times tcp_v6_connection was called.
} ebpf_bandwidth_t;
/**
* Internal function used to write debug messages.
*
@ -208,12 +190,6 @@ int ebpf_read_hash_table(void *ep, int fd, uint32_t pid);
int get_pid_comm(pid_t pid, size_t n, char *dest);
size_t read_processes_statistic_using_pid_on_target(ebpf_process_stat_t **ep,
int fd,
struct ebpf_pid_on_target *pids);
size_t read_bandwidth_statistic_using_pid_on_target(ebpf_bandwidth_t **ep, int fd, struct ebpf_pid_on_target *pids);
void collect_data_for_all_processes(int tbl_pid_stats_fd, int maps_per_core);
void ebpf_process_apps_accumulator(ebpf_process_stat_t *out, int maps_per_core);


@ -1479,7 +1479,7 @@ static int ebpf_cachestat_load_bpf(ebpf_module_t *em)
#endif
if (ret)
netdata_log_error("%s %s", EBPF_DEFAULT_ERROR_MSG, em->thread_name);
netdata_log_error("%s %s", EBPF_DEFAULT_ERROR_MSG, em->info.thread_name);
return ret;
}


@ -21,7 +21,7 @@ struct pid_on_target2 {
ebpf_process_stat_t ps;
netdata_dcstat_pid_t dc;
netdata_publish_shm_t shm;
ebpf_bandwidth_t socket;
netdata_socket_t socket;
netdata_cachestat_pid_t cachestat;
struct pid_on_target2 *next;


@ -1311,7 +1311,7 @@ static int ebpf_dcstat_load_bpf(ebpf_module_t *em)
#endif
if (ret)
netdata_log_error("%s %s", EBPF_DEFAULT_ERROR_MSG, em->thread_name);
netdata_log_error("%s %s", EBPF_DEFAULT_ERROR_MSG, em->info.thread_name);
return ret;
}


@ -873,7 +873,7 @@ static int ebpf_disk_load_bpf(ebpf_module_t *em)
#endif
if (ret)
netdata_log_error("%s %s", EBPF_DEFAULT_ERROR_MSG, em->thread_name);
netdata_log_error("%s %s", EBPF_DEFAULT_ERROR_MSG, em->info.thread_name);
return ret;
}


@ -1337,7 +1337,7 @@ static int ebpf_fd_load_bpf(ebpf_module_t *em)
#endif
if (ret)
netdata_log_error("%s %s", EBPF_DEFAULT_ERROR_MSG, em->thread_name);
netdata_log_error("%s %s", EBPF_DEFAULT_ERROR_MSG, em->info.thread_name);
return ret;
}


@ -470,12 +470,12 @@ int ebpf_filesystem_initialize_ebpf_data(ebpf_module_t *em)
{
pthread_mutex_lock(&lock);
int i;
const char *saved_name = em->thread_name;
const char *saved_name = em->info.thread_name;
uint64_t kernels = em->kernels;
for (i = 0; localfs[i].filesystem; i++) {
ebpf_filesystem_partitions_t *efp = &localfs[i];
if (!efp->probe_links && efp->flags & NETDATA_FILESYSTEM_LOAD_EBPF_PROGRAM) {
em->thread_name = efp->filesystem;
em->info.thread_name = efp->filesystem;
em->kernels = efp->kernels;
em->maps = efp->fs_maps;
#ifdef LIBBPF_MAJOR_VERSION
@ -484,7 +484,7 @@ int ebpf_filesystem_initialize_ebpf_data(ebpf_module_t *em)
if (em->load & EBPF_LOAD_LEGACY) {
efp->probe_links = ebpf_load_program(ebpf_plugin_dir, em, running_on_kernel, isrh, &efp->objects);
if (!efp->probe_links) {
em->thread_name = saved_name;
em->info.thread_name = saved_name;
em->kernels = kernels;
em->maps = NULL;
pthread_mutex_unlock(&lock);
@ -495,7 +495,7 @@ int ebpf_filesystem_initialize_ebpf_data(ebpf_module_t *em)
else {
efp->fs_obj = filesystem_bpf__open();
if (!efp->fs_obj) {
em->thread_name = saved_name;
em->info.thread_name = saved_name;
em->kernels = kernels;
return -1;
} else {
@ -515,7 +515,7 @@ int ebpf_filesystem_initialize_ebpf_data(ebpf_module_t *em)
}
efp->flags &= ~NETDATA_FILESYSTEM_LOAD_EBPF_PROGRAM;
}
em->thread_name = saved_name;
em->info.thread_name = saved_name;
pthread_mutex_unlock(&lock);
em->kernels = kernels;
em->maps = NULL;


@ -3,6 +3,42 @@
#include "ebpf.h"
#include "ebpf_functions.h"
/*****************************************************************
* EBPF FUNCTION COMMON
*****************************************************************/
RW_SPINLOCK rw_spinlock; // protect the buffer
/**
* Function Start thread
*
* Start a specific thread after user request.
*
* @param em The structure with thread information
* @param period the time in seconds the thread will run; values less than or equal to zero fall back to EBPF_DEFAULT_LIFETIME.
* @return the value returned by netdata_thread_create (zero on success).
*/
static int ebpf_function_start_thread(ebpf_module_t *em, int period)
{
struct netdata_static_thread *st = em->thread;
// another request for thread that already ran, cleanup and restart
if (st->thread)
freez(st->thread);
if (period <= 0)
period = EBPF_DEFAULT_LIFETIME;
st->thread = mallocz(sizeof(netdata_thread_t));
em->enabled = NETDATA_THREAD_EBPF_FUNCTION_RUNNING;
em->lifetime = period;
#ifdef NETDATA_INTERNAL_CHECKS
netdata_log_info("Starting thread %s with lifetime = %d", em->info.thread_name, period);
#endif
return netdata_thread_create(st->thread, st->name, NETDATA_THREAD_OPTION_DEFAULT, st->start_routine, em);
}
/*****************************************************************
* EBPF SELECT MODULE
*****************************************************************/
@ -17,7 +53,7 @@
ebpf_module_t *ebpf_functions_select_module(const char *thread_name) {
int i;
for (i = 0; i < EBPF_MODULE_FUNCTION_IDX; i++) {
if (strcmp(ebpf_modules[i].thread_name, thread_name) == 0) {
if (strcmp(ebpf_modules[i].info.thread_name, thread_name) == 0) {
return &ebpf_modules[i];
}
}
@ -56,7 +92,6 @@ static void ebpf_function_thread_manipulation_help(const char *transaction) {
" Disable a sp.\n"
"\n"
"Filters can be combined. Each filter can be given only one time.\n"
"Process thread is not controlled by functions until we finish the creation of functions per thread..\n"
);
pthread_mutex_lock(&lock);
@ -66,7 +101,6 @@ static void ebpf_function_thread_manipulation_help(const char *transaction) {
buffer_free(wb);
}
/*****************************************************************
* EBPF ERROR FUNCTIONS
*****************************************************************/
@ -91,7 +125,7 @@ static void ebpf_function_error(const char *transaction, int code, const char *m
*****************************************************************/
/**
* Function enable
* Function: thread
*
* Enable a specific thread.
*
@ -140,27 +174,15 @@ static void ebpf_function_thread_manipulation(const char *transaction,
pthread_mutex_lock(&ebpf_exit_cleanup);
if (lem->enabled > NETDATA_THREAD_EBPF_FUNCTION_RUNNING) {
struct netdata_static_thread *st = lem->thread;
// Load configuration again
ebpf_update_module(lem, default_btf, running_on_kernel, isrh);
// another request for thread that already ran, cleanup and restart
if (st->thread)
freez(st->thread);
if (period <= 0)
period = EBPF_DEFAULT_LIFETIME;
st->thread = mallocz(sizeof(netdata_thread_t));
lem->enabled = NETDATA_THREAD_EBPF_FUNCTION_RUNNING;
lem->lifetime = period;
#ifdef NETDATA_INTERNAL_CHECKS
netdata_log_info("Starting thread %s with lifetime = %d", thread_name, period);
#endif
netdata_thread_create(st->thread, st->name, NETDATA_THREAD_OPTION_DEFAULT,
st->start_routine, lem);
if (ebpf_function_start_thread(lem, period)) {
ebpf_function_error(transaction,
HTTP_RESP_INTERNAL_SERVER_ERROR,
"Cannot start thread.");
return;
}
} else {
lem->running_time = 0;
if (period > 0) // user is modifying period to run
@ -225,10 +247,10 @@ static void ebpf_function_thread_manipulation(const char *transaction,
// THE ORDER SHOULD BE THE SAME WITH THE FIELDS!
// thread name
buffer_json_add_array_item_string(wb, wem->thread_name);
buffer_json_add_array_item_string(wb, wem->info.thread_name);
// description
buffer_json_add_array_item_string(wb, wem->thread_description);
buffer_json_add_array_item_string(wb, wem->info.thread_description);
// Either it is not running or received a disabled signal and it is stopping.
if (wem->enabled > NETDATA_THREAD_EBPF_FUNCTION_RUNNING ||
(!wem->lifetime && (int)wem->running_time == wem->update_every)) {
@ -266,7 +288,7 @@ static void ebpf_function_thread_manipulation(const char *transaction,
RRDF_FIELD_VISUAL_VALUE, RRDF_FIELD_TRANSFORM_NONE, 0, NULL, NAN,
RRDF_FIELD_SORT_ASCENDING, NULL, RRDF_FIELD_SUMMARY_COUNT,
RRDF_FIELD_FILTER_MULTISELECT,
RRDF_FIELD_OPTS_VISIBLE | RRDF_FIELD_OPTS_STICKY, NULL);
RRDF_FIELD_OPTS_VISIBLE | RRDF_FIELD_OPTS_STICKY | RRDF_FIELD_OPTS_UNIQUE_KEY, NULL);
buffer_rrdf_table_add_field(wb, fields_id++, "Description", "Thread Desc", RRDF_FIELD_TYPE_STRING,
RRDF_FIELD_VISUAL_VALUE, RRDF_FIELD_TRANSFORM_NONE, 0, NULL, NAN,
@ -355,6 +377,698 @@ static void ebpf_function_thread_manipulation(const char *transaction,
buffer_free(wb);
}
/*****************************************************************
* EBPF SOCKET FUNCTION
*****************************************************************/
/**
* Thread Help
*
* Shows help with all options accepted by thread function.
*
* @param transaction the transaction id that Netdata sent for this function execution
*/
static void ebpf_function_socket_help(const char *transaction) {
pthread_mutex_lock(&lock);
pluginsd_function_result_begin_to_stdout(transaction, HTTP_RESP_OK, "text/plain", now_realtime_sec() + 3600);
fprintf(stdout, "%s",
"ebpf.plugin / socket\n"
"\n"
"Function `socket` display information for all open sockets during ebpf.plugin runtime.\n"
"During thread runtime the plugin is always collecting data, but when an option is modified, the plugin\n"
"resets completely the previous table and can show a clean data for the first request before to bring the\n"
"modified request.\n"
"\n"
"The following filters are supported:\n"
"\n"
" family:FAMILY\n"
" Shows information for the FAMILY specified. Option accepts IPV4, IPV6 and all, that is the default.\n"
"\n"
" period:PERIOD\n"
" Enable socket to run a specific PERIOD in seconds. When PERIOD is not\n"
" specified plugin will use the default 300 seconds\n"
"\n"
" resolve:BOOL\n"
" Resolve service name, default value is YES.\n"
"\n"
" range:CIDR\n"
" Show sockets that have only a specific destination. Default all addresses.\n"
"\n"
" port:range\n"
" Show sockets that have only a specific destination.\n"
"\n"
" reset\n"
" Send a reset to collector. When a collector receives this command, it uses everything defined in configuration file.\n"
"\n"
" interfaces\n"
" When the collector receives this command, it read all available interfaces on host.\n"
"\n"
"Filters can be combined. Each filter can be given only one time. Default all ports\n"
);
pluginsd_function_result_end_to_stdout();
fflush(stdout);
pthread_mutex_unlock(&lock);
}
/**
* Fill Fake socket
*
* Fill socket with an invalid request.
*
* @param fake_values is the structure where we are storing the value.
*/
static inline void ebpf_socket_fill_fake_socket(netdata_socket_plus_t *fake_values)
{
snprintfz(fake_values->socket_string.src_ip, INET6_ADDRSTRLEN, "%s", "127.0.0.1");
snprintfz(fake_values->socket_string.dst_ip, INET6_ADDRSTRLEN, "%s", "127.0.0.1");
fake_values->pid = getpid();
//fake_values->socket_string.src_port = 0;
fake_values->socket_string.dst_port[0] = 0;
snprintfz(fake_values->socket_string.dst_ip, NI_MAXSERV, "%s", "none");
fake_values->data.family = AF_INET;
fake_values->data.protocol = AF_UNSPEC;
}
/**
* Fill function buffer
*
* Fill buffer with data to be shown on cloud.
*
* @param wb buffer where we store data.
* @param values data read from hash table
* @param name the process name
*/
static void ebpf_fill_function_buffer(BUFFER *wb, netdata_socket_plus_t *values, char *name)
{
buffer_json_add_array_item_array(wb);
// IMPORTANT!
// THE ORDER SHOULD BE THE SAME WITH THE FIELDS!
// PID
buffer_json_add_array_item_uint64(wb, (uint64_t)values->pid);
// NAME
buffer_json_add_array_item_string(wb, (name) ? name : "not identified");
// Origin
buffer_json_add_array_item_string(wb, (values->data.external_origin) ? "incoming" : "outgoing");
// Source IP
buffer_json_add_array_item_string(wb, values->socket_string.src_ip);
// SRC Port
//buffer_json_add_array_item_uint64(wb, (uint64_t) values->socket_string.src_port);
// Destination IP
buffer_json_add_array_item_string(wb, values->socket_string.dst_ip);
// DST Port
buffer_json_add_array_item_string(wb, values->socket_string.dst_port);
uint64_t connections;
if (values->data.protocol == IPPROTO_TCP) {
// Protocol
buffer_json_add_array_item_string(wb, "TCP");
// Bytes received
buffer_json_add_array_item_uint64(wb, (uint64_t) values->data.tcp.tcp_bytes_received);
// Bytes sent
buffer_json_add_array_item_uint64(wb, (uint64_t) values->data.tcp.tcp_bytes_sent);
// Connections
connections = values->data.tcp.ipv4_connect + values->data.tcp.ipv6_connect;
} else if (values->data.protocol == IPPROTO_UDP) {
// Protocol
buffer_json_add_array_item_string(wb, "UDP");
// Bytes received
buffer_json_add_array_item_uint64(wb, (uint64_t) values->data.udp.udp_bytes_received);
// Bytes sent
buffer_json_add_array_item_uint64(wb, (uint64_t) values->data.udp.udp_bytes_sent);
// Connections
connections = values->data.udp.call_udp_sent + values->data.udp.call_udp_received;
} else {
// Protocol
buffer_json_add_array_item_string(wb, "UNSPEC");
// Bytes received
buffer_json_add_array_item_uint64(wb, 0);
// Bytes sent
buffer_json_add_array_item_uint64(wb, 0);
connections = 1;
}
// Connections
if (values->flags & NETDATA_SOCKET_FLAGS_ALREADY_OPEN) {
connections++;
} else if (!connections) {
// If no connections, this means that we lost when connection was opened
values->flags |= NETDATA_SOCKET_FLAGS_ALREADY_OPEN;
connections++;
}
buffer_json_add_array_item_uint64(wb, connections);
buffer_json_array_close(wb);
}
/**
* Clean Judy array unsafe
*
* Clean all Judy Array allocated to show table when a function is called.
* Before calling this function it is necessary to lock `ebpf_judy_pid.index.rw_spinlock`.
**/
static void ebpf_socket_clean_judy_array_unsafe()
{
if (!ebpf_judy_pid.index.JudyLArray)
return;
Pvoid_t *pid_value, *socket_value;
Word_t local_pid = 0, local_socket = 0;
bool first_pid = true, first_socket = true;
while ((pid_value = JudyLFirstThenNext(ebpf_judy_pid.index.JudyLArray, &local_pid, &first_pid))) {
netdata_ebpf_judy_pid_stats_t *pid_ptr = (netdata_ebpf_judy_pid_stats_t *)*pid_value;
rw_spinlock_write_lock(&pid_ptr->socket_stats.rw_spinlock);
if (pid_ptr->socket_stats.JudyLArray) {
while ((socket_value = JudyLFirstThenNext(pid_ptr->socket_stats.JudyLArray, &local_socket, &first_socket))) {
netdata_socket_plus_t *socket_clean = *socket_value;
aral_freez(aral_socket_table, socket_clean);
}
JudyLFreeArray(&pid_ptr->socket_stats.JudyLArray, PJE0);
pid_ptr->socket_stats.JudyLArray = NULL;
}
rw_spinlock_write_unlock(&pid_ptr->socket_stats.rw_spinlock);
}
}
/**
* Fill function buffer unsafe
*
* Fill the function buffer with socket information. Before calling this function it is necessary to lock
* ebpf_judy_pid.index.rw_spinlock
*
* @param buf buffer used to store data to be shown by the function.
*/
static void ebpf_socket_fill_function_buffer_unsafe(BUFFER *buf)
{
int counter = 0;
Pvoid_t *pid_value, *socket_value;
Word_t local_pid = 0;
bool first_pid = true;
while ((pid_value = JudyLFirstThenNext(ebpf_judy_pid.index.JudyLArray, &local_pid, &first_pid))) {
netdata_ebpf_judy_pid_stats_t *pid_ptr = (netdata_ebpf_judy_pid_stats_t *)*pid_value;
bool first_socket = true;
Word_t local_timestamp = 0;
rw_spinlock_read_lock(&pid_ptr->socket_stats.rw_spinlock);
if (pid_ptr->socket_stats.JudyLArray) {
while ((socket_value = JudyLFirstThenNext(pid_ptr->socket_stats.JudyLArray, &local_timestamp, &first_socket))) {
netdata_socket_plus_t *values = (netdata_socket_plus_t *)*socket_value;
ebpf_fill_function_buffer(buf, values, pid_ptr->cmdline);
}
counter++;
}
rw_spinlock_read_unlock(&pid_ptr->socket_stats.rw_spinlock);
}
if (!counter) {
netdata_socket_plus_t fake_values = { };
ebpf_socket_fill_fake_socket(&fake_values);
ebpf_fill_function_buffer(buf, &fake_values, NULL);
}
}
/**
* Socket read open connections
*
* Read the open connections stored in the Judy arrays and add them to the given buffer.
*
* @param buf the buffer to store data;
* @param em the module main structure.
*/
void ebpf_socket_read_open_connections(BUFFER *buf, struct ebpf_module *em)
{
// thread was not initialized or Array was reset
rw_spinlock_read_lock(&ebpf_judy_pid.index.rw_spinlock);
if (!em->maps || (em->maps[NETDATA_SOCKET_OPEN_SOCKET].map_fd == ND_EBPF_MAP_FD_NOT_INITIALIZED) ||
!ebpf_judy_pid.index.JudyLArray){
netdata_socket_plus_t fake_values = { };
ebpf_socket_fill_fake_socket(&fake_values);
ebpf_fill_function_buffer(buf, &fake_values, NULL);
rw_spinlock_read_unlock(&ebpf_judy_pid.index.rw_spinlock);
return;
}
rw_spinlock_read_lock(&network_viewer_opt.rw_spinlock);
ebpf_socket_fill_function_buffer_unsafe(buf);
rw_spinlock_read_unlock(&network_viewer_opt.rw_spinlock);
rw_spinlock_read_unlock(&ebpf_judy_pid.index.rw_spinlock);
}
/**
* Function: Socket
*
* Show information for sockets stored in hash tables.
*
* @param transaction the transaction id that Netdata sent for this function execution
* @param function function name and arguments given to thread.
* @param line_buffer buffer used to parse args
* @param line_max Number of arguments given
* @param timeout The function timeout
* @param em The structure with thread information
*/
static void ebpf_function_socket_manipulation(const char *transaction,
char *function __maybe_unused,
char *line_buffer __maybe_unused,
int line_max __maybe_unused,
int timeout __maybe_unused,
ebpf_module_t *em)
{
UNUSED(line_buffer);
UNUSED(timeout);
char *words[PLUGINSD_MAX_WORDS] = {NULL};
size_t num_words = quoted_strings_splitter_pluginsd(function, words, PLUGINSD_MAX_WORDS);
const char *name;
int period = -1;
rw_spinlock_write_lock(&ebpf_judy_pid.index.rw_spinlock);
network_viewer_opt.enabled = CONFIG_BOOLEAN_YES;
uint32_t previous;
for (int i = 1; i < PLUGINSD_MAX_WORDS; i++) {
const char *keyword = get_word(words, num_words, i);
if (!keyword)
break;
if (strncmp(keyword, EBPF_FUNCTION_SOCKET_FAMILY, sizeof(EBPF_FUNCTION_SOCKET_FAMILY) - 1) == 0) {
name = &keyword[sizeof(EBPF_FUNCTION_SOCKET_FAMILY) - 1];
previous = network_viewer_opt.family;
uint32_t family = AF_UNSPEC;
if (!strcmp(name, "IPV4"))
family = AF_INET;
else if (!strcmp(name, "IPV6"))
family = AF_INET6;
if (family != previous) {
rw_spinlock_write_lock(&network_viewer_opt.rw_spinlock);
network_viewer_opt.family = family;
rw_spinlock_write_unlock(&network_viewer_opt.rw_spinlock);
ebpf_socket_clean_judy_array_unsafe();
}
} else if (strncmp(keyword, EBPF_FUNCTION_SOCKET_PERIOD, sizeof(EBPF_FUNCTION_SOCKET_PERIOD) - 1) == 0) {
name = &keyword[sizeof(EBPF_FUNCTION_SOCKET_PERIOD) - 1];
pthread_mutex_lock(&ebpf_exit_cleanup);
period = str2i(name);
if (period > 0) {
em->lifetime = period;
} else
em->lifetime = EBPF_NON_FUNCTION_LIFE_TIME;
#ifdef NETDATA_DEV_MODE
collector_info("Lifetime modified for %u", em->lifetime);
#endif
pthread_mutex_unlock(&ebpf_exit_cleanup);
} else if (strncmp(keyword, EBPF_FUNCTION_SOCKET_RESOLVE, sizeof(EBPF_FUNCTION_SOCKET_RESOLVE) - 1) == 0) {
previous = network_viewer_opt.service_resolution_enabled;
uint32_t resolution;
name = &keyword[sizeof(EBPF_FUNCTION_SOCKET_RESOLVE) - 1];
resolution = (!strcasecmp(name, "YES")) ? CONFIG_BOOLEAN_YES : CONFIG_BOOLEAN_NO;
if (previous != resolution) {
rw_spinlock_write_lock(&network_viewer_opt.rw_spinlock);
network_viewer_opt.service_resolution_enabled = resolution;
rw_spinlock_write_unlock(&network_viewer_opt.rw_spinlock);
ebpf_socket_clean_judy_array_unsafe();
}
} else if (strncmp(keyword, EBPF_FUNCTION_SOCKET_RANGE, sizeof(EBPF_FUNCTION_SOCKET_RANGE) - 1) == 0) {
name = &keyword[sizeof(EBPF_FUNCTION_SOCKET_RANGE) - 1];
rw_spinlock_write_lock(&network_viewer_opt.rw_spinlock);
ebpf_clean_ip_structure(&network_viewer_opt.included_ips);
ebpf_clean_ip_structure(&network_viewer_opt.excluded_ips);
ebpf_parse_ips_unsafe((char *)name);
rw_spinlock_write_unlock(&network_viewer_opt.rw_spinlock);
ebpf_socket_clean_judy_array_unsafe();
} else if (strncmp(keyword, EBPF_FUNCTION_SOCKET_PORT, sizeof(EBPF_FUNCTION_SOCKET_PORT) - 1) == 0) {
name = &keyword[sizeof(EBPF_FUNCTION_SOCKET_PORT) - 1];
rw_spinlock_write_lock(&network_viewer_opt.rw_spinlock);
ebpf_clean_port_structure(&network_viewer_opt.included_port);
ebpf_clean_port_structure(&network_viewer_opt.excluded_port);
ebpf_parse_ports((char *)name);
rw_spinlock_write_unlock(&network_viewer_opt.rw_spinlock);
ebpf_socket_clean_judy_array_unsafe();
} else if (strncmp(keyword, EBPF_FUNCTION_SOCKET_RESET, sizeof(EBPF_FUNCTION_SOCKET_RESET) - 1) == 0) {
rw_spinlock_write_lock(&network_viewer_opt.rw_spinlock);
ebpf_clean_port_structure(&network_viewer_opt.included_port);
ebpf_clean_port_structure(&network_viewer_opt.excluded_port);
ebpf_clean_ip_structure(&network_viewer_opt.included_ips);
ebpf_clean_ip_structure(&network_viewer_opt.excluded_ips);
ebpf_clean_ip_structure(&network_viewer_opt.ipv4_local_ip);
ebpf_clean_ip_structure(&network_viewer_opt.ipv6_local_ip);
parse_network_viewer_section(&socket_config);
ebpf_read_local_addresses_unsafe();
network_viewer_opt.enabled = CONFIG_BOOLEAN_YES;
rw_spinlock_write_unlock(&network_viewer_opt.rw_spinlock);
} else if (strncmp(keyword, EBPF_FUNCTION_SOCKET_INTERFACES, sizeof(EBPF_FUNCTION_SOCKET_INTERFACES) - 1) == 0) {
rw_spinlock_write_lock(&network_viewer_opt.rw_spinlock);
ebpf_read_local_addresses_unsafe();
rw_spinlock_write_unlock(&network_viewer_opt.rw_spinlock);
} else if (strncmp(keyword, "help", 4) == 0) {
ebpf_function_socket_help(transaction);
rw_spinlock_write_unlock(&ebpf_judy_pid.index.rw_spinlock);
return;
}
}
rw_spinlock_write_unlock(&ebpf_judy_pid.index.rw_spinlock);
pthread_mutex_lock(&ebpf_exit_cleanup);
if (em->enabled > NETDATA_THREAD_EBPF_FUNCTION_RUNNING) {
// Cleanup when we already had a thread running
rw_spinlock_write_lock(&ebpf_judy_pid.index.rw_spinlock);
ebpf_socket_clean_judy_array_unsafe();
rw_spinlock_write_unlock(&ebpf_judy_pid.index.rw_spinlock);
if (ebpf_function_start_thread(em, period)) {
ebpf_function_error(transaction,
HTTP_RESP_INTERNAL_SERVER_ERROR,
"Cannot start thread.");
pthread_mutex_unlock(&ebpf_exit_cleanup);
return;
}
} else {
if (period < 0 && em->lifetime < EBPF_NON_FUNCTION_LIFE_TIME) {
em->lifetime = EBPF_NON_FUNCTION_LIFE_TIME;
}
}
pthread_mutex_unlock(&ebpf_exit_cleanup);
time_t expires = now_realtime_sec() + em->update_every;
BUFFER *wb = buffer_create(PLUGINSD_LINE_MAX, NULL);
buffer_json_initialize(wb, "\"", "\"", 0, true, false);
buffer_json_member_add_uint64(wb, "status", HTTP_RESP_OK);
buffer_json_member_add_string(wb, "type", "table");
buffer_json_member_add_time_t(wb, "update_every", em->update_every);
buffer_json_member_add_string(wb, "help", EBPF_PLUGIN_SOCKET_FUNCTION_DESCRIPTION);
// Collect data
buffer_json_member_add_array(wb, "data");
ebpf_socket_read_open_connections(wb, em);
buffer_json_array_close(wb); // data
buffer_json_member_add_object(wb, "columns");
{
int fields_id = 0;
// IMPORTANT!
// THE ORDER SHOULD BE THE SAME WITH THE VALUES!
buffer_rrdf_table_add_field(wb, fields_id++, "PID", "Process ID", RRDF_FIELD_TYPE_INTEGER,
RRDF_FIELD_VISUAL_VALUE, RRDF_FIELD_TRANSFORM_NUMBER, 0, NULL, NAN,
RRDF_FIELD_SORT_ASCENDING, NULL, RRDF_FIELD_SUMMARY_COUNT,
RRDF_FIELD_FILTER_MULTISELECT,
RRDF_FIELD_OPTS_VISIBLE | RRDF_FIELD_OPTS_STICKY,
NULL);
buffer_rrdf_table_add_field(wb, fields_id++, "Process Name", "Process Name", RRDF_FIELD_TYPE_STRING,
RRDF_FIELD_VISUAL_VALUE, RRDF_FIELD_TRANSFORM_NONE, 0, NULL, NAN,
RRDF_FIELD_SORT_ASCENDING, NULL, RRDF_FIELD_SUMMARY_COUNT,
RRDF_FIELD_FILTER_MULTISELECT,
RRDF_FIELD_OPTS_VISIBLE | RRDF_FIELD_OPTS_STICKY, NULL);
buffer_rrdf_table_add_field(wb, fields_id++, "Origin", "The connection origin.", RRDF_FIELD_TYPE_STRING,
RRDF_FIELD_VISUAL_VALUE, RRDF_FIELD_TRANSFORM_NONE, 0, NULL, NAN,
RRDF_FIELD_SORT_ASCENDING, NULL, RRDF_FIELD_SUMMARY_COUNT,
RRDF_FIELD_FILTER_MULTISELECT,
RRDF_FIELD_OPTS_VISIBLE | RRDF_FIELD_OPTS_STICKY, NULL);
buffer_rrdf_table_add_field(wb, fields_id++, "Request from", "Request from IP", RRDF_FIELD_TYPE_STRING,
RRDF_FIELD_VISUAL_VALUE, RRDF_FIELD_TRANSFORM_NONE, 0, NULL, NAN,
RRDF_FIELD_SORT_ASCENDING, NULL, RRDF_FIELD_SUMMARY_COUNT,
RRDF_FIELD_FILTER_MULTISELECT,
RRDF_FIELD_OPTS_VISIBLE | RRDF_FIELD_OPTS_STICKY, NULL);
/*
buffer_rrdf_table_add_field(wb, fields_id++, "SRC PORT", "Source Port", RRDF_FIELD_TYPE_INTEGER,
RRDF_FIELD_VISUAL_VALUE, RRDF_FIELD_TRANSFORM_NUMBER, 0, NULL, NAN,
RRDF_FIELD_SORT_ASCENDING, NULL, RRDF_FIELD_SUMMARY_COUNT,
RRDF_FIELD_FILTER_MULTISELECT,
RRDF_FIELD_OPTS_VISIBLE | RRDF_FIELD_OPTS_STICKY,
NULL);
*/
buffer_rrdf_table_add_field(wb, fields_id++, "Destination IP", "Destination IP", RRDF_FIELD_TYPE_STRING,
RRDF_FIELD_VISUAL_VALUE, RRDF_FIELD_TRANSFORM_NONE, 0, NULL, NAN,
RRDF_FIELD_SORT_ASCENDING, NULL, RRDF_FIELD_SUMMARY_COUNT,
RRDF_FIELD_FILTER_MULTISELECT,
RRDF_FIELD_OPTS_VISIBLE | RRDF_FIELD_OPTS_STICKY, NULL);
buffer_rrdf_table_add_field(wb, fields_id++, "Destination Port", "Destination Port", RRDF_FIELD_TYPE_STRING,
RRDF_FIELD_VISUAL_VALUE, RRDF_FIELD_TRANSFORM_NONE, 0, NULL, NAN,
RRDF_FIELD_SORT_ASCENDING, NULL, RRDF_FIELD_SUMMARY_COUNT,
RRDF_FIELD_FILTER_MULTISELECT,
RRDF_FIELD_OPTS_VISIBLE | RRDF_FIELD_OPTS_STICKY, NULL);
buffer_rrdf_table_add_field(wb, fields_id++, "Protocol", "Communication protocol", RRDF_FIELD_TYPE_STRING,
RRDF_FIELD_VISUAL_VALUE, RRDF_FIELD_TRANSFORM_NONE, 0, NULL, NAN,
RRDF_FIELD_SORT_ASCENDING, NULL, RRDF_FIELD_SUMMARY_COUNT,
RRDF_FIELD_FILTER_MULTISELECT,
RRDF_FIELD_OPTS_VISIBLE | RRDF_FIELD_OPTS_STICKY, NULL);
buffer_rrdf_table_add_field(wb, fields_id++, "Incoming Bandwidth", "Bytes received.", RRDF_FIELD_TYPE_INTEGER,
RRDF_FIELD_VISUAL_VALUE, RRDF_FIELD_TRANSFORM_NUMBER, 0, NULL, NAN,
RRDF_FIELD_SORT_ASCENDING, NULL, RRDF_FIELD_SUMMARY_COUNT,
RRDF_FIELD_FILTER_MULTISELECT,
RRDF_FIELD_OPTS_VISIBLE | RRDF_FIELD_OPTS_STICKY,
NULL);
buffer_rrdf_table_add_field(wb, fields_id++, "Outgoing Bandwidth", "Bytes sent.", RRDF_FIELD_TYPE_INTEGER,
RRDF_FIELD_VISUAL_VALUE, RRDF_FIELD_TRANSFORM_NUMBER, 0, NULL, NAN,
RRDF_FIELD_SORT_ASCENDING, NULL, RRDF_FIELD_SUMMARY_COUNT,
RRDF_FIELD_FILTER_MULTISELECT,
RRDF_FIELD_OPTS_VISIBLE | RRDF_FIELD_OPTS_STICKY,
NULL);
buffer_rrdf_table_add_field(wb, fields_id, "Connections", "Number of calls to tcp_vX_connections and udp_sendmsg, where X is the protocol version.", RRDF_FIELD_TYPE_INTEGER,
RRDF_FIELD_VISUAL_VALUE, RRDF_FIELD_TRANSFORM_NUMBER, 0, NULL, NAN,
RRDF_FIELD_SORT_ASCENDING, NULL, RRDF_FIELD_SUMMARY_COUNT,
RRDF_FIELD_FILTER_MULTISELECT,
RRDF_FIELD_OPTS_VISIBLE | RRDF_FIELD_OPTS_STICKY,
NULL);
}
buffer_json_object_close(wb); // columns
buffer_json_member_add_object(wb, "charts");
{
// OutBound Connections
buffer_json_member_add_object(wb, "IPInboundConn");
{
buffer_json_member_add_string(wb, "name", "TCP Inbound Connection");
buffer_json_member_add_string(wb, "type", "line");
buffer_json_member_add_array(wb, "columns");
{
buffer_json_add_array_item_string(wb, "connected_tcp");
buffer_json_add_array_item_string(wb, "connected_udp");
}
buffer_json_array_close(wb);
}
buffer_json_object_close(wb);
// OutBound Connections
buffer_json_member_add_object(wb, "IPTCPOutboundConn");
{
buffer_json_member_add_string(wb, "name", "TCP Outbound Connection");
buffer_json_member_add_string(wb, "type", "line");
buffer_json_member_add_array(wb, "columns");
{
buffer_json_add_array_item_string(wb, "connected_V4");
buffer_json_add_array_item_string(wb, "connected_V6");
}
buffer_json_array_close(wb);
}
buffer_json_object_close(wb);
// TCP Functions
buffer_json_member_add_object(wb, "TCPFunctions");
{
buffer_json_member_add_string(wb, "name", "TCPFunctions");
buffer_json_member_add_string(wb, "type", "line");
buffer_json_member_add_array(wb, "columns");
{
buffer_json_add_array_item_string(wb, "received");
buffer_json_add_array_item_string(wb, "sent");
buffer_json_add_array_item_string(wb, "close");
}
buffer_json_array_close(wb);
}
buffer_json_object_close(wb);
// TCP Bandwidth
buffer_json_member_add_object(wb, "TCPBandwidth");
{
buffer_json_member_add_string(wb, "name", "TCPBandwidth");
buffer_json_member_add_string(wb, "type", "line");
buffer_json_member_add_array(wb, "columns");
{
buffer_json_add_array_item_string(wb, "received");
buffer_json_add_array_item_string(wb, "sent");
}
buffer_json_array_close(wb);
}
buffer_json_object_close(wb);
// UDP Functions
buffer_json_member_add_object(wb, "UDPFunctions");
{
buffer_json_member_add_string(wb, "name", "UDPFunctions");
buffer_json_member_add_string(wb, "type", "line");
buffer_json_member_add_array(wb, "columns");
{
buffer_json_add_array_item_string(wb, "received");
buffer_json_add_array_item_string(wb, "sent");
}
buffer_json_array_close(wb);
}
buffer_json_object_close(wb);
// UDP Bandwidth
buffer_json_member_add_object(wb, "UDPBandwidth");
{
buffer_json_member_add_string(wb, "name", "UDPBandwidth");
buffer_json_member_add_string(wb, "type", "line");
buffer_json_member_add_array(wb, "columns");
{
buffer_json_add_array_item_string(wb, "received");
buffer_json_add_array_item_string(wb, "sent");
}
buffer_json_array_close(wb);
}
buffer_json_object_close(wb);
}
buffer_json_object_close(wb); // charts
buffer_json_member_add_string(wb, "default_sort_column", "PID");
// Do we use this only on fields that can be grouped?
buffer_json_member_add_object(wb, "group_by");
{
// group by PID
buffer_json_member_add_object(wb, "PID");
{
buffer_json_member_add_string(wb, "name", "Process ID");
buffer_json_member_add_array(wb, "columns");
{
buffer_json_add_array_item_string(wb, "PID");
}
buffer_json_array_close(wb);
}
buffer_json_object_close(wb);
// group by Process Name
buffer_json_member_add_object(wb, "Process Name");
{
buffer_json_member_add_string(wb, "name", "Process Name");
buffer_json_member_add_array(wb, "columns");
{
buffer_json_add_array_item_string(wb, "Process Name");
}
buffer_json_array_close(wb);
}
buffer_json_object_close(wb);
// group by Process Name
buffer_json_member_add_object(wb, "Origin");
{
buffer_json_member_add_string(wb, "name", "Origin");
buffer_json_member_add_array(wb, "columns");
{
buffer_json_add_array_item_string(wb, "Origin");
}
buffer_json_array_close(wb);
}
buffer_json_object_close(wb);
// group by Request From IP
buffer_json_member_add_object(wb, "Request from");
{
buffer_json_member_add_string(wb, "name", "Request from IP");
buffer_json_member_add_array(wb, "columns");
{
buffer_json_add_array_item_string(wb, "Request from");
}
buffer_json_array_close(wb);
}
buffer_json_object_close(wb);
// group by Destination IP
buffer_json_member_add_object(wb, "Destination IP");
{
buffer_json_member_add_string(wb, "name", "Destination IP");
buffer_json_member_add_array(wb, "columns");
{
buffer_json_add_array_item_string(wb, "Destination IP");
}
buffer_json_array_close(wb);
}
buffer_json_object_close(wb);
// group by DST Port
buffer_json_member_add_object(wb, "Destination Port");
{
buffer_json_member_add_string(wb, "name", "Destination Port");
buffer_json_member_add_array(wb, "columns");
{
buffer_json_add_array_item_string(wb, "Destination Port");
}
buffer_json_array_close(wb);
}
buffer_json_object_close(wb);
// group by Protocol
buffer_json_member_add_object(wb, "Protocol");
{
buffer_json_member_add_string(wb, "name", "Protocol");
buffer_json_member_add_array(wb, "columns");
{
buffer_json_add_array_item_string(wb, "Protocol");
}
buffer_json_array_close(wb);
}
buffer_json_object_close(wb);
}
buffer_json_object_close(wb); // group_by
buffer_json_member_add_time_t(wb, "expires", expires);
buffer_json_finalize(wb);
// Lock necessary to avoid race condition
pthread_mutex_lock(&lock);
pluginsd_function_result_begin_to_stdout(transaction, HTTP_RESP_OK, "application/json", expires);
fwrite(buffer_tostring(wb), buffer_strlen(wb), 1, stdout);
pluginsd_function_result_end_to_stdout();
fflush(stdout);
pthread_mutex_unlock(&lock);
buffer_free(wb);
}
/*****************************************************************
* EBPF FUNCTION THREAD
@ -372,6 +1086,7 @@ void *ebpf_function_thread(void *ptr)
ebpf_module_t *em = (ebpf_module_t *)ptr;
char buffer[PLUGINSD_LINE_MAX + 1];
rw_spinlock_init(&rw_spinlock);
char *s = NULL;
while(!ebpf_exit_plugin && (s = fgets(buffer, PLUGINSD_LINE_MAX, stdin))) {
char *words[PLUGINSD_MAX_WORDS] = { NULL };
@ -393,6 +1108,7 @@ void *ebpf_function_thread(void *ptr)
}
else {
int timeout = str2i(timeout_s);
rw_spinlock_write_lock(&rw_spinlock);
if (!strncmp(function, EBPF_FUNCTION_THREAD, sizeof(EBPF_FUNCTION_THREAD) - 1))
ebpf_function_thread_manipulation(transaction,
function,
@ -400,14 +1116,28 @@ void *ebpf_function_thread(void *ptr)
PLUGINSD_LINE_MAX + 1,
timeout,
em);
else if (!strncmp(function, EBPF_FUNCTION_SOCKET, sizeof(EBPF_FUNCTION_SOCKET) - 1))
ebpf_function_socket_manipulation(transaction,
function,
buffer,
PLUGINSD_LINE_MAX + 1,
timeout,
&ebpf_modules[EBPF_MODULE_SOCKET_IDX]);
else
ebpf_function_error(transaction,
HTTP_RESP_NOT_FOUND,
"No function with this name found in ebpf.plugin.");
rw_spinlock_write_unlock(&rw_spinlock);
}
}
else
netdata_log_error("Received unknown command: %s", keyword ? keyword : "(unset)");
}
if(!s || feof(stdin) || ferror(stdin)) {
ebpf_stop_threads(SIGQUIT);
netdata_log_error("Received error on stdin.");
}
return NULL;
}


@ -3,20 +3,25 @@
#ifndef NETDATA_EBPF_FUNCTIONS_H
#define NETDATA_EBPF_FUNCTIONS_H 1
#ifdef NETDATA_DEV_MODE
// Common
static inline void EBPF_PLUGIN_FUNCTIONS(const char *NAME, const char *DESC) {
fprintf(stdout, "%s \"%s\" 10 \"%s\"\n", PLUGINSD_KEYWORD_FUNCTION, NAME, DESC);
}
#endif
// configuration file & description
#define NETDATA_DIRECTORY_FUNCTIONS_CONFIG_FILE "functions.conf"
#define NETDATA_EBPF_FUNCTIONS_MODULE_DESC "Show information about current function status."
// function list
#define EBPF_FUNCTION_THREAD "ebpf_thread"
#define EBPF_FUNCTION_SOCKET "ebpf_socket"
// thread constants
#define EBPF_PLUGIN_THREAD_FUNCTION_DESCRIPTION "Detailed information about eBPF threads."
#define EBPF_PLUGIN_THREAD_FUNCTION_ERROR_THREAD_NOT_FOUND "ebpf.plugin does not have thread named "
#define EBPF_PLUGIN_FUNCTIONS(NAME, DESC) do { \
fprintf(stdout, PLUGINSD_KEYWORD_FUNCTION " \"" NAME "\" 10 \"%s\"\n", DESC); \
} while(0)
#define EBPF_THREADS_SELECT_THREAD "thread:"
#define EBPF_THREADS_ENABLE_CATEGORY "enable:"
#define EBPF_THREADS_DISABLE_CATEGORY "disable:"
@ -24,6 +29,16 @@
#define EBPF_THREAD_STATUS_RUNNING "running"
#define EBPF_THREAD_STATUS_STOPPED "stopped"
// socket constants
#define EBPF_PLUGIN_SOCKET_FUNCTION_DESCRIPTION "Detailed information about open sockets."
#define EBPF_FUNCTION_SOCKET_FAMILY "family:"
#define EBPF_FUNCTION_SOCKET_PERIOD "period:"
#define EBPF_FUNCTION_SOCKET_RESOLVE "resolve:"
#define EBPF_FUNCTION_SOCKET_RANGE "range:"
#define EBPF_FUNCTION_SOCKET_PORT "port:"
#define EBPF_FUNCTION_SOCKET_RESET "reset"
#define EBPF_FUNCTION_SOCKET_INTERFACES "interfaces"
void *ebpf_function_thread(void *ptr);
#endif


@ -466,7 +466,7 @@ static int ebpf_mount_load_bpf(ebpf_module_t *em)
#endif
if (ret)
netdata_log_error("%s %s", EBPF_DEFAULT_ERROR_MSG, em->thread_name);
netdata_log_error("%s %s", EBPF_DEFAULT_ERROR_MSG, em->info.thread_name);
return ret;
}


@ -52,7 +52,8 @@ enum netdata_ebpf_stats_order {
NETDATA_EBPF_ORDER_STAT_HASH_GLOBAL_TABLE_TOTAL,
NETDATA_EBPF_ORDER_STAT_HASH_PID_TABLE_ADDED,
NETDATA_EBPF_ORDER_STAT_HASH_PID_TABLE_REMOVED,
NETATA_EBPF_ORDER_STAT_ARAL_BEGIN
NETATA_EBPF_ORDER_STAT_ARAL_BEGIN,
NETDATA_EBPF_ORDER_FUNCTION_PER_THREAD,
};
enum netdata_ebpf_load_mode_stats{


@ -1222,7 +1222,7 @@ static int ebpf_shm_load_bpf(ebpf_module_t *em)
if (ret)
netdata_log_error("%s %s", EBPF_DEFAULT_ERROR_MSG, em->thread_name);
netdata_log_error("%s %s", EBPF_DEFAULT_ERROR_MSG, em->info.thread_name);
return ret;
}

File diff suppressed because it is too large.


@ -4,6 +4,11 @@
#include <stdint.h>
#include "libnetdata/avl/avl.h"
#include <sys/socket.h>
#ifdef HAVE_NETDB_H
#include <netdb.h>
#endif
// Module name & description
#define NETDATA_EBPF_MODULE_NAME_SOCKET "socket"
#define NETDATA_EBPF_SOCKET_MODULE_DESC "Monitors TCP and UDP bandwidth. This thread is integrated with apps and cgroup."
@ -11,8 +16,6 @@
// Vector indexes
#define NETDATA_UDP_START 3
#define NETDATA_SOCKET_READ_SLEEP_MS 800000ULL
// config file
#define NETDATA_NETWORK_CONFIG_FILE "network.conf"
#define EBPF_NETWORK_VIEWER_SECTION "network connections"
@ -21,18 +24,13 @@
#define EBPF_CONFIG_RESOLVE_SERVICE "resolve service names"
#define EBPF_CONFIG_PORTS "ports"
#define EBPF_CONFIG_HOSTNAMES "hostnames"
#define EBPF_CONFIG_BANDWIDTH_SIZE "bandwidth table size"
#define EBPF_CONFIG_IPV4_SIZE "ipv4 connection table size"
#define EBPF_CONFIG_IPV6_SIZE "ipv6 connection table size"
#define EBPF_CONFIG_SOCKET_MONITORING_SIZE "socket monitoring table size"
#define EBPF_CONFIG_UDP_SIZE "udp connection table size"
#define EBPF_MAXIMUM_DIMENSIONS "maximum dimensions"
enum ebpf_socket_table_list {
NETDATA_SOCKET_TABLE_BANDWIDTH,
NETDATA_SOCKET_GLOBAL,
NETDATA_SOCKET_LPORTS,
NETDATA_SOCKET_TABLE_IPV4,
NETDATA_SOCKET_TABLE_IPV6,
NETDATA_SOCKET_OPEN_SOCKET,
NETDATA_SOCKET_TABLE_UDP,
NETDATA_SOCKET_TABLE_CTRL
};
@ -122,13 +120,6 @@ typedef enum ebpf_socket_idx {
#define NETDATA_NET_APPS_BANDWIDTH_UDP_SEND_CALLS "bandwidth_udp_send"
#define NETDATA_NET_APPS_BANDWIDTH_UDP_RECV_CALLS "bandwidth_udp_recv"
// Network viewer charts
#define NETDATA_NV_OUTBOUND_BYTES "outbound_bytes"
#define NETDATA_NV_OUTBOUND_PACKETS "outbound_packets"
#define NETDATA_NV_OUTBOUND_RETRANSMIT "outbound_retransmit"
#define NETDATA_NV_INBOUND_BYTES "inbound_bytes"
#define NETDATA_NV_INBOUND_PACKETS "inbound_packets"
// Port range
#define NETDATA_MINIMUM_PORT_VALUE 1
#define NETDATA_MAXIMUM_PORT_VALUE 65535
@ -163,6 +154,8 @@ typedef enum ebpf_socket_idx {
// ARAL name
#define NETDATA_EBPF_SOCKET_ARAL_NAME "ebpf_socket"
#define NETDATA_EBPF_PID_SOCKET_ARAL_TABLE_NAME "ebpf_pid_socket"
#define NETDATA_EBPF_SOCKET_ARAL_TABLE_NAME "ebpf_socket_tbl"
typedef struct ebpf_socket_publish_apps {
// Data read
@ -246,10 +239,11 @@ typedef struct ebpf_network_viewer_hostname_list {
struct ebpf_network_viewer_hostname_list *next;
} ebpf_network_viewer_hostname_list_t;
#define NETDATA_NV_CAP_VALUE 50L
typedef struct ebpf_network_viewer_options {
RW_SPINLOCK rw_spinlock;
uint32_t enabled;
uint32_t max_dim; // Store value read from 'maximum dimensions'
uint32_t family; // AF_INET, AF_INET6 or AF_UNSPEC (both)
uint32_t hostname_resolution_enabled;
uint32_t service_resolution_enabled;
@ -275,98 +269,82 @@ extern ebpf_network_viewer_options_t network_viewer_opt;
* Structure to store socket information
*/
typedef struct netdata_socket {
uint64_t recv_packets;
uint64_t sent_packets;
uint64_t recv_bytes;
uint64_t sent_bytes;
uint64_t first; // First timestamp
uint64_t ct; // Current timestamp
uint32_t retransmit; // It is never used with UDP
// Timestamp
uint64_t first_timestamp;
uint64_t current_timestamp;
// Socket additional info
uint16_t protocol;
uint16_t reserved;
uint16_t family;
uint32_t external_origin;
struct {
uint32_t call_tcp_sent;
uint32_t call_tcp_received;
uint64_t tcp_bytes_sent;
uint64_t tcp_bytes_received;
uint32_t close; //It is never used with UDP
uint32_t retransmit; //It is never used with UDP
uint32_t ipv4_connect;
uint32_t ipv6_connect;
} tcp;
struct {
uint32_t call_udp_sent;
uint32_t call_udp_received;
uint64_t udp_bytes_sent;
uint64_t udp_bytes_received;
} udp;
} netdata_socket_t;
typedef struct netdata_plot_values {
// Values used in the previous iteration
uint64_t recv_packets;
uint64_t sent_packets;
uint64_t recv_bytes;
uint64_t sent_bytes;
uint32_t retransmit;
typedef enum netdata_socket_flags {
NETDATA_SOCKET_FLAGS_ALREADY_OPEN = (1<<0)
} netdata_socket_flags_t;
uint64_t last_time;
typedef enum netdata_socket_src_ip_origin {
NETDATA_EBPF_SRC_IP_ORIGIN_LOCAL,
NETDATA_EBPF_SRC_IP_ORIGIN_EXTERNAL
} netdata_socket_src_ip_origin_t;
// Values used to plot
uint64_t plot_recv_packets;
uint64_t plot_sent_packets;
uint64_t plot_recv_bytes;
uint64_t plot_sent_bytes;
uint16_t plot_retransmit;
} netdata_plot_values_t;
typedef struct netata_socket_plus {
netdata_socket_t data; // Data read from database
uint32_t pid;
time_t last_update;
netdata_socket_flags_t flags;
struct {
char src_ip[INET6_ADDRSTRLEN + 1];
// uint16_t src_port;
char dst_ip[INET6_ADDRSTRLEN+ 1];
char dst_port[NI_MAXSERV + 1];
} socket_string;
} netdata_socket_plus_t;
enum netdata_udp_ports {
NETDATA_EBPF_UDP_PORT = 53
};
extern ARAL *aral_socket_table;
/**
* Index used together previous structure
*/
typedef struct netdata_socket_idx {
union netdata_ip_t saddr;
uint16_t sport;
//uint16_t sport;
union netdata_ip_t daddr;
uint16_t dport;
uint32_t pid;
} netdata_socket_idx_t;
// Next values were defined according getnameinfo(3)
#define NETDATA_MAX_NETWORK_COMBINED_LENGTH 1018
#define NETDATA_DOTS_PROTOCOL_COMBINED_LENGTH 5 // :TCP:
#define NETDATA_DIM_LENGTH_WITHOUT_SERVICE_PROTOCOL 979
#define NETDATA_INBOUND_DIRECTION (uint32_t)1
#define NETDATA_OUTBOUND_DIRECTION (uint32_t)2
/**
* Allocate the maximum number of structures in the beginning, this can force the collector to use more memory
* in the long term, on the other had it is faster.
*/
typedef struct netdata_socket_plot {
// Search
avl_t avl;
netdata_socket_idx_t index;
// Current data
netdata_socket_t sock;
// Previous values and values used to write on chart.
netdata_plot_values_t plot;
int family; // AF_INET or AF_INET6
char *resolved_name; // Resolve only in the first call
unsigned char resolved;
char *dimension_sent;
char *dimension_recv;
char *dimension_retransmit;
uint32_t flags;
} netdata_socket_plot_t;
#define NETWORK_VIEWER_CHARTS_CREATED (uint32_t)1
typedef struct netdata_vector_plot {
netdata_socket_plot_t *plot; // Vector used to plot charts
avl_tree_lock tree; // AVL tree to speed up search
uint32_t last; // The 'other' dimension, the last chart accepted.
uint32_t next; // The next position to store in the vector.
uint32_t max_plot; // Max number of elements to plot.
uint32_t last_plot; // Last element plot
uint32_t flags; // Flags
} netdata_vector_plot_t;
void clean_port_structure(ebpf_network_viewer_port_list_t **clean);
void ebpf_clean_port_structure(ebpf_network_viewer_port_list_t **clean);
extern ebpf_network_viewer_port_list_t *listen_ports;
void update_listen_table(uint16_t value, uint16_t proto, netdata_passive_connection_t *values);
void parse_network_viewer_section(struct config *cfg);
void ebpf_fill_ip_list(ebpf_network_viewer_ip_list_t **out, ebpf_network_viewer_ip_list_t *in, char *table);
void parse_service_name_section(struct config *cfg);
void ebpf_fill_ip_list_unsafe(ebpf_network_viewer_ip_list_t **out, ebpf_network_viewer_ip_list_t *in, char *table);
void ebpf_parse_service_name_section(struct config *cfg);
void ebpf_parse_ips_unsafe(char *ptr);
void ebpf_parse_ports(char *ptr);
void ebpf_socket_read_open_connections(BUFFER *buf, struct ebpf_module *em);
void ebpf_socket_fill_publish_apps(uint32_t current_pid, netdata_socket_t *ns);
extern struct config socket_config;
extern netdata_ebpf_targets_t socket_targets[];
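For orientation, here is a minimal sketch (not part of this change) of how a reader of these structures might look up one connection in a BPF hash map keyed by `netdata_socket_idx_t` and fill the printable fields of `netdata_socket_plus_t`. The map file descriptor, the IPv4-only handling, the `addr32` union member, and the byte-order assumption are illustrative; the real collector also deals with per-core values, IPv6, and hostname resolution.

```c
// Sketch only: read one connection and resolve the destination port to a
// service name, assuming the declarations above are available.
#define _GNU_SOURCE
#include <bpf/bpf.h>
#include <arpa/inet.h>
#include <netdb.h>
#include <string.h>

static void example_fill_socket_plus(int map_fd, netdata_socket_idx_t *key, netdata_socket_plus_t *out)
{
    if (bpf_map_lookup_elem(map_fd, key, &out->data))
        return;                               // connection not found (or already removed)

    out->pid = key->pid;

    // IPv4 only for brevity; assumes the union exposes the address as addr32[0]
    // and that ports are kept in network byte order by the kernel side.
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = key->dport,
        .sin_addr.s_addr = key->daddr.addr32[0],
    };
    inet_ntop(AF_INET, &addr.sin_addr, out->socket_string.dst_ip, INET6_ADDRSTRLEN);
    getnameinfo((struct sockaddr *)&addr, sizeof(addr), NULL, 0,
                out->socket_string.dst_port, NI_MAXSERV, 0);
}
```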
View file
@ -124,13 +124,6 @@ static int ebpf_swap_attach_kprobe(struct swap_bpf *obj)
if (ret)
return -1;
obj->links.netdata_release_task_probe = bpf_program__attach_kprobe(obj->progs.netdata_release_task_probe,
false,
EBPF_COMMON_FNCT_CLEAN_UP);
ret = libbpf_get_error(obj->links.netdata_swap_writepage_probe);
if (ret)
return -1;
return 0;
}
@ -176,7 +169,6 @@ static void ebpf_swap_adjust_map(struct swap_bpf *obj, ebpf_module_t *em)
static void ebpf_swap_disable_release_task(struct swap_bpf *obj)
{
bpf_program__set_autoload(obj->progs.netdata_release_task_fentry, false);
bpf_program__set_autoload(obj->progs.netdata_release_task_probe, false);
}
/**
@ -959,7 +951,7 @@ static int ebpf_swap_load_bpf(ebpf_module_t *em)
#endif
if (ret)
netdata_log_error("%s %s", EBPF_DEFAULT_ERROR_MSG, em->thread_name);
netdata_log_error("%s %s", EBPF_DEFAULT_ERROR_MSG, em->info.thread_name);
return ret;
}
View file
@ -383,7 +383,7 @@ static void ebpf_sync_exit(void *ptr)
*/
static int ebpf_sync_load_legacy(ebpf_sync_syscalls_t *w, ebpf_module_t *em)
{
em->thread_name = w->syscall;
em->info.thread_name = w->syscall;
if (!w->probe_links) {
w->probe_links = ebpf_load_program(ebpf_plugin_dir, em, running_on_kernel, isrh, &w->objects);
if (!w->probe_links) {
@ -413,7 +413,7 @@ static int ebpf_sync_initialize_syscall(ebpf_module_t *em)
#endif
int i;
const char *saved_name = em->thread_name;
const char *saved_name = em->info.thread_name;
int errors = 0;
for (i = 0; local_syscalls[i].syscall; i++) {
ebpf_sync_syscalls_t *w = &local_syscalls[i];
@ -424,7 +424,7 @@ static int ebpf_sync_initialize_syscall(ebpf_module_t *em)
if (ebpf_sync_load_legacy(w, em))
errors++;
em->thread_name = saved_name;
em->info.thread_name = saved_name;
}
#ifdef LIBBPF_MAJOR_VERSION
else {
@ -446,12 +446,12 @@ static int ebpf_sync_initialize_syscall(ebpf_module_t *em)
w->enabled = false;
}
em->thread_name = saved_name;
em->info.thread_name = saved_name;
}
#endif
}
}
em->thread_name = saved_name;
em->info.thread_name = saved_name;
memset(sync_counter_aggregated_data, 0 , NETDATA_SYNC_IDX_END * sizeof(netdata_syscall_stat_t));
memset(sync_counter_publish_aggregated, 0 , NETDATA_SYNC_IDX_END * sizeof(netdata_publish_syscall_t));
View file
@ -12,8 +12,8 @@ ebpf_module_t test_em;
void ebpf_ut_initialize_structure(netdata_run_mode_t mode)
{
memset(&test_em, 0, sizeof(ebpf_module_t));
test_em.thread_name = strdupz("process");
test_em.config_name = test_em.thread_name;
test_em.info.thread_name = strdupz("process");
test_em.info.config_name = test_em.info.thread_name;
test_em.kernels = NETDATA_V3_10 | NETDATA_V4_14 | NETDATA_V4_16 | NETDATA_V4_18 | NETDATA_V5_4 | NETDATA_V5_10 |
NETDATA_V5_14;
test_em.pid_map_size = ND_EBPF_DEFAULT_PID_SIZE;
@ -28,7 +28,7 @@ void ebpf_ut_initialize_structure(netdata_run_mode_t mode)
*/
void ebpf_ut_cleanup_memory()
{
freez((void *)test_em.thread_name);
freez((void *)test_em.info.thread_name);
}
/**
@ -70,14 +70,14 @@ int ebpf_ut_load_real_binary()
*/
int ebpf_ut_load_fake_binary()
{
const char *original = test_em.thread_name;
const char *original = test_em.info.thread_name;
test_em.thread_name = strdupz("I_am_not_here");
test_em.info.thread_name = strdupz("I_am_not_here");
int ret = ebpf_ut_load_binary();
ebpf_ut_cleanup_memory();
test_em.thread_name = original;
test_em.info.thread_name = original;
return !ret;
}
View file
@ -33,7 +33,8 @@ functions - [plugins.d](https://github.com/netdata/netdata/blob/master/collector
| Function | Description | plugin - module |
| :-- | :-- | :-- |
| processes | Detailed information on the currently running processes on the node. | [apps.plugin](https://github.com/netdata/netdata/blob/master/collectors/apps.plugin/README.md) |
| ebpf_thread | Controller for eBPF threads. | [ebpf.plugin](https://github.com/netdata/netdata/blob/master/collectors/ebpf.plugin/README.md) |
| ebpf_socket | Detailed socket information. | [ebpf.plugin](https://github.com/netdata/netdata/blob/master/collectors/ebpf.plugin/README.md#ebpf_socket) |
| ebpf_thread | Controller for eBPF threads. | [ebpf.plugin](https://github.com/netdata/netdata/blob/master/collectors/ebpf.plugin/README.md#ebpf_thread) |
If you have ideas or requests for other functions:
* Participate in the relevant [GitHub discussion](https://github.com/netdata/netdata/discussions/14412)
View file
@ -792,13 +792,13 @@ void ebpf_update_controller(int fd, ebpf_module_t *em)
{
uint32_t values[NETDATA_CONTROLLER_END] = {
(em->apps_charts & NETDATA_EBPF_APPS_FLAG_YES) | em->cgroup_charts,
em->apps_level
em->apps_level, 0, 0, 0, 0
};
uint32_t key;
uint32_t end = (em->apps_level != NETDATA_APPS_NOT_SET) ? NETDATA_CONTROLLER_END : NETDATA_CONTROLLER_APPS_LEVEL;
uint32_t end = NETDATA_CONTROLLER_PID_TABLE_ADD;
for (key = NETDATA_CONTROLLER_APPS_ENABLED; key < end; key++) {
int ret = bpf_map_update_elem(fd, &key, &values[key], 0);
int ret = bpf_map_update_elem(fd, &key, &values[key], BPF_ANY);
if (ret)
netdata_log_error("Add key(%u) for controller table failed.", key);
}
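For context, a hedged sketch of the other side of this handshake: an eBPF program reading one of the controller slots written by the loop above. The map name `example_ctrl` and the `__u32` value type are assumptions for illustration; the real controller maps are defined in the kernel-collector sources.

```c
// Sketch of a hypothetical controller array map as seen from the eBPF program.
// User space writes NETDATA_CONTROLLER_APPS_ENABLED and the other slots shown above.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
    __uint(type, BPF_MAP_TYPE_ARRAY);
    __uint(max_entries, NETDATA_CONTROLLER_END);
    __type(key, __u32);
    __type(value, __u32);
} example_ctrl SEC(".maps");

static __always_inline __u32 example_apps_enabled(void)
{
    __u32 key = NETDATA_CONTROLLER_APPS_ENABLED;
    __u32 *value = bpf_map_lookup_elem(&example_ctrl, &key);

    return value ? *value : 0;   // missing or zero means apps charts are disabled
}
```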
@ -855,7 +855,7 @@ struct bpf_link **ebpf_load_program(char *plugins_dir, ebpf_module_t *em, int kv
uint32_t idx = ebpf_select_index(em->kernels, is_rhf, kver);
ebpf_mount_name(lpath, 4095, plugins_dir, idx, em->thread_name, em->mode, is_rhf);
ebpf_mount_name(lpath, 4095, plugins_dir, idx, em->info.thread_name, em->mode, is_rhf);
// When this function is called ebpf.plugin is using legacy code, so we should reset the variable
em->load &= ~ NETDATA_EBPF_LOAD_METHODS;
@ -1269,7 +1269,7 @@ void ebpf_update_module_using_config(ebpf_module_t *modules, netdata_ebpf_load_m
#ifdef NETDATA_DEV_MODE
netdata_log_info("The thread %s was configured with: mode = %s; update every = %d; apps = %s; cgroup = %s; ebpf type format = %s; ebpf co-re tracing = %s; collect pid = %s; maps per core = %s, lifetime=%u",
modules->thread_name,
modules->info.thread_name,
load_mode,
modules->update_every,
(modules->apps_charts)?"enabled":"disabled",
View file
@ -301,11 +301,27 @@ enum ebpf_global_table_values {
typedef uint64_t netdata_idx_t;
typedef struct ebpf_module {
const char *thread_name;
const char *config_name;
const char *thread_description;
// Constants used with module
struct {
const char *thread_name;
const char *config_name;
const char *thread_description;
} info;
// Helpers used with plugin
struct {
void *(*start_routine)(void *); // the thread function
void (*apps_routine)(struct ebpf_module *em, void *ptr); // the apps charts
void (*fnct_routine)(BUFFER *bf, struct ebpf_module *em); // the function used for external requests
const char *fcnt_name; // name exposed to the cloud
const char *fcnt_desc; // description of the function
const char *fcnt_thread_chart_name;
int order_thread_chart;
const char *fcnt_thread_lifetime_name;
int order_thread_lifetime;
} functions;
enum ebpf_threads_status enabled;
void *(*start_routine)(void *);
int update_every;
int global_charts;
netdata_apps_integration_flags_t apps_charts;
@ -314,7 +330,6 @@ typedef struct ebpf_module {
netdata_run_mode_t mode;
uint32_t thread_id;
int optional;
void (*apps_routine)(struct ebpf_module *em, void *ptr);
ebpf_local_maps_t *maps;
ebpf_specify_name_t *names;
uint32_t pid_map_size;
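A short sketch of how a module entry could populate the relocated members after this refactor. The field values below are illustrative (the socket thread is used as an example, and `ebpf_socket_read_open_connections` is the handler declared earlier in this diff); the authoritative table lives in `ebpf.c` and may differ.

```c
// Illustrative only: designated initializers for the new nested members.
ebpf_module_t example_socket_module = {
    .info = {
        .thread_name = "socket",
        .config_name = "socket",
        .thread_description = "Monitor TCP and UDP bandwidth and calls.",
    },
    .functions = {
        .start_routine = ebpf_socket_thread,               // thread entry point
        .apps_routine = ebpf_socket_create_apps_charts,    // per-application charts
        .fnct_routine = ebpf_socket_read_open_connections, // serves the ebpf_socket function
        .fcnt_name = "ebpf_socket",
        .fcnt_desc = "Detailed socket information.",
    },
    .update_every = 1,
};
```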
View file
@ -1 +1 @@
2abbbaf30a73e1ed365d42324a5128470568b008528c3ff8cd98d5eb86152f03 netdata-ebpf-co-re-glibc-v1.2.1.tar.xz
7ef8d2a0f485b4c81942f66c50e1aedcd568b7997a933c50c0ebbd8353543c08 netdata-ebpf-co-re-glibc-v1.2.8.tar.xz
View file
@ -1 +1 @@
v1.2.1
v1.2.8
View file
@ -1,3 +1,3 @@
cb0cd6ef4bdb8a39c42b152d328d4822217c59e1d616d3003bc67bc53a058275 ./netdata-kernel-collector-glibc-v1.2.1.tar.xz
0633ff39e8654a21ab664a289f58daca5792cfaf2ed62dcaacf7cd267eeedd40 ./netdata-kernel-collector-musl-v1.2.1.tar.xz
6ce60c5ac8f45cc6a01b7ac9ea150728963d0aca1ee6dfd568b0f8b2ba67b88b ./netdata-kernel-collector-static-v1.2.1.tar.xz
9035b6b8dda5230c1ddc44991518a3ee069bd497ad5a8e5448b79dc4b8c51c43 ./netdata-kernel-collector-glibc-v1.2.8.tar.xz
e5b1a141475f75c60c282a2e3ce8e3914893e75d474c976bad95f66d4c9846c5 ./netdata-kernel-collector-musl-v1.2.8.tar.xz
d6081a2fedc9435d1ab430697cb101123cebaac07b62fb91d790ca526923f4e3 ./netdata-kernel-collector-static-v1.2.8.tar.xz
View file
@ -1 +1 @@
v1.2.1
v1.2.8