diff --git a/collectors/ebpf.plugin/README.md b/collectors/ebpf.plugin/README.md
index fb036a5aa3..06915ea521 100644
--- a/collectors/ebpf.plugin/README.md
+++ b/collectors/ebpf.plugin/README.md
@@ -261,7 +261,7 @@ You can also enable the following eBPF programs:
 -   `swap`: This eBPF program creates charts that show information about swap access.
 -   `mdflush`: This eBPF program creates charts that show information about
     multi-device software flushes.
 -   `sync`: Monitor calls to syscalls sync(2), fsync(2), fdatasync(2), syncfs(2), msync(2), and sync_file_range(2).
--   `network viewer`: This eBPF program creates charts with information about `TCP` and `UDP` functions, including the
+-   `socket`: This eBPF program creates charts with information about `TCP` and `UDP` functions, including the
     bandwidth consumed by each.
 -   `vfs`: This eBPF program creates charts that show information about VFS (Virtual File System) functions.
@@ -302,12 +302,13 @@ are divided in the following sections:
 
 #### `[network connections]`
 
-You can configure the information shown on `outbound` and `inbound` charts with the settings in this section.
+You can configure the information shown by the `ebpf_socket` function using the settings in this section.
 
 ```conf
 [network connections]
-    maximum dimensions = 500
+    enabled = yes
     resolve hostname ips = no
+    resolve service names = yes
     ports = 1-1024 !145 !domain
     hostnames = !example.com
     ips = !127.0.0.1/8 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 fc00::/7
@@ -318,24 +319,23 @@ write `ports = 19999`, Netdata will collect only connections for itself. The `ho
 [simple patterns](https://github.com/netdata/netdata/blob/master/libnetdata/simple_pattern/README.md). The `ports` and `ips` settings accept negation (`!`) to deny
 specific values, or an asterisk alone to define all values.
 
-In the above example, Netdata will collect metrics for all ports between 1 and 443, with the exception of 53 (domain)
-and 145.
+In the above example, Netdata will collect metrics for all ports between `1` and `1024`, with the exception of `53` (domain)
+and `145`.
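+
+As a minimal sketch, the hypothetical configuration below would restrict collection to Netdata's own
+connections only, using the port mentioned earlier in this section:
+
+```conf
+[network connections]
+    enabled = yes
+    ports = 19999
+```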
 
 The following options are available:
 
+-   `enabled`: Enable or disable network connection monitoring. Disabling it can directly affect the output of some functions.
+-   `resolve hostname ips`: Enable resolving IPs to hostnames. It is disabled by default because it can be too slow.
+-   `resolve service names`: Convert destination ports into service names, for example, port `53` with protocol `UDP` becomes `domain`.
+    All names are read from `/etc/services`.
 -   `ports`: Define the destination ports for Netdata to monitor.
 -   `hostnames`: The list of hostnames that can be resolved to an IP address.
 -   `ips`: The IP or range of IPs that you want to monitor. You can use IPv4 or IPv6 addresses, use dashes to define a
-    range of IPs, or use CIDR values. By default, only data for private IP addresses is collected, but this can
-    be changed with the `ips` setting.
+    range of IPs, or use CIDR values.
 
-By default, Netdata displays up to 500 dimensions on network connection charts. If there are more possible dimensions,
-they will be bundled into the `other` dimension. You can increase the number of shown dimensions by changing
-the `maximum dimensions` setting.
-
-The dimensions for the traffic charts are created using the destination IPs of the sockets by default. This can be
-changed setting `resolve hostname ips = yes` and restarting Netdata, after this Netdata will create dimensions using
-the `hostnames` every time that is possible to resolve IPs to their hostnames.
+By default, the traffic table is created using the destination IPs and ports of the sockets. This can be
+changed, so that Netdata uses service names (when possible), by specifying `resolve service names = yes` in this
+configuration section.
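+
+For instance, a hypothetical `ips` line combining a dash-defined range, a CIDR block, and a negated
+loopback (all syntax described above) could look like this:
+
+```conf
+[network connections]
+    ips = !127.0.0.1/8 192.168.1.1-192.168.1.100 10.0.0.0/8
+```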
 
 #### `[service name]`
 
@@ -990,13 +990,15 @@ shows how the lockdown module impacts `ebpf.plugin` based on the selected option
 If you or your distribution compiled the kernel with the last combination, your system cannot load shared libraries
 required to run `ebpf.plugin`.
 
-## Function
+## Functions
+
+### ebpf_thread
 
 The eBPF plugin has a [function](https://github.com/netdata/netdata/blob/master/docs/cloud/netdata-functions.md) named
 `ebpf_thread` that controls its internal threads and helps to reduce the overhead on the host. Using this function, you
 can run the plugin with all threads disabled and enable them only when you want to inspect specific areas.
 
-### List threads
+#### List threads
 
 To list the status of all threads, you can query the function endpoint directly:
 
@@ -1006,7 +1008,7 @@ It is also possible to query a specific thread adding keyword `thread` and threa
 
 `http://localhost:19999/api/v1/function?function=ebpf_thread%20thread:mount`
 
-### Enable thread
+#### Enable thread
 
 It is possible to enable a specific thread using the keyword `enable`:
 
@@ -1019,14 +1021,14 @@ after the thread name:
 
 In this example, the thread `mount` will run for 600 seconds (10 minutes).
 
-### Disable thread
+#### Disable thread
 
 It is also possible to stop any running thread using the keyword `disable`. For example, to disable `cachestat` you can
 request:
 
 `http://localhost:19999/api/v1/function?function=ebpf_thread%20disable:cachestat`
 
-### Debugging threads
+#### Debugging threads
 
 You can verify the impact of threads on the host by running the
 [ebpf_thread_function.sh](https://github.com/netdata/netdata/blob/master/tests/ebpf/ebpf_thread_function.sh)
@@ -1036,3 +1038,34 @@ You can check the results of having threads running on your environment in the N
 dashboard
 
 <img src="https://github.com/netdata/netdata/assets/49162938/91823573-114c-4c16-b634-cc46f7bb1bcf" alt="Threads running." />
+
+### ebpf_socket
+
+The eBPF plugin has a [function](https://github.com/netdata/netdata/blob/master/docs/cloud/netdata-functions.md) named
+`ebpf_socket` that shows the current status of open sockets on the host.
+
+#### Families
+
+By default, the plugin shows sockets for IPv4 and IPv6, but it is possible to select a specific family by passing the
+family as an argument:
+
+`http://localhost:19999/api/v1/function?function=ebpf_socket%20family:IPV4`
+
+#### Resolve
+
+The plugin resolves ports to service names by default. You can show port numbers instead by disabling name resolution:
+
+`http://localhost:19999/api/v1/function?function=ebpf_socket%20resolve:NO`
+
+#### CIDR
+
+The plugin shows connections for all possible destination IPs by default. You can limit the range by specifying the CIDR:
+
+`http://localhost:19999/api/v1/function?function=ebpf_socket%20cidr:192.168.1.0/24`
+
+#### Port
+
+The plugin shows connections for all possible ports by default. You can limit the range by specifying a port or range
+of ports:
+
+`http://localhost:19999/api/v1/function?function=ebpf_socket%20port:1-1024`
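+
+As a hypothetical example (assuming the keywords can be chained, which matches the syntax of the
+queries shown above), the query below would restrict output to `IPV4` sockets on ports `1-1024`
+without name resolution:
+
+`http://localhost:19999/api/v1/function?function=ebpf_socket%20family:IPV4%20resolve:NO%20port:1-1024`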
diff --git a/collectors/ebpf.plugin/ebpf.c b/collectors/ebpf.plugin/ebpf.c
index 844047305c..af856f438b 100644
--- a/collectors/ebpf.plugin/ebpf.c
+++ b/collectors/ebpf.plugin/ebpf.c
@@ -49,176 +49,258 @@ struct netdata_static_thread cgroup_integration_thread = {
 };
 
 ebpf_module_t ebpf_modules[] = {
-    { .thread_name = "process", .config_name = "process", .thread_description = NETDATA_EBPF_MODULE_PROCESS_DESC,
-      .enabled = 0, .start_routine = ebpf_process_thread,
+    { .info = {.thread_name = "process",
+              .config_name = "process",
+              .thread_description = NETDATA_EBPF_MODULE_PROCESS_DESC},
+      .functions = {.start_routine = ebpf_process_thread,
+                    .apps_routine = ebpf_process_create_apps_charts,
+                    .fnct_routine = NULL},
+      .enabled = 0,
       .update_every = EBPF_DEFAULT_UPDATE_EVERY, .global_charts = 1, .apps_charts = NETDATA_EBPF_APPS_FLAG_NO,
       .apps_level = NETDATA_APPS_LEVEL_REAL_PARENT, .cgroup_charts = CONFIG_BOOLEAN_NO, .mode = MODE_ENTRY, .optional = 0,
-      .apps_routine = ebpf_process_create_apps_charts, .maps = NULL,
-      .pid_map_size = ND_EBPF_DEFAULT_PID_SIZE, .names = NULL, .cfg = &process_config,
+      .maps = NULL, .pid_map_size = ND_EBPF_DEFAULT_PID_SIZE, .names = NULL, .cfg = &process_config,
       .config_file = NETDATA_PROCESS_CONFIG_FILE,
       .kernels =  NETDATA_V3_10 | NETDATA_V4_14 | NETDATA_V4_16 | NETDATA_V4_18 | NETDATA_V5_4 | NETDATA_V5_10 |
                   NETDATA_V5_14,
       .load = EBPF_LOAD_LEGACY, .targets = NULL, .probe_links = NULL, .objects = NULL,
       .thread = NULL, .maps_per_core = CONFIG_BOOLEAN_YES, .lifetime = EBPF_DEFAULT_LIFETIME, .running_time = 0 },
-    { .thread_name = "socket", .config_name = "socket", .thread_description = NETDATA_EBPF_SOCKET_MODULE_DESC,
-      .enabled = 0, .start_routine = ebpf_socket_thread,
+    { .info = {.thread_name = "socket",
+              .config_name = "socket",
+              .thread_description = NETDATA_EBPF_SOCKET_MODULE_DESC},
+      .functions = {.start_routine = ebpf_socket_thread,
+                   .apps_routine = ebpf_socket_create_apps_charts,
+                   .fnct_routine = ebpf_socket_read_open_connections,
+                   .fcnt_name = EBPF_FUNCTION_SOCKET,
+                   .fcnt_desc = EBPF_PLUGIN_SOCKET_FUNCTION_DESCRIPTION,
+                   .fcnt_thread_chart_name = NULL,
+                   .fcnt_thread_lifetime_name = NULL},
+      .enabled = 0,
       .update_every = EBPF_DEFAULT_UPDATE_EVERY, .global_charts = 1, .apps_charts = NETDATA_EBPF_APPS_FLAG_NO,
       .apps_level = NETDATA_APPS_LEVEL_REAL_PARENT, .cgroup_charts = CONFIG_BOOLEAN_NO, .mode = MODE_ENTRY, .optional = 0,
-      .apps_routine = ebpf_socket_create_apps_charts, .maps = NULL,
+      .maps = NULL,
       .pid_map_size = ND_EBPF_DEFAULT_PID_SIZE, .names = NULL, .cfg = &socket_config,
       .config_file = NETDATA_NETWORK_CONFIG_FILE,
       .kernels =  NETDATA_V3_10 | NETDATA_V4_14 | NETDATA_V4_16 | NETDATA_V4_18 | NETDATA_V5_4 | NETDATA_V5_14,
       .load = EBPF_LOAD_LEGACY, .targets = socket_targets, .probe_links = NULL, .objects = NULL,
       .thread = NULL, .maps_per_core = CONFIG_BOOLEAN_YES, .lifetime = EBPF_DEFAULT_LIFETIME, .running_time = 0},
-    { .thread_name = "cachestat", .config_name = "cachestat", .thread_description = NETDATA_EBPF_CACHESTAT_MODULE_DESC,
-      .enabled = 0, .start_routine = ebpf_cachestat_thread,
+    { .info = {.thread_name = "cachestat", .config_name = "cachestat", .thread_description = NETDATA_EBPF_CACHESTAT_MODULE_DESC},
+      .functions = {.start_routine = ebpf_cachestat_thread,
+                    .apps_routine = ebpf_cachestat_create_apps_charts,
+                    .fnct_routine = NULL},
+      .enabled = 0,
       .update_every = EBPF_DEFAULT_UPDATE_EVERY, .global_charts = 1, .apps_charts = NETDATA_EBPF_APPS_FLAG_NO,
       .apps_level = NETDATA_APPS_LEVEL_REAL_PARENT, .cgroup_charts = CONFIG_BOOLEAN_NO, .mode = MODE_ENTRY, .optional = 0,
-      .apps_routine = ebpf_cachestat_create_apps_charts, .maps = cachestat_maps,
-      .pid_map_size = ND_EBPF_DEFAULT_PID_SIZE, .names = NULL, .cfg = &cachestat_config,
+      .maps = cachestat_maps, .pid_map_size = ND_EBPF_DEFAULT_PID_SIZE, .names = NULL, .cfg = &cachestat_config,
       .config_file = NETDATA_CACHESTAT_CONFIG_FILE,
       .kernels = NETDATA_V3_10 | NETDATA_V4_14 | NETDATA_V4_16 | NETDATA_V4_18|
                  NETDATA_V5_4 | NETDATA_V5_14 | NETDATA_V5_15 | NETDATA_V5_16,
       .load = EBPF_LOAD_LEGACY, .targets = cachestat_targets, .probe_links = NULL, .objects = NULL,
       .thread = NULL, .maps_per_core = CONFIG_BOOLEAN_YES, .lifetime = EBPF_DEFAULT_LIFETIME, .running_time = 0},
-    { .thread_name = "sync", .config_name = "sync", .thread_description = NETDATA_EBPF_SYNC_MODULE_DESC,
-      .enabled = 0, .start_routine = ebpf_sync_thread,
+    { .info = {.thread_name = "sync",
+              .config_name = "sync",
+              .thread_description = NETDATA_EBPF_SYNC_MODULE_DESC},
+      .functions = {.start_routine = ebpf_sync_thread,
+                    .apps_routine = NULL,
+                    .fnct_routine = NULL},
+      .enabled = 0, .maps = NULL,
       .update_every = EBPF_DEFAULT_UPDATE_EVERY, .global_charts = 1, .apps_charts = NETDATA_EBPF_APPS_FLAG_NO,
       .apps_level = NETDATA_APPS_NOT_SET, .cgroup_charts = CONFIG_BOOLEAN_NO, .mode = MODE_ENTRY, .optional = 0,
-      .apps_routine = NULL, .maps = NULL, .pid_map_size = ND_EBPF_DEFAULT_PID_SIZE, .names = NULL, .cfg = &sync_config,
+      .pid_map_size = ND_EBPF_DEFAULT_PID_SIZE, .names = NULL, .cfg = &sync_config,
       .config_file = NETDATA_SYNC_CONFIG_FILE,
       // All syscalls have the same kernels
       .kernels =  NETDATA_V3_10 | NETDATA_V4_14 | NETDATA_V4_16 | NETDATA_V4_18 | NETDATA_V5_4 | NETDATA_V5_14,
       .load = EBPF_LOAD_LEGACY, .targets = sync_targets, .probe_links = NULL, .objects = NULL,
       .thread = NULL, .maps_per_core = CONFIG_BOOLEAN_YES, .lifetime = EBPF_DEFAULT_LIFETIME, .running_time = 0},
-    { .thread_name = "dc", .config_name = "dc", .thread_description = NETDATA_EBPF_DC_MODULE_DESC,
-      .enabled = 0, .start_routine = ebpf_dcstat_thread,
+    { .info = {.thread_name = "dc",
+              .config_name = "dc",
+              .thread_description = NETDATA_EBPF_DC_MODULE_DESC},
+      .functions = {.start_routine = ebpf_dcstat_thread,
+                    .apps_routine = ebpf_dcstat_create_apps_charts,
+                    .fnct_routine = NULL},
+      .enabled = 0,
       .update_every = EBPF_DEFAULT_UPDATE_EVERY, .global_charts = 1, .apps_charts = NETDATA_EBPF_APPS_FLAG_NO,
       .apps_level = NETDATA_APPS_LEVEL_REAL_PARENT, .cgroup_charts = CONFIG_BOOLEAN_NO, .mode = MODE_ENTRY, .optional = 0,
-      .apps_routine = ebpf_dcstat_create_apps_charts, .maps = dcstat_maps,
+      .maps = dcstat_maps,
       .pid_map_size = ND_EBPF_DEFAULT_PID_SIZE, .names = NULL, .cfg = &dcstat_config,
       .config_file = NETDATA_DIRECTORY_DCSTAT_CONFIG_FILE,
       .kernels =  NETDATA_V3_10 | NETDATA_V4_14 | NETDATA_V4_16 | NETDATA_V4_18 | NETDATA_V5_4 | NETDATA_V5_14,
       .load = EBPF_LOAD_LEGACY, .targets = dc_targets, .probe_links = NULL, .objects = NULL,
       .thread = NULL, .maps_per_core = CONFIG_BOOLEAN_YES, .lifetime = EBPF_DEFAULT_LIFETIME, .running_time = 0},
-    { .thread_name = "swap", .config_name = "swap", .thread_description = NETDATA_EBPF_SWAP_MODULE_DESC,
-      .enabled = 0, .start_routine = ebpf_swap_thread,
+    { .info = {.thread_name = "swap", .config_name = "swap", .thread_description = NETDATA_EBPF_SWAP_MODULE_DESC},
+      .functions = {.start_routine = ebpf_swap_thread,
+                    .apps_routine = ebpf_swap_create_apps_charts,
+                    .fnct_routine = NULL},
+      .enabled = 0,
       .update_every = EBPF_DEFAULT_UPDATE_EVERY, .global_charts = 1, .apps_charts = NETDATA_EBPF_APPS_FLAG_NO,
       .apps_level = NETDATA_APPS_LEVEL_REAL_PARENT, .cgroup_charts = CONFIG_BOOLEAN_NO, .mode = MODE_ENTRY, .optional = 0,
-      .apps_routine = ebpf_swap_create_apps_charts, .maps = NULL,
+      .maps = NULL,
       .pid_map_size = ND_EBPF_DEFAULT_PID_SIZE, .names = NULL, .cfg = &swap_config,
       .config_file = NETDATA_DIRECTORY_SWAP_CONFIG_FILE,
       .kernels =  NETDATA_V3_10 | NETDATA_V4_14 | NETDATA_V4_16 | NETDATA_V4_18 | NETDATA_V5_4 | NETDATA_V5_14,
       .load = EBPF_LOAD_LEGACY, .targets = swap_targets, .probe_links = NULL, .objects = NULL,
       .thread = NULL, .maps_per_core = CONFIG_BOOLEAN_YES, .lifetime = EBPF_DEFAULT_LIFETIME, .running_time = 0},
-    { .thread_name = "vfs", .config_name = "vfs", .thread_description = NETDATA_EBPF_VFS_MODULE_DESC,
-      .enabled = 0, .start_routine = ebpf_vfs_thread,
+    { .info = {.thread_name = "vfs",
+              .config_name = "vfs",
+              .thread_description = NETDATA_EBPF_VFS_MODULE_DESC},
+      .functions = {.start_routine = ebpf_vfs_thread,
+                   .apps_routine = ebpf_vfs_create_apps_charts,
+                   .fnct_routine = NULL},
+      .enabled = 0,
       .update_every = EBPF_DEFAULT_UPDATE_EVERY, .global_charts = 1, .apps_charts = NETDATA_EBPF_APPS_FLAG_NO,
       .apps_level = NETDATA_APPS_LEVEL_REAL_PARENT, .cgroup_charts = CONFIG_BOOLEAN_NO, .mode = MODE_ENTRY, .optional = 0,
-      .apps_routine = ebpf_vfs_create_apps_charts, .maps = NULL,
+      .maps = NULL,
       .pid_map_size = ND_EBPF_DEFAULT_PID_SIZE, .names = NULL, .cfg = &vfs_config,
       .config_file = NETDATA_DIRECTORY_VFS_CONFIG_FILE,
       .kernels =  NETDATA_V3_10 | NETDATA_V4_14 | NETDATA_V4_16 | NETDATA_V4_18 | NETDATA_V5_4 | NETDATA_V5_14,
       .load = EBPF_LOAD_LEGACY, .targets = vfs_targets, .probe_links = NULL, .objects = NULL,
       .thread = NULL, .maps_per_core = CONFIG_BOOLEAN_YES, .lifetime = EBPF_DEFAULT_LIFETIME, .running_time = 0},
-    { .thread_name = "filesystem", .config_name = "filesystem", .thread_description = NETDATA_EBPF_FS_MODULE_DESC,
-      .enabled = 0, .start_routine = ebpf_filesystem_thread,
+    { .info = {.thread_name = "filesystem", .config_name = "filesystem", .thread_description = NETDATA_EBPF_FS_MODULE_DESC},
+      .functions = {.start_routine = ebpf_filesystem_thread,
+                   .apps_routine = NULL,
+                   .fnct_routine = NULL},
+      .enabled = 0,
       .update_every = EBPF_DEFAULT_UPDATE_EVERY, .global_charts = 1, .apps_charts = NETDATA_EBPF_APPS_FLAG_NO,
       .apps_level = NETDATA_APPS_NOT_SET, .cgroup_charts = CONFIG_BOOLEAN_NO, .mode = MODE_ENTRY, .optional = 0,
-      .apps_routine = NULL, .maps = NULL, .pid_map_size = ND_EBPF_DEFAULT_PID_SIZE, .names = NULL, .cfg = &fs_config,
+      .maps = NULL, .pid_map_size = ND_EBPF_DEFAULT_PID_SIZE, .names = NULL, .cfg = &fs_config,
       .config_file = NETDATA_FILESYSTEM_CONFIG_FILE,
       // We are setting kernels to zero, because we load eBPF programs according to the running kernel.
       .kernels = 0, .load = EBPF_LOAD_LEGACY, .targets = NULL, .probe_links = NULL, .objects = NULL,
       .thread = NULL, .maps_per_core = CONFIG_BOOLEAN_YES, .lifetime = EBPF_DEFAULT_LIFETIME, .running_time = 0},
-    { .thread_name = "disk", .config_name = "disk", .thread_description = NETDATA_EBPF_DISK_MODULE_DESC,
-      .enabled = 0, .start_routine = ebpf_disk_thread,
+    { .info = {.thread_name = "disk",
+              .config_name = "disk",
+              .thread_description = NETDATA_EBPF_DISK_MODULE_DESC},
+      .functions = {.start_routine = ebpf_disk_thread,
+                   .apps_routine = NULL,
+                   .fnct_routine = NULL},
+     .enabled = 0,
       .update_every = EBPF_DEFAULT_UPDATE_EVERY, .global_charts = 1, .apps_charts = NETDATA_EBPF_APPS_FLAG_NO,
       .apps_level = NETDATA_APPS_NOT_SET, .cgroup_charts = CONFIG_BOOLEAN_NO, .mode = MODE_ENTRY, .optional = 0,
-      .apps_routine = NULL, .maps = NULL, .pid_map_size = ND_EBPF_DEFAULT_PID_SIZE, .names = NULL, .cfg = &disk_config,
+      .maps = NULL, .pid_map_size = ND_EBPF_DEFAULT_PID_SIZE, .names = NULL, .cfg = &disk_config,
       .config_file = NETDATA_DISK_CONFIG_FILE,
       .kernels =  NETDATA_V3_10 | NETDATA_V4_14 | NETDATA_V4_16 | NETDATA_V4_18 | NETDATA_V5_4 | NETDATA_V5_14,
       .load = EBPF_LOAD_LEGACY, .targets = NULL, .probe_links = NULL, .objects = NULL,
       .thread = NULL, .maps_per_core = CONFIG_BOOLEAN_YES, .lifetime = EBPF_DEFAULT_LIFETIME, .running_time = 0},
-    { .thread_name = "mount", .config_name = "mount", .thread_description = NETDATA_EBPF_MOUNT_MODULE_DESC,
-      .enabled = 0, .start_routine = ebpf_mount_thread,
+    { .info = {.thread_name = "mount",
+              .config_name = "mount",
+              .thread_description = NETDATA_EBPF_MOUNT_MODULE_DESC},
+      .functions = {.start_routine = ebpf_mount_thread,
+                   .apps_routine = NULL,
+                   .fnct_routine = NULL},
+      .enabled = 0,
       .update_every = EBPF_DEFAULT_UPDATE_EVERY, .global_charts = 1, .apps_charts = NETDATA_EBPF_APPS_FLAG_NO,
       .apps_level = NETDATA_APPS_NOT_SET, .cgroup_charts = CONFIG_BOOLEAN_NO, .mode = MODE_ENTRY, .optional = 0,
-      .apps_routine = NULL, .maps = NULL, .pid_map_size = ND_EBPF_DEFAULT_PID_SIZE, .names = NULL, .cfg = &mount_config,
+      .maps = NULL, .pid_map_size = ND_EBPF_DEFAULT_PID_SIZE, .names = NULL, .cfg = &mount_config,
       .config_file = NETDATA_MOUNT_CONFIG_FILE,
       .kernels =  NETDATA_V3_10 | NETDATA_V4_14 | NETDATA_V4_16 | NETDATA_V4_18 | NETDATA_V5_4 | NETDATA_V5_14,
       .load = EBPF_LOAD_LEGACY, .targets = mount_targets, .probe_links = NULL, .objects = NULL,
       .thread = NULL, .maps_per_core = CONFIG_BOOLEAN_YES, .lifetime = EBPF_DEFAULT_LIFETIME, .running_time = 0},
-    { .thread_name = "fd", .config_name = "fd", .thread_description = NETDATA_EBPF_FD_MODULE_DESC,
-      .enabled = 0, .start_routine = ebpf_fd_thread,
+    {  .info = { .thread_name = "fd",
+              .config_name = "fd",
+              .thread_description = NETDATA_EBPF_FD_MODULE_DESC},
+       .functions = {.start_routine = ebpf_fd_thread,
+                     .apps_routine = ebpf_fd_create_apps_charts,
+                     .fnct_routine = NULL},
+      .enabled = 0,
       .update_every = EBPF_DEFAULT_UPDATE_EVERY, .global_charts = 1, .apps_charts = NETDATA_EBPF_APPS_FLAG_NO,
       .apps_level = NETDATA_APPS_LEVEL_REAL_PARENT, .cgroup_charts = CONFIG_BOOLEAN_NO, .mode = MODE_ENTRY, .optional = 0,
-      .apps_routine = ebpf_fd_create_apps_charts, .maps = NULL,
+      .maps = NULL,
       .pid_map_size = ND_EBPF_DEFAULT_PID_SIZE, .names = NULL, .cfg = &fd_config,
       .config_file = NETDATA_FD_CONFIG_FILE,
       .kernels =  NETDATA_V3_10 | NETDATA_V4_14 | NETDATA_V4_16 | NETDATA_V4_18 | NETDATA_V5_4 | NETDATA_V5_11 |
                   NETDATA_V5_14,
       .load = EBPF_LOAD_LEGACY, .targets = fd_targets, .probe_links = NULL, .objects = NULL,
       .thread = NULL, .maps_per_core = CONFIG_BOOLEAN_YES, .lifetime = EBPF_DEFAULT_LIFETIME, .running_time = 0},
-    { .thread_name = "hardirq", .config_name = "hardirq", .thread_description = NETDATA_EBPF_HARDIRQ_MODULE_DESC,
-      .enabled = 0, .start_routine = ebpf_hardirq_thread,
+    {  .info = { .thread_name = "hardirq",
+              .config_name = "hardirq",
+              .thread_description = NETDATA_EBPF_HARDIRQ_MODULE_DESC},
+      .functions = {.start_routine = ebpf_hardirq_thread,
+                    .apps_routine = NULL,
+                    .fnct_routine = NULL},
+      .enabled = 0,
       .update_every = EBPF_DEFAULT_UPDATE_EVERY, .global_charts = 1, .apps_charts = NETDATA_EBPF_APPS_FLAG_NO,
       .apps_level = NETDATA_APPS_NOT_SET, .cgroup_charts = CONFIG_BOOLEAN_NO, .mode = MODE_ENTRY, .optional = 0,
-      .apps_routine = NULL, .maps = NULL, .pid_map_size = ND_EBPF_DEFAULT_PID_SIZE, .names = NULL, .cfg = &hardirq_config,
+      .maps = NULL, .pid_map_size = ND_EBPF_DEFAULT_PID_SIZE, .names = NULL, .cfg = &hardirq_config,
       .config_file = NETDATA_HARDIRQ_CONFIG_FILE,
       .kernels =  NETDATA_V3_10 | NETDATA_V4_14 | NETDATA_V4_16 | NETDATA_V4_18 | NETDATA_V5_4 | NETDATA_V5_14,
       .load = EBPF_LOAD_LEGACY, .targets = NULL, .probe_links = NULL, .objects = NULL,
       .thread = NULL, .maps_per_core = CONFIG_BOOLEAN_YES, .lifetime = EBPF_DEFAULT_LIFETIME, .running_time = 0},
-    { .thread_name = "softirq", .config_name = "softirq", .thread_description = NETDATA_EBPF_SOFTIRQ_MODULE_DESC,
-      .enabled = 0, .start_routine = ebpf_softirq_thread,
+    {  .info = { .thread_name = "softirq",
+                 .config_name = "softirq",
+                 .thread_description = NETDATA_EBPF_SOFTIRQ_MODULE_DESC},
+       .functions = {.start_routine = ebpf_softirq_thread,
+                     .apps_routine = NULL,
+                     .fnct_routine = NULL },
+      .enabled = 0,
       .update_every = EBPF_DEFAULT_UPDATE_EVERY, .global_charts = 1, .apps_charts = NETDATA_EBPF_APPS_FLAG_NO,
       .apps_level = NETDATA_APPS_NOT_SET, .cgroup_charts = CONFIG_BOOLEAN_NO, .mode = MODE_ENTRY, .optional = 0,
-      .apps_routine = NULL, .maps = NULL, .pid_map_size = ND_EBPF_DEFAULT_PID_SIZE, .names = NULL, .cfg = &softirq_config,
+      .maps = NULL, .pid_map_size = ND_EBPF_DEFAULT_PID_SIZE, .names = NULL, .cfg = &softirq_config,
       .config_file = NETDATA_SOFTIRQ_CONFIG_FILE,
       .kernels =  NETDATA_V3_10 | NETDATA_V4_14 | NETDATA_V4_16 | NETDATA_V4_18 | NETDATA_V5_4 | NETDATA_V5_14,
       .load = EBPF_LOAD_LEGACY, .targets = NULL, .probe_links = NULL, .objects = NULL,
       .thread = NULL, .maps_per_core = CONFIG_BOOLEAN_YES, .lifetime = EBPF_DEFAULT_LIFETIME, .running_time = 0},
-    { .thread_name = "oomkill", .config_name = "oomkill", .thread_description = NETDATA_EBPF_OOMKILL_MODULE_DESC,
-      .enabled = 0, .start_routine = ebpf_oomkill_thread,
+    {  .info = {.thread_name = "oomkill",
+              .config_name = "oomkill",
+              .thread_description = NETDATA_EBPF_OOMKILL_MODULE_DESC},
+      .functions = {.start_routine = ebpf_oomkill_thread,
+                    .apps_routine = ebpf_oomkill_create_apps_charts,
+                    .fnct_routine = NULL},
+      .enabled = 0,
       .update_every = EBPF_DEFAULT_UPDATE_EVERY, .global_charts = 1, .apps_charts = NETDATA_EBPF_APPS_FLAG_NO,
       .apps_level = NETDATA_APPS_LEVEL_REAL_PARENT, .cgroup_charts = CONFIG_BOOLEAN_NO, .mode = MODE_ENTRY, .optional = 0,
-      .apps_routine = ebpf_oomkill_create_apps_charts, .maps = NULL,
+      .maps = NULL,
       .pid_map_size = ND_EBPF_DEFAULT_PID_SIZE, .names = NULL, .cfg = &oomkill_config,
       .config_file = NETDATA_OOMKILL_CONFIG_FILE,
       .kernels =  NETDATA_V4_14 | NETDATA_V4_16 | NETDATA_V4_18 | NETDATA_V5_4 | NETDATA_V5_14,
       .load = EBPF_LOAD_LEGACY, .targets = NULL, .probe_links = NULL, .objects = NULL,
       .thread = NULL, .maps_per_core = CONFIG_BOOLEAN_YES, .lifetime = EBPF_DEFAULT_LIFETIME, .running_time = 0},
-    { .thread_name = "shm", .config_name = "shm", .thread_description = NETDATA_EBPF_SHM_MODULE_DESC,
-      .enabled = 0, .start_routine = ebpf_shm_thread,
+    { .info = {.thread_name = "shm",
+              .config_name = "shm",
+              .thread_description = NETDATA_EBPF_SHM_MODULE_DESC},
+      .functions = {.start_routine = ebpf_shm_thread,
+                   .apps_routine = ebpf_shm_create_apps_charts,
+                   .fnct_routine = NULL},
+      .enabled = 0,
       .update_every = EBPF_DEFAULT_UPDATE_EVERY, .global_charts = 1, .apps_charts = NETDATA_EBPF_APPS_FLAG_NO,
       .apps_level = NETDATA_APPS_LEVEL_REAL_PARENT, .cgroup_charts = CONFIG_BOOLEAN_NO, .mode = MODE_ENTRY, .optional = 0,
-      .apps_routine = ebpf_shm_create_apps_charts, .maps = NULL,
+      .maps = NULL,
       .pid_map_size = ND_EBPF_DEFAULT_PID_SIZE, .names = NULL, .cfg = &shm_config,
       .config_file = NETDATA_DIRECTORY_SHM_CONFIG_FILE,
       .kernels =  NETDATA_V3_10 | NETDATA_V4_14 | NETDATA_V4_16 | NETDATA_V4_18 | NETDATA_V5_4 | NETDATA_V5_14,
       .load = EBPF_LOAD_LEGACY, .targets = shm_targets, .probe_links = NULL, .objects = NULL,
       .thread = NULL, .maps_per_core = CONFIG_BOOLEAN_YES, .lifetime = EBPF_DEFAULT_LIFETIME, .running_time = 0},
-    { .thread_name = "mdflush", .config_name = "mdflush", .thread_description = NETDATA_EBPF_MD_MODULE_DESC,
-      .enabled = 0, .start_routine = ebpf_mdflush_thread,
+    {  .info = { .thread_name = "mdflush",
+              .config_name = "mdflush",
+              .thread_description = NETDATA_EBPF_MD_MODULE_DESC},
+       .functions = {.start_routine = ebpf_mdflush_thread,
+                   .apps_routine = NULL,
+                   .fnct_routine = NULL},
+      .enabled = 0,
       .update_every = EBPF_DEFAULT_UPDATE_EVERY, .global_charts = 1, .apps_charts = NETDATA_EBPF_APPS_FLAG_NO,
       .apps_level = NETDATA_APPS_NOT_SET, .cgroup_charts = CONFIG_BOOLEAN_NO, .mode = MODE_ENTRY, .optional = 0,
-      .apps_routine = NULL, .maps = NULL, .pid_map_size = ND_EBPF_DEFAULT_PID_SIZE, .names = NULL, .cfg = &mdflush_config,
+      .maps = NULL, .pid_map_size = ND_EBPF_DEFAULT_PID_SIZE, .names = NULL, .cfg = &mdflush_config,
       .config_file = NETDATA_DIRECTORY_MDFLUSH_CONFIG_FILE,
       .kernels =  NETDATA_V3_10 | NETDATA_V4_14 | NETDATA_V4_16 | NETDATA_V4_18 | NETDATA_V5_4 | NETDATA_V5_14,
       .load = EBPF_LOAD_LEGACY, .targets = mdflush_targets, .probe_links = NULL, .objects = NULL,
       .thread = NULL, .maps_per_core = CONFIG_BOOLEAN_YES, .lifetime = EBPF_DEFAULT_LIFETIME, .running_time = 0},
-    { .thread_name = "functions", .config_name = "functions", .thread_description = NETDATA_EBPF_FUNCTIONS_MODULE_DESC,
-      .enabled = 1, .start_routine = ebpf_function_thread,
+    {  .info = { .thread_name = "functions",
+              .config_name = "functions",
+              .thread_description = NETDATA_EBPF_FUNCTIONS_MODULE_DESC},
+       .functions = {.start_routine = ebpf_function_thread,
+                   .apps_routine = NULL,
+                   .fnct_routine = NULL},
+      .enabled = 1,
       .update_every = EBPF_DEFAULT_UPDATE_EVERY, .global_charts = 1, .apps_charts = NETDATA_EBPF_APPS_FLAG_NO,
       .apps_level = NETDATA_APPS_NOT_SET, .cgroup_charts = CONFIG_BOOLEAN_NO, .mode = MODE_ENTRY, .optional = 0,
-      .apps_routine = NULL, .maps = NULL, .pid_map_size = ND_EBPF_DEFAULT_PID_SIZE, .names = NULL, .cfg = NULL,
+      .maps = NULL, .pid_map_size = ND_EBPF_DEFAULT_PID_SIZE, .names = NULL, .cfg = NULL,
       .config_file = NETDATA_DIRECTORY_FUNCTIONS_CONFIG_FILE,
       .kernels =  NETDATA_V3_10 | NETDATA_V4_14 | NETDATA_V4_16 | NETDATA_V4_18 | NETDATA_V5_4 | NETDATA_V5_14,
       .load = EBPF_LOAD_LEGACY, .targets = NULL, .probe_links = NULL, .objects = NULL,
       .thread = NULL, .maps_per_core = CONFIG_BOOLEAN_YES, .lifetime = EBPF_DEFAULT_LIFETIME, .running_time = 0},
-    { .thread_name = NULL, .enabled = 0, .start_routine = NULL, .update_every = EBPF_DEFAULT_UPDATE_EVERY,
+    { .info = {.thread_name = NULL, .config_name = NULL},
+      .functions = {.start_routine = NULL, .apps_routine = NULL, .fnct_routine = NULL},
+      .enabled = 0, .update_every = EBPF_DEFAULT_UPDATE_EVERY,
       .global_charts = 0, .apps_charts = NETDATA_EBPF_APPS_FLAG_NO, .apps_level = NETDATA_APPS_NOT_SET,
-      .cgroup_charts = CONFIG_BOOLEAN_NO, .mode = MODE_ENTRY, .optional = 0, .apps_routine = NULL, .maps = NULL,
-      .pid_map_size = 0, .names = NULL, .cfg = NULL, .config_name = NULL, .kernels = 0, .load = EBPF_LOAD_LEGACY,
+      .cgroup_charts = CONFIG_BOOLEAN_NO, .mode = MODE_ENTRY, .optional = 0, .maps = NULL,
+      .pid_map_size = 0, .names = NULL, .cfg = NULL, .kernels = 0, .load = EBPF_LOAD_LEGACY,
       .targets = NULL, .probe_links = NULL, .objects = NULL, .thread = NULL, .maps_per_core = CONFIG_BOOLEAN_YES},
 };
 
@@ -559,6 +641,7 @@ ebpf_network_viewer_options_t network_viewer_opt;
 ebpf_plugin_stats_t plugin_statistics = {.core = 0, .legacy = 0, .running = 0, .threads = 0, .tracepoints = 0,
                                          .probes = 0, .retprobes = 0, .trampolines = 0, .memlock_kern = 0,
                                          .hash_tables = 0};
+netdata_ebpf_judy_pid_t ebpf_judy_pid = {.pid_table = NULL, .index = {.JudyLArray = NULL}};
 
 #ifdef LIBBPF_MAJOR_VERSION
 struct btf *default_btf = NULL;
@@ -578,6 +661,61 @@ void *default_btf = NULL;
 #endif
 char *btf_path = NULL;
 
+/*****************************************************************
+ *
+ *  FUNCTIONS USED TO MANIPULATE JUDY ARRAY
+ *
+ *****************************************************************/
+
+/**
+ * Judy insert unsafe
+ *
+ * Find or create the value associated with the given index.
+ *
+ * @param arr the Judy array to modify.
+ * @param key the index to look up or insert.
+ *
+ * @return A pointer to the value slot for the index: the slot contains NULL when a new item was
+ * added to the array, otherwise it holds the existing value. We return a pointer to a pointer so
+ * that the caller can store anything needed at the value of the index. The returned pointer must
+ * be used before any other operation that may change the index (insert/delete).
+ */
+void **ebpf_judy_insert_unsafe(PPvoid_t arr, Word_t key)
+{
+    JError_t J_Error;
+    Pvoid_t *idx = JudyLIns(arr, key, &J_Error);
+    if (unlikely(idx == PJERR)) {
+        netdata_log_error("Cannot add PID to JudyL, JU_ERRNO_* == %u, ID == %d",
+                          JU_ERRNO(&J_Error), JU_ERRID(&J_Error));
+    }
+
+    return idx;
+}
+
+/**
+ * Get PID from judy
+ *
+ * Get a pointer to the statistics for `pid` from the judy array, allocating a new entry when it is absent.
+ *
+ * @param judy_array a judy array where PID is the primary key
+ * @param pid        the pid to look up.
+ *
+ * @return It returns the address of the PID statistics structure.
+ */
+netdata_ebpf_judy_pid_stats_t *ebpf_get_pid_from_judy_unsafe(PPvoid_t judy_array, uint32_t pid)
+{
+    netdata_ebpf_judy_pid_stats_t **pid_pptr =
+        (netdata_ebpf_judy_pid_stats_t **)ebpf_judy_insert_unsafe(judy_array, pid);
+    netdata_ebpf_judy_pid_stats_t *pid_ptr = *pid_pptr;
+    if (likely(*pid_pptr == NULL)) {
+        // a new PID added to the index
+        *pid_pptr = aral_mallocz(ebpf_judy_pid.pid_table);
+
+        pid_ptr = *pid_pptr;
+
+        pid_ptr->cmdline = NULL;
+        pid_ptr->socket_stats.JudyLArray = NULL;
+        rw_spinlock_init(&pid_ptr->socket_stats.rw_spinlock);
+    }
+
+    return pid_ptr;
+}
+
 /*****************************************************************
  *
  *  FUNCTIONS USED TO ALLOCATE APPS/CGROUP MEMORIES (ARAL)
@@ -626,7 +764,7 @@ static inline void ebpf_check_before2go()
         i = 0;
         int j;
         pthread_mutex_lock(&ebpf_exit_cleanup);
-        for (j = 0; ebpf_modules[j].thread_name != NULL; j++) {
+        for (j = 0; ebpf_modules[j].info.thread_name != NULL; j++) {
             if (ebpf_modules[j].enabled < NETDATA_THREAD_EBPF_STOPPING)
                 i++;
         }
@@ -704,14 +842,15 @@ void ebpf_unload_legacy_code(struct bpf_object *objects, struct bpf_link **probe
 static void ebpf_unload_unique_maps()
 {
     int i;
-    for (i = 0; ebpf_modules[i].thread_name; i++) {
+    for (i = 0; ebpf_modules[i].info.thread_name; i++) {
         // These threads are cleaned with other functions
         if (i != EBPF_MODULE_SOCKET_IDX)
             continue;
 
         if (ebpf_modules[i].enabled != NETDATA_THREAD_EBPF_STOPPED) {
             if (ebpf_modules[i].enabled != NETDATA_THREAD_EBPF_NOT_RUNNING)
-                netdata_log_error("Cannot unload maps for thread %s, because it is not stopped.", ebpf_modules[i].thread_name);
+                netdata_log_error("Cannot unload maps for thread %s, because it is not stopped.",
+                                  ebpf_modules[i].info.thread_name);
 
             continue;
         }
@@ -781,7 +920,7 @@ int ebpf_exit_plugin = 0;
  *
  * @param sig is the signal number used to close the collector
  */
-static void ebpf_stop_threads(int sig)
+void ebpf_stop_threads(int sig)
 {
     UNUSED(sig);
     static int only_one = 0;
@@ -794,11 +933,11 @@ static void ebpf_stop_threads(int sig)
     }
     only_one = 1;
     int i;
-    for (i = 0; ebpf_modules[i].thread_name != NULL; i++) {
+    for (i = 0; ebpf_modules[i].info.thread_name != NULL; i++) {
         if (ebpf_modules[i].enabled < NETDATA_THREAD_EBPF_STOPPING) {
             netdata_thread_cancel(*ebpf_modules[i].thread->thread);
 #ifdef NETDATA_DEV_MODE
-            netdata_log_info("Sending cancel for thread %s", ebpf_modules[i].thread_name);
+            netdata_log_info("Sending cancel for thread %s", ebpf_modules[i].info.thread_name);
 #endif
         }
     }
@@ -839,8 +978,8 @@ static void ebpf_stop_threads(int sig)
  * @param root   a pointer for the targets.
  */
 static inline void ebpf_create_apps_for_module(ebpf_module_t *em, struct ebpf_target *root) {
-    if (em->enabled < NETDATA_THREAD_EBPF_STOPPING && em->apps_charts && em->apps_routine)
-        em->apps_routine(em, root);
+    if (em->enabled < NETDATA_THREAD_EBPF_STOPPING && em->apps_charts && em->functions.apps_routine)
+        em->functions.apps_routine(em, root);
 }
 
 /**
@@ -1368,6 +1507,607 @@ void ebpf_read_global_table_stats(netdata_idx_t *stats,
     }
 }
 
+/*****************************************************************
+ *
+ *  FUNCTIONS USED WITH SOCKET
+ *
+ *****************************************************************/
+
+/**
+ * Netmask
+ *
+ * Copied from iprange (https://github.com/firehol/iprange/blob/master/iprange.h)
+ *
+ * @param prefix the CIDR prefix used to create the netmask.
+ *
+ * @return It returns the netmask for the given prefix.
+ */
+static inline in_addr_t ebpf_netmask(int prefix) {
+
+    if (prefix == 0)
+        return (~((in_addr_t) - 1));
+    else
+        return (in_addr_t)(~((1 << (32 - prefix)) - 1));
+
+}
+
+/**
+ * Broadcast
+ *
+ * Copied from iprange (https://github.com/firehol/iprange/blob/master/iprange.h)
+ *
+ * @param addr is the ip address
+ * @param prefix is the CIDR value.
+ *
+ * @return It returns the last address of the range
+ */
+static inline in_addr_t ebpf_broadcast(in_addr_t addr, int prefix)
+{
+    return (addr | ~ebpf_netmask(prefix));
+}
+
+/**
+ * Network
+ *
+ * Copied from iprange (https://github.com/firehol/iprange/blob/master/iprange.h)
+ *
+ * @param addr is the ip address
+ * @param prefix is the CIDR value.
+ *
+ * @return It returns the first address of the range.
+ */
+static inline in_addr_t ebpf_ipv4_network(in_addr_t addr, int prefix)
+{
+    return (addr & ebpf_netmask(prefix));
+}
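The three helpers above are plain CIDR arithmetic. A minimal stand-alone sketch of the same math, using hypothetical `demo_*` names and host-order addresses (not part of the plugin):

```c
#include <stdint.h>

/* Host-order netmask for a CIDR prefix; the 64-bit shift avoids the
 * undefined 1 << 32 that the helpers above guard against with prefix == 0. */
static uint32_t demo_netmask(int prefix) {
    return (uint32_t)(~((1ULL << (32 - prefix)) - 1));
}

/* First (network) and last (broadcast) address of the range. */
static uint32_t demo_network(uint32_t addr, int prefix)   { return addr & demo_netmask(prefix); }
static uint32_t demo_broadcast(uint32_t addr, int prefix) { return addr | ~demo_netmask(prefix); }
```

For 192.168.1.66/24 (0xC0A80142) this yields network 192.168.1.0 (0xC0A80100) and broadcast 192.168.1.255 (0xC0A801FF).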
+
+/**
+ * Calculate ipv6 first address
+ *
+ * @param out the address to store the first address.
+ * @param in the address used to do the math.
+ * @param prefix number of bits used to calculate the address
+ */
+static void get_ipv6_first_addr(union netdata_ip_t *out, union netdata_ip_t *in, uint64_t prefix)
+{
+    uint64_t mask, tmp;
+    uint64_t ret[2];
+
+    memcpy(ret, in->addr32, sizeof(union netdata_ip_t));
+
+    if (prefix == 128) {
+        memcpy(out->addr32, in->addr32, sizeof(union netdata_ip_t));
+        return;
+    } else if (!prefix) {
+        ret[0] = ret[1] = 0;
+        memcpy(out->addr32, ret, sizeof(union netdata_ip_t));
+        return;
+    } else if (prefix <= 64) {
+        ret[1] = 0ULL;
+
+        tmp = be64toh(ret[0]);
+        mask = 0xFFFFFFFFFFFFFFFFULL << (64 - prefix);
+        tmp &= mask;
+        ret[0] = htobe64(tmp);
+    } else {
+        mask = 0xFFFFFFFFFFFFFFFFULL << (128 - prefix);
+        tmp = be64toh(ret[1]);
+        tmp &= mask;
+        ret[1] = htobe64(tmp);
+    }
+
+    memcpy(out->addr32, ret, sizeof(union netdata_ip_t));
+}
+
+/**
+ * Get IPV6 Last Address
+ *
+ * @param out the address to store the last address.
+ * @param in the address used to do the math.
+ * @param prefix number of bits used to calculate the address
+ */
+static void get_ipv6_last_addr(union netdata_ip_t *out, union netdata_ip_t *in, uint64_t prefix)
+{
+    uint64_t mask, tmp;
+    uint64_t ret[2];
+    memcpy(ret, in->addr32, sizeof(union netdata_ip_t));
+
+    if (prefix == 128) {
+        memcpy(out->addr32, in->addr32, sizeof(union netdata_ip_t));
+        return;
+    } else if (!prefix) {
+        ret[0] = ret[1] = 0xFFFFFFFFFFFFFFFFULL;
+        memcpy(out->addr32, ret, sizeof(union netdata_ip_t));
+        return;
+    } else if (prefix <= 64) {
+        ret[1] = 0xFFFFFFFFFFFFFFFFULL;
+
+        tmp = be64toh(ret[0]);
+        mask = 0xFFFFFFFFFFFFFFFFULL << (64 - prefix);
+        tmp |= ~mask;
+        ret[0] = htobe64(tmp);
+    } else {
+        mask = 0xFFFFFFFFFFFFFFFFULL << (128 - prefix);
+        tmp = be64toh(ret[1]);
+        tmp |= ~mask;
+        ret[1] = htobe64(tmp);
+    }
+
+    memcpy(out->addr32, ret, sizeof(union netdata_ip_t));
+}
+
+/**
+ * IP to network long
+ *
+ * @param dst the vector to store the result
+ * @param ip the source ip given by our users.
+ * @param domain the ip domain (IPV4 or IPV6)
+ * @param source the original string
+ *
+ * @return it returns 0 on success and -1 otherwise.
+ */
+static inline int ebpf_ip2nl(uint8_t *dst, char *ip, int domain, char *source)
+{
+    if (inet_pton(domain, ip, dst) <= 0) {
+        netdata_log_error("The address specified (%s) is invalid.", source);
+        return -1;
+    }
+
+    return 0;
+}
+
+/**
+ * Clean port Structure
+ *
+ * Clean the allocated list.
+ *
+ * @param clean the list that will be cleaned
+ */
+void ebpf_clean_port_structure(ebpf_network_viewer_port_list_t **clean)
+{
+    ebpf_network_viewer_port_list_t *move = *clean;
+    while (move) {
+        ebpf_network_viewer_port_list_t *next = move->next;
+        freez(move->value);
+        freez(move);
+
+        move = next;
+    }
+    *clean = NULL;
+}
+
+/**
+ * Clean IP structure
+ *
+ * Clean the allocated list.
+ *
+ * @param clean the list that will be cleaned
+ */
+void ebpf_clean_ip_structure(ebpf_network_viewer_ip_list_t **clean)
+{
+    ebpf_network_viewer_ip_list_t *move = *clean;
+    while (move) {
+        ebpf_network_viewer_ip_list_t *next = move->next;
+        freez(move->value);
+        freez(move);
+
+        move = next;
+    }
+    *clean = NULL;
+}
+
+/**
+ * Parse IP List
+ *
+ * Parse IP list and link it.
+ *
+ * @param out a pointer to store the link list
+ * @param ip the value given as parameter
+ */
+static void ebpf_parse_ip_list_unsafe(void **out, char *ip)
+{
+    ebpf_network_viewer_ip_list_t **list = (ebpf_network_viewer_ip_list_t **)out;
+
+    char *ipdup = strdupz(ip);
+    union netdata_ip_t first = { };
+    union netdata_ip_t last = { };
+    char *is_ipv6;
+    if (*ip == '*' && *(ip+1) == '\0') {
+        memset(first.addr8, 0, sizeof(first.addr8));
+        memset(last.addr8, 0xFF, sizeof(last.addr8));
+
+        is_ipv6 = ip;
+
+        ebpf_clean_ip_structure(list);
+        goto storethisip;
+    }
+
+    char *end = ip;
+    // Advance until a separator is found
+    while (*end && *end != '/' && *end != '-') end++;
+
+    // We will use only the classic IPv6 notation for now, but we could consider base 85 in the near future
+    // https://tools.ietf.org/html/rfc1924
+    is_ipv6 = strchr(ip, ':');
+
+    int select;
+    if (*end && !is_ipv6) { // IPV4 range
+        select = (*end == '/') ? 0 : 1;
+        *end++ = '\0';
+        if (*end == '!') {
+            netdata_log_info("The exclusion cannot be in the second part of the range %s, it will be ignored.", ipdup);
+            goto cleanipdup;
+        }
+
+        if (!select) { // CIDR
+            select = ebpf_ip2nl(first.addr8, ip, AF_INET, ipdup);
+            if (select)
+                goto cleanipdup;
+
+            select = (int) str2i(end);
+            if (select < NETDATA_MINIMUM_IPV4_CIDR || select > NETDATA_MAXIMUM_IPV4_CIDR) {
+                netdata_log_info("The specified CIDR %s is not valid, the IP %s will be ignored.", end, ip);
+                goto cleanipdup;
+            }
+
+            last.addr32[0] = htonl(ebpf_broadcast(ntohl(first.addr32[0]), select));
+            // This was added to remove
+            // https://app.codacy.com/manual/netdata/netdata/pullRequest?prid=5810941&bid=19021977
+            UNUSED(last.addr32[0]);
+
+            uint32_t ipv4_test = htonl(ebpf_ipv4_network(ntohl(first.addr32[0]), select));
+            if (first.addr32[0] != ipv4_test) {
+                first.addr32[0] = ipv4_test;
+                struct in_addr ipv4_convert;
+                ipv4_convert.s_addr = ipv4_test;
+                char ipv4_msg[INET_ADDRSTRLEN];
+                if (inet_ntop(AF_INET, &ipv4_convert, ipv4_msg, INET_ADDRSTRLEN))
+                    netdata_log_info("The network value of CIDR %s was updated to %s.", ipdup, ipv4_msg);
+            }
+        } else { // Range
+            select = ebpf_ip2nl(first.addr8, ip, AF_INET, ipdup);
+            if (select)
+                goto cleanipdup;
+
+            select = ebpf_ip2nl(last.addr8, end, AF_INET, ipdup);
+            if (select)
+                goto cleanipdup;
+        }
+
+        if (htonl(first.addr32[0]) > htonl(last.addr32[0])) {
+            netdata_log_info("The specified range %s is invalid, the second address is smaller than the first, it will be ignored.",
+                             ipdup);
+            goto cleanipdup;
+        }
+    } else if (is_ipv6) { // IPV6
+        if (!*end) { // Unique
+            select = ebpf_ip2nl(first.addr8, ip, AF_INET6, ipdup);
+            if (select)
+                goto cleanipdup;
+
+            memcpy(last.addr8, first.addr8, sizeof(first.addr8));
+        } else if (*end == '-') {
+            *end++ = 0x00;
+            if (*end == '!') {
+                netdata_log_info("The exclusion cannot be in the second part of the range %s, it will be ignored.", ipdup);
+                goto cleanipdup;
+            }
+
+            select = ebpf_ip2nl(first.addr8, ip, AF_INET6, ipdup);
+            if (select)
+                goto cleanipdup;
+
+            select = ebpf_ip2nl(last.addr8, end, AF_INET6, ipdup);
+            if (select)
+                goto cleanipdup;
+        } else { // CIDR
+            *end++ = 0x00;
+            if (*end == '!') {
+                netdata_log_info("The exclusion cannot be in the second part of the range %s, it will be ignored.", ipdup);
+                goto cleanipdup;
+            }
+
+            select = str2i(end);
+            if (select < 0 || select > 128) {
+                netdata_log_info("The CIDR %s is not valid, the address %s will be ignored.", end, ip);
+                goto cleanipdup;
+            }
+
+            uint64_t prefix = (uint64_t)select;
+            select = ebpf_ip2nl(first.addr8, ip, AF_INET6, ipdup);
+            if (select)
+                goto cleanipdup;
+
+            get_ipv6_last_addr(&last, &first, prefix);
+
+            union netdata_ip_t ipv6_test;
+            get_ipv6_first_addr(&ipv6_test, &first, prefix);
+
+            if (memcmp(first.addr8, ipv6_test.addr8, sizeof(union netdata_ip_t)) != 0) {
+                memcpy(first.addr8, ipv6_test.addr8, sizeof(union netdata_ip_t));
+
+                struct in6_addr ipv6_convert;
+                memcpy(ipv6_convert.s6_addr,  ipv6_test.addr8, sizeof(union netdata_ip_t));
+
+                char ipv6_msg[INET6_ADDRSTRLEN];
+                if (inet_ntop(AF_INET6, &ipv6_convert, ipv6_msg, INET6_ADDRSTRLEN))
+                    netdata_log_info("The network value of CIDR %s was updated to %s.", ipdup, ipv6_msg);
+            }
+        }
+
+        if ((be64toh(*(uint64_t *)&first.addr32[2]) > be64toh(*(uint64_t *)&last.addr32[2]) &&
+             !memcmp(first.addr32, last.addr32, 2*sizeof(uint32_t))) ||
+            (be64toh(*(uint64_t *)&first.addr32) > be64toh(*(uint64_t *)&last.addr32)) ) {
+            netdata_log_info("The specified range %s is invalid, the second address is smaller than the first, it will be ignored.",
+                             ipdup);
+            goto cleanipdup;
+        }
+    } else { // Unique ip
+        select = ebpf_ip2nl(first.addr8, ip, AF_INET, ipdup);
+        if (select)
+            goto cleanipdup;
+
+        memcpy(last.addr8, first.addr8, sizeof(first.addr8));
+    }
+
+    ebpf_network_viewer_ip_list_t *store;
+
+    storethisip:
+    store = callocz(1, sizeof(ebpf_network_viewer_ip_list_t));
+    store->value = ipdup;
+    store->hash = simple_hash(ipdup);
+    store->ver = (uint8_t)((!is_ipv6) ? AF_INET : AF_INET6);
+    memcpy(store->first.addr8, first.addr8, sizeof(first.addr8));
+    memcpy(store->last.addr8, last.addr8, sizeof(last.addr8));
+
+    ebpf_fill_ip_list_unsafe(list, store, "socket");
+    return;
+
+    cleanipdup:
+    freez(ipdup);
+}
+
+/**
+ * Parse IP Range
+ *
+ * Parse the IP ranges given and create Network Viewer IP Structure
+ *
+ * @param ptr  is a pointer with the text to parse.
+ */
+void ebpf_parse_ips_unsafe(char *ptr)
+{
+    // No value
+    if (unlikely(!ptr))
+        return;
+
+    while (likely(ptr)) {
+        // Move forward until next valid character
+        while (isspace(*ptr)) ptr++;
+
+        // No valid value found
+        if (unlikely(!*ptr))
+            return;
+
+        // Find space that ends the list
+        char *end = strchr(ptr, ' ');
+        if (end) {
+            *end++ = '\0';
+        }
+
+        int neg = 0;
+        if (*ptr == '!') {
+            neg++;
+            ptr++;
+        }
+
+        if (isascii(*ptr)) { // Parse IP
+            ebpf_parse_ip_list_unsafe(
+                (!neg) ? (void **)&network_viewer_opt.included_ips : (void **)&network_viewer_opt.excluded_ips, ptr);
+        }
+
+        ptr = end;
+    }
+}
+
+/**
+ * Fill Port list
+ *
+ * @param out a pointer to the link list.
+ * @param in the structure that will be linked.
+ */
+static inline void fill_port_list(ebpf_network_viewer_port_list_t **out, ebpf_network_viewer_port_list_t *in)
+{
+    if (likely(*out)) {
+        ebpf_network_viewer_port_list_t *move = *out, *store = *out;
+        uint16_t first = ntohs(in->first);
+        uint16_t last = ntohs(in->last);
+        while (move) {
+            uint16_t cmp_first = ntohs(move->first);
+            uint16_t cmp_last = ntohs(move->last);
+            if (cmp_first <= first && first <= cmp_last  &&
+                cmp_first <= last && last <= cmp_last ) {
+                netdata_log_info("The range/value (%u, %u) is inside the range/value (%u, %u) already inserted, it will be ignored.",
+                                 first, last, cmp_first, cmp_last);
+                freez(in->value);
+                freez(in);
+                return;
+            } else if (first <= cmp_first && cmp_first <= last  &&
+                       first <= cmp_last && cmp_last <= last) {
+                netdata_log_info("The range (%u, %u) is bigger than previous range (%u, %u) already inserted, the previous will be ignored.",
+                                 first, last, cmp_first, cmp_last);
+                freez(move->value);
+                move->value = in->value;
+                move->first = in->first;
+                move->last = in->last;
+                freez(in);
+                return;
+            }
+
+            store = move;
+            move = move->next;
+        }
+
+        store->next = in;
+    } else {
+        *out = in;
+    }
+
+#ifdef NETDATA_INTERNAL_CHECKS
+    netdata_log_info("Adding values %s (%u, %u) to %s port list used on network viewer",
+                     in->value, in->first, in->last,
+                     (*out == network_viewer_opt.included_port)?"included":"excluded");
+#endif
+}
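The loop above distinguishes two overlap cases: the new range is already covered by an existing entry (the new one is dropped), or it covers an existing entry (the existing one is replaced). A stand-alone sketch of that predicate on host-order ports (the `demo_*` name is illustrative only):

```c
#include <stdint.h>

/* 1 = new range already covered by the existing one (drop it),
 * 2 = new range covers the existing one (replace it),
 * 0 = disjoint or partial overlap (appended to the list as-is above). */
static int demo_port_overlap(uint16_t first, uint16_t last,
                             uint16_t cmp_first, uint16_t cmp_last) {
    if (cmp_first <= first && first <= cmp_last && cmp_first <= last && last <= cmp_last)
        return 1;
    if (first <= cmp_first && cmp_first <= last && first <= cmp_last && cmp_last <= last)
        return 2;
    return 0;
}
```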
+
+/**
+ * Parse Service List
+ *
+ * @param out a pointer to store the link list
+ * @param service the service used to create the structure that will be linked.
+ */
+static void ebpf_parse_service_list(void **out, char *service)
+{
+    ebpf_network_viewer_port_list_t **list = (ebpf_network_viewer_port_list_t **)out;
+    struct servent *serv = getservbyname((const char *)service, "tcp");
+    if (!serv)
+        serv = getservbyname((const char *)service, "udp");
+
+    if (!serv) {
+        netdata_log_info("Cannot resolve the service '%s' with protocols TCP and UDP, it will be ignored", service);
+        return;
+    }
+
+    ebpf_network_viewer_port_list_t *w = callocz(1, sizeof(ebpf_network_viewer_port_list_t));
+    w->value = strdupz(service);
+    w->hash = simple_hash(service);
+
+    w->first = w->last = (uint16_t)serv->s_port;
+
+    fill_port_list(list, w);
+}
+
+/**
+ * Parse port list
+ *
+ * Parse an allocated port list with the range given
+ *
+ * @param out a pointer to store the link list
+ * @param range the range informed by the user.
+ */
+static void ebpf_parse_port_list(void **out, char *range)
+{
+    int first, last;
+    ebpf_network_viewer_port_list_t **list = (ebpf_network_viewer_port_list_t **)out;
+
+    char *copied = strdupz(range);
+    if (*range == '*' && *(range+1) == '\0') {
+        first = 1;
+        last = 65535;
+
+        ebpf_clean_port_structure(list);
+        goto fillenvpl;
+    }
+
+    char *end = range;
+    // Advance until a separator is found
+    while (*end && *end != ':' && *end != '-') end++;
+
+    // It has a range
+    if (likely(*end)) {
+        *end++ = '\0';
+        if (*end == '!') {
+            netdata_log_info("The exclusion cannot be in the second part of the range, the range %s will be ignored.", copied);
+            freez(copied);
+            return;
+        }
+        last = str2i((const char *)end);
+    } else {
+        last = 0;
+    }
+
+    first = str2i((const char *)range);
+    if (first < NETDATA_MINIMUM_PORT_VALUE || first > NETDATA_MAXIMUM_PORT_VALUE) {
+        netdata_log_info("The first port %d of the range \"%s\" is invalid and it will be ignored!", first, copied);
+        freez(copied);
+        return;
+    }
+
+    if (!last)
+        last = first;
+
+    if (last < NETDATA_MINIMUM_PORT_VALUE || last > NETDATA_MAXIMUM_PORT_VALUE) {
+        netdata_log_info("The second port %d of the range \"%s\" is invalid and the whole range will be ignored!", last, copied);
+        freez(copied);
+        return;
+    }
+
+    if (first > last) {
+        netdata_log_info("The specified order %s is wrong, the smallest value must come first; it will be ignored!", copied);
+        freez(copied);
+        return;
+    }
+
+    ebpf_network_viewer_port_list_t *w;
+    fillenvpl:
+    w = callocz(1, sizeof(ebpf_network_viewer_port_list_t));
+    w->value = copied;
+    w->hash = simple_hash(copied);
+    w->first = (uint16_t)first;
+    w->last = (uint16_t)last;
+    w->cmp_first = (uint16_t)first;
+    w->cmp_last = (uint16_t)last;
+
+    fill_port_list(list, w);
+}
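The separator handling above accepts `N`, `N-M` and `N:M`. A rough stand-alone sketch of the same parsing flow (hypothetical `demo_*` helper; the real function additionally keeps the original string and its hash for the list node):

```c
#include <stdlib.h>
#include <string.h>

/* Parse "N", "N-M" or "N:M" into a validated port range.
 * Returns 0 on success, -1 on an invalid or reversed range. */
static int demo_parse_port_range(const char *txt, int *first, int *last) {
    char buf[64];
    strncpy(buf, txt, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';

    char *end = buf;
    while (*end && *end != ':' && *end != '-') end++;   // find separator

    if (*end) { *end++ = '\0'; *last = atoi(end); }     // range given
    else *last = 0;                                     // single value

    *first = atoi(buf);
    if (*first < 1 || *first > 65535) return -1;
    if (!*last) *last = *first;                         // single port case
    if (*last < 1 || *last > 65535 || *first > *last) return -1;
    return 0;
}
```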
+
+/**
+ * Parse Port Range
+ *
+ * Parse the port ranges given and create Network Viewer Port Structure
+ *
+ * @param ptr  is a pointer with the text to parse.
+ */
+void ebpf_parse_ports(char *ptr)
+{
+    // No value
+    if (unlikely(!ptr))
+        return;
+
+    while (likely(ptr)) {
+        // Move forward until next valid character
+        while (isspace(*ptr)) ptr++;
+
+        // No valid value found
+        if (unlikely(!*ptr))
+            return;
+
+        // Find space that ends the list
+        char *end = strchr(ptr, ' ');
+        if (end) {
+            *end++ = '\0';
+        }
+
+        int neg = 0;
+        if (*ptr == '!') {
+            neg++;
+            ptr++;
+        }
+
+        if (isdigit(*ptr)) { // Parse port
+            ebpf_parse_port_list(
+                (!neg) ? (void **)&network_viewer_opt.included_port : (void **)&network_viewer_opt.excluded_port, ptr);
+        } else if (isalpha(*ptr)) { // Parse service
+            ebpf_parse_service_list(
+                (!neg) ? (void **)&network_viewer_opt.included_port : (void **)&network_viewer_opt.excluded_port, ptr);
+        } else if (*ptr == '*') { // All
+            ebpf_parse_port_list(
+                (!neg) ? (void **)&network_viewer_opt.included_port : (void **)&network_viewer_opt.excluded_port, ptr);
+        }
+
+        ptr = end;
+    }
+}
+
 /*****************************************************************
  *
  *  FUNCTIONS TO DEFINE OPTIONS
@@ -1432,7 +2172,7 @@ static inline void ebpf_enable_specific_chart(struct ebpf_module *em, int disabl
 
     // oomkill stores data inside apps submenu, so it always need to have apps_enabled for plugin to create
     // its chart, without this comparison eBPF.plugin will try to store invalid data when apps is disabled.
-    if (!strcmp(em->thread_name, "oomkill")) {
+    if (!strcmp(em->info.thread_name, "oomkill")) {
         em->apps_charts = NETDATA_EBPF_APPS_FLAG_YES;
     }
 
@@ -1451,7 +2191,7 @@ static inline void ebpf_enable_specific_chart(struct ebpf_module *em, int disabl
 static inline void disable_all_global_charts()
 {
     int i;
-    for (i = 0; ebpf_modules[i].thread_name; i++) {
+    for (i = 0; ebpf_modules[i].info.thread_name; i++) {
         ebpf_modules[i].enabled = 0;
         ebpf_modules[i].global_charts = 0;
     }
@@ -1465,7 +2205,7 @@ static inline void disable_all_global_charts()
 static inline void ebpf_enable_chart(int idx, int disable_cgroup)
 {
     int i;
-    for (i = 0; ebpf_modules[i].thread_name; i++) {
+    for (i = 0; ebpf_modules[i].info.thread_name; i++) {
         if (i == idx) {
             ebpf_enable_specific_chart(&ebpf_modules[i], disable_cgroup);
             break;
@@ -1481,7 +2221,7 @@ static inline void ebpf_enable_chart(int idx, int disable_cgroup)
 static inline void ebpf_disable_cgroups()
 {
     int i;
-    for (i = 0; ebpf_modules[i].thread_name; i++) {
+    for (i = 0; ebpf_modules[i].info.thread_name; i++) {
         ebpf_modules[i].cgroup_charts = 0;
     }
 }
@@ -1661,6 +2401,203 @@ uint32_t ebpf_enable_tracepoints(ebpf_tracepoint_t *tps)
  *
  *****************************************************************/
 
+/**
+ * Is ip inside the range
+ *
+ * Check if the ip is inside a IP range
+ *
+ * @param rfirst    the first ip address of the range
+ * @param rlast     the last ip address of the range
+ * @param cmpfirst  the first ip to compare
+ * @param cmplast   the last ip to compare
+ * @param family    the IP family
+ *
+ * @return It returns 1 if the IP is inside the range and 0 otherwise
+ */
+static int ebpf_is_ip_inside_range(union netdata_ip_t *rfirst, union netdata_ip_t *rlast,
+                                   union netdata_ip_t *cmpfirst, union netdata_ip_t *cmplast, int family)
+{
+    if (family == AF_INET) {
+        if ((rfirst->addr32[0] <= cmpfirst->addr32[0]) && (rlast->addr32[0] >= cmplast->addr32[0]))
+            return 1;
+    } else {
+        if (memcmp(rfirst->addr8, cmpfirst->addr8, sizeof(union netdata_ip_t)) <= 0 &&
+            memcmp(rlast->addr8, cmplast->addr8, sizeof(union netdata_ip_t)) >= 0) {
+            return 1;
+        }
+
+    }
+    return 0;
+}
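For IPv4 the containment test reduces to two host-order comparisons; a trivial stand-alone illustration (hypothetical name, not part of the plugin):

```c
#include <stdint.h>

/* 1 when [first, last] lies entirely inside [range_first, range_last]. */
static int demo_ip_inside_range(uint32_t range_first, uint32_t range_last,
                                uint32_t first, uint32_t last) {
    return range_first <= first && range_last >= last;
}
```

For example, 10.0.0.1 (0x0A000001) falls inside 10.0.0.0/8 (0x0A000000 - 0x0AFFFFFF), while 11.0.0.1 does not.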
+
+/**
+ * Fill IP list
+ *
+ * @param out a pointer to the link list.
+ * @param in the structure that will be linked.
+ * @param table the modified table.
+ */
+void ebpf_fill_ip_list_unsafe(ebpf_network_viewer_ip_list_t **out, ebpf_network_viewer_ip_list_t *in,
+                       char *table __maybe_unused)
+{
+    if (in->ver == AF_INET) { // It is simpler to compare using host order
+        in->first.addr32[0] = ntohl(in->first.addr32[0]);
+        in->last.addr32[0] = ntohl(in->last.addr32[0]);
+    }
+    if (likely(*out)) {
+        ebpf_network_viewer_ip_list_t *move = *out, *store = *out;
+        while (move) {
+            if (in->ver == move->ver &&
+                ebpf_is_ip_inside_range(&move->first, &move->last, &in->first, &in->last, in->ver)) {
+#ifdef NETDATA_DEV_MODE
+                netdata_log_info("The range/value (%s) is inside the range/value (%s) already inserted, it will be ignored.",
+                                 in->value, move->value);
+#endif
+                freez(in->value);
+                freez(in);
+                return;
+            }
+            store = move;
+            move = move->next;
+        }
+
+        store->next = in;
+    } else {
+        *out = in;
+    }
+
+#ifdef NETDATA_DEV_MODE
+    char first[256], last[512];
+    if (in->ver == AF_INET) {
+        netdata_log_info("Adding values %s: (%u - %u) to %s IP list \"%s\" used on network viewer",
+                         in->value, in->first.addr32[0], in->last.addr32[0],
+                         (*out == network_viewer_opt.included_ips)?"included":"excluded",
+                         table);
+    } else {
+        if (inet_ntop(AF_INET6, in->first.addr8, first, INET6_ADDRSTRLEN) &&
+            inet_ntop(AF_INET6, in->last.addr8, last, INET6_ADDRSTRLEN))
+            netdata_log_info("Adding values %s - %s to %s IP list \"%s\" used on network viewer",
+                             first, last,
+                             (*out == network_viewer_opt.included_ips)?"included":"excluded",
+                             table);
+    }
+#endif
+}
+
+/**
+ * Link hostname
+ *
+ * @param out is the output link list
+ * @param in the hostname to add to list.
+ */
+static void ebpf_link_hostname(ebpf_network_viewer_hostname_list_t **out, ebpf_network_viewer_hostname_list_t *in)
+{
+    if (likely(*out)) {
+        ebpf_network_viewer_hostname_list_t *move = *out;
+        for (; move->next ; move = move->next ) {
+            if (move->hash == in->hash && !strcmp(move->value, in->value)) {
+                netdata_log_info("The hostname %s was already inserted, it will be ignored.", in->value);
+                freez(in->value);
+                simple_pattern_free(in->value_pattern);
+                freez(in);
+                return;
+            }
+        }
+
+        move->next = in;
+    } else {
+        *out = in;
+    }
+#ifdef NETDATA_INTERNAL_CHECKS
+    netdata_log_info("Adding value %s to %s hostname list used on network viewer",
+                     in->value,
+                     (*out == network_viewer_opt.included_hostnames)?"included":"excluded");
+#endif
+}
+
+/**
+ * Link Hostnames
+ *
+ * Parse the list of hostnames to create the link list.
+ * This is not associated with the IP, because simple patterns like *example* cannot be resolved to an IP.
+ *
+ * @param parse is a pointer with the text to parse.
+ */
+static void ebpf_link_hostnames(char *parse)
+{
+    // No value
+    if (unlikely(!parse))
+        return;
+
+    while (likely(parse)) {
+        // Find the first valid value
+        while (isspace(*parse)) parse++;
+
+        // No valid value found
+        if (unlikely(!*parse))
+            return;
+
+        // Find space that ends the list
+        char *end = strchr(parse, ' ');
+        if (end) {
+            *end++ = '\0';
+        }
+
+        int neg = 0;
+        if (*parse == '!') {
+            neg++;
+            parse++;
+        }
+
+        ebpf_network_viewer_hostname_list_t *hostname = callocz(1, sizeof(ebpf_network_viewer_hostname_list_t));
+        hostname->value = strdupz(parse);
+        hostname->hash = simple_hash(parse);
+        hostname->value_pattern = simple_pattern_create(parse, NULL, SIMPLE_PATTERN_EXACT, true);
+
+        ebpf_link_hostname((!neg) ? &network_viewer_opt.included_hostnames :
+                                    &network_viewer_opt.excluded_hostnames,
+                           hostname);
+
+        parse = end;
+    }
+}
+
+/**
+ * Parse network viewer section
+ *
+ * @param cfg the configuration structure
+ */
+void parse_network_viewer_section(struct config *cfg)
+{
+    network_viewer_opt.hostname_resolution_enabled = appconfig_get_boolean(cfg,
+                                                                           EBPF_NETWORK_VIEWER_SECTION,
+                                                                           EBPF_CONFIG_RESOLVE_HOSTNAME,
+                                                                           CONFIG_BOOLEAN_NO);
+
+    network_viewer_opt.service_resolution_enabled = appconfig_get_boolean(cfg,
+                                                                          EBPF_NETWORK_VIEWER_SECTION,
+                                                                          EBPF_CONFIG_RESOLVE_SERVICE,
+                                                                          CONFIG_BOOLEAN_YES);
+
+    char *value = appconfig_get(cfg, EBPF_NETWORK_VIEWER_SECTION, EBPF_CONFIG_PORTS, NULL);
+    ebpf_parse_ports(value);
+
+    if (network_viewer_opt.hostname_resolution_enabled) {
+        value = appconfig_get(cfg, EBPF_NETWORK_VIEWER_SECTION, EBPF_CONFIG_HOSTNAMES, NULL);
+        ebpf_link_hostnames(value);
+    } else {
+        netdata_log_info("Name resolution is disabled, the collector will not parse the \"hostnames\" list.");
+    }
+
+    value = appconfig_get(cfg,
+                          EBPF_NETWORK_VIEWER_SECTION,
+                          "ips",
+                          NULL);
+                                   //"ips", "!127.0.0.1/8 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 fc00::/7 !::1/128");
+    ebpf_parse_ips_unsafe(value);
+}
+
 /**
  *  Read Local Ports
  *
@@ -1705,7 +2642,7 @@ static void read_local_ports(char *filename, uint8_t proto)
  *
  * Read the local address from the interfaces.
  */
-static void read_local_addresses()
+void ebpf_read_local_addresses_unsafe()
 {
     struct ifaddrs *ifaddr, *ifa;
     if (getifaddrs(&ifaddr) == -1) {
@@ -1754,9 +2691,8 @@ static void read_local_addresses()
             }
         }
 
-        ebpf_fill_ip_list((family == AF_INET)?&network_viewer_opt.ipv4_local_ip:&network_viewer_opt.ipv6_local_ip,
-                     w,
-                     "selector");
+        ebpf_fill_ip_list_unsafe(
+            (family == AF_INET) ? &network_viewer_opt.ipv4_local_ip : &network_viewer_opt.ipv6_local_ip, w, "selector");
     }
 
     freeifaddrs(ifaddr);
@@ -1773,6 +2709,7 @@ void ebpf_start_pthread_variables()
     pthread_mutex_init(&ebpf_exit_cleanup, NULL);
     pthread_mutex_init(&collect_data_mutex, NULL);
     pthread_mutex_init(&mutex_cgroup_shm, NULL);
+    rw_spinlock_init(&ebpf_judy_pid.index.rw_spinlock);
 }
 
 /**
@@ -1780,6 +2717,8 @@ void ebpf_start_pthread_variables()
  */
 static void ebpf_allocate_common_vectors()
 {
+    ebpf_judy_pid.pid_table = ebpf_allocate_pid_aral(NETDATA_EBPF_PID_SOCKET_ARAL_TABLE_NAME,
+                                                     sizeof(netdata_ebpf_judy_pid_stats_t));
     ebpf_all_pids = callocz((size_t)pid_max, sizeof(struct ebpf_pid_stat *));
     ebpf_aral_init();
 }
@@ -1825,7 +2764,7 @@ static void ebpf_update_interval(int update_every)
     int i;
     int value = (int) appconfig_get_number(&collector_config, EBPF_GLOBAL_SECTION, EBPF_CFG_UPDATE_EVERY,
                                           update_every);
-    for (i = 0; ebpf_modules[i].thread_name; i++) {
+    for (i = 0; ebpf_modules[i].info.thread_name; i++) {
         ebpf_modules[i].update_every = value;
     }
 }
@@ -1840,7 +2779,7 @@ static void ebpf_update_table_size()
     int i;
     uint32_t value = (uint32_t) appconfig_get_number(&collector_config, EBPF_GLOBAL_SECTION,
                                                     EBPF_CFG_PID_SIZE, ND_EBPF_DEFAULT_PID_SIZE);
-    for (i = 0; ebpf_modules[i].thread_name; i++) {
+    for (i = 0; ebpf_modules[i].info.thread_name; i++) {
         ebpf_modules[i].pid_map_size = value;
     }
 }
@@ -1855,7 +2794,7 @@ static void ebpf_update_lifetime()
     int i;
     uint32_t value = (uint32_t) appconfig_get_number(&collector_config, EBPF_GLOBAL_SECTION,
                                                      EBPF_CFG_LIFETIME, EBPF_DEFAULT_LIFETIME);
-    for (i = 0; ebpf_modules[i].thread_name; i++) {
+    for (i = 0; ebpf_modules[i].info.thread_name; i++) {
         ebpf_modules[i].lifetime = value;
     }
 }
@@ -1868,7 +2807,7 @@ static void ebpf_update_lifetime()
 static inline void ebpf_set_load_mode(netdata_ebpf_load_mode_t load, netdata_ebpf_load_mode_t origin)
 {
     int i;
-    for (i = 0; ebpf_modules[i].thread_name; i++) {
+    for (i = 0; ebpf_modules[i].info.thread_name; i++) {
         ebpf_modules[i].load &= ~NETDATA_EBPF_LOAD_METHODS;
         ebpf_modules[i].load |= load | origin ;
     }
@@ -1897,7 +2836,7 @@ static void ebpf_update_map_per_core()
     int i;
     int value = appconfig_get_boolean(&collector_config, EBPF_GLOBAL_SECTION,
                                       EBPF_CFG_MAPS_PER_CORE, CONFIG_BOOLEAN_YES);
-    for (i = 0; ebpf_modules[i].thread_name; i++) {
+    for (i = 0; ebpf_modules[i].info.thread_name; i++) {
         ebpf_modules[i].maps_per_core = value;
     }
 }
@@ -1961,7 +2900,7 @@ static void read_collector_values(int *disable_cgroups,
 
     // Read ebpf programs section
     enabled = appconfig_get_boolean(&collector_config, EBPF_PROGRAMS_SECTION,
-                                    ebpf_modules[EBPF_MODULE_PROCESS_IDX].config_name, CONFIG_BOOLEAN_YES);
+                                    ebpf_modules[EBPF_MODULE_PROCESS_IDX].info.config_name, CONFIG_BOOLEAN_YES);
     if (enabled) {
         ebpf_enable_chart(EBPF_MODULE_PROCESS_IDX, *disable_cgroups);
     }
@@ -1971,7 +2910,7 @@ static void read_collector_values(int *disable_cgroups,
                                     CONFIG_BOOLEAN_NO);
     if (!enabled)
         enabled = appconfig_get_boolean(&collector_config, EBPF_PROGRAMS_SECTION,
-                                        ebpf_modules[EBPF_MODULE_SOCKET_IDX].config_name,
+                                        ebpf_modules[EBPF_MODULE_SOCKET_IDX].info.config_name,
                                         CONFIG_BOOLEAN_NO);
     if (enabled) {
         ebpf_enable_chart(EBPF_MODULE_SOCKET_IDX, *disable_cgroups);
@@ -1979,10 +2918,11 @@ static void read_collector_values(int *disable_cgroups,
 
     // This is kept to keep compatibility
     enabled = appconfig_get_boolean(&collector_config, EBPF_PROGRAMS_SECTION, "network connection monitoring",
-                                    CONFIG_BOOLEAN_NO);
+                                    CONFIG_BOOLEAN_YES);
     if (!enabled)
         enabled = appconfig_get_boolean(&collector_config, EBPF_PROGRAMS_SECTION, "network connections",
-                                        CONFIG_BOOLEAN_NO);
+                                        CONFIG_BOOLEAN_YES);
+
     network_viewer_opt.enabled = enabled;
     if (enabled) {
         if (!ebpf_modules[EBPF_MODULE_SOCKET_IDX].enabled)
@@ -1991,7 +2931,7 @@ static void read_collector_values(int *disable_cgroups,
         // Read network viewer section if network viewer is enabled
         // This is kept here to keep backward compatibility
         parse_network_viewer_section(&collector_config);
-        parse_service_name_section(&collector_config);
+        ebpf_parse_service_name_section(&collector_config);
     }
 
     enabled = appconfig_get_boolean(&collector_config, EBPF_PROGRAMS_SECTION, "cachestat",
@@ -2238,7 +3178,7 @@ static void ebpf_parse_args(int argc, char **argv)
     };
 
     memset(&network_viewer_opt, 0, sizeof(network_viewer_opt));
-    network_viewer_opt.max_dim = NETDATA_NV_CAP_VALUE;
+    rw_spinlock_init(&network_viewer_opt.rw_spinlock);
 
     if (argc > 1) {
         int n = (int)str2l(argv[1]);
@@ -2250,6 +3190,7 @@ static void ebpf_parse_args(int argc, char **argv)
     if (!freq)
         freq = EBPF_DEFAULT_UPDATE_EVERY;
 
     if (ebpf_load_collector_config(ebpf_user_config_dir, &disable_cgroups, freq)) {
         netdata_log_info(
             "Does not have a configuration file inside `%s/ebpf.d.conf. It will try to load stock file.",
@@ -2260,6 +3201,7 @@ static void ebpf_parse_args(int argc, char **argv)
     }
 
     ebpf_load_thread_config();
 
     while (1) {
         int c = getopt_long_only(argc, argv, "", long_options, &option_index);
@@ -2510,8 +3452,8 @@ static inline void ebpf_send_hash_table_pid_data(char *chart, uint32_t idx)
     write_begin_chart(NETDATA_MONITORING_FAMILY, chart);
     for (i = 0; i < EBPF_MODULE_FUNCTION_IDX; i++) {
         ebpf_module_t *wem = &ebpf_modules[i];
-        if (wem->apps_routine)
-            write_chart_dimension((char *)wem->thread_name,
+        if (wem->functions.apps_routine)
+            write_chart_dimension((char *)wem->info.thread_name,
                                   (wem->enabled < NETDATA_THREAD_EBPF_STOPPING) ?
                                   wem->hash_table_stats[idx]:
                                   0);
@@ -2531,7 +3473,7 @@ static inline void ebpf_send_global_hash_table_data()
     write_begin_chart(NETDATA_MONITORING_FAMILY, NETDATA_EBPF_HASH_TABLES_GLOBAL_ELEMENTS);
     for (i = 0; i < EBPF_MODULE_FUNCTION_IDX; i++) {
         ebpf_module_t *wem = &ebpf_modules[i];
-        write_chart_dimension((char *)wem->thread_name,
+        write_chart_dimension((char *)wem->info.thread_name,
                               (wem->enabled < NETDATA_THREAD_EBPF_STOPPING) ? NETDATA_CONTROLLER_END: 0);
     }
     write_end_chart();
@@ -2551,7 +3493,10 @@ void ebpf_send_statistic_data()
     int i;
     for (i = 0; i < EBPF_MODULE_FUNCTION_IDX; i++) {
         ebpf_module_t *wem = &ebpf_modules[i];
-        write_chart_dimension((char *)wem->thread_name, (wem->enabled < NETDATA_THREAD_EBPF_STOPPING) ? 1 : 0);
+        if (wem->functions.fnct_routine)
+            continue;
+
+        write_chart_dimension((char *)wem->info.thread_name, (wem->enabled < NETDATA_THREAD_EBPF_STOPPING) ? 1 : 0);
     }
     write_end_chart();
 
@@ -2560,7 +3505,10 @@ void ebpf_send_statistic_data()
         ebpf_module_t *wem = &ebpf_modules[i];
         // Threads like VFS is slow to load and this can create an invalid number, this is the motive
         // we are also testing wem->lifetime value.
-        write_chart_dimension((char *)wem->thread_name,
+        if (wem->functions.fnct_routine)
+            continue;
+
+        write_chart_dimension((char *)wem->info.thread_name,
                               (wem->lifetime && wem->enabled < NETDATA_THREAD_EBPF_STOPPING) ?
                               (long long) (wem->lifetime - wem->running_time):
                               0) ;
@@ -2589,6 +3537,23 @@ void ebpf_send_statistic_data()
 
     ebpf_send_hash_table_pid_data(NETDATA_EBPF_HASH_TABLES_INSERT_PID_ELEMENTS, NETDATA_EBPF_GLOBAL_TABLE_PID_TABLE_ADD);
     ebpf_send_hash_table_pid_data(NETDATA_EBPF_HASH_TABLES_REMOVE_PID_ELEMENTS, NETDATA_EBPF_GLOBAL_TABLE_PID_TABLE_DEL);
+
+    for (i = 0; i < EBPF_MODULE_FUNCTION_IDX; i++) {
+        ebpf_module_t *wem = &ebpf_modules[i];
+        if (!wem->functions.fnct_routine)
+            continue;
+
+        write_begin_chart(NETDATA_MONITORING_FAMILY, (char *)wem->functions.fcnt_thread_chart_name);
+        write_chart_dimension((char *)wem->info.thread_name, (wem->enabled < NETDATA_THREAD_EBPF_STOPPING) ? 1 : 0);
+        write_end_chart();
+
+        write_begin_chart(NETDATA_MONITORING_FAMILY, (char *)wem->functions.fcnt_thread_lifetime_name);
+        write_chart_dimension((char *)wem->info.thread_name,
+                              (wem->lifetime && wem->enabled < NETDATA_THREAD_EBPF_STOPPING) ?
+                              (long long) (wem->lifetime - wem->running_time):
+                              0);
+        write_end_chart();
+    }
 }
 
 /**
@@ -2607,57 +3572,51 @@ static void update_internal_metric_variable()
 }
 
 /**
- * Create chart for Statistic Thread
+ * Create Thread Chart
  *
- * Write to standard output current values for threads.
+ * Write to standard output the current values for thread charts.
 *
+ * @param name         the chart name.
+ * @param title        the chart title.
+ * @param units        the chart units.
+ * @param order        the chart order.
+ * @param update_every time interval used to update charts.
+ * @param module       the module that owns the chart, or NULL to create one dimension per thread.
  */
-static inline void ebpf_create_statistic_thread_chart(int update_every)
+static void ebpf_create_thread_chart(char *name,
+                                     char *title,
+                                     char *units,
+                                     int order,
+                                     int update_every,
+                                     ebpf_module_t *module)
 {
+    // Common helper shared by the per-thread and the aggregated charts.
     ebpf_write_chart_cmd(NETDATA_MONITORING_FAMILY,
-                         NETDATA_EBPF_THREADS,
-                         "Threads running.",
-                         "boolean",
+                         name,
+                         title,
+                         units,
                          NETDATA_EBPF_FAMILY,
                          NETDATA_EBPF_CHART_TYPE_LINE,
                          NULL,
-                         NETDATA_EBPF_ORDER_STAT_THREADS,
+                         order,
                          update_every,
-                         NETDATA_EBPF_MODULE_NAME_PROCESS);
+                         "main");
 
-    int i;
-    for (i = 0; i < EBPF_MODULE_FUNCTION_IDX; i++) {
-        ebpf_write_global_dimension((char *)ebpf_modules[i].thread_name,
-                                    (char *)ebpf_modules[i].thread_name,
+    if (module) {
+        ebpf_write_global_dimension((char *)module->info.thread_name,
+                                    (char *)module->info.thread_name,
                                     ebpf_algorithms[NETDATA_EBPF_ABSOLUTE_IDX]);
+        return;
     }
-}
-
-/**
- * Create lifetime Thread Chart
- *
- * Write to standard output current values for threads lifetime.
- *
- * @param update_every time used to update charts
- */
-static inline void ebpf_create_lifetime_thread_chart(int update_every)
-{
-    ebpf_write_chart_cmd(NETDATA_MONITORING_FAMILY,
-                         NETDATA_EBPF_LIFE_TIME,
-                         "Threads running.",
-                         "seconds",
-                         NETDATA_EBPF_FAMILY,
-                         NETDATA_EBPF_CHART_TYPE_LINE,
-                         NULL,
-                         NETDATA_EBPF_ORDER_STAT_LIFE_TIME,
-                         update_every,
-                         NETDATA_EBPF_MODULE_NAME_PROCESS);
 
     int i;
     for (i = 0; i < EBPF_MODULE_FUNCTION_IDX; i++) {
-        ebpf_write_global_dimension((char *)ebpf_modules[i].thread_name,
-                                    (char *)ebpf_modules[i].thread_name,
+        ebpf_module_t *em = &ebpf_modules[i];
+        if (em->functions.fnct_routine)
+            continue;
+
+        ebpf_write_global_dimension((char *)em->info.thread_name,
+                                    (char *)em->info.thread_name,
                                     ebpf_algorithms[NETDATA_EBPF_ABSOLUTE_IDX]);
     }
 }
@@ -2792,8 +3751,8 @@ static void ebpf_create_statistic_hash_global_elements(int update_every)
 
     int i;
     for (i = 0; i < EBPF_MODULE_FUNCTION_IDX; i++) {
-        ebpf_write_global_dimension((char *)ebpf_modules[i].thread_name,
-                                    (char *)ebpf_modules[i].thread_name,
+        ebpf_write_global_dimension((char *)ebpf_modules[i].info.thread_name,
+                                    (char *)ebpf_modules[i].info.thread_name,
                                     ebpf_algorithms[NETDATA_EBPF_ABSOLUTE_IDX]);
     }
 }
@@ -2824,9 +3783,9 @@ static void ebpf_create_statistic_hash_pid_table(int update_every, char *id, cha
     int i;
     for (i = 0; i < EBPF_MODULE_FUNCTION_IDX; i++) {
         ebpf_module_t *wem = &ebpf_modules[i];
-        if (wem->apps_routine)
-            ebpf_write_global_dimension((char *)wem->thread_name,
-                                        (char *)wem->thread_name,
+        if (wem->functions.apps_routine)
+            ebpf_write_global_dimension((char *)wem->info.thread_name,
+                                        (char *)wem->info.thread_name,
                                         ebpf_algorithms[NETDATA_EBPF_INCREMENTAL_IDX]);
     }
 }
@@ -2850,16 +3809,60 @@ static void ebpf_create_statistic_charts(int update_every)
 
     create_charts = 0;
 
-    ebpf_create_statistic_thread_chart(update_every);
+    ebpf_create_thread_chart(NETDATA_EBPF_THREADS,
+                             "Threads running.",
+                             "boolean",
+                             NETDATA_EBPF_ORDER_STAT_THREADS,
+                             update_every,
+                             NULL);
 #ifdef NETDATA_DEV_MODE
     EBPF_PLUGIN_FUNCTIONS(EBPF_FUNCTION_THREAD, EBPF_PLUGIN_THREAD_FUNCTION_DESCRIPTION);
 #endif
 
-    ebpf_create_lifetime_thread_chart(update_every);
+    ebpf_create_thread_chart(NETDATA_EBPF_LIFE_TIME,
+                             "Time remaining for thread.",
+                             "seconds",
+                             NETDATA_EBPF_ORDER_STAT_LIFE_TIME,
+                             update_every,
+                             NULL);
 #ifdef NETDATA_DEV_MODE
     EBPF_PLUGIN_FUNCTIONS(EBPF_FUNCTION_THREAD, EBPF_PLUGIN_THREAD_FUNCTION_DESCRIPTION);
 #endif
 
+    int i,j;
+    char name[256];
+    for (i = 0, j = NETDATA_EBPF_ORDER_FUNCTION_PER_THREAD; i < EBPF_MODULE_FUNCTION_IDX; i++) {
+        ebpf_module_t *em = &ebpf_modules[i];
+        if (!em->functions.fnct_routine)
+            continue;
+
+        em->functions.order_thread_chart = j;
+        snprintfz(name, 255, "%s_%s", NETDATA_EBPF_THREADS, em->info.thread_name);
+        em->functions.fcnt_thread_chart_name = strdupz(name);
+        ebpf_create_thread_chart(name,
+                                 "Threads running.",
+                                 "boolean",
+                                 j++,
+                                 update_every,
+                                 em);
+#ifdef NETDATA_DEV_MODE
+        EBPF_PLUGIN_FUNCTIONS(em->functions.fcnt_name, em->functions.fcnt_desc);
+#endif
+
+        em->functions.order_thread_lifetime = j;
+        snprintfz(name, 255, "%s_%s", NETDATA_EBPF_LIFE_TIME, em->info.thread_name);
+        em->functions.fcnt_thread_lifetime_name = strdupz(name);
+        ebpf_create_thread_chart(name,
+                                 "Time remaining for thread.",
+                                 "seconds",
+                                 j++,
+                                 update_every,
+                                 em);
+#ifdef NETDATA_DEV_MODE
+        EBPF_PLUGIN_FUNCTIONS(em->functions.fcnt_name, em->functions.fcnt_desc);
+#endif
+    }
+
     ebpf_create_statistic_load_chart(update_every);
 
     ebpf_create_statistic_kernel_memory(update_every);
@@ -3040,8 +4043,8 @@ static void ebpf_manage_pid(pid_t pid)
  static void ebpf_set_static_routine()
  {
      int i;
-     for (i = 0; ebpf_modules[i].thread_name; i++) {
-         ebpf_threads[i].start_routine = ebpf_modules[i].start_routine;
+     for (i = 0; ebpf_modules[i].info.thread_name; i++) {
+         ebpf_threads[i].start_routine = ebpf_modules[i].functions.start_routine;
      }
  }
 
@@ -3095,7 +4098,7 @@ int main(int argc, char **argv)
     libbpf_set_strict_mode(LIBBPF_STRICT_ALL);
 #endif
 
-    read_local_addresses();
+    ebpf_read_local_addresses_unsafe();
     read_local_ports("/proc/net/tcp", IPPROTO_TCP);
     read_local_ports("/proc/net/tcp6", IPPROTO_TCP);
     read_local_ports("/proc/net/udp", IPPROTO_UDP);
diff --git a/collectors/ebpf.plugin/ebpf.d/network.conf b/collectors/ebpf.plugin/ebpf.d/network.conf
index 00cbf2e8ba..99c32edc13 100644
--- a/collectors/ebpf.plugin/ebpf.d/network.conf
+++ b/collectors/ebpf.plugin/ebpf.d/network.conf
@@ -26,6 +26,11 @@
 #
 # The `maps per core` defines if hash tables will be per core or not. This option is ignored on kernels older than 4.6.
 #
+# The `collect pid` option defines which PID is stored inside hash tables and accepts the following values:
+#   `real parent`: Only stores the real parent PID.
+#   `parent`     : Only stores the parent PID.
+#   `all`        : Stores all PIDs used by the software. This is the most expensive option.
+#
 # The `lifetime` defines the time length a thread will run when it is enabled by a function.
 #
 # Uncomment lines to define specific options for thread.
@@ -35,12 +40,12 @@
 #    cgroups = no
 #    update every = 10
     bandwidth table size = 16384
-    ipv4 connection table size = 16384
-    ipv6 connection table size = 16384
+    socket monitoring table size = 16384
     udp connection table size = 4096
     ebpf type format = auto
-    ebpf co-re tracing = trampoline
+    ebpf co-re tracing = probe
     maps per core = no
+    collect pid = all
     lifetime = 300
 
 #
@@ -49,11 +54,12 @@
 # This is a feature with status WIP(Work in Progress)
 #
 [network connections]
-    maximum dimensions = 50
+    enabled = yes
     resolve hostnames = no
-    resolve service names = no
+    resolve service names = yes
     ports = *
-    ips = !127.0.0.1/8 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 fc00::/7 !::1/128
+#    ips = !127.0.0.1/8 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 fc00::/7 !::1/128
+    ips = *
     hostnames = *
 
 [service name]
diff --git a/collectors/ebpf.plugin/ebpf.h b/collectors/ebpf.plugin/ebpf.h
index 78e3a9252b..2f176b48c6 100644
--- a/collectors/ebpf.plugin/ebpf.h
+++ b/collectors/ebpf.plugin/ebpf.h
@@ -31,6 +31,7 @@
 #include "daemon/main.h"
 
 #include "ebpf_apps.h"
+#include "ebpf_functions.h"
 #include "ebpf_cgroup.h"
 
 #define NETDATA_EBPF_OLD_CONFIG_FILE "ebpf.conf"
@@ -98,6 +99,26 @@ typedef struct netdata_error_report {
     int err;
 } netdata_error_report_t;
 
+typedef struct netdata_ebpf_judy_pid {
+    ARAL *pid_table;
+
+    // Index for PIDs
+    struct {                            // support for multiple indexing engines
+        Pvoid_t JudyLArray;            // the hash table
+        RW_SPINLOCK rw_spinlock;        // protect the index
+    } index;
+} netdata_ebpf_judy_pid_t;
+
+typedef struct netdata_ebpf_judy_pid_stats {
+    char *cmdline;
+
+    // Index for Socket timestamp
+    struct {                            // support for multiple indexing engines
+        Pvoid_t JudyLArray;            // the hash table
+        RW_SPINLOCK rw_spinlock;        // protect the index
+    } socket_stats;
+} netdata_ebpf_judy_pid_stats_t;
+
 extern ebpf_module_t ebpf_modules[];
 enum ebpf_main_index {
     EBPF_MODULE_PROCESS_IDX,
@@ -322,10 +343,19 @@ void ebpf_unload_legacy_code(struct bpf_object *objects, struct bpf_link **probe
 
 void ebpf_read_global_table_stats(netdata_idx_t *stats, netdata_idx_t *values, int map_fd,
                                   int maps_per_core, uint32_t begin, uint32_t end);
+void **ebpf_judy_insert_unsafe(PPvoid_t arr, Word_t key);
+netdata_ebpf_judy_pid_stats_t *ebpf_get_pid_from_judy_unsafe(PPvoid_t judy_array, uint32_t pid);
+
+void parse_network_viewer_section(struct config *cfg);
+void ebpf_clean_ip_structure(ebpf_network_viewer_ip_list_t **clean);
+void ebpf_clean_port_structure(ebpf_network_viewer_port_list_t **clean);
+void ebpf_read_local_addresses_unsafe();
 
 extern ebpf_filesystem_partitions_t localfs[];
 extern ebpf_sync_syscalls_t local_syscalls[];
 extern int ebpf_exit_plugin;
+void ebpf_stop_threads(int sig);
+extern netdata_ebpf_judy_pid_t ebpf_judy_pid;
 
 #define EBPF_MAX_SYNCHRONIZATION_TIME 300
 
diff --git a/collectors/ebpf.plugin/ebpf_apps.c b/collectors/ebpf.plugin/ebpf_apps.c
index c7c0cbbbb0..b1b42c8d85 100644
--- a/collectors/ebpf.plugin/ebpf_apps.c
+++ b/collectors/ebpf.plugin/ebpf_apps.c
@@ -375,58 +375,6 @@ int ebpf_read_hash_table(void *ep, int fd, uint32_t pid)
     return -1;
 }
 
-/**
- * Read socket statistic
- *
- * Read information from kernel ring to user ring.
- *
- * @param ep    the table with all process stats values.
- * @param fd    the file descriptor mapped from kernel
- * @param ef    a pointer for the functions mapped from dynamic library
- * @param pids  the list of pids associated to a target.
- *
- * @return
- */
-size_t read_bandwidth_statistic_using_pid_on_target(ebpf_bandwidth_t **ep, int fd, struct ebpf_pid_on_target *pids)
-{
-    size_t count = 0;
-    while (pids) {
-        uint32_t current_pid = pids->pid;
-        if (!ebpf_read_hash_table(ep[current_pid], fd, current_pid))
-            count++;
-
-        pids = pids->next;
-    }
-
-    return count;
-}
-
-/**
- * Read bandwidth statistic using hash table
- *
- * @param out                   the output tensor that will receive the information.
- * @param fd                    the file descriptor that has the data
- * @param bpf_map_lookup_elem   a pointer for the function to read the data
- * @param bpf_map_get_next_key  a pointer fo the function to read the index.
- */
-size_t read_bandwidth_statistic_using_hash_table(ebpf_bandwidth_t **out, int fd)
-{
-    size_t count = 0;
-    uint32_t key = 0;
-    uint32_t next_key = 0;
-
-    while (bpf_map_get_next_key(fd, &key, &next_key) == 0) {
-        ebpf_bandwidth_t *eps = out[next_key];
-        if (!eps) {
-            eps = callocz(1, sizeof(ebpf_process_stat_t));
-            out[next_key] = eps;
-        }
-        ebpf_read_hash_table(eps, fd, next_key);
-    }
-
-    return count;
-}
-
 /*****************************************************************
  *
  *  FUNCTIONS CALLED FROM COLLECTORS
@@ -887,6 +835,7 @@ static inline int read_proc_pid_cmdline(struct ebpf_pid_stat *p)
 {
     static char cmdline[MAX_CMDLINE + 1];
 
+    int ret = 0;
     if (unlikely(!p->cmdline_filename)) {
         char filename[FILENAME_MAX + 1];
         snprintfz(filename, FILENAME_MAX, "%s/proc/%d/cmdline", netdata_configured_host_prefix, p->pid);
@@ -909,20 +858,23 @@ static inline int read_proc_pid_cmdline(struct ebpf_pid_stat *p)
             cmdline[i] = ' ';
     }
 
-    if (p->cmdline)
-        freez(p->cmdline);
-    p->cmdline = strdupz(cmdline);
-
     debug_log("Read file '%s' contents: %s", p->cmdline_filename, p->cmdline);
 
-    return 1;
+    ret = 1;
 
 cleanup:
     // copy the command to the command line
     if (p->cmdline)
         freez(p->cmdline);
     p->cmdline = strdupz(p->comm);
-    return 0;
+
+    rw_spinlock_write_lock(&ebpf_judy_pid.index.rw_spinlock);
+    netdata_ebpf_judy_pid_stats_t *pid_ptr = ebpf_get_pid_from_judy_unsafe(&ebpf_judy_pid.index.JudyLArray, p->pid);
+    if (pid_ptr)
+        pid_ptr->cmdline = p->cmdline;
+    rw_spinlock_write_unlock(&ebpf_judy_pid.index.rw_spinlock);
+
+    return ret;
 }
 
 /**
@@ -1238,6 +1190,24 @@ static inline void del_pid_entry(pid_t pid)
     freez(p->status_filename);
     freez(p->io_filename);
     freez(p->cmdline_filename);
+
+    rw_spinlock_write_lock(&ebpf_judy_pid.index.rw_spinlock);
+    netdata_ebpf_judy_pid_stats_t *pid_ptr = ebpf_get_pid_from_judy_unsafe(&ebpf_judy_pid.index.JudyLArray, p->pid);
+    if (pid_ptr) {
+        if (pid_ptr->socket_stats.JudyLArray) {
+            Word_t local_socket = 0;
+            Pvoid_t *socket_value;
+            bool first_socket = true;
+            while ((socket_value = JudyLFirstThenNext(pid_ptr->socket_stats.JudyLArray, &local_socket, &first_socket))) {
+                netdata_socket_plus_t *socket_clean = *socket_value;
+                aral_freez(aral_socket_table, socket_clean);
+            }
+            JudyLFreeArray(&pid_ptr->socket_stats.JudyLArray, PJE0);
+        }
+        JudyLDel(&ebpf_judy_pid.index.JudyLArray, p->pid, PJE0);
+    }
+    rw_spinlock_write_unlock(&ebpf_judy_pid.index.rw_spinlock);
+
     freez(p->cmdline);
     ebpf_pid_stat_release(p);
 
@@ -1279,12 +1249,6 @@ int get_pid_comm(pid_t pid, size_t n, char *dest)
  */
 void cleanup_variables_from_other_threads(uint32_t pid)
 {
-    // Clean socket structures
-    if (socket_bandwidth_curr) {
-        ebpf_socket_release(socket_bandwidth_curr[pid]);
-        socket_bandwidth_curr[pid] = NULL;
-    }
-
     // Clean cachestat structure
     if (cachestat_pid) {
         ebpf_cachestat_release(cachestat_pid[pid]);
diff --git a/collectors/ebpf.plugin/ebpf_apps.h b/collectors/ebpf.plugin/ebpf_apps.h
index fc894a55fe..5ae5342ddf 100644
--- a/collectors/ebpf.plugin/ebpf_apps.h
+++ b/collectors/ebpf.plugin/ebpf_apps.h
@@ -150,24 +150,6 @@ typedef struct ebpf_process_stat {
     uint8_t removeme;
 } ebpf_process_stat_t;
 
-typedef struct ebpf_bandwidth {
-    uint32_t pid;
-
-    uint64_t first;              // First timestamp
-    uint64_t ct;                 // Last timestamp
-    uint64_t bytes_sent;         // Bytes sent
-    uint64_t bytes_received;     // Bytes received
-    uint64_t call_tcp_sent;      // Number of times tcp_sendmsg was called
-    uint64_t call_tcp_received;  // Number of times tcp_cleanup_rbuf was called
-    uint64_t retransmit;         // Number of times tcp_retransmit was called
-    uint64_t call_udp_sent;      // Number of times udp_sendmsg was called
-    uint64_t call_udp_received;  // Number of times udp_recvmsg was called
-    uint64_t close;              // Number of times tcp_close was called
-    uint64_t drop;               // THIS IS NOT USED FOR WHILE, we are in groom section
-    uint32_t tcp_v4_connection;  // Number of times tcp_v4_connection was called.
-    uint32_t tcp_v6_connection;  // Number of times tcp_v6_connection was called.
-} ebpf_bandwidth_t;
-
 /**
  * Internal function used to write debug messages.
  *
@@ -208,12 +190,6 @@ int ebpf_read_hash_table(void *ep, int fd, uint32_t pid);
 
 int get_pid_comm(pid_t pid, size_t n, char *dest);
 
-size_t read_processes_statistic_using_pid_on_target(ebpf_process_stat_t **ep,
-                                                           int fd,
-                                                           struct ebpf_pid_on_target *pids);
-
-size_t read_bandwidth_statistic_using_pid_on_target(ebpf_bandwidth_t **ep, int fd, struct ebpf_pid_on_target *pids);
-
 void collect_data_for_all_processes(int tbl_pid_stats_fd, int maps_per_core);
 void ebpf_process_apps_accumulator(ebpf_process_stat_t *out, int maps_per_core);
 
diff --git a/collectors/ebpf.plugin/ebpf_cachestat.c b/collectors/ebpf.plugin/ebpf_cachestat.c
index affecdea2d..4b4ef5beb9 100644
--- a/collectors/ebpf.plugin/ebpf_cachestat.c
+++ b/collectors/ebpf.plugin/ebpf_cachestat.c
@@ -1479,7 +1479,7 @@ static int ebpf_cachestat_load_bpf(ebpf_module_t *em)
 #endif
 
     if (ret)
-        netdata_log_error("%s %s", EBPF_DEFAULT_ERROR_MSG, em->thread_name);
+        netdata_log_error("%s %s", EBPF_DEFAULT_ERROR_MSG, em->info.thread_name);
 
     return ret;
 }
diff --git a/collectors/ebpf.plugin/ebpf_cgroup.h b/collectors/ebpf.plugin/ebpf_cgroup.h
index 6620ea10a3..ba8346934f 100644
--- a/collectors/ebpf.plugin/ebpf_cgroup.h
+++ b/collectors/ebpf.plugin/ebpf_cgroup.h
@@ -21,7 +21,7 @@ struct pid_on_target2 {
     ebpf_process_stat_t ps;
     netdata_dcstat_pid_t dc;
     netdata_publish_shm_t shm;
-    ebpf_bandwidth_t socket;
+    netdata_socket_t socket;
     netdata_cachestat_pid_t cachestat;
 
     struct pid_on_target2 *next;
diff --git a/collectors/ebpf.plugin/ebpf_dcstat.c b/collectors/ebpf.plugin/ebpf_dcstat.c
index feb935b93a..52ba5e54f7 100644
--- a/collectors/ebpf.plugin/ebpf_dcstat.c
+++ b/collectors/ebpf.plugin/ebpf_dcstat.c
@@ -1311,7 +1311,7 @@ static int ebpf_dcstat_load_bpf(ebpf_module_t *em)
 #endif
 
     if (ret)
-        netdata_log_error("%s %s", EBPF_DEFAULT_ERROR_MSG, em->thread_name);
+        netdata_log_error("%s %s", EBPF_DEFAULT_ERROR_MSG, em->info.thread_name);
 
     return ret;
 }
diff --git a/collectors/ebpf.plugin/ebpf_disk.c b/collectors/ebpf.plugin/ebpf_disk.c
index 8794562709..f585de6201 100644
--- a/collectors/ebpf.plugin/ebpf_disk.c
+++ b/collectors/ebpf.plugin/ebpf_disk.c
@@ -873,7 +873,7 @@ static int ebpf_disk_load_bpf(ebpf_module_t *em)
 #endif
 
     if (ret)
-        netdata_log_error("%s %s", EBPF_DEFAULT_ERROR_MSG, em->thread_name);
+        netdata_log_error("%s %s", EBPF_DEFAULT_ERROR_MSG, em->info.thread_name);
 
     return ret;
 }
diff --git a/collectors/ebpf.plugin/ebpf_fd.c b/collectors/ebpf.plugin/ebpf_fd.c
index f039647a1d..044c8b7247 100644
--- a/collectors/ebpf.plugin/ebpf_fd.c
+++ b/collectors/ebpf.plugin/ebpf_fd.c
@@ -1337,7 +1337,7 @@ static int ebpf_fd_load_bpf(ebpf_module_t *em)
 #endif
 
     if (ret)
-        netdata_log_error("%s %s", EBPF_DEFAULT_ERROR_MSG, em->thread_name);
+        netdata_log_error("%s %s", EBPF_DEFAULT_ERROR_MSG, em->info.thread_name);
 
     return ret;
 }
diff --git a/collectors/ebpf.plugin/ebpf_filesystem.c b/collectors/ebpf.plugin/ebpf_filesystem.c
index 2bff738cae..b5fd98e89c 100644
--- a/collectors/ebpf.plugin/ebpf_filesystem.c
+++ b/collectors/ebpf.plugin/ebpf_filesystem.c
@@ -470,12 +470,12 @@ int ebpf_filesystem_initialize_ebpf_data(ebpf_module_t *em)
 {
     pthread_mutex_lock(&lock);
     int i;
-    const char *saved_name = em->thread_name;
+    const char *saved_name = em->info.thread_name;
     uint64_t kernels = em->kernels;
     for (i = 0; localfs[i].filesystem; i++) {
         ebpf_filesystem_partitions_t *efp = &localfs[i];
         if (!efp->probe_links && efp->flags & NETDATA_FILESYSTEM_LOAD_EBPF_PROGRAM) {
-            em->thread_name = efp->filesystem;
+            em->info.thread_name = efp->filesystem;
             em->kernels = efp->kernels;
             em->maps = efp->fs_maps;
 #ifdef LIBBPF_MAJOR_VERSION
@@ -484,7 +484,7 @@ int ebpf_filesystem_initialize_ebpf_data(ebpf_module_t *em)
             if (em->load & EBPF_LOAD_LEGACY) {
                 efp->probe_links = ebpf_load_program(ebpf_plugin_dir, em, running_on_kernel, isrh, &efp->objects);
                 if (!efp->probe_links) {
-                    em->thread_name = saved_name;
+                    em->info.thread_name = saved_name;
                     em->kernels = kernels;
                     em->maps = NULL;
                     pthread_mutex_unlock(&lock);
@@ -495,7 +495,7 @@ int ebpf_filesystem_initialize_ebpf_data(ebpf_module_t *em)
             else {
                 efp->fs_obj = filesystem_bpf__open();
                 if (!efp->fs_obj) {
-                    em->thread_name = saved_name;
+                    em->info.thread_name = saved_name;
                     em->kernels = kernels;
                     return -1;
                 } else {
@@ -515,7 +515,7 @@ int ebpf_filesystem_initialize_ebpf_data(ebpf_module_t *em)
         }
         efp->flags &= ~NETDATA_FILESYSTEM_LOAD_EBPF_PROGRAM;
     }
-    em->thread_name = saved_name;
+    em->info.thread_name = saved_name;
     pthread_mutex_unlock(&lock);
     em->kernels = kernels;
     em->maps = NULL;
diff --git a/collectors/ebpf.plugin/ebpf_functions.c b/collectors/ebpf.plugin/ebpf_functions.c
index d4f4687a7d..161473e41d 100644
--- a/collectors/ebpf.plugin/ebpf_functions.c
+++ b/collectors/ebpf.plugin/ebpf_functions.c
@@ -3,6 +3,42 @@
 #include "ebpf.h"
 #include "ebpf_functions.h"
 
+/*****************************************************************
+ *  EBPF FUNCTION COMMON
+ *****************************************************************/
+
+RW_SPINLOCK rw_spinlock;        // protect the buffer
+
+/**
+ * Function Start thread
+ *
+ * Start a specific thread after a user request.
+ *
+ * @param em           The structure with thread information
+ * @param period       time in seconds the thread will run (lifetime).
+ *
+ * @return It returns 0 on success and a nonzero value otherwise.
+ */
+static int ebpf_function_start_thread(ebpf_module_t *em, int period)
+{
+    struct netdata_static_thread *st = em->thread;
+    // another request for thread that already ran, cleanup and restart
+    if (st->thread)
+        freez(st->thread);
+
+    if (period <= 0)
+        period = EBPF_DEFAULT_LIFETIME;
+
+    st->thread = mallocz(sizeof(netdata_thread_t));
+    em->enabled = NETDATA_THREAD_EBPF_FUNCTION_RUNNING;
+    em->lifetime = period;
+
+#ifdef NETDATA_INTERNAL_CHECKS
+    netdata_log_info("Starting thread %s with lifetime = %d", em->info.thread_name, period);
+#endif
+
+    return netdata_thread_create(st->thread, st->name, NETDATA_THREAD_OPTION_DEFAULT, st->start_routine, em);
+}
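+// The helper above frees any previous thread handle, clamps an invalid period to the
+// default lifetime, and starts the thread again. A minimal standalone sketch of the same
+// clamp-and-restart pattern, using plain pthreads and hypothetical names (`demo_thread`,
+// `DEFAULT_LIFETIME`) in place of the Netdata structures:

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

#define DEFAULT_LIFETIME 300            /* stand-in for EBPF_DEFAULT_LIFETIME */

/* Hypothetical thread state mirroring netdata_static_thread. */
struct demo_thread {
    pthread_t *handle;
    int lifetime;
};

static void *demo_routine(void *arg) { (void)arg; return NULL; }

/* Restart helper: free any previous handle, clamp the period, start again. */
static int demo_start_thread(struct demo_thread *st, int period)
{
    free(st->handle);                   /* free(NULL) is a no-op, like freez() */

    if (period <= 0)
        period = DEFAULT_LIFETIME;      /* clamp invalid lifetimes to the default */

    st->handle = malloc(sizeof(pthread_t));
    st->lifetime = period;
    return pthread_create(st->handle, NULL, demo_routine, st);
}
```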
+
 /*****************************************************************
  *  EBPF SELECT MODULE
  *****************************************************************/
@@ -17,7 +53,7 @@
 ebpf_module_t *ebpf_functions_select_module(const char *thread_name) {
     int i;
     for (i = 0; i < EBPF_MODULE_FUNCTION_IDX; i++) {
-        if (strcmp(ebpf_modules[i].thread_name, thread_name) == 0) {
+        if (strcmp(ebpf_modules[i].info.thread_name, thread_name) == 0) {
             return &ebpf_modules[i];
         }
     }
@@ -56,7 +92,6 @@ static void ebpf_function_thread_manipulation_help(const char *transaction) {
             "      Disable a sp.\n"
             "\n"
             "Filters can be combined. Each filter can be given only one time.\n"
-            "Process thread is not controlled by functions until we finish the creation of functions per thread..\n"
             );
 
     pthread_mutex_lock(&lock);
@@ -66,7 +101,6 @@ static void ebpf_function_thread_manipulation_help(const char *transaction) {
     buffer_free(wb);
 }
 
-
 /*****************************************************************
  *  EBPF ERROR FUNCTIONS
  *****************************************************************/
@@ -91,7 +125,7 @@ static void ebpf_function_error(const char *transaction, int code, const char *m
  *****************************************************************/
 
 /**
- * Function enable
+ * Function: thread
  *
  * Enable a specific thread.
  *
@@ -140,27 +174,15 @@ static void ebpf_function_thread_manipulation(const char *transaction,
 
             pthread_mutex_lock(&ebpf_exit_cleanup);
             if (lem->enabled > NETDATA_THREAD_EBPF_FUNCTION_RUNNING) {
-                struct netdata_static_thread *st = lem->thread;
                 // Load configuration again
                 ebpf_update_module(lem, default_btf, running_on_kernel, isrh);
 
-                // another request for thread that already ran, cleanup and restart
-                if (st->thread)
-                    freez(st->thread);
-
-                if (period <= 0)
-                    period = EBPF_DEFAULT_LIFETIME;
-
-                st->thread = mallocz(sizeof(netdata_thread_t));
-                lem->enabled = NETDATA_THREAD_EBPF_FUNCTION_RUNNING;
-                lem->lifetime = period;
-
-#ifdef NETDATA_INTERNAL_CHECKS
-                netdata_log_info("Starting thread %s with lifetime = %d", thread_name, period);
-#endif
-
-                netdata_thread_create(st->thread, st->name, NETDATA_THREAD_OPTION_DEFAULT,
-                                      st->start_routine, lem);
+                if (ebpf_function_start_thread(lem, period)) {
+                    ebpf_function_error(transaction,
+                                        HTTP_RESP_INTERNAL_SERVER_ERROR,
+                                        "Cannot start thread.");
+                    return;
+                }
             } else {
                 lem->running_time = 0;
                 if (period > 0) // user is modifying period to run
@@ -225,10 +247,10 @@ static void ebpf_function_thread_manipulation(const char *transaction,
         // THE ORDER SHOULD BE THE SAME WITH THE FIELDS!
 
         // thread name
-        buffer_json_add_array_item_string(wb, wem->thread_name);
+        buffer_json_add_array_item_string(wb, wem->info.thread_name);
 
         // description
-        buffer_json_add_array_item_string(wb, wem->thread_description);
+        buffer_json_add_array_item_string(wb, wem->info.thread_description);
         // Either it is not running or received a disabled signal and it is stopping.
         if (wem->enabled > NETDATA_THREAD_EBPF_FUNCTION_RUNNING ||
             (!wem->lifetime && (int)wem->running_time == wem->update_every)) {
@@ -266,7 +288,7 @@ static void ebpf_function_thread_manipulation(const char *transaction,
                              RRDF_FIELD_VISUAL_VALUE, RRDF_FIELD_TRANSFORM_NONE, 0, NULL, NAN,
                              RRDF_FIELD_SORT_ASCENDING, NULL, RRDF_FIELD_SUMMARY_COUNT,
                              RRDF_FIELD_FILTER_MULTISELECT,
-                             RRDF_FIELD_OPTS_VISIBLE | RRDF_FIELD_OPTS_STICKY, NULL);
+                             RRDF_FIELD_OPTS_VISIBLE | RRDF_FIELD_OPTS_STICKY | RRDF_FIELD_OPTS_UNIQUE_KEY, NULL);
 
         buffer_rrdf_table_add_field(wb, fields_id++, "Description", "Thread Desc", RRDF_FIELD_TYPE_STRING,
                                     RRDF_FIELD_VISUAL_VALUE, RRDF_FIELD_TRANSFORM_NONE, 0, NULL, NAN,
@@ -355,6 +377,698 @@ static void ebpf_function_thread_manipulation(const char *transaction,
     buffer_free(wb);
 }
 
+/*****************************************************************
+ *  EBPF SOCKET FUNCTION
+ *****************************************************************/
+
+/**
+ * Thread Help
+ *
+ * Shows help with all options accepted by the socket function.
+ *
+ * @param transaction  the transaction id that Netdata sent for this function execution
+*/
+static void ebpf_function_socket_help(const char *transaction) {
+    pthread_mutex_lock(&lock);
+    pluginsd_function_result_begin_to_stdout(transaction, HTTP_RESP_OK, "text/plain", now_realtime_sec() + 3600);
+    fprintf(stdout, "%s",
+            "ebpf.plugin / socket\n"
+            "\n"
+            "Function `socket` displays information for all open sockets during ebpf.plugin runtime.\n"
+            "While the thread is running the plugin is always collecting data, but when an option is modified, the\n"
+            "plugin completely resets the previous table and may show clean data for the first request before it\n"
+            "reflects the modified request.\n"
+            "\n"
+            "The following filters are supported:\n"
+            "\n"
+            "   family:FAMILY\n"
+            "      Shows information only for the FAMILY specified. The option accepts IPV4, IPV6, and all, which is the default.\n"
+            "\n"
+            "   period:PERIOD\n"
+            "      Run the socket thread for a specific PERIOD in seconds. When PERIOD is not\n"
+            "      specified, the plugin uses the default of 300 seconds.\n"
+            "\n"
+            "   resolve:BOOL\n"
+            "      Resolve service names; the default value is YES.\n"
+            "\n"
+            "   range:CIDR\n"
+            "      Show only sockets whose destination address is inside the given CIDR range. By default all addresses are shown.\n"
+            "\n"
+            "   port:RANGE\n"
+            "      Show only sockets whose destination port is inside the given RANGE.\n"
+            "\n"
+            "   reset\n"
+            "      Send a reset to the collector. When the collector receives this command, it reverts to the settings defined in the configuration file.\n"
+            "\n"
+            "   interfaces\n"
+            "      When the collector receives this command, it reads all available interfaces on the host.\n"
+            "\n"
+            "Filters can be combined. Each filter can be given only once. By default all ports are shown.\n"
+    );
+    pluginsd_function_result_end_to_stdout();
+    fflush(stdout);
+    pthread_mutex_unlock(&lock);
+}
+
+/**
+ * Fill Fake socket
+ *
+ * Fill the socket structure with fake (invalid) values.
+ *
+ * @param fake_values is the structure where we store the values.
+ */
+static inline void ebpf_socket_fill_fake_socket(netdata_socket_plus_t *fake_values)
+{
+    snprintfz(fake_values->socket_string.src_ip, INET6_ADDRSTRLEN, "%s", "127.0.0.1");
+    snprintfz(fake_values->socket_string.dst_ip, INET6_ADDRSTRLEN, "%s", "127.0.0.1");
+    fake_values->pid = getpid();
+    //fake_values->socket_string.src_port = 0;
+    fake_values->socket_string.dst_port[0] = 0;
+    snprintfz(fake_values->socket_string.dst_port, NI_MAXSERV, "%s", "none");
+    fake_values->data.family = AF_INET;
+    fake_values->data.protocol = AF_UNSPEC;
+}
+
+/**
+ * Fill function buffer
+ *
+ * Fill buffer with data to be shown on cloud.
+ *
+ * @param wb          buffer where we store data.
+ * @param values      data read from hash table
+ * @param name        the process name
+ */
+static void ebpf_fill_function_buffer(BUFFER *wb, netdata_socket_plus_t *values, char *name)
+{
+    buffer_json_add_array_item_array(wb);
+
+    // IMPORTANT!
+    // THE ORDER SHOULD BE THE SAME WITH THE FIELDS!
+
+    // PID
+    buffer_json_add_array_item_uint64(wb, (uint64_t)values->pid);
+
+    // NAME
+    buffer_json_add_array_item_string(wb, (name) ? name : "not identified");
+
+    // Origin
+    buffer_json_add_array_item_string(wb, (values->data.external_origin) ? "incoming" : "outgoing");
+
+    // Source IP
+    buffer_json_add_array_item_string(wb, values->socket_string.src_ip);
+
+    // SRC Port
+    //buffer_json_add_array_item_uint64(wb, (uint64_t) values->socket_string.src_port);
+
+    // Destination IP
+    buffer_json_add_array_item_string(wb, values->socket_string.dst_ip);
+
+    // DST Port
+    buffer_json_add_array_item_string(wb, values->socket_string.dst_port);
+
+    uint64_t connections;
+    if (values->data.protocol == IPPROTO_TCP) {
+        // Protocol
+        buffer_json_add_array_item_string(wb, "TCP");
+
+        // Bytes received
+        buffer_json_add_array_item_uint64(wb, (uint64_t) values->data.tcp.tcp_bytes_received);
+
+        // Bytes sent
+        buffer_json_add_array_item_uint64(wb, (uint64_t) values->data.tcp.tcp_bytes_sent);
+
+        // Connections
+        connections = values->data.tcp.ipv4_connect + values->data.tcp.ipv6_connect;
+    } else if (values->data.protocol == IPPROTO_UDP) {
+        // Protocol
+        buffer_json_add_array_item_string(wb, "UDP");
+
+        // Bytes received
+        buffer_json_add_array_item_uint64(wb, (uint64_t) values->data.udp.udp_bytes_received);
+
+        // Bytes sent
+        buffer_json_add_array_item_uint64(wb, (uint64_t) values->data.udp.udp_bytes_sent);
+
+        // Connections
+        connections = values->data.udp.call_udp_sent + values->data.udp.call_udp_received;
+    } else {
+        // Protocol
+        buffer_json_add_array_item_string(wb, "UNSPEC");
+
+        // Bytes received
+        buffer_json_add_array_item_uint64(wb, 0);
+
+        // Bytes sent
+        buffer_json_add_array_item_uint64(wb, 0);
+
+        connections = 1;
+    }
+
+    // Connections
+    if (values->flags & NETDATA_SOCKET_FLAGS_ALREADY_OPEN) {
+        connections++;
+    } else if (!connections) {
+        // If there are no connections, we missed the moment the connection was opened
+        values->flags |= NETDATA_SOCKET_FLAGS_ALREADY_OPEN;
+        connections++;
+    }
+    buffer_json_add_array_item_uint64(wb, connections);
+
+    buffer_json_array_close(wb);
+}
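+// The connection accounting above guarantees every row reports at least one connection:
+// a socket already flagged as open counts one extra, and a socket with zero observed
+// connections had its open event missed, so it gets flagged and counted once. A standalone
+// sketch of that rule, with a hypothetical flag constant standing in for the Netdata one:

```c
#include <assert.h>
#include <stdint.h>

#define FLAG_ALREADY_OPEN (1u << 0)  /* stand-in for NETDATA_SOCKET_FLAGS_ALREADY_OPEN */

/* At-least-one rule applied when a table row is filled: a socket flagged as
 * already open counts one extra connection; a socket with zero observed
 * connections had its open event missed, so flag it and count one. */
static uint64_t count_connections(uint64_t observed, uint32_t *flags)
{
    uint64_t connections = observed;

    if (*flags & FLAG_ALREADY_OPEN) {
        connections++;
    } else if (!connections) {
        *flags |= FLAG_ALREADY_OPEN;
        connections++;
    }

    return connections;
}
```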
+
+/**
+ * Clean Judy array unsafe
+ *
+ * Clean all Judy arrays allocated to show the table when a function is called.
+ * Before calling this function, it is necessary to lock `ebpf_judy_pid.index.rw_spinlock`.
+ **/
+static void ebpf_socket_clean_judy_array_unsafe()
+{
+    if (!ebpf_judy_pid.index.JudyLArray)
+        return;
+
+    Pvoid_t *pid_value, *socket_value;
+    Word_t local_pid = 0, local_socket = 0;
+    bool first_pid = true, first_socket = true;
+    while ((pid_value = JudyLFirstThenNext(ebpf_judy_pid.index.JudyLArray, &local_pid, &first_pid))) {
+        netdata_ebpf_judy_pid_stats_t *pid_ptr = (netdata_ebpf_judy_pid_stats_t *)*pid_value;
+        rw_spinlock_write_lock(&pid_ptr->socket_stats.rw_spinlock);
+        if (pid_ptr->socket_stats.JudyLArray) {
+            while ((socket_value = JudyLFirstThenNext(pid_ptr->socket_stats.JudyLArray, &local_socket, &first_socket))) {
+                netdata_socket_plus_t *socket_clean = *socket_value;
+                aral_freez(aral_socket_table, socket_clean);
+            }
+            JudyLFreeArray(&pid_ptr->socket_stats.JudyLArray, PJE0);
+            pid_ptr->socket_stats.JudyLArray = NULL;
+        }
+        rw_spinlock_write_unlock(&pid_ptr->socket_stats.rw_spinlock);
+    }
+}
+
+/**
+ * Fill function buffer unsafe
+ *
+ * Fill the function buffer with socket information. Before calling this function, it is necessary to lock
+ * `ebpf_judy_pid.index.rw_spinlock`.
+ *
+ * @param buf    buffer used to store data to be shown by the function.
+ */
+static void ebpf_socket_fill_function_buffer_unsafe(BUFFER *buf)
+{
+    int counter = 0;
+
+    Pvoid_t *pid_value, *socket_value;
+    Word_t local_pid = 0;
+    bool first_pid = true;
+    while ((pid_value = JudyLFirstThenNext(ebpf_judy_pid.index.JudyLArray, &local_pid, &first_pid))) {
+        netdata_ebpf_judy_pid_stats_t *pid_ptr = (netdata_ebpf_judy_pid_stats_t *)*pid_value;
+        bool first_socket = true;
+        Word_t local_timestamp = 0;
+        rw_spinlock_read_lock(&pid_ptr->socket_stats.rw_spinlock);
+        if (pid_ptr->socket_stats.JudyLArray) {
+            while ((socket_value = JudyLFirstThenNext(pid_ptr->socket_stats.JudyLArray, &local_timestamp, &first_socket))) {
+                netdata_socket_plus_t *values = (netdata_socket_plus_t *)*socket_value;
+                ebpf_fill_function_buffer(buf, values, pid_ptr->cmdline);
+            }
+            counter++;
+        }
+        rw_spinlock_read_unlock(&pid_ptr->socket_stats.rw_spinlock);
+    }
+
+    if (!counter) {
+        netdata_socket_plus_t fake_values = { };
+        ebpf_socket_fill_fake_socket(&fake_values);
+        ebpf_fill_function_buffer(buf, &fake_values, NULL);
+    }
+}
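+// When no PID produced any row, the function above emits a single fake socket so the
+// response is never an empty table. That at-least-one-row guarantee can be sketched with
+// a hypothetical row filler (plain strings instead of BUFFER/Judy structures):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical row filler: copy up to `max` real rows, and when none exist
 * emit one placeholder so consumers always receive at least one row. */
static int fill_rows(int real_rows, char out[][32], int max)
{
    int count = 0;

    for (int i = 0; i < real_rows && count < max; i++)
        snprintf(out[count++], 32, "row-%d", i);

    if (!count)                               /* no data: add a fake socket row */
        snprintf(out[count++], 32, "127.0.0.1:none");

    return count;
}
```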
+
+/**
+ * Socket read hash
+ *
+ * Read the open connections stored in the hash tables and fill the given buffer.
+ * When the thread was not initialized, or the Judy array was reset, a fake socket is emitted instead.
+ *
+ * @param buf the buffer where data is stored;
+ * @param em  the module main structure.
+ */
+void ebpf_socket_read_open_connections(BUFFER *buf, struct ebpf_module *em)
+{
+    // thread was not initialized or Array was reset
+    rw_spinlock_read_lock(&ebpf_judy_pid.index.rw_spinlock);
+    if (!em->maps || (em->maps[NETDATA_SOCKET_OPEN_SOCKET].map_fd == ND_EBPF_MAP_FD_NOT_INITIALIZED) ||
+        !ebpf_judy_pid.index.JudyLArray){
+        netdata_socket_plus_t fake_values = { };
+
+        ebpf_socket_fill_fake_socket(&fake_values);
+
+        ebpf_fill_function_buffer(buf, &fake_values, NULL);
+        rw_spinlock_read_unlock(&ebpf_judy_pid.index.rw_spinlock);
+        return;
+    }
+
+    rw_spinlock_read_lock(&network_viewer_opt.rw_spinlock);
+    ebpf_socket_fill_function_buffer_unsafe(buf);
+    rw_spinlock_read_unlock(&network_viewer_opt.rw_spinlock);
+    rw_spinlock_read_unlock(&ebpf_judy_pid.index.rw_spinlock);
+}
+
+/**
+ * Function: Socket
+ *
+ * Show information for sockets stored in hash tables.
+ *
+ * @param transaction  the transaction id that Netdata sent for this function execution
+ * @param function     function name and arguments given to thread.
+ * @param line_buffer  buffer used to parse args
+ * @param line_max     Number of arguments given
+ * @param timeout      The function timeout
+ * @param em           The structure with thread information
+ */
+static void ebpf_function_socket_manipulation(const char *transaction,
+                                              char *function __maybe_unused,
+                                              char *line_buffer __maybe_unused,
+                                              int line_max __maybe_unused,
+                                              int timeout __maybe_unused,
+                                              ebpf_module_t *em)
+{
+    UNUSED(line_buffer);
+    UNUSED(timeout);
+
+    char *words[PLUGINSD_MAX_WORDS] = {NULL};
+    size_t num_words = quoted_strings_splitter_pluginsd(function, words, PLUGINSD_MAX_WORDS);
+    const char *name;
+    int period = -1;
+    rw_spinlock_write_lock(&ebpf_judy_pid.index.rw_spinlock);
+    network_viewer_opt.enabled = CONFIG_BOOLEAN_YES;
+    uint32_t previous;
+
+    for (int i = 1; i < PLUGINSD_MAX_WORDS; i++) {
+        const char *keyword = get_word(words, num_words, i);
+        if (!keyword)
+            break;
+
+        if (strncmp(keyword, EBPF_FUNCTION_SOCKET_FAMILY, sizeof(EBPF_FUNCTION_SOCKET_FAMILY) - 1) == 0) {
+            name = &keyword[sizeof(EBPF_FUNCTION_SOCKET_FAMILY) - 1];
+            previous = network_viewer_opt.family;
+            uint32_t family = AF_UNSPEC;
+            if (!strcmp(name, "IPV4"))
+                family = AF_INET;
+            else if (!strcmp(name, "IPV6"))
+                family = AF_INET6;
+
+            if (family != previous) {
+                rw_spinlock_write_lock(&network_viewer_opt.rw_spinlock);
+                network_viewer_opt.family = family;
+                rw_spinlock_write_unlock(&network_viewer_opt.rw_spinlock);
+                ebpf_socket_clean_judy_array_unsafe();
+            }
+        } else if (strncmp(keyword, EBPF_FUNCTION_SOCKET_PERIOD, sizeof(EBPF_FUNCTION_SOCKET_PERIOD) - 1) == 0) {
+            name = &keyword[sizeof(EBPF_FUNCTION_SOCKET_PERIOD) - 1];
+            pthread_mutex_lock(&ebpf_exit_cleanup);
+            period = str2i(name);
+            if (period > 0) {
+                em->lifetime = period;
+            } else
+                em->lifetime = EBPF_NON_FUNCTION_LIFE_TIME;
+
+#ifdef NETDATA_DEV_MODE
+            collector_info("Lifetime modified for %u", em->lifetime);
+#endif
+            pthread_mutex_unlock(&ebpf_exit_cleanup);
+        } else if (strncmp(keyword, EBPF_FUNCTION_SOCKET_RESOLVE, sizeof(EBPF_FUNCTION_SOCKET_RESOLVE) - 1) == 0) {
+            previous = network_viewer_opt.service_resolution_enabled;
+            uint32_t resolution;
+            name = &keyword[sizeof(EBPF_FUNCTION_SOCKET_RESOLVE) - 1];
+            resolution = (!strcasecmp(name, "YES")) ? CONFIG_BOOLEAN_YES : CONFIG_BOOLEAN_NO;
+
+            if (previous != resolution) {
+                rw_spinlock_write_lock(&network_viewer_opt.rw_spinlock);
+                network_viewer_opt.service_resolution_enabled = resolution;
+                rw_spinlock_write_unlock(&network_viewer_opt.rw_spinlock);
+
+                ebpf_socket_clean_judy_array_unsafe();
+            }
+        } else if (strncmp(keyword, EBPF_FUNCTION_SOCKET_RANGE, sizeof(EBPF_FUNCTION_SOCKET_RANGE) - 1) == 0) {
+            name = &keyword[sizeof(EBPF_FUNCTION_SOCKET_RANGE) - 1];
+            rw_spinlock_write_lock(&network_viewer_opt.rw_spinlock);
+            ebpf_clean_ip_structure(&network_viewer_opt.included_ips);
+            ebpf_clean_ip_structure(&network_viewer_opt.excluded_ips);
+            ebpf_parse_ips_unsafe((char *)name);
+            rw_spinlock_write_unlock(&network_viewer_opt.rw_spinlock);
+
+            ebpf_socket_clean_judy_array_unsafe();
+        } else if (strncmp(keyword, EBPF_FUNCTION_SOCKET_PORT, sizeof(EBPF_FUNCTION_SOCKET_PORT) - 1) == 0) {
+            name = &keyword[sizeof(EBPF_FUNCTION_SOCKET_PORT) - 1];
+            rw_spinlock_write_lock(&network_viewer_opt.rw_spinlock);
+            ebpf_clean_port_structure(&network_viewer_opt.included_port);
+            ebpf_clean_port_structure(&network_viewer_opt.excluded_port);
+            ebpf_parse_ports((char *)name);
+            rw_spinlock_write_unlock(&network_viewer_opt.rw_spinlock);
+
+            ebpf_socket_clean_judy_array_unsafe();
+        } else if (strncmp(keyword, EBPF_FUNCTION_SOCKET_RESET, sizeof(EBPF_FUNCTION_SOCKET_RESET) - 1) == 0) {
+            rw_spinlock_write_lock(&network_viewer_opt.rw_spinlock);
+            ebpf_clean_port_structure(&network_viewer_opt.included_port);
+            ebpf_clean_port_structure(&network_viewer_opt.excluded_port);
+
+            ebpf_clean_ip_structure(&network_viewer_opt.included_ips);
+            ebpf_clean_ip_structure(&network_viewer_opt.excluded_ips);
+            ebpf_clean_ip_structure(&network_viewer_opt.ipv4_local_ip);
+            ebpf_clean_ip_structure(&network_viewer_opt.ipv6_local_ip);
+
+            parse_network_viewer_section(&socket_config);
+            ebpf_read_local_addresses_unsafe();
+            network_viewer_opt.enabled = CONFIG_BOOLEAN_YES;
+            rw_spinlock_write_unlock(&network_viewer_opt.rw_spinlock);
+        } else if (strncmp(keyword, EBPF_FUNCTION_SOCKET_INTERFACES, sizeof(EBPF_FUNCTION_SOCKET_INTERFACES) - 1) == 0) {
+            rw_spinlock_write_lock(&network_viewer_opt.rw_spinlock);
+            ebpf_read_local_addresses_unsafe();
+            rw_spinlock_write_unlock(&network_viewer_opt.rw_spinlock);
+        } else if (strncmp(keyword, "help", 4) == 0) {
+            ebpf_function_socket_help(transaction);
+            rw_spinlock_write_unlock(&ebpf_judy_pid.index.rw_spinlock);
+            return;
+        }
+    }
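+// The loop above matches each keyword with `strncmp(keyword, PREFIX, sizeof(PREFIX) - 1)`
+// and then reads the value from `&keyword[sizeof(PREFIX) - 1]`. A standalone sketch of that
+// prefix-parse pattern, with a hypothetical keyword constant standing in for the
+// EBPF_FUNCTION_SOCKET_* macros:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define KW_PERIOD "period:"   /* stand-in for EBPF_FUNCTION_SOCKET_PERIOD */

/* Return the value part when `keyword` starts with `prefix`, else NULL. */
static const char *match_prefix(const char *keyword, const char *prefix)
{
    size_t len = strlen(prefix);
    return (strncmp(keyword, prefix, len) == 0) ? &keyword[len] : NULL;
}

/* Parse "period:NNN" into seconds; -1 when the keyword does not match. */
static int parse_period(const char *keyword)
{
    const char *value = match_prefix(keyword, KW_PERIOD);
    return value ? atoi(value) : -1;
}
```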
+    rw_spinlock_write_unlock(&ebpf_judy_pid.index.rw_spinlock);
+
+    pthread_mutex_lock(&ebpf_exit_cleanup);
+    if (em->enabled > NETDATA_THREAD_EBPF_FUNCTION_RUNNING) {
+        // Cleanup when we already had a thread running
+        rw_spinlock_write_lock(&ebpf_judy_pid.index.rw_spinlock);
+        ebpf_socket_clean_judy_array_unsafe();
+        rw_spinlock_write_unlock(&ebpf_judy_pid.index.rw_spinlock);
+
+        if (ebpf_function_start_thread(em, period)) {
+            ebpf_function_error(transaction,
+                                HTTP_RESP_INTERNAL_SERVER_ERROR,
+                                "Cannot start thread.");
+            pthread_mutex_unlock(&ebpf_exit_cleanup);
+            return;
+        }
+    } else {
+        if (period < 0 && em->lifetime < EBPF_NON_FUNCTION_LIFE_TIME) {
+            em->lifetime = EBPF_NON_FUNCTION_LIFE_TIME;
+        }
+    }
+    pthread_mutex_unlock(&ebpf_exit_cleanup);
+
+    time_t expires = now_realtime_sec() + em->update_every;
+
+    BUFFER *wb = buffer_create(PLUGINSD_LINE_MAX, NULL);
+    buffer_json_initialize(wb, "\"", "\"", 0, true, false);
+    buffer_json_member_add_uint64(wb, "status", HTTP_RESP_OK);
+    buffer_json_member_add_string(wb, "type", "table");
+    buffer_json_member_add_time_t(wb, "update_every", em->update_every);
+    buffer_json_member_add_string(wb, "help", EBPF_PLUGIN_SOCKET_FUNCTION_DESCRIPTION);
+
+    // Collect data
+    buffer_json_member_add_array(wb, "data");
+    ebpf_socket_read_open_connections(wb, em);
+    buffer_json_array_close(wb); // data
+
+    buffer_json_member_add_object(wb, "columns");
+    {
+        int fields_id = 0;
+
+        // IMPORTANT!
+        // THE ORDER SHOULD BE THE SAME WITH THE VALUES!
+        buffer_rrdf_table_add_field(wb, fields_id++, "PID", "Process ID", RRDF_FIELD_TYPE_INTEGER,
+            RRDF_FIELD_VISUAL_VALUE, RRDF_FIELD_TRANSFORM_NUMBER, 0, NULL, NAN,
+            RRDF_FIELD_SORT_ASCENDING, NULL, RRDF_FIELD_SUMMARY_COUNT,
+            RRDF_FIELD_FILTER_MULTISELECT,
+            RRDF_FIELD_OPTS_VISIBLE | RRDF_FIELD_OPTS_STICKY,
+            NULL);
+
+        buffer_rrdf_table_add_field(wb, fields_id++, "Process Name", "Process Name", RRDF_FIELD_TYPE_STRING,
+                                    RRDF_FIELD_VISUAL_VALUE, RRDF_FIELD_TRANSFORM_NONE, 0, NULL, NAN,
+                                    RRDF_FIELD_SORT_ASCENDING, NULL, RRDF_FIELD_SUMMARY_COUNT,
+                                    RRDF_FIELD_FILTER_MULTISELECT,
+                                    RRDF_FIELD_OPTS_VISIBLE | RRDF_FIELD_OPTS_STICKY, NULL);
+
+        buffer_rrdf_table_add_field(wb, fields_id++, "Origin", "The connection origin.", RRDF_FIELD_TYPE_STRING,
+                                    RRDF_FIELD_VISUAL_VALUE, RRDF_FIELD_TRANSFORM_NONE, 0, NULL, NAN,
+                                    RRDF_FIELD_SORT_ASCENDING, NULL, RRDF_FIELD_SUMMARY_COUNT,
+                                    RRDF_FIELD_FILTER_MULTISELECT,
+                                    RRDF_FIELD_OPTS_VISIBLE | RRDF_FIELD_OPTS_STICKY, NULL);
+
+        buffer_rrdf_table_add_field(wb, fields_id++, "Request from", "Request from IP", RRDF_FIELD_TYPE_STRING,
+                                    RRDF_FIELD_VISUAL_VALUE, RRDF_FIELD_TRANSFORM_NONE, 0, NULL, NAN,
+                                    RRDF_FIELD_SORT_ASCENDING, NULL, RRDF_FIELD_SUMMARY_COUNT,
+                                    RRDF_FIELD_FILTER_MULTISELECT,
+                                    RRDF_FIELD_OPTS_VISIBLE | RRDF_FIELD_OPTS_STICKY, NULL);
+
+        /*
+        buffer_rrdf_table_add_field(wb, fields_id++, "SRC PORT", "Source Port", RRDF_FIELD_TYPE_INTEGER,
+                                    RRDF_FIELD_VISUAL_VALUE, RRDF_FIELD_TRANSFORM_NUMBER, 0, NULL, NAN,
+                                    RRDF_FIELD_SORT_ASCENDING, NULL, RRDF_FIELD_SUMMARY_COUNT,
+                                    RRDF_FIELD_FILTER_MULTISELECT,
+                                    RRDF_FIELD_OPTS_VISIBLE | RRDF_FIELD_OPTS_STICKY,
+                                    NULL);
+                                    */
+
+        buffer_rrdf_table_add_field(wb, fields_id++, "Destination IP", "Destination IP", RRDF_FIELD_TYPE_STRING,
+                                    RRDF_FIELD_VISUAL_VALUE, RRDF_FIELD_TRANSFORM_NONE, 0, NULL, NAN,
+                                    RRDF_FIELD_SORT_ASCENDING, NULL, RRDF_FIELD_SUMMARY_COUNT,
+                                    RRDF_FIELD_FILTER_MULTISELECT,
+                                    RRDF_FIELD_OPTS_VISIBLE | RRDF_FIELD_OPTS_STICKY, NULL);
+
+        buffer_rrdf_table_add_field(wb, fields_id++, "Destination Port", "Destination Port", RRDF_FIELD_TYPE_STRING,
+                                    RRDF_FIELD_VISUAL_VALUE, RRDF_FIELD_TRANSFORM_NONE, 0, NULL, NAN,
+                                    RRDF_FIELD_SORT_ASCENDING, NULL, RRDF_FIELD_SUMMARY_COUNT,
+                                    RRDF_FIELD_FILTER_MULTISELECT,
+                                    RRDF_FIELD_OPTS_VISIBLE | RRDF_FIELD_OPTS_STICKY, NULL);
+
+        buffer_rrdf_table_add_field(wb, fields_id++, "Protocol", "Communication protocol", RRDF_FIELD_TYPE_STRING,
+                                    RRDF_FIELD_VISUAL_VALUE, RRDF_FIELD_TRANSFORM_NONE, 0, NULL, NAN,
+                                    RRDF_FIELD_SORT_ASCENDING, NULL, RRDF_FIELD_SUMMARY_COUNT,
+                                    RRDF_FIELD_FILTER_MULTISELECT,
+                                    RRDF_FIELD_OPTS_VISIBLE | RRDF_FIELD_OPTS_STICKY, NULL);
+
+        buffer_rrdf_table_add_field(wb, fields_id++, "Incoming Bandwidth", "Bytes received.", RRDF_FIELD_TYPE_INTEGER,
+                                    RRDF_FIELD_VISUAL_VALUE, RRDF_FIELD_TRANSFORM_NUMBER, 0, NULL, NAN,
+                                    RRDF_FIELD_SORT_ASCENDING, NULL, RRDF_FIELD_SUMMARY_COUNT,
+                                    RRDF_FIELD_FILTER_MULTISELECT,
+                                    RRDF_FIELD_OPTS_VISIBLE | RRDF_FIELD_OPTS_STICKY,
+                                    NULL);
+
+        buffer_rrdf_table_add_field(wb, fields_id++, "Outgoing Bandwidth", "Bytes sent.", RRDF_FIELD_TYPE_INTEGER,
+                                    RRDF_FIELD_VISUAL_VALUE, RRDF_FIELD_TRANSFORM_NUMBER, 0, NULL, NAN,
+                                    RRDF_FIELD_SORT_ASCENDING, NULL, RRDF_FIELD_SUMMARY_COUNT,
+                                    RRDF_FIELD_FILTER_MULTISELECT,
+                                    RRDF_FIELD_OPTS_VISIBLE | RRDF_FIELD_OPTS_STICKY,
+                                    NULL);
+
+        buffer_rrdf_table_add_field(wb, fields_id, "Connections", "Number of calls to tcp_vX_connect and udp_sendmsg, where X is the protocol version.", RRDF_FIELD_TYPE_INTEGER,
+                                    RRDF_FIELD_VISUAL_VALUE, RRDF_FIELD_TRANSFORM_NUMBER, 0, NULL, NAN,
+                                    RRDF_FIELD_SORT_ASCENDING, NULL, RRDF_FIELD_SUMMARY_COUNT,
+                                    RRDF_FIELD_FILTER_MULTISELECT,
+                                    RRDF_FIELD_OPTS_VISIBLE | RRDF_FIELD_OPTS_STICKY,
+                                    NULL);
+    }
+    buffer_json_object_close(wb); // columns
+
+    buffer_json_member_add_object(wb, "charts");
+    {
+        // Inbound Connections
+        buffer_json_member_add_object(wb, "IPInboundConn");
+        {
+            buffer_json_member_add_string(wb, "name", "TCP Inbound Connection");
+            buffer_json_member_add_string(wb, "type", "line");
+            buffer_json_member_add_array(wb, "columns");
+            {
+                buffer_json_add_array_item_string(wb, "connected_tcp");
+                buffer_json_add_array_item_string(wb, "connected_udp");
+            }
+            buffer_json_array_close(wb);
+        }
+        buffer_json_object_close(wb);
+
+        // OutBound Connections
+        buffer_json_member_add_object(wb, "IPTCPOutboundConn");
+        {
+            buffer_json_member_add_string(wb, "name", "TCP Outbound Connection");
+            buffer_json_member_add_string(wb, "type", "line");
+            buffer_json_member_add_array(wb, "columns");
+            {
+                buffer_json_add_array_item_string(wb, "connected_V4");
+                buffer_json_add_array_item_string(wb, "connected_V6");
+            }
+            buffer_json_array_close(wb);
+        }
+        buffer_json_object_close(wb);
+
+        // TCP Functions
+        buffer_json_member_add_object(wb, "TCPFunctions");
+        {
+            buffer_json_member_add_string(wb, "name", "TCPFunctions");
+            buffer_json_member_add_string(wb, "type", "line");
+            buffer_json_member_add_array(wb, "columns");
+            {
+                buffer_json_add_array_item_string(wb, "received");
+                buffer_json_add_array_item_string(wb, "sent");
+                buffer_json_add_array_item_string(wb, "close");
+            }
+            buffer_json_array_close(wb);
+        }
+        buffer_json_object_close(wb);
+
+        // TCP Bandwidth
+        buffer_json_member_add_object(wb, "TCPBandwidth");
+        {
+            buffer_json_member_add_string(wb, "name", "TCPBandwidth");
+            buffer_json_member_add_string(wb, "type", "line");
+            buffer_json_member_add_array(wb, "columns");
+            {
+                buffer_json_add_array_item_string(wb, "received");
+                buffer_json_add_array_item_string(wb, "sent");
+            }
+            buffer_json_array_close(wb);
+        }
+        buffer_json_object_close(wb);
+
+        // UDP Functions
+        buffer_json_member_add_object(wb, "UDPFunctions");
+        {
+            buffer_json_member_add_string(wb, "name", "UDPFunctions");
+            buffer_json_member_add_string(wb, "type", "line");
+            buffer_json_member_add_array(wb, "columns");
+            {
+                buffer_json_add_array_item_string(wb, "received");
+                buffer_json_add_array_item_string(wb, "sent");
+            }
+            buffer_json_array_close(wb);
+        }
+        buffer_json_object_close(wb);
+
+        // UDP Bandwidth
+        buffer_json_member_add_object(wb, "UDPBandwidth");
+        {
+            buffer_json_member_add_string(wb, "name", "UDPBandwidth");
+            buffer_json_member_add_string(wb, "type", "line");
+            buffer_json_member_add_array(wb, "columns");
+            {
+                buffer_json_add_array_item_string(wb, "received");
+                buffer_json_add_array_item_string(wb, "sent");
+            }
+            buffer_json_array_close(wb);
+        }
+        buffer_json_object_close(wb);
+
+    }
+    buffer_json_object_close(wb); // charts
+
+    buffer_json_member_add_string(wb, "default_sort_column", "PID");
+
+    // Should we expose only fields that can be grouped?
+    buffer_json_member_add_object(wb, "group_by");
+    {
+        // group by PID
+        buffer_json_member_add_object(wb, "PID");
+        {
+            buffer_json_member_add_string(wb, "name", "Process ID");
+            buffer_json_member_add_array(wb, "columns");
+            {
+                buffer_json_add_array_item_string(wb, "PID");
+            }
+            buffer_json_array_close(wb);
+        }
+        buffer_json_object_close(wb);
+
+        // group by Process Name
+        buffer_json_member_add_object(wb, "Process Name");
+        {
+            buffer_json_member_add_string(wb, "name", "Process Name");
+            buffer_json_member_add_array(wb, "columns");
+            {
+                buffer_json_add_array_item_string(wb, "Process Name");
+            }
+            buffer_json_array_close(wb);
+        }
+        buffer_json_object_close(wb);
+
+        // group by Origin
+        buffer_json_member_add_object(wb, "Origin");
+        {
+            buffer_json_member_add_string(wb, "name", "Origin");
+            buffer_json_member_add_array(wb, "columns");
+            {
+                buffer_json_add_array_item_string(wb, "Origin");
+            }
+            buffer_json_array_close(wb);
+        }
+        buffer_json_object_close(wb);
+
+        // group by Request From IP
+        buffer_json_member_add_object(wb, "Request from");
+        {
+            buffer_json_member_add_string(wb, "name", "Request from IP");
+            buffer_json_member_add_array(wb, "columns");
+            {
+                buffer_json_add_array_item_string(wb, "Request from");
+            }
+            buffer_json_array_close(wb);
+        }
+        buffer_json_object_close(wb);
+
+        // group by Destination IP
+        buffer_json_member_add_object(wb, "Destination IP");
+        {
+            buffer_json_member_add_string(wb, "name", "Destination IP");
+            buffer_json_member_add_array(wb, "columns");
+            {
+                buffer_json_add_array_item_string(wb, "Destination IP");
+            }
+            buffer_json_array_close(wb);
+        }
+        buffer_json_object_close(wb);
+
+        // group by Destination Port
+        buffer_json_member_add_object(wb, "Destination Port");
+        {
+            buffer_json_member_add_string(wb, "name", "Destination Port");
+            buffer_json_member_add_array(wb, "columns");
+            {
+                buffer_json_add_array_item_string(wb, "Destination Port");
+            }
+            buffer_json_array_close(wb);
+        }
+        buffer_json_object_close(wb);
+
+        // group by Protocol
+        buffer_json_member_add_object(wb, "Protocol");
+        {
+            buffer_json_member_add_string(wb, "name", "Protocol");
+            buffer_json_member_add_array(wb, "columns");
+            {
+                buffer_json_add_array_item_string(wb, "Protocol");
+            }
+            buffer_json_array_close(wb);
+        }
+        buffer_json_object_close(wb);
+    }
+    buffer_json_object_close(wb); // group_by
+
+    buffer_json_member_add_time_t(wb, "expires", expires);
+    buffer_json_finalize(wb);
+
+    // Lock necessary to avoid a race condition while writing to stdout
+    pthread_mutex_lock(&lock);
+    pluginsd_function_result_begin_to_stdout(transaction, HTTP_RESP_OK, "application/json", expires);
+
+    fwrite(buffer_tostring(wb), buffer_strlen(wb), 1, stdout);
+
+    pluginsd_function_result_end_to_stdout();
+    fflush(stdout);
+    pthread_mutex_unlock(&lock);
+
+    buffer_free(wb);
+}
 
 /*****************************************************************
  *  EBPF FUNCTION THREAD
@@ -372,6 +1086,7 @@ void *ebpf_function_thread(void *ptr)
     ebpf_module_t *em = (ebpf_module_t *)ptr;
     char buffer[PLUGINSD_LINE_MAX + 1];
 
+    rw_spinlock_init(&rw_spinlock);
     char *s = NULL;
     while(!ebpf_exit_plugin && (s = fgets(buffer, PLUGINSD_LINE_MAX, stdin))) {
         char *words[PLUGINSD_MAX_WORDS] = { NULL };
@@ -393,6 +1108,7 @@ void *ebpf_function_thread(void *ptr)
             }
             else {
                 int timeout = str2i(timeout_s);
+                rw_spinlock_write_lock(&rw_spinlock);
                 if (!strncmp(function, EBPF_FUNCTION_THREAD, sizeof(EBPF_FUNCTION_THREAD) - 1))
                     ebpf_function_thread_manipulation(transaction,
                                                       function,
@@ -400,14 +1116,28 @@ void *ebpf_function_thread(void *ptr)
                                                       PLUGINSD_LINE_MAX + 1,
                                                       timeout,
                                                       em);
+                else if (!strncmp(function, EBPF_FUNCTION_SOCKET, sizeof(EBPF_FUNCTION_SOCKET) - 1))
+                    ebpf_function_socket_manipulation(transaction,
+                                                      function,
+                                                      buffer,
+                                                      PLUGINSD_LINE_MAX + 1,
+                                                      timeout,
+                                                      &ebpf_modules[EBPF_MODULE_SOCKET_IDX]);
                 else
                     ebpf_function_error(transaction,
                                         HTTP_RESP_NOT_FOUND,
                                         "No function with this name found in ebpf.plugin.");
+
+                rw_spinlock_write_unlock(&rw_spinlock);
             }
         }
         else
             netdata_log_error("Received unknown command: %s", keyword ? keyword : "(unset)");
     }
+
+    if (!s || feof(stdin) || ferror(stdin)) {
+        netdata_log_error("Detected EOF or error on stdin, stopping plugin threads.");
+        ebpf_stop_threads(SIGQUIT);
+    }
     return NULL;
 }
diff --git a/collectors/ebpf.plugin/ebpf_functions.h b/collectors/ebpf.plugin/ebpf_functions.h
index b20dab6342..795703b428 100644
--- a/collectors/ebpf.plugin/ebpf_functions.h
+++ b/collectors/ebpf.plugin/ebpf_functions.h
@@ -3,20 +3,25 @@
 #ifndef NETDATA_EBPF_FUNCTIONS_H
 #define NETDATA_EBPF_FUNCTIONS_H 1
 
+#ifdef NETDATA_DEV_MODE
+// Announce a plugin function to the agent (development builds only)
+static inline void EBPF_PLUGIN_FUNCTIONS(const char *name, const char *desc) {
+    fprintf(stdout, "%s \"%s\" 10 \"%s\"\n", PLUGINSD_KEYWORD_FUNCTION, name, desc);
+}
+#endif
+
 // configuration file & description
 #define NETDATA_DIRECTORY_FUNCTIONS_CONFIG_FILE "functions.conf"
 #define NETDATA_EBPF_FUNCTIONS_MODULE_DESC "Show information about current function status."
 
 // function list
 #define EBPF_FUNCTION_THREAD "ebpf_thread"
+#define EBPF_FUNCTION_SOCKET "ebpf_socket"
 
+// thread constants
 #define EBPF_PLUGIN_THREAD_FUNCTION_DESCRIPTION "Detailed information about eBPF threads."
 #define EBPF_PLUGIN_THREAD_FUNCTION_ERROR_THREAD_NOT_FOUND "ebpf.plugin does not have thread named "
 
-#define EBPF_PLUGIN_FUNCTIONS(NAME, DESC) do { \
-    fprintf(stdout, PLUGINSD_KEYWORD_FUNCTION " \"" NAME "\" 10 \"%s\"\n", DESC); \
-} while(0)
-
 #define EBPF_THREADS_SELECT_THREAD "thread:"
 #define EBPF_THREADS_ENABLE_CATEGORY "enable:"
 #define EBPF_THREADS_DISABLE_CATEGORY "disable:"
@@ -24,6 +29,16 @@
 #define EBPF_THREAD_STATUS_RUNNING "running"
 #define EBPF_THREAD_STATUS_STOPPED "stopped"
 
+// socket constants
+#define EBPF_PLUGIN_SOCKET_FUNCTION_DESCRIPTION "Detailed information about open sockets."
+#define EBPF_FUNCTION_SOCKET_FAMILY "family:"
+#define EBPF_FUNCTION_SOCKET_PERIOD "period:"
+#define EBPF_FUNCTION_SOCKET_RESOLVE "resolve:"
+#define EBPF_FUNCTION_SOCKET_RANGE "range:"
+#define EBPF_FUNCTION_SOCKET_PORT "port:"
+#define EBPF_FUNCTION_SOCKET_RESET "reset"
+#define EBPF_FUNCTION_SOCKET_INTERFACES "interfaces"
+
 void *ebpf_function_thread(void *ptr);
 
 #endif
diff --git a/collectors/ebpf.plugin/ebpf_mount.c b/collectors/ebpf.plugin/ebpf_mount.c
index 57ea5b2f45..8650e8b623 100644
--- a/collectors/ebpf.plugin/ebpf_mount.c
+++ b/collectors/ebpf.plugin/ebpf_mount.c
@@ -466,7 +466,7 @@ static int ebpf_mount_load_bpf(ebpf_module_t *em)
 #endif
 
     if (ret)
-        netdata_log_error("%s %s", EBPF_DEFAULT_ERROR_MSG, em->thread_name);
+        netdata_log_error("%s %s", EBPF_DEFAULT_ERROR_MSG, em->info.thread_name);
 
     return ret;
 }
diff --git a/collectors/ebpf.plugin/ebpf_process.h b/collectors/ebpf.plugin/ebpf_process.h
index d49e384525..310b321d63 100644
--- a/collectors/ebpf.plugin/ebpf_process.h
+++ b/collectors/ebpf.plugin/ebpf_process.h
@@ -52,7 +52,8 @@ enum netdata_ebpf_stats_order {
     NETDATA_EBPF_ORDER_STAT_HASH_GLOBAL_TABLE_TOTAL,
     NETDATA_EBPF_ORDER_STAT_HASH_PID_TABLE_ADDED,
     NETDATA_EBPF_ORDER_STAT_HASH_PID_TABLE_REMOVED,
-    NETATA_EBPF_ORDER_STAT_ARAL_BEGIN
+    NETATA_EBPF_ORDER_STAT_ARAL_BEGIN,
+    NETDATA_EBPF_ORDER_FUNCTION_PER_THREAD,
 };
 
 enum netdata_ebpf_load_mode_stats{
diff --git a/collectors/ebpf.plugin/ebpf_shm.c b/collectors/ebpf.plugin/ebpf_shm.c
index baeb7204e2..a79074c78c 100644
--- a/collectors/ebpf.plugin/ebpf_shm.c
+++ b/collectors/ebpf.plugin/ebpf_shm.c
@@ -1222,7 +1222,7 @@ static int ebpf_shm_load_bpf(ebpf_module_t *em)
 
 
     if (ret)
-        netdata_log_error("%s %s", EBPF_DEFAULT_ERROR_MSG, em->thread_name);
+        netdata_log_error("%s %s", EBPF_DEFAULT_ERROR_MSG, em->info.thread_name);
 
     return ret;
 }
diff --git a/collectors/ebpf.plugin/ebpf_socket.c b/collectors/ebpf.plugin/ebpf_socket.c
index e4798b30c8..33a82dec0e 100644
--- a/collectors/ebpf.plugin/ebpf_socket.c
+++ b/collectors/ebpf.plugin/ebpf_socket.c
@@ -5,9 +5,6 @@
 #include "ebpf.h"
 #include "ebpf_socket.h"
 
-// ----------------------------------------------------------------------------
-// ARAL vectors used to speed up processing
-
 /*****************************************************************
  *
  *  GLOBAL VARIABLES
@@ -23,16 +20,7 @@ static char *socket_id_names[NETDATA_MAX_SOCKET_VECTOR] = { "tcp_cleanup_rbuf",
                                                             "tcp_connect_v4", "tcp_connect_v6", "inet_csk_accept_tcp",
                                                             "inet_csk_accept_udp" };
 
-static ebpf_local_maps_t socket_maps[] = {{.name = "tbl_bandwidth",
-                                           .internal_input = NETDATA_COMPILED_CONNECTIONS_ALLOWED,
-                                           .user_input = NETDATA_MAXIMUM_CONNECTIONS_ALLOWED,
-                                           .type = NETDATA_EBPF_MAP_RESIZABLE | NETDATA_EBPF_MAP_PID,
-                                           .map_fd = ND_EBPF_MAP_FD_NOT_INITIALIZED,
-#ifdef LIBBPF_MAJOR_VERSION
-                                           .map_type = BPF_MAP_TYPE_PERCPU_HASH
-#endif
-                                          },
-                                          {.name = "tbl_global_sock",
+static ebpf_local_maps_t socket_maps[] = {{.name = "tbl_global_sock",
                                            .internal_input = NETDATA_SOCKET_COUNTER,
                                            .user_input = 0, .type = NETDATA_EBPF_MAP_STATIC,
                                            .map_fd = ND_EBPF_MAP_FD_NOT_INITIALIZED,
@@ -48,16 +36,7 @@ static ebpf_local_maps_t socket_maps[] = {{.name = "tbl_bandwidth",
                                            .map_type = BPF_MAP_TYPE_PERCPU_HASH
 #endif
                                            },
-                                          {.name = "tbl_conn_ipv4",
-                                           .internal_input = NETDATA_COMPILED_CONNECTIONS_ALLOWED,
-                                           .user_input = NETDATA_MAXIMUM_CONNECTIONS_ALLOWED,
-                                           .type = NETDATA_EBPF_MAP_STATIC,
-                                           .map_fd = ND_EBPF_MAP_FD_NOT_INITIALIZED,
-#ifdef LIBBPF_MAJOR_VERSION
-                                           .map_type = BPF_MAP_TYPE_PERCPU_HASH
-#endif
-                                          },
-                                          {.name = "tbl_conn_ipv6",
+                                           {.name = "tbl_nd_socket",
                                            .internal_input = NETDATA_COMPILED_CONNECTIONS_ALLOWED,
                                            .user_input = NETDATA_MAXIMUM_CONNECTIONS_ALLOWED,
                                            .type = NETDATA_EBPF_MAP_STATIC,
@@ -93,11 +72,6 @@ static netdata_idx_t *socket_hash_values = NULL;
 static netdata_syscall_stat_t socket_aggregated_data[NETDATA_MAX_SOCKET_VECTOR];
 static netdata_publish_syscall_t socket_publish_aggregated[NETDATA_MAX_SOCKET_VECTOR];
 
-static ebpf_bandwidth_t *bandwidth_vector = NULL;
-
-pthread_mutex_t nv_mutex;
-netdata_vector_plot_t inbound_vectors = { .plot = NULL, .next = 0, .last = 0 };
-netdata_vector_plot_t outbound_vectors = { .plot = NULL, .next = 0, .last = 0 };
 netdata_socket_t *socket_values;
 
 ebpf_network_viewer_port_list_t *listen_ports = NULL;
@@ -108,28 +82,30 @@ struct config socket_config = { .first_section = NULL,
     .index = { .avl_tree = { .root = NULL, .compar = appconfig_section_compare },
         .rwlock = AVL_LOCK_INITIALIZER } };
 
-netdata_ebpf_targets_t socket_targets[] = { {.name = "inet_csk_accept", .mode = EBPF_LOAD_TRAMPOLINE},
-                                            {.name = "tcp_retransmit_skb", .mode = EBPF_LOAD_TRAMPOLINE},
-                                            {.name = "tcp_cleanup_rbuf", .mode = EBPF_LOAD_TRAMPOLINE},
-                                            {.name = "tcp_close", .mode = EBPF_LOAD_TRAMPOLINE},
-                                            {.name = "udp_recvmsg", .mode = EBPF_LOAD_TRAMPOLINE},
-                                            {.name = "tcp_sendmsg", .mode = EBPF_LOAD_TRAMPOLINE},
-                                            {.name = "udp_sendmsg", .mode = EBPF_LOAD_TRAMPOLINE},
-                                            {.name = "tcp_v4_connect", .mode = EBPF_LOAD_TRAMPOLINE},
-                                            {.name = "tcp_v6_connect", .mode = EBPF_LOAD_TRAMPOLINE},
+netdata_ebpf_targets_t socket_targets[] = { {.name = "inet_csk_accept", .mode = EBPF_LOAD_PROBE},
+                                            {.name = "tcp_retransmit_skb", .mode = EBPF_LOAD_PROBE},
+                                            {.name = "tcp_cleanup_rbuf", .mode = EBPF_LOAD_PROBE},
+                                            {.name = "tcp_close", .mode = EBPF_LOAD_PROBE},
+                                            {.name = "udp_recvmsg", .mode = EBPF_LOAD_PROBE},
+                                            {.name = "tcp_sendmsg", .mode = EBPF_LOAD_PROBE},
+                                            {.name = "udp_sendmsg", .mode = EBPF_LOAD_PROBE},
+                                            {.name = "tcp_v4_connect", .mode = EBPF_LOAD_PROBE},
+                                            {.name = "tcp_v6_connect", .mode = EBPF_LOAD_PROBE},
                                             {.name = NULL, .mode = EBPF_LOAD_TRAMPOLINE}};
 
-struct netdata_static_thread socket_threads = {
-    .name = "EBPF SOCKET READ",
-    .config_section = NULL,
-    .config_name = NULL,
-    .env_name = NULL,
-    .enabled = 1,
-    .thread = NULL,
-    .init_routine = NULL,
-    .start_routine = NULL
+struct netdata_static_thread ebpf_read_socket = {
+        .name = "EBPF_READ_SOCKET",
+        .config_section = NULL,
+        .config_name = NULL,
+        .env_name = NULL,
+        .enabled = 1,
+        .thread = NULL,
+        .init_routine = NULL,
+        .start_routine = NULL
 };
 
+ARAL *aral_socket_table = NULL;
+
 #ifdef NETDATA_DEV_MODE
 int socket_disable_priority;
 #endif
@@ -145,7 +121,9 @@ int socket_disable_priority;
 static void ebpf_socket_disable_probes(struct socket_bpf *obj)
 {
     bpf_program__set_autoload(obj->progs.netdata_inet_csk_accept_kretprobe, false);
+    bpf_program__set_autoload(obj->progs.netdata_tcp_v4_connect_kprobe, false);
     bpf_program__set_autoload(obj->progs.netdata_tcp_v4_connect_kretprobe, false);
+    bpf_program__set_autoload(obj->progs.netdata_tcp_v6_connect_kprobe, false);
     bpf_program__set_autoload(obj->progs.netdata_tcp_v6_connect_kretprobe, false);
     bpf_program__set_autoload(obj->progs.netdata_tcp_retransmit_skb_kprobe, false);
     bpf_program__set_autoload(obj->progs.netdata_tcp_cleanup_rbuf_kprobe, false);
@@ -156,7 +134,6 @@ static void ebpf_socket_disable_probes(struct socket_bpf *obj)
     bpf_program__set_autoload(obj->progs.netdata_tcp_sendmsg_kprobe, false);
     bpf_program__set_autoload(obj->progs.netdata_udp_sendmsg_kretprobe, false);
     bpf_program__set_autoload(obj->progs.netdata_udp_sendmsg_kprobe, false);
-    bpf_program__set_autoload(obj->progs.netdata_socket_release_task_kprobe, false);
 }
 
 /**
@@ -168,8 +145,10 @@ static void ebpf_socket_disable_probes(struct socket_bpf *obj)
  */
 static void ebpf_socket_disable_trampoline(struct socket_bpf *obj)
 {
-    bpf_program__set_autoload(obj->progs.netdata_inet_csk_accept_fentry, false);
+    bpf_program__set_autoload(obj->progs.netdata_inet_csk_accept_fexit, false);
+    bpf_program__set_autoload(obj->progs.netdata_tcp_v4_connect_fentry, false);
     bpf_program__set_autoload(obj->progs.netdata_tcp_v4_connect_fexit, false);
+    bpf_program__set_autoload(obj->progs.netdata_tcp_v6_connect_fentry, false);
     bpf_program__set_autoload(obj->progs.netdata_tcp_v6_connect_fexit, false);
     bpf_program__set_autoload(obj->progs.netdata_tcp_retransmit_skb_fentry, false);
     bpf_program__set_autoload(obj->progs.netdata_tcp_cleanup_rbuf_fentry, false);
@@ -180,7 +159,6 @@ static void ebpf_socket_disable_trampoline(struct socket_bpf *obj)
     bpf_program__set_autoload(obj->progs.netdata_tcp_sendmsg_fexit, false);
     bpf_program__set_autoload(obj->progs.netdata_udp_sendmsg_fentry, false);
     bpf_program__set_autoload(obj->progs.netdata_udp_sendmsg_fexit, false);
-    bpf_program__set_autoload(obj->progs.netdata_socket_release_task_fentry, false);
 }
 
 /**
@@ -190,12 +168,18 @@ static void ebpf_socket_disable_trampoline(struct socket_bpf *obj)
  */
 static void ebpf_set_trampoline_target(struct socket_bpf *obj)
 {
-    bpf_program__set_attach_target(obj->progs.netdata_inet_csk_accept_fentry, 0,
+    bpf_program__set_attach_target(obj->progs.netdata_inet_csk_accept_fexit, 0,
                                    socket_targets[NETDATA_FCNT_INET_CSK_ACCEPT].name);
 
+    bpf_program__set_attach_target(obj->progs.netdata_tcp_v4_connect_fentry, 0,
+                                   socket_targets[NETDATA_FCNT_TCP_V4_CONNECT].name);
+
     bpf_program__set_attach_target(obj->progs.netdata_tcp_v4_connect_fexit, 0,
                                    socket_targets[NETDATA_FCNT_TCP_V4_CONNECT].name);
 
+    bpf_program__set_attach_target(obj->progs.netdata_tcp_v6_connect_fentry, 0,
+                                   socket_targets[NETDATA_FCNT_TCP_V6_CONNECT].name);
+
     bpf_program__set_attach_target(obj->progs.netdata_tcp_v6_connect_fexit, 0,
                                    socket_targets[NETDATA_FCNT_TCP_V6_CONNECT].name);
 
@@ -205,7 +189,8 @@ static void ebpf_set_trampoline_target(struct socket_bpf *obj)
     bpf_program__set_attach_target(obj->progs.netdata_tcp_cleanup_rbuf_fentry, 0,
                                    socket_targets[NETDATA_FCNT_CLEANUP_RBUF].name);
 
-    bpf_program__set_attach_target(obj->progs.netdata_tcp_close_fentry, 0, socket_targets[NETDATA_FCNT_TCP_CLOSE].name);
+    bpf_program__set_attach_target(obj->progs.netdata_tcp_close_fentry, 0,
+                                   socket_targets[NETDATA_FCNT_TCP_CLOSE].name);
 
     bpf_program__set_attach_target(obj->progs.netdata_udp_recvmsg_fentry, 0,
                                    socket_targets[NETDATA_FCNT_UDP_RECEVMSG].name);
@@ -224,8 +209,6 @@ static void ebpf_set_trampoline_target(struct socket_bpf *obj)
 
     bpf_program__set_attach_target(obj->progs.netdata_udp_sendmsg_fexit, 0,
                                    socket_targets[NETDATA_FCNT_UDP_SENDMSG].name);
-
-    bpf_program__set_attach_target(obj->progs.netdata_socket_release_task_fentry, 0, EBPF_COMMON_FNCT_CLEAN_UP);
 }
 
 
@@ -241,9 +224,13 @@ static inline void ebpf_socket_disable_specific_trampoline(struct socket_bpf *ob
 {
     if (sel == MODE_RETURN) {
         bpf_program__set_autoload(obj->progs.netdata_tcp_sendmsg_fentry, false);
+        bpf_program__set_autoload(obj->progs.netdata_tcp_v4_connect_fentry, false);
+        bpf_program__set_autoload(obj->progs.netdata_tcp_v6_connect_fentry, false);
         bpf_program__set_autoload(obj->progs.netdata_udp_sendmsg_fentry, false);
     } else {
         bpf_program__set_autoload(obj->progs.netdata_tcp_sendmsg_fexit, false);
+        bpf_program__set_autoload(obj->progs.netdata_tcp_v4_connect_fexit, false);
+        bpf_program__set_autoload(obj->progs.netdata_tcp_v6_connect_fexit, false);
         bpf_program__set_autoload(obj->progs.netdata_udp_sendmsg_fexit, false);
     }
 }
@@ -260,9 +247,13 @@ static inline void ebpf_socket_disable_specific_probe(struct socket_bpf *obj, ne
 {
     if (sel == MODE_RETURN) {
         bpf_program__set_autoload(obj->progs.netdata_tcp_sendmsg_kprobe, false);
+        bpf_program__set_autoload(obj->progs.netdata_tcp_v4_connect_kprobe, false);
+        bpf_program__set_autoload(obj->progs.netdata_tcp_v6_connect_kprobe, false);
         bpf_program__set_autoload(obj->progs.netdata_udp_sendmsg_kprobe, false);
     } else {
         bpf_program__set_autoload(obj->progs.netdata_tcp_sendmsg_kretprobe, false);
+        bpf_program__set_autoload(obj->progs.netdata_tcp_v4_connect_kretprobe, false);
+        bpf_program__set_autoload(obj->progs.netdata_tcp_v6_connect_kretprobe, false);
         bpf_program__set_autoload(obj->progs.netdata_udp_sendmsg_kretprobe, false);
     }
 }
@@ -275,26 +266,12 @@ static inline void ebpf_socket_disable_specific_probe(struct socket_bpf *obj, ne
  * @param obj is the main structure for bpf objects.
  * @param sel option selected by user.
  */
-static int ebpf_socket_attach_probes(struct socket_bpf *obj, netdata_run_mode_t sel)
+static long ebpf_socket_attach_probes(struct socket_bpf *obj, netdata_run_mode_t sel)
 {
     obj->links.netdata_inet_csk_accept_kretprobe = bpf_program__attach_kprobe(obj->progs.netdata_inet_csk_accept_kretprobe,
                                                                               true,
                                                                               socket_targets[NETDATA_FCNT_INET_CSK_ACCEPT].name);
-    int ret = libbpf_get_error(obj->links.netdata_inet_csk_accept_kretprobe);
-    if (ret)
-            return -1;
-
-    obj->links.netdata_tcp_v4_connect_kretprobe = bpf_program__attach_kprobe(obj->progs.netdata_tcp_v4_connect_kretprobe,
-                                                                             true,
-                                                                             socket_targets[NETDATA_FCNT_TCP_V4_CONNECT].name);
-    ret = libbpf_get_error(obj->links.netdata_tcp_v4_connect_kretprobe);
-    if (ret)
-        return -1;
-
-    obj->links.netdata_tcp_v6_connect_kretprobe = bpf_program__attach_kprobe(obj->progs.netdata_tcp_v6_connect_kretprobe,
-                                                                             true,
-                                                                             socket_targets[NETDATA_FCNT_TCP_V6_CONNECT].name);
-    ret = libbpf_get_error(obj->links.netdata_tcp_v6_connect_kretprobe);
+    long ret = libbpf_get_error(obj->links.netdata_inet_csk_accept_kretprobe);
     if (ret)
         return -1;
 
@@ -347,6 +324,20 @@ static int ebpf_socket_attach_probes(struct socket_bpf *obj, netdata_run_mode_t
         ret = libbpf_get_error(obj->links.netdata_udp_sendmsg_kretprobe);
         if (ret)
             return -1;
+
+        obj->links.netdata_tcp_v4_connect_kretprobe = bpf_program__attach_kprobe(obj->progs.netdata_tcp_v4_connect_kretprobe,
+                                                                                 true,
+                                                                                 socket_targets[NETDATA_FCNT_TCP_V4_CONNECT].name);
+        ret = libbpf_get_error(obj->links.netdata_tcp_v4_connect_kretprobe);
+        if (ret)
+            return -1;
+
+        obj->links.netdata_tcp_v6_connect_kretprobe = bpf_program__attach_kprobe(obj->progs.netdata_tcp_v6_connect_kretprobe,
+                                                                                 true,
+                                                                                 socket_targets[NETDATA_FCNT_TCP_V6_CONNECT].name);
+        ret = libbpf_get_error(obj->links.netdata_tcp_v6_connect_kretprobe);
+        if (ret)
+            return -1;
     } else {
         obj->links.netdata_tcp_sendmsg_kprobe = bpf_program__attach_kprobe(obj->progs.netdata_tcp_sendmsg_kprobe,
                                                                            false,
@@ -361,13 +352,21 @@ static int ebpf_socket_attach_probes(struct socket_bpf *obj, netdata_run_mode_t
         ret = libbpf_get_error(obj->links.netdata_udp_sendmsg_kprobe);
         if (ret)
             return -1;
-    }
 
-    obj->links.netdata_socket_release_task_kprobe = bpf_program__attach_kprobe(obj->progs.netdata_socket_release_task_kprobe,
-                                                                               false,  EBPF_COMMON_FNCT_CLEAN_UP);
-    ret = libbpf_get_error(obj->links.netdata_socket_release_task_kprobe);
-    if (ret)
-        return -1;
+        obj->links.netdata_tcp_v4_connect_kprobe = bpf_program__attach_kprobe(obj->progs.netdata_tcp_v4_connect_kprobe,
+                                                                              false,
+                                                                              socket_targets[NETDATA_FCNT_TCP_V4_CONNECT].name);
+        ret = libbpf_get_error(obj->links.netdata_tcp_v4_connect_kprobe);
+        if (ret)
+            return -1;
+
+        obj->links.netdata_tcp_v6_connect_kprobe = bpf_program__attach_kprobe(obj->progs.netdata_tcp_v6_connect_kprobe,
+                                                                              false,
+                                                                              socket_targets[NETDATA_FCNT_TCP_V6_CONNECT].name);
+        ret = libbpf_get_error(obj->links.netdata_tcp_v6_connect_kprobe);
+        if (ret)
+            return -1;
+    }
 
     return 0;
 }
@@ -381,11 +380,9 @@ static int ebpf_socket_attach_probes(struct socket_bpf *obj, netdata_run_mode_t
  */
 static void ebpf_socket_set_hash_tables(struct socket_bpf *obj)
 {
-    socket_maps[NETDATA_SOCKET_TABLE_BANDWIDTH].map_fd = bpf_map__fd(obj->maps.tbl_bandwidth);
     socket_maps[NETDATA_SOCKET_GLOBAL].map_fd = bpf_map__fd(obj->maps.tbl_global_sock);
     socket_maps[NETDATA_SOCKET_LPORTS].map_fd = bpf_map__fd(obj->maps.tbl_lports);
-    socket_maps[NETDATA_SOCKET_TABLE_IPV4].map_fd = bpf_map__fd(obj->maps.tbl_conn_ipv4);
-    socket_maps[NETDATA_SOCKET_TABLE_IPV6].map_fd = bpf_map__fd(obj->maps.tbl_conn_ipv6);
+    socket_maps[NETDATA_SOCKET_OPEN_SOCKET].map_fd = bpf_map__fd(obj->maps.tbl_nd_socket);
     socket_maps[NETDATA_SOCKET_TABLE_UDP].map_fd = bpf_map__fd(obj->maps.tbl_nv_udp);
     socket_maps[NETDATA_SOCKET_TABLE_CTRL].map_fd = bpf_map__fd(obj->maps.socket_ctrl);
 }
@@ -400,22 +397,13 @@ static void ebpf_socket_set_hash_tables(struct socket_bpf *obj)
  */
 static void ebpf_socket_adjust_map(struct socket_bpf *obj, ebpf_module_t *em)
 {
-    ebpf_update_map_size(obj->maps.tbl_bandwidth, &socket_maps[NETDATA_SOCKET_TABLE_BANDWIDTH],
-                         em, bpf_map__name(obj->maps.tbl_bandwidth));
-
-    ebpf_update_map_size(obj->maps.tbl_conn_ipv4, &socket_maps[NETDATA_SOCKET_TABLE_IPV4],
-                         em, bpf_map__name(obj->maps.tbl_conn_ipv4));
-
-    ebpf_update_map_size(obj->maps.tbl_conn_ipv6, &socket_maps[NETDATA_SOCKET_TABLE_IPV6],
-                         em, bpf_map__name(obj->maps.tbl_conn_ipv6));
+    ebpf_update_map_size(obj->maps.tbl_nd_socket, &socket_maps[NETDATA_SOCKET_OPEN_SOCKET],
+                         em, bpf_map__name(obj->maps.tbl_nd_socket));
 
     ebpf_update_map_size(obj->maps.tbl_nv_udp, &socket_maps[NETDATA_SOCKET_TABLE_UDP],
                          em, bpf_map__name(obj->maps.tbl_nv_udp));
 
-
-    ebpf_update_map_type(obj->maps.tbl_bandwidth, &socket_maps[NETDATA_SOCKET_TABLE_BANDWIDTH]);
-    ebpf_update_map_type(obj->maps.tbl_conn_ipv4, &socket_maps[NETDATA_SOCKET_TABLE_IPV4]);
-    ebpf_update_map_type(obj->maps.tbl_conn_ipv6, &socket_maps[NETDATA_SOCKET_TABLE_IPV6]);
+    ebpf_update_map_type(obj->maps.tbl_nd_socket, &socket_maps[NETDATA_SOCKET_OPEN_SOCKET]);
     ebpf_update_map_type(obj->maps.tbl_nv_udp, &socket_maps[NETDATA_SOCKET_TABLE_UDP]);
     ebpf_update_map_type(obj->maps.socket_ctrl, &socket_maps[NETDATA_SOCKET_TABLE_CTRL]);
     ebpf_update_map_type(obj->maps.tbl_global_sock, &socket_maps[NETDATA_SOCKET_GLOBAL]);
@@ -459,7 +447,7 @@ static inline int ebpf_socket_load_and_attach(struct socket_bpf *obj, ebpf_modul
     if (test == EBPF_LOAD_TRAMPOLINE) {
         ret = socket_bpf__attach(obj);
     } else {
-        ret = ebpf_socket_attach_probes(obj, em->mode);
+        ret = (int)ebpf_socket_attach_probes(obj, em->mode);
     }
 
     if (!ret) {
@@ -478,145 +466,6 @@ static inline int ebpf_socket_load_and_attach(struct socket_bpf *obj, ebpf_modul
  *
  *****************************************************************/
 
-/**
- * Clean internal socket plot
- *
- * Clean all structures allocated with strdupz.
- *
- * @param ptr the pointer with addresses to clean.
- */
-static inline void clean_internal_socket_plot(netdata_socket_plot_t *ptr)
-{
-    freez(ptr->dimension_recv);
-    freez(ptr->dimension_sent);
-    freez(ptr->resolved_name);
-    freez(ptr->dimension_retransmit);
-}
-
-/**
- * Clean socket plot
- *
- * Clean the allocated data for inbound and outbound vectors.
-static void clean_allocated_socket_plot()
-{
-    if (!network_viewer_opt.enabled)
-        return;
-
-    uint32_t i;
-    uint32_t end = inbound_vectors.last;
-    netdata_socket_plot_t *plot = inbound_vectors.plot;
-    for (i = 0; i < end; i++) {
-        clean_internal_socket_plot(&plot[i]);
-    }
-
-    clean_internal_socket_plot(&plot[inbound_vectors.last]);
-
-    end = outbound_vectors.last;
-    plot = outbound_vectors.plot;
-    for (i = 0; i < end; i++) {
-        clean_internal_socket_plot(&plot[i]);
-    }
-    clean_internal_socket_plot(&plot[outbound_vectors.last]);
-}
- */
-
-/**
- * Clean network ports allocated during initialization.
- *
- * @param ptr a pointer to the link list.
-static void clean_network_ports(ebpf_network_viewer_port_list_t *ptr)
-{
-    if (unlikely(!ptr))
-        return;
-
-    while (ptr) {
-        ebpf_network_viewer_port_list_t *next = ptr->next;
-        freez(ptr->value);
-        freez(ptr);
-        ptr = next;
-    }
-}
- */
-
-/**
- * Clean service names
- *
- * Clean the allocated link list that stores names.
- *
- * @param names the link list.
-static void clean_service_names(ebpf_network_viewer_dim_name_t *names)
-{
-    if (unlikely(!names))
-        return;
-
-    while (names) {
-        ebpf_network_viewer_dim_name_t *next = names->next;
-        freez(names->name);
-        freez(names);
-        names = next;
-    }
-}
- */
-
-/**
- * Clean hostnames
- *
- * @param hostnames the hostnames to clean
-static void clean_hostnames(ebpf_network_viewer_hostname_list_t *hostnames)
-{
-    if (unlikely(!hostnames))
-        return;
-
-    while (hostnames) {
-        ebpf_network_viewer_hostname_list_t *next = hostnames->next;
-        freez(hostnames->value);
-        simple_pattern_free(hostnames->value_pattern);
-        freez(hostnames);
-        hostnames = next;
-    }
-}
- */
-
-/**
- * Clean port Structure
- *
- * Clean the allocated list.
- *
- * @param clean the list that will be cleaned
- */
-void clean_port_structure(ebpf_network_viewer_port_list_t **clean)
-{
-    ebpf_network_viewer_port_list_t *move = *clean;
-    while (move) {
-        ebpf_network_viewer_port_list_t *next = move->next;
-        freez(move->value);
-        freez(move);
-
-        move = next;
-    }
-    *clean = NULL;
-}
-
-/**
- * Clean IP structure
- *
- * Clean the allocated list.
- *
- * @param clean the list that will be cleaned
- */
-static void clean_ip_structure(ebpf_network_viewer_ip_list_t **clean)
-{
-    ebpf_network_viewer_ip_list_t *move = *clean;
-    while (move) {
-        ebpf_network_viewer_ip_list_t *next = move->next;
-        freez(move->value);
-        freez(move);
-
-        move = next;
-    }
-    *clean = NULL;
-}
-
 /**
  * Socket Free
  *
@@ -626,28 +475,6 @@ static void clean_ip_structure(ebpf_network_viewer_ip_list_t **clean)
  */
 static void ebpf_socket_free(ebpf_module_t *em )
 {
-    /* We can have thousands of sockets to clean, so we are transferring
-     * for OS the responsibility while we do not use ARAL here
-    freez(socket_hash_values);
-
-    freez(bandwidth_vector);
-
-    freez(socket_values);
-    clean_allocated_socket_plot();
-    freez(inbound_vectors.plot);
-    freez(outbound_vectors.plot);
-
-    clean_port_structure(&listen_ports);
-
-    clean_network_ports(network_viewer_opt.included_port);
-    clean_network_ports(network_viewer_opt.excluded_port);
-    clean_service_names(network_viewer_opt.names);
-    clean_hostnames(network_viewer_opt.included_hostnames);
-    clean_hostnames(network_viewer_opt.excluded_hostnames);
-     */
-
-    pthread_mutex_destroy(&nv_mutex);
-
     pthread_mutex_lock(&ebpf_exit_cleanup);
     em->enabled = NETDATA_THREAD_EBPF_STOPPED;
     ebpf_update_stats(&plugin_statistics, em);
@@ -655,6 +482,338 @@ static void ebpf_socket_free(ebpf_module_t *em )
     pthread_mutex_unlock(&ebpf_exit_cleanup);
 }
 
+/**
+ *  Obsolete Systemd Socket Charts
+ *
+ *  Mark the systemd socket charts as obsolete.
+ *
+ *  @param update_every value to overwrite the update frequency set by the server.
+ **/
+static void ebpf_obsolete_systemd_socket_charts(int update_every)
+{
+    int order = 20080;
+    ebpf_write_chart_obsolete(NETDATA_SERVICE_FAMILY,
+                              NETDATA_NET_APPS_CONNECTION_TCP_V4,
+                              "Calls to tcp_v4_connection",
+                              EBPF_COMMON_DIMENSION_CONNECTIONS,
+                              NETDATA_APPS_NET_GROUP,
+                              NETDATA_EBPF_CHART_TYPE_STACKED,
+                              NETDATA_SERVICES_SOCKET_TCP_V4_CONN_CONTEXT,
+                              order++,
+                              update_every);
+
+    ebpf_write_chart_obsolete(NETDATA_SERVICE_FAMILY,
+                              NETDATA_NET_APPS_CONNECTION_TCP_V6,
+                              "Calls to tcp_v6_connection",
+                              EBPF_COMMON_DIMENSION_CONNECTIONS,
+                              NETDATA_APPS_NET_GROUP,
+                              NETDATA_EBPF_CHART_TYPE_STACKED,
+                              NETDATA_SERVICES_SOCKET_TCP_V6_CONN_CONTEXT,
+                              order++,
+                              update_every);
+
+    ebpf_write_chart_obsolete(NETDATA_SERVICE_FAMILY,
+                              NETDATA_NET_APPS_BANDWIDTH_RECV,
+                              "Bytes received",
+                              EBPF_COMMON_DIMENSION_BITS,
+                              NETDATA_APPS_NET_GROUP,
+                              NETDATA_EBPF_CHART_TYPE_STACKED,
+                              NETDATA_SERVICES_SOCKET_BYTES_RECV_CONTEXT,
+                              order++,
+                              update_every);
+
+    ebpf_write_chart_obsolete(NETDATA_SERVICE_FAMILY,
+                              NETDATA_NET_APPS_BANDWIDTH_SENT,
+                              "Bytes sent",
+                              EBPF_COMMON_DIMENSION_BITS,
+                              NETDATA_APPS_NET_GROUP,
+                              NETDATA_EBPF_CHART_TYPE_STACKED,
+                              NETDATA_SERVICES_SOCKET_BYTES_SEND_CONTEXT,
+                              order++,
+                              update_every);
+
+    ebpf_write_chart_obsolete(NETDATA_SERVICE_FAMILY,
+                              NETDATA_NET_APPS_BANDWIDTH_TCP_RECV_CALLS,
+                              "Calls to tcp_cleanup_rbuf",
+                              EBPF_COMMON_DIMENSION_CALL,
+                              NETDATA_APPS_NET_GROUP,
+                              NETDATA_EBPF_CHART_TYPE_STACKED,
+                              NETDATA_SERVICES_SOCKET_TCP_RECV_CONTEXT,
+                              order++,
+                              update_every);
+
+    ebpf_write_chart_obsolete(NETDATA_SERVICE_FAMILY,
+                              NETDATA_NET_APPS_BANDWIDTH_TCP_SEND_CALLS,
+                              "Calls to tcp_sendmsg",
+                              EBPF_COMMON_DIMENSION_CALL,
+                              NETDATA_APPS_NET_GROUP,
+                              NETDATA_EBPF_CHART_TYPE_STACKED,
+                              NETDATA_SERVICES_SOCKET_TCP_SEND_CONTEXT,
+                              order++,
+                              update_every);
+
+    ebpf_write_chart_obsolete(NETDATA_SERVICE_FAMILY,
+                              NETDATA_NET_APPS_BANDWIDTH_TCP_RETRANSMIT,
+                              "Calls to tcp_retransmit",
+                              EBPF_COMMON_DIMENSION_CALL,
+                              NETDATA_APPS_NET_GROUP,
+                              NETDATA_EBPF_CHART_TYPE_STACKED,
+                              NETDATA_SERVICES_SOCKET_TCP_RETRANSMIT_CONTEXT,
+                              order++,
+                              update_every);
+
+    ebpf_write_chart_obsolete(NETDATA_SERVICE_FAMILY,
+                              NETDATA_NET_APPS_BANDWIDTH_UDP_SEND_CALLS,
+                              "Calls to udp_sendmsg",
+                              EBPF_COMMON_DIMENSION_CALL,
+                              NETDATA_APPS_NET_GROUP,
+                              NETDATA_EBPF_CHART_TYPE_STACKED,
+                              NETDATA_SERVICES_SOCKET_UDP_SEND_CONTEXT,
+                              order++,
+                              update_every);
+
+    ebpf_write_chart_obsolete(NETDATA_SERVICE_FAMILY,
+                              NETDATA_NET_APPS_BANDWIDTH_UDP_RECV_CALLS,
+                              "Calls to udp_recvmsg",
+                              EBPF_COMMON_DIMENSION_CALL,
+                              NETDATA_APPS_NET_GROUP,
+                              NETDATA_EBPF_CHART_TYPE_STACKED,
+                              NETDATA_SERVICES_SOCKET_UDP_RECV_CONTEXT,
+                              order++,
+                              update_every);
+}
+
+static void ebpf_obsolete_specific_socket_charts(char *type, int update_every);
+
+/**
+ * Obsolete cgroup charts
+ *
+ * Send obsolete messages for all cgroup charts created before closing.
+ *
+ * @param em a pointer to `struct ebpf_module`
+ */
+static inline void ebpf_obsolete_socket_cgroup_charts(ebpf_module_t *em)
+{
+    pthread_mutex_lock(&mutex_cgroup_shm);
+
+    ebpf_obsolete_systemd_socket_charts(em->update_every);
+
+    ebpf_cgroup_target_t *ect;
+    for (ect = ebpf_cgroup_pids; ect ; ect = ect->next) {
+        if (ect->systemd)
+            continue;
+
+        ebpf_obsolete_specific_socket_charts(ect->name, em->update_every);
+    }
+    pthread_mutex_unlock(&mutex_cgroup_shm);
+}
+
+/**
+ * Obsolete apps charts
+ *
+ * Mark the charts created on the apps submenu as obsolete.
+ *
+ * @param em   a pointer to the structure with the default values.
+ */
+void ebpf_socket_obsolete_apps_charts(struct ebpf_module *em)
+{
+    int order = 20080;
+    ebpf_write_chart_obsolete(NETDATA_APPS_FAMILY,
+                              NETDATA_NET_APPS_CONNECTION_TCP_V4,
+                              "Calls to tcp_v4_connection",
+                              EBPF_COMMON_DIMENSION_CONNECTIONS,
+                              NETDATA_APPS_NET_GROUP,
+                              NETDATA_EBPF_CHART_TYPE_STACKED,
+                              NULL,
+                              order++,
+                              em->update_every);
+
+    ebpf_write_chart_obsolete(NETDATA_APPS_FAMILY,
+                              NETDATA_NET_APPS_CONNECTION_TCP_V6,
+                              "Calls to tcp_v6_connection",
+                              EBPF_COMMON_DIMENSION_CONNECTIONS,
+                              NETDATA_APPS_NET_GROUP,
+                              NETDATA_EBPF_CHART_TYPE_STACKED,
+                              NULL,
+                              order++,
+                              em->update_every);
+
+    ebpf_write_chart_obsolete(NETDATA_APPS_FAMILY,
+                              NETDATA_NET_APPS_BANDWIDTH_SENT,
+                              "Bytes sent",
+                              EBPF_COMMON_DIMENSION_BITS,
+                              NETDATA_APPS_NET_GROUP,
+                              NETDATA_EBPF_CHART_TYPE_STACKED,
+                              NULL,
+                              order++,
+                              em->update_every);
+
+    ebpf_write_chart_obsolete(NETDATA_APPS_FAMILY,
+                              NETDATA_NET_APPS_BANDWIDTH_RECV,
+                              "Bytes received",
+                              EBPF_COMMON_DIMENSION_BITS,
+                              NETDATA_APPS_NET_GROUP,
+                              NETDATA_EBPF_CHART_TYPE_STACKED,
+                              NULL,
+                              order++,
+                              em->update_every);
+
+    ebpf_write_chart_obsolete(NETDATA_APPS_FAMILY,
+                              NETDATA_NET_APPS_BANDWIDTH_TCP_SEND_CALLS,
+                              "Calls to tcp_sendmsg",
+                              EBPF_COMMON_DIMENSION_CALL,
+                              NETDATA_APPS_NET_GROUP,
+                              NETDATA_EBPF_CHART_TYPE_STACKED,
+                              NULL,
+                              order++,
+                              em->update_every);
+
+    ebpf_write_chart_obsolete(NETDATA_APPS_FAMILY,
+                              NETDATA_NET_APPS_BANDWIDTH_TCP_RECV_CALLS,
+                              "Calls to tcp_cleanup_rbuf",
+                              EBPF_COMMON_DIMENSION_CALL,
+                              NETDATA_APPS_NET_GROUP,
+                              NETDATA_EBPF_CHART_TYPE_STACKED,
+                              NULL,
+                              order++,
+                              em->update_every);
+
+    ebpf_write_chart_obsolete(NETDATA_APPS_FAMILY,
+                              NETDATA_NET_APPS_BANDWIDTH_TCP_RETRANSMIT,
+                              "Calls to tcp_retransmit",
+                              EBPF_COMMON_DIMENSION_CALL,
+                              NETDATA_APPS_NET_GROUP,
+                              NETDATA_EBPF_CHART_TYPE_STACKED,
+                              NULL,
+                              order++,
+                              em->update_every);
+
+    ebpf_write_chart_obsolete(NETDATA_APPS_FAMILY,
+                              NETDATA_NET_APPS_BANDWIDTH_UDP_SEND_CALLS,
+                              "Calls to udp_sendmsg",
+                              EBPF_COMMON_DIMENSION_CALL,
+                              NETDATA_APPS_NET_GROUP,
+                              NETDATA_EBPF_CHART_TYPE_STACKED,
+                              NULL,
+                              order++,
+                              em->update_every);
+
+    ebpf_write_chart_obsolete(NETDATA_APPS_FAMILY,
+                              NETDATA_NET_APPS_BANDWIDTH_UDP_RECV_CALLS,
+                              "Calls to udp_recvmsg",
+                              EBPF_COMMON_DIMENSION_CALL,
+                              NETDATA_APPS_NET_GROUP,
+                              NETDATA_EBPF_CHART_TYPE_STACKED,
+                              NULL,
+                              order++,
+                              em->update_every);
+}
+
+/**
+ * Obsolete global charts
+ *
+ * Mark all global charts created by the thread as obsolete.
+ *
+ * @param em a pointer to the structure with the default values.
+ */
+static void ebpf_socket_obsolete_global_charts(ebpf_module_t *em)
+{
+    int order = 21070;
+    ebpf_write_chart_obsolete(NETDATA_EBPF_IP_FAMILY,
+                              NETDATA_INBOUND_CONNECTIONS,
+                              "Inbound connections",
+                              EBPF_COMMON_DIMENSION_CONNECTIONS,
+                              NETDATA_SOCKET_KERNEL_FUNCTIONS,
+                              NETDATA_EBPF_CHART_TYPE_LINE,
+                              NULL,
+                              order++,
+                              em->update_every);
+
+    ebpf_write_chart_obsolete(NETDATA_EBPF_IP_FAMILY,
+                              NETDATA_TCP_OUTBOUND_CONNECTIONS,
+                              "TCP outbound connections",
+                              EBPF_COMMON_DIMENSION_CONNECTIONS,
+                              NETDATA_SOCKET_KERNEL_FUNCTIONS,
+                              NETDATA_EBPF_CHART_TYPE_LINE,
+                              NULL,
+                              order++,
+                              em->update_every);
+
+    ebpf_write_chart_obsolete(NETDATA_EBPF_IP_FAMILY,
+                              NETDATA_TCP_FUNCTION_COUNT,
+                              "Calls to internal functions",
+                              EBPF_COMMON_DIMENSION_CALL,
+                              NETDATA_SOCKET_KERNEL_FUNCTIONS,
+                              NETDATA_EBPF_CHART_TYPE_LINE,
+                              NULL,
+                              order++,
+                              em->update_every);
+
+    ebpf_write_chart_obsolete(NETDATA_EBPF_IP_FAMILY,
+                              NETDATA_TCP_FUNCTION_BITS,
+                              "TCP bandwidth",
+                              EBPF_COMMON_DIMENSION_BITS,
+                              NETDATA_SOCKET_KERNEL_FUNCTIONS,
+                              NETDATA_EBPF_CHART_TYPE_LINE,
+                              NULL,
+                              order++,
+                              em->update_every);
+
+    if (em->mode < MODE_ENTRY) {
+        ebpf_write_chart_obsolete(NETDATA_EBPF_IP_FAMILY,
+                                  NETDATA_TCP_FUNCTION_ERROR,
+                                  "TCP errors",
+                                  EBPF_COMMON_DIMENSION_CALL,
+                                  NETDATA_SOCKET_KERNEL_FUNCTIONS,
+                                  NETDATA_EBPF_CHART_TYPE_LINE,
+                                  NULL,
+                                  order++,
+                                  em->update_every);
+    }
+
+    ebpf_write_chart_obsolete(NETDATA_EBPF_IP_FAMILY,
+                              NETDATA_TCP_RETRANSMIT,
+                              "Packets retransmitted",
+                              EBPF_COMMON_DIMENSION_CALL,
+                              NETDATA_SOCKET_KERNEL_FUNCTIONS,
+                              NETDATA_EBPF_CHART_TYPE_LINE,
+                              NULL,
+                              order++,
+                              em->update_every);
+
+    ebpf_write_chart_obsolete(NETDATA_EBPF_IP_FAMILY,
+                              NETDATA_UDP_FUNCTION_COUNT,
+                              "UDP calls",
+                              EBPF_COMMON_DIMENSION_CALL,
+                              NETDATA_SOCKET_KERNEL_FUNCTIONS,
+                              NETDATA_EBPF_CHART_TYPE_LINE,
+                              NULL,
+                              order++,
+                              em->update_every);
+
+    ebpf_write_chart_obsolete(NETDATA_EBPF_IP_FAMILY,
+                              NETDATA_UDP_FUNCTION_BITS,
+                              "UDP bandwidth",
+                              EBPF_COMMON_DIMENSION_BITS,
+                              NETDATA_SOCKET_KERNEL_FUNCTIONS,
+                              NETDATA_EBPF_CHART_TYPE_LINE,
+                              NULL,
+                              order++,
+                              em->update_every);
+
+    if (em->mode < MODE_ENTRY) {
+        ebpf_write_chart_obsolete(NETDATA_EBPF_IP_FAMILY,
+                                  NETDATA_UDP_FUNCTION_ERROR,
+                                  "UDP errors",
+                                  EBPF_COMMON_DIMENSION_CALL,
+                                  NETDATA_SOCKET_KERNEL_FUNCTIONS,
+                                  NETDATA_EBPF_CHART_TYPE_LINE,
+                                  NULL,
+                                  order++,
+                                  em->update_every);
+    }
+
+    fflush(stdout);
+}
+
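Each `ebpf_write_chart_obsolete` call above ultimately re-emits the chart's `CHART` line on stdout with `obsolete` in the options field, which is why the function ends with `fflush(stdout)`. A hedged sketch of what such a line looks like in the external-plugin text protocol (the field quoting here is illustrative, not the exact Netdata helper):

```c
#include <stdio.h>

// Format a CHART line that marks a chart obsolete. Field order follows
// the Netdata external plugin API:
//   CHART type.id name title units family context charttype priority
//   update_every options
// The empty '' fields stand in for name and context.
static int format_chart_obsolete(char *buf, size_t len,
                                 const char *type, const char *id,
                                 const char *title, const char *units,
                                 const char *family, const char *chart_type,
                                 int priority, int update_every)
{
    return snprintf(buf, len,
                    "CHART %s.%s '' '%s' '%s' '%s' '' '%s' %d %d 'obsolete'",
                    type, id, title, units, family, chart_type,
                    priority, update_every);
}
```

A plugin would print this line and flush, exactly as the code above does after the last chart.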
 /**
  * Socket exit
  *
@@ -665,23 +824,33 @@ static void ebpf_socket_free(ebpf_module_t *em )
 static void ebpf_socket_exit(void *ptr)
 {
     ebpf_module_t *em = (ebpf_module_t *)ptr;
-    pthread_mutex_lock(&nv_mutex);
-    if (socket_threads.thread)
-        netdata_thread_cancel(*socket_threads.thread);
-    pthread_mutex_unlock(&nv_mutex);
-    ebpf_socket_free(em);
-}
 
-/**
- * Socket cleanup
- *
- * Clean up allocated addresses.
- *
- * @param ptr thread data.
- */
-void ebpf_socket_cleanup(void *ptr)
-{
-    UNUSED(ptr);
+    if (ebpf_read_socket.thread)
+        netdata_thread_cancel(*ebpf_read_socket.thread);
+
+    if (em->enabled == NETDATA_THREAD_EBPF_FUNCTION_RUNNING) {
+        pthread_mutex_lock(&lock);
+
+        if (em->cgroup_charts) {
+            ebpf_obsolete_socket_cgroup_charts(em);
+            fflush(stdout);
+        }
+
+        if (em->apps_charts & NETDATA_EBPF_APPS_FLAG_CHART_CREATED) {
+            ebpf_socket_obsolete_apps_charts(em);
+            fflush(stdout);
+        }
+
+        ebpf_socket_obsolete_global_charts(em);
+
+#ifdef NETDATA_DEV_MODE
+        if (ebpf_aral_socket_pid)
+            ebpf_statistic_obsolete_aral_chart(em, socket_disable_priority);
+#endif
+        pthread_mutex_unlock(&lock);
+    }
+
+    ebpf_socket_free(em);
 }
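The reworked exit path cancels the reader thread before anything is freed, so the reader can never touch memory that teardown has already released. A hedged sketch of that cancel-then-join ordering with plain POSIX threads (the `reader_main` worker is hypothetical; Netdata wraps this in its `netdata_thread_*` helpers):

```c
#include <pthread.h>
#include <unistd.h>

// Worker standing in for the socket reader thread: loops until cancelled.
static void *reader_main(void *arg)
{
    (void)arg;
    for (;;)
        sleep(1); // sleep() is a cancellation point
    return NULL;
}

// Teardown mirrors ebpf_socket_exit: stop the reader, then wait for it
// to actually terminate before any shared state is freed.
static int shutdown_reader(pthread_t *reader)
{
    if (pthread_cancel(*reader) != 0)
        return -1;
    if (pthread_join(*reader, NULL) != 0)
        return -1;
    return 0;
}
```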
 
 /*****************************************************************
@@ -736,174 +905,6 @@ static void ebpf_update_global_publish(
     udp->read = (long)publish[4].nbyte;
 }
 
-/**
- * Update Network Viewer plot data
- *
- * @param plot  the structure where the data will be stored
- * @param sock  the last update from the socket
- */
-static inline void update_nv_plot_data(netdata_plot_values_t *plot, netdata_socket_t *sock)
-{
-    if (sock->ct != plot->last_time) {
-        plot->last_time         = sock->ct;
-        plot->plot_recv_packets = sock->recv_packets;
-        plot->plot_sent_packets = sock->sent_packets;
-        plot->plot_recv_bytes   = sock->recv_bytes;
-        plot->plot_sent_bytes   = sock->sent_bytes;
-        plot->plot_retransmit   = sock->retransmit;
-    }
-
-    sock->recv_packets = 0;
-    sock->sent_packets = 0;
-    sock->recv_bytes   = 0;
-    sock->sent_bytes   = 0;
-    sock->retransmit   = 0;
-}
-
-/**
- * Calculate Network Viewer Plot
- *
- * Do math with collected values before to plot data.
- */
-static inline void calculate_nv_plot()
-{
-    pthread_mutex_lock(&nv_mutex);
-    uint32_t i;
-    uint32_t end = inbound_vectors.next;
-    for (i = 0; i < end; i++) {
-        update_nv_plot_data(&inbound_vectors.plot[i].plot, &inbound_vectors.plot[i].sock);
-    }
-    inbound_vectors.max_plot = end;
-
-    // The 'Other' dimension is always calculated for the chart to have at least one dimension
-    update_nv_plot_data(&inbound_vectors.plot[inbound_vectors.last].plot,
-                        &inbound_vectors.plot[inbound_vectors.last].sock);
-
-    end = outbound_vectors.next;
-    for (i = 0; i < end; i++) {
-        update_nv_plot_data(&outbound_vectors.plot[i].plot, &outbound_vectors.plot[i].sock);
-    }
-    outbound_vectors.max_plot = end;
-
-    /*
-    // The 'Other' dimension is always calculated for the chart to have at least one dimension
-    update_nv_plot_data(&outbound_vectors.plot[outbound_vectors.last].plot,
-                        &outbound_vectors.plot[outbound_vectors.last].sock);
-                        */
-    pthread_mutex_unlock(&nv_mutex);
-}
-
-/**
- * Network viewer send bytes
- *
- * @param ptr   the structure with values to plot
- * @param chart the chart name.
- */
-static inline void ebpf_socket_nv_send_bytes(netdata_vector_plot_t *ptr, char *chart)
-{
-    uint32_t i;
-    uint32_t end = ptr->last_plot;
-    netdata_socket_plot_t *w = ptr->plot;
-    collected_number value;
-
-    write_begin_chart(NETDATA_EBPF_FAMILY, chart);
-    for (i = 0; i < end; i++) {
-        value = ((collected_number) w[i].plot.plot_sent_bytes);
-        write_chart_dimension(w[i].dimension_sent, value);
-        value = (collected_number) w[i].plot.plot_recv_bytes;
-        write_chart_dimension(w[i].dimension_recv, value);
-    }
-
-    i = ptr->last;
-    value = ((collected_number) w[i].plot.plot_sent_bytes);
-    write_chart_dimension(w[i].dimension_sent, value);
-    value = (collected_number) w[i].plot.plot_recv_bytes;
-    write_chart_dimension(w[i].dimension_recv, value);
-    write_end_chart();
-}
-
-/**
- * Network Viewer Send packets
- *
- * @param ptr   the structure with values to plot
- * @param chart the chart name.
- */
-static inline void ebpf_socket_nv_send_packets(netdata_vector_plot_t *ptr, char *chart)
-{
-    uint32_t i;
-    uint32_t end = ptr->last_plot;
-    netdata_socket_plot_t *w = ptr->plot;
-    collected_number value;
-
-    write_begin_chart(NETDATA_EBPF_FAMILY, chart);
-    for (i = 0; i < end; i++) {
-        value = ((collected_number)w[i].plot.plot_sent_packets);
-        write_chart_dimension(w[i].dimension_sent, value);
-        value = (collected_number) w[i].plot.plot_recv_packets;
-        write_chart_dimension(w[i].dimension_recv, value);
-    }
-
-    i = ptr->last;
-    value = ((collected_number)w[i].plot.plot_sent_packets);
-    write_chart_dimension(w[i].dimension_sent, value);
-    value = (collected_number)w[i].plot.plot_recv_packets;
-    write_chart_dimension(w[i].dimension_recv, value);
-    write_end_chart();
-}
-
-/**
- * Network Viewer Send Retransmit
- *
- * @param ptr   the structure with values to plot
- * @param chart the chart name.
- */
-static inline void ebpf_socket_nv_send_retransmit(netdata_vector_plot_t *ptr, char *chart)
-{
-    uint32_t i;
-    uint32_t end = ptr->last_plot;
-    netdata_socket_plot_t *w = ptr->plot;
-    collected_number value;
-
-    write_begin_chart(NETDATA_EBPF_FAMILY, chart);
-    for (i = 0; i < end; i++) {
-        value = (collected_number) w[i].plot.plot_retransmit;
-        write_chart_dimension(w[i].dimension_retransmit, value);
-    }
-
-    i = ptr->last;
-    value = (collected_number)w[i].plot.plot_retransmit;
-    write_chart_dimension(w[i].dimension_retransmit, value);
-    write_end_chart();
-}
-
-/**
- * Send network viewer data
- *
- * @param ptr the pointer to plot data
- */
-static void ebpf_socket_send_nv_data(netdata_vector_plot_t *ptr)
-{
-    if (!ptr->flags)
-        return;
-
-    if (ptr == (netdata_vector_plot_t *)&outbound_vectors) {
-        ebpf_socket_nv_send_bytes(ptr, NETDATA_NV_OUTBOUND_BYTES);
-        fflush(stdout);
-
-        ebpf_socket_nv_send_packets(ptr, NETDATA_NV_OUTBOUND_PACKETS);
-        fflush(stdout);
-
-        ebpf_socket_nv_send_retransmit(ptr,  NETDATA_NV_OUTBOUND_RETRANSMIT);
-        fflush(stdout);
-    } else {
-        ebpf_socket_nv_send_bytes(ptr, NETDATA_NV_INBOUND_BYTES);
-        fflush(stdout);
-
-        ebpf_socket_nv_send_packets(ptr, NETDATA_NV_INBOUND_PACKETS);
-        fflush(stdout);
-    }
-}
-
 /**
  * Send Global Inbound connection
  *
@@ -1112,7 +1113,7 @@ void ebpf_socket_send_apps_data(ebpf_module_t *em, struct ebpf_target *root)
  *
  * @param em a pointer to the structure with the default values.
  */
-static void ebpf_create_global_charts(ebpf_module_t *em)
+static void ebpf_socket_create_global_charts(ebpf_module_t *em)
 {
     int order = 21070;
     ebpf_create_chart(NETDATA_EBPF_IP_FAMILY,
@@ -1319,138 +1320,6 @@ void ebpf_socket_create_apps_charts(struct ebpf_module *em, void *ptr)
     em->apps_charts |= NETDATA_EBPF_APPS_FLAG_CHART_CREATED;
 }
 
-/**
- *  Create network viewer chart
- *
- *  Create common charts.
- *
- * @param id            chart id
- * @param title         chart title
- * @param units         units label
- * @param family        group name used to attach the chart on dashboard
- * @param order         chart order
- * @param update_every value to overwrite the update frequency set by the server.
- * @param ptr          plot structure with values.
- */
-static void ebpf_socket_create_nv_chart(char *id, char *title, char *units,
-                                        char *family, int order, int update_every, netdata_vector_plot_t *ptr)
-{
-    ebpf_write_chart_cmd(NETDATA_EBPF_FAMILY,
-                         id,
-                         title,
-                         units,
-                         family,
-                         NETDATA_EBPF_CHART_TYPE_STACKED,
-                         NULL,
-                         order,
-                         update_every,
-                         NETDATA_EBPF_MODULE_NAME_SOCKET);
-
-    uint32_t i;
-    uint32_t end = ptr->last_plot;
-    netdata_socket_plot_t *w = ptr->plot;
-    for (i = 0; i < end; i++) {
-        fprintf(stdout, "DIMENSION %s '' incremental -1 1\n", w[i].dimension_sent);
-        fprintf(stdout, "DIMENSION %s '' incremental 1 1\n", w[i].dimension_recv);
-    }
-
-    end = ptr->last;
-    fprintf(stdout, "DIMENSION %s '' incremental -1 1\n", w[end].dimension_sent);
-    fprintf(stdout, "DIMENSION %s '' incremental 1 1\n", w[end].dimension_recv);
-}
-
-/**
- *  Create network viewer retransmit
- *
- *  Create a specific chart.
- *
- * @param id        the chart id
- * @param title     the chart title
- * @param units     the units label
- * @param family    the group name used to attach the chart on dashboard
- * @param order     the chart order
- * @param update_every value to overwrite the update frequency set by the server.
- * @param ptr       the plot structure with values.
- */
-static void ebpf_socket_create_nv_retransmit(char *id, char *title, char *units,
-                                             char *family, int order, int update_every, netdata_vector_plot_t *ptr)
-{
-    ebpf_write_chart_cmd(NETDATA_EBPF_FAMILY,
-                         id,
-                         title,
-                         units,
-                         family,
-                         NETDATA_EBPF_CHART_TYPE_STACKED,
-                         NULL,
-                         order,
-                         update_every,
-                         NETDATA_EBPF_MODULE_NAME_SOCKET);
-
-    uint32_t i;
-    uint32_t end = ptr->last_plot;
-    netdata_socket_plot_t *w = ptr->plot;
-    for (i = 0; i < end; i++) {
-        fprintf(stdout, "DIMENSION %s '' incremental 1 1\n", w[i].dimension_retransmit);
-    }
-
-    end = ptr->last;
-    fprintf(stdout, "DIMENSION %s '' incremental 1 1\n", w[end].dimension_retransmit);
-}
-
-/**
- * Create Network Viewer charts
- *
- * Recreate the charts when new sockets are created.
- *
- * @param ptr a pointer for inbound or outbound vectors.
- * @param update_every value to overwrite the update frequency set by the server.
- */
-static void ebpf_socket_create_nv_charts(netdata_vector_plot_t *ptr, int update_every)
-{
-    // We do not have new sockets, so we do not need move forward
-    if (ptr->max_plot == ptr->last_plot)
-        return;
-
-    ptr->last_plot = ptr->max_plot;
-
-    if (ptr == (netdata_vector_plot_t *)&outbound_vectors) {
-        ebpf_socket_create_nv_chart(NETDATA_NV_OUTBOUND_BYTES,
-                                    "Outbound connections (bytes).", EBPF_COMMON_DIMENSION_BYTES,
-                                    NETDATA_NETWORK_CONNECTIONS_GROUP,
-                                    21080,
-                                    update_every, ptr);
-
-        ebpf_socket_create_nv_chart(NETDATA_NV_OUTBOUND_PACKETS,
-                                    "Outbound connections (packets)",
-                                    EBPF_COMMON_DIMENSION_PACKETS,
-                                    NETDATA_NETWORK_CONNECTIONS_GROUP,
-                                    21082,
-                                    update_every, ptr);
-
-        ebpf_socket_create_nv_retransmit(NETDATA_NV_OUTBOUND_RETRANSMIT,
-                                         "Retransmitted packets",
-                                         EBPF_COMMON_DIMENSION_CALL,
-                                         NETDATA_NETWORK_CONNECTIONS_GROUP,
-                                         21083,
-                                         update_every, ptr);
-    } else {
-        ebpf_socket_create_nv_chart(NETDATA_NV_INBOUND_BYTES,
-                                    "Inbound connections (bytes)", EBPF_COMMON_DIMENSION_BYTES,
-                                    NETDATA_NETWORK_CONNECTIONS_GROUP,
-                                    21084,
-                                    update_every, ptr);
-
-        ebpf_socket_create_nv_chart(NETDATA_NV_INBOUND_PACKETS,
-                                    "Inbound connections (packets)",
-                                    EBPF_COMMON_DIMENSION_PACKETS,
-                                    NETDATA_NETWORK_CONNECTIONS_GROUP,
-                                    21085,
-                                    update_every, ptr);
-    }
-
-    ptr->flags |= NETWORK_VIEWER_CHARTS_CREATED;
-}
-
 /*****************************************************************
  *
  *  READ INFORMATION FROM KERNEL RING
@@ -1517,7 +1386,7 @@ static int ebpf_is_specific_ip_inside_range(union netdata_ip_t *cmp, int family)
  *
  * @return It returns 1 when cmp is inside and 0 otherwise.
  */
-static int is_port_inside_range(uint16_t cmp)
+static int ebpf_is_port_inside_range(uint16_t cmp)
 {
     // We do not have restrictions for ports.
     if (!network_viewer_opt.excluded_port && !network_viewer_opt.included_port)
@@ -1525,7 +1394,6 @@ static int is_port_inside_range(uint16_t cmp)
 
     // Test if port is excluded
     ebpf_network_viewer_port_list_t *move = network_viewer_opt.excluded_port;
-    cmp = htons(cmp);
     while (move) {
         if (move->cmp_first <= cmp && cmp <= move->cmp_last)
             return 0;
@@ -1583,469 +1451,193 @@ int hostname_matches_pattern(char *cmp)
  * Compare destination addresses and destination ports to define next steps
  *
  * @param key     the socket read from kernel ring
- * @param family  the family used to compare IPs (AF_INET and AF_INET6)
+ * @param data    the socket data, also used to filter out some sockets.
  *
  * @return It returns 1 if this socket is inside the ranges and 0 otherwise.
  */
-int is_socket_allowed(netdata_socket_idx_t *key, int family)
+int ebpf_is_socket_allowed(netdata_socket_idx_t *key, netdata_socket_t *data)
 {
-    if (!is_port_inside_range(key->dport))
-        return 0;
+    int ret = 0;
+    // If the configured family is not AF_UNSPEC and differs from the socket family, refuse the socket
+    if (network_viewer_opt.family && network_viewer_opt.family != data->family)
+        goto endsocketallowed;
 
-    return ebpf_is_specific_ip_inside_range(&key->daddr, family);
-}
+    if (!ebpf_is_port_inside_range(key->dport))
+        goto endsocketallowed;
 
-/**
- * Compare sockets
- *
- * Compare destination address and destination port.
- * We do not compare source port, because it is random.
- * We also do not compare source address, because inbound and outbound connections are stored in separated AVL trees.
- *
- * @param a pointer to netdata_socket_plot
- * @param b pointer  to netdata_socket_plot
- *
- * @return It returns 0 case the values are equal, 1 case a is bigger than b and -1 case a is smaller than b.
- */
-static int ebpf_compare_sockets(void *a, void *b)
-{
-    struct netdata_socket_plot *val1 = a;
-    struct netdata_socket_plot *val2 = b;
-    int cmp = 0;
-
-    // We do not need to compare val2 family, because data inside hash table is always from the same family
-    if (val1->family == AF_INET) { //IPV4
-        if (network_viewer_opt.included_port || network_viewer_opt.excluded_port)
-            cmp = memcmp(&val1->index.dport, &val2->index.dport, sizeof(uint16_t));
-
-        if (!cmp) {
-            cmp = memcmp(&val1->index.daddr.addr32[0], &val2->index.daddr.addr32[0], sizeof(uint32_t));
-        }
-    } else {
-        if (network_viewer_opt.included_port || network_viewer_opt.excluded_port)
-            cmp = memcmp(&val1->index.dport, &val2->index.dport, sizeof(uint16_t));
-
-        if (!cmp) {
-            cmp = memcmp(&val1->index.daddr.addr32, &val2->index.daddr.addr32, 4*sizeof(uint32_t));
-        }
-    }
-
-    return cmp;
-}
-
-/**
- * Build dimension name
- *
- * Fill dimension name vector with values given
- *
- * @param dimname       the output vector
- * @param hostname      the hostname for the socket.
- * @param service_name  the service used to connect.
- * @param proto         the protocol used in this connection
- * @param family        is this IPV4(AF_INET) or IPV6(AF_INET6)
- *
- * @return  it returns the size of the data copied on success and -1 otherwise.
- */
-static inline int ebpf_build_outbound_dimension_name(char *dimname, char *hostname, char *service_name,
-                                                     char *proto, int family)
-{
-    if (network_viewer_opt.included_port || network_viewer_opt.excluded_port)
-        return snprintf(dimname, CONFIG_MAX_NAME - 7, (family == AF_INET)?"%s:%s:%s_":"%s:%s:[%s]_",
-                        service_name, proto, hostname);
-
-    return snprintf(dimname, CONFIG_MAX_NAME - 7, (family == AF_INET)?"%s:%s_":"%s:[%s]_",
-                    proto, hostname);
-}
-
-/**
- * Fill inbound dimension name
- *
- * Mount the dimension name with the input given
- *
- * @param dimname       the output vector
- * @param service_name  the service used to connect.
- * @param proto         the protocol used in this connection
- *
- * @return  it returns the size of the data copied on success and -1 otherwise.
- */
-static inline int build_inbound_dimension_name(char *dimname, char *service_name, char *proto)
-{
-    return snprintf(dimname, CONFIG_MAX_NAME - 7, "%s:%s_", service_name,
-                    proto);
-}
-
-/**
- * Fill Resolved Name
- *
- * Fill the resolved name structure with the value given.
- * The hostname is the largest value possible, if it is necessary to cut some value, it must be cut.
- *
- * @param ptr          the output vector
- * @param hostname     the hostname resolved or IP.
- * @param length       the length for the hostname.
- * @param service_name the service name associated to the connection
- * @param is_outbound    the is this an outbound connection
- */
-static inline void fill_resolved_name(netdata_socket_plot_t *ptr, char *hostname, size_t length,
-                                      char *service_name, int is_outbound)
-{
-    if (length < NETDATA_MAX_NETWORK_COMBINED_LENGTH)
-        ptr->resolved_name = strdupz(hostname);
-    else {
-        length = NETDATA_MAX_NETWORK_COMBINED_LENGTH;
-        ptr->resolved_name = mallocz( NETDATA_MAX_NETWORK_COMBINED_LENGTH + 1);
-        memcpy(ptr->resolved_name, hostname, length);
-        ptr->resolved_name[length] = '\0';
-    }
-
-    char dimname[CONFIG_MAX_NAME];
-    int size;
-    char *protocol;
-    if (ptr->sock.protocol == IPPROTO_UDP) {
-        protocol = "UDP";
-    } else if (ptr->sock.protocol == IPPROTO_TCP) {
-        protocol = "TCP";
-    } else {
-        protocol = "ALL";
-    }
-
-    if (is_outbound)
-        size = ebpf_build_outbound_dimension_name(dimname, hostname, service_name, protocol, ptr->family);
-    else
-        size = build_inbound_dimension_name(dimname,service_name, protocol);
-
-    if (size > 0) {
-        strcpy(&dimname[size], "sent");
-        dimname[size + 4] = '\0';
-        ptr->dimension_sent = strdupz(dimname);
-
-        strcpy(&dimname[size], "recv");
-        ptr->dimension_recv = strdupz(dimname);
-
-        dimname[size - 1] = '\0';
-        ptr->dimension_retransmit = strdupz(dimname);
-    }
-}
-
-/**
- * Mount dimension names
- *
- * Fill the vector names after to resolve the addresses
- *
- * @param ptr a pointer to the structure where the values are stored.
- * @param is_outbound is a outbound ptr value?
- *
- * @return It returns 1 if the name is valid and 0 otherwise.
- */
-int fill_names(netdata_socket_plot_t *ptr, int is_outbound)
-{
-    char hostname[NI_MAXHOST], service_name[NI_MAXSERV];
-    if (ptr->resolved)
-        return 1;
-
-    int ret;
-    static int resolve_name = -1;
-    static int resolve_service = -1;
-    if (resolve_name == -1)
-        resolve_name = network_viewer_opt.hostname_resolution_enabled;
-
-    if (resolve_service == -1)
-        resolve_service = network_viewer_opt.service_resolution_enabled;
-
-    netdata_socket_idx_t *idx = &ptr->index;
-
-    char *errname = { "Not resolved" };
-    // Resolve Name
-    if (ptr->family == AF_INET) { //IPV4
-        struct sockaddr_in myaddr;
-        memset(&myaddr, 0 , sizeof(myaddr));
-
-        myaddr.sin_family = ptr->family;
-        if (is_outbound) {
-            myaddr.sin_port = idx->dport;
-            myaddr.sin_addr.s_addr = idx->daddr.addr32[0];
-        } else {
-            myaddr.sin_port = idx->sport;
-            myaddr.sin_addr.s_addr = idx->saddr.addr32[0];
-        }
-
-        ret = (!resolve_name)?-1:getnameinfo((struct sockaddr *)&myaddr, sizeof(myaddr), hostname,
-                                              sizeof(hostname), service_name, sizeof(service_name), NI_NAMEREQD);
-
-        if (!ret && !resolve_service) {
-            snprintf(service_name, sizeof(service_name), "%u", ntohs(myaddr.sin_port));
-        }
-
-        if (ret) {
-            // I cannot resolve the name, I will use the IP
-            if (!inet_ntop(AF_INET, &myaddr.sin_addr.s_addr, hostname, NI_MAXHOST)) {
-                strncpy(hostname, errname, 13);
-            }
-
-            snprintf(service_name, sizeof(service_name), "%u", ntohs(myaddr.sin_port));
-            ret = 1;
-        }
-    } else { // IPV6
-        struct sockaddr_in6 myaddr6;
-        memset(&myaddr6, 0 , sizeof(myaddr6));
-
-        myaddr6.sin6_family = AF_INET6;
-        if (is_outbound) {
-            myaddr6.sin6_port =  idx->dport;
-            memcpy(myaddr6.sin6_addr.s6_addr, idx->daddr.addr8, sizeof(union netdata_ip_t));
-        } else {
-            myaddr6.sin6_port =  idx->sport;
-            memcpy(myaddr6.sin6_addr.s6_addr, idx->saddr.addr8, sizeof(union netdata_ip_t));
-        }
-
-        ret = (!resolve_name)?-1:getnameinfo((struct sockaddr *)&myaddr6, sizeof(myaddr6), hostname,
-                                              sizeof(hostname), service_name, sizeof(service_name), NI_NAMEREQD);
-
-        if (!ret && !resolve_service) {
-            snprintf(service_name, sizeof(service_name), "%u", ntohs(myaddr6.sin6_port));
-        }
-
-        if (ret) {
-            // I cannot resolve the name, I will use the IP
-            if (!inet_ntop(AF_INET6, myaddr6.sin6_addr.s6_addr, hostname, NI_MAXHOST)) {
-                strncpy(hostname, errname, 13);
-            }
-
-            snprintf(service_name, sizeof(service_name), "%u", ntohs(myaddr6.sin6_port));
-
-            ret = 1;
-        }
-    }
-
-    fill_resolved_name(ptr, hostname,
-                       strlen(hostname) + strlen(service_name)+ NETDATA_DOTS_PROTOCOL_COMBINED_LENGTH,
-                       service_name, is_outbound);
-
-    if (resolve_name && !ret)
-        ret = hostname_matches_pattern(hostname);
-
-    ptr->resolved++;
+    ret = ebpf_is_specific_ip_inside_range(&key->daddr, data->family);
 
+endsocketallowed:
     return ret;
 }
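The filtering above composes a family check, a port-range check, and an IP-range check. The port-range semantics can be sketched as follows; `port_range` and `port_allowed` are hypothetical stand-ins for `ebpf_network_viewer_port_list_t` and `ebpf_is_port_inside_range`, and the include-list handling is outside this hunk, so treat its exact behavior here as an assumption:

```c
#include <stdbool.h>
#include <stddef.h>

// Hypothetical stand-in for Netdata's ebpf_network_viewer_port_list_t chain.
typedef struct port_range {
    unsigned short first;
    unsigned short last;
    struct port_range *next;
} port_range;

// Assumed semantics: with no lists configured everything is allowed;
// a match on the excluded list denies; when an include list exists,
// only ports inside one of its ranges are allowed.
static bool port_allowed(unsigned short port, const port_range *excluded,
                         const port_range *included)
{
    if (!excluded && !included)
        return true;

    for (const port_range *r = excluded; r; r = r->next)
        if (r->first <= port && port <= r->last)
            return false;

    if (!included)
        return true;

    for (const port_range *r = included; r; r = r->next)
        if (r->first <= port && port <= r->last)
            return true;

    return false;
}
```

This mirrors the `ports = 1-1024 !145 !domain` configuration shown in the README section of this patch: negated entries populate the excluded chain, the rest the included chain.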
 
-/**
- * Fill last Network Viewer Dimension
- *
- * Fill the unique dimension that is always plotted.
- *
- * @param ptr           the pointer for the last dimension
- * @param is_outbound    is this an inbound structure?
- */
-static void fill_last_nv_dimension(netdata_socket_plot_t *ptr, int is_outbound)
-{
-    char hostname[NI_MAXHOST], service_name[NI_MAXSERV];
-    char *other = { "other" };
-    // We are also copying the NULL bytes to avoid warnings in new compilers
-    strncpy(hostname, other, 6);
-    strncpy(service_name, other, 6);
-
-    ptr->family = AF_INET;
-    ptr->sock.protocol = 255;
-    ptr->flags = (!is_outbound)?NETDATA_INBOUND_DIRECTION:NETDATA_OUTBOUND_DIRECTION;
-
-    fill_resolved_name(ptr, hostname,  10 + NETDATA_DOTS_PROTOCOL_COMBINED_LENGTH, service_name, is_outbound);
-
-#ifdef NETDATA_INTERNAL_CHECKS
-    netdata_log_info("Last %s dimension added: ID = %u, IP = OTHER, NAME = %s, DIM1 = %s, DIM2 = %s, DIM3 = %s",
-         (is_outbound)?"outbound":"inbound", network_viewer_opt.max_dim - 1, ptr->resolved_name,
-         ptr->dimension_recv, ptr->dimension_sent, ptr->dimension_retransmit);
-#endif
-}
-
-/**
- * Update Socket Data
- *
- * Update the socket information with last collected data
- *
- * @param sock
- * @param lvalues
- */
-static inline void update_socket_data(netdata_socket_t *sock, netdata_socket_t *lvalues)
-{
-    sock->recv_packets = lvalues->recv_packets;
-    sock->sent_packets = lvalues->sent_packets;
-    sock->recv_bytes   = lvalues->recv_bytes;
-    sock->sent_bytes   = lvalues->sent_bytes;
-    sock->retransmit   = lvalues->retransmit;
-    sock->ct = lvalues->ct;
-}
-
-/**
- * Store socket inside avl
- *
- * Store the socket values inside the avl tree.
- *
- * @param out     the structure with information used to plot charts.
- * @param lvalues Values read from socket ring.
- * @param lindex  the index information, the real socket.
- * @param family  the family associated to the socket
- * @param flags   the connection flags
- */
-static void store_socket_inside_avl(netdata_vector_plot_t *out, netdata_socket_t *lvalues,
-                                    netdata_socket_idx_t *lindex, int family, uint32_t flags)
-{
-    netdata_socket_plot_t test, *ret ;
-
-    memcpy(&test.index, lindex, sizeof(netdata_socket_idx_t));
-    test.flags = flags;
-
-    ret = (netdata_socket_plot_t *) avl_search_lock(&out->tree, (avl_t *)&test);
-    if (ret) {
-        if (lvalues->ct != ret->plot.last_time) {
-            update_socket_data(&ret->sock, lvalues);
-        }
-    } else {
-        uint32_t curr = out->next;
-        uint32_t last = out->last;
-
-        netdata_socket_plot_t *w = &out->plot[curr];
-
-        int resolved;
-        if (curr == last) {
-            if (lvalues->ct != w->plot.last_time) {
-                update_socket_data(&w->sock, lvalues);
-            }
-            return;
-        } else {
-            memcpy(&w->sock, lvalues, sizeof(netdata_socket_t));
-            memcpy(&w->index, lindex, sizeof(netdata_socket_idx_t));
-            w->family = family;
-
-            resolved = fill_names(w, out != (netdata_vector_plot_t *)&inbound_vectors);
-        }
-
-        if (!resolved) {
-            freez(w->resolved_name);
-            freez(w->dimension_sent);
-            freez(w->dimension_recv);
-            freez(w->dimension_retransmit);
-
-            memset(w, 0, sizeof(netdata_socket_plot_t));
-
-            return;
-        }
-
-        w->flags = flags;
-        netdata_socket_plot_t *check ;
-        check = (netdata_socket_plot_t *) avl_insert_lock(&out->tree, (avl_t *)w);
-        if (check != w)
-            netdata_log_error("Internal error, cannot insert the AVL tree.");
-
-#ifdef NETDATA_INTERNAL_CHECKS
-        char iptext[INET6_ADDRSTRLEN];
-        if (inet_ntop(family, &w->index.daddr.addr8, iptext, sizeof(iptext)))
-            netdata_log_info("New %s dimension added: ID = %u, IP = %s, NAME = %s, DIM1 = %s, DIM2 = %s, DIM3 = %s",
-                 (out == &inbound_vectors)?"inbound":"outbound", curr, iptext, w->resolved_name,
-                 w->dimension_recv, w->dimension_sent, w->dimension_retransmit);
-#endif
-        curr++;
-        if (curr > last)
-            curr = last;
-        out->next = curr;
-    }
-}
-
-/**
- * Compare Vector to store
- *
- * Compare input values with local address to select table to store.
- *
- * @param direction  store inbound and outbound direction.
- * @param cmp        index read from hash table.
- * @param proto      the protocol read.
- *
- * @return It returns the structure with address to compare.
- */
-netdata_vector_plot_t * select_vector_to_store(uint32_t *direction, netdata_socket_idx_t *cmp, uint8_t proto)
-{
-    if (!listen_ports) {
-        *direction = NETDATA_OUTBOUND_DIRECTION;
-        return &outbound_vectors;
-    }
-
-    ebpf_network_viewer_port_list_t *move_ports = listen_ports;
-    while (move_ports) {
-        if (move_ports->protocol == proto && move_ports->first == cmp->sport) {
-            *direction = NETDATA_INBOUND_DIRECTION;
-            return &inbound_vectors;
-        }
-
-        move_ports = move_ports->next;
-    }
-
-    *direction = NETDATA_OUTBOUND_DIRECTION;
-    return &outbound_vectors;
-}
-
 /**
  * Hash accumulator
  *
  * @param values        the values used to calculate the data.
- * @param key           the key to store  data.
- * @param family        the connection family
  * @param end           the values size.
  */
-static void hash_accumulator(netdata_socket_t *values, netdata_socket_idx_t *key, int family, int end)
+static void ebpf_hash_socket_accumulator(netdata_socket_t *values, int end)
 {
-    if (!network_viewer_opt.enabled || !is_socket_allowed(key, family))
-        return;
-
-    uint64_t bsent = 0, brecv = 0, psent = 0, precv = 0;
-    uint16_t retransmit = 0;
     int i;
     uint8_t protocol = values[0].protocol;
-    uint64_t ct = values[0].ct;
+    uint64_t ct = values[0].current_timestamp;
+    uint64_t ft = values[0].first_timestamp;
+    uint16_t family = AF_UNSPEC;
+    uint32_t external_origin = values[0].external_origin;
     for (i = 1; i < end; i++) {
         netdata_socket_t *w = &values[i];
 
-        precv += w->recv_packets;
-        psent += w->sent_packets;
-        brecv += w->recv_bytes;
-        bsent += w->sent_bytes;
-        retransmit += w->retransmit;
+        values[0].tcp.call_tcp_sent         += w->tcp.call_tcp_sent;
+        values[0].tcp.call_tcp_received     += w->tcp.call_tcp_received;
+        values[0].tcp.tcp_bytes_received    += w->tcp.tcp_bytes_received;
+        values[0].tcp.tcp_bytes_sent        += w->tcp.tcp_bytes_sent;
+        values[0].tcp.close                 += w->tcp.close;
+        values[0].tcp.retransmit            += w->tcp.retransmit;
+        values[0].tcp.ipv4_connect          += w->tcp.ipv4_connect;
+        values[0].tcp.ipv6_connect          += w->tcp.ipv6_connect;
 
         if (!protocol)
             protocol = w->protocol;
 
-        if (w->ct != ct)
-            ct = w->ct;
+        if (family == AF_UNSPEC)
+            family = w->family;
+
+        if (w->current_timestamp > ct)
+            ct = w->current_timestamp;
+
+        if (!ft)
+            ft = w->first_timestamp;
+
+        if (w->external_origin)
+            external_origin = NETDATA_EBPF_SRC_IP_ORIGIN_EXTERNAL;
     }
 
-    values[0].recv_packets += precv;
-    values[0].sent_packets += psent;
-    values[0].recv_bytes   += brecv;
-    values[0].sent_bytes   += bsent;
-    values[0].retransmit   += retransmit;
-    values[0].protocol     = (!protocol)?IPPROTO_TCP:protocol;
-    values[0].ct           = ct;
-
-    uint32_t dir;
-    netdata_vector_plot_t *table = select_vector_to_store(&dir, key, protocol);
-    store_socket_inside_avl(table, &values[0], key, family, dir);
+    values[0].protocol          = (!protocol)?IPPROTO_TCP:protocol;
+    values[0].current_timestamp = ct;
+    values[0].first_timestamp = ft;
+    values[0].external_origin = external_origin;
 }
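Lookups on a per-CPU BPF map return one slot per processor, and the accumulator above reuses slot 0 as the aggregate: counters are summed, while timestamps keep the most recent value seen on any CPU. A minimal sketch of that reduction, with a hypothetical `percpu_sample` struct in place of `netdata_socket_t`:

```c
#include <stdint.h>

// Hypothetical reduced version of netdata_socket_t for illustration.
typedef struct {
    uint64_t bytes_sent;
    uint64_t bytes_received;
    uint64_t current_timestamp;
} percpu_sample;

// Mirrors ebpf_hash_socket_accumulator: fold slots 1..end-1 into slot 0.
static void accumulate(percpu_sample *values, int end)
{
    for (int i = 1; i < end; i++) {
        values[0].bytes_sent     += values[i].bytes_sent;
        values[0].bytes_received += values[i].bytes_received;
        // Keep the most recent activity timestamp seen on any CPU.
        if (values[i].current_timestamp > values[0].current_timestamp)
            values[0].current_timestamp = values[i].current_timestamp;
    }
}
```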
 
 /**
- * Read socket hash table
+ * Translate socket
  *
- * Read data from hash tables created on kernel ring.
+ * Convert socket address to string
  *
- * @param fd                 the hash table with data.
- * @param family             the family associated to the hash table
- * @param maps_per_core      do I need to read all cores?
- *
- * @return it returns 0 on success and -1 otherwise.
+ * @param dst structure where we will store
+ * @param key the socket address
  */
-static void ebpf_read_socket_hash_table(int fd, int family, int maps_per_core)
+static void ebpf_socket_translate(netdata_socket_plus_t *dst, netdata_socket_idx_t *key)
 {
+    uint32_t resolve = network_viewer_opt.service_resolution_enabled;
+    char service[NI_MAXSERV];
+    int ret;
+    if (dst->data.family == AF_INET) {
+        struct sockaddr_in ipv4_addr = { };
+        ipv4_addr.sin_port = 0;
+        ipv4_addr.sin_addr.s_addr = key->saddr.addr32[0];
+        ipv4_addr.sin_family = AF_INET;
+        if (resolve) {
+            // We do not pass NI_NAMEREQD, because requiring a resolved hostname is too slow
+            ret = getnameinfo((struct sockaddr *) &ipv4_addr, sizeof(ipv4_addr), dst->socket_string.src_ip,
+                              INET6_ADDRSTRLEN, service, NI_MAXSERV, NI_NUMERICHOST | NI_NUMERICSERV);
+            if (ret) {
+                collector_error("Cannot resolve name: %s", gai_strerror(ret));
+                resolve = 0;
+            } else {
+                ipv4_addr.sin_addr.s_addr = key->daddr.addr32[0];
+
+                ipv4_addr.sin_port = key->dport;
+                ret = getnameinfo((struct sockaddr *) &ipv4_addr, sizeof(ipv4_addr), dst->socket_string.dst_ip,
+                                  INET6_ADDRSTRLEN, dst->socket_string.dst_port, NI_MAXSERV,
+                                  NI_NUMERICHOST);
+                if (ret) {
+                    collector_error("Cannot resolve name: %s", gai_strerror(ret));
+                    resolve = 0;
+                }
+            }
+        }
+
+        // When resolution fails, fall back to the numeric addresses
+        if (!resolve) {
+            ipv4_addr.sin_addr.s_addr = key->saddr.addr32[0];
+
+            if (!inet_ntop(AF_INET, &ipv4_addr.sin_addr, dst->socket_string.src_ip, INET6_ADDRSTRLEN))
+                netdata_log_info("Cannot convert IP %u.", ipv4_addr.sin_addr.s_addr);
+
+            ipv4_addr.sin_addr.s_addr = key->daddr.addr32[0];
+
+            if (!inet_ntop(AF_INET, &ipv4_addr.sin_addr, dst->socket_string.dst_ip, INET6_ADDRSTRLEN))
+                netdata_log_info("Cannot convert IP %u.", ipv4_addr.sin_addr.s_addr);
+            snprintfz(dst->socket_string.dst_port, NI_MAXSERV, "%u", ntohs(key->dport));
+        }
+    } else {
+        struct sockaddr_in6 ipv6_addr = { };
+        memcpy(&ipv6_addr.sin6_addr, key->saddr.addr8, sizeof(key->saddr.addr8));
+        ipv6_addr.sin6_family = AF_INET6;
+        if (resolve) {
+            ret = getnameinfo((struct sockaddr *) &ipv6_addr, sizeof(ipv6_addr), dst->socket_string.src_ip,
+                              INET6_ADDRSTRLEN, service, NI_MAXSERV, NI_NUMERICHOST | NI_NUMERICSERV);
+            if (ret) {
+                collector_error("Cannot resolve name: %s", gai_strerror(ret));
+                resolve = 0;
+            } else {
+                memcpy(&ipv6_addr.sin6_addr, key->daddr.addr8, sizeof(key->daddr.addr8));
+                ret = getnameinfo((struct sockaddr *) &ipv6_addr, sizeof(ipv6_addr), dst->socket_string.dst_ip,
+                                  INET6_ADDRSTRLEN, dst->socket_string.dst_port, NI_MAXSERV,
+                                  NI_NUMERICHOST);
+                if (ret) {
+                    collector_error("Cannot resolve name: %s", gai_strerror(ret));
+                    resolve = 0;
+                }
+            }
+        }
+
+        if (!resolve) {
+            memcpy(&ipv6_addr.sin6_addr, key->saddr.addr8, sizeof(key->saddr.addr8));
+            if (!inet_ntop(AF_INET6, &ipv6_addr.sin6_addr, dst->socket_string.src_ip, INET6_ADDRSTRLEN))
+                netdata_log_info("Cannot convert IPv6 address.");
+
+            memcpy(&ipv6_addr.sin6_addr, key->daddr.addr8, sizeof(key->daddr.addr8));
+            if (!inet_ntop(AF_INET6, &ipv6_addr.sin6_addr, dst->socket_string.dst_ip, INET6_ADDRSTRLEN))
+                netdata_log_info("Cannot convert IPv6 address.");
+            snprintfz(dst->socket_string.dst_port, NI_MAXSERV, "%u", ntohs(key->dport));
+        }
+    }
+    dst->pid = key->pid;
+
+    if (!strcmp(dst->socket_string.dst_port, "0"))
+        snprintfz(dst->socket_string.dst_port, NI_MAXSERV, "%u", ntohs(key->dport));
+#ifdef NETDATA_DEV_MODE
+    collector_info("New socket: { ORIGIN IP: %s, ORIGIN : %u, DST IP:%s, DST PORT: %s, PID: %u, PROTO: %d, FAMILY: %d}",
+                   dst->socket_string.src_ip,
+                   dst->data.external_origin,
+                   dst->socket_string.dst_ip,
+                   dst->socket_string.dst_port,
+                   dst->pid,
+                   dst->data.protocol,
+                   dst->data.family
+                   );
+#endif
+}
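The fallback path above formats the raw address with `inet_ntop` whenever `getnameinfo` fails or resolution is disabled. The essence of that fallback, with a hypothetical helper name:

```c
#include <arpa/inet.h>
#include <stdint.h>
#include <string.h>

// Sketch of the numeric fallback: format a network-byte-order IPv4
// address with inet_ntop instead of resolving it with getnameinfo().
static int format_ipv4(uint32_t addr_be, char *out, socklen_t len)
{
    struct in_addr in = { .s_addr = addr_be };
    // inet_ntop returns NULL on failure, matching the error branch above.
    return inet_ntop(AF_INET, &in, out, len) != NULL;
}
```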
+
+/**
+ * Update array vectors
+ *
+ * Read data from hash table and update vectors.
+ *
+ * @param em the structure with configuration
+ */
+static void ebpf_update_array_vectors(ebpf_module_t *em)
+{
+    netdata_thread_disable_cancelability();
     netdata_socket_idx_t key = {};
     netdata_socket_idx_t next_key = {};
 
+    int maps_per_core = em->maps_per_core;
+    int fd = em->maps[NETDATA_SOCKET_OPEN_SOCKET].map_fd;
+
     netdata_socket_t *values = socket_values;
     size_t length = sizeof(netdata_socket_t);
     int test, end;
@@ -2055,21 +1647,126 @@ static void ebpf_read_socket_hash_table(int fd, int family, int maps_per_core)
     } else
         end = 1;
 
+    // We need to reset the values when working on kernel 4.15 or newer, because the kernel does not create
+    // values for a specific processor unless that processor is used to store data. As a result, the next
+    // socket could otherwise inherit values from the previous one.
+    memset(values, 0, length);
+    time_t update_time = time(NULL);
     while (bpf_map_get_next_key(fd, &key, &next_key) == 0) {
-        // We need to reset the values when we are working on kernel 4.15 or newer, because kernel does not create
-        // values for specific processor unless it is used to store data. As result of this behavior one the next socket
-        // can have values from the previous one.
-        memset(values, 0, length);
         test = bpf_map_lookup_elem(fd, &key, values);
         if (test < 0) {
-            key = next_key;
-            continue;
+            goto end_socket_loop;
         }
 
-        hash_accumulator(values, &key, family, end);
+        if (key.pid > (uint32_t)pid_max) {
+            goto end_socket_loop;
+        }
 
-        key = next_key;
+        ebpf_hash_socket_accumulator(values, end);
+        ebpf_socket_fill_publish_apps(key.pid, values);
+
+        // We still update UDP metrics for charts, but we do not expose these sockets through functions
+        /*
+        if (key.dport == NETDATA_EBPF_UDP_PORT && values[0].protocol == IPPROTO_UDP) {
+            bpf_map_delete_elem(fd, &key);
+            goto end_socket_loop;
+        }
+         */
+
+        // Discard sockets that have no source or destination address (not yet bound or connected)
+        if (!key.daddr.addr64[0] && !key.daddr.addr64[1] && !key.saddr.addr64[0] && !key.saddr.addr64[1]) {
+            bpf_map_delete_elem(fd, &key);
+            goto end_socket_loop;
+        }
+
+        // When a socket is not allowed, we do not append it to the table, but we keep it to accumulate data.
+        if (!ebpf_is_socket_allowed(&key, values)) {
+            goto end_socket_loop;
+        }
+
+        // Get PID structure
+        rw_spinlock_write_lock(&ebpf_judy_pid.index.rw_spinlock);
+        PPvoid_t judy_array = &ebpf_judy_pid.index.JudyLArray;
+        netdata_ebpf_judy_pid_stats_t *pid_ptr = ebpf_get_pid_from_judy_unsafe(judy_array, key.pid);
+        if (!pid_ptr) {
+            goto end_socket_loop;
+        }
+
+        // Get Socket structure
+        rw_spinlock_write_lock(&pid_ptr->socket_stats.rw_spinlock);
+        netdata_socket_plus_t **socket_pptr = (netdata_socket_plus_t **)ebpf_judy_insert_unsafe(
+            &pid_ptr->socket_stats.JudyLArray, values[0].first_timestamp);
+        netdata_socket_plus_t *socket_ptr = *socket_pptr;
+        bool translate = false;
+        if (likely(*socket_pptr == NULL)) {
+            *socket_pptr = aral_mallocz(aral_socket_table);
+
+            socket_ptr = *socket_pptr;
+
+            translate = true;
+        }
+        uint64_t prev_period = socket_ptr->data.current_timestamp;
+        memcpy(&socket_ptr->data, &values[0], sizeof(netdata_socket_t));
+        if (translate)
+            ebpf_socket_translate(socket_ptr, &key);
+        else { // Check socket was updated
+            if (prev_period) {
+                if (values[0].current_timestamp > prev_period) // Socket updated
+                    socket_ptr->last_update = update_time;
+                else if ((update_time - socket_ptr->last_update) > em->update_every) {
+                    // Socket was not updated since last read
+                    JudyLDel(&pid_ptr->socket_stats.JudyLArray, values[0].first_timestamp, PJE0);
+                    aral_freez(aral_socket_table, socket_ptr);
+                }
+            } else // First time
+                socket_ptr->last_update = update_time;
+        }
+
+        rw_spinlock_write_unlock(&pid_ptr->socket_stats.rw_spinlock);
+        rw_spinlock_write_unlock(&ebpf_judy_pid.index.rw_spinlock);
+
+end_socket_loop:
+        memset(values, 0, length);
+        memcpy(&key, &next_key, sizeof(key));
     }
+    netdata_thread_enable_cancelability();
+}
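The eviction branch above deletes a translated socket once it has gone more than one collection interval without an update. That rule in isolation, with hypothetical names:

```c
#include <stdbool.h>
#include <time.h>

// Sketch of the staleness test applied to already-translated sockets:
// a socket not updated for more than update_every seconds is evicted
// from the per-PID Judy array (and its ARAL slot freed).
static bool socket_is_stale(time_t now, time_t last_update, int update_every)
{
    return (now - last_update) > update_every;
}
```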
+
+/**
+ * Socket thread
+ *
+ * Thread used to read socket data from kernel hash tables.
+ *
+ * @param ptr a pointer to `struct ebpf_module`
+ *
+ * @return It always returns NULL.
+ */
+void *ebpf_read_socket_thread(void *ptr)
+{
+    heartbeat_t hb;
+    heartbeat_init(&hb);
+
+    ebpf_module_t *em = (ebpf_module_t *)ptr;
+
+    ebpf_update_array_vectors(em);
+
+    int update_every = em->update_every;
+    int counter = update_every - 1;
+
+    uint32_t running_time = 0;
+    uint32_t lifetime = em->lifetime;
+    usec_t period = update_every * USEC_PER_SEC;
+    while (!ebpf_exit_plugin && running_time < lifetime) {
+        (void)heartbeat_next(&hb, period);
+        if (ebpf_exit_plugin || ++counter != update_every)
+            continue;
+
+        ebpf_update_array_vectors(em);
+
+        counter = 0;
+    }
+
+    return NULL;
 }
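The loop above gates its work with a counter initialized to `update_every - 1`, so the body runs on the first heartbeat tick and then once every `update_every` ticks. That counter pattern can be sketched as a pure function (`ticks_until_runs` is a hypothetical name for illustration only):

```c
// Sketch of the heartbeat counter gate used by ebpf_read_socket_thread:
// starting the counter at update_every - 1 makes the work run on the
// first tick, then once every update_every ticks thereafter.
static int ticks_until_runs(int update_every, int total_ticks)
{
    int counter = update_every - 1;
    int runs = 0;
    for (int tick = 0; tick < total_ticks; tick++) {
        if (++counter != update_every)
            continue;
        runs++;        // this is where ebpf_update_array_vectors() would run
        counter = 0;
    }
    return runs;
}
```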
 
 /**
@@ -2164,44 +1861,6 @@ static void read_listen_table()
     }
 }
 
-/**
- * Socket read hash
- *
- * This is the thread callback.
- * This thread is necessary, because we cannot freeze the whole plugin to read the data on very busy socket.
- *
- * @param ptr It is a NULL value for this thread.
- *
- * @return It always returns NULL.
- */
-void *ebpf_socket_read_hash(void *ptr)
-{
-    netdata_thread_cleanup_push(ebpf_socket_cleanup, ptr);
-    ebpf_module_t *em = (ebpf_module_t *)ptr;
-
-    heartbeat_t hb;
-    heartbeat_init(&hb);
-    int fd_ipv4 = socket_maps[NETDATA_SOCKET_TABLE_IPV4].map_fd;
-    int fd_ipv6 = socket_maps[NETDATA_SOCKET_TABLE_IPV6].map_fd;
-    int maps_per_core = em->maps_per_core;
-    // This thread is cancelled from another thread
-    uint32_t running_time;
-    uint32_t lifetime = em->lifetime;
-    for (running_time = 0;!ebpf_exit_plugin && running_time < lifetime; running_time++) {
-        (void)heartbeat_next(&hb, USEC_PER_SEC);
-        if (ebpf_exit_plugin)
-           break;
-
-        pthread_mutex_lock(&nv_mutex);
-        ebpf_read_socket_hash_table(fd_ipv4, AF_INET, maps_per_core);
-        ebpf_read_socket_hash_table(fd_ipv6, AF_INET6, maps_per_core);
-        pthread_mutex_unlock(&nv_mutex);
-    }
-
-    netdata_thread_cleanup_pop(1);
-    return NULL;
-}
-
 /**
  * Read the hash table and store data to allocated vectors.
  *
@@ -2251,9 +1910,9 @@ static void ebpf_socket_read_hash_global_tables(netdata_idx_t *stats, int maps_p
  * Fill publish apps when necessary.
  *
  * @param current_pid  the PID that I am updating
- * @param eb           the structure with data read from memory.
+ * @param ns           the structure with data read from memory.
  */
-void ebpf_socket_fill_publish_apps(uint32_t current_pid, ebpf_bandwidth_t *eb)
+void ebpf_socket_fill_publish_apps(uint32_t current_pid, netdata_socket_t *ns)
 {
     ebpf_socket_publish_apps_t *curr = socket_bandwidth_curr[current_pid];
     if (!curr) {
@@ -2261,98 +1920,33 @@ void ebpf_socket_fill_publish_apps(uint32_t current_pid, ebpf_bandwidth_t *eb)
         socket_bandwidth_curr[current_pid] = curr;
     }
 
-    curr->bytes_sent = eb->bytes_sent;
-    curr->bytes_received = eb->bytes_received;
-    curr->call_tcp_sent = eb->call_tcp_sent;
-    curr->call_tcp_received = eb->call_tcp_received;
-    curr->retransmit = eb->retransmit;
-    curr->call_udp_sent = eb->call_udp_sent;
-    curr->call_udp_received = eb->call_udp_received;
-    curr->call_close = eb->close;
-    curr->call_tcp_v4_connection = eb->tcp_v4_connection;
-    curr->call_tcp_v6_connection = eb->tcp_v6_connection;
-}
+    curr->bytes_sent += ns->tcp.tcp_bytes_sent;
+    curr->bytes_received += ns->tcp.tcp_bytes_received;
+    curr->call_tcp_sent += ns->tcp.call_tcp_sent;
+    curr->call_tcp_received += ns->tcp.call_tcp_received;
+    curr->retransmit += ns->tcp.retransmit;
+    curr->call_close += ns->tcp.close;
+    curr->call_tcp_v4_connection += ns->tcp.ipv4_connect;
+    curr->call_tcp_v6_connection += ns->tcp.ipv6_connect;
 
-/**
- * Bandwidth accumulator.
- *
- * @param out the vector with the values to sum
- */
-void ebpf_socket_bandwidth_accumulator(ebpf_bandwidth_t *out, int maps_per_core)
-{
-    int i, end = (maps_per_core) ? ebpf_nprocs : 1;
-    ebpf_bandwidth_t *total = &out[0];
-    for (i = 1; i < end; i++) {
-        ebpf_bandwidth_t *move = &out[i];
-        total->bytes_sent += move->bytes_sent;
-        total->bytes_received += move->bytes_received;
-        total->call_tcp_sent += move->call_tcp_sent;
-        total->call_tcp_received += move->call_tcp_received;
-        total->retransmit += move->retransmit;
-        total->call_udp_sent += move->call_udp_sent;
-        total->call_udp_received += move->call_udp_received;
-        total->close += move->close;
-        total->tcp_v4_connection += move->tcp_v4_connection;
-        total->tcp_v6_connection += move->tcp_v6_connection;
-    }
-}
-
-/**
- *  Update the apps data reading information from the hash table
- *
- * @param maps_per_core      do I need to read all cores?
- */
-static void ebpf_socket_update_apps_data(int maps_per_core)
-{
-    int fd = socket_maps[NETDATA_SOCKET_TABLE_BANDWIDTH].map_fd;
-    ebpf_bandwidth_t *eb = bandwidth_vector;
-    uint32_t key;
-    struct ebpf_pid_stat *pids = ebpf_root_of_pids;
-    size_t length = sizeof(ebpf_bandwidth_t);
-    if (maps_per_core)
-        length *= ebpf_nprocs;
-    while (pids) {
-        key = pids->pid;
-
-        if (bpf_map_lookup_elem(fd, &key, eb)) {
-            pids = pids->next;
-            continue;
-        }
-
-        ebpf_socket_bandwidth_accumulator(eb, maps_per_core);
-
-        ebpf_socket_fill_publish_apps(key, eb);
-
-        memset(eb, 0, length);
-
-        pids = pids->next;
-    }
+    curr->call_udp_sent += ns->udp.call_udp_sent;
+    curr->call_udp_received += ns->udp.call_udp_received;
 }
 
 /**
  * Update cgroup
  *
  * Update cgroup data based on PIDs.
- *
- * @param maps_per_core      do I need to read all cores?
  */
-static void ebpf_update_socket_cgroup(int maps_per_core)
+static void ebpf_update_socket_cgroup(void)
 {
     ebpf_cgroup_target_t *ect ;
 
-    ebpf_bandwidth_t *eb = bandwidth_vector;
-    int fd = socket_maps[NETDATA_SOCKET_TABLE_BANDWIDTH].map_fd;
-
-    size_t length = sizeof(ebpf_bandwidth_t);
-    if (maps_per_core)
-        length *= ebpf_nprocs;
-
     pthread_mutex_lock(&mutex_cgroup_shm);
     for (ect = ebpf_cgroup_pids; ect; ect = ect->next) {
         struct pid_on_target2 *pids;
         for (pids = ect->pids; pids; pids = pids->next) {
             int pid = pids->pid;
-            ebpf_bandwidth_t *out = &pids->socket;
             ebpf_socket_publish_apps_t *publish = &ect->publish_socket;
             if (likely(socket_bandwidth_curr) && socket_bandwidth_curr[pid]) {
                 ebpf_socket_publish_apps_t *in = socket_bandwidth_curr[pid];
@@ -2367,25 +1961,6 @@ static void ebpf_update_socket_cgroup(int maps_per_core)
                 publish->call_close = in->call_close;
                 publish->call_tcp_v4_connection = in->call_tcp_v4_connection;
                 publish->call_tcp_v6_connection = in->call_tcp_v6_connection;
-            } else {
-                if (!bpf_map_lookup_elem(fd, &pid, eb)) {
-                    ebpf_socket_bandwidth_accumulator(eb, maps_per_core);
-
-                    memcpy(out, eb, sizeof(ebpf_bandwidth_t));
-
-                    publish->bytes_sent = out->bytes_sent;
-                    publish->bytes_received = out->bytes_received;
-                    publish->call_tcp_sent = out->call_tcp_sent;
-                    publish->call_tcp_received = out->call_tcp_received;
-                    publish->retransmit = out->retransmit;
-                    publish->call_udp_sent = out->call_udp_sent;
-                    publish->call_udp_received = out->call_udp_received;
-                    publish->call_close = out->close;
-                    publish->call_tcp_v4_connection = out->tcp_v4_connection;
-                    publish->call_tcp_v6_connection = out->tcp_v6_connection;
-
-                    memset(eb, 0, length);
-                }
             }
         }
     }
@@ -2406,18 +1981,18 @@ static void ebpf_socket_sum_cgroup_pids(ebpf_socket_publish_apps_t *socket, stru
     memset(&accumulator, 0, sizeof(accumulator));
 
     while (pids) {
-        ebpf_bandwidth_t *w = &pids->socket;
+        netdata_socket_t *w = &pids->socket;
 
-        accumulator.bytes_received += w->bytes_received;
-        accumulator.bytes_sent += w->bytes_sent;
-        accumulator.call_tcp_received += w->call_tcp_received;
-        accumulator.call_tcp_sent += w->call_tcp_sent;
-        accumulator.retransmit += w->retransmit;
-        accumulator.call_udp_received += w->call_udp_received;
-        accumulator.call_udp_sent += w->call_udp_sent;
-        accumulator.call_close += w->close;
-        accumulator.call_tcp_v4_connection += w->tcp_v4_connection;
-        accumulator.call_tcp_v6_connection += w->tcp_v6_connection;
+        accumulator.bytes_received += w->tcp.tcp_bytes_received;
+        accumulator.bytes_sent += w->tcp.tcp_bytes_sent;
+        accumulator.call_tcp_received += w->tcp.call_tcp_received;
+        accumulator.call_tcp_sent += w->tcp.call_tcp_sent;
+        accumulator.retransmit += w->tcp.retransmit;
+        accumulator.call_close += w->tcp.close;
+        accumulator.call_tcp_v4_connection += w->tcp.ipv4_connect;
+        accumulator.call_tcp_v6_connection += w->tcp.ipv6_connect;
+        accumulator.call_udp_received += w->udp.call_udp_received;
+        accumulator.call_udp_sent += w->udp.call_udp_sent;
 
         pids = pids->next;
     }
@@ -2902,15 +2477,6 @@ static void socket_collector(ebpf_module_t *em)
 {
     heartbeat_t hb;
     heartbeat_init(&hb);
-    uint32_t network_connection = network_viewer_opt.enabled;
-
-    if (network_connection) {
-        socket_threads.thread = mallocz(sizeof(netdata_thread_t));
-        socket_threads.start_routine = ebpf_socket_read_hash;
-
-        netdata_thread_create(socket_threads.thread, socket_threads.name,
-                              NETDATA_THREAD_OPTION_DEFAULT, ebpf_socket_read_hash, em);
-    }
 
     int cgroups = em->cgroup_charts;
     if (cgroups)
@@ -2937,14 +2503,8 @@ static void socket_collector(ebpf_module_t *em)
         }
 
         pthread_mutex_lock(&collect_data_mutex);
-        if (socket_apps_enabled)
-            ebpf_socket_update_apps_data(maps_per_core);
-
         if (cgroups)
-            ebpf_update_socket_cgroup(maps_per_core);
-
-        if (network_connection)
-            calculate_nv_plot();
+            ebpf_update_socket_cgroup();
 
         pthread_mutex_lock(&lock);
         if (socket_global_enabled)
@@ -2963,20 +2523,6 @@ static void socket_collector(ebpf_module_t *em)
 
         fflush(stdout);
 
-        if (network_connection) {
-            // We are calling fflush many times, because when we have a lot of dimensions
-            // we began to have not expected outputs and Netdata closed the plugin.
-            pthread_mutex_lock(&nv_mutex);
-            ebpf_socket_create_nv_charts(&inbound_vectors, update_every);
-            fflush(stdout);
-            ebpf_socket_send_nv_data(&inbound_vectors);
-
-            ebpf_socket_create_nv_charts(&outbound_vectors, update_every);
-            fflush(stdout);
-            ebpf_socket_send_nv_data(&outbound_vectors);
-            pthread_mutex_unlock(&nv_mutex);
-
-        }
         pthread_mutex_unlock(&lock);
         pthread_mutex_unlock(&collect_data_mutex);
 
@@ -2998,42 +2544,24 @@ static void socket_collector(ebpf_module_t *em)
  *****************************************************************/
 
 /**
- * Allocate vectors used with this thread.
+ * Initialize vectors used by this thread.
+ *
  * We do not check the return value, because callocz already shuts down the software
  * when allocation is not possible.
- *
- * @param apps is apps enabled?
  */
-static void ebpf_socket_allocate_global_vectors(int apps)
+static void ebpf_socket_initialize_global_vectors(void)
 {
     memset(socket_aggregated_data, 0 ,NETDATA_MAX_SOCKET_VECTOR * sizeof(netdata_syscall_stat_t));
     memset(socket_publish_aggregated, 0 ,NETDATA_MAX_SOCKET_VECTOR * sizeof(netdata_publish_syscall_t));
     socket_hash_values = callocz(ebpf_nprocs, sizeof(netdata_idx_t));
 
-    if (apps) {
-        ebpf_socket_aral_init();
-        socket_bandwidth_curr = callocz((size_t)pid_max, sizeof(ebpf_socket_publish_apps_t *));
-        bandwidth_vector = callocz((size_t)ebpf_nprocs, sizeof(ebpf_bandwidth_t));
-    }
+    ebpf_socket_aral_init();
+    socket_bandwidth_curr = callocz((size_t)pid_max, sizeof(ebpf_socket_publish_apps_t *));
+
+    aral_socket_table = ebpf_allocate_pid_aral(NETDATA_EBPF_SOCKET_ARAL_TABLE_NAME,
+                                               sizeof(netdata_socket_plus_t));
 
     socket_values = callocz((size_t)ebpf_nprocs, sizeof(netdata_socket_t));
-    if (network_viewer_opt.enabled) {
-        inbound_vectors.plot = callocz(network_viewer_opt.max_dim, sizeof(netdata_socket_plot_t));
-        outbound_vectors.plot = callocz(network_viewer_opt.max_dim, sizeof(netdata_socket_plot_t));
-    }
-}
-
-/**
- * Initialize Inbound and Outbound
- *
- * Initialize the common outbound and inbound sockets.
- */
-static void initialize_inbound_outbound()
-{
-    inbound_vectors.last = network_viewer_opt.max_dim - 1;
-    outbound_vectors.last = inbound_vectors.last;
-    fill_last_nv_dimension(&inbound_vectors.plot[inbound_vectors.last], 0);
-    fill_last_nv_dimension(&outbound_vectors.plot[outbound_vectors.last], 1);
 }
 
 /*****************************************************************
@@ -3042,793 +2570,6 @@ static void initialize_inbound_outbound()
  *
  *****************************************************************/
 
-/**
- * Fill Port list
- *
- * @param out a pointer to the link list.
- * @param in the structure that will be linked.
- */
-static inline void fill_port_list(ebpf_network_viewer_port_list_t **out, ebpf_network_viewer_port_list_t *in)
-{
-    if (likely(*out)) {
-        ebpf_network_viewer_port_list_t *move = *out, *store = *out;
-        uint16_t first = ntohs(in->first);
-        uint16_t last = ntohs(in->last);
-        while (move) {
-            uint16_t cmp_first = ntohs(move->first);
-            uint16_t cmp_last = ntohs(move->last);
-            if (cmp_first <= first && first <= cmp_last  &&
-                cmp_first <= last && last <= cmp_last ) {
-                netdata_log_info("The range/value (%u, %u) is inside the range/value (%u, %u) already inserted, it will be ignored.",
-                     first, last, cmp_first, cmp_last);
-                freez(in->value);
-                freez(in);
-                return;
-            } else if (first <= cmp_first && cmp_first <= last  &&
-                       first <= cmp_last && cmp_last <= last) {
-                netdata_log_info("The range (%u, %u) is bigger than previous range (%u, %u) already inserted, the previous will be ignored.",
-                     first, last, cmp_first, cmp_last);
-                freez(move->value);
-                move->value = in->value;
-                move->first = in->first;
-                move->last = in->last;
-                freez(in);
-                return;
-            }
-
-            store = move;
-            move = move->next;
-        }
-
-        store->next = in;
-    } else {
-        *out = in;
-    }
-
-#ifdef NETDATA_INTERNAL_CHECKS
-    netdata_log_info("Adding values %s( %u, %u) to %s port list used on network viewer",
-         in->value, ntohs(in->first), ntohs(in->last),
-         (*out == network_viewer_opt.included_port)?"included":"excluded");
-#endif
-}
-
-/**
- * Parse Service List
- *
- * @param out a pointer to store the link list
- * @param service the service used to create the structure that will be linked.
- */
-static void parse_service_list(void **out, char *service)
-{
-    ebpf_network_viewer_port_list_t **list = (ebpf_network_viewer_port_list_t **)out;
-    struct servent *serv = getservbyname((const char *)service, "tcp");
-    if (!serv)
-        serv = getservbyname((const char *)service, "udp");
-
-    if (!serv) {
-        netdata_log_info("Cannot resolv the service '%s' with protocols TCP and UDP, it will be ignored", service);
-        return;
-    }
-
-    ebpf_network_viewer_port_list_t *w = callocz(1, sizeof(ebpf_network_viewer_port_list_t));
-    w->value = strdupz(service);
-    w->hash = simple_hash(service);
-
-    w->first = w->last = (uint16_t)serv->s_port;
-
-    fill_port_list(list, w);
-}
-
-/**
- * Netmask
- *
- * Copied from iprange (https://github.com/firehol/iprange/blob/master/iprange.h)
- *
- * @param prefix create the netmask based in the CIDR value.
- *
- * @return
- */
-static inline in_addr_t netmask(int prefix) {
-
-    if (prefix == 0)
-        return (~((in_addr_t) - 1));
-    else
-        return (in_addr_t)(~((1 << (32 - prefix)) - 1));
-
-}
-
-/**
- * Broadcast
- *
- * Copied from iprange (https://github.com/firehol/iprange/blob/master/iprange.h)
- *
- * @param addr is the ip address
- * @param prefix is the CIDR value.
- *
- * @return It returns the last address of the range
- */
-static inline in_addr_t broadcast(in_addr_t addr, int prefix)
-{
-    return (addr | ~netmask(prefix));
-}
-
-/**
- * Network
- *
- * Copied from iprange (https://github.com/firehol/iprange/blob/master/iprange.h)
- *
- * @param addr is the ip address
- * @param prefix is the CIDR value.
- *
- * @return It returns the first address of the range.
- */
-static inline in_addr_t ipv4_network(in_addr_t addr, int prefix)
-{
-    return (addr & netmask(prefix));
-}
-
-/**
- * IP to network long
- *
- * @param dst the vector to store the result
- * @param ip the source ip given by our users.
- * @param domain the ip domain (IPV4 or IPV6)
- * @param source the original string
- *
- * @return it returns 0 on success and -1 otherwise.
- */
-static inline int ip2nl(uint8_t *dst, char *ip, int domain, char *source)
-{
-    if (inet_pton(domain, ip, dst) <= 0) {
-        netdata_log_error("The address specified (%s) is invalid ", source);
-        return -1;
-    }
-
-    return 0;
-}
-
-/**
- * Get IPV6 Last Address
- *
- * @param out the address to store the last address.
- * @param in the address used to do the math.
- * @param prefix number of bits used to calculate the address
- */
-static void get_ipv6_last_addr(union netdata_ip_t *out, union netdata_ip_t *in, uint64_t prefix)
-{
-    uint64_t mask,tmp;
-    uint64_t ret[2];
-    memcpy(ret, in->addr32, sizeof(union netdata_ip_t));
-
-    if (prefix == 128) {
-        memcpy(out->addr32, in->addr32, sizeof(union netdata_ip_t));
-        return;
-    } else if (!prefix) {
-        ret[0] = ret[1] = 0xFFFFFFFFFFFFFFFF;
-        memcpy(out->addr32, ret, sizeof(union netdata_ip_t));
-        return;
-    } else if (prefix <= 64) {
-        ret[1] = 0xFFFFFFFFFFFFFFFFULL;
-
-        tmp = be64toh(ret[0]);
-        if (prefix > 0) {
-            mask = 0xFFFFFFFFFFFFFFFFULL << (64 - prefix);
-            tmp |= ~mask;
-        }
-        ret[0] = htobe64(tmp);
-    } else {
-        mask = 0xFFFFFFFFFFFFFFFFULL << (128 - prefix);
-        tmp = be64toh(ret[1]);
-        tmp |= ~mask;
-        ret[1] = htobe64(tmp);
-    }
-
-    memcpy(out->addr32, ret, sizeof(union netdata_ip_t));
-}
-
-/**
- * Calculate ipv6 first address
- *
- * @param out the address to store the first address.
- * @param in the address used to do the math.
- * @param prefix number of bits used to calculate the address
- */
-static void get_ipv6_first_addr(union netdata_ip_t *out, union netdata_ip_t *in, uint64_t prefix)
-{
-    uint64_t mask,tmp;
-    uint64_t ret[2];
-
-    memcpy(ret, in->addr32, sizeof(union netdata_ip_t));
-
-    if (prefix == 128) {
-        memcpy(out->addr32, in->addr32, sizeof(union netdata_ip_t));
-        return;
-    } else if (!prefix) {
-        ret[0] = ret[1] = 0;
-        memcpy(out->addr32, ret, sizeof(union netdata_ip_t));
-        return;
-    } else if (prefix <= 64) {
-        ret[1] = 0ULL;
-
-        tmp = be64toh(ret[0]);
-        if (prefix > 0) {
-            mask = 0xFFFFFFFFFFFFFFFFULL << (64 - prefix);
-            tmp &= mask;
-        }
-        ret[0] = htobe64(tmp);
-    } else {
-        mask = 0xFFFFFFFFFFFFFFFFULL << (128 - prefix);
-        tmp = be64toh(ret[1]);
-        tmp &= mask;
-        ret[1] = htobe64(tmp);
-    }
-
-    memcpy(out->addr32, ret, sizeof(union netdata_ip_t));
-}
-
-/**
- * Is ip inside the range
- *
- * Check if the ip is inside a IP range
- *
- * @param rfirst    the first ip address of the range
- * @param rlast     the last ip address of the range
- * @param cmpfirst  the first ip to compare
- * @param cmplast   the last ip to compare
- * @param family    the IP family
- *
- * @return It returns 1 if the IP is inside the range and 0 otherwise
- */
-static int ebpf_is_ip_inside_range(union netdata_ip_t *rfirst, union netdata_ip_t *rlast,
-                                   union netdata_ip_t *cmpfirst, union netdata_ip_t *cmplast, int family)
-{
-    if (family == AF_INET) {
-        if ((rfirst->addr32[0] <= cmpfirst->addr32[0]) && (rlast->addr32[0] >= cmplast->addr32[0]))
-            return 1;
-    } else {
-        if (memcmp(rfirst->addr8, cmpfirst->addr8, sizeof(union netdata_ip_t)) <= 0 &&
-            memcmp(rlast->addr8, cmplast->addr8, sizeof(union netdata_ip_t)) >= 0) {
-            return 1;
-        }
-
-    }
-    return 0;
-}
-
-/**
- * Fill IP list
- *
- * @param out a pointer to the link list.
- * @param in the structure that will be linked.
- * @param table the modified table.
- */
-void ebpf_fill_ip_list(ebpf_network_viewer_ip_list_t **out, ebpf_network_viewer_ip_list_t *in, char *table)
-{
-#ifndef NETDATA_INTERNAL_CHECKS
-    UNUSED(table);
-#endif
-    if (in->ver == AF_INET) { // It is simpler to compare using host order
-        in->first.addr32[0] = ntohl(in->first.addr32[0]);
-        in->last.addr32[0] = ntohl(in->last.addr32[0]);
-    }
-    if (likely(*out)) {
-        ebpf_network_viewer_ip_list_t *move = *out, *store = *out;
-        while (move) {
-            if (in->ver == move->ver &&
-                ebpf_is_ip_inside_range(&move->first, &move->last, &in->first, &in->last, in->ver)) {
-                netdata_log_info("The range/value (%s) is inside the range/value (%s) already inserted, it will be ignored.",
-                     in->value, move->value);
-                freez(in->value);
-                freez(in);
-                return;
-            }
-            store = move;
-            move = move->next;
-        }
-
-        store->next = in;
-    } else {
-        *out = in;
-    }
-
-#ifdef NETDATA_INTERNAL_CHECKS
-    char first[256], last[512];
-    if (in->ver == AF_INET) {
-        netdata_log_info("Adding values %s: (%u - %u) to %s IP list \"%s\" used on network viewer",
-             in->value, in->first.addr32[0], in->last.addr32[0],
-             (*out == network_viewer_opt.included_ips)?"included":"excluded",
-             table);
-    } else {
-        if (inet_ntop(AF_INET6, in->first.addr8, first, INET6_ADDRSTRLEN) &&
-            inet_ntop(AF_INET6, in->last.addr8, last, INET6_ADDRSTRLEN))
-            netdata_log_info("Adding values %s - %s to %s IP list \"%s\" used on network viewer",
-                 first, last,
-                 (*out == network_viewer_opt.included_ips)?"included":"excluded",
-                 table);
-    }
-#endif
-}
-
-/**
- * Parse IP List
- *
- * Parse IP list and link it.
- *
- * @param out a pointer to store the link list
- * @param ip the value given as parameter
- */
-static void ebpf_parse_ip_list(void **out, char *ip)
-{
-    ebpf_network_viewer_ip_list_t **list = (ebpf_network_viewer_ip_list_t **)out;
-
-    char *ipdup = strdupz(ip);
-    union netdata_ip_t first = { };
-    union netdata_ip_t last = { };
-    char *is_ipv6;
-    if (*ip == '*' && *(ip+1) == '\0') {
-        memset(first.addr8, 0, sizeof(first.addr8));
-        memset(last.addr8, 0xFF, sizeof(last.addr8));
-
-        is_ipv6 = ip;
-
-        clean_ip_structure(list);
-        goto storethisip;
-    }
-
-    char *end = ip;
-    // Move while I cannot find a separator
-    while (*end && *end != '/' && *end != '-') end++;
-
-    // We will use only the classic IPV6 for while, but we could consider the base 85 in a near future
-    // https://tools.ietf.org/html/rfc1924
-    is_ipv6 = strchr(ip, ':');
-
-    int select;
-    if (*end && !is_ipv6) { // IPV4 range
-        select = (*end == '/') ? 0 : 1;
-        *end++ = '\0';
-        if (*end == '!') {
-            netdata_log_info("The exclusion cannot be in the second part of the range %s, it will be ignored.", ipdup);
-            goto cleanipdup;
-        }
-
-        if (!select) { // CIDR
-            select = ip2nl(first.addr8, ip, AF_INET, ipdup);
-            if (select)
-                goto cleanipdup;
-
-            select = (int) str2i(end);
-            if (select < NETDATA_MINIMUM_IPV4_CIDR || select > NETDATA_MAXIMUM_IPV4_CIDR) {
-                netdata_log_info("The specified CIDR %s is not valid, the IP %s will be ignored.", end, ip);
-                goto cleanipdup;
-            }
-
-            last.addr32[0] = htonl(broadcast(ntohl(first.addr32[0]), select));
-            // This was added to remove
-            // https://app.codacy.com/manual/netdata/netdata/pullRequest?prid=5810941&bid=19021977
-            UNUSED(last.addr32[0]);
-
-            uint32_t ipv4_test = htonl(ipv4_network(ntohl(first.addr32[0]), select));
-            if (first.addr32[0] != ipv4_test) {
-                first.addr32[0] = ipv4_test;
-                struct in_addr ipv4_convert;
-                ipv4_convert.s_addr = ipv4_test;
-                char ipv4_msg[INET_ADDRSTRLEN];
-                if(inet_ntop(AF_INET, &ipv4_convert, ipv4_msg, INET_ADDRSTRLEN))
-                    netdata_log_info("The network value of CIDR %s was updated for %s .", ipdup, ipv4_msg);
-            }
-        } else { // Range
-            select = ip2nl(first.addr8, ip, AF_INET, ipdup);
-            if (select)
-                goto cleanipdup;
-
-            select = ip2nl(last.addr8, end, AF_INET, ipdup);
-            if (select)
-                goto cleanipdup;
-        }
-
-        if (htonl(first.addr32[0]) > htonl(last.addr32[0])) {
-            netdata_log_info("The specified range %s is invalid, the second address is smallest than the first, it will be ignored.",
-                 ipdup);
-            goto cleanipdup;
-        }
-    } else if (is_ipv6) { // IPV6
-        if (!*end) { // Unique
-            select = ip2nl(first.addr8, ip, AF_INET6, ipdup);
-            if (select)
-                goto cleanipdup;
-
-            memcpy(last.addr8, first.addr8, sizeof(first.addr8));
-        } else if (*end == '-') {
-            *end++ = 0x00;
-            if (*end == '!') {
-                netdata_log_info("The exclusion cannot be in the second part of the range %s, it will be ignored.", ipdup);
-                goto cleanipdup;
-            }
-
-            select = ip2nl(first.addr8, ip, AF_INET6, ipdup);
-            if (select)
-                goto cleanipdup;
-
-            select = ip2nl(last.addr8, end, AF_INET6, ipdup);
-            if (select)
-                goto cleanipdup;
-        } else { // CIDR
-            *end++ = 0x00;
-            if (*end == '!') {
-                netdata_log_info("The exclusion cannot be in the second part of the range %s, it will be ignored.", ipdup);
-                goto cleanipdup;
-            }
-
-            select = str2i(end);
-            if (select < 0 || select > 128) {
-                netdata_log_info("The CIDR %s is not valid, the address %s will be ignored.", end, ip);
-                goto cleanipdup;
-            }
-
-            uint64_t prefix = (uint64_t)select;
-            select = ip2nl(first.addr8, ip, AF_INET6, ipdup);
-            if (select)
-                goto cleanipdup;
-
-            get_ipv6_last_addr(&last, &first, prefix);
-
-            union netdata_ip_t ipv6_test;
-            get_ipv6_first_addr(&ipv6_test, &first, prefix);
-
-            if (memcmp(first.addr8, ipv6_test.addr8, sizeof(union netdata_ip_t)) != 0) {
-                memcpy(first.addr8, ipv6_test.addr8, sizeof(union netdata_ip_t));
-
-                struct in6_addr ipv6_convert;
-                memcpy(ipv6_convert.s6_addr,  ipv6_test.addr8, sizeof(union netdata_ip_t));
-
-                char ipv6_msg[INET6_ADDRSTRLEN];
-                if(inet_ntop(AF_INET6, &ipv6_convert, ipv6_msg, INET6_ADDRSTRLEN))
-                    netdata_log_info("The network value of CIDR %s was updated for %s .", ipdup, ipv6_msg);
-            }
-        }
-
-        if ((be64toh(*(uint64_t *)&first.addr32[2]) > be64toh(*(uint64_t *)&last.addr32[2]) &&
-             !memcmp(first.addr32, last.addr32, 2*sizeof(uint32_t))) ||
-            (be64toh(*(uint64_t *)&first.addr32) > be64toh(*(uint64_t *)&last.addr32)) ) {
-            netdata_log_info("The specified range %s is invalid, the second address is smallest than the first, it will be ignored.",
-                 ipdup);
-            goto cleanipdup;
-        }
-    } else { // Unique ip
-        select = ip2nl(first.addr8, ip, AF_INET, ipdup);
-        if (select)
-            goto cleanipdup;
-
-        memcpy(last.addr8, first.addr8, sizeof(first.addr8));
-    }
-
-    ebpf_network_viewer_ip_list_t *store;
-
-storethisip:
-    store = callocz(1, sizeof(ebpf_network_viewer_ip_list_t));
-    store->value = ipdup;
-    store->hash = simple_hash(ipdup);
-    store->ver = (uint8_t)(!is_ipv6)?AF_INET:AF_INET6;
-    memcpy(store->first.addr8, first.addr8, sizeof(first.addr8));
-    memcpy(store->last.addr8, last.addr8, sizeof(last.addr8));
-
-    ebpf_fill_ip_list(list, store, "socket");
-    return;
-
-cleanipdup:
-    freez(ipdup);
-}
-
-/**
- * Parse IP Range
- *
- * Parse the IP ranges given and create Network Viewer IP Structure
- *
- * @param ptr  is a pointer with the text to parse.
- */
-static void ebpf_parse_ips(char *ptr)
-{
-    // No value
-    if (unlikely(!ptr))
-        return;
-
-    while (likely(ptr)) {
-        // Move forward until next valid character
-        while (isspace(*ptr)) ptr++;
-
-        // No valid value found
-        if (unlikely(!*ptr))
-            return;
-
-        // Find space that ends the list
-        char *end = strchr(ptr, ' ');
-        if (end) {
-            *end++ = '\0';
-        }
-
-        int neg = 0;
-        if (*ptr == '!') {
-            neg++;
-            ptr++;
-        }
-
-        if (isascii(*ptr)) { // Parse port
-            ebpf_parse_ip_list((!neg)?(void **)&network_viewer_opt.included_ips:
-                                      (void **)&network_viewer_opt.excluded_ips,
-                                ptr);
-        }
-
-        ptr = end;
-    }
-}
-
-
-
-/**
- * Parse port list
- *
- * Parse an allocated port list with the range given
- *
- * @param out a pointer to store the link list
- * @param range the informed range for the user.
- */
-static void parse_port_list(void **out, char *range)
-{
-    int first, last;
-    ebpf_network_viewer_port_list_t **list = (ebpf_network_viewer_port_list_t **)out;
-
-    char *copied = strdupz(range);
-    if (*range == '*' && *(range+1) == '\0') {
-        first = 1;
-        last = 65535;
-
-        clean_port_structure(list);
-        goto fillenvpl;
-    }
-
-    char *end = range;
-    //Move while I cannot find a separator
-    while (*end && *end != ':' && *end != '-') end++;
-
-    //It has a range
-    if (likely(*end)) {
-        *end++ = '\0';
-        if (*end == '!') {
-            netdata_log_info("The exclusion cannot be in the second part of the range, the range %s will be ignored.", copied);
-            freez(copied);
-            return;
-        }
-        last = str2i((const char *)end);
-    } else {
-        last = 0;
-    }
-
-    first = str2i((const char *)range);
-    if (first < NETDATA_MINIMUM_PORT_VALUE || first > NETDATA_MAXIMUM_PORT_VALUE) {
-        netdata_log_info("The first port %d of the range \"%s\" is invalid and it will be ignored!", first, copied);
-        freez(copied);
-        return;
-    }
-
-    if (!last)
-        last = first;
-
-    if (last < NETDATA_MINIMUM_PORT_VALUE || last > NETDATA_MAXIMUM_PORT_VALUE) {
-        netdata_log_info("The second port %d of the range \"%s\" is invalid and the whole range will be ignored!", last, copied);
-        freez(copied);
-        return;
-    }
-
-    if (first > last) {
-        netdata_log_info("The specified order %s is wrong, the smallest value is always the first, it will be ignored!", copied);
-        freez(copied);
-        return;
-    }
-
-    ebpf_network_viewer_port_list_t *w;
-fillenvpl:
-    w = callocz(1, sizeof(ebpf_network_viewer_port_list_t));
-    w->value = copied;
-    w->hash = simple_hash(copied);
-    w->first = (uint16_t)htons((uint16_t)first);
-    w->last = (uint16_t)htons((uint16_t)last);
-    w->cmp_first = (uint16_t)first;
-    w->cmp_last = (uint16_t)last;
-
-    fill_port_list(list, w);
-}
-
-/**
- * Read max dimension.
- *
- * Netdata plot two dimensions per connection, so it is necessary to adjust the values.
- *
- * @param cfg the configuration structure
- */
-static void read_max_dimension(struct config *cfg)
-{
-    int maxdim ;
-    maxdim = (int) appconfig_get_number(cfg,
-                                        EBPF_NETWORK_VIEWER_SECTION,
-                                        EBPF_MAXIMUM_DIMENSIONS,
-                                        NETDATA_NV_CAP_VALUE);
-    if (maxdim < 0) {
-        netdata_log_error("'maximum dimensions = %d' must be a positive number, Netdata will change for default value %ld.",
-              maxdim, NETDATA_NV_CAP_VALUE);
-        maxdim = NETDATA_NV_CAP_VALUE;
-    }
-
-    maxdim /= 2;
-    if (!maxdim) {
-        netdata_log_info("The number of dimensions is too small (%u), we are setting it to minimum 2", network_viewer_opt.max_dim);
-        network_viewer_opt.max_dim = 1;
-        return;
-    }
-
-    network_viewer_opt.max_dim = (uint32_t)maxdim;
-}
-
-/**
- * Parse Port Range
- *
- * Parse the port ranges given and create Network Viewer Port Structure
- *
- * @param ptr  is a pointer with the text to parse.
- */
-static void parse_ports(char *ptr)
-{
-    // No value
-    if (unlikely(!ptr))
-        return;
-
-    while (likely(ptr)) {
-        // Move forward until next valid character
-        while (isspace(*ptr)) ptr++;
-
-        // No valid value found
-        if (unlikely(!*ptr))
-            return;
-
-        // Find space that ends the list
-        char *end = strchr(ptr, ' ');
-        if (end) {
-            *end++ = '\0';
-        }
-
-        int neg = 0;
-        if (*ptr == '!') {
-            neg++;
-            ptr++;
-        }
-
-        if (isdigit(*ptr)) { // Parse port
-            parse_port_list((!neg)?(void **)&network_viewer_opt.included_port:(void **)&network_viewer_opt.excluded_port,
-                            ptr);
-        } else if (isalpha(*ptr)) { // Parse service
-            parse_service_list((!neg)?(void **)&network_viewer_opt.included_port:(void **)&network_viewer_opt.excluded_port,
-                               ptr);
-        } else if (*ptr == '*') { // All
-            parse_port_list((!neg)?(void **)&network_viewer_opt.included_port:(void **)&network_viewer_opt.excluded_port,
-                            ptr);
-        }
-
-        ptr = end;
-    }
-}
-
-/**
- * Link hostname
- *
- * @param out is the output link list
- * @param in the hostname to add to list.
- */
-static void link_hostname(ebpf_network_viewer_hostname_list_t **out, ebpf_network_viewer_hostname_list_t *in)
-{
-    if (likely(*out)) {
-        ebpf_network_viewer_hostname_list_t *move = *out;
-        for (; move->next ; move = move->next ) {
-            if (move->hash == in->hash && !strcmp(move->value, in->value)) {
-                netdata_log_info("The hostname %s was already inserted, it will be ignored.", in->value);
-                freez(in->value);
-                simple_pattern_free(in->value_pattern);
-                freez(in);
-                return;
-            }
-        }
-
-        move->next = in;
-    } else {
-        *out = in;
-    }
-#ifdef NETDATA_INTERNAL_CHECKS
-    netdata_log_info("Adding value %s to %s hostname list used on network viewer",
-         in->value,
-         (*out == network_viewer_opt.included_hostnames)?"included":"excluded");
-#endif
-}
-
-/**
- * Link Hostnames
- *
- * Parse the list of hostnames to create the link list.
- * This is not associated with the IP, because simple patterns like *example* cannot be resolved to IP.
- *
- * @param out is the output link list
- * @param parse is a pointer with the text to parser.
- */
-static void link_hostnames(char *parse)
-{
-    // No value
-    if (unlikely(!parse))
-        return;
-
-    while (likely(parse)) {
-        // Find the first valid value
-        while (isspace(*parse)) parse++;
-
-        // No valid value found
-        if (unlikely(!*parse))
-            return;
-
-        // Find space that ends the list
-        char *end = strchr(parse, ' ');
-        if (end) {
-            *end++ = '\0';
-        }
-
-        int neg = 0;
-        if (*parse == '!') {
-            neg++;
-            parse++;
-        }
-
-        ebpf_network_viewer_hostname_list_t *hostname = callocz(1 , sizeof(ebpf_network_viewer_hostname_list_t));
-        hostname->value = strdupz(parse);
-        hostname->hash = simple_hash(parse);
-        hostname->value_pattern = simple_pattern_create(parse, NULL, SIMPLE_PATTERN_EXACT, true);
-
-        link_hostname((!neg)?&network_viewer_opt.included_hostnames:&network_viewer_opt.excluded_hostnames,
-                      hostname);
-
-        parse = end;
-    }
-}
-
-/**
- * Parse network viewer section
- *
- * @param cfg the configuration structure
- */
-void parse_network_viewer_section(struct config *cfg)
-{
-    read_max_dimension(cfg);
-
-    network_viewer_opt.hostname_resolution_enabled = appconfig_get_boolean(cfg,
-                                                                           EBPF_NETWORK_VIEWER_SECTION,
-                                                                           EBPF_CONFIG_RESOLVE_HOSTNAME,
-                                                                           CONFIG_BOOLEAN_NO);
-
-    network_viewer_opt.service_resolution_enabled = appconfig_get_boolean(cfg,
-                                                                          EBPF_NETWORK_VIEWER_SECTION,
-                                                                          EBPF_CONFIG_RESOLVE_SERVICE,
-                                                                          CONFIG_BOOLEAN_NO);
-
-    char *value = appconfig_get(cfg, EBPF_NETWORK_VIEWER_SECTION, EBPF_CONFIG_PORTS, NULL);
-    parse_ports(value);
-
-    if (network_viewer_opt.hostname_resolution_enabled) {
-        value = appconfig_get(cfg, EBPF_NETWORK_VIEWER_SECTION, EBPF_CONFIG_HOSTNAMES, NULL);
-        link_hostnames(value);
-    } else {
-        netdata_log_info("Name resolution is disabled, collector will not parser \"hostnames\" list.");
-    }
-
-    value = appconfig_get(cfg, EBPF_NETWORK_VIEWER_SECTION,
-                          "ips", "!127.0.0.1/8 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 fc00::/7 !::1/128");
-    ebpf_parse_ips(value);
-}
-
 /**
  * Link dimension name
  *
@@ -3838,7 +2579,7 @@ void parse_network_viewer_section(struct config *cfg)
  * @param hash the calculated hash for the dimension name.
  * @param name the dimension name.
  */
-static void link_dimension_name(char *port, uint32_t hash, char *value)
+static void ebpf_link_dimension_name(char *port, uint32_t hash, char *value)
 {
     int test = str2i(port);
     if (test < NETDATA_MINIMUM_PORT_VALUE || test > NETDATA_MAXIMUM_PORT_VALUE){
@@ -3883,13 +2624,13 @@ static void link_dimension_name(char *port, uint32_t hash, char *value)
  *
  * @param cfg the configuration structure
  */
-void parse_service_name_section(struct config *cfg)
+void ebpf_parse_service_name_section(struct config *cfg)
 {
     struct section *co = appconfig_get_section(cfg, EBPF_SERVICE_NAME_SECTION);
     if (co) {
         struct config_option *cv;
         for (cv = co->values; cv ; cv = cv->next) {
-            link_dimension_name(cv->name, cv->hash, cv->value);
+            ebpf_link_dimension_name(cv->name, cv->hash, cv->value);
         }
     }
 
@@ -3910,23 +2651,21 @@ void parse_service_name_section(struct config *cfg)
         // if variable has an invalid value, we assume netdata is using 19999
         int default_port = str2i(port_string);
         if (default_port > 0 && default_port < 65536)
-            link_dimension_name(port_string, simple_hash(port_string), "Netdata");
+            ebpf_link_dimension_name(port_string, simple_hash(port_string), "Netdata");
     }
 }
 
+/**
+ * Parse table size options
+ *
+ * @param cfg configuration options read from user file.
+ */
 void parse_table_size_options(struct config *cfg)
 {
-    socket_maps[NETDATA_SOCKET_TABLE_BANDWIDTH].user_input = (uint32_t) appconfig_get_number(cfg,
-                                                                                            EBPF_GLOBAL_SECTION,
-                                                                                            EBPF_CONFIG_BANDWIDTH_SIZE, NETDATA_MAXIMUM_CONNECTIONS_ALLOWED);
-
-    socket_maps[NETDATA_SOCKET_TABLE_IPV4].user_input = (uint32_t) appconfig_get_number(cfg,
-                                                                                       EBPF_GLOBAL_SECTION,
-                                                                                       EBPF_CONFIG_IPV4_SIZE, NETDATA_MAXIMUM_CONNECTIONS_ALLOWED);
-
-    socket_maps[NETDATA_SOCKET_TABLE_IPV6].user_input = (uint32_t) appconfig_get_number(cfg,
-                                                                                       EBPF_GLOBAL_SECTION,
-                                                                                       EBPF_CONFIG_IPV6_SIZE, NETDATA_MAXIMUM_CONNECTIONS_ALLOWED);
+    socket_maps[NETDATA_SOCKET_OPEN_SOCKET].user_input = (uint32_t) appconfig_get_number(cfg,
+                                                                                        EBPF_GLOBAL_SECTION,
+                                                                                        EBPF_CONFIG_SOCKET_MONITORING_SIZE,
+                                                                                        NETDATA_MAXIMUM_CONNECTIONS_ALLOWED);
 
     socket_maps[NETDATA_SOCKET_TABLE_UDP].user_input = (uint32_t) appconfig_get_number(cfg,
                                                                                       EBPF_GLOBAL_SECTION,
@@ -3965,7 +2704,7 @@ static int ebpf_socket_load_bpf(ebpf_module_t *em)
 #endif
 
     if (ret) {
-        netdata_log_error("%s %s", EBPF_DEFAULT_ERROR_MSG, em->thread_name);
+        netdata_log_error("%s %s", EBPF_DEFAULT_ERROR_MSG, em->info.thread_name);
     }
 
     return ret;
@@ -3985,25 +2724,23 @@ void *ebpf_socket_thread(void *ptr)
     netdata_thread_cleanup_push(ebpf_socket_exit, ptr);
 
     ebpf_module_t *em = (ebpf_module_t *)ptr;
+    if (em->enabled > NETDATA_THREAD_EBPF_FUNCTION_RUNNING) {
+        collector_error("There is already a thread %s running", em->info.thread_name);
+        return NULL;
+    }
+
     em->maps = socket_maps;
 
+    rw_spinlock_write_lock(&network_viewer_opt.rw_spinlock);
+    // It was not enabled in the main config file (ebpf.d.conf)
+    if (!network_viewer_opt.enabled)
+        network_viewer_opt.enabled = appconfig_get_boolean(&socket_config, EBPF_NETWORK_VIEWER_SECTION, "enabled",
+                                                           CONFIG_BOOLEAN_YES);
+    rw_spinlock_write_unlock(&network_viewer_opt.rw_spinlock);
+
     parse_table_size_options(&socket_config);
 
-    if (pthread_mutex_init(&nv_mutex, NULL)) {
-        netdata_log_error("Cannot initialize local mutex");
-        goto endsocket;
-    }
-
-    ebpf_socket_allocate_global_vectors(em->apps_charts);
-
-    if (network_viewer_opt.enabled) {
-        memset(&inbound_vectors.tree, 0, sizeof(avl_tree_lock));
-        memset(&outbound_vectors.tree, 0, sizeof(avl_tree_lock));
-        avl_init_lock(&inbound_vectors.tree, ebpf_compare_sockets);
-        avl_init_lock(&outbound_vectors.tree, ebpf_compare_sockets);
-
-        initialize_inbound_outbound();
-    }
+    ebpf_socket_initialize_global_vectors();
 
     if (running_on_kernel < NETDATA_EBPF_KERNEL_5_0)
         em->mode = MODE_ENTRY;
@@ -4026,8 +2763,15 @@ void *ebpf_socket_thread(void *ptr)
         socket_aggregated_data, socket_publish_aggregated, socket_dimension_names, socket_id_names,
         algorithms, NETDATA_MAX_SOCKET_VECTOR);
 
+    ebpf_read_socket.thread = mallocz(sizeof(netdata_thread_t));
+    netdata_thread_create(ebpf_read_socket.thread,
+                          ebpf_read_socket.name,
+                          NETDATA_THREAD_OPTION_DEFAULT,
+                          ebpf_read_socket_thread,
+                          em);
+
     pthread_mutex_lock(&lock);
-    ebpf_create_global_charts(em);
+    ebpf_socket_create_global_charts(em);
 
     ebpf_update_stats(&plugin_statistics, em);
     ebpf_update_kernel_memory_with_vector(&plugin_statistics, em->maps, EBPF_ACTION_STAT_ADD);
diff --git a/collectors/ebpf.plugin/ebpf_socket.h b/collectors/ebpf.plugin/ebpf_socket.h
index ae2ee28abc..fb2404c240 100644
--- a/collectors/ebpf.plugin/ebpf_socket.h
+++ b/collectors/ebpf.plugin/ebpf_socket.h
@@ -4,6 +4,11 @@
 #include <stdint.h>
 #include "libnetdata/avl/avl.h"
 
+#include <sys/socket.h>
+#ifdef HAVE_NETDB_H
+#include <netdb.h>
+#endif
+
 // Module name & description
 #define NETDATA_EBPF_MODULE_NAME_SOCKET "socket"
 #define NETDATA_EBPF_SOCKET_MODULE_DESC "Monitors TCP and UDP bandwidth. This thread is integrated with apps and cgroup."
@@ -11,8 +16,6 @@
 // Vector indexes
 #define NETDATA_UDP_START 3
 
-#define NETDATA_SOCKET_READ_SLEEP_MS 800000ULL
-
 // config file
 #define NETDATA_NETWORK_CONFIG_FILE "network.conf"
 #define EBPF_NETWORK_VIEWER_SECTION "network connections"
@@ -21,18 +24,13 @@
 #define EBPF_CONFIG_RESOLVE_SERVICE "resolve service names"
 #define EBPF_CONFIG_PORTS "ports"
 #define EBPF_CONFIG_HOSTNAMES "hostnames"
-#define EBPF_CONFIG_BANDWIDTH_SIZE "bandwidth table size"
-#define EBPF_CONFIG_IPV4_SIZE "ipv4 connection table size"
-#define EBPF_CONFIG_IPV6_SIZE "ipv6 connection table size"
+#define EBPF_CONFIG_SOCKET_MONITORING_SIZE "socket monitoring table size"
 #define EBPF_CONFIG_UDP_SIZE "udp connection table size"
-#define EBPF_MAXIMUM_DIMENSIONS "maximum dimensions"
 
 enum ebpf_socket_table_list {
-    NETDATA_SOCKET_TABLE_BANDWIDTH,
     NETDATA_SOCKET_GLOBAL,
     NETDATA_SOCKET_LPORTS,
-    NETDATA_SOCKET_TABLE_IPV4,
-    NETDATA_SOCKET_TABLE_IPV6,
+    NETDATA_SOCKET_OPEN_SOCKET,
     NETDATA_SOCKET_TABLE_UDP,
     NETDATA_SOCKET_TABLE_CTRL
 };
@@ -122,13 +120,6 @@ typedef enum ebpf_socket_idx {
 #define NETDATA_NET_APPS_BANDWIDTH_UDP_SEND_CALLS "bandwidth_udp_send"
 #define NETDATA_NET_APPS_BANDWIDTH_UDP_RECV_CALLS "bandwidth_udp_recv"
 
-// Network viewer charts
-#define NETDATA_NV_OUTBOUND_BYTES "outbound_bytes"
-#define NETDATA_NV_OUTBOUND_PACKETS "outbound_packets"
-#define NETDATA_NV_OUTBOUND_RETRANSMIT "outbound_retransmit"
-#define NETDATA_NV_INBOUND_BYTES "inbound_bytes"
-#define NETDATA_NV_INBOUND_PACKETS "inbound_packets"
-
 // Port range
 #define NETDATA_MINIMUM_PORT_VALUE 1
 #define NETDATA_MAXIMUM_PORT_VALUE 65535
@@ -163,6 +154,8 @@ typedef enum ebpf_socket_idx {
 
 // ARAL name
 #define NETDATA_EBPF_SOCKET_ARAL_NAME "ebpf_socket"
+#define NETDATA_EBPF_PID_SOCKET_ARAL_TABLE_NAME "ebpf_pid_socket"
+#define NETDATA_EBPF_SOCKET_ARAL_TABLE_NAME "ebpf_socket_tbl"
 
 typedef struct ebpf_socket_publish_apps {
     // Data read
@@ -246,10 +239,11 @@ typedef struct ebpf_network_viewer_hostname_list {
     struct ebpf_network_viewer_hostname_list *next;
 } ebpf_network_viewer_hostname_list_t;
 
-#define NETDATA_NV_CAP_VALUE 50L
 typedef struct ebpf_network_viewer_options {
+    RW_SPINLOCK rw_spinlock;
+
     uint32_t enabled;
-    uint32_t max_dim;   // Store value read from 'maximum dimensions'
+    uint32_t family;                                        // AF_INET, AF_INET6 or AF_UNSPEC (both)
 
     uint32_t hostname_resolution_enabled;
     uint32_t service_resolution_enabled;
@@ -275,98 +269,82 @@ extern ebpf_network_viewer_options_t network_viewer_opt;
  * Structure to store socket information
  */
 typedef struct netdata_socket {
-    uint64_t recv_packets;
-    uint64_t sent_packets;
-    uint64_t recv_bytes;
-    uint64_t sent_bytes;
-    uint64_t first; // First timestamp
-    uint64_t ct;   // Current timestamp
-    uint32_t retransmit; // It is never used with UDP
+    // Timestamp
+    uint64_t first_timestamp;
+    uint64_t current_timestamp;
+    // Socket additional info
     uint16_t protocol;
-    uint16_t reserved;
+    uint16_t family;
+    uint32_t external_origin;
+    struct {
+        uint32_t call_tcp_sent;
+        uint32_t call_tcp_received;
+        uint64_t tcp_bytes_sent;
+        uint64_t tcp_bytes_received;
+        uint32_t close;        // It is never used with UDP
+        uint32_t retransmit;   // It is never used with UDP
+        uint32_t ipv4_connect;
+        uint32_t ipv6_connect;
+    } tcp;
+
+    struct {
+        uint32_t call_udp_sent;
+        uint32_t call_udp_received;
+        uint64_t udp_bytes_sent;
+        uint64_t udp_bytes_received;
+    } udp;
 } netdata_socket_t;
 
-typedef struct netdata_plot_values {
-    // Values used in the previous iteration
-    uint64_t recv_packets;
-    uint64_t sent_packets;
-    uint64_t recv_bytes;
-    uint64_t sent_bytes;
-    uint32_t retransmit;
+typedef enum netdata_socket_flags {
+    NETDATA_SOCKET_FLAGS_ALREADY_OPEN = (1<<0)
+} netdata_socket_flags_t;
 
-    uint64_t last_time;
+typedef enum netdata_socket_src_ip_origin {
+    NETDATA_EBPF_SRC_IP_ORIGIN_LOCAL,
+    NETDATA_EBPF_SRC_IP_ORIGIN_EXTERNAL
+} netdata_socket_src_ip_origin_t;
 
-    // Values used to plot
-    uint64_t plot_recv_packets;
-    uint64_t plot_sent_packets;
-    uint64_t plot_recv_bytes;
-    uint64_t plot_sent_bytes;
-    uint16_t plot_retransmit;
-} netdata_plot_values_t;
+typedef struct netdata_socket_plus {
+    netdata_socket_t data;           // Data read from database
+    uint32_t pid;
+    time_t last_update;
+    netdata_socket_flags_t flags;
+
+    struct {
+        char src_ip[INET6_ADDRSTRLEN + 1];
+ //       uint16_t src_port;
+        char dst_ip[INET6_ADDRSTRLEN + 1];
+        char dst_port[NI_MAXSERV + 1];
+    } socket_string;
+} netdata_socket_plus_t;
+
+enum netdata_udp_ports {
+    NETDATA_EBPF_UDP_PORT = 53
+};
+
+extern ARAL *aral_socket_table;
 
 /**
  * Index used together previous structure
  */
 typedef struct netdata_socket_idx {
     union netdata_ip_t saddr;
-    uint16_t sport;
+    //uint16_t sport;
     union netdata_ip_t daddr;
     uint16_t dport;
+    uint32_t pid;
 } netdata_socket_idx_t;
 
-// Next values were defined according getnameinfo(3)
-#define NETDATA_MAX_NETWORK_COMBINED_LENGTH 1018
-#define NETDATA_DOTS_PROTOCOL_COMBINED_LENGTH 5 // :TCP:
-#define NETDATA_DIM_LENGTH_WITHOUT_SERVICE_PROTOCOL 979
-
-#define NETDATA_INBOUND_DIRECTION (uint32_t)1
-#define NETDATA_OUTBOUND_DIRECTION (uint32_t)2
-/**
- * Allocate the maximum number of structures in the beginning, this can force the collector to use more memory
- * in the long term, on the other had it is faster.
- */
-typedef struct netdata_socket_plot {
-    // Search
-    avl_t avl;
-    netdata_socket_idx_t index;
-
-    // Current data
-    netdata_socket_t sock;
-
-    // Previous values and values used to write on chart.
-    netdata_plot_values_t plot;
-
-    int family;                     // AF_INET or AF_INET6
-    char *resolved_name;            // Resolve only in the first call
-    unsigned char resolved;
-
-    char *dimension_sent;
-    char *dimension_recv;
-    char *dimension_retransmit;
-
-    uint32_t flags;
-} netdata_socket_plot_t;
-
-#define NETWORK_VIEWER_CHARTS_CREATED (uint32_t)1
-typedef struct netdata_vector_plot {
-    netdata_socket_plot_t *plot;    // Vector used to plot charts
-
-    avl_tree_lock tree;             // AVL tree to speed up search
-    uint32_t last;                  // The 'other' dimension, the last chart accepted.
-    uint32_t next;                  // The next position to store in the vector.
-    uint32_t max_plot;              // Max number of elements to plot.
-    uint32_t last_plot;             // Last element plot
-
-    uint32_t flags;                 // Flags
-
-} netdata_vector_plot_t;
-
-void clean_port_structure(ebpf_network_viewer_port_list_t **clean);
+void ebpf_clean_port_structure(ebpf_network_viewer_port_list_t **clean);
 extern ebpf_network_viewer_port_list_t *listen_ports;
 void update_listen_table(uint16_t value, uint16_t proto, netdata_passive_connection_t *values);
-void parse_network_viewer_section(struct config *cfg);
-void ebpf_fill_ip_list(ebpf_network_viewer_ip_list_t **out, ebpf_network_viewer_ip_list_t *in, char *table);
-void parse_service_name_section(struct config *cfg);
+void ebpf_fill_ip_list_unsafe(ebpf_network_viewer_ip_list_t **out, ebpf_network_viewer_ip_list_t *in, char *table);
+void ebpf_parse_service_name_section(struct config *cfg);
+void ebpf_parse_ips_unsafe(char *ptr);
+void ebpf_parse_ports(char *ptr);
+void ebpf_socket_read_open_connections(BUFFER *buf, struct ebpf_module *em);
+void ebpf_socket_fill_publish_apps(uint32_t current_pid, netdata_socket_t *ns);
+
 
 extern struct config socket_config;
 extern netdata_ebpf_targets_t socket_targets[];
diff --git a/collectors/ebpf.plugin/ebpf_swap.c b/collectors/ebpf.plugin/ebpf_swap.c
index 359fe23082..9629a09b11 100644
--- a/collectors/ebpf.plugin/ebpf_swap.c
+++ b/collectors/ebpf.plugin/ebpf_swap.c
@@ -124,13 +124,6 @@ static int ebpf_swap_attach_kprobe(struct swap_bpf *obj)
     if (ret)
         return -1;
 
-    obj->links.netdata_release_task_probe = bpf_program__attach_kprobe(obj->progs.netdata_release_task_probe,
-                                                                         false,
-                                                                         EBPF_COMMON_FNCT_CLEAN_UP);
-    ret = libbpf_get_error(obj->links.netdata_swap_writepage_probe);
-    if (ret)
-        return -1;
-
     return 0;
 }
 
@@ -176,7 +169,6 @@ static void ebpf_swap_adjust_map(struct swap_bpf *obj, ebpf_module_t *em)
 static void ebpf_swap_disable_release_task(struct swap_bpf *obj)
 {
     bpf_program__set_autoload(obj->progs.netdata_release_task_fentry, false);
-    bpf_program__set_autoload(obj->progs.netdata_release_task_probe, false);
 }
 
 /**
@@ -959,7 +951,7 @@ static int ebpf_swap_load_bpf(ebpf_module_t *em)
 #endif
 
     if (ret)
-        netdata_log_error("%s %s", EBPF_DEFAULT_ERROR_MSG, em->thread_name);
+        netdata_log_error("%s %s", EBPF_DEFAULT_ERROR_MSG, em->info.thread_name);
 
     return ret;
 }
diff --git a/collectors/ebpf.plugin/ebpf_sync.c b/collectors/ebpf.plugin/ebpf_sync.c
index 521d39f31d..064690683e 100644
--- a/collectors/ebpf.plugin/ebpf_sync.c
+++ b/collectors/ebpf.plugin/ebpf_sync.c
@@ -383,7 +383,7 @@ static void ebpf_sync_exit(void *ptr)
  */
 static int ebpf_sync_load_legacy(ebpf_sync_syscalls_t *w, ebpf_module_t *em)
 {
-    em->thread_name = w->syscall;
+    em->info.thread_name = w->syscall;
     if (!w->probe_links) {
         w->probe_links = ebpf_load_program(ebpf_plugin_dir, em, running_on_kernel, isrh, &w->objects);
         if (!w->probe_links) {
@@ -413,7 +413,7 @@ static int ebpf_sync_initialize_syscall(ebpf_module_t *em)
 #endif
 
     int i;
-    const char *saved_name = em->thread_name;
+    const char *saved_name = em->info.thread_name;
     int errors = 0;
     for (i = 0; local_syscalls[i].syscall; i++) {
         ebpf_sync_syscalls_t *w = &local_syscalls[i];
@@ -424,7 +424,7 @@ static int ebpf_sync_initialize_syscall(ebpf_module_t *em)
                 if (ebpf_sync_load_legacy(w, em))
                     errors++;
 
-                em->thread_name = saved_name;
+                em->info.thread_name = saved_name;
             }
 #ifdef LIBBPF_MAJOR_VERSION
             else {
@@ -446,12 +446,12 @@ static int ebpf_sync_initialize_syscall(ebpf_module_t *em)
                     w->enabled = false;
                 }
 
-                em->thread_name = saved_name;
+                em->info.thread_name = saved_name;
             }
 #endif
         }
     }
-    em->thread_name = saved_name;
+    em->info.thread_name = saved_name;
 
     memset(sync_counter_aggregated_data, 0 , NETDATA_SYNC_IDX_END * sizeof(netdata_syscall_stat_t));
     memset(sync_counter_publish_aggregated, 0 , NETDATA_SYNC_IDX_END * sizeof(netdata_publish_syscall_t));
diff --git a/collectors/ebpf.plugin/ebpf_unittest.c b/collectors/ebpf.plugin/ebpf_unittest.c
index 3e1443ad37..11b449e03b 100644
--- a/collectors/ebpf.plugin/ebpf_unittest.c
+++ b/collectors/ebpf.plugin/ebpf_unittest.c
@@ -12,8 +12,8 @@ ebpf_module_t test_em;
 void ebpf_ut_initialize_structure(netdata_run_mode_t mode)
 {
     memset(&test_em, 0, sizeof(ebpf_module_t));
-    test_em.thread_name = strdupz("process");
-    test_em.config_name = test_em.thread_name;
+    test_em.info.thread_name = strdupz("process");
+    test_em.info.config_name = test_em.info.thread_name;
     test_em.kernels = NETDATA_V3_10 | NETDATA_V4_14 | NETDATA_V4_16 | NETDATA_V4_18 | NETDATA_V5_4 | NETDATA_V5_10 |
                       NETDATA_V5_14;
     test_em.pid_map_size = ND_EBPF_DEFAULT_PID_SIZE;
@@ -28,7 +28,7 @@ void ebpf_ut_initialize_structure(netdata_run_mode_t mode)
  */
 void ebpf_ut_cleanup_memory()
 {
-    freez((void *)test_em.thread_name);
+    freez((void *)test_em.info.thread_name);
 }
 
 /**
@@ -70,14 +70,14 @@ int ebpf_ut_load_real_binary()
  */
 int ebpf_ut_load_fake_binary()
 {
-    const char *original = test_em.thread_name;
+    const char *original = test_em.info.thread_name;
 
-    test_em.thread_name = strdupz("I_am_not_here");
+    test_em.info.thread_name = strdupz("I_am_not_here");
     int ret = ebpf_ut_load_binary();
 
     ebpf_ut_cleanup_memory();
 
-    test_em.thread_name = original;
+    test_em.info.thread_name = original;
 
     return !ret;
 }
diff --git a/docs/cloud/netdata-functions.md b/docs/cloud/netdata-functions.md
index 949c8b4cc4..80616ca419 100644
--- a/docs/cloud/netdata-functions.md
+++ b/docs/cloud/netdata-functions.md
@@ -33,7 +33,8 @@ functions - [plugins.d](https://github.com/netdata/netdata/blob/master/collector
 | Function | Description | plugin - module |
 | :-- | :-- | :-- |
 | processes | Detailed information on the currently running processes on the node. | [apps.plugin](https://github.com/netdata/netdata/blob/master/collectors/apps.plugin/README.md) |
-| ebpf_thread | Controller for eBPF threads. | [ebpf.plugin](https://github.com/netdata/netdata/blob/master/collectors/ebpf.plugin/README.md) |
+| ebpf_socket | Detailed socket information. | [ebpf.plugin](https://github.com/netdata/netdata/blob/master/collectors/ebpf.plugin/README.md#ebpf_socket) |
+| ebpf_thread | Controller for eBPF threads. | [ebpf.plugin](https://github.com/netdata/netdata/blob/master/collectors/ebpf.plugin/README.md#ebpf_thread) |
 
 If you have ideas or requests for other functions:
 * Participate in the relevant [GitHub discussion](https://github.com/netdata/netdata/discussions/14412)
diff --git a/libnetdata/ebpf/ebpf.c b/libnetdata/ebpf/ebpf.c
index 6793f403a9..1bd45ef258 100644
--- a/libnetdata/ebpf/ebpf.c
+++ b/libnetdata/ebpf/ebpf.c
@@ -792,13 +792,13 @@ void ebpf_update_controller(int fd, ebpf_module_t *em)
 {
     uint32_t values[NETDATA_CONTROLLER_END] = {
         (em->apps_charts & NETDATA_EBPF_APPS_FLAG_YES) | em->cgroup_charts,
-        em->apps_level
+        em->apps_level, 0, 0, 0, 0
     };
     uint32_t key;
-    uint32_t end = (em->apps_level != NETDATA_APPS_NOT_SET) ? NETDATA_CONTROLLER_END : NETDATA_CONTROLLER_APPS_LEVEL;
+    uint32_t end = NETDATA_CONTROLLER_PID_TABLE_ADD;
 
     for (key = NETDATA_CONTROLLER_APPS_ENABLED; key < end; key++) {
-        int ret = bpf_map_update_elem(fd, &key, &values[key], 0);
+        int ret = bpf_map_update_elem(fd, &key, &values[key], BPF_ANY);
         if (ret)
             netdata_log_error("Add key(%u) for controller table failed.", key);
     }
@@ -855,7 +855,7 @@ struct bpf_link **ebpf_load_program(char *plugins_dir, ebpf_module_t *em, int kv
 
     uint32_t idx = ebpf_select_index(em->kernels, is_rhf, kver);
 
-    ebpf_mount_name(lpath, 4095, plugins_dir, idx, em->thread_name, em->mode, is_rhf);
+    ebpf_mount_name(lpath, 4095, plugins_dir, idx, em->info.thread_name, em->mode, is_rhf);
 
     // When this function is called ebpf.plugin is using legacy code, so we should reset the variable
     em->load &= ~ NETDATA_EBPF_LOAD_METHODS;
@@ -1269,7 +1269,7 @@ void ebpf_update_module_using_config(ebpf_module_t *modules, netdata_ebpf_load_m
 
 #ifdef NETDATA_DEV_MODE
     netdata_log_info("The thread %s was configured with: mode = %s; update every = %d; apps = %s; cgroup = %s; ebpf type format = %s; ebpf co-re tracing = %s; collect pid = %s; maps per core = %s, lifetime=%u",
-         modules->thread_name,
+         modules->info.thread_name,
          load_mode,
          modules->update_every,
          (modules->apps_charts)?"enabled":"disabled",
diff --git a/libnetdata/ebpf/ebpf.h b/libnetdata/ebpf/ebpf.h
index 691a4c26ed..6708f669a6 100644
--- a/libnetdata/ebpf/ebpf.h
+++ b/libnetdata/ebpf/ebpf.h
@@ -301,11 +301,27 @@ enum ebpf_global_table_values {
 typedef uint64_t netdata_idx_t;
 
 typedef struct ebpf_module {
-    const char *thread_name;
-    const char *config_name;
-    const char *thread_description;
+    // Constants used with module
+    struct {
+        const char *thread_name;
+        const char *config_name;
+        const char *thread_description;
+    } info;
+
+    // Helpers used with plugin
+    struct {
+        void *(*start_routine)(void *);                             // the thread function
+        void (*apps_routine)(struct ebpf_module *em, void *ptr);    // the apps charts
+        void (*fnct_routine)(BUFFER *bf, struct ebpf_module *em);   // the function used for external requests
+        const char *fcnt_name;                                      // function name shown in Netdata Cloud
+        const char *fcnt_desc;                                      // description of the function
+        const char *fcnt_thread_chart_name;
+        int order_thread_chart;
+        const char *fcnt_thread_lifetime_name;
+        int order_thread_lifetime;
+    } functions;
+
     enum ebpf_threads_status enabled;
-    void *(*start_routine)(void *);
     int update_every;
     int global_charts;
     netdata_apps_integration_flags_t apps_charts;
@@ -314,7 +330,6 @@ typedef struct ebpf_module {
     netdata_run_mode_t mode;
     uint32_t thread_id;
     int optional;
-    void (*apps_routine)(struct ebpf_module *em, void *ptr);
     ebpf_local_maps_t *maps;
     ebpf_specify_name_t *names;
     uint32_t pid_map_size;
diff --git a/packaging/ebpf-co-re.checksums b/packaging/ebpf-co-re.checksums
index 6ee06dd1bd..c51f3ef5fd 100644
--- a/packaging/ebpf-co-re.checksums
+++ b/packaging/ebpf-co-re.checksums
@@ -1 +1 @@
-2abbbaf30a73e1ed365d42324a5128470568b008528c3ff8cd98d5eb86152f03  netdata-ebpf-co-re-glibc-v1.2.1.tar.xz
+7ef8d2a0f485b4c81942f66c50e1aedcd568b7997a933c50c0ebbd8353543c08  netdata-ebpf-co-re-glibc-v1.2.8.tar.xz
diff --git a/packaging/ebpf-co-re.version b/packaging/ebpf-co-re.version
index 6a5e98a744..d1f79a9413 100644
--- a/packaging/ebpf-co-re.version
+++ b/packaging/ebpf-co-re.version
@@ -1 +1 @@
-v1.2.1
+v1.2.8
diff --git a/packaging/ebpf.checksums b/packaging/ebpf.checksums
index e79daee9af..28f023d524 100644
--- a/packaging/ebpf.checksums
+++ b/packaging/ebpf.checksums
@@ -1,3 +1,3 @@
-cb0cd6ef4bdb8a39c42b152d328d4822217c59e1d616d3003bc67bc53a058275  ./netdata-kernel-collector-glibc-v1.2.1.tar.xz
-0633ff39e8654a21ab664a289f58daca5792cfaf2ed62dcaacf7cd267eeedd40  ./netdata-kernel-collector-musl-v1.2.1.tar.xz
-6ce60c5ac8f45cc6a01b7ac9ea150728963d0aca1ee6dfd568b0f8b2ba67b88b  ./netdata-kernel-collector-static-v1.2.1.tar.xz
+9035b6b8dda5230c1ddc44991518a3ee069bd497ad5a8e5448b79dc4b8c51c43  ./netdata-kernel-collector-glibc-v1.2.8.tar.xz
+e5b1a141475f75c60c282a2e3ce8e3914893e75d474c976bad95f66d4c9846c5  ./netdata-kernel-collector-musl-v1.2.8.tar.xz
+d6081a2fedc9435d1ab430697cb101123cebaac07b62fb91d790ca526923f4e3  ./netdata-kernel-collector-static-v1.2.8.tar.xz
diff --git a/packaging/ebpf.version b/packaging/ebpf.version
index 6a5e98a744..d1f79a9413 100644
--- a/packaging/ebpf.version
+++ b/packaging/ebpf.version
@@ -1 +1 @@
-v1.2.1
+v1.2.8