mirror of
https://github.com/netdata/netdata.git
synced 2025-05-17 14:42:21 +00:00
Docs fixes (#18676)
This commit is contained in: parent e6e8a3ed71, commit 7332919cf5
164 changed files with 415 additions and 1035 deletions
docs/netdata-agent/sizing-netdata-agents
@@ -23,7 +23,7 @@ The expected bandwidth consumption using `zstd` for 1 million samples per second
 
 The order compression algorithms is selected is configured in `stream.conf`, per `[API KEY]`, like this:
 
-```txt
+```text
 compression algorithms order = zstd lz4 brotli gzip
 ```
@@ -14,7 +14,7 @@ This number can be lowered by limiting the number of database tier or switching
 
 The general formula, with the default configuration of database tiers, is:
 
-```txt
+```text
 memory = UNIQUE_METRICS x 16KiB + CONFIGURED_CACHES
 ```
@@ -22,7 +22,7 @@ The default `CONFIGURED_CACHES` is 32MiB.
 
 For 1 million concurrently collected time-series (independently of their data collection frequency), the memory required is:
 
-```txt
+```text
 UNIQUE_METRICS = 1000000
 CONFIGURED_CACHES = 32MiB
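The memory formula in the hunks above is plain arithmetic. As a minimal sketch (the function name and unit constants below are illustrative, not part of Netdata), the 1-million-metric example from the diff works out to roughly 15.3 GiB plus the configured caches:

```python
# Sketch of the memory formula quoted in the diff above:
#   memory = UNIQUE_METRICS x 16KiB + CONFIGURED_CACHES
# estimate_memory_bytes is a hypothetical helper, not a Netdata API.

KIB = 1024
MIB = 1024 * KIB
GIB = 1024 * MIB

def estimate_memory_bytes(unique_metrics: int, configured_caches_bytes: int) -> int:
    """Estimated RAM with the default database-tier configuration."""
    return unique_metrics * 16 * KIB + configured_caches_bytes

# The example from the diff: 1 million metrics, 32MiB of caches.
total = estimate_memory_bytes(1_000_000, 32 * MIB)
print(f"{total / GIB:.2f} GiB")  # prints 15.29 GiB
```

Note that the formula counts concurrently collected unique time-series, independent of their collection frequency, so halving the collection interval does not change this estimate.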
|