Mirror of https://github.com/netdata/netdata.git, synced 2025-05-22 16:37:46 +00:00
Docs directory lint documentation and fix issues (#18660)
* alerts-and-notifications broken link pass
* category-overview-pages pass
* dashboards and charts pass
* deployment-guides pass
* dev corner pass
* exporting metrics pass
* Netdata Agent pass
* Netdata Cloud pass
* observ centrl points pass
* sec and priv design pass
* final docs on docs/ folder
* web server readme fix
* fix broken link
This commit is contained in: parent dbec34183b, commit a5460023bf
67 changed files with 664 additions and 871 deletions
docs/netdata-agent/sizing-netdata-agents

@@ -23,7 +23,7 @@ The expected bandwidth consumption using `zstd` for 1 million samples per second
 
 The order compression algorithms is selected is configured in `stream.conf`, per `[API KEY]`, like this:
 
-```
+```txt
 compression algorithms order = zstd lz4 brotli gzip
 ```
 
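The configured order acts as a preference list: the first algorithm in the list that both peers support wins. A minimal illustrative sketch of that selection logic (not Netdata's actual negotiation code; `peer_supports` is a hypothetical capability set used only for this example):

```python
# Hypothetical sketch of preference-ordered algorithm selection.
# The order string mirrors the stream.conf value from the diff above.
configured_order = "zstd lz4 brotli gzip".split()

# Hypothetical set of algorithms the remote peer advertises.
peer_supports = {"lz4", "gzip"}

# Pick the first configured algorithm the peer also supports.
chosen = next((algo for algo in configured_order if algo in peer_supports), None)
print(chosen)  # lz4
```

With this order, `zstd` is preferred whenever both sides support it, falling back to `lz4`, `brotli`, then `gzip`.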
|
@@ -14,7 +14,7 @@ This number can be lowered by limiting the number of database tier or switching
 
 The general formula, with the default configuration of database tiers, is:
 
-```
+```txt
 memory = UNIQUE_METRICS x 16KiB + CONFIGURED_CACHES
 ```
 
@@ -22,7 +22,7 @@ The default `CONFIGURED_CACHES` is 32MiB.
 
 For 1 million concurrently collected time-series (independently of their data collection frequency), the memory required is:
 
-```
+```txt
 UNIQUE_METRICS = 1000000
 CONFIGURED_CACHES = 32MiB
 
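Plugging those example values into the formula gives roughly 15.3 GiB. A quick back-of-the-envelope check in Python (assuming binary units, i.e. 1 KiB = 1024 bytes):

```python
# Worked example of: memory = UNIQUE_METRICS x 16KiB + CONFIGURED_CACHES
KiB, MiB, GiB = 1024, 1024**2, 1024**3

unique_metrics = 1_000_000      # concurrently collected time-series
configured_caches = 32 * MiB    # the default CONFIGURED_CACHES

memory = unique_metrics * 16 * KiB + configured_caches
print(f"{memory / GiB:.2f} GiB")  # 15.29 GiB
```

So at 1 million unique metrics, the per-metric 16 KiB term dominates and the 32 MiB cache contribution is negligible by comparison.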