Add initial tooling for generating integrations.js file. (#15406)
* Fix link tags in deploy.
* Add initial tooling for generating integrations.js file.
* Skip integrations directory for eslint.
* Add README to explain how to generate integrations.js locally.
* Fix ID/name for top-level categories.
* Deduplicate categories entries.
* Properly render related resources information.
* Warn on and skip bad references for related resources.
* Add CI workflow to rebuild integrations as-needed.
* Add integrations.js to build artifacts.
* Fix actionlint complaints.
* Assorted template fixes.
* Add script to check collector metadata.
* Add default categories for collectors when they have no categories.
* Fix template formatting issues.
* Link related resources properly.
* Skip more sections in rendered output if they are not present in source data.
* Temporarily skip config syntax section. It needs further work and is not critical at the moment.
* Fix metrics table rendering.
* Hide most overview content if method_description is empty.
* Fix metrics table rendering (again).
* Add detailed description to setup options section.
* Fix detailed description handling for config options.
* Fix config example folding logic.
* Fix multi-instance selection.
* Properly fix multi-instance selection.
* Add titles for labels and metrics charts.
* Include monitored instance name in integration ID. This is required to disambiguate some ‘virtual’ integrations.
* Indicate if there are no alerts defined for an integration.
* Fix multi-instance in template.
* Improve warning handling in script and fix category handling.
* Hide debug messages by default.
* Fix invalid category name in cgroups plugin.
* Completely fix invalid categories in cgroups plugin.
* Warn about and ignore duplicate integration ids.
* Flag integration type in integrations list.
* Add configuration syntax samples.
* Fix issues in gen_integrations.py
* Validate categories.yaml on load.
* Add support for handling deployment information.
* Fix bugs in gen_integrations.py
* Add code to handle exporters.
* Add link to integrations pointing to their source files.
* Fix table justification.
* Add notification handling to script. Also tidy up a few other things.
* Fix numerous bugs in gen_integrations.py
* Remove trailing space from deploy.yaml command.
* Make availability one column.
* Switch back to multiple columns for availability, and also switch from +/- to a dot for positive and an empty cell for negative.
* Render setup description.
* Fix platform info rendering in deploy integrations.
* Fix sourcing of cloud-notifications metadata.
* Fix rendering of empty metrics.
* Fix alerts template.
* Add per-instance templating for templated keys.
* Fix go plugin links.
* Fix overview template.
* Fix handling of exporters.
* Fix loading of cloud notification integrations.
* Always show full collector overview.
* Add static troubleshooting content when appropriate.
* Assorted deploy integration updates.
* Add initial copy of integrations.js.

---------

Co-authored-by: Fotis Voutsas <fotis@netdata.cloud>
Parent: 7773b5ee33
Commit: 183bb1db19
32 changed files with 1395 additions and 98 deletions
.eslintignore
.github/workflows/
collectors/cgroups.plugin/
integrations/
  README.md
  categories.yaml
  check_collector_metadata.py
  cloud-notifications/
  deploy.yaml
  gen_integrations.py
  integrations.js
  schemas/
  templates/
.eslintignore

@@ -1,3 +1,4 @@
 **/*{.,-}min.js
+integrations/*
 web/gui/v1/*
 web/gui/v2/*
.github/workflows/build.yml (5 changes, vendored)

@@ -519,6 +519,7 @@ jobs:
       mv ../static-archive/* . || exit 1
       ln -s ${{ needs.build-dist.outputs.distfile }} netdata-latest.tar.gz || exit 1
       cp ../packaging/version ./latest-version.txt || exit 1
+      cp ../integrations/integrations.js ./integrations.js || exit 1
       sha256sum -b ./* > sha256sums.txt || exit 1
       cat sha256sums.txt
   - name: Store Artifacts

@@ -753,7 +754,7 @@ jobs:
     with:
       allowUpdates: false
       artifactErrorsFailBuild: true
-      artifacts: 'final-artifacts/sha256sums.txt,final-artifacts/netdata-*.tar.gz,final-artifacts/netdata-*.gz.run'
+      artifacts: 'final-artifacts/sha256sums.txt,final-artifacts/netdata-*.tar.gz,final-artifacts/netdata-*.gz.run,final-artifacts/integrations.js'
       owner: netdata
       repo: netdata-nightlies
       body: Netdata nightly build for ${{ steps.version.outputs.date }}.

@@ -823,7 +824,7 @@ jobs:
     with:
       allowUpdates: false
       artifactErrorsFailBuild: true
-      artifacts: 'final-artifacts/sha256sums.txt,final-artifacts/netdata-*.tar.gz,final-artifacts/netdata-*.gz.run'
+      artifacts: 'final-artifacts/sha256sums.txt,final-artifacts/netdata-*.tar.gz,final-artifacts/netdata-*.gz.run,final-artifacts/integrations.js'
       draft: true
       tag: ${{ needs.normalize-tag.outputs.tag }}
       token: ${{ secrets.NETDATABOT_GITHUB_TOKEN }}
.github/workflows/generate-integrations.yml (new file, 88 lines, vendored)

@@ -0,0 +1,88 @@
---
# CI workflow used to regenerate `integrations/integrations.js` when
# relevant source files are changed.
name: Generate Integrations
on:
  push:
    branches:
      - master
    paths: # If any of these files change, we need to regenerate integrations.js.
      - 'collectors/**/metadata.yaml'
      - 'collectors/**/multi_metadata.yaml'
      - 'integrations/templates/**'
      - 'integrations/categories.yaml'
      - 'integrations/gen_integrations.py'
      - 'packaging/go.d.version'
  workflow_dispatch: null
concurrency: # This keeps multiple instances of the job from running concurrently for the same ref.
  group: integrations-${{ github.ref }}
  cancel-in-progress: true
jobs:
  generate-integrations:
    name: Generate Integrations
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Agent
        id: checkout-agent
        uses: actions/checkout@v3
        with:
          fetch-depth: 1
          submodules: recursive
      - name: Get Go Ref
        id: get-go-ref
        run: echo "go_ref=$(cat packaging/go.d.version)" >> "${GITHUB_ENV}"
      - name: Checkout Go
        id: checkout-go
        uses: actions/checkout@v3
        with:
          fetch-depth: 1
          path: go.d.plugin
          repository: netdata/go.d.plugin
          ref: ${{ env.go_ref }}
      - name: Prepare Dependencies
        id: prep-deps
        run: sudo apt-get install python3-jsonschema python3-referencing python3-jinja2 python3-ruamel.yaml
      - name: Generate Integrations
        id: generate
        run: integrations/gen_integrations.py
      - name: Clean Up Go Repo
        id: clean-go
        run: rm -rf go.d.plugin
      - name: Create PR
        id: create-pr
        uses: peter-evans/create-pull-request@v5
        with:
          token: ${{ secrets.NETDATABOT_GITHUB_TOKEN }}
          commit-message: Regenerate integrations.js
          branch: integrations-regen
          title: Regenerate integrations.js
          body: |
            Regenerate `integrations/integrations.js` based on the
            latest code.

            This PR was auto-generated by
            `.github/workflows/generate-integrations.yml`.
      - name: Failure Notification
        uses: rtCamp/action-slack-notify@v2
        env:
          SLACK_COLOR: 'danger'
          SLACK_FOOTER: ''
          SLACK_ICON_EMOJI: ':github-actions:'
          SLACK_TITLE: 'Integrations regeneration failed:'
          SLACK_USERNAME: 'GitHub Actions'
          SLACK_MESSAGE: |-
            ${{ github.repository }}: Failed to create PR rebuilding integrations.js
            Checkout Agent: ${{ steps.checkout-agent.outcome }}
            Get Go Ref: ${{ steps.get-go-ref.outcome }}
            Checkout Go: ${{ steps.checkout-go.outcome }}
            Prepare Dependencies: ${{ steps.prep-deps.outcome }}
            Generate Integrations: ${{ steps.generate.outcome }}
            Clean Up Go Repository: ${{ steps.clean-go.outcome }}
            Create PR: ${{ steps.create-pr.outcome }}
          SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK_URL }}
        if: >-
          ${{
            failure()
            && startsWith(github.ref, 'refs/heads/master')
            && github.repository == 'netdata/netdata'
          }}
.github/workflows/review.yml (2 changes, vendored)

@@ -54,7 +54,7 @@ jobs:
       run: |
         if [ "${{ contains(github.event.pull_request.labels.*.name, 'run-ci/eslint') }}" = "true" ]; then
           echo "run=true" >> "${GITHUB_OUTPUT}"
-        elif git diff --name-only origin/${{ github.base_ref }} HEAD | grep -v "web/gui/v1" | grep -v "web/gui/v2" | grep -Eq '.*\.js|node\.d\.plugin\.in' ; then
+        elif git diff --name-only origin/${{ github.base_ref }} HEAD | grep -v "web/gui/v1" | grep -v "web/gui/v2" | grep -v "integrations/" | grep -Eq '.*\.js' ; then
           echo "run=true" >> "${GITHUB_OUTPUT}"
           echo 'JS files have changed, need to run ESLint.'
         else
collectors/cgroups.plugin (metadata file)

@@ -406,7 +406,7 @@ modules:
     link: https://kubernetes.io/
     icon_filename: kubernetes.svg
     categories:
-      - data-collection.containers-vms
+      - data-collection.containers-and-vms
       - data-collection.kubernetes
     keywords:
       - k8s

@@ -977,7 +977,7 @@ modules:
     link: ""
     icon_filename: container.svg
     categories:
-      - data-collection.containers-vms
+      - data-collection.containers-and-vms
     keywords:
       - vms
       - virtualization

@@ -995,7 +995,7 @@ modules:
     link: ""
     icon_filename: lxc.png
     categories:
-      - data-collection.containers-vms
+      - data-collection.containers-and-vms
     keywords:
       - lxc
       - lxd

@@ -1013,7 +1013,7 @@ modules:
     link: ""
     icon_filename: libvirt.png
     categories:
-      - data-collection.containers-vms
+      - data-collection.containers-and-vms
     keywords:
       - libvirt
       - container

@@ -1030,7 +1030,7 @@ modules:
     link: ""
     icon_filename: ovirt.svg
     categories:
-      - data-collection.containers-vms
+      - data-collection.containers-and-vms
     keywords:
      - ovirt
      - container

@@ -1047,7 +1047,7 @@ modules:
     link: ""
     icon_filename: proxmox.png
     categories:
-      - data-collection.containers-vms
+      - data-collection.containers-and-vms
     keywords:
       - proxmox
       - container
integrations/README.md (new file, 26 lines)

@@ -0,0 +1,26 @@
To generate a copy of `integrations.js` locally, you will need:

- Python 3.6 or newer (only tested on Python 3.10 currently, but it should
  work on any version of Python newer than 3.6).
- The following third-party Python modules:
    - `jsonschema`
    - `referencing`
    - `jinja2`
    - `ruamel.yaml`
- A local checkout of https://github.com/netdata/netdata
- A local checkout of https://github.com/netdata/go.d.plugin. The script
  expects this to be checked out in a directory called `go.d.plugin`
  in the root directory of the agent repo, though a symlink with that
  name pointing at the actual location of the repo will work as well.

The first two parts can be easily covered in a Linux environment, such
as a VM or Docker container:

- On Debian or Ubuntu: `apt-get install python3-jsonschema python3-referencing python3-jinja2 python3-ruamel.yaml`
- On Alpine: `apk add py3-jsonschema py3-referencing py3-jinja2 py3-ruamel.yaml`
- On Fedora or RHEL (EPEL is required on RHEL systems): `dnf install python3-jsonschema python3-referencing python3-jinja2 python3-ruamel-yaml`

Once the environment is set up, simply run
`integrations/gen_integrations.py` from the agent repo. Note that the
script must be run _from this specific location_, as it uses its own
path to figure out where all the files it needs are.
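The README above lists hard prerequisites, and a failed run halfway through is easy to avoid by checking them up front. Below is a minimal preflight sketch (not part of this commit; the file name and its placement in the agent repo root are assumptions) that verifies the interpreter version, the four third-party modules, and the expected `go.d.plugin` checkout before invoking the generator:

```python
#!/usr/bin/env python3
# Hypothetical preflight check for gen_integrations.py; not part of this commit.
import importlib.util
import sys
from pathlib import Path

REPO_ROOT = Path(__file__).parent  # assumes this file sits in the agent repo root
MODULES = ['jsonschema', 'referencing', 'jinja2', 'ruamel.yaml']


def have(mod):
    # find_spec() raises if a dotted name's parent package is missing.
    try:
        return importlib.util.find_spec(mod) is not None
    except ModuleNotFoundError:
        return False


def preflight():
    if sys.version_info < (3, 6):
        sys.exit('Python 3.6 or newer is required.')

    missing = [m for m in MODULES if not have(m)]
    if missing:
        sys.exit(f'Missing Python modules: { ", ".join(missing) }')

    if not (REPO_ROOT / 'go.d.plugin').is_dir():
        sys.exit('Expected a go.d.plugin checkout (or symlink) in the repo root.')

    print('Environment looks OK; run integrations/gen_integrations.py from the repo root.')


if __name__ == '__main__':
    preflight()
```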
integrations/categories.yaml

@@ -1,5 +1,5 @@
 - id: deploy
-  name: deploy
+  name: Deploy
   description: ""
   most_popular: true
   priority: 1

@@ -24,7 +24,7 @@
   priority: -1
   children: []
 - id: data-collection
-  name: data-collection
+  name: Data Collection
   description: ""
   most_popular: true
   priority: 2

@@ -34,6 +34,7 @@
   description: ""
   most_popular: false
   priority: -1
+  collector_default: true
   children: []
 - id: data-collection.ebpf
   name: eBPF
integrations/check_collector_metadata.py (new executable file, 89 lines)

@@ -0,0 +1,89 @@
#!/usr/bin/env python3

import sys

from pathlib import Path

from jsonschema import ValidationError

from gen_integrations import (CATEGORIES_FILE, SINGLE_PATTERN, MULTI_PATTERN, SINGLE_VALIDATOR, MULTI_VALIDATOR,
                              load_yaml, get_category_sets)


def main():
    if len(sys.argv) != 2:
        print(':error:This script takes exactly one argument.')
        return 2

    check_path = Path(sys.argv[1])

    if not check_path.is_file():
        print(f':error file={ check_path }:{ check_path } does not appear to be a regular file.')
        return 1

    if check_path.match(SINGLE_PATTERN):
        variant = 'single'
        print(f':debug:{ check_path } appears to be single-module metadata.')
    elif check_path.match(MULTI_PATTERN):
        variant = 'multi'
        print(f':debug:{ check_path } appears to be multi-module metadata.')
    else:
        print(f':error file={ check_path }:{ check_path } does not match required file name format.')
        return 1

    categories = load_yaml(CATEGORIES_FILE)

    if not categories:
        print(':error:Failed to load categories file.')
        return 2

    _, valid_categories = get_category_sets(categories)

    data = load_yaml(check_path)

    if not data:
        print(f':error file={ check_path }:Failed to load data from { check_path }.')
        return 1

    check_modules = []

    if variant == 'single':
        try:
            SINGLE_VALIDATOR.validate(data)
        except ValidationError as e:
            print(f':error file={ check_path }:Failed to validate { check_path } against the schema.')
            raise e
        else:
            check_modules.append(data)
    elif variant == 'multi':
        try:
            MULTI_VALIDATOR.validate(data)
        except ValidationError as e:
            print(f':error file={ check_path }:Failed to validate { check_path } against the schema.')
            raise e
        else:
            for item in data['modules']:
                item['meta']['plugin_name'] = data['plugin_name']
                check_modules.append(item)
    else:
        print(':error:Internal error encountered.')
        return 2

    failed = False

    for idx, module in enumerate(check_modules):
        invalid_cats = set(module['meta']['monitored_instance']['categories']) - valid_categories

        if invalid_cats:
            print(f':error file={ check_path }:Invalid categories found in module { idx } in { check_path }: { ", ".join(invalid_cats) }.')
            failed = True

    if failed:
        return 1
    else:
        print(f'{ check_path } is a valid collector metadata file.')
        return 0


if __name__ == '__main__':
    sys.exit(main())
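Because `check_collector_metadata.py` validates exactly one file per run and signals problems through its exit status, it composes naturally with a loop over metadata files. A hedged sketch of a local driver (not part of this commit):

```python
# Hypothetical local driver for check_collector_metadata.py; not part of this commit.
import subprocess
import sys
from pathlib import Path

failures = 0
for metadata in Path('collectors').glob('**/metadata.yaml'):
    # The checker takes exactly one file per invocation.
    result = subprocess.run(
        [sys.executable, 'integrations/check_collector_metadata.py', str(metadata)],
    )
    if result.returncode != 0:
        failures += 1

sys.exit(1 if failures else 0)
```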
integrations/cloud-notifications/metadata.yaml

@@ -1,6 +1,6 @@
 # yamllint disable rule:line-length
 ---
-- id: 'notify-discord'
+- id: 'notify-cloud-discord'
   meta:
     name: 'Discord'
     link: 'https://discord.com/'

@@ -42,7 +42,7 @@
       - Webhook URL - URL provided on Discord for the channel you want to receive your notifications.
       - Thread name - if the Discord channel is a **Forum channel** you will need to provide the thread name as well
 
-- id: 'notify-pagerduty'
+- id: 'notify-cloud-pagerduty'
   meta:
     name: 'PagerDuty'
     link: 'https://www.pagerduty.com/'

@@ -84,7 +84,7 @@
       * **Integration configuration** are the specific notification integration required settings, which vary by notification method. For PagerDuty:
         - Integration Key - is a 32 character key provided by PagerDuty to receive events on your service.
 
-- id: 'notify-slack'
+- id: 'notify-cloud-slack'
   meta:
     name: 'Slack'
     link: 'https://slack.com/'

@@ -133,7 +133,7 @@
       * **Integration configuration** are the specific notification integration required settings, which vary by notification method. For Slack:
         - Webhook URL - URL provided on Slack for the channel you want to receive your notifications.
 
-- id: 'notify-opsgenie'
+- id: 'notify-cloud-opsgenie'
   meta:
     name: 'Opsgenie'
     link: 'https://www.atlassian.com/software/opsgenie'

@@ -177,7 +177,7 @@
       * **Integration configuration** are the specific notification integration required settings, which vary by notification method. For Opsgenie:
         - API Key - a key provided on Opsgenie for the channel you want to receive your notifications.
 
-- id: 'notify-mattermost'
+- id: 'notify-cloud-mattermost'
   meta:
     name: 'Mattermost'
     link: 'https://mattermost.com/'

@@ -228,7 +228,7 @@
       * **Integration configuration** are the specific notification integration required settings, which vary by notification method. For Mattermost:
         - Webhook URL - URL provided on Mattermost for the channel you want to receive your notifications
 
-- id: 'notify-rocketchat'
+- id: 'notify-cloud-rocketchat'
   meta:
     name: 'RocketChat'
     link: 'https://www.rocket.chat/'

@@ -279,7 +279,7 @@
       * **Integration configuration** are the specific notification integration required settings, which vary by notification method. For RocketChat:
         - Webhook URL - URL provided on RocketChat for the channel you want to receive your notifications.
 
-- id: 'notify-webhook'
+- id: 'notify-cloud-webhook'
   meta:
     name: 'Webhook'
     link: 'https://en.wikipedia.org/wiki/Webhook'

@@ -517,4 +517,3 @@
       # returns properly formatted json response
       return json.dumps(response)
       ```
-
integrations/deploy.yaml

@@ -11,28 +11,30 @@
   most_popular: true
   install_description: 'Run the following command on your node to install and claim Netdata:'
   methods:
-    - method: wget
+    - &ks_wget
+      method: wget
       commands:
         - channel: nightly
           command: >
             wget -O /tmp/netdata-kickstart.sh https://my-netdata.io/kickstart.sh && sh /tmp/netdata-kickstart.sh
-            --nightly-channel --claim-token {% claim_token %} --claim-rooms {% $claim_rooms %} --claim-url {% claim_url %}
+            --nightly-channel{% if $showClaimingOptions %} --claim-token {% claim_token %} --claim-rooms {% $claim_rooms %} --claim-url {% claim_url %}{% /if %}
         - channel: stable
           command: >
             wget -O /tmp/netdata-kickstart.sh https://my-netdata.io/kickstart.sh && sh /tmp/netdata-kickstart.sh
-            --stable-channel --claim-token {% claim_token %} --claim-rooms {% $claim_rooms %} --claim-url {% claim_url %}
+            --stable-channel{% if $showClaimingOptions %} --claim-token {% claim_token %} --claim-rooms {% $claim_rooms %} --claim-url {% claim_url %}{% /if %}
-    - method: curl
+    - &ks_curl
+      method: curl
       commands:
         - channel: nightly
           command: >
             curl https://my-netdata.io/kickstart.sh > /tmp/netdata-kickstart.sh && sh /tmp/netdata-kickstart.sh
-            --nightly-channel --claim-token {% claim_token %} --claim-rooms {% $claim_rooms %} --claim-url {% claim_url %}
+            --nightly-channel{% if $showClaimingOptions %} --claim-token {% claim_token %} --claim-rooms {% $claim_rooms %} --claim-url {% claim_url %}{% /if %}
         - channel: stable
           command: >
             curl https://my-netdata.io/kickstart.sh > /tmp/netdata-kickstart.sh && sh /tmp/netdata-kickstart.sh
-            --stable-channel --claim-token {% claim_token %} --claim-rooms {% $claim_rooms %} --claim-url {% claim_url %}
+            --stable-channel{% if $showClaimingOptions %} --claim-token {% claim_token %} --claim-rooms {% $claim_rooms %} --claim-url {% claim_url %}{% /if %}
   additional_info: &ref_containers >
-    Did you know you can also deploy Netdata on your OS using {% goToCategory categoryId="deploy.docker-kubernetes" %}Kubernetes{% /goToCategory %} or {% goToCategory categoryId="deploy.docker-kubernetes" %}Docker{% /goToCategory %}?
+    Did you know you can also deploy Netdata on your OS using {% goToCategory navigateToSettings=$navigateToSettings categoryId="deploy.docker-kubernetes" %}Kubernetes{% /goToCategory %} or {% goToCategory categoryId="deploy.docker-kubernetes" %}Docker{% /goToCategory %}?
   related_resources: {}
   platform_info:
     group: ''
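The `&ks_wget` and `&ks_curl` names introduced above are plain YAML anchors: every later `*ks_wget` / `*ks_curl` alias resolves to the same mapping at load time, which is what lets the macOS, Windows, and FreeBSD entries further down drop their copy-pasted kickstart blocks. A small sketch with `ruamel.yaml` (the loader `gen_integrations.py` uses) shows the mechanics; the document content here is illustrative:

```python
# Demonstration of the YAML anchor/alias dedup used in deploy.yaml.
from ruamel.yaml import YAML

DOC = """
linux:
  methods:
    - &ks_curl
      method: curl
      commands:
        - channel: nightly
          command: curl https://my-netdata.io/kickstart.sh > /tmp/netdata-kickstart.sh
macos:
  methods:
    - *ks_curl
"""

data = YAML(typ='safe').load(DOC)

# The alias expands to the same content as the anchored block.
assert data['macos']['methods'][0] == data['linux']['methods'][0]
print(data['macos']['methods'][0]['method'])  # -> curl
```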
@@ -196,16 +198,7 @@
     - apple
   install_description: 'Run the following command on your Intel based OSX, macOS servers to install and claim Netdata:'
   methods:
-    - method: curl
-      commands:
-        - channel: nightly
-          command: >
-            curl https://my-netdata.io/kickstart.sh > /tmp/netdata-kickstart.sh && sh /tmp/netdata-kickstart.sh
-            --nightly-channel --claim-token {% claim_token %} --claim-rooms {% $claim_rooms %} --claim-url {% claim_url %}
-        - channel: stable
-          command: >
-            curl https://my-netdata.io/kickstart.sh > /tmp/netdata-kickstart.sh && sh /tmp/netdata-kickstart.sh
-            --stable-channel --claim-token {% claim_token %} --claim-rooms {% $claim_rooms %} --claim-url {% claim_url %}
+    - *ks_curl
   additional_info: *ref_containers
   related_resources: {}
   platform_info:

@@ -230,7 +223,6 @@
 
   > Netdata container requires different privileges and mounts to provide functionality similar to that provided by Netdata installed on the host. More info [here](https://learn.netdata.cloud/docs/installing/docker?_gl=1*f2xcnf*_ga*MTI1MTUwMzU0OS4xNjg2NjM1MDA1*_ga_J69Z2JCTFB*MTY5MDMxMDIyMS40MS4xLjE2OTAzMTAzNjkuNTguMC4w#create-a-new-netdata-agent-container)
   > Netdata will use the hostname from the container in which it is run instead of that of the host system. To change the default hostname check [here](https://learn.netdata.cloud/docs/agent/packaging/docker?_gl=1*i5weve*_ga*MTI1MTUwMzU0OS4xNjg2NjM1MDA1*_ga_J69Z2JCTFB*MTY5MDMxMjM4Ny40Mi4xLjE2OTAzMTIzOTAuNTcuMC4w#change-the-default-hostname)
-
   methods:
     - method: Docker CLI
       commands:

@@ -252,11 +244,12 @@
             --cap-add SYS_PTRACE \
             --cap-add SYS_ADMIN \
             --security-opt apparmor=unconfined \
+            {% if $showClaimingOptions %}
             -e NETDATA_CLAIM_TOKEN={% claim_token %} \
             -e NETDATA_CLAIM_URL={% claim_url %} \
             -e NETDATA_CLAIM_ROOMS={% $claim_rooms %} \
+            {% /if %}
             netdata/netdata:edge
 
         - channel: stable
           command: |
             docker run -d --name=netdata \

@@ -275,9 +268,11 @@
             --cap-add SYS_PTRACE \
             --cap-add SYS_ADMIN \
             --security-opt apparmor=unconfined \
+            {% if $showClaimingOptions %}
             -e NETDATA_CLAIM_TOKEN={% claim_token %} \
             -e NETDATA_CLAIM_URL={% claim_url %} \
             -e NETDATA_CLAIM_ROOMS={% $claim_rooms %} \
+            {% /if %}
             netdata/netdata:stable
     - method: Docker Compose
       commands:

@@ -306,10 +301,12 @@
               - /sys:/host/sys:ro
               - /etc/os-release:/host/etc/os-release:ro
               - /var/run/docker.sock:/var/run/docker.sock:ro
+            {% if $showClaimingOptions %}
             environment:
               - NETDATA_CLAIM_TOKEN={% claim_token %}
               - NETDATA_CLAIM_URL={% claim_url %}
               - NETDATA_CLAIM_ROOMS={% $claim_rooms %}
+            {% /if %}
           volumes:
             netdataconfig:
             netdatalib:

@@ -339,10 +336,12 @@
               - /sys:/host/sys:ro
               - /etc/os-release:/host/etc/os-release:ro
               - /var/run/docker.sock:/var/run/docker.sock:ro
+            {% if $showClaimingOptions %}
             environment:
               - NETDATA_CLAIM_TOKEN={% claim_token %}
               - NETDATA_CLAIM_URL={% claim_url %}
               - NETDATA_CLAIM_ROOMS={% $claim_rooms %}
+            {% /if %}
           volumes:
             netdataconfig:
             netdatalib:

@@ -444,23 +443,23 @@
         - channel: nightly
           command: |
             helm install netdata netdata/netdata \
-            --set image.tag=latest \
+            --set image.tag=latest{% if $showClaimingOptions %} \
             --set parent.claiming.enabled="true" \
             --set parent.claiming.token={% claim_token %} \
             --set parent.claiming.rooms={% $claim_rooms %} \
             --set child.claiming.enabled="true" \
             --set child.claiming.token={% claim_token %} \
-            --set child.claiming.rooms={% $claim_rooms %}
+            --set child.claiming.rooms={% $claim_rooms %}{% /if %}
         - channel: stable
           command: |
             helm install netdata netdata/netdata \
-            --set image.tag=stable \
+            --set image.tag=stable{% if $showClaimingOptions %} \
             --set parent.claiming.enabled="true" \
             --set parent.claiming.token={% claim_token %} \
             --set parent.claiming.rooms={% $claim_rooms %} \
             --set child.claiming.enabled="true" \
             --set child.claiming.token={% claim_token %} \
-            --set child.claiming.rooms={% $claim_rooms %}
+            --set child.claiming.rooms={% $claim_rooms %}{% /if %}
     - method: Existing Cluster
       commands:
         - channel: nightly

@@ -470,6 +469,7 @@
 
             restarter:
               enabled: true
+            {% if $showClaimingOptions %}
 
             parent:
               claiming:

@@ -482,11 +482,16 @@
                 enabled: true
                 token: {% claim_token %}
                 rooms: {% $claim_rooms %}
+            {% /if %}
         - channel: stable
           command: |
             image:
               tag: stable
 
+            restarter:
+              enabled: true
+            {% if $showClaimingOptions %}
+
             parent:
               claiming:
                 enabled: true

@@ -498,6 +503,7 @@
                 enabled: true
                 token: {% claim_token %}
                 rooms: {% $claim_rooms %}
+            {% /if %}
   additional_info: ''
   related_resources: {}
   most_popular: true

@@ -520,26 +526,8 @@
     3. Configure Netdata to collect data remotely from your Windows hosts by adding one job per host to windows.conf file. See the [configuration section](https://learn.netdata.cloud/docs/data-collection/monitor-anything/System%20Metrics/Windows-machines#configuration) for details.
     4. Enable [virtual nodes](https://learn.netdata.cloud/docs/data-collection/windows-systems#virtual-nodes) configuration so the windows nodes are displayed as separate nodes.
   methods:
-    - method: wget
-      commands:
-        - channel: nightly
-          command: >
-            wget -O /tmp/netdata-kickstart.sh https://my-netdata.io/kickstart.sh && sh /tmp/netdata-kickstart.sh
-            --nightly-channel --claim-token {% claim_token %} --claim-rooms {% $claim_rooms %} --claim-url {% claim_url %}
-        - channel: stable
-          command: >
-            wget -O /tmp/netdata-kickstart.sh https://my-netdata.io/kickstart.sh && sh /tmp/netdata-kickstart.sh
-            --stable-channel --claim-token {% claim_token %} --claim-rooms {% $claim_rooms %} --claim-url {% claim_url %}
-    - method: curl
-      commands:
-        - channel: nightly
-          command: >
-            curl https://my-netdata.io/kickstart.sh > /tmp/netdata-kickstart.sh && sh /tmp/netdata-kickstart.sh
-            --nightly-channel --claim-token {% claim_token %} --claim-rooms {% $claim_rooms %} --claim-url {% claim_url %}
-        - channel: stable
-          command: >
-            curl https://my-netdata.io/kickstart.sh > /tmp/netdata-kickstart.sh && sh /tmp/netdata-kickstart.sh
-            --stable-channel --claim-token {% claim_token %} --claim-rooms {% $claim_rooms %} --claim-url {% claim_url %}
+    - *ks_wget
+    - *ks_curl
   additional_info: ''
   related_resources: {}
   most_popular: true

@@ -566,16 +554,17 @@
 
     Run the following command on your node to install and claim Netdata:
   methods:
-    - method: wget
+    - *ks_curl
+    - method: fetch
       commands:
         - channel: nightly
           command: >
-            wget -O /tmp/netdata-kickstart.sh https://my-netdata.io/kickstart.sh && sh /tmp/netdata-kickstart.sh
+            fetch -o /tmp/netdata-kickstart.sh https://my-netdata.io/kickstart.sh && sh /tmp/netdata-kickstart.sh
-            --nightly-channel --claim-token {% claim_token %} --claim-rooms {% $claim_rooms %} --claim-url {% claim_url %}
+            --nightly-channel{% if $showClaimingOptions %} --claim-token {% claim_token %} --claim-rooms {% $claim_rooms %} --claim-url {% claim_url %}{% /if %}
         - channel: stable
           command: >
-            wget -O /tmp/netdata-kickstart.sh https://my-netdata.io/kickstart.sh && sh /tmp/netdata-kickstart.sh
+            fetch -o /tmp/netdata-kickstart.sh https://my-netdata.io/kickstart.sh && sh /tmp/netdata-kickstart.sh
-            --stable-channel --claim-token {% claim_token %} --claim-rooms {% $claim_rooms %} --claim-url {% claim_url %}
+            --stable-channel{% if $showClaimingOptions %} --claim-token {% claim_token %} --claim-rooms {% $claim_rooms %} --claim-url {% claim_url %}{% /if %}
   additional_info: |
     Netdata can also be installed via [FreeBSD ports](https://www.freshports.org/net-mgmt/netdata).
   related_resources: {}
integrations/gen_integrations.py (new executable file, 617 lines)

@@ -0,0 +1,617 @@
#!/usr/bin/env python3

import json
import os
import sys

from pathlib import Path

from jsonschema import Draft7Validator, ValidationError
from referencing import Registry, Resource
from referencing.jsonschema import DRAFT7
from ruamel.yaml import YAML, YAMLError

AGENT_REPO = 'netdata/netdata'
GO_REPO = 'netdata/go.d.plugin'

INTEGRATIONS_PATH = Path(__file__).parent
TEMPLATE_PATH = INTEGRATIONS_PATH / 'templates'
OUTPUT_PATH = INTEGRATIONS_PATH / 'integrations.js'
CATEGORIES_FILE = INTEGRATIONS_PATH / 'categories.yaml'
REPO_PATH = INTEGRATIONS_PATH.parent
SCHEMA_PATH = INTEGRATIONS_PATH / 'schemas'
GO_REPO_PATH = REPO_PATH / 'go.d.plugin'
DISTROS_FILE = REPO_PATH / '.github' / 'data' / 'distros.yml'
METADATA_PATTERN = '*/metadata.yaml'

COLLECTOR_SOURCES = [
    (AGENT_REPO, REPO_PATH / 'collectors', True),
    (AGENT_REPO, REPO_PATH / 'collectors' / 'charts.d.plugin', True),
    (AGENT_REPO, REPO_PATH / 'collectors' / 'python.d.plugin', True),
    (GO_REPO, GO_REPO_PATH / 'modules', True),
]

DEPLOY_SOURCES = [
    (AGENT_REPO, INTEGRATIONS_PATH / 'deploy.yaml', False),
]

EXPORTER_SOURCES = [
    (AGENT_REPO, REPO_PATH / 'exporting', True),
]

NOTIFICATION_SOURCES = [
    (AGENT_REPO, REPO_PATH / 'health' / 'notifications', True),
    (AGENT_REPO, INTEGRATIONS_PATH / 'cloud-notifications' / 'metadata.yaml', False),
]

COLLECTOR_RENDER_KEYS = [
    'alerts',
    'metrics',
    'overview',
    'related_resources',
    'setup',
    'troubleshooting',
]

EXPORTER_RENDER_KEYS = [
    'overview',
    'setup',
]

NOTIFICATION_RENDER_KEYS = [
    'overview',
    'setup',
]

GITHUB_ACTIONS = os.environ.get('GITHUB_ACTIONS', False)
DEBUG = os.environ.get('DEBUG', False)


def debug(msg):
    if GITHUB_ACTIONS:
        print(f':debug:{ msg }')
    elif DEBUG:
        print(f'>>> { msg }')
    else:
        pass


def warn(msg, path):
    if GITHUB_ACTIONS:
        print(f':warning file={ path }:{ msg }')
    else:
        print(f'!!! WARNING:{ path }:{ msg }')


def retrieve_from_filesystem(uri):
    path = SCHEMA_PATH / Path(uri)
    contents = json.loads(path.read_text())
    return Resource.from_contents(contents, DRAFT7)


registry = Registry(retrieve=retrieve_from_filesystem)

CATEGORY_VALIDATOR = Draft7Validator(
    {'$ref': './categories.json#'},
    registry=registry,
)

DEPLOY_VALIDATOR = Draft7Validator(
    {'$ref': './deploy.json#'},
    registry=registry,
)

EXPORTER_VALIDATOR = Draft7Validator(
    {'$ref': './exporter.json#'},
    registry=registry,
)

NOTIFICATION_VALIDATOR = Draft7Validator(
    {'$ref': './notification.json#'},
    registry=registry,
)

COLLECTOR_VALIDATOR = Draft7Validator(
    {'$ref': './collector.json#'},
    registry=registry,
)
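Each validator above is a Draft 7 schema that is nothing but a `$ref` into `integrations/schemas/`, with the `referencing` registry resolving relative URIs through `retrieve_from_filesystem`. The same pattern works against an in-memory schema store, which makes the mechanics easy to see in isolation (illustrative only; the real schemas live on disk and are not shown in this diff):

```python
# Minimal sketch of the Draft7Validator + Registry pattern, using an in-memory store.
from jsonschema import Draft7Validator
from referencing import Registry, Resource
from referencing.jsonschema import DRAFT7

SCHEMAS = {
    'example.json': {'type': 'object', 'required': ['id']},
}


def retrieve(uri):
    # Normalize './example.json' and 'example.json' to the same key,
    # much like SCHEMA_PATH / Path(uri) does on the filesystem.
    return Resource.from_contents(SCHEMAS[uri.removeprefix('./')], DRAFT7)


validator = Draft7Validator({'$ref': './example.json#'}, registry=Registry(retrieve=retrieve))

validator.validate({'id': 'notify-cloud-slack'})  # passes silently
print(validator.is_valid({}))                     # False: 'id' is required
```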
_jinja_env = False


def get_jinja_env():
    global _jinja_env

    if not _jinja_env:
        from jinja2 import Environment, FileSystemLoader, select_autoescape

        _jinja_env = Environment(
            loader=FileSystemLoader(TEMPLATE_PATH),
            autoescape=select_autoescape(),
            block_start_string='[%',
            block_end_string='%]',
            variable_start_string='[[',
            variable_end_string=']]',
            comment_start_string='[#',
            comment_end_string='#]',
            trim_blocks=True,
            lstrip_blocks=True,
        )

    return _jinja_env
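Note the remapped delimiters: these templates use `[[ ... ]]` and `[% ... %]` in place of Jinja's default `{{ }}` / `{% %}`. That appears deliberate, since the rendered output itself must contain literal Markdoc-style `{% ... %}` tags (see the `{% claim_token %}` placeholders in `deploy.yaml`), and those must survive rendering untouched. A quick illustration:

```python
# Why gen_integrations.py remaps the Jinja delimiters: literal {% ... %} tags
# in the source data must pass through rendering unmodified.
from jinja2 import Environment

env = Environment(
    block_start_string='[%', block_end_string='%]',
    variable_start_string='[[', variable_end_string=']]',
    comment_start_string='[#', comment_end_string='#]',
)

template = env.from_string('[% if claim %]--claim-token {% claim_token %}[% endif %]')
print(template.render(claim=True))  # -> --claim-token {% claim_token %}
```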
def get_category_sets(categories):
    default = set()
    valid = set()

    for c in categories:
        if 'id' in c:
            valid.add(c['id'])

            if c.get('collector_default', False):
                default.add(c['id'])

        if 'children' in c and c['children']:
            d, v = get_category_sets(c['children'])
            default |= d
            valid |= v

    return (default, valid)
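Given the `categories.yaml` shape shown earlier, `get_category_sets` walks the tree once and returns two flat sets: the ids flagged `collector_default: true` (used later as a fallback for collectors with no valid categories) and the set of all valid ids. Roughly, with illustrative data:

```python
# Illustrative input/output for get_category_sets(); category ids are made up.
categories = [
    {'id': 'deploy', 'children': []},
    {'id': 'data-collection', 'children': [
        {'id': 'data-collection.other', 'collector_default': True, 'children': []},
        {'id': 'data-collection.ebpf', 'children': []},
    ]},
]

default, valid = get_category_sets(categories)
assert default == {'data-collection.other'}
assert valid == {'deploy', 'data-collection', 'data-collection.other', 'data-collection.ebpf'}
```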
def get_collector_metadata_entries():
    ret = []

    for r, d, m in COLLECTOR_SOURCES:
        if d.exists() and d.is_dir() and m:
            for item in d.glob(METADATA_PATTERN):
                ret.append((r, item))
        elif d.exists() and d.is_file() and not m:
            if d.match(METADATA_PATTERN):
                ret.append((r, d))

    return ret


def load_yaml(src):
    yaml = YAML(typ='safe')

    if not src.is_file():
        warn(f'{ src } is not a file.', src)
        return False

    try:
        contents = src.read_text()
    except (IOError, OSError):
        warn(f'Failed to read { src }.', src)
        return False

    try:
        data = yaml.load(contents)
    except YAMLError:
        warn(f'Failed to parse { src } as YAML.', src)
        return False

    return data


def load_categories():
    categories = load_yaml(CATEGORIES_FILE)

    if not categories:
        sys.exit(1)

    try:
        CATEGORY_VALIDATOR.validate(categories)
    except ValidationError:
        warn(f'Failed to validate { CATEGORIES_FILE } against the schema.', CATEGORIES_FILE)
        sys.exit(1)

    return categories


def load_collectors():
    ret = []

    entries = get_collector_metadata_entries()

    for repo, path in entries:
        debug(f'Loading { path }.')
        data = load_yaml(path)

        if not data:
            continue

        try:
            COLLECTOR_VALIDATOR.validate(data)
        except ValidationError:
            warn(f'Failed to validate { path } against the schema.', path)
            continue

        for idx, item in enumerate(data['modules']):
            item['meta']['plugin_name'] = data['plugin_name']
            item['integration_type'] = 'collector'
            item['_src_path'] = path
            item['_repo'] = repo
            item['_index'] = idx
            ret.append(item)

    return ret


def _load_deploy_file(file, repo):
    ret = []
    debug(f'Loading { file }.')
    data = load_yaml(file)

    if not data:
        return []

    try:
        DEPLOY_VALIDATOR.validate(data)
    except ValidationError:
        warn(f'Failed to validate { file } against the schema.', file)
        return []

    for idx, item in enumerate(data):
        item['integration_type'] = 'deploy'
        item['_src_path'] = file
        item['_repo'] = repo
        item['_index'] = idx
        ret.append(item)

    return ret


def load_deploy():
    ret = []

    for repo, path, match in DEPLOY_SOURCES:
        if match and path.exists() and path.is_dir():
            for file in path.glob(METADATA_PATTERN):
                ret.extend(_load_deploy_file(file, repo))
        elif not match and path.exists() and path.is_file():
            ret.extend(_load_deploy_file(path, repo))

    return ret


def _load_exporter_file(file, repo):
    debug(f'Loading { file }.')
    data = load_yaml(file)

    if not data:
        return []

    try:
        EXPORTER_VALIDATOR.validate(data)
    except ValidationError:
        warn(f'Failed to validate { file } against the schema.', file)
        return []

    if 'id' in data:
        data['integration_type'] = 'exporter'
        data['_src_path'] = file
        data['_repo'] = repo
        data['_index'] = 0

        return [data]
    else:
        ret = []

        for idx, item in enumerate(data):
            item['integration_type'] = 'exporter'
            item['_src_path'] = file
            item['_repo'] = repo
            item['_index'] = idx
            ret.append(item)

        return ret


def load_exporters():
    ret = []

    for repo, path, match in EXPORTER_SOURCES:
        if match and path.exists() and path.is_dir():
            for file in path.glob(METADATA_PATTERN):
                ret.extend(_load_exporter_file(file, repo))
        elif not match and path.exists() and path.is_file():
            ret.extend(_load_exporter_file(path, repo))

    return ret


def _load_notification_file(file, repo):
    debug(f'Loading { file }.')
    data = load_yaml(file)

    if not data:
        return []

    try:
        NOTIFICATION_VALIDATOR.validate(data)
    except ValidationError:
        warn(f'Failed to validate { file } against the schema.', file)
        return []

    if 'id' in data:
        data['integration_type'] = 'notification'
        data['_src_path'] = file
        data['_repo'] = repo
        data['_index'] = 0

        return [data]
    else:
        ret = []

        for idx, item in enumerate(data):
            item['integration_type'] = 'notification'
            item['_src_path'] = file
            item['_repo'] = repo
            item['_index'] = idx
            ret.append(item)

        return ret


def load_notifications():
    ret = []

    for repo, path, match in NOTIFICATION_SOURCES:
        if match and path.exists() and path.is_dir():
            for file in path.glob(METADATA_PATTERN):
                ret.extend(_load_notification_file(file, repo))
        elif not match and path.exists() and path.is_file():
            ret.extend(_load_notification_file(path, repo))

    return ret


def make_id(meta):
    if 'monitored_instance' in meta:
        instance_name = meta['monitored_instance']['name'].replace(' ', '_')
    elif 'instance_name' in meta:
        instance_name = meta['instance_name']
    else:
        instance_name = '000_unknown'

    return f'{ meta["plugin_name"] }-{ meta["module_name"] }-{ instance_name }'
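Per the commit message, the monitored instance name is folded into the ID precisely to disambiguate 'virtual' integrations that share a plugin and module name. With hypothetical metadata values:

```python
# Hypothetical metadata showing how make_id() disambiguates integrations.
meta = {
    'plugin_name': 'go.d.plugin',
    'module_name': 'httpcheck',
    'monitored_instance': {'name': 'HTTP Endpoints'},
}

print(make_id(meta))  # -> go.d.plugin-httpcheck-HTTP_Endpoints
```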
def make_edit_link(item):
    if item['_repo'] == 'netdata/go.d.plugin':
        item_path = item['_src_path'].relative_to(GO_REPO_PATH)
    else:
        item_path = item['_src_path'].relative_to(REPO_PATH)

    return f'https://github.com/{ item["_repo"] }/blob/master/{ item_path }'


def sort_integrations(integrations):
    integrations.sort(key=lambda i: i['_index'])
    integrations.sort(key=lambda i: i['_src_path'])
    integrations.sort(key=lambda i: i['id'])


def dedupe_integrations(integrations, ids):
    tmp_integrations = []

    for i in integrations:
        if ids.get(i['id'], False):
            first_path, first_index = ids[i['id']]
            warn(f'Duplicate integration ID found at { i["_src_path"] } index { i["_index"] } (original definition at { first_path } index { first_index }), ignoring that integration.', i['_src_path'])
        else:
            tmp_integrations.append(i)
            ids[i['id']] = (i['_src_path'], i['_index'])

    return tmp_integrations, ids
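The three passes in `sort_integrations` lean on Python's stable sort: the last key applied (`id`) becomes the primary key, with `_src_path` and `_index` breaking ties, so any duplicate IDs line up deterministically before `dedupe_integrations` keeps the first occurrence. The equivalent single-pass form makes the ordering explicit:

```python
# Equivalent to the three stable sorts above, as one composite key:
integrations.sort(key=lambda i: (i['id'], i['_src_path'], i['_index']))
```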
def render_collectors(categories, collectors, ids):
    debug('Computing default categories.')

    default_cats, valid_cats = get_category_sets(categories)

    debug('Generating collector IDs.')

    for item in collectors:
        item['id'] = make_id(item['meta'])

    debug('Sorting collectors.')

    sort_integrations(collectors)

    debug('Removing duplicate collectors.')

    collectors, ids = dedupe_integrations(collectors, ids)

    idmap = {i['id']: i for i in collectors}

    for item in collectors:
        debug(f'Processing { item["id"] }.')

        related = []

        for res in item['meta']['related_resources']['integrations']['list']:
            res_id = make_id(res)

            if res_id not in idmap.keys():
                warn(f'Could not find related integration { res_id }, ignoring it.', item['_src_path'])
                continue

            related.append({
                'plugin_name': res['plugin_name'],
                'module_name': res['module_name'],
                'id': res_id,
                'name': idmap[res_id]['meta']['monitored_instance']['name'],
                'info': idmap[res_id]['meta']['info_provided_to_referring_integrations'],
            })

        item_cats = set(item['meta']['monitored_instance']['categories'])
        bogus_cats = item_cats - valid_cats
        actual_cats = item_cats & valid_cats

        if bogus_cats:
            warn(f'Ignoring invalid categories: { ", ".join(bogus_cats) }', item["_src_path"])

        if not item_cats:
            item['meta']['monitored_instance']['categories'] = list(default_cats)
            warn(f'{ item["id"] } does not list any categories, adding it to: { default_cats }', item["_src_path"])
        else:
            item['meta']['monitored_instance']['categories'] = list(actual_cats)

        for scope in item['metrics']['scopes']:
            if scope['name'] == 'global':
                scope['name'] = f'{ item["meta"]["monitored_instance"]["name"] } instance'

        for cfg_example in item['setup']['configuration']['examples']['list']:
            if 'folding' not in cfg_example:
                cfg_example['folding'] = {
                    'enabled': item['setup']['configuration']['examples']['folding']['enabled']
                }

        for key in COLLECTOR_RENDER_KEYS:
            template = get_jinja_env().get_template(f'{ key }.md')
            data = template.render(entry=item, related=related)

            if 'variables' in item['meta']['monitored_instance']:
                template = get_jinja_env().from_string(data)
                data = template.render(variables=item['meta']['monitored_instance']['variables'])

            item[key] = data

        item['edit_link'] = make_edit_link(item)

        del item['_src_path']
        del item['_repo']
        del item['_index']

    return collectors, ids
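The render loop above is two-pass: each section template is rendered with the full entry, and if the metadata declares `variables`, the intermediate output is itself treated as a template and rendered again; this is what the commit message calls 'per-instance templating for templated keys'. A reduced sketch of the mechanism (template text and variable names are made up):

```python
# Reduced sketch of the two-pass rendering in render_collectors().
from jinja2 import Environment

env = Environment(variable_start_string='[[', variable_end_string=']]')

# First pass: the section template emits text that still contains placeholders.
first_pass = env.from_string('Monitors [[ entry ]] on [[ host ]]').render(
    entry='NGINX', host='[[ hostname ]]')

# Second pass: per-instance variables fill in what remains.
second_pass = env.from_string(first_pass).render(hostname='web-01')
print(second_pass)  # -> Monitors NGINX on web-01
```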
def render_deploy(distros, categories, deploy, ids):
    debug('Sorting deployments.')

    sort_integrations(deploy)

    debug('Checking deployment ids.')

    deploy, ids = dedupe_integrations(deploy, ids)

    template = get_jinja_env().get_template('platform_info.md')

    for item in deploy:
        debug(f'Processing { item["id"] }.')

        if item['platform_info']['group']:
            entries = [
                {
                    'version': i['version'],
                    'support': i['support_type'],
                    'arches': i.get('packages', {'arches': []})['arches'],
                    'notes': i['notes'],
                } for i in distros[item['platform_info']['group']] if i['distro'] == item['platform_info']['distro']
            ]
        else:
            entries = []

        data = template.render(entries=entries)

        item['platform_info'] = data
        item['edit_link'] = make_edit_link(item)

        del item['_src_path']
        del item['_repo']
        del item['_index']

    return deploy, ids
def render_exporters(categories, exporters, ids):
|
||||||
|
debug('Sorting exporters.')
|
||||||
|
|
||||||
|
sort_integrations(exporters)
|
||||||
|
|
||||||
|
debug('Checking exporter ids.')
|
||||||
|
|
||||||
|
exporters, ids = dedupe_integrations(exporters, ids)
|
||||||
|
|
||||||
|
for item in exporters:
|
||||||
|
for key in EXPORTER_RENDER_KEYS:
|
||||||
|
template = get_jinja_env().get_template(f'{ key }.md')
|
||||||
|
data = template.render(entry=item)
|
||||||
|
|
||||||
|
if 'variables' in item['meta']:
|
||||||
|
template = get_jinja_env().from_string(data)
|
||||||
|
data = template.render(variables=item['meta']['variables'])
|
||||||
|
|
||||||
|
item[key] = data
|
||||||
|
|
||||||
|
item['edit_link'] = make_edit_link(item)
|
||||||
|
|
||||||
|
del item['_src_path']
|
||||||
|
del item['_repo']
|
||||||
|
del item['_index']
|
||||||
|
|
||||||
|
return exporters, ids
|
||||||
|
|
||||||
|
|
||||||
|
def render_notifications(categories, notifications, ids):
|
||||||
|
debug('Sorting notifications.')
|
||||||
|
|
||||||
|
sort_integrations(notifications)
|
||||||
|
|
||||||
|
debug('Checking notification ids.')
|
||||||
|
|
||||||
|
notifications, ids = dedupe_integrations(notifications, ids)
|
||||||
|
|
||||||
|
for item in notifications:
|
||||||
|
for key in NOTIFICATION_RENDER_KEYS:
|
||||||
|
template = get_jinja_env().get_template(f'{ key }.md')
|
||||||
|
data = template.render(entry=item)
|
||||||
|
|
||||||
|
if 'variables' in item['meta']:
|
||||||
|
template = get_jinja_env().from_string(data)
|
||||||
|
data = template.render(variables=item['meta']['variables'])
|
||||||
|
|
||||||
|
item[key] = data
|
||||||
|
|
||||||
|
item['edit_link'] = make_edit_link(item)
|
||||||
|
|
||||||
|
del item['_src_path']
|
||||||
|
del item['_repo']
|
||||||
|
del item['_index']
|
||||||
|
|
||||||
|
return notifications, ids
|
||||||
|
|
||||||
|
|
||||||
|
def render_integrations(categories, integrations):
|
||||||
|
template = get_jinja_env().get_template('integrations.js')
|
||||||
|
data = template.render(
|
||||||
|
categories=json.dumps(categories),
|
||||||
|
integrations=json.dumps(integrations),
|
||||||
|
)
|
||||||
|
OUTPUT_PATH.write_text(data)
|
||||||
|
|
||||||
|
|
||||||
|
def main():
|
||||||
|
categories = load_categories()
|
||||||
|
distros = load_yaml(DISTROS_FILE)
|
||||||
|
collectors = load_collectors()
|
||||||
|
deploy = load_deploy()
|
||||||
|
exporters = load_exporters()
|
||||||
|
notifications = load_notifications()
|
||||||
|
|
||||||
|
collectors, ids = render_collectors(categories, collectors, dict())
|
||||||
|
deploy, ids = render_deploy(distros, categories, deploy, ids)
|
||||||
|
exporters, ids = render_exporters(categories, exporters, ids)
|
||||||
|
notifications, ids = render_notifications(categories, notifications, ids)
|
||||||
|
|
||||||
|
integrations = collectors + deploy + exporters + notifications
|
||||||
|
render_integrations(categories, integrations)
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == '__main__':
|
||||||
|
sys.exit(main())
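The helpers used throughout the rendering functions (`make_id`, `sort_integrations`, `dedupe_integrations`, `make_edit_link`) are defined earlier in `gen_integrations.py` and are not part of this hunk. A minimal sketch of the contracts the code above relies on, assuming id-keyed sorting and first-occurrence-wins deduplication (hypothetical; the real definitions may differ in detail):

```python
def sort_integrations(integrations):
    # Sort in place so the generated output is stable across runs.
    integrations.sort(key=lambda i: i['id'])


def dedupe_integrations(integrations, ids):
    # Keep the first occurrence of each id; `ids` carries seen ids
    # across integration types so ids stay globally unique.
    result = []

    for i in integrations:
        if i['id'] in ids:
            continue  # the real script warns about duplicate ids here

        ids[i['id']] = True
        result.append(i)

    return result, ids
```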
5
integrations/integrations.js
Normal file
File diff suppressed because one or more lines are too long
@@ -30,6 +30,10 @@
       "type": "integer",
       "description": "Indicates sort order for categories that are marked as most popular."
     },
+    "collector_default": {
+      "type": "boolean",
+      "description": "Indicates that the category should be added to collector integrations that list no categories."
+    },
     "children": {
       "type": "array",
       "description": "A list of categories that are children of this category.",
@@ -1,6 +1,7 @@
 {
   "$schema": "http://json-schema.org/draft-07/schema#",
   "type": "object",
+  "title": "Netdata agent collector metadata.",
   "properties": {
     "plugin_name": {
       "type": "string"
@@ -444,4 +445,3 @@
     "modules"
   ]
 }
-
@@ -15,6 +15,7 @@
   ],
   "$defs": {
     "entry": {
+      "type": "object",
       "properties": {
         "id": {
           "$ref": "./shared.json#/$defs/id"
27
integrations/templates/README.md
Normal file
@@ -0,0 +1,27 @@
This directory contains templates used to generate the `integrations.js` file.

Templating is done using Jinja2 as the templating engine. Full documentation
can be found at https://jinja.palletsprojects.com/en/ (the ‘Template
Designer Documentation’ is the relevant part for people looking to
edit the templates; it’s not linked directly here to avoid embedding
version numbers in the links).

The particular instance of Jinja2 used has the following configuration
differences from the defaults (see the sketch after this list):

- Any instances of curly braces are replaced with square brackets
  (so instead of `{{ variable }}`, the syntax used here is `[[ variable ]]`).
  This is done so that templating commands for the frontend can be
  included without having to do any special escaping for them.
- `trim_blocks` and `lstrip_blocks` are both enabled, meaning that
  the first newline after a block will be _removed_, as will any leading
  whitespace on the same line as a block.
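A minimal sketch of an `Environment` configured this way (the loader path and the comment delimiters are assumptions; the actual setup in `gen_integrations.py` may differ):

```python
from jinja2 import Environment, FileSystemLoader

def make_env():
    # Square-bracket delimiters keep literal {{ ... }} and {% ... %}
    # free for the frontend's own templating commands.
    return Environment(
        loader=FileSystemLoader('integrations/templates'),  # assumed path
        variable_start_string='[[',
        variable_end_string=']]',
        block_start_string='[%',
        block_end_string='%]',
        trim_blocks=True,    # drop the first newline after a block tag
        lstrip_blocks=True,  # strip leading whitespace before a block tag
    )
```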

Each markdown template corresponds to the key of the same name in the
integrations objects in that file. Those templates get passed the
integration data using the name `entry`, plus the composed related
resource data using the name `related`.

The `integrations.js` template is used to compose the final file. It gets
passed the JSON-formatted category and integration data using the names
`categories` and `integrations` respectively.
9
integrations/templates/alerts.md
Normal file
@@ -0,0 +1,9 @@
[% if entry.alerts %]
| Alert name | On metric | Description |
|:------------:|:---------:|:-----------:|
[% for alert in entry.alerts %]
| [ [[ alert.name ]] ]([[ alert.link ]]) | [[ alert.metric ]] | [[ alert.info ]] |
[% endfor %]
[% else %]
There are no alerts configured by default for this integration.
[% endif %]
6
integrations/templates/integrations.js
Normal file
@@ -0,0 +1,6 @@
// DO NOT EDIT THIS FILE DIRECTLY
// It gets generated by integrations/gen_integrations.py in the Netdata repo

export const categories = [[ categories ]]
export const integrations = [[ integrations ]]
49
integrations/templates/metrics.md
Normal file
@@ -0,0 +1,49 @@
[% if entry.metrics.scopes %]
## Metrics

[% if entry.metrics.folding.enabled %]
{% details summary="[[ entry.metrics.folding.title ]]" %}
[% endif %]
Metrics grouped by *scope*.

The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.

[[ entry.metrics.description ]]

[% for scope in entry.metrics.scopes %]
### Per [[ scope.name ]]

[[ scope.description ]]

[% if scope.labels %]
Labels:

| Label | Description |
|:-----------:|:----------------:|
[% for label in scope.labels %]
| [[ label.name ]] | [[ label.description ]] |
[% endfor %]
[% else %]
This scope has no labels.
[% endif %]

Metrics:

| Metric | Dimensions | Unit |[% for a in entry.metrics.availability %] [[ a ]] |[% endfor %]

|:------:|:----------:|:----:|[% for a in entry.metrics.availability %]:---:|[% endfor %]

[% for metric in scope.metrics %]
| [[ metric.name ]] | [% for d in metric.dimensions %][[ d.name ]][% if not loop.last %], [% endif %][% endfor %] | [[ metric.unit ]] |[% for a in entry.metrics.availability %] [% if a.name in metric.availability %]•[% else %] [% endif %] |[% endfor %]

[% endfor %]

[% endfor %]
[% if entry.metrics.folding.enabled %]
{% /details %}
[% endif %]
[% else %]
## Metrics

[[ entry.metrics.description ]]
[% endif %]
7
integrations/templates/overview.md
Normal file
@@ -0,0 +1,7 @@
[% if entry.integration_type == 'collector' %]
[% include 'overview/collector.md' %]
[% elif entry.integration_type == 'exporter' %]
[% include 'overview/exporter.md' %]
[% elif entry.integration_type == 'notification' %]
[% include 'overview/notification.md' %]
[% endif %]
67
integrations/templates/overview/collector.md
Normal file
@@ -0,0 +1,67 @@
# [[ entry.meta.monitored_instance.name ]]

## Overview

[[ entry.overview.data_collection.metrics_description ]]

[[ entry.overview.data_collection.method_description ]]

[% if entry.overview.supported_platforms.include %]
This collector is only supported on the following platforms:

[% for platform in entry.overview.supported_platforms.include %]
- [[ platform ]]
[% endfor %]
[% elif entry.overview.supported_platforms.exclude %]
This collector is supported on all platforms except for the following:

[% for platform in entry.overview.supported_platforms.exclude %]
- [[ platform ]]
[% endfor %]
[% else %]
This collector is supported on all platforms.
[% endif %]

[% if entry.overview.multi_instance %]
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
[% else %]
This collector only supports collecting metrics from a single instance of this integration.
[% endif %]

[% if entry.overview.additional_permissions.description %]
[[ entry.overview.additional_permissions.description ]]
[% endif %]

[% if related %]
[[ entry.meta.monitored_instance.name ]] can be monitored further using the following other integrations:

[% for res in related %]
- {% relatedResource id="[[ res.id ]]" %}[[ res.name ]]{% /relatedResource %}
[% endfor %]

[% endif %]
### Default Behavior

#### Auto-Detection

[% if entry.overview.default_behavior.auto_detection.description %]
[[ entry.overview.default_behavior.auto_detection.description ]]
[% else %]
This integration doesn't support auto-detection.
[% endif %]

#### Limits

[% if entry.overview.default_behavior.limits.description %]
[[ entry.overview.default_behavior.limits.description ]]
[% else %]
The default configuration for this integration does not impose any limits on data collection.
[% endif %]

#### Performance Impact

[% if entry.overview.default_behavior.performance_impact.description %]
[[ entry.overview.default_behavior.performance_impact.description ]]
[% else %]
The default configuration for this integration is not expected to impose a significant performance impact on the system.
[% endif %]
9
integrations/templates/overview/exporter.md
Normal file
@@ -0,0 +1,9 @@
# [[ entry.meta.name ]]

[[ entry.overview.exporter_description ]]
[% if entry.overview.exporter_limitations %]

## Limitations

[[ entry.overview.exporter_limitations ]]
[% endif %]
9
integrations/templates/overview/notification.md
Normal file
@@ -0,0 +1,9 @@
# [[ entry.meta.name ]]

[[ entry.overview.notification_description ]]
[% if entry.overview.notification_limitations %]

## Limitations

[[ entry.overview.notification_limitations ]]
[% endif %]
9
integrations/templates/platform_info.md
Normal file
@@ -0,0 +1,9 @@
[% if entries %]
The following releases of this platform are supported:

| Version | Support Tier | Native Package Architectures | Notes |
|:-------:|:------------:|:----------------------------:|:----- |
[% for e in entries %]
| [[ e.version ]] | [[ e.support ]] | [[ ', '.join(e.arches) ]] | [[ e.notes ]] |
[% endfor %]
[% endif %]
7
integrations/templates/related_resources.md
Normal file
@@ -0,0 +1,7 @@
[% if related %]
You can further monitor this integration by using:

[% for item in related %]
- {% relatedResource id="[[ item.id ]]" %}[[ item.name ]]{% /relatedResource %}: [[ item.info.description ]]
[% endfor %]
[% endif %]
94
integrations/templates/setup.md
Normal file
@@ -0,0 +1,94 @@
[% if entry.setup.description %]
[[ entry.setup.description ]]
[% else %]
[% if entry.setup.prerequisites.list %]
### Prerequisites

[% for prereq in entry.setup.prerequisites.list %]
#### [[ prereq.title ]]

[[ prereq.description ]]

[% endfor %]
[% endif %]
[% if entry.setup.configuration.file.name %]
### Configuration

#### File

The configuration file name for this integration is `[[ entry.setup.configuration.file.name ]]`.
[% if 'section_name' in entry.setup.configuration.file %]
Configuration for this specific integration is located in the `[[ entry.setup.configuration.file.section_name ]]` section within that file.
[% endif %]

[% if entry.plugin_name == 'go.d.plugin' %]
[% include 'setup/sample-go-config.md' %]
[% elif entry.plugin_name == 'python.d.plugin' %]
[% include 'setup/sample-python-config.md' %]
[% elif entry.plugin_name == 'charts.d.plugin' %]
[% include 'setup/sample-charts-config.md' %]
[% elif entry.plugin_name == 'ioping.plugin' %]
[% include 'setup/sample-charts-config.md' %]
[% elif entry.plugin_name == 'apps.plugin' %]
[% include 'setup/sample-apps-config.md' %]
[% elif entry.plugin_name == 'ebpf.plugin' %]
[% include 'setup/sample-netdata-config.md' %]
[% elif entry.setup.configuration.file.name == 'netdata.conf' %]
[% include 'setup/sample-netdata-config.md' %]
[% endif %]

You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).

```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config [[ entry.setup.configuration.file.name ]]
```

#### Options

[[ entry.setup.configuration.options.description ]]

[% if entry.setup.configuration.options.list %]
[% if entry.setup.configuration.options.folding.enabled %]
{% details summary="[[ entry.setup.configuration.options.folding.title ]]" %}
[% endif %]
| Name | Description | Default | Required |
|:----:|:-----------:|:-------:|:--------:|
[% for item in entry.setup.configuration.options.list %]
| [[ item.name ]] | [[ item.description ]] | [[ item.default ]] | [[ item.required ]] |
[% endfor %]

[% for item in entry.setup.configuration.options.list %]
[% if 'detailed_description' in item %]
##### [[ item.name ]]

[[ item.detailed_description ]]

[% endif %]
[% endfor %]
[% if entry.setup.configuration.options.folding.enabled %]
{% /details %}
[% endif %]
[% endif %]
[% if entry.setup.configuration.examples.list %]
#### Examples

[% for example in entry.setup.configuration.examples.list %]
##### [[ example.name ]]

[[ example.description ]]

[% if example.folding.enabled %]
{% details summary="[[ entry.setup.configuration.examples.folding.title ]]" %}
[% endif %]
```yaml
[[ example.config ]]
```
[% if example.folding.enabled %]
{% /details %}
[% endif %]
[% endfor %]
[% endif %]
[% endif %]
[% endif %]
41
integrations/templates/setup/sample-apps-config.md
Normal file
@@ -0,0 +1,41 @@
A custom format is used.

Each configuration line has a form like:

```
group_name: app1 app2 app3
```

Where `group_name` defines an application group, and `app1`, `app2`, and `app3` are process names to match for
that application group.

Each group can be given multiple times, to add more processes to it.

The process names are the ones returned by:

- `ps -e` or `/proc/PID/stat`
- in case of substring mode (see below): `/proc/PID/cmdline`

To add process names with spaces, enclose them in quotes (single or double):
`'Plex Media Serv' "my other process"`

Note that spaces are not supported for process groups. Use a dash "-" instead.

You can add an asterisk (\*) at the beginning and/or the end of a process name to do wildcard matching:

- `*name` suffix mode: will search for processes ending with `name` (`/proc/PID/stat`)
- `name*` prefix mode: will search for processes beginning with `name` (`/proc/PID/stat`)
- `*name*` substring mode: will search for `name` in the whole command line (`/proc/PID/cmdline`)

If you enter even just one `*name*` (substring) pattern, apps.plugin will process `/proc/PID/cmdline` for all processes,
just once (when they are first seen).
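A small illustration of the matching modes and the first-match-wins ordering described above and below (a hypothetical sketch in Python; apps.plugin implements this natively and the details may differ):

```python
def pattern_matches(pattern: str, comm: str, cmdline: str) -> bool:
    # comm is the short process name (/proc/PID/stat);
    # cmdline is the full command line (/proc/PID/cmdline).
    if pattern.startswith('*') and pattern.endswith('*'):
        return pattern[1:-1] in cmdline          # substring mode
    if pattern.startswith('*'):
        return comm.endswith(pattern[1:])        # suffix mode
    if pattern.endswith('*'):
        return comm.startswith(pattern[:-1])     # prefix mode
    return comm == pattern                       # exact match


def group_for(groups: list[tuple[str, list[str]]], comm: str, cmdline: str):
    # Entries are checked in configuration order; the first match wins.
    for group_name, patterns in groups:
        if any(pattern_matches(p, comm, cmdline) for p in patterns):
            return group_name
    return None  # unmatched processes inherit a group from parents or children
```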

To add processes with single quotes, enclose them in double quotes: "process with this ' single quote"

To add processes with double quotes, enclose them in single quotes: 'process with this " double quote'

The order of the entries in this list is important: the first entry that matches a process is used, so put the
important ones at the top. Processes not matched by any entry will inherit their group from their parents or children.

The order also controls the order of the dimensions on the generated charts (although applications started after
apps.plugin has started will be appended to the existing list of dimensions the netdata daemon maintains).
6
integrations/templates/setup/sample-charts-config.md
Normal file
@@ -0,0 +1,6 @@
The file format is POSIX shell script. Generally, the structure is:

```sh
OPTION_1="some value"
OPTION_2="some other value"
```
9
integrations/templates/setup/sample-go-config.md
Normal file
@@ -0,0 +1,9 @@
The file format is YAML. Generally, the structure is:

```yaml
update_every: 1
autodetection_retry: 0
jobs:
  - name: some_name1
  - name: some_name2
```
10
integrations/templates/setup/sample-netdata-config.md
Normal file
@@ -0,0 +1,10 @@
The file format is a modified INI syntax. The general structure is:

```toml
[section1]
    option 1 = some value
    option 2 = some other value

[section2]
    option 3 = some third value
```
10
integrations/templates/setup/sample-python-config.md
Normal file
@@ -0,0 +1,10 @@
The file format is YAML. Generally, the structure is:

```yaml
update_every: 1
autodetection_retry: 0

job_name:
  job_option1: some_value
  job_option2: some_other_value
```
107
integrations/templates/troubleshooting.md
Normal file
@@ -0,0 +1,107 @@
[% if entry.troubleshooting.list or entry.integration_type == 'collector' or entry.integration_type == 'notification' %]
## Troubleshooting

[% if entry.integration_type == 'collector' %]
[% if entry.plugin_name == 'go.d.plugin' %]
### Debug Mode

To troubleshoot issues with the `[[ entry.module_name ]]` collector, run the `go.d.plugin` with the debug option enabled. The output
should give you clues as to why the collector isn't working.

- Navigate to the `plugins.d` directory, usually at `/usr/libexec/netdata/plugins.d/`. If that's not the case on
  your system, open `netdata.conf` and look for the `plugins` setting under `[directories]`.

  ```bash
  cd /usr/libexec/netdata/plugins.d/
  ```

- Switch to the `netdata` user.

  ```bash
  sudo -u netdata -s
  ```

- Run the `go.d.plugin` to debug the collector:

  ```bash
  ./go.d.plugin -d -m [[ entry.module_name ]]
  ```
[% elif entry.plugin_name == 'python.d.plugin' %]
### Debug Mode

To troubleshoot issues with the `[[ entry.module_name ]]` collector, run the `python.d.plugin` with the debug option enabled. The output
should give you clues as to why the collector isn't working.

- Navigate to the `plugins.d` directory, usually at `/usr/libexec/netdata/plugins.d/`. If that's not the case on
  your system, open `netdata.conf` and look for the `plugins` setting under `[directories]`.

  ```bash
  cd /usr/libexec/netdata/plugins.d/
  ```

- Switch to the `netdata` user.

  ```bash
  sudo -u netdata -s
  ```

- Run the `python.d.plugin` to debug the collector:

  ```bash
  ./python.d.plugin [[ entry.module_name ]] debug trace
  ```
[% elif entry.plugin_name == 'charts.d.plugin' %]
### Debug Mode

To troubleshoot issues with the `[[ entry.module_name ]]` collector, run the `charts.d.plugin` with the debug option enabled. The output
should give you clues as to why the collector isn't working.

- Navigate to the `plugins.d` directory, usually at `/usr/libexec/netdata/plugins.d/`. If that's not the case on
  your system, open `netdata.conf` and look for the `plugins` setting under `[directories]`.

  ```bash
  cd /usr/libexec/netdata/plugins.d/
  ```

- Switch to the `netdata` user.

  ```bash
  sudo -u netdata -s
  ```

- Run the `charts.d.plugin` to debug the collector:

  ```bash
  ./charts.d.plugin debug 1 [[ entry.module_name ]]
  ```
[% endif %]
[% elif entry.integration_type == 'notification' %]
[% if not 'cloud-notifications' in entry._src_path %]
### Test Notification

You can run the following command by hand to test the alert notification configuration:

```bash
# become user netdata
sudo su -s /bin/bash netdata

# enable debugging info on the console
export NETDATA_ALARM_NOTIFY_DEBUG=1

# send test alarms to sysadmin
/usr/libexec/netdata/plugins.d/alarm-notify.sh test

# send test alarms to any role
/usr/libexec/netdata/plugins.d/alarm-notify.sh test "ROLE"
```

Note that this will test _all_ alert mechanisms for the selected role.
[% endif %]
[% endif %]
[% for item in entry.troubleshooting.list %]
### [[ item.name ]]

[[ item.description ]]

[% endfor %]
[% endif %]