# Spreed standalone signaling server
This repository contains the standalone signaling server which can be used for Nextcloud Talk (https://apps.nextcloud.com/apps/spreed).
See https://nextcloud-spreed-signaling.readthedocs.io/en/latest/ for further information on the API of the signaling server.
## Building
The following tools are required for building the signaling server.
- git
- go >= 1.22
- make
Usually the last two versions of Go are supported. This follows the release policy of Go: https://go.dev/doc/devel/release#policy
These additional tools are required to generate code from `.proto` files:
- protobuf-compiler >= 3
All other dependencies are fetched automatically while building.
```bash
$ make build
```

or on FreeBSD

```bash
$ gmake build
```

Afterwards the binary is created as `bin/signaling`.
## Configuration
A default configuration file is included as `server.conf.in`. Copy this to `server.conf` and adjust as necessary for the local setup. See the file for comments about the different parameters that can be changed.
## Running
The signaling server connects to a NATS server (https://nats.io/) to distribute messages between different instances. See the NATS documentation on how to set up a server and run it.
Once the NATS server is running (and the URL to it is configured for the signaling server), you can start the signaling server.
```bash
$ ./bin/signaling
```
By default, the configuration is loaded from `server.conf` in the current directory, but a different path can be passed through the `--config` option.
```bash
$ ./bin/signaling --config /etc/signaling/server.conf
```
## Running as daemon
### systemd
Create a dedicated group and user:
```bash
sudo groupadd --system signaling
sudo useradd --system \
    --gid signaling \
    --shell /usr/sbin/nologin \
    --comment "Standalone signaling server for Nextcloud Talk." \
    signaling
```
Copy `server.conf.in` to `/etc/signaling/server.conf` and fix permissions:

```bash
sudo chmod 600 /etc/signaling/server.conf
sudo chown signaling: /etc/signaling/server.conf
```
Copy `dist/init/systemd/signaling.service` to `/etc/systemd/system/signaling.service` (adjust the absolute path in `ExecStart` to match your binary location!).
Enable and start the service:

```bash
systemctl enable signaling.service
systemctl start signaling.service
```
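For reference, a minimal sketch of what such a unit file typically looks like; the shipped `dist/init/systemd/signaling.service` is the authoritative version, and all paths and option values below are illustrative assumptions:

```ini
# Minimal sketch of a systemd unit for the signaling server.
# The shipped dist/init/systemd/signaling.service is authoritative;
# paths and options here are illustrative.
[Unit]
Description=Standalone signaling server for Nextcloud Talk
After=network.target

[Service]
User=signaling
Group=signaling
ExecStart=/usr/local/bin/signaling --config /etc/signaling/server.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target
```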
## Running with Docker
Official Docker containers for the signaling server and proxy are available on Docker Hub at https://hub.docker.com/r/strukturag/nextcloud-spreed-signaling. See the `README.md` in the `docker` subfolder for details.
### Docker Compose
You will likely have to adjust the Janus command line options depending on the exact network configuration of your server. Refer to the "Setup of Janus" section below and the Janus documentation for how to configure your Janus server.
Copy `server.conf.in` to `server.conf` and adjust it to your liking. If you're using the `docker-compose.yml` configuration as is, the MCU URL must be set to `ws://localhost:8188`, the NATS URL must be set to `nats://localhost:4222`, and the TURN servers must be set to `turn:localhost:3478?transport=udp,turn:localhost:3478?transport=tcp`.
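A minimal sketch of the corresponding entries in `server.conf`; the `[nats]` and `[turn]` section and key names are assumptions, so verify them against the comments in `server.conf.in`:

```ini
# Sketch of the settings for the docker-compose.yml setup. The [nats]
# and [turn] section/key names are assumptions; check server.conf.in.
[mcu]
url = ws://localhost:8188

[nats]
url = nats://localhost:4222

[turn]
servers = turn:localhost:3478?transport=udp,turn:localhost:3478?transport=tcp
```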
```bash
docker-compose build
docker-compose up -d
```
Please note that docker-compose v2 is required for building, while most distributions ship older versions. You can download a recent version from https://docs.docker.com/compose/install/
## Setup of NATS server
There is a detailed description on how to install and run the NATS server available at https://docs.nats.io/running-a-nats-service/introduction
You can use the `gnatsd.conf` file as a base for the configuration of the NATS server.
## Setup of Janus
A Janus server (from https://github.com/meetecho/janus-gateway) can be used to act as a WebRTC gateway. See the documentation of Janus on how to configure and run the server. At least the `VideoRoom` plugin and the WebSocket transport of Janus must be enabled.
The signaling server uses the `VideoRoom` plugin of Janus to manage sessions. All gateway details are hidden from the clients; all messages are sent through the signaling server. Only WebRTC media is exchanged directly between the gateway and the clients.
Edit the `server.conf` and enter the URL to the WebSocket endpoint of Janus in the section `[mcu]`, key `url`. During startup, the signaling server will connect to Janus and log information about the gateway. The maximum bandwidth per publishing stream can also be configured in the section `[mcu]`; see the properties `maxstreambitrate` and `maxscreenbitrate`.
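A minimal sketch of such an `[mcu]` section; `janus` as the value of `type` is an assumption for a direct connection, and the bitrate values are illustrative, not recommendations:

```ini
# Sketch of a direct Janus connection in server.conf. type = janus is
# assumed for a direct connection; bitrates are illustrative example
# values in bits per second.
[mcu]
type = janus
url = ws://localhost:8188
maxstreambitrate = 1048576
maxscreenbitrate = 2097152
```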
### Use multiple Janus servers
To scale the setup and add high availability, a signaling server can connect to one or multiple proxy servers that each provide access to a single Janus server.
For that, set the `type` key in section `[mcu]` to `proxy` and set `url` to a space-separated list of URLs where a proxy server is running. Each signaling server that connects to a proxy needs a unique token id and a public / private RSA keypair. The token id must be configured as `token_id` in section `[mcu]`, the path to the private key file as `token_key`.
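A minimal sketch of such a configuration; the proxy URLs, token id and key path are illustrative:

```ini
# Sketch of a proxy-based MCU setup in server.conf; URLs, token id
# and key path are illustrative.
[mcu]
type = proxy
url = https://proxy1.domain.invalid https://proxy2.domain.invalid
token_id = signaling1
token_key = /etc/signaling/private.key
```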
## Setup of proxy server
The proxy server is built with the standard make command `make build` as the `bin/proxy` binary. Copy `proxy.conf.in` to `proxy.conf` and edit the section `[tokens]` to contain the list of allowed token ids and the filenames of the public keys for each token id. See the comments in `proxy.conf.in` for other configuration options.
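A minimal sketch of such a `[tokens]` section, assuming each entry maps a token id to the public key file of that signaling server; ids and paths are illustrative:

```ini
# Sketch of the [tokens] section in proxy.conf. The "token id =
# public key file" mapping is an assumption based on the description
# above; ids and paths are illustrative.
[tokens]
signaling1 = /etc/signaling/signaling1.pub
signaling2 = /etc/signaling/signaling2.pub
```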
When the proxy process receives a `SIGHUP` signal, the list of allowed token ids / public keys is reloaded. A `SIGUSR1` signal can be used to shut down a proxy process gracefully after all clients have been disconnected; no new publishers will be accepted in this case.
### Remote streams (preview)
With Janus 1.1.0 or newer, remote streams are supported, i.e. a subscriber can receive a published stream from any server. For this, you need to configure `hostname`, `token_id` and `token_key` in the proxy configuration. Each proxy server also supports configuring maximum `incoming` and `outgoing` bandwidth settings, which will also be used to select remote streams. See section `app` in `proxy.conf.in` for details.
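A minimal sketch of the remote-streams settings in `proxy.conf`; the hostname, id and path are illustrative, and the exact placement of the `incoming` / `outgoing` bandwidth keys should be taken from the comments in `proxy.conf.in`:

```ini
# Sketch of the remote-streams settings in proxy.conf; hostname, id
# and path are illustrative. See proxy.conf.in for the exact place of
# the incoming / outgoing bandwidth settings.
[app]
hostname = proxy1.domain.invalid
token_id = proxy1
token_key = /etc/signaling/proxy1.key
```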
## Clustering
The signaling server supports a clustering mode where multiple running servers can be interconnected to form a single "virtual" server. This can be used to increase the capacity of the signaling server or provide a failover setup.
For that, a central NATS server / cluster must be used by all instances. Each instance must run a GRPC server (enable `listening` in section `grpc` and optionally set up certificate, private key and CA). The list of other GRPC targets must be configured as `targets` in section `grpc`, or it can be retrieved from an etcd cluster. See section `grpc` in `server.conf.in` for configuration details.
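A minimal sketch of a static setup in `server.conf`; addresses are illustrative, and the comma-separated `targets` format is an assumption to verify against `server.conf.in`:

```ini
# Sketch of a static clustering setup in server.conf. Addresses are
# illustrative; the comma-separated targets format is an assumption,
# check server.conf.in. Each instance lists the other instances as
# targets.
[grpc]
listening = 0.0.0.0:9090
targets = 192.168.0.2:9090, 192.168.0.3:9090
```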
## Setup of frontend webserver
Usually the standalone signaling server is running behind a webserver that terminates SSL or acts as a load balancer for multiple signaling servers.
The configuration examples below assume a pre-configured webserver (nginx or Apache) with a working HTTPS setup that is listening on the external interface of the server hosting the standalone signaling server.
After everything has been set up, the configuration can be tested using `curl`:

```bash
$ curl -i https://myserver.domain.invalid/standalone-signaling/api/v1/welcome
HTTP/1.1 200 OK
Date: Thu, 05 Jul 2018 09:28:08 GMT
Server: nextcloud-spreed-signaling/1.0.0
Content-Type: application/json; charset=utf-8
Content-Length: 59

{"nextcloud-spreed-signaling":"Welcome","version":"1.0.0"}
```
### nginx
Nginx can be used as frontend for the standalone signaling server without any additional requirements.
The backend should be configured separately so it can be changed in a single location and also to allow using multiple backends from a single frontend server.
Assuming the standalone signaling server is running on the local interface on port `8080` below, add the following block to the nginx server definition in `/etc/nginx/sites-enabled` (just before the `server` definition):
```
upstream signaling {
    server 127.0.0.1:8080;
}
```
To proxy all requests for the standalone signaling server to the correct backend, the following `location` blocks must be added inside the `server` definition of the same file:
```
location /standalone-signaling/ {
    proxy_pass http://signaling/;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

location /standalone-signaling/spreed {
    proxy_pass http://signaling/spreed;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```
Example (e.g. `/etc/nginx/sites-enabled/default`):
```
upstream signaling {
    server 127.0.0.1:8080;
}

server {
    listen 443 ssl http2;
    server_name myserver.domain.invalid;

    # ... other existing configuration ...

    location /standalone-signaling/ {
        proxy_pass http://signaling/;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /standalone-signaling/spreed {
        proxy_pass http://signaling/spreed;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```
### Apache
To configure the Apache webserver as frontend for the standalone signaling server, the modules `mod_proxy_http` and `mod_proxy_wstunnel` must be enabled so WebSocket and API backend requests can be proxied:
```bash
$ sudo a2enmod proxy
$ sudo a2enmod proxy_http
$ sudo a2enmod proxy_wstunnel
```
Now the Apache `VirtualHost` configuration can be extended to forward requests to the standalone signaling server (assuming the server is running on the local interface on port `8080` below):
```
<VirtualHost *:443>

    # ... existing configuration ...

    # Enable proxying WebSocket requests to the standalone signaling server.
    ProxyPass "/standalone-signaling/" "ws://127.0.0.1:8080/"

    RequestHeader set X-Real-IP %{REMOTE_ADDR}s

    RewriteEngine On
    # WebSocket connections from the clients.
    RewriteRule ^/standalone-signaling/spreed/$ - [L]
    # Backend connections from Nextcloud.
    RewriteRule ^/standalone-signaling/api/(.*) http://127.0.0.1:8080/api/$1 [L,P]

    # ... existing configuration ...
</VirtualHost>
```
### Caddy

#### v1

Caddy (v1) configuration:
```
myserver.domain.invalid {
    proxy /standalone-signaling/ http://127.0.0.1:8080 {
        without /standalone-signaling
        transparent
        websocket
    }
}
```
#### v2

Caddy (v2) configuration:
```
myserver.domain.invalid {
    route /standalone-signaling/* {
        uri strip_prefix /standalone-signaling
        reverse_proxy http://127.0.0.1:8080 {
            header_up X-Real-IP {remote_host}
        }
    }
}
```
## Setup of Nextcloud Talk
Login to your Nextcloud as admin and open the additional settings page. Scroll
down to the "Talk" section and enter the base URL of your standalone signaling
server in the field "External signaling server".
Please note that you have to use `https` if your Nextcloud is also running on `https`. Usually you should enter `https://myhostname/standalone-signaling` as URL.
The value "Shared secret for external signaling server" must be the same as the property `secret` in section `backend` of your `server.conf`.
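A minimal sketch of the corresponding entry in `server.conf`; the secret value is an illustrative placeholder:

```ini
# Sketch of the shared-secret setting in server.conf. The value is an
# illustrative placeholder and must match the Talk admin settings.
[backend]
secret = replace-with-the-shared-secret
```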
If you are using a self-signed certificate for development, you need to uncheck the box "Validate SSL certificate" so backend requests from Nextcloud to the signaling server can be performed.
## Benchmarking the server
A simple client exists to benchmark the server. Please note that the features that are benchmarked might not cover the whole functionality; check the implementation in `src/client` for details on the client.
To authenticate new client connections to the signaling server, the client starts a dummy authentication handler on a local interface and passes the URL in the `hello` request. Therefore the signaling server should be configured to allow all backend hosts (option `allowall` in section `backend`).
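A minimal sketch of a benchmarking-only backend configuration; do not use `allowall` in production:

```ini
# Sketch of a benchmarking-only setting in server.conf. Allowing all
# backend hosts is insecure; do not use this in production.
[backend]
allowall = true
```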
The client is not compiled by default, but can be built using the `client` target:

```bash
$ make client
```
Usage:

```bash
$ ./bin/client
Usage of ./bin/client:
  -addr string
        http service address (default "localhost:28080")
  -config string
        config file to use (default "server.conf")
  -maxClients int
        number of client connections (default 100)
```