Owncast in a Nutshell

Owncast is an all-in-one streaming service that combines RTMP streaming with an easy-to-use, feature-rich and customizable web UI. It’s written in Go and fully open source. You can find the source code on GitHub.

What you get out-of-the-box

  • a nicely designed website that presents your stream
  • a live chat beside the live stream, where viewers can engage with each other and the streamer
  • an intuitive admin interface where you can tailor the look of your site to your liking
  • a built-in information section on the stream page where your viewers can find information about you and your stream
  • an RTMP streaming endpoint that works with any existing streaming software (e.g. OBS)
  • a nicely documented API for programmatically extending the functionality of your Owncast instance (e.g. chat bots)
  • (optional) stream offloading to a third party (e.g. AWS) to support a huge number of parallel viewers
  • a broader audience through the Owncast Directory (a free service that lists and promotes live Owncast streams)
  • and much, much more.

Why I ❤ Owncast

  • most importantly: Owncast violates neither my privacy nor the privacy of my viewers
  • I can host Owncast myself, on comparatively cheap hardware
  • lots of lovely people have discovered my stream through the Owncast Directory
  • it’s actively developed by a very nice maintainer who puts a huge amount of effort into the project
  • the community around Owncast is just wonderful! I’ve never met so many awesome human beings in one place

Hosting Owncast

Before we get started: there’s also a way to get your own Owncast instance even if you don’t want to set it up and maintain it yourself. If you just want a cheap instance hosted for you, make sure to check out this tutorial from @gabek on how to get Owncast running on Linode with little to no prior knowledge.


podman instead of docker

Podman is a daemon-less, open source, Linux native tool designed to make it easy to find, run, build, share and deploy applications using Open Containers Initiative (OCI) Containers and Container Images. Podman provides a command line interface (CLI) familiar to anyone who has used the Docker Container Engine. Most users can simply alias Docker to Podman (alias docker=podman) without any problems. Similar to other common Container Engines (Docker, CRI-O, containerd), Podman relies on an OCI compliant Container Runtime (runc, crun, runv, etc) to interface with the operating system and create the running containers. This makes the running containers created by Podman nearly indistinguishable from those created by any other common container engine.

I personally prefer podman over docker for the following reasons:

  • support for Linux cgroups v2
  • daemon-less (no need for dockerd running constantly in the background)
  • no need to run containers as root anymore (both inside the container and on the host machine)

In this article I’ll be using podman; if you use docker, you can simply alias podman to docker. The CLI of podman is fully compatible with the CLI of docker, so you won’t notice any drastic difference.

alias podman=docker

The architecture

So this is what we’re building: nginx acts as a reverse proxy, forwarding HTTP traffic to the built-in web server of Owncast (inside the container). Pushing traffic through nginx is especially helpful because it gives you one central point on your host machine where you terminate TLS. If you’re not familiar with nginx, you can of course use pretty much any other web server instead, e.g. Apache, Caddy or Traefik. Owncast itself runs as a container. All data created inside the container that should be accessible from the host for backup purposes (e.g. the Owncast database) is mounted into the container as a podman/docker volume.
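One small preparatory step before we get to the container itself: create the host directory backing that volume up front, so you don’t rely on the container engine creating it for you with unexpected ownership. (The path matches the volume mount used throughout this article; adjust it if you keep your data elsewhere.)

```shell
# create the host-side directory backing the container volume
sudo mkdir -p /opt/owncast/data
```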

Getting a container image

Building the Container from Sources

Luckily, Owncast ships a Dockerfile in its repository, which can be used to build the image yourself. You just need to pull the sources from Owncast on GitHub and then use either buildah or docker to create an OCI-compliant image.

git clone git@github.com:owncast/owncast.git
cd owncast/

# if you prefer buildah, use bud (= build-using-dockerfile) to build the Dockerfile
buildah bud -f ./Dockerfile -t owncast .

# if you have docker installed you run the following
docker build -t owncast .

Pulling pre-built images from DockerHub

Alternatively, if you prefer pulling a pre-built image, you can pull the image from Owncast on DockerHub. The image is officially pushed to DockerHub by Gabe (the project maintainer) himself, so the image can be considered trustworthy. Whether you trust DockerHub itself is up to you.

Starting the Owncast container

# using the pre-built image from Gabe
podman pull gabekangas/owncast:latest

podman run \
    --name='owncast' \
    -v /opt/owncast/data:/app/data:Z \
    -p 8011:8080 -p 1935:1935 \
    -it gabekangas/owncast:latest

# Let's verify that Owncast is up and running
curl http://localhost:8011

This simply starts the Owncast container, publishing ports 8011/tcp (web) and 1935/tcp (RTMP) and mounting the host directory /opt/owncast/data into the container at /app/data. On first run, Owncast will create the database and save it into /opt/owncast/data. You can verify that it worked by checking whether a file named owncast.db exists in /opt/owncast/data. The :Z at the end of the volume mount parameter is necessary on Linux distributions with SELinux enabled (like RHEL, CentOS, Fedora and Rocky Linux). It tells podman to label the directory on the host system correctly, so that SELinux does not prevent Owncast from accessing any files within it.
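If you want to double-check that everything came up as expected, a few quick probes (assuming the ports and paths from the command above) might look like this:

```shell
# is the container up?
podman ps --filter name=owncast

# did Owncast create its database on the host side?
ls -l /opt/owncast/data/owncast.db

# does the built-in web server answer? (should print 200)
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8011
```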

A simple container management script

For the sake of simplicity I prefer starting/stopping/updating containerized applications from scripts, rather than typing tons of commands each time. That’s why I usually end up with quick-and-dirty management scripts that do the heavy lifting for me. These scripts tend to just install, start, stop and update the container in a controlled manner. I keep these scripts under /usr/local/bin, to keep them accessible and executable for all users on a system. There I have added owncast.sh with the following contents. Don’t forget to chmod +x /usr/local/bin/owncast.sh before running it.

#!/usr/bin/env bash

podman_container_image_name="gabekangas/owncast:latest"
podman_container_name="owncast"
# set to 1 to pull the latest image on every start
podman_container_auto_update=0

function Install() {
  if ! podman image exists "$podman_container_image_name"; then
    podman pull "$podman_container_image_name"
  fi

  if ! podman container exists "$podman_container_name"; then
    echo "Starting $podman_container_name from $podman_container_image_name"
    podman run --name="$podman_container_name" \
      -p 8011:8080 \
      -p 1935:1935 \
      -v /opt/owncast/data:/app/data:Z \
      "$podman_container_image_name"
  fi
}

function Update() {
  podman pull "$podman_container_image_name"

  if podman container exists "$podman_container_name"; then
    podman stop -t 30 "$podman_container_name"
    podman rm "$podman_container_name"
  fi

  Install
}

function Start() {
  podman start -a "$podman_container_name"
}

function Stop() {
  podman stop -t 30 "$podman_container_name"
}

case "$1" in
  install)
    Install
    ;;
  start)
    Install

    if [ "$podman_container_auto_update" -eq 1 ]; then
      Update
    fi

    Start
    ;;
  stop)
    Stop
    ;;
  update)
    Update
    ;;
  *)
    echo "usage: $0 {install|start|stop|update}" >&2
    exit 1
    ;;
esac

With this script, managing the Owncast container becomes a lot easier. Want to install Owncast? Just run /usr/local/bin/owncast.sh install. Stop it? /usr/local/bin/owncast.sh stop. Update it? /usr/local/bin/owncast.sh update. It also gives you more flexibility in terms of extensibility later on. For example: if you want to make a backup of the volume before you run the update, just add it to the Update function in the script. Easy fix, nice solution.

Writing a simple systemd unit

If you are using docker to run your containers, you don’t need systemd to start them: dockerd does that for you automagically. podman is daemon-less, meaning you have to start your containers on boot yourself if you want them running. And that is what we’re using systemd for (that’s what an init system is supposed to do, right Lennart? 😆 - just joking, I’m a huge fan).

This unit file utilizes the previous management script. For security reasons, I recommend running your containers as a dedicated user. In the following example I use the user owncast. Please note that this only works out of the box with podman - rootless and non-root containers are exactly where podman is ahead of docker.
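Creating that dedicated user could look roughly like this. This is a sketch: the user name matches the unit file below, but the shell path and the example UID/GID ranges are assumptions you should adapt to your distribution. Rootless podman additionally needs subordinate UID/GID ranges for the user:

```shell
# create a dedicated, non-login system user for the container
sudo useradd --system --create-home --shell /sbin/nologin owncast

# rootless podman needs subuid/subgid ranges; check whether your distro added them
grep owncast /etc/subuid /etc/subgid \
  || echo "add them manually, e.g.: sudo usermod --add-subuids 200000-265535 --add-subgids 200000-265535 owncast"
```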

Create a file with the following content under /etc/systemd/system/owncast.service.

[Unit]
Description=Owncast Container
Wants=syslog.service

[Service]
Restart=always
User=owncast
Group=owncast
ExecStart=/usr/local/bin/owncast.sh start
ExecStop=/usr/local/bin/owncast.sh stop

[Install]
WantedBy=multi-user.target

After you have created the unit file, we can finally start the container as a system service and let systemctl take care of running/starting/stopping the service for us. Here, systemd just delegates ExecStart and ExecStop to the management script we added beforehand.

# tell systemd to reload all changes/additions in its unit-files
systemctl daemon-reload

# autostart Owncast on system boot
systemctl enable owncast.service

# start Owncast right away
systemctl start owncast.service

# check the logs and status of Owncast
systemctl status owncast.service

Reverse-Proxy for Owncast

Now that our Owncast container is running, we can finally expose it to the public. All we need to do is tell nginx to forward traffic to the running Owncast container. In the following configuration, Owncast is reachable on the subdomain live.rtrace.io. Please make sure to replace it with your desired domain (and/or subdomain).

I frequently see Owncast hosted on a subdomain; here are the most common ones I’ve observed so far:

  • tv.domain.tld
  • live.domain.tld
  • cast.domain.tld
  • stream.domain.tld
  • owncast.domain.tld

Obviously you can pick any domain (and/or subdomain) of your liking - the list is just for reference. If you’re looking for further inspiration on how to name your domain/subdomain, check out the Owncast Directory. Just make sure that either your own DNS server or your DNS provider’s has an active DNS A record pointing to the host or server running Owncast. If you’re hosting this from your home network with a dynamic IP address, you might want to look into DuckDNS or NoIP to point to your router’s IP address. Please note that you’ll also need to port-forward (NAT) to the machine in your network hosting Owncast.
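To verify that the record actually resolves to your server before touching nginx, a quick lookup helps (live.example.com stands in for your own subdomain):

```shell
# should print the public IP address of the host running Owncast
dig +short A live.example.com

# alternative if dig is not installed
nslookup live.example.com
```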

The configuration

server {
        listen 80;
        server_name live.rtrace.io;
        return 301 https://$host$request_uri;
}

server {
        listen 443 ssl;
        server_name live.rtrace.io;

        location / {
                proxy_buffering off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_set_header X-Forwarded-For $remote_addr;

                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection "upgrade";

                proxy_pass http://localhost:8011;
        }
}

Explanation

The reverse proxy consists of two server blocks. The first one listens for HTTP traffic on port 80 for the subdomain live.rtrace.io. Its only purpose is to redirect the client to the second server block, which is listening on port 443. 443/tcp is the HTTPS port, and inside this block we’re telling nginx to proxy the traffic, along with some additional HTTP headers, to http://localhost:8011 - exactly the port we started Owncast on earlier. Nice!
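Both server blocks can be verified from the command line (again, live.example.com is a placeholder for your own subdomain):

```shell
# the HTTP block should answer with a 301 redirect to HTTPS
curl -sI http://live.example.com | head -n 1

# the HTTPS block should proxy through to Owncast and return 200
curl -sI https://live.example.com | head -n 1
```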

a side note on HTTPS & Security

I usually configure TLS / HTTPS in the outermost nginx configuration block, located in /etc/nginx/nginx.conf. With that I have one configuration for all my server blocks. I prefer it that way, because then there is one place where I configure TLS right, and it propagates to all server blocks automatically. If you prefer to handle HTTPS differently (e.g. with a separate configuration for each server block), feel free to do so. However, make sure to configure TLS right! As a privacy-conscious person visiting your Owncast instance, I expect TLS to be configured correctly - ideally with PFS (Perfect Forward Secrecy). If you have no clue how to achieve that, check out the Mozilla SSL Configuration Generator for nginx. This service creates a reasonably secure configuration for nginx (and various other web servers).

Additionally you should check out the following resources to make your site more secure (both for you as administrator and your visitors):

  • Qualys SSL Labs, SSL Server Test
    this tool helps you make sure that your TLS/HTTPS configuration is reasonably secure and up to date. If you follow the modern TLS configuration of the Mozilla SSL Configuration Generator, you should achieve at least a straight A rating. That’s what you should aim for.

  • HTTP Security Headers
    SecurityHeaders.com is a project created by Scott Helme. This site scans your website for the presence of security headers (such as Content-Security-Policy, Referrer-Policy, X-XSS-Protection, etc.) and explains how to configure them. You should aim for a straight A+ rating. For reference: the rating of this blog you’re currently reading. Please be aware that certain headers can break certain features of Owncast (e.g. X-Frame-Options: deny effectively disables the embed functionality - security is an act of balance, after all).
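As a quick self-test between full scans, a small shell helper can check a response for a handful of common security headers. This is a sketch - the header list is my own selection, not an official one, and the function simply reads the output of curl -sI from stdin:

```shell
#!/usr/bin/env bash

# check_security_headers: reads an HTTP response header block on stdin
# and reports which of a few common security headers are present.
check_security_headers() {
  local headers h missing=0
  headers="$(cat)"  # capture stdin once so we can grep it repeatedly

  for h in Strict-Transport-Security X-Content-Type-Options \
           Referrer-Policy Content-Security-Policy; do
    if printf '%s\n' "$headers" | grep -qi "^${h}:"; then
      echo "present: $h"
    else
      echo "missing: $h"
      missing=1
    fi
  done

  return "$missing"
}
```

Usage: `curl -sI https://live.example.com | check_security_headers` - a non-zero exit status means at least one header from the list is missing.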

Finalizing

Now that we have our container running and nginx reverse-proxying requests to our container, we’re pretty much done with the setup part. Reload the nginx configuration and you are good to go and visit your newly created Owncast instance from any browser.

# let nginx verify that the configuration is correct
sudo nginx -t 

# reload the nginx configuration (if reload doesn't work for you, use restart instead)
systemctl reload nginx.service

Troubleshooting

Firewall

If you can’t access Owncast from the outside world yet, make sure you have configured your firewall to allow traffic on 80/tcp and 443/tcp. Due to the huge variety of distributions and their default choices in firewall frontends I’m not showing this in detail. What you’re looking for are frontends for iptables or nftables. For RHEL-based distributions your best bet is most likely firewall-cmd (frontend for firewalld, which is a frontend for nftables and/or iptables - insert yooo dawg meme here). If you’re on a Debian-based distribution like Ubuntu, what you’re looking for is ufw. Additionally, make sure to allow 1935/tcp - that’s the RTMP port your streaming software pushes to.
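For the two most common cases, the commands might look like this (a sketch - default zones and rule sets are assumptions, adjust to your setup):

```shell
# RHEL-based distributions (firewalld)
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --permanent --add-port=1935/tcp
sudo firewall-cmd --reload

# Debian-based distributions (ufw)
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw allow 1935/tcp
```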

SELinux / AppArmor

On RHEL-based distributions SELinux can (and will) be a minor pain in the a**. Check /var/log/audit/audit.log for messages starting with AVC if you suspect SELinux. You most likely need to give your reverse proxy access to networking resources. You can do this with setsebool -P httpd_can_network_connect 1.
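Checking for denials and flipping the boolean could look like this (assuming the audit daemon and its tooling are installed):

```shell
# show recent SELinux denials
sudo ausearch -m AVC -ts recent

# allow nginx (httpd_t) to open outbound network connections to Owncast
sudo setsebool -P httpd_can_network_connect 1
```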

Administration and maintenance

Configuration

All configuration can (and should) be done in the admin interface (unless you’re using Owncast < 0.0.6, which used a YAML configuration). The admin interface is accessible from a browser: just visit https://owncast.yourdomain.tld/admin. You’ll be prompted with a simple HTTP Basic Authentication dialog. The default username is admin and the default password is abc123. Make sure to change the password right after the first login - the password is also your RTMP streaming key!

Updates

To update Owncast, just use the maintenance script from earlier in this article: simply call /usr/local/bin/owncast.sh update. It’ll pull the latest image and cleanly restart everything. I recommend following the GitHub release page of Owncast closely; new releases are pushed there. If you want Owncast to auto-update on every start, set the podman_container_auto_update variable in the script to 1.

Backups

All data relevant to Owncast lives on the volume we mounted into the container - so everything you need to back up is within /opt/owncast/data, most importantly the database /opt/owncast/data/owncast.db. You should back up this directory before updating to a newer version; depending on the rate of change, frequent backups are recommended anyway. If you lose the database, you’ll have to configure your instance from scratch.
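A minimal backup helper might look like the sketch below. The function name, default paths and the stop/start-around-the-backup behaviour are my own choices, not part of Owncast - stopping the container ensures the SQLite database isn’t being written to while tar reads it:

```shell
#!/usr/bin/env bash

# backup_owncast [DATA_DIR] [BACKUP_DIR]
# writes a timestamped tar.gz of DATA_DIR into BACKUP_DIR and prints its path
backup_owncast() {
  local data_dir="${1:-/opt/owncast/data}"
  local backup_dir="${2:-/var/backups/owncast}"
  local archive="${backup_dir}/owncast-$(date +%Y-%m-%d_%H%M%S).tar.gz"

  mkdir -p "$backup_dir"

  # stop the container first so the SQLite database is in a consistent state
  if [ -x /usr/local/bin/owncast.sh ]; then
    /usr/local/bin/owncast.sh stop
  fi

  tar -czf "$archive" -C "$(dirname "$data_dir")" "$(basename "$data_dir")"

  if [ -x /usr/local/bin/owncast.sh ]; then
    /usr/local/bin/owncast.sh start
  fi

  echo "$archive"
}
```

A natural extension is calling this function from the Update path of the management script, so every update is preceded by a backup.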

Support Owncast

Owncast is an awesome project, run by awesome people! Please consider supporting the development of the Owncast project by leaving a small donation at OpenCollective. Also make sure to spread the word about Owncast. How great would it be if Owncast replaced all these proprietary streaming services?! You can help achieve that goal. Spread the word, spread the love 💝


Thank you for reading ❤

I hope you enjoyed reading this article. Maybe it was helpful, maybe it was not? Maybe you learned something new? I'd be really glad to hear your opinions. Your feedback is much appreciated, please just send a mail to bloginput@rtrace.io