What’s Owncast? Owncast in a nutshell

Owncast is an all-in-one streaming service that combines RTMP streaming with an easy-to-use, feature-rich and customizable web UI. It’s written in Go and is fully open source. You can find the source code on GitHub.

What you get out-of-the-box

  • a nicely designed website that presents your stream
  • a live chat beside the live stream, where viewers can engage with each other and with the streamer
  • an intuitive admin interface where you can change/design the looks of your site to your liking
  • a built-in information section on the stream page where your viewers can find information about you and your stream
  • an RTMP streaming endpoint that works with any existing streaming software (e.g. OBS)
  • a nicely documented API for programmatically extending the functionality of your Owncast instance (e.g. chatbots)
  • (optional) stream offloading to a third party (e.g. AWS) to support a huge number of parallel viewers
  • reaching a broader audience through the Owncast Directory (a free service that lists and promotes live Owncast streams)
  • and much, much more.

Why I ❤ Owncast

  • most importantly: Owncast violates neither my privacy nor the privacy of my viewers
  • I can host Owncast myself, on comparatively cheap hardware
  • through the Owncast Directory lots of lovely people have discovered my stream
  • it’s actively developed, with a very nice maintainer who puts huge amounts of effort into the project
  • the community around Owncast is just beyond! I’ve never in my whole life met so many awesome human beings

Hosting Owncast

Before we get started: there’s also a way to get your own Owncast instance even if you don’t want to set up and maintain it yourself. If you just want a cheap instance hosted for you, make sure to check out this tutorial from @gabek on how to get Owncast running on Linode with little to no prior knowledge.


podman instead of docker

Podman is a daemonless, open source, Linux native tool designed to make it easy to find, run, build, share and deploy applications using Open Containers Initiative (OCI) Containers and Container Images. Podman provides a command line interface (CLI) familiar to anyone who has used the Docker Container Engine. Most users can simply alias Docker to Podman (alias docker=podman) without any problems. Similar to other common Container Engines (Docker, CRI-O, containerd), Podman relies on an OCI compliant Container Runtime (runc, crun, runv, etc) to interface with the operating system and create the running containers. This makes the running containers created by Podman nearly indistinguishable from those created by any other common container engine.

I personally prefer podman over docker for the following reasons:

  • support for Linux cgroups v2
  • daemonless (no need for dockerd constantly running in the background)
  • no need to run containers as root anymore (neither inside the container nor on the host machine)

In this article I’ll be using podman; however, if you’re on docker you can simply alias podman to docker. podman’s CLI is fully compatible with docker’s, so you won’t notice any drastic difference.

alias podman=docker

The architecture

So this is what we’re building: nginx acts as a reverse proxy, forwarding HTTP traffic to Owncast’s built-in webserver (inside the container). Pushing traffic through nginx is especially helpful because it gives you one central point on your host machine where you terminate TLS. If you’re not familiar with nginx, you could of course use pretty much any other webserver, e.g. Apache, Caddy or Traefik. Owncast itself runs as a container. All data created inside the container that should be accessible from the host for backup purposes (e.g. the Owncast database) is mounted into the container as a podman/docker volume.
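Sketched roughly (the ports and paths are the ones used later in this article):

```
viewer ── HTTPS (443) ──> nginx (terminates TLS) ── HTTP ──> host:8011 -> container:8080 (Owncast web UI)
OBS ───── RTMP (1935) ─────────────────────────────────────> container:1935 (Owncast RTMP ingest)

host: /opt/owncast/data <── volume ──> container: /app/data
```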

Getting a container image

Building the Container from Sources

Luckily, Owncast ships a Dockerfile in its repository, which you can use to build the image yourself. Just pull the sources from Owncast on GitHub and then use either buildah or docker to create an OCI-compliant image.

git clone https://github.com/owncast/owncast.git
cd owncast/

# if you prefer buildah, use bud (= "build using docker") to build from the Dockerfile
buildah bud -f ./Dockerfile -t owncast .

# if you have docker installed, run the following instead
docker build -t owncast .

Pulling pre-built images from Docker Hub

Alternatively, if you prefer a pre-built image, you can pull gabekangas/owncast from Docker Hub. The image is officially pushed there by Gabe (the project maintainer) himself, so it can be considered trustworthy. Whether you trust Docker Hub is up to you.

Starting the Owncast container

# using the pre-built image from Gabe
podman pull gabekangas/owncast:latest

podman run \
    --name='owncast' \
    -v /opt/owncast/data:/app/data:Z \
    -p 8011:8080 -p 1935:1935 \
    -it gabekangas/owncast:latest

# Let's verify that Owncast is up and running
curl http://localhost:8011

This starts the Owncast container, listening on port 8011/tcp on the host (mapped to port 8080 inside the container). It also mounts the directory /opt/owncast/data into the container. On first run, Owncast creates its database; you can verify that everything worked by checking whether an owncast.db exists in /opt/owncast/data. The :Z at the end of the volume-mount parameter is necessary on Linux distributions with SELinux enabled (like RHEL, CentOS, Fedora, Rocky Linux, SUSE). It basically tells podman to take care of labelling the host directory correctly, so that SELinux does not prevent access to any files required by Owncast.

A simple Owncast management script

For the sake of simplicity I prefer starting/stopping/updating containerized applications from scripts rather than typing tons of commands each time. That’s why I usually end up with a small, quick-and-dirty management script that does the heavy lifting for me. In the case of Owncast, this is the simplest script I came up with.

I keep such scripts in /usr/local/bin to keep them accessible to all users on the system. There I’ve added owncast.sh with the following contents. Don’t forget to chmod +x /usr/local/bin/owncast.sh before running it.
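As an aside, copying the script and setting the executable bit can be done in one step with install. A quick sketch, demonstrated on a throw-away directory so it’s safe to run anywhere; on the server the destination would be /usr/local/bin:

```shell
#!/usr/bin/env bash
set -eu

# demo destination; on a real server this would be /usr/local/bin
DEST="$(mktemp -d)"

# a stand-in for the real management script
SRC="$(mktemp)"
printf '#!/usr/bin/env bash\necho owncast management script\n' > "$SRC"

# install copies the file AND sets mode 0755 in a single step
install -m 0755 "$SRC" "$DEST/owncast.sh"
"$DEST/owncast.sh"   # prints "owncast management script"
```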


#!/usr/bin/env bash

function Install() {
    # Install Owncast if the container does not exist yet
    podman ps --all | grep -q owncast
    if [ $? -ne 0 ]; then
        podman pull gabekangas/owncast:latest
        echo "Installing Owncast"
        podman run \
            --name='owncast' \
            -v /opt/owncast/data:/app/data:Z \
            -p 8011:8080 -p 1935:1935 \
            -it gabekangas/owncast:latest
    fi
}

function Update() {
    podman pull gabekangas/owncast:latest

    # remove the old container (if any), then recreate it from the new image
    podman ps --all | grep -q owncast
    if [ $? -eq 0 ]; then
        podman stop owncast
        podman rm owncast
    fi
    Install
}

function Start() {
    podman start -a owncast
}

function Stop() {
    podman stop -t 30 owncast
}

if [ "$1" = "install" ]; then
    Install
fi

if [ "$1" = "start" ]; then
    Start
fi

if [ "$1" = "stop" ]; then
    Stop
fi

if [ "$1" = "update" ]; then
    Update
fi

With this script it becomes a lot easier to install/update the Owncast container. Want to install Owncast? Just run /usr/local/bin/owncast.sh install. Stop it? /usr/local/bin/owncast.sh stop. Update it? /usr/local/bin/owncast.sh update. It just makes things a lot easier for you later on - and it gives you some flexibility for extensions. For example: if you want to back up the volume before every update, just add that to the Update function in the script. Easy fix, nice solution.
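The if-chain at the bottom of such a script can also be collapsed into a single case dispatch. A minimal sketch, with stub functions standing in for the real podman calls:

```shell
#!/usr/bin/env bash

# stubs - in the real script these wrap the podman commands
Install() { echo "installing"; }
Update()  { echo "updating";  }
Start()   { echo "starting";  }
Stop()    { echo "stopping";  }

dispatch() {
    case "$1" in
        install) Install ;;
        update)  Update  ;;
        start)   Start   ;;
        stop)    Stop    ;;
        *) echo "usage: owncast.sh {install|update|start|stop}" >&2; return 1 ;;
    esac
}

dispatch start   # prints "starting"
```

The case statement keeps all subcommands in one place and gives you a usage message for free when an unknown argument is passed.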

Writing a simple systemd unit

If you are using docker to run your containers, you don’t need systemd to start them - dockerd does that for you automagically. podman is daemonless, meaning you have to start your containers on boot yourself if you want them running. And that’s what we’re utilizing systemd for (that’s what an init system is supposed to do, right Lennart? 😆)

This unit file utilizes the previously shown management script. For security reasons, I recommend starting your containers as a separate, unprivileged user. In the following example I use the user owncast. Please note that this only works with podman - rootless, non-root containers are exactly where podman still has the edge over docker, and that’s why I consider it superior.
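As a sidenote: podman can even generate such a unit file from an existing container by itself. I stick to the small hand-written unit for transparency, but it’s worth knowing about:

```shell
# print a generated systemd unit for the existing 'owncast' container
podman generate systemd --name owncast
```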

Create a file with the following content under /etc/systemd/system/owncast.service.

[Unit]
Description=Owncast Container
After=network-online.target

[Service]
User=owncast
ExecStart=/usr/local/bin/owncast.sh start
ExecStop=/usr/local/bin/owncast.sh stop

[Install]
WantedBy=multi-user.target


After you’ve created the unit file, we can finally start the container as a system service and let systemctl take care of running/starting/stopping the service for us.

# tell systemd to reload all changes/additions in its unit-files
systemctl daemon-reload

# autostart Owncast on system boot
systemctl enable owncast.service

# start Owncast right away
systemctl start owncast.service

# check the logs and status of Owncast
systemctl status owncast.service

Reverse-Proxy for Owncast

Now that our Owncast container is running, we can finally expose it to the public. All we need to do is tell nginx to forward traffic to the running Owncast container. In the following configuration, Owncast is reachable on the subdomain live.rtrace.io - please make sure to replace it with your desired domain.

I frequently see Owncast hosted on a subdomain; here are the most common ones I’ve observed so far:

  • tv.domain.tld
  • live.domain.tld
  • cast.domain.tld
  • stream.domain.tld
  • owncast.domain.tld

Obviously you can pick any domain (and/or subdomain) to your liking - the list is just for reference. If you’re looking for further inspiration on naming your domain/subdomain, go check out the Owncast Directory. Just make sure that either your own DNS server or your DNS provider’s has an active A record pointing to the host running Owncast.

The configuration

server {
        listen 80;
        server_name live.rtrace.io;
        return 301 https://$host$request_uri;
}

server {
        listen 443 ssl;
        server_name live.rtrace.io;

        location / {
                proxy_buffering off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_set_header X-Forwarded-For $remote_addr;

                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection "upgrade";

                proxy_pass http://localhost:8011;
        }
}

The reverse proxy consists of two server blocks. One server block listens for HTTP traffic on port 80 for the subdomain live.rtrace.io; its only purpose is to redirect traffic to the second server block, which listens on port 443. 443/tcp is the HTTPS port, and inside this block we tell nginx to forward traffic, with some HTTP headers set, to http://localhost:8011 - which is exactly the port we previously started Owncast on. Nice!

Sidenote on HTTPS & Security

I usually configure TLS/HTTPS in the outermost nginx configuration block, located in /etc/nginx/nginx.conf. That way I have one configuration for all my server blocks: one place where I configure TLS correctly, and it propagates to every server block automatically. If you prefer to handle HTTPS differently (e.g. with a separate configuration per server block), feel free to do so. However, make sure to configure TLS right! As a privacy-conscious person visiting your Owncast instance, I expect TLS to be configured correctly - ideally with PFS (Perfect Forward Secrecy). If you have no clue how to achieve that, check out the Mozilla SSL Configuration Generator for nginx. This service creates a reasonably secure configuration for nginx (and various other webservers).

Additionally, you should check out the following resources to make your site more secure (both for you as administrator and for your visitors):

  • Qualys SSL Labs, SSL Server Test
    this tool helps you make sure that your TLS/HTTPS configuration is reasonably secure and up to date. If you follow the modern TLS configuration of the Mozilla SSL Configuration Generator, you should achieve at least a straight A rating. That’s what you should aim for.

  • HTTP Security Headers
    SecurityHeaders.com is a project created by Scott Helme. This site scans your website for the presence of security headers (such as Content-Security-Policy, Referrer-Policy, X-XSS-Protection, etc.) and explains how to configure them. You should aim for a straight A+ rating. For reference: the rating of this blog. Please be aware that certain headers can break certain features of Owncast (e.g. X-Frame-Options: DENY effectively disables the embed functionality - security is a balancing act, after all).
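To give a rough idea, a few commonly recommended headers could be added to the nginx server block like this. This is only a sketch - the exact set is up to you, and some headers conflict with Owncast features such as embedding:

```nginx
# inside the server { } block of your Owncast domain
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
add_header Strict-Transport-Security "max-age=63072000" always;
# do NOT set "X-Frame-Options DENY" if you want stream embedding to keep working
```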


Now that we have our container running and nginx reverse-proxying requests to it, we’re pretty much done with the setup. Reload the nginx configuration and you’re good to go - visit your newly created Owncast instance from any browser.

# let nginx verify that the configuration is correct
sudo nginx -t 

# reload the nginx configuration (if reload doesn't work for you, use restart instead)
sudo systemctl reload nginx.service



Firewall

If you can’t access Owncast from the outside world yet, make sure you have configured your firewall to allow traffic on 80/tcp and 443/tcp. Due to the huge variety of distributions and their default choices in firewall frontends, I’m not showing this in detail. What you’re looking for are frontends for iptables or nftables. For RHEL-based distributions your best bet is most likely firewall-cmd (a frontend for firewalld, which is a frontend for nftables and/or iptables - insert yo dawg meme here). If you’re on a Debian-based distribution like Ubuntu, what you’re looking for is ufw. Additionally, make sure to allow 1935/tcp - it’s the incoming (and outgoing) RTMP port.
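For reference, this is roughly what that looks like with the two frontends mentioned above (a sketch - verify against your own setup and default zone/profile):

```shell
# firewalld (RHEL-based distributions)
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --permanent --add-port=1935/tcp
sudo firewall-cmd --reload

# ufw (Debian-based distributions)
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw allow 1935/tcp
```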

SELinux / AppArmor

On RHEL-based distributions SELinux can (and will) be a minor pain in the a**. Check /var/log/audit/audit.log for messages starting with AVC if you suspect SELinux. You most likely need to give your reverse proxy access to networking resources, which you can do with setsebool -P httpd_can_network_connect 1.

Administration and maintenance


Configuration

All the configuration can (and should) be done in the admin interface (unless you’re running Owncast < 0.0.6, which used a YAML configuration file). The admin interface is accessible from any browser: just visit https://owncast.yourdomain.tld/admin. You’ll be prompted with a simple HTTP Basic Auth dialog. The default username is admin and the default password is abc123 - make sure to change it right away.


Updates

To update Owncast, just use the management script: call /usr/local/bin/owncast.sh update. It’ll pull the latest image and nicely start everything. I recommend following the Owncast releases page on GitHub closely - new releases are published there.


Backups

All the data relevant to Owncast lives in the volume we mounted into the container, so all you need to back up is /opt/owncast/data - most importantly the database /opt/owncast/data/owncast.db. You should back up this directory before every update to a newer version; depending on how much changes on your instance, frequent backups are recommended as well. If you lose the database, you’ll have to configure your instance from scratch.
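A minimal backup sketch along those lines - demonstrated on a throw-away directory so it’s safe to run anywhere; on the server you’d point DATA_DIR at /opt/owncast/data and BACKUP_DIR somewhere persistent:

```shell
#!/usr/bin/env bash
set -eu

# demo paths; on a real server: DATA_DIR=/opt/owncast/data, BACKUP_DIR=e.g. /var/backups/owncast
DATA_DIR="${DATA_DIR:-$(mktemp -d)/data}"
BACKUP_DIR="${BACKUP_DIR:-$(mktemp -d)}"

mkdir -p "$DATA_DIR"
touch "$DATA_DIR/owncast.db"   # stands in for the real database

# one compressed, dated archive per run
tar czf "$BACKUP_DIR/owncast-data-$(date +%F).tar.gz" \
    -C "$(dirname "$DATA_DIR")" "$(basename "$DATA_DIR")"

ls "$BACKUP_DIR"
```

Hooking a call like this into the Update function of the management script gives you an automatic pre-update backup.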

Support Owncast

Owncast is an awesome project, run by awesome people! Please consider supporting its development by leaving a small donation on OpenCollective. Also make sure to spread the word about Owncast. How great would it be if Owncast replaced all these proprietary streaming services?! You can help achieve that goal. Spread the word, spread the love 💝

Thank you for reading ❤

I hope you enjoyed reading this article. Maybe it was helpful, maybe it wasn’t? Maybe you learned something new? I’d be really glad to hear your opinions. Your feedback is much appreciated - just send a mail to bloginput@rtrace.io