Dashboards are the worst thing that happened to self hosting
I think dashboards lowkey ruined self hosting for a lot of people.
Not the tools themselves. The mindset they create.
Once you add a dashboard, you stop asking why you need a service and start asking why it's missing from the dashboard.
Suddenly you are running:
- a media server you barely use
- a note app you forget exists
- three monitoring tools watching each other
- metrics for things that have never once failed
All green lights. Zero value.
I noticed I was spending more time looking at my setup than actually using it. Clicking tiles. Refreshing stats. Updating containers that nobody depends on.
The worst part is dashboards make complexity feel productive. If it looks clean and organized, your brain assumes it's justified.
I unplugged my dashboard for a week.
Nothing bad happened.
Nothing broke.
I didn't miss a single service.
Now I only self host things that have users. Even if that user is just future me at 3am actually needing it.
Everything else is just digital shelf decor.
Curious how many people here would lose nothing if their dashboard disappeared tomorrow.
https://redd.it/1qez1l7
@r_SelfHosted
MusicGrabber - A self-hosted app for grabbing singles without the Lidarr drama
https://gitlab.com/g33kphr33k/musicgrabber
https://redd.it/1qef78k
@r_SelfHosted
Store backups off-node. If all managers die simultaneously (rare but possible), this is your recovery path.
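A sketch of what that can look like - the backup host and file names here are placeholders, and the restore step follows Docker's documented `--force-new-cluster` recovery flow:
```bash
# Ship the archive off the manager (destination is a placeholder)
scp swarm-backup-20260101.tar.gz backup@backup-host.example.com:/backups/

# Recovery on a replacement manager: stop Docker, restore state, re-init
sudo systemctl stop docker
sudo tar -xvzf swarm-backup-20260101.tar.gz -C /
sudo systemctl start docker
docker swarm init --force-new-cluster --advertise-addr 10.10.1.141
```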
---
## When NOT to Use Swarm
To be fair, Swarm isn't always the answer:
- **Need advanced scheduling?** K8s has more sophisticated options
- **Running 50+ services?** K8s ecosystem is more mature at scale
- **Need service mesh?** Istio/Linkerd integrate better with K8s
- **Team already knows K8s?** Stick with what you know
For everything else - small teams, 2-20 nodes, wanting to move fast - Swarm is hard to beat.
---
## GitHub Repo
All compose files, Dockerfiles, and configs mentioned in this guide:
**[github.com/TheDecipherist/docker-swarm-guide](https://github.com/TheDecipherist/docker-swarm-guide)**
The repo includes:
- Complete monitoring stack compose file
- Production-ready multi-stage Dockerfiles
- Network configuration examples
- Portainer deployment scripts
---
## What's Coming in V2
Based on community feedback, V2 will cover:
- Deep dive into monitoring (Prometheus, Grafana, DataDog comparison)
- Blue-green deployments in Swarm
- Logging strategies (ELK, Loki, etc.)
- Traefik integration for automatic SSL
---
*What's your Swarm setup? Running it in production? Home lab? What providers are you using? Drop your configs and war stories below — I'll incorporate the best tips into V2.*
*Questions? I'll be in the comments.*
https://redd.it/1qe7wue
@r_SelfHosted
**Why multi-stage?** Build tools stay in temp stage. Final image is clean and small.
### Key Rules
1. **Run in foreground** - `CMD ["nginx", "-g", "daemon off;"]` (official nginx image handles this)
2. **Pin base images** - `FROM ubuntu:22.04` not `FROM ubuntu:latest`
3. **Include health checks** - Swarm uses these for rolling updates (see the sketch after this list)
4. **Use .dockerignore** - Exclude `.env`, `node_modules`, `.git`
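For rule 3, a Dockerfile-level health check can look like this - a minimal sketch assuming your app exposes a `/health` endpoint on port 3000 (as in the compose example later) and that `curl` exists in the image:
```dockerfile
# Swarm waits for "healthy" before continuing a rolling update
HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --retries=3 \
  CMD curl -f http://localhost:3000/health || exit 1
```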
### Sample .dockerignore
```
.git
.gitignore
.env
.env.*
node_modules
npm-debug.log
Dockerfile*
docker-compose*
.dockerignore
*.md
.vscode
.idea
```
This keeps your build context small and prevents secrets from accidentally ending up in images.
---
## Monitoring Stack (Prometheus + Grafana)
Full compose file in the [GitHub repo](https://github.com/TheDecipherist/docker-swarm-guide). Key points:
| Service | Purpose | Mode |
|---------|---------|------|
| Grafana | Dashboards | 1 replica on manager |
| Prometheus | Metrics collection | 1 replica on manager |
| cAdvisor | Container metrics | Global (all nodes) |
| Node Exporter | Host metrics | Global (all nodes) |
Use `mode: global` for monitoring agents - runs ONE instance on EVERY node.
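As a compose fragment (the image pin is my choice, not from the guide):
```yaml
services:
  node-exporter:
    image: prom/node-exporter:v1.8.1
    deploy:
      mode: global   # one task per node; new nodes get one automatically
```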
**Quick setup tip:** Start with cAdvisor + Node Exporter first. Add Prometheus when you need historical data. Add Grafana when you need pretty dashboards for your team.
---
## Docker Management Platforms
Managing Swarm via CLI is powerful, but GUIs improve visibility significantly.
### Portainer
**Best for:** Teams wanting visual management without changing workflows.
```bash
# Deploy Portainer agent on each node
docker service create --name portainer_agent \
--publish mode=host,target=9001,published=9001 \
--mode global \
--mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
--mount type=bind,src=/var/lib/docker/volumes,dst=/var/lib/docker/volumes \
portainer/agent:latest
# Deploy Portainer server on manager
docker service create --name portainer \
--publish 9443:9443 --publish 8000:8000 \
--replicas=1 --constraint 'node.role == manager' \
--mount type=volume,src=portainer_data,dst=/data \
portainer/portainer-ce:latest
```
**Pricing:** CE is completely free with no node limits. Business Edition adds enterprise features.
**Why Portainer?** It shows you container logs, resource usage, network topology, and lets you manage stacks visually. Perfect for teams where not everyone is a CLI wizard.
### Platform Comparison
| Platform | Swarm Support | Git Deploy | Auto SSL | Best For |
|----------|---------------|------------|----------|----------|
| **Portainer** | Full | No | No | Visual management |
| **Dokploy** | Full | Yes | Yes | Heroku-style on Swarm |
| **Coolify** | Experimental | Yes | Yes | 280+ templates, great UI |
| **CapRover** | Full (native) | Yes | Yes | Proven Swarm PaaS |
| **Dockge** | None | No | No | Simple Compose management |
**My setup:** Portainer for visibility + custom CI/CD + Prometheus/Grafana for monitoring.
**Note on Coolify:** Their Swarm support is experimental. Works for basic setups but I've hit edge cases. Great project though - watch this space.
---
## Secret Management
**Stop using environment variables for secrets.**
```yaml
secrets:
app_secrets:
external: true # Created via CLI or Portainer
services:
app:
secrets:
- app_secrets
```
Create secrets:
```bash
docker secret create app_secrets ./secrets.json
```
Secrets appear as files in `/run/secrets/SECRET_NAME`. They're encrypted at rest, not visible in `docker inspect`, and only sent to nodes that need them.
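A quick sanity check that the secret landed where your app expects it (the `mystack_app` service name follows the naming used elsewhere in this guide):
```bash
# Secrets are mounted as in-memory files inside the task, not env vars
docker exec "$(docker ps -q -f name=mystack_app | head -n1)" \
  cat /run/secrets/app_secrets
```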
---
## CI/CD Versioning
```bash
BUILD_VERSION=$(cat ./buildVersion.txt)
LONG_COMMIT=$(git rev-parse HEAD)
docker compose build --build-arg GIT_COMMIT=$LONG_COMMIT --build-arg BUILD_VERSION=$BUILD_VERSION
docker compose push
docker stack deploy -c docker-compose.yml mystack
```
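For those `--build-arg` values to make it into the image, the Dockerfile needs matching `ARG` declarations - a sketch using the standard OCI label keys (the label choice is mine, not from the guide):
```dockerfile
# Declare the args so docker build --build-arg can populate them
ARG GIT_COMMIT=unknown
ARG BUILD_VERSION=0.0.0
LABEL org.opencontainers.image.revision=$GIT_COMMIT \
      org.opencontainers.image.version=$BUILD_VERSION
```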
The Complete Docker Swarm Production Guide for 2026: Everything I Learned Running It for Years
📸 **[View FULL version on GITHUB website](https://thedecipherist.github.io/docker-swarm-guide/?utm_source=reddit&utm_medium=post&utm_campaign=docker-swarm-guide&utm_content=v1-guide)**
## V1: Battle-Tested Production Knowledge
**TL;DR:** I've been running Docker Swarm in production on AWS for years and I'm sharing everything I've learned - from basic concepts to advanced production configurations. This isn't theory - it's battle-tested knowledge that kept our services running through countless deployments.
**What's in V1:**
- Complete Swarm hierarchy explained
- VPS requirements and cost planning across providers
- DNS configuration (the #1 cause of Swarm issues)
- Production-ready compose files and multi-stage Dockerfiles
- Prometheus + Grafana monitoring stack
- Platform comparison (Portainer, Dokploy, Coolify, CapRover, Dockge)
- CI/CD versioning and deployment workflows
- [GitHub repo](https://github.com/TheDecipherist/docker-swarm-guide) with all configs
---
## Why Docker Swarm in 2026?
Before the Kubernetes crowd jumps in - yes, I know K8s exists. But here's the thing: **Docker Swarm is still incredibly relevant in 2026**, especially for small-to-medium teams who want container orchestration without the complexity overhead.
Swarm advantages:
- Native Docker integration (no YAML hell beyond compose files)
- Significantly lower learning curve
- Perfect for 2-20 node clusters
- Built-in service discovery and load balancing
- Rolling updates out of the box
- Works with your existing Docker Compose files (mostly)
If you're not running thousands of microservices across multiple data centers, Swarm might be exactly what you need.
---
## Understanding the Docker Swarm Hierarchy
```
Swarm → Nodes → Stacks → Services → Tasks (Containers)
```
- **Swarm**: Your entire cluster. Only works with **pre-built images** - no `docker build` in production.
- **Nodes**: Managers (handle state/scheduling) and Workers (run containers). Use 3 or 5 managers for HA.
- **Stacks**: Groups of related services from a compose file.
- **Services**: Manage replicas, rolling updates, health monitoring, auto-restart.
- **Tasks**: A Task = Container. 6 replicas = 6 tasks.
---
## VPS Requirements & Cost Planning
Docker Swarm is lightweight - minimal overhead compared to Kubernetes.
### Infrastructure Presets
| Preset | Nodes | Layout | Min Specs (per node) | Use Case |
|--------|-------|--------|---------------------|----------|
| **Minimal** | 1 | 1 manager | 1 vCPU, 1GB RAM, 25GB | Dev/testing only |
| **Basic** | 2 | 1 manager + 1 worker | 1 vCPU, 2GB RAM, 50GB | Small production |
| **Standard** | 3 | 1 manager + 2 workers | 2 vCPU, 4GB RAM, 80GB | Standard production |
| **HA** | 5 | 3 managers + 2 workers | 2 vCPU, 4GB RAM, 80GB | High availability |
### Approximate Monthly Costs (2025/2026)
| Provider | Basic (2 nodes) | Standard (3 nodes) | HA (5 nodes) |
|----------|-----------------|--------------------|--------------|
| **Hetzner** | ~€8-12 | ~€20-30 | ~€40-60 |
| **Vultr** | ~$12-20 | ~$30-50 | ~$60-100 |
| **DigitalOcean** | ~$16-24 | ~$40-60 | ~$80-120 |
| **Linode** | ~$14-22 | ~$35-55 | ~$70-110 |
**Why these numbers?**
- **1GB RAM minimum**: Swarm itself uses ~100-200MB, but you need headroom for containers
- **3 or 5 managers for HA**: Raft consensus needs a majority (⌊N/2⌋+1) for quorum - 3 managers tolerate 1 failure, 5 tolerate 2, and 4 tolerates no more than 3 does, so use odd numbers
- **2 vCPU for production**: Single core gets bottlenecked during deployments
### My Recommendation
For most small-to-medium teams:
1. **Start with Basic (2 nodes)** - 1 manager + 1 worker on Vultr or Hetzner
2. **Budget ~$20-40/month** for a production-ready setup
3. **Add nodes as needed** - Swarm makes scaling easy
If you need HA from day one, the **Standard (3 nodes)** preset gives you redundancy without breaking the bank.
### What About AWS/GCP/Azure?
Cloud giants work fine with Swarm, but:
- **More expensive** for equivalent specs
- **More complexity** (VPCs, security groups, IAM)
- **Better if** you need other AWS services (RDS, S3, etc.)
We run Swarm on AWS EC2 because we're already deep in the AWS ecosystem. If you're starting fresh, a dedicated VPS provider is simpler and cheaper.
Updates to Mediora (open-source Apple TV Jellyfin/Sonarr/Radarr client): Playback fixes, UI polish, and better discovery/search
https://apps.apple.com/us/app/mediora/id6757345487
https://redd.it/1qdz0zc
@r_SelfHosted
How are you handling secrets?
I have made the mistake of going down the secrets management rabbit hole over the last few days and intend to do something to address my obvious shortcomings. Things I am looking to secure:
* Environment variables (both in Docker Compose and regular .env files)
* DNS API keys (e.g. acme.sh)
* Sensitive creds in configuration files, e.g. OIDC client secret.
At this point, it seems my options are between Infisical and OpenBao, but I have no experience with either.
Would love to hear the challenges others have faced, how the challenges were overcome and any recommendations or advice from those who have walked this path before me. Thank you!
https://redd.it/1qdwnzf
@r_SelfHosted
Why are hard drives becoming so expensive in 2026?
I was checking on hard drives with a minimum storage capacity of 20TB and was shocked when I saw the prices. I think the prices increased by at least 20%. What is happening? I thought China had entered the market, but it seems they haven't.
https://redd.it/1qdsu88
@r_SelfHosted
Looking for a self hosted meal planning app with some specific features
I know I have seen something like this, but I can't seem to find what I need. Any help is appreciated.
Here is what I am trying to do:
- Set a genre of food for each day of the week (Sunday = American, Monday = Italian, Tuesday = Mexican, etc.)
- Then for each day, a handful of random but different (not like 5 lasagna recipes), highly rated recipes (possibly with ingredient limits) are offered
- After a recipe for each day is selected, a shopping list of what is required for the week/all recipes is generated
- Ability to print recipes out
Totally cool if this hooks up to other recipe services, as long as it is free or selfhosted.
Thanks!
https://redd.it/1qdoxdo
@r_SelfHosted
Introducing Sendinator - a self-hosted web-based file sharing app
Hi Everyone!
Somewhat inspired by XKCD, I wanted to make a self-hosted app that solves the simple problem of sharing a file (or many files) too big to e-mail. Like in the comic, even in 2026 most users would probably use something like Google Drive or Dropbox. It seems dumb that we have to rely on tech giants hosting our files for what should be a pretty basic task.
So my design goals are:
You can share the file before it's finished uploading. I accomplish this by breaking the file or collection into 16MB blocks. The downloader can start getting the blocks before they're all there and gracefully wait. This makes it a LOT more convenient to send large files because you don't have to wait around for the upload before you share.
Handle large file sizes (like 100GB plus)
Handle many files/folders at once (like a million)
Browser only; no plugins or apps
Resilient downloads -- even sleeping your laptop or an extended internet outage won't break the managed downloader
Quotas to prevent accidental runaway bandwidth usage. You can issue an upload key with an arbitrary allowance
Ease of use on all platforms
Easy to deploy (for this community this is probably true, but I think I could make it easier; currently the easy way to install is to fire up a container or VM and stick a reverse proxy like nginx proxy manager in front of it. You really want HTTPS, otherwise browser security breaks things.)
screenshot: A folder with almost 10,000 files I uploaded with no issues. It even gets decent speed on small files due to chunking them into 16MB blocks
It uses webauthn (passkeys) for dead-simple secure administration
screenshot: admin login
screenshot: admin panel
github
demo instance
key: e4RjTkPX52xJXZDknwSh - I made a limited key, so if it runs out, test it on your own server.
demo download of ubuntu iso
FAQ:
Q: Why no end to end encryption?
A: Because E2EE in JavaScript is bullshit. Why? Because even if the cryptography is properly implemented, there are no guarantees that this isn't randomly disabled for a targeted user. See, the point of E2EE is to secure against even the people who run the server. The problem is that if you run the server, you can change the crypto library any time you want. You could disable encryption or steal keys and your browser wouldn't warn you.
Sendinator configured with https encrypts data in transit and then you just have to trust the server you run it on. Don't trust a server, run your own.
https://redd.it/1qdm50e
@r_SelfHosted
Do you trust Vaultwarden / Psono?
I am looking into self hosted password managers and keep coming back to Vaultwarden and Psono. Both seem popular in the self hosted space, but I am curious how much people actually trust them for long term use?
If you are using Vaultwarden or Psono in production or at home, how has it been over time? Any issues with updates, data integrity, backups, or mobile access?
https://redd.it/1qdfo8l
@r_SelfHosted
TrueNAS not detecting my NIC
https://redd.it/1qd9oln
@r_SelfHosted
Dashboard Sharing - take a Glance
https://redd.it/1qd90v9
@r_SelfHosted
Kavita v0.8.9 - New Stats pages, Journal Style reading, 50x Faster Scanner, and so much more!
Kavita just released a massive new version and I felt it was a good time to make a post to r/selfhosted with an update.
In the last update, I had talked about some community feature requests, like OIDC and Annotations for books, that were coming to Kavita. We delivered those last release and the feedback has been great. In this release, I delivered on a massive overhaul to Kavita's reading progress to bring out the full potential of our stat tracking. We also increased our scanner speed by 50x by finally fixing a long time issue. This release has been a lot of fun, we delivered 12 Feature Requests with a total of 98 upvotes.
You can find the full release here: [https://github.com/Kareadita/Kavita/releases/tag/v0.8.9](https://github.com/Kareadita/Kavita/releases/tag/v0.8.9)
What's coming up this year:
* **Reading List Overhaul Project** - I've worked closely with the CBL group to define some key improvements to Kavita's reading list experience along with the CBL import.
* **Kobo Sync** - Another big one that most other servers have adopted. We have Progress Sync support already, but this is the icing on the cake.
* **Kavita+ Enhancements** - Hardcover and MangaBaka (once it's stable) are still on my list, along with a slew of issues that have been piling up.
Here are some of the top features (too many to cover, even from the release notes):
# Journal-Style Progress and Stats
https://preview.redd.it/w6d27v7e1edg1.png?width=1992&format=png&auto=webp&s=c7e5ca85dfcffa32c89973b301d5024467caada1
https://preview.redd.it/rsj4eate1edg1.png?width=1766&format=png&auto=webp&s=cc9596721d70d295636fd16f0a2b3a422abbb0f2
https://preview.redd.it/d9w320bf1edg1.png?width=1775&format=png&auto=webp&s=84c62c8142580bbb49eb5b2432384c4d7cee87e6
https://preview.redd.it/8ykiwosf1edg1.png?width=1875&format=png&auto=webp&s=78ec2ee9ea2728bcb4531a7f0e30ff851a6e9b4d
# Devices and the ability to track reading against and bind settings to them
https://preview.redd.it/3mzfprbk1edg1.png?width=1615&format=png&auto=webp&s=e4c2f2b333729ee3e1db9fa77df64bcafbd772b0
# 50x (or more) improvement in scan time
- 14 days -> 3 hours over 141K files (112x faster)
- 10 days -> 4 hours over 50K files (60x faster)
- 3.5 hours for 96K archives with 32 threads
# OPDS/KOReader Love
KOReader Progress sync now works reliably and I expanded support to Archives/Epub/PDF. OPDS got a massive overhaul in terms of performance.
https://redd.it/1qd1cr2
@r_SelfHosted
I replaced my AWS stack with 2 broken laptops and a $10 VPS. Here is the architecture.
Hello everyone! This is a project I've been spending a lot of time on recently and have spent even longer fixing stuff I broke in the process.
This subreddit really helped me figure out the right tools to use (especially regarding the networking), so thanks for that.
The basic setup is two old laptops with broken screens (one in Dubai, one in the US) mesh-networked with a cheap VPS using Tailscale. I'm using it to host my own media (Jellyfin), passwords, and a custom security dashboard I built to watch SSH attacks.
I wrote up the full breakdown of the hardware and config on my blog if anyone is interested:
https://blog.sanyamgarg.com/#/posts/private-cloud
I also host a dashboard for the servers' stats pulled directly from their dashdot instances:
https://server.sanyamgarg.com
Very fun!
https://redd.it/1qcxlxv
@r_SelfHosted
I _also_ built yet another modern, self hosted IPTV player ... because I didn't know the other 2 guys already did.
https://streamable.com/a8bvrg
https://redd.it/1qeobvd
@r_SelfHosted
So how are you guys handling the spotify/yt music "knowing what you like" problem?
My current setup: I've got a library of roughly 40k songs, currently hosted through Jellyfin with the AudioMuse plugin and MusicBrainz Picard. I'm accessing it via Symphonium.
I want to get off Spotify and YT Music as my streamed music providers, but the issue I have is that in those apps the recommendations are too spot on and I can't find a way to replicate that (and I'm not referring to new music from outside my library).
I just mean the "focus", "workout", and "energy" personalized playlists, and the ability to look up one song and have it perfectly play similar songs by artists I like while slowly fading back to the music it knows I like, all without me having to click the "skip song" button for a few hours.
Meanwhile Symphonium, even with the mood tagging and AudioMuse, just feels like it's throwing stuff at the wall to see what sticks: rap followed by Frank Sinatra, then Justin Bieber and Korn. Even when it does get it right and feels moderately cohesive, I think it just doesn't know what music I like, and I'm not quite sure how to help it get there. There are lots of songs I really like in my library, but I feel like I'm constantly skipping and the music I want to listen to isn't playing. Any advice would be great.
https://redd.it/1qe4xt8
@r_SelfHosted
**Never use `latest` in production.** Use commit hashes or semantic versions.
**Why versioning matters:**
- Rollback becomes a one-liner: `docker service update --image yourapp:v1.2.3 mystack_app`
- You know exactly what's running on each node
- Audit trails for compliance
- No more "but it worked on my machine" mysteries
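A minimal tag-by-commit deploy sketch (the registry host and service name are placeholders in the style of the examples above):
```bash
# Tag the image with the short commit hash, push, then roll it out
TAG=$(git rev-parse --short HEAD)
docker build -t registry.example.com/yourapp:"$TAG" .
docker push registry.example.com/yourapp:"$TAG"
docker service update --image registry.example.com/yourapp:"$TAG" mystack_app
```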
---
## Useful Commands
```bash
# Node management
docker node ls # List all nodes
docker node update --availability=drain docker2.domain.io # Maintenance mode
docker node update --availability=active docker2.domain.io # Back to active
docker node inspect docker2.domain.io --pretty # Node details
# Stack operations
docker stack deploy -c docker-compose.yml mystack # Deploy/update stack
docker stack services mystack # List services in stack
docker stack ps mystack # List tasks (containers)
docker stack rm mystack # Remove stack
# Service operations
docker service scale mystack_web=4 # Scale to 4 replicas
docker service logs -f mystack_web # Follow logs
docker service logs --tail 100 mystack_web # Last 100 lines
docker service update --force mystack_web # Force redeploy
docker service update --image yourapp:v2 mystack_web # Update image
# Debugging
docker service ps mystack_web --no-trunc # Full error messages
docker inspect $(docker ps -q -f name=mystack_web) # Container details
```
**Pro tip:** `docker stack deploy` is idempotent. Run it again to update - no need to `rm` first.
---
## Common Gotchas
These issues have cost me hours. Learn from my pain.
**Containers can't communicate between nodes:**
1. Verify overlay network exists: `docker network ls`
2. Check it's attached to your service in compose file
3. Verify DNS config in `/etc/systemd/resolved.conf` on each node
4. Ensure ports 7946 (TCP/UDP) and 4789 (UDP) are open between nodes
5. If using `--opt encrypted`, try without it first (NAT issues)
**Service stuck in "Pending":**
```bash
docker service ps myservice --no-trunc
```
Common causes:
- Resource constraints - scheduler can't find a node with enough CPU/memory
- Image doesn't exist or can't be pulled (check registry auth)
- Placement constraints can't be satisfied
- All nodes are drained or paused
**Rolling update hangs:**
Health checks are usually the culprit. Your container might be healthy but Swarm doesn't know it.
```yaml
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s # Give your app time to start!
```
**"No such network" errors:**
Create networks BEFORE deploying stacks:
```bash
docker network create --driver overlay --attachable mynetwork
docker stack deploy -c compose.yml mystack
```
**Secrets not updating:**
Secrets are immutable. To update:
1. Create new secret with different name: `docker secret create app_secrets_v2 ./secrets.json`
2. Update compose to reference new secret name
3. Redeploy stack
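Step 2 in compose form - a fragment assuming the `app_secrets_v2` name from step 1; the long syntax keeps the in-container path stable:
```yaml
services:
  app:
    secrets:
      - source: app_secrets_v2
        target: app_secrets   # path stays /run/secrets/app_secrets
secrets:
  app_secrets_v2:
    external: true
```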
---
## Final Tips
1. **Use Portainer** - Free and makes Swarm management much easier. Deploy it first.
2. **Always use external networks** - Create overlay networks before deploying stacks
3. **Tag images properly** - Never `latest` in production. Use commit hashes or semver.
4. **Set resource limits** - Always. A runaway container will take down your node.
5. **Test your rollback** - Deploy a broken image intentionally to verify auto-rollback works (see the drill after this list)
6. **Monitor from day one** - Prometheus + Grafana is free and catches issues early
7. **Document your setup** - Future you will thank present you
8. **Start small** - 2 nodes is enough to learn. Scale when you need it.
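For tip 5, a drill along these lines - the image tag is a deliberate dud, and the names follow the earlier examples:
```bash
# Push a tag you know fails its health check, then watch Swarm back out
docker service update --image yourapp:known-bad mystack_app
docker service ps mystack_app        # tasks fail, previous version comes back
# With failure_action: rollback set, expect "rollback_completed" here
docker service inspect mystack_app --format '{{.UpdateStatus.State}}'
```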
---
## Backup Your Swarm State
Swarm state lives on manager nodes. Back it up:
```bash
# Stop Docker (on manager)
sudo systemctl stop docker
# Backup the Swarm state
sudo tar -cvzf swarm-backup-$(date +%Y%m%d).tar.gz /var/lib/docker/swarm
# Start Docker
sudo systemctl start docker
```
---
## Setting Up Your Production Environment
### Install Docker (Ubuntu)
```bash
# Add Docker's official GPG key and repo
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo usermod -aG docker $USER
```
**Important:** Use `docker compose` (space), not `docker-compose` (deprecated).
### Initialize the Swarm
```bash
# Get your internal IP
ip addr
# Initialize on manager (use YOUR internal IP)
docker swarm init --advertise-addr 10.10.1.141:2377 --listen-addr 10.10.1.141:2377
# Join token for workers (save this!)
docker swarm join --token SWMTKN-1-xxxxx... 10.10.1.141:2377
```
**Critical:** Use a fixed IP for advertise address. Dynamic IPs will break your cluster on restart.
---
## DNS Configuration (This Will Save You Hours)
**CRITICAL**: DNS issues cause 90% of Swarm networking problems.
Edit `/etc/systemd/resolved.conf` on each node:
```ini
[Resolve]
DNS=10.10.1.122 8.8.8.8
Domains=~yourdomain.io
```
Then reboot. Docker runs its own DNS at `127.0.0.11` for container-to-container resolution.
**Rule:** Never hardcode IPs in Swarm. Use service names - Docker handles routing.
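To see it in action - since `awsnet` (created below) is attachable, a throwaway container can join it for a quick test. Service name `nodeserver` and the `/health` path come from the compose example later:
```bash
# Docker's embedded DNS (127.0.0.11) resolves the service name to a VIP
docker run --rm --network awsnet alpine nslookup nodeserver
# Requests to the name are load-balanced across healthy tasks
docker run --rm --network awsnet curlimages/curl -s http://nodeserver:3000/health
```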
---
## Network Configuration
Create an overlay network (mandatory for multi-node):
```bash
docker network create \
--opt encrypted \
--subnet 172.240.0.0/24 \
--gateway 172.240.0.254 \
--attachable \
--driver overlay \
awsnet
```
| Flag | Purpose |
|------|---------|
| `--opt encrypted` | IPsec encryption. Optional but recommended. **Note:** Can cause issues with NAT - use internal VPC IPs |
| `--subnet` | Prevents conflicts with VPC ranges |
| `--attachable` | Allows standalone containers to connect |
### Required Ports
- **TCP 2377**: Cluster management
- **TCP/UDP 7946**: Node communication
- **TCP/UDP 4789**: Overlay network traffic
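If you run a host firewall, a ufw sketch for opening these between nodes - assuming a 10.10.1.0/24 internal subnet (adjust to your VPC/subnet):
```bash
# Cluster management + node gossip over TCP, gossip + VXLAN over UDP
sudo ufw allow proto tcp from 10.10.1.0/24 to any port 2377,7946
sudo ufw allow proto udp from 10.10.1.0/24 to any port 7946,4789
```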
---
## Production Compose File
```yaml
version: "3.8"
services:
nodeserver:
dns:
- 10.10.1.122
init: true # Proper signal handling, zombie cleanup
environment:
- NODE_ENV=production
- API_KEY=${API_KEY}
deploy:
mode: replicated
replicas: 6
placement:
max_replicas_per_node: 3
update_config:
parallelism: 2
delay: 10s
failure_action: rollback
order: start-first
rollback_config:
parallelism: 2
delay: 10s
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
window: 120s
resources:
limits:
cpus: '0.50'
memory: 400M
reservations:
cpus: '0.20'
memory: 150M
image: "yourregistry/nodeserver:latest"
ports:
- "61339"
networks:
awsnet:
secrets:
- app_secrets
secrets:
app_secrets:
external: true
networks:
awsnet:
external: true
```
**Key settings:**
- `init: true` - Runs tini as PID 1 for proper signal handling
- `failure_action: rollback` - Auto-rollback on failed deployments
- `order: start-first` - New containers start before old ones stop (zero downtime)
- **Always set resource limits** - A runaway container can kill your node
---
## Dockerfile Best Practices
### Multi-Stage Build (Node.js)
```dockerfile
# syntax=docker/dockerfile:1
FROM node:20-bookworm-slim AS base
WORKDIR /app
RUN apt-get update && apt-get install -y --no-install-recommends python3 make g++ && rm -rf /var/lib/apt/lists/*
COPY package.json package-lock.json ./
FROM base AS compiled
RUN npm ci --omit=dev
FROM node:20-bookworm-slim AS final
RUN ln -snf /usr/share/zoneinfo/America/New_York /etc/localtime
WORKDIR /app
COPY --from=compiled /app/node_modules /app/node_modules
COPY . .
EXPOSE 3000
ENTRYPOINT ["node", "./server.js"]
```
I made an overseer for Lidarr called aurral
https://redd.it/1qe1gy3
@r_SelfHosted
Shoutout to the Booklore team!
I just connected my Kobo e-reader with my Booklore instance and I'm blown away, partly by the open config file on the Kobo, but first and foremost by the amazing work the Booklore team did: the process of getting my local books onto my e-reader couldn't be smoother, and the documentation is also great. Thank you very much for your work, I really appreciate it.
https://redd.it/1qdtmz4
@r_SelfHosted
What is your docker container backup method?
I just compress the folder with the data into a zip or tar file. I will probably make a script that copies the compose file into the data folder, then compresses the data directory and moves it to another directory that is synced with my Hetzner storage box.
I haven't had containers fail on me hard yet. The one that maybe failed at some point was UniFi Network; I used its backup feature to fix an issue I was having.
My system is a Pi 4 8GB with an external SSD.
Edit: when I'm taking a backup, I first stop the containers I want to back up.
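That routine as a rough sketch (project dir and backup target are placeholders):
```bash
#!/usr/bin/env bash
set -euo pipefail
cd /srv/myapp                      # compose project dir (placeholder)
docker compose stop                # stop first, per the edit above
cp docker-compose.yml ./data/      # keep the compose file with the data
tar -czf "/backups/myapp-$(date +%F).tar.gz" ./data
docker compose start
```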
https://redd.it/1qdkr1x
@r_SelfHosted
PSA to newbie ZFS + Usenet users
I spent a significant number of hours yesterday debugging why SABnzbd would fully download an NZB, even extract a working .mkv, yet still mark the job as FAILED and make Radarr re-grab the same movie over and over. I’ve been running torrents for a while with the *Arr stack and never hit this, but Usenet is different because SABnzbd typically unpacks via 7zz, and 7zz tries to set file attributes on extract.
On TrueNAS/ZFS this can bite you if your downloads live on a dataset using the SMB/NFSv4 (Samba-style) preset. In that setup, chmod / “set file attribute” operations can be restricted, so 7zz throws errors like “Cannot set file attribute / Operation not permitted” and SABnzbd flags the job as failed even if the video file looks OK.
Easiest fix was to change the dataset's "ACL Mode" from "Restricted" to "Passthrough". After allowing chmod/attribute changes on that dataset, SABnzbd unpacking and Radarr importing worked normally.
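If you prefer the shell to the TrueNAS UI, the equivalent should be the ZFS `aclmode` property - a sketch with a placeholder pool/dataset:
```bash
zfs get aclmode tank/downloads                    # check the current value
sudo zfs set aclmode=passthrough tank/downloads   # allow chmod/attribute changes
```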
https://redd.it/1qdmc5u
@r_SelfHosted
I updated Logtide (my lightweight ELK alternative) based on your feedback, v0.4.0 is out!
https://redd.it/1qdizlt
@r_SelfHosted
[Release] RSSpub V0.106.0 : Turn your RSS feeds and Read later articles into a personal daily newspaper for your e-reader.
**RSSpub** is a lightweight Rust application that:
* Fetches your favorite RSS/Atom feeds
* Processes articles and optimizes images for e-readers (grayscale, resizing)
* Bundles everything into a clean EPUB file
* Serves an OPDS catalog so compatible readers (KOReader, Moon+ Reader) can download directly
* Optionally emails the EPUB to your Kindle
* Optimised to use as few resources as possible (running this on a free tier: 0.1 vCPU, 512MB RAM; idle RAM usage: 32~60MB)
What's New since last post:
* Added support for ordering feeds when generating EPUB files
* Add ReadItLater articles via web UI, Android app (HTTP Shortcuts), iOS Shortcut, or browser extension
* Added multiple extractors, including a Custom Extractor for CSS selector-based extraction for sites that need it
* Processor configs based on domains (useful for read-it-later articles or feeds that link to other sites)
* Added general config for some hardcoded values in the code
Links:
* **GitHub:** [github.com/harshit181/rsspub](https://github.com/harshit181/rsspub)
* **Docker Hub:** `harshit181/rsspub:latest`
https://preview.redd.it/46gq3l5kuhdg1.png?width=2772&format=png&auto=webp&s=c9399878e521953a786d7b200bb8dfa0309cb719
https://preview.redd.it/uyj3bsyouhdg1.png?width=2772&format=png&auto=webp&s=124f8a1c506c587b42189317861ac72623fee726
https://preview.redd.it/b2wqvppsuhdg1.png?width=2772&format=png&auto=webp&s=5c7e022d5c1482ca9e3161e12e738d8109439329
Since a post can have just one flair, please note that this UI was developed using an LLM.
https://redd.it/1qdgjq9
@r_SelfHosted
Open source Rust SMS Server, Client and TUI - Send, receive and track messages all from a Raspberry Pi!
A quick tour of sms-terminal messages.
Hello! This is an entirely self hosted, open source (written in Rust) SMS gateway! You can send and receive SMS messages from a Raspberry Pi for only $20 (for the tested SIM800C Waveshare GSM modem hat) plus the cost of a SIM card*
[sms-server](https://github.com/morgverd/sms-server) - The self hosted SMS gateway, see README for feature list.
**sms-client** - Rust client library to control the sms-server remotely. Includes examples.
[sms-terminal](https://github.com/morgverd/sms-terminal) - A TUI app to fully control your SMS server! This also somewhat serves as another example for using sms-client.
*In testing (with an ASDA Mobile SIM) this was £3 monthly for Unlimited Texts; however, with a pay-as-you-go SIM this could be cheaper. Credit is only used to send messages, not receive.
(This is my first Reddit post ever, if it's formatted weirdly I apologize)
https://redd.it/1qd83un
@r_SelfHosted
Looking for an offline hard-drive cataloguer
Cause...yeah, "cataloguer" is totally a word. Anyway...
I’m looking for something I can self-host (Docker preferred) that will catalog the contents of external hard drives / SSDs without them needing to stay connected.
The idea is to plug in an external SSD, let the app scan and index file names and folder structure, then disconnect the drive. Later, when I’m trying to find a file, I want to search a web interface to see which drive it lives on, grab that drive, plug it in, and go. I don’t need full-text or content indexing—just file names, paths, and basic metadata.
Most of what I’ve found (Nextcloud, File Browser, etc.) assumes storage is always online. I’m really after something closer to an old-school offline disk catalog, but modern and web-based, ideally accessible remotely behind a reverse proxy.
This is primarily for external SSDs used for archived media and project files. Does anything like this exist in the self-hosted world, or is this a case where people roll their own with a database + search layer?
https://redd.it/1qd86hs
@r_SelfHosted
Dashboard Wednesday!!
https://redd.it/1qd05zv
@r_SelfHosted
Koito v0.1.3 released! A self hosted scrobbler to track and obsess over your listening history
https://redd.it/1qcvwck
@r_SelfHosted