Does Komodo wait for a container to fully stop before executing the next stage?
I'm building a backup procedure in Komodo with the following stages:
1. DestroyStack (stop the app)
2. Deploy backup container (copy volume data)
3. Destroy backup container
4. DeployStack (restart the app)
Does anyone know whether Komodo waits for the container/stack to actually stop before jumping to the next step, or whether it moves on right after issuing the docker compose down, without waiting for the container to exit?
I'm asking because the backup container mounts the app's volume read-only and copies it. If the app container is still running or mid-shutdown when the copy starts, I might end up with an inconsistent backup (the window between steps 1 and 2 is the critical one).
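For what it's worth, `docker compose down` itself blocks until the containers exit (SIGTERM, then SIGKILL after the stop timeout), so the question is really how Komodo sequences its stages. A belt-and-braces check you could run between stages 1 and 2 — the container name and timeout here are illustrative:

```shell
# Poll docker until the container is gone or reports not-running.
# Returns 0 once stopped/removed, 1 on timeout.
wait_for_stop() {
  name="$1"; timeout="${2:-60}"; elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    # inspect fails if the container no longer exists -> treat as stopped
    state=$(docker inspect -f '{{.State.Running}}' "$name" 2>/dev/null) || return 0
    [ "$state" = "false" ] && return 0
    sleep 1; elapsed=$((elapsed + 1))
  done
  return 1
}
```

Something like `wait_for_stop myapp 120 || exit 1` before starting the copy makes the backup safe even if the orchestrator doesn't wait.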
https://redd.it/1shvoc1
@r_SelfHosted
Anyone using Angie?
I haven't seen much talk about Angie around here, so I'm wondering if anyone is running it.
For those who don't know, Angie is a fork of NGINX by the original authors. They decided to launch it after F5 (and thus NGINX Inc.) pulled out of Russia in 2022, which left a gap in the market for support contracts (at least, that’s my reading of the situation).
While they were at it, they decided to redevelop NGINX Plus functionality in the FOSS project, shying away from the open-core model NGINX has been pushing. They've limited the divide between the FOSS and Pro offerings to just support. They also added some nice-to-haves like HTTP/3 and ACME support.
Has anyone tried it yet? I’m tempted, but unsure because of Russia. Then again, I used NGINX back when it was also Russian-owned, so maybe it’s okay?
https://redd.it/1shu5gc
@r_SelfHosted
Starting over
I am currently running a headless Ubuntu server on a computer I built specifically for hosting Steam game servers for my friends and me. I got into self-hosting when I added Plex and a few other minor services. I'm interested in wiping the computer and starting from scratch, but I don't know what the best self-hosting OS would be. I've tried Proxmox on a test computer and did enjoy it, but I'm no expert. I plan to keep using it primarily for game servers, but I'm getting tired of manually adding everything through the Linux terminal. Any and all recommendations are appreciated!
Not sure if it matters but some PC specs below
CPU/GPU- Ryzen 5 5600g
Memory - 32GB @ 3200 (4×8GB)
Storage - 512GB NVMe & 4TB 3.5" HDD
PSU - 750w 80+ bronze
https://redd.it/1shr9e3
@r_SelfHosted
Self host a retro game emulator?
Does anyone host a retro game emulator that lets you use a client on a phone or TV to play the games?
https://redd.it/1shoct4
@r_SelfHosted
Built a web server that runs on sunlight and 27MB of RAM
https://hackaday.io/project/205403-solar-pi
https://redd.it/1shi20g
@r_SelfHosted
Security Hardening - Host, Docker, Network
Hello all,
I'll preface this by saying ***AI was not used to write or reformat any of this,*** so if you can spend the time to read and respond, I would be very grateful.
I am looking for advice on where to begin with shoring up the defenses of my server. As the saying goes...*"The only truly secure system is one that is powered off, cast in a block of concrete and sealed in a lead-lined room with armed guards - and even then I have my doubts."*
BUT, I don't want to lay out the red carpet for malicious unwanted guests either.
I currently run a hardwired Linux Mint server. On this server, I currently have **39 docker containers** running, with a roadmap of several more to add. 37 of these are just port-mapped on the host's internal IP, and the other 2 are Actual and Nextcloud, which are proxied behind Caddy to my domain. Ports 80 and 443 are open on my network.
For the "just use tailscale" argument...I do have it, and it works well for what it is. However, the constant IP switching is a pain, and I utilize the VPN slot on my phone 24/7 so I hate having to split between the 2. I also would like to share some of these services with other people, and while I can add their tailscale users to my tailnet or even device specifically, it's another point of tension.
For the "just use cloudflare" argument...TOS for some services, and I am trying to avoid any central relays through someone else as much as possible.
I know Docker running as root is a concern, and I plan to investigate this soon for the containers I'm running.
I also know I should add something like Authelia or Authentik, but I have yet to look into this much further. I'd like to set up a way to have everything accessible publicly, but locked behind username, password, and app-based 2FA.
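If you do go the Authelia route, the Caddy side of it is small. A sketch of roughly what that looks like — hostnames, ports, and the Authelia endpoint path are assumptions, so check the Authelia/Caddy integration docs for your versions:

```caddyfile
# Caddyfile sketch: gate a proxied service behind Authelia
nextcloud.example.com {
    forward_auth authelia:9091 {
        uri /api/authz/forward-auth
        copy_headers Remote-User Remote-Groups Remote-Email Remote-Name
    }
    reverse_proxy nextcloud:11000
}
```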
I did recently acquire an EdgeRouter X SFP and a TP-Link Omada EAP723 that I've replaced my ISP hardware with. I plan on setting up a couple of VLANs and doing some network segmentation, but I think that applies less in this scenario because my server is both my test and my prod, and while I exercise caution in what I install or spin up, it's not practical to put it in a DMZ.
TL;DR/Final question - Where did you begin when it came to hardening your security for Docker, host, and network?
Any words of advice, guides, or documentation you'd be willing to share?
(currently running these:)
* Homepage
* Uptime Kuma
* Seerr
* Dockhand
* It-Tools
* Termix
* Nextcloud AIO (Apache, Database, Redis, Collabora, Talk, Imaginary, ClamAV, Whiteboard, Notify-Push, Fulltext Search)
* Actual Budget
* Filebrowser
* Backrest
* Jellyfin
* Sonarr
* Radarr
* Bazarr
* Prowlarr
* Lidarr
* qBittorrent-nox
* Gluetun
* SearXNG
* Valkey
* Redis
* Prometheus
* Grafana
* Node Exporter
* cAdvisor
* BentoPDF
* LubeLogger
https://redd.it/1shd8y5
@r_SelfHosted
Self hosted XMPP on a Raspberry Pi 2
https://redd.it/1sh7j25
@r_SelfHosted
If your password manager was to disappear, how fucked would you be?
I'm trying to assess how much of a good/bad idea it is to self-host Vaultwarden as my password manager.
I'm planning on a good backup strategy with external encrypted backups, but I'm still wondering if that's really enough.
https://redd.it/1sh8mht
@r_SelfHosted
Managing all my ROMs
Hey, I have an extra server and I'm looking to build out either a Linux box or possibly a Windows box (as all the tools to manage things like MAME seem to be Windows tools). I'm trying to find something that catalogs ROMs, pulls down metadata and posters, and lets me browse and download what I want for my various retro systems. I'm looking at RomM, but I'm not sure how it handles the various versions of MAME; the other systems seem to be covered. I don't really need the ability to play them in a browser. I also have things like LaunchBox, but that's more of a front end than a management server. Just seeing what's out there.
https://redd.it/1sh3lby
@r_SelfHosted
New Project Megathread - Week of 09 Apr 2026
Welcome to the New Project Megathread!
This weekly thread is the new official home for sharing your new projects (younger than three months) with the community.
To keep the subreddit feed from being overwhelmed (particularly with the rapid influx of AI-generated projects) all new projects can only be posted here.
How this thread works:
A new thread will be posted every Friday.
You can post here ANY day of the week. You do not have to wait until Friday to share your new project.
Standalone new project posts will be removed and the author will be redirected to the current week's megathread.
To find past New Project Megathreads just use the [search](https://www.reddit.com/r/selfhosted/search/?q="New%20Project%20Megathread%20-"&type=posts&sort=new).
# Posting a New Project
We recommend using the following template (or including this information) in your top-level comment:
Project Name:
Repo/Website Link: (GitHub, GitLab, Codeberg, etc.)
Description: (What does it do? What problem does it solve? What features are included? How is it beneficial for users who may try it?)
Deployment: (App must be released and available for users to download/try. App must have some minimal form of documentation explaining how to install or use your app. Is there a Docker image? Docker-compose example? How can I selfhost the app?)
AI Involvement: (Please be transparent.)
Please keep our rules on self promotion in mind as well.
Cheers,
https://redd.it/1sh3rjs
@r_SelfHosted
I calculated the real cost of self-hosting a full team workspace vs SaaS - honest numbers including the hidden costs
I spent a few years working at companies that ran the standard SaaS stack. Slack for chat, Notion for docs, Zoom for calls, Asana for tasks. Nobody questioned it. It was just "how teams work."
When I started running my own small team I inherited that same mental model and started signing up for the same tools. Then I actually added up the bill.
The SaaS stack for a 20-person team:
Slack Pro: ~$150/month
Notion Team: ~$160/month
Zoom Pro: ~$150/month
Asana Premium: ~$200/month
That's roughly $660/month or about $8000 a year. For 20 people. For software that runs on someone else's computer and can raise prices or shut down whenever they want.
I had a spare Dell PowerEdge R210 II sitting around. Xeon E3-1230 V2, 16GB RAM, 2TB disk. Not exotic hardware by any stretch. I colocated it for ₹40,000/year (roughly $480/year).
The honest self-hosting cost breakdown:
Server colocation: $480/year
My time setting everything up: roughly 2 weekends
Ongoing maintenance: maybe 1-2 hours a month
Backups: automated daily to a cheap object storage bucket, maybe $3/month extra
Total annual cost: roughly $520/year for the whole team vs $8000/year for SaaS.
Now the part people don't talk about enough:
Reliability is on you. If the server goes down at 2am that's your problem to fix. SaaS tools have entire infrastructure teams behind them. For a small team where everyone sleeps in the same timezone it matters less but it's still real.
Data migration is painful. Moving years of Notion docs or Slack history into a self-hosted alternative is not fun. I underestimated this badly.
The tools aren't always as polished. Open source alternatives have rough edges. Your team needs to actually be okay with that before you commit.
Security is your responsibility. Keeping software updated, managing access, making sure you're not accidentally exposing something to the open internet. This needs at least basic sysadmin comfort.
So who should actually do this?
Honestly not everyone. If you're at a 200-person company where an hour of downtime costs real money, SaaS is probably worth every dollar. The reliability alone justifies it.
But for small teams, indie hackers, homelab people, or anyone who just doesn't want their internal conversations sitting on someone else's servers in another country, the math is hard to argue with. $520 vs $8000 is not a close call.
The thing nobody warned me about: owning your data actually changes how you use the tools. I stopped being weirdly careful about what I wrote in docs because I knew exactly where it lived and who could see it.
I ended up building my own workspace tool because I couldn't find a single self-hosted option that replaced all four tools without feeling like separate products duct-taped together. But that's a story for another post. The underlying point stands regardless of what software you pick. The hardware cost of running a 20-person workspace yourself is genuinely trivial compared to what SaaS companies charge for it.
Happy to go deeper on the colocation setup or the Docker stack if anyone wants.
https://redd.it/1sgyqza
@r_SelfHosted
limits per role, and then revoked CONNECT from PUBLIC on every database. Now `psql -U serviceA -d serviceB_db` gets "permission denied." Each service is walled off.
Migration was mostly fine. pg_dump per table, restore, reassign ownership. One gotcha though: per-table dumps don't include trigger functions. Had a full-text search trigger that just silently didn't make it over. Only noticed because searches started coming back empty. Had to recreate it manually.
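For anyone wanting to replicate the per-service isolation described above, a sketch of the DDL — role and database names and the connection limit are illustrative:

```sql
-- one role + one database per service, nothing shared
CREATE ROLE servicea LOGIN PASSWORD 'change-me' CONNECTION LIMIT 10;
CREATE DATABASE servicea_db OWNER servicea;

-- stop every other role from even connecting
REVOKE CONNECT ON DATABASE servicea_db FROM PUBLIC;
GRANT CONNECT ON DATABASE servicea_db TO servicea;
```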
**Secrets** This was the one that made me cringe. My Cloudflare key? The Global API Key. Full account access. Plaintext env var. Visible to anyone who runs docker inspect.
Database passwords? Inline in DATABASE_URL. Also visible in docker inspect.
Replaced the CF key with a scoped token (DNS edit only, single zone). Moved DB passwords to Docker secrets so they're mounted as files, not env vars. Also pinned every image to SHA256 digests while I was at it. No more :latest. Tradeoff is manual updates but honestly I'd rather decide when to update.
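The secrets-as-files move in compose form, as a sketch — image tag and paths are illustrative, though the official postgres image does read `*_FILE` env vars:

```yaml
services:
  db:
    # pin by digest in practice, e.g. postgres@sha256:<digest>
    image: postgres:16
    secrets:
      - db_password
    environment:
      # password arrives as a mounted file, not an env var
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
secrets:
  db_password:
    file: ./secrets/db_password.txt
```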
**Traefik** TLS 1.2 minimum. Restricted ciphers. Catch-all that returns nothing for unknown hostnames (stops bots from enumerating subdomains). Blocked .env, .git, wp-admin, phpmyadmin at high priority so they never reach any backend. Rate limiting on all public routers. Moved Traefik's own ping endpoint to a private port.
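For reference, the TLS-floor part of that looks roughly like this in Traefik's dynamic configuration — the cipher list is abbreviated, check Traefik's TLS options docs for the full set you want:

```yaml
tls:
  options:
    default:
      minVersion: VersionTLS12
      cipherSuites:
        - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
        - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
        - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
        - TLS_ECDHE_RSA_WITH_CHACHA20_POL1305
```

Small correction to the block above: the last cipher name should read TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305.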
**Still on my list** Not going to pretend I'm done. Haven't moved all containers to non-root users. Postgres especially needs host directory ownership sorted first and I haven't gotten around to it. read_only filesystems are only on some containers because the rest need tmpfs paths I haven't mapped yet. And tbh my memory limits are educated guesses from docker stats, not real profiling.
**Was it worth it?** None of this had caused an actual incident. Everything was "working." But now if something does go wrong, the blast radius is one container instead of the whole box. A compromised web service can't pivot to another service's database. A memory leak gets OOM killed instead of swapping the host to death.
Biggest time sink was the network segmentation and database migration. The per-container stuff was pretty quick once I had the pattern.
**Still figuring things out**. If anyone's actually gotten postgres running as non-root in Docker or has a good approach to read_only with complex entrypoints, would genuinely like to know how you did it.
https://redd.it/1sgvwep
@r_SelfHosted
Looking for a simple grocery list with scanning barcodes to add.
I'm looking for a simple grocery list app that allows me to scan items by barcode (or just enter them manually) and add them to the list. I would also like to be able to use things like UPCDatabase or similar.
I know of apps like this, such as Grocy, but those have way too much overhead for my needs. I don't need to keep track of inventory, just a list of items I can easily add to my shopping list. Obviously open-source is a requirement.
https://redd.it/1sgv5u9
@r_SelfHosted
How do you alert users?
I'm running a little media server for me, my partners, their partners and some friends. How do I go about alerting everyone who's using the server (mainly jellyfin) that a feature has been added, something has changed, or the server is restarting?
https://redd.it/1sgqsag
@r_SelfHosted
What are you using to automate your Jellyfin setup?
I’m pretty new to Jellyfin and I’m trying to build a cleaner setup around it. I’m mostly looking for the best self hosted tools to automate the boring parts of managing a library, like importing legally obtained media, organizing folders, matching metadata, subtitles, monitoring new episodes, and keeping everything tidy.
I keep seeing different stacks mentioned and I’m trying to understand what people actually use long term without turning the setup into a complete mess.
https://redd.it/1sgohhn
@r_SelfHosted
MinusPod: Fully Self-Hostable Automatic Podcast Ad Removal.
https://redd.it/1shukpq
@r_SelfHosted
OVHCloud Sucks
I tried to move away from Hetzner and was ready to pay 2x more for lower pings, but over time I've come to regret being an OVHcloud customer.
* They have monthly billing whereas AWS / Hetzner are hourly billed.
* If you upgrade to a bigger server, they will bill you for both servers for that month.
* They lured me to upgrade by offering discounts on RAM etc., and then after a few months I got a mail that prices are increasing.
* Evil cancellation policy. During a server upgrade the cancellation button worked smoothly, back when I had more than 3 weeks of balance left.
* But if you cancel just before the end, like 2-3 days before the renewal date, the cancellation button stops working. You are forced into renewal and asked to pay for a month without using any of their service.
I will never be an OVH customer again. I will never pay the invoice which I was forced into. Happy to get sued, I am just going to fight this evil practice.
https://redd.it/1shqcbt
@r_SelfHosted
NoteDiscovery got a bunch of updates in the last few weeks
Hey r/selfhosted, **NoteDiscovery** is my lightweight self-hosted markdown notes app ([GitHub](https://github.com/gamosoft/notediscovery), [site](https://www.notediscovery.com/), [demo](https://gamosoft-notediscovery-demo.hf.space/)).
Here’s everything worth mentioning up until latest version:
https://preview.redd.it/iys50i8a6dug1.png?width=697&format=png&auto=webp&s=1fb16809a0b4fe1b120dbb18f2a2c4ab732f007b
* Backlinks are actually a thing now, you get retrieval + tooling + a sidebar/pane UI, plus some perf polish and a nicer icon
* Graph view had a rendering bug that’s fixed now
* There’s a new stats HTTP endpoint now if you want to wire it into a dashboard or whatever (like homepage)
* Print preview so you’re not flying blind before printing
* In the editor, Tab finally inserts a real tab (small thing, big if you care)
* Export / LaTeX got a cleanup, less weird client-side export stuff, math leans more “modern” LaTeX
* Configurable upload limits you can tune, sorting options, and links can use weird/custom URL schemes (not just http/https) where that matters
If you’re already running it, just pull latest release and let me know what you think 🙏
Thank you very much.
Kind regards.
https://redd.it/1shmglb
@r_SelfHosted
Those of you who use VaultWarden as a fresh start, why it, and not KeePassXC family?
If you switched to Vaultwarden from Bitwarden, it's absolutely clear why, no need to answer.
My question is for those who are setting up Vaultwarden as a fresh start. What features specifically made you choose it over a .kdbx file synced over your infrastructure?
Genuinely curious.
https://redd.it/1shi1a3
@r_SelfHosted
Fucked up my backup history
So...
I literally fucked up my backups. They are still there, but I can't access them anymore.
Story:
My files are on a ZFS pool with snapshots. Daily backups to a local ZFS pool, and daily backups with borg to remote storage.
I decided to move to Sia storage for backups and configured everything with restic to back up to Sia.
So far (not) so good. Something bad happened two nights ago: the backup crashed and my server became unresponsive. Something corrupted my local ZFS pool. But I also made a big mistake: I did not properly check whether the new local backup routine was actually backing up.
In the end I lost my borg password and quite a lot of my appdata. The only thing I can rely on now is the partially backed-up files on my Sia storage, but I also lost the renterd metadata. Rebuilding renterd on my PC now and hoping I can recover from there. Maybe I'm lucky and all the appdata is in the bucket.
Brute-forcing the borg backup is pointless, because it's a random password. The password in my Vaultwarden vault is an old one.
that sucks.
https://redd.it/1shg5li
@r_SelfHosted
Manage Docker container updates and their respective compose files simultaneously
Hi everyone. I'm currently looking into a way for my containers to stay up to date, and while I've found some tools that achieve this (Watchtower, Komodo, WUD, Tugtainer, among others), none of them also keep the respective compose file up to date, which means that every time I need to rebuild a container, I load up an old version of it.
I know of setting tags on the image name to specify a version, but unfortunately not all containers take advantage of this.
My current setup is a "containers" folder with a subfolder per service, each holding that service's compose file. I'm also looking into adding version control (most likely a private GitHub repo) to the "containers" parent folder to back up those files.
Has anyone managed to get a setup like this working?
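Not a full answer, but given the folder layout described (one subfolder per stack under `containers/`), the update pass itself can be scripted until a tool covers both halves — paths and file names are illustrative:

```shell
# Pull newer images and recreate each compose stack under the given parent folder.
update_stacks() {
  parent="$1"
  for dir in "$parent"/*/; do
    # skip folders that don't contain a compose file
    [ -f "$dir/compose.yaml" ] || [ -f "$dir/docker-compose.yml" ] || continue
    ( cd "$dir" && docker compose pull && docker compose up -d )
  done
}
```

Run it after a `git pull` so the compose files and the images move together.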
https://redd.it/1sh5uki
@r_SelfHosted
PSA to Cloudflare Tunnel (cloudflared) users
(This is directed to self-hosters who use Cloudflare Tunnels (cloudflared) and the Cloudflare ecosystem. And I'm not going to debate the pros or cons of using a Cloudflare Tunnel, as they have been brought up in countless other posts. I use CF services, and I'm happy with them. YMMV, of course.)
Cloudflare Tunnels are an excellent, free, and reliable way to connect a subdomain to a local service without exposing ports. It's tried and tested, and the learning curve is not that steep.
But, your nicely connected service is now public, as in available to anyone. Is that what you really intend?
"Oh, but I use 2FA or strong passwords on my internal service." No. That is not the solution.
Research Cloudflare Access applications. These sit between the visitor and the Cloudflare Tunnel, prompting for user authentication. And the nice thing about them is that all authentication happens on CF's servers, so your servers are never touched until the user successfully authenticates.
Cloudflare provides several authentication methods, from simple one-time codes to OAuth or GitHub authentication. And you can apply many rules to narrow down who can connect (IP ranges, countries, etc.).
So, unless your exposed service is intended to be publicly accessible, like a public-facing website, look into Cloudflare Access applications.
(Yes, there are many alternative solutions. But again, countless other posts provide excellent details.)
https://redd.it/1sh0ab3
@r_SelfHosted
I'm syncing Apple Health data to my self-hosted TimescaleDB + Grafana stack and feeding it into Home Assistant as sensors
I’ve been trying to get my health data out of Apple’s ecosystem and into something I can actually query, automate, and keep long-term.
Ended up building a pipeline that pushes everything into my own stack and exposes it as real-time signals in Home Assistant.
Stack:
iPhone + Apple Watch / Whoop / Zepp → HealthKit
Small iOS companion (reads HealthKit + background sync via HKObserverQuery)
FastAPI ingestion endpoint
TimescaleDB (Postgres + time-series extensions)
Grafana for dashboards
Home Assistant for automation
The iOS side just listens for HealthKit updates and POSTs to a REST endpoint on a configurable interval. The annoying part wasn’t reading the data, it was getting reliable background delivery - HKObserverQuery + background URLSession was the only setup that didn’t silently die.
Once the data is in TimescaleDB, it becomes actually usable.
Instead of Apple’s “here’s your last 7 days, good luck,” I now have full history across ~120 metrics, queryable like any other dataset. Continuous aggregates keep Grafana responsive even with per-minute heart rate data.
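The continuous-aggregate trick, sketched out — the hypertable and column names here are assumptions, not the author's schema:

```sql
-- roll per-minute heart-rate samples up into hourly buckets
CREATE MATERIALIZED VIEW hr_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', time) AS bucket,
       avg(value) AS avg_hr,
       min(value) AS min_hr,
       max(value) AS max_hr
FROM heart_rate
GROUP BY bucket;
```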
The fun part was wiring it into Home Assistant.
I’m exposing selected metrics as sensors and using them as triggers:
Lights dim + ambient audio when HR drops into sleep range
Thermostat adjusts based on sleep/wake state
Notification if resting HR trends upward for 3 days
Example HA automation I made:
alias: Sleep Detected
trigger:
  - platform: numeric_state
    entity_id: sensor.heart_rate
    below: 55
condition:
  - condition: time
    after: "23:00:00"
action:
  - service: light.turn_off
    target:
      entity_id: light.bedroom
  - service: media_player.play_media
    data:
      entity_id: media_player.speaker
      media_content_id: "ambient_sleep"
      media_content_type: "music"
A couple things that surprised me:
HealthKit is way more comprehensive than it looks - 100+ data types if you dig
TimescaleDB continuous aggregates make a huge difference once data grows
Background sync still isn’t perfect - iOS (especially with Low Power Mode) occasionally delays updates
The iOS side is just a thin bridge into the backend (I ended up packaging it as HealthSave so I didn't have to rebuild it every time).
Server side is just docker-compose with FastAPI + Timescale + Grafana.
If anyone’s doing something similar, I’m curious what metrics you’ve found actually useful as automation triggers - most of mine started as experiments and only a few stuck.
https://preview.redd.it/47iz0up7n8ug1.png?width=3928&format=png&auto=webp&s=ee97628c0a12de63f73e7fef746e886efc1c5ce1
https://preview.redd.it/kfi4qvton8ug1.png?width=3880&format=png&auto=webp&s=e3decaf4cc593b7b5f426e1643a8ef01db8ab3eb
https://redd.it/1sh41id
@r_SelfHosted
Are there any Self Hostable Alternatives to Google Fit?
Looking for a program as an alternative to google fit with a mobile app that works exactly like it.
https://redd.it/1sgw7zz
@r_SelfHosted
Self hosting music library using navidrome
https://redd.it/1sgwj3r
@r_SelfHosted
After my last post blew up, I audited my Docker security. It was worse than I thought.
A week ago I posted here about dockerizing my self-hosted stack on a single VPS. A lot of you rightfully called me out on some bad advice, especially the "put everything on one Docker network" part. I owned that in the comments.
But it kept nagging at me. If the networking was wrong, what else was I getting wrong? So I went through all 19 containers one by one and yeah, it was bad.
**Capabilities** First thing I checked. I ran docker inspect and every single container had the full default Linux capability set. NET_RAW, SYS_CHROOT, MKNOD, the works. None of my services needed any of that.
I added cap_drop: ALL to everything, restarted one at a time. Most came back fine with zero capabilities. PostgreSQL was the exception, its entrypoint needs to chown data directories so it needed a handful back (CHOWN, SETUID, SETGID, a couple others). Traefik needed NET_BIND_SERVICE for 80/443. That was it. Everything else ran with nothing.
Honestly the whole thing took maybe an hour. Add it, restart, read the error if it crashes, add back the minimum.
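The pattern above in compose form, as a sketch — the exact cap_add list for postgres depends on the image and entrypoint, so treat these as a starting point, not gospel:

```yaml
services:
  app:
    cap_drop: [ALL]              # most services run fine with nothing
  postgres:
    cap_drop: [ALL]
    cap_add: [CHOWN, SETUID, SETGID, FOWNER]  # entrypoint chowns its data dir
  traefik:
    cap_drop: [ALL]
    cap_add: [NET_BIND_SERVICE]  # bind 80/443 without root
```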
**Resource limits** None of my containers had memory limits. 19 containers on a 4GB VPS and any one of them could eat all the RAM and swap if it felt like it.
Set explicit limits on everything. Disabled swap per container (memswap_limit = mem_limit) so if a service hits its ceiling it gets OOM killed cleanly instead of taking the whole box down with it. Added PID limits too because I don't want to find out what a fork bomb does to a shared host.
The CPU I just tiered with cpu_shares. Reverse proxy and databases get highest priority. App services get medium. Background workers get lowest. My headless browser container got a hard CPU cap on top of that because it absolutely will eat an entire core if you let it.
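In compose terms, the limits described come out roughly like this — the numbers are illustrative, size them from your own docker stats:

```yaml
services:
  app:
    mem_limit: 256m
    memswap_limit: 256m   # equal to mem_limit = no swap, clean OOM kill
    pids_limit: 200       # fork-bomb insurance
    cpu_shares: 512       # relative weight, only matters under contention
  headless-browser:
    mem_limit: 1g
    memswap_limit: 1g
    cpus: "1.0"           # hard cap: never more than one core
```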
**Health checks** Had health checks on most containers already but they were all basically "is the process alive." Which tells you nothing. A web server can have a running process and be returning 500s on every request.
Replaced them with real HTTP probes. The annoying part: each runtime needs its own approach. Node containers don't have curl, so I used Node's http module inline. Python slim doesn't have curl either (spent an embarrassing amount of time debugging that one), so urllib. Postgres has pg_isready which just works.
Not glamorous work but now when docker says a container is healthy, it actually means something.
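The per-runtime probes, sketched in compose for anyone who wants to skip the debugging — ports and paths are assumptions:

```yaml
services:
  node-app:
    healthcheck:
      # node images ship without curl; use the http module inline
      test: ["CMD", "node", "-e", "require('http').get('http://127.0.0.1:3000/health', r => process.exit(r.statusCode === 200 ? 0 : 1)).on('error', () => process.exit(1))"]
      interval: 30s
      timeout: 5s
      retries: 3
  python-app:
    healthcheck:
      # python:slim also lacks curl; urllib instead
      test: ["CMD", "python", "-c", "import urllib.request,sys; sys.exit(0 if urllib.request.urlopen('http://127.0.0.1:8000/health', timeout=3).status == 200 else 1)"]
  postgres:
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
```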
**Network segmentation** Ok this was the big one. All 19 containers on one flat network. Databases reachable from web-facing services. Mail server can talk to the URL shortener. Nothing needed to talk to everything but everything could.
I basically ripped it out. Each database now sits on its own network marked `internal: true` so it has zero internet access. Only the specific app that uses it can reach it. Reverse proxy gets its own network. Inter-service communication goes through a separate mesh.
# before: everything on one network
networks:
  default:
    name: shared_network

# after: database isolated, no internet
networks:
  default:
    name: myapp_db
    internal: true
  web_ingress:
    external: true
My postgres containers literally cannot see the internet anymore. Can't see Traefik. Can only talk to their one app.
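To complete the picture, the service side of that wiring looks roughly like this — names are illustrative:

```yaml
services:
  myapp:
    networks: [myapp_db, web_ingress]  # reachable by Traefik, can reach its DB
  myapp-postgres:
    networks: [myapp_db]               # internal-only: no internet, no Traefik
```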
**The shared database** I didn't even realize this was a problem until I started mapping out the networks. Three separate services, all connecting to the same PostgreSQL container, all using the same superuser account. A URL shortener, an API gateway, and a web app. They have nothing in common except I set them all up pointing at the same database and never thought about it again.
If any one of them leaked connections or ran a bad query, it would exhaust the pool for all of them. Classic noisy neighbor.
I can't afford separate postgres containers on my VPS so I did logical separation. Dedicated database + role per service, connection
YTPTube: v2.x major frontend update
If you have not seen it before, [YTPTube](https://github.com/arabcoders/ytptube) is a self-hosted web UI for yt-dlp. I originally built it for cases where a simple one-off downloader was not enough and I wanted something that could handle larger ongoing workflows from a browser.
It supports things like:
* downloads from URLs, playlists, and channels
* scheduled jobs
* presets and conditions
* live and upcoming stream handling
* history and notifications
* file browser and built-in player
* a self-contained executable for people who don't want to use Docker, although with fewer features than the Docker version
The big change in **v2.x** is a major UI rework. The frontend was rebuilt using nuxt/ui, which gives us a better base for future work. A lot of work also went into the app beyond the visuals: general backend cleanup/refactoring, improvements around downloads/tasks/history, metadata-related work, file browser improvements, and more. To see all features, please check the GitHub project.
I would appreciate feedback from other selfhosters, especially from people using yt-dlp heavily for playlists, scheduled jobs, or archive-style setups.
* [original release post](https://old.reddit.com/r/selfhosted/comments/1l1p76w/ytptube_a_selfhosted_frontend_for_ytdlp/)
* [project github](https://github.com/arabcoders/ytptube)
https://redd.it/1sgrgah
@r_SelfHosted
What Grafana dashboards do you actually use the most?
Hey, I’m new to Grafana and I’m curious what dashboards people here actually use on a regular basis. I know there are loads of options, but I’m more interested in the ones that are genuinely useful and not just nice to look at for five minutes after setup.
https://redd.it/1sg5plx
@r_SelfHosted
My journey in the last 6 months...
https://redd.it/1sglbk1
@r_SelfHosted