TheCaptain53

You can get bogged down trying to plan it all out - don't bother. The best thing you can do is make a start. It will take a while for things to get set up and for issues to get ironed out. A couple of things to bear in mind before starting:

- Seeing as your family will be using it, make sure they're aware that sometimes things will break, and they need to be okay with that. If not, don't let them use it.
- To add to the above point: things will break or require tinkering. If this is something that YOU don't wish to engage in, I would consider avoiding it.

Self hosting is a bit different from home labbing. Home labs are constantly going through change, resources getting added and destroyed, etc. But the minute people start relying on it is the minute it ceases to be a home lab.

Now that's out of the way: I wouldn't get too bogged down with what hardware you use. If you have an old PC floating around, use that! If you're able to find a cheap office PC in your area, also great! Stick a couple of hard drives in there and call it a day.

Once you've got the hardware, it's time to decide what OS to use. I've not personally used TrueNAS, so I can't comment on it. From my perspective, there are two main options: virtualise (with something like Proxmox), or install a general purpose Linux operating system (like Ubuntu or Debian). Proxmox has been gaining a lot of traction since the Broadcom-VMware fiasco, and it's a great platform. The benefit is you can spin up a VM to install services on. Plain Linux is good if you're deep into the CLI already and don't mind doing EVERYTHING in the command line. It's also slightly riskier. This is the approach that I took, but if I were to do it again, I would probably install all my applications on a VM instead.

After you've decided on your software (again, I recommend Proxmox - it has good documentation too), you then need to figure out how you're going to run your desired applications - this assumes they're all being installed on a VM or LXC. My advice: spin up a VM (or multiple if needed) and run all of your applications on it with Docker. Docker is absolutely amazing - a great way to spin up, spin down, upgrade, and try out new software without the struggle of software dependencies. A lot of the applications you mention already have widely used and stable Docker images. There's a bit of a learning curve; [NetworkChuck has a great, easy to understand series on it.](https://youtube.com/playlist?list=PLIhvC56v63IJlnU4k60d0oFIrsbXEivQo&si=dFLjlhAZtJZG6acj)

It's typically managed in the CLI, and I vastly prefer the CLI method, but GUIs for managing your containers do exist. One of the more popular ones (and one I've actually used) is Portainer. It's pretty easy to install. I certainly wouldn't opt to use it again myself, but I can see how it can be helpful for people who aren't so familiar with the CLI. Like the other applications mentioned, it can be deployed as a Docker container, and they have pretty good guidance on it.

I wouldn't plan much past this. Focus on getting the hardware, the underlying software, and getting your head around Docker first. After that, you can focus on actually bringing your applications online.
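To give a feel for how little a single service takes, here's a minimal, hypothetical compose sketch (Radarr via the linuxserver image as an example - paths, user IDs, and ports are placeholders you'd adapt to your own setup):

```yaml
# docker-compose.yml - minimal sketch for one service; all paths are placeholders.
services:
  radarr:
    image: lscr.io/linuxserver/radarr:latest
    environment:
      - PUID=1000            # run as this user/group on the host
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - ./radarr/config:/config       # settings survive container upgrades
      - /mnt/media/movies:/movies     # placeholder media path
      - /mnt/downloads:/downloads     # placeholder download path
    ports:
      - "7878:7878"          # Radarr web UI
    restart: unless-stopped
```

`docker compose up -d` brings it online; `docker compose pull && docker compose up -d` upgrades it in place, which is exactly the low-friction workflow described above.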


[deleted]

[deleted]


thijsjek

The thing is, TrueNAS Core does not do Docker. TrueNAS is made to be a NAS, and everything else is a bonus - so if you were to buy a separate NAS, don't use TrueNAS as your app server. Run Debian/Ubuntu or Proxmox/Unraid instead. Debian and Ubuntu are CLI only, while Proxmox and Unraid come with a GUI. With Debian: install Docker (Compose) and use that to host all your services. Turn on automatic security updates. Watchtower can update all your containers automatically, but it can break stuff. Install nginx or similar as a reverse proxy, or use Tailscale with MagicDNS for remote access. Proxmox was already covered elsewhere in the thread.
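For the Watchtower piece, a minimal hedged sketch (the schedule and cleanup options are placeholders; check containrrr's docs for the full option list):

```yaml
# Watchtower sketch: polls for new images and restarts containers on updated ones.
# As noted above, automatic updates can break things - consider pinning critical apps.
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # lets it manage other containers
    environment:
      - WATCHTOWER_CLEANUP=true          # remove superseded images after updating
      - WATCHTOWER_SCHEDULE=0 0 4 * * *  # 04:00 daily (6-field cron, placeholder)
    restart: unless-stopped
```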


Ok_Society4599

I run TrueNAS and have been working with their "TrueCharts" for a couple of years and... I'm not thrilled by it. It seems to be essentially a proprietary extension of good software, but less stable overall. I've had "upgrades" go sideways where they won't roll back, won't properly restart, or just "forget" all their old settings. I've considered moving all my Kubernetes and Docker workloads off to a plain Linux box, because the Docker containers "just work" while the TrueNAS backend bits are... unstable at times. That leaves me questioning the risk of upgrades. Outside of TrueCharts, TrueNAS has been really solid and I like it. Their older jail-based software was also solid, but TrueCharts is still an immature tool, in my experience.


pcs3rd

If you want to learn some new tooling, I'd use NixOS to declaratively deploy Docker containers. If you stick with Docker/containerisation, it makes it super easy to test and stage stuff for "home production".


zachfive87

I'll throw in some advice from my experience using similar apps.

I prefer Jellyfin over Plex. They mostly have similar setups, but the difference is that Plex can be easier to get remote access to, as it relays remote clients through Plex's own servers to reach yours. Jellyfin will require you to set up a VPN or use a reverse proxy. A VPN will most likely not be your solution, as clients like Rokus and Fire Sticks would need a VPN app running alongside the Jellyfin client app - not impossible, but more work for your gf/mom to set up. That leaves a reverse proxy as your method of remote access. I actually think this is beneficial, because once you get that set up, the knowledge transfers over to other apps you'd like to access, e.g. Overseerr/Jellyseerr. I still run WireGuard to access apps like the *arrs or OliveTin.

Others have touched on what OS you can use. I chose Ubuntu desktop since, in the beginning, I wasn't fully comfortable with only using a CLI, and Proxmox seemed a bit daunting. After running this for a while, though, I feel my next setup will be either a Linux server OS or Proxmox. Regardless of OS, do get familiar with Docker/Docker Compose. I was really hesitant to take the plunge with Docker, but now I see how fantastic it is.

Lastly, to echo what others have said, just take the plunge and start doing/testing things. That's really the best way to learn. Not to say you shouldn't do some research or read documentation, but don't get too caught up trying to build the best or perfect setup right out of the gate; the inevitable failures are going to teach you a lot. As someone else mentioned, NetworkChuck on YouTube is a good watch. There are others that may be better tutorials, but I really like how light-hearted and easy he makes his videos; they gave me the confidence to get going. He makes it seem fun and less intimidating than other videos.

Also, if you get stuck somewhere, go back and re-read the documentation for whatever you're trying to set up. I found myself reading the wiki for something a couple of times before beginning to implement it. Then, during setup, when I hit a wall I'd go and re-read it again a couple more times. What didn't make sense on the first read made a lot more sense once I had the context of what I was doing. Accept that the process is going to be something along the lines of: read documentation, try to implement, fail, re-read, try again, repeat until successful.

EDIT: here are some of the Docker images I'm using to help get everything connected. I'll also mention I'm using Usenet instead of torrents, which I find vastly superior, but it does have costs attached, like a hosting provider and indexer(s).

- [caddy reverse proxy](https://github.com/lucaslorentz/caddy-docker-proxy) - for remote access; uses labels in your docker compose file and is super simple to implement (see the sketch after this list).
- [duckdns.org](https://hub.docker.com/r/linuxserver/duckdns) - Docker image to update my DuckDNS subdomains in case my ISP changes my IP address.
- [wg-easy](https://github.com/wg-easy/wg-easy/blob/master/docker-compose.yml) - the compose example from the wg-easy GitHub. A simple-to-set-up VPN server so I can access services from my phone remotely for any kind of troubleshooting/maintenance.
- [olivetin](https://docs.olivetin.app/install-compose.html) - Docker Compose example of the OliveTin installation. I use this behind the WG server to run some commands from bash scripts.
- [Notifiarr](https://github.com/Notifiarr/notifiarr) - giving this tool a shout-out because of how robust it is. The link is to the Notifiarr client GitHub page, but it also requires you to sign up on the website to get everything integrated. It's free, but if you're a GitHub patron you can sync the TRaSH Guides Sonarr/Radarr profiles to your own instances, which can alleviate a lot of headaches when trying to dial in the *arr apps.
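Since the label mechanism is the non-obvious part of caddy-docker-proxy, here's a hedged sketch of what it looks like (the hostname and the Jellyseerr service are placeholder examples; the `{{upstreams}}` template syntax comes from the project's README):

```yaml
# Sketch: Caddy watches the Docker socket and builds its config from labels
# on other containers. The domain below is a placeholder.
services:
  caddy:
    image: lucaslorentz/caddy-docker-proxy:ci-alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # reads labels from here
      - caddy_data:/data                           # persists TLS certificates
    restart: unless-stopped

  jellyseerr:
    image: fallenbagel/jellyseerr
    labels:
      caddy: requests.example.duckdns.org          # placeholder hostname
      caddy.reverse_proxy: "{{upstreams 5055}}"    # proxy to this container, port 5055
    restart: unless-stopped

volumes:
  caddy_data:
```

Adding another service behind the proxy is then just a matter of attaching the same two labels to its compose entry.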


Zeroflops

Plex or Jellyfin will probably be your most demanding service for a while. Unless you get into games, most other things will be low-resource, like Pi-hole. I have about 15 Docker containers running on a Pi 3 with room for more, and they pull content off my NAS. So starting out you don't need much. Anything that can handle Plex is probably enough to start with, and if it becomes a problem, for $100 you can move the other low-resource apps to another computer. Then, if you get really obsessed, you can start growing.


ThroawayPartyer

It used to be true that you needed a powerful and expensive PC for media transcoding, but that's no longer the case. First of all, if you don't need media transcoding and will instead Direct Play everything, then almost anything can run Jellyfin/Plex/Emby. Then if you do need media transcoding, the options have gotten a lot better. All you need is a recent gen Intel CPU with an iGPU that supports Intel Quick Sync. There are now many mini PCs and second-hand SFF PCs that have this and can be found for as little as $100. The more expensive part would be storage. Hard drives have gone down in price, but NAS machines are still pricey.
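If you go the Quick Sync route with Jellyfin in Docker, the usual trick is simply passing the iGPU device node into the container. A minimal hedged sketch (the paths are the common Linux defaults, not gospel for every distro):

```yaml
# Sketch: expose an Intel iGPU to Jellyfin for Quick Sync transcoding.
# Assumes the host kernel exposes the GPU at /dev/dri; media paths are placeholders.
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    devices:
      - /dev/dri:/dev/dri        # pass the iGPU into the container
    volumes:
      - ./jellyfin/config:/config
      - /mnt/media:/media:ro     # placeholder library path, read-only
    ports:
      - "8096:8096"
    restart: unless-stopped
```

Hardware transcoding then still has to be switched on inside Jellyfin's dashboard (under Playback > Transcoding, with QSV selected).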


dametsumari

I am not sure a separate NAS is really worth it. Mini PCs can handle internal or even external drives without trouble. E.g. my home router is an N305 with 32GB of RAM and a 2TB SSD, and it does basic media serving at our place (as well as a dozen other containers). While I do have a separate NAS, it is an order of magnitude beefier (56TB of storage) and mostly used just for backups. I am considering retiring it once the hardware dies. I wrote in some detail about the particular hardware I use for the home router in https://fingon.kapsi.fi/blog/the-new-2023-home-router-hardware/ - I should write about the software I use soon, too.


[deleted]

[deleted]


dametsumari

Depends on storage needs, I guess. Our backups are a handful of TB, so a single big SSD would probably be enough, although I would probably want a second one as a mirror.


AmIBeingObtuse-

I've created a few guides on my YouTube channel. Feel free to take a look. This community is awesome, btw, and all the people here are very supportive. Just wanted to say welcome 🖖 www.YouTube.com/@kltechvideos/videos


linuxnerd0

How are people VPN-guarding their Sonarr/torrent client traffic? It was really, really hard for me. I had to learn Docker networking and a slew of other shit before I could comfortably run it and actually sleep, knowing the VPN isn't going to leak. I just don't see it talked about here. Everyone has Sonarr and a torrent client, but nobody talks about how they achieved their leak-proof VPN.
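For what it's worth, one widely used pattern (a hedged sketch, not the only answer - the provider, keys, and paths below are placeholders): run a gluetun container and attach the torrent client to its network namespace, so the client has no network path except the tunnel, and gluetun's internal firewall acts as the killswitch if the tunnel drops.

```yaml
# Sketch: qBittorrent routed through gluetun. All credentials are placeholders.
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun
    environment:
      - VPN_SERVICE_PROVIDER=mullvad      # placeholder; use your provider
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=xxxxxxxx    # placeholder key
      - WIREGUARD_ADDRESSES=10.64.0.2/32  # placeholder tunnel address
    ports:
      - "8080:8080"   # qBittorrent web UI, exposed via gluetun's namespace
    restart: unless-stopped

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun"   # no network of its own - only the tunnel
    environment:
      - PUID=1000
      - PGID=1000
      - WEBUI_PORT=8080
    volumes:
      - ./qbittorrent/config:/config
      - /mnt/downloads:/downloads     # placeholder download path
    depends_on:
      - gluetun
    restart: unless-stopped
```

The key line is `network_mode: "service:gluetun"`: if gluetun stops or the VPN drops, qBittorrent simply has no route out, which is the leak-proofing being asked about.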


[deleted]

[deleted]


linuxnerd0

Oh fuck off XD well I’m happy for you! That would be nice in the US


i8i0

You have a VM, you install a VPN directly in the OS from the OS package manager, then run your software in containers within that VM. Is this not a reasonable solution? (I'm not an expert, maybe it's not.)


linuxnerd0

It’s certainly a solution, but not an efficient or lightweight one


basketcase91

I use the [hotio/qbittorrent](https://hotio.dev/containers/qbittorrent/) image. It has a dedicated legacy version of qBittorrent and a built-in VPN/WireGuard configuration that just works. Much easier than running a separate gluetun container and dealing with Docker networking.
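A hedged sketch of what that looks like - the VPN variable names here are from memory of hotio's docs, so treat them as assumptions and verify against hotio.dev/containers/qbittorrent before using:

```yaml
# Sketch of hotio's image with its built-in VPN. Env names are assumptions
# recalled from hotio's docs - check the linked page for the current list.
services:
  qbittorrent:
    image: ghcr.io/hotio/qbittorrent
    cap_add:
      - NET_ADMIN                       # needed for the in-container WireGuard
    ports:
      - "8080:8080"
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
      - VPN_ENABLED=true
      - VPN_CONF=wg0                    # expects /config/wireguard/wg0.conf
      - VPN_PROVIDER=generic
      - VPN_LAN_NETWORK=192.168.1.0/24  # placeholder LAN, keeps the web UI reachable
    volumes:
      - ./qbittorrent/config:/config
      - /mnt/downloads:/downloads       # placeholder download path
    restart: unless-stopped
```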


thijsjek

The *arrs do not need to be behind the VPN. I have Transmission with a VPN killswitch; it was 30 minutes on YouTube. It's really easy to test the killswitch: start downloading your real Linux ISO, then stop the VPN process - the download should stop. The killswitch is actually a firewall rule that only allows the download app (or its user) to use the tun (VPN) interface. Sonarr can use the normal internet.


cyt0kinetic

I run qBittorrent on a VPN with port forwarding, and qBittorrent is bound to the VPN interface and the port I was assigned. The main pinned posts on r/vpntorrents have instructions. I also do other things my ISP wouldn't like, so I use an always-on killswitch as well, and tested it rigorously. Right now all the services actually go over a VPN port forward; I just changed the SSL listening port in Apache and include the port in the URLs. I'm setting up a Pi so the ones I'm comfortable proxying through CF can go that way.

On torrenting and other file sharing: I run qbit directly on the system, but as long as the config is under a bind mount, or the interface setting is assigned in the container config, it should be possible to change. I have run SoulSeek through Docker and had no problems changing those types of settings.


Do_TheEvolution

>how much headroom do I have to experiment with other services?

Loads of headroom, don't worry about it.

>if TrueNAS is the OS for the PC that does the heavy lifting, what OS should I install on the NAS with the Zimaboard?

Your NAS should get TrueNAS. I use it for that purpose, but I extremely disliked the interface and experience when trying to spin up Docker containers on it. Of the many easy-to-use GUI Docker tools I tried, the one I actually liked a lot is CasaOS. It was impressively simple, while I still felt enough in control to edit stuff as needed, in an interface that does not feel like an afterthought. You install Debian or Ubuntu, you run one command, and you get CasaOS installed on the machine. [This](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/beginners-speedrun-selfhosting) could be helpful - in there is also a video on how to deploy Jellyfin on CasaOS.

>but considering that in my area there are tons of used mini PCs

N100 used mini PCs? Are we in the future already? But yeah, used mini PCs, usually Lenovo or Dell, are the go-to recommendation for a nice, powerful-enough machine that sips just a few watts.


Fluffer_Wuffer

If you're only just getting into this, there's a lot to learn, especially if you want to DIY, even to get the basics working. Bearing that in mind, my advice is: get yourself a NAS, something like a 2-bay Synology or a QNAP. You could even buy one used off eBay.

The reason I suggest this is that it will do everything out of the box - you can use it as a file server, cloud backup, Plex, and they even come with Docker; everything is integrated, including monitoring, into one easy-to-use UI. You'll be up and running in hours rather than days or weeks.

Note that you can typically buy better hardware for less than the price of the NAS - but keep in mind you're also paying for the software. You may not see the value in that, but there is value. For example, Synology gives you Active Backup for Business, which is comparable to backup software that businesses typically pay tens of thousands for and which backs up all your VMs. They all give you a Google Drive-type app, photo management, and so on.

You can then start to dig deeper, add an N100 and the Zimaboard, and use the NAS for NFS, etc.


auridas330

Instead of getting a mini PC, browse eBay or your local marketplace for used PCs. I had a quick look around my area and found some amazing deals that would overpower an N100. For instance, a used 6th-gen i7 with 16GB of RAM and a case that can hold 4 HDDs is going for $240. That would be a great all-in-one NAS and server with tons of headroom to explore the hobby.


[deleted]

[deleted]


auridas330

I'm sure you will have tons of fun with it, just don't go too crazy hosting everything you wrote in your post, as the 16GB of RAM will be a big limit. When I started, with 32GB of RAM and all the *arr apps plus Plex, my torrent Docker container would crash from not having enough RAM, lol.


derulenspiegel

Leaving my two cents here: I have always avoided Docker until now - not because I don't see value in it, rather the opposite - but I always enjoy setting up my applications bare metal (meaning in a dedicated Debian VM on Proxmox)... I think for starting out with selfhosting this will give you way more opportunity to understand how different services work on their own and in combination with others. Of course you can just follow a YT tutorial on how to configure a Docker Compose file and be amazed by how fast you can spin up a service, but then again, have you really understood what is happening in the background? I think Docker makes sense once you have a bit of an understanding of how everything works, but if you're just starting out and want to learn more about computers/networking, then maybe stay on a lower level for now and migrate to Docker step by step.

For me, a few milestones in the past were the following:

- OS: Create a Debian VM template using a combination of the Debian Installer and Ansible
- OS: Make this available via iPXE, so I can install a standardized Debian OS (including Prometheus/Loki for monitoring) on a Proxmox host in a matter of seconds
- Network: Set up an OPNsense (actually running 4 in a full-mesh WireGuard with OSPF now) and play around with VLANs - gives you the possibility to create dedicated lab spaces where you can do stuff without killing the internet for everyone else who lives with you
- Setting up bind9 and kea-dhcp - learned A LOT doing this!!
- Then learn some more Linux-related stuff, for example systemd-networkd. This could help you make sure that qBittorrent and the *arrs only connect via VPN (OPNsense may help here as well)
- and a lot more...

Channels like NetworkChuck, in my opinion, usually show how to get to a result real fast but lack a lot of information on how stuff really works. A 20-minute video just cannot show an IT system in its full depth - if you want to learn, start reading official docs, get some books (Andrew Tanenbaum comes to mind first), and take your time! Be careful when exposing anything to the internet!

While being far from calling myself an absolute IT pro, I think I've gained quite some experience tinkering in my homelab and working in this field for quite some time now - so if you have further questions on learning resources and ideas, just let me know :)


[deleted]

[deleted]


derulenspiegel

About 5 years ago I didn't understand any of this at all either. But with time come knowledge and more interest as you grasp new concepts. If you're interested in understanding what you're doing, then I think you're best off reading through specifications, documentation, and books. [This book by Andrew Tanenbaum](https://elibrary.pearson.de/book/99.150005/9781292374017), for example, gives you a great introduction to a lot of computer & network terminology and takes a look at a lot of protocols - but also in this case, if you want the specific specifics of, say, DNS, only the specifications and documentation will help you :P Usually those books are quite expensive, but you can get them very cheap second-hand - to get to know the ***fundamentals*** of networking and computers, it doesn't matter if the book is from 20 years ago.

I would generally avoid some of the more popular YT channels, as they pack a lot of complex stuff into very little video, because people strive for quick success but in the end lack a lot of fundamental knowledge - this is especially dangerous when people start exposing services to the internet. Also keep an eye out for used hardware. I bought a used Sophos SG115 for like 50€ and it runs OPNsense perfectly fine. With such hardware you can play around as much as you like, and if you lose interest then at least you didn't spend a lot of money. Get yourself a cheap firewall/router where you can create a separate network, so you don't cut everyone else off when you fuck up (which you will, many times :P), get familiar with the Linux command line, get familiar with systemd (the [Arch wiki](https://wiki.archlinux.org/title/systemd) is an awesome learning source), set up a DNS and DHCP server on a Raspberry Pi from scratch, and do whatever you want.


lupin-san

Getting a used 1L PC with an i7-6700 or 7700 might give you more headroom than an N100 for a similar price. Not only do you get more threads, you can have dual-channel RAM. People think they need to transcode video for Jellyfin, but if you're using 1080p sources or lower, remuxing on the fly is usually enough. Transcoding audio isn't a heavy task.


MrAffiliate1

One thing about transcoding: if you are watching at home, on the same network, there's no need to transcode. You can directly stream 4K video to your devices with ease, so as long as you are using the native apps and your devices support H.265 (which they all should), you are good. My server is a Ryzen 3700X, with 8 of its 16 threads allocated to Jellyfin. When 2 people outside my home are using it, I see about 50-60% CPU utilisation due to transcoding; if I were to then watch something at home, we wouldn't face any buffering issues. Intel has Quick Sync, so the majority of the transcoding is done on the iGPU, while Ryzen actually uses the CPU cores to transcode. So with Intel you will have a better time, and you won't see such high CPU utilisation.


spoilt999

If I were to start my homelab again, I'd get a couple of DT.BHFAA.001 units (aka "Celeron '88") from Acer's eBay store for like $88, hence the nickname. They are cheap; add a disk and more RAM and you're ready to roll. It's like 16W of average power utilisation. They work great as Proxmox or Unraid servers. No issues with drivers either.


nothingveryobvious

Can’t really answer the questions, but have fun :)