About the BBYT Suggestions category

Suggest ideas for the Big Bear YouTube Channel:

These are just suggestions, and there is no timeline for getting to them. I am one person creating videos, so please bear with me while I work on these ideas.

Would love to see some projects like Immich via Dockge, since you break things down in the easiest manner. Can't wait, Bear!

For each suggestion, please start a new post in: BBYT Suggestions - Big Bear Community

Hey, thanks Christopher for offering this space to the community to suggest topics. As someone who has just started in the world of home labs and is still having trouble with things that might be easy for others, I'd like to present a list of ideas, ordered from what interests me most down to suggestions that are more out of curiosity.

  1. I would like to publish a WordPress website from my server, but after following several tutorials on setting up a tunnel with Cloudflare Zero Trust, I'm having trouble with SSL certificates. This forces me either to publish the site full of security warnings or to publish it over HTTPS and then be unable to view it because Cloudflare returns a 502 error. My suggestion is then:
    Create a tutorial on how to set up a website from scratch using Docker on a server based on Proxmox, DietPi, or Ubuntu Server 24.04, whether using WordPress or another CMS that might be useful (there's a rough compose sketch for this after the list).

  2. In many tutorials, people talk about the wonders of setting up a Plex server in your home lab, explaining how to install Sonarr, Radarr, Jackett, a torrent client, and Plex or Jellyfin, even arguing that it can run on a Raspberry Pi without issues. However, there are common problems that aren't addressed, such as the hardware features and configuration needed for transcoding. My suggestion is then:
    Create a tutorial on how to set up an automated Plex server, focusing on technical aspects like the hardware and configuration needed to achieve adequate transcoding, all while using Docker, Ubuntu Server, and/or Proxmox (see the Plex sketch after the list).

  3. AI is on the rise, and we have the opportunity to run models locally on our own hardware using modern GPUs like the RTX series, making use of LM Studio. I've been able to set up AnythingLLM and Continue for VS Code, all locally on my Windows 11 computer. I know there are options like Ollama for running them with Docker, so my third idea would be:
    Create a tutorial on how to set up the same kind of system in a home lab, with private AI services reachable from anywhere in the world, usable both for conversational models and for Continue in VS Code (see the Ollama sketch after the list).
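
To give an idea of what I mean in suggestion 1, here is roughly the compose file I've been experimenting with. Everything in it (service names, passwords, the tunnel token) is a placeholder from my own notes, not something from the channel, so treat it as a sketch rather than a working recipe:

```yaml
# Rough sketch only - names, passwords, and the tunnel token are placeholders.
# The idea (as I understand it) is to let the Cloudflare tunnel reach WordPress
# over plain HTTP inside the Docker network, so Cloudflare terminates TLS and
# the origin never needs its own certificate.
services:
  db:
    image: mariadb:11
    environment:
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: changeme
      MYSQL_ROOT_PASSWORD: changeme-too
    volumes:
      - db_data:/var/lib/mysql

  wordpress:
    image: wordpress:latest
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: changeme
      WORDPRESS_DB_NAME: wordpress
    volumes:
      - wp_data:/var/www/html
    depends_on:
      - db

  cloudflared:
    image: cloudflare/cloudflared:latest
    command: tunnel run
    environment:
      TUNNEL_TOKEN: paste-the-token-from-the-zero-trust-dashboard
    depends_on:
      - wordpress

volumes:
  db_data:
  wp_data:
```

From what I've read, pointing the tunnel's public hostname at http://wordpress:80 instead of an HTTPS origin is what avoids both the certificate warnings and the 502, but I'd love to see that confirmed (or corrected) in a video.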
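
For suggestion 2, the part I keep getting stuck on is the GPU passthrough, so here is the kind of compose snippet I mean. The /dev/dri device assumes an Intel iGPU (Quick Sync), the paths are placeholders, and hardware transcoding in Plex also needs a Plex Pass as far as I know:

```yaml
# Sketch only - the interesting part for transcoding is the devices section.
services:
  plex:
    image: lscr.io/linuxserver/plex:latest
    network_mode: host
    environment:
      PUID: "1000"
      PGID: "1000"
      TZ: Etc/UTC
      VERSION: docker
    devices:
      - /dev/dri:/dev/dri   # Intel Quick Sync passthrough (assumes an Intel iGPU)
    volumes:
      - ./plex/config:/config
      - /path/to/media:/media
    restart: unless-stopped
```

It's exactly this part — which CPUs and iGPUs are enough, when a dedicated GPU is needed, and how to verify that transcoding is actually hardware-accelerated — that I'd love to see covered.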
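
And for suggestion 3, this is the sort of stack I imagine, assuming an NVIDIA GPU with the NVIDIA Container Toolkit installed on the host. Open WebUI is just one possible front end I picked for the sketch; AnythingLLM could sit in its place:

```yaml
# Sketch only - assumes an NVIDIA GPU and the NVIDIA Container Toolkit on the host.
services:
  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      OLLAMA_BASE_URL: http://ollama:11434
    ports:
      - "3000:8080"
    depends_on:
      - ollama

volumes:
  ollama_data:
```

Continue in VS Code could then be pointed at the Ollama endpoint on port 11434, and reaching it from outside the house could reuse the same tunnel idea as in suggestion 1 — but again, this is just my guess at a setup, not something I've fully tested.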

Thank you in advance for reviewing my suggestions. I’ll be available to provide more suggestions in the future.

Hello, thanks for all the suggestions! Can you post these in a separate forum post, please, to keep things organized?

yes of course I will :slightly_smiling_face:
