With our hardware humming, networking seamless, and CasaOS managing our containers beautifully, it's time to talk about the applications that actually make our HomeLab useful. After three years of experimentation, we've settled on a core set of services that power everything from client projects to daily operations.
Today, I'll walk you through the essential applications running in our Alpha Bits HomeLab, why we chose each one, and the real-world configurations that make them work together as a cohesive system.
The Philosophy: Purpose-Built, Not Kitchen Sink
Early in our HomeLab journey, I made the classic mistake of deploying every interesting application I found. Media servers, monitoring tools, development environments, automation platforms – if it had a Docker container, I probably tried it.
The result was a sprawling mess of services that consumed resources, required constant maintenance, and provided little actual value. I spent more time managing the infrastructure than using it for productive work.
Our current approach is different: every application must serve a specific purpose in our business operations or learning objectives. If it doesn't contribute to client work, team productivity, or skill development, it doesn't get deployed.
The Core Stack: Applications That Earn Their Keep
Node-RED: The Swiss Army Knife of Automation
If I had to pick one application that best represents the power of HomeLab infrastructure, it would be Node-RED. This visual programming tool has become the nervous system of our entire operation.
What Node-RED Does for Us:
- IoT Data Processing - Collecting sensor data from client deployments
- API Integration - Connecting disparate systems and services
- Workflow Automation - Automating repetitive business processes
- Data Pipeline Management - ETL processes for analytics and reporting
- Notification Systems - Alerts, reports, and status updates
Real-World Example: We have a Node-RED flow that monitors our I.C.E. Battery thermal storage installations, processing 10,000+ sensor readings per day in real time, storing them in InfluxDB, triggering alerts for anomalies, and generating reports. The entire pipeline runs on a Raspberry Pi 4.
A word of caution: Node-RED's power comes with responsibility. A tiny bug in a flow — literally one missing character — can bypass a safety limit, overpower a system, and instantly melt wiring. I know this from personal experience. When you're controlling physical hardware through software flows, test obsessively and build in failsafe nodes.
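What a failsafe node should actually do is easier to show than to describe. Here's the shape of it in Python (in a real flow this logic would live in a Node-RED JavaScript function node; the duty-cycle limits and names below are hypothetical, not our production values):

```python
import time

# Hypothetical safety limits for a heater control output (percent duty cycle).
MAX_DUTY = 80.0      # never drive the element above 80%
MAX_AGE_S = 30.0     # refuse to act on sensor data older than 30 seconds

def safe_duty(requested, reading_ts, now=None):
    """Clamp a requested duty cycle to a safe envelope.

    Returns 0.0 (fail safe: off) if the sensor reading is stale,
    otherwise the requested value clamped to [0, MAX_DUTY].
    """
    now = time.time() if now is None else now
    if now - reading_ts > MAX_AGE_S:
        return 0.0                      # stale data -> shut off, don't guess
    return max(0.0, min(requested, MAX_DUTY))

if __name__ == "__main__":
    t = 1_000.0
    print(safe_duty(95.0, reading_ts=t, now=t + 1))    # over limit -> clamped to 80.0
    print(safe_duty(50.0, reading_ts=t, now=t + 60))   # stale reading -> 0.0
```

The two rules — clamp every output, and treat stale data as a reason to stop rather than to keep going — would have saved me that melted wiring.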
Deployment via CasaOS:
Node-RED is available in the CasaOS app store with ARM optimization. The deployment includes:
- Persistent data volumes for flows and configurations
- Environment variables for security settings
- Network configuration for MQTT and HTTP endpoints
- Automatic restart policies
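Under the hood, a CasaOS app install boils down to a Docker Compose definition. A hand-rolled equivalent covering those four points might look like this (ports, paths, and the timezone are illustrative, not our exact config):

```yaml
services:
  node-red:
    image: nodered/node-red:latest   # multi-arch image; runs on ARM64 as-is
    restart: unless-stopped          # automatic restart policy
    ports:
      - "1880:1880"                  # Node-RED editor and HTTP endpoints
    environment:
      - TZ=UTC                       # set your timezone
    volumes:
      - ./node-red-data:/data        # persistent flows and configuration
```

MQTT access is just another port mapping (or a shared Docker network with the Mosquitto container), which is why CasaOS can wire these apps together without custom scripting.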
Why Node-RED Over Alternatives:
We've tried traditional programming approaches, cloud automation platforms, and other workflow tools. Node-RED wins because:
- Visual programming is accessible to non-developers
- Massive library of pre-built nodes
- Excellent ARM performance
- Active community and continuous development
- Perfect for rapid prototyping and iteration
Database Infrastructure: PostgreSQL + Redis + InfluxDB
Data is the lifeblood of any modern application, and our database strategy reflects the diverse needs of our projects.
PostgreSQL - The Reliable Workhorse
PostgreSQL serves as our primary relational database for:
- Directus CMS data
- Client application databases
- User management and authentication
- Business logic and transactional data
Running on our Pi-Data device with 8GB RAM, PostgreSQL handles multiple databases and concurrent connections comfortably, and the ARM64 builds are mature and well-supported.
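Part of that comfort comes from tuning, since PostgreSQL's defaults assume far less RAM than the Pi-Data has. A reasonable starting point for an 8GB board with SSD storage (values are illustrative — tune to your own workload):

```conf
# postgresql.conf excerpts for an 8GB ARM board with SSD storage
shared_buffers = 2GB           # ~25% of RAM is the usual starting point
effective_cache_size = 4GB     # hint to the planner about OS file cache
work_mem = 16MB                # per sort/hash; keep modest with many connections
maintenance_work_mem = 256MB   # speeds up VACUUM and CREATE INDEX
random_page_cost = 1.1         # SSDs make random reads nearly as cheap as sequential
```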
Redis - Speed When It Matters
Redis handles our API response caching, session storage, and real-time data sharing between services. Its memory efficiency makes it ideal for Raspberry Pi deployments where every MB counts. We allocate about 256MB to Redis — enough for our caching needs without starving other services.
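That 256MB cap is enforced in Redis itself rather than left to chance. In redis.conf (or at runtime via CONFIG SET), the relevant pair of directives is:

```conf
maxmemory 256mb
maxmemory-policy allkeys-lru   # evict least-recently-used keys once the cap is hit
```

With an eviction policy set, a full cache degrades gracefully instead of taking the whole Pi down with it.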
InfluxDB - Time-Series Data
For IoT and monitoring data, InfluxDB earns its place. It stores sensor data from our I.C.E. Battery installations, system performance metrics, and application analytics. The compression is impressive — 10,000+ data points daily barely dents disk usage. One tip: put InfluxDB on an SSD, not an SD card. The write patterns will kill an SD card within months.
Directus: Headless CMS That Actually Works
We've covered Directus in previous posts, but it deserves mention here as a critical application. Running in Docker via CasaOS, Directus provides:
- Content management for our website and blog
- API backend for client projects
- Admin interface for non-technical team members
- Flexible data modeling without custom development
Directus runs beautifully on ARM architecture, which makes it a natural fit for our distributed setup.
Monitoring and Observability: Grafana + Uptime Kuma
Grafana - Beautiful Data Visualization
Grafana connects to our various data sources to provide:
- System performance dashboards
- IoT sensor data visualization
- Business metrics and KPIs
- Client project monitoring
The ability to create custom dashboards and share them with clients has been invaluable for demonstrating value and maintaining transparency.
Uptime Kuma - Service Monitoring Made Simple
Uptime Kuma monitors all our services and provides:
- HTTP/HTTPS endpoint monitoring
- Database connection checks
- SSL certificate expiration alerts
- Beautiful status pages for clients
The lightweight nature and beautiful interface make it perfect for HomeLab environments.
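At its core, each Uptime Kuma HTTP monitor is just a periodic request with a pass/fail verdict. A stripped-down, stdlib-only version of that probe looks like this:

```python
import urllib.request
import urllib.error

def check_endpoint(url, timeout=5.0):
    """Probe a URL the way a basic HTTP(S) monitor would.

    Returns (ok, status): ok is True for 2xx/3xx responses; status is the
    HTTP status code, or None if no response arrived at all.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return (200 <= resp.status < 400), resp.status
    except urllib.error.HTTPError as e:
        return False, e.code          # server answered, but with 4xx/5xx
    except (urllib.error.URLError, OSError):
        return False, None            # DNS failure, connection refused, timeout
```

Uptime Kuma layers scheduling, retries, keyword matching, certificate checks, and notifications on top of exactly this kind of probe — which is why it stays so lightweight.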
Development and Productivity Tools
Code-Server - VS Code in the Browser
Running VS Code in a browser might sound crazy, but it's incredibly useful:
- Consistent development environment across devices
- Access to our codebase from anywhere
- No need to sync configurations between machines
- Perfect for quick edits and configuration changes
FileBrowser - Web-Based File Management
FileBrowser provides secure file access:
- Upload/download files to any Pi
- Edit configuration files directly
- Share files with team members
- Backup and restore operations
Integration Patterns: How Everything Works Together
The real power of our HomeLab comes from how these applications integrate:
Data Flow Example: IoT Monitoring Pipeline
- Sensors send data via MQTT to Mosquitto broker
- Node-RED processes and enriches the data
- InfluxDB stores time-series data
- PostgreSQL stores device metadata and configurations
- Grafana visualizes data in real-time dashboards
- Uptime Kuma monitors the entire pipeline
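The glue in step 2 is mostly parsing and enrichment. Sketching that stage in Python — the topic scheme and field names here are invented for illustration, and in our stack this logic lives in a Node-RED function node:

```python
import json

def enrich(topic, payload_bytes):
    """Turn a raw MQTT message into a record ready for InfluxDB and alerting.

    Assumes a hypothetical topic scheme sensors/<site>/<device>/<metric>
    and a JSON payload like {"value": 21.5, "ts": 1700000000}.
    """
    _, site, device, metric = topic.split("/")
    payload = json.loads(payload_bytes)
    record = {
        "measurement": metric,
        "tags": {"site": site, "device": device},
        "value": float(payload["value"]),
        "ts": payload["ts"],
    }
    # Flag values outside a (made-up) plausible range so a downstream node
    # can raise an alert instead of silently storing garbage.
    record["alert"] = not (-40.0 <= record["value"] <= 125.0)
    return record

if __name__ == "__main__":
    msg = b'{"value": 21.5, "ts": 1700000000}'
    print(enrich("sensors/hq/ice-01/temperature", msg))
```

Everything downstream — InfluxDB writes, PostgreSQL lookups, Grafana panels — consumes records in this one normalized shape, which is what keeps the pipeline maintainable.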
Content Management Workflow
- Directus provides content creation interface
- PostgreSQL stores content and metadata
- Redis caches frequently accessed content
- Node-RED handles webhook notifications
- Cloudflare Tunnel exposes APIs to the public
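The Redis step in that workflow is the classic cache-aside pattern: check the cache, fall back to PostgreSQL on a miss, then populate the cache with a TTL. Sketched here with a plain dict standing in for Redis (with the real client, `get` and `setex` are the actual command names):

```python
import time

class TTLCache:
    """Minimal stand-in for Redis GET/SETEX semantics."""

    def __init__(self):
        self._data = {}

    def get(self, key, now=None):
        now = time.time() if now is None else now
        hit = self._data.get(key)
        if hit is None or hit[1] <= now:
            return None                  # missing or expired
        return hit[0]

    def setex(self, key, ttl, value, now=None):
        now = time.time() if now is None else now
        self._data[key] = (value, now + ttl)

def get_content(cache, slug, load_from_db, ttl=300):
    """Cache-aside read: try the cache first, fall back to the database."""
    cached = cache.get(slug)
    if cached is not None:
        return cached
    value = load_from_db(slug)           # e.g. a SELECT against PostgreSQL
    cache.setex(slug, ttl, value)        # populate for subsequent reads
    return value
```

On a Pi, this pattern matters more than on big hardware: every query Redis absorbs is CPU and I/O that PostgreSQL doesn't have to spend.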
Deployment Strategies and Best Practices
1. Resource Allocation
We distribute applications based on resource requirements:
- CPU-intensive: Node-RED flows, data processing
- Memory-intensive: Databases, caching layers
- I/O-intensive: File management, backup operations
- Network-intensive: API gateways, monitoring
2. Data Persistence Strategy
- Critical data: USB SSDs with regular backups
- Cache data: Local storage with automatic cleanup
- Log data: Centralized logging with rotation
- Configuration: Version controlled and backed up
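"Regular backups" in the first bullet is worth making concrete. Our nightly jobs boil down to dumping each database and pruning old copies; here's a sketch of the pruning half (the dump itself would shell out to pg_dump, and the filename scheme below is just an assumption):

```python
import pathlib

def prune_backups(backup_dir, keep=7):
    """Keep only the newest `keep` date-stamped dump files in a directory.

    Assumes filenames sort chronologically, e.g. app-2024-05-01.sql.gz,
    which a strftime-based naming scheme guarantees.
    """
    dumps = sorted(pathlib.Path(backup_dir).glob("*.sql.gz"))
    for old in dumps[:-keep]:
        old.unlink()                     # delete everything but the newest `keep`
    return [p.name for p in dumps[-keep:]]
```

Seven dailies plus whatever your off-device copy retains is a sane floor; the important part is that pruning is automatic, so the SSD never silently fills up.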
3. Security Considerations
- Network segmentation: Internal services on ZeroTier only
- Authentication: Strong passwords and API keys
- Updates: Regular container updates via Watchtower
- Monitoring: Alert on unusual activity or failures
Performance Insights: What Actually Works on ARM
After months of running these applications in production, here's how they break down on ARM hardware:
Excellent ARM Performance:
- Node-RED: Handles complex flows without issues
- Redis: Memory efficiency is perfect for Pi constraints
- Uptime Kuma: Lightweight and responsive
- FileBrowser: Fast file operations
Good ARM Performance:
- PostgreSQL: Solid performance with proper tuning
- Grafana: Some lag with complex dashboards
- Directus: Good for moderate traffic
Requires Optimization:
- InfluxDB: Benefits from SSD storage
- Code-Server: Better on higher-memory Pis
Cost Analysis
Every application in our stack is open source — Node-RED, PostgreSQL, Redis, InfluxDB, Directus, Grafana, Uptime Kuma. Total software licensing cost: $0/month. The only costs are hardware (covered in our hardware guide) and electricity — about $3-5/month for the Pis running 24/7.
Lessons Learned and Recommendations
1. Start Small, Scale Gradually
Don't try to deploy everything at once. Start with one or two core applications and add others as you identify specific needs.
2. Monitor Resource Usage
Use CasaOS's monitoring to understand which applications consume the most resources. This helps with optimization and capacity planning.
3. Document Everything
Keep detailed notes on configurations, integrations, and customizations. This documentation becomes invaluable during troubleshooting or migrations.
4. Plan for Failure
Critical applications should have backup strategies and failover plans. Test these regularly to ensure they work when needed.
5. Embrace the Community
The open-source communities around these applications are incredible resources. Don't hesitate to ask questions or contribute back when you can.
What's Next?
That covers the core stack. In the next post, we'll look at advanced topics and where the HomeLab goes from here.
Questions about specific configurations or integration patterns? We cover a lot of it in the earlier posts — and if we missed something, it'll probably show up in a future write-up.
Next up: "HomeLab Future: Advanced Topics and What's Coming Next"