Most teams do not need a giant platform on day one. They need a stack that solves a real sequence of problems: expose services safely, automate work, store operational data, run analytics, add AI where it helps, and keep the whole system observable.
That is where a self-hosted stack becomes useful. Not because self-hosting is automatically better, but because some teams need more control over network boundaries, data flow, operating cost, and system composition than a SaaS-first setup can easily provide.
This article maps the self-hosted stack behind that kind of environment. It is not a list of containers to launch for fun. It is a practical way to think about how services fit together when you are building automation, analytics, AI workflows, and internal tools.
Why teams move from isolated containers to a stack
Running one service in Docker is easy. Running a system that supports real work is a different problem.
As soon as automation or internal tools become important, the same questions start appearing:
- how do we expose services securely;
- how do we keep credentials and access under control;
- where does application data live;
- what handles queues, caching, and workflow state;
- where do analytics and dashboards go;
- how do we monitor uptime and system health;
- how do we add local AI components without routing everything through external APIs.
At that point, the useful unit is no longer a single app. It is the stack around it.
The stack, layer by layer
1. Edge and access layer
At the front of the stack, you usually need two things: traffic routing and identity.
When Self-Hosted Caddy Is Enough for Your Reverse Proxy Layer covers the routing side. Caddy gives you a lightweight reverse proxy with automatic TLS and a simple operational model. It is a strong fit when you want one predictable place to publish services like n8n, Metabase, WordPress, or internal dashboards.
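To make the routing side concrete, here is a minimal illustrative Caddyfile publishing two internal services behind one entry point. The hostnames and upstream ports are placeholders, not the ones any specific repository uses; Caddy provisions TLS for the listed hostnames automatically.

```caddyfile
# Each site block gets automatic HTTPS; reverse_proxy forwards to the container.
automation.example.internal {
    reverse_proxy n8n:5678
}

bi.example.internal {
    reverse_proxy metabase:3000
}
```

One file, one predictable place to add the next service: that is the operational model the article is pointing at.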
When Self-Hosted Authentik Becomes Worth It covers the identity side. Once multiple internal services need centralized login, access policies, and stronger authentication, identity stops being optional infrastructure. It becomes part of how you control operational risk.
2. Automation backbone
Automation usually becomes the center of gravity once teams stop copying data manually between systems.
When Self-Hosted n8n Is the Better Choice already explains when it makes sense to own the automation platform instead of using the hosted version. Around n8n, the key supporting layers are:
- When Self-Hosted PostgreSQL Is the Right Default for Internal Tools: a practical primary store for app state, workflow metadata, and internal systems.
- When Self-Hosted Redis Starts Paying Off: useful when workflows need queueing, fast ephemeral state, or broker-like behavior.
This combination matters because automation platforms rarely live alone. They depend on storage, queueing, and reliable connectivity to become operationally useful.
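As a sketch of how those pieces connect, here is a trimmed docker-compose fragment wiring n8n to PostgreSQL for state and Redis for queue mode. The service names and the password are placeholders; the environment variable names follow n8n's documented Postgres and queue-mode settings, but treat this as an illustration rather than a complete deployment.

```yaml
services:
  n8n:
    image: n8nio/n8n
    environment:
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: postgres        # workflow state lives in Postgres
      EXECUTIONS_MODE: queue              # queue mode hands job state to Redis
      QUEUE_BULL_REDIS_HOST: redis
    depends_on: [postgres, redis]
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me        # placeholder credential
  redis:
    image: redis:7
```

The point of the fragment is the shape, not the values: the automation platform is the center, but it only becomes reliable once storage and queueing sit next to it.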
3. AI layer
If your AI use cases involve internal documents, sensitive prompts, or predictable cost control, local infrastructure becomes more interesting.
When Self-Hosted Ollama Makes Sense for Private AI Workflows is about running local inference and keeping model traffic inside your own stack. When Self-Hosted Qdrant Is the Better Fit for Retrieval and Internal AI Search covers the vector storage side when you need retrieval, semantic search, or agent memory patterns.
These tools are especially useful when combined with automation. n8n can orchestrate workflows, Ollama can generate or classify, and Qdrant can provide the retrieval layer behind search and knowledge workflows.
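To make the retrieval layer concrete: before anything reaches Qdrant, documents are usually split into overlapping chunks that get embedded individually. Here is a minimal, illustrative Python sketch of that step; the function name and window sizes are arbitrary choices, not part of any library API.

```python
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows for embedding.

    Overlap keeps sentences that straddle a boundary retrievable
    from at least one chunk.
    """
    if overlap >= size:
        raise ValueError("overlap must be smaller than size")
    step = size - overlap
    # Stop once the remaining tail is fully covered by the previous chunk.
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]
```

In the workflow framing above, n8n would call something like this on incoming documents, send each chunk through Ollama (or another embedder) for vectors, and upsert the results into Qdrant.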
4. Data and analytics layer
Operational systems generate data quickly. The useful question is where that data should live and how it should be queried.
For many internal tools, When Self-Hosted PostgreSQL Is the Right Default for Internal Tools is the safest default. For CMS-style and compatibility-heavy workloads, When Self-Hosted MySQL Is Still the Practical Choice stays relevant. When reporting becomes event-heavy or analytical workloads outgrow a row-store-first setup, When Self-Hosted ClickHouse Starts Making Sense becomes the more interesting option.
On top of that data layer, When Self-Hosted Metabase Is Enough for Business Intelligence gives teams a practical BI surface without committing to a heavier analytics platform.
5. Observability layer
A stack that cannot be observed is fragile, even if it looks fine in Docker Compose.
There are three different observability jobs in this ecosystem:
- When Self-Hosted Prometheus and Grafana Are the Right Monitoring Stack for metrics and dashboards;
- When Gatus Is Better Than a Full Monitoring Stack for uptime checks and status visibility;
- When Beszel Is the Fastest Way to Monitor a Docker Host for host and container visibility with a lighter setup.
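To show how small the uptime-check job can be, here is an illustrative Gatus endpoint definition. The URL and thresholds are placeholders; the condition syntax follows Gatus's configuration format.

```yaml
endpoints:
  - name: n8n
    url: "https://automation.example.internal/"   # placeholder URL
    interval: 60s
    conditions:
      - "[STATUS] == 200"
      - "[RESPONSE_TIME] < 500"   # milliseconds
```

A handful of blocks like this is often all the "monitoring" a young stack needs, which is exactly why the tools above deserve separate articles.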
Those tools overlap a little, but they do not solve the same problem. That is why I also planned a direct comparison: Prometheus vs Gatus vs Beszel: What Each Tool Actually Solves.
6. Application layer
Not every service in the stack is an internal tool. Sometimes the stack needs to support a public-facing application or a controllable content surface.
When Self-Hosted WordPress Still Makes Business Sense fits here. It is less about “should WordPress exist” and more about when a self-hosted CMS still has a practical role inside a broader infrastructure strategy.
A sensible order to build this stack
Most teams should not deploy all of this at once. A better progression looks like this:
- Start with edge and one core service.
- Add storage and backup discipline.
- Add automation where manual work already hurts.
- Add monitoring before complexity increases further.
- Add analytics and AI only where there is a clear use case.
That order matters because infrastructure compounds in both directions: well-sequenced components reduce friction at every later step, while randomly added components accumulate as operational debt.
What not to self-host first
A common mistake is starting from the most interesting tool instead of the most useful layer.
For example:
- do not start with a vector database if you do not yet have a real retrieval workflow;
- do not start with a full monitoring stack if one uptime dashboard is enough for now;
- do not deploy identity infrastructure unless access sprawl is already becoming a problem;
- do not self-host AI just because API pricing feels abstract.
The best first self-hosted component is usually the one that removes an immediate operational bottleneck.
Where the AiratTop repositories fit
The *-self-hosted repositories in the AiratTop GitHub account are useful because they are not random one-container demos. They are designed to work as a connected ecosystem around a shared Docker network and a practical operational model.
That makes the series easier to use in real life:
- the networking intent is consistent;
- helper scripts are predictable across repos;
- storage locations are explicit;
- backup patterns appear where they matter;
- the stack can grow service by service instead of forcing an all-or-nothing rebuild.
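The shared-network idea can be sketched in compose terms: each service file attaches to a network that was created once, outside any single stack. The network name here is illustrative, not necessarily the one the repositories use.

```yaml
# Created once on the host, e.g.: docker network create selfhosted
networks:
  selfhosted:
    external: true    # reference the pre-existing network, do not create it

services:
  metabase:
    image: metabase/metabase
    networks: [selfhosted]   # reachable by name from n8n, Caddy, etc.
```

Because every repo joins the same external network, adding a service never requires rewiring the ones already running.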
If you want to browse the full set of templates first, start with the AiratTop organization on GitHub.
Summary
A useful self-hosted stack is not a badge. It is an architecture choice made service by service.
Start with the layer that removes the most friction now. Add the rest only when the next bottleneck is real. If automation is already central, begin with When Self-Hosted n8n Is the Better Choice. If you need a map for the supporting services around it, this series will fill in the rest.
