From Monitoring to Observability: Building a Complete Stack with Prometheus, Grafana, and Loki
- Eglis Alvarez
- Sep 6
- 3 min read
It’s 3:00 a.m. and your phone buzzes. The monitoring system says “Service down”. You log in half-asleep, but the only information you get is that CPU usage spiked. Is the problem caused by a bad deployment? A database bottleneck? A network issue?
This is where traditional monitoring falls short. It detects symptoms but doesn’t explain causes.
In my previous article, we concluded with a simple truth:
Monitoring = detecting symptoms.
Observability = understanding causes.
That distinction matters more than ever. Modern systems are distributed, containerized, and highly dynamic. Basic monitoring can no longer keep up with microservices, Kubernetes clusters, and event-driven architectures. Teams need more than alerts—they need context.
This is where observability comes in. Instead of asking “Is the system up?”, observability allows you to ask:
“Why did latency spike after the last deployment?”
“Which service is causing cascading failures?”
“Can we trace a single request across multiple microservices?”
And to answer those questions, we need the right stack.
Why Prometheus, Grafana, and Loki?

Observability rests on three pillars:
Metrics – quantitative measurements (latency, throughput, errors).
Logs – detailed records of what happened, line by line.
Traces – request flows across services (not covered in this tutorial, but think of Jaeger or Tempo).
Prometheus provides metrics collection and powerful querying via PromQL.
Loki provides logs, designed to integrate seamlessly with Prometheus labels.
Grafana brings it all together in dashboards, letting you visualize metrics + logs in context—and now also manage alerts from a single interface.
Together, these tools shift you from reactive firefighting to proactive insight.
Step 1 – Prerequisites
Make sure you have:
Docker installed.
Docker Compose.
Free ports: 3000 (Grafana), 9090 (Prometheus), 3100 (Loki).
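A quick sanity check before moving on (the commands below assume a standard Docker installation):
# Verify Docker and Compose are available
docker --version
docker-compose --version   # or: docker compose version, if you use the v2 plugin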
Step 2 – Define Your Stack
Create a file called docker-compose.yml:
services:
  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
  loki:
    image: grafana/loki:latest
    ports:
      - "3100:3100"
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    volumes:
      - ./grafana-data:/var/lib/grafana
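One practical note: the official Grafana image runs as a non-root user (UID 472), so if the ./grafana-data bind mount gets created as root, Grafana may fail to start with a permissions error. Pre-creating the directory with matching ownership avoids that:
# Create the Grafana data directory with the UID the container expects
mkdir -p grafana-data
sudo chown -R 472:472 grafana-data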
Step 3 – Configure Prometheus
Add a prometheus.yml file:
global:
  scrape_interval: 5s

scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["prometheus:9090"]
This basic setup tells Prometheus to scrape itself, confirming the system is alive.
Step 4 – Launch the Stack
Run:
docker-compose up -d
You now have:
Prometheus → http://localhost:9090
Grafana → http://localhost:3000 (default login: admin / admin)
Loki → http://localhost:3100
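If you prefer to verify from the terminal instead of the browser, each component exposes a simple health endpoint (the paths below are the standard ones for these tools, assuming the default ports):
# Prometheus health check
curl -s http://localhost:9090/-/healthy

# Loki readiness (may report "not ready" for a few seconds after startup)
curl -s http://localhost:3100/ready

# Grafana health API (returns a small JSON payload)
curl -s http://localhost:3000/api/health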
Step 5 – First Grafana Dashboard
Log in to Grafana.
Add Prometheus as a data source (http://prometheus:9090).
Now that Prometheus is configured, you're ready to import a pre-built dashboard from Grafana's community library, or create your own to tailor metrics visualization to your needs. (ID 1860 is the popular "Node Exporter Full" dashboard; it will populate once you add Node Exporter, covered under What's Next.)

Grafana Dashboard
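If you rebuild this stack often, clicking through the data source UI gets tedious. Grafana can also load data sources from a provisioning file at startup. Here is a minimal sketch; the file name grafana-datasources.yml is just a choice for this example, while the provisioning format itself is Grafana's own:
# grafana-datasources.yml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    url: http://prometheus:9090
    access: proxy
    isDefault: true
  - name: Loki        # used in Step 6
    type: loki
    url: http://loki:3100
    access: proxy
Mount it into the grafana service by adding - ./grafana-datasources.yml:/etc/grafana/provisioning/datasources/datasources.yml under its volumes, and both data sources appear automatically on the next restart.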
Step 6 – Add Loki for Logs
Add Loki as a data source (http://loki:3100).
Run a query such as:
{container="prometheus"}
Now you can explore logs directly alongside your metrics. (If the query returns nothing yet, that's expected: Loki only has data once a log shipper such as Promtail is feeding it, which we'll add in the next article.)
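LogQL intentionally mirrors PromQL's label selectors, so queries compose the same way. A couple of illustrative variations (the container label depends on how your log shipper tags streams, so treat it as an example):
# Only log lines containing the word "error"
{container="prometheus"} |= "error"

# Turn matching lines into a metric: error lines per second over 5 minutes
rate({container="prometheus"} |= "error" [5m])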
Step 7 – Alerts with Grafana
One of Grafana’s biggest advantages today is its unified alerting system.
From any panel, you can define a threshold (e.g., latency > 500ms).
Grafana continuously evaluates the query.
If the condition is met, an alert is fired and sent to Slack, Teams, email, or PagerDuty.
This means you no longer need to switch between multiple tools—your dashboards become your alerting engine.
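To make the "latency > 500ms" example concrete: a Grafana alert rule is essentially a query plus a threshold. Assuming your application exposes a Prometheus histogram called http_request_duration_seconds (a common naming convention, not something this minimal stack ships by default), a p95-latency rule could evaluate a query like this and fire when the result stays above 0.5:
# 95th-percentile request latency over the last 5 minutes, per service
histogram_quantile(
  0.95,
  sum by (le, service) (rate(http_request_duration_seconds_bucket[5m]))
)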
Why This Matters
At first glance, this looks like just another local dev setup. But the implications are larger:
With metrics, you know something is wrong.
With logs, you know what happened.
With alerts, you’re notified before users complain.
With all three in Grafana, you know why it happened—and you act faster.
This shift drastically reduces MTTR (Mean Time to Resolution), cuts operational costs, and builds trust in your systems.
What’s Next?
You now have a sandbox observability stack. Next steps include:
Adding Promtail to ship logs into Loki.
Extending exporters (Node, SQL, custom), as sketched below.
Refining Grafana alerting rules to fit your SLAs.
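To give a feel for the exporter path, here is a minimal sketch of adding Node Exporter to this stack; the service and job names are example choices:
# docker-compose.yml (add under the existing services: block)
  node-exporter:
    image: prom/node-exporter:latest
    ports:
      - "9100:9100"

# prometheus.yml (add a second entry under scrape_configs:)
  - job_name: "node"
    static_configs:
      - targets: ["node-exporter:9100"]
Run docker-compose up -d again and the Node Exporter Full dashboard (ID 1860) from Step 5 should start filling in with host metrics.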
And remember the 3:00 a.m. scenario? Next time your pager goes off, you won’t be staring at a meaningless CPU graph. Instead, you’ll have metrics, logs, and alerts in Grafana—the difference between guessing and knowing.
✅ In the next article: we’ll add Promtail to the stack and explore how to correlate logs with metrics in Grafana to solve real-world issues faster.