Benchmarking the Death Star: A Deep Dive into Microservices Performance

May 24, 2025

In modern software architecture, microservices are the reigning paradigm, promising scalability and resilience. But this power comes with immense complexity. How do deployment decisions affect performance? How do you monitor a system composed of dozens of interconnected services?

As part of my coursework, we tackled these questions head-on. Our mission: benchmark and monitor the Social Network application from DeathStarBench, a popular open-source microservices benchmark suite, under various conditions to uncover real-world performance insights.

The Battleground: Deployment Environments

To get a comprehensive picture, we deployed the Death Star application in two distinct environments, representing both local and cloud-native setups:

  1. Local Docker Swarm Cluster: This allowed us to simulate a production-like cluster on local machines. It was perfect for rapid testing and understanding the fundamentals of container orchestration without cloud overhead.
  2. Google Kubernetes Engine (GKE): To test the system in a robust, scalable, industry-standard cloud environment, we ran the same stack on managed Kubernetes via Google Cloud Platform.
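
For reference, here is a minimal sketch of the two deploy paths as a small Node.js helper. The Compose file path, manifest directory, and stack name are illustrative placeholders, not the exact ones we used:

// Deploy the same stack to either target (paths and names are illustrative)
const { execSync } = require("node:child_process");

function deploy(target) {
  if (target === "swarm") {
    // Docker Swarm: deploy the Compose stack to the local cluster
    execSync("docker stack deploy -c docker-compose.yml socialnetwork", { stdio: "inherit" });
  } else if (target === "gke") {
    // GKE: apply the Kubernetes manifests to the current kubectl context
    execSync("kubectl apply -f k8s/", { stdio: "inherit" });
  } else {
    throw new Error(`unknown target: ${target}`);
  }
}

deploy(process.argv[2] ?? "swarm");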

The Strategies: Comparing Deployment Configurations

The core of our experiment was to compare three distinct deployment strategies, each designed to test a different aspect of system performance and reliability (a quick sketch follows the list):

  • Single Node - Single Replica: The baseline. All microservices deployed as single copies on a single machine. This let us measure performance with minimal resource usage and negligible inter-service network latency, since all traffic stays on one host.
  • Multi-Node - Single Replica: Services were distributed across multiple nodes. This configuration was crucial for studying the impact of inter-service network latency and understanding how distribution affects overall throughput.
  • Single Node - Multi-Replica: Multiple copies of each microservice were deployed on a single node. This setup simulated a high-availability scenario within a resource-constrained environment, testing how services behave under load with redundancy.
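
As a rough sketch, the three strategies reduce to two knobs: node count and replicas per service. The node counts and service names below are illustrative assumptions, not our exact topology:

// The three configurations expressed as two knobs
const strategies = {
  singleNodeSingleReplica: { nodes: 1, replicasPerService: 1 }, // baseline
  multiNodeSingleReplica:  { nodes: 3, replicasPerService: 1 }, // adds inter-node latency
  singleNodeMultiReplica:  { nodes: 1, replicasPerService: 3 }, // redundancy on one host
};

// Dialing replicas on each orchestrator (service names illustrative):
//   Swarm:      docker service scale socialnetwork_user-service=3
//   Kubernetes: kubectl scale deployment user-service --replicas=3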

Our Eyes and Ears: A Two-Tier Observability Stack

You can't optimize what you can't measure. To get a complete view of the system's health, we implemented a two-tier observability stack, combining high-level visualization with deep, granular metrics.

📡 The Perfect Pair: We used Pixie for real-time, high-level visual observability and Prometheus for fine-grained metrics collection and deep-dive analysis.

  • Pixie gave us an instant, live-updating map of service interactions, request rates, and latencies with minimal setup. It was our "command center" for at-a-glance system health.
  • Prometheus was our "forensics lab." It scraped detailed metrics over time—CPU usage, memory consumption, error counts, latency distributions—allowing us to perform in-depth analysis and correlate performance changes with specific events or configurations.
// Our observability toolkit
const monitoringStack = {
  highLevelVisualization: "Pixie",
  detailedMetrics: "Prometheus",
  orchestration: ["Docker Swarm", "Kubernetes (GKE)"],
  platform: "Google Cloud Platform"
};
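
To make the "forensics lab" concrete, these are the kinds of PromQL queries such an analysis relies on. The container_* metric names are standard cAdvisor metrics; the latency histogram name is an assumed placeholder for whatever each service actually exports:

// Representative PromQL for deep-dive analysis
const deepDiveQueries = {
  // Per-container CPU usage rate over a 5-minute window
  cpu: "rate(container_cpu_usage_seconds_total[5m])",
  // Working-set memory per container
  memory: "container_memory_working_set_bytes",
  // 99th-percentile request latency from a histogram (assumed metric name)
  p99Latency: "histogram_quantile(0.99, rate(http_request_duration_seconds_bucket[5m]))"
};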

Key Findings and Outcomes

This project was far more than an academic exercise; it was a practical simulation of the challenges faced by DevOps and SRE teams every day. Our key takeaways were:

  • Identified Critical Trade-offs: Our benchmarks revealed the tangible trade-offs between different deployment models. We saw firsthand how distributing services across nodes increased resilience but introduced network latency that impacted overall response times.
  • Hands-On Orchestration: We gained invaluable experience deploying, managing, and debugging a complex microservices application on both Docker Swarm and Kubernetes.
  • Demonstrated Practical Monitoring: We showed how combining tools like Pixie and Prometheus yields a powerful, multi-layered observability strategy for monitoring and debugging distributed systems in real time.

Ultimately, this project demystified the complexities of microservices performance, providing a solid, evidence-based understanding of how architectural and deployment choices directly impact system behavior.
