Master Rolling Deployments with Docker Compose, NGINX, and Blue-Green Strategy
Introduction
In the fast-paced world of software development, ensuring that your application stays live and functional during deployments is crucial. In my case, this meant adopting a rolling deployment strategy that allowed for zero downtime while updating my application. The ultimate goal was seamless, uninterrupted service for my users.
I’m excited to walk you through my experience of achieving rolling deployments using Docker Compose, NGINX, and a blue-green deployment strategy. I’ll share not only the steps I took but also the lessons I learned. By the end of this post, I hope you’ll have a clear roadmap to deploy your updates with the confidence that uptime won’t be sacrificed.
Why Rolling Deployments Matter (Especially for Me)
There was always this sinking feeling right before deploying an update to production. You’re haunted by the possibility of breaking things and causing service downtime. In one of my projects, I simply couldn’t afford to disrupt the service for my users. That’s when I knew I needed a better deployment strategy.
With rolling deployments, I saw a way to:
- Deploy updates gradually: Users would continue using the current stable version while the new one is tested in parallel.
- Easily roll back: If something went wrong with the new version, I could seamlessly revert to the old one.
- Minimize risk: Rolling deployments let me reduce the scope of any bugs or issues by progressively shifting traffic to the new version.
My First Steps: Implementing Rolling Deployments
When I started exploring rolling deployments, I knew it would involve Docker Compose, NGINX, and a blue-green strategy. The blue version represented my current, stable environment, and the green version would be the new one running simultaneously for testing.
The Challenge I Faced
It wasn’t smooth sailing right from the start. The biggest issue? Traffic switching between the blue and green environments. NGINX wasn’t routing traffic correctly, and I quickly realized the culprit was a misconfigured nginx.conf file.
Imagine the stress of deploying a production system only to find traffic bouncing unpredictably between your environments! Thankfully, after poring over the logs and digging into the configuration, I fixed it, and everything started to click.
Lesson Learned
This taught me the importance of thorough testing in staging environments before pushing to production. Also, having proper health checks in place proved to be crucial to ensure both environments were stable before switching traffic between them.
The Technical Guide: How I Achieved It
Here’s the breakdown of the steps I followed to get rolling deployments working with Docker Compose and NGINX:
1. Setting Up Docker Compose (Blue and Green)
The first task was to set up two versions of my application (blue and green) using Docker Compose. I created a docker-compose.prod.yml file to define both versions of my application and an NGINX container for balancing the traffic.
version: '3.8'
services:
  blue:
    build: .
    container_name: blue
    ports:
      - '3001:3000'
    environment:
      - NODE_ENV=production
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    command: npm run forever-start
    restart: unless-stopped

  green:
    build: .
    container_name: green
    ports:
      - '3002:3000'
    environment:
      - NODE_ENV=production
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    command: npm run forever-start
    restart: unless-stopped

  nginx:
    image: nginx:alpine
    container_name: nginx-proxy
    ports:
      - '80:80'
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    depends_on:
      - blue
      - green
    restart: unless-stopped
The idea was simple: spin up two versions (blue and green) and use NGINX to distribute traffic between them. This gave me the ability to test the green version without affecting users still accessing the stable blue version.
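If you also want Docker itself to track each container’s health, Compose supports a healthcheck stanza. Here’s a minimal sketch, assuming the image ships with curl and the app answers on /healthz (add the same block under blue as well):

```yaml
services:
  green:
    healthcheck:
      # Probe the app's own /healthz; mark the container unhealthy
      # after 3 consecutive failures.
      test: ['CMD', 'curl', '-f', 'http://localhost:3000/healthz']
      interval: 10s
      timeout: 3s
      retries: 3
      start_period: 15s
```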
2. Configuring NGINX: The Key to Load Balancing
For this setup to work, NGINX had to balance traffic between the blue and green instances. I modified the nginx.conf to use a least-connections strategy (least_conn), which directs traffic to the server with the fewest active connections.
events {}

http {
    upstream api {
        least_conn;
        server blue:3000;
        server green:3000;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://api;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        location /healthz {
            proxy_pass http://api/healthz;
            proxy_set_header Host $host;
        }
    }
}
Here, I added a /healthz endpoint for both the blue and green instances. It’s absolutely crucial to have this in place to monitor the health of each instance and ensure NGINX doesn’t route traffic to a failing service.
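One caveat worth knowing: open-source NGINX doesn’t actively probe /healthz on its own; the endpoint is there for you (or an external monitor) to poll. What NGINX does do out of the box is passive health checking, marking a server as unavailable after repeated failed requests. A sketch of the relevant upstream tuning:

```nginx
upstream api {
    least_conn;
    # After 3 failed attempts within 30s, skip the server for 30s.
    server blue:3000 max_fails=3 fail_timeout=30s;
    server green:3000 max_fails=3 fail_timeout=30s;
}
```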
3. Launching the Blue Version
With the configurations ready, I started with the blue (current) version. This was the live version users were accessing while I tested the green version.
docker-compose -f docker-compose.prod.yml up -d blue nginx
This command got the blue version and NGINX running. Users continued accessing the service as usual.
4. Deploying the Green Version for Parallel Testing
Next, I deployed the green version, the one I intended to release, while the blue instance remained live.
docker-compose -f docker-compose.prod.yml up -d --no-deps --build green
At this point, NGINX was distributing traffic between the blue and green versions. This allowed me to test the new version in a live environment without disrupting existing users.
5. Monitoring and Adjusting Traffic
Both blue and green versions were running smoothly. By monitoring the /healthz endpoint, I ensured both services were stable. As I gained confidence in the green version, I began shifting more traffic to it.
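One simple way to stage that shift is NGINX’s weight parameter on the upstream servers. The exact ratios are up to you; a sketch:

```nginx
upstream api {
    least_conn;
    # Roughly 3 of every 4 new connections go to green.
    server blue:3000 weight=1;
    server green:3000 weight=3;
}
```

After each adjustment, reload NGINX (for example, docker exec nginx-proxy nginx -s reload) so the change takes effect without dropping in-flight connections.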
6. Retiring the Blue Version
Once the green version proved to be stable, I was ready to retire the blue version and transition fully to the new instance.
docker-compose -f docker-compose.prod.yml stop blue
docker-compose -f docker-compose.prod.yml rm blue
With these commands, I stopped and removed the blue version, officially releasing the green version to all users. The transition was smooth and, importantly, no downtime was experienced!
Best Practices I Followed for Rolling Deployments
Throughout the process, there were some key practices that really helped:
- Health Checks: Adding a /healthz endpoint for each service was crucial. This ensured I never routed traffic to an unhealthy instance.
- Gradual Traffic Shift: I didn’t rush to stop the blue version. Instead, I gradually moved traffic to the green version, giving me time to catch any potential issues before they became widespread.
- Version Tagging: I tagged Docker images by version. This gave me a simple rollback option if something went wrong.
- Logging: Enabling logging for Docker and NGINX was a lifesaver when troubleshooting the traffic issues I encountered early on.
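The version-tagging rollback can be sketched as a tiny shell routine. Everything here is hypothetical (the image name myapp, the tags), and it’s written as a dry run that only prints the commands; drop the echo prefixes to run them for real:

```shell
#!/bin/sh
# Dry-run sketch of a tag-based rollback (hypothetical image "myapp").
STABLE_TAG=v1.2.0   # last known-good release

# Re-point "latest" at the known-good image, then restart blue on it.
echo docker pull "myapp:$STABLE_TAG"
echo docker tag "myapp:$STABLE_TAG" myapp:latest
echo docker-compose -f docker-compose.prod.yml up -d --no-deps blue
```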
What I Learned Along the Way
During the course of this deployment, there were several lessons I took away:
- Test Before Production: Misconfigurations in NGINX are stressful! I learned the hard way to always test the configuration in a staging environment before touching production.
- Health Checks Are Non-Negotiable: The /healthz endpoint became my safety net. If a service wasn’t healthy, NGINX automatically stopped routing traffic to it, preventing any downtime.
- Gradual Shifts Pay Off: Being able to slowly move traffic to the green version gave me the flexibility to identify and fix issues before they affected all users.
Conclusion
Rolling deployments are a game-changer for achieving zero-downtime updates. By leveraging Docker Compose and NGINX with a blue-green strategy, I was able to ensure smooth transitions between application versions without affecting users. Whether you’re a small startup or a large-scale service, this strategy gives you the flexibility to deploy confidently without disrupting your users’ experience.