
13 May 2026 · 15 min read · Senthil Kumar

# Docker & Containerization: Building Portable, Scalable Applications

"It works on my machine" is not a deployment strategy.

Developers build on laptops (macOS). Staging runs on Linux. Production is a different Linux version. Dependencies are installed differently. Python version mismatch. Library version conflicts. Suddenly, code that worked locally fails in production.

Docker solves this: containerization bundles application code, dependencies, configuration, and runtime into a single image. Same image runs identically on any machine: developer laptop, staging, production, cloud provider.

## What Containers Do

A container provides lightweight process isolation:

```
Without containers:
  Laptop:     Python 3.11, PostgreSQL 15, Redis 7.0
  Staging:    Python 3.9,  PostgreSQL 14, Redis 6.2
  Production: Python 3.10, PostgreSQL 16, Redis 7.1
  Result: different behavior across environments

With containers:
  Docker image specifies: Python 3.11, PostgreSQL 15, Redis 7.0
  Same image deployed everywhere
  Result: identical behavior across environments
```

**Benefits:**

1. **Consistency:** Same code, dependencies, config everywhere
2. **Isolation:** Containers don't interfere with each other; run multiple versions of the same software simultaneously
3. **Reproducibility:** Build once, deploy many times identically
4. **Portability:** Run on any machine with Docker (laptop, cloud, on-premises)
5. **Speed:** Containers start in seconds (vs. VMs in minutes)

## Docker Best Practices

### 1. Keep Images Small

Large images = slow to build, slow to deploy, expensive to store.

**Bad Dockerfile:**

```dockerfile
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y \
    python3 python3-pip postgresql redis-server \
    build-essential gcc git curl wget ...
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
CMD ["python3", "app.py"]
```

**Result:** 2GB image (bloated)

**Good Dockerfile:**

```dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python3", "app.py"]
```

**Result:** 150MB image (lean)

**Techniques:**

- Use slim or alpine base images (Python slim, Node alpine)
- Don't install development tools (gcc, git) in the production image
- Use multi-stage builds (build in one stage, copy artifacts to a clean final stage)
- Remove package manager caches
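For instance, a multi-stage build might look like this (a minimal sketch; the stage layout and wheel-based install are illustrative, not the only approach):

```dockerfile
# Stage 1: build wheels with compilers and build tools available
FROM python:3.11 AS builder
WORKDIR /build
COPY requirements.txt .
RUN pip wheel --no-cache-dir --wheel-dir /wheels -r requirements.txt

# Stage 2: lean runtime image with no build tools
FROM python:3.11-slim
WORKDIR /app
COPY --from=builder /wheels /wheels
RUN pip install --no-cache-dir /wheels/* && rm -rf /wheels
COPY . .
CMD ["python3", "app.py"]
```

Only the final stage ships; gcc and the build caches never reach production.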

### 2. Separate Concerns

Each container has one responsibility.

**Bad:** Application + database + cache in one container

```
Reasons this fails:
- Can't scale database independently from app
- Database logs mixed with app logs
- Update app? Rebuild entire container
```

**Good:** Separate containers

```
Container 1: Application (restart frequently)
Container 2: PostgreSQL (rarely changed)
Container 3: Redis (rarely changed)
```

### 3. Immutable Infrastructure

Container image is immutable once built. No manual changes in production.

**Bad approach:**

```
Build image
Deploy to production
SSH into container to fix something
Image now different from source code
Next deploy overwrites manual change; thing breaks again
```

**Good approach:**

```
Update source code
Build new image
Test image
Deploy new image
Old version available for rollback
No manual changes; all changes tracked in Git
```
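Tracking every change in Git usually means the image is built by CI, tagged with the commit that produced it. A minimal sketch (hypothetical GitHub Actions workflow; the registry URL is a placeholder):

```yaml
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image tagged with the commit SHA
        run: docker build -t myapp:${{ github.sha }} .
      - name: Push to registry (URL is a placeholder)
        run: |
          docker tag myapp:${{ github.sha }} registry.example.com/myapp:${{ github.sha }}
          docker push registry.example.com/myapp:${{ github.sha }}
```

Because each image is tagged with its commit SHA, rollback is just redeploying the previous tag.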

### 4. Security

Containers run as root by default; if a container is compromised, the attacker has root inside it and a much easier path to the host.

**Best practices:**

- Run as a non-root user (`USER` directive in the Dockerfile)
- Use read-only filesystems where possible
- Don't store secrets in images (use environment variables or a secrets manager)
- Scan images for vulnerabilities (Trivy, Snyk)
- Keep base images updated
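Put together, a hardened Dockerfile might look like this (a sketch; the `app` user name is illustrative):

```dockerfile
FROM python:3.11-slim
# Create an unprivileged user instead of running as root
RUN useradd --create-home --shell /usr/sbin/nologin app
WORKDIR /app
COPY --chown=app:app . .
# Drop privileges for everything that runs in the container
USER app
# Secrets arrive via the environment at runtime, never baked into the image
CMD ["python3", "app.py"]
```

At runtime you can tighten further, e.g. `docker run --read-only -e DB_PASSWORD="$DB_PASSWORD" myapp:1.0` to mount the root filesystem read-only and inject the secret from outside.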

## Docker Architecture

### Dockerfile

Defines how to build an image.

```dockerfile
FROM python:3.11-slim                                    # Base image
RUN apt-get update && apt-get install -y postgresql-client \
    && rm -rf /var/lib/apt/lists/*                       # System dependencies, cache removed
COPY requirements.txt /tmp/                              # Copy dependency list first (layer caching)
RUN pip install --no-cache-dir -r /tmp/requirements.txt  # Install Python deps
COPY . /app                                              # Copy application code
WORKDIR /app                                             # Set working directory
EXPOSE 8000                                              # Document port (informational)
RUN useradd --create-home app                            # Create the non-root user
USER app                                                 # Run as non-root user
CMD ["python", "app.py"]                                 # Default command
```

### Building & Running

```bash
# Build image
docker build -t myapp:1.0 .

# Run container
docker run -p 8000:8000 myapp:1.0
```

In Kubernetes, the same image is referenced from a Pod spec:

```yaml
spec:
  containers:
    - name: app
      image: myapp:1.0
      ports:
        - containerPort: 8000
```

## Container Orchestration

Containers need management: scheduling, networking, storage, updates.

**Without orchestration (bad):**

```
Manually manage 100 containers
Container crashes; I restart it manually
New code release; I manually stop and restart all containers
Scale up; manually start 50 more containers
Out of resources; manually provision a new machine
Nightmare.
```

**With orchestration (good):**

```
Define: "Run 100 instances of this image"
Orchestrator automatically:
- Schedules containers on available machines
- Restarts failed containers
- Replaces old containers with new ones
- Scales up/down based on load
- Manages networking, storage, secrets
```

**Kubernetes** is the standard orchestrator (Docker Swarm is simpler alternative for small deployments).
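In Kubernetes terms, "run 100 instances of this image" is a single declarative Deployment (a sketch; names and the replica count are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 100          # Desired count; Kubernetes keeps it true
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: app
          image: myapp:1.0
          ports:
            - containerPort: 8000
```

You declare the desired state; the control loop handles scheduling, restarts, and replacement.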

## Real-World Container Scenarios

### Scenario 1: Dependency Hell

Company has 50 microservices. Each has different dependencies:

- Service A: Python 3.9, PostgreSQL 14
- Service B: Python 3.11, PostgreSQL 15
- Service C: Node 16, PostgreSQL 14

Without containers: all services share one PostgreSQL installation; upgrading it for Service B breaks Service A. Nightmare.

With containers: each service runs against its own PostgreSQL container at exactly the version it needs. Services don't conflict; update one without affecting the others.

### Scenario 2: Local Development

Developer clones repo, runs `docker-compose up`.

All services spin up (app, database, cache, message queue) in seconds. Identical to production. No "works on my machine but not in staging" issues.

Without Docker: Spend 2 hours configuring local environment.
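A minimal `docker-compose.yml` for such a stack might look like this (a sketch; service names, image tags, and the password are placeholders):

```yaml
services:
  app:
    build: .
    ports:
      - "8000:8000"
    environment:
      DATABASE_URL: postgres://app:example@db:5432/app
    depends_on:
      - db
      - cache
  db:
    image: postgres:15
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: example
      POSTGRES_DB: app
  cache:
    image: redis:7
```

One `docker-compose up` and every new developer has the full stack running.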

### Scenario 3: Zero-Downtime Deployment

Production: 10 instances of app running.

New code released. Kubernetes:

1. Starts 1 new container (new version)
2. Routes traffic to the new container
3. Kills 1 old container
4. Repeats until all 10 containers are new
5. Zero downtime; users see no disruption

Without containers: Manual deployments; downtime.
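The rollout described above maps to Kubernetes' rolling-update strategy; a sketch of the relevant Deployment fields:

```yaml
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1         # Start at most 1 extra (new) container at a time
      maxUnavailable: 0   # Never drop below 10 serving containers
```

With these settings, Kubernetes replaces containers one at a time and capacity never dips during a deploy.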

## Container Registry

Where images are stored.

**Docker Hub:** Public registry (hub.docker.com)

- Free public images (Ubuntu, Python, Postgres)
- Free private repos (limited)

**Private registries:**

- ECR (AWS), GCR (Google Cloud), ACR (Azure)
- Self-hosted (Harbor, Nexus)

**Security:** Scan images for vulnerabilities before deploying.

## Cost & Performance

**Laptops (development):**

- Docker Desktop: free for individuals and small businesses
- Storage: images typically 100-500MB each

**Servers (production):**

- Orchestrator (Kubernetes): open-source, but $100K+/year to operate well
- Managed cloud (ECS/EKS/AKS): $500-5K/month, depending on load
- Container registry: $0-50/month

**Performance:**

- Container startup: <1 second
- Overhead vs. VM: 1-5% (minimal)
- Overhead vs. native: 1-10% (minimal)

## Common Container Mistakes

1. **Giant images:** 2GB images; slow to build and deploy
2. **Root user:** Security vulnerability
3. **Secrets in images:** Hardcoded passwords in the Dockerfile get leaked
4. **No health checks:** Container crashes; the orchestrator doesn't know
5. **No logging strategy:** Container logs are lost when the container is deleted
6. **No resource limits:** One container uses all CPU/memory and starves the others
7. **Manual changes:** SSH into a container and change config; lost on redeploy
8. **Inefficient layers:** Each RUN command creates a layer; careless ordering bloats the image
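Several of these mistakes (no health checks, no resource limits, unbounded logs) can be fixed declaratively. A sketch in compose syntax (the `/health` endpoint, user ID, and limits are illustrative, and assume `curl` exists in the image):

```yaml
services:
  app:
    image: myapp:1.0
    user: "1000"                 # Don't run as root
    healthcheck:                 # Let the runtime detect a wedged container
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 3s
      retries: 3
    deploy:
      resources:
        limits:                  # Prevent one container starving the others
          cpus: "1.0"
          memory: 512M
    logging:                     # Cap local log growth
      driver: json-file
      options:
        max-size: 10m
        max-file: "3"
```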

## Integration with Managed Services

Managing containers at scale requires:

- Container registry (store and version images)
- Orchestration (Kubernetes or a cloud equivalent)
- Networking (service discovery, load balancing)
- Storage (persistent volumes for databases)
- Monitoring (logs, metrics, alerts)

Sentos' managed container service:

- Designs container architecture
- Builds and deploys images via CI/CD
- Manages orchestration (Kubernetes or cloud-native)
- Monitors containers and manages scaling
- Handles updates and rollbacks

## The Bottom Line

Containers are not optional anymore. They standardize deployment and enable scalability.

Build small images. One concern per container. Use registries. Orchestrate with Kubernetes or cloud service.

Do this, and "it works on my machine" becomes "it works everywhere."

Senthil Kumar

Founder & CEO

Founder & CEO of Sentos Technologies. Passionate about AI-powered IT solutions and helping mid-market enterprises advance beyond.
