How to Fix an Nginx 502 Bad Gateway Error on a Kubernetes Pod


  1. The Root Cause

    On a Kubernetes Pod, an Nginx 502 Bad Gateway error indicates that the Nginx reverse proxy cannot connect to its configured upstream application. In a sidecar setup, this typically happens because the application container within the same pod is unhealthy, has crashed, or is not listening on the expected port, or because resource starvation is preventing the upstream from responding.
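
    To confirm this failure mode, check the Nginx container's error log first. A minimal check, assuming the proxy container is named nginx (adjust to your pod spec):

    # Look for upstream connection failures in the Nginx container's output.
    # A 502 caused by a dead upstream typically logs a line such as:
    #   connect() failed (111: Connection refused) while connecting to upstream
    kubectl logs <pod-name> -n <your-namespace> -c nginx | grep -i upstream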

  2. Quick Fix (CLI)

    # 1. Identify the problematic pod
    # Replace <your-namespace> and <your-deployment-name>
    kubectl get pods -n <your-namespace> | grep <your-deployment-name>
    
    # 2. Check the application container's logs to understand why the upstream failed.
    # Add --previous if the container has crashed and restarted.
    # Replace <pod-name> and <app-container-name> with actual values.
    kubectl logs <pod-name> -n <your-namespace> -c <app-container-name>
    
    # 3. Restart the pod by deleting it, allowing the deployment controller to recreate it.
    # This often resolves transient issues or reinitializes a crashed application.
    kubectl delete pod <pod-name> -n <your-namespace>
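
    If deleting the pod does not resolve the error, pod events and restart counts usually show why the upstream keeps failing. Two further standard kubectl commands (placeholders as above):

    # 4. Inspect events, probe failures, and container restart counts
    kubectl describe pod <pod-name> -n <your-namespace>

    # 5. Alternatively, perform a controlled rolling restart of the whole deployment
    kubectl rollout restart deployment/<your-deployment-name> -n <your-namespace>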
  3. Configuration Check

    Inspect the Nginx configuration, typically found at /etc/nginx/nginx.conf or mounted from a ConfigMap, and ensure the proxy_pass directive points to the port the upstream application actually listens on. Also confirm that the Nginx container can read this configuration (e.g., the ConfigMap volume is mounted at the expected path).

    # Example: /etc/nginx/nginx.conf (or included site configuration)
    server {
        listen 80;
        server_name localhost;
    
        location / {
            # Ensure this port (e.g., 8080) matches your application's listening port
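            # (Containers in a pod share one network namespace, so localhost
            # here reaches the application container directly.)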
            proxy_pass http://localhost:8080; 
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # Consider increasing timeout if the upstream application is slow to respond
            proxy_read_timeout 120s; 
            proxy_connect_timeout 60s;
        }
    }
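
    After changing the configuration, validate the syntax inside the running Nginx container before reloading. Both commands below are standard Nginx tooling; the container name nginx is an assumption:

    # Validate the Nginx configuration syntax
    kubectl exec <pod-name> -n <your-namespace> -c nginx -- nginx -t

    # Reload Nginx without restarting the container
    kubectl exec <pod-name> -n <your-namespace> -c nginx -- nginx -s reload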

    Verify that your application container’s Dockerfile or entrypoint script actually starts the application and binds it to the expected address and port (e.g., 0.0.0.0:8080; kubelet httpGet probes connect over the pod IP, so binding only to 127.0.0.1 will fail them). Additionally, examine the Kubernetes Deployment manifest (deployment.yaml) for the application container to confirm that containerPort matches the port the application listens on and that the livenessProbe and readinessProbe configurations are sound: a failing liveness probe restarts the container, leaving Nginx with no upstream during each restart.

    # deployment.yaml (snippet for application container definition)
    containers:
      - name: my-app-container # The name of your application container
        image: my-app:latest
        ports:
          - containerPort: 8080 # This must match the port your application listens on
        livenessProbe:
          httpGet:
            path: /healthz # Health endpoint of your application
            port: 8080
          initialDelaySeconds: 15
          periodSeconds: 20
        readinessProbe:
          httpGet:
            path: /ready # Readiness endpoint of your application
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
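
    To rule out a port mismatch directly, bypass Nginx and probe the application container itself. A sketch, assuming the app should listen on 8080 and expose /healthz as above; minimal images may not ship ss, in which case port-forwarding is the more reliable check:

    # Check which ports the application container is actually listening on
    kubectl exec <pod-name> -n <your-namespace> -c my-app-container -- ss -ltn

    # Or forward the application's port locally and probe it directly
    kubectl port-forward pod/<pod-name> 8080:8080 -n <your-namespace> &
    curl -I http://localhost:8080/healthz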
  4. Verification

    # 1. Confirm the pod is running and ready after fixes
    kubectl get pods -n <your-namespace> | grep <your-deployment-name>
    
    # 2. Access the service exposed by the Nginx pod.
    # Replace <service-ip-or-hostname> and <nginx-port> with your service's details.
    curl -I http://<service-ip-or-hostname>:<nginx-port>/
    # Expected output includes "HTTP/1.1 200 OK"; a "502 Bad Gateway" status
    # line means the upstream is still unreachable
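
    # 3. Confirm the Service has ready endpoints behind it. (Replace
    # <your-service-name>; an empty ENDPOINTS column means pods are still
    # failing their readinessProbe.)
    kubectl get endpoints <your-service-name> -n <your-namespace>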