AWS Fargate 101: When Your Containers Don't Need a Babysitter

A practical guide to AWS Fargate from someone who's managed too many EC2 instances. Learn when serverless containers make sense and when they don't.

You know that gradual realization that you're spending more time babysitting EC2 instances than actually building features? I hit that point a few years back during what should have been a routine infrastructure review. There I was, troubleshooting disk space issues on a production server again, when it struck me that I'd somehow become a very expensive Linux system administrator.

That's around the time AWS Fargate started catching my attention.

How I Think About Fargate#

In my experience, Fargate is essentially the "I just want to run my containers" option. You provide AWS with your Docker image, specify your CPU and memory requirements, and it takes care of the underlying infrastructure. No EC2 instances to patch, no cluster capacity planning, and no late-night alerts about disk space issues.

The mental model that helped me understand it: if EC2 is like owning a car (oil changes, tire rotations, that weird noise that started last week), then Fargate is more like using a ride service. You specify your destination (run this container), and someone else handles the vehicle maintenance.

The Architecture (How It Actually Works)#


What I find elegant about this approach is the isolation model. Each Fargate task runs in its own environment with dedicated kernel, CPU resources, memory, and network interface. It's similar to having a dedicated micro-VM for each container, but without the operational overhead that usually comes with VM management.

Getting Started With Your First Deployment#

I'll walk through a practical Fargate deployment. I tend to use ECS for initial experiments because it's more straightforward than EKS when you're learning; unless you have specific Kubernetes requirements, it's the simpler starting point.

The first step is creating a task definition, which is essentially telling AWS what resources your container needs (the executionRoleArn below uses a placeholder account ID; ECS needs that role to pull your image and ship logs to CloudWatch):

JSON
{
  "family": "my-app",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "my-app",
      "image": "nginx:latest",
      "portMappings": [
        {
          "containerPort": 80,
          "protocol": "tcp"
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/my-app",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "ecs"
        }
      }
    }
  ]
}

One thing that caught me off guard initially: the CPU and memory values aren't arbitrary. Fargate supports specific combinations:

| CPU (vCPU) | Memory Values (GB) |
|------------|--------------------|
| 0.25 | 0.5, 1, 2 |
| 0.5 | 1, 2, 3, 4 |
| 1 | 2, 3, 4, 5, 6, 7, 8 |
| 2 | 4-16 (1GB increments) |
| 4 | 8-30 (1GB increments) |
| 8 | 16-60 (4GB increments) |
| 16 | 32-120 (8GB increments) |

If you pick an invalid combination, AWS will let you know and ask you to adjust. I discovered this when I copied a task definition from an EC2 setup and couldn't figure out why the tasks wouldn't start.
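
If you want to try the whole flow from a terminal, here's a minimal CLI sketch. The cluster name, file path, and subnet/security group IDs are placeholders for this example; registration is also where an unsupported CPU/memory pairing typically gets rejected:

Bash
# Create a cluster -- with Fargate there are no instances to add,
# so this is little more than a namespace
aws ecs create-cluster --cluster-name my-cluster

# Register the task definition from above (saved locally);
# an invalid CPU/memory combination is rejected at this step
aws ecs register-task-definition \
  --cli-input-json file://task-definition.json

# Launch a single task into placeholder subnet/security group IDs
aws ecs run-task \
  --cluster my-cluster \
  --launch-type FARGATE \
  --task-definition my-app \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}'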

Networking Considerations#

One important aspect of Fargate is that it only supports awsvpc network mode. This means each task gets its own elastic network interface (ENI) with a private IP address. While this provides good security isolation, it does require some VPC planning: every running task consumes an IP address in your subnets, so size them accordingly.

Here's an example using Terraform (which I find more manageable than console clicking for anything beyond initial experiments):

hcl
resource "aws_ecs_service" "my_app" {
  name            = "my-app-service"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.my_app.arn
  desired_count   = 2
  launch_type     = "FARGATE"

  network_configuration {
    subnets          = aws_subnet.private[*].id
    security_groups  = [aws_security_group.ecs_tasks.id]
    assign_public_ip = false
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.my_app.arn
    container_name   = "my-app"
    container_port   = 80
  }
}

# Important: Fargate requires 'ip' target type, not 'instance'
resource "aws_lb_target_group" "my_app" {
  name        = "my-app-tg"
  port        = 80
  protocol    = "HTTP"
  vpc_id      = aws_vpc.main.id
  target_type = "ip"  # <-- This is crucial for Fargate
  
  health_check {
    enabled             = true
    healthy_threshold   = 2
    unhealthy_threshold = 2
    timeout             = 5
    interval            = 30
    path                = "/health"
    matcher             = "200"
  }
}
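
Once that's applied, I like to confirm the service actually settled rather than trusting the apply output. A small sketch, assuming the cluster defined alongside the config above is named my-cluster:

Bash
# Block until desiredCount == runningCount and deployments settle
aws ecs wait services-stable \
  --cluster my-cluster \
  --services my-app-service

# Spot-check the counts afterwards
aws ecs describe-services \
  --cluster my-cluster \
  --services my-app-service \
  --query 'services[0].[runningCount,desiredCount]'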

When I Consider Using Fargate#

Based on my experience running workloads on both Fargate and EC2, here's how I think about the decision:

Fargate tends to work well when:#

  • You have unpredictable or spiky traffic patterns
  • Your team prefers focusing on application code over infrastructure management
  • You're running multiple small, isolated services
  • You need strong workload isolation for compliance reasons
  • You're comfortable with containerized thinking

EC2 might be a better fit when:#

  • You need GPU instances (Fargate doesn't support them yet)
  • You're running Windows containers with specific requirements
  • Cost optimization is a primary concern and you have predictable, high utilization
  • You need privileged containers or custom kernel modules
  • You want to leverage Spot instances for cost savings

Cost Considerations#

I should be upfront about pricing: Fargate does cost more per unit of compute than EC2. Here's what I found when I analyzed one of our services:

Bash
# EC2 (t3.medium, 2 vCPU, 4GB RAM)
#   Monthly: ~$30 (On-Demand)
#   Monthly: ~$19 (Reserved Instance)

# Fargate (2 vCPU, 4GB RAM)
#   Monthly: ~$72 (24/7 usage)
#   Monthly: ~$50 (with Savings Plans)
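
For the curious, the Fargate figure falls straight out of the published per-hour rates. These are the us-east-1 Linux/x86 on-demand rates as I remember them, so treat them as illustrative and check the current pricing page:

Bash
# ~730 hours in a month; rates assumed:
#   $0.04048 per vCPU-hour, $0.004445 per GB-hour
echo "2 * 0.04048 * 730 + 4 * 0.004445 * 730" | bc
# => 59.10 (vCPU) + 12.98 (memory) ≈ $72/month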

What these numbers don't capture is the operational overhead saved:

  • No OS patching and updates
  • No cluster capacity planning
  • No auto-scaling group management
  • No instance health monitoring
  • No capacity shortage emergencies

For our team, the additional $30-40 per month felt worthwhile to reduce operational burden.

Things That Surprised Me#

  1. Cold Start Delays: The first task launch in a new availability zone can take 30-60 seconds. Worth planning for if you have strict latency requirements.

  2. ENI Limitations: Each Fargate task requires an ENI. When you hit your VPC's ENI limit, tasks simply won't launch. I learned this during a particularly busy deployment day. (A quick way to check your headroom is sketched after this list.)

  3. No SSH Access: You can't SSH into Fargate containers the traditional way. ECS Exec provides debugging access:

    Bash
    # Requires ECS Exec enabled on the service (--enable-execute-command)
    # and the Session Manager plugin installed locally
    aws ecs execute-command \
      --cluster my-cluster \
      --task abc123 \
      --container my-app \
      --interactive \
      --command "/bin/sh"
    
  4. Ephemeral Storage Only: Tasks get 20GB of ephemeral storage by default (configurable up to 200GB on platform version 1.4.0+), and it vanishes when the task stops. Need persistence? Use EFS, but expect slower I/O.

  5. Platform Versions: Fargate has platform versions (1.4.0, 1.3.0, etc.). AWS updates these automatically, which is usually fine, but occasionally breaks things. Always test in staging first.
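
On the ENI point (item 2), here's how I check headroom before it becomes an incident. The VPC ID is a placeholder, and the quota code is what I believe maps to "Network interfaces per Region," so verify it in the Service Quotas console before relying on it:

Bash
# Count ENIs currently allocated in a (placeholder) VPC
aws ec2 describe-network-interfaces \
  --filters Name=vpc-id,Values=vpc-0123456789abcdef0 \
  --query 'length(NetworkInterfaces)'

# Compare against the per-Region ENI quota
aws service-quotas get-service-quota \
  --service-code vpc \
  --quota-code L-DF5E4CA3 \
  --query 'Quota.Value'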

A Real Production Setup#

Here's a pattern that's served us well in production:


The key insights from running this in production:

  1. Use ALB for load balancing - It integrates seamlessly with Fargate's IP-based targets
  2. Put Fargate tasks in private subnets - Use NAT gateways for outbound internet
  3. Use Parameter Store or Secrets Manager - Don't bake secrets into images (see the sketch after this list)
  4. Set up proper logging - CloudWatch Logs is fine to start, but consider Datadog or similar for production
  5. Monitor ENI allocation - It's the resource you'll run out of first
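
On the secrets point (item 3), the pattern is: store the value once, then have ECS inject it at launch via the task definition's secrets block. A minimal sketch with placeholder names and a placeholder account ID:

Bash
# Store the secret in Parameter Store (placeholder name/value)
aws ssm put-parameter \
  --name /my-app/db-password \
  --type SecureString \
  --value 'not-a-real-password'

# Then reference it in the task definition instead of hardcoding it:
#   "secrets": [{
#     "name": "DB_PASSWORD",
#     "valueFrom": "arn:aws:ssm:us-east-1:123456789012:parameter/my-app/db-password"
#   }]
# (the task execution role needs ssm:GetParameters on that path)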

My Take#

Fargate isn't a silver bullet, and it won't be the right choice for every situation. But for teams that want to run containers without diving deep into infrastructure management, it can be a reasonable option. The cost premium compared to EC2 is real, but the operational simplicity might be worth it.

My general approach: start with Fargate for new containerized workloads. If AWS costs become a significant concern, that's usually a good problem to have—it means you have enough scale to justify the engineering investment in more complex infrastructure optimization.

Hopefully your containers stay stateless and your deployments stay smooth.

AWS Fargate Deep Dive Series

Complete guide to AWS Fargate from basics to production. Learn serverless containers, cost optimization, debugging techniques, and Infrastructure-as-Code deployment patterns through real-world experience.
