Let me get this out of the way: serverless computing is not the silver bullet the cloud vendors promised, and it is not the dumpster fire the angry Medium posts claim. It is a deployment model with genuine strengths and real limitations, and pretending otherwise wastes everyone's time.
If you clicked on this article hoping for validation after a frustrating day battling cold starts, congratulations - you are in the right place. If you are here to defend serverless as the future of all computing, buckle up. We are going to talk about where Functions as a Service (FaaS) truly shines and where it falls flat on its face.
The Complaints Are Real (And That's Okay)
Before we dive into nuance, let's acknowledge the elephant in the Lambda function: the criticisms of serverless are not just complaining from developers resistant to change. These are legitimate technical challenges that cause real production headaches.
Cold starts are genuinely frustrating. That 500ms to 3-second delay while your function spins up can torpedo user experience in latency-sensitive applications. Yes, there are workarounds (provisioned concurrency, keeping functions warm), but those workarounds add complexity and cost - the very things serverless was supposed to eliminate.
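One of those warm-keeping workarounds looks roughly like this: a scheduled event pings the function every few minutes so at least one container stays warm. The "warmer" flag below is a convention you define yourself, not an AWS field; provisioned concurrency is the managed (and pricier) alternative.

```python
# Cold-start mitigation via a scheduled "warming" ping.
# The ping event carries a flag so the handler can skip real work.

def handler(event, context=None):
    if event.get("warmer"):
        return {"warmed": True}  # scheduled ping: keep the container alive
    return do_real_work(event)

def do_real_work(event):
    # Placeholder for actual business logic.
    return {"processed": event.get("order_id")}
```

Note the cost of this trick: you are now paying for invocations whose only job is to defeat the platform's scale-to-zero behavior.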
Debugging distributed functions is painful. When your monolith breaks, you have a stack trace. When your serverless architecture breaks, you have seventeen CloudWatch log groups, three Lambda functions pointing fingers at each other, and a distributed tracing dashboard that costs more than your coffee budget.
Vendor lock-in is not imaginary. Sure, you can theoretically move your AWS Lambda functions to Azure Functions or Google Cloud Functions, but anyone who has attempted this knows it is about as straightforward as translating Shakespeare into emoji. Your entire application becomes tightly coupled to the provider's event model, IAM structure, and ecosystem services.
Local development workflows are awkward. Emulating API Gateway + Lambda + DynamoDB + S3 on your laptop using frameworks like SAM or Serverless Framework is possible, but it feels like building a scale model of a skyscraper out of toothpicks - technically feasible, frustrating in practice.
Costs can surprise you at scale. The "$0.20 per million requests" marketing is true, until you factor in data transfer costs, API Gateway charges, CloudWatch logs, NAT gateway fees for VPC access, and suddenly your "cheap serverless app" costs more than three EC2 instances would have.
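A back-of-envelope model makes the point. The prices below are illustrative approximations (check current AWS pricing), but the shape of the result holds: the per-request Lambda charge is often the smallest line item.

```python
# Rough monthly cost sketch for a hypothetical serverless API.
# All prices are illustrative approximations, not a quote.

def lambda_cost(requests, avg_ms, memory_gb):
    """Request charge plus duration charge (billed in GB-seconds)."""
    request_charge = requests / 1_000_000 * 0.20           # ~$0.20 per 1M requests
    gb_seconds = requests * (avg_ms / 1000) * memory_gb
    duration_charge = gb_seconds * 0.0000166667            # ~$ per GB-second
    return request_charge + duration_charge

def total_monthly_cost(requests, avg_ms=200, memory_gb=0.5,
                       egress_gb=50, nat_hours=730, nat_gb=50):
    cost = lambda_cost(requests, avg_ms, memory_gb)
    cost += requests / 1_000_000 * 1.00        # HTTP API Gateway, ~$1 per 1M
    cost += egress_gb * 0.09                   # data transfer out, ~$0.09/GB
    cost += nat_hours * 0.045 + nat_gb * 0.045 # NAT gateway: hourly + per-GB
    return round(cost, 2)
```

At 30 million requests a month, the "$0.20 per million" headline suggests about $6 - the surrounding charges push the sketch past $125.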
These are not hypothetical problems. They are the reason developers register domain names like serverlesssucks.com.

The Hype Was Overblown
Remember 2016? Serverless was going to eliminate operations teams, make infrastructure invisible, and let developers focus purely on business logic. The promise was intoxicating: write a function, deploy it, and never think about servers again.
The reality was messier. Serverless did not eliminate operations concerns - it changed them. Instead of managing servers, you manage functions, triggers, permissions, logs, observability, rate limits, and concurrency settings. The complexity did not vanish; it relocated.
Serverless was marketed as 'no ops,' but what it really delivered was 'different ops.' You still need to understand distributed systems, monitor performance, and design for failure. The only difference is you are paying AWS to manage the underlying infrastructure instead of doing it yourself.
Fred Lackey, Cloud-Native Architect
This is not a criticism of serverless as a technology - it is a criticism of how it was sold. When the marketing team promises magic and engineering delivers "pretty good with caveats," disillusionment is inevitable.
Where Serverless Actually Sucks
Let's be specific. Serverless is a poor fit for certain workloads, and forcing it into those scenarios creates unnecessary pain.
Long-Running Processes
Lambda has a 15-minute execution limit. If your job takes longer than that, you need to either redesign it into smaller chunks (adding complexity) or use something else. Batch processing, video encoding, large data exports - these are not natural fits for FaaS.
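The "redesign it into smaller chunks" option usually means fan-out: split the work into batches that each fit comfortably inside the limit and dispatch one invocation per batch. A provider-agnostic sketch, where `dispatch` stands in for whatever mechanism you use (an SQS message, a Step Functions map state, a direct invoke):

```python
# Splitting a long-running job into per-chunk invocations.
# `dispatch` is a placeholder for your fan-out mechanism.

def chunk(items, size):
    """Yield fixed-size slices of `items`."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

def fan_out(record_ids, chunk_size, dispatch):
    """Enqueue one invocation per chunk; return the number of jobs sent."""
    jobs = 0
    for batch in chunk(record_ids, chunk_size):
        dispatch({"record_ids": batch})  # e.g. sqs.send_message(...)
        jobs += 1
    return jobs
```

Simple enough on paper - but you have just inherited partial-failure handling, retries, and result aggregation, which is exactly the added complexity the text above warns about.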
Stateful Applications
Serverless functions are ephemeral and stateless by design. If your application needs to maintain in-memory state (like WebSocket connections, long-lived sessions, or caching), you are fighting the platform. Yes, you can store state externally in DynamoDB or ElastiCache, but now you are adding latency and cost to work around the model.
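The closest thing to in-memory state you get is module scope, which survives only for the lifetime of one warm container. A sketch of that pattern - useful as an optimization, never as a source of truth, since two concurrent requests may land on two containers with two separate caches:

```python
# Module-scope caching: persists across warm invocations of *one*
# container only. The external store remains the source of truth.

_cache = {}

def handler(event, context=None):
    key = event["session_id"]
    if key not in _cache:
        _cache[key] = load_session(key)  # cold path: hit the external store
    return _cache[key]

def load_session(key):
    # Stand-in for a DynamoDB/ElastiCache read; the real call adds the
    # network latency the text above warns about.
    return {"session_id": key}
```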
Latency-Sensitive Workloads
If your application requires consistent sub-100ms response times, cold starts will ruin your day. Provisioned concurrency helps, but it undermines the cost benefits that made serverless attractive in the first place. High-frequency trading platforms, real-time gaming backends, and interactive UIs with tight performance requirements - these should probably avoid FaaS.
Complex Local Development Needs
If your team culture values fast local iteration, serverless adds friction. Waiting for deployments (even with fast incremental-deploy tooling) or running incomplete local emulations slows down the feedback loop. Developers accustomed to changing code, refreshing a browser, and seeing results instantly will find serverless workflows clunky.
Teams Without Cloud-Native Experience
This is not gatekeeping - it is a pragmatic observation. Serverless architectures require understanding distributed systems, asynchronous processing, eventual consistency, and cloud-specific patterns (IAM roles, resource policies, event-driven triggers). Teams new to these concepts will struggle. A traditional monolith on a VPS might be a better starting point.
Where Serverless Actually Shines
Now for the good news: when serverless fits your use case, it is genuinely excellent.
Event-Driven Processing
This is serverless's sweet spot. You have an S3 bucket where users upload images? Trigger a Lambda to resize them. A new record lands in your DynamoDB table? Fire off a function to send a notification. Webhook from a third-party API? Process it with a function. The event-driven model is native to FaaS, and the scaling is automatic.
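The image-resize example above takes very little code. The event layout (Records → s3 → bucket/object) is what S3 delivers to Lambda; `resize_and_store` is a placeholder for your actual image logic.

```python
# Minimal shape of an S3-triggered thumbnail handler.
import urllib.parse

def handler(event, context=None):
    results = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # S3 delivers object keys URL-encoded (spaces arrive as '+').
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        results.append(resize_and_store(bucket, key))
    return results

def resize_and_store(bucket, key):
    # Placeholder: download the object, resize it, upload a thumbnail.
    return f"thumbnails/{key}"
```

No polling loop, no queue consumer to operate: the platform invokes the function once per upload and scales the concurrency for you.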
Variable and Unpredictable Workloads
If your traffic spikes unpredictably (say, a marketing campaign drives a sudden surge, or your app gets featured on Product Hunt), serverless scales automatically without you lifting a finger. You do not pay for idle capacity, and you do not scramble to provision more servers during the spike.
Background Tasks and Webhooks
Need to send emails, process payments, update third-party services, or kick off workflows in response to user actions? Serverless functions handle these asynchronous tasks elegantly. They execute when needed, scale as demand requires, and disappear when they are done.
MVPs and Rapid Prototyping
Serverless is phenomenal for getting an idea to market quickly. You skip server provisioning, skip auto-scaling configuration, and focus on writing the core logic. For startups and side projects, this speed-to-market advantage is massive.
Glue Code Between Services
Modern architectures often involve multiple SaaS tools (Stripe for payments, SendGrid for email, Auth0 for authentication, Twilio for SMS). The connective tissue between these services - transforming data, handling webhooks, synchronizing state - is perfect for small, focused Lambda functions.
The right tool for connecting systems. I do not build entire applications in Lambda, but I use Lambda extensively to tie pieces together. It excels at doing one thing well - reacting to events and executing small units of logic.
Fred Lackey, AWS GovCloud Pioneer
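The glue described above tends to look like this in practice: receive a webhook, transform the payload, hand it to the next service. The field names below are illustrative, not any provider's actual schema.

```python
# Glue-code sketch: transform a (hypothetical) payment webhook and
# pass it to a downstream notifier supplied by the caller.
import json

def handler(event, notify, context=None):
    payload = json.loads(event["body"])
    if payload.get("type") != "payment.succeeded":
        return {"statusCode": 204}  # event we don't care about
    notify({
        "email": payload["customer_email"],
        "amount_usd": payload["amount_cents"] / 100,
    })
    return {"statusCode": 200}
```

Twenty lines, one responsibility, no server to babysit between webhook deliveries - this is the shape of workload where FaaS earns its keep.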
The Mature Take: Serverless is a Tool, Not a Religion
Here is the perspective that will save you from both blind adoption and outright rejection: serverless is a deployment model, not a philosophy.
You do not need to choose between "all serverless" or "no serverless." Modern architectures are hybrid. You might run your core API on containers (ECS, Kubernetes) for consistent performance and complex state management, while using Lambda functions for background jobs, webhooks, and event processing. This is not compromise - it is good engineering.
Ask these questions when evaluating serverless for a use case:
- Is the workload event-driven? If yes, serverless is likely a good fit.
- Does it require consistent low latency? If yes, serverless might cause problems.
- Will it run for more than a few minutes? If yes, consider alternatives.
- Is cost predictability critical? If yes, carefully model serverless costs before committing.
- Does your team understand distributed systems? If no, invest in training or start simpler.
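The checklist above can even be encoded as a rough heuristic. The weights and threshold here are arbitrary - treat it as a conversation starter, not an architecture decision engine.

```python
# The evaluation checklist as a toy scoring function.
# Weights are arbitrary and illustrative.

def evaluate_fit(workload):
    score = 0
    if workload.get("event_driven"):
        score += 2
    if workload.get("needs_consistent_low_latency"):
        score -= 2
    if workload.get("runs_over_15_minutes"):
        score -= 3
    if workload.get("cost_predictability_critical"):
        score -= 1
    if not workload.get("team_knows_distributed_systems", True):
        score -= 1
    return "good fit" if score > 0 else "look elsewhere"
```

A webhook processor (`{"event_driven": True}`) scores as a good fit; a video-encoding job that runs past the execution limit does not, no matter how event-driven its trigger is.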
The developers who succeed with serverless are not the zealots who force every workload into Lambda. They are the pragmatists who recognize when FaaS solves their problem elegantly and when it introduces unnecessary complexity.
The Bottom Line
Serverless does not suck universally. It sucks when misapplied. It shines when used appropriately.
If you have been burned by serverless, the problem was likely not the technology itself - it was the mismatch between your use case and the serverless model. Cold starts are real. Debugging is harder. Costs can surprise you. These limitations are not going away.
But if you have variable workloads, event-driven architectures, or a need to move fast without managing infrastructure, serverless delivers tremendous value.
The healthiest approach is skepticism tempered with curiosity. Evaluate each use case independently. Build prototypes. Measure performance and costs. And stop treating architectural decisions like religious choices.
Serverless is not the future of all computing. It is not a failed experiment. It is a mature tool that works brilliantly in some contexts and poorly in others.
Choose wisely. Build intentionally. And maybe, just maybe, serverless will stop sucking for you.