When Does Cloud Computing Make Sense? Six Honest Scenarios

Cloud computing is not universally the right answer. Here are six scenarios where it genuinely is, and a few where it isn't, from someone who will happily sell you either.

John Lane 2024-09-26 7 min read

The honest answer to "should we be in the cloud?" is "it depends on what you're running and why." That answer does not sell seats at conferences, so most of the industry pretends cloud is universally correct. It is not. It is an excellent answer to some problems and a bad answer to others, and knowing the difference will save you both money and pain.

Here are the six scenarios where cloud computing is clearly the right call, based on what we have actually seen work. Private cloud and on-prem scenarios get their own article — this one is about public cloud specifically, because that is where most of the "should we move?" questions live.

Scenario 1: Your workload has genuinely unpredictable demand

This is the scenario the cloud was invented for, and it is still the strongest case. If your traffic can go from 100 requests per second to 10,000 requests per second in an hour, and you cannot predict when, cloud is the right answer. Autoscaling groups, managed Kubernetes, and serverless platforms all handle this well, and the economics work because you are only paying for the spike while it is happening.
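The economics here are easy to sanity-check. A minimal sketch, using made-up prices and spike durations (none of these numbers are any provider's actual rates), comparing "provision for the peak all month" against "autoscale only while the spike lasts":

```python
# Illustrative comparison: provisioning for peak 24/7 vs. autoscaling during spikes.
# All prices, instance counts, and spike durations are assumptions for the sketch.

HOURS_PER_MONTH = 730
PRICE_PER_INSTANCE_HOUR = 0.10   # assumed on-demand price
BASELINE_INSTANCES = 10          # handles the ~100 req/s baseline
PEAK_INSTANCES = 1000            # handles the ~10,000 req/s spike
SPIKE_HOURS_PER_MONTH = 10       # unpredictable spikes, ~10 hours/month total

# Option A: keep peak capacity running all month.
provisioned_for_peak = PEAK_INSTANCES * HOURS_PER_MONTH * PRICE_PER_INSTANCE_HOUR

# Option B: run the baseline all month, add capacity only during the spikes.
autoscaled = (BASELINE_INSTANCES * HOURS_PER_MONTH
              + (PEAK_INSTANCES - BASELINE_INSTANCES) * SPIKE_HOURS_PER_MONTH
              ) * PRICE_PER_INSTANCE_HOUR

print(f"provisioned for peak: ${provisioned_for_peak:,.0f}/month")
print(f"autoscaled:           ${autoscaled:,.0f}/month")
```

With these assumed numbers the gap is more than an order of magnitude, which is why short, unpredictable spikes are the strongest case for elasticity. The narrower the gap between your baseline and your peak, or the longer the peak lasts, the weaker the case becomes.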

The key word is "genuinely." Many workloads that feel unpredictable are actually quite regular once you look at the data — retail traffic peaks in November, tax season peaks in March, school enrollment peaks in August. Predictable peaks are not the same as unpredictable peaks. Predictable peaks can be served by capacity you provision in advance, which is often cheaper. The workloads where cloud elasticity is truly valuable are the ones where you cannot see the spike coming. Viral events, incident response, emergency broadcasting, backup-as-a-service during a mass event. If your business is one of those, cloud is clearly right.

Scenario 2: You are building something new and do not know if it will work

Early-stage products belong in the cloud. You do not know how much compute you will need because you do not know how many users you will have. You do not know where to place capacity because you do not know where your users will be. You do not know what services to buy because you do not know what your architecture will look like in six months. The last thing you want to do in that state is order hardware.

Cloud gives you the ability to spin up whatever you need in minutes, try three different architectures in a weekend, and tear it all down when you decide you were wrong. The dollar cost per hour is higher than the equivalent bare metal, but the cost of being wrong is much lower, and in early-stage work being wrong is the thing you are optimizing against.

The corollary: once the product stabilizes and the traffic is predictable and the architecture is settled, the reason to be in the cloud weakens. Many mature products migrate steady-state workloads to a cheaper substrate eventually, and keep cloud for the parts that still need elasticity.

Scenario 3: You need a global footprint and you cannot build one

If you need to serve users in twelve countries with low latency, you need presence in twelve regions. Building that presence yourself — the contracts, the hardware, the network, the staff — is a multi-year project. AWS, Azure, and GCP have already done it. You can have a deployment in twelve regions by the end of the afternoon.

This is one of the clearest wins for cloud, and it does not apply to most organizations. If you serve a single metro area or a single country, you do not need twelve regions, and the global footprint is not a reason to be in the cloud. But if your product is genuinely global and your users are geographically distributed and latency is a real product quality issue, building it yourself is a bad use of your engineering time. Rent the footprint.

Scenario 4: You need a service that only exists as managed

Some services are very hard to run well yourself. Managed databases with automatic failover and point-in-time recovery. Managed message queues with guaranteed delivery. Managed ML training platforms with automatic GPU scheduling. Managed identity systems with enterprise federation. Not because the underlying technology is secret — it is all open source — but because operating it at production quality requires a level of expertise that most organizations cannot staff.

For these services, the honest calculation is not "cloud vs. on-prem." It is "managed cloud service vs. hiring a specialist to run the equivalent ourselves." Once you count the fully-loaded cost of the specialist, the managed service is often cheaper, and it is always easier to get started. This is especially true for anything involving machine learning infrastructure, where the operational expertise is genuinely scarce and the pace of change makes it hard to keep current.

Scenario 5: You need regulatory or geographic diversity for disaster recovery

Real disaster recovery means your backups and failover capacity are in a different failure domain from your production environment. Same rack, same building, same metro area — none of those are far enough. Real DR means different region, ideally different provider, different physical infrastructure. Building that out yourself is expensive, because you need to own or lease facilities in multiple places.

Cloud makes this trivial. You can put your primary in one region, your replica in another region, and your cold backup in a third region in a different provider entirely. Total cost: a few cents per gigabyte per month, plus whatever compute you need for the warm standby. This is cheaper, simpler, and more reliable than any DR solution you can build from scratch, and it is the scenario where we most often recommend cloud to customers who are otherwise running on-prem.
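"A few cents per gigabyte per month" is worth making concrete. A back-of-the-envelope sketch, using assumed prices (not any provider's published rates) for a cold backup plus a small warm standby:

```python
# Back-of-the-envelope monthly cost for cross-region cold backup plus warm standby.
# All prices here are illustrative assumptions, not quotes from any provider.

backup_tb = 50                      # size of the cold backup
cold_storage_per_gb_month = 0.004   # assumed cold-tier object storage price
warm_standby_per_month = 300.0      # assumed small warm-standby compute bill

storage = backup_tb * 1000 * cold_storage_per_gb_month
total = storage + warm_standby_per_month
print(f"cold backup storage: ${storage:,.0f}/month")
print(f"DR total:            ${total:,.0f}/month")
```

Even at these rough numbers, the monthly bill is a rounding error next to leasing and staffing a second facility, which is the point of the scenario.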

Even customers we run on a managed private cloud typically use a hyperscaler for backup and DR. The economics of "cold object storage in a distant region" are hard to beat, and the ransomware resistance of immutable cloud buckets is genuinely valuable. This is a hybrid pattern that almost always pays off.

Scenario 6: You are running a workload that the cloud provider is also running at scale

Some things that used to be hard are now commodity because the cloud providers spent years building them. Object storage, serverless functions, content delivery networks, managed Kubernetes, managed PostgreSQL with HA, managed Redis. When you use one of these, you are renting a small slice of infrastructure that the provider has already amortized across millions of other customers. The marginal cost to them is low, the price to you is reasonable, and the quality is higher than what you could build yourself.

The rule of thumb is: if the cloud provider has been running this same service for five years, you probably should not build your own. If the service is new, experimental, or proprietary to one provider, be cautious — you are taking on a dependency and the quality is not yet proven.

The scenarios where it doesn't make sense

In the interest of honesty, here are the workloads that do not belong in the cloud, or at least not entirely.

Predictable steady-state compute at scale. If you are running a hundred servers at 70 percent utilization 24 hours a day, cloud is expensive. A managed private cloud or colocation will cut your cost by a factor of three or more for the same workload. This is where hybrid patterns come from — cloud for the variable part, private for the steady part.
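The "factor of three" claim can be sketched with simple arithmetic. Both prices below are assumptions for illustration; substitute your own quotes:

```python
# Rough steady-state comparison for 100 always-on servers.
# Both per-unit prices are illustrative assumptions, not real quotes.

HOURS_PER_MONTH = 730
servers = 100

cloud_per_server_hour = 0.40        # assumed on-demand price for a mid-size VM
cloud_monthly = servers * HOURS_PER_MONTH * cloud_per_server_hour

private_per_server_month = 95.0     # assumed amortized cost: hardware, power, space, support
private_monthly = servers * private_per_server_month

print(f"cloud:   ${cloud_monthly:,.0f}/month")
print(f"private: ${private_monthly:,.0f}/month")
print(f"ratio:   {cloud_monthly / private_monthly:.1f}x")
```

The exact ratio depends entirely on the prices you plug in, but the shape of the result is stable: capacity you use around the clock is the capacity you should own or lease, not rent by the hour.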

Large storage that you rarely egress. Cloud storage is cheap to put in and expensive to take out. If you are storing petabytes and occasionally need to move large chunks of it elsewhere, the egress bill will be shocking. A private filer or object store is cheaper for steady state and removes the egress tax.
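The egress tax is easy to underestimate until you do the multiplication. A one-line sketch with an assumed egress price (real rates vary by provider, tier, and destination):

```python
# The egress tax, illustrated. The per-GB price is an assumption, not a quote.

egress_per_gb = 0.08        # assumed cloud egress price
petabytes_moved = 1
gb = petabytes_moved * 1_000_000

egress_bill = gb * egress_per_gb
print(f"moving {petabytes_moved} PB out: ${egress_bill:,.0f}")
```

Storing that petabyte might cost a few thousand dollars a month; moving it out once can cost an order of magnitude more. That asymmetry is what makes "cheap to put in, expensive to take out" a design constraint rather than a footnote.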

Workloads that cannot leave a specific physical location. Some regulated, air-gapped, or classified environments have hard requirements that the cloud cannot meet. These are rarer than vendors suggest, but they are real, and for those workloads cloud is not the right answer.

How to actually decide

The framework we use with customers is simpler than most: start with the business requirement, not the platform preference. What is the workload? What does it actually need in terms of elasticity, latency, compliance, durability, and predictability? Then match the workload to the infrastructure that gives you those properties at the lowest total cost.
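The matching step above can be sketched as a few coarse rules. The property names and the ordering of the rules are our own simplification for illustration, not a formal model or the complete framework:

```python
# A sketch of the "match workload to infrastructure" step.
# Property names and rules are a simplified illustration, not a formal model.

def recommend(workload: dict) -> str:
    """Suggest a substrate from a few coarse workload properties."""
    if workload.get("air_gapped"):
        return "on-prem"                  # hard locality requirement wins outright
    if workload.get("unpredictable_demand") or workload.get("global_users"):
        return "public cloud"             # elasticity or global footprint
    if workload.get("steady_state") and workload.get("large_scale"):
        return "private cloud / colo"     # predictable 24/7 compute at scale
    return "public cloud"                 # default for new, small, or unknown workloads

print(recommend({"steady_state": True, "large_scale": True}))
print(recommend({"unpredictable_demand": True}))
print(recommend({"air_gapped": True}))
```

The real conversation has more dimensions — compliance, latency, data gravity, team skills — but the order of operations is the useful part: hard constraints first, then elasticity and reach, then cost.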

Most customers end up with a mix. Cloud for the parts that need elasticity, new development, and global reach. Private cloud for the predictable steady-state workloads and the large data. On-prem for the odd specific thing that cannot live anywhere else. This is not the glamorous answer, but it is the one that actually works, and it is the one we keep arriving at after enough real projects to know what we are doing.

Talk with us about your infrastructure

Schedule a consultation with a solutions architect.
