Primary Data Center in the Cloud: Five Things That Decide If It Fits
Moving your primary data center to the cloud is a bigger decision than moving workloads. Five factors decide whether it's the right call — or an expensive lesson.

A primary data center in the cloud is not the same decision as "we'll put some workloads in Azure." It is a commitment to run the core of your business — the systems that define what happens when an employee logs in or a customer places an order — on infrastructure you do not own, in a building you have never visited, operated by people who do not know your name. For the right organization, this is a smart trade. For the wrong organization, it is an expensive lesson that takes three years and a budget variance to learn.
I have been on the advisory side of this decision many times. The customers who end up happy and the customers who end up unhappy usually look the same on paper. The difference is whether they honestly assessed five factors before they committed. These five factors are not theoretical. They are the ones that determine whether your cloud primary site will feel like a strategic upgrade or a strategic mistake.
One: Your Workload Shape
Not all workloads belong in a public cloud primary site, and pretending otherwise is how budgets get broken. Cloud compute is priced for elasticity. If your workload is elastic — it scales up during business hours, scales down at night, bursts hard on specific events, or needs to spin up experiments on demand — cloud is the right home, and the modest premium you pay for that elasticity is worth every dollar.
If your workload is flat — it runs 24/7 at roughly the same utilization, it doesn't scale up or down meaningfully, and the shape of demand next year will look almost identical to the shape of demand this year — cloud is structurally the wrong home from a cost perspective. You will pay three to five times more than you would in a private cloud or a well-managed colo for the exact same work, and you will pay it every month for as long as the workload exists.
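The cost gap for flat workloads is easy to sanity-check with back-of-envelope arithmetic. The sketch below uses purely hypothetical rates — substitute your own cloud quote and amortized colo numbers — but the shape of the math is the point: a 24/7 workload pays the on-demand meter for every one of the month's roughly 730 hours.

```python
# Back-of-envelope cost comparison for a flat, always-on fleet.
# All rates are hypothetical placeholders -- plug in your own quotes.

HOURS_PER_MONTH = 730  # average hours in a month

cloud_rate_per_vm_hour = 0.20   # assumed on-demand rate per VM-hour
colo_monthly_per_vm = 45.00     # assumed amortized colo/private-cloud cost per VM

vm_count = 100

# A flat workload never scales down, so it pays for every hour.
cloud_monthly = vm_count * cloud_rate_per_vm_hour * HOURS_PER_MONTH
colo_monthly = vm_count * colo_monthly_per_vm

print(f"cloud: ${cloud_monthly:,.0f}/month")
print(f"colo:  ${colo_monthly:,.0f}/month")
print(f"ratio: {cloud_monthly / colo_monthly:.1f}x")
```

With these illustrative numbers the ratio lands in the 3x range; real quotes vary, but flat workloads reliably land on the wrong side of the comparison.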
The right answer for most organizations is a split. Put the elastic workloads in public cloud. Put the steady-state workloads on a private cloud in a colocation facility or a managed provider's environment. Design the two sides to look operationally consistent so your team doesn't have to context-switch. The customers who try to put everything in one bucket — all cloud or all on-prem — usually end up re-doing the work within eighteen months when the cost picture becomes unavoidable.
Before you commit to a cloud primary, spend a week honestly profiling your workloads. What percentage of your compute is elastic? What percentage is flat? If flat is more than sixty percent, you are probably looking at the wrong target architecture.
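That one-week profiling exercise can be mechanical rather than impressionistic. The sketch below is one simple way to do it, assuming you can export hourly utilization samples per workload: treat anything whose utilization barely varies (low coefficient of variation) as flat, weight by vCPU count, and compare the flat share against the sixty percent threshold. The workload names, samples, and the 0.25 cutoff are all hypothetical.

```python
# Sketch of the workload-profiling exercise: classify each workload as
# "flat" or "elastic" from hourly utilization samples, then compute the
# flat share of total compute. All data here is hypothetical.

from statistics import mean, pstdev

def classify(samples, cv_threshold=0.25):
    """Low coefficient of variation => flat; burstier => elastic.
    The 0.25 cutoff is an assumption -- tune it to your environment."""
    avg = mean(samples)
    if avg == 0:
        return "flat"
    return "flat" if pstdev(samples) / avg < cv_threshold else "elastic"

# (name, vCPUs, hourly CPU-utilization % samples) -- hypothetical
workloads = [
    ("erp-db",    64, [70, 72, 71, 69, 70, 71]),   # steady 24/7
    ("batch-etl", 32, [5, 5, 90, 95, 5, 5]),       # nightly burst
    ("web-tier",  16, [20, 60, 80, 75, 30, 10]),   # business hours
]

flat_vcpus = sum(v for _, v, s in workloads if classify(s) == "flat")
total_vcpus = sum(v for _, v, _ in workloads)
flat_pct = 100 * flat_vcpus / total_vcpus

print(f"flat share of compute: {flat_pct:.0f}%")
if flat_pct > 60:
    print("a cloud-only primary is probably the wrong target architecture")
```

In practice you would feed this a month of monitoring exports rather than six samples, but even a crude version of this classification forces the elastic-versus-flat conversation onto real numbers.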
Two: Your Network Path to Users
A cloud data center is only as close to your users as the network between them. If your users are concentrated in a few metro areas with reliable fiber, this is a non-problem. If your users are distributed across rural sites, mobile workers in vehicles, or international offices with flaky links, the network becomes the determining factor for whether your cloud primary site feels fast or feels miserable.
The thing to measure is not average latency — it is tail latency. A user workflow that averages forty milliseconds but spikes to four hundred milliseconds twice an hour will feel broken even though the average is fine. Cloud primary sites magnify this because everything the user does has to round-trip to the cloud, and any hop in the middle that gets congested becomes a visible slowdown.
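A synthetic trace makes the average-versus-tail distinction concrete. The numbers below are illustrative, not from a real deployment, and the nearest-rank percentile function is just a dependency-free sketch: with 98 round-trips at forty milliseconds and two congestion spikes at four hundred, the mean stays under fifty milliseconds while the p99 sits at the spike.

```python
# Why average latency hides the problem: a synthetic trace where the
# mean looks fine but the tail is what users actually feel.

def percentile(samples, p):
    """Nearest-rank percentile (simple, dependency-free sketch)."""
    ordered = sorted(samples)
    k = max(0, int(round(p / 100 * len(ordered))) - 1)
    return ordered[k]

# 100 round-trips: 98 at ~40 ms, 2 congestion spikes at 400 ms
latencies_ms = [40] * 98 + [400] * 2

avg = sum(latencies_ms) / len(latencies_ms)
p50 = percentile(latencies_ms, 50)
p99 = percentile(latencies_ms, 99)

print(f"mean: {avg:.1f} ms, p50: {p50} ms, p99: {p99} ms")
```

This is why the measurement plan for a cloud primary should capture percentiles per user site, not a single averaged latency figure.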
The fix, when it is possible, is to provision dedicated network circuits from your user sites to the cloud provider. The major hyperscalers all offer dedicated-circuit products — AWS Direct Connect, Azure ExpressRoute, Google Cloud Interconnect — and for organizations with real traffic volumes, the cost of a dedicated circuit is worth it for the consistency. But these circuits do not reach every location, and the monthly cost of enough of them to cover a distributed workforce can be significant. If the math doesn't work, you need to either place the primary closer to the users or choose a hybrid model where user-facing services live in a location with better network proximity and backend services live in the cloud.
There is also the question of what happens when the network between a site and the cloud fails. If your primary site is in the cloud, a network outage at a user site does not just cut off email — it cuts off every application the site uses. You need a plan for partial outages, not just full ones, and for most distributed organizations, that plan drives some amount of local infrastructure to remain on-prem.
Three: Your Regulatory and Data Residency Constraints
This factor either kills the conversation or doesn't, and it is worth finding out early. Certain industries and jurisdictions require data to stay within specific geographic, legal, or operational boundaries. Healthcare has HIPAA. Public sector has FedRAMP, CJIS, and state-specific rules. Education has FERPA. Finance has PCI, SOX, and state banking rules. International operations add GDPR and various country-specific residency laws.
The cloud providers have built compliance-aligned regions for most of these frameworks, and the certifications are real. But "there is a compliant region" is not the same as "your implementation is compliant." Your configuration has to use the right region, the right storage classes, the right encryption keys, the right identity model, and the right audit logging. Getting any of that wrong means you have a compliance gap that the certification doesn't cover.
The practical version: if you are in a regulated industry, include your compliance officer and your auditor in the cloud primary conversation before you commit to architecture. Not during, not after — before. The architecture decisions that determine whether you can pass audit are made in the first weeks, and backing them out later is expensive. I have seen customers spend six figures on a migration only to discover that a specific data flow was not allowed and the whole design had to be revisited.
If your regulatory environment is constraining enough, the right answer may be a private cloud in a certified facility rather than a hyperscaler. This isn't a loss — it's a better fit for the constraint. The hyperscalers are good at most things, but they are not the only legitimate cloud model, and for some customers they are not the best one.
Four: Your Team's Operational Muscle
Running a primary data center in the cloud requires a different operational muscle than running one on-prem. The skills overlap — both require solid fundamentals in networking, storage, identity, and monitoring — but the tools, the failure modes, and the day-to-day rhythms are different. A team that is good at VMware may not be immediately good at Kubernetes or at cloud-native networking, and vice versa.
Before you commit to a cloud primary, assess your team honestly. Do they know Terraform? Do they know the identity model of your target cloud well enough to design around it? Do they know how to debug a production problem when the tool they need is a cloud console page they have never opened? If the answer is no, you have two options. You can train them, which takes time and patience and doesn't finish in one quarter. Or you can hire a managed provider to handle the cloud operations until the team is ready. Either path is fine. What doesn't work is pretending the skill gap isn't there and letting the team stumble through a production migration on untested skills.
The related question is whether your operational model itself is cloud-ready. Do you use infrastructure-as-code? Do you have a CI pipeline for infrastructure changes? Are your runbooks precise enough that a new engineer could follow them without verbal coaching? If not, a cloud primary will force you to either upgrade those practices fast or suffer without them. Most customers are better off upgrading the practices first, on a smaller scope, and then doing the migration once the operational muscle is in place.
Five: Your Exit Story
The last factor is the one almost nobody wants to talk about at the start of a cloud primary project: how do you leave, if you ever need to?
Cloud providers do not intentionally lock customers in, but the architectural choices you make during a migration often create soft lock-in by accident. You used a proprietary database service instead of a portable one. You built on a provider-specific serverless platform. Your identity model is tightly coupled to the provider's directory. Your data egress costs are high enough that moving elsewhere would be expensive. Individually, each of these is a reasonable choice. Collectively, they can add up to a situation where switching providers is a multi-year project even though technically nothing stops you.
The question to ask at the start of the migration is: if we had to move to a different provider in three years, what would it take? Write the answer down. Review it quarterly as the architecture evolves. Keep the answer from getting worse. You are not trying to build an environment that could migrate overnight — that's not realistic. You are trying to make sure that the exit cost stays manageable, so that "our provider raised prices" doesn't turn into "we are stuck."
The customers who think about this up front make slightly different architecture choices than the ones who don't. They tend to use standard databases over proprietary ones, standard container platforms over serverless, and their identity model stays portable. Those choices cost a small amount of productivity up front and save a large amount of optionality down the road.
The Bottom Line
A primary data center in the cloud is the right answer when your workloads are elastic, your users are well-connected, your regulatory constraints are satisfied, your team has the operational muscle, and you've thought through the exit story. Those five conditions are not a high bar for the right kind of organization, but they are a bar, and skipping any of them is how cloud primary projects become cautionary tales.
My honest recommendation for most mid-market customers is a hybrid design with a clear line between what belongs in the cloud and what doesn't. You get the cloud benefits where they pay off, you keep the on-prem benefits where they pay off, and you avoid the failure modes on both sides. It is not the most exciting architecture to put on a slide, but it is the one that tends to work.