In recent years, cloud adoption across media and entertainment has accelerated. The benefits were clear: scalable infrastructure, reduced reliance on physical hardware, and support for increasingly remote teams. For many, it looked like the right replacement for ageing on-premises systems, with built-in support for evolving workflows.

But today, as data volumes surge and workflows become more complex, the limitations of a cloud-only model are becoming harder to ignore. For some teams, cloud’s rising costs, performance issues, and operational friction are prompting a second look, one that includes data repatriation and hybrid storage strategies. Many find themselves locked into a cycle of rising cloud expenses with no obvious way out.
Why the public cloud isn’t always the right fit
Media workflows aren’t lightweight. Editing, colour grading, and finishing demand sustained performance, fast access, and seamless interaction with large files. Many of these workloads also depend on GPU acceleration and low latency, not just scalable compute.
Cloud infrastructure, by contrast, is typically optimised for short-term processing and smaller, burstier data transfers. When media files move constantly in and out of storage or need to be accessed repeatedly over time, costs and delays start to add up. Scrubbing through a timeline or relinking to files stored offsite can become frustratingly slow, making it harder to plan properly and stick to a budget. In these situations, the cloud services meant to enable the workflow can become its biggest obstacle.
Cloud storage costs may seem manageable upfront, but teams working from archives or revisiting content often face high retrieval fees. The further content moves downstream from production, the harder it becomes to meet deadlines without surprise costs.
What’s driving the shift to repatriation?
In response, many organisations are rethinking where their data actually lives. Repatriation—the process of moving data back from the public cloud to on-premises or private infrastructure—is gaining traction. This isn’t about abandoning the cloud entirely. It’s about putting content where it makes the most sense based on how it’s used. Sometimes in the cloud, sometimes on-premises, and often in flux between the two (what we call hybrid cloud).
For media companies working in terabytes, cloud costs add up quickly. Uploads, downloads, storage, and processing are all billed separately by many public cloud providers. The pay-as-you-go model can evolve into a cycle of compounding costs and recurring overages that are hard to avoid.
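To see how quickly those separately billed items compound, here’s a rough back-of-the-envelope sketch in Python. Every rate below is an illustrative assumption, not any provider’s actual pricing; substitute your own figures.

```python
# Back-of-the-envelope monthly cloud bill for a media archive.
# All rates below are illustrative assumptions, not real provider pricing.

STORAGE_PER_TB = 23.0     # $/TB-month, assumed standard-tier storage rate
EGRESS_PER_TB = 90.0      # $/TB, assumed download (egress) rate
RETRIEVAL_PER_TB = 10.0   # $/TB, assumed cold-tier retrieval fee

def monthly_bill(stored_tb: float, downloaded_tb: float, retrieved_tb: float) -> float:
    """Sum the separately billed line items: storage, egress, retrieval."""
    return (stored_tb * STORAGE_PER_TB
            + downloaded_tb * EGRESS_PER_TB
            + retrieved_tb * RETRIEVAL_PER_TB)

# A 500 TB archive where editors pull back 5% of content each month:
print(f"${monthly_bill(stored_tb=500, downloaded_tb=25, retrieved_tb=25):,.2f}")
# Storage alone is $11,500/month; access adds another $2,500 --
# and that access charge recurs every time content is revisited.
```

The recurring access charges are the trap: storage is predictable, but egress and retrieval are billed every time content is touched, while on-premises access costs little extra once the hardware is paid for.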
Control is another concern. Cloud environments come with trade-offs in ownership and oversight. If a provider changes policies or discontinues a service, teams are left managing the fallout. That’s a tough position for companies whose intellectual property is core to their business.
Security plays a role here, too. While cloud platforms offer robust protections, storing high-value media assets in a shared environment still carries risks. Data breaches, misconfigurations, and unauthorised access (whether accidental or malicious) remain real concerns. For teams handling pre-release content, sensitive IP, or client assets, having tighter control over security protocols is a compelling reason to bring data back in-house. Growing unease about cloud-hosted content being scraped to train AI models only sharpens that instinct.
There’s also the challenge of managing complexity. Building out redundancy plans, or running workflows across multiple cloud platforms, often creates new operational overhead, and not every team has the expertise to manage it effectively. Many cloud technologies are not standardised, so keeping track of which tools work with a given platform demands a depth of knowledge that many teams simply don’t have.
What workflows make sense in the cloud
Let’s be really clear. Cloud is still a powerful tool and often a good match for workflows that benefit from scale or geographic reach, like rendering, transcoding, or remote content review. Teams working across time zones or locations can also benefit from cloud-based collaboration tools that keep projects moving.
Cloud cold storage can serve long-term archive or compliance needs, especially when access is infrequent. But for “living” archives where content may need to be retrieved quickly, long wait times and high fees can outweigh the benefits.
Tasks requiring low latency, real-time responsiveness, or GPU power, like editing or live ingest, still tend to be cheaper to operate and deliver more consistently high performance on local systems.
Finding balance in a hybrid model
Most teams today are exploring ways cloud and on-premises can work together. A hybrid approach provides more options, allowing different workflows to live in the environment that fits best.
Hybrid systems offer a smart solution for data tiering. Files that are used regularly can stay local for fast access, while older or less-used content can be moved to the cloud. With the right tools, managing both doesn’t have to add extra complexity, especially if the system presents a unified interface for storing and accessing data.
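As a sketch of what that tiering logic might look like, here’s a minimal Python example that routes files by last-access age. The 30-day threshold, the path, and the tier names are assumptions for illustration; a real media asset manager would apply richer policies (project status, codec, rights windows) and handle the actual data movement.

```python
import time
from pathlib import Path

# Assumed policy: content untouched for 30 days moves to cloud cold storage.
HOT_WINDOW_SECONDS = 30 * 24 * 3600

def choose_tier(path: Path, now: float | None = None) -> str:
    """Return 'local' for recently accessed files, 'cloud' otherwise."""
    now = time.time() if now is None else now
    # Last-access timestamp from the filesystem; note that many
    # filesystems update atime lazily (e.g. relatime on Linux).
    last_access = path.stat().st_atime
    return "local" if (now - last_access) < HOT_WINDOW_SECONDS else "cloud"

# Example: classify everything under a (hypothetical) media volume.
for clip in Path("/mnt/media").rglob("*.mxf"):
    print(clip, "->", choose_tier(clip))
```

In practice, a hybrid storage system runs this kind of policy continuously behind a unified namespace, so editors never need to know, or care, which tier a file currently lives on.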
A flexible hybrid model takes advantage of the strengths of both, making it easier to grow without encountering roadblocks. As demands increase, from high-resolution formats to distributed production and AI, teams need systems that can scale while staying responsive and affordable.
Rethinking flexibility
Cloud remains a fixture of media and entertainment, but it’s no longer the answer to everything. Teams are taking a closer look at the total cost of ownership, where performance matters most, and what content makes sense to keep in-house.
A more deliberate approach to deciding where to place your data is one that follows the needs of the workflow, helps avoid surprise costs, and keeps content available when and where it’s needed.
Sometimes that means finding better ways to work across environments; sometimes it means moving data back on-premises. Either way, the goal is the same: make sure your infrastructure supports the work, not the other way around.