For many years, there has been a lot of buzz in the media and entertainment industry about new architectures based on microservices: best practices, how to migrate from old-fashioned monolithic infrastructures, cloud native, cloud ready, and a long list of topics, discussions and confusion.
After this tsunami of information and marketing comes reality: cloud and hybrid cloud deployments have to prove that they really work, are cost-effective and easy to maintain and, most importantly, that they respond to the business needs of media companies in terms of elasticity, pricing model and more.
Like many computer science disciplines, cloud-native computing means many things and one thing at the same time. Our view is that cloud-native software should be designed and built entirely according to the principles of service-oriented architecture, with each new generation of technology enabling more abstraction and leverage on past achievements.
There is another important ethos we embrace: reaching for very high levels of availability, elastic scale and the ability to meet user demand without traditional capacity-planning exercises. Cloud-native software follows the then-state-of-the-art architectural principles – frequently referred to as “modern” architecture. As the tradecraft evolves, so too should the toolsets.
Cloud-ready vs. Cloud-native
The traditional notion of feature-rich software, divided into separately designed functional modules that can be deployed on virtualised hardware, is a powerful idea. Today many such systems are what we might call cloud-ready. It is easy to mount them on a public cloud like AWS by simply mapping their on-premises infrastructure to the cloud. A traditional software system might require a handful of servers for each deployment – and cloud makes that easy to do.
But that’s not nearly as efficient, scalable, or flexible as it could be. And as technology marches forward, end users tend to have a reduced tolerance for trade-offs. Increasingly, companies need flexibility and power, instant-on and massive-scale, automation and agility.
And that’s the difference between cloud-ready and cloud-native.
Where software-based infrastructure that is agnostic to hardware has been a powerful and important cloud-ready principle, abstracting the operating system – using Docker containers and Kubernetes orchestration – takes it to the next level, exposing granular APIs and microservices for every single system function. This is what the future of computing requires.
Cloud-native Building Blocks: Containers
Containers abstract not only the hardware, as a virtual machine does, but also embody a set of best practices. They allow computational functions to run in very small chunks, each responding to a specific feature and including only the dependencies needed for execution inside the container. They also need far fewer resources than a full virtual machine, and they can be upgraded and managed independently as long as the APIs they expose remain compatible.
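As a sketch of what “only the dependencies needed” means in practice, a container image for a single, narrowly scoped service might be defined as follows. The service name (`transcode_service`), port and base image here are illustrative, not a specific product’s implementation:

```dockerfile
# Illustrative only: package one narrowly scoped service with just its own dependencies
FROM python:3.12-slim

WORKDIR /app

# Install only the libraries this one service needs, not the whole system's stack
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy this single service's code and nothing else from the wider system
COPY transcode_service/ ./transcode_service/

# The service exposes its API on its own port, independently of other functions
EXPOSE 8080
CMD ["python", "-m", "transcode_service"]
```

Because each image carries only its own dependencies, each service can be rebuilt, upgraded and redeployed on its own schedule.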
Indeed, for years, computer scientists have imagined a world of technology that can work like the box of plug-and-play blocks we all loved as kids. Business analysts understand the power of the idea.
Imagine a physical world built with Minecraft-magical shipping containers. You could build that world easily, scale up towers, roads, bridges, houses, buildings, cars and indeed an entire city simply by changing the form and function of the container and then placing it where you need it, in exactly the right number.
You could also imagine it being taken down just as quickly, like a massive traveling amusement park headed on to the next town.
Or perhaps take our imaginary city into maintenance mode, nurturing it with fool-proof rehabilitation and enhancement of neighbourhoods by simply removing the existing containers, placing in the new ones, and bringing back the old ones for reuse or disposal.
With traditional cloud-ready software, you can have a very successful DevOps team who are masters of Continuous Integration and Continuous Deployment (CI/CD). But try to scale a single piece of functionality in a cloud-ready system and you’ll find that a large “minimal sufficient deployment” of networking, storage and computing is needed – the entire system must be scaled regardless of the functionality required. Continuing the example above, suppose a full deployment needs six servers to cover a range of functions including storage, transcoding and editing tasks.
What if you only need to scale one processing operation, but not any other functionality the system provides? Spinning up six servers, even virtual ones, is not efficient. It generates waste. Avoiding that waste is a core benefit of containers, orchestration and 100 percent microservices.
Cloud-native Benefits: Idling Resources, Scaling Difficulties Eliminated
So what’s needed? The application must be entirely re-architected and re-built with a cloud-native microservices approach. Only then can it meet the above requirements in an optimally efficient way.
Many vendors claim cloud-native because they’re using two availability zones of AWS S3 Storage. That doesn’t fit our definition.
Other vendors demonstrate a slice of functionality built on Docker containers that can be deployed manually on a machine. That doesn’t fit our definition.
To be efficient, 100 percent of the system functionality needs to be exposed in a set of granular, self-sufficient services that can be immediately spun-up, scaled-out and shut down using modern container orchestration techniques, including Docker and Kubernetes.
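As a hypothetical illustration of one such granular service under orchestration, a Kubernetes Deployment lets a single function be spun up, scaled out and shut down on its own. The names, image and replica count below are illustrative:

```yaml
# Hypothetical example: one fine-grained service deployed and scaled independently
apiVersion: apps/v1
kind: Deployment
metadata:
  name: transcode-service            # illustrative service name
spec:
  replicas: 3                        # scale this one function alone, not the whole system
  selector:
    matchLabels:
      app: transcode-service
  template:
    metadata:
      labels:
        app: transcode-service
    spec:
      containers:
        - name: transcode-service
          image: registry.example.com/transcode-service:1.4.2   # illustrative image
          ports:
            - containerPort: 8080
```

Scaling that single function out to twenty instances – or back to zero – is then a one-line operation, e.g. `kubectl scale deployment transcode-service --replicas=20`, leaving every other service untouched.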
The Well-Architected Cloud: AWS’s 5 Pillars
Indeed, the agility and power of cloud-native services do incur some additional – and complex – non-functional requirements in how they are architected at the foundational level. What does that mean? Amazon Web Services, one of the leaders in cloud, describes five core dimensions that are a great start – and an inspiration for the Tedial approach to architecture. These are:
- Operational Excellence. Meet the business needs: workloads, as they are developed and run, should deliver the business value required, expose operational visibility, and continuously improve processes and procedures.
- Security. Protect data, systems and assets – maintain the idea that there is no ‘trusted’ origin.
- Reliability. Redundancy / Resiliency. Performs its function correctly and consistently, including the ability to continue working in case of the failure of some component (redundancy) and recover automatically (resiliency).
- Performance Efficiency. The application uses only the minimum computing resources needed to meet the system requirements under changing demand conditions. More importantly, it should adapt to new efficiencies provided by core hardware and advanced technologies available and scale appropriately to maintain business value under increased utilisation.
- Cost Optimisation. The systems should be able to deliver business value at the lowest possible price point at each moment in time.
These requirements are more complex than what high-performing monolithic software – even modular software – can meet. Re-architecting an entire set of mature MAM and workflow system functionalities to meet them does not come without heavy investment.
But it’s precisely this investment that must be made for system functionality to be considered 100 percent Microservices based.
Kubernetes: Key to Container Orchestration
The most popular way to take advantage of microservices orchestration is Kubernetes. Kubernetes offers a common platform that requires minimal adaptation to deploy microservices both on-premises and in all the public clouds, such as AWS, Google Cloud Platform, Microsoft Azure and others.
This approach must include:
- Advanced Technical and Operational Dashboards. Detailed monitoring of all microservices, comprehensive metrics about the use of the resources (Disk, Memory, CPU, Network and Cloud Services) and other data needed to optimise the execution of the microservices and reduce cost over time.
- Deployment Strategies. Strategies such as Blue-Green, Rolling and Canary deployments can be used depending on the SLAs needed. (For example, is zero downtime desired, or do you need to minimise upgrade and maintenance risks?)
- Autoscaling: Spin up and down services based on different criteria: Pure IT (Disk IO, Memory) vs. workflows parameters (queued operations).
- Security. Easily use and reuse Kubernetes security policies and network slices.
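The autoscaling point above can be sketched with a Kubernetes HorizontalPodAutoscaler. This example scales on a pure IT criterion (CPU utilisation); scaling on workflow parameters such as queued operations would instead use a custom or external metric. The service name and thresholds are illustrative:

```yaml
# Hypothetical example: autoscale a single service on CPU utilisation
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: transcode-service          # illustrative name, matching a Deployment of that name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: transcode-service
  minReplicas: 1                   # shrink to a minimal footprint when idle
  maxReplicas: 20                  # burst out under load
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU exceeds 70%
```

The same mechanism, pointed at a queue-depth metric, is what lets scaling follow business workload rather than raw infrastructure load.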
Finally, a few additional features of our approach to cloud-native include:
- Full use of serverless computing to trigger microservices on-demand, for event-based pricing.
- Full implementation of CI/CD in a pipeline that packages, auto-tests and deploys, again using orchestration.
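The package / auto-test / deploy pipeline just mentioned can be sketched as a CI configuration (GitLab-style syntax here; the registry, service name and test command are illustrative assumptions):

```yaml
# Illustrative CI/CD pipeline: package, auto-test, then deploy via orchestration
stages:
  - build
  - test
  - deploy

build-image:
  stage: build
  script:
    # package the service as a container image tagged with the commit
    - docker build -t registry.example.com/transcode-service:$CI_COMMIT_SHA .
    - docker push registry.example.com/transcode-service:$CI_COMMIT_SHA

unit-tests:
  stage: test
  script:
    - pytest tests/              # auto-test before anything reaches production

deploy:
  stage: deploy
  script:
    # hand the new image to Kubernetes; the rolling-update strategy does the rest
    - kubectl set image deployment/transcode-service transcode-service=registry.example.com/transcode-service:$CI_COMMIT_SHA
```

The deploy step deliberately delegates to the orchestrator, so the chosen strategy (Rolling, Blue-Green or Canary) governs how the new version reaches users.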
Physical Infrastructure Agnostic: Using the “Infrastructure as Code” (IaC) Paradigm
Another key consideration is how tied a solution is to a specific cloud vendor. It can be cloud-native, meeting all the desired architecture principles, yet be so dependent on specific cloud vendor services that it is impossible or very hard to move from one cloud vendor to another, or even to run on-premises. Some companies have taken this approach for quick time to market and have a product that runs perfectly in the cloud – but only in ONE cloud, and never on-premises.
To solve this, vendors need to go one step further and use services that are available in, or portable to, any environment – in other words, avoid vendor-specific cloud services for core functionalities and define the deployment with an “Infrastructure as Code” (IaC) approach. Historically, setting up new infrastructure meant stacking physical servers, configuring network cables and housing hardware in a capable data centre. Today, a more performance-efficient, cost-effective and secure infrastructure can be set up entirely from machine-readable definitions that create all the microservices, networks, storage and related security policies automatically.
There are many Infrastructure-as-Code tools, some of them vendor-specific. Choosing a multi-cloud tool such as Terraform – which can deploy to any cloud, or even to an on-premises Kubernetes cluster, in a cloud-agnostic way – avoids vendor lock-in and helps future-proof the deployment.
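As a hypothetical sketch of the IaC idea, the same Kubernetes workload used in the earlier examples can be described as code with Terraform’s Kubernetes provider, and pointed at any cluster – cloud or on-premises – by swapping the kubeconfig. All names and the image are illustrative:

```hcl
# Hypothetical IaC sketch: one service described as code, deployable to any Kubernetes cluster
provider "kubernetes" {
  config_path = "~/.kube/config"   # change the kubeconfig to target a different cluster or cloud
}

resource "kubernetes_deployment" "transcode_service" {
  metadata {
    name = "transcode-service"     # illustrative name
  }
  spec {
    replicas = 3
    selector {
      match_labels = { app = "transcode-service" }
    }
    template {
      metadata {
        labels = { app = "transcode-service" }
      }
      spec {
        container {
          name  = "transcode-service"
          image = "registry.example.com/transcode-service:1.4.2"   # illustrative image
        }
      }
    }
  }
}
```

Because the definition lives in version-controlled text rather than in a console, the same file recreates the same infrastructure anywhere a compatible cluster exists.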
Going One Step Further: Hybrid Cloud
Hybrid cloud goes one step further in microservices deployments, as services need to be deployed seamlessly both in the cloud and on-premises. For this it is key to implement IaC deployments and, most importantly, to keep the nodes in the different locations autonomous yet managed from a single point. From a microservices perspective, a hybrid cloud deployment needs to take advantage of cloud elasticity along with all of the orchestration, upgrade strategies and CI/CD described above. In summary, it must work as a single system, with the operations serving the business requirements executed in the most suitable location, optimising costs without over-complicating workflow definitions and maintenance. For this approach a Media Integration Platform is KEY.