Setting containers free: Keep IT in line with business needs

The adoption of containers in enterprises is in full swing, with container platforms like Kubernetes fast becoming the de facto standard for deploying applications to production. Indeed, a recent study of data protection strategies for containers found that 67% of respondents are already running containers for production applications, while the remaining 33% plan to do so within the next 12 months.

By enabling application developers to package small, focused pieces of code into independent, portable modules containing everything needed to run them, containers have seen dramatic growth in use in the wake of the coronavirus crisis.

Containers are becoming the go-to choice for production

Last year, organisations began to adopt containers at an unprecedented rate as they moved workloads to the cloud and upped their consumption of cloud-native services. Enabling organisations to scale quickly and accelerate the digitisation of business models, containerisation is set for a continued growth trajectory in 2021 – particularly as enterprises shift more of their production environments to the cloud and leverage containers to enable true hybrid multi-cloud deployments. 

According to Gartner, the rate of adoption of containers means that by 2023, more than 70% of global organisations will be running more than two containerised applications in production, up from less than 20% in 2019.

However, as containers rise in popularity, organisations will need to rethink their data protection strategies quickly, because the traditional ‘snapshot’ approach to data protection will not meet the needs of a modern, containerised business environment.

To put it starkly, container-based applications cannot be backed up in the same way that on-premises VM-based applications are protected.

Understanding what it takes to protect next-gen containerised applications

As containers democratise the ability to provision infrastructure, data protection is becoming a shared mandate involving the IT operations teams that provide the infrastructure and the application development and cloud platform teams that create and deploy applications via containers.

This shared mandate sometimes creates a disconnect between responsibility (the development team) and accountability (IT operations), which in turn increases the risk of improper protection across production applications.

Adding further complexity to the fragmentation challenge, containers can and do run across on-premises and public cloud environments. In the good old days of virtualisation architectures, IT teams knew that application data was stored in a VMDK or on shared storage. However, containers move data storage to external data storage services in the cloud or on-premises – all of which impacts visibility into the state of data protection across environments. This means that monitoring, logging, and data protection need to be rethought to ensure the container ecosystem is effectively supported. This has significant implications for how the enterprise assures data resilience and disaster recovery, without compromising agility.
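To make this concrete, here is a minimal sketch of how that externalised data is typically declared in Kubernetes (the `orders` namespace, the `orders-db-data` claim name and the `cloud-ssd` storage class are illustrative assumptions, not anything prescribed above):

```yaml
# A persistent volume claim: the application's data is provisioned from an
# external storage service, not stored inside the container itself.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: orders-db-data
  namespace: orders
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: cloud-ssd   # illustrative: maps to a cloud block-storage service
  resources:
    requests:
      storage: 20Gi
```

Any pod that mounts this claim reads and writes data that physically lives in the external storage service, which is exactly why visibility and protection have to extend beyond the cluster itself.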

Containers differ from mature virtual environments in one other key respect: they offer fewer ways of ensuring new workloads are configured correctly for data protection. Even next-generation applications built with internal availability and resilience in mind often lack a simple way to recover from risks such as human error or malicious attack.
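One way to narrow that gap is a guardrail at admission time. The sketch below uses the open-source Kyverno policy engine to reject stateful workloads that arrive without a protection tag; treat it as one illustrative option (the tool choice and the `backup-policy` label key are assumptions, not something the article prescribes):

```yaml
# Hypothetical guardrail: refuse stateful resources that do not declare a
# backup-policy label, so new workloads cannot reach the cluster unprotected.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-backup-policy-label
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-backup-policy
      match:
        any:
          - resources:
              kinds:
                - StatefulSet
                - PersistentVolumeClaim
      validate:
        message: "Stateful workloads must declare a backup-policy label."
        pattern:
          metadata:
            labels:
              backup-policy: "?*"   # any non-empty value passes
```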

Ultimately, the safe implementation of a container-based application environment will depend on being both agile and able to recover quickly, without interruption.

Solving the data protection challenge

The old ways of backing up will not suffice for data protection in a multi-cloud estate spanning containers and virtual machines. Opting for non-native solutions from legacy backup and disaster recovery providers will only add time, resources, and barriers to application development.

Instead, organisations will need to utilise native solutions to help drive a ‘data protection as code’ strategy that ensures data protection and disaster recovery operations are integrated into the application development lifecycle from the start. In other words, ensuring that applications are ‘born protected’.

Adopting this approach means that teams creating container-based workloads simply apply pre-defined policies in a way that makes sense for them – at the Kubernetes resource and object level – using dynamic tags which then automatically ensure all related persistent data is included. This should feel like an extension of ‘infrastructure as code’. Additional backup of data at the container registry or artefact repository level provides end-to-end resilience.
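As a sketch of what that looks like in practice, the manifests below tag a workload with a pre-defined policy and pair it with a matching schedule, here expressed with the open-source Velero backup tool. The tool choice, the `backup-policy: tier-1` tag and the retention settings are all illustrative assumptions rather than a prescription:

```yaml
# The development team opts the workload in simply by applying the tag;
# the mounted persistent volume claim is picked up along with the pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-db
  namespace: orders
  labels:
    backup-policy: tier-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: orders-db
  template:
    metadata:
      labels:
        app: orders-db
        backup-policy: tier-1   # dynamic tag on the pods themselves
    spec:
      containers:
        - name: postgres
          image: postgres:15
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: orders-db-data
---
# IT operations owns the matching policy: a daily backup of everything
# carrying the tag, including volume snapshots, retained for 30 days.
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: tier-1-daily
  namespace: velero
spec:
  schedule: "0 2 * * *"        # every day at 02:00
  template:
    labelSelector:
      matchLabels:
        backup-policy: tier-1
    includedNamespaces:
      - "*"
    snapshotVolumes: true      # snapshot the attached persistent volumes
    ttl: 720h                  # retain backups for 30 days
```

Because the schedule selects on the tag rather than on named resources, any new workload carrying the same label is protected from its first deployment – which is the ‘born protected’ idea in practice.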

Such an approach eliminates any need to configure policies from scratch or build a separate data protection infrastructure. Instead, developers are free to consume containers in a self-service, on-demand manner, applying the policies they know will ensure data protection is taken care of. Meanwhile, IT operations can utilise simple policy-based management to retain visibility and remain compliant.

As a result, organisations can ensure the resilience of their applications, without sacrificing the agility, speed and scale of containerised applications. 

A new approach for next-generation application resilience

The technical and operational advantages offered by platforms like Kubernetes have helped propel the popularisation of containers. Lightweight and modular, containers boost developer agility through cloud-native microservices. However, as some organisations have already discovered, combining containerised technology with outdated monolithic backup applications risks compromising efficiency, application resilience and data protection capabilities.

Utilising today’s innovative containerised technology demands a new way of thinking when it comes to data protection and disaster recovery. Ideally, resilience and data protection should be integrated with existing Kubernetes workflows to minimise any impact on a developer’s day-to-day workload. This frees up developer cycles to work on the next application, and IT operations cycles to drive more agility in bringing applications to market.

To take full advantage of their investment in these next-generation cloud-native technologies, organisations will need to view containers and their data as a single entity – and focus on promoting continuous data protection as code for containerised applications.


Author

  • Deepak Verma

    Deepak Verma is the VP of product management at Zerto. He is responsible for managing the release of next-generation products at Zerto. He has 20 years of experience in the IT industry with a focus on disaster recovery and data protection and has led product management teams responsible for building and delivering solutions for cloud platforms at multiple companies. Deepak holds a Master of Computer Science and a Bachelor of Engineering. He is certified in AWS, Microsoft Azure and Google Cloud.
