In today’s multi-cloud world, we’re often faced with the tough decision of crafting a long-term strategy that fits within existing architectural decisions. How do we properly evaluate the options that best serve our strategic goals? Sometimes this is easier said than done.
It’s also important to realize that infrastructure modernization becomes an important part of the journey. In this multi-cloud world, selecting platforms that are built with the cloud in mind becomes a core requirement to ensure you’re able to leverage the proper platform for the job.
While the following four principles can be applied to any core technology decision-making process, today we decided to focus on applying the framework to the data management and protection layer of the technology delivery stack. Without further ado, here are four simple principles for modernizing your data management strategy.
Scalability + Performance
If you happened to read our post on “HCI: The Good, The Bad, and The Ugly,” this will sound familiar. Gone are the days of having to wrangle software, server hardware and storage together to build a desired solution. Today, there are options that integrate all of these necessary components into a scalable, highly-performant stack.
This makes normal growth and product lifecycle events easier to manage. Ultimately, this strategy frees your teams to focus on more pressing projects. Because performance increases linearly as a solution grows in capacity, backup tasks require less time to complete, so you can back up more frequently and achieve a lower RPO (recovery point objective). Fast restores also bring systems back online quickly, reducing your RTO (recovery time objective).
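As a rough illustration of the scale-out argument, the relationship between node count, backup window, and backup frequency can be sketched in a few lines. All throughput and capacity figures here are invented placeholders, not vendor benchmarks:

```python
# Toy model: if throughput scales linearly with node count, the backup
# window shrinks as the cluster grows, enabling more frequent backups
# (lower RPO) and leaving more headroom for fast restores (lower RTO).
# The 2 TB/hour-per-node figure is purely hypothetical.

def backup_window_hours(data_tb: float, nodes: int, tb_per_hour_per_node: float) -> float:
    """Time to complete a full backup, assuming linear scale-out."""
    return data_tb / (nodes * tb_per_hour_per_node)

# 100 TB of data, each node moving an assumed 2 TB/hour
print(backup_window_hours(100, 4, 2.0))   # 12.5 hours on 4 nodes
print(backup_window_hours(100, 10, 2.0))  # 5.0 hours on 10 nodes
```

Halving the backup window lets you schedule backups twice as often, which is what actually drives the RPO down.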
The traditional backup architecture incorporates some blend of media servers, agents, and a backup storage target. These legacy architectures don’t allow capacity and performance to be increased as easily. They require substantial effort to design, implement, and manage, which ties up resources and ultimately slows down the business.
As it relates to data management and protection, instead of tuning media servers, wrestling with backup windows, or isolating architectural bottlenecks, you can focus on business-relevant tasks that actually drive organizational progress.
As we discussed in “5 Beginning Steps to Ensure Your Backups are Protecting your Data from Ransomware,” there doesn’t seem to be more than a few days passing before we hear about the next business or municipality that has been severely impacted by ransomware. If an event occurs, losing both your virtual machines and your backups is obviously a worst-case scenario.
Traditional backup architectures that rely on network shares as backup targets are usually compromised during ransomware events, preventing organizations from recovering quickly after the outage. For traditional environments, you need automated processes and a tertiary data target to get data offline.
Modern data management and protection solutions typically take a different approach. Newer methodologies are API driven and use a closed storage approach: because backup storage is not exposed as an open network share, ransomware cannot discover it or reach it with compromised network credentials. In our space, this is referred to as storage immutability.
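To make the immutability idea concrete, here is a minimal sketch of a write-once store that refuses overwrites entirely and refuses deletes until a retention window expires. This is an illustration of the concept only, not any vendor’s actual implementation:

```python
# Sketch of "storage immutability": backups can be written once and read,
# but not overwritten, and not deleted inside their retention window.
# Hypothetical illustration; real platforms enforce this below the API layer.
import time

class ImmutableStore:
    def __init__(self, retention_seconds: float):
        self.retention = retention_seconds
        self._objects = {}  # key -> (data, written_at)

    def put(self, key: str, data: bytes) -> None:
        if key in self._objects:
            raise PermissionError(f"{key} is immutable; overwrite denied")
        self._objects[key] = (data, time.time())

    def get(self, key: str) -> bytes:
        return self._objects[key][0]

    def delete(self, key: str) -> None:
        _, written_at = self._objects[key]
        if time.time() - written_at < self.retention:
            raise PermissionError(f"{key} is inside its retention window")
        del self._objects[key]
```

Even if an attacker obtains valid API access, the worst they can do within the retention window is read the backup, not encrypt or destroy it.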
Beyond this approach, modern solutions also include the ability to detect ransomware activity on the network, notify the admin, and then dynamically provide a restore task that replaces only the encrypted files. This significantly reduces RTO after a ransomware attack has taken place.
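One common way such detection works is to watch the change rate between backups: mass encryption touches far more files than normal daily churn. The heuristic below is a deliberately simplified sketch (real products combine richer signals such as entropy and file-type churn), and all the change-rate figures are invented:

```python
# Toy ransomware heuristic: alert when the fraction of files changed in the
# latest backup spikes far above the recent baseline. Hypothetical numbers.

def change_rate_alert(history: list, latest: float, factor: float = 3.0) -> bool:
    """Return True if the latest change rate exceeds `factor` x the baseline mean."""
    baseline = sum(history) / len(history)
    return latest > factor * baseline

daily_change = [0.02, 0.03, 0.02, 0.04]  # fraction of files changed per backup
print(change_rate_alert(daily_change, 0.45))  # True: 45% churn looks like mass encryption
print(change_rate_alert(daily_change, 0.05))  # False: within normal churn
```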
As companies explore suitable ways to leverage the public cloud, we are finding that object storage is a “low hanging fruit” for long term retention of backup data. The reasoning is simple:
- In specific use cases, it is less expensive than putting storage onsite (see archival or long-term retention).
- As an elastic platform, the cloud is more agile and scalable in supporting your changing business needs. In this case, the platform would also offer integration into other cloud-based services.
For instance, with backup data, virtual machines can be converted to run in public cloud compute infrastructures. This opens the door for cost effective, on-demand test and development environments. You also have the option of using the cloud for DR instead of building your own second data center.
- Most organizations have received extensive “cloud credits” from previous investments. Modern data management and protection platforms are enabling you to leverage existing assets without taking on the cost of additional disk capacity today.
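The cost argument in the first bullet is easy to sanity-check with back-of-the-envelope math. Every price in this sketch is a hypothetical placeholder; substitute your own on-prem amortization and cloud quotes before drawing conclusions:

```python
# Back-of-the-envelope comparison of on-prem disk vs. archive-tier object
# storage for long-term retention. All per-GB prices are assumptions.

def monthly_cost_usd(tb: float, usd_per_gb_month: float) -> float:
    """Monthly storage cost for `tb` terabytes at a given per-GB-month rate."""
    return tb * 1024 * usd_per_gb_month

archive_tb = 200
onprem = monthly_cost_usd(archive_tb, 0.05)   # assumed amortized on-prem disk
cloud = monthly_cost_usd(archive_tb, 0.004)   # assumed archive-tier object storage
print(f"on-prem ~${onprem:,.0f}/mo vs. cloud archive ~${cloud:,.0f}/mo")
```

Note that a fair model would also include retrieval and egress fees on the cloud side, which matter most when restores are frequent.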
Most next-generation data management and protection solutions include cloud-native integrations and features designed from the ground up to help your IT transformation leverage public cloud resources.
To explore the cloud layer in more depth, check out our post on “The Advantages of Leveraging a Cloud-Based Data Management Strategy.”
Freeing up resources is the name of the game. IT departments are stretched thin, and any time saved can be redirected to higher-priority projects. A modern data management and protection solution will have management simplicity at its core, and that simplicity impacts the complete lifecycle of the product set: quicker time to delivery on initial deployment, instant scaling, minimized risk during platform refreshes, and, most importantly, less of your time spent supporting complex architectures.
Having a complete “API first” architecture also goes a long way to simplify your environment. By utilizing automation and orchestration tools, you are able to minimize the amount of effort required to complete mundane tasks.
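As a small illustration of what “API first” buys you, protection jobs can be assembled programmatically instead of clicked together in a UI. The payload shape below is hypothetical, not any real vendor’s API:

```python
# Sketch: build a protection-job definition that an orchestration tool
# could submit to a platform's REST API. The field names are invented.

def make_protection_job(vm_names: list, policy: str, frequency_hours: int) -> dict:
    """Assemble a job definition covering a set of VMs under one policy."""
    return {
        "objects": [{"type": "vm", "name": name} for name in vm_names],
        "policy": policy,
        "schedule": {"every_hours": frequency_hours},
    }

job = make_protection_job(["web-01", "db-01"], policy="gold", frequency_hours=4)
# An automation tool (Ansible, Terraform, or a simple script) could then POST
# `job` to the platform and track it to completion, with no manual clicks.
```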
If you’re currently struggling to modernize your approach with data management and protection, give us a ring! You have options in creating an outcome that helps you and your team move faster, mitigate risk, and drive your organizational relevance into the future.