We live in an increasingly virtual world. Because of that, many organisations not only virtualise their servers, they also explore the benefits of virtualised storage.
Gaining popularity 10-15 years ago, storage virtualisation is the process of sharing storage resources by bringing physical storage from different devices together in a centralised pool of available storage capacity. The strategy is designed to help organisations improve agility and performance while reducing hardware and resource costs. However, this effort, at least to date, has not been as seamless or effective as server virtualisation.
That is starting to change with the rise of object storage – an increasingly popular approach that manages data storage by arranging it into discrete and unique units, called objects. These objects are managed within a single pool of storage instead of a legacy LUN/volume block store structure. Each object is also bundled with its associated metadata and a unique identifier, and all objects sit together in one flat, centralised storage pool.
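To make the model concrete, here is a minimal sketch of the object storage idea – a flat pool keyed by unique identifiers, with metadata travelling alongside the data. The class and method names are illustrative assumptions, not any vendor's actual API:

```python
import hashlib

class ObjectStore:
    """Toy illustration of an object store: one flat pool of objects,
    each holding data plus its metadata, addressed by a unique ID.
    No LUNs or volumes -- every client sees the same namespace."""

    def __init__(self):
        self._pool = {}  # the single, centralised pool

    def put(self, data: bytes, metadata: dict) -> str:
        # Derive a unique object ID from the content itself.
        object_id = hashlib.sha256(data).hexdigest()
        self._pool[object_id] = {"data": data, "metadata": metadata}
        return object_id

    def get(self, object_id: str) -> bytes:
        return self._pool[object_id]["data"]

    def head(self, object_id: str) -> dict:
        # Metadata is bundled with the object, so it can be read directly.
        return self._pool[object_id]["metadata"]

store = ObjectStore()
oid = store.put(b"report contents", {"owner": "finance", "type": "pdf"})
```

Because every object lives in the same flat pool, any server with access to the pool can retrieve any object by its ID – there is no per-server capacity carve-out.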
Object storage truly takes storage virtualisation to the next level. I like to call it storage virtualisation 2.0 because it makes it easier to deploy increased storage capacity through inline deduplication, compression and encryption. It also enables enterprises to effortlessly reallocate storage where needed, while eliminating the layers of management complexity inherent in storage virtualisation. With object storage, administrators no longer need to worry about allocating a given capacity to a given server. Why? Because all servers have equal access to the shared object storage pool.
One key benefit is that organisations no longer need a crystal ball to predict their utilisation requirements. Instead, they can add the exact amount of storage they need, at any time and in any granularity, to meet their storage requirements. And they can continue to grow their storage pool with zero disruption and no application downtime.
Perhaps the most significant benefit of storage virtualisation 2.0 is that it can do a much better job of protecting and securing your data than legacy iterations of storage virtualisation.
Yes, with legacy storage solutions, you can take snapshots of your data. But the problem is that these snapshots are not immutable. And that fact should have you concerned. Why? Because once the data changes or is overwritten, those snapshots give you no way to recapture the original.
So, once you do any kind of update, you have no way to return to the original data. Quite simply, you are losing the old data snapshots in favour of the new.
With object storage, however, your data snapshots are indeed immutable. Because of that, organisations can now capture and back up their data in near real-time – and do it cost-effectively. An immutable storage snapshot protects your information continuously by taking snapshots every 90 seconds, so that even in the case of data loss or a cyber breach, you will always have a backup. All your data will be protected.
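The difference between mutable and immutable snapshots can be sketched in a few lines. In this hypothetical example, each snapshot is a frozen, read-only copy of the live data, so later overwrites never touch it; the names and the 90-second interval simply mirror the article, not a real product's API:

```python
import time
from types import MappingProxyType

class SnapshotStore:
    """Sketch of immutable snapshots: live data stays writable, but each
    snapshot is a frozen point-in-time copy that can never be altered."""

    SNAPSHOT_INTERVAL = 90  # seconds, matching the article's cadence

    def __init__(self):
        self.live = {}       # current, mutable data
        self.snapshots = []  # append-only history of frozen views

    def write(self, key, value):
        self.live[key] = value

    def take_snapshot(self):
        # A read-only proxy over a private copy: attempts to modify it
        # raise TypeError, so the original data is always recoverable.
        frozen = MappingProxyType(dict(self.live))
        self.snapshots.append((time.time(), frozen))
        return frozen

store = SnapshotStore()
store.write("doc", "v1")
snap = store.take_snapshot()
store.write("doc", "v2")  # overwriting the live copy leaves snap intact
```

After the overwrite, `snap` still holds the original `"v1"` – exactly the guarantee a mutable legacy snapshot cannot make.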
Taming the data deluge
Storage virtualisation 2.0 is also more effective than the original storage virtualisation when it comes to taming the data tsunami. Specifically, it can help manage the massive volumes of data – such as digital content, connected services and cloud-based apps – that companies must now deal with. Most of this new content and data is unstructured, and organisations are discovering that their traditional storage solutions are not up to managing it all.
It’s a real problem. Unstructured data eats up a vast amount of a typical organisation’s storage capacity. IDC estimates that 80% of data will be unstructured in five years. For the most part, this data takes up primary, tier-one storage on virtual machines, which can be a very costly proposition.
It doesn’t have to be this way. Organisations can offload much of this unstructured data via storage virtualisation 2.0, with immutable snapshots and centralised pooling capabilities.
The net effect is that by moving unstructured data to object storage, organisations no longer keep it on VMs and no longer need to back it up in the traditional sense. With object storage taking immutable snapshots and replicating them to another offsite cluster, this approach can eliminate 80% of an organisation's backup requirements and backup window.
This dramatically lowers costs: instead of holding 80% of data in primary, tier-one environments, that data now sits on object storage.
All of this also dramatically reduces the recovery time of unstructured data from days and weeks to less than a minute, regardless of whether it's terabytes or petabytes of data. And because the network no longer moves the data around from point to point, it's much less congested. What's more, the risk of failed data backups disappears, because there are no more backups in the traditional sense.
The need for a new approach
As storage needs increase, organisations need more than just virtualisation. They need to take a different approach to managing storage than they have in the past. You can't merely use the same old technology and hope for a different outcome. It's time to take a closer look at the various storage architectures in the market and better understand which ones offer the greatest benefits.
Storage virtualisation 2.0 offers organisations the ability to better manage both structured and unstructured data, along with the added benefit of greater protection against data loss. In other words, object storage solves today’s most pressing problems for IT administrators and organisations, innovatively and cost-effectively.
Issued by Loophold Security Distribution