Technology Advancement Drives Hybrid Cloud Storage Adoption
A hybrid cloud implementation presents on-premises and public cloud storage as a single, homogeneous pool. It is most often implemented using proprietary commercial storage software, a cloud storage appliance that serves as a gateway between on-premises and public cloud storage, or an application program interface (API) that accesses the cloud storage directly.
A hybrid cloud storage solution acts as an intermediary, maintaining a superset of the data held in the organization's primary storage. It is implemented either through purpose-built software or through storage appliances that act as a gateway between the primary storage within the organization and the public cloud storage. This gateway synchronizes the relevant data to the cloud according to policies defined for that data, and it can also encrypt the data before it is synced to public cloud storage.
Technology Advancements Enable Hybrid Cloud Storage
Cloud-integrated storage (CiS), or hybrid cloud storage, is a full-featured on-premises storage system that integrates with the cloud. Organizations can create a hybrid architecture by storing their data locally and protecting it with snapshots both on-premises and in the cloud. Dormant or infrequently used data can then be tiered to the cloud automatically and seamlessly, freeing local capacity for new data.
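A tiering policy of this kind can be sketched in a few lines. The sketch below is illustrative only – the function name and the idle-time threshold are assumptions, not any vendor's API. Files whose last access falls outside the policy window become candidates for migration to the cloud tier:

```python
import os
import time

def select_for_cloud_tier(paths, max_idle_days, now=None):
    """Hypothetical tiering policy: return the paths whose last access
    is older than the policy allows, i.e. candidates for the cloud tier."""
    now = now if now is not None else time.time()
    cutoff = now - max_idle_days * 86400  # policy window in seconds
    return [p for p in paths if os.stat(p).st_atime < cutoff]
```

A real gateway would run such a selection on a schedule and hand the resulting list to its synchronization engine.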
The technologies used for hybrid cloud storage are typically appliance-based. The appliances provide a mechanism to enforce data-management policies and to encrypt data in transit, and they are responsible for synchronizing data between the local environment and cloud storage. The storage used within the organization and in the cloud need not be the same type; choosing a cost-optimal storage tier in the public cloud keeps the overall hybrid solution economical.
Applications can be broadly classified as file-system-dependent and block-device-dependent. For example, file-based workloads are typically NFS or CIFS data from end users, including Microsoft Office documents and other local applications on user file shares. This data usually consists of large reads and writes, with files ranging from a few kilobytes to megabytes in size. I/O response time for user data is less critical than for block-based workloads, and end users are unlikely to notice an extra few milliseconds of delay in accessing their data. However, when organizations use NFS and CIFS for production workloads, the I/O profile can be very different and far more demanding; the use of NFS for server virtualization is a good example.
Block-based workloads are more complex. Latency – the time taken to complete an I/O request – is a key factor in the performance of block-based storage, and storage vendors have worked hard to reduce it and to make it more consistent within their products. Latency sensitivity is application-dependent, but it is impractical to serve block I/O across the Internet without local caching hardware. The ability to deliver block-based storage therefore depends heavily on the capabilities of the local caching device.
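The role of the local caching device can be illustrated with a minimal least-recently-used (LRU) block cache. Everything here is a hypothetical sketch, not a vendor implementation – `fetch_remote` stands in for the slow round trip to cloud storage:

```python
from collections import OrderedDict

class BlockCache:
    """Minimal LRU block-cache sketch. `fetch_remote` is a stand-in for
    the high-latency read from cloud storage."""

    def __init__(self, fetch_remote, capacity=1024):
        self.fetch_remote = fetch_remote
        self.capacity = capacity
        self._blocks = OrderedDict()  # block_id -> data, oldest first

    def read(self, block_id):
        if block_id in self._blocks:
            self._blocks.move_to_end(block_id)   # cache hit: refresh recency
            return self._blocks[block_id]
        data = self.fetch_remote(block_id)       # cache miss: slow cloud round trip
        self._blocks[block_id] = data
        if len(self._blocks) > self.capacity:
            self._blocks.popitem(last=False)     # evict least recently used
        return data
```

Repeated reads of hot blocks are served locally at memory or disk speed, so the WAN round trip is paid only on a miss.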
Most hybrid cloud storage solutions provide security at multiple levels:
- First, secure access to the cloud storage provider, based on the provider's authentication mechanism.
- Second, encryption of data in transit across the network, using protocols such as SSL/TLS.
- Third, protection of data at rest within the cloud provider's storage environment.
A hybrid cloud storage solution therefore provides security across all layers of the data path.
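As an illustration of the at-rest layer, the sketch below attaches a client-side integrity tag to each object before upload and verifies it after download, using the standard-library `hmac` module. The function names and key handling are hypothetical assumptions; a real appliance would also encrypt the payload itself (for example with AES-256) rather than only tagging it:

```python
import hmac
import hashlib

# Illustrative sketch only: the key stays on premises, so tampering with
# an object inside the cloud provider's environment is detectable.
def seal(key: bytes, payload: bytes) -> bytes:
    tag = hmac.new(key, payload, hashlib.sha256).digest()
    return tag + payload                     # store tag alongside the object

def unseal(key: bytes, blob: bytes) -> bytes:
    tag, payload = blob[:32], blob[32:]      # SHA-256 tag is 32 bytes
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("object corrupted or tampered with at rest")
    return payload
```

Because the tag is computed before upload and checked after download, this complements – rather than replaces – the TLS protection of data in transit.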
Reliability Is A Key Concern
Data travelling between a cloud storage provider and the host must be moved reliably. Reliable delivery means ensuring that transfers complete successfully and that acknowledgements are received in the correct order.
In local storage, the protocols in use ensure reliable delivery of data, but they cannot be used for transport to and from the cloud. Within the data center, Fibre Channel, iSCSI and NFS/CIFS are dominant, but none is suited to long-distance operation. Fibre Channel is a non-routed protocol, so it must be carried over IP to cross disparate networks, or it requires dark fibre or other expensive dedicated connections. CIFS and NFS are both "chatty" protocols, carrying a large management overhead; CIFS, for example, requires confirmation of each block of data before the next is transferred. On high-latency networks, CIFS performance can therefore be particularly poor.
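The cost of a chatty, per-block acknowledgement protocol can be estimated with simple arithmetic. The figures below are illustrative assumptions (a 50 ms WAN round trip and 1 ms to put a 64 KB block on the wire), not measurements of any real protocol:

```python
def chatty_transfer_time(blocks, rtt_s, block_send_s):
    """Per-block acknowledgement: every block waits a full round trip."""
    return blocks * (block_send_s + rtt_s)

def pipelined_transfer_time(blocks, rtt_s, block_send_s):
    """Idealized streaming transfer with a single final acknowledgement."""
    return blocks * block_send_s + rtt_s

# 1 MB in sixteen 64 KB blocks over a 50 ms WAN link:
# chatty is roughly 0.82 s, pipelined roughly 0.07 s.
t_chatty = chatty_transfer_time(16, 0.050, 0.001)
t_pipelined = pipelined_transfer_time(16, 0.050, 0.001)
```

The round trip dominates the chatty case, which is why per-block confirmation that is harmless on a LAN becomes painful over a WAN.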
Appliances provide standard protocol support within the client data center, including NAS protocols (CIFS and NFS) and block protocols such as iSCSI. But these protocols are difficult or impossible to use for data moving to and from the cloud.
Appliances therefore convert local protocol instructions into Web-based APIs such as Representational State Transfer (REST), which use simplified I/O commands to read, write and delete data stored as objects. In addition, data integrity is maintained by storing metadata, including write time stamps, with the content itself. The time stamps allow the data to be reconstructed in the correct order in the event of a failure.
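The object model described above can be sketched as a toy in-memory store: REST-style put/get/delete operations, with a write time stamp kept in each object's metadata so that writes can be put back in the correct order after a failure. All names here are illustrative, not any cloud provider's API:

```python
import time

class ObjectStore:
    """Toy sketch of an object store: simplified read, write and delete,
    with a write time stamp stored in each object's metadata."""

    def __init__(self):
        self._objects = {}

    def put(self, key, data, ts=None):
        # Metadata travels with the content; the time stamp records the write.
        self._objects[key] = {"data": data,
                              "written": ts if ts is not None else time.time()}

    def get(self, key):
        return self._objects[key]["data"]

    def delete(self, key):
        del self._objects[key]

    def replay_order(self):
        """After a failure, time stamps let writes be reconstructed in order."""
        return sorted(self._objects, key=lambda k: self._objects[k]["written"])
```

Even if objects arrive or are recovered out of order, sorting on the stored time stamps restores the original write sequence.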
A Word Of Advice
CIOs should carefully examine the reliability and performance of the various approaches to a hybrid cloud storage strategy. As you extend traditional storage infrastructure into the cloud to create hybrid cloud storage, it is essential to keep a close watch on performance, data protection and the overall cost of the project.
It would be interesting to know what stage of the cloud journey you are at, and at what point you will look to integrate storage into your strategy. Write in for a detailed discussion of the methods, architectures and considerations involved; it would be insightful to hear about your journey too.
Govind Desikan is the Business Development Head for Cloud Services at Netmagic, responsible for evangelising cloud initiatives and engaging closely with customers to prepare cloud blueprints for successful roll-outs. He has been associated with the IT industry for close to 20 years, with wide-ranging experience across large enterprises, software vendors, building data center services from the ground up, and architecting large system roll-outs, including elastic and adaptable architectures. Conversant with most software technologies, he is a passionate believer in simplifying technology to make it relevant for business decision makers. He has previously worked with popular software OEM brands such as VMware, Microsoft, Sun and Oracle. Apart from being a Computer Science engineer, he is also a Cost Accountant and an ISO Lead Auditor. He is well known to his customers, who fondly recognize him as "one of the best consultative solution sellers".