One way to make sense of this apparent forest of acronyms and concepts is to appreciate what we need storage for. Essentially, the entire purpose of all storage technologies is to help us store our ever-expanding digital data in a way that is
- safe and persistent, that is, data does not get destroyed, lost, mutated or corrupted once stored
- secure against unauthorized access
- accessible when one needs it, and
- affordable.
Personal Storage
For non-professional personal needs, the 300GB hard disk that typically comes by default with a laptop is more than sufficient. A 250GB hard disk, for example, can hold around 50,000 normal-size photos or mp3 files, at roughly 5MB each. If you are an avid user of video, you will probably buy a few 1TB external hard disks in addition, and that would be DAS, or Directly Attached Storage, for you. If you are a cloud aficionado, you would probably rely on Google Drive or Microsoft SkyDrive for your additional needs, in which case you have both DAS and public Cloud in your system.
Enterprise Storage
When it comes to the enterprise, many aspects such as data growth, data retention, preparedness for recovery of data after a site disaster, and access frequency of data come into consideration, making storage planning a costly and complex business. Additionally, with increasing sensitivity towards unstructured data, enterprises are experiencing an ever faster expansion of storage demands. According to IDC's Worldwide Quarterly Disk Storage Systems Tracker, 3Q12 marked the first time that external disk storage systems makers shipped over seven exabytes, or 7,104 petabytes, of capacity in a single quarter, for a year-over-year growth rate of 24.4 percent [source: Infostor]. This means that in the next 5-6 years there will be many organizations that hit an exabyte of enterprise data.
Storage Tiers
To get around this challenge of data explosion, enterprises adopt storage tiers, where data is organized into different classes based on how actively it is used. For example, very active data (the data modification rate is high and the data access rate is very high) requires that it be kept online in the fastest and most reliable storage tier [let's say tier 1], while the least active data [no data modification, and only accessed in special scenarios like past-data audit or recovery] can be archived to offline storage. This way, the enterprise provides the most resources to the most active data and efficiently reduces the cost of storing the less active data, as the sketch below illustrates.
Fig 1. Storage Tiers based on Data usage
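The following is a minimal sketch, not taken from the article, of how a tiering policy might classify a data set by its observed activity. The thresholds and tier names are illustrative assumptions, not any product's actual policy.

```python
def choose_tier(reads_per_day: int, writes_per_day: int) -> str:
    """Map observed access activity to a storage tier (illustrative thresholds)."""
    if writes_per_day > 100 or reads_per_day > 1000:
        return "tier-1"   # fast, highly reliable online storage (15K RPM disks / SSD)
    if reads_per_day > 10:
        return "tier-2"   # cheaper online storage, lower performance
    if reads_per_day > 0:
        return "tier-3"   # near-line storage, slower but still accessible
    return "archive"      # offline tape/archive, read only for audit or recovery

# Example: an old audit log that is never modified and almost never read
print(choose_tier(reads_per_day=0, writes_per_day=0))   # -> "archive"
```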
Fig. 2 Tapes and disks
Compared to archival data, Tier-1 storage employs high-end, faster disks [15K RPM], quite often along with SSDs [Solid State Disks]. SSDs are the new favourite in this segment, with vendors like Samsung and SanDisk competing with each other to bring out new products that are cheaper, denser and last longer. SSDs are a lot faster than disks and support true random read/write. With fast-falling prices, higher capacity and increased lifetime, solid-state drives are finding their place in a large way in tier-1 storage gear. Other advantages of SSDs are that they occupy less physical space, draw less electrical power and can transition from offline to online a lot quicker than disks. It will, however, take some time before we see SSDs completely replacing disks in this tier.
Fig 3. Simple stack comparison - SAN, NAS and DAS
Fig. 4 Tiered NAS storage organization in a Data Centre
Fig. 4 illustrates a typical enterprise with two data centres, each simultaneously serving its users as well as providing storage replication service to the other site, a popular configuration to support Site Disaster Recovery, while internally each data centre organizes data into 3 tiers. Tier 1 storage almost always comes in a primary-standby configuration in order to support high availability. A rough sketch of such a layout follows.
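Here is my own illustrative description of the dual-site layout described above, not taken from Fig. 4 itself: each site serves users, replicates to its peer, and organizes storage into three tiers, with Tier 1 in a primary-standby pair. The field names and media choices are assumptions.

```python
# Illustrative topology description for a two-site, three-tier deployment.
topology = {
    "site-A": {
        "replicates_to": "site-B",
        "tiers": {
            "tier-1": {"media": "15K RPM disk + SSD", "ha": "primary-standby"},
            "tier-2": {"media": "capacity disk",      "ha": "single"},
            "tier-3": {"media": "tape / archive",     "ha": "single"},
        },
    },
    "site-B": {
        "replicates_to": "site-A",
        "tiers": {
            "tier-1": {"media": "15K RPM disk + SSD", "ha": "primary-standby"},
            "tier-2": {"media": "capacity disk",      "ha": "single"},
            "tier-3": {"media": "tape / archive",     "ha": "single"},
        },
    },
}
```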
Cloud Storage
Thin Provisioning with Virtual Volume (courtesy: HDS)
Cloud as a concept became popular only after virtualization became successful at large scale. With virtualization, one could have hundreds of virtual servers running on a single physical server. With that came software that made provisioning hundreds of applications a matter of running a few software commands, which could be invoked remotely over HTTP. The ability to dynamically configure servers using software brought about a new paradigm where an application can be commissioned to run across multiple virtual servers (communicating with each other over a common communication structure), serving a large user base, entirely through software commands that an administrator can execute remotely from his desktop.

This type of server provisioning demanded a new way of storage provisioning. The concept of the virtual volume, or logical storage container, became popular. Now one can define multiple containers residing in the same physical storage volume and provision them to the server manager remotely. The concept of Thin Provisioning became predominant in storage provisioning: the idea is that a server is given a virtual volume that uses little physical storage to start with, but as it grows the physical storage allocation underneath also grows on demand. The advantage is that one does not need to plan for all the storage in advance; as the data grows, one can keep adding more storage to the virtual volume, making the virtual volume grow. That decoupled physical storage planning from the server's storage provisioning. Storage provisioning became dynamic, like virtualized server provisioning.

As long as the software can provision, monitor and manage the servers and the virtual volumes allotted to them over a software-defined interface, without errors and within acceptable performance degradation, the model can scale to any size. As is apparent, there is no real category called 'Cloud storage'; what we have, rather, is 'Cloud service'. The data centres behind it are designed and maintained the same way data centres have been designed and built all along, using a combination of NAS, SAN and DAS.
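A minimal sketch of the thin-provisioning idea described above: the virtual volume advertises a large logical size, but physical capacity is drawn from a shared pool only when data is actually written. The class and method names here are my own illustration, not any vendor's API.

```python
class PhysicalPool:
    """The physical storage backing one or more thin volumes."""
    def __init__(self, total_gb: int):
        self.total_gb = total_gb
        self.allocated_gb = 0

    def allocate(self, gb: int) -> None:
        if self.allocated_gb + gb > self.total_gb:
            raise RuntimeError("pool exhausted: add physical storage to grow the pool")
        self.allocated_gb += gb


class ThinVolume:
    """A virtual volume that consumes physical capacity only on write."""
    def __init__(self, logical_gb: int, pool: PhysicalPool):
        self.logical_gb = logical_gb   # size the server sees
        self.used_gb = 0               # physical storage actually consumed
        self.pool = pool

    def write(self, gb: int) -> None:
        # Physical capacity is drawn from the pool only at write time.
        self.pool.allocate(gb)
        self.used_gb += gb


pool = PhysicalPool(total_gb=1_000)             # physical storage behind the scenes
vol = ThinVolume(logical_gb=10_000, pool=pool)  # server is promised 10 TB up front
vol.write(50)                                   # only 50 GB of the pool is consumed so far
print(pool.allocated_gb)                        # -> 50
```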
Cloud provides a software framework to manage the resources in the data centres by bringing them into a common, sharable pool. Cloud, in that sense, is more about integrating and managing the resources and less about which storage technologies or systems per se are used in the data centre(s). Given that Cloud software is the essential element of a Cloud service, as long as the software is designed carefully, one can have any type of devices/systems below it, ranging from inexpensive storage arrays of JBOD (Just a Bunch Of Disks) to highly sophisticated HDS, HP or EMC disk arrays or NAS servers. The figure below from EMC's online literature illustrates this nicely.
It is apparent that as the cloud grows larger and larger, the complexity and sophistication of the software increase by orders of magnitude, and so does the cost advantage of data storage. One can look at the cost of provisioning (server and storage) in public clouds like those of Google, Rackspace and Amazon and imagine the complexity and sophistication of their Cloud management software. Fortunately, many have published a version of their software as open source for others to learn from and try.
source: http://managedview.emc.com/2012/08/the-software-defined-data-center/ (courtesy: EMC)
Further Reading:
Brocade Document on Data centre infrastructure
My slides on slideshare