Wednesday, December 12, 2012

Storage Deduplication: a technology quick-digest

Six years ago, when we started the journey with Storage Deduplication technology, I was excited at the opportunity to be associated with something that was posited to be the game-changer for the storage industry in the near future. It was not that something conceptually new was being created. The newness came from the fact that enterprise customers, technology analysts and leading storage vendors all saw, from their own perspectives, the importance and immediate need of deduplication in the storage jigsaw puzzle. While Data Domain was the biggest newsmaker that sailed on deduplication, many others came up with their own deduplication solutions. And as it turned out, storage deduplication did become the game changer for the storage industry for the next couple of years. If you need a reference, Google for Data Domain or NetApp's guarantee of 50% savings with VMware VMs.
This post is mostly an attempt to look back and distill the technology to its bare essence for present-day readers.

Why and What

Essentially, the value of deduplication rests on the common observation that digital content tends to be copied multiple times; in an organization's IT setup, one often sees 7-14 copies of the same data lying across a large but finite storage pool. Think about a document shared among multiple users and also saved inside multiple backup images (I am talking about disk backup alone here) taken daily or weekly. Intuitively, one cannot but marvel at the possibility of a magic tool that would wipe the duplicate copies clean, transparently, without any specific user's knowledge. If that were possible, one can imagine the cost savings it would provide to the IT manager, who is forever juggling a precious storage budget.

How

Given that modern digital storage is organized in layers, decoupling the physical storage layout from the logical view presented to users, it turns out that building such a tool is not only feasible but can be done without any major change to the storage architecture. In the most simplistic version, the tool would scan the storage pool and, for each logically identifiable piece of content, i.e. a file, find the duplicates, remove the duplicate content and convert the duplicate files into pointers to one original copy. In software parlance, this is quite similar to multiple directory entries pointing to a single inode.
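To make that concrete, here is a minimal sketch of such a file-level tool in Python, assuming a directory tree on a single filesystem where duplicates can simply be replaced by hard links; the function name dedupe_tree and the example path are illustrative, not taken from any real product.

```python
# A minimal sketch of file-level deduplication: hash every file's content,
# keep the first copy seen for each digest, and replace later copies with
# hard links to it (much like several directory entries sharing one inode).
import hashlib
import os

def dedupe_tree(root):
    first_copy = {}                      # content digest -> path of the original
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha1(f.read()).hexdigest()
            if digest in first_copy:
                os.remove(path)                    # drop the duplicate bytes
                os.link(first_copy[digest], path)  # ...and point at the original
            else:
                first_copy[digest] = path

if __name__ == "__main__":
    dedupe_tree("/tmp/storage-pool")     # hypothetical directory to scan
```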
But there are a couple of major challenges with this approach:
1. it will miss the duplicate data in files that have one (or more) new blocks (of bytes) appended [or even a block removed or modified] relative to the original file, and
2. it does not quite work in a SAN setup, where all data is seen as a stream of bytes.
To solve these, one needs to look at the physical organization of data. Most modern disk or solid-state storage systems organize data as a series of contiguous, fixed-size chunks of bytes called blocks. That gives one the opportunity to think about a block-level deduplication engine. Given that blocks are relatively small [less than 512KB] and the block layout on disk is largely independent of the logical view of the storage [files in NAS or streams in SAN], it is a lot easier to manipulate block pointers while leaving the top-level applications undisturbed, that is, assuming that the block read path and block write path of applications remain unchanged by all the deduplication-induced manipulation of blocks.
As we shall see later, that assumption is not entirely necessary, especially when one can afford to be specific, e.g. when building a dedicated backup appliance that intends to use deduplication in a major way.
The puzzle of deduplication, essentially, is therefore to find an efficient way to 1. design a mechanism to store duplicate information [which logical block(s) map to which physical block] for a storage container, and 2. use that information to sort out new data as it is being written to the storage subsystem. To appreciate and understand the problem better, I will pick a slightly different but algorithmically similar puzzle.
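Before turning to that puzzle, here is a toy sketch of the bookkeeping just described, under the assumption of fixed-size blocks and an in-memory "physical store"; the names fingerprint_index, block_map and write_block are inventions for illustration, not any vendor's actual design.

```python
# A toy sketch of block-level bookkeeping: a fingerprint index maps a block's
# hash to the physical block that already holds those bytes, and a block map
# records which physical block each logical block points to.
import hashlib

BLOCK_SIZE = 4096

physical_store = []        # physical block id -> bytes
fingerprint_index = {}     # sha1(block) -> physical block id
block_map = {}             # (volume, logical block number) -> physical block id

def write_block(volume, lbn, data):
    """Write one logical block, sharing physical storage with any duplicate."""
    fp = hashlib.sha1(data).digest()
    pbn = fingerprint_index.get(fp)
    if pbn is None:                       # first time we see these bytes
        physical_store.append(data)
        pbn = len(physical_store) - 1
        fingerprint_index[fp] = pbn
    block_map[(volume, lbn)] = pbn        # a duplicate write just adds a pointer

def read_block(volume, lbn):
    return physical_store[block_map[(volume, lbn)]]

write_block("vol0", 0, b"A" * BLOCK_SIZE)
write_block("vol0", 7, b"A" * BLOCK_SIZE)   # same bytes, no new physical block
assert read_block("vol0", 7) == b"A" * BLOCK_SIZE
print(len(physical_store), "physical block(s) stored for 2 logical blocks")
```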

Under the hood

Imagine that we have billions (or trillions) of data blocks with millions of duplicate entries, and we have to find the duplicates and remove them.
The easy approach would be to 1. sort the blocks so that duplicate blocks end up next to each other, and 2. scan the sorted list and, for each unique block, check whether the next block is identical to it; if it is, remove it and redirect the reference of the duplicate block so that it points to the first unique copy (see the sketch below). The cost of this approach is one sort plus one scan per deduplication cycle. Can we improve it further?
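A small sketch of this sort-and-scan cycle, with the block list held in memory and the "references" reduced to a simple index remapping, purely for illustration:

```python
# Sort-and-scan deduplication: sort the blocks so identical ones become
# neighbours, then make a single pass keeping only the first block of each
# run and redirecting the rest to it.
def sort_and_scan_dedupe(blocks):
    order = sorted(range(len(blocks)), key=lambda i: blocks[i])   # one sort
    remap, keep = {}, None
    for i in order:                                               # one scan
        if keep is None or blocks[i] != blocks[keep]:
            keep = i                      # first (unique) copy in this run
        remap[i] = keep                   # duplicates point back to it
    unique = {i: blocks[i] for i in set(remap.values())}
    return unique, remap

unique, remap = sort_and_scan_dedupe([b"x", b"y", b"x", b"z", b"y"])
print(len(unique), "unique blocks,", remap)
```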
Evidently, the only way we can improve is to somehow avoid the sort in each cycle. There is another scope for improvement: given that each data-block comparison takes a finite amount of processing depending on the block size, we can make the cycle faster if we avoid full-block comparison. One way of doing this is to hash the blocks to m bits [m should be sufficiently large and the hash function reasonably sophisticated, e.g. SHA-1, so that the probability of two different blocks having the same hash value is small enough to ignore; at the same time m should be small compared to the size of a data block, so that there is appreciable saving] and create a Bloom filter of k bits that can accommodate [within a reasonable margin of error] all possible m-bit hashes. One advantage of the Bloom filter is that the cost of checking whether a hash is present is constant. Once the hashes and the Bloom filter are created, one can determine whether a block is a duplicate, and find its copy in the list, at constant cost without needing to sort the list. That is a big improvement, provided one can ignore the additional cost of maintaining the Bloom filter and the hashes.
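Here is a rough sketch of that scheme, assuming SHA-1 fingerprints, an ordinary dictionary as the hash index and deliberately small Bloom-filter parameters; none of the names or sizes below come from any particular product.

```python
# Hashed deduplication: blocks are fingerprinted with SHA-1, a Bloom filter
# gives a constant-time "definitely new / possibly seen" answer, and a
# fingerprint-to-location index finds the existing copy when needed.
import hashlib

class BloomFilter:
    def __init__(self, num_bits=1 << 20, num_probes=4):
        self.num_bits = num_bits
        self.num_probes = num_probes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, fingerprint):
        for probe in range(self.num_probes):
            h = hashlib.sha1(fingerprint + bytes([probe])).digest()
            yield int.from_bytes(h[:8], "big") % self.num_bits

    def add(self, fingerprint):
        for pos in self._positions(fingerprint):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def maybe_contains(self, fingerprint):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(fingerprint))

bloom = BloomFilter()
location_of = {}           # fingerprint -> where the unique copy lives

def dedupe_incoming(block, location):
    """Return the location of the stored copy for this block's bytes."""
    fp = hashlib.sha1(block).digest()
    if bloom.maybe_contains(fp) and fp in location_of:
        return location_of[fp]            # duplicate: reuse the existing copy
    bloom.add(fp)                         # new block: remember it
    location_of[fp] = location
    return location

print(dedupe_incoming(b"block-A", 0))     # -> 0 (new block)
print(dedupe_incoming(b"block-A", 1))     # -> 0 (duplicate detected)
```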
In a dynamic system there will be more data coming in as time progresses, and the list will continue to grow, which is where the above scheme becomes even more beneficial compared to the first one. Most commercial storage deduplication solutions use variants of the above scheme. However, some implement it in inline mode, i.e. data is deduplicated as it arrives, before being written to storage (e.g. disks), while a few others implement it in offline mode, i.e. the normal data read/write paths are not disturbed and the deduplication engine runs as a post-data-write process, somewhat similar to a file-system maintenance cycle. Obviously, inline mode requires large deduplication processing capacity, whereas offline mode imposes less of a processing tax, especially during peak load.
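The difference between the two modes is really just where the deduplication work sits relative to the write path; the hypothetical DedupStore below is only a stand-in for the engine sketched earlier, not how any real appliance is built.

```python
# Inline vs offline deduplication: inline pays the hashing cost in the write
# path, offline writes data as-is and deduplicates in a background pass.
import hashlib

class DedupStore:
    def __init__(self):
        self.blocks, self.index = [], {}

    def append_unique(self, block):
        fp = hashlib.sha1(block).digest()
        if fp not in self.index:
            self.index[fp] = len(self.blocks)
            self.blocks.append(block)
        return self.index[fp]

def inline_write(store, block):
    """Inline mode: deduplicate before the data lands on disk."""
    return store.append_unique(block)

def offline_write(staging, block):
    """Offline (post-process) mode: write as-is; a later pass deduplicates."""
    staging.append(block)

def offline_maintenance(staging, store):
    """Background job, run off-peak, similar to a file-system maintenance cycle."""
    while staging:
        store.append_unique(staging.pop(0))

store, staging = DedupStore(), []
inline_write(store, b"same bytes")
offline_write(staging, b"same bytes")
offline_maintenance(staging, store)
print(len(store.blocks), "unique block(s) after both paths")
```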

The case of the Backup Appliance

Alright, so far so good. How can we improve this if the data is primarily backup data? One characteristic of backup data is that it is written once, never modified and read very rarely. Two things are important here:
1. how much one can squeeze the data, and
2. how one can improve the reliability of reading the data back.
Experience with digital tape tells us that reading back taped data often fails. Disk backup solves that problem. With disk backup one can also employ deduplication quite efficiently, since roughly 90% of the data between two consecutive backup images remains unchanged, which means roughly 90% of the second image is an outright duplicate! A backup appliance takes advantage of that and organizes the data in such a way that it benefits the most from deduplication, sometimes even at a small penalty to the read path. Additionally, one can compress the data after deduplication, which provides further storage savings. Most modern backup appliances apply compression after deduplication, which means that during a read the relevant part of the backup image has to be decompressed before a read request for a specific data block can be served. As long as the logical-data-to-physical-block mapping is preserved in the image, deduplicated blocks continue to remain deduplicated; a small sketch of this layout follows.
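This is a sketch of the compress-after-dedupe idea, assuming 4 KiB blocks, SHA-1 fingerprints and zlib compression; the image format here (a list of compressed unique blocks plus a pointer table) is invented purely for illustration.

```python
# Compress after deduplication: unique blocks are found first, then each
# unique block is compressed; reading a block means following its pointer
# and decompressing only that one unique block.
import hashlib
import zlib

def build_backup_image(data, block_size=4096):
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    compressed_unique, index, pointers = [], {}, []
    for block in blocks:
        fp = hashlib.sha1(block).digest()
        if fp not in index:                                  # deduplicate first...
            index[fp] = len(compressed_unique)
            compressed_unique.append(zlib.compress(block))   # ...then compress
        pointers.append(index[fp])
    return compressed_unique, pointers

def read_block(image, block_number):
    compressed_unique, pointers = image
    return zlib.decompress(compressed_unique[pointers[block_number]])

image = build_backup_image(b"A" * 8192 + b"B" * 4096)   # two identical 4 KiB blocks
print(len(image[0]), "compressed unique blocks for", len(image[1]), "logical blocks")
assert read_block(image, 1) == b"A" * 4096
```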

Disadvantages, if any

There is hardly any engineering solution that does not have a flip side. Deduplication, whether done inline or offline, inflicts one problem: it breaks the contiguity of data, thereby reducing the effectiveness of the storage system's read-ahead. To address this issue, commercial vendors typically employ implementation-specific heuristics, which vary widely across vendors and are therefore out of the scope of the present deliberation.
There are, however, a couple of open-source deduplication solutions. Unfortunately, I have no data on how popular they have been. With the growing popularity of the storage cloud, the need for and use of deduplication has undoubtedly waned a bit. I hope to revisit the case of deduplication in a cloud setup sometime on this blog.