I mentioned in my last post that the next post would be on infrastructure for large unstructured data. But rather than jumping straight into the available infrastructure, I felt a better approach would be to start with a clean slate and then pick what we need.
To keep it simple, let us focus on the two primary dimensions we already know: the storage capacity and the processing capacity of the data analytics engine.
In the last post we saw how commercial products from the big houses try to provide a unified solution for structured data. From the vendors' point of view the solution helps the customer, since IT managers are traditionally used to thinking of capacity building in terms of adding more devices to the pool. Just as they would add a new NAS box to increase storage capacity in an existing NAS setup, one can partition the analytics data and keep adding Big Data Appliances to address the added capacity need in each partition. This works because most structured data, even when it becomes big, follows a pattern that is not very different from when it was small. But with unstructured data it is difficult to forecast the analytics pattern: even when the data source and repository are one, there may be multiple analytics engines addressing different analytics needs.
So if we are to build a scalable solution, it makes sense to first look at what we need to build a large data store that scales at linear cost, and then address how we can adapt our analytics engine to that store.
The 'unstructured' property of the data can be an advantage!
Unstructured means no implicit relation, of the kind one could use to map the data to tables and so on, can be expected of the data. If we force a relational construct on the data, we create artificial constraints that become a hindrance later: imposing a particular relational structure implicitly removes the other possible relations from the data design, and if the analytics engine later has to find those relations, it must first deconstruct the imposed structure, which is a kind of double taxation.
Base model
The second issue is that some compute server can be down at times. If we are to ensure availability of data across these unit failures, we have to keep redundant copies of the data in the network. That adds a challenge with respect to keeping the entire data set consistent at all times, but we will come back to that aspect later.
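To make the base model concrete, here is a minimal Python sketch of keeping redundant copies across nodes; all names (put, get, REPLICAS) are illustrative assumptions, not a reference implementation. Each object is written to a few distinct nodes, so a read can still be served when one node is down:

```python
import hashlib

REPLICAS = 3                            # hypothetical replication factor
nodes = {i: {} for i in range(8)}       # node_id -> local key/value store
down = set()                            # nodes currently unreachable

def replica_ids(key):
    """Deterministically pick REPLICAS distinct nodes for a key."""
    start = int(hashlib.md5(key.encode()).hexdigest(), 16) % len(nodes)
    return [(start + i) % len(nodes) for i in range(REPLICAS)]

def put(key, value):
    # Write to every reachable replica; an unreachable one silently
    # misses the update -- the consistency problem we return to later.
    for nid in replica_ids(key):
        if nid not in down:
            nodes[nid][key] = value

def get(key):
    # Any one live replica is enough to serve the read.
    for nid in replica_ids(key):
        if nid not in down and key in nodes[nid]:
            return nodes[nid][key]
    raise KeyError(key)

put("doc-42", "hello")
down.add(replica_ids("doc-42")[0])      # one compute unit goes down...
print(get("doc-42"))                    # ...but the data is still available
```

Note that put silently skips the downed node, leaving its copy stale: exactly the consistency challenge flagged above.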
Improving on the basic design
One way of addressing the unpredictability of the retrieval function would be to find out the primary search/query patterns and create a metadata store that makes the search predictable. That way we are not imposing structure on the data store, but adding a little extra information, derived by processing the data itself, so that search output is consistent. To illustrate, consider a data store holding all web content inside an organization. If we find that the most frequent queries are based on, say, a hundred words, we can build an index that maps each of those hundred words to all matching web content. If the next search comes on any of those words, retrieval will be a lot faster.
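As a toy illustration of such a metadata store, here is a minimal inverted index in Python; the document ids and whitespace tokenizer are hypothetical simplifications:

```python
from collections import defaultdict

index = defaultdict(set)                # word -> ids of matching documents

def add_document(doc_id, text):
    # Process the data once, up front, to build the metadata.
    for word in text.lower().split():
        index[word].add(doc_id)

def search(word):
    # Lookup is a single dictionary access instead of a full scan.
    return index.get(word.lower(), set())

add_document("wiki/intranet", "project plan for the intranet portal")
add_document("wiki/hr", "holiday plan and leave policy")
print(search("plan"))                   # {'wiki/intranet', 'wiki/hr'}
```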
Addressing Modification/update [ACID and CAP]
This looks fine as long as we assume that data always gets added in a consistent and predictable manner, so that all updates/modifications are separately recognizable and traceable. However, with the multiplicity of data sources, the asynchronous nature of updates and the unstructured-ness of the data, that assumption does not hold much water. And there comes the next problem. One advantage that an RDBMS provides is inherent support for the ACID requirement: the ability to preserve Atomicity, Consistency, Isolation and Durability of all database transactions. How do we support this requirement in our design? We can trivially support it if we serialize all transactions [see figure], which means transaction B does not start until transaction A is completely committed to the database. Now what happens if the connection to the compute unit that holds a particular piece of data fails in between? All requests wait indefinitely for the transaction to complete, which basically means the system becomes unavailable. That brings us to another interesting aspect of distributed data stores. Brewer's CAP theorem tells us that no distributed system can guarantee Consistency of data, Availability of the system and Tolerance to Partition of the store [across the network] all together; a system can only guarantee two of the three at a time. This page provides a more elaborate explanation.
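A minimal sketch of what "serialize all transactions" means, with a single global lock standing in for the serializer (the names are illustrative, not a real transaction manager):

```python
import threading

store = {}                              # the shared data store
commit_lock = threading.Lock()          # global serializer

def run_transaction(updates):
    # Transaction B blocks here until transaction A has fully committed.
    with commit_lock:
        staged = dict(store)            # work on a private copy
        staged.update(updates)
        store.clear()
        store.update(staged)            # commit all writes at once

# If the holder of commit_lock never returns (e.g. its compute unit is
# unreachable), every later transaction waits indefinitely -- the
# unavailability described above.
run_transaction({"balance:alice": 90, "balance:bob": 110})
print(store)
```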
So that we do not get confused: CAP describes a property of the distributed system as a whole, while the ACID requirement applies specifically to database transactions. As a broad correlation, Consistency in CAP roughly corresponds to the Atomicity and Isolation of transactions in ACID; the A and P properties have no ACID counterpart.
Daniel Abadi, an assistant professor at Yale, has explained the issue with a lot more rigour here. He brings in Latency as another dimension alongside Consistency and argues that if the data store is partitioned [i.e. maintained in two different data centres], the system chooses either Consistency or Availability, and if it prioritises Availability over Consistency, it also chooses Latency over Consistency. Examples he cites of this type of system are Cassandra and Amazon's Dynamo.
The other type of system is the fully ACID-compliant [traditional RDB] system, and he shows that this type of data store makes consistency paramount and in turn compromises on the Availability and Latency factors. This is intuitive. If the data store is divided into partitions and each partition keeps one replica, then when the network between the two data centres breaks down, a database that chooses consistency [as for banking transactions] will make the system unavailable until both sides are up; otherwise the two replicas would soon diverge from each other, rendering the database inconsistent overall.
But if Availability (and therefore Latency) is prioritized, the system allows updates to continue even if one partition fails, thereby making the database relatively inconsistent for that time. In this case, the responsibility for maintaining consistency of the data is transferred to the application accessing it [pushing some work downstream]. One advantage is that it makes the data store design simpler.
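A minimal sketch of what pushing consistency to the application can look like: the reader reconciles two divergent replicas with a last-write-wins rule, similar in spirit to Dynamo-style read repair. The names and the resolution policy are assumptions for illustration:

```python
replica_a, replica_b = {}, {}
clock = 0                               # logical clock; real systems use
                                        # vector clocks or similar

def write(replica, key, value):
    global clock
    clock += 1
    replica[key] = (value, clock)       # tag each write with its version

def read_repair(key):
    """Read both replicas, keep the newest version, repair the stale one."""
    versions = [v for v in (replica_a.get(key), replica_b.get(key)) if v]
    value, version = max(versions, key=lambda v: v[1])
    replica_a[key] = replica_b[key] = (value, version)   # repair
    return value

write(replica_a, "cart", ["book"])          # update reaches A only
write(replica_b, "cart", ["book", "pen"])   # later update reaches B only
print(read_repair("cart"))                  # ['book', 'pen']
```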
Eventually Consistent Database
The concept of an Eventually Consistent Database is also attributed to Eric Brewer. He described consistency as a range. Strict consistency is what an RDBMS provides. Weak consistency means the system allows a window of inconsistency during which the most recent update will not be seen by all clients; some conditions must be met before the data reaches a fully consistent state.
Eventual consistency is a specific form of weak consistency. Quoting Werner Vogels: "the storage system guarantees that if no new updates are made to the object, eventually all accesses will return the last updated value. If no failures occur, the maximum size of the inconsistency window can be determined based on factors such as communication delays, the load on the system, and the number of replicas involved in the replication scheme."
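A minimal sketch of the inconsistency window Vogels describes: writes land on a primary and reach the replica only when an asynchronous queue drains. All names are illustrative:

```python
from collections import deque

primary, replica = {}, {}
replication_queue = deque()             # updates waiting to propagate

def write(key, value):
    primary[key] = value
    replication_queue.append((key, value))   # replicate later, not now

def propagate_one():
    # Simulates one step of the background replication process.
    if replication_queue:
        key, value = replication_queue.popleft()
        replica[key] = value

write("greeting", "hello")
print(replica.get("greeting"))          # None: inside the inconsistency window
propagate_one()
print(replica.get("greeting"))          # 'hello': eventually consistent
```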
Why Eventual Consistency?
One successful system that uses this consistency model is DNS. Even though a DNS name update may not reach all DNS nodes as soon as it occurs, the protocol ensures that the update reaches all nodes eventually, after sufficient time has elapsed. Vogels elaborates on different aspects and variations of eventual consistency in his paper [see reference], which are crucial factors when one designs a data store.
The reason eventual consistency is important is that a lot of data usage does not need strict consistency. Adopting the eventual consistency model for such data opens up an opportunity to build cheap, distributed, highly scalable, yet reliable and much faster databases. Here we enter the realm of the so-called NoSQL databases.
NoSQL Movement
The NoSQL movement started with Web 2.0 startups, when they decided to build their own data stores because the data they were interested in [page ranks of web pages, or Facebook content] did not fit the relational model. Yahoo brought up Hadoop, Google brought up BigTable and Amazon brought up Dynamo; Facebook later developed Cassandra. Hadoop and Cassandra became open-source Apache projects after Yahoo and Facebook forked their software. Now of course there are many other NoSQL alternatives, like MongoDB and HBase [an open-source version of BigTable]. Hadoop, incidentally, has many adopters, even among established storage players, as shown in the table below.
Hadoop Ecosystem [borrowed table]
Reference:
- A compilation paper on NoSQL databases