Voldemort is a distributed data store designed to support fast, scalable read/write loads. It was not designed specifically with batch computation in mind, but its pluggable architecture allows multiple storage engines to live in the same framework. This makes it possible to integrate a fast, fault-tolerant online storage system with the heavy offline data crunching running on Hadoop.

Key Features

  • Data is automatically replicated over multiple servers
  • Data is automatically partitioned, so each server contains only a subset of the total data
  • Server failure is handled transparently
  • Pluggable serialization supports rich keys and values, including lists and tuples with named fields
  • Data items are versioned to maximize data integrity in failure scenarios without compromising availability of the system (see the client sketch after this list)
  • Each node is independent of the others, with no central point of failure or coordination
  • Good single-node performance: you can expect 10-20k operations per second depending on the machines, the network, the disk system, and the data replication factor
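
To make the versioned read/write model concrete, here is a minimal client sketch in the spirit of Voldemort's quick-start example. The bootstrap URL ("tcp://localhost:6666") and store name ("my_store") are placeholders, not values the cluster provides:

    import voldemort.client.ClientConfig;
    import voldemort.client.SocketStoreClientFactory;
    import voldemort.client.StoreClient;
    import voldemort.client.StoreClientFactory;
    import voldemort.versioning.Versioned;

    public class ClientExample {
        public static void main(String[] args) {
            // Bootstrap from any node; the client discovers the full
            // cluster topology from the bootstrap URL.
            StoreClientFactory factory = new SocketStoreClientFactory(
                    new ClientConfig().setBootstrapUrls("tcp://localhost:6666"));
            StoreClient<String, String> client = factory.getStoreClient("my_store");

            // Reads return a Versioned wrapper carrying version metadata...
            Versioned<String> value = client.get("some_key");

            // ...and writes hand that version back, so conflicting updates
            // made during a failure can be detected and resolved.
            value.setObject("some_value");
            client.put("some_key", value);
        }
    }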


Comparison to Relational Databases
Voldemort is not a relational database. Nor is it an object database that attempts to transparently map object reference graphs. Nor does it introduce a new abstraction such as document orientation. It is basically just a big, distributed, persistent, fault-tolerant hash table. For applications in this space, arbitrary in-database joins are already impossible, since not all the data is available in any single database. A typical pattern is to introduce a caching layer, which requires hash-table semantics anyway. For these applications Voldemort offers a number of advantages:

  • Voldemort combines in-memory caching with the storage system
  • Unlike MySQL replication, both reads and writes scale horizontally
  • Data partitioning is transparent, and allows for cluster expansion without rebalancing all data
  • Data replication and placement are decided by a simple API
  • The storage layer is completely mockable, so development and unit testing can be done against a throw-away in-memory storage system without needing a real cluster (see the sketch after this list)
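
The sketch below illustrates that last design point. It uses a hypothetical KeyValueStore interface standing in for voldemort.client.StoreClient; the interface, class names, and test are illustrative, not Voldemort's actual API:

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical stand-in for the store-client interface.
    interface KeyValueStore<K, V> {
        V get(K key);
        void put(K key, V value);
    }

    // Throw-away in-memory implementation for unit tests.
    class InMemoryStore<K, V> implements KeyValueStore<K, V> {
        private final Map<K, V> map = new HashMap<>();
        public V get(K key) { return map.get(key); }
        public void put(K key, V value) { map.put(key, value); }
    }

    // Application code depends only on the interface...
    class GreetingService {
        private final KeyValueStore<String, String> names;
        GreetingService(KeyValueStore<String, String> names) { this.names = names; }
        String greet(String userId) {
            String name = names.get(userId);
            return "Hello, " + (name == null ? "stranger" : name);
        }
    }

    // ...so it can be tested without a running cluster.
    public class GreetingServiceTest {
        public static void main(String[] args) {
            KeyValueStore<String, String> store = new InMemoryStore<>();
            store.put("u1", "Ada");
            GreetingService service = new GreetingService(store);
            System.out.println(service.greet("u1").equals("Hello, Ada"));      // true
            System.out.println(service.greet("u2").equals("Hello, stranger")); // true
        }
    }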


Hadoop and Voldemort

Batch output computed on Hadoop is moved into the Voldemort serving cluster in three steps:

  1. Construct step: Voldemort runs an extra MapReduce job to transform the output of a Hadoop job into a highly-optimized custom data and index format, saving the results onto HDFS.
  2. Pull step: Voldemort pulls the data and indexes from HDFS.
  3. Swap step: the indexes are memory mapped for serving, leveraging the operating system’s page cache to provide quick lookups (a sketch of the memory-mapping technique follows below).
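
The memory-mapping trick behind the swap step is general enough to show in isolation. The sketch below is purely illustrative: the fixed 16-byte entry layout (an 8-byte key hash followed by an 8-byte data-file offset) is an assumption made for the example, not Voldemort's real file format:

    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class MmapIndexLookup {
        private static final int ENTRY_SIZE = 16; // 8-byte key hash + 8-byte offset (assumed layout)
        private final MappedByteBuffer index;
        private final int entryCount;

        public MmapIndexLookup(String path) throws Exception {
            try (RandomAccessFile file = new RandomAccessFile(path, "r");
                 FileChannel channel = file.getChannel()) {
                // Map the whole index file read-only. Pages are faulted in
                // lazily and kept hot by the OS page cache, so repeated
                // lookups are served from memory rather than disk.
                index = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
                entryCount = (int) (channel.size() / ENTRY_SIZE);
            }
        }

        // Binary search over the sorted, fixed-size entries; returns the
        // data-file offset for keyHash, or -1 if the key is absent.
        public long lookup(long keyHash) {
            int lo = 0, hi = entryCount - 1;
            while (lo <= hi) {
                int mid = (lo + hi) >>> 1;
                long hash = index.getLong(mid * ENTRY_SIZE);
                if (hash < keyHash) lo = mid + 1;
                else if (hash > keyHash) hi = mid - 1;
                else return index.getLong(mid * ENTRY_SIZE + 8);
            }
            return -1;
        }
    }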
