
Chapter 10. Architecture

10.1. Client

The HBase client HTable is responsible for finding RegionServers that are serving the particular row range of interest. It does this by querying the .META. and -ROOT- catalog tables (TODO: Explain). After locating the required region(s), the client directly contacts the RegionServer serving that region (i.e., it does not go through the master) and issues the read or write request. This information is cached in the client so that subsequent requests need not go through the lookup process. Should a region be reassigned either by the master load balancer or because a RegionServer has died, the client will requery the catalog tables to determine the new location of the user region.

Administrative functions are handled through HBaseAdmin.
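
For illustration, a minimal sketch of creating and later removing a table through HBaseAdmin; the table name "myTable" and column family "cf" are hypothetical:

HBaseConfiguration conf = HBaseConfiguration.create();
HBaseAdmin admin = new HBaseAdmin(conf);

// Create a table with a single column family.
HTableDescriptor desc = new HTableDescriptor("myTable");
desc.addFamily(new HColumnDescriptor("cf"));
admin.createTable(desc);

// Later: disable, then drop, the table. These calls are serviced by the Master.
admin.disableTable("myTable");
admin.deleteTable("myTable");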

10.1.1. Connections

For connection configuration information, see Section 2.6.4, “Client configuration and dependencies connecting to an HBase cluster”.

HTable instances are not thread-safe. When creating HTable instances, it is advisable to use the same HBaseConfiguration instance. This will ensure sharing of ZooKeeper and socket instances to the RegionServers, which is usually what you want. For example, this is preferred:

HBaseConfiguration conf = HBaseConfiguration.create();
HTable table1 = new HTable(conf, "myTable");
HTable table2 = new HTable(conf, "myTable");

as opposed to this:

HBaseConfiguration conf1 = HBaseConfiguration.create();
HTable table1 = new HTable(conf1, "myTable");
HBaseConfiguration conf2 = HBaseConfiguration.create();
HTable table2 = new HTable(conf2, "myTable");

For more information about how connections are handled in the HBase client, see HConnectionManager.

10.1.1.1. Connection Pooling

For applications which require high-end multithreaded access (e.g., web-servers or application servers that may serve many application threads in a single JVM), see HTablePool.
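
A minimal sketch of HTablePool usage, assuming a hypothetical table "myTable"; the pool size is illustrative:

HBaseConfiguration conf = HBaseConfiguration.create();
HTablePool pool = new HTablePool(conf, 10);  // cache at most 10 HTable references per table

HTableInterface table = pool.getTable("myTable");
try {
  Result r = table.get(new Get(Bytes.toBytes("row1")));
} finally {
  pool.putTable(table);  // return the reference to the pool instead of discarding it
}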

10.1.2. WriteBuffer and Batch Methods

If Section 11.5.4, “HBase Client: AutoFlush” is turned off on HTable, Puts are sent to the RegionServers when the write buffer is filled. The write buffer is 2MB by default. Before an HTable instance is discarded, either close() or flushCommits() should be invoked so that Puts will not be lost.

Note: htable.delete(Delete) does not go in the write buffer; the write buffer applies only to Puts.

For additional information on write durability, review the ACID semantics page.

For fine-grained control of batching of Puts or Deletes, see the batch methods on HTable.
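
A minimal sketch of both techniques, assuming a hypothetical table "myTable" with column family "cf"; row keys and values are illustrative:

HBaseConfiguration conf = HBaseConfiguration.create();
HTable table = new HTable(conf, "myTable");

// Buffer Puts client-side; they are sent when the write buffer fills,
// or explicitly on flushCommits()/close().
table.setAutoFlush(false);

Put put = new Put(Bytes.toBytes("row1"));
put.add(Bytes.toBytes("cf"), Bytes.toBytes("attr1"), Bytes.toBytes("value1"));
table.put(put);

// Fine-grained batching of mixed Puts and Deletes via HTable.batch().
// Row (org.apache.hadoop.hbase.client.Row) is the common supertype of Put and Delete;
// List/ArrayList are from java.util.
Put put2 = new Put(Bytes.toBytes("row2"));
put2.add(Bytes.toBytes("cf"), Bytes.toBytes("attr1"), Bytes.toBytes("value2"));
List<Row> actions = new ArrayList<Row>();
actions.add(put2);
actions.add(new Delete(Bytes.toBytes("row3")));
Object[] results = new Object[actions.size()];
table.batch(actions, results);

table.flushCommits();  // push any still-buffered Puts before discarding the HTable
table.close();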

10.1.3. Filters

Get and Scan instances can be optionally configured with filters which are applied on the RegionServer.
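
For example, a minimal sketch of a Scan with a server-side filter; the table "myTable", family "cf", qualifier "attr", and value are hypothetical:

HBaseConfiguration conf = HBaseConfiguration.create();
HTable table = new HTable(conf, "myTable");

Scan scan = new Scan();
// Only rows whose cf:attr cell equals "value1" are returned; the comparison
// happens on the RegionServer, not in the client.
scan.setFilter(new SingleColumnValueFilter(
    Bytes.toBytes("cf"), Bytes.toBytes("attr"),
    CompareFilter.CompareOp.EQUAL, Bytes.toBytes("value1")));

ResultScanner scanner = table.getScanner(scan);
try {
  for (Result result : scanner) {
    // process each matching row
  }
} finally {
  scanner.close();
}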

10.2. Daemons

10.2.1. Master

HMaster is the implementation of the Master Server. The Master server is responsible for monitoring all RegionServer instances in the cluster, and is the interface for all metadata changes.

10.2.1.1. Startup Behavior

If run in a multi-Master environment, all Masters compete to run the cluster. If the active Master loses its lease in ZooKeeper (or the Master shuts down), the remaining Masters jostle to take over the Master role.

10.2.1.2. Interface

The methods exposed by HMasterInterface are primarily metadata-oriented methods:

  • Table (createTable, modifyTable, removeTable, enable, disable)
  • ColumnFamily (addColumn, modifyColumn, removeColumn)
  • Region (move, assign, unassign)

For example, when the HBaseAdmin method disableTable is invoked, it is serviced by the Master server.

10.2.1.3. Processes

The Master runs several background threads:

  • LoadBalancer periodically reassigns regions in the cluster.
  • CatalogJanitor periodically checks and cleans up the .META. table.

 

10.2.2. RegionServer

HRegionServer is the RegionServer implementation. It is responsible for serving and managing regions.

10.2.2.1. Interface

The methods exposed by HRegionInterface contain both data-oriented and region-maintenance methods:

  • Data (get, put, delete, next, etc.)
  • Region (splitRegion, compactRegion, etc.)

For example, when the HBaseAdmin method majorCompact is invoked on a table, the client is actually iterating through all regions for the specified table and requesting a major compaction directly to each region.
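
A minimal sketch of that call, assuming a hypothetical table named "myTable":

HBaseConfiguration conf = HBaseConfiguration.create();
HBaseAdmin admin = new HBaseAdmin(conf);
// The client looks up each region of "myTable" and asks its hosting
// RegionServer for a major compaction of that region.
admin.majorCompact("myTable");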

10.2.2.2. Processes

The RegionServer runs a variety of background threads:

  • CompactSplitThread checks for splits and handles minor compactions.
  • MajorCompactionChecker checks for major compactions.
  • MemStoreFlusher periodically flushes in-memory writes in the MemStore to StoreFiles.
  • LogRoller periodically checks the RegionServer’s HLog.

10.3. Regions

This section is all about Regions.

Note

Regions are comprised of a Store per Column Family.

10.3.1. Region Size

Region size is one of those tricky things; there are a few factors to consider:

  • Regions are the basic element of availability and distribution.
  • HBase scales by having regions across many servers. Thus, if you have only 2 regions for 16GB of data on a 20-node cluster, most of the cluster sits idle.
  • A high region count has been known to make things slow; this is getting better, but it is probably better to have 700 regions than 3000 for the same amount of data.
  • A low region count prevents parallel scalability as per point #2. This really can't be stressed enough, since a common problem is loading 200MB of data into HBase and then wondering why your awesome 10-node cluster is mostly idle.
  • There is not much difference in memory footprint between 1 region and 10 in terms of indexes, etc., held by the RegionServer.

It's probably best to stick to the default, perhaps going smaller for hot tables (or manually splitting hot regions to spread the load over the cluster), or going with a 1GB region size if your cell sizes tend to be largish (100KB and up).
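
If a particular table warrants a different split size, one place it can be set is on the table descriptor at creation time; a minimal sketch, with the table name, family name, and the 1GB figure purely illustrative:

HTableDescriptor desc = new HTableDescriptor("myTable");
desc.addFamily(new HColumnDescriptor("cf"));
// Regions of this table split once a store grows past roughly 1GB,
// overriding the cluster-wide hbase.hregion.max.filesize default.
desc.setMaxFileSize(1024L * 1024L * 1024L);

HBaseAdmin admin = new HBaseAdmin(HBaseConfiguration.create());
admin.createTable(desc);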

10.3.2. Region Splits

Splits run unaided on the RegionServer; i.e., the Master does not participate. The RegionServer splits a region, offlines the split region, adds the daughter regions to .META., opens the daughters on the parent’s hosting RegionServer, and then reports the split to the Master. See Section 2.8.2.7, “Managed Splitting” for how to manually manage splits (and for why you might do this).

10.3.3. Region Load Balancer

Periodically, and when there are not any regions in transition, a load balancer will run and move regions around to balance cluster load. The period at which it runs can be configured.

10.3.4. Store

A Store hosts a MemStore and 0 or more StoreFiles (HFiles). A Store corresponds to a column family for a table for a given region.

10.3.4.1. MemStore

The MemStore holds in-memory modifications to the Store. Modifications are KeyValues. When asked to flush, the current MemStore is moved to a snapshot and a new MemStore is started. HBase continues to serve edits out of the new MemStore and the backing snapshot until the flusher reports that the flush succeeded; at that point the snapshot is discarded.

10.3.4.2. StoreFile (HFile)

10.3.4.2.1. HFile Format

The hfile file format is based on the SSTable file described in the BigTable [2006] paper and on Hadoop’s tfile (The unit test suite and the compression harness were taken directly from tfile). Schubert Zhang’s blog post on HFile: A Block-Indexed File Format to Store Sorted Key-Value Pairs makes for a thorough introduction to HBase’s hfile. Matteo Bertozzi has also put up a helpful description, HBase I/O: HFile.

10.3.4.2.2. HFile Tool

To view a textualized version of hfile content, you can use the org.apache.hadoop.hbase.io.hfile.HFile tool. Type the following to see usage:

$ ${HBASE_HOME}/bin/hbase org.apache.hadoop.hbase.io.hfile.HFile 

For example, to view the content of the file hdfs://10.81.47.41:9000/hbase/TEST/1418428042/DSMP/4759508618286845475, type the following:

 $ ${HBASE_HOME}/bin/hbase org.apache.hadoop.hbase.io.hfile.HFile -v -f hdfs://10.81.47.41:9000/hbase/TEST/1418428042/DSMP/4759508618286845475 

Leave off the -v option to see just a summary of the hfile. See the usage output for other things the HFile tool can do.

10.3.4.3. Compaction

There are two types of compactions: minor and major. Minor compactions will usually pick up a couple of the smaller adjacent files and rewrite them as one. Minor compactions do not drop deletes or expired cells; only major compactions do that. Sometimes a minor compaction will pick up all the files in the store, in which case it actually promotes itself to being a major compaction. For a description of how a minor compaction picks files to compact, see the ascii diagram in the Store source code.

After a major compaction runs there will be a single storefile per store, which usually helps performance. Caution: major compactions rewrite all of the store's data, and on a loaded system this may not be tenable; major compactions will usually have to be done manually on large systems. See Section 2.8.2.8, “Managed Compactions”.

10.3.5. Block Cache

The Block Cache contains three levels of block priority to allow for scan-resistance and in-memory ColumnFamilies. A block is added with in-memory priority if the containing ColumnFamily is defined in-memory; otherwise a block starts with single-access priority. Once a block is accessed again, it is promoted to multiple-access priority. This is used to prevent scans from thrashing the cache, adding a least-frequently-used element to the eviction algorithm. Blocks from in-memory ColumnFamilies are the last to be evicted.

For more information, see the LruBlockCache source.
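
For example, a ColumnFamily can be flagged in-memory at schema time; a minimal sketch using hypothetical table and family names:

HTableDescriptor desc = new HTableDescriptor("myTable");
HColumnDescriptor family = new HColumnDescriptor("cf");
// Blocks read from this family enter the block cache with in-memory priority
// and are the last to be evicted.
family.setInMemory(true);
desc.addFamily(family);

HBaseAdmin admin = new HBaseAdmin(HBaseConfiguration.create());
admin.createTable(desc);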

10.4. Write Ahead Log (WAL)

10.4.1. Purpose

Each RegionServer adds updates (Puts, Deletes) to its write-ahead log (WAL) first, and then to the Section 10.3.4.1, “MemStore” for the affected Section 10.3.4, “Store”. This ensures that HBase has durable writes. Without the WAL, there is the possibility of data loss in the case of a RegionServer failure before each MemStore is flushed and new StoreFiles are written. HLog is the HBase WAL implementation, and there is one HLog instance per RegionServer.

The WAL is in HDFS in /hbase/.logs/ with subdirectories per region. For more general information about the concept of write ahead logs, see the Wikipedia Write-Ahead Log article.

10.4.2. WAL Flushing

TODO (describe).

10.4.3. WAL Splitting

10.4.3.1. How edits are recovered from a crashed RegionServer

When a RegionServer crashes, it will lose its ephemeral lease in ZooKeeper…TODO

10.4.3.2. hbase.hlog.split.skip.errors

When set to true, the default, any error encountered splitting will be logged, the problematic WAL will be moved into the .corrupt directory under the hbaserootdir, and processing will continue. If set to false, the exception will be propagated and the split logged as failed.[19]

10.4.3.3. How EOFExceptions are treated when splitting a crashed RegionServer’s WALs

If we get an EOF while splitting logs, we proceed with the split even when hbase.hlog.split.skip.errors == false. An EOF while reading the last log in the set of files to split is near-guaranteed since the RegionServer likely crashed mid-write of a record. But we’ll continue even if we get an EOF while reading a file other than the last one in the set.[20]


Chapter 9. Data Model

In short, applications store data into an HBase table. Tables are made of rows and columns. All columns in HBase belong to a particular column family. Table cells — the intersection of row and column coordinates — are versioned. A cell’s content is an uninterpreted array of bytes.

Table row keys are also byte arrays so almost anything can serve as a row key from strings to binary representations of longs or even serialized data structures. Rows in HBase tables are sorted by row key. The sort is byte-ordered. All table accesses are via the table row key — its primary key.

9.1. Conceptual View

The following example is a slightly modified form of the one on page 2 of the BigTable paper. There is a table called webtable that contains two column families named contents and anchor. In this example, anchor contains two columns (anchor:cnnsi.com, anchor:my.look.ca) and contents contains one column (contents:html).

Column Names

By convention, a column name is made of its column family prefix and a qualifier. For example, the column contents:html is of the column family contents. The colon character (:) delimits the column family from the column family qualifier.

 

Table 9.1. Table webtable

Row Key         Time Stamp   ColumnFamily contents        ColumnFamily anchor
“com.cnn.www”   t9                                        anchor:cnnsi.com = “CNN”
“com.cnn.www”   t8                                        anchor:my.look.ca = “CNN.com”
“com.cnn.www”   t6           contents:html = “<html>…”
“com.cnn.www”   t5           contents:html = “<html>…”
“com.cnn.www”   t3           contents:html = “<html>…”

 

9.2. Physical View

Although at a conceptual level tables may be viewed as a sparse set of rows, physically they are stored on a per-column-family basis. New columns (i.e., columnfamily:column) can be added to any column family without pre-announcing them.

Table 9.2. ColumnFamily anchor

Row Key         Time Stamp   ColumnFamily anchor
“com.cnn.www”   t9           anchor:cnnsi.com = “CNN”
“com.cnn.www”   t8           anchor:my.look.ca = “CNN.com”

 

Table 9.3. ColumnFamily contents

Row Key         Time Stamp   ColumnFamily contents
“com.cnn.www”   t6           contents:html = “<html>…”
“com.cnn.www”   t5           contents:html = “<html>…”
“com.cnn.www”   t3           contents:html = “<html>…”


It is important to note in the diagram above that the empty cells shown in the conceptual view are not stored since they need not be in a column-oriented storage format. Thus a request for the value of the contents:html column at time stamp t8 would return no value. Similarly, a request for an anchor:my.look.ca value at time stamp t9 would return no value. However, if no timestamp is supplied, the most recent value for a particular column would be returned and would also be the first one found since timestamps are stored in descending order. Thus a request for the values of all columns in the row com.cnn.www if no timestamp is specified would be: the value of contents:html from time stamp t6, the value of anchor:cnnsi.com from time stamp t9, the value of anchor:my.look.ca from time stamp t8.

9.3. Table

Tables are declared up front at schema definition time.

9.4. Row

Row keys are uninterpreted bytes. Rows are lexicographically sorted with the lowest order appearing first in a table. The empty byte array is used to denote both the start and end of a table’s namespace.

9.5. Column Family

Columns in HBase are grouped into column families. All column members of a column family have the same prefix. For example, the columns courses:history and courses:math are both members of the courses column family. The colon character (:) delimits the column family from the column family qualifier. The column family prefix must be composed of printable characters. The qualifying tail, the column family qualifier, can be made of any arbitrary bytes. Column families must be declared up front at schema definition time, whereas columns do not need to be defined at schema time but can be conjured on the fly while the table is up and running.

Physically, all column family members are stored together on the filesystem. Because tunings and storage specifications are done at the column family level, it is advised that all column family members have the same general access pattern and size characteristics.

 

9.6. Cells

A {row, column, version} tuple exactly specifies a cell in HBase. Cell content is uninterpreted bytes.

9.7. Versions

A {row, column, version} tuple exactly specifies a cell in HBase. It's possible to have an unbounded number of cells where the row and column are the same but the cell address differs only in its version dimension.

While rows and column keys are expressed as bytes, the version is specified using a long integer. Typically this long contains time instances such as those returned by java.util.Date.getTime() or System.currentTimeMillis(), that is: the difference, measured in milliseconds, between the current time and midnight, January 1, 1970 UTC.

The HBase version dimension is stored in decreasing order, so that when reading from a store file, the most recent values are found first.

There is a lot of confusion over the semantics of cell versions in HBase. In particular, a couple of questions that often come up are:

  • If multiple writes to a cell have the same version, are all versions maintained or just the last?[13]
  • Is it OK to write cells in a non-increasing version order?[14]

Below we describe how the version dimension in HBase currently works[15].

9.7.1. Versions and HBase Operations

In this section we look at the behavior of the version dimension for each of the core HBase operations.

9.7.1.1. Get/Scan

Gets are implemented on top of Scans. The below discussion of Get applies equally to Scans.

By default, i.e. if you specify no explicit version, when doing a get the cell whose version has the largest value is returned (which may or may not be the latest one written; see below). The default behavior can be modified in the following ways:

  • to return more than one version, see Get.setMaxVersions()
  • to return versions other than the latest, see Get.setTimeRange()

    To retrieve the latest version that is less than or equal to a given value, thus giving the ‘latest’ state of the record at a certain point in time, just use a range from 0 to the desired version and set the max versions to 1.
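
A minimal sketch of that pattern, reusing the hypothetical family "cf" and qualifier "attr" from the surrounding examples; note that the upper bound of setTimeRange is exclusive, so add 1 to include the desired timestamp itself:

        long desiredPointInTime = 1234567890000L;  // illustrative timestamp
        Get get = new Get(Bytes.toBytes("row1"));
        get.setTimeRange(0, desiredPointInTime + 1);  // covers [0, desiredPointInTime] inclusive
        get.setMaxVersions(1);                        // only the newest version in that range
        Result r = htable.get(get);
        byte[] b = r.getValue(Bytes.toBytes("cf"), Bytes.toBytes("attr"));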

9.7.1.2. Default Get Example

The following Get will only retrieve the current version of the row.

        Get get = new Get(Bytes.toBytes("row1"));
        Result r = htable.get(get);
        byte[] b = r.getValue(Bytes.toBytes("cf"), Bytes.toBytes("attr"));  // returns current version of value

 

9.7.1.3. Versioned Get Example

The following Get will return the last 3 versions of the row.

        Get get = new Get(Bytes.toBytes("row1"));
        get.setMaxVersions(3);  // will return last 3 versions of row
        Result r = htable.get(get);
        byte[] b = r.getValue(Bytes.toBytes("cf"), Bytes.toBytes("attr"));  // returns current version of value
        List<KeyValue> kv = r.getColumn(Bytes.toBytes("cf"), Bytes.toBytes("attr"));  // returns all versions of this column

 

9.7.1.4. Put

Doing a put always creates a new version of a cell, at a certain timestamp. By default the system uses the server’s currentTimeMillis, but you can specify the version (= the long integer) yourself, on a per-column level. This means you could assign a time in the past or the future, or use the long value for non-time purposes.

To overwrite an existing value, do a put at exactly the same row, column, and version as that of the cell you would overshadow.

9.7.1.4.1. Implicit Version Example

The following Put will be implicitly versioned by HBase with the current time.

          Put put = new Put(Bytes.toBytes(row));
          put.add(Bytes.toBytes("cf"), Bytes.toBytes("attr1"), Bytes.toBytes(data));
          htable.put(put);

 

9.7.1.4.2. Explicit Version Example

The following Put has the version timestamp explicitly set.

          Put put = new Put(Bytes.toBytes(row));
          long explicitTimeInMs = 555;  // just an example
          put.add(Bytes.toBytes("cf"), Bytes.toBytes("attr1"), explicitTimeInMs, Bytes.toBytes(data));
          htable.put(put);

 

9.7.1.5. Delete

When performing a delete operation in HBase, there are two ways to specify the versions to be deleted:

  • Delete all versions older than a certain timestamp
  • Delete the version at a specific timestamp

A delete can apply to a complete row, a complete column family, or to just one column. It is only in the last case that you can delete explicit versions. For the deletion of a row or all the columns within a family, it always works by deleting all cells older than a certain version.

Deletes work by creating tombstone markers. For example, let’s suppose we want to delete a row. For this you can specify a version, or else by default the currentTimeMillis is used. What this means is: delete all cells where the version is less than or equal to this version. HBase never modifies data in place, so for example a delete will not immediately delete (or mark as deleted) the entries in the storage file that correspond to the delete condition. Rather, a so-called tombstone is written, which will mask the deleted values[16]. If the version you specified when deleting a row is larger than the version of any value in the row, then you can consider the complete row to be deleted.
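
A minimal sketch of the column-level cases, reusing the hypothetical family "cf" and qualifier "attr" from the earlier examples; the timestamp is illustrative:

        long ts = 555L;  // example version/timestamp

        // Delete only the cell of cf:attr written at exactly version ts.
        Delete d1 = new Delete(Bytes.toBytes("row1"));
        d1.deleteColumn(Bytes.toBytes("cf"), Bytes.toBytes("attr"), ts);
        htable.delete(d1);

        // Delete all versions of cf:attr with a version less than or equal to ts.
        Delete d2 = new Delete(Bytes.toBytes("row1"));
        d2.deleteColumns(Bytes.toBytes("cf"), Bytes.toBytes("attr"), ts);
        htable.delete(d2);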

9.7.2. Current Limitations

There are still some bugs (or at least ‘undecided behavior’) with the version dimension that will be addressed by later HBase releases.

9.7.2.1. Deletes mask Puts

Deletes mask puts, even puts that happened after the delete was entered[17]. Remember that a delete writes a tombstone, which only disappears after the next major compaction has run. Suppose you do a delete of everything <= T. After this you do a new put with a timestamp <= T. This put, even if it happened after the delete, will be masked by the delete tombstone. Performing the put will not fail, but when you do a get you will notice the put had no effect. It will start working again after the major compaction has run. These issues should not be a problem if you use always-increasing versions for new puts to a row. But they can occur even if you do not care about time: just do a delete and put immediately after each other, and there is some chance they happen within the same millisecond.
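
A minimal sketch of the scenario, with hypothetical names and explicit timestamps only to make the ordering visible:

        long T = 1000L;  // illustrative timestamp

        // Delete everything in the row with a version <= T (writes a tombstone).
        Delete delete = new Delete(Bytes.toBytes("row1"));
        delete.setTimestamp(T);
        htable.delete(delete);

        // A Put issued afterwards, but whose version is still <= T ...
        Put put = new Put(Bytes.toBytes("row1"));
        put.add(Bytes.toBytes("cf"), Bytes.toBytes("attr"), T, Bytes.toBytes("value"));
        htable.put(put);

        // ... is masked by the tombstone: this Get finds nothing for the row until
        // the next major compaction removes the tombstone.
        Result r = htable.get(new Get(Bytes.toBytes("row1")));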

9.7.2.2. Major compactions change query results

…create three cell versions at t1, t2 and t3, with a maximum-versions setting of 2. So when getting all versions, only the values at t2 and t3 will be returned. But if you delete the version at t2 or t3, the one at t1 will appear again. Obviously, once a major compaction has run, such behavior will not be the case anymore…[18]


[13] Currently, only the last written is fetchable.

[14] Yes.

[15] See HBASE-2406 for discussion of HBase versions. Bending time in HBase makes for a good read on the version, or time, dimension in HBase. It has more detail on versioning than is provided here. As of this writing, the limitation Overwriting values at existing timestamps mentioned in the article no longer holds in HBase. This section is basically a synopsis of this article by Bruno Dumon.

[16] When HBase does a major compaction, the tombstones are processed to actually remove the dead values, together with the tombstones themselves.

[18] See Garbage Collection in Bending time in HBase.