From the Apache HBase Book.
Chapter 10. Architecture
The HBase client HTable is responsible for finding RegionServers that are serving the particular row range of interest. It does this by querying the .META. and -ROOT- catalog tables (TODO: Explain). After locating the required region(s), the client contacts the RegionServer serving that region directly (i.e., it does not go through the master) and issues the read or write request. This information is cached in the client so that subsequent requests need not go through the lookup process. Should a region be reassigned, either by the master load balancer or because a RegionServer has died, the client will requery the catalog tables to determine the new location of the user region.
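As a minimal sketch of what this looks like from the client side (the table name "myTable", column family "cf", and row/qualifier names are hypothetical, and imports and exception handling are omitted), the catalog lookup and location caching happen transparently inside HTable:
Configuration conf = HBaseConfiguration.create();
HTable table = new HTable(conf, "myTable");   // region locations are looked up lazily and cached
Get get = new Get(Bytes.toBytes("row1"));
get.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("qual1"));
Result result = table.get(get);               // the request goes directly to the hosting RegionServer
byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("qual1"));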
Administrative functions are handled through HBaseAdmin.
For connection configuration information, see Section 2.6.4, “Client configuration and dependencies connecting to an HBase cluster”.
HTable instances are not thread-safe. When creating HTable instances, it is advisable to use the same Configuration instance (as returned by HBaseConfiguration.create()). This ensures sharing of ZooKeeper and socket instances to the RegionServers, which is usually what you want. For example, this is preferred:
Configuration conf = HBaseConfiguration.create();
HTable table1 = new HTable(conf, "myTable");
HTable table2 = new HTable(conf, "myTable");
as opposed to this:
Configuration conf1 = HBaseConfiguration.create();
HTable table1 = new HTable(conf1, "myTable");
Configuration conf2 = HBaseConfiguration.create();
HTable table2 = new HTable(conf2, "myTable");
For more information about how connections are handled in the HBase client, see HConnectionManager.
For applications which require high-end multithreaded access (e.g., web-servers or application servers that may serve many application threads in a single JVM), see HTablePool.
If autoflush is turned off on HTable (see Section 11.5.4, “HBase Client: AutoFlush”), Puts are sent to the RegionServers only when the writebuffer is filled. The writebuffer is 2MB by default. Before an HTable instance is discarded, either close() or flushCommits() should be invoked so buffered Puts will not be lost.
Note: htable.delete(Delete) does not go into the writebuffer! The writebuffer applies only to Puts.
For additional information on write durability, review the ACID semantics page.
For fine-grained control of batching of Puts or Deletes, see the batch methods on HTable.
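A rough illustration of the writebuffer behavior described above (a sketch only; "myTable", "cf", and the row/qualifier/value names are hypothetical, and imports and exception handling are omitted):
Configuration conf = HBaseConfiguration.create();
HTable table = new HTable(conf, "myTable");
table.setAutoFlush(false);                  // buffer Puts client-side instead of sending one RPC per Put
table.setWriteBufferSize(2 * 1024 * 1024);  // 2MB, the default
Put put = new Put(Bytes.toBytes("row1"));
put.add(Bytes.toBytes("cf"), Bytes.toBytes("qual1"), Bytes.toBytes("value1"));
table.put(put);          // may sit in the writebuffer until it fills
table.flushCommits();    // push buffered Puts to the RegionServers now
table.close();           // also flushes any remaining buffered Puts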
HMaster is the implementation of the Master Server. The Master server is responsible for monitoring all RegionServer instances in the cluster, and is the interface for all metadata changes.
If run in a multi-Master environment, all Masters compete to run the cluster. If the active Master loses its lease in ZooKeeper (or the Master shuts down), then the remaining Masters jostle to take over the Master role.
The methods exposed by HMasterInterface are primarily metadata-oriented methods:
- Table (createTable, modifyTable, removeTable, enable, disable)
- ColumnFamily (addColumn, modifyColumn, removeColumn)
- Region (move, assign, unassign)
For example, when the HBaseAdmin method disableTable is invoked, it is serviced by the Master server.
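For instance, a sequence of calls like the following is carried out by the Master rather than by any RegionServer (a sketch; the table and column family names are hypothetical, and imports and exception handling are omitted):
Configuration conf = HBaseConfiguration.create();
HBaseAdmin admin = new HBaseAdmin(conf);
admin.disableTable("myTable");                               // serviced by the Master
admin.addColumn("myTable", new HColumnDescriptor("newcf"));  // metadata change, also via the Master
admin.enableTable("myTable");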
HRegionServer is the RegionServer implementation. It is responsible for serving and managing regions.
The methods exposed by HRegionInterface contain both data-oriented and region-maintenance methods:
- Data (get, put, delete, next, etc.)
- Region (splitRegion, compactRegion, etc.)
For example, when the HBaseAdmin method majorCompact is invoked on a table, the client is actually iterating through all regions for the specified table and requesting a major compaction directly to each region.
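From client code this is a single call (a sketch; the table name is hypothetical, and exception handling is omitted):
HBaseAdmin admin = new HBaseAdmin(HBaseConfiguration.create());
admin.majorCompact("myTable");  // the client iterates the table's regions and requests a major compaction of each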
The RegionServer runs a variety of background threads:
- CompactSplitThread: checks for splits and handles minor compactions.
- MajorCompactionChecker: checks for major compactions.
- MemStoreFlusher: periodically flushes in-memory writes in the MemStore to StoreFiles.
- LogRoller: periodically checks the RegionServer’s HLog.
This section is all about Regions.
Note
Regions consist of a Store per Column Family.
Region size is one of those tricky things; there are a few factors to consider:
- Regions are the basic element of availability and distribution.
- HBase scales by having regions across many servers. Thus if you have 2 regions for 16GB of data on a 20-node cluster, most of the cluster sits idle and you come out at a net loss.
- High region count has been known to make things slow; this is getting better, but it is probably better to have 700 regions than 3000 for the same amount of data.
- Low region count prevents the parallel scalability described in point #2. This really can't be stressed enough, since a common problem is loading 200MB of data into HBase and then wondering why your awesome 10-node cluster is mostly idle.
- There is not much memory footprint difference between 1 region and 10 in terms of indexes, etc, held by the RegionServer.
It's probably best to stick to the default, perhaps going smaller for hot tables (or manually splitting hot regions to spread the load over the cluster), or going with a 1GB region size if your cell sizes tend to be largish (100k and up).
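One way to do the latter for a single table, rather than changing the cluster-wide default, is to set a maximum file size (the size at which regions are split) on the table descriptor when the table is created (a sketch; "bigCellTable" and the column family name are hypothetical, and exception handling is omitted):
HBaseAdmin admin = new HBaseAdmin(HBaseConfiguration.create());
HTableDescriptor desc = new HTableDescriptor("bigCellTable");
desc.addFamily(new HColumnDescriptor("cf"));
desc.setMaxFileSize(1024L * 1024L * 1024L);  // ask for regions to split at roughly 1GB
admin.createTable(desc);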
Splits run unaided on the RegionServer; i.e., the Master does not participate. The RegionServer splits a region, offlines the split region, adds the daughter regions to META, opens the daughters on the parent’s hosting RegionServer, and then reports the split to the Master. See Section 2.8.2.7, “Managed Splitting” for how to manually manage splits (and for why you might do this).
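If you do manage splits yourself, a split can also be requested explicitly from client code (a sketch; the table name is hypothetical, and exception handling is omitted):
HBaseAdmin admin = new HBaseAdmin(HBaseConfiguration.create());
admin.split("myTable");  // request a split; a specific region name may be passed instead of a table name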
Periodically, and when there are not any regions in transition, a load balancer will run and move regions around to balance cluster load. The period at which it runs can be configured.
A Store hosts a MemStore and 0 or more StoreFiles (HFiles). A Store corresponds to a column family for a table for a given region.
The MemStore holds in-memory modifications to the Store. Modifications are KeyValues. When asked to flush, the current MemStore is moved to a snapshot and a new MemStore is started. HBase continues to serve edits out of the new MemStore and the backing snapshot until the flusher reports that the flush succeeded. At that point the snapshot is discarded.
The hfile file format is based on the SSTable file described in the BigTable [2006] paper and on Hadoop’s tfile (The unit test suite and the compression harness were taken directly from tfile). Schubert Zhang’s blog post on HFile: A Block-Indexed File Format to Store Sorted Key-Value Pairs makes for a thorough introduction to HBase’s hfile. Matteo Bertozzi has also put up a helpful description, HBase I/O: HFile.
To view a textualized version of hfile content, you can use the org.apache.hadoop.hbase.io.hfile.HFile tool. Type the following to see usage:
$ ${HBASE_HOME}/bin/hbase org.apache.hadoop.hbase.io.hfile.HFile
For example, to view the content of the file hdfs://10.81.47.41:9000/hbase/TEST/1418428042/DSMP/4759508618286845475, type the following:
$ ${HBASE_HOME}/bin/hbase org.apache.hadoop.hbase.io.hfile.HFile -v -f hdfs://10.81.47.41:9000/hbase/TEST/1418428042/DSMP/4759508618286845475
Leave off the -v option to see just a summary of the hfile. See the usage output for other things the HFile tool can do.
There are two types of compactions: minor and major. Minor compactions will usually pick up a couple of the smaller adjacent files and rewrite them as one. Minor compactions do not drop deletes or expired cells; only major compactions do this. Sometimes a minor compaction will pick up all the files in the store, in which case it actually promotes itself to being a major compaction. For a description of how a minor compaction picks files to compact, see the ascii diagram in the Store source code.
After a major compaction runs there will be a single StoreFile per Store, and this usually helps performance. Caution: major compactions rewrite all of the Store's data, and on a loaded system this may not be tenable; major compactions will usually have to be managed manually on large systems. See Section 2.8.2.8, “Managed Compactions”.
The Block Cache contains three levels of block priority to allow for scan-resistance and in-memory ColumnFamilies. A block is added with an in-memory flag if the containing ColumnFamily is defined in-memory, otherwise a block becomes a single access priority. Once a block is accessed again, it changes to multiple access. This is used to prevent scans from thrashing the cache, adding a least-frequently-used element to the eviction algorithm. Blocks from in-memory ColumnFamilies are the last to be evicted.
For more information, see the LruBlockCache source.
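To mark a ColumnFamily as in-memory so its blocks receive the highest block cache priority, the flag is set on the column descriptor, which is then used when creating or altering the table via HBaseAdmin (a sketch; the family name is hypothetical):
HColumnDescriptor hcd = new HColumnDescriptor("hotcf");
hcd.setInMemory(true);  // blocks from this family are the last to be evicted from the block cache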
Each RegionServer adds updates (Puts, Deletes) to its write-ahead log (WAL) first, and then to the MemStore (Section 10.3.4.1, “MemStore”) for the affected Store (Section 10.3.4, “Store”). This ensures that HBase has durable writes. Without the WAL, there is the possibility of data loss if a RegionServer fails before each MemStore is flushed and new StoreFiles are written. HLog is the HBase WAL implementation, and there is one HLog instance per RegionServer.
The WAL is in HDFS in /hbase/.logs/ with a subdirectory per RegionServer. For more general information about the concept of write-ahead logs, see the Wikipedia Write-Ahead Log article.
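Writing to the WAL can be skipped on a per-Put basis when the write can be cheaply re-run after a failure, at the cost of the durability guarantee described above (a sketch; the table, family, and row/qualifier/value names are hypothetical, and exception handling is omitted):
HTable table = new HTable(HBaseConfiguration.create(), "myTable");
Put put = new Put(Bytes.toBytes("row1"));
put.add(Bytes.toBytes("cf"), Bytes.toBytes("qual1"), Bytes.toBytes("value1"));
put.setWriteToWAL(false);  // faster, but this edit is lost if the RegionServer dies before the MemStore flush
table.put(put);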
When a RegionServer crashes, it will lose its ephemeral lease in ZooKeeper…TODO
When hbase.hlog.split.skip.errors is set to true (the default), any error encountered while splitting will be logged, the problematic WAL will be moved into the .corrupt directory under the hbase rootdir, and processing will continue. If set to false, the exception will be propagated and the split logged as failed.[19]
If we get an EOF while splitting logs, we proceed with the split even when hbase.hlog.split.skip.errors == false. An EOF while reading the last log in the set of files to split is near-guaranteed, since the RegionServer likely crashed mid-write of a record. But we will continue even if we get an EOF reading a file other than the last file in the set.[20]
[19] See HBASE-2958 When hbase.hlog.split.skip.errors is set to false, we fail the split but that's it. We need to do more than just fail split if this flag is set.
[20] For background, see HBASE-2643 Figure how to deal with eof splitting logs