hbase.mapreduce Description from JavaDoc

From the org.apache.hadoop.hbase.mapreduce package description in the HBase 0.90.4 API.

Package org.apache.hadoop.hbase.mapreduce Description

Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods.


HBase, MapReduce and the CLASSPATH

MapReduce jobs deployed to a MapReduce cluster do not by default have access to the HBase configuration under $HBASE_CONF_DIR nor to HBase classes. You could add hbase-site.xml to $HADOOP_HOME/conf, add HBase jars to $HADOOP_HOME/lib, and copy these changes across your cluster (or edit conf/hadoop-env.sh and add them to the HADOOP_CLASSPATH variable), but this will pollute your Hadoop install with HBase references; it's also obnoxious in that it requires a restart of the Hadoop cluster before it will notice your HBase additions.

As of 0.90.x, HBase will just add its dependency jars to the job configuration; the dependencies just need to be available on the local CLASSPATH. For example, to run the bundled HBase RowCounter MapReduce job against a table named usertable, type:

$ HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-0.90.0.jar rowcounter usertable

Expand $HBASE_HOME and $HADOOP_HOME in the above appropriately to suit your local environment. The content of HADOOP_CLASSPATH is set to the HBase CLASSPATH by backticking the command ${HBASE_HOME}/bin/hbase classpath.

When the above runs, internally, the HBase jar finds its zookeeper and guava, etc., dependencies on the passed HADOOP_CLASSPATH and adds the found jars to the mapreduce job configuration. See the source at TableMapReduceUtil#addDependencyJars(org.apache.hadoop.mapreduce.Job) for how this is done.

The above may not work if you are running your HBase from its build directory; i.e. you’ve done $ mvn test install at ${HBASE_HOME} and you are now trying to use this build in your mapreduce job. If you get

java.lang.RuntimeException: java.lang.ClassNotFoundException: org.apache.hadoop.hbase.mapreduce.RowCounter$RowCounterMapper
...

exception thrown, try doing the following:

$ HADOOP_CLASSPATH=${HBASE_HOME}/target/hbase-0.90.0-SNAPSHOT.jar:`${HBASE_HOME}/bin/hbase classpath` ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/target/hbase-0.90.0-SNAPSHOT.jar rowcounter usertable

Notice how we prefix the backtick invocation that sets HADOOP_CLASSPATH with a reference to the built HBase jar in the target directory.
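If you write your own driver rather than using the bundled one, you can have it ship its dependencies the same way by calling TableMapReduceUtil.addDependencyJars yourself. A minimal sketch; the driver class and job name are placeholders, and everything except the TableMapReduceUtil call is generic job wiring:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;

public class MyJobDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = new Job(conf, "my-hbase-job");
        job.setJarByClass(MyJobDriver.class);
        // Ships the HBase jar plus the ZooKeeper, Guava, etc. jars it finds
        // on the local CLASSPATH with the job, so the tasks can load them.
        TableMapReduceUtil.addDependencyJars(job);
        // ... configure input/output formats, mapper, reducer, then submit:
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}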

 

Bundled HBase MapReduce Jobs

The HBase jar also serves as a Driver for some bundled MapReduce jobs. To learn about the bundled MapReduce jobs, run:

$ ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-0.90.0-SNAPSHOT.jar
An example program must be given as the first argument.
Valid program names are:
  copytable: Export a table from local cluster to peer cluster
  completebulkload: Complete a bulk data load.
  export: Write table data to HDFS.
  import: Import data written by Export.
  importtsv: Import data in TSV format.
  rowcounter: Count rows in HBase table

HBase as MapReduce job data source and sink

HBase can be used as a data source, TableInputFormat, and data sink, TableOutputFormat or MultiTableOutputFormat, for MapReduce jobs. Writing MapReduce jobs that read or write HBase, you'll probably want to subclass TableMapper and/or TableReducer. See the do-nothing pass-through classes IdentityTableMapper and IdentityTableReducer for basic usage. For a more involved example, see RowCounter or review the org.apache.hadoop.hbase.mapreduce.TestTableMapReduce unit test.
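For illustration only, here is a minimal TableMapper subclass in the spirit of RowCounter; the class and counter names are made up for this sketch, which assumes the 0.90-era org.apache.hadoop.hbase.mapreduce API:

import java.io.IOException;

import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.NullWritable;

public class SampleRowCountingMapper extends TableMapper<NullWritable, NullWritable> {
    // Called once per row of the scanned table; the row key arrives as an
    // ImmutableBytesWritable and the row's cells as a Result.
    @Override
    public void map(ImmutableBytesWritable row, Result values, Context context)
            throws IOException, InterruptedException {
        context.getCounter("sample", "rows").increment(1);
        // Nothing is emitted; the counter is the only output of this mapper.
    }
}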

Running mapreduce jobs that have HBase as source or sink, you’ll need to specify source/sink table and column names in your configuration.

Reading from HBase, the TableInputFormat asks HBase for the list of regions and makes a map per region or mapred.map.tasks maps, whichever is smaller (if your job only has two maps, raise mapred.map.tasks to a number greater than the number of regions). Maps will run on the adjacent TaskTracker if you are running a TaskTracker and RegionServer per node.

Writing, it may make sense to avoid the reduce step and write back into HBase from inside your map. You'd do this when your job does not need the sort and collation that MapReduce performs on the map-emitted data; on insert, HBase 'sorts', so there is no point double-sorting (and shuffling data around your MapReduce cluster) unless you need to. If you do not need the reduce, you might just have your map emit counts of records processed so the framework's report at the end of your job has meaning, or set the number of reduces to zero and use TableOutputFormat. See the example code below. If running the reduce step makes sense in your case, it's usually better to have lots of reducers so load is spread across the HBase cluster.
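As a sketch of the reduce-less setup described above (the table names, mapper class, and scan tuning here are assumptions for illustration, not part of the package documentation), TableMapReduceUtil wires a table in as source and sink:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;

public class MapOnlyCopyDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = new Job(conf, "map-only-copy");
        job.setJarByClass(MapOnlyCopyDriver.class);

        Scan scan = new Scan();
        scan.setCaching(500);        // fetch rows in bigger batches for a full scan
        scan.setCacheBlocks(false);  // don't churn the region server block cache

        // Source: one map task per region of 'sourcetable'. PutEmittingMapper is a
        // hypothetical TableMapper<ImmutableBytesWritable, Put> that emits the edits.
        TableMapReduceUtil.initTableMapperJob("sourcetable", scan,
                PutEmittingMapper.class, ImmutableBytesWritable.class, Put.class, job);

        // Sink: TableOutputFormat writing into 'targettable'; passing a null
        // reducer plus zero reduces means the maps write straight to HBase.
        TableMapReduceUtil.initTableReducerJob("targettable", null, job);
        job.setNumReduceTasks(0);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}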

There is also a new HBase partitioner that will run as many reducers as currently existing regions. The HRegionPartitioner is suitable when your table is large and your upload is not such that it will greatly alter the number of existing regions when done; otherwise use the default partitioner.
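In the 0.90 API, the partitioner can be requested through an initTableReducerJob overload that takes a partitioner class. A hedged fragment, assuming a driver and reducer like the ones sketched above:

// In a driver like the one above, ask for HRegionPartitioner when wiring the sink;
// the number of reduce tasks is then capped at the table's current region count.
TableMapReduceUtil.initTableReducerJob("targettable",
        MyTableReducer.class,          // hypothetical TableReducer subclass
        job,
        org.apache.hadoop.hbase.mapreduce.HRegionPartitioner.class);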

Bulk import writing HFiles directly

If importing into a new table, it's possible to bypass the HBase API and write your content directly to the filesystem, properly formatted as HBase data files (HFiles). Your import will run faster, perhaps an order of magnitude faster if not more. For more on how this mechanism works, see the Bulk Loads documentation.
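The package ships HFileOutputFormat for this. The following is only a rough sketch of the incremental-load setup (the mapper class, table name, and output path are assumptions); the generated files are then handed to the bundled completebulkload job listed above:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class BulkLoadPrepareDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = new Job(conf, "prepare-hfiles");
        job.setJarByClass(BulkLoadPrepareDriver.class);

        // HFileEmittingMapper is a hypothetical mapper that reads your source
        // data and emits (ImmutableBytesWritable row key, Put) pairs.
        job.setMapperClass(HFileEmittingMapper.class);
        job.setMapOutputKeyClass(ImmutableBytesWritable.class);
        job.setMapOutputValueClass(Put.class);
        // Input format and path for the source data would be configured here.

        // Looks at the target table's region boundaries and configures a
        // total-order partitioner plus a sorting reducer, so that each output
        // HFile falls entirely within one region.
        HTable table = new HTable(conf, "newtable");
        HFileOutputFormat.configureIncrementalLoad(job, table);
        FileOutputFormat.setOutputPath(job, new Path("/tmp/hfiles"));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
        // Then: hadoop jar hbase-<version>.jar completebulkload /tmp/hfiles newtable
    }
}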

Example Code

Sample Row Counter

See RowCounter. This job uses TableInputFormat and does a count of all rows in a specified table. You should be able to run it by doing: % ./bin/hadoop jar hbase-X.X.X.jar. This will invoke the HBase MapReduce Driver class. Select 'rowcounter' from the choice of jobs offered. This will emit rowcounter 'usage'. Specify the table name, the column to count, and the output directory. You may need to add the HBase conf directory to $HADOOP_HOME/conf/hadoop-env.sh#HADOOP_CLASSPATH so the rowcounter gets pointed at the right HBase cluster (or build a new jar with an appropriate hbase-site.xml built into your job jar).

HBase tutorial for beginners

From ole-martin.net » HBase tutorial for beginners – a blog by Ole-Martin Mørk.

First of all, HBase is a column-oriented database. However, you have to forget everything you have learned about tables, columns and rows in the RDBMS world. The data in an HBase instance is laid out more like a hashtable, and the data is immutable. Whenever you update the data, you are actually just creating a new version of it.

This tutorial will be very hands-on, with not too much explanation. There are a number of articles where column-oriented databases are described in detail. Check out my delicious tag for some good ones, for instance jimbojw.com's excellent introduction.

I used Apple OS X 10.5.6 for this tutorial; I am not sure whether this will work on Windows or Linux.

The goal of this tutorial is to create a model for a blog, with integration from a Java program.

Get started

  • Download the latest stable release from Apache. I went with the hbase-0.18.1 release.
  • Unpack it, for instance to ~/hbase
  • Edit ~/hbase/conf/hbase-env.sh and set the correct JAVA_HOME variable.
  • Start hbase by running ~/hbase/bin/start-hbase.sh

Create a table

  • Start the hbase shell by running ~/hbase/bin/hbase shell
  • Run create 'blogposts','post','image' in the shell

Now you have a table called blogposts, with a post and an image family. These families are "static", like the columns in the RDBMS world.
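If you prefer to do this from Java instead of the shell, the admin API of this era can create the same table. A rough sketch, assuming the hypothetical class name below; exact constructors may vary a little between HBase releases:

import java.io.IOException;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class CreateBlogpostsTable {
    public static void main(String[] args) throws IOException {
        HBaseAdmin admin = new HBaseAdmin(new HBaseConfiguration());
        // Same as: create 'blogposts','post','image' in the shell.
        HTableDescriptor desc = new HTableDescriptor("blogposts");
        desc.addFamily(new HColumnDescriptor("post"));
        desc.addFamily(new HColumnDescriptor("image"));
        admin.createTable(desc);
    }
}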

Add some data to the table

Run the following commands in the shell (a Java equivalent is sketched after the list):

  • put 'blogposts','post1','post:title','Hello World'
  • put 'blogposts','post1','post:author','The Author'
  • put 'blogposts','post1','post:body','This is a blog post'
  • put 'blogposts','post1','image:header','image1.jpg'
  • put 'blogposts','post1','image:bodyimage','image2.jpg'
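For reference, the same writes can be done from Java with the BatchUpdate client API that ships with this era of HBase. This is an unverified sketch against that old API; the class name is a placeholder:

import java.io.IOException;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.io.BatchUpdate;

public class AddBlogpost {
    public static void main(String[] args) throws IOException {
        HTable table = new HTable(new HBaseConfiguration(), "blogposts");
        // One BatchUpdate per row; equivalent to the shell 'put' commands above.
        BatchUpdate update = new BatchUpdate("post1");
        update.put("post:title", "Hello World".getBytes());
        update.put("post:author", "The Author".getBytes());
        update.put("post:body", "This is a blog post".getBytes());
        update.put("image:header", "image1.jpg".getBytes());
        update.put("image:bodyimage", "image2.jpg".getBytes());
        table.commit(update);
    }
}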

Look at the data

Run get 'blogposts', 'post1' in the shell. This should output something like this:

COLUMN             CELL
 image:bodyimage   timestamp=1229953133260, value=image2.jpg
 image:header      timestamp=1229953110419, value=image1.jpg
 post:author       timestamp=1229953071910, value=The Author
 post:body         timestamp=1229953072029, value=This is a blog post
 post:title        timestamp=1229953071791, value=Hello World

Summary part 1

So, what have we accomplished so far? We have created a table and added one 'record' to it. This record consists of the blog post itself and the images attached to it. So, how do we retrieve that data from a Java application?

Integrate with HBase from Java

In order to integrate with HBase you will need the following jar files in your classpath:

  • commons-logging-1.0.4.jar
  • hadoop-0.18.1-core.jar
  • hbase-0.18.1.jar
  • log4j-1.2.13.jar

All of these are found within ~/hbase/lib and ~/hbase.

OK, here's the Java code:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.io.RowResult;

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class HBaseConnector {

    public static Map<String, String> retrievePost(String postId) throws IOException {
        // Connect to the 'blogposts' table using the configuration found
        // on the classpath (hbase-site.xml, if any).
        HTable table = new HTable(new HBaseConfiguration(), "blogposts");
        Map<String, String> post = new HashMap<String, String>();

        // Fetch the whole row; RowResult maps column names to cells.
        RowResult result = table.getRow(postId);

        for (byte[] column : result.keySet()) {
            post.put(new String(column), new String(result.get(column).getValue()));
        }
        return post;
    }

    public static void main(String[] args) throws IOException {
        Map<String, String> blogpost = HBaseConnector.retrievePost("post1");
        System.out.println(blogpost.get("post:title"));
        System.out.println(blogpost.get("post:author"));
    }
}

This code should print out ‘Hello World’ and ‘The Author’.