I recently needed to add random-read access for some data, so I went with generating HFiles and bulk loading them into HBase. When the job ran, the map phase finished quickly, but the reduce phase spent a very long time in the sort stage: the reducer was KeyValueSortReducer, and there was only one of it, which made single-reducer total ordering the bottleneck. My plan was to switch to TotalOrderPartitioner so the MR job could run multiple reducers in parallel and remove that bottleneck.

So I started writing code, using not only TotalOrderPartitioner but also InputSampler.RandomSampler to generate the partition file (a rough sketch of that setup is at the end of this post). I ran into problems executing it, and while digging around I happened to notice that HFileOutputFormat already uses TotalOrderPartitioner internally to do the total ordering:

public static void configureIncrementalLoad(Job job, HTable table)
    throws IOException {
  Configuration conf = job.getConfiguration();
  Class<? extends Partitioner> topClass;
  try {
    topClass = getTotalOrderPartitionerClass();
  } catch (ClassNotFoundException e) {
    throw new IOException("Failed getting TotalOrderPartitioner", e);
  }
  job.setPartitionerClass(topClass);
  ......

The partition file it writes simply contains the start keys of the table's regions (with the smallest one removed):

private static void writePartitions(Configuration conf, Path partitionsPath,
    List<ImmutableBytesWritable> startKeys) throws IOException {
  if (startKeys.isEmpty()) {
    throw new IllegalArgumentException("No regions passed");
  }

  // We're generating a list of split points, and we don't ever
  // have keys < the first region (which has an empty start key)
  // so we need to remove it. Otherwise we would end up with an
  // empty reducer with index 0
  // No rowkey can sort before the smallest startKey, so drop the smallest startKey
  TreeSet<ImmutableBytesWritable> sorted =
      new TreeSet<ImmutableBytesWritable>(startKeys);

  ImmutableBytesWritable first = sorted.first();
  // If the smallest region startKey is not the "legal" minimum rowkey, fail
  if (!first.equals(HConstants.EMPTY_BYTE_ARRAY)) {
    throw new IllegalArgumentException(
        "First region of table should have empty start key. Instead has: "
        + Bytes.toStringBinary(first.get()));
  }
  sorted.remove(first);

  // Write the actual file
  FileSystem fs = partitionsPath.getFileSystem(conf);
  SequenceFile.Writer writer = SequenceFile.createWriter(fs,
      conf, partitionsPath, ImmutableBytesWritable.class, NullWritable.class);

  try {
    // Append each remaining start key to the partition file
    for (ImmutableBytesWritable startKey : sorted) {
      writer.append(startKey, NullWritable.get());
    }
  } finally {
    writer.close();
  }
}

My tables were all brand new and had only a single region, so of course there was only one reducer. In other words, when you use HFileOutputFormat the number of reducers equals the number of regions in the HTable. If you are importing a huge amount of data by bulk loading HFiles, the best approach is to define the regions up front when creating the HTable. This technique is called Pre-Creating Regions (PCR), and it brings other benefits as well, such as fewer region splits. Some of Taobao's optimizations apply PCR and also disable automatic splitting, then split manually when the system is idle, which guarantees that splits won't pile extra load onto an already busy system.
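To make the "pre-split table plus configureIncrementalLoad" combination concrete, here is a minimal sketch of what such a bulk-load driver might look like. This is not the code from this post: the table name (demo_table), column family (cf), paths, split keys and MyMapper are all placeholders, and it assumes the HBase 0.9x-era client and mapreduce APIs (HBaseAdmin, HTable, HFileOutputFormat, LoadIncrementalHFiles); imports and exception handling are omitted. Because the table is created with four split keys (five regions), configureIncrementalLoad will set up five reducers.

Configuration conf = HBaseConfiguration.create();

// Pre-create the table with explicit split keys so it starts out with
// several regions instead of one (placeholder keys for an 8-digit rowkey).
HBaseAdmin admin = new HBaseAdmin(conf);
HTableDescriptor desc = new HTableDescriptor("demo_table");
desc.addFamily(new HColumnDescriptor("cf"));
byte[][] splits = new byte[][] {
    Bytes.toBytes("20000000"), Bytes.toBytes("40000000"),
    Bytes.toBytes("60000000"), Bytes.toBytes("80000000") };
admin.createTable(desc, splits);                 // 5 regions

// HFile-generating job. configureIncrementalLoad wires up the
// TotalOrderPartitioner, the sort reducer and the partition file for us,
// and sets the number of reducers to the table's region count.
Job job = new Job(conf, "hfile-bulkload");
job.setJarByClass(MyMapper.class);
job.setMapperClass(MyMapper.class);              // emits (ImmutableBytesWritable, KeyValue)
job.setMapOutputKeyClass(ImmutableBytesWritable.class);
job.setMapOutputValueClass(KeyValue.class);
FileInputFormat.addInputPath(job, new Path("/tmp/bulkload/input"));
FileOutputFormat.setOutputPath(job, new Path("/tmp/bulkload/hfiles"));

HTable table = new HTable(conf, "demo_table");
HFileOutputFormat.configureIncrementalLoad(job, table);

if (job.waitForCompletion(true)) {
  // Move the generated HFiles into the table's regions.
  new LoadIncrementalHFiles(conf).doBulkLoad(new Path("/tmp/bulkload/hfiles"), table);
}

Since each reducer covers exactly one region's key range, the final load step is essentially a file move per region.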
About Pre-Creating Regions: http://hbase.apache.org/book.html#precreate.regions

11.7.2. Table Creation: Pre-Creating Regions

Tables in HBase are initially created with one region by default. For bulk imports, this means that all clients will write to the same region until it is large enough to split and become distributed across the cluster. A useful pattern to speed up the bulk import process is to pre-create empty regions. Be somewhat conservative in this, because too-many regions can actually degrade performance. There are two different approaches to pre-creating splits. The first approach is to rely on the default HBaseAdmin strategy (which is implemented in Bytes.split)...

byte[] startKey = ...;      // your lowest key
byte[] endKey = ...;        // your highest key
int numberOfRegions = ...;  // # of regions to create
admin.createTable(table, startKey, endKey, numberOfRegions);

And the other approach is to define the splits yourself...

byte[][] splits = ...;   // create your own splits
admin.createTable(table, splits);
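For reference, the manual route mentioned at the top of this post (sampling the input with InputSampler.RandomSampler and pointing TotalOrderPartitioner at the resulting partition file) looks roughly like the sketch below. This is not the code I actually ran: the sampling parameters, paths and key/value types are illustrative only, and it assumes the new org.apache.hadoop.mapreduce API. With HFileOutputFormat.configureIncrementalLoad none of this is needed, because the region start keys already provide the split points.

// Manual total ordering: sample input keys, write a partition file, and let
// TotalOrderPartitioner give each reducer a contiguous, non-overlapping key range.
// All parameters and paths below are placeholders.
Configuration conf = HBaseConfiguration.create();
Job job = new Job(conf, "manual-total-order");
job.setInputFormatClass(SequenceFileInputFormat.class);   // input key type must match the partition keys
FileInputFormat.addInputPath(job, new Path("/tmp/bulkload/input"));
job.setNumReduceTasks(10);                                // one key range per reducer

// Tell the partitioner where the split points live.
Path partitionFile = new Path("/tmp/bulkload/partitions.lst");
TotalOrderPartitioner.setPartitionFile(job.getConfiguration(), partitionFile);
job.setPartitionerClass(TotalOrderPartitioner.class);

// Sample the input: keep each key with probability 0.01, collect up to 10000
// samples from at most 100 splits, then write the chosen split keys to the file.
InputSampler.Sampler<ImmutableBytesWritable, KeyValue> sampler =
    new InputSampler.RandomSampler<ImmutableBytesWritable, KeyValue>(0.01, 10000, 100);
InputSampler.writePartitionFile(job, sampler);

The difference from configureIncrementalLoad is that the split points come from sampling the data rather than from the target table's regions, so the resulting HFiles need not line up with region boundaries.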
Reposted from: https://www.cnblogs.com/aprilrain/archive/2013/03/27/2985064.html