Caused by: java.io.IOException: Added a key not lexically larger than previous.

I ran into quite a few pitfalls while reproducing this experiment.

https://www.iteblog.com/archives/1889.html
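The mapper below expects caret-separated input with at least three fields per line (a row-key prefix, a url, and a name). A made-up sample line, purely for illustration and not from the original data:

20170906^http://www.iteblog.com^iteblog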

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

/**
 * Created by Administrator on 2017/8/18.
 */
public class IteblogBulkLoadDriver {
    // The map output key must be a WritableComparable; StringWriter is not,
    // so ImmutableBytesWritable is used instead.
    public static class IteblogBulkLoadMapper extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {
        @Override
        protected void map(LongWritable key, Text value, Context context) throws InterruptedException, IOException {
            if(value==null) {
                return;
            }

            String line = value.toString();

            // "^" is a regex metacharacter, so it must be escaped for split().
            String[] items = line.split("\\^");
            if(items.length<3){
                System.out.println("================less than 3 fields, skipping: " + line);
                return;
            }
            // The map output key and the Put row key must be built from the
            // same bytes: sorting the shuffle by one key while writing Puts
            // under a different row is exactly what produces "Added a key not
            // lexically larger than previous" when the HFiles are written.
            String rowKey = items[0]+items[1];
            Put put = new Put(Bytes.toBytes(rowKey));   //ROWKEY
            put.addColumn("cf".getBytes(), "url".getBytes(), items[1].getBytes());
            put.addColumn("cf".getBytes(), "name".getBytes(), items[2].getBytes());
            context.write(new ImmutableBytesWritable(Bytes.toBytes(rowKey)), put);
        }
    }



    public static class HBaseHFileReducer extends
            Reducer<ImmutableBytesWritable, Put, ImmutableBytesWritable, Put> {
        @Override
        protected void reduce(ImmutableBytesWritable key, Iterable<Put> values,
                              Context context) throws IOException, InterruptedException {
            // Keep only the first Put per row key. Note that
            // configureIncrementalLoad() below replaces this reducer with
            // PutSortReducer anyway, because the map output value class is Put.
            Put val = values.iterator().next();
            context.write(key, val);
        }
    }

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
//              String SRC_PATH= "hdfs:/slave1:8020/maats5/pay/logdate=20170906";
//              String DESC_PATH= "hdfs:/slave1:8020/maats5_test/pay/logdate=20170906";
            String SRC_PATH= args[0];
            String DESC_PATH=args[1];
            Configuration conf = HBaseConnectionFactory.config;
            conf.set("fs.hdfs.impl", "org.apache.hadoop.hdfs.DistributedFileSystem");
            Job job=Job.getInstance(conf);
            job.setJarByClass(IteblogBulkLoadDriver.class);
            job.setMapperClass(IteblogBulkLoadMapper.class);
            job.setMapOutputKeyClass(ImmutableBytesWritable.class);
            job.setMapOutputValueClass(Put.class);
            job.setReducerClass(HBaseHFileReducer.class);
            job.setOutputFormatClass(HFileOutputFormat2.class);
            HTable table = new HTable(conf,"maatstest");
            // Configures the TotalOrderPartitioner from the table's region
            // boundaries, sets the reducer count to the number of regions,
            // and swaps in PutSortReducer so rows reach the HFile writer in
            // sorted order.
            HFileOutputFormat2.configureIncrementalLoad(job,table,table.getRegionLocator());
            FileInputFormat.addInputPath(job,new Path(SRC_PATH));
            FileOutputFormat.setOutputPath(job,new Path(DESC_PATH));

            System.exit(job.waitForCompletion(true)?0:1);
    }
}
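The job above only writes HFiles to DESC_PATH; they still have to be handed to the region servers. A minimal sketch of that follow-up step using LoadIncrementalHFiles.doBulkLoad (the classes the quote below refers to), reusing conf, table, and DESC_PATH from main() above:

import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

// Sketch only: run after the MapReduce job has finished successfully.
// The LoadIncrementalHFiles constructor throws Exception, so the caller
// must declare or handle it.
LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
loader.doBulkLoad(new Path(DESC_PATH), table);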

When using the bulk loader (LoadIncrementalHFiles, doBulkLoad) you can only add items that are "lexically ordered", i.e. you need to make sure the items you add are sorted by row key.
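"Lexically ordered" here means byte-wise comparison of row keys, the same order that org.apache.hadoop.hbase.util.Bytes.compareTo implements. A tiny illustration with made-up keys:

byte[] a = Bytes.toBytes("row^1");
byte[] b = Bytes.toBytes("row^10");
// Bytes.compareTo(a, b) < 0, so "row^1" must be written before "row^10";
// appending them in the opposite order raises the IOException quoted above.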

https://stackoverflow.com/questions/25860114/hfile-creation-added-a-key-not-lexically-larger-than-previous-key

http://ganliang13.iteye.com/blog/1884921

Original article: https://www.cnblogs.com/rocky-AGE-24/p/7532613.html