A Hadoop SequenceFile Read/Write Example

SequenceFile can handle large numbers of small files on HDFS by acting as a container for them. HDFS and MapReduce are optimized for large files, so wrapping many small files in a SequenceFile yields more efficient storage and processing. The keys and values stored in a SequenceFile do not have to be Writable types; any type can be used as long as it can be serialized and deserialized by a Serialization implementation.
 3 
SequenceFile's advantages: it stores data as key-value pairs, supports compression, and can merge large numbers of small files.
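To make the container idea concrete without a Hadoop cluster, here is a minimal plain-JDK sketch of packing several small files into one stream as length-prefixed key-value records (key = file path, value = file bytes). This is not the real SequenceFile on-disk format, just an illustration of the pattern; the class and file names are made up for the example.

```java
import java.io.*;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

/** Illustration only: a toy key-value container, NOT the real SequenceFile format. */
public class KvContainerSketch {

    /** Append one record: length-prefixed key bytes, then length-prefixed value bytes. */
    static void writeRecord(DataOutputStream out, String key, byte[] value) throws IOException {
        byte[] k = key.getBytes(StandardCharsets.UTF_8);
        out.writeInt(k.length);
        out.write(k);
        out.writeInt(value.length);
        out.write(value);
    }

    /** Read every record back into an insertion-ordered map. */
    static Map<String, byte[]> readAll(DataInputStream in) throws IOException {
        Map<String, byte[]> records = new LinkedHashMap<>();
        while (in.available() > 0) {
            byte[] k = new byte[in.readInt()];
            in.readFully(k);
            byte[] v = new byte[in.readInt()];
            in.readFully(v);
            records.put(new String(k, StandardCharsets.UTF_8), v);
        }
        return records;
    }

    public static void main(String[] args) throws IOException {
        // Pack two "small files" into a single byte stream
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(buf)) {
            writeRecord(out, "/usr/local/a.txt", "hello".getBytes(StandardCharsets.UTF_8));
            writeRecord(out, "/usr/local/b.txt", "world".getBytes(StandardCharsets.UTF_8));
        }
        // Unpack and verify round-trip
        Map<String, byte[]> back =
                readAll(new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
        System.out.println(back.size());  // 2
        System.out.println(new String(back.get("/usr/local/a.txt"), StandardCharsets.UTF_8));  // hello
    }
}
```

The real SequenceFile adds sync markers (so MapReduce can split the file), optional record or block compression, and pluggable serialization, but the packing idea is the same.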
 5 
import java.io.File;
import java.net.URI;
import java.util.Collection;

import org.apache.commons.io.FileUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.SequenceFile.Writer.Option;
import org.apache.hadoop.io.Text;

        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(new URI("hdfs://single32:9000"), conf);
        Path targetPath = new Path("/sfs");

        // Create an uncompressed SequenceFile on HDFS
        final Option optPath = SequenceFile.Writer.file(targetPath);
        final Option optKeyClass = SequenceFile.Writer.keyClass(Text.class);
        final Option optValueClass = SequenceFile.Writer.valueClass(BytesWritable.class);
        final SequenceFile.Writer writer = SequenceFile.createWriter(conf, optPath, optKeyClass, optValueClass);
        // Collect the .txt files under /usr/local (non-recursive) as the small files to pack
        final Collection<File> listFiles = FileUtils.listFiles(new File("/usr/local/"), new String[]{"txt"}, false);
        for (File file : listFiles) {
            // Key: the original file path; value: the file's raw bytes
            Text key = new Text(file.getPath());
            BytesWritable value = new BytesWritable(FileUtils.readFileToByteArray(file));
            writer.append(key, value);
        }
        IOUtils.closeStream(writer);

        // Read the SequenceFile back from HDFS and unpack each record into a local file
        final SequenceFile.Reader reader = new SequenceFile.Reader(fs, targetPath, conf);
        final Text outputKey = new Text();
        final BytesWritable outputValue = new BytesWritable();
        while (reader.next(outputKey, outputValue)) {
            // The key is the original absolute path, so each file is recreated under /usr
            final File file = new File("/usr/" + outputKey.toString());
            // copyBytes() returns exactly the valid bytes; getBytes() alone may
            // include trailing padding from the backing array
            FileUtils.writeByteArrayToFile(file, outputValue.copyBytes());
        }
        IOUtils.closeStream(reader);
Original post: https://www.cnblogs.com/mengyao/p/4456148.html