Characteristics of Hadoop Serialization
Java's built-in serialization (Serializable) is a heavyweight framework: a serialized object carries a lot of extra information (checksums, headers, the class inheritance hierarchy, and so on), which makes it inefficient to transmit over the network. Hadoop therefore ships its own serialization mechanism (Writable).

Characteristics of Hadoop serialization:

- Compact: uses storage space efficiently
- Fast: low extra overhead when reading and writing data
- Extensible: can evolve along with upgrades to the communication protocol
- Interoperable: supports interaction across multiple languages
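To make the "compact" claim concrete, the following standalone sketch (an addition for illustration, not part of the original tutorial; it only assumes the Hadoop client jars are on the classpath) serializes the same long value both ways and prints the resulting sizes. ObjectOutputStream emits a stream header plus class metadata, while LongWritable writes only the raw 8-byte payload.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

import org.apache.hadoop.io.LongWritable;

public class SizeCompare {
    public static void main(String[] args) throws IOException {
        // Java serialization: ObjectOutputStream adds a stream header,
        // class descriptors, and other metadata around the value.
        ByteArrayOutputStream javaBytes = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(javaBytes)) {
            oos.writeObject(Long.valueOf(42L));
        }

        // Hadoop Writable: just the raw 8-byte long payload.
        ByteArrayOutputStream writableBytes = new ByteArrayOutputStream();
        try (DataOutputStream dos = new DataOutputStream(writableBytes)) {
            new LongWritable(42L).write(dos);
        }

        System.out.println("Java serialization: " + javaBytes.size() + " bytes"); // typically around 80 bytes
        System.out.println("Hadoop Writable:    " + writableBytes.size() + " bytes"); // 8 bytes
    }
}
```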
Hadoop serialization types for common Java data types
| Java type | Hadoop Writable type |
| --------- | -------------------- |
| boolean   | BooleanWritable      |
| byte      | ByteWritable         |
| int       | IntWritable          |
| float     | FloatWritable        |
| long      | LongWritable         |
| double    | DoubleWritable       |
| String    | Text                 |
| map       | MapWritable          |
| array     | ArrayWritable        |
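As a quick illustration of how these types are used, here is a minimal sketch (an addition for this writeup; the class name WritableRoundTrip is just for the example) that round-trips a Text and an IntWritable through an in-memory byte buffer:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;

public class WritableRoundTrip {
    public static void main(String[] args) throws IOException {
        // write a Text and an IntWritable into a byte buffer
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        new Text("hello").write(out);
        new IntWritable(7).write(out);

        // read them back in the same order they were written
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(buf.toByteArray()));
        Text t = new Text();
        IntWritable i = new IntWritable();
        t.readFields(in);
        i.readFields(in);
        System.out.println(t + " " + i.get()); // hello 7
    }
}
```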
Custom serializable data types
(1) The class must implement the Writable interface.
(2) Deserialization uses reflection to call the no-arg constructor, so a no-arg constructor is mandatory.
(3) Override the serialization method write().
(4) Override the deserialization method readFields().
(5) The deserialization order must match the serialization order exactly.
(6) To make the results readable in the output file, override toString(); separate the fields with a delimiter (e.g. a tab) for easier downstream processing.
(7) If the custom bean is to be transmitted as a key, it must additionally implement the Comparable interface, because the Shuffle phase of the MapReduce framework requires that keys be sortable; see the sketch after this list.
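A minimal sketch of point (7): the usual way in Hadoop to satisfy both requirements at once is the WritableComparable interface. The class name SortableFlowBean and the descending-by-sumFlow ordering are illustrative assumptions, not part of the original code.

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.WritableComparable;

// A bean used as a key must be sortable, so it implements
// WritableComparable (Writable + Comparable) instead of plain Writable.
public class SortableFlowBean implements WritableComparable<SortableFlowBean> {

    private long sumFlow;

    public SortableFlowBean() {
        // no-arg constructor, required for reflection during deserialization
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeLong(sumFlow);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        // same field order as write()
        sumFlow = in.readLong();
    }

    @Override
    public int compareTo(SortableFlowBean o) {
        // sort descending by total traffic; Shuffle uses this to order keys
        return Long.compare(o.sumFlow, this.sumFlow);
    }
}
```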
Exercise: compute the total upstream traffic, total downstream traffic, and overall total traffic per phone number
Test data phone.txt. Each record contains: line id, phone number, IP address, the visited domain (optional), upstream bytes, downstream bytes, and HTTP status code. Some records omit the domain column, which is why the mapper below indexes the traffic fields from the end of the array.
```
1 13736230513 192.196.100.1 www.atguigu.com 2481 24681 200
2 13846544121 192.196.100.2 264 0 200
3 13956435636 192.196.100.3 132 1512 200
4 13966251146 192.168.100.1 240 0 404
5 18271575951 192.168.100.2 www.atguigu.com 1527 2106 200
6 13470253144 192.168.100.3 www.atguigu.com 4116 1432 200
7 13590439668 192.168.100.4 1116 954 200
8 15910133277 192.168.100.5 www.hao123.com 3156 2936 200
9 13729199489 192.168.100.6 240 0 200
10 13630577991 192.168.100.7 www.shouhu.com 6960 690 200
11 15043685818 192.168.100.8 www.baidu.com 3659 3538 200
12 15959002129 192.168.100.9 www.atguigu.com 1938 180 500
13 13560439638 192.168.100.10 918 4938 200
14 13470253144 192.168.100.11 180 180 200
15 13682846555 192.168.100.12 www.qq.com 1938 2910 200
16 13992314666 192.168.100.13 www.gaga.com 3008 3720 200
17 13509468723 192.168.100.14 www.qinghua.com 7335 110349 404
18 18390173782 192.168.100.15 www.sogou.com 9531 2412 200
19 13975057813 192.168.100.16 www.baidu.com 11058 48243 200
20 13768778790 192.168.100.17 120 120 200
21 13568436656 192.168.100.18 www.alibaba.com 2481 24681 200
22 13568436656 192.168.100.19 1116 954 200
```
Define the serializable object
```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Writable;

public class FlowBean implements Writable {

    // upstream traffic
    private long upFlow;
    // downstream traffic
    private long downFlow;
    // total traffic
    private long sumFlow;

    public FlowBean() {
        // no-arg constructor, required for reflection during deserialization
        super();
    }

    public FlowBean(long upFlow, long downFlow) {
        super();
        this.upFlow = upFlow;
        this.downFlow = downFlow;
        this.sumFlow = upFlow + downFlow;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        // serialization method
        out.writeLong(upFlow);
        out.writeLong(downFlow);
        out.writeLong(sumFlow);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        // deserialization method
        // must read fields in exactly the same order as write()
        upFlow = in.readLong();
        downFlow = in.readLong();
        sumFlow = in.readLong();
    }

    @Override
    public String toString() {
        return upFlow + " " + downFlow + " " + sumFlow;
    }

    // getters for upFlow/downFlow are required by the reducer below
    public long getUpFlow() {
        return upFlow;
    }

    public long getDownFlow() {
        return downFlow;
    }

    public long getSumFlow() {
        return sumFlow;
    }

    public void setSumFlow(long sumFlow) {
        this.sumFlow = sumFlow;
    }
}
```
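As a quick sanity check of the write()/readFields() pairing (this test class is an addition for illustration, not part of the original tutorial), the bean can be round-tripped through an in-memory byte buffer:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class FlowBeanRoundTripTest {
    public static void main(String[] args) throws IOException {
        FlowBean original = new FlowBean(100L, 200L);

        // serialize the bean into a byte buffer
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        original.write(new DataOutputStream(buf));

        // deserialize via the no-arg constructor, in the same field order
        FlowBean copy = new FlowBean();
        copy.readFields(new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));

        System.out.println(copy); // 100 200 300
    }
}
```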
The MapReduce program
```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.log4j.BasicConfigurator;

import java.io.IOException;

public class FlowsumDriver {

    static {
        try {
            // set the HADOOP_HOME directory
            System.setProperty("hadoop.home.dir", "D://DevelopTools/hadoop-2.9.2/");
            // initialize logging
            BasicConfigurator.configure();
            // load the native library (needed on Windows)
            System.load("D://DevelopTools/hadoop-2.9.2/bin/hadoop.dll");
        } catch (UnsatisfiedLinkError e) {
            System.err.println("Native code library failed to load. " + e);
            System.exit(1);
        }
    }

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Configuration conf = new Configuration();
        // get the job instance
        Job job = Job.getInstance(conf);
        // set the jar by the driver class
        job.setJarByClass(FlowsumDriver.class);
        // wire up the mapper and reducer
        job.setMapperClass(FlowCountMapper.class);
        job.setReducerClass(FlowCountReducer.class);
        // set the mapper output key and value types
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(FlowBean.class);
        // set the final output key and value types
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(FlowBean.class);
        // set the input and output paths
        args = new String[]{"D://tmp/phone.txt", "D://tmp/456"};
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // submit the job
        boolean result = job.waitForCompletion(true);
        System.exit(result ? 0 : 1);
    }
}

class FlowCountMapper extends Mapper<LongWritable, Text, Text, FlowBean> {

    private Text k = new Text();
    private FlowBean v;

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // read one line
        String line = value.toString();
        // split into fields
        String[] fields = line.split(" ");
        // phone number
        k.set(fields[1]);
        // index from the end of the array because the domain column may be missing
        long upFlow = Long.parseLong(fields[fields.length - 3]);
        long downFlow = Long.parseLong(fields[fields.length - 2]);
        v = new FlowBean(upFlow, downFlow);
        // emit
        context.write(k, v);
    }
}

class FlowCountReducer extends Reducer<Text, FlowBean, Text, FlowBean> {

    private FlowBean v;

    @Override
    protected void reduce(Text key, Iterable<FlowBean> values, Context context) throws IOException, InterruptedException {
        long sumUpFlow = 0L;
        long sumDownFlow = 0L;
        // accumulate per phone number
        for (FlowBean flowBean : values) {
            sumUpFlow += flowBean.getUpFlow();
            sumDownFlow += flowBean.getDownFlow();
        }
        v = new FlowBean(sumUpFlow, sumDownFlow);
        // emit
        context.write(key, v);
    }
}
```
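One practical note on the driver: MapReduce refuses to start, throwing a FileAlreadyExistsException, if the output directory (D://tmp/456 above) already exists. A small helper sketch, assuming the standard FileSystem API, that could be called before job submission (the class and method names are illustrative, not part of the original code):

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OutputCleaner {
    // Call before job submission to remove a stale output directory.
    static void deleteIfExists(Configuration conf, String dir) throws IOException {
        FileSystem fs = FileSystem.get(conf);
        Path path = new Path(dir);
        if (fs.exists(path)) {
            fs.delete(path, true); // true = delete recursively
        }
    }
}
```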
Result: part-r-00000
```
13470253144 4296 1612 5908
13509468723 7335 110349 117684
13560439638 918 4938 5856
13568436656 3597 25635 29232
13590439668 1116 954 2070
13630577991 6960 690 7650
13682846555 1938 2910 4848
13729199489 240 0 240
13736230513 2481 24681 27162
13768778790 120 120 240
13846544121 264 0 264
13956435636 132 1512 1644
13966251146 240 0 240
13975057813 11058 48243 59301
13992314666 3008 3720 6728
15043685818 3659 3538 7197
15910133277 3156 2936 6092
15959002129 1938 180 2118
18271575951 1527 2106 3633
18390173782 9531 2412 11943
```
http://hadoop.apache.org/docs/current/api/org/apache/hadoop/io/Writable.html