Flume integration with HDFS (with Kerberos authentication enabled on HDFS)

When sinking to HDFS:
You need to modify the flume-env.sh configuration to add the HDFS dependency libraries to the classpath:
  FLUME_CLASSPATH="/root/TDH-Client/hadoop/hadoop/*:/root/TDH-Client/hadoop/hadoop-hdfs/*:/root/TDH-Client/hadoop/hadoop/lib/*"
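The classpath line above repeats the client-install root three times. A minimal sketch of building the same string from a single variable in flume-env.sh (`TDH_ROOT` is an assumed name, not part of the original):

```shell
# Sketch: assemble FLUME_CLASSPATH from one client-install root.
# TDH_ROOT is an assumed variable; point it at where TDH-Client is unpacked.
TDH_ROOT=/root/TDH-Client
FLUME_CLASSPATH="$TDH_ROOT/hadoop/hadoop/*:$TDH_ROOT/hadoop/hadoop-hdfs/*:$TDH_ROOT/hadoop/hadoop/lib/*"
export FLUME_CLASSPATH
echo "$FLUME_CLASSPATH"
```

The trailing `/*` entries are left unexpanded (quoted) on purpose: the JVM classpath mechanism expands them itself, picking up every jar in those directories.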
 
Example:
a1.sources=r1
a1.sinks=k2
a1.channels=c2
 
a1.sources.r1.type=avro
a1.sources.r1.channels=c2
a1.sources.r1.bind=172.20.237.105
a1.sources.r1.port=8888
 
# Data from r1 flows through c2 to k2, which writes it to HDFS
a1.sinks.k2.channel = c2
a1.sinks.k2.type=hdfs
a1.sinks.k2.hdfs.kerberosKeytab=/etc/hdfs1/conf/hdfs.keytab
a1.sinks.k2.hdfs.kerberosPrincipal=hdfs/gz237-105@TDH
# Storage location on HDFS (hdfs.path is required; the value below is an
# example placeholder — adjust it to your cluster)
a1.sinks.k2.hdfs.path=hdfs://nameservice1/flume/%Y-%m-%d
a1.sinks.k2.hdfs.filePrefix=log-%Y-%m-%d
a1.sinks.k2.hdfs.useLocalTimeStamp = true
a1.sinks.k2.hdfs.writeFormat = text
a1.sinks.k2.hdfs.fileType=DataStream
a1.sinks.k2.hdfs.inUseSuffix=.log
#a1.sinks.k2.hdfs.rollInterval = 0
a1.sinks.k2.hdfs.rollInterval = 60
a1.sinks.k2.hdfs.rollSize = 10240
a1.sinks.k2.hdfs.rollCount = 100
#a1.sinks.k2.hdfs.rollCount = 0
a1.sinks.k2.hdfs.idleTimeout=60
 
a1.channels.c2.type=memory
a1.channels.c2.capacity=100000
a1.channels.c2.transactionCapacity=10000
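With the config saved to a file, the agent can be started with the standard flume-ng launcher. This is a sketch under assumed paths (the config saved as /root/flume/conf/hdfs-sink.conf, flume-ng on PATH — neither path appears in the original):

```shell
# Sketch of launching the agent defined above.
# Assumed locations: config at /root/flume/conf/hdfs-sink.conf, flume-ng on PATH.
AGENT_NAME=a1
CONF_DIR=/root/flume/conf
CONF_FILE="$CONF_DIR/hdfs-sink.conf"

# Optionally confirm the sink's keytab/principal can obtain a ticket first:
#   kinit -kt /etc/hdfs1/conf/hdfs.keytab hdfs/gz237-105@TDH

if command -v flume-ng >/dev/null 2>&1; then
  flume-ng agent --conf "$CONF_DIR" --conf-file "$CONF_FILE" \
    --name "$AGENT_NAME" -Dflume.root.logger=INFO,console
else
  echo "flume-ng not found on PATH" >&2
fi
```

Note that --name must match the agent prefix used in the properties file (a1 here); otherwise Flume silently starts an agent with no components.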
Original article: https://www.cnblogs.com/yfb918/p/10413929.html