Falcon

1. JE

Falcon also requires the JE library (Berkeley DB Java Edition), registered through Ambari's JDBC driver setup; without it the Falcon web page will not open and returns an HTTP 503 internal error, and the exception log shows:
Caused by: org.apache.falcon.FalconException: java.lang.RuntimeException: GraphFactory could not instantiate this Graph implementation [com.thinkaurelius.titan.core.TitanFactory].
Caused by: java.lang.RuntimeException: GraphFactory could not instantiate this Graph implementation [com.thinkaurelius.titan.core.TitanFactory].
Caused by: java.lang.reflect.InvocationTargetException
Caused by: java.lang.NoClassDefFoundError: com/sleepycat/je/LockMode
Caused by: java.lang.ClassNotFoundException: com.sleepycat.je.LockMode

Fix:
1. wget -O je-5.0.73.jar "http://search.maven.org/remotecontent?filepath=com/sleepycat/je/5.0.73/je-5.0.73.jar"
2. Log in to the Ambari server with administrator privileges.
su - root
3. Copy the file to the Ambari server share folder.
cp je-5.0.73.jar /usr/share/
4. Set permissions on the file to owner=read/write, group=read, other=read.
chmod 644 /usr/share/je-5.0.73.jar
5. Configure the Ambari server to use the Berkeley DB driver.
ambari-server setup --jdbc-db=bdb --jdbc-driver=/usr/share/je-5.0.73.jar
6. Restart the Ambari server.
ambari-server restart
7. Restart the Falcon service from the Ambari UI.
You need to have administrator privileges in Ambari to restart a service.
a) In the Ambari web UI, click the Services tab and select the Falcon service in the left Services pane.
b) From the Falcon Summary page, click Service Actions > Restart All.
c) Click Confirm Restart All.
When the service is available, the Falcon status displays as Started on the Summary page.
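As an optional check (a minimal sketch, assuming Falcon runs on its default port 15000 and "falcon-host" stands for your Falcon server host), the admin REST API should now respond instead of returning 503:
# should print Falcon version information rather than an HTTP 503 error page
curl -s http://falcon-host:15000/api/admin/version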
Source: https://community.hortonworks.com/questions/77600/faclon-web-ui-failing-with-http-503-service-unavai.html
Reference: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.2/bk_data_governance/content/ch_hdp_data_governance_overview.html

2. Create the required HDFS directories
sudo su falcon
hadoop fs -mkdir -p /apps/falcon/{clusterName}/staging
hadoop fs -mkdir -p /apps/falcon/{clusterName}/working
hadoop fs -mkdir -p /apps/falcon/tmp
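A minimal follow-up sketch, assuming {clusterName} is replaced with your actual cluster name; the HDP Falcon docs expect the staging directory to be world-writable (777), the working directory to be 755, and both to be owned by the falcon user:
# run as a user with HDFS superuser rights (e.g. hdfs)
hadoop fs -chown -R falcon /apps/falcon
hadoop fs -chmod -R 777 /apps/falcon/{clusterName}/staging
hadoop fs -chmod -R 755 /apps/falcon/{clusterName}/working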

3. Log path
/var/log/falcon
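To watch the logs while reproducing a problem (a sketch; the exact log file names vary by version):
# follow all Falcon logs, e.g. while restarting the service or opening the web UI
tail -f /var/log/falcon/*.log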

Every time during installation I am asked to adjust the Hadoop and YARN configuration, because it contains paths under /home; although it is not clear why /home directories are not allowed, Ambari keeps adding a /home path back by default. This time, however, the configuration could not be changed at all: after I edited the DataNode directory path, it was automatically reverted to the path containing /home. In the end I created a custom config group named LorryGroup in the Hadoop configuration, dragged all the nodes into it, and the problem was solved (when installing Falcon, select LorryGroup in the Hadoop configuration step). Why did installing Falcon require adding a group this time, while the previous installation did not? And what exactly is a Config Group for?

4. Cluster and Feed entity XML attributes
https://falcon.apache.org/EntitySpecification.html

In the cluster entity Falcon defines a lot of information about data sources and destinations; the interface elements are what describe them, and they come in several types. Broadly speaking there are two categories. For relational databases, the connection details are defined in a DataSource entity; the big-data (HDFS) side is defined in the cluster entity, including the Hive interface, the HDFS interface, and so on. A Feed then defines a data-processing flow with a source and a target, and the source and target are bound to the previously defined DataSource and cluster respectively. A sketch of a cluster entity follows below.
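A minimal sketch of a cluster entity that ties these interfaces together, assuming hypothetical hostnames (nn01, rm01, oozie01, amq01, hive01) and default HDP ports; adjust the versions and reuse the staging/working paths created above to match your cluster:
# write the cluster definition to a local file
cat > /tmp/primaryCluster.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<cluster name="primaryCluster" description="primary cluster" colo="default" xmlns="uri:falcon:cluster:0.1">
  <interfaces>
    <!-- read-only HDFS access, used e.g. as a distcp source -->
    <interface type="readonly" endpoint="hftp://nn01:50070" version="2.7.3"/>
    <!-- writable HDFS endpoint -->
    <interface type="write" endpoint="hdfs://nn01:8020" version="2.7.3"/>
    <!-- YARN ResourceManager that runs the jobs -->
    <interface type="execute" endpoint="rm01:8050" version="2.7.3"/>
    <!-- Oozie, which schedules Falcon feeds and processes -->
    <interface type="workflow" endpoint="http://oozie01:11000/oozie/" version="4.2.0"/>
    <!-- ActiveMQ broker used for Falcon JMS messaging -->
    <interface type="messaging" endpoint="tcp://amq01:61616?daemon=true" version="5.1.6"/>
    <!-- Hive metastore (registry) interface -->
    <interface type="registry" endpoint="thrift://hive01:9083" version="1.2.1"/>
  </interfaces>
  <locations>
    <location name="staging" path="/apps/falcon/primaryCluster/staging"/>
    <location name="temp" path="/apps/falcon/tmp"/>
    <location name="working" path="/apps/falcon/primaryCluster/working"/>
  </locations>
</cluster>
EOF
# submit the entity with the Falcon CLI
falcon entity -type cluster -submit -file /tmp/primaryCluster.xml
A Feed entity would then reference this cluster by name in its clusters section.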

When creating a data source for MySQL, the path you give for the driver jar is an HDFS path (written as a plain path), for example /tools/mysql-connector-java.jar; you also have to make sure the falcon user has permission to read that file.
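For example (a sketch, assuming the connector jar sits in the current local directory and /tools is the HDFS path mentioned above):
# upload the driver to HDFS and make it readable for the falcon user
hadoop fs -mkdir -p /tools
hadoop fs -put mysql-connector-java.jar /tools/
hadoop fs -chmod 644 /tools/mysql-connector-java.jar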

5. Follow-up research

Data transfer can also be done through JMS; this is worth investigating further, to see whether it can be turned into a general data-import mechanism.
Original post: https://www.cnblogs.com/xiashiwendao/p/8597453.html