Configuration changes needed when installing Hadoop, ZooKeeper, and HBase

1: Hadoop installation

/etc/profile

# Append at the end of the file
export JAVA_HOME=/home/software/jdk1.7
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
# Reload the configuration: source /etc/profile
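A quick way to confirm the new PATH entries took effect after running source /etc/profile (a sketch that re-creates the exports above; it only checks the shell environment, not the installation itself):

```shell
# Re-create the exports from /etc/profile, then check that PATH picked them up.
export JAVA_HOME=${JAVA_HOME:-/home/software/jdk1.7}
export HADOOP_HOME=${HADOOP_HOME:-/usr/local/hadoop}
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
case ":$PATH:" in
  *":$HADOOP_HOME/bin:"*) echo "hadoop on PATH" ;;
  *) echo "hadoop missing from PATH" ;;
esac
```

If the exports above were applied, this prints "hadoop on PATH".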

hadoop-env.sh

export JAVA_HOME=/home/software/jdk1.7

core-site.xml

<configuration>
    <!-- Address the NameNode (the HDFS master) listens on -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <!-- Directory for files Hadoop generates at runtime -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/cloud/hadoop/tmp</value>
    </property>
</configuration>

hdfs-site.xml

<property> 
  <name>dfs.name.dir</name>
  <value>/usr/local/data/namenode</value>
</property>
<property>
   <name>dfs.data.dir</name>
   <value>/usr/local/data/datanode</value>
</property>
<property>
  <name>dfs.tmp.dir</name>
  <value>/usr/local/data/tmp</value>
</property>
<!-- Number of HDFS replicas -->
<property> 
  <name>dfs.replication</name>
  <value>3</value>
</property>
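The directories referenced in hdfs-site.xml must exist (and be writable by the Hadoop user) before formatting; a sketch, defaulting to a /tmp base so it can run anywhere (on a real node set BASE=/usr/local/data to match the values above):

```shell
# Create the NameNode, DataNode, and tmp directories from hdfs-site.xml.
# BASE is an illustrative default; use /usr/local/data to match the config.
BASE=${BASE:-/tmp/hdfs-data}
mkdir -p "$BASE/namenode" "$BASE/datanode" "$BASE/tmp"
ls "$BASE"
```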

mapred-site.xml

<configuration>
<!-- Tell the framework to run MapReduce on YARN -->
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
</configuration>

yarn-site.xml

<configuration>
<!-- Reducers fetch map output via mapreduce_shuffle -->
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>localhost</value>
</property>
</configuration>

slaves file:

hadoop1
hadoop2
hadoop3

Startup commands:

source /etc/profile
start-all.sh

Or start HDFS first with sbin/start-dfs.sh, then start YARN with sbin/start-yarn.sh.


Additional commands:
HDFS commands:
Format HDFS: bin/hadoop namenode -format
Start the HDFS NameNode: sbin/hadoop-daemon.sh start namenode
Start an HDFS DataNode: sbin/hadoop-daemon.sh start datanode
Start multiple DataNodes at once: sbin/hadoop-daemons.sh start datanode
Start the NameNode and all DataNodes in one step: sbin/start-dfs.sh
YARN commands:
Start the ResourceManager: sbin/yarn-daemon.sh start resourcemanager
Start a NodeManager: sbin/yarn-daemon.sh start nodemanager
Start multiple NodeManagers at once: sbin/yarn-daemons.sh start nodemanager
Start the ResourceManager and all NodeManagers in one step: sbin/start-yarn.sh
jps should show (the first three belong to HDFS):
NameNode
SecondaryNameNode
DataNode
ResourceManager
NodeManager

Web UI ports:

HDFS web UI: http://localhost:50070
YARN web UI: http://localhost:8088

Further reading: for Hadoop installation and startup, see section 1.5 of Hadoop技术内幕:深入解析YARN架构设计与实现原理.

      

2: ZooKeeper installation

conf/zoo.cfg

tickTime=2000                               # heartbeat interval between client and server (ms)
dataDir=/usr/myapp/zookeeper-3.4.5/data
dataLogDir=/usr/myapp/zookeeper-3.4.5/logs
clientPort=2181
initLimit=5
syncLimit=2
server.1=dev-hadoop4:2888:3888
server.2=dev-hadoop5:2888:3888
server.3=dev-hadoop6:2888:3888

Under dataDir, create a myid file containing 1, 2, 3, and so on, matching this host's server.N line above.
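Creating the myid file can be scripted; a minimal sketch for one node (DATA_DIR and MYID default to illustrative values here, not the real paths; on a live node use the dataDir from zoo.cfg):

```shell
# Write this node's id into dataDir/myid. The id must match this host's
# server.N line in zoo.cfg (1 on dev-hadoop4, 2 on dev-hadoop5, 3 on dev-hadoop6).
DATA_DIR=${DATA_DIR:-/tmp/zookeeper-data}   # placeholder for the real dataDir
MYID=${MYID:-1}
mkdir -p "$DATA_DIR"
echo "$MYID" > "$DATA_DIR/myid"
```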

Start and stop

From the bin directory, run the following to start, stop, restart, or check the status of the current node (including its role in the cluster):

./zkServer.sh start
./zkServer.sh stop
./zkServer.sh restart
./zkServer.sh status

Starting several nodes at once:

#!/bin/bash
case $1 in
"start"){
	for i in hadoop102 hadoop103 hadoop104
	do
		ssh $i "/opt/module/zookeeper-3.4.10/bin/zkServer.sh start"
	done
};;
"stop"){
	for i in hadoop102 hadoop103 hadoop104
	do
		ssh $i "/opt/module/zookeeper-3.4.10/bin/zkServer.sh stop"
	done
};;
"status"){
	for i in hadoop102 hadoop103 hadoop104
	do
		ssh $i "/opt/module/zookeeper-3.4.10/bin/zkServer.sh status"
	done
};;
esac

  

3: HBase installation

ZooKeeper's role for HBase:

1. Each HBase RegionServer registers with ZooKeeper, which tracks whether the RegionServer is online.

2. When the HMaster starts, it loads the HBase system table -ROOT- into the ZooKeeper cluster; through ZooKeeper you can then find which RegionServer holds the system table .META..

1. Edit the configuration files

hbase-site.xml

<configuration>
  ...
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://namenode.example.org:9000/hbase</value>
    <description>The directory shared by RegionServers.
    </description>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
    <description>The mode the cluster will be in. Possible values are
      false: standalone and pseudo-distributed setups with managed Zookeeper
      true: fully-distributed with unmanaged Zookeeper Quorum (see hbase-env.sh)
    </description>
  </property>
  ...
</configuration>

regionservers:

dev-hadoop4
dev-hadoop5
dev-hadoop6

hbase-env.sh

If you want HBase to use an HDFS client configuration that differs from the server-side configuration, make it visible to HBase in one of the following ways:

  • In hbase-env.sh, add HADOOP_CONF_DIR to the HBASE_CLASSPATH environment variable

  • Place an hdfs-site.xml (or hadoop-site.xml) under ${HBASE_HOME}/conf, ideally as a symlink

  • If you only have a few HDFS client settings, add them directly to hbase-site.xml.
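The first two options can be sketched as shell commands. The directories below are placeholders under /tmp so the sketch is runnable anywhere; on a live cluster substitute your real HADOOP_CONF_DIR and ${HBASE_HOME}/conf:

```shell
# Placeholder directories standing in for the real conf dirs.
HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-/tmp/hadoop-conf}
HBASE_CONF_DIR=${HBASE_CONF_DIR:-/tmp/hbase-conf}
mkdir -p "$HADOOP_CONF_DIR" "$HBASE_CONF_DIR"
touch "$HADOOP_CONF_DIR/hdfs-site.xml"

# Option 1: add the Hadoop conf dir to HBASE_CLASSPATH via hbase-env.sh.
echo "export HBASE_CLASSPATH=\$HBASE_CLASSPATH:$HADOOP_CONF_DIR" >> "$HBASE_CONF_DIR/hbase-env.sh"

# Option 2: symlink hdfs-site.xml into HBase's conf dir.
ln -sfn "$HADOOP_CONF_DIR/hdfs-site.xml" "$HBASE_CONF_DIR/hdfs-site.xml"
```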

Startup commands:

Start the HBase cluster:
bin/start-hbase.sh
Start a single HMaster process:
bin/hbase-daemon.sh start master
Stop a single HMaster process:
bin/hbase-daemon.sh stop master
Start a single HRegionServer process:
bin/hbase-daemon.sh start regionserver
Stop a single HRegionServer process:
bin/hbase-daemon.sh stop regionserver

Default web UI port: 60010 (older releases) or 16010 (HBase 1.0 and later)

4: Elasticsearch installation and configuration

elasticsearch.yml

cluster.name: elasticsearch_production      # cluster name
node.name: elasticsearch_005_data           # node name
path.data: /path/to/data1,/path/to/data2    # data directories
path.logs: /path/to/logs                    # log directory
path.plugins: /path/to/plugins              # plugin directory
discovery.zen.minimum_master_nodes: 2       # minimum master-eligible nodes; prevents split brain, set to (master-eligible nodes / 2) + 1
gateway.recover_after_nodes: 8              # cluster-recovery settings, together with the two lines below
gateway.expected_nodes: 10
gateway.recover_after_time: 5m
discovery.zen.ping.unicast.hosts: ["host1", "host2:port"]   # prefer unicast over multicast
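The minimum_master_nodes setting follows the standard quorum formula; a quick sanity calculation (the node count here is an example, not from the original setup):

```shell
# discovery.zen.minimum_master_nodes should be (master-eligible nodes / 2) + 1.
master_eligible=3
quorum=$(( master_eligible / 2 + 1 ))
echo "minimum_master_nodes: $quorum"   # prints: minimum_master_nodes: 2
```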

Startup command:

./bin/elasticsearch

HTTP port: 9200

5: Hive installation and configuration (install on a single server only; no cluster setup needed)

/etc/profile

export HIVE_HOME=/usr/local/hive
export PATH=$PATH:$HIVE_HOME/bin

Installing MySQL

1. Install MySQL on hadoop1.
2. Install mysql-server with yum:
yum install -y mysql-server
service mysqld start
chkconfig mysqld on
3. Install the MySQL JDBC connector with yum:
yum install -y mysql-connector-java
4. Copy the connector into Hive's lib directory:
cp /usr/share/java/mysql-connector-java-5.1.17.jar /usr/local/hive/lib
5. In MySQL, create the Hive metastore database and grant the hive user access:
create database if not exists hive_metadata;
grant all privileges on hive_metadata.* to 'hive'@'%' identified by 'hive';
grant all privileges on hive_metadata.* to 'hive'@'localhost' identified by 'hive';
grant all privileges on hive_metadata.* to 'hive'@'hadoop1' identified by 'hive';
flush privileges;
use hive_metadata;

hive-site.xml

mv hive-default.xml.template hive-site.xml
vi hive-site.xml
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://spark1:3306/hive_metadata?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value>
</property>
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
</property>

bin/hive-config.sh

export JAVA_HOME=/usr/java/latest
export HIVE_HOME=/usr/local/hive
export HADOOP_HOME=/usr/local/hadoop

Create the Hive environment script from its template:

mv hive-env.sh.template hive-env.sh

6: Kafka installation and configuration

server.properties

broker.id: a sequentially increasing integer (0, 1, 2, 3, 4, ...), the unique id of each broker in the cluster
zookeeper.connect=192.168.1.107:2181,192.168.1.108:2181,192.168.1.109:2181
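Assigning the sequential broker.id values can be scripted; a sketch using the hosts from zookeeper.connect above (the ssh line is commented out, and the server.properties path in it is a placeholder):

```shell
# Print the broker.id each host should get; uncommenting the ssh line
# would push the value into that host's server.properties.
id=0
for host in 192.168.1.107 192.168.1.108 192.168.1.109; do
  echo "$host -> broker.id=$id"
  # ssh "$host" "sed -i 's/^broker.id=.*/broker.id=$id/' /path/to/kafka/config/server.properties"
  id=$((id + 1))
done
```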

Startup command

nohup bin/kafka-server-start.sh config/server.properties &

Fixing the Kafka error "Unrecognized VM option 'UseCompressedOops'"

vi bin/kafka-run-class.sh 
if [ -z "$KAFKA_JVM_PERFORMANCE_OPTS" ]; then
  KAFKA_JVM_PERFORMANCE_OPTS="-server  -XX:+UseCompressedOops -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:+DisableExplicitGC -Djava.awt.headless=true"
fi
Remove -XX:+UseCompressedOops from this line and the error goes away.
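The removal is easy to automate with sed; a demonstration on a copy of the offending options string (against the real file you would run the same substitution, e.g. with GNU sed: sed -i 's/-XX:+UseCompressedOops //g' bin/kafka-run-class.sh):

```shell
# Strip the unsupported JVM flag from the options string.
opts='-server  -XX:+UseCompressedOops -XX:+UseParNewGC -XX:+UseConcMarkSweepGC'
fixed=$(printf '%s' "$opts" | sed 's/-XX:+UseCompressedOops //g')
echo "$fixed"
```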

7: Spark installation

/etc/profile

export SPARK_HOME=/usr/local/spark
export PATH=$PATH:$SPARK_HOME/bin
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib

spark-env.sh

export JAVA_HOME=/usr/java/latest
export SCALA_HOME=/usr/local/scala
export SPARK_MASTER_IP=192.168.1.107
export SPARK_WORKER_MEMORY=1g
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop

slaves

spark1
spark2
spark3

Startup command

./start-all.sh   (run it from $SPARK_HOME/sbin; note that Hadoop also ships a start-all.sh, so avoid invoking it through PATH)

Summary: startup order (not strictly required): hadoop -> zookeeper -> hbase -> elasticSearch

Reference: https://www.cnblogs.com/gyouxu/p/4183417.html

Original post: https://www.cnblogs.com/parent-absent-son/p/10151676.html