ActiveMQ Deployment

System Environment

IP

salt-master-1:192.168.0.156

salt-master-2:192.168.0.157

node-test-1:192.168.0.158

node-test-2:192.168.0.159

OS Version

CentOS release 5.9 (Final)

Kernel Version

2.6.18-348.el5 x86_64

Software Versions

JDK

1.7.0_79

ActiveMQ

5.5.1

Software Installation

Tarball Installation

JDK

wget http://192.168.0.155/soft/java/jdk-7u79-linux-x64.gz

tar xzf jdk-7u79-linux-x64.gz -C /usr/local/

vim /etc/profile

#JAVA ENV

export JAVA_HOME=/usr/local/jdk1.7.0_79

export CLASSPATH=.:${JAVA_HOME}/lib/dt.jar:${JAVA_HOME}/lib/tools.jar

export PATH=${JAVA_HOME}/bin:${PATH}

source /etc/profile

java -version

java version "1.7.0_79"

Java(TM) SE Runtime Environment (build 1.7.0_79-b15)

Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)

ActiveMQ

wget http://archive.apache.org/dist/activemq/apache-activemq/5.5.1/apache-activemq-5.5.1-bin.tar.gz

tar xzf apache-activemq-5.5.1-bin.tar.gz -C /usr/local/

cd /usr/local/apache-activemq-5.5.1/

bin/activemq start

http://192.168.0.158:8161/admin/
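A quick way to confirm the broker actually came up, before opening the console (a hedged check assuming the stock configuration, where 61616 is the default OpenWire transport port and 8161 the web console):

netstat -tlnp | grep -E '61616|8161'    # both ports should be listening

curl -s -o /dev/null -w "%{http_code}\n" http://192.168.0.158:8161/admin/    # expect 200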

bin/activemq stop

ActiveMQ Master/Slave

Shared File System Master Slave

NFS Service
  1. Set up the NFS service on 192.168.0.156 (verification commands and a persistent-mount note follow this list)

yum -y install nfs-utils nfs-utils-lib-devel

vim /etc/exports

/data/kahadb 192.168.0.0/24(rw,no_root_squash,sync)

mkdir -p /data/kahadb

service nfs start

  2. Mount the NFS share on both 192.168.0.158 and 192.168.0.159

mkdir -p /data/kahadb

mount 192.168.0.156:/data/kahadb /data/kahadb

  3. Verify that the NFS service and the mounts work:
    1. Create test.txt on 192.168.0.156

touch /data/kahadb/test.txt

    2. Check on 192.168.0.158 and 192.168.0.159 that test.txt is present

ll /data/kahadb/
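Two extra notes on steps 1 and 2 above. First, the export can be verified directly with the standard nfs-utils tools:

showmount -e 192.168.0.156    # run on a client; should list /data/kahadb

exportfs -v    # run on 192.168.0.156; shows the active exports

Second, the mount command above does not survive a reboot. A minimal /etc/fstab entry for 192.168.0.158 and 192.168.0.159 (assuming the default NFS mount options are acceptable) could be:

192.168.0.156:/data/kahadb  /data/kahadb  nfs  defaults  0 0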

ActiveMQ Configuration

vim /usr/local/apache-activemq-5.5.1/conf/activemq.xml

        <persistenceAdapter>

        <!--    <kahaDB directory="${activemq.base}/data/kahadb"/> -->

            <kahaDB directory="/data/kahadb"/>

        </persistenceAdapter>

Restart ActiveMQ

/usr/local/apache-activemq-5.5.1/bin/activemq restart
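With a shared kahaDB directory, the first broker to acquire the lock file becomes the master; the other broker blocks on the lock and does not open its transports until the master releases it. One way to watch this from the slave (the log path assumes the default log4j setup shipped with the tarball):

tail -f /usr/local/apache-activemq-5.5.1/data/activemq.log    # the slave should log that the database is locked and that it keeps waiting for the lock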

ActiveMQ Master/Slave Failover Tests

  1. Stop ActiveMQ on 192.168.0.158 only (use the probe sketch after this list to see which node is currently master)

/usr/local/apache-activemq-5.5.1/bin/activemq stop

  2. Start ActiveMQ on 192.168.0.158 again

/usr/local/apache-activemq-5.5.1/bin/activemq start

  3. Stop ActiveMQ on 192.168.0.159

/usr/local/apache-activemq-5.5.1/bin/activemq stop

  4. Start ActiveMQ on 192.168.0.159 again

/usr/local/apache-activemq-5.5.1/bin/activemq start

  5. Stop ActiveMQ on both 192.168.0.158 and 192.168.0.159
  6. Start ActiveMQ on both 192.168.0.158 and 192.168.0.159
  7. Stop the NFS service on 192.168.0.156
  8. With NFS on 192.168.0.156 still stopped, restart ActiveMQ
  9. Start the NFS service on 192.168.0.156 again and restart ActiveMQ
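A simple probe for these failover tests, to see which node currently accepts client connections (a hedged sketch; 61616 is the default OpenWire port, and only the broker holding the kahadb lock opens it):

for h in 192.168.0.158 192.168.0.159; do
    # nc exits 0 only when the port is open, i.e. when that node is the active master
    nc -z -w 2 $h 61616 && echo "$h: master" || echo "$h: not accepting connections"
done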

JDBC Master Slave

MySQL Installation and Configuration

  1. Install MySQL on 192.168.0.156

yum -y install mysql mysql-devel mysql-server

  2. Start MySQL

service mysqld start

  3. Create the activemq database and grant access (a connectivity check follows this list)

mysql> create database activemq default character set utf8;

mysql> grant all on activemq.* to "activemq"@"192.168.0.%" identified by "activemq@test";

mysql> flush privileges;
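Before pointing the brokers at it, it is worth confirming the grant works from one of the broker nodes (the password is the one created above; the mysql client must be installed on that node):

mysql -h 192.168.0.156 -u activemq -p'activemq@test' -e 'show databases;'    # should list the activemq database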

ActiveMQ Configuration

Note: this setup was also tried on ActiveMQ 5.9 and 5.14, and the tests failed on both.

vim /usr/local/apache-activemq-5.5.1/conf/activemq.xml

    <destinationPolicy>

      <policyMap><policyEntries>

          <policyEntry topic="FOO.>">

            <dispatchPolicy>

              <strictOrderDispatchPolicy />

            </dispatchPolicy>

            <subscriptionRecoveryPolicy>

              <lastImageSubscriptionRecoveryPolicy />

            </subscriptionRecoveryPolicy>

          </policyEntry>

      </policyEntries></policyMap>

</destinationPolicy>

    <persistenceAdapter>

        <!-- Set createTablesOnStartup="true" for the first startup so the three tables are created, then switch it back to "false". -->

        <jdbcPersistenceAdapter dataDirectory="activemq-data"

            dataSource="#mysql-ds"

            createTablesOnStartup="false"/>

    </persistenceAdapter>

  <bean id="mysql-ds" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">

    <property name="driverClassName" value="com.mysql.jdbc.Driver"/>

    <property name="url" value="jdbc:mysql://192.168.0.156:3306/activemq?relaxAutoCommit=true"/>

    <property name="username" value="activemq"/>

    <property name="password" value="activemq@test"/>

    <property name="poolPreparedStatements" value="true"/>

  </bean>
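ActiveMQ does not ship the MySQL JDBC driver, so the connector jar has to be placed on the broker's classpath before this configuration can start; the version below is only illustrative:

cp mysql-connector-java-5.1.38-bin.jar /usr/local/apache-activemq-5.5.1/lib/

If startup then fails with a ClassNotFoundException for org.apache.commons.dbcp.BasicDataSource, the commons-dbcp and commons-pool jars need to be dropped into lib/ the same way.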

Master/Slave Test

The exact test method for this setup is unknown.
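One plausible way to observe the roles (a hedged sketch: with JDBC master/slave, the master holds a database lock on the ACTIVEMQ_LOCK table, and a slave does not open its transports until it acquires that lock):

# the three tables created on the first startup
mysql -h 192.168.0.156 -u activemq -p'activemq@test' activemq -e 'show tables;'    # ACTIVEMQ_ACKS, ACTIVEMQ_LOCK, ACTIVEMQ_MSGS

# only the current master listens on the transport port
nc -z -w 2 192.168.0.158 61616 && echo master || echo 'slave or down'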

Replicated LevelDB Store

This store is supported from ActiveMQ 5.9.0 onward; earlier releases do not include it.

ZooKeeper Installation and Configuration

wget http://192.168.0.155/soft/apache/zookeeper/zookeeper-3.3.6.tar.gz

tar xzf zookeeper-3.3.6.tar.gz -C /usr/local/

cp /usr/local/zookeeper-3.3.6/conf/zoo_sample.cfg /usr/local/zookeeper-3.3.6/conf/zoo.cfg

vim /usr/local/zookeeper-3.3.6/conf/zoo.cfg

# The number of milliseconds of each tick

tickTime=2000

# The number of ticks that the initial

# synchronization phase can take

initLimit=10

# The number of ticks that can pass between

# sending a request and getting an acknowledgement

syncLimit=5

# the directory where the snapshot is stored.

dataDir=/usr/local/zookeeper-3.3.6/data

# the port at which the clients will connect

clientPort=2181

server.1=192.168.0.157:2888:3888

server.2=192.168.0.158:2888:3888

server.3=192.168.0.159:2888:3888

mkdir -p /usr/local/zookeeper-3.3.6/data

On 192.168.0.157, set its myid:

echo "1" > /usr/local/zookeeper-3.3.6/data/myid

On 192.168.0.158, set its myid:

echo "2" > /usr/local/zookeeper-3.3.6/data/myid

On 192.168.0.159, set its myid:

echo "3" > /usr/local/zookeeper-3.3.6/data/myid

/usr/local/zookeeper-3.3.6/bin/zkServer.sh start
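The start command has to be run on all three nodes. Once they are up, each node can report its role, and the ensemble can be pinged with the standard four-letter command (one node should report leader, the other two follower):

/usr/local/zookeeper-3.3.6/bin/zkServer.sh status

echo ruok | nc 192.168.0.157 2181    # a healthy server answers imok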

ActiveMQ Configuration

vim /usr/local/apache-activemq-5.9.1/conf/activemq.xml

<broker xmlns="http://activemq.apache.org/schema/core" brokerName="uce-core" dataDirectory="${activemq.data}">

        <persistenceAdapter>

    <replicatedLevelDB

      directory="/data/kahadb"

      replicas="3"

      bind="tcp://0.0.0.0:0"

      zkAddress="192.168.0.157:2181,192.168.0.158:2181,192.168.0.159:2181"

      zkPath="/data/leveldb-stores"

      hostname="192.168.0.157"

      />

        </persistenceAdapter>
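Once the three brokers are started, they register their election state under the configured zkPath, which can be inspected with the ZooKeeper CLI that ships with the distribution:

/usr/local/zookeeper-3.3.6/bin/zkCli.sh -server 192.168.0.157:2181

ls /data/leveldb-stores    # inside the zkCli shell; one znode per broker should appear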

Reference configuration from the official documentation:

<broker brokerName="broker" ... >

  ...

  <persistenceAdapter>

    <replicatedLevelDB

      directory="activemq-data"(可自定义)

      replicas="3"(必须3台,1主+2备)

      bind="tcp://0.0.0.0:0"(默认设置即可)

      zkAddress="zoo1.example.org:2181,zoo2.example.org:2181,zoo3.example.org:2181"

      zkPassword="password"  (optional; if set, all three nodes must use the same value)

      zkPath="/activemq/leveldb-stores"(可自定义)

      hostname="broker1.example.org"(必须跟本机地址一致)

      />

  </persistenceAdapter>

  ...

</broker>

Master/Slave Test

Stop the master, and the remaining two nodes carry on automatically: a new master is elected from them.
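For clients, the standard way to ride through such a failover is the failover transport, listing all three brokers in the connection URI so reconnection to the newly elected master is automatic:

failover:(tcp://192.168.0.157:61616,tcp://192.168.0.158:61616,tcp://192.168.0.159:61616)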
