elasticsearch

Elasticsearch installation and the pitfalls encountered along the way:

I. Installation steps:

1. Upload the three installation packages:

elasticsearch-5.2.2.tar    -- the Elasticsearch installation package

elasticsearch-head-master  -- the installation package for the plugin used to connect to Elasticsearch

node-v6.9.2-linux-x64.tar  -- the Node.js runtime environment required to install the plugin

Upload all three and extract each of them.

2. Create the data and logs directories under /opt/module/elasticsearch-5.2.2.

3. Edit the configuration file /opt/module/elasticsearch-5.2.2/config/elasticsearch.yml.

Settings to change:

cluster.name: my-application    -- name of the cluster

node.name: node-102    -- name of this node within the cluster

path.data: /opt/module/elasticsearch-5.2.2/data    -- data storage path

path.logs: /opt/module/elasticsearch-5.2.2/logs    -- log storage path

bootstrap.memory_lock: false

bootstrap.system_call_filter: false

network.host: 192.168.126.xxx

discovery.zen.ping.unicast.hosts: ["mynode1"]    -- if the cluster has more than one node, list all of them here

Add the split-brain prevention settings; without them the cluster does not know how many nodes to expect, and split-brain is hard to control:

discovery.zen.ping.multicast.enabled: false    -- note: multicast discovery was removed in ES 5.x, so this line may be rejected as an unknown setting and can be omitted

discovery.zen.ping.unicast.hosts: ["192.168.133.6","192.168.133.7", "192.168.133.8"]

discovery.zen.ping_timeout: 120s

client.transport.ping_timeout: 60s

Points to note when building the cluster:

(1) For nodes to form a cluster, the elasticsearch.yml on each node must use the same cluster.name; once the nodes are started they join the cluster automatically. If cluster.name is not changed, the default is cluster.name: my-application.

(2) node.name can be chosen freely, but it must be unique among the nodes of the cluster.

(3) An edited line must not start with a space, and there must be a space after the ":" of each setting.

4. Configure the Linux system environment (see http://blog.csdn.net/satiling/article/details/59697916).

(1) Switch to the root user and edit limits.conf, adding content like the following:

[root@hadoop102 elasticsearch-5.2.2]# vi /etc/security/limits.conf

Add the following lines:

* soft nofile 65536

* hard nofile 131072

* soft nproc 2048

* hard nproc 4096

Note: these settings raise the limits on the number of open files (nofile) and processes/threads (nproc) the user may create; Elasticsearch's bootstrap checks refuse to start it if these limits are too low.

(2) Switch to the root user, go to the /etc/security/limits.d directory, and edit the configuration file there.

Change the following:

* soft nproc 1024

# change to

* soft nproc 2048

(3) Switch to the root user and edit sysctl.conf:

[root@hadoop102 elasticsearch-5.2.2]# vi /etc/sysctl.conf 

Add the following setting:

vm.max_map_count=655360

Then apply it with:

[root@hadoop102 elasticsearch-5.2.2]# sysctl -p

Then restart Elasticsearch and it should start successfully. (Note: always start the cluster as a non-root user; Elasticsearch 5.x refuses to run as root, because a root process accepting remote requests, including scripted ones, would be unsafe.)

5. [atguigu@hadoop102 elasticsearch-5.2.2]$ bin/elasticsearch

6. Verify the installation: open http://mynode1:9200 in a browser and check the returned information.

Installation steps for the elasticsearch-head plugin:

1. Download the plugin from https://github.com/mobz/elasticsearch-head (elasticsearch-head-master.zip).

2. Configure the Node.js environment variables (append to /etc/profile):

export NODE_HOME=/opt/module/node-v6.9.2-linux-x64

export PATH=$PATH:$NODE_HOME/bin

[root@hadoop102 software]# source /etc/profile

3. Check the node and npm versions:

[root@hadoop102 software]# node -v

v6.9.2

[root@hadoop102 software]# npm -v

3.10.9

4. Unzip the head plugin into the /opt/module directory:

[atguigu@hadoop102 software]$ unzip elasticsearch-head-master.zip -d /opt/module/

5. Check whether the node_modules/grunt directory exists under the head plugin directory:

If it does not, create it with:

[atguigu@hadoop102 elasticsearch-head-master]$ npm install grunt --save

6. Install cnpm, which will be used to install the head plugin's dependencies:

[atguigu@hadoop102 elasticsearch-head-master]$ npm install -g cnpm --registry=https://registry.npm.taobao.org

7. Install grunt-cli:

[atguigu@hadoop102 elasticsearch-head-master]$ npm install -g grunt-cli

8. Edit Gruntfile.js:

[atguigu@hadoop102 elasticsearch-head-master]$ vim Gruntfile.js
At line 93 of the file, add hostname: '0.0.0.0' to the connect options:
options: {
        hostname:'0.0.0.0',
        port: 9100,
        base: '.',
        keepalive: true
      }

9. Check whether a base directory exists under the head plugin's root directory:

If it does not, copy the base directory and its contents from _site into the head root directory:

[atguigu@hadoop102 elasticsearch-head-master]$ mkdir base

[atguigu@hadoop102 _site]$ cp base/* ../base/

10. Start the grunt server:

[atguigu@hadoop102 elasticsearch-head-master]$ grunt server -d

Startup output:

Running "connect:server" (connect) task
[D] Task source: /opt/module/elasticsearch-head-master/node_modules/grunt-contrib-connect/tasks/connect.js
Waiting forever...
Started connect web server on http://localhost:9100

If grunt reports that its modules are not installed:

Local Npm module "grunt-contrib-clean" not found. Is it installed?

Local Npm module "grunt-contrib-concat" not found. Is it installed?

Local Npm module "grunt-contrib-watch" not found. Is it installed?

Local Npm module "grunt-contrib-connect" not found. Is it installed?

Local Npm module "grunt-contrib-copy" not found. Is it installed?

Local Npm module "grunt-contrib-jasmine" not found. Is it installed?

Warning: Task "connect:server" not found. Use --force to continue.

run the following commands:

npm install grunt-contrib-clean --registry=https://registry.npm.taobao.org

npm install grunt-contrib-concat --registry=https://registry.npm.taobao.org

npm install grunt-contrib-watch --registry=https://registry.npm.taobao.org

npm install grunt-contrib-connect --registry=https://registry.npm.taobao.org

npm install grunt-contrib-copy --registry=https://registry.npm.taobao.org

npm install grunt-contrib-jasmine --registry=https://registry.npm.taobao.org

The last module may fail to install, but that does not affect use.

11. If the cluster shows as not connected after the head plugin starts:

Edit the configuration file elasticsearch.yml under /opt/module/elasticsearch-5.2.2/config and append the following to the end of the file:

[atguigu@hadoop102 config]$ pwd

/opt/module/elasticsearch-5.2.2/config

[atguigu@hadoop102 config]$ vi elasticsearch.yml

http.cors.enabled: true

http.cors.allow-origin: "*"

Then restart Elasticsearch.

Note: if the health status still shows as not connected after the changes above, also append the following two lines to the end of elasticsearch.yml:

http.cors.allow-methods: OPTIONS, HEAD, GET, POST, PUT, DELETE
http.cors.allow-headers: "X-Requested-With, Content-Type, Content-Length, X-User"

Also change the default localhost in the head page's connection URL (the health-check address) to this machine's IP address.

Operating Elasticsearch with the Java API (the JDK must be JDK 1.8):

The Elasticsearch Java client is very powerful; it can stand up an embedded instance and run administrative tasks when necessary.

When running a Java application together with Elasticsearch, two modes of operation are available, in which the application plays a more active or a more passive role in the Elasticsearch cluster. In the more active mode (called a Node Client), the application instance receives requests from the cluster and decides which node should handle each one, just as a normal node does (the application can even host indices and process requests itself). The other mode, called a Transport Client, forwards all requests to another Elasticsearch node, which determines the final target.
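
A minimal sketch of this Transport Client mode (the class name TransportClientSketch is only for illustration; the cluster name, host mynode1 and port 9300 match the setup above, and client.transport.sniff is an optional setting, not used in the code later in this post, that lets the client discover the remaining cluster nodes on its own):

import java.net.InetAddress;

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.transport.client.PreBuiltTransportClient;

public class TransportClientSketch {
    public static void main(String[] args) throws Exception {
        // The Transport Client does not join the cluster itself; it forwards every request
        // to one of the listed nodes, which routes it to its final destination.
        Settings settings = Settings.builder()
                .put("cluster.name", "my-application")   // must match cluster.name in elasticsearch.yml
                .put("client.transport.sniff", true)     // optional: discover the other cluster nodes automatically
                .build();
        TransportClient client = new PreBuiltTransportClient(settings)
                .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("mynode1"), 9300));
        System.out.println(client.connectedNodes());     // the nodes the client managed to connect to
        client.close();
    }
}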

1. Create a new Maven project and add the following dependencies to its pom.xml; the corresponding jars will be downloaded automatically.

 

<dependencies>
    <!-- JUnit 4 is required for the @Test / @Before annotations used in the code below -->
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.12</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.elasticsearch</groupId>
        <artifactId>elasticsearch</artifactId>
        <version>5.2.2</version>
    </dependency>
    <dependency>
        <groupId>org.elasticsearch.client</groupId>
        <artifactId>transport</artifactId>
        <version>5.2.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-core</artifactId>
        <version>2.9.0</version>
    </dependency>
</dependencies>

 

Wait for the dependency jars to finish downloading. When a document is written directly to Elasticsearch and the target index does not exist, the index is created automatically by default and its mapping is generated dynamically.
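
A quick way to observe this behaviour, sketched with the TransportClient built in getClient() below (the document id "0" and its content are made up for illustration; prepareExists is the admin call that checks whether an index exists):

        // Sketch: indexing into a non-existent index creates it automatically with a dynamic (default) mapping.
        boolean before = client.admin().indices().prepareExists("myblog").get().isExists();
        client.prepareIndex("myblog", "article", "0")
              .setSource("{\"id\":\"0\",\"title\":\"auto-create test\"}")
              .get();
        boolean after = client.admin().indices().prepareExists("myblog").get().isExists();
        System.out.println("existed before: " + before + ", exists after: " + after);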

Example source code for Elasticsearch API operations:

package com.wcg.elasticsearch;

import java.io.IOException;
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ExecutionException;

import org.elasticsearch.action.admin.indices.mapping.put.PutMappingRequest;
import org.elasticsearch.action.delete.DeleteResponse;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.action.get.MultiGetItemResponse;
import org.elasticsearch.action.get.MultiGetResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.action.update.UpdateResponse;
import org.elasticsearch.client.Requests;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.SearchHits;
import org.elasticsearch.transport.client.PreBuiltTransportClient;
import org.junit.Before;
import org.junit.Test;

/**
 * Basic Elasticsearch operations using the Java TransportClient.
 */
public class App 
{
    TransportClient client;
    @SuppressWarnings("unchecked")
    @Before
    public  void getClient()
    {
        // name of the cluster to connect to (must match cluster.name in elasticsearch.yml)
        Settings setting = Settings.builder().put("cluster.name", "my-application").build();
        
        client = new PreBuiltTransportClient(setting);
         
        try {
            
            client.addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("mynode1"),9300));
            System.out.println(client.toString());
        } catch (UnknownHostException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
        
    
    }
    
    // Create an index (roughly the equivalent of creating a database)
    @Test
    public void createIndex() {
        // create the index
        client.admin().indices().prepareCreate("myblog").get();
        // release resources
        client.close();
    }
    
    // Delete an index
    @Test
    public void deleteIndex() {
        
        client.admin().indices().prepareDelete("myblog").get();
        
        client.close();
        
    }
    // Create a document from a JSON string
    @Test
    public void createDoc() {
        // content of the document to create
        String json = "{" + "\"id\":\"1\"," + "\"title\":\"基于Lucene的搜索服务器\","
                + "\"content\":\"它提供了一个分布式多用户能力的全文搜索引擎,基于RESTful web接口\"" + "}";
        // index the document
        IndexResponse response = client.prepareIndex("myblog", "article", "1").setSource(json).execute().actionGet();
        // print the response
        System.out.println("index: " + response.getIndex());
        System.out.println("type: " + response.getType());
        System.out.println("version: " + response.getVersion());
        System.out.println("result: " + response.getResult());
        client.close();
    }
    
    // Create a document from a HashMap
    @Test
    public void createIndexByMap() {

        // 1 prepare the document data
        Map<String, Object> json = new HashMap<String, Object>();
        json.put("id", "2");
        json.put("title", "基于Lucene的搜索服务器");
        json.put("content", "它提供了一个分布式多用户能力的全文搜索引擎,基于RESTful web接口");

        // index the document
        IndexResponse response = client.prepareIndex("myblog", "article", "2").setSource(json).execute().actionGet();

        // print the response
        System.out.println("index: " + response.getIndex());
        System.out.println("type: " + response.getType());
        System.out.println("version: " + response.getVersion());
        System.out.println("result: " + response.getResult());
        System.out.println("id: " + response.getId());

        // release resources
        client.close();
    }
    
    // Create a document with the XContentBuilder helper
    @Test
    public void createIndexByBuilder() throws IOException {
        // 1 build the JSON document with the helper class that ships with ES
        XContentBuilder builder = XContentFactory.jsonBuilder().startObject().field("id", 3)
                .field("title", "学霸").field("content", "能力的全文搜索引擎,基于RESTful web接口。")
                .endObject();
        // index the document
        IndexResponse response = client.prepareIndex("myblog", "article", "3").setSource(builder).execute().actionGet();

        // print the response
        System.out.println("index: " + response.getIndex());
        System.out.println("type: " + response.getType());
        System.out.println("version: " + response.getVersion());
        System.out.println("result: " + response.getResult());
        System.out.println("id: " + response.getId());

        // release resources
        client.close();
    }
    
    // Get a single document by id
    @Test
    public void queryIndex() {
        // query
        GetResponse response = client.prepareGet("myblog", "article", "2").get();

        // print the result
        System.out.println(response.getSourceAsString());

        client.close();
    }
    
    // Multi-get: fetch several documents in one request
    @Test
    public void queryIndexs() {
        
        MultiGetResponse response = client.prepareMultiGet().add("myblog", "article", "1").add("myblog", "article", "2").add("myblog", "article", "3").get();
        
        for(MultiGetItemResponse multiGetItemResponse : response) {
            GetResponse response2 = multiGetItemResponse.getResponse();
            if(response2.isExists()) {
                System.out.println(response2.getSourceAsString());
            }
        }
        client.close();
    }
    
    // Update an existing document
    @Test
    public void update() throws IOException, InterruptedException, ExecutionException {
        UpdateRequest updateRequest = new UpdateRequest("myblog", "article", "2");
        updateRequest.doc(XContentFactory.jsonBuilder().startObject().field("id", "2")
                .field("title", "大牛")
                .field("content", "苏大虎,看包").endObject());

        UpdateResponse response = client.update(updateRequest).get();

        System.out.println("index: " + response.getIndex());
        System.out.println("type: " + response.getType());
        System.out.println("version: " + response.getVersion());
        System.out.println("result: " + response.getResult());

        client.close();
    }
    
    // Upsert: update the document if it exists, otherwise insert it (similar to Oracle's MERGE)
    @Test
    public void updateinsert() throws IOException, InterruptedException, ExecutionException {
        // the document to create if it does not exist yet
        IndexRequest indexrequest = new IndexRequest("myblog", "article", "5");
        indexrequest.source(XContentFactory.jsonBuilder().startObject().field("id", "5")
                .field("title", "二牛")
                .field("content", "苏小虎,喝水").endObject());

        // the partial document to apply if it does exist
        UpdateRequest updateRequest = new UpdateRequest("myblog", "article", "5");
        updateRequest.doc(XContentFactory.jsonBuilder().startObject().field("id", "5")
                .field("title", "黄牛")
                .field("content", "苏二虎,迈赛").endObject());

        updateRequest.upsert(indexrequest);
        // perform the upsert
        UpdateResponse response = client.update(updateRequest).get();

        System.out.println("index: " + response.getIndex());
        System.out.println("type: " + response.getType());
        System.out.println("version: " + response.getVersion());
        System.out.println("result: " + response.getResult());

        client.close();
    }
    
    // Delete a document
    @Test
    public void delIndex() {
        DeleteResponse response = client.prepareDelete("myblog", "article", "5").get();
        System.out.println("result: " + response.getResult());

        client.close();
    }
    
    // Match-all query
    @Test
    public void queryMatchAll() {
        // query
        SearchResponse response = client.prepareSearch("myblog").setTypes("article").setQuery(QueryBuilders.matchAllQuery()).get();
        // get the hits object
        SearchHits hits = response.getHits();
        // print the number of hits
        System.out.println("number of hits: " + hits.getTotalHits());
        // iterate over the results
        Iterator<SearchHit> iterator = hits.iterator();
        while (iterator.hasNext()) {
            SearchHit next = iterator.next();
            System.out.println(next.getSourceAsString());
        }
        client.close();
    }
    
    // Analyzed (query_string) query
    @Test
    public void query() {
        SearchResponse response = client.prepareSearch("myblog").setTypes("article").setQuery(QueryBuilders.queryStringQuery("全文")).get();
        // get the hits object
        SearchHits hits = response.getHits();
        // print the number of hits
        System.out.println("number of hits: " + hits.getTotalHits());
        // iterate over the results
        Iterator<SearchHit> iterator = hits.iterator();
        while (iterator.hasNext()) {
            SearchHit next = iterator.next();
            System.out.println(next.getSourceAsString());
        }
        client.close();
    }
    
    // Wildcard query
    @Test
    public void wildcardQuery() {
        SearchResponse response = client.prepareSearch("myblog").setTypes("article").setQuery(QueryBuilders.wildcardQuery("content", "*全*")).get();
        // get the hits object
        SearchHits hits = response.getHits();
        // print the number of hits
        System.out.println("number of hits: " + hits.getTotalHits());
        // iterate over the results
        Iterator<SearchHit> iterator = hits.iterator();
        while (iterator.hasNext()) {
            SearchHit next = iterator.next();
            System.out.println(next.getSourceAsString());
        }
        client.close();
    }
    // Term query on a single field
    @Test
    public void queryterm() {
        // example term: a single character, because the standard analyzer indexes Chinese text character by character
        SearchResponse response = client.prepareSearch("myblog").setTypes("article").setQuery(QueryBuilders.termQuery("content", "全")).get();
        // get the hits object
        SearchHits hits = response.getHits();
        // print the number of hits
        System.out.println("number of hits: " + hits.getTotalHits());
        // iterate over the results
        Iterator<SearchHit> iterator = hits.iterator();
        while (iterator.hasNext()) {
            SearchHit next = iterator.next();
            System.out.println(next.getSourceAsString());
        }
        client.close();
    }
    
    // Fuzzy query
    @Test
    public void fuzzyQuery() {
        SearchResponse response = client.prepareSearch("myblog").setTypes("article").setQuery(QueryBuilders.fuzzyQuery("content", "全1")).get();
        // get the hits object
        SearchHits hits = response.getHits();
        // print the number of hits
        System.out.println("number of hits: " + hits.getTotalHits());
        // iterate over the results
        Iterator<SearchHit> iterator = hits.iterator();
        while (iterator.hasNext()) {
            SearchHit next = iterator.next();
            System.out.println(next.getSourceAsString());
        }
        client.close();
    }
    // Put a mapping (note: a field's mapping cannot be changed once it exists; if the index already contains
    // documents, the mappings for their fields are already fixed, although new fields can still be added)
    @Test
    public void putmapping() throws InterruptedException, ExecutionException, IOException {
        // 1 build the mapping (in ES 5.x the old "string" type is replaced by text/keyword, and store takes a boolean)
        XContentBuilder builder = XContentFactory.jsonBuilder()
                .startObject()
                    .startObject("article")
                        .startObject("properties")
                            .startObject("id1")
                                .field("type", "text")
                                .field("store", true)
                            .endObject()
                            .startObject("title2")
                                .field("type", "text")
                                .field("store", false)
                            .endObject()
                            .startObject("content")
                                .field("type", "text")
                                .field("store", true)
                            .endObject()
                        .endObject()
                    .endObject()
                .endObject();

        // 2 submit the mapping
        PutMappingRequest mapping = Requests.putMappingRequest("myblog").type("article").source(builder);

        client.admin().indices().putMapping(mapping).get();

        // 3 release resources
        client.close();
    }
    
    
}
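
One note on the term, wildcard and fuzzy queries above: with the default standard analyzer, Chinese text is indexed character by character, so a termQuery can only match a single-character term, while queryStringQuery is analyzed first and can match multi-character input. A small sketch, reusing the client from getClient() (the expected hit counts are assumptions based on the sample documents indexed earlier):

        // With the standard analyzer, "content" is tokenized into single characters,
        // so "全" exists as an indexed term while "全文" does not.
        SearchResponse single = client.prepareSearch("myblog").setTypes("article")
                .setQuery(QueryBuilders.termQuery("content", "全")).get();    // expected to hit the sample documents
        SearchResponse whole = client.prepareSearch("myblog").setTypes("article")
                .setQuery(QueryBuilders.termQuery("content", "全文")).get();  // expected to return no hits
        System.out.println(single.getHits().getTotalHits() + " vs " + whole.getHits().getTotalHits());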

 

Original post: https://www.cnblogs.com/wcgstudy/p/11186558.html