Steps for Integrating Spring with Elasticsearch

First start the elasticsearch-head plugin: in the /opt/head/ directory, run the command cnpm run start, then open your-IP:9100 in a browser.

Then start Elasticsearch itself using its absolute path, running as the es user (not root): /opt/es/bin/elasticsearch. It is reachable at your-IP:9200.
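
Whether both services are up can be verified simply by opening the two URLs in a browser. If you prefer to do the same check from code, the snippet below is an optional sketch using only the JDK; the IP address 192.168.26.130 is the one used later in this article and is only an example, replace it with your own:

// Optional sketch (not required for the integration): verify that Elasticsearch answers on port 9200.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class EsPing {
    public static void main(String[] args) throws Exception {
        // replace 192.168.26.130 with your own IP address
        HttpURLConnection conn = (HttpURLConnection) new URL("http://192.168.26.130:9200").openConnection();
        conn.setRequestMethod("GET");
        System.out.println("HTTP status: " + conn.getResponseCode());
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                // prints the cluster banner JSON returned by ES
                System.out.println(line);
            }
        }
    }
}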

1. Add the dependencies

<dependencies>
        <dependency>
            <groupId>com.github.pagehelper</groupId>
            <artifactId>pagehelper</artifactId>
            <version>5.1.2</version>
        </dependency>
        <dependency>
            <groupId>org.elasticsearch</groupId>
            <artifactId>elasticsearch</artifactId>
            <version>5.6.8</version>
        </dependency>
        <dependency>
            <groupId>org.elasticsearch.client</groupId>
            <artifactId>transport</artifactId>
            <version>5.6.8</version>
        </dependency>
        <dependency>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-to-slf4j</artifactId>
            <version>2.9.1</version>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>1.7.24</version>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-simple</artifactId>
            <version>1.7.21</version>
        </dependency>
        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.12</version>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-core</artifactId>
            <version>2.8.1</version>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-databind</artifactId>
            <version>2.8.1</version>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-annotations</artifactId>
            <version>2.8.1</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.data</groupId>
            <artifactId>spring-data-elasticsearch</artifactId>
            <version>3.0.5.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-test</artifactId>
            <version>5.0.4.RELEASE</version>
        </dependency>
    </dependencies>

2. Write the configuration file (es.xml)

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:elasticsearch="http://www.springframework.org/schema/data/elasticsearch"
    xmlns:context="http://www.springframework.org/schema/context"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
                            http://www.springframework.org/schema/data/elasticsearch http://www.springframework.org/schema/data/elasticsearch/spring-elasticsearch.xsd
                            http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd">
    
    <!-- Scan the DAO package and create instances automatically; this specifies the package to scan for ES repositories -->
    <!-- Spring Data is used here to perform create/read/update/delete against ES -->
    <!-- This package contains the ES repository interfaces we declare -->
    <elasticsearch:repositories base-package="com.liujin.dao" />
    <!-- ES exposes two ports: 9200 and 9300
        9200: the port exposed to browsers (RESTful requests)
        9300: the port used by Java clients to operate on ES
     -->
     <!-- Specify the IP address and port of ES. Through the browser you can send RESTful requests to perform CRUD on ES. -->
    <elasticsearch:transport-client id="client"
        cluster-nodes="192.168.26.130:9300" /> <!-- Spring Data Elasticsearch also needs an ElasticsearchTemplate bean, declared below -->
        
        <!-- Declare a bean named elasticsearchTemplate; it performs the CRUD operations against ES -->
    <bean id="elasticsearchTemplate"
        class="org.springframework.data.elasticsearch.core.ElasticsearchTemplate">
        <constructor-arg name="client" ref="client"></constructor-arg>
    </bean>
    
</beans>
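
For reference, the same client and template can also be declared with Java configuration instead of XML. The following is only a sketch under the versions listed above (Elasticsearch 5.6.8, spring-data-elasticsearch 3.0.5.RELEASE); the class name EsConfig is made up, and the cluster address simply mirrors the XML:

import java.net.InetAddress;
import java.net.UnknownHostException;

import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.transport.client.PreBuiltTransportClient;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.elasticsearch.core.ElasticsearchTemplate;
import org.springframework.data.elasticsearch.repository.config.EnableElasticsearchRepositories;

// Sketch of a Java-config equivalent of es.xml (not part of the original article)
@Configuration
@EnableElasticsearchRepositories(basePackages = "com.liujin.dao")
public class EsConfig {

    @Bean
    public Client client() throws UnknownHostException {
        // connect to the same node as the XML config: 192.168.26.130:9300
        return new PreBuiltTransportClient(Settings.EMPTY)
                .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("192.168.26.130"), 9300));
    }

    @Bean
    public ElasticsearchTemplate elasticsearchTemplate(Client client) {
        // same bean as the <bean id="elasticsearchTemplate"> declaration in es.xml
        return new ElasticsearchTemplate(client);
    }
}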

3. Write the DAO interface (once your interface extends ElasticsearchCrudRepository, it automatically gains CRUD capabilities)

public interface UserRespository extends ElasticsearchCrudRepository<User, Integer>{
//Why does simply declaring a method in this interface make the query work?
//That is the Spring Data convention:
//as long as the method name follows the naming rules, the query is generated automatically
    
    List<User> findByName(String name);
    
    //query by address
    //List<User> findByAddress(String address);
    
    //query by name and address
    //List<User> findByNameAndAddress(String name,String address);
    
    //query by name or address
    //List<User> findByNameOrAddress(String address,String name);
    
    //query data with ID less than 5
    //List<User> findByIdLessThan(int id);
    
}
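
Besides the derived query methods above, the interface inherits the standard CRUD methods from ElasticsearchCrudRepository without declaring anything. As a sketch, once the test class from step 5 is wired up, they can be called like this (testInheritedCrud is a hypothetical method name and the data is only an example):

    @Test
    public void testInheritedCrud() {
        User u = new User();
        u.setId(2);
        u.setName("spring data elasticsearch");

        userRespository.save(u);                              // index the document (insert or update)
        Optional<User> found = userRespository.findById(2);   // look up by id; returns java.util.Optional
        long count = userRespository.count();                 // number of documents in the index
        System.out.println(found.orElse(null) + " / total: " + count);
        userRespository.deleteById(2);                        // remove the document again
    }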

4. Write the entity class (you must specify the index name and type; the index name must be all lowercase, with no special characters)

//Specifies the index name and type (the index name must be all lowercase, with no special characters)
@Document(indexName = "test_user",type = "user")
public class User implements Serializable{

    //primary key
    @Id
    private int id;
    //for the name field: whether it is indexed and stored, the analyzer used for indexing, the analyzer used for the search keyword, and the data type
    @Field(index = true,store = true,analyzer = "ik_smart",searchAnalyzer ="ik_smart",type = FieldType.Text )
    private String name;
    private String address;
    public String getAddress() {
        return address;
    }
    public void setAddress(String address) {
        this.address = address;
    }
    @Override
    public String toString() {
        return "User [id=" + id + ", name=" + name + ",address="+address+"]";
    }
    public int getId() {
        return id;
    }
    public void setId(int id) {
        this.id = id;
    }
    public String getName() {
        return name;
    }
    public void setName(String name) {
        this.name = name;
    }
    
}
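
If you want the index and mapping to be created explicitly from these annotations (spring-data-elasticsearch normally creates them automatically when the repository is initialized), ElasticsearchTemplate can do it. A minimal sketch, to be placed in the test class from step 5; it uses the injected ElasticsearchTemplate shown in part (2) below, and the method name is made up:

    @Test
    public void testCreateIndexAndMapping() {
        // creates the index named in @Document(indexName = "test_user") if it does not exist yet
        elasticsearchTemplate.createIndex(User.class);
        // pushes the mapping derived from the @Field annotations (ik_smart analyzer, text type)
        elasticsearchTemplate.putMapping(User.class);
    }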

5. Write the test class

Integrate JUnit:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:es.xml")

Inject the DAO interface:

@Autowired
    UserRespository userRespository;

Saving and updating use the same method; one method does two jobs, insert and update. If you need to tell them apart in code, check whether the entity's ID already exists in the index: if the ID does not exist it is an insert, if it does exist it is an update (see the sketch after the test methods below).

//save
    @Test
    public void testSave() {
        User user = new User();
        user.setId(1);
        user.setName("中国人民解放军成立了,值得庆祝!大家好,我是XXX");
        userRespository.save(user);
        System.out.println("成功往索引库中保存了user对象!");
    }
    //Summary: the same save method does two things, insert and update. To tell them apart, check whether the entity's ID already exists: if it does not, this is an insert; if it does, it is an update.
    //update
    @Test
    public void testUpdate() {
        User user = new User();
        user.setId(1);
        user.setName("中国人民解放军成立了,值得庆祝!大家好,我是hhh===");
        //the same save method handles both insert and update
        //the decision is based on the id: if the id already exists in the index it is an update, otherwise it is an insert
        userRespository.save(user);
    }
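
If you do want to branch on the two cases yourself, existsById from the inherited CrudRepository can be used before calling save. A small sketch (saveOrUpdate is a hypothetical helper, not in the original code):

    //Sketch: tell insert and update apart before calling save
    public void saveOrUpdate(User user) {
        if (userRespository.existsById(user.getId())) {
            // the id is already in the index, so save() will overwrite the existing document
            System.out.println("id " + user.getId() + " already exists, this save is an update");
        } else {
            // the id is not in the index yet, so save() will create a new document
            System.out.println("id " + user.getId() + " is new, this save is an insert");
        }
        userRespository.save(user);
    }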

Delete

@Test
    public void testDel() {
        userRespository.deleteById(1);
        System.out.println("根据id删除成功");
    }

Test queries

@Test
    public void testFind() {
        //query all
//        Iterable<User> findAll = userRespository.findAll();
//        for (User user : findAll) {
//            System.out.println(user);
//        }
        //query by name; which keywords match depends on the analyzer (ik_smart)
        List<User> list = userRespository.findByName("值得");
        for (User user : list) {
            System.out.println(user);
        }
    }

If you want to query by other fields, you can define your own methods, but they must follow the naming convention.

Just declare the method in the DAO interface according to the naming convention and the query is implemented automatically, as in the sketch below.
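
For example, uncommenting one of the methods already listed in UserRespository is enough; Spring Data parses the method name and builds the query. A small sketch of calling the combined name-and-address query (the test method name and the argument values are only placeholders):

    //in UserRespository, just declare it (no implementation needed):
    //List<User> findByNameAndAddress(String name, String address);

    @Test
    public void testFindByNameAndAddress() {
        // "NameAndAddress" in the method name becomes a query on both fields
        List<User> list = userRespository.findByNameAndAddress("庆祝", "some-address");
        for (User user : list) {
            System.out.println(user);
        }
    }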

Test highlighted queries

(Write a utility class)

/**
 * Copyright © 2019. All rights reserved.
 *
 * @Title: HLUtils.java
 * @Package: com.liujin.util
 * @Description: highlighted paging query helper for Elasticsearch
 * @date: 2019-07-24 10:14:13
 * @version: V1.0
 */
package com.liujin.util;

import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.SearchHits;
import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder;
import org.elasticsearch.search.fetch.subphase.highlight.HighlightField;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Pageable;
import org.springframework.data.domain.Sort;
import org.springframework.data.elasticsearch.core.ElasticsearchTemplate;
import org.springframework.data.elasticsearch.core.SearchResultMapper;
import org.springframework.data.elasticsearch.core.aggregation.AggregatedPage;
import org.springframework.data.elasticsearch.core.aggregation.impl.AggregatedPageImpl;
import org.springframework.data.elasticsearch.core.query.GetQuery;
import org.springframework.data.elasticsearch.core.query.IndexQuery;
import org.springframework.data.elasticsearch.core.query.IndexQueryBuilder;
import org.springframework.data.elasticsearch.core.query.NativeSearchQueryBuilder;
import org.springframework.data.elasticsearch.core.query.SearchQuery;

import com.github.pagehelper.PageInfo;


/**
 * @ClassName: HLUtils
 * @Description: utility methods for saving, deleting and running highlighted paged queries against Elasticsearch
 * @date: 2019-07-24 10:14:13
 */
public class HLUtils {


    /**
     * Save or update a document
     * 
     * @param elasticsearchTemplate
     * @param id
     * @param object
     */
    public static void saveObject(ElasticsearchTemplate elasticsearchTemplate, String id, Object object) {
        // build the index query object
        IndexQuery query = new IndexQueryBuilder().withId(id).withObject(object).build();
        // index (insert or update) the document into ES
        elasticsearchTemplate.index(query);
    }

    /**
     * Batch delete by ids
     * 
     * @param elasticsearchTemplate
     * @param clazz
     * @param ids
     */
    public static void deleteObject(ElasticsearchTemplate elasticsearchTemplate, Class<?> clazz, Integer ids[]) {
        for (Integer id : ids) {
            // delete the document with this id
            elasticsearchTemplate.delete(clazz, id + "");
        }
    }

    /**
     * 
     * @Title: selectById
     * @Description: query an object from the ES server by id
     * @param elasticsearchTemplate
     * @param clazz
     * @param id
     * @return
     * @return: Object
     */
    public static Object selectById(ElasticsearchTemplate elasticsearchTemplate, Class<?> clazz, Integer id) {
        GetQuery query = new GetQuery();
        query.setId(id + "");
        return elasticsearchTemplate.queryForObject(query, clazz);
    }

    // Highlighted paging query
    public static PageInfo<?> findByHighLight(ElasticsearchTemplate elasticsearchTemplate, Class<?> clazz, Integer page,
            Integer rows, String fieldNames[],String sortField, String value) {
        AggregatedPage<?> pageInfo = null;
        PageInfo<?> pi = new PageInfo<>();
        // create the Pageable object; sortField is the name of the entity property to sort by (here the primary key)
        final Pageable pageable = PageRequest.of(page - 1, rows, Sort.by(Sort.Direction.ASC, sortField));
        //the search query
        SearchQuery query = null;
        //builder for the query condition
        QueryBuilder queryBuilder = null;

        if (value != null && !"".equals(value)) {
            // prefix and suffix wrapped around the highlighted fragments
            String preTags = "<font color='red'>";
            String postTags = "</font>";

            // one highlight field builder per field name
            HighlightBuilder.Field highlightFields[] = new HighlightBuilder.Field[fieldNames.length];

            for (int i = 0; i < fieldNames.length; i++) {
                highlightFields[i] = new HighlightBuilder.Field(fieldNames[i]).preTags(preTags).postTags(postTags);
            }

            // build a multi-match query over the given fields
            queryBuilder = QueryBuilders.multiMatchQuery(value, fieldNames);
            query = new NativeSearchQueryBuilder().withQuery(queryBuilder).withHighlightFields(highlightFields)
                    .withPageable(pageable).build();

            pageInfo = elasticsearchTemplate.queryForPage(query, clazz, new SearchResultMapper() {

                public <T> AggregatedPage<T> mapResults(SearchResponse response, Class<T> clazz, Pageable pageable1) {

                    List<T> content = new ArrayList<T>();
                    long total = 0l;

                    try {
                        // the search results
                        SearchHits hits = response.getHits();
                        if (hits != null) {
                            //total number of hits
                            total = hits.getTotalHits();
                            // the array of hits
                            SearchHit[] searchHits = hits.getHits();
                            // make sure there are hits
                            if (searchHits != null && searchHits.length > 0) {
                                // iterate over the hits
                                for (int i = 0; i < searchHits.length; i++) {
                                    // create an entity instance for this hit
                                    T entity = clazz.newInstance();

                                    // the current hit
                                    SearchHit searchHit = searchHits[i];

                                    // all declared fields of the entity class
                                    Field[] fields = clazz.getDeclaredFields();

                                    // iterate over the fields
                                    for (int k = 0; k < fields.length; k++) {
                                        // the current field
                                        Field field = fields[k];
                                        // force access via reflection
                                        field.setAccessible(true);
                                        // field name
                                        String fieldName = field.getName();
                                        if (!fieldName.equals("serialVersionUID")&&!fieldName.equals("user")&&!fieldName.equals("channel")&&!fieldName.equals("category")&&!fieldName.equals("articleType")&&!fieldName.equals("imgList")) {
                                            HighlightField highlightField = searchHit.getHighlightFields()
                                                    .get(fieldName);
                                            if (highlightField != null) {
                                                // highlight handling: take the fragment wrapped in <font color='red'> ... </font>
                                                String value = highlightField.getFragments()[0].toString();
                                                // note: this assumes the field is a String
                                                field.set(entity, value);
                                            } else {
                                                //plain (non-highlighted) value for this field from the source map
                                                Object value = searchHit.getSourceAsMap().get(fieldName);
                                                // the declared type of the field
                                                Class<?> type = field.getType();
                                                if (type == Date.class) {
                                                    // dates are stored as epoch millis; guard against null
                                                    if(value!=null) {
                                                        field.set(entity, new Date(Long.valueOf(value + "")));
                                                    }
                                                } else {
                                                    field.set(entity, value);
                                                }
                                            }
                                        }
                                    }

                                    content.add(entity);
                                }
                            }
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                    }

                    return new AggregatedPageImpl<T>(content, pageable, total);
                }
            });

        } else {
            // no search keyword: fetch all data from ES, with paging
            query = new NativeSearchQueryBuilder().withPageable(pageable).build();
            pageInfo = elasticsearchTemplate.queryForPage(query, clazz);
        }
        int totalCount = (int) pageInfo.getTotalElements();
        int pages = totalCount%rows==0?totalCount/rows:totalCount/rows+1;
        pi.setTotal(pageInfo.getTotalElements());
        pi.setPageNum(page);
        pi.setPageSize(rows);
        pi.setPrePage(page-1);
        pi.setNextPage(page+1);
        pi.setPages(pages);
        List content = pageInfo.getContent();
        pi.setList(content);

        return pi;
    }

}

(2) The utility method needs an ElasticsearchTemplate, so inject it

@Autowired
    ElasticsearchTemplate elasticsearchTemplate;

(Call the method on the utility class)

//test highlighted display
    @Test
    public void testHighLight() {
        //parameters: 1. the ElasticsearchTemplate  2. the entity class to operate on  3. the current page  4. rows per page  5. a String array of the fields to search (the names must match the entity's fields)  6. the field to sort by  7. the search keyword
        PageInfo<?> info = HLUtils.findByHighLight(elasticsearchTemplate, User.class, 1, 5, new String[] {"name"}, "id", "庆祝");
        List<?> list = info.getList();
        for (Object object : list) {
            System.out.println(object);
        }
    }
Original article: https://www.cnblogs.com/liujinqq7/p/12427298.html