Elasticsearch: scheduled data deletion (for indexes whose names do not end in a regular date suffix)

First of all:

Thanks to the original blogger. Reference post: https://www.orchome.com/477

Feasible approaches for periodically cleaning up data from specific indexes:

1. Manual cleanup

Index data can be deleted by hand through Kibana's Index Management page.

2. Cleanup via the REST API

curl -XDELETE 'http://ip:port/index_name'
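For example, you can confirm the exact index name before removing it (ip, port and the index ct-dmp-operationlog below are placeholders; deletion is irreversible):

# List all indexes with their document counts and sizes
curl -XGET 'http://ip:port/_cat/indices?v'

# Delete one specific index
curl -XDELETE 'http://ip:port/ct-dmp-operationlog'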

3. Cleanup via scripts

#!/bin/sh
# Delete documents older than N days from an index, based on a date field.
# example: sh delete_es_by_day.sh logstash-kettle-log logsdate 30
index_name=$1      # index to clean up
daycolumn=$2       # name of the date field
savedays=$3        # number of days of data to keep
format_day=$4      # optional date format, defaults to %Y%m%d

if [ -z "$savedays" ]; then
  echo "usage: sh delete_es_by_day.sh <index_name> <day_column> <save_days> [date_format]"
  exit 1
fi

if [ -z "$format_day" ]; then
  format_day='%Y%m%d'
fi

# Cutoff date: documents whose ${daycolumn} is on or before this day are deleted.
cutoff_day=`date -d "-${savedays} day" +${format_day}`

curl -X POST "ip:port/${index_name}/_delete_by_query?pretty" -H "Content-Type: application/json" -d '
{
  "query": {
    "range": {
      "'${daycolumn}'": {
        "lte": "'${cutoff_day}'"
      }
    }
  }
}'
echo "ok"
A second script, for log indexes whose names end with a date suffix; it deletes whole indexes rather than individual documents:

#!/bin/bash
# Periodically clean up microservice subsystem log indexes.
# When run daily, this keeps only the most recent N days: it deletes the
# index whose date suffix is exactly N days old.
savedays=$1   # number of days of indexes to keep
ip=$2         # Elasticsearch host
port=$3       # Elasticsearch port
LAST_DATA=`date -d "-${savedays} days" "+%Y.%m.%d"`
curl -XDELETE 'http://'${ip}':'${port}'/*-'${LAST_DATA}'*'
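Because this variant drops whole indexes by wildcard, it helps to preview which indexes the pattern matches first. A quick check, using 2020.02.10 as an example date suffix:

# Show the indexes that the wildcard pattern would hit
curl -XGET 'http://ip:port/_cat/indices/*-2020.02.10*?v'

Note that if the cluster setting action.destructive_requires_name is enabled, wildcard deletes are rejected and explicit index names must be used instead.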

4. Scheduled cleanup with cron

Add the scripts to crontab so they run periodically.

Changing the default crontab editor: https://www.twle.cn/t/492

Basic nano usage on Linux: https://www.cnblogs.com/wx170119/p/12084397.html

0 0 * * * root /opt/elk/cron/delete_log.sh ct-oa-operationlog createDate 15
05 0 * * * root /opt/elk/cron/delete_log.sh ct-oa-loginlog landingTime 30
10 0 * * * root /opt/elk/cron/delete_log.sh ct-dmp-operationlog createDate 15
30 0 * * * root /opt/elk/cron/es_index_clear.sh 15 ip port
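The lines above use the system crontab format, which includes a user field (root). One way to install them, sketched here assuming a drop-in file under /etc/cron.d and a log file of /var/log/es_cleanup.log (both are choices, not requirements):

# /etc/cron.d/es-cleanup  (hypothetical file name)
# m  h dom mon dow user command
0  0 * * * root /opt/elk/cron/delete_log.sh ct-oa-operationlog createDate 15 >> /var/log/es_cleanup.log 2>&1
30 0 * * * root /opt/elk/cron/es_index_clear.sh 15 ip port >> /var/log/es_cleanup.log 2>&1

Remember to make the scripts executable first (chmod +x /opt/elk/cron/*.sh).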

5. Scheduled cleanup with elasticsearch-curator

https://www.cnblogs.com/java-zhao/p/5900590.html
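As a rough sketch of how curator is usually driven (the file paths, the logstash- prefix and the 10-day cutoff below are assumptions, not taken from the linked post): you write a YAML action file and point the curator CLI at it, together with a curator.yml that holds the client connection settings (hosts, port):

# delete_old_indices.yml (hypothetical action file)
actions:
  1:
    action: delete_indices
    description: "Delete date-suffixed indexes older than 10 days"
    options:
      ignore_empty_list: True
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 10

# Run it from cron once a day, for example:
# 0 1 * * * root curator --config /opt/elk/curator.yml /opt/elk/delete_old_indices.yml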

6. Index lifecycle management (ILM)

https://elasticsearch.cn/article/6358
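As a minimal illustration (ILM is available from Elasticsearch 6.6 onward; the policy name delete_after_15d and the index name below are placeholders), a policy with only a delete phase can be created and attached to an index through its settings:

# Create a lifecycle policy that deletes the index 15 days after creation/rollover
curl -X PUT "ip:port/_ilm/policy/delete_after_15d?pretty" -H "Content-Type: application/json" -d '
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "15d",
        "actions": { "delete": {} }
      }
    }
  }
}'

# Attach the policy to an existing index (in practice usually done via an index template)
curl -X PUT "ip:port/ct-oa-operationlog/_settings?pretty" -H "Content-Type: application/json" -d '
{
  "index.lifecycle.name": "delete_after_15d"
}'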

Original post: https://www.cnblogs.com/durenniu/p/12365590.html