bloom-server: a REST API caching middleware written in Rust

bloom-server is a REST API caching middleware written in Rust. It sits between the load balancer and the API workers and uses Redis as the cache store,
so all we need to do is configure the proxy. It uses the concept of shards to distribute cached content, and it exposes a request port (the proxy, which serves data)
as well as a cache control port (an API that makes it easy to manage the cache policy).

The test environment runs on OpenResty + Docker + docker-compose.

A reference diagram:

[Figure: Bloom Schema]

Environment setup

  • docker-compose file
 
version: "3"
services:
  redis:
    image: redis
    ports:
    - "6379:6379"
  lb: 
    image: openresty/openresty:alpine-fat
    volumes:
    - "./nginx/nginx-lb.conf:/usr/local/openresty/nginx/conf/nginx.conf"
    ports:
    - "9000:80"
  webapi:
    image: openresty/openresty:alpine-fat
    volumes:
    - "./nginx/nginx.conf:/usr/local/openresty/nginx/conf/nginx.conf"
    ports:
    - "8090:80"
  webapi2:
    image: openresty/openresty:alpine-fat
    volumes:
    - "./nginx/nginx2.conf:/usr/local/openresty/nginx/conf/nginx.conf"
    ports:
    - "8091:80"
  bloom:
    image: valeriansaliou/bloom:v1.25.0
    environment:
    - "RUST_BACKTRACE=1"
    volumes:
    - "./bloom/config.cfg:/etc/bloom.cfg"
    ports:
    - "8080:8080"
    - "8811:8811"
  bloom2:
    image: valeriansaliou/bloom:v1.25.0
    environment:
    - "RUST_BACKTRACE=1"
    volumes:
    - "./bloom/config2.cfg:/etc/bloom.cfg"
    ports:
    - "8081:8080"
    - "8812:8811"
 
  • webapi configuration
    Mainly Lua scripts running on OpenResty (a direct sanity check of both upstreams follows the two config files below).
    webapi1: nginx/nginx.conf
 
worker_processes 1;
user root;  
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    lua_code_cache off;
    lua_need_request_body on;
    gzip on;
    resolver 127.0.0.11 ipv6=off;          
    real_ip_header X-Forwarded-For;
    real_ip_recursive on;
    server {
        listen 80;
        server_name app;
        charset utf-8;
        default_type text/html;
        location / {
           default_type text/plain;
           index index.html index.htm;
        }
        location /userinfo {
            default_type application/json;
            content_by_lua_block {
             local cjson = require("cjson");
             local user = {
                 name ="dalongdempo",
                 age =333,
             }
             ngx.say(cjson.encode(user))
            }
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
 
 

webapi2: nginx/nginx2.conf

worker_processes 1;
user root;  
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    lua_code_cache off;
    lua_need_request_body on;
    gzip on;
    resolver 127.0.0.11 ipv6=off;          
    real_ip_header X-Forwarded-For;
    real_ip_recursive on;
    server {
        listen 80;
        server_name app;
        charset utf-8;
        default_type text/html;
        location / {
           default_type text/plain;
           index index.html index.htm;
        }
        location /userinfo2 {
            default_type application/json;
            content_by_lua_block {
             local cjson = require("cjson");
             local user = {
                 name ="dalongdempo222",
                 age =333,
             }
             ngx.say(cjson.encode(user))
            }
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
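
Once the stack is up (docker-compose up -d, covered in the startup section below), both upstreams can be exercised directly through the ports published in docker-compose (8090 and 8091), bypassing bloom entirely. The expected JSON is just what the Lua handlers above return:

curl http://localhost:8090/userinfo
{"age":333,"name":"dalongdempo"}
curl http://localhost:8091/userinfo2
{"age":333,"name":"dalongdempo222"}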
 
 
  • bloom configuration
    bloom works like a sidecar: the officially recommended approach is to deploy one instance next to each service node, but in practice it can also be
    deployed elsewhere, separately from the services (a note on per-response cache overrides follows the two config files below).
    bloom1 config: bloom/config.cfg (note that I changed the log level to debug to make testing easier)
 
# Bloom
# HTTP REST API caching middleware
# Configuration file
# Example: https://github.com/valeriansaliou/bloom/blob/master/config.cfg
[server]
log_level = "debug"
inet = "0.0.0.0:8080"
[control]
inet = "0.0.0.0:8811"
tcp_timeout = 600
[proxy]
[[proxy.shard]]
shard = 0
host = "webapi"
port = 80
[cache]
ttl_default = 600
executor_pool = 64
disable_read = false
disable_write = false
compress_body = true
[redis]
host = "redis"
port = 6379
database = 0
pool_size = 80
max_lifetime_seconds = 60
idle_timeout_seconds = 600
connection_timeout_seconds = 1
max_key_size = 256000
max_key_expiration = 2592000
 
 

bloom2 config: bloom/config2.cfg

# Bloom
# HTTP REST API caching middleware
# Configuration file
# Example: https://github.com/valeriansaliou/bloom/blob/master/config.cfg
[server]
log_level = "debug"
inet = "0.0.0.0:8080"
[control]
inet = "0.0.0.0:8811"
tcp_timeout = 600
[proxy]
[[proxy.shard]]
shard = 1
host = "webapi2"
port = 80
[cache]
ttl_default = 600
executor_pool = 64
disable_read = false
disable_write = false
compress_body = true
[redis]
host = "redis"
port = 6379
database = 0
pool_size = 80
max_lifetime_seconds = 60
idle_timeout_seconds = 600
connection_timeout_seconds = 1
max_key_size = 256000
max_key_expiration = 2592000
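
With ttl_default = 600, every cacheable response is kept for 10 minutes unless the upstream says otherwise. According to the Bloom README, the API can steer this per response with headers such as Bloom-Response-TTL (custom TTL in seconds) and Bloom-Response-Ignore (do not cache this response). The snippet below is only a sketch of how the /userinfo handler above could use them; the header names come from the upstream README, not from this setup:

        location /userinfo {
            default_type application/json;
            content_by_lua_block {
             local cjson = require("cjson")
             -- override bloom's ttl_default for this response (seconds)
             ngx.header["Bloom-Response-TTL"] = "60"
             -- or opt out of caching entirely:
             -- ngx.header["Bloom-Response-Ignore"] = "1"
             ngx.say(cjson.encode({ name = "dalongdempo", age = 333 }))
            }
        }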
 
 

Startup && test

  • Start
docker-compose up -d
 
  • Test results
    Access through bloom; note that the Bloom-Request-Shard header is required (a cache-hit check follows the curl output below).
 
curl -X GET \
  http://localhost:8081/userinfo2 \
  -H 'Bloom-Request-Shard: 1' \
  -H 'Content-Type: application/json' \
  -H 'Postman-Token: d13543ca-a031-47e3-b47a-996a6faaad53' \
  -H 'cache-control: no-cache'
{"age":333,"name":"dalongdempo222"}
curl -X GET \
  http://localhost:8080/userinfo \
  -H 'Bloom-Request-Shard: 0' \
  -H 'Content-Type: application/json' \
  -H 'Postman-Token: bc168c7c-3b8b-471d-aa01-6eb6cb0d421c' \
  -H 'cache-control: no-cache'
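
Whether bloom served a request from the cache can be seen in the Bloom-Status response header (DIRECT / MISS / HIT) documented in the Bloom README. A quick way to watch it (the grep filter is only for readability):

# first request: expect Bloom-Status: MISS (fetched from webapi and written to Redis)
curl -s -i http://localhost:8080/userinfo -H 'Bloom-Request-Shard: 0' | grep -i bloom-status
# repeating it within the TTL should report Bloom-Status: HIT (served from Redis)
curl -s -i http://localhost:8080/userinfo -H 'Bloom-Request-Shard: 0' | grep -i bloom-status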
 
 

Redis keys

KEYS *
1) "bloom:0:a:dc56d17a"
2) "bloom:1:a:dc56d17a"
 
 

Access via the OpenResty (lb) side: no header needs to be added, because the nginx config already sets it (a sketch of that config follows the output below).

curl http://localhost:9000/userinfo
{"age":333,"name":"dalongdempo"}
 

Notes

bloom-server is a solid REST cache proxy. The same caching could also be done on the OpenResty side, and the project's author explains why he chose not
to go that route. Because bloom-server is written in Rust it is type safe and performs well, and it offers quite a few optional controls. In practice,
though, some of its messages are not very friendly, especially when troubleshooting, but overall the design is quite good.

References

https://github.com/valeriansaliou/bloom
https://crates.io/crates/bloom-server
https://github.com/rongfengliang/bloom-server-docker-compose

Original post: https://www.cnblogs.com/rongfengliang/p/10425485.html