K8S: Advanced Traefik Features

Traefik

Traefik is a lightweight HTTP reverse proxy and load balancer written in Go. Because it can automatically configure and refresh backend nodes, it is supported by most container platforms today, such as Kubernetes, Swarm, and Rancher. Since Traefik talks to the Kubernetes API in real time, it reacts quickly when a Service's endpoints change. Overall, Traefik runs very well on Kubernetes.

Traefik has many other features:

  • Fast
  • No extra dependencies to install — it ships as a single executable compiled from Go
  • Minimal official Docker image
  • Supports many backends, such as Docker, Swarm mode, Kubernetes, Marathon, Consul, Etcd, Rancher, Amazon ECS, and more
  • REST API
  • Hot-reloads its configuration without restarting the process
  • Built-in circuit breaker
  • Round-robin and other load-balancing strategies
  • Clean web UI
  • Supports WebSocket, HTTP/2, and gRPC
  • Automatic HTTPS certificate renewal
  • High-availability cluster mode

Next we will use Traefik in place of Nginx + Ingress Controller for reverse proxying and exposing services.

What is the difference between the two? In short: when Nginx serves as the front-end load balancer in Kubernetes, an Ingress Controller constantly talks to the Kubernetes API, watches for changes to backend Services and Pods, dynamically rewrites the Nginx configuration, and reloads it so the changes take effect — that is how service discovery is automated. Traefik, by contrast, was designed from the start to talk to the Kubernetes API itself: it senses changes to Services and Pods and updates and hot-reloads its own configuration. The two are broadly equivalent, but Traefik is faster and more convenient, supports more features, and makes reverse proxying and load balancing more direct and efficient.

1. Role Based Access Control configuration (Kubernetes 1.6+ only)

  kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-rbac.yaml

This grants Traefik the authorization it needs; if the official docs are unclear, download the YAML and read through it.

2. Deploy Træfik using a Deployment or DaemonSet

To deploy Træfik to your cluster start by submitting one of the YAML files to the cluster with kubectl:

  kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-deployment.yaml   # this manifest had some issues for me, so I used the DaemonSet one first

  kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-ds.yaml

Difference between a Deployment and a DaemonSet: a DaemonSet creates a Pod on every node, whereas a Deployment runs a human-controlled number of replicas. With many nodes a DaemonSet is unnecessary — 100 nodes would mean 100 Pods, for no benefit. So take the DaemonSet manifest and change its kind to Deployment yourself, as in the sketch below:

  kind: Deployment
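
A minimal sketch of the converted manifest, assuming the upstream traefik-ds.yaml as the starting point (the image tag, replica count, and args here are illustrative, not taken from the original post):

  kind: Deployment
  apiVersion: extensions/v1beta1
  metadata:
    name: traefik-ingress-controller
    namespace: kube-system
    labels:
      k8s-app: traefik-ingress-lb
  spec:
    replicas: 2                       # a human-controlled count instead of one Pod per node
    template:
      metadata:
        labels:
          k8s-app: traefik-ingress-lb
          name: traefik-ingress-lb
      spec:
        serviceAccountName: traefik-ingress-controller
        containers:
        - name: traefik
          image: traefik:v1.7
          ports:
          - name: http
            containerPort: 80
          - name: admin
            containerPort: 8080
          args:
          - --api
          - --kubernetes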

3. Check the Pods

  # kubectl --namespace=kube-system get pods -o wide
  traefik-ingress-controller-79877bbc66-p29jh 1/1 Running 0 32m   10.249.243.182    k8snode2-175v136

Check which node the Pod landed on — with a Deployment the scheduler picks a node for you.

4. Ingress and UI

  kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/ui.yaml

Build a web app of your own for testing:

  apiVersion: v1
  kind: Service
  metadata:
    name: nginx-svc
    namespace: default
    labels:
      name: nginx-svc
  spec:
    selector:
      run: ngx-pod
    ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  ---
  apiVersion: apps/v1beta1
  kind: Deployment
  metadata:
    name: ngx-pod
  spec:
    replicas: 4
    template:
      metadata:
        labels:
          run: ngx-pod
      spec:
        containers:
        - name: nginx
          image: nginx:1.10
          ports:
          - containerPort: 80
  ---
  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    name: ngx-ing
    annotations:
      kubernetes.io/ingress.class: traefik
  spec:
    rules:
    - host: www.ha.com
      http:
        paths:
        - backend:
            serviceName: nginx-svc
            servicePort: 80

5. The test succeeds
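
A quick way to verify, assuming www.ha.com does not resolve yet — send the request to a node running Traefik and set the Host header by hand (the node address below is a placeholder):

  curl -H "Host: www.ha.com" http://<traefik-node-ip>/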

6. HTTPS certificates

  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    name: traefik-web-ui
    namespace: kube-system
    annotations:
      kubernetes.io/ingress.class: traefik
  spec:
    rules:
    - host: traefik-ui.minikube
      http:
        paths:
        - backend:
            serviceName: traefik-web-ui
            servicePort: 80
    tls:
    - secretName: traefik-ui-tls-cert

How do the official docs load the certificate? Note: both the key and the crt are required.

  openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=traefik-ui.minikube"
  kubectl -n kube-system create secret tls traefik-ui-tls-cert --key=tls.key --cert=tls.crt

7. Basic Authentication

A. Use htpasswd to create a file containing the username and the MD5-encoded password:

  htpasswd -c ./auth myusername

You will be prompted for a password which you will have to enter twice. htpasswd will create a file with the following:

  cat auth
  myusername:$apr1$78Jyn/1K$ERHKVRPPlzAX8eBtLuvRZ0

B. Now use kubectl to create a secret in the monitoring namespace using the file created by htpasswd:

  kubectl create secret generic mysecret --from-file auth --namespace=monitoring

Note: the Secret must be in the same namespace as the Ingress object.

C. Attach the following annotations to the Ingress object:

  ingress.kubernetes.io/auth-type: "basic"
  ingress.kubernetes.io/auth-secret: "mysecret"

They specify basic authentication and reference the Secret mysecret containing the credentials.

Following is a full Ingress example based on Prometheus:

  # full manifest
  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    name: prometheus-dashboard
    namespace: monitoring
    annotations:
      kubernetes.io/ingress.class: traefik
      ingress.kubernetes.io/auth-type: "basic"
      ingress.kubernetes.io/auth-secret: "mysecret"
  spec:
    rules:
    - host: dashboard.prometheus.example.com
      http:
        paths:
        - backend:
            serviceName: prometheus
            servicePort: 9090
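
A quick check, assuming dashboard.prometheus.example.com resolves to Traefik — an anonymous request should be rejected, a credentialed one accepted:

  curl -I http://dashboard.prometheus.example.com/                # expect HTTP/1.1 401 Unauthorized
  curl -I -u myusername http://dashboard.prometheus.example.com/  # prompts for the password; expect 200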

Template 1 — exposing multiple domains. Look at the UI again: it updates immediately, and you can see the newly configured dashboard.k8s.traefik and ela.k8s.traefik.

  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    name: dashboard-ela-k8s-traefik
    namespace: kube-system
    annotations:
      kubernetes.io/ingress.class: traefik
  spec:
    rules:
    - host: dashboard.k8s.traefik
      http:
        paths:
        - path: /
          backend:
            serviceName: kubernetes-dashboard
            servicePort: 80
    - host: ela.k8s.traefik
      http:
        paths:
        - path: /
          backend:
            serviceName: elasticsearch-logging
            servicePort: 9200

Template 2

Note: here we route by path, so the rule type must be set to PathPrefixStrip via the annotation traefik.frontend.rule.type: PathPrefixStrip.

Look at the UI again — it also updates immediately, and you can see the newly configured my.k8s.traefik/dashboard and my.k8s.traefik/kibana.

  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    name: my-k8s-traefik
    namespace: kube-system
    annotations:
      kubernetes.io/ingress.class: traefik
      traefik.frontend.rule.type: PathPrefixStrip
  spec:
    rules:
    - host: my.k8s.traefik
      http:
        paths:
        - path: /dashboard
          backend:
            serviceName: kubernetes-dashboard
            servicePort: 80
        - path: /kibana
          backend:
            serviceName: kibana-logging
            servicePort: 5601

8. Automatic circuit breaking

In a cluster, when a service starts producing a large number of request errors, takes too long to respond, or returns 500+ status codes, we want to take it out of rotation — stop forwarding requests to it — automatically, with no manual intervention. Traefik makes this easy: you define a policy, and Traefik trips the circuit breaker for you.

  • NetworkErrorRatio() > 0.5: trip when the service's network error ratio reaches 50%.
  • LatencyAtQuantileMS(50.0) > 50: trip when the median (50th-percentile) latency exceeds 50 ms.
  • ResponseCodeRatio(500, 600, 0, 600) > 0.5: trip when status codes in [500, 600) account for more than 50% of codes in [0, 600).

Example

  apiVersion: v1
  kind: Service
  metadata:
    name: wensleydale
    annotations:
      traefik.backend.circuitbreaker: "NetworkErrorRatio() > 0.5"
      # alternative expression (a Service takes only one circuitbreaker annotation):
      # traefik.backend.circuitbreaker: "LatencyAtQuantileMS(50.0) > 2000"   # trip above 2 s

9. Official documentation

For everything else, read the official docs:

https://docs.traefik.io/user-guide/kubernetes/

10. Update: pinning Traefik to specific nodes

As the business grew, nodes kept being added; with 20+ nodes the DaemonSet mode starts to waste resources. How do we pin Traefik to just a few machines? After digging through some documentation, I found this solution.

Label the nodes and run the DaemonSet only on the labeled ones. Reference: https://www.kubernetes.org.cn/daemonset

Example:

Label three nodes:

  kubectl label node k8snode10-146v78-taiji traefik=svc    # repeat for each of the three nodes
  ########## remove the label
  kubectl label node k8snode1-174v136-taiji traefik-
  # view the labels
  [root@k8s-m1 Traefik]# kubectl get nodes --show-labels
  NAME                     STATUS    ROLES     AGE       VERSION   LABELS
  k8snode1-174v136-taiji   Ready     node      42d       v1.10.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=k8snode1-174v136-taiji,node-role.kubernetes.io/node=,traefik=svc
  [root@k8s-m1 Traefik]# cat traefik-ds.yaml
  kind: DaemonSet
  apiVersion: extensions/v1beta1
  metadata:
    name: traefik-ingress-controller
    namespace: kube-system
    labels:
      k8s-app: traefik-ingress-lb
  spec:
    template:
      metadata:
        labels:
          k8s-app: traefik-ingress-lb
          name: traefik-ingress-lb
      spec:
        nodeSelector:
          traefik: "svc"            # these two lines are the key part
  ...................
  # verify
  [root@k8s-m1 Traefik]# kubectl get ds -n kube-system
  NAME                         DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR
  traefik-ingress-controller   3         3         3         3            3           traefik=svc

Summary: later, as traffic grows, you can scale out the Traefik nodes simply by labeling more machines.

11. Rate limiting

From the official docs: valid values for extractorfunc are client.ip, request.host, and request.header.<header name>.

We rate-limit along two of these dimensions: request.host and client.ip.

The key prerequisite is making sure that the IP handed over by the front-end HAProxy, LVS, Nginx, or CDN is the real user IP, not the IP of the upstream load balancer.

HAProxy can do this with either of the following options:

  option forwardfor                      # pick one of the two
  option forwardfor header Client-IP

Nginx can pass the client IP to its upstream via X-Forwarded-For:

  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
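
The post does not show the rate-limit configuration itself. In Traefik 1.x on Kubernetes it can be declared as an Ingress annotation; a minimal sketch with illustrative limits (the Ingress and Service names are assumptions, not the ones used in the tests below):

  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    name: test-if-org
    annotations:
      kubernetes.io/ingress.class: traefik
      # key the limiter on the client IP (or switch extractorfunc to request.host):
      # an average of 10 requests per 3s window per client, bursting to 20
      traefik.ingress.kubernetes.io/rate-limit: |
        extractorfunc: client.ip
        rateset:
          rateset1:
            period: 3s
            average: 10
            burst: 20
  spec:
    rules:
    - host: test.if.org
      http:
        paths:
        - backend:
            serviceName: test-svc
            servicePort: 80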

1. client.ip verification: we tested through HAProxy, which passed the client IP on to Traefik. Rate limiting keyed on client.ip works as expected:

  1. "time":"2019-01-10T02:03:25Z"} "BackendName":"test.if.org/" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
  2. "time":"2019-01-10T02:03:25Z"} "BackendName":"test.if.org/" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
  3. "time":"2019-01-10T02:03:26Z"} "BackendName":"Træfik" "DownstreamStatusLine":"429 Too Many Requests" "request_Client-Ip":"127.0.0.1"
  4. "time":"2019-01-10T02:03:26Z"} "BackendName":"Træfik" "DownstreamStatusLine":"429 Too Many Requests" "request_Client-Ip":"127.0.0.1" 
  5. "time":"2019-01-10T02:03:26Z"} "BackendName":"Træfik" "DownstreamStatusLine":"429 Too Many Requests" "request_Client-Ip":"127.0.0.1"
  6. "time":"2019-01-10T02:03:29Z"} "BackendName":"test.if.org/" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
  7. "time":"2019-01-10T02:03:29Z"} "BackendName":"Træfik" "DownstreamStatusLine":"429 Too Many Requests"  "request_Client-Ip":"127.0.0.1"
  8. "time":"2019-01-10T02:03:29Z"} "BackendName":"Træfik" "DownstreamStatusLine":"429 Too Many Requests"  "request_Client-Ip":"127.0.0.1"
  9. "time":"2019-01-10T02:03:32Z"} "BackendName":"test.if.org/" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
  10. "time":"2019-01-10T02:03:32Z"} "BackendName":"test.if.org/" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"  

2. request.host verification: again tested through HAProxy; rate limiting keyed on request.host also works as expected:

  1. "time":"2019-01-10T03:14:10Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
  2. "time":"2019-01-10T03:14:11Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
  3. "time":"2019-01-10T03:14:11Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
  4. "time":"2019-01-10T03:14:11Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
  5. "time":"2019-01-10T03:14:13Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
  6. "time":"2019-01-10T03:14:14Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
  7. "time":"2019-01-10T03:14:14Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"429 Too Many Requests" "request_Client-Ip":"127.0.0.1"
  8. "time":"2019-01-10T03:14:14Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"429 Too Many Requests" "request_Client-Ip":"127.0.0.1"
  9. "time":"2019-01-10T03:14:15Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
  10. "time":"2019-01-10T03:14:15Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"429 Too Many Requests" "request_Client-Ip":"127.0.0.1"
  11. "time":"2019-01-10T03:14:21Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
  12. "time":"2019-01-10T03:14:22Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
  13. "time":"2019-01-10T03:14:22Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
  14. "time":"2019-01-10T03:14:22Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
  15. "time":"2019-01-10T03:14:23Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
  16. "time":"2019-01-10T03:14:23Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
  17. "time":"2019-01-10T03:14:23Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"429 Too Many Requests" "request_Client-Ip":"127.0.0.1"
  18. "time":"2019-01-10T03:14:24Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"429 Too Many Requests" "request_Client-Ip":"127.0.0.1" 

12. Canary (A/B) releases

Canary releases for Kubernetes were fixed upstream in Traefik v1.7.5:

  • [k8s] Support canary weight for external name service (#4135 by yue9944882)

With weights applied, the Traefik API shows the backend split (excerpt):

      "test.if.org/": {
        "servers": {
          "hpa-httpd-5856fd66bf-2qpm6": {
            "url": "http://10.249.221.61:80",
            "weight": 90000
          },
          "hpb-httpd-6bc6f55488-mllq2": {
            "url": "http://10.249.89.29:80",
            "weight": 10000
          }
        },
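
For reference, a 90/10 split like the one above can be declared in Traefik 1.7 with the service-weights annotation; a sketch, assuming Services named after the hpa-httpd / hpb-httpd Deployments in the excerpt:

  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    name: canary-example
    annotations:
      kubernetes.io/ingress.class: traefik
      # 90% of traffic to the stable Service, 10% to the canary
      traefik.ingress.kubernetes.io/service-weights: |
        hpa-httpd: 90%
        hpb-httpd: 10%
  spec:
    rules:
    - host: test.if.org
      http:
        paths:
        - path: /
          backend:
            serviceName: hpa-httpd
            servicePort: 80
        - path: /
          backend:
            serviceName: hpb-httpd
            servicePort: 80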

13. Session persistence: session affinity (sticky sessions)

How sticky sessions work: when a client sends its first request, the reverse proxy assigns it a backend server and returns that server's identity to the client via Set-Cookie. On its next visit the client presents that cookie, which tells the proxy which backend it was assigned last time. In Nginx this mechanism is provided by a plugin called Sticky; Traefik has the same capability built in, and it is easy to enable and configure on Kubernetes.

This solves the problem of a user authenticating against Pod A and then landing on Pod B on the next request, which invalidates the session; affinity keeps each client on a consistent backend.

Configure it at the Service level:

  metadata:
    annotations:
      traefik.ingress.kubernetes.io/affinity: "true"
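
In context, a complete Service might look like this sketch, reusing the nginx-svc example from earlier (the annotation is the only addition):

  apiVersion: v1
  kind: Service
  metadata:
    name: nginx-svc
    annotations:
      traefik.ingress.kubernetes.io/affinity: "true"
  spec:
    selector:
      run: ngx-pod
    ports:
    - protocol: TCP
      port: 80
      targetPort: 80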

Verification

The web request now carries the sticky cookie in its headers.
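
One way to see it, assuming the www.ha.com example from earlier routes through Traefik (the cookie name depends on the Traefik version and configuration):

  curl -sv http://www.ha.com/ 2>&1 | grep -i 'set-cookie'
  # expect something like: Set-Cookie: <sticky-cookie>=http://<pod-ip>:80; Path=/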

Original article: https://www.cnblogs.com/lvcisco/p/11280721.html