2022-11-03
k8s + etcd Cluster Configuration
1. Building a Kubernetes cluster with kubeadm
1.1 Getting the Kubernetes packages
Log in to GitHub and clone the repository onto a Linux host. Go into the rpm directory; with Docker installed, run the docker-build.sh script and the RPM packages are built into the output directory.
1.2 Downloading the images
Because gcr.io belongs to Google, kubeadm init may fail to pull the images; the image versions differ slightly depending on the RPM version.
You can pull the images locally through a proxy and docker push them to a private registry; however, if you edit the kubeadm init configuration to use the private registry, the apiserver may fail to start.
To check which image versions are in use: kubeadm init creates the /etc/kubernetes/manifests/ directory, and the JSON files inside show the image fields.
A few images, such as kubedns and pause, are still hard to locate; their versions can be found in the source code. The pull/retag/push workflow is sketched below.
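As a rough sketch of the pull/retag/push workflow just described (the private registry docker.cinyi.com:443 appears later in this article; the image name and tag here are examples only, not the exact images used):
docker pull gcr.io/google_containers/kube-apiserver-amd64:v1.5.3
docker tag gcr.io/google_containers/kube-apiserver-amd64:v1.5.3 docker.cinyi.com:443/google_containers/kube-apiserver-amd64:v1.5.3
docker push docker.cinyi.com:443/google_containers/kube-apiserver-amd64:v1.5.3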
2. Deploying the etcd cluster
The main etcd flags:
--data-dir: the node's data directory, holding the node ID, cluster ID, initial cluster configuration and snapshot files; if --wal-dir is not set, the WAL files are stored here as well.
--wal-dir: the directory for the node's WAL files; if set, the WAL files are stored separately from the other data.
--name: the node name.
--initial-advertise-peer-urls: the peer URLs advertised to the other cluster members.
--listen-peer-urls: the URLs to listen on for communication with other members.
--advertise-client-urls: the client URLs advertised to clients, i.e. the service URLs.
--initial-cluster-token: the cluster token (ID).
--initial-cluster: all members of the initial cluster.
An example command line using these flags is sketched below.
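Purely as an illustration of how these flags fit together on one node (the IPs are the three etcd hosts listed below; the 2379/2380 ports are etcd defaults and an assumption, not values taken from this article):
etcd --name etcd1 \
  --data-dir /var/lib/etcd/default.etcd \
  --initial-advertise-peer-urls http://192.168.20.206:2380 \
  --listen-peer-urls http://192.168.20.206:2380 \
  --listen-client-urls http://192.168.20.206:2379,http://127.0.0.1:2379 \
  --advertise-client-urls http://192.168.20.206:2379 \
  --initial-cluster-token etcd-cluster \
  --initial-cluster etcd1=http://192.168.20.206:2380,etcd2=http://192.168.20.223:2380,etcd3=http://192.168.20.224:2380 \
  --initial-cluster-state new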
Use the Aliyun yum repository.
Install the etcd server on each of the three hosts 192.168.20.206, 192.168.20.223 and 192.168.20.224:
[root@docker1 ~]# yum -y install etcd
Check the etcd version:
[root@kubernetes ~]# etcdctl -v
etcdctl version: 3.0.15
API version: 2
docker1 /etc/etcd/etcd.conf configuration file:
[root@docker1 ~]# grep -v "#" /etc/etcd/etcd.conf
ETCD_NAME=etcd1
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_ADVERTISE_CLIENT_URLS="
docker2 /etc/etcd/etcd.conf configuration file:
[root@docker2 ~]# grep -v "#" /etc/etcd/etcd.conf
ETCD_NAME=etcd2
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_ADVERTISE_CLIENT_URLS="
docker3 /etc/etcd/etcd.conf configuration file:
[root@docker3 etcd]# grep -v "#" /etc/etcd/etcd.conf
ETCD_NAME=etcd3
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_ADVERTISE_CLIENT_URLS="
The etcd systemd unit file (/usr/lib/systemd/system/etcd.service):
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
User=etcd
# set failure limit
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
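The URL values above are truncated. A complete /etc/etcd/etcd.conf for etcd1 typically looks roughly like the sketch below; the IPs are the three etcd hosts listed earlier and the 2379/2380 ports are etcd defaults, both assumptions rather than values preserved in this article:
cat > /etc/etcd/etcd.conf <<'EOF'
ETCD_NAME=etcd1
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.20.206:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.20.206:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.20.206:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.20.206:2379"
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.20.206:2380,etcd2=http://192.168.20.223:2380,etcd3=http://192.168.20.224:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF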
Restart the etcd service on all three hosts:
[root@docker1 etcd]# systemctl restart etcd.service
[root@docker2 etcd]# systemctl restart etcd.service
[root@docker3 etcd]# systemctl restart etcd.service
etcd cluster operations
List the members:
[root@docker1 ~]# etcdctl member list
b6d60bb2e9e4f37c: name=etcd1 peerURLs= clientURLs= isLeader=false
c4abafabbe0d2097: name=etcd3 peerURLs= clientURLs= isLeader=false
e8f6dc6436ef6d63: name=etcd2 peerURLs= clientURLs= isLeader=true

Remove a member from the cluster:
[root@docker1 ~]# etcdctl member remove b6d60bb2e9e4f37c
Removed member b6d60bb2e9e4f37c from cluster
[root@docker1 ~]# etcdctl member list
c4abafabbe0d2097: name=etcd3 peerURLs= clientURLs= isLeader=false
e8f6dc6436ef6d63: name=etcd2 peerURLs= clientURLs= isLeader=true

Add a node back:
1. Stop the etcd process on the etcd1 node to simulate a failed node:
[root@docker1 ~]# systemctl stop etcd.service
2. On 192.168.20.209, delete the data:
[root@etcd1 ~]# rm /var/lib/etcd/default.etcd/member/* -rf
3. Add the member again from etcd2:
[root@etcd2 ~]# etcdctl member add etcd1
Added member named etcd1 with ID d92ef1d9c9d9ebe7 to cluster
ETCD_NAME="etcd1"
ETCD_INITIAL_CLUSTER_STATE="existing"
4. The member list now looks like this:
[root@docker1 ~]# etcdctl member list
c4abafabbe0d2097: name=etcd3 peerURLs= clientURLs= isLeader=false
d92ef1d9c9d9ebe7[unstarted]: peerURLs=
e8f6dc6436ef6d63: name=etcd2 peerURLs= clientURLs= isLeader=true
5. Edit /etc/etcd/etcd.conf and change ETCD_INITIAL_CLUSTER_STATE from new to existing.
At this point member d92ef1d9c9d9ebe7 (192.168.20.226) is still [unstarted].
6. Restart the etcd service on etcd1:
[root@docker1 ~]# systemctl start etcd.service
7. Check the etcd members again:
[root@docker1 etcd]# etcdctl member list
5190eb3277e38296: name=etcd1 peerURLs= clientURLs= isLeader=false
c4abafabbe0d2097: name=etcd3 peerURLs= clientURLs= isLeader=false
e8f6dc6436ef6d63: name=etcd2 peerURLs= clientURLs= isLeader=true
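After the member is added back it is also worth checking overall cluster health; etcdctl with the v2 API used above provides cluster-health (expected output paraphrased, not captured from this cluster):
[root@docker1 ~]# etcdctl cluster-health
# Expected: one "got healthy result" line per member, ending with "cluster is healthy"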
3. Installing kubeadm
Install the dependencies: yum install -y ebtables socat. ebtables is the Ethernet bridge firewall; the Ethernet bridge works at the data link layer, and ebtables filters link-layer packets.
Start the kubelet client:
[root@docker1 etcd]# systemctl start kubelet.service
[root@docker1 etcd]# systemctl enable kubelet.service
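The installation of the locally built RPMs on docker1 is not shown here; presumably it mirrors the docker2 installation in section 5, using the same package files:
[root@docker1 ~]# rpm -ivh kubeadm-1.6.0-0.alpha.0.2074.a092d8e0f95f52.x86_64.rpm kubectl-1.5.1-0.x86_64.rpm kubelet-1.5.1-0.x86_64.rpm kubernetes-cni-0.3.0.1-0.07a8a2.x86_64.rpm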
4. Initializing master1
Add the VIP:
[root@docker1 etcd]# ip addr add 192.168.20.227/24 dev ens32
[root@docker1 etcd]# kubeadm init --api-advertise-addresses=192.168.20.227 --external-etcd-endpoints=
--external-etcd-endpoints has been deprecated, this flag will be removed when componentconfig exists
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[init] Using Kubernetes version: v1.5.3
[tokens] Generated token: "7fd3db.6c8b8f165050c555"
[certificates] Generated Certificate Authority key and certificate.
[certificates] Generated API Server key and certificate
[certificates] Generated Service Account signing keys
[certificates] Created keys and certificates in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 18.581028 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 4.009030 seconds
[apiclient] Creating a test deployment
[apiclient] Test deployment succeeded
[token-discovery] Created the kube-discovery deployment, waiting for it to become ready
[token-discovery] kube-discovery is ready after 2.004518 seconds
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
You can now join any number of machines by running the following on each node:
kubeadm join --token=7fd3db.6c8b8f165050c555 192.168.20.227

--api-advertise-addresses supports multiple IPs, but that causes kubeadm join to fail, so only a single VIP is configured for external access.
If you use the flannel network, run:
kubeadm init --api-advertise-addresses=192.168.20.227 --external-etcd-endpoints= --pod-network-cidr=10.244.0.0/16
5. Deploying the other masters (docker2, docker3)
Install the RPM packages (kubeadm, kubectl, kubelet, kubernetes-cni):
[root@docker2 ~]# rpm -ivh kubeadm-1.6.0-0.alpha.0.2074.a092d8e0f95f52.x86_64.rpm kubectl-1.5.1-0.x86_64.rpm kubelet-1.5.1-0.x86_64.rpm kubernetes-cni-0.3.0.1-0.07a8a2.x86_64.rpm
[root@docker2 ~]# systemctl start kubelet.service ; systemctl enable kubelet.service
[root@docker2 ~]# scp -r 192.168.20.209:/etc/kubernetes/* /etc/kubernetes/
kube-controller-manager and kube-scheduler implement a distributed lock through --leader-elect, so all three master nodes can run them at the same time.
6. Checking processes and pods on each master
192.168.20.209 master, holding the 192.168.20.227 VIP
192.168.20.223 master
192.168.20.224 master
On 192.168.20.209, check the registered nodes and the VIP address.
Check the pods on the master nodes: the kube-dns pod has not started yet because the pod network is not configured. The commands below can be used for these checks.
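A minimal set of commands for these checks, assuming kubectl runs directly on the master as elsewhere in this article (interface name ens32 as used above):
[root@docker1 ~]# kubectl get nodes
[root@docker1 ~]# kubectl get pods --namespace=kube-system -o wide
[root@docker1 ~]# ip addr show ens32 | grep 192.168.20.227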
7. Creating the pod network
After the cluster is created, a pod network must be deployed so that containers can communicate across hosts. The officially recommended weave approach is used here; flannel also works. The following weave commands are executed only on docker1.
[root@docker1 ~]# docker pull weaveworks/weave-kube:1.9.2
[root@docker1 ~]# docker tag weaveworks/weave-kube:1.9.2 weaveworks/weave-kube:1.9.2
[root@docker1 ~]# docker rmi weaveworks/weave-kube:1.7.2
[root@docker1 ~]# kubectl create -f weave-kube.yaml
Or, for the flannel network: wget the kube-flannel.yml manifest and run kubectl create -f kube-flannel.yml (see the sketch below).
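The wget target for the flannel manifest was not preserved; as an assumption, the commonly used upstream location at the time was the coreos/flannel repository:
[root@docker1 ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@docker1 ~]# kubectl create -f kube-flannel.yml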
8. Using DaemonSets to make the core components highly available
DNS component (run only on docker1).
Option 1: scale the replicas directly:
[root@docker1 ~]# kubectl scale deploy/kube-dns --replicas=3 --namespace=kube-system
or: kubectl scale deployment/kube-dns --replicas=3 --namespace=kube-system
Using the latest DNS
The Kubernetes DNS pod has three containers: kubedns, dnsmasq, and a health check named healthz. The kubedns process watches the Kubernetes master for changes to Services and Endpoints and maintains in-memory lookup structures to serve DNS requests. The dnsmasq container adds a DNS cache to improve performance. The healthz container exposes a single health-check endpoint while performing dual health checks (for dnsmasq and kubedns).
1. Delete the built-in DNS component:
[root@docker1 ~]# kubectl delete deploy/kube-dns svc/kube-dns -n kube-system
2. Download the latest DNS components:
[root@docker1 ~]# cd /root/dns/
[root@docker1 dns]# for i in Makefile kubedns-controller.yaml.base kubedns-svc.yaml.base transforms2salt.sed transforms2sed.sed; do wget ... ; done
[root@docker1 dns]# ls
kubedns-controller.yaml.base kubedns-svc.yaml.base kubernetes-dashboard.yaml Makefile transforms2salt.sed transforms2sed.sed
3. Adjust the manifests for the local environment (a sed sketch follows):
[root@docker1 ~]# kubectl get svc | grep kubernetes
kubernetes 10.0.96.1
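The transformation from the .base files to the .sed files used below is not shown. Upstream, transforms2sed.sed substitutes the cluster DNS IP and domain; a rough equivalent, assuming the 10.96.0.10 cluster IP and cluster.local domain used later in this article and the upstream __PILLAR__ placeholder names, would be:
[root@docker1 dns]# sed -e "s/__PILLAR__DNS__SERVER__/10.96.0.10/g" -e "s/__PILLAR__DNS__DOMAIN__/cluster.local/g" kubedns-svc.yaml.base > kubedns-svc.yaml.sed
[root@docker1 dns]# sed -e "s/__PILLAR__DNS__SERVER__/10.96.0.10/g" -e "s/__PILLAR__DNS__DOMAIN__/cluster.local/g" kubedns-controller.yaml.base > kubedns-controller.yaml.sed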
[root@docker1 dns]# kubectl create -f kubedns-controller.yaml.sed
deployment "kube-dns" created
[root@docker1 dns]# kubectl create -f kubedns-svc.yaml.sed
service "kube-dns" created
[root@docker1 dns]# kubectl get service --all-namespaces
NAMESPACE     NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
default       kubernetes   10.96.0.1
Test by creating two pods
[root@docker1 ~]# cat centos.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-centos-svc
  labels:
    app: centos
spec:
  ports:
  - port: 80
  selector:
    app: centos
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-centos
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: centos
    spec:
      containers:
      - name: centos
        image: docker.cinyi.com:443/centos7.3
        ports:
        - containerPort: 80
[root@docker1 ~]# vim nginx.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
spec:
  ports:
  - port: 80
  selector:
    app: nginx
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: docker.cinyi.com:443/senyint/nginx
        ports:
        - containerPort: 80
[root@docker1 ~]# kubectl create -f nginx.yaml -f centos.yaml
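To confirm that the two test services resolve through kube-dns, something like the following can be run; the pod name placeholder is whatever kubectl reports, and nslookup being available inside the centos7.3 image is an assumption:
[root@docker1 ~]# kubectl get pods -l app=centos
[root@docker1 ~]# kubectl exec -it <my-centos-pod-name> -- nslookup my-nginx-svc
# Expected answer: the cluster IP of my-nginx-svc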
Option 2: use DaemonSets to make the core components highly available.
A DaemonSet makes all (or a specific set of) nodes run the same pod. When a node joins the Kubernetes cluster, the pod is scheduled onto it by the DaemonSet; when the node is removed from the cluster, the pod scheduled by the DaemonSet is removed as well.
Combining a DaemonSet with a nodeSelector, and putting the corresponding label on the chosen nodes, pins the pods to specific nodes.
[root@docker1 dns]# cat kubends.yaml
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.96.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
    spec:
      nodeSelector:
        zone: master
      containers:
      - name: kubedns
        image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.12.1
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --v=2
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
      - name: dnsmasq
        image: gcr.io/google_containers/k8s-dns-dnsmasq-amd64:1.12.1
        livenessProbe:
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --cache-size=1000
        - --server=/cluster.local/127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        - --log-facility=-
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        resources:
          requests:
            cpu: 150m
            memory: 10Mi
      - name: sidecar
        image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.12.1
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
      dnsPolicy: Default  # Don't use cluster DNS.
Create the DNS Service and DaemonSet. After creation there is no DNS pod yet: you have to tell it which nodes to run on, using the kubectl label command.
[root@docker1 dns]# kubectl create -f /root/dns/kubends.yaml
# zone=master is the label defined in the nodeSelector of the DNS yaml file
[root@docker1 dns]# kubectl label node docker1 zone=master
After this, the DNS pod runs only on docker1; for redundancy it needs to run on docker2 and docker3 as well:
[root@docker2 ~]# kubectl label node docker2 zone=master
[root@docker3 ~]# kubectl label node docker3 zone=master
kube-discovery: kube-discovery is responsible for distributing the cluster keys; if this component is unhealthy, new nodes cannot join with kubeadm join.
Run directly on docker1.
Option 1: increase the replica count directly. This fails, because the pod can only run on a single node:
kubectl scale deploy/kube-discovery --replicas=3 -n kube-system
The error message is: pod (kube-discovery-1769846148-55sjr) failed to fit in any node; fit failure summary on nodes: MatchNodeSelector (2), PodFitsHostPorts (1)
Option 2
# 1. Export the kube-discovery configuration
kubectl get deploy/kube-discovery -n kube-system -o yaml > /data/kube-discovery.yaml
# 2. Change the kind from Deployment to DaemonSet and add the master nodeSelector
# 3. Delete the built-in kube-discovery
kubectl delete deploy/kube-discovery -n kube-system
# 4. Deploy kube-discovery
kubectl apply -f kube-discovery.yaml
# 5. Label docker1, docker2 and docker3
[root@docker1 ~]# kubectl label node docker1 role=discovery
[root@docker2 ~]# kubectl label node docker2 role=discovery
[root@docker3 ~]# kubectl label node docker3 role=discovery
# 6. The annotations section was removed from the configuration, otherwise only one discovery pod can start
Note: the nodeSelector label is now role=discovery
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2017-03-09T10:07:44Z
  generation: 1
  labels:
    component: kube-discovery
    k8s-app: kube-discovery
    kubernetes.io/cluster-service: "true"
    name: kube-discovery
    tier: node
  name: kube-discovery
  namespace: kube-system
  resourceVersion: "85792"
  selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/kube-discovery
  uid: 3d1333e4-04b0-11e7-ae57-0050568450f4
spec:
  selector:
    matchLabels:
      component: kube-discovery
      k8s-app: kube-discovery
      kubernetes.io/cluster-service: "true"
      name: kube-discovery
      tier: node
  template:
    metadata:
      creationTimestamp: null
      labels:
        component: kube-discovery
        k8s-app: kube-discovery
        kubernetes.io/cluster-service: "true"
        name: kube-discovery
        tier: node
    spec:
      nodeSelector:
        role: discovery
      containers:
      - command:
        - /usr/local/bin/kube-discovery
        image: gcr.io/google_containers/kube-discovery-amd64:1.0
        imagePullPolicy: IfNotPresent
        name: kube-discovery
        ports:
        - containerPort: 9898
          hostPort: 9898
          name:
          protocol: TCP
        resources: {}
        securityContext:
          seLinuxOptions:
            type: spc_t
        terminationMessagePath: /dev/termination-log
        volumeMounts:
        - mountPath: /tmp/secret
          name: clusterinfo
          readOnly: true
      dnsPolicy: ClusterFirst
      hostNetwork: true
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: clusterinfo
        secret:
          defaultMode: 420
          secretName: clusterinfo
9. Failover of the VIP 192.168.20.227
So far the three master nodes run independently without interfering with each other. kube-apiserver is the core entry point, so keepalived is used to make it highly available; kubeadm join does not yet support a load-balanced endpoint.
Install keepalived on the three masters:
yum -y install keepalived
Below are the keepalived configuration files for the three servers.

docker1 keepalived configuration file:
global_defs {
    router_id LVS_k8s
}
vrrp_script CheckK8sMaster {
    script "curl -k ..."
    interval 3
    timeout 9
    fall 2
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface ens32
    virtual_router_id 61
    priority 115
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    unicast_peer {
        192.168.20.223
        192.168.20.224
    }
    virtual_ipaddress {
        192.168.20.227/24
    }
    track_script {
        CheckK8sMaster
    }
}

docker2 keepalived configuration file:
global_defs {
    router_id LVS_k8s
}
vrrp_script CheckK8sMaster {
    script "curl -k ..."
    interval 3
    timeout 9
    fall 2
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens32
    virtual_router_id 61
    priority 110
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    unicast_peer {
        192.168.20.224
        192.168.20.209
    }
    virtual_ipaddress {
        192.168.20.227/24
    }
    track_script {
        CheckK8sMaster
    }
}

docker3 keepalived configuration file:
global_defs {
    router_id LVS_k8s
}
vrrp_script CheckK8sMaster {
    script "curl -k ..."
    interval 3
    timeout 9
    fall 2
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface ens32
    virtual_router_id 61
    priority 100
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    unicast_peer {
        192.168.20.223
        192.168.20.209
    }
    virtual_ipaddress {
        192.168.20.227/24
    }
    track_script {
        CheckK8sMaster
    }
}
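The curl target in the vrrp_script above was not preserved. Purely as an assumption, a typical check probes the apiserver behind the VIP, and keepalived then needs to be enabled and started on all three masters:
# Hypothetical health check for the vrrp_script block above
curl -k https://192.168.20.227:6443/healthz
# Enable and start keepalived on docker1, docker2 and docker3
systemctl enable keepalived.service
systemctl start keepalived.service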
10. Verification
Joining a node
[root@docker4 ~]# yum install -y ebtables socat docker
[root@docker4 ~]# scp 192.168.20.209:/root/*.rpm /root/
[root@docker4 ~]# ls
kubectl-1.5.1-0.x86_64.rpm kubeadm-1.6.0-0.alpha.0.2074.a092d8e0f95f52.x86_64.rpm kubelet-1.5.1-0.x86_64.rpm kubernetes-cni-0.3.0.1-0.07a8a2.x86_64.rpm
[root@docker4 ~]# rpm -ivh *.rpm
[root@docker4 ~]# systemctl start kubelet.service
[root@docker4 ~]# systemctl enable kubelet.service
[root@docker4 ~]# kubeadm join --token=7fd3db.6c8b8f165050c555 192.168.20.227
Log in to docker1 and check whether docker4 has joined the cluster.
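On docker1 the check amounts to listing the nodes; docker4 should appear and eventually report Ready (output paraphrased, not captured from this cluster):
[root@docker1 ~]# kubectl get nodes
# docker1, docker2, docker3 and docker4 should all be listed as Ready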
Shut down the master docker1 (192.168.20.209).
In /var/log/messages on 192.168.20.223 you can see that the VIP 192.168.20.227 is added automatically.
With the master1 server shut down, the status of the docker nodes is reported as lost; using docker2 to log in to a pod, DNS resolution and access still work.
Installing the Kubernetes dashboard
[root@docker2 ~]# wget ...   # kubernetes-dashboard.yaml
[root@docker2 ~]# kubectl create -f kubernetes-dashboard.yaml
[root@docker2 ~]# kubectl describe service kubernetes-dashboard --namespace=kube-system
Access it in the browser (IE):
Installing Heapster monitoring (InfluxDB + Grafana)
Heapster supports several storage backends, including InfluxDB and BigQuery; the Heapster project homepage is:
InfluxDB is a distributed time-series database. Every record carries a timestamp; it is mainly used for real-time data collection, event tracking, and storing time charts and raw data. InfluxDB provides a REST API for storing and querying data; its homepage is:
Grafana presents the real-time data in InfluxDB as charts and curves through a dashboard; its homepage is:
[root@docker2 ~]# git clone ...
[root@docker2 ~]# cd /root/heapster/deploy/kube-config/influxdb/
[root@docker2 ~]# ls
grafana-service.yaml heapster-controller.yaml heapster-service.yaml influxdb-grafana-controller.yaml influxdb-service.yaml
Because the apiserver uses the VIP and runs as a container, it listens on 127.0.0.1:8080 by default. heapster-controller.yaml needs to connect to apiserver:8080, but telnet to 192.168.20.227 times out, so the load-balancer (cluster) IP is used instead, provided that DNS resolution works. In fact 10.96.0.1 is the apiserver's service address, on port 443.
Log in to the nginx pod and test DNS resolution, for example:
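A minimal check from inside the nginx pod; the pod name placeholder is whatever kubectl get pods reports, and nslookup being present in the image is an assumption:
[root@docker2 ~]# kubectl exec -it <my-nginx-pod-name> -- nslookup kubernetes.default.svc.cluster.local
# Expected answer: 10.96.0.1, the apiserver service address mentioned above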
Modify the heapster-controller.yaml file; note that the DNS name must be written out in full: kubernetes.default.svc.cluster.local.
spec:
  containers:
  - name: heapster
    image: kubernetes/heapster:canary
    imagePullPolicy: IfNotPresent
    command:
    - /heapster
    - --source=kubernetes:
    - --sink=influxdb:
[root@docker2 ~]# kubectl create -f /root/heapster/deploy/kube-config/influxdb/
Log in to the dashboard through its mapped NodePort to check that the monitoring graphs work; the port is 30263.
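The values of --source and --sink are cut off above. Purely as an assumption, a common way to write them for this kind of setup (the full cluster DNS name of the apiserver, and the monitoring-influxdb service created by the manifests in this directory) is:
/heapster --source=kubernetes:https://kubernetes.default.svc.cluster.local --sink=influxdb:http://monitoring-influxdb.kube-system.svc.cluster.local:8086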
Kubernetes Nginx Ingress
Kubernetes currently exposes services in only three ways: LoadBalancer Service, NodePort Service, and Ingress.
Because pods and services are virtual concepts scoped to the Kubernetes cluster, clients outside the cluster cannot reach them through the pod IP or the service's virtual IP and port; instead, pod or service ports can be mapped onto the host machine.
1. Mapping the container application's port to the physical host
By setting hostNetwork: true at the pod level, the pod's ports are mapped onto the physical host. In the container's ports definition, if hostPort is not specified it defaults to containerPort; if hostPort is specified, it must equal containerPort.
apiVersion: v1
kind: Pod
metadata:
  name: webapp
  labels:
    app: webapp
spec:
  hostNetwork: true
  containers:
  - name: webapp
    image: docker.cinyi.com:443/senyint/nginx
    ports:
    - containerPort: 80
      hostPort: 80
Check the port on docker4, for example:
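A quick way to check the hostNetwork mapping on the node where the webapp pod was scheduled (docker4 here); ss and curl being installed is an assumption:
[root@docker4 ~]# ss -lntp | grep ':80 '
[root@docker4 ~]# curl -I http://127.0.0.1:80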
2. Mapping the service port to the physical host
By setting a nodePort, the service port is mapped onto the physical host; if nodePort is not specified, a port in the default NodePort range 30000-32767 is assigned at random.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tomcat-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: tomcat
        tier: frontend
    spec:
      containers:
      - name: tomcat
        image: docker.cinyi.com:443/tomcat
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: tomcat-server
spec:
  type: NodePort
  ports:
  - port: 11111
    targetPort: 8080
    nodePort: 30001
  selector:
    tier: frontend
Ingress uses Nginx as a reverse proxy keyed on service name and port: an Ingress forwards requests for different URLs to different backend services. The Ingress definition has to be combined with an Ingress Controller to form a complete solution, which addresses two problems:
1. Pod migration: when a pod dies, a new one is started on another machine, so the pod IP changes.
2. Exposing services via NodePort means more and more host ports to keep track of, which is hard to manage.
Installing Ingress Nginx
1. Deploy the default backend; requests for domains that do not exist are all sent to the default backend.
cat /root/default-backend.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    k8s-app: default-http-backend
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        image: gcr.io/google_containers/defaultbackend:1.0
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: kube-system
  labels:
    k8s-app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    k8s-app: default-http-backend

[root@docker1 ~]# kubectl create -f default-backend.yaml
2. Deploying the Ingress Controller
In the nginx-ingress-controller.yaml file I changed the kind to DaemonSet and, through the nodeSelector zone attribute, run it on the three masters (apiservers).
apiVersion: extensions/v1beta1
kind: DaemonSet
#kind: Deployment
metadata:
  name: nginx-ingress-controller
  labels:
    k8s-app: nginx-ingress-controller
  namespace: kube-system
spec:
#  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-controller
    spec:
      nodeSelector:
        zone: nginx-ingress
      hostNetwork: true
      terminationGracePeriodSeconds: 60
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.2
        name: nginx-ingress-controller
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend

[root@docker1 ~]# kubectl create -f nginx-ingress-controller.yaml
Label the machines it should run on; if it is already running on other nodes, delete those pods with kubectl delete pods <pod-name>.
[root@docker1 ~]# kubectl label node docker1 zone=nginx-ingress
[root@docker2 ~]# kubectl label node docker2 zone=nginx-ingress
[root@docker3 ~]# kubectl label node docker3 zone=nginx-ingress
Deploy the Ingress; first list all the services.
Write a yaml file whose content maps domain name to service and port:
[root@docker1 ~]# vim show-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dashboard-nginx-ingress
  namespace: kube-system
spec:
  rules:
  - host: dashboard.feng.com
    http:
      paths:
      - backend:
          serviceName: kubernetes-dashboard
          servicePort: 80
  - host: nginx.feng.com
    http:
      paths:
      - backend:
          serviceName: my-nginx-svc
          servicePort: 80
[root@docker1 ~]# kubectl create -f show-ingress.yaml
[root@docker1 ~]# kubectl get ingress --namespace=kube-system -o wide
NAME                      HOSTS                               ADDRESS                                        PORTS   AGE
dashboard-nginx-ingress   dashboard.feng.com,nginx.feng.com   192.168.20.209,192.168.20.223,192.168.20.224   80      19h
The two domains are bound to the three IPs and mapped to port 80 on the hosts.
For services in different namespaces, write two separate Ingress yaml files:
[root@docker1 ~]# cat dashboard-ingressA.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dashboard-ingressa
  namespace: kube-system
spec:
  rules:
  - host: dashboard.feng.com
    http:
      paths:
      - backend:
          serviceName: kubernetes-dashboard
          servicePort: 80
  - host: nginx.feng.com
    http:
      paths:
      - backend:
          serviceName: monitoring-grafana
          servicePort: 80
[root@docker1 ~]# cat dashboard-ingressB.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dashboard-ingressb
  namespace: test
spec:
  rules:
  - host: imweb.feng.com
    http:
      paths:
      - backend:
          serviceName: im-web
          servicePort: 80
[root@docker1 ~]# kubectl create -f default-backend.yaml -f nginx-ingress-controller.yaml -f dashboard-ingressA.yaml -f dashboard-ingressB.yaml
deployment "default-http-backend" created
service "default-http-backend" created
daemonset "nginx-ingress-controller" created
ingress "dashboard-ingressa" created
ingress "dashboard-ingressb" created
[root@docker1 ~]# kubectl get ingress --all-namespaces
NAMESPACE     NAME                 HOSTS                               ADDRESS   PORTS   AGE
kube-system   dashboard-ingressa   dashboard.feng.com,nginx.feng.com             80      27s
test          dashboard-ingressb   imweb.feng.com                                80
After adding the domains to the client's hosts file, access them by name, as sketched below.
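A hedged illustration of the client-side setup; any of the three ingress node IPs above will do, and the hosts-file path shown is the usual Linux one:
# On the client machine (Linux); Windows uses C:\Windows\System32\drivers\etc\hosts
echo "192.168.20.209 dashboard.feng.com nginx.feng.com imweb.feng.com" >> /etc/hosts
# Or test without editing hosts, by sending the Host header directly to an ingress node
curl -H "Host: nginx.feng.com" http://192.168.20.209/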
If you want to avoid Ingress and use nginx directly with hostNetwork: true, then after creating the pod and service and entering the pod, nslookup cannot resolve the other services' names, because a hostNetwork pod uses the node's resolv.conf rather than the cluster DNS by default.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      hostNetwork: true
      containers:
      - name: nginx
        image: docker.cinyi.com:443/senyint/nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - port: 80
  selector:
    app: nginx