Table of Contents

I. Property Reference
II. Definition and Basic Configuration
  1. Definition
  2. Creating a Service
    2.1 type: ClusterIP
    2.2 type: NodePort
    2.3 Fixed Backend per Client (Session Affinity)
III. Service, Endpoints, and Pods
IV. Service Discovery
  1. Accessing an external service by IP through a Service
  2. Accessing an external service by domain name through a Service
V. Installing and Using Ingress
  1. What is Ingress
  2. Installing Ingress
    2.1 Installing Helm
    2.2 Preparing the environment
    2.3 Configuring the SSL certificate
VI. Appendix
  1. Helm installation package
  2. Modified values.yaml
  3. Helm / Kubernetes version compatibility
  4. ingress-nginx / Kubernetes version compatibility

I. Property Reference
| Property | Type | Required | Description |
|---|---|---|---|
| version | String | ✔ | v1 |
| kind | String | ✔ | Service |
| metadata | Object | ✔ | Metadata |
| metadata.name | String | ✔ | Service name |
| metadata.namespace | String | ✔ | Namespace; defaults to `default` if not specified |
| metadata.labels[] | list | | List of custom labels |
| metadata.annotation[] | list | | List of custom annotations |
| spec | Object | ✔ | Detailed description |
| spec.selector[] | list | ✔ | Label selector; Pods carrying the specified labels fall under this Service's management |
| spec.type | String | ✔ | Service type, default `ClusterIP`. `ClusterIP`: a virtual service IP used by Pods inside the Kubernetes cluster; on each Node, kube-proxy forwards traffic to it through the iptables rules it installs. `NodePort`: uses a host port, so external clients can reach the service through any Node's IP address and that port. `LoadBalancer`: distributes load through an external load balancer; set the balancer's address in `status.loadBalancer` and also define `nodePort` and `clusterIP`; intended for public-cloud environments. |
| spec.clusterIP | String | | The virtual service IP. With `type: ClusterIP` it is auto-allocated when omitted, or can be set by hand; it must be specified when `type: LoadBalancer`. |
| spec.sessionAffinity | String | | Session affinity; allowed value `ClientIP`, default `None`. `ClientIP` routes every request from the same client (decided by client IP) to the same backend Pod. |
| spec.ports[] | list | | Service port list |
| spec.ports[].name | String | | Port name |
| spec.ports[].protocol | String | | Port protocol, TCP or UDP; default TCP |
| spec.ports[].port | int | | Port the Service listens on |
| spec.ports[].targetPort | int | | Pod port the traffic is forwarded to |
| spec.ports[].nodePort | int | | Host port to map when `spec.type` is `NodePort` |
| status | Object | | When `spec.type` is `LoadBalancer`, holds the external load balancer's address; for public clouds |
| status.loadBalancer | Object | | External load balancer |
| status.loadBalancer.ingress | Object | | External load balancer |
| status.loadBalancer.ingress.ip | String | | External load balancer IP |
| status.loadBalancer.ingress.hostname | String | | External load balancer hostname |
II. Definition and Basic Configuration

1. Definition

A Service provides network access to a set of Pods: it gives client applications a stable address (a domain name or IP) and load balancing, and it hides changes in the backend Endpoints. It is the core resource with which Kubernetes implements microservices. Our services are usually distributed, so they never consist of a single Pod; Pods also scale out and in, and are rescheduled when they fail, all of which changes their IPs. A Service applies its own load-balancing policy to route each request to a Pod, so callers never need to track Pod IP changes.

2. Creating a Service
```shell
# Create the Deployment
kubectl create -f nginx-deploy.yaml
# Create the Service
kubectl create -f nginx-svc.yaml
```
```yaml
apiVersion: apps/v1             # API version
kind: Deployment                # resource type
metadata:                       # metadata
  labels:
    app: my-nginx
  name: nginx-deploy
spec:                           # desired state
  replicas: 3                   # number of replicas
  revisionHistoryLimit: 10      # revisions kept for rollback; 0 disables rollback
  selector:                     # selector
    matchLabels:                # match by label
      app: my-nginx             # label value
  template:
    metadata:
      labels:
        app: my-nginx
    spec:                       # container spec
      containers:
      - name: nginx-container
        image: nginx:1.21.4
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: nginx-port
          protocol: TCP
```

```yaml
apiVersion: v1
kind: Service                   # resource type
metadata:                       # metadata
  name: nginx-svc               # Service name
  labels:
    app: nginx-svc              # the Service's own label
spec:
  selector:
    app: my-nginx               # every Pod matching this label is reachable through the Service
  ports:
  - protocol: TCP               # TCP, UDP, or SCTP; default TCP
    port: 80                    # the Service's own port, for in-cluster access
    targetPort: 80              # target Pod port
  type: ClusterIP               # default; NodePort instead binds a port in 30000-32767 on every node so the Service can be reached from outside
```
```shell
# Create the resources
kubectl create -f nginx-deploy.yaml
kubectl create -f nginx-svc.yaml
# Inspect the Service
kubectl get services
kubectl get svc
# Inspect the Endpoints
kubectl get endpoints
kubectl get ep
# Inspect the Pods
kubectl get po -o wide
```

Change the nginx welcome page in each of the three Pods:

```shell
# Exec into each container in turn and overwrite the nginx welcome page
kubectl exec nginx-deploy-6c648dd6dd-867b6 -it -- /bin/sh
kubectl exec nginx-deploy-6c648dd6dd-pj5tp -it -- /bin/sh
kubectl exec nginx-deploy-6c648dd6dd-tgl82 -it -- /bin/sh
echo "10.244.1.145 node2" > /usr/share/nginx/html/index.html
echo "10.244.1.144 node2" > /usr/share/nginx/html/index.html
echo "172.17.0.3 node1" > /usr/share/nginx/html/index.html
```
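The label selector that ties the Service to these Pods can be sketched in a few lines: a Service selects a Pod when every key/value pair in its selector also appears among the Pod's labels. This is a minimal illustration of the matching rule, not how kube-proxy actually implements it; the Pod names are stand-ins.

```python
def selects(selector: dict, pod_labels: dict) -> bool:
    """A Service selects a Pod when every selector key/value pair
    is present in the Pod's labels (the Pod may carry extra labels)."""
    return all(pod_labels.get(k) == v for k, v in selector.items())

pods = [
    {"name": "nginx-deploy-abc", "labels": {"app": "my-nginx", "tier": "web"}},
    {"name": "other-pod", "labels": {"app": "other"}},
]
selector = {"app": "my-nginx"}
backends = [p["name"] for p in pods if selects(selector, p["labels"])]
print(backends)  # only the Pod labeled app=my-nginx becomes a backend
```

Extra labels on a Pod do not matter; only the selector's keys are checked.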
2.1 type: ClusterIP

A ClusterIP address can only be reached by Pods inside the Kubernetes cluster. Access nginx through the Service:

```shell
# Access nginx through the Service
while true; do curl 10.97.61.178; sleep 1; done
```

Each request lands on a randomly chosen Pod.
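The roughly uniform spread the curl loop shows can be mimicked with a toy model of iptables' default random mode (an illustration only, not kube-proxy itself; the endpoint IPs are the ones from the exercise above):

```python
import random

endpoints = ["10.244.1.145", "10.244.1.144", "172.17.0.3"]

def pick_backend() -> str:
    # iptables in its default "random" mode picks each backend with equal probability
    return random.choice(endpoints)

random.seed(0)                     # seeded only to make the demo repeatable
hits = [pick_backend() for _ in range(9)]
print(hits)                        # successive requests land on different Pods
```

Because every request is chosen independently, there is no guarantee about ordering — which is exactly why the welcome pages alternate irregularly in the curl output.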
```shell
# Edit the Deployment and scale up by setting replicas to 4
kubectl edit deploy nginx-deploy
# Inspect the Pods
kubectl get po -o wide
```

Accessing the Service again now also reaches the newly added Pod.

2.2 type: NodePort
NodePort exposes the service on a port of every host, so external clients can reach it through any Node's IP address and that port.
```yaml
apiVersion: v1
kind: Service                   # resource type
metadata:                       # metadata
  name: nginx-svc               # Service name
  labels:
    app: nginx-svc              # the Service's own label
spec:
  selector:
    app: my-nginx               # every Pod matching this label is reachable through the Service
  ports:
  - protocol: TCP               # TCP, UDP, or SCTP; default TCP
    port: 80                    # the Service's own port, for in-cluster access
    targetPort: 80              # target Pod port
    nodePort: 30080             # explicit host port for external access; must be in 30000-32767
  type: NodePort
```
```shell
# Create the resource
kubectl create -f nginx-svc.yaml
# Inspect the Service
kubectl get svc
```

2.3 Fixed Backend per Client (Session Affinity)
```yaml
apiVersion: v1
kind: Service                   # resource type
metadata:                       # metadata
  name: nginx-svc               # Service name
  labels:
    app: nginx-svc              # the Service's own label
spec:
  selector:
    app: my-nginx               # every Pod matching this label is reachable through the Service
  ports:
  - protocol: TCP               # TCP, UDP, or SCTP; default TCP
    port: 80                    # the Service's own port, for in-cluster access
    targetPort: 80              # target Pod port
  type: ClusterIP
  sessionAffinity: ClientIP     # pin each client IP to one Pod; default is None
  sessionAffinityConfig:        # session configuration
    clientIP:
      timeoutSeconds: 3600      # maximum session sticky time, in seconds
```
```shell
# Create the resource
kubectl create -f nginx-svc.yaml
# Inspect
kubectl get svc
# Access the Pods through the Service -- every request now hits the same Pod
while true; do curl 10.111.86.26; sleep 1; done
```

III. Service, Endpoints, and Pods

Endpoints plays a role similar to the service registries we studied earlier, such as Eureka and Nacos. It is a Kubernetes resource object, stored in etcd, that records the access addresses of all the Pods behind a Service.
A Service is backed by a group of Pods associated through labels; those Pods are exposed through an Endpoints object, the collection of endpoints that actually serve the traffic.
In plain terms: a Kubernetes Service talks to its Endpoints to obtain the real backend addresses, then routes each request on to a concrete Pod.
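The Service → Endpoints → Pod chain amounts to a two-step lookup, which can be pictured like this (a deliberately simplified model; real Endpoints objects live in etcd and are kept in sync by the endpoints controller — the addresses here are the ones from the earlier exercise):

```python
# Service name -> its Endpoints object (the list of real Pod addresses)
endpoints = {
    "nginx-svc": ["10.244.1.145:80", "10.244.1.144:80", "172.17.0.3:80"],
}

def resolve(service: str) -> list:
    """Routing first consults Endpoints for the real backend addresses;
    a Service with no ready Pods simply has an empty address list."""
    return endpoints.get(service, [])

print(resolve("nginx-svc"))   # the three Pod addresses behind the Service
```

When Pods are added, removed, or rescheduled, only this mapping changes; the Service's own name and IP stay stable, which is the whole point of the indirection.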
IV. Service Discovery
1. Accessing an External Service by IP Through a Service

An ordinary Service uses a label selector to abstract over a list of backend Endpoints. When the backend Endpoints are not provided by Pod replicas, a Service can abstract any other service as well: a known service outside the Kubernetes cluster can be defined as a Service inside it, so that other in-cluster applications can reach it. Common scenarios:

- A service already deployed outside the cluster, such as a database or a cache.
- A service in another Kubernetes cluster.
- During a migration, validating that a service can be reached through the in-cluster service-name mechanism.
```yaml
apiVersion: v1
kind: Service                   # resource type
metadata:                       # metadata
  name: nginx-svc-external      # Service name
  labels:
    app: nginx                  # the Service's own label
spec:
  ports:
  - port: 80                    # the Service's own port, for in-cluster access
    targetPort: 80              # target port
    name: web
  type: ClusterIP
---
# Hand-written Endpoints
apiVersion: v1
kind: Endpoints                 # resource type
metadata:
  labels:
    app: nginx                  # must match the Service above
  name: nginx-svc-external      # must equal the Service name
  namespace: default            # namespace
subsets:
- addresses:
  - ip: 192.168.139.1           # target IP; in-cluster requests to the Service are forwarded here (my own machine in this test -- a public IP works the same way)
  ports:
  - port: 8080                  # target port (a Tomcat instance here)
    name: web                   # must match the port name in the Service
    protocol: TCP               # must match the Service
```
```shell
# Create the resources
kubectl create -f nginx-svc-external-ip.yaml
# Inspect the Service and Endpoints
kubectl get svc,ep
# Test with a busybox container (create one if none exists)
kubectl run -it --image busybox:1.28.4 dns-test -- /bin/sh
# If it already exists, exec into the Pod instead
kubectl exec -it dns-test -- sh
# Inside the Pod, probe with wget; nginx-svc-external is the Service name.
# Cross-namespace access works too, in the form serviceName.namespace
wget http://nginx-svc-external
```

Conclusion: the request to the Service reaches the Tomcat instance.
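The serviceName.namespace form used above is shorthand for the full in-cluster DNS name, which follows a fixed pattern. A small sketch of how that name is composed (assuming the default cluster domain cluster.local):

```python
def service_dns(name: str, namespace: str = "default",
                cluster_domain: str = "cluster.local") -> str:
    # Full form: <service>.<namespace>.svc.<cluster-domain>
    return f"{name}.{namespace}.svc.{cluster_domain}"

print(service_dns("nginx-svc-external"))
# -> nginx-svc-external.default.svc.cluster.local
print(service_dns("nginx-svc-external", "test"))
# -> nginx-svc-external.test.svc.cluster.local (cross-namespace form)
```

Inside a Pod, the resolver's search domains let you drop the trailing parts, which is why the bare name worked in the wget probe within the same namespace.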
2. Accessing an External Service by Domain Name Through a Service
```yaml
apiVersion: v1
kind: Service                       # resource type
metadata:                           # metadata
  name: nginx-svc-external-domain   # Service name
  labels:
    app: nginx-svc-external-domain  # the Service's own label
spec:
  type: ExternalName
  externalName: www.wssnail.com
```
```shell
# Create the resource
kubectl create -f nginx-svc-external-domain.yaml
# Inspect
kubectl get svc
```

V. Installing and Using Ingress
1. What Is Ingress

Ingress provides HTTP and HTTPS routing from outside the cluster to Services inside it; traffic routing is controlled by rules defined on the Ingress resource. When requests are routed through an Ingress, the Ingress controller uses those rules to forward each client request directly to a backend Endpoint of the matching Service. This bypasses the forwarding rules set up by kube-proxy and improves network forwarding efficiency. The figure below sketches network access through an Ingress.

- A request for www.wsssnail.com/api is routed to the api Service, which forwards it to one of the Pods it manages.
- A request for www.wsssnail.com/web is routed to the web Service, which forwards it to one of the Pods it manages.
- A request for www.wsssnail.com/doc is routed to the doc Service, which forwards it to one of the Pods it manages.
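The routing table above boils down to prefix matching on the request path, preferring the longest matching prefix. A minimal sketch (the paths and service names mirror the example; this is not a real controller):

```python
# path prefix -> backend Service (hypothetical names mirroring the figure)
routes = {"/api": "api-svc", "/web": "web-svc", "/doc": "doc-svc"}

def route(path: str):
    """Return the Service whose prefix matches the request path,
    preferring the longest matching prefix; None if nothing matches."""
    matches = [p for p in routes if path.startswith(p)]
    return routes[max(matches, key=len)] if matches else None

print(route("/api/users"))  # handled by api-svc
print(route("/doc"))        # handled by doc-svc
print(route("/other"))      # no rule matches -> None (controller returns the default backend)
```

A real ingress-nginx controller additionally matches on the host header first (test.wssnail.com below) and only then on the path.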
2. Installing Ingress

2.1 Installing Helm
```shell
# Official installation docs: https://helm.sh/zh/docs/intro/install/
# Create a working directory and enter it
mkdir helm
cd helm
# Download the helm tarball
wget https://get.helm.sh/helm-v3.11.3-linux-amd64.tar.gz
# Unpack it and move the helm binary to /usr/local/bin
tar -zxf helm-v3.11.3-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/
# Check the version
helm version
# Add the ingress-nginx repo
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
# List configured repos
helm repo list
# Search the repo
helm search repo ingress-nginx
# Pull the chart at a specific version
helm pull ingress-nginx/ingress-nginx --version 4.5.0
# Move the chart archive into the helm directory
mv ingress-nginx-4.5.0.tgz /root/helm/
# Unpack it
tar -xf ingress-nginx-4.5.0.tgz
```

Enter the ingress-nginx directory and edit values.yaml. The changes are mainly the image registry and image names, plus the node-label selector; the complete modified file is in Appendix 2.

```shell
# Create the namespace
kubectl create ns ingress-nginx
# Label the node that should run the controller
kubectl label node node1 ingress=true
# Run the install from inside the chart directory -- don't forget the trailing dot
helm install ingress-nginx -n ingress-nginx .
# Watch the Pods come up
kubectl get po -n ingress-nginx

# If the installation failed, uninstall the release:
# list releases in the namespace
helm list -n <namespace>
# uninstall
helm delete ingress-nginx -n <namespace>
```

2.2 Preparing the Environment
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
#  annotations:
#    kubernetes.io/ingress.class: nginx
#    nginx.ingress.kubernetes.io/enable-cors: "true"       # enable CORS
#    nginx.ingress.kubernetes.io/cors-allow-origin: "*"    # allow every origin
#    nginx.ingress.kubernetes.io/cors-allow-methods: GET, PUT, POST, DELETE, PATCH, OPTIONS  # allowed HTTP methods
spec:
  ingressClassName: nginx
  rules:
  - host: test.wssnail.com
    http:
      paths:
      - pathType: Prefix
        backend:
          service:
            name: nginx-svc
            port:
              number: 80
        path: /
  tls:
  - hosts:
    - test.wssnail.com
    secretName: ingress-secret
```
```yaml
#---
#apiVersion: v1
#kind: Secret
#metadata:
#  name: example-tls
#data:
#  tls.crt: base64 encoded cert
#  tls.key: base64 encoded key
#type: kubernetes.io/tls
```

```yaml
apiVersion: apps/v1             # API version
kind: Deployment                # resource type
metadata:                       # metadata
  name: nginx-deploy-test-ingress
spec:                           # desired state
  replicas: 3                   # number of replicas
  selector:                     # selector
    matchLabels:
      app: nginx-test-ingress   # label value
  template:
    metadata:
      labels:
        app: nginx-test-ingress
    spec:                       # container spec
      containers:
      - name: nginx-container
        image: nginx:1.21.6
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: nginx-port
          protocol: TCP
---
apiVersion: v1
kind: Service                   # resource type
metadata:                       # metadata
  name: nginx-svc               # Service name
  labels:
    app: nginx-test-ingress     # the Service's own label
spec:
  selector:
    app: nginx-test-ingress     # every Pod matching this label is reachable through the Service
  ports:
  - protocol: TCP               # TCP, UDP, or SCTP; default TCP
    port: 80                    # the Service's own port, for in-cluster access
    name: web
  type: NodePort
```
```shell
# Create the resources
kubectl create -f ingress-nginx.yaml
kubectl create -f nginx-svc-test-ingress.yaml
```

Configure the hosts file: on Windows, add a DNS entry under C:\Windows\System32\drivers\etc:

```
192.168.139.207 test.wssnail.com   # the IP of the node where the ingress Pod runs
```

Opening this domain in a browser ran into a cross-origin problem that I have not yet solved, but the host pings fine, which proves the Ingress is already forwarding to the corresponding service.

2.3 Configuring the SSL Certificate

Generate the certificate:
```shell
# Generate a self-signed certificate with openssl; this writes tls.key and tls.crt
openssl req -x509 -nodes -days 500 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=test.wssnail.com"
```

Create the secret:

```shell
# ingress-secret is the certificate's name; tls.key and tls.crt come from the previous step
kubectl create secret tls ingress-secret --key tls.key --cert tls.crt
```
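`kubectl create secret tls` base64-encodes both PEM files into the Secret's `data` field — the same encoding you would apply by hand when filling in the commented-out Secret manifest below. A small illustration (the PEM contents are stand-ins for the files produced by the openssl command above):

```python
import base64

# Stand-ins for the PEM files written by the openssl command
tls_crt = b"-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n"
tls_key = b"-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n"

# This is what ends up under data: in the kubernetes.io/tls Secret
secret_data = {
    "tls.crt": base64.b64encode(tls_crt).decode(),
    "tls.key": base64.b64encode(tls_key).decode(),
}

# Decoding restores the original bytes exactly -- base64 is an encoding, not encryption
assert base64.b64decode(secret_data["tls.crt"]) == tls_crt
print(sorted(secret_data))  # the two keys the TLS Secret type requires
```

Note that a Secret only encodes, it does not encrypt; anyone who can read the Secret can recover the private key.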
Create the Ingress:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
#  annotations:
#    kubernetes.io/ingress.class: nginx
#    nginx.ingress.kubernetes.io/enable-cors: "true"       # enable CORS
#    nginx.ingress.kubernetes.io/cors-allow-origin: "*"    # allow every origin
#    nginx.ingress.kubernetes.io/cors-allow-methods: GET, PUT, POST, DELETE, PATCH, OPTIONS  # allowed HTTP methods
spec:
  ingressClassName: nginx
  rules:
  - host: test.wssnail.com
    http:
      paths:
      - pathType: Prefix
        backend:
          service:
            name: nginx-svc
            port:
              number: 80
        path: /
  tls:
  - hosts:
    - test.wssnail.com
    secretName: ingress-secret       # the name of the secret created above
#---
#apiVersion: v1
#kind: Secret
#metadata:
#  name: example-tls
#data:
#  tls.crt: base64 encoded cert     # paste the base64-encoded content directly
#  tls.key: base64 encoded key
#type: kubernetes.io/tls
```
```shell
# Create the resource; the domain is then reachable over https
kubectl create -f ingress-nginx.yaml
```
VI. Appendix

1. Helm installation package

Link: https://pan.baidu.com/s/1Pve4W3cMGh9HvasapL-81A?pwd=iafb (extraction code: iafb)

2. Modified values.yaml
## nginx configuration
## Ref: https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/index.md
#### Overrides for generated resource names
# See templates/_helpers.tpl
# nameOverride:
# fullnameOverride:## Labels to apply to all resources
##
commonLabels: {}
# scmhash: abc123
# myLabel: aakkmdcontroller:name: controllerimage:## Keep false as default for now!chroot: falseregistry: registry.cn-hangzhou.aliyuncs.com #此处修改镜像image: google_containers/nginx-ingress-controller #此处修改镜像## for backwards compatibility consider setting the full image url via the repository value below## use *either* current default registry/image or repository format or installing chart by providing the values.yaml will fail## repository:tag: v1.6.3#digest: sha256:b92667e0afde1103b736e6a3f00dd75ae66eec4e71827d19f19f471699e909d2 #此处注释掉验证#digestChroot: sha256:4b4a249c9a35ac16a8ec0e22f6c522b8707f7e59e656e64a4ad9ace8fea830a4 #此处注释掉验证pullPolicy: IfNotPresent# www-data - uid 101runAsUser: 101allowPrivilegeEscalation: true# -- Use an existing PSP instead of creating oneexistingPsp: # -- Configures the controller container namecontainerName: controller# -- Configures the ports that the nginx-controller listens oncontainerPort:http: 80https: 443# -- Will add custom configuration options to Nginx https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/config: {}# -- Annotations to be added to the controller config configuration configmap.configAnnotations: {}# -- Will add custom headers before sending traffic to backends according to https://github.com/kubernetes/ingress-nginx/tree/main/docs/examples/customization/custom-headersproxySetHeaders: {}# -- Will add custom headers before sending response traffic to the client according to: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#add-headersaddHeaders: {}# -- Optionally customize the pod dnsConfig.dnsConfig: {}# -- Optionally customize the pod hostname.hostname: {}# -- Optionally change this to ClusterFirstWithHostNet in case you have hostNetwork: true.# By default, while using host network, name resolution uses the hosts DNS. 
If you wish nginx-controller# to keep resolving names inside the k8s network, use ClusterFirstWithHostNet.#dnsPolicy: ClusterFirstdnsPolicy: ClusterFirstWithHostNet #此处修改# -- Bare-metal considerations via the host network https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#via-the-host-network# Ingress status was blank because there is no Service exposing the NGINX Ingress controller in a configuration using the host network, the default --publish-service flag used in standard cloud setups does not applyreportNodeInternalIp: false# -- Process Ingress objects without ingressClass annotation/ingressClassName field# Overrides value for --watch-ingress-without-class flag of the controller binary# Defaults to falsewatchIngressWithoutClass: false# -- Process IngressClass per name (additionally as per spec.controller).ingressClassByName: false# -- This configuration enables Topology Aware Routing feature, used together with service annotation service.kubernetes.io/topology-aware-hintsauto# Defaults to falseenableTopologyAwareRouting: false# -- This configuration defines if Ingress Controller should allow users to set# their own *-snippet annotations, otherwise this is forbidden / dropped# when users add those annotations.# Global snippets in ConfigMap are still respectedallowSnippetAnnotations: true# -- Required for use with CNI based kubernetes installations (such as ones set up by kubeadm),# since CNI and hostport dont mix yet. 
Can be deprecated once https://github.com/kubernetes/kubernetes/issues/23920# is mergedhostNetwork: true #此处修改## Use host ports 80 and 443## Disabled by defaulthostPort:# -- Enable hostPort or notenabled: falseports:# -- hostPort http porthttp: 80# -- hostPort https porthttps: 443# -- Election ID to use for status update, by default it uses the controller name combined with a suffix of leaderelectionID: ## This section refers to the creation of the IngressClass resource## IngressClass resources are supported since k8s 1.18 and required since k8s 1.19ingressClassResource:# -- Name of the ingressClassname: nginx# -- Is this ingressClass enabled or notenabled: true# -- Is this the default ingressClass for the clusterdefault: false# -- Controller-value of the controller that is processing this ingressClasscontrollerValue: k8s.io/ingress-nginx# -- Parameters is a link to a custom resource containing additional# configuration for the controller. This is optional if the controller# does not require extra parameters.parameters: {}# -- For backwards compatibility with ingress.class annotation, use ingressClass.# Algorithm is as follows, first ingressClassName is considered, if not present, controller looks for ingress.class annotationingressClass: nginx# -- Labels to add to the pod container metadatapodLabels: {}# key: value# -- Security Context policies for controller podspodSecurityContext: {}# -- See https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/ for notes on enabling and using sysctlssysctls: {}# sysctls:# net.core.somaxconn: 8192# -- Allows customization of the source of the IP address or FQDN to report# in the ingress status field. By default, it reads the information provided# by the service. 
If disable, the status field reports the IP address of the# node or nodes where an ingress controller pod is running.publishService:# -- Enable publishService or notenabled: true# -- Allows overriding of the publish service to bind to# Must be namespace/service_namepathOverride: # Limit the scope of the controller to a specific namespacescope:# -- Enable scope or notenabled: false# -- Namespace to limit the controller to; defaults to $(POD_NAMESPACE)namespace: # -- When scope.enabled false, instead of watching all namespaces, we watching namespaces whose labels# only match with namespaceSelector. Format like foobar. Defaults to empty, means watching all namespaces.namespaceSelector: # -- Allows customization of the configmap / nginx-configmap namespace; defaults to $(POD_NAMESPACE)configMapNamespace: tcp:# -- Allows customization of the tcp-services-configmap; defaults to $(POD_NAMESPACE)configMapNamespace: # -- Annotations to be added to the tcp config configmapannotations: {}udp:# -- Allows customization of the udp-services-configmap; defaults to $(POD_NAMESPACE)configMapNamespace: # -- Annotations to be added to the udp config configmapannotations: {}# -- Maxmind license key to download GeoLite2 Databases.## https://blog.maxmind.com/2019/12/18/significant-changes-to-accessing-and-using-geolite2-databasesmaxmindLicenseKey: # -- Additional command line arguments to pass to nginx-ingress-controller# E.g. 
to specify the default SSL certificate you can useextraArgs: {}## extraArgs:## default-ssl-certificate: namespace/secret_name# -- Additional environment variables to setextraEnvs: []# extraEnvs:# - name: FOO# valueFrom:# secretKeyRef:# key: FOO# name: secret-resource# -- Use a DaemonSet or Deploymentkind: DaemonSet #此处修改为DaemonSet# -- Annotations to be added to the controller Deployment or DaemonSet##annotations: {}# keel.sh/pollSchedule: every 60m# -- Labels to be added to the controller Deployment or DaemonSet and other resources that do not have option to specify labels##labels: {}# keel.sh/policy: patch# keel.sh/trigger: poll# -- The update strategy to apply to the Deployment or DaemonSet##updateStrategy: {}# rollingUpdate:# maxUnavailable: 1# type: RollingUpdate# -- minReadySeconds to avoid killing pods before we are ready##minReadySeconds: 0# -- Node tolerations for server scheduling to nodes with taints## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/##tolerations: []# - key: key# operator: Equal|Exists# value: value# effect: NoSchedule|PreferNoSchedule|NoExecute(1.6 only)# -- Affinity and anti-affinity rules for server scheduling to nodes## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity##affinity: {}# # An example of preferred pod anti-affinity, weight is in the range 1-100# podAntiAffinity:# preferredDuringSchedulingIgnoredDuringExecution:# - weight: 100# podAffinityTerm:# labelSelector:# matchExpressions:# - key: app.kubernetes.io/name# operator: In# values:# - ingress-nginx# - key: app.kubernetes.io/instance# operator: In# values:# - ingress-nginx# - key: app.kubernetes.io/component# operator: In# values:# - controller# topologyKey: kubernetes.io/hostname# # An example of required pod anti-affinity# podAntiAffinity:# requiredDuringSchedulingIgnoredDuringExecution:# - labelSelector:# matchExpressions:# - key: app.kubernetes.io/name# operator: In# values:# - ingress-nginx# - key: 
app.kubernetes.io/instance# operator: In# values:# - ingress-nginx# - key: app.kubernetes.io/component# operator: In# values:# - controller# topologyKey: kubernetes.io/hostname# -- Topology spread constraints rely on node labels to identify the topology domain(s) that each Node is in.## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/##topologySpreadConstraints: []# - maxSkew: 1# topologyKey: topology.kubernetes.io/zone# whenUnsatisfiable: DoNotSchedule# labelSelector:# matchLabels:# app.kubernetes.io/instance: ingress-nginx-internal# -- terminationGracePeriodSeconds to avoid killing pods before we are ready## wait up to five minutes for the drain of connections##terminationGracePeriodSeconds: 300# -- Node labels for controller pod assignment## Ref: https://kubernetes.io/docs/user-guide/node-selection/##nodeSelector:kubernetes.io/os: linuxingress: true #此处修改节点选择器## Liveness and readiness probe values## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes#### startupProbe:## httpGet:## # should match container.healthCheckPath## path: /healthz## port: 10254## scheme: HTTP## initialDelaySeconds: 5## periodSeconds: 5## timeoutSeconds: 2## successThreshold: 1## failureThreshold: 5livenessProbe:httpGet:# should match container.healthCheckPathpath: /healthzport: 10254scheme: HTTPinitialDelaySeconds: 10periodSeconds: 10timeoutSeconds: 1successThreshold: 1failureThreshold: 5readinessProbe:httpGet:# should match container.healthCheckPathpath: /healthzport: 10254scheme: HTTPinitialDelaySeconds: 10periodSeconds: 10timeoutSeconds: 1successThreshold: 1failureThreshold: 3# -- Path of the health check endpoint. 
All requests received on the port defined by# the healthz-port parameter are forwarded internally to this path.healthCheckPath: /healthz# -- Address to bind the health check endpoint.# It is better to set this option to the internal node address# if the ingress nginx controller is running in the hostNetwork: true mode.healthCheckHost: # -- Annotations to be added to controller pods##podAnnotations: {}replicaCount: 1# -- Define either minAvailable or maxUnavailable, never both.minAvailable: 1# -- Define either minAvailable or maxUnavailable, never both.# maxUnavailable: 1## Define requests resources to avoid probe issues due to CPU utilization in busy nodes## ref: https://github.com/kubernetes/ingress-nginx/issues/4735#issuecomment-551204903## Ideally, there should be no limits.## https://engineering.indeedblog.com/blog/2019/12/cpu-throttling-regression-fix/resources:## limits:## cpu: 100m## memory: 90Mirequests:cpu: 100mmemory: 90Mi# Mutually exclusive with keda autoscalingautoscaling:apiVersion: autoscaling/v2enabled: falseannotations: {}minReplicas: 1maxReplicas: 11targetCPUUtilizationPercentage: 50targetMemoryUtilizationPercentage: 50behavior: {}# scaleDown:# stabilizationWindowSeconds: 300# policies:# - type: Pods# value: 1# periodSeconds: 180# scaleUp:# stabilizationWindowSeconds: 300# policies:# - type: Pods# value: 2# periodSeconds: 60autoscalingTemplate: []# Custom or additional autoscaling metrics# ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics# - type: Pods# pods:# metric:# name: nginx_ingress_controller_nginx_process_requests_total# target:# type: AverageValue# averageValue: 10000m# Mutually exclusive with hpa autoscalingkeda:apiVersion: keda.sh/v1alpha1## apiVersion changes with keda 1.x vs 2.x## 2.x keda.sh/v1alpha1## 1.x keda.k8s.io/v1alpha1enabled: falseminReplicas: 1maxReplicas: 11pollingInterval: 30cooldownPeriod: 300restoreToOriginalReplicaCount: falsescaledObject:annotations: {}# Custom 
annotations for ScaledObject resource# annotations:# key: valuetriggers: []# - type: prometheus# metadata:# serverAddress: http://prometheus-host:9090# metricName: http_requests_total# threshold: 100# query: sum(rate(http_requests_total{deploymentmy-deployment}[2m]))behavior: {}# scaleDown:# stabilizationWindowSeconds: 300# policies:# - type: Pods# value: 1# periodSeconds: 180# scaleUp:# stabilizationWindowSeconds: 300# policies:# - type: Pods# value: 2# periodSeconds: 60# -- Enable mimalloc as a drop-in replacement for malloc.## ref: https://github.com/microsoft/mimalloc##enableMimalloc: true## Override NGINX templatecustomTemplate:configMapName: configMapKey: service:enabled: true# -- If enabled is adding an appProtocol option for Kubernetes service. An appProtocol field replacing annotations that were# using for setting a backend protocol. Here is an example for AWS: service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http# It allows choosing the protocol for each backend specified in the Kubernetes service.# See the following GitHub issue for more details about the purpose: https://github.com/kubernetes/kubernetes/issues/40244# Will be ignored for Kubernetes versions older than 1.20##appProtocol: trueannotations: {}labels: {}# clusterIP: # -- List of IP addresses at which the controller services are available## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips##externalIPs: []# -- Used by cloud providers to connect the resulting LoadBalancer to a pre-existing static IP according to https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancerloadBalancerIP: loadBalancerSourceRanges: []enableHttp: trueenableHttps: true## Set external traffic policy to: Local to preserve source IP on providers supporting it.## Ref: https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typeloadbalancer# externalTrafficPolicy: ## Must be either None or ClientIP if set. 
Kubernetes will default to None.## Ref: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies# sessionAffinity: ## Specifies the health check node port (numeric port number) for the service. If healthCheckNodePort isn’t specified,## the service controller allocates a port from your cluster’s NodePort range.## Ref: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip# healthCheckNodePort: 0# -- Represents the dual-stack-ness requested or required by this Service. Possible values are# SingleStack, PreferDualStack or RequireDualStack.# The ipFamilies and clusterIPs fields depend on the value of this field.## Ref: https://kubernetes.io/docs/concepts/services-networking/dual-stack/ipFamilyPolicy: SingleStack# -- List of IP families (e.g. IPv4, IPv6) assigned to the service. This field is usually assigned automatically# based on cluster configuration and the ipFamilyPolicy field.## Ref: https://kubernetes.io/docs/concepts/services-networking/dual-stack/ipFamilies:- IPv4ports:http: 80https: 443targetPorts:http: httphttps: https#type: LoadBalancer type: ClusterIP #此处修改## type: NodePort## nodePorts:## http: 32080## https: 32443## tcp:## 8080: 32808nodePorts:http: https: tcp: {}udp: {}external:enabled: trueinternal:# -- Enables an additional internal load balancer (besides the external one).enabled: false# -- Annotations are mandatory for the load balancer to come up. Varies with the cloud service.annotations: {}# loadBalancerIP: # -- Restrict access For LoadBalancer service. 
Defaults to 0.0.0.0/0.loadBalancerSourceRanges: []## Set external traffic policy to: Local to preserve source IP on## providers supporting it## Ref: https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typeloadbalancer# externalTrafficPolicy: # shareProcessNamespace enables process namespace sharing within the pod.# This can be used for example to signal log rotation using kill -USR1 from a sidecar.shareProcessNamespace: false# -- Additional containers to be added to the controller pod.# See https://github.com/lemonldap-ng-controller/lemonldap-ng-controller as example.extraContainers: []# - name: my-sidecar# image: nginx:latest# - name: lemonldap-ng-controller# image: lemonldapng/lemonldap-ng-controller:0.2.0# args:# - /lemonldap-ng-controller# - --alsologtostderr# - --configmap$(POD_NAMESPACE)/lemonldap-ng-configuration# env:# - name: POD_NAME# valueFrom:# fieldRef:# fieldPath: metadata.name# - name: POD_NAMESPACE# valueFrom:# fieldRef:# fieldPath: metadata.namespace# volumeMounts:# - name: copy-portal-skins# mountPath: /srv/var/lib/lemonldap-ng/portal/skins# -- Additional volumeMounts to the controller main container.extraVolumeMounts: []# - name: copy-portal-skins# mountPath: /var/lib/lemonldap-ng/portal/skins# -- Additional volumes to the controller pod.extraVolumes: []# - name: copy-portal-skins# emptyDir: {}# -- Containers, which are run before the app containers are started.extraInitContainers: []# - name: init-myservice# image: busybox# command: [sh, -c, until nslookup myservice; do echo waiting for myservice; sleep 2; done;]# -- Modules, which are mounted into the core nginx image. 
See values.yaml for a sample to add opentelemetry moduleextraModules: []# containerSecurityContext:# allowPrivilegeEscalation: false## The image must contain a /usr/local/bin/init_module.sh executable, which# will be executed as initContainers, to move its config files within the# mounted volume.opentelemetry:enabled: falseimage: registry.k8s.io/ingress-nginx/opentelemetry:v20230107-helm-chart-4.4.2-2-g96b3d2165sha256:331b9bebd6acfcd2d3048abbdd86555f5be76b7e3d0b5af4300b04235c6056c9containerSecurityContext:allowPrivilegeEscalation: falseadmissionWebhooks:annotations: {}# ignore-check.kube-linter.io/no-read-only-rootfs: This deployment needs write access to root filesystem.## Additional annotations to the admission webhooks.## These annotations will be added to the ValidatingWebhookConfiguration and## the Jobs Spec of the admission webhooks.enabled: false #此处修改不使用ssl# -- Additional environment variables to setextraEnvs: []# extraEnvs:# - name: FOO# valueFrom:# secretKeyRef:# key: FOO# name: secret-resource# -- Admission Webhook failure policy to usefailurePolicy: Fail# timeoutSeconds: 10port: 8443certificate: /usr/local/certificates/certkey: /usr/local/certificates/keynamespaceSelector: {}objectSelector: {}# -- Labels to be added to admission webhookslabels: {}# -- Use an existing PSP instead of creating oneexistingPsp: networkPolicyEnabled: falseservice:annotations: {}# clusterIP: externalIPs: []# loadBalancerIP: loadBalancerSourceRanges: []servicePort: 443type: ClusterIPcreateSecretJob:securityContext:allowPrivilegeEscalation: falseresources: {}# limits:# cpu: 10m# memory: 20Mi# requests:# cpu: 10m# memory: 20MipatchWebhookJob:securityContext:allowPrivilegeEscalation: falseresources: {}patch:enabled: trueimage:registry: registry.cn-hangzhou.aliyuncs.com #此处修改 修改镜像地址image: google_containers/kube-webhook-certgen #此处修改 修改镜像## for backwards compatibility consider setting the full image url via the repository value below## use *either* current default registry/image or 
        ## use *either* current default registry/image or repository format or installing chart by providing the values.yaml will fail
        ## repository:
        # tag: v20220916-gd32f8c343
        tag: v1.3.0
        # digest: sha256:39c5b2e3310dc4264d638ad28d9d1d96c4cbb2b2dcfb52368fe4e3c63f61e10f
        pullPolicy: IfNotPresent
      # -- Provide a priority class name to the webhook patching job
      ##
      priorityClassName: ""
      podAnnotations: {}
      nodeSelector:
        kubernetes.io/os: linux
      tolerations: []
      # -- Labels to be added to patch job resources
      labels: {}
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
        fsGroup: 2000
    # Use certmanager to generate webhook certs
    certManager:
      enabled: false
      # self-signed root certificate
      rootCert:
        # default to be 5y
        duration: ""
      admissionCert:
        # default to be 1y
        duration: ""
      # issuerRef:
      #   name: "issuer"
      #   kind: "ClusterIssuer"
  metrics:
    port: 10254
    portName: metrics
    # if this port is changed, change healthz-port: in extraArgs: accordingly
    enabled: false
    service:
      annotations: {}
      # prometheus.io/scrape: "true"
      # prometheus.io/port: "10254"
      # -- Labels to be added to the metrics service resource
      labels: {}
      # clusterIP: ""
      # -- List of IP addresses at which the stats-exporter service is available
      ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
      ##
      externalIPs: []
      # loadBalancerIP: ""
      loadBalancerSourceRanges: []
      servicePort: 10254
      type: ClusterIP
      # externalTrafficPolicy: ""
      # nodePort: ""
    serviceMonitor:
      enabled: false
      additionalLabels: {}
      ## The label to use to retrieve the job name from.
      ## jobLabel: "app.kubernetes.io/name"
      namespace: ""
      namespaceSelector: {}
      ## Default: scrape .Release.Namespace only
      ## To scrape all, use the following:
      ## namespaceSelector:
      ##   any: true
      scrapeInterval: 30s
      # honorLabels: true
      targetLabels: []
      relabelings: []
      metricRelabelings: []
    prometheusRule:
      enabled: false
      additionalLabels: {}
      # namespace: ""
      rules: []
      # # These are just examples rules, please adapt them to your needs
      # - alert: NGINXConfigFailed
      #   expr: count(nginx_ingress_controller_config_last_reload_successful == 0) > 0
      #   for: 1s
      #   labels:
      #     severity: critical
      #   annotations:
      #     description: bad ingress config - nginx config test failed
      #     summary: uninstall the latest ingress changes to allow config reloads to resume
      # - alert: NGINXCertificateExpiry
      #   expr: (avg(nginx_ingress_controller_ssl_expire_time_seconds) by (host) - time()) < 604800
      #   for: 1s
      #   labels:
      #     severity: critical
      #   annotations:
      #     description: ssl certificate(s) will expire in less than a week
      #     summary: renew expiring certificates to avoid downtime
      # - alert: NGINXTooMany500s
      #   expr: 100 * ( sum( nginx_ingress_controller_requests{status=~"5.+"} ) / sum(nginx_ingress_controller_requests) ) > 5
      #   for: 1m
      #   labels:
      #     severity: warning
      #   annotations:
      #     description: Too many 5XXs
      #     summary: More than 5% of all requests returned 5XX, this requires your attention
      # - alert: NGINXTooMany400s
      #   expr: 100 * ( sum( nginx_ingress_controller_requests{status=~"4.+"} ) / sum(nginx_ingress_controller_requests) ) > 5
      #   for: 1m
      #   labels:
      #     severity: warning
      #   annotations:
      #     description: Too many 4XXs
      #     summary: More than 5% of all requests returned 4XX, this requires your attention
  # -- Improve connection draining when ingress controller pod is deleted using a lifecycle hook:
  # With this new hook, we increased the default terminationGracePeriodSeconds from 30 seconds
  # to 300, allowing the draining of connections up to five minutes.
  # If the active connections end before that, the pod will terminate gracefully at that time.
  # To effectively take advantage of this feature, the Configmap feature
  # worker-shutdown-timeout new value is 240s instead of 10s.
  ##
  lifecycle:
    preStop:
      exec:
        command:
          - /wait-shutdown
  priorityClassName: ""
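  # A hedged note (not part of the upstream defaults): the worker-shutdown-timeout
  # value mentioned above is an nginx ConfigMap key, which this chart exposes
  # under controller.config, e.g.
  #   controller:
  #     config:
  #       worker-shutdown-timeout: "240s"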
# -- Rollback limit
##
revisionHistoryLimit: 10
## Default 404 backend
##
defaultBackend:
  ##
  enabled: false
  name: defaultbackend
  image:
    registry: registry.k8s.io
    image: defaultbackend-amd64
    ## for backwards compatibility consider setting the full image url via the repository value below
    ## use *either* current default registry/image or repository format or installing chart by providing the values.yaml will fail
    ## repository:
    tag: "1.5"
    pullPolicy: IfNotPresent
    # nobody user -> uid 65534
    runAsUser: 65534
    runAsNonRoot: true
    readOnlyRootFilesystem: true
    allowPrivilegeEscalation: false
  # -- Use an existing PSP instead of creating one
  existingPsp: ""
  extraArgs: {}
  serviceAccount:
    create: true
    name: ""
    automountServiceAccountToken: true
  # -- Additional environment variables to set for defaultBackend pods
  extraEnvs: []
  port: 8080
  ## Readiness and liveness probes for default backend
  ## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
  ##
  livenessProbe:
    failureThreshold: 3
    initialDelaySeconds: 30
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 5
  readinessProbe:
    failureThreshold: 6
    initialDelaySeconds: 0
    periodSeconds: 5
    successThreshold: 1
    timeoutSeconds: 5
  # -- The update strategy to apply to the Deployment or DaemonSet
  ##
  updateStrategy: {}
  #  rollingUpdate:
  #    maxUnavailable: 1
  #  type: RollingUpdate
  # -- minReadySeconds to avoid killing pods before we are ready
  ##
  minReadySeconds: 0
  # -- Node tolerations for server scheduling to nodes with taints
  ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
  ##
  tolerations: []
  #  - key: "key"
  #    operator: "Equal|Exists"
  #    value: "value"
  #    effect: "NoSchedule|PreferNoSchedule|NoExecute(1.6 only)"
  affinity: {}
  # -- Security Context policies for controller pods
  # See https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/ for
  # notes on enabling and using sysctls
  ##
  podSecurityContext: {}
  # -- Security Context policies for controller main container.
  # See https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/ for
  # notes on enabling and using sysctls
  ##
  containerSecurityContext: {}
  # -- Labels to add to the pod container metadata
  podLabels: {}
  #  key: value
  # -- Node labels for default backend pod assignment
  ## Ref: https://kubernetes.io/docs/user-guide/node-selection/
  ##
  nodeSelector:
    kubernetes.io/os: linux
  # -- Annotations to be added to default backend pods
  ##
  podAnnotations: {}
  replicaCount: 1
  minAvailable: 1
  resources: {}
  # limits:
  #   cpu: 10m
  #   memory: 20Mi
  # requests:
  #   cpu: 10m
  #   memory: 20Mi
  extraVolumeMounts: []
  ## Additional volumeMounts to the default backend container.
  #  - name: copy-portal-skins
  #    mountPath: /var/lib/lemonldap-ng/portal/skins
  extraVolumes: []
  ## Additional volumes to the default backend pod.
  #  - name: copy-portal-skins
  #    emptyDir: {}
  autoscaling:
    annotations: {}
    enabled: false
    minReplicas: 1
    maxReplicas: 2
    targetCPUUtilizationPercentage: 50
    targetMemoryUtilizationPercentage: 50
  service:
    annotations: {}
    # clusterIP: ""
    # -- List of IP addresses at which the default backend service is available
    ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
    ##
    externalIPs: []
    # loadBalancerIP: ""
    loadBalancerSourceRanges: []
    servicePort: 80
    type: ClusterIP
  priorityClassName: ""
  # -- Labels to be added to the default backend resources
  labels: {}
## Enable RBAC as per https://github.com/kubernetes/ingress-nginx/blob/main/docs/deploy/rbac.md and https://github.com/kubernetes/ingress-nginx/issues/266
rbac:
  create: true
  scope: false
## If true, create & use Pod Security Policy resources
## https://kubernetes.io/docs/concepts/policy/pod-security-policy/
podSecurityPolicy:
  enabled: false
serviceAccount:
  create: true
  name: ""
  automountServiceAccountToken: true
  # -- Annotations for the controller service account
  annotations: {}
# -- Optional array of imagePullSecrets containing private registry credentials
## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
imagePullSecrets: []
#  - name: secretName
# -- TCP service key-value pairs
## Ref: https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/exposing-tcp-udp-services.md
##
tcp: {}
#  8080: "default/example-tcp-svc:9000"
# -- UDP service key-value pairs
## Ref: https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/exposing-tcp-udp-services.md
##
udp: {}
#  53: "kube-system/kube-dns:53"
# -- Prefix for TCP and UDP ports names in ingress controller service
## Some cloud providers, like Yandex Cloud, may have requirements for a port name regex to support cloud load balancer integration
portNamePrefix: ""
# -- (string) A base64-encoded Diffie-Hellman parameter.
# This can be generated with: openssl dhparam 4096 2> /dev/null | base64
## Ref: https://github.com/kubernetes/ingress-nginx/tree/main/docs/examples/customization/ssl-dh-param
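# A hedged usage sketch (the release name "ingress-nginx" and the usual chart
# repo alias are assumptions): the generated parameter can also be supplied at
# install time instead of being pasted into this file:
#   helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
#     --set dhParam="$(openssl dhparam 4096 2> /dev/null | base64 -w0)"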
dhParam: ""

3、Helm and Kubernetes version compatibility
Helm version    Supported Kubernetes versions
3.12.x          1.27.x - 1.24.x
3.11.x          1.26.x - 1.23.x
3.10.x          1.25.x - 1.22.x
3.9.x           1.24.x - 1.21.x
3.8.x           1.23.x - 1.20.x
3.7.x           1.22.x - 1.19.x
3.6.x           1.21.x - 1.18.x
3.5.x           1.20.x - 1.17.x
3.4.x           1.19.x - 1.16.x
3.3.x           1.18.x - 1.15.x
3.2.x           1.18.x - 1.15.x
3.1.x           1.17.x - 1.14.x
3.0.x           1.16.x - 1.13.x
2.16.x          1.16.x - 1.15.x
2.15.x          1.15.x - 1.14.x
2.14.x          1.14.x - 1.13.x
2.13.x          1.13.x - 1.12.x
2.12.x          1.12.x - 1.11.x
2.11.x          1.11.x - 1.10.x
2.10.x          1.10.x - 1.9.x
2.9.x           1.10.x - 1.9.x
2.8.x           1.9.x - 1.8.x
2.7.x           1.8.x - 1.7.x
2.6.x           1.7.x - 1.6.x
2.5.x           1.6.x - 1.5.x
2.4.x           1.6.x - 1.5.x
2.3.x           1.5.x - 1.4.x
2.2.x           1.5.x - 1.4.x
2.1.x           1.5.x - 1.4.x
2.0.x           1.4.x - 1.3.x

4、Ingress-nginx and k8s version compatibility
Ingress-NGINX version    k8s supported versions           Alpine version    Nginx version    Helm chart version
v1.11.2                  1.30, 1.29, 1.28, 1.27, 1.26     3.20.0            1.25.5           4.11.2
v1.11.1                  1.30, 1.29, 1.28, 1.27, 1.26     3.20.0            1.25.5           4.11.1
v1.11.0                  1.30, 1.29, 1.28, 1.27, 1.26     3.20.0            1.25.5           4.11.0
v1.10.4                  1.30, 1.29, 1.28, 1.27, 1.26     3.20.0            1.25.5           4.10.4
v1.10.3                  1.30, 1.29, 1.28, 1.27, 1.26     3.20.0            1.25.5           4.10.3
v1.10.2                  1.30, 1.29, 1.28, 1.27, 1.26     3.20.0            1.25.5           4.10.2
v1.10.1                  1.30, 1.29, 1.28, 1.27, 1.26     3.19.1            1.25.3           4.10.1
v1.10.0                  1.29, 1.28, 1.27, 1.26           3.19.1            1.25.3           4.10.0
v1.9.6                   1.29, 1.28, 1.27, 1.26, 1.25     3.19.0            1.21.6           4.9.1
v1.9.5                   1.28, 1.27, 1.26, 1.25           3.18.4            1.21.6           4.9.0
v1.9.4                   1.28, 1.27, 1.26, 1.25           3.18.4            1.21.6           4.8.3
v1.9.3                   1.28, 1.27, 1.26, 1.25           3.18.4            1.21.6           4.8.*
v1.9.1                   1.28, 1.27, 1.26, 1.25           3.18.4            1.21.6           4.8.*
v1.9.0                   1.28, 1.27, 1.26, 1.25           3.18.2            1.21.6           4.8.*
v1.8.4                   1.27, 1.26, 1.25, 1.24           3.18.2            1.21.6           4.7.*
v1.7.1                   1.27, 1.26, 1.25, 1.24           3.17.2            1.21.6           4.6.*
v1.6.4                   1.26, 1.25, 1.24, 1.23           3.17.0            1.21.6           4.5.*
v1.5.1                   1.25, 1.24, 1.23                 3.16.2            1.21.6           4.4.*
v1.4.0                   1.25, 1.24, 1.23, 1.22           3.16.2            1.19.10†         4.3.0
v1.3.1                   1.24, 1.23, 1.22, 1.21, 1.20     3.16.2            1.19.10†         4.2.5
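Read row-wise, the table above can be condensed into a small shell helper that suggests a chart --version for a given Kubernetes minor release. This is only a sketch: pick_chart_version is a made-up name, the pairs are transcribed from the newest compatible row of the table, and entries ending in ".*" mean "any patch release of that chart series".

```shell
# Suggest the newest ingress-nginx Helm chart version compatible with a
# given Kubernetes minor release, per the compatibility table above.
pick_chart_version() {
  case "$1" in
    1.30|1.29|1.28|1.27|1.26) echo "4.11.2" ;;  # controller v1.11.2
    1.25) echo "4.9.1" ;;                       # controller v1.9.6
    1.24) echo "4.7.*" ;;                       # controller v1.8.4
    1.23) echo "4.5.*" ;;                       # controller v1.6.4
    1.22) echo "4.3.0" ;;                       # controller v1.4.0
    1.21|1.20) echo "4.2.5" ;;                  # controller v1.3.1
    *) echo "unknown" ;;
  esac
}

pick_chart_version 1.26   # prints 4.11.2
```

A matching chart could then be installed with, e.g., helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx --version 4.11.2 -f values.yml (the release name and values file name are assumptions).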