
OpenSearch Installation: Docker, Bare Metal, Helm, and Kubernetes Manifests

Operating system compatibility

We recommend installing OpenSearch on Red Hat Enterprise Linux (RHEL) or a Debian-based Linux distribution that uses systemd, such as CentOS, Amazon Linux 2, or Ubuntu Long-Term Support (LTS). OpenSearch should work on most Linux distributions, but only a few are actually tested. For any version of OpenSearch we recommend RHEL 7 or 8, CentOS 7 or 8, Amazon Linux 2, or Ubuntu 16.04, 18.04, or 20.04.

Java compatibility

The OpenSearch distribution for Linux ships with a compatible Adoptium JDK in the jdk directory. To check the bundled JDK version, run ./jdk/bin/java -version. For example, the OpenSearch 1.0.0 tarball ships with Java 15.0.1+9 (non-LTS), OpenSearch 1.3.0 ships with Java 11.0.14.1+1 (LTS), and OpenSearch 2.0.0 ships with Java 17.0.2+8 (LTS). OpenSearch is tested with all compatible Java versions.

| OpenSearch version | Compatible Java versions | Bundled Java version |
| ------------------ | ------------------------ | -------------------- |
| 1.0 – 1.2.x        | 11, 15                   | 15.0.1+9             |
| 1.3.x              | 8, 11, 14                | 11.0.14.1+1          |
| 2.0.0              | 11, 17                   | 17.0.2+8             |

Docker installation

Create docker-compose.yml:

```yaml
version: '3'
services:
  opensearch-node1:
    image: opensearchproject/opensearch:2.2.0
    container_name: opensearch-node1
    environment:
      - cluster.name=opensearch-cluster
      - node.name=opensearch-node1
      - discovery.seed_hosts=opensearch-node1,opensearch-node2
      - cluster.initial_master_nodes=opensearch-node1,opensearch-node2
      - bootstrap.memory_lock=true # along with the memlock settings below, disables swapping
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536 # maximum number of open files for the OpenSearch user, set to at least 65536 on modern systems
        hard: 65536
    volumes:
      - opensearch-data1:/usr/share/opensearch/data
    ports:
      - 9200:9200
      - 9600:9600 # required for Performance Analyzer
    networks:
      - opensearch-net
  opensearch-node2:
    image: opensearchproject/opensearch:2.2.0
    container_name: opensearch-node2
    environment:
      - cluster.name=opensearch-cluster
      - node.name=opensearch-node2
      - discovery.seed_hosts=opensearch-node1,opensearch-node2
      - cluster.initial_master_nodes=opensearch-node1,opensearch-node2
      - bootstrap.memory_lock=true
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - opensearch-data2:/usr/share/opensearch/data
    networks:
      - opensearch-net
  opensearch-dashboards:
    image: opensearchproject/opensearch-dashboards:2.2.0
    container_name: opensearch-dashboards
    ports:
      - 5601:5601
    expose:
      - "5601"
    environment:
      OPENSEARCH_HOSTS: '["https://opensearch-node1:9200","https://opensearch-node2:9200"]' # must be a string with no spaces when specified as an environment variable
    networks:
      - opensearch-net

volumes:
  opensearch-data1:
  opensearch-data2:

networks:
  opensearch-net:
```

Start the cluster:

```bash
docker-compose up
```

Watch the startup log; if the two nodes report that they joined the cluster and there are no obvious errors, the startup succeeded.

View the cluster nodes through OpenSearch Dashboards: open port 5601 of the server in a browser and log in with the default credentials admin / admin to reach the Dashboards UI.
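You can also confirm from the command line that both nodes joined the cluster. A minimal check, assuming the demo security configuration that ships with the image (self-signed certificates and the default admin/admin credentials, hence curl -k):

```bash
# Both commands assume the demo security plugin defaults of the official image.
curl -k -u admin:admin "https://localhost:9200/_cat/nodes?v"           # should list opensearch-node1 and opensearch-node2
curl -k -u admin:admin "https://localhost:9200/_cluster/health?pretty" # expect "status" : "green"
```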
Stop the cluster:

```bash
docker-compose down
```

Stop the cluster and delete all data volumes:

```bash
docker-compose down -v
```

Bare-metal installation

There are many ways to lay out a cluster. A basic reference architecture is a four-node cluster with one dedicated cluster manager node, one coordinating node, and two data nodes (architecture diagram omitted). For more information about node roles, see the configuration reference.

Server plan

| IP address    | Hardware        | Node roles                    |
| ------------- | --------------- | ----------------------------- |
| 172.21.84.119 | 2C 4G 100G SATA | cluster_manager, data, ingest |
| 172.21.84.120 | 2C 4G 100G SATA | cluster_manager, data, ingest |
| 172.21.84.121 | 2C 4G 100G SATA | cluster_manager, data, ingest |

Make sure the following ports are reachable between the cluster nodes; they are the ports used by the OpenSearch components.

| Port | OpenSearch component                                                              |
| ---- | --------------------------------------------------------------------------------- |
| 443  | OpenSearch Dashboards in AWS OpenSearch Service with encryption in transit (TLS)   |
| 5601 | OpenSearch Dashboards                                                               |
| 9200 | OpenSearch REST API                                                                 |
| 9250 | Cross-cluster search                                                                |
| 9300 | Node communication and transport                                                    |
| 9600 | Performance Analyzer                                                                |

Installation steps

1. Install the single-node version of OpenSearch on every node and verify that it starts (see the single-node installation guide for the detailed steps).
2. Delete the data and logs directories created by the single-node run.
3. Edit the configuration files as follows.

Configuration file on 172.21.84.119:

```yaml
cluster.name: bigdata
node.name: master01
node.roles: [ cluster_manager, data, ingest ]
path.data: /data/opensearch/opensearch-2.2.0/data
path.logs: /data/opensearch/opensearch-2.2.0/logs
network.host: 172.21.84.119
http.port: 9200
discovery.seed_hosts: ["master01", "node01", "node02"]
cluster.initial_cluster_manager_nodes: ["master01", "node01", "node02"]
plugins.security.ssl.transport.pemcert_filepath: esnode.pem
plugins.security.ssl.transport.pemkey_filepath: esnode-key.pem
plugins.security.ssl.transport.pemtrustedcas_filepath: root-ca.pem
plugins.security.ssl.transport.enforce_hostname_verification: false
plugins.security.ssl.http.enabled: true
plugins.security.ssl.http.pemcert_filepath: esnode.pem
plugins.security.ssl.http.pemkey_filepath: esnode-key.pem
plugins.security.ssl.http.pemtrustedcas_filepath: root-ca.pem
plugins.security.allow_unsafe_democertificates: true
plugins.security.allow_default_init_securityindex: true
plugins.security.authcz.admin_dn: ['CN=kirk,OU=client,O=client,L=test,C=de']
plugins.security.audit.type: internal_opensearch
plugins.security.enable_snapshot_restore_privilege: true
plugins.security.check_snapshot_restore_write_privileges: true
plugins.security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
plugins.security.system_indices.enabled: true
plugins.security.system_indices.indices: [".plugins-ml-model", ".plugins-ml-task", ".opendistro-alerting-config", ".opendistro-alerting-alert*", ".opendistro-anomaly-results*", ".opendistro-anomaly-detector*", ".opendistro-anomaly-checkpoints", ".opendistro-anomaly-detection-state", ".opendistro-reports-*", ".opensearch-notifications-*", ".opensearch-notebooks", ".opensearch-observability", ".opendistro-asynchronous-search-response*", ".replication-metadata-store"]
node.max_local_storage_nodes: 3
```

Configuration file on 172.21.84.120 (settings identical to the file above are not repeated; note that node.name must be unique per node and must match the entries in discovery.seed_hosts):

```yaml
cluster.name: bigdata
node.name: node01
node.roles: [ cluster_manager, data, ingest ]
path.data: /data/opensearch/opensearch-2.2.0/data
path.logs: /data/opensearch/opensearch-2.2.0/logs
network.host: 172.21.84.120
```

Configuration file on 172.21.84.121:

```yaml
cluster.name: bigdata
node.name: node02
node.roles: [ cluster_manager, data, ingest ]
path.data: /data/opensearch/opensearch-2.2.0/data
path.logs: /data/opensearch/opensearch-2.2.0/logs
network.host: 172.21.84.121
```

Start OpenSearch on the three servers one after another:

```bash
su - opensearch -c "/data/opensearch/opensearch-2.2.0/bin/opensearch"
```

When the logs show the nodes forming a cluster without errors, the cluster was created successfully. Check the node list and the cluster health as shown below; once three nodes appear and the cluster status is green, the bare-metal deployment is complete.
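A minimal sketch of those checks, querying any of the three nodes and again assuming the demo certificates and default admin/admin credentials:

```bash
curl -k -u admin:admin "https://172.21.84.119:9200/_cat/nodes?v"           # should list master01, node01 and node02
curl -k -u admin:admin "https://172.21.84.119:9200/_cluster/health?pretty" # expect "number_of_nodes" : 3 and "status" : "green"
```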
Helm installation

Official installation

Make sure Helm is installed for the Kubernetes cluster and that the cluster can reach the internet. By default the Helm chart deploys a three-node cluster, and we recommend at least 8 GiB of available memory for this deployment; with less than 4 GiB available the deployment is likely to fail.

Version requirements:

- Kubernetes 1.14 or later
- Helm 2.17.0 or later

Deploy NFS-Subdir-External-Provisioner in the Kubernetes cluster to provide dynamically provisioned NFS volumes so that PVs and PVCs are bound automatically. If the binding fails because the StorageClass is not the default, run:

```bash
kubectl patch storageclass nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```

Installation steps

Add the opensearch helm-charts repository to Helm:

```bash
helm repo add opensearch https://opensearch-project.github.io/helm-charts/
```

Update the local cache of available charts:

```bash
helm repo update
```

Search for Helm charts related to OpenSearch:

```bash
helm search repo opensearch
```

Deploy OpenSearch:

```bash
helm install my-deployment opensearch/opensearch
```

Check the deployed pods and confirm that all nodes are running; a quick check is shown below.
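A minimal smoke test of the chart deployment. The service name opensearch-cluster-master assumes the chart's default clusterName/nodeGroup values (confirm with kubectl get svc), and the credentials assume the demo security configuration:

```bash
kubectl get pods -l app.kubernetes.io/name=opensearch    # wait until all pods are 1/1 Running
kubectl get svc                                          # confirm the generated service names
kubectl port-forward svc/opensearch-cluster-master 9200:9200 &
curl -k -u admin:admin "https://localhost:9200/_cat/nodes?v"
```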
To uninstall OpenSearch, run:

```bash
helm delete my-deployment
```

Custom installation

The data nodes could also be split into their own node group; that is not shown here. The master node group is defined in openserach-master.yaml:

```yaml
---
clusterName: "opensearch-cluster"
nodeGroup: "master"

# If discovery.type in the opensearch configuration is set to "single-node",
# this should be set to "true"
# If "true", replicas will be forced to 1
singleNode: false

# The service that non master groups will try to connect to when joining the cluster
# This should be set to clusterName + "-" + nodeGroup for your master group
masterService: "opensearch-cluster-master"

# OpenSearch roles that will be applied to this nodeGroup
# These will be set as environment variable "node.roles". E.g. node.roles=master,ingest,data,remote_cluster_client
roles:
  - master
  - ingest
  - data

replicas: 3

# if not set, falls back to parsing .Values.imageTag, then .Chart.appVersion.
majorVersion: ""

global:
  # Set if you want to change the default docker registry, e.g. a private one.
  dockerRegistry: ""

# Allows you to add any config files in {{ .Values.opensearchHome }}/config
opensearchHome: /usr/share/opensearch
# such as opensearch.yml and log4j2.properties
config:
  # Values must be YAML literal style scalar / YAML multiline string.
  # <filename>: |
  #   <formatted-value(s)>
  # log4j2.properties: |
  #   status = error
  #
  #   appender.console.type = Console
  #   appender.console.name = console
  #   appender.console.layout.type = PatternLayout
  #   appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %m%n
  #
  #   rootLogger.level = info
  #   rootLogger.appenderRef.console.ref = console
  opensearch.yml: |
    cluster.name: opensearch-cluster

    # Bind to all interfaces because we don't know what IP address Docker will assign to us.
    network.host: 0.0.0.0

    # Setting network.host to a non-loopback address enables the annoying bootstrap checks. "Single-node" mode disables them again.
    # Implicitly done if ".singleNode" is set to "true".
    # discovery.type: single-node

    # Start OpenSearch Security Demo Configuration
    # WARNING: revise all the lines below before you go into production
    plugins:
      security:
        ssl:
          transport:
            pemcert_filepath: esnode.pem
            pemkey_filepath: esnode-key.pem
            pemtrustedcas_filepath: root-ca.pem
            enforce_hostname_verification: false
          http:
            enabled: true
            pemcert_filepath: esnode.pem
            pemkey_filepath: esnode-key.pem
            pemtrustedcas_filepath: root-ca.pem
        allow_unsafe_democertificates: true
        allow_default_init_securityindex: true
        authcz:
          admin_dn:
            - CN=kirk,OU=client,O=client,L=test,C=de
        audit.type: internal_opensearch
        enable_snapshot_restore_privilege: true
        check_snapshot_restore_write_privileges: true
        restapi:
          roles_enabled: ["all_access", "security_rest_api_access"]
        system_indices:
          enabled: true
          indices:
            [
              ".opendistro-alerting-config",
              ".opendistro-alerting-alert*",
              ".opendistro-anomaly-results*",
              ".opendistro-anomaly-detector*",
              ".opendistro-anomaly-checkpoints",
              ".opendistro-anomaly-detection-state",
              ".opendistro-reports-*",
              ".opendistro-notifications-*",
              ".opendistro-notebooks",
              ".opendistro-asynchronous-search-response*",
            ]
    ######## End OpenSearch Security Demo Configuration ########
  # log4j2.properties:

# Extra environment variables to append to this nodeGroup
# This will be appended to the current 'env:' key. You can use any of the kubernetes env
# syntax here
extraEnvs: []
#  - name: MY_ENVIRONMENT_VAR
#    value: the_value_goes_here

# Allows you to load environment variables from kubernetes secret or config map
envFrom: []
# - secretRef:
#     name: env-secret
# - configMapRef:
#     name: config-map

# A list of secrets and their paths to mount inside the pod
# This is useful for mounting certificates for security and for mounting
# the X-Pack license
secretMounts: []

hostAliases: []
# - ip: "127.0.0.1"
#   hostnames:
#   - "foo.local"
#   - "bar.local"

image:
  repository: "opensearchproject/opensearch"
  # override image tag, which is .Chart.AppVersion by default
  tag: ""
  pullPolicy: "IfNotPresent"

podAnnotations: {}
  # iam.amazonaws.com/role: es-cluster

# additionals labels
labels: {}

opensearchJavaOpts: "-Xmx512M -Xms512M"

resources:
  requests:
    cpu: "500m"
    memory: "100Mi"

initResources: {}
#  limits:
#    cpu: "25m"
#    memory: "128Mi"
#  requests:
#    cpu: "25m"
#    memory: "128Mi"

sidecarResources: {}
#  limits:
#    cpu: "25m"
#    memory: "128Mi"
#  requests:
#    cpu: "25m"
#    memory: "128Mi"

networkHost: "0.0.0.0"

rbac:
  create: false
  serviceAccountAnnotations: {}
  serviceAccountName: ""

podSecurityPolicy:
  create: false
  name: ""
  spec:
    privileged: true
    fsGroup:
      rule: RunAsAny
    runAsUser:
      rule: RunAsAny
    seLinux:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    volumes:
      - secret
      - configMap
      - persistentVolumeClaim
      - emptyDir

persistence:
  enabled: true
  # Set to false to disable the fsgroup-volume initContainer that will update permissions on the persistent disk.
  enableInitChown: true
  # override image, which is busybox by default
  # image: busybox
  # override image tag, which is latest by default
  # imageTag:
  labels:
    # Add default labels for the volumeClaimTemplate of the StatefulSet
    enabled: false
  # OpenSearch Persistent Volume Storage Class
  # If defined, storageClassName: <storageClass>
  # If set to "-", storageClassName: "", which disables dynamic provisioning
  # If undefined (the default) or set to null, no storageClassName spec is
  #   set, choosing the default provisioner. (gp2 on AWS, standard on
  #   GKE, AWS & OpenStack)
  #
  # storageClass: "-"
  accessModes:
    - ReadWriteOnce
  size: 5Gi
  annotations: {}

extraVolumes: []
  # - name: extras
  #   emptyDir: {}

extraVolumeMounts: []
  # - name: extras
  #   mountPath: /usr/share/extras
  #   readOnly: true

extraContainers: []
  # - name: do-something
  #   image: busybox
  #   command: ['do', 'something']

extraInitContainers: []
  # - name: do-somethings
  #   image: busybox
  #   command: ['do', 'something']

# This is the PriorityClass settings as defined in
# https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
priorityClassName: ""

# By default this will make sure two pods don't end up on the same node
# Changing this to a region would allow you to spread pods across regions
antiAffinityTopologyKey: "kubernetes.io/hostname"

# Hard means that by default pods will only be scheduled if there are enough nodes for them
# and that they will never end up on the same node. Setting this to soft will do this "best effort"
antiAffinity: "soft"

# This is the node affinity settings as defined in
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature
nodeAffinity: {}

# This is the pod topology spread constraints
# https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/
topologySpreadConstraints: []

# The default is to deploy all pods serially. By setting this to parallel all pods are started at
# the same time when bootstrapping the cluster
podManagementPolicy: "Parallel"

# The environment variables injected by service links are not used, but can lead to slow OpenSearch boot times when
# there are many services in the current namespace.
# If you experience slow pod startups you probably want to set this to `false`.
enableServiceLinks: true

protocol: https
httpPort: 9200
transportPort: 9300

service:
  labels: {}
  labelsHeadless: {}
  headless:
    annotations: {}
  type: ClusterIP
  nodePort: ""
  annotations: {}
  httpPortName: http
  transportPortName: transport
  loadBalancerIP: ""
  loadBalancerSourceRanges: []
  externalTrafficPolicy: ""

updateStrategy: RollingUpdate

# This is the max unavailable setting for the pod disruption budget
# The default value of 1 will make sure that kubernetes won't allow more than 1
# of your pods to be unavailable during maintenance
maxUnavailable: 1

podSecurityContext:
  fsGroup: 1000
  runAsUser: 1000

securityContext:
  capabilities:
    drop:
      - ALL
  # readOnlyRootFilesystem: true
  runAsNonRoot: true
  runAsUser: 1000

securityConfig:
  enabled: true
  path: "/usr/share/opensearch/plugins/opensearch-security/securityconfig"
  actionGroupsSecret:
  configSecret:
  internalUsersSecret:
  rolesSecret:
  rolesMappingSecret:
  tenantsSecret:
  # The following option simplifies securityConfig by using a single secret and
  # specifying the config files as keys in the secret instead of creating
  # different secrets for for each config file.
  # Note that this is an alternative to the individual secret configuration
  # above and shouldn't be used if the above secrets are used.
  config:
    # There are multiple ways to define the configuration here:
    # * If you define anything under data, the chart will automatically create
    #   a secret and mount it.
    # * If you define securityConfigSecret, the chart will assume this secret is
    #   created externally and mount it.
    # * It is an error to define both data and securityConfigSecret.
    securityConfigSecret: ""
    dataComplete: true
    data: {}
      # config.yml: |-
      # internal_users.yml: |-
      # roles.yml: |-
      # roles_mapping.yml: |-
      # action_groups.yml: |-
      # tenants.yml: |-

# How long to wait for opensearch to stop gracefully
terminationGracePeriod: 120

sysctlVmMaxMapCount: 262144

startupProbe:
  tcpSocket:
    port: 9200
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 3
  failureThreshold: 30

readinessProbe:
  tcpSocket:
    port: 9200
  periodSeconds: 5
  timeoutSeconds: 3
  failureThreshold: 3

## Use an alternate scheduler.
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
schedulerName: ""

imagePullSecrets: []
nodeSelector: {}
tolerations: []

# Enabling this will publically expose your OpenSearch instance.
# Only enable this if you have security enabled on your cluster
ingress:
  enabled: false
  # For Kubernetes >= 1.18 you should specify the ingress-controller via the field ingressClassName
  # See https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress
  # ingressClassName: nginx
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  path: /
  hosts:
    - chart-example.local
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

nameOverride: ""
fullnameOverride: ""

masterTerminationFix: false

lifecycle: {}
  # preStop:
  #   exec:
  #     command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
  # postStart:
  #   exec:
  #     command:
  #       - bash
  #       - -c
  #       - |
  #         #!/bin/bash
  #         # Add a template to adjust number of shards/replicas
  #         TEMPLATE_NAME=my_template
  #         INDEX_PATTERN="logstash-*"
  #         SHARD_COUNT=8
  #         REPLICA_COUNT=1
  #         ES_URL=http://localhost:9200
  #         while [[ "$(curl -s -o /dev/null -w '%{http_code}\n' $ES_URL)" != "200" ]]; do sleep 1; done
  #         curl -XPUT "$ES_URL/_template/$TEMPLATE_NAME" -H 'Content-Type: application/json' -d'{"index_patterns":["'"$INDEX_PATTERN"'"],"settings":{"number_of_shards":'$SHARD_COUNT',"number_of_replicas":'$REPLICA_COUNT'}}'

keystore: []
# To add secrets to the keystore:
#  - secretName: opensearch-encryption-key

networkPolicy:
  create: false
  ## Enable creation of NetworkPolicy resources. Only Ingress traffic is filtered for now.
  ## In order for a Pod to access OpenSearch, it needs to have the following label:
  ## {{ template "uname" . }}-client: "true"
  ## Example for default configuration to access HTTP port:
  ## opensearch-master-http-client: "true"
  ## Example for default configuration to access transport port:
  ## opensearch-master-transport-client: "true"
  http:
    enabled: false

# Deprecated
# please use the above podSecurityContext.fsGroup instead
fsGroup: ""

## Set optimal sysctls. This requires privilege. Can be disabled if
## the system has already been preconfigured. (Ex: https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html)
## Also see: https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/
sysctl:
  enabled: false

## Enable to add 3rd Party / Custom plugins not offered in the default OpenSearch image.
plugins:
  enabled: false
  installList: []
  # - example-fake-plugin

# -- Array of extra K8s manifests to deploy
extraObjects: []
  # - apiVersion: secrets-store.csi.x-k8s.io/v1
  #   kind: SecretProviderClass
  #   metadata:
  #     name: argocd-secrets-store
  #   spec:
  #     provider: aws
  #     parameters:
  #       objects: |
  #         - objectName: "argocd"
  #           objectType: "secretsmanager"
  #           jmesPath:
  #             - path: "client_id"
  #               objectAlias: "client_id"
  #             - path: "client_secret"
  #               objectAlias: "client_secret"
  #     secretObjects:
  #       - data:
  #           - key: client_id
  #             objectName: client_id
  #           - key: client_secret
  #             objectName: client_secret
  #         secretName: argocd-secrets-store
  #         type: Opaque
  #         labels:
  #           app.kubernetes.io/part-of: argocd
```

Installation command. Note that --version is the CHART VERSION, not the OpenSearch version:

```bash
helm install opensearch-master -f openserach-master.yaml --version 2.5.1 opensearch/opensearch
```
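If you are unsure which chart version to pass to --version, you can list the available chart versions together with the OpenSearch app version each one ships:

```bash
# CHART VERSION and APP VERSION are shown as separate columns.
helm search repo opensearch/opensearch --versions | head
```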
openserach-client.yaml defines the client node group. Apart from the keys shown below, its contents are identical to openserach-master.yaml and are not repeated here:

```yaml
---
clusterName: "opensearch-cluster"
nodeGroup: "client"

# The service that non master groups will try to connect to when joining the cluster
# This should be set to clusterName + "-" + nodeGroup for your master group
masterService: "opensearch-cluster-master"

# OpenSearch roles that will be applied to this nodeGroup
roles:
  - remote_cluster_client

replicas: 2

persistence:
  enabled: false
  enableInitChown: false
  labels:
    enabled: false
  accessModes:
    - ReadWriteOnce
  size: 1Gi
  annotations: {}

# Expose this node group outside the cluster
service:
  type: NodePort
  nodePort: 30601
```

Install it with the same chart version:

```bash
helm install opensearch-client -f openserach-client.yaml --version 2.5.1 opensearch/opensearch
```

Access test

The client node group is exposed as a NodePort service on port 30601; test access to it as shown below.
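A minimal access test against the NodePort, assuming the demo security configuration is still in place; <node-ip> is a placeholder for the address of any Kubernetes worker node:

```bash
curl -k -u admin:admin "https://<node-ip>:30601"                # basic reachability and version banner
curl -k -u admin:admin "https://<node-ip>:30601/_cat/nodes?v"   # the client pods should appear alongside the master group
```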
Deploying a three-node OpenSearch cluster from YAML manifests

The deployment files are os_cm.yml, os_headless.yml, os_statefulset_hostpath.yml, and os_svc.yml. Make sure the image referenced in the manifests can be pulled. This approach uses hostPath volumes, so the data directory has to be created on every node that can run the pods; an NFS shared directory would also work for data persistence.

Apply the manifests:

```bash
kubectl apply -f os_cm.yml
kubectl apply -f os_headless.yml
kubectl apply -f os_statefulset_hostpath.yml
kubectl apply -f os_svc.yml
```

File contents:

os_cm.yml

```yaml
apiVersion: v1
data:
  opensearch.yml: |
    cluster.name: opensearch-cluster

    # Bind to all interfaces because we don't know what IP address Docker will assign to us.
    network.host: 0.0.0.0

    # Setting network.host to a non-loopback address enables the annoying bootstrap checks. "Single-node" mode disables them again.
    # Implicitly done if ".singleNode" is set to "true".
    # discovery.type: single-node

    # Start OpenSearch Security Demo Configuration
    # WARNING: revise all the lines below before you go into production
    plugins:
      security:
        ssl:
          transport:
            pemcert_filepath: esnode.pem
            pemkey_filepath: esnode-key.pem
            pemtrustedcas_filepath: root-ca.pem
            enforce_hostname_verification: false
          http:
            enabled: true
            pemcert_filepath: esnode.pem
            pemkey_filepath: esnode-key.pem
            pemtrustedcas_filepath: root-ca.pem
        allow_unsafe_democertificates: true
        allow_default_init_securityindex: true
        authcz:
          admin_dn:
            - CN=kirk,OU=client,O=client,L=test,C=de
        audit.type: internal_opensearch
        enable_snapshot_restore_privilege: true
        check_snapshot_restore_write_privileges: true
        restapi:
          roles_enabled: ["all_access", "security_rest_api_access"]
        system_indices:
          enabled: true
          indices:
            [
              ".opendistro-alerting-config",
              ".opendistro-alerting-alert*",
              ".opendistro-anomaly-results*",
              ".opendistro-anomaly-detector*",
              ".opendistro-anomaly-checkpoints",
              ".opendistro-anomaly-detection-state",
              ".opendistro-reports-*",
              ".opendistro-notifications-*",
              ".opendistro-notebooks",
              ".opendistro-asynchronous-search-response*",
            ]
    ######## End OpenSearch Security Demo Configuration ########
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: opensearch-cluster-master
    app.kubernetes.io/instance: opensearch-server
    app.kubernetes.io/name: opensearch
  name: opensearch-cluster-master-config
  namespace: default
```

os_headless.yml

```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  labels:
    app.kubernetes.io/component: opensearch-cluster-master
    app.kubernetes.io/instance: opensearch-server
    app.kubernetes.io/name: opensearch
  name: opensearch-cluster-master-headless
  namespace: default
spec:
  clusterIP: None
  clusterIPs:
    - None
  internalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  ports:
    - name: http
      port: 9200
      protocol: TCP
      targetPort: 9200
    - name: transport
      port: 9300
      protocol: TCP
      targetPort: 9300
  publishNotReadyAddresses: true
  selector:
    app.kubernetes.io/instance: opensearch-server
    app.kubernetes.io/name: opensearch
  sessionAffinity: None
```
os_statefulset_hostpath.yml

```yaml
apiVersion: v1
items:
  - apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      annotations:
        majorVersion: "2"
      generation: 1
      labels:
        app.kubernetes.io/component: opensearch-cluster-master
        app.kubernetes.io/instance: opensearch-server
        app.kubernetes.io/name: opensearch
      name: opensearch-cluster-master
      namespace: default
    spec:
      podManagementPolicy: Parallel
      replicas: 3
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          app.kubernetes.io/instance: opensearch-server
          app.kubernetes.io/name: opensearch
      serviceName: opensearch-cluster-master-headless
      template:
        metadata:
          creationTimestamp: null
          labels:
            app.kubernetes.io/component: opensearch-cluster-master
            app.kubernetes.io/instance: opensearch-server
            app.kubernetes.io/name: opensearch
          name: opensearch-cluster-master
        spec:
          affinity:
            podAntiAffinity:
              preferredDuringSchedulingIgnoredDuringExecution:
                - podAffinityTerm:
                    labelSelector:
                      matchExpressions:
                        - key: app.kubernetes.io/instance
                          operator: In
                          values:
                            - opensearch-server
                        - key: app.kubernetes.io/name
                          operator: In
                          values:
                            - opensearch
                    topologyKey: kubernetes.io/hostname
                  weight: 1
          containers:
            - env:
                - name: node.name
                  valueFrom:
                    fieldRef:
                      apiVersion: v1
                      fieldPath: metadata.name
                - name: cluster.initial_master_nodes
                  value: opensearch-cluster-master-0,opensearch-cluster-master-1,opensearch-cluster-master-2,
                - name: discovery.seed_hosts
                  value: opensearch-cluster-master-headless
                - name: cluster.name
                  value: opensearch-cluster
                - name: network.host
                  value: "0.0.0.0"
                - name: OPENSEARCH_JAVA_OPTS
                  value: -Xmx512M -Xms512M
                - name: node.roles
                  value: master,ingest,data,remote_cluster_client,
              image: opensearchproject/opensearch:2.0.0
              imagePullPolicy: IfNotPresent
              name: opensearch
              ports:
                - containerPort: 9200
                  name: http
                  protocol: TCP
                - containerPort: 9300
                  name: transport
                  protocol: TCP
              readinessProbe:
                failureThreshold: 3
                periodSeconds: 5
                successThreshold: 1
                tcpSocket:
                  port: 9200
                timeoutSeconds: 3
              resources:
                requests:
                  cpu: "1"
                  memory: 100Mi
              securityContext:
                capabilities:
                  drop:
                    - ALL
                runAsNonRoot: true
                runAsUser: 1000
              startupProbe:
                failureThreshold: 30
                initialDelaySeconds: 5
                periodSeconds: 10
                successThreshold: 1
                tcpSocket:
                  port: 9200
                timeoutSeconds: 3
              terminationMessagePath: /dev/termination-log
              terminationMessagePolicy: File
              volumeMounts:
                - mountPath: /usr/share/opensearch/data
                  name: opensearch-cluster-master
                - mountPath: /usr/share/opensearch/config/opensearch.yml
                  name: config
                  subPath: opensearch.yml
          dnsPolicy: ClusterFirst
          enableServiceLinks: true
          initContainers:
            - args:
                - chown -R 1000:1000 /usr/share/opensearch/data
              command:
                - sh
                - -c
              image: busybox:latest
              imagePullPolicy: Always
              name: fsgroup-volume
              resources: {}
              securityContext:
                runAsUser: 0
              terminationMessagePath: /dev/termination-log
              terminationMessagePolicy: File
              volumeMounts:
                - mountPath: /usr/share/opensearch/data
                  name: opensearch-cluster-master
          restartPolicy: Always
          schedulerName: default-scheduler
          securityContext:
            fsGroup: 1000
            runAsUser: 1000
          terminationGracePeriodSeconds: 120
          volumes:
            - configMap:
                defaultMode: 420
                name: opensearch-cluster-master-config
              name: config
            - hostPath:
                path: /tmp/osdata
              name: opensearch-cluster-master
      updateStrategy:
        type: RollingUpdate
kind: List
```
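Because the StatefulSet above mounts a hostPath volume at /tmp/osdata, that directory must exist on every node that may run one of the pods (the init container then fixes its ownership). A minimal sketch, with hypothetical node names:

```bash
# Replace the node names with your own worker nodes.
for node in k8s-node1 k8s-node2 k8s-node3; do
  ssh "$node" "sudo mkdir -p /tmp/osdata"
done
```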
os_svc.yml

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: opensearch-cluster-master
    app.kubernetes.io/instance: opensearch-server
    app.kubernetes.io/name: opensearch
  name: opensearch-cluster-master
  namespace: default
spec:
  internalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  ports:
    - name: http
      port: 9200
      nodePort: 32001
      protocol: TCP
      targetPort: 9200
    - name: transport
      port: 9300
      protocol: TCP
      targetPort: 9300
  selector:
    app.kubernetes.io/instance: opensearch-server
    app.kubernetes.io/name: opensearch
  sessionAffinity: None
  type: NodePort
```
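After applying the four manifests, a minimal verification; 32001 is the NodePort defined in os_svc.yml, <node-ip> is a placeholder for any worker node address, and the credentials assume the demo security configuration:

```bash
kubectl get pods -l app.kubernetes.io/name=opensearch   # expect opensearch-cluster-master-0/1/2 Running
kubectl get svc opensearch-cluster-master               # confirm the 9200:32001 NodePort mapping
curl -k -u admin:admin "https://<node-ip>:32001/_cat/nodes?v"
curl -k -u admin:admin "https://<node-ip>:32001/_cluster/health?pretty"
```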