

Contents

I. Installing KubeSphere
  1. Install local persistent storage
    1.1 default-storage-class.yaml
    1.2 openebs-operator.yaml
    1.3 Install the default StorageClass
  2. Install KubeSphere
    2.1 Install Helm
    2.2 Install KubeSphere
II. Configuring KubeSphere
  1. Install plugins
  2. Create a DevOps project
  3. Configure SonarQube
    3.1 Install the SonarQube server
    3.2 Get the SonarQube console address
    3.3 Configure the SonarQube server
      3.3.1 Create a SonarQube admin token
      3.3.2 Create a webhook server
      3.3.3 Add the SonarQube server to Jenkins
      3.3.4 Configure the DevOps plugin
      3.3.5 Configure SonarQube in Jenkins (use admin/P88w0rd if a password is required)
      3.3.6 Add the SonarQube configuration to DevOps
      3.3.7 Add sonarqubeURL to the KubeSphere console
      3.3.8 Restart services
    3.4 Configure the Maven private repository
III. Configuring the microservice project
  1. Create a Harbor credential
  2. Build Maven
    2.1 Download the source
    2.2 Configuration files
    2.3 Build the image and push it to the private registry
    2.4 Modify the KubeSphere configuration file
    2.5 Create the Docker secret
  3. Build DevOps
    3.1 Create credentials
    3.2 Create a pipeline
    3.3 Edit the Jenkinsfile
    3.4 Create harbor-secret
    3.5 Verify
IV. References

I. Installing KubeSphere

1. Install local persistent storage

1.1 default-storage-class.yaml

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local
  annotations:
    cas.openebs.io/config: |
      - name: StorageType
        value: "hostpath"
      - name: BasePath
        value: "/var/openebs/local/"
    openebs.io/cas-type: local
    storageclass.beta.kubernetes.io/is-default-class: "true"
    storageclass.kubesphere.io/supported-access-modes: '["ReadWriteOnce"]'
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```
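Before applying the manifest, it is worth checking the two fields that matter: the `is-default-class` annotation is what makes PVCs without an explicit `storageClassName` bind to this class, and the provisioner must be `openebs.io/local`. A minimal sketch of that sanity check — it writes a pared-down copy of the manifest to a temp file just so the checks are runnable anywhere; point the `grep`s at your real file:

```shell
# Pared-down copy of default-storage-class.yaml, written to a temp file
# so the checks below are self-contained; use your real file in practice.
cat > /tmp/default-storage-class.yaml <<'EOF'
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: openebs.io/local
EOF

# The default-class annotation must be the string "true", and the
# provisioner must be openebs.io/local, or hostpath PVs will not bind.
grep -q 'is-default-class: "true"' /tmp/default-storage-class.yaml
grep -q 'provisioner: openebs.io/local' /tmp/default-storage-class.yaml
echo "default StorageClass manifest looks sane"
```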
1.2 openebs-operator.yaml

```yaml
# This manifest deploys the OpenEBS control plane components,
# with associated CRs & RBAC rules.
# NOTE: On GKE, deploy openebs-operator.yaml in the admin context.
#
# NOTE: The Jiva and cStor components previously included in the operator file
# have been removed, and users are recommended to use the cStor and Jiva CSI
# operators. To upgrade your Jiva and cStor volumes to CSI, see:
# https://github.com/openebs/upgrade
#
# To deploy the legacy Jiva and cStor:
# kubectl apply -f https://openebs.github.io/charts/legacy-openebs-operator.yaml
#
# To deploy cStor CSI:
# kubectl apply -f https://openebs.github.io/charts/cstor-operator.yaml
#
# To deploy Jiva CSI:
# kubectl apply -f https://openebs.github.io/charts/jiva-operator.yaml

# Create the OpenEBS namespace
apiVersion: v1
kind: Namespace
metadata:
  name: openebs
---
# Create the Maya service account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: openebs-maya-operator
  namespace: openebs
---
# Define a role that allows operations on K8s pods/deployments
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: openebs-maya-operator
rules:
- apiGroups: ["*"]
  resources: ["nodes", "nodes/proxy"]
  verbs: ["*"]
- apiGroups: ["*"]
  resources: ["namespaces", "services", "pods", "pods/exec", "deployments", "deployments/finalizers", "replicationcontrollers", "replicasets", "events", "endpoints", "configmaps", "secrets", "jobs", "cronjobs"]
  verbs: ["*"]
- apiGroups: ["*"]
  resources: ["statefulsets", "daemonsets"]
  verbs: ["*"]
- apiGroups: ["*"]
  resources: ["resourcequotas", "limitranges"]
  verbs: ["list", "watch"]
- apiGroups: ["*"]
  resources: ["ingresses", "horizontalpodautoscalers", "verticalpodautoscalers", "certificatesigningrequests"]
  verbs: ["list", "watch"]
- apiGroups: ["*"]
  resources: ["storageclasses", "persistentvolumeclaims", "persistentvolumes"]
  verbs: ["*"]
- apiGroups: ["volumesnapshot.external-storage.k8s.io"]
  resources: ["volumesnapshots", "volumesnapshotdatas"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["apiextensions.k8s.io"]
  resources: ["customresourcedefinitions"]
  verbs: ["get", "list", "create", "update", "delete", "patch"]
- apiGroups: ["openebs.io"]
  resources: ["*"]
  verbs: ["*"]
- apiGroups: ["cstor.openebs.io"]
  resources: ["*"]
  verbs: ["*"]
- apiGroups: ["coordination.k8s.io"]
  resources: ["leases"]
  verbs: ["get", "watch", "list", "delete", "update", "create"]
- apiGroups: ["admissionregistration.k8s.io"]
  resources: ["validatingwebhookconfigurations", "mutatingwebhookconfigurations"]
  verbs: ["get", "create", "list", "delete", "update", "patch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
- apiGroups: ["*"]
  resources: ["poddisruptionbudgets"]
  verbs: ["get", "list", "create", "delete", "watch"]
- apiGroups: ["coordination.k8s.io"]
  resources: ["leases"]
  verbs: ["get", "create", "update"]
---
# Bind the service account to the role privileges.
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: openebs-maya-operator
subjects:
- kind: ServiceAccount
  name: openebs-maya-operator
  namespace: openebs
roleRef:
  kind: ClusterRole
  name: openebs-maya-operator
  apiGroup: rbac.authorization.k8s.io
---
# ... The manifest continues with the BlockDevice and BlockDeviceClaim CRDs,
# the node-disk-manager ConfigMap and DaemonSet, the NDM operator Deployment,
# the optional NDM cluster/node exporters and Services, and the
# openebs-localpv-provisioner Deployment, identical to the upstream manifest
# applied in step 1.3 (https://openebs.github.io/charts/openebs-operator.yaml).
# The two StorageClass objects it ends with are reproduced here:
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-hostpath
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      # hostpath type will create a PV by
      # creating a sub-directory under the
      # BASEPATH provided below.
      - name: StorageType
        value: "hostpath"
      # Specify the location (directory) where
      # PV (volume) data will be saved.
      # A sub-directory with the pv-name will be
      # created. When the volume is deleted,
      # the PV sub-directory will be deleted.
      # Default value is /var/openebs/local
      - name: BasePath
        value: "/var/openebs/local/"
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-device
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      # device type will create a PV by
      # issuing a BDC and will extract the path
      # values from the associated BD.
      - name: StorageType
        value: "device"
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
```

1.3 Install the default StorageClass

```shell
# Install the iSCSI client; OpenEBS needs it for storage support.
# Run on all nodes:
yum install iscsi-initiator-utils -y
# Enable the service at boot
systemctl enable --now iscsid
# Start the service
systemctl start iscsid
# Check the service status
systemctl status iscsid

# Install OpenEBS
kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml
# Check the status; pulling the images may take a while
kubectl get all -n openebs

# On the master node, create the local storage class
kubectl apply -f default-storage-class.yaml
```

2. Install KubeSphere

2.1 Install Helm

See the official docs: Helm | Installing Helm. Mind the compatibility between your Kubernetes version and the Helm version.

2.2 Install KubeSphere

```shell
# Assuming Helm 3 is already installed
helm repo add kubesphere https://charts.kubesphere.io/main
# Search the repo
helm search repo kubesphere
# Pull the chart
helm pull kubesphere/ks-core --version 1.1.3
# Unpack it
tar -xf ks-core-1.1.3.tgz
# Create the namespaces
kubectl create ns kubesphere-system
kubectl create ns kubesphere-controls-system
kubectl create ns kubesphere-monitoring-system
# Install
helm install ks-core ks-core -n kubesphere-system
```

Console address: http://192.168.139.176:30880
Account: admin
Password: P88w0rd

II. Configuring KubeSphere

1. Install plugins

2. Create a DevOps project

Go to Workbench → Workspace and create it there.

3. Configure SonarQube

3.1 Install the SonarQube server

```shell
helm upgrade --install sonarqube sonarqube --repo https://charts.kubesphere.io/main \
  -n kubesphere-devops-system --create-namespace --set service.type=NodePort
```

3.2 Get the SonarQube console address

```shell
export NODE_PORT=$(kubectl get --namespace kubesphere-devops-system -o jsonpath="{.spec.ports[0].nodePort}" services sonarqube-sonarqube)
export NODE_IP=$(kubectl get nodes --namespace kubesphere-devops-system -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
```
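The `NODE_PORT`/`NODE_IP` pattern used throughout this article just concatenates a node address with a service's NodePort. A minimal sketch of the same assembly with hard-coded stand-ins (the values are placeholders from this article's example environment, not read from a real cluster):

```shell
# Stand-ins for the two kubectl jsonpath lookups; on a real cluster
# NODE_PORT comes from the Service and NODE_IP from the node list.
NODE_PORT=31850            # hypothetical NodePort of sonarqube-sonarqube
NODE_IP=192.168.139.176    # example node address from this article
SONARQUBE_URL="http://${NODE_IP}:${NODE_PORT}"
echo "${SONARQUBE_URL}"
```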
```shell
# Check that the resources have been created
kubectl get pod -n kubesphere-devops-system
```

Open the SonarQube console; the default account/password is admin/admin.

3.3 Configure the SonarQube server

3.3.1 Create a SonarQube admin token

3.3.2 Create a webhook server

```shell
export NODE_PORT=$(kubectl get --namespace kubesphere-devops-system -o jsonpath="{.spec.ports[0].nodePort}" services devops-jenkins)
export NODE_IP=$(kubectl get nodes --namespace kubesphere-devops-system -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT/sonarqube-webhook/
```

3.3.3 Add the SonarQube server to Jenkins

```shell
export NODE_PORT=$(kubectl get --namespace kubesphere-devops-system -o jsonpath="{.spec.ports[0].nodePort}" services devops-jenkins)
export NODE_IP=$(kubectl get nodes --namespace kubesphere-devops-system -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
```

3.3.4 Configure the DevOps plugin

Step 1: change the address to your actual address.
Step 2: go to the system workspace → project kubesphere-devops-system → ConfigMap jenkins-casc-config → jenkins_user.yaml.
Step 3: change the following entries to addresses that are actually reachable:

```yaml
securityRealm:
  oic:
    clientId: jenkins
    clientSecret: jenkins
    tokenServerUrl: http://192.168.139.176:30880/oauth/token
    authorizationServerUrl: http://192.168.139.176:30880/oauth/authorize
    userInfoServerUrl: http://192.168.139.176:30880/oauth/userinfo
    endSessionEndpoint: http://192.168.139.176:30880/oauth/logout
    logoutFromOpenidProvider: true
    scopes: openid profile email
    fullNameFieldName: url
    userNameField: preferred_username
```

Step 4: system workspace → project kubesphere-system → ConfigMap kubesphere-config.
Step 5: restart the ks-apiserver Deployment:

```shell
kubectl -n kubesphere-system rollout restart deploy ks-apiserver
```

3.3.5 Configure SonarQube in Jenkins (use admin/P88w0rd if a password is required)

Add a credential for the SonarQube configuration to use.

3.3.6 Add the SonarQube configuration to DevOps

1. Run:

```shell
kubectl -n kubesphere-devops-system edit cm devops-config
```

2. Edit the configuration:

```yaml
data:
  kubesphere.yaml: |
    authentication:
      authenticateRateLimiterMaxTries: 10
      authenticateRateLimiterDuration: 10m0s
      loginHistoryRetentionPeriod: 168h
      maximumClockSkew: 10s
      jwtSecret: "UDjssmmDgxZtkXVDSeFvBtsZeBSFWhJ6"
    devops:
      host: http://devops-jenkins.kubesphere-devops-system
      username: admin
      maxConnections: 100
      namespace: kubesphere-devops-system
      workerNamespace: kubesphere-devops-worker
    sonarqube:
      host: http://192.168.139.176:31850
      token: deafc2f1c17bf0d6bbeccb2a742a1706bebc0c5a
```

3. Save and exit.
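The five OAuth endpoints in jenkins_user.yaml (step 3 of 3.3.4) are all paths under the same console address. A small sketch that derives them from a single variable, so a later address change only touches one line (the base URL is this article's example address; substitute your own):

```shell
# Base address of the KubeSphere console (example value from this article).
KS_CONSOLE="http://192.168.139.176:30880"

# Derive the endpoints used by the oic security realm.
TOKEN_URL="${KS_CONSOLE}/oauth/token"
AUTHZ_URL="${KS_CONSOLE}/oauth/authorize"
USERINFO_URL="${KS_CONSOLE}/oauth/userinfo"
LOGOUT_URL="${KS_CONSOLE}/oauth/logout"

printf 'tokenServerUrl: %s\nauthorizationServerUrl: %s\nuserInfoServerUrl: %s\nendSessionEndpoint: %s\n' \
  "$TOKEN_URL" "$AUTHZ_URL" "$USERINFO_URL" "$LOGOUT_URL"
```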
http://devops-jenkins.kubesphere-devops-systemusername: adminmaxConnections: 100namespace: kubesphere-devops-systemworkerNamespace: kubesphere-devops-workersonarqube:host: http://192.168.139.176:31850token: deafc2f1c17bf0d6bbeccb2a742a1706bebc0c5a 3、退出保存 3.3.7、将 sonarqubeURL 添加到 KubeSphere 控制台 kubectl edit cm -n kubesphere-system ks-console-config data:local_config.yaml: |server:http:hostname: localhostport: 8000static:production:/public: server/public/assets: dist/assets/dist: distredis:port: 6379host: redis.kubesphere-system.svcredisTimeout: 5000sessionTimeout: 7200000apiServer:url: http://ks-apiserverwsUrl: ws://ks-apiserverclient:version:kubesphere: v4.1.2kubernetes: v1.28.2enableKubeConfig: truedevops: #添加sonarqubeURL: http://192.168.139.176:31850 #添加enableNodeListTerminal: true 3.3.8、重启服务 kubectl -n kubesphere-devops-system rollout restart deploy devops-apiserverkubectl -n kubesphere-system rollout restart deploy ks-console 3.4、配置Maven私服配置 集群管理host主机群配置字典配置ks-devops-agent kind: ConfigMap apiVersion: v1 metadata:name: ks-devops-agentnamespace: kubesphere-devops-workerlabels:app.kubernetes.io/managed-by: Helmkubesphere.io/extension-ref: devopsannotations:meta.helm.sh/release-name: devops-agentmeta.helm.sh/release-namespace: kubesphere-devops-system data:MavenSetting: |?xml version1.0 encodingUTF-8?settingsxmlnshttp://maven.apache.org/SETTINGS/1.2.0xmlns:xsihttp://www.w3.org/2001/XMLSchema-instancexsi:schemaLocationhttp://maven.apache.org/SETTINGS/1.2.0 
                                  https://maven.apache.org/xsd/settings-1.2.0.xsd">
      <localRepository>/var/jenkins_home/repository</localRepository>
      <servers>
        <server>
          <id>release</id>
          <username>admin</username>
          <password>123456</password>
        </server>
        <server>
          <id>snapshots</id>
          <username>admin</username>
          <password>123456</password>
        </server>
        <server>
          <id>snail</id>
          <username>admin</username>
          <password>123456</password>
        </server>
      </servers>
      <mirrors>
        <mirror>
          <id>snail</id>
          <name>snail</name>
          <url>http://192.168.139.184:8081/repository/snail-group/</url>
          <mirrorOf>*</mirrorOf>
        </mirror>
      </mirrors>
      <pluginGroups>
        <pluginGroup>org.sonarsource.scanner.maven</pluginGroup>
      </pluginGroups>
      <profiles>
        <profile>
          <id>dev</id>
          <repositories>
            <repository>
              <id>nexus</id>
              <url>http://192.168.139.184:8081/repository/snail-group/</url>
              <releases>
                <enabled>true</enabled>
              </releases>
              <snapshots>
                <enabled>true</enabled>
              </snapshots>
            </repository>
          </repositories>
          <pluginRepositories>
            <pluginRepository>
              <id>public</id>
              <name>Public Repositories</name>
              <url>http://192.168.139.184:8081/repository/snail-group/</url>
            </pluginRepository>
          </pluginRepositories>
        </profile>
        <profile>
          <id>jdk-17</id>
          <activation>
            <activeByDefault>true</activeByDefault>
            <jdk>17</jdk>
          </activation>
          <properties>
            <sonar.host.url>http://192.168.139.176:30335</sonar.host.url>
          </properties>
        </profile>
      </profiles>
      <activeProfiles>
        <activeProfile>dev</activeProfile>
      </activeProfiles>
    </settings>

三、Configure the microservice project

1、Create the Harbor credential

Cluster Management → Configuration → Secrets → add.

2、Build Maven

The Maven bundled with KubeSphere is 3.5.3, which is too old, so build a newer Docker image yourself.

2.1、Download the source

https://github.com/carlossg/docker-maven/tree/main/eclipse-temurin-17

2.2、Configuration files

Dockerfile

FROM eclipse-temurin:17-jdk as builder

ARG MAVEN_VERSION=3.9.9
ARG USER_HOME_DIR="/root"
ARG SHA=a555254d6b53d267965a3404ecb14e53c3827c09c3b94b5678835887ab404556bfaf78dcfe03ba76fa2508649dca8531c74bca4d5846513522404d48e8c4ac8b
ARG BASE_URL=https://dlcdn.apache.org/maven/maven-3/${MAVEN_VERSION}/binaries

ENV MAVEN_HOME=/usr/share/maven
ENV MAVEN_CONFIG="$USER_HOME_DIR/.m2"

RUN apt-get update \
  && apt-get install -y ca-certificates curl git gnupg dirmngr --no-install-recommends \
  && rm -rf /var/lib/apt/lists/*

RUN set -eux; curl -fsSLO --retry 3 --retry-connrefused --compressed ${BASE_URL}/apache-maven-${MAVEN_VERSION}-bin.tar.gz \
  && echo "${SHA} *apache-maven-${MAVEN_VERSION}-bin.tar.gz" | sha512sum -c - \
  && curl -fsSLO --compressed ${BASE_URL}/apache-maven-${MAVEN_VERSION}-bin.tar.gz.asc \
  && export GNUPGHOME="$(mktemp -d)"; \
  for key in \
    6A814B1F869C2BBEAB7CB7271A2A1C94BDE89688 \
    29BEA2A645F2D6CED7FB12E02B172E3E156466E8 \
    88BE34F94BDB2B5357044E2E3A387D43964143E3 \
  ; do \
    gpg --batch --keyserver hkps://keyserver.ubuntu.com --recv-keys "$key" ; \
  done; \
  gpg --batch --verify apache-maven-${MAVEN_VERSION}-bin.tar.gz.asc apache-maven-${MAVEN_VERSION}-bin.tar.gz

RUN mkdir -p ${MAVEN_HOME} ${MAVEN_HOME}/ref \
  && tar -xzf apache-maven-${MAVEN_VERSION}-bin.tar.gz -C ${MAVEN_HOME} --strip-components=1 \
  && ln -s ${MAVEN_HOME}/bin/mvn /usr/bin/mvn

# smoke test
RUN mvn --version

FROM eclipse-temurin:17-jdk

RUN apt-get update \
  && apt-get install -y ca-certificates curl git openssh-client --no-install-recommends \
  && rm -rf /var/lib/apt/lists/*

LABEL org.opencontainers.image.title="Apache Maven"
LABEL org.opencontainers.image.source="https://github.com/carlossg/docker-maven"
LABEL org.opencontainers.image.url="https://github.com/carlossg/docker-maven"
LABEL org.opencontainers.image.description="Apache Maven is a software project management and comprehension tool. Based on the concept of a project object model (POM), Maven can manage a project's build, reporting and documentation from a central piece of information."

ENV MAVEN_HOME=/usr/share/maven

COPY --from=builder ${MAVEN_HOME} ${MAVEN_HOME}
COPY mvn-entrypoint.sh /usr/local/bin/mvn-entrypoint.sh
COPY settings-docker.xml /usr/share/maven/ref/

RUN ln -s ${MAVEN_HOME}/bin/mvn /usr/bin/mvn

ARG MAVEN_VERSION=3.9.9
ARG USER_HOME_DIR="/root"
ENV MAVEN_CONFIG="$USER_HOME_DIR/.m2"

ENTRYPOINT ["/usr/local/bin/mvn-entrypoint.sh"]
CMD ["mvn"]

mvn-entrypoint.sh

#! /bin/sh -eu

# Copy files from /usr/share/maven/ref into ${MAVEN_CONFIG}
# So the initial ~/.m2 is set with expected content.
# Don't override, as this is just a reference setup

copy_reference_files() {
  local log="$MAVEN_CONFIG/copy_reference_file.log"
  local ref="/usr/share/maven/ref"

  if mkdir -p "${MAVEN_CONFIG}/repository" && touch "${log}" > /dev/null 2>&1 ; then
    cd "${ref}"
    local reflink=""
    if cp --help 2>&1 | grep -q reflink ; then
      reflink="--reflink=auto"
    fi
    if [ -n "$(find "${MAVEN_CONFIG}/repository" -maxdepth 0 -type d -empty 2>/dev/null)" ] ; then
      # destination is empty...
      echo "--- Copying all files to ${MAVEN_CONFIG} at $(date)" >> "${log}"
      cp -rv ${reflink} . "${MAVEN_CONFIG}" >> "${log}"
    else
      # destination is non-empty, copy file-by-file
      echo "--- Copying individual files to ${MAVEN_CONFIG} at $(date)" >> "${log}"
      find . -type f -exec sh -eu -c '
        log="${1}"
        shift
        reflink="${1}"
        shift
        for f in "$@" ; do
          if [ ! -e "${MAVEN_CONFIG}/${f}" ] || [ -e "${f}.override" ] ; then
            mkdir -p "${MAVEN_CONFIG}/$(dirname "${f}")"
            cp -rv ${reflink} "${f}" "${MAVEN_CONFIG}/${f}" >> "${log}"
          fi
        done
      ' _ "${log}" "${reflink}" {} +
    fi
    echo >> "${log}"
  else
    echo "Can not write to ${log}. Wrong volume permissions? Carrying on ..."
  fi
}

owd="$(pwd)"
copy_reference_files
unset MAVEN_CONFIG

cd "${owd}"
unset owd

exec "$@"

settings-docker.xml

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
                              https://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository>/usr/share/maven/ref/repository</localRepository>
</settings>

2.3、Build the image and push it to the private registry

# Build the image
docker build -t 192.168.139.184:8899/library/maven:3.9.9-jdk17 .

# Log in to the private Docker registry
docker login -uadmin 192.168.139.184:8899

# Push to the registry
docker push 192.168.139.184:8899/library/maven:3.9.9-jdk17

2.4、Modify the KubeSphere configuration

Cluster Management → host cluster → Configuration → ConfigMaps → jenkins-casc-config. Add a jdk17 entry at the same level as the existing mavenjdk11 one. Edit jenkins_user.yaml:

- name: "mavenjdk17"
  label: "mavenjdk17"
  inheritFrom: "maven"
  imagePullSecrets:
    - name: harbor-secret
  containers:
    - name: "maven"
      image: "192.168.139.184:8899/library/maven:3.9.9-jdk17"
  volumes:
    - hostPathVolume:
        hostPath: "/var/run/docker.sock"
        mountPath: "/var/run/docker.sock"
    - hostPathVolume:
        hostPath: "/var/data/jenkins_maven_cache"
        mountPath: "/root/.m2"
    - hostPathVolume:
        hostPath: "/var/data/jenkins_sonar_cache"
        mountPath: "/root/.sonar/cache"
    - hostPathVolume:
        hostPath: "/usr/bin/docker"
        mountPath: "/usr/bin/docker"
    - hostPathVolume:
        hostPath: "/usr/bin/kubectl"
        mountPath: "/usr/bin/kubectl"
    - hostPathVolume:
        hostPath: "/usr/bin/envsubst"
        mountPath: "/usr/bin/envsubst"
  yaml: |
    spec:
      containers:
        - name: maven
          volumeMounts:
            - name: config-volume
              mountPath: /usr/share/maven/conf/settings.xml
              subPath: settings.xml
      volumes:
        - name: config-volume
          configMap:
            name: ks-devops-agent
            items:
              - key: MavenSetting
                path: settings.xml

2.5、Create the Docker secret

# the namespace is kubesphere-devops-worker
kubectl create secret docker-registry harbor-secret --docker-server=192.168.139.184:8899 --docker-username=admin --docker-password=Harbor12345 -n kubesphere-devops-worker

3、Build the DevOps pipeline

3.1、Create the credentials

Workspace wssnail-shop → DevOps project ks-wssnail-shop-dev → DevOps Project Settings → Credentials.

3.2、Create the pipeline

3.3、Edit the Jenkinsfile

pipeline {
  agent {
    node {
      label 'mavenjdk17' // must match the custom Maven pod template defined above
    }
  }
  stages {
    stage('checkout scm') {
      agent none
      steps {
        git(url: 'http://192.168.139.184:9000/shop/wssnail-shop.git', credentialsId: 'git-user-pwd', branch: "$BRANCH", changelog: true, poll: false)
      }
    }
    stage('unit test') {
      agent none
      steps {
        container('maven') {
          sh '''
            cd ${SERVICE}
            pwd
            echo ${SERVICE}
            mvn clean test
          '''
        }
      }
    }
    stage('Code Analysis') {
      agent none
      steps {
        container('maven') {
          withCredentials([string(credentialsId: 'sonar-token', variable: 'SONAR_TOKEN')]) {
            withSonarQubeEnv('sonar') {
              sh '''
                service_name=${SERVICE#*/}
                service_name=${service_name#*/}
                cd ${SERVICE}
                mvn sonar:sonar -Dsonar.projectKey=${service_name} -Dsonar.login=$SONAR_TOKEN
                echo "mvn sonar:sonar -Dsonar.projectKey=${service_name}"
              '''
            }
          }
          timeout(unit: 'MINUTES', activity: true, time: 15) {
            waitForQualityGate true
          }
        }
      }
    }
    stage('build & push') {
      agent none
      steps {
        withCredentials([usernamePassword(credentialsId: 'harbor-user-pwd', passwordVariable: 'DOCKER_PASSWORD', usernameVariable: 'DOCKER_USERNAME')]) {
          container('maven') {
            sh '''
              cd ${SERVICE}
              mvn clean package -DskipTests
              cd ${WORKSPACE}
              chmod -R 777 deploy/copy.sh
              deploy/copy.sh
            '''
            sh '''
              echo ${DOCKER_PASSWORD} | docker login ${REGISTRY} -u ${DOCKER_USERNAME} --password-stdin
              service_name=${SERVICE#*/}
              service_name=${service_name#*/}
              cd deploy/${service_name}/build
              if test "${DOCKERHUB_NAMESPACE}" = "${DOCKERHUB_NAMESPACE_SNAPSHOT}"; then
                echo "DOCKERHUB_NAMESPACE is snapshot...."
                docker build -f Dockerfile -t ${REGISTRY}/${DOCKERHUB_NAMESPACE}/${service_name}:SNAPSHOT-${BUILD_NUMBER} .
                docker push ${REGISTRY}/${DOCKERHUB_NAMESPACE}/${service_name}:SNAPSHOT-${BUILD_NUMBER}
              else
                docker build -f Dockerfile -t ${REGISTRY}/${DOCKERHUB_NAMESPACE}/${service_name}:SNAPSHOT-${BUILD_NUMBER} .
                echo "DOCKERHUB_NAMESPACE is release...."
              fi
            '''
          }
        }
      }
    }
    stage('push latest') {
      steps {
        container('maven') {
          sh '''
            service_name=${SERVICE#*/}
            service_name=${service_name#*/}
            cd deploy/${service_name}/build
            docker tag ${REGISTRY}/${DOCKERHUB_NAMESPACE}/${service_name}:SNAPSHOT-${BUILD_NUMBER} ${REGISTRY}/${DOCKERHUB_NAMESPACE}/${service_name}:latest
            docker push ${REGISTRY}/${DOCKERHUB_NAMESPACE}/${service_name}:latest
          '''
        }
      }
    }
    stage('deploy to dev') {
      agent none
      when {
        expression {
          return params.TAG_NAME =~ /snapshot.*/
        }
      }
      steps {
        input(message: 'deploy to dev?', submitter: '')
        container('maven') {
          withCredentials([kubeconfigContent(credentialsId: 'kubeconfig-id', variable: 'ADMIN_KUBECONFIG')]) {
            sh '''
              service_name=${SERVICE#*/}
              service_name=${service_name#*/}
              cd deploy/${service_name}
              sed -i "s#REGISTRY#${REGISTRY}#" deployment.yaml
              sed -i "s#DOCKERHUB_NAMESPACE#${DOCKERHUB_NAMESPACE}#" deployment.yaml
              sed -i "s#APP_NAME#${service_name}#" deployment.yaml
              sed -i "s#BUILD_NUMBER#${BUILD_NUMBER}#" deployment.yaml
              sed -i "s#REPLICAS#${REPLICAS}#" deployment.yaml
              mkdir ~/.kube
              echo "$ADMIN_KUBECONFIG" > ~/.kube/config
              kubectl create cm ${service_name}-yml --dry-run=client -o yaml --from-file=build/target/bootstrap.yml -n prod-wssnail-shopf9vqj > ${service_name}-configmap.yml
              kubectl apply -f .
            '''
          }
        }
      }
    }
    stage('push with tag') {
      agent none
      when {
        expression {
          return params.TAG_NAME =~ /v.*/
        }
      }
      steps {
        input(message: 'release image with tag?', submitter: '')
        withCredentials([usernamePassword(credentialsId: 'git-user-pwd', passwordVariable: 'GIT_PASSWORD', usernameVariable: 'GIT_USERNAME')]) {
          sh 'git config --global user.email "snail"'
          sh 'git config --global user.name "snail"'
          sh 'git tag -a ${TAG_NAME} -m "${TAG_NAME}"'
          sh 'git push http://${GIT_USERNAME}:${GIT_PASSWORD}@${GIT_REPO_URL}/${GIT_ACCOUNT}/${APP_NAME}.git --tags --ipv4'
          container('maven') {
            sh '''
              service_name=${SERVICE#*/}
              service_name=${service_name#*/}
              docker tag ${REGISTRY}/${DOCKERHUB_NAMESPACE}/${service_name}:SNAPSHOT-${BUILD_NUMBER} ${REGISTRY}/${DOCKERHUB_NAMESPACE}/${service_name}:${TAG_NAME}
              docker push ${REGISTRY}/${DOCKERHUB_NAMESPACE}/${service_name}:${TAG_NAME}
            '''
          }
        }
      }
    }
    stage('deploy to production') {
      agent none
      when {
        expression {
          return params.TAG_NAME =~ /v.*/
        }
      }
      steps {
        input(message: 'deploy-to-production?', submitter: '')
        container('maven') {
          withCredentials([kubeconfigContent(credentialsId: 'kubeconfig-id', variable: 'ADMIN_KUBECONFIG')]) {
            sh '''
              service_name=${SERVICE#*/}
              service_name=${service_name#*/}
              cd deploy/${service_name}/prod
              sed -i "s#REGISTRY#${REGISTRY}#" deployment.yaml
              sed -i "s#DOCKERHUB_NAMESPACE#${DOCKERHUB_NAMESPACE}#" deployment.yaml
              sed -i "s#APP_NAME#${service_name}#" deployment.yaml
              sed -i "s#TAG_NAME#${TAG_NAME}#" deployment.yaml
              sed -i "s#REPLICAS#${REPLICAS}#" deployment.yaml
              mkdir ~/.kube
              echo "$ADMIN_KUBECONFIG" > ~/.kube/config
              kubectl create cm ${service_name}-yml --dry-run=client -o yaml --from-file=../build/target/bootstrap.yml -n prod-wssnail-shopf9vqj > ${service_name}-configmap.yml
              kubectl apply -f .
            '''
          }
        }
      }
    }
  }
  environment {
    APP_NAME = 'wssnail-shop'
    DOCKER_CREDENTIAL_ID = 'harbor-user-pwd'
    REGISTRY = '192.168.139.184:8899'
    GIT_REPO_URL = '192.168.139.184:9000'
    GIT_CREDENTIAL_ID = 'git-user-pwd'
    GIT_ACCOUNT = 'shop'
    SONAR_CREDENTIAL_ID = 'sonar-token'
    DOCKERHUB_NAMESPACE_SNAPSHOT = 'snapshot'
    DOCKERHUB_NAMESPACE_RELEASE = 'release'
  }
  parameters {
    choice(name: 'SERVICE', choices: ['wssnail-shop-parent/shop-gateway', 'wssnail-shop-parent/shop-uaa', 'wssnail-shop-parent/shop-commodity', 'wssnail-shop-parent/shop-order'], description: 'Select the service to deploy')
    choice(name: 'DOCKERHUB_NAMESPACE', choices: ['snapshot', 'release'], description: 'Select the image registry (namespace) to push to')
    choice(name: 'REPLICAS', choices: ['1', '3', '5', '7'], description: 'Select the number of replicas to run after the build')
    string(name: 'BRANCH', defaultValue: 'master', description: 'Enter the branch to build')
    string(name: 'TAG_NAME', defaultValue: 'snapshot', description: 'Release versions must start with "v", e.g. v1, v1.0.0')
  }
}

3.4、Create harbor-secret

kubectl create secret docker-registry harbor-secret --docker-server=192.168.139.184:8899 --docker-username=admin --docker-password=Harbor12345 -n prod-wssnail-shopf9vqj

3.5、Verify

四、References

https://blog.csdn.net/huangh0914/article/details/136363139
Documentation center
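The deploy stages in the Jenkinsfile above render deployment.yaml by substituting uppercase placeholders with sed before running kubectl apply. Below is a minimal, self-contained sketch of that templating mechanism; the two-line manifest and the concrete values (service name, build number) are illustrative stand-ins, not files from the project:

```shell
# Stand-in manifest using the same uppercase placeholders as the pipeline
# (REGISTRY, DOCKERHUB_NAMESPACE, APP_NAME, BUILD_NUMBER, REPLICAS).
cat > deployment.yaml <<'EOF'
image: REGISTRY/DOCKERHUB_NAMESPACE/APP_NAME:SNAPSHOT-BUILD_NUMBER
replicas: REPLICAS
EOF

# Substitute each placeholder in place, exactly as the deploy stages do.
sed -i "s#REGISTRY#192.168.139.184:8899#" deployment.yaml
sed -i "s#DOCKERHUB_NAMESPACE#snapshot#" deployment.yaml
sed -i "s#APP_NAME#shop-gateway#" deployment.yaml
sed -i "s#BUILD_NUMBER#42#" deployment.yaml
sed -i "s#REPLICAS#3#" deployment.yaml

cat deployment.yaml
# image: 192.168.139.184:8899/snapshot/shop-gateway:SNAPSHOT-42
# replicas: 3
```

Using `#` as the sed delimiter avoids having to escape the slashes in image paths and URLs, which is why the pipeline does the same.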