1. Deploying Jenkins in K8s: pros and cons, briefly introduced
1. Problems with the traditional Jenkins cluster architecture
- When the Master fails, the whole pipeline becomes unavailable
- Slave nodes carry different environment configurations so they can build and package different languages, but these differences make them hard to manage and troublesome to maintain
- Resources are allocated unevenly: jobs queue up on some slaves while other slaves sit idle
- Resources are wasted: each slave is a physical or virtual machine, and even an idle slave cannot fully release its resources
2. Advantages of a Jenkins cluster architecture on K8s
- When the Jenkins Master receives a build request, it dynamically creates a Jenkins Slave running in a Pod according to the configured label and registers it with the Master. When the job finishes, the Slave is deregistered and its Pod is deleted automatically, returning everything to the initial state (this behavior is configurable; see the short sketch after this list)
- High availability of the service: if the Jenkins Master fails, Kubernetes automatically creates a new Jenkins Master container and attaches the existing volume to it, so no data is lost and the cluster service stays highly available
- Dynamic scaling and sensible use of resources: a Jenkins Slave is created automatically for every job and is deregistered and deleted when the job completes, releasing its resources; Kubernetes also schedules slaves onto idle nodes based on each node's resource usage, reducing the queueing that occurs when a single node is overloaded
- Good scalability: when the Kubernetes cluster runs seriously short of resources and jobs start to queue, a Kubernetes node can easily be added to the cluster to scale it out
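The dynamic agent lifecycle described above is easy to observe once the setup later in this guide is running: while a build executes, the command below (the devops namespace is the one used in the deployment steps that follow) shows the slave Pod being created and then removed.

```bash
# Watch slave/agent Pods appear when a build starts and disappear when it finishes
kubectl get pods -n devops -w
```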
2. Cluster environment
3. Deploying Jenkins with two controller types: Deployment and StatefulSet
Jenkins with the Deployment controller
1) Configure the NFS server on k8s-node1
```bash
# Install NFS on all server nodes
yum install -y nfs-utils
systemctl enable nfs-server rpcbind --now

# Create the NFS shared directory and grant permissions
mkdir -p /data/k8s
chmod -R 777 /data/k8s

# Write /etc/exports
cat > /etc/exports << EOF
/data/k8s 192.168.56.0/24(rw,no_root_squash,sync)
EOF

# Reload the exports
systemctl reload nfs-server

# Verify with:
showmount -e 192.168.56.11
....
```
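As a quick sanity check, you can confirm from another node that the export is actually mountable; the commands below are a suggested verification, not part of the original walkthrough.

```bash
# On a worker node: install the NFS client tools and test-mount the share
yum install -y nfs-utils
showmount -e 192.168.56.11
mount -t nfs 192.168.56.11:/data/k8s /mnt
touch /mnt/nfs-write-test && rm -f /mnt/nfs-write-test
umount /mnt
```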
Create the YAML files required for the Jenkins cluster
1) Create the namespace and a directory to hold the Jenkins YAML files

```bash
kubectl create namespace devops
mkdir -p /opt/jenkins
```

2) Create a PV and PVC for Jenkins persistent storage
```bash
cat >/opt/jenkins/jenkins_pv.yaml <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: opspv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Delete
  nfs:
    server: 192.168.56.11
    path: /data/k8s
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: opspvc
  namespace: devops
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
EOF
```

3) Create the Jenkins ServiceAccount and RBAC file
```bash
cat >/opt/jenkins/jenkins_rbac.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: devops
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1   # rbac v1beta1 has been removed in newer Kubernetes releases
metadata:
  name: jenkins
rules:
  - apiGroups: ["extensions", "apps"]
    resources: ["deployments"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkins
subjects:
  - kind: ServiceAccount
    name: jenkins
    namespace: devops
EOF
```

4) Create the Jenkins Deployment
```bash
# Quote the heredoc delimiter so that $(LIMITS_MEMORY) is written literally instead of being expanded by the shell
cat >/opt/jenkins/jenkins_deployment.yaml <<'EOF'
apiVersion: apps/v1            # extensions/v1beta1 no longer exists; apps/v1 requires an explicit selector
kind: Deployment
metadata:
  name: jenkins
  namespace: devops
spec:
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: jenkins
      containers:
        - name: jenkins
          image: jenkins/jenkins:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080   # web UI port
              name: web
              protocol: TCP
            - containerPort: 50000  # JNLP port used by Jenkins slaves (agents) to connect
              name: agent
              protocol: TCP
          resources:
            limits:
              cpu: 1000m
              memory: 1Gi
            requests:
              cpu: 500m
              memory: 512Mi
          livenessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60  # wait 60 seconds after the container starts before probing
            timeoutSeconds: 5
            failureThreshold: 12     # retries before giving up; a failed liveness check restarts the Pod, a failed readiness check marks it NotReady (default 3, minimum 1)
          readinessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12
          volumeMounts:              # jenkins_home must be mounted out so the data persists
            - name: jenkinshome
              subPath: jenkins
              mountPath: /var/jenkins_home
          env:
            - name: LIMITS_MEMORY
              valueFrom:
                resourceFieldRef:
                  resource: limits.memory
                  divisor: 1Mi
            - name: JAVA_OPTS
              value: -Xmx$(LIMITS_MEMORY)m -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85 -Duser.timezone=Asia/Shanghai
      securityContext:
        fsGroup: 1000
      volumes:
        - name: jenkinshome
          persistentVolumeClaim:
            claimName: opspvc
EOF
```

5) Create the Jenkins Service
```bash
cat >/opt/jenkins/jenkins_svc.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: jenkins
  namespace: devops        # must be the same namespace as the Deployment
  labels:
    app: jenkins
spec:
  selector:
    app: jenkins
  type: NodePort
  ports:
    - name: web
      port: 8080
      targetPort: web
      nodePort: 30002
    - name: agent
      port: 50000
      targetPort: agent
EOF
```

6) Apply the files in order
```bash
[root@k8s-node1 jenkins]# ls
jenkins_deployment.yaml  jenkins_pv.yaml  jenkins_rbac.yaml  jenkins_svc.yaml
[root@k8s-node1 jenkins]# kubectl apply -f ./
```
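With everything applied, it can be worth confirming that the ServiceAccount really has the permissions the Kubernetes plugin will need; the checks below are an optional extra, not part of the original steps.

```bash
# Both commands should print "yes" if the ClusterRole/ClusterRoleBinding took effect
kubectl auth can-i create pods --as=system:serviceaccount:devops:jenkins -n devops
kubectl auth can-i get pods --subresource=log --as=system:serviceaccount:devops:jenkins -n devops
```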
7) View the results

```bash
[root@k8s-node1 jenkins]# kubectl get pv,pvc,pod,svc -n devops
NAME                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM           STORAGECLASS   REASON   AGE
persistentvolume/opspv   10Gi       RWX            Delete           Bound    devops/opspvc                           1h

NAME                            STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/opspvc    Bound    opspv    10Gi       RWX                           1h

NAME                           READY   STATUS    RESTARTS   AGE
pod/jenkins-6d7bc49b74-d9jxc   1/1     Running   0          1h

NAME              TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                          AGE
service/jenkins   NodePort   10.1.148.201   <none>        8080:30002/TCP,50000:26723/TCP   1h
```

Port 8080 is the Jenkins web access port.
Port 50000 is the JNLP port that Jenkins slaves (agents) use to connect to the master.
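To confirm that Jenkins is reachable from outside the cluster, you can hit the NodePort on any node; the node IP below is the example address used in this environment.

```bash
# Any HTTP response from Jenkins (200 or 403) confirms the NodePort is working
curl -I http://192.168.56.11:30002/login
```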
Jenkins with the StatefulSet controller
1. Create two directories

```bash
mkdir -pv /data/nfs-client/ /data/jenkins
```

2. NFS server
```bash
# Install NFS on all server nodes
yum install -y nfs-utils
systemctl enable nfs-server rpcbind --now

# Create the NFS shared directory and grant permissions
mkdir -p /data/k8s && chmod -R 777 /data/k8s

# Write /etc/exports
cat > /etc/exports << EOF
/data/k8s 192.168.56.0/24(rw,no_root_squash,sync)
EOF

# Reload the exports
systemctl reload nfs-server

# Verify with:
showmount -e 192.168.56.11
....
```

Deploy the nfs-client provisioner manifests (rbac, class, deployment)
1. Create the rbac.yaml file
```bash
cat > rbac.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
EOF
```

2. Create the class.yaml file
```bash
cat > class.yaml << EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs   # or choose another name; it must match the deployment's PROVISIONER_NAME env variable
parameters:
  archiveOnDelete: "false"
EOF
```

3. Create deployment.yaml
```bash
cat > deployment.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.56.11   # change to your own NFS server IP
            - name: NFS_PATH
              value: /data/k8s       # change to your shared directory
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.56.11    # change to your own NFS server IP
            path: /data/k8s          # change to your shared directory
EOF
```
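Before moving on, it may be worth checking that dynamic provisioning actually works; a throwaway PVC like the one below (its name and size are arbitrary and not part of the original guide) should reach the Bound state and produce a directory under /data/k8s on the NFS server.

```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF
kubectl get pvc test-claim      # STATUS should become Bound
kubectl delete pvc test-claim   # clean up afterwards
```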
Create the Jenkins YAML files
1) Create jenkins_rbac.yaml
```bash
cat > jenkins_rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1   # rbac v1beta1 has been removed in newer Kubernetes releases
kind: Role
metadata:
  name: jenkins
  namespace: devops
rules:
  - apiGroups: ["extensions", "apps"]
    resources: ["deployments"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins
  namespace: devops
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
  - kind: ServiceAccount
    name: jenkins
    namespace: devops
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole          # the binding below references a ClusterRole, so this must be a ClusterRole rather than a Role
metadata:
  name: jenkinsClusterRole
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkinsClusterRoleBinding
  namespace: devops
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkinsClusterRole
subjects:
  - kind: ServiceAccount
    name: jenkins
    namespace: devops
EOF
```

2) Create jenkins_serviceaccount.yaml
```bash
cat > jenkins_serviceaccount.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: devops
EOF
```

3) Create jenkins_StatefulSet.yaml
```bash
# Quote the heredoc delimiter so that $(LIMITS_MEMORY) is written literally instead of being expanded by the shell
cat > jenkins_StatefulSet.yaml << 'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jenkins
  namespace: devops          # keep the same namespace as the ServiceAccount and Role above (the original used devops01 here)
  labels:
    name: jenkins
spec:
  serviceName: jenkins
  selector:
    matchLabels:
      app: jenkins
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      name: jenkins
      labels:
        app: jenkins
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: jenkins
      containers:
        - name: jenkins
          image: jenkins/jenkins:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080   # web UI port
              name: web
              protocol: TCP
            - containerPort: 50000  # JNLP port used by Jenkins slaves (agents) to connect
              name: agent
              protocol: TCP
          resources:
            limits:
              cpu: 1000m
              memory: 1Gi
            requests:
              cpu: 500m
              memory: 512Mi
          livenessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60  # wait 60 seconds after the container starts before probing
            timeoutSeconds: 5
            failureThreshold: 12     # retries before giving up; a failed liveness check restarts the Pod, a failed readiness check marks it NotReady (default 3, minimum 1)
          readinessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12
          volumeMounts:              # jenkins_home must be mounted out so the data persists
            - name: jenkins-home
              subPath: jenkins
              mountPath: /var/jenkins_home
          env:
            - name: LIMITS_MEMORY
              valueFrom:
                resourceFieldRef:
                  resource: limits.memory
                  divisor: 1Mi
            - name: JAVA_OPTS
              value: -Xmx$(LIMITS_MEMORY)m -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85 -Duser.timezone=Asia/Shanghai
      securityContext:
        fsGroup: 1000
  volumeClaimTemplates:
    - metadata:
        name: jenkins-home
      spec:
        storageClassName: "managed-nfs-storage"
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi
EOF
```

4) Create jenkins_Service.yaml
```bash
cat > jenkins_Service.yaml << EOF
apiVersion: v1
kind: Service
metadata:
  name: jenkins
  namespace: devops        # same namespace as the StatefulSet
  labels:
    app: jenkins
spec:
  selector:
    app: jenkins
  type: NodePort
  ports:
    - name: web
      port: 8080
      targetPort: web
      nodePort: 30003      # must be in the cluster's NodePort range (default 30000-32767); the original value 8081 would be rejected
    - name: agent
      port: 50000
      targetPort: agent
EOF
```

5) Apply the files in order
```bash
[root@k8s-node1 jenkins]# ls
jenkins_rbac.yaml  jenkins_serviceaccount.yaml  jenkins_Service.yaml  jenkins_StatefulSet.yaml
[root@k8s-node1 jenkins]# kubectl apply -f ./
```

6) View the results
.....
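The output is elided above; it can be reproduced with something along these lines (the namespace matches the manifests in this section):

```bash
kubectl get pvc,pod,svc -n devops
```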
Once Jenkins is deployed, it can be opened in a browser on any cluster node IP at the Service's NodePort.
Administrator password location: the data is persisted under /data/k8s on the NFS server, so all of Jenkins' configuration lives under that directory.

```bash
cat /data/k8s/jenkins/secrets/initialAdminPassword
```
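If you would rather not log in to the NFS server, the same file can be read from inside the Pod; the pod name jenkins-0 assumes the StatefulSet variant (for the Deployment variant, substitute the generated pod name).

```bash
kubectl exec -n devops jenkins-0 -- cat /var/jenkins_home/secrets/initialAdminPassword
```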
1) Simply install the recommended plugins.
2) After installation, open the Jenkins home page.
3) Jenkins -> Plugins -> install the Kubernetes plugin.
Configure Kubernetes in Jenkins
1) Manage Jenkins -> Configure System
2) Scroll to the bottom of the configuration page and find the Kubernetes plugin (cloud) section
- Name: a name for this cloud configuration
- Kubernetes URL: the in-cluster API server address, which is simply the `kubernetes` service name: `https://kubernetes.default.svc.cluster.local`
- Kubernetes Namespace: the K8s namespace, i.e. the namespace Jenkins is running in
3) Jenkins URL configuration
- Jenkins URL: the Jenkins Service name plus its namespace, i.e. the address used to reach Jenkins from inside the cluster; with the manifests above this is `http://jenkins.devops.svc.cluster.local:8080` and normally does not need to be changed
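If either URL is in doubt, both names can be resolved from a throwaway pod inside the cluster; the busybox image and pod name below are only an example.

```bash
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup jenkins.devops.svc.cluster.local
```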
4) Configure a Jenkins Slave Pod template
- Name = the Pod name
- Namespace = the Pod namespace
- Labels = the Pod labels
5) Configure the container template
6) Configure the volumes
When the Jenkins Master receives a build request, it dynamically creates a Jenkins Slave running in a Pod according to the configured label and registers it with the Master. When the job finishes running, the Slave is deregistered and its Pod is deleted automatically, returning everything to the initial state.
7) Test and verify
Create a new job and choose the Pipeline type.
Pipeline script:
```groovy
def label = "jenkins-slave"

podTemplate(label: label, cloud: 'kubernetes') {
    node(label) {
        stage('pull code') {
            echo "Pull the code"
        }
        stage('build') {
            echo "Compile the code"
        }
        stage('SonarQube') {
            echo "Quality scan"
        }
    }
}
```
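For reference, the pod, container, and volume settings from steps 4) to 6) can also be declared directly in the pipeline instead of the UI. The sketch below is one possible version; the maven image, the hostPath cache directory, and the label are illustrative choices, not values taken from the original setup.

```groovy
def label = "jenkins-slave-maven"

podTemplate(
    label: label,
    cloud: 'kubernetes',
    containers: [
        // A build container; the jnlp agent container is added by the Kubernetes plugin automatically
        containerTemplate(name: 'maven', image: 'maven:3-jdk-8', command: 'cat', ttyEnabled: true)
    ],
    volumes: [
        // Cache the local Maven repository on the node (the host path is an example)
        hostPathVolume(hostPath: '/data/m2', mountPath: '/root/.m2')
    ]
) {
    node(label) {
        stage('build') {
            container('maven') {
                sh 'mvn -version'
            }
        }
    }
}
```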
Execution result