云原生混合架构 K8s 自动化部署平台

本项目构建了一套 “本地虚拟化 + 阿里云公有云” 的混合云原生 K8s 自动化部署平台,核心目标是落地安全隔离、自动化交付、弹性稳定且可监控的运维体系,完整覆盖从基础环境搭建到云原生集群部署、服务交付、混合云网络打通的全流程。

1 环境搭建

本阶段核心目标是通过虚拟化技术创建4个节点的本地集群(1个master节点+3个node节点),为后续云原生环境测试、CI/CD组件部署提供基础环境。

1.1 环境规划

| 节点角色 | CPU | 内存 | 磁盘 | IP规划(桥接模式) |
| --- | --- | --- | --- | --- |
| master节点(master) | 2核 | 8G | 50G | 192.168.0.200 |
| node1节点(node1) | 2核 | 8G | 50G | 192.168.0.201 |
| node2节点(node2) | 2核 | 8G | 50G | 192.168.0.202 |
| node3节点(node3) | 2核 | 8G | 50G | 192.168.0.203 |
| 阿里云ECS(Jenkins) | 2核 | 4G | 40G | 弹性公网IP |
| 阿里云ECS(Gitlab) | 2核 | 8G | 40G | 弹性公网IP |
| 阿里云ACR容器镜像服务 | - | - | - | - |
| 阿里云SLS日志服务 | - | - | - | - |

1.2 技术栈总览

  • 虚拟化层:VMware Workstation 17 Pro、Ubuntu 22.04;
  • 云原生核心:Kubernetes 1.32.10、containerd 1.7.18、Calico CNI;
  • 公有云服务(阿里云):ECS、SLS、ACR;
  • CI/CD链路:GitLab、Jenkins、ArgoCD;
  • 监控体系:Prometheus、Grafana、Alertmanager。

1.3 虚拟机创建与系统部署

  1. 打开VMware,创建新虚拟机,选择“自定义(高级)”模式,硬件兼容性默认;
  2. 选择Ubuntu 镜像文件(22.04.5),设置虚拟机名称与存储路径;
  3. 按规划配置CPU、内存,网络选择“桥接模式”(确保虚拟机可访问外网,使用桥接后续与ECS网络互通比较方便);
  4. 磁盘选择“创建新虚拟磁盘”,容量50G,勾选“将虚拟磁盘拆分为多个文件”;
  5. 启动虚拟机,安装Ubuntu系统:设置root密码(统一为Root@123456,测试环境简化),分区选择“自动分区”,等待安装完成后重启;
  6. 克隆虚拟机:右键已创建的master节点虚拟机,选择“克隆”,创建完整克隆,分别命名为node1、node2、node3,避免重复安装系统;

7.修改各节点网络配置:

# 编辑网络配置文件,固定静态IP
vim /etc/netplan/50-cloud-init.yaml
network:
  version: 2
  ethernets:
    ens32:
      dhcp4: no
      addresses: [192.168.0.200/24]
      routes:
        - to: default
          via: 192.168.0.1
      nameservers:
        addresses: [223.5.5.5,114.114.114.114]
# 应用网络配置
netplan apply

8.关闭各节点selinux(Ubuntu默认不启用)
9.关闭防火墙(Ubuntu默认不启用)
10.永久关闭swap交换分区(注释fstab并临时关闭,补充命令见下)

vim /etc/fstab
# 注释swap挂载行
#/swap.img none swap sw 0 0
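注释 fstab 只影响下次开机,当前会话还需要手动关闭 swap 并验证(补充示例,通用做法):

swapoff -a          # 立即关闭当前已启用的swap
swapon --show       # 无输出说明swap已全部关闭
free -h             # Swap 一行应显示为 0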

11.修改主机名

# 在对应节点上分别执行
hostnamectl set-hostname master
hostnamectl set-hostname node1
hostnamectl set-hostname node2
hostnamectl set-hostname node3

12.添加docker官方 GPG密钥

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg 

13.更新系统并安装基础依赖

apt update && apt upgrade -y
apt install -y ca-certificates curl gnupg lsb-release apt-transport-https software-properties-common

14.加载内核参数

# 加载必需内核模块(overlay/br_netfilter,容器存储/网络依赖)
modprobe overlay
modprobe br_netfilter
cat > /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
# 配置内核网络参数(确保容器网络转发/端口映射正常)
cat > /etc/sysctl.d/99-containerd.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
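参数生效后可以顺手验证一下(补充示例,模块应能查到,内核参数应均为 1):

lsmod | grep -E 'overlay|br_netfilter'
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward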

15.添加docker官方软件源

# 查看系统发行版本代号
lsb_release -cs
# 添加软件源
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu jammy stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null

16.配置hosts域名映射

vim /etc/hosts
192.168.0.200 master
192.168.0.201 node1
192.168.0.202 node2
192.168.0.203 node3

17.配置代理

vim ~/.bashrc
export http_proxy="http://192.168.0.101:7890"
export https_proxy="http://192.168.0.101:7890"
export no_proxy="192.168.0.0/24,localhost,127.0.0.1,10.96.0.0/12,10.20.0.0/16,cluster.local,.svc,.svc.cluster.local,192.168.0.200"
# 使配置生效
source ~/.bashrc

2 云原生核心层部署(本地k8s集群)

本阶段核心目标是搭建基于K8s 1.32.10和containerd 1.7.18的云原生集群。

2.1 部署containerd 1.7.18(k8s集群)

2.1.1 前置准备
# 添加docker官方 GPG密钥
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
# 更新系统并安装基础依赖
apt update && apt upgrade -y
apt install -y ca-certificates curl gnupg lsb-release apt-transport-https software-properties-common
# 加载必需内核模块(overlay/br_netfilter,容器存储/网络依赖)
modprobe overlay
modprobe br_netfilter
cat > /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
# 配置内核网络参数(确保容器网络转发/端口映射正常)
cat > /etc/sysctl.d/99-containerd.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
2.1.2 添加docker官方软件源
# 查看系统发行版本代号(Ubuntu 22.04 对应 jammy)
lsb_release -cs
# 添加软件源
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu jammy stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
2.1.3 更新源安装指定版本
apt update
apt install -y containerd.io=1.7.18-1
2.1.4 适配 systemd

参考官方文档:容器运行时 | Kubernetes

# 生成默认配置文件(containerd 默认不带配置文件,需手动生成)
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
# 配置 systemd cgroup 驱动:结合 runc 使用 systemd cgroup 驱动,
# 需在 /etc/containerd/config.toml 中设置:
# [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
#   ...
#   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
#     SystemdCgroup = true
# 一键修改 SystemdCgroup = true(适配 Ubuntu 的 systemd 管理)
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
# 替换pause镜像仓库为国内源
sed -i 's/registry.k8s.io\/pause/registry.aliyuncs.com\/google_containers\/pause/g' /etc/containerd/config.toml
# 重启服务使配置生效
systemctl restart containerd && systemctl enable containerd
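改完可以快速确认配置是否落地(补充示例):

grep -n 'SystemdCgroup' /etc/containerd/config.toml    # 应为 true
grep -n 'sandbox_image' /etc/containerd/config.toml    # 应指向阿里云的pause镜像
systemctl is-active containerd                         # 应输出 active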

containerd 部署后遇到了无法拉取镜像的问题,排查与解决过程如下:

# 现象:拉取镜像失败(连接 Docker Hub 被拒绝)
root@master:/# ctr image pull docker.io/library/busybox:alpine
WARN[0000] Config "/etc/crictl.yaml" does not exist, trying next: "/usr/bin/crictl.yaml"
WARN[0000] Image connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
E1202 00:06:16.457412 16804 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/busybox:alpine\": failed to resolve reference \"docker.io/library/busybox:alpine\": failed to do request: Head \"https://registry-1.docker.io/v2/library/busybox/manifests/alpine\": dial tcp 54.89.135.129:443: connect: connection refused" image="docker.io/library/busybox:alpine"
FATA[0020] pulling image: failed to pull and unpack image "docker.io/library/busybox:alpine": failed to resolve reference "docker.io/library/busybox:alpine": failed to do request: Head "https://registry-1.docker.io/v2/library/busybox/manifests/alpine": dial tcp 54.89.135.129:443: connect: connection refused

# 配置 crictl 指定容器运行时端点,解决上面的警告
vim /etc/crictl.yaml
--------------------------
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
pull-image-on-create: false
---------------------------
crictl info    # 无报错则配置生效
# 如果没有/etc/containerd/config.toml,先生成默认配置:
containerd config default > /etc/containerd/config.toml

# 网络代理:为containerd创建代理配置文件(解决拉取境外镜像被拒)
mkdir -p /etc/systemd/system/containerd.service.d
cat > /etc/systemd/system/containerd.service.d/proxy.conf <<EOF
[Service]
Environment="HTTP_PROXY=http://192.168.0.101:7890"
Environment="HTTPS_PROXY=http://192.168.0.101:7890"
Environment="NO_PROXY=localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,172.16.0.0/12,*.local,kubernetes.default,service,*.cluster.local,192.168.0.200,192.168.0.*,crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com"
EOF
# 重新加载配置并重启
systemctl daemon-reload
systemctl restart containerd
# 再次拉取验证
root@master1:~# ctr image pull docker.io/library/nginx:latest
Image is up to date for sha256:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
2.1.5 部署nerdctl工具

nerdctl 兼容 docker 命令语法;注意 containerd 按命名空间(namespace)管理镜像和容器,k8s 使用的是 k8s.io 命名空间。

# 下载对应版本
curl -L https://github.com/containerd/nerdctl/releases/download/v1.7.0/nerdctl-1.7.0-linux-amd64.tar.gz -o nerdctl.tar.gz
# 解压到系统路径
sudo tar Cxzvf /usr/local/bin nerdctl.tar.gz nerdctl
# 验证安装
nerdctl version
# 常用命令
# 查看指定命名空间的镜像
nerdctl -n 命名空间名称 images
# 删除指定命名空间的容器/镜像
nerdctl -n 命名空间名称 rm
# 语法适配docker,差别是docker换成nerdctl,且需要在命令前指定命名空间
nerdctl -n 命名空间名称 (images/rm/tag/rmi/stop/pull/push)
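k8s 拉起的镜像都在 k8s.io 命名空间下,可以用下面的命令直观对比(补充示例):

ctr -n k8s.io images ls | head     # containerd原生命令查看k8s使用的镜像
nerdctl -n k8s.io images           # nerdctl查看同一命名空间,输出风格与docker一致
nerdctl images                     # 不指定命名空间时默认是default,看不到k8s的镜像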

2.2 部署k8s集群

参考官方文档:安装 kubeadm | Kubernetes

2.2.1 安装 kubelet、kubeadm、kubectl 1.32.10(每台服务器执行)
# 配置内核参数(开启 IPVS/IP 转发)
# 加载内核模块
sudo tee /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
sudo modprobe overlay && sudo modprobe br_netfilter && sudo modprobe ip_vs
# 配置sysctl参数
sudo tee /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system   # 生效配置
# 更新apt包索引
apt-get update
# 安装k8s apt仓库需要的包
apt-get install -y apt-transport-https ca-certificates curl gpg
# 下载用于k8s软件包仓库的公共签名密钥
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
# 添加 K8s apt 仓库
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
# 更新 apt 包索引,安装 kubelet、kubeadm 和 kubectl,并锁定版本
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
2.2.2 kubeadm初始化k8s集群

初始化过程中遇到报错:containerd 的 sandbox(pause)镜像版本与 kubeadm 要求不一致,将其修改为 3.10:

sed -i 's#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.8"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.10"#g' /etc/containerd/config.toml 
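sed 修改的是 /etc/containerd/config.toml,需要重启 containerd 才会生效(补充):

grep sandbox_image /etc/containerd/config.toml   # 确认已指向 pause:3.10
systemctl restart containerd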

初始化 k8s 集群:

# kubeadm 初始化集群
root@master:~# vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: v1.32.10
imageRepository: registry.aliyuncs.com/google_containers   # 阿里云镜像源(解决拉取镜像慢)
networking:
  podSubnet: 10.20.0.0/16   # 需和后续部署的网络插件网段匹配(如Calico适配此网段)
controlPlaneEndpoint: "192.168.0.200:6443"   # apiserver对外地址
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
nodeRegistration:
  ignorePreflightErrors:
    - SystemVerification   # 忽略cgroups v1警告
  criSocket: unix:///run/containerd/containerd.sock   # containerd套接字(正确格式)
  kubeletExtraArgs:
    - name: cgroup-driver
      value: "systemd"   # 需和containerd的cgroup驱动一致(默认就是systemd)
localAPIEndpoint:
  advertiseAddress: 192.168.0.200   # master节点IP(和controlPlaneEndpoint保持一致)
  bindPort: 6443

root@master:~# kubeadm init --config=kubeadm-config.yaml
# 初始化成功后会输出添加节点命令
kubeadm join 192.168.0.200:6443 --token zwf3h4.qcy63iq2avjnflvt \
    --discovery-token-ca-cert-hash sha256:7de5455af5d69939dfb49379f85d7f4f96e9a7962920569a8f29b4ca3079d21e
2.2.3 配置管理权限
# 配置 kubectl 的配置文件 config,相当于对 kubectl 授权,之后 kubectl 可使用该证书管理 k8s 集群
root@master:~# mkdir -p $HOME/.kube
root@master:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@master:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config
2.2.4 扩容工作节点
# 在每一个工作节点执行(使用 kubeadm init 输出中的 join 命令)
kubeadm join 192.168.0.200:6443 --token e6p5bq.bqju9z9dqwj2ydvy \
    --discovery-token-ca-cert-hash sha256:9b3750aedaed5c1c3f95f689ce41d7da1951f2bebba6e7974a53e0b20754a09d
2.2.5 把工作节点的roles设置为work
root@master:~# kubectl label node node1 node-role.kubernetes.io/work=work
root@master:~# kubectl label node node2 node-role.kubernetes.io/work=work
root@master:~# kubectl label node node3 node-role.kubernetes.io/work=work
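打完标签后可确认 ROLES 列已更新(补充示例,输出大致如下):

root@master:~# kubectl get nodes
# NAME     STATUS   ROLES           VERSION
# master   Ready    control-plane   v1.32.10
# node1    Ready    work            v1.32.10
# node2    Ready    work            v1.32.10
# node3    Ready    work            v1.32.10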
2.2.6 安装kubernetes 网络组件-Calico
# 下载Calico配置文件(适配k8s 1.32)
root@master:~# curl -O https://raw.githubusercontent.com/projectcalico/calico/v3.30.0/manifests/calico.yaml
# 部署calico
root@master:~# kubectl apply -f calico.yaml
# 验证网络插件状态(等待所有Pod Running)
kubectl get pods -n kube-system -w
2.2.7 命令补全
# 安装bash-completion
apt update && apt install -y bash-completion
# 将kubectl补全写入bashrc配置文件
echo "source <(kubectl completion bash)" >> ~/.bashrc
# 生效配置
source ~/.bashrc

2.3 环境确认

# 1. 确认containerd 1.7.18运行状态
systemctl status containerd
# 2. 确认k8s集群节点状态(1个master+3个node均为Ready)
kubectl get nodes
# 3. 确认虚拟机网络可达阿里云(ping OSS endpoint测试)
ping oss-cn-hangzhou.aliyuncs.com

2.4 部署核心微服务(主节点执行)

通过K8s原生部署服务,采用PVC实现数据持久化。部署前需先完成服务镜像的构建与阿里云ACR推送,具体步骤如下:

2.4.1 前置准备:确认基础环境与资源
  • 环境要求:已安装Containerd(本地k8s环境已部署),且本地机器可访问阿里云ACR(网络通畅,无防火墙限制);
  • 资源准备:① 商品服务源代码(含pom.xml/mvn配置文件,用于编译构建),本文此部分用nginx官方镜像代替;② 阿里云账号(已开通ACR服务,拥有命名空间权限);③ 本地已配置阿里云访问凭证(或后续步骤中输入账号密码登录ACR)。
2.4.2 步骤1:开通阿里云ACR服务
  1. 进入ACR控制面板
image-20260106130533425

2.进入个人版实例,创建命名空间

image-20260106130650854
image-20260106130704294
  3. 创建本地私有镜像仓库
image-20260106130745295
2.4.3 步骤2:制作镜像并上传ACR镜像仓库(master节点)

此章节商品服务镜像的构建采用nginx官方镜像作为案例,模拟实际生产环境中的微服务镜像

  1. 拉取nginx官方镜像
root@master:~# crictl pull nginx:latest Image is up to datefor sha256:058f4935d1cbc026f046e4c7f6ef3b1d778170ac61f293709a2fc89b1cff7009 root@master:~# crictl images IMAGE TAG IMAGE ID SIZE docker.io/calico/cni v3.30.0 15f996c472622 71.8MB docker.io/calico/node v3.30.0 d12dae9bc0999 156MB docker.io/library/nginx latest 058f4935d1cbc 59.8MB registry.aliyuncs.com/google_containers/coredns v1.11.3 c69fa2e9cbf5f 18.6MB registry.aliyuncs.com/google_containers/etcd 3.5.24-0 8cb12dd0c3e42 23.7MB registry.aliyuncs.com/google_containers/kube-apiserver v1.32.10 77f8b0de97da9 29.1MB registry.aliyuncs.com/google_containers/kube-controller-manager v1.32.10 34e0beef266f1 26.6MB registry.aliyuncs.com/google_containers/kube-proxy v1.32.10 db4bcdca85a39 31.2MB registry.aliyuncs.com/google_containers/kube-scheduler v1.32.10 fd6f6aae834c2 21.1MB registry.aliyuncs.com/google_containers/pause 3.10 873ed75102791 320kB registry.aliyuncs.com/google_containers/pause 3.8 4873874c08efc 311kB 
  2. 登录阿里云
# 官方命令# docker login --username=chenjun_3127103271 crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com root@master:~# nerdctl login --username=chenjun_3127103271 crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com Enter Password: WARNING: Your password will be stored unencrypted in /root/.docker/config.json. Configure a credential helper to remove this warning. See https://docs.docker.com/engine/reference/commandline/login/#credentials-store Login Succeeded 
  3. 将镜像推送到阿里云镜像仓库
# 官方命令# docker tag [ImageId] crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com/product-service-test/product-service:[镜像版本号] root@master:~# nerdctl -n k8s.io tag docker.io/library/nginx:latest crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com/product-service-test/product-service:v1 root@master:~# nerdctl -n k8s.io images REPOSITORY TAG IMAGE ID CREATED PLATFORM SIZE BLOB SIZE crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com/product-service-test/product-service v1 ca871a86d45a 9 seconds ago linux/amd64 157.5 MiB 57.0 MiB # 官方命令# docker push crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com/product-service-test/product-service:[镜像版本号] root@master:~# nerdctl -n k8s.io push crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com/product-service-test/product-service:v1 INFO[0000] pushing as a reduced-platform image (application/vnd.oci.image.index.v1+json, sha256:32502741bf9dbc4ad2c22e24f46c001506711f5bb7d674ac043aaa3242326ef3) index-sha256:32502741bf9dbc4ad2c22e24f46c001506711f5bb7d674ac043aaa3242326ef3: done|++++++++++++++++++++++++++++++++++++++| manifest-sha256:8c329d819008c669731d333c44c766c1d9de3492beb03f8fc035bb5ef7081000: done|++++++++++++++++++++++++++++++++++++++| config-sha256:058f4935d1cbc026f046e4c7f6ef3b1d778170ac61f293709a2fc89b1cff7009: done|++++++++++++++++++++++++++++++++++++++| elapsed: 1.3 s 
image-20260106132221351

控制台进入镜像版本可以看到镜像已经推送成功

2.4.4 步骤3:部署服务(基于ACR镜像)
1.ConfigMap 配置(Nginx主页)
root@master:~/yaml/product-service# vim product-service-welcome-cm.yaml apiVersion: v1 kind: ConfigMap metadata: name: welcome-nginx-cm namespace: product data: index.html: |<!DOCTYPE html><html><head><title>Welcome</title></head><body><h1>v1</h1></body></html>
2.创建sc动态存储卷供应
root@master:~# mkdir yaml root@master:~# cd yaml# 部署 local-path-provisioner kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.24/deploy/local-path-storage.yaml root@master:~/yaml# vim sc-local-path.yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: local-path provisioner: rancher.io/local-path reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer parameters: pathPattern: "/var/lib/local-path-provisioner" root@master:~/yaml# kubectl apply -f sc-local-path.yaml root@master:~/yaml# kubectl get sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE local-path rancher.io/local-path Delete WaitForFirstConsumer true 68m 
3.创建PVC(持久化存储)
# 创建命名空间 root@master:~/yaml# kubectl create ns product  root@master:~/yaml# vim product-service-pvc.yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: product-service-pvc namespace: product spec: accessModes: - ReadWriteOnce storageClassName: local-path resources: requests: storage: 10Gi # 应用PVC配置 root@master:~/yaml# kubectl apply -f product-service-pvc.yaml root@master:~/yaml# kubectl get pvc -n product NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE product-service-pvc Bound pvc-f9f2916d-98ba-4435-aa80-ffcfb342cd6a 10Gi RWO local-path <unset> 69m root@master:~/yaml# kubectl get pv -n product NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE pvc-f9f2916d-98ba-4435-aa80-ffcfb342cd6a 10Gi RWO Delete Bound product/product-service-pvc local-path <unset> 68m <unset> 60m 
4.部署服务(deployment)
# 需要先创建凭证否则无权限拉取私有镜像 kubectl create secret docker-registry acr-pull-secret \ --namespace=product \ --docker-server=crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com \ --docker-username=用户名 \ --docker-password='密码'# 创建商品服务部署配置文件 root@master:~/yaml# vim product-service-deploy.yaml apiVersion: apps/v1 kind: Deployment metadata: name: product-service namespace: product spec: replicas: 3 selector: matchLabels: app: product-service # 滚动更新配置 strategy: type: RollingUpdate rollingUpdate: maxSurge: 1 maxUnavailable: 0 template: metadata: labels: app: product-service spec: imagePullSecrets: - name: acr-pull-secret # ACR密钥 affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: # 硬亲和性 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname # 按节点名匹配(指定只调度到node2) operator: In values: - node2 containers: - name: product-service image: crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com/product-service-test/product-service:v1 ports: - containerPort: 80# 挂载ConfigMap volumeMounts: - name: welcome-page mountPath: /usr/share/nginx/html/index.html subPath: index.html - name: product-data mountPath: /data # 资源限制 resources: limits: cpu: 500m memory: 512Mi requests: cpu: 200m memory: 256Mi # 健康检查 livenessProbe: httpGet: path: / port: 80 initialDelaySeconds: 5 periodSeconds: 10 readinessProbe: httpGet: path: / port: 80 initialDelaySeconds: 3 periodSeconds: 5 volumes: - name: welcome-page configMap: name: welcome-nginx-cm items: - key: index.html path: index.html - name: product-data persistentVolumeClaim: claimName: product-service-pvc # 应用 root@master:~/yaml# kubectl apply -f product-service-deploy.yaml  deployment.apps/product-service configured root@master:~/yaml# kubectl get pod -n product  NAME READY STATUS RESTARTS AGE product-service-65dff7d8d4-b8lc7 1/1 Running 0 6s product-service-65dff7d8d4-czc7w 1/1 Running 0 4s product-service-65dff7d8d4-gcpsp 1/1 Running 0 5s 
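上面的 Deployment 通过硬节点亲和把副本固定调度到 node2,应用后可以确认副本的实际落点(补充示例):

root@master:~/yaml# kubectl get pod -n product -o wide   # NODE 列应全部为 node2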
5.创建svc暴露服务端口
root@master:~/yaml# vim product-service-svc.yaml apiVersion: v1 kind: Service metadata: name: product-service-nodeport namespace: product labels: app: product-service spec: # 类型为NodePort,用于暴露到节点端口,暂时暴露测试页面内容以及后续ci/cd版本变化 type: NodePort selector: app: product-service # 端口映射配置 ports: - name: http port: 80 targetPort: 80# NodePort端口(范围30000-32767,固定一个方便测试) nodePort: 30080 protocol: TCP apiVersion: v1 kind: Service metadata: name: product-service namespace: product spec: selector: app: product-service ports: - port: 80 targetPort: 8080 type: ClusterIP root@master:~/yaml# kubectl apply -f product-service-svc.yaml  service/product-service created root@master:~/yaml/product-service# kubectl get svc -n product  NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE product-service-nodeport NodePort 10.107.131.224 <none>80:30080/TCP 17h 

浏览器访问 http://192.168.0.201:30080(NodePort 对任意节点 IP 均生效,也可用下方命令行快速验证)

image-20260110132809242
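除浏览器外,也可以在命令行验证 NodePort(补充示例):

curl http://192.168.0.201:30080    # 应返回 ConfigMap 中定义的欢迎页(<h1>v1</h1>)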
6.创建hpa自动扩缩容
# 下载官方部署文件 root@master:~/yaml# wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml# 修改Metrics Server部署,跳过kubelet TLS验证 root@master:~/yaml# vim components.yaml spec: containers: - args: - --kubelet-insecure-tls # 添加此行 - --cert-dir=/tmp - --secure-port=10250 - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname - --kubelet-use-node-status-port - --metric-resolution=15s root@master:~/yaml# kubectl apply -f components.yaml root@master:~/yaml# vim product-service-hpa.yaml apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: product-service-hpa namespace: product spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: product-service minReplicas: 2 maxReplicas: 10 metrics: - type: Resource resource: name: cpu target: type: Utilization # 基于资源使用率 averageUtilization: 50# 目标 CPU 平均使用率:50% behavior: scaleUp: stabilizationWindowSeconds: 30# 扩容前稳定观察时间(避免抖动) policies: - type: Percent # 按百分比扩容 value: 50# 每次扩容50%的当前副本数 periodSeconds: 60# 扩容间隔(60秒内仅触发一次) scaleDown: stabilizationWindowSeconds: 600# 缩容前稳定观察时间(默认5分钟,避免缩容过快) policies: - type: Percent value: 30 periodSeconds: 60 root@master:~/yaml# kubectl apply -f product-service-hpa.yaml  horizontalpodautoscaler.autoscaling/product-service-hpa created root@master:~/yaml# kubectl get hpa -n product  NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE product-service-hpa Deployment/product-service cpu: 0%/50% 2103 97s 
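HPA 建好后可以用一个临时压测 Pod 验证扩容是否生效(示意做法,通过集群内 Service 域名持续请求制造 CPU 负载,压测结束删除该 Pod 即可):

# 起一个busybox循环请求商品服务
kubectl run load-test -n product --image=busybox:1.36 --restart=Never -- \
  /bin/sh -c "while true; do wget -q -O- http://product-service-nodeport.product.svc.cluster.local > /dev/null; done"
# 另开终端观察HPA指标与副本数变化
kubectl get hpa -n product -w
# 压测完成后清理
kubectl delete pod load-test -n product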

3 CI/CD链路搭建

配置云端ECS与本地Kubernetes集群关联需要打通网络,通过 WireGuard VPN打通本地VMware环境与阿里云VPC,实现本地k8s集群访问阿里云RDS、OSS等资源。

3.1 混合云网络打通

服务端:阿里云 ECS(公网可访问),部署 WireGuard 作为 VPN 服务端。

客户端:本地 K8s 集群的主节点(Master),部署 WireGuard 客户端,接入 VPN 网络。

核心目标

本地 K8s 节点 ↔ ECS 互通;

3.1.1 阿里云ECS配置

创建ECS实例:

操作系统:Ubuntu 22.04

网络: VPC(10.0.0.0/16)和交换机(10.0.10.0/24) 弹性公网ip

安全组入方向规则:

| 服务 | 协议 | 访问来源 | 目的端口 |
| --- | --- | --- | --- |
| WireGuard监听端口 | UDP | 本机ip | 51820 |
| GitLab | TCP | 本机ip + vpc专有网络网段 + jenkins公网ip | 443 |
| GitLab | TCP | 本机ip + vpc专有网络网段 + jenkins公网ip | 80 |
| GitLab ssh端口 | TCP | 所有ip | 2222 |
| jenkins | TCP | 本机ip + vpc专有网络网段 + gitlab公网ip | 8080 |
| jenkins | TCP | 本机ip + vpc专有网络网段 | 50000 |
image-20260110140845637
3.1.2 WireGuard 安装与配置

(1)在阿里云 ECS以及本地节点 上安装 WireGuard

常用命令

# 启动(自动加载配置)
wg-quick up wg0
# 停止
wg-quick down wg0
# 重启
wg-quick down wg0 && wg-quick up wg0
# 显示所有接口状态
wg show
# 显示指定接口状态
wg show wg0
# 显示接口详细信息(包括私钥、监听端口等)
wg show wg0 dump
apt update
apt install wireguard -y
# 生成ECS端密钥
mkdir -p /etc/wireguard
cd /etc/wireguard
# 生成私钥(同时导出公钥)
sudo wg genkey | sudo tee private.key | sudo wg pubkey > public.key
# 私钥
root@iZwz9cnnlu0g55olnxfuw4Z:/etc/wireguard# cat private.key
YG9CkSAnVIy4F8hIiE6ugma5xcgDiT5bMqqTRcy0M2M=
# 公钥
root@iZwz9cnnlu0g55olnxfuw4Z:/etc/wireguard# cat public.key
k5FafPFqLcQG6MhkIrHy8U2fg5bhN/VgDpXqmiVgwls=

(2)在ECS节点 上创建配置文件

# 创建配置文件/etc/wireguard/wg0.confvim /etc/wireguard/wg0.conf [Interface]# ECS在VPN网络中的内网IP(固定为10.255.255.1/24) Address =10.255.255.1/24 # WireGuard监听端口(默认51820,需放行UDP) ListenPort =51820# ECS的WireGuard私钥(替换为第一步生成的ecs_private.key内容) PrivateKey = YJUSqwLfS/VZWsC8qBXPxdIiilsRBUnbZszPtrKoN0A=# 启动时配置转发和NAT(eth0替换为ECS实际网卡名,执行ip addr查看) PostUp = iptables -A FORWARD -i wg0 -j ACCEPT PostUp = iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE PostUp = ip6tables -A FORWARD -i wg0 -j ACCEPT # 可选(IPv6) PostUp = ip6tables -t nat -A POSTROUTING -o eth0 -j MASQUERADE # 可选(IPv6)# 停止时清理规则 PostDown = iptables -D FORWARD -i wg0 -j ACCEPT PostDown = iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE PostDown = ip6tables -D FORWARD -i wg0 -j ACCEPT # 可选(IPv6) PostDown = ip6tables -t nat -D POSTROUTING -o eth0 -j MASQUERADE # 可选(IPv6)# 本地K8s master(添加Peer段,每个K8s节点对应一个Peer)[Peer]# K8s节点1的公钥(替换为第一步生成的k8s_public.key内容) PublicKey =8JAEThs8LkcYv27YBc1ROVX2QMD9TODwsYKuUmLHyRI=# 允许的IP:K8s节点1的VPN IP + K8s集群的Pod网段 + Service网段# 10.255.255.2/32(节点VPN IP)+ 192.168.0.0/24(本地节点网段) AllowedIPs =10.255.255.2/32, 192.168.0.0/24 # 启动 systemctl enable wg-quick@wg0 systemctl start wg-quick@wg0 # 检查状态 root@jenkins:/etc/wireguard# wg show wg0 interface: wg0 public key: fwNl1Us9Hk0oEebqGLdi8Bo9NyeiFoUAIYYeX5qdsHI= private key: (hidden) listening port: 51820 peer: 8JAEThs8LkcYv27YBc1ROVX2QMD9TODwsYKuUmLHyRI= allowed ips: 10.255.255.2/32, 192.168.0.0/24 

(3)在本地节点 上创建配置文件

# 创建配置文件/etc/wireguard/wg0.confvim /etc/wireguard/wg0.conf [Interface]# 该K8s节点的VPN IP(与ECS的Peer段AllowedIPs对应,如节点1为10.255.255.2/24) Address =10.255.255.2/24 # 该K8s节点的WireGuard私钥(替换为k8s_private.key内容) PrivateKey =iHhpTPwdNSl4cCYCPmOGyUDU46gcAtuNlsRn1QqTOVg=# 可选:客户端监听端口(自动随机分配,可省略)# ListenPort = 51820# 启动时添加路由(确保K8s网段能转发到VPN) PostUp = sysctl -w net.ipv4.ip_forward=1# 若K8s使用calico/flannel等CNI,需确保路由不冲突,可添加自定义路由(可选)# PostUp = ip route add ECS侧网段 via 10.255.255.1 dev wg0[Peer]# ECS的公钥(替换为ecs_public.key内容) PublicKey =fwNl1Us9Hk0oEebqGLdi8Bo9NyeiFoUAIYYeX5qdsHI=# 允许的IP:ECS的VPN IP + ECS侧需要访问的网段(如ECS内网IP、阿里云其他服务网段)# 0.0.0.0/0表示所有流量走VPN,填ECS VPN IP+ECS所属交换机网段) AllowedIPs =10.255.255.1/32, 10.0.10.0/24 # ECS的公网IP + WireGuard端口 Endpoint =8.129.132.103:51820 # 保持连接(防止隧道断开) PersistentKeepalive =25# 启动 systemctl enable wg-quick@wg0 systemctl start wg-quick@wg0 root@master:/etc/wireguard# wg show interface: wg0 public key: 8JAEThs8LkcYv27YBc1ROVX2QMD9TODwsYKuUmLHyRI= private key: (hidden) listening port: 37352 peer: fwNl1Us9Hk0oEebqGLdi8Bo9NyeiFoUAIYYeX5qdsHI= endpoint: 8.129.132.103:51820 allowed ips: 10.255.255.1/32, 10.0.10.0/24 latest handshake: 4 seconds ago transfer: 92 B received, 180 B sent persistent keepalive: every 25 seconds # 在本地K8s节点执行(添加iptables转发规则)ens32替换为实际网卡名# 1. 允许wg0(VPN)↔ ens32(物理网卡)的流量转发 iptables -A FORWARD -i wg0 -o ens32 -j ACCEPT iptables -A FORWARD -i ens32 -o wg0 -j ACCEPT 
遇到的问题:

在systemctl start wg-quick@wg0 启动之后master节点的calico-node-t4r7h处于未运行状态

停止 wg0 接口后 Calico 恢复正常,核心问题:WireGuard 的 wg0 接口抢占了 10.0.0.0/16 网段的路由,而 Calico 的 Pod 网段(10.20.219.64/26)恰好属于这个范围,导致 Calico 的 BGP 通信流量被错误路由到 wg0 接口(而非集群内网的 ens32 接口),最终引发 BGP 连接失败。
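排查这类问题时,可以先确认到对端节点或 Pod 网段的流量实际走哪块网卡(补充示例,IP 以上文的 Calico Pod 网段为例,正常应走 ens32 而不是 wg0):

ip route get 10.20.219.65
ip route show | grep -E 'wg0|ens32'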

解决方案

编辑 Calico 的 DaemonSet,强制其使用集群内网的 ens32 接口(而非 wg0)进行 BGP 通信:

# 编辑calico-node的DaemonSet配置
kubectl edit ds calico-node -n kube-system
# 在spec.template.spec.containers.env部分,添加以下环境变量:
- name: IP_AUTODETECTION_METHOD
  value: "interface=ens32"   # 强制Calico使用ens32接口,避开wg0
- name: CALICO_NETWORK_INTERFACE
  value: "ens32"             # 兼容旧版本Calico

最后重新启动 systemctl start wg-quick@wg0 观察calico运行状态,问题解决

3.1.3 连通性测试

从本地 VM 测试到阿里云ECS

root@master:~# ping 10.255.255.1 PING 10.255.255.1 (10.255.255.1)56(84) bytes of data. 64 bytes from 10.255.255.1: icmp_seq=1ttl=64time=20.8 ms 64 bytes from 10.255.255.1: icmp_seq=2ttl=64time=22.3 ms 64 bytes from 10.255.255.1: icmp_seq=3ttl=64time=21.3 ms ^C --- 10.255.255.1 ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2003ms rtt min/avg/max/mdev =20.815/21.464/22.275/0.606 ms root@master:~# ping 10.0.10.45 PING 10.0.10.45 (10.0.10.45)56(84) bytes of data. 64 bytes from 10.0.10.45: icmp_seq=1ttl=64time=22.2 ms 64 bytes from 10.0.10.45: icmp_seq=2ttl=64time=20.9 ms 64 bytes from 10.0.10.45: icmp_seq=3ttl=64time=21.3 ms ^C --- 10.0.10.45 ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2003ms rtt min/avg/max/mdev =20.913/21.498/22.240/0.552 ms root@master:~# telnet 10.0.10.45 22 Trying 10.0.10.45... Connected to 10.0.10.45. Escape character is '^]'. SSH-2.0-OpenSSH_8.9p1 Ubuntu-3ubuntu0.13 # 可以看到测试全部ping通

从阿里云 ECS 测试到本地

root@iZwz9cnnlu0g55olnxfuw4Z:/etc/wireguard# ping 10.255.255.2 PING 10.255.255.2 (10.255.255.2)56(84) bytes of data. 64 bytes from 10.255.255.2: icmp_seq=1ttl=64time=20.9 ms 64 bytes from 10.255.255.2: icmp_seq=2ttl=64time=21.4 ms 64 bytes from 10.255.255.2: icmp_seq=3ttl=64time=21.0 ms ^C --- 10.255.255.2 ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2003ms rtt min/avg/max/mdev =20.873/21.103/21.424/0.233 ms root@iZwz9cnnlu0g55olnxfuw4Z:/etc/wireguard# ping 192.168.121.100 PING 192.168.121.100 (192.168.121.100)56(84) bytes of data. 64 bytes from 192.168.121.100: icmp_seq=1ttl=64time=21.0 ms 64 bytes from 192.168.121.100: icmp_seq=2ttl=64time=20.8 ms 64 bytes from 192.168.121.100: icmp_seq=3ttl=64time=20.5 ms ^C --- 192.168.121.100 ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2001ms rtt min/avg/max/mdev =20.541/20.778/20.956/0.174 ms # 测试完成本地与ecs网络互通

3.2 部署GitLab(阿里云ECS)

3.2.1 在阿里云ECS创建一台实例

2核4G

绑定弹性公网ip

部署docker社区版

ubuntu 22.04与本地环境一致

image-20260106164505680

修改实例名,复制公网ip,进行远程连接

安全组设置允许本机ip和jenkins服务器的公网ip访问

3.2.2 docker-compose部署Gitlab
  1. 安装docker-compose
root@iZwz90hzjc4m3pd9ick3miZ:~# hostnamectl set-hostname gitlab
root@iZwz90hzjc4m3pd9ick3miZ:~# su
# 安装docker-compose
root@gitlab:~# apt install -y docker-compose
root@gitlab:~# docker-compose --version
docker-compose version 1.25.0, build unknown
  2. 创建 GitLab 数据持久化目录
# 创建三个核心目录(配置、数据、日志)
root@gitlab:~# mkdir -p /data/gitlab/{config,data,logs}
# 设置目录权限(避免容器读写权限不足)
root@gitlab:~# chmod -R 777 /data/gitlab
  3. 创建docker-compose.yml文件
root@gitlab:~# vim docker-compose.yml
version: '3'
services:
  gitlab:
    image: gitlab/gitlab-ce:14.3.6-ce.0
    container_name: gitlab
    privileged: true
    restart: always
    ports:
      - "80:80"
      - "443:443"
      - "2222:22"
    volumes:
      - /data/gitlab/config:/etc/gitlab
      - /data/gitlab/data:/var/opt/gitlab
      - /data/gitlab/logs:/var/log/gitlab
    environment:
      - TZ=Asia/Shanghai
      - GITLAB_OMNIBUS_CONFIG=external_url 'http://120.25.51.235'; gitlab_rails['gitlab_shell_ssh_port']=2222;
# external_url:必须填写 ECS 公网 IP,否则访问会报错;
# gitlab_shell_ssh_port:对应主机映射的 2222 端口,后续克隆代码需用此端口;
  4. 启动容器
root@gitlab:~# docker-compose up -d Creating network "root_default" with the default driver Pulling gitlab (gitlab/gitlab-ce:latest)... latest: Pulling from gitlab/gitlab-ce 7b1a6ab2e44d: Pull complete 6c37b8f20a77: Pull complete f50912690f18: Pull complete bb6bfd78fa06: Pull complete 2c03ae575fcd: Pull complete 839c111a7d43: Pull complete 4989fee924bc: Pull complete 666a7fb30a46: Pull complete Digest: sha256:5a0b03f09ab2f2634ecc6bfeb41521d19329cf4c9bbf330227117c048e7b5163 Status: Downloaded newer image for gitlab/gitlab-ce:latest Creating gitlab ... done# 查看容器启动日志(确认是否正常) root@gitlab:~# docker-compose logs -f gitlab# 当日志中出现 gitlab Reconfigured! 时,说明初始化完成。
  5. 获取初始化密码
# GitLab 默认生成 root 用户的随机密码,存储在容器内的/etc/gitlab/initial_root_password文件中:# 进入容器 root@gitlab:~# docker exec -it gitlab /bin/bash# 查看初始密码 root@4c054babda87:/# cat /etc/gitlab/initial_root_password# WARNING: This value is valid only in the following conditions# 1. If provided manually (either via `GITLAB_ROOT_PASSWORD` environment variable or via `gitlab_rails['initial_root_password']` setting in `gitlab.rb`, it was provided before database was seeded for the first time (usually, the first reconfigure run).# 2. Password hasn't been changed manually, either via UI or via command line.## If the password shown here doesn't work, you must reset the admin password following https://docs.gitlab.com/ee/security/reset_user_password.html#reset-your-root-password. Password: jqV6Dmlo+kbke3pLVFP0PTV2ttWiFPnDq54uX4WQ0Hc=# NOTE: This file will be automatically deleted in the first reconfigure run after 24 hours.
    1. 打开浏览器,访问 http://弹性公网ip
    2. 输入用户名 root,粘贴上述初始密码,点击登录;
    3. 修改密码,设置新的强密码。
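initial_root_password 文件 24 小时后会被自动删除;如果届时还没改密码,可以用 gitlab-rake 重置(补充示例,适用于本例的容器化安装,GitLab 13.9+ 提供该 rake 任务):

docker exec -it gitlab gitlab-rake "gitlab:password:reset[root]"
# 按提示输入两次新密码即可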
访问 GitLab 并登录
image-20260106174317230
image-20260107134500255
image-20260107134514829
image-20260107134532185

如果无法访问,需要检查安全组是否放行了相应端口;没有则新建规则允许80和443端口访问。

image-20260106163729794
3.2.3 初始化gitlab

创建项目

image-20260106175326506
image-20260106175341276
image-20260106175458883
image-20260106183943308
3.2.4 配置master节点ssh免密认证
root@master:~/gitlab/e-commerce-platform# cat ~/.ssh/id_rsa.pub ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC2/jHMzETQsYS0+IkoKsZGDvqF3mmjEMYS1hjGfJnMin+mPRKH0quZll/4RuHFky3sbn3WSDonCcgvXP0TWUZTvCe9CGvlnU+zkkCMuOwCRqXNb/pXeAjzOCDBkUX+vXYHrhkmtNPylS8JDAuOdr+6qnIKG8GBjRVFmu7tl6+NFgjgpEGbgTE6vowWK+J3zKx6iN7FCKx+oMcWdEvcOy/WNnYWq7uCfQQgXerONTKHTJ6I9z6x/MMHnCTszSAYHSr7D9HV9un0k9tnoV5cSTA0tuDmFzNWX288v702DWDxgDJeaJLSeQTAAu6lm93GAdNC77QpI7IPDcZ/NkO3/AQoE5yIdCX8ApE7hobNQVL/24+8n+EmzfYsP+IWK/SWf7WZV4BR7v1QTz2M7HqPiYNR5rxOniCAhJ4dwnoS4LjeYMknGoB4SBqPcnpoUZT9q1iYf02JunKgCpAHSdNJ4IfbdiKYeO6IlCPL78xjvEAfOuqwSjOgUbiH70OXWfrJKmj5j/4J4crWm7cApCcevx6dzqo072rQtZLLoOZSBf114EkjCglE5W0hlnh6/sivBt/Yq0iNMAGVBsexJ8c8n5+saKuY+T1SU5JQiIeoISgVG/Ssv1913RRravFj5Fme3A8UnyYri0/4k3PYGu7QBBTytFmuim3sBYaQIzmqpRBLbw== root@master 
image-20260110134627296

3.3 部署jenkins(阿里云ECS)

3.3.1 在阿里云ECS创建一台实例

2核4G

分配公网ip

部署docker社区版

ubuntu 22.04与本地环境一致

image-20260106184134799

修改实例名,复制弹性公网ip进行远程连接

3.3.2 docker-compose部署jenkins
  1. 安装docker-compose
root@iZwz9749p6a8r7y1673ypyZ:~# hostnamectl set-hostname jenkins
root@iZwz9749p6a8r7y1673ypyZ:~# su
# 安装docker-compose
root@jenkins:~# apt install -y docker-compose
  2. 创建 Jenkins 数据目录(持久化数据)
# 1. 创建目录
root@jenkins:~# mkdir -p /opt/jenkins/data
# 2. 设置目录权限
root@jenkins:~# chown -R 1000:1000 /opt/jenkins/data
root@jenkins:~# chmod -R 755 /opt/jenkins/data
  3. 编写 docker-compose.yml 文件
# 在/opt/jenkins目录下创建docker-compose.yml文件:
root@jenkins:~# cd /opt/jenkins
root@jenkins:/opt/jenkins# vim docker-compose.yml
version: '2.2'
services:
  jenkins:
    image: jenkins/jenkins:2.528.2
    container_name: jenkins   # 容器名称
    restart: always           # 容器异常退出时自动重启
    privileged: true          # 赋予容器特权(解决权限问题)
    user: root                # 使用root用户运行容器(简化权限配置)
    ports:
      - "8080:8080"           # 宿主机8080映射到容器8080(Jenkins访问端口)
      - "50000:50000"         # 宿主机50000映射到容器50000(代理端口)
    volumes:
      - ./data:/var/jenkins_home                     # 持久化Jenkins数据目录
      - /var/run/docker.sock:/var/run/docker.sock    # 让Jenkins容器能访问宿主机Docker(如需构建镜像)
      - /usr/bin/docker:/usr/bin/docker              # 映射Docker命令到容器内
      - /usr/local/bin/docker-compose:/usr/local/bin/docker-compose   # 映射docker-compose命令到容器内
    environment:
      - TZ=Asia/Shanghai      # 设置时区为上海(避免日志时间错乱)
  4. 启动jenkins容器
root@jenkins:/opt/jenkins# docker-compose up -d Creating network "jenkins_default" with the default driver Pulling jenkins (jenkins/jenkins:2.528.2)... 2.528.2: Pulling from jenkins/jenkins 13cc39f8244a: Pull complete dc2a77f462ea: Pull complete 33300af18dd0: Pull complete c27509c3e53b: Pull complete e4beac64dffa: Pull complete a37b858bb47a: Pull complete 744b4792e083: Pull complete 05a7d9a8b608: Pull complete 8d2a75b252b2: Pull complete 65e4ba8066bc: Pull complete 5dc07232677a: Pull complete 7718ff514022: Pull complete Digest: sha256:7b1c378278279c8688efd6168c25a1c2723a6bd6f0420beb5ccefabee3cc3bb1 Status: Downloaded newer image for jenkins/jenkins:2.528.2 Creating jenkins ... done root@jenkins:/opt/jenkins# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES e6e126cdd99b jenkins/jenkins:2.528.2 "/sbin/tini -- /usr/…"2 seconds ago Up 2 seconds 0.0.0.0:8080->8080/tcp, [::]:8080->8080/tcp, 0.0.0.0:50000->50000/tcp, [::]:50000->50000/tcp jenkins 
3.3.3 jenkins初始化配置
  1. 访问jenkins页面

在浏览器中输入:http://ECS公网IP:8080(需要在安全组放行8080访问端口和50000代理端口)。

image-20260106190554104

2. 输入初始化密码

root@jenkins:/opt/jenkins# docker exec -it jenkins cat /var/jenkins_home/secrets/initialAdminPassword
de747fc1faa540cabfcd937c36e71ac6
# 在页面中粘贴获取的initialAdminPassword,点击 “继续”。
  3. 安装插件

选择 “安装推荐的插件”,等待插件安装完成。

若部分插件安装失败,可点击 “重试”,或后续在 Jenkins 插件管理中手动安装。

我这里上传准备好的插件包,节省下载安装时间

image-20260106192225989
root@jenkins:/opt/jenkins# mv plugins.tar data/
root@jenkins:/opt/jenkins# cd data/
root@jenkins:/opt/jenkins/data/# tar -xvf plugins.tar
  4. 创建用户
image-20260106192800043
  5. 配置实例地址
image-20260106192815437
image-20260106192957072
  6. 安装额外插件:GitLab Plugin、Kubernetes Plugin、Nexus Plugin;
  7. 配置GitLab关联:在Jenkins系统管理→系统设置→GitLab中,添加GitLab服务器,输入GitLab地址和Access Token(从GitLab个人设置→Access Tokens创建);
image-20260106194413476
image-20260106194444630
image-20260106194502456
image-20260110171554091
image-20260106194201145
image-20260106195915637

测试连通性,显示success说明连接成功

image-20260106202253134

最后保存

如果连接失败,需要检查 GitLab 所在 ECS 的安全组是否放行了 Jenkins 服务器公网 IP 访问 GitLab 公网 IP 的 80 端口。

3.4 部署Argo CD(本地k8s集群)

3.4.1 安装ArgoCD
# 创建argocd命名空间
root@master:~/yaml# kubectl create namespace argocd
root@master:~/yaml# mkdir argocd/
# 安装ArgoCD
root@master:~/yaml/argocd# kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.8.3/manifests/install.yaml
# 暴露ArgoCD UI(本地访问用NodePort)
root@master:~/yaml/argocd# kubectl patch svc argocd-server -n argocd -p '{"spec":{"type":"NodePort"}}'
# 获取ArgoCD初始密码(用户名:admin)
root@master:~/yaml/argocd# kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d && echo
eWQ3NOqAVLDGak1o
root@master:~/yaml/argocd# kubectl get svc -n argocd
NAME                                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
argocd-applicationset-controller          ClusterIP   10.106.160.96    <none>        7000/TCP,8080/TCP            27h
argocd-dex-server                         ClusterIP   10.107.111.20    <none>        5556/TCP,5557/TCP,5558/TCP   27h
argocd-metrics                            ClusterIP   10.97.249.73     <none>        8082/TCP                     27h
argocd-notifications-controller-metrics   ClusterIP   10.110.61.50     <none>        9001/TCP                     27h
argocd-redis                              ClusterIP   10.105.69.236    <none>        6379/TCP                     27h
argocd-repo-server                        ClusterIP   10.99.240.50     <none>        8081/TCP,8084/TCP            27h
argocd-server                             NodePort    10.108.75.197    <none>        80:31375/TCP,443:32324/TCP   27h
argocd-server-metrics                     ClusterIP   10.110.198.250   <none>        8083/TCP                     27h

访问 ArgoCD UI:http://<本地K8s节点IP>:<argocd-server的NodePort>,用admin和初始密码登录。

image-20260110141401318
3.4.2 配置 ArgoCD 访问 GitLab

1.在 ArgoCD UI 中 → Settings → Repositories → Connect Repo

Repository URL:GitLab 的仓库地址(http://47.112.194.64/root/e-commerce-platform.git)

Type:Git

Authentication:Username + Password → 用户名填 GitLab 账号root,密码填之前生成的个人访问令牌

点击 Connect,验证连接成功。

image-20260110141541342
image-20260110141610327
image-20260110141953995
image-20260110142004620
3.5 Jenkins 流水线配置
3.5.1 git克隆gitlab仓库

首先本地克隆gitlab仓库,然后进行代码文件编写与提交

root@master:~# mkdir gitlab root@master:~# cd gitlab root@master:~/github# git config --global user.name "chenjun" root@master:~/github# git config --global user.email "[email protected]" root@master:~/github# git config --global color.ui true root@master:~/github# git config --list# 初始化 root@master:~/github# git init# 克隆gitlab远程仓库到本地,由于之前已经配置了ssh免密,现在直接clone就可以了 root@master:~/gitlab# git clone ssh://[email protected]:2222/root/e-commerce-platform.git Cloning into 'e-commerce-platform'... remote: Enumerating objects: 17, done. remote: Counting objects: 100% (14/14), done. remote: Compressing objects: 100% (13/13), done. remote: Total 17(delta 2), reused 0(delta 0), pack-reused 3 Receiving objects: 100% (17/17), done. Resolving deltas: 100% (2/2), done. root@master:~/gitlab# ls e-commerce-platform root@master:~/gitlab# cd e-commerce-platform/
image-20260110142829558
3.5.2 编写 Jenkinsfile(存放在 GitLab 的仓库根目录)
root@master:~/gitlab/e-commerce-platform# vim Jenkinsfile pipeline { agent any environment { ACR_REGISTRY ="crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com/product-service-test" APP_NAME ="product-service" GITLAB_REPO_URL ="http://47.112.194.64/root/e-commerce-platform.git" GITLAB_REPO_HOST ="47.112.194.64/root/e-commerce-platform.git" GIT_CRED_ID ="Gitlab-token-Secret"# secret text格式密钥 ACR_CRED_ID ="acr-cred" MANIFEST_FILE ="product-service-deploy.yaml" MANIFEST_CLONE_DIR ="e-commerce-platform-manifests" VERSION_FILE ="version.txt"} options { timeout(time: 30, unit: 'MINUTES') retry(1) skipDefaultCheckout(false) disableConcurrentBuilds()} stages { stage('Check Skip Conditions'){ steps { script { // 只检测Jenkins提交,不再检测version.txt变更 def commitMessage = sh( script: 'git log -1 --pretty=%B || echo ""', returnStdout: true).trim() def commitAuthor = sh( script: 'git log -1 --pretty=%an || echo ""', returnStdout: true).trim() // 如果是Jenkins提交,跳过 if(commitMessage.contains('[Jenkins]')|| commitMessage.contains('[ci skip]')|| commitAuthor =='jenkins-bot'){echo"===== 检测到Jenkins提交,跳过构建 =====" currentBuild.result ='SUCCESS' env.SKIP_BUILD ='true'return} // 用户提交,继续构建(即使version.txt被修改) echo"===== 用户提交,继续构建 ====="}}} stage('Get Version'){ when { expression { env.SKIP_BUILD !='true'}} steps { script { // 从源代码读取版本号(用户手动指定的) env.NEXT_VERSION = sh( script: 'cat ${VERSION_FILE} 2>/dev/null || echo "v0"', returnStdout: true).trim()if(env.NEXT_VERSION =='v0'){ error "version.txt不存在或为空,请先创建并提交"}echo"使用手动指定的版本: ${env.NEXT_VERSION}"}}} stage('Build Docker Image'){ when { expression { env.SKIP_BUILD !='true'}} steps {echo"===== 构建镜像:${ACR_REGISTRY}/${APP_NAME}:${NEXT_VERSION} ====="sh""" if[! -f Dockerfile ];thenecho'错误:Dockerfile不存在'exit1fidocker build --no-cache -t ${ACR_REGISTRY}/${APP_NAME}:${NEXT_VERSION}.""" }} stage('Push to ACR'){ when { expression { env.SKIP_BUILD !='true'}} steps {echo"===== 推送镜像到ACR =====" withCredentials([usernamePassword( credentialsId: "${ACR_CRED_ID}", passwordVariable: 'ACR_PWD', usernameVariable: 'ACR_USER')]){sh""" echo${ACR_PWD}|docker login --username ${ACR_USER} --password-stdin ${ACR_REGISTRY.split('/')[0]}docker push ${ACR_REGISTRY}/${APP_NAME}:${NEXT_VERSION}dockerlogout${ACR_REGISTRY.split('/')[0]}""" }}} stage('Update K8s Manifest'){ when { expression { env.SKIP_BUILD !='true'}} steps {echo"===== 更新K8s清单 =====" withCredentials([string(credentialsId: "${GIT_CRED_ID}", variable: 'GITLAB_TOKEN')]){ script {sh"rm -rf ${MANIFEST_CLONE_DIR} 2>/dev/null || true"sh""" git clone http://oauth2:${GITLAB_TOKEN}@${GITLAB_REPO_HOST}${MANIFEST_CLONE_DIR}||{echo'克隆仓库失败'exit1}""" dir("${MANIFEST_CLONE_DIR}"){ // 更新K8s清单 sh""" sed -i.bak 's|image: .*${APP_NAME}:.*|image: ${ACR_REGISTRY}/${APP_NAME}:${NEXT_VERSION}|g'${MANIFEST_FILE}rm -f ${MANIFEST_FILE}.bak """ // 可选:同步更新manifest仓库的version.txt(保持一致) sh""" echo"${NEXT_VERSION}">${VERSION_FILE}""" // 提交到manifest仓库 sh""" git config user.email "[email protected]"git config user.name "jenkins-bot"ifgit status --porcelain |grep -q .;thengitadd${MANIFEST_FILE}${VERSION_FILE}git commit -m "[Jenkins] Update ${APP_NAME} to ${NEXT_VERSION} [ci skip]"git push origin main echo"已推送修改到manifest仓库"elseecho"无修改,跳过提交"fi""" }}}}}} post { always {echo"===== 清理资源 ====="sh"rm -rf ${MANIFEST_CLONE_DIR} || true" script {if(env.NEXT_VERSION){sh"docker rmi ${ACR_REGISTRY}/${APP_NAME}:${NEXT_VERSION} || true 2>/dev/null"}}} success { script {if(env.SKIP_BUILD !='true'){echo"Pipeline成功!镜像:${ACR_REGISTRY}/${APP_NAME}:${NEXT_VERSION}"}else{echo"构建跳过(自动化提交)"}}} 
failure {echo"Pipeline失败!请检查配置"}}}
3.5.3 编写Dockerfile
root@master:~/gitlab/e-commerce-platform# vim Dockerfile FROM docker.io/library/nginx:latest 
3.5.4 准备k8s部署清单提交至代码仓库

product-service-deploy.yaml

product-service-welcome-cm.yaml

Dockerfile

Jenkinsfile

root@master:~/gitlab/e-commerce-platform# cp /root/yaml/product-service/product-service-deploy.yaml ./ root@master:~/gitlab/e-commerce-platform# cp /root/yaml/product-service/product-service-welcome-cm.yaml ./ root@master:~/gitlab/e-commerce-platform# ls Dockerfile Jenkinsfile product-service-deploy.yaml product-service-welcome-cm.yaml # git push 到gitlab仓库 root@master:~/gitlab/e-commerce-platform# git add ./ root@master:~/gitlab/e-commerce-platform# git commit -m "v1"[main 3429cff] v1 4 files changed, 156 insertions(+) create mode 100644 Dockerfile create mode 100644 Jenkinsfile create mode 100644 product-service-deploy.yaml create mode 100644 product-service-welcome-cm.yaml root@master:~/gitlab/e-commerce-platform# git push origin main Enumerating objects: 7, done. Counting objects: 100% (7/7), done. Delta compression using up to 2 threads Compressing objects: 100% (5/5), done. Writing objects: 100% (6/6), 2.51 KiB |2.51 MiB/s, done. Total 6(delta 0), reused 0(delta 0), pack-reused 0 To ssh://47.112.194.64:2222/root/e-commerce-platform.git 62ecaae..3429cff main -> main 
image-20260110150047972
3.6 配置 GitLab WebHook 触发 Jenkins
3.6.1 在 Jenkins 中创建流水线任务

新建任务 ->选择 “流水线” -> 名称设为product-service-ci

流水线 -> 定义选择 “Pipeline script from SCM” ->SCM 选 Git -> 填入 app-code 仓库地址,凭证选gitlab-token (必须是用户密码类型否则不显示)->分支main->脚本路径填Jenkinsfile -> 保存。

image-20260110150604625
image-20260110173126152
image-20260110152247772
3.6.2 配置 GitLab WebHook

生成jenkins api token

image-20260110173305281
image-20260110173322008
image-20260110173331288

进入 GitLab 的 app-code 仓库-> 设置 ->Webhooks -> 添加 Webhook:

URL:Jenkins 的触发地址(格式:http://用户名:API Token@<Jenkins公网IP>:8080/project/任务名)

触发条件:勾选 “Push events”

点击 “Add webhook” → 测试(点击 “Test” → 选 “Push events”),验证 Jenkins 能触发构建。

image-20260110173440881
image-20260110154408574
image-20260110173516048
image-20260110154436237
image-20260110173532207
3.6.3 测试jenkins自动构建
  1. 本地修改镜像版本号并提交
root@master:~/gitlab/e-commerce-platform# vim product-service-deploy.yaml # 修改为v2 image: crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com/product-service-test/product-service:v2 root@master:~/gitlab/e-commerce-platform# git add . root@master:~/gitlab/e-commerce-platform# git commit -m "test自动构建,修改了版本号"[main 84df6a9] test自动构建,修改了版本号 1file changed, 1 insertion(+), 1 deletion(-) root@master:~/gitlab/e-commerce-platform# git push origin main  Enumerating objects: 5, done. Counting objects: 100% (5/5), done. Delta compression using up to 2 threads Compressing objects: 100% (3/3), done. Writing objects: 100% (3/3), 334 bytes |334.00 KiB/s, done. Total 3(delta 2), reused 0(delta 0), pack-reused 0 To ssh://47.112.194.64:2222/root/e-commerce-platform.git 8e98ca7..84df6a9 main -> main 
image-20260110175113320
image-20260110175051365
image-20260110175247631

版本号会随构建次数自动叠加

上传到ACR镜像仓库成功

image-20260110175448087

输出日志

Started by GitLab push by Administrator Obtained Jenkinsfile from git http://47.112.194.64/root/e-commerce-platform.git [Pipeline] Start of Pipeline [Pipeline]node Running on Jenkins in /var/jenkins_home/workspace/product-service-ci [Pipeline]{[Pipeline] stage [Pipeline]{(Declarative: Checkout SCM)[Pipeline] checkout The recommended git tool is: NONE using credential Gitlab-token-us >git rev-parse --resolve-git-dir /var/jenkins_home/workspace/product-service-ci/.git # timeout=10 Fetching changes from the remote Git repository >git config remote.origin.url http://47.112.194.64/root/e-commerce-platform.git # timeout=10 Fetching upstream changes from http://47.112.194.64/root/e-commerce-platform.git >git --version # timeout=10>git --version # 'git version 2.47.3' using GIT_ASKPASS to set credentials gitlab的用户密码凭据 >git fetch --tags --force --progress -- http://47.112.194.64/root/e-commerce-platform.git +refs/heads/*:refs/remotes/origin/* # timeout=10 skipping resolution of commit remotes/origin/main, since it originates from another repository >git rev-parse refs/remotes/origin/main^{commit}# timeout=10 Checking out Revision 84df6a957f017b0e488b72121bf3e3d455cad5aa (refs/remotes/origin/main)>git config core.sparsecheckout # timeout=10>git checkout -f 84df6a957f017b0e488b72121bf3e3d455cad5aa # timeout=10 Commit message: "test自动构建,修改了版本号" First time build. Skipping changelog. [Pipeline]}[Pipeline] // stage [Pipeline] withEnv [Pipeline]{[Pipeline] withEnv [Pipeline]{[Pipeline]timeout Timeout set to expire in30 min [Pipeline]{[Pipeline] retry [Pipeline]{[Pipeline] stage [Pipeline]{(Check Skip Conditions)[Pipeline] script [Pipeline]{[Pipeline]sh + git log -1 --pretty=%B [Pipeline]sh + git log -1 --pretty=%an [Pipeline]echo 检测到chenjun的提交,继续构建 [Pipeline]}[Pipeline] // script [Pipeline]}[Pipeline] // stage [Pipeline] stage [Pipeline]{(Build Docker Image)[Pipeline]echo===== 构建镜像:crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com/product-service-test/product-service:v1 =====[Pipeline]sh + test -f Dockerfile [Pipeline]sh + docker build --no-cache -t crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com/product-service-test/product-service:v1 . DEPRECATED: The legacy builder is deprecated and will be removed in a future release. Install the buildx component to build images with BuildKit: https://docs.docker.com/go/buildx/ Sending build context to Docker daemon 354.3kB Step 1/1 : FROM docker.io/library/nginx:latest ---> 605c77e624dd Successfully built 605c77e624dd Successfully tagged crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com/product-service-test/product-service:v1 [Pipeline]}[Pipeline] // stage [Pipeline] stage [Pipeline]{(Push to ACR)[Pipeline]echo===== 推送镜像到ACR =====[Pipeline] withCredentials Masking supported pattern matches of $ACR_PWD[Pipeline]{[Pipeline]sh Warning: A secret was passed to "sh" using Groovy String interpolation, which is insecure. Affected argument(s) used the following variable(s): [ACR_PWD] See https://jenkins.io/redirect/groovy-string-interpolation for details. + echo **** + docker login --username chenjun_3127103271 --password-stdin crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com WARNING! Your credentials are stored unencrypted in'/root/.docker/config.json'. Configure a credential helper to remove this warning. 
See https://docs.docker.com/go/credential-store/ Login Succeeded [Pipeline]sh + docker push crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com/product-service-test/product-service:v1 The push refers to repository [crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com/product-service-test/product-service] d874fd2bc83b: Preparing 32ce5f6a5106: Preparing f1db227348d0: Preparing b8d6e692a25e: Preparing e379e8aedd4d: Preparing 2edcec3590a4: Preparing 2edcec3590a4: Waiting f1db227348d0: Layer already exists b8d6e692a25e: Layer already exists 32ce5f6a5106: Layer already exists d874fd2bc83b: Layer already exists e379e8aedd4d: Layer already exists 2edcec3590a4: Layer already exists v1: digest: sha256:ee89b00528ff4f02f2405e4ee221743ebc3f8e8dd0bfd5c4c20a2fa2aaa7ede3 size: 1570[Pipeline]sh + dockerlogout crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com Removing login credentials for crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com [Pipeline]}[Pipeline] // withCredentials [Pipeline]}[Pipeline] // stage [Pipeline] stage [Pipeline]{(Update K8s Manifest)[Pipeline]echo===== 更新K8s清单,版本:v1 =====[Pipeline] withCredentials Masking supported pattern matches of $GITLAB_TOKEN[Pipeline]{[Pipeline]sh Warning: A secret was passed to "sh" using Groovy String interpolation, which is insecure. Affected argument(s) used the following variable(s): [GITLAB_TOKEN] See https://jenkins.io/redirect/groovy-string-interpolation for details. + git clone http://oauth2:****@47.112.194.64/root/e-commerce-platform.git e-commerce-platform-manifests Cloning into 'e-commerce-platform-manifests'... [Pipeline]dir Running in /var/jenkins_home/workspace/product-service-ci/e-commerce-platform-manifests [Pipeline]{[Pipeline]sh + sed -i.bak s|image: .*product-service:.*|image: crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com/product-service-test/product-service:v1|g product-service-deploy.yaml + rm -f product-service-deploy.yaml.bak [Pipeline]sh + echo v1 [Pipeline]sh + git config user.email [email protected] + git config user.name chenjun + git status --porcelain + grep -q . + gitadd product-service-deploy.yaml version.txt + git commit -m [Jenkins] Update product-service to v1 [ci skip][main 1a7ff1b][Jenkins] Update product-service to v1 [ci skip]2 files changed, 2 insertions(+), 2 deletions(-) + git push origin main To http://47.112.194.64/root/e-commerce-platform.git 84df6a9..1a7ff1b main -> main + echo 已推送修改到e-commerce-platform仓库 已推送修改到e-commerce-platform仓库 [Pipeline]}[Pipeline] // dir[Pipeline]}[Pipeline] // withCredentials [Pipeline]}[Pipeline] // stage [Pipeline] stage [Pipeline]{(Declarative: Post Actions)[Pipeline]echo===== 清理资源 =====[Pipeline]sh + rm -rf e-commerce-platform-manifests [Pipeline]sh + docker rmi crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com/product-service-test/product-service:v1 Untagged: crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com/product-service-test/product-service:v1 [Pipeline]echo Pipeline成功!镜像:crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com/product-service-test/product-service:v1 [Pipeline]}[Pipeline] // stage [Pipeline]}[Pipeline] // retry [Pipeline]}[Pipeline] // timeout[Pipeline]}[Pipeline] // withEnv [Pipeline]}[Pipeline] // withEnv [Pipeline]}[Pipeline] // node[Pipeline] End of Pipeline Finished: SUCCESS 
3.7 ArgoCD 配置自动同步
1. 在 ArgoCD UI 中 → New App:

Application Name:my-demo-app

Project:default

Sync Policy:勾选 “Automatic”(自动同步)、“Prune Resources”、“Self Heal”

Source:

  • Repository URL:GitLab 的部署清单仓库地址(本例即 e-commerce-platform 仓库)
  • Revision:main
  • Path:./(部署清单所在路径)

Destination:

  • Cluster URL:https://kubernetes.default.svc(本地 K8s)
  • Namespace:default
image-20260110183017358
image-20260110183200544
image-20260110183214503
2.点击 Create,ArgoCD 会自动同步部署清单到本地 K8s。
image-20260110183239965

滚动更新

image-20260110183255524

argo cd显示同步完成

image-20260110183415700
image-20260110183619520
3.8 全链路测试
3.8.1 触发CI/CD流程
  1. 本地修改 cm,提交并推送到 GitLab:

原页面内容

image-20260110183950517
# 修改页面内容 root@master:~/gitlab/e-commerce-platform# vim product-service-welcome-cm.yaml  apiVersion: v1 kind: ConfigMap metadata: name: welcome-nginx-cm namespace: product data: index.html: |<!DOCTYPE html><html><head><title>Welcome</title></head><body><h1>v2</h1></body># 修改成v2</html># 修改版本号 root@master:~/gitlab/e-commerce-platform# vim version.txt v2 root@master:~/gitlab/e-commerce-platform# git add . root@master:~/gitlab/e-commerce-platform# git commit -m "v2"[main 2c5da2e] v2 2 files changed, 2 insertions(+), 2 deletions(-) root@master:~/gitlab/e-commerce-platform# git push origin main  Enumerating objects: 7, done. Counting objects: 100% (7/7), done. Delta compression using up to 2 threads Compressing objects: 100% (2/2), done. Writing objects: 100% (4/4), 344 bytes |344.00 KiB/s, done. Total 4(delta 2), reused 2(delta 1), pack-reused 0 To ssh://47.112.194.64:2222/root/e-commerce-platform.git f8f2f1b..2c5da2e main -> main 

可以看到提交代码后,jenkins触发了自动构建

image-20260110191138020

状态完成

image-20260110191149081

迭代版本镜像上传成功

image-20260110191205817

argoCD 正在持续部署

image-20260110191224791

cm 内容从v1变成了v2

image-20260110191236066

pod镜像也变成了v2

image-20260110191251449

页面访问测试也变成了v2

image-20260110191300320

4 监控体系搭建(本地部署)

4.1 前置准备

4.1.1 阿里云ACR创建命名空间

4.1.2 拉取镜像并上传到对应的镜像仓库

推送时直接在 monitoring_k8s 命名空间后面带上镜像仓库名,ACR 会自动创建对应的镜像仓库(批量处理见下方脚本示例)。

# ctr拉取node-exporter镜像并上传ACR镜像仓库减少拉取时间 root@master:~# ctr images pull docker.io/prom/node-exporter:v1.8.1 root@master:~# nerdctl tag prom/node-exporter:v1.8.1 crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com/monitoring_k8s/node-exporter:v1.8.1 root@master:~# nerdctl push crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com/monitoring_k8s/node-exporter:v1.8.1# ctr拉取prometheus镜像并上传ACR镜像仓库减少拉取时间 root@master:~# ctr images pull docker.io/prom/prometheus:v2.53.1 root@master:~# nerdctl tag prom/prometheus:v2.53.1 crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com/monitoring_k8s/prometheus:v2.53.1 root@master:~# nerdctl push crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com/monitoring_k8s/prometheus:v2.53.1# ctr拉取grafana/grafana镜像并上传ACR镜像仓库减少拉取时间 root@master:~# ctr images pull docker.io/grafana/grafana:11.2.0 root@master:~# nerdctl tag grafana/grafana:11.2.0 crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com/monitoring_k8s/grafana:11.2.0 root@master:~# nerdctl push crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com/monitoring_k8s/grafana:11.2.0# ctr拉取blackbox-exporter镜像并上传ACR镜像仓库减少拉取时间 root@master:~# ctr images pull docker.io/prom/blackbox-exporter:v0.24.0 root@master:~# nerdctl tag prom/blackbox-exporter:v0.24.0 crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com/monitoring_k8s/blackbox-exporter:v0.24.0 root@master:~# nerdctl push crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com/monitoring_k8s/blackbox-exporter:v0.24.0# ctr拉取Alertmanager镜像并上传ACR镜像仓库减少拉取时间 root@master:~/yaml/monitoring# ctr images pull docker.io/prom/alertmanager:v0.26.0 root@master:~/yaml/monitoring# nerdctl tag prom/alertmanager:v0.26.0 crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com/monitoring_k8s/alertmanager:v0.26.0 root@master:~/yaml/monitoring# nerdctl push crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com/monitoring_k8s/alertmanager:v0.26.0# ctr拉取filebeat镜像并上传ACR镜像仓库减少拉取时间 root@master:~/yaml/filebeat# ctr images pull docker.io/elastic/filebeat:8.11.0 root@master:~/yaml/filebeat# nerdctl tag elastic/filebeat:8.11.0 crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com/logging_k8s/filebeat:8.11.0 root@master:~/yaml/filebeat# nerdctl push crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com/logging_k8s/filebeat:8.11.0
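上面的“拉取→打标→推送”步骤完全重复,可以用一个简单循环批量处理(示意脚本,仅覆盖 monitoring_k8s 命名空间下的几个镜像,ACR 地址按实际替换,且需先 nerdctl login):

#!/bin/bash
ACR=crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com/monitoring_k8s
for img in prom/node-exporter:v1.8.1 prom/prometheus:v2.53.1 grafana/grafana:11.2.0 \
           prom/blackbox-exporter:v0.24.0 prom/alertmanager:v0.26.0; do
  name=${img#*/}                       # 去掉组织名,保留 镜像名:tag
  ctr images pull docker.io/${img}     # 拉取官方镜像
  nerdctl tag ${img} ${ACR}/${name}    # 重新打标为ACR地址
  nerdctl push ${ACR}/${name}          # 推送到ACR
done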
image-20260108132252129
4.1.3 创建监控命名空间
# 创建监控命名空间
root@master:~# kubectl create ns monitoring

4.2 部署 node-exporter

root@master:~/yaml# ls product-service secret root@master:~/yaml# mkdir monitoring# 需要先创建凭证否则无权限拉取私有镜像 root@master:~/yaml# kubectl create secret docker-registry acr-pull-secret \ --namespace=monitoring \ --docker-server=crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com \ --docker-username=用户名 \ --docker-password='密码' secret/acr-pull-secret created root@master:~/yaml/monitoring# vim node-exporter.yaml# DaemonSet 类型:确保集群中每一个节点(包括 Master)都运行一个该 Pod 实例# 用途:采集每个节点的系统级指标(CPU、内存、磁盘、网络等) apiVersion: apps/v1 kind: DaemonSet metadata: # Pod 名称前缀(最终 Pod 名格式:node-exporter-xxxx) name: node-exporter # 部署到 monitoring 命名空间(Prometheus/Grafana 等监控组件通常集中在此命名空间) namespace: monitoring # 自定义标签:用于 Service/Selector 关联、资源筛选 labels: app: node-exporter spec: # 标签选择器:关联下面的 Pod 模板(必须匹配 Pod 模板的 labels) selector: matchLabels: app: node-exporter # Pod 模板:定义要运行的 Pod 具体配置 template: metadata: # Pod 标签:与上面的 selector.matchLabels 一致,用于 Service 发现 labels: app: node-exporter spec: # 容忍度配置:让 Pod 能调度到 Master 节点(Master 节点默认有污点,阻止普通 Pod 调度) tolerations: #- key: "node-role.kubernetes.io/master" # 旧版匹配 Master 节点的污点 Key - key: "node-role.kubernetes.io/control-plane"# 新版 control-plane 节点污点 1.24+ operator: "Exists"# 只要该 Key 存在就容忍(无需匹配 Value) effect: "NoSchedule"# 匹配污点的 Effect(NoSchedule 表示不调度普通 Pod)# 启用主机网络:Pod 直接使用宿主机的网络命名空间# 原因:1. Node Exporter 需采集主机网络指标;2. 避免端口冲突,便于 Prometheus 直接通过节点 IP:9100 抓取指标 hostNetwork: true# 启用主机 PID 命名空间:Pod 能看到宿主机的所有进程# 原因:Node Exporter 需采集主机进程相关指标(如进程数、CPU 占用等) hostPID: true# 镜像拉取密钥:引用之前创建的 acr-pull-secret,用于拉取阿里云私有镜像仓库的镜像 imagePullSecrets: [{ name: acr-pull-secret }]# 容器配置(DaemonSet 中仅运行 node-exporter 一个容器) containers: - name: node-exporter # 容器名称# 阿里云私有镜像地址(替换为你自己的镜像仓库地址) image: crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com/monitoring_k8s/node-exporter:v1.8.1 # Node Exporter 启动参数:定义采集指标的规则和路径 args: - --path.procfs=/host/proc # 指定主机 /proc 目录挂载路径(采集进程、CPU 等指标) - --path.sysfs=/host/sys # 指定主机 /sys 目录挂载路径(采集内核、硬件等指标)# 忽略无用的挂载点:避免采集 /sys /proc 等虚拟文件系统的磁盘指标(无意义) - --collector.filesystem.ignored-mount-points=^/(sys|proc|dev|host|etc)($|/)# 安全上下文:赋予容器特权模式# 原因:Node Exporter 需要访问主机的敏感文件/目录(如 /proc /sys),普通权限会被拒绝 securityContext: privileged: true# 卷挂载:将主机的目录挂载到容器内,让 Node Exporter 能读取主机数据 volumeMounts: - name: proc # 关联下面 volumes 中定义的 proc 卷 mountPath: /host/proc # 容器内挂载路径 - name: sys mountPath: /host/sys - name: rootfs mountPath: /rootfs # 挂载主机根目录(采集磁盘挂载、文件系统指标)# 卷定义:将主机的物理目录映射为 Pod 可访问的卷 volumes: - name: proc hostPath: # 宿主机路径类型卷 path: /proc # 宿主机的 /proc 目录(存储进程、CPU、内存等实时数据) - name: sys hostPath: path: /sys # 宿主机的 /sys 目录(存储内核、硬件、设备等信息) - name: rootfs hostPath: path: / # 宿主机的根目录(/) --- # ========== Node Exporter Service 配置 ==========# Service 类型:为 DaemonSet 部署的所有 Node Exporter Pod 提供统一的访问入口# 用途:供 Prometheus 通过 Service 发现机制抓取所有节点的 Node Exporter 指标 apiVersion: v1 kind: Service metadata: name: node-exporter # Service 名称 namespace: monitoring # 与 DaemonSet 同命名空间 labels: app: node-exporter # 标签:用于 Prometheus 配置抓取目标时筛选 spec: # 标签选择器:关联所有带有 app=node-exporter 标签的 Pod selector: app: node-exporter # 端口配置:定义 Service 暴露的端口和对应的容器端口 ports: - name: metrics # 端口名称(自定义,便于识别) port: 9100# Service 暴露的端口(Prometheus 访问此端口) targetPort: 9100# 容器内 Node Exporter 的监听端口(默认 9100)# Service 类型:ClusterIP(仅集群内部可访问,监控组件无需暴露到集群外) type: ClusterIP # 更新配置文件 root@master:~/yaml/monitoring# kubectl apply -f node-exporter.yaml# 验证是否所有节点都持有node-export root@master:~/yaml# kubectl get pods -n monitoring -l app=node-exporter -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES node-exporter-7kcrl 1/1 Running 0 2m3s 
192.168.0.200 master <none><none> node-exporter-gknxb 1/1 Running 0 2m3s 192.168.0.202 node2 <none><none> node-exporter-p99j6 1/1 Running 0 2m4s 192.168.0.203 node3 <none><none> node-exporter-q5m95 1/1 Running 0 2m3s 192.168.0.201 node1 <none><none>
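由于 Node Exporter 启用了 hostNetwork,也可以直接请求任一节点的 9100 端口做一次快速验证(以下仅为示意,IP 取自前文的节点规划):

```bash
# 能返回以 node_ 开头的指标文本,即说明该节点的采集端点正常
curl -s http://192.168.0.200:9100/metrics | head -n 5
# 统计指标行数,粗略确认数据量正常
curl -s http://192.168.0.201:9100/metrics | grep -c '^node_'
```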

4.3 部署 Prometheus

4.3.1 配置 Prometheus RBAC 权限
root@master:~/yaml/monitoring# vim prometheus-rbac.yaml# ========== ClusterRole(集群角色)配置 ==========# 作用:定义一组集群级别的权限规则(哪些资源可以被操作、执行哪些操作)# 适用场景:Prometheus 需要跨命名空间发现节点、Pod、Service 等资源,因此必须用 ClusterRole(而非 Namespace 级的 Role) apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: # 集群角色名称:prometheus(需与下方 ClusterRoleBinding 的 roleRef.name 一致) name: prometheus # 权限规则列表:定义 Prometheus 可以操作的资源和对应的操作 rules: # 规则1:操作核心组("" 代表 k8s 核心 API 组)的基础资源 - apiGroups: [""]# 核心 API 组(nodes、services、pods 等都属于核心组) resources: # 允许操作的资源类型 - nodes # 节点资源:Prometheus 需发现所有节点,抓取 Node Exporter 指标 - nodes/metrics # 节点指标:抓取 kubelet 暴露的节点原生指标(如容器资源使用) - services # 服务资源:Prometheus 通过 Service 发现监控目标(如 node-exporter Service) - endpoints # 端点资源:获取 Service 对应的后端 Pod IP/端口,精准抓取指标 - pods # Pod 资源:发现集群内所有 Pod,支持 Pod 级别的指标抓取(如应用监控) verbs: ["get", "list", "watch"]# 允许的操作:# get(获取单个资源)、list(列出所有资源)、watch(实时监听资源变化)# 仅授予只读权限,符合最小权限原则# 规则2:操作核心组的 configmaps 资源(可选,按需开放) - apiGroups: [""] resources: - configmaps # 配置映射:Prometheus 若需从 ConfigMap 读取自定义配置(如抓取规则),需此权限 verbs: ["get"]# 仅授予 get 权限(无需 list/watch,按需最小化)# 规则3:操作网络组的 ingresses 资源(可选,按需开放) - apiGroups: - networking.k8s.io # 网络 API 组(Ingress 资源所属组) resources: - ingresses # 入口资源:Prometheus 若需监控 Ingress 规则、流量等指标,需此权限 verbs: ["get", "list", "watch"]# 规则4:操作非资源型 URL(K8s 节点/组件暴露的指标接口) - nonResourceURLs: ["/metrics", "/metrics/cadvisor"]# 非资源 URL:# /metrics:kube-apiserver 等组件的指标接口# /metrics/cadvisor:cadvisor 暴露的容器指标接口 verbs: ["get"]# 允许 GET 请求:Prometheus 需访问这些 URL 抓取集群组件指标 --- # ========== ClusterRoleBinding(集群角色绑定)配置 ==========# 作用:将上面定义的 prometheus ClusterRole 权限绑定到指定的 ServiceAccount(服务账户)# 核心逻辑:让 monitoring 命名空间的 default 账户拥有 prometheus ClusterRole 的所有权限 apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: # 集群角色绑定名称:prometheus(自定义,便于识别) name: prometheus # 角色引用:指定要绑定的 ClusterRole roleRef: apiGroup: rbac.authorization.k8s.io # RBAC 权限 API 组(固定值) kind: ClusterRole # 绑定的角色类型:ClusterRole(集群级) name: prometheus # 绑定的 ClusterRole 名称(需与上方 ClusterRole.name 一致)# 主体:指定被授予权限的对象(这里是 ServiceAccount) subjects: - kind: ServiceAccount # 主体类型:服务账户(Pod 运行时的身份) name: default # 服务账户名称:default(monitoring 命名空间的默认账户) namespace: monitoring # 服务账户所属命名空间:monitoring(Prometheus 部署在此命名空间)
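待 4.3.4 统一应用上述清单之后,可以用 kubectl auth can-i 模拟该 ServiceAccount 的身份,粗略验证 RBAC 是否生效(示意命令,预期输出均为 yes):

```bash
# 以 monitoring 命名空间的 default ServiceAccount 身份检查权限是否已授予
kubectl auth can-i list nodes --as=system:serviceaccount:monitoring:default
kubectl auth can-i watch endpoints --as=system:serviceaccount:monitoring:default
# 非资源型 URL(/metrics)的权限同样可以检查
kubectl auth can-i get /metrics --as=system:serviceaccount:monitoring:default
```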
4.3.2 配置 Prometheus 抓取规则
root@master:~/yaml/monitoring# vim prometheus-config.yaml# ========== Prometheus 核心配置 ConfigMap ==========# 作用:存储 Prometheus 的核心配置文件(prometheus.yml),可通过挂载到 Prometheus Pod 中生效# 核心能力:定义全局抓取规则、各监控目标的抓取策略(静态目标/K8s自动发现) apiVersion: v1 kind: ConfigMap metadata: # ConfigMap 名称:prometheus-config(需与 Prometheus Pod 挂载的名称一致) name: prometheus-config # 部署到 monitoring 命名空间(与 Prometheus 同命名空间) namespace: monitoring data: # 核心配置文件:prometheus.yml(Prometheus 启动时读取此文件) prometheus.yml: |# ========== 全局配置(所有抓取任务的默认规则) ========== global: # 抓取指标的间隔:每15秒抓取一次所有监控目标的指标(默认值,可被单个job覆盖) scrape_interval: 15s # 规则评估间隔:每15秒评估一次告警规则/记录规则(如 PromQL 告警表达式) evaluation_interval: 15s # ========== 抓取配置列表(定义所有需要监控的目标) ========== scrape_configs: # 1. 抓取 Prometheus 自身的运行指标(监控监控系统本身) - job_name: 'prometheus' static_configs: - targets: ['localhost:9090']# Prometheus自身的指标端口(9090为默认端口)# 2. 抓取K8s集群节点的Node Exporter指标(K8s自动发现) - job_name: 'k8s-node-exporter'# K8s服务发现配置:基于K8s的Endpoints自动发现监控目标 kubernetes_sd_configs: - role: endpoints # 发现角色:Endpoints(Service对应的后端Pod端点) namespaces: # 仅发现monitoring命名空间下的Endpoints(Node Exporter部署在此) names: ['monitoring']# 标签重写规则:过滤/修改目标的标签,只保留需要的监控目标 relabel_configs: # 规则1:仅保留Service标签包含 app=node-exporter 的Endpoints - source_labels: [__meta_kubernetes_service_label_app]# 源标签:K8s Service的app标签 regex: node-exporter # 匹配规则:值为node-exporter action: keep # 动作:保留匹配的目标(不匹配的丢弃)# 规则2:仅保留端口名称为metrics的Endpoints(Node Exporter的端口名) - source_labels: [__meta_kubernetes_endpoint_port_name]# 源标签:Endpoints的端口名称 regex: metrics # 匹配规则:值为metrics action: keep # 动作:保留匹配的目标# 3. 抓取Blackbox Exporter指标(页面/接口可用性监控) - job_name: 'blackbox-exporter'# 指标路径:Blackbox Exporter的探针接口(默认/probe) metrics_path: /probe # 请求参数:指定检测模块为http_2xx(检测HTTP接口是否返回200状态码) params: module: [http_2xx]# K8s服务发现:自动发现monitoring命名空间下的Blackbox Exporter Endpoints kubernetes_sd_configs: - role: endpoints namespaces: names: ['monitoring']# 标签重写规则:适配Blackbox Exporter的探针请求逻辑 relabel_configs: # 规则1:仅保留Service标签为app=blackbox-exporter的目标 - source_labels: [__meta_kubernetes_service_label_app] regex: blackbox-exporter action: keep # 规则2:将目标地址(__address__)作为探针请求的target参数 - source_labels: [__address__] target_label: __param_target # 规则3:将target参数值作为instance标签(Prometheus UI中显示的实例名) - source_labels: [__param_target] target_label: instance # 规则4:修改目标地址为Blackbox Exporter的Service地址(所有探针请求转发到这里) - target_label: __address__ replacement: blackbox-exporter.monitoring.svc:9115 # Blackbox Service的集群内地址# 规则5:将instance标签值赋值给target标签(便于在Grafana中筛选目标) - source_labels: [instance] regex: (.*) target_label: target replacement: ${1}# 4. 抓取K8s集群核心组件:APIServer指标 - job_name: 'kubernetes-apiservers'# K8s服务发现:全局发现所有Endpoints(APIServer在default命名空间) kubernetes_sd_configs: - role: endpoints # 访问协议:APIServer仅支持HTTPS scheme: https # TLS配置:使用K8s ServiceAccount的CA证书(Pod内默认挂载的证书) tls_config: ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt # 认证配置:使用Pod内默认挂载的ServiceAccount Token(RBAC权限认证) bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token # 标签重写规则:仅保留default命名空间下kubernetes Service的https端口 relabel_configs: - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]# 匹配规则:命名空间=default、Service名=kubernetes、端口名=https regex: default;kubernetes;https action: keep # 仅保留APIServer的Endpoints(过滤其他无关目标)
4.3.3 部署 Prometheus Deployment + Service
root@master:~/yaml/monitoring# vim prometheus-deployment.yaml# ========== Prometheus Deployment 配置 ==========# 作用:以无状态部署(Deployment)方式运行 Prometheus 单实例# 适用场景:小规模集群/测试环境(生产环境做 HA 部署,如 Prometheus Operator) apiVersion: apps/v1 kind: Deployment metadata: # Deployment 名称:prometheus(需与 Service selector 匹配) name: prometheus # 部署到 monitoring 命名空间(与监控组件统一管理) namespace: monitoring # 自定义标签:用于 Service 关联、资源筛选 labels: app: prometheus spec: # 副本数:1(单实例部署,无高可用;生产环境可结合 PersistentVolume 做 2 副本) replicas: 1# 标签选择器:关联带有 app=prometheus 标签的 Pod selector: matchLabels: app: prometheus # Pod 模板:定义 Prometheus Pod 的具体配置 template: metadata: # Pod 标签:与 selector.matchLabels 一致,用于 Service 发现 labels: app: prometheus spec: # 镜像拉取密钥:引用 harbor-registry-secret,拉取私有 Harbor 镜像仓库的 Prometheus 镜像 imagePullSecrets: [{ name: acr-pull-secret }]# 容器配置(核心:仅运行 Prometheus 一个容器) containers: - name: prometheus # 容器名称# 私有镜像地址:替换为你自己的 Harbor 镜像仓库地址 image: crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com/monitoring_k8s/prometheus:v2.53.1 # Prometheus 启动参数(核心配置,定义运行规则) args: - --config.file=/etc/prometheus/prometheus.yml # 指定配置文件路径(挂载自 ConfigMap) - --storage.tsdb.path=/prometheus # TSDB 时序数据库存储路径(挂载自 emptyDir) - --web.console.libraries=/usr/share/prometheus/console_libraries # 控制台库文件路径(镜像内置) - --web.console.templates=/usr/share/prometheus/consoles # 控制台模板路径(镜像内置)# 容器端口:Prometheus 默认监听 9090 端口(需与 Service targetPort 一致) ports: - containerPort: 9090# 卷挂载:将 ConfigMap/存储卷挂载到容器内指定路径 volumeMounts: # 挂载 Prometheus 配置文件(来自 prometheus-config ConfigMap) - name: prometheus-config mountPath: /etc/prometheus # 容器内挂载路径(对应 --config.file 的目录)# 挂载 Prometheus 数据存储目录(临时存储,重启 Pod 数据丢失) - name: prometheus-storage mountPath: /prometheus # 容器内存储路径(对应 --storage.tsdb.path)# 资源限制:防止 Prometheus 占用过多节点资源(根据集群规模调整) resources: limits: # 资源上限(最多占用 1核CPU、1Gi内存) cpu: 1000m memory: 1Gi requests: # 资源请求(至少分配 0.5核CPU、512Mi内存) cpu: 500m memory: 512Mi # 卷定义:为容器提供配置文件和存储目录 volumes: # 配置卷:引用 prometheus-config ConfigMap(存储 prometheus.yml 配置) - name: prometheus-config configMap: name: prometheus-config # 对应之前创建的 ConfigMap 名称# 存储卷:emptyDir(临时存储,Pod 销毁则数据丢失) - name: prometheus-storage emptyDir: {}# 生产环境建议替换为 PersistentVolume(PV/PVC),避免数据丢失 --- # ========== Prometheus Service 配置 ==========# 作用:以 NodePort 方式暴露 Prometheus 服务,允许集群外访问 Prometheus UI apiVersion: v1 kind: Service metadata: # Service 名称:prometheus name: prometheus # 与 Deployment 同命名空间 namespace: monitoring spec: # 标签选择器:关联所有带有 app=prometheus 标签的 Pod selector: app: prometheus # 端口配置:定义 Service 暴露的端口规则 ports: - port: 9090# Service 集群内访问端口(集群内可通过 prometheus.monitoring.svc:9090 访问) targetPort: 9090# 容器内 Prometheus 监听端口(与 containerPort 一致) nodePort: 30090# 固定 NodePort 端口(集群外通过 节点IP:30090 访问 Prometheus UI)# Service 类型:NodePort(暴露到集群所有节点的指定端口,适合测试/小规模集群)# 生产环境建议用 Ingress + HTTPS 暴露,更安全 type: NodePort 
4.3.4 应用Prometheus所有配置
root@master:~/yaml/monitoring# kubectl apply -f .# 验证Prometheus Pod运行状态 root@master:~/yaml# kubectl get pod -n monitoring -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES node-exporter-7kcrl 1/1 Running 0 11m 192.168.0.200 master <none><none> node-exporter-gknxb 1/1 Running 0 11m 192.168.0.202 node2 <none><none> node-exporter-p99j6 1/1 Running 0 12m 192.168.0.203 node3 <none><none> node-exporter-q5m95 1/1 Running 0 11m 192.168.0.201 node1 <none><none> prometheus-68f95956cf-v5bh2 1/1 Running 0 32s 10.20.166.132 node1 <none><none>
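Pod 正常运行后,还可以通过 NodePort 调用 Prometheus 的 HTTP API,确认各抓取目标的健康状态(示意命令,IP 可为任一节点地址):

```bash
# 统计活动目标的 health 状态,理想情况下应全部为 up
curl -s 'http://192.168.0.201:30090/api/v1/targets?state=active' \
  | grep -o '"health":"[a-z]*"' | sort | uniq -c
```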
4.3.5 访问验证

浏览器访问任意节点 IP 的 30090 端口即可打开 Prometheus UI,例如 node1:192.168.0.201:30090。

image-20260108111327931
image-20260108111342727
image-20260108111351644

4.4 部署Grafana

4.4.1 部署 Grafana Deployment + Service
root@master:~/yaml/monitoring# vim grafana-deployment.yaml# ========== Grafana Deployment 配置 ==========# 作用:以无状态部署(Deployment)方式运行 Grafana 单实例# 适用场景:小规模集群/测试环境(生产环境建议结合 PersistentVolume 做持久化,保证仪表盘/配置不丢失) apiVersion: apps/v1 kind: Deployment metadata: # Deployment 名称:grafana(需与 Service selector 匹配) name: grafana # 部署到 monitoring 命名空间(与 Prometheus/Node Exporter 等监控组件统一管理) namespace: monitoring # 自定义标签:用于 Service 关联、资源筛选 labels: app: grafana spec: # 副本数:1(单实例部署;Grafana 支持多实例,但需共享存储/数据库,测试环境单实例足够) replicas: 1# 标签选择器:关联带有 app=grafana 标签的 Pod selector: matchLabels: app: grafana # Pod 模板:定义 Grafana Pod 的具体配置 template: metadata: # Pod 标签:与 selector.matchLabels 一致,用于 Service 发现 labels: app: grafana spec: # 镜像拉取密钥:引用 acr-pull-secret,拉取阿里云私有镜像仓库的 Grafana 镜像 imagePullSecrets: [{ name: acr-pull-secret }]# 容器配置(核心:仅运行 Grafana 一个容器) containers: - name: grafana # 容器名称# 阿里云私有镜像地址 image: crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com/monitoring_k8s/grafana:11.2.0 # 容器端口:Grafana 默认监听 3000 端口(需与 Service targetPort 一致) ports: - containerPort: 3000# Grafana 环境变量:配置核心运行参数(无需修改配置文件,启动时注入) env: # 环境变量1:设置 Grafana 管理员(admin)的登录密码 - name: GF_SECURITY_ADMIN_PASSWORD value: "admin123"## 环境变量2:禁用 Grafana 注册功能(仅允许管理员创建用户,提升安全性) - name: GF_USERS_ALLOW_SIGN_UP value: "false"# 测试/生产环境均建议关闭公开注册# 卷挂载:将存储卷挂载到 Grafana 数据目录(保存仪表盘、用户、配置等数据) volumeMounts: - name: grafana-storage mountPath: /var/lib/grafana # Grafana 核心数据目录(镜像内置的默认路径)# 资源限制:防止 Grafana 占用过多节点资源(根据监控面板数量调整) resources: limits: # 资源上限(最多占用 0.5核CPU、512Mi内存) cpu: 500m memory: 512Mi requests: # 资源请求(至少分配 0.2核CPU、256Mi内存) cpu: 200m memory: 256Mi # 卷定义:为 Grafana 提供数据存储目录 volumes: # 存储卷:emptyDir(临时存储,Pod 销毁则数据丢失) - name: grafana-storage emptyDir: {}# 生产环境必须替换为 PersistentVolume(PV/PVC),否则仪表盘/用户配置会丢失 --- # ========== Grafana Service 配置 ==========# 作用:以 NodePort 方式暴露 Grafana 服务,允许集群外访问 Grafana 可视化面板 apiVersion: v1 kind: Service metadata: # Service 名称:grafana name: grafana # 与 Deployment 同命名空间 namespace: monitoring spec: # 标签选择器:关联所有带有 app=grafana 标签的 Pod selector: app: grafana # 端口配置:定义 Service 暴露的端口规则 ports: - port: 3000# Service 集群内访问端口(集群内可通过 grafana.monitoring.svc:3000 访问) targetPort: 3000# 容器内 Grafana 监听端口(与 containerPort 一致) nodePort: 30030# 固定 NodePort 端口(集群外通过 节点IP:30030 访问 Grafana UI)# Service 类型:NodePort(暴露到集群所有节点的指定端口,适合测试/小规模集群)# 生产环境建议用 Ingress + HTTPS 暴露,同时配置域名和认证,提升安全性 type: NodePort 
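上面的 grafana-storage 使用 emptyDir,Pod 重建后仪表盘与数据源配置会丢失。若要持久化,可参考下面的 PVC 示意(假设集群已有可用的默认 StorageClass,名称 grafana-pvc 为自拟,本文前面未涉及存储类的创建):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-pvc
  namespace: monitoring
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi
---
# 然后将 Deployment 中 grafana-storage 卷的 emptyDir 替换为:
#   volumes:
#   - name: grafana-storage
#     persistentVolumeClaim:
#       claimName: grafana-pvc
```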
4.4.2 应用 Grafana 配置
root@master:~/yaml/monitoring# kubectl apply -f grafana-deployment.yaml# 验证Grafana Pod运行 root@master:~/yaml# kubectl get pods -n monitoring -o wide -l app=grafana NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES grafana-57596f6bcb-5lw47 1/1 Running 0 2m39s 10.20.135.4 node3 <none><none>
4.4.3 访问验证

浏览器访问192.168.0.201:30030

用户名admin

密码admin123

image-20260108112019944
4.4.4 配置 Grafana 数据源

设置中文:在首页右上角点击头像,进入 Profile,在 Language 中选择中文。

image-20260108112457991
  1. 登录 Grafana 后,点击左侧 连接->数据源->添加新数据源;
image-20260108112626605
image-20260108112636268
  2. 选择Prometheus,配置 URL 为:http://prometheus.monitoring.svc:9090(K8s 内部 Service 地址);除手动添加外,也可参考本小节末尾的 provisioning 示例自动配置;
image-20260108112730742
image-20260108112749452
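除在 UI 中手动添加外,也可以利用 Grafana 的 provisioning 机制在启动时自动注入数据源:把下面的 ConfigMap(名称 grafana-datasources 为自拟)挂载到容器的 /etc/grafana/provisioning/datasources 目录即可,仅为一个最小示意:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-datasources
  namespace: monitoring
data:
  prometheus.yaml: |
    apiVersion: 1
    datasources:
    - name: Prometheus
      type: prometheus
      access: proxy
      url: http://prometheus.monitoring.svc:9090
      isDefault: true
```

挂载后重建 Grafana Pod,数据源会自动出现,无需再手动配置。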
4.4.5 导入 Grafana 仪表盘
  1. 点击左侧仪表板->右侧新建导入
image-20260108112851363
  2. 输入仪表盘 ID,点击加载:
  • 节点状态监控:1860(Node Exporter Full,节点 CPU / 内存 / 磁盘);
  • K8s 集群监控:7249(Kubernetes Cluster Monitoring,集群组件);
image-20260108112937812
image-20260108112959215

4.5 部署Alertmanager

4.5.1 编写 Alertmanager 核心配置文件

Alertmanager 的核心配置是 alertmanager.yml,主要包含路由规则、接收人、通知渠道、抑制 / 静默规则等。

root@master:~/yaml/monitoring# vim alertmanager.yml# alertmanager.yml global: # 解决告警的超时时间(若超过该时间未解决,会重复发送) resolve_timeout: 5m # 邮件配置(全局)# 发件人邮箱 smtp_from: '[email protected]' smtp_smarthost: 'smtp.qq.com:587'# 发件人邮箱 smtp_auth_username: '[email protected]' smtp_auth_password: '666666666' smtp_require_tls: true# 路由规则:定义告警的分发逻辑(类似 Prometheus 的 rule) route: # 所有告警的根路由(默认接收所有告警) receiver: 'chenjun'# 告警分组:相同标签的告警合并为一个通知 group_by: ['alertname', 'cluster', 'service']# 首次发送告警的等待时间(避免抖动) group_wait: 10s # 同组告警的间隔发送时间 group_interval: 10s # 同一告警的重复发送间隔 repeat_interval: 1h # 接收人配置:定义具体的通知渠道 receivers: - name: 'chenjun'# 接收人名称(需与 route 中的 receiver 对应) email_configs: - to: '[email protected]'# 告警接收邮箱 send_resolved: true# 告警解决后发送恢复通知# 抑制规则:避免告警风暴(比如集群不可用后,不重复发送该集群下的所有告警) inhibit_rules: - source_match: severity: 'critical' target_match: severity: 'warning' equal: ['alertname', 'cluster', 'service']
4.5.2 将配置文件存储为 ConfigMap
root@master:~/yaml/monitoring# kubectl create configmap alertmanager-config \ --namespace=monitoring \ --from-file=alertmanager.yml=./alertmanager.yml 
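后续修改 alertmanager.yml 后,直接再执行 create 会因 ConfigMap 已存在而报错;可以改用下面的幂等写法更新(示意):

```bash
# 用 dry-run 生成清单再 apply,实现“存在则更新、不存在则创建”
kubectl create configmap alertmanager-config \
  --namespace=monitoring \
  --from-file=alertmanager.yml=./alertmanager.yml \
  --dry-run=client -o yaml | kubectl apply -f -
# Alertmanager 部署完成后,更新配置需重启使其生效
kubectl rollout restart deployment alertmanager -n monitoring
```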
4.5.3 编写 Alertmanager 部署清单(Deployment + Service)
# 创建 alertmanager-deploy.yaml 文件,包含 Deployment(运行 Alertmanager 容器)和 Service(暴露服务): root@master:~/yaml/monitoring# vim alertmanager-deploy.yaml# alertmanager-deploy.yaml apiVersion: apps/v1 kind: Deployment metadata: name: alertmanager namespace: monitoring labels: app: alertmanager spec: replicas: 1# 生产环境部署 2-3 副本(需配置持久化和集群) selector: matchLabels: app: alertmanager template: metadata: labels: app: alertmanager spec: imagePullSecrets: [{ name: acr-pull-secret }] containers: - name: alertmanager # 使用官方镜像 image: crpi-2pnpj68s945gixnz.cn-shenzhen.personal.cr.aliyuncs.com/monitoring_k8s/alertmanager:v0.26.0 imagePullPolicy: IfNotPresent # 启动参数:指定配置文件路径 args: - --config.file=/etc/alertmanager/alertmanager.yml - --storage.path=/alertmanager # 告警状态存储目录# 挂载 ConfigMap(配置文件) volumeMounts: - name: alertmanager-config mountPath: /etc/alertmanager - name: alertmanager-storage mountPath: /alertmanager # 资源限制 resources: limits: cpu: 100m memory: 128Mi requests: cpu: 50m memory: 64Mi # 健康检查 livenessProbe: httpGet: path: /-/healthy port: 9093 initialDelaySeconds: 10 periodSeconds: 10 readinessProbe: httpGet: path: /-/ready port: 9093 initialDelaySeconds: 5 periodSeconds: 10 volumes: - name: alertmanager-config configMap: name: alertmanager-config - name: alertmanager-storage # 生产环境建议使用 PersistentVolume(PV),此处先用 emptyDir 测试 emptyDir: {} --- # Service:暴露 Alertmanager 服务(ClusterIP 仅集群内访问,NodePort 可外部访问) apiVersion: v1 kind: Service metadata: name: alertmanager namespace: monitoring spec: type: NodePort # 测试用,生产环境建议用 ClusterIP + Ingress selector: app: alertmanager ports: - name: web port: 9093 targetPort: 9093 nodePort: 30093# 自定义 NodePort 端口(范围 30000-32767)
4.5.4 部署 Alertmanager 到集群
root@master:~/yaml/monitoring# kubectl apply -f alertmanager-deploy.yaml# 验证状态 root@master:~/yaml/monitoring# kubectl get pod -n monitoring  NAME READY STATUS RESTARTS AGE alertmanager-5757855787-6p69n 1/1 Running 0 58m grafana-5d56cd8487-s5z22 1/1 Running 0 3h1m node-exporter-7kcrl 1/1 Running 0 3h27m node-exporter-gknxb 1/1 Running 0 3h27m node-exporter-p99j6 1/1 Running 0 3h28m node-exporter-q5m95 1/1 Running 0 3h27m prometheus-6d756fcfff-4tc7h 1/1 Running 0 7m26s # 检查svc root@master:~/yaml/monitoring# kubectl get svc -n monitoring  NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE alertmanager NodePort 10.100.74.196 <none>9093:30093/TCP 61m grafana NodePort 10.98.127.132 <none>3000:30030/TCP 3h1m node-exporter ClusterIP 10.107.29.187 <none>9100/TCP 3h28m prometheus NodePort 10.96.26.78 <none>9090:30090/TCP 3h16m 
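若该 ACR 镜像与官方 alertmanager 镜像内容一致(自带 amtool),还可以在容器内校验已挂载配置的语法,以下仅为示意:

```bash
# 校验 alertmanager.yml 语法与路由/接收人配置是否合法
kubectl exec -n monitoring deploy/alertmanager -- \
  amtool check-config /etc/alertmanager/alertmanager.yml
```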
4.5.5 访问 Alertmanager Web UI

浏览器访问192.168.0.201:30093

若能看到 Alertmanager 界面,说明部署成功。

image-20260108142624697
4.5.6 配置 Prometheus 关联 Alertmanager
# 修改prometheus-config.yaml root@master:~/yaml/monitoring# vim prometheus-config.yaml apiVersion: v1 kind: ConfigMap metadata: name: prometheus-config namespace: monitoring data: prometheus.yml: | global: scrape_interval: 15s evaluation_interval: 15s # 添加以下内容--------------------------------- alerting: alertmanagers: - static_configs: - targets: # Alertmanager 的 Service 地址 - alertmanager.monitoring.svc:9093 rule_files: - "alert_rules.yml"# 结束-------------------------------------------# 中间采集指标略# 最后添加以下内容告警规则,与prometheus.yml:同级 alert_rules.yml: | groups: # 1. 节点级告警(服务器资源) - name: node-resource-alerts rules: # 1.1 节点内存使用率过高 - alert: NodeHighMemoryUsage expr: (node_memory_MemTotal_bytes - node_memory_MemAvailable_bytes) / node_memory_MemTotal_bytes * 100>85 for: 5m labels: severity: warning annotations: summary: "节点内存使用率过高" description: "节点 {{ $labels.instance }} 内存使用率超过 85% (当前值: {{ printf \"%.2f\"$value }}%),已持续5分钟。"# 1.2 节点内存使用率紧急(临界值) - alert: NodeCriticalMemoryUsage expr: (node_memory_MemTotal_bytes - node_memory_MemAvailable_bytes) / node_memory_MemTotal_bytes * 100>95 for: 2m labels: severity: critical annotations: summary: "节点内存使用率紧急" description: "节点 {{ $labels.instance }} 内存使用率超过 95% (当前值: {{ printf \"%.2f\"$value }}%),已持续2分钟,可能导致服务不可用!"# 1.3 节点CPU使用率过高 - alert: NodeHighCPUUsage expr: 100 - (avg by (instance)(rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)>80 for: 5m labels: severity: warning annotations: summary: "节点CPU使用率过高" description: "节点 {{ $labels.instance }} CPU使用率超过 80% (当前值: {{ printf \"%.2f\"$value }}%),已持续5分钟。"# 1.4 节点根磁盘使用率过高 - alert: NodeRootDiskHighUsage expr: 100 * (node_filesystem_size_bytes{mountpoint="/"} - node_filesystem_avail_bytes{mountpoint="/"}) / node_filesystem_size_bytes{mountpoint="/"}>85 for: 5m labels: severity: warning annotations: summary: "节点根磁盘使用率过高" description: "节点 {{ $labels.instance }} 根目录 / 磁盘使用率超过 85% (当前值: {{ printf \"%.2f\"$value }}%),已持续5分钟。"# 1.5 节点磁盘IO使用率过高 - alert: NodeHighDiskIO expr: 100 * rate(node_disk_io_time_seconds_total{device!~"loop.*|sr.*"}[5m])>80 for: 5m labels: severity: warning annotations: summary: "节点磁盘IO使用率过高" description: "节点 {{ $labels.instance }} 的磁盘 {{ $labels.device }} IO使用率超过 80% (当前值: {{ printf \"%.2f\"$value }}%),已持续5分钟。"# 1.6 节点不可达(NodeExporter失联) - alert: NodeDown expr: up{job=~"k8s-node-exporter|harbor-node-exporter|lb-node-exporter"}==0 for: 3m labels: severity: critical annotations: summary: "节点监控失联" description: "节点 {{ $labels.instance }} 的 NodeExporter 已失联超过3分钟,无法采集指标!"# 2. 
K8s Pod/容器级告警 - name: k8s-pod-alerts rules: # 2.1 Pod 重启次数过多(1小时内重启≥3次) - alert: PodRestartTooFrequent expr: increase(kube_pod_container_restarts_total[1h])>=3 for: 10m labels: severity: warning annotations: summary: "Pod 重启次数过多" description: "命名空间 {{ $labels.namespace }} 的 Pod {{ $labels.pod }} 容器 {{ $labels.container }} 1小时内重启 {{ $value }} 次,可能存在服务异常。"# 2.2 Pod 状态异常(Pending/Failed/Error) - alert: PodStatusAbnormal expr: kube_pod_status_phase{phase=~"Pending|Failed|Error"}==1 for: 5m labels: severity: critical annotations: summary: "Pod 状态异常" description: "命名空间 {{ $labels.namespace }} 的 Pod {{ $labels.pod }} 状态为 {{ $labels.phase }},已持续5分钟。"# 2.3 容器CPU使用率过高 - alert: ContainerHighCPUUsage expr: (sum by (namespace, pod, container)(rate(container_cpu_usage_seconds_total{container!=""}[5m])) / sum by (namespace, pod, container)(kube_pod_container_resource_limits_cpu_cores{container!=""})) * 100>80 for: 5m labels: severity: warning annotations: summary: "容器CPU使用率过高" description: "命名空间 {{ $labels.namespace }} 的 Pod {{ $labels.pod }} 容器 {{ $labels.container }} CPU使用率超过 80% (当前值: {{ printf \"%.2f\"$value }}%),已持续5分钟。"# 2.4 容器内存使用率过高 - alert: ContainerHighMemoryUsage expr: (sum by (namespace, pod, container)(container_memory_usage_bytes{container!=""}) / sum by (namespace, pod, container)(kube_pod_container_resource_limits_memory_bytes{container!=""})) * 100>85 for: 5m labels: severity: warning annotations: summary: "容器内存使用率过高" description: "命名空间 {{ $labels.namespace }} 的 Pod {{ $labels.pod }} 容器 {{ $labels.container }} 内存使用率超过 85% (当前值: {{ printf \"%.2f\"$value }}%),已持续5分钟。"# 3. K8s 核心组件告警 - name: k8s-component-alerts rules: # 3.1 APIServer 请求延迟过高 - alert: K8sAPIServerHighRequestLatency expr: (apiserver_request_latency_seconds_sum{verb!~"LIST|WATCH"} / apiserver_request_latency_seconds_count{verb!~"LIST|WATCH"})>0.5 for: 5m labels: severity: warning annotations: summary: "K8s APIServer 请求延迟过高" description: "APIServer {{ $labels.instance }} {{ $labels.verb }} 请求平均延迟超过 500ms (当前值: {{ printf \"%.3f\"$value }}s),已持续5分钟。"# 3.2 APIServer 错误率过高 - alert: K8sAPIServerHighErrorRate expr: sum by (instance)(rate(apiserver_request_total{code=~"5.."}[5m])) / sum by (instance)(rate(apiserver_request_total[5m]))>0.05 for: 5m labels: severity: critical annotations: summary: "K8s APIServer 错误率过高" description: "APIServer 5XX 错误率超过 5% (当前值: {{ printf \"%.2f\"$value }}%),已持续5分钟。"

更新 Prometheus ConfigMap 并重启 Prometheus Pod:

# 更新 ConfigMap root@master:~/yaml/monitoring# kubectl apply -f prometheus-config.yaml# 重启 Prometheus Pod(触发配置重载) root@master:~/yaml/monitoring# kubectl rollout restart deployment prometheus -n monitoring# 检查启动状态 root@master:~/yaml/monitoring# kubectl get pod -n monitoring  NAME READY STATUS RESTARTS AGE alertmanager-5757855787-6p69n 1/1 Running 0 64m grafana-5d56cd8487-s5z22 1/1 Running 0 3h7m node-exporter-7kcrl 1/1 Running 0 3h33m node-exporter-gknxb 1/1 Running 0 3h33m node-exporter-p99j6 1/1 Running 0 3h33m node-exporter-q5m95 1/1 Running 0 3h33m prometheus-6d756fcfff-4tc7h 1/1 Running 0 13m 
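重启完成后,也可以先通过 Prometheus 的 HTTP API 粗略确认告警规则已加载、Alertmanager 已被关联(示意命令,IP 可为任一节点地址):

```bash
# 列出已加载的规则组/告警规则名称
curl -s http://192.168.0.201:30090/api/v1/rules | grep -o '"name":"[^"]*"' | sort -u
# 查看 Prometheus 当前关联的 Alertmanager 实例
curl -s http://192.168.0.201:30090/api/v1/alertmanagers
```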

进入Prometheus web页面查看规则是否生效

image-20260108143207296
4.5.7 测试邮箱告警
# 修改内存告警规则# 1.1 节点内存使用率过高 - alert: NodeHighMemoryUsage expr: (node_memory_MemTotal_bytes - node_memory_MemAvailable_bytes) / node_memory_MemTotal_bytes * 100>10# 修改为10 for: 1m # 修改为1m labels: severity: warning annotations: summary: "节点内存使用率过高" description: "节点 {{ $labels.instance }} 内存使用率超过 85% (当前值: {{ printf \"%.2f\"$value }}%),已持续5分钟。"# 更新 ConfigMap root@master:~/yaml/monitoring# kubectl apply -f prometheus-config.yaml# 重启 Prometheus Pod(触发配置重载) root@master:~/yaml/monitoring# kubectl rollout restart deployment prometheus -n monitoring# 检查启动状态 root@master:~/yaml/monitoring# kubectl get pod -n monitoring  NAME READY STATUS RESTARTS AGE alertmanager-5757855787-6p69n 1/1 Running 0 64m grafana-5d56cd8487-s5z22 1/1 Running 0 3h7m node-exporter-7kcrl 1/1 Running 0 3h33m node-exporter-gknxb 1/1 Running 0 3h33m node-exporter-p99j6 1/1 Running 0 3h33m node-exporter-q5m95 1/1 Running 0 3h33m prometheus-6d756fcfff-4tc7h 1/1 Running 0 13m 
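按修改后的规则(阈值 10%、持续 1m),约一两分钟后告警即应触发;除查看邮箱外,也可以用 API 确认告警已产生并送达 Alertmanager(示意):

```bash
# Prometheus 侧:查看处于 pending/firing 状态的告警
curl -s http://192.168.0.201:30090/api/v1/alerts
# Alertmanager 侧:列出当前收到的告警名称及数量
curl -s http://192.168.0.201:30093/api/v2/alerts | grep -o '"alertname":"[^"]*"' | sort | uniq -c
```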

image-20260108143558952

可以看到收到邮箱告警了

image-20260108144044073

测试完毕后将告警阈值改回原值并重新应用配置,Alertmanager 会发送“已解决”的恢复通知。

image-20260108144514415

4.6 阿里云SLS日志服务

4.6.1 部署LoongCollector(本地集群master)

参考文档:《在Kubernetes集群中以DaemonSet和Sidecar安装LoongCollector》(日志服务-阿里云)。

卸载方法参见《运行管理》(日志服务-阿里云)。

创建主账号AccessKey(供LoongCollector访问日志服务使用):

AccessKey ID:ea0e56****(已脱敏,请替换为自己的AccessKey ID)
AccessKey Secret:456b82****(已脱敏,请替换为自己的AccessKey Secret,切勿明文保存)
root@master:~/yaml# mkdir logging root@master:~/yaml# cd logging root@master:~/yaml/logging# wget https://aliyun-observability-release-cn-shanghai.oss-cn-shanghai.aliyuncs.com/loongcollector/k8s-custom-pkg/3.0.12/loongcollector-custom-k8s-package.tgz; tar xvf loongcollector-custom-k8s-package.tgz; chmod 744 ./loongcollector-custom-k8s-package/k8s-custom-install.sh# 修改配置文件values.yaml:进入loongcollector-custom-k8s-package目录,修改配置文件./loongcollector/values.yaml root@master:~/yaml/logging# cd loongcollector-custom-k8s-package/ root@master:~/yaml/logging/loongcollector-custom-k8s-package# vim loongcollector/values.yaml# 本集群要采集到的Project名 projectName: "k8s-pod-logs"# Project所属地域,例如上海:cn-shanghai region: "cn-shenzhen"# Project所属主账号uid,请用引号包围,例如"123456789" aliUid: "123456"# 使用网络,可选参数:公网Internet,内网Intranet,默认使用公网 net: Internet # 主账号或者子账号的AK,SK accessKeyID: "123456" accessKeySecret: "123456"# 自定义集群ID,命名只支持大小写,数字,短划线(-)。 clusterID: "k8s-pod"# 执行安装脚本:在loongcollector-custom-k8s-package目录下执行如下命令,安装LoongCollector及其他依赖组件。 root@master:~/yaml/logging/loongcollector-custom-k8s-package# bash k8s-custom-install.sh install# 验证安装结果:安装完成后,执行如下命令查看组件状态:# 检查Pod状态 root@master:~/yaml/logging/loongcollector-custom-k8s-package# kubectl get po -n kube-system -o wide | grep loongcollector-ds loongcollector-ds-6hcvp 1/1 Running 0 78s 10.20.166.154 node1 <none><none> loongcollector-ds-hhklj 1/1 Running 0 78s 10.20.104.20 node2 <none><none> loongcollector-ds-jx4ll 1/1 Running 0 78s 10.20.135.23 node3 <none><none> loongcollector-ds-wj8c7 1/1 Running 0 78s 10.20.219.71 master <none><none>

组件安装成功后,日志服务会自动创建如下资源,可登录日志服务控制台查看。

image-20260108200249283
4.6.2 创建日志采集规则
1. 标准输出日志采集(容器日志)

选择project->创建日志库->数据接入->选择k8s-标准输出-新版模板

image-20260108200353712
image-20260108203254692
image-20260108203422420
image-20260108203438197
image-20260108201059178
2.配置机器组

使用场景:k8s场景

部署方式:自建集群Daemonset

添加机器组:k8s-group-k8s-pod

image-20260108202144908
image-20260108202203859
3.Logtail配置

全局配置:填写采集名称(如k8s-stdout)。

容器过滤:通过Pod标签、命名空间或容器名称筛选目标日志(如命名空间kube-system)

image-20260108201733238

单击下一步,完成容器标准输出日志的采集配置。

image-20260108203723382
4.本地k8s集群主机日志采集(本地k8s集群所有主机节点)
前提条件

创建Project

若您无可用Project,请参考此处步骤创建一个基础Project,如需详细了解创建配置请参见管理Project

登录日志服务控制台,单击创建Project,完成下述基础配置,其他配置保持默认即可:

  • 所属地域:请根据日志来源等信息选择合适的阿里云地域,创建后不可修改。
  • Project名称:设置名称,名称在阿里云地域内全局唯一,创建后不可修改。
image-20260109093053556

创建Logstore

若您无可用Logstore,请参考此处步骤创建一个基础Logstore,如需详细了解创建配置请参见管理LogStore

  1. 登录日志服务控制台,在Project列表中单击目标Project。
  2. 在日志存储 > 日志库页签中,单击 + 图标。
  3. 填写Logstore名称,其余配置保持默认无需修改。

image-20260109093143163

参考文档:《在Linux服务器上分场景安装LoongCollector采集器》(日志服务-阿里云)。

  1. 选择传输方式并执行安装命令:替换${region_id}为Project所属地域的RegionID

下载安装包:在服务器上执行下载命令,将示例中的${region_id}替换为实际地域,本项目为cn-shenzhen。

#wget https://aliyun-observability-release-${region_id}.oss-${region_id}.aliyuncs.com/loongcollector/linux64/latest/loongcollector.sh -O loongcollector.sh; root@master:~# mkdir logotail root@master:~# cd logotail/ root@master:~/logotail# wget https://aliyun-observability-release-cn-shenzhen.oss-cn-shenzhen.aliyuncs.com/loongcollector/linux64/latest/loongcollector.sh -O loongcollector.sh; --2026-01-09 09:16:04-- https://aliyun-observability-release-cn-shenzhen.oss-cn-shenzhen.aliyuncs.com/loongcollector/linux64/latest/loongcollector.sh 

公网:适用于大多数场景,常见于跨地域或其他云/自建服务器,但受带宽限制且可能不稳定。

# chmod +x loongcollector.sh; ./loongcollector.sh install ${region_id}-internet root@master:~/logotail# chmod +x loongcollector.sh; ./loongcollector.sh install cn-shenzhen-internet loongcollector.sh version: 1.7.0 OS Arch: x86_64 OS Distribution: Ubuntu current glibc version is :2.35 glibc >=2.12, and cpu flag meet BIN_DIR: /usr/local/ilogtail CONTROLLER_FILE: loongcollectord update-rc.d del loongcollectord successfully. Uninstall loongcollector successfully. RUNUSER:root Downloading package from region cn-shenzhen-internet ... Package address: http://aliyun-observability-release-cn-shenzhen.oss-cn-shenzhen.aliyuncs.com/loongcollector/linux64 2026-01-09 09:23:47 URL:http://aliyun-observability-release-cn-shenzhen.oss-cn-shenzhen.aliyuncs.com/loongcollector/linux64/latest/x86_64/main/loongcollector-linux64.tar.gz [82695097/82695097] ->"loongcollector-linux64.tar.gz"[1] Download loongcollector-linux64.tar.gz successfully. Generate config successfully. Installing loongcollector in /usr/local/ilogtail ... sysom-cn-shenzhenPreparing eBPF enviroment ... Found valid btf file: /sys/kernel/btf/vmlinux Prepare eBPF enviroment successfully agent stub for telegraf has been installed agent stub for jvm has been installed Install loongcollector files successfully. Configuring loongcollector service... Use systemd for startup service_file_path: /etc/systemd/system/loongcollectord.service Synchronizing state of loongcollectord.service with SysV service script with /lib/systemd/systemd-sysv-install. Executing: /lib/systemd/systemd-sysv-install enable loongcollectord Created symlink /etc/systemd/system/default.target.wants/loongcollectord.service → /etc/systemd/system/loongcollectord.service. systemd startup successfully. Synchronizing state of ilogtaild.service with SysV service script with /lib/systemd/systemd-sysv-install. Executing: /lib/systemd/systemd-sysv-install enable ilogtaild Created symlink /etc/systemd/system/default.target.wants/ilogtaild.service → /etc/systemd/system/ilogtaild.service. Configure loongcollector successfully. Starting loongcollector ... Start loongcollector successfully. {"UUID":"DD64E1D0-ECF9-11F0-92B1-9D94276D7AA7", "compiler":"GCC 9.3.1", "host_id":"DCCBAF1A-ECF9-11F0-92B1-9D94276D7AA7", "hostname":"master", "instance_id":"DD64D532-ECF9-11F0-92B1-9D94276D7AA7_192.168.0.200_1767921834", "ip":"192.168.0.200", "loongcollector_version":"3.2.6", "os":"Linux; 5.15.0-164-generic; #174-Ubuntu SMP Fri Nov 14 20:25:16 UTC 2025; x86_64", "update_time":"2026-01-09 09:23:55"}
  1. 查看启动状态:执行命令,返回loongcollector is running表示启动成功。
# sudo /etc/init.d/loongcollectord status root@master:~/logotail# sudo /etc/init.d/loongcollectord status loongcollector is running 
  1. 配置用户ID:用户ID文件包含Project所属阿里云主账号的ID信息,用于标识该账号有权限访问、采集这台服务器的日志。
只有在采集非本账号ECS、自建服务器、其他云厂商服务器日志时需要配置用户ID。多个账号对同一台服务器进行日志采集时,支持在同一台服务器上创建多个用户ID文件。
  1. 登录日志服务控制台,鼠标悬浮在右上角用户头像上,在弹出的标签页中查看并复制账号ID。注意需要复制主账号ID。
  2. 在安装了LoongCollector的服务器上,以主账号ID作为文件名,创建用户ID文件。
#touch /etc/ilogtail/users/{阿里云账号ID} # 如果/etc/ilogtail/users目录不存在,请手动创建目录。用户ID文件只需配置文件名,无需配置文件后缀。 root@master:~/logotail# touch /etc/ilogtail/users/123456
  1. 配置机器组:日志服务通过机器组发现用户自定义标识并与主机上的LoongCollector建立心跳连接。
  2. 在服务器上将自定义字符串user-defined-test-1写入用户自定义标识文件,该字符串将在后续步骤中使用。
#向指定文件写入自定义字符串,若目录不存在需手动创建。文件路径和名称由日志服务固定,不可自定义。echo"user-defined-test-1"> /etc/ilogtail/user_defined_id root@master:~/logotail# echo "user-defined-test-1" > /etc/ilogtail/user_defined_id
  3. 登录日志服务控制台,在Project列表中单击目标Project。
  4. 在左侧导航单击资源 > 机器组,单击机器组右侧的图标,选择创建机器组。
  5. 进行如下配置后单击确定:
    • 机器组名称:名称在Project内唯一,必须以小写字母或数字开头和结尾,且只能包含小写字母、数字、连字符(-)和下划线(_),长度为3~128字符。
    • 机器组标识:选择用户自定义标识。
    • 用户自定义标识:输入与服务器上用户自定义标识文件内容一致的字符串,此例为user-defined-test-1。
  6. 机器组创建完成后,在机器组列表单击目标机器组,在机器组状态中查看心跳状态;若为FAIL,请等待两分钟左右后手动刷新,心跳状态为OK表示创建成功。

image-20260109093419667
image-20260109093523763

按上述相同的安装步骤,对剩余的服务器节点逐一进行安装配置,也可参考下面的批量安装示意。
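下面是一个批量安装的示意脚本,假设 master 可免密 SSH 登录各节点,账号 ID(此处沿用上文脱敏后的 123456 占位)与自定义标识均与前文一致:

```bash
#!/bin/bash
# 对剩余节点批量安装 LoongCollector,并写入用户ID文件与用户自定义标识
for host in 192.168.0.201 192.168.0.202 192.168.0.203; do
  ssh root@${host} "mkdir -p /root/logotail && cd /root/logotail && \
    wget -q https://aliyun-observability-release-cn-shenzhen.oss-cn-shenzhen.aliyuncs.com/loongcollector/linux64/latest/loongcollector.sh -O loongcollector.sh && \
    chmod +x loongcollector.sh && ./loongcollector.sh install cn-shenzhen-internet && \
    mkdir -p /etc/ilogtail/users && touch /etc/ilogtail/users/123456 && \
    echo 'user-defined-test-1' > /etc/ilogtail/user_defined_id"
done
```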

image-20260109093953909

安装完成后,若需要采集日志,还需在控制台创建对应的采集配置。

image-20260109094141453
image-20260109094244226
image-20260109094301936
image-20260109094648827
image-20260109094831422
image-20260109094909776
5.ECS服务器日志采集:与上述主机日志采集的步骤相同。

