Open Cluster Management: Multi-Cluster Management

What is Open Cluster Management

Open Cluster Management Components

Open Cluster Management History

Open Cluster Management Quick Installation

Prerequisites

  • Make sure kubectl and kustomize are installed.
  • Make sure kind is installed (v0.9.0 or later; the latest release is preferred).
  • The hub cluster should be v1.19+. (To run on a hub cluster version between v1.16 and v1.18, manually enable the feature gate "V1beta1CSRAPICompatibility"; see the sketch after this list.)
  • Currently the bootstrap process relies on client authentication via CSRs, so Kubernetes distributions that do not support this (for example, EKS) cannot be used as the hub.
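
"V1beta1CSRAPICompatibility" is an OCM feature gate rather than a Kubernetes one. Below is a heavily hedged sketch of one way to enable it, assuming it can be toggled through the same registrationConfiguration.featureGates field that appears in the ClusterManager example later in this post; run it only after clusteradm init has created the ClusterManager resource, and confirm against the OCM documentation whether this particular gate belongs on the hub side, the Klusterlet side, or both.

# Hedged sketch: enable an OCM feature gate by patching the ClusterManager CR.
# Note: a merge patch replaces the entire featureGates list, so include any
# entries that are already present (e.g. DefaultClusterSet).
kubectl patch clustermanager cluster-manager --type=merge -p '
spec:
  registrationConfiguration:
    featureGates:
    - feature: V1beta1CSRAPICompatibility
      mode: Enable
'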

Install the hub cluster and managed clusters

Download and install the latest release of the clusteradm command-line tool:

curl -L https://raw.githubusercontent.com/open-cluster-management-io/clusteradm/main/install.sh | bash

Or install it with go:

# Installing clusteradm to $GOPATH/bin/
GO111MODULE=off go get -u open-cluster-management.io/clusteradm/...
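
Either way, a quick sanity check that the binary landed on your PATH (assuming clusteradm's version subcommand):

# Confirm that clusteradm is installed and executable.
clusteradm version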

Quickly set up one hub cluster and two managed clusters:

curl -L https://raw.githubusercontent.com/open-cluster-management-io/OCM/main/solutions/setup-dev-environment/local-up.sh | bash

The script content is as follows:

#!/bin/bash

cd $(dirname ${BASH_SOURCE})

set -e

hub=${CLUSTER1:-hub}
c1=${CLUSTER1:-cluster1}
c2=${CLUSTER2:-cluster2}

hubctx="kind-${hub}"
c1ctx="kind-${c1}"
c2ctx="kind-${c2}"

kind create cluster --name "${hub}"
kind create cluster --name "${c1}"
kind create cluster --name "${c2}"

kubectl config set-context ${hubctx}

helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.10.0 \
  --set ingressShim.defaultIssuerName=letsencrypt-prod \
  --set ingressShim.defaultIssuerKind=ClusterIssuer \
  --set ingressShim.defaultIssuerGroup=cert-manager.io \
  --set featureGates="ExperimentalCertificateSigningRequestControllers=true" \
  --set installCRDs=true

cat << EOF > cluster-issuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: "1zoxun1@gmail.com"
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx
EOF
kubectl apply -f cluster-issuer.yaml

echo "Initialize the ocm hub cluster\n"
clusteradm init --wait --context ${hubctx}
joincmd=$(clusteradm get token --context ${hubctx} | grep clusteradm)

echo "Join cluster1 to hub\n"
$(echo ${joincmd} --force-internal-endpoint-lookup --wait --context ${c1ctx} | sed "s/<cluster_name>/$c1/g")

echo "Join cluster2 to hub\n"
$(echo ${joincmd} --force-internal-endpoint-lookup --wait --context ${c2ctx} | sed "s/<cluster_name>/$c2/g")

echo "Accept join of cluster1 and cluster2"
clusteradm accept --context ${hubctx} --clusters ${c1},${c2} --wait

kubectl get managedclusters --all-namespaces --context ${hubctx}

Install the OCM components and register managed clusters

Before installing the OCM components into your clusters, export the following environment variable in the terminal where you will run the clusteradm command-line tool, so that it can correctly identify the hub cluster.

export CTX_HUB_CLUSTER=<hub-cluster-context>
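
If you are unsure which context name to use, list the contexts in your kubeconfig; for the kind-based setup above, the hub context is kind-hub:

# List the available kubeconfig contexts and pick the hub's context name.
kubectl config get-contexts

# Example value, assuming the kind-based setup above:
export CTX_HUB_CLUSTER=kind-hub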

Run clusteradm init:

# By default, it installs the latest release of the OCM components.
# Use e.g. "--bundle-version=latest" to install the latest development builds.
# NOTE: For hub cluster versions between v1.16 and v1.19, use the parameter: --use-bootstrap-token
clusteradm init --wait --context ${CTX_HUB_CLUSTER}

The clusteradm init command installs the registration-operator on the hub cluster, which is responsible for continuously installing and upgrading some core components of the OCM environment.
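
To quickly confirm that the operator is running, you can check its deployment; the namespace and deployment names below are assumed from the pod output shown later in this post:

# The registration-operator runs as the cluster-manager deployment in the
# open-cluster-management namespace on the hub.
kubectl -n open-cluster-management get deploy cluster-manager --context ${CTX_HUB_CLUSTER}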

After the init command completes, it prints a generated command on the console for registering your managed clusters. An example of the generated command is shown below.

clusteradm join \
    --hub-token <your token data> \
    --hub-apiserver <your hub kube-apiserver endpoint> \
    --wait \
    --cluster-name <cluster_name>

It is recommended to save this command somewhere safe for future use. If it is lost, you can regenerate it with clusteradm get token.
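
As a concrete illustration, here is a sketch of running the generated command against one of the kind managed clusters created earlier; the placeholders come from the generated command above, and --force-internal-endpoint-lookup is taken from the local-up script, which uses it so that agents running inside kind containers can reach the hub:

# Sketch, assuming the kind-based clusters from the local-up script above.
clusteradm join \
    --hub-token <your token data> \
    --hub-apiserver <your hub kube-apiserver endpoint> \
    --cluster-name cluster1 \
    --force-internal-endpoint-lookup \
    --wait \
    --context kind-cluster1

The following outputs show what such a kind-based environment looks like once the clusters are registered: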

$ kind get clusters
enabling experimental podman provider
cluster1
cluster2
hub
kind
$ kubectl get ns --context kind-hub
NAME                          STATUS   AGE
default                       Active   24h
kube-node-lease               Active   24h
kube-public                   Active   24h
kube-system                   Active   24h
local-path-storage            Active   24h
open-cluster-management       Active   23h
open-cluster-management-hub   Active   23h
$ kubectl get clustermanager --context kind-hub
NAME              AGE
cluster-manager   23h
$ kubectl -n open-cluster-management get pod --context kind-hub
NAME                               READY   STATUS    RESTARTS   AGE
cluster-manager-79dcdf496f-mfv72   1/1     Running   0          23h
$ kubectl -n open-cluster-management-hub get pod --context kind-hub
NAME                                                       READY   STATUS    RESTARTS   AGE
cluster-manager-placement-controller-6597644b5b-crcmp      1/1     Running   0          23h
cluster-manager-registration-controller-7d774d4866-vtqwc   1/1     Running   0          23h
cluster-manager-registration-webhook-f549cb5bd-lmgmx       2/2     Running   0          23h
cluster-manager-work-webhook-64f95b566d-drtv8              2/2     Running   0          23h
$ kubectl -n open-cluster-management-agent get pod --context ${CTX_HUB_CLUSTER}
NAME                                             READY   STATUS    RESTARTS   AGE
klusterlet-registration-agent-57d7bf7749-4rck7   1/1     Running   0          28m
klusterlet-work-agent-5848786fdc-rzgrx           1/1     Running   0          27m

Verify that the ManagedCluster object has been created successfully:
$ kubectl get managedcluster --context ${CTX_HUB_CLUSTER}
NAME      HUB ACCEPTED   MANAGED CLUSTER URLS         JOINED   AVAILABLE   AGE
default   true           https://10.168.110.21:6443   True     True        28m

# The overall installation information is visible on the clustermanager custom resource:
kubectl get clustermanager cluster-manager -o yaml --context kind-hub
apiVersion: operator.open-cluster-management.io/v1
kind: ClusterManager
metadata:
  creationTimestamp: "2023-03-14T03:10:16Z"
  finalizers:
  - operator.open-cluster-management.io/cluster-manager-cleanup
  generation: 2
  name: cluster-manager
  resourceVersion: "3178"
  uid: cd60535e-7264-4760-ad46-edca6e617da5
spec:
  deployOption:
    mode: Default
  nodePlacement: {}
  placementImagePullSpec: quay.io/open-cluster-management/placement:v0.10.0
  registrationConfiguration:
    featureGates:
    - feature: DefaultClusterSet
      mode: Enable
  registrationImagePullSpec: quay.io/open-cluster-management/registration:v0.10.0
  workImagePullSpec: quay.io/open-cluster-management/work:v0.10.0
status:
  conditions:
  - lastTransitionTime: "2023-03-14T03:10:43Z"
    message: Registration is managing credentials
    observedGeneration: 2
    reason: RegistrationFunctional
    status: "False"
    type: HubRegistrationDegraded
  - lastTransitionTime: "2023-03-14T03:11:03Z"
    message: Placement is scheduling placement decisions
    observedGeneration: 2
    reason: PlacementFunctional
    status: "False"
    type: HubPlacementDegraded
  - lastTransitionTime: "2023-03-14T03:10:22Z"
    message: Feature gates are all valid
    reason: FeatureGatesAllValid
    status: "True"
    type: ValidFeatureGates
  - lastTransitionTime: "2023-03-14T03:11:20Z"
    message: Components of cluster manager are up to date
    reason: ClusterManagerUpToDate
    status: "False"
    type: Progressing
  - lastTransitionTime: "2023-03-14T03:10:22Z"
    message: Components of cluster manager are applied
    reason: ClusterManagerApplied
    status: "True"
    type: Applied
  generations:
  - {group: apps, lastGeneration: 1, name: cluster-manager-registration-controller, namespace: open-cluster-management-hub, resource: deployments, version: v1}
  - {group: apps, lastGeneration: 1, name: cluster-manager-registration-webhook, namespace: open-cluster-management-hub, resource: deployments, version: v1}
  - {group: apps, lastGeneration: 1, name: cluster-manager-work-webhook, namespace: open-cluster-management-hub, resource: deployments, version: v1}
  - {group: apps, lastGeneration: 1, name: cluster-manager-placement-controller, namespace: open-cluster-management-hub, resource: deployments, version: v1}
  observedGeneration: 2
  relatedResources:
  - {group: apiextensions.k8s.io, name: clustermanagementaddons.addon.open-cluster-management.io, namespace: "", resource: customresourcedefinitions, version: v1}
  - {group: apiextensions.k8s.io, name: managedclusters.cluster.open-cluster-management.io, namespace: "", resource: customresourcedefinitions, version: v1}
  - {group: apiextensions.k8s.io, name: managedclustersets.cluster.open-cluster-management.io, namespace: "", resource: customresourcedefinitions, version: v1}
  - {group: apiextensions.k8s.io, name: manifestworks.work.open-cluster-management.io, namespace: "", resource: customresourcedefinitions, version: v1}
  - {group: apiextensions.k8s.io, name: managedclusteraddons.addon.open-cluster-management.io, namespace: "", resource: customresourcedefinitions, version: v1}
  - {group: apiextensions.k8s.io, name: managedclustersetbindings.cluster.open-cluster-management.io, namespace: "", resource: customresourcedefinitions, version: v1}
  - {group: apiextensions.k8s.io, name: placements.cluster.open-cluster-management.io, namespace: "", resource: customresourcedefinitions, version: v1}
  - {group: apiextensions.k8s.io, name: addondeploymentconfigs.addon.open-cluster-management.io, namespace: "", resource: customresourcedefinitions, version: v1}
  - {group: apiextensions.k8s.io, name: placementdecisions.cluster.open-cluster-management.io, namespace: "", resource: customresourcedefinitions, version: v1}
  - {group: apiextensions.k8s.io, name: addonplacementscores.cluster.open-cluster-management.io, namespace: "", resource: customresourcedefinitions, version: v1}
  - {group: "", name: open-cluster-management-hub, namespace: "", resource: namespaces, version: v1}
  - {group: rbac.authorization.k8s.io, name: "open-cluster-management:cluster-manager-registration:controller", namespace: "", resource: clusterroles, version: v1}
  - {group: rbac.authorization.k8s.io, name: "open-cluster-management:cluster-manager-registration:controller", namespace: "", resource: clusterrolebindings, version: v1}
  - {group: "", name: cluster-manager-registration-controller-sa, namespace: open-cluster-management-hub, resource: serviceaccounts, version: v1}
  - {group: rbac.authorization.k8s.io, name: "open-cluster-management:cluster-manager-registration:webhook", namespace: "", resource: clusterroles, version: v1}
  - {group: rbac.authorization.k8s.io, name: "open-cluster-management:cluster-manager-registration:webhook", namespace: "", resource: clusterrolebindings, version: v1}
  - {group: "", name: cluster-manager-registration-webhook-sa, namespace: open-cluster-management-hub, resource: serviceaccounts, version: v1}
  - {group: rbac.authorization.k8s.io, name: "open-cluster-management:cluster-manager-work:webhook", namespace: "", resource: clusterroles, version: v1}
  - {group: rbac.authorization.k8s.io, name: "open-cluster-management:cluster-manager-work:webhook", namespace: "", resource: clusterrolebindings, version: v1}
  - {group: "", name: cluster-manager-work-webhook-sa, namespace: open-cluster-management-hub, resource: serviceaccounts, version: v1}
  - {group: rbac.authorization.k8s.io, name: "open-cluster-management:cluster-manager-placement:controller", namespace: "", resource: clusterroles, version: v1}
  - {group: rbac.authorization.k8s.io, name: "open-cluster-management:cluster-manager-placement:controller", namespace: "", resource: clusterrolebindings, version: v1}
  - {group: "", name: cluster-manager-placement-controller-sa, namespace: open-cluster-management-hub, resource: serviceaccounts, version: v1}
  - {group: "", name: cluster-manager-registration-webhook, namespace: open-cluster-management-hub, resource: services, version: v1}
  - {group: "", name: cluster-manager-work-webhook, namespace: open-cluster-management-hub, resource: services, version: v1}
  - {group: apps, name: cluster-manager-registration-controller, namespace: open-cluster-management-hub, resource: deployments, version: v1}
  - {group: apps, name: cluster-manager-registration-webhook, namespace: open-cluster-management-hub, resource: deployments, version: v1}
  - {group: apps, name: cluster-manager-work-webhook, namespace: open-cluster-management-hub, resource: deployments, version: v1}
  - {group: apps, name: cluster-manager-placement-controller, namespace: open-cluster-management-hub, resource: deployments, version: v1}
  - {group: admissionregistration.k8s.io, name: managedclustervalidators.admission.cluster.open-cluster-management.io, namespace: "", resource: validatingwebhookconfigurations, version: v1}
  - {group: admissionregistration.k8s.io, name: managedclustermutators.admission.cluster.open-cluster-management.io, namespace: "", resource: mutatingwebhookconfigurations, version: v1}
  - {group: admissionregistration.k8s.io, name: managedclustersetbindingvalidators.admission.cluster.open-cluster-management.io, namespace: "", resource: validatingwebhookconfigurations, version: v1}
  - {group: admissionregistration.k8s.io, name: managedclustersetbindingv1beta1validators.admission.cluster.open-cluster-management.io, namespace: "", resource: validatingwebhookconfigurations, version: v1}
  - {group: admissionregistration.k8s.io, name: manifestworkvalidators.admission.work.open-cluster-management.io, namespace: "", resource: validatingwebhookconfigurations, version: v1}
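
Instead of reading the whole YAML, a minimal sketch for checking just one condition (the jsonpath expression is an assumption based on the status fields shown above):

# Print the status of the Applied condition ("True" once all components are applied).
kubectl get clustermanager cluster-manager --context kind-hub \
  -o jsonpath='{.status.conditions[?(@.type=="Applied")].status}'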

Accept the join requests and verify

Once the OCM agent is running on your managed cluster, it sends a "handshake" to your hub cluster and waits for approval from the hub cluster administrator. In this section, we walk through accepting the registration requests from the perspective of an OCM hub administrator.

Wait for the CSR object to be created on the hub cluster by the OCM agent of your managed cluster:

# or the previously chosen cluster name
kubectl get csr -w --context ${CTX_HUB_CLUSTER} | grep cluster1  

An example of a pending CSR request is shown below:

cluster1-tqcjj   33s   kubernetes.io/kube-apiserver-client   system:serviceaccount:open-cluster-management:cluster-bootstrap   Pending

Accept the join request using the clusteradm tool:

clusteradm accept --clusters cluster1 --context ${CTX_HUB_CLUSTER}

After running the accept command, the CSR from the managed cluster named "cluster1" is approved. In addition, it instructs the OCM hub control plane to automatically set up the related objects (such as a namespace named "cluster1" on the hub cluster) and RBAC permissions.
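
A quick, hedged way to verify those side effects on the hub (the namespace and ManagedCluster names follow the "cluster1" example above):

# The accept command should have created a dedicated namespace for the cluster...
kubectl get ns cluster1 --context ${CTX_HUB_CLUSTER}
# ...and the ManagedCluster should now show HUB ACCEPTED as true.
kubectl get managedcluster cluster1 --context ${CTX_HUB_CLUSTER}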

Verify the installation of the OCM agent on the managed cluster by running the following command:

kubectl -n open-cluster-management-agent get pod --context ${CTX_MANAGED_CLUSTER}
NAME                                             READY   STATUS    RESTARTS   AGE
klusterlet-registration-agent-598fd79988-jxx7n   1/1     Running   0          19d
klusterlet-work-agent-7d47f4b5c5-dnkqw           1/1     Running   0          19d

Uninstall OCM from the control plane

Before uninstalling the OCM components from your clusters, detach the managed clusters from the control plane.
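
A hedged sketch of one way to detach: either run clusteradm unjoin on the managed cluster (covered in the next section), or delete its ManagedCluster object on the hub (the name "cluster1" follows the example above):

# Remove the cluster's registration from the hub's perspective.
kubectl delete managedcluster cluster1 --context ${CTX_HUB_CLUSTER}

Then remove the hub control plane components: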

clusteradm clean --context ${CTX_HUB_CLUSTER}

Check that the instances of the OCM hub control plane have been removed.

$ kubectl -n open-cluster-management-hub get pod --context ${CTX_HUB_CLUSTER}
No resources found in open-cluster-management-hub namespace.
$ kubectl -n open-cluster-management get pod --context ${CTX_HUB_CLUSTER}
No resources found in open-cluster-management namespace.

Check that the clustermanager resource has been removed from the control plane.

$ kubectl get clustermanager --context ${CTX_HUB_CLUSTER}
error: the server doesn't have a resource type "clustermanager"

Unregister a managed cluster

Delete the resources that were generated when the cluster was registered with the hub cluster.
Format:

clusteradm unjoin --cluster-name "cluster1" --context ${CTX_MANAGED_CLUSTER}

For example:

$ clusteradm unjoin --cluster-name "default" --context ${CTX_HUB_CLUSTER}
Remove applied resources in the managed cluster default ...
Applied resources have been deleted during the default joined stage. The status of mcl default will be unknown in the hub cluster.

Check that the OCM agent installation has been removed from the managed cluster:

$ kubectl -n open-cluster-management-agent get pod --context ${CTX_MANAGED_CLUSTER}
No resources found in open-cluster-management-agent namespace.

Check that the klusterlet corresponding to the unregistered cluster has been deleted:

$ kubectl get klusterlet --context  ${CTX_HUB_CLUSTER}
error: the server doesn't have a resource type "klusterlet"

Open Cluster Management Deployment

How Open Cluster Management Manages Kubernetes

How to Develop Open Cluster Management

References:
https://open-cluster-management.io/getting-started/installation/register-a-cluster/
https://open-cluster-management.io/getting-started/installation/start-the-control-plane/
https://cert-manager.io/docs/usage/kube-csr/
