Multi-Cluster with Karmada [Lab Session]

dev.to · May 15, 2024 · Paulo Ponciano


What is Karmada?

Karmada (Kubernetes Armada) is a Kubernetes management system that enables you to run your cloud-native applications across multiple Kubernetes clusters and clouds, with no changes to your applications. By speaking Kubernetes-native APIs and providing advanced scheduling capabilities, Karmada enables truly open, multi-cloud Kubernetes.

Karmada aims to provide turnkey automation for multi-cluster application management in multi-cloud and hybrid cloud scenarios, with key features such as centralized multi-cloud management, high availability, failure recovery, and traffic scheduling.

Architecture
[Karmada architecture diagram]
Source: https://karmada.io

Lab Architecture

[Lab architecture diagram]

Deploying the infrastructure

  • Cluster 1, or pegasus:
git clone https://github.com/paulofponciano/EKS-Istio-Karpenter-ArgoCD.git

[!NOTE]
Change the values in variables.tfvars if needed.
In the nlb.tf file, we reference an ACM certificate. Replace it with your own certificate, or remove it by commenting out lines 38, 39, and 40 and uncommenting line 37.

With a certificate:

resource "aws_lb_listener" "ingress_443" {
  load_balancer_arn = aws_lb.istio_ingress.arn
  port              = "443"
  #protocol          = "TCP"
  protocol        = "TLS"
  certificate_arn = "arn:aws:acm:us-east-2:ACCOUNTID:certificate/bfbfe3ce-d347-4c42-8986-f45e95e04ca1"
  alpn_policy     = "HTTP2Preferred"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.https.arn
  }
}

Without a certificate:

resource "aws_lb_listener" "ingress_443" {
  load_balancer_arn = aws_lb.istio_ingress.arn
  port              = "443"
  protocol          = "TCP"
  # protocol        = "TLS"
  # certificate_arn = "arn:aws:acm:us-east-2:ACCOUNTID:certificate/bfbfe3ce-d347-4c42-8986-f45e95e04ca1"
  # alpn_policy     = "HTTP2Preferred"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.https.arn
  }
}

tofu init
tofu plan --var-file variables.tfvars
tofu apply --var-file variables.tfvars

  • Cluster 2, or pegasus-2:
git clone https://github.com/paulofponciano/EKS-Istio-Karpenter.git

[!NOTE]
Change the values in variables.tfvars if needed.

tofu init
tofu plan --var-file variables.tfvars
tofu apply --var-file variables.tfvars

Deploying Karmada

git clone https://github.com/paulofponciano/karmada.git
cd karmada

Switching to the pegasus cluster context:

aws eks update-kubeconfig --region us-east-2 --name pegasus

[!NOTE]
The following commands are run on the pegasus cluster (Cluster 1), where Argo CD is running.

helm repo add karmada-charts https://raw.githubusercontent.com/karmada-io/karmada/master/charts
helm repo update
helm --namespace karmada-system upgrade -i karmada karmada-charts/karmada --create-namespace

Verifying the deployment:

kubectl get pods -n karmada-system

We now have the Karmada control plane running on the pegasus cluster.

The deployment also generates a secret containing the kubeconfig we need to connect to the Karmada control plane:

kubectl get secrets -n karmada-system | grep karmada-kubeconfig

kubectl get secret -n karmada-system karmada-kubeconfig -o jsonpath='{.data.kubeconfig}' | base64 --decode
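
If you also want to reach the Karmada API server from outside the pod we create later, you can write the decoded kubeconfig to a file and point kubectl at it. A small sketch; note that by default the server address in this kubeconfig resolves to the in-cluster service, so it only works from somewhere with network access to that endpoint:

kubectl get secret -n karmada-system karmada-kubeconfig \
  -o jsonpath='{.data.kubeconfig}' | base64 --decode > karmada-apiserver.kubeconfig
kubectl --kubeconfig karmada-apiserver.kubeconfig get ns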

Creating the IRSA (IAM Role for Service Account)

[!NOTE]
Replace the ACCOUNTID values in iam-policy-irsa-karmada.json and in the eksctl command below.

This results in a Service Account inside the pegasus cluster, which we will mount into an Ubuntu pod that serves as a temporary access point to the Karmada control plane.

aws iam create-policy --policy-name ubuntu-admin-karmada \
  --policy-document file://iam-policy-irsa-karmada.json
eksctl create iamserviceaccount --name ubuntu-admin-karmada \
  --namespace karmada-system \
  --cluster pegasus \
  --attach-policy-arn arn:aws:iam::ACCOUNTID:policy/ubuntu-admin-karmada \
  --region us-east-2 \
  --profile default \
  --approve

On the AWS side, this creates an IAM role that we can grant access to in both EKS clusters.

  • EKS IAM - Console (pegasus-2):

Do the same for the pegasus cluster, where the Karmada control plane is running.
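
The console steps in the screenshots grant the IAM role created by eksctl access to each cluster. If you prefer the CLI and your clusters use EKS access entries, something along these lines should be equivalent (the role name is the one eksctl generated for the service account, shown here as a placeholder):

# pegasus-2 (repeat with --cluster-name pegasus --region us-east-2 for the other cluster)
aws eks create-access-entry \
  --cluster-name pegasus-2 \
  --region us-west-2 \
  --principal-arn arn:aws:iam::ACCOUNTID:role/IRSA_ROLE_NAME

aws eks associate-access-policy \
  --cluster-name pegasus-2 \
  --region us-west-2 \
  --principal-arn arn:aws:iam::ACCOUNTID:role/IRSA_ROLE_NAME \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster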

Accessing the Karmada API server

Now let's spin up that ubuntu-admin pod on the pegasus cluster. The manifest is already set up to use the Service Account we created above (see the sketch after the commands below).

kubectl apply -f https://raw.githubusercontent.com/paulofponciano/karmada/main/ubuntu-admin-karmada.yaml
kubectl get pods -n karmada-system | grep ubuntu
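
For reference, the manifest looks roughly like this (a minimal sketch using the names from this lab; the actual file in the repo may differ in details such as the image and the tooling installed inside the container):

apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-admin-karmada
  namespace: karmada-system
spec:
  serviceAccountName: ubuntu-admin-karmada    # IRSA Service Account created earlier
  containers:
    - name: ubuntu
      image: ubuntu:22.04                     # assumed image
      command: ["sleep", "infinity"]          # keep the pod running for interactive use
      volumeMounts:
        - name: karmada-kubeconfig
          mountPath: /etc/karmada-kubeconfig  # gives us /etc/karmada-kubeconfig/kubeconfig
          readOnly: true
  volumes:
    - name: karmada-kubeconfig
      secret:
        secretName: karmada-kubeconfig        # secret generated by the Karmada Helm chart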

Next, let's exec into the Ubuntu container running on the pegasus cluster:

kubectl exec -it ubuntu-admin-karmada -n karmada-system -- /bin/bash

  • Installing kubectl-karmada:
curl -s https://raw.githubusercontent.com/karmada-io/karmada/master/hack/install-cli.sh | bash -s kubectl-karmada

  • Switching to the pegasus-2 cluster context:
aws eks update-kubeconfig --region us-west-2 --name pegasus-2 --kubeconfig $HOME/.kube/pegasus-2.config 
kubectl get nodes --kubeconfig $HOME/.kube/pegasus-2.config

  • Doing the same for the pegasus cluster:
aws eks update-kubeconfig --region us-east-2 --name pegasus --kubeconfig $HOME/.kube/pegasus.config
kubectl get nodes --kubeconfig $HOME/.kube/pegasus.config

  • Checking access to the Karmada API server:
kubectl get all -A --kubeconfig /etc/karmada-kubeconfig/kubeconfig

  • Joining pegasus-2 to Karmada:
kubectl karmada --kubeconfig /etc/karmada-kubeconfig/kubeconfig join pegasus-2 --cluster-kubeconfig=$HOME/.kube/pegasus-2.config

  • Joining pegasus to Karmada:
kubectl karmada --kubeconfig /etc/karmada-kubeconfig/kubeconfig join pegasus --cluster-kubeconfig=$HOME/.kube/pegasus.config

  • Checking the status of the registered clusters:
kubectl --kubeconfig /etc/karmada-kubeconfig/kubeconfig get clusters

  • Installing the Argo CD CLI:
curl -sSL -o argocd-linux-amd64 https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
install -m 555 argocd-linux-amd64 /usr/local/bin/argocd
rm argocd-linux-amd64

  • Retrieving the Argo CD server admin password and logging in:
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
argocd login argocd-server.argocd.svc.cluster.local:80 --username admin

  • Adding Karmada as a cluster in Argo CD:
argocd cluster add karmada-apiserver --kubeconfig /etc/karmada-kubeconfig/kubeconfig --name karmada-controlplane

Deploying with Argo CD

If we open the Argo CD UI, which is running on the pegasus cluster, we can see that Karmada is registered as a cluster that Argo CD can deploy to:

We can now apply a manifest that defines a new source for Argo CD to watch for deployments. In this case, that source is a GitHub repository:

kubectl apply -f karmada-argo-app.yaml
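
karmada-argo-app.yaml is an Argo CD Application. A minimal sketch of what such a manifest can look like, assuming the repository and path used in this lab and the cluster name we registered above (the actual file in the repo may differ):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: karmada-apps              # hypothetical name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/paulofponciano/karmada.git
    targetRevision: main          # assumed branch
    path: app-manifests
  destination:
    name: karmada-controlplane    # cluster added with 'argocd cluster add'
    namespace: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true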

Since that repository already contains manifests (under the /app-manifests path), Argo CD syncs them, delivering the applications to the Karmada control plane; Karmada, in turn, delivers them to the two clusters according to what is defined in the PropagationPolicy manifests:

On the pegasus cluster we can see:

kubectl get pods -o wide | grep redis

On the pegasus-2 cluster:

kubectl get pods -o wide | grep nginx

In the case of the RabbitMQ deployment, we can see replicas running in both clusters; this will become clear when we look at the PropagationPolicy files.

kubectl get pods -o wide --context arn:aws:eks:us-east-2:ACCOUNTID:cluster/pegasus | grep rabbitmq

kubectl get pods -o wide --context arn:aws:eks:us-west-2:ACCOUNTID:cluster/pegasus-2 | grep rabbitmq

Karmada OverridePolicy and PropagationPolicy

In the repository that Argo CD is watching, we can see the Karmada manifests as well as the deployment manifests we use as examples.

Redis example

Override rules and the selector for the deployment they apply to, in this case 'redis':

apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
  name: redis-op
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: redis
  overrideRules:
    - targetCluster:
        clusterNames:
          - pegasus-2
      overriders:
        labelsOverrider:
          - operator: add
            value:
              env: skoala-dev
          - operator: add
            value:
              env-stat: skoala-stage
          - operator: remove
            value:
              for: for
          - operator: replace
            value:
              bar: test
    - targetCluster:
        clusterNames:
          - pegasus
      overriders:
        annotationsOverrider:
          - operator: add
            value:
              env: skoala-stage
          - operator: remove
            value:
              bom: bom
          - operator: replace
            value:
              emma: sophia

Propagation rules, the deployment selector, and the target clusters. With this failover configuration, the deployment is migrated to the pegasus-2 cluster if the pegasus cluster fails:

apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: redis-propagation
spec:
  propagateDeps: true
  failover:
    application:
      decisionConditions:
        tolerationSeconds: 120
      purgeMode: Never
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: redis
  placement:
    clusterAffinity:
      clusterNames:
        - pegasus
        - pegasus-2
    spreadConstraints:
      - maxGroups: 1
        minGroups: 1
        spreadByField: cluster
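
To confirm that the policies actually reached the Karmada control plane, and that the annotation override for pegasus took effect on the propagated Deployment, two quick checks can be run from the ubuntu-admin pod (a sketch, assuming redis landed in the default namespace of pegasus as shown in the screenshots above):

# Policies as seen by the Karmada API server
kubectl --kubeconfig /etc/karmada-kubeconfig/kubeconfig get propagationpolicies,overridepolicies

# Annotations added by the OverridePolicy on the member cluster
kubectl get deployment redis --kubeconfig $HOME/.kube/pegasus.config \
  -o jsonpath='{.metadata.annotations}'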

Nginx example

apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
  name: nginx-op
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  overrideRules:
    - targetCluster:
        clusterNames:
          - pegasus-2
      overriders:
        labelsOverrider:
          - operator: add
            value:
              env: skoala-dev
          - operator: add
            value:
              env-stat: skoala-stage
          - operator: remove
            value:
              for: for
          - operator: replace
            value:
              bar: test

In this case, only the pegasus-2 cluster is defined as the target:

apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  placement:
    clusterAffinity:
      clusterNames:
        - pegasus-2
    replicaScheduling:
      replicaDivisionPreference: Weighted
      replicaSchedulingType: Divided
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames:
                - pegasus-2
            weight: 1

RabbitMQ example

apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
  name: rabbitmq-op
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: rabbitmq
  overrideRules:
    - targetCluster:
        clusterNames:
          - pegasus-2
      overriders:
        labelsOverrider:
          - operator: add
            value:
              env: skoala-dev
          - operator: add
            value:
              env-stat: skoala-stage
          - operator: remove
            value:
              for: for
          - operator: replace
            value:
              bar: test
    - targetCluster:
        clusterNames:
          - pegasus
      overriders:
        annotationsOverrider:
          - operator: add
            value:
              env: skoala-stage
          - operator: remove
            value:
              bom: bom
          - operator: replace
            value:
              emma: sophia

Here we have something different: both clusters are defined as targets, but with different weights, so Karmada distributes the replicas according to each cluster's weight. With the 2:1 static weights below, a Deployment with 3 replicas, for example, would get 2 replicas on pegasus and 1 on pegasus-2:

apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: rabbitmq-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: rabbitmq
  placement:
    clusterAffinity:
      clusterNames:
        - pegasus
        - pegasus-2
    replicaScheduling:
      replicaDivisionPreference: Weighted
      replicaSchedulingType: Divided
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames:
                - pegasus
            weight: 2
          - targetCluster:
              clusterNames:
                - pegasus-2
            weight: 1

Removing ubuntu-admin and the IRSA

We can delete the Ubuntu pod we used for the Karmada setup, as well as the IRSA:

kubectl delete -f ubuntu-admin-karmada.yaml
eksctl delete iamserviceaccount --name ubuntu-admin-karmada \
  --namespace karmada-system \
  --cluster pegasus \
  --region us-east-2 \
  --profile default
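
eksctl removes the role it created, but not the IAM policy we created with the AWS CLI. For a complete cleanup, the policy can be deleted as well (same ACCOUNTID placeholder as before):

aws iam delete-policy --policy-arn arn:aws:iam::ACCOUNTID:policy/ubuntu-admin-karmada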

Next time, we'll go after a full DR scenario with Karmada and see how far we get.

Keep shipping!

Tags: kubernetes, karmada, community, aws