Thursday, 21 March 2024

Mount your Amazon EBS in an EKS Pod

If you want to use an Amazon EBS volume as the storage behind a Kubernetes PersistentVolume on EKS, use the Amazon EBS CSI driver (the ebs.csi.aws.com provisioner below) and make sure the driver add-on is installed in your cluster.

EC2

Dynamic provisioning:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  encrypted: "true"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: centos
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: ebs-claim
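
A quick way to check the result (a sketch: the resource names come from the manifests above, and the file name is only an assumption):

kubectl apply -f ebs-example.yaml             # the manifests above saved to a single file
kubectl get pvc ebs-claim                     # stays Pending until the Pod is scheduled (WaitForFirstConsumer)
kubectl get pod app
kubectl exec app -- tail -n 3 /data/out.txt   # timestamps written to the EBS-backed volume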

Fargate

You can't mount Amazon EBS volumes to Fargate Pods.

More examples are available in the aws-ebs-csi-driver repository on GitHub.

Wednesday, 20 March 2024

Mount your Amazon EFS in an EKS Pod

If you have already created an EFS file system and you want to mount it in a Kubernetes Pod, you first have to add the EKS cluster's security group to each mount target.
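
A sketch of how to do this with the AWS CLI (the file system ID, mount target ID and security group IDs are placeholders to replace with your own):

# List the mount targets of your EFS file system.
aws efs describe-mount-targets --file-system-id fs-03e456ec05d6df74e

# For each mount target set the security groups; the call replaces the whole set,
# so include the existing groups together with the EKS cluster security group.
aws efs modify-mount-target-security-groups \
  --mount-target-id fsmt-EXAMPLE \
  --security-groups sg-EXISTING sg-EKS_CLUSTER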

EC2

This example shows how to mount a statically provisioned EFS persistent volume (PV) inside a container with encryption in transit enabled:

---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
mountOptions:
  - tls
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-03e456ec05d6df74e # Replace with your EFS identifier.
    volumeAttributes:
      encryptInTransit: "true"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: efs-app
spec:
  containers:
  - name: app
    image: centos
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: efs-claim
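
Once the Pod is running, you can verify that data is written and that the mount goes through the TLS tunnel (a sketch; efs-app and /data come from the manifests above):

kubectl get pvc efs-claim
kubectl exec efs-app -- tail -n 3 /data/out.txt
kubectl exec efs-app -- mount | grep /data   # with encryption in transit the NFS server is typically a local stunnel endpoint (127.0.0.1)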

Fargate

A Pod running on AWS Fargate mounts an Amazon EFS file system automatically; you don't need to install the EFS CSI driver on Fargate nodes.

You can't use dynamic provisioning for persistent volumes with Fargate nodes, but you can use static provisioning. An example:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  storageClassName: efs-sc
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-03e456ec05d6df74e # Replace with your EFS identifier.
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: efs-app
spec:
  containers:
  - name: app
    image: centos
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: efs-claim
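
Remember that the Pod only lands on Fargate if a Fargate profile matches its namespace. A quick sketch to check where it was scheduled (MY_CLUSTER and the profile name are placeholders):

# Only needed if no Fargate profile covers the namespace yet.
eksctl create fargateprofile --cluster MY_CLUSTER --name efs-example --namespace default

kubectl get pod efs-app -o wide               # the node name should start with "fargate-"
kubectl exec efs-app -- tail -n 3 /data/out.txt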

More examples are available in the aws-efs-csi-driver repository on GitHub.

Tuesday, 12 December 2023

Vim tips and tricks

In normal mode:

  • go to the top of the file: press "gg". 

Open ".vimrc" in your home directory to set permanently:

  • "set number": add the line numbers,
  • "set mouse+=a": easy copy without the line numbers.

Tuesday, 19 September 2023

BIND configuration

This post is based on Debian 11, with BIND installed from the bind9 package.

Next, open "/etc/bind/named.conf.options":

acl "trusted" {

        ADRES_IP; # Serwer podstawowy

        ADRES_IP; # Serwer zapasowy

};


options {

directory "/var/cache/bind";

        recursion yes; # Pozwol na rekursywne zapytania.

        allow-recursion { trusted; }; # Pozwol na rekursywne zapytania z listy zaufanych klientow.

        allow-transfer { none; }; # Wylacz domyslny transfer strefy.

dnssec-validation auto;

auth-nxdomain no;

listen-on-v6 { any; };


        forwarders {

                8.8.8.8;

                8.8.4.4;

        };

};

Teraz "/etc/bind/named.conf.local":

zone "yolandi.pl" {

    type master;

    file "/etc/bind/db.yolandi.pl"; # Sciezka do pliku strefy

    allow-transfer { ADRES_IP; }; # IP serwera zapasowego

};

Create "/etc/bind/zones/db.yolandi.pl":

$TTL    604800
@       IN      SOA     yolandi.pl. admin.yolandi.pl. (
                        2021051601 ; Serial
                            604800 ; Refresh
                             86400 ; Retry
                           2419200 ; Expire
                            604800 ) ; Negative Cache TTL

        IN      NS      YOUR_PRIMARY_DOMAIN.
        IN      NS      SECONDARY_SERVER_DOMAIN.

yolandi.pl.     IN      A       IP_ADDRESS
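
Before restarting BIND it is worth validating the configuration and the zone file (standard bind9utils tools):

named-checkconf                                            # checks the named.conf.* files
named-checkzone yolandi.pl /etc/bind/zones/db.yolandi.pl   # checks the zone file
systemctl restart bind9
dig @localhost yolandi.pl A                                # should return the A record defined above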

Monday, 8 May 2023

EFS as a storage for an EKS deployment

Let's imagine you have an EKS cluster with a Grafana deployment. Your cluster uses EBS as its default storage. Unfortunately, EBS doesn't let you scale the deployment to more replicas, because an EBS volume can only be attached to a single instance at a time. To work around this we can use EFS.

Before you start changing your workloads, install the Amazon EFS CSI driver as an EKS add-on.

Then create an EFS file system and write down its identifier. It's better to have one EFS file system per deployment (easier maintenance, for example).

Now we can create a StorageClass in our Helm chart:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-storageclass
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: {{ .Values.grafana_custom.StorageClass.efs_id | quote }}
  directoryPerms: "755"
  basePath: "/grafana"

When a PVC is created with a StorageClass that has a basePath specified, the dynamically provisioned PV will be created with a subdirectory path under the specified basePath. This can be useful for organizing and managing the storage resources in a cluster.

It is better to deploy the StorageClass before you change the Grafana storage type.

Change the StorageClass name in the Helm values: https://github.com/grafana/helm-charts/blob/main/charts/grafana/values.yaml#L327

Change the access mode to "ReadWriteMany": https://github.com/grafana/helm-charts/blob/main/charts/grafana/values.yaml#L328

Enter the needed disk size: https://github.com/grafana/helm-charts/blob/main/charts/grafana/values.yaml#L329
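
In practice this boils down to the persistence block of the Grafana chart values (a sketch; the keys follow the linked values.yaml, the size is only an example):

persistence:
  enabled: true
  storageClassName: efs-storageclass
  accessModes:
    - ReadWriteMany
  size: 10Gi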

Monday, 12 December 2022

Create a certificate by cert-manager and AWS Private CA

In our EKS ecosystem it is possible to create a certificate issued by ACM Private CA. To do this we use the (already implemented) https://github.com/cert-manager/cert-manager together with the https://github.com/cert-manager/aws-privateca-issuer module.

Let's pretend we have an EKS cluster with some custom deployment. Usually, to issue a certificate you use an Issuer or ClusterIssuer Kubernetes resource; the difference is that a ClusterIssuer can be used from any namespace. If we use the aws-privateca-issuer module, we must use AWSPCAIssuer or AWSPCAClusterIssuer instead.

On our platform the AWSPCAClusterIssuer already exists:

apiVersion: awspca.cert-manager.io/v1beta1
kind: AWSPCAClusterIssuer
metadata:
  name: YOUR_NAME
spec:
  arn: PRIVATE_CA_ARN
  region: YOUR_REGION

But how to create a certificate? To do this we use a Certificate:

kind: Certificate
apiVersion: cert-manager.io/v1
metadata:
  name: MY_SUBDOMAIN
spec:
  commonName: MY_SUBDOMAIN
  dnsNames:
    - MY_SUBDOMAIN
  duration: 2160h0m0s
  issuerRef:
    group: awspca.cert-manager.io
    kind: AWSPCAClusterIssuer
    name: YOUR_NAME
  renewBefore: 360h0m0s
  secretName: MY_SUBDOMAIN
  usages:
    - server auth
    - client auth
  privateKey:
    algorithm: "RSA"
    size: 2048

Use "kubectl -n MY_NAMESPACE get certificate" and check the result:

NAME           READY   SECRET         AGE
MY_SUBDOMAIN   True    MY_SUBDOMAIN   12s

The certificate is stored in a Secret. To view the details:

kubectl get secret MY_SUBDOMAIN -n MY_NAMESPACE -o 'go-template={{index .data "tls.crt"}}' | base64 --decode | openssl x509 -noout -text
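
If the certificate does not become Ready, the events of the Certificate and its CertificateRequest usually explain why:

kubectl -n MY_NAMESPACE describe certificate MY_SUBDOMAIN
kubectl -n MY_NAMESPACE get certificaterequest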

Friday, 2 December 2022

EKS upgrade

Why do we need to upgrade EKS? First of all, every new version of EKS (Kubernetes) provides new features and adds fixes and patches. Thanks to this your EKS cluster stays secure and your workloads can be more sophisticated. Remember also that one day your EKS version will no longer be supported by AWS, so check when your version reaches end of support.

Before you start to upgrade an EKS cluster, check the available versions at https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html. If you upgrade to 1.22, you must complete the 1.22 prerequisites. Note that you can only move one minor version at a time: if your current cluster version is 1.21 and you want to reach 1.23, you must first upgrade to 1.22 and then to 1.23.

Check the requirements here: https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html#update-existing-cluster

Check what was changed in the version you want to upgrade to: https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html

First check the changelog: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md#changelog-since-v1220

Then read the AWS blog posts about the release; there you can read about the potential risks and how to handle them.

One of the important things that can happen is that the API versions of your resources (declared in the manifests) can be deprecated. That's why, before you upgrade, compare your Helm charts and/or resources with the Kubernetes changelog. If an API is deprecated in the current version, it will be removed in a future one and you will no longer be able to use it.

In my case I used a Terraform module, but we can imagine managing the resources directly. Then you must upgrade the control plane and the worker nodes separately: first add the same number of new worker nodes to the cluster, then upgrade the control plane, then the worker nodes you added, and at the end remove the old worker nodes.
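
If you manage the cluster with eksctl instead, a rough sketch of the same idea could look like this (cluster and nodegroup names are placeholders, not the exact procedure from my setup):

# Upgrade the control plane by one minor version.
eksctl upgrade cluster --name MY_CLUSTER --version 1.23 --approve

# Create a new nodegroup on the new version, then remove the old one
# (eksctl drains the old nodes before deleting the nodegroup).
eksctl create nodegroup --cluster MY_CLUSTER --name workers-new
eksctl delete nodegroup --cluster MY_CLUSTER --name workers-old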

Because we have separate development and production environments, we can test the upgrade before we go to production.

By default Kubernetes uses the rolling update strategy when Pods are redeployed. Thanks to this, the upgrade should be invisible to users.

If, after reading all the requirements and the changelog, you're still not sure whether your application will work on the new EKS version, you can create a separate EKS cluster and deploy your application there (as a Helm chart, for example) to check that it's up and running.

Check which versions of the add-ons you must set when you upgrade. For instance:
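
One way to check the supported add-on versions for the target Kubernetes version is the AWS CLI (a sketch; vpc-cni is just an example add-on name):

aws eks describe-addon-versions --kubernetes-version 1.23 --addon-name vpc-cni \
  --query 'addons[].addonVersions[].addonVersion'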

Make sure your Terraform Kubernetes provider supports the right API version; you can check it in the prerequisites. But don't change the provider setup before you have applied Terraform to deploy the new cluster version (it won't work, because Terraform first uses the old configuration).

After the upgrade is done, check the server version:

kubectl version --short

And then check the versions of your nodes:

kubectl get nodes