niedziela, 15 września 2024

LXC - let's get started

I know that everyone uses Docker, but sometimes you have to take care of a legacy server where, for example, you have an LXC container in which your client runs a Docker container (I know, it's weird).

Let's imagine that in your LXC container you have a device mounted at the "/var/lib/docker" path. You don't have enough space on it, and you don't want to restart the server or even the LXC daemon. Let's just replace it.

Create a new storage pool:
lxc storage create new-device btrfs size=100GB

Create a new volume in the pool:
lxc storage volume create new-device volume

Then we get into the LXC container and stop the Docker daemon:
lxc exec lxc-container -- /bin/bash
systemctl stop docker
exit

Remove the previous device from the LXC container:
lxc config device remove lxc-container old-device

Add the new device:
lxc config device add lxc-container some-name disk pool=new-device path=/var/lib/docker source=volume

Start the Docker daemon in the container again.
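Put together, the whole swap looks roughly like this (a sketch using the example names from this post; note that data on the old device is not copied automatically, so migrate the contents of "/var/lib/docker" first if you still need them):

```shell
# Sketch of the full swap, using the example names from this post.
# Assumes the LXD CLI ("lxc") is available and the container is running.
lxc storage create new-device btrfs size=100GB    # new btrfs-backed pool
lxc storage volume create new-device volume       # volume inside the pool

lxc exec lxc-container -- systemctl stop docker   # stop Docker inside the container

lxc config device remove lxc-container old-device
lxc config device add lxc-container some-name disk \
    pool=new-device path=/var/lib/docker source=volume

lxc exec lxc-container -- systemctl start docker  # bring Docker back up
```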

Some additional useful commands:

  • list storage pools: "lxc storage list",
  • show which devices are attached to your container: "lxc config show container-name --expanded",
  • launch an LXC container with a given profile and image: "lxc launch ubuntu:20.04 container-name -p default",
  • list running containers: "lxc list",
  • stop and remove a container: "lxc stop container-name", "lxc delete container-name",
  • show information about a storage pool: "lxc storage show storage-name",
  • list the volumes in a storage pool: "lxc storage volume list storage-name",
  • remove a volume from a storage pool: "lxc storage volume delete storage-name volume-name",
  • remove a storage pool: "lxc storage delete storage-name".

czwartek, 21 marca 2024

Mount your Amazon EBS in an EKS Pod

If you want to use an EBS volume as the backing storage for a Kubernetes PersistentVolume, the Amazon EBS CSI driver does the job.
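The manifests below rely on the Amazon EBS CSI driver being present in the cluster. If it isn't, it can be installed as a managed EKS add-on; a sketch with the AWS CLI (the cluster name and role ARN are placeholders):

```shell
# Install the EBS CSI driver as a managed EKS add-on (placeholder names).
aws eks create-addon \
    --cluster-name MY_CLUSTER \
    --addon-name aws-ebs-csi-driver \
    --service-account-role-arn arn:aws:iam::111122223333:role/MY_EBS_CSI_ROLE
```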

EC2

Dynamic provisioning:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  encrypted: "true"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: centos
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: ebs-claim
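Once applied, you can check that the claim binds and that the volume is actually being written to (a sketch; "ebs-example.yaml" is an assumed file name for the manifests above, and kubectl is assumed to be configured for the cluster):

```shell
kubectl apply -f ebs-example.yaml          # the manifests above, saved to a file
kubectl get pvc ebs-claim                  # STATUS becomes Bound once the Pod schedules
kubectl exec app -- tail -3 /data/out.txt  # timestamps written by the loop
```

Because the StorageClass uses "WaitForFirstConsumer", the PVC stays Pending until the Pod is scheduled; that is expected.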

Fargate

You can't mount Amazon EBS volumes to Fargate Pods.

More examples here.

środa, 20 marca 2024

Mount your Amazon EFS in an EKS Pod

If you have already created an EFS file system and you want to mount it in a Kubernetes Pod, you have to add the EKS cluster's security group to each mount target.
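Adding the security group can be done from the AWS CLI; a hedged sketch (all IDs are placeholders):

```shell
# List the mount targets of the file system (placeholder ID).
aws efs describe-mount-targets --file-system-id fs-03e456ec05d6df74e

# Attach the cluster security group to one mount target (placeholder IDs);
# repeat for each mount target returned above.
aws efs modify-mount-target-security-groups \
    --mount-target-id fsmt-0123456789abcdef0 \
    --security-groups sg-0123456789abcdef0
```

Note that "modify-mount-target-security-groups" replaces the whole set, so include any groups already attached.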

EC2

This example shows how to mount a statically provisioned EFS persistent volume (PV) inside a container with encryption in transit enabled:

---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
mountOptions:
  - tls
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-03e456ec05d6df74e # Replace with your EFS identifier.
    volumeAttributes:
      encryptInTransit: "true"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: efs-app
spec:
  containers:
  - name: app
    image: centos
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: efs-claim

Fargate

A Pod running on AWS Fargate automatically mounts an Amazon EFS file system; you don't need to install the driver yourself.

You can't use dynamic provisioning for persistent volumes with Fargate nodes, but you can use static provisioning. An example:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  storageClassName: efs-sc
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-03e456ec05d6df74e # Replace with your EFS identifier.
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: efs-app
spec:
  containers:
  - name: app
    image: centos
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: efs-claim

More examples here.

wtorek, 12 grudnia 2023

Vim tips and tricks

In the normal mode:

  • go to the top of the file: press "gg". 

Open ".vimrc" in your home directory to make settings permanent:

  • "set number": add the line numbers,
  • "set mouse+=a": easy copy without the line numbers.

wtorek, 19 września 2023

BIND configuration

This post was written based on Debian 11.

Next, open "/etc/bind/named.conf.options":

acl "trusted" {
        IP_ADDRESS; # Primary server
        IP_ADDRESS; # Secondary server
};

options {
        directory "/var/cache/bind";

        recursion yes; # Allow recursive queries.
        allow-recursion { trusted; }; # Allow recursive queries from trusted clients.
        allow-transfer { none; }; # Disable zone transfers by default.

        dnssec-validation auto;
        auth-nxdomain no;
        listen-on-v6 { any; };

        forwarders {
                8.8.8.8;
                8.8.4.4;
        };
};

Now "/etc/bind/named.conf.local":

zone "yolandi.pl" {
        type master;
        file "/etc/bind/zones/db.yolandi.pl"; # Path to the zone file
        allow-transfer { IP_ADDRESS; }; # IP of the secondary server
};

Create "/etc/bind/zones/db.yolandi.pl":

$TTL 604800
@ IN SOA yolandi.pl. admin.yolandi.pl. (
        2021051601 ; Serial
            604800 ; Refresh
             86400 ; Retry
           2419200 ; Expire
            604800 ) ; Negative Cache TTL

    IN NS YOUR_PRIMARY_DOMAIN.
    IN NS SECONDARY_SERVER_DOMAIN.

yolandi.pl. IN A IP_ADDRESS
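After editing, the configuration and the zone file can be sanity-checked before reloading (a sketch; assumes the bind9 utilities are installed on the server):

```shell
named-checkconf                                           # syntax-check the named.conf* files
named-checkzone yolandi.pl /etc/bind/zones/db.yolandi.pl  # validate the zone file
systemctl reload bind9                                    # apply the changes
dig @localhost yolandi.pl A                               # query the A record locally
```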

poniedziałek, 8 maja 2023

EFS as a storage for an EKS deployment

Let's imagine you have an EKS cluster with a Grafana deployment. Your cluster uses EBS as the default storage. Unfortunately, EBS doesn't allow you to create more replicas of your deployment, because an EBS volume can be attached to only one instance at a time. To work around this we can use EFS.

Before you start changing your workloads, install the Amazon EFS CSI driver as an add-on.

Then create an EFS file system and write down its identifier. It's better to have one EFS per deployment (easier maintenance, for example).

Now we can create a StorageClass in our Helm chart:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-storageclass
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: {{ .Values.grafana_custom.StorageClass.efs_id | quote }}
  directoryPerms: "755"
  basePath: "/grafana"

When a PVC is created with a StorageClass that has a basePath specified, the dynamically provisioned PV will be created with a subdirectory path under the specified basePath. This can be useful for organizing and managing the storage resources in a cluster.

It is better to deploy the StorageClass first, before you change the Grafana storage type.

Change the StorageClass name in the Helm values: https://github.com/grafana/helm-charts/blob/main/charts/grafana/values.yaml#L327

Change the access mode to "ReadWriteMany": https://github.com/grafana/helm-charts/blob/main/charts/grafana/values.yaml#L328

Enter the needed disk size: https://github.com/grafana/helm-charts/blob/main/charts/grafana/values.yaml#L329
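Taken together, the three linked values amount to a persistence block roughly like this (a hedged sketch of the grafana/grafana chart's values; the size is only an example):

```yaml
# values.yaml fragment for the grafana Helm chart (example values).
persistence:
  enabled: true
  storageClassName: efs-storageclass  # the StorageClass created above
  accessModes:
    - ReadWriteMany                   # EFS supports many writers
  size: 10Gi                          # example size; EFS grows elastically anyway
```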

poniedziałek, 12 grudnia 2022

Create a certificate by cert-manager and AWS Private CA

In our EKS ecosystem we can create certificates issued by ACM Private CA. To do this we use (the already implemented) https://github.com/cert-manager/cert-manager with the https://github.com/cert-manager/aws-privateca-issuer module.

Let's pretend we have an EKS cluster with some custom deployment. Usually, to issue a certificate, you use an Issuer or ClusterIssuer Kubernetes resource. The difference is that a ClusterIssuer can be used from any namespace. If we use the aws-privateca-issuer module, we must use AWSPCAIssuer or AWSPCAClusterIssuer instead.

On our platform the AWSPCAClusterIssuer already exists:

apiVersion: awspca.cert-manager.io/v1beta1
kind: AWSPCAClusterIssuer
metadata:
  name: YOUR_NAME
spec:
  arn: PRIVATE_CA_ARN
  region: YOUR_REGION

But how do you create a certificate? Use a Certificate resource:

kind: Certificate
apiVersion: cert-manager.io/v1
metadata:
  name: MY_SUBDOMAIN
spec:
  commonName: MY_SUBDOMAIN
  dnsNames:
    - MY_SUBDOMAIN
  duration: 2160h0m0s
  issuerRef:
    group: awspca.cert-manager.io
    kind: AWSPCAClusterIssuer
    name: YOUR_NAME
  renewBefore: 360h0m0s
  secretName: MY_SUBDOMAIN
  usages:
    - server auth
    - client auth
  privateKey:
    algorithm: "RSA"
    size: 2048

Run "kubectl -n MY_NAMESPACE get certificate" and check the result:

NAME           READY   SECRET         AGE
MY_SUBDOMAIN   True    MY_SUBDOMAIN   12s

The certificate is stored in a Secret. To view the details:

kubectl get secret MY_SUBDOMAIN -n MY_NAMESPACE -o 'go-template={{index .data "tls.crt"}}' | base64 --decode | openssl x509 -noout -text
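The go-template pulls the base64-encoded PEM out of the Secret; the decode step itself can be tried locally without a cluster. A sketch (assumes openssl is available; the throwaway self-signed certificate stands in for the one stored in the Secret):

```shell
# Create a throwaway self-signed certificate to stand in for tls.crt.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example.test" \
    -keyout /tmp/tls.key -out /tmp/tls.crt -days 90 2>/dev/null

# A Secret stores the PEM base64-encoded, like .data["tls.crt"]:
b64=$(base64 < /tmp/tls.crt | tr -d '\n')

# The same decode-and-inspect pipeline as above, on the local stand-in:
echo "$b64" | base64 --decode | openssl x509 -noout -subject
```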