Monday, 23 September 2024

DevOps engineer's starting pack

Let's pretend you have a new computer after joining a new company and you have to set everything up from scratch.

Usually I use macOS, but most of this applies to a Linux machine as well.

This is what my starting pack looks like:

  • Chrome browser (on my private machine I use Vivaldi, but many companies use Google Workspace),
  • Homebrew (a package manager),
  • KeePassXC (a password manager),
  • iTerm2 (a terminal emulator with a terminal split),
  • Oh My Zsh (a framework for managing your Zsh configuration),
  • Visual Studio Code:
    • Terraform extension,
    • YAML extension,
    • HashiCorp HCL extension,
  • a GitHub account (I don't use my personal account; I create an account per company, because sometimes they want you to set up something additional that you may not want on your private account),
  • AWS CLI,
  • aws-vault (store and access your AWS credentials),
  • kubectl,
  • kubectx (switch between your Kubernetes clusters),
  • Helm,
  • Python pip,
  • tfenv to switch between Terraform versions (so you don't have to install Terraform separately),
  • Docker,
  • Vim:
    • add to the ".vimrc" file in your home directory (reopens files at the last cursor position):
      au BufReadPost * if line("'\"") > 0 && line("'\"") <= line("$") | exe "normal! g`\"" | endif
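Most of the tools above can be installed through Homebrew once it is in place. A minimal sketch; the formula, cask, and extension names below are my assumption of the current names, so verify them with "brew search" and the VS Code marketplace first:

```shell
# GUI applications are distributed as Homebrew casks.
brew install --cask google-chrome keepassxc iterm2 visual-studio-code aws-vault docker

# CLI tools are regular formulae (kubernetes-cli provides kubectl,
# python provides pip, kubectx also installs kubens).
brew install awscli kubernetes-cli kubectx helm tfenv python

# Oh My Zsh ships its own installer script instead of a Homebrew package.
sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"

# VS Code extensions can be added from the CLI (extension IDs assumed):
code --install-extension hashicorp.terraform
code --install-extension redhat.vscode-yaml
code --install-extension hashicorp.hcl
```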

Sunday, 15 September 2024

LXC - let's get started

I know everyone uses Docker, but sometimes you have to take care of a legacy server where, for example, an LXC container runs your client's Docker container (I know, it's weird).

Let's imagine your LXC container has a device mounted at the "/var/lib/docker" path. You don't have enough space on it, and you don't want to restart the server or even the LXC daemon. Let's just replace it.

Add a new device:
lxc storage create new-device btrfs size=100GB

Add a new volume to the device:
lxc storage volume create new-device volume

Then get into the LXC container and stop the Docker daemon:
lxc exec lxc-container -- /bin/bash
systemctl stop docker
exit

Remove the previous device from the LXC container:
lxc config device remove lxc-container old-device

Add the new device:
lxc config device add lxc-container some-name disk pool=new-device path=/var/lib/docker source=volume

Run the Docker daemon in the container.
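To wrap up and confirm the swap worked, you can start Docker again and check its data root; a sketch, reusing the container name from above:

```shell
# Show all devices attached to the container, including inherited ones.
lxc config show lxc-container --expanded

# Start the Docker daemon again and confirm /var/lib/docker sits on the new device.
lxc exec lxc-container -- systemctl start docker
lxc exec lxc-container -- df -h /var/lib/docker
```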

Some additional useful commands:

  • list the storage pools: "lxc storage list",
  • show the devices attached to your container: "lxc config show container-name --expanded",
  • launch an LXC container with a given profile and image: "lxc launch ubuntu:20.04 container-name -p default",
  • list containers: "lxc list",
  • stop and remove a container: "lxc stop container-name", "lxc delete container-name",
  • show information about a storage pool: "lxc storage show storage-name",
  • list the volumes in a storage pool: "lxc storage volume list storage-name",
  • remove a volume from a storage pool: "lxc storage volume delete storage-name volume-name",
  • remove a storage pool: "lxc storage delete storage-name".

Thursday, 21 March 2024

Mount your Amazon EBS in an EKS Pod

This is how to use an EBS volume as the backing storage for a Kubernetes persistent volume.

EC2

Dynamic provisioning:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  encrypted: "true"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: centos
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: ebs-claim
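To try the manifests above, assuming the EBS CSI driver add-on is installed and the manifests are saved as "ebs-example.yaml" (the filename is just an example):

```shell
kubectl apply -f ebs-example.yaml

# The PVC stays Pending until the Pod is scheduled (WaitForFirstConsumer).
kubectl get pvc ebs-claim

# Once the Pod is running, check that the volume is being written to.
kubectl exec app -- tail /data/out.txt
```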

Fargate

You can't mount Amazon EBS volumes to Fargate Pods.

More examples here.

Wednesday, 20 March 2024

Mount your Amazon EFS in an EKS Pod

If you have already created an EFS file system and you want to mount it in a Kubernetes Pod, you have to add the EKS cluster's security group to each mount target.
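This can be done from the AWS CLI; a sketch, where the mount target and security group IDs are placeholders you replace with your own:

```shell
# List the mount targets of the file system.
aws efs describe-mount-targets --file-system-id fs-03e456ec05d6df74e

# Attach the cluster security group to a mount target
# (this replaces the whole list, so include any existing groups too).
aws efs modify-mount-target-security-groups \
  --mount-target-id fsmt-0123456789abcdef0 \
  --security-groups sg-0123456789abcdef0
```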

EC2

This example shows how to mount a statically provisioned EFS persistent volume (PV) inside a container with encryption in transit configured:

---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
mountOptions:
  - tls
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-03e456ec05d6df74e # Replace with your EFS identifier.
    volumeAttributes:
      encryptInTransit: "true"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: efs-app
spec:
  containers:
  - name: app
    image: centos
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: efs-claim

Fargate

A Pod running on AWS Fargate automatically mounts an Amazon EFS file system, without needing to install the driver.

You can't use dynamic provisioning for persistent volumes with Fargate nodes, but you can use static provisioning. An example:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  storageClassName: efs-sc
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-03e456ec05d6df74e # Replace with your EFS identifier.
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: efs-app
spec:
  containers:
  - name: app
    image: centos
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: efs-claim

More examples here.

Tuesday, 12 December 2023

Vim tips and tricks

In the normal mode:

  • go to the top of the file: press "gg". 

Open ".vimrc" in your home directory to set these permanently:

  • "set number": add the line numbers,
  • "set mouse+=a": easy copy without the line numbers.
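In the ".vimrc" itself, the two settings above would look like this:

```vim
set number    " show line numbers
set mouse+=a  " enable mouse support; selections made in Vim won't include the numbers
```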

Tuesday, 19 September 2023

BIND configuration

This post is based on Debian 11.

Next, open "/etc/bind/named.conf.options":

acl "trusted" {
        IP_ADDRESS; # Primary server
        IP_ADDRESS; # Secondary server
};

options {
        directory "/var/cache/bind";

        recursion yes; # Allow recursive queries.
        allow-recursion { trusted; }; # Allow recursive queries from the trusted list.
        allow-transfer { none; }; # Disable zone transfers by default.

        dnssec-validation auto;
        auth-nxdomain no;
        listen-on-v6 { any; };

        forwarders {
                8.8.8.8;
                8.8.4.4;
        };
};

Now "/etc/bind/named.conf.local":

zone "yolandi.pl" {
    type master;
    file "/etc/bind/zones/db.yolandi.pl"; # Path to the zone file
    allow-transfer { IP_ADDRESS; }; # IP of the secondary server
};

Create "/etc/bind/zones/db.yolandi.pl":

$TTL 604800
@ IN SOA yolandi.pl. admin.yolandi.pl. (
        2021051601 ; Serial
        604800     ; Refresh
        86400      ; Retry
        2419200    ; Expire
        604800 )   ; Negative Cache TTL

    IN NS YOUR_PRIMARY_DOMAIN.
    IN NS SECONDARY_SERVER_DOMAIN.

yolandi.pl. IN A IP_ADDRESS
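After editing, it's worth validating the configuration and the zone file before reloading BIND, using the standard tooling shipped with bind9 on Debian:

```shell
# Check the main configuration syntax (prints nothing on success).
named-checkconf

# Check the zone file against the zone name.
named-checkzone yolandi.pl /etc/bind/zones/db.yolandi.pl

# Reload the service.
systemctl reload bind9
```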

Monday, 8 May 2023

EFS as a storage for an EKS deployment

Let's imagine you have an EKS cluster with a Grafana deployment. Your cluster has EBS as its default storage. Unfortunately, EBS doesn't let you run more replicas of your deployment, because an EBS volume is already attached to a single instance. To work around this, we can use EFS.

Before you start changing your workloads, install the Amazon EFS CSI driver as an add-on.

Then create an EFS file system and write down its identifier. It's better to have one EFS per deployment (easier maintenance, for example).

Now we can create a StorageClass in our Helm chart:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-storageclass
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: {{ .Values.grafana_custom.StorageClass.efs_id | quote }}
  directoryPerms: "755"
  basePath: "/grafana"

When a PVC is created with a StorageClass that has a basePath specified, the dynamically provisioned PV will be created with a subdirectory path under the specified basePath. This can be useful for organizing and managing the storage resources in a cluster.

It is better to deploy the StorageClass first, before you change the Grafana storage type.

Change the StorageClass name in the Helm values: https://github.com/grafana/helm-charts/blob/main/charts/grafana/values.yaml#L327

Change the access mode to "ReadWriteMany": https://github.com/grafana/helm-charts/blob/main/charts/grafana/values.yaml#L328

Enter the needed disk size: https://github.com/grafana/helm-charts/blob/main/charts/grafana/values.yaml#L329
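Put together, the three linked values correspond to a "persistence" block in the Grafana chart values along these lines (a sketch; the size is only an example):

```yaml
persistence:
  enabled: true
  storageClassName: efs-storageclass
  accessModes:
    - ReadWriteMany
  size: 10Gi
```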