Monday, May 8, 2023

EFS as storage for an EKS deployment

Let's imagine you have an EKS cluster with a Grafana deployment. Your cluster uses EBS as its default storage. Unfortunately, EBS doesn't let you run more replicas of your deployment, because an EBS volume can only be attached to a single instance (ReadWriteOnce). To work around this we can use EFS.

Before you start changing your workloads, install the Amazon EFS CSI driver as an add-on.
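
If you manage the cluster with the AWS CLI, a minimal sketch of installing the add-on could look like the following (the cluster name and the IAM role ARN are placeholders you need to adjust):

# Install the Amazon EFS CSI driver as an EKS add-on.
aws eks create-addon \
  --cluster-name CLUSTER_NAME \
  --addon-name aws-efs-csi-driver \
  --service-account-role-arn arn:aws:iam::111122223333:role/EFS_CSI_DRIVER_ROLE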

Then create an EFS file system and write down its identifier. It's better to have one EFS file system per deployment (easier maintenance, for example).
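
A minimal sketch with the AWS CLI (the tag value, subnet and security group are placeholders, and you need a mount target in every subnet used by the worker nodes):

# Create the file system and note the FileSystemId in the output.
aws efs create-file-system --encrypted --tags Key=Name,Value=grafana-efs

# Create a mount target per worker-node subnet.
aws efs create-mount-target --file-system-id fs-0123456789abcdef0 \
  --subnet-id subnet-12345678 --security-groups sg-12345678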

Now we can create a StorageClass in our Helm chart:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-storageclass
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: {{ .Values.grafana_custom.StorageClass.efs_id | quote }}
  directoryPerms: "755"
  basePath: "/grafana"

When a PVC is created with a StorageClass that has a basePath specified, the dynamically provisioned PV will be created with a subdirectory path under the specified basePath. This can be useful for organizing and managing the storage resources in a cluster.

It is better to deploy the StorageClass before you change the Grafana storage type.
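
After deploying it you can quickly verify that the StorageClass exists:

kubectl get storageclass efs-storageclass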

Change the StorageClass name in the Helm values: https://github.com/grafana/helm-charts/blob/main/charts/grafana/values.yaml#L327

Change the access mode to "ReadWriteMany": https://github.com/grafana/helm-charts/blob/main/charts/grafana/values.yaml#L328

Enter the needed disk size: https://github.com/grafana/helm-charts/blob/main/charts/grafana/values.yaml#L329
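
Putting these three values together, the persistence block in the Grafana chart values could look roughly like this (the size below is only an example):

persistence:
  enabled: true
  storageClassName: efs-storageclass
  accessModes:
    - ReadWriteMany
  size: 10Gi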

Monday, December 12, 2022

Create a certificate with cert-manager and AWS Private CA

In our EKS ecosystem it is possible to create a certificate issued via ACM Private CA. To do this we use the (already installed) https://github.com/cert-manager/cert-manager together with the https://github.com/cert-manager/aws-privateca-issuer module.

Let's pretend we have an EKS cluster with some custom deployment. Usually, to issue a certificate you use an Issuer or ClusterIssuer Kubernetes resource. The difference is that a ClusterIssuer can be used from any namespace. With the aws-privateca-issuer module we must use AWSPCAIssuer or AWSPCAClusterIssuer instead.

On our platform the AWSPCAClusterIssuer already exists:

apiVersion: awspca.cert-manager.io/v1beta1
kind: AWSPCAClusterIssuer
metadata:
  name: YOUR_NAME
spec:
  arn: PRIVATE_CA_ARN
  region: YOUR_REGION

But how do we create a certificate? We use a Certificate resource:

kind: Certificate
apiVersion: cert-manager.io/v1
metadata:
  name: MY_SUBDOMAIN
spec:
  commonName: MY_SUBDOMAIN
  dnsNames:
    - MY_SUBDOMAIN
  duration: 2160h0m0s
  issuerRef:
    group: awspca.cert-manager.io
    kind: AWSPCAClusterIssuer
    name: YOUR_NAME
  renewBefore: 360h0m0s
  secretName: MY_SUBDOMAIN
  usages:
    - server auth
    - client auth
  privateKey:
    algorithm: "RSA"
    size: 2048

Use "kubectl -n MY_NAMESPACE get certificate" and check the result:

NAME           READY   SECRET         AGE
MY_SUBDOMAIN   True    MY_SUBDOMAIN   12s
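
If READY stays False, the reason is usually visible in the events of the Certificate and its CertificateRequest:

kubectl -n MY_NAMESPACE describe certificate MY_SUBDOMAIN
kubectl -n MY_NAMESPACE get certificaterequest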

The certificate is stored in a Secret. To view the details:

kubectl get secret MY_SUBDOMAIN -n MY_NAMESPACE -o 'go-template={{index .data "tls.crt"}}' | base64 --decode | openssl x509 -noout -text

Friday, December 2, 2022

EKS upgrade

Why do we need to upgrade EKS? First of all, every new version of EKS (Kubernetes) provides new features and adds fixes and patches. Thanks to this your EKS cluster stays secure and your workloads can be more sophisticated. Remember also that one day your EKS version will no longer be supported by AWS, so check when your version reaches end of support.

Before you start upgrading an EKS cluster, check the available versions at https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html. If you upgrade to 1.22, you must complete the 1.22 prerequisites first. EKS upgrades one minor version at a time, so if your current cluster version is 1.21 and you want to reach 1.23, you must first update the cluster to 1.22 and then to 1.23.

Check the requirements here: https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html#update-existing-cluster

You must check what was changed in the version you want to upgrade to: https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html

First check the changelog: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md#changelog-since-v1220

Then read the blogs:

In these blogs you can read about the potential risks and how to handle them.

One of the important things that can happen is that the API versions of your resources (described in the manifests) get deprecated. That's why, before you upgrade, you should compare your Helm charts and (or) resources with the Kubernetes changelog. If an API is deprecated in the current version, it will be removed in a future one and you will no longer be able to use it.
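
A simple way to spot such resources is to list what the cluster serves and grep your manifests for API groups removed in the target version (the chart directory and the API groups below are only examples):

# API versions currently served by the cluster.
kubectl api-versions

# Example: look for Ingress APIs that were removed in Kubernetes 1.22.
grep -R "extensions/v1beta1\|networking.k8s.io/v1beta1" ./charts/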

In my case I used a module. But we can imagine managing the resources directly. Then you must upgrade the control plane and the worker nodes separately: at the beginning add the same number of new worker nodes to the cluster, then upgrade the control plane, next the worker nodes you added, and at the end remove the old worker nodes.

Since we have separate development and production environments, we can test the upgrade in development before going to production.

By default Kubernetes uses the rolling update strategy when a Pod is redeployed, so the upgrade should be invisible to users.
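
For reference, this is roughly what that default looks like when written out explicitly in a Deployment manifest:

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      # Defaults: at most 25% of Pods unavailable and at most 25% extra Pods during a rollout.
      maxUnavailable: 25%
      maxSurge: 25%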

If, after reading all the requirements and the changelog, you are still not sure whether your application will work on the new EKS version, you can create a separate EKS cluster and deploy your application there (as a Helm chart, for example) to check that it's up and running.

Check which versions of the add-ons you must use when you upgrade. For instance:
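
As a sketch, you can query the compatible add-on versions with the AWS CLI (the add-on name and Kubernetes version below are only examples):

aws eks describe-addon-versions --addon-name vpc-cni --kubernetes-version 1.23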

Make sure your Terraform Kubernetes provider supports the right API version; you can check it in the prerequisites. But don't change the provider setup before you have applied Terraform to deploy the new cluster version (it won't work, because Terraform first uses the old configuration).

After it's done, check the server version:

kubectl version --short

And then check the versions of your nodes:

kubectl get nodes

Monday, September 27, 2021

Behind the Kubernetes wheel

Everyone is containerizing with Docker. Thanks to this we can roll out new software versions quickly; however, this post is not about the advantages of containerization. I want to focus more on Kubernetes.

In my work I have often run into problems with scaling applications. When the load increases, you need to spin up the right number of containers to handle it, quickly and with one command, not by calling another „docker run". That is where the savings appear.

The number of Docker-based deployments we run in our companies makes it necessary to manage a container farm. Without the right tools, resources can't be controlled effectively. Additionally, the configuration has to be kept in a shared place so that services can easily be restored. This is where the concept of orchestration comes in.

The phenomenon of containerization is not so new, but orchestration relatively is. If we have already decided to put our software into Docker, then sooner or later, in order not to become a victim of our own success, we will have to choose a solution that provides stability when scaling the project up. It is worth knowing the concept even if we do not plan to grow the stack beyond one frontend container and one backend container.

What is orchestration?

Let's go through some basic definitions for rookies. Orchestration is a mechanism that lets you administer a cluster of servers running a containerization service.

Kubernetes is a big open source project originally designed by Google. A huge community means the code can have very few bugs, but if it was written badly from the beginning, or the community doesn't have the skills to keep the code secure, there may be a lot of holes in it, like in WordPress for example.

K8s (short for „Kubernetes") is free and anyone can run it on their own infrastructure. It's great for managing containers on development servers that require frequent changes.

It goes without saying that orchestration is necessary if you have a bunch of servers running many applications in containers. Thanks to orchestration we can keep our configuration in a repository, make changes and deploy them easily, even after the accidental removal of some containers. It also allows for comfortable scaling, which is needed when we want our solutions to be highly available.

Why not Docker Swarm?

It is worth noting that Docker itself encourages you to use Kubernetes. Kubernetes gives you more options to scale your applications. There was a case when I wanted to have many replicas of Apache Kafka and ZooKeeper, and doing it on Docker Swarm failed. With Kubernetes it did not cause any major problems.

Both orchestration technologies allow you to define in a file what applications you want to run and how. However, Kubernetes has many types of so-called manifests, which gives more flexibility. To define deployments for Docker Swarm, we must additionally install the Docker Compose tool, and its files must be in the right version.

Namespaces give us separation of our deployments. If we watch where we put the containers, then, for example, we don't need to create additional Kubernetes clusters or fear overwriting data. With one cluster we can manage our resources better.

Kubernetes has native service discovery built on etcd. In addition, it balances the load. Services register under a specific name and can easily be found by the name of their namespace.

For some people it may be a drawback that Docker Compose only accepts YAML files, while Kubernetes also accepts JSON.

If we want to use Kubernetes in the cloud we can use various solutions such as:

  • Google Kubernetes Engine (Google),
  • Cloud Kubernetes Service (IBM),
  • Azure Kubernetes Service (Microsoft),
  • VMware Kubernetes Engine (VMware).

If someone used Docker Compose before, it will be difficult for them to move to manifests, but I think the benefits are worth it. It's a similar situation with the Vi editor, which is very difficult to use in the beginning, but the more time we spend using it, the more it speeds up our work.

The main servers in the Kubernetes cluster expose a RESTful API. Thanks to that we can communicate with the cluster using various programs.

Why should companies start using Kubernetes?

The benefits of having your applications in containers are obvious. Orchestration with Kubernetes gives you automatic handling of networking, data storage (you can also choose your own Software Defined Storage), autoscaling, logging, notifications and so on. However, the most important factor for the business is that K8s can significantly reduce costs through better use of hardware resources without affecting application performance or user experience; you may even see a performance increase.

Thursday, August 5, 2021

Wonderful DevOps

Docker:


Kubernetes:

Helm:

Terraform:

Ansible:
  • Ansible Tower.

CI/CD:

AWS:

Monitoring and logging:
  • VictoriaLogs,
  • Datadog,
  • New Relic.

Virtualization:
  • Vagrant.

Testing:

Security:

Other:

Wednesday, August 4, 2021

SSH configuration and security

We generate an ECDSA key, which we will later use for authentication:

ssh-keygen -t ecdsa -b 521 -C "seprob"

We edit the "/etc/ssh/sshd_config" file:

# Listen on the given port.
Port 5022

# The "root" user cannot log in.
PermitRootLogin no

# Whether to ask the user challenge-response (multi-step) questions.
ChallengeResponseAuthentication no

# Whether to enable the Pluggable Authentication Module interface (with PAM enabled, even a locked user could log in).
UsePAM no

X11Forwarding yes

# Display the Message Of The Day.
PrintMotd yes

AcceptEnv LANG LC_*

Subsystem sftp /usr/lib/openssh/sftp-server

# Do not send TCP keepalive packets to check whether the other side is still alive.
TCPKeepAlive no

# Send an encrypted keep-alive message every 30 seconds.
ClientAliveInterval 30

# Disconnect an inactive user after 120 minutes (30 seconds * 240).
ClientAliveCountMax 240

# Password authentication is allowed.
PasswordAuthentication yes

# Do not allow logging in to accounts with empty passwords.
PermitEmptyPasswords no

The SSH daemon configuration can be validated with the "sshd -t" command (or "sshd -T" for the extended version).
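
A typical sequence, assuming a systemd-based Debian system where the service unit is called "ssh", might look like this:

# Validate the configuration; no output means it is correct.
sshd -t

# Reload the daemon so the new settings take effect.
systemctl reload ssh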

Tuesday, August 3, 2021

Kubernetes RBAC

Starting point:

  • system: Debian 9,
  • user: root,
  • Kubernetes 1.21.3.

At the very beginning, let's create a namespace dedicated to the user:

kind: Namespace
apiVersion: v1
metadata:
  name: seprob
  labels:
    name: seprob

Let's go to "/etc/kubernetes/pki", where the Kubernetes CA is located.

Now we need to generate a key and a certificate:

openssl genrsa -out seprob.key 2048

openssl req -new -key seprob.key -out seprob.csr -subj "/CN=seprob/O=yolandi"

openssl x509 -req -in seprob.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out seprob.crt -days 500
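
To confirm that the certificate was signed correctly, you can inspect its subject, issuer and validity dates:

openssl x509 -in seprob.crt -noout -subject -issuer -dates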

We deliver the certificate and the key to the user so that they can configure kubectl accordingly. They can do it, for example, in the following way:

kubectl config set-credentials seprob --client-certificate=~/Documents/seprob_yolandi_kubernetes.crt --client-key=~/Documents/seprob_yolandi_kubernetes.key

kubectl config set-context seprob-yolandi --cluster=yolandi --namespace=seprob --user=seprob

Additionally, they must set the cluster address and the cluster CA in the configuration. At this point, if we try to connect, we will get an error.
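
For example (the API server address is a placeholder, and the CA file is the one delivered from the cluster):

kubectl config set-cluster yolandi --server=https://API_SERVER_ADDRESS:6443 --certificate-authority=~/Documents/yolandi_ca.crt --embed-certs=true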

First, let's create a Role object:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: seprob
  name: seprob-role
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["deployments", "replicasets", "pods"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]

Now the RoleBinding object:

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: seprob-rolebinding
  namespace: seprob
subjects:
- kind: User
  name: seprob
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: seprob-role
  apiGroup: rbac.authorization.k8s.io

At this point we should already be able to check, for example, whether there are any Pods in the namespace.
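
A quick way to verify the permissions, assuming the context created above and no other bindings for the user:

# Should work: listing Pods in the user's namespace.
kubectl --context=seprob-yolandi get pods

# Should be denied: the Role does not cover other namespaces.
kubectl --context=seprob-yolandi get pods -n kube-system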