Monday, December 12, 2022

Create a certificate with cert-manager and AWS Private CA

In our EKS ecosystem it is possible to create a certificate issued by ACM Private CA. To do this we use the (already installed) https://github.com/cert-manager/cert-manager together with the https://github.com/cert-manager/aws-privateca-issuer module.

Let’s pretend we have an EKS cluster with some custom deployment. Usually, to issue a certificate you can use an Issuer or ClusterIssuer Kubernetes resource. The difference is that a ClusterIssuer can be used from any namespace. If we use the aws-privateca-issuer module, we must use AWSPCAIssuer or AWSPCAClusterIssuer instead.

On our platform the AWSPCAClusterIssuer already exists:

apiVersion: awspca.cert-manager.io/v1beta1
kind: AWSPCAClusterIssuer
metadata:
  name: YOUR_NAME
spec:
  arn: PRIVATE_CA_ARN
  region: YOUR_REGION
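
For comparison, a namespaced AWSPCAIssuer should look almost the same and only works inside its own namespace (a sketch based on the fields above; the namespace is a placeholder):

apiVersion: awspca.cert-manager.io/v1beta1
kind: AWSPCAIssuer
metadata:
  name: YOUR_NAME
  namespace: MY_NAMESPACE
spec:
  arn: PRIVATE_CA_ARN
  region: YOUR_REGION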

But how do we create a certificate? To do this we use a Certificate resource:

kind: Certificate
apiVersion: cert-manager.io/v1
metadata:
  name: MY_SUBDOMAIN
spec:
  commonName: MY_SUBDOMAIN
  dnsNames:
    - MY_SUBDOMAIN
  duration: 2160h0m0s
  issuerRef:
    group: awspca.cert-manager.io
    kind: AWSPCAClusterIssuer
    name: YOUR_NAME
  renewBefore: 360h0m0s
  secretName: MY_SUBDOMAIN
  usages:
    - server auth
    - client auth
  privateKey:
    algorithm: "RSA"
    size: 2048

Use "kubectl -n MY_NAMESPACE get certificate" and check the result:

NAME           READY   SECRET         AGE
MY_SUBDOMAIN   True    MY_SUBDOMAIN   12s

The certificate is stored in a Secret. To view the details:

kubectl get secret MY_SUBDOMAIN -n MY_NAMESPACE -o 'go-template={{index .data "tls.crt"}}' | base64 --decode | openssl x509 -noout -text
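
As an illustration (not part of the original setup), a minimal sketch of how the Secret could be mounted into a Pod so the application can use the certificate and key; the Pod name and image are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: my-app                 # hypothetical name
  namespace: MY_NAMESPACE
spec:
  containers:
    - name: my-app
      image: MY_IMAGE          # placeholder image
      volumeMounts:
        - name: tls
          mountPath: /etc/tls  # tls.crt and tls.key (and usually ca.crt) end up here
          readOnly: true
  volumes:
    - name: tls
      secret:
        secretName: MY_SUBDOMAIN   # the Secret created by cert-manager above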

Friday, December 2, 2022

EKS upgrade

Why do we need to upgrade EKS? First of all, every new version of EKS (Kubernetes) provides new features and adds fixes and patches. Thanks to this your EKS cluster stays secure and your workloads can become more sophisticated. Remember also that one day your EKS version will no longer be supported by AWS. Check when your version stops being supported.

Before you start to upgrade an EKS cluster, check the available versions at https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html, and if you upgrade to 1.22 you must complete the 1.22 prerequisites. Note that you can only move one minor version at a time: if your current cluster version is 1.21 and you want to upgrade to 1.23, you must first update your cluster to 1.22 and then to 1.23.

Check the requirements here: https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html#update-existing-cluster

You must check what was changed in the version you want to upgrade to: https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html

First check the changelog: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md#changelog-since-v1220

Then read the blogs:

In these blogs you can read about the potential risks and how to handle them.

One of the important things that can happen is that the API versions of your resources (declared in the manifests) may be deprecated. That’s why, before you upgrade, compare your Helm Charts and/or resources with the Kubernetes changelog. If an API is deprecated in the current version, it means it will be removed in a future version and you will no longer be able to use it.
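
For example (only an illustration, with hypothetical names), the v1beta1 Ingress API was removed in Kubernetes 1.22, so a manifest like the first one below has to be rewritten to the networking.k8s.io/v1 form before the upgrade:

# Removed in Kubernetes 1.22:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: my-app
              servicePort: 80
---
# Works on 1.22 and later:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80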

In my case I used a module. But we can imagine managing the resources directly. Then you must upgrade the control plane and the worker nodes separately. In the beginning, add the same number of new worker nodes to the cluster. Then upgrade the control plane and next the worker nodes you added. At the end remove the old worker nodes.
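
If you use managed node groups, a minimal sketch of the corresponding AWS CLI calls could look like this (the cluster name, node group name and target version are placeholders):

# Upgrade the control plane first (one minor version at a time)
aws eks update-cluster-version --name MY_CLUSTER --kubernetes-version 1.23

# Then bring the managed node group to the same version
aws eks update-nodegroup-version --cluster-name MY_CLUSTER --nodegroup-name MY_NODEGROUP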

Because we have both development and production environments, we can test an upgrade before we go to production.

By default Kubernetes uses the rolling update strategy when a Deployment rolls out new Pods. Thanks to this, an upgrade should be invisible to the user.
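
A minimal sketch of what this looks like in a Deployment spec; the name, image and surge/unavailable values are only example settings:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app               # hypothetical name
spec:
  replicas: 3
  strategy:
    type: RollingUpdate      # the default strategy
    rollingUpdate:
      maxSurge: 1            # at most one extra Pod during the rollout
      maxUnavailable: 0      # never drop below the desired replica count
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: MY_IMAGE    # placeholder image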

If, after you have read all the requirements and the changelog, you’re still not sure whether your application will work on the new EKS version, you can create a separate EKS cluster and deploy your application (as a Helm Chart, for example) to check that it’s up and running.
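
For instance, assuming your kubectl context points at the test cluster, such a check could be as simple as (release name, chart path and namespace are hypothetical):

helm upgrade --install my-app ./my-chart --namespace test --create-namespace
kubectl -n test get pods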

Check what versions of the add-ons (for instance kube-proxy, CoreDNS, and the Amazon VPC CNI plugin) you must use when you upgrade.
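
One way to list the add-on versions that are compatible with a given cluster version is the AWS CLI; the add-on name and Kubernetes version below are only examples:

aws eks describe-addon-versions --addon-name vpc-cni --kubernetes-version 1.23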

Make sure that your Kubernetes Terraform provider supports the right API version. You can check it in the prerequisites. But don’t change the provider setup before you have applied Terraform to deploy the new cluster version (it won’t work, because Terraform first uses the old configuration).
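
A minimal sketch of where this version is pinned in Terraform; the constraint below is only an example, check the prerequisites for the version your target EKS release needs:

terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.16"   # example constraint only
    }
  }
}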

After it’s done, check the server version:

kubectl version --short

And then check the versions of your nodes:

kubectl get nodes