Sunday, November 24, 2024

WireGuard instead of AWS Client VPN

Let's pretend your client wants access to your private EKS cluster but doesn't want to pay much for AWS Client VPN. A solution is to set up an EC2 instance (for example a t3.micro with 10 GB of storage) based on Amazon Linux in a public subnet with an Elastic IP. The instance's security group must also allow inbound UDP traffic on port 51820.
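
If you script this part, the security group rule can be opened with the AWS CLI, for example (a sketch; the security group ID is a placeholder, and narrow the source CIDR to your clients if you can):
aws ec2 authorize-security-group-ingress --group-id YOUR_SECURITY_GROUP_ID --protocol udp --port 51820 --cidr 0.0.0.0/0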

The server is created, so let's install WireGuard (as root):
yum update -y
amazon-linux-extras enable epel
yum install epel-release -y
yum install wireguard-tools -y

Then we have to generate a key pair for the WireGuard server:
cd /etc/wireguard
umask 077
wg genkey > privatekey
wg pubkey < privatekey > publickey

Now open the “/etc/wireguard/wg0.conf” file and put:
[Interface]
Address = 10.100.0.1/24 # Choose a different range than your VPC CIDR.
SaveConfig = true
ListenPort = 51820
PrivateKey = GENERATED_PRIVATE_KEY
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE

[Peer]
PublicKey = GENERATED_PUBLIC_KEY_OF_YOUR_CLIENT # Described below.
AllowedIPs = 10.100.0.2/32 # Put an IP you want to assign to your client.

Start WireGuard:
systemctl enable wg-quick@wg0
systemctl start wg-quick@wg0

Check if IP forwarding is enabled (if it's not, enable it):
sysctl net.ipv4.ip_forward
echo "net.ipv4.ip_forward=1" | tee -a /etc/sysctl.conf
sysctl -p

To change the configuration and apply the new changes:
systemctl reload wg-quick@wg0
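
If your wg-quick unit doesn't support reload, you can apply the edited file without tearing the interface down (a sketch, run as root):
wg syncconf wg0 <(wg-quick strip wg0)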

Now install a client on your favourite system. Then you have to add a new configuration (an empty tunnel). It will generate a private key and a public key for you. Put this public key into the [Peer] section on the server in the “/etc/wireguard/wg0.conf” file (the GENERATED_PUBLIC_KEY_OF_YOUR_CLIENT placeholder above; add another [Peer] section for every additional client). Now we have to edit the client configuration to look like this:
[Interface]
PrivateKey = GENERATED_PRIVATE_KEY # Don't touch.
Address = 10.100.0.2/32 # IP you want to assign.

[Peer]
PublicKey = SERVER_PUBLIC_KEY
AllowedIPs = 10.100.0.0/24, 10.21.0.0/16 # VPN CIDR, VPC CIDR
Endpoint = 13.50.30.59:51820 # The Elastic IP of your WireGuard server.
PersistentKeepalive = 25
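
To make sure the tunnel works, activate it in the client and check both ends (a quick sanity check, assuming the addresses above):
wg show # On the server: you should see a recent handshake for the peer.
ping 10.100.0.1 # From the client: the server's WireGuard address.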

Sunday, November 17, 2024

EKS private access without a VPN

When you create a new EKS cluster, Amazon also creates an endpoint for the managed Kubernetes API server that you use to communicate with your cluster (using Kubernetes management tools such as kubectl). By default this API server endpoint is public to the Internet, and access to the API server is secured using a combination of IAM and native Kubernetes Role Based Access Control.

You can enable both private and public access. Thanks to this you have remote access, but all communication between your nodes and the API server stays within your VPC. This guide assumes your cluster has public access disabled.

You must have an EC2 bastion host created within the VPC where your EKS cluster is established. Your instance must also have an SSH server running and an SSH key added to a user (use a Key Pair).

I assume you have an AWS profile configured (the "AWS_PROFILE" variable exported or similar) and the Session Manager plugin installed.

Let's test the connection to our instance (in this case we have Amazon Linux):
ssh -o ProxyCommand="sh -c 'aws ssm start-session --region YOUR_REGION --target YOUR_EC2_INSTANCE_IDENTIFIER --document-name AWS-StartSSHSession --parameters portNumber=22'" ec2-user@YOUR_EC2_INSTANCE_IDENTIFIER -i YOUR_PRIVATE_KEY
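
To avoid typing that every time, you could add an entry to "~/.ssh/config" (a sketch; the "eks-bastion" alias is a made-up name):
Host eks-bastion
    HostName YOUR_EC2_INSTANCE_IDENTIFIER
    User ec2-user
    IdentityFile YOUR_PRIVATE_KEY
    ProxyCommand sh -c "aws ssm start-session --region YOUR_REGION --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"

Then a plain "ssh eks-bastion" is enough.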

Let's imagine you don't have a VPN or you don't want to use it. Install sshuttle.

Open a terminal window and put (don't close the terminal):
aws ssm start-session --region YOUR_REGION --target YOUR_EC2_INSTANCE_IDENTIFIER --document-name AWS-StartPortForwardingSession --parameters "localPortNumber=2222,portNumber=22"

Open a second terminal (don't close the terminal):
sshuttle --dns -NHr ec2-user@localhost:2222 -e 'ssh -i EC2_BASTION_PRIVATE_KEY_PATH' YOUR_VPC_CIDR

Then from a third terminal you can connect to your private EKS cluster without a VPN.
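
For example, assuming your kubeconfig isn't set up yet (the cluster name is a placeholder):
aws eks update-kubeconfig --region YOUR_REGION --name YOUR_CLUSTER_NAME
kubectl get nodes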

There is another way. In a terminal (don't close):
ssh -o ProxyCommand="sh -c 'aws ssm start-session --region YOUR_REGION --target YOUR_EC2_INSTANCE_IDENTIFIER --document-name AWS-StartSSHSession --parameters portNumber=22'" -q -D 6669 ec2-user@YOUR_EC2_INSTANCE_IDENTIFIER -i YOUR_PRIVATE_KEY

In the terminal where you want to connect to your EKS cluster:
export http_proxy=socks5://127.0.0.1:6669
export https_proxy=socks5://127.0.0.1:6669
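
kubectl honours these variables, so commands in the same shell should go through the SOCKS tunnel:
kubectl get nodes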

Monday, November 11, 2024

YubiKey and SSH key

On macOS (put your FIDO2 PIN when asked):
ssh-keygen -t ed25519-sk -C "mailbox@address" -f ~/.ssh/id_ed25519-sk

You'll probably see this error:
Generating public/private ed25519-sk key pair.
You may need to touch your authenticator to authorize key generation.
No FIDO SecurityKeyProvider specified
Key enrollment failed: invalid format

This is because the OpenSSH bundled with macOS is built without FIDO2 security key support. You have to install OpenSSH via Homebrew:
brew install openssh
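
After installing, make sure the Homebrew binary comes first in your PATH and run the key generation again (the path below assumes a default Apple Silicon Homebrew install; on Intel Macs it's /usr/local/bin):
export PATH="/opt/homebrew/bin:$PATH"
ssh-keygen -t ed25519-sk -C "mailbox@address" -f ~/.ssh/id_ed25519-sk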

Saturday, November 9, 2024

YubiKey and AWS CLI

First you have to install the YubiKey Manager CLI. For me on macOS:
brew install ykman

Add a virtual device for your IAM user:


In the next wizard window click on "Show secret key":


Copy the secret key and don't close the tab in your browser.

Plug your YubiKey into your machine and run:
ykman oath accounts add -t YOUR_LABEL YOUR_SECRET_KEY

Now ask the YubiKey for a code, twice (the AWS wizard requires two consecutive codes):
ykman oath accounts code YOUR_LABEL

Put the codes in the right fields and finish the configuration.

Now we can configure our terminal to communicate with AWS. Set your AWS profile first (the "AWS_PROFILE" variable).

For example, "~/.aws/config":
[profile yubikey-test]
region = eu-central-1

"~/.aws/credentials":
[yubikey-test]
aws_access_key_id = YOUR_IAM_USER_ACCESS_KEY
aws_secret_access_key = YOUR_IAM_USER_SECRET_KEY

The following command will give you the set of temporary credentials needed to act as your IAM user:
aws sts get-session-token --serial-number ARN_OF_THE_DEVICE --token-code ASK_YUBIKEY_TOKEN --output json
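
The output looks roughly like this (the values here are shortened and made up):
{
    "Credentials": {
        "AccessKeyId": "ASIA...",
        "SecretAccessKey": "wJal...",
        "SessionToken": "IQoJ...",
        "Expiration": "2024-11-09T20:15:00+00:00"
    }
}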

Write down the values and export these environment variables:

  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY
  • AWS_SESSION_TOKEN
  • AWS_DEFAULT_REGION
To make it easier, add the following to the ".bash_profile" file in your home directory (or to ".zshrc", depending on your shell):
function aws-get-yubikey-mfa-code {
    # Extracts the six-digit code from the ykman output (the label is assumed to contain "AWS").
    ykman oath accounts code YOUR_LABEL 2>/dev/null | sed -E 's/(None:)?AWS[[:space:]]+([[:digit:]]+)/\2/'
}

Close the editor and run:
source ~/.bash_profile

Now, using the aws-get-yubikey-mfa-code command, you can get a code from your YubiKey.

We don't want to set all the needed variables manually every time, so let's create another function in the "~/.zshrc" file (in my case):
AWS_MFA_SERIAL="YOUR_VIRTUAL_DEVICE_ARN"

function aws-yubikey-mfa-session {
    STS_CREDENTIALS=$(aws sts get-session-token --serial-number "$AWS_MFA_SERIAL" --token-code "$1" --output json)

    if [ "$?" -eq "0" ]
    then
        export AWS_ACCESS_KEY_ID=$(echo $STS_CREDENTIALS | jq -r '.Credentials.AccessKeyId')
        export AWS_SECRET_ACCESS_KEY=$(echo $STS_CREDENTIALS | jq -r '.Credentials.SecretAccessKey')
        export AWS_SECURITY_TOKEN=$(echo $STS_CREDENTIALS | jq -r '.Credentials.SessionToken')
        export AWS_SESSION_TOKEN=$(echo $STS_CREDENTIALS | jq -r '.Credentials.SessionToken')
        export AWS_SESSION_EXPIRY=$(echo $STS_CREDENTIALS | jq -r '.Credentials.Expiration')

        echo "[*] Session credentials set. Expires at $AWS_SESSION_EXPIRY."
    else
        echo "[!] Failed to obtain temporary credentials."
    fi
}

Now run:
source ~/.zshrc

Install the jq command. On macOS it will be:
brew install jq

Now you're ready:
aws-yubikey-mfa-session GET_A_YUBIKEY_TOKEN
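
Since both helpers are now defined, you can also chain them into a single call:
aws-yubikey-mfa-session "$(aws-get-yubikey-mfa-code)"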

Sunday, November 3, 2024

YubiKey and AWS Console

Your company wants you to start using a YubiKey. You have an AWS account and it's time to reconfigure the access. I recommend taking the easy route: if you already have an MFA device configured as an authenticator application, you can add the YubiKey as an additional device and test both side by side for the first few days.

Add a new MFA device in your Console:


Plug in your YubiKey and tap:


Put your FIDO2 PIN:


Now you can log into the Console using the new method:


Monday, September 23, 2024

DevOps engineer's starting pack

Let's pretend you have a new computer after joining a new company and you have to set everything up from scratch.

Usually I use macOS, but this also suits a Linux machine.

This is what my starting pack looks like (see the Homebrew sketch after the list):

  • Chrome browser (on my private machine I use Vivaldi; I use Chrome at work because many companies have Google Workspace),
  • Homebrew (a package manager),
  • KeePassXC (a password manager),
  • iTerm2 (a terminal emulator with split panes),
  • Oh My Zsh (manage your Zsh shell),
  • Visual Studio Code:
    • Terraform extension,
    • YAML extension,
    • Hashicorp HCL extension,
  • a GitHub account (I don't use my personal account; I create an account per company because sometimes they want you to set up something additional that you may not want),
  • AWS CLI,
  • aws-vault (store and access your AWS credentials),
  • kubectl,
  • kubectx (switch between your Kubernetes clusters),
  • Helm,
  • Python pip,
  • tfenv to switch between Terraform versions (you don't have to separately install Terraform),
  • Docker,
  • Vim:
    • add to the ".vimrc" file in your home directory (to reopen files at the last cursor position):
      au BufReadPost * if line("'\"") > 0 && line("'\"") <= line("$") | exe "normal! g`\"" | endif
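
Most of the list above can be installed with Homebrew in one go (a sketch; cask and formula names may differ slightly over time, and Oh My Zsh has its own installer):
brew install --cask google-chrome keepassxc iterm2 visual-studio-code docker
brew install awscli kubectl kubectx helm tfenv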

Sunday, September 15, 2024

LXC - let's get started

I know that everyone uses Docker, but sometimes you have to take care of some legacy server where you have, for example, an LXC container in which your client runs a Docker container (I know it's weird).

Let's imagine your LXC container has a device mounted at the "/var/lib/docker" path. You don't have enough space on it and you don't want to restart the server or even the LXC daemon. Let's just replace it.

Create a new storage pool:
lxc storage create new-device btrfs size=100GB

Add a new volume to the pool:
lxc storage volume create new-device volume

Then we have to get into the LXC container and stop the Docker daemon:
lxc exec lxc-container -- /bin/bash
systemctl stop docker
exit

Detach the previous device from the LXC container:
lxc config device remove lxc-container old-device

Add the new device:
lxc config device add lxc-container some-name disk pool=new-device path=/var/lib/docker source=volume

Run the Docker daemon in the container.
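
To confirm the new volume is actually backing "/var/lib/docker", a quick check (using the container name from above):
lxc exec lxc-container -- df -h /var/lib/docker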

Some additional useful commands:

  • list storage pools: "lxc storage list",
  • show which devices your container has attached: "lxc config show container-name --expanded",
  • run an LXC container with a given profile and image: "lxc launch ubuntu:20.04 container-name -p default",
  • list containers: "lxc list",
  • stop and remove a container: "lxc stop container-name", "lxc delete container-name",
  • information about a storage pool: "lxc storage show storage-name",
  • list the volumes in a storage pool: "lxc storage volume list storage-name",
  • remove a volume from a storage pool: "lxc storage volume delete storage-name volume-name",
  • remove a storage pool: "lxc storage delete storage-name".