Monday, October 6, 2025

S3 cross account replication

I was moving AWS resources from one account into separate staging and production accounts. One of the steps was to migrate S3 buckets. The solution was cross-account replication. Because S3 replication copies only newly written objects, we also have to create an S3 Batch Operations job to copy the existing objects.

S3 cross account replication and Batch Operation

The steps are as follows (a CLI sketch of the replication configuration appears after the list):

  • enable S3 bucket versioning on your buckets,
  • in the source account, prepare an IAM policy to attach to the IAM role used by S3 replication:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:ListBucket",
                "s3:GetReplicationConfiguration",
                "s3:GetObjectVersionForReplication",
                "s3:GetObjectVersionAcl",
                "s3:GetObjectVersionTagging",
                "s3:GetObjectRetention",
                "s3:GetObjectLegalHold"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::SOURCE_BUCKET",
                "arn:aws:s3:::SOURCE_BUCKET/*",
                "arn:aws:s3:::TARGET_BUCKET",
                "arn:aws:s3:::TARGET_BUCKET/*"
            ]
        },
        {
            "Action": [
                "s3:ReplicateObject",
                "s3:ReplicateDelete",
                "s3:ReplicateTags",
                "s3:ObjectOwnerOverrideToBucketOwner"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::SOURCE_BUCKET/*",
                "arn:aws:s3:::TARGET_BUCKET/*"
            ]
        }
    ]
}
  • in the source account, create an IAM role that includes the above policy (trusted entity type = AWS service, use case = S3),
  • in the source account, prepare an IAM role for the S3 Batch Operations job (trusted entity type = AWS service, use case = S3 Batch Operations) with the following policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "GetSourceBucketConfiguration",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation",
                "s3:GetBucketAcl",
                "s3:GetReplicationConfiguration",
                "s3:GetObjectVersionForReplication",
                "s3:GetObjectVersionAcl",
                "s3:GetObjectVersionTagging",
                "s3:PutInventoryConfiguration",
                "s3:GetInventoryConfiguration",
                "s3:PutObject",
                "s3:GetObject",
                "s3:InitiateReplication",
                "s3:AbortMultipartUpload"
            ],
            "Resource": [
                "arn:aws:s3:::SOURCE_BUCKET",
                "arn:aws:s3:::SOURCE_BUCKET/*"
            ]
        },
        {
            "Sid": "ReplicateToDestinationBuckets",
            "Effect": "Allow",
            "Action": [
                "s3:List*",
                "s3:*Object",
                "s3:ReplicateObject",
                "s3:ReplicateDelete",
                "s3:ReplicateTags"
            ],
            "Resource": [
                "arn:aws:s3:::TARGET_BUCKET",
                "arn:aws:s3:::TARGET_BUCKET/*"
            ]
        },
        {
            "Sid": "PermissionToOverrideBucketOwner",
            "Effect": "Allow",
            "Action": [
                "s3:ObjectOwnerOverrideToBucketOwner"
            ],
            "Resource": [
                "arn:aws:s3:::TARGET_BUCKET",
                "arn:aws:s3:::TARGET_BUCKET/*"
            ]
        }
    ]
}
  • in the target account, update the target bucket's policy:
{
    "Version": "2012-10-17" ,
    "Id": "",
    "Statement": [
        {
            "Sid": "Set-permissions-for-objects",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::SOURCE_ACCOUNT_NUMBER:role/service-role/REPLICATION_IAM_ROLE_NAME"
            },
            "Action": [
                "s3:ReplicateObject",
                "s3:ReplicateDelete"
            ],
            "Resource": "arn:aws:s3:::TARGET_BUCKET/*"
        },
        {
            "Sid": "Set permissions on bucket",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::SOURCE_ACCOUNT_NUMBER:role/service-role/REPLICATION_IAM_ROLE_NAME"
            },
            "Action": [
                "s3:GetBucketVersioning",
                "s3:PutBucketVersioning"
            ],
            "Resource": "arn:aws:s3:::TARGET_BUCKET"
        },
        {
            "Sid": "Permissions on objects and buckets",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::SOURCE_ACCOUNT_NUMBER:role/BATCH_OPERATIONS_IAM_ROLE_NAME"
            },
            "Action": [
                "s3:List*",
                "s3:GetBucketVersioning",
                "s3:PutBucketVersioning",
                "s3:ReplicateDelete",
                "s3:ReplicateObject"
            ],
            "Resource": [
                "arn:aws:s3:::TARGET_BUCKET",
                "arn:aws:s3:::TARGET_BUCKET/*"
            ]
        },
        {
            "Sid": "1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::SOURCE_ACCOUNT_NUMBER:role/service-role/REPLICATION_IAM_ROLE_NAME"
            },
            "Action": [
                "s3:ObjectOwnerOverrideToBucketOwner"
            ],
            "Resource": "arn:aws:s3:::TARGET_BUCKET/*"
        },
        {
            "Sid": "Permission to override bucket owner",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::SOURCE_ACCOUNT_NUMBER:role/BATCH_OPERATIONS_IAM_ROLE_NAME"
            },
            "Action": "s3:ObjectOwnerOverrideToBucketOwner",
            "Resource": "arn:aws:s3:::TARGET_BUCKET/*"
        }
    ]
}
  • go to your source S3 bucket, open the "Management" tab and click "Create replication rule":
    • give the rule a name,
    • status = "Enabled",
    • rule scope = "Apply to all objects in the bucket",
    • choose your destination bucket (mark "Change object ownership to destination bucket owner"),
    • choose your S3 replication IAM role,
    • mark "Change the storage class for the replicated objects" and pick the Standard storage class,
    • mark "Delete marker replication" as an additional replication option,
  • in the source account go to "S3", open "Batch Operations" and click "Create job":
    • object list = "Generate an object list based on a replication configuration" (it will use the S3 replication rule we created previously),
    • choose your source S3 bucket,
    • click "Next",
    • operation = "Replicate",
    • click "Next",
    • give the job a name,
    • unmark "Generate completion report",
    • choose your S3 Batch Operations IAM role,
    • click "Next",
    • review the settings and click "Submit".
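
For reference, below is a minimal CLI sketch of the same replication rule. It is only a sketch: the bucket names, role name and account numbers are the placeholders used above, and the generated JSON should be compared with what the console produces before you rely on it.
cat > replication.json <<'EOF'
{
    "Role": "arn:aws:iam::SOURCE_ACCOUNT_NUMBER:role/service-role/REPLICATION_IAM_ROLE_NAME",
    "Rules": [
        {
            "ID": "replicate-all-objects",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},
            "DeleteMarkerReplication": { "Status": "Enabled" },
            "Destination": {
                "Bucket": "arn:aws:s3:::TARGET_BUCKET",
                "Account": "TARGET_ACCOUNT_NUMBER",
                "StorageClass": "STANDARD",
                "AccessControlTranslation": { "Owner": "Destination" }
            }
        }
    ]
}
EOF
# Attach the replication configuration to the source bucket (run with source account credentials).
aws s3api put-bucket-replication --bucket SOURCE_BUCKET --replication-configuration file://replication.json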

Monday, July 14, 2025

Include Terraform dependency lock file

Why? Because during initialization Terraform records the selected provider versions and their checksums in ".terraform.lock.hcl". Thanks to this you can detect whether anything changed in the versions you use, so commit the file to version control.
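
A minimal sketch of how the lock file shows up in practice (the extra platforms are only an example):
terraform init                                                           # creates or updates .terraform.lock.hcl
terraform providers lock -platform=linux_amd64 -platform=darwin_arm64    # record checksums for other platforms too
git add .terraform.lock.hcl                                              # keep the lock file in version control
git commit -m "Add Terraform dependency lock file"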

Source: https://www.hashicorp.com/en/blog/terraform-security-5-foundational-practices

Thursday, February 27, 2025

A cost optimized AWS environment

Costs saving:

  • Savings Plans,
  • Reserved Instances,
  • change your default payment method to avoid currency conversion,
  • Spot Instances (a development environment),
  • Amazon Data Lifecycle Manager for EBS (clean up unneeded snapshots and volumes),
  • S3:
    • a lifecycle policy for a bucket (move your data into a cheaper storage class; see the sketch after this list),
  • use VPC endpoints (AWS charges for outbound data transfer),
  • use Graviton instance types,
  • use Lambda to switch off your instances (for example EC2, RDS) outside working hours on your development environments,
  • choose the right region, because a resource can be cheaper in a different region,
  • Parameter Store instead of Secrets Manager if you don't need automatic rotation,
  • ElastiCache for Redis:
    • consider using ElastiCache for Valkey,
  • CloudWatch:
    • logs retention,
  • NAT Gateway:
  • Route 53:
    • check your record TTLs - the higher the TTL, the fewer queries Route 53 has to serve and the less you pay.
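
A small sketch of two of the items above - an S3 lifecycle rule and CloudWatch Logs retention (bucket and log group names are placeholders; pick your own days and storage class):
# Move objects older than 90 days to the cheaper Standard-IA storage class.
aws s3api put-bucket-lifecycle-configuration --bucket YOUR_BUCKET --lifecycle-configuration '{"Rules":[{"ID":"move-to-ia","Status":"Enabled","Filter":{},"Transitions":[{"Days":90,"StorageClass":"STANDARD_IA"}]}]}'
# Keep CloudWatch Logs for 30 days instead of forever.
aws logs put-retention-policy --log-group-name YOUR_LOG_GROUP --retention-in-days 30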

Monitoring:

  • Cost Explorer (a CLI example follows this list),
  • Cost and Usage Reports,
  • Cost Anomaly Detection,
  • Budgets,
  • Trusted Advisor,
  • cost allocation tags,
  • AWS Compute Optimizer,
  • S3 Storage Lens.
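
For example, Cost Explorer data can also be queried from the CLI (the dates below are just an example):
# Monthly unblended cost per service for January 2025.
aws ce get-cost-and-usage --time-period Start=2025-01-01,End=2025-02-01 --granularity MONTHLY --metrics UnblendedCost --group-by Type=DIMENSION,Key=SERVICE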

Sunday, November 24, 2024

WireGuard instead of AWS Client VPN

Let's pretend your client wants access to your private EKS cluster but doesn't want to pay much for AWS Client VPN. A solution is to set up an EC2 instance (for example t3.micro with 10 GB of storage) based on Amazon Linux in a public subnet with an Elastic IP. The instance's Security Group must also have UDP port 51820 open.

The server is created, so let's install WireGuard (as root; the commands below are for Amazon Linux 2):
yum update -y
amazon-linux-extras enable epel
yum install epel-release -y
yum install wireguard-tools -y

Then we have to generate a key pair for the WireGuard server:
cd /etc/wireguard
umask 077
wg genkey > privatekey
wg pubkey < privatekey > publickey

Now create the "/etc/wireguard/wg0.conf" file and put:
[Interface]
Address = 10.100.0.1/24 # Choose a different range than your VPC CIDR.
SaveConfig = true
ListenPort = 51820
PrivateKey = GENERATED_PRIVATE_KEY
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE

[Peer]
PublicKey = GENERATED_PUBLIC_KEY_OF_YOUR_CLIENT # Described below.
AllowedIPs = 10.100.0.2/32 # Put an IP you want to assign to your client.

Start WireGuard:
systemctl enable wg-quick@wg0
systemctl start wg-quick@wg0

Check if IP forwarding is enabled (if it's not, enable it):
sysctl net.ipv4.ip_forward
echo "net.ipv4.ip_forward=1" | tee -a /etc/sysctl.conf
sysctl -p

To change the configuration and apply the changes:
systemctl reload wg-quick@wg0

Now install a WireGuard client on your favourite system. Then add a new configuration (an empty tunnel); the client will generate a private key and a public key for you. Put this public key in the [Peer] section on the server in the "/etc/wireguard/wg0.conf" file (GENERATED_PUBLIC_KEY_OF_YOUR_CLIENT above). Now we have to edit the client configuration to look like this:
[Interface]
PrivateKey = GENERATED_PRIVATE_KEY # Don't touch.
Address = 10.100.0.2/32 # IP you want to assign.

[Peer]
PublicKey = SERVER_PUBLIC_KEY
AllowedIPs = 10.100.0.0/24, 10.21.0.0/16 # VPN CIDR, VPC CIDR
Endpoint = 13.50.30.59:51820 # The VPN server's Elastic IP and port.
PersistentKeepalive = 25
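
Once both sides are configured you can verify the tunnel on the server (assuming the interface is named wg0 as above):
wg show wg0          # shows the peer, its allowed IPs and the latest handshake
ping -c 3 10.100.0.2 # ping the client's VPN IP from the server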

Sunday, November 17, 2024

EKS private access without a VPN

When you create a new EKS cluster, Amazon also creates an endpoint for the managed Kubernetes API server that you use to communicate with your cluster (using Kubernetes management tools such as kubectl). By default this API server endpoint is public to the Internet, and access to it is secured using a combination of IAM and native Kubernetes Role-Based Access Control.

You can enable both private and public access. Thanks to this you have remote access, but all communication between your nodes and the API server stays within your VPC. This guide assumes your cluster has public access disabled.

You must have an EC2 bastion host within the VPC where your EKS cluster is established. The instance must also have an SSH server running and an SSH key added to a user (use a Key Pair).

I assume you have an AWS profile configured (the "AWS_PROFILE" variable exported or similar) and the Session Manager plugin installed.

Let's test a connection to our instance (in this case it's Amazon Linux):
ssh -o ProxyCommand="sh -c 'aws ssm start-session --region YOUR_REGION --target YOUR_EC2_INSTANCE_IDENTIFIER --document-name AWS-StartSSHSession --parameters portNumber=22'" ec2-user@YOUR_EC2_INSTANCE_IDENTIFIER -i YOUR_PRIVATE_KEY

Let's imagine you don't have a VPN or you don't want to use one. Install sshuttle.

Open a terminal window and run (don't close the terminal):
aws ssm start-session --region=YOUR_REGION --target YOUR_EC2_INSTANCE_IDENTIFIER --document-name AWS-StartPortForwardingSession --parameters "localPortNumber=2222,portNumber=22"

Open a second terminal (don't close the terminal):
sshuttle --dns -NHr ec2-user@localhost:2222 -e 'ssh -i EC2_BASTION_PRIVATE_KEY_PATH' YOUR_VPC_CIDR

Then from a third terminal you can connect to your private EKS cluster without a VPN, for example as shown below.
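
A quick check (YOUR_CLUSTER_NAME is a placeholder; sshuttle already routes the VPC CIDR, so the private API endpoint is reachable directly):
aws eks update-kubeconfig --region YOUR_REGION --name YOUR_CLUSTER_NAME
kubectl get nodes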

There is another way. In a terminal (don't close it):
ssh -o ProxyCommand="sh -c 'aws ssm start-session --region YOUR_REGION --target YOUR_EC2_INSTANCE_IDENTIFIER --document-name AWS-StartSSHSession --parameters portNumber=22'" -q -D 6669 ec2-user@YOUR_EC2_INSTANCE_IDENTIFIER -i YOUR_PRIVATE_KEY

In the terminal where you want to connect to your EKS cluster:
export http_proxy=socks5://127.0.0.1:6669
export https_proxy=socks5://127.0.0.1:6669
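
kubectl, like most Go-based tools, honours the https_proxy variable (including socks5:// URLs), so in the same terminal a simple check could be:
kubectl get nodes   # traffic to the private API endpoint now goes through the SOCKS tunnel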

Monday, November 11, 2024

YubiKey and SSH key

On macOS (put your FIDO2 PIN when asked):
ssh-keygen -t ed25519-sk -C "mailbox@address" -f ~/.ssh/id_ed25519-sk

You'll probably see this error:
Generating public/private ed25519-sk key pair.
You may need to touch your authenticator to authorize key generation.
No FIDO SecurityKeyProvider specified
Key enrollment failed: invalid format

You have to install OpenSSH via Homebrew (the version bundled with macOS doesn't support FIDO security keys):
brew install openssh
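
Then retry key generation with the Homebrew build; a sketch, in case the system ssh-keygen still comes first on your PATH:
"$(brew --prefix)/bin/ssh-keygen" -t ed25519-sk -C "mailbox@address" -f ~/.ssh/id_ed25519-sk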

Saturday, November 9, 2024

YubiKey and AWS CLI

First you have to install YubiKey Manager CLI. For me on macOS:
brew install ykman

Add a virtual MFA device for your IAM user (in the IAM console, under the user's "Security credentials" tab, choose "Assign MFA device" and then "Authenticator app").

In the next wizard window click on "Show secret key". Copy the secret key and don't close the tab in your browser.

Plug your YubiKey into your computer and run:
ykman oath accounts add -t YOUR_LABEL YOUR_SECRET_KEY

Now ask the YubiKey for a code twice (the AWS wizard needs two consecutive codes):
ykman oath accounts code YOUR_LABEL

Put the codes in the right fields and finish the configuration.

Now we can configure our terminal to communicate with AWS. Set your AWS profile first (the "AWS_PROFILE" variable).

For example, "~/.aws/config":
[profile yubikey-test]
region = eu-central-1

"~/.aws/credentials":
[yubikey-test]
aws_access_key_id = YOUR_IAM_USER_ACCESS_KEY
aws_secret_access_key = YOUR_IAM_USER_SECRET_KEY

The following command will give you a set of temporary credentials for your IAM user:
aws sts get-session-token --serial-number ARN_OF_THE_DEVICE --token-code ASK_YUBIKEY_TOKEN --output json

Write down the values and export these environment variables:

  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY
  • AWS_SESSION_TOKEN
  • AWS_DEFAULT_REGION
To make it easier, add this to the ".bash_profile" file in your home directory (or ".zshrc", depending on your shell):
function aws-get-yubikey-mfa-code {
    ykman oath accounts code YOUR_LABEL 2>/dev/null | sed -E 's/(None:)?AWS[[:space:]]+([[:digit:]]+)/\2/'
}

Close the editor and run:
source ~/.bash_profile

Now, using the aws-get-yubikey-mfa-code command, you can get a code from the YubiKey.

We don't want to set all the needed variables manually every time, so let's create another function in the "~/.zshrc" file (in my case):
AWS_MFA_SERIAL="YOUR_VIRTUAL_DEVICE_ARN"

function aws-yubikey-mfa-session {
    STS_CREDENTIALS=$(aws sts get-session-token --serial-number "$AWS_MFA_SERIAL" --token-code "$1" --output json)

   if [ "$?" -eq "0" ]
    then
        export AWS_ACCESS_KEY_ID=$(echo $STS_CREDENTIALS | jq -r '.Credentials.AccessKeyId')
        export AWS_SECRET_ACCESS_KEY=$(echo $STS_CREDENTIALS | jq -r '.Credentials.SecretAccessKey')
        export AWS_SECURITY_TOKEN=$(echo $STS_CREDENTIALS | jq -r '.Credentials.SessionToken')
        export AWS_SESSION_TOKEN=$(echo $STS_CREDENTIALS | jq -r '.Credentials.SessionToken')
        export AWS_SESSION_EXPIRY=$(echo $STS_CREDENTIALS | jq -r '.Credentials.Expiration')

        echo "[*] Session credentials set. Expires at $AWS_SESSION_EXPIRY."
    else
        echo "[!] Failed to obtain temporary credentials."
    fi
}

Now run:
source ~/.zshrc

Install the jq command (the function above uses it). On macOS it will be:
brew install jq

Now you're ready:
aws-yubikey-mfa-session GET_A_YUBIKEY_TOKEN
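
The two functions can be chained, and you can verify the session afterwards:
aws-yubikey-mfa-session "$(aws-get-yubikey-mfa-code)"
aws sts get-caller-identity   # confirm the temporary credentials work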