Destroy every resource from your AWS accounts with aws-nuke

You've probably heard - or lived through yourself - the story of a surprisingly high AWS bill caused by a development environment someone forgot to turn off. Here comes aws-nuke! It's not only useful for dev/sandbox resources: it can also help reduce your AWS costs and test your disaster recovery plan...

Don't want to read? See the full example on my GitHub!

Nuking your AWS account: why?

Why would you want to destroy all resources in your AWS account? Well, apart from the fact that it's kind of fun, here are a few common situations:

  • Clean up development accounts - Your dev or IT teams use a sandbox account on which they deploy their testing environments, or you deploy your infra as code from scratch every morning on a fresh environment? You'll definitely want to run a proper cleanup every night or during off-hours, so you don't pay for unused resources and always start from a fresh environment.
  • Testing your disaster recovery plan (and cleaning up afterwards) - When testing your disaster recovery plan (you do have such a plan, right?), you'll want to start from an empty environment - i.e. an empty AWS account - and clean things up afterwards. The "AWS data center has been nuked" scenario can be closely reproduced!
  • Clean up resources from your personal AWS account - If you're an IT worker or developer, chances are you own an AWS account on which you try out services and tutorials. Finding your personal bank account short of a non-negligible amount of money because you forgot to clean up after testing that Kubernetes EKS tutorial may also hurt your pride. (Trust me, I've been there.)

These are real-world examples; that's exactly what we're doing at Novadiscovery, for instance:

  • Our AWS development account is nuked every night, and we automatically deploy our development environment from scratch every morning to test our Infra as Code.
  • We test our disaster recovery plan by nuking a dedicated AWS account, then redeploying and restoring our production on it.

I'm also running aws-nuke every night on my personal AWS account.

How to nuke your AWS account

You're going to use aws-nuke: an open-source tool that removes all resources from an AWS account. It's been around and stable for some time now.

Requirements:

  • Admin access on your target AWS account

Please be extra careful: running aws-nuke is highly destructive and irreversible. Make sure you're not targeting a production account or any account holding important data or services.
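
Before running anything destructive, it's worth double-checking which account your current credentials actually point to. A quick sanity check with the AWS CLI (assuming it is installed and configured):

# Show the account ID, user and ARN behind the current credentials
aws sts get-caller-identity

# Show which profile, credentials and region are currently in use
aws configure list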


Install aws-nuke

A quick install:

AWS_NUKE_VERSION=v2.17.0 sh -c 'wget -c https://github.com/rebuy-de/aws-nuke/releases/download/$AWS_NUKE_VERSION/aws-nuke-$AWS_NUKE_VERSION-linux-amd64.tar.gz -O - | sudo tar -xz && sudo mv aws-nuke-$AWS_NUKE_VERSION-linux-amd64 /usr/local/bin/aws-nuke'

# Check install
aws-nuke version

Alternatively, running it with Docker is an easy option, for example:

# Run an interactive shell in a Docker container
# With access to current directory
docker run \
    --rm -it \
    -v $HOME/.aws:/home/aws-nuke/.aws \
    -v $PWD:/nuke \
    -w /nuke \
    --entrypoint sh \
    quay.io/rebuy/aws-nuke:v2.17.0 

Create your aws-nuke configuration

aws-nuke uses a YAML config file specifying the resources you want to delete, as well as an AWS account allowlist and blocklist. Create a nuke-config.yml file such as:

# Regions on which to run aws-nuke
regions:
- eu-west-3
- global

# aws-nuke requires at least one account to be configured in the blocklist
# such as your production accounts
# if you don't want anything there, you can specify a dummy account ID
account-blocklist: # aws-nuke <= 2.14 needs "account-blacklist"
- "000000000000"

# Account IDs on which you allow aws-nuke to run
# You can add additional config per account here
# Such as filters to prevent some resources from being deleted
# See below for more examples
accounts:
  010562097198:

# List of resource types to be deleted
# Here: delete all EC2 instances, volumes,
# security groups and spot fleet requests
resource-types:
  targets:
  - EC2Instance
  - EC2SecurityGroup
  - EC2SpotFleetRequest
  - EC2Volume
  # ...

  # You can instead specify exclusions
  # Every resource will be deleted except the excluded ones
  #
  # excludes:
  # - IAMUser
  # - IAMUserAccessKey
  # - IAMGroup
  # - IAMPolicy
  # - IAMRole
  # ...

You can list all available resource types with:

aws-nuke resource-types

# ACMCertificate
# ACMPCACertificateAuthority
# ...

Nuke 'em!

Once nuke-config.yml is ready, run aws-nuke with:

aws-nuke -c nuke-config.yml

Or with Docker:

# Mount current directory and local AWS config into container
# and use current directory nuke-config.yml
docker run \
    --rm -it \
    -v $HOME/.aws:/home/aws-nuke/.aws \
    -v $PWD:/nuke \
    -w /nuke \
    quay.io/rebuy/aws-nuke:v2.17.0 \
    -c nuke-config.yml

By default, aws-nuke runs in dry-run mode and prompts you to confirm the account alias: it won't delete anything, but it will output the resources it would delete. The output will look like:

aws-nuke version v2.17.0 - Mon Jan 31 10:04:50 UTC 2022 - 4f8848fdb9358fc2a8b8c4a59c424febde09a409

Do you really want to nuke the account with the ID 1234567891011 and the alias 'crafteo'?
Do you want to continue? Enter account alias to continue.
> crafteo

eu-west-3 - EC2SecurityGroup - sg-00abcdefghijklm42 - [Name: "default"] - cannot delete group 'default'
eu-west-3 - EC2SecurityGroup - sg-123bcdefghijfoobar - [Name: "DeleteMeIfYouCan"] - would remove
Scan complete: 2 total, 1 nukeable, 1 filtered.

The above resources would be deleted with the supplied configuration. Provide --no-dry-run to actually destroy resources.

When you're ready to nuke away, add the --no-dry-run flag:

aws-nuke -c nuke-config.yml --no-dry-run

You will still be asked for confirmation (twice!) before the deletion is performed.

Running aws-nuke without dry-run or confirmation

For the impatient, you can skip confirmation entirely with --force and --force-sleep. USE WITH CARE as it will start deleting right away!

# /!\ /!\ /!\ /!\ /!\ /!\ /!\
# WARNING - WILL NUKE YOUR ACCOUNT WITHOUT CONFIRMATION - USE WITH CARE

aws-nuke -c nuke-config.yml --no-dry-run --force --force-sleep 3

Wait! How can I filter out some resources so they won't be deleted?

You can add filters and presets in your configuration. For example:

accounts:
  01234567891011:

    # Keep these resources
    filters:
      S3Bucket:      

      # Any bucket matching this glob pattern will be kept
      - type: glob
        property: "Name"
        value: "important-bucket-*"

      # Bucket matching exactly this name will be kept
      - type: exact
        property: "Name"
        value: "keep-bucket-with-this-name"

      # EC2 instances with tag "nuke.keep: true" will be kept
      EC2Instance:
      - property: "tag:nuke.keep"
        value: "true"

When using multiple accounts, avoid code duplication with presets:

presets:
  # Keep some S3 buckets
  s3-bucket:
    filters:
      S3Bucket:      
      - type: glob
        property: "Name"
        value: "important-bucket-*"
      - type: exact
        property: "Name"
        value: "keep-bucket-with-this-name"

  # Keep S3 buckets and EC2 instances tagged with nuke.keep=true
  keep-tag:
    filters:
      S3Bucket: 
      - property: "tag:nuke.keep"
        value: "true"
      EC2Instance:
      - property: "tag:nuke.keep"
        value: "true"

# Set presets on your accounts as desired
accounts:
  01234567891011:
    presets:
    - s3-bucket
    - keep-tag
  98765432100000:
    presets:
    - keep-tag

Keep all resources tagged with nuke.keep: true

A simple yet effective pattern for keeping resources is to tag them with nuke.keep: true (or any other tag you find suitable). For example:

accounts:
  01234567891011:
    presets:
    - keep-tag
  98765432100000:
    presets:
    - keep-tag

presets:
  keep-tag:
    filters:
      S3Bucket:
      - property: "tag:nuke.keep"
        value: "true"
      EC2Instance:
      - property: "tag:nuke.keep"
        value: "true"
      # ...

To avoid manually adding all resource types to your filter config, you can run this command, which outputs a config snippet you can copy/paste:

aws-nuke resource-types | sed -r --expression \
  "s/(.*)/      \1:\n      - property: \"tag:nuke.keep\"\n        value: \"true\"/g"

#      ACMCertificate:
#      - property: "tag:nuke.keep"
#        value: "true"
#      ACMPCACertificateAuthority:
#      - property: "tag:nuke.keep"
#        value: "true"
#      ...

Full configuration examples

A few examples to get started quickly.

Minimal configuration

Example nuke-config.yml


# Delete resources from eu-west-3 (Paris) and global regions
regions:
- eu-west-3
- global

# Dummy entry, otherwise aws-nuke complains that at least one blocklisted account is required
account-blocklist:
- "000000000000"

# Account from which to destroy all resources
accounts:
  "010562097198":
    filters:

# Delete all EC2 instances and volumes
resource-types:
  targets:
  - EC2Instance
  - EC2Volume

Multi-AWS account deletion with tag:nuke.keep filter

See the full nuke-config.yml example on my GitHub.

Typical issues and errors you may encounter using aws-nuke

By default aws-nuke will automatically retry deletion

aws-nuke retries deleting all resources until all specified ones are deleted, or until only resources with errors are left.

However, in some situations it may still fail:

S3 Buckets won't be deleted unless all Objects within are also deleted

If you want to clean up S3 buckets, make sure to specify both the S3Bucket and S3Object resources, for example:

resource-types:
  targets:
  - S3Object
  - S3Bucket

Otherwise AWS will prevent the deletion of any S3 bucket still containing objects, with an error such as:

Make sure the bucket is empty – You can only delete buckets that don't have any objects in them. Make sure the bucket is empty.

Some resources cannot be deleted because they depend on other resources which are kept

Make sure the resources being deleted do not depend on other resources that would prevent their deletion. For example:

  • RDSDBCluster blocked by existing RDSDBInstance
  • S3Bucket blocked by S3Object in the deleted bucket (see above)
  • EC2VPC blocked by other EC2 resources such as EC2Instance or EC2NetworkInterface
  • etc.

Listing all possible situations would be difficult (if not impossible); the best way to adapt your configuration is to try it out and update it as needed.
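
As a sketch, if you want a whole VPC gone, the resource-types section should target the VPC together with the resources that would otherwise block its deletion. The exact resource-type names below are assumptions on my part; verify them against the output of aws-nuke resource-types:

# Sketch: delete a VPC and the EC2 resources that would block it
# (check each name with `aws-nuke resource-types`)
resource-types:
  targets:
  - EC2Instance
  - EC2NetworkInterface
  - EC2SecurityGroup
  - EC2Subnet
  - EC2RouteTable
  - EC2InternetGateway
  - EC2VPC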

Automate aws-nuke on CI

Going further, you may want to automate aws-nuke via CI. Most CI systems allow you to provide AWS credentials via the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.
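
For example, on a CI runner where credentials come from the CI secret store, the job script can stay minimal. This is just a sketch: the CI_AWS_* variable names are hypothetical placeholders for whatever your CI system exposes:

# Hypothetical secret names - map them to whatever your CI provides
export AWS_ACCESS_KEY_ID="${CI_AWS_ACCESS_KEY_ID}"
export AWS_SECRET_ACCESS_KEY="${CI_AWS_SECRET_ACCESS_KEY}"

# Non-interactive run: skip dry-run and confirmation prompts
aws-nuke -c nuke-config.yml --no-dry-run --force --force-sleep 3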

What IAM Policy should I use for the User/Role running aws-nuke?

This is a tricky question, and the answer depends on your needs:

  • If you want to delete only certain resource types (such as all EC2 resources or all S3 resources), you can use AWS built-in *FullAccess policies such as AmazonS3FullAccess.
  • If you want to delete ALL resources, admin-level access will be faster to set up, but less secure.
  • Otherwise you can use fine-grained IAM policies allowing only read and delete actions (read access is required to identify the resources to delete). Leveraging wildcard actions such as xxx:Delete* makes such policies easier to set up.

Here's an IAM policy example allowing deletion of most EC2, EKS and CloudFormation resources:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "iam:ListAccountAliases",
                "ec2:Describe*",
                "ec2:List*",
                "ec2:Delete*",
                "ec2:Terminate*",
                "ec2:Cancel*",
                "eks:Describe*",
                "eks:List*",
                "eks:Delete*",
                "elasticloadbalancing:Describe*",
                "elasticloadbalancing:Delete*",
                "cloudformation:Delete*",
                "cloudformation:Get*",
                "cloudformation:Describe*",
                "cloudformation:List*"
            ],
            "Resource": "*"
        }
    ]
}

GitHub Actions setup example

Example GitHub action workflow to add under .github/workflows/nuke.yml in your repository:

# GitHub workflow running every night at 2am
name: Run aws-nuke
on:
  schedule:
    - cron:  '0 2 * * *'

  # Specify workflow_dispatch to allow manual run
  # See https://docs.github.com/en/actions/managing-workflow-runs/manually-running-a-workflow
  workflow_dispatch:

jobs:
  aws_nuke:
    runs-on: ubuntu-latest
    # Use the official aws-nuke Docker image with the root user
    container:
      image: quay.io/rebuy/aws-nuke:v2.16.0
      options: --user root
    steps:
    # Checkout your repository and setup AWS credentials
    - name: checkout repo
      uses: actions/checkout@v2
    - name: Configure AWS Credentials
      uses: aws-actions/configure-aws-credentials@v1
      with:
        # These secrets should be configured on your repository
        # Settings > Secrets > Actions
        aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
        aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        aws-region: eu-west-3

    # Run aws-nuke
    # Use --force --no-dry-run and --force-sleep
    # to prevent aws-nuke confirmation prompt
    - name: run aws-nuke
      run: |-
        aws-nuke -c ${GITHUB_WORKSPACE}/nuke-config.yml --force --no-dry-run --force-sleep 3

See the GitHub Actions docs for details.

GitLab CI setup example

In your .gitlab-ci.yml add a job such as:

# Make sure nuke stage exists (among others if needed)
stages:
  - nuke

aws-nuke:
  stage: nuke
  image: 
    name: quay.io/rebuy/aws-nuke:v2.16.0
    # Nullify the entrypoint, otherwise the CI job won't run:
    # GitLab runs containers with an "sh"-like Docker command,
    # which is incompatible with aws-nuke's default entrypoint
    entrypoint: [""]
  # Run aws-nuke
  # Use --force --no-dry-run and --force-sleep
  # to prevent aws-nuke confirmation prompt
  script:
  - aws-nuke -c nuke-config.yml --force --no-dry-run --force-sleep 3

Docker Compose setup example

Create a docker-compose.yml such as:

version: "3.8"
services:
  nuke:
    image: quay.io/rebuy/aws-nuke:v2.17.0
    volumes:
    - ${HOME}/.aws:/home/aws-nuke/.aws:ro
    - ${PWD}:/nuke
    working_dir: /nuke

Run command:

docker-compose run nuke -c nuke-config.yml

To get an interactive shell:

docker-compose run --entrypoint sh nuke

Conclusion

You can now delete every resource in your AWS accounts, while keeping fine-grained control over which resources to keep and which production accounts to blocklist. Remember to use this with care!

Visit my aws-nuke GitHub repo for complete examples and my DevOps examples repository for more nice stuff.

Do not hesitate to leave comments or suggestions 😉

2 Comments

  1. Thank you so much for this article, it's helpful!
    But I have a question:
    if we want to nuke multiple AWS accounts, how can we give all their credentials, or do we just have to provide their IDs in the config file like you do in your GitHub config file?

  2. Providing multiple account IDs is enough; aws-nuke will use the account linked to the current credentials. Providing account IDs is more of a security feature to avoid nuking accounts accidentally - see https://github.com/rebuy-de/aws-nuke#caution

    For example, if the current AWS profile credentials are attached to account 123456789, then aws-nuke will use this account, even though other accounts are listed. You can also specify credentials using --profile or --access-key-id + --secret-access-key (and maybe --assume-role-arn).
