Yesterday, HashiCorp announced a new “free” tier for Terraform Cloud.

Although this seems like a good improvement (free SSO and OPA!), the FAQ reveals that it introduces a limit on the number of resources (500) you can manage. Once you go over that limit, you get charged per resource. This is less than ideal for current users with non-commercial projects hosted there.

Terraform Cloud Pricing. Courtesy of HashiCorp.

Now, I (used to) use Terraform Cloud exclusively for storing multiple state files (as Terraform is run in CI/CD pipelines), roughly one per AWS account I manage. On average, each of these workspaces contains more than 500 resources.

My (now deprecated) Terraform Cloud workspaces.

Failed Automated Migration

The first thing I did when I heard of this was try to migrate a test project to S3 as the remote state backend.

However, when I tried to migrate to any other backend, I got the following error (the page linked from the error message is far from helpful in providing an actionable solution).

Terraform does not support migrating away from Terraform Cloud.

Manual Migration

I wanted to avoid a manual migration, as it could be error-prone. However, looking at the HashiCorp community forum, it seemed that the only way forward was to pull the state down manually and re-initialise it in another backend (see this discussion, for example).

In the meantime, the HashiCorp support team replied with a similar answer:

Support Answer.

So, let’s migrate.

Create resources needed to manage the state in S3

You basically want a private S3 bucket with versioning enabled, plus a DynamoDB table to lock the state file and prevent concurrent modifications and state corruption.
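In raw Terraform, that boils down to something like the following minimal sketch (AWS provider v4+ syntax; the resource names are placeholders matching the backend block later in this post, and a hardened setup would also enable encryption at rest):

# Minimal sketch of the backend resources (placeholder names).
resource "aws_s3_bucket" "terraform_state" {
  bucket = "<bucket-name>"
}

resource "aws_s3_bucket_versioning" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  versioning_configuration {
    status = "Enabled"
  }
}

# Keep the bucket private.
resource "aws_s3_bucket_public_access_block" "terraform_state" {
  bucket                  = aws_s3_bucket.terraform_state.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

resource "aws_dynamodb_table" "terraform_lock" {
  name         = "<dynamo-db-table-lock>"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID" # the partition key Terraform's S3 backend expects for locking

  attribute {
    name = "LockID"
    type = "S"
  }
}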

Although I ended up creating a custom module, if you are looking for something quick, this module from cloudposse is a great starting point:
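For reference, a minimal instantiation of that module looks something like this (the namespace/stage/name labels below are hypothetical and drive the names of the generated bucket and table; check the module's documentation for the full list of inputs):

module "tfstate_backend" {
  source  = "cloudposse/tfstate-backend/aws"
  version = "~> 1.0" # pin to a release you have vetted

  # Hypothetical labels: the module derives the bucket/table names from these.
  namespace  = "acme"
  stage      = "prod"
  name       = "terraform"
  attributes = ["state"]
}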

Add the module to your existing Terraform configuration and run terraform apply (still using your old state in Terraform Cloud).

Manually pull down the state from Terraform Cloud

Next, we need to download the current state from Terraform Cloud:

terraform state pull > terraform.tfstate
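Optionally, sanity-check the pulled file before going further. State files are plain JSON, so something like this (using jq) shows the serial and lineage, which you can later compare against what lands in S3:

jq '{version, serial, lineage, resources: (.resources | length)}' terraform.tfstate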

Once you have downloaded it, be sure to remove the existing local .terraform directory, as it will contain another terraform.tfstate still pointing to Terraform Cloud:

mv .terraform .terraform.old

Swap backend

We can then finally swap backends.

We are going to change the backend from cloud:

terraform {
  cloud {
    hostname     = "app.terraform.io"
    organization = "<your-org>"

    workspaces {
      name = "<your-workspace>"
    }
  }

  required_providers {
    ...
  }
}

To s3:

terraform {
  backend "s3" {
    region         = "<aws-region>"
    key            = "terraform.tfstate"
    bucket         = "<bucket-name>"
    dynamodb_table = "<dynamo-db-table-lock>"
  }

  required_providers {
    ...
  }
}

You’ll have to set the values of region, key, bucket, and dynamodb_table to match the resources you created earlier with the cloudposse module.
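For example, with hypothetical values matching the cloudposse labels used earlier:

terraform {
  backend "s3" {
    region         = "eu-west-1"
    key            = "terraform.tfstate"
    bucket         = "acme-prod-terraform-state"
    dynamodb_table = "acme-prod-terraform-state-lock"
  }
}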

Finally, run terraform init with the new configuration to initialise the new backend. In the process, Terraform will ask whether you want to migrate the existing state. Select yes, and it will automatically port all your resources to the new state stored in S3.
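The prompt looks roughly like this (exact wording may vary between Terraform versions):

$ terraform init

Initializing the backend...
Do you want to copy existing state to the new backend?
  ...
  Enter a value: yes

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.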

Cleanup

As a last step, remember to remove the .terraform.old folder and the state file you pulled from Terraform Cloud.

rm -r .terraform.old
rm terraform.tfstate

Conclusions

I hope you found this post valuable and interesting, and I’m keen to get feedback on it! If you find the information shared helpful, if something is missing, or if you have ideas on improving it, please let me know on 🐣 Twitter or at 📢 feedback.marcolancini.it.

Thank you! 🙇‍♂️