Sample DevSecOps flow

Using PGP to enhance the security and non-repudiation of Terraform operations

Shekhar Jha · Published in InfoSec Write-ups · Apr 27, 2022 · 5 min read

Terraform has become the lingua franca for multi-cloud infrastructure as code. Because of the sensitive content stored in Terraform plans and state (depending on organizational policy, passwords, IP addresses, network structure, etc. may be classified as sensitive information), most organizations need additional controls to ensure that Terraform artifacts are properly secured.

Terraform provides the ability to mark variables and values as sensitive, encrypt generated passwords, and store encrypted state to reduce the possibility of sensitive data leakage. This article explores an approach that builds on these capabilities to create a standard way to achieve security and non-repudiation in multi-cloud scenarios. In doing so, it builds on the high-level ideas captured in Enterprise Cloud Security: Application Development.

Considerations

Terraform provides capabilities that can be used to create a secure DevOps infrastructure, and many articles cover how to use them. At the same time, there are a few considerations with the current state of Terraform security.

Security depends on provider

Most of the security capabilities, such as encrypting passwords, marking values as sensitive, and backend encryption, depend on the plugin provider (or, in the case of backends, the backend implementation). This results in an ad-hoc approach to security across providers and even within a single provider (e.g. in the AWS provider, the IAM access key resource uses PGP to enable encryption, but the DB instance password is maintained in plain text), which can create an inconsistent security posture in multi-cloud scenarios. This raises the need for an approach that works across providers without depending on individual provider capabilities.

State and beyond

An important part of Terraform security focuses on securing the state file, primarily because it contains passwords and other sensitive data in plain text unless explicitly secured by developers through provider capabilities. Terraform backends vary in their ability to secure state files, from AWS S3 (server-side encryption) and GCS (customer-supplied encryption keys), which support encryption at rest with a customer key, to Azure and Artifactory, which provide no such capability. Even the best available measure, server-side encryption, can still be susceptible to deep-packet inspection (in the absence of proper certificate-validation controls).

In addition, other artifacts such as variable files and plans may contain sensitive content and need to be secured properly. Multiple environments, with their associated files and configurations, can quickly multiply the number of artifacts that development teams must track for security.

What about non-repudiation?

How do you determine whether a change was made as part of a Terraform operation or by a rogue entity with access to the identity that Terraform uses? With the growth of automation, there is a significant need to clearly identify authorized changes. This requires going beyond traditional security controls like symmetric-key encryption, to ensure that artifacts are signed and published to the right tools for analysis.

Approach

To provide security and non-repudiation while supporting multi-cloud, multi-environment scenarios, an opinionated approach has been developed and implemented that leverages a standard directory structure and PGP to encrypt and sign the artifacts. The scripts that perform these operations are available on GitHub.

Each environment has a PGP key pair generated during setup (see setup.sh for implementation details). The Terraform-specific directory structure is created and all relevant files are copied into it. This is followed by the terraform validate and plan operations; the plan output is stored for reference and for use in the next step. terraform apply creates a backup of the state and uses the previously created plan to apply the changes. On successful application, the Terraform directory is cleaned up (*.tf files are removed, plugins are packaged into a .plugin directory) and packed. The packed file is encrypted and saved to the storage location.
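The cycle above can be sketched as a small shell function. The function and path names here (tf_cycle, .plans, .states) follow the article's description, but the exact commands are illustrative assumptions, not the repository's actual scripts:

```shell
#!/usr/bin/env bash
# Illustrative sketch of the validate -> plan -> apply -> pack cycle.
set -euo pipefail

tf_cycle() {
  local env_dir="$1"   # per-environment terraform directory

  mkdir -p "$env_dir/.plans" "$env_dir/.states"

  terraform -chdir="$env_dir" validate
  terraform -chdir="$env_dir" plan -out=.plans/current.tfplan

  # Back up the current state (if any) before applying the saved plan
  if [ -f "$env_dir/terraform.tfstate" ]; then
    cp "$env_dir/terraform.tfstate" "$env_dir/.states/terraform.tfstate.bak"
  fi
  terraform -chdir="$env_dir" apply .plans/current.tfplan

  # Clean up and pack: drop *.tf sources, keep plans/states/plugins
  rm -f "$env_dir"/*.tf
  tar -czf "$env_dir.tar.gz" -C "$env_dir" .
}
```

Invoking `tf_cycle env/dev` would leave an `env/dev.tar.gz` archive ready for the encryption step.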

During the next execution (typically performed using apply.sh), the saved state is loaded and decrypted, and then terraform apply is executed. This ensures that all content is encrypted when stored. The process can easily be extended to support signing with external keys, adding non-repudiation.
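The restore step might look like the following sketch; the repository's apply.sh is the real implementation, and the function name and flags here are illustrative assumptions:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the restore performed before the next apply.
set -euo pipefail

tf_restore() {
  local env_dir="$1"

  # Decrypt the packed archive produced by the previous run
  gpg --batch --yes --decrypt -o "$env_dir.tar.gz" "$env_dir.tar.gz.gpg"

  # Unpack and re-run terraform against the restored state
  mkdir -p "$env_dir"
  tar -xzf "$env_dir.tar.gz" -C "$env_dir"
  terraform -chdir="$env_dir" init -backend-config=backend.cfg
  terraform -chdir="$env_dir" apply -auto-approve
}
```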

Directory structure

Traditionally, environment-specific variable files are checked in to source control while plans and other intermediate artifacts are not tracked. In this approach, we create a directory structure for each environment and keep track of plans, states, backend configuration, and variables.

Terraform directory structure

The Terraform directory has dedicated subdirectories for plans (.plans), state backups (.states), and plugins kept for audit purposes.

Content of various directories for a terraform environment

This directory is passed to the -chdir option so that all content generated by Terraform (.terraform, .terraform.lock.hcl), the backend configuration (backend.cfg), and the variable values (terraform.tfvars) are managed in a single location.
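Concretely, a per-environment directory might look like this (the env/dev path is illustrative; the .plans, .states, .plugin, backend.cfg, and terraform.tfvars names come from the article):

```
env/dev/
├── main.tf                 # configuration (removed before packing)
├── backend.cfg             # backend configuration
├── terraform.tfvars        # environment-specific variable values
├── .terraform/             # providers and modules pulled in by init
├── .terraform.lock.hcl     # dependency lock file
├── .plans/                 # saved plan output
├── .states/                # state backups
└── .plugin/                # packaged plugins kept for audit
```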

The Terraform operations are implemented in tf.sh and abstracted through infra.sh, which provides functions to initialize an environment, apply changes, and clean up. The pack and unpack methods remove unnecessary files and create a tarred and gzipped file for storage.
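The pack and unpack behavior can be sketched with plain tar. The function names mirror the description above, but the choice of files to drop is an assumption (only *.tf removal is mentioned explicitly):

```shell
#!/usr/bin/env bash
# Minimal pack/unpack helpers in the spirit of infra.sh.
set -euo pipefail

pack() {
  local env_dir="$1"
  # Remove files that can be recreated from source control before archiving
  rm -f "$env_dir"/*.tf
  tar -czf "$env_dir.tar.gz" -C "$env_dir" .
}

unpack() {
  local archive="$1"
  local env_dir="${archive%.tar.gz}"
  mkdir -p "$env_dir"
  tar -xzf "$archive" -C "$env_dir"
}
```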

Encryption and signing

The Terraform directory can be packed and then stored, to later recreate the environment and perform various operations. The packed file is encrypted using PGP keys that are generated and kept in an appropriate store (e.g. AWS Secrets Manager), to be retrieved as needed. The pgp.sh file provides methods to generate, store, retrieve, and delete PGP keys using the gpg package; it also provides methods to encrypt and decrypt files.
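As a sketch of what such gpg operations look like (the key name tf-env-dev, the throwaway keyring, and the no-passphrase batch options are illustrative assumptions, not pgp.sh itself):

```shell
#!/usr/bin/env bash
# Sketch of PGP key generation, encryption, and decryption with the gpg CLI.
set -euo pipefail

# Throwaway keyring so the host keyring is untouched
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"

# Generate a per-environment key pair non-interactively, without a passphrase
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key 'tf-env-dev' default default never

# Encrypt a packed archive for the environment key
echo 'dummy archive' > env.tar.gz
gpg --batch --yes --trust-model always --encrypt \
    --recipient 'tf-env-dev' -o env.tar.gz.gpg env.tar.gz

# Decrypt it back
gpg --batch --yes --pinentry-mode loopback --passphrase '' \
    --decrypt -o restored.tar.gz env.tar.gz.gpg
```

In a real pipeline the private key would of course carry a passphrase or live in a secrets store rather than on disk.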

Managing files

The encrypted files can be stored in and retrieved from the file store of choice (currently AWS S3 is supported) using store.sh, which contains generic methods to store keys and files; aws.sh provides the AWS-specific operations.
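A sketch of what such store/retrieve wrappers around the AWS CLI could look like (the function names and bucket layout are assumptions; store.sh and aws.sh are the actual implementation):

```shell
#!/usr/bin/env bash
# Hypothetical store/retrieve wrappers over the AWS CLI.
set -euo pipefail

store_file() {
  local file="$1" bucket="$2"
  aws s3 cp "$file" "s3://$bucket/$(basename "$file")"
}

retrieve_file() {
  local name="$1" bucket="$2" dest="$3"
  aws s3 cp "s3://$bucket/$name" "$dest"
}
```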

The growth of Terraform as the language of choice for infrastructure as code has raised a unique challenge: developing controls that maintain an appropriate level of audit, logging, security, and non-repudiation throughout the lifecycle of the generated environment. This article shares some of the work done so far. Future work will build on it to add support for GCP and Azure as stores, along with the ability to sign and validate files.


Focus area: Identity and access management (workforce and CIAM) and cloud security