Terraform is creating a terraform.tfstate.backup file locally even when state is remote (S3 or Google Storage Bucket) #15339
I see that as well - *.tfstate.backup is being created locally. It doesn't affect any functionality, but it's slightly annoying because you expect to have no tfstate files in the working directory. If that file is needed locally, I would suggest moving it to .terraform - the same place where the local copy of *.tfstate is.
I see the same on Terraform v0.9.8, S3 backend
I see the same on Terraform v0.9.11, S3 backend
Every version from 0.9.6 to 0.9.11 does the same thing. This is becoming frustrating.
Same thing
+1
1 similar comment
+1
As a general point of GitHub etiquette, please don't +1 issues. Use the reactions feature GitHub has provided. You can actually sort issues by reaction counts. You can't do that with +1 comments.
This is indeed a very annoying bug, I'm adding
👍 using
Just bit by this as well. Could we please get a reply from HashiCorp?
Hi all! Sorry for the delayed response here.

This backup file is created locally to allow for it to be used to recover in the event of an erroneous update. It's placed locally rather than remotely because the recovery commands expect to read state from a local file.

You're all correct that prior to 0.9 this file was created as a sibling of the local state cache inside the .terraform directory, rather than in the working directory. I understand the annoyance this causes, and agree that it should be written instead into the .terraform directory, alongside the other working files Terraform keeps there.

One wrinkle here is that we do still have a local backend, for which a backup file in the working directory remains appropriate. Therefore I'd like to propose that we move it to .terraform/terraform.tfstate.backup for remote backends, while keeping the current behavior for purely local state.

I acknowledge that this doesn't address the concern of the state containing sensitive information and now being written in the local filesystem. My proposed change above doesn't address this, and focuses only on preventing the backup from being inadvertently added to version control. We could consider separately providing the option to disable the local backup to address this, but since the state being on local disk isn't a new problem (that's been true from day one) I'd prefer to address that separately.

Does that seem like a reasonable path here?
@apparentlymart this sounds like some great steps in the right direction! And you're right that renaming the backup file to
👍 for simplicity and just moving it into .terraform
Wasn't the whole point of removing the local state cache in 0.9 that state no longer gets written to local disk at all when using a remote backend?

Now these backup files are getting dropped all over the place, littered with secrets, which is a major security regression from the initial 0.9.0 release in my opinion. (Unless they were getting written to disk before as well and I just never noticed.)

My preference would be completely disabling backups by default if using remote states. If you're using S3 with versioning enabled (or something equivalent), you've got backups of all your previous states anyway. If you really want to create a backup, you should be required to explicitly ask for one, e.g. by passing a -backup=PATH flag.

If there's concern about failing to update the remote state after applying changes, why not just keep everything in RAM, and only drop the backup to disk if you get an error writing to the remote state store? Or at the very least, delete it from disk after you confirm a successful write to the remote state store.
I agree with @reubit, but I'm not sure if it's within the scope of this ticket. I hate worrying that a dev's laptop will get stolen with all of our sensitive state on it.
Hi all,

As noted before, I understand that there are really two issues being discussed here: moving the file back to where it used to be (in the .terraform directory), and not writing it to local disk at all.

I agree that disabling the backup files altogether by default seems like a reasonable idea, but that requires more caution since it's something that could affect people's workflows. I'm open to it, but at least we'd need to wait until the next major release so we can talk more loudly about it in case anyone is relying on it and needs to make new accommodations, such as adding a new option to commands as @reubit suggested. We can move the file back into the .terraform directory in the meantime.

I also want to note that as of 0.9.6 Terraform got a new behavior where if the final state write (at the conclusion of terraform apply) to the remote backend fails, the state is saved to a local errored.tfstate file so that the result of the run is not lost.

In general I would not recommend those who have sensitive data in state files to be routinely working with those state files on arbitrary laptops -- in that case, it's better to run Terraform in a well-maintained, secure environment -- but I know that this is often easier said than done, so I'm definitely open to improving the default behavior to reduce the risk of accidental secret leakage, especially since, as noted, the use-case for this backup file can be served in other ways.
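For reference, the recovery path for a locally saved state file goes through the `terraform state` subcommands. A sketch, assuming Terraform 0.9+ and the default file names mentioned in this thread:

```
# Upload a locally saved state (e.g. the errored.tfstate written after a
# failed remote write) back to the configured remote backend:
terraform state push errored.tfstate

# The same command works for the local backup file discussed in this issue:
terraform state push terraform.tfstate.backup
```

`terraform state push` refuses to overwrite a newer remote state unless forced, which makes it a reasonably safe recovery tool.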
I think that moving the file to the .terraform directory is a good pragmatic first step. |
This is preventing me from executing Terraform via a Lambda function (because AWS Lambda has a read-only file system). Why would I attempt to use Lambda in the first place, you might ask? To this I counter: why not? I thought this would be an interesting project that I might be able to create a use case for. A great example of a use case: if a third-party developer wanted to spin up a dev/staging environment, they could email me, and AWS Lambda with Terraform would spin up the predefined environment automagically, without releasing console credentials if there is no need for it. Please, at the very least, let us disable creation of terraform.tfstate.backup files. If we have versioning enabled on our S3 buckets, this 'feature' is pretty useless.
@apparentlymart could you respond directly to this comment:
Should we open a new issue? Or can you do that? Or is there an existing issue out there already? Pull requests welcome I imagine? Earlier you said:
Is this a problem "now" or has it always been around? The docs seem somewhat inconsistent: At https://www.terraform.io/docs/backends/config.html I see this:
Meanwhile at https://www.terraform.io/docs/backends/state.html I see this:
As a workaround, could you give us a flag to force the backup to be placed alongside the tfstate file in the remote? In the event a recovery is required, downloading the backup from the remote (S3, in my case) isn't going to be a huge deal.
This is a huge difference from the docs, which explicitly state that no state will be written to disk. This really needs to be fixed, pronto!
I just ran into this and I agree: the docs are very misleading. The terraform.tfstate.backup file shouldn't be created at all, or this behavior should be added to the documentation.
For our purposes we absolutely need to prevent writes of secrets to local disk; that's why we're using remote state to begin with. Even storing the backup state in .terraform is unacceptable by the same reasoning. I'd advocate for a flag to prevent any backup state storage.
@snwight agree ultimately remote storage should never write backups to disk, but writing into the root as
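Until the behavior changes, one mitigation (assuming the project lives in Git) is to make sure the state and backup files can never be committed, e.g. with entries like these in .gitignore:

```
# Keep Terraform state and its local backups out of version control
.terraform/
*.tfstate
*.tfstate.backup
```

This doesn't address secrets sitting on local disk, but it does close the accidental check-in risk raised throughout this thread.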
Totally agree, yes |
+1
I guess you didn't read through the thread before commenting? If you had, you'd have seen @nbering's comment: "As a general point of GitHub etiquette, please don't +1 issues. Use the reactions feature GitHub has provided. You can actually sort issues by reaction counts. You can't do that with +1 comments."
True, sorry, will do reactions next time.
Previously we forced all remote state backends to be wrapped in a BackupState wrapper that generates a local "terraform.tfstate.backup" file before updating the remote state.

This backup mechanism was motivated by allowing users to recover a previous state if user error caused an undesirable change such as loss of the record of one or more resources. However, it also has the downside of flushing a possibly-sensitive state to local disk in a location where users may not realize its purpose and accidentally check it into version control. Those using remote state would generally prefer that state never be flushed to local disk at all.

The use-case of recovering older states can be dealt with for remote backends by selecting a backend that has preservation of older versions as a first-class feature, such as S3 versioning or Terraform Enterprise's first-class historical state versioning mechanism.

There remains still one case where state can be flushed to local disk: if a write to the remote backend fails during "terraform apply" then we will still create the "errored.tfstate" file to allow the user to recover. This seems like a reasonable compromise because this is done only in an _exceptional_ case, and the console output makes it very clear that this file has been created.

Fixes #15339.
I was hoping it would fix the creation of local tfstate.*.backup files (hashicorp/terraform#15339), but it didn't.

Change-Id: I7eef74ad024b4fae9dd8cd3a44bb712aef9d1f72
GitOrigin-RevId: be780e8
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
I'm seeing a new issue, starting with (I think it was?) 0.9.4, where Terraform is writing a terraform.tfstate.backup file in the local working directory even when state is configured to be stored remotely. This is happening for me using remote state in a Google Storage Bucket, but I've confirmed on the hangops Slack group in the #hashicorp channel that others have also started noticing it when using Amazon S3 for remote state, so it appears to be a general issue with remote state storage. It means anything sensitive in the state is now being stored on the local system, and potentially even being pushed to source code repositories if these files aren't set to be ignored.
Terraform Version
v0.9.6
Terraform Configuration Files
terraform-base.tf:
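(The original configuration files were not captured in this export. A minimal configuration that reproduces the report would look something like the following; the bucket name, key, and region are placeholders:)

```hcl
terraform {
  backend "s3" {
    bucket = "example-terraform-state"   # placeholder bucket name
    key    = "base/terraform.tfstate"    # placeholder state key
    region = "us-east-1"                 # placeholder region
  }
}
```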
Expected Behavior
State should only be written remotely to the configured Amazon S3 bucket or the Google Storage Bucket.
Actual Behavior
State is written both to the remote state target and to a local terraform.tfstate.backup file.
Steps to Reproduce