Add a centralized backup and restore mechanism #5008
Comments
We had this feature a very long time ago, but it was quite broken: it expected every plugin to write its backup contents to stdout, then tarred it all up, which caused broken backups in some cases. This is actually a bit more complex than you'd expect.
I'd previously tried to raise money via GitHub Sponsors for backup/restore functionality across the project, but didn't get to my goal of $500. That said, if you'd like to work on this functionality, by all means go ahead. I'd be happy to provide feedback once you have a skeleton working, to get it to a place that is usable by core, datastore, and community plugins.
Dokku is more or less at the $500-a-month goal, so I'll start working on this in the next few releases. I think there is still some work to move a few files out of the git repositories (docker options, env vars, nginx configs), but those should be relatively straightforward. Let me think about a good backup interface and then post my comments here. I'm guessing a few things will change about how datastores do backups, but I believe that will be a Good Thing™.
Okay, I think what I'll do is something like this:
That sounds great! If I were going to extend this functionality to e.g. store backups on S3, would I use the post-backup and pre-import hooks? How would that plugin prune old backups? (Totally fine if the last bit is just a custom command or something)
I'd like to concentrate on having this create a tarball on disk. If someone wants to upload that somewhere, they can do so; similarly, if their backup process should prune old backups, they can handle that themselves. To extend and do something custom with the file, you'd use a hook of some sort.
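The tarball-on-disk flow described above can be sketched roughly as follows. This is a sketch under stated assumptions: the `post-backup-archive` trigger name is hypothetical, and a stub stands in for Dokku's real `plugn` dispatcher so the example is self-contained.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-in for Dokku's real trigger dispatcher so this sketch runs anywhere;
# the "post-backup-archive" trigger name is hypothetical.
plugn() { echo "hook: $*"; }

# Pretend a plugin already staged some files into the backup directory.
stage="$(mktemp -d)"
echo "example.com" > "$stage/domains.txt"

# Package whatever the plugins staged into a single tarball on disk.
tarball="${stage}.tar.gz"
tar -czf "$tarball" -C "$stage" .

# An S3 (or any other) plugin could act on this trigger to upload or prune.
plugn trigger post-backup-archive "$tarball"
```

The point of the design is the hand-off: Dokku stops at the tarball, and anything beyond that (upload, retention) lives in a plugin listening on the trigger.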
Absolutely -- not suggesting that S3 storage be implemented in Dokku directly. Just thinking through the ways this could be extended by other plugins -- if the answer is that S3 storage should be done without interacting with the backup plugin directly, that's totally workable.
I just want to avoid the issue we have with the datastore plugins, where each one implements S3 support. Folks use things other than S3, so it makes it seem like users can't do anything else. I'd rather we document tools in the ecosystem that make those things possible, maybe even writing a tutorial for them.
Totally understood! Appreciate the time and thought :)
Description of problem
As a dokku admin, if I want to export a backup of an entire application and all of its dependencies, I need to know how that application works, whether the plugins that provide the backing services offer backup commands, etc. It would be much better if there were a mechanism by which an application and all of its attached services can be told to export whatever they need to export.
Possible Solution
I think a very lightweight solution could be put in place here: we just need six new plugn triggers:
The only thing Dokku needs to handle is dispatching those triggers, providing a directory that each plugin should write to or read from (as well as backing up/restoring core data, of course; from what I can see, this is already handled in other commands, so it would just need to be wired up).
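A minimal sketch of that dispatch side might look like this. The trigger names (`pre-backup`, `backup-export`, `post-backup`) are assumptions for illustration, not Dokku's real trigger interface, and a stub `plugn` function stands in for the real dispatcher so the sketch runs on its own.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-in for the real `plugn trigger` dispatcher; trigger names below
# are hypothetical.
plugn() { echo "dispatch: $*"; }

APP="my-app"
BACKUP_DIR="/tmp/backup/${APP}-$(date +%s)"
mkdir -p "$BACKUP_DIR"

# Dokku only provides the directory; each plugin reads/writes its own
# files there in response to the trigger.
plugn trigger pre-backup "$APP" "$BACKUP_DIR"
plugn trigger backup-export "$APP" "$BACKUP_DIR"
plugn trigger post-backup "$APP" "$BACKUP_DIR"
echo "backup staged in $BACKUP_DIR"
```

Restore would be the mirror image: the same dispatch loop over `pre-restore`/`restore-import`/`post-restore`-style triggers, with each plugin reading its own files back out of the directory.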
Example:
If I'm running an application that was pushed to Git with a MySQL backing service and a persistent storage directory, here's what I'm imagining would happen when I trigger a backup:
* Dokku creates the directory /tmp/backup/app-name-[timestamp]/
* Dokku exports a copy of the application code from the git repo to /tmp/backup/app-name-[timestamp]/code.tar.gz
* Dokku exports a list of domains to /tmp/backup/app-name-[timestamp]/domains.txt
* Dokku exports a snapshot of the persistent storage dir to /tmp/backup/app-name-[timestamp]/some-meaningful-name.tar.gz
* (repeat for all core plugins)
* The MySQL plugin is asked to export a backup for app [app-name] into dir /tmp/backup/app-name-[timestamp]
* Under the hood, this could use mysqldump or mysqlhotcopy or whatever. The implementation details don't matter and will vary by plugin.

When I ask to restore something, the reverse should happen. If I have an S3 backups plugin or w/e, it should be responsible for downloading the backup that I've specified and extracting it into the location where Dokku expects the backup data to be available. Each plugin is responsible for reading its own data from the backup dir.
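To make the "each plugin writes its own files" idea concrete, here is a sketch of what a MySQL plugin's export trigger might look like. Everything here is illustrative, not Dokku's real plugin API: the function name, the linked-service naming, and the dump file layout are all assumptions, and a stub replaces `mysqldump` so the sketch is runnable anywhere (a real plugin would call the actual tool).

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-in for mysqldump so the sketch runs without a MySQL server;
# a real plugin would invoke mysqldump, mysqlhotcopy, etc.
mysqldump() { echo "-- dump of database $1"; }

# Hypothetical trigger handler: Dokku passes the app name and the shared
# backup directory; the plugin decides what to write there and how.
backup_export() {
  local app="$1" backup_dir="$2"
  local service="${app}-db"   # hypothetical linked service name
  mysqldump "$service" > "$backup_dir/${service}.sql"
}

dir="$(mktemp -d)"
backup_export my-app "$dir"
cat "$dir/my-app-db.sql"
```

On restore, the same plugin would read `[service].sql` back out of the directory and load it, which keeps all format knowledge inside the plugin that wrote the file.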
Aside
I feel like this existed at some point and that I'm reproducing it (or something like it) from memory, but maybe I'm remembering wrong? Is there some reason that the core product should not provide this kind of plumbing? Backup and restore seems really important for a PaaS.