update bootstrap to not recover v2store #16470

Draft · wants to merge 1 commit into base: main
Conversation

@geetasg geetasg commented Aug 24, 2023

@geetasg geetasg marked this pull request as draft August 24, 2023 21:02
```diff
@@ -28,7 +28,7 @@ const (
 	// ability to rollback to etcd v3.5.
 	V2_DEPR_2_GONE = V2DeprecationEnum("gone")
 
-	V2_DEPR_DEFAULT = V2_DEPR_1_WRITE_ONLY
+	V2_DEPR_DEFAULT = V2_DEPR_1_WRITE_ONLY_DROP
```
Member: v3.6 should not drop v2 entries as it would break downgrade to v3.5
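For context, a minimal sketch of the deprecation ladder this default selects from; the constant names follow the diff above, while the string values and comments are assumptions based on etcd's server/config package rather than part of this PR:

```go
package config

// V2DeprecationEnum names how far the v2 store deprecation has progressed.
type V2DeprecationEnum string

const (
	// Sketch: v2 data is still written, so a member can roll back to v3.5,
	// which still reads the v2 store from snapshots.
	V2_DEPR_1_WRITE_ONLY = V2DeprecationEnum("write-only")
	// Sketch: v2 data found on boot is dropped; rollback to v3.5 is no longer safe.
	V2_DEPR_1_WRITE_ONLY_DROP = V2DeprecationEnum("write-only-drop-data")
	// Sketch: v2 data is neither written nor read.
	V2_DEPR_2_GONE = V2DeprecationEnum("gone")
)
```

Keeping the default at V2_DEPR_1_WRITE_ONLY, as the reviewer suggests, is what preserves the rollback path to v3.5.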

```diff
@@ -378,7 +376,7 @@ func bootstrapClusterWithWAL(cfg config.ServerConfig, meta *snapshotMetadata) (*
 	}, nil
 }
 
-func recoverSnapshot(cfg config.ServerConfig, st v2store.Store, be backend.Backend, beExist bool, beHooks *serverstorage.BackendHooks, ci cindex.ConsistentIndexer, ss *snap.Snapshotter) (*raftpb.Snapshot, backend.Backend, error) {
+func recoverSnapshot(cfg config.ServerConfig, be backend.Backend, beExist bool, beHooks *serverstorage.BackendHooks, ci cindex.ConsistentIndexer, ss *snap.Snapshotter) (*raftpb.Snapshot, backend.Backend, error) {
 	// Find a snapshot to start/restart a raft node
 	walSnaps, err := wal.ValidSnapshotEntries(cfg.Logger, cfg.WALDir())
```
Member: v3.6 should bootstrap from db and not last snapshot
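Purely to illustrate the signature change, a call site after this change would no longer thread a v2store through bootstrap; the surrounding variable names here are assumptions, not code from the PR:

```go
// Sketch: recoverSnapshot is driven by the WAL and the bolt backend only;
// no v2store.Store argument is passed anymore.
snapshot, be, err := recoverSnapshot(cfg, be, beExist, beHooks, ci, ss)
if err != nil {
	return nil, err
}
```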


```diff
-	if err = serverstorage.AssertNoV2StoreContent(cfg.Logger, st, cfg.V2Deprecation); err != nil {
-		cfg.Logger.Error("illegal v2store content", zap.Error(err))
+	if err = serverstorage.AssertV2DeprecationStage(cfg.Logger, cfg.V2Deprecation); err != nil {
```
Member: v3.6 should continue to generate snapshot files to ensure v3.5 backward compatibility.

```diff
-	lg.Info("restoring v2 store")
-	if err := s.v2store.Recovery(toApply.snapshot.Data); err != nil {
-		lg.Panic("failed to restore v2 store", zap.Error(err))
+	if err := serverstorage.AssertV2DeprecationStage(lg, s.Cfg.V2Deprecation); err != nil {
```
Member: We should just not restore v2store when we get it in snapshot. We still expect to get v2store in snapshot when we upgrade v3.5->v3.6 or downgrade.
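A hedged sketch, not this PR's code, of what "skip the restore but tolerate v2 data in the snapshot" could look like at this spot; it assumes the V2Deprecation field on the server config and its IsAtLeast comparison helper:

```go
// Sketch: when the deprecation stage says v2 data should be dropped, log and
// skip recovery instead of failing, because v3.5 members still include
// v2store content in snapshots during a mixed-version upgrade or downgrade.
if s.Cfg.V2Deprecation.IsAtLeast(config.V2_DEPR_1_WRITE_ONLY_DROP) {
	lg.Info("skipping v2 store restore",
		zap.String("v2-deprecation", string(s.Cfg.V2Deprecation)))
} else {
	lg.Info("restoring v2 store")
	if err := s.v2store.Recovery(toApply.snapshot.Data); err != nil {
		lg.Panic("failed to restore v2 store", zap.Error(err))
	}
}
```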

```diff
 	if err != nil {
 		return err
 
+func AssertV2DeprecationStage(lg *zap.Logger, deprecationStage config.V2DeprecationEnum) error {
+	//supported stages are "write-only-drop-data" and "gone"
```
@serathius (Member) commented Aug 25, 2023: For backward compatibility we need to keep write-only
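A minimal sketch of how the assertion could be broadened along the lines of this comment; the function shape follows the diff above, while the package name, import paths, and error handling are assumptions:

```go
package storage // imported as serverstorage by the bootstrap code above

import (
	"fmt"

	"go.uber.org/zap"

	"go.etcd.io/etcd/server/v3/config"
)

// AssertV2DeprecationStage (sketch): also accept "write-only", so members
// upgraded from v3.5, which still write v2 data, can start under v3.6.
func AssertV2DeprecationStage(lg *zap.Logger, stage config.V2DeprecationEnum) error {
	switch stage {
	case config.V2_DEPR_1_WRITE_ONLY, config.V2_DEPR_1_WRITE_ONLY_DROP, config.V2_DEPR_2_GONE:
		return nil
	default:
		lg.Warn("unexpected v2 deprecation stage", zap.String("stage", string(stage)))
		return fmt.Errorf("unexpected v2 deprecation stage: %q", stage)
	}
}
```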

@geetasg geetasg mentioned this pull request Sep 10, 2023

stale bot commented Mar 17, 2024

This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 21 days if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale label Mar 17, 2024
@stale stale bot removed the stale label Jun 11, 2024
@k8s-ci-robot: PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
