
With wal-push --pool-size > 1, archive_command cannot safely call any other archiver #435

Open
aristocrates opened this issue Jan 28, 2023 · 0 comments


Concurrent WAL archiving in wal-push scans the archive_status directory for .ready files and uploads up to --pool-size WAL segments per invocation of wal-push. For each segment uploaded this way, it renames the corresponding .ready file to .done, which causes Postgres to skip calling archive_command on those segments. If wal-push runs with --pool-size > 1 (the default for wal-push is 32) side by side with another archive command (e.g. archive_command is a script that calls both), this results in data loss at the other command's archive destination: the other archiver is simply never invoked for the segments wal-e marked done.
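
To make the failure concrete, here is a hedged sketch (not from the issue; the wrapper script and the second archiver are hypothetical) of the kind of multi-archiver archive_command described above, written as a Python wrapper that succeeds only if both archivers succeed:

#!/usr/bin/env python
# Hypothetical wrapper, e.g. archive_command = '/usr/local/bin/archive_both.py %p'.
# With wal-e's default --pool-size of 32, wal-push uploads extra segments
# out of band and marks them .done, so Postgres never invokes this script
# for them and the second archiver silently misses those segments.
import subprocess
import sys

wal_path = sys.argv[1]  # Postgres substitutes %p

rc_wal_e = subprocess.call(['wal-e', 'wal-push', wal_path])
rc_other = subprocess.call(['other-archiver', 'push', wal_path])  # hypothetical second archiver

sys.exit(0 if rc_wal_e == 0 and rc_other == 0 else 1)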

This mailing list thread goes into some more details about issues with archive commands modifying the contents of archive_status: https://www.postgresql.org/message-id/flat/[email protected]

Relevant code path:

wal-e/wal_e/cmd.py

Lines 657 to 660 in f5b3e79

elif subcommand == 'wal-push':
    external_program_check([LZOP_BIN])
    backup_cxt.wal_archive(args.WAL_SEGMENT,
                           concurrency=args.pool_size)

The archive_status scan inside wal_archive:

# Upload any additional wal segments up to the specified
# concurrency by scanning the Postgres archive_status
# directory.
started = 1
seg_stream = WalSegment.from_ready_archive_status(xlog_dir)
while started < concurrency:
    try:
        other_segment = next(seg_stream)
    except StopIteration:
        break

    if other_segment.path != wal_path:
        group.start(other_segment)
        started += 1

try:
    # Wait for uploads to finish.
    group.join()
except EnvironmentError as e:
    if e.errno == errno.ENOENT:
        print(e)
        raise UserException(
            msg='could not find file for wal-push',
            detail=('The operating system reported: {0} {1}'
                    .format(e.strerror, repr(e.filename))))
    raise
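
A self-contained sketch (hypothetical paths and segment names) of why the rename hides segments from any per-segment archiver: Postgres's archiver only acts on *.ready entries in archive_status, so a segment marked .done out of band is never offered to archive_command again.

import os
import tempfile

# Hypothetical miniature archive_status directory.
xlog_dir = tempfile.mkdtemp()
status_dir = os.path.join(xlog_dir, 'archive_status')
os.makedirs(status_dir)

segs = ['000000010000000000000001', '000000010000000000000002']
for seg in segs:
    open(os.path.join(status_dir, seg + '.ready'), 'w').close()

# The out-of-band upload (as in the scan above) marks the second
# segment done itself.
os.rename(os.path.join(status_dir, segs[1] + '.ready'),
          os.path.join(status_dir, segs[1] + '.done'))

# A per-segment archiver driven by .ready files now sees only the first
# segment; the second never reaches the other archive destination.
ready = sorted(f for f in os.listdir(status_dir) if f.endswith('.ready'))
print(ready)  # ['000000010000000000000001.ready']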

@staticmethod
def from_ready_archive_status(xlog_dir):
    status_dir = path.join(xlog_dir, 'archive_status')
    statuses = os.listdir(status_dir)

    # Try to send earliest segments first.
    statuses.sort()

    for status in statuses:
        # Only bother with segments, not history files and such;
        # it seems like special treatment of such quantities is
        # more likely to change than that of the WAL segments,
        # which are bulky and situated in a particular place for
        # crash recovery.
        match = re.match(storage.SEGMENT_READY_REGEXP, status)
        if match:
            seg_name = match.groupdict()['filename']
            seg_path = path.join(xlog_dir, seg_name)
            yield WalSegment(seg_path, explicit=False)

After a successful upload, segments discovered by the scan (explicit=False) are marked done:

if not segment.explicit:
    segment.mark_done()

def mark_done(self):
    """Mark the archive status of this segment as 'done'.

    This is most useful when performing out-of-band parallel
    uploads of segments, so that Postgres doesn't try to go and
    upload them again.

    This amounts to messing with an internal bookkeeping mechanism
    of Postgres, but that mechanism is not changing too fast over
    the last five years and seems simple enough.
    """
    # Recheck that this is not a segment explicitly passed from Postgres.
    if self.explicit:
        raise UserCritical(
            msg='unexpected attempt to modify wal metadata detected',
            detail=('Segments explicitly passed from postgres should not '
                    'engage in archiver metadata manipulation: {0}'
                    .format(self.path)),
            hint='report a bug')
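
The excerpt above stops before the rename itself. Per the description at the top of this issue, the method then renames the segment's .ready status file to .done; a minimal sketch of that step, assuming the same os/path imports as the excerpt and that self.name holds the segment's file name:

    # Sketch (assumed continuation): rename <segment>.ready to
    # <segment>.done in archive_status so Postgres treats the
    # segment as already archived.
    status_dir = path.join(path.dirname(self.path), 'archive_status')
    ready_metadata = path.join(status_dir, self.name + '.ready')
    done_metadata = path.join(status_dir, self.name + '.done')
    os.rename(ready_metadata, done_metadata)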

The default concurrency is set to 32, so this is the behavior with default settings:

wal-e/wal_e/cmd.py

Lines 288 to 290 in f5b3e79

wal_push_parser.add_argument(
    '--pool-size', '-p', type=int, default=32,
    help='Set the maximum number of concurrent transfers')

wal-g supports similar behavior but disables it by default for this very reason (wal-g/wal-g#950 (comment)); enabling it requires --pg-ready-rename.

It would be nice if there were a prominent warning in the README and docs that wal-push with --pool-size > 1 is not safe to use with an archive command that, e.g., is a script that calls multiple archivers and returns successfully if and only if all of them succeed.
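
Until such a warning lands, one conservative workaround follows from the quoted loop: with --pool-size 1, started < concurrency is false from the start, so the archive_status scan never runs and wal-e only handles the segment Postgres explicitly passes. A hypothetical configuration (the envdir path is illustrative):

# postgresql.conf (illustrative): restrict wal-e to the one segment
# Postgres passes, so a multi-archiver wrapper stays safe.
archive_command = 'envdir /etc/wal-e.d/env wal-e wal-push --pool-size 1 %p'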
