Initial checklist
Link to runnable example
No response
Steps to reproduce
Note: Some or all issues may be reproducible without setting shouldUseMultiPart to always be true.
Also: set up a Companion server.
It will be easier to reproduce if you have about 10 files of 1 MB or less.
Reloading the browser can be used as the crash (in the context of this issue).
Crash soon after starting the upload (immediately, after 1 second, after 2 seconds, ...).
Having a heavy file among the files being uploaded may keep the upload going, because finishing the upload will probably delete the resumable metadata.
Restoring from IndexedDB and the Service Worker may calm down some of the issues.
Deleting the IndexedDB database, stopping the running Service Worker, and reloading the browser can reproduce the "browser crashed" state.
Resume the upload:
uppy.emit('restore-confirmed');
or
press the resume button of the Dashboard (I don't use it, but it presumably behaves the same).
Expected behavior
Files whose upload already completed should not be affected.
Actual behavior
1. Files that have the completion flag set restart uploading(!)
This is because aws-s3-multipart does not consider the completion flag.
A very quick fix is the following patch file: @uppy aws-s3-multipart 3.11.1.patch
```diff
diff --git a/node_modules/@uppy/aws-s3-multipart/lib/index.js b/node_modules/@uppy/aws-s3-multipart/lib/index.js
index 027ee52..a12cd4f 100644
--- a/node_modules/@uppy/aws-s3-multipart/lib/index.js
+++ b/node_modules/@uppy/aws-s3-multipart/lib/index.js
@@ -177,6 +177,9 @@ export default class AwsS3Multipart extends BasePlugin {
       })();
       return uploadPromise;
     }
+    if (file.progress.uploadComplete) {
+      return Promise.resolve(`File ${file.id} was already uploaded`);
+    }
     return _classPrivateFieldLooseBase(this, _uploadLocalFile)[_uploadLocalFile](file);
   });
   const upload = await Promise.all(promises);
diff --git a/node_modules/@uppy/aws-s3-multipart/src/index.ts b/node_modules/@uppy/aws-s3-multipart/src/index.ts
index 2595d1b..d583114 100644
--- a/node_modules/@uppy/aws-s3-multipart/src/index.ts
+++ b/node_modules/@uppy/aws-s3-multipart/src/index.ts
@@ -958,6 +958,9 @@ export default class AwsS3Multipart<
       return uploadPromise
     }
+    if (file.progress.uploadComplete) {
+      return Promise.resolve(`File ${file.id} was already uploaded`)
+    }
     return this.#uploadLocalFile(file)
   })
```
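The guard the patch adds can also be expressed as a tiny standalone function, which makes the intent easier to see. A minimal sketch (`filesToUpload` is a hypothetical helper, not an Uppy API; it only assumes the `file.progress.uploadComplete` flag that Uppy maintains on file objects):

```javascript
// Hypothetical helper (not an Uppy API): filter a set of restored
// files down to the ones that still need uploading. A file whose
// progress already carries uploadComplete is skipped, mirroring the
// guard added by the patch above.
function filesToUpload(files) {
  return files.filter((file) => !file.progress?.uploadComplete);
}

// Example: after a restore, only the unfinished file would be re-sent.
const restored = [
  { id: 'a.bin', progress: { uploadComplete: true } },
  { id: 'b.bin', progress: { uploadComplete: false } },
];
const pending = filesToUpload(restored);
// pending contains only the file with id 'b.bin'
```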
2. Completion flag is false
There is a case where the upload to S3 has finished (the S3 complete endpoint was called) but the uploaded flag was not updated to true.
This case is (maybe) caused by two reasons:
2-1. Golden Retriever does not save the last metadata before the crash.
Golden Retriever has to save metadata whenever the file state changes.
But looking at the code (node_modules/@uppy/golden-retriever/src/index.ts:86-90):
It saves metadata with a throttle; a crash means the throttle never flushes its last invocation.
Hint: perhaps calling uppy.getPlugin('GoldenRetriever').saveFilesStateToLocalStorage.flush() on the beforeunload event (or similar) would save the last state when the browser is reloaded. But a "real browser crash" does not fire that event anyway.
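To see why the throttle loses the final save, here is a minimal trailing-edge throttle sketch. This is an illustration only, not lodash's implementation or Golden Retriever's code: the newest call sits behind a timer, so a crash before the timer fires discards it, while flush() runs it immediately.

```javascript
// Minimal trailing-edge throttle sketch (illustrative only). Calls are
// coalesced: only the newest arguments survive, and they only run when
// the timer fires -- so a crash before the timer loses the last state.
function throttle(fn, wait) {
  let timer = null;
  let pendingArgs = null;
  function throttled(...args) {
    pendingArgs = args;
    if (timer === null) {
      timer = setTimeout(() => {
        timer = null;
        const latest = pendingArgs;
        pendingArgs = null;
        fn(...latest);
      }, wait);
    }
  }
  // flush() runs the pending call right away -- this is the hook a
  // beforeunload handler would use to persist the newest state.
  throttled.flush = () => {
    if (timer !== null) {
      clearTimeout(timer);
      timer = null;
      const latest = pendingArgs;
      pendingArgs = null;
      fn(...latest);
    }
  };
  return throttled;
}

// Simulated crash: two quick state changes, then the page dies before
// the timer fires -- nothing was saved. With flush(), the last one is.
const saved = [];
const saveState = throttle((state) => saved.push(state), 500);
saveState('one file added');
saveState('upload complete');
// saved is still [] here; a crash at this point loses 'upload complete'.
saveState.flush();
// saved is now ['upload complete'].
```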
2-2. The S3 complete endpoint was called but the browser did not receive the response
In this case the S3 multipart completion succeeded, but the metadata is never updated to mark the upload as finished.
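One way to tolerate the lost response, sketched below as a hypothetical recovery strategy (not something Uppy does today), is to probe the backend during restore: if the object already exists under the stored S3 key, only the flag is stale, and the file can be marked done instead of re-uploaded. The `reconcileRestoredFile` helper is made up for illustration; the existence check is assumed to be done beforehand by the application backend, e.g. via a HEAD request.

```javascript
// Hypothetical reconcile step (not an Uppy API): decide what to do with
// a restored file when its completion flag cannot be trusted.
// `keyExistsOnS3` is assumed to come from the application backend,
// e.g. the result of a HEAD request against the stored S3 key.
function reconcileRestoredFile(file, keyExistsOnS3) {
  if (file.progress.uploadComplete) return 'already-done'; // flag is set
  if (keyExistsOnS3) return 'mark-done'; // complete ran, response was lost
  return 'resume'; // genuinely unfinished upload
}

// A file whose complete call succeeded just before the crash: the flag
// is stale (false) but the object is on S3, so it must not re-upload.
const verdict = reconcileRestoredFile(
  { id: 'finished.bin', progress: { uploadComplete: false } },
  true,
);
// verdict === 'mark-done'
```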
3. Reloading the browser aborts the upload
I found a few cases where reloading the browser aborts the upload.
It calls DELETE /s3/multipart/:upload_id, after which that upload ID is no longer available.
But the metadata still holds that upload ID and S3 key.
This causes resuming the upload to fail with an invalid upload ID.
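A defensive resume could probe the stored upload ID before trusting restored metadata. This is again a hypothetical sketch, not current Uppy behavior: `listParts` stands in for a ListParts request against S3 (or Companion), and if the reload-time abort already deleted the upload, the probe rejects and the stale metadata is discarded in favor of a fresh upload instead of failing with an invalid upload ID.

```javascript
// Hypothetical guard (not current Uppy behavior): validate restored
// multipart metadata before resuming. `listParts` stands in for a signed
// ListParts request. If DELETE /s3/multipart/:upload_id already ran on
// reload, the probe rejects and we fall back to a fresh upload.
async function chooseUploadPlan(meta, listParts) {
  try {
    const parts = await listParts(meta.uploadId, meta.key);
    return { mode: 'resume', parts }; // metadata still valid
  } catch (err) {
    return { mode: 'restart', parts: [] }; // e.g. NoSuchUpload
  }
}

// Fake probe for an upload that the reload already aborted:
const abortedProbe = async () => { throw new Error('NoSuchUpload'); };
// chooseUploadPlan({ uploadId: 'stale', key: 'a.bin' }, abortedProbe)
//   resolves to { mode: 'restart', parts: [] }.
```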