Backstory
I have a bunch of small PDFs (18 kb each).
`filesUpload` worked until I realized that getting a `429` (rate limit) error is quite common.
Researched and found the batch endpoints.
I wanted to use `/upload_session/start_batch`, `/upload_session/append_batch` & `/upload_session/finish_batch` to upload all files in a session.
For stability I used the JS SDK... but there is no method for `/upload_session/append_batch`.
I created my own method and called the endpoint directly... that worked.
But then I got errors in the `finish_batch` step.
Then I thought: if the file size of all PDFs is so small, maybe I can upload them directly in the `start` call, without any `append` and without a batch session.
I thought I could use the single `session_id` returned by the `filesUploadSessionStart` method, then call `filesUploadSessionFinishBatchV2` and split the uploaded data back into the original PDFs.
```typescript
const allContent = concatArrayBuffers(...files.map(({ contents }) => contents));
const startResponse = await dbx.filesUploadSessionStart({
  close: true,
  contents: allContent,
});
const batchData = files.reduce(
  (acc, cur) => {
    acc.entries.push({
      cursor: {
        session_id: startResponse.result.session_id,
        offset: acc.offset,
      },
      commit: {
        autorename: true,
        mode: "add",
        mute: false,
        path: cur.path,
      },
    });
    acc.offset += cur.contents.byteLength;
    return acc;
  },
  {
    offset: 0,
    entries: [],
  }
).entries;
await dbx.filesUploadSessionFinishBatchV2({
  entries: batchData.map(({ commit, cursor }) => ({ commit, cursor })),
});
```
This is the code.
Questions
I forgot to mention that I received an array where each entry was the following error:
```js
{
  '.tag': 'lookup_failed',
  lookup_failed: { '.tag': 'incorrect_offset', correct_offset: 257523 }
}
```
Hi MatthiD
You can definitely just start an upload session and then finish it, without needing to use `filesUploadSessionAppendV2`.
Looking at the code snippet provided, it looks like you are starting an upload session by calling `filesUploadSessionStart()`, but are attempting to finish that same session by calling `filesUploadSessionFinishBatchV2()`. While this technically does work (`filesUploadSessionFinishBatchV2()` will finish an upload session started with `filesUploadSessionStart()`), you may want to use one of the matching combinations instead: `filesUploadSessionStart()` with `filesUploadSessionFinish()` for a single file, or `filesUploadSessionStartBatch()` with `filesUploadSessionFinishBatchV2()` for multiple files.
Keep in mind that `filesUploadSessionStart()` is meant to upload a single file, whereas `filesUploadSessionStartBatch()` starts a batch of upload sessions.
Additionally, since there is no functionality to separate the files afterwards, please ensure you are not combining all of the file data of multiple files into one large buffer; use one upload session per file that needs to be uploaded.
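As a rough, untested sketch of that "one upload session per file, no append" approach with the JS SDK (the helper name and the shape of the `files` argument are just placeholders, not SDK names):

```typescript
import { Dropbox } from "dropbox";

// Sketch only: one upload session per small file (each file's data fits
// into the start call, so no append is needed), committed in one batch.
async function uploadSmallFiles(
  dbx: Dropbox,
  files: { path: string; contents: ArrayBuffer }[],
) {
  const entries = await Promise.all(
    files.map(async (file) => {
      const start = await dbx.filesUploadSessionStart({
        close: true, // no further appends for this session
        contents: file.contents,
      });
      return {
        cursor: {
          session_id: start.result.session_id,
          // offset = number of bytes already uploaded to this session
          offset: file.contents.byteLength,
        },
        commit: { path: file.path, autorename: true, mute: false },
      };
    }),
  );

  // Commit all sessions in a single call.
  return dbx.filesUploadSessionFinishBatchV2({ entries });
}
```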
To answer your additional questions:
Lastly, the error you are receiving means:
The specified offset was incorrect. See the value for the correct offset. This error may occur when a previous request was received and processed successfully but the client did not receive the response, e.g. due to a network error.
The "correct_offset" value from the response you provided means that the Dropbox API had only received 257523 bytes for the upload session so far. The error "incorrect_offset" was thrown because the Dropbox API received a value other than 257523 in your request.
Feel free to take a look at this example for an idea of how an upload session can be handled.
For best practices when uploading files via the Dropbox API, please refer to our Performance Guide.
Hey DB-Des,
thank you so much for the detailed answer.
I had already seen the examples and explanations, but could not adapt them to my specific use case.
In the meantime, I solved the issue.
Two things I recognized/learned:
For each file in a batch, in parallel, call /files/upload_session/append_v2 as needed to upload the full contents of the file over multiple requests.
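As I read it, that per-file pattern would look roughly like the following untested sketch (I ended up using `append_batch` instead, see my code below; the helper name and `files` shape are my own placeholders):

```typescript
import { Dropbox } from "dropbox";

// Untested sketch of the pattern from the Performance Guide:
// start a batch of sessions, append each file's data in parallel,
// then commit everything with one finish_batch call.
async function uploadViaBatchSessions(
  dbx: Dropbox,
  files: { path: string; contents: ArrayBuffer }[],
) {
  const start = await dbx.filesUploadSessionStartBatch({
    num_sessions: files.length,
  });
  const sessionIds = start.result.session_ids;

  // Small files fit into a single append per session.
  await Promise.all(
    files.map((file, i) =>
      dbx.filesUploadSessionAppendV2({
        cursor: { session_id: sessionIds[i], offset: 0 },
        close: true, // nothing more will be appended to this session
        contents: file.contents,
      }),
    ),
  );

  return dbx.filesUploadSessionFinishBatchV2({
    entries: files.map((file, i) => ({
      cursor: { session_id: sessionIds[i], offset: file.contents.byteLength },
      commit: { path: file.path, autorename: true, mute: false },
    })),
  });
}
```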
I solved everything for my case, but wanted to drop these thoughts for people having the same problem.
Additionally, here is the code that worked for me:
```typescript
async function filesUploadSessionAppendBatch(
  content: ArrayBufferLike,
  entries: AppendBatchEntry[],
) {
  // this gets a token or a new one if it is expired
  const accessToken = await dropboxMaker.getToken();
  const response = await fetch(
    "https://content.dropboxapi.com/2/files/upload_session/append_batch",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${accessToken}`,
        // getSafeUnicode (a small helper defined elsewhere) escapes non-ASCII
        // characters so the JSON is safe to send in an HTTP header
        "Dropbox-API-Arg": JSON.stringify({
          entries,
        }).replace(/[\u007f-\uffff]/g, getSafeUnicode),
        "Content-Type": "application/octet-stream",
      },
      body: content,
    },
  );
  return response.json();
}
```
```typescript
export async function uploadFiles(files: UploadFileData[]) {
  const num_sessions = files.length;
  try {
    if (num_sessions < 1) {
      return {};
    }
    const dbx = await dropboxMaker.client();
    if (num_sessions === 1) {
      // upload one file
      await dbx.filesUpload({
        ...files[0],
      });
    } else {
      // upload multiple files
      // merges all files in one ArrayBuffer (to make one single request....will still be far away from the upload limit)
      const allContent = concatArrayBuffers(
        ...files.map(({ contents }) => contents),
      );
      const startResponse = await dbx.filesUploadSessionStartBatch({
        num_sessions,
      });
      const { session_ids } = startResponse.result;
      const batchData: Array<FullBatchEntry> = files.reduce(
        (
          acc: { offset: number; entries: Array<FullBatchEntry> },
          cur,
          index,
        ) => {
          acc.entries.push({
            close: true,
            cursor: {
              session_id: session_ids[index],
              offset: 0,
              // always start with 0 for a new session. Don't upcount.
            },
            commit: {
              autorename: true,
              /** @ts-expect-error Wrong type used (see CommitInfo) */
              mode: "add",
              mute: false,
              path: cur.path,
            },
            length: cur.contents.byteLength,
          });
          // not needed anymore
          // acc.offset += cur.contents.byteLength;
          return acc;
        },
        {
          offset: 0,
          entries: [],
        },
      ).entries;
      await filesUploadSessionAppendBatch(
        allContent,
        batchData.map(({ cursor, length, close }) => ({
          cursor,
          length,
          close,
        })),
      );
      await dbx.filesUploadSessionFinishBatchV2({
        entries: batchData.map(({ commit, cursor, length }) => ({
          commit,
          cursor: {
            ...cursor,
            // point to the end of the uploaded file part...so length
            offset: length,
          },
        })),
      });
    }
  } catch (error) {
    console.log(error);
  }
}
```
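For completeness, calling it looks roughly like this (paths and buffers are just placeholder examples; `UploadFileData` needs at least `path` and `contents`):

```typescript
// Hypothetical call: file names and buffers are examples only.
await uploadFiles([
  { path: "/invoices/2023-01.pdf", contents: pdfBufferA },
  { path: "/invoices/2023-02.pdf", contents: pdfBufferB },
]);
```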
Thanks again for your help!
MatthiD, I'm glad to hear you were able to find a solution for your use case.
To clarify, there are different types of 429 errors. It is always recommended to log the error message that accompanies an error code to get a better idea of what may be happening with the request.
To help avoid a potential lock contention error (a 429 with `too_many_write_operations`), which might have been the case here, using an upload session is recommended because of the way it handles uploads, as described in the Performance Guide:
Each file upload to Dropbox consists of the following stages: appending the byte contents of the file to an upload buffer on the Dropbox server, obtaining a namespace lock, and then committing those bytes as a file into a target namespace. The /files/upload endpoint does this atomically, whereas upload session decouples these steps.
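As a rough, untested illustration of logging and reacting to a 429 with the JS SDK: the error fields used here (`status`, `headers`, `error`) follow the SDK's `DropboxResponseError`, and the `Retry-After` handling is an assumption, so check against your own logs:

```typescript
// Sketch only: log the 429 body so you can see which error it actually is
// (e.g. "too_many_write_operations" vs. "too_many_requests"), then back off.
async function withRateLimitRetry<T>(call: () => Promise<T>): Promise<T> {
  try {
    return await call();
  } catch (err: any) {
    if (err?.status !== 429) throw err;
    // The error_summary in the body tells you which kind of 429 this is.
    console.log("429 from Dropbox:", JSON.stringify(err.error));
    const retryAfter = Number(err.headers?.get?.("Retry-After") ?? 5);
    await new Promise((resolve) => setTimeout(resolve, retryAfter * 1000));
    return call(); // single retry here; a real implementation would loop with backoff
  }
}

// e.g. await withRateLimitRetry(() => dbx.filesUploadSessionFinishBatchV2({ entries }));
```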
If you need more help you can view your support options (expected response time for an email or ticket is 24 hours), or contact us on X or Facebook.
For more info on available support options for your Dropbox plan, see this article.