Forum Discussion
Waleed Magdy
3 months ago · Explorer | Level 4
Transfer Dropbox Backups using Dropbox APIs
Hello,
I have several backups stored in Dropbox Backup (some exceeding tens of terabytes), and I'd like to use the Dropbox API to automate downloading these backups and then uploading them to an AWS S3 bucket. Here are a few points where I'd appreciate your insights:
1. Dropbox Backup API Access:
- Is it possible to directly access the files stored in Dropbox Backup via the API, and if so, which specific endpoints or procedures should I be using?
- Are there any limitations or considerations (e.g., rate limits, large file handling) that I should be aware of when transferring large amounts of data?
2. Handling Large Files:
- Could you provide guidance or best practices on efficiently downloading large backups (up to 74 TB) using the API?
- Should I be using chunked downloads for these large files, and how does the API handle large data streams?
I would greatly appreciate any documentation, example scripts, or advice that could help me streamline this migration process.
Thank you in advance for your support.
Best regards,
Waleed
- Greg-DB · Dropbox Staff
It is possible to list and download files from Dropbox Backup using the Dropbox API, the same way you would any other files in a Dropbox account. For instance, you can use /2/files/list_folder and /2/files/list_folder/continue to list files/folders, /2/files/download to download specific files, and /2/files/download_zip to download entire folders.
Check out the following guides for more information on how to get started and interact with files and folders using the API:
The Dropbox API does have a general rate limiting system that applies to all account types, but we don't have specific rate numbers documented. Apps should be written to handle rate limit responses automatically. Note that not every response with a 429 or 503 status code indicates explicit rate limiting, but in any case where you get a 429 or 503, the best practice is to retry the request, respecting the Retry-After header if it's given in the response, or using an exponential back-off if not. I recommend referring to the error documentation and Error Handling Guide for more information. It's worth noting in this case, though, that the system operates on the rate of calls (the number of calls per time window), not on file size.
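For illustration, a retry wrapper along those lines might look like this (a sketch, not official Dropbox sample code; `retry_delay`, the jitter factor, and the base/cap values are arbitrary choices of mine):

```python
import random
import time

import requests

def retry_delay(attempt, retry_after=None, base=1.0, cap=60.0):
    """Delay before retry `attempt` (0-based): honor Retry-After if given,
    otherwise exponential back-off with jitter, capped at `cap` seconds."""
    if retry_after is not None:
        return float(retry_after)
    return min(cap, base * (2 ** attempt)) * (0.5 + random.random() / 2)

def post_with_retries(url, max_attempts=5, **kwargs):
    """POST, retrying on 429/503 as the guidance above suggests."""
    for attempt in range(max_attempts):
        resp = requests.post(url, **kwargs)
        if resp.status_code not in (429, 503):
            return resp
        # Sleep per Retry-After if present, else back off exponentially.
        time.sleep(retry_delay(attempt, resp.headers.get("Retry-After")))
    return resp
```

The delay helper is kept separate from the network call so the back-off behavior can be checked on its own.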
When making content download calls, such as /2/files/download, downloads may take some time for large files/folders. Connections are more likely to fail if held open for a long period of time, so make sure you're catching and handling any errors from your network client. In particular, for these calls, do not attempt to hold a single connection open for longer than one hour. Most Dropbox API content download endpoints (except for /2/files/download_zip) do support Range requests, so if any single download request would take longer than that, you should instead split the download across multiple requests using ranges.
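As a sketch of that splitting approach (not official sample code; the 100 MB chunk size is an arbitrary choice, and the file's total size is assumed known, e.g. from a prior list_folder call):

```python
import json

import requests

def byte_ranges(total_size, chunk_size):
    """Yield inclusive (start, end) byte ranges covering total_size bytes."""
    for start in range(0, total_size, chunk_size):
        yield start, min(start + chunk_size, total_size) - 1

def download_in_chunks(access_token, dropbox_path, total_size, out_path,
                       chunk_size=100 * 1024 * 1024):
    """Download a file via /2/files/download, one Range request per chunk."""
    base_headers = {
        "Authorization": f"Bearer {access_token}",
        # Content endpoints take their arguments in this header.
        "Dropbox-API-Arg": json.dumps({"path": dropbox_path}),
    }
    with open(out_path, "wb") as f:
        for start, end in byte_ranges(total_size, chunk_size):
            headers = dict(base_headers, Range=f"bytes={start}-{end}")
            resp = requests.post(
                "https://content.dropboxapi.com/2/files/download",
                headers=headers,
            )
            resp.raise_for_status()
            f.write(resp.content)
```

Each ranged request is short-lived, so no single connection needs to stay open anywhere near the one-hour mark.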
The best strategy will depend on the specifics of your scenario, e.g., how many files/folders your data is spread across, etc., but in general I suggest trying to use /2/files/download_zip where possible (refer to the documentation for information on count/size limits), and otherwise falling back to /2/files/download (with Range requests as needed).
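For example, a /2/files/download_zip call might look like this (a sketch; this endpoint lives on the content host, takes its arguments in the Dropbox-API-Arg header, and as noted does not support Range requests):

```python
import json

import requests

def dropbox_api_arg(path):
    """Build the Dropbox-API-Arg header value for a content endpoint."""
    return json.dumps({"path": path})

def download_folder_zip(access_token, folder_path, out_path):
    """Download an entire folder as a zip via /2/files/download_zip,
    streaming the response body to disk."""
    resp = requests.post(
        "https://content.dropboxapi.com/2/files/download_zip",
        headers={
            "Authorization": f"Bearer {access_token}",
            "Dropbox-API-Arg": dropbox_api_arg(folder_path),
        },
        stream=True,
    )
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        for chunk in resp.iter_content(chunk_size=1024 * 1024):
            f.write(chunk)
```

Streaming the response avoids buffering a potentially huge zip in memory.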
- Waleed Magdy · Explorer | Level 4
Thanks for your reply. However, I've tried many times and I'm still only getting the folders and files in the root directory of the Dropbox account, not the backups in Dropbox Backup.
This is the script I am trying, if you can take a look:

```python
import requests
import json

# Dropbox API credentials
DROPBOX_ACCESS_TOKEN = "TOKEN"
DROPBOX_REFRESH_TOKEN = "REFRESH_TOKEN"
APP_KEY = "APP_KEY"
APP_SECRET = "APP_SECRET"

# Endpoints
DROPBOX_OAUTH2_TOKEN_URL = "https://api.dropbox.com/oauth2/token"
DROPBOX_LIST_FOLDER_URL = "https://api.dropboxapi.com/2/files/list_folder"
DROPBOX_TEAM_MEMBERS_URL = "https://api.dropboxapi.com/2/team/members/list"

# Refresh the access token using the refresh token
def refresh_dropbox_token():
    data = {
        'grant_type': 'refresh_token',
        'refresh_token': DROPBOX_REFRESH_TOKEN,
        'client_id': APP_KEY,
        'client_secret': APP_SECRET
    }
    response = requests.post(DROPBOX_OAUTH2_TOKEN_URL, data=data)
    if response.status_code == 200:
        tokens = response.json()
        return tokens['access_token']
    else:
        print("Error refreshing token:", response.text)
        return None

# List team members to choose which team member to work with
def list_dropbox_team_members():
    access_token = refresh_dropbox_token()
    if access_token is None:
        print("Failed to refresh Dropbox access token.")
        return None
    headers = {
        'Authorization': f'Bearer {access_token}',
        'Content-Type': 'application/json'
    }
    response = requests.post(DROPBOX_TEAM_MEMBERS_URL, headers=headers,
                             data=json.dumps({}))
    if response.status_code == 200:
        members = response.json()
        team_members = {}
        for member in members['members']:
            team_members[member['profile']['team_member_id']] = member['profile']['email']
            print(f"User ID: {member['profile']['team_member_id']}, Email: {member['profile']['email']}")
        return team_members
    else:
        print("Error listing team members:", response.text)
        return None

# List folders and files in the team member's account
def list_dropbox_backups(selected_user_id):
    access_token = refresh_dropbox_token()
    if access_token is None:
        print("Failed to refresh Dropbox access token.")
        return
    headers = {
        'Authorization': f'Bearer {access_token}',
        'Dropbox-API-Select-User': selected_user_id,
        'Content-Type': 'application/json'
    }
    data = json.dumps({"path": "", "recursive": False})
    response = requests.post(DROPBOX_LIST_FOLDER_URL, headers=headers, data=data)
    if response.status_code == 200:
        files = response.json()
        for entry in files['entries']:
            print(f"Name: {entry['name']}, Type: {entry['.tag']}")
    else:
        print("Error listing files:", response.text)

if __name__ == "__main__":
    team_members = list_dropbox_team_members()
    if team_members:
        selected_email = input("Enter the email of the team member whose backups you want to list: ")
        selected_user_id = None
        for user_id, email in team_members.items():
            if email == selected_email:
                selected_user_id = user_id
                break
        if selected_user_id:
            print(f"Listing backups for user: {selected_email} (User ID: {selected_user_id})")
            list_dropbox_backups(selected_user_id)
        else:
            print(f"No team member found with the email: {selected_email}")
```
Please let me know if you find anything that should be changed. I've also attached a screenshot so we're on the same page about which backup service I'm talking about.
- Greg-DB · Dropbox Staff
Looking at this code, there are a few things to note:
- You're only calling /2/files/list_folder but not /2/files/list_folder/continue. You're not guaranteed to get everything back from just /2/files/list_folder; you need to check the returned has_more value and implement support for /2/files/list_folder/continue as well. Please check the linked documentation for more information.
- You're only checking the direct contents of the root folder ("path": "") and not any of its children ("recursive": False), so you'd only get items immediately in that folder, not anything deeper. You'd need to call back again with the relevant subfolder paths, or otherwise use the recursive mode to get nested entries.
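Putting those two fixes together, the listing function could be reworked along these lines (a sketch, not official sample code; `list_member_entries` is an illustrative name, error handling is kept minimal, and the `post` parameter is injected only so the pagination loop can be exercised without the network):

```python
import json

import requests

LIST_FOLDER_URL = "https://api.dropboxapi.com/2/files/list_folder"
LIST_FOLDER_CONTINUE_URL = LIST_FOLDER_URL + "/continue"

def list_member_entries(access_token, selected_user_id, post=requests.post):
    """Recursively list every entry in the selected member's account,
    following has_more/cursor pagination via list_folder/continue."""
    headers = {
        "Authorization": f"Bearer {access_token}",
        "Dropbox-API-Select-User": selected_user_id,
        "Content-Type": "application/json",
    }
    entries = []
    resp = post(LIST_FOLDER_URL, headers=headers,
                data=json.dumps({"path": "", "recursive": True}))
    resp.raise_for_status()
    body = resp.json()
    entries.extend(body["entries"])
    # Keep calling /continue until the listing is exhausted.
    while body["has_more"]:
        resp = post(LIST_FOLDER_CONTINUE_URL, headers=headers,
                    data=json.dumps({"cursor": body["cursor"]}))
        resp.raise_for_status()
        body = resp.json()
        entries.extend(body["entries"])
    return entries
```

The has_more loop is the key difference from the original script: without it, any listing spread across multiple pages silently comes back incomplete.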