The Ruby s3sync tool (not the similar Python option) has proven extremely valuable over the years. However, the emergence of an official AWS CLI and web-based console, a couple of minor glitches in our installed version of s3sync, and a halt in development after the original developer stopped work all led to the decision to switch. Unfortunately, the AWS CLI had trouble downloading files that were uploaded by s3sync because of how s3sync records directories.
Version note: I currently have s3sync 1.2.5 installed.
Essentially, each S3 folder should have its metadata stored in a file named "dirname_$folder$". This is a zero-byte file that simply provides a place for any metadata to be stored. If no such file is created, then effectively no metadata is stored for that directory, which should still be acceptable.
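For concreteness, the marker key under this convention can be derived mechanically from the directory's key (the directory name below is just an example):

```shell
# Derive the "_$folder$" marker key for a directory (illustrative name):
dir="backup/photos"
marker="${dir}_\$folder\$"   # backslashes keep the dollar signs literal
echo "$marker"               # prints: backup/photos_$folder$
```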
On the other hand, s3sync uses a file named "dirname" that contains a UUID. The metadata is still stored, from what I can tell, but the naming convention has the unfortunate side effect of making downloads with a different program (like the AWS CLI) difficult, because there is now an object whose key is exactly the same as a directory's path. A file and a directory with the same path cannot both exist on your filesystem. The result (at least using the AWS CLI) is that the metadata file is downloaded first, and subsequent downloads fail because the directory was not, and cannot be, created.
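The failure is easy to reproduce locally, without S3 at all (the path below is illustrative): once a plain file occupies a path, a directory cannot be created there.

```shell
# Simulate the collision in a scratch directory:
tmp=$(mktemp -d) && cd "$tmp"
touch photos          # the s3sync metadata object is saved as a plain file
mkdir photos 2>/dev/null || echo "cannot create directory 'photos'"
# prints: cannot create directory 'photos'
```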
The key point is that the metadata items are effectively optional under both schemes. Unless you have a very specific use case in mind, simply deleting the metadata items from S3 is sufficient to allow an AWS CLI sync from your bucket to work correctly.
The Solution: A Dirty BASH Script

This approach makes several assumptions:
- You have already installed the AWS CLI and run `aws configure`.
- You know your bucket name and have appropriate access credentials.
- You do not use periods in your directory names. If you do, you will need to modify the script or manually handle those cases.
- You do not store files without file extensions. If you do, you have reason to believe that those files will never be exactly 38 bytes.
- You are proficient enough in BASH to understand what the script is doing and modify it accordingly. At minimum, you will need to change it to use your bucket name. If you are proficient, you will likely notice inefficiencies -- it really is a quick script.
- You understand that this script (without modifications) is written to automatically delete items from S3, which cannot be undone.
- USE AT YOUR OWN RISK! THIS AUTOMATICALLY DELETES FILES FROM S3!!!
The script below works as follows, one pipeline stage per line:

- Line 1: List all objects in the bucket.
- Line 2: Keep only the rows whose objects are exactly 38 bytes.
- Line 3: Extract the object key and discard any key with a dot after the last slash (i.e., anything that is likely a regular file).
- Line 4: Remove each of the remaining objects from S3.
```shell
aws s3api list-objects --bucket=BUCKETNAME --max-items=100000000 --output=text | \
  egrep "CONTENTS" | sed 's/\t/~/g' | egrep "Z~38~" | \
  cut --delimiter=~ --fields=3 | egrep -v '\.[^/]*$' | \
  while read obj; do echo "$obj"; aws s3 rm "s3://BUCKETNAME/$obj"; done
```
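To preview what the script would select without touching AWS, the filter stages can be replayed on canned `list-objects --output=text` rows. The keys, sizes, and ETags below are fabricated, and the column layout assumed here is tab-separated CONTENTS, ETag, key, timestamp, size, storage class:

```shell
# Fabricated sample of `aws s3api list-objects --output=text` rows:
printf 'CONTENTS\t"e1"\tbackup\t2014-01-01T00:00:00.000Z\t38\tSTANDARD\n'               >  sample.txt
printf 'CONTENTS\t"e2"\tbackup/report.pdf\t2014-01-01T00:00:00.000Z\t1024\tSTANDARD\n'  >> sample.txt
printf 'CONTENTS\t"e3"\tbackup/v1.2\t2014-01-01T00:00:00.000Z\t38\tSTANDARD\n'          >> sample.txt

# Same stages as the real script, minus the `aws s3 rm` call:
egrep "CONTENTS" sample.txt | sed 's/\t/~/g' | egrep "Z~38~" | \
  cut --delimiter=~ --fields=3 | egrep -v '\.[^/]*$'
# prints: backup
```

Note that "backup/report.pdf" is dropped by the size filter, while "backup/v1.2" slips past deletion because of the dot after its last slash -- exactly the period caveat in the assumptions above.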