Rename S3 Folders Swiftly with Python and Boto SDK

Published: 26 July 2019 - 3 min. read


Renaming a folder on a traditional file system is a piece of cake, but what if that file system isn’t really a file system at all? In that case, it gets a little trickier! Amazon’s S3 service doesn’t have folders; it consists of objects with key values. Even so, we still need to perform typical file-system-like actions on those objects, such as renaming “folders”.

Renaming an S3 “folder” isn’t possible, not even in the S3 management console, but we can achieve the same result with a workaround: create a new “folder” in S3, copy all of the files from the source “folder” into it, and once everything is copied, remove the source “folder”.

S3 Buckets Containing Files to Rename S3 Folder Objects

To do this, we’ll use Python and the boto3 module. If you’re working with S3 from Python and not using boto3, you’re missing out; it makes S3 much easier to work with.

Prerequisites

For the demonstration to work, you’ll need to meet a few prerequisites ahead of time:

  • macOS/Linux
  • Python 3+
  • The boto3 module (pip install boto3 to get it)
  • An Amazon S3 Bucket
  • An AWS IAM user access key and secret access key with access to S3
  • An existing “folder” with “files” inside in your S3 bucket (if you don’t have one, the sketch after this list seeds a test folder)

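If you don’t already have a test “folder”, here’s a minimal sketch to seed one. The bucket name, prefix, and file names are hypothetical, and it assumes your credentials are already configured (for example, in ~/.aws/credentials):

import boto3

# Hypothetical names; substitute your own bucket and prefix
s3 = boto3.resource('s3')
bucket = s3.Bucket('my-test-bucket')

# S3 has no real folders; a zero-byte object whose key ends in a
# forward slash is what the console displays as a folder
bucket.put_object(Key='old-folder/')

# Drop a few small files under the prefix so there's something to rename
for name in ('a.txt', 'b.txt', 'c.txt'):
    bucket.put_object(Key='old-folder/' + name, Body=b'test data')
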
Rename S3 Folder Key with Boto

To rename our S3 folder, we’ll need to import the boto3 module, and I’ve chosen to assign the values I’ll be working with to variables.

import boto3

awsAccessKey = ''        # IAM user access key ID (keep out of source control)
awsSecretAccessKey = ''  # IAM user secret access key
s3BucketName = ''        # bucket containing the "folder" to rename
oldFolderKey = ''        # source "folder" prefix, e.g. 'old-folder'
newFolderKey = ''        # destination "folder" prefix, e.g. 'new-folder'

Once I’ve done that, I’ll need to authenticate to S3 by providing my access key ID and secret key for the IAM user I’ll be using. In this case, I’ve chosen to use a boto3 session. I’ll be using a boto3 resource to work with S3.

# Authenticate with the IAM user's keys and get an S3 resource to work with
session = boto3.Session(aws_access_key_id=awsAccessKey,
                        aws_secret_access_key=awsSecretAccessKey)
s3 = session.resource('s3')
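
Hard-coding keys is fine for a quick demo, but boto3 can also discover credentials on its own; for example, it can read a named profile from ~/.aws/credentials (the profile name below is hypothetical):

# Equivalent session built from a profile in ~/.aws/credentials
session = boto3.Session(profile_name='ata-demo')
s3 = session.resource('s3')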

Once I’ve done that, I then need to find all of the object keys matching my key prefix. You can see below that I’m using a Python for loop to read the objects in my S3 bucket, with the optional filter action narrowing them down to only the “folder” I want to rename.

bucket = s3.Bucket(s3BucketName)
for obj in bucket.objects.filter(Prefix=oldFolderKey):
    print(obj.key)  # inspect the matching keys for now; the real work comes next

Once I’ve started the for loop iterating over the “folder” key and all of the “file” keys inside of it, I need to exclude the “folder” key itself since I won’t be copying it; I just need the file keys. I exclude it with an if statement that matches only key values that don’t end with a forward slash.

Inside the block that now only sees file key values, I assign the file name and destination key to variables to make them easier to reference.

for obj in bucket.objects.filter(Prefix=oldFolderKey):
    srcKey = obj.key
    if not srcKey.endswith('/'):
        fileName = srcKey.split('/')[-1]
        destFileKey = newFolderKey + '/' + fileName

With all of that set up, I can finally do the actual copy using the copy_from action. You can see below that I’m creating an S3 object from the bucket name and destination file key, then passing the source bucket and key (as a single bucket/key string) to the copy_from action.

for obj in bucket.objects.filter(Prefix=oldFolderKey):
    srcKey = obj.key
    if not srcKey.endswith('/'):
        fileName = srcKey.split('/')[-1]
        destFileKey = newFolderKey + '/' + fileName
        copySource = s3BucketName + '/' + srcKey
        # Copy each file to the same name under the new "folder" key
        s3.Object(s3BucketName, destFileKey).copy_from(CopySource=copySource)
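
One caveat: copy_from performs a single-operation copy, which S3 caps at 5 GB per object. If your “folder” might hold larger files, boto3’s managed copy action handles the multipart transfer for you; swapping it in would look something like this:

# Managed copy; boto3 splits large objects into multipart uploads automatically
copySource = {'Bucket': s3BucketName, 'Key': srcKey}
s3.Object(s3BucketName, destFileKey).copy(copySource)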

Once the loop has copied all of the files to the new key, I then need to use the delete action to clean up all of the source objects, including the “folder” key itself; that’s why the delete sits inside the loop but outside of the if condition.

for obj in bucket.objects.filter(Prefix=oldFolderKey):
    srcKey = obj.key
    if not srcKey.endswith('/'):
        fileName = srcKey.split('/')[-1]
        destFileKey = newFolderKey + '/' + fileName
        copySource = s3BucketName + '/' + srcKey
        s3.Object(s3BucketName, destFileKey).copy_from(CopySource=copySource)
    # Delete every source object, including the "folder" key itself
    s3.Object(s3BucketName, srcKey).delete()
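
As a side note, deleting objects one at a time costs one request per object. If the source “folder” holds many files, the collection’s batch delete action removes up to 1,000 keys per request; run after the copy loop finishes, it would look like this:

# Remove everything left under the old prefix in batches
bucket.objects.filter(Prefix=oldFolderKey).delete()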

Summary

At this point, we’re done! You should now see all of the files that were previously under the source key sitting under the destination key, with no sign of the source key!
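
If you’d like to verify the result from Python rather than the console, a quick listing over both prefixes should show keys only under the new one:

# The first list should show the copied files; the second should be empty
print([obj.key for obj in bucket.objects.filter(Prefix=newFolderKey)])
print([obj.key for obj in bucket.objects.filter(Prefix=oldFolderKey)])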
