How to Resolve the "Part number must be an integer between 1 and 10000, inclusive" Error When Changing Storage Class for Large Files in S3
- Emily
- May 28
- 2 min read
When you try to change the storage class of a large file in Amazon S3 through the AWS Management Console, you may see the error: "Part number must be an integer between 1 and 10000, inclusive." The console performs the change as a multipart copy, and S3 limits any multipart operation to 10,000 parts, so the console can only handle objects up to a certain size. For larger objects, you can use lifecycle rules or the AWS CLI instead.
Problem Overview
You have large files already uploaded to an S3 bucket. When you try to change the storage class by selecting Edit Storage Class from the Actions menu in the AWS Management Console, you receive the error: "Part number must be an integer between 1 and 10000, inclusive."

Error Message:
Part number must be an integer between 1 and 10000, inclusive.
How to Address This Issue
As of May 9, 2024, the Amazon S3 console allows storage class changes only for objects up to 160 GB. For larger objects, you must use one of the alternative methods below: S3 Lifecycle rules or a copy via the AWS CLI.
Solutions
There are two primary methods to resolve this issue:
1. Use Lifecycle Rules for Transitioning
Set up appropriate lifecycle rules for the bucket containing the objects whose storage class you want to change.
If the bucket also contains objects whose storage class should not change, scope the rule more precisely with a prefix or tag filter.
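As a sketch, assuming a bucket named my-bucket and a prefix large-files/ (both placeholders, not from the original post), a lifecycle rule that transitions matching objects to STANDARD_IA could be applied like this:

```shell
# Hypothetical bucket name and prefix -- replace with your own.
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "transition-large-files-to-ia",
      "Status": "Enabled",
      "Filter": { "Prefix": "large-files/" },
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" }
      ]
    }
  ]
}
EOF

# Apply the rule to the bucket (requires AWS credentials).
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-bucket \
  --lifecycle-configuration file://lifecycle.json
```

Note that S3 only transitions objects to STANDARD_IA once they are at least 30 days old, hence Days: 30 in the rule.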
2. Set a Larger Chunk Size and Execute via AWS CLI
You can also change the storage class by specifying a larger chunk size and executing the command through the AWS CLI.
Increase the chunk size using the aws configure set command; valid part sizes range from 5 MB to 5 GB. Because a multipart copy is limited to 10,000 parts, the chunk size determines the largest object the copy can handle.
$ aws configure set default.s3.multipart_chunksize <size>
Specify the same bucket and key as both the source and the destination, and pass the target storage class to the aws s3 cp command:
$ aws s3 cp s3://bucket/key s3://bucket/key --storage-class <storage-class>
For example, after executing the following commands in your environment:
$ aws configure set default.s3.multipart_chunksize 5GB
$ aws s3 cp s3://XXXXXX/test s3://XXXXXX/test --storage-class STANDARD_IA
When the copy completes, the CLI prints a confirmation such as:
copy: s3://XXXXXX/test to s3://XXXXXX/test
indicating that the storage class change succeeded.
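To confirm the change took effect, you can inspect the object's metadata (the bucket and key here are the placeholders from the example above):

```shell
# head-object reports StorageClass for any class other than STANDARD
aws s3api head-object --bucket XXXXXX --key test --query StorageClass
```

If the output shows "STANDARD_IA", the transition succeeded; for objects in the STANDARD class, the StorageClass field is omitted from the response.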

Summary
By following these steps, you can change the storage class of large files in Amazon S3 without encountering the part number error.