Beginning with AWS CLI v2.23.0, the S3 client introduced enhanced default integrity protections by automatically including a CRC64-NVME checksum on all Put requests and validating checksums on Get responses. These changes align with AWS's broader initiative to strengthen end-to-end data integrity within Amazon S3.
If you encounter issues uploading an object to Zadara's Object Storage using AWS CLI v2.23 or later, or using a recent AWS S3 SDK, you will need to opt out of this behavior.
Zadara is actively working on adding support for the CRC64-NVME checksum algorithm. A formal announcement will be shared once the feature reaches general availability (GA).
Failure symptoms (AWS CLI):
Upon uploading an object, the following error is returned:
$ aws s3 cp --profile $PROFILE --endpoint $ENDPOINT test_object s3://data-source/
upload failed: ./test_object to s3://data-source/test_object An error occurred (InvalidRequest) when calling the PutObject operation: Invalid Request.
Workaround (AWS CLI):
The following environment variable should be set:
AWS_REQUEST_CHECKSUM_CALCULATION=when_required
With this setting in place, the CLI calculates a checksum only when the operation requires one, and object uploads will complete as expected.
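For example, the variable can be exported in the current shell session before retrying the upload (reusing the placeholder $PROFILE, $ENDPOINT, and test_object values from the example above):
$ export AWS_REQUEST_CHECKSUM_CALCULATION=when_required
$ aws s3 cp --profile $PROFILE --endpoint $ENDPOINT test_object s3://data-source/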
Workaround (AWS SDK - Boto3 example):
Option 1:
Explicitly setting the environment variable not only helps the AWS CLI but also ensures that boto3 uploads objects without the default checksum calculation. For example:
os.environ["AWS_REQUEST_CHECKSUM_CALCULATION"] = "when_required"
conn = boto3.client( 's3', aws_access_key_id=access_key,
aws_secret_access_key=secret_key,
endpoint_url=endpoint,
region_name='us-east-1',
config=Config(signature_version='s3v4') )
Option 2:
Alternatively, the checksum algorithm can be specified explicitly on each request from the client side. Since SHA1, SHA256, CRC32, and CRC32C are supported by Zadara's Object Storage, any of these can be passed as the algorithm. For example:
conn.put_object( Bucket="bucket",
Key="obj", Body=b"data",
ChecksumAlgorithm="SHA256" # Use a supported checksum algorithm )