Amazon AWS S3 browser-based upload using POST

Found a solution: I had to explicitly configure the S3 client to use Amazon's newer Signature Version 4. The error occurs because the client defaults to an older signature version, causing the mismatch. Bit of a facepalm: at the time this wasn't documented in the boto3 docs, although folks at Amazon say it should be soon.

The method is simpler now, since generate_presigned_post returns exactly the fields the form requires:

import boto3
from botocore.client import Config

def s3_upload_creds():
    BUCKET = 'mybucket'
    REGION = 'us-west-1'
    # Signature Version 4 must be requested explicitly; the default
    # signature version is what causes the mismatch error described above.
    s3 = boto3.client('s3', region_name=REGION, config=Config(signature_version='s3v4'))
    # S3 substitutes the uploaded file's name for the ${filename} placeholder.
    key = '${filename}'
    return s3.generate_presigned_post(Bucket=BUCKET, Key=key)
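For completeness, here's one way the creds might reach the template below. This is just a sketch assuming Flask/Jinja2 (the original post doesn't name a framework), and the route and template file names are made up:

from flask import Flask, render_template

app = Flask(__name__)

@app.route('/upload')
def upload_page():
    # Hand the dict returned by generate_presigned_post() straight to the template.
    return render_template('upload.html', creds=s3_upload_creds())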

Which means the form can be easily generated:

<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
  </head>
  <body>
    <!-- creds is the dict returned by generate_presigned_post();
         creds.url is the POST endpoint for the bucket -->
    <form action="{{ creds.url }}" method="post" enctype="multipart/form-data">
      {% for key, value in creds.fields.items() %}
        <input type="hidden" name="{{ key }}" value="{{ value }}" />
      {% endfor %}
      File:
      <input type="file" name="file" /> <br />
      <input type="submit" name="submit" value="Upload to Amazon S3" />
    </form>
  </body>
</html>

Cheers


Been a few years since the last response, but I've been stuck on this for the last day or two, so I'll share my experience for anyone it may help.

I had been getting the error "403: The AWS Access Key Id you provided does not exist in our records" when trying to upload to an S3 bucket via my presigned URL.

I was able to successfully generate a presigned URL similar to the above, using the server-side code:

signed_url_dict = self.s3_client.generate_presigned_post(
    self.bucket_name,
    object_name,
    ExpiresIn=300,
)

This returned a dictionary with the structure:

{
    "url": "https://___",
    "fields": {
        "key": "___",
        "AWSAccessKeyId": "___",
        "x-amz-security-token": "___",
        "policy": "___",
        "signature": "___"
    }
}

This led to the part where things are a little different now in 2019 with the browser-side JavaScript: the required form inputs seem to have changed. Instead of setting up the form as the OP did, I had to create my form as seen below:

<form action="https://pipeline-poc-ed.s3.amazonaws.com/" method="post" enctype="multipart/form-data" name="upload_form">
  <!-- Copy ALL of the 'fields' key:values returned by S3Client.generate_presigned_post() -->
  <input type="hidden" name="key" value="___" />
  <input type="hidden" name="AWSAccessKeyId" value="___" />
  <input type="hidden" name="policy" value="___" />
  <input type="hidden" name="signature" value="___" />
  <input type="hidden" name="x-amz-security-token" value="___" />
  File:
  <input type="file" name="file" /> <br />
  <input type="submit" name="submit" value="Upload to Amazon S3" />
</form>

My error was that I followed an example in the boto3 1.9.138 docs and left out "x-amz-security-token" on the form, which turned out to be quite necessary. A thoughtless oversight on my part, but hopefully this will help someone else.
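One way to avoid this class of mistake entirely is to forward every returned field programmatically instead of hardcoding the inputs. A minimal sketch using the requests library (not part of the original post; the file name and client setup are illustrative):

import requests

signed = s3_client.generate_presigned_post(bucket_name, object_name, ExpiresIn=300)

with open('example.bin', 'rb') as f:
    # data= forwards ALL of the signed fields verbatim; requests places
    # them before the file part in the multipart body, as S3 requires.
    response = requests.post(
        signed['url'],
        data=signed['fields'],
        files={'file': ('example.bin', f)},
    )
print(response.status_code)  # S3 returns 204 on success by default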

EDIT: My results above were based on an N. Virginia Lambda function. When I ran generate_presigned_post(...) in Ohio (the region containing my bucket), I got results similar to the OP's:

{
  "url": "https://__",
  "fields": {
    "key": "___",
    "x-amz-algorithm": "___",
    "x-amz-credential": "___",
    "x-amz-date": "___",
    "x-amz-security-token": "___",
    "policy": "___",
    "x-amz-signature": "___"
  }
}

Perhaps the results of the function are region-specific?
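If so, the difference looks like the signature-version issue from the accepted answer: AWSAccessKeyId/signature are Signature Version 2 fields, while x-amz-algorithm/x-amz-credential/x-amz-signature are Version 4 fields, and newer regions like Ohio only accept Version 4. Pinning the client to s3v4, as above, should produce the V4 field set regardless of region. A sketch, assuming the same client setup (my guess, not something I've verified in both regions):

import boto3
from botocore.client import Config

# Force Signature Version 4 so generate_presigned_post() returns the
# x-amz-* field set in every region (us-east-2 is Ohio, for example).
s3_client = boto3.client('s3', region_name='us-east-2', config=Config(signature_version='s3v4'))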