Uploading Images to Amazon S3

Amazon S3 (Simple Storage Service) is a highly scalable object storage service, which makes it an ideal choice for storing images. There are several ways to upload an image to S3.
  1. Upload from AWS console
  2. Use AWS command line
  3. Use SDKs for languages such as Java, PHP, Ruby, or .NET
  4. Use REST APIs
Let's look in detail at how each of these methods works.

Upload from AWS console

Amazon S3 provides a simple web GUI to upload and view images. Log in to https://console.aws.amazon.com/s3 to get started.
  1. Create a bucket by specifying a bucket name and region
  2. Navigate to the bucket and select the Upload option
  3. Select the files and start uploading
Once files are uploaded, you will be able to view the images as well as modify the access permissions.

Using AWS command line

The web GUI excels in ease of use but is not ideal in all situations. The AWS command line provides a faster way to interact with S3. Follow these steps to get the command line working.

Setup credentials

Accessing S3 requires access credentials and there are two options:
  1. Use the access keys of your AWS account: You can get these keys by logging into the AWS account, but this approach is not recommended since it exposes your main account credentials just to grant access to a single service.
  2. Set up AWS Identity and Access Management (IAM): This method is preferred for granting access to specific services.
Here are the steps to set up IAM credentials:
  1. Go to IAM console https://console.aws.amazon.com/iam
  2. Create an IAM group with permission “AmazonS3FullAccess”
  3. Create an IAM user and add to the above group
  4. Open the user's security credentials and choose “Create Access Key”
  5. Download credentials and store keys in a secure location

Install command line tools on Mac

  1. Install the AWS CLI on Mac using the command
    • "pip install --upgrade --user awscli"
  2. Set up the PATH for the CLI
    • export PATH=$PATH:~/Library/Python/2.7/bin
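  3. Verify the installation
    • Run "aws --version"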

Configure the AWS CLI

  1. Run "aws configure" and provide AW access key, and secret key
  2. Test the configuration
    • Run "aws s3 ls"
    • Copy a file by running "aws s3 cp /tmp/test.jpg s3://testbucket"
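To upload a whole folder of images in one go, the same cp command also accepts a --recursive flag, for example "aws s3 cp ~/images s3://testbucket/images --recursive" (the folder and bucket names here are just placeholders).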

Using PHP SDK

AWS SDKs provide a programmatic way to interact with S3 and are ideal when you integrate S3 with an application. The SDK provides two ways to upload files:
  1. Upload in a single operation: This is recommended for small files
  2. Upload in a multi-part operation: For big files, greater efficiency can be achieved by splitting the file into several parts and uploading them in parallel. For multi-part uploads, two types of APIs are provided - a high-level API and a low-level API. The low-level API is recommended if the upload has to be paused and resumed.

Installing PHP SDK

  1. Install composer
    1. curl -sS https://getcomposer.org/installer | php
  2. Run composer to install the latest stable version of the SDK
    1. php composer.phar require aws/aws-sdk-php

Configure credentials

  1. Obtain the credentials using IAM (please see the section under "Using AWS command line") 
  2. Create a file ".aws/credentials" under the home directory
  3. Add the following content
[default]
aws_access_key_id = THISISMYACCESSKEY
aws_secret_access_key = MYSECRETACCESSKEY
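
The SDK picks up the [default] profile from this file automatically. If you keep several profiles in the credentials file, you can also select one explicitly when constructing the client. A minimal sketch, assuming SDK v3 and a hypothetical profile named "image-uploader":

require 'vendor/autoload.php';

use Aws\S3\S3Client;

// Use a named profile from ~/.aws/credentials instead of [default].
// The profile name "image-uploader" is just an example.
$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1',
    'profile' => 'image-uploader',
]);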

Upload in a single operation

require 'vendor/autoload.php';

use Aws\S3\S3Client;

// Create an S3 client in the region that holds the bucket.
$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1',
]);

try {
    // Upload the image in a single putObject call and make it publicly readable.
    $s3->putObject([
        'Bucket' => 'bigbucket',
        'Key'    => 'falling-tree.jpg',
        'Body'   => fopen('/Users/unicorn/images/falling-tree.jpg', 'r'),
        'ACL'    => 'public-read',
    ]);
} catch (Aws\S3\Exception\S3Exception $e) {
    echo "There was an error uploading the file: " . $e->getMessage() . "\n";
}
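
As a quick sanity check after the upload, the client can confirm that the object exists and print its URL. A small sketch, assuming the same $s3 client, bucket, and key as in the example above:

// Assumes the $s3 client, bucket, and key from the previous example.
if ($s3->doesObjectExist('bigbucket', 'falling-tree.jpg')) {
    // getObjectUrl() builds the URL for a given bucket/key pair.
    echo $s3->getObjectUrl('bigbucket', 'falling-tree.jpg') . "\n";
} else {
    echo "Object not found\n";
}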

Multi-part upload

require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\MultipartUploader;
use Aws\Exception\MultipartUploadException;

$bucket  = 'bigbucket';
$keyname = 'falling-tree.jpg';

$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1',
]);

// The high-level MultipartUploader splits the file into parts and uploads them for you.
$uploader = new MultipartUploader($s3, '/Users/unicorn/images/falling-tree.jpg', [
    'bucket' => $bucket,
    'key'    => $keyname,
]);

try {
    $result = $uploader->upload();
    echo "Upload complete: {$result['ObjectURL']}\n";
} catch (MultipartUploadException $e) {
    echo "Upload failed: " . $e->getMessage() . "\n";
}
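
If a multi-part upload fails part-way through, the MultipartUploadException carries the state of the upload, which can be passed back to a new MultipartUploader to resume from the last successful part instead of starting over. A minimal sketch of that pattern, assuming the same $s3 client, use statements, and file as above:

// Assumes the $s3 client and the use statements from the previous example.
$source   = '/Users/unicorn/images/falling-tree.jpg';
$uploader = new MultipartUploader($s3, $source, [
    'bucket' => 'bigbucket',
    'key'    => 'falling-tree.jpg',
]);

// Retry until the upload succeeds, resuming from the saved state each time.
do {
    try {
        $result = $uploader->upload();
    } catch (MultipartUploadException $e) {
        $uploader = new MultipartUploader($s3, $source, [
            'state' => $e->getState(),
        ]);
    }
} while (!isset($result));

echo "Upload complete: {$result['ObjectURL']}\n";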

Using REST API

The REST API supports uploading an object either in a single request or as a set of parts. A multi-part upload has three steps (sketched in code after the list below):

  1. Initiate upload
    • Once the initiation request is sent, Amazon S3 returns a response with an upload ID
  2. Upload individual object parts
    • To upload each part, you need to provide the upload ID and a part number. The part number uniquely identifies a part and its position in the object you are uploading. For each part, Amazon S3 returns an ETag header in its response. The part number and ETag value of each uploaded part need to be included in the final request that completes the upload.
  3. Complete upload
    • The complete multi-part request should include the upload ID and a list of part numbers with their corresponding ETag values. In response, Amazon S3 creates the final object by combining the parts and returns an ETag that uniquely identifies the combined object data. You can also abort a multi-part upload, which releases all the parts uploaded so far.
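
The low-level methods in the PHP SDK map one-to-one onto these REST calls, so they are a convenient way to see the flow end to end. A minimal sketch, assuming SDK v3 and the bucket and file used in the earlier examples:

require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1',
]);

$bucket = 'bigbucket';
$key    = 'falling-tree.jpg';
$source = '/Users/unicorn/images/falling-tree.jpg';

// Step 1: initiate the upload and remember the upload ID.
$init     = $s3->createMultipartUpload(['Bucket' => $bucket, 'Key' => $key]);
$uploadId = $init['UploadId'];

// Step 2: upload the parts (every part except the last must be at least 5 MB),
// collecting the ETag that S3 returns for each part number.
$parts = [];
$file  = fopen($source, 'rb');
$partNumber = 1;
while (!feof($file)) {
    $body = fread($file, 5 * 1024 * 1024);
    if ($body === '' || $body === false) {
        break;
    }
    $partResult = $s3->uploadPart([
        'Bucket'     => $bucket,
        'Key'        => $key,
        'UploadId'   => $uploadId,
        'PartNumber' => $partNumber,
        'Body'       => $body,
    ]);
    $parts[] = ['PartNumber' => $partNumber, 'ETag' => $partResult['ETag']];
    $partNumber++;
}
fclose($file);

// Step 3: complete the upload with the list of part numbers and ETags.
$s3->completeMultipartUpload([
    'Bucket'          => $bucket,
    'Key'             => $key,
    'UploadId'        => $uploadId,
    'MultipartUpload' => ['Parts' => $parts],
]);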

Choosing the endpoint

The REST endpoint can use either a virtual-hosted style or a path style. An example of the virtual-hosted style is http://bigbucket.s3-us-west-2.amazonaws.com/tree.jpg, whereas the path style looks like http://s3-us-west-2.amazonaws.com/bigbucket/tree.jpg.
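
When you go through the PHP SDK rather than calling the REST API directly, the addressing style can be controlled on the client. A small sketch, assuming SDK v3 and a placeholder region:

require 'vendor/autoload.php';

use Aws\S3\S3Client;

// Force path-style addressing (endpoint/bucket/key) instead of the default
// virtual-hosted style (bucket.endpoint/key).
$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-west-2',
    'use_path_style_endpoint' => true,
]);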

The complete list of S3 endpoints can be found in the AWS Regions and Endpoints documentation.
