Storage

Cloud Storage

By default, Filestack stores uploaded files in an internally managed S3 bucket. If you already have a cloud storage solution in place and would like to integrate it with Filestack, we support that as well. To do this, log in to the Filestack Developer Portal and provide the credentials of the service you’d like to use as your storage provider.

Currently, we support many of the world’s largest cloud storage platforms:

  • Amazon S3
  • Rackspace
  • Azure Blob Storage
  • Dropbox
  • Google Cloud Storage

All storage options require a paid plan.

Amazon S3

Using Amazon S3 as a storage backend is easily configured by providing your Amazon Access Key and Secret Access Key in the developer portal.

IAM Setup

It is highly recommended that you create a new IAM user to interface with Filestack. That user should be configured with the following IAM policy; replace YOUR_BUCKET_NAME with the name of your bucket:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListAllMyBuckets",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::*"
        }
    ]
}

JavaScript SDK

If you are uploading to your own S3 bucket from a browser using our JavaScript SDK, you need to configure your bucket’s CORS policy to allow cross-origin requests:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <ExposeHeader>ETag</ExposeHeader>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>

It is recommended to replace the wildcard in <AllowedOrigin>*</AllowedOrigin> with your own domain.
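With CORS in place, the picker (or a direct upload) can be pointed at your own bucket through the store options. The sketch below is illustrative: the bucket name, path, and region are placeholders, and the commented lines assume an initialized filestack-js client.

```javascript
// storeTo options for filestack-js; container, path, and region are
// placeholders — replace them with your own bucket's values.
const storeTo = {
  location: 's3',               // store in your own S3 bucket
  container: 'YOUR_BUCKET_NAME',
  path: '/uploads/',            // optional key prefix inside the bucket
  region: 'us-east-1',
};

// Usage with the filestack-js client (assumed to be initialized elsewhere):
// const client = filestack.init('YOUR_API_KEY');
// client.picker({ storeTo }).open();
// or: client.upload(file, {}, storeTo);
```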

Content Ingestion Network

The Content Ingestion Network utilizes an array of global edge servers that serve as shortest-hop points for your and your customers’ upload requests. This is made possible by the availability of our services across AWS regions. If this feature is enabled on your application (it is also required when using Intelligent Ingestion with your own S3 bucket), your S3 bucket must grant access to the Filestack IAM user in order for files to make it from the edge nodes to your bucket. Please use the following bucket policy, replacing YOUR_BUCKET_NAME appropriately:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Principal": {
                "AWS": "arn:aws:iam::593058860426:user/filestack-uploads"
            },
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME/*"
        }
    ]
}
Note: Moving files between buckets is not currently possible; please contact Support for more information.

Azure

You can add an Azure container from your Azure Storage Account in the Filestack Developer Portal; you just need your Storage Account key and the name of the container. You can read here about how to obtain your Azure Storage Access Keys.

Google Cloud Storage

Filestack can connect to a GCS bucket via a service account key.

Read here to learn about creating Google Cloud service accounts.

Make sure your service account role is Storage Object Admin. Get the JSON key for this account and paste it into the Access Key field in the GCS section of the developer portal.

Dropbox

You need an existing Dropbox application, or you can create a new one.

1. Make sure to enable additional development users.

2. Generate an OAuth2 access token and add it to the Dropbox storage section in the developer portal.

Storage Best Practices

We allow our customers to store their files directly in their own cloud storage. To work with custom storage, you need to add the necessary policies and test the storage keys in the Developer Portal. Once the connection between our API and your cloud storage is established, you can start uploading files and use our CDN URLs to deliver them. Below you can find guidance and suggestions on how to work with and manage the files uploaded to your bucket.

Validate where the files have been stored

To validate where the files have been stored, you can use:

  • upload response - as soon as the file is uploaded, you receive an upload response containing a key that reflects the path under which your file is located. We recommend storing the Filestack CDN URL or the Filestack handle from the upload response in your database.

  • metadata call - if you would like to match a Filestack CDN URL with the file that is stored in your bucket, you can use our metadata call:

    https://cdn.filestackcontent.com/handle/metadata

    As a result, you will receive the key value with the path to the file:

    {
        "mimetype": "image/jpeg",
        "uploaded": 1595326582365.9338,
        "container": "your_bucket_name",
        "writeable": true,
        "filename": "sample.jpg",
        "location": "S3",
        "key": "fmqoiyAQRyQsVd9KKaJI_sample.jpg",
        "path": "fmqoiyAQRyQsVd9KKaJI_sample.jpg",
        "size": 68772
    }
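As a convenience, the metadata URL can be derived from any Filestack CDN URL, since the handle is its last path segment. A minimal sketch (the handle shown is a placeholder):

```javascript
// Derive the metadata URL for a file from its Filestack CDN URL.
// The handle is the last path segment of the CDN URL.
function metadataUrl(cdnUrl) {
  const handle = new URL(cdnUrl).pathname.split('/').pop();
  return `https://cdn.filestackcontent.com/${handle}/metadata`;
}

const url = metadataUrl('https://cdn.filestackcontent.com/YOUR_HANDLE');
// → 'https://cdn.filestackcontent.com/YOUR_HANDLE/metadata'
```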

Use several S3 buckets

If you would like to upload your files to different buckets under your AWS account, you only need to list them in your IAM policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:DeleteObject"
             ],
             "Resource": [
                 "arn:aws:s3:::YOUR_BUCKET_NAME/*",
                 "arn:aws:s3:::YOUR_SECOND_BUCKET_NAME/*"
             ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListAllMyBuckets",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::*"
        }
    ]
}

Make sure that all of your buckets have a CORS policy configured.
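Once both buckets are listed in the IAM policy, the target bucket can be chosen per upload through the store options. A minimal sketch, assuming filestack-js and placeholder bucket names:

```javascript
// Build filestack-js store options targeting a specific bucket.
// Bucket names are placeholders — use the buckets from your IAM policy.
function storeOptionsFor(bucket) {
  return { location: 's3', container: bucket };
}

const imagesStore = storeOptionsFor('YOUR_BUCKET_NAME');
const docsStore = storeOptionsFor('YOUR_SECOND_BUCKET_NAME');

// e.g. client.upload(imageFile, {}, imagesStore);
//      client.upload(pdfFile, {}, docsStore);
```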

Configure two or more cloud storage providers in the Developer Portal

  • You can configure multiple cloud storage providers in the Developer Portal.
  • When both an S3 bucket and another cloud storage provider are configured and you do not specify the desired location in your Picker code, we store your files in your S3 bucket by default.
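To route uploads to a provider other than the default S3 bucket, set the storage location explicitly in your Picker code. A hedged sketch with placeholder values (the location field accepts values such as 's3', 'gcs', 'azure', 'rackspace', or 'dropbox'):

```javascript
// Picker options directing uploads to a non-default provider;
// the container name is a placeholder.
const pickerOptions = {
  storeTo: {
    location: 'gcs',              // override the default S3 location
    container: 'YOUR_GCS_BUCKET',
  },
};

// client.picker(pickerOptions).open();
```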

Transfer files between the buckets and/or accounts

We do not support migrating files between buckets or accounts.

  • When a file is uploaded to your bucket, no copy of the file is stored within our services. We keep only the file metadata, which allows us to request the file from your cloud storage in order to deliver it.
  • The metadata kept in our database is the same as what you receive during the upload process. Part of this metadata is the key and path values, which record where the file is located. We request the file from this particular path.
  • If the file is removed from this path (deleted or transferred to another location), we lose the connection to this file.
  • The easiest way to change the location is to re-upload the file to the new bucket.
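Re-uploading can be scripted against the CDN: fetch the file from its current CDN URL, then upload the result with store options pointing at the new bucket. The sketch below only builds the source URL; the commented lines assume an initialized filestack-js client, and the handle and bucket names are placeholders.

```javascript
// Build the CDN URL for an existing handle (placeholder shown).
function cdnUrl(handle) {
  return `https://cdn.filestackcontent.com/${handle}`;
}

const source = cdnUrl('YOUR_HANDLE');

// Hedged re-upload flow (assumes an initialized filestack-js client):
// const res = await fetch(source);
// const blob = await res.blob();
// await client.upload(blob, {}, { location: 's3', container: 'YOUR_NEW_BUCKET' });
// Store the new handle/CDN URL from the upload response in your database.
```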