Amazon S3

Overview

Connect Superblocks to S3 to build apps that can list, read, delete, and upload files in S3:

  • Read data from S3 and utilize it in scheduled reports

Read data from S3 in a scheduled job

  • Upload data from API steps or local files to S3

Upload to S3 from an application

Setting up Amazon S3

1. Add integration

Select Amazon S3 from the integrations page.

2. Configure settings

Fill out the form with the following settings:

Setting       | Required | Description
Name          | Yes      | Name that will be displayed to users when selecting this integration in Superblocks
Region        | Yes      | AWS region where the S3 bucket is hosted, e.g. us-east-1
Access Key ID | Yes      | Access key ID for your AWS account
Secret Key    | Yes      | Secret access key for your AWS account
IAM Role ARN  | No       | ARN of the role for Superblocks to assume when accessing S3 resources

3. Test and save

Click Test Connection to check that Superblocks can connect to the data source.

info

If using Superblocks Cloud, add these Superblocks IPs to your allowlist (not necessary for the On-Premise Agent).

After connecting successfully, click Create to save the integration.

4. Set profiles

Optionally, configure different profiles for separate development environments.

tip

Amazon S3 connected! You can now use Amazon S3 in any Application, Workflow, or Scheduled Job.

Creating Amazon S3 steps

Connect to your S3 integration from Superblocks by creating steps in Application APIs, Workflows, and Scheduled Jobs. An S3 step can perform the following actions:

  • List files in an S3 bucket
  • Read a file from an S3 bucket
  • Delete a file from an S3 bucket
  • Upload files to an S3 bucket

info

Superblocks also supports connecting to AWS services with Boto3 in Python steps if you require additional functionality.
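
For example, a Python step could use Boto3 to generate a presigned download URL for an object. A minimal sketch; the region, credentials, bucket, and key below are placeholders to replace with your own:

import boto3

# Placeholders: replace the region, credentials, bucket, and key with your own values.
s3 = boto3.client(
    's3',
    region_name='us-east-1',
    aws_access_key_id='AWS_ACCESS_KEY_ID',
    aws_secret_access_key='AWS_SECRET_ACCESS_KEY',
)

# Generate a URL that grants temporary read access to one object (valid for 1 hour here).
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-bucket', 'Key': 'reports/latest.csv'},
    ExpiresIn=3600,
)
print(url)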

Use cases

Applications

Drag files into an application using the FilePicker component, and upload them to S3. See more details in the FilePicker guide here.

Use a form component and file picker to upload a file to S3
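
If the built-in upload action doesn't fit your use case, the same upload can also be done from a Python step with Boto3. A minimal sketch, assuming the file name and base64-encoded contents come from your FilePicker component and the bucket name is a placeholder:

import base64

import boto3

# Placeholders: replace the region and bucket; credentials come from your
# environment or can be passed in as in the presigned URL example above.
s3 = boto3.client('s3', region_name='us-east-1')

# Assumption: file_name and file_contents_b64 hold the name and base64-encoded
# contents of the file selected in your FilePicker component.
file_name = 'example.csv'
file_contents_b64 = base64.b64encode(b'example file contents').decode()

s3.put_object(
    Bucket='my-upload-bucket',
    Key=f'uploads/{file_name}',
    Body=base64.b64decode(file_contents_b64),
)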

Workflows

Export a Google Sheet as a CSV to an S3 bucket.

Use S3 in a workflow to store an exported Google Sheet as a CSV
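
The upload step of such a workflow can use the S3 integration directly, or be written in a Python step with Boto3. A minimal sketch, assuming `rows` holds the output of an earlier Google Sheets step and the bucket name is a placeholder:

import csv
import io

import boto3

# Placeholders: replace the region and bucket; credentials come from your
# environment or the integration configuration.
s3 = boto3.client('s3', region_name='us-east-1')

# Assumption: `rows` is the output of an earlier Google Sheets step,
# one dictionary per spreadsheet row.
rows = [
    {'order_id': 1, 'total': 19.99},
    {'order_id': 2, 'total': 42.50},
]

# Serialize the rows to CSV in memory.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=rows[0].keys())
writer.writeheader()
writer.writerows(rows)

# Upload the CSV to the bucket.
s3.put_object(
    Bucket='my-exports-bucket',
    Key='exports/orders.csv',
    Body=buffer.getvalue().encode('utf-8'),
    ContentType='text/csv',
)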

Scheduled Jobs

Query order analytics, update an inventory prediction model, and upload it to S3 for the data science team to use.

Save a prediction model in S3 daily using a scheduled job

Retrieving More Than 1000 Files

caution

Keep in mind that there are memory constraints when retrieving large amounts of data and returning it to the browser. See Returning data from the Application backend to the frontend for more information.

Amazon S3 returns at most 1,000 objects per list request. Here is an example of handling this limitation in a Backend JavaScript step. Keep in mind you may need to modify this example, or port it to Python, for your use case:

const AWS = require('aws-sdk');

const s3 = new AWS.S3({
  region: 'eu-central-1',
  accessKeyId: 'AWS_ACCESS_KEY_ID',
  secretAccessKey: 'AWS_SECRET_ACCESS_KEY',
});

async function listAllObjectsFromS3Bucket(bucket, prefix = '') {
  let isTruncated = true;
  let marker;
  const elements = [];

  // listObjects returns at most 1,000 keys per call, so keep requesting
  // pages until the response is no longer truncated.
  while (isTruncated) {
    const params = {
      Bucket: bucket,
      Prefix: prefix,
      Marker: marker,
    };

    const response = await s3.listObjects(params).promise();

    response.Contents.forEach((item) => {
      elements.push(item.Key);
    });

    isTruncated = response.IsTruncated;

    if (isTruncated) {
      // Start the next page after the last key returned in this one.
      marker = response.Contents.slice(-1)[0].Key;
    }
  }

  return elements;
}

// example call
listAllObjectsFromS3Bucket('<your bucket name>', '<optional prefix>')
  .then((elements) => console.log(elements))
  .catch((error) => console.error('An error occurred: ', error));
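
If you'd rather handle this in a Python step, a roughly equivalent sketch uses Boto3's built-in paginator (region, credentials, and bucket are again placeholders):

import boto3

# Placeholders: replace the region and credentials with your own values.
s3 = boto3.client(
    's3',
    region_name='eu-central-1',
    aws_access_key_id='AWS_ACCESS_KEY_ID',
    aws_secret_access_key='AWS_SECRET_ACCESS_KEY',
)

def list_all_objects_from_s3_bucket(bucket, prefix=''):
    keys = []
    # list_objects_v2 also returns at most 1,000 keys per call; the paginator
    # follows continuation tokens until every page has been fetched.
    paginator = s3.get_paginator('list_objects_v2')
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for item in page.get('Contents', []):
            keys.append(item['Key'])
    return keys

# example call
print(list_all_objects_from_s3_bucket('<your bucket name>', '<optional prefix>'))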

Troubleshooting

Check out our guide on common errors across database integrations. If you are encountering an error that you don't see in the guide, or the provided steps are insufficient to resolve the error, please contact us at help@superblocks.com.