Amazon S3

Output events and detections to an Amazon S3 bucket.

  • bucket: the path to the AWS S3 bucket.
  • key_id: the ID of the AWS access key.
  • secret_key: the AWS secret access key to authenticate with.
  • sec_per_file: the number of seconds after which a file is cut and uploaded.
  • is_compression: if set to "true", data is gzipped before upload.
  • is_indexing: if set to "true", data is uploaded in a format that makes it searchable.
  • region_name: the region name of the bucket; not always required, but setting it is recommended.
  • endpoint_url: optionally specify a custom endpoint URL, usually used together with region_name to output to S3-compatible third-party services.
  • dir: the directory prefix.
  • is_no_sharding: do not add a shard directory at the root of the generated files.

Example:

bucket: my-bucket-name
key_id: AKIAABCDEHPUXHHHHSSQ
secret_key: fonsjifnidn8anf4fh74y3yr34gf3hrhgh8er
is_indexing: "true"
is_compression: "true"
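
If you are outputting to an S3-compatible third-party service instead, the same configuration can point at a custom endpoint. A minimal sketch, where the region and endpoint URL are placeholder values for your provider:

region_name: us-east-1
endpoint_url: https://storage.example.com
bucket: my-bucket-name
key_id: AKIAABCDEHPUXHHHHSSQ
secret_key: fonsjifnidn8anf4fh74y3yr34gf3hrhgh8er
is_indexing: "true"
is_compression: "true"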

If you have your own visualization stack, or you just need the data archived, you can upload
directly to Amazon S3. This way you don't need to run any additional infrastructure.

If the is_indexing option is enabled, data uploaded to S3 is written in a specific format that enables some indexed queries.
LimaCharlie data files begin with a "d", while special manifest files (indicating
which data files contain which sensors' data) begin with an "m". Otherwise (without is_indexing), data is uploaded
as flat files with UUID names.
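
Because manifest and data files differ only by the first character of the file name, they can be enumerated separately. A minimal sketch in Python, assuming boto3 and the example bucket and credentials above; the "d"/"m" naming is as described in this article:

import boto3

s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIAABCDEHPUXHHHHSSQ",
    aws_secret_access_key="fonsjifnidn8anf4fh74y3yr34gf3hrhgh8er",
)

# List all objects in the bucket and split them into manifest vs data files.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="my-bucket-name"):
    for obj in page.get("Contents", []):
        name = obj["Key"].rsplit("/", 1)[-1]  # file name without any dir prefix
        if name.startswith("m"):
            print("manifest:", obj["Key"])
        elif name.startswith("d"):
            print("data:", obj["Key"])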

If the is_compression flag is enabled, each file is compressed with GZIP before upload. Enabling is_compression is recommended.
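
To consume compressed files, decompress them after download. A minimal sketch, assuming boto3 and Python's standard gzip module; the object key here is hypothetical:

import gzip

import boto3

s3 = boto3.client("s3")
resp = s3.get_object(Bucket="my-bucket-name", Key="path/to/uploaded-file")  # hypothetical key
data = gzip.decompress(resp["Body"].read())  # undo the GZIP applied at upload
print(data.decode("utf-8"))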

  1. Log in to the AWS console and go to the IAM service.
  2. Click "Users" in the menu.
  3. Click "Add User", give it a name and select "Programmatic access".
  4. Click "Next: Permissions", then "Next: Review"; you will see a warning about the user having no permissions, which you can ignore. Click "Create User".
  5. Take note of the "Access key", "Secret access key" and ARN name for the user (starts with "arn:").
  6. Go to the S3 service.
  7. Click "Create Bucket", enter a name and select a region.
  8. Click "Next" until you get to the permissions page.
  9. Select "Bucket policy" and input the policy from the sample below, replacing "<<USER_ARN>>" with the ARN name of the user you created and "<<BUCKET_NAME>>" with the name of the bucket you just created.
  10. Click "Save".
  11. Click the "Permissions" tab for your bucket.
  12. Back in limacharlie.io, in your organization view, create a new Output.
  13. Give it a name, select the "s3" module and select the stream you would like to send.
  14. Enter the bucket name, key_id and secret_key you noted down from AWS.
  15. Click "Create".
  16. After a minute, the data should start getting written to your bucket.

Policy Sample

{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Sid": "PermissionForObjectOperations",
         "Effect": "Allow",
         "Principal": {
            "AWS": "<<USER_ARN>>"
         },
         "Action": "s3:PutObject",
         "Resource": "arn:aws:s3:::<<BUCKET_NAME>>/*"
      }
   ]
}
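
To sanity-check the policy and credentials before creating the Output, you can attempt a test upload. A minimal sketch, assuming boto3; the credential and bucket values are the placeholders from the steps above:

import boto3

s3 = boto3.client(
    "s3",
    aws_access_key_id="<<ACCESS_KEY>>",
    aws_secret_access_key="<<SECRET_ACCESS_KEY>>",
)

# The policy above grants s3:PutObject only, so this write should succeed,
# while any read of the same object should be denied (least privilege).
s3.put_object(Bucket="<<BUCKET_NAME>>", Key="lc-output-test", Body=b"test")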
