How to sign your Amazon S3 urls with Elixir and Arc
A good way to add an extra layer of security when sharing files stored somewhere in the cloud is by using a signed url. This means that the url will have an expiration time attached and it should not be easily guessable by someone who wants to access the server’s hosted resources.
Usually, an algorithm generates a unique token that is used as a parameter in the url string.
In this article, I will describe how signing can be achieved in Elixir with the Arc library, an Amazon S3 bucket, and an EC2 machine.
The bucket should be configured with the **Block all public access** option to ensure the bucket is isolated and no one can access it from the outside 🔒
Since our S3 bucket is not accessible even from an EC2 instance, we need to create an IAM Role with an IAM Policy attached that grants the EC2 instance access to the S3 bucket, so it can perform, for example, a GET operation on some resource.
Go to the IAM service and create a new policy using the JSON editor mode, then use the following code:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": ["arn:aws:s3:::BUCKET_NAME/*"]
    }
  ]
}
This policy allows the GET, PUT and DELETE actions on all objects under the Amazon Resource Name (ARN) listed in the “Resource” array. Note that you can copy the ARN identifier from the S3 buckets list: just select the bucket and press the Copy ARN button.
Go to the Roles section and press the Create role button. Then select AWS Services and the EC2 use case:
Hit next and then search for the Policy created above and attach it:
Create the role and give it a meaningful name, since you will need it in the next step.
Go to the EC2 Service page and select the EC2 instance that should have access to the S3 bucket, then press Actions and Attach/Replace IAM Role:
Then you just need to select the created role and attach it to the EC2 instance.
Arc is a very useful library: it handles file storage to and from an S3 bucket using the ex_aws_s3 library.
Unfortunately, this library hasn't had a release since 2018, but it is widely used and serves its purpose well. To use Arc in your project, you just need to set up the dependencies and configs as explained in its README.md.
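As a sketch of what that setup looks like (the exact keys are documented in the Arc and ExAws READMEs; the version numbers, bucket name, and region below are placeholders):

```elixir
# mix.exs — dependencies (versions are illustrative)
defp deps do
  [
    {:arc, "~> 0.11"},
    {:arc_ecto, "~> 0.11"},
    {:ex_aws, "~> 2.1"},
    {:ex_aws_s3, "~> 2.0"},
    {:hackney, "~> 1.15"},
    {:sweet_xml, "~> 0.6"}
  ]
end

# config/config.exs
config :arc,
  storage: Arc.Storage.S3,
  bucket: "BUCKET_NAME"

# With the IAM role attached to the EC2 instance there is no need to
# hard-code credentials: ExAws can fall back to the instance role.
config :ex_aws,
  access_key_id: [{:system, "AWS_ACCESS_KEY_ID"}, :instance_role],
  secret_access_key: [{:system, "AWS_SECRET_ACCESS_KEY"}, :instance_role],
  region: "eu-west-1"
```

The `:instance_role` fallback is what ties this setup to the IAM role created earlier: ExAws fetches temporary credentials from the EC2 instance metadata, so no access keys ever live in the codebase.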
Then you just need to store some files using the store function or arc_ecto's cast_attachments, as described on GitHub.
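For reference, a minimal uploader definition and the two ways of storing a file might look like this (the module, schema, and field names are hypothetical):

```elixir
defmodule MyApp.Avatar do
  use Arc.Definition
  use Arc.Ecto.Definition

  # Keep files under a per-user prefix inside the private bucket.
  def storage_dir(_version, {_file, user}) do
    "uploads/#{user.id}"
  end
end

# Storing a file directly...
MyApp.Avatar.store({"/path/to/selfie.png", user})

# ...or through an Ecto changeset with arc_ecto:
def changeset(user, attrs) do
  user
  |> cast(attrs, [:name])
  |> cast_attachments(attrs, [:avatar])
end
```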
After your file is safe and secure on your private S3 bucket you can retrieve it with a signed link:
# From: https://github.com/stavro/arc/blob/master/README.md#url-generation
Avatar.url({"selfie.png", user}, :thumb, signed: true, expires_in: 3600)
#=> "https://bucket.s3.amazonaws.com/uploads/1/thumb.png?AWSAccessKeyId=AKAAIPDF14AAX7XQ&Signature=5PzIbSgD1V2vPLj%2B4WLRSFQ5M%3D&Expires=1434395458"
You can specify an expiration time for the url using the expires_in parameter. The default expiration time in Arc is 5 minutes; the maximum allowed by ex_aws_s3 is 7 days.
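Arc's S3 storage delegates the signing to ex_aws_s3, so you can also generate a signed url directly when you are not going through an uploader definition (the bucket name and object key below are placeholders):

```elixir
# Build the ExAws S3 config (resolves region and credentials,
# including the EC2 instance role set up earlier).
config = ExAws.Config.new(:s3)

# Sign a GET url valid for 1 hour; ex_aws_s3 caps :expires_in at
# 7 days (604_800 seconds).
{:ok, url} =
  ExAws.S3.presigned_url(config, :get, "BUCKET_NAME", "uploads/1/thumb.png",
    expires_in: 3600
  )
```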
The recommendation is to keep this expiration time as low as possible, to reduce the chance that an external user intercepts the url and accesses the private resource.
With this simple solution, we make our S3 buckets more secure. It doesn't mean the files are behind a bulletproof wall, but it surely means that getting to them is more difficult.