Happy users uploading files with Rails 5, Shrine, and Vue.js

Todd Baur · Published in ITNEXT · 11 min read · Sep 4, 2018


I just spent way too long figuring this out. My goal here is to save you an incredible amount of pain by telling you the story of how I approached production-ready uploads. Uploading files is one of those things that leaves a meeting sounding trivial and then quickly becomes a sixteen-legged octopus. There are so many implementation details that change and require security hardening once an application starts accepting and manipulating files.

  • Do I offload uploading/receiving files from my application server entirely? (commonly called ‘direct uploads’)
  • Do I need to create several versions of what the app uploads?
  • Where should I cache files so they are stored cost-effectively while maintaining performance?
  • What do I do with the existing uploaded files?
  • What kinds of files do I need to allow and what are the constraints on accepting them as valid attachments?

Those questions were days of discussions with the stakeholders of the application. In between the meetings and designing the implementation, there were plenty of tutorials and discussions about the tools at our disposal. None of them really explained how to do this from scratch.

Let’s get busy!

What’s in the stack?

  • Rails 5 in API mode
  • libvips for image manipulation
  • Sidekiq for background processing
  • Nuxt/Vue for the front end
  • Shrine.rb for our uploading integration with Rails

Project Setup

Install libvips per the instructions. Make two folders called backend and frontend.

Backend:

rails new --api --skip-active-storage --skip-action-cable -d postgresql .

Open the Gemfile and add these entries:

gem 'aws-sdk-s3' # for connecting to an S3 bucket
gem 'bcrypt', '~> 3.1.7' # use has_secure_password
gem 'fastimage' # finds the size or type of an image given its uri.
gem 'image_processing' # you guessed it
gem 'jb' # a faster json templating system than jbuilder
gem 'knock' # json web tokens (JWT) for authentication
gem 'rack-cors' # handling CORS requests
gem 'redis', '~> 4.0' # fast keystore perfect for offloading async job state
gem 'redis-rails' # connector and helpers for redis in rails
gem 'ruby-vips' # image manipulation bindings in Ruby
gem 'sidekiq' # background job framework
gem 'shrine' # our uploading toolkit

Go ahead and run bundle install to get those dependencies installed.

Let’s get some authentication going with Knock by running rails g knock:install. For Rails 5.2 you need to add this line in the initializer:

config.token_secret_signature_key = -> { Rails.application.credentials.fetch(:secret_key_base) }

This is because from Rails 5.1 to 5.2 the secrets.yml API was migrated to credentials, and the default in Knock still reflects 5.1’s implementation.
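With Knock wired up, protecting an endpoint is mostly a matter of including its helpers. Here is a minimal sketch, assuming a User model backed by has_secure_password (that’s what the bcrypt gem above is for):

# app/controllers/application_controller.rb
class ApplicationController < ActionController::API
  # Adds authenticate_user and current_user helpers backed by JWTs
  include Knock::Authenticable
end

Any controller action can then be guarded with before_action :authenticate_user, and rails g knock:token_controller user generates the controller that issues the tokens.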

Next we’ll want our API to handle CORS requests, so open initializers/cors.rb and uncomment the guts in there:

Rails.application.config.middleware.insert_before 0, Rack::Cors do
  allow do
    origins '*'

    resource '*',
             headers: :any,
             methods: [:get, :post, :put, :patch, :delete, :options, :head]
  end
end

Now keep in mind this is a very permissive policy, so when deploying you may want to limit origins to just the front-end domain instead of using *, as sketched below. Also confirm that gem 'rack-cors' is uncommented in the Gemfile (we added it earlier).
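A tighter version of that policy might look like this (the front-end domain below is just a placeholder):

Rails.application.config.middleware.insert_before 0, Rack::Cors do
  allow do
    # Only accept cross-origin requests from the deployed front end
    origins 'https://app.example.com'

    resource '*',
             headers: :any,
             methods: [:get, :post, :put, :patch, :delete, :options, :head]
  end
end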

Next we’ll want to set up a Shrine initializer. Shrine is very modular, and it’s very easy to override anything in the initializer. You can even opt to not have one at all and instead put everything in the uploader class we’ll create next; it’s up to you. I try to keep the globally used configuration in config/initializers/shrine.rb and keep the uploader classes smaller.

require 'shrine'
require 'shrine/plugins/activerecord'
require 'shrine/plugins/backgrounding'
require 'shrine/plugins/data_uri'
require 'shrine/plugins/delete_promoted'
require 'shrine/plugins/delete_raw'
require 'shrine/storage/s3'
require 'shrine/storage/file_system'
require 'shrine/plugins/logging'
require 'shrine/plugins/determine_mime_type'
require 'shrine/plugins/store_dimensions'
require 'shrine/plugins/cached_attachment_data'
require 'shrine/plugins/restore_cached_data'
require 'shrine/plugins/validation_helpers'
require 'shrine/plugins/pretty_location'
require 'shrine/plugins/processing'
require 'shrine/plugins/versions'

Shrine.plugin :activerecord
Shrine.plugin :backgrounding
Shrine.plugin :cached_attachment_data
Shrine.plugin :data_uri
Shrine.plugin :determine_mime_type
Shrine.plugin :logging
Shrine.plugin :restore_cached_data
Shrine.plugin :store_dimensions
Shrine.plugin :validation_helpers
Shrine.plugin :versions

def production_storages
  s3_options = {
    access_key_id: Rails.application.credentials.digitalocean_spaces_key,
    secret_access_key: Rails.application.credentials.digitalocean_spaces_secret,
    bucket: Rails.application.credentials.digitalocean_spaces_bucket,
    endpoint: 'https://nyc3.digitaloceanspaces.com',
    region: 'nyc3'
  }

  # You probably want the directory to be in a shared location so it's persisted between deployments
  {
    cache: Shrine::Storage::FileSystem.new('public/uploads', prefix: 'cache'), # temporary
    store: Shrine::Storage::S3.new(prefix: 'store', upload_options: { acl: 'public-read' }, **s3_options)
  }
end

def development_storages
  {
    cache: Shrine::Storage::FileSystem.new('public', prefix: 'uploads/cache'), # temporary
    store: Shrine::Storage::FileSystem.new('public', prefix: 'uploads') # permanent
  }
end

Shrine.storages = Rails.env.production? ? production_storages : development_storages
# Shrine.storages = production_storages

Shrine works with two places to store files: a cache and a permanent store. This is the most common configuration you’ll find out there, but additional stores can be added just by adding another key to the hash and calling the appropriate Shrine::Storage subclass, like the sketch below. I also added two methods that return the storage configuration hashes for production and local development.
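For example, a hypothetical backup store could be added to the hash returned by production_storages, and any uploader can then reference it by key (e.g. ImageUploader.new(:backup)):

{
  cache: Shrine::Storage::FileSystem.new('public/uploads', prefix: 'cache'),
  store: Shrine::Storage::S3.new(prefix: 'store', upload_options: { acl: 'public-read' }, **s3_options),
  # Extra storage, referenced by its key just like cache and store
  backup: Shrine::Storage::S3.new(prefix: 'backup', **s3_options)
}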

In my particular case we used DigitalOcean’s Spaces feature. It is by all measures an S3-compatible bucket and has no edge cases that aws-sdk-s3 doesn’t handle on its own. If you’re new to using Rails’ credentials:edit, then I suggest giving this article a read.
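For reference, the initializer above expects the Spaces keys to live in the encrypted credentials file. A quick sketch of how to wire that up and verify it (the key names match the initializer; the values are placeholders):

# Run `EDITOR=vim rails credentials:edit` and add:
#
#   digitalocean_spaces_key: YOUR_KEY
#   digitalocean_spaces_secret: YOUR_SECRET
#   digitalocean_spaces_bucket: your-bucket-name
#
# Then confirm the values load in a Rails console:
Rails.application.credentials.digitalocean_spaces_bucket
# => "your-bucket-name"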

Next we’ll scaffold a widget that has an attached photo with it:

cd backend && rails g scaffold widget title:string photo_data:string

The key thing in the scaffolding is to give your attachment column the _data suffix and make its type string. I mistakenly made it a json column at first, and then Shrine was trying to parse a hash and throwing an exception.
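For the record, the migration the scaffold generates should look roughly like this (the timestamp in the filename will differ); run rails db:migrate once it’s in place:

class CreateWidgets < ActiveRecord::Migration[5.2]
  def change
    create_table :widgets do |t|
      t.string :title
      # Shrine stores the attachment metadata as serialized JSON in this string column
      t.string :photo_data

      t.timestamps
    end
  end
end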

Because photo_data will be sent to the controller as a JSON object, we’ll need to change the way the params are accepted. Open app/controllers/widgets_controller.rb and change the widget_params method like so:

def widget_params
  params.require(:widget).permit(:title, photo_data: {})
end

Now that we have a widget model we can attach a photo uploader to it. The model app/models/widget.rb should look like so:

class Widget < ApplicationRecord
  include ImageUploader[:photo]
end

Shrine will look for an app/uploaders folder, so let’s make that happen and create an ImageUploader class:

# app/uploaders/image_uploader.rb
class ImageUploader < Shrine
  include ImageProcessing::Vips
  plugin :backgrounding
  # The determine_mime_type plugin allows you to determine and store the actual MIME type of the file analyzed from file content.
  plugin :determine_mime_type
  # The store_dimensions plugin extracts and stores dimensions of the uploaded image using the fastimage gem, which has built-in protection against image bombs.
  plugin :store_dimensions
  # The validation_helpers plugin provides helper methods for validating attached files.
  plugin :validation_helpers
  # The pretty_location plugin attempts to generate a nicer folder structure for uploaded files.
  plugin :pretty_location
  # Allows you to define processing performed for a specific action.
  plugin :processing
  # The versions plugin enables your uploader to deal with versions, by allowing you to return a Hash of files when processing.
  plugin :versions
  # The delete_promoted plugin deletes files that have been promoted, after the record is saved. This means that cached files handled by the attacher will automatically get deleted once they're uploaded to store. This also applies to any other uploaded file passed to Attacher#promote.
  plugin :delete_promoted
  # The delete_raw plugin will automatically delete raw files that have been uploaded. This is especially useful when doing processing, to ensure that temporary files have been deleted after upload.
  plugin :delete_raw
  # The cached_attachment_data plugin adds the ability to retain the cached file across form redisplays, which means the file doesn't have to be reuploaded in case of validation errors.
  plugin :cached_attachment_data
  plugin :logging
  plugin :recache

  # Define validations.
  # For a complete list of all validation helpers, see AttacherMethods:
  # http://shrinerb.com/rdoc/classes/Shrine/Plugins/ValidationHelpers/AttacherMethods.html
  Attacher.validate do
    validate_max_size 15.megabytes, message: 'is too large (max is 15 MB)'
    validate_mime_type_inclusion %w[image/jpeg image/jpg image/png image/gif]
  end

  # Process additional versions in the background.
  process(:store) do |io|
    versions = { original: io }
    io.download do |original|
      pipeline = ImageProcessing::Vips.source(original)
      versions[:large] = pipeline.resize_to_limit!(1200, 1200)
      versions[:medium] = pipeline.resize_to_limit!(640, 640)
      versions[:small] = pipeline.resize_to_limit!(320, 320)
      versions[:lg_square] = pipeline.resize_to_fill!(1200, 1200)
      versions[:md_square] = pipeline.resize_to_fill!(640, 640)
      versions[:sm_square] = pipeline.resize_to_fill!(320, 320)
    end
    versions
  end

  Attacher.promote { |data| ShrineBackgrounding::PromoteJob.perform_async(data) }
  Attacher.delete { |data| ShrineBackgrounding::DeleteJob.perform_async(data) }
end
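Once a cached file has been promoted and the versions generated, the model exposes a URL per version via photo_url. A sketch of a jb template that exposes a few of them (jb templates are plain Ruby that return a hash; the file name and chosen versions here are just examples):

# app/views/widgets/show.json.jb
{
  id: @widget.id,
  title: @widget.title,
  # Each key maps to one of the versions produced in process(:store)
  photo: {
    original: @widget.photo_url(:original),
    small: @widget.photo_url(:small),
    md_square: @widget.photo_url(:md_square)
  }
}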

Next we need to create the background jobs that process the images. Shrine makes the original immediately available from the cache, and once processing finishes it updates the model with the URLs in our S3/Spaces bucket.

In app/jobs/shrine_backgrounding we create a PromoteJob and DeleteJob class:

# app/jobs/shrine_backgrounding/promote_job.rb
module ShrineBackgrounding
  class PromoteJob
    include Sidekiq::Worker

    def perform(data)
      Shrine::Attacher.promote(data)
    end
  end
end

# app/jobs/shrine_backgrounding/delete_job.rb
module ShrineBackgrounding
  class DeleteJob
    include Sidekiq::Worker

    def perform(data)
      Shrine::Attacher.delete(data)
    end
  end
end

Now, to have these jobs executed in the background, we need Sidekiq running. In the development environment that’s simply bundle exec sidekiq, but for production we used a systemd unit:

# /etc/systemd/system/sidekiq.service
# https://raw.githubusercontent.com/mperham/sidekiq/master/examples/systemd/sidekiq.service
# systemd unit file for CentOS 7, Ubuntu 15.04
#
# Customize this file based on your bundler location, app directory, etc.
# Put this in /usr/lib/systemd/system (CentOS) or /lib/systemd/system (Ubuntu).
# Run:
# - systemctl enable sidekiq
# - systemctl {start,stop,restart} sidekiq
#
# This file corresponds to a single Sidekiq process. Add multiple copies
# to run multiple processes (sidekiq-1, sidekiq-2, etc).
#
# See Inspeqtor's Systemd wiki page for more detail about Systemd:
# https://github.com/mperham/inspeqtor/wiki/Systemd
#
[Unit]
Description=sidekiq
# start us only once the network and logging subsystems are available,
# consider adding redis-server.service if Redis is local and systemd-managed.
After=syslog.target network.target

# See these pages for lots of options:
# http://0pointer.de/public/systemd-man/systemd.service.html
# http://0pointer.de/public/systemd-man/systemd.exec.html
[Service]
Type=simple
WorkingDirectory=/home/deploy/apps/app/current/backend
# If you use rbenv:
ExecStart=/bin/bash -lc '/home/deploy/.rbenv/shims/bundle exec sidekiq -e production -q backend_production_mailers -q default -q mailers'
# If you use the system's ruby:
# ExecStart=/usr/local/bin/bundle exec sidekiq -e production
User=deploy
Group=deploy
UMask=0002

# Greatly reduce Ruby memory fragmentation and heap usage
# https://www.mikeperham.com/2018/04/25/taming-rails-memory-bloat/
Environment=MALLOC_ARENA_MAX=2
Environment=RAILS_ENV=production
Environment=RAILS_MASTER_KEY=[YOUR MASTER KEY]
Environment=APP_DATABASE=app_production
Environment=APP_DATABASE_HOST=127.0.0.1
Environment=APP_DATABASE_USERNAME=[YOUR DB]
Environment=APP_DATABASE_PASSWORD=[DB PW]

# if we crash, restart
RestartSec=1
Restart=on-failure

# output goes to /var/log/syslog
StandardOutput=syslog
StandardError=syslog

# This will default to "bundler" if we don't specify it
SyslogIdentifier=sidekiq

[Install]
WantedBy=multi-user.target

The key thing in this unit is that the queues Sidekiq needs to process are defined in the ExecStart command; if you configure queues via config/sidekiq.yml instead, adjust the command accordingly. Also pay attention to the environment variables that need to be set so that Sidekiq can access the database and decrypt credentials.
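One thing neither the unit file nor the Gemfile does for you is tell Sidekiq where Redis lives. It defaults to localhost:6379, which is fine for this setup, but if Redis runs elsewhere a small initializer does the trick (a sketch; the REDIS_URL variable is an assumption about your environment):

# config/initializers/sidekiq.rb
redis_config = { url: ENV.fetch('REDIS_URL', 'redis://localhost:6379/0') }

# The server is the Sidekiq process itself; the client is the Rails app enqueuing jobs.
Sidekiq.configure_server do |config|
  config.redis = redis_config
end

Sidekiq.configure_client do |config|
  config.redis = redis_config
end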

The last piece we’ll want for the backend is a place to upload files into the cache right away. To do that securely, don’t follow Shrine’s suggestion of mounting its upload endpoint as a Rack app: that gives you no way to authenticate uploads and effectively turns your S3 bucket into public storage. Instead we’ll create an UploadsController. I’ll skip the authentication pieces here, but I highly suggest using Knock for API-based Rails backends. For now the before_action is shown commented out.

class UploadsController < ApplicationController
  # before_action :authenticate_admin

  def create
    uploader = ImageUploader.new(:cache)
    @file = uploader.upload(upload_params[:file])
    render json: @file
  end

  protected

  def upload_params
    params.require(:upload).permit(:file)
  end
end

Add a route to the UploadsController:

post 'uploads' => 'uploads#create', defaults: { format: :json }

That puts our uploads into the cache and keeps the UI from having to upload the file on form submit, which would block user interaction. When the upload finishes, we grab the returned photo_data and include it in the form when we submit it.
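If you want a quick sanity check that the endpoint really caches the file and returns the data our form needs, a minimal integration test might look like this (a sketch: the fixture file name is hypothetical and authentication is left out):

# test/integration/uploads_flow_test.rb
require 'test_helper'

class UploadsFlowTest < ActionDispatch::IntegrationTest
  test 'caches an uploaded image and returns its Shrine data' do
    file = fixture_file_upload('files/sample.png', 'image/png')

    post '/uploads', params: { upload: { file: file } }

    assert_response :success
    body = JSON.parse(response.body)
    # The response should describe the cached Shrine::UploadedFile,
    # roughly { "id" => ..., "storage" => "cache", "metadata" => { ... } }.
    assert_equal 'cache', body['storage']
  end
end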

That’s a lot of code going into our backend. I hope you’re still with me. Now it’s time to make the UI!

Frontend:

npm i -g @vue/cli @vue/cli-init
cd frontend && vue init nuxt-community/starter-template .
yarn install

That gives us a basic Nuxt.js starting point. Go ahead and start it with yarn run dev.

Next I added @nuxtjs/axios and bootstrap-vue to the project. I also want to run Rails on port 3001 (rails s -p 3001) so that Nuxt can run on port 3000. Here’s the nuxt.config.js file:

module.exports = {
  /*
  ** Headers of the page
  */
  head: {
    title: 'uploader-example',
    meta: [
      {charset: 'utf-8'},
      {name: 'viewport', content: 'width=device-width, initial-scale=1'},
      {hid: 'description', name: 'description', content: 'Nuxt.js project'}
    ],
    link: [
      {rel: 'icon', type: 'image/x-icon', href: '/favicon.ico'}
    ]
  },
  /*
  ** Customize the progress bar color
  */
  loading: {color: '#3B8070'},
  /*
  ** Build configuration
  */
  build: {
    /*
    ** Run ESLint on save
    */
    extend(config, {isDev, isClient}) {
      /* configure bootstrap-vue image paths */
      const vueLoader = config.module.rules.find((rule) => rule.loader === 'vue-loader');
      vueLoader.options.transformToRequire = {
        'img': 'src',
        'image': 'xlink:href',
        'b-img': 'src',
        'b-img-lazy': ['src', 'blank-src'],
        'b-card': 'img-src',
        'b-card-img': 'img-src',
        'b-carousel-slide': 'img-src',
        'b-embed': 'src'
      };

      if (isDev && isClient) {
        config.module.rules.push({
          enforce: 'pre',
          test: /\.(js|vue)$/,
          loader: 'eslint-loader',
          exclude: /(node_modules)/
        })
      }
    }
  },
  modules: [
    'bootstrap-vue/nuxt',
    '@nuxtjs/axios'
  ],
  axios: {
    /* set the API_URL environment variable to configure access to the API */
    baseURL: process.env.API_URL || 'http://localhost:3001/',
    redirectError: {
      401: '/login',
      404: '/notfound'
    }
  }
}

I also added a webpack rule in the build section so that bootstrap-vue components can resolve Nuxt image paths.

Next we’ll make a couple of components and use Vuex to handle communication between them. Our first component is components/uploader.vue:

<template>
  <b-form-group>
    <b-form-file ref="file"
                 v-model="file"
                 :state="Boolean(file)"
                 placeholder="Choose a file..."
                 @input="sendFile"
                 :accept="accept"></b-form-file>
    <b-progress v-show="uploadPercentage > 0 && uploadPercentage !== 100"
                striped
                animated
                :max="100"
                class="mt-3"
                :value="uploadPercentage"></b-progress>
  </b-form-group>
</template>
<script>
export default {
  props: {
    accept: {
      type: String,
      default: 'image/*'
    },
    apiUrl: {
      type: String,
      default: '/uploads'
    }
  },
  data() {
    return {
      file: null,
      presigned: {},
      uploadPercentage: 0
    }
  },
  methods: {
    sendFile() {
      let vm = this;
      let formData = new FormData();
      formData.append('upload[file]', vm.file);
      this.$axios.post(vm._props.apiUrl, formData, {
        onUploadProgress: function (progressEvent) {
          vm.$emit('uploading');
          vm.uploadPercentage = Math.round((progressEvent.loaded * 100) / progressEvent.total);
        }
      })
        .then(resp => vm.$emit('presigned', resp.data))
        .catch(function (errors) {
          console.log(errors)
        })
        .finally(() => vm.$emit('done'));
      this.toDataUrl()
    },
    toDataUrl() {
      let vm = this;
      let reader = new FileReader();
      reader.addEventListener("load", function () {
        let dataUrl = reader.result;
        vm.$emit('image', dataUrl);
        vm.$store.commit('image_preview/set', dataUrl)
      }, false);
      if (/\.(jpe?g|png|gif)$/i.test(vm.file.name)) {
        reader.readAsDataURL(vm.file);
      }
    }
  }
}
</script>

Here we handle the upload to our UploadsController and emit a ‘presigned’ event so we can capture the data we’ll need to send with the widget form. We also parse the file into a data URI so we can immediately preview what we’re uploading in the components/image-preview.vue component:

<template>
  <b-img thumbnail fluid :src="image" v-show="validate(image)"></b-img>
</template>

<script>
export default {
  name: 'image-preview',
  computed: {
    image() {
      return this.$store.state.image_preview.image;
    }
  },
  methods: {
    validate(img) {
      return img && (img.match(/jpg|gif|png|jpeg/) || img.match(/^data:image/))
    }
  },
  mounted() {
    return this.validate(this.image) ? this.show = true : this.show = false;
  }
}
</script>

You’ll notice these components set and retrieve a value from the store/image_preview.js Vuex store:

export const state = () => ({
  image: null
})

export const mutations = {
  set(state, image) {
    state.image = image
  }
};

That handles all that we need to store a file in cache, preview it, and even have a progress bar on upload. Pretty awesome eh?

Now the rest is really no different than any other form you want to submit to a RESTful backend via JSON key/value pairs.

...
<b-card class="text-left">
  <h5>New Widget</h5>
  <b-form ref="form" @submit.prevent="newWidget">
    <b-form-group label="Title">
      <b-form-input v-model="title"></b-form-input>
    </b-form-group>
    <uploader v-model="photo" @presigned="photo_data = $event"></uploader>
    <image-preview></image-preview>
    <b-btn type="submit" variant="primary">Create Widget</b-btn>
  </b-form>
</b-card>
...
<script>
...
data() {
  return {
    photo: null,
    photo_data: null,
    title: null
  }
},
methods: {
  newWidget() {
    this.$axios.post('/widgets', {
      widget: {
        photo_data: this.photo_data,
        title: this.title
      }
    }).then((resp) => {
      if (resp.status === 201) {
        alert('success!');
        this.$refs.form.reset();
        this.$store.commit('image_preview/set', null)
      }
    })
  }
}
...

If it all works, you end up with an upload form that shows a live preview and a progress bar.

Conclusion

I hope this tutorial gets people excited about building web applications. Handling file uploads has always been hard, and while there are plenty of tools to make it easier, there is still a lot of confusion and loose ends when reading about how to do it. Hopefully you can now look at Rails and Shrine for uploads, and at Nuxt and Bootstrap for great user interfaces, with some excitement. Many modern web apps follow this type of architecture, and if you made it this far you should have a good understanding of how uploads work in a RESTful world.

I’ve put this all together in an app you can download and view on Github: https://github.com/toadkicker/uploader-example

Happy coding!
