capistrano-s3
Deploys static websites to Amazon S3 website buckets using Capistrano.
Hosting your website with Amazon S3
Amazon S3 provides special website-enabled buckets that allow you to serve web pages directly from S3.
To learn how to set up your website bucket, see the Amazon documentation.
Getting started
# Gemfile
source 'https://rubygems.org'
gem 'capistrano-s3'
Setup
Install the gems with Bundler and create the public folder that will be published:
bundle install
mkdir -p public
The gem supports both flavors of Capistrano (2 and 3). The configuration differs slightly between versions.
Capistrano 2
First, initialise Capistrano for the project: bundle exec capify .
Then replace the deploy.rb content generated by capify with these simple Amazon S3 settings:
# config/deploy.rb
require 'capistrano/s3'
set :bucket, "www.cool-website-bucket.com"
set :access_key_id, "CHANGETHIS"
set :secret_access_key, "CHANGETHIS"
If you want to deploy to multiple buckets, have a look at Capistrano multistage and configure one bucket per stage.
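For instance, a minimal per-stage sketch using the capistrano-ext multistage extension (the stage names and bucket names below are only illustrative):

# config/deploy.rb
require 'capistrano/ext/multistage'   # provided by the capistrano-ext gem
set :stages, %w(staging production)
set :default_stage, 'staging'

# config/deploy/staging.rb
set :bucket, "staging.cool-website-bucket.com"

# config/deploy/production.rb
set :bucket, "www.cool-website-bucket.com"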
Capistrano 3
Initialise Capistrano by running bundle exec cap install.
Next, add require "capistrano/s3" to your Capfile.
Finally, replace the deploy.rb content generated by Capistrano with this config:
# config/deploy.rb
set :bucket, "www.cool-website-bucket.com"
set :access_key_id, "CHANGETHIS"
set :secret_access_key, "CHANGETHIS"
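Rather than committing credentials to the repository, you may prefer to read them from environment variables. A minimal sketch (the variable names are an assumption, use whatever your shell or CI provides):

# config/deploy.rb
set :bucket,            "www.cool-website-bucket.com"
set :access_key_id,     ENV["AWS_ACCESS_KEY_ID"]       # assumed env var name
set :secret_access_key, ENV["AWS_SECRET_ACCESS_KEY"]   # assumed env var name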
Deploying
Add content to your public folder and run the deploy command:
- cap deploy (Capistrano 2)
- cap <stage> deploy (Capistrano 3)
Advanced options
Custom region
If your bucket is not in the default US Standard region, set the region with:
set :region, 'eu-west-1'
Deployment path
You can set deployment_path to select the local path to deploy, relative to the project root. Do not use a trailing slash. The default value is public.
set :deployment_path, 'dist'
Target path
You can also set a remote path relative to the bucket root using target_path. Do not use a trailing slash. The default value is empty (the bucket root).
set :target_path, 'app'
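For example, combining both settings (the values are illustrative), a local file dist/css/site.css would end up as app/css/site.css in the bucket:

# config/deploy.rb
set :deployment_path, 'dist'  # upload the contents of ./dist ...
set :target_path, 'app'       # ... under the app/ prefix in the bucket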
Write options
See the aws-sdk S3Client.put_object documentation for all available options, for both bucket_write_options and object_write_options.
Bucket-level options
capistrano-s3 sets each file's :content_type and sets :acl to public-read; add to or override these with:
set :bucket_write_options, {
cache_control: "max-age=94608000, public"
}
Object-level options
You can also set write options for files matching specific patterns using:
set :object_write_options, {
'index.html' => { cache_control: 'no-cache' }
}
or, in a more advanced scenario:
set :object_write_options, {
'assets/**' => { cache_control: 'public, max-age=86400' },
'index.html' => { cache_control: 'no-cache' }
}
NOTES:
- object_write_options are evaluated after bucket_write_options and can override them.
- Pattern matching for object_write_options is evaluated in the order of definition, and later matches down the chain override earlier ones. For example, defining
set :object_write_options, {
'assets/my-script.js' => { cache_control: 'no-cache' },
'assets/**' => { cache_control: 'public, max-age=86400' }
}
will set the Cache-Control: public, max-age=86400 header on assets/my-script.js as well!
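In other words, if you want the more specific rule to win, define the catch-all pattern first and the specific file last:

set :object_write_options, {
  'assets/**' => { cache_control: 'public, max-age=86400' },
  'assets/my-script.js' => { cache_control: 'no-cache' }
}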
Redirecting
Use :redirect_options to natively redirect (via an HTTP 301 status code) any hosted page. For example:
set :redirect_options, {
'index.html' => 'http://example.org',
'another.html' => '/test.html',
}
The redirect_options parameter takes target_path into account, so you can use the same paths regardless of its value.
A valid redirect destination should either start with the http or https scheme, or begin with a leading slash /.
Upload only compressed versions
You can configure capistrano-s3 to upload only gzipped assets (when they are present) and remove the .gz suffix. This feature comes in handy because Amazon S3 does not provide a way to decide when to serve compressed or uncompressed content depending on the Accept-Encoding header.
For example: if you have main.js and main.js.gz, capistrano-s3 will upload the compressed version as main.js to S3.
Please note:
- Only the file is renamed; the original Content-Type and a Content-Encoding: gzip header will be served.
- With this feature enabled, only compressed assets will be served. Browser support for this is pretty good, though.
Just add to your configuration:
set :only_gzip, true
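The .gz files need to exist locally before you deploy; capistrano-s3 only picks them up. One way to produce them is a pre-deploy hook, sketched here in the same Capistrano 2 style used elsewhere in this README (the file patterns and gzip flags are assumptions, adapt them to your build):

# config/deploy.rb
before 'deploy' do
  # compress js/css/html under public/, keeping the originals next to the .gz copies (gzip -k needs gzip >= 1.6)
  run_locally "find public -type f \\( -name '*.js' -o -name '*.css' -o -name '*.html' \\) -exec gzip -k -9 -f {} +"
end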
CloudFront invalidation
If you set a CloudFront distribution ID (not the URL!) and an array of paths, capistrano-s3 will post an invalidation request. CloudFront supports wildcard invalidations. For example:
set :distribution_id, "CHANGETHIS"
set :invalidations, [ "/index.html", "/assets/*" ]
The CloudFront invalidation feature takes target_path into account, so write your invalidations relative to your target_path. For example, to invalidate everything inside the remote app folder:
set :target_path, "app"
set :distribution_id, "CHANGETHIS"
set :invalidations, [ "/*" ]
If you want to wait until the invalidation batch is completed (e.g. on a CI server), you can run cap <stage> deploy:s3:wait_for_invalidation. The command will wait indefinitely until the invalidation has completed.
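If you always want the deploy to block until the invalidation finishes, you could hook the task after the deploy (a sketch, assuming Capistrano 3):

# config/deploy.rb
after 'deploy', 'deploy:s3:wait_for_invalidation'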
Exclude files and directories
You can set a list of files or directories to exclude from the upload. Paths must be relative to deployment_path; use the dir/**/* pattern to exclude directories.
set :exclusions, [ "index.html", "resources/**/*" ]
MIME types
Under the hood, capistrano-s3 uses the mime-types gem to determine the :content_type of each file based on its filename extension. The possible MIME types come back as a priority-ordered list, and by default capistrano-s3 uses the first element, the "best" match.
However, CloudFront has a list of MIME types supported by its Serving Compressed Files feature, and the two do not necessarily overlap.
For example: the "best" MIME type match for a .js file is application/ecmascript, but files with this type are not compressed by CloudFront, only the ones with application/javascript.
You can prefer CloudFront-supported MIME types over the "best" ones by setting:
set :prefer_cf_mime_types, true
Example of usage
Our Ruby stack for static websites:
- sinatra: an awesome, simple Ruby web framework
- sinatra-assetpack: handles asset management and builds static files into public/
- sinatra-export: exports all Sinatra routes into public/ as HTML or other common formats (JSON, CSV, etc.)
Mixing it all in a Capistrano task:
# config/deploy.rb
before 'deploy' do
run_locally "bundle exec rake sinatra:export"
run_locally "bundle exec rake assetpack:build"
end
See our boilerplate sinatra-static-bp for an example of the complete setup.
Migration guide
From < 2.0.0
If you have customized deployment_path, from 2.0 on use the simplified format:
# config/deploy.rb
-set :deployment_path, proc { Dir.pwd.gsub('\n', '') + '/build' }
+set :deployment_path, 'build'
If you have configured s3_endpoint to something other than the default, switch to the new syntax using region identifiers:
-set :s3_endpoint, 's3-eu-west-1.amazonaws.com'
+set :region, 'eu-west-1'
Contributing
See CONTRIBUTING.md for more details on contributing and running tests.
Credits
capistrano-s3 is maintained and funded by hooktstudios
Thanks & credits also to all other contributors.