Testing Rails applications in the life of a freelancer

Update February 13th: Join the discussion on Hacker News
If you’ve been part of the Ruby on Rails community for a long time, you’ve probably read tons of articles about testing Rails applications (less these days, though). Although there have always been diverging opinions on the matter, the common wisdom seemed to be that you had to test everything: models, controllers, views and full-stack tests. Oh, and you had to do all of this with a TDD/BDD mindset as well.
I tried to do this myself and quickly concluded it would lead me right into the abyss. You see, it took me way too long to accept that being a freelancer is not the same as being employed by a trendy company with lots of money and resources. Employees get paid anyway. If they want, they can easily convince themselves and their bosses that testing ERB templates and getting 100% code coverage is the most essential thing in the world. In the past I even heard people say that “every real developer” was striving for 100% code coverage and that “they could not even imagine” a Rails developer today not doing TDD. At some point it became a sort of religion. You had the heroes on one side, those who wrote more tests in their lives than actual code, and on the other side you had the “undesirables”, those lazy and bad programmers not committed enough to testing.
If I am all by myself, every single thing I do in my work has to bring me real value, otherwise I am losing my energy, time and money.

One day, I was writing a controller spec to make sure that calling the “index” method with a “get” request would return a 200 status code when I realized how absurd it was.

What the heck was I doing? Where was the value of this test? There was none. If the index method returns a 404, it’s because I didn’t create the damn template yet. Why would I deploy my application at this stage? Someone could object that this test will be useful if I somehow delete the index template by mistake. But come on, do we really want to write tests to defend against this kind of stuff? I know I don’t.
Even though I know there are probably ways to write more valuable controller tests, I decided to drop them and concentrate on other tests. Testing views proved to be an even greater waste of time, so I dropped those as well.
What was left for me to test? Unit and full-stack tests. Both give me value, but of the two, full-stack tests proved to be the most valuable of all.

Full-stack tests are the ones that give me the most value

For me the main purpose of testing is just to obtain an acceptable level of confidence in my overall application. I don’t want (and don’t have the time) to test every single object on every single case in every single part of the stack.
Here is my preferred and almost too simple workflow:

  1. Think about the feature
  2. Write the feature
  3. Test the feature (RSpec and Capybara)
  4. Deploy with acceptable level of confidence

The testing part is at #3, exactly where it belongs. That’s right, this means no TDD for me. That doesn’t mean TDD isn’t good, it just means it isn’t essential in order to write good and solid code. Experience and some programming skills are what it takes to do that. And whilst it’s true that I could reverse the order of steps #2 and #3, the thing is that with me the “thinking” part often blends with the “writing” part. I think through the overall feature, then I start writing and continue my thinking along the way, improving the solution I had thought up initially. When I’m happy with the result, I add my feature tests.
Also, even if full-stack tests are valuable to me, I don’t test everything. Again, time is my most precious resource; I don’t want to waste it testing mundane stuff.
My tests target the specific features I am writing on a given project. The workflow of the feature is what matters most to me. I write tests to make sure that everything happens in the correct order, and in the correct manner as it was thought up by my brain (Thinking. That’s point #1 in my workflow above!). I write the “happy path” first, then some unhappy tests to make sure that the correct error messages / feedback are given to the user.
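To make this concrete, here is a rough sketch of what such a feature test looks like with RSpec and Capybara. The page, route and messages are hypothetical, not taken from a real project:

```ruby
# spec/features/place_order_spec.rb
# Hypothetical feature spec: paths, labels and messages are made up.
require 'rails_helper'

RSpec.feature 'Placing an order' do
  scenario 'happy path: the order goes through' do
    visit new_order_path
    fill_in 'Email', with: 'jane@example.com'
    fill_in 'Quantity', with: '2'
    click_button 'Place order'
    expect(page).to have_content 'Thank you for your order!'
  end

  scenario 'unhappy path: the user gets a clear error message' do
    visit new_order_path
    click_button 'Place order'
    expect(page).to have_content "Email can't be blank"
  end
end
```

Two scenarios like these exercise the whole stack (routing, controller, model validations, views) in one place, which is exactly where the confidence comes from.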
Once I have that, I have something valuable and it’s enough for me. I can forget the project and come back a few weeks/months later with a level of confidence high enough to refactor or add new features.

Rails isn't trendy anymore. Hooray for Rails!

When Ruby on Rails was the most trendy thing in the web development world, I felt so cutting-edge! The coolest thing to develop with was Rails and I was developing with Rails. This meant I was the coolest guy living on the earth!
Things have changed. Rails is still alive and strong, but it’s not the flavor of the day anymore. As a freelancer, I am free to build my new projects with promising technologies such as MeteorJS, React or Angular. In truth, I did consider this option, as I had that fear lurking in me, you know, the fear telling me that if I stuck with Rails for too long, I would soon become a relic.
But then I remembered that I loved Ruby way more than Javascript. And I remembered how pleasant it was to work with Rails. And I remembered how proficient I had become with this framework over the years. What a waste it would be to drop it all just to use what is popular at the moment. I also believe that rendering HTML and CSS is a job for the server and that sprinkling some Javascript on top of a web application is more than enough most of the time. I still think single page applications are great and have their use cases, but they have a tendency to be used even when it feels out of place (content-based websites, apps with very little user interaction, etc.). I might be wrong, but this is where I stand today.
Today, Rails has something very valuable it didn’t have at the beginning: maturity. It feels so good to use such a polished and solid framework that has proven its merits again and again throughout the years. The community is still very strong and friendly and I’m extremely glad to be a part of it. Rails 5 will soon be released and I’m as excited as I was when Rails 3 was just around the corner.
I am going to leave you with something to meditate:

Ruby and Rails are like a couple of lovers: Ruby is the beautiful woman, the precious jewel, the inspiration. And Rails is the man, the hero, the guardian who protects the jewel and make it shine even brighter.

Now, that’s something. How poetic is that! Can we say the same about Javascript?!
Hey, even Matz thought it was poetic! 🙂

How to backup your postgres database on SpiderOak using Dokku

Now that we know how to setup a rails application using Dokku on a DigitalOcean droplet, it might be a good time to think about automating our database backups. If you haven’t read the first part, you should do it before reading any further.
Sure, you can enable weekly backups of your whole droplet on DigitalOcean (the cost is minimal), but for a database it is wiser to back up at least once a day. Let’s configure the whole thing. We are freelancers (or small development teams) and we are used to getting our hands dirty and doing stuff ourselves. It’s not a question of not having enough money to pay someone else, it’s because we are smart and resourceful! See, it already feels better when we see it in this light!
We will use SpiderOak to store our backups. Their zero-knowledge architecture will make sure our data remains private.
UPDATE: Whilst SpiderOak is not free, they offer a 60-day free trial with 2GB of storage (no credit card required). After that, the cost is $7 per month for 30 GB of storage. Thanks to NoName in the comments for asking me to clarify this point.

Create an account on SpiderOak

We will first install the client on our local workstation and create our account.
On the SpiderOak page, click on downloads
Click download link
Then, choose the correct client for your distribution:
Choose your SpiderOak client
Run the installer. You should be presented with the following screen:
Enter your info and create your SpiderOak account
Next step is to register your local computer with SpiderOak.
Finally, you will be presented a screen to select what you want to sync from your local computer to the cloud. You can leave the default options for now:

We won’t use the SpiderOak “Hive” folder

SpiderOak creates the SpiderOak Hive folder during the installation process. All files added to the Hive folder of a device are automatically synced to the Hive folder on every other device. It is a convenient way to have things running quickly without configuring shared folders manually. One problem with using the Hive for our backups is that it will sync everything. You put something personal in your Hive on your local computer and oops, it gets sent to your droplet! That doesn’t sound very good to me. For this reason, we should disable Hive folder syncing.
Still on your local workstation, go to your SpiderOak preferences:
Where are the preferences? Here!
And disable the hive:
Disable the hive
Note that if you don’t mind syncing your personal Hive on your DigitalOcean droplet, you can leave the option enabled.

Add your droplet as a SpiderOak device

Log in to your DigitalOcean droplet by typing:

ssh [email protected]

Open your sources.list file

nano /etc/apt/sources.list

And add the following line at the end:

deb http://apt.spideroak.com/ubuntu-spideroak-hardy/ release restricted

Save, exit and run

apt-get update

If you get the following error:

W: GPG error: http://apt.spideroak.com release Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY A6FF22FF08C15DD0

Look at it straight in the eye and IGNORE IT without showing mercy.
You’re now ready to install SpiderOak

apt-get install spideroakone

We must now configure SpiderOak but we don’t have any GUI on our server. What will we do? Simple, we just run the following command:

SpiderOakONE --setup=-

You will have to provide your SpiderOak login info.

Login: [email protected]
Logging in...
Getting list of devices...
id	name
1	your_local_workstation
To reinstall a device, enter the id (leave blank to set up a new device):

Don’t type any number. Simply press Enter as suggested to set up a new device. It will ask for the name of the device. Enter a descriptive name, something like myapp-droplet. Wait until the end of the syncing process. It may take several minutes, be patient!
Let’s create a folder for our DB backups

mkdir /home/dokku/db_backups

Then we include this folder in SpiderOak:

SpiderOakONE --include-dir=/home/dokku/db_backups

The output should look like this:

New config:
Current selection on device #2: u'myapp-droplet' (local)
Dir:/root/SpiderOak Hive
ExcludeFile:/root/SpiderOak Hive/.Icon.png
ExcludeFile:/root/SpiderOak Hive/Desktop.ini
ExcludeFile:/root/SpiderOak Hive/Icon
ExcludeFile:/root/SpiderOak Hive/.directory

Great, SpiderOak is all configured! Time to setup our database backups.

Create a shell script

Create a new file in /home/dokku and name it backup_db.sh. Paste the following:

#!/bin/bash
/usr/local/bin/dokku psql:dump myapp | /bin/gzip -9 > "/home/dokku/db_backups/myapp-`date +%Y-%m-%d`.dump.gz"
/usr/bin/SpiderOakONE --batchmode

Give the correct permission to the file:

chmod +x /home/dokku/backup_db.sh

As you can see, we use our Dokku postgres plugin to dump our db and we gzip the result into our db_backups folder. Then we run SpiderOakONE with the --batchmode flag to make it do its thing and shut down immediately after.

Setup a cronjob

To automate our DB backups, we’ll add a cronjob.

crontab -e

Add the following line, save and exit:

0 5 * * * /home/dokku/backup_db.sh > /home/dokku/backup_db.log 2>&1

It will run our backup script at 5am every day. That’s all we need for now. Hmm… perhaps you don’t want to wait until 5am just to test whether the script works. In that case, run the script directly.

cd /home/dokku
./backup_db.sh

The call to “SpiderOakONE –batchmode” will probably make this command run slowly. I don’t know what SpiderOak is doing exactly but sometimes it can take several minutes to complete the syncing.
Once it finally completes, go back to your local workstation to see if you can find your backup.
Your backup is here!
If you want, you can make sure you are able to restore your backup before calling it a day (have a look at the dokku psql:restore command to that end). Restoring postgres databases usually gives off warnings, but it’s generally safe to ignore them. Still, you’re better off making sure everything works as expected.
That’s it! You now have automated database backups on a zero-knowledge cloud architecture. Hope you enjoyed this tutorial! As usual, your comments are much appreciated.

Deploy your Rails applications like a pro with Dokku and DigitalOcean

UPDATE September 19th, 2016.

This tutorial has been updated to target Dokku version 0.6.5.

After reading this tutorial, you will be able to:

  • Create your own server (droplet) on the cloud using the DigitalOcean cloud architecture. (I will also share with you a link that will give you $10 credit at DigitalOcean).
  • Install your first Dokku plugin. In this case, a PostgreSQL database plugin
  • Automate your database migrations using the app.json manifest file
  • Create a swap file to prevent memory issues when using the cheapest droplet type (512 MB)
  • Setup zero downtime deployments using the CHECKS feature
  • Setup a Procfile to run your application with something better than WEBrick

I’ve tested each step of this tutorial multiple times so you should not run into any issues. If you do however, please leave me a comment at the end of this post and we will sort it out together!

Heroku has become the standard to host Ruby On Rails web applications. It is understandable because Heroku has such a great infrastructure. Deploying is a matter of typing “git push heroku master” and you’re pretty much done!
The thing is, if you are part of a small development team or you are a freelancer, the cost of using Heroku for all your clients / projects might become a real issue for you. This is where Dokku comes in! But what is Dokku?
The description on the Dokku home page is pretty self-explanatory:

The smallest PaaS implementation you’ve ever seen. Docker powered mini-Heroku in around 200 lines of Bash

So, there you have it. A “mini-heroku” that you can self-host or, better perhaps, use on an affordable cloud infrastructure such as DigitalOcean (use that previous link to get a $10 credit). Small teams and freelancers can now deploy like the pros at a fraction of the cost. Follow this tutorial and soon, you too, will be able to deploy your Rails apps simply by typing: git push dokku master. How neat is that? Sure you will have some configuring to do, but the overall process is not that complicated. This tutorial will show you how to get there.


Are you ready for the tutorial…?


First, create the droplet on DigitalOcean.
Droplet creation
Then you have to choose the size of the droplet. Let’s choose the cheapest option (Small teams and freelancers love cheap options. We’re broke!)
Be cheap
Choose your image! Don’t miss this step, it’s very important. Don’t choose a Rails preset or a Ubuntu image. Remember, we want Dokku!
Add your ssh key(s) for a more secure access to your droplet.
SSH Keys
Then select the number of droplets to create and choose a hostname
Choose a hostname
Finally, click on the “Create” button and wait until your droplet is fully created!
Waiting, I hate waiting...
The DigitalOcean part is done. Now we have to make sure we can log in to our droplet

Connect to our droplet via SSH

Open a terminal window and connect to your droplet, like this:

ssh [email protected]

Make sure the Dokku user can connect using your SSH key as well

When you deploy your app with git, the “dokku” user is used instead of root, so you need to make sure that this user can connect to your droplet. I’m not sure if this is supposed to be configured automatically when you create your droplet, but it didn’t work for me. Have a look at the file located at /home/dokku/.ssh/authorized_keys (on your droplet). If it’s empty like it was for me, run this command:

cat /root/.ssh/authorized_keys | sshcommand acl-add dokku dokku

Add a swap file!

Since we chose the cheapest option (512 MB), we will run into memory problems when we deploy our Rails application. Rails asset compilation will make your deploy fail. Don’t worry though, your web application will still be running smoothly. What’s the solution if we are determined to use our cheap 512 MB droplet? Simple, we just add a swap file as explained in this StackOverflow answer. What follows is (almost) an exact copy of that answer.
To see if you have a swap file:

sudo swapon -s

No swap file shown? Check how much disk space you have:

df -h

To create a swap file:
Step 1: Allocate a file for swap

sudo fallocate -l 2048m /mnt/swap_file.swap

Step 2: Change permission

sudo chmod 600 /mnt/swap_file.swap

Step 3: Format the file for swapping device

sudo mkswap /mnt/swap_file.swap

Step 4: Enable the swap

sudo swapon /mnt/swap_file.swap

Step 5: Make sure the swap is mounted when you reboot. First, open fstab

sudo nano /etc/fstab

Finally, add entry in fstab (only if it wasn’t automatically added)

/mnt/swap_file.swap none swap sw 0 0

Great, now we have our swap file. What’s next?

Create our application in Dokku

If you type the dokku command, the list of commands for dokku will be displayed on the screen. You should study it as it is very instructive, but for now we will simply use the dokku apps:create command to create our application.

dokku apps:create myapp

This will create a container for your new app.

Database? Sure, let’s use Postgres

To interact with a postgres database on Dokku, you need to use a plugin. I had success with this one, so:

dokku plugin:install https://github.com/Flink/dokku-psql-single-container

Once it’s installed, feel free to type the dokku command again. It will show you new commands starting with “psql”. Let’s create our database

dokku psql:create myapp

Done! But, you might ask, where are the credentials to access this database from our future Rails application? Good news! They have already been generated for you, type:

dokku config myapp

And you will be shown a list of all environment variables for your app. In this list you will find your connection string for your postgres database. Great!
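The value stored there is a standard postgres:// connection URL. For illustration, here is how such a URL encodes the different pieces (the credentials below are made up, not ones Dokku generated):

```ruby
require 'uri'

# A postgres connection string packs user, password, host, port and
# database name into one URL. These values are purely illustrative.
url = 'postgres://myapp_user:s3cret@172.17.0.2:5432/myapp'
uri = URI.parse(url)

puts uri.user        # => myapp_user
puts uri.host        # => 172.17.0.2
puts uri.port        # => 5432
puts uri.path[1..-1] # => myapp  (the leading slash stripped: the database name)
```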

Speaking of environment variables…

Thanks to Ariff in the comments for asking questions about environment variables. The following section is a recap of what was discussed in the comments.
To configure a new environment variable for a given application, you do the following:

dokku config:set myapp SOME_SECRET_VAR='hello'

Note that you don’t have to manually set the SECRET_KEY_BASE environment variable which is used in the secrets.yml file of your Rails application. This is because the ruby buildpack already does this for you. As you can see in the source code, SECRET_KEY_BASE is set to a randomly generated key (have a look at the setup_profiled and app_secret methods).
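If you’re curious what such a generated key looks like, it is essentially a long random hex string. Here is a rough equivalent in plain Ruby (not the buildpack’s exact code):

```ruby
require 'securerandom'

# Roughly what a generated SECRET_KEY_BASE looks like:
# 64 random bytes rendered as hex, i.e. a 128-character string.
secret_key_base = SecureRandom.hex(64)
puts secret_key_base.length # => 128
```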

Create our Rails app locally

Switch to your local workstation and create a new rails app.

  rails new myapp
  cd myapp
  git init .

Add a git remote to your Dokku application

   git remote add dokku [email protected]:myapp

Open your database.yml and add your Dokku environment variable:

development:
  adapter: postgresql
  database: myapp_dev
  encoding: utf8
  host: localhost
  username: somedevusername
  password: somepassword

test:
  adapter: postgresql
  database: myapp_test
  encoding: utf8
  host: localhost
  username: somedevusername
  password: somepassword

production:
  adapter: postgresql
  url: <%= ENV['POSTGRESQL_URL'] %> # This is the environment variable created by our Dokku command earlier
  encoding: unicode
  pool: 5

Off topic: Why not take this opportunity to use environment variables for all your secrets?
As for the Gemfile, make sure it has the following lines:

ruby '2.2.3' #or any other ruby version
gem 'rails_12factor', group: :production #rails library tuned to run smoothly on Heroku/Dokku cloud infrastructures
gem 'pg' #postgres gem

We will also create a default controller to have somewhat of a functioning application. On your local workstation, run:

./bin/rails g controller static_pages

Create a new file named home.html.erb in app/views/static_pages and add the following:

<p>Hello world!</p>

In routes.rb, add:

root 'static_pages#home'

Are you ready? Run bundle install, commit everything then type:

git push dokku master

If you did everything correctly, you should see something like this after you push to dokku (I edited the output to keep it brief):

Counting objects: 3, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (2/2), 282 bytes | 0 bytes/s, done.
Total 2 (delta 1), reused 0 (delta 0)
-----> Cleaning up...
-----> Building myapp from herokuish...
-----> Adding BUILD_ENV to build environment...
-----> Ruby app detected
-----> Compiling Ruby/Rails
-----> Using Ruby version: ruby-2.2.3
-----> Installing dependencies using bundler 1.9.7
       Running: bundle install --without development:test --path vendor/bundle --binstubs vendor/bundle/bin -j4 --deployment
=====> myapp container output:
       [2015-12-03 15:06:47] INFO  WEBrick 1.3.1
       [2015-12-03 15:06:47] INFO  ruby 2.2.3 (2015-08-18) [x86_64-linux]
       [2015-12-03 15:06:47] INFO  WEBrick::HTTPServer#start: pid=8 port=5000
-----> Setting config vars
       NO_VHOST: 1
-----> Creating http nginx.conf
-----> Running nginx-pre-reload
       Reloading nginx
-----> Setting config vars
-----> Shutting down old containers in 60 seconds
=====> 69d9e4f83ec73f18c72a81d13cf42525ba2369cda3b440555b57a0cb9bf7835a
=====> Application deployed:
       http://dokku:32772 (container)
       http://dokku:80 (nginx)

Open a browser and type your droplet IP address + the port shown above, in this case 32772. You should see an ugly “Hello world!”, congratulations!

WEBrick? Nah, let’s use Puma

We didn’t specify a Ruby application server, so the default WEBrick was used. WEBrick is OK in development, but in production it is not a good idea. On your local workstation, create a new file at config/puma.rb:

workers 1 # Change this to match the number of cores in your server. Our cheap (but nice) 512 MB droplet gives us only 1 core
threads_count = Integer(ENV['MAX_THREADS'] || 5)
threads threads_count, threads_count
rackup      DefaultRackup
port        ENV['PORT']     || 3000
environment ENV['RACK_ENV'] || 'development'
on_worker_boot do
  # Worker specific setup for Rails 4.1+
  # See: https://devcenter.heroku.com/articles/deploying-rails-applications-with-the-puma-web-server#on-worker-boot
  ActiveRecord::Base.establish_connection
end

Now let’s create a Procfile to tell Dokku we want to use Puma instead of WEBrick.
Create a file at the root of your Rails app called Procfile (notice the capital P) and enter the following line:

web: bundle exec puma -C config/puma.rb

Open your Gemfile and add the puma gem:

gem 'puma'

Run bundle install. Commit everything and push to dokku once more. Have a look at the output, you should see something like this:

=====> myapp web container output:
       [8] Puma starting in cluster mode...
       [8] * Version 3.3.0 (ruby 2.3.0-p0), codename: Jovial Platypus
       [8] * Min threads: 5, max threads: 5
       [8] * Environment: production
       [8] * Process workers: 1
       [8] * Preloading application
       [8] * Listening on tcp://
       [8] Use Ctrl-C to stop
       [8] - Worker 0 (pid: 162) booted, phase: 0

If for some reason you don’t see anything about Puma in the output, try looking at the logs by typing:

ssh [email protected] dokku logs myapp

Configure pre-flight checks

Something might have caught your attention when we deployed our application:

-----> Running pre-flight checks
       For more efficient zero downtime deployments, create a file CHECKS.
       See http://progrium.viewdocs.io/dokku/checks-examples.md for examples
       CHECKS file not found in container: Running simple container check...

Checks in Dokku are a way to set up zero downtime deployments. You don’t want your users to get an error page while your server is restarting. Since we have not created any custom check, Dokku runs a default check that simply makes sure the new container is up and running before pointing to the new app. The problem is that it will not check whether Puma has fully loaded. Let’s create a super simple check to make sure our Rails application is available.

At the root of your app, create a file named CHECKS and add the following:

WAIT=8 #Wait 8 seconds before each attempt
ATTEMPTS=6 #Try 6 times, if it still doesn't work, the deploy has failed and the old container (app) will be kept
/check_deploy.txt deploy_successful

Important: Leave an empty line at the end of this file, otherwise Dokku might not detect your check. Is this a bug? I don’t know… but it took me a while to figure this one out!

Now create a file named check_deploy.txt in your Rails public directory and add the text:

deploy_successful

In other words, Dokku will try 6 times to obtain the “deploy_successful” string after calling “/check_deploy.txt”.
Push everything to dokku and verify the output. You will probably see something like this:

-----> Running pre-flight checks
-----> Attempt 1/6 Waiting for 8 seconds ...
       CHECKS expected result:
       http://localhost/check_deploy.txt => "deploy_successful"
-----> All checks successful!

Domains and VHOST

So far we’ve been relying on our droplet IP address to access our application. It’s better to have a real domain name pointing to DigitalOcean. I invite you to read this tutorial to set up your domain name correctly with DigitalOcean.

Database migrations

Before Dokku 0.5, it was not really possible to have your database migrations run automatically on deploy. You had to do it in two steps. First you deploy, then you migrate by typing: ssh [email protected] dokku run myapp rake db:migrate

Fortunately, we can automate the process now that Dokku supports the app.json manifest file. Create an app.json file in the root of your repository and add this:

{
  "name": "myapp",
  "description": "Dummy app to go along the dokku tutorial found on rubyfleebie.com",
  "keywords": [],
  "scripts": {
    "dokku": {
      "postdeploy": "bundle exec rake db:migrate"
    }
  }
}
Let’s create a dummy model to see if the migrations will be run.

./bin/rails g model Book

Then we commit everything and push to dokku. The output should look like this:

-----> Running post-deploy
-----> Attempting to run scripts.dokku.postdeploy from app.json (if defined)
-----> Running 'rake db:migrate' in app container
       restoring installation cache...
       Migrating to CreateBooks (20160405194531)
       == 20160405194531 CreateBooks: migrating ======================================
       -- create_table(:books)
          -> 0.0139s
       == 20160405194531 CreateBooks: migrated (0.0142s) ==========

How cool is that? I hope you enjoyed this tutorial. Your comments are appreciated!


Dushyant in the comments had some errors on deploy. He found out that his problem was related to the number of containers configured when using DigitalOcean’s $5 plan. I didn’t run into this problem myself, so here is what Dushyant says about it:
« Finally I found the solution. My previous solution got me working but ultimately that wasn’t the true solution.
It is happening because of containers and because of 5 dollar plan.
You can get the list of containers by this command
docker ps
Then remove the unwanted containers
docker rm -f docker_id »

What’s next?

How about automating your database backups and storing them on a zero-knowledge cloud architecture?

My brief rant against SPA

I know I sound like an old coot who’s afraid of change, but there is something that doesn’t feel right to me about the new Single Page Application craze. It’s like all of a sudden, we’ve decided that web applications are crappy and that we want a return to traditional client/server applications. It’s as if we cannot tolerate a page refresh anymore and want to manage the state of our applications like we were doing in 1995!
Would StackOverflow be better for the end user if it was built with Angular or Meteor? Nope. There are use cases where SPAs shine, I can understand that. But for a standard web application or a website? I don’t believe it is needed. The web has been built around the idea of resources. Each URL on the web is supposed to be a document, not an app that you download. We have to come to our senses here. We should not work against the fundamentals of the WWW. If we destroy the basics of the web because of our thirst to create rich client applications, we’re making a terrible mistake.

Keeping secrets secret without using .gitignore

In the past I used to keep all files containing sensitive data (passwords, API keys and other secrets) out of my git repository. For example, I would add database.yml to the .gitignore file. Then I would put my database.yml on my production server in the « shared » folder. Finally I would ask Capistrano to create a symlink to that file in the deployment recipe. This worked fine for me for a while.
But one day, a small gnome came into my home office, all dancing and laughing. So I said: « what are you doing here, little gnome? ». He first told me that I should not speak to him aloud like that, as it would probably scare my wife and my kids and they would start being concerned about my mental health. « Good point », I kept to myself. Then he told me: « There is a better way! You don’t have to rely on .gitignore if you don’t want to expose your secrets. Let me show you how… »
Since that day, my application files that contain secrets are in my repository. And, as you probably guessed, I use environment variables in those files instead of plain-text secrets, exactly as suggested in the secrets.yml file of a Rails project. You have probably seen it already:

# Do not keep production secrets in the repository,
# instead read values from the environment.
production:
  secret_key_base: <%= ENV["SECRET_KEY_BASE"] %>
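This works because Rails runs its YAML config files through ERB before parsing them. The mechanism itself is plain Ruby and easy to sketch outside of Rails:

```ruby
require 'erb'
require 'yaml'

# Simulate what Rails does with secrets.yml / database.yml:
# render the file through ERB first, then parse the resulting YAML.
ENV['SECRET_KEY_BASE'] = 'supersecret'

template = <<~YML
  production:
    secret_key_base: <%= ENV["SECRET_KEY_BASE"] %>
YML

config = YAML.safe_load(ERB.new(template).result)
puts config['production']['secret_key_base'] # => supersecret
```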

Hey, it’s not rocket science! Why a blog post about this!?
Because I struggled at first to set these environment variables correctly. How do you set the variables in development? How do you set them in production?
At first I tried using the rbenv-vars plugin both in development and in production. rbenv-vars is a simple plugin for rbenv that lets you declare environment variables in a straightforward manner. You just create a .rbenv-vars file in your application directory that looks like this:

DB_USER=db user
DB_PASS=db password
aws_access_key_id=some access key here
aws_secret_access_key=secret access key here

And you’re ready to go (as long as you use rbenv!). Of course this file should not be in your repository as it contains secrets. And since the file is not really a part of your application to begin with, adding it to .gitignore makes complete sense. Anyway, to the point. rbenv-vars worked perfectly on my development machine, so I decided to use it in production, but it didn’t work very well this time.
The Phusion Passenger gotcha
On one of my production servers, I use Phusion Passenger, and no matter what I tried it would not set the environment variables configured in my .rbenv-vars file. I know others had success with this approach, but it didn’t work for me.
If you use Phusion Passenger (>= 5.0) like me and rbenv-vars doesn’t work for you either, just set the environment variables in your nginx or apache configuration file, like this (I use nginx):

server {
  listen 80;
  server_name mygreatapp.com;
  root /home/username/apps/mygreatapp/current/public;
  passenger_ruby /home/username/.rbenv/versions/2.2.4/bin/ruby;
  passenger_enabled on;
  passenger_env_var SECRET_KEY_BASE supersecret;
  passenger_env_var DB_USER mydbuser;
  passenger_env_var DB_PASSWORD shhhItsSecret;
}

All in all I’m really happy with this approach. It allows me to keep files like secrets.yml and database.yml in my repository (instead of gitignoring them) without exposing passwords or other secrets.