Fixing PostgreSQL locale issues on Ubuntu/Debian

PGError: ERROR: encoding UTF8 does not match locale en_US
DETAIL: The chosen LC_CTYPE setting requires encoding LATIN1.

You are probably here because you encountered an error like the one above. After a bit of searching, I found and tried various methods from random articles and gists, but none worked as well as what I'm about to document, which is why I'm blogging about it for future reference.

Before we proceed, a word of caution – back up your database. This has NOT been tried in production, so use at your own risk!

1) Update the default locale on your system

Note: These commands are specific to Debian/Ubuntu.
Source: https://blog.lnx.cx/2009/08/13/fixing-my-missing-locales/

$ sudo locale-gen en_US.UTF-8
$ sudo update-locale LANG=en_US.UTF-8

2) Now let's recreate template1 based on the new locale

First we’ll need to switch to the database user. Typically that’ll be “postgres”

$ sudo su postgres
$ psql

OR

$ psql -U postgres

Next issue the following set of commands in order:

# UPDATE pg_database SET datallowconn = TRUE WHERE datname = 'template0';
# \c template0
# UPDATE pg_database SET datistemplate = FALSE WHERE datname = 'template1';
# DROP DATABASE template1;
# CREATE DATABASE template1 ENCODING = 'UTF8' TEMPLATE = template0 LC_CTYPE = 'en_US.utf8' LC_COLLATE = 'en_US.utf8';
# UPDATE pg_database SET datistemplate = TRUE WHERE datname = 'template1';
# \c template1
# UPDATE pg_database SET datallowconn = FALSE WHERE datname = 'template0';
# \q

If you didn’t see any errors, you should now be all set. You can now go ahead and log out of the postgres user account:

$ exit
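
To double-check, list your databases and confirm that template1 now reports UTF8 encoding with the en_US.utf8 collation:

$ psql -U postgres -l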

URL Validation with KnockoutJS

I recently found myself having to implement URL validation in KnockoutJS. Here’s how I went about doing that:

1) Add the Knockout-Validation plugin to your project.

2) Create a custom rule to validate URLs based on the regex by Diego Perini, found at http://mathiasbynens.be/demo/url-regex:

ko.validation.rules['url'] = {
  validator: function(val, required) {
    if (!val) {
      return !required;
    }
    val = val.replace(/^\s+|\s+$/g, ''); // Strip leading/trailing whitespace
    // Regex by Diego Perini from: http://mathiasbynens.be/demo/url-regex
    return !!val.match(/^(?:(?:https?|ftp):\/\/)(?:\S+(?::\S*)?@)?(?:(?!10(?:\.\d{1,3}){3})(?!127(?:\.\d{1,3}){3})(?!169\.254(?:\.\d{1,3}){2})(?!192\.168(?:\.\d{1,3}){2})(?!172\.(?:1[6-9]|2\d|3[0-1])(?:\.\d{1,3}){2})(?:[1-9]\d?|1\d\d|2[01]\d|22[0-3])(?:\.(?:1?\d{1,2}|2[0-4]\d|25[0-5])){2}(?:\.(?:[1-9]\d?|1\d\d|2[0-4]\d|25[0-4]))|(?:(?:[a-z\u00a1-\uffff0-9]+-?)*[a-z\u00a1-\uffff0-9]+)(?:\.(?:[a-z\u00a1-\uffff0-9]+-?)*[a-z\u00a1-\uffff0-9]+)*(?:\.(?:[a-z\u00a1-\uffff]{2,})))(?::\d{2,5})?(?:\/[^\s]*)?$/i);
  },
  message: 'This field has to be a valid URL'
};
ko.validation.registerExtenders();

3) You are all set. You can now use it in either of these ways:

JS:

var someURL = ko.observable().extend({ url: true });

HTML:

<input type="text" data-bind="value: someURL.extend({ url: true })" />
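
To see the rule in action, here's a quick console sketch (Knockout-Validation adds an isValid computed to extended observables):

var someURL = ko.observable().extend({ url: true });

someURL('not a url');
someURL.isValid();           // false

someURL('https://example.com');
someURL.isValid();           // true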

Ensuring reliable email delivery with DNS records

If your web app is sending emails, you will need to have these three DNS records configured properly to ensure reliable email delivery and avoid ending up in spam/junk folders:

PTR – A pointer to a canonical name. Unlike a CNAME, DNS processing does NOT proceed past it; it simply resolves an IP address to a fully-qualified domain name (FQDN), so just the name is returned. It's also known as a Reverse DNS Record.

The most common use is implementing reverse DNS lookups to check whether the server name is actually associated with the IP address the connection was initiated from. Here's what it would look like, using a hypothetical mail server mail.example.com at 203.0.113.25:
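
; reverse zone entry mapping 203.0.113.25 back to mail.example.com
25.113.0.203.in-addr.arpa. 3600 IN PTR mail.example.com.

$ dig +short -x 203.0.113.25
mail.example.com.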

SPF – Sender Policy Framework (SPF) is an email validation system designed to prevent spam by detecting email spoofing, a common vulnerability, through verification of sender IP addresses.

Microsoft has a great 4-step wizard to guide you through creating an SPF record.

If you have a Google Apps domain and you are using a 3rd party email delivery service such as SendGrid, here’s what it would typically look like:

"v=spf1 a mx include:_spf.google.com include:sendgrid.net ~all"

without Google Apps:

"v=spf1 a mx include:sendgrid.net ~all"

DKIM – DomainKeys Identified Mail (DKIM) is a method for associating a domain name with an email message, thereby allowing a person, role, or organization to claim some responsibility for the message. The association is set up by means of a digital signature which can be validated by recipients.

This is usually set up by the email delivery service you are using. Check their account settings to make sure it's turned on.

Here are some more useful links:

How can I check if my DKIM and SPF records are valid? by Postmark
What are SPF and DKIM and do I need to set them up? by Mandrill
How do I add DNS records for my sending domains? by Mandrill
Email Deliverability Guide by SendGrid

How to setup a dev environment using Vagrant & Puppet (Part II)

This is a continuation of my previous post on how to get up and running with Vagrant.

Now let's get down to specifics and see how to install individual modules.

PostgreSQL

To install PostgreSQL, I've chosen to use this module from GitHub. Let's go ahead and add it as a submodule. Run this command from the git root folder:

git submodule add https://github.com/akumria/puppet-postgresql.git puppet/modules/postgresql

Also keep in mind that the destination directory should have the same name as the module's main class; in this case, postgresql.

You can now add this module to the default.pp in your puppet/manifests folder to start using it:

class { 'postgresql': }

That's the most basic way to get started, but to make things easier we should also add any databases and users we'll need. And if you plan to access PostgreSQL from your host machine, you'll also need to open up the port and allow incoming traffic from the host.

So taking those things into consideration, here’s what it would end up looking like:

class install_postgres {
  class { 'postgresql': }

  class { 'postgresql::server':
    listen => ['*', ],
    port   => 5432,
    acl    => ['host all all 0.0.0.0/0 md5', ],
  }

  pg_database { ['some-project_database']:
    ensure   => present,
    encoding => 'UTF8',
    require  => Class['postgresql::server']
  }

  pg_user { 'some-project-user':
    ensure    => present,
    require   => Class['postgresql::server'],
    superuser => true,
    password  => 'some-password'
  }

  pg_user { 'vagrant':
    ensure    => present,
    superuser => true,
    require   => Class['postgresql::server']
  }

  package { 'libpq-dev':
    ensure => installed
  }

  package { 'postgresql-contrib':
    ensure  => installed,
    require => Class['postgresql::server'],
  }
}
class { 'install_postgres': }

Finally in your Vagrantfile, make sure you are forwarding that port to whichever port you want on your host machine:

config.vm.network :forwarded_port, guest: 5432, host: 5433
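
Once the machine is provisioned, you should be able to connect from your host through the forwarded port, using the example user and database names from the manifest above:

$ psql -h localhost -p 5433 -U some-project-user some-project_database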

Ruby

Now let's take a look at how to go about installing Ruby. Instead of a package, we'll just do this manually. First, let's define a variable called $as_vagrant, a reusable command prefix that lets us run commands as the vagrant user, along with $home, pointing at the vagrant user's home directory (which the later snippets reference):

$as_vagrant = 'sudo -u vagrant -H bash -l -c'
$home       = '/home/vagrant'

We also need to make sure curl is installed:

package { 'curl':
  ensure => installed
}

Now we can install RVM (note the dependency on the curl package):

exec { 'install_rvm':
  command => "${as_vagrant} 'curl -L https://get.rvm.io | bash -s stable'",
  creates => "${home}/.rvm/bin/rvm",
  require => Package['curl']
}

Next up, install Ruby 2.0.0 (note the dependency on our previous install_rvm command):

exec { 'install_ruby':
  # We run the rvm executable directly because the shell function assumes an
  # interactive environment, in particular to display messages or ask questions.
  # The rvm executable is more suitable for automated installs.
  #
  # Thanks to @mpapis for this tip.
  command => "${as_vagrant} '${home}/.rvm/bin/rvm install 2.0.0 --latest-binary --autolibs=enabled && rvm --fuzzy alias create default 2.0.0'",
  creates => "${home}/.rvm/bin/ruby",
  require => Exec['install_rvm']
}

Finally, install Bundler:

exec { "${as_vagrant} 'gem install bundler --no-rdoc --no-ri'":
  creates => "${home}/.rvm/bin/bundle",
  require => Exec['install_ruby']
}
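
Once provisioning finishes, you can sanity-check the toolchain (run inside the VM after vagrant ssh):

$ ruby -v && bundle -v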

That's it! If you are a Rails dev, I would highly recommend checking out the rails-dev-box project on GitHub, which is where the above snippets are from.

Python

For Python, I've chosen to use this module, which makes installation very easy.

Let’s add that as a submodule:

git submodule add https://github.com/stankevich/puppet-python.git puppet/modules/python

Now all you have to do is add this declaration:

class { 'python':
  version    => 'system',
  virtualenv => true,
  pip        => true,
  dev        => true
}

You can check out more parameters/arguments on their GitHub page.

If you’d like to learn more about creating and consuming modules, I would also recommend checking out Erika Heidi’s site for in-depth blog posts and talks.

Deploy

Finally you are now ready to fire up your new Vagrant machine. Just run the following commands:

$ vagrant up
$ vagrant provision
$ vagrant ssh

You can now go to the /vagrant folder, where you should see the project folder that you cloned. This folder is shared with your host, so you can open your favorite editor on the host and start hacking away.

Share

Newer versions of Vagrant also include the ability to share, distribute and discover your Vagrant boxes. I had been sharing my Vagrant machine using ngrok, a utility that I mentioned in my previous post about useful web development tools. With the release of vagrant share, you can skip that and just do:

vagrant share --ssh

That will create a random shareable web URL such as http://hulking-chameleon-3934.vagrantshare.com which would point to your Vagrant instance.

This is very handy for sharing results with co-workers or for remote debugging. On the flip side, you can also SSH into other Vagrant machines:

vagrant connect --ssh hulking-chameleon-3934

More details on how this works can be found on their site.

That's it, I think I've covered the basics. Let me know if you'd like me to cover anything else.

How to setup a dev environment using Vagrant & Puppet (Part I)

Vagrant

Vagrant is a great tool for spinning up different dev environments without polluting your local machine with gems and packages from different projects. It is also great for sharing the same environment across your entire team without having to go through the pain of manual setup every time you on-board a new team member.

You can use providers/platforms such as VMware, VirtualBox, AWS, etc. to run your VM. If you are a first-time user, installing VirtualBox is probably your best option to get started.

After you've installed VirtualBox and Vagrant, let's create a folder where we'll initialize a base machine.

$ mkdir my-project-dev-box
$ cd my-project-dev-box
$ git init
$ vagrant init precise32 http://files.vagrantup.com/precise32.box

Those commands will create a plain Ubuntu 12.04 box and generate a Vagrantfile in the my-project-dev-box folder. They will also initialize a git repository, since we'll need that later down the line to add modules.

But the first thing we'll do now is add a custom hostname in the config block, to avoid getting confused if you spin up more Vagrant machines in the future. I like to have the project's name in the hostname (same as the folder name) so you can easily identify it:

config.vm.hostname = 'some-project-dev-box'

Since we’ll be using this for a particular dev environment, we need more than just a plain Ubuntu box. So let’s look at how we’d go about installing other packages and software.

Vagrant supports many different provisioners/IT automation tools such as Ansible, Chef, Puppet, or even a plain old shell script to help us automate this process. For the purpose of this post, we’ll use Puppet since IMHO it has the easiest learning curve.

Puppet

To use Puppet for provisioning, add this block in the Vagrantfile:

config.vm.provision :puppet do |puppet|
    puppet.manifests_path = 'puppet/manifests'
    puppet.module_path    = 'puppet/modules'
    puppet.options        = '--verbose'
end

So as you might have guessed, we'll need a puppet directory with two sub-directories, manifests and modules. The manifests directory will contain a default.pp file, which tells Puppet which modules to install and how:

|   Vagrantfile
\---puppet
    +---manifests
    |       default.pp
    \---modules

Modules are basically just re-usable instructions for how to install a specific piece of software. You can create your own modules or just grab third-party modules from Example42's repository, Puppet Labs' official repository, or from other folks on GitHub.

There are basically three ways of installing/using these modules: you can clone or just copy/paste the module into the modules folder, add it as a git submodule, or use the puppet module tool. I've never tried the last one since it doesn't work on Windows.
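
For reference, here's a rough sketch of what the module-tool route looks like (untested by me, as noted, and puppetlabs-stdlib is just an example module name):

$ puppet module install puppetlabs-stdlib --target-dir puppet/modules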

This is where I’ll stop for now. In my next post, I’ll talk about installing specific modules and how to overcome some common gotchas while configuring them. Meanwhile, here’s what our basic Vagrantfile looks like at this point:

Vagrant.configure('2') do |config|
  config.vm.box      = 'precise32'
  config.vm.box_url  = 'http://files.vagrantup.com/precise32.box'
  config.vm.hostname = 'some-project-dev-box'

  config.vm.provision :puppet do |puppet|
    puppet.manifests_path = 'puppet/manifests'
    puppet.module_path    = 'puppet/modules'
  end
end

If you wanted to, you could boot your VM and ssh into it just like any other Ubuntu machine:

$ vagrant up
$ vagrant ssh

Although at this point, there’s not much to see. We’ll get to the fun stuff in part II. Stay tuned!

Useful tools for web development

This time I’m trying something different. I want to talk about some dev tools that I find indispensable in my day-to-day coding tasks:

Vagrant – Helps you spin up different VMs to create isolated dev environments. Definitely worth checking out if you find yourself juggling many different projects at once. (Check out my two-part blog post on how to get started.)

ngrok – Ever found yourself needing to test your local app with external callbacks such as Facebook or Twitter auth? Well this lets you securely expose your localhost to internet traffic to do just that. No need to constantly deploy your app to a staging environment just to test callbacks.

RequestBin – Opposite of ngrok. It lets you quickly test and inspect webhooks or your outgoing HTTP requests to make sure they are formatted correctly before you deploy them to production.

Postman – One of the best Chrome extensions out there for testing and interacting with REST APIs.

ExpanDrive – Elegant solution to mount an external VPS or S3 to your local filesystem. It’s not free but it’s worth it.

HeidiSQL – One of the most lightweight and fastest SQL clients out there. It gets the job done without a lot of fuss.

Notepad++ – I use different IDEs for various projects ranging from Visual Studio to RubyMine but this has been a constant companion for doing quick text manipulations or editing small chunks of code without opening up clunky IDEs.

Awesome Cookie Manager – When testing an app, there's no need to log out. Just delete the cookie from the browser itself, or change it to a certain value to see if your app responds to it correctly.

smtp4dev or FakeSMTP – These two will help you test emails in your web app without actually sending them out.

JSFiddle – No introduction needed here. If you’ve been developing for a while I’m sure you’ve come across this. Best way to demo your code snippet or get some help fixing a bug.

htaccess tester – Name says it all. Quick and easy way to test your .htaccess files to make sure they are working as intended.

RegExr – Beautifully designed online tool to learn, build, & test regular expressions, with a rich feature set including real-time results, sharing, saving, an examples library, etc.

StackEdit – One of the best markdown editors out there with live previews. Very handy for writing readme files or wikis on GitHub.

explainshell – Found a command but not quite sure what it does? Before executing it, run it through explainshell to make sure it's doing what you intended it to do.

I would love to hear from you. What are some of your favorite dev tools?

Automated deployments with git-flow, heroku_san & CircleCI

Here’s a simple workflow that I like to use to quickly get automated deployments working on any RoR project running on Heroku.

git-flow

First we need to establish a good git branching process and for that I always use git-flow.

If you are not familiar with it, I highly recommend reading A successful Git branching model which explains what it’s all about.

Basically it's just a command-line wrapper for several git commands. For example, the git-flow command git flow feature start cool_stuff will create a new branch called feature/cool_stuff based on your develop branch and switch to it.

It's super easy to use. It establishes best practices out of the box and helps you leverage the power of git.
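
If you haven't used it before, setting it up in an existing repository is a one-liner (the -d flag accepts the default branch names, giving you master and develop):

$ git flow init -d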

heroku_san

Once you have that set up, we can move on to setting up heroku_san. It makes maintaining multiple environments (staging, production, demo) a breeze by providing a simple set of rake tasks.

Install the gem

group :development do
    gem 'heroku_san'
end

and run the generator

rails generate heroku_san

which will generate /config/heroku.yml. You can tweak this according to your needs. In our case:

production:
  app: liveapp
  stack: cedar
  config:
    RACK_ENV: production
    RAILS_ENV: production
    BUNDLE_WITHOUT: "development:test"

staging:
  app: stagingapp
  stack: cedar
  config:
    RACK_ENV: staging
    RAILS_ENV: staging
    BUNDLE_WITHOUT: "development:test"

Now you could run rake staging deploy at any time to deploy your code to the stagingapp on Heroku, but we want to automate that.

CircleCI

That brings us to the last part. Here you could swap CircleCI for any other CI server such as Jenkins, Codeship, Travis, or Semaphore; they should all work similarly.

In this case, go ahead and sign up for CircleCI and add your project to it. After that, follow this guide to deploy to Heroku.

Once you’ve set that up, create a circle.yml file at the root of your project so it knows what to do when you push your code:

## Customize deployment commands
deployment:
  production:
    branch: master
    commands:
      - bundle exec rake production deploy
  staging:
    branch: develop
    commands:
      - bundle exec rake staging deploy

We are basically telling it to run the heroku_san deploy tasks once your test suite passes.

In Action

So now that we’ve got it all dialed in, we can either push to develop or master and the code will be deployed to the correct app on Heroku.

Pushing to staging:

git push origin develop

That's all it takes; your app will automatically be deployed to your stagingapp on Heroku. CircleCI recognizes that we pushed to the develop branch based on our configuration in circle.yml.

OR

Pushing to production:

git flow release start 1.0.0
git flow release finish 1.0.0
git push origin master develop --tags

In this case, git-flow will create a release branch/tag which will be merged into master and pushed to GitHub. The CI server will pick that up and immediately trigger a production deploy, since we pushed to master.

Using Postgres Hstore with Rails 4

I'm a huge fan of PostgreSQL given its performance and stability. One of its great features is the ability to store hashes in tables using the hstore column type AND to query them! This is perfect for a column like user settings, where you may need to add or change the type of settings you need as your app evolves. Here are some steps and gotchas that I came across while using this column type.

Setup

So first let’s start by creating a migration which would enable the Hstore extension on our database:

class AddHstore < ActiveRecord::Migration
  def up
    execute 'CREATE EXTENSION hstore'
  end
 
  def down
    execute 'DROP EXTENSION hstore'
  end
end

Next, you'll need to change your schema dump format from ruby to sql, as the schema for hstore can't be represented by ruby.

In config/application.rb:

config.active_record.schema_format = :sql

I've also come across a cleaner way to enable this without using the native execute command (although I never got it to work):

class SetupHstore < ActiveRecord::Migration
  def self.up
    enable_extension "hstore"
  end
  def self.down
    disable_extension "hstore"
  end
end

With this approach you don't even have to change the schema_format; it will automatically add enable_extension "hstore" to your schema.rb file. Again, I've yet to see this work, but I'd try this first before falling back to the native sql approach.

Usage

Now you are ready to start using this column. Let's add a settings column to our users table.

class AddSettingsToUsers < ActiveRecord::Migration
  def change
     add_column :users, :settings, :hstore  
  end
end

If you know which keys you'll store in the settings column, you can define accessors in your model and even validate them:

class User < ActiveRecord::Base
  store_accessor :settings, :theme, :language
 
  validates :theme, inclusion: { in: %w(white red blue) }
  validates :language, inclusion: { in: I18n.available_locales.map(&:to_s) }
end
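
With those accessors in place, the keys behave much like regular attributes. A quick console sketch:

user = User.new(theme: 'white', language: 'en')
user.theme     # => "white"
user.language  # => "en"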

However, if you don't know what kind of keys the client will send to your controller, here's a way to receive dynamic key/value pairs using strong params:

def some_params
  params.require(:some).permit(:name, :title).tap do |whitelisted|
    whitelisted[:settings] = params[:some][:settings]
  end
end

We are now ready to query our users based on their settings:

User.where("settings -> 'language' = 'en'")

More query syntax can be found here in the hstore documentation.

Indexes

hstore provides index support for a few select operators such as @>, ?, ?& or ?|. So if you are planning to use any of those, it's probably a good idea to add an index for better performance:

class AddIndexToUserSettings < ActiveRecord::Migration
  def up
    execute 'CREATE INDEX users_settings ON users USING gin(settings)'
  end

  def down
    execute 'DROP INDEX users_settings'
  end
end

Alternatively you can use the native DSL to accomplish the same thing:

class AddIndexToUserSettings < ActiveRecord::Migration
  def up
    add_index :users, [:settings], name: "users_gin_settings", using: :gin
  end

  def down
    remove_index :users, name: "users_gin_settings"
  end
end

I generally tend to use a GIN index, as lookups are three times faster than with GiST indexes, although they take three times longer to build. You can check out the documentation to decide which is the best fit for your needs.

Heroku

If you are using Heroku, there is one small issue I ran across. As a consequence of switching to the sql schema format, I kept getting this error when Heroku ran db:migrate at the end of each deployment:

No such file or directory - pg_dump -i -s -x -O -f /app/db/structure.sql

It's trying to dump a new structure.sql with the pg_dump command, which is not present in Heroku's environment. You really don't need that in production anyway, so the way to get around it is to silence that task in any environment other than development by adding this to your Rakefile:

Rake::Task["db:structure:dump"].clear unless Rails.env.development?

That's a wrap! Let me know if I missed anything.

Fixing CPU spikes in Ubuntu 11.10

If you find your CPU usage spiking in Ubuntu 11.10, it may very well be an issue caused by php5's cron task.

To confirm, first use top to see if fuser is taking up the most CPU and creating hundreds of zombie processes. If that's the case, here's the fix:

Open /etc/cron.d/php5 and you should see something like this:

09,39 *     * * *     root   [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete

Go ahead and comment that out and replace it with:

09,39 *     * * *     root   [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete

The difference is that the 11.10 version runs fuser on each PHP session file, thus using all the CPU when there are hundreds of sessions.

Save that and reboot the instance to regain normalcy.

All credit goes to grazer for suggesting this solution.

Architecting RESTful Rails 4 API

This is a follow-up to my previous post about Authentication with Rails, Devise and AngularJS. This time we'll focus more on some aspects of the API implementation. One thing to keep in mind: where my previous example used Rails 3.2, I'm using Rails 4 this time around.

Versioning

If you are building an API which could potentially be consumed by many different clients, it's important to version it to provide backward compatibility. That way, clients can catch up on their own time while newer versions are rolled out. Here's what I've found to be the most recommended approach:

/config/routes.rb

namespace :api, defaults: { format: :json } do
  scope module: :v1, constraints: ApiConstraints.new(version: 1, default: :true) do
    devise_scope :user do
      match '/sessions' => 'sessions#create', :via => :post
      match '/sessions' => 'sessions#destroy', :via => :delete
    end
  end
end

So what we are doing there is wrapping our routes in an api namespace, which gives us some separation from the admin and client-side routes by giving them their own top-level /api/ segment. To avoid being too verbose, though, we'll leave the version number out of the URLs; so instead of a namespace, we turn the version module into a scope block. This raises a question: how can the client request a specific version? That's where constraints come in.

/lib/api_constraints.rb

class ApiConstraints
  def initialize(options)
    @version = options[:version]
    @default = options[:default]
  end

  def matches?(req)
    @default || req.headers['Accept'].include?("application/vnd.myapp.v#{@version}")
  end
end

This constraint basically says that if the client wants anything besides the default version of our API (in this case v1), they can send us an Accept header indicating that.
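
For example, a client could opt into a hypothetical v2 like this (myapp.example is a placeholder host):

$ curl -X POST https://myapp.example/api/sessions \
       -H "Accept: application/vnd.myapp.v2" \
       -H "Content-Type: application/json" \
       -d '{"user": {"email": "someone@example.com", "password": "secret"}}'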

Next, let's take a look at our controller.

/app/controllers/api/v1/sessions_controller.rb

class Api::V1::SessionsController < Devise::SessionsController
    protect_from_forgery with: :null_session, :if => Proc.new { |c| c.request.format == 'application/vnd.myapp.v1' }

    def create
      warden.authenticate!(:scope => resource_name, :recall => "#{controller_path}#failure")
      render :json => { :info => "Logged in", :user => current_user }, :status => 200
    end

    def destroy
      warden.authenticate!(:scope => resource_name, :recall => "#{controller_path}#failure")
      sign_out
      render :json => { :info => "Logged out" }, :status => 200
    end

    def failure
      render :json => { :error => "Login Credentials Failed" }, :status => 401
    end
end

Nothing fancy there. Pretty much what I covered in my previous post, i.e. overriding the default Devise controller for more control.

Documenting

Having an API means you'll have clients which will need to know how to consume it. I know die-hard HATEOAS advocates will say that a REST API should be discoverable by nature, but in most real-world scenarios that may not always be the case, so we'll need a way to write our own documentation. Writing documentation manually would be extremely time-consuming and unmaintainable, so the best way is to somehow generate it automatically. There is a perfect gem written with just this intent by the good folks at Zipmark called rspec_api_documentation. It leverages rspec's metadata to generate documentation using acceptance tests.

Install this gem and run:

rake docs:generate

It will automatically pick up all the passing tests in the /spec/acceptance/ folder to generate documentation. Here's an example of a test for the sessions controller. The key here is to use the custom DSL provided by the gem to give some context and structure to the documentation.

/spec/acceptance/api/v1/sessions_spec.rb

require 'spec_helper'
require 'rspec_api_documentation/dsl'

resource 'Session' do
  header "Accept", "application/vnd.myapp.v1"

  let!(:user) { create(:user) }

  post "/api/sessions" do
    parameter :email, "Email", :required => true, :scope => :user
    parameter :password, "Password", :required => true, :scope => :user

    let(:email) { user.email }
    let(:password) { user.password }

    example_request "Logging in" do
      expect(response_body).to be_json_eql({ :info => "Logged in",
                                         :user => user
                                       }.to_json)
      expect(status).to eq 200
    end
  end

  delete "/api/sessions" do
    include Warden::Test::Helpers

    before (:each) do
      login_as user, scope: :user
    end

    example_request "Logging out" do
      expect(response_body).to be_json_eql({ :info => "Logged out"
                                       }.to_json)
      expect(status).to eq 200
    end
  end
end

By default it will generate HTML files in the /docs/ folder. If you want more control over the output, there is an option to generate JSON files which can then be rendered by another gem such as raddocs, or by your own home-brewed solution. Just specify the output format in your spec_helper file:

/spec/spec_helper.rb

RSpec.configure do |config|
.
.
   RspecApiDocumentation.configure do |config|
       config.format = :json
       config.docs_dir = Rails.root.join("docs", "")
   end
.
.
end

References

My sample code is based upon many recommendations found in the Rails 3 in Action book and its GitHub repo.

RADD has now been upgraded to Rails 4 along with the versioning & documentation techniques shown here.