22 Oct 2013
This is a follow-up to my previous post about Authentication with Rails, Devise and AngularJS. This time we’ll focus on some aspects of the API implementation. One thing to keep in mind: where my previous example used Rails 3.2, I’m using Rails 4 this time around.
Versioning
If you are building an API which could potentially be consumed by many different clients, it’s important to version it to provide backward compatibility. That way clients can catch up on their own time while newer versions are rolled out. Here’s what I’ve found to be the most recommended approach:
/config/routes.rb
namespace :api, defaults: {format: :json} do
  scope module: :v1, constraints: ApiConstraints.new(version: 1, default: true) do
    devise_scope :user do
      match '/sessions' => 'sessions#create', :via => :post
      match '/sessions' => 'sessions#destroy', :via => :delete
    end
  end
end
What we are doing here is wrapping our routes in an :api namespace, which gives us some separation from the admin and client-side routes by giving them their own top-level /api/ segment. But to avoid being too verbose, we leave the version number out of the URLs: instead of a nested namespace, the version module becomes a scope block. That raises a question, though: how can the client request a specific version? That’s where something called constraints comes in.
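To illustrate how this scales, here’s a sketch of what the routes might look like once a hypothetical v2 is rolled out (the v2 module and its contents are my assumptions, not part of the actual app). Note that Rails matches routes top to bottom, so the explicitly-requested v1 scope should come before the catch-all default:

```ruby
# /config/routes.rb (hypothetical future state with two versions)
namespace :api, defaults: {format: :json} do
  # Clients explicitly asking for v1 via the Accept header land here
  scope module: :v1, constraints: ApiConstraints.new(version: 1, default: false) do
    devise_scope :user do
      match '/sessions' => 'sessions#create', :via => :post
      match '/sessions' => 'sessions#destroy', :via => :delete
    end
  end

  # Everyone else falls through to the new default version
  scope module: :v2, constraints: ApiConstraints.new(version: 2, default: true) do
    devise_scope :user do
      match '/sessions' => 'sessions#create', :via => :post
      match '/sessions' => 'sessions#destroy', :via => :delete
    end
  end
end
```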
/lib/api_constraints.rb
class ApiConstraints
  def initialize(options)
    @version = options[:version]
    @default = options[:default]
  end

  def matches?(req)
    # to_s guards against requests that don't send an Accept header at all
    @default || req.headers['Accept'].to_s.include?("application/vnd.myapp.v#{@version}")
  end
end
This constraint basically says: if the client wants anything besides the default version of our API (in this case v1), it can send an Accept header indicating that.
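To see the constraint in action outside of Rails, here’s a small self-contained demo using stub request objects (OpenStruct stands in for a real Rails request, which exposes headers the same way):

```ruby
require 'ostruct'

# Same constraint class as above
class ApiConstraints
  def initialize(options)
    @version = options[:version]
    @default = options[:default]
  end

  def matches?(req)
    @default || req.headers['Accept'].to_s.include?("application/vnd.myapp.v#{@version}")
  end
end

# Stub requests: one asking for v2 explicitly, one with a generic Accept header
v2_request    = OpenStruct.new(headers: { 'Accept' => 'application/vnd.myapp.v2' })
plain_request = OpenStruct.new(headers: { 'Accept' => 'application/json' })

v1 = ApiConstraints.new(version: 1, default: true)
v2 = ApiConstraints.new(version: 2, default: false)

v1.matches?(plain_request) # the default version matches everything
v2.matches?(plain_request) # false: no v2 Accept header
v2.matches?(v2_request)    # true: client asked for v2 explicitly
```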
Next, let’s take a look at our controller.
/app/controllers/api/v1/sessions_controller.rb
class Api::V1::SessionsController < Devise::SessionsController
  protect_from_forgery with: :null_session, :if => Proc.new { |c| c.request.format == 'application/vnd.myapp.v1' }

  def create
    warden.authenticate!(:scope => resource_name, :recall => "#{controller_path}#failure")
    render :json => { :info => "Logged in", :user => current_user }, :status => 200
  end

  def destroy
    warden.authenticate!(:scope => resource_name, :recall => "#{controller_path}#failure")
    sign_out
    render :json => { :info => "Logged out" }, :status => 200
  end

  def failure
    render :json => { :error => "Login Credentials Failed" }, :status => 401
  end
end
Nothing fancy there. Pretty much what I covered in my previous post, i.e. overriding the default Devise controller for more control.
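From the client’s side, hitting this endpoint just means POSTing JSON with the versioned Accept header. Here’s a sketch of what that looks like with Net::HTTP (host and credentials are placeholders; the final send is commented out since it needs a running server):

```ruby
require 'net/http'
require 'uri'
require 'json'

# Placeholder host; adjust to wherever the API is running
uri = URI('http://localhost:3000/api/sessions')

req = Net::HTTP::Post.new(uri)
req['Accept']       = 'application/vnd.myapp.v1'  # request API version 1
req['Content-Type'] = 'application/json'
req.body = { user: { email: 'user@example.com', password: 'secret' } }.to_json

# To actually send it:
# res = Net::HTTP.start(uri.hostname, uri.port) { |http| http.request(req) }
```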
Documenting
Having an API means you’ll have clients which will need to know how to consume it. I know die-hard HATEOAS advocates will say that a REST API should be discoverable by nature, but in most real-world scenarios that may not always be the case. So we’ll need to find a way to write our own documentation. Writing documentation manually would be extremely time-consuming and unmaintainable, so the best way is to somehow generate it automatically. There is a perfect gem written with just this intent by the good folks at Zipmark called rspec_api_documentation. It leverages RSpec’s metadata to generate documentation from acceptance tests.
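Pull it in via your Gemfile first (putting it in the test/development group is a typical choice, not a requirement):

```ruby
# Gemfile
group :test, :development do
  gem 'rspec_api_documentation'
end
```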
Install this gem and run:
rake docs:generate
It will automatically pick up all the passing tests in the /spec/acceptance/ folder to generate documentation. Here’s an example of a test for the sessions controller. The key here is to use the custom DSL provided by the gem to give some context and structure to the documentation.
/spec/acceptance/api/v1/sessions_spec.rb
require 'spec_helper'
require 'rspec_api_documentation/dsl'

resource 'Session' do
  header "Accept", "application/vnd.myapp.v1"

  let!(:user) { create(:user) }

  post "/api/sessions" do
    parameter :email, "Email", :required => true, :scope => :user
    parameter :password, "Password", :required => true, :scope => :user

    let(:email) { user.email }
    let(:password) { user.password }

    example_request "Logging in" do
      expect(response_body).to be_json_eql({ :info => "Logged in",
                                             :user => user }.to_json)
      expect(status).to eq 200
    end
  end

  delete "/api/sessions" do
    include Warden::Test::Helpers

    before(:each) do
      login_as user, scope: :user
    end

    example_request "Logging out" do
      expect(response_body).to be_json_eql({ :info => "Logged out" }.to_json)
      expect(status).to eq 200
    end
  end
end
By default it will generate HTML files in the /docs/ folder. If you want more control over the output, there is an option to generate JSON files which can then be rendered by another gem such as raddocs or your own home-brewed solution. Just specify the output format in your spec_helper file.
/spec/spec_helper.rb
RSpec.configure do |config|
  # ...
  RspecApiDocumentation.configure do |config|
    config.format = :json
    config.docs_dir = Rails.root.join("docs", "")
  end
  # ...
end
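If you go the raddocs route, the gem ships as a Rack app which, as I understand it, can be mounted straight into your Rails routes to serve the generated JSON docs (the app name and /docs mount point below are my assumptions):

```ruby
# /config/routes.rb — MyApp is a placeholder for your application name
MyApp::Application.routes.draw do
  # Serve the generated JSON documentation through raddocs' Sinatra app
  mount Raddocs::App => "/docs"
end
```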
References
My sample code is based upon many recommendations found in the Rails 3 in Action book and its GitHub repo.
Update: the RADD example has now been upgraded to Rails 4 along with the versioning & documentation techniques shown here.
19 Aug 2013
Apple requires its developers to rebuild and redeploy their apps with a new Provisioning Profile each year. Here are the steps to follow when your profile is close to its expiration date so your app keeps running without interruptions:
- Go to developer.apple.com and navigate to the Member Center -> Certificates, Identifiers & Profiles
- Go to Certificates -> Production
- Here you will see all your production certificates. I’m assuming most of them have expired or soon will. So go ahead and request a new certificate by clicking the Add (+) button.
- On that Add iOS Certificate screen, select In-House and Ad Hoc option and hit Continue.
- Now before we can continue, let’s open Keychain Access on your computer and generate a Certificate Signing Request by going to Keychain Access -> Certificate Assistant -> Request a Certificate from a Certificate Authority
- In the window that pops up, enter your email address and common name.
- Save the .certSigningRequest file to your disk.
- Now go back to your browser window and upload the .certSigningRequest file which you just created and click on Generate.
- Download and open the .cer file which you just generated in Keychain Access. You should now be able to see the newly generated certificate with a new expiration date.
- Now go back to the browser and navigate to Provisioning Profiles -> Distribution
- Click on the provisioning profile in question and click on the Edit button.
- In the certificates field, select the new certificate which you just created and click Generate.
- Download and open the new provisioning profile (.mobileprovision) in the Organizer. You should now see the new expiring date (a year from now) on that as well.
- Delete the old profiles to avoid confusion and rebuild your app with the new one.
- Once you’ve rebuilt the app, just install it again on all devices in question.
08 Aug 2013
I recently started building a Rails-driven web app and decided to use Devise for authentication. This would have been pretty straightforward to implement, but I planned to use AngularJS to power the front-end and decided to use Rails only as a JSON API.
Getting down to development on that path, I quickly ran into some problems structuring AngularJS to recognize Devise sessions. Thanks to some useful examples on GitHub I was able to get around those issues and get them to play nice. Here’s how:
Let’s first look at the main application.js file of my Angular app:
angular.module('radd', ['sessionService','recordService','$strap.directives'])
  .config(['$httpProvider', function($httpProvider){
    $httpProvider.defaults.headers.common['X-CSRF-Token'] = $('meta[name=csrf-token]').attr('content');

    var interceptor = ['$location', '$rootScope', '$q', function($location, $rootScope, $q) {
      function success(response) {
        return response;
      }

      function error(response) {
        if (response.status == 401) {
          $rootScope.$broadcast('event:unauthorized');
          $location.path('/users/login');
          return response;
        }
        return $q.reject(response);
      }

      return function(promise) {
        return promise.then(success, error);
      };
    }];

    $httpProvider.responseInterceptors.push(interceptor);
  }])
  .config(['$routeProvider', function($routeProvider){
    $routeProvider
      .when('/', {templateUrl:'/home/index.html'})
      .when('/record', {templateUrl:'/record/index.html', controller:RecordCtrl})
      .when('/users/login', {templateUrl:'/users/login.html', controller:UsersCtrl})
      .when('/users/register', {templateUrl:'/users/register.html', controller:UsersCtrl});
  }]);
First, you’ll see I’m setting the request header with a CSRF token to make sure Rails doesn’t create a new session for every request that goes out. Second, I’m creating an interceptor which intercepts any 401 Unauthorized responses and redirects the user to the login page.
Next up let’s create a sessions controller (derived from Devise::SessionsController) which will give us some CRUD functionality through a JSON interface:
class SessionsController < Devise::SessionsController
  respond_to :json

  def create
    resource = warden.authenticate!(:scope => resource_name, :recall => "#{controller_path}#failure")
    render :status => 200,
           :json => { :success => true,
                      :info => "Logged in",
                      :user => current_user }
  end

  def destroy
    warden.authenticate!(:scope => resource_name, :recall => "#{controller_path}#failure")
    sign_out
    render :status => 200,
           :json => { :success => true,
                      :info => "Logged out" }
  end

  def failure
    render :status => 401,
           :json => { :success => false,
                      :info => "Login Credentials Failed" }
  end

  def show_current_user
    warden.authenticate!(:scope => resource_name, :recall => "#{controller_path}#failure")
    render :status => 200,
           :json => { :success => true,
                      :info => "Current User",
                      :user => current_user }
  end
end
Now let’s create an AngularJS Session service which would interact with that controller:
angular.module('sessionService', [])
  .factory('Session', function($location, $http, $q) {
    // Redirect to the given url (defaults to '/')
    function redirect(url) {
      url = url || '/';
      $location.path(url);
    }

    var service = {
      login: function(email, password) {
        return $http.post('/login', {user: {email: email, password: password} })
          .then(function(response) {
            service.currentUser = response.data.user;
            if (service.isAuthenticated()) {
              //TODO: Send them back to where they came from
              //$location.path(response.data.redirect);
              $location.path('/record');
            }
          });
      },

      logout: function(redirectTo) {
        $http.post('/logout').then(function() {
          service.currentUser = null;
          redirect(redirectTo);
        });
      },

      register: function(email, password, confirm_password) {
        return $http.post('/users.json', {user: {email: email, password: password, password_confirmation: confirm_password} })
          .then(function(response) {
            service.currentUser = response.data;
            if (service.isAuthenticated()) {
              $location.path('/record');
            }
          });
      },

      requestCurrentUser: function() {
        if (service.isAuthenticated()) {
          return $q.when(service.currentUser);
        } else {
          return $http.get('/current_user').then(function(response) {
            service.currentUser = response.data.user;
            return service.currentUser;
          });
        }
      },

      currentUser: null,

      isAuthenticated: function() {
        return !!service.currentUser;
      }
    };

    return service;
  });
That pretty much does it. Now if you call a service which tries to access any Rails controller with a “before_filter :authenticate_user!” in it, you will automatically be kicked out and prompted to log in.
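For reference, a protected controller on the Rails side can be as simple as this (RecordsController and the records association are placeholders for illustration):

```ruby
class RecordsController < ApplicationController
  # Devise rejects unauthenticated requests with a 401 response,
  # which the Angular interceptor above turns into a login redirect.
  before_filter :authenticate_user!
  respond_to :json

  def index
    render :json => current_user.records
  end
end
```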
I’ve put up a working demo on GitHub: https://github.com/jesalg/RADD.
19 Jun 2013
I was building a static public-facing website which didn’t need any dynamic functionality except for rendering views and layouts. So I figured, why not try one of the new JavaScript MVC frameworks such as AngularJS, which can do essentially the same thing without the need to worry about a server-side framework.
After I was done building it, I realized it wasn’t going to work well with search engine crawlers and bots, which don’t play well with content rendered via JavaScript/AJAX. I did some research online but couldn’t find any good solution besides installing PhantomJS on the server, which you can set up to generate HTML snapshots of your site. That felt like overkill for such a simple site.
So here’s what I did. Consider it a poor man’s SEO fix. First I added this meta tag to tell the search engines that this site is AJAX driven:
<meta name="fragment" content="!">
That will basically make the search engine try an alternate route for your content. So for example, instead of going to http://www.mysite.com/about, it will go to http://www.mysite.com/?_escaped_fragment_=about
So the next thing we’ll need to do is modify .htaccess to handle those URLs with _escaped_fragment_ in them:
DirectoryIndex index.html
RewriteEngine On

# Send crawler requests (?_escaped_fragment_=...) to crawler.php;
# the QSA flag carries the query string through so the script can read it
RewriteCond %{QUERY_STRING} ^_escaped_fragment_=(.*)$
RewriteRule ^$ /crawler.php [QSA,L]

# Everything else falls through to the Angular app
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_URI} !index
RewriteRule (.*) index.html [L]
As you’ll notice, I’m sending all of those requests to a special PHP script (crawler.php) created just for the crawlers/bots:
<?php
// Look up the requested page's metadata in the JSON file
$request = $_GET['_escaped_fragment_'];
$jsonurl = "./shared/data/pages.json";
$json = file_get_contents($jsonurl);
$json_output = json_decode($json);

foreach ($json_output as $page)
{
    if ($page->slug == $request)
    {
        $title = $page->title;
        $desc = $page->description;
        $keywords = $page->tags;
        $image = $page->thumb;
        $url = "http://".$_SERVER['HTTP_HOST']."/".$request;
    }
}
?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title><?php echo $title;?></title>
<meta http-equiv="content-type" content="text/html; charset=utf-8" />
<meta name="title" content="<?php echo $title;?>" />
<meta name="description" content="<?php echo $desc;?>">
<meta name="keywords" content="<?php echo $keywords;?>" />
<meta property="og:url" content="<?php echo $url; ?>" />
<meta property="og:site_name" content="My Site" />
<meta property="og:type" content="website" />
<meta property="og:title" content="<?php echo $title;?>" />
<meta property="og:image" content="<?php echo $image; ?>" />
<meta property="og:description" content="<?php echo $desc;?>" />
</head>
<body>
<!-- Optionally make this body content dynamic to comply with Google's TOS -->
</body>
</html>
Basically what that script is doing is reading the meta tags off of a JSON file based on which URL is being requested. Here’s an example:
[
{
"slug": "work",
"title": "Work",
"thumb": "/shared/img/work.jpg",
"description": "Work bla bla",
"tags": "some, keywords, go, here"
},
{
"slug": "about",
"title": "About",
"thumb": "/shared/img/about.jpg",
"description": "About bla bla",
"tags": "some, keywords, go, here"
},
{
"slug": "services",
"title": "Services",
"thumb": "/shared/img/services.jpg",
"description": "Services bla bla",
"tags": "some, keywords, go, here"
}
]
Now as a bonus, you could program AngularJS to read the meta tags for each page from this JSON file as well, so you have all the meta information in one central place.
03 Jan 2013
About a year ago I bought a domain with the intent of building something cool for a fun learning experience. I ended up picking www.ruddl.com, a short 5-letter domain name derived from the verb ruddle, which means to twist, braid together, or interlace. Based on that name, a service which aggregates news (kind of like Flipboard) sounded like a good fit.
Then one day, I came across this image on reddit:
That pretty much hit the nail on the head. Every time I browse reddit I end up opening every single imgur link in a new tab. So I figured why not just create a site that will show me all the reddit links in a better layout.
That’s how ruddl was born. I developed it using Ruby/Sinatra, JavaScript, Redis, and WebSockets (via Pusher) and hosted it on Heroku. It basically parses every link on reddit, tries to fetch a relevant thumbnail or text associated with that link, and lays it all out in an easy-to-view masonry-style layout.
Let me know what you think -> www.ruddl.com
Happy redditing!