URL Validation with KnockoutJS

I recently found myself having to implement URL validation in KnockoutJS. Here’s how I went about doing that:

1) Add Knockout-Validation plugin to your project.

2) Create a custom rule to validate URLs based on the RegEx by Diego Perini found at http://mathiasbynens.be/demo/url-regex:

ko.validation.rules['url'] = {
  validator: function (val, required) {
    if (!val) {
      return !required;
    }
    val = val.replace(/^\s+|\s+$/g, ''); // strip leading/trailing whitespace
    // Regex by Diego Perini from: http://mathiasbynens.be/demo/url-regex
    return val.match(/^(?:(?:https?|ftp):\/\/)(?:\S+(?::\S*)?@)?(?:(?!10(?:\.\d{1,3}){3})(?!127(?:\.\d{1,3}){3})(?!169\.254(?:\.\d{1,3}){2})(?!192\.168(?:\.\d{1,3}){2})(?!172\.(?:1[6-9]|2\d|3[0-1])(?:\.\d{1,3}){2})(?:[1-9]\d?|1\d\d|2[01]\d|22[0-3])(?:\.(?:1?\d{1,2}|2[0-4]\d|25[0-5])){2}(?:\.(?:[1-9]\d?|1\d\d|2[0-4]\d|25[0-4]))|(?:(?:[a-z\u00a1-\uffff0-9]+-?)*[a-z\u00a1-\uffff0-9]+)(?:\.(?:[a-z\u00a1-\uffff0-9]+-?)*[a-z\u00a1-\uffff0-9]+)*(?:\.(?:[a-z\u00a1-\uffff]{2,})))(?::\d{2,5})?(?:\/[^\s]*)?$/i);
  },
  message: 'This field has to be a valid URL'
};
ko.validation.registerExtenders(); // makes the custom rule usable via extend

3) You are all set. You can now use it in either of these ways:


var someURL = ko.observable().extend({ url: true });


<input type="text" data-bind="value: someURL.extend({ url: true })" />
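If you want to sanity-check the pattern itself outside of Knockout, you can exercise the same regular expression in plain JavaScript. This is just a standalone sketch of the regex behavior; the `isValidUrl` helper is mine, not part of the plugin:

```javascript
// Diego Perini's URL regex (same pattern as in the validator above)
var urlPattern = /^(?:(?:https?|ftp):\/\/)(?:\S+(?::\S*)?@)?(?:(?!10(?:\.\d{1,3}){3})(?!127(?:\.\d{1,3}){3})(?!169\.254(?:\.\d{1,3}){2})(?!192\.168(?:\.\d{1,3}){2})(?!172\.(?:1[6-9]|2\d|3[0-1])(?:\.\d{1,3}){2})(?:[1-9]\d?|1\d\d|2[01]\d|22[0-3])(?:\.(?:1?\d{1,2}|2[0-4]\d|25[0-5])){2}(?:\.(?:[1-9]\d?|1\d\d|2[0-4]\d|25[0-4]))|(?:(?:[a-z\u00a1-\uffff0-9]+-?)*[a-z\u00a1-\uffff0-9]+)(?:\.(?:[a-z\u00a1-\uffff0-9]+-?)*[a-z\u00a1-\uffff0-9]+)*(?:\.(?:[a-z\u00a1-\uffff]{2,})))(?::\d{2,5})?(?:\/[^\s]*)?$/i;

function isValidUrl(val) {
  return urlPattern.test(val.trim());
}

console.log(isValidUrl('http://example.com/path?q=1')); // true
console.log(isValidUrl('example.com'));                 // false (a scheme is required)
console.log(isValidUrl('http://10.0.0.1'));             // false (private IPs are rejected)
```

Note that the pattern deliberately rejects private and reserved IP ranges, which is usually what you want for user-supplied URLs.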

Ensuring reliable email delivery with DNS records

If your web app is sending emails, you will need to have these three DNS records configured properly to ensure reliable email delivery and avoid ending up in spam/junk folders:

PTR – It’s a pointer to a canonical name. Unlike a CNAME, DNS processing stops here and just the name is returned: it resolves an IP address to a fully-qualified domain name (FQDN). It’s also known as a Reverse DNS record.

The most common use is for implementing reverse DNS lookups to check if the server name is actually associated with the IP address from where the connection was initiated. Here’s what it would look like:
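For example, a PTR record for a mail server at 203.0.113.10 (illustrative IP and hostname) lives in the reverse zone for that address block and looks roughly like this:

```
; reverse zone for 203.0.113.0/24
10.113.0.203.in-addr.arpa.    IN    PTR    mail.example.com.
```

Note the octets are reversed under the in-addr.arpa zone; the record is usually managed by whoever owns the IP block (your hosting provider), not your domain registrar.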

SPF – Sender Policy Framework (SPF) is an email validation system designed to prevent email spam by detecting email spoofing, a common vulnerability, through verifying sender IP addresses.

Microsoft has a great 4-step wizard to guide you through creating an SPF record.

If you have a Google Apps domain and you are using a 3rd party email delivery service such as SendGrid, here’s what it would typically look like:

"v=spf1 a mx include:_spf.google.com include:sendgrid.net ~all"

without Google Apps:

"v=spf1 a mx include:sendgrid.net ~all"

DKIM – DomainKeys Identified Mail (DKIM) is a method for associating a domain name with an email message, thereby allowing a person, role, or organization to claim some responsibility for the message. The association is set up by means of a digital signature which can be validated by recipients.

This is usually set up by the email delivery service you are using. Check out their account settings to make sure it’s turned on.

Here are some more useful links:

How can I check if my DKIM and SPF records are valid? by Postmark
What are SPF and DKIM and do I need to set them up? by Mandrill
How do I add DNS records for my sending domains? by Mandrill
Email Deliverability Guide by SendGrid

How to setup a dev environment using Vagrant & Puppet (Part II)

This is a continuation of my previous post on how to get up and running with Vagrant.

Now let’s get more specific and see how we can install specific modules.


To install PostgreSQL I’ve chosen to use this module on GitHub. Let’s go ahead and add that as a submodule. Run this command from the git root folder:

git submodule add https://github.com/akumria/puppet-postgresql.git puppet/modules/postgresql

Also keep in mind, the destination directory should have the same name as the main module’s class. So in this case, postgresql.
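After that command runs, git records the mapping in a .gitmodules file at the repo root, which should look like this:

```
[submodule "puppet/modules/postgresql"]
	path = puppet/modules/postgresql
	url = https://github.com/akumria/puppet-postgresql.git
```

Commit this file along with the submodule so teammates can fetch it with git submodule update --init.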

You can now add this module to your default.pp in the puppet/manifests folder to start using it:

class { 'postgresql': }

That’s the most basic way to get started. Although to make things easier, we should probably also add any databases or users that we will need. And if you plan to access PostgreSQL from your host machine, you will also need to open up the port and allow incoming traffic from the host.

So taking those things into consideration, here’s what it would end up looking like:

class install_postgres {
  class { 'postgresql': }

  class { 'postgresql::server':
    listen => ['*', ],
    port   => 5432,
    acl    => ['host all all md5', ],
  }

  pg_database { ['some-project_database']:
    ensure   => present,
    encoding => 'UTF8',
    require  => Class['postgresql::server'],
  }

  pg_user { 'some-project-user':
    ensure    => present,
    require   => Class['postgresql::server'],
    superuser => true,
    password  => 'some-password',
  }

  pg_user { 'vagrant':
    ensure    => present,
    superuser => true,
    require   => Class['postgresql::server'],
  }

  package { 'libpq-dev':
    ensure => installed,
  }

  package { 'postgresql-contrib':
    ensure  => installed,
    require => Class['postgresql::server'],
  }
}

class { 'install_postgres': }

Finally in your Vagrantfile, make sure you are forwarding that port to whichever port you want on your host machine:

config.vm.network :forwarded_port, guest: 5432, host: 5433


Now let’s take a look at how to go about installing Ruby. Instead of a package, we’ll just do this manually with RVM. First let’s define a variable called $as_vagrant. It’s a reusable command prefix which lets us run other commands as the vagrant user (the snippets below also assume a $home variable pointing at the vagrant user’s home directory):

$home       = '/home/vagrant'
$as_vagrant = 'sudo -u vagrant -H bash -l -c'

We also need to make sure curl is installed:

package { 'curl':
  ensure => installed,
}

Now we can install RVM (note the dependency on the curl package):

exec { 'install_rvm':
  command => "${as_vagrant} 'curl -L https://get.rvm.io | bash -s stable'",
  creates => "${home}/.rvm/bin/rvm",
  require => Package['curl'],
}

Next up, install Ruby 2.0.0 (note the dependency on our previous install_rvm command):

exec { 'install_ruby':
  # We run the rvm executable directly because the shell function assumes an
  # interactive environment, in particular to display messages or ask questions.
  # The rvm executable is more suitable for automated installs.
  # Thanks to @mpapis for this tip.
  command => "${as_vagrant} '${home}/.rvm/bin/rvm install 2.0.0 --latest-binary --autolibs=enabled && rvm --fuzzy alias create default 2.0.0'",
  creates => "${home}/.rvm/bin/ruby",
  require => Exec['install_rvm'],
}

Finally, install Bundler:

exec { "${as_vagrant} 'gem install bundler --no-rdoc --no-ri'":
  creates => "${home}/.rvm/bin/bundle",
  require => Exec['install_ruby'],
}

That’s it! If you are an RoR dev, I would highly recommend checking out the rails-dev-box project on GitHub, which is where the above snippet is from.


For Python I’ve chosen to use this module, which makes it very easy to install.

Let’s add that as a submodule:

git submodule add https://github.com/stankevich/puppet-python.git puppet/modules/python

Now all you have to do is add this declaration:

class { 'python':
  version    => 'system',
  virtualenv => true,
  pip        => true,
  dev        => true,
}

You can check out more params/arguments on their GitHub page.

If you’d like to learn more about creating and consuming modules, I would also recommend checking out Erika Heidi’s site for in-depth blog posts and talks.


Finally you are now ready to fire up your new Vagrant machine. Just run the following commands:

$ vagrant up
$ vagrant provision
$ vagrant ssh

You can now just go to the /vagrant folder where you should see the project folder that you just cloned. This folder is shared with your host so you can open your favorite editor on the host and start hacking away.


The new version of Vagrant also includes the ability to share, distribute, and discover your Vagrant boxes. I have been sharing my Vagrant machine using ngrok, a utility that I mentioned in my previous post about useful web development tools. With the release of vagrant share, you can skip that and just do:

vagrant share --ssh

That will create a random shareable web URL such as http://hulking-chameleon-3934.vagrantshare.com which would point to your Vagrant instance.

This is very handy for sharing results with co-workers or remote debugging. On the flip side, you can also SSH into other Vagrant machines:

vagrant connect --ssh hulking-chameleon-3934

More details on how this works on their site.

That’s it, I think I’ve covered the basics. Let me know if you’d like me to cover anything else.

How to setup a dev environment using Vagrant & Puppet (Part I)


Vagrant is a great tool for spinning up different dev environments without polluting your own local machine with gems and packages from different projects. It is also great for sharing the same environment across your entire team without having to go through the pain of manual setup every time you on-board a new team member.

You can use providers/platforms such as VMWare, VirtualBox, AWS, etc to run your VM. If you are a first-time user, installing VirtualBox is probably your best option to get started.

After you’ve installed VirtualBox and Vagrant, let’s create a folder where we’ll initialize a base machine.

$ mkdir my-project-dev-box
$ cd my-project-dev-box
$ git init
$ vagrant init precise32 http://files.vagrantup.com/precise32.box

Those commands will create a plain Ubuntu 12.04 box and generate a Vagrantfile in the my-project-dev-box folder. It will also initialize a git repository, since we’ll need that later down the line to add modules.

But the first thing we’ll do now is add a custom hostname in the config block to avoid getting confused if you spin up more Vagrant machines in the future. I like to have the project’s name in the hostname so you can easily identify it (same as the folder name):

config.vm.hostname = 'some-project-dev-box'

Since we’ll be using this for a particular dev environment, we need more than just a plain Ubuntu box. So let’s look at how we’d go about installing other packages and software.

Vagrant supports many different provisioners/IT automation tools such as Ansible, Chef, Puppet, or even a plain old shell script to help us automate this process. For the purpose of this post, we’ll use Puppet since IMHO it has the easiest learning curve.


To use Puppet for provisioning, add this block in the Vagrantfile:

config.vm.provision :puppet do |puppet|
  puppet.manifests_path = 'puppet/manifests'
  puppet.module_path    = 'puppet/modules'
  puppet.options        = '--verbose'
end

So as you might have guessed, we’ll need a puppet directory with two sub-directories, manifests and modules. Manifests will contain a default.pp file which tells Puppet which modules to install and how:

.
|-- Vagrantfile
`-- puppet
    |-- manifests
    |   `-- default.pp
    `-- modules

Modules are basically just re-usable instructions for how to install a specific piece of software. You can create your own modules or just grab some 3rd party modules from Example42's repository, Puppet Labs' official repository, or just from other folks on GitHub.

There are basically three ways of installing/using these modules: you can clone or just copy/paste the module into the modules folder, add it as a git submodule, or use the puppet module tool. I’ve never tried the last one since it doesn’t work on Windows.

This is where I’ll stop for now. In my next post, I’ll talk about installing specific modules and how to overcome some common gotchas while configuring them. Meanwhile, here’s what our basic Vagrantfile looks like at this point:

Vagrant.configure('2') do |config|
  config.vm.box      = 'precise32'
  config.vm.box_url  = 'http://files.vagrantup.com/precise32.box'
  config.vm.hostname = 'some-project-dev-box'

  config.vm.provision :puppet do |puppet|
    puppet.manifests_path = 'puppet/manifests'
    puppet.module_path    = 'puppet/modules'
    puppet.options        = '--verbose'
  end
end
If you want to, you can boot your VM and SSH into it just like any other Ubuntu machine:

$ vagrant up
$ vagrant ssh

Although at this point, there’s not much to see. We’ll get to the fun stuff in part II. Stay tuned!

Useful tools for web development

This time I’m trying something different. I want to talk about some dev tools that I find indispensable in my day-to-day coding tasks:

Vagrant – Helps you spin up different VMs to create isolated dev environments. Definitely worth checking out if you find yourself juggling many different projects at once. (Check out my two-part blog post on how to get started.)

ngrok – Ever found yourself needing to test your local app with external callbacks such as Facebook or Twitter auth? Well this lets you securely expose your localhost to internet traffic to do just that. No need to constantly deploy your app to a staging environment just to test callbacks.

RequestBin – Opposite of ngrok. It lets you quickly test and inspect webhooks or your outgoing HTTP requests to make sure they are formatted correctly before you deploy them to production.

Postman – One of the best Chrome extensions out there for testing and interacting with REST APIs.

ExpanDrive – Elegant solution to mount an external VPS or S3 to your local filesystem. It’s not free but it’s worth it.

HeidiSQL – Most lightweight and fastest SQL client out there. It gets the job done without a lot of fuss.

Notepad++ – I use different IDEs for various projects ranging from Visual Studio to RubyMine but this has been a constant companion for doing quick text manipulations or editing small chunks of code without opening up clunky IDEs.

Awesome Cookie Manager – When testing an app, no need to log out. Just delete the cookie from your browser itself or change it to a certain value to see if your app is responding to it correctly.

smtp4dev or FakeSMTP – These two will help you test emails in your web app without actually sending them out.

JSFiddle – No introduction needed here. If you’ve been developing for a while I’m sure you’ve come across this. Best way to demo your code snippet or get some help fixing a bug.

.htaccess tester – The name says it all. A quick and easy way to test your .htaccess files to make sure they are working as intended.

RegExr – Beautifully designed online tool to learn, build, & test Regular Expressions with a rich feature set including real-time results, sharing, saving, an examples library, etc.

StackEdit – One of the best markdown editors out there with live previews. Very handy for writing readme files or wikis on GitHub.

explainshell – Found a command but not quite sure what it does? Before executing it, run it through explainshell to make sure it’s doing what you intended it to do.

I would love to hear from you. What are some of your favorite dev tools?

Automated deployments with git-flow, heroku_san & CircleCI

Here’s a simple workflow that I like to use to quickly get automated deployments working on any RoR project running on Heroku.


First we need to establish a good git branching process and for that I always use git-flow.

If you are not familiar with it, I highly recommend reading A successful Git branching model which explains what it’s all about.

Basically it’s just a command line wrapper for several git commands. For example, this git-flow command git flow feature start cool_stuff will create a new branch called feature/cool_stuff, based on your develop branch and switch to it.
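To make that concrete, here is what the command boils down to in plain git. The snippet sets up a throwaway repo first so it is self-contained; the branch names match the example above:

```shell
set -e
# throwaway repo just for the demo
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m 'initial commit'
git branch develop

# `git flow feature start cool_stuff` is roughly equivalent to:
git checkout -q -b feature/cool_stuff develop

git rev-parse --abbrev-ref HEAD   # prints: feature/cool_stuff
```

Likewise, `git flow feature finish` roughly merges the feature branch back into develop with `--no-ff` and deletes it; git-flow simply packages these steps into memorable commands.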

It’s super easy to use, establishes best practices from the start, and helps you leverage the power of git.


Now once you have that setup, we can move on to setting up heroku_san. It makes maintaining multiple environments (staging, production, demo) a breeze by providing a simple set of rake tasks.

Install the gem

group :development do
  gem 'heroku_san'
end

and run the generator

rails generate heroku_san

which will generate a /config/heroku.yml. You can tweak this according to your needs. In our case:

production:
  app: liveapp
  stack: cedar
  config:
    RACK_ENV: production
    RAILS_ENV: production
    BUNDLE_WITHOUT: "development:test"

staging:
  app: stagingapp
  stack: cedar
  config:
    RACK_ENV: staging
    RAILS_ENV: staging
    BUNDLE_WITHOUT: "development:test"

Now you could run rake staging deploy at any time to deploy your code to the stagingapp on Heroku, but we want to automate that.


That brings us to the last part. Here you could swap CircleCI with any other CI server such as Jenkins, Codeship, Travis, Semaphore, etc.; they should all work similarly.

In this case, go ahead and sign-up for CircleCI and add your project to it. After that, follow this guide to deploy to Heroku.

Once you’ve set that up, create a circle.yml file at the root of your project so it knows what to do when you push your code:

## Customize deployment commands
deployment:
  production:
    branch: master
    commands:
      - bundle exec rake production deploy
  staging:
    branch: develop
    commands:
      - bundle exec rake staging deploy

We are basically telling it to run heroku_san deploy tasks once your test suite passes.

In Action

So now that we’ve got it all dialed in, we can either push to develop or master and the code will be deployed to the correct app on Heroku.

Pushing to staging:

git push origin develop

That’s all it takes and your app will automatically be deployed to your stagingapp on Heroku. It will recognize that we pushed to the develop branch based on our configuration in the circle.yml.


Pushing to production:

git flow release start 1.0.0
git flow release finish 1.0.0
git push origin master develop --tags

In this case, git-flow will create a release branch/tag which will be merged into master and pushed to GitHub. CI will listen in on that and immediately trigger a production deploy since we pushed to master.

Using Postgres Hstore with Rails 4

I’m a huge fan of PostgreSQL given its performance and stability. One of its great features is the ability to store hashes in tables using the Hstore column type AND to query them! This is perfect for a column like user settings, where you may need to add or change the type of settings you need as your app evolves. Here are some steps and gotchas that I came across while using this column.


So first let’s start by creating a migration which would enable the Hstore extension on our database:

class AddHstore < ActiveRecord::Migration
  def up
    execute 'CREATE EXTENSION hstore'
  end

  def down
    execute 'DROP EXTENSION hstore'
  end
end

Next you'll need to change your schema dump format from ruby to sql as the schema for hstore can't be represented by ruby.

In config/application.rb:

config.active_record.schema_format = :sql

I've also come across a cleaner way to enable this without using the native execute command (although I never got it to work):

class SetupHstore < ActiveRecord::Migration
  def self.up
    enable_extension "hstore"
  end

  def self.down
    disable_extension "hstore"
  end
end

With this approach you don't even have to change the schema_format; it will automatically add enable_extension "hstore" to your schema.rb file. Again, I've yet to see this work, but I'd try this first before falling back to the native sql approach.


Now you are ready to start using this column. Let's add a settings column to our users table.

class AddSettingsToUsers < ActiveRecord::Migration
  def change
    add_column :users, :settings, :hstore
  end
end

If you know which keys you'll store in the settings column, you can define accessors in your model and even validate them:

class User < ActiveRecord::Base
  store_accessor :settings, :theme, :language

  validates :theme, inclusion: { in: %w{white red blue} }
  validates :language, inclusion: { in: I18n.available_locales.map(&:to_s) }
end
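A quick aside on the `%w` literal used there, since it's a common gotcha: `%w` splits on whitespace only, so any commas become part of the strings themselves. Plain Ruby, runnable anywhere:

```ruby
# %w builds an array of strings by splitting on whitespace -- commas are NOT separators
with_commas = %w{white, red, blue}
plain       = %w{white red blue}

p with_commas  # ["white,", "red,", "blue"]
p plain        # ["white", "red", "blue"]
```

If you accidentally add commas, the inclusion validation silently checks against the wrong values and legitimate input gets rejected.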

Although, if you don't know what kind of key/value pairs the client will send to your controller, here's a way to receive dynamic keys using strong params:

def some_params
  params.require(:some).permit(:name, :title).tap do |whitelisted|
    whitelisted[:settings] = params[:some][:settings]
  end
end

We are now ready to query our users based on their settings:

User.where("settings -> 'language' = 'en'")

More query syntax can be found here in the hstore documentation.
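Under the hood, Postgres stores the hash as a single text value of "key"=>"value" pairs, which is what those queries operate on. Here's a deliberately naive plain-Ruby sketch of that serialization, just to illustrate the format (real quoting and escaping are handled by the pg/ActiveRecord adapters, so don't use this in production):

```ruby
# Naive hstore-literal serializer -- illustrates the storage format only;
# ActiveRecord and the pg gem handle quoting/escaping for you.
def to_hstore(hash)
  hash.map { |k, v| %("#{k}"=>"#{v}") }.join(', ')
end

puts to_hstore(theme: 'white', language: 'en')
# "theme"=>"white", "language"=>"en"
```

Seeing the format makes the query syntax above less magical: `settings -> 'language'` simply pulls the value for the 'language' key out of that pair list.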


hstore provides index support for a few select operators such as @>, ?, ?& or ?|. So if you are planning to use any of those, it's probably a good idea to add an index for better performance:

class AddIndexToUserSettings < ActiveRecord::Migration
  def up
    execute 'CREATE INDEX users_settings ON users USING gin(settings)'
  end

  def down
    execute 'DROP INDEX users_settings'
  end
end

Alternatively you can use the native DSL to accomplish the same thing:

class AddIndexToUserSettings < ActiveRecord::Migration
  def up
    add_index :users, [:settings], name: "users_gin_settings", using: :gin
  end

  def down
    remove_index :users, name: "users_gin_settings"
  end
end

I generally tend to use a GIN index, as lookups are three times faster compared to GiST indexes, although they take three times longer to build. You can check out the documentation to decide which is the best fit for your needs.


If you are using Heroku, there is one small issue I ran across. As a consequence of switching to the sql schema format, I kept getting this error when Heroku ran db:migrate at the end of each deployment:

No such file or directory - pg_dump -i -s -x -O -f /app/db/structure.sql

It's trying to dump a new structure.sql with the pg_dump command, which is not present in Heroku's environment. You really don't need that in production anyway, so the way to get around it is to silence that task in any environment other than development by adding this to your Rakefile:

Rake::Task["db:structure:dump"].clear unless Rails.env.development?
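To see what clear actually does, here's a minimal standalone Rake demo (no Rails involved; the task name is just for illustration). Clearing a task strips its actions, so invoking it becomes a harmless no-op rather than an error:

```ruby
require 'rake'

# Define a task with one action, then strip the action off.
task = Rake::Task.define_task('db:structure:dump') { puts 'dumping...' }
task.clear

task.invoke                # still invokable, but does nothing
puts task.actions.empty?   # true
```

This is why the deploy no longer fails: Heroku can still invoke db:structure:dump, it just no longer shells out to pg_dump.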

That's a wrap! Let me know if I missed anything.

Fixing CPU spikes in Ubuntu 11.10

If you find your CPU usage spiking in Ubuntu 11.10, it may very well be an issue caused by php5's cron task.

To confirm, first use top to see if fuser is taking up the most CPU and creating hundreds of zombie processes. If that's the case, here's the fix:

Open /etc/cron.d/php5 and you should see something like this:

09,39 *     * * *     root   [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete

Go ahead and comment that out and replace it with:

09,39 *     * * *     root   [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete

The difference being that the 11.10 version runs fuser for each PHP session file, thus using all the CPU when there are hundreds of sessions.
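You can see why the per-file invocation is expensive with a small self-contained demo: find spawns one child process per matching file (here echo stands in for fuser, and -execdir in the cron job behaves the same way per file). With hundreds of session files, that's hundreds of processes every half hour:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
touch sess_1 sess_2 sess_3

# one `echo` invocation per file, just like the cron job runs one `fuser` per file
find . -mindepth 1 -maxdepth 1 -type f -exec echo inspecting {} \;
```

The fixed cron line avoids this entirely by letting find's built-in -delete handle removal with no extra processes.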

Save that and reboot the instance to regain normalcy.

All credit goes to grazer for suggesting this solution.

Architecting RESTful Rails 4 API

This is a follow-up to my previous post about Authentication with Rails, Devise and AngularJS. This time we’ll focus more on some aspects of the API implementation. One thing to keep in mind: where my previous example used Rails 3.2, I’m using Rails 4 this time around.


If you are building an API which could potentially be consumed by many different clients, it’s important to version it to provide backward compatibility. That way clients can catch up on their own time while newer versions are rolled out. Here’s what I’ve found to be the most recommended approach:


namespace :api, defaults: { format: :json } do
  scope module: :v1, constraints: ApiConstraints.new(version: 1, default: true) do
    devise_scope :user do
      match '/sessions' => 'sessions#create', :via => :post
      match '/sessions' => 'sessions#destroy', :via => :delete
    end
  end
end
So what we are doing there is wrapping our routes with an API namespace which will give us some separation from the admin and client-side routes by giving them their own top level /api/ segment. But to avoid being too verbose we’ll leave the version number out of the URLs. So instead of a namespace we’ll turn the version module into a scope block. Although this raises a question – how can the client request a specific version? Well that’s where something called constraints comes in.


class ApiConstraints
  def initialize(options)
    @version = options[:version]
    @default = options[:default]
  end

  def matches?(req)
    @default || req.headers['Accept'].include?("application/vnd.myapp.v#{@version}")
  end
end

This constraint basically says if the client wants anything besides the default version of our API (in this case v1) they can send us an Accept header indicating that.

Next, let’s take a look at our controller.


class Api::V1::SessionsController < Devise::SessionsController
  protect_from_forgery with: :null_session,
    :if => Proc.new { |c| c.request.format == 'application/vnd.myapp.v1' }

  def create
    warden.authenticate!(:scope => resource_name, :recall => "#{controller_path}#failure")
    render :json => { :info => "Logged in", :user => current_user }, :status => 200
  end

  def destroy
    warden.authenticate!(:scope => resource_name, :recall => "#{controller_path}#failure")
    render :json => { :info => "Logged out" }, :status => 200
  end

  def failure
    render :json => { :error => "Login Credentials Failed" }, :status => 401
  end
end

Nothing fancy there. Pretty much what I covered in my previous post, i.e. overriding the default Devise controller for more control.


Having an API means you’ll have clients which will need to know how to consume it. I know die-hard HATEOAS advocates will say that a REST API should be discoverable by nature, but in most real-world scenarios that may not always be the case, so we’ll need a way to write our own documentation. Writing documentation manually would be extremely time-consuming and unmaintainable, so the best way is to generate it automatically. There is a perfect gem written with just this intent by the good folks at Zipmark called rspec_api_documentation. It leverages rspec’s metadata to generate documentation using acceptance tests.

Install this gem and run:

rake docs:generate

It will automatically pick up all the passing tests in the /rspec/acceptance/* folder to generate documentation. Here’s an example of a test for the sessions controller. The key here is to use the custom DSL provided by the gem to give some context and structure to the documentation.


require 'spec_helper'
require 'rspec_api_documentation/dsl'

resource 'Session' do
  header "Accept", "application/vnd.myapp.v1"

  let!(:user) { create(:user) }

  post "/api/sessions" do
    parameter :email, "Email", :required => true, :scope => :user
    parameter :password, "Password", :required => true, :scope => :user

    let(:email) { user.email }
    let(:password) { user.password }

    example_request "Logging in" do
      expect(response_body).to be_json_eql({ :info => "Logged in",
                                             :user => user }.to_json)
      expect(status).to eq 200
    end
  end

  delete "/api/sessions" do
    include Warden::Test::Helpers

    before(:each) do
      login_as user, scope: :user
    end

    example_request "Logging out" do
      expect(response_body).to be_json_eql({ :info => "Logged out" }.to_json)
      expect(status).to eq 200
    end
  end
end

By default it will generate HTML files in the /docs/ folder. If you want more control over the output, there is an option to generate JSON files which can then be rendered by another gem such as raddocs or your own home brewed solution. Just specify the output format in your spec_helper file.


RSpec.configure do |config|
  RspecApiDocumentation.configure do |config|
    config.format   = :json
    config.docs_dir = Rails.root.join("docs", "")
  end
end

My sample code is based upon many recommendations found in the Rails 3 in Action book and its GitHub repo.

Update: RADD has now been upgraded to Rails 4, along with the versioning & documentation techniques shown here.

How to renew expired Certificate & Provisioning Profile for Ad Hoc Distribution

Apple requires its developers to rebuild and redeploy their apps with a new Provisioning Profile each year. Here are the steps to follow when your profile is close to its expiration date, so you can keep your app running without interruptions:

  1. Go to developer.apple.com and navigate to the Member Center -> Certificates, Identifiers & Profiles
  2. Go to Certificates -> Production
  3. Here you will see all your production certificates. I’m assuming most of them have expired or soon will. So go ahead and request a new certificate by clicking the Add (+) button.
  4. On that Add iOS Certificate screen, select In-House and Ad Hoc option and hit Continue.
  5. Now before we can continue, let’s open Keychain Access on your computer and generate a Certificate Signing Request by going to Keychain Access -> Certificate Assistant -> Request a Certificate from a Certificate Authority.
  6. In the window that pops up, enter your email address and common name.
  7. Save the .certSigningRequest file to your disk.
  8. Now go back to your browser window and upload the .certSigningRequest file which you just created and click on Generate.
  9. Download and open the .cer file which you just generated in Keychain Access. You should now be able to see the newly generated certificate with a new expiration date.
  10. Now go back to the browser and navigate to Provisioning Profiles -> Distribution
  11. Click on the provisioning profile in question and click the Edit button.
  12. In the certificates field, select the new certificate which you just created and click Generate.
  13. Download and open the new provisioning profile (.mobileprovision) in the Organizer. You should now see the new expiring date (a year from now) on that as well.
  14. Delete the old profiles to avoid confusion and rebuild your app with the new one.
  15. Once you’ve rebuilt the app, just install it again on all devices in question.