Getting Started With Mining Cryptocurrencies

With the cryptocurrency space white-hot, I've been very curious and learning all that I can. What better way to understand the core concepts than to start at the root of how cryptocurrency is generated, aka mining? So here's some of what I've learned in the process of doing just that. Hope you find it useful!

What is mining?

In the simplest terms, the act of mining serves two purposes. One, the miner provides bookkeeping services for the currency's public ledger, aka the blockchain, by taking recent transactions every few minutes and writing them onto a page of the ledger referred to as a "block".

Miners have to solve a puzzle, a cryptographic function, to be able to add their block to the blockchain, and whoever does this first is rewarded with some crypto in exchange for their services.

This brings me to the second purpose: the crypto that is rewarded is considered "mined", because it is new currency released to the network. Most cryptocurrencies have a fixed supply that is set to be released to the network through this mining process.

As that unreleased currency dwindles, miners are instead incentivized by a reward attached to each transaction in the block. Transactions that include this reward, i.e. a transaction fee, are likely to be processed faster.

Pick a currency

Now that we understand the high-level concept behind mining, let's figure out what to mine. Naturally, BTC (Bitcoin) or ETH (Ethereum) would be the obvious candidates, as they are the most well-known and popular currencies at the moment. The issue with both, though, is that they are not very profitable to mine using a basic CPU or GPU, and if you are starting out, I'm assuming that's pretty much what you will have access to.

In most cases, the popularity of a currency has a direct correlation with its mining difficulty. The more miners that join the network, the faster new pages are written to the ledger. To keep the rate of page creation steady, the difficulty of the puzzle is increased at regular intervals.

So out of the top 10 currencies on coinmarketcap.com, the best candidate for CPU or GPU mining is XMR (Monero), since it's based on a proof-of-work algorithm called CryptoNight, which is designed to be suitable for ordinary CPUs and GPUs and resistant to mining with specialized hardware, i.e. ASICs. So let's pick XMR. If you need some more convincing, here's a good list of why XMR is a good bet.

Set up a wallet

The next thing we have to figure out is where to store the proceeds from the mining. For XMR, the options are somewhat limited since it's still a currency in development. The best bet is to download and install the official wallet.

Alternatively, you can set up an account on one of the online exchanges like Bittrex or Kraken and use their built-in wallet, although it's not recommended to use an exchange as long-term storage for large amounts of currency.

Once you have this set up, take note of your wallet's address. We will need it in the next steps.

Join a pool

While XMR is friendly to solo mining, the chances of earning a reward on your own are low. So in order to boost our chances, we'll want to pool our processing power with a network of other miners. We then earn a share of the reward according to the amount of hashing power we contributed towards creating a block.

There are several XMR mining pools you can choose from on this list. I would recommend sticking to xmrpool.net if you are in the US or moneropool.com if you are in Europe.

Both have a good balance between the number of miners and the hash rate. You don't want to join a pool with too many miners because your earnings can get very diluted; stick to a good balance between a pool's hashing power and its number of members.

Start mining

Now we are finally ready to start mining. We have two options: we can use either the CPU or the GPU to do the mining. Let's look at both options and how they work.

CPU

ServeTheHome has put together some awesome Docker images to make this easy. You can install Docker for your platform if you haven't already, then run the command for your pool:

For xmrpool.net:

sudo docker run -itd -e username=[YOUR_WALLET_ADDRESS] servethehome/monero_cpu_xmrpooldotnet

For moneropool.com:

sudo docker run -itd -e username=[YOUR_WALLET_ADDRESS] servethehome/monero_cpu_moneropool
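Either way, you can confirm the miner is running and watch its output with Docker's standard commands (grab the container ID from the first command):

sudo docker ps
sudo docker logs -f [CONTAINER_ID]

You should see the pool connection and periodic hash rate reports, though the exact output depends on the image.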

GPU

Again, huge thanks to ServeTheHome for providing images for Nvidia GPUs, which make GPU mining super simple without having to install CUDA dependencies on your machine.

What you will need to install, though, is nvidia-docker, so the Docker container can utilize your GPU.

Installation on Ubuntu should be pretty straightforward:

# Install nvidia-docker and nvidia-docker-plugin
wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker_1.0.1-1_amd64.deb
sudo dpkg -i /tmp/nvidia-docker*.deb && rm /tmp/nvidia-docker*.deb

# Test nvidia-smi
nvidia-docker run --rm nvidia/cuda nvidia-smi

Check out the installation steps on their repo for other distros. Once you have that working, you can spin up the Docker instance:

For xmrpool.net:

NV_GPU=0 nvidia-docker run -itd -e username=[YOUR_WALLET_ADDRESS] --name GPU0_monero servethehome/monero_gpu_nv_xmrpooldotnet

For moneropool.com:

NV_GPU=0 nvidia-docker run -itd -e username=[YOUR_WALLET_ADDRESS] --name GPU0_monero servethehome/monero_gpu_nv_moneropool

If you have multiple GPUs, you will need to add the NV_GPU prefix as shown above and spin up an additional container for each GPU you want to target. If you only have one, you can skip the prefix and the explicit names.
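For example, targeting a second card on xmrpool.net would look like this, with the GPU index and container name incremented:

NV_GPU=1 nvidia-docker run -itd -e username=[YOUR_WALLET_ADDRESS] --name GPU1_monero servethehome/monero_gpu_nv_xmrpooldotnet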

Conclusion

Depending on your hardware and pool, you might have to wait a day or so to see some rewards come your way. You can check out this calculator to figure out what kind of return you can expect based on different variables.

Your best bet is to run this on a spare machine and leave it on for a few days. You are probably not going to become the next crypto millionaire, but it's an excellent opportunity to learn more about the crypto space and Docker containers.

The modern day crypto gold rush reminds me a little of Charles Nahl's "Miners in the Sierras".

May the odds be with you!

Setting up automated code formatting for Rails using NodeJS

Why?


As one of the Rails projects I've been working on grew, in terms of both the number of lines of code and the number of people contributing, it became challenging to maintain consistency in code quality and style.

Many of these issues were brought up in code reviews, when they should have been addressed well before that, so that the discussion in code reviews could focus on substance and not style.

So I wanted to set up an automated way of fixing stylistic issues when the code is checked in. Here's how I went about setting that up.

In my previous blog post, I talked about Incorporating Modern JavaScript Build Tools With Rails. If you have something similar set up already, you can skip the next two sections about setting up a NodeJS environment. If not, read on.

NodeJS

Before we can get started, let's make sure we have NodeJS installed. I've found the easiest way to do so is via nvm. Rails developers will find this very similar to rvm. To install, run the following commands, which will install nvm and the latest version of NodeJS:

$ curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.32.1/install.sh | bash
$ source ~/.bash_profile
$ nvm install node
$ nvm use node

Yarn

Next we'll need a package manager. Traditionally we'd use npm, but I've found Facebook's yarn to be a lot more stable and reliable to work with. It is very similar to bundler. To install on Debian Linux, run the following commands, or follow their installation guide for your OS:

$ curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -
$ echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
$ sudo apt-get update && sudo apt-get install yarn

Git Hook Management


Now, in order to format the code automatically upon check-in, we first have to figure out how to run those linting scripts. There are a few options:

1) Bash Script - You can manually save a bash script as .git/hooks/pre-commit and give it execute permission. The downside of this approach is that every member of your team would have to do this manually. And if something changes in the script, everyone would have to repeat the process all over again. It would quickly become unmanageable.

2) pre-commit - This is a very robust framework built in Python to manage git hooks. I really like everything about it except that, for RoR projects, it adds another language dependency on the local environment in addition to Ruby and NodeJS. It's also something the entire team would have to install manually (albeit only once per environment) to get it up and running. I would definitely recommend it for a Python project.

3) overcommit (Recommended) - This is another excellent git hook manager, very similar to pre-commit but written in Ruby. It has a ton of built-in hooks for formatting Ruby, JS, CSS and more. It's virtually plug and play, and perfect for a project that doesn't have a NodeJS build pipeline set up, since it avoids introducing another language dependency. For the purposes of this blog post, though, we'll use the next option. I'd recommend checking out this blog post if you want to use this one.

4) husky & lint-staged (Recommended) - These two NodeJS packages act as a one-two punch. Husky lets you specify any script that you want to run against git hooks right in the package.json, while lint-staged makes it possible to run arbitrary npm and shell tasks on pre-commit with a list of staged files as an argument, filtered by a specified glob pattern. The best part: once this is set up, your team doesn't have to do a thing other than run yarn install.

To get started, install both packages:

yarn add lint-staged husky --dev

Next, add a precommit hook in your package.json:

{
  "scripts": {
    "precommit": "lint-staged"
  }
}

Lastly, create an empty .lintstagedrc file at the root. This is where we'll integrate with the various linters we'll talk about next.

JavaScript


So now we are ready to actually hook up some linters. For JavaScript, there are several good linting frameworks out there, ranging from very opinionated to very flexible:

1) StandardJS - This is the most opinionated framework out there and also very popular. It has excellent IDE integration and is used by a lot of big names. Having said that, we didn't agree with some of its rules and there was no way of changing them. It is really designed to be an install-it-and-forget-it kind of linter, which wasn't quite what I was looking for.

2) Prettier - That led me to investigate another very popular framework. Prettier is a lot like StandardJS: good IDE support, well adopted. It tries to provide a little more flexibility over a few basic rules compared to StandardJS. Its main advantage over StandardJS, though, is that it is also able to format CSS and GraphQL in addition to JavaScript and its preprocessors.

3) ESLint (Recommended) - After trying both of the above-mentioned linters, I ended up settling on ESLint, primarily because it let us tweak all the options exactly per our needs. The flexibility and extensibility of this framework is impressive.

So let’s go ahead and install it:

yarn add eslint --dev

Next, you'll want to run through its setup and answer some questions about your preferences:

./node_modules/.bin/eslint --init

Based on your responses, it will create an .eslintrc file in your project, which you can always edit manually later. Here's the one I'm using:

env:
  browser: true
  commonjs: true
  es6: true
extends: 'eslint:recommended'
parserOptions:
  sourceType: module
rules:
  indent:
    - warn
    - 2
  linebreak-style:
    - warn
    - unix
  quotes:
    - warn
    - single
  semi:
    - warn
    - always
  no-undef:
    - off
  no-unused-vars:
    - warn
  no-console:
    - off
  no-empty:
    - warn
  no-cond-assign:
    - warn
  no-redeclare:
    - warn
  no-useless-escape:
    - warn
  no-irregular-whitespace:
    - warn

I went with setting most of the rules as non-blocking warnings since we were dealing with some legacy code and wanted to reduce developer friction as much as possible.

Finally, add this entry to your .lintstagedrc:

{
  "*.js": ["eslint --fix", "git add"]
}

Ruby


When it comes to Ruby linting, there is really just one game in town, namely RuboCop. All you need to do is add it to the Gemfile and run bundle install:

gem 'rubocop', require: false
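Once it's installed, you can run an auto-correcting pass by hand to see what it changes; the flags mirror the ones we'll wire into the hook below:

bundle exec rubocop -a --fail-level E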

Next add a hook for it in your .lintstagedrc:

{
  "*.js": ["eslint --fix", "git add"],
  "*.rb": ["rubocop -a -c .rubocop-linter.yml --fail-level E", "git add"],
}

Next, you will need to create .rubocop-linter.yml with your configuration. Here's the one we used:

AllCops:
  Exclude:
    - 'vendor/**/*'
    - 'spec/factories/**/*'
    - 'tmp/**/*'
  TargetRubyVersion: 2.2

Style/Encoding:
  EnforcedStyle: when_needed
  Enabled: true

Style/FrozenStringLiteralComment:
  EnforcedStyle: always

Metrics/LineLength:
  Max: 200

Metrics/ClassLength:
  Enabled: false

IndentationConsistency:
  EnforcedStyle: rails

Documentation:
  Enabled: false

Style/ConditionalAssignment:
  Enabled: false

Style/LambdaCall:
  Enabled: false

Metrics:
  Enabled: false

Also, here's a list of all the auto-corrections RuboCop can perform when the -a / --auto-correct flag is turned on, in case you need to add or change any rules in that file.

CSS/SCSS


Now that we have Ruby and JS linting squared away, let's look into how to do the same for CSS.

1) sass-lint - Since we were using SASS in the project, I first looked at this package, but quickly realized it has no auto-fix option available at the moment. There is a PR currently in the works that is supposed to add this feature at some point, but for now we'll have to look elsewhere.

2) stylelint (Recommended) - I ended up going with this option because of its large ruleset (150 rules at the time of writing) and the fact that it is powered by PostCSS, which understands any syntax PostCSS can parse, including SCSS, SugarSS, and Less. The only downside is that the auto-fix feature is experimental, but it's worth a shot anyway.

So let’s go ahead and install it:

yarn add stylelint --dev

Next add a hook for it in your .lintstagedrc:

{
  "*.js": ["eslint --fix", "git add"],
  "*.rb": ["rubocop -a -c .rubocop-linter.yml --fail-level E", "git add"],
  "*.scss": ["stylelint --fix", "git add"]
}

Again, this is a very configurable package with a lot of options, which you can manage in a .stylelintrc file.

To begin with, I'd recommend just extending either the stylelint-config-standard or stylelint-config-recommended preset.
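Note that these presets ship as separate npm packages, so whichever one you extend also needs to be installed:

yarn add stylelint-config-standard --dev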

Here's an example of a .stylelintrc; any exceptions to the preset's rules go under "rules":

{
  "extends": "stylelint-config-standard",
  "rules": {}
}

HAML


As far as templating engines go, our project uses HAML, but unfortunately I couldn't find any auto-formatting solution for it. haml-lint has an open ticket for adding this feature, but it seems like it's not very easy to implement.

So until then, you have the option to just hook up the linter so it can provide feedback about your markup, which you would then have to correct manually.

To get started, add the gem to your Gemfile:

gem 'haml_lint', require: false

Next add a hook for it in your .lintstagedrc:

{
  "*.js": ["eslint --fix", "git add"],
  "*.rb": ["rubocop -a -c .rubocop-linter.yml --fail-level E", "git add"],
  "*.scss": ["stylelint --fix", "git add"]
  "*.haml": ["haml-lint -c .haml-lint.yml", "git add"],
}

Next you will need to create .haml-lint.yml with your configuration. Here’s one that you can use:

# Whether to ignore frontmatter at the beginning of HAML documents for
# frameworks such as Jekyll/Middleman
skip_frontmatter: false

linters:
  AltText:
    enabled: false

  ClassAttributeWithStaticValue:
    enabled: true

  ClassesBeforeIds:
    enabled: true

  ConsecutiveComments:
    enabled: true

  ConsecutiveSilentScripts:
    enabled: true
    max_consecutive: 2

  EmptyScript:
    enabled: true

  HtmlAttributes:
    enabled: true

  ImplicitDiv:
    enabled: true

  LeadingCommentSpace:
    enabled: true

  LineLength:
    enabled: false

  MultilinePipe:
    enabled: true

  MultilineScript:
    enabled: true

  ObjectReferenceAttributes:
    enabled: true

  RuboCop:
    enabled: false

  RubyComments:
    enabled: true

  SpaceBeforeScript:
    enabled: true

  SpaceInsideHashAttributes:
    enabled: true
    style: space

  TagName:
    enabled: true

  TrailingWhitespace:
    enabled: true

  UnnecessaryInterpolation:
    enabled: true

  UnnecessaryStringOutput:
    enabled: true

Optionally, you can exclude all the existing HAML files that have linting issues by running the following command and then including the generated exclusions file (inherits_from: .haml-lint_todo.yml) in the configuration file above, to ease the onboarding process:

haml-lint --auto-gen-config
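You can also run the linter by hand across all your templates at any point to see where things stand:

bundle exec haml-lint app/views/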

Conclusion

That's all, folks! In the few weeks since hooking up the auto-formatters, our codebase has started to look much more uniform with every commit. Code reviews can now focus on more important feedback.

Migrating from KnockoutJS to VueJS


Recently I've been shopping around for a framework to replace KnockoutJS in an existing application. While KO has served its purpose well, over the years it hasn't been maintained very actively and has largely failed to keep up with newer JS frameworks in terms of features and community adoption.

After doing some research to find its replacement, I settled on VueJS. It seemed the most aligned with Knockout's MVVM pattern, while at the same time being modular and extensible enough to serve as a complete MVC framework if needed, using its official state management and routing libs. Above all, it seems to have a flourishing community, which is important when it comes to betting on a framework.

So, as a KnockoutJS developer, let's go through some of the most familiar aspects of the framework and see how they translate to VueJS.

Viewmodel

In KO, the VM can be as simple as an object literal or a function. Here's a simple example:

var yourViewModel = function(args) {
  this.someObv = ko.observable();
  this.someObv.subscribe(function(newValue) {
    //...
  });
  this.computedSomeObv = ko.computed(function() {
    //...
  });
  this.someMethod = function(item, event) {
    //...
  }
};

Usage:

ko.applyBindings(new yourViewModel(someArgs), document.getElementById("element_id"));

VueJS has a very similar concept, although the VM is always an object literal passed into a Vue instance. It also provides much more structure and a richer event model. Here's a VM stub in VueJS:

var yourViewModel = new Vue({
  data: {
    someKey: someValue
  },
  watch: {
    someKey: function(val) {
      // Val has changed, do something, equivalent to ko's subscribe
    }
  },
  computed: {
    computedKey: function() {
      // Return computed value, equivalent to ko's computed observables
    }
  },
  methods: {
    someMethod: function() { ... }
  },
  created: function () {
    // Called synchronously after the instance is created.
  },
  mounted: function () {
    // Called after the instance has just been mounted where el is replaced by the newly created vm.$el
  },
  updated: function () {
    // Called after a data change causes the virtual DOM to be re-rendered and patched.
  },
  destroyed: function () {
    // Called after a Vue instance has been destroyed
  },
});

I didn't list all the event hooks in that example for brevity. I recommend checking out this lifecycle diagram to get the full picture.

VueJS also offers an interesting way to organize and share common code across VMs using something called Mixins. There are certain pros and cons to using a Mixin vs. just a plain old JS library, but it's worth looking into.

Usage:

yourViewModel.$mount(document.getElementById("element_id"));

Something to note about the syntax above: it is entirely optional. You can also set the el attribute in your VM to #element_id and skip explicitly calling the mount function.
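Here's what that variant looks like as a minimal sketch:

var yourViewModel = new Vue({
  el: '#element_id',
  data: {
    someKey: someValue
  }
});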

Bindings

The concept of bindings is something KO developers are very familiar with. I'm sure that over the course of working with KO, we've all created or used a lot of custom bindings. Here's what a custom binding stub looks like in KO:

ko.bindingHandlers.yourBindingName = {
  init: function(element, valueAccessor, allBindings, viewModel, bindingContext) {
    // This will be called when the binding is first applied to an element
    // Set up any initial state, event handlers, etc. here
  },
  update: function(element, valueAccessor, allBindings, viewModel, bindingContext) {
    // This will be called once when the binding is first applied to an element,
    // and again whenever any observables/computeds that are accessed change
    // Update the DOM element based on the supplied values here.
  }
};

Usage:

<span data-bind="yourBindingName: { some: args }" />

VueJS has something similar, but it's called a "directive". Here's the VueJS directive stub:

Vue.directive('yourDirectiveName', {
  bind: function(element, binding, vnode) {
   // called only once, when the directive is first bound to the element. This is where you can do one-time setup work.
  },
  inserted: function (element, binding, vnode) {
    // called when the bound element has been inserted into its parent node
    // (this only guarantees parent node presence, not necessarily in-document).
  },
  update: function(element, binding, vnode, oldVnode) {
    // called after the containing component has updated, but possibly before its
    // children have updated. The directive's value may or may not have changed,
    // but you can skip unnecessary updates by comparing the binding's current and old values.
  },
  componentUpdated: function(element, binding, vnode, oldVnode) {
    // called after the containing component and its children have updated.
  },
  unbind: function(element, binding, vnode) {
    // called only once, when the directive is unbound from the element.
  },
})

Usage:

<span v-your-directive-name="{ some: args }" />

As you can see, VueJS offers a couple of additional lifecycle hooks, but for the most part it's very similar to KnockoutJS, so transferring old bindings into new directives isn't too difficult.

In most cases you should be able to move everything in your init function into the inserted function. The update function will largely remain the same, except you can now compare vnode and oldVnode to skip unnecessary updates. And lastly, if your custom binding used KO's disposal callback, i.e. ko.utils.domNodeDisposal.addDisposeCallback, you can move that logic into the unbind function.

Another thing you'll notice is that the usage syntax is a bit different: instead of using the data-bind attribute everywhere, VueJS uses different attributes prefixed with v- for various things, such as v-bind for binding attributes, v-on for binding events, and v-if/v-for for conditionals/loops.

On top of that, there is also a shorthand syntax for these, which might make things confusing initially; it's probably one of the biggest gotchas for devs transitioning from Knockout to Vue. So I recommend taking some time to go through the template syntax documentation.
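For instance, v-bind:attribute can be shortened to :attribute and v-on:event to @event (the url and doSomething names below are just placeholders):

<!-- full syntax -->
<a v-bind:href="url" v-on:click="doSomething">Link</a>

<!-- shorthand -->
<a :href="url" @click="doSomething">Link</a>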

Extenders

Another tool in KO that we are very familiar with is the concept of extenders, which are useful for augmenting observables. Here's a simple stub for an extender:

ko.extenders.yourExtender = function (target, args) {
  // Observe / manipulate the target based on args and returns the value
};

Usage:

<span data-bind="text: yourObv.extend({ yourExtender: args })" />

The closest thing to extenders in VueJS is the concept of "filters", which can be used to achieve a similar objective. Here's what a filter stub would look like:

Vue.filter('yourFilter', function (value, args) {
  // Manipulate the value based on the args and return the result
});

Usage:

<span>{{ yourVar | yourFilter(args) }}</span>

Alternatively, you can also call a filter function inside the v-bind attribute:

<span v-bind='{style: {width: $options.filters.yourFilter(yourVar, args)}}'/>
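To make that concrete, here's the classic capitalize filter from the Vue docs:

Vue.filter('capitalize', function (value) {
  if (!value) return '';
  value = value.toString();
  return value.charAt(0).toUpperCase() + value.slice(1);
});

<span>{{ yourVar | capitalize }}</span>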

Components

KO offers the ability to create components to help organize UI code into self-contained, reusable chunks. Here's a simple component stub:

ko.components.register('your-component', {
  viewModel: function(params) {
    this.someObv = ko.observable(params.someValue);
  },
  template: { element: 'your-component-template' },
});

Usage:

<your-component params='someValue: "Hello, world!"'></your-component>

VueJS also has the ability to create components. They are much more feature-rich and have better lifecycle hooks compared to KO. They also feel much more "native" to the framework. Here's a simple component stub in Vue:

Vue.component('your-component', {
  props: ['someValue'],
  data: function () {
     return {
       someKey: this.someValue
     }
  },
  template: '#your-component-template'
})

Usage:

<your-component some-value="Hello, world!"></your-component>

This just scratches the surface of what’s possible with components in Vue. They are definitely worth diving more into. Maybe I’ll cover them more in another post.

3rd Party Plugins/Libs/Tools

Mapping - One of the most commonly used plugins in the KnockoutJS ecosystem has been the ko.mapping plugin, which helps transform a JavaScript object into appropriate observables. With VueJS, it's not needed, since Vue takes care of that under the hood by walking through all the properties of a VM and converting them to getter/setters using Object.defineProperty. This lets Vue perform dependency tracking and change notification when properties are accessed or modified, while keeping that invisible to the user.
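As a rough sketch of the idea (a simplification, not Vue's actual internals), the conversion looks something like this:

function defineReactive(obj, key, val) {
  Object.defineProperty(obj, key, {
    get: function () {
      // dependency tracking would hook in here
      return val;
    },
    set: function (newVal) {
      val = newVal;
      // change notification would hook in here
    }
  });
}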

Validation - In addition to mapping, the Knockout-Validation library is another mainstay of the ecosystem. With VueJS, vee-validate is its popular counterpart and provides similar features out of the box.

Debugging - It's important to have a good debugging tool during development. KnockoutJS has the Knockoutjs context debugger, while VueJS offers something similar with Vue.js devtools.

Lastly…

VueJS is an incredibly feature-rich framework with various options for customization and code organization. It is one of the fastest-growing frameworks, with adoption by some big-name projects such as Laravel, GitLab, and PageKit, to name a few. Hopefully that makes it a good bet for the future!

I'll leave you with this chart, which pretty much sums up the story of these two frameworks:

[Chart comparing the popularity trends of KnockoutJS and VueJS]

Incorporating Modern Javascript Build Tools With Rails


Recently I’ve been working on ways to bring modern JavaScript development practices into an existing Rails app.

As RoR developers know, Sprockets is a powerful asset pipeline that ships with Rails and thus far has JustWorked™. But with the JavaScript ecosystem maturing in recent years, Sprockets hasn't been able to keep pace with the flexibility and feature set of its competitors. While Sprockets might have originally popularized the asset pipeline model, it is finally time to hand the reins to its successors.

So, having said that, I started searching for a viable replacement for Sprockets. My research came up with two main contenders, namely Browserify and Webpack. Here's my experience integrating each one with a Rails app.

NodeJS

Before we can get started, let's make sure we have NodeJS installed. I've found the easiest way to do so is via nvm. Rails developers will find this very similar to rvm. To install, run the following commands, which will install nvm and the latest version of NodeJS:

$ curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.32.1/install.sh | bash
$ source ~/.bash_profile
$ nvm install node
$ nvm use node

Yarn

Next we'll need a package manager. Traditionally we'd use npm, but I've found Facebook's yarn to be a lot more stable and reliable to work with. It is very similar to bundler. To install on Debian Linux, run the following commands, or follow their installation guide for your OS:

$ curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -
$ echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
$ sudo apt-get update && sudo apt-get install yarn

Browserify

Now that we have the essentials available, let's first try Browserify. The main objective of Browserify is to bring the ability to "require" and use NodeJS packages in the browser, bundling all their dependencies for you.

Getting Browserify integrated into Rails is surprisingly simple thanks to the browserify-rails gem. So let's get started by adding it to our Gemfile and creating a package.json for all its required JS dependencies.

Gemfile

gem 'browserify-rails'

package.json

{
  "name": "myapp",
  "version": "0.0.1",
  "license": "MIT",
  "engines": {
    "node": ">= 0.10"
  },
  "dependencies": {
    "babel-preset-es2015": "^6.1.18",
    "babelify": "^7.2.0",
    "browserify": "~> 10.2.4",
    "browserify-incremental": "^3.0.1"
  }
}

Instead of copy/pasting the dependencies from the package.json above, I'd recommend using the yarn add [package-name] command so you get the latest versions, as the ones listed above might get stale over time.

All you have to do now is run the two package managers, bundle install & yarn install, to finish the setup.
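One more bit of wiring you will likely need for the ES6 example below: browserify-rails has to be told to run the babelify transform we added to package.json. Assuming the gem's commandline_options setting (double-check the browserify-rails README for your version), that looks something like this in config/application.rb, along with a .babelrc at the project root:

config/application.rb

config.browserify_rails.commandline_options = "-t babelify"

.babelrc

{
  "presets": ["es2015"]
}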

Usage

You can now define and export your own modules, or use any CommonJS-compatible library, and require them in your application manifest right alongside Sprockets' require syntax:

assets/javascript/foo.js

export default class Foo {
  constructor() {
    console.log('Loaded Foo!')
  }
}

assets/javascript/application.js

MyApp.Foo = require('./foo'); // <--- require via Browserify
//= require bar               // <--- require via Sprockets

That's pretty much all there is to it. While getting Browserify up and running was easy, it wasn't designed to be a complete asset pipeline replacement. You could technically set that up in conjunction with Gulp or npm scripts, but it's not something it was designed to do out of the box.

Webpack

I was hoping to go a little further than what Browserify offered and get a complete asset pipeline replacement out of the box.

With that in mind, I started looking into Webpack. Its motto of "only load what you need, when you need it" sounded pretty compelling.

Especially with DHH working on bringing Webpack and Yarn to Rails 5.1, which is great news and makes it a solid choice as far as future-proofing goes.

So let’s see how to get Webpack integrated right now in Rails 4.x before it becomes more formally integrated into Rails 5.

Gemfile

gem 'webpack-rails'
gem 'foreman'

package.json

{
  "name": "myapp",
  "version": "0.0.1",
  "dependencies": {
    "babel-core": "^6.9.0",
    "babel-loader": "^6.2.4",
    "babel-preset-es2015": "^6.9.0",
    "extract-text-webpack-plugin": "^1.0.1",
    "resolve-url-loader": "^1.6.0",
    "stats-webpack-plugin": "^0.2.1",
    "webpack": "^2.2.1",
    "webpack-dev-server": "^2.3.0"
  }
}

Now all you have to do is run the two package managers bundle install & yarn install to finish the setup.

We are not done yet, though. Next, let's create a config file which will define how our Webpack asset pipeline works.

config/webpack.config.js

'use strict';

var path = require('path');
var webpack = require('webpack');
var StatsPlugin = require('stats-webpack-plugin');

// must match config.webpack.dev_server.port
var devServerPort = 3808;

// set NODE_ENV=production on the environment to add asset fingerprints
var production = process.env.NODE_ENV === 'production';

var config = {
  name: 'js',
  entry: {
     //Define entry points here aka manifests
    'application': './webpack/javascript/application.js',
  },
  output: {
    path: path.join(__dirname, '..', 'public', 'webpack'),
    publicPath: '/webpack/',
    filename: production ? '[name]-[chunkhash].js' : '[name].js'
  },
  resolve: {
    modules: [
      path.join(__dirname, '..', 'webpack', 'javascript'),
      "node_modules/"
    ]
  },
  plugins: [
    new StatsPlugin('manifest.json', {
      chunkModules: false,
      source: false,
      chunks: false,
      modules: false,
      assets: true
    })],
  module: {
    rules: [
      {
        test: /\.js$/,
        exclude: /node_modules/,
        use: [
          {
            loader: 'babel-loader',
            options: {
              cacheDirectory: true,
              presets: ['es2015'] // Use es2015-without-strict if legacy JS is causing issues
            }
          }
        ]
      }
    ]
  }
};

if (production) {
  config.plugins.push(
    new webpack.NoErrorsPlugin(),
    new webpack.optimize.UglifyJsPlugin({
      compressor: {warnings: false},
      sourceMap: false
    }),
    new webpack.DefinePlugin({
      'process.env': {NODE_ENV: JSON.stringify('production')}
    }),
    new webpack.optimize.DedupePlugin(),
    new webpack.optimize.OccurenceOrderPlugin()
  );
} else {
  var devServer = {
    host: '0.0.0.0',
    port: devServerPort,
    headers: {'Access-Control-Allow-Origin': '*'},
    watchOptions: { //You will need this if you are a Vagrant user
      aggregateTimeout: 0,
      poll: 10
    }
  };
  config.devServer = devServer;
  config.output.publicPath = '//localhost:' + devServerPort + '/webpack/';
  config.devtool = 'source-map';
}

module.exports = config;

So that's a whole lot of configuration! But as long as you understand the four core concepts behind Webpack (Entry, Output, Loaders & Plugins), it will all start to make more sense.

As far as a Rails/Sprockets analogy goes, you can think of an Entry as a manifest file and the Output as the final asset file produced by compiling that manifest.

Everything in between is handled by Loaders which execute transformations on individual files and Plugins which execute transformations on a set of files.

Webpack offers a rich ecosystem of loaders and plugins which can perform a variety of transformations which is where the true power and flexibility of Webpack becomes apparent.

Usage

So now that we are done setting this up, let's see how to put it to use. Unlike Browserify, which only supported CommonJS modules, Webpack allows mixing and matching Harmony (ES6), AMD, or CommonJS syntax for module loading:

webpack/javascript/foo.js

define('Foo', [], function() {
  return function() {
    console.log('Loaded Foo!');
  };
});

webpack/javascript/application.js

MyApp.Foo = require('./foo');

views/layouts/application.html.erb

<%= javascript_include_tag *webpack_asset_paths('application', extension: 'js') %>

So that's the basic setup. The next time we start Rails, we'll also need to start the Webpack dev server, which will serve the assets locally and rebuild them when they change.

For convenience, I would recommend using foreman so we can start both Rails and Webpack with a single foreman start. Here's the Procfile you'd need to declare before running the command:

Procfile

# Run Rails & Webpack concurrently
rails: bundle exec rails server -b 0.0.0.0
webpack: ./node_modules/.bin/webpack-dev-server --no-inline --config config/webpack.config.js

Deploy

Deploying Webpack to production is fairly straightforward. Similar to Sprockets' rake assets:precompile task, the webpack-rails gem provides a rake webpack:compile task which can run in a production build script. It will compile and place the assets into the location specified in our config.output option; in this case, the public/webpack folder of our Rails application.
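In practice, a production build script would run something along these lines (assuming the standard tasks; adjust to your deploy tooling):

RAILS_ENV=production NODE_ENV=production bundle exec rake webpack:compile
RAILS_ENV=production bundle exec rake assets:precompile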

Conclusion

So that covers the basics of setting up a modern asset pipeline in Rails. I think Browserify is great if you just want to use some ES6 features and NodeJS packages but don't necessarily want to overhaul the entire asset pipeline. On the other hand, Webpack is a bit more difficult to configure, but it provides the most flexibility and has all the features necessary to become a complete asset pipeline replacement.

Setting Up A Rails Development Environment Using Docker

Rails Stack on Docker

A couple of years ago I wrote about how to set up a local development environment using Vagrant. While Vagrant has worked out great, containerization with Docker has become all the rage. So I decided to take it for a spin and see what all the fuss is about. I'm going to use an existing Rails app as my test subject.

Requirements

So before we get started, here's what you'll need to install:

1) Docker - if you're on macOS or Windows, Docker Toolbox bundles the docker-compose tool we'll rely on below
2) docker-sync - gem install docker-sync

Folder structure

Here's an overview of the folder structure we'll be creating. We'll store our bundle, Postgres data, and the web app itself on our host. We'll also symlink the keys folder to our SSH keys: ln -s ~/.ssh/ keys

I’ll get more into the advantages of doing so further along in this post.

myapp-dev-box/
	├── Dockerfile
	├── docker-compose.yml
	├── docker-sync.yml
	├── myapp/
	├── bundle/
	├── pgdata/
	└── keys/ -> ~/.ssh/

Dockerfile

Base

The first thing we'll do is create a Dockerfile, which will be our starting point, inside an empty dir called myapp-dev-box. Since this is for a Rails project, we'll base this container off the official Ruby image on Docker Hub, which you can think of as GitHub for containers.

FROM ruby:2.1.3

Dependencies

Next we'll install some dependencies that we need for the web app. Notice that we run this as a single command. The reason: Docker caches the state of the container after each command as an intermediate layer to speed things up. You can read a bit more about how this works here.

So if we need to add or remove a package, it's recommended to re-run the entire step, including apt-get update, to make sure the entire dependency tree is updated.

RUN apt-get update && apt-get install -y \
    build-essential \
    wget \
    git-core \
    libxml2 \
    libxml2-dev \
    libxslt1-dev \
    nodejs \
    imagemagick \
    libmagickcore-dev \
    libmagickwand-dev \
    libpq-dev \
    ffmpegthumbnailer \
  && rm -rf /var/lib/apt/lists/*

SSH Keys

Now let's create a folder for our app and a folder to store the SSH keys. The SSH keys are needed to check out private repositories during the bundle install step inside the container.

Alternatively, you can make use of a build flow tool like Habitus to securely share a common set of keys and destroy them later in the build process. You can read more about it here. It supports many different complex build flows, making it ideal for production use, but it adds more complexity than we need for a development environment, so I've decided against using it here.

You can also create a separate set of SSH keys (without a password) and place them in the same folder as the Dockerfile to be used within the container. This approach is a lot less secure, though, as those keys would essentially become part of the container cache and could be exploited if someone got hold of the image history. I wouldn't recommend it.

Feel free to skip this step altogether if your Gemfile doesn’t reference any private repositories.

We'll also add the GitHub and Bitbucket domains to the known hosts file to avoid first-connection host confirmation during the build process.

RUN mkdir /myapp
RUN mkdir -p /root/.ssh/

WORKDIR /myapp

RUN ssh-keyscan -H github.com >> ~/.ssh/known_hosts
RUN ssh-keyscan -H bitbucket.org >> ~/.ssh/known_hosts

Bundle

Generally, we would copy Gemfile.* and run bundle install as a separate step in our Dockerfile so it can run once and be cached during subsequent builds.

That has some downsides, though, especially for a development environment. First and foremost, the bundle would have to be rebuilt from scratch every time the Gemfile changed, which could get frustrating if it changes frequently.

Since I was primarily focused on a development environment and wanted to make this setup process as frictionless as possible for new devs, I decided to set it up in a way where the state of our bundle persists after running it once, even after we shut down the container and start it back up, just like it would on a VM or a local machine. It also utilizes the same SSH keys that are already present on the developer's machine.

To make that happen, we'll point GEM_HOME to a root folder called bundle, which will be synced from the host. We'll also update the bundler configuration to point to that path.

ENV GEM_HOME /bundle
ENV PATH $GEM_HOME/bin:$PATH

RUN gem install mailcatcher
RUN gem install bundler -v '1.10.6' \
    && bundle config --global path "$GEM_HOME" \
    && bundle config --global bin "$GEM_HOME/bin"

docker-compose.yml

Now that we have set up our base app container, it's time to build and link a couple of supporting containers to run our app. Docker Toolbox includes a great tool called docker-compose (previously known as fig) to help us do just that.

Let's start by defining which services we want to run. We'll split our app into db, redis, web and job services. For the db and redis services, we'll use the official Postgres and Redis images provided by Docker Hub without any custom changes. For the job and web services, we'll instruct it to build the image from the Dockerfile we just created in the previous section.

version: '2'
services:
  db:
    image: postgres
    volumes:
      - ./pgdata:/pgdata
    environment:
      POSTGRES_DB: myapp_development
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password4postgres
      PGDATA: /pgdata
  redis:
    image: redis
  web:
    build: .
    command: bundle exec rails server --port 3000 --binding 0.0.0.0
    volumes_from:
      - container:myapp-web-sync:rw
      - container:myapp-bundle-sync:rw
    volumes:
      - ./keys:/root/.ssh/
    ports:
      - "3000:3000"
    environment:
      REDIS_URL: redis://redis:6379
    links:
      - db
      - redis
  job:
    build: .
    command: bundle exec sidekiq
    volumes_from:
      - container:myapp-web-sync:rw
      - container:myapp-bundle-sync:rw
    volumes:
      - ./keys:/root/.ssh/
    ports:
      - "6379:6379"
    environment:
      REDIS_URL: redis://redis:6379
    links:
      - db
      - redis
volumes:
  myapp-web-sync:
    external: true
  myapp-bundle-sync:
    external: true

Most of the configuration under each service is pretty self-explanatory, but there are a couple of interesting things to note. First, under the web and job services, you'll notice we have a links section which tells docker-compose that these two services depend on the db and redis services. So docker-compose will be sure to start the linked services before it starts web or job.

Second, you'll notice we have volumes and volumes_from sections. volumes is the native Docker syntax for mounting a directory from the host as a data volume. While this is super useful, it tends to be very slow! So for now, we are mostly going to limit its use to sharing the database and the SSH keys. For the things that are most disk I/O intensive, we'll define external volumes in the volumes_from section, which utilizes a gem by Eugen Mayer called docker-sync. It gives us the ability to use rsync or unison, which should significantly boost performance.

For that, we'll need to define yet another configuration file, in which we describe what the myapp-web-sync and myapp-bundle-sync volumes will do. As their names suggest, we'll use them to sync our web project files and the bundle, respectively. Note that each sync project has to have its own unique port.

docker-sync.yml

syncs:
  myapp-web-sync:
    src: './myapp'
    dest: '/myapp'
    sync_host_ip: 'localhost'
    sync_host_port: 10872
  myapp-bundle-sync:
    src: './bundle'
    dest: '/bundle'
    sync_host_ip: 'localhost'
    sync_host_port: 10873

Showtime

Assuming you have your web project cloned into myapp-dev-box/myapp, let's go through the following steps:

Install the bundle
host $ docker-compose run web bundle install

Note we will only have to do this once, since the bundle state is shared between our web and job services using a docker-sync volume. We only need to re-run it if/when the Gemfile changes.

Run the migrations
host $ docker-compose run web rake db:migrate

Like the previous command, this only needs to be run initially and then whenever there are changes, since the state of the database is persisted on the host as well.
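Any other one-off task can be run through the web service in the same way, for example:

host $ docker-compose run web rails console
host $ docker-compose run web rake db:seed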

Start

This command is a helper which starts the sync service (like docker-sync start) and then starts your compose stack (like docker-compose up) in a single step.

host $ docker-sync-stack start

If everything was configured correctly, you should now be able to access your app at http://localhost:3000

Stop

Once you are done, you can call another helper which stops the sync service (like docker-sync clean) and also removes the application stack (like docker-compose down):

host $ docker-sync-stack clean

That’s a wrap. Let me know if you run into any unexpected issues.