Adding GitHub Actions to Run RSpec and SimpleCov

Summary

Given a simple Ruby project (say this one), how easy is it to set up a GitHub action on the repo so that it runs specs on push? And then say SimpleCov, too?

TL;DR: Skipping the trial and error and the credits (all below), just add this as .github/workflows/run-tests.yml to your repo and push it and you’re away:

name: Run tests
on: [push, pull_request]
jobs:
  run-tests:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
      - name: Set up Ruby
        uses: ruby/setup-ruby@v1
        with:
          bundler-cache: true
      - name: Run tests
        run: bundle exec rspec -f j -o tmp/rspec_results.json -f p
      - name: RSpec Report
        uses: SonicGarden/rspec-report-action@v2
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
          json-path: tmp/rspec_results.json
        if: always()
      - name: Report simplecov
        uses: aki77/simplecov-report-action@v1
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
        if: always()
      - name: Upload simplecov results
        uses: actions/upload-artifact@master
        with:
          name: coverage-report
          path: coverage
        if: always()

Adding RSpec

I did a search and found Dennis O’Keeffe’s useful article and since I already had a repo I just plugged in his .github/workflows/rspec.yml file:

name: Run RSpec tests
on: [push]
jobs:
  run-rspec-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Ruby
        uses: ruby/setup-ruby@v1
        with:
          # Not needed with a .ruby-version file
          ruby-version: 2.7
          # runs 'bundle install' and caches installed gems automatically
          bundler-cache: true
      - name: Run tests
        run: |
          bundle exec rspec

and it just ran, after three slight tweaks:

  • My repo already had a .ruby-version file, so I could skip the ruby-version lines
  • the checkout@v2 action now gives a deprecation warning, so I upped it to checkout@v4
  • because my local machine was a Mac, I also needed to run bundle lock --add-platform x86_64-linux (shown below) and push that for the action to run on ubuntu-latest. Unsurprisingly.
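
That bundle lock step just records the extra platform in the Gemfile.lock PLATFORMS section, which afterwards looks something like this (the darwin entry is whatever your local platform already was):

PLATFORMS
  arm64-darwin-23
  x86_64-linux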

The specs failed, which was unexpected since they were passing locally. It would also be more informative if the summary page named the specific failure instead of needing me to click into run-tests to see it:

So I went to the GitHub Actions Marketplace and searched for “rspec”. There’s an RSpec Report action that looks like it does what we want if we change the bundle exec rspec line to output the results as a JSON file (-f j -o tmp/rspec_results.json) while keeping the progress output (-f p), so I changed the end of the workflow file to

      - name: Run tests
        run: bundle exec rspec -f j -o tmp/rspec_results.json -f p
      - name: RSpec Report
        uses: SonicGarden/rspec-report-action@v2
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
          json-path: tmp/rspec_results.json
        if: always()

pushed, and tried again, and it failed again, but this time with details:

The failure is not github-action-specific (Digressions on the repo #1: RSpec): the TL;DR is that I had manually created a spec/tmp_files directory for files the tests wrote, and because of the way I was doing setup and teardown it would have continued to work on my machine, and nobody else’s, without my noticing until I erased my local setup or tried it on a new machine. This was a very useful early warning.

The RSpec Report step’s if: always() is worth a mention. If the Run tests block exits with failing specs, subsequent blocks won’t run by default. If we need them to run anyway, we need to add if: always(), which is why it’s included here and in all subsequent blocks.

Adding SimpleCov

Since the project wasn’t already using SimpleCov, step one was setting that up locally (Digressions on the repo #2: SimpleCov).

That sorted, I went back to the GitHub Actions Marketplace and searched for “simplecov” and started trying alternatives. The currently-highest-starred action, Simplecov Report, drops in neatly after running the specs and the RSpec Report:

      - name: Simplecov Report
        uses: aki77/simplecov-report-action@v1
        with:
          token: ${{ secrets.GITHUB_TOKEN }}

– it defaults to requiring 90% coverage, but you can set a different value by passing in a failedThreshold:

      - name: Simplecov Report
        uses: aki77/simplecov-report-action@v1
        with:
          failedThreshold: 80
          token: ${{ secrets.GITHUB_TOKEN }}

and of course if we want it to run whether specs fail or not, we need to add if: always().

So, I added

  - name: Simplecov Report
    uses: aki77/simplecov-report-action@v1
    with:
      token: ${{ secrets.GITHUB_TOKEN }}
    if: always()

and pushed. That passed:

That said, as with the spec results, it would be useful to see more detail. I remembered a previous project where we’d uploaded the SimpleCov coverage report to GitHub so it showed up among the artifacts, and the steps in Jeremy Kreutzbender’s 2019 article still work for that: we can add a fourth block to our file:

- name: Upload coverage results
  uses: actions/upload-artifact@master
  with:
    name: coverage-report
    path: coverage
  if: always()

and push, and that gives us access to the coverage report:

which we can download and explore, down to line-by-line detail, even if our local machine is on a different branch.

And, for pushes, we’re done. But there are two more things we need to add for it to work with pull requests as well.

For Pull Requests

The first thing to do is to modify the action file so that we trigger the workflow on pull requests as well as pushes:

on: [push, pull_request]

At which point, it feels like everything should work, but if we create a PR, the SimpleCov Report fails, with Error: Resource not accessible by integration:

It ran the code coverage, as we can see from the end of the Run tests block, and it uploaded the coverage results, but the SimpleCov Report block failed.

This is because, in the PR case, the SimpleCov Report tries to write a comment to the PR with its coverage report:

and to enable it to do that, we need to give it permission, so we need to insert into the action file:

permissions:
  contents: read
  pull-requests: write

And that fixes the SimpleCov Report error and gets us to a passing run.

I was initially unwilling to do this because pull-requests: write felt like a dangerously wide-scoped permission to leave open, but, on closer examination, the actual permissions that it enables are much more narrowly scoped, to issues and comments and labels and suchlike.

(Note that we only need to add these permissions because SimpleCov Report is adding a comment to the PR: if we’d picked a different SimpleCov reporting action that didn’t write a comment, we would only have needed on: [push, pull_request] to get the action to work on pushes and pull requests.)

This was an interesting gotcha to track down, but now we’re done, for both pushes and pull requests.

With thanks to my friend Ian, who unexpectedly submitted a pull request for the repo which exposed the additional wrinkles that had to be investigated here.

Appendix: Digressions on the Repo

#1: RSpec

The first time I ran the RSpec action it failed, which usefully revealed that the setup in one of my tests was relying on a manual step.

The repo takes an input file and writes an output file, so in spec/ I’ve got spec/example_files for the files it starts from and spec/tmp_files for the new files it writes. I had created that directory locally and was running

  before do
    FileUtils.rm_r("spec/tmp_files")
    FileUtils.mkdir("spec/tmp_files")
  end

This only worked, of course, because I’d manually created the directory before the first run, which I didn’t remember until the GitHub action tried, and failed, because it had no idea about that manual step.

The simplest fix would be to replace it with

  before do
    FileUtils.mkdir("spec/tmp_files")
  end

  # ... tests

  after do
    FileUtils.rm_r("spec/tmp_files")
  end

but because I was using the specs to drive the implementation and I was eyeballing the produced files as I went, I didn’t want to delete them in teardown, so I did this instead:

  before do
    FileUtils.rm_r("spec/tmp_files") if Dir.exist?("spec/tmp_files")
    FileUtils.mkdir("spec/tmp_files")
  end

after which both the local and the GitHub action versions of the tests passed. This discovery was an unexpected but very welcome benefit of setting up the GitHub action to run specs remotely.

Back

#2: SimpleCov

To add SimpleCov locally, we add gem "simplecov" to the Gemfile, require and start it from the spec/spec_helper.rb file (filtering out spec files from the report):

require 'simplecov'
SimpleCov.start do
  add_filter "/spec/"
end

and add coverage/ to the .gitignore file so we aren’t committing or pushing the files from local coverage runs.
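
The Gemfile entry can sit in the test group with require: false, since we require SimpleCov explicitly (and first) from spec_helper.rb — a minimal sketch, assuming a :test group:

group :test do
  gem "simplecov", require: false
end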

Then we can run RSpec locally, open the coverage/index.html file, and discover that though we know that both lib/ files are being exercised in the specs, only one of them shows up in the coverage report:

And this makes sense, unfortunately, because lib/formatter.rb contains the class which does the formatting, and lib/reformat.rb is a script for the command-line interface which is largely optparse code and param checking. But it makes for a misleading coverage report.

We can start fixing this by moving everything but the command-line interface and optparse code out of lib/reformat.rb into, say, lib/processor.rb, and have the CLI call that. It still won’t show up in the coverage report because it isn’t being tested directly, but we can add tests against Processor directly so that it does.
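
The extraction itself is mostly a matter of giving the moved code a home. A sketch of the shape (the names follow the prose above; the real body is whatever lib/reformat.rb was doing):

# lib/processor.rb
class Processor
  def initialize(options:)
    @options = options
  end

  def process
    # the reformatting logic that used to live inline in lib/reformat.rb
  end
end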

Having done that, we get a much more satisfactory coverage report:

This still leaves a very small bit of code in the CLI that isn’t covered by the coverage report:

option_parser.parse!
variant = ARGV[0]

if (variant.nil? && options.empty?) || variant == "help"
  puts option_parser
end

if variant == "alternating"
  Processor.new(options: options).process
end

but we have tests that will break if that doesn’t work, so we decide we can live with that. (If it got more complicated, with multiple variants calling multiple processors, we could pull the variant-handling conditionals into their own class and test that directly too. But it hasn’t yet.)

Back

Adding described_url to RSpec for Request Specs

Background

RSpec has a described_class method, which, given a spec starting

RSpec.describe Accounting::Ledger::Entry do

lets us use described_class for Accounting::Ledger::Entry within the spec, which is less noisy (particularly with long class names) and very handy if we later rename the class.

RSpec also has a subject method which exposes not the class but an instance of the class. It can be overridden implicitly, by putting a different class name in an inner describe block’s description:

RSpec.describe Accounting::Ledger::Entry do
  # subject starts as instance of Accounting::Ledger::Entry
  …
  describe Accounting::Ledger::SpecificEntryType do
    # subject is now instance of Accounting::Ledger::SpecificEntryType

or explicitly, by defining it directly:

RSpec.describe Accounting::Ledger::Entry do
  # subject starts as instance of Accounting::Ledger::Entry
  …
  subject { "just what I choose it to mean – neither more nor less" }
  # subject is now that string, just like Humpty Dumpty said

Problem

What if we’re building a request spec for an API endpoint, and our spec starts

RSpec.describe "/accounting/ledger/entries" do
  it "GET" do
    get "/accounting/ledger/entries"

    expect(response).to have_http_status(:ok)

It would be nicer if we had a described_url method, so we could say get described_url instead of repeating the url string in each example.

We could set subject once to the url:

RSpec.describe "/accounting/ledger/entries" do
  subject { "/accounting/ledger/entries" }
  it "GET" do
    get subject

But the single repetition still feels like it should be unnecessary, particularly with the string we need only one line away.

Towards a Solution

We see from the documentation that we can get the inner description string from the example if we specify it as a block parameter:

RSpec.describe "/accounting/ledger/entries" do
  it "GET" do |example|
    expect(example.description).to eql("GET") # => passes
  end
end

We won’t want to say do |example| all the time, though, so let’s start building code in a single before block. Further, we’ll only want this in request specs, so let’s tag request specs as such

RSpec.describe "/accounting/ledger/entries", :requests do

(:requests here is equivalent to requests: true), and put the before do |example| block, scoped to specs tagged :requests, in spec/spec_helper.rb:

RSpec.configure do |config|
  …
  config.before(:each, :requests) do |example|
    binding.pry
  end

From here we can run the spec, stop at the debugger, and see that example.description is indeed “GET”:

> example
=> #<RSpec::Core::Example "GET">
> example.description
=> "GET"

Looking at lib/rspec/core/example.rb shows us that def description looks in something called metadata[:description], and typing example.metadata at the debugger gives us a screen-full of hash data, including

{ … 
:description => "GET"
:full_description => "/accounting/ledger/entries GET",
:example_group => { …
  :description => "/accounting/ledger/entries"

So if we’re one layer deep, example.metadata[:example_group][:description] gives us the outer description.

However, the actual example might be nested inside a describe or context block. Given

RSpec.describe "/accounting/ledger/entries" do
  describe "GET" do
    it "first case" do

the relevant metadata extract is instead:

{ … 
:description => "first case"
:full_description => "/accounting/ledger/entries GET first case",
:example_group => { …
  :description => "GET",
  :full_description => "/accounting/ledger/entries GET",
  …
  :parent_example_group => { …
    :description => "/accounting/ledger/entries",
    :full_description => "/accounting/ledger/entries"

And yes, testing with another nested group demonstrates that a :parent_example_group can include an inner :parent_example_group.

A Solution

But that’s a simple recursion problem: keep going until you don’t have a next :parent_example_group and then grab the outermost :description. We could replace the contents of the before block with:

def reckon_described_url(metadata)
  if metadata[:parent_example_group].nil?
    metadata[:description]
  else
    reckon_described_url(metadata[:parent_example_group])
  end
end

@described_url = reckon_described_url(example.metadata[:example_group])

Why the Snark Is a Boojum

(Yes, arguably, because the first space in the :full_description string happens to be between the outer description and the second-outermost description, we could get the same answer from

@described_url = example.metadata[:full_description].split(" ").first

but only by coincidence, and it would break for unrelated reasons if they changed how they build :full_description. So let’s not do that.)

Back to the Solution

We can now use the code from our before block in a request spec. Going back to our original example, we can now implement it:

RSpec.describe "/accounting/ledger/entries", :requests do
  it "GET" do
    get @described_url

If we’d prefer not to have an unexplained instance variable, or are looking forward to the possibility of passing in parameters, we can wrap it in a method call in the before block:

def described_url
  @described_url
end

And then it can (finally) be used by the end user just like described_class:

RSpec.describe "/accounting/ledger/entries", :requests do
  it "GET" do
    get described_url

A Solution that Takes Parameters

A more realistic version of the request spec url might be /accounting/ledgers/:ledger_id/entries. Fortunately, since we’ve just turned described_url into a method, we can make it take parameters, so a slightly changed test can pass in the parameters it needs to substitute:

RSpec.describe "/accounting/ledgers/:ledger_id/entries", :requests do
  let(:ledger) { … build test ledger … }
  it "GET" do
    get described_url(params: {ledger_id: ledger.id})

and the method can create a copy of the base @described_url (as different tests may well require different parameters) and return the copy with the parameters, if any, filled in:

def described_url(params: {})
  url = @described_url.dup
  params.each do |key, value|
    url.sub!(":#{key}", value.to_s)
  end
  url
end

Appendix: The Complete Before Block

RSpec.configure do |config|
  …
  config.before(:each, :requests) do |example|
    def described_url(params: {})
      url = @described_url.dup
      params.each do |key, value|
        url.sub!(":#{key}", value.to_s)
      end
      url
    end

    def reckon_described_url(metadata)
      if metadata[:parent_example_group].nil?
        metadata[:description]
      else
        reckon_described_url(metadata[:parent_example_group])
      end
    end

    @described_url = reckon_described_url(example.metadata[:example_group])
  end
  …
end

Testing with Page Objects: Implementation

In the previous post, we added the page object framework SitePrism to a project, did some organization and customization, and added helper methods so we could call visit_page and on_page to interact with the app. Next, let’s start using it.

Navigation

First Step

Let’s say that for the first test you want to go to the login page, log in, and then assert that you’re on the dashboard page. That’s easy enough to do like so:

visit_page("Login").login(username, password)
on_page("Dashboard")

module PageObjects
  class Login < SitePrism::Page
    set_url "/login"

    element :username, "input#login"
    element :password, "input#password"
    element :login, "input[name='commit']"

    def login(username, password)
      self.username.set username
      self.password.set password
      login.click
    end
  end
end

module PageObjects
  class Dashboard < SitePrism::Page
    set_url "/dashboard"
  end
end

The first line will go to the “/login” page and then try to submit the form, and will fail if it can’t find or interact with the “username”, “password”, or “login” elements. For instance, if the css for the username field changes, the test will fail with:

Unable to find css "input#login" (Capybara::ElementNotFound)
./test_commons/page_objects/login.rb:10:in `login'
./features/step_definitions/smoke.rb:7:in `login'
./features/step_definitions/smoke.rb:3:in `/^I log in$/'
features/smoke.feature:5:in `When I log in'

The second line will fail if the path it arrives at after logging in is something other than “/dashboard”.

Because the default message for the different path was uninformative (expected true, but was false), we added a custom matcher to provide more information. The default message for not finding an element on the page, including the css it’s expecting and the backtrace to the page object it came from and the line in the method it was trying to run, seems sufficiently informative to leave as-is.

If the login field were disabled instead of not found, the error message would be slightly less informative:

invalid element state
  (Session info: chrome=…)
  (Driver info: chromedriver=2.14. … (Selenium::WebDriver::Error::InvalidElementStateError)
./test_commons/page_objects/login.rb:10:in `login'
./features/step_definitions/smoke.rb:7:in `login'
./features/step_definitions/smoke.rb:3:in `/^I log in$/'
features/smoke.feature:5:in `When I log in'

but since the backtrace still gives you the line of the page object where it failed, and that gets you the element in an invalid state, it feels lower priority.

Second Step

If the next thing you want to test is a subcomponent flow, you can navigate to its first page either by calling visit_page('ComponentOne::StepOne') (the equivalent of typing the path in the browser address bar), or by having the test interact with the same navigation UI that a user would. The second choice feels more realistic and more likely to expose bugs, like the right navigation links not being displayed.

(Ironically, having created the visit_page method, we might only ever use it for a test’s first contact with the app, using in-app navigation and on_page verification for all subsequent interactions.)

To get that to work, we need to model the navigation UI.

SitePrism Sections and Navigation UI

Suppose the app has a navigation bar on every logged-in page. We could add it as a section to the Dashboard, say, but if it’s on every page it probably makes more sense to add it as a section to a BasePage and have Dashboard (and all other logged-in pages) inherit from it:

module PageObjects
  class BasePage < SitePrism::Page
    section :navigation, Navigation, ".navigation"
  end
end

module PageObjects
  class Dashboard < BasePage
    …
  end
end

module PageObjects
  class Navigation < SitePrism::Section
    element :component_one, ".component_one"
    element :component_two, ".component_two"

    def goto_component_one
      component_one.click
    end

    def goto_component_two
      component_two.click
    end
  end
end

Now, if we want to go from the dashboard to the start of component one, we can modify the on_page("Dashboard") line above to

on_page("Dashboard").navigation.goto_component_one

Can we delegate to the section from the enclosing page? Yes. If we add this to the base page:

module PageObjects
  class BasePage < SitePrism::Page
    …
    delegate :goto_component_one, :goto_component_two, to: :navigation
  end
end

we can then call, still more briefly:

on_page("Dashboard").goto_component_one

And now the test from the top can continue like so:

visit_page("Login").login(username, password)
on_page("Dashboard").goto_component_one
on_page("ComponentOne::StepOne").do_something

and line two will fail if the link to “component one” isn’t visible on the screen, and line three will fail if the user doesn’t end up on the first page of component one (perhaps they followed the link but authorizations aren’t set up correctly, so they end up on an “unauthorized” page instead).

What Else Is on the Page?

If you call a method on a page object it will verify that any item that it interacts with is on the page, and fail if it can’t find it or can’t interact with it. But what if you’re adding tests to an existing app, and you aren’t yet calling methods that interact with everything on the page, but you do want to make sure certain things are on the page?

Suppose you have a navigation section with five elements, unimaginatively called :one, :two, :three, :four, :five. And you have three types / authorization levels of users, like so:

User    Can see elements
user1   :one
user3   :one, :two, :three
user5   :one, :two, :three, :four, :five

And you want to log each of them in and verify that they only see what they should see. We can do this with SitePrism’s has_<element>? and has_no_<element>? methods.
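
Under the hood these are plain RSpec predicate matchers: have_one calls has_one?, have_no_four calls has_no_four?, and the send("have_#{element}") calls in the code below build those matcher names dynamically. For instance:

expect(page.navigation).to have_one      # passes if page.navigation.has_one? returns true
expect(page.navigation).to have_no_four  # passes if page.navigation.has_no_four? returns true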

A naive approach might be:

all_navigation_elements = %w{one two three four five}

visit_page('Login').login(user1.username, user1.password)
on_page('Dashboard') do |page|
  user1_should_see = %w{one}
  user1_should_see.each do |element|
    expect(page.navigation).to send("have_#{element}")
  end
  user1_should_not_see = all_navigation_elements - user1_should_see
  user1_should_not_see.each do |element|
    expect(page.navigation).to send("have_no_#{element}")
  end
end

Even before doing the other two, it’s clear that we’re going to want to extract this into a more generalized form.

To build out the design by “programming by wishful thinking”, if we already had the implementation we might call it like this:

all_navigation_elements = %w{one two three four five}

visit_page('Login').login(user1.username, user1.password)
on_page('Dashboard') do |page|
  should_see = %w{one}
  has_these_elements(page.navigation, should_see, all_navigation_elements)
end

visit_page('Login').login(user3.username, user3.password)
on_page('Dashboard') do |page|
  should_see = %w{one two three}
  has_these_elements(page.navigation, should_see, all_navigation_elements)
end

# ditto for user5

And then we can reopen test_commons/page_objects/helpers.rb to add an implementation:

def has_these_elements(page, has_these, all)
  has_these.each do |element|
    expect(page).to send("have_#{element}")
  end
  does_not_have = all - has_these
  does_not_have.each do |element|
    expect(page).to send("have_no_#{element}")
  end
end

And this works. The raw error message (suppose we’re doing this test first so we’ve got the test for user1 but user1 still sees all five items) is unhelpful:

expected #has_no_two? to return true, got false (RSpec::Expectations::ExpectationNotMetError)
./test_commons/page_objects/helpers.rb:28:in `block in has_these_elements'
./test_commons/page_objects/helpers.rb:27:in `each'
./test_commons/page_objects/helpers.rb:27:in `has_these_elements'
./features/step_definitions/navigation_permissions.rb:24:in `block in dashboard'
…

And we could build a custom matcher for it, but in this case, since the error is most likely to show up (if we’re working in Cucumber) after

When user1 logs in
Then user1 should only see its subset of the navigation

seeing expected #has_no_two? after that seems perfectly understandable.

Summing Up So Far

I’m quite liking this so far, and I can easily imagine after I’ve used them for a bit longer bundling the helpers into a gem to make it trivially easy to use them on other projects. And possibly submitting a PR to suggest adding something like page.has_these_elements(subset, superset) into SitePrism, because that feels like a cleaner interface than has_these_elements(page, subset, superset).

Testing with Page Objects: Setup

What Are Page Objects?

Page objects are an organizing tool for UI test suites. They provide a place to identify the route to and elements on a page and to add methods to encapsulate paths through a page. Martin Fowler has a more formal description.

Suppose you’re testing a web app which has a component with a four-page flow, with a known happy path a user would normally work through. Using page objects, you could test that flow as:

def component_one_flow(user, interests, contacts)
  goto_component_one_start
  on_page('ComponentOne::StepOne').register(user)
  on_page('ComponentOne::StepTwo').add_interests(interests)
  on_page('ComponentOne::StepThree').add_contacts(contacts)
  on_page('ComponentOne::StepFour').approve
end

If anything changes on one of those pages, you can make the change inside the page object, and the calling code can remain the same.

For small- to medium-sized web apps this might seem a nice-to-have, but for larger apps where you end up afraid to open features/step_definitions or spec/features because it’s impossible to find anything, this is a very useful pattern to introduce, or to start out with on the next project.

This ran quite long, so I’ve split it into two parts: in this first post I’ll describe setting up a page object framework, and in the second I’ll talk about implementation patterns.

Available Frameworks

In 2013 and 2014 I enjoyed working with Jeff Morgan’s page-object, which runs on watir-webdriver. (His book Cucumber and Cheese was my introduction to page objects.) Jeff Nyman’s Symbiont also runs on watir-webdriver and also looks quite interesting (as does his testing blog). There’s also Nat Ritmeyer’s SitePrism, running on Capybara.

Here’s a login page in all three:

Page-object

class LoginPage
  include PageObject

  page_url "/login"

  text_field(:username, id: 'username')
  text_field(:password, id: 'password')
  button(:login, id: 'login')

  def login_with(username, password)
    self.username = username
    self.password = password
    login
  end
end

SitePrism

class Login < SitePrism::Page
  set_url "/login"

  element :username, "input#login"
  element :password, "input#password"
  element :login, "input[name='commit']"

  def login_with(username, password)
    self.username.set username
    self.password.set password
    login.click
  end
end

Symbiont

class Login
  attach Symbiont

  url_is      "/login"
  title_is    "Login"

  text_field :username, id: 'user'
  text_field :password, id: 'password'
  button     :login,    id: 'submit'

  def login_with(username, password)
    self.username.set username
    self.password.set password
    login.click
  end
end

The similarities are clear: in each case you can identify the URL, the elements, and methods that interact with those elements. And since it is the internal page object names for elements that are used in the methods, if (when) the css changes, the only line that needs to change in the page object is the declaration of the element.

What about differences? Page-object (here) and Symbiont (here) provide page factories, so that in page-object you can write

visit(LoginPage) do |page|
  page.login(username, password)
end

or, even more compactly

visit(LoginPage).login(username, password)

where SitePrism is, somewhat more long-windedly:

page = Login.new
page.load
page.login(username, password)

SitePrism offers, in addition to element, the named concepts of elements (for a collection of rows, say) and section (for a subset of elements on a page, or elements which are common across multiple pages, a navigation bar, say, which you can identify separately in a SitePrism::Section object).
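
For instance (a sketch with made-up selectors, assuming a Navigation < SitePrism::Section class like the navigation section in the next post):

class Entries < SitePrism::Page
  elements :rows, "table tbody tr"               # a collection of matching nodes
  section :navigation, Navigation, ".navigation" # a sub-object with its own elements and methods
end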

The further point that inclined me to try SitePrism (though with a mental note to add a page factory) was that when we were using page-object last year we sometimes needed to write explicit wrappers for waiting code because the implicit when_present wait did not always seem to be reliable, and it looked as though Symbiont wrapped it the same way (here). I was curious to see if SitePrism (or its underlying driver) had a more robust solution.

Setting up SitePrism

So How Does It Handle Waiting?

By default, SitePrism requires explicit waits, so either wait_for_<element> or wait_until_<element>_visible. This has been challenged (issue#41, in which Ritmeyer explains that he likes having more fine-grained control over timing of objects than Capybara’s implicit waits allows), and a compromise has been implemented (issue#43), by which you can configure SitePrism to use Capybara’s implicit waits, so:

SitePrism.configure do |config|
  config.use_implicit_waits = true
end

I’ve been using that, and it’s working so far.

Where to Put Them

On the earlier project we were using page objects from Cucumber, so we put them under features/page_objects. The new project is larger and already has non-test-framework-specific code under both features and spec, some of which is called from the other framework’s directory, so it makes sense to create a common top-level directory for them. I called it test_commons to make it clear at a glance through the top-level directories that it was test-related instead of app-related. Then page_objects goes under that.

The disadvantage of the location is that we will need to load the files in explicitly. So we add this at test_commons/all_page_objects.rb (approach and snippet both from Panayotis Matsinopoulos’s useful post):

page_objects_path = File.expand_path('../page_objects', __FILE__)
Dir.glob(File.join(page_objects_path, '**', '*.rb')).each do |f|
  require f
end

and in features/support/env.rb we add:

require_relative '../../test_commons/all_page_objects'

and we’re away. (If/when we use them from RSpec features as well, we can require all_page_objects in spec_helper.rb too.)

Further Organization

Probably you’ll have some top-level or common pages, like login or dashboard, and others which belong to specific flows and components, like the four pages of “component one” in the first example. A login page could go in the top-level page objects directory:

# test_commons/page_objects/login.rb
module PageObjects
  class Login < SitePrism::Page
    …
  end
end

whereas the four pages of component one could be further namespaced:

# test_commons/page_objects/component_one/step_one.rb
module PageObjects
  module ComponentOne
    class StepOne < SitePrism::Page
      …
    end
  end
end

This becomes essential when you’ve got more than a handful of pages.

Adding a Page Factory for SitePrism

I started by looking at page-object’s implementation (which itself references Alister Scott’s earlier post).

That seemed to do more than I needed right away, so I started with the simpler:

# test_commons/page_objects/helpers.rb
module PageObjects
  def visit_page(page_name, &block)
    Object.const_get("PageObjects::#{page_name}").new.tap do |page|
      page.load
      block.call page if block
    end
  end
end

… though we will need to add World(PageObjects) to our features/support/env.rb to use this from Cucumber (the require got us the page object classes, but not module methods). If we were running from RSpec, to borrow a snippet from Robbie Clutton and Matt Parker’s excellent LA Ruby Conference 2013 talk, we would need:

RSpec.configure do |c|
  c.include PageObjects, type: [:feature, :request]
end

So now we can call visit_page (not visit because that conflicts with a Capybara method), it’ll load the page (using the page object’s set_url value), and if we have passed in a block it’ll yield to the block, and either way it’ll return the page, so we can call methods on the returned page. In other words: visit_page('Login').login(username, password).

We can use the same pattern to build an on_page method, which instead of loading the page asserts that the app is on the page that the test claims it will be on:

def on_page(name, args = {}, &block)
  Object.const_get("PageObjects::#{page_name}").new.tap do |page|
    expect(page).to be_displayed
    block.call page if block
  end
end

Further iteration (including the fact that both page.load and page.displayed? take arguments for things like ids in URLs) results in something like this:

module PageObjects
  def visit_page(name, args = {}, &block)
    build_page(name).tap do |page|
      page.load(args)
      block.call page if block
    end
  end

  def on_page(name, args = {}, &block)
    build_page(name).tap do |page|
      expect(page).to be_displayed(args)
      block.call page if block
    end
  end

  def build_page(name)
    name = name.to_s.camelize if name.is_a? Symbol
    Object.const_get("PageObjects::#{name}").new
  end
end

We haven’t implemented page-object’s if_page yet (don’t assert that you’re on a page and fail if you aren’t, but check if you’re on a page and if you are carry on), but we can add it later if we need it.
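
If we do, following the same pattern it might look something like this (a sketch, untested — the difference from on_page being that it checks displayed? rather than asserting it):

def if_page(name, args = {}, &block)
  page = build_page(name)
  return unless page.displayed?(args)
  block.call page if block
  page
end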

What Would that Failed Assertion Look Like?

Suppose we tried on_page('Dashboard').do_something when we weren’t on the dashboard. What error message would we get?

expected displayed? to return true, got false (RSpec::Expectations::ExpectationNotMetError)
./test_commons/page_objects/helpers.rb:11:in `block in on_page'

We know, because SitePrism has a #current_path method, what page we’re actually on. A custom matcher to produce a more informative message seems a good idea. Between the rspec-expectations docs and Daniel Chang’s exploration, let’s try this:

# spec/support/matchers/be_displayed.rb
RSpec::Matchers.define :be_displayed do |args|
  match do |actual|
    actual.displayed?(args)
  end

  failure_message_for_should do |actual|
    expected = actual.class.to_s.sub(/PageObjects::/, '')
    expected += " (args: #{args})" if args.count > 0
    "expected to be on page '#{expected}', but was on #{actual.current_path}"
  end
end

which we’d get for free from RSpec, and from Cucumber we can get by adding to features/support/env.rb:

require_relative '../../spec/support/matchers/be_displayed'

The first time I ran this, because I wasn’t passing in arguments to SitePrism’s #displayed? method, I hit a weird error:

comparison of Float with nil failed (ArgumentError)
./spec/support/matchers/be_displayed.rb:3:in `block (2 levels) in <top (required)>'

… emending the helper methods to make sure that I included an empty hash even if no arguments were passed in (part of the final version of helpers.rb above) fixed that. Now, with the new matcher working, if we run on_page('Dashboard').do_something when we’re actually at the path /secondary-control-room, we get the much more useful error

expected to be on page 'Dashboard', but was on '/secondary-control-room' (RSpec::Expectations::ExpectationNotMetError)
./test_commons/page_objects/helpers.rb:11:in `block in on_page'

… note that if there had been expected arguments, we would have printed those out too (expected to be on page 'Dashboard' (args: {id: 42})).

I built the custom matcher in RSpec because that’s what I tend to use, but if Minitest is more your thing, you could add a custom matcher by reopening Minitest::Assertions. (Example here.)
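
For example (a sketch — assert_displayed is a made-up name, mirroring the failure message of the RSpec matcher above):

module Minitest
  module Assertions
    def assert_displayed(page, args = {})
      expected = page.class.to_s.sub(/PageObjects::/, '')
      expected += " (args: #{args})" if args.any?
      assert page.displayed?(args),
             "expected to be on page '#{expected}', but was on '#{page.current_path}'"
    end
  end
end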

Setup Complete

We’ll break now that the page object framework is set up, and discuss further implementation patterns in the next post.

Trailing Whitespace in Commits

The Problem

You’re opening a pull request to do a code review. There are 34 changed files. You find this distressing, but you dig in and find only three changed files contain changes in logic: the rest are removing whitespace. You examine the three easily and go on to what’s next.

But suppose you had seen there were 34 changed files and cringed and gone on to another task instead? It would be better if there were a way to quickly distinguish real changes from formatting changes. Better still if we could prevent commits that add whitespace we’re only going to take out later in the first place.

Hiding but Not Solving the Problem

git diff, of course, shows you the differences between two code trees.

git diff -w / git diff --ignore-all-space shows you the differences excluding whitespace differences.

You can get GitHub to ignore whitespace in its diffs by appending ?w=1 to the URL (say, of the pull request that you’re reviewing).

This solves the cognitive load problem – being able to tell at a glance that 31 of the 34 changed files are not significant is a huge plus – but all of those extra files have still been changed. Given a small team of great developers, no problem, but if you’ve got multiple people at multiple skill levels working on multiple branches, merges become more complicated and merge conflicts more likely than they need to be.

Solving the Problem: Going Forward

Any editor any developer on your team is likely to be using (Emacs, Vim, RubyMine, Sublime Text, TextMate) already has the ability to automatically remove trailing whitespace before saving a file. Make sure they all have it switched on. For Emacs, just add this to your config:

(add-hook 'before-save-hook 'delete-trailing-whitespace)

For other editors, there are instructions here.

Solving the Problem: Everything before That

If the project already has three years of commits before you switched that on, there will still be a problem with already-committed trailing whitespace.

What I would do in that case is find the least disruptive moment, grab something like rstrip (which is more flexible than one-liners from the command line because it starts by setting up a config file so you can decide which file extensions are processed), run it on the whole project, and commit the result. This will make you very unpopular with people on outlying branches, hence the importance of finding the least disruptive moment, but after that, you’ve fixed the trailing whitespace problem for good.

Teaser for Another Problem

Trailing whitespace is a single instance of the larger problem of automatically enforcing coding style conventions.

Back in Java this was a solved problem: write code with Eclipse, use the Checkstyle and Jalopy plugins (or their IntelliJ equivalents): Checkstyle creates warnings whenever a coding style convention is breached, and can also stop you committing code before resolving those warnings. Invoking Jalopy on Checkstyled code can automatically reformat it to follow the Checkstyle conventions to make it committable.

In Ruby? Thoughtbot has released hound which automatically comments on pull requests with Ruby / Javascript / Coffeescript / SCSS style coding style violations, based on Thoughtbot’s style guides.

Bozhidar Batsov (bbatsov) has a Ruby static code analyzer called rubocop which can be run locally and will produce warnings based on his community-driven Ruby style guide.

Yorick Peterse has another Ruby static code analyzer called ruby-lint.

Investigating all these is a large enough topic for another post.

Notes and Quotes from Conference Talks

What did we do before Confreaks let us catch up on all the conference talks? Here are notes on some that resonated with me recently.


TDD for Your Soul: Virtue and Web Development with Abraham Sangha at Madison+ Ruby

… more philosophy than tech, but thought-provoking.

citing Alasdair McIntyre, “After Virtue”:

“Who am I?

Who ought I to become?

How ought I to get there?”

citing Ralph Ellison:

“The blues is an impulse to keep the painful details and episodes of a brutal experience alive in one’s aching consciousness, to finger its jagged grain, and to transcend it, not by the consolation of philosophy but by squeezing from it a near-tragic, near-comic lyricism.”

Abraham Sangha:

“We’re inviting criticism of ourselves, we’re inviting evaluation of our weaknesses, by TDDing our soul, by writing these tests, seeing where we fail, and trying to implement habits to address these failures: that can be a shameful process. … If you don’t believe you’re good enough, you’re stuck, then why would you invite more pain into your life through this process, so why not just skip it. … But there’s a possibility that pursuing this will actually give us a sense of buoyancy.”


Alchemy and the Art of Software Development with Coraline Ada Ehmke at Madison+ Ruby

Going back in pattern languages before Christopher Alexander to Gottfried Wilhelm von Leibniz (1646-1716): “every complex idea is composed of sets of simpler ideas.”

“alchemy is about transforming base matter, like the stuff that we’re made of, into something more closely approaching divine perfection. That ‘lead into gold’ nonsense was actually part of the pitch that alchemists made to the VCs of their era. They called them royalty, I’m not quite sure why. And this idea that ‘I can take base metal and transmute it into gold? Here: I’ll give you all this money for your metaphysical experiments. I don’t care.’ So they were pretty smart.”

“The Divine Pymander, ascribed to Hermes Mercurius Trismegistus. … It contains seventeen tracts, which talk about things like divinity, humanity, idealism, even monads.”

“we impose our will on the universe, we create a structure that we want to impose on chaos, and the manifestation of that begins in harmony, but slides towards disharmony, because every object in every system is corruptible. … the code is corruptible.”

“all that is apparent, is what was generated or made. … the system does not contain information about the ideals that led to its creation. Unless we are very deliberate about recording our intention and our design, that information is lost, and all we are left with is a system that we don’t understand any more.”

“Christopher Alexander the architect said that a builder should start with the roughest sketch of a building, and by processes that are known to the brain through the pattern language, execute the construction of the building. All things that are are but imitations of truth. The systems we build are reflections of the world. Software system is not and cannot be a single source of truth.”

“Really, alchemy and software development are about identifying ideals, identifying particulars, creating taxonomies, studying them, creating a system for classifying every single thing in a limited or expansive universe.”

“The Divine Pymander ends with this: ‘The Image shall become thy Guide, because Sight hath this peculiar charm, it holdeth fast and draweth unto it those who succeed in opening their eyes.’ I would translate this as ‘The metaphors guide us. Information is there, and ideas want to be recognized.’ I think that the metaphor of alchemy holds true with what we do. Just like alchemy, software development is an inquiry into the essential nature of reality. We break it down, we solve et coagula, we break down complex things into simpler things, we recombine them in novel ways, and we produce an effect on the world, on ourselves.”

“Let’s … dive into disciplines that we have no idea what they even mean, explore them, mine them, find tenets and metaphors, tools that we can use in problem solving, that we can apply to enrich our art, our science, our own magnum opus. Maybe when we do that, we can find that the raw stuff of digital creation can be transmuted into something that more closely approaches perfection.”


Aesthetics and the Evolution of Code: Coraline Ada Ehmke, Nickle City Ruby 2014

Aesthetics of code: correctness, performance, conciseness, readability.

So how do we measure elegance? How about a graph with four lines/axes spreading from the origin, correctness “north”, performance “south”, conciseness “west”, readability “east”. If you can measure and grade those four aesthetics on a numeric scale, then the most elegant code will cover the largest space on the graph.

Why does it matter? Einstein said:

“After a certain level of technological skill is achieved, science and art tend to coalesce in aesthetic plasticity and form. The greatest scientists are artists as well.”


Eric Stiens’s Nickel City Ruby Conference 2014 talk “How to be an Awesome Junior Developer (and why to hire them)”

On Mentoring:

Don’t hire a junior developer you can’t mentor. It’s not fair to them. It’s not fair to your team. Everybody loses.


Sarah Mei’s Nickel City Ruby Conference 2014 talk “Keynote: Multitudes”

Should you always use attr_reader to access an instance variable rather than accessing it directly, as a good object-oriented principle, because by only accessing it by sending a message you have created a seam which makes it easier to change anything about the implementation later? A good application developer (e.g. Sandi Metz) would very likely say yes. A good library developer (e.g. Jim Weirich) might say no, because adding attr_reader to a class makes data manipulation public, and once it’s public it has to be supported for ever, because people will use it. (And if you do change it, and you’re using semantic versioning, you have to change the major version.)

“So people who write gems have developed a system where they have a very minimal interface, with a very rich set of internal abstractions, which they can use while providing this very minimal external interface.

“So these are just two different approaches to programming [application style at one end, gem/library style at the other], and what we’re starting to see here is there is a spectrum of rubyists, and there is a spectrum of projects: people will write different code at different points of the spectrum at different times, and what the spectrum is measuring is the surface area of our interface. And it seems like a fairly simple distinction but it does produce a huge difference in code structure. And what it means is that sometimes someone who is good at one side of this spectrum will not automatically be good at the other side, right away. … Why does this matter? … Having these endpoints helps us define what else there is. For example, there’s a middle here, and that middle is suddenly quite obvious, actually, and that’s external APIs on Rails apps. Many Rails apps now need some kind of programmatic interface that is versioned, and minimal, because once it’s out there, and it’s got users, you’re stuck supporting it, forever. … And the reason people struggle with external APIs for Rails apps is that it is partially application development and it is partially gem-style development, and there aren’t very many people that are good at both.”

Exciting that Sarah Mei and Sandi Metz are writing a book: Practical Rails Programming.

RubyFringe, Day Two

Geoffrey Grosenbach spoke from his university background in philosophy and interest in music, and wondered interestingly if developers are less aware that modern methodologies derive from the scientific method because they’re stuck in the implementation phase, without necessarily being involved in coming up with the original ideas or validating the experiments. The link from Spinoza to cloud computing was a long but entertaining stretch. Invoking the Sapir-Whorf hypothesis as a good reason to do a desktop app instead of a web app every so often (or vice versa), for the wider perspective, struck home.

John Lam spoke of how you gravitate to the same problems over and over, drawing lines from IronRuby to his early days metaprogramming in assembler on a Commodore 64 and building bridges between components, and of how you can change your mind, remembering his early paper on how dynamic languages suck. He explained his move to Microsoft on the grounds that he really wanted to build stuff that people would use: you can build stuff in open source, but brilliance alone won’t guarantee that people use it.

Hampton Catlin spoke about Jabl, which he cheerfully billed “the language you will hate”, his attempt to fix javascript (which he sees as a good if verbose general-purpose language but a terrible browser language that needs frameworks to fix it). Jabl will compile into javascript, is backed by JQuery and provides a compact DSL for client-side DOM manipulation. It looks neat and quite compact (a good deal more compact than the equivalent JQuery, and a great deal more compact than the equivalent raw javascript), and as soon as there’s a compiler it will be well worth checking out.

Giles Bowkett (note from 2014: if you haven’t already seen it, go watch the video, which is amazing) gave a high-energy performance, demonstrating his Archaeopteryx MIDI software and making an impassioned plea to do what you love and build what you want. Make profit by providing a superior service at the same price point as the competitors: we’re still banging the drum of making your offering less painful than the competition. Also, Leonardo da Vinci was an artist who didn’t ship and so, by “release early, release often” standards, a failure, there were several images of Cthulhu, references to using a lambda to determine which lambda to use (a meta-strategy pattern), and Bowkett’s thought that it was irresponsible in Ruby not to use the power the language gives you. He added that Archaeopteryx may be insane but it’s also great, and “insanely great” he can live with, and that while he may not be able to say (as Jay Phillips used to of Adhearsion) “my career is Archaeopteryx”, being able to say “my career includes Archaeopteryx” is still very good.

Damien Katz spoke about the long and hard road that led to him being the guy building the cool stuff, in his case CouchDB. It felt in part like a counterpoint to Bowkett’s story: took longer to pay off, but he did what he really wanted to do on the chance that someone might recognize it, and pay off it finally did. Katz also noted that while it was easier to get something working in Java than in Erlang, it was easier to get it working reliably in Erlang than in Java.

Reginald Braithwaite spoke about rewrite, which enables rewriting Ruby code without opening and modifying core classes. This makes it easier to scope changes to specific parts of your code. Reg likes the power of opening classes, but doesn’t think it sustainable. If you open a class in a gem, all the downstream users will be affected, whether they want the change or not. Reg also mused on adverbs, like .andand. or .not., modifiers that affect the verbs, the methods, instead of the more usual noun/object orientation. (Reg has since blogged a riff between his talk and Giles’s.)

Tom Preston-Werner spoke about conceptual algorithms, not applied to coding patterns but to higher-level thinking patterns. He gave an extended example of the full circle of the scientific method and its effectiveness in debugging, noting the usefulness of Evan Weaver’s bleak_house plugin in tracing memory leaks. He previewed the upcoming git feature gist, basically pasties backed by GitHub. He discussed some other algorithms more briefly. Memory Initialization starts with disregarding everything you think you know about a problem and working from first principles (it worked for George Dantzig). Iteration is exemplified by James Dyson’s new vacuum cleaner design, which after 5,126 failures became a resounding success. Dyson remarked, taking us right back to Nick Sieger’s jazzers, “Making mistakes is the most important thing you can do”. Breadth-First Search relies on the fact that there are over 2500 computer languages, and learning some others just for the different perspective is useful. (He shared a tip from his time learning Erlang and being weirded out by the syntax: just say to yourself, over and over in an affirming manner, “the syntax is OK”. And that helps you get past it and get on to the good stuff.) Imagining the Ideal Solution feels like a cousin of TDD: if you’re not sure how to implement the solution, write out your best case of what the god config file (say) or the Jabl syntax (say) would be, and then proceed to implement that. And Dedicated Thinking Time was an unexpected reflection of a point I first came across in Robertson Davies: it’s important to set aside time for just thinking about things. Sure it’s simple, but it’s sometimes ridiculously hard to implement.

Blake Mizerany spoke about Sinatra, one of the Ruby web app microframeworks, built (in line with Jeremy McAnally’s thoughts yesterday) because Rails was too slow for his needs. He noted some apps in Sinatra: git-wiki (a wiki served by Sinatra and backed by git), down4.net, and the admin interfaces of heroku. He also suggested that for RESTful resources, rest-client (which he called “the reverse Sinatra”) and Sinatra was all you needed, making ActiveResource look like overkill and curl look cumbersome. To complaints that Sinatra wasn’t MVC, he noted that he thought it was, actually, but in any case, insisting on a web framework being MVC felt like over-eager application of patterns instead of starting with the problem. Asked what Sinatra was tailored for, he said back end services or other small bits of functionality. If you’ve got more than a few screens of Sinatra code, it’s probably time to move to another framework.

Leila Boujnane spoke about the importance of being good, and being useful, and making the world a better place by making something that people want. Don’t start with a business model: start by building something useful, and build it well, because a crappy product upsets both the customers and the people working on it. Making the customers feel good about the product is not only good in itself, it’s the least limiting approach, because if your product doesn’t do that on its own you need to cheat or fool people into thinking it’s terrific. In a way, we’re back where the conference started: find a pain point and build something useful to fix it, to everyone’s benefit.

Oh, and the Toronto Erlang group was founded over lunch.

High praise to the folks at Unspace for making the conference happen. It was thought-provoking and loads of fun.

Joey deVilla’s more expansive account: one, two, three.

RubyFringe, Day One

Jay Phillips talked about Adhearsion, his framework for VoIP integration, building on top of and simplifying the use of Asterisk. He noted that it can be surprisingly profitable to find a pain point a bit off the beaten track and fix it. This chimed well with Zed Shaw’s remark from QCon London 2007 that if you want to succeed beyond your wildest dreams, you should find a problem and solve it in a way which is less painful than any of the competing alternatives.

Dan Grigsby noted that entrepreneurship was easier for a developer than perhaps it had ever been, because the sweet spot for team size was about 3. If you’re a really good programmer, all you need on the code side is one or two other really good programmers. This matches Pete McBreen’s point in Software Craftsmanship, that a smaller team of really good people will probably be better and more productive than a larger team. The trick that corporations don’t seem to have solved yet is finding that small great team in the first place.

Grigsby added that a more pragmatic test-driven approach to the developer’s dilemma of picking the winners among all the possible project ideas is to build a whole bunch of narrowly-focussed projects, send them out, measure them to within an inch of their lives, and quickly kill off the failing ones. For a similar amount of effort as honing one or two projects for a much longer period before sending them out, you’re left with a couple of projects that are less well developed but you already know they are successful.

Grigsby’s engaging talk continued into more explicitly market-focused directions, discussing finding hacks in the market for the disproportionate reward, suggesting that there are marketing techniques that match Ruby’s better-known programming claim to 10x better productivity, and that exploiting non-obvious relationships was among them.

Tobias Lütke spoke about memcached.

Yehuda Katz spoke about several neat projects, from Merb and DataMapper (coming to 1.0 this summer so almost no longer Edge-y) to sake (for system-wide rather than project-specific rake tasks), thor (scripting in Ruby), YARD (a neat-looking better Ruby documentation tool), and johnson (a ruby-javascript bridge).

Luke Francl spoke on testing being over-rated. He made clear that testing is great, it’s just that there are a lot of other things we need to do (code inspection, usability testing, manual testing), some of which will find different bugs and none of which will find all the bugs. As he pointed out, you can have the most complete set of unit tests imaginable, and it won’t tell you if your app sucks. Sitting and watching a user interacting with the app may be the only way of being sure of that. Francl also pointed out that in addition to writing code and writing tests, developers are good at criticizing the code of others, and the best measure of code quality may be not rcov stats but the number of WTFs per minute in a code review. And, as a side benefit, knowing that your code will be subject to a code review will probably make you tighten up your code in anticipation. (Francl’s paper was also notable for following the Presentation Zen advice of having clear uncluttered slides and a handout that was an efficient text summary of the presentation that bore no relation to the slides at all.)

ETA from RubyConf 2014: see also Paul Gross’s Testing Isn’t Enough: Fighting Bugs with Hacks.

Nick Sieger spoke about Jazzers and Programmers. Go read his summary and savour the quotations (from a different field but still deeply appropriate) from great jazz musicians. “Learn the changes, then forget them” (Charlie Parker). Or the idea of The Real Book (the body of tunes that all jazz musicians learned and created a shared vocabulary that allowed musicians to play together on their first meeting) as the GoF for jazzers in the 70s.

Obie Fernandez spoke about marketing and practical details of contracts. Not only about accepting contracts but about rejecting them, noting that a key acceptance criterion should be if you’re going to want to show off the project when it’s done and keep it in your portfolio. Following on from patterns and The Real Book, he noted the importance when setting up a consultancy of defining projects and of having names for them, and of writing down your client criteria and acceptance criteria and keeping them constant. Noted Secrets of Power Negotiating, Predictably Irrational, and Seth Godin’s Purple Cow as books to check out.

Matt Todd spoke about the importance of just going ahead and making mistakes and learning from them. (Going back to one of Sieger’s quotations, as Ornette Coleman put it, “It was when I found out I could make mistakes I knew I was on to something.”)

Jeremy McAnally spoke of how we should resist the urge to make Rails do everything, even outside its comfort zone where it would be a lot simpler and more maintainable to, say, just write some Ruby code and hook it up with Rack. How Rails is designed for database-backed web apps, and taking it outside of its 80% comfort zone, while perfectly possible, may not be the best solution; if you find yourself in a place where the joy:pain ratio has flipped, or where you’re writing more spaghetti code to fit the solution into Rails than you would to write the solution on its own, it’s probably time to switch.

Zed Shaw briefly presented some dangerous ideas and then went on to an impromptu music-making session. He left us with the thought that while he was done with Ruby, he was never going to stop coding, because code was the only artform that you could create that other people could then play with.

Joey deVilla’s more expansive account: one, two.