List Tests Relying on Another Test: Another Approach

Problem Statement

See the previous post. TL;DR: we want a mechanism that lets us link a test in one part of a codebase to other tests that rely on the assumption that initial test verifies, say that an API call response will include certain fields. If one team wants to make a change to that response, we want to make it clear what else might break if they make that change, and who they should probably talk with to build a shared understanding of what should be done next.

In the previous post we looked at doing this with def requirement and def relies_on methods: here we’re going to look instead at doing it with comment annotations, so:

# requirement anchor
# @REQUIREMENT: my_hounds includes key :breeding_stock

# relies_on anchor in same repo
# @RELIES_ON: my_hounds includes key :breeding_stock

# relies_on anchor in another repo
# @RELIES_ON: <repo:hounds_service>:my_hounds includes key :breeding_stock

Rewriting the Example

Let’s rewrite the example. lib/service.rb stays the same:

class Service

  def my_hounds
    {
      breeding_stock: "Spartan",
      traits: {
        flewed_as: "Spartan",
        sanded_as: "Spartan"
      }
    }
  end
end

spec/with_requirement_spec.rb changes its requirement and relies_on from methods to comment annotations. This already gives us more flexibility: we can put them on individual examples, not just within example groups:

require "service"

RSpec.describe "with requirement" do

  subject { Service.new.my_hounds }

  context "my_hounds" do
    describe "keys in response" do
      # @REQUIREMENT: my_hounds includes :breeding_stock
      it "includes :breeding_stock" do
        expect(subject).to include :breeding_stock
      end

      # @REQUIREMENT: my_hounds includes :traits
      it "includes :traits" do
        expect(subject).to include :traits
      end
    end
  end

  context "using my_hounds" do
    describe "for one thing" do
      # @RELIES_ON: my_hounds includes :breeding_stock
      it "does something with :breeding_stock" do
        expect(subject[:breeding_stock]).to eq("Spartan")
      end

      # @RELIES_ON: my_hounds includes :traits
      it "does something with :traits" do
        expect(subject[:traits][:flewed_as]).to eq("Spartan")
      end
    end

    describe "for another thing" do
      # @RELIES_ON: my_hounds includes :breeding_stock
      it "checks something else" do
        expect(subject[:breeding_stock]).not_to eq("Athenian")
      end
    end
  end
end

As before, this passes, and can be made to fail easily by commenting out line 5 of lib/service.rb.

We can also create a related repo, with a spec referring back to the requirement in the first repo:

RSpec.describe "relies on related repo" do
  describe "my_hounds from relies_on_example" do
    # @RELIES_ON: <repo:relies_on_example_two>:my_hounds includes :breeding_stock
    it "does something" do
      expect(true).to be true
    end
  end
end

Building the Relies_on Lookup

It still makes sense to do this in before(:suite), and to store the results in an added config.relies_on setting. So, as before, add to spec/spec_helper.rb:

config.add_setting :relies_on

config.before(:suite) do
  config.relies_on = reckon_relies_on
end

But instead of reading ObjectSpace, reckon_relies_on can grep. Running

def reckon_relies_on
  # Open3 is stdlib but needs require "open3" at the top of spec/spec_helper.rb
  stdout, _, _ = Open3.capture3("grep", "-nr", "# @RELIES_ON", "spec")
  binding.pry
end

and looking at stdout gives us:

"spec/spec_helper.rb:110:  stdout, _, _ = Open3.capture3(\"grep\", \"-nr\", \"# @RELIES_ON\", \".\")\n
spec/with_requirement_spec.rb:23:      # @RELIES_ON: my_hounds includes :breeding_stock\n
spec/with_requirement_spec.rb:28:      # @RELIES_ON: my_hounds includes :traits\n
spec/with_requirement_spec.rb:35:      # @RELIES_ON: my_hounds includes :breeding_stock\n"

And by experimentation, and reusing / rewriting some of the grepping code from the last example, we can end up with:

def repos
  [
    # current repo
    {
      repo_url: "https://github.com/smiller/relies_on_example_two",
      source_directory: "./",
      relies_on_label: "# @RELIES_ON: "
    },
    # related repos
    {
      repo_url: "https://github.com/smiller/relies_on_example_two_related_repo",
      source_directory: "./relies_on_example_two_related_repo/",
      relies_on_label: "# @RELIES_ON: <repo:relies_on_example_two>:"
    }
  ]
end

def reckon_relies_on
  relies_on = Hash.new { |hash, key| hash[key] = [] }
  FileUtils.rm_r("related_repos") if Dir.exist?("related_repos")

  repo = repos[0]
  relies_on = reckon_relies_on_for_one_repo(repo[:repo_url], repo[:source_directory], repo[:relies_on_label], relies_on)

  FileUtils.mkdir("related_repos")
  FileUtils.cd("related_repos")
  repos[1..].each do |repo|
    system("git clone #{repo[:repo_url]}")
    relies_on = reckon_relies_on_for_one_repo(repo[:repo_url], repo[:source_directory], repo[:relies_on_label], relies_on)
  end
  relies_on
end

def reckon_relies_on_for_one_repo(repo_url, source_directory, relies_on_label, relies_on)
  stdout, _, _ = Open3.capture3("grep", "-nr", relies_on_label, "#{source_directory}spec")
  lines = stdout.split("\n")
  lines.each do |line|
    file, key = line.split(/\:\s+#{relies_on_label}/)
    if file.include?("_spec.rb:")
      matches = /#{source_directory}([\w\/\.]+):(\d+)/.match(file)
      repo_link = "#{repo_url}/blob/main/#{matches[1]}#L#{matches[2]}"
      relies_on[key] << repo_link
    end
  end
  relies_on
end

After which, RSpec.configuration.relies_on is set to:

{"my_hounds includes :breeding_stock" =>
   ["https://github.com/smiller/relies_on_example_two/blob/main/spec/with_requirement_spec.rb#L23",
    "https://github.com/smiller/relies_on_example_two/blob/main/spec/with_requirement_spec.rb#L35",
    "https://github.com/smiller/relies_on_example_two_related_repo/blob/main/spec/relies_on_related_repo_spec.rb#L3"],
 "my_hounds includes :traits" =>
   ["https://github.com/smiller/relies_on_example_two/blob/main/spec/with_requirement_spec.rb#L28"]}

… which, checking against the github links, is what we expect it to be. So the relies_on lookup is now done.

Finding the Requirement

If we comment out line 5 of lib/service.rb to make the specs fail, we see that while it was simpler to build the relies_on lookup in this version, it’s going to be harder to get the requirement programmatically. When it was a method it was in scope so we could just call it to get the result, but now that it’s a comment annotation, we’ll have to work it out.

The simplest solution is a cheat with a manual step, but we’ll put it in to see the rest of the code work. We introduced a custom matcher in the previous post that takes the requirement as the argument. Let’s reintroduce it here, as spec/support/matchers.rb (also require 'support/matchers' in spec/spec_helper.rb so that it is accessible):

RSpec::Matchers.define :include_key do |key, requirement|
  match { |hash| hash.include?(key) }

  failure_message do
    "expected #{actual.keys} to include key #{key}, but it didn't\n- #{actual}\n#{relies_on_message(requirement)}"
  end

  failure_message_when_negated do
    "expected #{actual.keys} not to include key #{key}, but it did\n- #{actual}\n#{relies_on_message(requirement)}"
  end
end

Since that uses the relies_on_message(requirement) method, let’s add that back into spec/spec_helper.rb too:

def relies_on_message(requirement)
  relies_on = RSpec.configuration.relies_on.fetch(requirement, [])
  if relies_on.any?
    "\nOther specs relying on this: \n- #{relies_on.join("\n- ")}"
  else
    ""
  end
end

Now, since we’re explicitly (and only) calling the new include_key matcher when we know we’ve got a requirement, and since we can see from the annotation above the method what the requirement is:

      # @REQUIREMENT: my_hounds includes :breeding_stock
      it "includes :breeding_stock" do
        expect(subject).to include :breeding_stock
      end

we could change line 11 of with_requirement_spec.rb from

        expect(subject).to include :breeding_stock

to

        expect(subject).to include_key(:breeding_stock, "my_hounds includes :breeding_stock")

copying the requirement from line 9. This is a manual step and a duplication and we’d very much prefer to get rid of it, but if we try that now and run the specs, we do get the expected error message with related github links:

  1) with requirement my_hounds keys in response includes :breeding_stock
     Failure/Error: expect(subject).to include_key(:breeding_stock, "my_hounds includes :breeding_stock")

       expected [:traits] to include key breeding_stock, but it didn't
       - {:traits=>{:flewed_as=>"Spartan", :sanded_as=>"Spartan"}}

       Other specs relying on this:
       - "https://github.com/smiller/relies_on_example_two/blob/main/spec/with_requirement_spec.rb#L23"
       - "https://github.com/smiller/relies_on_example_two/blob/main/spec/with_requirement_spec.rb#L35"
       - "https://github.com/smiller/relies_on_example_two_related_repo/blob/main/spec/relies_on_related_repo_spec.rb#L3"
     # ./spec/with_requirement_spec.rb:11:in `block (4 levels) in <top (required)>'

(The quotation marks aren't in the actual output: I've added them because without them WordPress drops the line number suffix (e.g. #L23) from each link, so it just goes to the file; with them, clicking the links above takes you to the file and line number.)

Changes from this section are here.

Finding the Requirement without Cheating

It was good to see everything working, but it feels wrong to be manually copying a string when it’s two lines away. Can we do better? Let’s see.

If we put back our binding.pry and inspect the example, we see it gives us the file and the line number where the example begins:

self.inspect
=> "#<RSpec::ExampleGroups::WithRequirement::MyHounds::KeysInResponse \"includes :breeding_stock\" (./spec/with_requirement_spec.rb:10)>"

We can regexp to get out the file and line number from that:

matches = /.+\((.+):(\d+)\).+/.match(self.inspect)
=> #<MatchData
 "#<RSpec::ExampleGroups::WithRequirement::MyHounds::KeysInResponse \"includes :breeding_stock\" (./spec/with_requirement_spec.rb:10)>"
 1:"./spec/with_requirement_spec.rb"
 2:"10">

This feels potentially fragile and accidental, but let’s run with it for now. Now that we’ve got the file, we can check for # @REQUIREMENT: comments within it:

stdout, _, _ = Open3.capture3("grep", "-nr", "# @REQUIREMENT: ", matches[1])
stdout
=> "./spec/with_requirement_spec.rb:9:      # @REQUIREMENT: my_hounds includes :breeding_stock\n./spec/with_requirement_spec.rb:15:      # @REQUIREMENT: my_hounds includes :traits\n"

And if the requirement is at the example level, we know its line_number will be matches[2].to_i - 1 => 9, so we can iterate over the requirement lines looking for that:

  lines = stdout.split("\n")
  lines.each do |line|
    line_matches = /#{matches[1]}:(\d+).+# @REQUIREMENT: (.+)/.match(line)
    if line_matches[1].to_i == line_number
      puts "REQUIREMENT IS: #{line_matches[2]}"
    end
  end

So we can create a reckon_requirement(example) method in spec/spec_helper.rb:

def reckon_requirement(example)
  example_matches = /.+\((.+):(\d+)\).+/.match(example.inspect)
  line_number = example_matches[2].to_i - 1
  stdout, _, _ = Open3.capture3("grep", "-nr", "# @REQUIREMENT: ", example_matches[1])
  lines = stdout.split("\n")
  lines.each do |line|
    line_matches = /#{example_matches[1]}:(\d+).+# @REQUIREMENT: (.+)/.match(line)
    if line_matches[1].to_i == line_number
      return line_matches[2]
    end
  end
end

And call it instead where we’re passing in the manually-typed requirement in line 11 of with_requirement_spec.rb:

        expect(subject).to include_key(:breeding_stock, reckon_requirement(self))

Then, in this case, because the comment annotation is on the specific example, it works, and we get the expected “Other specs relying on this: …” error message without manually copying the requirement:

  1) with requirement my_hounds keys in response includes :breeding_stock
     Failure/Error: expect(subject).to include_key(:breeding_stock, reckon_requirement(self))

       expected [:traits] to include key breeding_stock, but it didn't
       - {:traits=>{:flewed_as=>"Spartan", :sanded_as=>"Spartan"}}

       Other specs relying on this:
       - "https://github.com/smiller/relies_on_example_two/blob/main/spec/with_requirement_spec.rb#L23"
       - "https://github.com/smiller/relies_on_example_two/blob/main/spec/with_requirement_spec.rb#L35"
       - "https://github.com/smiller/relies_on_example_two_related_repo/blob/main/spec/relies_on_related_repo_spec.rb#L3"
     # ./spec/with_requirement_spec.rb:11:in `block (4 levels) in <top (required)>'

This does depend on the assumption that the requirement comment annotation will be at the example level, not on an example group level, which is not a restriction we had with the previous solution. But it’s probably a reasonable assumption. And if it ever isn’t, we can either dig deeper into a more complex automated solution or pass in a manually-copied string.

(But if we do find ourselves needing to pass in manually-copied strings, we should at least write a spec that gathers all the requirements from the comment annotations, gathers all the explicit requirement strings passed into custom matchers, and verifies that every manually-passed string is a valid requirement from the annotation list; see the sketch below.)
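For instance (a rough sketch, not in the example repo, assuming the # @REQUIREMENT: annotation format and the include_key matcher from above; the file name is invented):

# spec/requirement_strings_spec.rb (hypothetical)
RSpec.describe "requirement strings passed to custom matchers" do
  let(:spec_sources) do
    # read every other spec file, skipping this one so its own regexes don't match themselves
    Dir["spec/**/*_spec.rb"].reject { |f| f.end_with?(File.basename(__FILE__)) }
                            .map { |f| File.read(f) }
  end

  let(:annotated_requirements) do
    spec_sources.flat_map { |src| src.scan(/# @REQUIREMENT: (.+)/).flatten }.map(&:strip)
  end

  let(:manually_passed_requirements) do
    # only catches literal strings, which is exactly the manual-copy case we worry about
    spec_sources.flat_map { |src| src.scan(/include_key\([^,]+,\s*"([^"]+)"/).flatten }
  end

  it "only uses requirement strings that exist as comment annotations" do
    manually_passed_requirements.each do |requirement|
      expect(annotated_requirements).to include(requirement)
    end
  end
end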

But if we’re good with the restriction that requirement comment annotation is at the specific example level, this now works automatically. Changes from this section are here.

Next-Day Improvement

Having just watched Noel Rappin’s RailsConf 2023 talk, I was reminded that Ruby gives us __FILE__ and __LINE__ to read the current file name and line number, which feels better and more direct than using a regexp as we did above.

Since reckon_requirement only needs the file name and line number, let's change it from reckon_requirement(example) to reckon_requirement(file_name, line_number). In fact, we can pass in the line number of the requirement itself (in our case __LINE__ - 2), which also gives us the flexibility to reference a requirement on an example group rather than only the one on the example, something we didn't have before. And in case we make a mistake counting line numbers, let's raise an error in reckon_requirement if it doesn't find a requirement at the specified line number.

We change the call in the spec from:

      # @REQUIREMENT: my_hounds includes :breeding_stock
      it "includes :breeding_stock" do
        expect(subject).to include_key(:breeding_stock, reckon_requirement(self))
      end

to

      # @REQUIREMENT: my_hounds includes :breeding_stock
      it "includes :breeding_stock" do
        expect(subject).to include_key(:breeding_stock, reckon_requirement(__FILE__, __LINE__ - 2)) 
        # __LINE__ is 11, and the requirement is on line 9, hence __LINE__ - 2
      end

and we change reckon_requirement in spec_helper.rb to:

def reckon_requirement(full_path_file_name, line_number)
  # `pwd` ends with a newline, so its length also covers the "/" after the project root
  file_name = "./#{full_path_file_name[`pwd`.length..]}"
  stdout, _, _ = Open3.capture3("grep", "-nr", "# @REQUIREMENT: ", file_name)
  lines = stdout.split("\n")
  lines.each do |line|
    matches = /#{file_name}:(\d+).+# @REQUIREMENT: (.+)/.match(line)
    if matches[1].to_i == line_number
      return matches[2]
    end
  end
  raise "requirement expected but not found at file #{file_name} line #{line_number}"
end

We can change the line_number we pass in at line 11 to __LINE__ - 3 (line 8, the line above the requirement) to verify we get the error with a usefully specific message:

Failures:

  1) with requirement my_hounds keys in response includes :breeding_stock
     Failure/Error: raise "requirement expected but not found at file #{file_name} line #{line_number}"

     RuntimeError:
       requirement expected but not found at file ./spec/with_requirement_spec.rb line 8
     # ./spec/spec_helper.rb:177:in `reckon_requirement'
     # ./spec/with_requirement_spec.rb:11:in `block (4 levels) in <top (required)>'

From requirement expected but not found at file ./spec/with_requirement_spec.rb line 8 we can look back at our spec file

    describe "keys in response" do
      # @REQUIREMENT: my_hounds includes :breeding_stock
      it "includes :breeding_stock" do
        expect(subject).to include_key(:breeding_stock, reckon_requirement(__FILE__, __LINE__ - 2))
      end

and see we meant line 9 / __LINE__ - 2 (not line 8 / __LINE__ - 3), and fix it easily.

Having fixed that, we can also comment out the line in lib/service.rb again and verify that we still get the error message including the "Other specs relying on this" block.

Looking at that block:

Other specs relying on this:
- "https://github.com/smiller/relies_on_example_two/blob/main/spec/with_requirement_spec.rb#L23"
- "https://github.com/smiller/relies_on_example_two/blob/main/spec/with_requirement_spec.rb#L35"
- "https://github.com/smiller/relies_on_example_two_related_repo/blob/main/spec/relies_on_related_repo_spec.rb#L3"

It occurs to me that naming the requirement instead of just saying "this" would probably be more useful, so we change relies_on_message(requirement) to say

    "\nOther specs relying on requirement '#{requirement}': \n- #{relies_on.join("\n- ")}"

And re-run to get a more informative error:

Other specs relying on requirement 'my_hounds includes :breeding_stock':
- "https://github.com/smiller/relies_on_example_two/blob/main/spec/with_requirement_spec.rb#L23"
- "https://github.com/smiller/relies_on_example_two/blob/main/spec/with_requirement_spec.rb#L35"
- "https://github.com/smiller/relies_on_example_two_related_repo/blob/main/spec/relies_on_related_repo_spec.rb#L3"

That feels better. Changes from this section are here.

Initial Conclusion

I think I prefer this solution to the one from the previous post, largely because of the github-to-specific-line-number links in all cases. It is true that we're encoding data in comments, which can be risky, but since they're clearly structured comments – # @REQUIREMENT: and # @RELIES_ON: – they feel more robust than free-form comments would.

Also, since we’ve now got worked examples of both, we can let them breathe side-by-side for a bit, try them out on other examples, possibly / probably discover different edge cases, and come to a better-founded conclusion. It’s a lot easier to reason about now that both exist as running code.

List Tests Relying on Another Test

Problem Statement

Suppose you have three services, let’s call them hounds_service, breeding_service, and traits_service. Suppose further that breeding_service and traits_service both call my_hounds on hounds_service, which returns something including (from Theseus’ speech in A Midsummer Night’s Dream):

{
  breeding_stock: "Spartan",
  traits: {
    flewed_as: "Spartan",
    sanded_as: "Spartan"
  }
  …
}

For within-the-service tests, breeding_service and traits_service will stub out these responses. But they'll also want tests in hounds_service verifying that the values they care about appear in the actual response. If different teams own the different services then, ideally, when the hounds_service team decides to rename breeding_stock to bred_from for their own reasons, they should see the broken test, know they need to go talk with the team whose test they just broke, and come to some agreement about how (or whether) the field name should change consistently.

The simplest thing would be to add a comment:

# if this breaks, please talk to the breeding_service team
describe "my_hounds response includes 'breeding_stock'" do
  …

But it might be more efficient if there were some way of seeing at a glance which other tests (possibly on other repos) depend on this behaviour.

Taking the programming-by-wishful-thinking approach, let's introduce two new methods that we can put in spec blocks, requirement and relies_on. So:

# in hounds_service specs:

describe "my_hounds response includes 'breeding_stock'" do

  def requirement
    "my_hounds response includes 'breeding_stock'"
  end


# in breeding_service specs:

describe "code processing the my_hounds response" do

  def relies_on
    "hounds_service:my_hounds response includes 'breeding_stock'"
  end

Since the requirement is the same as the describe block’s description in this case, we might consider just using the description text, but starting with something that is more explicitly a search target, and less likely to be edited than a description, seems a safer first step.

Now we’ve got a concrete problem. Given a requirement, we want to locate and list all the relies_on blocks mentioning that requirement, either in the same repo or in other repos.

What we’ll want next is a relies_on lookup, where the keys are the relies_on text, and the values are the other spec example groups that have that text, and which we could interrogate as:

relied_on = relies_on_lookup.fetch(requirement, [])
if relied_on.any?
  # other specs have relies_on text matching our spec's requirement text
  # alert the user accordingly if our spec fails
end

(Edited to add: For a different approach, using comment annotations instead of method calls, see now also List Tests Relying on Another Test: Another Approach. But that very much builds on the work done here.)

In the Same Repo…

Let’s work with an example. Given a lib/service.rb:

class Service

  def my_hounds
    {
      breeding_stock: "Spartan",
      traits: {
        flewed_as: "Spartan",
        sanded_as: "Spartan"
      }
    }
  end
end

And a spec/with_requirement_spec.rb:

require "service"

RSpec.describe "with requirement" do

  subject { Service.new.my_hounds }

  context "my_hounds" do

    describe ":breeding_stock key" do
      def requirement
        "my_hounds includes :breeding_stock"
      end

      it "is included" do
        expect(subject).to include :breeding_stock
      end
    end

    describe ":traits key" do
      def requirement
        "my_hounds includes :traits"
      end

      it "is included" do
        expect(subject).to include :traits
      end
    end

    context "using my_hounds" do
      describe "for :breeding_stock" do
        def relies_on
          "my_hounds includes :breeding_stock"
        end

        it "does something with :breeding_stock" do
          expect(subject[:breeding_stock]).to eq("Spartan")
        end
      end
      describe "for :traits" do
        def relies_on
          "my_hounds includes :traits"
        end

        it "does something with :traits" do
          expect(subject[:traits][:flewed_as]).to eq("Spartan")
        end
      end
    end

    describe "using my_hounds :breeding_stock again" do
      def relies_on
        "my_hounds includes :breeding_stock"
      end

      it "checks something else" do
        expect(subject[:breeding_stock]).not_to eq("Athenian")
      end
    end
  end
end

That passes:

$ rspec
.....

Finished in 0.02194 seconds (files took 0.31486 seconds to load)
5 examples, 0 failures

And can be made to fail by commenting out line 5 of lib/service.rb:

$ rspec
F.F..

Failures:

  1) with requirement my_hounds :breeding_stock key is included
     Failure/Error: expect(subject).to include :breeding_stock

       expected {:traits => {:flewed_as => "Spartan", :sanded_as => "Spartan"}} to include :breeding_stock
       Diff:
       @@ -1 +1 @@
       -[:breeding_stock]
       +:traits => {:flewed_as=>"Spartan", :sanded_as=>"Spartan"},

     # ./spec/with_requirement_spec.rb:15:in `block (4 levels) in <top (required)>'

  etc… 

And we’d like the first failure message to include “Check out these other specs that rely on the assumption made in this one: …”.

Step 1: Build the relies_on lookup

Any spec file (or example group in a spec file) is going to be turned into a class extending RSpec::Core::ExampleGroup. So we can, at the beginning of the spec run, read ObjectSpace and find all the classes extending that:

  ObjectSpace.each_object(Class).select { |klass| klass < RSpec::Core::ExampleGroup }.each do |klass|
    # do something with it
  end

Note that we need to do this at the beginning of the test run because ObjectSpace contains living objects and partway through the test run some of them might already be garbage collected and no longer there. (Yes, I have experimentally verified this.) We can put it into before(:suite), run once before everything else. We can’t save it to an instance variable from there, but we can add a setting to RSpec.configuration:

  config.add_setting :relies_on

  config.before(:suite) do
    config.relies_on = reckon_relies_on
  end

which we can access later as RSpec.configuration.relies_on.

In reckon_relies_on, we need to grab all the RSpec classes, instantiate them, see if they respond to relies_on, and add them to the lookup if they do (note that multiple example groups might rely on the same requirement, like specs 3 and 5 above, so the value is an array):

def reckon_relies_on
  relies_on = Hash.new { |hash, key| hash[key] = [] }
  ObjectSpace.each_object(Class).select { |klass| klass < RSpec::Core::ExampleGroup }.each do |klass|
    instance = klass.new
    next unless instance.respond_to?(:relies_on)

    relies_on_key = instance.method(:relies_on).call
    relies_on[relies_on_key] << klass
  end
  relies_on
end

If we make those changes in spec/spec_helper.rb we can put a binding.pry in the first spec, and in the console see what RSpec.configuration.relies_on is set to:

{
  "my_hounds includes :breeding_stock" =>
    [RSpec::ExampleGroups::WithRequirement::MyHounds::UsingMyHoundsBreedingStockForAgain,
     RSpec::ExampleGroups::WithRequirement::MyHounds::UsingMyHounds::ForBreedingStock],
  "my_hounds includes :traits" =>
    [RSpec::ExampleGroups::WithRequirement::MyHounds::UsingMyHounds::ForTraits]
}

At which point we have both the requirement (because we’re in the spec) and the relies_on lookup to check it against. If the spec passes, there’s nothing to do. But if it fails, and other specs rely on the requirement, we want to display that.

Step 2: Display the relied_on related specs in the failure output

We can start by building the addition to the error message that we wish we had. Add another method to spec/spec_helper.rb:

def relies_on_message(requirement)
  relies_on = RSpec.configuration.relies_on.fetch(requirement, [])
  if relies_on.any?
    "\nOther specs relying on this: \n- #{relies_on.join("\n- ")}"
  else
    ""
  end
end

And now, from the binding.pry in the first spec, if we call relies_on_message(requirement), we get:

Other specs relying on this: 
- RSpec::ExampleGroups::WithRequirement::MyHounds::UsingMyHoundsBreedingStockForAgain
- RSpec::ExampleGroups::WithRequirement::MyHounds::UsingMyHounds::ForBreedingStock

To incorporate that into the actual error message, let’s build a custom matcher, which lets us customize the failure message. Add this as spec/support/matchers.rb (also add require 'support/matchers' to the top of spec/spec_helper.rb so that it can be used):

RSpec::Matchers.define :include_key do |key, requirement|
  match { |hash| hash.include?(key) }

  failure_message do
    "expected #{actual.keys} to include key #{key}, but it didn't\n- #{actual}\n#{relies_on_message(requirement)}"
  end

  failure_message_when_negated do
    "expected #{actual.keys} not to include key #{key}, but it did\n- #{actual}\n#{relies_on_message(requirement)}"
  end
end

And change the checks in the first two specs (lines 15 and 25); for line 15, from:

expect(subject).to include :breeding_stock

to

expect(subject).to include_key(:breeding_stock, requirement)

And then, if we comment out line 5 of lib/service.rb to break the first spec, we get the error message including the desired “Other specs relying on this assumption” block:

rspec
F.F..

Failures:

  1) with requirement my_hounds :breeding_stock key is included
     Failure/Error: expect(subject).to include_key(:breeding_stock, requirement)

       expected [:traits] to include key breeding_stock, but it didn't
       - {:traits=>{:flewed_as=>"Spartan", :sanded_as=>"Spartan"}}

       Other specs relying on this:
       - RSpec::ExampleGroups::WithRequirement::MyHounds::UsingMyHoundsBreedingStockForAgain
       - RSpec::ExampleGroups::WithRequirement::MyHounds::UsingMyHounds::ForBreedingStock
     # ./spec/with_requirement_spec.rb:15:in `block (4 levels) in <top (required)>'

   etc…

It is, granted, less than intuitive that we need to pass the requirement into the matcher – include_key(:breeding_stock, requirement). What does requirement have to do with whether :breeding_stock is in my_hounds? Nothing, save that we need it in there to check the relies_on lookup.

It is also the case that we’ll need to build other custom matchers for other kinds of checks if we want to include the relies_on_message(requirement) for those too.
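For instance (a sketch, not in the example repo), an equality matcher carrying the same relies_on suffix might look like:

RSpec::Matchers.define :eq_value do |expected, requirement|
  match { |actual_value| actual_value == expected }

  failure_message do
    "expected #{expected.inspect}, got #{actual.inspect}\n#{relies_on_message(requirement)}"
  end
end

# usable in a requirement-bearing spec as:
#   expect(subject[:breeding_stock]).to eq_value("Spartan", requirement)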

But this still feels like a step in a good direction. Example repo with all changes described so far here.

A Rejected Alternate Approach

This doesn’t require custom matchers but does require a rescue for each spec in a block with a requirement, which felt worse, but, for completeness:

Add this to spec/spec_helper.rb:

def append_to_failure_message(e, requirement)
  e.message << relies_on_message(requirement)
  raise e
end

And in the spec, change line 15 from

  expect(subject).to include :breeding_stock

to

  expect(subject).to include(:breeding_stock)
rescue RSpec::Expectations::ExpectationNotMetError => e
  append_to_failure_message(e, requirement)

This does work, but using the rescue like this – even granted that it’s only going to happen when there’s a failing spec, so arguably it is a properly exceptional case instead of just control flow – didn’t sit well with me, hence the custom matcher approach I went with instead.

In Other Repos…

The simplest way to drive out the building of the other repos functionality is to create another repo with a spec with a relies_on method referring back to a requirement in this one:

def relies_on
  "relies_on_example:my_hounds includes :breeding_stock"
end

Then, in our repo, we need a list of the related_repos we need to search, a reference to what the related repos are calling our repo, and a way of cloning the related repos into our repo so we can examine them. We can add this to spec/spec_helper.rb:

def clone_related_repos
  FileUtils.rm_r("related_repos") if Dir.exist?("related_repos")
  FileUtils.mkdir("related_repos")
  FileUtils.cd("related_repos")
  related_repos.each do |repo|
    system("git clone #{repo}")
  end
end

def this_repo
  "relies_on_example"
end

def related_repos
  ["https://github.com/smiller/relies_on_example_related_repo"]
end

We’ll also want to add related_repos/ to .gitignore so we aren’t committing them.

Now if we call clone_related_repos in reckon_relies_on, we'll have added a related_repos/ directory with the source code for all the related repos. We aren't going to set them all running and examine their ObjectSpace (in the real, not-simplified-for-a-blog-post example behind this post there are a dozen related repos), so we need something simpler and quicker.

At this point, I put a binding.pry in reckon_relies_on and started experimenting. Using open3 to capture the results of grepping for "relies_on_example:

  stdout, _, _ = Open3.capture3("grep", "-nr", "\"#{this_repo}:", ".")

reveals

"./relies_on_example_related_repo/spec/relies_on_related_repo_spec.rb:4:      \"relies_on_example:my_hounds includes :breeding_stock\"\n"

(In a more complex project, this check might not be distinct enough to avoid false positives: in that case, we would want to make the related repo relies_on string more unique, for instance <from:relies_on_example>:my_hounds includes :breeding_stock, and search for that instead.)

From that, after some trial and error, we can further process the results and add them to the relies_on hash:

  matches = stdout.split("\n")
  matches.each do |line|
    file, key = line.split(/\:\s+\"/)
    key.sub!("#{this_repo}:", "")
      .sub!("\"", "")
    if file.include?("/spec/") && file.include?("_spec.rb:")
      relies_on[key] << file
    end
  end
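Putting the pieces together, reckon_relies_on for this version might end up looking something like this (a consolidation of the snippets above rather than the exact code in the repo):

def reckon_relies_on
  relies_on = Hash.new { |hash, key| hash[key] = [] }

  # same repo: example groups defining a relies_on method, found via ObjectSpace
  ObjectSpace.each_object(Class).select { |klass| klass < RSpec::Core::ExampleGroup }.each do |klass|
    instance = klass.new
    next unless instance.respond_to?(:relies_on)

    relies_on[instance.relies_on] << klass
  end

  # related repos: clone them, then grep for strings like "relies_on_example:..."
  # (clone_related_repos leaves us inside related_repos/, so grep "." covers the clones)
  clone_related_repos
  stdout, _, _ = Open3.capture3("grep", "-nr", "\"#{this_repo}:", ".")
  stdout.split("\n").each do |line|
    file, key = line.split(/\:\s+\"/)
    key.sub!("#{this_repo}:", "").sub!("\"", "")
    relies_on[key] << file if file.include?("/spec/") && file.include?("_spec.rb:")
  end

  relies_on
end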

And then, if we comment out line 5 of lib/service.rb and make the first spec fail again, we get a failure message including the line number of the relying-on spec in another repo:

  1) with requirement my_hounds :breeding_stock key is included
     Failure/Error: expect(subject).to include_key(:breeding_stock, requirement)

       expected [:traits] to include key breeding_stock, but it didn't
       - {:traits=>{:flewed_as=>"Spartan", :sanded_as=>"Spartan"}}

       Other specs relying on this:
       - RSpec::ExampleGroups::WithRequirement::MyHounds::UsingMyHoundsBreedingStockForAgain
       - RSpec::ExampleGroups::WithRequirement::MyHounds::UsingMyHounds::ForBreedingStock
       - ./relies_on_example_related_repo/spec/relies_on_related_repo_spec.rb:4
     # ./spec/with_requirement_spec.rb:15:in `block (4 levels) in <top (required)>'

Further refinements are possible: if a github link to the line number in the other repo would be more likely to be followed up than a bare file path and line number, we could change the code to provide that instead:

    if file.include?("/spec/") && file.include?("_spec.rb:")
      matches = /.\/(\w+)\/([\w\/.]+):(\d+)/.match(file)
      repo_url = related_repos.select { |x| x[x.rindex('/') + 1..] == matches[1] }.first
      repo_link = "#{repo_url}/blob/main/#{matches[2]}#L#{matches[3]}"
      relies_on[key] << repo_link
    end

resulting in:

       Other specs relying on this:
       - RSpec::ExampleGroups::WithRequirement::MyHounds::UsingMyHoundsBreedingStockForAgain
       - RSpec::ExampleGroups::WithRequirement::MyHounds::UsingMyHounds::ForBreedingStock
       - https://github.com/smiller/relies_on_example_related_repo/blob/main/spec/relies_on_related_repo_spec.rb#L4

Changes to the repo from this section are here.

At the end of the day, to go back to the original example of the developer from one team wanting to change a key from :breeding_stock to :bred_from, this doesn’t stop them from doing that. But it does make much more clear the other parts of the codebase (even from different repos) that rely on the assumption staying the same, and should prompt conversations before the change is made.

Coda: After Sleeping on It…

I greatly prefer the convenience of the line number / github link references in the other repos code to the ObjectSpace references in the same repo code. Everything described here works, but I wonder if it would be even better to start from greppable comment annotations instead of methods, like so:

# requirement anchor
# @REQUIREMENT: my_hounds includes key :breeding_stock

# relies_on anchor in same repo
# @RELIES_ON: my_hounds includes key :breeding_stock

# relies_on anchor in another repo
# @RELIES_ON: <repo:hounds_service>:my_hounds includes key :breeding_stock

See now List Tests Relying on Another Test: Another Approach, where I’ve developed this version as well.

Appendix: The Thread from November 2019

This idea started in a twitter thread I wrote in November 2019, and recently re-read, and decided it was still a good idea:

[Germ of a testing idea] Suppose you’ve got a test. It depends on something elsewhere in the system behaving in a certain way. Within the test, you stub out that behaviour. Which is fine, provided your assumption holds.

If the other behaviour changes without warning, your assumption and your test become worse than useless. To prevent that, you make sure there’s a test elsewhere in the system testing the behaviour you’re relying on, and you add it if it isn’t already there.

But, if that behaviour changes – not just the interface, which we can auto-check, but the behaviour – it would be useful if there were an automatic way of notifying “Now you need to check these 12 places to see if they still work because they assumed the previous behaviour”

And so, maybe in addition to the tests-in-two-places, we need a marker on the dependent test, a depends-on annotation, some sort of mechanism that if you change behaviour X, it can report or “potentially or secondarily fail” all the places Y that depend on X.

This isn’t going back to the brittle tests problem of “you make a change and 27 tests start breaking”, or at least it’s different in that it tells you the one place to look at first.

It probably also indicates, if 26 other places depend on the behaviour, it should be treated as a public interface and not changed or only changed very carefully. So probably, in addition to running tests, we’d want to be able to build reports from it.

This is indirectly down to @alex_schl, as I was watching her keynote on exploratory testing from https://twitter.com/alex_schl/status/1196804674373980166 while thinking about how to improve some tests I’d written yesterday. All flaws are of course my own, and I will try to write this up with more detail soon.

Adding GitHub Actions to Run RSpec and SimpleCov

Summary

Given a simple Ruby project (say this one), how easy is it to set up a GitHub action on the repo so that it runs specs on push? And then say SimpleCov, too?

TL;DR: Skipping the trial and error and the credits (all below), just add this as .github/workflows/run-tests.yml to your repo and push it and you’re away:

name: Run tests
on: [push, pull_request]
jobs:
  run-tests:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
      - name: Set up Ruby
        uses: ruby/setup-ruby@v1
        with:
          bundler-cache: true
      - name: Run tests
        run: bundle exec rspec -f j -o tmp/rspec_results.json -f p
      - name: RSpec Report
        uses: SonicGarden/rspec-report-action@v2
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
          json-path: tmp/rspec_results.json
        if: always()
      - name: Report simplecov
        uses: aki77/simplecov-report-action@v1
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
        if: always()
      - name: Upload simplecov results
        uses: actions/upload-artifact@master
        with:
          name: coverage-report
          path: coverage
        if: always()

Adding RSpec

I did a search and found Dennis O’Keeffe’s useful article and since I already had a repo I just plugged in his .github/workflows/rspec.yml file:

name: Run RSpec tests
on: [push]
jobs:
  run-rspec-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Ruby
        uses: ruby/setup-ruby@v1
        with:
          # Not needed with a .ruby-version file
          ruby-version: 2.7
          # runs 'bundle install' and caches installed gems automatically
          bundler-cache: true
      - name: Run tests
        run: |
          bundle exec rspec

and it just ran, after three slight tweaks:

  • My repo already had a .ruby-version file so I could skip lines 11-12
  • the checkout@v2 action now gives a deprecation warning so I upped it to checkout@v4
  • because my local machine was a Mac I also needed to run bundle lock --add-platform x86_64-linux and push that for the action to run on ubuntu-latest. Unsurprisingly.

The specs failed, which was unexpected since they were passing locally. It would also be more informative if the summary page named the specific failure instead of making me click into run-tests to see it:

So I went to the GitHub Actions Marketplace and searched for "rspec". There's an RSpec Report action that looks like it does what we want if we change the bundle exec rspec line to output the results as a JSON file, so I changed the end of the workflow file to

      - name: Run tests
        run: bundle exec rspec -f j -o tmp/rspec_results.json -f p
      - name: RSpec Report
        uses: SonicGarden/rspec-report-action@v2
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
          json-path: tmp/rspec_results.json
        if: always()

pushed, and tried again, and it failed again, but this time with details:

The failure is not github-action-specific (Digressions on the repo #1: RSpec). The TL;DR is that I had manually created a spec/tmp_files directory for files the tests wrote, and, because of the way I was doing setup and teardown, it would have kept working on my machine and nobody else's, without my noticing, until I erased my local setup or tried it on a new machine. This was a very useful early warning.

Line 22's if: always() is worth a mention. If running the tests (lines 15-16) exits with failing specs, subsequent blocks won't run by default. If we need them to run anyway, we need to add if: always(), which is why it's included here in all subsequent blocks.

Adding SimpleCov

Since the project wasn’t already using SimpleCov, step one was setting that up locally (Digressions on the repo #2: SimpleCov).

That sorted, I went back to the GitHub Actions Marketplace and searched for “simplecov” and started trying alternatives. The currently-highest-starred action, Simplecov Report, drops in neatly after running the specs and the RSpec Report:

      - name: Simplecov Report
        uses: aki77/simplecov-report-action@v1
        with:
          token: ${{ secrets.GITHUB_TOKEN }}

– it defaults to requiring 90% coverage, but you can set a different value by passing in failedThreshold:

      - name: Simplecov Report
        uses: aki77/simplecov-report-action@v1
        with:
          failedThreshold: 80
          token: ${{ secrets.GITHUB_TOKEN }}

and of course if we want it to run whether specs fail or not, we need to add if: always().

So, I added

  - name: Simplecov Report
    uses: aki77/simplecov-report-action@v1
    with:
      token: ${{ secrets.GITHUB_TOKEN }}
    if: always()

and pushed. That passed:

That said, as with the spec results, it would be useful to see more detail. I remembered a previous project where we'd uploaded the SimpleCov coverage report to GitHub so it showed up among the artifacts, and the steps in Jeremy Kreutzbender's 2019 article still work for that: we can add a fourth block to our file:

- name: Upload coverage results
  uses: actions/upload-artifact@master
  with:
    name: coverage-report
    path: coverage
  if: always()

and push, and that gives us access to the coverage report:

which we can download and explore in detail, down to line numbers, even if our local machine is on a different branch.

And, for pushes, we're done. But there are two more things we need to add for it to work with pull requests as well.

For Pull Requests

The first thing to do is to modify the action file so that we trigger the workflow on pull requests as well as pushes:

on: [push, pull_request]

At which point, it feels like everything should work, but if we create a PR, the SimpleCov Report fails, with Error: Resource not accessible by integration:

It ran the code coverage (as we can see from the end of the Run tests block) and it uploaded the coverage results, but the SimpleCov Report block fails.

This is because, in the PR case, the SimpleCov Report tries to write a comment to the PR with its coverage report:

and to enable it to do that, we need to give it permission, so we need to insert into the action file:

permissions:
  contents: read
  pull-requests: write

And that fixes the SimpleCov Report error and gets us to a passing run.

I was initially unwilling to do this because pull-requests: write felt like a dangerously wide-scoped permission to leave open, but, on closer examination, the actual permissions that it enables are much more narrowly scoped, to issues and comments and labels and suchlike.

(Note that we only need to add these permissions because SimpleCov Report is adding a comment to the PR: if we’d picked a different SimpleCov reporting action that didn’t write a comment, we would only have needed on: [push, pull_request] to get the action to work on pushes and pull requests.)

This was an interesting gotcha to track down, but now we’re done, for both pushes and pull requests.

With thanks to my friend Ian, who unexpectedly submitted a pull request for the repo which exposed the additional wrinkles that had to be investigated here.

Appendix: Digressions on the Repo

#1: RSpec

The first time I ran the RSpec action it failed, which usefully revealed that the setup in one of my tests was relying on a manual step.

The repo takes an input file and writes an output file, so in spec/ I’ve got spec/example_files for the files it starts from and spec/tmp_files for the new files it writes. I had created that directory locally and was running

  before do
    FileUtils.rm_r("spec/tmp_files")
    FileUtils.mkdir("spec/tmp_files")
  end

Which only worked, of course, because I'd manually added the directory before the first run, a step I didn't remember until the GitHub action, which had no idea about it, tried and failed.

The simplest fix would be to replace it with

  before do
    FileUtils.mkdir("spec/tmp_files")
  end

  # ... tests

  after do
    FileUtils.rm_r("spec/tmp_files")
  end

but because I was using the specs to drive the implementation and I was eyeballing the produced files as I went, I didn’t want to delete them in teardown, so I did this instead:

  before do
    FileUtils.rm_r("spec/tmp_files") if Dir.exist?("spec/tmp_files")
    FileUtils.mkdir("spec/tmp_files")
  end

after which both the local and the GitHub action versions of the tests passed. This discovery was an unexpected but very welcome benefit of setting up the GitHub action to run specs remotely.


#2: SimpleCov

To add SimpleCov locally, we add gem "simplecov" to the Gemfile, require and start it from the spec/spec_helper.rb file (filtering out spec files from the report):

require 'simplecov'
SimpleCov.start do
  add_filter "/spec/"
end

and add coverage/ to the .gitignore file so we aren’t committing or pushing the files from local coverage runs.

Then we can run RSpec locally, open the coverage/index.html file, and discover that though we know that both lib/ files are being exercised in the specs, only one of them shows up in the coverage report:

And this makes sense, unfortunately, because lib/formatter.rb contains the class which does the formatting, and lib/reformat.rb is a script for the command-line interface which is largely optparse code and param checking. But it makes for a misleading coverage report.

We can start fixing this by moving everything but the command-line interface and optparse code out of lib/reformat.rb into, say, lib/processor.rb, and having the CLI call that. It still won't show up in the coverage report because it isn't being tested directly, but we can add tests against Processor so that it does.
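Those direct tests might take a shape like this (a sketch only: spec/processor_spec.rb is a hypothetical file name, and the option keys and file paths are placeholders rather than the repo's real ones):

require "processor"

RSpec.describe Processor do
  describe "#process" do
    it "writes the reformatted output file" do
      # placeholder options; the real keys come from the CLI's optparse setup
      processor = described_class.new(options: { in: "spec/example_files/example.txt",
                                                 out: "spec/tmp_files/example.txt" })

      processor.process

      expect(File).to exist("spec/tmp_files/example.txt")
    end
  end
end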

Having done that, we get a much more satisfactory coverage report:

This still leaves a very small bit of code in the CLI that isn’t covered by the coverage report:

option_parser.parse!
variant = ARGV[0]

if (variant.nil? && options.empty?) || variant == "help"
  puts option_parser
end

if variant == "alternating"
  Processor.new(options: options).process
end

but we have tests that will break if that doesn’t work so we decide we can live with that. (If it got more complicated, with multiple variants calling multiple processors, we could pull lines 4-10 into their own class and test it directly too. But it hasn’t yet.)


Adding described_url to RSpec for Request Specs

Background

RSpec has a described_class method, which, given a spec starting

RSpec.describe Accounting::Ledger::Entry do

lets us use described_class for Accounting::Ledger::Entry within the spec, which is less noisy (particularly with long class names) and very handy if we later rename the class.
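For illustration, with the Accounting::Ledger::Entry example from above (a minimal sketch, assuming the class can be instantiated with no arguments):

RSpec.describe Accounting::Ledger::Entry do
  it "builds an entry" do
    # described_class here is Accounting::Ledger::Entry
    expect(described_class.new).to be_an(Accounting::Ledger::Entry)
  end
end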

RSpec also has a subject method which exposes not the class but an instance of the class. It can be overridden implicitly, by putting a different class name in an inner describe block's description:

RSpec.describe Accounting::Ledger::Entry do
  # subject starts as instance of Accounting::Ledger::Entry
  …
  describe Accounting::Ledger::SpecificEntryType do
    # subject is now instance of Accounting::Ledger::SpecificEntryType

or explicitly, by defining it directly:

RSpec.describe Accounting::Ledger::Entry do
  # subject starts as instance of Accounting::Ledger::Entry
  …
  subject { "just what I choose it to mean – neither more nor less" }
  # subject is now that string, just like Humpty Dumpty said

Problem

What if we’re building a request spec for an API endpoint, and our spec starts

RSpec.describe "/accounting/ledger/entries" do
  it "GET" do
    get "/accounting/ledger/entries"

    expect(response).to have_http_status(:ok)

It would be nicer if we had a described_url method, so we could say get described_url instead of repeating the url string in each example.

We could set subject once to the url:

RSpec.describe "/accounting/ledger/entries" do
  subject { "/accounting/ledger/entries" }
  it "GET" do
    get subject

But the single repetition still feels like it should be unnecessary, particularly with the string we need only one line away.

Towards a Solution

We see from the documentation that we can get the inner description string from the example if we specify it as a block parameter:

RSpec.describe "/accounting/ledger/entries" do
  it "GET" do |example|
    expect(example.description).to eql("GET") # => passes
  end
end

We won't want to say do |example| all the time, though, so let's start building code in a single before block. Further, we'll only want this in request specs, so let's tag request specs as such:

RSpec.describe "/accounting/ledger/entries", :requests do

(:requests here is equivalent to requests: true), and put the before do |example| block, scoped to specs tagged :requests, in spec/spec_helper.rb:

RSpec.configure do |config|
  …
  config.before(:each, :requests) do |example|
    binding.pry
  end

From here we can run the spec, stop at the debugger, and see that example.description is indeed "GET".

> example
=> #<RSpec::Core::Example "GET">
> example.description
=> "GET"

Looking at lib/rspec/core/example.rb shows us that def description looks in something called metadata[:description] and passing example.metadata to the debugger gives us a screen-full of hash data, including

{ … 
:description => "GET"
:full_description => "/accounting/ledger/entries GET",
:example_group => { …
  :description => "/accounting/ledger/entries"

So if we’re one layer deep, example.metadata[:example_group][:description] gives us the outer description.

However, the actual example might be nested inside a describe or context block. Given

RSpec.describe "/accounting/ledger/entries" do
  describe "GET" do
    it "first case" do

the relevant metadata extract is instead:

{ … 
:description => "first case"
:full_description => "/accounting/ledger/entries GET first case",
:example_group => { …
  :description => "GET",
  :full_description => "/accounting/ledger/entries GET",
  …
  :parent_example_group => { …
    :description => "/accounting/ledger/entries",
    :full_description => "/accounting/ledger/entries"

And yes, testing with another nested group demonstrates that a :parent_example_group can include an inner :parent_example_group.

A Solution

But that’s a simple recursion problem: keep going until you don’t have a next :parent_example_group and then grab the outermost :description. We could replace the contents of the before block with:

def reckon_described_url(metadata)
  if metadata[:parent_example_group].nil?
    metadata[:description]
  else
    reckon_described_url(metadata[:parent_example_group])
  end
end

@described_url = reckon_described_url(example.metadata[:example_group])

Why the Snark Is a Boojum

(Yes, arguably, because the first space in the :full_description string happens to be between the outer description and the second-outermost description, we could get the same answer from

@described_url = example.metadata[:full_description].split(" ").first

but only by coincidence, and it would break for unrelated reasons if they changed how they build :full_description. So let’s not do that.)

Back to the Solution

We can now use the code from our before block in a request spec. Going back to our original example, we can now implement it as:

RSpec.describe "/accounting/ledger/entries", :requests do
  it "GET" do
    get @described_url

If we’d prefer not to have an unexplained instance variable, or are looking forward to the possibility of passing in parameters, we can wrap it in a method call in the before block:

def described_url
  @described_url
end

And then it can (finally) be used by the end user just like described_class:

RSpec.describe "/accounting/ledger/entries", :requests do
  it "GET" do
    get described_url

A Solution that Takes Parameters

A more realistic version of the request spec url might be /accounting/ledgers/:ledger_id/entries. Fortunately, since we’ve just turned described_url into a method, we can make it take parameters, so a slightly changed test can pass in the parameters it needs to substitute:

RSpec.describe "/accounting/ledgers/:ledger_id/entries", :requests do
  let(:ledger) { … build test ledger … }
  it "GET" do
    get described_url(params: {ledger_id: ledger.id})

and the method can create a copy of the base @described_url (as different tests may well require different parameters) and return the copy with the parameters, if any, filled in:

def described_url(params: {})
  url = @described_url.dup
  params.each do |key, value|
    url.sub!(":#{key}", value.to_s)
  end
  url
end

Appendix: The Complete Before Block

RSpec.configure do |config|
  …
  config.before(:each, :requests) do |example|
    def described_url(params: {})
      url = @described_url.dup
      params.each do |key, value|
        url.sub!(":#{key}", value.to_s)
      end
      url
    end

    def reckon_described_url(metadata)
      if metadata[:parent_example_group].nil?
        metadata[:description]
      else
        reckon_described_url(metadata[:parent_example_group])
      end
    end

    @described_url = reckon_described_url(example.metadata[:example_group])
  end
  …
end

Testing with Page Objects: Implementation

In the previous post, we added the page object framework SitePrism to a project, did some organization and customization, and added helper methods so we could call visit_page and on_page to interact with the app. Next, let’s start using it.
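(As a reminder, the helpers look roughly like this: the real implementation is in the previous post, and this sketch assumes the page classes live under a PageObjects module and leans on SitePrism's load and displayed? methods.)

def visit_page(name)
  page = Object.const_get("PageObjects::#{name}").new
  page.load # navigates to the page's set_url
  page
end

def on_page(name)
  page = Object.const_get("PageObjects::#{name}").new
  expect(page).to be_displayed # checks the current page against the page object's URL matcher
  page
end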

Navigation

First Step

Let’s say that for the first test you want to go to the login page, log in, and then assert that you’re on the dashboard page. That’s easy enough to do like so:

visit_page("Login").login(username, password)
on_page("Dashboard")

module PageObjects
  class Login < SitePrism::Page
    set_url "/login"

    element :username, "input#login"
    element :password, "input#password"
    element :login, "input[name='commit']"

    def login(username, password)
      self.username.set username
      self.password.set password
      login.click
    end
  end
end

module PageObjects
  class Dashboard < SitePrism::Page
    set_url "/dashboard"
  end
end

The first line will go to the “/login” page and then try to submit the form, and will fail if it can’t find or interact with the “username”, “password”, or “login” elements. For instance, if the css for the username field changes, the test will fail with:

Unable to find css "input#login" (Capybara::ElementNotFound)
./test_commons/page_objects/login.rb:10:in `login'
./features/step_definitions/smoke.rb:7:in `login'
./features/step_definitions/smoke.rb:3:in `/^I log in$/'
features/smoke.feature:5:in `When I log in'

The second line will fail if the path it arrives at after logging in is something other than “/dashboard”.

Because the default message for the different path was uninformative (expected true, but was false), we added a custom matcher to provide more information. The default message for not finding an element on the page, including the css it’s expecting and the backtrace to the page object it came from and the line in the method it was trying to run, seems sufficiently informative to leave as-is.
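Roughly along these lines (a sketch: be_the_current_page is a hypothetical name, not necessarily the matcher from the previous post, and it leans on SitePrism's displayed? and url methods):

RSpec::Matchers.define :be_the_current_page do
  match { |page_object| page_object.displayed? }

  failure_message do |page_object|
    "expected to be on #{page_object.class.name} (#{page_object.url}), " \
      "but the current path is #{Capybara.current_session.current_path}"
  end
end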

If the login field were disabled instead of not found, the error message would be slightly less informative:

invalid element state
  (Session info: chrome=…)
  (Driver info: chromedriver=2.14. … (Selenium::WebDriver::Error::InvalidElementStateError)
./test_commons/page_objects/login.rb:10:in `login'
./features/step_definitions/smoke.rb:7:in `login'
./features/step_definitions/smoke.rb:3:in `/^I log in$/'
features/smoke.feature:5:in `When I log in'

but since the backtrace still gives you the line of the page object where it failed, which identifies the element that was in the invalid state, it feels lower priority.

Second Step

If the next thing you want to test is a subcomponent flow, you can navigate to its first page either by calling visit_page('ComponentOne::StepOne') (the equivalent of typing the path in the browser address bar), or by having the test interact with the same navigation UI that a user would. The second choice feels more realistic and more likely to expose bugs, like the right navigation links not being displayed.

(Ironically, having created the visit_page method, we might only ever use it for a test’s first contact with the app, using in-app navigation and on_page verification for all subsequent interactions.)

To get that to work, we need to model the navigation UI.

SitePrism Sections and Navigation UI

Suppose the app has a navigation bar on every logged-in page. We could add it as a section to the Dashboard, say, but if it’s on every page it probably makes more sense to add it as a section to a BasePage and have Dashboard (and all other logged-in pages) inherit from it:

module PageObjects
  class BasePage < SitePrism::Page
    section :navigation, Navigation, ".navigation"
  end
end

module PageObjects
  class Dashboard < BasePage
    …
  end
end

module PageObjects
  class Navigation < SitePrism::Section
    element :component_one, ".component_one"
    element :component_two, ".component_two"

    def goto_component_one
      component_one.click
    end

    def goto_component_two
      component_two.click
    end
  end
end

Now, if we want to go from the dashboard to the start of component one, we can modify the on_page("Dashboard") line above to

on_page("Dashboard").navigation.goto_component_one

Can we delegate to the section from the enclosing page? Yes. If we add this to the base page:

module PageObjects
  class BasePage < SitePrism::Page
    …
    delegate :goto_component_one, :goto_component_two, to: :navigation
  end
end

we can then call, still more briefly:

on_page("Dashboard").goto_component_one

And now the test from the top can continue like so:

visit_page("Login").login(username, password)
on_page("Dashboard").goto_component_one
on_page("ComponentOne::StepOne").do_something

and line two will fail if the link to “component one” isn’t visible on the screen, and line three will fail if the user doesn’t end up on the first page of component one (perhaps they followed the link but authorizations aren’t set up correctly, so they end up on an “unauthorized” page instead).

What Else Is on the Page?

If you call a method on a page object, it will verify that every element it interacts with is on the page, and fail if it can’t find or can’t interact with one of them. But what if you’re adding tests to an existing app, and you aren’t yet calling methods that interact with everything on the page, but you do want to make sure certain things are on the page?

Suppose you have a navigation section with five elements, unimaginatively called :one, :two, :three, :four, :five. And you have three types / authorization levels of users, like so:

User Can see elements
user1 :one
user3 :one, :two, :three
user5 :one, :two, :three, :four, :five

And you want to log each of them in and verify that they only see what they should see. We can do this with SitePrism’s has_<element>? and has_no_<element>? methods.

A naive approach might be:

all_navigation_elements = %w{one two three four five}

visit_page('Login').login(user1.username, user1.password)
on_page('Dashboard') do |page|
  user1_should_see = %w{one}
  user1_should_see.each do |element|
    expect(page.navigation).to send("have_#{element}")
  end
  user1_should_not_see = all_navigation_elements - user1_should_see
  user1_should_not_see.each do |element|
    expect(page.navigation).to send("have_no_#{element}")
  end
end

Even before doing the other two, it’s clear that we’re going to want to extract this into a more generalized form.

To build out the design by “programming by wishful thinking”, if we already had the implementation we might call it like this:

all_navigation_elements = %w{one two three four five}

visit_page('Login').login(user1.username, user1.password)
on_page('Dashboard') do |page|
  should_see = %w{one}
  has_these_elements(page.navigation, should_see, all_navigation_elements)
end

visit_page('Login').login(user3.username, user3.password)
on_page('Dashboard') do |page|
  should_see = %w{one two three}
  has_these_elements(page.navigation, should_see, all_navigation_elements)
end

# ditto for user5

And then we can reopen test_commons/page_objects/helpers.rb to add an implementation:

def has_these_elements(page, has_these, all)
  has_these.each do |element|
    expect(page).to send("have_#{element}")
  end
  does_not_have = all - has_these
  does_not_have.each do |element|
    expect(page).to send("have_no_#{element}")
  end
end

And this works. The raw error message (suppose we’re writing the test for user1 first, while user1 can still see all five items) is not especially helpful on its own:

expected #has_no_two? to return true, got false (RSpec::Expectations::ExpectationNotMetError)
./test_commons/page_objects/helpers.rb:28:in `block in has_these_elements'
./test_commons/page_objects/helpers.rb:27:in `each'
./test_commons/page_objects/helpers.rb:27:in `has_these_elements'
./features/step_definitions/navigation_permissions.rb:24:in `block in dashboard'
…

And we could build a custom matcher for it, but in this case, since the error is most likely to show up (if we’re working in Cucumber) after

When user1 logs in
Then user1 should only see its subset of the navigation

seeing expected #has_no_two? after that seems perfectly understandable.
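For completeness, the step definition behind that second line might look something like the sketch below; the step wording matches the trace above, but the permissions hash and the regex capture are placeholders rather than the project’s actual code:

# features/step_definitions/navigation_permissions.rb (sketch)
ALL_NAVIGATION_ELEMENTS = %w{one two three four five}

NAVIGATION_PERMISSIONS = {
  "user1" => %w{one},
  "user3" => %w{one two three},
  "user5" => %w{one two three four five}
}

Then(/^(user\d) should only see its subset of the navigation$/) do |user_name|
  on_page('Dashboard') do |page|
    should_see = NAVIGATION_PERMISSIONS.fetch(user_name)
    has_these_elements(page.navigation, should_see, ALL_NAVIGATION_ELEMENTS)
  end
end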

Summing Up So Far

I’m quite liking this so far, and I can easily imagine after I’ve used them for a bit longer bundling the helpers into a gem to make it trivially easy to use them on other projects. And possibly submitting a PR to suggest adding something like page.has_these_elements(subset, superset) into SitePrism, because that feels like a cleaner interface than has_these_elements(page, subset, superset).
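For what it’s worth, the SitePrism-side version would only be a few lines. Here’s a rough sketch of what I have in mind, where the module name, the RSpec::Matchers include, and the send(:include, …) calls are my assumptions rather than anything SitePrism provides today (it also assumes rspec-expectations is loaded):

module HasTheseElements
  include RSpec::Matchers

  def has_these_elements(has_these, all)
    has_these.each do |element|
      expect(self).to send("have_#{element}")
    end
    (all - has_these).each do |element|
      expect(self).to send("have_no_#{element}")
    end
  end
end

SitePrism::Page.send(:include, HasTheseElements)
SitePrism::Section.send(:include, HasTheseElements)

which would let the checks read as page.navigation.has_these_elements(should_see, all_navigation_elements).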

Testing with Page Objects: Setup

What Are Page Objects?

Page objects are an organizing tool for UI test suites. They provide a place to identify the route to and elements on a page and to add methods to encapsulate paths through a page. Martin Fowler has a more formal description.

Suppose you’re testing a web app which has a component with a four-page flow, with a known happy path a user would normally work through. Using page objects, you could test that flow as:

def component_one_flow(user, interests, contacts)
  goto_component_one_start
  on_page('ComponentOne::StepOne').register(user)
  on_page('ComponentOne::StepTwo').add_interests(interests)
  on_page('ComponentOne::StepThree').add_contacts(contacts)
  on_page('ComponentOne::StepFour').approve
end

If anything changes on one of those pages, you can make the change inside the page object, and the calling code can remain the same.

For small- to medium-sized web apps this might seem a nice-to-have, but for larger apps where you end up afraid to open features/step_definitions or spec/features because it’s impossible to find anything, this is a very useful pattern to introduce, or to start out with on the next project.

This ran quite long, so I’ve split it into two parts: in this first post I’ll describe setting up a page object framework, and in the second I’ll talk about implementation patterns.

Available Frameworks

In 2013 and 2014 I enjoyed working with Jeff Morgan’s page-object, which runs on watir-webdriver. (His book Cucumber and Cheese was my introduction to page objects.) Jeff Nyman’s Symbiont also runs on watir-webdriver and also looks quite interesting (as does his testing blog). There’s also Nat Ritmeyer’s SitePrism, running on Capybara.

Here’s a login page in all three:

Page-object

class LoginPage
  include PageObject

  page_url "/login"

  text_field(:username, id: 'username')
  text_field(:password, id: 'password')
  button(:login, id: 'login')

  def login_with(username, password)
    self.username = username
    self.password = password
    login
  end
end

SitePrism

class Login < SitePrism::Page
  set_url "/login"

  element :username, "input#login"
  element :password, "input#password"
  element :login, "input[name='commit']"

  def login_with(username, password)
    self.username.set username
    self.password.set password
    login.click
  end
end

Symbiont

class Login
  attach Symbiont

  url_is      "/login"
  title_is    "Login"

  text_field :username, id: 'user'
  text_field :password, id: 'password'
  button     :login,    id: 'submit'

  def login_with(username, password)
    self.username.set username
    self.password.set password
    login.click
  end
end

The similarities are clear: in each case you can identify the URL, the elements, and the methods that interact with those elements. And since the methods use the page object’s internal element names, if (when) the css changes, the only line that needs to change in the page object is the element declaration.
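For example, if the username field’s css changed from input#login to input#username (an invented change), the SitePrism page object above would only need its declaration updated; login_with and every test that calls it stay the same:

# before
element :username, "input#login"

# after
element :username, "input#username"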

What about differences? Page-object (here) and Symbiont (here) provide page factories, so that in page-object you can write

visit(LoginPage) do |page|
  page.login(username, password)
end

or, even more compactly

visit(LoginPage).login(username, password)

where SitePrism is, somewhat more long-windedly:

page = Login.new
page.load
page.login(username, password)

SitePrism offers, in addition to element, the named concepts of elements (for a collection of rows, say) and section (for a subset of elements on a page, or elements which are common across multiple pages, a navigation bar, say, which you can identify separately in a SitePrism::Section object).
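A quick sketch of those two, with invented names and selectors purely for illustration:

class Navigation < SitePrism::Section
  element :home, ".home"
end

class Results < SitePrism::Page
  set_url "/results"

  # elements: a collection, e.g. one entry per table row
  elements :rows, "table#results tr"

  # section: a named sub-area of the page, backed by its own class,
  # so it can be reused across pages
  section :navigation, Navigation, ".navigation"
end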

The further point that inclined me to try SitePrism (though with a mental note to add a page factory) was that when we were using page-object last year we sometimes needed to write explicit wrappers for waiting code because the implicit when_present wait did not always seem to be reliable, and it looked as though Symbiont wrapped it the same way (here). I was curious to see if SitePrism (or its underlying driver) had a more robust solution.

Setting up SitePrism

So How Does It Handle Waiting?

By default, SitePrism requires explicit waits, so either wait_for_<element> or wait_until_<element>_visible. This has been challenged (issue#41, in which Ritmeyer explains that he likes having more fine-grained control over timing of objects than Capybara’s implicit waits allows), and a compromise has been implemented (issue#43), by which you can configure SitePrism to use Capybara’s implicit waits, so:

SitePrism.configure do |config|
  config.use_implicit_waits = true
end

I’ve been using that, and it’s working so far.
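For comparison, without that setting the explicit-wait style looks something like this, using the Login page’s username element from the example above:

page = Login.new
page.load
page.wait_for_username            # wait for the element to exist
page.wait_until_username_visible  # or wait for it to become visible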

Where to Put Them

On the earlier project we were using page objects from Cucumber, so we put them under features/page_objects. The new project is larger and already has non-test-framework-specific code under both features and spec, some of which is called from the other framework’s directory, so it makes sense to create a common top-level directory for them. I called it test_commons to make it clear at a glance through the top-level directories that it was test-related instead of app-related. Then page_objects goes under that.

The disadvantage of the location is that we will need to load the files in explicitly. So we add this at test_commons/all_page_objects.rb (approach and snippet both from Panayotis Matsinopoulos’s useful post):

page_objects_path = File.expand_path('../page_objects', __FILE__)
Dir.glob(File.join(page_objects_path, '**', '*.rb')).each do |f|
  require f
end

and in features/support/env.rb we add:

require_relative '../../test_commons/all_page_objects'

and we’re away. (If/when we use them from RSpec features as well, we can require all_page_objects in spec_helper.rb too.)
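That would just be the mirror-image line (assuming the default spec/spec_helper.rb location):

# spec/spec_helper.rb
require_relative '../test_commons/all_page_objects'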

Further Organization

Probably you’ll have some top-level or common pages, like login or dashboard, and others which belong to specific flows and components, like the four pages of “component one” in the first example. A login page could go in the top-level page objects directory:

# test_commons/page_objects/login.rb
module PageObjects
  class Login < SitePrism::Page
    …
  end
end

whereas the four pages of component one could be further namespaced:

# test_commons/page_objects/component_one/step_one.rb
module PageObjects
  module ComponentOne
    class StepOne < SitePrism::Page
      …
    end
  end
end

This becomes essential when you’ve got more than a handful of pages.

Adding a Page Factory for SitePrism

I started by looking at page-object’s implementation (which itself references Alister Scott’s earlier post).

That seemed to do more than I needed right away, so I started with the simpler:

# test_commons/page_objects/helpers.rb
module PageObjects
  def visit_page(page_name, &block)
    Object.const_get("PageObjects::#{page_name}").new.tap do |page|
      page.load
      block.call page if block
    end
  end
end

… though we will need to add World(PageObjects) to our features/support/env.rb to use this from Cucumber (the require got us the page object classes, but not module methods). If we were running from RSpec, to borrow a snippet from Robbie Clutton and Matt Parker’s excellent LA Ruby Conference 2013 talk, we would need:

RSpec.configure do |c|
  c.include PageObjects, type: [:feature, :request]
end

So now we can call visit_page (not visit because that conflicts with a Capybara method), it’ll load the page (using the page object’s set_url value), and if we have passed in a block it’ll yield to the block, and either way it’ll return the page, so we can call methods on the returned page. In other words: visit_page('Login').login(username, password).

We can use the same pattern to build an on_page method, which instead of loading the page asserts that the app is on the page that the test claims it will be on:

def on_page(name, args = {}, &block)
  Object.const_get("PageObjects::#{page_name}").new.tap do |page|
    expect(page).to be_displayed
    block.call page if block
  end
end

Further iteration (including the fact that both page.load and page.displayed? take arguments for things like ids in URLs) results in something like this:

module PageObjects
  def visit_page(name, args = {}, &block)
    build_page(name).tap do |page|
      page.load(args)
      block.call page if block
    end
  end

  def on_page(name, args = {}, &block)
    build_page(name).tap do |page|
      expect(page).to be_displayed(args)
      block.call page if block
    end
  end

  def build_page(name)
    name = name.to_s.camelize if name.is_a? Symbol
    Object.const_get("PageObjects::#{name}").new
  end
end

We haven’t implemented page-object’s if_page yet (don’t assert that you’re on a page and fail if you aren’t, but check if you’re on a page and if you are carry on), but we can add it later if we need it.
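If we do need it, it can reuse build_page. A possible sketch, with the name and behaviour borrowed from page-object’s if_page, so treat it as an assumption rather than a finished helper:

def if_page(name, args = {}, &block)
  build_page(name).tap do |page|
    # no expectation here: only run the block if we really are on the page
    block.call page if block && page.displayed?(args)
  end
end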

What Would that Failed Assertion Look Like?

Suppose we tried on_page('Dashboard').do_something when we weren’t on the dashboard. What error message would we get?

expected displayed? to return true, got false (RSpec::Expectations::ExpectationNotMetError)
./test_commons/page_objects/helpers.rb:11:in `block in on_page'

We know, because SitePrism has a #current_path method, what page we’re actually on. A custom matcher to produce a more informative message seems a good idea. Between the rspec-expectations docs and Daniel Chang’s exploration, let’s try this:

# spec/support/matchers/be_displayed.rb
RSpec::Matchers.define :be_displayed do |args|
  match do |actual|
    actual.displayed?(args)
  end

  failure_message_for_should do |actual|
    expected = actual.class.to_s.sub(/PageObjects::/, '')
    expected += " (args: #{args})" if args.count > 0
    "expected to be on page '#{expected}', but was on #{actual.current_path}"
  end
end

which we’d get for free from RSpec; to use it from Cucumber as well, we add to features/support/env.rb:

require_relative '../../spec/support/matchers/be_displayed'

The first time I ran this, because I wasn’t passing in arguments to SitePrism’s #displayed? method, I hit a weird error:

comparison of Float with nil failed (ArgumentError)
./spec/support/matchers/be_displayed.rb:3:in `block (2 levels) in <top (required)>'

… emending the helper methods to make sure that I included an empty hash even if no arguments were passed in (part of the final version of helpers.rb above) fixed that. Now, with the new matcher working, if we run on_page('Dashboard').do_something when we’re actually at the path /secondary-control-room, we get the much more useful error

expected to be on page 'Dashboard', but was on '/secondary-control-room' (RSpec::Expectations::ExpectationNotMetError)
./test_commons/page_objects/helpers.rb:11:in `block in on_page'

… note that if there had been expected arguments, we would have printed those out too (expected to be on page 'Dashboard' (args: {id: 42})).

I built the custom matcher in RSpec because that’s what I tend to use, but if Minitest is more your thing, you could add a custom matcher by reopening Minitest::Assertions. (Example here.)
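A minimal version of that, with the method name and failure message invented here rather than taken from the linked example, might be:

require "minitest"

# reopening Minitest::Assertions to add a custom assertion
module Minitest::Assertions
  def assert_displayed(page, args = {})
    assert page.displayed?(args),
           "expected to be on page '#{page.class.to_s.sub('PageObjects::', '')}', " \
           "but was on #{page.current_path}"
  end
end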

Setup Complete

We’ll break now that the page object framework is set up, and discuss further implementation patterns in the next post.

Emacs Org-mode Styling, Non-smart Quotes, Zero-width-space, and TeX Input Method

In org-mode we can style inline elements with *bold* (bold), /italic/ (italic), _underlined_ (_underlined_, which won’t show up in html), =verbatim= (verbatim), and ~code~ (code).

But this breaks if the character just inside the styling code is a non-smart single or double quote. =C-c ;= is styled (C-c ;); =C-c '= is not (=C-c '=).

We can fix that by inserting a zero-width space between the apostrophe and the = . The first time, we can put the cursor between the apostrophe and the = and enter C-x 8 RET ZERO WIDTH SPACE RET, at which point =C-c '​= will display as C-c '.

The ZERO WIDTH SPACE part of insert-char (C-x 8) is autocompleting, so we won’t need to type the whole thing, but we’ll probably still want a shortcut if we find ourselves doing it more than once or twice. Here’s the simplest thing we could do:

(defun my/insert-zero-width-space ()
  (interactive)
  (insert-char ?\u200B)) ;; code for ZERO WIDTH SPACE
(global-set-key (kbd "C-x 8 s") 'my/insert-zero-width-space)

After which, if we find ourselves adding inline styles which don’t work because the styled section starts or ends with a quotation mark, we can go to just before or just after the quotation mark, type C-x 8 s, and watch the styles apply as expected.

Thanks, as usual, to Sacha Chua, who explained this in passing. Her .emacs has a more flexible solution which I will probably switch to the moment I need to insert more than one unicode character.

Note that while you can use insert-char to get accented characters (C-x 8 ' e produces é, for instance), and I used to do that, I have since set the input method to TeX (M-x set-input-method RET TeX RET), and now I type \​'e for é, \​th for þ, \​Mu\​epsilon\​nu for Μεν, and so on and so forth.

Automating Boilerplate in Org-mode Journalling

The Problem

My main org-mode files have a header structure like this:

* 2015
** March
*** Friday 6 March 2015
**** 10:00. Emacs coaching with Sacha :emacs:
<2015-03-06 10:00-11:00>
**** 23:37. Emacs progress note
[2015-03-06 23:37]
*** Saturday 7 March 2015
**** 12:55. Stratford King Lear in cinemas :wcss:
<2015-03-07 12:55-15:55>

The two things I do most often are start writing notes in the present, and set up scheduled events in the future. Scheduled events need active timestamps so that they show up in the agenda view:

Week-agenda (W10):
Monday 2 March 2015 W10
Tuesday 3 March 2015
Wednesday 4 March 2015
Thursday 5 March 2015
Friday 6 March 2015
10:00. Emacs Coaching with Sacha :emacs:
Saturday 7 March 2015
12:55. Stratford King Lear in cinemas :wcss:
Sunday 8 March 2015

[How to get the agenda view? Make sure the file you’re in is in org-agenda-files by pressing C-c [, then run org-agenda either by M-x org-agenda or the common (but not out-of-the-box) shortcut C-c a, then press a to get the agenda for the week.]

[Active timestamps are in < >; inactive timestamps are in [ ]. If we want to promote an inactive timestamp to be active and show up in the agenda view, we can run M-x org-toggle-timestamp-type from the inactive timestamp.]

[Start time is duplicated between header and timestamp deliberately: sometimes I review the file in org-mode outline with only the headers. If not for that use-case it would make sense to leave the start time out of the header.]

What I’d like to be able to do is open the file, and then with as little ceremony or setup as possible, start typing.

Adding a New Note

When I’ve already got a date header for the current date, this is trivially simple: I can put my cursor under that date and call M-x my/note, having previously defined

(defun my/note ()
  (interactive)
  (insert "\n\n**** " (format-time-string "%H:%M") ". ")
  (save-excursion
    (insert "\n" (format-time-string "[%Y-%m-%d %H:%M]") "\n\n")))

… and it does pretty much what it says.

The values passed into format-time-string (%Y, %m, %d etc.) will be the values for the current date and time.

save-excursion means “do what’s inside here and then return the point (cursor) to where it was before the block was called”, so it fills in the timestamp and then returns the cursor to the end of the header line, ready for the title to be filled in.

Since after that I’d need to move the cursor back down to below the timestamp, I can be even lazier and prompt for the title and the tags (if any) and fill them in and move the cursor down to where the text should start:

(defun my/note-2 (title tags)
  (interactive (list
                (read-from-minibuffer "Title? ")
                (read-from-minibuffer "Tags? ")))
  (insert "\n\n**** " (format-time-string "%H:%M") ". " title)
  (unless (string= tags "")
    (insert " :" tags ":"))
  (insert "\n" (format-time-string "[%Y-%m-%d %H:%M]") "\n\n"))

If we were just setting one argument, say title, this would be simpler:

(defun my/note-2-eg (title)
  (interactive "MTitle? ")
  …)

But since we want two, we need to tell interactive it’s getting a list, and then the arguments are set in order.

(defun my/note-2 (title tags)
  (interactive (list
                (read-from-minibuffer "Title? ")
                (read-from-minibuffer "Tags? ")))
  …

Just as we can create an inactive timestamp by inserting a formatted date with [ ], we can add tags by inserting a string surrounded by : :. We don’t want to do this if the tag argument was left blank, of course, so we surround it with an unless:

(unless (string= tags "")
  (insert " :" tags ":"))

[Side-note: if you’re doing anything more complicated than this with strings, you probably want Magnar Sveen’s s.el (string manipulation library), which here would let you do (unless (s-blank? tags)… instead.]

At the end of which, because we took out the save-excursion, the cursor is below the timestamp and I’m ready to type.

All good. What if we don’t already have a date header for the current date? Can we auto-generate the date header?

Auto-generating Date Headers

Org-mode has a function called org-datetree-find-date-create. If you pass it in a list of numbers (the month, the day, and the year), and if your header structure is

* 2015
** 2015-03 March
*** 2015-03-06 Friday

then if you called that function passing in (3 7 2015), it would automatically add:

*** 2015-03-07 Saturday

For that matter, if you called it passing in (4 1 2015), it would automatically add

** 2015-04 April
*** 2015-04-01 Wednesday

If you call it passing in a date which is already there, it moves the cursor to that date. So, we could change the format of the org file headers, update our new note function to call the function, and be done.

(defun my/note-3 (title tags)
  (interactive (list
                (read-from-minibuffer "Title? ")
                (read-from-minibuffer "Tags? ")))
  (org-datetree-find-date-create (org-date-to-gregorian
                                  (format-time-string "%Y-%m-%d")))
  (org-end-of-subtree)
  (insert "\n\n**** " (format-time-string "%H:%M") ". " title)
  (unless (string= tags "")
    (insert " :" tags ":"))
  (insert "\n" (format-time-string "[%Y-%m-%d %H:%M]") "\n\n"))

In the new bit:

(org-datetree-find-date-create (org-date-to-gregorian
                                (format-time-string "%Y-%m-%d")))
(org-end-of-subtree)

format-time-string “%Y-%m-%d” will return “2015-03-07”, and org-date-to-gregorian will turn that into (3 7 2015), which is the format that org-datetree-find-date-create expects. Determining this involved looking at the source for org-datetree-find-date-create to see what arguments it expected (C-h f org-datetree-find-date-create takes you to a help buffer that links to the source file, in this case org-datetree.el; click on the link to go to the function definition) and a certain amount of trial and error. At one point, before org-date-to-gregorian, I had the also working but rather less clear:

(org-datetree-find-date-create (mapcar 'string-to-number
                                       (split-string (format-time-string "%m %d %Y"))))

org-end-of-subtree just takes the cursor to the bottom of the section for the date.

And that then works. What about adding new events in the future?

Adding a New Event

org-datetree-find-date-create makes it easier to fill in missing month and date headers to create a new future event:

(defun my/event (date end-time)
  (interactive (list
                (org-read-date)
                (read-from-minibuffer "end time (e.g. 22:00)? ")))
  (org-datetree-find-date-create (org-date-to-gregorian date))
  (goto-char (line-end-position))
  (setq start-time (nth 1 (split-string date)))
  (if (string= start-time nil)
      (setq start-time ""))
  (insert "\n\n**** " start-time ". ")
  (save-excursion
    (if (string= end-time "")
        (setq timestamp-string date)
      (setq timestamp-string (concat date "-" end-time)))
    (insert "\n<" timestamp-string ">\n\n")))

There’s a problem I haven’t solved here, which is that org-read-date brings up the date prompt and lets you select a date, including a time or time range. If you select a time, it will be included in the date. If you select a time range, say 19:30-22:30, it ignores the time and the date object returned uses the current time. That’s not what we want.

So when the date prompt comes up:

Date+time [2015-03-07]: _ => <2015-03-07 Sat>

I can put in a new date and start time, say “2015-04-01 11:00”:

Date+time [2015-03-07]: 2015-04-01 11:00_ => <2015-04-01 Wed 11:00-14:00>

and then press enter and that gives me the date and the start time in a single string. We extract the start time and put it into the header like so:

(setq start-time (nth 1 (split-string date)))
(if (string= start-time nil)
    (setq start-time ""))
(insert "\n\n**** " start-time ". ")

Where split-string splits the date “2015-04-01 11:00” into the two-element list (“2015-04-01” “11:00”), and nth 1 returns the second element, “11:00” (list elements, here as elsewhere, are numbered starting from 0).

If the date had been “2015-04-01” without a start time, line 1 would have set start-time to nil, which would have blown up as an argument to insert (with a “Wrong type argument: char-or-string-p, nil”), so we check for a nil and set it to an empty string instead.

For the agenda view we’ll need the end-time as well, so we grab that as a second argument, and reassemble the date, say “2015-04-01 11:00”, and end-time, say “14:00”, into the active timestamp:

(if (string= end-time "")
    (setq timestamp-string date)
  (setq timestamp-string (concat date "-" end-time)))
(insert "\n<" timestamp-string ">\n\n")

though here again, if we had just pressed enter when prompted for end time we would end up with an empty string and an invalid active timestamp, like “<2015-04-01->”, so check whether end-time has been set and build the timestamp string accordingly.

And since this is a future event and we’re probably only filling in the title, we’ll use save-excursion again to fill in the active timestamp and then go back and leave the cursor on the header to fill in the title.

The End

And we’re done. Or we would be, but I would really prefer the header to read “March” instead of “2015-03 March”, and “Saturday 7 March 2015” instead of “2015-03-07 Saturday”. We might need to come back to that.

Emacs, Rvm, and the Important Distinction between Interactive and Non-Interactive Shells

Normally I run the non-graphical version of Emacs in iTerm and switch to a non-Emacs tab to run rspecs and cucumbers at the command line. The other day, since I have feature-mode installed, I typed C-c , v in a feature file to run the cucumber features. The start of the compilation buffer of the results showed:

-*- mode: compilation; default-directory: "~/code/my-project/features/" -*-
Compilation started at Sat Feb 28 00:24:30

rake cucumber CUCUMBER_OPTS="" FEATURE="/Users/seanmiller/code/my-project/features/smoke.feature"
(in /Users/seanmiller/code/my-project)
rake aborted!
undefined method `result' for #<TypeError:0x007fa8ca0f51c0>
/Users/seanmiller/.rvm/gems/ruby-1.9.3-p551@my-project/gems/activerecord-3.2.13/lib/active_record/connection_adapters/postgresql_adapter.rb:1147:in `translate_exception'

with a long stack trace mostly in the activerecord adapters.

The third line (starting “rake cucumber…”) is the command that was run, so we can type M-x eshell to get a shell within Emacs, paste in the line and run it, and there the cucumbers run without error. We can also type M-x compile to get a compile command prompt directly, paste in the line again, and we end up with the error we started with. The compile command prompt and the Emacs shell are behaving differently.

What if we run echo $PATH in both? The shell path starts with several rvm-related directories. The compile prompt path starts with /usr/bin and the rvm-related directories are nearer the end.

That might well do it, but what is causing it? The manual for the compilation shell notes that it brings up a noninteractive shell. The manual for zsh notes that while ~/.zshenv is run for almost every zsh, ~/.zshrc is run only for interactive shells. So if I get the latest version of the rvm dotfiles via rvm get stable --auto-dotfiles and move the two lines that show up in ~/.profile not to my ~/.zshrc file but to my ~/.zshenv file, and then restart Emacs, I can run the cukes from M-x compile or C-c , v and they run cleanly.

What’s more, in the compilation buffer of the test results, the comments referring to the line numbers of feature and step definition files are live links: press enter on them to jump to the lines of the code files. Maybe there is a point to running them in Emacs after all…

Shortcuts to Default Directories

[Now largely historical. See ETA for achieving the same effect with bookmarks, and ETA 2 for achieving the same effect with Hydra, which feels better still.]

Yesterday’s Emacs coaching session with Sacha Chua included a sidebar on jumping to and setting default directories. Sacha’s .emacs.d uses registers for this, and her code sets defaults and org-refile targets.

I found later that I had six shortcuts I’d clean forgotten about, that set the default directory and open either dired or a specific file:

(global-set-key (kbd "C-c C-g C-c")
                (lambda ()
                  (interactive)
                  (setq default-directory "~/.emacs.d/")
                  (dired ".")))

(global-set-key (kbd "C-c C-g C-h")
                (lambda ()
                  (interactive)
                  (setq default-directory "~/Dropbox/gesta/")
                  (find-file "2015.org")))

But a forgotten solution doesn’t count, and having to remember triple-decker key bindings doesn’t help either.

A general solution for the second problem is that an incomplete key binding followed by ? opens a help buffer with all the bindings starting with that sequence. C-c C-g ? on my machine yielded:

Global Bindings Starting With C-c C-g:
key             binding
---             -------

C-c C-g C-c ??
C-c C-g C-d ??
C-c C-g C-e to-english
C-c C-g C-f to-french
C-c C-g C-h ??
C-c C-g C-p ??
C-c C-g C-r ??
C-c C-g C-u ??
C-c C-g C-w ??

We can make this more useful by switching from defining the key binding functions anonymously inline to moving them out to their own named functions, thus:

(defun my/to-emacs ()
  (interactive)
  (setq default-directory "~/.emacs.d/")
  (dired "."))

(defun my/to-today ()
  (interactive)
  (setq default-directory "~/Dropbox/gesta/")
  (find-file "2015.org"))

(global-set-key (kbd "C-c C-g C-c") 'my/to-emacs)
(global-set-key (kbd "C-c C-g C-h") 'my/to-today)

Which fixes those two ?? entries:

C-c C-g C-c my/to-emacs
C-c C-g C-h my/to-today

Another niggle is that I’ve overloaded C-c C-g across three different kinds of commands. Let’s make C-x j the shortcut specifically and only for jumping to a new default directory, and extract the duplicated code into helper functions while we’re at it. After:

(defun my/to-file (dir file)
  (interactive)
  (setq default-directory dir)
  (find-file file))

(defun my/to-dir (dir)
  (interactive)
  (setq default-directory dir)
  (dired "."))

(defun my/to-gesta-file (file)
  (interactive)
  (my/to-file "~/Dropbox/gesta/" file))

(defun my/to-emacs-config ()
  (interactive)
  (my/to-file "~/.emacs.d/" "sean.org"))

(defun my/to-autrui ()
  (interactive)
  (my/to-dir "~/code/autrui/"))

(defun my/to-gesta ()
  (interactive)
  (my/to-dir "~/Dropbox/gesta/"))

(defun my/to-today ()
  (interactive)
  (my/to-gesta-file "2015.org"))

(defun my/to-readings ()
  (interactive)
  (my/to-gesta-file "readings.org"))

(defun my/to-writings ()
  (interactive)
  (my/to-gesta-file "writings.org"))

(defun my/to-twc ()
  (interactive)
  (my/to-dir "~/Dropbox/gesta/twc/"))

(global-set-key (kbd "C-x j e") 'my/to-emacs-config)
(global-set-key (kbd "C-x j a") 'my/to-autrui)
(global-set-key (kbd "C-x j g") 'my/to-gesta)
(global-set-key (kbd "C-x j h") 'my/to-today)
(global-set-key (kbd "C-x j r") 'my/to-readings)
(global-set-key (kbd "C-x j w") 'my/to-writings)
(global-set-key (kbd "C-x j t") 'my/to-twc)

C-x j ? then yields:

Global Bindings Starting With C-x j:
key             binding
---             -------

C-x j a my/to-autrui
C-x j e my/to-emacs-config
C-x j g my/to-gesta
C-x j h my/to-today
C-x j r my/to-readings
C-x j t my/to-twc
C-x j w my/to-writings

Which is an improvement, but the “Bindings Starting With” help menu is not interactive: we still have to either remember the triple-decker key binding, or type C-x j ? to find the list and then type C-x j h (say) to get the specific shortcut we want. It would be nice if we could over-ride C-x j ? and have it prompt us for one more character to select which of the seven jumps we wanted.

Something like this, in fact:

(defun my/pick-destination (pick)
  (interactive "ce = ~/.emacs.d/sean.org a = ~/code/autrui/ g = ~/Dropbox/gesta/ h = …/2015.org r = …/readings.org w = …/writings.org t = …/twc/ ? ")
  (case pick
    (?e (my/to-emacs-config))
    (?a (my/to-autrui))
    (?g (my/to-gesta))
    (?h (my/to-today))
    (?r (my/to-readings))
    (?w (my/to-writings))
    (?t (my/to-twc))))

(global-set-key (kbd "C-x j ?") 'my/pick-destination)

interactive "c… takes a single character (see further options here), so the conditional is on a character, which in Emacs Lisp is written as ? followed by the character. If there were a single condition we might use (when (char-equal ?e pick) …), but since there are seven of them, we look instead for Emacs Lisp’s equivalent of a switch statement, and find it in case (an alias for cl-case).

So what this does, when you type C-x j ?, is put a prompt of options in the echo area at the bottom of the screen, and if you type one of the significant letters, it jumps to the corresponding destination.

Purists would probably prefer to bind 'my/pick-destination to something other than C-x j ? (C-x j j say), so that if we ever bound other commands to something starting with C-x j we would still be able to discover them with C-x j ?. It’s also easier to type because it doesn’t need the shift key for the third element. Having demonstrated that we could over-ride C-x j ? if we wanted to, I’m probably going to side with the purists on this one.

And we’re done. Commit.

ETA: Easier Ways To Do It

That works, but since (see Sacha’s comment below) there are always easier ways to do it:

Simpler Code

find-file will open a file, or open dired if it is passed a directory instead of a file, and opening either will change the default directory. So we can replace all the extracted 'my/to-file and 'my/to-dir methods with simple find-file calls:

(defun my/to-emacs-config ()
  (interactive)
  (find-file "~/.emacs.d/sean.org"))

(defun my/to-autrui ()
  (interactive)
  (find-file "~/code/autrui/"))

… etc …

Using Bookmarks (No Custom Code, Even Simpler)

While 'my/pick-destination gets around the non-interactive nature of ? (minibuffer-completion-help, e.g. C-c C-g ? for the list of completions to C-c C-g), using straight-up bookmarks lets you do that without custom code:

C-x r m {name} RET ;; bookmark-set ;; sets a {named} bookmark at current location
C-x r b {name} RET ;; bookmark-jump ;; jump to {named} bookmark
C-x r l ;; list-bookmarks ;; list all bookmarks

So having set bookmarks in the same seven places, either files or directories, we could list-bookmarks and get

% Bookmark File
autrui ~/code/autrui/
emacs ~/.emacs.d/sean.org
gesta ~/Dropbox/gesta/
now ~/Dropbox/gesta/2015.org
readings ~/Dropbox/gesta/readings.org
twc ~/Dropbox/gesta/twc/
writings ~/Dropbox/gesta/writings.org

And this list is interactive. Or we could call C-x r b (bookmark-jump) and start typing the destination we want. (If we get bored of typing, we can reduce the bookmark names to single characters, remembering that C-x r l (list-bookmarks) will give us the translation table if we forget, and then we’re back to being five keystrokes away from any bookmark, without having to add any extra code.)

Using Bookmark+ (And Describing and Customizing Faces)

If we want to be able to do more advanced things with bookmarks – tag them, annotate them, rename them, run dired-like commands on the bookmark list – we can grab the Bookmark+ package and (require 'bookmark+) in our .emacs. (Having done that, if we press e by a line in the bookmark list, for instance, we can edit the lisp record for the bookmark, to rename it or change the destination or see how many times it has been used.)

One problem I had with bookmark+ is that the bookmark list was displaying illegibly in dark blue on a black background. To fix this, I needed to move the cursor over one of the dark-blue-on-black bookmark names and type M-x describe-face, and it reported the face (Emacs-speak for style) of the character, in this case Describe face (default `bmkp-local-file-without-region'):. Pressing enter took me to a buffer which described the face and provided a “customize this face” link at the end of the first line. I followed (pressed enter on) that link to get to a customize face buffer which let me change the styling for that element. On the line:

[X] Foreground: blue [ Choose ] (sample)

I followed (pressed enter on) Choose, it opened a buffer of colour options, I scrolled up to a more visible one, pressed enter again, and got back to the customize face buffer, then went up to “Apply and Save” link and pressed enter again there. Going back to the bookmark list, the bookmarks were visible. The change is preserved for future sessions in the init.el file:

(custom-set-faces
 ;; custom-set-faces was added by Custom.
 ;; If you edit it by hand, you could mess it up, so be careful.
 ;; Your init file should contain only one such instance.
 ;; If there is more than one, they won't work right.
 '(bmkp-local-file-without-region ((t (:foreground "green")))))

ETA 2: Using Hydra Instead

Heikki Lehvaslaiho suggested using hydra instead, and as the bookmark solution required a couple of extra keystrokes and I’d been curious about hydra anyway, I thought I’d give it a go.

(require 'hydra)
(global-set-key
 (kbd "C-c j")
 (defhydra hydra-jump (:color blue)
   "jump"
   ("e" (find-file "~/.emacs.d/sean.org") ".emacs.d")
   ("c" (find-file "~/.emacs.d/Cask") "Cask")
   ("a" (find-file "~/code/autrui/") "autrui")
   ("h" (find-file "~/Dropbox/gesta/2015.org") "hodie")
   ("r" (find-file "~/Dropbox/gesta/readings.org") "readings")
   ("w" (find-file "~/Dropbox/gesta/writings.org") "writings")
   ("t" (find-file "~/Dropbox/gesta/twc/") "twc")))

I like it. We’re back down to three keystrokes from five and, since we’re providing labels for the commands, we also get the descriptive and interactive and self-updating index which was the big advantage of the bookmark route. If you press C-c j and don’t immediately carry on, it prompts:

jump: [e]: .emacs.d, [c]: Cask, [a]: autrui, [h]: hodie, [r]: readings, [w]: writings, [t]: twc.

Much better. Thanks for the prompt, Heikki!