The Economics of Programming: Externalized vs. Internalized Costs

[Diagram: external costs (Econation NZ)]

Many days, I feel like my work as an agile consultant is simply internalizing costs that had been externalized.

The first example that comes to mind: software development done too quickly, creating technical debt as it goes. In the short term, a project like that can seem very successful, exceeding expectations for delivery time and customer satisfaction. And then the devs rotate on to another greenfield project.

I once argued to a manager that a certain project was producing “negative value”, but I didn’t get far with that argument 😉

Contrast this with programming under a method like Test-Driven Development, where, many times per day, a small coding effort is followed by a small “refactoring” (cleanup) effort.

Python vs. Haskell round 2: Making Me a Better Programmer

Another point for Haskell

In Round 1 I described the task: find the number of “Titles” in an HTML file. I started with the Python implementation, and wrote this test:

def test_finds_the_correct_number_of_titles():
    assert title_count() == 59

Very simple: I had already downloaded the web page, and so this function had to do just two things: (1) read in the file, and (2) parse the HTML and count the Title sections.

Not so simple in Haskell

Anyone who’s done Haskell might be able to guess at the problem I was about to have: The very same test, written in Haskell, wouldn’t compile!

it "finds the correct number of titles" $
    titleCount `shouldBe` 59

There was no simple way to write that function to do both, read the file and parse it, finally returning a simple integer. I realized that the blocker was Haskell’s famous separation between pure and impure code. Using I/O (reading a file) is an impure task, and so anything combined with it becomes impure. My function’s return value would be locked in an I/O wrapper of sorts.
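
Here’s a minimal sketch of the shape I’d gotten myself into (my own reconstruction, with a placeholder counting function rather than the real parsing code):

countTitles :: String -> Int
countTitles = length . lines      -- stands in for the real Title counting

titleCount :: IO Int
titleCount = fmap countTitles (readFile "nrs.html")

Because readFile lives in IO, titleCount has type IO Int rather than Int, so titleCount `shouldBe` 59 is rejected: shouldBe wants two plain values it can compare and print, and there’s no way to treat an IO Int as one.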

I got frustrated and thought about dumping Haskell. “Just another example of how it’s too difficult for practical work,” I thought. But then I wondered: how hard would it be to read in the file as a fixture in the test, and then call the function? I’d just need to pass the HTML as a parameter. And yep, this worked:

it "finds the correct number of titles" $ do
    html <- readFile "nrs.html"
    titleCount html `shouldBe` 59

As I refactored the code to pass this test, I realized that this is much better: doing I/O and algorithmic work should always be separate. I had been a little sloppy, or lazy, in setting up my first task. The app with the Haskell-inspired change will be more reliable and easier to test, regardless of which language it ends up being written in.
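
In code terms, the separation looks roughly like this; the matching logic below is a hypothetical stand-in for the real NRS index parsing:

import Data.List (isInfixOf)

-- Pure: all the algorithmic work takes the HTML as a plain String
-- and returns a plain Int.
titleCount :: String -> Int
titleCount = length . filter ("Title" `isInfixOf`) . lines

-- Impure: a thin wrapper performs the one I/O step, whether in the
-- test (as above) or in the application's main.
main :: IO ()
main = do
  html <- readFile "nrs.html"
  print (titleCount html)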

See also:

Python vs. Haskell round 1: Test Output

The point goes to Haskell

On the surface, it may sound silly to compare these two languages because they’re about as opposite as you can get: Python is interpreted, dynamically typed, and slightly weakly typed as well. Haskell, on the other hand, is compiled, statically and strongly typed. But they’re both open source, and they both have large collections of libraries which are helpful for my work.

I’m creating NevadaLaws.org, an app displaying the Nevada Revised Statutes, similar to my first site, OregonLaws.org. Most of my code is Ruby and Rails, but I want to improve the architecture and maybe move to a new language. So I’ve been developing the scraping and import code in parallel in my two candidate languages, Python and Haskell.

The screenshots above show the results of running my first failing test in each language — failing for pretty much the same reason in each case. My first test: parse the NRS index page and return the number of Titles found. Python’s pytest puts a lot of junk on the screen compared to Haskell’s hspec. Reading the hspec output is a thing of beauty, really.

Next: Round 2, Making Me a Better Programmer

Engineering with Empathy: how do we decide when to fix?


It’s a sappy title, but bear with me. I’m wondering whether the best lessons of diversity and inclusion can be applied to software development. For example, if something is offensive to others, then it’s worth looking for an alternative — even if I’m not personally offended.

I witnessed an interesting disagreement on a software project: some developers joined a project with a large codebase which had been up and running for a while. They stumbled upon an aspect of the code that they found hard to understand. These new-to-the-project devs proposed a small change:

Why is this named Dwankie? It keeps tripping us up. Can we rename it to ApplicationConfig? It’d be a lot clearer.

The group’s current decision-making process: Is no clear majority dissatisfied? Then keep the status quo.

The long-time devs disagreed with the change. They acknowledged that it would do no harm, but in their view the code was understandable enough as is. Interestingly, every dev who was unbothered by the odd naming advocated keeping it. They also argued that this wasn’t an important issue. To me, this argument boiled down to “we don’t think it’s a problem”, which struck me as analogous to “I don’t find it offensive” in the context of inclusion, and so similarly invalid. The change was disallowed.

A better way to decide: Is a significant group dissatisfied? Then we allow change.

Thinking about it, “But is a different group of people ok with things as they are?” is not a good guideline. Instead, we should use a rule of thumb based on empathy when deciding whether to act: a sizeable group of people who are dissatisfied with the status quo is reason enough to support change. We shouldn’t require a unanimous or even a majority vote.

Why I don’t use let/let! in my RSpec

At work, we’re deciding on our test-writing style: let/let! blocks like let(:arg) { 5 } vs. instance variables defined in a setup method like @arg = 5.

  • I’ve found no advantages to let, but I have experienced disadvantages.
  • I’ve found no disadvantages to instance variables.

And so, 👍 for instance variables.


I’ve written many specs, and I’ve read the RSpec docs and betterspecs.org many times.

I don’t use let/let! because

  • the purported advantage of lazy evaluation is never actually realized: I’m almost always running all the tests in a file, so there’s no time-efficiency gain;
  • the let and let! API, and their magic, increase the code’s complexity, and so would need to be balanced out by some other advantage;
  • let introduces magic and apparently nondeterministic behavior which has broken my tests, and which I’ve only been able to fix by converting to easy-to-understand @-variable assignments;
  • let introduces non-Ruby syntax — something that looks like an ordinary local variable isn’t one anymore.

So for me, let is fixing a problem I don’t have, and in doing so, introduces new problems.

The Benefits (not features!) of Programming with Haskell

I’m just a couple of months in, and have written my first production Haskell app, a PDF parser for Oregon laws. Programming it feels different, in a good way. Looking over the list below, two themes — easy and fast — stand out. Compared to OO languages:

  • It’s easy to jump back into previous work;
  • easy to test my code;
  • easy to refactor;
  • easy to code in a familiar style;
  • the programs are lightning fast;
  • I’m delivering features faster;
  • I can use Atom as an IDE;
  • it’s simple to develop on a Mac and deploy on Linux;
  • it’s easier to understand someone else’s code.

Plus, I just subjectively look forward to it. I think this all adds up to a low mental burden, yet the language is very stimulating.

I decided to focus on “benefits” because really, that’s what we’re all after. This may sound controversial, but I don’t think anybody “wants” referential transparency. That’d be like wanting a regular cleaning service. Instead, what we do want is the benefit: a clean home. Or, in programming terms, a clean codebase. So let’s talk about those features now:

The features that enable those benefits

I thought a lot about why I’m seeing these benefits. Here’s a quasi root-cause analysis for them:

Easy to come back to a project and do a little work on it, because of low cognitive load, because everything I need to understand a chunk of code is on the screen in front of me, because of pure functions which have very clear input and output, and because my text editor (Atom) acts as a full IDE showing me the types of every variable and signatures of every function.

Easy to refactor, because I can simply cut and paste functions from one file to another, because there are no globals, only constants, and because the functions — pure and impure — are easy to compose. Also because I can change my data structures and the compiler shows the necessary changes in the code, e.g. forcing every property to be set, and to the correct type. And because I can rename anything and the compiler shows every reference needing updating.
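
For instance, here’s a made-up sketch of the data-structure point (Statute and its fields are hypothetical, not my actual types):

data Statute = Statute
  { number :: Int
  , name   :: String
  }

example :: Statute
example = Statute 1 "Preliminary Provisions"

-- Add a field, say chapter :: Int, and "Statute 1 ..." becomes a type
-- error at every construction site until the new value, of the right
-- type, is supplied. Rename the number field, and the compiler lists
-- every reference that needs updating.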

Easy to code in a familiar style, because new functions can be composed from others, and new operators can be defined — without sacrificing the other benefits.
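
A toy illustration of both points (nothing here is from my parser):

import Data.Char (toUpper)

-- A new function composed entirely from existing ones.
shout :: String -> String
shout = map toUpper . reverse

-- A custom operator, defined like any other function; this one is
-- reverse application, the same idea as Data.Function.&.
(|>) :: a -> (a -> b) -> b
x |> f = f x

demo :: String
demo = "statutes" |> shout      -- "SETUTATS"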

I can develop on OS X and deploy on Linux, because the libraries are cross-platform.

The programs are lightning fast, because they’re compiled and because Haskell has lots of optimizations, like laziness.
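
Laziness in one line (a made-up helper, not from my app):

import Data.List (isInfixOf)

firstTenTitleLines :: String -> [String]
firstTenTitleLines html = take 10 (filter ("Title" `isInfixOf`) (lines html))

-- Only as many lines as are needed to find ten matches are ever examined;
-- the rest of the input is never processed.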

I’m delivering features faster, because I’m writing much less code, and because I don’t need to write many tests. In Ruby, my test suites are perhaps 2-3x the size of my code base. In Haskell, I’m only TDD’ing and testing particular pieces which can be tricky to get right, like regex-based parsing functions. The compiler tests the rest for me automatically. I’m also writing fewer tests because the code itself is declarative (as well as type-checked). In other words, it reads like a test already, and so a test would only be redundant.
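
A hypothetical illustration of what I mean by declarative: the definition below is already a near-verbatim statement of its requirement (“the chapter numbers, sorted, without duplicates”), so a unit test would mostly restate the same sentence.

import Data.List (nub, sort)

chapterNumbers :: [Int] -> [Int]
chapterNumbers = sort . nub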

Based on my experience so far, Haskell’s my current language of choice for new server-side code.


I’ve decided to make this the first part in a series about my real-world experience of using Haskell for application programming. I’ll follow up with a post on some of the rough edges in the ecosystem to be aware of, and the best path I’ve found for learning Haskell.

Wifi LAN Performance Test comparing 3 routers and 6 computers

In the past year, I noticed that my wifi had gotten too slow to smoothly ssh from one computer to another. Screen sharing was also very rocky. I began to suspect that either my Macs or my Apple router was seriously underperforming.

Ping times are a great performance indicator for the apps that I use as a software developer: screen sharing, ssh, and file browsing in the Finder. So I bought a couple of new routers, and spent the weekend testing every device in every combination:

[Table: Wifi LAN ping tests. From my Google Spreadsheet; feel free to comment on it or here.]

I set my threshold for excellent performance at 10 ms because that just seems reasonable. And I set my threshold for acceptable performance at 30 ms because I can ping servers in California from my apartment in Portland in that time. So, I thought, I sure as hell ought to be able to send a packet across the room in less time than that.

[Screenshot: a crowded wifi neighborhood.] I picked up the two routers at Costco: the Linksys was $70, and the Netgear was $140. The test environment is a 700 sq. ft. one-bedroom apartment. The computers are in a semicircle around the router location, all at roughly the same distance. The iMac is the oldest. The Macbook Pro 13 is hooked up to an external monitor and its lid is closed. I think it’s logical that those two scored the worst. You can see that the network neighborhood is pretty much a worst-case scenario.

I see some surprising results:

  • The Mac computers are an order of magnitude (or two!) slower responding to pings.  Interestingly, they have no problems when originating a ping. I haven’t found much online, but this is strong evidence of a problem somewhere in their network stack. These results correspond to my subjective experience connecting to these machines.
  • The Linksys significantly outperformed the Netgear at half the price.
  • It really pays to periodically check your performance and upgrade if necessary. In my case, my newer computers support faster wifi speeds than my old router could support.
  • The newer router doesn’t solve the Mac device problem, but it significantly reduces it. These data don’t show it, but there are now fewer dropped packets, and the variance in ping times is much lower as well.
  • The cheapest, “slowest” computer is the fastest. That’s the Chromebook 14 q010nr, which I bought three years ago for around $250. It’s a fantastic machine in many ways.

DigBang: Safely unsafe Ruby hash traversal

Here’s Hash#dig!, a handy little piece of code I wrote for a project. It’s similar to Ruby 2.3’s Hash#dig, but raises an exception instead of returning nil when a key isn’t found.

places = { world: { uk: true } }

places.dig  :world, :uk # true
places.dig! :world, :uk # true

places.dig  :world, :uk, :bvi # nil
places.dig! :world, :uk, :bvi # KeyError:
                              #   Key not found: :bvi

#dig is to #[] as #dig! is to #fetch

Ruby 2.3 introduces the new Hash#dig method for safe extraction of a nested value. It’s the equivalent of a safely repeated Hash#[].

#dig!, on the other hand, is the equivalent of a safely repeated Hash#fetch. It’s “safely unsafe” 😉 raising the appropriate KeyError anywhere down the line, or returning the value if it exists. This turns out to be very easy to implement in the functional style:

class Hash
  # #fetch at each step: a missing key raises KeyError instead of returning nil.
  def dig!(*keys)
    keys.reduce(self) { |a, e| a.fetch(e) }
  end
end

The full source code

Comparing Kanban apps with GitHub integration

I’m working on this for a client: Comparing Kanban project management apps that have very good GitHub integration. So far I’ve looked at Huboard, Waffle, Zenhub, and Blossom.

Blossom.io is the strongest for our needs, due to its detailed cycle-time reporting showing where cards are spending their time. It also has some very useful project-management features, like marking an issue as “blocked” or “ready” to move to the next stage.


Online at Google Docs

Web Framework Comparison Matrix

[Image: web framework comparison key]

This is how I evaluate frameworks for clients and my own projects. I’m doing my best to be:

  • opinionated about which features matter
  • unopinionated about the actual frameworks

So you’ll be most likely to find this helpful if you value the same things I do: good CRUD support, good deployment and testing support, and an open and friendly community. Here’s the current matrix, and a link to the spreadsheet online. Help me fill in the remaining items by leaving comments.

[Images: web framework comparison matrix, parts 1 and 2]

I’m thinking about how to preserve the sources and reasoning for each score.