July 23, 2018

RSS Feed Validation with Jekyll Feed

  —Ran into some problems getting IFTTT to work with Jekyll Feed.

I couldn’t get IFTTT to pick up my RSS feed on my new blog. With the help of the W3C Feed Validator I was able to diagnose the problems.

The site URL in the YAML configuration shouldn’t end with a ‘/’. I changed this URL as follows.

diff --git a/_config.yml b/_config.yml
index 6b317a7..dc2b996 100644
--- a/_config.yml
+++ b/_config.yml
@@ -9,7 +9,7 @@ timezone: America/Toronto
 future: false
 
 # url is currently only used only for the RSS feed in feed.xml
-url: https://bminard.github.io/
+url: https://bminard.github.io
 
 # baseurl will often be '', but for a project page on gh-pages, it needs to
 # be the project name.

The W3C Nu HTML Validator showed some problems in the templates. These included missing closing div tags.

I fixed a problem with URLs by reading Clearing Up Confusion Around baseurl – Again.

gist-it.appspot.com embeds files from a GitHub repository the way a gist is embedded. I had script files embedded in blog posts. For example:

<script src="http://gist-it.appspot.com/https://github.com/bminard/experimental/blob/master/reviewboard/diff">
</script>

This worked flawlessly on Blogger but doesn’t render on GitHub Pages. The feed validator also flags it because the HTML script tag shows up in the feed. No solution currently.

I added html-proofer to my Jekyll build chain. This helped me identify broken links along with other issues in the posts.

July 2, 2018

Hatching a Catastrophe

  —A look at other essays by Frederick P. Brooks and the importance of hustle.

In The Mythical Man-Month (Worth Reading Again), I describe the value of re-reading this classic essay. In this article, I recommend looking at another essay, “Hatching a Catastrophe”.

In this essay, Brooks calls out how a big catastrophe is, in some ways, easier to handle because there is a clear motivation for change and action. It’s the little slips of a day or half-day that are harder to manage because they creep up on you. He makes recommendations to counter these slips and includes valuable observations. Importantly, he describes what he refers to as hustle.

Hustle embodies a sense of urgency in completing the current task. Completing early creates a positive schedule buffer that insulates against the inevitable setbacks that arise. It’s a form of insurance.

His advice for achieving hustle? One must become excited about a one-day slip.

Hustle and urgency are closely connected. Urgency is an important component of focus. Focus improves execution. If you become excited about a one-day slip you have an opportunity to create a sense of urgency for your project. Urgency increases the odds of recovery and may help you recover lost time and keep your insurance (schedule buffer).

His advice for the manager is excellent: use separate meetings to collect status and to action problems. Publish both estimated and scheduled dates. Invest in a “Plans and Controls” team whose purpose is to monitor and communicate project plans.

June 26, 2018

Use of Heuristics in #NoEstimates

  —A call to action for the #NoEstimates community.

I am interested in the relationship between #NoEstimates, heuristics and the Kelly criterion. I first heard of this criterion in a tweet from @duarte_vasco. It was written in response to another from @galleman.

I’ve found proponents of #NoEstimates to be quiet on what they want to achieve. This is the first concrete discussion I’ve run across that provides insight into the thinking of #NoEstimates advocates.

In discussing #NoEstimates, I think it important to keep survivorship bias top of mind. There is evidence that software estimates don’t work well. There is less evidence that #NoEstimates is the solution. Is #NoEstimates the solution? Maybe, maybe not. It may provide insight on new ways of thinking about software delivery.

The pertinent parts of @galleman’s tweet:

The continued conjecture that decisions can be made in the presence of uncertainty without estimates … is simply a fallacy, due to lack of understanding of the principles or willfully intent to deceive (sic)

and @duarte_vasco’s response:

Uncertainty can never be managed by estimates. Only through survival heuristics like the Kelly criterion. .. You survive uncertainty, you don’t remove it. Basic understanding of complex systems.

I love the contradiction:

  • decisions can’t be made in the presence of uncertainty without estimates.
  • estimates are not a tool for managing uncertainty.

A close look at these in opposition might form the basis of a useful education on #NoEstimates.

Of course, decisions are made in the presence of uncertainty without estimates. Estimates are a tool to approximate the true value of something. Their usefulness lies in how closely they approximate that value.

Creating useful estimates is a resource allocation problem: invest or don’t. If you choose to invest, then the question becomes how much.

The purpose of #NoEstimates is to explore decision making without estimates. This implies low (zero?) investment in estimating.

Exploration isn’t fallacy. It’s an attempt to find new and better ways of delivering software. To accept #NoEstimates is to agree to explore our ability to make decisions without estimates.

What about uncertainty?

The lack of certainty, a state of limited knowledge where it is impossible to exactly describe the existing state, a future outcome, or more than one possible outcome.

Uncertainty is part of the motivation for #NoEstimates. If it were not, we’d have true values equaling estimates and the debate would be over.

Uncertainty is why risk management is a project management best practice. No argument that there are well-defined principles in place for both risk and project management.

The response says to use survival heuristics, like the Kelly criterion. What is a survival heuristic? I couldn’t find a definition, so I went with adaptive heuristics and decision making.

A paper, Heuristic Decision Making, says this:

This research indicates that (a) individuals and organizations often rely on simple heuristics in an adaptive way, and (b) ignoring part of the information can lead to more accurate judgments than weighting and adding all information, for instance for low predictability and small samples.

A heuristic:

A strategy that ignores information to make decisions faster, more frugally, and/or more accurately than more complex methods.

Mapping Heuristic Decision Making onto #NoEstimates:

  • I liken traditional project management to the description of rational reasoning.
  • I liken #NoEstimates to a heuristic and the description of irrational reasoning.

Typical thinking says that people often rely upon heuristics but would be better off in terms of accuracy if they did not. In my view, @galleman rejects the notion of #NoEstimates (heuristics) preferring accuracy (estimates).

This model of rational reasoning requires knowledge of all relevant alternatives, their consequences and probabilities, and a predictable world without surprises.

Bayesian decision theory calls these situations small worlds. In a large world, relevant knowledge is missing or must be estimated from small samples. This means the conditions of rational decision theory are not met. In large worlds, one cannot assume that rational models automatically provide the correct answer. They may provide incorrect answers.

This situation leads to less-is-more effects:

when less information leads to better results than more information.

Part of the debate between traditional project management and #NoEstimates lies in the notion that heuristics can outperform sophisticated models.

It is incorrect to assume that project management, as viewed through the rational reasoning model, is ill-equipped to deal with large world problems. That’s why it exists: to deal with large world problems. The interesting question is whether #NoEstimates can develop heuristics to achieve the less-is-more effect.

The paper describes that when heuristics are formalized, certain large worlds lend themselves to simple heuristics that provide better results than standard statistical methods. (The paper contains examples of heuristics used in large world problems.) There is a point where more is not better. In my view, @duarte_vasco views #NoEstimates this way.

The paper pursues two research questions:

  1. Description: which heuristics do people use in which situations?
  2. Prescription: when should people rely on a given heuristic rather than a complex strategy to make more accurate judgments?

These questions are aligned with the questions #NoEstimates should answer. The great thing about this paper is that it goes on to describe a framework that could be applied to #NoEstimates:

  • The Adaptive Toolbox:

    the cognitive heuristics, their building blocks (e.g., rules for search, stopping, decision), and the core capacities (e.g., recognition memory) they exploit.

  • Ecological Rationality:

    investigate in which environments a given strategy is better than other strategies (better—not best—because in large worlds the optimal strategy is unknown).

Ecological rationality is a reason why @galleman and @duarte_vasco are in opposition. I bet the opposition comes from implied differences in environments. (@galleman’s LinkedIn profile implies a regulated work environment; @duarte_vasco’s less so.)

The paper discusses many different models of heuristics. A compelling model in relation to my understanding of #NoEstimates is the 1/N Rule:

Another variant of the equal weighting principle is the 1/N rule, which is a simple heuristic for the allocation of resources (time, money) to N alternatives:

  • 1/N rule: Allocate resources equally to each of N alternatives.
  • This rule is also known as the equality heuristic.
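As a minimal sketch of the 1/N rule with illustrative numbers (the budget of 100 units and the 4 alternatives are assumptions, not from the paper):

```shell
# 1/N rule: allocate a resource equally across N alternatives.
# budget and n below are assumed values for illustration only.
awk 'BEGIN {
  budget = 100   # assumed total resource (time, money)
  n = 4          # assumed number of alternatives
  printf "per-alternative share: %.0f\n", budget / n
}'
# prints: per-alternative share: 25
```

Note that the rule ignores all information about the individual alternatives, which is exactly what makes it a heuristic.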

It is also applicable to investment, which brings in the Kelly criterion:

a formula used to determine the optimal size of a series of bets in order to maximise the logarithm of wealth. In most gambling scenarios, and some investing scenarios under some simplifying assumptions, the Kelly strategy will do better than any essentially different strategy in the long run (that is, over a span of time in which the observed fraction of bets that are successful equals the probability that any given bet will be successful).
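For a single win/lose bet, the Kelly criterion reduces to the fraction f* = (bp - q)/b = p - q/b, where p is the probability of winning, q = 1 - p, and b is the net odds received. A small sketch with assumed numbers (p and b are illustrative, not from either tweet):

```shell
# Kelly fraction f* = p - q/b for a single binary bet.
# p and b below are assumed values for illustration only.
awk 'BEGIN {
  p = 0.6        # assumed probability of a winning bet
  b = 1.0        # assumed net odds (even money)
  q = 1 - p
  printf "Kelly fraction: %.2f\n", p - q / b
}'
# prints: Kelly fraction: 0.20
```

Under these assumptions you would stake 20% of the bankroll on each bet. Contrast this with the 1/N rule, which allocates equally and ignores p and b entirely.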

The connection between @duarte_vasco’s tweet and the Kelly criterion is weakly tied to the 1/N rule. It arises from the similarity between placing bets or investments and allocating by the 1/N rule.

I can’t find a connection between the Kelly criterion and project planning. I looked into whether there was a connection between Markov processes, the Kelly criterion, and project management and didn’t come up with anything I could put together. I’m stumped on the connection between the Kelly criterion and #NoEstimates, beyond the fact that it’s a heuristic. If you’ve made the connection, please let me know!

What about complex systems? @duarte_vasco implies that uncertainty is a property of complex systems and that heuristics are a way of dealing with these systems. Ok.

A summary of Heuristic Decision Making:

In all, I see a place for both traditional project planning and #NoEstimates. The challenge for #NoEstimates advocates is to share the insight behind the heuristics they develop and, importantly, when to apply them and which domains they are best suited to.

I’ll leave this as a call to action for the #NoEstimates community. Help:

  • identify and characterize the toolkit you are creating.
  • identify where (the domains) this toolkit is most effective.

Thanks to @duarte_vasco and @galleman for sharing. I learned something from each of you.

June 3, 2018

#NoEstimates, Revisited

  —Can #NoEstimates lead to better decision making on project delivery?

People are still talking about “NoEstimates”. I first looked at this in The Mythical Man-Month (Worth Reading Again).

A history of #NoEstimates is in Estimates? We Don’t Need No Stinking Estimates!. The origin is a collection of posts by Woody Zuill. There is an active Twitter hashtag: #NoEstimates.

Recently, I caught a discussion on Twitter wherein:

@duarte_vasco says:

For the sake of clarity, let’s agree in (sic) this. Estimation = asking the team to sit down and evaluate the cost/duration/effort of a piece of work they haven (sic) done yet. #NoEstimates = the team just focuses on delivering valuable increments of software at a regular cadence.

Two interesting responses.

@jcoplien says:

To know whether an increment is valuable, you often need to know its market window. You also may need to know its cost. If so, and if your definition is right, #NoEstimates makes no sense. It must be far more finessed than this.

And:

@WoodyZuill says:

Not my exact useage (sic), but certainly a definition that would allow for meaningful discussion.

Interesting. The entire thread.

Woody is the originator of the concept. Vasco published a book on it. James doesn’t dismiss it. Just the proposed definition.

My take on this is there is something worth exploring but there is still much debate. It’s interesting that Woody doesn’t offer an alternative definition or expand on his usage. A simple definition would add clarity.

Woody’s definition, from 2013:

#NoEstimates is a hashtag for the topic of exploring alternatives to estimates [of time, effort, cost] for making decisions in software development. That is, ways to make decisions with “No Estimates”.

My take is that #NoEstimates is about creating awareness of and prompting investigation into alternative approaches to estimating and the decision making around estimates.

So #NoEstimates isn’t about eliminating estimates. It’s about finding alternatives to estimates of cost, time and effort. And it’s about addressing dysfunction in the decision making that leverages estimates.

In effect, it’s about trying to do something sensible about a problem we all have.

I wonder about the psychological implications of #NoEstimates. I’ve had experience where

  • my teams estimate work but don’t buy into the estimates.
  • my teams estimate near term deliverables and commit to those deliverables.
  • team members recognize that the business needs to know when something will be available.

I’m sure this is a common experience for many teams.

Clearly, there is a gap. There is no simple answer.

May 28, 2018

Experiments with Packer and Vagrant on CentOS

  —A look at Vagrant and Packer (CentOS).

In Experiments with Packer and Vagrant on Debian I discussed my experience with Pierre Mavro's packer-debian project. Here I discuss my experience with Packer and Vagrant on CentOS.

My exploration of CentOS relies on work done by Gavin Burris. I extended Gavin's work to include CentOS 7.2. Using Gavin's example, I was able to bring up a Vagrant box using VirtualBox on CentOS 7.2-1511 in a matter of minutes.

One problem I encountered in my test environment took a while to solve. I executed:

        > packer version
        > Packer v0.10.1
        > packer build centos7.json

dd returns a non-zero return code with these commands:

        > sudo dd if=/dev/zero of=/boot/zero bs=1M
        > sudo rm -f /boot/zero
        > sudo dd if=/dev/zero of=/zero bs=1M
        > sudo rm -f /zero

Packer reports the following errors:

        > virtualbox-iso: dd: error writing ‘/boot/zero’: No space left on device
        > virtualbox-iso: 397+0 records in
        > virtualbox-iso: 396+0 records out
        > virtualbox-iso: 415494144 bytes (415 MB) copied, 1.05651 s, 393 MB/s
        > ==> virtualbox-iso: Unregistering and deleting virtual machine...
        > ==> virtualbox-iso: Deleting output directory...
        > Build 'virtualbox-iso' errored: Script exited with non-zero exit status: 1

Ouch. Packer reasonably deletes what it believes to be a broken VirtualBox machine.

To correct this, replace the dd commands above with the following:

        > sudo dd if=/dev/zero of=/boot/zero bs=1M || sudo rm -f /boot/zero
        > sudo dd if=/dev/zero of=/zero bs=1M || sudo rm -f /zero

Problem solved!

The root cause is that dd is intended to fill the device, which means it must exit with a non-zero return code.
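The fix works because, in a `cmd1 || cmd2` compound command, cmd2 runs only when cmd1 fails, and the compound command exits with cmd2’s status. A small sketch, using a simulated failure in place of the real dd:

```shell
# Simulate dd exiting non-zero ("no space left on device"),
# then clean up; rm -f succeeds even when the file is absent,
# so the compound command exits 0 and Packer no longer aborts.
( exit 1 ) || rm -f /tmp/zero
echo "compound exit status: $?"
# prints: compound exit status: 0
```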

While interesting, I ended up removing the dd commands altogether. My modifications to centos7.json.