March 8, 2018

Twitter Timelines on GitHub Pages

  —How to add a Twitter timeline to a Jekyll blog hosted on GitHub Pages.

Adding a Twitter timeline to a Jekyll site hosted on GitHub is easy. The Twitter Developer Documentation provides instructions for this. See Embedded Timelines.

No Jekyll plugin required.

March 2, 2018

RBTLIB v0.3.0 On Read The Docs

  —A client-side library for Review Board.

In RBTLIB v0.3 Update (Part 2), I discussed introducing complexity measures to RBTLIB using radon and xenon. Recently, I've introduced Sphinx and taken advantage of Read the Docs. Sphinx is a documentation generator for Python and other languages. Read the Docs lets you create, host, and search project documentation. The two, coupled with GitHub, create a publishing environment that lets me update my project documentation, push it to GitHub, and have it published on Read the Docs within minutes. Simple.

Part of the move to Read the Docs included a cleanup of the project's naming. I moved away from rbt to rbtlib for two reasons. First, I don't want to cause confusion between my work and RBTools, which provides a command-line tool called rbt; it's not my intent to diminish the work people are doing on Review Board and RBTools by causing confusion. Second, I still don't know whether my project will be successful. I hope it may be useful to the Review Board team, but I haven't engaged anyone there.

I learned through Kenneth Reitz's Requests module that a best practice exists for API versioning: Semantic Versioning. It seems sensible to adopt, so I've moved from v0.3 to v0.3.0. Same release. Semantic Versioning also helpfully includes advice on versioning projects in an alpha or beta stage: once I achieve my goals for v0.3.0, I'll be targeting v0.4.0.

I'd been using virtualenv to develop RBTLIB and have now incorporated virtualenvwrapper. A very nice set of tools.

RBTLIB documentation: http://rbtlib.readthedocs.io/en/latest/.
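
Read the Docs builds the documentation straight from a standard Sphinx configuration in the repository, so a push to GitHub is all it takes to publish. As a minimal sketch of that setup (the metadata strings and theme below are my illustration, not copied from the rbtlib repository):

    # conf.py -- minimal Sphinx configuration; values are illustrative only
    project = 'rbtlib'
    author = 'rbtlib contributors'       # placeholder author string
    release = '0.3.0'                    # Semantic Versioning: MAJOR.MINOR.PATCH

    extensions = ['sphinx.ext.autodoc']  # pull API documentation from docstrings
    html_theme = 'sphinx_rtd_theme'      # Read the Docs theme (pip install sphinx-rtd-theme)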

February 7, 2018

Working Agreements for Agile Teams (Part 6)

  —Getting design working agreements to work better.

I’ve mentioned elsewhere that my team struggles to find the right balance for design reviews (see Working Agreements for Agile Teams (Part 4)). My initial challenge was to get people to recognize the need for collaborative design, then to get collaboration to occur, and finally to raise the bar on the quality of that collaboration so that we improved our designs.

It’s taken 18 months to learn to collaborate effectively on design. I’m confident of this because our last team meeting involved a discussion of the team’s expectations about the amount of collaboration needed for different design activities. You can’t have that discussion if people aren’t trying.

I wanted to share some details of that discussion because its intent and scope are valuable to others.

The team discussed two examples. In both cases, the design activity was handled by another team. The required software changes involved parameter changes. Parameter changes in our application typically involve changing values in a configuration file or the source code. These are simple changes to make, but they have far-reaching implications.

The question asked during the team meeting was how much involvement the team wanted with the other team in order to fulfill the working agreement. Our working agreement requires that the author and two others engage and agree on the scope of the design. Basically, do we accept the parameter change as a trivial software change or consider its broader implications?

The source code change itself isn’t the important factor affecting the design activity. Other factors include knowledge of why these parameters need to change and the rationale for the choices made regarding their manipulation by the application. This information is needed to make future changes. The design activity also involves understanding the requirement the other team was trying to fulfill.

The team discussion focused on the degree of engagement required of other team members when knowledge and experience are important. Having this conversation is a huge win as it level-sets expectations and ensures that rich and meaningful engagements occur between team members.

This is what a working agreement should foster: an environment where expectations can be set and met, and a discussion in which team members can level-set expectations with each other.

This level-set is an important component of developing team norms.

February 1, 2018

Sunk Cost, Code and Emotional Investment

  —Emotional investment in poor code.

In a Practical Application of DRY, I discussed sunk costs as part of Sandi Metz's discussion on the Wrong Abstraction. In my work on RBTLIB v0.3.0 I encountered another element of sunk cost: emotional attachment to your implementation.

I put in considerable effort between RBTLIB v0.2 and v0.3.0. This effort included at least two rewrites of the core algorithms for traversing the resource tree returned by Review Board. In my case, the core approach of using the Composite Pattern and Named Tuples didn't change. Their use did.

The issue was primarily due to grey areas in my knowledge of Python and the constraints I placed on my implementation: avoiding meta-classes, and my inexperience with using Python's __call__ method effectively. (OK, I didn't know __call__() existed when I started my implementation.)
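
To make that concrete, here is a rough sketch, not RBTLIB's actual code, of the kind of structure I'm describing: a composite built from named tuples, with a callable object (__call__) wrapping the fetch step. The names and the sample payload are invented for illustration.

    from collections import namedtuple

    # A resource node: a name, an optional leaf value, and child resources.
    Resource = namedtuple('Resource', ['name', 'value', 'children'])

    def make_resource(name, payload):
        """Recursively wrap a nested dict payload in Resource tuples."""
        if isinstance(payload, dict):
            children = tuple(make_resource(k, v) for k, v in payload.items())
            return Resource(name, None, children)
        return Resource(name, payload, ())

    class ResourceGetter:
        """Callable wrapper: calling an instance fetches and wraps a payload."""
        def __init__(self, fetch):
            self._fetch = fetch          # e.g. a function performing the HTTP GET

        def __call__(self, name):
            return make_resource(name, self._fetch(name))

    # Usage with a stubbed fetch function in place of a real Review Board request.
    getter = ResourceGetter(lambda name: {'links': {'self': {'href': '/api/'}}})
    root = getter('root')
    print(root.children[0].name)         # -> 'links'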

Frankly, the situation drove me to new levels of frustration. Each time my frustration peaked, I had to step back, build the stamina for another rewrite, and push through.

Interestingly, I thought I was disciplined. My emotions kept telling me my broken implementation would be okay if I just spent more time on it. Rationally, I could tell that I was stuck. Steeling myself to rewrite took significant effort.

Each time, I created an experimental branch with the idea of exploring what was wrong with the implementation. Every time I did that I had a breakthrough. The two experimental branches have been merged to master and the implementation is better for it.

I'm currently on my third rewrite of RBTLIB v0.3.0. I am more confident that this implementation will work, but I'm procrastinating because I am still unhappy with some aspects of it.

January 9, 2018

Over Thinking Velocity in Scrum

  —A model of velocity in 6 months.

I’ve reached the point where I have enough data to calculate a meaningful velocity for my team. I defined velocity as the median of the story points completed in each sprint during the last six months.

I use the median because it’s a more robust statistic than an average. By robust, I mean it changes more slowly and is less susceptible to outliers.
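
To illustrate the difference (the sprint numbers below are made up, not my team’s data), a single outlier sprint drags the mean down noticeably while the median barely moves:

    from statistics import mean, median

    # Completed story points for 13 sprints; one sprint covers a holiday shutdown.
    completed = [21, 24, 23, 25, 22, 8, 24, 26, 23, 25, 24, 22, 26]

    print(round(mean(completed), 1))   # 22.5 -- pulled down by the outlier sprint
    print(median(completed))           # 24   -- barely notices it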

I am concerned about how this model will work for us, particularly when it comes to schedule projections. I collect six months of data to permit a four-month projection. (Using six months of data is an arbitrary decision.)

I present the velocity during the sprint planning meeting as guidance. As guidance, I acknowledge that velocity is a model of team capacity. The team may have reasons to plan for more or less work.

I didn’t count on people’s reactions to this model. They challenged it:

  • using individual and team absences.

    A median accounts for things like statutory holidays, vacations, and student turnover. Absences make the median lower than it would be if everyone were present.

    The model doesn’t address extremes. A holiday shutdown (and the resulting velocity) can be excluded.

  • using the accuracy of the story point estimates.

    Using powers of 2 for story point estimates and a median for velocity makes the calculation conservative, not aggressive. Mitigations for poor story point estimates include swarming and changing the points at any time until the story is added to a sprint.

  • pointing out that adding people didn’t change the velocity.

    A median taken over 13 sprints with a team of 8 doesn’t move much when someone is added to or removed from the team. These changes won’t affect velocity for at least 6 sprints.

    This is a benefit when dealing with students who change every four months. It is a disadvantage when you add or remove a full time person and management can’t see the impact immediately.

People didn’t buy the argument that the model accounts for absences. If you have a statutory holiday once a month and run two sprints each month, then the velocity guiding both sprints already includes the reduced capacity introduced by the holiday.

I agree that vacations during the holiday season put more pressure on velocity. In my environment, people tend to take more vacation during the summer and in December. Fewer people means less capacity and a smaller velocity. If fewer people results in a higher velocity, then other challenges exist.

The absence argument is hard to explain since velocity is presented as guidance. This argument implies people didn’t perceive velocity as guidance or felt that they weren’t empowered to use this information.

Poor estimates are challenging. In our case, the team provides estimates and can change them at any point up to commitment into the sprint. I say this because adding a story to a sprint is a commitment to deliver it.

The method used to generate story point estimates and velocity is conservative and should buy the team additional buffer for poor estimates. When using powers of two for story points, any debate over the story points that can’t be resolved should drive the estimate to the next higher power of 2. This implies that every situation like this introduces up to 100% buffer into the estimate.
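
As a sketch of that rule (the point scale and numbers here are illustrative, not my team’s data), bumping a contested estimate to the next higher power of two at most doubles it, which is where the up-to-100% buffer comes from:

    def next_higher_power_of_two(points):
        """Next power of two strictly greater than the contested estimate."""
        p = 1
        while p <= points:
            p *= 2
        return p

    print(next_higher_power_of_two(4))   # 8  -- doubling a 4: 100% buffer
    print(next_higher_power_of_two(5))   # 8  -- a 5 becomes an 8: 60% buffer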