March 2, 2018

RBTLIB v0.3.0 On Read The Docs

In RBTLIB v0.3 Update (Part 2), I discussed introducing complexity measures to RBTLIB using radon and xenon. Recently, I've introduced Sphinx and taken advantage of Read the Docs.

Sphinx is a documentation generator for Python and other languages.

Read the Docs lets you create, host and search project documentation.

The combination of the two, coupled with GitHub, creates a publishing environment that allows me to update my project documentation, push it to GitHub, and have the documentation published on Read the Docs within minutes. Simple.

Part of the move to Read the Docs included a clean-up of the project's naming. I moved away from rbt to rbtlib to avoid confusion between RBTools, which provides a command-line tool called rbt, and my work.

It's not my intent to diminish the work that people are doing on Review Board and RBTools by causing confusion. I still don't know if my project will be successful. It is my hope that it may be useful to the Review Board team but I haven't engaged anyone there.

I learned through Kenneth Reitz's Requests module that a best practice exists for API versioning: Semantic Versioning. Seems sensible to adopt. I've moved from v0.3 to v0.3.0. Same release.

Semantic Versioning also helpfully includes advice on versioning projects in an alpha and beta stage: once I achieve my goals for v0.3.0 I'll be targeting v0.4.0.
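The ordering Semantic Versioning defines is easy to sketch in Python. This is a minimal illustration of MAJOR.MINOR.PATCH comparison only, not a full semver parser -- it deliberately ignores pre-release tags and build metadata:

```python
# Minimal sketch: semantic versions compare component by component,
# left to right, as integers. Pre-release identifiers (e.g. 1.0.0-alpha)
# are out of scope for this illustration.
def parse_semver(version):
    """Parse 'MAJOR.MINOR.PATCH' into a comparable tuple of ints."""
    major, minor, patch = version.split(".")
    return (int(major), int(minor), int(patch))

# v0.3 becomes v0.3.0; the next milestone, v0.4.0, sorts after it.
assert parse_semver("0.3.0") < parse_semver("0.4.0")
assert parse_semver("0.3.0") == (0, 3, 0)
```

Tuples compare element-wise in Python, so the integer triple gives semver's precedence rules for the numeric components for free.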

I'd been using virtualenv to develop RBTLIB and incorporated virtualenvwrapper. Very nice set of tools.

RBTLIB documentation: http://rbtlib.readthedocs.io/en/latest/.

February 7, 2018

Working Agreements for Agile Teams (Part 6)

I’ve mentioned elsewhere that my team struggles to find the right balance for design reviews (see Working Agreements for Agile Teams (Part 4)). My initial challenge was to get people to recognize the need for collaborative design, then to get collaboration to occur, and finally to raise the bar on the quality of that collaboration so that we improved our designs.

It’s taken 18 months to learn to collaborate effectively on design. I’m confident of this because our last team meeting involved a discussion of the team’s expectations about the amount of collaboration needed for different design activities. You can’t have that discussion if people aren’t trying.

I wanted to share some details of that discussion because its intent and scope are valuable to others.

The team discussed two examples. In both cases, the design activity was handled by another team. The required software changes involved parameter changes. Parameter changes in our application typically involve changing values in a configuration file or the source code. Simple changes to make in the source code but with far reaching implications.

The question asked during the team meeting was how much involvement the team wanted with the other team in order to fulfill the working agreement. Our working agreement requires that the author and two others engage and agree on the scope of the design. Basically, accept the parameter change as a trivial software change or consider the broader implications.

The source code change isn’t the important factor affecting the design activity. Other factors include knowledge of why these parameters need to change and the rationale for choices made regarding their manipulation by the application. This information is needed to make future changes. It also involves understanding the requirement that the other team was trying to fulfill.

The team discussion focused on the degree of engagement required of other team members when knowledge and experience are important. Having this conversation is a huge win because it level-sets expectations and ensures that rich and meaningful engagements occur between team members.

This is what a working agreement should foster: an environment where expectations can be set and met, and where team members can level-set on expectations with each other.

This level-set is an important component of developing team norms.

February 1, 2018

Sunk Cost, Code and Emotional Investment

In a Practical Application of DRY, I discussed sunk costs as part of Sandi Metz's discussion on the Wrong Abstraction. In my work on RBTLIB v0.3.0 I encountered another element of sunk cost: emotional attachment to your implementation.

I put in considerable effort between RBTLIB v0.2 and v0.3.0. This effort included at least 2 rewrites of the core algorithms for traversing the resource tree returned by Review Board. In my case, the core approach of using the Composite Pattern and Named Tuples didn't change. Their use did.

The issue was primarily due to grey areas in my knowledge of Python, the constraints I placed upon my implementation -- avoiding meta-classes -- and my inexperience with using Python's __call__ method effectively. (OK, I didn't know __call__() existed when I started my implementation.)
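The post doesn't show the implementation, but the combination it names -- a Composite over the resource tree, named tuples, and __call__ -- can be sketched roughly like this. The names and data shape here are illustrative assumptions, not RBTLIB's actual API:

```python
from collections import namedtuple

class Resource(namedtuple('Resource', ['data'])):
    """Composite node over a nested dict, such as the resource tree a
    Review Board server returns as JSON. Calling a node descends one
    level: nested dicts become child Resources, anything else is a leaf.
    """
    def __call__(self, name):
        value = self.data[name]
        return Resource(value) if isinstance(value, dict) else value

# A toy stand-in for a Review Board response body.
root = Resource({'links': {'self': {'href': '/api/'}}, 'total_results': 5})
assert root('total_results') == 5                 # leaf value
assert root('links')('self')('href') == '/api/'   # composite traversal
```

Because leaves and branches share one call interface, client code traverses the tree without caring which kind of node it holds -- which is the point of the Composite Pattern.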

Frankly, the situation drove me to new levels of frustration. Each time my frustration peaked I had to step back, build the stamina for another rewrite and push through.

Interestingly, I thought I was disciplined. My emotions kept telling me my broken implementation would be OK if I just spent more time on it. Rationally, I could tell that I was stuck. Steeling myself to rewrite took significant effort.

Each time, I created an experimental branch with the idea of exploring what was wrong with the implementation. Every time I did that I had a breakthrough. The two experimental branches have been merged to master and the implementation is better for it.

I'm currently on my 3rd rewrite of RBTLIB v0.3.0. I am more confident that this implementation will work but I'm procrastinating because I am still unhappy with some aspects of it.

January 9, 2018

Over Thinking Velocity in Scrum

I’ve reached the point where I have enough data to calculate a meaningful velocity for my team. I defined velocity as the median of the story points completed in each sprint during the last six months.

I use the median because it's a more robust statistic than an average. By robust, I mean it changes more slowly and is less susceptible to outliers.
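A quick illustration of that robustness -- the sprint numbers below are invented, not my team's data:

```python
from statistics import mean, median

# Story points completed in each of the last 13 sprints (made-up data).
completed = [21, 13, 34, 21, 8, 21, 29, 21, 13, 34, 21, 25, 21]
velocity = median(completed)
assert velocity == 21

# Add one disastrous sprint: the mean drops, but the median holds.
with_outlier = completed + [2]
assert median(with_outlier) == 21
assert mean(with_outlier) < mean(completed)
```

One bad (or unusually good) sprint barely moves the median, which is exactly why it changes slowly as the last six months of data roll forward.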

I am concerned how this model will work for us, particularly when it involves schedule projections. I collect six months of data to permit a four month projection. (Using six months of data is an arbitrary decision.)

I present the velocity during the sprint planning meeting as guidance. As guidance, I acknowledge that velocity is a model of team capacity. The team may have reasons to plan for more or less work.

I didn’t count on people’s reactions to this model. They challenged it:

  • using individual and team absences.

    A median accounts for things like statutory holidays, vacations, and student turn-over. Absences make the median lower than it would be if everyone were present.

    The model doesn’t address extremes. A holiday shutdown (and the resulting velocity) can be excluded.

  • using the accuracy of the story point estimates.

Using powers of 2 to estimate story points and a median makes the calculation conservative, not aggressive. Mitigations for poor story point estimates include swarming and changing points at any time until the story is added to a sprint.

  • pointing out that adding people didn’t change the velocity.

A median taken over 13 sprints with a team of 8 doesn’t move much when someone is added to or removed from the team. These changes won’t affect velocity for at least 6 sprints.

    This is a benefit when dealing with students who change every four months. It is a disadvantage when you add or remove a full time person and management can’t see the impact immediately.

People didn’t buy the argument that the model accounts for absences. If you have a statutory holiday once a month and run two sprints each month, then the velocity of both sprints includes the reduced capacity introduced by the holiday.

I agree vacations during the holiday season introduce more pressure on velocity. In my environment, people tend to take more vacation during the summer and in December. Fewer people means less capacity and a smaller velocity. If fewer people result in more velocity, then other challenges exist.

The absence argument is hard to explain since velocity is presented as guidance. This argument implies people didn’t perceive velocity as guidance or felt that they weren’t empowered to use this information.

Poor estimates are challenging. In our case, the team provides estimates and can change them at any point up to commitment into the sprint. I say this, because adding a story to a sprint is a commitment to deliver it.

The method used to generate story point estimates and velocity is conservative and should buy the team additional buffer for poor estimates. When using powers of two for story points, any debate on the story points that can’t be resolved should drive the story point estimate to the next higher power of 2. This implies that every situation like this introduces up to 100% buffer into the estimate.

January 3, 2018

Working Agreements for Agile Teams (Part 5)

In Working Agreements for Agile Teams (Part 4), I discussed one side-effect of using working agreements as principles for individual decision making. I view those examples as growing pains--an adjustment that people make when the nature of team engagement changes. Those discussions are healthy for a team because they reinforce a new way of working together.

A recent example of learning to work together arose during a discussion on the interaction required by our working agreement on design reviews. This agreement focuses on a successful outcome--when the design is complete we are well positioned to complete the review.  It requires the involvement of a designer and two design reviewers:

We agree to document our design and review the design with at least two people prior to implementation.

This agreement positions the team to avoid situations where only one person understands the design. It's simplistic. If you dwell on it you may conclude it's heavy-handed. Taken literally, this working agreement requires every design review to involve three people.

My notion of design includes adding a method to a class. It also acknowledges this design might warrant a single line of text in a comment for the method. It's natural to ask why anyone would want this overhead for simple cases.

One team member made several arguments against this approach:
  • The working agreement promoted inefficiency because it required too many people to engage.
  • The working agreement permitted passive engagement--they asked someone to be a reviewer and that person indicated interest but did not actively engage.
  • We need time to learn (or prototype) so there is something of substance to review.
  • A difference of opinion on when to start applying the working agreement.
My counter arguments were:
  • I am happy if the conversation on how to approach the design occurs and all three people actively engage in the decision.
  • Passivity is a form of passive aggressiveness that I won't tolerate--engage or choose not to engage but make a decision.
  • Absolutely, take the time to learn but ensure that the interaction of all three people acknowledges and understands the objective and intended outcome of this learning.
  • Start the interaction at the same time we start working on the story.

Ironically, we disagreed only on the starting point and the passivity. Everything else this team member said made sense to me.

So the working agreement failed to help us understand the importance of the interaction required to make the design review a success. It failed to balance the need for the author to learn and for the reviewers to understand. And it failed to address the notion that too much investment up front might commit us to a poor course of action. Or did it?

Clearly, the working agreement addresses none of the above explicitly. Clearly, different perspectives resulted in different approaches. Importantly, these culminated in a profound outcome for the team.

I encouraged the team member to raise the differences of opinion in our Lean Coffee. They did, and they and I discussed the issues with the team.

To the team's credit, they took both perspectives in stride and we agreed to enhance our understanding of the working agreement. We also agreed not to modify the working agreement to include this understanding.

Interactions over process triumphs again! Furthermore, the team adopted several Agile principles in doing so. We all won.