October 25, 2016

Defensive Programming Done Right

  —A look at defensive programming and Narrow Contracts in C++.

I watched John Lakos’s two-part video on Defensive Programming Done Right (Part I and Part II). The first part provides motivation for defensive programming. The second shows how to use BSLS to introduce defensive programming into C++. A definition of defensive programming is provided in Part I.

Part I looks at design by contract and observes that undefined behaviour in a contract can be advantageous, particularly if you structure your implementation so that sensible behaviour occurs whenever preconditions are violated.

Sensible behaviour is delegated to the application. Doing so simplifies library construction. For example, the Standard C Library’s handling of null pointers provided to string functions is implemented this way.

A model is discussed for pre- and post-conditions as applied to functions and methods.

  • A function’s postcondition is simply its return value.
  • A method’s postcondition is subject to the preconditions of the method and the object state when the method is called.
  • The extension of pre- and post-conditions to methods introduces the notion of essential behaviour. Essential behaviour includes method postconditions but also other behavioural guarantees beyond these postconditions. These behavioural guarantees are essential to ensuring the method’s correctness.
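A minimal sketch of these ideas, using plain assert as a stand-in for the BSLS assertion macros (the class and its names here are my own invention, not from the talks):

```cpp
#include <cassert>
#include <vector>

// A hypothetical stack illustrating a narrow contract: pop() has the
// precondition that the stack is not empty.
class IntStack {
    std::vector<int> d_data;
public:
    bool empty() const { return d_data.empty(); }
    void push(int value) { d_data.push_back(value); }

    // Precondition (narrow contract): !empty().  In a defensive build
    // the assert fires on violation; in an optimized build (NDEBUG)
    // the behaviour is undefined, exactly as the contract states.
    int pop() {
        assert(!d_data.empty());  // defensive check, compiled out with NDEBUG
        int top = d_data.back();
        d_data.pop_back();
        return top;               // postcondition: returns the last value pushed
    }
};
```

The defensive check costs nothing in release builds, yet catches precondition violations during development.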

Both talks provide an introduction to the C++ proposal for Centralized Defensive-Programming Support for Narrow Contracts. The implementation of BDE (of which BSLS is a component) contains support for this proposal. The experience gained at Bloomberg using BDE provides the practical element of this proposal.

Centralized Defensive-Programming Support for Narrow Contracts defines a narrow contract as a combination of inputs and object state that can result in undefined behaviour detectable only at runtime. There is an excellent argument in this paper for not artificially widening a contract, an argument that the Standard C Library supports and which the Standard Template Library may have missed (for example, with the introduction of the vector container's at method).
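The distinction shows up in std::vector itself: operator[] keeps a narrow contract (out-of-range access is undefined behaviour), while at artificially widens it by defining an exception for the same bad input. A small sketch:

```cpp
#include <cassert>
#include <cstddef>
#include <stdexcept>
#include <vector>

// Returns true if v.at(i) threw, demonstrating the widened contract.
bool at_throws_out_of_range(const std::vector<int>& v, std::size_t i) {
    try {
        (void)v.at(i);   // wide contract: defined to throw std::out_of_range
        return false;
    } catch (const std::out_of_range&) {
        return true;
    }
}
// By contrast, v[i] for i >= v.size() is undefined behaviour, a narrow
// contract that leaves room for a defensive check instead of a mandated
// runtime response.
```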

In all, I had difficulty finding value in Lakos’s videos but think that this is a result of his presentation style rather than the content. Lakos is a co-author of Centralized Defensive-Programming Support for Narrow Contracts, and that work and the ideas it contains made clear what the videos did not.

I haven’t done any research on what prompted the vector at method or the notion of artificially widened contracts, but I am convinced the C Standard Library embodies better solutions.

October 2, 2016

The Abstract is 'an Enemy' (With a nod to LibAPI)

  —What's in a name? Everything.

I discovered The Abstract is ‘an Enemy’: Alternative Perspectives to Computational Thinking in the references to Robert Atkey's talk on Generalising Abstraction.

The Abstract is 'an Enemy' is an argument against creating generic names for abstractions. The paper begins with a module named 'ProcessData'. I laughed on reading this, having encountered a library called 'API' in my own work. The example struck a chord.

The compelling argument in The Abstract is 'an Enemy' is that software should be designed so that names are specific. The rationale for the specific is two-fold: it forces the design to encapsulate a single thought and it aligns what is being defined with something in the real world.
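The 'ProcessData' example translates directly to code: a generic name hides intent that a specific one exposes. Both the generic and specific signatures here are invented for illustration:

```cpp
#include <cassert>
#include <cctype>
#include <string>

// Generic: the name says nothing about what is done, or to what.
//     std::string processData(const std::string& data);

// Specific: the name encapsulates a single thought and names something
// in the real world (a postal code).
std::string normalizePostalCode(const std::string& raw) {
    std::string result;
    for (char c : raw) {
        if (c != ' ' && c != '-') {
            result += static_cast<char>(
                std::toupper(static_cast<unsigned char>(c)));
        }
    }
    return result;
}
```

The specific name also constrains the design: it is much harder for unrelated behaviour to accrete inside normalizePostalCode than inside processData.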

The paper goes on to provide examples of how increasing abstraction in an effort to simplify leads to complexity. In one example, the authors discuss how the concept of a user is generalized to the point where the resulting concept in the implementation embodies two very different users.

The abstraction of user not only leads to complexity in the system but also diminishes the ability of the software to serve these users. The shared representation of user in the system resulted in the system not supporting the users' way of thinking about the world.
A misfit is a correspondence problem between abstractions in the device, abstractions in the shared representation (the user interface) and abstractions as the user thinks about them. Here, the abstractions in the ‘shared representation’ (the user interfaces ...) don’t match the users’ way of thinking about the world. Such misfits are known to cause usability difficulties.
The paper provides a description of the tension between the need to model the real world and the need to limit complexity in an implementation. It's a good walk through how the design process goes awry and offers some insight on how to correct these challenges.

In my experience, I am confounded by the need to create arbitrary abstractions that obscure the real world. In the domain I work in, I am faced with electronic signals and devices that make up the physical interface to the product. In many cases the signal names presented in the schematics are never captured in the software, and an abstraction for a physical device (such as a button) is non-existent.

I don't have an answer other than to suggest that the software implementation is very out of touch with reality. The resulting complexity in the product and the simple misunderstandings that result are costly.

September 26, 2016

Abstract Data Types

  —A look at Barbara Liskov's paper on Abstract Data Types.

I was reading "Contracts, Scenarios and Prototypes" and learned that Abstract Data Types were first presented by Barbara Liskov and Stephen Zilles in their paper "Programming with abstract data types" (requires access to the ACM Digital Library). The main contribution of this paper is that it identifies an abstract data type as a class of objects completely characterized by the operations that can be performed on it.
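That idea, a type known to its clients only through its operations, maps directly onto an abstract interface in modern C++. This is my own sketch, not an example from the 1974 paper:

```cpp
#include <cassert>
#include <vector>

// An abstract data type: clients see only the operations, never the
// representation.
class IntQueue {
public:
    virtual ~IntQueue() = default;
    virtual void enqueue(int value) = 0;
    virtual int dequeue() = 0;        // precondition: not empty
    virtual bool empty() const = 0;
};

// One possible representation, hidden behind the operations.  It could
// be swapped for a linked list without affecting any client.
class VectorIntQueue : public IntQueue {
    std::vector<int> d_data;
public:
    void enqueue(int value) override { d_data.push_back(value); }
    int dequeue() override {
        int front = d_data.front();
        d_data.erase(d_data.begin());
        return front;
    }
    bool empty() const override { return d_data.empty(); }
};
```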

Liskov and Zilles' paper was written in 1974. It lists thirteen references that provide insight on how abstract data types were arrived at. It's an interesting list of references including work from Dijkstra, Neumann, Parnas and Wirth.

What is compelling about the introduction of "Contracts, Scenarios and Prototypes" is the depth of the references provided on the development of contracts. In addition to abstract data types, this introduction includes a look at Hoare's "An Axiomatic Basis for Computer Programming" which introduces pre- and post-conditions via Hoare Triples and Parnas' "A Technique for Software Module Specifications with Examples" for description of good specifications and strongly typed languages.

Contracts, Scenarios And Prototypes: An Integrated Approach To High Quality Software by Reinhold Ploesch

September 3, 2016

Object-Oriented Programming: A Disaster Story

  —A look at object-oriented programming when it goes wrong.

Read Object-Oriented Programming: A Disaster Story, not so much for what it said but for what ended up on Reddit. If you are looking for perspectives on object-oriented programming, the comments are a good read.

One commenter discussed Closures and Objects are Equivalent; another brought in Alan Kay on Object-Oriented Programming. Both are reasonable responses.
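The equivalence the first comment points to can be sketched in a few lines: an object with a single method and a closure capturing the same state are interchangeable. The names here are my own, purely illustrative:

```cpp
#include <cassert>
#include <functional>

// An object whose behaviour is a single dispatched method...
class Counter {
    int d_count = 0;
public:
    int next() { return ++d_count; }
};

// ...and a closure capturing equivalent state.  Both are first-class,
// dynamically dispatched behaviour.
std::function<int()> make_counter() {
    int count = 0;
    return [count]() mutable { return ++count; };
}
```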

In my opinion, the best comment:
The value of objects is in treating systems behaviourally. Inheritance and even immutability are orthogonal. An object is a first-class, dynamically dispatched behaviour.   (/u/discretevent)
The lesson is to know your tools, use them appropriately and recognize their limitations. Orthogonality is a good way to organize your thinking on this.

In Object-Oriented Programming: A Disaster Story, I do agree that shallow object hierarchies are better than deep ones, but I'm not sure if the argument presented there is a response to programs that derive all classes from the same object or an argument against deep class hierarchies.

With respect to
Among OOP practitioners, there are competing schools of thought on the degree to which a program’s behaviors should be expressed as class methods rather than free-floating functions.
Isn't the correct answer to apply what is appropriate to the context?

I've made arguments wherein a free-floating function is the best tool for ensuring consistency between different classes whose behaviour is related only due to business rules and logic. For example, do you want a manager class in an automobile that ensures the lights are on and the doors are locked while driving? You can construct arguments for both approaches.
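The automobile example might be sketched as a free function that enforces the business rule across otherwise unrelated classes, with no manager class in sight. All the names here are hypothetical:

```cpp
#include <cassert>

// Two classes whose behaviour is related only by a business rule.
class Lights {
    bool d_on = false;
public:
    void turnOn() { d_on = true; }
    bool isOn() const { return d_on; }
};

class Doors {
    bool d_locked = false;
public:
    void lock() { d_locked = true; }
    bool areLocked() const { return d_locked; }
};

// A free-floating function enforcing the rule "lights on and doors
// locked while driving".  The rule lives in one place without coupling
// Lights and Doors to each other or to a "Doer" class.
void prepareForDriving(Lights& lights, Doors& doors) {
    lights.turnOn();
    doors.lock();
}
```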

The right approach is the one that leads to the simplest implementation. In this case, I agree with the author and avoid "nonsense Doer" classes.

There is a great set of comments in the Reddit thread relating to context and object decomposition.

August 28, 2016

Good Grief! Good Goals!

  —How not to abuse metrics. Or how to use metrics to support goals.

Martin Fowler has an essay on An Appropriate Use of Metrics. It's a useful summary of how metrics are abused and often obscure the true intent of the goals they support. It provides guidance on how to improve goals by placing metrics in a supporting role, instead of a deciding role.

If you've done research on this you've likely heard it all before. I like this essay because it's useful to refer to from time to time and it's broad enough that you can use it to educate others on how to get your metrics aligned with and supportive of the intent behind your goals.

I think it important to emphasize something that Fowler touches on when discussing explicitly linking metrics to goals. He states:
A shift towards a more appropriate use of metrics means management cannot come up with measures in isolation. They must no longer delude themselves into thinking they know the best method for monitoring progress and stop enforcing a measure that may or may not be the most relevant to the goal. Instead management is responsible for ensuring the end goal is always kept in sight, working with the people with the most knowledge of the system to come up with measures that make the most sense to monitor for progress.
Management is responsible for ensuring that the focus remains on the end goal and for working with the people who are best positioned to develop meaningful measures of progress.

Absolutely. That means you need to engage.

I engage my software team in the development of goals. It's an imperfect process.

What's missing from Fowler's essay is language directed at ensuring the responsibility of team members is clear. That's important enough to say again: you need to engage. Whoever you are and whatever your responsibilities. Engage to create understanding and allow for the possibility that your perspective needs to be adjusted.

For example, I recently started a discussion on goals. One focused on testing. I estimated that this discussion would take five hours. It took ten.

Some feedback from this discussion:
  • I was told such discussions were designed to make management look like they are doing something useful. I asked if we were investing enough in our tests.
  • I was told that five hours is too much time to spend on this. I pointed out that five hours for the team was 1 week of the 400 weeks of effort available this year. I asked if it was smarter to spend this time writing tests or determining how to get the best return on our investment.
  • I was told that the process was too democratic (because everyone could participate). I responded by saying there would be a testing goal and that people were free to contribute to its definition in whatever manner they thought appropriate (including not participating).
  • I was told that we didn't have a quality problem. I encouraged discussion by saying that we can disagree on this but we should understand the basis of our disagreement so that we can improve our lines of enquiry into the issue.
All of the feedback was valuable, but only one actually focused on the goal of testing. The fact that it provided a contrarian view to my own makes it that much more valuable.

One win arising from our discussion on testing: while I sought to create a set of principles identifying what to achieve, some were concerned about losing control if the goal were too prescriptive. There was agreement on the outcome we wanted to avoid.

Since understanding what was best for the business was important, we were able to identify this miscommunication early. I used Fowler's essay as the basis of part of that discussion but emphasized the importance of everyone's engagement.

Another win was the discussion on whether we had a quality problem.  Obviously we test. We have our challenges too. The fact that someone thought we had good enough quality created a situation where we could engage on what worked well and figure out how to leverage or reproduce it.

In all, I like Fowler's essay and the contribution it makes. I think it's a little too one-sided in calling out management responsibilities. Good goals are everyone's responsibility.