WSJ July 31, 2021: Artificial Intelligence’s Big Chill
The WSJ article is worth reading, even if you have to pay for it. I don’t want to steal its thunder, so consider this a recommendation (I get no compensation for it). I can’t link to the article because it’s behind a paywall, but the Wikipedia article on the AI winter covers the concept nicely enough.
An “AI Winter” is what follows the irrational exuberance caused by overblown, sweeping claims implying that computers are somehow going to “be intelligent” like people. Eventually, people get sick of being misled (or actively lied to, but we’ll call it “mistakenly over-optimistic”) and the party ends.
So, to be clear, the marketing term “AI” is outrageously stupid.
In attempting to “write the right software” we’re routinely faced with a prose (or worse, spoken) request that the client believes is clear and proper but in fact can’t be unambiguously implemented.
I wrote earlier (Easy and Simple Isn’t) of my attempt to clarify a desktop lamp’s behavior only to discover the electronic lamp had a dimmer built into its power switch.
What does the dimmer do to the requirement? It turns something that was simply true or false (lit or not lit) into a range of discrete values.
The simple logic of a light switch (without a dimmer) is functionally this:
next state = not(prior state)
So, if it is lit, it becomes not lit. If it is not lit, it becomes lit.
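The toggle above is short enough to write directly (a sketch in Python; the original gives only pseudocode):

```python
def switch(lit: bool) -> bool:
    """A plain on/off switch: the next state is the negation of the prior state."""
    return not lit
```

Calling `switch(True)` yields `False`, and `switch(False)` yields `True`, matching the sentence above.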
That’s no longer true if the lamp has a range of values. The behavior of “trigger the switch” (or just “switch” the lamp) actually steps through a sequence of settings, looping back to the start after the last one.
This still requires the logic for handling the power cable. If this electronic lamp is unplugged, it always starts on its brightest setting when plugged back in, regardless of what setting it had when unplugged (including off).
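The looping switch plus the plug-in rule can be sketched as a small state machine. The specific brightness levels and their cycle order below are assumptions for illustration; the source says only that the switch loops through a sequence of settings and that plugging the lamp in always starts it at its brightest.

```python
# A sketch of the dimmer lamp as a state machine. The four settings and
# their order are assumed; the plug-in rule (always start at brightest)
# is from the requirement above.

LEVELS = ["high", "medium", "low", "off"]  # assumed cycle order

def trigger(state: str) -> str:
    """Each use of the switch advances to the next setting, wrapping around."""
    i = LEVELS.index(state)
    return LEVELS[(i + 1) % len(LEVELS)]

def plug_in() -> str:
    """Plugging the lamp back in always yields the brightest setting,
    regardless of the setting it had when unplugged."""
    return "high"
```

Note how the plug-in behavior is a separate rule, not derivable from the switch logic at all — exactly the kind of detail a one-line prose requirement hides.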
The simple requirement to “handle plug in, unplug, and switch the lamp” as prose is clearly insufficient. I could certainly write a more detailed prose requirement, but honestly, the problem with prose is that it can’t be tested. To be tested, a requirement must be executed, and to be executed, a programmer must write it! We have a catch-22 … or do we?
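One way out of the catch-22 is to make the requirement itself executable. A sketch, assuming a minimal lamp model (the four settings and their order are invented for illustration; the point is that each sentence of the prose requirement becomes an assertion that can actually run):

```python
# A minimal lamp model, followed by the prose requirement restated as
# executable checks. The settings and their cycle order are assumptions.

class Lamp:
    LEVELS = ["high", "medium", "low", "off"]

    def __init__(self):
        self.plugged = False
        self.level = "off"

    def plug_in(self):
        self.plugged = True
        self.level = "high"        # always starts at its brightest setting

    def unplug(self):
        self.plugged = False

    def switch(self):
        if self.plugged:           # an unplugged lamp ignores the switch
            i = self.LEVELS.index(self.level)
            self.level = self.LEVELS[(i + 1) % len(self.LEVELS)]

# The "requirement", written as checks instead of prose:
lamp = Lamp()
lamp.plug_in()
assert lamp.level == "high"        # plugging in starts at brightest
lamp.switch()
assert lamp.level == "medium"      # the switch steps through the settings
lamp.unplug()
lamp.plug_in()
assert lamp.level == "high"        # the setting at unplug time doesn't matter
```

A programmer still had to write this, of course, but now the requirement and the test are the same artifact, and a client can argue with each assertion line by line.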
Back in the dark ages (the 1980s) when I studied Systems Analysis, a large part of the coursework was the creation of paper forms. Paper forms (often designed to be NCR, “no carbon required”) were a primary means of data capture up until the 1990s, and they still are today.
When designing a paper form, there were many common elements. Consider the forms that are still often used today by home repair contractors. They have an area for identifying the customer (name, address, etc.) and then lines for capturing whatever work was done, with columns for price, quantity, and extension (price times quantity). And of course, subtotal, tax, and total. In other words: invoice forms like those still for sale at office supply stores, such as this one from Office Depot.
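The arithmetic that the form encodes (extension = price × quantity, then subtotal, tax, and total) is simple enough to sketch; the 8% tax rate here is an invented placeholder, not anything from the form:

```python
# Each invoice line carries a price and a quantity; its extension is
# price times quantity. The tax rate below is a made-up example value.

def invoice_total(lines, tax_rate=0.08):
    """lines: iterable of (price, quantity) pairs.
    Returns (subtotal, tax, total)."""
    subtotal = sum(price * qty for price, qty in lines)
    tax = subtotal * tax_rate
    return subtotal, tax, subtotal + tax

subtotal, tax, total = invoice_total([(25.00, 2), (10.00, 3)])
# subtotal 80.0; tax and total follow (approximately, in floating point)
```

In a real system you would use exact decimal arithmetic for money rather than floats, but the shape of the computation is what the paper form has always captured.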
The reason these forms worked for years (and are still in use) is because they solved a common problem and did so well. In the field, a person with such an invoice can capture any arbitrary work. Even one-off special jobs are fine because anything can be written into the lines (and anything often is).
Now, the company decides they want software for a phone or tablet to avoid using paper. Naturally, they tell their developer that they want an electronic version of that venerable invoice, so that they don’t have to re-key all the data from the paper into their accounting system.
And from such an obvious and humble beginning one of the most insidious mistakes has occurred: the prior solution was used as a requirement for the next solution. Instead of solving the problem (capturing the work done) the requirement is to make a version of the old solution.
I’ve been building up content (for here and for eventual videos) regarding reducing the pain of software development.
In particular, I’ve become fascinated with two challenges:
Writing the right software
Writing the software right
Imagine a team of perfect developers. They can take any user story, use-case, requirement, specification (or whatever term a given methodology uses) and generate 100% flawless and efficient code. They never have bugs, they never run low on memory, and they never have unexpected effects no matter how convoluted or wrong the input provided is.
You task them to write something, knowing it will be, in a word — perfect. And the first time the intended audience uses it they reject it as totally broken. How can this be?