There’s a huge number of books written about how to get things done. Some of them are about personal development, some about project management, some about business strategy, and some are just general-purpose pragmatics.
Set the computer aside for a moment: even leaving instructions for other people is tricky. For instance, something as basic as washing laundry has dependencies and can interleave with other things.
Take a home washer/dryer as an example. It would seem a checklist would work:
- place dirty laundry in washing machine
- add soap
- pick cycle
- start machine
Pretty basic. Except it’s not. The cycle is automatic and may take, say, 34 minutes to finish. During that time, the person can do something else.
They can interleave the task of doing laundry with other tasks. They aren’t multitasking, per se, but while the washing machine runs they can load and start the dishwasher, go do something else entirely, and then move the laundry from the washing machine to the dryer …
In short, even simple things in the real world have relationships and temporal inter-dependencies.
How, then, do we let a computer create a plan or schedule, since it would have to be told all of these things?
There are software systems that talk about planning. I’ve used a few. What they do is allow people to list their steps and mark status on them. They can attach budgets, or project timelines as Gantt charts. They are used for communication and tracking.
What they aren’t used for is allowing the computers to do planning.
For instance, the steps I listed above could be placed as columns in a table and loads of laundry could be shown as in various stages. That’s great, but it required a human to figure out the ordering.
That’s what the basic tools have in common: they rely on a human to create the plans and flows, while the computer provides the recording and sharing technology. They don’t allow the computer to actually create the plans or discover the flows.
Step out of that and you enter the academic realms. Things like STRIPS which rely on formal logic to define the domain and the various actions and goals. These tools can actually be “told” enough to have the computer (and not the human) form the plan and schedule.
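As a rough illustration of the idea (a toy sketch, not any particular STRIPS implementation; the predicates and actions are invented), each action is defined by preconditions, facts it adds, and facts it deletes, and a simple forward search lets the computer, not the human, discover the ordering:

```python
# A toy STRIPS-style formulation of the laundry domain.
# Predicate and action names are invented for illustration; real
# planners use a formal language such as PDDL.
from collections import deque

# action -> (preconditions, add effects, delete effects),
# each a set of ground facts.
actions = {
    "load_washer":   ({"laundry_dirty", "washer_empty"},
                      {"laundry_in_washer"}, {"washer_empty"}),
    "run_washer":    ({"laundry_in_washer"},
                      {"laundry_clean"}, {"laundry_dirty"}),
    "move_to_dryer": ({"laundry_in_washer", "laundry_clean"},
                      {"laundry_in_dryer", "washer_empty"},
                      {"laundry_in_washer"}),
    "run_dryer":     ({"laundry_in_dryer"},
                      {"laundry_dry"}, set()),
}

def plan(state, goal):
    """Breadth-first forward search over world states: given only
    the action definitions, the planner finds the step ordering."""
    state = frozenset(state)
    queue, seen = deque([(state, [])]), {state}
    while queue:
        s, steps = queue.popleft()
        if goal <= s:
            return steps
        for name, (pre, add, delete) in actions.items():
            if pre <= s:              # action is applicable
                nxt = frozenset((s - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + [name]))
    return None                       # goal unreachable

print(plan({"laundry_dirty", "washer_empty"}, {"laundry_dry"}))
```

Notice that nobody told the planner the order of the steps; it fell out of the preconditions and effects. That is the appeal of these systems, and the formal specification they demand is the cost.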
Yes, this is one of the many technologies officially in the AI domain …
These tools require training. At the least, they use some variation of formal logic. Some of them use tables and state machines, too (which are just different ways of capturing formal logic). Typically, a normal user untrained in their use won’t be able to evaluate the correctness of the specification of the world.
Now, what I have been fighting with for a long time (since around 2012, I think, perhaps a bit earlier) is how to let an expert in a non-software domain (payroll, auto mechanics, business project management) express their intentions in a way that doesn’t require training in formal logic, and that another expert in the same domain could read immediately.
It’s All In the Details (Along with the Devil)
The challenge is context.
Even the simple statement “place dirty laundry in the washing machine” requires a tremendous amount of context to be understood.
What is laundry? Where is it found? How is it placed? What if there’s too much to fit?
A human just knows these things.
Actually, that’s a lie. Kids learn it by “monkey see, monkey do.” We humans learn by observation, trial, and guidance. We interact with the world and in turn our brains adjust — natural learning.
Creating a general software tool that lets people “explain” their world requires either having them provide every detail (which is why planning systems use formal notations) or some form of pre-existing knowledge (such as the Cyc Project’s attempt to capture “common sense”). Cyc is fascinating, but it’s out of scope for most projects.
Without existing context, it’s brutally hard to put enough into a planner to warrant the effort. It’s not that the planning algorithms don’t work (they are computationally non-trivial, but working planners exist); it’s that it’s often easier to write a simple, non-generalized program that addresses the specifics than to wield the general-purpose beast. And of course, that requires a developer.
It’s Not Just Planners
This is not that different from the problem faced by robot cars. Humans understand the meanings and implications of what they see at the side of the road.
Residential neighborhood, wide open lawns, parked cars … go slow, a kid could dart out.
Trying to make “all of those rules” in a formal notation is an incredibly tedious task.
The use of statistical learning (“Machine Learning” including neural nets) tries to get around this problem by “learning from data.” Without any attempt to teach the machine rules, samples of inputs and outputs are provided and the machine “learns” to generate “reasonable outputs” for different inputs.
The popularity of machine learning stems precisely from eliminating the need for a person to figure out the formal rules. The downside is that machine learning is an approximation from data. In places where “close is good enough” that’s fine, but there are many cases where “good enough” isn’t. For instance, to avoid damage to a motor, shut it down if the temperature exceeds a threshold. Coming up with enough “examples of emergency shutdown” to ensure that rule was discovered in the data would be a challenge.
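A tiny sketch of the contrast (the threshold and sample data are invented for illustration): the explicit rule is exact everywhere, including at the boundary, while a naive nearest-neighbour “learner” is only right where its data happens to cover:

```python
# Contrast an explicit safety rule with a "learned" approximation.
# The threshold and sample data are invented for illustration.
MAX_TEMP_C = 90.0

def rule_shutdown(temp_c):
    """Explicit rule: exact at the boundary, needs no data."""
    return temp_c >= MAX_TEMP_C

# A 1-nearest-neighbour "classifier" trained on a few samples
# of (temperature, shutdown?).
samples = [(20.0, False), (50.0, False), (100.0, True)]

def learned_shutdown(temp_c):
    """Approximation from data: answers by analogy to the closest
    example it has seen, with no notion of what the numbers mean."""
    return min(samples, key=lambda s: abs(s[0] - temp_c))[1]

assert rule_shutdown(89.9) is False and rule_shutdown(90.0) is True
assert learned_shutdown(80.0) is True   # wrong: 80 °C is below threshold
```

The learned version shuts the motor down at a safe 80 °C simply because 80 sits closer to the one shutdown example than to any safe one; no amount of cleverness in the algorithm fixes sparse coverage near a hard boundary.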
Again, it boils down to context. Machine learning algorithms are generalized to the point they don’t care what the numbers mean. Only that they produce output numbers that are “like” what they’ve seen before in “similar” conditions.
Still No Solution
I have been seeking some sufficient means of allowing general knowledge to be captured without forcing formal logic.
The way people work with language isn’t generally formal. We can use the same word to mean different things in different contexts, and we almost never give the full context of anything. As a result, our use of natural language is “loosey-goosey,” which is what drives researchers to use artificial formal languages.
But I keep trying, because if there were a way to let people readily use AI planners, the computer could actually serve a lot more people.
Right now almost all thinking is done by people, and the computer records the results of the thinking. Even programming: I (or any programmer) decide, then express the result of those decisions in a formal notation (a programming language). As a developer, I can at least make the machine do what I choose.
For most non-developers, even “no-code/low-code” systems don’t give them the ability to command the machine without a lot of training and practice.
I don’t want to force people to command everything. I’d like to let them explain what they know and want. I’d even like the computer to ask for clarification as needed.
This isn’t wholly a pipe-dream. I’m one of the patent holders of the Payroll Policy Miner, which lets a computer examine the inputs and outputs of a payroll system and, by referencing a set of “payroll policies,” deduce the configuration of that payroll system. Because it’s a well-formed domain, the computer can ask questions with multiple answers to pick from in order to clarify uncertainties. Policy mining thus becomes a conversation: the computer examines data, asks for clarification, and repeats, each step pinning down more of the payroll rules for the customer.
So, I’ve done a very narrow focused version. Not of planning, but of “understanding from data and rules.” But it was still formal.
It may be I never figure out a way to explain “general things” to the computer in a way that doesn’t require advanced training in formal notations. But, just as spreadsheets broke the idea that “only programmers can control computers,” I believe it’s a challenge worth pursuing.
If not me, then I hope someone else solves it! I would really like to use it …
Keep the Light!