Even though I modified my resume not to match laundry-list job postings, I still receive emails from recruiters seeking things like this:
- 8 years working experience developing in JAVA with Microservices.
- Expert in Core JAVA / Spring Boot, Framework, Security, Cloud / Hibernate / Web Services
- Working knowledge of Apache Camel, JMS, JNDI, JUnit, and Cucumber
- Advanced knowledge developing REST APIs and micro-services
- Experience with relational, NoSQL, and event streaming database implementations (Oracle, MySQL, PostgreSQL, MongoDB, Cassandra, Kafka)
- Practical use of XML, JSON, XSLT
So, I guess seven years isn't enough; eight is the magic number? What do they actually need? What's interesting to me is that they don't care about programming skills, analysis skills, or design skills. What they care about are libraries and products.
It’s almost like what they want is: Coder, Java Enterprise, One (1) each.
Is that really what allows companies to generate extraordinary software?
A simple example: processing a compressed data file accessed via HTTP (could be API, static website, or any other HTTP source).
I don't actually care what language, stack, library, or framework is used. Mostly, I don't care because it doesn't matter. This problem isn't about coding, it's about performance. So, for the sake of the example, assume the language is fast and the library or framework it uses is exceptional.
Now, the normal approach is this:
- Fetch the compressed file
- Decompress the compressed file into a source file
- Process the source file
This is great, except it takes longer to fetch the file than we're allowed to wait before we start processing the data. And that ignores the decompression time. In short, we have a performance problem: we need to begin showing results before our available bandwidth can finish delivering the compressed file.
Well, if it takes 30 seconds to move the data through the wire and we must start showing results within 5 seconds, I guess we’ll need a faster pipe.
Or perhaps we can just do better…
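To make "do better" concrete, here's a minimal sketch of streaming decompression: process each record as its bytes arrive instead of downloading, then decompressing, then processing. Python, the gzip format, line-oriented records, and the chunk size are all assumptions made for this example (the post itself is language-agnostic), and the network is simulated with an in-memory byte stream.

```python
import gzip
import zlib

def stream_lines(chunks):
    """Decompress a gzip byte stream chunk by chunk, yielding complete
    lines as soon as enough bytes have arrived -- no temporary file,
    and no waiting for the full download."""
    decomp = zlib.decompressobj(wbits=zlib.MAX_WBITS | 16)  # 16 => expect gzip header
    buf = b""
    for chunk in chunks:
        buf += decomp.decompress(chunk)
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            yield line.decode()
    buf += decomp.flush()
    if buf:
        yield buf.decode()

# Simulate the network: a compressed file arriving in small chunks.
payload = "\n".join(f"record {i}" for i in range(1000)).encode()
compressed = gzip.compress(payload)
chunks = (compressed[i:i + 256] for i in range(0, len(compressed), 256))

first = next(stream_lines(chunks))
print(first)  # the first record is usable long before the last chunk arrives
```

The same shape works against a real socket or HTTP response body: the only requirement is that the source hands over bytes incrementally, so the 5-second deadline is measured to the first record, not to the end of the 30-second transfer.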
It’s easy to build software today.
Pick a stack, which gives you a framework from web browser through server. Pick the plugins that hook into your preferred SQL or NoSQL persistence, using mapping technology so you don't have to code storage directly. Use Docker for each microservice, have them expose web services, set up Kafka for the queueing, and use Redis for shared state. Control it all via Kubernetes and deploy it through AWS.
I’ve been told this, by more than one person and team. “It’s easy.” I’ve never seen any of the teams who claimed it was easy ship.
And yet, everything I wrote above is true. There really are full starting points. There are excellent technologies. I’ve written before about the challenges of attempting to build with enterprise stacks before the problem is well understood, but let’s grant that the problem is understood.
Why, then, is it still so hard to ship software that people can use (and harder to ship software that they like to use)?
There’s a huge number of books written about how to get things done. Some of them are about personal development, some about project management, some about business strategy, and some are just pragmatics for the general case.
Setting the computer aside, just leaving instructions for people is tricky. For instance, something as basic as washing laundry has dependencies and can interleave with other tasks.
Take a home washer/dryer as an example. It would seem a checklist would work:
- place dirty laundry in washing machine
- add soap
- pick cycle
- start machine
Pretty basic. Except it's not. The cycle is automatic and may finish in, say, 34 minutes. During that time, the person can do something else.
They can interleave the task of doing laundry with another task. They aren't multi-tasking, per se, but they can have the washing machine running, then load and start the dishwasher (while the washing machine runs), then do something else, then move laundry from washing machine to dryer …
In short, even simple things in the real world have relationships and temporal inter-dependencies.
How, then, do we let a computer create a plan or schedule, since it would have to be told all of these things?
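As a hint at just how much must be spelled out, here's a toy scheduler sketch in Python. Every task name, duration, and dependency below is invented for the example; the "attended" flag marks steps that need the person, while machine cycles run unattended.

```python
# Each chore has a duration, dependencies, and whether it needs the person.
# The dict is listed in dependency order, so one greedy pass suffices here.
tasks = {
    "load washer":      dict(mins=4,  needs=[],                 attended=True),
    "washer cycle":     dict(mins=34, needs=["load washer"],    attended=False),
    "load dishwasher":  dict(mins=5,  needs=[],                 attended=True),
    "dishwasher cycle": dict(mins=45, needs=["load dishwasher"], attended=False),
    "move to dryer":    dict(mins=3,  needs=["washer cycle"],   attended=True),
    "dryer cycle":      dict(mins=50, needs=["move to dryer"],  attended=False),
}

def schedule(tasks):
    start, finish, done, person_free = {}, {}, set(), 0
    while len(done) < len(tasks):
        for name, t in tasks.items():
            if name in done or any(d not in done for d in t["needs"]):
                continue  # not ready yet
            earliest = max((finish[d] for d in t["needs"]), default=0)
            # attended steps also wait for the person to be free
            begin = max(earliest, person_free) if t["attended"] else earliest
            start[name], finish[name] = begin, begin + t["mins"]
            if t["attended"]:
                person_free = finish[name]
            done.add(name)
    return start, finish

start, finish = schedule(tasks)
for name in sorted(start, key=start.get):
    print(f"{start[name]:3d}-{finish[name]:3d}  {name}")
```

With these made-up numbers the interleaved plan finishes in 91 minutes instead of the 141 a strictly sequential run would take. But notice what had to be declared first: durations, dependencies, and who or what is occupied at each moment. That's the knowledge a human applies without thinking, and a computer cannot plan without.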
I’ve been pondering a common misconception: Microservices are synchronous. Or, Microservices are asynchronous. Both are actually missing the point!
The use of common libraries when implementing microservices over HTTP has many of the characteristics of synchronous coding. A request is made, the caller blocks until the result is available, and then processes it. That sounds like synchronous code.
However, it's only the way the code is written that makes it synchronous. Instead of using a library that takes the request details and returns a set of results, one could use a library that issues the request and returns immediately, without waiting for the eventual results. That shifts the model to asynchronous programming.
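A small Python sketch of that distinction, with a stand-in function playing the role of a remote microservice (the service name, payload, and latency are all invented for the example): the same call made blocking, then made through a future that returns immediately.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def inventory_service(item):   # stand-in for an HTTP call to a microservice
    time.sleep(0.1)            # simulated network + service latency
    return {"item": item, "in_stock": True}

# Synchronous style: the caller does nothing during the 0.1 s wait.
result = inventory_service("widget")
print(result["in_stock"])

# Asynchronous style: issue the request, get a future back immediately,
# keep working, and block only at the moment the answer is needed.
with ThreadPoolExecutor() as pool:
    future = pool.submit(inventory_service, "widget")
    other_work = sum(range(1000))  # overlaps with the request in flight
    result = future.result()
print(result["in_stock"])
```

The service is identical in both halves; only the calling code changed. That's the sense in which "synchronous or asynchronous" describes the client's programming model, not the microservice.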
So, are microservices synchronous or asynchronous?
What’s the real difference? Actually, it’s easier to show than tell …
Today I’m working with a founder who is struggling with a common problem. Their application (still running dark, please don’t ask) provides automation for data capture and evaluation for mission critical repetitive tasks.
It’s hard to get time with the people who do the work (my client understands not to ask their boss!) but time was arranged with one of the many people who do the job and will run the application. The client knew not to ask “What are your processes and workflows?” because that question makes no sense to most people. Instead, the question was “What do you do to carry out the task?”
Now it gets interesting, because other experts must take the same set of inputs and produce the same result … but from less formal earlier sessions it's clear that each of the experts does things differently.
There's nothing like the ringing, then silence, right about the time you grab your phone. When it occasionally does keep ringing and you answer, there's a recorded voice, giving you a lie such as "Your auto warranty is about to expire. This is your final notice…"
Of course, we should just hang up.
Or wait it out; sometimes there's a number we can press to be placed on their "do not call" list. As if anyone believes that those who use robo-dialers would honor such a choice.
I was going to write about the existing state machine I have working. It’s very powerful, has parallel features, and its output was shown in Rethinking State Machines meaning I wouldn’t have to actually create any code, just go through a post-mortem.
Then I changed my mind.
My Lisp version was designed intentionally to be an integral extension of the Lisp technology we use internally. It’s not that it’s a trade secret (how does one make a “trade secret” out of something as well known as state machines?) but rather, it’s not actually comprehensible without an understanding of Lisp and the nature of the Lisp development needs that drove its creation.
What I really want to write about is the process of creation that allows one to make the step from "code that works with data" to "data that represents code." That's the jump, the spark, that enables metaprogramming, where program code is data to other program code.
And that process warrants starting at the beginning, which is with the problem.
The problem is my desire to allow direct expression of the basis for responding over time to external events. Think of this as the architect’s perspective instead of a developer’s perspective. No matter the language of the system implementation (or set of languages) I want the system to respond properly to any sequence of events.
I want this to be independent of the code that carries out the intention so that I can work “at a higher abstraction.”
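One way to picture that higher abstraction is a machine expressed as pure data, with a small generic interpreter that responds to any sequence of events. This is a minimal Python sketch; the toy door machine, its states, and its event names are all invented for the illustration.

```python
# The state machine is plain data: state -> {event -> next state}.
# Nothing here is code specific to doors; the intent lives in the table.
machine = {
    "locked": {"unlock": "closed"},
    "closed": {"lock": "locked", "open": "open"},
    "open":   {"close": "closed"},
}

def run(machine, state, events):
    """Generic interpreter: feed any sequence of events to any machine."""
    for event in events:
        state = machine[state].get(event, state)  # ignore events that don't apply
    return state

print(run(machine, "locked", ["unlock", "open", "close", "lock"]))  # -> locked
```

The table could just as easily be fed to a code generator targeting whatever implementation language the system uses; the architect's intent stays in the data either way.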
I spent many years teaching for Learning Tree International, New Horizons, and privately. That has given me a very formal perspective on presenting complex material that isn’t yet understood to people who are much smarter than I am.
I’ve also created an enormous number of models and diagrams over the years to present to clients. And I’ve been in the meetings when others present their diagram sets as well.
Having made the mistakes, watched the mistakes, and sometimes even avoided the mistakes, there's a very basic trap I want to spare all my fellow modelers and diagram authors, one I still see at all levels of organizations.
The trap is this: showing the big picture first.
I understand that you should “tell them what you’re going to say, say it, and tell them what you said.” Perhaps. But if you show a complex diagram (more than two or three boxes) that you expect to explain in detail, it’s only going to go downhill from there.
I have made a few extensions and Domain Specific Languages (DSLs) to represent state machines.
Some were frameworks in a native language, including versions using the State Chart pattern. One was in C++ integrated with an event and entity framework so that entities could participate in threaded declarative workflows assembled instead of derived.
But the ones I enjoyed most were done in Lisp. The simplest was a sexpr based notation that was read by clisp and emitted Java. It was, of course, a macro. The expression of the machine was executable and the execution was to emit Java. It generated interfaces, abstract base classes, 100% of the POJO that represented the messages, and required no fences — it was a full code generator, not a wizard (no generated code was edited by hand or saved in the version control system).
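As a toy analogue of that generate-everything approach, here's a sketch in Python emitting Python (the original was a Lisp macro emitting Java; this machine and its names are invented for illustration): the machine description is data, and the generator emits complete, immediately runnable source that nobody ever edits by hand.

```python
# A machine described as data ...
machine = {
    "locked": {"unlock": "closed"},
    "closed": {"lock": "locked"},
}

def emit(name, machine):
    """... turned into source text by a generator, never edited by hand."""
    lines = [f"def {name}(state, event):"]
    for state, transitions in machine.items():
        for event, target in transitions.items():
            lines.append(f"    if state == {state!r} and event == {event!r}:")
            lines.append(f"        return {target!r}")
    lines.append("    return state")  # unknown events leave the state unchanged
    return "\n".join(lines)

source = emit("step", machine)
ns = {}
exec(source, ns)  # the generated code is immediately runnable
print(ns["step"]("locked", "unlock"))  # -> closed
```

A real generator would emit to files, cover interfaces and message types, and target whatever language the system needs, but the shape of the idea is the same: the expression of the machine is executable, and its execution produces the code.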
I did another recently. This time, I decided to tackle the interesting problem: state machines become unbearably big and repetitive if done following the conventional model.
So I decided to toss the rules aside.