I founded Redwall MUCK on Friday, September 15th, 1995. It was based on Brian Jacques' Redwall novel series.
For those who don’t know, a MUCK is a textual multi-player virtual world. One of the best known MUCK code bases is TinyMUCK, the ancestor of Fuzzball MUCK, which is the server I hacked a few features into for Redwall MUCK. The world is built by players/members, not by programmers. It’s made up of rooms interconnected by exits, with the means for non-programmers to create freely.
The MUCK tech was old in 1995. There are still MUCKs out there (even Redwall MUCK).
Now, the MUCKs aren’t 3D virtual reality, but they are virtual reality. They were absolutely multi-user, and they had community. And each MUCK world is essentially a different place full of virtual places.
Even though I modified my resume not to match laundry-list job postings, I still receive emails from recruiters seeking things like this:
- 8 years working experience developing in JAVA with Microservices.
- Expert in Core JAVA / Spring Boot, Framework, Security, Cloud / Hibernate / Web Services
- Working knowledge of Apache Camel, JMS, JNDI, JUnit, and Cucumber
- Advanced knowledge developing REST APIs and micro-services
- Experience with relational, NoSQL, and event streaming database implementations (Oracle, MySQL, PostgreSQL, MongoDB, Cassandra, Kafka)
- Practical use of XML, JSON, XSLT
So, I guess seven years isn’t enough, eight is the magic number? What do they actually need? What’s interesting to me is they don’t actually care about programming skills, analysis skills, or design skills. What they care about are libraries and products.
It’s almost like what they want is: Coder, Java Enterprise, One (1) each.
Is that really what allows companies to generate extraordinary software?
A simple example: processing a compressed data file accessed via HTTP (could be API, static website, or any other HTTP source).
I don’t actually care what language, stack, library, or framework is used. Mostly, I don’t care because it doesn’t matter. This problem isn’t about coding, it’s about performance. So, for the sake of the example, assume the language is fast and the library or framework it uses is exceptional.
Now, the normal approach is this:
- Fetch the compressed file
- Decompress the compressed file into a source file
- Process the source file
This is great, except it takes longer to fetch the file than we’re allowed to wait before we start processing the data. And that ignores the decompression time. In short, we have a performance problem. We need to begin showing results before the compressed file has finished arriving over the bandwidth we have.
Well, if it takes 30 seconds to move the data through the wire and we must start showing results within 5 seconds, I guess we’ll need a faster pipe.
Or perhaps we can just do better…
It’s easy to build software today.
Pick a stack, which gives you a framework from web browser through server. Pick the plugins that hook to your preferred SQL or No-SQL persistence using the mapping technology, so you don’t have to code storage directly. Use Docker for each microservice and have them expose web services; set up Kafka for the queueing, with Redis for the shared state. Control it all via Kubernetes and deploy it through AWS.
I’ve been told this, by more than one person and team. “It’s easy.” I’ve never seen any of the teams who claimed it was easy actually ship.
And yet, everything I wrote above is true. There really are full starting points. There are excellent technologies. I’ve written before about the challenges of attempting to build with enterprise stacks before the problem is well understood, but let’s grant that the problem is understood.
Why, then, is it still so hard to ship software that people can use (and harder to ship software that they like to use)?