Why Programming Should Be Hard

Or… How I Learned To Stop Worrying And Love The Free Monad

“That’s not easy what I just did!” — Carl (River Phoenix) from Sneakers

I hate the word “elegant.” It’s one of those words you hear a lot when talking to programmers, like “refactor” or “trivial.” Like those words, it’s overused. And like all overused words, it’s lost its meaning. “Elegant” (when used in the context of programming) once referred to a program or architecture that was so beautifully simple that it was almost unbelievable. The recursive solution to the Towers of Hanoi is elegant. Most code isn’t. We sure like to use that word a lot, though.

I hate “elegant” because it’s become a rhetorical bludgeon used against complex code that must be complex in order to accomplish its task. Not every problem has a Towers-of-Hanoi-esque solution. Most don’t, or at least the world is too short of geniuses to invent truly elegant solutions to every problem. Also, programming problems are becoming more complex every day. My team writes cloud software, which means we need to write code that runs across hundreds or thousands of independent compute nodes and somehow appears to an end user to accomplish a coherent, unified task. Worse (or better, depending on your perspective), we write IoT software for industrial machines, which means our software has to process hundreds of thousands of discrete events per second and somehow make sense of all that data in a way that can be presented to a human.

In order to manage the complexity of these new programming tasks, we as a community of software engineers must invent new tools. Those tools include:

  • New programming languages with better facilities
  • New programming paradigms (OOP, functional, immutable, reactive)
  • New vocabularies to describe and encode solutions to harder problems (category theory, type-level programming, stream processing)
  • Entirely new types of systems (cluster managers, NoSQL databases, streaming frameworks)

All of these are difficult to learn. Code written in a language you don’t know, using paradigms you’re not familiar with, in vocabularies you have never learned is likely to be incomprehensible to you. If you call this code “inelegant,” you’re not helping. You are making excuses for your own refusal to learn something new and difficult. It’s not a programmer’s duty to make code easy to comprehend for those unwilling to learn. It’s a programmer’s duty to learn, so that they can write and comprehend better and more powerful code.

The reluctance to learn difficult things is understandable, and I can even understand why someone might get defensive about it and start throwing “inelegant” around. But this is ultimately not constructive. Every software engineer that wants to evolve with the industry has to continue to learn. In this post, I’ll talk about my personal learning journey in my career and what I’ve learned about learning, so to speak, from early in my career to as recently as this past year.

The Dangers And Rewards Of Geeking Out

I am a geek. I define “geek” as someone who likes things — often certain kinds of things — just because he or she thinks they’re cool. Geeks often make good engineers, and engineers are often geeks. You can geek out about anything really — I geek out about cooking. I also geek out about software engineering. This has, on occasion, gotten me into some trouble.

Software engineering, like most human endeavors, is a game. It has constraints (i.e. rules), and you’re trying to win it (i.e. ship a successful product). Winning games is all about understanding the rules and finding optimal ways to work within those rules to produce the desired outcome. In any sufficiently complex game, the problem space is large enough that there’s plenty to geek out about. An enthusiast for the game of chess might geek out about openings, for instance. I believe this is why many engineers are also gamers. Any game can be a dress rehearsal for the real thing.

Where geeking out can go wrong is when it steers you away from winning the game. Most deep dives into a geek-out session begin with the best of intentions: maybe you read about some cool new software engineering technique. Maybe you did a book club on it. Now you’re in a big hurry to put it into practice. At some point though, you stop caring that you learned about this thing in the first place to win the game, and start caring more about doing the thing. This might lead to doing the thing badly, and in particularly bad cases, you and others might rack up a list of failures that are then used as evidence of why the thing is a bad idea in the first place.

My Awkward Unit Testing Adventure

This happened to me with unit testing. When I first read about automated testing — and unit testing in particular — I was immediately sold. At first, I was thinking about the game. If we write good unit tests, I thought, we will save countless hours of manual testing time, find bugs earlier, and have code divided into small, comprehensible units that are independently tested. Then I — and other like-minded geeks I worked with — started doing it.

The problem was, nothing works like it does in books. When you start working with real software — legacy software in particular — compromises must be made. Engineering trade-offs must be considered. The game has to be played, and played well. However, at this point I was so geeked out over unit testing that I was determined to make it work by any means necessary. Because it was cool!

The result was a disaster. Thousands of lines of unmaintainable, often-useless, poorly performing tests were written by me and others. About the only thing this accomplished was to give ammunition to those who were skeptical of automated testing from the outset. Now they had a real live train wreck they could point to as evidence that they had been right all along. The group then entered a sort of testing dark age that took years of concerted effort to recover from. That recovery was still in progress when I left that company.

A Better Way

I like to think I learned the hard-won lesson from that experience. A year or two later, I geeked out about Test Driven Development (TDD). As with unit testing, I had read about it in books and various blogs, and I had a concrete idea of how it could improve our software engineering — even how it could right some of the wrongs from the misadventure above. Also as before, I totally geeked out about it. But this time I caught myself and used my geek powers for good.

As with unit testing, I immediately ran into problems applying the practice to actual code. This time, however, I kept the game in mind. Ultimately I wanted to make our software engineering better and more efficient, which meant first and foremost accepting failure as a possibility. If I couldn’t make TDD work in our code base, I would be doing no one any favors by using it anyway just because it was cool.

Then — and this was my epiphany — I accepted that programming is supposed to be hard. I decided to have some faith in the people that wrote these books and blogs — that they were able to make these technologies work with real software. If I stumbled early on, I told myself, the blame is likely mine, not the technology’s. I kept trying, and I practiced. I referred back to the literature and sought out more reading and advice when I got stuck. Ultimately, I reached the point where the technology I geeked out over met with my goal of playing the game more effectively.

Obviously, healthy skepticism of a technology is fine, but don’t let your skepticism rule you. Too many people do. People with unhealthy skepticism are the same folks who call necessarily-complex code “inelegant.” Take the time to watch the first 3:40 or so of this YouTube video. Most of it is about how to play guitar — another thing I geek out about — but the first part is great advice about getting good at anything difficult and dealing with skepticism — yours and others’. It’s almost identical to my own mental model on the subject.

Once you have a comfortable mastery of the technology and a decent track record of applying it to real problems, evangelize it. Start with people more likely to accept it and less likely to show unhealthy skepticism (fellow geeks!). Pretty soon, you’ll have built a tribe not only of believers, but fellow experts that can help spread the use of the technology throughout the organization, and ultimately move a step closer to winning the game.

You Said Something About A Free Monad?

This brings us to the final chapter of our story. In my current position, I am fortunate to have a lot of very smart programmers working for me. One of them, Phil Quirk, I have worked with for several years, including at the company involved in my anecdotes above. He likes to geek out about category theory and functional programming. He also believes that they allow him to write more powerful code that’s easier to reason about. For a long time, I stubbornly disagreed with him about this. This is the story of how I was wrong.

I am a firm believer in writing easy-to-read, easy-to-maintain code. Clean Code — one of my favorite books — deals with this subject exclusively. Code is read and changed far more often than it is written (which happens only once), so of course any engineer would optimize for readability and maintainability. In my discussions with Phil, however, I caught myself confusing “easy to read and maintain” with the misuse of the term “elegant” that bothers me so much. I was calling Phil’s code “inelegant” for the same reasons my code had been called that so many times before: because I lacked the patience to learn the vocabularies necessary to understand it. When I realized this, I decided to give Phil and his categories a fair shake.

A book club (on the excellent Scala With Cats) and a few programming experiments later, and I am not only a convert, but I am ashamed I dug in my heels for so long. In the book club, I raised a question to the group: “Is it fair to require anyone who wants to play in our code base to learn these concepts, even though they are difficult and unfamiliar?” The unanimous answer: of course. Knowing these concepts makes us better at playing the game. If you want to play the game at our level, you must know them.

Phil also loves Dr. Strangelove. I hope that the alternate title of this blog post will suffice as an apology.


Programming is a constant learning journey, at least if you want to continue to work on new and cutting-edge problems. Somehow, the ethos of “false elegance” has crept into the software community, and this sometimes causes us to be inappropriately cynical about new techniques and ways of reasoning about problems. In the worst cases, we disparage these new methods, because if we try to use them and fail, we might look foolish. We should stop this and embrace increasing complexity as an unavoidable fact of technology itself. Only by continuing to be smarter than the problems we’re trying to solve will we continue to win the game.

In future blog posts, I and others will delve into some of the technical details behind some of the technologies I mentioned above. One might even involve my adventures using the free monad in database programming. In them, we’ll show how embracing necessary complexity empowers you to solve difficult problems with — dare I say it? — elegance. Stay tuned!

Combining Kafka Streams and Akka

In our KUKA Connect robot management web app, we have begun refactoring our code to use Kafka Streams as much as possible.

We were already using Kafka, publishing data to various topics, and consuming that data, either with simple consumers or by using Akka Streams.

But now we wanted to move to Kafka Streams: could we just replace our usages of Akka Streams with it? The short answer is no. Here’s a use case from our web app that shows how combining the two frameworks still makes sense for us.

Kafka Streams and Akka Streams, Each Great at Something

Use Case: When a KUKA robot hits a critical fault, notify the owner of that robot via text message.

Microservices Involved:

  • Notification Service: the main player; responsible for joining various Kafka streams from the other microservices together and calling into Messaging Service to actually send the messages to the right users.
  • Device Access Service: responsible for knowing which users have access to which robots at any given time; publishes a full-state KTable with this information.
  • Device Licensing Service: responsible for knowing whether a robot has a Plus license; publishes to a full-state KTable.
  • Device Fault Service: publishes robot faults to a KStream as they occur.
  • Messaging Service: knows how to send text messages to users.

We build our main stream by constructing a KStream off the deviceFaults topic, joining those fault events to the deviceAccess KTable to find the users who have access to the device, joining the result to the deviceLicensing KTable to filter out robots that are not Plus licensed, doing a flatMap to rekey the result from deviceId to userId, and finally publishing those events to a new topic we call userFaultStream.

Kafka Streams gives us the ability to easily combine these different flows of data, filter based on various criteria, then rekey it from device identifier to user identifier.
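The topology above can be sketched with the Kafka Streams Scala DSL. The topic names match the description; the case classes are illustrative placeholders, and implicit serdes for them are assumed to be wired up elsewhere:

```scala
// Sketch only — Fault/Access/License and their serdes are assumptions.
import org.apache.kafka.streams.scala.StreamsBuilder
import org.apache.kafka.streams.scala.ImplicitConversions._

case class Fault(deviceId: String, description: String)
case class Access(userIds: List[String])   // users with access to a device
case class License(isPlus: Boolean)

val builder = new StreamsBuilder()

// All three topics are keyed by deviceId.
val faults  = builder.stream[String, Fault]("deviceFaults")
val access  = builder.table[String, Access]("deviceAccess")
val license = builder.table[String, License]("deviceLicensing")

faults
  .join(access) { (fault, acc) => (fault, acc.userIds) }        // who can see this robot?
  .join(license) { case ((fault, users), lic) => (fault, users, lic.isPlus) }
  .filter { case (_, (_, _, isPlus)) => isPlus }                // Plus-licensed robots only
  .flatMap { case (_, (fault, users, _)) =>                     // rekey deviceId -> userId
    users.map(userId => (userId, fault))
  }
  .to("userFaultStream")
```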

But there’s a problem.

Enter the Akka

People dislike getting spammed with hundreds of text messages.

Further, robots can sometimes hiccup and throw a spurious fault that is then immediately cleared by an internal system. The fault stream carries every single fault, however, so if we blindly call into Messaging Service with each fault, users will sometimes get flooded with fault notifications.

What we really want is a way to batch the faults to any given user every X seconds or so, to give spurious faults time to clear themselves and also to group up rapid-occurring faults into a single message.

Kafka Streams is not good at that.
But Akka Streams is.

Recall we published our fault stream to a new topic userFaultStream.

We then made an Akka Streams consumer of that topic, grouped it by userId, merged it with another source that emits a “Send Message” signal every 5 seconds, and buffered up all text messages intended for a given user during those 5 seconds, until the Send Message signal arrives.
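One way to sketch this batching stage in Akka Streams is with groupBy plus groupedWithin, which expresses the same per-user 5-second buffering as the throttled Send Message signal described above. The fault type and names are assumptions, and the actual Kafka consumer of userFaultStream (e.g. via alpakka-kafka) is elided:

```scala
// Sketch only — UserFault and the source of userFaultStream are assumptions.
import akka.actor.ActorSystem
import akka.stream.scaladsl.{Sink, Source}
import scala.concurrent.duration._

case class UserFault(userId: String, text: String)

implicit val system: ActorSystem = ActorSystem("notification-service")

// In production this would be a Kafka consumer of the userFaultStream topic.
val userFaults: Source[UserFault, _] = Source.empty

userFaults
  .groupBy(maxSubstreams = 1024, _.userId)   // one substream per user
  .groupedWithin(100, 5.seconds)             // batch per user: size or time bound
  .map { batch =>
    (batch.head.userId, batch.map(_.text).mkString("\n"))  // one combined message
  }
  .mergeSubstreams
  .runWith(Sink.foreach { case (userId, body) =>
    // Call into Messaging Service here to send the batched text.
    println(s"text to $userId:\n$body")
  })
```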

This magic means we can receive robot faults from many robots at once, each handled by a different instance of our faults microservice; join those together to filter and discover which users need to know about the faults; key them by those userIds; buffer up and combine messages to the same user; and then send them out as a batch.


Because programming with Kafka Streams is so powerful, we are adopting it whenever we can, “streamifying” existing microservices one at a time as it makes sense for us to do so.

That said, we still have many use cases for Akka Streams, as its broadcast and flow processing provide rich abilities that Kafka Streams does not (yet?) offer.

Now that we are programming using this paradigm, we don’t want to go back to the “old” way of doing things, where we made lots of calls between microservices to piece together all the data we needed at any given time.