Work is a Queue of Queues

Do you ever get that feeling like no matter how hard you work, you just can’t keep up?

This isn’t a problem uniquely faced by modern knowledge workers. It’s also a characteristic of certain software systems. This state — of being perpetually behind on intended work-in-progress — can fall naturally out of the data structures used to design a software system. Perhaps by learning something about these data structures, we can learn something about the nature of work itself.

Let’s start with the basics. In computer science, one of the most essential data structures is the stack. Here’s the definition from Wikipedia:

… a stack is a data type that serves as a collection of elements, with two principal operations: (1) “push”, which adds an element to the collection; and (2) “pop”, which removes the most recently added element that was not yet removed. The order in which elements come off [is known as] LIFO, [or last in, first out]. Additionally, a “peek” operation may give access to the top without modifying the stack.

From here on out, we’ll use the computer science (mathematical) function call notation, f(), whenever we reference one of the operations supported by a given data structure. So, for example, to refer to the “push” operation described above, we’ll notate it as push().

I remember learning the definition of a stack in college and being a little surprised at “LIFO” behavior. That is, if you push() three items onto a stack — 1, 2, and 3 — when you pop() the stack, you’ll get the last item you added — in this case, 3. This means the last item, 3, is the first one pop()’ed off the stack. Put another way, the first item you put on the stack, 1, only gets processed once you pop() all the other items — 3, 2 — off the stack, and then pop() once more to (finally) remove item 1.

Practically speaking, this seems like a “frenetic” or “unfair” way to do work — you’re basically saying that the last item always gets first service, and so, if items are push()’ed onto the stack faster than they are pop()’ed, some items will never be serviced (like poor item 1, above).
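To make this concrete, here is a minimal sketch using a plain JavaScript array, whose built-in push() and pop() methods behave exactly like the stack described above (the numeric items are just for illustration):

```javascript
// A plain JavaScript array can serve as a stack: push() adds an item
// to the top, and pop() removes the most recently added item (LIFO).
const stack = [];

stack.push(1); // first in...
stack.push(2);
stack.push(3);

console.log(stack.pop()); // 3 -- last in, first out
console.log(stack.pop()); // 2
console.log(stack.pop()); // 1 -- poor item 1 is serviced last

// A "peek" at the top item, without modifying the stack:
stack.push(4);
console.log(stack[stack.length - 1]); // 4; the stack is unchanged
```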

Continue reading Work is a Queue of Queues

JavaScript: The Modern Parts

In the last few months, I have learned a lot about modern JavaScript and CSS development with a local toolchain powered by Node 8, Webpack 4, and Babel 7. As part of that, I am doing my second “re-introduction to JavaScript”. I first learned JS in 1998. Then relearned it from scratch in 2008, in the era of “The Good Parts”, Firebug, jQuery, IE6-compatibility, and eventually the then-fledgling Node ecosystem. In that era, I wrote one of the most widely deployed pieces of JavaScript on the web, and maintained a system powered by it.

Now I am re-learning it in the era of ECMAScript (ES6 / ES2017), transpilation, formal support for libraries and modularization, and mobile web performance with things like PWAs, code splitting, and WebWorkers / ServiceWorkers. I am also pleasantly surprised that JS, via the ECMAScript standard and Babel, has evolved into a pretty good programming language, all things considered.
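As a small taste of that evolution, here is a sketch showing a few modern features in one place: arrow functions, destructuring, template literals, and ES2017 async/await. The function name and URL are made up for illustration, and fetch() assumes a browser or other runtime that provides it:

```javascript
// Arrow functions and async/await (ES2017) replace callback pyramids.
// fetchJson and the URL below are hypothetical, purely for illustration.
const fetchJson = async (url) => {
  const response = await fetch(url); // assumes a runtime with fetch()
  return response.json();
};

async function main() {
  // Destructuring pulls fields out of the parsed JSON object.
  const { title, author } = await fetchJson("https://example.com/post.json");
  // Template literals interpolate values directly into strings.
  console.log(`${title}, by ${author}`);
}

main().catch(console.error);
```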

To solidify all this stuff, I am using webpack/babel to build all static assets for a simple Python/Flask web app, which ends up deployed as a multi-hundred-page static site.

One weekend, I ported everything from Flask-Assets to webpack, both to play around with ES2017 features and to explore the Sass CSS preprocessor and some D3.js examples. And boy, did that send me down a yak-shaving rabbit hole. Let’s start from the beginning!
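For flavor, here is roughly what the heart of such a toolchain looks like: a minimal webpack 4 + Babel 7 configuration sketch, not my exact production setup. The entry and output paths are placeholders:

```javascript
// webpack.config.js -- a minimal webpack 4 + Babel 7 sketch.
// Paths are placeholders; a real config grows code splitting,
// plugins, and separate production/development modes from here.
const path = require("path");

module.exports = {
  mode: "development",
  entry: "./src/index.js",
  output: {
    path: path.resolve(__dirname, "static/build"),
    filename: "bundle.js",
  },
  module: {
    rules: [
      {
        test: /\.js$/,
        exclude: /node_modules/,
        use: {
          loader: "babel-loader", // transpile modern JS via Babel 7
          options: { presets: ["@babel/preset-env"] },
        },
      },
      {
        // Compile Sass to CSS, then inject it via <style> tags.
        test: /\.scss$/,
        use: ["style-loader", "css-loader", "sass-loader"],
      },
    ],
  },
};
```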

JavaScript in 1998

I first learned JavaScript in 1998. It’s hard to believe that this was 20 years — two decades! — ago. This post will chart the two decades since — covering JavaScript in 1998, 2008, and 2018. The focus of the article will be on “modern” JavaScript, as of my understanding in 2018/2019, and, in particular, what a non-JavaScript programmer should know about how the language — and its associated tooling and runtime — have dramatically evolved. If you’re the kind of programmer who thinks, “I code in Python/Java/Ruby/C/whatever, and thus I have no use for JavaScript and don’t need to know anything about it”, you’re wrong, and I’ll describe why. Incidentally, you were right in 1998, you could get by without it in 2008, and you are dead wrong in 2018.

Further, if you are the kind of programmer who thinks, “JavaScript is a tire fire I’d rather avoid because it lacks basic infrastructure we take for granted in ‘real’ programming languages”, then you are also wrong. I’ll show you how “not taking JavaScript seriously” is the 2018 equivalent of the skeptical 2008-era programmer not taking Python or Ruby seriously. JavaScript is a language that is not only here to stay, but has already — and will continue to — take over the world in several important areas. To be a serious programmer, you’ll have to know JavaScript’s Modern and Good Parts — as well as some other server-side language, like Python, Ruby, Go, Elixir, Clojure, Java, and so on. But, though you can swap one backend language for another, you can’t avoid JavaScript: it’s pervasive in every kind of web deployment scenario. And the developer tooling has fully caught up to your expectations.

Continue reading JavaScript: The Modern Parts

Parse.ly’s brand refresh

Here’s how Parse.ly’s original 2009 logo looked:

Parse.ly has some fun startup lore from its early days about how we “acquired” this logo. I wrote about this in a post entitled, “Parse.ly: brand hacking”:

Our first Parse.ly logo was designed as a trade for another domain I happened to own. It was the dormant domain for a film project by one of my friends, Josh Bernhard. I had registered it for him while we were both in college. […] It so happened that my friend had picked the name “Max Spector” for his film, and thus registered maxspector.com. The film never came to fruition, so the domain just gathered dust for a while. But Max Spector happened to be the name of a prominent San Francisco designer. And Max got in touch with me about buying the domain for his personal website. Acting opportunistically, I offered it in trade for a logo for Parse.ly. To my surprise, he agreed.

I still look back at the logo fondly, though, at nearly a decade old, it obviously has that dated “web 2.0 startup” feel.

Continue reading Parse.ly’s brand refresh

Shipping the Second System

In 2015-2016, the Parse.ly team embarked upon the task of re-envisioning its entire backend technology stack. The goal was to build upon the learnings of more than 2 years of delivering real-time web content analytics, and to use that knowledge to create the foundation for a scalable stream processing system that had built-in support for fault tolerance, data consistency, and query flexibility. Today in 2019, we’ve been running this new system successfully in production for over 2 years. Here’s what we learned about designing, building, shipping, and scaling the mythical “second system”.

The Second System Effect

But why redesign our existing system? This question lingered in our minds a few years back. After all, the first system was successful. And I had the lessons of Frederick Brooks close at hand when I embarked on this project. He wrote in The Mythical Man-Month:

Sooner or later the first system is finished, and the architect, with firm confidence and a demonstrated mastery of that class of systems, is ready to build a second system.

This second is the most dangerous system a man ever designs.

When he does his third and later ones, his prior experiences will confirm each other as to the general characteristics of such systems, and their differences will identify those parts of his experience that are particular and not generalizable.

The general tendency is to over-design the second system, using all the ideas and frills that were cautiously sidetracked on the first one. The result, as Ovid says, is a “big pile.”

Were we suffering from engineering hubris to redesign a working system? Perhaps. But we may have been suffering from something else altogether healthy — the paranoia of a high-growth software startup.

I discuss Parse.ly’s log-oriented architecture at Facebook’s HQ for PyData Silicon Valley, with Parse.ly’s VP of Engineering, Keith Bourgoin.

Our product had only just been commercialized. We were a team small enough to be nimble, but large enough to be dangerous. Yes, there were only a handful of engineers. But we were operating at the scale of billions of analytics events per day, on track to serve hundreds of enterprise customers who required low-latency analytics over terabytes of production data. We knew that scale was not just a “temporary problem”. It was going to be the problem. It was going to be relentless.

Continue reading Shipping the Second System

Expanding my mind, once more, with functional programming

The Structure and Interpretation of Computer Programs (SICP) is a classic computer science text written by Gerald Jay Sussman and Hal Abelson. It is widely known in the computer science community as the “wizard book”. It intends to teach the foundations of computer programming from “first principles”, illustrating programming language design using Scheme, a dialect of the Lisp language.

In this context, from August 26–31, 2018, I am taking a “think week” to reflect on my relationship to computer programming.

I am spending this week in Chicago with David Beazley (@dabeaz), where we will be spelunking through the land of this famed SICP textbook via Racket, a modern functional programming environment one can use to program in — and even extend — Scheme and many other languages.

The course will also (of course) involve some Python. This will be a fun follow-up to an earlier course I took with Beazley in 2011, “Write a Compiler (in Python)”. I can’t believe I wrote the code for that course over 7 years ago.


Back in 2011, I took “Write a Compiler (in Python)” with David Beazley. A handful of long-time professional programmers and Pythonistas, locked in a room together for 5 days, hacking away on a Python compiler for a Go-like language. It was so much fun. It proved to me that I loved programming! I’m the one whose head is exploding on the left.

How I’m thinking about this course

I have long identified primarily as a computer programmer. I studied Computer Science at NYU, and I currently read about programming languages, paradigms, and design patterns all the time. I have read way more technical programming books than books in any other category or genre.

But, I’m also someone who is interested in the business of software, and leadership of software teams, in a sort of secondary way to my love of software itself. Business books — and particularly books about high-growth companies and their teams — make up my other big obsession. But, in the last several months, I’ve seen my relationship with software change in a number of ways.

Continue reading Expanding my mind, once more, with functional programming

Flow and concentration

From Good Business, by Mihaly Csikszentmihalyi, the author of Flow.

Another condition that makes work more flowlike is the opportunity to concentrate. In many jobs, constant interruptions build up to a state of chronic emergency and distraction.

He goes on:

Stress is not so much the product of hard work, as it is of having to switch attention from one task to the other without having any control over the process.

Continue reading Flow and concentration

Public technical talks and slides

Over the years, I’ve put together a few public technical talks where the slides are accessible on this site. These are only really nice to view on desktop, and require the use of arrow keys to move around. Long-form notes are also available — generated by a sweet Sphinx and reStructuredText plugin. I figured I’d link to them all here so I don’t lose track:

Continue reading Public technical talks and slides

Software planning for skeptics

Engineers hate estimating things.

One of the most-often quoted lines about estimation is “Hofstadter’s Law”, which goes:

Hofstadter’s Law: It always takes longer than you expect, even when you take into account Hofstadter’s Law.

If you want to deliver inaccurate information to your team on a regular basis, give them a 3-month-out product development timeline every week. Hofstadter’s Law has been a truism at every company at which I have worked, over a varied career in software.

So, estimation is inaccurate. Now what?

Why do we need a product delivery schedule if it’s always wrong?

There is an answer to this question, too:

Realistic schedules are the key to creating good software. It forces you to do the best features first and allows you to make the right decisions about what to build. [Good schedules] make your product better, delight your customers, and — best of all — let you go home at five o’clock every day.

This quote comes from Joel Spolsky.

So, planning and estimation aren’t so much about accuracy; they’re about constraints.

Continue reading Software planning for skeptics

Lenovo and the new Linux desktop experience

I am a longtime Thinkpad and Lenovo user; Thinkpads have long been my preferred laptops for Linux computing and programming.


The Lenovo X1C 2016 4th Generation Model is my latest Linux laptop

For some context, I’ve been running Linux on my desktop and laptop machines since ~2001, and I started using Thinkpads in this role with the famous Thinkpad T40 (2003), one of the first laptops to provide good Linux support, a rugged design, portability, power, and an excellent keyboard.

I then moved through a few different Lenovo models: the T400 (2008), the T420s (2011), and the X220 (2011).

I spent a couple of short stints in between — which I always regretted — on other PC laptop models, including HP and Asus. I upgraded from the T420s to the X220 after coming to the realization that portability and power consumption mattered more to me than the 14″ form factor, and that I could easily expand the X220’s limited hard drive with a 512 GiB SSD.

From 2013 or so, the X220 was my main programming/Linux machine. It was my favorite Thinkpad model of all time, despite some flaws. I’ll discuss my Linux desktop experience with the X220 briefly, and then go on to my experience with my current machine, the 2016 Lenovo X1 Carbon (4th Generation).

Continue reading Lenovo and the new Linux desktop experience

Charlottesville tech: a community that won’t be stopped by tragedy

Note: This post was written on August 17, 2017. I was living in Charlottesville, Virginia at the time; I had been based there since 2011 and would end up living there until 2019. Unfortunately, 5 days before this post was written, a tragedy happened in my town. This was my attempt to provide an alternative perspective on Charlottesville, the town, when this specific (terrible) tragedy on a specific (terrible) day became all anyone knew about it in the national headlines for months and years on end.

tl;dr — This New York techie moved to Charlottesville six years ago and witnessed a vibrant tech ecosystem develop. Though Charlottesville has some deep social problems, it’s also a place of creativity and optimism. Its best communities will prevail.

After spending my childhood, teenage years, college years, and early working years in and around New York City, in 2011, I was ready for a change. My wife was applying to medical schools across the country, and I was in the early stages of running my tech startup as a fully remote/distributed team.

Charlottesville’s pedestrian “Downtown Mall” on a calm fall day in 2013. (source)

I think prior to the tragic events of Saturday, August 12, most lifelong New Yorkers I know rarely gave much thought to Charlottesville, Virginia. Maybe they would hear the occasional news story about it, or had a friend, or friend of a friend, who attended the University of Virginia. But, for the most part, the locale occupied very little room in their brain — perhaps none — as was the case for me in 2011.

Continue reading Charlottesville tech: a community that won’t be stopped by tragedy