The Log: a building block for large-scale data systems

A software engineer at LinkedIn has written a monster of a blog post about “The Log”, a building block for large-scale data systems. The concepts in this post are near and dear to my heart due to my work on precisely these kinds of problems at Parse.ly.

What is “a log”?

The log is similar to the list of all credits and debits and bank processes; a table is all the current account balances. If you have a log of changes, you can apply these changes in order to create the table capturing the current state. This table will record the latest state for each key (as of a particular log time). There is a sense in which the log is the more fundamental data structure: in addition to creating the original table you can also transform it to create all kinds of derived tables.
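
To make the log/table duality concrete, here is a minimal Python sketch (the record format and function names are my own, purely illustrative, not from the original post): replaying the log in order yields the table of current state, and the same log can be replayed differently to yield a derived table.

```python
# A minimal sketch of the log/table duality described above.
# The record format and function names are illustrative assumptions.

log = [
    {"key": "alice", "value": 100},   # alice's balance becomes 100
    {"key": "bob",   "value": 50},    # bob's balance becomes 50
    {"key": "alice", "value": 75},    # later change: alice's balance becomes 75
]

def build_table(log_records):
    """Replay the log in order; the table holds the latest value per key."""
    table = {}
    for record in log_records:
        table[record["key"]] = record["value"]
    return table

def build_derived_table(log_records):
    """A derived view built from the same log: how many times each key changed."""
    counts = {}
    for record in log_records:
        counts[record["key"]] = counts.get(record["key"], 0) + 1
    return counts

print(build_table(log))          # {'alice': 75, 'bob': 50}
print(build_derived_table(log))  # {'alice': 2, 'bob': 1}
```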

At Parse.ly, we recently adopted Kafka widely in our backend to address exactly these needs: data integration and real-time/historical analysis for our large-scale web analytics use case. Previously, we were using ZeroMQ, which is good, but Kafka is a better fit here.

We have always had a log-centric infrastructure, born not of any understanding of the theory, but simply of our requirements. We knew that as a data analysis company, we needed to keep data as raw as possible in order to do derived analysis, and we knew that we needed to harden our data collection services and make it easy to prototype data aggregates atop them.

I also recently read the book by Nathan Marz (creator of Apache Storm), which proposes a similar “log-centric” architecture, though Marz calls it a “master dataset” and uses the fanciful term “Lambda Architecture”. He describes how, atop a “timestamped set of facts” (essentially, a log), you can build any historical or real-time aggregate of your data via dedicated “batch” and “speed” layers. There is a lot of overlap in thinking between that book and this article.
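
To give a rough feel for that batch/speed split, here is a toy Python sketch of my own (the fact format and cutoff logic are assumptions, not drawn from Marz’s book): a batch view is precomputed over the historical log, a speed view covers facts the batch has not yet seen, and queries merge the two.

```python
# Toy illustration of batch/speed layers over a log of timestamped facts.
# The fact format and BATCH_CUTOFF are assumptions for this sketch.

facts = [
    {"ts": 1, "url": "/a"},
    {"ts": 2, "url": "/a"},
    {"ts": 3, "url": "/b"},
    {"ts": 4, "url": "/a"},   # arrives after the last batch run
]

BATCH_CUTOFF = 3  # the batch layer has processed facts with ts <= 3

def pageview_counts(fact_stream):
    """Count pageviews per URL from any stream of facts."""
    counts = {}
    for fact in fact_stream:
        counts[fact["url"]] = counts.get(fact["url"], 0) + 1
    return counts

# Batch layer: precomputed from the full historical log (up to the cutoff).
batch_view = pageview_counts(f for f in facts if f["ts"] <= BATCH_CUTOFF)

# Speed layer: incrementally computed from facts the batch hasn't seen yet.
speed_view = pageview_counts(f for f in facts if f["ts"] > BATCH_CUTOFF)

# Query time: merge the two views into one answer.
merged = {url: batch_view.get(url, 0) + speed_view.get(url, 0)
          for url in set(batch_view) | set(speed_view)}
print(merged)  # {'/a': 3, '/b': 1}
```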

[Image: LinkedIn’s log-centric stack, visualized.]

Continue reading The Log: a building block for large-scale data systems

Functional dynamic dispatch with Python’s new singledispatch decorator in functools

I just read Python 3.4’s release notes and found a nice little gem.

I didn’t know what “Single Dispatch Functions” were all about. Sounded very abstract. But it’s actually pretty cool, and covered in PEP 443.

What’s going on here is that Python has added support for another kind of polymorphism known as “single dispatch”. This allows you to write a function with several implementations, each associated with one or more types of input argument. The “dispatcher” (called singledispatch and implemented as a Python function decorator) figures out which implementation to choose based on the type of the first argument. It also maintains a registry mapping types to function implementations.
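
Here is a small sketch of what that looks like in practice (the describe function and the types it registers are just an illustration of the PEP 443 API):

```python
from functools import singledispatch

@singledispatch
def describe(obj):
    """Fallback implementation, used when no more specific type is registered."""
    return "object: {!r}".format(obj)

@describe.register(int)
def _(obj):
    return "integer with value {}".format(obj)

@describe.register(list)
def _(obj):
    return "list of {} items".format(len(obj))

print(describe(42))        # integer with value 42
print(describe([1, 2]))    # list of 2 items
print(describe("hello"))   # object: 'hello'
```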

This is not technically “multimethods” — which can also be implemented as a decorator, as GvR did in 2005 — but it’s related. See the Wikipedia article on Dynamic Dispatch for more information.

The other interesting thing about this change is that the library is already available on Bitbucket and PyPI as a backport that has been tested with Python 2.6+. So you can start using it today, even if you’re not on 3.x!

Continue reading Functional dynamic dispatch with Python’s new singledispatch decorator in functools

How investors play the option

Paul Graham put up a new essay, one of his longest, called “How to Raise Money”. It gives a good glimpse into the mind game that is startup financing.

I thought it was particularly interesting how he documented three different ways in which investors say “no”, without really saying no.

Continue reading How investors play the option

Parse.ly: brand hacking

There’s been some hoopla lately about “weird” startup names in The Wall Street Journal, with specific coverage of “.ly” domains in The Atlantic Wire:

The latest start-up boom has led to the creation of at least 161 companies that end in “ly,” “lee,” and “li,” which is, naming consultants tell us, 160 too many. There’s feedly, bitly, contactually, cloudly, along with a bunch of other company-LYS […] and all but the first ever “ly” name are “just lazy,” Nancy Friedman, a naming consultant, told The Atlantic Wire.

Continue reading Parse.ly: brand hacking

“We’re killing it”

Good post today, A “Third Way” in Entrepreneurship, which discusses the “always be winning”, annoyingly positive veneer of most startup entrepreneurs. This is a community where many of the founders you meet share only their latest victories and pretend that failures rarely happen.

… entrepreneurs are pressured to maintain a totally positive face to the outside world about the state of their company. In San Francisco, “we’re killing it” is almost now an inside joke because of the ubiquity of that response when someone asks an entrepreneur how their company is faring. Most of these companies are not “killing it”, and the entrepreneurs probably know that.

There is also a nice comment thread discussing the “we’re killing it” phrase, a discussion to which I contributed an anecdote and interpretation.

The comment I added to the discussion:

A friend once relayed a story to me of a dinner meeting of ~20 early-stage high-tech executives he attended that was sponsored by a startup organization. The moderator asked one question as an ice breaker to kick off the night: “What is the greatest challenge that your startup faces today?”

My friend was the first one picked to share. Being a very level-headed guy (who personally hates the term, “killing it”), he suggested that one of his biggest challenges was maintaining work/life balance & personal relationships, for himself & also for his employees, so that they don’t burn out on the job.

The baton then got passed to the next entrepreneur, and, as my friend tells it, entrepreneur after entrepreneur shared their “greatest challenge”, though they were only “challenges” in the weakest sense of the word. For example: “handling all the new customers we have”, “scaling our servers for our massive user-base”, “hiring enough software engineers to keep up with the business growth”.

He realized then that every entrepreneur was “positioning” the answer to make it appear that the greatest challenge faced was dealing with the company’s illusory massive success.

I think this anecdote describes the “killing it” mentality quite well — even among peers and in a setting where people should be comfortable sharing their fears, this community prefers reality distortion.

Uninterruptability

Paul Graham, in a footnote from his essay “How to Make Wealth”:

One valuable thing you tend to get only in startups is uninterruptability. Different kinds of work have different time quanta. Someone proofreading a manuscript could probably be interrupted every fifteen minutes with little loss of productivity. But the time quantum for hacking is very long: it might take an hour just to load a problem into your head. So the cost of having someone from personnel call you about a form you forgot to fill out can be huge.

This is why hackers give you such a baleful stare as they turn from their screen to answer your question. Inside their heads a giant house of cards is tottering.

The mere possibility of being interrupted deters hackers from starting hard projects. This is why they tend to work late at night, and why it’s next to impossible to write great software in a cubicle (except late at night).

One great advantage of startups is that they don’t yet have any of the people who interrupt you. There is no personnel department, and thus no form nor anyone to call you about it.

The content trading desk

Note: At the time of this post’s publication, I was the co-founder & CTO of Parse.ly, the company behind a real-time analytics platform used by top publishers such as The Wall Street Journal, Bloomberg, Slate, Fortune, TechCrunch, and over 2,500 others. This post was informed by that work experience.

My first job out of college was as a software engineer at Morgan Stanley. This 50,000-employee firm employed tens of thousands of software engineers. Though the most important employees were the traders, it was clear that software ran the place.

For anyone who has worked on Wall Street in the last couple of decades, this is no surprise. It may be surprising to people outside the industry, who perhaps still associate Wall Street with traders wearing funny jackets calling out orders across a busy NYSE trading floor. That still goes on (though probably not for much longer). Most of the trading activity on Wall Street is heavily computerized; orders are placed and fulfilled mostly by machines.

[Image: a Wall Street trading floor]

Once the markets went digital, data analysis became a much more rigorous discipline on Wall Street. Whereas before, a trader’s primary edge was being personally connected to the most important players (think Gordon Gekko), today’s traders seek their edge in systematic prediction models. In other words, some machines analyze the orders being placed and fulfilled on the market, while others try to detect patterns or make predictions based on all of this data.

Continue reading The content trading desk

Python double-under, double-wonder

Python has a number of protocols that classes can opt into by implementing one or more “dunder methods”, aka double-underscore methods. Examples include __call__ (make an object behave like a function) and __iter__ (make an object iterable).

The choice of wrapping these functions with double-underscores on either side was really just a way of keeping the language simple. The Python creators didn’t want to steal perfectly good method names from you (such as “call” or “iter”), but they also did not want to introduce some new syntax just to declare certain methods “special”. The dunders achieve the dual goal of calling attention to these methods while also making them just the same as other plain methods in every aspect except naming convention.
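
To illustrate the two protocols mentioned above, here is a small example of my own (the Countdown class is purely hypothetical, not from the post):

```python
class Countdown(object):
    """Illustrative class opting into two protocols via dunder methods."""

    def __init__(self, start):
        self.start = start

    def __call__(self, step=1):
        # __call__ makes instances behave like functions:
        # calling countdown(step) returns a new, smaller Countdown.
        return Countdown(self.start - step)

    def __iter__(self):
        # __iter__ makes instances iterable: yields start, start-1, ..., 1.
        n = self.start
        while n > 0:
            yield n
            n -= 1

countdown = Countdown(3)
print(list(countdown))     # [3, 2, 1]
print(list(countdown(1)))  # [2, 1]  (the object was called like a function)
```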

Continue reading Python double-under, double-wonder