JavaScript: The Modern Parts

In the last few months, I have learned a lot about modern JavaScript and CSS development with a local toolchain powered by Node 8, Webpack 4, and Babel 7. As part of that, I am doing my second “re-introduction to JavaScript”. I first learned JS in 1998. Then relearned it from scratch in 2008, in the era of “The Good Parts”, Firebug, jQuery, IE6-compatibility, and eventually the then-fledgling Node ecosystem. In that era, I wrote one of the most widely deployed pieces of JavaScript on the web, and maintained a system powered by it.

Now I am re-learning it in the era of ECMAScript (ES6 / ES2017), transpilation, formal support for libraries and modularization, and mobile web performance with things like PWAs, code splitting, and WebWorkers / ServiceWorkers. I am also pleasantly surprised that JS, via the ECMAScript standard and Babel, has evolved into a pretty good programming language, all things considered.

To solidify all this stuff, I am using webpack/babel to build all static assets for a simple Python/Flask web app, which ends up deployed as a multi-hundred-page static site.

One weekend, I ported everything from Flask-Assets to webpack so that I could play around with ES2017 features, as well as explore the Sass CSS preprocessor and some D3.js examples. And boy, did that send me down a yak-shaving rabbit hole. Let’s start from the beginning!

JavaScript in 1998

I first learned JavaScript in 1998. It’s hard to believe that this was 20 years — two decades! — ago. This post will chart the two decades since — covering JavaScript in 1998, 2008, and 2018. The focus of the article will be on “modern” JavaScript, as of my understanding in 2018/2019, and, in particular, what a non-JavaScript programmer should know about how the language — and its associated tooling and runtime — have dramatically evolved. If you’re the kind of programmer who thinks, “I code in Python/Java/Ruby/C/whatever, and thus I have no use for JavaScript and don’t need to know anything about it”, you’re wrong, and I’ll describe why. Incidentally, you were right in 1998; you could get by without it in 2008; and you are dead wrong in 2018.

Further, if you are the kind of programmer who thinks, “JavaScript is a tire fire I’d rather avoid because it lacks basic infrastructure we take for granted in ‘real’ programming languages”, then you are also wrong. I’ll be able to show you how “not taking JavaScript seriously” is the 2018 equivalent of the skeptical 2008-era programmer not taking Python or Ruby seriously. JavaScript is a language that is not only here to stay, but has already — and will continue to — take over the world in several important areas. To be a serious programmer, you’ll have to know JavaScript’s Modern and Good Parts — as well as some other server-side language, like Python, Ruby, Go, Elixir, Clojure, Java, and so on. But, though you can swap one backend language for another, you can’t avoid JavaScript: it’s pervasive in every kind of web deployment scenario. And, the developer tooling has fully caught up to your expectations.

JavaScript during The Browser Wars

In 1998, browsers were a harsh environment to target for development; not only was Internet adoption low and not only were internet connections slow, but the browser wars — mainly between Netscape and Microsoft — were creating a lot of confusion.

Netscape Navigator 4 was released in 1997, and Internet Explorer 5 was released in 1999. The web was still trying to make sense of HTML and CSS; after all, CSS1 had only been released a year earlier.

In this environment, the definitive web development book of the era was “JavaScript: The Definitive Guide”, which weighed in at over 500 pages. Note that, in 1998, the most widely used programming languages were C, C++, and Java, as well as Microsoft Visual Basic for Windows programmers. So expectations about “what programming was” were framed mostly around these languages.

In this sense, JavaScript was quite, quite different. There was no compiler. There was no debugger (at least, no very good ones). There was no way to “run a JavaScript program”, except to write scripts in your browser and see if they ran. Development tools for JavaScript were still primitive or nonexistent. There was certainly not much of an open source community around JS; to figure out how to do things, you would typically “view source” on other people’s websites. Plus, much of the discussion among the programming community of web developers was about how JavaScript represented a compatibility and security nightmare.

Not only were there differing implementations across browsers, but there were also many ways to compromise the security of your web application by relying upon JavaScript too directly. A common security bug in that era was to validate forms with JavaScript, but still allow invalid (and insecure) values to be passed to the server. Or, to password-protect a system, but in a way that mere inspection of the JavaScript code could itself crack access to that system. Combined with the lack of a proper development environment, “real web programmers” used JavaScript as nothing more than a last resort — a way to inject a little bit of client-side code and logic into pages where doing it server-side made no sense. I remember that one of the most common use cases for JavaScript at the time was nothing more than changing an image upon hover, as a stylistic effect, or implementing a basic on-hover menu on a complex multi-tab form. These days, these tasks can be achieved with vanilla CSS, but, at the time, JavaScript DOM manipulation was your only option.

JavaScript in 2008

Fast forward 10 years. In 2008, Douglas Crockford released the book “JavaScript: The Good Parts”. By using a language-subsetting approach, Crockford pointed out that, not only was JavaScript not a bad language, it was actually a good, well-designed language, with certain key features that made it stand out against its competitors.

Around this time, several JavaScript libraries were becoming popular, notably jQuery, Prototype, YUI, and Dojo. These libraries attempted to provide JavaScript with something it was missing: a cross-browser compatibility layer and programming model for doing dynamic manipulation of pages inside the browser, and especially for a new model of JavaScript programming that was emerging, with the moniker AJAX. This was the beginning of the trend of rich internet applications, “dynamic” web apps, single-page applications, and the like.

JavaScript’s Tooling Leaps

The developer tooling for JavaScript also took some important leaps. In 2006, Joe Hewitt released Firebug, a JavaScript and DOM debugger for Firefox, which was then one of the world’s most popular web browsers, and open source. Two years later, Google made the first release of Google Chrome, which bundled developer tooling of its own. Around the same time that Chrome was released, Google also released V8, the JavaScript engine embedded inside Chrome. That marked the first time the world had seen a full-fledged, performant, open source implementation of the JavaScript language that was not completely tied to a browser. Firefox’s JS engine, SpiderMonkey, was part of its source tree, but was not necessarily marketed to be modularized and used outside the context of the Firefox browser.

I remember that aside from Crockford’s work on identifying the good parts of JavaScript, and aside from the new (and better) developer tooling, a specific essay on Mozilla’s website helped me re-appreciate the language and throw away my 1998 conception. That article was called “A Reintroduction to JavaScript”. It showed how JavaScript was actually a real programming language, once you got past the tooling bumps. It was a little under-powered in its standard library, so you had to rely upon frameworks (like jQuery) to give you some tools, and little micro-libraries beyond that.

A year after reading that essay, I wrote my own about JavaScript, which was called “Real, Functional Programs with JavaScript” (archived PDF here). It described how JavaScript was, quite surprisingly, more of a functional language than Java 8 or Python 2.7. And that with a little focus on understanding the functional core, really good programs could be written. I recently converted this essay into a set of instructional slides with the name, “Lambda JavaScript” (archived notes here), which I now use to teach new designers/developers the language from first principles.
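To give a flavor of what that essay argued, here is a small sketch of JavaScript’s functional core as it looked in that era — first-class functions, closures, and higher-order functions. The example names are mine, not taken from the essay:

```javascript
// Closures: makeAdder returns a new function that "remembers" n
function makeAdder(n) {
  return function (x) {
    return x + n;
  };
}

var add10 = makeAdder(10);

// Higher-order functions: map and reduce treat functions as values
var doubled = [1, 2, 3].map(function (x) { return x * 2; });
var total = doubled.reduce(function (acc, x) { return acc + x; }, 0);

console.log(add10(5)); // 15
console.log(total);    // 12
```

Nothing here requires a framework; it is all plain, 2008-era ECMAScript 3, which is precisely why the "functional core" argument held up.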

But, let’s return to history. Only a year after the release of Chrome, in 2009, we saw the first release of NodeJS, which took the V8 JavaScript engine and embedded it into a server-side environment, which could be used to experiment with JavaScript on a REPL, to write scripts, and even to write HTTP servers on a performant event loop.

People began to experiment with command-line tools written in JavaScript, and with web frameworks written in JavaScript. It was at this point that the pace of development in the JavaScript community accelerated. In 2010, npm — the Node Package Manager — was released, and it and its package registry quickly grew to represent the full JavaScript open source community. Over the next few years, the browser vendors (Mozilla, Google, Apple, and Microsoft) engaged in the “JavaScript Engine Wars”, developing SpiderMonkey, V8, Nitro, and Chakra, respectively, to new heights.

Meanwhile, NodeJS and V8 became the “standard” JS engine running on developers’ machines from the command line. Though developers still had to target old “ECMAScript 3” browsers (such as IE6), and thus had to write restrained JavaScript code, the “evergreen” (auto-updating) browsers from Mozilla, Google, and Apple gained support for ECMAScript 5 and beyond, and mobile web browsing went into ascendancy, making Chrome and Safari dominant in market share, especially on smartphones.

I remember in 2012, I gave a presentation at a local tech conference entitled, “Writing Real Programs… with JavaScript!?”. The “!?” punctuation was intentional. That was the general zeitgeist I remember in a room full of developers: that is, “is writing real programs with JavaScript… actually possible!?” It’s funny to review those slides as a historical relic. I spent the first half of the talk convincing the audience that JavaScript’s functional core was actually pretty good. And then I spent the second half convincing them that NodeJS might… it just might… create a developer tooling ecosystem and standard library for JavaScript. There are also a few funny “detour” slides in there around things like Comet vs Ajax, a debate that didn’t really amount to much (but it’s good to remind one of fashion trends in tech).

Zooming ahead a few years, in all of this noise of web 2.0, cloud, and mobile, we finally reached “mobilegeddon” in 2015, where mobile traffic surpassed desktop traffic, and we also saw several desktop operating systems move to a mostly-evergreen model, such as Windows 10, Mac OS X, and ChromeOS. As a result, as early as 2015 — but certainly by 2018 — JavaScript became the most widely deployed and performant programming language with “built-in support” on almost every desktop and mobile computer in the world.

In other words, if you wanted your code to be “write once, run everywhere” in 2015 or so (but even as far back as 2009), your best option was JavaScript. Well, that’s even more true today. The solid choice for widespread distribution of your code continues to be JavaScript. As Crockford predicted in 2008: “It is better to be lucky than smart.”

JavaScript in 2018-2019

In 2018-2019, several things have changed about the JavaScript community. Development tools are no longer fledgling, but are, instead, mature. There are built-in development tools in all of Safari, Firefox, and Chrome browsers (and the Firebug project is mostly deprecated). There are also ways to debug mobile web browsers using mobile development tools. NodeJS and npm are mature projects that are shared infrastructure for the whole JavaScript community.

What’s more, JavaScript, as a language, has evolved. It’s no longer just the kernel language we knew in 1998, nor the “good parts” we knew in 2008, but instead the “modern parts” of JavaScript include several new language features that go by the name “ES6” (ECMAScript v6) or “ES2017” (ECMAScript 2017 Edition), and beyond.
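To make the “modern parts” concrete, here is a small, hypothetical snippet exercising a few of these ES6 and ES2017 features. Every construct here can be transpiled down to the older language:

```javascript
// const/let block scoping and arrow functions (ES6)
const square = (x) => x * x;

// Object destructuring and template literals (ES6)
const point = { x: 3, y: 4 };
const { x, y } = point;
const label = `(${x}, ${y})`;

// Classes: syntactic sugar over prototypes (ES6)
class Vector {
  constructor(x, y) {
    this.x = x;
    this.y = y;
  }
  get magnitude() {
    return Math.sqrt(square(this.x) + square(this.y));
  }
}

// async/await: syntactic sugar over Promises (ES2017)
async function main() {
  const v = await Promise.resolve(new Vector(x, y));
  console.log(`|${label}| = ${v.magnitude}`); // |(3, 4)| = 5
}

main();
```

None of this would have parsed in a 2008-era engine, yet, as we’ll see below, all of it can be mechanically rewritten for one.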

Some concepts in HTML have evolved, such as HTML5 video and audio elements. CSS, too, has evolved, with the CSS2 and CSS3 specifications being ratified and widely adopted. JSON has all but entirely replaced XML as an interchange format and is, of course, JavaScript-based.

The V8 engine has also received a ton of performance-oriented development. Thanks to it, JavaScript is now a JIT-compiled language with speedy start times and near-native performance for CPU-bound blocks. Modern web performance techniques are almost entirely based on a speedy JavaScript engine and the ability to script different elements of a web application’s loading approach.

The language’s ecosystem has also become comfortable with something akin to the “compiler” and “command line” toolchains you might find in the Python, Ruby, C, and Java communities. In lieu of a JavaScript “compiler”, we have node, JavaScript unit testing frameworks like Mocha/Jest, and eslint and babel for syntax checking. (More on this later.)

In lieu of a “debugger”, we have the devtools built into our favorite browser, like Chrome or Firefox. This includes rich debuggers, REPLs/consoles, and visual inspection tools. Scriptable remote connections to a node environment or a browser process (via new tools like Puppeteer) further close the development loop.

To use JavaScript in 2018/2019, therefore, is to adopt a system that has achieved the 2008-era maturity you would see in programming ecosystems like Python, Ruby, and Java. But, in many ways, JavaScript has surpassed those communities. For example, where Python 3’s reference implementation, CPython, is certainly fast as far as dynamic languages go, JavaScript’s reference implementation, V8, is optimized by JIT and hotspot optimization techniques that are otherwise only found in much more mature programming communities, such as Java’s (which received millions of dollars of commercial investment in applied/advanced compiler techniques in the Sun era). That means that unmodified, hotspot JavaScript code can be optimized into native code automatically by the Node runtime and by browsers such as Chrome.

Whereas Java and C users may still have debates about where, exactly, open source projects should publish their releases, that issue is settled in the JavaScript community: it’s npm, which operates similarly to PyPI and pip in the Python community.

Some essential developer tooling issues were only recently settled. For example, because modern JavaScript (such as code written using ES2017 features) needs to target older browsers, a “transpilation” toolchain is necessary, to compile ES2017 code into ES3 or ES5 JavaScript code, suitable for older browsers. Because “old JavaScript” is a Turing complete, functional programming language, we know we can translate almost any new “syntactic sugar” to the old language, and, indeed, the designers of the new language features are being careful to only introduce syntax that can be safely transpiled.
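As a hand-written illustration of what transpilation means — this is the spirit of what a tool like babel emits, not its literal output — here is a bit of ES6 and a rough ES5 equivalent:

```javascript
// ES6 source: arrow function, default parameter, template literal
const greet = (name = 'world') => `Hello, ${name}!`;

// A rough, hand-written ES5 translation of the same function
var greetES5 = function (name) {
  if (name === undefined) { name = 'world'; }
  return 'Hello, ' + name + '!';
};

console.log(greet());          // Hello, world!
console.log(greetES5('1998')); // Hello, 1998!
```

The two functions behave identically; the transpiler’s job is simply to perform this kind of rewriting mechanically, for every new syntactic form, across an entire codebase.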

What this means, however, is that to do JavaScript development “The Modern Way”, while adopting its new features, you simply must use a local transpiler toolchain. The community standard for this at the moment is known as babel, and it’s likely to remain the community standard well into the future.

Another issue that plagued 2008-era JavaScript was build tooling and modularization. In the 2008-2012 era, ad-hoc tools like make were used to concatenate JavaScript modules together, and tools such as Google’s Java-based Closure Compiler or the JavaScript-based UglifyJS were used to assemble JavaScript projects into bundles that could be included onto pages. In 2012, the Grunt tool was released as a JavaScript build tool, written atop NodeJS, runnable from the command line, and configurable using a JavaScript “Gruntfile”. A whole slew of similar build tools were released in that period, creating a lot of code churn and confusion.

Thankfully, today, with the ascendancy of webpack and the reliance on npm run-script, Single Page Application frameworks like React have largely solved this problem. The webpack community has come up with a sane approach to JavaScript modularization: it relies upon the modern JS support for modules, and development-time tooling, provided mainly through the webpack CLI tool, allows for local development and production builds. This can all be scripted and wired together with simple npm run-script commands. And since webpack can itself be installed by npm, this keeps the entire development stack self-contained in a way that doesn’t feel too dissimilar from what you might enjoy with lein in Clojure or python/pip in Python.
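To make this concrete, here is a minimal, hypothetical webpack.config.js sketch — the file paths and the babel-loader rule are illustrative assumptions, not a drop-in config for any particular project:

```javascript
// webpack.config.js -- a minimal, illustrative sketch
const path = require('path');

module.exports = {
  mode: 'development',
  // webpack traverses the module graph starting from this entry point
  entry: './src/index.js',
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js',
  },
  module: {
    rules: [
      // Hand .js files to babel so modern syntax is transpiled
      { test: /\.js$/, exclude: /node_modules/, use: 'babel-loader' },
    ],
  },
};
```

With a `"build": "webpack"` entry under `"scripts"` in package.json, a plain `npm run build` then produces dist/bundle.js, which is the self-contained loop described above.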

Yes, it has taken 20 years, but JavaScript is now just as viable a choice for your backend and CLI tooling projects as Python was in the past. And, for web frontends, it’s your only choice. So, if you are a programmer who cares, at all, about distribution of your code to users, it’s time to care about JavaScript!

In a future post, I plan to go even deeper on JavaScript, covering:

  • How to structure your first “modern” JavaScript project
  • Using Modern JS with Python’s Flask web framework for simple static sites
  • Understanding webpack, and why it’s important
  • Modules, modules, modules. Why JS modules matter.
  • Understanding babel, and why it’s important
  • Transpilation, and how to think about evolving JS/ES features and “compilation”
  • Using eslint for bonus points
  • Using sourcemaps for debugging
  • Using console.assert and console for debugging
  • Production minification with uglify
  • Building automated tests with jest
  • Understanding the value of Chrome scripting and puppeteer

Want me to keep going? Let me know via @amontalenti on Twitter.


Update from 2021: It has been a few years since I published this post. Though webpack is still widely used, it now has a popular competitor in esbuild, thanks especially to esbuild’s improved build speeds over webpack. In the Vue.js community, which has grown at a rapid clip in the last few years, the vite build tool is popular; it leverages esbuild under the hood for building, and rollup under the hood for bundling. But the rough design of, and need for, JavaScript (ES) build and bundling tooling remains the same; esbuild is just a re-imagining of webpack-style tooling for performance.

One more point of reflection from the last two years. Some folks, upon reading this post, have suggested: “why not throw all this cruft away and start with something like TypeScript, Elm, or Reason?” I am reminded of this section of Brendan Eich’s History of JavaScript, which describes Allen Wirfs-Brock and his analysis of the state of JavaScript when it was at a standardization fork in the road, around the year 2007:

Allen Wirfs-Brock spent several days familiarizing himself with JavaScript, the then-current ES3 specification, and TG1 proposals from the public wiki snapshot. He spoke with […] software architects on the Internet Explorer team, and Microsoft engineers working on Web based applications. He recognized JavaScript’s role on the Web as being a significant instance of Richard Gabriel’s [1990] “Worse Is Better” concept. It was a minimalist creation that had grown in a piecemeal manner to become deeply ingrained in the fabric of the [web]. In contrast, the “ES4 2” effort appeared to Wirfs-Brock to be what Gabriel calls a “Do The Right Thing” project that was unlikely to reach fruition and, if it did, would be highly disruptive to the [web]. He concluded that the technically responsible thing to do would be to try to get ECMAScript evolution back onto a path of incremental evolution.

Or, as Brendan Eich, JavaScript’s creator, put it succinctly in a presentation on the topic, “Always bet on JS”.

I also enjoy this paragraph from the above-cited “JavaScript: the first 20 years”, which summarizes the core point of this post:

The production use of transpilers, especially Babel […], was part of a large cultural transformation within many JavaScript development teams. In those teams, JavaScript is treated similarly to a conventional, ahead-of-time compiled language with development and deployment build toolchains rather than as a dynamic execution environment that loads and directly executes a programmer’s original source code.


A late-2021 note on Microsoft IE11, and what’s to come in mid-2022: Microsoft has publicly announced that “The Internet Explorer 11 desktop application will be retired and go out of support on June 15, 2022”. To support this transition, they have added an enterprise-opt-in “IE Mode” to their otherwise-Chromium-based Edge browser. IE11 is the last browser not to support native ES6 modules, among other important ES features, and has a 0.7% (and declining) global market share. This likely means that in late 2022, we’ll see IE11 dropped from the compatibility list of babel compiler settings for many JavaScript projects. This will likely simplify some of the babel output for common cases, and will also mean that all the major browsers remaining — that is, Chrome, Safari, & Firefox, who are all relatively cooperative with each other in developing web standards — will be free to evolve the JS language and the browser platform. It’s also nice to remember that all three of these remaining major browser engines have robust developer tools for testing. The two big threats to the browser platform these days are Chrome’s relative hegemony (we should do our best to support Firefox) and Apple’s stubborn insistence on building macOS- and iOS-only tooling (we should lobby Apple to release Safari for platforms other than macOS and iOS). That all said, with IE11 in the rear-view mirror soon, the future of the web and of JavaScript is very bright indeed!

15 thoughts on “JavaScript: The Modern Parts”

  1. Thanks for the trip down memory lane. I’d suggest discussing the potential impact of WebAssembly on the JavaScript front-end monopoly, too. We might not be stuck with JavaScript in the front-end, anymore! Also, languages with no runtime errors, like Elm, are another opportunity to do complex frontend stuff without ever needing to know (and debug) JavaScript. Looking forward to the next article!

  2. Nice reflection as always! It is fun to think back to the “early” days, JS devs have it so much better now 🙂

  3. great! earlier I was intimidated by things people said about JS and thought of avoiding it at all costs (now that seems irrational; I should conduct my own research before coming to conclusions). I have already started to learn JS now.

  4. @Jim “JavaScript: The Good Parts” is a real book, very much worth reading, published by Crockford in 2008. A more modern update to this style of book is probably “You Don’t Know JS” (YDKJS) by Kyle Simpson, which is about 4 years old.

  5. Thanks. I was looking for “JavaScript: The ‘Modern’ Parts”. I’ll look for YDKJS just as soon as I master the J language.

  6. Some links I didn’t include in the original article but that I always go hunting for when sending people this article include:

    “Advanced Frontend Automation with npm run scripts”, a 20-minute YouTube talk on how to effectively use npm run-script. https://www.youtube.com/watch?v=0RYETb9YVrk

    “Modern JavaScript Explained for Dinosaurs”: https://medium.com/the-node-js-collection/modern-javascript-explained-for-dinosaurs-f695e9747b70

    “Webpack: The Confusing Parts”: https://medium.com/@rajaraodv/webpack-the-confusing-parts-58712f8fcad9

