This installment is part of a series of posts exploring the internal details of our JavaScript SDK. Our hope is that these posts will help developers debug issues with our SDK and give us an opportunity to outline some best practices for JavaScript libraries in general. In this post, we're going to drill into how we use polyfills in our SDK.

Polyfills have been around for a number of years and enable developers to take advantage of future (or in most cases, current) APIs across browsers, old and new.

JavaScript polyfills can roughly be divided into two categories: (a) extensions to the core Document Object Model (DOM) or Browser Object Model (BOM), and (b) extensions to the core ECMAScript objects. For example, polyfills might allow you to rapidly create advanced applications using features from HTML5 (like the video element) or from ECMAScript 5 (ES5), such as JSON.parse, Array.isArray and myArray.map. They spare you the alternatives of developing for the lowest common denominator or, worse, maintaining multiple implementations for multiple browsers.
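For example, a conventional runtime-modifying polyfill for Array.isArray looks roughly like this. This is a sketch of the common pattern, not code from any particular library:

```javascript
// A conventional polyfill: fill in Array.isArray only when the
// environment does not already provide it.
if (typeof Array.isArray !== 'function') {
  Array.isArray = function (value) {
    // The [[Class]] check works across frames, unlike instanceof
    return Object.prototype.toString.call(value) === '[object Array]';
  };
}
```

Once applied, every script in the document sees the same Array.isArray, which is precisely the global effect that makes polyfills convenient — and, as discussed next, risky.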

There is a fundamental problem with polyfills, though: the basic premise of filling in missing functionality requires creating new global objects (like JSON), augmenting existing objects (like Array.isArray), or augmenting the prototypes of built-in types like String, Number and Date. Polyfills therefore have to alter the runtime shared by all the JavaScript executing in an app, sometimes violating the expectations of other scripts. The problematic nature of this has been covered thoroughly by kangax, and it means that you can only safely add or consume polyfills if you are in total control of all the code running in the document.
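To illustrate the kind of expectation that gets violated, consider a script that naively extends Array.prototype. This is a contrived example; the property name last is made up for illustration:

```javascript
// Naive augmentation: a method assigned this way is enumerable,
// so it leaks into every for-in loop over arrays in the document.
Array.prototype.last = function () {
  return this[this.length - 1];
};

var seen = [];
for (var key in [10, 20]) {
  seen.push(key); // collects '0' and '1' -- and the unexpected 'last'
}
```

Any other script that iterates arrays with for-in (a common pattern in older code) now sees a phantom 'last' key. ES5's Object.defineProperty can make such a property non-enumerable, but that option does not exist in the ES3 engines a polyfill targets in the first place.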

So what does this mean when you’re developing a JavaScript library for use on third-party pages?

  • Never apply polyfills to the runtime, as you might be violating the expectations of other scripts.
  • Never rely on possibly polyfilled features in the runtime, as they might be violating your expectations.
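A defensive pattern that follows from the second rule is to trust a built-in only when its source text reports native code. The helper below is a sketch with an illustrative name (isNative), not an SDK export:

```javascript
// Heuristic check: a genuinely native function stringifies to
// something like 'function map() { [native code] }', while another
// script's polyfill stringifies to its full JavaScript source.
function isNative(fn) {
  return typeof fn === 'function' &&
    /\[native code\]/.test(Function.prototype.toString.call(fn));
}
```

The same kind of check reappears later in this post, when we describe how the method cache is populated.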

If we have to abide by these rules, how can we then make use of new features? And how is it that Facebook's JavaScript SDK complies while at the same time being written as idiomatic ES5-compliant JavaScript? Remember that it is unacceptable for the Facebook JS SDK to modify any built-in objects or to add any new globals (besides the single FB object), as doing so might violate expectations and cause unexpected behavior in your applications.

Source transformation

The key to our approach is that, as part of the Facebook SDK’s build process, we convert our ES5-based JavaScript to ECMAScript 3 (ES3) — a process called transpiling. This means that at runtime, our code has no dependencies outside of ES3, and we therefore have no need for polyfills in the conventional sense.

To give you an example of what this looks like, these are samples of before and after such a conversion:

Before:

    JSON.stringify({hello: "world"});

After:

    ES5('JSON', 'stringify', false, {hello: "world"});

Or a slightly more complicated example:

Before:

    JSON.stringify(myArray.map(this.processor.bind(this, 'boundValue')));

After:

    ES5('JSON', 'stringify', false, ES5(myArray, 'map', true, ES5(this.processor, 'bind', true, this, 'boundValue')));

Before we look at the specifics of the new source code, let's take a look at how we perform the conversion.

To manage this process, we use jspatch from the jsgrep project (https://github.com/facebook/jsgrep), a Node.js module created by one of our engineers for performing code refactoring and linting. jspatch lets us run pattern matching and substitutions on the abstract syntax tree (AST) representing the source code, while leaving the remaining source code untouched.

To define substitutions, you use simple rules such as the following:

    -JSON.stringify
    +ES5
    (
    +'JSON', 'stringify', false,
    ...)
    ---
    -A.bind
    +ES5
    (
    +A, 'bind', true,
    ...)
    ---
    -A.map
    +ES5
    (
    +A, 'map', true,
    ...)

If we save this to a .spatch file, we can convert a source file with the following command:

    node spatch-cli.js es3.spatch source.js // yields the converted source

The ES5 function

As you can see, every pattern matching the use of an ES5 feature is replaced with a call to ES5, with three fixed arguments, followed by any arguments that were originally passed to the function:
  • The object which we want to call the method on (always a string for ‘static’ methods)
  • The name of the method we want to invoke
  • Whether this is an instance method or not, used to disambiguate between e.g. ‘JSON’ as a string and ‘JSON’ representing the JSON object.

We have chosen to use a single function to implement or proxy these calls, but you could easily use separate functions for each pattern. The one thing to be aware of is the following: for prototype-based methods, how do you differentiate between .map being called on an array and .map being called on an object that has its own .map function defined? For us, this is solved by having ES5 use the type information to look up the method in our method cache before falling back to the function actually defined on the object.

This is a rough equivalent of the proxy function we use:


    function ES5(lhs, rhs, proto/*, args*/) {
      // Normalize the type information
      var type = proto
        ? Object.prototype.toString.call(lhs).slice(8, -1).toLowerCase()
        : lhs;
      // Locate the method to use
      var method = methodCache[type + '.' + rhs] || lhs[rhs];
      return method.apply(lhs, Array.prototype.slice.call(arguments, 3));
    }

The method cache

Before the ES5 function is first used, we populate the method cache by iterating over all of our polyfills and checking whether a native implementation exists. If we find a native implementation whose toString method returns [native code], and which is not blocklisted (we always use our own polyfill for JSON), we add it to the method cache. If we don't find one, we add the polyfill instead.

Combined, this ensures that the method cache only contains safe functions, and not potentially incompatible versions such as those added to the runtime by older versions of Prototype, for example.
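A sketch of that population step might look like the following. The names buildMethodCache and hosts, and the polyfill stubs, are illustrative only, not the SDK's actual internals:

```javascript
var methodCache = {};

// Heuristic: native functions stringify to '... { [native code] }'
function isNative(fn) {
  return typeof fn === 'function' &&
    /\[native code\]/.test(Function.prototype.toString.call(fn));
}

function buildMethodCache(polyfills, blocklist) {
  // Map type names used in cache keys to the objects holding the
  // corresponding native methods.
  var hosts = {
    array: Array.prototype,
    string: String.prototype,
    json: typeof JSON !== 'undefined' ? JSON : null
  };
  for (var key in polyfills) {
    var parts = key.split('.');       // e.g. 'array.map' -> ['array', 'map']
    var host = hosts[parts[0]];
    var native = host && host[parts[1]];
    methodCache[key] = (isNative(native) && blocklist.indexOf(key) === -1)
      ? native          // a safe native implementation
      : polyfills[key]; // otherwise, fall back to our own polyfill
  }
}

var polyfills = {
  'array.map': function mapPolyfill(fn) { /* ... */ },
  'json.stringify': function stringifyPolyfill(value) { /* ... */ }
};

// JSON is always blocklisted, so its polyfill wins even when a
// native JSON.stringify exists.
buildMethodCache(polyfills, ['json.stringify']);
```

After this runs, methodCache['array.map'] holds the native Array.prototype.map on any modern engine, while methodCache['json.stringify'] holds our own polyfill because of the blocklist entry.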

Drawbacks

It is worth mentioning that this approach does come with a cost, though one we find negligible. The converted source is larger than what you would get by writing ES3 by hand, and the ES5 function incurs a slight overhead due to the indirection.

Note: Sandboxing using dynamically created iframes is a different approach for safely using polyfills, but one that comes with its own set of challenges.

Summary

When it comes to using polyfills in third-party JavaScript, there is nothing stopping you from having the proverbial cake and eating it too. You can reap the benefits of writing idiomatic ES5-compliant source code, possibly sharing code between transpiled and non-transpiled environments, while ensuring that neither your expectations nor anyone else's are broken by incompatible polyfills.

Check out this gist to see how you can apply this to your project.

Sean Kinsey is an engineer on the Platform team.

In an effort to be more inclusive in our language, we have edited this post to replace blacklisted with blocklisted.
