Improving LOL

August 28, 2009

Today I gave a presentation to some of the guys from the platform team about the state of l20n. In my previous posts I’ve blogged about the advantages and drawbacks of each of the formats we’ve considered, and I got some more good feedback about that in today’s brownbag. After the talk, I chatted with fantasai about how we could improve the LOL format to make it more readable and understandable. She had some interesting ideas that I’d like to share.

Dropping Angle Brackets

First, she (and a few others) pointed out that angle brackets make LOL look like XML. That resemblance could be confusing, since LOL is otherwise nothing like XML. The intention for angle brackets in LOL was to delimit entity definitions and give clear visual separation. These cues are also helpful for our parser, especially when it comes to error recovery: in an error case, the parser can drop all tokens until it recognizes an opening bracket (<) and resume the parse. If we do remove the brackets from the syntax and you have suggestions for implementing effective error recovery without them, please leave them below.
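
For reference, the kind of bracket-based recovery described above might look roughly like the following sketch (hypothetical helper names such as parseEntity; this is not the actual l20n parser code):

// Rough sketch of panic-mode error recovery (hypothetical names, not
// the real l20n parser). On a parse error, discard tokens until the
// next "<", which can only start a new entity, then resume parsing.
function parseAll(tokens) {
  var entities = [];
  var pos = 0;
  while (pos < tokens.length) {
    try {
      var result = parseEntity(tokens, pos);  // assumed to return {entity, next}
      entities.push(result.entity);
      pos = result.next;
    } catch (e) {
      // Recovery: skip ahead to the next opening bracket and try again.
      do {
        pos++;
      } while (pos < tokens.length && tokens[pos] !== "<");
    }
  }
  return entities;
}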

Encoding Properties

Another potentially confusing aspect of LOL files is that an entity may have properties defined in addition to its value. Here’s our usual example of a noun with a specified gender:

<appName: "Jägermeister"
 gender: "male">

This can be disambiguated slightly with indentation, but fantasai noted that the current syntax does little to explain the difference between the first assignment (which specifies appName itself) and the second assignment, to the appName.gender property. She suggested a syntax that differentiates assigning the value from assigning properties: use ‘=’ for the first assignment, and curly braces to delimit additional properties. Here’s the same example from above:

appName = "Jägermeister" {
    gender: "male" }

In this format, LOL would look a lot like CSS.

Indexing

An entity that mentions a variable gendered noun may define different forms for different genders. For example:

<complex[appName.gender]: {
    male: "Ein hübscher ${appName}s.",
    female: "Ein hübsches ${appName}s."}>

In the current syntax, square brackets following the entity key denote the index used to select the proper form. The suggestion was to move that to the RHS of the assignment:

complex = [appName.gender] {
    male: "Ein hübscher ${appName}s.",
    female: "Ein hübsches ${appName}s."}

If you’re familiar with a switch statement in programming, you’ll probably notice that we basically adopted the standard syntax, but substituted square brackets for the switch(…) keyword.
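
To make the analogy concrete, here is roughly the same selection written as a plain JavaScript switch (just a sketch for comparison, not actual l20n output):

// The LOL index plays the role of the switch expression; each gender
// key plays the role of a case label.
function complex(appName) {
  switch (appName.gender) {
    case "male":
      return "Ein hübscher " + appName.value + "s.";
    case "female":
      return "Ein hübsches " + appName.value + "s.";
  }
}

// complex({value: "Jägermeister", gender: "male"})
//   -> "Ein hübscher Jägermeisters."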

Objects with Multiple Attributes

Objects in the UI, such as buttons, typically have “label” and “accesskey” attributes but no canonical string value. This is subtly distinct from the cases above: in the first, we wished to specify additional properties; in the second, the string value was resolved based on an external index. Example time:

<button: {label: "Push me", accesskey: "p"}>

In this case, it doesn’t make sense to refer to just “button”: you want either the label or the accesskey, which are available through the “.” accessor (e.g. button.label). To draw attention to this distinction, we could require a syntactic difference, or we could simply omit the index from the switch syntax above.

Summary

There are plenty more features of l20n that I’d love to put under the microscope here, but in the interest of focusing the discussion I’ll add them to a future post instead. As always, please share your opinion. You can also find us on IRC if you’re looking to start a lively debate; just look for me (jhiatt), Pike, and gandalf. Thanks to everyone from the brownbag today, and thanks especially to fantasai for taking the time to help us out!


Compiling Localizable Objects into Native JavaScript

August 22, 2009

One of the goals for my summer internship is to improve the performance of l20n. The initial implementation was a parser written entirely in JavaScript that operated on .lol files. For more details about our choices for file formats, see my previous post. After some failed attempts to rework the parser’s use of regular expressions, which only regressed performance, I experimented with JSON as an alternative file format. The hope was that we could leverage the performance of Gecko’s built-in JSON parser to speed up l20n. We did see some tremendous improvements: on a large testcase constructed from browser.dtd, JSON cut our parsing time from ~140 ms down to just a few ms. Unfortunately, we were still slow when it came to evaluating and displaying all those entities. We still had a big chunk of parsing left that we couldn’t outsource to JSON. Each string value in l20n may contain variable placeholders. Here’s an example (in JSON):

"droponbookmarksbutton" : {
    "value" : "Drop a link to bookmark it"},

"popupWarning" : {
    "value" : "${brandShortName}s prevented this site 
              from opening a pop-up window."}

(Line breaks inserted for clarity.) The first string doesn’t use any variables, but the second does. In order to catch all these placeholders, we scanned each string with a regular expression to match the ${…}s syntax, even though many strings don’t use any variables. That translated to a linear traversal of every single string before it could be returned, costing us a lot of time. In tests conducted in the xpcshell, rendering all the elements from browser.properties took roughly 40ms. In comparison, the current framework for properties files can parse and display all the elements in under 20ms. Since we can’t afford to regress overall performance, that meant we still had work to do to get faster.
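
For illustration, the per-string scan amounts to something like this simplified sketch (it assumes a bare ${name} placeholder form and ignores property access like appName.gender):

// Simplified sketch of the placeholder scan described above: every
// string value is run through a regular expression, even when it
// contains no ${...} references at all.
function expand(value, entities) {
  return value.replace(/\$\{([^}]+)\}/g, function (match, ref) {
    // Look up the referenced entity (e.g. "brandShortName") in the context.
    return entities[ref];
  });
}

// expand("${brandShortName} prevented this site from opening a pop-up window.",
//        {brandShortName: "Minefield"});
//   -> "Minefield prevented this site from opening a pop-up window."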

One way to eliminate checking every single string is to add extra information to the encoding for strings. Many languages define different behavior for single- vs. double-quoted strings, performing replacements in one but not the other. We could also have added a special flag to indicate simple (no replacements) vs. complex strings. Either of these approaches would have added further complexity to the localization process, so we did not seriously consider them.

Instead, on the advice of the brilliant Staś Małolepszy, we embarked on an experiment to compile our l20n objects into native JavaScript. As a result, we saw another impressive performance jump. In an xpcshell test, we can load and display all of browser.properties in roughly 4ms (an order of magnitude improvement!). Here’s what our previous example looks like as compiled JavaScript:

this.droponbookmarksbutton="Drop a link to bookmark it";
this.__defineGetter__("popupWarning", 
  function() { return "" + (brandShortName) + 
    " prevented this site from opening a pop-up window.";});

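To make the mechanics concrete, here is a minimal, hypothetical sketch of how compiled definitions like these could be attached to a context object. It is not the actual l20n loader, and unlike the real output above, the getter below reaches the referenced entity through the context itself rather than a bare variable:

// Hypothetical loader sketch: evaluate compiled definitions with a new
// context object bound as `this`, so plain assignments and getters land
// directly on the context.
function Context(compiledSource) {
  new Function(compiledSource).call(this);
}

var compiledSource =
  'this.brandShortName = "Minefield";' +
  'this.droponbookmarksbutton = "Drop a link to bookmark it";' +
  'this.__defineGetter__("popupWarning", function () {' +
  '  return this.brandShortName +' +
  '    " prevented this site from opening a pop-up window.";' +
  '});';

var ctx = new Context(compiledSource);
// ctx.droponbookmarksbutton -> "Drop a link to bookmark it"
// ctx.popupWarning          -> "Minefield prevented this site from opening a pop-up window."

Note that the entity containing a reference compiles to a getter, so its concatenation only runs when popupWarning is actually accessed.
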
Another great thing about compilation is that our runtime performance doesn’t depend on our choice of source file format. Here’s a diagram showing the different ways an l20n file can get inflated into a localization context:

[Diagram: Inflating l20n source into a context]


The performance numbers were collected using nsITimelineService in the xpcshell. The l20n runtime infrastructure can inflate a source file directly into a context, or it can load compiled JavaScript definitions for a significant performance boost. For comparison, here’s a diagram of Mozilla’s current l10n scheme:

[Diagram: Current l10n scheme]


Again, this time was measured in the xpcshell when loading the browser.properties string bundle; it’s not necessarily representative of performance for DTD files. As we can see, compilation now guarantees performance at least comparable to the current approach, no matter what file format we end up using. If you’d like to weigh in on that debate, please leave a comment on my previous post! Finally, we are also working on l20n support in Silme so that it will be easy to migrate existing DTD/.properties files to our new l20n format.

[Diagram: Intercompatibility with Silme]


Silme will serve as a critical compatibility layer to ensure a smooth transition to our new l10n framework. Please let me know if you have any questions or comments!


The L20n Format Shootout

August 21, 2009

L20n (for localization, 2.0) aims to empower localizers to describe the complexities and subtleties of their language: gendered nouns, singular/plural forms, and just about any other quirk that might exist in the grammar. Like DTD and .properties formats, which we currently use to encode localizable strings, l20n objects associate entity IDs with string values. Localizers translate these values into the target language. L20n has all the power of the current framework, plus a lot more, and it’s just as simple to use (provided we choose the right format!). You can find some examples of l20n in action here.

In the past weeks, we’ve experimented with JSON (JavaScript Object Notation) as a file format to represent localizable objects, in hopes of achieving better performance by leveraging the new built-in JSON parser in Firefox. The performance gains were substantial, but still not enough to compete with the current DTD/properties framework in terms of speed. We’ve since adopted a new scheme to compile our l20n source files into native JavaScript (credit to Staś Małolepszy for suggesting this). This compilation now guarantees good performance independent of our choice of source file format. I will discuss the specifics of compilation in an upcoming post; this post will focus on the relative merits of the various formats under consideration.

Meet the contenders

LOL files

Before experimenting with JSON, we developed a novel format for l20n, playfully titled “localizable object lists” (.lol files). A lol file looks like a hybrid of DTD and .properties formats, with entities delimited by angle brackets and colons separating keys from values. Here’s a simple example, constructed from brand.dtd:

<brandShortName: "Minefield">
<brandFullName: "Minefield">
<vendorShortName: "Mozilla">
<logoCopyright: " ">

In this simple case, the lol file looks a lot like the original brand.dtd, which looks like this:

<!ENTITY  brandShortName        "Minefield">
<!ENTITY  brandFullName         "Minefield">
<!ENTITY  vendorShortName       "Mozilla">
<!ENTITY  logoCopyright         " ">

We lost the !ENTITY declaration and added a colon, but otherwise the lol format should look familiar. What if we want to do something more complex, like define an entity that involves a gendered noun? Here’s a German example encoded in a lol file:

/* This entity references a gendered noun */
<complex[appName.gender]: {
    male: "Ein hübscher ${appName}s.",
    female: "Ein hübsches ${appName}s."}>

/* This is a gendered noun */
<appName: "Jägermeister"
 gender: "male">

In the above example, we indicated that the “complex” entity depends on the “gender” property of the “appName” entity. The ${…}s expander within the string is a placeholder that will be replaced with the value of “appName” (Jägermeister). Note that we can insert comments in the familiar /*…*/ style. If you’re curious to see more use cases for l20n and the lol format, be sure to check out the link above to Axel’s examples.

JSON

JSON is a well-known data exchange format. It’s simple to understand, and with implementations available in many different languages, simple to use. As mentioned above, our initial attempt to encode localizable objects in JSON was motivated by performance concerns. Even without a speed advantage, JSON still has some attractions, namely its existing implementations. Our JSON-based l20n infrastructure leverages Gecko’s built-in parser to do a lot of the heavy lifting, meaning there is less code for us to maintain. Plus, arrays and hashes, the fundamental datatypes available in JSON, are a natural fit for localizable objects. Still, JSON has some serious shortcomings, which we will see shortly.

As mentioned above, JSON is great for describing key-value pairs. Here’s the same brand.dtd example, now expressed in JSON:

{"brandShortName" : {"value" : "Minefield"},
 "brandFullName" : {"value" : "Minefield"},
 "vendorShortName" : {"value" : "Mozilla"},
 "logoCopyright" : {"value" : " "}}

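Since this is plain JSON, inflating it takes a single call to the built-in parser; a quick sketch:

// Sketch: parse the JSON source with the built-in parser and read an
// entity's value back off the resulting object.
var source = '{"brandShortName": {"value": "Minefield"},' +
             ' "logoCopyright":  {"value": " "}}';

var entities = JSON.parse(source);
// entities.brandShortName.value -> "Minefield"
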
Our localizable objects in JSON feature a “value” attribute denoting the string to be displayed. This, along with the required quotes surrounding the keys, makes our JSON example slightly more verbose. Now here’s the gendered-noun example from above, this time in JSON:

{ "complex" : 
    {"indices" : ["appName.gender"],
     "value" : { "male" : "Ein hübscher ${appName}s.",
                 "female" : "Ein hübsches ${appName}s."}},

  "appName" : {"value" : "Jägermeister",
                  "gender" : "male"}}

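Resolving an indexed entity from this structure is also straightforward; here is a simplified sketch (placeholder expansion and deeper property paths are glossed over):

// Simplified sketch: pick the right form of an indexed entity by
// following its "indices" attribute into the referenced entity.
function resolve(entities, id) {
  var entity = entities[id];
  if (!entity.indices) {
    return entity.value;
  }
  // e.g. "appName.gender" -> entities["appName"]["gender"] -> "male"
  var parts = entity.indices[0].split(".");
  var key = entities[parts[0]][parts[1]];
  return entity.value[key];
}

// resolve(parsedJson, "complex") -> "Ein hübscher ${appName}s."
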
In JSON, we need to reserve some keywords for attributes, like “indices” here, to implement certain l20n features. Still, JSON works pretty well to express the structure of the object. One area where JSON doesn’t work so well is comments. Our top-level object is a hash that associates entity IDs with their definitions. There are a few possible ways to integrate comments into this object:

  1. Assign each comment to the same identifier, e.g. “comment”.
  2. Assign each comment to a unique identifier, e.g. “comment0”, “comment1”, etc.
  3. Don’t allow top-level comments: each comment has to be an attribute of an entity.

Option 1 makes sense for humans writing JSON, and it’s technically valid, but duplicate keys are a little strange: most parsers will keep only the last comment.
Option 2 is a little painful when writing the file, especially when it comes to inserting new comments. This scheme would make it possible to reference specific comments, which might be useful.
Option 3 is somewhat of a straw man but still deserves some consideration. Most comments in a localizable file give instructions for how to translate a specific entity, and this scheme would make that relationship explicit. This form of comment is likely the best choice in most situations, but it is probably too restrictive to make it the only choice.

Another shortcoming of JSON is that it doesn’t support multiline strings. This is a serious problem when it comes to presenting long strings to localizers, since line breaks aren’t just for readability; they also give important cues about the logical separation between thoughts. As it turns out, the native JSON parser built into Gecko is perfectly content to accept multiline strings, but most other parsers will report an error.

YAML: A better JSON?

YAML is a data serialization language that is a superset of JSON. It supports comments, multiline strings, and user-defined data types. On the downside, it’s not nearly as well-known as JSON, it’s considerably more complex, and it’s not already built into the Mozilla platform.

Here’s our first example from above, now in YAML:

brandShortName: Minefield
brandFullName: Minefield
vendorShortName: Mozilla
logoCopyright: " "

And the second example:

complex:
    indices: appName.gender
    value:
        male: Ein hübscher ${appName}s.
        female: Ein hübsches ${appName}s.

appName: {value: Jägermeister, gender: male}

YAML has the same logical structure as JSON with a much cleaner look, since indentation can be used instead of curly braces to denote objects, and it doesn’t require strings to be delimited with quotes. That’s another attractive feature, since it reduces the chance of errors from improperly escaped quotes or missing trailing quotes, which cause a lot of frustration. The less rosy side of the picture is that we don’t have a YAML parser that we can simply drop into place like we did with JSON, so it would require a lot of work on our part to get one up and running. YAML does have a fair number of implementations floating around, but development seems to have stalled on many of them. For example, this JavaScript implementation hasn’t seen any updates in nearly 5 years.

Summary

So far we’ve examined three choices: LOL, JSON, and YAML. The first was designed specifically for l20n, so naturally it encodes the complex features of l20n most gracefully. The remaining two are established formats with implementations available in many different programming languages (JSON to a far greater extent than YAML). The lack of comments and multiline strings is probably enough to eliminate JSON from the discussion, since these deficits outweigh any advantage of interoperability, leaving us with LOL and YAML. If you’d like to make a case for one of these, or any other format dear to your heart, don’t hesitate to leave a comment! We’d love to get your input.