A brief history of JavaScript

JavaScript is one of the most widespread and most successful programming languages today. Its history is reminiscent of a “from rags to riches” saga. It is said that JavaScript was created in only ten days. I am not sure about the historical accuracy of this claim, but suffice it to say that JavaScript was born at a pivotal moment in mid-1995, when the Netscape Navigator browser was driving the commercial breakthrough of the Internet. Things happened fast at the time. JavaScript was conceived as a “glue language” for the browser and the document object model, giving programmers a means of making web pages more dynamic. Most glue languages die when the product into which they are embedded reaches end-of-life. The browser, however, never reached end-of-life but was continuously improved, and so JavaScript kept growing and growing.

Its inventor, Brendan Eich, was tasked with creating a Scheme-like language for the Netscape browser, targeted at amateur programmers. However, Java happened to be the hype of the day, and Netscape happened to be in a partnership with Sun Microsystems, the then owner of Java. Therefore, the language was christened JavaScript instead of LiveScript/Mocha and the syntax was made to resemble that of Java. Well, at least somewhat. To that end, Scheme’s parentheses were dropped while its functional aspects were retained. Surprisingly, a prototypal object model was chosen, slightly exotic at the time, possibly to allow easy object creation and extension without the need for complicated OOP constructs. It turned out to be a winning mix, because it allowed people to use procedural, functional, and object-oriented programming styles “al gusto”. Furthermore, one could use objects without knowing anything about OOP.

Just a year and a half later, at a time when Netscape held over 60% of the browser market, JavaScript was submitted to the ECMA International association for standardization and for overseeing the future development of the language. The first ECMA-262 standard was published in June 1997, and since then JavaScript has also been known as ECMAScript, of which there are now several competing implementations. Strategically, this turned out to be a brilliant move. It ensured the uncoupling of JavaScript from the Netscape product. Eventually, this meant that JavaScript survived the browser wars, whereas Netscape Navigator did not. Around the same time, arch-rival Microsoft created a clone language called JScript, a JavaScript “dialect” with small differences in its implementation. Though imitation may be the sincerest form of flattery, this ushered in a long-lasting period of browser incompatibilities, aggravated by Microsoft’s quirky DOM and CSS implementations, that caused web programmers permanent headaches.

The triumph of JavaScript continued, however. Netscape had the bright idea to turn JavaScript into a server language from the very beginning, but for some reason the world wasn’t ready for that in the nineties. Just before the end of the millennium, the third edition of ECMAScript was released. It added regular expressions, exception handling, and a number of other things that made JavaScript a (somewhat) grown-up language. ECMAScript 3 remained the standard for ten years, and it gained significant leverage through Web 2.0 and Ajax. This technology allows data and partial web pages to be loaded from the server in the background through asynchronous application design. JavaScript is well suited to asynchronous programming due to its functional nature. Since it is already present in the browser, JavaScript+Ajax enabled moving view logic entirely to the browser, thus making applications more responsive. Two ingredients played a key role in this development: JSON and the jQuery library. A typical Web 2.0 application mixes HTML with JavaScript display logic and thus allows the creation of dynamic components handling data and visual content.

A natural outcome of this was the inclusion of library support for JSON parsing in ECMAScript 5, released in 2009. Because the complexity of JavaScript code increased with Web 2.0 innovations, “strict mode” was introduced in the same release. If enabled, it disallows the use of error-prone syntax and constructs, a kill switch for the weak spots of the JavaScript language, so to speak. Another important event in the JavaScript world was the introduction of Node.js in the same year, a server-side JavaScript runtime environment based on Google’s V8 engine. Node.js allows the creation of server applications with non-blocking IO using an asynchronous, reactive programming model. Node.js gained significant traction in the 2010s and is currently making inroads into enterprise server applications, where it replaces Java and .NET applications. Another JavaScript driver is the single-page application (SPA) model, which redraws the screen of a single web page, akin to a desktop application, instead of successively loading web pages. Such applications are commonly implemented using a JavaScript web framework similar to server-language web frameworks.


ECMAScript 6 (ES6), released in mid-2015, introduced a substantial number of new features and syntax to allow the creation of complex applications. They include block scope, arrow functions, enhanced parameter handling, modules, classes, enhanced object properties, destructuring assignment, iterators, promises, generators, and new data types. Some of these features address the special needs of complex asynchronous server-side applications. Many developers perceive ES6 as the most important language update thus far. The ES7 standard, published in the following year, was a minor upgrade, adding an exponentiation operator and one additional array method. The ES8 standard, released in June 2017, is the most recent major upgrade and ties in with the need for more complex language features for building server-side enterprise applications. It adds the async/await syntax, which further simplifies asynchronous programming. It also provides new data structures and syntax to support concurrent programming, which opens the door to multi-threaded JavaScript environments.

JavaScript has turned from an ugly scripting duckling into a versatile and feature-rich full-stack web programming language. As of today, it is the web language most in demand and well on its way to taking over enterprise applications. It’s been quite a journey.


The Vim Experiment

Though I started my career with text-based editors like vi, Emacs, and Brief, I have been using IDEs for a very long time. It began with products with “Visual” in their name. Since then, I have moved on to Eclipse for Java programming, NetBeans for web/PHP development, WebStorm for JavaScript, and the list goes on. So far I have not looked back and never questioned the convenience and productivity that come with contemporary IDEs. Until last month.

Someone suggested I give vim a try. Say what? 1970s technology instead of a full-featured IDE? Well, first of all, vim must not be confused with vi. The latter is significantly older, whereas vim was originally developed in the 1990s and is still in active development. Anyone who has ever worked with Linux is probably familiar with vim. It can be found on almost any *nix computer and is often used for quick-and-dirty editing of configuration files. Perhaps it is not the most popular editor because, to the majority of people accustomed to non-modal editing, the modal interface of vim feels a bit foreign. In addition, vim has no point-and-click interface. It can only be used effectively by learning a great number of keyboard shortcuts and commands.

vim 8 on Ubuntu 16.04

So why vim? To put it simply, its curse is also its greatest promise. If your hands do not have to move between keyboard and mouse all the time, you can accomplish things faster and with greater ease. Drew Neil, the author of “Practical Vim”, speaks of “editing at the speed of thought” and “high precision code editing”. There is also less potential for carpal tunnel syndrome with your hands resting on the keyboard. What is more, vim features a scripting language and a plugin system, which makes it highly configurable and extensible. So the question is: can vim hold a candle to modern IDEs or even beat them in terms of productivity?

I decided to find out and prescribed myself a strict three-month IDE-less diet, using vim and nothing but vim for my daily editing work. Three months because, as mentioned, the learning curve is not exactly flat and it takes some time before all these keyboard sequences are committed to finger muscle memory. There are two questions I am looking to answer with this experiment. The first is whether vim can actually accomplish all the wonderful tasks that IDEs are good at and that make a programmer’s life easier, such as code completion, automatic formatting, diffing, syntax and code-style checking, debugging support, and whatnot. So far, I am pleasantly surprised, though there are still a few rough edges.

The second question is whether typing speed and editing automation actually exceed the possibilities offered by an IDE and whether the promise of increased productivity materializes. I am not sure about this one either, although my vim repertoire is slowly improving and I am starting to feel that I am no longer merely hacking my way through the various editor modes. At any rate, the vim editor is both ubiquitous and here to stay. So even if I decide to go back to using an IDE for coding, there is probably a benefit in mastering this tool a little better.

Object cloning in PHP

In any complex object-oriented PHP program, there are situations that require copies of objects. Objects are often designed to be mutable, which means they contain state information that can change. Consider a bank account object, for example, which contains state information about the balance, the credit limit, and the account holder. Let’s assume that there are withdraw() and deposit() methods that change the state of this object. By contrast, an immutable design would require that withdraw() and deposit() return new account objects with updated balance information. This may sound like an irrelevant distinction, but the implications are actually far-reaching, because mutable objects tend to increase complexity in subtle ways. Copying objects is a good example.

$object1 = new Account();
$object2 = $object1;

By assigning an object instance to a new variable, as above, one creates only a new reference, and the object’s state information is shared by both reference variables. Sometimes, this is all a program needs. If withdraw() is called on $object2, both $object1->getBalance() and $object2->getBalance() return the same value. On other occasions, this behaviour is not desirable. For instance, consider displaying the result of a withdrawal operation on an ATM before the transaction is executed. In this case, we can make a copy of the account object, execute the withdrawal operation, and display the new balance or an overdraft message to the user without affecting the actual account. For this we need a copy of the object rather than a copy of the reference. PHP provides an intrinsic operation for this using the keyword clone:

$account = new Account();
$clonedAccount = clone $account;

The $clonedAccount variable contains a copy of the original object. We can now invoke $clonedAccount->withdraw() to display the results and, with a bit of luck, the original $account object remains unaffected. With a bit of luck? Yes, unfortunately things aren’t quite that straightforward. The clone operation creates a so-called shallow copy of the original instance, which means that it constructs a new object with all fields duplicated. Any field that contains scalar data, such as an integer, string, or float, or an array, is copied by value. If the balance is of type float, for example, we are fine. If the balance field happens to be an object, however, we have a problem, because the clone operation does not copy composite objects but only their references. If the account class uses a balance object, a call to $clonedAccount->withdraw() would still affect the state of the original $account object, which is clearly not the desired behaviour.
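To illustrate the problem, here is a minimal, self-contained sketch. The Money class and the concrete method bodies are hypothetical and only serve to demonstrate the shared reference; they are not part of the original example.

class Money {
  public $amount;

  function __construct($amount) {
    $this->amount = $amount;
  }
}

class Account {
  protected $balance;   // holds a Money object, not a float

  function __construct(Money $balance) {
    $this->balance = $balance;
  }

  function withdraw($amount) {
    $this->balance->amount -= $amount;
  }

  function getBalance() {
    return $this->balance->amount;
  }
}

$account = new Account(new Money(100));
$clonedAccount = clone $account;   // shallow copy: both objects share one Money instance

$clonedAccount->withdraw(30);
echo $account->getBalance();       // prints 70, not 100, because the original account is affected

With a proper deep copy, the echo statement above would still print 100.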

This can be remedied by adding a magic method named __clone() to the account class. The __clone() method defines what happens when the object is cloned:

class Account {

  protected $balance;

  function __clone() {
    $this->balance = clone $this->balance;
  }
  …
}

The somewhat odd-looking syntax of the __clone() method above instructs PHP to make a copy of the balance object that the field $balance refers to when the account object is cloned. Thus, not only is the account object itself copied, but also the balance object that it contains. While this should be fine for our stated purposes, note that it only copies the balance object, and not any other composite objects that the account object might contain. It is not difficult to generalise the code, however. The following, even odder-looking syntax makes copies of all composite objects of the account object. It does so by iterating over all fields of the current instance referred to by $this, where $key takes the names of the fields and $value their values:

class Account {

  function __clone()
  {
    foreach ($this as $key => $value) {
      if (is_object($value)) {
        $this->$key = clone $this->$key;
      }
    }
  }

}

The is_object() test in the above code is necessary to avoid cloning non-existent composite objects, i.e. fields whose value is set to null, which would result in an error. Yet, this code still has a minor flaw. What if our object contains array fields whose values are objects? While the array itself would be copied, its elements would still contain references and thus point to the same objects as the elements of the array in the original object. This flaw can be eliminated by adding a few more lines of code that make explicit copies of the objects held in array fields:

function __clone()
{
  foreach ($this as $key => $value) {
    if (is_object($value)) {
      $this->$key = clone $this->$key;
    }
    else if (is_array($value)) {
      $newArray = array();
      foreach ($value as $arrayKey => $arrayValue) {
        $newArray[$arrayKey] = is_object($arrayValue) ?
          clone $arrayValue : $arrayValue;
      }
      $this->$key = $newArray;
    }
  }
}

This already looks fairly complicated, but unfortunately it is not the end of our troubles. We also have to consider the hierarchical structure of composite objects: the objects held in object fields may contain object fields themselves, which may in turn contain objects with further object fields. Thus, creating a clone from scratch requires recursive copying of the object structure, otherwise known as making a “deep” copy. Obviously, the above method already gives us a form of implicit recursion if all of our objects implement it: the __clone() method of object A is implicitly invoked by the __clone() method of object B when B, which contains A, is cloned. We could use a base class for all of our objects to provide deep copying functionality (a sketch of such a base class follows further below). Although this works only with our own objects, and not with objects from third-party libraries, it would provide a comprehensive method for object copying. Unfortunately, the recursive approach still contains a flaw. Consider the following object structure:

class Employee {
  public $name         = null; /** @var string employee name */
  public $superior     = null; /** @var Employee the employee's superior */
  public $subordinates = null; /** @var Employee[] the employee's subordinates */
}

This is an example of a class that represents a hierarchical graph of instances in memory. The Employee class defines a tree structure, with the variable $superior containing a reference to the parent node and the variable $subordinates containing references to the child nodes. Because of this double linking, the graph contains cycles, and because of these cycles, the above clone method will run into an infinite loop and cause a stack overflow. Cycles are fairly common in object graphs, though they are not necessarily as obvious as in the above example. In order to prevent the clone method from running into a cycle death trap, we need to add a cycle detection algorithm, for example by reference counting. How exactly this is implemented is beyond the scope of this article. Let’s just say it’s not that trivial.
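For completeness, here is a minimal sketch of the base-class idea mentioned earlier. The class name DeepCloneable is made up for this illustration, the sketch performs no cycle detection (so it would still loop forever on cyclic graphs like the Employee structure above), and it only visits fields that are visible from the base class, i.e. public and protected ones.

class DeepCloneable {

  function __clone() {
    foreach ($this as $key => $value) {
      if (is_object($value)) {
        $this->$key = clone $this->$key;   // triggers __clone() recursively for our own objects
      } else if (is_array($value)) {
        $this->$key = $this->cloneArray($value);
      }
    }
  }

  private function cloneArray(array $values) {
    $newArray = array();
    foreach ($values as $arrayKey => $arrayValue) {
      if (is_object($arrayValue)) {
        $newArray[$arrayKey] = clone $arrayValue;
      } else if (is_array($arrayValue)) {
        $newArray[$arrayKey] = $this->cloneArray($arrayValue);   // handles nested arrays
      } else {
        $newArray[$arrayKey] = $arrayValue;
      }
    }
    return $newArray;
  }

}

Classes such as Account would then simply extend DeepCloneable instead of implementing their own __clone() methods.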

If you can do without cycle detection, there is a simple alternative for creating deep copies of an object, one which does not require a clone method implementation:

$object1 = new Account();
$object2 = unserialize(serialize($object1));

This takes advantage of the PHP serialize() and unserialize() library functions, which convert an object to and from a string representation. These functions take nested object structures into account. However, they are expensive operations, both in terms of CPU and memory, and they should therefore be used with discretion.


Pogoplugged

Everyone seems to agree that the outgoing year 2011 was the year of the cloud. Judging by how often the word “cloud” was thrown at us by computer vendors, hosting companies, and service providers, it sounds like the greatest thing since sliced bread. Of course, nothing could be farther from the truth. Cloud computing is not new at all. It has been around since the days of Multics and the ARPANET, at least conceptually. It is neither an invention nor a product, but an application of existing computer technologies, no matter how many companies now try to productise it. The fuzzy term covers everything from network storage, utility computing, and virtual server hosting to service-oriented architectures, typically delivered via the Internet (i.e. the cloud). In fact, the term is so blurry that even the vendors themselves often disagree about what it means, as Larry Ellison, CEO of Oracle, famously did before his company jumped on the bandwagon.

Pogoplug 2

Most people associate the word cloud with file hosting services such as Dropbox, Windows Azure, or Apple’s iCloud. Today, I want to talk about a product that offers an alternative to these network storage services and, in my opinion, a superior solution. It’s called Pogoplug and it is a box that comes in flashy pink. The idea is simple enough. You connect this box to your Wifi router on one side and to your storage media on the other side, and voilà, you get networked storage, aka your own “personal cloud”, which is accessible on your LAN as well as from outside via the Internet. Besides connecting the box, you have to get an account with pogoplug.com and register your device. Optionally, you can install software that makes the attached storage available on your LAN as an external mass storage device. There are also free apps for iOS and Android that allow you to access Pogoplug-managed storage from your tablet and/or phone.

Why is this such a clever product? Well, for two reasons. First, the Pogoplug is low-cost and easy to use. Second, it provides solutions to multiple problems. Let’s start with the first. The basic Pogoplug device costs 50 USD, and the web account is free. You can plug up to four external hard disks or flash memory sticks into the four USB ports, so one could easily realise four or eight terabytes of total capacity. External hosting is expensive by comparison; for example, a 50 GB Dropbox account costs 10 USD per month, and with Apple’s iCloud it’s 100 USD per year for the same size. There are cheaper alternatives, such as livedrive.com or justcloud.com, but the annual expense still exceeds the cost of a Pogoplug device. What’s the catch? The download speed via the Internet is limited by the upload speed of your Internet connection, which for the average DSL user is typically lower than the access speed of an external file storage service. Filling the Pogoplug device with data, on the other hand, is much faster, because you can access the drives locally.

Now, about the multiple solutions aspect. What I like about the Pogoplug device is that I can reuse my external backup disks as network storage. I work with redundant pairs of disks, where one disk is plugged into the Pogoplug at all times and the other disk is used to create backups from my computers. In a second step, I mount the Pogoplug on my Linux workstation and synchronise the online storage with the fresh backups via rsync. In addition, I use my Pogoplug as a household NAS and media server. This comes in very handy for viewing my photo library on a tablet, or for streaming audio from my music collection to my phone. As long as I stay within the Wifi range of my house and garden, the data transfer happens at Wifi speed. Streaming movies is a little trickier. Usually I download movies from the Pogoplug to the mobile device before viewing.

In summary, the product offers a miniature file server for local access via LAN/Wifi and remote access via the Internet, plus some streaming services. The authentication service is provided by the pogoplug.com web server. As of late, you also get 5 GB of free cloud storage space externally hosted by pogoplug.com, which is likewise accessible via the mobile apps and can even be mounted into your local network. The Pogoplug device itself consumes only 5 W, less than most NAS or mini PC servers. Obviously, the power consumption increases when connected USB hard disks draw power from it, so the most energy-efficient solution is probably to use either flash memory sticks or USB-powered disks that stop spinning in idle mode. Additionally, the Pogoplug device can be deployed as a LAN print server. Those who are comfortable with Unix administration and scripting can program the Pogoplug device to do even more.

Website: www.pogoplug.com

Specifications:
1.2 GHz ARM CPU with 256 MB RAM plus 512 MB flash storage
4 x USB 2.0 ports, 1 x 10/100/1000 Mbps Ethernet port, integrated DC power supply
Supported Filesystems: NTFS, FAT32, Mac OS Extended Journaled and non-Journaled (HFS+), EXT-2/EXT-3
Supported Browsers: Safari, Firefox 3, IE7, IE8, Chrome
Supported AV File Formats: H.264, MP4, AVI with Motion JPEG, MP3

Digital Attrition

Naively, one might assume that digital artifacts, such as software, are not subject to the decay and attrition that affect physical objects. After all, any digital artifact reduces to a sequence of ones and zeros that, given durable storage, remains completely unaltered and would therefore function in exactly the same way even in ten, a hundred, or a thousand years. However, this notion disregards an important aspect of digital products, namely that they don’t exist on their own. A digital artifact almost always exists as part of a digital ecosystem and requires other components to be available to fulfill its function. At the very least, it requires a set of conventions and standards. For example, even a simple text file requires a standard for how to encode letters.

This became once again painfully clear to me when the WordPress software on which this blog runs suddenly started behaving erratically a few weeks ago. It produced 404 page-not-found errors that were impossible to diagnose and fix. I had not changed anything, being happy with the look and functionality of the blog, so the WordPress installation had reached the ripe old age of three and a half years. The cause had to be sought somewhere in the operating platform, which in this case means the web server configuration. Upon contacting the hosting provider, I was told that this problem had been diagnosed with older versions of WordPress and could only be cured by an upgrade.

Previous Blog Theme

I had no choice but to upgrade WordPress, and the result is before you. Since the old theme, which can still be seen in the thumbnail image, is not compatible with the latest WordPress version, I derived a new theme from the included twentyeleven package. It takes into consideration that screen resolutions have increased over the last few years, and it also provides a display theme for mobile devices. Curiously, while still offering the same set of functions and features as version 2.3.1, the WordPress software in version 3.2.1 has increased significantly in complexity. I ran a quick sloccount analysis, which told me that the codebase has grown from 36,895 lines to 92,141 lines, not counting plugins and themes, and the average theme has roughly doubled in code size.

I am sure that this phenomenon is not unfamiliar to anyone who has worked with computers over a number of years. Remember how MS Office 97 contained every feature you would ever need? Since text processing and spreadsheets reached maturity quite early in the game, some people would even say this of the prior versions of Office. Yet, Microsoft has successfully marketed five successor versions of MS Office since then, the latest one being Office 2010. Needless to say, the more recent versions have gained significantly in complexity and size. But who needs all that? Studies have shown that most people only use a small core set of features. Unless you are a Visual Basic programmer or have specific uncommon requirements, you would probably still do well with Office 97. Or would you not?

On closer inspection, you probably would not, and this is where the attrition factor comes into play. In the case of Microsoft, it is safe to say that this effect has been engineered for the sake of continued profits. Not only are older versions no longer supported, but they actually become incompatible with current versions. The change of file formats is a case in point. For example, do you know the differences and advantages of the Office x-formats (such as .docx and .xlsx) over the older .doc/.xls formats? The new ones are zipped, XML-based files and as such easier to process automatically. However, most people using older versions of Office or competing products cannot read these formats and are thus forced to upgrade or to obtain software extensions for compatibility.

This does not only apply to Microsoft products but, as previously mentioned, to digital artifacts in general. Remember floppy disks? Not long ago I found a box of them in the storage room. They contained sundry programs and files, reaching back into the Atari and MS-DOS era. Not only do I no longer possess a floppy drive, but even if I had one, I could not read these files. To access my earliest attempts at digital art and programming, for instance, I would have to read .PC2 and .GFA files on the GEMDOS file system, which would constitute a major archival effort. Perhaps I should keep them until I am retired and find some time for such projects. The surprising thing is how fast attrition has rendered digital works useless in the past decades. While I can still find ways to play an old vinyl record from the eighties, for example, it’s almost impossible to access my digital records from the same era.