December 23, 2012

*@home or why you have never heard of it.

Ever heard of SETI@home? Or maybe Folding@home? Well, if you haven't, here is a quick 'all about it': a program is downloaded to your own PC/Mac/whatever you use; it starts downloading data from somewhere on the Internet, makes some (many) calculations on it and then sends some data back. Basically your computer is used as part of a large cloud computing service and meanwhile your CPU cycles are 'donated'. Which means that your CPU (or even GPU - depending on which is more powerful on your particular system) is constantly spinning at the highest possible speed. Not very nice, huh?

I do remember the days when the SETI community was large and the top 'donors' were geeks with high-tech, high-price PCs, proudly sharing their vision for the future (not without a hint of the vivid imagery of Sagan's Contact in their minds).

This was maybe 10 years ago.

Today even my smartphone can do more calculations than the PC I was using back then. In those days one chunk of data took almost 5 days to complete. I had friends who never turned off their computers, just to be able to 'fold more' or find the perfect candidate signal and be the first to detect the ET call to earthlings. Yeah, those were the days.

Today however we have 'the cloud'. It does not make sense to use individuals' computers for the calculations. It makes more sense to use the cloud and just ask the individuals for the money required to run on the cloud. Indeed, this is what SETI is doing right now - sending email messages to every once-registered and long-gone user out there, asking for money.

Well dear friends, as fascinating as I find the quest of finally finding those aliens (and hopefully they will not be of the 'Alien' species), hiding around the galaxies, teasing us with noises that only a few humans can hear (yeah, try watching 'Conspiracy Theories' - you will learn everything about the aliens and their contracts with the powerful and rich of the day), today's computers are all about energy savings and efficiency, and guess what - mathematical calculations do not tend to go easy on the energy usage. Besides, back in the day I was using something like 10 terminal applications and the only graphical thing was my browser; there were a lot of free cycles available. Nowadays I prefer to keep those cycles for my browser. Or whatever. Just stay away from my fan. I hate fans...

October 17, 2012

Closure compiler and the adverse effect of using types for optimizations

This is more of a note and less a fully featured post. Just a quick reminder that one should not use type-based optimizations if one is not actually providing the types. This way one will not spend almost 30 minutes battling with code compiled with debug options (and thus extremely ugly) just to figure out why the code is not working as expected even though one has followed all the rules of the compiler.

Type optimization is a great addition to the compiler and indeed produces smaller code. So small that it may simply cease to exist.

So next time you want nice and tidy compiled code, do not forget that you should not just write the code, test it as source and then, just to see how it fares as a compiled entity, run it with advanced mode and type optimization. You should first write the types. It is easy.
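A minimal sketch of what 'writing the types' means here, assuming the usual JSDoc annotations (and, if memory serves, a flag along the lines of --use_types_for_optimization - check your compiler version):

/**
 * Fully annotated, so the compiler can trust the type information
 * when type-based optimizations are enabled.
 * @param {number} x The value to double.
 * @return {number}
 */
function double(x) {
  return x * 2;
}

// Compile with something like:
// java -jar compiler.jar --compilation_level ADVANCED_OPTIMIZATIONS \
//   --use_types_for_optimization ...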

October 10, 2012

TypeScript (almost a week later or why you should not jump to it yet!)

I have played with the language and guess what... it is not as brilliant as one is led to believe initially. As with many new products, you start to discover the multitude of technical gaffes only after you use it for a while.

First of all I should say that I rewrote a real-world module used in production and proven to work with a) RequireJS and b) Closure tools, the latter being fully annotated with all types discovered and identified correctly.

Now, in TypeScript, there are several severe problems for OOP and one of them is evident only if you use chainable APIs. You know those, where you write almost 300-character lines of methods invoked one after the other.

Lets see a simple example:

class Base {
  m() { return this; };
}

class Derived1 extends Base {
  m1() {};
}

class Derived2 extends Base {
   m2() {};
}

class Derived3 extends Derived1 {
  m() { return new Derived2(); }
}

Two things to notice. First: Base has a method called m which returns the 'this' object, which as we know is special and always points to the context in which the function/method is called. Second: the Derived1 class extends Base and introduces a new method m1, which returns nothing (in TypeScript void, in JavaScript undefined).

Care to guess what the following code will do?

(new Derived1()).m().m1();

Well, it spits out an error: The property 'm1' does not exist on value of type 'Base'.

Analyzing the code, it is pretty obvious what is happening: the compiler does not understand what 'this' is when the invocation is encountered. It figures out that the type is Derived1, then searches for method m on it, does not find one, looks it up in the immediate parent, finds it, checks its return type and... 'boom', it does not know what 'this' is anymore. Is it possible that Microsoft does not know anything about invocation context, or is it a bug?
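For the record, a workaround sketch (my own, not an official recommendation): redeclare the method on the subclass with a narrower return type, so the chain keeps the subtype:

class Base {
  m(): Base { return this; }
}

class Derived1 extends Base {
  // Redeclared with a narrower return type so chaining keeps Derived1.
  m(): Derived1 { return this; }
  m1() {}
}

(new Derived1()).m().m1(); // type-checks now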

Reading the related thread on the CodePlex discussion board, it is pretty clear to me that the people currently playing with the language have never heard of JavaScript compilation, ASTs, tree shaking, type inference and whole-program compilation. It is also pretty obvious that the fragmentation of the JS library world does not help.

Let's try another example:

class Base {
  m() { return this; };
}

class Derived1 extends Base {
  m1() {};
}

class A {
  another(){
    return this;
  }
}

var a = new A();
var b = new Derived1();

a.another.call(b).m1();

As magical as it might seem, this example works! How is this possible? Well, it is not magic: if we take a look at the definition of Function.prototype.call, it is declared as returning 'any', and this is the correct declaration (because of course it can return anything). Here is the thing: the compiler should still be able to understand what is going on. Let's analyze this a bit: a method of class A is called in the context of an instance of class Derived1. Let's look at the return statement of that method: 'this' - a keyword in JavaScript and TypeScript. So if it is 'this', the return type should be the context in which it was called, right? Why is the compiler not able to determine that, and instead relies on the declaration?

I will tell you why: because it was not designed to be able to do such complex tree walking. And that is okay, maybe the authors have different objectives. But what I think we should have is a language/compiler that can do this kind of thing! Whole-application compilation is what will really assure all your types are correct, not compiling small parts and then hoping that the type system will still work - it will not.
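In the meantime the escape hatch is a manual cast, using early TypeScript's angle-bracket syntax; this is my workaround, not a fix:

// Assert the context type by hand so checking resumes after the call.
(<Derived1>a.another.call(b)).m1();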

On the mentioned thread there was a proposal to add a _this keyword to compensate for the fact that the compiler is not smart enough to know the type of the invocation context. It might work; while I do not agree with it, it is up to the designers of the language. But imagine this (as they promote the VisualStudio plugin): you have the code above, and after you type (new Derived1()). you get the completion, you select m() and then you get:

(new Derived1()).m().

and at this stage you get no meaningful completion, because the TypeScript service no longer knows what the type of the result is. So now tell me, how useful is that?

Of course this is not really a show stopper; people seem to like it a lot. I also like the syntax for the modules and classes, and I kind of like the fact that you have your type definitions next to the variables - much more elegant than having Java-like annotations. Unfortunately the type system seems to be either in its infancy or to have different objectives than the ones I consider important, which makes the language not so attractive to me. There are things to be fixed; a lot of bugs have been narrowed down in the last week and I am sure the tool will mature, but the advanced features some developers are looking for (for example a robust type system as in Closure tools) are not there. Too bad, I really liked the syntax.

Good luck to Microsoft; I will make sure to check the language again in 6 months. Unfortunately it seems the community will be driving the development, and knowing the variety of styles and requirements I don't believe something coherent and useful will come out of it in the long run.

October 07, 2012

TypeScript and the JavaScript developers

This is going to be a rant of a kind and is not related to the three-part series dedicated to TypeScript.

I happen to like TypeScript a lot. It is typed (I have lived with types in JavaScript for a long time - Closure tools, anyone?) and I like short, clear syntax and tools that can work with it. Closure tools are awesome, but hard to learn and hard to use; internally Google has attempted to make the syntax more bearable, but then their own tools work inconsistently with it (gjslint does not understand scoped code and thus does not provide useful information in that case; the compiler works just fine however).

TypeScript on the other hand is pretty easy to pick up for the average JavaScript developer. Unfortunately the average JavaScript developer seems to be in love with patterns mostly used in jQuery and does not understand basic concepts of the language, and still they are interested in this new language. At least this is my impression from the questions/suggestions on the forum on Microsoft's page.

This is in sync with the opinion of some of the 'gurus' of JavaScript land: too many JavaScript developers, too few good ones.

Not to be too skeptical, but I don't see the regular jQuery user benefiting from type checking; they are in it more for the auto-completion. Not that this is bad... just strange, as that works without type checking as well...

As a person coming from Closure tools I am in LOVE with TypeScript. I wonder what others find in it. While people still argue how useful type checks are in JavaScript, for me it is no longer a question of IF I should do type checking, but of HOW to do it more easily.

October 05, 2012

TypeScript vs. Closure Library/Compiler

This post will take a look at the user experience with the closure tools versus the experience with TypeScript.

This is part 3 of a three-part post. Part 1 is here, part 2 is here.

Because there is a compiler involved and because everyone out there is rushing to compare TypeScript to Dart, Clojure, C# and whatever else there is, I wanted to point out one very important thing that no one seems to notice:

JavaScript is TypeScript. All valid JavaScript is valid TypeScript. This puts the new language way outside the niche of Dart, CoffeeScript or whatever other language out there compiles to JavaScript. The closest thing there is, IMO, is Closure library + compiler.

I will elaborate a bit on that: Closure library is all valid JavaScript - it can work in the browser just as it is. Because you can put pure JavaScript in TypeScript, the same is true, at least in a fringe case, for TypeScript. Most people compare the closure compiler to other options in the simple-optimization case; however, the library and the compiler were created to be used in advanced mode, which mandates a very particular subset of JavaScript in the source files in order for the output to work as intended. That is: closure library is a subset of JavaScript, while TypeScript is a superset of the language.

If you remember, a few years ago the closure tools, and especially the library, were received very badly in the JavaScript community: well-established names were on the fast track to reject it. I particularly remember the notes of the RaphaelJS creator on how bad the closure library was, how little its creators understood the workings of JavaScript, and so on - even for loops were used as an example of how they should not be written. Of course today we all know that the library should never be used without advanced optimization, and that micro-optimizing loop constructs whose bodies are 10000 times slower than the slowest loop construct you can imagine is pretty stupid, but back in the day the big names of the open source JavaScript movement were there to "explain it for us".

I mention this because today I see pretty much the same story: the well-established names in the JavaScript world (at least some of them) were quick to give their verdict on the new language just because it does not coincide with their established workflow.

Well, a few years ago those people were wrong, and even though closure tools were never widely accepted or used, they remain one of the best in the business. No compiler or minifier can beat the closure compiler in advanced mode. Of course the requirements on the input remain and are clearly a big leap for any JavaScript developer, but the benefits are very clear, and if small size is what you are looking for, this is the place to go.

Lots and lots of big products have been built with other tools of course and they work well too. The thing is that the tools were offered for free and almost no one put in enough effort to figure out how and when they should be used. Developers were angry that they had to write their code following specific patterns, and the learning curve for the ready-to-use components was huge. It still can be for newcomers. The main complaint was the style and the lack of expressiveness due to the limited number of allowed patterns.

Strangely, today the same developers are limited more than ever: most of the MVC frameworks are very opinionated about the code style and patterns that can be used, and most are not interchangeable - if you start with one product it is not an easy task to switch to another, not really. So it is basically the same thing. I, for example, have never used AngularJS. Today I wanted to try to make something with it; well... I had 2 hours of free time. Guess what - I could not complete anything in two hours. Maybe I am a slow learner, maybe it is not for me; whatever it is, the thing is you have to give it time.

Now back to TypeScript. Because of the automatic type inference, and because it does not actually contain any code/library, the learning curve is shallow, especially if you have already dealt with module patterns, inheritance, classes in JavaScript, public/private/protected methods and types. I will translate that for you:

If you have used closure tools before, TypeScript will be a very refreshing and intuitive tool for you.


Yes, it does not provide the utilities closure has; yes, it cannot strip unused code, but it seems the community is not interested in the latter - even advanced projects like UglifyJS do not aim to remove unused members because "it is not safe" for every style of code. Yes, it is not. But it is very beneficial. Anyway, being unrestricted by the patterns of closure library and compiler while still having static types and easy-to-follow annotations, interfaces that can be checked at compile time, and a sort of 'exports' (the export keyword in the case of TypeScript) will feel very familiar to any closure developer.

Like with closure tools, you have to go through a build step before you can check your changes in the browser. In the case of closure tools this is only when you change a namespace or require a new one - you need to recalculate the dependencies. In the case of TypeScript it is true even if you change a single letter, because the file that is loaded in the browser is not the one you are editing. I believe it will not take long for this to be overcome, as it was for closure with tools like closure-script and plovr.

Unlike closure, you have source maps that work right now - not that you will need them much, unless you are using the TypeScript-preferred module pattern, because the generated JavaScript matches the source very closely.

Unlike closure, the generated code is neither obfuscated nor minified. However, there are plenty of tools that will do that for you. Unfortunately the generated code is not compatible with the closure compiler in advanced mode, and you will always need to load the whole of your libraries if you happen to create them with TypeScript. The good news is that modern JS engines tend not to construct everything in memory; there is something called lazy parsing, which I do not pretend to understand very well, but basically it means that if a part of your code is never used it is never interpreted either, except the first time, when checking the code for validity. Maybe this will solve some of the issues with downloading several megabytes of code, and caching will solve the rest? I have no idea, but the fact is we now see almost ten megabytes of scripts being part of a single application. That developers tend to use patterns that are easy on the eyes and skills but hog system memory is an entirely different problem.

I find it easier to express complex constructs in TypeScript than in Closure; however, the small number of people already interested in the language have pointed out that it lacks some important types (for promises, deferreds etc.). I believe those will be resolved in the next few months.

The language is still in Alpha! You have to know that. Also, it tries to follow ES6, which probably means that if ES6 changes the language might change as well. But it is a fun toy, especially if your mindset is polluted by the closure tools and you are constantly thinking about types, interfaces and static methods of classes.

So the verdict in my case (me being not really an acknowledged expert, but still having years of experience with large scale projects) is that TypeScript looks very promising and I will definitely enjoy working with it. Hopefully there will be a standard DOM library soon, and hopefully a widget library later as well. I also hope that it will be possible to construct a compiler able to strip unused symbols, to make it possible to develop large, well-featured libraries without fighting for every byte.

A few words about the compiler and auto-completion in other editors: last night's post established that the completion and compilation can work entirely in the browser, and this is true: open the playground app and disconnect from the internet - everything will continue working. The completion service is available in the browser offline, simply interpreting your code and using the main definition file (lib.d.ts). This really means it could be brought to IDEs like Cloud9, which already work in the browser anyway. Please please please do something about it, because I really don't like VisualStudio2012 and I don't even own a Windows machine.

TypeScript Part 2: AMD and CommonJS

This is the second part of a three-part post dedicated to the developer's experience with the TypeScript language, created and released as open source by Microsoft.

Part 1 is here.

AMD modules.

The compiler supports producing AMD-compatible modules out of your code. However, one needs to specifically target the currently circulating module systems when coding, which is fairly simple: just avoid the module keyword and mark the exports with the export keyword. This is slightly counter-intuitive, but if you know about this behavior in advance no mistakes will be made. So to sum it up and clarify, I will make a real-world example and define the smjs API interface present on the STB devices:


interface Smjs {
  initapi();
  set_json_handler(handler: string);
  jsoncmd(command: string);
}

class esmjs implements Smjs {
  initapi() {}
  set_json_handler(handler) {}
  jsoncmd(srt){}
};

export var smjs = new esmjs();

Then I compile it with the following command:
tsc --module amd mymodule.ts
and the result is then:
define(["require", "exports"], function(require, exports) {
    var esmjs = (function () {
        function esmjs() { }
        esmjs.prototype.initapi = function () {
        };
        esmjs.prototype.set_json_handler = function (handler) {
        };
        esmjs.prototype.jsoncmd = function (srt) {
        };
        return esmjs;
    })();    
    ; ;
    exports.smjs = new esmjs();
})

This will allow me to basically define my module as AMD without having to write the AMD boilerplate. How about required modules? It is as simple as stating them with the import module pattern:

import myname = module('path/to/module');

The resulting code will be entirely compatible, as if you had typed it in the AMD pattern.
There is one thing to mind however: the compiler does not understand the requirejs config options and will always traverse the paths in simple mode (i.e. as if you had not set any path config on requirejs). In the opinion of some folks using rjs this makes it impossible to use TypeScript in a real requirejs project. This is not my opinion however, because the compiler does not try to do everything for the module loader; instead it simply provides the means to allow code completion and basic checks on your module. So instead of really providing the code for the third-party modules that you do not want to include in the base path of your *.ts scripts, provide the definition files instead. More details and examples can be found by browsing the example projects constructed with typescript on their website.
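To illustrate, a hypothetical definition file (say, jquery.d.ts) for a third-party module that you load separately; the names here are made up for the example:

declare module "jquery" {
  // Only the shape of the API, no implementation.
  export function ajax(settings: any): any;
  export var version: string;
}

// In a consumer module the import is then type-checked against it:
// import jq = module('jquery');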

Producing a CommonJS module is similar: even without specifying the --module directive, the compiler will output CommonJS-style code when it encounters the export keyword. The same module above will look like this:
var esmjs = (function () {
    function esmjs() { }
    esmjs.prototype.initapi = function () {
    };
    esmjs.prototype.set_json_handler = function (handler) {
    };
    esmjs.prototype.jsoncmd = function (srt) {
    };
    return esmjs;
})();
; ;
exports.smjs = new esmjs();
(I don't know why the extra semicolons, though - possibly leftovers of the stray semicolons in the source, like the one after the class body.)

One interesting aspect of all this is that if you reference moduleA from moduleB, the compiler looks for it in order to make all possible checks on your code, so if you do not really have the required module (i.e. A) you should at least provide the definitions for it. With the AMD/CommonJS module pattern it is even easier, as you only have to provide the exports of the module, which is often a very simple, stripped interface (modules provide better implementation hiding, right?).

Another catch is how you declare the referenced modules in AMD/CommonJS module definitions in TypeScript: in a TypeScript module you do it using reference comments, whereas in AMD/CommonJS you do it with import. You can of course use imports in TypeScript modules as well, but with a slightly different meaning. You can learn more about this in the video posted on their website.
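A compact illustration of the two mechanisms (paths are hypothetical):

// Internal (TypeScript) modules: a reference comment pulls in declarations.
/// <reference path="internal/moduleA.ts" />

// External (AMD/CommonJS) modules: import binds the module at runtime too.
import modA = module('path/to/moduleA');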

One note: if you tend to require lots and lots of modules that are not written in TypeScript and you do not want to spend time writing definition files, it is probably better for you not to bother with TypeScript. Module migration from AMD to CommonJS and from CommonJS to AMD can be achieved with other tools anyway.

In my research I found it more compelling to use the modules for nodejs projects, because there are so many npm modules, each and every one of them with a different interface. What would be really great is if the npm modules provided definitions only, and if possible with comments. This could easily lead to a tool able to generate documentation from definition files, which would be a double win: you can use TypeScript or JavaScript to develop your npm module, then you deploy the JavaScript and the definition file, and the latter can be used both by an IDE and by a documentation tool. On the consumer side the same applies: if the consuming developer uses TypeScript he gets to use the definition file for auto-completion and documentation; if JavaScript is the preferred technology, the definition file can still be of great help in understanding the API, instead of reading lots of documentation, sometimes not that well written and sometimes completely missing.

As for AMD, the reality of most projects is as follows: you pull in something like 10 different pieces of libraries and framework parts and almost none of them is AMD compatible; then you spend a few days tweaking the shims for those to be loaded as they require, and then you start writing your application code "as modules" in the hope that those can be reused in other parts of your application or (hail Mary) in another project - the latter of course happening like... never. At the end you have well-defined modular code, sure, but each of those modules has so many dependencies that you figure out you will never be able to use just one or two modules from the project, because they will pull in at least 10 times more dependencies. Because it is application logic code. One thing you can easily do is plug new features into such partitioned code, which is not a small win, but it does not pay back that much when you think of it.

Enough on my opinions on the module patterns in use.

The fact is, if you use AMD and/or CommonJS, TypeScript can support your code easily and you do not need to rewrite your existing code in TypeScript to benefit from it - you only need the definitions.

Part 3 will take a look at how TypeScript compares to Closure Tools.

October 04, 2012

TypeScript (by Microsoft)

The last few days the news was all over the Internet: Microsoft has released a preview of a new superset of JavaScript.

I decided that I should give it a try, especially after reading materials on the internal situation in Microsoft vs. the same in Google. It is no secret that I have not been using Microsoft technologies for 12-13 years. What good comes out of Redmond, right? Or at least that is the mantra most Linux and OSX users swear by.

Well, it turns out Microsoft have learned, and they are indeed able to produce high quality. I have talked previously about the TV interface and the fact that it uses no library at all, only what is available in webkit. Well, it turns out it works without any changes in Internet Explorer 10! So yeah, they have learned!

Now about the new language.

The last few months I have been working extensively with closure tools and I can honestly say that while it is a very interesting, big and well-designed product, it is a subset of the original language in so many ways that it is like having to learn to think in javascript anew. It dictates memory-saving rules and design patterns, but on top of that it dictates constructs that only a very small minority of JavaScript developers would appreciate. Its pluses and minuses are not the topic here; however, I will use it as a comparison for TypeScript.

The history, aims and direction of the language you can get from the official web site, and slightly reinterpreted from all the news about it. Here I will talk about it more from a developer's point of view.

First of all, it provides the class, interface and module constructs on top of a type system. Nothing new here; however, the module system seems to confuse most developers, judging from the forums. The beauty I see in it is that it can output modules in both CommonJS and AMD format. This means that you can indeed write your logic once and make it work in node and the browser without any wrapping or boilerplate code! This is really, really great. I have read about similar capabilities, but most of the time it was converting from one format to the other, or involved writing boilerplate code.

The module pattern the TypeScript authors decided to use is a well-known and kind of "used and dismissed" one: self-invoking function wrapping. The reason it is not widely used, I believe, is that it leaves the developer managing dependencies manually. There was no sane way to manage a project with hundreds of files this way.

TypeScript uses something called references: a comment at the top of the source file that declares dependencies. The declaration has two uses: 1) it allows the editor (I will talk about editor support later) to know which files to use for completion/type checking, and 2) if "output all" mode is used, those files will be included in the build. The latter produces a single javascript file, with an optional source map, that has the sources arranged in the correct order. This way it is able to provide a build system of its own, without depending on an AMD loader.

Unfortunately most of the code out there is written using different patterns (which is why writing a large application and trying to compile a working system out of so many different pieces is a pain in the *ss). The interesting thing about TypeScript is that it is compatible with any library, because the resulting javascript is "pure" javascript. It does not expect anything from any other part of your application. You can write code in TypeScript that depends on backbone, for example. The type system can be 'boosted' much like the closure compiler type system: you just define an interface and then declare a variable that uses it. Let's see an example. In most modern browsers, Element instances have a classList property - an object with a length property and a few methods. It is not included in the lib.d.ts file (d.ts files are the way TypeScript can include type declarations only, without actual implementations, much like the extern files in closure).

interface classList {
  length: number;
  contains(classname: string): bool;
  add(classname: string): void;
  item(index: number): string;
  remove(classname: string): void;
  toggle(classname: string): void;
}

interface Element {
  classList: classList;
}
This will tell the editor and the compiler about a property named classList on all Element instances, which implements the classList interface. The interface is not really defined in the browsers the way I defined it here; in fact it is a DOMTokenList, and one could simply complete the Element interface with classList like this:

interface Element {
  classList: DOMTokenList;
}

but I am using this as an example of how one can define an interface and then attach it to an existing one. This is possible because the interfaces are 'open', as they call it in TypeScript, which means that one can add to them from wherever needed.

If you happen to override an interface property you will get a warning.

As this simplified example shows, one can easily add types for an existing code base and then include them. When this is done and a '*.d.ts' file is included in a source file, it will be considered for types, but as it does not provide an implementation the developer has to provide one manually (in the browser - by including the javascript for the external source, for example). Still, this covers a large portion of the web sites in the wild!

A few words about the editors. Syntax highlighting is provided for vim, emacs and Sublime Text 2. I have not tried it in vim and I have never used emacs; however, sublime is working (only on OSX - interestingly it is not working under Linux, and on the Sublime Text 2 forums there are other people with the same issue). There is already a gist providing a build config for ts files in Sublime Text 2. I have tested this one as well; the line/column error recognition is not working, but the build works okay. In the demo video presenting the language, a plugin for VisualStudio is used. If you happen to have Windows 7/8 and VisualStudio, the plugin is free of charge and works as demonstrated, meaning - it works great! Auto-completion, error detection and intellisense are all great. What you might not have noticed, however, is that the same is available in the playground on the website! Which means that all this intellisense, type and error checking etc. is running on javascript. Hopefully it will be available soon in other editors/IDEs as well.

I want to say something here as well: Cloud9 IDE was the first to congratulate itself on "support for the language". However, having syntax highlighting in the editor is not exactly language support. It would have been great if they had had the intellisense as well (as it was already demonstrated that it can run on javascript alone) and then gone out loud. Nevertheless, I hope they will catch up on this. They have built a great platform and I think this would be a great complement to the product.

Part two will cover commonJS and AMD modules.


May 04, 2012

The promise of Touch

I rarely write about TV series in this blog, because I find it kind of boring to praise (or pan) a production based solely on one's own experience with it.

However, as my TV time has diminished lately (and is threatening to approach zero) I had to choose wisely what to watch.

When the premiere of Touch aired several months before the show was actually scheduled to begin, the first episode was kind of unusual, exciting, intriguing, promising... Who was to guess right then and there that this same story would repeat over and over and over again with each and every episode, and nothing else would ever happen - not even a hint of an actual line of events moving forward as the season progresses.

There is of course room for speculation. Maybe this phone travelling around the world will have some meaning at the end, but I would not hold my breath (Lost, anyone?). What else? Maybe those Japanese girls will have something to do with the whole thing? Who knows... (yeah, Lost, Heroes and so on - you expect things to matter, but the script writers think otherwise).

The takeaway: don't expect too much of those 'sci-fi' flicks. Most of them are disappointing anyway.

April 28, 2012

The JavaScript ecosystem problem

I tend to follow the news in the JavaScript field because it is related to my work. What I notice lately is that every developer out there is making attempts to popularize his/her favorite framework and/or tool.

Here is my objection to all this: currently the ecosystem of javascript tools and libraries is fragmented. The pieces are so small and so incompatible that it is impossible to build a large web application without spending months of development time trying to make the pieces work with each other.

There is jQuery, which unfortunately is now much more than a tool to hide the browser quirks. It now provides pub/sub, promises and what not. There is also a whole lot of 'plug-ins' for jQuery. But those require different versions of jQuery. So you might need more than one jQuery.

Then there is 'modular' code, AMD. One example is the new dojo (1.7), which is a pretty big modular library, but it is not really compatible with other AMD loaders. There is RequireJS, which is also popular because of its 'optimizer', but it is only that - a loader - and does not provide any library for hiding browser quirks, so the developer is left with no library at all. jQuery is a popular choice in this case.

There is closure library and closure compiler; however, those are completely unusable with code that was not specifically designed to work with them.

There is prototype, there is backbone, there is underscore... every single one of those has its fan base, and none of them follows a code pattern that would allow a developer to just use the needed functionality without additional hassle.

This of course leads to forced decisions: the developer has to decide which technology to use (some 'ninjas' even go as far as creating their own set of tools/mini-libraries and using those, but this is time-consuming and error-prone, and such a library takes additional work to be supported and renewed). It does not matter which technology is selected; every popular one has its benefits. What is more worrisome is the following: how much time will it take for the chosen tool/library to become incompatible/obsolete?

Let's say that currently I find the closure tools to bring the best combination of browser-quirks hiding, performance and code optimization quality. Like every single tool out there, it imposes a set of rules I need to follow in order to make my code work with it. Notice that __ALL__ tools do this. For example, jQuery requires the whole library to be loaded before I load a plugin. It also requires a global variable to be exposed ($ or jQuery). Most libraries count on a global variable being available under a certain name. Closure is an exception here, but imposes many more rules. So let's say I create a super useful UI component based on closure library. If you want to use it you have two choices: a) use closure library/tools or b) rewrite the functionality with your favorite library. The latter is the more popular choice. One and the same widget/component is currently being rewritten over and over again with little augmentation to the actual functionality. LightWindow, anyone? It is available standalone, as a jquery plugin, a mootools plugin, a prototype plugin, probably as a closure library component as well. Can you imagine how many useful development hours have been wasted for this single component to be ported to so many tools/libraries?

Let's continue our thought experiment! I finish my component, I write tests for it and I am happy with it; it goes to production. Then one day, let's say 2 years later, I need it in another project. Unfortunately this new project does not use closure tools, because, let's say, the requirejs optimizer has been rewritten to use a new AST parser that is able to rewrite and minify javascript more effectively and faster than closure compiler and does not impose such strict rules on code style. And (I will imagine this one) there is a great browser abstraction library that is AMD/requirejs compatible, and maybe even IE is now standards-compliant and I don't really need that much abstraction. So I don't really need all the closure library code. I can only hope that someone has already rewritten my code as an AMD module. But because it was closure compiled, most probably no one did (here is a -1 for closure compiled code: you might think you are protecting your code, but what you are actually doing is making more work for yourself later, just because you did not allow someone else to do it for you), so I need to do it manually. How plausible is that scenario? Very much so.

The problem this small example demonstrates is the lack of stability in the defined interfaces. Yesterday everything was jquery. Today people tend to look in another direction; very soon something else might be the big thing. Today people are talking about large code bases and large web applications. AMD and Closure tools are the most talked about in that context, while I think extJS is also a great example.

In other languages the interfaces remain untouched much longer. In JavaScript, code written 2 years ago is now deprecated because the engines change and new optimizations are found... code written 10 years ago does not actually run anymore in the new browsers. I have a large code base of a windowing system in the browser (similar to extJS) at hand, and honestly I do not know what to do with it...

What should we do?

I don't know. My best bet right now would be AMD. But as we all know, there should be modules in the next version of the javascript language. Which means that the code will still be unusable a few years from now. Or it will require legacy tooling, which is even worse than rewriting it. Imagine a situation where the quirks are actually in your code instead of in the host environment, and you need shims for the code itself to assure its proper working.

I think this was on the Dart team's mind when the new language was born. And yet I am not sure that it is the answer. Maybe time will tell.

But in the meantime I think developers should work toward more compatible and replaceable code. Code where you do not require a module, but instead you require a functionality. Or an interface. Let's say I want to use the query selector interface for the DOM. Most modern browsers support it natively, but some that are still popular, and that a significant percentage of people use, do not. In that case the current solution is to use a library that has a selector engine implementation. jQuery has one, mootools also has one, dojo has its own too. Unfortunately no two of those provide the same interface. Mootools separated its selector engine so it can be used standalone; closure library uses dojo's implementation under a namespace - but these are not hot-swappable. Why can I not just require the functionality? A robust solution would be something like this: the developer requires the functionality, and the tool, based on the environment it runs in, provides either a complete selector engine (like Slick for example) or a thin wrapper around the native implementation, should the host support it. This is possible in the current state of affairs, but what about all the differences in the browsers?
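To make the idea concrete, a minimal sketch in AMD terms (the 'slick' module id is hypothetical; its search method is based on Slick's documented API, which I have not wired up here):

// selector.js - expose a querying *functionality*, not a library.
define(['slick'], function(slick) {
  // Prefer the native implementation when the host provides it.
  if (document.querySelectorAll) {
    return function(selector, root) {
      return (root || document).querySelectorAll(selector);
    };
  }
  // Otherwise fall back to a complete selector engine.
  return function(selector, root) {
    return slick.search(root || document, selector);
  };
});

A smarter loader plugin could go further and avoid even fetching the fallback engine on capable browsers.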

This problem causes lots of time to be spent evaluating the endless possible solutions to one and the same problem. Which I find insane! And that is why I think I will have to move on to another field. Because this one, web development, has seen too much talking and thinking and not so much action...

April 08, 2012

One more thing on javascript minification

There seems to be a very troubling misunderstanding that arises when one goes hunting for compression technology for javascript: most comparisons are made on two false assumptions.
  1. That Closure compiler should be used in simple mode.
  2. That code size should be measured using either libraries not written in the style required for advanced optimization, or one's own application code.
Here is why those two are fallacies!

I will start with number two - unused code from libraries and utilities.
In advanced mode the compiler does something no other compression tool out there is able to do (including uglifyjs - as confirmed by Mihai Bazon, this is not even in their scope): it is able to determine the types as if at run-time and thus strip code paths that are never utilized.

Most code is written to be application agnostic and thus contains lots and lots of methods, functions and variables that are never used in a given application. All those contribute to slow execution because they a) need to be parsed and then b) evaluated.

Here is some example code:
Car = function(color) {
  this.color_ = color;
  this.mileage_ = 0;
};

Car.prototype.drive = function( direction ) {
  this.mileage_ ++;
};

var n = new Car('black');
console.log(n.color_);
UglifyJS currently is completely unable to identify the fact that, should this be considered the whole project code, the drive method and the mileage_ property are never used. Even more, when I posted this on the discussion group it was pointed out that, considering the following additional code:
function driveCar(c) { c.drive() }
uglify is not able to determine the type of the argument of the driveCar function; thus it cannot know whether the drive method is used (or unused, in case c is not actually an instance of Car), and so it cannot decide whether the drive method should be removed or not.

It was also pointed out to me that this kind of code modification is outside the scope of the project. I can understand and accept that. But consider how much one could save from a DOM access library if all ajax-related code were stripped when ajax is not required, or if promise-related or pub/sub-related code could be removed from the resulting build when unused. I would argue that while this might not seem like a lot, getting a build that contains only the instructions you actually use is much better than one containing all possible instructions from all included pieces of code.
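For illustration, roughly what advanced mode might leave of the Car example above (a sketch - exact output depends on compiler version and flags):

// Hypothetical advanced-mode result: drive() and mileage_ are stripped
// as dead code, color_ is renamed, and the constructor may be inlined
// entirely, leaving little more than:
console.log("black");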

Now for number one: Closure is compared to UglifyJS in simple mode.
Well, of course it is slower - it is written in java! And of course closure cannot compile all your libraries; it is written to compile applications, not libraries for third-party use (however, at google they use it for that too!). Using the compiler in advanced mode will most often break your code; this is why there are code style rules to be followed if the compiler is to be used in that mode. The gains are significant, but the friction is so high that most developers just do not bother. For one, you cannot just drop any library into your project path and expect it to work. This is a big show stopper for most developers. Then you also need to write your code in google's funny way of namespaces. You are forced to use only a small subset of JS patterns (no matter how memory- and efficiency-optimized those are - limits are limits).

So next time you evaluate the two options, please take note of the difference between minification and 'compilation'. While it is not a real compilation, the code changes made are much greater than method and property renaming.


April 07, 2012

JavaScript optimizations

I have been working for the last year and a half on large in-browser applications, and the last few months I have been looking into automated methods of code optimization. The two most significant options out there currently are Closure compiler and Uglify-js.

I have used both and the following text is about my experience.

I will start with the few pre-conditions: the code is modularized, i.e. different components are already written as modules in separate files, and not all components need to be loaded for the application to run. The application should be able to load additional dependencies/components in reaction to user actions or other events. The code should be minified if possible and concatenated into a small number of files - the lower the number of requests, the better.

Starting with a code base of 1.8MB, the task at hand was to squeeze it into something that can be loaded over the Internet on a low-performance STB device. The application is the main device interface, so it should indicate activity in the UI asap.

The code was initially developed with requirejs, so making a build of it with the node requirejs module was not very difficult. The code needed some tweaking, but with one day's work the whole project compiled to 279KB. The resulting javascript file is a single one (i.e. a monolithic build) and once downloaded was able to start the UI in 2.79 seconds (as opposed to 9 seconds uncompiled, mostly due to more file loading). This however, combined with the network lag of downloading 280KB, resulted in a black screen for almost 6 seconds and was not fast enough for home entertainment equipment. What we needed was a 'load indication' module that loads first and then indicates the load progress for the next few seconds.

Next we decided to try the closure compiler. The compiler however forces the developer to use a limited set of javascript patterns, and it took a few days to strip the incompatible code or replace it with code that is. Some modules were not ported, for example the settings module. We used closure-script to ease the development cycle. The result was impressive. The compiler is able to inline a lot of the code (for example property getters/setters are inlined, which lowers the scope creation load) and the tool we used is able to include the needed files per module, with modules defined via namespaces. One caveat is that all modules need additional initialization code, which we had to write on top of the logic code we already had. Another caveat (or one can think of it as an advantage) is the renaming of method names, which means that you get a much smaller final build but you lose the names. The structure of your code might change beyond recognition, but the payoff is often worth it. However, the additional code that needs to be written, and the changes required if your code base was written without the compiler in mind, can make the process long and tedious.

Finally we decided to use the requirejs whole-project build capabilities. Once again it required some getting used to and a bit of testing, but we managed to split the monolithic build into 4 modules. While testing the closure compiler we really liked the defines (i.e. variables that can be set at compile time, allowing the compiler to strip code that is branched on the value of those defines) and we wanted to use them in the project. It was a nice surprise that uglify-js supports defines as well. Both uglify and closure compiler require special syntax for this to work, but the overhead is very small compared to the benefits. We use it now to strip the debug modules and debug statements from the build. As requirejs works in node, we were able to produce, in less than 15 minutes, a small development server that can run the build process based on the query string submitted with the index request. This frees us from the java virtual machine (as opposed to the closure tools). Uglify also works its magic faster than closure compiler. Uglify however does not do function inlining for methods (getters/setters), nor does it rewrite method names (as this is not a safe transformation in the context of non-restrictive javascript style); it also does not do namespace squashing, which means you need to make some extra effort to ensure your code does not use excessive scope creation/nesting.
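For reference, a sketch of the define pattern we mean (the closure annotation is the documented one; the uglify flag spelling is from memory, so double-check it against your version):

/** @define {boolean} */
var DEBUG = true;

if (DEBUG) {
  // This branch is stripped from the release build when DEBUG
  // is defined to false at compile time.
  console.log('debug build');
}

// closure: java -jar compiler.jar --define='DEBUG=false' ...
// uglify-js: uglifyjs --define DEBUG=false ...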

The final thoughts on this project are as follows:
  1. You need to decide first which compiler you will use, as both require a different code style to work properly.
  2. You need to go an extra step to make sure the code is structured suitably for modularization in both compilers.
  3. Do not forget that it is simply a tool, there are more than one ways to get to the destination, do not get obsessed!
As a last point, I would suggest closure tools for projects that want to work across all browsers and have the latest abstractions (for example, they already have an abstraction for the transitionend event, as it differs between firefox and chrome) and where you do not want to collect the abstractions yourself. Closure library can save you a lot of time there.

On the other hand if you are targeting only newer browsers you can go with the faster and easier to use uglify-js/requirejs.

February 13, 2012

Tests running in both browser and nodejs

Following my post about developing on cloud9 and running your tests in both node.js and the browser, this post will show how I had to make some sacrifices to allow proper test construction and support for assert testing that works in both node and the browser.

First - the screenshots!

Tests run in browsers


Same tests run in node

The 'test runner' (debug name designated 'AllTest' in the screenshot) is exactly the same file (see the post mentioned at the beginning of this one); the only difference is the run script: we need to use run-in-node.js and run-in-browser.js respectively, which is pretty easy to maintain.

I spent a good part of Sunday trying to figure out a way to use a test framework that works exactly the same in both the browser and node.js and on top of that works just fine with requirejs (which I use in the browser, and on the server with amdefine and rjs). I tried several, but I either couldn't make them work with requireJS or couldn't run the same tests in both environments. The closest thing to what I imagined a test suite should be was Vows. Unfortunately it runs only in node, and the code depends on process, file and other objects not found among the host objects in the browser.

Nevertheless, the way one structures the tests in Vows was what I was looking for. The only thing I could do in this case was 'take it and make one yourself'. Finding an assert library that works in both node and the browser was easy. Next I created a simplified version of Vows. It is _NOT_ API compatible and it lacks a large portion of the features offered in Vows (if you are developing for node.js I suggest using Vows - really!), like nested topics for example.

However, for my needs it was enough: it supports sync and async topics (thanks, Vows, for the idea), the topic can be a function or any object, and the tests should return their result as a boolean.

A simple asynchronous test looks like this:

    vow.addTests({
        topic: function() {
          var callback = this.callback;
            next(function () {
                callback(null);
            });
        },
        'test async assert': function( shouldBeNull ) {
            return assert.isNull( shouldBeNull );
        },
        'test async assert failing': function( willBeNull ) {
            return assert.isTrue( willBeNull );
        }
    });


All topics that are functions are run and the tests are invoked asynchronously. I have also implemented the same thing in a sync manner and it was much slower, so maybe the people who dislike the 'callback spaghetti node code' are not right after all?

An example of a sync test (i.e. tests invoked directly with the topic result):
vow.addTests({
    topic: (function() {return null;})(),
    'sync test': function(shouldBeNull) {
        return assert.isNull( shouldBeNull );
    }
});
If you are looking for a simple test-running suite that works in both the browser and nodejs, take a look at it. I am sure it leaves many improvements to be desired, but I built it mostly for my own use case, and it covers it.

The code is part of the tablet branch of the 'Tornado UI' project and can be found on GitHub.

Happy hacking.

February 09, 2012

ACTA and the Bulgarians

In Bulgaria everything is upside down. Wherever you look, nothing is ever in its place and nothing is ever right. In American movies they ask "is everything all right?"; in our conditions we ask "is there anything that is all right?".

So it goes with the protests against ACTA, too. Some people are spreading, from the tribune of their more or less popular blog (and probably many others - I simply stopped reading Bulgarian blogs because I got tired of reading one stupidity after another; this one I kept because it is written from abroad), the idea of opposing ACTA by posting a picture on your Facebook.

Now, it is true I have no Facebook; when I was little I had no imaginary friend, and I do not need 700 of them now. But that is not the point. The point is that it is still not clear how a picture in your profile of you donating blood actually (as opposed to imaginarily) influences the ratification (or not) of ACTA. I think it is clear that our rulers do not read YOUR Facebook.

One of the "intelligent" comments was (I paraphrase): "What is not clear - you are showing that sharing is not a bad thing."

Right. The gentleman in question surely also shares his wife with the needy. Probably so, since he is so much 'for' sharing. But even if he personally shares his other half with whoever needs her, or whomever he feels close to and worthy of the sharing, it is not clear how that influences our authorities, who in the end make the decisions instead of us. Because, by the way, even if he does share his wife, under the legal framework that is adultery. Never mind that for him personally sharing is a good thing. But let us leave this specimen aside.

On to the "organized" protests.

Given the climate in our poor republic (daytime temperatures of -10 and below), by my rough estimate barely 10% of those who declared their intent will show up at the appointed places. The rest will protest from the comfort of their heated home or office, eyes glued to the monitor, trying to feel present at the scene by reading the odd "tweet". The rest of us will feel present at the scene only if some national TV channel has no disaster-stricken municipality to show us and takes up a front post among the protesting/freezing.

In the end, I do not see what the protests can accomplish specifically in our country anyway. What actually puzzles me about our country is how it has so much 'backside'. How is it possible to have bent over to so many sides at once - to the mobsters, to Russia, to the USA, to the EU, to who knows who else.

To the protesters I will say: instead of wasting your time protesting in Bulgaria, especially outside the capital, get up and go to some European country where the backside of the state apparatus is not pointed entirely at the business interests of the USA but at least partially - just so, by one degree of arc - at its own subjects as well. Protest there; maybe there the ratification will fail, and our rulers will then sense, from the push behind, whether they should or should not, and what they must do.

That is all.

February 05, 2012

Developing frontend with node js and cloud9

Not so long ago I was describing how disgusted I was with the whole nodejs and cloud9 story: how badly it worked, how it does not support 'MY' use case and so on.

Well, as with most things, I tend to re-evaluate my position from time to time, just to be sure, and sometimes to check if things are progressing.

The last few days I have been starting a spin-off of my Tornado UI project (an Open Source interface for Tornado IPTV devices, read more). The new code is loosely based on the existing one, re-using some of the components, but is still a chance for a fresh start and an evaluation of new options in code organization and structuring.

Part of the code for the previous project was ported to closure-library to allow it to compile cleanly with closure compiler. I can gladly report that it was a success: all tests of the converted code passed, and it is available online for anyone to use in a closure-based project. It covers the communication layer for Tornado completely, but does not provide any UI. It assumes the closure developer would use closure techniques for the UI as well, and I doubt my ability to fully utilize the closure paradigm for UI components.

The effort to use closure library was enlightening in a way, allowing me to free myself from Crockford's mantra of privacy in the code. Meanwhile I have read tons of posts and articles about patterns, speed and reuse, as well as package management, what Harmony will bring, what we should do until then, and so on and so on - basically re-educating myself on the matter of serious JavaScript.

One big problem for me was the fact that I was not actually using any of the power of the 'abstraction' the libraries provide (i.e. jQuery, MooTools, closure and so on); when developing for a known environment one does not need abstractions and polyfills (well, mostly does not - if you have seen the code for TUI, you'd notice that bind is not part of the Function prototype). They just bloat the code and rarely enforce useful patterns (unlike closure, mootools and jquery allow you to code in any style possible, even with no style).

Another problem was that I wanted to be able to test the code on a test server, or at least in a console via a CLI, instead of loading it in the browser every time. At the time I started the project there was no clean way to run the same code without any changes both on node and in the browser. Or at least I could not find one.

This week I started the new project. It aims to bring new features to the Tornado line of devices, at least two of which will be co-developed with the in-house development team at the company. It will also be able to work on any android device (2.2+). This time I was determined to use some CLI testing for the code, to speed up development and bug catching.

I went looking for a suitable tool again. First I was thinking about closure library and found nclosure. Unfortunately it does not seem to have been updated lately and does not work with node 0.6+. Then I tried JSClass; it requires a shift in package management, but that is manageable. Unfortunately the dependency tracking and loading did not work as I expected. Maybe it is me who cannot figure it out (even though I figured out closure and requirejs), but after two attempts I gave up.

The last one was amdefine. It requires node 0.6+, but apart from that it works very well and suits me perfectly. A very good explanation of how to use it can be found on its github page, linked above.

Lastly, I wanted to try cloud9 again. I downloaded the latest source from github and it works very nicely with node 0.4. Unpleasantly for me, it does not work with node 0.6, while amdefine requires it, which makes it impossible to use nodejs as a test runner with cloud9 locally. The only remaining possibility was the hosted version of cloud9. Recently they announced support for running your code on node 0.4 or node 0.6, configurable from the IDE. A month ago they also added an npm command line interface, so I figured I should try it.

A working setup follows:

On c9.io open your project and add requirejs and amdefine:

npm install requirejs
npm install amdefine

In your project, configure the test runners, one for node and one for browser. Node:

var requirejs = require('requirejs');

requirejs.config({
    baseUrl: __dirname + '/javascript',
    nodeRequire: require
});

//Use eyes-console in node js for prettier output

requirejs(['tests', 'debug/eyes-console'], function(tests, SimpleConsole) {
//    Enable logging in node
    (new SimpleConsole()).enable();
    tests.run();
});

Browser:

require.config({
    baseUrl: "tablet/javascript"
});
require([
    'tests',
    'debug/simple-console'
], function(tests, SimpleConsole) {
    (new SimpleConsole()).enable();
    tests.run();
});

Note: the paths are relative and fit my project; they may vary for yours.

The browser runner requires an html file as well; the node test runner, however, can be run as easily as starting a node process with the file. Remember to set the run process to use node 0.6.
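That is, assuming the runner file is saved as run-in-node.js (the name used in the follow-up post about the tests):

node run-in-node.js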

Conclusion: the Cloud9 IDE is a viable option for developing and testing your code, even if you are developing only a front-end application. However, not all approaches to code organization work; currently I have managed to make only AMD with requireJS work. I expect the future to bring more variety in the options.

As a side note, Eclipse Orion has a much better git UI (a real UI, not only a cli); however, it is still in very early development and, on top of that, it does not support running a node instance, so you cannot really test your JS code in it.