PromisePipe: basics

Published:

I really like building business logic as a chain, and I like the idea of building logic out of reusable functions that do simple transformations. All logic can be described as a chain of data transformations. Even an Express backend with its middleware chain is chained logic that transforms a request into a response.

PromisePipe is a constructor of reusable Promise chains. Additionally, it lets you customize its API, gives you better debugging, and makes cross-process logic chains possible.

You can install PromisePipe from npm:

$ npm install promise-pipe

In your JavaScript file you would start with:

var PromisePipe = require("promise-pipe")();
var action = PromisePipe().then(addOne);

action(10).then(actionDone); //11
action(20).then(actionDone); //21

function actionDone(result){
  console.log(result);
}
function addOne(data){
  return data + 1;
}

The action is a constructor for the logic: a function that returns a Promise. You can add chains to modify its logic.

Let’s create an action that reverses an array. The reverse chain looks the same as it would with native Promises:

function reverse(data){
  if(!Array.isArray(data)) return Promise.reject(new Error('Data is not an Array'));
  return data.reverse();
}
//or if async
function reverse(data){
  return new Promise(function(resolve, reject){
    if(!Array.isArray(data)) return reject(new Error('Data is not an Array'));
    resolve(data.reverse());
  });
}

The action will be modified like this:

var action = PromisePipe().then(reverse).catch(handleError);

action([1,2,3]).then(actionDone);
//under the hood same as Promise.resolve([1,2,3]).then(reverse).catch(handleError).then(actionDone)

function handleError(err){
  console.error("Error: " + err);
  return Promise.reject(err);
}

Here handleError will catch the error whenever the argument is not an array.

PromisePipe has a small API, but you can extend it with your own custom methods.

For example, we can extend the PromisePipe API with a reverse method like this:

PromisePipe.use('reverse', function reverse(data){
  if(!Array.isArray(data)) return Promise.reject(new Error('Data is not an Array'));
  return data.reverse();
});

The action will then look like this:

var action = PromisePipe().reverse().catch(handleError);

A PromisePipe extension function takes (data, context) as its first two arguments, just like any chain function. All other arguments become arguments of the API method.
For example, here is a .map method, similar to the one in reactive streams, that accepts a mapFunction as an argument:

PromisePipe.use('map', function map(data, context, mapFunction){
  if(!Array.isArray(data)) data = [data];
  return data.map(mapFunction);
});

Our action becomes:

var action = PromisePipe()
  .map(item => item + 1)
  .reverse()
  .catch(handleError);

And the example will look like:

var PromisePipe = require("promise-pipe")();

PromisePipe.use('reverse', function reverse(data){
  if(!Array.isArray(data)) return Promise.reject(new Error('Data is not an Array'));
  return data.reverse();
});

PromisePipe.use('map', function map(data, context, mapFunction){
  if(!Array.isArray(data)) data = [data];
  return data.map(mapFunction);
});

var action = PromisePipe()
  .map(item => item + 1)
  .reverse()
  .catch(handleError);

action([1,2,3]).then(actionDone);

action([3,4,5]).then(actionDone);

function actionDone(result){
  console.log(result);
}
function handleError(err){
  console.error("Error: " + err);
  return Promise.reject(err);
}

And you can play around with this PromisePipe example online.


PromisePipe: debugging

Published:

While Promises look nice, there is one thing I have always hated about them: they are awful for debugging, and they fail silently if you have a typo in your code. You need to catch the error, otherwise you will never know what happened. So in PromisePipe I decided to fix that.

In debug mode, PromisePipe shows unhandled exceptions in the console:

Failed inside test
ReferenceError: ff is not defined
Object.test@/Users/edjafarov/work/PromisePipe/tests/PromisePipe.error.spec.js:28:16
newArgFunc@/Users/edjafarov/work/PromisePipe/src/PromisePipe.js:534:26
{anonymous}() ($)$$internal$$tryCatch@/Users/edjafarov/work/PromisePipe/node_modules/es6-promise/dist/es6-promise.js:304:16
{anonymous}() ($)$$internal$$invokeCallback@/Users/edjafarov/work/PromisePipe/node_modules/es6-promise/dist/es6-promise.js:316:17
{anonymous}()@/Users/edjafarov/work/PromisePipe/node_modules/es6-promise/dist/es6-promise.js:874:13
{anonymous}() ($)$asap$$flush@/Users/edjafarov/work/PromisePipe/node_modules/es6-promise/dist/es6-promise.js:111:9
process._tickCallback@node.js:442:13

It notifies you that things went wrong, though that alone is not very useful for real-life debugging.

What really helps is that PromisePipe wraps each chain function in a wrapper that records the data/context values the chain is called with. You can track how the data is transformed by each chain, and it works out of the box. Just set debug mode for PromisePipe like this:

PromisePipe.setMode("DEBUG")

In Chrome it looks like this:
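The recording idea itself can be sketched in a few lines. This is not PromisePipe’s actual implementation, just a minimal model of a wrapper that records the input of each chain (wrapChain, addOne and double are made up for illustration):

```javascript
// each chain function is wrapped so the data it receives is recorded,
// letting you inspect how every step transformed the data
var trace = [];

function wrapChain(name, fn) {
  return function (data, context) {
    trace.push({ chain: name, input: data }); // record what this chain was called with
    return fn(data, context);
  };
}

function addOne(data) { return data + 1; }
function double(data) { return data * 2; }

// run the wrapped chains in order, the way a pipe would
var chains = [wrapChain('addOne', addOne), wrapChain('double', double)];
var result = chains.reduce(function (data, chain) { return chain(data, {}); }, 1);

console.log(result); // 4
console.log(trace);  // [{ chain: 'addOne', input: 1 }, { chain: 'double', input: 2 }]
```

After a run, trace tells you exactly what each chain saw, which is what the DEBUG mode surfaces for you.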

What’s next

  • We have a decomposition of all PromisePipe calls and transformations during execution. That allows us to record a log of what happened while QA reproduces a bug, and lets a developer replay the log later.

  • We can also record a session and use it to auto-generate integration tests.

  • And we can watch the performance of each chain individually.


PromisePipe: cross process homogenous Promise chains

Published:

I used to be a backend developer, and even earlier I was a frontend developer. I guess I was a good backend dev for my frontend colleagues at that time, since I was thinking about APIs as a consumer of those APIs.

I was never that lucky as a frontend dev. Building an API is hard. It usually takes a lot of time and WTFs to reach a shared vision of how communication with the server should work. We start with a REST API, then everyone turns out to have their own vision of what REST is. Each modification of the API is a pain, and it always takes a lot of time and discussion to make a change.

Today, as a frontend developer, I usually describe resource calls with Promises. I use Promises because I find their chaining API really nice for describing potentially asynchronous business logic.

It might be that any business logic can be represented as a chain of data transformations. Even saving data in a DB is a transformation of data into an item ID.

Let’s check the following code for a simple piece of frontend business logic built with Promises:

Promise.resolve(item)
  .then(validateItem)
  .then(postItem)
  .then(addItem)
  .catch(handleError)

Here postItem returns a Promise that resolves when the server replies.

On the server side, we would probably have some Express route:

app.post('/api/items',
  validateItemMiddleware,
  saveItemInDBMiddleware,
  returnItemMiddleware)

If Promises had appeared earlier, I think Express would have used them instead of middleware.
Built with Promise chains, the server would probably look something like:

app.post('/api/items')
  .then(validateItem)
  .then(saveItemInDB)
  .then(returnItem)

Since any Promise can be constructed as a composition of Promises, let’s imagine for a second that we do not have a frontend and a backend — our code would look like:

var postItem = function(data){
  return Promise.resolve(data)
    .then(validateItem)
    .then(saveItemInDB)
    .then(returnItem);
}

Promise.resolve(item)
  .then(validateItem)
  .then(postItem)
  .then(addItem)
  .catch(handleError)

Obviously, in that case we would need no API at all, and we wouldn’t waste time deciding how to name the URL and which method to use, right?

And we can go even deeper:

Promise.resolve(item)
  .then(validateItem)
  .then(validateItemServer)
  .then(saveItemInDB)
  .then(addItem)
  .catch(handleError)

Of course, you can’t do this with plain Promises — but with PromisePipe you can.

PromisePipe is a builder for reusable Promise chains.
It has more control over the execution of chains and can decide how to execute each of them.

PromisePipe is a singleton. You build chains of business logic and run the code on both the server and the client. Chains marked for the server will be executed on the server only, and chains marked for the client will be executed in the client only. You need to implement methods in PromisePipe that pass messages between the client and the server, and it is up to you which transport to use.


So to try it, you need to write some boilerplate code that passes messages between the server and the client (see the simple example). So far I have written examples with socket.io as the transport, but there is no problem using plain HTTP requests or any other protocol that can pass messages around.


With PromisePipe our code example would look like:

var doOnServer = PromisePipe.in('server')
var addItemAction = PromisePipe()
  .then(validateItem)
  .then(doOnServer(validateItemServer))
  .then(doOnServer(saveItemInDB))
  .then(addItem)
  .catch(handleError);
addItemAction(item) // will pass complete chain

When execution reaches the validateItemServer chain, PromisePipe passes execution to the server with an execution message and proceeds there. validateItemServer and saveItemInDB are executed on the server side, and then the message is passed back to the client, where execution proceeds starting with addItem.
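The handoff can be modeled in a few lines. This is only a schematic sketch, not PromisePipe’s real message format or transport: the pipe is treated as an ordered list of chains tagged with an environment, and "sending" an execution message is simulated with a plain function call standing in for socket.io/HTTP:

```javascript
// a simplified, hypothetical model of the client/server handoff
var chains = [
  { env: 'client', fn: function (d) { return d.trim(); } },        // validateItem
  { env: 'server', fn: function (d) { return d.toUpperCase(); } }, // validateItemServer
  { env: 'server', fn: function (d) { return d + ':saved'; } },    // saveItemInDB
  { env: 'client', fn: function (d) { return '[' + d + ']'; } }    // addItem
];

function run(env, data, index, send) {
  while (index < chains.length) {
    if (chains[index].env !== env) {
      // this chain belongs to the other side: hand over an execution message
      return send({ data: data, next: index });
    }
    data = chains[index].fn(data);
    index++;
  }
  return data;
}

// client runs until it hits a server chain, "sends" a message; the server
// runs its chains and sends the result back for the client to finish
var result = run('client', '  item  ', 0, function toServer(msg) {
  return run('server', msg.data, msg.next, function toClient(msg2) {
    return run('client', msg2.data, msg2.next, null);
  });
});

console.log(result); // "[ITEM:saved]"
```

The real library does the same walk asynchronously over whatever transport you wire up, carrying the context along with the data.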

PromisePipe lets you extend its API with custom methods, so you can build an expressive DSL that describes your business logic:

var doOnServer = PromisePipe.in('server')
var addItemAction = PromisePipe()
  .validate('item')
  .validateServer('item')
  .db.save.Item()
  .then(addItem)
  .catch(handleError);
addItemAction(item) // will pass complete chain

For example, here is a MongoDB API for PromisePipe. And here is an example of a todo app (live) which uses this “mongo-pipe-api”. validateServer and the “mongo-pipe-api” methods are marked as server-side, so they are executed on the server only.

PromisePipe lets you build business logic out of simple transformation chains which can run in different processes, while the logic itself stays simple and homogeneous.

With PromisePipes you get:

  • simplicity

    Build up your logic in a functional manner with simple transformations. Forget about process-to-process communication and save the time for building business logic.

  • testability

    Each chain can be tested independently. It is also easy to assemble pieces of logic together and test parts in isolation.

  • isomorphism

    PromisePipe was created to work in a cross-process environment. You get isomorphic business logic out of the box if you build chains with isomorphism in mind. So, as long as you do not use any environment-specific APIs in your logic, you can run the pipe in a single process or expect the chain to work across multiple environments like browser and server.

  • scalability

    Each chain could run in a separate process without much effort. That doesn’t mean you get scalability out of the box, but you have a nice way to distribute your load over multiple processes.

  • frontend developers can build complete business logic

    Chains are easy to compose. The main idea is to let frontend developers use simple building blocks to build backend functionality, encapsulating complexity inside meaningful business logic chains.

homogeneous code

I believe PromisePipe will help push microservice architectures forward. Homogeneous business logic decouples the logic from process-to-process communication, which removes the difference between code running in a monolith and in a microservice architecture.


Responsibility is the best Motivation

Published:

You probably also hate the M word. So let’s put it this way: we can’t motivate people to build our product, so let’s develop those people’s attachment to our product, and they will take care of it as much as we do.

We spend one third of our lives doing our jobs. It is natural to care about what you are doing, and it is really important to allow people to care about the whole product. People tend to take some area of ownership where they have the best expertise and make it their baby. But in doing so, they stop caring about the whole product, which eventually decreases motivation.

I believe that proper distribution of responsibility is the key to successfully attaching a person to a product. To make it happen:

  • responsibility should be shareable

    It is hard to believe, but giving out responsibility is really hard. If you want people to care, you have to trust them.

  • taking responsibility should be safe

    One who takes responsibility should be supported by all the other team members. The person should feel that they won’t be blamed if they fail, and that the whole team will step up to help them fix any problems.

  • taking responsibility should be broad

    Any member should be able to contribute to any part of the business, including the business part itself. Areas of responsibility should change over time: one should try many roles and take responsibility for multiple aspects of the project.

  • risks should be minimized with routines and processes (one should feel that they work for them)

    Taking responsibility means taking a risk, and that is really stressful. The process should work to minimize risks and stress over time. Those routines should be meaningful and simple enough to perform, and ideally, over time, many of them should be automated.

And the most important thing: failures should be shared. Passing through hard times together is what builds true attachment and reliance among team members.


The Holy Grail: promise pipes

Published:

The Holy Grail series of posts is about a React-based framework (inspired by the Flux architecture) for building isomorphic applications. The framework is like a puzzle consisting of multiple segments that should play well together, though each segment can also be used separately.

PromisePipes are reusable and customizable Promise chains. They let you build your own Promise-based business logic DSL like:

var doAction = PromisePipe()
  .add(5)
  .multiply(2)
  .pow(10)
  .doSomeBusinessLogic(withArgs)
  .save('/api/result')
  .emit('result:saved');

doAction(1); //((1+5)*2)^10 -> POST /api/result -> 'result:saved'
doAction(2); //((2+5)*2)^10 -> POST /api/result -> 'result:saved'

SPA patterns

There are common patterns in frontend SPAs. Most app work is to:

  • get data from server.
  • modify data.
  • render data.
  • save data to server.

Usually, when the user surfs pages in your SPA, the app does a GET -> render flow. With PromisePipe that would look like:

PromisePipe()
  .get('/api/data')
  .render('view_name')

Here .get() returns a Promise that passes the response body to the next promise in the chain.
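What such a .get() step does can be modeled with plain Promises, using a stubbed fetchResource in place of a real HTTP client (all names here are hypothetical, for illustration only):

```javascript
// stand-in for a real HTTP client: resolves with a canned response body
function fetchResource(url) {
  var responses = { '/api/data': { items: [1, 2, 3] } };
  return Promise.resolve(responses[url]);
}

// a .get(url) chain step ignores the incoming data, fetches the URL,
// and resolves with the response body for the next chain to consume
function get(url) {
  return function (data, context) {
    return fetchResource(url);
  };
}

Promise.resolve(null)
  .then(get('/api/data'))
  .then(function (body) {
    console.log(body); // { items: [1, 2, 3] }
  });
```

A real implementation (see the isomorphic resources section below) also uses the context for headers and URL parameters.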

When the user submits a form, the app does (data) -> validate -> SAVE -> render:

PromisePipe()
  .validate(validationScheme) //validate incoming data, reject if failed
  .save('/api/data') //save data if validation successful
  .catch(handleErrors) //catch and handle validation or save error
  .render('view_name')

You can mix data from various sources with the PromisePipe:

var renderAuthorByBookId = PromisePipe()
  .get('/api/books/:bookId', {bookId:'id'})
  .map(function(data){
    return data.author;
  })
  .get('/api/authors/:authorId', {authorId:'id'})
  .render('view_name')

renderAuthorByBookId({id: 10})

You can play around with PromisePipe in this fiddle.

Form validation

PromisePipe was built to work in a single-direction dataflow architecture inspired by Flux. The PromisePipe role is to prepare (fetch/save) and manipulate data. The pipes are containers of pure business logic, while stores/models listen for events emitted in the pipe’s context and fill themselves with the data.

The form validation pattern is pretty neat. One common solution for validation is mixing the validation result into the data itself. I feel this is wrong: it makes the data dirty and the flow ambiguous. With PromisePipe I separate the data from the error object.

var context = new Emitter();

var saveEventItem = PromisePipe()
  .validate(validationScheme)
  .post('/api/events')
  .emit('events:add')
  .catchAndEmit('events:add:reject');

saveEventItem(formData, context);

On submit, the app calls the function, passing the form data as the first argument and the app’s context as the second.

The store can hook into the context events and look like:

function eventsStore(context){
  var form = {
    data: [],
    errors: {}
  };

  return new Emitter({
    init: function(){
      context.on('events:add', this.addEvent.bind(this));
      context.on('events:add:reject', this.updateErrors.bind(this));
    },
    get: function(){
      return form;
    },
    updateErrors: function(errors){
      form.errors = errors;
      this.emit('change');
    },
    addEvent: function(data){
      form.data.push(data);
      this.emit('change');
    }
  });
}

var eventsForm = eventsStore(context);

saveEventItem(formData, context);
// validate -> post -> emit(events:add) -> eventsForm.addEvent -> eventsForm.emit('change')

isomorphic resources

The .get(), .post(), .emit(), etc. methods are not part of the PromisePipe API, but you can extend the API yourself. That opens up the possibility of building the DSL on top of superagent, thus making it isomorphic.

var resource = require('superagent');

PromisePipe.use('get', function get(data, context, url, query){
  return new Promise(function(resolve, reject){
    var req = resource.get(prepareUrl.call(context, url));
    if(context.request && context.request.headers){
      req.set(context.request.headers);
    }

    if(typeof(query) == 'function') {
      req.query(query.call(context, data));
    } else if(typeof(query) == 'object'){
      req.query(query);
    }
    req.on('error', function(err){
      reject(err);
    });
    req.end(function(res){
      if(res.error) return reject(res.error);
      resolve(res.body);
    });
  });
});

PromisePipe

install

npm install promise-pipe

extend

You can extend PromisePipe API with additional methods. Thus you are able to build your own customized DSL.

var PromisePipe = require('promise-pipe')();

PromisePipe.use('log', function(data, context, name){
  if(name) {
    console.log(data[name]);
  } else {
    console.log(data);
  }
  return data;
})

var action = PromisePipe().log().log('foo');

action({foo:"baz", bar:"xyz"})
// {foo:"baz", bar:"xyz"} <- log()
// baz <- log('foo')

API

PromisePipe

PromisePipe.use(name, handler)

Lets you build your own customized DSL. handler is a function with the following arguments:

function handler(data, context, arg1, ..., argN){
  //you can return a Promise
  return data;
}
PromisePipe.use('custom', handler);

PromisePipe().custom(arg1, ..., argN)

Stream

A stream is a function that returns a Promise. The first argument is the data, the second is the context. While the data behaves the same way as in Promises, the context is passed through the whole chain of promises.

var stream = PromisePipe()
  .then(function(data, context){
    console.log(data, context);
    context.foo = "bar";
    return data + 1;
  }).then(function(data, context){
    console.log(data, context);
    context.xyz = "baz";
    return data + 1;
  }).then(function(data, context){
    console.log(data, context);
  })
stream(2, {});
//2 {}
//3 {foo:"bar"}
//4 {foo:"bar", xyz:"baz"}

stream:then

As with Promises, you can pass in two functions, for success and for failure.

var stream = PromisePipe()
  .then(function(data, context){
    return //Promise.resolve/reject
  }).then(success, fail)

stream:catch

The catch takes a single argument and behaves the same as a Promise catch.
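Since it mirrors plain Promise semantics, the standard behavior applies: a handled rejection turns back into a resolved chain, resuming with whatever the handler returns:

```javascript
// plain Promise behavior that the pipe's catch mirrors
var seen = [];

Promise.reject(new Error('boom'))
  .catch(function (err) {
    seen.push(err.message); // the rejection is handled here
    return 'recovered';     // the chain continues with this value
  })
  .then(function (value) {
    seen.push(value);
    console.log(seen); // ['boom', 'recovered']
  });
```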

stream:join

You can join PromisePipes if you like.

var stream = PromisePipe()
  .then(function(data, context){
    return data + 1;
  });
var stream2 = PromisePipe()
  .then(function(data, context){
    return data + 2;
  })
  .join(stream)
  .then(function(data){
    console.log(data);
  });

stream2(1) //4

proper node-webkit desktop notifications

Published:

Desktop notifications play an important role in the success of desktop apps, since they allow you to bring the user’s attention to something that is happening.

So far in node-webkit we have several solutions for desktop notifications.

1) HTML5 notifications

  • pros: it is an HTML5 standard, same as on the web.
  • cons: looks different in different OSes, low interaction capabilities, has bugs in node-webkit :)

2) NW Desktop Notifications

  • pros: customizable, the same across different OSes.
  • cons: no interaction, animations are ugly, the API is not a standard.

3) node-notifier

  • pros: you can do it from node, you can use the standard OS notification systems (win8 is coming soon!).
  • cons: looks different in different OSes, low interaction.

None of those solutions worked for me, so I had to create another one, based on the same idea as NW Desktop Notifications but implemented a little bit better.

node-webkit-desktop-notifications should become a drop-in replacement for HTML5 notifications. Ideally you just use it instead of the HTML5 Notification, with some more API around it. If the code is executed in a node-webkit context it does all kinds of rich notifications; otherwise it degrades to HTML5.

In addition to simple notifications, you get the ability to build complex interactions inside your notification: custom layouts, buttons, text fields or gestures.

how to build interactive notifications

The notification itself is a window, so it has its own context, HTML and CSS/JavaScript. Rich interactions are built on events: you emit events on the window object inside the notification window and catch them on the DesktopNotification instance inside your application. So you can build any kind of presentation and keep the interaction inside your app code.
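The event bridge can be sketched with a plain listener map. This is a simulation, not the library’s actual API: DesktopNotificationStub, its methods and the event name are all hypothetical stand-ins for the real bridge between the two node-webkit windows:

```javascript
// hypothetical sketch: the notification window emits named events,
// the DesktopNotification instance in the app catches them
function DesktopNotificationStub() {
  this.listeners = {};
}
DesktopNotificationStub.prototype.on = function (event, handler) {
  (this.listeners[event] = this.listeners[event] || []).push(handler);
};
// in the real library this would be triggered from inside the notification window
DesktopNotificationStub.prototype.emitFromNotification = function (event, payload) {
  (this.listeners[event] || []).forEach(function (h) { h(payload); });
};

var notif = new DesktopNotificationStub();
var clicked = null;
notif.on('reply:clicked', function (text) { clicked = text; });

// simulate the notification window emitting an interaction event
notif.emitFromNotification('reply:clicked', 'Hello back');
console.log(clicked); // "Hello back"
```

The presentation (HTML/CSS in the notification window) stays separate; your app only reacts to named events.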

use it

To use the lib in your app you need to take two files:

src/desktopNotification.js
src/desktopNotification.html

You need to place them in the same folder of your app. Load desktopNotification.js in your index.html to use DesktopNotification:

var notif = new DesktopNotification('Hello', {body: 'World'});
notif.show();

Check other ways to use DesktopNotification in the example.

try live

  • Fetch the repo
  • npm install
  • npm start
  • find an app for your OS in build/node-webkit-desktop-notification
  • play

semantic styling

Published:

Driving back from #OdessaJs with @listochkin, we discussed the future of web development and our perceptions of what makes sense. It was a priceless talk, and a lot of things were formulated and validated for me. One of the concepts that popped out is “semantic styling”.

Not long ago there was hype around semantic layout, which is a great concept about giving more sense to layout. But it looks like nobody noticed the next big revolution in styles. Twitter did it: Twitter Bootstrap formulated the smallest footprint of layout + styles for common design patterns.

So right now everybody agrees that a button can be a <button> as well as an <a> and it should look the same, and that a dropdown will look like:

<div class="dropdown">
  <button class="btn btn-default dropdown-toggle">Dropdown</button>
  <ul class="dropdown-menu">
    <li><a href="#1">Action</a></li>
    <li><a href="#2">Another action</a></li>
  </ul>
</div>

and nobody tries to invent something else. Bootstrap made a standard — what we can call “semantic styles”.

What is also great about Twitter Bootstrap is that it is built with Less, so you can use parts of it. The variables in Bootstrap are mainly colors, and you can override them. Thus your CSS (Less) code consists of two parts: the logic (the relations between HTML elements/classes that describe the footprint) and the presentation (what makes the design individual — variables that change your colors and paddings).

With TWBS we have a vocabulary of semantic styles that represents commonly used design patterns, and we can adjust them to a branded design style by overriding Less variables.

Unlike Bootstrap, though, I think this vocabulary should be managed in a different way.

Bootstrap is just a lot of files in the Twitter Bootstrap repo. Package managers, on the other hand, provide a better development experience and more ways to handle dependencies and structure the code. I prefer component style over Bower, so I would like — and I believe it is possible — to handle a vocabulary of semantic styles in the same manner. But instead of CommonJS’s require("modulename"), we would be able to (CommonLess — why not?) @import "buttons" without putting the complete path to the component.

So from a development perspective we would just need to install some “Less terms” as components (example: $ component install twbs/forms twbs/buttons twbs/grid). And then inside my main.less file:

//importing vocabulary
@import "forms";
@import "buttons";
@import "grid";

//overriding default variables to adjust to branded design.
@brand-success: #AAAAAA;
@brand-default: #BBBBBB;
Almost everything is ready — the CommonLess thing just isn’t there yet. So far we need to put the complete relative path for the Less/Sass components we use.


node-webkit autoupdate

Published:

Node-webkit allows you to build cross-platform desktop applications with node and JavaScript. But building a desktop application, unlike a pure online webapp, means you have no control over the code after the app is installed. Thus releases are painful, and bugs after the code is released are frustrating.

Right now desktop apps usually update themselves. Node-webkit does not have this functionality out of the box, which is reasonable, because such functionality would rely heavily on the specific implementation.

So I created webkit-updater. It relies on the assumption that your app is packaged with grunt-node-webkit-builder, compressed with grunt-contrib-compress, and that unzip.exe is bundled with the Windows package. You can find an example of packaging an app here.

webkit-updater works and is tested under Mac, Windows and Linux (32/64-bit).

how does it work

It gives you an API to:

  1. Check the manifest for the version.
  2. If the version differs from the local one, download the new package to a temp folder.
  3. Unpack the package in temp.
  4. Run the new version from temp and exit the current process.
  5. The new version in temp copies itself to the original folder.
  6. The new version runs itself from the original folder and exits the temp process.

You should build this logic yourself, though. As a reference you can use the example.

what are the plans

There are probably bugs. It needs to be stabilized and tested extensively in a real-world app.

It would also be great to have different rates of updates:

  • update assets only without page reload (..x)
  • update assets with page reload (.x.)
  • update assets and node-webkit engine - full cycle (x..)

There is a bug on newer versions of Linux; the updater should resolve things like that. There should also be some handling of preInstall and postInstall scripts.

You are welcome to use and commit.


framework vs microlib architecture

Published:

Just recently, it seems, I understood why holy wars between programmers happen. During the revolution I basically saw the same holy war between people who believe that their truth is the only truth. Why is it hard even for smart people from both sides to negotiate a shared vision? I believe the reason is that they have different values.

We often underestimate how important common values are for feeling comfortable and productive in a team. We often do not care about shared values when looking for new jobs; we care more about salaries. Agreed, we make the decision to accept an offer based on many factors, shared values among them, but we do that unconsciously.

The JavaScript community is quite inhomogeneous, so you can see all kinds of values there. So far I can distinguish two types of people: those who love classical OOP languages with a strict, stable structure and patterns, and those who love alternative languages that have no predetermined patterns and have ‘unexpected’ flexibilities. Talking with the first group, it seems they are afraid of chaos and are intolerant of any unpredictability. The second group is so bored with structures and prescribed solutions that they run away from enterprises like from hell. I know very few people who are OK with both.

So here I come to the framework vs microlib architecture discourse. Inside the JavaScript community itself we have a kind of holy war around this topic. Now I feel it doesn’t make sense to participate in this war, since the final decision is always based on our values, and that means the “common sense” we appeal to is quite individual.

PS: The main question in any job interview should be about values, always!


Angularjs is evil: overengineering hell

Published:

This, I hope, is the last post about how Angular will bring you to a world of pain.

Recently I stumbled upon this Angular-hating article that I am totally aligned with on an emotional level. Angular is clearly overengineered. And, whatever some may say, it does not give you exceptional scalability.

I will compare Angular to React. I know they are not directly comparable, but I believe that a React-based architecture beats the Angular solution in terms of simplicity and scalability.

React vs AngularJS by number of concepts to learn

  • React stack: 4 (everything is a component, some components have state, you may use model instead, Commonjs modules, router).
  • AngularJS: 7 (modules, router, controllers, directives, scopes, templates, services, filters).

There are twice as many concepts to learn in Angular as in React, not to mention that React’s concepts are much simpler. For example, you have controllers and directives with templates that do more or less the same thing. With React you have only the component as the building block of your application. We all know that simplicity scales better, right?

directive vs component

//Angular
App.directive('myDirective', function() {
  return {
    restrict: 'E',
    transclude: true,
    scope: {
      link: '@link'
    },
    template: '<a href="#/{{link}}" ng-transclude></a>',
    link: function (scope, element, attrs) {
      //do stuff with scope
    }
  };
});
//usage
<my-directive link='somewhere'><span>GO</span></my-directive>
//React
var myComponent = React.createClass({
  componentDidMount: function(){
    //do stuff with this.props
  },
  render: function() {
    return <a href={'#/' + this.props.link}>{this.props.children}</a>;
  }
});
//usage
<myComponent link='somewhere'><span>GO</span></myComponent>
//JSX transformed
myComponent({link:'somewhere'}, span(null,'GO'));

The substantial difference between React and Angular here is that React is JavaScript-friendly: you just put stuff in as props, and since components are functions, these are passed as function arguments.

Component is a function!

service vs function

//Angular
myApp.service('unicornLauncher', ["apiToken", UnicornLauncher]);

function UnicornLauncher(apiToken) {

  this.launchedCount = 0;
  this.launch = function() {
    // make a request to the remote api and include the apiToken
    ...
    this.launchedCount++;
  }
}

//Javascript
var apiToken = require('../apiToken.js');

module.exports = function UnicornLauncher() {

  this.launchedCount = 0;
  this.launch = function() {
    // make a request to the remote api and include the apiToken
    ...
    this.launchedCount++;
  }

}

Service/provider in Angular is a solution to a made-up problem. Just use CommonJS and you won’t need the service/provider thing. You will just use modules and functions, which are natural for JS.

Service is a function!

filter vs function

//Angular
App.filter('incr', function() {
  return function(input) {
    return input + 1;
  };
})
<div>{{value | incr}}</div>

//React
function incr(input){
  return input + 1;
}

<div>{incr(value)}</div>

Well, the filter is pretty useful if you use HTML templates as strings. Life is easier with React when you do not use strings for templates.

Filter is a function!

template vs JSX

//Angular
<div>
  <ProfilePic username='username' />
  <ProfileLink username='username' />
</div>

//Reactjs
/** @jsx React.DOM */
var Avatar = React.createClass({
  render: function() {
    return (
      <div>
        <ProfilePic username={this.props.username} />
        <ProfileLink username={this.props.username} />
      </div>
    );
  }
});
//transformed
var Avatar = React.createClass({
  render: function() {
    return (
      div(null,
        ProfilePic({username:this.props.username}),
        ProfileLink({username:this.props.username})
      )
    );
  }
});

Functions are better than strings. Functions can work with closures. Functions are faster. And in JavaScript, functions are first-class citizens. Functions are much more logical than strings.
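The closure point deserves a tiny demonstration. Here a template function closes over local state, with no string interpolation or scope plumbing needed (makeCounterView is made up for illustration; incr is the filter function from above):

```javascript
// the "filter" from the previous section, just a plain function
function incr(input) { return input + 1; }

// a template-as-function: it captures `count` in a closure,
// something a template string can never do on its own
function makeCounterView(start) {
  var count = start; // private state, captured by the render closure
  return function render() {
    return '<div>' + incr(count++) + '</div>';
  };
}

var view = makeCounterView(0);
console.log(view()); // "<div>1</div>"
console.log(view()); // "<div>2</div>"
```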

Template is a function!

With React you live in the world of functions. With Angular you live in the world of enterprise patterns.

My next story will be about why Angular might work for you.