semantic styling

Published:

Driving back from #OdessaJs with @listochkin, we discussed the future of web development and our perceptions of what makes sense. It was a priceless talk, and a lot of things got formulated and validated for me. One of the concepts that popped out is "semantic styling".

Not long ago there was hype around semantic layout, which is a great concept about giving more meaning to markup. But it looks like nobody noticed the next big revolution, this time in styles. Twitter did it: Twitter Bootstrap formulated the smallest footprint of layout + styles for common design patterns.

So right now everybody agrees that a button can be a <button> as well as an <a> and it should look the same, and a dropdown will look like:

<div class="dropdown">
  <button class="btn btn-default dropdown-toggle">Dropdown</button>
  <ul class="dropdown-menu">
    <li><a href="#1">Action</a></li>
    <li><a href="#2">Another action</a></li>
  </ul>
</div>

and nobody tries to invent something else. Bootstrap set a standard: what we can call "semantic styles".

What is also great about Twitter Bootstrap is that it is built with Less, so you can use only parts of it. The variables in Bootstrap are mainly colors, and you can override them. Thus your CSS (Less) code consists of two parts: the logic (the relations between HTML elements/classes that describe the footprint) and the presentation (the variables that change your colors and paddings and make the design individual).

With TWBS we have a vocabulary of semantic styles that represent commonly used design patterns, and we can adjust them to a branded design by overriding Less variables.

I think this vocabulary should be managed differently from how Bootstrap does it, though.

Bootstrap is just a lot of files in the twitter/bootstrap repo. Package managers, on the other hand, provide a better development experience and more ways to handle dependencies and structure the code. I prefer the component style over Bower, so I would like, and believe it is possible, to handle the vocabulary of semantic styles in the same manner: instead of CommonJS require("modulename"), being able to (CommonLess, why not?) @import "buttons" without spelling out the complete path to the component.

So from a development perspective we would just need to install some "Less terms" as components (example: $ component install twbs/forms twbs/buttons twbs/grid) and then, inside main.less:

// importing the vocabulary
@import "forms";
@import "buttons";
@import "grid";

// overriding the default variables to adjust to the branded design
@brand-success: #AAAAAA;
@brand-default: #BBBBBB;

Almost everything is ready. The CommonLess part is not there yet, so for now we just have to spell out the complete relative path to the Less/Sass components we use.
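
For illustration, an import currently looks something like this (the relative paths here are made up for the example):

// until CommonLess exists: spell out the full relative path to each component
@import "../components/twbs-buttons/buttons.less";
@import "../components/twbs-forms/forms.less";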


node-webkit autoupdate

Published:

Node-webkit allows you to build cross-platform desktop applications with node and JavaScript. Building a desktop application, unlike a purely online webapp, means you have no control over the code after the app has been installed. Thus releases are painful, and bugs discovered after the code has shipped are frustrating.

These days desktop apps usually update themselves. Node-webkit does not have this functionality out of the box, which is reasonable, because such functionality would heavily depend on the specific implementation.

So I created webkit-updater. It assumes that your app is packaged with grunt-node-webkit-builder, compressed with grunt-contrib-compress, and that unzip.exe is bundled with the Windows package. You can find an example of packaging an app here.

webkit-updater works and is tested under Mac, Windows and Linux (32/64-bit).

how does it work

It gives you an API to:

  1. Check the manifest for a new version.
  2. If the version differs from the local one, download the new package to a temp folder.
  3. Unpack the package in temp.
  4. Run the new version from temp and exit the current process.
  5. The new version copies itself from temp to the original folder.
  6. The new version runs itself from the original folder and exits the temp process.

You have to wire this logic up yourself, though; as a reference you can use the example, or the rough sketch below.
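
Here is a sketch of that flow, just to show its shape; the method names are hypothetical, so check the webkit-updater README and the example for the real API.

// a rough sketch of the update flow described above
// (createUpdater and the method names are hypothetical, not the actual webkit-updater API)
var updater = createUpdater(localManifest);

updater.checkRemoteManifest(function(err, remoteManifest){
  if (err || remoteManifest.version === localManifest.version) return;

  updater.downloadPackage(remoteManifest, function(err, packagePath){
    if (err) return;
    updater.unpackToTemp(packagePath, function(err, tempAppPath){
      if (err) return;
      // run the new version from temp and exit; the new copy then
      // replaces the original folder and restarts itself from there
      updater.runFromTemp(tempAppPath);
      process.exit();
    });
  });
});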

what are the plans

There are probably bugs. It needs to be stabilized and tested extensively in a real-world app.

It would also be great to support different levels of updates:

  • update only the assets, without a page reload (..x)
  • update the assets with a page reload (.x.)
  • update the assets and the node-webkit engine, the full cycle (x..)

There is a bug on newer versions of Linux; the updater should handle things like that. There should also be some handling for preInstall and postInstall scripts.

You are welcome to use it and contribute.


framework vs microlib architecture

Published:

Just recently I think I understood why holy wars between programmers happen. During the revolution I saw basically the same holy war between people who believe their truth is the only truth. Why is it hard, even for smart people on both sides, to negotiate a shared vision? I believe the reason is that they have different values.

We often underestimate how important common values are for us to feel comfortable and productive in a team. We often do not care about shared values while looking for a new job; we care about salary more. Granted, we decide whether to accept an offer based on many factors, shared values among them, but we do that unconsciously.

The JavaScript community is quite heterogeneous, so you can see all kinds of values there. So far I can distinguish two types of people: those who love classical OOP languages with strict, stable structure and patterns, and those who love alternative languages that have no predetermined patterns and offer 'unexpected' flexibility. Talking with the first group, it seems they are afraid of chaos and cannot tolerate any unpredictability. The second group is so bored with rigid structures and solutions that they run away from enterprises like from hell. I know very few people who are fine with both.

So here I come to the framework vs microlib architecture discourse. Inside the JavaScript community itself we have a kind of holy war around this topic. Now I feel it doesn't make sense to participate in this war, since the final decision is always based on our values, which means the "common sense" we appeal to is quite individual.

PS: The main question in any job interview should be about values, always!


managers are taking your project's breath

Published:

Theoretically there is no manager in agile. If someone tells you they have an agile team with a manager, they are lying. It mostly means that, for some reason, they are not ready to share responsibility across the team; they do not have agile.

A few years ago I was lucky to participate in a @jeffpatton agile workshop. It had a huge impact on my understanding of application development. By now I have embraced some basic agile principles and consider them healthier for the internal group dynamics of a team. Agile literally makes each individual in the team healthier and happier.

People are lazy. Developers are no exception. We do not like to work and take responsibility. We easily hand our responsibilities to anyone who will take them. And the worst thing you can do is give all responsibility for the project's success to one person. That is what happens when you put a manager into your team.

On the other hand, people tend to step up and take responsibility for the products they are building, and when they do, they become more engaged and proud of the stuff they do. They stop asking stupid questions and start committing themselves to the product.

We are developers: we come to our jobs and spend eight hours a day doing magic. We really want to build something that makes sense, something that could make the world better. Isn't that the best motivation for us? Business, please, spend time sharing your passion for the product with us!

PS: There is only one case when having a project manager is a good idea: a short-term project where the manager takes the BA role. In that case it just does not make sense to invest in building a team.


Reactjs mixing with Backbone

Published:

Reactjs is a JavaScript library for building user interfaces, open-sourced by Facebook just recently.

Not long ago I felt that, as a developer, I had more or less two good options for building an app: do it in #angular or in #backbone. Now I feel that #react takes the best of Angular, does it better, and lets you keep the best parts of Backbone.

I hate Backbone Views and I hate Angular's $scope, especially when it comes to directive scope and all the &-@-= stuff. Transclusion plus scope is a double hell, and I am not even talking about digest cycles and performance yet.

React has a really small API and it does one thing, but does it really well. It abstracts the DOM for you and optimizes the rendering part. Each time you need React to reflect state changes in the DOM, it renders a lightweight DOM in JavaScript and applies only the diff to the real DOM. That way rendering becomes really cheap, unlike in Angular, and it allows us to build apps with different patterns in mind.

And here are some tips I got from several weeks of playing around with #Reactjs.

React is just V

React needs other stuff, like routes and models. I am taking them from Backbone.

Models are state

By default React has a single state, this.state, which is usually not the best solution. It appears that a cleaner way is to have multiple states, where this.state holds the non-persistent state and Backbone models hold the persistent one.

In React's examples you can find a BackboneMixin, but it has some flaws. The following one is better, since it does proper cleanup:

var ModelMixin = {
  componentDidMount: function() {
    // Whenever there may be a change in the Backbone data, trigger a reconcile.
    this.getBackboneModels().forEach(this.injectModel, this);
  },
  componentWillUnmount: function() {
    // Ensure that we clean up any dangling references when the component is
    // destroyed.
    this.__syncedModels.forEach(function(model) {
      model.off(null, model.__updater, this);
    }, this);
  },
  injectModel: function(model){
    if(!this.__syncedModels) this.__syncedModels = [];
    if(!~this.__syncedModels.indexOf(model)){
      var updater = this.forceUpdate.bind(this, null);
      model.__updater = updater;
      model.on('add change remove', updater, this);
      this.__syncedModels.push(model);
    }
  }
}

That way you can use the same models in several nested components:

  <rootComponent user={new UserModel({id: id})}>
    <contactComponent user={this.props.user}/>
    <userpicComponent user={this.props.user}/>
  </rootComponent>

2 way binding

It is kind of logical to have two-way binding with those Backbone models. The LinkedState mixin works only with component state, so here is a BindMixin which does basically the same as LinkedState but for Backbone models.

var BindMixin = {
  bindTo: function(model, key){
    return {
      value: model.get(key),
      requestChange: function(value){
          model.set(key, value);
      }.bind(this)
    }
  }
}

This mixin adds a bindTo method that binds a control to a model property as simply as this.bindTo(user, 'name'):

var Hello = React.createClass({
  mixins:[ModelMixin, BindMixin],
  getBackboneModels: function(){
    return [this.props.instance]
  },
  render: function() {
    var model = this.props.instance;
    return <div>
        <div>Hello {model.get('initial')}</div>
        <input type="text" valueLink={this.bindTo(model, 'initial')}/>
      </div>
  }
});

Here is a working example: http://jsfiddle.net/djkojb/qZf48/24/


using private components in compy

Published:

There are core limitations in component that make it hard to use private git repositories directly, unless you use GitHub. The component FAQ proposes using the remotes property and any web server that serves the same URLs as GitHub.

package.json

{
  ...
  "compy":{
    ...
    "remotes":["https://user:pass@raw.github.com"]
  }
}

But there is a better way to manage private components with any git server you like.

using git submodules to manage private components

Component supports local dependencies. That means it can serve components from any local folder you point to in the config:

package.json

{
  ...
  "compy":{
    ...
    "paths":["local"],
    "local":["component1","component2"]
  }
}

So if you want to use private components as local dependencies, add them as git submodules inside that folder.

Compy will serve them like regular components, and you will manage them with the git CLI.
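
Once that is set up, a private component checked out this way is required like any other one (component1 is just the hypothetical name from the config above):

// component1 lives in ./local as a git submodule
var component1 = require('component1');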

adding component to folder

You can add a component to the local folder like this:

cd local
git submodule add git://github.com/chneukirchen/rack.git

socker: websocket CRUD over engine.io

Published:

One substantial difference of using sockets instead of plain HTTP requests is that we usually broadcast messages without expecting any response. While building jschat I thought that would be enough. But even for a chat we need responses if we want reliability and a better experience. Think of the "Pending" state of a message when it is sent in offline mode.

Raw libraries don't provide any 'response'-like functionality, so I had to build my own implementation. As a base for jschat we are using engine.io, because socket.io has not been maintained for a long time, engine.io is kind of its successor, and it's awesome.

socker

Socker is inspired by express: a simple and lightweight implementation of middleware, routing and error handling. Socker wraps both engine.io and engine.io-client and provides additional methods that implement an express-like API.

setting up socker

We can use engine.io and socker with or without express:

//backend
var http = require('http');
var nconf = require('nconf');
var engine = require('engine.io');
var socker = require('socker');

var app = require('express')();
var server = http.createServer(app);
server.listen(nconf.get('server:port'));
server = engine.attach(server);

socker(server); // wrapping the server with additional methods
server.on('connection', function(socket){
  socker.attach(socket); // attaching socker to the socket
});

// frontend
var socket = require('engine.io')('ws://localhost');
var sockerClient = require('socker-client'); // we can use it as standalone though
sockerClient(socket);

sending the message from client

On the client we have an additional serve method on the socket:

//socket.serve(<optional> route, <optional> message, <required> callback);

socket.serve({message: "Hello world!"}, function(err, data){
  // err contains the error object if one was thrown
  // data is the response data
});
socket.serve('READ /api/item/343', function(err, data){
  // err contains the error object if one was thrown
  // data is the response data
});
socket.serve('CREATE /api/items', {itemName: "foo"}, function(err, data){
  // err contains the error object if one was thrown
  // data is the response data
});

handling the message on server

On the server we additionally have the sock.use and sock.when methods. sock.use adds a middleware handler. Middleware in our case, instead of request and response, receives socket and data objects.

server.sock.use(logger);
function logger(socket, data, next){
  // socket is a socket object
  // data - the data object sent with `request`
  console.log(data);
  
  // socket object have .json method to send a response
  if(weNeedTo) return socket.json({responseMessage: "bar"});
  
  // or we can throw an error
  if(weNeedToThrowError) return next("Error message");
  
  // if we need to pass to next handler
  next() 
}

socket is a per-message context object, and you can attach whatever you like to it. The "session" context object is socket.__proto__, so if you want to keep some data for the lifetime of the connection, use the prototype object.
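
A small sketch of what that means inside a middleware; the user lookup here is made up, only the prototype trick is the point:

server.sock.use(rememberUser);
function rememberUser(socket, data, next){
  // socket lives only for the current message,
  // so keep connection-wide data on its prototype instead
  Object.getPrototypeOf(socket).user = {id: data.userId}; // i.e. socket.__proto__
  next();
}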

handling routing

Inside routing middleware the route is already parsed, and we also have a socket.params object with all the params from the route.

server.sock.when('CREATE /api/items', checkItem, createItem);
server.sock.when('READ /api/item/:id', getItem);
function getItem(socket, data, next){
  // socket.params['id'] contains id from the route
  // data is the data that was sent
  socket.json({room:"name", id: 343});
}

Using the METHOD uri mask is not required by socker; you can name your routes however you like in the same manner:

server.sock.when('Server, please, give me room with :id', callback);
//or
server.sock.when('Bloody server! I command you to stay on your knees and give all items you got.', callback);

error handling

We can also customize error handling.

server.sock.use(function(err, socket, data, next){
  if(err){
    socket.json({type:"ERROR", err:err, code: 500})
  }
})

It is important to set type: "ERROR", because that is how the client knows to treat the message as an error.

try it

You get a clean and simple API and you get a latency boost: you save the round trip to your session storage and the handshake time. And now, with socker, moving from an express REST API to a socket-based API is really simple.


why building another app compiler?

Published:

When you are a frontend developer and start doing node, npm completely spoils you, because unlike what we are used to, it provides a single and predictable way of adding and using 3rd-party libs and snippets in your code.

Frontend is more complex in many ways. It is more fragmented, since there are HTML and CSS in addition to JavaScript, and our code runs in different combinations of VMs and platforms.

The commonly used way of adding 3rd-party libs is a /vendor folder that holds a bunch of unminified (if you're lucky) files that were downloaded by someone ages ago. Maybe you will find comments inside that give you an idea of which version of the library is used, maybe not. Also, what-loads-first dependency management is entirely your pain. You might have a master file with all the scripts loaded in the 'right' order.

Bower does a great job of adding more metadata to packages, fixing some of the problems. But Bower is just a package manager (c) and it doesn't load scripts. So again, you have to do the additional job of defining what-loads-when relations.

Even if you use require.js, you need to configure 3rd-party libraries. Besides, requirejs adds its own complexity to the code. For example: do you know the difference between the require and define functions? And frankly, why do you even need to know the difference? You need something that just works.

So at the end of the day we need a package manager that delivers libs into our app, require functionality that handles script dependencies, and a builder that wires the whole thing together and gives back 3 files: index.html, app.js and app.css.

and compy can do it

Componentjs does most of the work already. Compy just wraps the concept into one solid solution.

component package manager

Componentjs was the obvious choice. Unlike npm or Bower, component is really strict about which files are considered a source, which is not that important for the server but really important for the frontend.
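
For reference, a component declares its sources explicitly in its component.json; a minimal made-up example:

{
  "name": "buttons",
  "version": "0.0.1",
  "scripts": ["index.js"],
  "styles": ["buttons.css"]
}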

local require

Componentjs gives you local require out of the box. Component's require is synchronous: your files are wrapped in a scope and concatenated into one file, and your dependencies are already loaded by the time you require them. Thus you don't break JavaScript, and require becomes plain, simple and clear.
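
Conceptually the build output looks something like this (simplified, not the exact code component generates):

// every file is registered under its path and wrapped in a scope;
// require() simply looks the module up in that registry
require.register("app/add.js", function(exports, require, module){
  module.exports = function(a, b){ return a + b; };
});

var add = require("app/add.js");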

builder

The builder takes responsibility for compiling out three files: app.js, app.css and index.html. app.js is built from the JS dependencies (components), precompiled templates and your JS source files. app.css is just the concatenated CSS files, and index.html is generated automatically to pull in the JS/CSS and run the "main" file. The builder has a bunch of plugins that allow precompiling sources, so you can use CoffeeScript, SCSS, Jade, whatever. And, technically, because we avoid one read/write cycle compared to plain grunt, it is faster than grunt. And you can mix technologies: require CoffeeScript from JS and vice versa.


compy: simple way of building webapps

Published:

Compy is a simple, 'zero'-configuration web app builder/compiler integrated with the client-side package manager component. Although there is almost no configuration, it gives you all the flexibility to code the way you like.

Start

Install compy with npm:

$ npm install compy -g

To start an app, all you need is to tell compy where the beginning is. To do that you need a package.json file with the compy.main property pointing to the main JS file of your app.

{
  "name" : "app",
  "compy" : {
    "main" : "appstart.js"
  }
}

The appstart.js file will be executed right after the page loads.

To compile the app, just run $ compy compile.

Compy will generate a ./dist folder with app.js, app.css and index.html. All CSS in your directory will be concatenated/minified into the app.css file.

Compy has a static server, so you can check the result with

$ compy server [watch]

Adding the watch option will recompile the app and livereload the changes in the browser.

Components

The most powerful part of compy is local require and the integration with component.

To install jquery:

$ compy install jquerycomp/jquery

To use jquery in code:

var $ = require('jquery');
$(document.body).html("Hello world");

Local require works the same as in node.js:

//filename: add.js
module.exports = function(a, b){
  return a + b;
}

//filename: appstart.js
var add = require('./add');
add(2,2); //4

Plugins

Compy supports component's plugins.

That means you can use them to work with whatever language or template engine you want. For example, to use CoffeeScript you need to install the plugin in your root folder:

$ npm install component-coffee

Now, after recompilation, all your coffee files will be used as JavaScript. That also means you can mix js and coffee files in the same repo.

#filename: add.coffee
module.exports = (a, b) =>
  a + b

//filename: appstart.js
var add = require('./add');
add(2,2); //4

And there is more

Compy is built on top of grunt. Basically it is just a grunt setup, so no magic here. Still, lots of stuff is available:

  • component support
  • local require
  • support for coffeescript, sass, jade and other plugins
  • static server + livereload
  • karma test runner
  • extendable with grunt

May the force be with you!