Lazy load nvm for faster shell start

NVM is a version manager for Node that makes using specific versions of Node a breeze. I prefer to use it on my development machine instead of a system-wide Node, as it gives much more control with almost no added complexity.

Once you install it, it adds the following snippet to your .bashrc:

export NVM_DIR="/Users/zaro/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"  # This loads nvm

and everything just works 🙂

Except that on my laptop this adds 1-2 seconds of start-up time to each new shell I open. It's a bit of an annoyance and I don't need it in every terminal session I start, so I thought there might be a way to load it on demand.

After fiddling a bit with it I replaced the NVM snippet with the following:

nvm() {
    unset -f nvm
    export NVM_DIR=~/.nvm
    [ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"  # This loads nvm
    nvm "$@"
}

node() {
    unset -f node
    export NVM_DIR=~/.nvm
    [ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"  # This loads nvm
    node "$@"
}

npm() {
    unset -f npm
    export NVM_DIR=~/.nvm
    [ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"  # This loads nvm
    npm "$@"
}

Now nvm, node and npm are loaded on their first invocation, posing no start-up time penalty for the shells that aren't going to use them at all.

Edit: Thanks to jonknapp's suggestion, the snippet is now more copy-paste friendly.

Edit: fl0w_io made a standalone script out of it to include in .bashrc.

Edit: sscotth made a version that will register all your globally installed modules.

[JavaScript] Promise me to keep your order

Promises are currently one of the best tools JavaScript has to offer for keeping track of all the asynchronous calls in a program. If you don't use them already, you definitely should start. But in this post I want to share a technique which, even though it's dead simple, wasn't quite obvious to achieve right away from the Promise documentation.

The parallel approach

The problem I had at hand was database operations: several deletes which I wanted to be sure had all completed before continuing. That is quite easy to do with an array of Promises like this:

function dbDelete(data) {
  console.log("Delete ", data);
}

var promises = [];
for(var id of [1,2,3]){
  (function(id){
    promises.push(new Promise(function(resolve, reject) {
      // Use setTimeout to simulate real DB operation taking time
      setTimeout(function () {
        dbDelete(id);
        resolve()
      }, 500+ Math.floor(Math.random()*500) );
    }));
  })(id)
}

Promise.all(promises).then(function(){
  console.log("All done.");
});

This generally works fine, but in my case these delete operations were kind of heavy and were putting a lot of load on the database. Starting all 3 of them at the same time was not helping at all.

A not-so-working serial execution

So I decided to run them serially instead of parallel with the obvious approach:

function dbDelete(data) {
  console.log("Delete ", data);
}

new Promise(function(resolve, reject) {
  setTimeout(function () {
    dbDelete(1);
    resolve()
  }, 500+ Math.floor(Math.random()*500) );
}).then(function(){
  setTimeout(function () {
    dbDelete(2);
  }, 500+ Math.floor(Math.random()*500) );
}).then(function(){
  setTimeout(function () {
    dbDelete(3);
  }, 500+ Math.floor(Math.random()*500) );
}).then(function(){
  console.log("All done.");
});

Just chaining the promises with .then() doesn't quite work, because the handlers here don't return anything: once the first delete operation is resolved, the next two are again started simultaneously. Thanks to Quabouter, this is more clear now. The return value of a .then() handler is passed to Promise.resolve() and the resulting promise is used to resolve the next .then(). That's why returning a simple value (or no value) will fire the next .then() right away, while returning a promise will block until it is resolved.
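A minimal illustration of that difference (a standalone example, not part of the original problem):

Promise.resolve()
  .then(function() {
    // Returning a plain value: the next .then() fires right away
    return 42;
  })
  .then(function(value) {
    console.log(value); // 42
    // Returning a promise: the next .then() waits until it resolves
    return new Promise(function(resolve) {
      setTimeout(function() { resolve("done"); }, 500);
    });
  })
  .then(function(value) {
    console.log(value); // "done", roughly half a second later
  });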

So I had to search for a different approach.

The final solution

According to the .then() documentation it returns a new Promise, which is what makes chaining .then() calls possible. What is not quite clear is that when the function passed to .then() returns a new Promise, that promise is used to fulfil the Promise returned by .then(). With this knowledge it is possible to rewrite the loop like this:

function dbDelete(data) {
  console.log("Delete ", data);
}

var promise = Promise.resolve();
for(var id of [1,2,3]){
  (function(id){
    promise = promise.then(function() {
      return new Promise(function(resolve, reject) {
        // Use setTimeout to simulate real DB operation taking time
        setTimeout(function () {
          dbDelete(id);
          resolve();
        }, 500 + Math.floor(Math.random()*500));
      });
    });
  })(id);
}

promise.then(function(){
  console.log("All done.");
});

This gives the serial execution of dbDelete, where the next operation starts only after the previous one has finished.
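The same chain can also be built without the explicit loop and the wrapping function, for example with Array.prototype.reduce; a minimal sketch of an equivalent variant (not the code I used, just another way to express it):

function dbDelete(data) {
  console.log("Delete ", data);
}

// Each reduce step chains one more .then() onto the accumulated promise,
// so the deletes still run strictly one after another.
[1, 2, 3].reduce(function(promise, id) {
  return promise.then(function() {
    return new Promise(function(resolve, reject) {
      setTimeout(function() {
        dbDelete(id);
        resolve();
      }, 500 + Math.floor(Math.random()*500));
    });
  });
}, Promise.resolve()).then(function() {
  console.log("All done.");
});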

Hope this helps somebody 🙂

Node.js change visible process name

When writing command line tools, sometimes there is sensitive information (like passwords) on the command line that shouldn't be visible right away in the process list. I know what you are thinking: "Passwords on the command line are a big fat NO". Yeah, maybe from a security viewpoint this is a horrible thing to do, but from a usability perspective it is pretty convenient. And as reality has proven so many times, convenience triumphs over security. But being a bit cautious won't hurt, so let's see how we can hide that password.

Process name in Node.js

Node.js has support for retrieving and setting the process name along with the passed command line arguments. There is a process.title property which, according to the documentation, is a getter/setter for the process name.

So our first guess is:

process.title = process.title.replace('HIDEME','******')

The result of this is not what you’d expect. It sets the process name and command line to just ‘node’. That’s because process.title contains only the process name and no command line arguments:

$ node -e 'console.log("process name=\"" + process.title + "\"") ; setTimeout("",10000)' arg1 arg2 arg3
process name="node"

# In another shell
$ ps ax
...
48146 s009  S+     0:00.11 node -e console.log("process name=\"" + process.title + "\"") ; setTimeout("",10000) arg1 arg2 arg3
...

Setting it will overwrite both the process name and the command line arguments though.

$ node -e 'process.title ="got nothing to hide"; console.log("process name=\"" + process.title + "\"") ; setTimeout("",10000)' arg1 arg2 arg3
process name="got nothing to hide"

#
$ ps ax
...
48151 s009  S+     0:00.12 got nothing to hide
...

So we can overwrite the visible process name, but we lose some information that might be nice to have, like what this process is and what command line arguments it was run with.

The good news is that we have the command line arguments in process.argv, so all we have to do is reconstruct the command line and append it to process.title.

Change the visible process name

Here it goes:

// path is needed for path.relative below
var path = require('path');

// Start the new title with the current process name
var t = [ process.title ];

// Append the script node is running, this is always argv[1]
// Also run it through path.relative, because node replaces argv[1]
// with its full path and that is way too long
t.push( path.relative(process.cwd(), process.argv[1]) );

// For the rest of the argv
for(var index=2; index < process.argv.length; index++ ) {
  var val = process.argv[ index ];
  // If the current argument is the password
  if(val === 'password' ) {
    // Append stars
    t.push( val.replace(/./g, '*') );
  } else {
    // Else append the argument as it is
    t.push( val );
  }
}

// Finally set the visible title
process.title = t.join(' ');

This works quite well if you don't change the length of the command line. Making it shorter also works fine, but making it longer will lead to a truncated string, as the memory for argv is preallocated by the C runtime and Node just overwrites it, meaning it cannot change its length.

Running this with ‘password’ on the command line gives:

$ node process_title.js  argv1 argv2 password argv4

# Check the visible name
$ ps ax
48327 s010  S+     0:00.10 node process_title.js argv1 argv2 ******** argv4
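If the password is passed as the value of a flag rather than as a bare argument, the same idea still works; here is a minimal sketch assuming a hypothetical --password flag (not part of the example above):

var path = require('path');

// Keep the process name and the shortened script path, as before
var parts = [ process.title, path.relative(process.cwd(), process.argv[1]) ];

for (var i = 2; i < process.argv.length; i++) {
  var arg = process.argv[i];
  // Mask the argument that directly follows the --password flag
  if (process.argv[i - 1] === '--password') {
    parts.push(arg.replace(/./g, '*'));
  } else {
    parts.push(arg);
  }
}

process.title = parts.join(' ');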

 

Quick and (not so) dirty compressing reverse proxy.

While working on a couchapp recently I found out a quite interesting fact: CouchDB doesn't support gzip/deflate HTTP responses. And with a view that's several MB, using the app over a slow connection was a lot of pain.

My first thought was: no problem, my CouchDB is behind nginx anyway, let's just turn on gzip compression in nginx. And while that was super simple to do, it yielded an undesired effect. The responses were now compressed, but nginx strips off ETag headers, and there is no way around it. Without the ETag, queries always return full responses, even if the data hasn't been modified. With it, a short 304 response is sent when there is no change.
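For reference, this is roughly how the ETag round trip looks from a client; a minimal sketch assuming CouchDB on localhost:5984 and a hypothetical document at /mydb/mydoc:

var http = require('http');

// First request: note the ETag that comes back
var opts = { host: 'localhost', port: 5984, path: '/mydb/mydoc' };

http.get(opts, function(res) {
  var etag = res.headers.etag;
  console.log('first request:', res.statusCode, 'etag:', etag);
  res.resume();
  if (!etag) return;

  // Repeat the request, sending the ETag back in If-None-Match
  http.get({
    host: opts.host,
    port: opts.port,
    path: opts.path,
    headers: { 'If-None-Match': etag }
  }, function(res2) {
    // 304 means "not modified": no response body is transferred at all
    console.log('second request:', res2.statusCode);
    res2.resume();
  });
});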

Unhappy with nginx's way too strict approach to HTTP, I decided to write my own compressing proxy. Fortunately that's super simple with nodejs/iojs. It's just a matter of gluing a few modules together 🙂

First install the modules:

npm install http-proxy connect compression morgan

Then save this as proxy.js in the same directory:

var connect = require('connect'),
    compression = require('compression'),
    morgan = require('morgan'),
    httpProxy = require('http-proxy');

var port = 8012;

// Everything is forwarded to the local CouchDB
var proxy = httpProxy.createProxyServer({
  target: 'http://localhost:5984/'
});

var app = connect();

// Log the requests, useful for debugging
app.use(morgan('combined'));

// Compress the responses on their way out
app.use(compression());

// Hand everything else over to the proxy
app.use(function(req, res) {
  proxy.web(req, res);
});

app.listen(port);

console.log('proxy started on port ' + port);

And voila, now run:

node proxy.js

And you have gzipped responses from CouchDB on port 8012.