Exception handling and thrown errors within promises

Promise rejection is simply a form of failure abstraction, as are node-style callbacks (err, res) and exceptions. Since promises are asynchronous, you can't use try-catch to actually catch anything, because errors are likely to happen in a different tick of the event loop than the surrounding try-catch block.

A quick example:

function test(callback){
    throw 'error';
    callback(null);
}

try {
    test(function () {});
} catch (e) {
    console.log('Caught: ' + e);
}

Here we can catch the error, as the function is synchronous (though callback-based). Another:

function test(callback){
    process.nextTick(function () {
        throw 'error';
        callback(null); 
    });
}

try {
    test(function () {});
} catch (e) {
    console.log('Caught: ' + e);
}

Now we can't catch the error! The only option is to pass it in the callback:

function test(callback){
    process.nextTick(function () {
        callback('error', null); 
    });
}

test(function (err, res) {
    if (err) return console.log('Caught: ' + err);
});

Now it's working just like in the first example. The same applies to promises: you can't use try-catch, so you use rejections for error handling.
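A rejection-based version of the same example (a sketch, assuming a Promise-capable runtime):

```javascript
function test() {
  return new Promise(function (resolve, reject) {
    process.nextTick(function () {
      reject('error'); // the promise analogue of callback('error', null)
    });
  });
}

test().catch(function (err) {
  console.log('Caught: ' + err); // Caught: error
});
```

No try-catch anywhere; the rejection travels down the chain until a .catch handles it.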


This is arguably the most important feature of promises. If it weren't there, you might as well use callbacks:

var fs = require("fs");

fs.readFile("myfile.json", function(err, contents) {
    if( err ) {
        console.error("Cannot read file");
    }
    else {
        try {
            var result = JSON.parse(contents);
            console.log(result);
        }
        catch(e) {
            console.error("Invalid json");
        }
    }

});

(Before you say that JSON.parse is the only thing that throws in JS, did you know that even coercing a variable to a number, e.g. +a, can throw a TypeError?)
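For instance, here is one way the coercion itself can throw, via the value's own valueOf (the object here is made up for the demo):

```javascript
// ToNumber calls the object's valueOf during coercion, and that may throw:
var a = { valueOf: function () { throw new TypeError('boom'); } };

var threw = false;
try {
  +a;
} catch (e) {
  threw = e instanceof TypeError;
}
console.log(threw); // true
```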

However, the above code can be expressed much more clearly with promises because there is just one exception channel instead of two:

var Promise = require("bluebird");
var readFile = Promise.promisify(require("fs").readFile);

readFile("myfile.json").then(JSON.parse).then(function(result){
    console.log(result);
}).catch(SyntaxError, function(e){
    console.error("Invalid json");
}).catch(function(e){
    console.error("Cannot read file");
});

Note that catch is sugar for .then(null, fn). If you understand how the exception flow works, you will see that it is generally an anti-pattern to use .then(fnSuccess, fnFail).
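A quick sketch of the difference: the fnFail passed alongside fnSuccess never sees errors thrown by fnSuccess itself, while a chained .catch sits downstream and does:

```javascript
Promise.resolve('not json')
  .then(JSON.parse, function (e) {
    // never reached: this handler only covers rejections from upstream,
    // not the SyntaxError thrown by JSON.parse in the same .then
    console.log('same-then handler: ' + e.name);
  })
  .catch(function (e) {
    console.log('downstream catch: ' + e.name); // downstream catch: SyntaxError
  });
```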

The point is not to swap , function(err, res) callbacks for .then(success, fail) (i.e. promises are not merely an alternative way to attach your callbacks) but to make the written code look almost the same as the equivalent synchronous code:

try {
    var result = JSON.parse(readFileSync("myjson.json"));
    console.log(result);
}
catch(SyntaxError e) {
    console.error("Invalid json");
}
catch(Error e) {
    console.error("Cannot read file");
}

(The synchronous code will actually be uglier in reality because JavaScript doesn't have typed catches.)


Crashing and restarting a process is not a valid strategy for dealing with errors, not even bugs. It would be fine in Erlang, where a process is cheap and does one isolated thing, like serving a single client. That doesn't apply in node, where a process costs orders of magnitude more and serves thousands of clients at once.

Let's say that you have 200 requests per second being served by your service. If 10% of those hit a throwing path in your code, you get 20 process shutdowns per second, roughly one every 50ms. With 4 cores and 1 process per core, you would lose all of them in 200ms. So if a process takes more than 200ms to start and get ready to serve requests (the minimum is around 50ms for a node process that doesn't load any modules), we now have a successful total denial of service. Not to mention that users hitting an error tend to do things like repeatedly refresh the page, thereby compounding the problem.
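The back-of-the-envelope arithmetic, spelled out (assuming a 10% throwing rate, which is what yields 20 crashes per second at 200 requests per second):

```javascript
var requestsPerSecond = 200;
var throwingRate = 0.1;                                   // 10% of requests crash a worker
var crashesPerSecond = requestsPerSecond * throwingRate;  // 20 crashes per second
var msBetweenCrashes = 1000 / crashesPerSecond;           // one every 50ms
var workers = 4;                                          // 1 process per core
var msToLoseAllWorkers = workers * msBetweenCrashes;      // all workers gone in 200ms

console.log(crashesPerSecond, msBetweenCrashes, msToLoseAllWorkers); // 20 50 200
```

Any restart time above msToLoseAllWorkers means the service never recovers.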

Domains don't solve the issue because they cannot ensure that resources are not leaked.

Read more at issues #5114 and #5149.

Now you can try to be "smart" about this and have a process-recycling policy of some sort based on a certain number of errors, but whatever strategy you adopt will severely change the scalability profile of node: we're talking several dozen requests per second per process instead of several thousand.

However, promises catch all exceptions and then propagate them in a manner very similar to how synchronous exceptions propagate up the stack. Additionally, they often provide a finally method which is meant to be the equivalent of try...finally. Thanks to those two features, we can encapsulate that clean-up logic by building "context managers" (similar to with in Python, using in C# or try-with-resources in Java) that always clean up resources.

Let's assume our resources are represented as objects with acquire and dispose methods, both of which return promises. No connections are made when the function is called; we only return a resource object, which will be handled by using later on:

function connect(url) {
  return { acquire: () => pg.connect(url), dispose: conn => conn.dispose() };
}

We want the API to work like this:

using(connect(process.env.DATABASE_URL), async (conn) => {
  await conn.query(...);
  // do other things
  return someResult;
});

We can easily achieve this API:

function using(resource, fn) {
  return Promise.resolve()
    .then(() => resource.acquire())
    .then(item => 
      Promise.resolve(item).then(fn).finally(() => 
        // bail if disposing fails, for any reason (sync or async)
        Promise.resolve()
          .then(() => resource.dispose(item))
          .catch(terminate)
      )
    );
}

The resources will always be disposed of after the promise chain returned by using's fn argument completes: even if an error was thrown inside that function, or inside one of its inner .then closures, or if a promise in the chain was rejected (the equivalent of a callback being called with an error). This is why it's so important for promises to catch errors and propagate them.

If however disposing the resource really fails, that is indeed a good reason to terminate. It's extremely likely that we've leaked a resource in this case, and it's a good idea to start winding down that process. But now our chances of crashing are isolated to a much smaller part of our code: the part that actually deals with leakable resources!

Note: terminate basically throws out-of-band so that promises cannot catch it, e.g. process.nextTick(() => { throw e; }). Which implementation makes sense may depend on your setup; a nextTick-based one bails similarly to how callbacks do.
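A minimal terminate along those lines (a sketch; your setup might prefer logging first or calling process.exit instead):

```javascript
function terminate(e) {
  // Rethrow on the next tick, outside any promise chain, so no .catch can
  // intercept it; the process then crashes like an ordinary uncaught throw.
  process.nextTick(function () { throw e; });
}
```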

What about callback-based libraries? They could potentially be unsafe. Let's look at an example to see where those errors could come from and which ones could cause problems:

function unwrapped(arg1, arg2, done) {
  var resource = allocateResource();
  mayThrowError1();
  resource.doesntThrow(arg1, (err, res) => {
    mayThrowError2(arg2);
    done(err, res);
  });
}

mayThrowError2() is inside an inner callback and will still crash the process if it throws, even if unwrapped is called within another promise's .then. These kinds of errors aren't caught by typical promisify wrappers and will keep crashing the process as usual.

mayThrowError1(), however, will be caught by the promise if unwrapped is called within .then, and the resource allocated before it may then leak.

We can write a paranoid version of promisify that makes sure that any thrown errors are unrecoverable and crash the process:

function paranoidPromisify(fn) {
  return function (...args) {
    return new Promise((resolve, reject) => {
      try {
        fn(...args, (err, res) => err != null ? reject(err) : resolve(res));
      } catch (e) {
        // rethrow out-of-band so the promise cannot swallow the error
        process.nextTick(() => { throw e; });
      }
    });
  };
}

Using the promisified function within another promise's .then callback now results in a process crash if unwrapped throws, falling back to the throw-crash paradigm.

It's the general hope that as you use more and more promise-based libraries, they will use the context-manager pattern to manage their resources, leaving you less need to let the process crash.

None of these solutions are bulletproof, not even crashing on thrown errors. It's very easy to accidentally write code that leaks resources despite never throwing. For example, this node-style function will leak resources even though it doesn't throw:

function unwrapped(arg1, arg2, done) {
  var resource = allocateResource();
  resource.doSomething(arg1, function(err, res) {
    if (err) return done(err);
    resource.doSomethingElse(res, function(err, res) {
      resource.dispose();
      done(err, res);
    });
  });
}

Why? Because when doSomething's callback receives an error, the code forgets to dispose of the resource.

This sort of problem doesn't happen with context-managers. You cannot forget to call dispose: you don't have to, since using does it for you!
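For comparison, here is the same guarantee observed in action: dispose runs whether the inner chain succeeds or fails. The using below is the implementation from earlier (minus the terminate handling, for brevity), and the fake resource is made up so the disposal can be observed:

```javascript
// `using` as shown earlier, without the terminate path:
function using(resource, fn) {
  return Promise.resolve()
    .then(function () { return resource.acquire(); })
    .then(function (item) {
      return Promise.resolve(item).then(fn).finally(function () {
        return resource.dispose(item);
      });
    });
}

// A fake resource so we can watch the disposal happen:
var disposed = false;
var fakeResource = {
  acquire: function () { return Promise.resolve({}); },
  dispose: function () { disposed = true; return Promise.resolve(); }
};

// Even when the inner chain fails, dispose still runs:
using(fakeResource, function () {
  return Promise.reject(new Error('doSomething failed'));
}).catch(function (e) {
  console.log('failed: ' + e.message + ', disposed: ' + disposed);
  // failed: doSomething failed, disposed: true
});
```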

References: "Why I am switching to promises"; "Context managers and transactions"