Server Side Javascript: Why?

I think a really cool use of server-side Javascript that isn't used nearly often enough is data validation. With it, you can write one javascript file of validation rules, check a form on the client side, then check it again on the server side, since nothing coming from the client can be trusted. It keeps your validation rules DRY. Quite handy.
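
As a rough sketch of what that shared file might look like (the function name validateSignup, the fields, and the specific rules here are all invented for illustration):

    // validate.js -- one set of rules, shared by browser and server.
    // The specific fields and rules are made up for this sketch.
    function validateSignup(fields) {
        var errors = [];
        if (!fields.email || !/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(fields.email)) {
            errors.push('A valid email address is required.');
        }
        if (!fields.password || fields.password.length < 8) {
            errors.push('The password must be at least 8 characters long.');
        }
        return errors;
    }

    // In a CommonJS-style server runtime this makes the function requirable;
    // in the browser it is simply a global once the script is included.
    if (typeof module !== 'undefined' && module.exports) {
        module.exports = validateSignup;
    }

The export guard at the bottom is the only server-specific bit; everything else is plain javascript any browser can run.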

The broader case for server-side javascript goes like this:

Servers are expensive, but users will give you processing time in their browsers for free. Therefore, server-side code is relatively expensive compared to client-side code on any site big enough to need to run more than one server. However, there are some things you can't leave to the client, like data validation and retrieval. You'd like to do them on the client, because it means faster response times for the users and less server infrastructure for yourself, but security and accessibility concerns mean server-side code is required.

What typically happens is you do both. You write the server-side logic because you have to, but you also write the same logic in javascript in hopes of giving users faster responses and saving your servers a little extra work in some situations. This is especially effective for validation code: a failed validation check in the browser can save an entire HTTP request/response round trip to the server.
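
For instance, here is a sketch of the client-side half, assuming the hypothetical validate.js above is already included on the page with a script tag and that the form's id is 'signup' (both invented details):

    // Client-side use of the shared validator. A failed check means the
    // request never leaves the browser.
    document.getElementById('signup').addEventListener('submit', function (e) {
        var form = e.target;
        var errors = validateSignup({
            email: form.elements.email.value,
            password: form.elements.password.value
        });
        if (errors.length > 0) {
            e.preventDefault();          // stop here: no request reaches the server
            alert(errors.join('\n'));    // or render the messages inline
        }
    });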

Since we're all (mostly) programmers here, we should immediately spot the new problem: not only the extra work of developing two sets of the same logic, but also the work of maintaining both, the inevitable bugs from platforms that don't match up well, and the bugs introduced as the two implementations drift apart over time.

Enter server-side javascript. The idea is that you write the code once and the same code runs on both server and client. This would appear to solve most of the problem: you get the full set of server and client logic done all at once, there's no drift, and no double maintenance. It's also nice that your developers only need to know one language for both server and client work.

Unfortunately, in the real world it doesn't work out so well. The problem is four-fold:

  1. The server view of a page is still very different from the client view of a page. The server needs to do things, like talking directly to a database, that just shouldn't be done from the browser. The browser needs to do things, like manipulating the DOM, that don't map onto anything the server has.
  2. You don't control the javascript engine of the client, meaning there will still be important language differences between your server code and your client code.
  3. The database is normally a bigger bottleneck than the web server, so the savings and performance benefits end up smaller than expected.
  4. While just about everyone knows a little javascript, relatively few developers know and understand it well.

These aren't completely unassailable technical problems: you constrain the server-supported language to a subset of javascript that's well supported across most browsers, provide an IDE that knows this subset and the server-side extensions, make some rules about page structure to minimize DOM issues, and provide some boilerplate javascript for inclusion on the client to make the platform a little nicer to use. The result is something like Aptana Studio/Jaxer, or more recently Node.js, which can be pretty nice.
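
As a minimal sketch of what the server-side half might look like on Node.js, reusing the same hypothetical validate.js from earlier (the port, path, and response text are invented details):

    // server.js -- a bare-bones Node.js handler that runs the exact same
    // validation file the browser uses.
    var http = require('http');
    var querystring = require('querystring');
    var validateSignup = require('./validate');

    http.createServer(function (req, res) {
        if (req.method === 'POST' && req.url === '/signup') {
            var body = '';
            req.on('data', function (chunk) { body += chunk; });
            req.on('end', function () {
                var errors = validateSignup(querystring.parse(body));
                if (errors.length > 0) {
                    res.writeHead(400, { 'Content-Type': 'text/plain' });
                    res.end(errors.join('\n'));
                } else {
                    res.writeHead(200, { 'Content-Type': 'text/plain' });
                    res.end('Thanks for signing up.');
                }
            });
        } else {
            res.writeHead(404);
            res.end();
        }
    }).listen(8080);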

But not perfect. In my opinion, there are just too many pitfalls and little compatibility issues for this to really shine. Ultimately, additional servers are still cheap compared to developer time, and most programmers are much more productive using something other than javascript.

What I'd really like to see is partial server-side javascript. When a page is requested or a form is submitted, the server runs the request validation in javascript, perhaps as a plugin to the web server that's completely independent from the rest of the stack, while the response itself is built using the platform of your choice.
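
Purely as a sketch of what that could look like, with every name, port, and path invented: a thin Node.js layer runs the shared validator, rejects bad submissions on its own, and forwards everything else to the application server that actually builds the response.

    // A hypothetical validation front layer. It answers invalid form posts
    // itself and forwards valid requests to the real backend, here assumed
    // to be listening on localhost:9000.
    var http = require('http');
    var querystring = require('querystring');
    var validateSignup = require('./validate');

    http.createServer(function (req, res) {
        if (req.method === 'POST' && req.url === '/signup') {
            var body = '';
            req.on('data', function (chunk) { body += chunk; });
            req.on('end', function () {
                var errors = validateSignup(querystring.parse(body));
                if (errors.length > 0) {
                    res.writeHead(400, { 'Content-Type': 'text/plain' });
                    res.end(errors.join('\n'));
                    return;
                }
                // Validation passed: hand the request to the backend that
                // builds the response in whatever language the team prefers.
                var upstream = http.request({
                    host: 'localhost',
                    port: 9000,
                    method: req.method,
                    path: req.url,
                    headers: req.headers
                }, function (backendRes) {
                    res.writeHead(backendRes.statusCode, backendRes.headers);
                    backendRes.pipe(res);
                });
                upstream.end(body);
            });
        } else {
            // Everything that isn't a validated form post could be proxied
            // straight through; omitted here to keep the sketch short.
            res.writeHead(404);
            res.end();
        }
    }).listen(8080);

The plumbing is the same as in the previous sketch; the only difference is that a valid request is handed off instead of being answered in javascript.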