Does creating functions consume more memory?
Yes, creating functions uses more memory.
... and, no, interpreters don't optimize Case A down to a single function.
The reason is that the JS scope chain requires each instance of a function to capture the variables that are in scope at the time it is created. That said, modern interpreters handle Case A better than they used to, largely because the performance of closures was a known issue a couple of years ago.
Mozilla advises avoiding unnecessary closures for this reason, but closures remain one of the most powerful and most frequently used tools in a JS developer's toolkit.
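For context, Case A and Case B from the question were roughly along these lines (this is a reconstruction, not the exact code from the question; the A/B suffixes are mine, and the foo bodies are placeholders):

// Case A: foo is defined inside the constructor, so every instance gets
// its own function object, which also closes over the constructor's scope.
function ConstructorA() {
    this.foo = function () { /* ... */ };
}

// Case B: foo is defined once on the prototype and shared by every instance.
function ConstructorB() {}
ConstructorB.prototype.foo = function () { /* ... */ };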
Update: I just ran a test that creates 1M 'instances' of Constructor, using node.js (which runs V8, the JS engine in Chrome).
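The test was shaped roughly like this (the body of foo and the way the caseA flag is applied are illustrative, not the exact script):

// Toggle to compare the two cases.
var caseA = true;

function Constructor() {
    if (caseA) {
        // Case A: a new closure is created for every instance.
        this.foo = function () { return 'foo'; };
    }
}

if (!caseA) {
    // Case B: one function, shared by all instances via the prototype.
    Constructor.prototype.foo = function () { return 'foo'; };
}

// Hold references so nothing is garbage-collected before measuring.
var instances = [];
for (var i = 0; i < 1000000; i++) {
    instances.push(new Constructor());
}

console.log(process.memoryUsage());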
With caseA = true, I get this memory usage:
{
    rss: 212291584,       // 212 MB
    vsize: 3279040512,    // 3279 MB
    heapTotal: 203424416, // 203 MB
    heapUsed: 180715856   // 180 MB
}
And with caseA = false, I get this memory usage:
{
    rss: 73535488,        // 73 MB
    vsize: 3149352960,    // 3149 MB
    heapTotal: 74908960,  // 74 MB
    heapUsed: 56308008    // 56 MB
}
So the closure functions are definitely consuming significantly more memory, by almost 3X. But in the absolute sense, we're only talking about a difference of roughly 140 bytes per instance. (However, that will increase with the number of in-scope variables you have when the function is created.)
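For the curious, that per-instance figure falls straight out of the rss numbers reported above:

var rssCaseA = 212291584; // bytes with caseA = true
var rssCaseB = 73535488;  // bytes with caseA = false
var instances = 1000000;

console.log((rssCaseA - rssCaseB) / instances); // ~139 bytes per instance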
I believe, after some brief testing in node, that in both Case A and Case B there is only one copy of the actual code for the function foo in memory.
Case A - a function object is created for each execution of Constructor(), storing a reference to the function's code and to its current execution scope.
Case B - there is only one scope and one function object, shared via the prototype.
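You can see that difference directly by comparing foo across two instances (using constructors shaped like the Case A / Case B sketch above):

function ConstructorA() {
    this.foo = function () { /* ... */ };
}
function ConstructorB() {}
ConstructorB.prototype.foo = function () { /* ... */ };

var a1 = new ConstructorA(), a2 = new ConstructorA();
var b1 = new ConstructorB(), b2 = new ConstructorB();

console.log(a1.foo === a2.foo); // false: each Case A instance has its own function object
console.log(b1.foo === b2.foo); // true: Case B instances share one function object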