Performance difference between returning a value directly or creating a temporary variable

In these basic situations, readability always trumps performance differences. I'd consider this a micro-optimisation at best, and micro-optimisations like this largely turn out to be a waste of time. Whatever you save here will be eaten up by a single non-deterministic GC run.

Most of the time there is no difference in the resulting code if the compiler is allowed to optimise it. The resulting IL in this case seems to have a few extra opcodes for a reference to the string on the stack, but what the JIT then does with them is anyone's guess.
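
For illustration, here is a minimal sketch of the two variants in question (the method names are hypothetical). In a Debug build the second variant produces an extra stloc/ldloc pair in the IL; an optimised Release build typically emits identical code for both:

    // Variant 1: return the expression directly.
    public static string Greet(string name)
    {
        return "Hello, " + name;
    }

    // Variant 2: assign to a temporary first. Debug IL stores and
    // reloads `message` (stloc/ldloc); an optimised build drops it.
    public static string GreetViaLocal(string name)
    {
        string message = "Hello, " + name;
        return message;
    }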

I sometimes break results out into temporary variables so I can inspect them before returning, but I never worry about the performance impact. Most importantly, I have never seen a case where this sort of micro-optimisation was required to solve a performance problem.


If the local variable is actually used by the executable code, and not optimised away, the difference is still minimal.

The local variable uses just the stack space needed to store the reference, and allocating the space for it takes no time at all as the stack frame is always allocated anyway.

The time taken for the extra copy to and from the local variable is barely measurable. It would only matter if you called the method millions of times in a tight loop, and even then it would be a tiny fraction of the execution time compared to the cost of allocating the string.
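
If you want to see this for yourself, here is a minimal benchmark sketch using BenchmarkDotNet (an assumed NuGet dependency; class and method names are illustrative). The string allocation dominates both measurements, so any difference between the two styles disappears into the noise:

    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;

    public class ReturnStyleBenchmark
    {
        private readonly int _value = 42;

        // Return the concatenated string directly.
        [Benchmark(Baseline = true)]
        public string DirectReturn()
        {
            return "value: " + _value;
        }

        // Copy through a local variable first.
        [Benchmark]
        public string ViaLocal()
        {
            string result = "value: " + _value;
            return result;
        }
    }

    public static class Program
    {
        public static void Main() => BenchmarkRunner.Run<ReturnStyleBenchmark>();
    }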


In an optimised build, the local variable is optimised away.

There is no performance impact from using a local variable before a return statement.

Check here to see the compiled output of two classes.

I always prefer using a local variable, as it speeds up debugging. According to this, developers spend 75% of their time debugging.
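
As a hypothetical illustration of the debugging benefit: with a temporary variable you can set a breakpoint on the return line and inspect the computed value before it leaves the method, which is awkward when the expression is returned directly:

    public static decimal TotalWithTax(decimal price, decimal taxRate)
    {
        decimal total = price * (1 + taxRate);
        return total; // breakpoint here shows `total` before it is returned
    }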