Why would one drive LEDs with a common emitter?
I would argue that there are fewer "gotchas" with option A. I would recommend option A to people of unknown electronics skill because there's not a lot that can keep it from working. For option B to be viable, the following conditions must be true:
- \$V_{CC_{LED}}\$ must be equal to \$V_{CC_{CONTROL}}\$
- \$V_{CC}\$ must be greater than \$V_{f_{LED}} + V_{BE}\$
- The switching device must be a BJT; the topology doesn't carry over directly to a MOSFET
These conditions are not as universal as they might first seem. The first one rules out any auxiliary power supply for the load that is separate from the logic power supply. The second starts constraining \$V_{CC}\$ even for a single LED once you get into blue or white LEDs with \$V_f > 3.0\$ V and a controller running off a supply below 5.0 V. And the third means you can't simply swap the BJT in option B for a MOSFET if you wanted to eliminate the base current.
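To put rough numbers on the blue/white LED case (assuming a typical \$V_{f_{LED}} \approx 3.2\$ V and \$V_{BE} \approx 0.7\$ V):

\$V_{f_{LED}} + V_{BE} \approx 3.2\text{ V} + 0.7\text{ V} = 3.9\text{ V}\$

A 3.3 V logic output can't reach that, so option B can't fully turn the LED on in that case, while option A is unaffected.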
Additionally, option B makes it (marginally, but still) more complicated to calculate your load resistance. With option A, you can lean on the analogy "consider the transistor to operate like a switch", which is easy to understand, and then use the familiar equation to calculate \$R_{load}\$:
\$R_{load}=\dfrac{V_{CC}-V_{f_{LED}}}{I_{LED}}\$
Compare that to what option B requires, and you can see the marginal increase in difficulty:
\$R_{load}=\dfrac{V_{CC}-V_{f_{LED}}-V_{BE}}{I_{LED}}\$
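To put numbers on the comparison (assuming, say, \$V_{CC} = 5\$ V, a red LED with \$V_{f_{LED}} \approx 2.0\$ V, \$V_{BE} \approx 0.7\$ V, and a target of 10 mA):

\$R_{load_{A}}=\dfrac{5\text{ V}-2.0\text{ V}}{10\text{ mA}}=300\ \Omega \qquad R_{load_{B}}=\dfrac{5\text{ V}-2.0\text{ V}-0.7\text{ V}}{10\text{ mA}}=230\ \Omega\$

Neither calculation is hard, but the second has one more term to remember.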
Couple that with the fact that the advantages of option B are often not needed. Aside from the reduced part count, the base current in option A shouldn't increase power consumption by more than 10%, and LEDs are rarely (an unsubstantiated, qualitative guess) switched fast enough for the BJT's saturation recovery (storage time) to matter.
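As a rough sanity check on that 10% figure: if the base is driven with the common \$I_B \approx I_C/10\$ rule of thumb to guarantee saturation, an LED running at 10 mA costs about

\$I_B \approx \dfrac{10\text{ mA}}{10} = 1\text{ mA}\$

of extra current from the logic supply, i.e. roughly a 10% overhead on that branch.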
An even better variation on your option "B" is to put the LED in series with the collector, while leaving the resistor in series with the emitter.
(Schematic: NPN transistor with the LED in the collector leg and the resistor from emitter to ground; drawn in CircuitLab.)
This turns the transistor into a controlled current sink, where the current is set by the base voltage minus \$V_{BE}\$ dropped across the resistor. The base voltage normally comes from a digital output of a microcontroller, which is fed from a regulator, so its value is tightly controlled. For example, with 3.3 V logic and a 270 Ω resistor, you'll get a nice 10 mA through the LED.
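Spelled out (taking \$V_{BE} \approx 0.7\$ V for a small-signal NPN, where \$V_{base}\$ is the logic-high output voltage and \$R_{E}\$ is the emitter resistor):

\$I_{LED} \approx \dfrac{V_{base}-V_{BE}}{R_{E}} = \dfrac{3.3\text{ V}-0.7\text{ V}}{270\ \Omega} \approx 9.6\text{ mA}\$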
The anode of the LED (or even a long string of LEDs) is fed from a higher voltage (which doesn't even need to be regulated), and whatever voltage isn't dropped across the LED(s) is dropped across the transistor.
Option B requires the control signal to be driven higher than the LED forward voltage plus the transistor's base-emitter drop. If your control driver can swing that high, option B is workable.
Option A, on the other hand, can easily drive any LED forward voltage, as long as your supply rail is high enough and you stay below the transistor's collector-emitter breakdown voltage.
Also keep in mind that if you intend to drive multiple LEDs in series, you have to add up the forward voltages of all the LEDs.
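For example, three red LEDs at roughly 2.0 V each need a supply comfortably above 6 V; with a 9 V supply and a 10 mA target, the series resistor would be about

\$R_{load}=\dfrac{9\text{ V}-3\times 2.0\text{ V}}{10\text{ mA}}=300\ \Omega\$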