What is the difference between String and string in C#?
String stands for System.String and it is a .NET Framework type. string is an alias in the C# language for System.String. Both of them are compiled to System.String in IL (Intermediate Language), so there is no difference. Choose what you like and use it consistently. If you code in C#, I'd prefer string, as it's a C# type alias and well known to C# programmers.
I can say the same about (int, System.Int32), etc.
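To illustrate the point (a minimal sketch, assuming a console project with using System;), the alias and the CLR name resolve to exactly the same type:

string a = "hello";
String b = a;                                                // no conversion - same type
int i = 42;
System.Int32 j = i;                                          // likewise for int and System.Int32
Console.WriteLine(typeof(string) == typeof(System.String));  // True
Console.WriteLine(typeof(int) == typeof(System.Int32));      // True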
string is an alias in C# for System.String. So technically, there is no difference. It's like int vs. System.Int32.
As far as guidelines go, it's generally recommended to use string any time you're referring to an object.
e.g.
string place = "world";
Likewise, I think it's generally recommended to use String if you need to refer specifically to the class.
e.g.
string greet = String.Format("Hello {0}!", place);
This is the style that Microsoft tends to use in their examples.
It appears that the guidance in this area may have changed, as StyleCop now enforces the use of the C# specific aliases.
The best answer I have ever heard about using the provided type aliases in C# comes from Jeffrey Richter in his book CLR Via C#. Here are his 3 reasons:
- I've seen a number of developers confused, not knowing whether to use string or String in their code. Because in C# the string (a keyword) maps exactly to System.String (an FCL type), there is no difference and either can be used.
- In C#, long maps to System.Int64, but in a different programming language, long could map to an Int16 or Int32. In fact, C++/CLI treats long as an Int32. Someone reading source code in one language could easily misinterpret the code's intention if he or she were used to programming in a different programming language. In fact, most languages won't even treat long as a keyword and won't compile code that uses it.
- The FCL has many methods that have type names as part of their method names. For example, the BinaryReader type offers methods such as ReadBoolean, ReadInt32, ReadSingle, and so on, and the System.Convert type offers methods such as ToBoolean, ToInt32, ToSingle, and so on. Although it's legal to write the following code, the line with float feels very unnatural to me, and it's not obvious that the line is correct:
BinaryReader br = new BinaryReader(...);
float val = br.ReadSingle(); // OK, but feels unnatural
Single val = br.ReadSingle(); // OK and feels good
So there you have it. I think these are all really good points. However, I don't find myself using Jeffrey's advice in my own code. Maybe I'm too stuck in my C# world, but I end up trying to make my code look like the framework code.
Just for the sake of completeness, here's a brain dump of related information...
As others have noted, string is an alias for System.String. Assuming your code using String compiles to System.String (i.e. you haven't got a using directive for some other namespace with a different String type), they compile to the same code, so at execution time there is no difference whatsoever. This is just one of the aliases in C#. The complete list is:
object: System.Object
string: System.String
bool: System.Boolean
byte: System.Byte
sbyte: System.SByte
short: System.Int16
ushort: System.UInt16
int: System.Int32
uint: System.UInt32
long: System.Int64
ulong: System.UInt64
float: System.Single
double: System.Double
decimal: System.Decimal
char: System.Char
Apart from string and object, the aliases are all to value types. decimal is a value type, but not a primitive type in the CLR. The only primitive type which doesn't have an alias is System.IntPtr.
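Those claims are easy to verify at runtime (a quick sketch, assuming a console app with using System;):

Console.WriteLine(typeof(int).IsPrimitive);     // True  - int is a CLR primitive
Console.WriteLine(typeof(decimal).IsPrimitive); // False - decimal is a value type but not a primitive
Console.WriteLine(typeof(decimal).IsValueType); // True
Console.WriteLine(typeof(IntPtr).IsPrimitive);  // True  - a primitive, yet it has no alias here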
In the spec, the value type aliases are known as "simple types". Literals can be used for constant values of every simple type; no other value types have literal forms available. (Compare this with VB, which allows DateTime literals, and has an alias for it too.)
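For example, each simple type has a literal form or suffix (an illustrative sketch; the suffixes shown are the standard ones):

int i = 123;
uint u = 123U;
long l = 123L;
ulong ul = 123UL;
float f = 1.5F;
double d = 1.5;
decimal m = 1.5M;
char c = 'x';
bool b = true;
byte by = 42;        // no suffix - constant int expressions convert implicitly when in range
// There is no comparable literal form for other value types, e.g. DateTime.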
There is one circumstance in which you have to use the aliases: when explicitly specifying an enum's underlying type. For instance:
public enum Foo : UInt32 {} // Invalid
public enum Bar : uint {} // Valid
That's just a matter of the way the spec defines enum declarations - the part after the colon has to be the integral-type production, which is one token of sbyte, byte, short, ushort, int, uint, long, ulong, char... as opposed to a type production as used by variable declarations, for example. It doesn't indicate any other difference.
Finally, when it comes to which to use: personally I use the aliases everywhere for the implementation, but the CLR type for any APIs. It really doesn't matter too much which you use in terms of implementation - consistency among your team is nice, but no-one else is going to care. On the other hand, it's genuinely important that if you refer to a type in an API, you do so in a language-neutral way. A method called ReadInt32 is unambiguous, whereas a method called ReadInt requires interpretation. The caller could be using a language that defines an int alias for Int16, for example. The .NET framework designers have followed this pattern, good examples being in the BitConverter, BinaryReader and Convert classes.
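To make that concrete, here's a hedged sketch of a hypothetical API (the interface and member names are made up for illustration, not taken from the framework):

// Using CLR type names in member names keeps the contract unambiguous across languages.
public interface IRecordReader
{
    int ReadInt32();     // clearly a 32-bit integer, whatever "int" means in the caller's language
    float ReadSingle();  // clearly a 32-bit float
    // int ReadInt();    // ambiguous - "int" could mean Int16, Int32 or Int64 elsewhere
}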