What is the difference between encode/decode?
Converting a unicode string to a string of bytes is known as encoding. Use u'...'.encode(encoding).
Example:
>>> u'æøå'.encode('utf8')
'\xc3\xa6\xc3\xb8\xc3\xa5'
>>> u'æøå'.encode('latin1')
'\xe6\xf8\xe5'
>>> u'æøå'.encode('ascii')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-2: ordinal not in range(128)
You typically encode a unicode string whenever you need to use it for I/O, for instance to transfer it over the network or to save it to a disk file.
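For instance, here is a minimal Python 2 sketch of saving a unicode string to a file (the filename out.txt is just a hypothetical example):
>>> data = u'æøå'
>>> f = open('out.txt', 'wb')       # open in binary mode; filename is hypothetical
>>> f.write(data.encode('utf8'))    # encode to bytes before writing
>>> f.close()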
Converting a string of bytes to a unicode string is known as decoding. Use unicode('...', encoding) or '...'.decode(encoding).
Example:
>>> u'æøå'
u'\xe6\xf8\xe5'   # the interpreter prints the unicode object like so
>>> unicode('\xe6\xf8\xe5', 'latin1')
u'\xe6\xf8\xe5'
>>> '\xe6\xf8\xe5'.decode('latin1')
u'\xe6\xf8\xe5'
You typically decode a string of bytes whenever you receive string data from the network or from a disk file.
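And a matching sketch of reading those bytes back and decoding them, assuming the hypothetical out.txt from above contains UTF-8 data:
>>> f = open('out.txt', 'rb')
>>> raw = f.read()                  # raw is a byte string (str)
>>> f.close()
>>> raw.decode('utf8')              # decode the bytes to get unicode back
u'\xe6\xf8\xe5'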
Note that Python 3 changes this model: str is always a Unicode string, encoded data lives in the separate bytes type, and str.encode() returns bytes while bytes.decode() returns str, so the examples above apply to Python 2 only.
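For comparison, a quick Python 3 session:
>>> 'æøå'.encode('utf8')                        # str -> bytes
b'\xc3\xa6\xc3\xb8\xc3\xa5'
>>> b'\xc3\xa6\xc3\xb8\xc3\xa5'.decode('utf8')  # bytes -> str
'æøå'
>>> 'æøå'.decode('utf8')                        # str has no decode method anymore
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'str' object has no attribute 'decode'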
Some good links:
- The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)
- Unicode HOWTO
The decode method of unicode strings really doesn't have any applications at all (unless you have some non-text data in a unicode string for some reason -- see below). It is mainly there for historical reasons, I think. In Python 3 it is completely gone.
unicode().decode() will perform an implicit encoding of s using the default (ascii) codec. Verify this like so:
>>> s = u'ö'
>>> s.decode()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeEncodeError: 'ascii' codec can't encode character u'\xf6' in position 0:
ordinal not in range(128)
>>> s.encode('ascii')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeEncodeError: 'ascii' codec can't encode character u'\xf6' in position 0:
ordinal not in range(128)
The error messages are exactly the same.
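You can watch the implicit encoding step succeed with a pure-ASCII string; this sketch spells out the equivalence under the default ascii codec:
>>> u'abc'.decode('utf-8')                   # works: implicit encode to ascii first
u'abc'
>>> u'abc'.encode('ascii').decode('utf-8')   # the explicit spelling of the same thing
u'abc'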
For str().encode() it's the other way around -- it attempts an implicit decoding of s with the default encoding:
>>> s = 'ö'
>>> s.decode('utf-8')
u'\xf6'
>>> s.encode()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 0:
ordinal not in range(128)
Used like this, str().encode() is also superfluous.
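The same demonstration works in the mirror direction: with ASCII-only content the implicit decoding succeeds, and spelling it out shows what str.encode() does under the hood:
>>> 'abc'.encode('utf-8')                    # works: implicit decode from ascii first
'abc'
>>> 'abc'.decode('ascii').encode('utf-8')    # the explicit spelling of the same thing
'abc'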
But there is another application of the latter method that is useful: there are encodings that have nothing to do with character sets, and thus can be applied to 8-bit strings in a meaningful way:
>>> s.encode('zip')
'x\x9c;\xbc\r\x00\x02>\x01z'
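Python 2 ships a few more of these byte-to-byte codecs, for example hex, base64 and rot13:
>>> 'hello'.encode('hex')
'68656c6c6f'
>>> 'hello'.encode('base64')
'aGVsbG8=\n'
>>> 'hello'.encode('rot13')
'uryyb'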
You are right, though: the ambiguous usage of "encoding" for both these applications is... awkward. Again, with separate bytes and str types in Python 3, this is no longer an issue.