Bytes in a unicode Python string

The question claims: "In Python 2, Unicode strings may contain both unicode and bytes."

No, they may not. They contain Unicode characters.

Within the original string, \xd0 is not a byte that's part of a UTF-8 encoding. It is the Unicode character with code point 208. u'\xd0' == u'\u00d0'. It just happens that the repr for Unicode strings in Python 2 prefers to represent characters with \x escapes where possible (i.e. code points < 256).
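
For example, in a Python 2 session:

>>> u'\xd0' == u'\u00d0'
True
>>> ord(u'\xd0')
208
>>> repr(u'\u00d0')
"u'\\xd0'"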

There is no way to look at the string and tell that the \xd0 byte is supposed to be part of some UTF-8 encoded character, or if it actually stands for that Unicode character by itself.
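
For example, the same two values can be read either way, and nothing in the string itself tells you which was meant:

>>> u'\xd0\xb5'                    # two Unicode characters, taken at face value
u'\xd0\xb5'
>>> '\xd0\xb5'.decode('utf-8')     # the same two byte values as one UTF-8 sequence
u'\u0435'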

However, if you assume that you can always interpret those values as encoded ones, you could try writing something that analyzes each character in turn (use ord to convert to a code-point integer), decodes runs of characters < 256 as UTF-8, and passes characters >= 256 through unchanged.
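
A minimal sketch of that idea (Python 2; reinterpret_utf8 is just an illustrative name, and the decode will raise UnicodeDecodeError if a run of low code points is not actually valid UTF-8):

def reinterpret_utf8(u):
    # Walk the Unicode string: collect consecutive code points < 256 as raw bytes,
    # decode each such run as UTF-8, and pass code points >= 256 through unchanged.
    out, buf = [], []
    for ch in u:
        if ord(ch) < 256:
            buf.append(chr(ord(ch)))   # code point < 256 -> the byte with that value
        else:
            if buf:
                out.append(''.join(buf).decode('utf-8'))
                buf = []
            out.append(ch)
    if buf:
        out.append(''.join(buf).decode('utf-8'))
    return u''.join(out)

With the string from the question, reinterpret_utf8(u'\u0420\u0443\u0441\u0441\u043a\u0438\u0439 \xd0\xb5\xd0\xba') returns u'\u0420\u0443\u0441\u0441\u043a\u0438\u0439 \u0435\u043a'.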


You should convert those Unicode characters to bytes (chr), then decode them as UTF-8.

u'\xd0' == u'\u00d0' is True

$ python
>>> import re
>>> a = u'\u0420\u0443\u0441\u0441\u043a\u0438\u0439 \xd0\xb5\xd0\xba'
>>> re.sub(r'[\000-\377]*', lambda m:''.join([chr(ord(i)) for i in m.group(0)]).decode('utf8'), a)
u'\u0420\u0443\u0441\u0441\u043a\u0438\u0439 \u0435\u043a'
  • r'[\000-\377]*' matches runs of Unicode characters in u'[\u0000-\u00ff]*'
  • u'\xd0\xb5\xd0\xba' == u'\u00d0\u00b5\u00d0\u00ba'
  • The original string uses UTF-8 encoded bytes as Unicode code points (this is the PROBLEM)
  • I solve the problem by treating those mistaken Unicode characters as the corresponding bytes
  • I find all these mistaken characters, convert them to bytes (chr), then decode them as UTF-8 (see the caveat below)
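
One caveat: this assumes every run of characters below u'\u0100' really is UTF-8. If a run is not (for example u'bl\xe4\xe4', where \xe4 genuinely means 'ä'), the decode step raises an error:

>>> ''.join([chr(ord(i)) for i in u'bl\xe4\xe4']).decode('utf8')
Traceback (most recent call last):
  ...
UnicodeDecodeError: 'utf8' codec can't decode byte 0xe4 in position 2: invalid continuation byte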

If I'm wrong, please tell me.


(In response to the comments above): this code converts everything that looks like UTF-8 and leaves other code points as they are:

import re

a = u'\u0420\u0443\u0441 utf:\xd0\xb5\xd0\xba bytes:bl\xe4\xe4'

def convert(s):
    try:
        # Reinterpret the matched code points (all < 256) as raw bytes via latin-1,
        # then try to decode those bytes as UTF-8.
        return s.group(0).encode('latin1').decode('utf8')
    except UnicodeDecodeError:
        # Not valid UTF-8: leave the run unchanged.
        return s.group(0)

a = re.sub(r'[\x80-\xFF]+', convert, a)
print a.encode('utf8')

Result:

Рус utf:ек bytes:blää  
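
The encode('latin1') step works because, for code points 0-255, Latin-1 maps each code point to the byte with the same value, so it simply reinterprets that part of the Unicode string as the raw bytes it was mistaken for:

>>> u'\xd0\xb5'.encode('latin1')
'\xd0\xb5'
>>> u'\xd0\xb5'.encode('latin1').decode('utf8')
u'\u0435'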

The problem is that your string is not actually encoded in a specific encoding. Your example string:

a = u'\u0420\u0443\u0441\u0441\u043a\u0438\u0439 \xd0\xb5\xd0\xba'

is mixing Python's internal representation of Unicode strings with UTF-8 encoded text. If we just consider the 'special' characters:

>>> orig = u'\u0435\u043a'
>>> bytes = u'\xd0\xb5\xd0\xba'
>>> print orig
ек
>>> print bytes
ÐµÐº

But, you say, bytes is UTF-8 encoded:

>>> print bytes.encode('utf-8')
ÐµÐº
>>> print bytes.encode('utf-8').decode('utf-8')
ÐµÐº

Wrong! But what about:

>>> bytes = '\xd0\xb5\xd0\xba'
>>> print bytes
ек
>>> print bytes.decode('utf-8')
ек

Hurrah.

So, what does this mean for you? It means you're (probably) solving the wrong problem. What you should be asking, and trying to figure out, is why your strings are in this form to begin with, and how to avoid or fix that before you have them all mixed up.
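
One common way to end up with such a mixed string (an illustration, not necessarily what happened in your code) is decoding UTF-8 bytes with the wrong codec, e.g. Latin-1, and then concatenating the result with correctly decoded text:

>>> utf8_bytes = '\xd0\xb5\xd0\xba'
>>> utf8_bytes.decode('utf-8')     # right codec
u'\u0435\u043a'
>>> utf8_bytes.decode('latin-1')   # wrong codec: each byte becomes its own code point
u'\xd0\xb5\xd0\xba'
>>> u'\u0420\u0443\u0441 ' + utf8_bytes.decode('latin-1')
u'\u0420\u0443\u0441 \xd0\xb5\xd0\xba'

Finding and fixing that spot is usually a better solution than repairing the strings afterwards.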