Unicode error when outputting python script output to file

Windows behaviour in this case is a bit complicated. You should follow the advice in the other answers: use unicode for strings internally, and decode during input.

As for your question: when stdout is redirected you need to print encoded strings (only you know which encoding!), but for plain screen output you have to print unicode strings (and Python or the Windows console handles the conversion to the proper encoding).

I recommend structuring your script this way:

# -*- coding: utf-8 -*- 
import sys, codecs
# set up output encoding
if not sys.stdout.isatty():
    # here you can set encoding for your 'out.txt' file
    sys.stdout = codecs.getwriter('utf8')(sys.stdout)

# next, you will print all strings in unicode
print u"Unicode string ěščřžý"
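The codecs.getwriter wrapper used above can be demonstrated in isolation by wrapping an in-memory byte stream instead of sys.stdout; the mechanism is the same as with a redirected stdout. A minimal sketch (runs under Python 2 or 3):

```python
import codecs
import io

# an in-memory byte stream standing in for a redirected stdout
buf = io.BytesIO()
writer = codecs.getwriter('utf8')(buf)

# unicode text written through the wrapper arrives as UTF-8 bytes
writer.write(u"Unicode string ěščřžý")
print(buf.getvalue())
```

The wrapper encodes every unicode string on the way through, so the underlying stream only ever sees bytes.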

Update: see also this similar question: Setting the correct encoding when piping stdout in Python


You can use the codecs module to write unicode data to a file:

import codecs
f = codecs.open("out.txt", "w", "utf-8")
f.write(something)
f.close()
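A file written this way can be read back with codecs.open in read mode, which decodes the bytes for you. A minimal round-trip sketch (the filename out.txt follows the question; runs under Python 2 or 3):

```python
import codecs

# write unicode text out as UTF-8 bytes
with codecs.open("out.txt", "w", "utf-8") as f:
    f.write(u"Unicode string ěščřžý")

# read it back; codecs.open decodes the bytes to unicode
with codecs.open("out.txt", "r", "utf-8") as f:
    text = f.read()

assert text == u"Unicode string ěščřžý"
```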

'print' writes to the standard output, and if your console doesn't support utf-8 it can cause this error even when you pipe stdout to a file.


It makes no sense to convert text to unicode in order to print it. Work with your data in unicode, convert it to some encoding for output.

What your code does instead: You're on Python 2, so your default string type (str) is a bytestring. In your statement you start with some utf-8-encoded byte strings, convert them to unicode, and surround them with quotes (regular str that are coerced to unicode in order to combine into one string). You then pass this unicode string to print, which pushes it to sys.stdout. To do so, it needs to turn it into bytes. If you are writing to the Windows console, it can negotiate an encoding somehow, but if you redirect to a regular dumb file, it falls back on ascii and complains because there's no loss-less way to do that.
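The ascii fallback described above can be reproduced directly: encoding a non-ASCII unicode string as ascii raises the same UnicodeEncodeError that print triggers on a redirect. A sketch of the failure mode, not part of the original code:

```python
# decoding UTF-8 bytes gives a unicode string...
text = b"unicode \xc3\xbcber alles!".decode('utf-8')

# ...which ascii cannot represent loss-lessly
try:
    text.encode('ascii')
except UnicodeEncodeError as e:
    # this is the error print raises when stdout falls back on ascii
    print(e)
```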

Solution: Don't give print a unicode string. Encode it yourself to the representation of your choice:

print "Latin-1:", "unicode über alles!".decode('utf-8').encode('latin-1')
print "Utf-8:", "unicode über alles!".decode('utf-8').encode('utf-8')
print "Windows:", "unicode über alles!".decode('utf-8').encode('cp1252')

All of this should work without complaint when you redirect. It probably won't look right on your screen, but open the output file with Notepad or another editor and check whether it is set to the right encoding. (Utf-8 is the only one that has a hope of being auto-detected; cp1252 is a likely Windows default.)

Once you get that down, clean up your code and avoid using print for file output. Use the codecs module, and open files with codecs.open instead of plain open.

PS. If you're decoding a utf-8 string, the conversion to unicode is loss-less: you don't need the errors='ignore' flag. That flag is appropriate when you convert to ascii or Latin-2 or whatever, and you want to simply drop characters that don't exist in the target codepage.
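For example, encoding to ascii with 'ignore' silently drops the characters ascii cannot represent, while a utf-8 round trip loses nothing. A quick illustration (runs under Python 2 or 3):

```python
text = u"unicode \u00fcber alles!"

# loss-less round trip: utf-8 can represent every character
assert text.encode('utf-8').decode('utf-8') == text

# lossy: with 'ignore', ascii simply drops the 'ü'
print(text.encode('ascii', 'ignore'))
```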