More generally, we can always convert a string to a different encoding than the source character set default, but we must provide an explicit encoding name to encode to and decode from:

```
>>> S = 'AÄBèC'
>>> S
'AÄBèC'
>>> S.encode()                       # Default utf-8 encoding
b'A\xc3\x84B\xc3\xa8C'
>>> T = S....
```
In practice, you’ll often load such text from files instead.
As we’ll see later in this chapter, 3.0’s file object (created with the `open` built-in function) automatically decodes text strings as they are read and encodes them when they are written; because of this, your script can often deal with strings generically, without having to code special characters directly. Later i...
`unicode` is available in Python 2.6, but it is a distinct data type from `str`, and it allows free mixing of normal and Unicode strings when they are compatible.
In fact, you can essentially pretend 2.6’s `str` is 3.0’s `bytes` when it comes to decoding raw bytes into a Unicode string, as long as it’s in the proper form.
Here is 2.6 in action (all other sections in this chapter are run under 3.0):

```
C:\misc> c:\python26\python
>>> import sys
>>> sys.version
'2.6 (r26:66721, Oct  2 2008, 11:35:03) [MSC v.1500 32 bit (Intel)]'
>>> S = 'A\xC4B\xE8C'                # String of 8-bit bytes
>>> print S                          # Some are non-ASCII
AÄBèC...
```
However, as with `bytes` in 3.0, the `"\u..."` and `"\U..."` escapes are recognized only for `unicode` strings in 2.6, not 8-bit `str` strings:

```
C:\misc> c:\python26\python
>>> U = u'A\xC4B\xE8C'               # Hex escapes for non-ASCII
>>> U
u'A\xc4B\xe8C'
>>> print U
AÄBèC
>>> U = u'A...
```
One of the primary differences between 2.6 and 3.0, though, is that `unicode` and non-Unicode `str` objects can be freely mixed in expressions; as long as the `str` is compatible with the `unicode`’s encoding, Python will automatically convert it up to `unicode` (in 3.0, `str` and `bytes` never mix automatically and req...
Like normal strings, Unicode strings may be concatenated, indexed, sliced, matched with the `re` module, and so on, and they cannot be changed in-place.
If you ever need to convert between the two types explicitly, you can use the built-in `str` and `unicode` functions:

```
>>> str(u'spam')                     # Unicode to normal
'spam'
>>> unicode('spam')                  # Normal to Unicode
u'spam'
```

However, this liberal approach to mixing string types in 2.6 only works if the stri...
To read and write Unicode files and encode or decode their content automatically, use 2.6’s `codecs.open` call, documented in the 2.6 library manual.
This call provides much the same functionality as 3.0’s `open` and uses 2.6 `unicode` objects to represent file content—reading a file translates encoded bytes into decoded Unicode characters, and writing translates strings to the desired encoding specified when the file is opened.

###### Source File Character Set Encodin...
For strings you code within your script files, Python uses the UTF-8 encoding by default, but it allows you to change this to support arbitrary character sets by including a comment that names your desired encoding.
The comment must be of this form and must appear as either the first or second line in your script in either Python 2.6 or 3.0:

```
# -*- coding: latin-1 -*-
```

When a comment of this form is present, Python will recognize strings represented natively in the given encoding.
This means you can edit your script file in a text editor that accepts and displays accented and other non-ASCII characters correctly, and Python will decode them correctly in your string literals.
For example, notice how the comment at the top of the following file, text.py, allows Latin-1 characters to be embedded in strings:

```
# -*- coding: latin-1 -*-

# Any of the following string literal forms work in latin-1.
# Changing the encoding above to either ascii or utf-8 fails,
# because the 0xc4 and 0x...
```
Instead, let’s dig a bit deeper into the operation sets provided by the new `bytes` type in 3.0. As mentioned previously, the 3.0 `bytes` object is a sequence of small integers, each of which is in the range 0 through 255, that happens to print as ASCII characters when displayed.
It supports sequence operations and most of the same methods available on `str` objects (and present in 2.X’s `str` type).
However, `bytes` does not support the `format` method or the `%` formatting expression, and you cannot mix and match `bytes` and `str` type objects without explicit conversions—you generally will use all `str` type objects and text files for text data, and all `bytes` type objects and binary files for binary data.

###### Meth...
The output can also tell you something about the expression operators they support (e.g., `__mod__` and `__rmod__` implement the `%` operator):

```
C:\misc> c:\python30\python

# Attributes unique to str
>>> set(dir('abc')) - set(dir(b'abc'))
{'isprintable', 'format', '__mod__', 'encode', 'isidentifier'...
```
Their unique attributes are generally methods that don’t apply to the other; for instance, `decode` translates a raw `bytes` into its `str` representation, and `encode` translates a string into its raw `bytes` representation.
Most of the methods are the same, though bytes methods require bytes arguments (again, 3.0 string types don’t mix).
Also recall that `bytes` objects are immutable, just like `str` objects in both 2.6 and 3.0 (error messages here have been shortened for brevity):

```
>>> B = b'spam'                      # b'...' bytes literal
>>> B.find(b'pa')
1
>>> B.replace(b'pa', b'XY')          # bytes methods expect bytes arguments
b'sXYm'
>>> B.sp...
```
Notice in the following that indexing a `bytes` object returns an integer giving the byte’s binary value; `bytes` really is a sequence of 8-bit integers, but it prints as a string of ASCII-coded characters when displayed as a whole for convenience.
To check a given byte’s value, use the `chr` built-in to convert it back to its character, as in the following:

```
>>> B = b'spam'                      # A sequence of small ints
>>> B                                # Prints as ASCII characters
b'spam'
>>> B[0]                             # Indexing yields an int
115
>>> B[-1]
109
>>> chr(B[0])                        # Show character ...
```
As we’ve seen, encoding takes a `str` and returns the raw binary byte values of the string according to the encoding specification; conversely, decoding takes a raw `bytes` sequence and converts it to its `str` representation—a series of possibly wide characters.
Both operations create new string objects:

```
>>> B = b'abc'
>>> B
b'abc'
>>> B = bytes('abc', 'ascii')
>>> B
b'abc'
>>> ord('a')
97
>>> B = bytes([97, 98, 99])
>>> B
b'abc'

>>> B = 'spam'.encode()              # Or bytes()
>>> B
b'spam'
>>>
>>> S = B.decode()...
```
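To make the symmetry concrete, here is a minimal round-trip sketch; it runs unchanged on modern Python 3, which behaves the same as 3.0 on this point:

```python
# str -> bytes via encode, bytes -> str via decode; both create new objects
S = 'spam'
B = S.encode('ascii')        # encode a str into raw bytes
S2 = B.decode('ascii')       # decode those bytes back into a str
print(B)                     # b'spam'
print(S2 == S)               # True
```

Passing an explicit encoding name, as done here, avoids any dependence on the platform default.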
Although Python 2.X automatically converts `str` to and from `unicode` when possible (i.e., when the `str` is 7-bit ASCII text), Python 3.0 requires specific string types in some contexts and expects manual conversions if needed:

```
# Must pass expected types to function and method calls
>>> B = b'spam'
>>> B.re...
```
First, though, we should introduce `bytes`’s very close, and mutable, cousin.

###### Using 3.0 (and 2.6) bytearray Objects

So far we’ve focused on `str` and `bytes`, since they subsume Python 2’s `unicode` and `str`.
Python 3.0 has a third string type, though: `bytearray`, a mutable sequence of integers in the range 0 through 255, is essentially a mutable variant of `bytes`.
As such, it supports the same string methods and sequence operations as bytes, as well as many of the mutable in-place-change operations supported by lists.
The bytearray type is also available in Python 2.6 as a back-port from 3.0, but it does not enforce the strict text/binary distinction there that it does in 3.0. Let’s take a quick tour.
`bytearray` objects may be created by calling the `bytearray` built-in.
In Python 2.6, any string may be used to initialize:

```
# Creation in 2.6: a mutable sequence of small (0..255) ints
>>> S = 'spam'
>>> C = bytearray(S)                 # A back-port from 3.0 in 2.6
>>> C                                # b'..' == '..' in 2.6 (str)
bytearray(b'spam')
```

In Python 3.0, an encoding name or byte string is req...
Besides named methods, the `__iadd__` and `__setitem__` methods in `bytearray` implement `+=` in-place concatenation and index assignment, respectively:

```
# Methods overlap with both str and bytes, but also has list's mutable methods
>>> set(dir(b'abc')) - set(dir(bytearray(b'abc')))
{'__getnewargs__'}
>...
```
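As a quick illustration of that mutability (this sketch runs unchanged on modern Python 3):

```python
# bytearray supports in-place changes that bytes rejects
BA = bytearray(b'spam')
BA[0] = ord('S')             # __setitem__: index assignment takes an int
BA += b'!'                   # __iadd__: in-place concatenation
print(BA)                    # bytearray(b'Spam!')
```

Trying either operation on a plain `bytes` object raises a `TypeError` or `AttributeError` instead, since `bytes` is immutable.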
As mentioned earlier, the mode in which you open a file is crucial—it determines which object type you will use to represent the file’s content in your script.
Text mode implies `str` objects, and binary mode implies `bytes` objects:
- Text-mode files interpret file contents according to a Unicode encoding—either the default for your platform, or one whose name you pass in.
By passing in an encoding name to open, you can force conversions for various types of Unicode files.
Text-mode files also perform universal line-end translations: by default, all line-end forms map to the single `'\n'` character in your script, regardless of the platform on which you run it.
As described earlier, text files also handle reading and writing the byte order mark (BOM) stored at the start-of-file in some Unicode encoding schemes.
- Binary-mode files instead return file content to you raw, as a sequence of integers representing byte values, with no encoding or decoding and no line-end translat...
The default mode is `"rt"`; this is the same as `"r"`, which means text input (just as in 2.X). In 3.0, though, this mode argument to `open` also implies an object type for file content representation, regardless of the underlying platform: text files return a `str` for reads and expect one for writes, but binary files re...
As long as you’re processing basic text files (e.g., ASCII) and don’t care about circumventing the platform-default encoding of strings, files in 3.0 look and feel much as they do in 2.X (for that matter, so do strings in general).
The following, for instance, writes one line of text to a file and reads it back in 3.0, exactly as it would in 2.6 (note that `file` is no longer a built-in name in 3.0, so it’s perfectly OK to use it as a variable here):

```
C:\misc> c:\python30\python

# Basic text files (and strings) work the s...
```
The only major difference is that text files automatically map `\n` end-of-line characters to and from `\r\n` on Windows, while binary files do not (I’m stringing operations together into one-liners here just for brevity):

```
C:\misc> c:\python26\python
>>> open('temp', 'w').write('abd\n')     # Write in text mode: ...
```
To demonstrate, let’s write a text file and read it back in both modes in 3.0.
Notice that we are required to provide a `str` for writing, but reading gives us a `str` or a `bytes`, depending on the `open` mode:

```
C:\misc> c:\python30\python

# Write and read a text file
>>> open('temp', 'w').write('abc\n')     # Text mode output, provide a str
4
>>> open('temp', 'r').read()             # Te...
```
This is the same in 2.6, and it’s what we want for binary data (no translations should occur), although you can control this behavior with extra open arguments in 3.0 if desired. Now let’s do the same again, but with a binary file.
We provide a `bytes` to write in this case, and we still get back a `str` or a `bytes`, depending on the input mode:

```
# Write and read a binary file
>>> open('temp', 'wb').write(b'abc\n')   # Binary mode output, provide a bytes
4
>>> open('temp', 'r').read()             # Text mode input, returns a str
'abc\n'
...
```
Type requirements and file behavior are the same even if the data we’re writing to the binary file is truly binary in nature.
In the following, for example, the `"\x00"` is a binary zero byte and not a printable character:

```
# Write and read truly binary data
>>> open('temp', 'wb').write(b'a\x00c')  # Provide a bytes
3
>>> open('temp', 'r').read()             # Receive a str
'a\x00c'
>>> open('temp', 'rb').read()            # Receive...
```
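The same mode-implies-type behavior can be verified as a script; this sketch uses a temp file rather than the book's `temp` name, purely so it runs anywhere:

```python
import os, tempfile

# Write bytes in binary mode, then read the file back in both modes;
# text mode decodes to str, binary mode returns raw bytes
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, 'wb') as f:
    f.write(b'a\x00c')               # binary mode requires bytes
with open(path, 'r') as f:
    s = f.read()                     # text mode returns str
with open(path, 'rb') as f:
    b = f.read()                     # binary mode returns bytes
print(type(s).__name__, type(b).__name__)   # str bytes
os.unlink(path)
```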
In fact, most APIs in Python 3.0 that accept a `bytes` also allow a `bytearray`:

```
# bytearrays work too
>>> BA = bytearray(b'\x01\x02\x03')
>>> open('temp', 'wb').write(BA)
3
>>> open('temp', 'r').read()
'\x01\x02\x03'
>>> open('temp', 'rb').read()
b'\x01\x02\x03'
```

###### Type an...
As the following examples illustrate, we get errors (shortened here) if we try to write a `bytes` to a text file or a `str` to a binary file:

```
# Types are not flexible for file content
>>> open('temp', 'w').write('abc\n')     # Text mode makes and requires str
4
>>> open('temp', 'w').write(b'abc\n')
TypeError:...
```
Although it is often possible to convert between the types by encoding `str` and decoding `bytes`, as described earlier in this chapter, you will usually want to stick to either `str` for text data or `bytes` for binary data.
Because the str and bytes operation sets largely intersect, the choice won’t be much of a dilemma for most programs (see the string tools coverage in the final section of this chapter for some prime examples of this). In addition to type constraints, file content can matter in 3.0.
Text-mode output files require a str instead of a bytes for content, so there is no way in 3.0 to write truly binary data to a text-mode file.
Depending on the encoding rules, bytes outside the default character set can sometimes be embedded in a normal string, and they can always be written in binary mode.
However, because text-mode input files in 3.0 must be able to decode content per a Unicode encoding, there is no way to read truly binary data in text mode:

```
# Can't read truly binary data in text mode
>>> chr(0xFF)                            # FF is a valid char, FE is not
'ÿ'
>>> chr(0xFE)
UnicodeEncodeError: 'charmap' codec c...
```
It turns out to be easy to read and write Unicode text stored in files, because the 3.0 open call accepts an encoding for text files, which does the encoding and decoding for us automatically as data is transferred.
This allows us to process Unicode text created with encodings different from the platform default, and to store text in other encodings for conversion.

###### Reading and Writing Unicode in 3.0

In fact, we can convert a string to different encodings both manually with method calls and automatically on file input and ...
We’ll use the following Unicode string in this section to demonstrate:

```
C:\misc> c:\python30\python
>>> S = 'A\xc4B\xe8C'                    # 5-character string, non-ASCII
>>> S
'AÄBèC'
>>> len(S)
5
```

###### Manual encoding

As we’ve already learned, we can always encode such a string to raw bytes according ...
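For instance, this same five-character string encodes to different byte sequences under different schemes; the following sketch (which also runs on modern Python 3) shows two of them:

```python
# One str, two encodings: latin-1 maps each character to one byte,
# while utf-8 needs two bytes for each of the two non-ASCII characters
S = 'A\xc4B\xe8C'
print(len(S))                    # 5 characters
print(S.encode('latin-1'))       # 5 bytes
print(S.encode('utf-8'))         # 7 bytes
```

The character count stays fixed at five; only the encoded byte counts differ.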
As suggested in the prior section, Python 3.0 really must be able to decode the data in text files into a str string, according to either the default or a passed-in Unicode encoding name.
Trying to open a truly binary data file in text mode, for example, is unlikely to work in 3.0 even if you use the correct object types:

```
>>> file = open('python.exe', 'r')
>>> text = file.read()
UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 2: ...
>>> file = open('python.exe', 'rb')
...
```
To treat file content as Unicode text in 2.6, we need to use special tools instead of the general open built-in function, as we’ll see in a moment.
First, though, let’s turn to a more explosive topic....

###### Handling the BOM in 3.0

As described earlier in this chapter, some encoding schemes store a special byte order marker (BOM) sequence at the start of files, to specify data endianness or declare the encoding type.
Python both skips this marker on input and writes it on output if the encoding name implies it, but we sometimes must use a specific encoding name to force BOM processing explicitly.

For example, when you save a text file in Windows Notepad, you can specify its encoding type in a drop-down list—simple ASCII text, UTF-...
If a one-line text file named spam.txt is saved in Notepad as the encoding type “ANSI,” for instance, it’s written as simple ASCII text without a BOM.
When this file is read in binary mode in Python, we can see the actual bytes stored in the file.
When it’s read as text, Python performs end-of-line translation by default; we can decode it as explicit UTF-8 text since ASCII is a subset of this scheme (and UTF-8 is Python 3.0’s default encoding):

```
c:\misc> C:\Python30\python              # File saved in Notepad
>>> import sys
>>> sys.getdefaultencoding()
'utf-8'
...
```
When writing a Unicode file in Python code, we need a more explicit encoding name to force the BOM in UTF-8—“utf-8” does not write (or skip) the BOM, but “utf-8-sig” does:

```
>>> open('temp.txt', 'w', encoding='utf-8').write('spam\nSPAM\n')
10
>>> open('temp.txt', 'rb').read()        # No BOM
b'spam\r\nSPA...
```
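The difference between the two encoding names can be seen without files at all, by encoding a string directly; `codecs.BOM_UTF8` is the standard library's name for the 3-byte marker (a sketch in modern Python 3):

```python
import codecs

# 'utf-8' writes no BOM; 'utf-8-sig' prepends it on encode
# and strips it again on decode
plain = 'spam\n'.encode('utf-8')
signed = 'spam\n'.encode('utf-8-sig')
print(plain)                                # b'spam\n'
print(signed)                               # b'\xef\xbb\xbfspam\n'
print(signed.startswith(codecs.BOM_UTF8))   # True
print(signed.decode('utf-8-sig'))           # BOM dropped on decode
```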
More specific UTF-16 encoding names can specify different endianness, though you may have to manually write and skip the BOM yourself in some scenarios if it is required or present:

```
>>> sys.byteorder
'little'
>>> open('temp.txt', 'w', encoding='utf-16').write('spam\nSPAM\n')
10
>>> open('t...
```
You can achieve similar effects for Unicode files in 2.6, but the interface is different.
If you replace `str` with `unicode` and `open` with `codecs.open`, the result is essentially the same in 2.6:

```
C:\misc> c:\python26\python
>>> S = u'A\xc4B\xe8C'
>>> print S
AÄBèC
>>> len(S)
5
>>> S.encode('latin-1')
'A\xc4B\xe8C'
>>> S.encode('utf-8')
'A\xc3\x84B\xc3\xa8C'
>>> import codecs
...
```
We won’t cover any of these application-focused tools in much detail in this core language book, but to wrap up this chapter, here’s a quick look at four of the major tools impacted: the `re` pattern-matching module, the `struct` binary data module, the `pickle` object serialization module, and the `xml` package for parsing XML...
With `re`, strings that designate searching and splitting targets can be described by general patterns, instead of absolute text.
This module has been generalized to work on objects of any string type in 3.0—str, bytes, and bytearray—and returns result substrings of the same type as the subject string. Here it is at work in 3.0, extracting substrings from a line of text.
Within pattern strings, `(.*)` means any character (`.`), zero or more times (`*`), saved away as a matched substring (`()`).
Parts of the string matched by the parts of a pattern enclosed in parentheses are available after a successful match, via the `group` or `groups` method:

```
C:\misc> c:\python30\python
>>> import re
>>> S = 'Bugger all down here on earth!'        # Line of text
>>> B = b'Bugger all down here on earth!'       # Usually ...
```
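A minimal runnable version of the idea (modern Python 3 behaves the same way here): the pattern's type selects the result type.

```python
import re

# Matching a str subject with a str pattern yields str groups;
# matching a bytes subject with a bytes pattern yields bytes groups
S = 'Bugger all down here on earth!'
B = b'Bugger all down here on earth!'
print(re.match('(.*) down (.*) on (.*)', S).groups())
print(re.match(b'(.*) down (.*) on (.*)', B).groups())
```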
But note that, as in other APIs, you can’t mix `str` and `bytes` types in its calls’ arguments in 3.0 (although if you don’t plan to do pattern matching on binary data, you probably don’t need to care):

```
C:\misc> c:\python30\python
>>> import re
>>> S = 'Bugger all down here on earth!'
>>> B = b'Bugger all dow...
```
Although the last test in the following example fails on a type mismatch, most scripts will read binary data from a file, not create it as a string:

```
C:\misc> c:\python30\python
>>> import struct
>>> B = struct.pack('>i4sh', 7, b'spam', 8)
>>> B
b'\x00\x00\x00\x07spam\x00\x08'
>>> vals = struct.unpack('>i...
```
Code like this is one of the main places where programmers will notice the `bytes` object type:

```
C:\misc> c:\python30\python

# Write values to a packed binary file
>>> F = open('data.bin', 'wb')           # Open binary output file
>>> import struct
>>> data = struct.pack('>i4sh', 7, b'spam', 8)    # Crea...
```
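Putting the pieces together, here is a sketch of the full round trip through a packed binary file; the temp-file handling is a detail of this sketch, not part of the book's session:

```python
import os, struct, tempfile

# Pack native Python values into bytes, write them in binary mode,
# then read the bytes back and unpack them into a tuple of values
fd, path = tempfile.mkstemp()
os.close(fd)
data = struct.pack('>i4sh', 7, b'spam', 8)   # big-endian int, 4-byte string, short
with open(path, 'wb') as f:
    f.write(data)
with open(path, 'rb') as f:
    raw = f.read()
vals = struct.unpack('>i4sh', raw)
print(vals)                                  # (7, b'spam', 8)
os.unlink(path)
```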
However, if you must use or produce lower-level data used by C programs, networking libraries, or other interfaces, Python has tools to assist.

###### The pickle Object Serialization Module

We met the pickle module briefly in Chapters 9 and 30.
In Chapter 27, we also used the shelve module, which uses pickle internally.
For completeness here, keep in mind that the Python 3.0 version of the pickle module always creates a bytes object, regardless of the default or passed-in “protocol” (data format level).
You can see this by using the module’s `dumps` call to return an object’s pickle string:

```
C:\misc> C:\Python30\python
>>> import pickle                        # dumps() returns pickle string
>>> pickle.dumps([1, 2, 3])              # Python 3.0 default protocol=3=binary
b'\x80\x03]q\x00(K\x01K\x02K\x03e.'
>>> ...
```
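The bytes-only result is easy to verify; on modern Python 3 the exact byte string varies with the default protocol, so this sketch checks only the result type and the round trip:

```python
import pickle

# dumps always yields bytes in Python 3; loads reverses it exactly
data = pickle.dumps([1, 2, 3])
print(type(data).__name__)       # bytes
print(pickle.loads(data))        # [1, 2, 3]
```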
See reference books or Python’s manuals for more details on object pickling.

###### XML Parsing Tools

XML is a tag-based language for defining structured information, commonly used to define documents and data shipped over the Web.
Although some information can be extracted from XML text with basic string methods or the re pattern module, XML’s nesting of constructs and arbitrary attribute text tend to make full parsing more accurate. Because XML is such a pervasive format, Python itself comes with an entire package of XML parsing tools that sup...
First, we could run basic pattern matching on the file’s text, though this tends to be inaccurate if the text is unpredictable.
Where applicable, the `re` module we met earlier does the job—its `match` method looks for a match at the start of a string, `search` scans ahead for a match, and the `findall` method used here locates all places where the pattern matches in the string (the result comes back as a list of matched substr...
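A self-contained sketch of the pattern-matching approach; the sample text here is invented, standing in for the book's mybooks.xml file:

```python
import re

# findall returns every non-overlapping match as a list; the non-greedy
# (.*?) keeps each match inside a single <title> element
text = '<title>Learning Python</title><title>Programming Python</title>'
print(re.findall('<title>(.*?)</title>', text))
```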
DOM parses XML text into a tree of objects and provides an interface for navigating the tree to extract tag attributes and values; the interface is a formal specification, independent of Python:

```
# File domparse.py
from xml.dom.minidom import parse, Node
xmltree = parse('mybooks.xml')
for node1 in xmltree.g...
```
Under the SAX model, a class’s methods receive callbacks as a parse progresses and use state information to keep track of where they are in the document and collect its data:

```
# File saxparse.py
import xml.sax.handler
class BookHandler(xml.sax.handler.ContentHandler):
    def __init__(self):
        self.inTitl...
```
It’s a Python-specific way to both parse and generate XML text; after a parse, its API gives access to components of the document:

```
# File etreeparse.py
from xml.etree.ElementTree import parse
tree = parse('mybooks.xml')
for E in tree.findall('title'):
    print(E.text)
```

When run in eithe...
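Since mybooks.xml isn't reproduced here, the same idea can be shown self-containedly by parsing from a string with `fromstring`; the sample document is an assumption of this sketch:

```python
from xml.etree.ElementTree import fromstring

# Parse an in-memory XML document and pull out the text of each <title>;
# iter('title') walks the whole subtree, unlike findall's direct children
xml = ('<catalog>'
       '<book><title>Learning Python</title></book>'
       '<book><title>Programming Python</title></book>'
       '</catalog>')
root = fromstring(xml)
titles = [E.text for E in root.iter('title')]
print(titles)
```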
```
for node2 in node.childNodes:
...     if node2.nodeType == Node.TEXT_NODE:
...         node2.data
...
'Learning Python'
'Programming Python'
'Python Pocket Reference'

C:\misc> c:\python26\python
>>> ...same code...
...
u'Learning Python'
u'Programming Python'
u'Python Pocket Reference'
```

Programs that must deal with XML parsing results in nontrivial ways will need to account for th...
Again, though, because all strings have nearly identical interfaces in both 2.6 and 3.0, most scripts won’t be affected by the change; tools available on `unicode` in 2.6 are generally available on `str` in 3.0.

Regrettably, going into further XML parsing details is beyond this book’s scope.
If you are interested in text or XML parsing, it is covered in more detail in the applications-focused follow-up book Programming Python.
For more details on `re`, `struct`, `pickle`, and XML tools in general, consult the Web, the aforementioned book and others, and Python’s standard library manual.

###### Chapter Summary

This chapter explored advanced string types available in Python 3.0 and 2.6 for processing Unicode text and binary data.
As we saw, many programmers use ASCII text and can get by with the basic string type and its operations.
For more advanced applications, Python’s string models fully support both wide-character Unicode text (via the normal string type in 3.0 and a special type in 2.6) and byte-oriented data (represented with a `bytes` type in 3.0 and normal strings in 2.6).

In addition, we learned how Python’s file object has mutated in 3....
Finally, we briefly met some text and binary data tools in Python’s library, and sampled their behavior in 3.0.

In the next chapter, we’ll shift our focus to tool-builder topics, with a look at ways to manage access to object attributes by inserting automatically run code.
Before we move on, though, here’s a set of questions to review what we’ve learned here.

###### Test Your Knowledge: Quiz

1. What are the names and roles of string object types in Python 3.0?
2.