What happens if you change `oops` to raise a `KeyError` instead of an `IndexError`? Where do the names `KeyError` and `IndexError` come from? (Hint: recall that all unqualified names come from one of four scopes.)

2. Exception objects and lists. Change the `oops` function you just wrote to raise an exception you define yourself, called `MyError`. Identify your exception with a class. Then, extend the `try` statement in the `catcher` function to catch this exception and its instance in addition to `IndexError`, and print the instance you catch.

3. Error handling. Write a function called `safe(func, *args)` that runs any function with any number of arguments by using the `*name` arbitrary arguments call syntax, catches any exception raised while the function runs, and prints the exception using the `exc_info` call in the `sys` module. Then use your `safe` function to run your `oops` function from exercise 1 or 2. Put `safe` in a module file called `tools.py`, and pass it the `oops` function interactively. What kind of error messages do you get? Finally, expand `safe` to also print a Python stack trace when an error occurs by calling the built-in `print_exc` function in the standard `traceback` module (see the Python library reference manual for details).

4. Self-study examples.
At the end of Appendix B, I’ve included a handful of example scripts developed as group exercises in live Python classes for you to study and run on your own in conjunction with Python’s standard manual set.
These are not described, and they use tools in the Python standard library that you’ll have to research on your own.
Still, for many readers, it helps to see how the concepts we’ve discussed in this book come together in real programs.
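As a concrete starting point for exercise 3 above, here is a minimal sketch of one possible `safe` function (this is an illustration, not the book's own solution; the `oops` body is a stand-in):

```python
# tools.py -- one possible solution sketch for exercise 3
import sys
import traceback

def safe(func, *args):
    """Run func(*args); catch and report any exception instead of propagating it."""
    try:
        return func(*args)
    except:                               # catch everything, per the exercise
        print('Caught:', sys.exc_info()[0], sys.exc_info()[1])
        traceback.print_exc()             # also print a full stack trace

def oops():                               # stand-in for the exercise's oops
    raise IndexError('demo failure')

if __name__ == '__main__':
    safe(oops)                            # reports the IndexError; does not crash
```

Running `safe(oops)` interactively prints the exception class and instance, followed by the traceback, and returns normally.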
If these whet your appetite for more, you can find a wealth of larger and more realistic application-level Python program examples in follow-up books like _Programming Python_ and on the Web.

##### PART VIII

## Advanced Topics

###### CHAPTER 36

### Unicode and Byte Strings

In the string...
Because the vast majority of programmers deal with simple forms of text like ASCII, they can happily work with Python’s basic str string type and its associated operations and don’t need to come to grips with more advanced string concepts.
In fact, such programmers can largely ignore the string changes in Python 3.0 and continue to use strings as they may have in the past. On the other hand, some programmers deal with more specialized types of data: nonASCII character sets, image file contents, and so on.
For those programmers (and others who may join them some day), in this chapter we’re going to fill in the rest of the Python string story and look at some more advanced concepts in Python’s string model. Specifically, we’ll explore the basics of Python’s support for _Unicode text_—wide-character strings used in inter...
As we’ll see, the advanced string representation story has diverged in recent versions of Python:

- Python 3.0 provides an alternative string type for binary data and supports Unicode text in its normal string type (ASCII is treated as a simple type of Unicode).
- Python 2.6 provides an alternative string type for ...
Finally, we’ll take a brief look at some advanced string and binary tools, such as pattern matching, object pickling, binary data packing, and XML parsing, and the ways in which they are impacted by 3.0’s string changes. This is officially an advanced topics chapter, because not all programmers will need to delve into...
If you ever need to care about processing either of these, though, you’ll find that Python’s string models provide the support you need.

###### String Changes in 3.0

One of the most noticeable changes in 3.0 is the mutation of string object types.
In a nutshell, 2.X’s str and unicode types have morphed into 3.0’s str and bytes types, and a new mutable bytearray type has been added.
The bytearray type is technically available in Python 2.6 too (though not earlier), but it’s a back-port from 3.0 and does not as clearly distinguish between text and binary content in 2.6. Especially if you process data that is either Unicode or binary in nature, these changes can have substantial impacts on your cod...
In fact, as a general rule of thumb, how much you need to care about this topic depends in large part upon which of the following categories you fall into:

- If you deal with non-ASCII Unicode text—for instance, in the context of internationalized applications and the results of some XML parsers—you will find support...
Your strings will be encoded and decoded using your platform’s default encoding (e.g., ASCII, or UTF-8 on Windows in the U.S.—sys.getdefaultencoding() gives your default if you care to check), but you probably won’t notice. In other words, if your text is always ASCII, you can get by with normal string objects and tex...
As we’ll see in a moment, ASCII is a simple kind of Unicode and a subset of other encodings, so string operations and files “just work” if your programs process ASCII text. Even if you fall into the last of the three categories just mentioned, though, a basic understanding of 3.0’s string model can help both to demyst...
Although our main focus in this chapter is on string types in 3.0, we’ll explore some 2.6 differences along the way too.
Regardless of which version you use, the tools we’ll explore here can become important in many types of programs.

###### String Basics

Before we look at any code, let’s begin with a general overview of Python’s string model. To understand why 3.0 changed the way it did on this front, we have to start w... programmers’ notion of text strings.
ASCII defines character codes from 0 through 127 and allows each character to be stored in one 8-bit byte (only 7 bits of which are actually used). For example, the ASCII standard maps the character 'a' to the integer value 97 (0x61 in hex), which is stored in a single byte in memory and files.
If you wish to see how this works, Python’s `ord` built-in function gives the binary value for a character, and `chr` returns the character for a given integer code value:

```
>>> ord('a')           # 'a' is a byte with binary value 97 in ASCII
97
>>> hex(97)
'0x61'
>>> chr(97)            # Binary value 97 stands for charact...
```
Various symbols and accented characters, for instance, do not fit into the range of possible characters defined by ASCII.
To accommodate special characters, some standards allow all possible values in an 8-bit byte, 0 through 255, to represent characters, and assign the values 128 through 255 (outside ASCII’s range) to special characters.
One such standard, known as Latin-1, is widely used in Western Europe. In Latin-1, character codes above 127 are assigned to accented and otherwise special characters.
The character assigned to byte value 196, for example, is a specially marked non-ASCII character:

```
>>> 0xC4
196
>>> chr(196)
'Ä'
```

This standard allows for a wide array of extra special characters.
Still, some alphabets define so many characters that it is impossible to represent each of them as one byte. _Unicode_ allows more flexibility. Unicode text is commonly referred to as “wide-character” strings, because each character may be represented with multiple bytes.
Unicode is typically used in internationalized programs, to represent European and Asian character sets that have more characters than 8-bit bytes can represent.

To store such rich text in computer memory, we say that characters are translated to and from raw bytes using an encoding—the rules for transl...
For some encodings, the translation process is trivial—ASCII and Latin-1, for instance, map each character to a single byte, so no translation work is required.
For other encodings, the mapping can be more complex and yield multiple bytes per character. The widely used UTF-8 encoding, for example, allows a wide range of characters to be represented by employing a variable number of bytes scheme.
Character codes less than 128 are represented as a single byte; codes between 128 and 0x7ff (2047) are turned into two bytes, where each byte has a value between 128 and 255; and codes above 0x7ff are turned into three- or four-byte sequences having values between 128 and 255. This keeps simple ASCII strings compact, s...
This is also true when the data is stored in files: every ASCII file is a valid UTF-8 file, because ASCII is a 7-bit subset of UTF-8. Conversely, the UTF-8 encoding is binary compatible with ASCII for all character codes less than 128.
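The variable-length behavior just described is easy to verify interactively; a quick sketch (the character choices are illustrative):

```python
# UTF-8 uses more bytes as character code points grow
for ch in ['X', '\u00c4', '\u20ac']:          # ASCII, a Latin-1 accent, the Euro sign
    print(hex(ord(ch)), len(ch.encode('utf-8')))

# ASCII text is unchanged by UTF-8 encoding: every ASCII file is valid UTF-8
print('spam'.encode('utf-8'))                 # b'spam'
```

The loop reports 1, 2, and 3 bytes for the three characters, respectively, matching the ranges described above.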
Latin-1 and UTF-8 simply allow for additional characters: Latin-1 for characters mapped to values 128 through 255 within a byte, and UTF-8 for characters that may be represented with multiple bytes.
Other encodings allow wider character sets in similar ways, but all of these—ASCII, Latin-1, UTF-8, and many others—are considered to be Unicode. To Python programmers, encodings are specified as strings containing the encoding’s name.
Python comes with roughly 100 different encodings; see the Python library reference for a complete list.
Importing the module `encodings` and running `help(encodings)` shows you many encoding names as well; some are implemented in Python, and some in C.
Some encodings have multiple names, too; for example, `latin-1`, `iso_8859_1`, and `8859` are all synonyms for the same encoding, Latin-1.
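The synonym claim can be checked directly—all three names select the same codec in Python's registry (a quick sketch):

```python
# latin-1, iso_8859_1, and 8859 are aliases for the same Latin-1 codec
s = '\u00c4\u00e8'                    # two characters in the Latin-1 range
print(s.encode('latin-1'))            # b'\xc4\xe8'
print(s.encode('iso_8859_1') == s.encode('8859') == s.encode('latin-1'))
```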
We’ll revisit encodings later in this chapter, when we study techniques for writing Unicode strings in a script.

For more on the Unicode story, see the Python standard manual set.
It includes a “Unicode HOWTO” in its “Python HOWTOs” section, which provides additional background that we will skip here in the interest of space.

###### Python’s String Types

At a more concrete level, the Python language provides string data types to represent character text in your scripts.
The string types you will use in your scripts depend upon the version of Python you’re using.
Python 2.X has a general string type for representing binary data and simple 8-bit text like ASCII, along with a specific type for representing multibyte Unicode text:

- `str` for representing 8-bit text and binary data
- `unicode` for representing wide-character Unicode text

Python 2.X’s two string types are differen... The `str` string type in 2.X is used for text that can be represented with 8-bit bytes, as well as binary data that represents absolute byte values.

By contrast, Python 3.X comes with three string object types—one for textual data and two for binary data:

- `str` for representing Unicode text (both 8-bit and wider)
- ...
Given that ASCII and other 8-bit text is really a simple kind of Unicode, this convergence seems logically sound. To achieve this, the 3.0 str type is defined as an immutable sequence of characters (not necessarily bytes), which may be either normal text such as ASCII with one byte per character, or richer character s...
Strings processed by your script with this type are encoded per the platform default, but explicit encoding names may be provided to translate str objects to and from different schemes, both in memory and when transferring to and from files. While 3.0’s new str type does achieve the desired string/unicode merging, man...
To support processing of truly binary data, therefore, a new type, `bytes`, also was introduced. In 2.X, the general `str` type filled this binary data role, because strings were just sequences of bytes (the separate `unicode` type handles wide-character strings).
In 3.0, the `bytes` type is defined as an immutable sequence of 8-bit integers representing absolute byte values.
Moreover, the 3.0 bytes type supports almost all the same operations that the str type does; this includes string methods, sequence operations, and even re module pattern matching, but not string formatting. A 3.0 bytes object really is a sequence of small integers, each of which is in the range 0 through 255; indexin...
When processed with operations that assume characters, though, the contents of bytes objects are assumed to be ASCII-encoded bytes (e.g., the isalpha method assumes each byte is an ASCII character code).
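A quick sketch of the integer-sequence behavior just described:

```python
B = b'spam'
print(B[0])          # indexing a bytes object yields an int byte value: 115
print(list(B))       # [115, 112, 97, 109]
print(B.isalpha())   # True: each byte is treated as an ASCII character code
```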
Further, bytes objects are printed as character strings instead of integers for convenience.

While they were at it, Python developers also added a `bytearray` type in 3.0. `bytearray` is a variant of `bytes` that is _mutable_ and so supports in-place changes. It supports the usual string operations that `str` and `bytes` do, as well as many of the same in-place change operations as lists (e.g., the `append` and `extend` methods, and assignment to indexes).
Assuming your strings can be treated as raw bytes, bytearray finally adds direct in-place mutability for string data—something not possible without conversion to a mutable type in Python 2, and not supported by Python 3.0’s str or bytes. Although Python 2.6 and 3.0 offer much the same functionality, they package it di...
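The in-place mutability described above looks like this in practice (a quick sketch):

```python
BA = bytearray(b'spam')
BA[0] = ord('t')        # index assignment takes an int byte value
BA.append(ord('!'))     # list-like in-place growth
BA.extend(b'??')
print(BA)               # bytearray(b'tpam!??')
print(BA.decode())      # decode to str when text is needed
```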
In fact, the mapping from 2.6 to 3.0 string types is not direct—2.6’s `str` equates to both `str` and `bytes` in 3.0, and 3.0’s `str` equates to both `str` and `unicode` in 2.6. Moreover, the mutability of 3.0’s `bytearray` is unique. In practice, though, this asymmetry is not as daunting as it might sound.
It boils down to the following: in 2.6, you will use str for simple text and binary data and unicode for more advanced forms of text; in 3.0, you’ll use str for any kind of text (simple and Unicode) and bytes or bytearray for binary data.
In practice, the choice is often made for you by the tools you use—especially in the case of file processing tools, the topic of the next section.

###### Text and Binary Files

File I/O (input and output) has also been revamped in 3.0 to reflect the `str`/`bytes` distinction and automatically support encoding Unicode te...
Python now makes a sharp platform-independent distinction between text files and binary files:

_Text files_
When a file is opened in text mode, reading its data automatically decodes its content (per a platform default or a provided encoding name) and returns it as a `str`; writing takes a `str` and automat... Depending on the encoding name, text files may also automatically process the byte order mark sequence at the start of a file (more on this momentarily).

_Binary files_
When a file is opened in binary mode by adding a b (lowercase only) to the mode string argument in the built-in `open` call, reading its data does not de...
Binary-mode files also accept a `bytearray` object for the content to be written to the file. Because the language sharply differentiates between `str` and `bytes`, you must decide whether your data is text or binary in nature and use either `str` or `bytes` objects to represent its content in your script, as appropriate.
Ultimately, the mode in which you open a file will dictate which type of object your script will use to represent its content:

- If you are processing image files, packed data created by other programs whose content you must extract, or some device data streams, chances are good that you will want to deal with it usi... You might also opt for `bytearray` if you wish to update the data without making copies of it in memory.
- If instead you are processing something that is textual in nature, such as program output, HTML, internationalized text, or CSV or XML files, you’ll probably want to use `str` and text-mode files.

Notice ...
By adding a b to the mode string, you specify binary mode and will receive, or must provide, a `bytes` object to represent the file’s content when reading or writing.
Without the b, your file is processed in text mode, and you’ll use str objects to represent its content in your script.
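The mode-to-type pairing can be sketched as follows (the file name here is illustrative):

```python
# Text mode reads and writes str; binary mode reads and writes bytes
with open('temp.txt', 'w', encoding='utf-8') as f:   # text mode: write a str
    f.write('spam')
with open('temp.txt', 'r', encoding='utf-8') as f:   # text mode: read a str back
    print(type(f.read()))                            # <class 'str'>
with open('temp.txt', 'rb') as f:                    # binary mode: read bytes back
    print(type(f.read()))                            # <class 'bytes'>
```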
For example, the modes `rb`, `wb`, and `rb+` imply `bytes`; `r`, `w+`, and `rt` (the default) imply `str`.

Text-mode files also handle the byte order marker (BOM) sequence that may appear at the start of files under certain encoding schemes.
In the UTF-16 and UTF-32 encodings, for example, the BOM specifies big- or little-endian format (essentially, which end of a bitstring is most significant).
A UTF-8 text file may also include a BOM to declare that it is UTF-8 in general, but this isn’t guaranteed.
When reading and writing data using these encoding schemes, Python automatically skips or writes the BOM if it is implied by a general encoding name or if you provide a more specific encoding name to force the issue.
For example, the BOM is always processed for “utf-16,” the more specific encoding name “utf-16-le” specifies little-endian UTF-16 format, and the more specific encoding name “utf-8-sig” forces Python to both skip and write a BOM on input and output, respectively, for UTF-8 text (the general name “utf-8” d...
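A short sketch of the BOM handling just described (the file name is illustrative):

```python
# 'utf-8-sig' writes a BOM on output and skips it on input;
# the general 'utf-8' name leaves the BOM character in the decoded text
with open('bom.txt', 'w', encoding='utf-8-sig') as f:
    f.write('spam')
print(open('bom.txt', 'rb').read())                         # b'\xef\xbb\xbfspam'
print(open('bom.txt', 'r', encoding='utf-8-sig').read())    # spam (BOM skipped)
print(repr(open('bom.txt', 'r', encoding='utf-8').read()))  # '\ufeffspam'
```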
First, let’s explore the implications of Python’s new Unicode string model.

###### Python 3.0 Strings in Action

Let’s step through a few examples that demonstrate how the 3.0 string types are used. One note up front: the code in this section was run with and applies to 3.0 only.
Still, basic string operations are generally portable across Python versions.
Simple ASCII strings represented with the str type work the same in 2.6 and 3.0 (and exactly as we saw in Chapter 7 of this book).
Moreover, although there is no bytes type in Python 2.6 (it has just the general str), it can usually run code that thinks there is—in 2.6, the call bytes(X) is present as a synonym for str(X), and the new literal form b'...' is taken to be the same as the normal string literal '...'.
You may still run into version skew in some isolated cases, though; the 2.6 `bytes` call, for instance, does not allow the second argument (encoding name) required by 3.0’s `bytes`.

###### Literals and Basic Properties

Python 3.0 string objects originate when you call a built-in function such as `str` or `bytes`, proce...
For the latter, a new literal form, `b'xxx'` (and equivalently, `B'xxx'`) is used to create `bytes` objects in 3.0, and `bytearray` objects may be created by calling the `bytearray` function, with a variety of possible arguments. More formally, in 3.0 all the current string literal forms—'xxx', "xxx", and triple-quot...
This new b'...' bytes literal is similar in form to the r'...' raw string used to suppress backslash escapes.
Consider the following, run in 3.0:

```
C:\misc> c:\python30\python
>>> B = b'spam'            # Make a bytes object (8-bit bytes)
>>> S = 'eggs'             # Make a str object (Unicode characters, 8-bit or wider)
>>> type(B), type(S)
(<class 'bytes'>, <class 'str'>)
>>> B                      # Prints as a character string, reall...
```
The bytes prefix also works for any string literal form:

```
>>> B[0] = 'x'             # Both are immutable
TypeError: 'bytes' object does not support item assignment
>>> S[0] = 'x'
TypeError: 'str' object does not support item assignment

>>> B = B"""               # bytes prefix works on single, double, triple quotes
... xxxx
... yyyy
... """
>>> B
b'\nxxxx\nyyyy\n'
```

As mentioned earlier, in Python 2.6 the `b'xxx'` literal is present for compatibility but is the same as `'xxx'` and makes a `str`, and `bytes` is just a synonym for `str`; as you’ve seen, in 3.0 both of these address the distinct `bytes` type. Also note that the `u'xxx'` and `U'xxx'` Unicode string literal forms in 2.6 are gone in 3.0; use `'xxx'` instead, since all strings are Unicode, even if they contain all ASCII characters (more on writing non-ASCII Unicode text in the section “Coding Non-ASCII Text” on page 905).

###### Conversions

Although Pyth...
A function that expects an argument to be a `str` object won’t generally accept a `bytes`, and vice versa. Because of this, Python 3.0 basically requires that you commit to one type or the other, or perform manual, explicit conversions:

- `str.encode()` and `bytes(S, encoding)` translate a string to its raw bytes form and c...
For example, in 3.0:

```
>>> S = 'eggs'
>>> S.encode()                    # str to bytes: encode text into raw bytes
b'eggs'
>>> bytes(S, encoding='ascii')    # str to bytes, alternative
b'eggs'
>>> B = b'spam'
>>> B.decode()                    # bytes to str: decode raw bytes into text
'spam'
>>> str(B, ...
```
First of all, your platform’s default encoding is available in the `sys` module, but the encoding argument to `bytes` is not optional, even though it is in `str.encode` (and `bytes.decode`). Second, although calls to `str` do not require the encoding argument like `bytes` does, leaving it off in `str` calls does not mean i...
Assuming `B` and `S` are still as in the prior listing:

```
>>> import sys
>>> sys.platform                  # Underlying platform
'win32'
>>> sys.getdefaultencoding()      # Default encoding for str here
'utf-8'
>>> bytes(S)
TypeError: string argument without an encoding
>>> str(B)                        # str without enco...
```
To code arbitrary Unicode characters in your strings, some of which you might not even be able to type on your keyboard, Python string literals support both "\xNN" hex byte value escapes and "\uNNNN" and "\UNNNNNNNN" Unicode escapes in string literals.
In Unicode escapes, the first form gives four hex digits to encode a 2-byte (16-bit) character code, and the second gives eight hex digits for a 4-byte (32-bit) code.

###### Coding ASCII Text

Let’s step through some examples that demonstrate text coding basics.
As we’ve seen, ASCII text is a simple type of Unicode, stored as a sequence of byte values that represent characters:

```
C:\misc> c:\python30\python
>>> ord('X')               # 'X' has binary value 88 in the default encoding
88
>>> chr(88)                # 88 stands for character 'X'
'X'
>>> S = 'XYZ'              # A ...
```
The hex values 0xC4 and 0xE8, for instance, are codes for two special accented characters outside the 7-bit range of ASCII, but we can embed them in 3.0 `str` objects because `str` supports Unicode today:

```
>>> chr(0xc4)              # 0xC4, 0xE8: characters outside ASCII's range
'Ä'
>>> chr(0xe8)
'è'
```
Encoding as Latin-1 works, though, and allocates one byte per character; encoding as UTF-8 allocates 2 bytes per character instead.
If you write this string to a file, the raw bytes shown here is what is actually stored on the file for the encoding types given:

```
>>> S = '\u00c4\u00e8'
>>> S
'Äè'
>>> len(S)
2
>>> S.encode('ascii')
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-1: ordinal not in range(128)
...
```
However, as we’ll see later, the encoding mode you give to the `open` call causes this decoding to be done for you automatically on input (and avoids issues that may arise from reading partial character sequences when reading by blocks of bytes):

```
>>> B = b'\xc4\xe8'
>>> B
b'\xc4\xe8'
>>> le...
```
When needed, you can specify both 16- and 32-bit Unicode values for characters in your strings—use `"\u..."` with four hex digits for the former, and `"\U...."` with eight hex digits for the latter:

```
>>> S = 'A\u00c4B\U000000e8C'
>>> S                      # A, B, C, and 2 non-ASCII characters
'AÄBèC'
>>> len...
```
The cp500 EBCDIC encoding, for example, doesn’t even encode ASCII the same way as the encodings we’ve been using so far (since Python encodes and decodes for us, we only generally need to care about this when providing encoding names):

```
>>> S
'AÄBèC'
>>> S.encode('cp500')      # Two other Western European enc...
```
First, Python 3.0 allows special characters to be coded with both hex and Unicode escapes in `str` strings, but only with hex escapes in `bytes` strings—Unicode escape sequences are silently taken verbatim in `bytes` literals, not as escapes. In fact, `bytes` must be decoded to `str` strings to print their non-ASCII characte...
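The verbatim treatment of Unicode escapes in bytes literals can be seen directly (a quick sketch; note that recent Pythons also warn about the unrecognized escape):

```python
# \xNN is an escape in bytes literals, but \u sequences are not recognized:
# the backslash and following characters are kept verbatim
B = b'\xc4\u00c4'     # one \xc4 byte, then the six characters \, u, 0, 0, c, 4
print(len(B))         # 7
print(B)              # b'\xc4\\u00c4'
```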