You can both read from and write to the same socket. If you write something to a socket, it is sent to the application at the other end of the socket. If you read from the socket, you are given the data which the other application has sent.

But if you try to read a socket when the program on the other end of the socket has not sent any data, you just sit and wait. If the programs on both ends of the socket simply wait for some data without sending anything, they will wait for a very long time.

So an important part of programs that communicate over the Internet is to have some sort of protocol. A protocol is a set of precise rules that determine who is to go first, what they are to do, what the responses are to that message, who sends next, and so on. In a sense the two applications at either end of the socket are doing a dance and making sure not to step on each other's toes.
There are many documents which describe these network protocols. The HyperText Transfer Protocol is described in the following document:

[http://www.w3.org/Protocols/rfc2616/rfc2616.txt](http://www.w3.org/Protocols/rfc2616/rfc2616.txt)
This is a long and complex 176-page document with a lot of detail. If you find it interesting, feel free to read it all. But if you take a look around page 36 of RFC 2616 you will find the syntax for the GET request. To request a document from a web server, we make a connection to the data.pr4e.org server on port 80, and then send a line of the form

GET http://data.pr4e.org/romeo.txt HTTP/1.0

where the second parameter is the web page we are requesting, and then we also send a blank line. The web server will respond with some header information about the document and a blank line followed by the document content.
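To make the request format concrete, here is a small sketch that builds the exact request bytes we would send over a socket (no network needed; the URL is the one shown above):

```python
# Build the HTTP request described above: a GET line followed by a blank
# line, with \r\n line endings as RFC 2616 requires.
url = 'http://data.pr4e.org/romeo.txt'
request = 'GET ' + url + ' HTTP/1.0\r\n\r\n'
data = request.encode()   # sockets carry bytes, not strings
print(data)
```

The trailing `\r\n\r\n` is the blank line that tells the server the request is complete.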
##### 12.2 The World’s Simplest Web Browser
Perhaps the easiest way to show how the HTTP protocol works is to write a very simple Python program that makes a connection to a web server and follows the rules of the HTTP protocol to request a document and display what the server sends back.
(Figure: your program uses socket(), connect(), send(), and recv() to exchange data with a web server across the Internet.)
Figure 12.1: A Socket Connection
```
Content-Length: 167
Connection: close
Content-Type: text/plain

But soft what light through yonder window breaks
It is the east and Juliet is the sun
Arise fair sun and kill the envious moon
Who is already sick and pale with grief
```
For example, the Content-Type header indicates that the document is a
plain text document (text/plain).
After the server sends us the headers, it adds a blank line to indicate the end of
the headers, and then sends the actual data of the file romeo.txt.
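This header/blank-line/body layout can be sketched without a network by splitting a miniature response on the blank line (the response text here is a made-up sample in the same shape):

```python
# A miniature HTTP response; real responses from a web server carry the
# same shape: headers, a blank line, then the document body.
response = (b'HTTP/1.1 200 OK\r\n'
            b'Content-Type: text/plain\r\n'
            b'\r\n'
            b'But soft what light through yonder window breaks\n')

# The blank line is two consecutive CRLF sequences.
headers, _, body = response.partition(b'\r\n\r\n')
print(headers.decode())
print(body.decode())
```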
This example shows how to make a low-level network connection with sockets. Sockets can be used to communicate with a web server, a mail server, or many other kinds of servers. All that is needed is to find the document which describes the protocol and write the code to send and receive the data according to the protocol.

However, since the protocol that we use most commonly is the HTTP web protocol, Python has a special library specifically designed to support the HTTP protocol for the retrieval of documents and data over the web.
##### 12.3 Retrieving an image over HTTP

We can use a similar program to retrieve an image over HTTP. Instead of copying the data to the screen as the program runs, we accumulate the data in a string, trim off the headers, and then save the image data to a file as follows:
```python
import socket
import time

HOST = 'data.pr4e.org'
PORT = 80
mysock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
mysock.connect((HOST, PORT))
mysock.sendall(b'GET http://data.pr4e.org/cover.jpg HTTP/1.0\r\n\r\n')

count = 0
picture = b""
while True:
    data = mysock.recv(5120)
    if len(data) < 1: break
    # time.sleep(0.25)
    count = count + len(data)
    print(len(data), count)
    picture = picture + data
mysock.close()

# Look for the end of the header (2 CRLF)
pos = picture.find(b"\r\n\r\n")
print('Header length', pos)
print(picture[:pos].decode())

# Skip past the header and save the picture data
fhand = open("stuff.jpg", "wb")
fhand.write(picture[pos+4:])
fhand.close()
```
Once the program completes, you can view the image data by opening the file stuff.jpg in an image viewer.

As the program runs, you can see that we don't get 5120 characters each time we call the recv() method. We get as many characters as have been transferred across the network to us by the web server at the moment we call recv(). In this example, we either get 1460 or 2920 characters each time we request up to 5120 characters of data. Your results may be different depending on your network speed. Also note that on the last call to recv() we get 1681 bytes, which is the end of the stream, and in the next call to recv() we get a zero-length string that tells us that the server has called close() on its end of the socket and there is no more data forthcoming.
We can slow down our successive recv() calls by uncommenting the call to time.sleep(). This way, we wait a quarter of a second after each call so that the server can "get ahead" of us and send more data to us before we call recv() again. With the delay in place, the program executes as follows:
```
$ python urljpeg.py
1460 1460
5120 6580
5120 11700
...
5120 62900
5120 68020
2281 70301
Header length 240
HTTP/1.1 200 OK
Date: Sat, 02 Nov 2013 02:22:04 GMT
Server: Apache
Last-Modified: Sat, 02 Nov 2013 02:01:26 GMT
ETag: "19c141-111a9-4ea280f8354b8"
Accept-R...
```
When we run the program with the delay in place, at some point the server might fill up the buffer in the socket and be forced to pause until our program starts to empty the buffer. The pausing of either the sending application or the receiving application is called "flow control".
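The "read until recv() returns a zero-length string" pattern described above can be demonstrated locally; this sketch uses socket.socketpair() as a stand-in for a real server connection (an assumption made so the example runs offline):

```python
import socket

# A local socket pair stands in for the network connection.
sender, receiver = socket.socketpair()
sender.sendall(b'Hello socket world')
sender.close()                 # like the server calling close()

chunks = []
while True:
    data = receiver.recv(5)    # ask for up to 5 bytes at a time
    if len(data) < 1:          # zero-length read: the other end closed
        break
    chunks.append(data)
receiver.close()
print(b''.join(chunks).decode())
```

Just as with the web server, each recv() may return fewer bytes than requested, and only the zero-length string signals the end of the stream.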
##### 12.4 Retrieving web pages with urllib
While we can manually send and receive data over HTTP using the socket library, there is a much simpler way to perform this common task in Python by using the urllib library. Using urllib, you can treat a web page much like a file. You simply indicate which web page you would like to retrieve and urllib handles all of the HTTP protocol and header details.

The equivalent code to read the romeo.txt file from the web using urllib is as follows:
```python
import urllib.request, urllib.parse, urllib.error

fhand = urllib.request.urlopen('http://data.pr4e.org/romeo.txt')
for line in fhand:
    print(line.decode().strip())
```
The headers are still sent, but the urllib code consumes the headers and only returns the data to us.
```
But soft what light through yonder window breaks
It is the east and Juliet is the sun
Arise fair sun and kill the envious moon
Who is already sick and pale with grief
```
As an example, we can write a program to retrieve the data for romeo.txt and compute the frequency of each word in the file.

##### 12.5 Parsing HTML and scraping the web

A common use of the urllib capability in Python is to scrape the web: write a program that pretends to be a web browser, retrieves pages, and then examines the data in those pages looking for patterns. Using this technique, Google spiders its way through nearly all of the pages on the web.
Google also uses the frequency of links from pages it finds to a particular page as
one measure of how “important” a page is and how high the page should appear
in its search results.
##### 12.6 Parsing HTML using regular expressions
One simple way to parse HTML is to use regular expressions to repeatedly search for and extract substrings that match a particular pattern; for example, an expression such as href="http[s]?://.+?" matches links. The question mark added to ".+?" indicates that the match is to be done in a "non-greedy" fashion instead of a "greedy" fashion. A non-greedy match tries to find the smallest possible matching string and a greedy match tries to find the largest possible matching string.
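The difference is easy to see on a small made-up line of HTML:

```python
import re

line = '<a href="http://a.com">one</a> <a href="http://b.com">two</a>'

greedy = re.findall('href="(.+)"', line)      # grabs the largest match
nongreedy = re.findall('href="(.+?)"', line)  # grabs the smallest matches
print(greedy)
print(nongreedy)
```

The greedy pattern swallows everything between the first and last quote, producing one bogus "link"; the non-greedy pattern stops at the first closing quote and finds both URLs.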
We add parentheses to our regular expression to indicate which part of our matched string we would like to extract, and produce the following program:

```python
# Search for lines that contain href= link values
import urllib.request, urllib.parse, urllib.error
import re

url = input('Enter - ')
html = urllib.request.urlopen(url).read()
links = re.findall(b'href="(http[s]?://.*?)"', html)
for link in links:
    print(link.decode())
```
But since there are a lot of “broken” HTML pages out there, a solution
only using regular expressions might either miss some valid links or end up with
bad data.
This can be solved by using a robust HTML parsing library.
##### 12.7 Parsing HTML using BeautifulSoup
There are a number of Python libraries which can help you parse HTML and extract data from the pages. Each of the libraries has its strengths and weaknesses and you can pick one based on your needs. As an example, we will simply parse some HTML input and extract links using the BeautifulSoup library. You can download and install the BeautifulSoup code from:

[http://www.crummy.com/software/](http://www.crummy.com/software/)

You can download and "install" BeautifulSoup or you can simply place the BeautifulSoup.py file in the same folder as your application.
Even though HTML looks like XML[1] and some pages are carefully constructed to be XML, most HTML is generally broken in ways that cause an XML parser to reject the entire page of HTML as improperly formed. BeautifulSoup tolerates highly flawed HTML and still lets you easily extract the data you need.

We will use urllib to read the page and then use BeautifulSoup to extract the href attributes from the anchor (a) tags.

[1] The XML format is described in the next chapter.
```python
# To run this, you can install BeautifulSoup
# or place BeautifulSoup.py in the same folder as this file

import urllib.request, urllib.parse, urllib.error
from bs4 import BeautifulSoup

url = input('Enter - ')
html = urllib.request.urlopen(url).read()
soup = BeautifulSoup(html, 'html.parser')

# Retrieve all of the anchor tags
tags = soup('a')
for tag in tags:
    print(tag.get('href', None))
```
##### 12.8 Reading binary files using urllib

Sometimes you want to retrieve a non-text (or binary) file such as an image or video file. The data in these files is generally not useful to print out, but you can easily make a copy of a URL to a local file on your hard disk using urllib.

The pattern is to open the URL and use read to download the entire contents of the document into a string variable (img), then write that information to a local file.
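The listing for this whole-file pattern did not survive extraction; here is a sketch of the same idea that reads a local file:// URL so it can run offline (in practice you would pass an http:// URL such as http://data.pr4e.org/cover.jpg):

```python
import urllib.request
import pathlib, tempfile

# Stand-in source file so the sketch runs without a network connection.
src = pathlib.Path(tempfile.gettempdir()) / 'cover_src.jpg'
src.write_bytes(b'\xff\xd8 fake jpeg bytes')

# The whole-file pattern: read() pulls the entire document into memory...
img = urllib.request.urlopen(src.as_uri()).read()

# ...and then we write it to a local file in one step.
dest = pathlib.Path(tempfile.gettempdir()) / 'cover.jpg'
fhand = open(dest, 'wb')
fhand.write(img)
fhand.close()
```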
This will work if the size of the file is less than the size of the memory of your computer.

However, if this is a large audio or video file, this program may crash or at least run extremely slowly when your computer runs out of memory. In order to avoid running out of memory, we retrieve the data in blocks (or buffers) and then write each block to your disk before retrieving the next block. This way the program can read any size file without using up all of the memory you have in your computer.
```python
import urllib.request, urllib.parse, urllib.error

img = urllib.request.urlopen('http://data.pr4e.org/cover.jpg')
fhand = open('cover.jpg', 'wb')
size = 0
while True:
    info = img.read(100000)
    if len(info) < 1: break
    size = size + len(info)
    fhand.write(info)

print(size, 'characters copied.')
fhand.close()
```
There is also a curl3.py sample
program that does this task a little more effectively, in case you actually want to
use this pattern in a program you are writing.
##### 12.9 Glossary
**BeautifulSoup** A Python library for parsing HTML documents and extracting data from HTML documents that compensates for most of the imperfections in the HTML that browsers generally ignore. You can download the BeautifulSoup code from [www.crummy.com](http://www.crummy.com).
**port** A number that generally indicates which application you are contacting when you make a socket connection to a server. As an example, web traffic usually uses port 80 while email traffic uses port 25.
**scrape** When a program pretends to be a web browser and retrieves a web page, then looks at the web page content. Often programs are following the links in one page to find the next page so they can traverse a network of pages or a social network.
**socket** A network connection between two applications where the applications can send and receive data in either direction.

**spider** The act of a web search engine retrieving a page and then all the pages linked from a page, and so on, until they have nearly all of the pages on the Internet, which they use to build their search index.
##### 12.10 Exercises

**Exercise 1:** Change the socket program so that it prompts the user for a URL and can read any web page. You can use split('/') to break the URL into its component parts so you can extract the host name for the socket connect call. Add error checking using try and except to handle the condition where the user enters an improperly formatted or non-existent URL.
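As a hint for the split('/') technique, the host name lands at index 2 of the resulting list (the URL here is just the sample from this chapter):

```python
url = 'http://data.pr4e.org/romeo.txt'
words = url.split('/')
print(words)        # ['http:', '', 'data.pr4e.org', 'romeo.txt']
host = words[2]
print(host)
```

The empty string at index 1 comes from the two slashes after 'http:'.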
**Exercise 2:** Change your socket program so that it counts the number of characters it has received and stops displaying any text after it has shown 3000 characters. The program should retrieve the entire document, count the total number of characters, and display the count at the end of the document.

**Exercise 3:** Use urllib to replicate the previous exercise of (1) retrieving the document from a URL, (2) displaying up to 3000 characters, and (3) counting the overall number of characters in the document. Don't worry about the headers for this exercise, simply show the first 3000 characters of the document contents.
**Exercise 4:** Change the urllinks.py program to extract and count paragraph (p) tags from the retrieved HTML document and display the count of the paragraphs as the output of your program. Do not display the paragraph text, only count them. Test your program on several small web pages as well as some larger web pages.
**Exercise 5: (Advanced)** Change the socket program so that it only shows data after the headers and a blank line have been received. Remember that recv is receiving characters (newlines and all), not lines.
## Chapter 13
# Using Web Services
Once it became easy to retrieve documents and parse documents over HTTP using programs, it did not take long to develop an approach where we started producing documents that were specifically designed to be consumed by other programs (i.e., not HTML to be displayed in a browser).

There are two common formats that we use when exchanging data across the web. The eXtensible Markup Language (XML) has been in use for a very long time and is best suited for exchanging document-style data. When programs just want to exchange dictionaries, lists, or other internal information with each other, they use JavaScript Object Notation or JSON (see [www.json.org](http://www.json.org)). We will look at both formats.
##### 13.1 eXtensible Markup Language - XML
XML looks very similar to HTML, but XML is more structured than HTML. Here is a sample of an XML document:

```xml
<person>
  <name>Chuck</name>
  <phone type="intl">
    +1 734 303 4456
  </phone>
  <email hide="yes"/>
</person>
```
Often it is helpful to think of an XML document as a tree structure where there is a top tag person and other tags such as phone are drawn as children of their parent nodes.

##### 13.2 Parsing XML

When the XML is in a tree, we have a series of methods we can call to extract portions of data from the XML. The find function searches through the XML tree and retrieves a node that matches the specified tag. Each node can have some text, some attributes (like hide), and some "child" nodes. Each node can be the top of a tree of nodes.
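The listing that produced the output below did not survive extraction; a minimal reconstruction using ElementTree's fromstring and find on the `<person>` XML above might look like this:

```python
import xml.etree.ElementTree as ET

data = '''<person>
<name>Chuck</name>
<phone type="intl">
+1 734 303 4456
</phone>
<email hide="yes"/>
</person>'''

tree = ET.fromstring(data)
# find locates the first node matching the tag; .text and .get pull
# out the text content and an attribute value respectively.
print('Name:', tree.find('name').text)
print('Attr:', tree.find('email').get('hide'))
```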
Name: Chuck
Attr: yes
Using an XML parser such as ElementTree has the advantage that while the XML in this example is quite simple, it turns out there are many rules regarding valid XML, and using ElementTree allows us to extract data from XML without worrying about the rules of XML syntax.
##### 13.3 Looping through nodes

Often the XML has multiple nodes and we need to write a loop to process all of the nodes. In the following program, we loop through all of the user nodes:

```python
import xml.etree.ElementTree as ET

input = '''
<stuff>
  <users>
    <user x="2">
      <id>001</id>
      <name>Chuck</name>
    </user>
    <user x="7">
      <id>009</id>
      <name>Brent</name>
    </user>
  </users>
</stuff>'''

stuff = ET.fromstring(input)
lst = stuff.findall('users/user')
print('User count:', len(lst))

for item in lst:
    print('Name', item.find('name').text)
    print('Id', item.find('id').text)
    print('Attribute', item.get('x'))
```

The findall method retrieves a Python list of subtrees that represent the user structures in the XML tree.
Then we can write a for loop that looks at each of
the user nodes, and prints the name and id text elements as well as the x attribute
from the user node.
```
User count: 2
Name Chuck
Id 001
Attribute 2
Name Brent
Id 009
Attribute 7
```
##### 13.4 JavaScript Object Notation - JSON
The JSON format was inspired by the object and array format used in the JavaScript language. But since Python was invented before JavaScript, Python's syntax for dictionaries and lists influenced the syntax of JSON. So the format of JSON is nearly identical to a combination of Python lists and dictionaries.
Here is a JSON encoding that is roughly equivalent to the simple XML from above:

```json
{
  "name" : "Chuck",
  "phone" : {
    "type" : "intl",
    "number" : "+1 734 303 4456"
  },
  "email" : {
    "hide" : "yes"
  }
}
```
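As a quick sketch, the same JSON can be parsed with Python's built-in json library, and the nested values reached with ordinary dictionary indexing:

```python
import json

# Parse the JSON encoding shown above and pull out the nested values.
data = '''{
  "name" : "Chuck",
  "phone" : {
    "type" : "intl",
    "number" : "+1 734 303 4456"
  },
  "email" : {
    "hide" : "yes"
  }
}'''

info = json.loads(data)
print(info['name'])            # the top-level key
print(info['phone']['type'])   # a nested key
print(info['email']['hide'])
```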
You will notice some differences. First, in XML, we can add attributes like "intl" to the "phone" tag. In JSON, we simply have key-value pairs. Also the XML "person" tag is gone, replaced by a set of outer curly braces.
In general, JSON structures are simpler than XML because JSON has fewer capabilities than XML. But JSON has the advantage that it maps directly to some combination of dictionaries and lists. And since nearly all programming languages have something equivalent to Python's dictionaries and lists, JSON is a very natural format to have two cooperating programs exchange data.

JSON is quickly becoming the format of choice for nearly all data exchange between applications because of its relative simplicity compared to XML.
##### 13.5 Parsing JSON

In this example, we represent a list of users where each user is a set of key-value pairs (i.e., a dictionary). So we have a list of dictionaries.
In the following program, we use the built-in json library to parse the JSON and read through the data. Compare this closely to the equivalent XML data and code above. The JSON has less detail, so we must know in advance that we are getting a list, that the list is of users, and that each user is a set of key-value pairs. The JSON is more succinct (an advantage) but also is less self-describing (a disadvantage).
```python
import json

input = '''
[
  { "id" : "001",
    "x" : "2",
    "name" : "Chuck"
  },
  { "id" : "009",
    "x" : "7",
    "name" : "Brent"
  }
]'''

info = json.loads(input)
print('User count:', len(info))

for item in info:
    print('Name', item['name'])
    print('Id', item['id'])
    print('Attribute', item['x'])
```
Once the JSON has been parsed, we can use the Python index operator to extract the various bits of data for each user. We don't have to use the JSON library to dig through the parsed JSON, since the returned data is simply native Python structures.

The output of this program is exactly the same as the XML version above.
```
User count: 2
Name Chuck
Id 001
Attribute 2
Name Brent
Id 009
Attribute 7
```
In general, there is an industry trend away from XML and towards JSON for web services. Because the JSON is simpler and more directly maps to native data structures we already have in programming languages, the parsing and data extraction code is usually simpler and more direct when using JSON. But XML is more self-descriptive than JSON and so there are some applications where XML retains an advantage. For example, most word processors store documents internally using XML rather than JSON.
##### 13.6 Application Programming Interfaces
We now have the ability to exchange data between applications using HyperText Transfer Protocol (HTTP) and a way to represent complex data that we are sending back and forth between these applications using eXtensible Markup Language (XML) or JavaScript Object Notation (JSON).

The general name for these application-to-application contracts is Application Program Interfaces (APIs). When we use an API, generally one program makes a set of services available for use by other applications and publishes the APIs (i.e., the "rules") that must be followed to access the services provided by the program.
When we begin to build our programs where the functionality of our program includes access to services provided by other programs, we call the approach a Service-Oriented Architecture (SOA). A SOA approach is one where our overall application makes use of the services of other applications. A non-SOA approach is where the application is a single standalone application which contains all of the code necessary to implement the application.
We see many examples of SOA when we use the web. We can go to a single web site and book air travel, hotels, and automobiles all from a single site. The data for hotels is not stored on the airline computers. Instead, the airline computers contact the services on the hotel computers and retrieve the hotel data and present it to the user. When the user agrees to make a hotel reservation using the airline site, the airline site uses another web service on the hotel systems to actually make the reservation. And when it comes time to charge your credit card for the whole transaction, still other computers become involved in the process.
(Figure: a Travel Application uses APIs to contact the Airline Reservation Service, the Hotel Reservation Service, and the Auto Rental Reservation Service.)
Figure 13.2: Service Oriented Architecture
A Service-Oriented Architecture has advantages, including that we always maintain only one copy of the data and that the owners of the data can set the rules about the use of their data. With these advantages, an SOA system must be carefully designed to have good performance and meet the user's needs.
When an application makes a set of services in its API available over the web, we
call these web services.
##### 13.7 Google geocoding web service
Google has an excellent web service that allows us to make use of their large database of geographic information. We can submit a geographical search string like "Ann Arbor, MI" to their geocoding API and have Google return its best guess as to where on a map we might find our search string and tell us about the landmarks nearby.
The geocoding service is free but rate limited, so you cannot make unlimited use of the API in a commercial application. But if you have some survey data where an end user has entered a location in a free-format input box, you can use this API to clean up your data quite nicely.
*When you are using a free API like Google's geocoding API, you need to be respectful in your use of these resources. If too many people abuse the service, Google might drop or significantly curtail its free service.*
You can read the online documentation for this service, but it is quite simple and
you can even test it using a browser by typing the following URL into your browser:
http://maps.googleapis.com/maps/api/geocode/json...
Unlike a fixed web page, the data we get depends on the
parameters we send and the geographical data stored in Google’s servers.
Once we retrieve the JSON data, we parse it with the json library and do a few
checks to make sure that we received good data, then extract the information that
we are looking for.
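The shape of those checks can be sketched against an abbreviated, hypothetical reply (the real geocoding response contains many more fields):

```python
import json

# Abbreviated, made-up geocoding reply used only for illustration.
data = '''{
  "status": "OK",
  "results": [
    { "formatted_address": "Ann Arbor, MI, USA",
      "geometry": { "location": { "lat": 42.2808256, "lng": -83.7430378 } }
    }
  ]
}'''

js = json.loads(data)
# A few sanity checks before trusting the data
if js.get('status') != 'OK' or len(js.get('results', [])) < 1:
    print('==== Failure To Retrieve ====')
else:
    where = js['results'][0]
    print(where['formatted_address'])
    print('lat', where['geometry']['location']['lat'])
```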
The output of the program shows Google's best guess for the location of our search string.

##### 13.8 Security and API usage

It is quite common that you need an API key to make use of a vendor's API. The general idea is that they want to know who is using their services and how much each user is using. Perhaps they have free and pay tiers of their services or have a policy that limits the number of requests that a single individual can make during a particular time period.
Sometimes once you get your API key, you simply include the key as part of POST
data or perhaps as a parameter on the URL when calling the API.
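For example, urllib.parse.urlencode can attach a key (the key value here is obviously a placeholder) along with the other parameters:

```python
import urllib.parse

# Hypothetical key value; urlencode handles quoting of spaces and commas.
params = {'address': 'Ann Arbor, MI', 'key': 'MY-API-KEY'}
url = ('http://maps.googleapis.com/maps/api/geocode/json?' +
       urllib.parse.urlencode(params))
print(url)
```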
Other times, the vendor wants increased assurance of the source of the requests, and so they expect you to send cryptographically signed messages using shared keys and secrets.
A very common technology that is used to sign requests over the Internet is called OAuth. You can read more about the OAuth protocol at [www.oauth.net](http://www.oauth.net).
As the Twitter API became increasingly valuable, Twitter went from an open and public API to an API that required the use of OAuth signatures on each API request. Thankfully there are still a number of convenient and free OAuth libraries so you can avoid writing an OAuth implementation from scratch by reading the specification. These libraries are of varying complexity and have varying degrees of richness. The OAuth web site has information about various OAuth libraries.

For this next sample program we will download the files twurl.py, hidden.py, oauth.py, and twitter1.py from [www.pythonlearn.com/code](http://www.pythonlearn.com/code) and put them all in a folder on your computer.
To make use of these programs you will need to have a Twitter account and authorize your Python code as an application, setting up a key, secret, token, and token secret. You will edit the file hidden.py and put these four strings into the appropriate variables in the file:

```python
# Keep this file separate

# https://apps.twitter.com/
# Create new App

def oauth():
    return {"consumer_key": "h7Lu...Ng",
            "consumer_secret": "dNKenAC3New...mmn7Q",
            "token_key": "1018556...",
            "token_secret": "...your token secret..."}
```
We simply set the secrets in hidden.py and then send the desired URL to the twurl.augment() function, and the library code adds all the necessary parameters to the URL for us.

This program retrieves the timeline for a particular Twitter user and returns it to us in JSON format in a string. We simply print the first 250 characters of the string:
```python
import urllib.request, urllib.parse, urllib.error
import twurl

TWITTER_URL = 'https://api.twitter.com/1.1/statuses/user_timeline.json'

while True:
    print('')
    acct = input('Enter Twitter Account:')
    if (len(acct) < 1): break
    url = twurl.augment(TWITTER_URL,
                        {'screen_name': acct, 'count': '2'})
    print('Retrieving', url)
    connection = urllib.request.urlopen(url)
    data = connection.read().decode()
    print(data[:250])
    headers = dict(connection.getheaders())
    print('Remaining', headers['x-rate-limit-remaining'])
```

The end of the output looks like this:

```
...
:)\n\nhttps:\/\/t.co\/2XmHPx7kgX",
"source":"web","truncated":false,
Remaining 177

Enter Twitter Account:
```
Along with the returned timeline data, Twitter also returns metadata about the request in the HTTP response headers. One header in particular, x-rate-limit-remaining, informs us how many more requests we can make before we will be shut off for a short time period. You can see that our remaining retrievals drop by one each time we make a request to the API.
In the following example, we retrieve a user's Twitter friends, parse the returned JSON, and extract some of the information about the friends. We also dump the JSON after parsing and "pretty-print" it with an indent of four characters to allow us to pore through the data when we want to extract more fields.
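Before the full program, here is the pretty-printing step in isolation; it is just json.dumps with an indent argument (the dictionary here is a tiny stand-in for the real parsed Twitter JSON):

```python
import json

# Tiny stand-in for the parsed Twitter reply; the real structure is larger.
js = {'users': [{'screen_name': 'drchuck', 'status': {'text': 'Hello'}}]}
print(json.dumps(js, indent=4))
```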
```python
import urllib.request, urllib.parse, urllib.error
import twurl
import json

TWITTER_URL = 'https://api.twitter.com/1.1/friends/list.json'

while True:
    print('')
    acct = input('Enter Twitter Account:')
    if (len(acct) < 1): break
    url = twurl.augment(TWITTER_URL,
                        {'screen_name': acct, 'count': '5'})
    print('Retrieving', url)
    connection = urllib.request.urlopen(url)
    data = connection.read().decode()
    js = json.loads(data)
    print(json.dumps(js, indent=4))
    headers = dict(connection.getheaders())
    print('Remaining', headers['x-rate-limit-remaining'])
    for u in js['users']:
        print(u['screen_name'])
        s = u['status']['text']
        print('  ', s[:50])
```

The end of the output looks like this:

```
...
We had a cake with the LO,
scweeker
@DeviceLabDC love it! Now where so I get that "etc

Enter Twitter Account:
```
The last bit of the output is where we see the for loop reading the five most recent "friends" of the drchuck Twitter account and printing the most recent status for each friend. There is a great deal more data available in the returned JSON. If you look in the output of the program, you can also see that the "find the friends" of a particular account has a different rate limitation than the number of timeline queries we are allowed to run per time period.

These secure API keys allow Twitter to have solid confidence that they know who is using their API and data, and at what level.