| doc_content | doc_id |
|---|---|
HMAC.name
The canonical name of this HMAC, always lowercase, e.g. hmac-md5. New in version 3.4. | python.library.hmac#hmac.HMAC.name |
HMAC.update(msg)
Update the hmac object with msg. Repeated calls are equivalent to a single call with the concatenation of all the arguments: m.update(a); m.update(b) is equivalent to m.update(a + b). Changed in version 3.4: Parameter msg can be of any type supported by hashlib. | python.library.hmac#hmac.HMAC.update |
hmac.new(key, msg=None, digestmod='')
Return a new hmac object. key is a bytes or bytearray object giving the secret key. If msg is present, the method call update(msg) is made. digestmod is the digest name, digest constructor or module for the HMAC object to use. It may be any name suitable to hashlib.new(). Despite its argument position, it is required. Changed in version 3.4: Parameter key can be a bytes or bytearray object. Parameter msg can be of any type supported by hashlib. Parameter digestmod can be the name of a hash algorithm. Deprecated since version 3.4, removed in version 3.8: MD5 as implicit default digest for digestmod is deprecated. The digestmod parameter is now required. Pass it as a keyword argument to avoid awkwardness when you do not have an initial msg. | python.library.hmac#hmac.new |
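A minimal sketch of the calls above (the key and message bytes are illustrative): `hmac.new()` with `digestmod` passed as a keyword, followed by incremental `update()` calls that are equivalent to a single call on the concatenated data.

```python
import hashlib
import hmac

# digestmod is required (since 3.8); passing it as a keyword avoids the
# awkward positional gap when there is no initial msg.
m = hmac.new(b"secret-key", digestmod=hashlib.sha256)
m.update(b"hello ")
m.update(b"world")  # same result as a single update(b"hello world")

print(m.name)        # 'hmac-sha256' -- canonical lowercase name
print(m.hexdigest())
```

The two-step `update()` produces the same digest as constructing the object with the full message up front.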
html — HyperText Markup Language support Source code: Lib/html/__init__.py This module defines utilities to manipulate HTML.
html.escape(s, quote=True)
Convert the characters &, < and > in string s to HTML-safe sequences. Use this if you need to display text that might contain such characters in HTML. If the optional flag quote is true, the characters (") and (') are also translated; this helps for inclusion in an HTML attribute value delimited by quotes, as in <a href="...">. New in version 3.2.
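For example, escaping a fragment with and without attribute-quote translation:

```python
import html

# quote=True (the default) also translates " and ' for use in attributes.
print(html.escape('<a href="x">&'))
# -> &lt;a href=&quot;x&quot;&gt;&amp;

# quote=False leaves the quote characters alone.
print(html.escape('<a href="x">&', quote=False))
# -> &lt;a href="x"&gt;&amp;
```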
html.unescape(s)
Convert all named and numeric character references (e.g. &gt;, &#62;, &#x3e;) in the string s to the corresponding Unicode characters. This function uses the rules defined by the HTML 5 standard for both valid and invalid character references, and the list of
HTML 5 named character references. New in version 3.4.
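A quick illustration: named, decimal, and hexadecimal references for the same character all unescape to `'>'`, and (per the HTML 5 rules) some names are recognized even without the trailing semicolon.

```python
import html

print(html.unescape('&gt; &#62; &#x3E;'))  # '> > >'
print(html.unescape('&gt'))                # '>' -- semicolon-less form accepted
```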
Submodules in the html package are:
html.parser – HTML/XHTML parser with lenient parsing mode
html.entities – HTML entity definitions | python.library.html |
html.entities — Definitions of HTML general entities Source code: Lib/html/entities.py This module defines four dictionaries, html5, name2codepoint, codepoint2name, and entitydefs.
html.entities.html5
A dictionary that maps HTML5 named character references 1 to the equivalent Unicode character(s), e.g. html5['gt;'] == '>'. Note that the trailing semicolon is included in the name (e.g. 'gt;'), however some of the names are accepted by the standard even without the semicolon: in this case the name is present with and without the ';'. See also html.unescape(). New in version 3.3.
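For example, the keys include the trailing semicolon, and the legacy names that the standard accepts without one appear under both spellings:

```python
from html.entities import html5

print(html5['gt;'])   # '>'
print(html5['gt'])    # '>' -- also present without the ';'
print(html5['amp;'])  # '&'
```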
html.entities.entitydefs
A dictionary mapping XHTML 1.0 entity definitions to their replacement text in ISO Latin-1.
html.entities.name2codepoint
A dictionary that maps HTML entity names to the Unicode code points.
html.entities.codepoint2name
A dictionary that maps Unicode code points to HTML entity names.
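The three mappings above are inverses and companions of one another; for instance, for the `gt` entity:

```python
from html.entities import codepoint2name, entitydefs, name2codepoint

print(name2codepoint['gt'])  # 62 -- entity name to code point
print(codepoint2name[62])    # 'gt' -- code point back to entity name
print(entitydefs['gt'])      # '>' -- name to Latin-1 replacement text
```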
Footnotes
1
See https://www.w3.org/TR/html5/syntax.html#named-character-references | python.library.html.entities |
html.entities.codepoint2name
A dictionary that maps Unicode code points to HTML entity names. | python.library.html.entities#html.entities.codepoint2name |
html.entities.entitydefs
A dictionary mapping XHTML 1.0 entity definitions to their replacement text in ISO Latin-1. | python.library.html.entities#html.entities.entitydefs |
html.entities.html5
A dictionary that maps HTML5 named character references 1 to the equivalent Unicode character(s), e.g. html5['gt;'] == '>'. Note that the trailing semicolon is included in the name (e.g. 'gt;'), however some of the names are accepted by the standard even without the semicolon: in this case the name is present with and without the ';'. See also html.unescape(). New in version 3.3. | python.library.html.entities#html.entities.html5 |
html.entities.name2codepoint
A dictionary that maps HTML entity names to the Unicode code points. | python.library.html.entities#html.entities.name2codepoint |
html.escape(s, quote=True)
Convert the characters &, < and > in string s to HTML-safe sequences. Use this if you need to display text that might contain such characters in HTML. If the optional flag quote is true, the characters (") and (') are also translated; this helps for inclusion in an HTML attribute value delimited by quotes, as in <a href="...">. New in version 3.2. | python.library.html#html.escape |
html.parser — Simple HTML and XHTML parser Source code: Lib/html/parser.py This module defines a class HTMLParser which serves as the basis for parsing text files formatted in HTML (HyperText Mark-up Language) and XHTML.
class html.parser.HTMLParser(*, convert_charrefs=True)
Create a parser instance able to parse invalid markup. If convert_charrefs is True (the default), all character references (except the ones in script/style elements) are automatically converted to the corresponding Unicode characters. An HTMLParser instance is fed HTML data and calls handler methods when start tags, end tags, text, comments, and other markup elements are encountered. The user should subclass HTMLParser and override its methods to implement the desired behavior. This parser does not check that end tags match start tags or call the end-tag handler for elements which are closed implicitly by closing an outer element. Changed in version 3.4: convert_charrefs keyword argument added. Changed in version 3.5: The default value for argument convert_charrefs is now True.
Example HTML Parser Application As a basic example, below is a simple HTML parser that uses the HTMLParser class to print out start tags, end tags, and data as they are encountered: from html.parser import HTMLParser
class MyHTMLParser(HTMLParser):
def handle_starttag(self, tag, attrs):
print("Encountered a start tag:", tag)
def handle_endtag(self, tag):
print("Encountered an end tag :", tag)
def handle_data(self, data):
print("Encountered some data :", data)
parser = MyHTMLParser()
parser.feed('<html><head><title>Test</title></head>'
'<body><h1>Parse me!</h1></body></html>')
The output will then be: Encountered a start tag: html
Encountered a start tag: head
Encountered a start tag: title
Encountered some data : Test
Encountered an end tag : title
Encountered an end tag : head
Encountered a start tag: body
Encountered a start tag: h1
Encountered some data : Parse me!
Encountered an end tag : h1
Encountered an end tag : body
Encountered an end tag : html
HTMLParser Methods HTMLParser instances have the following methods:
HTMLParser.feed(data)
Feed some text to the parser. It is processed insofar as it consists of complete elements; incomplete data is buffered until more data is fed or close() is called. data must be str.
HTMLParser.close()
Force processing of all buffered data as if it were followed by an end-of-file mark. This method may be redefined by a derived class to define additional processing at the end of the input, but the redefined version should always call the HTMLParser base class method close().
HTMLParser.reset()
Reset the instance. Loses all unprocessed data. This is called implicitly at instantiation time.
HTMLParser.getpos()
Return current line number and offset.
HTMLParser.get_starttag_text()
Return the text of the most recently opened start tag. This should not normally be needed for structured processing, but may be useful in dealing with HTML “as deployed” or for re-generating input with minimal changes (whitespace between attributes can be preserved, etc.).
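A small sketch of these two introspection methods (the markup fed in is illustrative; the base-class handlers do nothing, but the parser still tracks position and the raw text of the last start tag):

```python
from html.parser import HTMLParser

p = HTMLParser()
p.feed('<div class="x">hello')
print(p.get_starttag_text())  # '<div class="x">' -- raw tag text preserved
print(p.getpos())             # (line, offset) tuple
```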
The following methods are called when data or markup elements are encountered and they are meant to be overridden in a subclass. The base class implementations do nothing (except for handle_startendtag()):
HTMLParser.handle_starttag(tag, attrs)
This method is called to handle the start of a tag (e.g. <div id="main">). The tag argument is the name of the tag converted to lower case. The attrs argument is a list of (name, value) pairs containing the attributes found inside the tag’s <> brackets. The name will be translated to lower case, and quotes in the value have been removed, and character and entity references have been replaced. For instance, for the tag <A HREF="https://www.cwi.nl/">, this method would be called as handle_starttag('a', [('href', 'https://www.cwi.nl/')]). All entity references from html.entities are replaced in the attribute values.
HTMLParser.handle_endtag(tag)
This method is called to handle the end tag of an element (e.g. </div>). The tag argument is the name of the tag converted to lower case.
HTMLParser.handle_startendtag(tag, attrs)
Similar to handle_starttag(), but called when the parser encounters an XHTML-style empty tag (<img ... />). This method may be overridden by subclasses which require this particular lexical information; the default implementation simply calls handle_starttag() and handle_endtag().
HTMLParser.handle_data(data)
This method is called to process arbitrary data (e.g. text nodes and the content of <script>...</script> and <style>...</style>).
HTMLParser.handle_entityref(name)
This method is called to process a named character reference of the form &name; (e.g. &gt;), where name is a general entity reference (e.g. 'gt'). This method is never called if convert_charrefs is True.
HTMLParser.handle_charref(name)
This method is called to process decimal and hexadecimal numeric character references of the form &#NNN; and &#xNNN;. For example, the decimal equivalent for &gt; is &#62;, whereas the hexadecimal is &#x3E;; in this case the method will receive '62' or 'x3E'. This method is never called if convert_charrefs is True.
HTMLParser.handle_comment(data)
This method is called when a comment is encountered (e.g. <!--comment-->). For example, the comment <!-- comment --> will cause this method to be called with the argument ' comment '. The content of Internet Explorer conditional comments (condcoms) will also be sent to this method, so, for <!--[if IE 9]>IE9-specific content<![endif]-->, this method will receive '[if IE 9]>IE9-specific content<![endif]'.
HTMLParser.handle_decl(decl)
This method is called to handle an HTML doctype declaration (e.g. <!DOCTYPE html>). The decl parameter will be the entire contents of the declaration inside the <!...> markup (e.g. 'DOCTYPE html').
HTMLParser.handle_pi(data)
Method called when a processing instruction is encountered. The data parameter will contain the entire processing instruction. For example, for the processing instruction <?proc color='red'>, this method would be called as handle_pi("proc color='red'"). It is intended to be overridden by a derived class; the base class implementation does nothing. Note The HTMLParser class uses the SGML syntactic rules for processing instructions. An XHTML processing instruction using the trailing '?' will cause the '?' to be included in data.
HTMLParser.unknown_decl(data)
This method is called when an unrecognized declaration is read by the parser. The data parameter will be the entire contents of the declaration inside the <![...]> markup. It is sometimes useful to be overridden by a derived class. The base class implementation does nothing.
Examples The following class implements a parser that will be used to illustrate more examples: from html.parser import HTMLParser
from html.entities import name2codepoint
class MyHTMLParser(HTMLParser):
def handle_starttag(self, tag, attrs):
print("Start tag:", tag)
for attr in attrs:
print(" attr:", attr)
def handle_endtag(self, tag):
print("End tag :", tag)
def handle_data(self, data):
print("Data :", data)
def handle_comment(self, data):
print("Comment :", data)
def handle_entityref(self, name):
c = chr(name2codepoint[name])
print("Named ent:", c)
def handle_charref(self, name):
if name.startswith('x'):
c = chr(int(name[1:], 16))
else:
c = chr(int(name))
print("Num ent :", c)
def handle_decl(self, data):
print("Decl :", data)
parser = MyHTMLParser()
Parsing a doctype: >>> parser.feed('<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" '
... '"http://www.w3.org/TR/html4/strict.dtd">')
Decl : DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"
Parsing an element with a few attributes and a title: >>> parser.feed('<img src="python-logo.png" alt="The Python logo">')
Start tag: img
attr: ('src', 'python-logo.png')
attr: ('alt', 'The Python logo')
>>>
>>> parser.feed('<h1>Python</h1>')
Start tag: h1
Data : Python
End tag : h1
The content of script and style elements is returned as is, without further parsing: >>> parser.feed('<style type="text/css">#python { color: green }</style>')
Start tag: style
attr: ('type', 'text/css')
Data : #python { color: green }
End tag : style
>>> parser.feed('<script type="text/javascript">'
... 'alert("<strong>hello!</strong>");</script>')
Start tag: script
attr: ('type', 'text/javascript')
Data : alert("<strong>hello!</strong>");
End tag : script
Parsing comments: >>> parser.feed('<!-- a comment -->'
... '<!--[if IE 9]>IE-specific content<![endif]-->')
Comment : a comment
Comment : [if IE 9]>IE-specific content<![endif]
Parsing named and numeric character references and converting them to the correct char (note: these 3 references are all equivalent to '>'): >>> parser.feed('&gt;&#62;&#x3E;')
Named ent: >
Num ent : >
Num ent : >
Feeding incomplete chunks to feed() works, but handle_data() might be called more than once (unless convert_charrefs is set to True): >>> for chunk in ['<sp', 'an>buff', 'ered ', 'text</s', 'pan>']:
... parser.feed(chunk)
...
Start tag: span
Data : buff
Data : ered
Data : text
End tag : span
Parsing invalid HTML (e.g. unquoted attributes) also works: >>> parser.feed('<p><a class=link href=#main>tag soup</p ></a>')
Start tag: p
Start tag: a
attr: ('class', 'link')
attr: ('href', '#main')
Data : tag soup
End tag : p
End tag : a | python.library.html.parser |
class html.parser.HTMLParser(*, convert_charrefs=True)
Create a parser instance able to parse invalid markup. If convert_charrefs is True (the default), all character references (except the ones in script/style elements) are automatically converted to the corresponding Unicode characters. An HTMLParser instance is fed HTML data and calls handler methods when start tags, end tags, text, comments, and other markup elements are encountered. The user should subclass HTMLParser and override its methods to implement the desired behavior. This parser does not check that end tags match start tags or call the end-tag handler for elements which are closed implicitly by closing an outer element. Changed in version 3.4: convert_charrefs keyword argument added. Changed in version 3.5: The default value for argument convert_charrefs is now True. | python.library.html.parser#html.parser.HTMLParser |
HTMLParser.close()
Force processing of all buffered data as if it were followed by an end-of-file mark. This method may be redefined by a derived class to define additional processing at the end of the input, but the redefined version should always call the HTMLParser base class method close(). | python.library.html.parser#html.parser.HTMLParser.close |
HTMLParser.feed(data)
Feed some text to the parser. It is processed insofar as it consists of complete elements; incomplete data is buffered until more data is fed or close() is called. data must be str. | python.library.html.parser#html.parser.HTMLParser.feed |
HTMLParser.getpos()
Return current line number and offset. | python.library.html.parser#html.parser.HTMLParser.getpos |
HTMLParser.get_starttag_text()
Return the text of the most recently opened start tag. This should not normally be needed for structured processing, but may be useful in dealing with HTML “as deployed” or for re-generating input with minimal changes (whitespace between attributes can be preserved, etc.). | python.library.html.parser#html.parser.HTMLParser.get_starttag_text |
HTMLParser.handle_charref(name)
This method is called to process decimal and hexadecimal numeric character references of the form &#NNN; and &#xNNN;. For example, the decimal equivalent for &gt; is &#62;, whereas the hexadecimal is &#x3E;; in this case the method will receive '62' or 'x3E'. This method is never called if convert_charrefs is True. | python.library.html.parser#html.parser.HTMLParser.handle_charref |
HTMLParser.handle_comment(data)
This method is called when a comment is encountered (e.g. <!--comment-->). For example, the comment <!-- comment --> will cause this method to be called with the argument ' comment '. The content of Internet Explorer conditional comments (condcoms) will also be sent to this method, so, for <!--[if IE 9]>IE9-specific content<![endif]-->, this method will receive '[if IE 9]>IE9-specific content<![endif]'. | python.library.html.parser#html.parser.HTMLParser.handle_comment |
HTMLParser.handle_data(data)
This method is called to process arbitrary data (e.g. text nodes and the content of <script>...</script> and <style>...</style>). | python.library.html.parser#html.parser.HTMLParser.handle_data |
HTMLParser.handle_decl(decl)
This method is called to handle an HTML doctype declaration (e.g. <!DOCTYPE html>). The decl parameter will be the entire contents of the declaration inside the <!...> markup (e.g. 'DOCTYPE html'). | python.library.html.parser#html.parser.HTMLParser.handle_decl |
HTMLParser.handle_endtag(tag)
This method is called to handle the end tag of an element (e.g. </div>). The tag argument is the name of the tag converted to lower case. | python.library.html.parser#html.parser.HTMLParser.handle_endtag |
HTMLParser.handle_entityref(name)
This method is called to process a named character reference of the form &name; (e.g. &gt;), where name is a general entity reference (e.g. 'gt'). This method is never called if convert_charrefs is True. | python.library.html.parser#html.parser.HTMLParser.handle_entityref |
HTMLParser.handle_pi(data)
Method called when a processing instruction is encountered. The data parameter will contain the entire processing instruction. For example, for the processing instruction <?proc color='red'>, this method would be called as handle_pi("proc color='red'"). It is intended to be overridden by a derived class; the base class implementation does nothing. Note The HTMLParser class uses the SGML syntactic rules for processing instructions. An XHTML processing instruction using the trailing '?' will cause the '?' to be included in data. | python.library.html.parser#html.parser.HTMLParser.handle_pi |
HTMLParser.handle_startendtag(tag, attrs)
Similar to handle_starttag(), but called when the parser encounters an XHTML-style empty tag (<img ... />). This method may be overridden by subclasses which require this particular lexical information; the default implementation simply calls handle_starttag() and handle_endtag(). | python.library.html.parser#html.parser.HTMLParser.handle_startendtag |
HTMLParser.handle_starttag(tag, attrs)
This method is called to handle the start of a tag (e.g. <div id="main">). The tag argument is the name of the tag converted to lower case. The attrs argument is a list of (name, value) pairs containing the attributes found inside the tag’s <> brackets. The name will be translated to lower case, and quotes in the value have been removed, and character and entity references have been replaced. For instance, for the tag <A HREF="https://www.cwi.nl/">, this method would be called as handle_starttag('a', [('href', 'https://www.cwi.nl/')]). All entity references from html.entities are replaced in the attribute values. | python.library.html.parser#html.parser.HTMLParser.handle_starttag |
HTMLParser.reset()
Reset the instance. Loses all unprocessed data. This is called implicitly at instantiation time. | python.library.html.parser#html.parser.HTMLParser.reset |
HTMLParser.unknown_decl(data)
This method is called when an unrecognized declaration is read by the parser. The data parameter will be the entire contents of the declaration inside the <![...]> markup. It is sometimes useful to be overridden by a derived class. The base class implementation does nothing. | python.library.html.parser#html.parser.HTMLParser.unknown_decl |
html.unescape(s)
Convert all named and numeric character references (e.g. &gt;, &#62;, &#x3e;) in the string s to the corresponding Unicode characters. This function uses the rules defined by the HTML 5 standard for both valid and invalid character references, and the list of
HTML 5 named character references. New in version 3.4. | python.library.html#html.unescape |
http — HTTP modules Source code: Lib/http/__init__.py http is a package that collects several modules for working with the HyperText Transfer Protocol:
http.client is a low-level HTTP protocol client; for high-level URL opening use urllib.request
http.server contains basic HTTP server classes based on socketserver
http.cookies has utilities for implementing state management with cookies
http.cookiejar provides persistence of cookies http is also a module that defines a number of HTTP status codes and associated messages through the http.HTTPStatus enum:
class http.HTTPStatus
New in version 3.5. A subclass of enum.IntEnum that defines a set of HTTP status codes, reason phrases and long descriptions written in English. Usage: >>> from http import HTTPStatus
>>> HTTPStatus.OK
<HTTPStatus.OK: 200>
>>> HTTPStatus.OK == 200
True
>>> HTTPStatus.OK.value
200
>>> HTTPStatus.OK.phrase
'OK'
>>> HTTPStatus.OK.description
'Request fulfilled, document follows'
>>> list(HTTPStatus)
[<HTTPStatus.CONTINUE: 100>, <HTTPStatus.SWITCHING_PROTOCOLS: 101>, ...]
HTTP status codes Supported, IANA-registered status codes available in http.HTTPStatus are:
Code Enum Name Details
100 CONTINUE HTTP/1.1 RFC 7231, Section 6.2.1
101 SWITCHING_PROTOCOLS HTTP/1.1 RFC 7231, Section 6.2.2
102 PROCESSING WebDAV RFC 2518, Section 10.1
103 EARLY_HINTS An HTTP Status Code for Indicating Hints RFC 8297
200 OK HTTP/1.1 RFC 7231, Section 6.3.1
201 CREATED HTTP/1.1 RFC 7231, Section 6.3.2
202 ACCEPTED HTTP/1.1 RFC 7231, Section 6.3.3
203 NON_AUTHORITATIVE_INFORMATION HTTP/1.1 RFC 7231, Section 6.3.4
204 NO_CONTENT HTTP/1.1 RFC 7231, Section 6.3.5
205 RESET_CONTENT HTTP/1.1 RFC 7231, Section 6.3.6
206 PARTIAL_CONTENT HTTP/1.1 RFC 7233, Section 4.1
207 MULTI_STATUS WebDAV RFC 4918, Section 11.1
208 ALREADY_REPORTED WebDAV Binding Extensions RFC 5842, Section 7.1 (Experimental)
226 IM_USED Delta Encoding in HTTP RFC 3229, Section 10.4.1
300 MULTIPLE_CHOICES HTTP/1.1 RFC 7231, Section 6.4.1
301 MOVED_PERMANENTLY HTTP/1.1 RFC 7231, Section 6.4.2
302 FOUND HTTP/1.1 RFC 7231, Section 6.4.3
303 SEE_OTHER HTTP/1.1 RFC 7231, Section 6.4.4
304 NOT_MODIFIED HTTP/1.1 RFC 7232, Section 4.1
305 USE_PROXY HTTP/1.1 RFC 7231, Section 6.4.5
307 TEMPORARY_REDIRECT HTTP/1.1 RFC 7231, Section 6.4.7
308 PERMANENT_REDIRECT Permanent Redirect RFC 7238, Section 3 (Experimental)
400 BAD_REQUEST HTTP/1.1 RFC 7231, Section 6.5.1
401 UNAUTHORIZED HTTP/1.1 Authentication RFC 7235, Section 3.1
402 PAYMENT_REQUIRED HTTP/1.1 RFC 7231, Section 6.5.2
403 FORBIDDEN HTTP/1.1 RFC 7231, Section 6.5.3
404 NOT_FOUND HTTP/1.1 RFC 7231, Section 6.5.4
405 METHOD_NOT_ALLOWED HTTP/1.1 RFC 7231, Section 6.5.5
406 NOT_ACCEPTABLE HTTP/1.1 RFC 7231, Section 6.5.6
407 PROXY_AUTHENTICATION_REQUIRED HTTP/1.1 Authentication RFC 7235, Section 3.2
408 REQUEST_TIMEOUT HTTP/1.1 RFC 7231, Section 6.5.7
409 CONFLICT HTTP/1.1 RFC 7231, Section 6.5.8
410 GONE HTTP/1.1 RFC 7231, Section 6.5.9
411 LENGTH_REQUIRED HTTP/1.1 RFC 7231, Section 6.5.10
412 PRECONDITION_FAILED HTTP/1.1 RFC 7232, Section 4.2
413 REQUEST_ENTITY_TOO_LARGE HTTP/1.1 RFC 7231, Section 6.5.11
414 REQUEST_URI_TOO_LONG HTTP/1.1 RFC 7231, Section 6.5.12
415 UNSUPPORTED_MEDIA_TYPE HTTP/1.1 RFC 7231, Section 6.5.13
416 REQUESTED_RANGE_NOT_SATISFIABLE HTTP/1.1 Range Requests RFC 7233, Section 4.4
417 EXPECTATION_FAILED HTTP/1.1 RFC 7231, Section 6.5.14
418 IM_A_TEAPOT HTCPCP/1.0 RFC 2324, Section 2.3.2
421 MISDIRECTED_REQUEST HTTP/2 RFC 7540, Section 9.1.2
422 UNPROCESSABLE_ENTITY WebDAV RFC 4918, Section 11.2
423 LOCKED WebDAV RFC 4918, Section 11.3
424 FAILED_DEPENDENCY WebDAV RFC 4918, Section 11.4
425 TOO_EARLY Using Early Data in HTTP RFC 8470
426 UPGRADE_REQUIRED HTTP/1.1 RFC 7231, Section 6.5.15
428 PRECONDITION_REQUIRED Additional HTTP Status Codes RFC 6585
429 TOO_MANY_REQUESTS Additional HTTP Status Codes RFC 6585
431 REQUEST_HEADER_FIELDS_TOO_LARGE Additional HTTP Status Codes RFC 6585
451 UNAVAILABLE_FOR_LEGAL_REASONS An HTTP Status Code to Report Legal Obstacles RFC 7725
500 INTERNAL_SERVER_ERROR HTTP/1.1 RFC 7231, Section 6.6.1
501 NOT_IMPLEMENTED HTTP/1.1 RFC 7231, Section 6.6.2
502 BAD_GATEWAY HTTP/1.1 RFC 7231, Section 6.6.3
503 SERVICE_UNAVAILABLE HTTP/1.1 RFC 7231, Section 6.6.4
504 GATEWAY_TIMEOUT HTTP/1.1 RFC 7231, Section 6.6.5
505 HTTP_VERSION_NOT_SUPPORTED HTTP/1.1 RFC 7231, Section 6.6.6
506 VARIANT_ALSO_NEGOTIATES Transparent Content Negotiation in HTTP RFC 2295, Section 8.1 (Experimental)
507 INSUFFICIENT_STORAGE WebDAV RFC 4918, Section 11.5
508 LOOP_DETECTED WebDAV Binding Extensions RFC 5842, Section 7.2 (Experimental)
510 NOT_EXTENDED An HTTP Extension Framework RFC 2774, Section 7 (Experimental)
511 NETWORK_AUTHENTICATION_REQUIRED Additional HTTP Status Codes RFC 6585, Section 6 In order to preserve backwards compatibility, enum values are also present in the http.client module in the form of constants. The enum name is equal to the constant name (i.e. http.HTTPStatus.OK is also available as http.client.OK). Changed in version 3.7: Added 421 MISDIRECTED_REQUEST status code. New in version 3.8: Added 451 UNAVAILABLE_FOR_LEGAL_REASONS status code. New in version 3.9: Added 103 EARLY_HINTS, 418 IM_A_TEAPOT and 425 TOO_EARLY status codes. | python.library.http |
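The equivalence between the enum members and the `http.client` constants can be checked directly; members also compare equal to their integer codes and can be looked up by code:

```python
import http
import http.client

# Each enum value is mirrored as a module-level constant in http.client.
assert http.client.OK == http.HTTPStatus.OK == 200

status = http.HTTPStatus(404)          # look up a member by its code
print(status.name, '-', status.phrase)  # NOT_FOUND - Not Found
```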
http.client — HTTP protocol client Source code: Lib/http/client.py This module defines classes which implement the client side of the HTTP and HTTPS protocols. It is normally not used directly — the module urllib.request uses it to handle URLs that use HTTP and HTTPS. See also The Requests package is recommended for a higher-level HTTP client interface. Note HTTPS support is only available if Python was compiled with SSL support (through the ssl module). The module provides the following classes:
class http.client.HTTPConnection(host, port=None, [timeout, ]source_address=None, blocksize=8192)
An HTTPConnection instance represents one transaction with an HTTP server. It should be instantiated passing it a host and optional port number. If no port number is passed, the port is extracted from the host string if it has the form host:port, else the default HTTP port (80) is used. If the optional timeout parameter is given, blocking operations (like connection attempts) will timeout after that many seconds (if it is not given, the global default timeout setting is used). The optional source_address parameter may be a tuple of a (host, port) to use as the source address the HTTP connection is made from. The optional blocksize parameter sets the buffer size in bytes for sending a file-like message body. For example, the following calls all create instances that connect to the server at the same host and port: >>> h1 = http.client.HTTPConnection('www.python.org')
>>> h2 = http.client.HTTPConnection('www.python.org:80')
>>> h3 = http.client.HTTPConnection('www.python.org', 80)
>>> h4 = http.client.HTTPConnection('www.python.org', 80, timeout=10)
Changed in version 3.2: source_address was added. Changed in version 3.4: The strict parameter was removed. HTTP 0.9-style “Simple Responses” are no longer supported. Changed in version 3.7: blocksize parameter was added.
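Note that constructing an HTTPConnection does not open a socket; the connection is established lazily on the first request. This makes the host:port parsing easy to inspect offline:

```python
import http.client

# The port is extracted from the host string when given as host:port.
conn = http.client.HTTPConnection('www.python.org:80', timeout=10)
print(conn.host, conn.port)  # www.python.org 80
```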
class http.client.HTTPSConnection(host, port=None, key_file=None, cert_file=None, [timeout, ]source_address=None, *, context=None, check_hostname=None, blocksize=8192)
A subclass of HTTPConnection that uses SSL for communication with secure servers. Default port is 443. If context is specified, it must be a ssl.SSLContext instance describing the various SSL options. Please read Security considerations for more information on best practices. Changed in version 3.2: source_address, context and check_hostname were added. Changed in version 3.2: This class now supports HTTPS virtual hosts if possible (that is, if ssl.HAS_SNI is true). Changed in version 3.4: The strict parameter was removed. HTTP 0.9-style “Simple Responses” are no longer supported. Changed in version 3.4.3: This class now performs all the necessary certificate and hostname checks by default. To revert to the previous, unverified, behavior ssl._create_unverified_context() can be passed to the context parameter. Changed in version 3.8: This class now enables TLS 1.3 ssl.SSLContext.post_handshake_auth for the default context or when cert_file is passed with a custom context. Deprecated since version 3.6: key_file and cert_file are deprecated in favor of context. Please use ssl.SSLContext.load_cert_chain() instead, or let ssl.create_default_context() select the system’s trusted CA certificates for you. The check_hostname parameter is also deprecated; the ssl.SSLContext.check_hostname attribute of context should be used instead.
class http.client.HTTPResponse(sock, debuglevel=0, method=None, url=None)
Class whose instances are returned upon successful connection. Not instantiated directly by user. Changed in version 3.4: The strict parameter was removed. HTTP 0.9 style “Simple Responses” are no longer supported.
This module provides the following function:
http.client.parse_headers(fp)
Parse the headers from a file pointer fp representing a HTTP request/response. The file has to be a BufferedIOBase reader (i.e. not text) and must provide a valid RFC 2822 style header. This function returns an instance of http.client.HTTPMessage that holds the header fields, but no payload (the same as HTTPResponse.msg and http.server.BaseHTTPRequestHandler.headers). After returning, the file pointer fp is ready to read the HTTP body. Note parse_headers() does not parse the start-line of a HTTP message; it only parses the Name: value lines. The file has to be ready to read these field lines, so the first line should already be consumed before calling the function.
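A self-contained sketch using an in-memory `io.BytesIO` buffer (which is a BufferedIOBase reader): the start-line is consumed first, then `parse_headers()` reads the field lines and leaves the file pointer at the body.

```python
import io
from http.client import parse_headers

raw = io.BytesIO(b'GET / HTTP/1.1\r\nHost: example.com\r\nAccept: */*\r\n\r\nbody')
raw.readline()                 # consume the request line ourselves
headers = parse_headers(raw)   # reads up to and including the blank line
body = raw.read()              # fp is now positioned at the body

print(headers['Host'])  # example.com
print(body)             # b'body'
```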
The following exceptions are raised as appropriate:
exception http.client.HTTPException
The base class of the other exceptions in this module. It is a subclass of Exception.
exception http.client.NotConnected
A subclass of HTTPException.
exception http.client.InvalidURL
A subclass of HTTPException, raised if a port is given and is either non-numeric or empty.
exception http.client.UnknownProtocol
A subclass of HTTPException.
exception http.client.UnknownTransferEncoding
A subclass of HTTPException.
exception http.client.UnimplementedFileMode
A subclass of HTTPException.
exception http.client.IncompleteRead
A subclass of HTTPException.
exception http.client.ImproperConnectionState
A subclass of HTTPException.
exception http.client.CannotSendRequest
A subclass of ImproperConnectionState.
exception http.client.CannotSendHeader
A subclass of ImproperConnectionState.
exception http.client.ResponseNotReady
A subclass of ImproperConnectionState.
exception http.client.BadStatusLine
A subclass of HTTPException. Raised if a server responds with a HTTP status code that we don’t understand.
exception http.client.LineTooLong
A subclass of HTTPException. Raised if an excessively long line is received in the HTTP protocol from the server.
exception http.client.RemoteDisconnected
A subclass of ConnectionResetError and BadStatusLine. Raised by HTTPConnection.getresponse() when the attempt to read the response results in no data read from the connection, indicating that the remote end has closed the connection. New in version 3.5: Previously, BadStatusLine('') was raised.
The constants defined in this module are:
http.client.HTTP_PORT
The default port for the HTTP protocol (always 80).
http.client.HTTPS_PORT
The default port for the HTTPS protocol (always 443).
http.client.responses
This dictionary maps the HTTP 1.1 status codes to the W3C names. Example: http.client.responses[http.client.NOT_FOUND] is 'Not Found'.
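For instance, the port constants and the responses mapping fit together like this:

```python
import http.client

# Status-code constants such as http.client.OK and http.client.NOT_FOUND
# are plain integers; responses maps them to their reason phrases.
print(http.client.HTTP_PORT, http.client.HTTPS_PORT)   # 80 443
print(http.client.responses[http.client.NOT_FOUND])    # Not Found
print(http.client.responses[http.client.OK])           # OK
```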
See HTTP status codes for a list of HTTP status codes that are available in this module as constants. HTTPConnection Objects HTTPConnection instances have the following methods:
HTTPConnection.request(method, url, body=None, headers={}, *, encode_chunked=False)
Send a request to the server using the HTTP request method method and the selector url.

If body is specified, the specified data is sent after the headers are finished. It may be a str, a bytes-like object, an open file object, or an iterable of bytes. If body is a string, it is encoded as ISO-8859-1, the default for HTTP. If it is a bytes-like object, the bytes are sent as is. If it is a file object, the contents of the file are sent; this file object should support at least the read() method. If the file object is an instance of io.TextIOBase, the data returned by the read() method will be encoded as ISO-8859-1; otherwise the data returned by read() is sent as is. If body is an iterable, the elements of the iterable are sent as is until the iterable is exhausted.

The headers argument should be a mapping of extra HTTP headers to send with the request. If headers contains neither Content-Length nor Transfer-Encoding, but there is a request body, one of those header fields will be added automatically. If body is None, the Content-Length header is set to 0 for methods that expect a body (PUT, POST, and PATCH). If body is a string or a bytes-like object that is not also a file, the Content-Length header is set to its length. Any other type of body (files and iterables in general) will be chunk-encoded, and the Transfer-Encoding header will automatically be set instead of Content-Length.

The encode_chunked argument is only relevant if Transfer-Encoding is specified in headers. If encode_chunked is False, the HTTPConnection object assumes that all encoding is handled by the calling code. If it is True, the body will be chunk-encoded.

Note: Chunked transfer encoding was added in HTTP protocol version 1.1. Unless the HTTP server is known to handle HTTP 1.1, the caller must either specify the Content-Length, or must pass a str or bytes-like object that is not also a file as the body representation.

New in version 3.2: body can now be an iterable.
Changed in version 3.6: If neither Content-Length nor Transfer-Encoding are set in headers, file and iterable body objects are now chunk-encoded. The encode_chunked argument was added. No attempt is made to determine the Content-Length for file objects.
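The automatic chunk-encoding of iterable bodies can be observed against a throwaway local server; the EchoHandler class below is a made-up test fixture, not part of http.client:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    """Made-up fixture: reports back the Transfer-Encoding header it saw."""
    def do_POST(self):
        # Drain the chunked request body; BaseHTTPRequestHandler does not
        # decode chunked encoding itself.
        while True:
            size = int(self.rfile.readline().strip(), 16)
            self.rfile.read(size + 2)  # chunk data plus trailing CRLF
            if size == 0:
                break
        reply = self.headers.get("Transfer-Encoding", "absent").encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(reply)))
        self.end_headers()
        self.wfile.write(reply)
    def log_message(self, *args):  # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
# An iterable body with no Content-Length header: request() chunk-encodes it
# and sets the Transfer-Encoding header automatically (3.6+ behaviour).
conn.request("POST", "/", body=iter([b"part1", b"part2"]))
result = conn.getresponse().read().decode()
print(result)  # chunked
conn.close()
server.shutdown()
```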
HTTPConnection.getresponse()
Should be called after a request is sent to get the response from the server. Returns an HTTPResponse instance. Note: you must have read the whole response before you can send a new request to the server. Changed in version 3.5: If a ConnectionError or subclass is raised, the HTTPConnection object will be ready to reconnect when a new request is sent.
HTTPConnection.set_debuglevel(level)
Set the debugging level. The default debug level is 0, meaning no debugging output is printed. Any value greater than 0 will cause all currently defined debug output to be printed to stdout. The debuglevel is passed to any new HTTPResponse objects that are created. New in version 3.1.
HTTPConnection.set_tunnel(host, port=None, headers=None)
Set the host and the port for HTTP CONNECT tunnelling. This allows running the connection through a proxy server. The host and port arguments specify the endpoint of the tunneled connection (i.e. the address included in the CONNECT request, not the address of the proxy server). The headers argument should be a mapping of extra HTTP headers to send with the CONNECT request. For example, to tunnel through an HTTPS proxy server running locally on port 8080, we would pass the address of the proxy to the HTTPSConnection constructor, and the address of the host that we eventually want to reach to the set_tunnel() method: >>> import http.client
>>> conn = http.client.HTTPSConnection("localhost", 8080)
>>> conn.set_tunnel("www.python.org")
>>> conn.request("HEAD","/index.html")
New in version 3.2.
HTTPConnection.connect()
Connect to the server specified when the object was created. By default, this is called automatically when making a request if the client does not already have a connection.
HTTPConnection.close()
Close the connection to the server.
HTTPConnection.blocksize
Buffer size in bytes for sending a file-like message body. New in version 3.7.
As an alternative to using the request() method described above, you can also send your request step by step, by using the four functions below.
HTTPConnection.putrequest(method, url, skip_host=False, skip_accept_encoding=False)
This should be the first call after the connection to the server has been made. It sends a line to the server consisting of the method string, the url string, and the HTTP version (HTTP/1.1). To disable automatic sending of Host: or Accept-Encoding: headers (for example to accept additional content encodings), specify skip_host or skip_accept_encoding with non-False values.
HTTPConnection.putheader(header, argument[, ...])
Send an RFC 822-style header to the server. It sends a line to the server consisting of the header, a colon and a space, and the first argument. If more arguments are given, continuation lines are sent, each consisting of a tab and an argument.
HTTPConnection.endheaders(message_body=None, *, encode_chunked=False)
Send a blank line to the server, signalling the end of the headers. The optional message_body argument can be used to pass a message body associated with the request. If encode_chunked is True, the result of each iteration of message_body will be chunk-encoded as specified in RFC 7230, Section 3.3.1. How the data is encoded depends on the type of message_body. If message_body implements the buffer interface, the encoding will result in a single chunk. If message_body is a collections.abc.Iterable, each iteration of message_body will result in a chunk. If message_body is a file object, each call to .read() will result in a chunk. The method automatically signals the end of the chunk-encoded data immediately after message_body. Note: Due to the chunked encoding specification, empty chunks yielded by an iterator body will be ignored by the chunk-encoder. This is to avoid premature termination of the read of the request by the target server due to malformed encoding. New in version 3.6: Chunked encoding support. The encode_chunked parameter was added.
HTTPConnection.send(data)
Send data to the server. This should be used directly only after the endheaders() method has been called and before getresponse() is called.
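Putting the four calls together against a throwaway local echo server (the Echo handler is a made-up fixture, not part of http.client):

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Echo(BaseHTTPRequestHandler):
    """Made-up fixture: echoes the request body back to the client."""
    def do_POST(self):
        data = self.rfile.read(int(self.headers["Content-Length"]))
        self.send_response(200)
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)
    def log_message(self, *args):  # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Echo)
threading.Thread(target=server.serve_forever, daemon=True).start()

body = b"step by step"
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.putrequest("POST", "/echo")              # request line (Host: is added automatically)
conn.putheader("Content-Type", "text/plain")  # one header per call
conn.putheader("Content-Length", str(len(body)))
conn.endheaders()                             # blank line ends the headers
conn.send(body)                               # then the body
echoed = conn.getresponse().read()
print(echoed)  # b'step by step'
conn.close()
server.shutdown()
```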
HTTPResponse Objects An HTTPResponse instance wraps the HTTP response from the server. It provides access to the request headers and the entity body. The response is an iterable object and can be used in a with statement. Changed in version 3.5: The io.BufferedIOBase interface is now implemented and all of its reader operations are supported.
HTTPResponse.read([amt])
Reads and returns the response body, or up to the next amt bytes.
HTTPResponse.readinto(b)
Reads up to the next len(b) bytes of the response body into the buffer b. Returns the number of bytes read. New in version 3.3.
HTTPResponse.getheader(name, default=None)
Return the value of the header name, or default if there is no header matching name. If there is more than one header with the name name, return all of the values joined by ', '. If default is any iterable other than a single string, its elements are similarly returned joined by commas.
HTTPResponse.getheaders()
Return a list of (header, value) tuples.
HTTPResponse.fileno()
Return the fileno of the underlying socket.
HTTPResponse.msg
A http.client.HTTPMessage instance containing the response headers. http.client.HTTPMessage is a subclass of email.message.Message.
HTTPResponse.version
HTTP protocol version used by server. 10 for HTTP/1.0, 11 for HTTP/1.1.
HTTPResponse.url
URL of the resource retrieved, commonly used to determine if a redirect was followed.
HTTPResponse.headers
Headers of the response in the form of an email.message.EmailMessage instance.
HTTPResponse.status
Status code returned by server.
HTTPResponse.reason
Reason phrase returned by server.
HTTPResponse.debuglevel
A debugging hook. If debuglevel is greater than zero, messages will be printed to stdout as the response is read and parsed.
HTTPResponse.closed
Is True if the stream is closed.
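Most of the attributes above can be exercised without touching the network by pointing an HTTPConnection at a throwaway local server (the Hello handler is a made-up fixture, not part of http.client):

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    """Made-up fixture serving a fixed plain-text body over HTTP/1.1."""
    protocol_version = "HTTP/1.1"
    def do_GET(self):
        payload = b"hello"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)
    def log_message(self, *args):  # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Hello)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")
resp = conn.getresponse()
print(resp.status, resp.reason)        # 200 OK
print(resp.version)                    # 11, i.e. HTTP/1.1
print(resp.getheader("Content-Type"))  # text/plain
body = resp.read()
print(body)                            # b'hello'
conn.close()
server.shutdown()
```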
HTTPResponse.geturl()
Deprecated since version 3.9: Deprecated in favor of url.
HTTPResponse.info()
Deprecated since version 3.9: Deprecated in favor of headers.
HTTPResponse.getstatus()
Deprecated since version 3.9: Deprecated in favor of status.
Examples Here is an example session that uses the GET method: >>> import http.client
>>> conn = http.client.HTTPSConnection("www.python.org")
>>> conn.request("GET", "/")
>>> r1 = conn.getresponse()
>>> print(r1.status, r1.reason)
200 OK
>>> data1 = r1.read() # This will return entire content.
>>> # The following example demonstrates reading data in chunks.
>>> conn.request("GET", "/")
>>> r1 = conn.getresponse()
>>> while chunk := r1.read(200):
... print(repr(chunk))
b'<!doctype html>\n<!--[if"...
...
>>> # Example of an invalid request
>>> conn = http.client.HTTPSConnection("docs.python.org")
>>> conn.request("GET", "/parrot.spam")
>>> r2 = conn.getresponse()
>>> print(r2.status, r2.reason)
404 Not Found
>>> data2 = r2.read()
>>> conn.close()
Here is an example session that uses the HEAD method. Note that the HEAD method never returns any data. >>> import http.client
>>> conn = http.client.HTTPSConnection("www.python.org")
>>> conn.request("HEAD", "/")
>>> res = conn.getresponse()
>>> print(res.status, res.reason)
200 OK
>>> data = res.read()
>>> print(len(data))
0
>>> data == b''
True
Here is an example session that shows how to POST requests: >>> import http.client, urllib.parse
>>> params = urllib.parse.urlencode({'@number': 12524, '@type': 'issue', '@action': 'show'})
>>> headers = {"Content-type": "application/x-www-form-urlencoded",
... "Accept": "text/plain"}
>>> conn = http.client.HTTPConnection("bugs.python.org")
>>> conn.request("POST", "", params, headers)
>>> response = conn.getresponse()
>>> print(response.status, response.reason)
302 Found
>>> data = response.read()
>>> data
b'Redirecting to <a href="http://bugs.python.org/issue12524">http://bugs.python.org/issue12524</a>'
>>> conn.close()
Client side HTTP PUT requests are very similar to POST requests. The difference lies only on the server side, where the HTTP server will allow resources to be created via a PUT request. It should be noted that custom HTTP methods are also handled in urllib.request.Request by setting the appropriate method attribute. Here is an example session that shows how to send a PUT request using http.client: >>> # This creates an HTTP message
>>> # with the content of BODY as the enclosed representation
>>> # for the resource http://localhost:8080/file
...
>>> import http.client
>>> BODY = "***filecontents***"
>>> conn = http.client.HTTPConnection("localhost", 8080)
>>> conn.request("PUT", "/file", BODY)
>>> response = conn.getresponse()
>>> print(response.status, response.reason)
200 OK
HTTPMessage Objects An http.client.HTTPMessage instance holds the headers from an HTTP response. It is implemented using the email.message.Message class. | python.library.http.client |
exception http.client.BadStatusLine
A subclass of HTTPException. Raised if a server responds with an HTTP status line that we don't understand. | python.library.http.client#http.client.BadStatusLine |
exception http.client.CannotSendHeader
A subclass of ImproperConnectionState. | python.library.http.client#http.client.CannotSendHeader |
exception http.client.CannotSendRequest
A subclass of ImproperConnectionState. | python.library.http.client#http.client.CannotSendRequest |
class http.client.HTTPConnection(host, port=None, [timeout, ]source_address=None, blocksize=8192)
An HTTPConnection instance represents one transaction with an HTTP server. It should be instantiated passing it a host and optional port number. If no port number is passed, the port is extracted from the host string if it has the form host:port, else the default HTTP port (80) is used. If the optional timeout parameter is given, blocking operations (like connection attempts) will timeout after that many seconds (if it is not given, the global default timeout setting is used). The optional source_address parameter may be a tuple of a (host, port) to use as the source address the HTTP connection is made from. The optional blocksize parameter sets the buffer size in bytes for sending a file-like message body. For example, the following calls all create instances that connect to the server at the same host and port: >>> h1 = http.client.HTTPConnection('www.python.org')
>>> h2 = http.client.HTTPConnection('www.python.org:80')
>>> h3 = http.client.HTTPConnection('www.python.org', 80)
>>> h4 = http.client.HTTPConnection('www.python.org', 80, timeout=10)
Changed in version 3.2: source_address was added. Changed in version 3.4: The strict parameter was removed. HTTP 0.9-style “Simple Responses” are no longer supported. Changed in version 3.7: blocksize parameter was added. | python.library.http.client#http.client.HTTPConnection |
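Since instantiating an HTTPConnection does not open a socket, the port-extraction rule can be checked directly on the instances:

```python
import http.client

# No connection is made until a request is sent, so these are safe offline.
h1 = http.client.HTTPConnection('www.python.org')
h2 = http.client.HTTPConnection('www.python.org:80')
h3 = http.client.HTTPConnection('www.python.org', 80)
print(h1.host, h1.port)               # www.python.org 80
print(h1.port == h2.port == h3.port)  # True
```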
HTTPConnection.blocksize
Buffer size in bytes for sending a file-like message body. New in version 3.7. | python.library.http.client#http.client.HTTPConnection.blocksize |
HTTPConnection.close()
Close the connection to the server. | python.library.http.client#http.client.HTTPConnection.close |
HTTPConnection.connect()
Connect to the server specified when the object was created. By default, this is called automatically when making a request if the client does not already have a connection. | python.library.http.client#http.client.HTTPConnection.connect |
HTTPConnection.endheaders(message_body=None, *, encode_chunked=False)
Send a blank line to the server, signalling the end of the headers. The optional message_body argument can be used to pass a message body associated with the request. If encode_chunked is True, the result of each iteration of message_body will be chunk-encoded as specified in RFC 7230, Section 3.3.1. How the data is encoded depends on the type of message_body. If message_body implements the buffer interface, the encoding will result in a single chunk. If message_body is a collections.abc.Iterable, each iteration of message_body will result in a chunk. If message_body is a file object, each call to .read() will result in a chunk. The method automatically signals the end of the chunk-encoded data immediately after message_body. Note: Due to the chunked encoding specification, empty chunks yielded by an iterator body will be ignored by the chunk-encoder. This is to avoid premature termination of the read of the request by the target server due to malformed encoding. New in version 3.6: Chunked encoding support. The encode_chunked parameter was added. | python.library.http.client#http.client.HTTPConnection.endheaders |
HTTPConnection.getresponse()
Should be called after a request is sent to get the response from the server. Returns an HTTPResponse instance. Note: you must have read the whole response before you can send a new request to the server. Changed in version 3.5: If a ConnectionError or subclass is raised, the HTTPConnection object will be ready to reconnect when a new request is sent. | python.library.http.client#http.client.HTTPConnection.getresponse |
HTTPConnection.putheader(header, argument[, ...])
Send an RFC 822-style header to the server. It sends a line to the server consisting of the header, a colon and a space, and the first argument. If more arguments are given, continuation lines are sent, each consisting of a tab and an argument. | python.library.http.client#http.client.HTTPConnection.putheader |
HTTPConnection.putrequest(method, url, skip_host=False, skip_accept_encoding=False)
This should be the first call after the connection to the server has been made. It sends a line to the server consisting of the method string, the url string, and the HTTP version (HTTP/1.1). To disable automatic sending of Host: or Accept-Encoding: headers (for example to accept additional content encodings), specify skip_host or skip_accept_encoding with non-False values. | python.library.http.client#http.client.HTTPConnection.putrequest |
HTTPConnection.request(method, url, body=None, headers={}, *, encode_chunked=False)
This will send a request to the server using the HTTP request method method and the selector url. If body is specified, the specified data is sent after the headers are finished. It may be a str, a bytes-like object, an open file object, or an iterable of bytes. If body is a string, it is encoded as ISO-8859-1, the default for HTTP. If it is a bytes-like object, the bytes are sent as is. If it is a file object, the contents of the file are sent; this file object should support at least the read() method. If the file object is an instance of io.TextIOBase, the data returned by the read() method will be encoded as ISO-8859-1; otherwise the data returned by read() is sent as is. If body is an iterable, the elements of the iterable are sent as is until the iterable is exhausted. The headers argument should be a mapping of extra HTTP headers to send with the request. If headers contains neither Content-Length nor Transfer-Encoding, but there is a request body, one of those header fields will be added automatically. If body is None, the Content-Length header is set to 0 for methods that expect a body (PUT, POST, and PATCH). If body is a string or a bytes-like object that is not also a file, the Content-Length header is set to its length. Any other type of body (files and iterables in general) will be chunk-encoded, and the Transfer-Encoding header will automatically be set instead of Content-Length. The encode_chunked argument is only relevant if Transfer-Encoding is specified in headers. If encode_chunked is False, the HTTPConnection object assumes that all encoding is handled by the calling code. If it is True, the body will be chunk-encoded. Note: Chunked transfer encoding was added in HTTP protocol version 1.1. Unless the HTTP server is known to handle HTTP 1.1, the caller must either specify the Content-Length, or must pass a str or bytes-like object that is not also a file as the body representation. New in version 3.2: body can now be an iterable.
Changed in version 3.6: If neither Content-Length nor Transfer-Encoding are set in headers, file and iterable body objects are now chunk-encoded. The encode_chunked argument was added. No attempt is made to determine the Content-Length for file objects. | python.library.http.client#http.client.HTTPConnection.request |
HTTPConnection.send(data)
Send data to the server. This should be used directly only after the endheaders() method has been called and before getresponse() is called. | python.library.http.client#http.client.HTTPConnection.send |
HTTPConnection.set_debuglevel(level)
Set the debugging level. The default debug level is 0, meaning no debugging output is printed. Any value greater than 0 will cause all currently defined debug output to be printed to stdout. The debuglevel is passed to any new HTTPResponse objects that are created. New in version 3.1. | python.library.http.client#http.client.HTTPConnection.set_debuglevel |
HTTPConnection.set_tunnel(host, port=None, headers=None)
Set the host and the port for HTTP Connect Tunnelling. This allows running the connection through a proxy server. The host and port arguments specify the endpoint of the tunneled connection (i.e. the address included in the CONNECT request, not the address of the proxy server). The headers argument should be a mapping of extra HTTP headers to send with the CONNECT request. For example, to tunnel through a HTTPS proxy server running locally on port 8080, we would pass the address of the proxy to the HTTPSConnection constructor, and the address of the host that we eventually want to reach to the set_tunnel() method: >>> import http.client
>>> conn = http.client.HTTPSConnection("localhost", 8080)
>>> conn.set_tunnel("www.python.org")
>>> conn.request("HEAD","/index.html")
New in version 3.2. | python.library.http.client#http.client.HTTPConnection.set_tunnel |
exception http.client.HTTPException
The base class of the other exceptions in this module. It is a subclass of Exception. | python.library.http.client#http.client.HTTPException |
class http.client.HTTPResponse(sock, debuglevel=0, method=None, url=None)
Class whose instances are returned upon successful connection. Not instantiated directly by user. Changed in version 3.4: The strict parameter was removed. HTTP 0.9 style “Simple Responses” are no longer supported. | python.library.http.client#http.client.HTTPResponse |
HTTPResponse.closed
Is True if the stream is closed. | python.library.http.client#http.client.HTTPResponse.closed |
HTTPResponse.debuglevel
A debugging hook. If debuglevel is greater than zero, messages will be printed to stdout as the response is read and parsed. | python.library.http.client#http.client.HTTPResponse.debuglevel |
HTTPResponse.fileno()
Return the fileno of the underlying socket. | python.library.http.client#http.client.HTTPResponse.fileno |
HTTPResponse.getheader(name, default=None)
Return the value of the header name, or default if there is no header matching name. If there is more than one header with the name name, return all of the values joined by ', '. If default is any iterable other than a single string, its elements are similarly returned joined by commas. | python.library.http.client#http.client.HTTPResponse.getheader |
HTTPResponse.getheaders()
Return a list of (header, value) tuples. | python.library.http.client#http.client.HTTPResponse.getheaders |
HTTPResponse.getstatus()
Deprecated since version 3.9: Deprecated in favor of status. | python.library.http.client#http.client.HTTPResponse.getstatus |
HTTPResponse.geturl()
Deprecated since version 3.9: Deprecated in favor of url. | python.library.http.client#http.client.HTTPResponse.geturl |
HTTPResponse.headers
Headers of the response in the form of an email.message.EmailMessage instance. | python.library.http.client#http.client.HTTPResponse.headers |
HTTPResponse.info()
Deprecated since version 3.9: Deprecated in favor of headers. | python.library.http.client#http.client.HTTPResponse.info |
HTTPResponse.msg
A http.client.HTTPMessage instance containing the response headers. http.client.HTTPMessage is a subclass of email.message.Message. | python.library.http.client#http.client.HTTPResponse.msg |
HTTPResponse.read([amt])
Reads and returns the response body, or up to the next amt bytes. | python.library.http.client#http.client.HTTPResponse.read |
HTTPResponse.readinto(b)
Reads up to the next len(b) bytes of the response body into the buffer b. Returns the number of bytes read. New in version 3.3. | python.library.http.client#http.client.HTTPResponse.readinto |
HTTPResponse.reason
Reason phrase returned by server. | python.library.http.client#http.client.HTTPResponse.reason |
HTTPResponse.status
Status code returned by server. | python.library.http.client#http.client.HTTPResponse.status |
HTTPResponse.url
URL of the resource retrieved, commonly used to determine if a redirect was followed. | python.library.http.client#http.client.HTTPResponse.url |
HTTPResponse.version
HTTP protocol version used by server. 10 for HTTP/1.0, 11 for HTTP/1.1. | python.library.http.client#http.client.HTTPResponse.version |
class http.client.HTTPSConnection(host, port=None, key_file=None, cert_file=None, [timeout, ]source_address=None, *, context=None, check_hostname=None, blocksize=8192)
A subclass of HTTPConnection that uses SSL for communication with secure servers. Default port is 443. If context is specified, it must be a ssl.SSLContext instance describing the various SSL options. Please read Security considerations for more information on best practices. Changed in version 3.2: source_address, context and check_hostname were added. Changed in version 3.2: This class now supports HTTPS virtual hosts if possible (that is, if ssl.HAS_SNI is true). Changed in version 3.4: The strict parameter was removed. HTTP 0.9-style “Simple Responses” are no longer supported. Changed in version 3.4.3: This class now performs all the necessary certificate and hostname checks by default. To revert to the previous, unverified, behavior ssl._create_unverified_context() can be passed to the context parameter. Changed in version 3.8: This class now enables TLS 1.3 ssl.SSLContext.post_handshake_auth for the default context or when cert_file is passed with a custom context. Deprecated since version 3.6: key_file and cert_file are deprecated in favor of context. Please use ssl.SSLContext.load_cert_chain() instead, or let ssl.create_default_context() select the system’s trusted CA certificates for you. The check_hostname parameter is also deprecated; the ssl.SSLContext.check_hostname attribute of context should be used instead. | python.library.http.client#http.client.HTTPSConnection |
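A minimal sketch of the recommended construction path, using an ssl.SSLContext instead of the deprecated key_file/cert_file arguments (no connection is opened until a request is made):

```python
import http.client
import ssl

# create_default_context() loads the system's trusted CA certificates and
# enables hostname checking, matching the class's verified-by-default behaviour.
context = ssl.create_default_context()
conn = http.client.HTTPSConnection("www.python.org", context=context)
print(conn.port)  # 443, the default HTTPS port
```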
http.client.HTTPS_PORT
The default port for the HTTPS protocol (always 443). | python.library.http.client#http.client.HTTPS_PORT |
http.client.HTTP_PORT
The default port for the HTTP protocol (always 80). | python.library.http.client#http.client.HTTP_PORT |
exception http.client.ImproperConnectionState
A subclass of HTTPException. | python.library.http.client#http.client.ImproperConnectionState |
exception http.client.IncompleteRead
A subclass of HTTPException. | python.library.http.client#http.client.IncompleteRead |
exception http.client.InvalidURL
A subclass of HTTPException, raised if a port is given and is either non-numeric or empty. | python.library.http.client#http.client.InvalidURL |
exception http.client.LineTooLong
A subclass of HTTPException. Raised if an excessively long line is received in the HTTP protocol from the server. | python.library.http.client#http.client.LineTooLong |
exception http.client.NotConnected
A subclass of HTTPException. | python.library.http.client#http.client.NotConnected |
http.client.parse_headers(fp)
Parse the headers from a file pointer fp representing an HTTP request/response. The file has to be a BufferedIOBase reader (i.e. not text) and must provide a valid RFC 2822-style header. This function returns an instance of http.client.HTTPMessage that holds the header fields, but no payload (the same as HTTPResponse.msg and http.server.BaseHTTPRequestHandler.headers). After returning, the file pointer fp is ready to read the HTTP body. Note: parse_headers() does not parse the start-line of an HTTP message; it only parses the Name: value lines. The file has to be ready to read these field lines, so the first line should already be consumed before calling the function. | python.library.http.client#http.client.parse_headers |
exception http.client.RemoteDisconnected
A subclass of ConnectionResetError and BadStatusLine. Raised by HTTPConnection.getresponse() when the attempt to read the response results in no data read from the connection, indicating that the remote end has closed the connection. New in version 3.5: Previously, BadStatusLine('') was raised. | python.library.http.client#http.client.RemoteDisconnected |
exception http.client.ResponseNotReady
A subclass of ImproperConnectionState. | python.library.http.client#http.client.ResponseNotReady |
http.client.responses
This dictionary maps the HTTP 1.1 status codes to the W3C names. Example: http.client.responses[http.client.NOT_FOUND] is 'Not Found'. | python.library.http.client#http.client.responses |
exception http.client.UnimplementedFileMode
A subclass of HTTPException. | python.library.http.client#http.client.UnimplementedFileMode |
exception http.client.UnknownProtocol
A subclass of HTTPException. | python.library.http.client#http.client.UnknownProtocol |
exception http.client.UnknownTransferEncoding
A subclass of HTTPException. | python.library.http.client#http.client.UnknownTransferEncoding |
http.cookiejar — Cookie handling for HTTP clients Source code: Lib/http/cookiejar.py

The http.cookiejar module defines classes for automatic handling of HTTP cookies. It is useful for accessing web sites that require small pieces of data – cookies – to be set on the client machine by an HTTP response from a web server, and then returned to the server in later HTTP requests. Both the regular Netscape cookie protocol and the protocol defined by RFC 2965 are handled. RFC 2965 handling is switched off by default. RFC 2109 cookies are parsed as Netscape cookies and subsequently treated either as Netscape or RFC 2965 cookies according to the ‘policy’ in effect. Note that the great majority of cookies on the Internet are Netscape cookies. http.cookiejar attempts to follow the de-facto Netscape cookie protocol (which differs substantially from that set out in the original Netscape specification), including taking note of the max-age and port cookie-attributes introduced with RFC 2965.

Note: The various named parameters found in Set-Cookie and Set-Cookie2 headers (e.g. domain and expires) are conventionally referred to as attributes. To distinguish them from Python attributes, the documentation for this module uses the term cookie-attribute instead.

The module defines the following exception:
exception http.cookiejar.LoadError
Instances of FileCookieJar raise this exception on failure to load cookies from a file. LoadError is a subclass of OSError. Changed in version 3.3: LoadError was made a subclass of OSError instead of IOError.
The following classes are provided:
class http.cookiejar.CookieJar(policy=None)
policy is an object implementing the CookiePolicy interface. The CookieJar class stores HTTP cookies. It extracts cookies from HTTP requests, and returns them in HTTP responses. CookieJar instances automatically expire contained cookies when necessary. Subclasses are also responsible for storing and retrieving cookies from a file or database.
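A common pattern is to pair a CookieJar with urllib.request, so cookies set by one response are sent with later requests. The CookieSetter handler below is a made-up local fixture so the example runs offline:

```python
import http.cookiejar
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class CookieSetter(BaseHTTPRequestHandler):
    """Made-up fixture: sets one Netscape-style cookie on every GET."""
    def do_GET(self):
        self.send_response(200)
        self.send_header("Set-Cookie", "session=abc123; Path=/")
        self.send_header("Content-Length", "0")
        self.end_headers()
    def log_message(self, *args):  # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), CookieSetter)
threading.Thread(target=server.serve_forever, daemon=True).start()

jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
opener.open("http://127.0.0.1:%d/" % server.server_port)
names = [c.name for c in jar]  # CookieJar supports iteration over Cookie objects
print(names)  # ['session']
server.shutdown()
```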
class http.cookiejar.FileCookieJar(filename, delayload=None, policy=None)
policy is an object implementing the CookiePolicy interface. For the other arguments, see the documentation for the corresponding attributes. A CookieJar which can load cookies from, and perhaps save cookies to, a file on disk. Cookies are NOT loaded from the named file until either the load() or revert() method is called. Subclasses of this class are documented in section FileCookieJar subclasses and co-operation with web browsers. Changed in version 3.8: The filename parameter supports a path-like object.
class http.cookiejar.CookiePolicy
This class is responsible for deciding whether each cookie should be accepted from / returned to the server.
class http.cookiejar.DefaultCookiePolicy(blocked_domains=None, allowed_domains=None, netscape=True, rfc2965=False, rfc2109_as_netscape=None, hide_cookie2=False, strict_domain=False, strict_rfc2965_unverifiable=True, strict_ns_unverifiable=False, strict_ns_domain=DefaultCookiePolicy.DomainLiberal, strict_ns_set_initial_dollar=False, strict_ns_set_path=False, secure_protocols=("https", "wss"))
Constructor arguments should be passed as keyword arguments only. blocked_domains is a sequence of domain names that we never accept cookies from, nor return cookies to. allowed_domains if not None, this is a sequence of the only domains for which we accept and return cookies. secure_protocols is a sequence of protocols for which secure cookies can be added to. By default https and wss (secure websocket) are considered secure protocols. For all other arguments, see the documentation for CookiePolicy and DefaultCookiePolicy objects. DefaultCookiePolicy implements the standard accept / reject rules for Netscape and RFC 2965 cookies. By default, RFC 2109 cookies (ie. cookies received in a Set-Cookie header with a version cookie-attribute of 1) are treated according to the RFC 2965 rules. However, if RFC 2965 handling is turned off or rfc2109_as_netscape is True, RFC 2109 cookies are ‘downgraded’ by the CookieJar instance to Netscape cookies, by setting the version attribute of the Cookie instance to 0. DefaultCookiePolicy also provides some parameters to allow some fine-tuning of policy.
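For example, the blocked_domains argument (and the matching is_blocked() query) can be exercised without any network traffic:

```python
import http.cookiejar

# A sketch of per-domain fine-tuning: refuse cookies from one domain outright.
policy = http.cookiejar.DefaultCookiePolicy(
    blocked_domains=["ads.example.com"])
print(policy.is_blocked("ads.example.com"))  # True
print(policy.is_blocked("docs.python.org"))  # False

jar = http.cookiejar.CookieJar(policy=policy)  # the jar now applies this policy
```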
class http.cookiejar.Cookie
This class represents Netscape, RFC 2109 and RFC 2965 cookies. It is not expected that users of http.cookiejar construct their own Cookie instances. Instead, if necessary, call make_cookies() on a CookieJar instance.
See also
Module urllib.request
URL opening with automatic cookie handling.
Module http.cookies
HTTP cookie classes, principally useful for server-side code. The http.cookiejar and http.cookies modules do not depend on each other.
https://curl.haxx.se/rfc/cookie_spec.html
The specification of the original Netscape cookie protocol. Though this is still the dominant protocol, the ‘Netscape cookie protocol’ implemented by all the major browsers (and http.cookiejar) only bears a passing resemblance to the one sketched out in cookie_spec.html.
RFC 2109 - HTTP State Management Mechanism
Obsoleted by RFC 2965. Uses Set-Cookie with version=1.
RFC 2965 - HTTP State Management Mechanism
The Netscape protocol with the bugs fixed. Uses Set-Cookie2 in place of Set-Cookie. Not widely used.
http://kristol.org/cookie/errata.html
Unfinished errata to RFC 2965.
RFC 2964 - Use of HTTP State Management
CookieJar and FileCookieJar Objects
CookieJar objects support the iterator protocol for iterating over contained Cookie objects. CookieJar has the following methods:
CookieJar.add_cookie_header(request)
Add correct Cookie header to request. If policy allows (i.e. the rfc2965 and hide_cookie2 attributes of the CookieJar’s CookiePolicy instance are true and false respectively), the Cookie2 header is also added when appropriate. The request object (usually a urllib.request.Request instance) must support the get_full_url(), get_host(), get_type(), unverifiable(), has_header(), get_header(), header_items() and add_unredirected_header() methods and the origin_req_host attribute, as documented by urllib.request. Changed in version 3.3: the request object needs an origin_req_host attribute. Dependency on the deprecated get_origin_req_host() method has been removed.
CookieJar.extract_cookies(response, request)
Extract cookies from HTTP response and store them in the CookieJar, where allowed by policy. The CookieJar will look for allowable Set-Cookie and Set-Cookie2 headers in the response argument, and store cookies as appropriate (subject to the CookiePolicy.set_ok() method’s approval). The response object (usually the result of a call to urllib.request.urlopen(), or similar) should support an info() method, which returns an email.message.Message instance. The request object (usually a urllib.request.Request instance) must support the get_full_url(), get_host() and unverifiable() methods and the origin_req_host attribute, as documented by urllib.request. The request is used to set default values for cookie-attributes as well as for checking that the cookie is allowed to be set. Changed in version 3.3: the request object needs an origin_req_host attribute. Dependency on the deprecated get_origin_req_host() method has been removed.
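The extract/return round trip can be sketched without touching the network by faking a response with urllib.response.addinfourl, which wraps a file-like body with headers and a URL the way urlopen() does. The header value and URLs below are illustrative:

```python
import http.cookiejar
import urllib.request
from email.message import Message
from io import BytesIO
from urllib.response import addinfourl

# Build a fake response carrying a Set-Cookie header (illustrative values).
headers = Message()
headers["Set-Cookie"] = "session=abc123; Path=/"
response = addinfourl(BytesIO(b""), headers, "http://example.com/")

jar = http.cookiejar.CookieJar()
request = urllib.request.Request("http://example.com/")
jar.extract_cookies(response, request)      # store cookies the policy allows

# A later request to the same site gets the cookie back.
request2 = urllib.request.Request("http://example.com/page")
jar.add_cookie_header(request2)
print(request2.get_header("Cookie"))        # session=abc123
```

This is essentially what HTTPCookieProcessor does for you on every request and response when installed in an opener.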
CookieJar.set_policy(policy)
Set the CookiePolicy instance to be used.
CookieJar.make_cookies(response, request)
Return sequence of Cookie objects extracted from response object. See the documentation for extract_cookies() for the interfaces required of the response and request arguments.
CookieJar.set_cookie_if_ok(cookie, request)
Set a Cookie if policy says it’s OK to do so.
CookieJar.set_cookie(cookie)
Set a Cookie, without checking with policy to see whether or not it should be set.
CookieJar.clear([domain[, path[, name]]])
Clear some cookies. If invoked without arguments, clear all cookies. If given a single argument, only cookies belonging to that domain will be removed. If given two arguments, cookies belonging to the specified domain and URL path are removed. If given three arguments, then the cookie with the specified domain, path and name is removed. Raises KeyError if no matching cookie exists.
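A small sketch of the KeyError behaviour described above (the domain, path and name are illustrative):

```python
import http.cookiejar

jar = http.cookiejar.CookieJar()

# clear() with three arguments raises KeyError when no cookie matches.
try:
    jar.clear("example.com", "/", "session")
    removed = True
except KeyError:
    removed = False

# With no arguments it simply empties the jar (and never raises).
jar.clear()
```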
CookieJar.clear_session_cookies()
Discard all session cookies. Discards all contained cookies that have a true discard attribute (usually because they had either no max-age or expires cookie-attribute, or an explicit discard cookie-attribute). For interactive browsers, the end of a session usually corresponds to closing the browser window. Note that the save() method won’t save session cookies anyway, unless you ask otherwise by passing a true ignore_discard argument.
FileCookieJar implements the following additional methods:
FileCookieJar.save(filename=None, ignore_discard=False, ignore_expires=False)
Save cookies to a file. This base class raises NotImplementedError. Subclasses may leave this method unimplemented. filename is the name of the file in which to save cookies. If filename is not specified, self.filename is used (whose default is the value passed to the constructor, if any); if self.filename is None, ValueError is raised. ignore_discard: save even cookies set to be discarded. ignore_expires: save even cookies that have expired. The file is overwritten if it already exists, thus wiping all the cookies it contains. Saved cookies can be restored later using the load() or revert() methods.
FileCookieJar.load(filename=None, ignore_discard=False, ignore_expires=False)
Load cookies from a file. Old cookies are kept unless overwritten by newly loaded ones. Arguments are as for save(). The named file must be in the format understood by the class, or LoadError will be raised. Also, OSError may be raised, for example if the file does not exist. Changed in version 3.3: IOError used to be raised, it is now an alias of OSError.
FileCookieJar.revert(filename=None, ignore_discard=False, ignore_expires=False)
Clear all cookies and reload cookies from a saved file. revert() can raise the same exceptions as load(). If there is a failure, the object’s state will not be altered.
FileCookieJar instances have the following public attributes:
FileCookieJar.filename
Filename of default file in which to keep cookies. This attribute may be assigned to.
FileCookieJar.delayload
If true, load cookies lazily from disk. This attribute should not be assigned to. This is only a hint, since this only affects performance, not behaviour (unless the cookies on disk are changing). A CookieJar object may ignore it. None of the FileCookieJar classes included in the standard library lazily loads cookies.
FileCookieJar subclasses and co-operation with web browsers
The following CookieJar subclasses are provided for reading and writing.
class http.cookiejar.MozillaCookieJar(filename, delayload=None, policy=None)
A FileCookieJar that can load from and save cookies to disk in the Mozilla cookies.txt file format (which is also used by the Lynx and Netscape browsers). Note: this loses information about RFC 2965 cookies, and also about newer or non-standard cookie-attributes such as port. Warning: back up your cookies before saving if you have cookies whose loss / corruption would be inconvenient (there are some subtleties which may lead to slight changes in the file over a load / save round-trip). Also note that cookies saved while Mozilla is running will get clobbered by Mozilla.
class http.cookiejar.LWPCookieJar(filename, delayload=None, policy=None)
A FileCookieJar that can load from and save cookies to disk in a format compatible with the libwww-perl library’s Set-Cookie3 file format. This is convenient if you want to store cookies in a human-readable file. Changed in version 3.8: The filename parameter supports a path-like object.
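A minimal save/load round trip with LWPCookieJar, using a temporary file (the filename is illustrative; the jar here is empty, so only the Set-Cookie3 header line is written):

```python
import http.cookiejar
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "cookies.lwp")
jar = http.cookiejar.LWPCookieJar(path)
jar.save()       # writes to self.filename (the constructor argument)
jar.load()       # reload; raises LoadError if the format is wrong

# The Set-Cookie3 file begins with a magic header line.
print(open(path).read().splitlines()[0])    # #LWP-Cookies-2.0
```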
CookiePolicy Objects
Objects implementing the CookiePolicy interface have the following methods:
CookiePolicy.set_ok(cookie, request)
Return boolean value indicating whether cookie should be accepted from server. cookie is a Cookie instance. request is an object implementing the interface defined by the documentation for CookieJar.extract_cookies().
CookiePolicy.return_ok(cookie, request)
Return boolean value indicating whether cookie should be returned to server. cookie is a Cookie instance. request is an object implementing the interface defined by the documentation for CookieJar.add_cookie_header().
CookiePolicy.domain_return_ok(domain, request)
Return False if cookies should not be returned, given cookie domain. This method is an optimization. It removes the need for checking every cookie with a particular domain (which might involve reading many files). Returning true from domain_return_ok() and path_return_ok() leaves all the work to return_ok(). If domain_return_ok() returns true for the cookie domain, path_return_ok() is called for the cookie path. Otherwise, path_return_ok() and return_ok() are never called for that cookie domain. If path_return_ok() returns true, return_ok() is called with the Cookie object itself for a full check. Otherwise, return_ok() is never called for that cookie path. Note that domain_return_ok() is called for every cookie domain, not just for the request domain. For example, the function might be called with both ".example.com" and "www.example.com" if the request domain is "www.example.com". The same goes for path_return_ok(). The request argument is as documented for return_ok().
CookiePolicy.path_return_ok(path, request)
Return False if cookies should not be returned, given cookie path. See the documentation for domain_return_ok().
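As a sketch of how the optimization above is typically used, a hypothetical subclass might veto an entire domain in domain_return_ok(), so that return_ok() is never consulted for its cookies (the domain names are illustrative):

```python
import http.cookiejar
import urllib.request

class NoAdsPolicy(http.cookiejar.DefaultCookiePolicy):
    """Hypothetical policy: short-circuit one domain before per-cookie checks."""
    def domain_return_ok(self, domain, request):
        # Remember this is called with every cookie domain, dotted or not.
        if domain in ("ads.example", ".ads.example"):
            return False
        return super().domain_return_ok(domain, request)

policy = NoAdsPolicy()
req = urllib.request.Request("http://ads.example/")
policy.domain_return_ok("ads.example", req)   # False: return_ok() never runs
```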
In addition to implementing the methods above, implementations of the CookiePolicy interface must also supply the following attributes, indicating which protocols should be used, and how. All of these attributes may be assigned to.
CookiePolicy.netscape
Implement Netscape protocol.
CookiePolicy.rfc2965
Implement RFC 2965 protocol.
CookiePolicy.hide_cookie2
Don’t add Cookie2 header to requests (the presence of this header indicates to the server that we understand RFC 2965 cookies).
The most useful way to define a CookiePolicy class is by subclassing from DefaultCookiePolicy and overriding some or all of the methods above. CookiePolicy itself may be used as a ‘null policy’ to allow setting and receiving any and all cookies (this is unlikely to be useful).
DefaultCookiePolicy Objects
Implements the standard rules for accepting and returning cookies. Both RFC 2965 and Netscape cookies are covered. RFC 2965 handling is switched off by default. The easiest way to provide your own policy is to override this class and call its methods in your overridden implementations before adding your own additional checks:
import http.cookiejar
class MyCookiePolicy(http.cookiejar.DefaultCookiePolicy):
def set_ok(self, cookie, request):
if not http.cookiejar.DefaultCookiePolicy.set_ok(self, cookie, request):
return False
if i_dont_want_to_store_this_cookie(cookie):
return False
return True
In addition to the features required to implement the CookiePolicy interface, this class allows you to block and allow domains from setting and receiving cookies. There are also some strictness switches that allow you to tighten up the rather loose Netscape protocol rules a little bit (at the cost of blocking some benign cookies). A domain blacklist and whitelist is provided (both off by default). Only domains not in the blacklist and present in the whitelist (if the whitelist is active) participate in cookie setting and returning. Use the blocked_domains constructor argument, and blocked_domains() and set_blocked_domains() methods (and the corresponding argument and methods for allowed_domains). If you set a whitelist, you can turn it off again by setting it to None. Domains in block or allow lists that do not start with a dot must equal the cookie domain to be matched. For example, "example.com" matches a blacklist entry of "example.com", but "www.example.com" does not. Domains that do start with a dot are matched by more specific domains too. For example, both "www.example.com" and "www.coyote.example.com" match ".example.com" (but "example.com" itself does not). IP addresses are an exception, and must match exactly. For example, if blocked_domains contains "192.168.1.2" and ".168.1.2", 192.168.1.2 is blocked, but 193.168.1.2 is not. DefaultCookiePolicy implements the following additional methods:
DefaultCookiePolicy.blocked_domains()
Return the sequence of blocked domains (as a tuple).
DefaultCookiePolicy.set_blocked_domains(blocked_domains)
Set the sequence of blocked domains.
DefaultCookiePolicy.is_blocked(domain)
Return whether domain is on the blacklist for setting or receiving cookies.
DefaultCookiePolicy.allowed_domains()
Return None, or the sequence of allowed domains (as a tuple).
DefaultCookiePolicy.set_allowed_domains(allowed_domains)
Set the sequence of allowed domains, or None.
DefaultCookiePolicy.is_not_allowed(domain)
Return whether domain is not on the whitelist for setting or receiving cookies.
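The dot-matching rules described above can be checked directly with is_blocked(); the domains below are illustrative:

```python
import http.cookiejar

policy = http.cookiejar.DefaultCookiePolicy(
    blocked_domains=["example.com", ".ads.net"])

policy.is_blocked("example.com")       # True: dotless entries match exactly
policy.is_blocked("www.example.com")   # False: "example.com" has no leading dot
policy.is_blocked("tracker.ads.net")   # True: ".ads.net" matches subdomains
policy.is_blocked("ads.net")           # False: ".ads.net" does not match "ads.net" itself
```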
DefaultCookiePolicy instances have the following attributes, which are all initialised from the constructor arguments of the same name, and which may all be assigned to.
DefaultCookiePolicy.rfc2109_as_netscape
If true, request that the CookieJar instance downgrade RFC 2109 cookies (ie. cookies received in a Set-Cookie header with a version cookie-attribute of 1) to Netscape cookies by setting the version attribute of the Cookie instance to 0. The default value is None, in which case RFC 2109 cookies are downgraded if and only if RFC 2965 handling is turned off. Therefore, RFC 2109 cookies are downgraded by default.
General strictness switches:
DefaultCookiePolicy.strict_domain
Don’t allow sites to set two-component domains with country-code top-level domains like .co.uk, .gov.uk, .co.nz, etc. This is far from perfect and isn’t guaranteed to work!
RFC 2965 protocol strictness switches:
DefaultCookiePolicy.strict_rfc2965_unverifiable
Follow RFC 2965 rules on unverifiable transactions (usually, an unverifiable transaction is one resulting from a redirect or a request for an image hosted on another site). If this is false, cookies are never blocked on the basis of verifiability.
Netscape protocol strictness switches:
DefaultCookiePolicy.strict_ns_unverifiable
Apply RFC 2965 rules on unverifiable transactions even to Netscape cookies.
DefaultCookiePolicy.strict_ns_domain
Flags indicating how strict to be with domain-matching rules for Netscape cookies. See below for acceptable values.
DefaultCookiePolicy.strict_ns_set_initial_dollar
Ignore cookies in Set-Cookie: headers that have names starting with '$'.
DefaultCookiePolicy.strict_ns_set_path
Don’t allow setting cookies whose path doesn’t path-match request URI.
strict_ns_domain is a collection of flags. Its value is constructed by or-ing together (for example, DomainStrictNoDots|DomainStrictNonDomain means both flags are set).
DefaultCookiePolicy.DomainStrictNoDots
When setting cookies, the ‘host prefix’ must not contain a dot (eg. www.foo.bar.com can’t set a cookie for .bar.com, because www.foo contains a dot).
DefaultCookiePolicy.DomainStrictNonDomain
Cookies that did not explicitly specify a domain cookie-attribute can only be returned to a domain equal to the domain that set the cookie (eg. spam.example.com won’t be returned cookies from example.com that had no domain cookie-attribute).
DefaultCookiePolicy.DomainRFC2965Match
When setting cookies, require a full RFC 2965 domain-match.
The following attributes are provided for convenience, and are the most useful combinations of the above flags:
DefaultCookiePolicy.DomainLiberal
Equivalent to 0 (ie. all of the above Netscape domain strictness flags switched off).
DefaultCookiePolicy.DomainStrict
Equivalent to DomainStrictNoDots|DomainStrictNonDomain.
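A minimal sketch of combining these flags by or-ing them together:

```python
from http.cookiejar import DefaultCookiePolicy

# Or the individual flags together; DomainStrict is the same combination,
# provided for convenience.
flags = (DefaultCookiePolicy.DomainStrictNoDots
         | DefaultCookiePolicy.DomainStrictNonDomain)
policy = DefaultCookiePolicy(strict_ns_domain=flags)
flags == DefaultCookiePolicy.DomainStrict   # True
```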
Cookie Objects
Cookie instances have Python attributes roughly corresponding to the standard cookie-attributes specified in the various cookie standards. The correspondence is not one-to-one, because there are complicated rules for assigning default values, because the max-age and expires cookie-attributes contain equivalent information, and because RFC 2109 cookies may be ‘downgraded’ by http.cookiejar from version 1 to version 0 (Netscape) cookies. Assignment to these attributes should not be necessary other than in rare circumstances in a CookiePolicy method. The class does not enforce internal consistency, so you should know what you’re doing if you do that.
Cookie.version
Integer or None. Netscape cookies have version 0. RFC 2965 and RFC 2109 cookies have a version cookie-attribute of 1. However, note that http.cookiejar may ‘downgrade’ RFC 2109 cookies to Netscape cookies, in which case version is 0.
Cookie.name
Cookie name (a string).
Cookie.value
Cookie value (a string), or None.
Cookie.port
String representing a port or a set of ports (eg. ‘80’, or ‘80,8080’), or None.
Cookie.path
Cookie path (a string, eg. '/acme/rocket_launchers').
Cookie.secure
True if cookie should only be returned over a secure connection.
Cookie.expires
Integer expiry date in seconds since epoch, or None. See also the is_expired() method.
Cookie.discard
True if this is a session cookie.
Cookie.comment
String comment from the server explaining the function of this cookie, or None.
Cookie.comment_url
URL linking to a comment from the server explaining the function of this cookie, or None.
Cookie.rfc2109
True if this cookie was received as an RFC 2109 cookie (ie. the cookie arrived in a Set-Cookie header, and the value of the Version cookie-attribute in that header was 1). This attribute is provided because http.cookiejar may ‘downgrade’ RFC 2109 cookies to Netscape cookies, in which case version is 0.
Cookie.port_specified
True if a port or set of ports was explicitly specified by the server (in the Set-Cookie / Set-Cookie2 header).
Cookie.domain_specified
True if a domain was explicitly specified by the server.
Cookie.domain_initial_dot
True if the domain explicitly specified by the server began with a dot ('.').
Cookies may have additional non-standard cookie-attributes. These may be accessed using the following methods:
Cookie.has_nonstandard_attr(name)
Return True if cookie has the named cookie-attribute.
Cookie.get_nonstandard_attr(name, default=None)
If cookie has the named cookie-attribute, return its value. Otherwise, return default.
Cookie.set_nonstandard_attr(name, value)
Set the value of the named cookie-attribute.
The Cookie class also defines the following method:
Cookie.is_expired(now=None)
True if cookie has passed the time at which the server requested it should expire. If now is given (in seconds since the epoch), return whether the cookie has expired at the specified time.
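A sketch exercising these methods on a hand-built Cookie (normally make_cookies() constructs instances for you; every field value below is illustrative, passed by keyword for readability):

```python
import time
from http.cookiejar import Cookie

c = Cookie(
    version=0, name="session", value="abc123",
    port=None, port_specified=False,
    domain="example.com", domain_specified=False, domain_initial_dot=False,
    path="/", path_specified=True,
    secure=False, expires=int(time.time()) + 3600, discard=False,
    comment=None, comment_url=None,
    rest={"HttpOnly": None},     # non-standard cookie-attributes go in 'rest'
)

c.has_nonstandard_attr("HttpOnly")        # True
c.set_nonstandard_attr("SameSite", "Lax")
c.get_nonstandard_attr("SameSite")        # 'Lax'

c.is_expired()                  # False: expires is an hour in the future
c.is_expired(now=c.expires + 1) # True at any time past 'expires'
```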
Examples
The first example shows the most common usage of http.cookiejar:
import http.cookiejar, urllib.request
cj = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))
r = opener.open("http://example.com/")
This example illustrates how to open a URL using your Netscape, Mozilla, or Lynx cookies (assumes Unix/Netscape convention for location of the cookies file):
import os, http.cookiejar, urllib.request
cj = http.cookiejar.MozillaCookieJar()
cj.load(os.path.join(os.path.expanduser("~"), ".netscape", "cookies.txt"))
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))
r = opener.open("http://example.com/")
The next example illustrates the use of DefaultCookiePolicy. Turn on RFC 2965 cookies, be more strict about domains when setting and returning Netscape cookies, and block some domains from setting cookies or having them returned:
import urllib.request
from http.cookiejar import CookieJar, DefaultCookiePolicy
policy = DefaultCookiePolicy(
    rfc2965=True, strict_ns_domain=DefaultCookiePolicy.DomainStrict,
blocked_domains=["ads.net", ".ads.net"])
cj = CookieJar(policy)
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))
r = opener.open("http://example.com/")