Administrivia: http-wg mailing list UP!
Folks,

The http-wg mailing list is now up and running. To subscribe/unsubscribe send an e-mail message to:

    http-wg-request@cuckoo.hpl.hp.com

To post to the mailing list, send an e-mail message to:

    http-wg@cuckoo.hpl.hp.com

-- ange -- <>< ange@hplb.hpl.hp.com
Comments please on agenda for HTTP working group BOF
To get things moving for the proposed HTTP working group, I have set up a new mailing list (http-wg@cuckoo.hpl.hp.com) and added a few people (including yourself) who have shown interest in the working group. To unsubscribe send an e-mail message to: http-wg-request@cuckoo.hpl.hp.com

I am writing to you to get your input on the proposed charter and workplan to be discussed at the HTTP Working Group BOF at WWWF'94 in Chicago. Early input will help to ensure that the BOF makes the best use of our limited time together, and I would like us to be able to get names against actions to start things rolling along. Our next chance to get together, after WWWF'94, will be the December IETF meeting in San Jose.

Please email your comments on the proposal and thoughts as to how to get the best out of the WWWF'94 BOF to <http-wg@cuckoo.hpl.hp.com> or to me personally at <dsr@hplb.hpl.hp.com>

Many thanks,

Dave Raggett <dsr@hplb.hpl.hp.com> +44 272 228046 (United Kingdom)

----

Proposed Charter for the IETF Hypertext Transfer Protocol Working Group
-----------------------------------------------------------------------

Right now HTTP is in a mess. The internet draft has expired and needs updating to bring it into line with current practice. The performance is widely perceived to be poor, particularly for modem users, and various groups are working on disparate approaches for adding security and payment mechanisms. Left to itself, we will get fragmented de facto standards that inhibit interoperability. Even if one company wins out as the dominant supplier of servers and clients, it will act as a bottleneck for change, with negative effects for end-users and value-adding niche suppliers.

The role of W3O and the IETF should be to facilitate the development of open standards in which everyone can gain. To achieve this we will need to take advantage of the best work, whether it is done at academic research centers or by commercial developers.
Internet drafts and RFCs offer a route for nailing down an open framework for the protocol and for standard APIs for plug-in modules. This will facilitate interoperability by encouraging re-use of security modules rather than, for example, forcing every developer to separately negotiate with RSA for rights to use public key algorithms, or with the Department of Commerce for export licenses. Acting together, we can get better deals.

Our suggested focus is on the short term. In particular, we want:

 a) to tap into the 30% of US homes with PCs/Macs and provide
    the incentives for them to connect to the Web

 b) to make it easy to pay for goods and services on the Web

 c) to protect the copyright interests of information providers

To meet these objectives, we need to build on existing work and scale our action plan to what is feasible in the short term.

Security and Electronic Payments
--------------------------------

The suggested timetable for this is to first concentrate on what is needed to securely send order details and receipts. Credit card payments can be authenticated offline by the credit card companies, but the next step is to provide support for authenticating servers, and subsequently clients. Basic authentication is possible using the IP address. Other mechanisms include Kerberos, and public key certificates. We shouldn't overlook encryption of arbitrary HTTP requests and responses.

Smart cards have a bright future for payments based on credit/debit models and digital cash. In the short term, no one has card readers and we need to consider how to get things off the ground. In transitioning from here to there, we need to make it very simple for end-users and financial institutions.
Some of the issues this raises include: how users register with an authentication server; if public key mechanisms are used, who generates the public key/secret key pairs and certificates; do the credit card companies store the certificate information as well; and are the transactions themselves protected for both secrecy and integrity?

Mosaic Communications, EIT, Spyglass and CERN all have different approaches to this! The working group will need to come up with an open framework that supports a range of different approaches. Could we agree on using a standard header to indicate which approach is in use, and to indicate acceptable alternatives? Can we agree on a high level set of security mechanisms and an API for implementing them with a range of cryptographic techniques? Is there any role here for the GSS API?

Improved Performance
--------------------

It may prove worthwhile to extend MIME for use with an improved HTTP. Switching to a binary encoding of the protocol headers will not of its own give us the performance we desire, but many of the weak spots in the current protocol have been repeatedly discussed on the mailing lists. We would like to see one or more Internet Drafts covering:

 - MGET and multipart messages

   The ability to request several objects in the same request. The
   objects are then returned as a multipart message.

 - keep-alive and segmented transfers

   This gives us the ability to get an HTML file and then request the
   inlined images reusing the same connection.

 - encouraging deployment of transaction TCP

   Recent proposals cut out the slow start up times of conventional TCP
   protocol stacks. Can we coordinate our efforts to promote the
   widespread adoption of these extensions to TCP?

 - ways to avoid long lists of Accept headers and to better specify
   client capabilities

   Right now Mosaic sends out long lists of Accept headers which could
   easily be replaced by more compact identifiers for standard
   configurations.
   For home users with standard VGA and slow modems, it would be great
   if servers could take advantage of this to send more compact images.

 - consideration of an ASN.1 based format

   We need to look at the advantages of switching to a binary encoded
   format for protocol headers.

Suggested Workplan
------------------

October '94:
   We meet in Chicago and seek agreement that a common framework is
   needed for security and payment mechanisms, as well as brainstorming
   the problems/issues that the framework should address. We agree a
   numbering scheme for subsequent HTTP releases, and get interested
   people to sign up to take an active role.

November '94:
   Work starts on a revised Internet Draft covering HTTP as in current
   use. The http-wg mailing list may be appropriate for exchanging
   detailed comments on this document as it is written.

   We use the www-security mailing list to continue brainstorming ideas
   on the common security framework. One or more people nominated at
   the October BOF write this up as an initial draft.

   The objective for November is to finalize the charter and initial
   workplan for the IETF working group. The group uses the http-wg
   mailing list to work together on this document.

December '94:
   IETF HTTP WG BOF - we present the charter and workplan. This meeting
   should be used to build the consensus and to look forward to the
   next set of actions and milestones. The work group is formally
   established, and people are signed up to write particular Internet
   Drafts.

Spring '95:
   We present Internet Drafts for the revamped HTTP spec describing
   current practice; the framework for security; and for improved
   performance. This will coincide with the Internet Draft for HTML 3.0.

WWW'95:
   Demonstrations of working implementations of these Internet Drafts.
   The HTTP working group starts looking at new issues such as the
   framework needed for digital cash, collaborative hypermedia, and
   scaling issues for information access and the implications for HTTP.
IETF HTTP BOF in December
-------------------------

I have reserved a slot at the December meeting of the IETF in San Jose. The Hypertext Transfer Protocol (http) BOF will be held on Tuesday, December 6: 1330-1530.

Further info on IETF meetings is available from:

    <http://www.ietf.cnri.reston.va.us/home.html>

Click on the link for "meetings" and you should find an entry for the San Jose meeting.

Dave Raggett - 2nd October 1994
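The MGET idea in the proposal above, several objects requested at once and returned as one multipart message, can be sketched with Python's standard email package. The function name and the choice of multipart/mixed are illustrative assumptions, not anything the draft specifies:

```python
from email.mime.application import MIMEApplication
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

def build_mget_response(page_html, images):
    """Bundle an HTML page and its inlined images into a single
    multipart/mixed message, roughly as an MGET-style response might.
    `images` is a list of (filename, raw_bytes) pairs."""
    msg = MIMEMultipart("mixed")
    msg.attach(MIMEText(page_html, "html"))
    for name, data in images:
        part = MIMEApplication(data, "octet-stream")
        part.add_header("Content-Disposition", "inline", filename=name)
        msg.attach(part)
    return msg

response = build_mget_response(
    "<html><img src='logo.gif'></html>",
    [("logo.gif", b"GIF89a...")],
)
```

One round trip then carries the page and all of its images; the client splits the parts apart using the multipart boundary that the MIME machinery generates.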
Re: Comments please on agenda for HTTP working group BOF
First off: thanks for taking the initiative to establish an HTTP working group. You've set a very ambitious timetable for getting things done. I have my doubts whether the timetable is feasible, but I'm more often a pessimist than an optimist. My comments appear below.

Dave Kristol
==========

[...]
> Proposed Charter for the IETF Hypertext Transfer Protocol Working Group
> -----------------------------------------------------------------------
>
> Right now HTTP is in a mess. The internet draft has expired and needs
> updating to bring it into line with current practice. The performance is
> widely perceived to be poor, particularly for modem users, and various
> groups are working on disparate approaches for adding security and
> payment mechanisms.

Bear in mind that the standards process usually ratifies existing practice. Therefore, updating the old Internet draft ought to be the first order of business. Addressing well-known problems, while important, should come afterward.

> [De facto, closed standards are a bad thing....]
[W3O and IETF should facilitate open WWW standards.]
[Use Internet drafts and RFCs to define open protocols and APIs.]

> Our suggested focus is on the short term. In particular, we want:
>
>  a) to tap into the 30% of US homes with PCs/Macs and provide
>     the incentives for them to connect to the Web

I think this is possibly beyond our control. If there's interesting stuff, people will connect. Otherwise they won't. I agree we should have the technology for them to use the Web at acceptable speed and cost.

>  b) to make it easy to pay for goods and services on the Web

Yes.

>  c) to protect the copyright interests of information providers

Yes. But I doubt this can be solved in the short term. There are many ideas around, but I don't see any consensus on how to do this.

> To meet these objectives, we need to build on existing work and scale
> our action plan to what is feasible in the short term.
> Security and Electronic Payments
> --------------------------------
>
> The suggested timetable for this is to first concentrate on what is
> needed to securely send order details and receipts. Credit card payments
> can be authenticated offline by the credit card companies, but the next
> step is to provide support for authenticating servers, and subsequently
> clients. Basic authentication is possible using the IP address. Other
> mechanisms

IP address is dicey if you're trying to serve those folks with their Macs and PCs who get a new IP address each time they connect to their Internet provider.

> include Kerberos, and public key certificates. We shouldn't overlook
> encryption of arbitrary HTTP requests and responses.
>
> Smart cards have a bright future for payments based on credit/debit
> models and digital cash. In the short term, no one has card readers and
> we need to consider how to get things off the ground. In transitioning
> from here to there, we need to make it very simple for end-users and
> financial institutions. Some of the issues this raises include: how
> users register with an authentication server; if public key mechanisms
> are used, who generates the public key/secret key pairs and
> certificates; do the credit card companies store the certificate
> information as well; and are the transactions themselves protected for
> both secrecy and integrity?

Some transactions will surely be protected. Otherwise an eavesdropper could capture for free what someone else bought.

Don't forget privacy. I think it will be important for people to make requests anonymously and/or to feel comfortable that servers do not accumulate dossiers on their information buying habits.

> [Different security approaches being developed....]

> Improved Performance
> --------------------
[Good proposals omitted.]
> Suggested Workplan
> ------------------
>
> October '94:
>    We meet in Chicago and seek agreement that a common
>    framework is needed for security and payment mechanisms,
>    as well as brainstorming the problems/issues that the
>    framework should address. We agree a numbering scheme
>    for subsequent HTTP releases, and get interested people
>    to sign up to take an active role.

The numbering scheme may be premature -- it may depend on who gets what done (and accepted by the community) first. My guess is that development will be breadth-first, which works against the usual numbering schemes. What I mean is this: after people agree on the state of current practice, folks will go off in different directions that, I hope, are largely orthogonal: performance improvement, security, payment. A linear numbering system won't accommodate that diversity well. You might have to say "HTTP 1.1 with performance improvements", or "HTTP 1.1 with security".

> November '94:
>    Work starts on a revised Internet Draft covering HTTP as in
>    current use. The http-wg mailing list may be appropriate for
>    exchanging detailed comments on this document as it is written.
>
>    We use the www-security mailing list to continue brainstorming
>    ideas on the common security framework. One or more people
>    nominated at the October BOF write this up as an initial draft.

Please please use www-buyinfo for discussions of commercial issues. (An interesting question is whether security and payment can be treated separately, or whether authentication connected to payment must be bundled with other kinds of authentication. I'm hoping for orthogonality, but I certainly haven't demonstrated it yet.)

>    The objective for November is to finalize the charter and initial
>    workplan for the IETF working group. The group uses the http-wg
>    mailing list to work together on this document.

Yes.

> December '94:
>    IETF HTTP WG BOF - we present the charter and workplan.
>    This meeting
>    should be used to build the consensus and to look forward to the next
>    set of actions and milestones. The work group is formally established,
>    and people are signed up to write particular Internet Drafts.

Yes. I would expect people to agree to the formation of a working group. Getting them to agree to a draft charter will perhaps be tougher. Who knows about a workplan?

> Spring '95:
>    We present Internet Drafts for the revamped HTTP spec describing
>    current practice; the framework for security; and for improved
>    performance. This will coincide with the Internet Draft for HTML 3.0.

Describing current practice may be possible by Spring '95. The other two are less likely. It would be better to have developed working prototypes of security and improved performance features. Remember that the IETF expects working code in conjunction with paper specs. It will be hard to have both polished code and a polished draft ready in that timespan.

> WWW'95:
>    Demonstrations of working implementations of these Internet Drafts.
>    The HTTP working group starts looking at new issues such as the
>    framework needed for digital cash, collaborative hypermedia, and
>    scaling issues for information access and the implications for HTTP.

(When is WWW '95? Where?)

> IETF HTTP BOF in December
> -------------------------
[Meeting placed on schedule.]

Great!
Re: Comments please on agenda for HTTP working group BOF
My reactions to Dave Kristol's comments:

    Bear in mind that the standards process usually ratifies existing
    practice. Therefore, updating the old Internet draft ought to be the
    first order of business. Addressing well-known problems, while
    important, should come afterward.

Agreed, although I think that we can work on problems in parallel.

        a) to tap into the 30% of US homes with PCs/Macs and provide
           the incentives for them to connect to the Web

    I think this is possibly beyond our control. If there's interesting
    stuff, people will connect. Otherwise they won't. I agree we should
    have the technology for them to use the Web at acceptable speed and
    cost.

The goal is to unlock the business potential; the means to achieving this are asserted to be improving performance and simple payment mechanisms.

        c) to protect the copyright interests of information providers

    Yes. But I doubt this can be solved in the short term. There are
    many ideas around, but I don't see any consensus on how to do this.

I think there may be short term steps we should take, e.g. passing limited copyright information in the HTTP header, perhaps along with contractual restrictions on the right to print/save local copies. A related idea is to allow publishers of "free" information to get some idea of how many people are accessing it via shared caches.

        Basic authentication is possible using the IP address. Other
        mechanisms ...

    IP address is dicey if you're trying to serve those folks with their
    Macs and PCs who get a new IP address each time they connect to
    their Internet provider.

Good point. Nonetheless there is a need for a basic authentication mechanism in the absence of the infrastructure for a stronger solution.
        Some of the issues this raises include: how users register with
        an authentication server; if public key mechanisms are used, who
        generates the public key/secret key pairs and certificates; do
        the credit card companies store the certificate information as
        well; and are the transactions themselves protected for both
        secrecy and integrity?

    Some transactions will surely be protected. Otherwise an
    eavesdropper could capture for free what someone else bought.

Agreed. The intent was to raise the distinction between secrecy and integrity.

    Don't forget privacy. I think it will be important for people to
    make requests anonymously and/or to feel comfortable that servers do
    not accumulate dossiers on their information buying habits.

Can we do this in the short term? I am interested in exploiting the blinding techniques of David Chaum, but don't yet know enough to get a clear idea of how feasible it will be to support this widely on the Web in the short term.

        We agree a numbering scheme for subsequent HTTP releases,

    The numbering scheme may be premature -- it may depend on who gets
    what done (and accepted by the community) first.

When we produce the revised Internet Draft that describes current practice we will almost certainly want to change the version number in some way. That would do for now!

    What I mean is this: after people agree on the state of current
    practice, folks will go off in different directions that, I hope,
    are largely orthogonal: performance improvement, security, payment.
    A linear numbering system won't accommodate that diversity well. You
    might have to say "HTTP 1.1 with performance improvements", or "HTTP
    1.1 with security".

I was hoping that, say, HTTP 2.0 would support the performance improvements and a framework for plugging in security extensions in a modular way, e.g. HTTP 2.0 with Shen or HTTP 2.0 with digital cash.

    Please please use www-buyinfo for discussions of commercial issues.
    (An interesting question is whether security and payment can be
    treated separately, or whether authentication connected to payment
    must be bundled with other kinds of authentication. I'm hoping for
    orthogonality, but I certainly haven't demonstrated it yet.)

I was too prescriptive here. The intention was to keep the working group mailing list clear of rambling discussions that are better handled on a wider forum.

    Spring '95:
       We present Internet Drafts for the revamped HTTP spec describing
       current practice; the framework for security; and for improved
       performance. This will coincide with the Internet Draft for HTML
       3.0.

    Describing current practice may be possible by Spring '95. The other
    two are less likely. It would be better to have developed working
    prototypes of security and improved performance features. Remember
    that the IETF expects working code in conjunction with paper specs.
    It will be hard to have both polished code and a polished draft
    ready in that timespan.

You may be right, but I am hoping that we can build on existing work rather than having to start from scratch, e.g. EIT would write up how their modified S-HTTP proposal fits into the open framework, ditto for Spyglass and others. W3O would enhance the public domain libraries to demonstrate feasibility of the open framework approach, e.g. with a basic authentication module (Spyglass have volunteered to provide code for this) and a module for using Shen. Much of the work has already been done for handling multipart messages, and plans are in hand for work on reuse of transactions for follow-on requests.

-- Best wishes,

Dave Raggett

-----------------------------------------------------------------------------
Hewlett Packard Laboratories      email: dsr@hplb.hpl.hp.com
Filton Road, Stoke Gifford        tel:   +44 272 228046
Bristol BS12 6QZ                  fax:   +44 272 228003
United Kingdom
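The modular framework Raggett hopes for, a standard header naming the security scheme in use with interchangeable modules such as basic authentication or Shen behind it, could be dispatched roughly as in this sketch. The `Security-Scheme` header name, the handler signatures, and the credential check are all invented for illustration:

```python
# Registry mapping a scheme token, as it might appear in a standard
# request header, to a pluggable handler function.
SECURITY_HANDLERS = {}

def register_scheme(name):
    """Decorator that registers a handler for one security scheme."""
    def wrap(handler):
        SECURITY_HANDLERS[name] = handler
        return handler
    return wrap

@register_scheme("basic")
def basic_auth(request):
    # Stand-in check; a real module would verify against a user database.
    return request.get("credentials") == ("user", "secret")

@register_scheme("shen")
def shen_auth(request):
    # Placeholder for a Shen-style module; not implemented here.
    return False

def authenticate(request):
    """Dispatch on the scheme named in the hypothetical Security-Scheme
    header; unknown schemes are rejected rather than guessed at."""
    handler = SECURITY_HANDLERS.get(request.get("Security-Scheme"))
    return handler(request) if handler else False
```

A server and client that disagree on schemes could then negotiate by exchanging the registry's keys, which is the "acceptable alternatives" part of the proposal.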
Re: Comments please on agenda for HTTP working group BOF
Dave Raggett <dsr@hplb.hpl.hp.com> says:

(> >> is Dave Raggett's original. > > are my comments on that. > are Dave Raggett's comments on my comments.)

> My reactions to Dave Kristol's comments:
[...]
> I think there may be short term steps we should take, e.g. passing
> limited copyright information in the HTTP header, perhaps along with
> contractual restrictions on the right to print/save local copies. A
> related idea is to allow publishers of "free" information to get some
> idea of how many people are accessing it via shared caches.

Okay. I was thinking of a different kind of copyright protection, namely a technological approach like document marking. (See http://www.research.att.com/#docmark for one example.)

> >> Basic authentication is possible using the IP address. Other
> >> mechanisms ...
>
> > IP address is dicey if you're trying to serve those folks with their
> > Macs and PCs who get a new IP address each time they connect to their
> > Internet provider.
>
> Good point. Nonetheless there is a need for a basic authentication
> mechanism in the absence of the infrastructure for a stronger solution.

I agree. My quibble was with the specific use of IP address, not a basic authentication mechanism.

[...]
> > Don't forget privacy. I think it will be important for people to make
> > requests anonymously and/or to feel comfortable that servers do not
> > accumulate dossiers on their information buying habits.
>
> Can we do this in the short term? I am interested in exploiting the
> blinding techniques of David Chaum, but don't yet know enough to get
> a clear idea of how feasible it will be to support this widely on the
> Web in the short term.

There are a couple of aspects:

1) Anonymous payment mechanisms help to preserve privacy: DigiCash, anonymous credit cards. I don't see how we can preserve privacy using a billing model for payment.

2) Caching proxy servers help obscure identity (to the information service provider).
3) I can imagine proxy TCP services that establish connections in a way analogous to the anonymous remailers in Finland. These would obscure the original requester.

Note the effect that (2) and (3) have on IP address authentication!

> >> We agree a numbering scheme for subsequent HTTP releases,
>
> > The numbering scheme may be premature -- it may depend on who gets
> > what done (and accepted by the community) first.
>
> When we produce the revised Internet Draft that describes current
> practice we will almost certainly want to change the version number in
> some way. That would do for now!
>
> > What I mean is this: after people agree on the state of current
> > practice, folks will go off in different directions that, I hope, are
> > largely orthogonal: performance improvement, security, payment. A
> > linear numbering system won't accommodate that diversity well. You
> > might have to say "HTTP 1.1 with performance improvements", or "HTTP
> > 1.1 with security".
>
> I was hoping that, say, HTTP 2.0 would support the performance
> improvements and a framework for plugging in security extensions in a
> modular way, e.g. HTTP 2.0 with Shen or HTTP 2.0 with digital cash.

Umm. Let me be pedantic and note that digital cash is not a security extension (at least in my book) but a payment extension. That said, I agree that we should be working toward a framework that allows compatible extensions to HTTP.

[...]
> > Describing current practice may be possible by Spring '95. The other
> > two are less likely. It would be better to have developed working
> > prototypes of security and improved performance features. Remember
> > that the IETF expects working code in conjunction with paper specs.
> > It will be hard to have both polished code and a polished draft ready
> > in that timespan.
>
> You may be right, but I am hoping that we can build on existing work
> rather than having to start from scratch, e.g.
> EIT would write up how
> their modified S-HTTP proposal fits into the open framework, ditto for
> Spyglass and others. W3O would enhance the public domain libraries to
> demonstrate feasibility of the open framework approach, e.g. with a
> basic authentication module (Spyglass have volunteered to provide code
> for this) and a module for using Shen. Much of the work has already been
> done for handling multipart messages, and plans are in hand for work on
> reuse of transactions for follow-on requests.

I support the use of existing work. I think you and I are agreeing that we want to define a framework in which all these things can co-exist. The WG would define the framework, not necessarily the specific extensions.

David M. Kristol
AT&T Bell Laboratories
Re: Comments please on agenda for HTTP working group BOF
Thanks for initiating this! My comments to this proposal are:

    Improved Performance
    --------------------

    It may prove worthwhile to extend MIME for use with an improved
    HTTP. Switching to a binary encoding of the protocol headers will
    not of its own give us the performance we desire, but many of the
    weak spots in the current protocol have been repeatedly discussed on
    the mailing lists. We would like to see one or more Internet Drafts
    covering:

    - MGET and multipart messages

      The ability to request several objects in the same request. The
      objects are then returned as a multipart message.

It is not only GET - we also need a way to have multiple POST (and PUT). The reason for this is that a message is often to be posted to one or more mailing lists, one or more news groups and maybe a remote HTTP server. I have described how I would like the client interface to the Library of Common Code when building what I call a POST-Web at

    http://info.cern.ch/hypertext/WWW/Library/User/Features/ClientPost.html

If the client is capable of talking directly to all the remote servers then this causes no problem for the HTTP protocol. However, if the POST request is going through a proxy server, the current POST concept is inadequate. I think that MIME is an obvious tool to be considered to extend the HTTP protocol.

    - keep-alive and segmented transfers

      This gives us the ability to get an HTML file and then request the
      inlined images reusing the same connection.

I am currently testing my implementation of the multi-threaded version of the HTTP client in the Library of Common Code. (The implementation is *platform independent* and does not require threads.) When this is working, clients will have a far more powerful tool to keep connections alive, not only for inlined images but also for HTTP sessions, video etc.

    - encouraging deployment of transaction TCP

      Recent proposals cut out the slow start up times of conventional
      TCP protocol stacks.
      Can we coordinate our efforts to promote the widespread adoption
      of these extensions to TCP?

In my opinion TTCP is a very nice way of enhancing TCP - I think we need something that is backwards compatible with TCP for a long time to come.

    - ways to avoid long lists of Accept headers and to better specify
      client capabilities

      Right now Mosaic sends out long lists of Accept headers which
      could easily be replaced by more compact identifiers for standard
      configurations. For home users with standard VGA and slow modems,
      it would be great if servers could take advantage of this to send
      more compact images.

YEP - why not use MIME types without sub-types?

    - consideration of an ASN.1 based format

      We need to look at the advantages of switching to a binary encoded
      format for protocol headers.

What is ASN.1??? - I agree that the protocol must turn into binary mode but I am not sure that this is the right time to do it. Maybe we can have an extension to HTTP as TTCP is to TCP - that is, start binary; if that fails, fall back to the text-based HTTP?

    Suggested Workplan
    ------------------

    October '94:
       We meet in Chicago and seek agreement that a common framework is
       needed for security and payment mechanisms, as well as
       brainstorming the problems/issues that the framework should
       address. We agree a numbering scheme for subsequent HTTP
       releases, and get interested people to sign up to take an active
       role.

    November '94:
       Work starts on a revised Internet Draft covering HTTP as in
       current use. The http-wg mailing list may be appropriate for
       exchanging detailed comments on this document as it is written.

       We use the www-security mailing list to continue brainstorming
       ideas on the common security framework. One or more people
       nominated at the October BOF write this up as an initial draft.

       The objective for November is to finalize the charter and initial
       workplan for the IETF working group. The group uses the http-wg
       mailing list to work together on this document.
    December '94:
       IETF HTTP WG BOF - we present the charter and workplan. This
       meeting should be used to build the consensus and to look forward
       to the next set of actions and milestones. The work group is
       formally established, and people are signed up to write
       particular Internet Drafts.

    Spring '95:
       We present Internet Drafts for the revamped HTTP spec describing
       current practice; the framework for security; and for improved
       performance. This will coincide with the Internet Draft for HTML
       3.0.

    WWW'95:
       Demonstrations of working implementations of these Internet
       Drafts. The HTTP working group starts looking at new issues such
       as the framework needed for digital cash, collaborative
       hypermedia, and scaling issues for information access and the
       implications for HTTP.

The workplan is _very_ ambitious - I think it is the right way to do it - so let's start :-) I am very interested in having an active role in the work!

-- cheers --

Henrik Frystyk                     frystyk@dxcern.cern.ch
                                   + 41 22 767 8265
World-Wide Web Project, CERN, CH-1211 Geneva 23, Switzerland
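Henrik's "why not use MIME types without sub-types?" and the proposal's compact-identifier idea both aim at the same saving, which is easy to quantify. The `Accept-Profile` header name and profile token below are hypothetical:

```python
# A Mosaic-style preamble: one Accept header per supported media type.
verbose = "\r\n".join(
    "Accept: " + media_type
    for media_type in [
        "text/html", "text/plain", "image/gif", "image/jpeg",
        "image/x-xbitmap", "application/postscript", "audio/basic",
        "video/mpeg", "*/*",
    ]
)

# The same capabilities named by a single pre-agreed profile token
# (header name and token are invented for illustration).
compact = "Accept-Profile: std-vga-modem"

bytes_saved = len(verbose) - len(compact)
```

For a home user on a slow modem every request repeats this preamble, so even a hundred-odd bytes saved per request is noticeable.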
WIT, Was: Comments please on agenda for HTTP working group BOF
I think that to get any further we have to divide up the topics. So far this comes out as one thread. Can we try conversing in a pseudo WIT type manner and give semantic links instead of just re:? To start off could we try:

    Proposal:  Start a new thread with some info
    Agree:     Yup, proposal was good
    Disagree:  Nope, it was not
    Accept:    RFC or code writer has incorporated suggestion into code
    Reject:    Idea is inappropriate or contradictory
    Was:       Spin off a new topic from discussion

If none of the above fit, just create a new one that seems best.

One reason for this is that I want to have a go at reorganising this stuff in a WIT style as the basis for a WIT design. Another is that after trying WIT the semantic links are quite useful...
Disagree: Multiple connections
Hi

Let's see if this poor-man's WIT works ;-)

    When this is working then clients have a far more powerful tool to
    keep connection alive, not only for inlined images but also for HTTP
    sessions, video etc.

    It should be noted that one company's solution to the problem of
    time when loading html pages with lots of inlined images was to
    1) grab the page 2) note the images needed for download 3) open up
    separate TCP connections for *each* image 4) find out the width and
    height of each image as it's coming down the pipe, laying out a box
    in which the image gets filled in as it arrives - thus allowing the
    page to be laid out perfectly before all images are received.

    We here would think that 1 x N is the same as N x 1, so opening 4
    connections for 4 different things shouldn't be faster than one
    connection containing all elements, but aesthetically it is *much*
    more appealing.

I see three reasons for not using the proposal above:

1) TCP has a slow start in all connections as it has no a priori knowledge about the round trip time for the connection and hence must use a long timeout for acknowledgements. As more and more data travels over the connection the variance of the estimated round trip gets smaller and the connection gets faster.

2) A connection establishment using TCP requires a 3-way handshake which for small transactions actually can be the major time consuming task.

3) When TCP closes a connection at least one of the parties is put into a TIME_WAIT state where the socket can't accept any new connections. With a relatively small number of new connections this causes no problems, but if the number of sockets hanging in TIME_WAIT state explodes then this can be a real problem. We already have this problem on our `info.cern.ch' server, which simply runs out of sockets.

Maybe the remarks say more about TCP than about the method you mention above ;-) What is needed is basically a transaction oriented protocol.

    Could we get that same effect with one connection?
Sure - the browser must be multithreaded (simulated if not OS-supported), so it can be accepting data and rendering simultaneously, and when accessing inlined images a HEAD command should be sent for each image whose response shows how much screen acreage it'll take (which I *think* can be determined in the first couple of bytes of any GIF or JPEG). The implementation is based on every thread having its own socket. Otherwise it gets very difficult to sort the incoming packets into the set of threads talking over the same connection. -- cheers -- Henrik Frystyk |
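Henrik's three objections can be put into a rough cost model. This is an illustrative back-of-the-envelope sketch only; the RTT, object size, and modem bandwidth figures below are made-up example numbers, not measurements from any real server.

```python
# Illustrative model of per-connection TCP overhead for fetching N small
# objects: each fresh connection pays roughly one RTT for the 3-way
# handshake plus one RTT of request/response turnaround. All constants
# here are assumptions chosen for the example, not measured values.

def fetch_time(n_objects, obj_bytes, rtt_s, bandwidth_bps, per_connection=True):
    transfer = n_objects * obj_bytes * 8 / bandwidth_bps
    if per_connection:
        # handshake + turnaround paid once per object
        overhead = n_objects * 2 * rtt_s
    else:
        # one handshake, then one turnaround per additional object
        overhead = 2 * rtt_s + (n_objects - 1) * rtt_s
    return transfer + overhead

# Ten 2 KB images over a 14.4 kbps modem with a 300 ms RTT:
separate = fetch_time(10, 2048, 0.3, 14400, per_connection=True)
reused   = fetch_time(10, 2048, 0.3, 14400, per_connection=False)
print(round(separate, 1), round(reused, 1))   # ~17.4 s vs ~14.7 s
```

Even this crude model shows connection reuse winning, and it ignores slow start entirely, which only widens the gap in practice.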
Re: Comments please on agenda for HTTP working group BOF |
- keep-alive and segmented transfers This gives us the ability to get an HTML file and then request the inlined images reusing the same connection. I am currently testing my implementation of the multi-threaded version of the HTTP client in the Library of Common Code (The implementation is *platform independent* and does not require threads) When this is working then clients have a far more powerful tool to keep connections alive, not only for inlined images but also for HTTP sessions, video etc. It should be noted that one company's solution to the problem of time when loading HTML pages with lots of inlined images was to 1) grab the page 2) note the images needed for download 3) open up separate TCP connections for *each* image 4) find out the width and height of each image as it's coming down the pipe, laying out a box in which the image gets filled in as it arrives - thus allowing the page to be laid out perfectly before all images are received. We here would think that 1 x N is the same as N x 1, so opening 4 connections for 4 different things shouldn't be faster than one connection containing all elements, but aesthetically it is *much* more appealing. Could we get that same effect with one connection? Sure - the browser must be multithreaded (simulated if not OS-supported), so it can be accepting data and rendering simultaneously, and when accessing inlined images a HEAD command should be sent for each image whose response shows how much screen acreage it'll take (which I *think* can be determined in the first couple of bytes of any GIF or JPEG). Brian |
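On the "first couple of bytes" point: for GIF this is true, since the GIF87a/89a header carries the logical screen width and height as little-endian 16-bit values at bytes 6-9; for JPEG the dimensions only appear in an SOF marker that must be scanned for, so they are not quite at the front. A minimal sketch of the GIF case:

```python
import struct

def gif_dimensions(first_bytes):
    """Read logical screen width/height from the start of a GIF stream:
    6-byte signature ('GIF87a' or 'GIF89a'), then width and height as
    little-endian unsigned 16-bit integers at bytes 6-9."""
    if first_bytes[:3] != b"GIF":
        raise ValueError("not a GIF")
    width, height = struct.unpack("<HH", first_bytes[6:10])
    return width, height

# A minimal hand-built GIF header declaring a 200 x 300 logical screen:
header = b"GIF89a" + struct.pack("<HH", 200, 300)
print(gif_dimensions(header))   # -> (200, 300)
```

So a client only needs the first 10 bytes of the GIF response to reserve the layout box.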
Proposal: MHEAD for Fast inline image formatting Was: Multiple connections |
Problem: When a document is composite (text & images) it is not nice to have to wait for the whole document to load before display, but until the size of the images is known this is tricky. Non-Solutions: 1) Opening up multiple TCP/IP sessions. This is a kludge. Solutions: 1) HTML+ allows the size of an image to be given in the text. We could imagine some sort of "install" utility to set up such info. 2) Using MGET we could imagine sending a resume of the images (size etc) before starting the download. This could be sent in the message header "image/gif; width=200; height=300; colours=256". Again this would require some sort of install perhaps - or the server could be intelligent and "know" about gifs, jpegs etc. This is where I think that MGET is not quite enough, we would also need an MHEAD. I think that we need to enrich the content types to add in extra information also. Phill. |
Image-Hints: (was Re: Comments please on agenda for HTTP working group BOF) |
It should be noted that one company's solution to the problem of time when loading HTML pages with lots of inlined images was to 1) grab the page 2) note the images needed for download 3) open up separate TCP connections for *each* image 4) find out the width and height of each image as it's coming down the pipe, laying out a box in which the image gets filled in as it arrives - thus allowing the page to be laid out perfectly before all images are received. [...] Could we get that same effect with one connection? Sure - the browser must be multithreaded (simulated if not OS-supported), so it can be accepting data and rendering simultaneously, and when accessing inlined images a HEAD command should be sent for each image whose response shows how much screen acreage it'll take (which I *think* can be determined in the first couple of bytes of any GIF or JPEG). There are 2 problems being discussed here: 1) how to minimize the number of TCP connections. 2) how to help the client do the document layout before it has the images (or image sizes). I'd like to work on these independently. Problem 1 is pretty much obvious -- we need to work on it. Problem 2 can be solved with or without Problem 1. I definitely do not like the *wasting* of network resources as is described above. I would like to discuss one or both of the following: a) extend HTML to include image size information for inline images. [This is a real pain for those of us with a lot of content and/or those of us who hand-code the HTML. But with the increasing availability of HTML authoring tools, this issue is less important.] [I realize that this suggestion does not necessarily belong within the domain of this mailing list, but it is a method of solving the problem and if adopted, we wouldn't need to do anything to HTTP....] b) have the server provide ''Image Size Hints'' in the document response header for all inlined images. 
This does not require any content to change, but does require some extra work from the server (e.g., scanning HTML to find the images referenced and looking up their sizes (when local)). The nice thing about both of these is that if the information is absent (either not in the HTML or not provided by the server), then the client falls back to the current method of document presentation. jeff |
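Option (b) above is straightforward to prototype. The sketch below scans HTML for inline image references and builds a hint header value; the header name "Image-Hints" and its `src;width=..;height=..` format are this sketch's assumptions (the proposal fixes neither), and the `sizes` dictionary stands in for the server's local lookup of image dimensions.

```python
import re

def image_hints(html, sizes):
    """Build a hypothetical 'Image-Hints' response header value by
    scanning HTML for <img src="..."> references and attaching the
    dimensions the server knows about. Header name and value format
    are illustrative assumptions, not a proposed standard."""
    hints = []
    for src in re.findall(r'<img\s+[^>]*src\s*=\s*"([^"]+)"', html, re.I):
        if src in sizes:
            w, h = sizes[src]
            hints.append("%s;width=%d;height=%d" % (src, w, h))
    return ", ".join(hints)

page = '<p>Logo: <img src="logo.gif"> and <img src="photo.jpg"></p>'
value = image_hints(page, {"logo.gif": (64, 64), "photo.jpg": (200, 300)})
print(value)   # -> logo.gif;width=64;height=64, photo.jpg;width=200;height=300
```

The fallback property Jeff describes comes for free: an image the server cannot size simply does not appear in the header, and the client lays it out the old way.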
Re: Proposal: MHEAD for Fast inline image formatting Was: Multiple connections |
Problem: When a document is composite (text & images) it is not nice to have to wait for the whole document to load before display, but until the size of the images is known this is tricky. Non-Solutions: 1) Opening up multiple TCP/IP sessions. This is a kludge. Very true. But it is kinda breathtaking and effective from the *client* end. Solutions: 1) HTML+ allows the size of an image to be given in the text. We could imagine some sort of "install" utility to set up such info. This is exactly one solution being given by the same maker of this browser in their "extensions to HTML" document. But I don't like this too much, as I think it crosses the HTML structure-presentation boundary too far and isn't trustworthy (the HTML author or install procedure could be wrong, the image size could change over time, etc.) 2) Using MGET we could imagine sending a resume of the images (size etc) before starting the download. This could be sent in the message header "image/gif; width=200; height=300; colours=256". Again this would require some sort of install perhaps - or the server could be intelligent and "know" about gifs, jpegs etc. This is where I think that MGET is not quite enough, we would also need an MHEAD. I like this idea much more. I think the server could be configured to recognize image files and return coordinates like width, height, and color pretty easily - and it could cache that information for popular images. Brian |
Re: Proposal: MHEAD for Fast inline image formatting Was: Multiple connections |
The following assumes that the client gets the HTML document and then sends the server a list of images to send next, reusing the same connection. I like the idea of being able to get the image sizes in advance of the data, but would also like to be able to interleave the image data streams. This way users see all of the images start to appear concurrently, rather than one by one. One simple idea is to use the segmented encoding approach and include the stream number with the segment length. The initial info on image size/type would specify the stream number for each image. If this sounds too difficult, then at least the server could sort the images by size and send the small ones first. Dave Raggett |
Re: Proposal: MHEAD for Fast inline image formatting Was: Multiple connections |
A general problem that I see in the proposals so far is that we have no guarantee that the images are in fact on the same server as the main document. Often this is _not_ the case, and then it doesn't help to keep the connection open, nor is it easy for the server to get the size of the image. I think a general solution must be based on at least two connections. First the main document gets retrieved. If the client is text-based then fine - no more connections are made. If not, then the client can sort the requests for inline images and make simultaneous (multi-threaded) connections to the servers involved. These can then be multipart, MGET or whatever solution we come up with. This might not seem very elegant but I think it is necessary in order to keep the flexibility and backward compatibility which in my opinion is required. Another solution could be that the client has a special header line saying: keep the connection open - I will tell you when to close, but then I think it's a different protocol from the HTTP we are heading for. -- cheers -- Henrik Frystyk The following assumes that the client gets the HTML document and then sends the server a list of images to send next, reusing the same connection. I like the idea of being able to get the image sizes in advance of the data, but would also like to be able to interleave the image data streams. This way users see all of the images start to appear concurrently, rather than one by one. One simple idea is to use the segmented encoding approach and include the stream number with the segment length. The initial info on image size/type would specify the stream number for each image. If this sounds too difficult, then at least the server could sort the images by size and send the small ones first. |
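Henrik's "sort the requests for inline images" step amounts to resolving each image reference against the document URL and grouping by server, so the client opens one connection (or one MGET) per server rather than one per image. A minimal sketch:

```python
from urllib.parse import urljoin, urlsplit

def group_by_server(base_url, image_refs):
    """Resolve inline image references against the document URL and
    group the absolute URLs by host, one group per server connection."""
    groups = {}
    for ref in image_refs:
        absolute = urljoin(base_url, ref)
        groups.setdefault(urlsplit(absolute).netloc, []).append(absolute)
    return groups

groups = group_by_server(
    "http://info.cern.ch/hypertext/page.html",
    ["icons/a.gif", "icons/b.gif", "http://www.ncsa.uiuc.edu/logo.gif"])
print(sorted(groups))   # -> ['info.cern.ch', 'www.ncsa.uiuc.edu']
```

The text-based client simply never calls this; a graphical one fetches each group over its own (possibly kept-alive) connection, which preserves the backward compatibility Henrik asks for.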
Re: Proposal: MHEAD for Fast inline image formatting Was: Multiple connections |
I like the idea of being able to get the image sizes in advance of the data, but would also like to be able to interleave the image data streams. This way users see all of the images start to appear concurrently, rather than one by one. One simple idea is to use the segmented encoding approach and include the stream number with the segment length. The initial info on image size/type would specify the stream number for each image. If this sounds too difficult, then at least the server could sort the images by size and send the small ones first. Both excellent suggestions. I think the latter will result in better performance on average or lower-end machines. Brian |
Re: Proposal: MHEAD for Fast inline image formatting Was: Multiple connections |
A general problem that I see in the proposals so far is that we have no guarantee that the images are in fact on the same server as the main document. Often this is _not_ the case and then it doesn't help to keep the connection open nor is it easy for the server to get the size of the image. Actually, I'd dispute this. I bet we could get one of our web-crawler authors to add to his crawling algorithm a measure of the ratio of inlined-images-on-same-site to inlined-images-off-site, and that it would probably be something on the order of 20-1, if not 100-1. We can't guarantee it, and we certainly shouldn't set up a protocol that would make inlining off-site images difficult or impossible, but forgoing optimizations because of it is a bad choice, I think. I think a general solution must be based on at least two connections. First the main document gets retrieved. If the client is text-based then fine - no more connections are made. If not then the client can sort the requests for inline images and make simultaneous (multi-threaded) connections to the servers involved. These can then be multipart, MGET or whatever solution we come up with. Right, this would be great, and I don't see how it contradicts other proposals made here, it's just another parallel action. Brian |
Time for HTTP WG BOF |
I have arranged the time for the BOF as Monday evening from 7:30 to 9:30 leaving us time to get to the bar afterwards! :-) See http://union.ncsa.uiuc.edu/bof.shtml for details of other BOFs. I will get Daniel to put a link into the proposed charter as posted to this mailing list. Looking forward to meeting you all. -- Best wishes, Dave Raggett ----------------------------------------------------------------------------- Hewlett Packard Laboratories email: dsr@hplb.hpl.hp.com Filton Road, Stoke Gifford tel: +44 272 228046 Bristol BS12 6QZ fax: +44 272 228003 United Kingdom |
502 result code |
Henrik suggested that I submit this message to the list. I'm not signed up (and I'm not sure how one gets signed up), so please make sure I get cc'ed. That, or sign me up! :) I have been discussing the addition of the 502 (Server too busy) status code with Aleks Totic and Rob McCool at MCom. I have plans to add support for this result code to the next release of MacHTTP and would like to get some guidance on the semantics (and syntax) involved. Is there anything more substantial on this subject other than the IETF HTTP Draft (which says 502 is TBD)? --_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_\_-_-_-_-_-_-_-_-_-_-_-_-_-_-_- Chuck Shotton \ Assistant Director, Academic Computing \ "Shut up and eat your U. of Texas Health Science Center Houston \ vegetables!!!" cshotton@oac.hsc.uth.tmc.edu (713) 794-5650 \ _-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-\-_-_-_-_-_-_-_-_-_-_-_-_- |
502 or 503 |
Now I am thoroughly confused. Is "server too busy" 502 or 503? Also, there was a mention about the possibility of this response containing information for the client about how long the "busy" condition was expected to persist. Is that part of the current 502/503 spec, or is the response just the usual HTTP one-liner? --_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_\_-_-_-_-_-_-_-_-_-_-_-_-_-_-_- Chuck Shotton \ Assistant Director, Academic Computing \ "Shut up and eat your U. of Texas Health Science Center Houston \ vegetables!!!" cshotton@oac.hsc.uth.tmc.edu (713) 794-5650 \ _-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-\-_-_-_-_-_-_-_-_-_-_-_-_- |
Two proposals for HTTP/2.0 |
Here are two modest suggestions for HTTP/2.0 with their rationale. 1. Add a header to the client request that indicates the hostname and port of the URL which the client is accessing. Rationale: One of the most requested features from commercial server maintainers is the ability to run a single server on a single port and have it respond with different top-level pages depending on the hostname in the URL. Service providers would like to have multiple aliases for a single host and have URLs like http://company1.com/, http://company2.com/, http://company3.com/ all return appropriate (and different) pages even though all the hostnames refer to the same IP address. This is not currently possible because there is no way for the server to know the hostname in the URL by which it was accessed. 2. Require (or request) that clients support relative URLs in redirects (status 301 and 302). Rationale: This is important for small special-purpose servers (e.g. gateways). Such a server is simpler to write, more robust, and more portable if it doesn't need to know its hostname or port. At present there are two reasons a server needs to know its hostname and port. First, if it supports CGI it is required to supply this information, and second, for sending 301 and 302 redirects. Normally a small special-purpose server would not support CGI and wouldn't send redirects. However, in order to use relative URLs, a server must deal with requests like GET /dir1/dir2 which should be GET /dir1/dir2/ i.e. with a trailing '/'. The accepted (and perhaps only) way of handling this is with a redirect from the first to the second. Since clients must handle relative URLs anyway there is little cost in having them handle them in redirects. On the other hand the cost for special-purpose servers of needing to know both their hostname and port is significant. John Franks Dept of Math. Northwestern University john@math.nwu.edu |
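The server side of proposal 1 is tiny once the header exists. The sketch below dispatches a document root on a hostname request header; the header name "Host" and the docroot paths are this sketch's assumptions, since the proposal deliberately leaves the header unnamed.

```python
def dispatch_root(headers, docroots, default="/usr/local/www"):
    """Pick a per-alias document root from a (hypothetical) hostname
    request header: one server, one port, one IP address, different
    top-level pages per DNS alias. Strips any ':port' suffix."""
    host = headers.get("host", "").split(":")[0].lower()
    return docroots.get(host, default)

roots = {"company1.com": "/www/c1", "company2.com": "/www/c2"}
print(dispatch_root({"host": "company2.com"}, roots))   # -> /www/c2
print(dispatch_root({}, roots))                         # -> /usr/local/www
```

Note the fallback: a request from a client that never sends the header still gets served, which matters for the compatibility arguments later in this thread.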
RE: Two proposals for HTTP/2.0 |
Re the first proposal, to incorporate the hostname somewhere. This would be most cleanly put into the URL itself :- GET http://hostname/fred http/2.0 This is the syntax for proxy redirects. This suggestion conflicts with the aims of the second I'm afraid. I don't think that it's a good thing for a server to not know its name. Proxying is far too prevalent now and a server that doth not know its name shall be called a LOOP. Phill |
Re: Two proposals for HTTP/2.0 |
According to hallam@axal04.cern.ch: Re the first proposal, to incorporate the hostname somewhere. This would be most cleanly put into the URL itself :- GET http://hostname/fred http/2.0 This is the syntax for proxy redirects. The only thing objectionable about this is that it is a substantial change from HTTP/1.0. I suppose we could say the syntax is GET URL HTTP/?? and that HTTP/1.0 only allows relative URLs (i.e. relative to the host being queried). This suggestion conflicts with the aims of the second I'm afraid. I don't think that it's a good thing for a server to not know its name. Proxying is far too prevalent now and a server that doth not know its name shall be called a LOOP. Obviously a proxy server would have to know its own name. But a small gateway that speaks HTTP on one side and accesses a local service on the other side shouldn't need to. (Am I wrong about this? How would that create a loop?) The proposal was to make it possible for such a gateway to use file system names in URLs without being required to know its own name. Parenthetically, IMHO proxy servers and regular servers should be different programs. Their purposes are quite different. John Franks |
Re: Two proposals for HTTP/2.0 |
According to hallam@axal04.cern.ch: Re the first proposal, to incorporate the hostname somewhere. This would be most cleanly put into the URL itself :- GET http://hostname/fred http/2.0 This is the syntax for proxy redirects. The only thing objectionable about this is that it is a substantial change from HTTP/1.0. I suppose we could say the syntax is GET URL HTTP/?? and that HTTP/1.0 only allows relative URLs (i.e. relative to the host being queried). If the server needs to know its own name, it seems more appropriate that the info be made part of the request header and not the request itself. If you want to have any hope of easily maintaining backwards compatibility with slack clients, the header is a much safer place to put this info. A more general-purpose solution might be to have clients send the complete URL that they used to make their current query. Something like: From-URL: http://some.host/some/path.html Since there will be backwards compatibility issues with old and new clients, it matters little where this information is passed as some clients will send it and some won't. In order to prevent a wholesale overhaul of HTTP request processing, it seems much easier to add a new header field than to substantially change the syntax of requests. This suggestion conflicts with the aims of the second I'm afraid. I don't think that it's a good thing for a server to not know its name. Proxying is far too prevalent now and a server that doth not know its name shall be called a LOOP. Relying on something as weak as a domain name for differentiating the roles a server is to perform is an extreme hack. There are MUCH better ways to accommodate this that won't be subject to the whims, vagaries, and failures of DNS. Path arguments, header fields, and any number of other techniques can be used already to help a server determine its "role". Simply depending on the DNS name of the server is not sufficiently robust. Obviously a proxy server would have to know its own name. 
But a small gateway that speaks HTTP on one side and accesses a local service on the other side shouldn't need to. (Am I wrong about this? How would that create a loop?) The proposal was to make it possible for such a gateway to use file system names in URLs without being required to know its own name. Parenthetically, IMHO proxy servers and regular servers should be different programs. Their purposes are quite different. I agree! Proxy servers perform a completely different function from "regular" servers and in my opinion, are more properly termed "proxy clients". With the advent of caching clients like Netscape, making allowances in HTTP for proxy servers may become less of a requirement except at sites where there are evil firewalls that only allow the proxy in and out. --_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_\_-_-_-_-_-_-_-_-_-_-_-_-_-_-_- Chuck Shotton \ Assistant Director, Academic Computing \ "Shut up and eat your U. of Texas Health Science Center Houston \ vegetables!!!" cshotton@oac.hsc.uth.tmc.edu (713) 794-5650 \ _-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-\-_-_-_-_-_-_-_-_-_-_-_-_- |
Re: Two proposals for HTTP/2.0 |
I agree! Proxy servers perform a completely different function from "regular" servers and in my opinion, are more properly termed "proxy clients". With the advent of caching clients like Netscape, making allowances in HTTP for proxy servers may become less of a requirement except at sites where there are evil firewalls that only allow the proxy in and out. I disagree very strongly here. Security proxies such as the TIS proxy are rather different to what the CERN proxy server provides. Here there is a primitive version of an item I believe represents the future of the Web, a caching relay server. The name proxy is a misnomer. Client-side caches help, but only to a small extent. Client-side caches cannot be safely shared in most circumstances. Phill H-B |
Re: Two proposals for HTTP/2.0 |
According to Chuck Shotton: Relying on something as weak as a domain name for differentiating the roles a server is to perform is an extreme hack. There are MUCH better ways to accommodate this that won't be subject to the whims, vagaries, and failures of DNS. Path arguments, header fields, and any number of other techniques can be used already to help a server determine its "role". Everything you say is true from a technical point of view. However, the issue here is a political/commercial one. When a company contracts with a service provider to create a WWW presence they want the URL for their company to be something like http://company_name.com/ They don't want the service provider's name in the URL and they don't want any path or port stuff at the end. It's a PR thing. It may seem silly but it is important to them. (As you no doubt know there are lawsuits now over the ownership rights to DNS names.) On the other hand, the service provider does not want to have to have a different computer for each client since most clients put minimal load on a server. It seems to me that both the desires of the company and the desires of the service provider are reasonable and it ought to be possible to accommodate them with a very minor change in the protocol. John Franks |
Re: Two proposals for HTTP/2.0 |
According to Chuck Shotton: Relying on something as weak as a domain name for differentiating the roles a server is to perform is an extreme hack. There are MUCH better ways to accommodate this that won't be subject to the whims, vagaries, and failures of DNS. Path arguments, header fields, and any number of other techniques can be used already to help a server determine its "role". Everything you say is true from a technical point of view. However, the issue here is a political/commercial one. When a company contracts with a service provider to create a WWW presence they want the URL for their company to be something like http://company_name.com/ This is an apples and oranges discussion. An alias name in the DNS for a computer has very little to do with Web servers or HTTP. There is NO change needed to HTTP request syntax to accommodate this. As I said before, clients are going to need to send the info to the server one way or another. My proposal is that they adopt a standard HTTP header field that specifies the complete remote URL used to access the server. Since there will be a mix of clients, some supporting host name reporting and some not, it just doesn't matter how this info gets to the server. Since it doesn't matter, the easier-to-implement solution is a new HTTP request header field. It allows all clients and servers to operate as they do now with NO code changes. Clients and servers that actually need host name information can have tiny mods made to send the extra header field containing the URL and process it. Leave the standard alone on this issue. It is robust enough to do what you want using the mechanisms built into it now without completely convoluting the syntax of a request. Companies will still be able to use whatever domain name they want and servers will get a LOT more info than just the host name with this scheme. In any case, the client must conform to a new standard, whatever it is, or this won't work. 
All I'm suggesting is that there is a better way to implement the delivery of host name info to the server that doesn't involve hacking the request syntax and can be backwards compatible with ALL clients and servers. From-URL: http://host.name/file/path/info.html --_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_\_-_-_-_-_-_-_-_-_-_-_-_-_-_-_- Chuck Shotton \ Assistant Director, Academic Computing \ "Shut up and eat your U. of Texas Health Science Center Houston \ vegetables!!!" cshotton@oac.hsc.uth.tmc.edu (713) 794-5650 \ _-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-\-_-_-_-_-_-_-_-_-_-_-_-_- |
Re: Two proposals for HTTP/2.0 |
Everything you say is true from a technical point of view. However, the issue here is a political/commercial one. When a company contracts with a service provider to create a WWW presence they want the URL for their company to be something like http://company_name.com/ This is an apples and oranges discussion. An alias name in the DNS for a computer has very little to do with Web servers or HTTP. There is NO change needed to HTTP request syntax to accommodate this. As I said before, clients The issue in question is not that of using CNAME aliases (which provide different names for the same service), but one of providing different services on the same machine, all with (vanity) addresses of the form above. I _think_ this is currently done at a few sites with a feature of the BSD ifconfig that allows one interface to accept traffic on multiple IP addresses, then hacking the server to serve up different web pages for the different IP addresses. It's an ugly hack, but there is a demand. I personally doubt this can be "fixed" in the HTTP protocol because of the problem of supporting old clients, and because this is, in effect, trying to subvert the meaning of the DNS in the context of URLs. -- Albert Lunde Albert-Lunde@nwu.edu |
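The multiple-IP-address trick Albert describes needs no protocol change at all, because the server can ask its own socket which local address the client connected to. A minimal sketch, using a loopback connection to stand in for a real multi-address interface:

```python
import socket

def docroot_for_connection(conn, roots, default="/www/default"):
    """Dispatch on the *local* IP address the client reached us on --
    the multiple-IP ifconfig trick. getsockname() reports the address
    of this (server) end of the accepted connection. The docroot paths
    here are illustrative assumptions."""
    local_ip = conn.getsockname()[0]
    return roots.get(local_ip, default)

# Demonstrate with a loopback connection:
server = socket.socket()
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick one
server.listen(1)
client = socket.create_connection(server.getsockname())
conn, _ = server.accept()
root = docroot_for_connection(conn, {"127.0.0.1": "/www/loopback"})
print(root)   # -> /www/loopback
for s in (conn, client, server):
    s.close()
```

This is why the per-IP scheme works without client changes, and also why CNAME aliases do not: every alias resolves to the same IP address, so getsockname() cannot tell them apart.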
Re: Two proposals for HTTP/2.0 |
Everything you say is true from a technical point of view. However, the issue here is a political/commercial one. When a company contracts with a service provider to create a WWW presence they want the URL for their company to be something like http://company_name.com/ This is an apples and oranges discussion. An alias name in the DNS for a computer has very little to do with Web servers or HTTP. There is NO change needed to HTTP request syntax to accommodate this. As I said before, clients The issue in question is not that of using CNAME aliases (which provide different names for the same service), but one of providing different services on the same machine, all with (vanity) addresses of the form above. Yes, I understand the problem. It is identical to the technique used by some C programs to examine argv[0] and perform different behaviors (serve different home pages in the WWW case) based on the name, like compress/uncompress or sendmail/newalias. But instead of using the name of a program to determine behavior on a host, admins want to use the name of a host to determine behavior of a single server (with multiple names). This is a legitimate technique to use. I just question the logic behind altering the syntax of HTTP requests when other mechanisms exist. The real issue is that if clients don't send the name, servers have no way of knowing which of many names was used to contact the server. SO, clients ultimately have to support sending this info. Since clients need to change, there will be a non-trivial period of time where some clients support the new method (whatever that may be) and some don't. In order to ease the transition (strictly from a software developers' perspective), the servers should easily be able to accommodate requests from both types of clients. 
The best way to do this is to try and leave the ways that clients communicate with servers relatively untouched and enhance the amount of info sent from client to server using features in the HTTP protocol designed for this purpose. Namely, HTTP request header fields. New clients will send the field, old clients won't. New servers will understand the field, old servers won't. New clients will still be able to talk to old servers with the SAME syntax, and old clients can talk to new servers, too. Changing the request syntax to include a full URL will preclude NEW clients being able to talk to OLD servers. The client has NO way of knowing whether or not the server it is about to talk to can understand HTTP/2.0 until it talks to it. This is the single biggest reason to avoid radical changes to the syntax for the request. I don't know of any servers now that break if they get an HTTP request header field that they don't understand. But I bet every one of them will fail if they get a complete URL in a GET request. I _think_ this is currently done at a few sites with a feature of the BSD ifconfig that allows one interface to accept traffic on multiple IP addresses, then hacking the server to serve up different web pages for the different IP addresses. This is different from using CNAMEs and doesn't present the same problem since you DO know which IP address was contacted and can equate this directly to a host name. It also isn't widely supported on many Unix workstations. Another thing to consider is that a VAST majority of Web servers aren't even being run on Unix servers. There are MANY, MANY more servers running on PCs and Macs than Unix. So continuing to adopt a Unix-centric approach to implementing new HTTP features is not necessarily the best idea. It's an ugly hack, but there is a demand. 
I personally doubt this can be "fixed" in the HTTP protocol because of the problem of supporting old clients, and because this is, in effect, trying to subvert the meaning of the DNS in the context of URLs. In so far as the HTTP protocol equates to the actual request/response method syntax, I agree. However, there is an unlimited ability to modify client and server interaction using other parts of the request/response data (the header fields). --_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_\_-_-_-_-_-_-_-_-_-_-_-_-_-_-_- Chuck Shotton \ Assistant Director, Academic Computing \ "Shut up and eat your U. of Texas Health Science Center Houston \ vegetables!!!" cshotton@oac.hsc.uth.tmc.edu (713) 794-5650 \ _-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-\-_-_-_-_-_-_-_-_-_-_-_-_- |
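Chuck's compatibility argument rests on one property of header processing: a server that collects header fields generically can simply ignore any field it does not recognize. The sketch below shows such a parser handling the hypothetical From-URL field proposed earlier in the thread; an old server running the same loop would just leave the field unused.

```python
def parse_request(raw):
    """Minimal HTTP request parser in the spirit of the argument:
    unknown header fields are collected, never rejected, so an old
    server silently ignores a new field such as the hypothetical
    'From-URL' while a new server can act on it."""
    lines = raw.split("\r\n")
    method, path, version = lines[0].split()
    headers = {}
    for line in lines[1:]:
        if not line:                       # blank line ends the headers
            break
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    return method, path, version, headers

req = ("GET /index.html HTTP/1.0\r\n"
       "From-URL: http://host.name/index.html\r\n\r\n")
method, path, version, headers = parse_request(req)
print(method, path, headers.get("from-url"))
```

The request line itself never changes shape, which is the whole point: old and new clients and servers all interoperate, and only the pairs that both understand the new field get the new behavior.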
Re: Two proposals for HTTP/2.0 |
I think allowing GET url HTTP/2.0 makes sense just in terms of cleaning up the protocol, independently of the motivation of helping people who want to serve multiple hostnames from the same host. Servers don't really need to know their own names so much as they need to be able to discover their own addresses and, after doing the name lookup on a new hostname, first ask "is this me?". Servers will also need some way to discover their own port, though. |
Re: Two proposals for HTTP/2.0 |
I think allowing GET url HTTP/2.0 makes sense just in terms of cleaning up the protocol, independently of the motivation of helping people who want to serve multiple hostnames from the same host. Servers don't really need to know their own names so much as they need to be able to discover their own addresses and, after doing the name lookup on a new hostname, first ask "is this me?". Servers will also need some way to discover their own port, though. This is only true of servers on Unix implemented to run under inetd. It isn't the case on any other server on any other platform including stand-alone Unix servers, because these servers already know what port they are listening on. Servers DO need to know host name and port info so they can pass it to CGI applications which may need to generate self-referencing URLs. They just don't need to find it out by forcing a wholesale change on the way clients make requests to the server. Imagine all of the software that will have to change, from clients and servers to dedicated scripts, applications, etc., if the syntax of a GET request changes to require a complete URL. Information contained in the URL is redundant, given that servers already know their IP address, the protocol they are communicating with, and the port number. The ONLY missing piece of information is something that has NOTHING to do with HTTP, HTML, or the WWW and everything to do with some strictly commercial needs - namely the actual DNS name that was used to access the server. As I said before, using the domain name to determine server function may (or may not) be considered a hack, but it doesn't really have anything to do with HTTP, per se. It has to do with some configuration "tricks" that some server administrators feel they need to do to make customers happy. I'm all for that, but I think that the appropriate mechanism should be chosen and munging the HTTP request syntax isn't it. 
Bottom line is that it would be a lot easier to look for a new request header field than to have to add a bunch of conditional code to process a different request syntax for HTTP/1.0 vs. HTTP/2.0. The two protocols will not be forward/backward compatible if a syntax change is made to the request, causing a lot of headaches for everyone. I suggest avoiding the headaches altogether and simply defining the new request header field. Can someone point out a good reason NOT to accommodate the need for sending a host name by putting it in a required header field as part of a complete URL? If there's something I'm overlooking, I'll gladly stop whining. --_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_\_-_-_-_-_-_-_-_-_-_-_-_-_-_-_- Chuck Shotton \ Assistant Director, Academic Computing \ "Shut up and eat your U. of Texas Health Science Center Houston \ vegetables!!!" cshotton@oac.hsc.uth.tmc.edu (713) 794-5650 \ _-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-\-_-_-_-_-_-_-_-_-_-_-_-_- |
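Neither form had been standardized when this exchange took place. As a purely illustrative sketch (written in a modern language for convenience; the "Host" field name is hypothetical here), a server that wanted to stay compatible with both proposals could accept either a full URL in the request line or a host-carrying header field:

```python
from urllib.parse import urlsplit

def parse_request(raw):
    """Accept either a full URL in the request line (the HTTP/2.0
    proposal) or a host-carrying header field (the counter-proposal).
    Illustration only; 'Host' is a hypothetical field name here."""
    lines = raw.split("\r\n")
    method, target, version = lines[0].split()
    headers = {}
    for line in lines[1:]:
        if not line:
            break
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    if target.lower().startswith("http://"):       # full-URL request line
        parts = urlsplit(target)
        host, path = parts.netloc, parts.path or "/"
    else:                                          # header-field form
        host, path = headers.get("host"), target
    return method, host, path

# Both styles yield the same (method, host, path) triple:
full = parse_request("GET http://shop.example.com/cart HTTP/2.0\r\n\r\n")
hdr = parse_request("GET /cart HTTP/1.0\r\nHost: shop.example.com\r\n\r\n")
```

Either way, the server recovers the DNS name the client used, which is the one piece of information both sides of the thread agree is missing.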
Re: Two proposals for HTTP/2.0 |
Changing the request syntax to include a full URL will preclude NEW clients being able to talk to OLD servers. Are you really proposing that HTTP/2.0 be kept compatible with HTTP/1.0 such that old HTTP/1.0 servers could ignore the "HTTP/2.0" in the GET request and respond as if it were a HTTP/1.0 request? Any protocol change for HTTP will have to be staged by first getting most of the servers to upgrade. If there are no changes proposed that would actually require some different response, then why bother calling it 'HTTP/2.0' at all? Actually, this gets me to a point where I want to stop talking about HTTP/2.0 at *all*: we need a specification/standard for HTTP/1.0, as an IETF RFC, either an "informational" one or as a "draft standard". Is anyone willing to volunteer to put such a beast together? |
Re: Two proposals for HTTP/2.0 |
Changing the request syntax to include a full URL will preclude NEW clients being able to talk to OLD servers. Are you really proposing that HTTP/2.0 be kept compatible with HTTP/1.0 such that old HTTP/1.0 servers could ignore the "HTTP/2.0" in the GET request and respond as if it were a HTTP/1.0 request? Yes. This is pretty important, since most servers will handle this already. (Most apparently ignore the HTTP/1.0 tag or don't care if the version number is off. Try it by telnetting to port 80 and sending GET / HTTP/2.0 to any server) This means that forward/backward compatibility can be maintained between new clients and old servers NOW, with no changes. Any protocol change for HTTP will have to be staged by first getting most of the servers to upgrade. Only if the syntax of the request/response changes. Otherwise, the status quo can be maintained for old servers while new clients and servers get the benefit of new HTTP additions. If there are no changes proposed that would actually require some different response, then why bother calling it 'HTTP/2.0' at all? A question of semantics, I suppose. If all that changes are the header fields, leaving the syntax of methods, requests, and responses alone, then there is no fundamental change required of old servers as they can just ignore the new headers. If radical change is a requirement for increasing the protocol version number, then there's no reason to change the version number for this "hostname" proposal. But if substantial functionality is added in the context of the existing HTTP/1.0 standard, there's also no reason that it can't be termed a new draft of the standard, or a new version altogether. What's in a number anyway? Actually, this gets me to a point where I want to stop talking about HTTP/2.0 at *all*: we need a specification/standard for HTTP/1.0, as an IETF RFC, either an "informational" one or as a "draft standard". Is anyone willing to volunteer to put such a beast together? 
There's already a draft RFC for HTTP/1.0. Were you thinking of something beyond the current draft that's available from info.cern.ch? --_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_\_-_-_-_-_-_-_-_-_-_-_-_-_-_-_- Chuck Shotton \ Assistant Director, Academic Computing \ "Shut up and eat your U. of Texas Health Science Center Houston \ vegetables!!!" cshotton@oac.hsc.uth.tmc.edu (713) 794-5650 \ _-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-\-_-_-_-_-_-_-_-_-_-_-_-_- |
Re: Two proposals for HTTP/2.0 |
There's already a draft RFC for HTTP/1.0. Were you thinking of something beyond the current draft that's available from info.cern.ch? There is no Internet Engineering Task Force 'RFC' for HTTP. There may be a document that CERN put on the web that describes its use, but it hasn't been published as an RFC. You might want to check out RFC 1310, "The Internet Standards Process", for more details. I thought we were here (in http-wg, rather than on www-talk) for the purpose of creating Internet Standards for HTTP. If that isn't the purpose of this mailing list, would someone please correct me? (and take me off the list; I'm on enough 'random chatter' mailing lists, thank you). |
HTTP/1.0 draft status |
Actually, this gets me to a point where I want to stop talking about HTTP/2.0 at *all*: we need a specification/standard for HTTP/1.0, as an IETF RFC, either an "informational" one or as a "draft standard". Is anyone willing to volunteer to put such a beast together? Henrik and I, with the help of Bob Denny, have been working on it steadily over the past month. The draft should be available sometime early next week. We anticipate that it will go through at least one iteration before the San Jose BOF, where it will be the main topic. ......Roy Fielding ICS Grad Student, University of California, Irvine USA <fielding@ics.uci.edu> <URL:http://www.ics.uci.edu/dir/grad/Software/fielding> |
Re: Two proposals for HTTP/2.0 |
Two points :- 1) Running httpd under inetd is definitely not recommended. Under many UNIX implementations the inetd daemon breaks very badly when it gets a large number of simultaneous requests. So one netscape file upload and your whole system is hosed. 2) The proposal to allow specification of the whole URL at the method prompt would be an option, not mandatory. The point being that proxies should be intelligent enough to identify requests to themselves. Phill H-B |
Re: Two proposals for HTTP/2.0 |
Larry writes: Actually, this gets me to a point where I want to stop talking about HTTP/2.0 at *all*: we need a specification/standard for HTTP/1.0, as an IETF RFC, either an "informational" one or as a "draft standard". Is anyone willing to volunteer to put such a beast together? As agreed at the WWWF'94 HTTP BOF, Henrik Nielsen and Roy Fielding are working on this and will report at the IETF meeting next month. Simon Spero will report on work on HTTP-NG. -- Best wishes, Dave Raggett ----------------------------------------------------------------------------- Hewlett Packard Laboratories email: dsr@hplb.hpl.hp.com Filton Road, Stoke Gifford tel: +44 272 228046 Bristol BS12 6QZ fax: +44 272 228003 United Kingdom |
Re: HTTP BOF - Draft agenda (was: Re: HTTP/1.0 draft status) |
*The discussion on HTTP/1.0 specification will be restricted solely to differences between the specification and existing practice. ^^^^^^^^^^^^^ This should be "conflicts between the specification and existing practice." There will be many differences, since there are many different "existing practices". The chairs request that any detailed comments be submitted before the start of the meeting, indicating the nature of the problem, its severity, together with any suggested fixes.* Yes, that would be nice. Sub groups from Chicago reporting to this group: HTTP 1.0 review group RTF - Roy T Fielding RD - Bob Denny HF - Henrik F[a-z]+ That's Henrik Frystyk Nielsen HTTP-NG group SES - Simon E Spero DSR - David S Raggett AGENDA Administrivia: {Introduction 5 mins {Presentation of Agenda {Changes to order of business HTTP/1.0: 30 mins Report from HTTP 1.0 review group (RTF/HF/RD) 60 mins Discussion arising from report HTTP-NG: {Architecture and Requirements (SES/DSR) 50 mins {Specification overview (SES/DSR) {Implementation experience and {measurements (SES/DSR) {Discussion 5 mins {Formation of working group. {Adoption of proposed charter Sounds reasonable. Do we have a "proposed charter"? Dave? I know there were some problems (i.e., lack of specificity) with the first draft, but I never received any updated version. REQUIRED READING: HTTP 1.0 specification http://info.cern.ch/hypertext/WWW/Protocols/HTTP/HTTP2.html (http://where.is.the.new.one/roy) It will be at http://www.ics.uci.edu/pub/ietf/http/ ftp://www.ics.uci.edu/pub/ietf/http/ just as soon as Henrik and I stop making major changes (early next week, if not sooner). I'll put a revised charter there as well, if I get one [the first draft is there presently]. ......Roy Fielding ICS Grad Student, University of California, Irvine USA <fielding@ics.uci.edu> <URL:http://www.ics.uci.edu/dir/grad/Software/fielding> |
Revised agenda for IETF BOF on 7th December 1994 |
This is a revised version of the agenda as sent out by Simon Spero. --- HTTP - HyperText Transfer Protocol BOF Wednesday December 7th 1994 19:30-22:00 HTTP - the HyperText Transfer Protocol is an application protocol which serves as the basis of the World Wide Web distributed hypertext system. The BOF has two main aims. The first goal is to review and correct the updated HTTP 1.0 specification to make it suitable for standards track advancement. The second aim is to discuss future work on the next versions of HTTP, and to formally propose the setting up of a working group for HTTP. Please send comments on the agenda to Dave Raggett <dsr@hplb.hpl.hp.com> AGENDA Administrivia: {Introduction 5 mins {Presentation of Agenda {Changes to order of business HTTP/1.0: 30 mins Report from HTTP 1.0 review group (Roy Fielding, Henrik Nielsen and Bob Denny) 50 mins Discussion arising from report (restricted to differences between the draft specification and existing practice). 20 mins Discussion of limitations of HTTP 1.0 and options for short term fixes HTTP-NG: {Architecture and Requirements (Simon Spero/Dave Raggett) 40 mins {Specification overview (Simon Spero/Dave Raggett) {Implementation experience and {measurements (Simon Spero/Dave Raggett) {Discussion 5 mins {Formation of working group. {Adoption of proposed charter REQUIRED READING: HTTP 1.0 specification and description of HTTP-NG http://info.cern.ch/hypertext/WWW/Protocols/Overview.html -- Dave Raggett <dsr@hplb.hpl.hp.com> tel: +44 272 228046 fax: +44 272 228003 |
http://info.cern.ch/hypertext/WWW/Protocols/Overview.html |
Formatting on the ascii version leaves something to be desired; I assume the editors will fix this before moving the document forward. (DL lists don't indent reasonably, the table of contents is useless, etc.) ================================================================ 2.1 Augmented BNF I went through lots of machinations on BNF for the URL document, and wound up with: This is a BNF-like description of the Uniform Resource Locator syntax, using the conventions of RFC822, except that "|" is used to designate alternatives, and brackets [] are used around optional or repeated elements. Briefly, literals are quoted with "", optional elements are enclosed in [brackets], and elements may be preceded with <n>* to designate n or more repetitions of the following element; n defaults to 0. The formatting wound up working better to put comments on separate lines; even though the result was longer, it was easier to read. It seems like the addition of the # construct doesn't make the BNF easier to read or interpret at all. 4.1 Date/Time Format I think you should define HTTP as using RFC 822/1123 date formats, and make the "strong recommendation" be to accept other formats, rather than the way you have this worded. 4.2 Content Types In what way is the HTTP content-type a superset of the MIME BNF? If you must repeat some BNF that occurs in another RFC, you must identify how this is either just a repeat, or a modification, and if it is a modification, how it is different from the source. 4.2.1 Multipart Types The BNF for multipart types is very confusing, since it isn't at all clear whether this is just supplying BNF for MIME or something new that is MIME-like. I'd appreciate it if you would consider mentioning multipart/form-data as proposed in draft-ietf-html-fileupload-00.txt. 4.2.1.1 multipart/alternative The "multipart/alternative" content-type is used in MIME to send content-type variants of a single entity when the receiver's capabilities are not known. 
This is not the case with HTTP. Multipart/alternative can be used to provide metainformation of many instances of an object. This is really annoying if it is current practice (to redefine what "alternative" might mean). 4.3 General Message Header Fields You're really recommending that HTTP servers send a "MIME-Version" field? But the MIME committee has basically recommended that there not be any MIME-Version other than 1.0. You might as well tie MIME-Version: 1.0 into HTTP/1.0 and suppose that any new MIME-Version will imply a new HTTP version. 4.3.3 Message-ID Do you care to comment on message-id vs. URN? .... (more later) |
Re: http://info.cern.ch/hypertext/WWW/Protocols/Overview.html |
Formatting on the ascii version leaves something to be desired; I assume the editors will fix this before moving the document forward. (DL lists don't indent reasonably, the table of contents is useless, etc.) Ack! I failed to notice that Dave listed the CERN location. That is not the main distribution site for the IETF version, and the copy there is one day old (and what a difference a day makes!). The version being submitted to the IETF as an I-D is available at http://www.ics.uci.edu/pub/ietf/http/ ftp://www.ics.uci.edu/pub/ietf/http/ it is currently available in compressed postscript as: draft-fielding-http-spec-00.ps.Z The above I-D name will be used until the HTTP-WG becomes official. Text and HTML versions will follow as soon as I can generate them, and both the text and PS versions will (eventually) be available from all the Internet-Draft shadow directories. Larry, your comments are appreciated and will be addressed as soon as Henrik and I can get our heads above water. ......Roy Fielding ICS Grad Student, University of California, Irvine USA <fielding@ics.uci.edu> <URL:http://www.ics.uci.edu/dir/grad/Software/fielding> |
User authentication for the proxy |
We desperately need a way in the protocol to authenticate the user to a proxy. Here's the first draft proposal for public review: http://home.mcom.com/info/proxy-auth.html Cheers, -- Ari Luotonen Netscape Communications Corp. 650 Castro Street, Suite 500 Mountain View, CA 94041, USA |
HTTP/1.0 Specification and HTTP-WG Archives |
Hello all, The revised HTTP/1.0 Specification has been submitted as an Internet-Draft under the name <draft-fielding-http-spec-00.txt> and is available for comment at the following locations: http://www.ics.uci.edu/pub/ietf/http/ ftp://www.ics.uci.edu/pub/ietf/http/ These URLs both point to the site of a comprehensive archive of the materials currently under consideration by the proposed IETF HTTP Working Group. Additional contributions are welcome. A hypermail archive of the HTTP-WG mailing list is available at http://www.ics.uci.edu/pub/ietf/http/hypermail/ or through the URLs above. ......Roy Fielding ICS Grad Student, University of California, Irvine USA <fielding@ics.uci.edu> <URL:http://www.ics.uci.edu/dir/grad/Software/fielding> |
multipart in HTTP/1.0 draft |
Is use of MIME or MIME-like multi-part types a current practice supported by either clients or servers? (My impression was that it was not used and it could confuse a lot of clients.) If not, I'd suggest removing it from this version of the specification. I note the disclaimer that this specification may not reflect current practice, but still, codifying current practice is a main aim of the HTTP/1.0 spec. -- Albert Lunde Albert-Lunde@nwu.edu |
RE: User authentication for the proxy |
We desperately need a way in the protocol to authenticate the user to a proxy. Here's the first draft proposal for public review: The digest authentication method can be used for authentication along the whole chain and does not involve sending a password in the clear. It is intended to replace the Basic scheme ASAP. The next public release of the daemon and CERN library will have it incorporated. The scheme is :- Let the password be P, the username be U, the Realm be R, and the hash function H(); let the binary operator a^b represent the concatenation of the strings a and b. The Request:- Request = Start ^ Boundary ^ Secure-Fields ^ Signature ^ Insecure-Fields ^ CRLF ^ Body Where Start = Method URI "HTTP/1.0" CRLF Boundary = "Digest-Boundary: " Algorithm [, nonce] CRLF Secure-Fields = Any HTTP request fields Signature = Algorithm, S CRLF Insecure-Fields = Informational HTTP fields only (TBS) S = H(H(Boundary ^ Secure-Fields) ^ Date ^ H(P ^ U ^ "@" ^ R)) Apologies for the formatting; this has changed a few times at Alan's suggestion and other people's. The working spec is now on paper :-( and in C :-). This scheme is not intended as a replacement for Shen, SHTTP or whatever; the aims are:- 1) Authentication only 2) Unconstrained by export controls 3) Unconstrained by patent restrictions 4) Drop-in one-for-one replacement of the BASIC scheme 5) Does not compromise high grade security schemes. 6) Password never transmitted en clair 7) Access key not transmitted en clair (5) is most important. One area in which a lot of people are interested is in setting up Web MUDs, MOOs, etc. Some people will run such systems to snarf passwords, and despite warnings people will use the same password on multiple machines. Given (7), we can remove the need for the dungeon master to ever see the user's plaintext password: the password is hashed in the client and only the hash value transmitted. This communication could be encrypted. 
The main objection to the digest scheme is that the password file is all you need for access. This is why the scheme does not replace the strong authentication schemes in Shen or SHTTP (which should emerge as soon as we have the two schemes combined). As far as the proxy scheme goes, it simplifies a few things: multiple encapsulations are possible, for example, and leaking authentication information is not a security hole (it can only be used within the validity interval of the Date; there is also a stronger method of preventing a replay attack, but it is not practical on a forking UNIX server, as it needs threads). Phill H-B |
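The signature formula above can be exercised directly. As a sketch only: the proposal leaves the Algorithm parameter open, so MD5 is assumed here, as is the convention that H() yields a hex digest which is then concatenated as a string; all field values below are invented.

```python
import hashlib

def H(data: bytes) -> bytes:
    # "Algorithm" is left open in the proposal; MD5 is assumed purely
    # for illustration. H() returns the hex digest as bytes so results
    # can be concatenated like strings (another assumption).
    return hashlib.md5(data).hexdigest().encode()

def digest_signature(boundary, secure_fields, date, password, user, realm):
    """S = H( H(Boundary ^ Secure-Fields) ^ Date ^ H(P ^ U ^ "@" ^ R) ),
    with ^ meaning concatenation, per the sketch above."""
    inner = H(boundary + secure_fields)
    credential = H(password + user + b"@" + realm)  # server stores only this
    return H(inner + date + credential)

# Invented values; note the plaintext password never crosses the wire:
sig = digest_signature(b"Digest-Boundary: MD5\r\n",
                       b"GET /secret HTTP/1.0\r\n",
                       b"Tue, 29 Nov 1994 12:00:00 GMT",
                       b"hunter2", b"alice", b"example-realm")
```

This also makes the stated objection concrete: the stored hash H(P ^ U ^ "@" ^ R) is itself sufficient to forge signatures, so the password file must be protected as carefully as the passwords.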
A few semantic points for HTTP/1.0 draft |
After a quick read through the HTTP/1.0 draft, I have a couple of comments. By and large, the draft seems to be a very good representation of standard practice (with a nod to concerns over multi-part messages), but there are two "pet peeves" that remain unresolved and could be cleared up with a short addition of some semantic info to the spec. Specifically, there are minor ambiguities regarding the 503 error code defined in section 6.3.4 and the encoding of object-body parts as described in section 7 and RFC 1630. Regarding section 6.3.4, the draft specifies that the error code "503 Service Unavailable" simply indicates that the server is unable to handle the request. There are two different occasions when this code can be returned, both of which are detailed in this paragraph. Unfortunately, one is a short-term, transient condition (busy) that the client may reasonably expect to disappear momentarily. The other, a server being administratively off-line, is a condition of indeterminate duration, and the client cannot infer how long this condition will last. Current client support for 503 is minimal. As an example, Netscape's response to a 503 error is to begin repeatedly resending the client's request until it is satisfied or the client times out. If the server is moderately busy, the client may be quickly serviced without impacting performance from the user's or server's perspective. If the server is heavily overloaded, a rapid, repeated resubmission of the request will only make the problem worse. In the case that the server is administratively off line, and 503 is being returned to indicate that the server is alive but unable to process connections, client behavior as above is clearly unacceptable. Obviously, some guidance in implementing client responses to this error code is needed. Optimally, information should be returned to a client to help determine an appropriate client response to a 503 error. 
As a solution, I'd like to propose an additional response-header for the 503 error response that specifies a time at which the client may expect the server to be able to handle requests again. This time should be relative to the Date: header sent by the client. I propose that this time be specified as a delta from this date in terms of hours, minutes, and seconds until availability. The client should not attempt to resend its request before this delta period of time has elapsed. For the case of a busy server, this could be a delta of a few seconds from the present (or a delta value calculated on the load of the server, depth of the request queue, etc.) For an "off-line" server, this could be a delta supplied by administration (i.e. the server will be back up in 30 minutes) or an arbitrarily longer value (10 seconds, a few minutes) that would prevent rapid client retries. A candidate syntax for this response header field returned by the server is: Retry-After = "Retry-After" ":" *LWSP-char 3DIGIT ":" 2DIGIT ":" 2DIGIT The 3 numeric fields represent the number of hours, minutes, and seconds to wait before attempting to contact the server again after the server has reported a 503 error. The server should also return its own Date: field as part of the response in all cases. With regard to comment number 2, the encoding of object-body parts, there is a non-trivial ambiguity in RFC 1630 regarding the encoding of spaces as "+", and where this is allowed. For WWW clients that encode object-bodies using the URL-encoding scheme, behavior is inconsistent. Some clients encode specials in the object-body text using %xx hex encodings exclusively. Others use %xx encodings for all specials except space, and encode spaces as "+". According to 1630, "+" may be used as a shorthand for space in the search portion of a URI. 
Unfortunately, the BNF for URIs is ambiguous in that the definition of the non-terminal "xpalphas" includes separating "xalphas" with "+", implying that spaces can be encoded as "+" anywhere. In my opinion, the object-body of a HTTP request is not the search portion of a URI. Therefore, spaces should only be encoded using %xx encodings and not "+" encodings. This ambiguity has never been resolved and a definitive statement regarding appropriate encoding of object-bodies using URL-encoding in this draft of the HTTP standard would be helpful. It may be a moot point, since many clients encode POST arguments using both techniques, and many gateway apps parse pluses as space. Nonetheless, an appropriate "ruling" should be made in the HTTP standard. Thanks for your consideration, Chuck Shotton p.s. I am not subscribed to www-talk@info.cern.ch, so I'd appreciate being CC'ed on any responses from that list. ----------------------------------------------------------------------- Chuck Shotton cshotton@oac.hsc.uth.tmc.edu "I am NOT here." |
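The two readings described here have persisted: modern URL libraries ship both decoders side by side. A quick illustration (using Python's urllib, obviously an anachronism relative to this thread, purely to show the two interpretations):

```python
from urllib.parse import unquote, unquote_plus

# A form value as it might arrive in a POST object-body:
encoded = "John+Doe%20Smith"

# Reading 1: %xx-only decoding; "+" is a literal plus sign.
strict = unquote(encoded)        # -> "John+Doe Smith"

# Reading 2: search-part convention; "+" also decodes to a space.
loose = unquote_plus(encoded)    # -> "John Doe Smith"
```

A gateway cannot tell from the bytes alone which convention the client used, which is exactly the ambiguity this comment asks the spec to rule on.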
RE: A few semantic points for HTTP/1.0 draft |
Current client support for 503 is minimal. As an example, Netscape's response to a 503 error is to begin repeatedly resending the client's request until it is satisfied or the client times out. If the server is moderately busy, the client may be quickly serviced without impacting performance from the user's or server's perspective. If the server is heavily overloaded, a rapid, repeated resubmission of the request will only make the problem worse. This is unacceptable; if a server is busy then that information should be returned to the user. Sending 503 should in the main be done only at the point where the server is about to collapse. One solution to this is to use a threaded server and implement a lockout: once a client receives 503 the site is blacklisted for a period. Retry attempts in this interval would increase the blacklist period and/or return notification of a protocol violation. I like the idea of a retry-in-x-seconds field; it would allow a server to schedule a slot where the request was guaranteed to be handled. I would like to have some control over the interpretation though. How about:- Deadtime: time[; reason=tag][; retry=policy] Where time is the deadtime in seconds, a time of 0 being an indeterminate length of time. reason tags could include: busy maintenance retry policies would include: blacklist retry attempts are now blacklisted ignore retry attempts are simply ignored lockout retry attempts are dealt with by router lockout. The final one is something added to one implementation to deal with denial of service attacks. If more than a certain number of accesses are made the system goes into `lockout mode'. This involves adding a filter record to the router to redirect packets to another server (amongst other things). We should also have a facility to allow an alternative site to be stated. This would permit rather more graceful failover and provide some load balancing. Phill H-B |
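Taking the Retry-After syntax proposed earlier in this thread at face value (a hypothetical field at this point, not part of any spec), the client-side parsing is a few lines. For the record, the Retry-After field that HTTP/1.1 eventually standardized takes delta-seconds or an absolute HTTP-date instead of this HHH:MM:SS form.

```python
import re

def parse_retry_after(value):
    """Parse the proposed 'Retry-After: HHH:MM:SS' delta into seconds.
    The 3DIGIT:2DIGIT:2DIGIT shape follows the candidate syntax from
    this thread; it is illustrative, not a standardized field."""
    m = re.fullmatch(r"(\d{3}):(\d{2}):(\d{2})", value.strip())
    if m is None:
        raise ValueError("malformed Retry-After delta: %r" % value)
    hours, minutes, seconds = map(int, m.groups())
    return hours * 3600 + minutes * 60 + seconds

# A client would sleep this long before resending its request:
delay = parse_retry_after("000:30:00")   # server back in 30 minutes
```

A client implementing either proposal would compare this delta against the Date field of the 503 response rather than its own clock, sidestepping clock skew between client and server.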
Comments on HTTP draft [of 23 Nov 1994] |
I was very pleased to see the new HTTP draft; it's a major improvement on previous versions! Here are some comments on the new draft which I hope will be useful. I am writing these from the perspective of an implementer of software for a reasonably general-purpose HTTP server, so I am especially looking for a definition of the HTTP which a) allows a server to be implemented using the definition (and referenced documents) and specifically without reference to specific clients or other implementations b) makes it absolutely clear what is required for a server to be stated as conforming to the definition. Most of these comments therefore seek clarification in these two areas. I'm sorry I won't be able to attend the IETF meeting next week--a long-standing commitment has me on the wrong coast of the USA. Mike Cowlishaw IBM Fellow, IBM UK Laboratories, Winchester, UK - - - - - - - - - 2.1 The '#' rule implies that no whitespace is allowed after (or before) the commas in a list. Is this correct? For example, in the example in 7.1 there is a space after each comma (which certainly aids readability). 2.2 linear-white-space rule: I didn't understand the comment "CRLF => Folding". I think this rule allows whitespace (but non-null) lines in headers etc.? 3.1 Header fields: (a) "However ... use of comments is discouraged". This seems rather outside the scope of a definition such as this; at most it should be an informational note, and explain why the note is there (historical or client incompatibility, performance, reduced net traffic?). (b) [nit] the second open quote in the comment rule should be a close quote. (c) The ctext rule seems to be missing some characters (there's an open quote, followed by an open single quote, but neither is closed). Also, shouldn't LF be excluded too? 3.2 Object body: (a) This is my 'biggest' question -- I don't understand from the second paragraph how to determine when to stop reading the data on a request. 
If the headers are only 'similar' to those defined by MIME, then the MIME definition may or may not be relevant. Moreover, a "heuristic function of the Content-Type and Content-Encoding" would appear to be unimplementable, as new Types (especially application/xxx) seem to spring up daily. It would seem to be appropriate that the HTTP protocol specify that Content-Length, in bytes, be Required--at least for Requests. (b) Does the server have to read the headers (and data, if any) on a request? For example, if the customizing filter/script doesn't need the information in order to determine the response, is the server permitted to leave the data unread, or could this embarrass some client(s) or TCP/IP stack(s)? 4.1 Date/Time stamps: (a) I'm a little disturbed that time is only permitted to be specified to the second, given that most server hardware will be able to handle many more than one request per second, and network transit time is often sub-second. If this is a limitation of RFC 1123, perhaps an additional HTTP header for sub-second time information should be specified. (b) Since this is a new standard/document, surely it should specify a single Date/Time format, and only mention the others for compatibility/historical information? (c) [aside] I wish, oh wish, that Longitude/Latitude information were a recommended header. 4.2 Multipart types: From the text, I infer that a server/script is not Required to respond with a multipart type when a client has indicated that it can accept them. It might be worth an explicit statement to that effect. 4.3.1 Date Header Field: (a) [nit] should refer to RFC 1123 rather than 822, or both? (b) It's not clear what time the header should refer to. For a response, is it the time when the request was accepted, or when the response line was generated, or when the first line of the header was transmitted, or when the 'Date:' header line was generated? (c) [nit] 'of' is missing between 'creation date' and 'the enclosed'. 
4.3.3 Message-ID: [suggestion] Although the example shows a unique ID, it might be nice to encourage via the example a form of ID that includes the port number (if not 80), and even follows URL format. Perhaps: Message-ID: <http://info.cern.ch:8080/9411251630.4256> 4.3.4 MIME version: It is not at all clear why this is useful and strongly recommended if it is not an indication of full compliance with MIME. One might argue that it should *not* be included unless MIME-compliance of the remainder of the header and data is guaranteed? 5. Request: The 0.9 requirement here (Simple Request must have Simple Response) is somewhat onerous on a server. Is it possible to relax or remove this requirement yet, or are there still 0.9-only clients in use? I've noticed that at least some Simple Responses will not go through some proxies transparently. 5.2 Method and 5.2.2 Head: I'm surprised that the HEAD method *must* be supported, as it is ill-defined. 5.2.2 simply says that there must be no Object-Body; it seems that the header may or may not be related to the header that would be sent if the method were GET, and in particular, HEAD may as well just return an empty (null) header. Further, in many cases the cost of determining, building and sending the header is going to be the major part of many transactions, so should clients or proxies be encouraged to use this Method? 5.2.1 Get: [nit] The first paragraph should have the suffix: "(unless that is the produced data)." 5.2.3 Post: (a) Some clarification seems to be needed here; there's an assumption that Form data is used by some gateway program rather than the server/script directly, but in the latter case the specification (the paragraph starting "If the URI does not refer to a gateway...") implies that the Form data must be retrievable at some later date. (b) Can the URI returned via a URI-header be a partial URI as described in 5.4? Or does it have to be a full URI? [I infer the latter.] 
5.2.3.1 [nit] Change 'references' to 'refers to'? 5.3 HTTP Version: Given that this definition is more rigorous than earlier documents, and hence must be more constraining, it would seem to be necessary to change the version number (perhaps to 1.1) to reflect the stricter conditions for compliance. If the version number is not changed, then the date of the relevant HTTP 1.0 document would have to be specified at every reference. 5.4 URI Note 1: What's 'default escaping'? What characters may be 'considered unsafe'? The "should" (probably meant to be "shall"?) implies that a server must comply with these conditions, but they do not seem to be well defined. 5.5.2 If-Modified-Since: This section implies that servers *must* implement this feature. However, the last-modified-date might be unavailable, unreliable, or not applicable for some URIs. In these cases (or indeed in any case), is the server permitted to return the object, despite the presence of the I-M-S header? 5.5.4 Authorization: UU-encoding: if defined by RFC 1421, then this should appear in Section 13 (References), and Appendix 15 should go away (as it does not appear to apply, in any case)? 6.3 Status Codes and Reason Phrases: The rule for Reason-Phrase does not allow spaces, but several of the phrases specified later do include a space. 6.3.1 201 Created: Also possible following PUT, presumably. 6.3.1 202 Accepted: "delay header line" is what? 6.3.1 204 No Response: Allowed for POST, too? (For a Form.) 6.3.2 301 & 302 Moved: Allowed for POST, and others, too? 6.3.3 401 Unauthorized: [nit] change first 'a' to 'an'. 6.3.3 404 Method Not Allowed: is this only for the defined methods, or should this also be used for a misspelled or unrecognized method name? 6.3.4 500 Server Error: 502 (twice) and 504 call this "500 Internal Error". All should say "Server Error"? 6.3.4 503 Service Unavailable: Does this imply that a server is not permitted to refuse to accept a connection? 
[Presumably not, though it could be read that way.]

6.4 paragraph 1: [nit] change 'a Object-Body' to 'an ...'

6.4.2 Version: since this refers to an object and not the server, shouldn't it be in section 7?

7.2 Content-Length Note 2: [nit] 'wherever' has only three 'e's.

7.4 Content Encoding: (a) [nit] this heading (and 7.5 too) needs a hyphen. (b) [nit] change 'method' to 'mechanism' in the first paragraph to avoid confusion with use of the term elsewhere?

7.5 C-T-E: The rule omits the token and colon before the type.

7.7 Expires: Are there any constraints on the Date and Time specified? Specifically, may they refer to a time earlier than or the same as that in the Date: header?

7.9 URI First example: [nit] Change close quote to open quote.

7.9 URI Second example: [nit] Semicolon missing.

7.12 Title: "isomorphic" here implies that the Title follows SGML syntax, and hence depends on the HTML DTD (including the Declaration), and is allowed any valid entities and shortrefs within, etc. This probably isn't intended (I hope!).

7.13 Link: [nit] both examples have unmatched quotes. '//' missing after 'mailto:'?

8 Neg. algorithm para 4: [nit] change 'between 0 and 1' to 'in the range 0 through 1'? (0 and 1 are allowed values.)

8 Neg. algorithm 'bs' definition: [nit] change 'send' to 'sent'

9 Authentication, paragraph 1: Here is perhaps the strongest statement in the document about conformance. Yet, surely, if the server would never return "401 Unauthorized" (because all its data are public) there is no need for it to implement the Basic Access Authentication Scheme?

9 Authentication, fourth bullet: [nit] change second 'a' to 'an'.

11.3 Abuse, para 1: While I strongly support the intent behind the last sentence here, this document is a definition of the HTTP protocol, not people using it. It cannot impose requirements on, or define, people. (Does my server become non-conforming because someone using my server abused his or her collected data?)
Also, am I (as the writer, and hence provider, of a server) responsible for the actions of other people *using* my server to provide data? Thin ice... Perhaps it should read something like: "People using the HTTP protocol to provide data are responsible for...".

11.3 Abuse, final para: This reads as though the user must be prompted with the From field to be sent before sending every request. Probably not the intent.

16 Server Tolerance, para 2: Time to make this a Requirement?

17 Bad servers: The first paragraph here sounds like a Compliance statement. As such, it should be in the body of the document, not an appendix? The document certainly needs a Compliance section.

17.1 Back compatibility, para 2: This doesn't seem to reflect current practice (inline <img href="xxx"> requests for .GIF files do seem to appear as images, not as HTML documents).

mfc/29 Nov 1994 |
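The 5.5.2 question above — whether a server may ignore If-Modified-Since when the last-modified-date is unknown — can be sketched as follows. This is modern Python for illustration only; the function name is invented, and the fallback-to-200 behavior is one possible answer, not something the draft states:

```python
from email.utils import parsedate_to_datetime

def respond(ims_header, last_modified, body):
    # Hypothetical server-side handling of If-Modified-Since (5.5.2).
    # When no reliable modification date is known, fall back to a full
    # 200 response; the draft does not clearly say whether this is allowed.
    if ims_header and last_modified is not None:
        since = parsedate_to_datetime(ims_header)
        if last_modified <= since:
            return "304 Not Modified", ""
    return "200 OK", body
```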
Comments on the HTTP/1.0 draft. |
(By the way, I have not yet succeeded in getting myself subscribed to this list, so please cc replies to me for the moment.)

Generally, I think the draft of Nov 28 is very good.

Rather egregiously missing is a reference to transmitting network objects in canonical form. Section 3.2 should mention this; a reference to the canonical encoding model in Appendix G of RFC 1521 (specifically step 2) probably should suffice. The only place this is hinted at is in the appendices on tolerance of broken implementations, but the spec should explicitly say what the proper behavior is, just in case any servers ever actually do that. :-)

As near as I can tell, the spec constrains all header values to be US-ASCII, meaning nothing that is not US-ASCII may be contained in them. We might consider permitting non-US-ASCII information in at least some headers, probably using RFC 1522's model.

In section 7.5, I don't understand the BNF for the CTE header. CTEs don't have subtypes or parameters.

Chuck Shotton said:

> As a solution, I'd like to propose an additional response-header for
> the 503 error response that specifies a time at which the client may
> expect the server to be able to handle requests again. This time
> should be relative to the Date: header sent by the client. I propose
> that this time be specified as a delta from this date in terms of
> hours, minutes, and seconds until availability. The client should not
> attempt to resend its request before this delta period of time has
> elapsed.

Regarding busy server errors, a "Retry-After:" field might be reasonable, but I would prefer to just make it an HTTP-date rather than inventing something new for clients to have to parse. If we were going to use relative dates, there are plenty of other places (like Expires:) where they make as much sense. A pointer to an alternative address also seems like a sensible way to handle timeouts.
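If Retry-After: carried an HTTP-date as suggested, client-side handling stays trivial. A sketch in modern Python, for illustration only — the header is still just a proposal here, and the helper name is invented:

```python
import time
from email.utils import parsedate_to_datetime

def seconds_until_retry(retry_after_value, now=None):
    # Parse an HTTP-date (e.g. "Tue, 29 Nov 1994 08:12:31 GMT") and
    # return how long the client should wait before resending.
    dt = parsedate_to_datetime(retry_after_value)
    now = time.time() if now is None else now
    return max(0.0, dt.timestamp() - now)
```

The point of using an HTTP-date is that clients already need a date parser for Date:, Expires:, and If-Modified-Since:, so no new syntax is introduced.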
> With regard to comment number 2, the encoding of object-body parts,
> there is a non-trivial ambiguity in RFC 1630 regarding the encoding of
> spaces as "+", and where this is allowed. For WWW clients that encode
> object-bodies using the URL-encoding scheme, behavior is inconsistent.
> Some clients encode specials in the object-body text using %xx hex
> encodings exclusively. Others use %xx encodings for all specials
> except space, and encode spaces as "+".

I disagree strongly with this interpretation. A + in search terms represents a keyword separator, and has nothing to do with a space, which is (of course) represented as %20. The fact that some WWW clients choose to have a space be the device by which the user communicates keyword separations to the client is irrelevant; it could just as well be a tab, or a comma, or clicking in a different box. (The fact that some WWW clients don't allow any way for a keyword to contain a space reflects a lack of flexibility.)

--
Marc VanHeyningen  <URL:http://www.cs.indiana.edu/hyplan/mvanheyn.html> |
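The two inconsistent client behaviors described above can be shown side by side with Python's urllib helpers — an anachronistic but convenient illustration of the ambiguity:

```python
from urllib.parse import quote, quote_plus

# Style 1: %xx escapes for everything, space included.
assert quote("a b+c", safe="") == "a%20b%2Bc"

# Style 2: space becomes "+", so a literal "+" must itself be escaped,
# and a decoder cannot tell the two styles apart without being told.
assert quote_plus("a b+c") == "a+b%2Bc"
```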
Re: Comments on the HTTP/1.0 draft. |
>> With regard to comment number 2, the encoding of object-body parts,
>> there is a non-trivial ambiguity in RFC 1630 regarding the encoding
>> of spaces as "+", and where this is allowed. For WWW clients that
>> encode object-bodies using the URL-encoding scheme, behavior is
>> inconsistent. Some clients encode specials in the object-body text
>> using %xx hex encodings exclusively. Others use %xx encodings for all
>> specials except space, and encode spaces as "+".
>
> I disagree strongly with this interpretation. A + in search terms
> represents a keyword separator, and has nothing to do with a space,
> which is (of course) represented as %20. The fact that some WWW
> clients choose to have a space be the device by which the user
> communicates keyword separations to the client is irrelevant; it could
> just as well be a tab, or a comma, or clicking in a different box.
> (The fact that some WWW clients don't allow any way for a keyword to
> contain a space reflects a lack of flexibility.)

Actually, we agree. "+" and space are NOT equivalent. The problem is that Mosaic and its derivative works (including NetScape, derived from the programmers rather than the source) all encode spaces as + in object-body parts. The "+" token is very clearly intended to be a search term separator, as specified in the URI RFC. Just because "+" is the representation of spaces from the original Mosaic's data entry dialog for searches is coincidence. As you say, Mosaic could have prompted repeatedly for single search terms, concatenating them with "+".

However, we are talking about slightly different subjects. I am specifically requesting a clarification on what it means to have object-body content that uses "URL-Encoding", and whether or not the usage of "+" as an encoding for spaces is acceptable in an object-body part. I have always felt that it is incorrect to use "+" for ANYTHING but keyword separators in the search term portion of a URL.
"+" in an object-body that is URL-encoded should be represented as %2B and spaces as %20. This would avoid any confusion with CGIs that interpret + as space, though it would do little to keep clients from emitting them in the first place. In the grand scheme of things, this is a minor issue. But clarifying it can make life a little easier for CGI authors and client implementors. --_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_\_-_-_-_-_-_-_-_-_-_-_-_-_-_-_- Chuck Shotton \ Assistant Director, Academic Computing \ "Shut up and eat your U. of Texas Health Science Center Houston \ vegetables!!!" cshotton@oac.hsc.uth.tmc.edu (713) 794-5650 \ _-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-\-_-_-_-_-_-_-_-_-_-_-_-_- |
Re: Comments on the HTTP/1.0 draft. |
1) + is not part of any "URL-encoding".

2) I think this is an HTML and not an HTTP issue.

IETF has 3 working groups working on different but related standards: URI (URL, URN, etc.), HTML, and (presumably, after the BOF and the approval of the charter), HTTP. We'll have to be careful to separate out issues, especially ones that seem to cross working group boundaries. In particular, how web clients should encode queries in response to HTML documents in the URL they send to their HTTP server seems to cross the boundaries of all of the subcommittees, but in this case, the transformation is something that an HTML interpreter makes independently of whether the base document is HTTP or FTP or MAILTO. |