Top Browser Questions

Asked By: Sander Versluys ( 965)

What is the maximum length of a URL in different browsers?

Does it differ between browsers?

Does the HTTP protocol dictate it?

Answered By: Paul Dixon ( 1134)

Short answer - de facto limit of 2000 characters

If you keep URLs under 2000 characters, they'll work in virtually any combination of client and server software.
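
To make the guideline concrete, here is a minimal sketch (my illustration, not part of the original answer; the example URL and query string are made up) of checking a generated URL against the 2000-character budget:

var MAX_URL_LENGTH = 2000; // the de facto safe limit discussed above

function isSafeUrlLength(url) {
    // Count the full string, including the query string and fragment.
    return url.length <= MAX_URL_LENGTH;
}

// Example: warn (or fall back to a POST) when a GET URL would be too long.
var longQuery = new Array(3000).join('x'); // stand-in for a huge user query
var url = 'http://example.com/search?q=' + encodeURIComponent(longQuery);
if (!isSafeUrlLength(url)) {
    console.warn('URL is ' + url.length + ' chars; consider POSTing instead');
}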

Longer answer - first, the standards...

RFC 2616 (Hypertext Transfer Protocol, HTTP/1.1), section 3.2.1, says:

The HTTP protocol does not place any a priori limit on the length of a URI. Servers MUST be able to handle the URI of any resource they serve, and SHOULD be able to handle URIs of unbounded length if they provide GET-based forms that could generate such URIs. A server SHOULD return 414 (Request-URI Too Long) status if a URI is longer than the server can handle (see section 10.4.15).

Note: Servers ought to be cautious about depending on URI lengths above 255 bytes, because some older client or proxy implementations might not properly support these lengths.

...and the reality

That's what the standards say. For the reality, see this research over at boutell.com on what individual browser and server implementations will support. It's worth a read, but the executive summary is:

Extremely long URLs are usually a mistake. URLs over 2,000 characters will not work in the most popular web browser. Don't use them if you intend your site to work for the majority of Internet users.

Also, be aware that the sitemaps protocol, which allows a site to inform search engines about available pages, has a limit of 2048 characters in a URL. If you intend to use sitemaps, a limit has been decided for you! (see Calin-Andrei Burloiu's answer on the original question)

There's also some research from 2010 into the maximum URL length that search engines will crawl and index. They found the limit was 2047 chars, which appears aligned with the sitemaps protocol limit. However, they also found the Google SERP tool wouldn't cope with URLs longer than 1855 chars.

Footnote

This is a popular question, and as the original research is nearly 6 years old I'll try to keep it up to date: As of Jan 2013, the advice still stands, as IE8's maximum URL length is 2083 chars, and it seems IE9 has a similar limit.

Like everyone else, I need to test my code on Internet Explorer 6 and Internet Explorer 7. Internet Explorer 8 has some great tools for developers, which I'd like to use. I'd also like to start testing my code with Internet Explorer 8, as it will soon be released.

The question is: how do I run Internet Explorer 6, Internet Explorer 7, and Internet Explorer 8 on the same machine? So far, for Internet Explorer 6 and Internet Explorer 7, I've been using Multiple IE. But people have reported issues with Internet Explorer 6 after installing Internet Explorer 8 (see the comments on the page linked in the previous sentence); those errors are related to focus in form fields. Running Internet Explorer 7 wouldn't matter so much, as Internet Explorer 8 can use the Internet Explorer 7 rendering engine, but we still need Internet Explorer 6.

How to run Internet Explorer 6, Internet Explorer 7, and Internet Explorer 8 on the same machine?

Answered By: Ian Robinson ( 175)

I wouldn't do it. Use virtual PCs instead. It might take a little setup, but you'll thank yourself in the long run. In my experience, you can't really get the browsers cleanly installed side by side, and unless they are standalone installs you can't verify that the rendering is 100% true to each browser.

Update: Looks like one of the better ways to accomplish this (if running Windows 7) is using Windows XP mode to set up multiple virtual machines: Testing Multiple Versions of IE on one PC at the IEBlog.

Asked By: John Millikin ( 257)

I've seen a couple of questions around here like How to debug RESTful services, which mentions:

Unfortunately that same browser won't allow me to test HTTP PUT, DELETE, and to a certain degree even HTTP POST.

I've also heard the claim that browsers support only GET and POST from some other sources.

However, a few quick tests in Firefox show that sending PUT and DELETE requests works as expected -- the XMLHttpRequest completes successfully, and the request shows up in the server logs with the right method. Is there some aspect to this I'm missing, such as cross-browser compatibility or non-obvious limitations?

Answered By: Matthew Murdoch ( 170)

HTML forms (up to HTML version 4 and XHTML 1) only support GET and POST as HTTP request methods. A workaround for this is to tunnel other methods through POST by using a hidden form field which is read by the server and the request dispatched accordingly.
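
As a concrete illustration of that workaround (a minimal sketch of my own, not from the original answer; the "_method" field name is just a common convention, e.g. Rails reads it, so use whatever your server dispatches on):

// Tunnel a DELETE through a form POST, since forms only support GET/POST.
function submitTunnelledDelete(actionUrl) {
    var form = document.createElement('form');
    form.method = 'POST';
    form.action = actionUrl;

    var methodField = document.createElement('input');
    methodField.type = 'hidden';
    methodField.name = '_method';   // the server reads this hidden field...
    methodField.value = 'DELETE';   // ...and dispatches as if it were a DELETE
    form.appendChild(methodField);

    document.body.appendChild(form);
    form.submit();
}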

However, for the vast majority of RESTful web services GET, POST, PUT and DELETE should be sufficient. All these methods are supported by the implementations of XMLHttpRequest in all the major web browsers (IE, Firefox, Opera).
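
And a quick way to verify the XMLHttpRequest support for yourself (a sketch; '/api/items/42' is a hypothetical endpoint):

var xhr = new XMLHttpRequest();
xhr.open('PUT', '/api/items/42', true); // PUT works just like GET or POST
xhr.setRequestHeader('Content-Type', 'application/json');
xhr.onreadystatechange = function () {
    if (xhr.readyState === 4) {
        // The method shows up unchanged in the server logs.
        console.log('PUT completed with status ' + xhr.status);
    }
};
xhr.send(JSON.stringify({ name: 'example' }));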

How can I detect if a user is viewing my web site from a mobile web browser so that I can then auto detect and display the appropriate version of my web site?

Answered By: Vinko Vrsalovic ( 64)

Yes, reading the User-Agent header will do the trick.

There are some lists out there of known mobile user agents, so you don't need to start from scratch. What I did, when I had to do this, was build a database of known user agents, storing unknown agents as they were detected for later review and then manually figuring out what they were. That last step may be overkill in some cases.
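
For a sense of what the matching itself looks like, here is a minimal server-side sketch (assuming a Node.js-style request object; the pattern list is illustrative, not a complete database):

// Very small allowlist of known mobile user-agent markers.
var MOBILE_UA = /iPhone|iPod|Android|BlackBerry|Opera Mini|IEMobile|Windows Phone/i;

function isMobileRequest(req) {
    var ua = req.headers['user-agent'] || '';
    return MOBILE_UA.test(ua); // unknown agents fall through to the desktop site
}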

If you want to do it at the Apache level, you can create a script which periodically generates a set of rewrite rules checking the user agent (or generate them just once and forget about new user agents, or regenerate once a month, whatever suits your case), like:

RewriteEngine On

# Skip requests already pointing at the mobile tree, to avoid a rewrite loop
RewriteCond %{REQUEST_URI} !^/mobile/
RewriteCond %{HTTP_USER_AGENT} (OneMobileUserAgent|AnotherMobileUserAgent|...)
RewriteRule (.*) mobile/$1

which would redirect, for example, requests for http://domain/index.html to http://domain/mobile/index.html.

If you don't like the approach of having a script recreate an htaccess file periodically, you can write a module which checks the User-Agent (I didn't find one ready-made, but I did find this particularly appropriate example) and fetch user-agent lists from some sites to keep it up to date. Then you can complicate the approach as much as you want, but I think in your case the previous approach would be fine.

Asked By: Kevin Dente ( 207)

Is there a standard way for a Web Server to determine what time zone offset a user is in?

From an HTTP header, or perhaps part of the user-agent description?

Answered By: JD Isaacks ( 119)

-new Date().getTimezoneOffset()/60;

getTimezoneOffset() will subtract your time from GMT and return the number of minutes. So if you live in GMT-8, it will return 480. To put this into hours, divide by 60. Also notice that the sign is the opposite of what you may expect: it calculates GMT's offset from your time zone, not your time zone's offset from GMT. To fix this, simply multiply by -1.
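
Putting that together (the '/log-tz' endpoint is an assumption of mine, just to show one way of getting the value back to the server):

// For a user in GMT-8, getTimezoneOffset() returns 480 minutes,
// so flip the sign and divide by 60 to get the familiar -8.
var offsetHours = -new Date().getTimezoneOffset() / 60;

// One option: report it to the server with a lightweight request.
var beacon = new Image();
beacon.src = '/log-tz?offset=' + encodeURIComponent(offsetHours);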

Asked By: tgandrews ( 192)

CSS Selectors are matched by browser engines from right to left. So they first find the children and then check their parents to see if they match the rest of the parts of the rule.

  1. Why is this?
  2. Is it just because the spec says so?
  3. Does it affect the eventual layout if it was evaluated from left to right?

To me, the simplest way to do it would be to use the selector that matches the fewest elements. So IDs first (as they should return only one element). Then maybe classes, or an element type with the fewest nodes; for example, there may be only one span on the page, so any rule that references a span could go directly to that node.

Here are some links backing up my claims

  1. http://code.google.com/speed/page-speed/docs/rendering.html
  2. https://developer.mozilla.org/en/Writing_Efficient_CSS

It sounds like it is done this way to avoid having to look at all the children of a parent (which could be many), rather than all the ancestors of a child (each element has exactly one parent). Even if the DOM is deep, right-to-left matching would only look at one node per level rather than many. Is it easier/faster to evaluate CSS selectors LTR or RTL?

Answered By: Boris Zbarsky ( 381)

Keep in mind that when a browser is doing selector matching it has one element (the one it's trying to determine style for) and all your rules and their selectors and it needs to find which rules match the element. This is different from the usual jQuery thing, say, where you only have one selector and you need to find all the elements that match that selector.

If you only had one selector and only one element to compare against that selector, then left-to-right makes more sense in some cases. But that's decidedly not the browser's situation. The browser is trying to render Gmail or whatever and has the one <span> it's trying to style and the 10,000+ rules Gmail puts in its stylesheet (I'm not making that number up).

In particular, in the situation the browser is in, most of the selectors it's considering don't match the element in question. So the problem becomes one of deciding that a selector doesn't match as fast as possible; if that requires a bit of extra work in the cases that do match, you still win thanks to all the work you save in the cases that don't match.

If you start by just matching the rightmost part of the selector against your element, then chances are it won't match and you're done. If it does match, you have to do more work, but only proportional to your tree depth, which is not that big in most cases.

On the other hand, if you start by matching the leftmost part of the selector... what do you match it against? You have to start walking the DOM, looking for nodes that might match it. Just discovering that there's nothing matching that leftmost part might take a while.

So browsers match from the right; it gives an obvious starting point and lets you get rid of most of the candidate selectors very quickly. You can see some data at http://groups.google.com/group/mozilla.dev.tech.layout/browse_thread/thread/b185e455a0b3562a/7db34de545c17665 (though the notation is confusing), but the upshot is that for Gmail in particular two years ago, for 70% of the (rule, element) pairs you could decide that the rule does not match after just examining the tag/class/id parts of the rightmost selector for the rule. The corresponding number for Mozilla's pageload performance test suite was 72%. So it's really worth trying to get rid of those 2/3 of all rules as fast as you can and then only worry about matching the remaining 1/3.

Note also that there are other optimizations browsers already do to avoid even trying to match rules that definitely won't match. For example, if the rightmost selector has an id and that id doesn't match the element's id, then in Gecko there will be no attempt to match that selector against that element at all: the set of "selectors with IDs" that are attempted comes from a hashtable lookup on the element's ID. So the 70% figure above is measured over rules that already have a pretty good chance of matching, and they still fail after considering just the tag/class/id of the rightmost selector.
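
To make the right-to-left strategy concrete, here is an illustrative sketch (my own simplification, not any engine's actual code) of matching a descendant-combinator selector like "div.sidebar p span" against a candidate element:

// compounds are ordered right-to-left, e.g. ['span', 'p', 'div.sidebar'].
function matchesRightToLeft(element, compounds) {
    // Check the rightmost compound first: in the common case this fails
    // immediately and no DOM walking happens at all.
    if (!matchesCompound(element, compounds[0])) {
        return false;
    }
    // Walk up the ancestor chain (one node per level) looking for the
    // remaining compounds, nearest-first; this is valid for the plain
    // descendant combinator.
    var node = element.parentElement;
    var i = 1;
    while (node && i < compounds.length) {
        if (matchesCompound(node, compounds[i])) {
            i++;
        }
        node = node.parentElement;
    }
    return i === compounds.length;
}

// Trivial tag/class matcher, enough for the illustration.
function matchesCompound(el, compound) {
    var parts = compound.split('.');
    var tagOk = !parts[0] || el.tagName.toLowerCase() === parts[0];
    var classOk = parts.slice(1).every(function (cls) {
        return el.classList.contains(cls);
    });
    return tagOk && classOk;
}

The key point is visible in the first branch: most (rule, element) pairs are rejected before any tree walking happens.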

Asked By: Morgan Cheng ( 165)

Is there a standard for what actions F5 and Ctrl+F5 trigger in web browsers?

I once experimented with IE6 and Firefox 2.x. The F5 refresh would trigger an HTTP request sent to the server with an If-Modified-Since header, while Ctrl+F5 would not have such a header. My understanding is that F5 will try to utilize cached content as much as possible, while Ctrl+F5 is intended to abandon all cached content and retrieve everything from the servers again.

But today, I noticed that in some of the latest browsers (Chrome, IE8) it no longer works this way. Both F5 and Ctrl+F5 send the If-Modified-Since header.

So how is this supposed to work, or (if there is no standard) how do the major browsers differ in how they implement these refresh features?

Answered By: some ( 335)

It is up to the browser, but they behave in similar ways.

I have tested FF, IE7, Opera and Chrome.

F5 usually updates the page only if it is modified. The browser usually tries to use all types of cache as much as possible and adds an "If-modified-since" header to the request. Opera differs by sending a "Cache-Control: no-cache".

CTRL-F5 is used to force an update, disregarding any cache. IE7 adds a "Cache-Control: no-cache", as does FF, which also adds "Pragma: no-cache". Chrome does a normal "If-modified-since", and Opera ignores the key.

If I remember correctly, it was Netscape which was the first browser to add support for cache-control by adding "Pragma: No-cache" when you pressed CTRL-F5.

Edit: Updated table

The table below is updated with information on what will happen when the browser's refresh-button is clicked (after a request by Joel Coehoorn), and the "max-age=0" Cache-control-header.

Updated table, 27 September 2010

+------------+-----------------------------------------------+
|  UPDATED   |                Firefox 3.x                    |
|27 SEP 2010 |  +--------------------------------------------+
|            |  |             MSIE 8, 7                      |
| Version 3  |  |  +-----------------------------------------+
|            |  |  |          Chrome 6.0                     |
|            |  |  |  +--------------------------------------+
|            |  |  |  |       Chrome 1.0                     |
|            |  |  |  |  +-----------------------------------+
|            |  |  |  |  |    Opera 10, 9                    |
|            |  |  |  |  |  +--------------------------------+
|            |  |  |  |  |  |                                |
+------------+--+--+--+--+--+--------------------------------+
|          F5|IM|I |IM|IM|C |                                |
|    SHIFT-F5|- |- |CP|IM|- | Legend:                        |
|     CTRL-F5|CP|C |CP|IM|- | I = "If-Modified-Since"        |
|      ALT-F5|- |- |- |- |*2| P = "Pragma: No-cache"         |
|    ALTGR-F5|- |I |- |- |- | C = "Cache-Control: no-cache"  |
+------------+--+--+--+--+--+ M = "Cache-Control: max-age=0" |
|      CTRL-R|IM|I |IM|IM|C | - = ignored                    |
|CTRL-SHIFT-R|CP|- |CP|- |- |                                |
+------------+--+--+--+--+--+                                |
|       Click|IM|I |IM|IM|C | With 'click' I refer to a      |
| Shift-Click|CP|I |CP|IM|C | mouse click on the browser's   |
|  Ctrl-Click|*1|C |CP|IM|C | refresh-icon.                  |
|   Alt-Click|IM|I |IM|IM|C |                                |
| AltGr-Click|IM|I |- |IM|- |                                |
+------------+--+--+--+--+--+--------------------------------+

Versions tested:

  • Firefox 3.1.6 and 3.0.6 (WINXP)
  • MSIE 8.0.6001 and 7.0.5730.11 (WINXP)
  • Chrome 6.0.472.63 and 1.0.151.48 (WINXP)
  • Opera 10.62 and 9.61 (WINXP)

Notes:

  1. Version 3.0.6 sends I and C, but 3.1.6 opens the page in a new tab, making a normal request with only "I".

  2. Version 10.62 does nothing. 9.61 might do C unless it was a typo in my old table.

Note about Chrome 6.0.472: if you do a forced reload (like CTRL-F5), it behaves as if the URL were internally marked to always do a forced reload. The flag is cleared if you go to the address bar and press Enter.
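
To see the other side of these headers, here is a minimal sketch of a server honouring them (Node.js; the resource and its modification date are made up for the example):

var http = require('http');

var lastModified = new Date('2010-09-27T00:00:00Z'); // when our resource last changed

http.createServer(function (req, res) {
    var since = req.headers['if-modified-since'];
    // CTRL-F5 style reloads tell us to ignore caches entirely.
    var noCache = /no-cache/i.test(req.headers['cache-control'] || '') ||
                  /no-cache/i.test(req.headers['pragma'] || '');

    if (!noCache && since && new Date(since) >= lastModified) {
        res.writeHead(304); // Not Modified: the browser re-uses its cached copy
        res.end();
    } else {
        res.writeHead(200, {
            'Last-Modified': lastModified.toUTCString(),
            'Content-Type': 'text/plain'
        });
        res.end('fresh content\n');
    }
}).listen(8080);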

What is the highest integer value that JavaScript's Number type can represent without losing precision? Is this defined by the language? Is there a defined maximum? Is it different in different browsers?

Answered By: Jimmy ( 153)

+/- 9007199254740992

ECMA Section 8.5 - Numbers

Note that all the positive and negative integers whose magnitude is no greater than 2^53 are representable in the Number type (indeed, the integer 0 has two representations, +0 and -0).

Numbers are 64-bit floating point values; the largest exact integral value is 2^53, or 9007199254740992.

Note that the bitwise operators and shift operators operate on 32-bit ints.


Test it out!

var x = 9007199254740992;
var y = -x;
x == x + 1; // true !
y == y - 1; // also true !
// Arithmetic operators work, but bitwise/shifts only operate on int32:
x / 2;      // 4503599627370496
x >> 1;     // 0
x | 1;      // 1

Asked By: Michael Gundlach ( 159)

In Firefox 3, the answer is 6 per domain: as soon as a 7th XmlHttpRequest (on any tab) to the same domain is fired, it is queued until one of the other 6 finishes.

What are the numbers for the other major browsers?

Also, are there ways around these limits without having my users modify their browser settings? For example, are there limits to the number of jsonp requests (which use script tag injection rather than an XmlHttpRequest object)?

Background: My users can make XmlHttpRequests from a web page to the server, asking the server to run ssh commands on remote hosts. If the remote hosts are down, the ssh command takes a few minutes to fail, eventually preventing my users from performing any further commands.

Answered By: Bob ( 50)

One trick you can use to increase the number of concurrent connections is to host your images on a different sub-domain. These will be treated as separate requests; it is each domain, not the page, that is limited to the concurrent maximum.

IE6 and IE7 have a limit of two connections per domain. IE8 allows 6 if you are on broadband, and 2 if you are on dial-up.
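
A sketch of that trick (the img1/img2 host names are hypothetical; both would need to serve the same content). Picking the shard from a hash of the path keeps each asset on a stable host, so the browser cache is not defeated:

// Spread asset URLs across two sub-domains so each host gets its own
// per-domain connection limit.
function shardedUrl(path) {
    var shard = 1 + (hashCode(path) % 2); // stable per path: 1 or 2
    return 'http://img' + shard + '.example.com' + path;
}

function hashCode(s) {
    var h = 0;
    for (var i = 0; i < s.length; i++) {
        h = (h * 31 + s.charCodeAt(i)) >>> 0; // keep it an unsigned 32-bit int
    }
    return h;
}

// Usage: point <img> elements at shardedUrl('/photos/1.jpg'), etc.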