This page gives miscellaneous information about design issues, especially those design choices which make websites simpler to test.
NB: this page offers suggestions which you won’t welcome unless you focus on making pages which meet the needs of a site’s owner and users. If your priority is making cool, flashy sites which flaunt your design prowess, this isn’t for you.
Testing is easier if — from the very start — you design in ways that reduce browser differences, or that make differences less important. Here are a few guidelines:
Keep the design and code simple: strive to make it simply perfect.
Make your sites load fast. If it takes too long to load pages, users will flee your site and go elsewhere.
Minimizing and optimizing images is a key step in making sites fast: only use images to make your sites effective, make image file sizes as tiny as possible with acceptable image quality, and reüse images throughout a site.
Minimizing embedded fonts is also a key step in making sites fast, especially for the first page which users access, since embedded fonts are downloaded with the first page, then cached for higher speed with subsequent pages.
Design to the standards, using a DOCTYPE and meta tag which make browsers honour standards more strictly. And validate your code using an HTML validator and a CSS validator.
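For example, here is one minimal skeleton which triggers standards mode in current browsers (a sketch only: other strict DOCTYPEs, listed later on this page, work equally well, and the X-UA-Compatible meta tag matters only to IE 8 and later):

```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <!-- Ask IE 8+ to use its most standards-compliant engine
       rather than emulating an older version of IE. -->
  <meta http-equiv="X-UA-Compatible" content="IE=edge">
  <title>Example page</title>
</head>
<body>
  ...
</body>
</html>
```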
NB: to learn why you should design to the standards, see Using Web Standards in your Web Pages.
NB: compliance with standards can have extremely subtle, unexpected consequences which can result in pages not working as expected even when there is nothing wrong with the code. For example, there are some browsers which are highly standards compliant which will ignore files which don’t have the proper MIME type, e.g. CSS files which aren’t of type text/css. Something like this can be very hard to troubleshoot, because the natural tendency is to believe that the fault is in the code, when the fault is actually in how the server is configured.
Caution: it’s fairly common to have a page which works as expected in quirks mode, but not in strict mode, even when the code is valid. When this happens with a browser which is highly standards compliant, it’s possible that the problem is due to a browser bug: however, it’s far more likely that the browser is behaving properly in strict mode, but that the browser doesn’t behave as you expect because your understanding of some element of the standards is faulty. When this happens you should use such problems as learning opportunities: your objective should not only be to make the page work in strict mode as expected; you should also learn where you went wrong so that you are less likely to repeat your errors in the future.
There are good resources for learning the standards and designing to them.
Once it was hard to make pages valid, because poor standards support entailed the use of non-standard features. Making invalid pages today is rarely justified. Old software tools such as Microsoft’s FrontPage create badly broken code; modern tools are much better.
A special challenge is legacy sites, like this site, which were created long ago. Such sites often have accreted code needed only by extinct browsers. HTML often follows Transitional standards, instead of Strict. And CSS can be especially bloated and complex, because it had to overcome serious CSS defects in ancient browsers. Even code that was well designed can devolve into chaos. Sites can, therefore, become more costly to maintain as time goes on, especially when not maintained by the original designer(s). At a certain point it may make sense to redo the code using modern practices. It may be hard, however, to get management approval for such a redesign, since management may not understand the issues; it may, therefore, be better to delay the redesign until the need arises for a major site update which has clear, tangible benefits that even a manager can understand ;-)
Test first with browsers which comply well with the standards. Once the design works with such browsers, test with lesser browsers. Problems are most likely with versions of Internet Explorer older than IE 8.
When tests reveal problems due to browser defects, it’s usually possible to tweak the code to fix the problems, but still honour the standards. If you must use non-standard code to fix an IE problem, use conditional comments so that other browsers don’t see the deviant code.
Don’t assume that there are no errors in a page that looks right: it’s common to believe that, if a page looks right, it’s right. First, some errors may not be obvious. Second, a browser that doesn’t conform to the standards may wrongly produce the results you expect. And third, some browsers — most notably IE — try hard to recover from errors gracefully by guessing what the designer intended, which hides errors. Often the first sign that there is a problem with your code is that your page looks bad with a different browser, or when it’s resized, or when it has a different font size.
Use only well-formed HTML, where tags nest properly, and no optional end tags are omitted: some legacy browsers can misbehave badly if the HTML isn’t well-formed. Code checkers may warn about deviant code. Note that xHTML must be well-formed, so a validator alone will suffice to find badly formed xHTML.
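For example, here is a hedged sketch of the difference, using hypothetical markup:

```html
<!-- Not well-formed: <em> is never closed in the first item, an optional
     </li> end tag is omitted, and the tags in the second item overlap. -->
<ul>
  <li><em>First item
  <li><strong><em>Second</strong></em> item</li>
</ul>

<!-- Well-formed: every element is closed, and elements nest properly. -->
<ul>
  <li><em>First item</em></li>
  <li><strong><em>Second</em></strong> item</li>
</ul>
```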
Use only CSS for layout; for consistency, put the CSS in files which all pages share.
Caution: the CSS 2.1 specification doesn’t say how marker positions or indent sizes for the <li> tag should be controlled. Different browsers and different versions of a browser may control them in different ways, so setting a certain margin and padding for a list may produce good results for one browser and awful results for another: you may need different CSS for different browsers — and perhaps for different versions of a browser — and you must test very carefully with your full test suite.
Before starting, decide which browsers need not be supported. Extinct browsers like IE 4 and Netscape 4 clearly don’t have to be supported, but fading browsers might have to be. Different sites attract different types of users, and some sites attract more people who use old browsers.
IE 5 is a problem because its support of standards is abysmal compared to more common browsers, but some people still use it: in July 2017, 0.8% of visitors to this site used IE 5, and 0.8% is a bit too large to ignore. IE 6 is a much more serious problem, for although it supports standards much better than IE 5 does, it has many defects: and in July 2017, 5.6% of visitors to this site used IE 6, and 5.6% is far, far too large to ignore. Fortunately this site supports IE 5 and IE 6 because it was originally developed when many people used these browsers, so this site has special legacy code to cater to their differences. New sites can ignore IE 5, but deciding to ignore IE 6 is fraught with peril.
If an old browser must be supported, you must accommodate its defects and limitations, you must do more extensive testing, and you may have to eschew features supported by more capable browsers if the results with the older browser would be unacceptable.
Avoid being tempted to use elements of standards which are very new and supported by few browsers. Even browsers which do support such features may not do so completely or reliably. This especially applies to elements of standards for which the prescribed behaviour is complex. It is safer to use elements of standards which are simpler or which have been supported for quite some time, since there has been time to find and fix the bugs.
A contentious issue is when to use elements of emerging standards, especially elements which enjoy a fair amount of browser support.
Examples are the HTML 5 <canvas> tag and the CSS 3 opacity property. New standards likely won’t be fully supported by most browsers for a long time. But this has also been true of existing standards: e.g., 10 years after its release, CSS 2 was not fully supported by any browser, and never will be. This author’s view is that well-established elements of proposed standards may be treated much like elements of official standards: if using such an element makes the site more effective, but the site can still be effective without support of that element, then it’s okay to use it, especially if the element is simple and well-defined. Nonetheless, using anything which isn’t universally supported complicates testing, and using anything which isn’t in a released standard complicates validation, so there must be a compelling reason for using elements of emerging standards.
Avoid over-precise control of layout: otherwise you may create fragile pages that break when browsers don’t do layout as you expect, and you will waste time fighting how different browsers do layout, and how user preferences affect layout.
It is a common mistake to believe that layout can be precisely controlled. Browsers may do layout incorrectly because they are not standards-compliant. Browsers may do layout correctly but unexpectedly because (a) some elements of standards are optional, ambiguous, or unspecified, (b) layout is affected by user settings, PC configuration, and display technologies, or (c) you wrongly understand how code should affect layout.
It is also a common mistake to believe that layout should be precisely controlled. Many designers — especially fledgling designers — try to treat a web page like a sheet of paper, with content to be placed in precise, pixel-perfect positions, with fonts of precise, pixel-perfect sizes. Such a view is wrong: a web page is an elastic medium which can adapt — and should adapt — to users with a wide range of devices and abilities.
Sadly, GUI authoring tools encourage precise layout. When told about problems, designers using these tools are often bewildered, denying that problems could possibly occur, or having no idea how to prevent them.
Here are some common problems resulting from attempts to control layout too precisely:
Blocks may overlap if the user’s font size is larger than the designer expected. E.g. here is overlapping text in a page made by a professional designer using Dreamweaver:
In another example, from a well-known company’s site, the menu overlaps and distorts a search box:
NB: this problem is more common when designing for IE with font sizes in absolute units, e.g. in pixels. Designers may wrongly assume that users can’t change the sizes of such fonts, because IE makes it somewhat difficult to do so. Designers may therefore wrongly assume that such text will always fit into blocks of predictable sizes. In fact, however, users can even resize fonts specified in absolute units — albeit more easily with some browsers — so sizes of blocks containing text are not truly knowable, and overlaps can occur if blocks are placed at absolute locations.
Another problem is unwanted gaps. E.g. here is the header of a page made by another professional designer using Dreamweaver: a larger-than-expected font below the header’s sliced image results in a large horizontal gap and a 1-pixel vertical offset in the header’s slices:
Another example of this problem, this time in a site made by a designer using NetObjects Fusion, is this one, where vertical gaps appear between rows of a sliced image because text to the right of the slices was larger than the designer expected:
A third problem arises when the designer not only places page elements at pixel-perfect locations, but also tries to fit the content within fixed-size blocks: if the user’s font size is larger than expected, the content may be too large to fit within the blocks, and the content may be cropped. Here is an example where this occurred on the CNN home page:
A fourth problem appears when the designer tries to control layout precisely in order to cram as much as possible into a small space. The page can be so crowded that people have a very hard time finding what they are looking for, and reading what they have found. Also, the page can be so crowded that it can’t be made accessible to the disabled.
In many cases it’s better to break up the page into multiple pages joined by links. For example, one page might list what information is available — as a list of headlines with teasers — with links to pages containing the actual information. Such pages can be smaller and more open, with more whitespace, making them faster to load and easier to use. Such pages are easier to test because an open layout is less likely to break than a layout which is very compact.
A fifth problem appears when the designer uses positioning techniques which have browser-dependent results: this can happen when the site is made for a browser which fails to conform to the standards, but it can also happen when the standards allow different browsers to produce different results. Here is an example created using Microsoft’s Visual Studio .NET, where a bullet and part of the text are missing in the first column if the user doesn’t have Internet Explorer:
A sixth problem is that pages may look bad on devices with small displays, e.g. cellphones.
Other problems include such things as: pages requiring horizontal scrolling (because the designers assumed a specific window size wider than some users have); pages with large whitespaces on the right side of the browser window or on the left and right sides (because the designers assumed a specific window size much less than some users have); navigational buttons too small to be read by some users (because the designers made the buttons from images).
Choose your font stacks very carefully, with a focus on choosing fonts which are legible and available. For details about this, see Fonts ⮞ Choices : Suggest Alternate Fonts in a Font Stack.
Avoid complex font sizing: setting sizes in complex ways, or setting a myriad of sizes, can cause unwanted and nonobvious differences in how browsers size fonts, especially with older versions of Internet Explorer.
NB: actual font sizes will vary from specified font sizes due to sub-pixel rounding.
Here are a few guidelines for choosing font sizes:
First, let the normal size of body text be the user’s preferred size, i.e. 1em if the font is sans-serif, or larger if the font is a harder-to-read serif. For example:
1em Sample Sans-serif
1.2em Sample Serif
NB: a common mistake is to set the size of text in pixels. This ignores the user’s preferred size, and typically results in text which is hard to read, especially for users with visual acuity problems.
Second, decide the sizes of the headers. A factor to consider here is that the standards surprisingly specify that <h5> and <h6> headers should be smaller than the user’s preferred size.
One approach you should consider is to set the various font sizes in em units, using the sizes recommended in the CSS3 specification and proposed CSS4 specification, i.e.:
| Headers | CSS Sizes | Formulæ | CSS Sizes for Method Ⅰ |
|---|---|---|---|
| H1 | xx-large | 2/1 | 2.0em |
| H2 | x-large | 3/2 | 1.5em |
| H3 | large | 6/5 | 1.2em |
| H4 | medium | 1 | 1.0em |
| H5 | small | 8/9 | 0.89em |
|  | x-small | 3/4 | 0.75em |
| H6 | xx-small | 3/5 | 0.6em |
Three counter-intuitive results of this, unfortunately, are that (1) <H5> and <H6> headers are smaller than normal body text, (2) x-small text is hard to read, and (3) xx-small text is very, very hard to read.
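Expressed as CSS, Method Ⅰ is simply the following (a minimal sketch using the em values from the table above; the bare element selectors are this sketch’s assumption):

```css
/* Method Ⅰ: header sizes in em units, per the CSS3/CSS4 recommended scale. */
h1 { font-size: 2em; }
h2 { font-size: 1.5em; }
h3 { font-size: 1.2em; }
h4 { font-size: 1em; }
h5 { font-size: 0.89em; }
h6 { font-size: 0.6em; }
```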
Note also: browsers traditionally set sizes different from those recommended in the specifications. For example, small shown above is 8/9em (0.89em), but in most browsers it’s about 0.80em, which experience shows is significantly harder to read. For example, in your browser, text at 0.89em:
ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz 0123456789
is usually distinctly more legible than the same text at 0.80em:
ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz 0123456789
The actual small size in your browser is:
ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz 0123456789
Here is an alternate method, which may sound complex, but which is actually very simple:
Pick a number N which is the size of <H1> headers (in em units).
Make the size of <H6> headers equal to N^(1/6) em units.
Make the size of <H5> headers equal to N^(2/6) em units, of <H4> headers to N^(3/6) ems, of <H3> headers to N^(4/6) ems, of <H2> headers to N^(5/6) ems, and of <H1> headers to N^(6/6) (= N) ems. A simple calculator suffices to calculate each size. E.g.:
N = φ² (where φ is the golden ratio, 1.618034), N = e (Euler’s number, 2.71828), N = 2.25 (1.5²), or N = 2.
Method Ⅱ has the advantages that (1) all headers are larger than normal body text (which is more intuitive), (2) the header sizes are distinctly different, and (3) each header is pleasantly smaller than the next larger header.
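As an illustration, here is a minimal CSS sketch of Method Ⅱ with N = 2, with each size rounded to two decimal places (the selectors and the rounding are this sketch’s choices):

```css
/* Method Ⅱ with N = 2: each header is 2^(k/6) em, with k running
   from 6 for <h1> down to 1 for <h6>. */
h1 { font-size: 2em; }     /* 2^(6/6) = 2.00 */
h2 { font-size: 1.78em; }  /* 2^(5/6) ≈ 1.78 */
h3 { font-size: 1.59em; }  /* 2^(4/6) ≈ 1.59 */
h4 { font-size: 1.41em; }  /* 2^(3/6) ≈ 1.41 */
h5 { font-size: 1.26em; }  /* 2^(2/6) ≈ 1.26 */
h6 { font-size: 1.12em; }  /* 2^(1/6) ≈ 1.12 */
```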
Third, decide the sizes of smaller body text, e.g. for menus, footnotes, and superscripts. It is best to pick only one smaller size: picking several smaller sizes may result in the smallest text being unreadable, and other sizes being indistinguishable due to sub-pixel rounding. Remember, too, that some browsers let the user set a lower limit to the font size: if you try to set a smaller size, the lower limit will be used instead.
For this site smaller body text is sized at 0.89em, which is larger than the typical small size of 0.8em.
Fourth, don’t set smaller sizes assuming that the font is Verdana — or some other font with a large ex-height — because the text may be unreadably small if the user doesn’t have such a font.
Avoid overcrowding pages: use whitespace wisely. Making crowded pages makes it harder for users to find information, and increases the likelihood that the layout will fall apart if the browser behaves a little differently than expected, or if the user needs a larger font than expected, or if the user has a device with a small screen, such as a web-enabled cellphone. Making crowded pages also makes it more likely that the tester will fail to notice small or subtle problems.
One especially nasty consequence of overcrowding pages is that it creates a tendency to use unreadably tiny fonts to cram more text into tiny spaces. Here is an example:
The text here is about 0.67em, tinier even than the standard small size. The tiny text will create two problems: first, it will be harder for people to read the headlines, and therefore recognize news items in which they might be interested; second, it will make it more likely that people will leave the site and go elsewhere, to a site which is easier to read.
Avoid using poorly supported aspects of standards when failure would impair functionality: e.g. avoid CSS fixed positioning, which isn’t supported by IE6 and its progenitors.
Experience will teach you much about browser differences, but good up-to-date reference books are essential.
Because many people continue to use old, less capable browsers, even when much better versions are available, you may have to avoid using some poorly supported elements of standards for many years, until so few people use the old browsers that you can reasonably exclude these people from your websites.
If you must use something that is poorly supported, use a proven work-around that works with all browsers. E.g. if you need getElementById(), which old versions of IE don’t support, cloak it in this equivalent function, which uses IE’s document.all[] when getElementById() is missing:
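(The function itself is not reproduced here; the following is a minimal sketch of such a cloak.)

```javascript
// Declare document.getElementById() only if the browser lacks it,
// using IE's document.all[] collection as the fallback.
if (!document.getElementById && document.all) {
  document.getElementById = function (id) {
    return document.all[id] || null;
  };
}
```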
Another example of a proven work-around is to hide HTML or CSS that must be seen only by specific browsers by using @import restrictions or IE conditional comments. E.g. this HTML links to ie50.css only if the browser is IE 5.0x, enabling one to create CSS that copes specifically with IE 5.0x incompatibilities:
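A hedged sketch of such a link (the filename ie50.css comes from the text above; conditional comments are understood only by IE 5–9, and since they first appeared in IE 5, the test [if lt IE 5.5] matches just IE 5.0x):

```html
<!--[if lt IE 5.5]>
<link rel="stylesheet" type="text/css" href="ie50.css">
<![endif]-->
```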
Use programming languages like JavaScript very cautiously: develop software only when needed; make maximum use of standard, proven components and techniques; and understand the standard components so that you can adapt them to meet your evolving needs.
Creating software disproportionately increases development costs, because it’s harder to make good software — and to maintain it — than it is to make and maintain good HTML and CSS.
Creating any software for a site adds an additional level of complexity, especially when “pushing the envelope”. Moreover, using a client-side language like JavaScript increases the risk of encountering browser differences. JavaScript differences include such things as features missing (especially in older versions), features added (especially proprietary features), and features broken (e.g. when a browser fails to correctly adhere to the specifications). A good JavaScript reference manual is essential, but even a good manual can’t cover all possible problems.
Experience will teach you much about browser differences, but good up-to-date reference books are essential.
Using software components supplied by others can be useful, but if they become outdated and you can’t adapt them to changing needs, they become a liability.
One especially troublesome issue is the use of document.write to generate HTML or CSS “on the fly”, as the browser is parsing the HTML or CSS: the code generated may not be inserted where you expect, and may be inserted at different places by different browsers; or the code generated may be inserted where you expect, but a browser’s parser may get confused and fail at some later point. It is safer to wait until the page is displayed, and then use the innerHTML property to dynamically change the page.
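A minimal sketch of the safer approach (the element id and the message are hypothetical):

```javascript
// Wait until the page has loaded, then change content via the innerHTML
// property instead of generating it with document.write() during parsing.
window.onload = function () {
  var box = document.getElementById("status");  // hypothetical element
  if (box) {
    box.innerHTML = "<em>Page loaded.</em>";
  }
};
```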
Avoid browser sniffing, since this isn’t entirely reliable.
For more information about browser sniffing and the alternatives, see Resources ⮞ Browser Sniffing.
Avoid plugins, since plugins are not universally installed, and because some users have versions of plugins which have security defects.
One special problem is that makers of some plugins have chronic problems with update procedures which don’t work, resulting in many users uninstalling the plugins in frustration, or continuing to use old versions which are unsafe.
Another special problem is that many users have certain plugins disabled, choosing to enable them only on a case-by-case basis, so those plugins don’t work by default.
Pages with dark themes can look quite attractive — perhaps because they are so different — but there is a problem: when switching from a window with a light theme to a window with a dark theme, the user’s eyes struggle to adapt to the severe change in colour themes, and this can make it harder for the user to quickly absorb the content of the second window. There is a similar problem when switching from a window with a dark theme to a window with a light theme.
Testing is made more difficult by a number of contentious things which browser makers have done:
The information below would be much, much longer if horribly broken browsers like Netscape 4 were still being used. Thank goodness that old browsers do die, eventually.
Browsers, like most other software, have bugs, and many bugs affect how pages are rendered, often in obscure situations. Older browsers have more bugs, and more serious bugs, so bugs are a bigger problem for users who don’t (or can’t) keep their browsers up-to-date.
The biggest problems are IE older than IE 9: other browsers create fewer problems because their users tend to keep up-to-date. IE is a problem, not only because many users refuse to update (e.g. many IE5 users refuse to update to IE6), but also because some users can’t update (e.g. all IE5 users, and many IE6 users, can’t update to IE7, IE8, or IE9, because IE7 and IE8 are not available for any version of Windows older than XP SP2, and IE9 isn’t available for any version of Windows older than Vista).
There are work-arounds for many bugs — e.g. the notorious IE peekaboo bug — and there is much information on the Internet about these bugs and their work-arounds. Experience and experiment are required to solve each bug.
Browser bugs complicate testing, especially when bugs are triggered by conditions which might not appear during routine testing. Internet Explorer should be tested more brutally.
Caches are used by browsers to improve performance, but, unfortunately, browsers may display pages using cached files even when updated files exist. As a result, when you update the file(s) from which a page is built, then reload the page, the browser may display the older page instead of the updated page, or worse, it may display a page by combining some files which are old, and some which are updated.
To make things worse, caching can also take place at the server or at points between the PC and the server.
This can complicate testing because it may not be obvious when loading a page that the browser has used old files: a problem can either be hidden or created by the use of old files.
There are ways to minimize this problem, for example by configuring the browser to use its cache less, or by using a command which supposedly makes the browser refresh a page using the latest versions of the files, but the bottom line is that this problem can’t be eliminated. For more about this, see the Wikipedia article Bypass your cache.
The caching problem can also affect users — or worse, users who are clients — who may go to an updated page and find that the page is displayed wrongly due to caching. Users — and clients — may think that the designer has done a poor job when updating a page, either by not making a required update, or by doing an update wrongly. It may be helpful to prepare a FAQ which explains the caching problem and what to do about it, so that when someone complains you can cite a ready-made explanation: for example, see Your Updates - Why Aren’t They There‽.
Most browsers use the DOCTYPE to decide how strictly to apply standards. Using the DOCTYPE for this purpose isn’t part of any official standard, but it has proven useful for making it possible to apply standards properly without breaking legacy pages which were designed when today’s standards either did not exist or were very new. As discussed elsewhere, there are two or three DOCTYPE modes: quirks mode, which asks the browser to misbehave like legacy browsers; standards mode, which asks the browser to comply with the standards; and (except for Internet Explorer) almost-standards mode, which differs from standards mode only in how it calculates the heights of inline images.
One facet of using DOCTYPEs for this purpose is that some DOCTYPEs trigger different modes in different browsers. To help make browsers behave alike, only DOCTYPEs which trigger standards mode in all browsers should be used, so only a few are recommended DOCTYPEs.
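For example, these two DOCTYPEs are commonly recommended because each triggers standards mode in all mainstream browsers (a sketch only; other strict DOCTYPEs with full system identifiers also qualify):

```html
<!-- HTML 4.01 Strict, with the system identifier included -->
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">

<!-- XHTML 1.0 Strict -->
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
```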
Something which complicates the use of DOCTYPEs is that IE8 and IE9 honour the DOCTYPE differently if a meta tag appears telling IE8 or IE9 to render pages like some other version of IE. For example, the DOCTYPE could indicate standards mode, but the meta tag could tell IE8 or IE9 to emulate IE7, which would result in pages not being rendered as strictly as they could be. More information about this is available in Microsoft’s Defining Document Compatibility.
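For example, a page whose DOCTYPE triggers standards mode will nonetheless be rendered with the IE7 engine by IE8 and IE9 if its head contains a meta tag like this:

```html
<meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7">
```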
Another IE8 issue is that the user can force it into an IE7 mode, making it use an IE7 user agent and making it render pages more as IE7 would: though likely not exactly as IE7 would, so a page could be rendered differently in IE7 than it would in IE8’s IE7 emulation.
Another aspect of using DOCTYPEs for this purpose is that it has encouraged lazy designers to continue to make broken sites: designers omit the DOCTYPE or pick a DOCTYPE which triggers yesteryear’s quirky browser behaviours, focus on making websites which appear to work with their version of Internet Explorer, then complain in bewilderment when other browsers don’t render their broken code the same way that their version of Internet Explorer does. The introduction of a meta tag by Microsoft in IE8 (and, presumably, later versions of IE) to make IE work like an older version makes it even easier for lazy designers to make broken sites.
Using a DOCTYPE which triggers standards mode will make it more likely that browsers will act alike: there may still be differences, but there will be fewer than with quirks mode. Using such a DOCTYPE is therefore an important part of making it easier to test sites.
HTML 5 may change this: the official position of its creators is that the DOCTYPE has no purpose, so HTML 5 will have the unadorned DOCTYPE <!DOCTYPE html>: but there is no provision in this for an HTML 5 page which retains some elements of older standards for backwards compatibility. We live in interesting times.
Browsers perversely make the sizes of <h5> and <h6> headers smaller than the size of normal body text.
This counter-intuitive behaviour is specified by the CSS3 and proposed CSS4 specifications.
See Sizing Headers for suggestions about setting header font sizes.
As discussed elsewhere, CSS recognizes five generic fonts: serif, sans-serif, cursive, fantasy, and monospace, with more being considered. To test a page which is styled using these fonts, it’s necessary to determine which font is used, and to do this it can be helpful to be able to select generic fonts which can be easily distinguished. Indeed, the CSS standard says “User agents are encouraged to allow users to select alternative choices for the generic fonts”. Sadly, only two modern browsers, SeaMonkey and Vivaldi, let users select all the generic fonts. The inability to select the generic fonts can complicate testing.
Since JavaScript and the DOM are inextricably intertwined, this discusses both together. For brevity, JavaScript will be used to refer to both.
A problem with JavaScript code is that different browsers may implement the JavaScript differently. This can happen, for example, if a JavaScript engine has a bug, complies with a newer JavaScript standard, or supports non-standard elements.
Experience will teach you much about browser differences, but good reference books which cite browser dependencies are essential.
Here are some considerations:
JavaScript engine problems resulting from bugs or different standards are more common when using complex JavaScript, especially when using the DOM, and especially in old browsers.
Older JavaScript engines tend to have more bugs.
Because users don’t always update their browsers, bugs in JavaScript engines may afflict some users and not others.
Because Microsoft sometimes releases a new JavaScript engine without updating its browsers, and because users don’t always update their engines, bugs in Microsoft JavaScript engines may afflict some users and not others.
The 32- and 64-bit versions of browsers may have different JavaScript engines, with different behaviours. This means that it may be necessary to test with both versions of such browsers.
Many problems can be avoided by keeping JavaScript simple, since problems tend to lurk in unusual code.
Elements of newer JavaScript standards can sometimes be supported by older browsers by using object detection to conditionally clone the new elements. For example, consider the following code:
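(The original code is not reproduced here; the following is a minimal, simplified sketch of such a conditional clone.)

```javascript
// Declare Array.prototype.indexOf only if the browser lacks it.
// This is a simplified clone, not the full specification-exact version.
if (!Array.prototype.indexOf) {
  Array.prototype.indexOf = function (searchElement, fromIndex) {
    var i = fromIndex || 0;
    if (i < 0) {
      i = Math.max(0, this.length + i);  // negative indexes count from the end
    }
    for (; i < this.length; i++) {
      if (this[i] === searchElement) {
        return i;
      }
    }
    return -1;  // not found
  };
}
```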
This declares the Array indexOf() method for browsers which don’t already support it: the code uses object detection to determine whether the method exists, and if it doesn’t exist, declares a clone of it.
The CSS3 shape-outside:url() and shape-outside:attr(src url) do not work with local files, and therefore are useless.
Most browsers support some CSS which isn’t part of today’s official standards. In some cases the CSS is part of a draft standard; in some cases the CSS has been proposed — but not accepted — as an addition to a draft; and in some cases the CSS is inherently peculiar to a single browser.
Examples of CSS in an emerging standard are the opacity and text-shadow properties, which are defined in the draft CSS 3 standard. Use of such CSS clearly complicates testing, since browser support is uneven: but, on the other hand, this is just as true of unevenly supported features of CSS 2; well-defined CSS 3, therefore, can’t be excluded just for this reason. Use of such CSS may be justified if there is enough browser support to make it worth the effort, and if the absence of support by other browsers is harmless, or can be made harmless.
An example of CSS which was a proposed standard, but which was not accepted, is the set of properties which Internet Explorer and some other browsers recognize for styling scrollbars. Use of such CSS may, perhaps, be justified — styling scrollbars is particularly contentious — but again only if there is enough browser support to make it worthwhile, and only if the absence of support by other browsers is harmless or can be made harmless.
Marker positioning can also be a serious problem. A marker is the number, bullet, or other character which normally appears to the left of each item in a list. The CSS 2.1 specification doesn’t say how to control the size of the indent for each item, nor does it say where to position the marker within the indent: instead, the CSS specification leaves this up to each individual browser.
Browsers control this through various combinations of <li>, margin-left, and padding-left properties, and different browsers do this differently. Indeed, different versions of the same browser may do it differently: IE4 does it one way, IE5 another, and IE6 yet another; Opera 9 does it one way, Opera 7 another, and older versions of Opera yet another; the Gecko browsers, fortunately, appear to be consistent.
If, therefore, you wish to control the <li> indent size and marker position, you must be prepared to use different CSS for each version of each browser, and to test with each browser and each version. You must also be prepared to update each site when a new version appears which handles marker position differently. This is possible, but onerous.
Note that CSS 2.0 defined a way to control the marker indent size and positioning, but no browser implemented it, so this was dropped in CSS 2.1. CSS 3 has re-introduced it, but it will be useless until most browsers support it, which won’t be for many years, perhaps when all of today’s designers have retired.
An example of CSS which is peculiar to a single browser is Internet Explorer’s alpha transparency filter. Though it’s only supported by Internet Explorer (and not by IE 8), it’s the only way to make older versions of Internet Explorer support alpha transparency in PNG images. Since all other browsers support alpha transparency in PNG images properly, without a filter, using this particular proprietary CSS can be justified to ensure that pages work the same for all browsers. Using other proprietary CSS, however, is more contentious, and harder to justify.
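A hedged sketch of the filter (the class name and image path are hypothetical, and the element may also need explicit dimensions for the filter to take effect):

```css
/* All browsers: normal PNG background with alpha transparency. */
.logo {
  background-image: url('images/logo.png');
}
/* Old IE only (e.g. via a conditionally included stylesheet): drop the normal
   background and fake the alpha transparency with the proprietary filter. */
.logo {
  background-image: none;
  filter: progid:DXImageTransform.Microsoft.AlphaImageLoader(src='images/logo.png', sizingMethod='scale');
}
```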
In any case, use of such CSS complicates testing, and this must be a factor in deciding whether to use such CSS.
Most browsers support some HTML which isn’t part of today’s official standards. In some cases the HTML is part of a draft standard; in some cases the HTML is archaic HTML created for browsers which today are extinct; and in some cases the HTML is peculiar to a single browser.
An example of HTML in a new standard is the <canvas> tag, which is defined in the HTML 5 standard. Use of such HTML clearly complicates testing, since browser support is very uneven. Use of such HTML may be justified if there is enough browser support to make it worth the effort, but only if the absence of support by other browsers is harmless or can be made harmless.
It is very important to use such HTML only if it helps make a site more effective, and only if the site remains effective even when the HTML isn’t supported.
It is easier to use non-standard CSS than non-standard HTML, because much CSS in emerging standards can be used in such a way that all browsers show the content even if they make the content appear differently.
Two examples of archaic HTML are the <blink> and <marquee> tags. The first was a Netscape invention, the second a Microsoft invention, and neither is in today’s standards.
Flashing objects with frequencies from 2–55 Hz can trigger epileptic episodes in some people. For this reason, the archaic HTML tags <blink> and <marquee> should never be used, the CSS 2 text-decoration:blink should never be used, and flashing effects using other techniques should be used only with extreme caution.
Another example of archaic HTML is the background attribute of the <table> tag: this attribute isn’t in the standards, but its use was justifiable years ago when old browsers supported this attribute but did not support equivalent CSS. There is no justification for using such HTML today.
Examples of HTML which are peculiar to a particular browser include the HTML which Microsoft invented to enable documents to be converted from Word format to HTML format, and back, without loss of formatting information, e.g. <o:SmartTagType>. Such HTML should never be used in web pages.
The bottom line is that you shouldn’t use non-standard HTML, except possibly HTML which is well-defined in an emerging specification: and if you use such HTML, you must make sure that the site remains effective without it, and you must accept that this complicates testing.
Since JavaScript and the DOM are inextricably intertwined, this discusses both together. For brevity, JavaScript will be used to refer to both.
Most browsers support some JavaScript which isn’t part of today’s official standards. In some cases the JavaScript is part of a draft standard, or has been proposed for such a standard; in some cases the JavaScript is archaic, created for browsers which are extinct, or nearly extinct; and in some cases the JavaScript is peculiar to one browser.
An example of archaic JavaScript is Internet Explorer’s document.all[] array, used to get the element associated with an HTML ID. The document.getElementById() method should be used instead, but Internet Explorer 5.0x for Windows, which is still used by some, doesn’t support it: for this reason it’s necessary to use object detection to decide which technique to use.
It is best to encapsulate such archaic JavaScript in a function which uses the appropriate technique, for example:
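A minimal sketch of such an encapsulation (the function name getRefById is this sketch’s choice):

```javascript
// Return the element with the given id, using whichever technique
// the browser supports, so callers never need to care which it is.
function getRefById(id) {
  if (document.getElementById) {   // standard DOM method
    return document.getElementById(id);
  }
  if (document.all) {              // archaic IE fallback
    return document.all[id] || null;
  }
  return null;                     // neither technique available
}
```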
Encapsulating such differences in a standard, tested function keeps the software simple, and makes testing no more complex.
An example of JavaScript which is peculiar to one browser is JavaScript for Internet Explorer to detect plugins. The navigator.plugins[] array should have details about installed plugins, but Internet Explorer stupidly doesn’t support it. Code unique to Internet Explorer must therefore be used. Code to get information about plugins should be encapsulated in a function which uses the appropriate technique, though this is harder because different code is needed for each Internet Explorer plugin.
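A hedged sketch of such an encapsulation, here for the Flash plugin (the ActiveX ProgID shown applies only to Flash; each plugin needs its own IE-specific test):

```javascript
// Report whether the Flash plugin is available, hiding the IE/non-IE difference.
function hasFlashPlugin() {
  // Most browsers describe their plugins in navigator.plugins[].
  if (navigator.plugins && navigator.plugins.length) {
    for (var i = 0; i < navigator.plugins.length; i++) {
      if (navigator.plugins[i].name.indexOf("Shockwave Flash") !== -1) {
        return true;
      }
    }
    return false;
  }
  // Older Internet Explorer: try to create the plugin's ActiveX control instead.
  if (window.ActiveXObject) {
    try {
      return !!new ActiveXObject("ShockwaveFlash.ShockwaveFlash");
    } catch (e) {
      return false;
    }
  }
  return false;
}
```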
The bottom line is that you shouldn’t use non-standard JavaScript, except when it’s extremely important to do so, and when the JavaScript differences are encapsulated within standard functions which make it unnecessary for other code to care about the differences. Creating a new standard function will complicate testing, but using a proven standard function shouldn’t.
Many designers create pages which contain JavaScript errors, especially when designers consider the errors to be harmless: an example is assigning a value to an undeclared variable. This creates two problems. First, it may worry users who notice the errors, because they will worry that there is a problem with their PC, or a problem with the site. Second, it makes it harder to detect errors which are not harmless, because the serious errors are hidden within a pile of supposedly harmless errors. There is no reason to allow JavaScript code to have any errors, and every reason to ensure that there are none.
Some plugins are not available for certain browsers. For example, many plugins are not available for some 64-bit browsers. This creates obvious design problems for sites which depend on the plugins being installed.
Chrome 68 (and later) says a site is insecure if the page is an http:// page. This is an annoyance.
Chrome 68 (and later) darkens background colours on some PCs, e.g. turns white to pale yellow, and this is beyond annoying: changing the background colour makes it harder to use Chrome to test local sites.
Chrome-based browsers may produce defects on pages. For example, (1) unwanted underlines appear when double underlines are used to decorate links, or (2) coloured edges appear where nothing in the site specifies such edges. Sometimes this happens when a page is online (remote), but not when it’s offline (local), or vice versa. This is clearly caused by bugs in the browser engine.
The unwanted underlines can only be avoided by avoiding double underlines in links. Other defects can sometimes be avoided by trial-and-error tweaks of the CSS, e.g. by specifying redundant background colours, but this is complicated by the fact that sometimes it matters whether pages are online or offline.
When refreshing a page, Chrome-based browsers often jump elsewhere within the page.
Like many browsers, Firefox can automatically resize (shrink) images which are too big to fit within the browser window. This can be nice for the user, as it enables users to view large images without scrolling, but it can make it harder to test sites, since the resizing can obscure problems. It is therefore vital to disable resizing during testing. This is easy to do with many browsers, including Internet Explorer, SeaMonkey, and Firefox 1.5: but it’s harder to do this in later versions of Firefox because, in an act of appalling stupidity, the Firefox makers removed the option from the Preferences in Firefox 2: resizing can now only be disabled using Firefox’s arcane about:config page.
To disable resizing with Firefox, it’s now necessary to set the following about:config property to false:
browser.enable_automatic_image_resizing
As discussed elsewhere, CSS recognizes five generic fonts: serif, sans-serif, cursive, fantasy, and monospace, and more are being considered. To test a page which is styled using these fonts, it’s necessary to determine which font is used, and to do this it can be helpful to be able to select generic fonts which can be easily distinguished. Firefox’s progenitor, Mozilla, enables one to select all the generic fonts, but the Firefox makers foolishly chose to remove this feature. (The Firefox makers appear to believe that taking features away from users makes Firefox easier to use.) This can complicate testing. Designers who make pages using all the generic fonts should consider using SeaMonkey or Vivaldi instead of Firefox for testing, since SeaMonkey and Vivaldi still allow all five generic fonts to be selected. Alternatively, designers can avoid using the cursive and fantasy generic fonts.
Firefox 3 changed how Firefox calculates font sizes, in such a way that text which should be of different sizes is rendered in the same size. For example, the text size resulting from font-size:small may be the same as font-size:x-small. Whether this happens depends on the user’s default font size and on other factors which are not clear. This results in pages which look disconcertingly different from what they should be.
You can check this effect by examining the following line, looking for discrepancies:
medium small x-small xx-small
Internet Explorer distorts colours. Usually the differences are not obvious, but there are exceptions: for example, when the edges of an image have the same colour as the image’s background, so that the image should blend in with the background, Internet Explorer may distort the colours, resulting in a noticeable colour change around the image which prevents the image from blending in.
For example, here are screen captures of the same image, with a solid background chosen to make the image blend in, as it appears with Firefox 3 and Internet Explorer 7. Firefox displays the image correctly; Internet Explorer distorts the colours.
With 16-bit (64K colour) monitors, colour distortion may be unavoidable and is found in all browsers: but with 24-bit (16M colour) monitors there is no justification for colour distortion, yet Internet Explorer — and only Internet Explorer — may nonetheless distort the colours.
With the CSS box model, the width of a box is the width of the contents of the box, excluding padding, borders, and margins. Internet Explorer wrongly makes the width include the padding and borders, in IE 5, and in quirks mode in IE 6–9: in standards mode, IE 6–11 handle the width correctly.
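For example, with the following (hypothetical) rule, a correct browser gives the content 200px and draws the box 230px across (200 + 2×10 padding + 2×5 border), whereas the broken model squeezes the padding and borders inside the 200px, leaving only 170px for the content:

```css
/* Standard box model: content 200px, total box width 230px (plus margins).
   Broken IE model (IE 5, and IE 6-9 in quirks mode): total box width 200px,
   content only 200 - 2*10 - 2*5 = 170px. */
div.example {
  width: 200px;
  padding: 10px;
  border: 5px solid black;
}
```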
If standards mode is used, there are testing problems, but only with IE5. To avoid problems with IE5, simply design the page so that the width isn’t critical, i.e. so that the page works well enough even with the wrong box model.
As discussed elsewhere, CSS recognizes five generic fonts: serif, sans-serif, cursive, fantasy, and monospace, with more being considered. To test a page which is styled using these fonts, it’s necessary to determine which font is used, and to do this it can be helpful to be able to select generic fonts which can be easily distinguished. Unfortunately, IE picks the five fonts, and doesn’t allow the user to select them, so if IE picks fonts which can’t be easily distinguished, people testing pages can do nothing about it.
When Internet Explorer encounters a CSS coding error, it may guess what the designer intended and render pages accordingly, whereas the CSS standards say that browsers should ignore such errors. The result is that Internet Explorer obscures errors, making testing harder.
An example is height:20: this is an error because no units are specified; units are mandatory unless the value is zero. Internet Explorer commonly assumes that the designer meant height:20px, and renders pages accordingly. Such errors are common, probably because units are not required for dimensions in HTML, where pixels are the standard unit, so sloppy designers can easily err when they create similar CSS. An amazing number of pages on the Internet have such errors.
The best way to prevent this problem is to only test sites which have been validated, hence have no CSS errors.
Some sites still attract a significant number of IE 5.01 users, and these sites present a special challenge due to 5.01’s abysmal CSS support: there is much that 5.01 doesn’t do; and much that it does do, but wrongly. Users should upgrade, at least to IE 6.0, but many users won’t, or can’t.
Some major 5.01 CSS problems are: it has a broken box model; it doesn’t support many CSS properties; and it supports inheritance very badly.
Sites which support 5.01 users should be tested more thoroughly, and will likely need a special stylesheet, imported using conditional comments, to override the CSS used by more capable browsers.
As discussed elsewhere, browser makers have chosen to use the DOCTYPE to decide how strictly to honour the standards. This use of DOCTYPEs isn’t prescribed in the standards: it’s just a common way of enabling browsers to better honour the standards without breaking legacy sites.
A problem with IE is that versions older than IE7 require that the DOCTYPE appear on the very first line of an HTML file: if it’s not on the first line, IE will render pages in quirks mode.
To avoid this problem, it’s advisable to always put the DOCTYPE on the first line.
When the designer specifies a font size in absolute units, e.g. pixels, Internet Explorer makes it harder for a user to make it use another size. Many designers appear to assume that it isn’t just hard, but impossible, and therefore design a site assuming that the font size will always be exactly what they specify. However, it’s possible to override the font size in Internet Explorer, and it’s very easy to do so with many other browsers. The designer’s false assumption can, therefore, result in pages which look awful when the user overrides the size, e.g. to enlarge the text to make it more readable.
NB: the user can tell Internet Explorer to honour their preferred font size, ignoring the size set by the designer, using the command Tools, Internet Options, General, Accessibility, Check “Ignore Font Sizes Specified on Web Pages”. This is even easier with other browsers: with Firefox and Safari, for example, the user can simply do Ctrl + or Ctrl - to increase or decrease the font size. In this author’s experience, few people know about the Internet Explorer option, so it’s easy for both designers and users to believe that it isn’t possible to override the font size.
The best way to deal with this is not to specify font sizes in absolute units, but instead to specify the sizes in ways which result in text growing or shrinking as the user’s preferred font size grows or shrinks. This is an element of “fluid design”. There are four ways of doing this, which are discussed in Resources ⮞ Fluid Design and Font Sizes.
A problem with all these methods is that browsers tend to require users to pick their preferred font size from a list, and the list may not have the size which the user would most prefer.
NB: this author recommends that only two font sizes be used for most body text, one size for normal body text, and a smaller size for footnotes and possibly for sidebars. Using more than two sizes for body text makes it harder to pick sizes which are readable, but clearly distinguishable, and may also confuse users, who could wonder what the significance of the different sizes is.
Something that often goes along with setting font sizes in absolute units, is setting positions of content at absolute positions: a designer who wrongly assumes that browsers will honour fixed font sizes will also often wrongly assume how much space the text will occupy; and such a designer may therefore position other blocks of content at specific, absolute positions, which often results in content overlapping or being cropped when the browser does not honour fixed font sizes. It is therefore important not only not to specify fixed font sizes, but also not to specify absolute positions for content. This is another aspect of fluid design.
When Internet Explorer encounters an HTML coding error, it guesses what the designer wants to happen, and renders the page accordingly. This can be good for users, because it makes pages usable which might otherwise not be, but this can make it harder to test sites, because it can obscure errors, and because other browsers likely won’t behave the same.
Internet Explorer’s tolerance of HTML and CSS errors is arguably the major reason why so many designers create so many sites which appear to work with Internet Explorer, but which fail to work with other, less tolerant browsers: designers think that their sites are fine because they see no problems, forgetting — or not knowing! — that Internet Explorer hides problems.
Another important reason for the creation of so many sites that fail with other browsers, perhaps, is that many designers hold the view — perhaps subconsciously — that their version of Internet Explorer does what is right.
The best way to prevent this problem is to only test sites which have been validated, hence have no HTML errors.
The 32- and 64-bit versions of IE may have different engines: this is true, for example, of IE 9. The fact that there are different engines creates the possibility that the 32- and 64-bit versions of IE9 could behave differently.
When Internet Explorer encounters a JavaScript error, it offers information about the error, but the information is often too sparse to be useful: it reports only one error at a time, and although it reports the number of the line where the error was encountered, the number can be wrong, and the file where the error was encountered isn’t named.
It is therefore better to test first with browsers which provide more information about JavaScript errors (e.g. Firefox), and to test with Internet Explorer only when these browsers report no errors: this leaves only any errors arising from differences in Internet Explorer’s implementation of JavaScript.
Internet Explorer has complex security options which give users a lot of control about what IE will allow, on a site by site basis. One aspect of this is that the security mode can depend on whether a site is local, on the tester’s PC, or remote, on the web. This can result in sites behaving differently when tested locally than when tested remotely.
To circumvent this problem, Microsoft supports a MOTW (a Mark of the Web) comment, which can be added to web pages to make IE run a local site as if it were remote. Designers who want to ensure that sites can be properly tested locally should add the MOTW comment to all pages of sites which are affected by IE’s security options.
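A sketch of the MOTW comment, placed near the top of the page; the form below uses the generic Internet-zone URL, where (0014) is the character count of the URL which follows (a specific site URL, with its own character count, may be used instead):

```html
<!-- saved from url=(0014)about:internet -->
```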
Unlike most browsers, Internet Explorer stupidly allows only one version of Internet Explorer to be installed on a PC. This is because Microsoft perversely and unnecessarily chose to integrate Internet Explorer with the operating system. There are ways to install more than one version of Internet Explorer, but testing with multiple versions isn’t anywhere as easy as testing with multiple versions of other browsers.
With IE, specifying a <table> width of 100% will typically make the table extend beyond the right margin. To prevent this, designers should set table widths using CSS, and should use a technique like conditional comments to make the width 98% for IE.
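A hedged sketch of that work-around (the class name is hypothetical):

```html
<style type="text/css">
  table.main { width: 100%; }   /* all browsers */
</style>
<!--[if IE]>
<style type="text/css">
  table.main { width: 98%; }    /* IE only: keep the table inside the right margin */
</style>
<![endif]-->
```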
Internet Explorer doesn’t implement xHTML: it treats xHTML as if it were HTML.
There are two approaches to dealing with this. One is to always use HTML, never xHTML. The other approach is to use xHTML, but to follow the recommendations in the xHTML specifications to ensure that legacy browsers will render xHTML appropriately. Both approaches have their merits. This author prefers the latter approach, but recognizes that others don’t agree.
Testing will be easier and quicker if you use automated tools that can find errors in your code. For example:
Validators: HTML validators and CSS validators report syntax errors, i.e. violations of a language’s rules. Validators may also issue warnings, e.g. about dubious elements in the code.
Using a validator is a fast and simple way to identify the most blatant errors. Because different browsers may handle errors differently, fix all syntax errors and critical warnings before manual testing.
Using a validator and studying its error messages will also help you learn more about HTML and CSS, and so enable you to improve the quality of your work.
NB: some authoring programs like FrontPage create invalid code; avoid using such programs.
Code Checkers: code checkers report errors and warnings about a page or site, e.g. missing end tags, nesting errors, broken links, wrongly sized images, or accessibility issues. A code checker might report some of the same errors that a validator would report, but a code checker will go beyond this to report what appear to be errors in perfectly valid code. E.g. if you have an img tag with a src attribute naming an image file that doesn’t exist, a validator won’t report an error, because it doesn’t violate the HTML syntax to refer to a missing image file; but a code checker should report an error, or at least a warning, because it’s very likely that the code specifies either the wrong filename, or the correct filename of an image that hasn’t yet been created.
Many HTML code checkers and CSS code checkers are available.
Code checkers may issue warnings which you decide can be safely ignored. Deciding which warnings may be ignored can take time, so it’s a good idea to change your code to eliminate even minor warnings so that new critical warnings are not hidden amongst old minor warnings, like needles in a haystack.
Using a code checker and studying its errors and warnings will also help you learn more about HTML and CSS, and so enable you to improve the quality of your work.
Some code checkers let you do less stringent checks to minimize unwanted warnings: this may be useful after making minor changes, but you should do strict checks after major changes or before major testing.
Error Consoles: these are browser windows which can dynamically report errors and warnings detected while pages are loaded and used. Output in an error console likely indicates a coding error.
You must load the error console:
Gecko-Based Browsers: many Gecko-based browsers support an Error Console (called the JavaScript Console in Mozilla and older versions of Firefox). To view it you must click Tools, Error Console (or Tools, JavaScript Console in Mozilla and older versions of FF). You may be able to increase the amount of information reported: e.g. with Firefox, setting the about:config item javascript.options.strict to true will make Firefox report more potential JavaScript problems.
Internet Explorer: IE has no error console. It can produce a popup that reports JavaScript errors, but the report is often of little use except to say that somewhere there is some kind of error: the details it gives are often wrong or misleading. For initial testing you should use a more capable browser.
Safari: Safari supports an error console called the JavaScript Console. To use it you must turn on logging of JavaScript errors and load the window. With Safari 1.3+ you can output your own messages to the console. [more ⮞]
Error consoles are also useful in debugging, but will be discussed elsewhere.
Sanity Checkers: these are tools which can be used to detect anomalies which might result from errors. Examples of sanity checkers are:
Dust-Me Selectors Extension: this is a free Firefox extension [more ⮞], which can be used to identify CSS selectors which are not used. It could be that unused selectors can simply be deleted; it may be that they are needed, but not yet; or it may be that they should have been used, but weren’t, in which case the wrong selectors may have been used.
Web Developer Extension: this is a free Firefox and SeaMonkey extension [more ⮞], which offers many functions useful to website designers, including functions which can be considered to be sanity checkers, such as these:
Disable JavaScript: this can be used to quickly check whether a page works as expected when JavaScript is disabled.
Display Alt Attributes: this displays the values of the alt text associated with images, making it easy to quickly check the text which would appear if the user disables images.
Outline Images Without Title Attributes: this can be used to identify images without title attributes.
View Generated Source: this is like View Source, except that it also includes dynamically generated HTML. This makes it possible to see if there are errors in dynamic HTML.
There are many more useful functions, some useful for performing sanity checks, some useful for finding coding errors, and some useful for both.
Link Checking Tools: these are tools which can be used to report broken links. An example is the excellent Xenu Link Sleuth for Windows.
Grammar & Spell Checking Tools: these are tools which can be used to detect possible errors in grammar and/or spelling. Examples are the Grammarly grammar- and spell-checker extension for Firefox, and the Grammarly grammar- and spell-checker app for Windows: these tools are presently (August 2018) imperfect, but they are still useful.
This section discusses the selection and installation of browser test suites.
The first thing you must do is decide which browsers you will test with. Deciding can be hard.
You should clearly test with the browsers used by a significant number of your users. But what is significant? And how will you find out which browsers your visitors use?
When you decide what is significant, you decide how many people your site won’t serve, but you may also decide how effective your site will be. If you don’t serve old browsers, you will lose people who use those browsers. But to serve old browsers, you may have to eschew features offered by newer browsers, which could make your site less effective.
When you try to find out which browsers your visitors use, you will naturally check the browser stats, but will likely discover that the stats are unreliable or unavailable. For example, stats typically report how many use Gecko-based browsers, but not how many use which version of Gecko — because this information isn’t generally available — and knowing this would be very helpful because different versions have different capabilities.
A conservative strategy would be to deploy an initial site that supports very old browsers like IE 6, and then to study the access logs to decide whether the numbers justify supporting the browsers in site updates. With this strategy you may initially use features supported only by modern browsers, but only when this doesn’t impair the site’s functionality. A side-effect of this strategy is that it compels you to make an initial site that is simpler, and this simplicity may result in a better site: needless complexity makes sites less effective.
It can also make sense to test with uncommon browsers which comply very well with standards: browsers become more standards compliant as time goes on, so making your site work with today’s standards compliant browsers will help ensure that it will work well — or can easily be made to work well — with future versions of browsers which today are less compliant. E.g. if Firefox does something properly which other browsers don’t, making your site work with Firefox will help ensure that your site will continue to work properly when other browsers catch up to Firefox.
It can also make sense to test with an uncommon, very standards compliant browser because this may reveal errors in your code which wouldn’t otherwise be obvious.
NB: modern browsers are very standards-compliant, so problems with non-compliance chiefly appear with older browsers; in June 2018, many people continue to use old versions of Firefox and Internet Explorer.
You need not test with several browsers which use the same browser engine, because they will render pages alike: e.g. if you test with the latest version of Vivaldi, you don’t have to test with the latest versions of Chrome and Opera.
The common browser engines are listed on the page Resources ⮞ Engines.
More and more people are using mobile devices such as cellphones for web browsing. This presents major challenges for web design, which can be met in part by specifying CSS rules which are specific to such devices.
This author has made several existing sites — though not this site — mobile-friendly.
You should consider testing with a mobile browser, or with a browser that emulates a mobile browser. For example:
Any browser supporting CSS 3 media queries may be used for sites which use media queries, simply by shrinking the browser window to the point where the media queries kick in. Most modern browsers support media queries, so almost any of them can be used this way; a minimal media query is sketched after this list.
This isn’t a perfect solution for testing sites for compatibility with mobile devices, but it’s nonetheless very useful.
Apple iPhone Emulator (works with certain browsers only)
Opera Mobile Emulator (note that Opera Mobile honours CSS 3 media queries only when its “Mobile View” option is on)
Note also that makers of mobile phones often offer emulators, typically bundled with SDK’s, which may be used to test sites for their phones.
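A minimal sketch of the media-query approach mentioned above (the 480px breakpoint and the .sidebar class are illustrative only):

    /* default styles for larger screens */
    .sidebar { float: right; width: 18em; }

    /* on narrow viewports (e.g. a phone, or a desktop window shrunk for testing),
       drop the sidebar below the main content */
    @media screen and (max-width: 480px) {
      .sidebar { float: none; width: auto; }
    }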
You likely want to install multiple test browsers on a single PC. This is possible, but there are problems, and each browser must be handled differently. Following is information about installing and running these test browsers:
NB: you can find many browsers for testing here, in the Browser News.
NB: to install multiple browsers, you need a moderate amount of hard disk space; to run multiple browsers at the same time, you need a large amount of RAM. Fortunately both are cheap.
Installing multiple versions of Gecko browsers is simple. You can install as many versions as you like, but must install each in its own directory. You must also create a unique profile for each version. How to do this is nicely documented in mozillaZine.
Running Gecko browsers is more complex. You have to invoke the browser with an option specifying the profile. E.g.:
Suppose you installed Firefox 1.5 in c:\test\gecko\firefox1.5\.
Suppose you created a profile for it named FF1.5.
You could then run Firefox 1.5 using a DOS command such as c:\test\gecko\firefox1.5\firefox.exe -P FF1.5, where the -P option names the profile to use.
It is easiest to do this in Windows by creating a .bat file containing just this command, and creating a shortcut to the .bat file.
The above only works, however, when you need to run only one version at a time. If you want to run more than one version at a time, you must first set an environment variable, MOZ_NO_REMOTE, e.g. by adding the line set MOZ_NO_REMOTE=1 to the .bat file before the command which starts the browser.
Installing multiple versions of Opera is simple for versions up to version 12. You can install as many versions as you like, but must install each in its own directory. Running multiple versions of Opera is likewise simple: just run the version(s) you want.
Installing multiple versions of Opera for versions 15+ is something this author has been unable to do: installing multiple versions appears to work; however, attempts to run old versions often result in the latest version being run instead. Opera Software clearly isn’t trying to help developers cope with multiple versions of Opera. If you have found out how to do this, please contact this author.
Installing multiple versions of Opera for versions 22+ is even more difficult, since these versions have an auto-update feature, and there is no obvious way to disable it, so Opera will automatically update older versions that are really needed for testing sites.
Installing multiple versions of Internet Explorer on one PC is complex, and there are several approaches:
With a multi-boot facility, install multiple copies of Windows on one PC, each copy with its own hard disk partition and its own version of IE.
This is troublesome, because it’s necessary to reboot in order to switch from one version of IE to another. It also is wasteful of hard disk space, which can be a problem unless the hard drive is very large. In addition, a software license is needed for each different version of Windows.
With a virtual machine one can install virtual copies of Windows and certain other operating systems. E.g. on a PC with Windows XP Pro SP2 (and up), one could install a virtual copy of Windows 98 with IE 5, and a second virtual copy of Windows 98 with IE 5.5. The number of virtual copies is limited only by hard disk space; the number of virtual copies you can run at one time is limited only by the amount of RAM and the speed of the CPU.
This is the technique recommended by Microsoft.
To use virtual copies of Windows, however, you need a license for each virtual copy. E.g. if you want to test sites with IE5, you can use Virtual PC to install a virtual copy of Windows 2000, but you must have a license for Windows 2000. Exception: Microsoft issues free virtual images of Windows, each with a version of IE, for those who wish to test multiple recent versions of IE on a single PC.
There is a special procedure for installing several versions of IE in XP Mode.
NB: there is a Virtual PC newsgroup, microsoft.public.virtualpc.
Using a technique discovered by Joe Maddalone, and refined by others, it’s possible to install several versions of IE at once in the form of the Internet Explorer Collection.
This technique isn’t reliable, however. This author has encountered three problems: (a) some versions of IE were broken and wouldn’t run; (b) on one occasion, use of the collection destabilized a PC, requiring the loss of two working days to recover from the damage; and (c) on one occasion, the collection’s IE 5.01 failed badly with a page’s JavaScript when that same page worked fine with IE 5.01 running under Virtual PC.
Using this collection is much easier than using Virtual PC, but in this author’s experience the collection can’t be trusted.
A product similar to Microsoft’s Virtual PC is VMware’s free VMware Player. It also lets you install virtual copies of operating systems, but it’s better at supporting a non-Windows O/S. Especially useful is that you can install Virtual Appliances, which are easy to install pre-configured images of operating systems, e.g. an image of Linux with KDE and KDE’s Konqueror browser. A big problem, however, is that VMware won’t let you download its Player unless you answer a large number of unnecessary and irrelevant questions.
The free VMware Player has a number of limitations, including the fact that it can only run a pre-built virtual machine: it can’t create a new virtual machine. For more flexibility you need a product designed to create test environments, e.g. VMware WorkStation for Windows or Linux.
There are several other approaches for testing with browsers when you don’t have the right kind of PC or O/S:
You can test pages with Safari remotely at iCapture.
You can test pages with Safari remotely at safaritest.
You can adopt a design methodology that improves various aspects of site development, including testing:
Details:
Keep it simple: simplicity means less to do; simplicity means less complexity in what you do; thoughtful simplicity means a better site that is easier to use … and to test.
This doesn’t mean that a site must be starkly plain. A site should enable a pleasant browsing experience and project a positive image of the site’s owner, so the site must look good. Pages should simply not be overly ornate.
Simplicity is the hallmark of great minds:
“Simplicity is the ultimate sophistication” — Leonardo da Vinci
“The cheapest, fastest, and most reliable components of a computer system are those that aren’t there” — Gordon Bell
“Complexity creates confusion, simplicity focus” — Edward de Bono
“Simplicity is prerequisite for reliability” — Edsger Wybe Dijkstra
“A complex system that works is invariably found to have evolved from a simple system that works” — John Gall
“The ability to simplify means to eliminate the unnecessary so that the necessary may speak” — Hans Hofmann
“Simplicity is the final achievement: after one has played a vast quantity of notes and more notes, it’s simplicity that emerges as the crowning reward of art” — Frederic Chopin
“Perfection is achieved not when there is nothing more to add, but rather when there is nothing more to take away” — Antoine de Saint-Exupéry
“An honest tale speeds best being plainly told” — William Shakespeare
Keeping things simple means using the available technologies wisely. E.g. when coding a page, the wisest choice is usually to use HTML or xHTML to specify structure, and to use CSS to specify appearance, with carefully chosen CSS classes for page elements which recur throughout a site: other choices may result in bloated, complex code that takes much more time to create and maintain.
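A minimal sketch of this separation (the class name and styling are illustrative):

    <!-- the HTML specifies structure: a recurring element is marked with a class -->
    <p class="note">NB: this page is updated monthly.</p>

and in the site’s CSS the appearance is specified once for every such element:

    .note { border: 1px solid #999; background: #ffd; padding: 0.5em; }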
An often overlooked aspect of keeping things simple is to keep text simple. People want to find information quickly, and skim text to find what they want: they don’t want to wade through rivers of turgid text. Keep text simple, clear, and concise. This makes pages more usable, and text errors easier to find.
Document key information needed when developing or updating the site.
For example, the documentation could list and briefly describe each file and JavaScript function: e.g., as I did for a baseball team’s site. The documentation may also include information that you might otherwise put elsewhere, e.g. a changelog. The documentation shouldn’t duplicate what appears in other files, but since comments in source files (e.g. HTML, CSS, and JavaScript files) increase page load times, you should document such source files elsewhere.
It is especially important to document software clearly and in detail — e.g. to specify what a method does, what its arguments and return value are, and how it responds to errors — and to do so during the design process, not later. I spent 27 years developing software before I began designing websites, and one thing I learned early on is that a major cause of bugs is poorly documented software. If you carefully document the behaviour of each software component before you implement it, you are much more likely to implement it correctly, and those using it will much more likely use it correctly.
Software design is a science and art in which great attention to detail is essential. Poorly documented software means that some details will be unclear, likely will be misunderstood, and will often cause errors.
Documentation must be kept up-to-date: erroneous documentation is worse than useless. If you are not sure you can keep it up-to-date, you may be better off not creating the documentation in the first place. Note: the task of keeping documentation up-to-date is harder when the documentation isn’t in the same file as the things it documents: e.g. it can be easy to forget that changing a JavaScript file may affect documentation in a different file. You must exert self-discipline to ensure that documentation is up-to-date.
Paying close attention to details is essential for efficient, effective design. It can sometimes be so easy to slap page elements together that fine details are overlooked, and it’s mainly in the details that errors lurk: as the old saying goes, “The Devil is in the details”.
Being able to focus on the details is a skill that must be practised consciously and assiduously over a long period of time, but your diligence will reward you with the ability to do better work in less time.
A good way to learn is to learn from your mistakes. When you find an error, always ask yourself where you went wrong. Every failure is an opportunity to learn.
Design for what browsers do well: trying to force browsers to do what they do badly invites needless problems.
Heed the words of the poet, Henry Wadsworth Longfellow: “The talent of success is nothing more than doing what you can do well.”
If you find yourself fighting the browsers, trying to force them to do reluctantly what you want, you likely are trying to make them do something they don’t do well: stop fighting; find another approach that browsers do well.
It is very common, alas, to try to make browsers do things they don’t do well, e.g. because the designer is using features that are not well supported, or is more familiar with a medium with much different fortés, or could not resist unwise client demands. One common error is to try to control layout too precisely.
It is also very common to fail to make browsers do things which they do especially well. For example, using CSS it’s easy to tell browsers to make pages look different when printed than when displayed, e.g. by using fonts more suited for printing, by omitting menus which serve no purpose when printed, and by minimizing the use of colour to minimize the user’s print costs. Below is one page I made as it appears when displayed, and as it appears when printed:
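The CSS for such a print-specific look can be very short; a minimal sketch (the selectors are illustrative, not taken from that page):

    @media print {
      body  { font-family: serif; }   /* fonts more suited to printing */
      #menu { display: none; }        /* omit menus, which serve no purpose on paper */
      h1, h2, a { color: black; }     /* minimize colour to minimize the user’s print costs */
    }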
Don’t do something just because it’s possible: do it because it makes the site more effective, or because it enables you to do your work more effectively.
Many technophiles are eager to use anything new or flashy, even when not needed for the task at hand: e.g. many designers create flash pages — entry pages with eye-catching special effects having no functional purpose — which add to every phase of the development time, and annoy impatient visitors who simply want to use the site.
A survey revealed that business professionals consider these to be important qualities of a great website:
For an effective website, focus first on these qualities.
One key element to good web design is to make sites which are accessible, i.e. which can be used by persons with disabilities: for example visual disabilities.
Making sites accessible is not only good design: in many countries it’s required by law.
Accessibility is a complex topic: you can start by reading How to Make Websites User Friendly and Accessible for Everybody.
There are four objectives to optimizing images.
One objective is to increase image quality. Poor image quality will decrease the site’s quality and reflect poorly on the site’s owner, thus making the site less effective. The images should be carefully selected to be attractive and to enhance site content. JPEG images should be individually saved at compression levels which maintain image quality. PNG and GIF images should be clear and attractive.
A second objective is to minimize page load times. To some extent this conflicts with the first objective: for example, a JPEG saved at a higher compression ratio will load faster, but also be of lower quality; the compression ratio must therefore be selected very carefully. Things you can do to minimize page load times are (a) omit images that are not needed, (b) use the same images on multiple pages, (c) reduce image dimensions, (d) reduce the number of colours in PNG and GIF images, and (e) save images using special tools which minimize image file sizes. A particularly good free tool for losslessly cropping, flipping, and 90°-rotating JPEG images is JPEGcrop. A particularly good free tool for minimizing PNG file sizes is PngGauntlet. As for GIF images, they are usually more compact when saved as PNG images.
A third objective — which applies only to animated images — is to ensure that the desired frames appear if the browser either (a) does not animate the image (displaying only the first frame), or (b) displays the animation only once, even though the animation is supposed to loop indefinitely (ending by displaying the last frame). The browser may fail to play the animation as specified, for example, if the user has set an option not to animate images, or not to loop animations.
A fourth objective — which also applies only to animated images — is to ensure that the images will not likely trigger epileptic seizures: flashing or flickering images can cause epileptic seizures in some people, hence animated graphics must be carefully designed not to trigger seizures, e.g. by avoiding flashing at a frequency from 2–55 Hertz. For more about this, see Photosensitivity and Seizures and Photosensitive Epilepsy.
Note that images can be saved more compactly using more modern image formats; however, these formats are unfortunately not well supported by today’s browsers. These formats include HEIF, JPEG2000, JXR, MNG, SVG, and WebP. The major browser makers have, unfortunately, not been proäctive in supporting new image formats. Also, even if all the major browser makers agreed to support new image formats, there are legacy browsers which will never support either the HTML 5 <picture> tag or a new image format, and which will still be in use a decade from now — or longer — making it difficult to switch to a new image format.
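If you do adopt a newer format, the HTML 5 <picture> tag at least lets you supply a fallback image for browsers which don’t support the format (or the tag); a minimal sketch, with illustrative filenames:

    <picture>
      <source srcset="img/map_city.webp" type="image/webp">  <!-- used only if the browser supports WebP -->
      <img src="img/map_city.png" alt="City map">             <!-- fallback for every other browser -->
    </picture>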
Practice making a site for mobile devices such as cellphones. Coping with the constraints of such devices will teach you lessons which will enable you to improve all your websites.
For example, designing a site for small screens will forcibly remind you that users have a wide variety of screen sizes and resolutions, hence attune you to making sites which better serve a wider range of users. Also, designing a site for mobile devices should force you to reëvaluate what is truly necessary in a website: discarding what isn’t essential will make your sites simpler, more focused, more effective, and easier to test.
One special insight which you may gain from making a site for mobile devices is that people on the go may have different needs than people seated in front of a PC: people on the go are more focused on what they can use right away, for what they are doing at the moment; you should find out what they will most want, make menus which make it easier to find it, and present it as clearly and compactly as possible. This author has found it useful, for example, to produce pages in which information which normally appears for users of desktop PCs — such as decorative images and supplemental text — won’t appear to users of mobile devices, or at least won’t appear on pages which mobile users would often access.
When creating new pages, or changing existing pages, do your work one step at a time, with verifiable results at each step.
This makes it easier to develop and test your work: if a new problem appears, you will know that it’s caused by what you have just done, and therefore you only have to review the little bit of work you have done since your previous step. This process is called “incremental development”. Making massive changes before testing any of them makes testing — and debugging — a massive problem.
Reduce, Reuse, Recycle: this mantra helps reduce wasted time, in more ways than you might expect.
Reduce: as repeatedly said here, keep it simple and focus on effectiveness. You are designing the site for the user, not to show off your amazing technical virtuosity⸮ Keeping it simple can result in a better site with simpler and smaller code, and this will, of course, be simpler to test.
For example, don’t use software and image rollovers for buttons: instead, make simpler buttons — preferably using styled text, not images — and either don’t do rollovers, or do CSS rollovers. I once inherited a site with a menu generated by 230 lines of HTML, plus Java to control rollovers; I had to change the menu because it was impaired by the Eolas update, and I made a nicer menu with just 16 lines of HTML and 4 lines of CSS, with CSS rollovers. The old and new menus are pictured below:
In many cases not even graphic buttons are needed: simple text links and buttons, perhaps with rollovers, are often quite effective, and — as stated above — you should focus on effectiveness.
You might argue that what you want to do can’t be done without JavaScript or a plethora of image buttons, and this may be true, though CSS can do more than you might expect (as shown by the CSS styled text buttons depicted below): but is what you want needed? It may be that what you want is very pretty, but that something simpler will suffice; it may even be that something simpler will be better, e.g. because CSS styled text buttons like these are more accessible, and they load faster than image buttons:
NB: some of these buttons have background images, and one uses an image (of a key) as content, but the buttons are still essentially text buttons: the set of buttons on a site all share the same background images, all styling and rollovers are done using CSS, and buttons automatically resize when the user changes their preferred font size.
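A minimal sketch of a CSS styled text button with a CSS rollover (not the author’s actual buttons; the class name and colours are illustrative):

    <a class="button" href="home.html">Home</a>

styled with CSS such as:

    .button {
      display: inline-block;
      padding: 0.3em 1em;
      background: #336;
      color: white;
      text-decoration: none;
    }
    .button:hover, .button:focus {   /* the rollover: restyle on mouse-over or keyboard focus */
      background: #558;
    }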
Reuse: create common elements that are reused throughout the site. By creating a common element, you reduce the need to reïnvent and retest it when it’s used elsewhere in your site.
For example: use common blocks of code for elements such as headers and footers that appear on every page, perhaps using server side includes if your server lets you do this; create a template from one of the first pages you create, using the template when making later pages; create flexible CSS classes for common objects, e.g. sidebars. The sidebars on this page, for example, are styled using four CSS rules, plus one rule for the dropcap.
Using common elements also helps make pages look more consistent.
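For example, on a server with server side includes enabled, a common footer might be pulled into every page with a directive such as this (the filename is illustrative):

    <!-- the footer is maintained in one file and reused on every page -->
    <!--#include virtual="/inc/footer.html" -->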
Recycle: think of future sites when you design a site. Sometimes you can spend just a bit more time making a component or perfecting a technique that is a bit more general than is strictly required for one site, but which may be recycled in later sites. If you create just one or two for each site you make, you can quickly amass a library of proven components and techniques which you can recycle in other sites, saving much time in the long run.
E.g. I made a browser sniffer for one site, and have since reused it in many other sites. It may do more than is needed in any given site — e.g. it identifies the Opera browser even when this doesn’t matter for a given site — but the advantage of being able to use the same proven block of code over and over outweighs the overhead of a few redundant lines of code.
Tip: when designing software, object-oriented programming often makes it easier to think about making components which can be repurposed.
Location, Location, Location: this mantra encourages consistency in organizing files to reduce confusion and errors.
Create a simple, consistent, fairly flat file structure for a small site’s files.
E.g. I normally put all source files — the HTML, CSS, and JS files — in the home directory, static image files in img/, animated image files in ani/, and document files in doc/.
One nasty factor to consider in picking pathnames is that Windows considers upper- and lowercase characters in pathnames to be the same, whereas most other O/S’s consider them to be different. E.g. Windows will deem BrowserNews/ and browsernews/ to be the same, but Linux won’t. This means that, if the case of a character in a pathname is wrong, and both the development PC and the server run Windows, you will likely not notice the error during testing. This also means that if other sites link to your site — or if users set bookmarks to your site — URLs with characters in the wrong case may not be noticed. But if the site is later moved to a non-Windows server, the errors will become important, and the URLs will fail. You should therefore think twice before picking pathnames with mixed case characters.
Note that this issue doesn’t affect domain names: upper- and lowercase characters in a URL’s domain name are defined to be the same, no matter the O/S. E.g. www.BrowserNews.com is always the same as www.browsernews.com.
A simple, consistent, flat file structure makes it easier to locate files.
It can also help reduce coding errors: e.g. if you put all document files in doc/, then you know that any document pathname will be doc/*.*. (This may sound trivial, but I have often seen sites where images or documents did not appear simply because the designer had specified the wrong pathnames.)
I go one step further and use a simple, consistent scheme to name files. E.g. all images that are buttons have pathnames beginning with img/but_, images of icons have pathnames beginning with img/ico_, and images of maps have pathnames beginning with img/map_. (Again this may sound trivial, but it helps to organize and identify the files, and to reduce coding errors: e.g. you won’t think that img/but_home.png is anything other than the image of a button.)
You may prefer a different convention. That’s fine: the important thing is that you be consistent with whatever convention you adopt.
Also, be consistent in the internal organization of a file.
For example, I often organize a CSS file in four sections:
Simple rules that apply to the whole site, e.g. a rule for styling a normal link.
Sets of rules that apply to common elements on the site, e.g. rules for styling a sidebar.
Sets of rules that apply only to specific pages, e.g. rules for styling the sitemap on a Site Map page.
Sets of rules for printing, e.g. to specify a simpler monochrome look when printing pages.
Organizing CSS files in this way can make it easier to find a relevant rule and can reduce the likelihood that a rule will fail because of a related rule that isn’t nearby in the file.
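A sketch of such a file’s skeleton (the rules shown are mere placeholders):

    /* 1. Simple site-wide rules */
    a:link { color: #006; }

    /* 2. Rules for common elements */
    .sidebar { float: right; width: 16em; }

    /* 3. Rules for specific pages */
    #sitemap li { list-style: none; }

    /* 4. Rules for printing */
    @media print { #menu { display: none; } }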
There are many websites out there, some good, some great, many mediocre. There are also many textbooks that offer standard solutions to common problems. Study the websites. Study the books. Don’t reïnvent the wheel and create something from scratch when you can adapt existing, proven ideas and techniques. These ideas and techniques can save you much time and enable you to build better sites.
Especially valuable are the techniques for dealing with bugs and poor standards support. E.g. IE6 has a bug, the peekaboo bug, which sometimes wrongly hides content: ingenious designers have, often through trial and error, evolved non-obvious solutions for many bugs like this; you will save much time by finding out how to quickly find their solutions.
Initial testing will be easier if you configure your test browsers so that their default fonts and font sizes are very similar. You may, otherwise, initially waste time on differences which are not due to browser differences, but simply to configuration differences; or you may overlook differences which are due to how some browsers (mainly old versions of IE) size fonts.
One factor makes it difficult or impossible to make default fonts the same: most browsers don’t let the user choose all the default fonts. For example, IE doesn’t let the user choose any of the CSS generic fonts, serif, sans-serif, cursive, fantasy, and monospace: IE does let the user choose a proportional font and a monospace font, but they are unrelated to the CSS generic fonts. Many other browsers have similar limitations: e.g. Firefox lets the user choose some, but not all, of the CSS generic fonts. Few browsers — SeaMonkey and Vivaldi being the notable exceptions — let the user configure them all.
Also, two factors may make it difficult or impossible to make default sizes the same.
One factor is that browsers differ in the sizes they assign to the CSS keywords font-size:small, font-size:x-small, etc. The Resources ⮞ Font Metrics and Sizes page, which lists the medium and small sizes for a variety of fonts, illustrates this point: viewing that page with several different browsers shows how much the sizes can vary from browser to browser. The other factor is that a browser’s preferences may not offer the exact default size you want. It may be possible to set a font size not normally offered by the browser: for example, Firefox’s about:config and Opera’s opera:config pages can be used to set font sizes which are not in the lists of sizes available to normal users. Alternatively, some browsers — notably Safari — do what all browsers should do: let the user enter the size they want if the size isn’t in the browser’s list.
The Resources ⮞ Fluid Design and Font Sizes page, which discusses font issues related to fluid design, also illustrates how text looks when sized in different ways: viewing sections of that page using different browsers may be helpful in comparing the default font sizes of the browsers, so that their default sizes can be adjusted to produce more similar results.
It should be noted that setting the same default font size in different browsers doesn’t guarantee that a page will have the same font size in these browsers. For example, this author observed that, when the default font size was set to 17px in both Firefox 3 and Opera 9, medium text was indeed the same size; however, text sized using font-size:small in CSS was not the same size. Using percentages of the default size appears to produce more consistent results: i.e. font-size:80% will produce more consistent sizes in various browsers.
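A minimal sketch of percentage sizing (the selectors and values are illustrative):

    body      { font-size: 100%; }   /* start from the user’s chosen default size */
    .sidebar  { font-size: 90%; }    /* size page elements relative to that default */
    .footnote { font-size: 80%; }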
It is still essential, of course, to test the site with a variety of font sizes, but this should be done only after resolving problems unrelated to font size.
Many browsers offer error consoles: optional windows which display such things as:
JavaScript error messages: these indicate actual errors in the JavaScript code, e.g. use of an undeclared variable or missing method.
JavaScript warning messages: these indicate valid but questionable aspects of the JavaScript code, e.g. assigning a value to an undeclared variable.
CSS error messages: these indicate actual errors in the CSS, e.g. declaring an invalid property. Most CSS errors can be avoided by using a CSS validator, however, CSS errors resulting from dynamic JavaScript can be avoided only by fixing the code generating the error.
Before any testing you should manually clear the console of existing messages. During testing, you should look for any messages which may appear: for any message, whether an error or warning, you should change your code so that the message will no longer appear.
It is important to fix the causes of the errors and warnings, for example by explicitly declaring all JavaScript variables. Fixing the errors is clearly necessary. Fixing the warnings isn’t necessary, but is highly advisable: for if warnings are not fixed, the error console may be filled with so many warning messages that any error messages will go unnoticed.
A troublesome problem is a browser like Internet Explorer which supports non-standard JavaScript or CSS: the only way to prevent such code from generating error or warning messages in (for example) the Firefox Error Console is either (a) to stop using the non-standard code, or (b) to hide the non-standard code from browsers which don’t support it. With Internet Explorer, for example, non-standard CSS can be put in a file which is linked to only via Internet Explorer conditional comments, as sketched below.
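A minimal sketch of the latter approach (the filenames are illustrative):

    <!-- standard style sheet, seen by every browser -->
    <link rel="stylesheet" type="text/css" href="site.css">
    <!-- non-standard CSS, seen only by Internet Explorer -->
    <!--[if IE]>
      <link rel="stylesheet" type="text/css" href="site-ie.css">
    <![endif]-->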
Spambots can force many FormMail pages to send spam. In the worst case, FormMail pages may send spam to massive lists of arbitrary email addresses. In milder cases, FormMail pages may send spam only to a domain’s legitimate email addresses.
There are two steps to hardening FormMail pages to block spambots:
NB: when FormMail pages block spambots, overall spam appears to be dramatically reduced, not just spam sent by FormMail pages. This suggests that many spammers stop sending spam to email addresses used by hardened FormMail pages.
Plan how you will test a site. Create a checklist. And follow the plan.
For example, the test plan might define three types of testing: initial, secondary, and tertiary:
Initial testing is done early, after a small number of typical pages have been created. The purpose of initial testing is to make sure that the design works. If there is a fundamental flaw in the design, this is the time to fix it; if there is a problem with a certain browser, this is the time to cope with it.
Initial testing begins with the use of automated test tools. This is followed by tests with browsers that are highly compliant with standards, then with major browsers that are less compliant with standards, and finally with all the remaining browsers in the test suite. Testing should be comprehensive, and testers should try hard to make the design fail.
It is critical to use automated test tools as the first step in formal testing. It is surprisingly easy to make broken pages which look right; automated testing saves time not only by helping to find out why pages aren’t rendered as expected, but also by helping to identify problems in pages which look perfectly fine.
Initial testing doesn’t end until problems have been fixed.
Initial testing may involve the client, e.g. to enable the client to review the design: many people are unable to envision something until they see it in action, and it isn’t uncommon for a client to see initial work and realize that it was not what they expected, or that it has unexpected usability problems. (Changes at this point may be chargeable, assuming that the client signed off on the initial design.)
NB: because modern versions of Chrome, Opera, and Vivaldi use the same browser engine, it’s only necessary to test sites with one of these browsers; I suggest Vivaldi, because it’s more configurable, e.g. lets you choose the generic fonts.
Caution: problems can lurk in a site that looks fine, so testing may miss problems. It is therefore important to maintain a neutral or skeptical attitude during site development and testing.
Here is a sample test checklist:
Secondary testing is less formal, and is done repeatedly throughout the development cycle whenever the designer wants to check their work. The purpose of secondary testing is to see if the work done to date is okay.
Secondary testing begins with the use of automated test tools. This is followed by tests with browsers that are highly compliant with standards, then perhaps with major browsers that are less compliant with standards. Minor browsers are not tested unless there is some reason to suspect that there may be a problem: secondary testing should be quick and therefore less comprehensive than initial testing; it’s assumed that initial testing has already found and fixed critical problems with minor browsers. If a problem is found, it may be fixed then, or it may simply be added to a to-do list.
Secondary testing may end with an interim site which is made available to the client, who may wish to request changes, but it’s important to make it clear that the work is incomplete and may have outstanding problems. (Changes requested by the client at this point may be chargeable, assuming that the client signed off on the initial design.)
Tertiary testing is done before deploying the site. The purpose of tertiary testing is to make sure that the site is ready for use.
Tertiary testing begins with the use of automated test tools. This is followed by tests with browsers that are highly compliant with standards, then with major browsers that are less compliant with standards, and finally with all the remaining browsers in the test suite. Testing should be comprehensive. Problems should be fixed, with testing repeated on pages affected by any changes needed to fix the problems.
The client must sign off on the site when final testing has proven that the site meets the requirements.
The test plan should also define how changes will be handled, and how the site is to be re-tested:
It is most cost effective to add desired changes to a to-do list, and then to schedule the changes so that related changes and very minor changes are done at the same time.
It is critical to establish the scope of changes in order to determine what must be re-tested. In some cases, e.g. when changes are related, only one page or a set of pages need be re-tested; in some cases, e.g. when changes are very minor, it suffices to test with only one browser, or with a small set of browsers; and in some cases it’s necessary to re-test the entire site with the full test suite. In the last case, as many changes as possible should be done together so that as many changes as possible can be tested at one time.
Testing is complicated by the fact that a lot of people use old versions of browsers. Some use browsers which are no longer supported. This means that it’s necessary to do a full test with a large number of versions of browsers.
There are two types of old browsers. One type, exemplified by Internet Explorer, exists in versions spanning many years, with large differences between the versions. The other type, exemplified by Firefox, exists in versions released very closely together, with small differences between the versions. The first type requires that each version be tested with a new site. The second type allows older versions to be skipped during tests, though this creates the risk of compatibility issues with untested versions.