author     Matt A. Tobin <mattatobin@localhost.localdomain>  2018-02-02 04:16:08 -0500
committer  Matt A. Tobin <mattatobin@localhost.localdomain>  2018-02-02 04:16:08 -0500
commit     5f8de423f190bbb79a62f804151bc24824fa32d8 (patch)
tree       10027f336435511475e392454359edea8e25895d /testing/web-platform/tests/docs
parent     49ee0794b5d912db1f95dce6eb52d781dc210db5 (diff)
Add m-esr52 at 52.6.0
Diffstat (limited to 'testing/web-platform/tests/docs')
-rw-r--r--  testing/web-platform/tests/docs/OWNERS                     |   4
-rw-r--r--  testing/web-platform/tests/docs/configuration.md           |  97
-rw-r--r--  testing/web-platform/tests/docs/css-metadata.md            | 218
-rw-r--r--  testing/web-platform/tests/docs/css-naming.md              |  77
-rw-r--r--  testing/web-platform/tests/docs/css-user-styles.md         |  88
-rw-r--r--  testing/web-platform/tests/docs/github-101.md              | 361
-rw-r--r--  testing/web-platform/tests/docs/lint-tool.md               | 136
-rw-r--r--  testing/web-platform/tests/docs/manual-test.md             |  72
-rw-r--r--  testing/web-platform/tests/docs/reftests.md                | 152
-rw-r--r--  testing/web-platform/tests/docs/review-checklist.md        | 128
-rw-r--r--  testing/web-platform/tests/docs/review-process.md          |  39
-rw-r--r--  testing/web-platform/tests/docs/running_tests.md           |  34
-rw-r--r--  testing/web-platform/tests/docs/submission-process.md      |  42
-rw-r--r--  testing/web-platform/tests/docs/test-format-guidelines.md  | 346
-rw-r--r--  testing/web-platform/tests/docs/test-style-guidelines.md   | 437
-rw-r--r--  testing/web-platform/tests/docs/test-templates.md          | 135
16 files changed, 2366 insertions(+), 0 deletions(-)
diff --git a/testing/web-platform/tests/docs/OWNERS b/testing/web-platform/tests/docs/OWNERS
new file mode 100644
index 000000000..af3e0845c
--- /dev/null
+++ b/testing/web-platform/tests/docs/OWNERS
@@ -0,0 +1,4 @@
+@sideshowbarker
+@dontcallmedom
+@zcorpan
+@Ms2ger
diff --git a/testing/web-platform/tests/docs/configuration.md b/testing/web-platform/tests/docs/configuration.md
new file mode 100644
index 000000000..6d5bbbca8
--- /dev/null
+++ b/testing/web-platform/tests/docs/configuration.md
@@ -0,0 +1,97 @@
+Web-platform-tests are designed to run in a self-contained environment
+on the local computer. All the required resources are packaged with
+the web-platform-tests repository.
+
+## Requirements
+
+ * [git](http://git-scm.com/)
+ * [Python 2.7](http://python.org)
+ * [OpenSSL](https://www.openssl.org)
+
+## Hosts configuration
+
+The tests depend on certain domains being available. These are
+typically configured locally with `web-platform.test` as the top-level
+domain, five subdomains, and one intentionally unroutable subdomain.
+To configure these domains you need to edit your
+[`hosts` file](http://en.wikipedia.org/wiki/Hosts_%28file%29%23Location_in_the_file_system).
+The following entries are required:
+
+```
+127.0.0.1 web-platform.test
+127.0.0.1 www.web-platform.test
+127.0.0.1 www1.web-platform.test
+127.0.0.1 www2.web-platform.test
+127.0.0.1 xn--n8j6ds53lwwkrqhv28a.web-platform.test
+127.0.0.1 xn--lve-6lad.web-platform.test
+0.0.0.0 nonexistent-origin.web-platform.test
+```
+
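+On Linux or Mac, you can append these entries from a shell; a minimal
+sketch (assuming the standard `/etc/hosts` location and sudo access):
+
+```
+sudo sh -c 'echo "127.0.0.1 web-platform.test" >> /etc/hosts'
+# ...and similarly for each of the other entries above
+```
+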
+## Cloning the Repository
+
+If you have not done so, clone the web-platform-tests repository:
+
+ git clone --recursive git@github.com:w3c/web-platform-tests.git
+
+If you have already made a clone, but did not specify `--recursive`
+update all submodules:
+
+ git submodule update --init --recursive
+
+## Font Files
+
+A number of tests rely upon a set of custom fonts.
+[Ahem](https://github.com/w3c/csswg-test/raw/master/fonts/ahem/ahem.ttf)
+must be installed according to the normal font-installation procedure
+for your operating system. Tests that require other fonts state this
+explicitly and provide links to the fonts they need.
+
+## Running the Test Server
+
+The test environment can be started using
+
+ ./serve
+
+This will start HTTP servers on two ports and a websockets server on
+one port. By default one web server starts on port 8000 and the other
+ports are randomly-chosen free ports. Tests must be loaded from the
+*first* HTTP server in the output. To change the ports, copy the
+`config.default.json` file to `config.json` and edit the new file,
+replacing the part that reads:
+
+```
+"http": [8000, "auto"]
+```
+
+with some port of your choice, e.g.
+
+```
+"http": [1234, "auto"]
+```
+
+If you installed OpenSSL in such a way that running `openssl` at a
+command line doesn't work, you also need to adjust the path to the
+OpenSSL binary. This can be done by adding a section to `config.json`
+like:
+
+```
+"ssl": {"openssl": {"binary": "/path/to/openssl"}}
+```
+
+### Windows Notes
+
+Running wptserve with SSL enabled on Windows typically requires
+installing an OpenSSL distribution.
+[Shining Light](https://slproweb.com/products/Win32OpenSSL.html)
+provides a convenient installer that is known to work, but requires a
+little extra setup.
+
+After installation ensure that the path to OpenSSL is on your `%Path%`
+environment variable.
+
+Then set the path to the default OpenSSL configuration file (usually
+something like `C:\OpenSSL-Win32\bin\openssl.cfg`) in the server
+configuration. To do this, copy `config.default.json` in the
+web-platform-tests root to `config.json`, then edit the JSON so that
+the key `ssl/openssl/base_conf_path` has a value that is the path to
+the OpenSSL config file.
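+
+For example, the relevant part of `config.json` might then look like
+this (a sketch; the exact path is an assumption, so substitute your
+own install location):
+
+```
+"ssl": {"openssl": {"base_conf_path": "C:\\OpenSSL-Win32\\bin\\openssl.cfg"}}
+```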
diff --git a/testing/web-platform/tests/docs/css-metadata.md b/testing/web-platform/tests/docs/css-metadata.md
new file mode 100644
index 000000000..aacc868d4
--- /dev/null
+++ b/testing/web-platform/tests/docs/css-metadata.md
@@ -0,0 +1,218 @@
+CSS tests have some additional requirements for metadata.
+
+### Specification Links
+
+``` html
+<link rel="help" href="RELEVANT_SPEC_SECTION" />
+```
+
+The specification link elements provide a way to align the test with
+information in the specification being tested.
+
+* Links should link to relevant sections within the specification
+* Use the anchors from the specification's Table of Contents
+* A test can have multiple specification links
+ * Always list the primary section that is being tested as the
+ first item in the list of specification links
+ * Order the list from the most used/specific to least used/specific
+  * There is no need to list common incidental features (like the
+    color green used to validate the test) unless the test is
+    specifically testing that feature
+* If the test is part of multiple test suites, link to the relevant
+ sections of each spec.
+
+Example 1:
+
+``` html
+<link rel="help"
+href="http://www.w3.org/TR/CSS21/text.html#alignment-prop" />
+```
+
+Example 2:
+
+``` html
+<link rel="help"
+href="http://www.w3.org/TR/CSS21/text.html#alignment-prop" />
+<link rel="help" href="http://www.w3.org/TR/CSS21/visudet.html#q7" />
+<link rel="help"
+href="http://www.w3.org/TR/CSS21/visudet.html#line-height" />
+<link rel="help"
+href="http://www.w3.org/TR/CSS21/colors.html#background-properties" />
+```
+
+### Requirement Flags
+
+<table>
+<tr>
+ <th>Token</th>
+ <th>Description</th>
+</tr>
+<tr>
+ <td>ahem</td>
+ <td>Test requires
+ <a href="http://www.w3.org/Style/CSS/Test/Fonts/Ahem">Ahem font</a>
+ </td>
+</tr>
+<tr>
+ <td>animated</td>
+ <td>Test is animated in final state. (Cannot be verified using
+ reftests/screenshots.)</td>
+</tr>
+<tr>
+ <td>asis</td>
+ <td>The test has particular markup formatting requirements and
+ cannot be re-serialized.</td>
+</tr>
+<tr>
+ <td>combo</td>
+ <td>Test, which must have an unsuffixed filename number, is
+ strictly the union of all the suffixed tests with the same name
+ and number. (See File name format, below.)</td>
+</tr>
+<tr>
+ <td>dom</td>
+ <td>Requires support for JavaScript and the Document Object Model (
+ DOM)</td>
+</tr>
+<tr>
+ <td>font</td>
+ <td>Requires a specific font to be installed. (Details must be
+ provided and/or the font linked to in the test description)</td>
+</tr>
+<tr>
+ <td>history</td>
+ <td>User agent session history is required. Testing :visited is a
+ good example where this may be used.</td>
+</tr>
+<tr>
+ <td>HTMLonly</td>
+ <td>Test case is only valid for HTML</td>
+</tr>
+<tr>
+ <td>http</td>
+ <td>Requires HTTP headers</td>
+</tr>
+<tr>
+ <td>image</td>
+ <td>Requires support for bitmap graphics and the graphic to load
+ </td>
+</tr>
+<tr>
+ <td>interact</td>
+ <td>Requires human interaction (such as for testing scrolling
+ behavior)</td>
+</tr>
+<tr>
+ <td>invalid</td>
+ <td>Tests handling of invalid CSS. Note: This case contains CSS
+ properties and syntax that may not validate.</td>
+</tr>
+<tr>
+ <td>may</td>
+ <td>Behavior tested is preferred but OPTIONAL.
+ <a href="http://www.ietf.org/rfc/rfc2119.txt">[RFC2119]</a></td>
+</tr>
+<tr>
+ <td>namespace</td>
+ <td>Requires support for XML Namespaces</td>
+</tr>
+<tr>
+ <td>nonHTML</td>
+ <td>Test case is only valid for formats besides HTML (e.g. XHTML
+ or arbitrary XML)</td>
+</tr>
+<tr>
+ <td>paged</td>
+ <td>Only valid for paged media</td>
+</tr>
+<tr>
+ <td>scroll</td>
+ <td>Only valid for continuous (scrolling) media</td>
+</tr>
+<tr>
+ <td>should</td>
+ <td>Behavior tested is RECOMMENDED, but not REQUIRED. <a
+ href="http://www.ietf.org/rfc/rfc2119.txt">[RFC2119]</a></td>
+</tr>
+<tr>
+ <td>speech</td>
+ <td>Device supports audio output. Text-to-speech (TTS) engine
+ installed</td>
+</tr>
+<tr>
+ <td>svg</td>
+ <td>Requires support for vector graphics (SVG)</td>
+</tr>
+<tr>
+ <td>userstyle</td>
+ <td>Requires a user style sheet to be set</td>
+</tr>
+<tr>
+ <td>32bit</td>
+ <td>Assumes a 32-bit integer as the minimum (-2147483648) or
+ maximum (2147483647) value</td>
+</tr>
+<tr>
+ <td>96dpi</td>
+ <td>Assumes 96dpi display</td>
+</tr>
+</table>
+
+
+Example 1 (one token applies):
+``` html
+<meta name="flags" content="invalid" />
+```
+
+Example 2 (multiple tokens apply):
+
+``` html
+<meta name="flags" content="ahem image scroll" />
+```
+
+Example 3 (no tokens apply):
+
+``` html
+<meta name="flags" content="" />
+```
+
+### Test Assertions
+
+``` html
+<meta name="assert" content="TEST ASSERTION" />
+```
+
+This element should contain a complete detailed statement expressing
+what specifically the test is attempting to prove. If the assertion
+is only valid in certain cases, those conditions should be described
+in the statement.
+
+The assertion should not be:
+
+* A copy of the title text
+* A copy of the test verification instructions
+* A duplicate of another assertion in the test suite
+* A line or reference from the CSS specification unless that line is
+ a complete assertion when taken out of context.
+
+The test assertion is **optional**. It helps the reviewer understand
+the goal of the test so that they can make sure it is being tested
+correctly. Also, in case a problem is found with the test
+later, the testing method (e.g. using `color` to determine pass/fail)
+can be changed (e.g. to using `background-color`) while preserving
+the intent of the test (e.g. testing support for ID selectors).
+
+Examples of good test assertions:
+
+* "This test checks that a background image with no intrinsic size
+ covers the entire padding box."
+* "This test checks that 'word-spacing' affects each space (U+0020)
+ and non-breaking space (U+00A0)."
+* "This test checks that if 'top' and 'bottom' offsets are specified
+ on an absolutely-positioned replaced element, then any remaining
+ space is split amongst the 'auto' vertical margins."
+* "This test checks that 'text-indent' affects only the first line
+ of a block container if that line is also the first formatted line
+ of an element."
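+
+Putting it together, a sketch of a complete CSS-test metadata block
+(the spec link, flag, and assertion shown are illustrative):
+
+``` html
+<link rel="help" href="http://www.w3.org/TR/CSS21/text.html#alignment-prop" />
+<meta name="flags" content="ahem" />
+<meta name="assert" content="The 'text-align' property aligns inline
+content to both edges of the line box when set to 'justify'." />
+```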
diff --git a/testing/web-platform/tests/docs/css-naming.md b/testing/web-platform/tests/docs/css-naming.md
new file mode 100644
index 000000000..c508ac33f
--- /dev/null
+++ b/testing/web-platform/tests/docs/css-naming.md
@@ -0,0 +1,77 @@
+CSS tests require a specific naming convention. This is also a good,
+but not mandatory, style to use for other tests.
+
+## File Name
+
+The file name format is ```test-topic-###.ext``` where `test-topic`
+somewhat describes the test, `###` is a zero-filled number used to
+keep the file names unique, and `ext` is typically either
+`html` or `xht`.
+
+Test filenames must also be globally unique. There cannot be multiple
+tests with the same filename, even if they are in different parent
+directories. For example, having both
+`/css-values-3/foo-001.html` and `/css-variables-1/foo-001.html`
+would not be allowed. This restriction is in place because some tools
+that use the CSS tests dump all of the test files into a single
+directory, which would cause all but one of the tests with the same
+filename to be clobbered and accidentally skipped.
+
+### test-topic
+
+`test-topic` is a short identifier that describes the test. The
+`test-topic` should avoid conjunctions, articles, and prepositions.
+It is a file name, not an English phrase: it should be as concise
+as possible.
+
+Examples:
+```
+ margin-collapsing-###.ext
+ border-solid-###.ext
+ float-clear-###.ext
+```
+
+### `###`
+
+`###` is a zero-filled number used to keep the file names unique when
+files have the same test-topic name.
+
+Note: The number format is limited to 999 cases. If you go over this
+number it is recommended that you reevaluate your test-topic name.
+
+For example, in the case of margin-collapsing there are multiple
+cases so each case could have the same test-topic but different
+numbers:
+
+```
+ margin-collapsing-001.xht
+ margin-collapsing-002.xht
+ margin-collapsing-003.xht
+```
+
+There may also be a letter appended to the number, which can be
+used to indicate variants of a test.
+
+For example, ```float-wrap-001l.xht``` and ```float-wrap-001r.xht```
+might be left and right variants of a float test.
+
+If tests using both the unsuffixed number and the suffixed number
+exist, the suffixed tests must be subsets of the unsuffixed test.
+
+For example, if ```bidi-004``` and ```bidi-004a``` both exist,
+```bidi-004a``` must be a subset of ```bidi-004```.
+
+If the unsuffixed test is strictly the union of the suffixed tests,
+i.e. covers all aspects of the suffixed tests (such that a user agent
+passing the unsuffixed test will, by design, pass all the suffixed
+tests), then the unsuffixed test should be marked with the combo flag.
+
+If ```bidi-004a``` and ```bidi-004b``` cover all aspects of
+```bidi-004``` (except their interaction), then ```bidi-004``` should
+be given the combo flag.
+
+### ext
+
+`ext` is the file extension or format of the file.
+For XHTML test files, it should be `xht`.
+For HTML (non-XML) test files, it should be `html`.
diff --git a/testing/web-platform/tests/docs/css-user-styles.md b/testing/web-platform/tests/docs/css-user-styles.md
new file mode 100644
index 000000000..317933969
--- /dev/null
+++ b/testing/web-platform/tests/docs/css-user-styles.md
@@ -0,0 +1,88 @@
+Some tests may require special user style sheets to be applied in
+order for the case to be verified. So that the proper indications and
+prerequisites can be displayed, every user style sheet should contain
+the following rule:
+
+``` css
+#user-stylesheet-indication
+{
+ /* Used by the harness to display an indication there is a user
+ style sheet applied */
+ display: block!important;
+}
+```
+
+The rule ```#user-stylesheet-indication``` is to be used by any
+harness running the test suite.
+
+A harness should identify tests that need a user style sheet by
+looking at their flags meta tag. It should then display appropriate
+messages indicating whether a style sheet is applied or whether a
+style sheet should not be applied.
+
+Harness style sheet rules:
+
+``` css
+.userstyle
+{
+ color: green;
+ display: none;
+}
+.nouserstyle
+{
+ color: red;
+ display: none;
+}
+```
+
+Harness userstyle flag found:
+
+``` html
+<p id="user-stylesheet-indication" class="userstyle">A user style
+sheet is applied.</p>
+```
+
+Harness userstyle flag NOT found:
+
+``` html
+<p id="user-stylesheet-indication" class="nouserstyle">A user style
+sheet is applied.</p>
+```
+
+Within the test case it is recommended that the case itself indicate
+which user style sheet is required.
+
+Examples: (code for the [`cascade.css`][cascade-css] file)
+
+``` css
+#cascade /* ID name should match user style sheet file name */
+{
+ /* Used by the test to hide the prerequisite */
+ display: none;
+}
+```
+
+The rule ```#cascade``` in the example above is used by the test
+page to hide the prerequisite text. The rule name should match the
+user style sheet CSS file name in order to keep this orderly.
+
+Examples: (code for [the `cascade-###.xht` files][cascade-xht])
+
+``` html
+<p id="cascade">
+ PREREQUISITE: The <a href="support/cascade.css">
+ "cascade.css"</a> file is enabled as the user agent's user style
+ sheet.
+</p>
+```
+
+The id value should match the user style sheet CSS file name and the
+user style sheet rule that is used to hide this text when the style
+sheet is properly applied.
+
+Please flag tests that require user style sheets with the userstyle
+flag so people running the tests know that a user style sheet is
+required.
+
+[cascade-css]: https://github.com/w3c/csswg-test/blob/master/css21/cascade/support/cascade.css
+[cascade-xht]: https://github.com/w3c/csswg-test/blob/master/css21/cascade/cascade-001.xht
diff --git a/testing/web-platform/tests/docs/github-101.md b/testing/web-platform/tests/docs/github-101.md
new file mode 100644
index 000000000..a1ee9fdfa
--- /dev/null
+++ b/testing/web-platform/tests/docs/github-101.md
@@ -0,0 +1,361 @@
+All the basics that you need to know are documented on this page, but for the
+full GitHub documentation, visit [help.github.com][help].
+
+If you are already an experienced Git/GitHub user, all you need to
+know is that we use the normal GitHub Pull Request workflow for test
+submissions. The only unusual thing is that, to help with code review,
+we ask that you do not amend or otherwise squash your submission as
+you go along, but keep pushing updates as new commits.
+
+If you are a first-time GitHub user, read on for more details of the workflow.
+
+## Setup
+
+1. Create a GitHub account if you do not already have one on
+ [github.com][github]
+
+2. Download and install the latest version of Git:
+ [http://git-scm.com/downloads][git]. Please refer to the instruction there
+ for different platforms.
+
+3. Configure your settings so your commits are properly labeled:
+
+ On Mac or Linux or Solaris, open the Terminal.
+
+ On Windows, open Git Bash (From the Start Menu > Git > Git Bash).
+
+ At the prompt, type:
+
+ $ git config --global user.name "Your Name"
+
+ _This will be the name that is displayed with your test submissions_
+
+ Next, type:
+
+ $ git config --global user.email "your_email@address.com"
+
+ _This should be the email address you used to create the account in Step 1._
+
+ Next, type:
+
+ $ git config --global push.default upstream
+
+ This ensures that git push will never unintentionally create or update
+ a remote branch.
+
+4. (Optional) If you don't want to enter your username and password every
+ time you talk to the remote server, you'll need to set up password caching.
+ See [Caching your GitHub password in Git][password-caching].
+
+## Test Repositories
+
+The test repository that you contribute to will depend on the specification
+that you are testing. Currently there are two test repositories, one for CSS
+specification tests and the main W3C repository that contains tests for all
+other specifications:
+
+**Main W3C test repository**: [github.com/w3c/web-platform-tests][main-repo]
+
+**CSS specification test repository**: [github.com/w3c/csswg-test][css-repo]
+
+## Fork
+
+Now that you have Git set up, you will need to fork the test repository. This
+will enable you to [submit][submit] your tests using a pull request (more on this
+[below][submit]).
+
+1. In the browser, go to the GitHub page for the test repository:
+
+ CSS test repository: [github.com/w3c/csswg-test][css-repo]
+
+ Main W3C test repository: [github.com/w3c/web-platform-tests][main-repo]
+
+2. Click the ![fork][forkbtn] button in the upper right.
+
+3. The fork will take several seconds, then you will be redirected to your
+ GitHub page for this forked repository. If you forked the HTML test repo
+ (for example), you will now be at
+ **https://github.com/username/web-platform-tests**.
+
+4. After the fork is complete, you're ready to [clone](#clone).
+
+## Clone
+
+If your [fork](#fork) was successful, the next step is to clone (download a copy of the files).
+
+### Clone the test repo
+At the command prompt, cd into the directory where you want to keep the tests.
+
+* If you forked the W3C Web Platform tests:
+
+ $ git clone --recursive https://github.com/username/web-platform-tests.git
+
+ If you forked the CSS tests:
+
+ $ git clone --recursive https://github.com/username/csswg-test.git
+
+ _This will download the tests into a directory named for the repo:_
+ `./web-platform-tests` _or_ `./csswg-test`.
+
+* You should now have a full copy of the test repository on your local
+ machine. Feel free to browse the directories on your hard drive. You can also
+ browse them on [github.com][github-w3c] and see the full history of contributions
+ there.
+
+### Clone the submodules
+
+* If you cloned the test repo and used the `--recursive` option, you'll find its submodules in `[repo-root]/resources/`.
+
+* If you cloned the test repo and did not use the `--recursive` option, you will likely have an empty `resources` directory at the root of your cloned repo. You can clone the submodules with these additional steps:
+
+ $ cd test-repo-root
+ $ git submodule update --init --recursive
+
+    _You should now see the submodules in the repository. For example,_ `testharness` _files should be in the resources directory._
+
+
+## Configure Remote / Upstream
+Synchronizing your forked repository with the W3C repository will enable you to
+keep your forked local copy up-to-date with the latest commits in the W3C
+repository.
+
+1. On the command line, navigate to the directory where your forked copy of
+ the repository is located.
+
+2. Make sure that you are on the master branch. This will be the case if you
+ just forked, otherwise switch to master.
+
+ $ git checkout master
+
+3. Next, add the remote for the repository you forked. This assigns the
+   original repository to a remote called "upstream":
+
+ If you forked the [Web Platform Tests repository][main-repo]:
+
+ $ git remote add upstream https://github.com/w3c/web-platform-tests.git
+
+ If you forked the [CSSWG-test repository][css-repo]:
+
+ $ git remote add upstream https://github.com/w3c/csswg-test.git
+
+4. To pull in changes in the original repository that are not present in your
+ local repository first fetch them:
+
+ $ git fetch upstream
+
+ Then merge them into your local repository:
+
+ $ git merge upstream/master
+
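+   Alternatively, you can fetch and merge in a single step:
+
+       $ git pull upstream master
+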
+ For additional information, please see the [GitHub docs][github-fork-docs].
+
+## Branch
+
+Now that you have everything locally, create a branch for your tests.
+
+_Note: If you have already been through these steps and created a branch
+and now want to create another branch, you should always do so from the
+master branch. To do this follow the steps from the beginning of the [previous
+section][remote-upstream]. If you don't start with a clean master
+branch you will end up with a big nested mess._
+
+At the command line:
+
+ $ git checkout -b topic
+
+This will create a branch named `topic` and immediately
+switch this to be your active working branch.
+
+_The branch name should describe specifically what you are testing.
+For Example:_
+
+ $ git checkout -b flexbox-flex-direction-prop
+
+You're ready to start writing tests! Come back to this page when you're ready to
+[commit][commit] them or [submit][submit] them for review.
+
+
+## Commit
+
+Before you submit your tests for review and contribution to the main test
+repo, you'll need to first commit them locally, where you now have your own
+personal version control system with git. In fact, as you are writing your
+tests, you may want to save versions of your work as you go before you submit
+them to be reviewed and merged.
+
+1. When you're ready to save a version of your work, go to the command
+ prompt and cd to the directory where your files are.
+
+2. First, ask git what new or modified files you have:
+
+ $ git status
+
+ _This will show you files that have been added or modified_.
+
+3. For all new or modified files, you need to tell git to add them to the
+ list of things you'd like to commit:
+
+ $ git add [file1] [file2] ... [fileN]
+
+ Or:
+
+ $ git add [directory_of_files]
+
+4. Run `git status` again to see what you have on the 'Changes to be
+ committed' list. These files are now 'staged'.
+
+5. Alternatively, you can run `git diff --staged`, which will show you the
+ diff of things to be committed.
+
+6. Once you've added everything, you can commit and add a message to this
+ set of changes:
+
+ $ git commit -m "Tests for indexed getters in the HTMLExampleInterface"
+
+7. Repeat these steps as many times as you'd like before you submit.
+
+## Submit
+
+If you're here now looking for more instructions, that means you've written
+some awesome tests and are ready to submit them. Congratulations and welcome
+back!
+
+1. The first thing you do before submitting them to the W3C repo is to push
+them back up to the server:
+
+ $ git push origin topic
+
+   _Note: Here,_ `origin` _refers to the remote repo from which you cloned
+ (downloaded) the files after you forked, referred to as
+ web-platform-tests.git in the previous example;_
+ `topic` _refers to the name of your local branch that
+ you want to push_.
+
+2. Now you can send a message that you have changes or additions you'd like
+ to be reviewed and merged into the main (original) test repository. You do
+ this by using a pull request. In a browser, open the GitHub page for your
+ forked repository: **https://github.com/username/web-platform-tests**.
+
+3. Now create the pull request. There are several ways to create a PR in the
+GitHub UI. Below is one method and others can be found on
+[GitHub.com][github-createpr]
+
+ a. Click the ![pull request link][pullrequestlink] link on the right side
+ of the UI, then click the ![new pull request][pullrequestbtn] button.
+
+ b. On the left, you should see the base repo is the
+ w3c/web-platform-tests. On the right, you should see your fork of that
+ repo. In the branch menu of your forked repo, switch to
+ `topic`
+ **Note:** If you see _'There isn't anything to compare'_, click the
+ ![edit][editbtn] button and make sure your fork and your
+   `topic` branch are selected on the right side.
+
+ c. Select the ![create pull request][createprlink] link at the top.
+
+ d. Scroll down and review the diff
+
+ e. Scroll back up and in the Title field, enter a brief description for
+ your submission.
+
+ Example: "Tests for CSS Transforms skew() function."
+
+ f. If you'd like to add more detailed comments, use the comment field
+ below.
+
+ g. Click ![the send pull request button][sendpullrequest]
+
+
+4. Wait for feedback on your pull request and once your pull request is
+accepted, delete your branch (see '
+[When Pull Request is Accepted][cleanup]').
+
+That's it! If you're currently at a Test the Web Forward event, find an
+expert nearby and ask for a review. If you're doing this on your own
+(AWESOME!), your pull request will go into a queue and will be reviewed
+soon.
+
+## Modify
+
+Once you submit your pull request, a reviewer will check your proposed changes
+for correctness and style. It is likely that this process will lead to some
+comments asking for modifications to your code. When you are ready to make the
+changes, follow these steps:
+
+1. Check out the branch corresponding to your changes e.g. if your branch was
+ called `topic`
+ run:
+
+ $ git checkout topic
+
+2. Make the changes needed to address the comments, and commit them just like
+ before.
+
+3. Push the changes to the remote branch containing the pull request:
+
+ $ git push origin topic
+
+4. The pull request will automatically be updated with the new commit. Note
+ for advanced users: it is generally discouraged to rebase your pull request
+ before review is complete. Tests typically have few conflicts so this
+ should not be a problem in the common case.
+
+Sometimes it takes multiple iterations through a review before the changes are
+finally accepted. Don't worry about this; it's totally normal. The goal of test
+review is to work together to create the best possible set of tests for the web
+platform.
+
+## Cleanup
+Once your pull request has been accepted, you will be notified in the GitHub
+UI and you may get an email. At this point, your changes have been merged
+into the main test repository. You do not need to take any further action
+on the test but you should delete your branch. This can easily be done in
+the GitHub UI by navigating to the pull requests and clicking the
+'Delete Branch' button.
+
+![pull request accepted delete branch][praccepteddelete]
+
+Alternatively, you can delete the branch on the command line.
+
+ $ git push origin --delete <branchName>
+
+## Tips & Tricks
+
+The following workflow is recommended:
+
+1. Start branch based on latest w3c/master
+2. Write tests
+3. Rebase onto latest w3c/master
+4. Submit tests
+5. Stop fiddling with the branch base until review is done
+6. After the PR has been accepted, delete the branch. (Every new PR should
+come from a new branch.)
+7. Synchronize your fork with the W3C repository by fetching your upstream and
+ merging it. (See '[Configure Remote / Upstream][remote-upstream]')
+
+You need to be able to set up a remote upstream, etc. Please refer to the
+[Pro Git Book][git-book] and enjoy reading.
+
+[branch]: #branch
+[commit]: #commit
+[clone]: #clone
+[css-repo]: https://github.com/w3c/csswg-test
+[forkbtn]: /assets/forkbtn.png
+[git]: http://git-scm.com/downloads
+[git-book]: http://git-scm.com/book
+[github]: https://github.com/
+[github-w3c]: https://github.com/w3c
+[github-fork-docs]: https://help.github.com/articles/fork-a-repo
+[github-createpr]: https://help.github.com/articles/creating-a-pull-request
+[help]: https://help.github.com/
+[main-repo]: https://github.com/w3c/web-platform-tests
+[password-caching]: https://help.github.com/articles/caching-your-github-password-in-git
+[pullrequestlink]: /assets/pullrequestlink.png
+[pullrequestbtn]: /assets/pullrequestbtn.png
+[editbtn]: /assets/editbtn.png
+[createprlink]: /assets/createprlink.png
+[sendpullrequest]: /assets/sendpullrequest.png
+[praccepteddelete]: /assets/praccepteddelete.png
+[submit]: #submit
+[remote-upstream]: #configure-remote-upstream
+[cleanup]: #cleanup
diff --git a/testing/web-platform/tests/docs/lint-tool.md b/testing/web-platform/tests/docs/lint-tool.md
new file mode 100644
index 000000000..56b2b4896
--- /dev/null
+++ b/testing/web-platform/tests/docs/lint-tool.md
@@ -0,0 +1,136 @@
+We have a lint tool for catching common mistakes in test files. You can run
+it manually by starting the `lint` executable from the root of your local
+web-platform-tests working directory like this:
+
+```
+./lint
+```
+
+The lint tool is also run automatically for every submitted pull request,
+and reviewers will not merge branches with tests that have lint errors, so
+you must either [fix all lint errors](#fixing-lint-errors), or you must
+[white-list test files](#updating-the-whitelist) to suppress the errors.
+
+## Fixing lint errors
+
+You must fix any errors the lint tool reports, unless an error is for
+something essential to a certain test or that for some other exceptional
+reason shouldn't prevent the test from being merged. In those cases you can
+[white-list test files](#updating-the-whitelist) to suppress the errors.
+Otherwise, use the details in this section to fix all errors reported.
+
+* **CONSOLE**: Test-file line has a `console.*(...)` call; **fix**: remove
+ the `console.*(...)` call (and in some cases, consider adding an
+ `assert_*` of some kind in place of it).
+
+* **CR AT EOL**: Test-file line ends with CR (U+000D) character; **fix**:
+ reformat file so each line just has LF (U+000A) line ending (standard,
+ cross-platform "Unix" line endings instead of, e.g., DOS line endings).
+
+* **EARLY-TESTHARNESSREPORT**: Test file has an instance of
+ `<script src='/resources/testharnessreport.js'>` prior to
+ `<script src='/resources/testharness.js'>`; **fix**: flip the order.
+
+* **INDENT TABS**: Test-file line starts with one or more tab characters;
+ **fix**: use spaces to replace any tab characters at beginning of lines.
+
+* **INVALID-TIMEOUT**: Test file with `<meta name='timeout'...>` element
+ that has a `content` attribute whose value is not `long`; **fix**:
+ replace the value of the `content` attribute with `long`.
+
+* **LATE-TIMEOUT**: Test file with `<meta name="timeout"...>` element after
+ `<script src='/resources/testharnessreport.js'>` element ; **fix**: move
+ the `<meta name="timeout"...>` element to precede the `script` element.
+
+* **MALFORMED-VARIANT**: Test file with a `<meta name='variant'...>`
+ element whose `content` attribute has a malformed value; **fix**: ensure
+ the value of the `content` attribute starts with `?` or `#` or is empty.
+
+* **MISSING-TESTHARNESSREPORT**: Test file is missing an instance of
+ `<script src='/resources/testharnessreport.js'>`; **fix**: ensure each
+ test file contains `<script src='/resources/testharnessreport.js'>`.
+
+* **MULTIPLE-TESTHARNESS**: Test file with multiple instances of
+ `<script src='/resources/testharness.js'>`; **fix**: ensure each test
+ has only one `<script src='/resources/testharness.js'>` instance.
+
+* **MULTIPLE-TESTHARNESSREPORT**: Test file with multiple instances of
+ `<script src='/resources/testharnessreport.js'>`; **fix**: ensure each test
+ has only one `<script src='/resources/testharnessreport.js'>` instance.
+
+* **MULTIPLE-TIMEOUT**: Test file with multiple `<meta name="timeout"...>`
+ elements; **fix**: ensure each test file has only one instance of a
+ `<meta name="timeout"...>` element.
+
+* **PARSE-FAILED**: Test file failed parsing by manifest builder; **fix**:
+ examine the file to find the causes of any parse errors, and fix them.
+
+* **PATH LENGTH**: Test file's pathname has a total length greater than 150
+  characters; **fix**: rename the test file to use a shorter filename.
+
+* **PRINT STATEMENT**: A server-side python support file contains a `print`
+ statement; **fix**: remove the `print` statement or replace it with
+ something else that achieves the intended effect (e.g., a logging call).
+
+* **SET TIMEOUT**: Test-file line has `setTimeout(...)` call; **fix**:
+  replace all `setTimeout(...)` calls with `step_timeout(...)` calls
+  (see the sketch after this list).
+
+* **TRAILING WHITESPACE**: Test-file line has trailing whitespace; **fix**:
+ remove trailing whitespace from all lines in the file.
+
+* **VARIANT-MISSING**: Test file with a `<meta name='variant'...>` element
+ that's missing a `content` attribute; **fix**: add a `content` attribute
+ with an appropriate value to the `<meta name='variant'...>` element.
+
+* **W3C-TEST.ORG**: Test-file line has the string `w3c-test.org`; **fix**:
+ either replace the `w3c-test.org` string with the expression
+ `{{host}}:{{ports[http][0]}}` or a generic hostname like `example.org`.
+
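+For example, for **SET TIMEOUT** a minimal sketch of the fix inside a
+testharness.js test (the assertion and names here are placeholders):
+
+```
+async_test(function(t) {
+  // Instead of: setTimeout(checkResult, 100);
+  t.step_timeout(function() {
+    assert_true(true);  // placeholder assertion
+    t.done();
+  }, 100);
+}, "Example: step_timeout in place of setTimeout");
+```
+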
+## Updating the whitelist
+
+Normally you must [fix all lint errors](#fixing-lint-errors). But in the
+unusual case of error reports for things essential to certain tests or that
+for other exceptional reasons shouldn't prevent a merge of a test, you can
+update and commit the `lint.whitelist` file in the web-platform-tests root
+directory to suppress errors the lint tool would report for a test file.
+
+To add a test file or directory to the whitelist, use the following format:
+
+```
+ERROR TYPE:file/name/pattern
+```
+
+For example, to whitelist the file `example/file.html` such that all
+`TRAILING WHITESPACE` errors the lint tool would report for it are
+suppressed, add the following line to the `lint.whitelist` file.
+
+```
+TRAILING WHITESPACE:example/file.html
+```
+
+To whitelist an entire directory rather than just one file, use the `*`
+wildcard. For example, to whitelist the `example` directory such that all
+`TRAILING WHITESPACE` errors the lint tool would report for any files in it
+are suppressed, add the following line to the `lint.whitelist` file.
+
+```
+TRAILING WHITESPACE:example/*
+```
+
+If needed, you can also use the `*` wildcard to express other filename
+patterns or directory-name patterns (just as you would when, e.g.,
+executing shell commands from the command line).
+
+Finally, to whitelist just one line in a file, use the following format:
+
+```
+ERROR TYPE:file/name/pattern:line_number
+```
+
+For example, to whitelist just line 128 of the file `example/file.html`
+such that any `TRAILING WHITESPACE` error the lint tool would report for
+that line is suppressed, add the following to the `lint.whitelist` file.
+
+```
+TRAILING WHITESPACE:example/file.html:128
+```
diff --git a/testing/web-platform/tests/docs/manual-test.md b/testing/web-platform/tests/docs/manual-test.md
new file mode 100644
index 000000000..4b5469589
--- /dev/null
+++ b/testing/web-platform/tests/docs/manual-test.md
@@ -0,0 +1,72 @@
+Some testing scenarios are intrinsically difficult to automate and
+require a human to run the test and check the pass condition.
+
+## When to Write Manual Tests
+
+Whenever possible it's best to write a fully automated test. For a
+browser vendor it's possible to run an automated test hundreds of
+times a day, but manual tests are likely to be run a handful of times
+a year. This makes them significantly less useful for catching
+regressions than automated tests.
+
+However, there are certain scenarios in which this is not yet
+possible. For example:
+
+* Tests that require interaction with browser security UI (e.g. a test
+ in which a user refuses a geolocation permissions grant)
+
+* Tests that require interaction with the underlying OS e.g. tests for
+ drag and drop from the desktop onto the browser
+
+* Tests that require non-default browser configuration e.g. images
+ disabled
+
+* Tests that require interaction with the physical environment
+ e.g. tests that the vibration API causes the device to vibrate or
+ that various sensor APIs respond in the expected way.
+
+There are also some rare cases where it isn't possible to write a layout
+test as a reftest, and a manual test must be written instead.
+
+## Requirements for a Manual Test
+
+Manual tests are distinguished by their filename; all manual tests
+have filenames of the form `name-manual.ext` i.e. a `-manual`
+suffix after the main filename but before the extension.
+
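+For example, a (hypothetical) manual test of drag-and-drop behavior
+might be named `drag-drop-001-manual.html`.
+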
+Manual tests must be fully
+[self-describing](test-style-guidelines.html#self-describing-tests). It
+is particularly important for these tests that it is easy to determine
+the result from the information presented to the tester, because a
+tester may have hundreds of tests to get through, and little
+understanding of the features that they are testing. Therefore
+minimalism is a virtue. An ideal self-describing test will have:
+
+* Step-by-step instructions for performing the test (if required)
+
+* A clear statement of the test outcome (if it can be automatically
+ determined after some setup) or of how to determine the outcome.
+
+Any information other than this (e.g. quotes from the spec) should be
+avoided.
+
+## Using testharness.js for Manual Tests
+
+A convenient way to present the results of a test that can have the
+result determined by script after some manual setup steps is to use
+testharness.js to determine and present the result. In this case one
+must pass `{explicit_timeout: true}` in a call to `setup()` in order
+to disable the automatic timeout of the test. For example:
+
+```html
+<!doctype html>
+<title>Manual click on button triggers onclick handler</title>
+<script src="/resources/testharness.js"></script>
+<script src="/resources/testharnessreport.js"></script>
+<script>
+setup({explicit_timeout: true})
+</script>
+<p>Click on the button below. If a "PASS" result appears, the test
+passes; otherwise it fails.</p>
+<button onclick="done()">Click Here</button>
+```
diff --git a/testing/web-platform/tests/docs/reftests.md b/testing/web-platform/tests/docs/reftests.md
new file mode 100644
index 000000000..f096bc0c6
--- /dev/null
+++ b/testing/web-platform/tests/docs/reftests.md
@@ -0,0 +1,152 @@
+A reftest is a test that compares the visual output of one file (the
+test case) with the output of one or more other files (the
+references). The test and the reference must be carefully written so
+that when the test passes they have identical rendering, but different
+rendering when the test fails.
+
+## How to Run Reftests
+
+Reftests can be run manually simply by opening the test and the
+reference file in multiple windows or tabs and either placing them
+side-by-side or flipping between the two. In automation the comparison
+is done programmatically, which means that differences too small for
+the human eye to notice can still cause tests to fail.
+
+## Components of a Reftest
+
+In the simplest case, a reftest consists of a pair of files called the
+*test* and the *reference*.
+
+The *test* file is the one that makes use of the technology being
+tested. It also contains a `link` element with `rel="match"` or
+`rel="mismatch"` and `href` attribute pointing to the *reference* file
+e.g. `<link rel=match href=references/green-box-ref.html>`.
+
+The *reference* file is typically written to be as simple as possible,
+and does not use the technology under test. It is desirable that the
+reference be rendered correctly even in UAs with relatively poor
+support for CSS and no support for the technology under test.
+
+When the `<link>` element in the *test* has `rel="match"`, the test
+only passes if the *test* and *reference* have pixel-perfect identical
+rendering. `rel="mismatch"` inverts this so the test only passes when
+the renderings differ.
+
+In general the files used in a reftest should follow the
+[format][format] and [style][style] guidelines. The *test* should also
+be [self-describing][selfdesc], to allow a human to determine whether
+the rendering is as expected.
+
+Note that references can be shared between tests; this is strongly
+encouraged since it permits optimizations when running tests.
+
+## Controlling When Comparison Occurs
+
+By default reftest screenshots are taken in response to the `load`
+event firing. In some cases it is necessary to delay the screenshot
+until after this, for example because some DOM manipulation is
+required to set up the desired test conditions. To enable this, the
+test may have a `class="reftest-wait"` attribute specified on the root
+element. This will cause the screenshot to be delayed until the `load`
+event has fired and the `reftest-wait` class has been removed from the
+root element (technical note: the implementation in wptrunner uses
+mutation observers so the screenshot will be triggered in the
+microtask checkpoint after the class is removed. Because the harness
+isn't synchronized with the browser event loop it is dangerous to rely
+on precise timing here).
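+
+A sketch of this pattern (the reference name and the setup step are
+placeholders):
+
+```html
+<!DOCTYPE html>
+<html class="reftest-wait">
+<title>Example test using reftest-wait</title>
+<link rel="match" href="example-ref.html">
+<div id="test"></div>
+<script>
+window.addEventListener("load", function() {
+  // Perform whatever DOM manipulation the test needs.
+  document.getElementById("test").textContent = "Filler Text";
+  // Removing the class signals that the screenshot can be taken.
+  document.documentElement.classList.remove("reftest-wait");
+});
+</script>
+```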
+
+## Matching Multiple References
+
+Sometimes it is desirable for a file to match multiple references or,
+in rare cases, to allow it to match more than one possible
+reference. Note: *this is not currently supported by test runners and
+so best avoided if possible until that support improves*.
+
+Multiple references linked from a single file are interpreted as
+multiple possible renderings for that file. `<link rel=[mis]match>`
+elements in a reference create further conditions that must be met in
+order for the test to pass. For example, consider a situation where
+`a.html` has `<link rel=match href=b.html>` and `<link rel=match
+href=c.html>`, `b.html` has `<link rel=match href=b1.html>` and `c.html`
+has `<link rel=mismatch href=c1.html>`. In this case, to pass we must
+either have `a.html`, `b.html` and `b1.html` all rendering identically, or
+`a.html` and `c.html` rendering identically, but `c.html` rendering
+differently from `c1.html`.
+
+## Fuzzy Matching
+
+In some situations a test may have subtle differences in rendering
+compared to the reference due to e.g. antialiasing. This may cause the
+test to pass on some platforms but fail on others. In this case some
+affordance for subtle discrepancies is desirable. However no mechanism
+to allow this has yet been standardized.
+
+## Limitations
+
+In some cases, a test cannot be a reftest. For example, there is no
+way to create a reference for underlining, since the position and
+thickness of the underline depends on the UA, the font, and/or the
+platform. However, once it's established that underlining an inline
+element works, it's possible to construct a reftest for underlining
+a block element, by constructing a reference using underlines on a
+```<span>``` that wraps all the content inside the block.
+
+## Example Reftests
+
+These examples are all [self-describing][selfdesc] tests as they
+each have a simple statement on the page describing how it should
+render to pass the tests.
+
+### HTML example
+
+### Test File
+
+This test verifies that a right-to-left rendering of **SAW** within a
+```<bdo>``` element displays as **WAS**.
+
+([view page rendering][html-reftest-example])
+
+```html
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>BDO element dir=rtl</title>
+<link rel="help" href="http://www.whatwg.org/specs/web-apps/current-work/#the-bdo-element">
+<meta name="assert" content="BDO element's DIR content attribute renders corrently given value of 'rtl'.">
+<link rel="match" href="test-bdo-001.html">
+<p>Pass if you see WAS displayed below.</p>
+<bdo dir="rtl">SAW</bdo>
+```
+
+### Reference File
+
+The reference file must look exactly like the test file,
+except that the code behind it is different.
+
+* All metadata is removed.
+* The ```title``` need not match.
+* The markup that created the actual test data is
+ different: here, the same effect is created with
+ very mundane, dependable technology.
+
+([view page rendering][html-reffile-example])
+
+```html
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>HTML Reference File</title>
+<p>Pass if you see WAS displayed below.</p>
+<p>WAS</p>
+```
+
+[testharness]: ./testharness-documentation.html
+[format]: ./test-format-guidelines.html
+[style]: ./test-style-guidelines.html
+[selfdesc]: ./test-style-guidelines.html#self-describing-tests
+[reference-links]: ./test-templates.html#reference-links
+[html-reftest-example]: ./html-reftest-example.html
+[html-reffile-example]: ./html-reffile-example.html
+[css-reftest-example]: http://test.csswg.org/source/css21/borders/border-bottom-applies-to-009.xht
+[css-reffile-example]: http://test.csswg.org/source/css21/borders/border-bottom-applies-to-001-ref.xht
+[svg-reftest-example]: http://test.csswg.org/source/css-transforms-1/translate/svg-translate-001.html
+[svg-reffile-example]: http://test.csswg.org/source/css-transforms-1/translate/reference/svg-translate-ref.html
+[indicating-failure]: ./test-style-guidelines.html#failure
diff --git a/testing/web-platform/tests/docs/review-checklist.md b/testing/web-platform/tests/docs/review-checklist.md
new file mode 100644
index 000000000..70ffb81bc
--- /dev/null
+++ b/testing/web-platform/tests/docs/review-checklist.md
@@ -0,0 +1,128 @@
+When reviewing a test, make sure the test follows the
+[format][format] and [style][style] guidelines.
+
+In addition, the test should be checked for the following:
+
+## All tests
+<input type="checkbox">
+The test passes when it's supposed to pass
+
+<input type="checkbox">
+The test fails when it's supposed to fail
+
+<input type="checkbox">
+The test is testing what it thinks it's testing
+
+<input type="checkbox">
+The spec backs up the expected behavior in the test.
+
+<input type="checkbox">
+The test is automated as either [reftest][reftest] or a
+[script test][scripttest] unless there's a very good reason why the
+test must be manual.
+
+<input type="checkbox">
+The test does not use external resources.
+
+<input type="checkbox">
+The test does not use proprietary features (vendor-prefixed or otherwise).
+
+
+## Reftests Only
+<input type="checkbox">
+The test has a [self-describing][selftest] statement
+
+<input type="checkbox">
+The self-describing statement is accurate, precise, simple, and
+self-explanatory. Your mother/husband/roommate/brother/bus driver
+should be able to say whether the test passed or failed within a few
+seconds, and not need to spend several minutes thinking or asking
+questions.
+
+<input type="checkbox">
+The reference file is accurate and will render pixel-perfect
+identically to the test on all platforms.
+
+<input type="checkbox">
+The reference file uses a different technique that won't fail in
+the same way as the test.
+
+<input type="checkbox">
+The title is descriptive but not too wordy.
+
+<input type="checkbox">
+The test is as cross-platform as reasonably possible, working
+across different devices, screen resolutions, paper sizes, etc. If
+there are limitations (e.g. the test will only work on 96dpi
+devices, or screens wider than 200 pixels), these are documented
+in the instructions.
+
+
+## Script Tests Only
+
+<input type="checkbox">
+The test uses the most specific asserts possible (e.g. doesn't use
+`assert_true` for everything).
+
+<input type="checkbox">
+The number of tests in each file and the test names are consistent
+across runs and browsers. It is best to avoid the pattern where there is
+a test that asserts that the feature is supported and bails out without
+running the rest of the tests in the file if it isn't.
+
+<input type="checkbox">
+The test avoids patterns that make it less likely to be stable.
+In particular, tests should avoid setting internal timeouts, since the
+time taken to run it may vary on different devices; events should be used
+instead (if at all possible).
+
+<input type="checkbox">
+The test uses `idlharness.js` if it covers the use case.
+
+<input type="checkbox">
+Tests in a single file are separated by one empty line.
+
+
+## In-depth Checklist
+
+<input type="checkbox">
+A test does not use self-closing start tag ("/" (U+002F)) when using the
+HTML syntax.
+
+<input type="checkbox">
+The test does not use the Unicode byte order mark (BOM U+FEFF). The test
+uses Unix line endings (LF, no CR). The executable bit is not set
+unnecessarily.
+
+<input type="checkbox">
+For indentation, spaces are preferred over tabs.
+
+<input type="checkbox">
+The test does not contain trailing whitespace (whitespace at the end of
+lines).
+
+<input type="checkbox">
+The test does not contain commented-out code.
+
+<input type="checkbox">
+The test does not use `console.*` methods for anything. The
+[script test][scripttest] harness never relies on `console.*` methods in
+any way, and so use of `console.*` methods in tests is usually just the
+equivalent of extra `printf`s in production code; i.e., leftover debugging
+that isn't actually useful to the next person running the test. It also
+introduces useless overhead when running tests in automation.
+
+<input type="checkbox">
+The test is placed in the relevant directory, based on the /TR latest
+version link if available.
+
+<input type="checkbox">
+If the test needs code running on the server side, the server code must
+be written in python, and the python code must be reviewed carefully to
+ensure it isn't doing anything dangerous.
+
+[format]: ./test-format-guidelines.html
+[style]: ./test-style-guidelines.html
+[reftest]: ./reftests.html
+[scripttest]: ./testharness-documentation.html
+[selftest]: ./test-style-guidelines.html#self-describing
diff --git a/testing/web-platform/tests/docs/review-process.md b/testing/web-platform/tests/docs/review-process.md
new file mode 100644
index 000000000..4977f3ad6
--- /dev/null
+++ b/testing/web-platform/tests/docs/review-process.md
@@ -0,0 +1,39 @@
+## Test Review Policy
+
+In order to encourage a high level of quality in the W3C test
+suites, test contributions must be reviewed by a peer.
+
+The reviewer can be anyone (other than the original test author) that
+has the required experience with both the spec under test and with the
+test [format][format] and [style][style] guidelines. Review must
+happen in public, but the exact review location is flexible. In
+particular if a vendor is submitting tests that have already been
+reviewed in their own review system, that review may be carried
+forward, as long as the original review is clearly linked in the
+GitHub pull request.
+
+To assist with test reviews, a [review checklist][review-checklist]
+is available.
+
+## Review Tools
+
+All new code submissions must use the GitHub pull request
+workflow. The GitHub UI for code review may be used, but other tools
+may also be used as long as the review is clearly linked.
+
+## Labels
+
+Pull requests get automatically labelled in the GitHub repository. Check
+out the [list of labels in GitHub][issues]
+to see the open pull requests for a given specification or a given Working Group.
+
+## Status
+
+The
+[web-platform-tests dashboard](http://testthewebforward.org/dashboard/#all)
+shows the number of open review requests, and can be filtered by testsuite.
+
+[format]: ./test-format-guidelines.html
+[style]: ./test-style-guidelines.html
+[review-checklist]: ./review-checklist.html
+[issues]: https://github.com/w3c/web-platform-tests/issues
diff --git a/testing/web-platform/tests/docs/running_tests.md b/testing/web-platform/tests/docs/running_tests.md
new file mode 100644
index 000000000..98fcd4135
--- /dev/null
+++ b/testing/web-platform/tests/docs/running_tests.md
@@ -0,0 +1,34 @@
+In simple cases, individual tests can be run by loading the page
+in a browser window. For running larger groups of tests, or running
+tests frequently, this is not a practical approach, and several better
+options exist.
+
+## From Inside a Browser
+
+For running multiple tests inside a browser, there is the test runner,
+located at
+
+ /tools/runner/index.html
+
+This allows all the tests, or those matching a specific prefix
+(e.g. all tests under `/dom/`) to be run. For testharness.js tests,
+the results will be automatically collected, whilst the runner
+provides a simple UI for manually comparing reftest rendering and
+running manual tests.
+
+Because it runs entirely in-browser, this runner cannot deal with
+edge-cases like tests that cause the browser to crash or hang.
+
+## By Automating the Browser
+
+For automated test running designed to be robust enough to use in a CI
+environment, the [wptrunner](http://github.com/w3c/wptrunner) test runner
+can be used. This is a test runner written in Python and designed to
+control the browser from the outside using some remote control
+protocol such as WebDriver. This allows it to handle cases such as the
+browser crashing that cannot be handled by an in-browser harness. It
+also has the ability to automatically run both testharness-based tests
+and reftests.
+
+Full instructions for using wptrunner are provided in its own
+[documentation](http://wptrunner.readthedocs.org).
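+
+A typical invocation looks something like the following sketch (the
+paths are placeholders; check the wptrunner documentation for exact
+flag names):
+
+    wptrunner --metadata=~/meta --tests=~/web-platform-tests \
+              --binary=/path/to/firefox firefox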
diff --git a/testing/web-platform/tests/docs/submission-process.md b/testing/web-platform/tests/docs/submission-process.md
new file mode 100644
index 000000000..fd5d763ba
--- /dev/null
+++ b/testing/web-platform/tests/docs/submission-process.md
@@ -0,0 +1,42 @@
+Test submission is via the typical GitHub workflow.
+
+* Fork the [GitHub repository][repo] (and make sure you're still relatively in
+sync with it if you forked a while ago)
+
+* Create a branch for your changes. As a key part of an effective Git
+workflow, it is strongly recommended that the **topic branch** tradition be
+followed here, i.e. the branch naming convention is based on the "topic" you
+will be working on, e.g. `git checkout -b topic-name`
+
+* Make your changes
+
+* Run the `lint` script in the root of your checkout to detect common
+ mistakes in test submissions. This will also be run after submission
+ and any errors will prevent your PR being accepted. If it detects an
+ error that forms an essential part of your test, edit the list of
+ exceptions stored in `tools/lint/lint.whitelist`.
+
+* Commit your changes.
+
+* Push your local branch to your GitHub repository.
+
+* Using the GitHub UI create a Pull Request for your branch.
+
+* When you get review comments, make more commits to your branch to
+ address the comments (**note**: Do *not* rewrite existing commits using
+ e.g. `git commit --amend` or `git rebase -i`. The review system
+ depends on the full branch history).
+
+* Once everything is reviewed and all issues are addressed, your pull
+ request will be automatically merged.
+
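+Assuming your fork's remote is named `origin` (the default when you
+clone your fork), the steps above correspond roughly to the following
+command sequence; the branch name, paths, and commit message are
+placeholders:
+
+    git checkout -b topic-name
+    # ... create or edit tests ...
+    ./lint
+    git add path/to/changed/files
+    git commit -m "Describe the change"
+    git push origin topic-name
+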
+For detailed guidelines on setup and each of these steps, please refer to the
+[Github Test Submission][github101] documentation.
+
+Hop on to [IRC or the mailing list][discuss] if you have an
+issue. There is no need to announce your review request; as soon as
+you make a Pull Request, GitHub will inform interested parties.
+
+[repo]: https://github.com/w3c/web-platform-tests/
+[github101]: ./github-101.html
+[discuss]: /discuss.html
diff --git a/testing/web-platform/tests/docs/test-format-guidelines.md b/testing/web-platform/tests/docs/test-format-guidelines.md
new file mode 100644
index 000000000..d3e75db11
--- /dev/null
+++ b/testing/web-platform/tests/docs/test-format-guidelines.md
@@ -0,0 +1,346 @@
+This page describes the available test types and the requirements for
+authoring that apply to all test types. There is also a supplementary
+[guide to writing good testcases](test-style-guidelines.html).
+
+## Test Locations
+
+Each top level directory in the repository corresponds to tests for a
+single specification. For W3C specs, these directories are named after
+the shortname of the spec (i.e. the name used for snapshot
+publications under `/TR/`).
+
+Within the specification-specific directory there are two common ways
+of laying out tests. The first is a flat structure which is sometimes
+adopted for very short specifications. The alternative is a nested
+structure with each subdirectory corresponding to the id of a heading
+in the specification. This layout provides some implicit metadata
+about the part of a specification being tested according to its
+location in the filesystem, and is preferred for larger
+specifications.
+
+When adding new tests to existing specifications, try to follow the
+structure of existing tests.
+
+Because of path length limitations on Windows, test paths must be less
+than 150 characters relative to the test root directory (this gives
+vendors just over 100 characters for their own paths when running in
+automation).
+
+## Choosing the Test Type
+
+Tests should be written using the mechanism that is most conducive to
+running in automation. In general the following order of preference holds:
+
+* [idlharness.js](testharness-idlharness.html) tests - for testing
+ anything in a WebIDL block.
+
+* [testharness.js](testharness.html) tests - for any test that can be
+ written using script alone.
+
+* [Reftests][reftests] - for most tests of rendering.
+
+* WebDriver tests - for testing the webdriver protocol itself or (in
+ the future) for certain tests that require access to privileged APIs.
+
+* [Manual tests][manual-tests] - as a last resort for anything that can't be tested
+ using one of the above techniques.
+
+Some scenarios demand certain test types. For example:
+
+* Tests for layout will generally be reftests. In some cases it will
+ not be possible to construct a reference and a test that will always
+ render the same, in which case a manual test, accompanied by
+  testharness tests that inspect the layout via the DOM, must be
+ written.
+
+* Features that require human interaction for security reasons
+ (e.g. to pick a file from the local filesystem) typically have to be
+ manual tests.
+
+## General Test Design Requirements
+
+### Short
+
+Tests should be as short as possible. For reftests in particular
+scrollbars at 800&#xD7;600px window size must be avoided unless scrolling
+behaviour is specifically being tested. For all tests extraneous
+elements on the page should be avoided so it is clear what is part of
+the test (for a typical testharness test, the only content on the page
+will be rendered by the harness itself).
+
+### Minimal
+
+Tests should generally avoid depending on edge case behaviour of
+features that they don't explicitly intend to test. For example,
+except where testing parsing, tests should contain no
+[parse errors][validator]. Of course tests which intentionally address
+the interactions between multiple platform features are not only
+acceptable but encouraged.
+
+### Cross-platform
+
+Tests should be as cross-platform as reasonably possible, working
+across different devices, screen resolutions, paper sizes, etc.
+Exceptions should document their assumptions.
+
+### Self-Contained
+
+Tests must not depend on external network resources, including
+w3c-test.org. When these tests are run on CI systems they are
+typically configured with access to external resources disabled, so
+tests that try to access them will fail. Where tests want to use
+multiple hosts, this is possible through a known set of subdomains and
+features of wptserve (see
+["Tests Involving Multiple Origins"](#tests-involving-multiple-origins)).
+
+## File Names
+
+Generally file names should be somewhat descriptive of what is being
+tested; very generic names like `001.html` are discouraged. A common
+format, required by CSS tests, is described in
+[CSS Naming Conventions](css-naming.html).
+
+## File Formats
+
+Tests must be HTML, XHTML or SVG files.
+
+Note: For CSS tests, the test source will be parsed and
+re-serialized. This re-serialization will cause minor changes to the
+test file, notably: attribute values will always be quoted, whitespace
+between attributes will be collapsed to a single space, duplicate
+attributes will be removed, optional closing tags will be inserted,
+and invalid markup will be normalized. If these changes should make
+the test inoperable, for example if the test is testing markup error
+recovery, add the [flag][requirement-flags] `asis` to prevent
+re-serialization. This flag will also prevent format conversions so it
+may be necessary to provide alternate versions of the test in other
+formats (XHTML, HTML, etc.).
+
+## Character Encoding
+
+Except when specifically testing encoding, tests must be encoded in
+UTF-8, marked through the use of e.g. `<meta charset=utf-8>`, or in
+pure ASCII.
+
+## Support files
+
+Various support files are available in the `/common/` and `/media/`
+directories (web-platform-tests) and `/support/` (CSS). Reusing
+existing resources is encouraged where possible, as is adding
+generally useful files to these common areas rather than to specific
+testsuites.
+
+For CSS tests the following standard images are available in the
+support directory:
+
+ * 1x1 color swatches
+ * 15x15 color swatches
+ * 15x15 bordered color swatches
+ * assorted rulers and red/green grids
+ * a cat
+ * a 4-part picture
+
+## Tools
+
+Sometimes you may want to add a script to the repository that's meant
+to be used from the command line, not from a browser (e.g., a script
+for generating test files). If you want to ensure (e.g., for security
+reasons) that such scripts won't be handled by the HTTP server, but
+will instead only be usable from the command line, then place them
+in either:
+
+* the `tools` subdir at the root of the repository, or
+* the `tools` subdir at the root of any top-level directory in the
+ repo which contains the tests the script is meant to be used with
+
+Any files in those `tools` directories won't be handled by the HTTP
+server; instead the server will return a 404 if a user navigates to
+the URL for a file within them.
+
+If you want to add a script for use with a particular set of tests
+but there isn't yet any `tools` subdir at the root of a top-level
+directory in the repository containing those tests, you can create
+a `tools` subdir at the root of that top-level directory and place
+your scripts there.
+
+For example, if you wanted to add a script for use with tests in the
+`notifications` directory, create the `notifications/tools` subdir
+and put your script there.
+
+## Style Rules
+
+A number of style rules should be applied to the test file. These are
+not uniformly enforced throughout the existing tests, but will be for
+new tests. Any of these rules may be broken if the test demands it:
+
+ * No trailing whitespace
+
+ * Use spaces rather than tabs for indentation
+
+ * Use UNIX-style line endings (i.e. no CR characters at EOL).
+
+## Advanced Testing Features
+
+Certain test scenarios require more than just static HTML
+generation. This is supported through the
+[wptserve](http://github.com/w3c/wptserve) server. Several scenarios
+in particular are common:
+
+### Standalone workers tests
+
+Tests that only require assertions in a dedicated worker scope can use
+standalone workers tests. In this case, the test is a JavaScript file
+with extension `.worker.js` that imports `testharness.js`. The test can
+then use all the usual APIs, and can be run from the path to the
+JavaScript file with the `.js` removed.
+
+For example, one could write a test for the `FileReaderSync` API by
+creating a `FileAPI/FileReaderSync.worker.js` as follows:
+
+ importScripts("/resources/testharness.js");
+ test(function () {
+ var blob = new Blob(["Hello"]);
+ var fr = new FileReaderSync();
+ assert_equals(fr.readAsText(blob), "Hello");
+ }, "FileReaderSync#readAsText.");
+ done();
+
+This test could then be run from `FileAPI/FileReaderSync.worker`.
+
+### Multi-global tests
+
+Tests for features that exist in multiple global scopes can be written in a way
+that they are automatically run in a window scope as well as a dedicated worker
+scope.
+In this case, the test is a JavaScript file with extension `.any.js`.
+The test can then use all the usual APIs, and can be run from the path to the
+JavaScript file with the `.js` replaced by `.worker` or `.html`.
+
+For example, one could write a test for the `Blob` constructor by
+creating a `FileAPI/Blob-constructor.any.js` as follows:
+
+ test(function () {
+ var blob = new Blob();
+ assert_equals(blob.size, 0);
+ assert_equals(blob.type, "");
+ assert_false(blob.isClosed);
+ }, "The Blob constructor.");
+
+This test could then be run from `FileAPI/Blob-constructor.any.worker` as well
+as `FileAPI/Blob-constructor.any.html`.
+
+### Tests Involving Multiple Origins
+
+In the test environment, five subdomains are available: `www`, `www1`,
+`www2`, `天気の良い日` and `élève`. These must be used for
+cross-origin tests. In addition, two ports are available for HTTP and
+one for websockets. Tests must not hardcode the hostname of the server
+that they expect to be running on or the port numbers, as these are
+not guaranteed by the test environment. Instead tests can get this
+information in one of two ways:
+
+* From script, using the `location` API.
+
+* By using a textual substitution feature of the server.
+
+In order for the latter to work, a file must either have a name of the
+form `{name}.sub.{ext}` e.g. `example-test.sub.html` or be referenced
+through a URL containing `pipe=sub` in the query string
+e.g. `example-test.html?pipe=sub`. The substitution syntax uses `{{ }}`
+to delimit items for substitution. For example to substitute in
+the host name on which the tests are running, one would write:
+
+ {{host}}
+
+As well as the host, one can get full domains, including subdomains,
+using the `domains` dictionary. For example:
+
+ {{domains[www]}}
+
+would be replaced by the fully qualified domain name of the `www`
+subdomain. Ports are also available on a per-protocol basis e.g.
+
+ {{ports[ws][0]}}
+
+is replaced with the first (and only) websockets port, whilst
+
+ {{ports[http][1]}}
+
+is replaced with the second HTTP port.
+
+The request URL itself can be used as part of the substitution using
+the `location` dictionary, which has entries matching the
+`window.location` API. For example
+
+ {{location[host]}}
+
+is replaced by `hostname:port` for the current request.
+
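+For instance, a hypothetical `example-iframe.sub.html` (the file name
+and content here are illustrative only, not an existing test) could
+load a resource from another origin like this:
+
+    <!DOCTYPE html>
+    <meta charset="utf-8">
+    <title>Example: load a frame from another origin</title>
+    <!-- The server substitutes the www subdomain and the first HTTP port -->
+    <iframe src="http://{{domains[www]}}:{{ports[http][0]}}/common/blank.html"></iframe>
+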
+### Tests Requiring Special Headers
+
+For tests requiring that a certain HTTP header is set to some static
+value, a file with the same path as the test file except for an
+additional `.headers` suffix may be created. For example for
+`/example/test.html`, the headers file would be
+`/example/test.html.headers`. This file consists of lines of the form
+
+ header-name: header-value
+
+For example
+
+ Content-Type: text/html; charset=big5
+
+To apply the same headers to all files in a directory use a
+`__dir__.headers` file. This will only apply to the immediate
+directory and not subdirectories.
+
+Headers files may be used in combination with substitutions by naming
+the file e.g. `test.html.sub.headers`.
+
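+As a sketch, a hypothetical `test.html.sub.headers` file (illustrative
+only) could combine both features to allow cross-origin access from
+the `www` subdomain:
+
+    Access-Control-Allow-Origin: http://{{domains[www]}}:{{ports[http][0]}}
+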
+### Tests Requiring Full Control Over The HTTP Response
+
+For full control over the request and response the server provides the
+ability to write `.asis` files; these are served as literal HTTP
+responses. It also provides the ability to write python scripts that
+have access to request data and can manipulate the content and timing
+of the response. For details see the
+[wptserve documentation](http://wptserve.readthedocs.org).
+
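+As an illustration, a hypothetical `example-response.asis` file (the
+name and content are illustrative only) is sent to the client
+byte-for-byte, so it must spell out the status line and headers itself:
+
+    HTTP/1.1 200 OK
+    Content-Type: text/html
+
+    <!DOCTYPE html>
+    <title>Served exactly as written</title>
+
+A Python handler is a file exposing a `main` function; a minimal
+sketch that delays the response, assuming wptserve's documented
+`main(request, response)` entry point and its `(headers, content)`
+return form:
+
+    import time
+
+    def main(request, response):
+        # Wait one second before replying, then send a plain-text body.
+        time.sleep(1)
+        return [("Content-Type", "text/plain")], "delayed response"
+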
+## CSS-Specific Requirements
+
+Tests for CSS specs have some additional requirements that have to be
+met in order to be included in an official specification testsuite.
+
+* [Naming conventions](css-naming.html)
+
+* [User style sheets](css-user-styles.html)
+
+* [Metadata](css-metadata.html)
+
+## Lint tool
+
+We have a lint tool for catching common mistakes in test files. You can run
+it manually by starting the `lint` executable from the root of your local
+web-platform-tests working directory like this:
+
+```
+./lint
+```
+
+The lint tool is also run automatically for every submitted pull request,
+and reviewers will not merge branches with tests that have lint errors, so
+you must fix any errors the lint tool reports. For details on doing that,
+see the [lint-tool documentation][lint-tool].
+
+In the unusual case that an error report concerns something essential
+to a certain test, or something that for other exceptional reasons
+shouldn't prevent a merge, update and commit the `lint.whitelist` file in the web-platform-tests
+root directory to suppress the error reports. For details on doing that,
+see the [lint-tool documentation][lint-tool].
+
+[lint-tool]: ./lint-tool.html
+[reftests]: ./reftests.html
+[manual-tests]: ./manual-test.html
+[test-templates]: ./test-templates.html
+[requirement-flags]: ./test-templates.html#requirement-flags
+[testharness-documentation]: ./testharness-documentation.html
+[validator]: http://validator.w3.org
diff --git a/testing/web-platform/tests/docs/test-style-guidelines.md b/testing/web-platform/tests/docs/test-style-guidelines.md
new file mode 100644
index 000000000..e8ccbd9fa
--- /dev/null
+++ b/testing/web-platform/tests/docs/test-style-guidelines.md
@@ -0,0 +1,437 @@
+## Key Aspects of a Well Designed Test
+
+A badly written test can lead to false passes or false failures, as
+well as inaccurate interpretations of the specs. Therefore it is
+important that the tests all be of a high standard. All tests must
+follow the [test format guidelines][test-format] and well designed
+tests should meet the following criteria:
+
+* **The test passes when it's supposed to pass**
+* **The test fails when it's supposed to fail**
+* **It's testing what it claims to be testing**
+
+## Self-Describing Tests
+
+As the tests are likely to be used by many other people, making them
+easy to understand is very important. Ideally, tests are written to be
+self-describing: the test page itself describes what the page
+should look like when the test has passed. A human examining the
+test page can then determine from the description whether the test
+has passed or failed.
+
+_Note: The terms "the test has passed" and "the test has failed"
+refer to whether the user agent has passed or failed a
+particular test — a test can pass in one web browser and fail in
+another. In general, the language "the test has passed" is used
+when it is clear from context that a particular user agent is
+being tested, and the term "this-or-that-user-agent has passed
+the test" is used when multiple user agents are being compared._
+
+Self-describing tests have some advantages:
+
+* They can be run easily on any layout engine.
+* They can test areas of the spec that are not precise enough to be
+ comparable to a reference rendering. (For example, underlining
+ cannot be compared to a reference because the position and
+  thickness of the underline are UA-dependent.)
+* Failures can (should) be easily determined by a human viewing the
+ test without needing special tools.
+
+### Manual Tests
+
+While it is highly encouraged to write automatable tests either as
+[reftests][reftests] or [script tests][scripttests], in rare cases a
+test can only be executed manually. All manual tests must be
+self-describing tests. Additionally, manual tests should be:
+
+* Easy & quick to determine the result
+* Self-explanatory & not requiring an understanding of the
+  specification to determine the result
+* Short (a paragraph or so) and certainly not requiring scrolling
+ on even the most modest of screens, unless the test is
+ specifically for scrolling or paginating behaviour.
+
+### Reftests
+
+[Reftests][reftests] should be self-describing tests wherever
+possible. This means that the descriptive statement included in the
+test file must also appear in the reference file so their renderings
+may be automatically compared.
+
+### Script Tests
+
+[Script tests][scripttests] may also be self-describing, but rather
+than including a supplemental statement on the page, this is
+generally done in the test results output from `testharness.js`.
+
+### Self-Describing Test Examples
+
+The following are some examples of self-describing tests, using some
+common [techniques](#techniques) to identify passes:
+
+* [Identical Renderings][identical-renderings]
+* [Green Background][green-background]
+* [No Red 1][no-red-1]
+* [No Red 2][no-red-2]
+* [Described Alignment][described-alignment]
+* [Overlapping][overlapping]
+* [Imprecise Description 1][imprecise-1]
+* [Imprecise Description 2][imprecise-2]
+
+## Techniques
+
+In addition to the [self-describing](#self-describing) statement
+visible in the test, there are many techniques commonly used to add
+clarity and robustness to tests. Particularly for reftests, which
+rely wholly on how the page is rendered, the following should be
+considered and used when designing new tests.
+
+### Indicating success
+
+#### The green paragraph
+
+This is the simplest form of test, and is most often used when
+testing the things that are independent of the rendering, like
+the CSS cascade or selectors. Such tests consist of a single line of
+text describing the pass condition, which will be one of the
+following:
+
+<span style="color: green">This line should be green.</span>
+
+<span style="border: 5px solid green">This line should have a green
+ border.</span>
+
+<span style="background: green; color: white">This line should have
+ a green background.</span>
+
+#### The green page
+
+This is a variant on the green paragraph test. There are certain
+parts of CSS that will affect the entire page, when testing these
+this category of test may be used. Care has to be taken when writing
+tests like this that the test will not result in a single green
+paragraph if it fails. This is usually done by forcing the short
+descriptive paragraph to have a neutral color (e.g. white).
+
+This [example][green-page] is poorly designed, because it does not
+look red when it has failed.
+
+#### The green square
+
+This is the best type of test for cases where a particular rendering
+rule is being tested. The test usually consists of two boxes of some
+kind that are (through the use of positioning, negative margins,
+zero line height, transforms, or other mechanisms) carefully placed
+over each other. The bottom box is colored red, and the top box is
+colored green. Should the top box be misplaced by a faulty user
+agent, it will cause the red to be shown. (These tests sometimes
+come in pairs, one checking that the first box is no bigger than the
+second, and the other checking the reverse.) These tests frequently
+look like:
+
+<p>Test passes if there is a green square and no red.</p>
+<div style="width: 100px; height: 100px; background: green"></div>
+
+#### The green paragraph and the blank page
+
+These tests appear to be identical to the green paragraph tests
+mentioned above. In reality, however, they have more in
+common with the green square tests, but with the green square
+colored white instead. This type of test is used when the
+displacement that could be expected in the case of failure is
+likely to be very small, and so any red must be made as obvious as
+possible. Because of this, the test would appear totally blank when
+it has passed. This is a problem because a blank page is the
+symptom of a badly handled network error. For this reason, a single
+line of green text is added to the top of the test, reading
+something like:
+
+<p style="color: green">This line should be green and there should
+be no red on this page.</p>
+
+[Example][green-paragraph]
+
+#### The two identical renderings
+
+It is often hard to make a test that is purely green when the test
+passes and visibly red when the test fails. For these cases, it may
+be easier to make a particular pattern using the feature that is
+being tested, and then have a reference rendering next to the test
+showing exactly what the test should look like.
+
+The reference rendering could be either an image, in the case where
+the rendering should be identical to the pixel on any machine, or
+the same pattern made using different features. (Doing the second
+has the advantage of making the test a test of both the feature
+under test and the features used to make the reference rendering.)
+
+[Visual Example 1][identical-visual-1]
+
+[Visual Example 2][identical-visual-2]
+
+[Text-only Example][identical-text]
+
+### Indicating failure
+
+In addition to having clearly defined characteristics when
+they pass, well designed tests should have some clear signs when
+they fail. It can sometimes be hard to make a test do something only
+when the test fails, because it is very hard to predict how user
+agents will fail! Furthermore, in a rather ironic twist, the best
+tests are those that catch the most unpredictable failures!
+
+Having said that, here are the best ways to indicate failures:
+
+#### Red
+
+Using the color red is probably the best way of highlighting
+failures. Tests should be designed so that if the rendering is a few
+pixels off some red is uncovered or otherwise rendered on the page.
+
+[Visual Example][red-visual]
+
+[Text-only Example][red-text]
+
+_View the pages' source to see the usage of the color
+red to denote failure._
+
+#### Overlapped text
+
+Tests of the `line-height`, `font-size` and similar properties can
+sometimes be devised in such a way that a failure will result in the
+text overlapping.
+
+#### The word "FAIL"
+
+Some properties lend themselves well to this kind of test, for
+example `quotes` and `content`. The idea is that if the word "FAIL"
+appears anywhere, something must have gone wrong.
+
+[Example][fail-example]
+
+_View the page's source to see the usage of the word FAIL._
+
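+A minimal sketch of the pattern (illustrative only, and using
+`display: none` rather than `content` as the declaration under test):
+the word FAIL is rendered only if the declaration is not applied:
+
+<p>This line should read "PASS": PASS<span style="display: none">FAIL</span></p>
+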
+### Special Fonts
+
+#### Ahem
+
+Todd Fahrner has developed a font called [Ahem][ahem-readme], which
+consists of some very well defined glyphs of precise sizes and
+shapes. This font is especially useful for testing font and text
+properties. Without this font it would be very hard to use the
+overlapping technique with text.
+
+The font's em-square is exactly square. Together, its ascent and
+descent are exactly the size of the em square. This means that the font's extent
+is exactly the same as its line-height, meaning that it can be
+exactly aligned with padding, borders, margins, and so forth.
+
+The font's alphabetic baseline is 0.2em above its bottom, and 0.8em
+below its top.
+
+The font has four glyphs:
+
+* X U+0058 A square exactly 1em in height and width.
+* p U+0070 A rectangle exactly 0.2em high, 1em wide, and aligned so
+that its top is flush with the baseline.
+* É U+00C9 A rectangle exactly 0.8em high, 1em wide, and aligned so
+that its bottom is flush with the baseline.
+* U+0020 A transparent space exactly 1em high and wide.
+
+Most other US-ASCII characters in the font have the same glyph as X.
+
+#### Ahem Usage
+
+__If the test uses the Ahem font, make sure its computed font-size
+is a multiple of 5px__, otherwise baseline alignment may be rendered
+inconsistently (due to rounding errors introduced by certain
+platforms' font APIs). We suggest using a minimum computed
+font-size of 20px.
+
+E.g. Bad:
+
+``` css
+{font: 1in/1em Ahem;} /* Computed font-size is 96px */
+{font: 1in Ahem;}
+{font: 1em/1em Ahem} /* with computed 1em font-size being 16px */
+{font: 1em Ahem;} /* with computed 1em font-size being 16px */
+```
+
+E.g. Good:
+
+``` css
+{font: 100px/1 Ahem;}
+{font: 1.25em/1 Ahem;} /* with computed 1.25em font-size being 20px */
+```
+
+__If the test uses the Ahem font, make sure the line-height on block
+elements is specified; avoid `line-height: normal`__. Also, for
+absolute reliability, the difference between computed line-height
+and computed font-size should be divisible by 2.
+
+E.g. Bad:
+
+``` css
+{font: 1.25em Ahem;} /* computed line-height value is 'normal' */
+{font: 20px Ahem;} /* computed line-height value is 'normal' */
+{font-size: 25px; line-height: 50px;} /* the difference between
+computed line-height and computed font-size is not divisible by 2. */
+```
+
+E.g. Good:
+
+``` css
+{font-size: 25px; line-height: 51px;} /* the difference between
+computed line-height and computed font-size is divisible by 2. */
+```
+
+[Example test using Ahem][ahem-example]
+
+_View the page's source to see how the Ahem font is used._
+
+
+##### Installing Ahem
+
+1. Download the [TrueType version of Ahem][download-ahem].
+2. Open the folder where you downloaded the font file.
+3. Right-click the downloaded font file and select "Install".
+
+### Explanatory Text
+
+For tests that must be long (e.g. scrolling tests), it is important
+to make it clear that the filler text is not relevant, otherwise the
+tester may think they are missing something and therefore waste time
+reading the filler text. Good text for use in these situations is,
+quite simply, "This is filler text. This is filler text. This is
+filler text.". If it looks boring, it's working!
+
+### Color
+
+In general, using colors in a consistent manner is recommended.
+Specifically, the following convention has been developed:
+
+#### Red
+Any red indicates failure.
+
+#### Green
+In the absence of any red, green indicates success.
+
+#### Blue
+Tests that do not use red or green to indicate success or failure
+should use blue to indicate that the tester should read the text
+carefully to determine the pass conditions.
+
+#### Black
+Descriptive text is usually black.
+
+#### Fuchsia, Yellow, Teal, Orange
+These are useful colors when making complicated patterns for tests
+of the two identical renderings type.
+
+#### Dark Gray
+Descriptive lines, such as borders around nested boxes, are usually
+dark gray. These lines come in useful when trying to reduce the test
+to a minimal case for engineers.
+
+#### Silver / Light Gray
+
+Sometimes used for filler text to indicate that it is irrelevant.
+
+### Methodical testing
+
+Some web features can be tested quite thoroughly with a very
+methodical approach. For example, testing that all the length units
+work for each property taking lengths is relatively easy, and can be
+done methodically simply by creating a test for each property/unit
+combination.
+
+In practice, the important thing to decide is when to be methodical
+and when to simply test, in an ad hoc fashion, a cross section of
+the possibilities.
+
+This is an [example][methodical-test] of a methodical test of the
+`:not()` pseudo-class with each attribute selector in turn, first
+for long values and then for short values.
+
+### Overlapping
+
+This technique should not be cast aside as a curiosity -- it is in
+fact one of the most useful techniques for testing CSS, especially
+for areas like positioning and the table model.
+
+The basic idea is that a red box is first placed using one set of
+properties, e.g. the block box model's margin, height and width
+properties, and then a second box, green, is placed on top of the
+red one using a different set of properties, e.g. using absolute
+positioning.
+
+This idea can be extended to any kind of overlapping, for example
+overlapping two lines of identical text of different colors.
+
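+A minimal sketch of the idea (sizes and mechanisms illustrative only):
+the red box is placed by normal flow using width and height, and the
+green box is then painted over it using absolute positioning, so a
+layout error in either mechanism uncovers red:
+
+<p>Test passes if there is a green square and no red.</p>
+<div style="position: relative; width: 100px; height: 100px">
+  <div style="width: 100px; height: 100px; background: red"></div>
+  <div style="position: absolute; top: 0; left: 0; width: 100px; height: 100px; background: green"></div>
+</div>
+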
+## Tests to avoid
+
+### The long test
+
+Any manual test that is so long that it needs to be scrolled to be
+completed is too long. The reason for this becomes obvious when you
+consider how manual tests will be run. Typically, the tester will be
+running a program (such as "Loaderman") which cycles through a list
+of several hundred tests. Whenever a failure is detected, the tester
+will do something (such as hit a key) that takes a note of the test
+case name. Each test will be on the screen for about two or three
+seconds. If the tester has to scroll the page, that means they have
+to stop the test to do so.
+
+Of course, there are exceptions -- the most obvious one being any
+tests that examine the scrolling mechanism! However, these tests are
+considered tests of user interaction and are not run with the
+majority of the tests.
+
+Any test that is so long that it needs scrolling can usually be
+split into several smaller tests, so in practice this isn't much of
+a problem.
+
+This is an [example][long-test] of a test that is too long.
+
+### The counterintuitive "this should be red" test
+
+As mentioned many times in this document, red indicates a bug, so
+nothing should ever be red in a test.
+
+There is one important exception to this rule... the test for the
+`red` value for the color properties!
+
+### Unobvious tests
+
+A test that has half a sentence of normal text, with the second half
+bold if the test has passed, is not very obvious, even if the
+sentence in question explains what should happen.
+
+There are various ways to avoid this kind of test, but no general
+rule can be given since the affected tests are so varied.
+
+The last [subtest on this page][unobvious-test] shows this problem.
+
+[test-format]: ./test-format-guidelines.html
+[reftests]: ./reftests.html
+[scripttests]: ./testharness-documentation.html
+[identical-renderings]: http://test.csswg.org/source/css21/syntax/escapes-000.xht
+[green-background]: http://test.csswg.org/source/css21/syntax/escapes-002.xht
+[no-red-1]: http://test.csswg.org/source/css21/positioning/abspos-containing-block-003.xht
+[no-red-2]: http://test.csswg.org/source/css21/tables/border-conflict-w-079.xht
+[described-alignment]: http://test.csswg.org/source/css21/margin-padding-clear/margin-collapse-clear-007.xht
+[overlapping]: http://test.csswg.org/source/css21/tables/table-anonymous-objects-021.xht
+[imprecise-1]: http://test.csswg.org/source/css21/tables/border-style-inset-001.xht
+[imprecise-2]: http://test.csswg.org/source/css21/text/text-decoration-001.xht
+[green-page]: http://www.hixie.ch/tests/adhoc/css/background/18.xml
+[green-paragraph]: http://www.hixie.ch/tests/adhoc/css/fonts/size/002.xml
+[identical-visual-1]: http://test.csswg.org/source/css21/floats-clear/margin-collapse-123.xht
+[identical-visual-2]: http://test.csswg.org/source/css21/normal-flow/inlines-016.xht
+[identical-text]: http://test.csswg.org/source/css21/fonts/shand-font-000.xht
+[red-visual]: http://test.csswg.org/source/css21/positioning/absolute-replaced-height-018.xht
+[red-text]: http://test.csswg.org/source/css21/syntax/comments-003.xht
+[fail-example]: http://test.csswg.org/source/css21/positioning/abspos-overflow-005.xht
+[ahem-example]: http://test.csswg.org/source/css21/positioning/absolute-non-replaced-width-001.xht
+[ahem-readme]: http://www.w3.org/Style/CSS/Test/Fonts/Ahem/README
+[download-ahem]: http://www.w3.org/Style/CSS/Test/Fonts/Ahem/AHEM____.TTF
+[long-test]: http://www.hixie.ch/tests/evil/mixed/lineheight3.html
+[unobvious-test]: http://www.w3.org/Style/CSS/Test/CSS1/current/sec525.htm
+[methodical-test]: http://www.hixie.ch/tests/adhoc/css/selectors/not/010.xml
diff --git a/testing/web-platform/tests/docs/test-templates.md b/testing/web-platform/tests/docs/test-templates.md
new file mode 100644
index 000000000..3738ebf13
--- /dev/null
+++ b/testing/web-platform/tests/docs/test-templates.md
@@ -0,0 +1,135 @@
+This page contains templates for creating tests. The template syntax
+is compatible with several popular editors including TextMate, Sublime
+Text, and emacs' YASnippet mode.
+
+Each template is given in two forms, one minimal and one including
+[extra metadata](css-metadata.html). Usually the metadata is required
+by CSS tests and optional for other tests.
+
+Templates for filenames are also given. In this case `{}` is used to
+delimit text to be replaced and `#` represents a digit.
+
+## Reftests
+
+### Minimal Reftest
+
+``` html
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>${1:Test title}</title>
+<link rel="match" href="${2:URL of match}">
+<style>
+ ${3:Test CSS}
+</style>
+<body>
+ ${4:Test content}
+</body>
+```
+
+Filename: `{test-topic}-###.html`
+
+### Reftest Including Metadata
+
+``` html
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>${1:Test area}: ${2:Scope of test}</title>
+<link rel="author" title="${3:Author's name}" href="${4:Contact link}">
+<link rel="help" href="${5:Link to tested section}">
+<link rel="match" href="${6:URL of match}">
+<meta name="flags" content="${7:Requirement flags}">
+<meta name="assert" content="${8:Description of what you're trying to test}">
+<style>
+ ${9:Test CSS}
+</style>
+<body>
+ ${10:Test content}
+</body>
+```
+
+Filename: `{test-topic}-###.html`
+
+### Minimal Reftest Reference
+
+``` html
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>${1:Reference title}</title>
+<style>
+ ${2:Reference CSS}
+</style>
+<body>
+ ${3:Reference content}
+</body>
+```
+
+Filename: `{description}.html` or `{test-topic}-###-ref.html`
+
+### Reference Including Metadata
+
+``` html
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>${1:Reference title}</title>
+<link rel="author" title="${2:Author's name}" href="${3:Contact link}">
+<style>
+ ${4:Reference CSS}
+</style>
+<body>
+ ${5:Reference content}
+</body>
+```
+
+Filename: `{description}.html` or `{test-topic}-###-ref.html`
+
+## testharness.js tests
+
+### Minimal Script Test
+
+``` html
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>${1:Test title}</title>
+<script src="/resources/testharness.js"></script>
+<script src="/resources/testharnessreport.js"></script>
+<script>
+${2:Test body}
+</script>
+```
+
+Filename: `{test-topic}-###.html`
+
+### Script Test With Metadata
+
+``` html
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>${1:Test title}</title>
+<link rel="author" title="${2:Author's name}" href="${3:Contact link}">
+<link rel="help" href="${4:Link to tested section}">
+<meta name="flags" content="${5:Requirement flags}">
+<meta name="assert" content="${6:Description of what you're trying to test}">
+<script src="/resources/testharness.js"></script>
+<script src="/resources/testharnessreport.js"></script>
+<script>
+${7:Test body}
+</script>
+```
+
+Filename: `{test-topic}-###.html`
+
+### Manual Test
+
+``` html
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>${1:Test title}</title>
+<script src="/resources/testharness.js"></script>
+<script src="/resources/testharnessreport.js"></script>
+<script>
+setup({explicit_timeout: true});
+${2:Test body}
+</script>
+```
+
+Filename: `{test-topic}-###-manual.html`