author	Matt A. Tobin <mattatobin@localhost.localdomain>	2018-02-02 04:16:08 -0500
committer	Matt A. Tobin <mattatobin@localhost.localdomain>	2018-02-02 04:16:08 -0500
commit	5f8de423f190bbb79a62f804151bc24824fa32d8 (patch)
tree	10027f336435511475e392454359edea8e25895d /testing/web-platform/harness/docs/expectation.rst
parent	49ee0794b5d912db1f95dce6eb52d781dc210db5 (diff)
Add m-esr52 at 52.6.0
Diffstat (limited to 'testing/web-platform/harness/docs/expectation.rst')
-rw-r--r--	testing/web-platform/harness/docs/expectation.rst	248
1 file changed, 248 insertions, 0 deletions
diff --git a/testing/web-platform/harness/docs/expectation.rst b/testing/web-platform/harness/docs/expectation.rst
new file mode 100644
index 000000000..6a0c77684
--- /dev/null
+++ b/testing/web-platform/harness/docs/expectation.rst
@@ -0,0 +1,248 @@
+Expectation Data
+================
+
+Introduction
+------------
+
+For use in continuous integration systems, and other scenarios where
+regression tracking is required, wptrunner supports storing and
+loading the expected result of each test in a test run. Typically
+these expected results will initially be generated by running the
+testsuite in a baseline build. They may then be edited by humans as
+new features are added to the product that change the expected
+results. The expected results may also vary for a single product
+depending on the platform on which it is run. Therefore, the raw
+structured log data is not a suitable format for storing these
+files. Instead something is required that is:
+
+ * Human readable
+
+ * Human editable
+
+ * Machine readable / writable
+
+ * Capable of storing test id / result pairs
+
+ * Suitable for storing in a version control system (i.e. text-based)
+
+The need for different results per platform means either having
+multiple expectation files for each platform, or having a way to
+express conditional values within a certain file. The former would be
+rather cumbersome for humans updating the expectation files, so the
+latter approach has been adopted, leading to the requirement:
+
+ * Capable of storing result values that are conditional on the platform.
+
+There are few extant formats that meet these requirements, so
+wptrunner uses a bespoke ``expectation manifest`` format, which is
+closely based on the standard ``ini`` format.
+
+Directory Layout
+----------------
+
+Expectation manifest files must be stored under the ``metadata``
+directory passed to the test runner. The directory layout follows that
+of web-platform-tests, with each test path having a corresponding
+manifest file. Tests that differ only by query string, or reftests
+with the same test path but different ref paths, share the same
+manifest file. The file name is taken from the last ``/``-separated
+part of the path, suffixed with ``.ini``.
+
+As an optimisation, test files that produce only default results
+(i.e. ``PASS`` or ``OK``) don't require a corresponding manifest file.
+
+For example, a test with the URL::
+
+ /spec/section/file.html?query=param
+
+would have the expectation file::
+
+ metadata/spec/section/file.html.ini
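+
+Similarly, a reftest with the test path::
+
+  /spec/section/reftest.html
+
+would use the manifest file::
+
+  metadata/spec/section/reftest.html.ini
+
+whichever reference file it is compared against (the paths here are
+illustrative).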
+
+
+.. _wptupdate-label:
+
+Generating Expectation Files
+----------------------------
+
+wptrunner provides the tool ``wptupdate`` to generate expectation
+files from the results of a set of baseline test runs. The basic
+syntax for this is::
+
+ wptupdate [options] [logfile]...
+
+Each ``logfile`` is a structured log file from a previous run. These
+can be generated from wptrunner using the ``--log-raw`` option,
+e.g. ``--log-raw=structured.log``. The default behaviour is to update
+all the test data for the particular combination of hardware and OS
+used in the run corresponding to the log data, whilst leaving any
+other expectations untouched.
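+
+For example, the expectations for the platform a run was performed on
+can be refreshed from a raw log saved with ``--log-raw=structured.log``
+using::
+
+  wptupdate structured.log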
+
+wptupdate takes several useful options:
+
+``--sync``
+ Pull the latest version of web-platform-tests from the
+ upstream specified in the config file. If this is specified in
+ combination with logfiles, it is assumed that the results in the log
+ files apply to the post-update tests.
+
+``--no-check-clean``
+ Don't attempt to check if the working directory is clean before
+ doing the update (assuming that the working directory is a git or
+ mercurial tree).
+
+``--patch``
+ Create a git commit, or an mq patch, with the changes made by wptupdate.
+
+``--ignore-existing``
+ Overwrite all the expectation data for any tests that have a result
+ in the passed log files, not just data for the same platform.
+
+Examples
+~~~~~~~~
+
+Update the local copy of web-platform-tests without changing the
+expectation data and commit (or create an mq patch for) the result::
+
+ wptupdate --patch --sync
+
+Update all the expectations from a set of cross-platform test runs::
+
+ wptupdate --no-check-clean --patch osx.log linux.log windows.log
+
+Add expectation data for some new tests that are expected to be
+platform-independent::
+
+ wptupdate --no-check-clean --patch --ignore-existing tests.log
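+
+Update the local copy of web-platform-tests and apply expectation data
+from a log whose results are assumed to apply to the post-update tests
+(the log file name is illustrative)::
+
+  wptupdate --sync --patch updated-run.log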
+
+Manifest Format
+---------------
+The format of the manifest files is based on the ini format. Files are
+divided into sections, each (apart from the root section) having a
+heading enclosed in square brackets. Within each section are key-value
+pairs. There are several notable differences from standard .ini files,
+however:
+
+ * Sections may be hierarchically nested, with significant whitespace
+ indicating nesting depth.
+
+ * Only ``:`` is valid as a key/value separator.
+
+A simple example of a manifest file is::
+
+  root_key: root_value
+
+  [section]
+    section_key: section_value
+
+    [subsection]
+      subsection_key: subsection_value
+
+  [another_section]
+    another_key: another_value
+
+Conditional Values
+~~~~~~~~~~~~~~~~~~
+
+In order to support values that depend on some external data, the
+right-hand side of a key/value pair can take a set of conditionals
+rather than a plain value. These values are placed on a new line
+following the key, with significant indentation. Conditional values
+are prefixed with ``if`` and terminated with a colon, for example::
+
+  key:
+    if cond1: value1
+    if cond2: value2
+    value3
+
+In this example, the value associated with ``key`` is determined by
+first evaluating ``cond1`` against external data. If that is true,
+``key`` is assigned the value ``value1``, otherwise ``cond2`` is
+evaluated in the same way. If both ``cond1`` and ``cond2`` are false,
+the unconditional ``value3`` is used.
+
+Conditions themselves use a Python-like expression syntax. Operands
+can be variables (corresponding to data passed in), numbers
+(integer or floating point; exponential notation is not supported), or
+quote-delimited strings. Equality is tested using ``==`` and
+inequality by ``!=``. The operators ``and``, ``or`` and ``not`` are
+used in the expected way. Parentheses can also be used for
+grouping. For example::
+
+  key:
+    if (a == 2 or a == 3) and b == "abc": value1
+    if a == 1 or b != "abc": value2
+    value3
+
+Here ``a`` and ``b`` are variables whose values will be supplied when
+the manifest is used.
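+
+As an illustration of how such conditional values are resolved, the
+following self-contained Python sketch (not wptrunner's implementation;
+the function name and variable values are purely illustrative) models
+the lookup for the example above: conditions are tried in order, and
+the unconditional value acts as a fallback::
+
+  def choose_value(variables):
+      a = variables["a"]
+      b = variables["b"]
+      # Conditions are evaluated top to bottom; the first match wins.
+      if (a == 2 or a == 3) and b == "abc":
+          return "value1"
+      if a == 1 or b != "abc":
+          return "value2"
+      # No condition matched, so the unconditional value applies.
+      return "value3"
+
+  print(choose_value({"a": 3, "b": "abc"}))  # value1
+  print(choose_value({"a": 1, "b": "xyz"}))  # value2
+  print(choose_value({"a": 5, "b": "abc"}))  # value3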
+
+Expectation Manifests
+---------------------
+
+When used for expectation data, manifests have the following format:
+
+ * A section per test URL described by the manifest, with the section
+ heading being the part of the test URL following the last ``/`` in
+ the path (this allows multiple tests in a single manifest file with
+ the same path part of the URL, but different query parts).
+
+ * A subsection per subtest, with the heading being the title of the
+ subtest.
+
+ * A key ``type`` indicating the test type. This takes the value
+ ``testharness`` or ``reftest``.
+
+ * For reftests, keys ``reftype`` indicating the reference type
+ (``==`` or ``!=``) and ``refurl`` indicating the URL of the
+ reference.
+
+ * A key ``expected`` giving the expectation value of each (sub)test.
+
+ * A key ``disabled`` which can be set to any value to indicate that
+ the (sub)test is disabled and should either not be run (for tests)
+ or have its results ignored (for subtests).
+
+ * A key ``restart-after`` which can be set to any value to indicate that
+ the runner should restart the browser after running this test (e.g. to
+ clear out unwanted state).
+
+ * Variables ``debug``, ``os``, ``version``, ``processor`` and
+ ``bits`` that describe the configuration of the browser under
+ test. ``debug`` is a boolean indicating whether a build is a debug
+ build. ``os`` is a string indicating the operating system, and
+ ``version`` a string indicating the particular version of that
+ operating system. ``processor`` is a string indicating the
+ processor architecture and ``bits`` an integer indicating the
+ number of bits. This information is typically provided by
+ :py:mod:`mozinfo`.
+
+ * Top level keys are taken as defaults for the whole file. So, for
+ example, a top level key with ``expected: FAIL`` would indicate
+ that all tests and subtests in the file are expected to fail,
+ unless they have an ``expected`` key of their own.
+
+A simple example manifest might look like::
+
+  [test.html?variant=basic]
+    type: testharness
+
+    [Test something unsupported]
+      expected: FAIL
+
+  [test.html?variant=broken]
+    expected: ERROR
+
+  [test.html?variant=unstable]
+    disabled: http://test.bugs.example.org/bugs/12345
+
+A more complex manifest with conditional properties might be::
+
+  [canvas_test.html]
+    expected:
+      if os == "osx": FAIL
+      if os == "windows" and version == "XP": FAIL
+      PASS
+
+Note that ``PASS`` in the above works, but is unnecessary; ``PASS``
+(or ``OK``) is always the default expectation for (sub)tests.
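+
+None of the examples above covers reftests; an entry for one combines
+the ``reftype`` and ``refurl`` keys with the usual ``expected`` key. A
+hypothetical manifest (the file and reference names are illustrative)
+might be::
+
+  [reftest.html]
+    type: reftest
+    reftype: ==
+    refurl: /spec/section/reftest-ref.html
+    expected:
+      if debug: FAIL
+
+Here the expectation is ``FAIL`` only on debug builds; elsewhere the
+default ``PASS`` applies, so no fallback value is needed.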