Test Scenario Metadata
The goal is to reuse test files as much as possible and to automate the creation of test sets: given enough metadata, a script (XSLT or a popular scripting language) could automatically create zips for, say, "ODF 1.1 presentations", "ODF 1.2 spreadsheets", and so on.
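As an illustration, here is a minimal Python sketch of such a packaging script (Python standing in for "a popular scripting language"; the function name, the select callback and the file names are hypothetical, not part of this proposal). The callback would filter on the metadata described below, e.g. keeping only the input files of tests whose doc-spec targets ODF 1.1:

```python
import zipfile

def build_test_set(zip_name, meta_files, select):
    """Create a test-set zip containing each metadata file plus the
    input files that the select(meta_xml) callback extracts from it."""
    with zipfile.ZipFile(zip_name, "w") as archive:
        for meta in meta_files:
            # Ship the metadata itself alongside the test documents.
            archive.write(meta)
            with open(meta) as f:
                for location in select(f.read()):
                    archive.write(location)
```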
A metadata file could look like this (either stored inside the ODF package, or as a separate XML file).
First, a trivial doctest root element:
<?xml version="1.0"?>
<doctest file-id="fedict.hanssens.1"
         location="complex-test-meta.xml"
         lastupdate="2008-12-30T17:57:18Z"
         xmlns="http://docs.oasis-open.org/ns/doctest/wd-20081230">
Next is a test-group, grouping one or more test scenarios that can be run over the same input files. Note that the proposed schema allows for different input files for different application types ("spreadsheets", ...).
In order to keep track of who wrote the tests, one or more author elements can be added.
<test-group id="oic.draft.1" status="submitted"
            lastupdate="2008-12-30T09:33:18Z">
  <author email="firstname.lastname@example.org"
          website="http://www.fedict.be">Bart Hanssens</author>
  <input-file application="text" location="complex-test.odt"/>
Then we come to the real test scenarios. What version and part of the specification(s) are we targeting? Is it required? Does the spec refer to other specs? And, of course, a description of the test scenario itself: what should be considered a "success", and what a "failure"?
<test type="representation" id="odf-1.1-17.2" process="read">
  <doc-spec version="1.1" section="17.2" use="optional">Zip file structure</doc-spec>
  <other-spec location="ftp://ftp.uu.net/pub/archiving/zip/doc/appnote-970311-iz.zip">ZIP</other-spec>
  <description>An empty document, contained in a package. The application
    must be able to open this without producing any warning or error
    message.</description>
  <success>The application opens the file without producing warnings or
    error messages.</success>
  <failure>Warning or error messages, crashing or otherwise refusing to
    open and/or represent the empty file.</failure>
</test>
This schema can be used for "atomic" and "complex" cases: multiple test scenarios can be added, and it can be specified that one test depends on another. Also, different scenarios are possible for "read" and "write" operations.
Using the application attribute, it can be specified that a test scenario is only valid for a certain application type ("text", "spreadsheet", ...). Combined with the application attribute of the input-file, one could select the appropriate document.
The same test can often be reused for testing multiple versions of the ODF spec (1.1, 1.2) by adding another doc-spec. However, test scenarios may vary between versions, so sometimes this isn't the case and an additional test is required.
<test type="representation" id="odf-1.1-17.6" depends="odf-1.1-17.2"
      process="read" application="text">
  <doc-spec version="1.1" section="17.6" use="encouraged">Preview Image</doc-spec>
  <other-spec location="http://jens.triq.net/thumbnail-spec/index.html"
              version="0.7.0">Thumbnail Managing Standard</other-spec>
  <description>Preview image: a dummy image containing the black text
    "This preview is intentionally left blank" on a white
    background.</description>
  <success>The application correctly shows the image.</success>
  <failure>The application is unable to represent the preview, or
    represents it incorrectly.</failure>
</test>
</test-group>
Of course, initially we don't include results.
But if a vendor, expert or researcher wants to submit results, it could be beneficial to store them in the same file, so there is no discussion about what has been tested. It also makes it easier to automatically extract reports (again using XSLT or a popular scripting language). Note that these reports won't contain specific product details, but aggregated data ("X is supported by Y% of the implementations").
<result-group status="submitted" lastupdate="2008-12-30T16:57:18Z">
  <author email="email@example.com"
          website="http://www.fedict.be">Bart Hanssens</author>
  <configuration>
    <application name="My Favorite App" version="1.0" language="en"/>
    <os name="Microsoft Windows" version="XP Pro sp3" language="en"/>
    <output-device type="screen"/>
  </configuration>
The evaluation is done by the person who submitted the result, not by the TC itself. Note that the type attribute can also have the value "unsure" or "spec-not-clear".
<result test-ref="odf-1.1-17.2">
  <evaluation type="success">This was a very easy test, but please check
    the comment on the wiki.</evaluation>
</result>
</result-group>
</doctest>
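The report extraction mentioned above could, for instance, compute the share of submitted results that evaluated each test as a success. A minimal Python sketch (XSLT would work equally well); the function name and the aggregation are assumptions, not part of the proposal:

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

# Namespace taken from the draft example above.
NS = {"d": "http://docs.oasis-open.org/ns/doctest/wd-20081230"}

def success_rates(meta_xml):
    """Map each test id to the fraction of submitted results evaluating
    it as a success; no product details appear in the output."""
    root = ET.fromstring(meta_xml)
    counts = defaultdict(lambda: [0, 0])  # test-ref -> [successes, total]
    for rg in root.findall("d:result-group", NS):
        for res in rg.findall("d:result", NS):
            ev = res.find("d:evaluation", NS)
            counts[res.get("test-ref")][1] += 1
            if ev is not None and ev.get("type") == "success":
                counts[res.get("test-ref")][0] += 1
    return {ref: ok / total for ref, (ok, total) in counts.items()}
```

Note that this sketch simply counts "unsure" and "spec-not-clear" evaluations against the success rate; a real report would probably list them separately.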