<!DOCTYPE html PUBLIC '-//W3C//DTD XHTML 1.0 Strict//EN'
    'http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd'>
<html lang="en" dir="ltr" xmlns="http://www.w3.org/1999/xhtml"
xml:lang="en">
<head>
<meta http-equiv="Content-Type" content=
"text/html; charset=utf-8" />
<title>A Method for Writing Testable Conformance
Requirements</title>
<link rel="stylesheet" href=
"http://www.w3.org/StyleSheets/TR/W3C-ED.css" type="text/css" />
<style type="text/css">
/*<![CDATA[*/
/*****************************************************************
 * ReSpec CSS
 * Robin Berjon <robin at berjon dot com>
 * v0.05 - 2009-07-31
 *****************************************************************/


/* --- INLINES --- */
em.rfc2119 { 
    text-transform:     lowercase;
    font-variant:       small-caps;
    font-style:         normal;
    color:              #900;
}

h1 acronym, h2 acronym, h3 acronym, h4 acronym, h5 acronym, h6 acronym, a acronym,
h1 abbr, h2 abbr, h3 abbr, h4 abbr, h5 abbr, h6 abbr, a abbr {
    border: none;
}

dfn {
    font-weight:    bold;
}

a.internalDFN {
    color:  inherit;
    border-bottom:  medium solid #99c;
    text-decoration:    none;
}

a.externalDFN {
    color:  inherit;
    border-bottom:  medium dotted #ccc;
    text-decoration:    none;
}

a.bibref {
    text-decoration:    none;
}

code {
    color:  #ff4500;
}



/* --- TOC --- */
.toc a {
    text-decoration:    none;
}

a .secno {
    color:  #000;
}

/* --- TABLE --- */
table.simple {
    border-spacing: 0;
    border-collapse:    collapse;
    border-bottom:  3px solid #005a9c;
}

.simple th {
    background: #005a9c;
    color:  #fff;
    padding:    3px 5px;
    text-align: left;
}

.simple th[scope="row"] {
    background: inherit;
    color:  inherit;
    border-top: 1px solid #ddd;
}

.simple td {
    padding:    3px 10px;
    border-top: 1px solid #ddd;
}

.simple tr:nth-child(even) {
    background: #f0f6ff;
}

/* --- DL --- */
.section dd > p:first-child {
    margin-top: 0;
}

.section dd > p:last-child {
    margin-bottom: 0;
}

.section dd {
    margin-bottom:  1em;
}

.section dl.attrs dd, .section dl.eldef dd {
    margin-bottom:  0;
}

/* --- EXAMPLES --- */
pre.example {
    border-top: 1px solid #ff4500;
    border-bottom: 1px solid #ff4500;
    padding:    1em;
    margin-top: 1em;
}

pre.example:before {
    content:    "Example";
    display:    block;
    width:      150px;
    background: #ff4500;
    color:  #fff;
    font-family:    initial;
    padding:    3px;
    font-weight:    bold;
    margin: -1em 0 1em -1em;
}

/* --- EDITORIAL NOTES --- */
.issue {
    padding:    1em;
    border: 1px solid #f00;
    background: #ffc;
}

.issue:before {
    content:    "Issue";
    display:    block;
    width:  150px;
    margin: -1.5em 0 0.5em 0;
    font-weight:    bold;
    border: 1px solid #f00;
    background: #fff;
    padding:    3px 1em;
}

.note {
    padding:    1em;
    border: 2px solid #cff6d9;
    background: #e2fff0;
}

.note:before {
    content:    "Note";
    display:    block;
    width:  150px;
    margin: -1.5em 0 0.5em 0;
    font-weight:    bold;
    border: 1px solid #cff6d9;
    background: #fff;
    padding:    3px 1em;
}

/* --- SYNTAX HIGHLIGHTING --- */
pre.sh_sourceCode {
  background-color: white;
  color: black;
  font-style: normal;
  font-weight: normal;
}

pre.sh_sourceCode .sh_keyword { color: #005a9c; font-weight: bold; }           /* language keywords */
pre.sh_sourceCode .sh_type { color: #666; }                            /* basic types */
pre.sh_sourceCode .sh_usertype { color: teal; }                             /* user defined types */
pre.sh_sourceCode .sh_string { color: red; font-family: monospace; }        /* strings and chars */
pre.sh_sourceCode .sh_regexp { color: orange; font-family: monospace; }     /* regular expressions */
pre.sh_sourceCode .sh_specialchar { color:      #ffc0cb; font-family: monospace; }  /* e.g., \n, \t, \\ */
pre.sh_sourceCode .sh_comment { color: #A52A2A; font-style: italic; }         /* comments */
pre.sh_sourceCode .sh_number { color: purple; }                             /* literal numbers */
pre.sh_sourceCode .sh_preproc { color: #00008B; font-weight: bold; }       /* e.g., #include, import */
pre.sh_sourceCode .sh_symbol { color: blue; }                            /* e.g., <, >, + */
pre.sh_sourceCode .sh_function { color: black; font-weight: bold; }         /* function calls and declarations */
pre.sh_sourceCode .sh_cbracket { color: red; }                              /* block brackets (e.g., {, }) */
pre.sh_sourceCode .sh_todo { font-weight: bold; background-color: #00FFFF; }   /* TODO and FIXME */

/* Predefined variables and functions (for instance glsl) */
pre.sh_sourceCode .sh_predef_var { color: #00008B; }
pre.sh_sourceCode .sh_predef_func { color: #00008B; font-weight: bold; }

/* for OOP */
pre.sh_sourceCode .sh_classname { color: teal; }

/* line numbers (not yet implemented) */
pre.sh_sourceCode .sh_linenum { display: none; }

/* Internet related */
pre.sh_sourceCode .sh_url { color: blue; text-decoration: underline; font-family: monospace; }

/* for ChangeLog and Log files */
pre.sh_sourceCode .sh_date { color: blue; font-weight: bold; }
pre.sh_sourceCode .sh_time, pre.sh_sourceCode .sh_file { color: #00008B; font-weight: bold; }
pre.sh_sourceCode .sh_ip, pre.sh_sourceCode .sh_name { color: #006400; }

/* for Prolog, Perl... */
pre.sh_sourceCode .sh_variable { color: #006400; }

/* for LaTeX */
pre.sh_sourceCode .sh_italics { color: #006400; font-style: italic; }
pre.sh_sourceCode .sh_bold { color: #006400; font-weight: bold; }
pre.sh_sourceCode .sh_underline { color: #006400; text-decoration: underline; }
pre.sh_sourceCode .sh_fixed { color: green; font-family: monospace; }
pre.sh_sourceCode .sh_argument { color: #006400; }
pre.sh_sourceCode .sh_optionalargument { color: purple; }
pre.sh_sourceCode .sh_math { color: orange; }
pre.sh_sourceCode .sh_bibtex { color: blue; }

/* for diffs */
pre.sh_sourceCode .sh_oldfile { color: orange; }
pre.sh_sourceCode .sh_newfile { color: #006400; }
pre.sh_sourceCode .sh_difflines { color: blue; }

/* for css */
pre.sh_sourceCode .sh_selector { color: purple; }
pre.sh_sourceCode .sh_property { color: blue; }
pre.sh_sourceCode .sh_value { color: #006400; font-style: italic; }

/* other */
pre.sh_sourceCode .sh_section { color: black; font-weight: bold; }
pre.sh_sourceCode .sh_paren { color: red; }
pre.sh_sourceCode .sh_attribute { color: #006400; }

/*]]>*/
</style>
<link charset="utf-8" type="text/css" rel="stylesheet" href=
"http://www.w3.org/StyleSheets/TR/W3C-WG-NOTE" />
</head>
<body>
<div class="head">
<p><a href="http://www.w3.org/"><img src=
"http://www.w3.org/Icons/w3c_home" alt="W3C" height="48" width=
"72" /></a></p>
<h1>A Method for Writing Testable Conformance Requirements</h1>
<h2><acronym title="World Wide Web Consortium">W3C</acronym>
Working Group Note 28 January 2010</h2>
<dl>
<dt>This Version:</dt>
<dd><a href=
"http://www.w3.org/TR/2010/NOTE-test-methodology-20100128/">http://www.w3.org/TR/2010/NOTE-test-methodology-20100128/</a></dd>
<dt>Latest Published Version:</dt>
<dd><a href=
"http://www.w3.org/TR/test-methodology/">http://www.w3.org/TR/test-methodology/</a></dd>
<dt>Previous version:</dt>
<dd>none</dd>
<dt>Editors:</dt>
<dd><a href="http://www.w3.org/People/Dom/">Dominique
Hazaël-Massieux</a>, <a href="http://w3.org/"><acronym title=
"World Wide Web Consortium">W3C</acronym></a></dd>
<dd><a href="http://datadriven.com.au/">Marcos Cáceres</a>,
<a href="http://opera.com/">Opera Software</a></dd>
</dl>
<p class="copyright"><a href=
"http://www.w3.org/Consortium/Legal/ipr-notice#Copyright">Copyright</a>
© 2010 <a href="http://www.w3.org/"><acronym title=
"World Wide Web Consortium">W3C</acronym></a><sup>®</sup>
(<a href="http://www.csail.mit.edu/"><acronym title=
"Massachusetts Institute of Technology">MIT</acronym></a>,
<a href="http://www.ercim.eu/"><acronym title="European Research Consortium for Informatics and Mathematics">
ERCIM</acronym></a>, <a href=
"http://www.keio.ac.jp/">Keio</a>), All Rights Reserved.
W3C <a href=
"http://www.w3.org/Consortium/Legal/ipr-notice#Legal_Disclaimer">liability</a>,
<a href=
"http://www.w3.org/Consortium/Legal/ipr-notice#W3C_Trademarks">trademark</a>
and <a href=
"http://www.w3.org/Consortium/Legal/copyright-documents">document
use</a> rules apply.</p>
<hr /></div>
<div class="introductory section" id="abstract">
<h2>Abstract</h2>
<p>In this document we present a method for writing, marking-up,
and analyzing conformance requirements in technical
specifications.</p>
</div>
<div id="sotd" class="introductory section">
<h2>Status of This Document</h2>
<p><em>This section describes the status of this document at the
time of its publication. Other documents may supersede this
document. A list of current <acronym title=
"World Wide Web Consortium">W3C</acronym> publications and the
latest revision of this technical report can be found in the
<a href="http://www.w3.org/TR/"><acronym title=
"World Wide Web Consortium">W3C</acronym> technical reports
index</a> at http://www.w3.org/TR/.</em></p>
<p>This is the first publication of this document as a Working Group
Note by the Mobile Web Initiative Test Suites Working Group. This
publication results from the collaboration between the Mobile Web
Initiative Test Suites Working Group and the Web Applications
Working Group on the development of test suites for the Widgets
family of specifications.</p>
<p>This document was published by the <a href=
"http://www.w3.org/2005/MWI/Tests/">Mobile Web Test Suites Working
Group</a> as a Working Group Note. If you wish to make comments
regarding this document, please send them to <a href=
"mailto:public-mwts@w3.org">public-mwts@w3.org</a> (<a href=
"mailto:public-mwts-request@w3.org?subject=subscribe">subscribe</a>,
<a href=
"http://lists.w3.org/Archives/Public/public-mwts/">archives</a>).
All feedback is welcome.</p>
<p>Publication as a Working Group Note does not imply endorsement
by the <acronym title="World Wide Web Consortium">W3C</acronym>
Membership. This is a draft document and may be updated, replaced
or obsoleted by other documents at any time. It is inappropriate to
cite this document as other than work in progress.</p>
<p>This document was produced by a group operating under the
<a href="http://www.w3.org/Consortium/Patent-Policy-20040205/">5
February 2004 <acronym title=
"World Wide Web Consortium">W3C</acronym> Patent Policy</a>.
<acronym title="World Wide Web Consortium">W3C</acronym> maintains
a <a href="http://www.w3.org/2004/01/pp-impl/40010/status" rel=
"disclosure">public list of any patent disclosures</a> made in
connection with the deliverables of the group; that page also
includes instructions for disclosing a patent. An individual who
has actual knowledge of a patent which the individual believes
contains <a href=
"http://www.w3.org/Consortium/Patent-Policy-20040205/#def-essential">
Essential Claim(s)</a> must disclose the information in accordance
with <a href=
"http://www.w3.org/Consortium/Patent-Policy-20040205/#sec-Disclosure">
section 6 of the <acronym title=
"World Wide Web Consortium">W3C</acronym> Patent Policy</a>.</p>
</div>
<div class="section" id="toc">
<h2 class="introductory">Table of Contents</h2>
<ul class="toc">
<li><a href="#intro"><span class="secno">1.</span>
Introduction</a></li>
<li><a href="#common-mistakes"><span class="secno">2.</span> Common
Mistakes</a></li>
<li><a href="#the-method"><span class="secno">3.</span> The
Method</a></li>
<li>
<ul class="toc">
<li><a href=
"#relationship-to-the-standardization-process"><span class=
"secno">3.1</span> Relationship to the standardization
process</a></li>
<li><a href="#value-of-applying-the-method"><span class=
"secno">3.2</span> Value of applying the method</a></li>
</ul>
</li>
<li><a href=
"#structural-components-of-a-conformance-requirement"><span class=
"secno">4.</span> Structural Components of a Conformance
Requirement</a></li>
<li><a href=
"#conventions-for-marking-up-conformance-requirements"><span class=
"secno">5.</span> Conventions for Marking-up Conformance
Requirements</a></li>
<li><a href="#extracting-conformance-requirements"><span class=
"secno">6.</span> Extracting Conformance Requirements</a></li>
<li><a href="#testable-assertions-and-test-----cases"><span class=
"secno">7.</span> Testable Assertions and Test Cases</a></li>
<li><a href="#conclusions"><span class="secno">8.</span>
Conclusions</a></li>
<li><a href="#references"><span class="secno">A.</span>
References</a></li>
<li>
<ul class="toc">
<li><a href="#normative-references"><span class="secno">A.1</span>
Normative references</a></li>
<li><a href="#informative-references"><span class=
"secno">A.2</span> Informative references</a></li>
</ul>
</li>
</ul>
</div>
<div class="section" id="intro">
<h2><span class="secno">1.</span> Introduction</h2>
<p>In this document we present a method for writing, marking-up,
and analyzing conformance requirements in technical
specifications.</p>
<p>We argue that the method yields specifications whose conformance
requirements are testable: that is, upon applying the method, parts
of what is written in the specification can be converted into a
test suite without requiring, for instance, the use of a formal
language.</p>
<p>The method was derived from a collaboration between the
<acronym title="World Wide Web Consortium">W3C</acronym>'s <a href=
"http://www.w3.org/2005/MWI/Tests/">Mobile Web Initiative: Test
Suites Working Group</a> and the <a href=
"http://www.w3.org/2008/webapps/">Web Applications Working
Group</a>. This collaboration aimed to improve the written quality
and testability of various specifications. The applications,
limitations, as well as possible directions for future work that
could refine this method are described in this document.</p>
</div>
<div class="section" id="common-mistakes">
<h2><span class="secno">2.</span> Common Mistakes</h2>
<p>When working on a specification, there are common mistakes an
editor can make when writing conformance requirements that make
them difficult, if not impossible, to test. For technical
specifications, the testability of a conformance requirement is
imperative: conformance requirements eventually become the test
cases that implementations rely on to claim conformance to a
specification. If no implementation can claim conformance, or if
aspects of the specification are not testable, then the probability
of a specification becoming a ratified standard, and, more
importantly, achieving interoperability among implementations, is
significantly reduced.</p>
<p>The most common mistakes that editors make when writing
conformance requirements include, but are not limited to:</p>
<ul>
<li>
<p>Creating conformance requirements for products that don’t have
behavior, e.g. “an XML file <em title="must" class=
"rfc2119">must</em> be well-formed.” — this cannot be tested since
it doesn’t say what the outcome is on that condition.</p>
</li>
<li>
<p>Using the passive voice to describe the behavior, e.g. “an
invalid XML file must be ignored” — this hides what product is
supposed to follow the prescribed behavior.</p>
</li>
<li>
<p>Using under-defined behaviors, e.g. “a user agent must reject
malformed XML” without defining what it means algorithmically to
“reject” something — this makes it impossible to define the outcome
of the testable assertion.</p>
</li>
</ul>
</div>
<div class="section" id="the-method">
<h2><span class="secno">3.</span> The Method</h2>
<p>Because conformance requirements are intertwined with the rest of the
text of a specification (as sentences, paragraphs, dot points,
etc.), it can be difficult to detect the various common mistakes.
For this reason, the first step in our method is to identify and
mark-up (using HTML) various structural components that constitute
a conformance requirement. Understanding these structural
components is important, because it is that structure that
determines the testability of a conformance requirement. We discuss
the structure of conformance requirements, as well as how to mark
them up, in more detail below.</p>
<p>Once conformance requirements have been marked up into their
component parts, then they can be extracted and analyzed outside
the context of the specification. Seeing a conformance requirement
out of context can often expose inconsistencies and redundancies
that may otherwise have been difficult for the editor, or an independent
reviewer, to identify. The ability to extract conformance
requirements from a specification also allows them to be used in
other contexts, such as in the creation of a test suite.</p>
<p>The general process that constitutes the method is as
follows:</p>
<ul>
<li>
<p>Mark up conformance requirements that need to be tested and give
them a stable identifier that will persist across drafts of the
specification.</p>
</li>
<li>
<p>Extract the conformance requirements and examine them
independently of the specification. Fix common mistakes and remove
any duplicates.</p>
</li>
<li>
<p>Establish a quality assurance process for both creating and
verifying the test cases. In our case, this included providing a
set of tools, templates, and methods explaining how to build useful
test cases for the said specification (see [<a href=
"#bib-WIDGETS-PC-TESTS" rel="biblioentry" class=
"bibref">WIDGETS-PC-TESTS</a>]). During the creation of the test
suite for the [<a href="#bib-WIDGETS" rel="biblioentry" class=
"bibref">WIDGETS</a>] specification, we also imposed a rule within
those working on the test suite that a test case had to be
independently verified before being committed into the final test
suite. Although defective test cases still made it into the test
suite, as more implementers worked their way through the test
suite, the more bugs were found and fixed (both in the
specification and in the test suite).</p>
</li>
<li>
<p>Create <a href="#dfn-test-case">test cases</a> and corresponding <a href="#dfn-testable-assertion">testable
assertions</a>. The act of converting prose into a computational
form (a test case) can also help expose redundancies, ambiguities,
and common mistakes. Bind the testable assertions to a conformance
requirement via a stable identifier.</p>
</li>
<li>
<p>Build at least one test case for each conformance requirement.
On average, our test suite contains 3 test cases per conformance
requirement, with some having 10 or more test cases.</p>
</li>
<li>
<p>Compare the results of running these test cases on existing
implementations to find bugs in the specification or in the test
suite. For example, if one finds that all implementations are
failing a test case, it might mean that the test case is
defective.</p>
</li>
<li>Republish the specification, call for implementations, and
gather feedback. Fix any issues that were identified and republish
the specification if necessary.</li>
</ul>
<p>As the Web Applications Working Group learned, it can be
problematic to enter the <acronym title=
"World Wide Web Consortium">W3C</acronym>’s Candidate
Recommendation phase without having a complete and thoroughly
verified test suite: because this method was mostly applied during
Candidate Recommendation, so many redundancies and issues were
found that the specification had to drop back to Working Draft.
This demonstrated that the method was effective, but that it needs
to be applied as early as possible in the specification writing
process.</p>
<div class="section" id=
"relationship-to-the-standardization-process">
<h3><span class="secno">3.1</span> Relationship to the
standardization process</h3>
<p>The standards organization, which in this case is the
<acronym title="World Wide Web Consortium">W3C</acronym>, plays a
significant role in relation to the method: the standards
organization provides access to a community of experts, as well as
the tools that facilitate the interaction and communication between
actors and the deliverables that are the outputs of a working
group.</p>
<p>Deliverables include the specification, testable assertions, and
test cases that constitute the test suite. Actors include editors,
test creators, QA engineers, implementers, and specification
reviewers. Actors, who in many cases will be the same person acting
in multiple roles, provide the intelligence that improves
the quality of deliverables.</p>
<p>The tools provided by the standards organization harness the
collective intelligence of actors (by capturing their interactions
and communications). Some of the tools provided by the
<acronym title="World Wide Web Consortium">W3C</acronym> include
CVS, IRC, a web server, phone bridge, the technical report
repository, publication rules checker, issue tracking software, and
a mailing list. The standards organization also provides the legal
framework that allows multiple competing entities to share
intellectual property and collaborate with each other.</p>
<p>The method simply taps into the community-driven process that is
standardization, which, through its process, is structured to
produce high-quality peer-reviewed work. The following diagram
visualizes how actors communicate with each other to improve the
quality of various deliverables through tools provided by the
standards organization. As can be seen, the interactions between actors,
tools, and deliverables form feedback loops that serve to improve
the work being produced by the working group.</p>
<p><img src="testing-methodology.png" alt=
"Diagram summarizing the described testing methodology" height=
"484" width="982" /></p>
</div>
<div class="section" id="value-of-applying-the-method">
<h3><span class="secno">3.2</span> Value of applying the
method</h3>
<p>An economic case can be made for identifying and removing
redundant conformance requirements from a specification: consider
that an average-size specification can have around 50 conformance
requirements, and each conformance requirement will require one or
more test cases. Each test case will require a testable assertion,
which may be either written in prose (e.g., “to pass,
<code>a</code> must equal <code>b</code>.”) or expressed
computationally (e.g., <code>if(a===b)</code>). In terms of
resource allocation, someone needs to either manually create or
computationally generate the test cases. Someone then needs to
verify if each test case actually tests the conformance
requirement, and, where it doesn’t, fixes need to be made to either
the test case or to the specification.</p>
<p>Eventually, QA engineers will need to run the test cases and
conformance violations will need to be reported by filing bug
reports. Even in a pure computational setting, having redundant
tests in a test suite still results in wasted CPU cycles every time
a build of software is run against the test suite. If redundant
tests build up, it can have a significant impact on quality
assurance processes where it can take hours - or sometimes days -
to run builds of a product through various test suites.</p>
<p>Simply put, a test suite for a specification should only test
what is necessary for a product to conform - and no more.</p>
<p>It should be noted that “acid tests” (e.g., the <a href=
"http://acid3.acidtests.org/">Acid3</a> test) certainly have an
important role in creating interoperability by exposing erroneous
edge-case behavior and the limitations of implementations. But such
stress tests are typically beyond the scope of a test suite for a
specification.</p>
</div>
</div>
<div class="section" id=
"structural-components-of-a-conformance-requirement">
<h2><span class="secno">4.</span> Structural Components of a
Conformance Requirement</h2>
<p>To be testable, a conformance requirement must contain the
necessary information to create a testable assertion, as
described in <cite>The Structure of a Test Assertion</cite> in the
<cite>Test Assertions Guidelines</cite> [<a href="#bib-OASIS-TAG"
rel="biblioentry" class="bibref">OASIS-TAG</a>].</p>
<p>Consider the following conformance requirement from the
[<a href="#bib-WIDGETS" rel="biblioentry" class=
"bibref">WIDGETS</a>] specification:</p>
<blockquote>
<p>If the <code>src</code> attribute of the <code>content</code>
element is absent or an empty string, then the user agent
<em>must</em> ignore this element.</p>
</blockquote>
<p>The structure of the conformance requirement can be decomposed
into the following structural components:</p>
<dl>
<dt>Product</dt>
<dd>
<p>A product that is supposed to follow the requirement — in this
case, the “user agent”. (See also the definition of “classes of
product” in [<a href="#bib-QAFRAME-SPEC" rel="biblioentry" class=
"bibref">QAFRAME-SPEC</a>].)</p>
</dd>
<dt>Strictness level</dt>
<dd>
<p>The strictness of the applicability of the requirement to a
product — in this case, “the user agent <em>must</em>” do
something. <acronym title=
"World Wide Web Consortium">W3C</acronym> specifications
use the [<a href="#bib-RFC2119" rel="biblioentry" class=
"bibref">RFC2119</a>] keywords (<em title="must" class=
"rfc2119">must</em>, <em title="should" class=
"rfc2119">should</em>, <em title="may" class="rfc2119">may</em>,
etc.) to indicate the level of requirement that is imposed on a
product.</p>
</dd>
<dt>Prerequisites</dt>
<dd>
<p>An explanation of the prerequisites that need to be in place in
order for the requirement to apply — in this case, “if the src
attribute of the content element is absent or an empty string”.</p>
</dd>
<dt>Behavior</dt>
<dd>
<p>A clear explanation of what the product is supposed to do — in
this case, “ignore this element”.</p>
</dd>
<dt>Terms</dt>
<dd>
<p>Keywords that are relevant to understanding how to apply the
desired behavior. For instance, what it actually means to “ignore”
(definitively and algorithmically) needs to be specified somewhere
in the specification.</p>
<p>Terms take one of three forms in a specification: an
algorithm, a definition, or a statement of fact.</p>
</dd>
<dd>
<p>An example of an algorithm:</p>
<blockquote>
<p>In the case the user agent is asked to ignore an [XML] element
or node, a user agent:</p>
<ol>
<li>
<p>Stops processing the current <var>element</var>, ignoring all of
the <var>element</var>‘s attributes and child nodes (if any), and
proceed to the next <var>element</var> in the <var>elements
list</var>.</p>
</li>
<li>
<p>Make a record that it has attempted to process an element of
that type.</p>
</li>
</ol>
</blockquote>
<p>An example of a definition:</p>
<blockquote><p>A user agent is an implementation of this
specification that also supports XML&hellip;</p></blockquote>
<p>An example of a statement of fact:</p>
<blockquote><p>A user agent will need to keep a record of all
element types it has attempted to process even if they were ignored
(this is to determine if the user agent has attempted to process an
element of a given type already).</p></blockquote>
</dd>
</dl>
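<p>As an informal aside, these structural components can be captured in a
simple data structure, which can make gaps (a missing product, strictness
level, or behavior) easier to spot. The following sketch is purely
illustrative; the object shape and field names are hypothetical and are not
defined by this document:</p>
<pre>
<code>// Illustrative sketch only: the conformance requirement quoted above,
// decomposed into its structural components. Field names are hypothetical.
var requirement = {
  id: "ta-xxxxxxxxxx",              // hypothetical stable identifier (see section 5)
  product: "user agent",            // class of product
  level: "must",                    // RFC 2119 strictness level
  prerequisites: "the src attribute of the content element " +
                 "is absent or an empty string",
  behavior: "ignore this element",
  terms: ["ignore", "user agent"]   // terms whose definitions must exist in the spec
};</code>
</pre>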
<p>With an understanding of the structural components that need
to be present in every conformance requirement, an editor can then
use the following conventions to mark up their specification.</p>
</div>
<div class="section" id=
"conventions-for-marking-up-conformance-requirements">
<h2><span class="secno">5.</span> Conventions for Marking-up
Conformance Requirements</h2>
<p>Using mark-up makes it possible to exploit the structure of
conformance requirements for various purposes, particularly for
verifying that conformance requirements don’t exhibit the common
mistakes. Here, we describe how we made use of HTML to mark up
conformance requirements in the [<a href="#bib-WIDGETS" rel=
"biblioentry" class="bibref">WIDGETS</a>] specification. However,
this should be considered purely as an example that would need to
be adapted to fit each specification’s particularities.</p>
<p>Consider the following conformance requirement from the
[<a href="#bib-WIDGETS" rel="biblioentry" class=
"bibref">WIDGETS</a>] specification, as we will make use of it in
this section:</p>
<blockquote>
<p id="ta-a1"><q>If a user agent encounters a file matching a file
name given in the file name column of the default start files table
in an arbitrary folder, then user agent <em class="ct">must</em>
treat that file as an arbitrary file.</q></p>
</blockquote>
<p>From the previous section, we know that the relevant structural
components are:</p>
<ul>
<li>Product: <q>the user agent</q>.</li>
<li>Strictness level: <q>must</q>.</li>
<li>Prerequisites: <q>If a user agent encounters a file matching a
file name given in the file name column of the default start files
table in an arbitrary folder</q></li>
<li>Behavior: <q>treat that file as an arbitrary file</q>.</li>
<li>Terms: <q>file</q>, <q>folder</q>, <q>file name</q>,
<q>arbitrary</q>, <q>default start files table.</q></li>
</ul>
<p>Having identified all the component parts, our method for
marking up the conformance requirement is as follows (in no
particular order):</p>
<ul>
<li>
<p>Isolate each conformance requirement within an appropriate HTML
element, such as a <code>p</code> element — this isolates all the
useful information into a single logical container, making it easy
to extract and examine out of context. We discuss methods for
extracting conformance requirements in the next section of this
document.</p>
</li>
<li>
<p>Assign a unique identifier to each conformance requirement
and mark it as testable — in our case, each conformance requirement
is uniquely identified through the <code>id</code> attribute on the
<code>p</code> element; the unique identifier starts by convention
with <code>ta-</code>, which denotes it as a conformance requirement,
followed by a randomly generated string (e.g., <code>&lt;p
id="ta-abc"&gt;</code>). The uniqueness of the id can be verified
by running the HTML document through a <a href=
"http://validator.w3.org">validator</a>.</p>
<p>This same <code>id</code> attribute is also useful as it allows
linking back to the exact point in the specification where the
assertion is made. This proved useful during testing, where the
tester can get more context on the definitions and the spirit of
the assertion when the letter of it is not enough.</p>
<p>When creating the test suite, distinguishing conformance
requirements from other parts of the specification allows test
cases to be grouped by conformance requirement through exploiting
this identifier. In addition, examining how many test cases are
grouped around a conformance requirement is useful for assessing
how much of the specification has been, or needs to be, tested and
verified.</p>
<p>Note that in the case of the [<a href="#bib-WIDGETS" rel=
"biblioentry" class="bibref">WIDGETS</a>] specification, the
randomly generated strings were 10 characters long (e.g.,
<samp>ta-qxLSCRCHlN</samp>) which proved somewhat cumbersome for
people to work with. Other specifications the Working Group is
working on use much shorter identifiers (two letters).</p>
</li>
<li>
<p>Explicitly identify the product to which the conformance
requirement applies — this defines how the test cases are built in
the test suite, based on how the product is supposed to operate.
The conformance product to which the requirement applies is marked
up with a class attribute set to one of the predefined values — in
the case of the [<a href="#bib-WIDGETS" rel="biblioentry" class=
"bibref">WIDGETS</a>] specification, the product was identified
using a span element with a class attribute value of
<code>product-ua</code>. For example, “<code>A &lt;span
class="product-ua"&gt;user agent&lt;/span&gt;</code>”.</p>
</li>
<li>
<p>Identify the level of requirement for conformance — that is, use
an element to explicitly mark-up [<a href="#bib-RFC2119" rel=
"biblioentry" class="bibref">RFC2119</a>] keywords (<em title=
"must" class="rfc2119">must</em>, <em title="should" class=
"rfc2119">should</em>, <em title="may" class="rfc2119">may</em>,
etc.). Doing so is useful for identifying aspects of the
specification that must be included in the test suite, and aspects
in the specification that might not be worth testing (e.g.,
conformance requirements that make use of the RFC2119 “<em class=
"ct">optional</em>” keyword).</p>
<p>The level of requirement is marked up by an emphasis element
(<code>&lt;em&gt;</code>) with a <code>class</code> attribute set to
<code>ct</code>; this mark-up convention also allows determining if
a given paragraph of the specification contains a requirement or
not. For example, <code>&lt;em
class="ct"&gt;must&lt;/em&gt;</code>.</p>
</li>
<li>
<p>Hyperlink to the appropriate terms. For example, “encounters a
<code>&lt;a class="term"
href="#file"&gt;</code>file<code>&lt;/a&gt;</code> matching”.</p>
</li>
</ul>
<p>The following code shows what the conformance requirement
presented at the start of this section would look like once the
above dot points are applied:</p>
<blockquote>
<p><code>&lt;p <strong>id="ta-a1"</strong>&gt;</code>If a user
agent encounters a <code><strong>&lt;a
href="#file"&gt;</strong></code>file<code>&lt;/a&gt;</code>
matching a file name given in the file name column of the
<code>&lt;a href="#default-start-files-table"&gt;</code>default
start files table<code>&lt;/a&gt;</code> in an <code>&lt;a
href="#arbitrary"&gt;</code>arbitrary<code>&lt;/a&gt; &lt;a
href="#folder"&gt;</code>folder<code>&lt;/a&gt;</code>, then
<code>&lt;a <strong>class="product-ua"</strong>
href="#user-agent"&gt;</code>user agent<code>&lt;/a&gt;
<strong>&lt;em
class="ct"&gt;</strong></code>must<code>&lt;/em&gt;</code> treat
that file as an <code>&lt;a
href="#arbitrary"&gt;</code>arbitrary<code>&lt;/a&gt;</code>
file.<code>&lt;/p&gt;</code></p>
</blockquote>
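<p>Because these conventions give each conformance requirement a predictable
structure, some of the common mistakes described in section 2 can be flagged
mechanically. The following is a minimal, hypothetical sketch (it is not part
of the tooling described in this document) that uses jQuery to report
marked-up requirements that do not identify a product or do not contain a
marked-up RFC 2119 keyword:</p>
<pre>
<code>// Hypothetical check: each conformance requirement (p[id^="ta-"]) should
// identify a product (an element whose class starts with "product-") and
// contain at least one RFC 2119 keyword marked up as an em with class "ct".
function checkRequirements(spec) {
  $(spec).find('p[id^="ta-"]').each(function () {
    var req = $(this);
    if (req.find('[class^="product-"]').length === 0) {
      console.log(req.attr('id') + ': no product identified');
    }
    if (req.find('em.ct').length === 0) {
      console.log(req.attr('id') + ': no RFC 2119 keyword marked up');
    }
  });
}</code>
</pre>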
</div>
<div class="section" id="extracting-conformance-requirements">
<h2><span class="secno">6.</span> Extracting Conformance
Requirements</h2>
<p>After the [<a href="#bib-WIDGETS" rel="biblioentry" class=
"bibref">WIDGETS</a>] specification was marked up using the
conventions described above, the conformance requirements were
extracted using an <a href=
"http://dev.w3.org/2006/waf/widgets/tests/extractTestAssertions.xsl">
XSLT style sheet</a> which served as the basis for <a href=
"http://lists.w3.org/Archives/Public/public-webapps/2009AprJun/1021.html">
a review of the testability of the specification</a>.</p>
<p>Over time, the XSLT style sheet was discarded in favor of using
<a href=
"http://dev.w3.org/2006/waf/widgets-shared/javascript/ShowTestSuite.js">
a JavaScript system</a>. The JavaScript-based system replicates
what the XSLT style sheet was doing, but then mashes the
conformance requirements with an <a href=
"http://dev.w3.org/2006/waf/widgets/test-suite/test-suite.xml">XML
document</a> that describes all the tests cases in the test suite.
This allows those working on the specification and on the test
suite to not only see the conformance requirements, but also what
test cases have been created.</p>
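<p>The details of that JavaScript system are beyond the scope of this
document, but the basic idea can be sketched as follows. Assuming the test
suite description format shown in section 7 (a <code>testsuite</code> element
containing <code>test</code> elements whose <code>for</code> attribute names a
conformance requirement), test cases can be grouped by the requirement they
target. The function below is a hypothetical sketch, not the actual
script:</p>
<pre>
<code>// Hypothetical sketch: group test cases from the test suite XML document
// by the conformance requirement they target (the "for" attribute), so each
// extracted requirement can be displayed alongside its test cases.
function groupTestsByRequirement(testSuiteXml) {
  var byRequirement = {};
  $(testSuiteXml).find('test').each(function () {
    var taId = $(this).attr('for');    // e.g. "ta-a1"
    if (!byRequirement[taId]) {
      byRequirement[taId] = [];
    }
    byRequirement[taId].push({
      id: $(this).attr('id'),          // test case identifier
      src: $(this).attr('src'),        // resource representing the test case
      description: $(this).text()      // expected outcome, in prose
    });
  });
  return byRequirement;
}</code>
</pre>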
<p>Because we wrap all conformance requirements in <code>p</code>
elements, the actual process of extracting conformance requirements
is relatively simple. We use the <a href=
"http://jquery.org">JQuery</a> JavaScript library, in conjunction
with a simple CSS selector, as the means to extract the conformance
requirements. The CSS selector finds all <code>p</code> elements in
the document that have an <code>id</code> attribute that starts
with the string <code>ta-</code>. For example:</p>
<pre>
<code>function processSpec(spec){
  // CSS selector matching marked-up conformance requirements
  var taSelector = 'p[id^="ta-"]';

  // Extracted nodes
  var requirements = $(spec).find(taSelector);

  // Display the results...
  requirements.each(function(){...});
}</code>
</pre></div>
<div class="section" id="testable-assertions-and-test-----cases">
<h2><span class="secno">7.</span> Testable Assertions and Test
Cases</h2>
<p>The working groups found that, once conformance requirements had
been extracted, the work of creating test cases for a test suite
was significantly simplified.</p>
<p>A <dfn id="dfn-test-case">test case</dfn> is a machine
processable object that is used to test one or more conformance
requirements. A <dfn id="dfn-testable-assertion">testable
assertion</dfn>, on the other hand, is a prose description of a
test case intended for human testers - i.e., for a given test case,
the testable assertion defines exactly what the user agent needs to do
(behaviorally or conditionally) to pass the test case. It is
important to note that testable assertions don’t appear in a
specification - they only appear in a test suite to describe a test
case.</p>
<p>To create a test, a test writer looks at a given conformance
requirement, creates a test case that matches the prerequisites
set in the requirement, and documents the expected outcome
described by the required behavior as a testable assertion (or vice
versa).</p>
<p>To demonstrate how testable assertions are written, again
consider the following conformance requirement from the
<cite>Widgets Packaging and Configuration</cite> [<a href=
"#bib-WIDGETS" rel="biblioentry" class="bibref">WIDGETS</a>]
specification:</p>
<blockquote>
<p>If the <code>src</code> attribute of the <code>content</code>
element is absent or an empty string, then the user agent
<em>must</em> ignore this element.</p>
</blockquote>
<p>After following the definitions of the terms given in the
specification, the conformance requirement above can be turned into
one or more testable assertions (which are used to derive test
cases for the test suite).</p>
<p>Two examples of testable assertions derived from the above
conformance requirement:</p>
<ul>
<li>
<blockquote>
<p><q>Test that the user agent skips a content element with no src
attribute and loads default start file. To pass, the user agent
must use as start file <code>index.htm</code> at the root of the
widget.</q></p>
</blockquote>
</li>
<li>
<blockquote>
<p><q>Test that the user agent skips a content element that points
to a non-existing file. To pass, the user agent must use as start
file <code>index.htm</code>.</q></p>
</blockquote>
</li>
</ul>
<p>And the corresponding test cases for the testable assertions
take the following computable form (intended for the user
agent):</p>
<ul>
<li>
<pre>
<code>&lt;widget xmlns="http://www.w3.org/ns/widgets"&gt;
   &lt;content src="" type="text/html"/&gt;
&lt;/widget&gt;</code>
</pre></li>
<li>
<pre>
<code>&lt;widget xmlns="http://www.w3.org/ns/widgets"&gt;
  &lt;content src="doesnotexist.html"/&gt;
&lt;/widget&gt;</code>
</pre></li>
</ul>
<p>Each test case can be associated with a given testable assertion;
later on, when running the test suite, a failing test case can be
traced back to the assertion behind it, making it possible to
evaluate whether the implementation, the test case, or the
specification is wrong.</p>
<p>To maintain the association between test cases and test
assertions, a simple <a href=
"http://dev.w3.org/2006/waf/widgets/test-suite/test-suite.xml">XML
file</a> was created:</p>
<blockquote>
<pre>
<code>&lt;testsuite for="http://www.w3.org/TR/widgets/"&gt;
&lt;test id="b5"
    for="ta-a1"
    src="test-cases/ta-a1/000/b5.wgt"&gt;
Tests that a UA does not go searching in an 
arbitrary folder ("abc123") for default start 
files. To pass, the user agent must treat this 
widget as an invalid widget.
&lt;/test&gt;
&lt;test ...&gt; ... &lt;/test&gt;
&lt;/testsuite&gt;</code>
</pre></blockquote>
<p>The <code>testsuite</code> element serves as a wrapper for the
test cases of the test suite. It also identifies, through a URI set
in the <code>for</code> attribute, the specification being
tested.</p>
<p>The <code>test</code> element, on the other hand, describes a
single test case by:</p>
<ul>
<li>
<p>Providing a unique identifier for the test case; set in the
<code>id</code> attribute.</p>
</li>
<li>
<p>Identifying the conformance requirement being tested; set in the
<code>for</code> attribute.</p>
</li>
<li>
<p>Linking to the resource that represents the test case; set in
the <code>src</code> attribute.</p>
</li>
<li>
<p>And finally, describing the expected outcome of the test; set as
the textual content of the element.</p>
</li>
</ul>
<p>This XML file is used to generate the final round of packaging and
information needed for the test suite:</p>
<ul>
<li>
<p>Its content is integrated in the test suite description document
[<a href="#bib-WIDGETS-PC-TESTS" rel="biblioentry" class=
"bibref">WIDGETS-PC-TESTS</a>] with JavaScript to attach test cases
to the previously extracted test assertions.</p>
</li>
<li>
<p>It allows quick assessment of the coverage of the test suite by
finding which conformance requirements don’t have matching test
cases (a sketch of such a check follows this list).</p>
</li>
<li>
<p>The list of test cases can be used to create simple test
harnesses for widget runtime engines.</p>
</li>
<li>
<p>The same list is used to generate an <a href=
"http://dev.w3.org/2006/waf/widgets/imp-report/">implementation
report</a> [<a href="#bib-WIDGETS-PC-INTEROP" rel="biblioentry"
class="bibref">WIDGETS-PC-INTEROP</a>] comparing the results of
running the test cases for various run time engines.</p>
</li>
</ul>
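<p>The coverage assessment mentioned above can be automated in a few lines.
The following is a hypothetical sketch (not the working group’s actual
tooling) that compares the conformance requirement identifiers found in the
specification against the <code>for</code> attributes in the test suite XML
document:</p>
<pre>
<code>// Hypothetical coverage check: list conformance requirements in the
// specification that have no matching test case in the test suite XML.
function findUntestedRequirements(spec, testSuiteXml) {
  var covered = {};
  $(testSuiteXml).find('test').each(function () {
    covered[$(this).attr('for')] = true;
  });
  var untested = [];
  $(spec).find('p[id^="ta-"]').each(function () {
    var taId = $(this).attr('id');
    if (!covered[taId]) {
      untested.push(taId);
    }
  });
  return untested;
}</code>
</pre>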
<p>The implementation reports also leverage the identifiers
assigned to each test case to indicate if the implementation has
passed or failed a test. In order to create the implementation
reports, the Working Group created another simple XML format: each
implementer is assigned an XML file, which in most cases they
themselves maintain.</p>
<p>An example of the results format:</p>
<pre>
<code>&lt;results testsuite="http://dev.w3.org/2006/waf/widgets/test-suite/test-suite.xml"
  id="Opera"
  product="Opera widgets"
  href="http://opera.com/browser/next"&gt;

  &lt;result for="b5" verdict="pass"/&gt;

  &lt;result for="dn" verdict="fail"&gt;
  Opera did not process the file because it did not
  have a .wgt file extension.
  &lt;/result&gt;

&lt;/results&gt;</code>
</pre>
<p>The <code>results</code> element serves as a wrapper that
describes what test suite was tested, and some basic details about
the implementation by:</p>
<ul>
<li>
<p>Identifying the test suite by its URI, set in the
<code>testsuite</code> attribute.</p>
</li>
<li>
<p>Identifying the product in a human-readable form; set in the
<code>product</code> attribute.</p>
</li>
<li>
<p>Providing a hyperlink, set in the <code>href</code> attribute, to
where independent parties can either get more information about a
product or download the product so they can, where possible, verify
the results independently.</p>
</li>
</ul>
<p>The <code>result</code> element, on the other hand, describes an
individual result gained from testing by:</p>
<ul>
<li>
<p>Identifying the test case by its id; set in the <code>for</code>
attribute.</p>
</li>
<li>Indicating what happened when the test was run; set in the
<code>verdict</code> attribute, commonly to the value
<code>pass</code> or <code>fail</code>. In certain cases, however,
it was not possible for a tester to say if a test had passed or
failed, so new verdicts had to be created. We discuss these
below.</li>
</ul>
<p>Tallying the results allowed the working group to easily
visualize the data in the test suite for each product:</p>
<p><img src="chart.png" alt=
"A bar chart of a graph showing different segments representing, pass, fails, and other bits of information about the conformance of a product to the test suite." /></p>
<p>At a glance, it is possible to see for an implementation the
number of test cases passed, failed, and untested. Where the
testing was being conducted independently (i.e., not by the
implementer), it was also possible to visualize where it was not
possible to run a test because, for example, there was no way to
get at a result without having direct access to the source code of
the product. And where the test ran, but it was not possible to
determine if the test actually passed or failed, the verdict was
labeled as <code>incomplete</code>.</p>
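<p>Verdicts in the results files can be tallied mechanically to produce the
kinds of charts shown in this section. The following is a minimal sketch,
assuming the results format shown above; it is not the working group’s
actual tooling:</p>
<pre>
<code>// Hypothetical tally: count verdicts in a results XML document so they
// can be charted (pass, fail, plus other verdicts such as "incomplete").
function tallyVerdicts(resultsXml) {
  var counts = {};
  $(resultsXml).find('result').each(function () {
    var verdict = $(this).attr('verdict');
    counts[verdict] = (counts[verdict] || 0) + 1;
  });
  return counts;   // e.g. { pass: 12, fail: 3, incomplete: 1 }
}</code>
</pre>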
<p>Having the raw results data also allowed the working group to
visualize at a glance how conformant each implementation is to the
specification:</p>
<p><img src="ometer.png" alt=
"Graph showing a 17 percent level of conformance for a given implementation" /></p>
<p>The above meter simply represents the number of tests passed by
an implementation. However, the working groups found it
particularly useful to be able to see all the meters and charts
together:</p>
<p><img src="sidebyside.png" alt=
"The meters and pie charts all seen next to each other." /></p>
</div>
<div class="section" id="conclusions">
<h2><span class="secno">8.</span> Conclusions</h2>
<p>While the method described in this document uses three separate
steps (marking up the specification, making the specification
testable, and linking test assertions to test cases), these steps
don’t have to be applied sequentially, and in practice work best as
an iterative process.</p>
<p>Although it has some limitations and shortcomings, this method
has proved effective for the [<a href="#bib-WIDGETS" rel=
"biblioentry" class="bibref">WIDGETS</a>] specification, and is now
being applied to the other Widgets specifications developed by the
Web Applications Working Group.</p>
</div>
<div class="appendix section" id="references">
<h2><span class="secno">A.</span> References</h2>
<div class="section" id="normative-references">
<h3><span class="secno">A.1</span> Normative references</h3>
<p>No normative references.</p>
</div>
<div class="section" id="informative-references">
<h3><span class="secno">A.2</span> Informative references</h3>
<dl class="bibliography">
<dt id="bib-OASIS-TAG">[OASIS-TAG]</dt>
<dd>Stephen D. Green, Dmitry Kostovarov. <a href=
"http://docs.oasis-open.org/tag/guidelines/v1.0/testassertionsguidelines.html">
<cite>Test Assertions Guidelines</cite></a>. OASIS Committee Draft
(Work in progress). URL: <a href=
"http://docs.oasis-open.org/tag/guidelines/v1.0/testassertionsguidelines.html">
http://docs.oasis-open.org/tag/guidelines/v1.0/testassertionsguidelines.html</a></dd>
<dt id="bib-QAFRAME-SPEC">[QAFRAME-SPEC]</dt>
<dd>Lynne Rosenthal; et al. <a href=
"http://www.w3.org/TR/2005/REC-qaframe-spec-20050817"><cite>QA
Framework: Specification Guidelines.</cite></a> 17 August 2005. W3C
Recommendation. URL: <a href=
"http://www.w3.org/TR/2005/REC-qaframe-spec-20050817">http://www.w3.org/TR/2005/REC-qaframe-spec-20050817</a></dd>
<dt id="bib-RFC2119">[RFC2119]</dt>
<dd>S. Bradner. <a href=
"http://www.ietf.org/rfc/rfc2119.txt"><cite>Key words for use in
RFCs to Indicate Requirement Levels.</cite></a> Internet RFC 2119.
URL: <a href=
"http://www.ietf.org/rfc/rfc2119.txt">http://www.ietf.org/rfc/rfc2119.txt</a></dd>
<dt id="bib-WIDGETS">[WIDGETS]</dt>
<dd>Marcos Cáceres. <a href=
"http://www.w3.org/TR/2009/CR-widgets-20091201/"><cite>Widget
Packaging and Configuration.</cite></a> 01 December 2009. W3C
Candidate Recommendation. (Work in progress.) URL: <a href=
"http://www.w3.org/TR/2009/CR-widgets-20091201/">http://www.w3.org/TR/2009/CR-widgets-20091201/</a></dd>
<dt id="bib-WIDGETS-PC-INTEROP">[WIDGETS-PC-INTEROP]</dt>
<dd>Marcos Cáceres, Samuel Santos, Daniel Silva. <a href=
"http://dev.w3.org/2006/waf/widgets/imp-report/">Implementation
Report: Widgets Packaging and Configuration</a>, URL: <a href=
"http://dev.w3.org/2006/waf/widgets/imp-report/">http://dev.w3.org/2006/waf/widgets/imp-report/</a></dd>
<dt id="bib-WIDGETS-PC-TESTS">[WIDGETS-PC-TESTS]</dt>
<dd>Marcos Cáceres, Kai Hendry. <a href=
"http://dev.w3.org/2006/waf/widgets/test-suite/"><cite>Test Suite
for Packaging and Configuration.</cite></a> W3C Test Suite. (Work
in progress.) URL: <a href=
"http://dev.w3.org/2006/waf/widgets/test-suite/">http://dev.w3.org/2006/waf/widgets/test-suite/</a></dd>
</dl>
</div>
</div>
</body>
</html>