<?xml version='1.0' encoding='UTF-8'?>
<!DOCTYPE html PUBLIC '-//W3C//DTD XHTML+RDFa 1.0//EN' 'http://www.w3.org/MarkUp/DTD/xhtml-rdfa-1.dtd'>
<html dir="ltr" about="" property="dcterms:language" content="en" xmlns="http://www.w3.org/1999/xhtml" xmlns:dcterms='http://purl.org/dc/terms/' xmlns:bibo='http://purl.org/ontology/bibo/' xmlns:foaf='http://xmlns.com/foaf/0.1/' xmlns:xsd='http://www.w3.org/2001/XMLSchema#'>
<head>

	
		<title>Media Accessibility User Requirements</title>
		<meta content="text/html;charset=utf-8" http-equiv="Content-Type" />
		
		
		<style type="text/css">
.req-handle
{
	font-weight : bold;
	text-transform : uppercase;
}
.list-in-req
{
	list-style-type : lower-alpha;
}
		</style>
	<style type="text/css">
/*****************************************************************
 * ReSpec CSS
 * Robin Berjon (robin at berjon dot com)
 * v0.05 - 2009-07-31
 *****************************************************************/


/* --- INLINES --- */
em.rfc2119 { 
    text-transform:     lowercase;
    font-variant:       small-caps;
    font-style:         normal;
    color:              #900;
}

h1 acronym, h2 acronym, h3 acronym, h4 acronym, h5 acronym, h6 acronym, a acronym,
h1 abbr, h2 abbr, h3 abbr, h4 abbr, h5 abbr, h6 abbr, a abbr {
    border: none;
}

dfn {
    font-weight:    bold;
}

a.internalDFN {
    color:  inherit;
    border-bottom:  1px solid #99c;
    text-decoration:    none;
}

a.externalDFN {
    color:  inherit;
    border-bottom:  1px dotted #ccc;
    text-decoration:    none;
}

a.bibref {
    text-decoration:    none;
}

code {
    color:  #ff4500;
}


/* --- WEB IDL --- */
pre.idl {
    border-top: 1px solid #90b8de;
    border-bottom: 1px solid #90b8de;
    padding:    1em;
    line-height:    120%;
}

pre.idl::before {
    content:    "WebIDL";
    display:    block;
    width:      150px;
    background: #90b8de;
    color:  #fff;
    font-family:    initial;
    padding:    3px;
    font-weight:    bold;
    margin: -1em 0 1em -1em;
}

.idlType {
    color:  #ff4500;
    font-weight:    bold;
    text-decoration:    none;
}

/*.idlModule*/
/*.idlModuleID*/
/*.idlInterface*/
.idlInterfaceID, .idlDictionaryID {
    font-weight:    bold;
    color:  #005a9c;
}

.idlSuperclass {
    font-style: italic;
    color:  #005a9c;
}

/*.idlAttribute*/
.idlAttrType, .idlFieldType, .idlMemberType {
    color:  #005a9c;
}
.idlAttrName, .idlFieldName, .idlMemberName {
    color:  #ff4500;
}
.idlAttrName a, .idlFieldName a, .idlMemberName a {
    color:  #ff4500;
    border-bottom:  1px dotted #ff4500;
    text-decoration: none;
}

/*.idlMethod*/
.idlMethType {
    color:  #005a9c;
}
.idlMethName {
    color:  #ff4500;
}
.idlMethName a {
    color:  #ff4500;
    border-bottom:  1px dotted #ff4500;
    text-decoration: none;
}

/*.idlParam*/
.idlParamType {
    color:  #005a9c;
}
.idlParamName {
    font-style: italic;
}

.extAttr {
    color:  #666;
}

/*.idlConst*/
.idlConstType {
    color:  #005a9c;
}
.idlConstName {
    color:  #ff4500;
}
.idlConstName a {
    color:  #ff4500;
    border-bottom:  1px dotted #ff4500;
    text-decoration: none;
}

/*.idlException*/
.idlExceptionID {
    font-weight:    bold;
    color:  #c00;
}

.idlTypedefID, .idlTypedefType {
    color:  #005a9c;
}

.idlRaises, .idlRaises a.idlType, .idlRaises a.idlType code, .excName a, .excName a code {
    color:  #c00;
    font-weight:    normal;
}

.excName a {
    font-family:    monospace;
}

.idlRaises a.idlType, .excName a.idlType {
    border-bottom:  1px dotted #c00;
}

.excGetSetTrue, .excGetSetFalse, .prmNullTrue, .prmNullFalse, .prmOptTrue, .prmOptFalse {
    width:  45px;
    text-align: center;
}
.excGetSetTrue, .prmNullTrue, .prmOptTrue { color:  #0c0; }
.excGetSetFalse, .prmNullFalse, .prmOptFalse { color:  #c00; }

.idlImplements a {
    font-weight:    bold;
}

dl.attributes, dl.methods, dl.constants, dl.fields, dl.dictionary-members {
    margin-left:    2em;
}

.attributes dt, .methods dt, .constants dt, .fields dt, .dictionary-members dt {
    font-weight:    normal;
}

.attributes dt code, .methods dt code, .constants dt code, .fields dt code, .dictionary-members dt code {
    font-weight:    bold;
    color:  #000;
    font-family:    monospace;
}

.attributes dt code, .fields dt code, .dictionary-members dt code {
    background:  #ffffd2;
}

.attributes dt .idlAttrType code, .fields dt .idlFieldType code, .dictionary-members dt .idlMemberType code {
    color:  #005a9c;
    background:  transparent;
    font-family:    inherit;
    font-weight:    normal;
    font-style: italic;
}

.methods dt code {
    background:  #d9e6f8;
}

.constants dt code {
    background:  #ddffd2;
}

.attributes dd, .methods dd, .constants dd, .fields dd, .dictionary-members dd {
    margin-bottom:  1em;
}

table.parameters, table.exceptions {
    border-spacing: 0;
    border-collapse:    collapse;
    margin: 0.5em 0;
    width:  100%;
}
table.parameters { border-bottom:  1px solid #90b8de; }
table.exceptions { border-bottom:  1px solid #deb890; }

.parameters th, .exceptions th {
    color:  #fff;
    padding:    3px 5px;
    text-align: left;
    font-family:    initial;
    font-weight:    normal;
    text-shadow:    #666 1px 1px 0;
}
.parameters th { background: #90b8de; }
.exceptions th { background: #deb890; }

.parameters td, .exceptions td {
    padding:    3px 10px;
    border-top: 1px solid #ddd;
    vertical-align: top;
}

.parameters tr:first-child td, .exceptions tr:first-child td {
    border-top: none;
}

.parameters td.prmName, .exceptions td.excName, .exceptions td.excCodeName {
    width:  100px;
}

.parameters td.prmType {
    width:  120px;
}

table.exceptions table {
    border-spacing: 0;
    border-collapse:    collapse;
    width:  100%;
}

/* --- TOC --- */
.toc a {
    text-decoration:    none;
}

a .secno {
    color:  #000;
}

/* --- TABLE --- */
table.simple {
    border-spacing: 0;
    border-collapse:    collapse;
    border-bottom:  3px solid #005a9c;
}

.simple th {
    background: #005a9c;
    color:  #fff;
    padding:    3px 5px;
    text-align: left;
}

.simple th[scope="row"] {
    background: inherit;
    color:  inherit;
    border-top: 1px solid #ddd;
}

.simple td {
    padding:    3px 10px;
    border-top: 1px solid #ddd;
}

.simple tr:nth-child(even) {
    background: #f0f6ff;
}

/* --- DL --- */
.section dd > p:first-child {
    margin-top: 0;
}

.section dd > p:last-child {
    margin-bottom: 0;
}

.section dd {
    margin-bottom:  1em;
}

.section dl.attrs dd, .section dl.eldef dd {
    margin-bottom:  0;
}

/* --- EXAMPLES --- */
pre.example {
    border-top: 1px solid #ff4500;
    border-bottom: 1px solid #ff4500;
    padding:    1em;
    margin-top: 1em;
}

pre.example::before {
    content:    "Example";
    display:    block;
    width:      150px;
    background: #ff4500;
    color:  #fff;
    font-family:    initial;
    padding:    3px;
    font-weight:    bold;
    margin: -1em 0 1em -1em;
}

/* --- EDITORIAL NOTES --- */
.issue {
    padding:    1em;
    margin: 1em 0em 0em;
    border: 1px solid #f00;
    background: #ffc;
}

.issue::before {
    content:    "Issue";
    display:    block;
    width:  150px;
    margin: -1.5em 0 0.5em 0;
    font-weight:    bold;
    border: 1px solid #f00;
    background: #fff;
    padding:    3px 1em;
}

.note {
    margin: 1em 0em 0em;
    padding:    1em;
    border: 2px solid #cff6d9;
    background: #e2fff0;
}

.note::before {
    content:    "Note";
    display:    block;
    width:  150px;
    margin: -1.5em 0 0.5em 0;
    font-weight:    bold;
    border: 1px solid #cff6d9;
    background: #fff;
    padding:    3px 1em;
}

/* --- Best Practices --- */
div.practice {
    border: solid #bebebe 1px;
    margin: 2em 1em 1em 2em;
}

span.practicelab {
    margin: 1.5em 0.5em 1em 1em;
    font-weight: bold;
    font-style: italic;
    background: #dfffff;
    position: relative;
    padding: 0 0.5em;
    top: -1.5em;
}

p.practicedesc {
    margin: 1.5em 0.5em 1em 1em;
}

@media screen {
    p.practicedesc {
        position: relative;
        top: -2em;
        padding: 0;
        margin: 1.5em 0.5em -1em 1em;
    }
}

/* --- SYNTAX HIGHLIGHTING --- */
pre.sh_sourceCode {
  background-color: white;
  color: black;
  font-style: normal;
  font-weight: normal;
}

pre.sh_sourceCode .sh_keyword { color: #005a9c; font-weight: bold; }           /* language keywords */
pre.sh_sourceCode .sh_type { color: #666; }                            /* basic types */
pre.sh_sourceCode .sh_usertype { color: teal; }                             /* user defined types */
pre.sh_sourceCode .sh_string { color: red; font-family: monospace; }        /* strings and chars */
pre.sh_sourceCode .sh_regexp { color: orange; font-family: monospace; }     /* regular expressions */
pre.sh_sourceCode .sh_specialchar { color: #ffc0cb; font-family: monospace; }  /* e.g., \n, \t, \\ */
pre.sh_sourceCode .sh_comment { color: #A52A2A; font-style: italic; }         /* comments */
pre.sh_sourceCode .sh_number { color: purple; }                             /* literal numbers */
pre.sh_sourceCode .sh_preproc { color: #00008B; font-weight: bold; }       /* e.g., #include, import */
pre.sh_sourceCode .sh_symbol { color: blue; }                            /* e.g., *, + */
pre.sh_sourceCode .sh_function { color: black; font-weight: bold; }         /* function calls and declarations */
pre.sh_sourceCode .sh_cbracket { color: red; }                              /* block brackets (e.g., {, }) */
pre.sh_sourceCode .sh_todo { font-weight: bold; background-color: #00FFFF; }   /* TODO and FIXME */

/* Predefined variables and functions (for instance glsl) */
pre.sh_sourceCode .sh_predef_var { color: #00008B; }
pre.sh_sourceCode .sh_predef_func { color: #00008B; font-weight: bold; }

/* for OOP */
pre.sh_sourceCode .sh_classname { color: teal; }

/* line numbers (not yet implemented) */
pre.sh_sourceCode .sh_linenum { display: none; }

/* Internet related */
pre.sh_sourceCode .sh_url { color: blue; text-decoration: underline; font-family: monospace; }

/* for ChangeLog and Log files */
pre.sh_sourceCode .sh_date { color: blue; font-weight: bold; }
pre.sh_sourceCode .sh_time, pre.sh_sourceCode .sh_file { color: #00008B; font-weight: bold; }
pre.sh_sourceCode .sh_ip, pre.sh_sourceCode .sh_name { color: #006400; }

/* for Prolog, Perl... */
pre.sh_sourceCode .sh_variable { color: #006400; }

/* for LaTeX */
pre.sh_sourceCode .sh_italics { color: #006400; font-style: italic; }
pre.sh_sourceCode .sh_bold { color: #006400; font-weight: bold; }
pre.sh_sourceCode .sh_underline { color: #006400; text-decoration: underline; }
pre.sh_sourceCode .sh_fixed { color: green; font-family: monospace; }
pre.sh_sourceCode .sh_argument { color: #006400; }
pre.sh_sourceCode .sh_optionalargument { color: purple; }
pre.sh_sourceCode .sh_math { color: orange; }
pre.sh_sourceCode .sh_bibtex { color: blue; }

/* for diffs */
pre.sh_sourceCode .sh_oldfile { color: orange; }
pre.sh_sourceCode .sh_newfile { color: #006400; }
pre.sh_sourceCode .sh_difflines { color: blue; }

/* for css */
pre.sh_sourceCode .sh_selector { color: purple; }
pre.sh_sourceCode .sh_property { color: blue; }
pre.sh_sourceCode .sh_value { color: #006400; font-style: italic; }

/* other */
pre.sh_sourceCode .sh_section { color: black; font-weight: bold; }
pre.sh_sourceCode .sh_paren { color: red; }
pre.sh_sourceCode .sh_attribute { color: #006400; }

</style><link href="http://www.w3.org/StyleSheets/TR/W3C-WD" rel="stylesheet" type="text/css" charset="utf-8" /></head><body style="display: inherit;"><div class="head"><p><a href="http://www.w3.org/"><img width="72" height="48" alt="W3C" src="http://www.w3.org/Icons/w3c_home" /></a></p><h1 id="title" class="title" property="dcterms:title">Media Accessibility User Requirements</h1><h2 content="2012-01-03T05:00:00+0000" datatype="xsd:dateTime" property="dcterms:issued" id="w3c-working-draft-03-january-2012"><acronym title="World Wide Web Consortium">W3C</acronym> Working Draft 3 January 2012</h2><dl><dt>This version:</dt><dd><a href="http://www.w3.org/TR/2012/WD-media-accessibility-reqs-20120103/">http://www.w3.org/TR/2012/WD-media-accessibility-reqs-20120103/</a></dd><dt>Latest published version:</dt><dd><a href="http://www.w3.org/TR/media-accessibility-reqs/">http://www.w3.org/TR/media-accessibility-reqs/</a></dd><dt>Latest editor's draft:</dt><dd><a href="http://www.w3.org/WAI/PF/media-accessibility-reqs/">http://www.w3.org/WAI/PF/media-accessibility-reqs/</a></dd><dt>Editors:</dt><dd rel="bibo:editor"><span typeof="foaf:Person"><a href="http://blog.halindrome.com" content="Shane McCarron" property="foaf:name" rel="foaf:homepage">Shane McCarron</a>, Applied Testing and Technology, Inc. <span class="ed_mailto"><a href="mailto:shane@aptest.com" rel="foaf:mbox">shane@aptest.com</a></span> </span>
</dd>
<dd rel="bibo:editor"><span typeof="foaf:Person"><span property="foaf:name">Michael Cooper</span>, <a href="http://www.w3.org/" rel="foaf:workplaceHomepage"><acronym title="World Wide Web Consortium">W3C</acronym></a></span>
</dd>
<dt>Authors:</dt><dd rel="dcterms:contributor"><span typeof="foaf:Person"><span property="foaf:name">Judy Brewer</span>, <a href="http://www.w3.org/" rel="foaf:workplaceHomepage"><acronym title="World Wide Web Consortium">W3C</acronym></a></span>
</dd>
<dd rel="dcterms:contributor"><span typeof="foaf:Person"><span property="foaf:name">Eric Carlson</span>, <a href="http://www.apple.com/" rel="foaf:workplaceHomepage">Apple, Inc.</a></span>
</dd>
<dd rel="dcterms:contributor"><span typeof="foaf:Person"><span property="foaf:name">John Foliot</span>, Invited Expert</span>
</dd>
<dd rel="dcterms:contributor"><span typeof="foaf:Person"><span property="foaf:name">Geoff Freed</span>, Invited Expert</span>
</dd>
<dd rel="dcterms:contributor"><span typeof="foaf:Person"><span property="foaf:name">Sean Hayes</span>, <a href="http://www.microsoft.com/" rel="foaf:workplaceHomepage">Microsoft Corporation</a></span>
</dd>
<dd rel="dcterms:contributor"><span typeof="foaf:Person"><span property="foaf:name">Silvia Pfeiffer</span>, Invited Expert</span>
</dd>
<dd rel="dcterms:contributor"><span typeof="foaf:Person"><span property="foaf:name">Janina Sajka</span>, Invited Expert</span>
</dd>
</dl><p class="copyright"><a
href="http://www.w3.org/Consortium/Legal/ipr-notice#Copyright">Copyright</a>
&copy; 2012 <a href="http://www.w3.org/"><acronym title="World Wide Web
Consortium">W3C</acronym></a><sup>&reg;</sup> (<a
href="http://www.csail.mit.edu/"><acronym title="Massachusetts Institute
of Technology">MIT</acronym></a>, <a href="http://www.ercim.eu/"><acronym
title="European Research Consortium for Informatics and
Mathematics">ERCIM</acronym></a>, <a
href="http://www.keio.ac.jp/">Keio</a>), All Rights Reserved. W3C <a
href="http://www.w3.org/Consortium/Legal/ipr-notice#Legal_Disclaimer">liability</a>,
<a
href="http://www.w3.org/Consortium/Legal/ipr-notice#W3C_Trademarks">trademark</a>
and <a
href="http://www.w3.org/Consortium/Legal/copyright-documents">document
use</a> rules apply.</p><hr /></div>
		<div id="abstract" class="introductory section" property="dcterms:abstract" datatype="" typeof="bibo:Chapter" about="#abstract">
<!-- OddPage -->
<h2>Abstract</h2>
			<p>This document aggregates the accessibility requirements of users with disabilities that
    the <acronym title="World Wide Web Consortium">W3C</acronym> HTML5 Accessibility Task Force has collected with respect to audio
    and video on the Web. </p>
    <p>It first provides an introduction to the needs of users with disabilities 
    in relation to audio and video.</p>
			<p>Then it explains what alternative content technologies have been developed
    to help such users gain access to the content of audio and video. </p>
			<p>A third section explains how these content technologies fit in the larger
    picture of accessibility, both technically within a Web user agent
    and from a production process point of view. </p>
			<p>This document is explicitly not a collection of baseline user agent
    or authoring tool requirements. It is important to recognize that not all
    user agents (or authoring tools) will support all the features discussed
    in this document. Rather, this document attempts to supply a comprehensive
    collection of user requirements needed to support media accessibility in
    the context of HTML5. As such, it should be expected that this document
    will continue to develop for some time. </p>
			<p>Please also note this document is not an inventory of technology currently
    provided by, or missing from, HTML5 specification drafts. Technology is listed
    here because it is important for accommodating the alternative access
    needs of users with disabilities to Web-based media. This document is our
    inventory of Media Accessibility User Requirements. </p>
		</div><div class="introductory section" id="sotd" typeof="bibo:Chapter" about="#sotd">
<!-- OddPage -->
			<h2>Status of This Document</h2>      <p><em>This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the <a
				
				href="http://www.w3.org/TR/">W3C technical reports index</a> at http://www.w3.org/TR/.</em></p>
			<p>This is a First Public <a href="http://www.w3.org/2004/02/Process-20040205/tr.html#RecsWD">Working Draft</a> by the <a
				
				href="http://www.w3.org/WAI/PF/">Protocols &amp; Formats Working Group</a> (PFWG) of the <a
					
					href="http://www.w3.org/WAI/">Web Accessibility Initiative</a>. The document was <a
						
						href="http://www.w3.org/WAI/PF/HTML/wiki/Media_Accessibility_User_Requirements">originally developed in a wiki page of the HTML Accessibility Task Force</a> to elaborate accessibility requirements for HTML 5 media. The resulting documentation is applicable to any content technology that provides video or audio support, so the document is published by the PFWG to more clearly serve as a universal resource. Changes to the above wiki page may be migrated to this document over time, but this document reflects the consensus of the PFWG. After this document receives thorough public review, the PFWG plans to publish it as a Working Group Note. A <a
							
							href="http://www.w3.org/WAI/PF/media-accessibility-reqs/change-history">history of changes to Media Accessibility User Requirements</a> is available.</p>
			<p>Feedback on the requirements is essential to ongoing efforts to make media content technologies accessible. The PFWG asks in particular:</p>
			<ul>
				<li> Are the use cases for media accessibility clear and complete?</li>
				<li>Do the features to enhance media accessibility meet the use cases?</li>
				<li>Are the technical requirements for media accessibility complete and achievable?</li>
			</ul>
			<p>Start with the <a href="http://www.w3.org/WAI/PF/comments/instructions">instructions for commenting</a> page to submit comments (preferred), or send email to <a
				
				href="mailto:public-pfwg-comments@w3.org">public-pfwg-comments@w3.org</a> (<a
					
					href="http://lists.w3.org/Archives/Public/public-pfwg-comments/">comment archive</a>). Comments should be made by <strong>10 February 2012</strong>. In-progress updates to the document may be viewed in the <a
						
						href="/WAI/PF/media-accessibility-reqs/">publicly visible editors' draft</a>.</p>
			<p>Publication as a Working Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.</p>
			<p> This document was produced by a group operating under the <a
href="http://www.w3.org/Consortium/Patent-Policy-20040205/">5 February
2004 W3C Patent Policy</a>. W3C maintains a <a rel="disclosure"
href="http://www.w3.org/2004/01/pp-impl/32212/status">public list of any
patent disclosures</a> made in connection with the deliverables of the
group; that page also includes instructions for disclosing a patent. An
individual who has actual knowledge of a patent which the individual
believes contains <a
href="http://www.w3.org/Consortium/Patent-Policy-20040205/#def-essential">Essential
Claim(s)</a> must disclose the information in accordance with <a
href="http://www.w3.org/Consortium/Patent-Policy-20040205/#sec-Disclosure">section
6 of the W3C Patent Policy</a>. </p>
		</div><div id="toc" typeof="bibo:Chapter" about="#toc" class="section"><h2 class="introductory">Table of Contents</h2><ul class="toc"><li class="tocline"><a href="#abstract" class="tocxref">Abstract</a></li><li class="tocline"><a href="#sotd" class="tocxref">Status of This Document</a></li><li class="tocline"><a href="#media-accessibility-checklist" class="tocxref"><span class="secno">1. </span> Media Accessibility Checklist </a></li><li class="tocline"><a href="#accessible-media-requirements-by-type-of-disability" class="tocxref"><span class="secno">2. </span> Accessible Media Requirements by Type of Disability </a><ul class="toc"><li class="tocline"><a href="#blindness" class="tocxref"><span class="secno">2.1 </span> Blindness </a></li><li class="tocline"><a href="#low-vision" class="tocxref"><span class="secno">2.2 </span> Low vision </a></li><li class="tocline"><a href="#atypical-color-perception" class="tocxref"><span class="secno">2.3 </span> Atypical color perception </a></li><li class="tocline"><a href="#deafness" class="tocxref"><span class="secno">2.4 </span> Deafness </a></li><li class="tocline"><a href="#hard-of-hearing" class="tocxref"><span class="secno">2.5 </span> Hard of hearing </a></li><li class="tocline"><a href="#deaf-blind" class="tocxref"><span class="secno">2.6 </span> Deaf-blind </a></li><li class="tocline"><a href="#physical-impairment" class="tocxref"><span class="secno">2.7 </span> Physical impairment </a></li><li class="tocline"><a href="#cognitive-and-neurological-disabilities" class="tocxref"><span class="secno">2.8 </span> Cognitive and neurological disabilities </a></li></ul></li><li class="tocline"><a href="#alternative-content-technologies" class="tocxref"><span class="secno">3. </span> Alternative Content Technologies </a><ul class="toc"><li class="tocline"><a href="#described-video" class="tocxref"><span class="secno">3.1 </span> Described video </a></li><li class="tocline"><a href="#text-video-description" class="tocxref"><span class="secno">3.2 </span> Text video description </a></li><li class="tocline"><a href="#extended-video-descriptions" class="tocxref"><span class="secno">3.3 </span> Extended video descriptions </a></li><li class="tocline"><a href="#clean-audio" class="tocxref"><span class="secno">3.4 </span> Clean audio </a></li><li class="tocline"><a href="#content-navigation-by-content-structure" class="tocxref"><span class="secno">3.5 </span> Content navigation by content structure </a></li><li class="tocline"><a href="#captioning" class="tocxref"><span class="secno">3.6 </span> Captioning </a></li><li class="tocline"><a href="#enhanced-captions-subtitles" class="tocxref"><span class="secno">3.7 </span> Enhanced captions/subtitles </a></li><li class="tocline"><a href="#sign-translation" class="tocxref"><span class="secno">3.8 </span> Sign translation </a></li><li class="tocline"><a href="#transcripts" class="tocxref"><span class="secno">3.9 </span> Transcripts </a></li></ul></li><li class="tocline"><a href="#system-requirements" class="tocxref"><span class="secno">4. 
</span> System Requirements </a><ul class="toc"><li class="tocline"><a href="#access-to-interactive-controls---menus" class="tocxref"><span class="secno">4.1 </span> Access to interactive controls / menus </a></li><li class="tocline"><a href="#granularity-level-control-for-structural-navigation" class="tocxref"><span class="secno">4.2 </span> Granularity level control for structural navigation </a></li><li class="tocline"><a href="#time-scale-modification" class="tocxref"><span class="secno">4.3 </span> Time-scale modification </a></li><li class="tocline"><a href="#production-practice-and-resulting-requirements" class="tocxref"><span class="secno">4.4 </span> Production practice and resulting requirements </a></li><li class="tocline"><a href="#discovery-and-activation-deactivation-of-available-alternative-content-------by-the-user" class="tocxref"><span class="secno">4.5 </span> Discovery and activation/deactivation of available alternative content
      by the user </a></li><li class="tocline"><a href="#requirements-on-making-properties-available-to-the-accessibility-interface" class="tocxref"><span class="secno">4.6 </span> Requirements on making properties available to the accessibility interface </a></li><li class="tocline"><a href="#requirements-on-the-use-of-the-viewport" class="tocxref"><span class="secno">4.7 </span> Requirements on the use of the viewport </a></li><li class="tocline"><a href="#requirements-on-the-parallel-use-of-alternate-content-on-potentially-------multiple-devices-in-parallel" class="tocxref"><span class="secno">4.8 </span> Requirements on the parallel use of alternate content on potentially
      multiple devices in parallel </a></li></ul></li><li class="tocline"><a href="#acknowledgements" class="tocxref"><span class="secno">A. </span>Acknowledgements</a><ul class="toc"><li class="tocline"><a href="#ack_group" class="tocxref"><span class="secno">A.1 </span>Participants in the PFWG at the time of publication</a></li><li class="tocline"><a href="#ack_others" class="tocxref"><span class="secno">A.2 </span>Other previously active PFWG participants and contributors</a></li><li class="tocline"><a href="#ack_funders" class="tocxref"><span class="secno">A.3 </span>Enabling funders</a></li></ul></li><li class="tocline"><a href="#references" class="tocxref"><span class="secno">B. </span>References</a><ul class="toc"><li class="tocline"><a href="#normative-references" class="tocxref"><span class="secno">B.1 </span>Normative references</a></li><li class="tocline"><a href="#informative-references" class="tocxref"><span class="secno">B.2 </span>Informative references</a></li></ul></li></ul></div>
		
		<div id="media-accessibility-checklist" typeof="bibo:Chapter" about="#media-accessibility-checklist" class="section">
			
<!-- OddPage -->
<h2><span class="secno">1. </span> Media Accessibility Checklist </h2>
			<p>The following User Requirements have also been distilled into a <a href="http://www.w3.org/WAI/PF/HTML/wiki/Media_Accessibility_Checklist">Media Accessibility Checklist</a>.</p>
		</div>
		<div id="accessible-media-requirements-by-type-of-disability" typeof="bibo:Chapter" about="#accessible-media-requirements-by-type-of-disability" class="section">
			
<!-- OddPage -->
<h2><span class="secno">2. </span> Accessible Media Requirements by Type of Disability </h2>
            <p>Editorial note: This section is a rough draft. It will be edited to align with
           
    <a href="http://www.w3.org/WAI/intro/people-use-web/Overview.html">How
    People with Disabilities Use the Web</a> once that document is complete.
    This draft is included now to provide general background
    for sections 2 and 3 of this document. </p>
			<p>Comprehension of media may be affected by loss of visual function, loss
            of audio function, cognitive issues, or a combination of all three. 
            Cognitive disabilities may affect access to and/or
    comprehension of media. Physical disabilities such as dexterity impairment,
    loss of limbs, or loss of use of limbs may affect access to media. Once richer
    forms of media, such as virtual reality, become more commonplace, tactile
    issues may come into play. Control of the media player can be an important
    issue, e.g., for people with physical disabilities; however, this is typically not addressed
    by the media formats themselves, but is a requirement of the technology used
    to build the player. </p>
			<div id="blindness" typeof="bibo:Chapter" about="#blindness" class="section">
				<h3><span class="secno">2.1 </span> Blindness </h3>
				<p>People who are blind cannot access information if it is presented only
      in the visual mode. They require information in an alternative representation,
      which typically means the audio mode, although information can also be
      presented as text. It is important to remember that not only the main video
      is inaccessible, but any other visible ancillary information such as stock
      tickers, status indicators, or other on-screen graphics, as well as any
      visual controls needed to operate the content. Since people who are blind
      use a screen reader and/or refreshable braille display, these assistive
      technologies (<abbr title="Assistive Technology">AT</abbr>s) need to work hand-in-hand with the access mechanism
      provided for the media content. </p>
			</div>
			<div id="low-vision" typeof="bibo:Chapter" about="#low-vision" class="section">
				<h3><span class="secno">2.2 </span> Low vision </h3>
				<p>People with low vision can use some visual information.
      Depending on their visual
      ability they might have specific issues such as difficulty discriminating
      foreground information from background information, or discriminating colors.
      Glare caused by excessive scattering in the eye can be a significant problem,
      especially for very bright content or surroundings. They may be unable
      to react quickly to transient information, and may have a narrow angle
      of view and so may not detect key information presented temporarily where
      they are not looking, or in text that is moving or scrolling. A person
      using a low-vision <abbr title="Assistive Technology">AT</abbr> aid, such as a screen magnifier, will only be viewing
      a portion of the screen, and so must manage tracking media content via
      their <abbr title="Assistive Technology">AT</abbr>. They may have difficulty reading when text is too small, has
      poor background contrast, or when outline or other fancy font types or
      effects are used. They may be using an <abbr title="Assistive Technology">AT</abbr> that adjusts all the colors of
      the screen, such as inverting the colors, so the media content must be
      viewable through the <abbr title="Assistive Technology">AT</abbr>. </p>
			</div>
			<div id="atypical-color-perception" typeof="bibo:Chapter" about="#atypical-color-perception" class="section">
				<h3><span class="secno">2.3 </span> Atypical color perception </h3>
				<p>A significant percentage of the population has atypical color perception,
      and may not be able to discriminate between different colors, or may miss
      key information when coded with color only. </p>
			</div>
			<div id="deafness" typeof="bibo:Chapter" about="#deafness" class="section">
				<h3><span class="secno">2.4 </span> Deafness </h3>
				<p>People who are deaf generally cannot use audio. Thus, an alternative representation
      is required, typically through synchronized captions and/or sign translation. </p>
			</div>
			<div id="hard-of-hearing" typeof="bibo:Chapter" about="#hard-of-hearing" class="section">
				<h3><span class="secno">2.5 </span> Hard of hearing </h3>
				<p>People who are hard of hearing may be able to use some audio material,
      but might not be able to discriminate certain types of sound, and may miss
      any information presented as audio only if it contains frequencies they
      can't hear, or is masked by background noise or distortion. They may miss
      audio which is too quiet, or of poor quality. Speech may be problematic
      if it is too fast and cannot be played back more slowly. Information presented
      using multichannel audio (e.g., stereo) may not be perceived by people
      who are deaf in one ear. </p>
			</div>
			<div id="deaf-blind" typeof="bibo:Chapter" about="#deaf-blind" class="section">
				<h3><span class="secno">2.6 </span> Deaf-blind </h3>
				<p>Individuals who are deaf-blind have a combination of conditions that may
      result in one of the following: blindness and deafness; blindness and difficulty
      in hearing; low vision and deafness; or low vision and difficulty in hearing.
      Depending on their combination of conditions, individuals who are deaf-blind
      may need captions that can be enlarged, changed to high-contrast colors,
      or otherwise styled; or they may need captions and/or described video that
      can be presented with <abbr title="Assistive Technology">AT</abbr> (e.g., a refreshable braille display). They may
      need synchronized captions and/or described video, or they may need a non-time-based
      transcript which they can read at their own pace. </p>
			</div>
			<div id="physical-impairment" typeof="bibo:Chapter" about="#physical-impairment" class="section">
				<h3><span class="secno">2.7 </span> Physical impairment </h3>
				<p>People with physical disabilities such as poor dexterity, loss of limbs, or
      loss of use of limbs may use the keyboard alone rather than the combination
      of a pointing device plus keyboard to interact with content and controls,
      or may use a switch with an on-screen keyboard, or other assistive-technology
      access. The player itself must be usable via the keyboard and pointing
      devices. The user must have full access to all player controls, including
      methods for selecting alternative content. </p>
			</div>
			<div id="cognitive-and-neurological-disabilities" typeof="bibo:Chapter" about="#cognitive-and-neurological-disabilities" class="section">
				<h3><span class="secno">2.8 </span> Cognitive and neurological disabilities </h3>
				<p>Cognitive and neurological disabilities include a wide range of conditions
      that may include intellectual disabilities (called learning disabilities
      in some regions), autism-spectrum disorders, memory impairments, mental-health
      disabilities, attention-deficit disorders, audio- and/or visual-perceptive
      disorders, dyslexia and dyscalculia (called learning disabilities in other
      regions), or seizure disorders. Necessary accessibility supports vary widely
      for these different conditions. Individuals with some conditions may process
      information aurally better than by reading text; therefore, information
      that is presented as text embedded in a video should also be available
      as audio descriptions. Individuals with other conditions may need to reduce
      distractions or flashing in presentations of video. Some conditions such
      as autism-spectrum disorders may have multi-system effects and individuals
      may need a combination of different accommodations.
					Overall, the media experience for people on the autism spectrum should
        be customizable and well designed so as to not be overwhelming. Care
        must be taken to present a media experience that focuses on the purpose
        of the content and provides alternative content in a clear, concise manner. </p>
      

			</div>
		</div>
		<div id="alternative-content-technologies" typeof="bibo:Chapter" about="#alternative-content-technologies" class="section">
			
<!-- OddPage -->
<h2><span class="secno">3. </span> Alternative Content Technologies </h2>
			<p>A number of alternative content types have been developed to help users
    with sensory disabilities gain access to audio-visual content. This section
    lists them, explains generally what they are, and provides a number of requirements
    on each that need to be satisfied with technology developed in HTML5 around
    the media elements. </p>
			<div id="described-video" typeof="bibo:Chapter" about="#described-video" class="section">
				<h3><span class="secno">3.1 </span> Described video </h3>
				<p>Described video contains descriptive narration of key visual elements
      designed to make visual media accessible to people who are blind or visually
      impaired. The descriptions include actions, costumes, gestures, scene changes,
      or any other important visual information that someone who cannot see the
      screen might ordinarily miss. Descriptions are traditionally audio recordings
      timed and recorded to fit into natural pauses in the program, although
      they may also briefly obscure the main audio track. (See the section on
      extended descriptions for an alternative approach.) The descriptions are
      usually read by a narrator with a voice that cannot be easily confused
      with other voices in the primary audio track. They are authored to convey
      objective information (e.g., a yellow flower) rather than subjective judgments
      (e.g., a beautiful flower). </p>
				<p>As with captions, descriptions can be open or closed. </p>
				<ul>
					<li>
						<strong>Open descriptions</strong> are merged with the program-audio
        track and cannot be turned off by the viewer. </li>
					<li>
						<strong>Closed descriptions</strong> can be turned on and off by the
        viewer. They can be recorded as a separate track containing descriptions
        only, timed to play at specific spots in the timeline and played in parallel
        with the program-audio track. </li>
					<li> Some descriptions can be delivered as a <strong>separate audio</strong> channel
        mixed in at the player. </li>
					<li> Other options include a computer-generated <strong>‘text to speech’
          track,</strong> also known as text video descriptions. This is described
          in the next subsection. </li>
				</ul>
				<p>Described video provides benefits that reach beyond blind or visually
      impaired viewers; e.g., students grappling with difficult materials or
      concepts. Descriptions can be used to give supplemental information about
      what is on screen—the structure of lengthy mathematical equations or the
      intricacies of a painting, for example. </p>
				<p>Described video is available on some television programs and in many movie
      theaters in the U.S. and other countries. Regulations in the U.S. and Europe
      are increasingly focusing on description, especially for television, reflecting
      its priority for citizens who have visual impairments. The technology
      needed to deliver and render basic video descriptions is in fact relatively
      straightforward, being an extension of common audio-processing solutions.
      Playback products must support multi-audio channels required for description,
      and any product dealing with broadcast TV content must provide adequate
      support for descriptions. Descriptions can also provide text that can be
      indexed and searched. </p>
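				<p>As an informative sketch only, the markup and script below illustrate one
      way a closed description track, delivered as a separate audio file, could
      be played in parallel with the main resource and mixed at the player. The
      file names and the simple synchronization logic are assumptions made for
      this example, not a normative technique. </p>
				<pre class="example">
&lt;!-- Main program video plus a hypothetical separate description audio file. --&gt;
&lt;video id="program" src="movie.webm" controls&gt;&lt;/video&gt;
&lt;audio id="descriptions" src="movie-descriptions.ogg"&gt;&lt;/audio&gt;
&lt;label&gt;Description volume:
  &lt;input id="descVolume" type="range" min="0" max="1" step="0.1" value="1"&gt;
&lt;/label&gt;
&lt;script&gt;
  var video = document.getElementById("program");
  var desc  = document.getElementById("descriptions");
  // Keep the description audio in step with the main resource, using the
  // media resource as the timebase master (compare [DV-2] below).
  video.addEventListener("play",   function () { desc.currentTime = video.currentTime; desc.play(); });
  video.addEventListener("pause",  function () { desc.pause(); });
  video.addEventListener("seeked", function () { desc.currentTime = video.currentTime; });
  // Let the user adjust the description volume independently of the
  // original soundtrack (compare [DV-6] below).
  document.getElementById("descVolume").addEventListener("input", function () {
    desc.volume = parseFloat(this.value);
  });
&lt;/script&gt;</pre>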
				<p>Systems supporting described video, other than open descriptions, must: </p>
				<div typeof="bibo:Chapter" about="#DV-1" class="section">
					<h4 id="DV-1"><strong class="req-handle">[DV-1]</strong> Provide an indication that descriptions are available, and
          are active/non-active. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#DV-2" class="section">
					<h4 id="DV-2"><strong class="req-handle">[DV-2]</strong> Render descriptions in a time-synchronized manner, using
          the media resource as the timebase master. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#DV-3" class="section">
					<h4 id="DV-3"><strong class="req-handle">[DV-3]</strong> Support multiple description tracks (e.g., discrete tracks
          containing different levels of detail). </h4>
				</div>
				<div typeof="bibo:Chapter" about="#DV-4" class="section">
					<h4 id="DV-4"><strong class="req-handle">[DV-4]</strong> Support recordings of real human speech as a track of the
          media resource, or as an external file. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#DV-5" class="section">
					<h4 id="DV-5"><strong class="req-handle">[DV-5]</strong> Allow the author to independently adjust the volumes of the
          audio description and original soundtracks. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#DV-6" class="section">
					<h4 id="DV-6"><strong class="req-handle">[DV-6]</strong> Allow the user to independently adjust the volumes of the
          audio description and original soundtracks, with the user's settings
          overriding the author's. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#DV-7" class="section">
					<h4 id="DV-7"><strong class="req-handle">[DV-7]</strong> Permit smooth changes in volume rather than stepped changes.
          The degree and speed of volume change should be under provider control. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#DV-8" class="section">
					<h4 id="DV-8"><strong class="req-handle">[DV-8]</strong> Allow the author to provide fade and pan controls to be accurately
          synchronised with the original soundtrack. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#DV-9" class="section">
					<h4 id="DV-9"><strong class="req-handle">[DV-9]</strong> Allow the author to use a codec which is optimised for voice
          only, rather than requiring the same codec as the original soundtrack. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#DV-10" class="section">
					<h4 id="DV-10"><strong class="req-handle">[DV-10]</strong> Allow the user to select from among different languages
          of descriptions, if available, even if they are different from the
          language of the main soundtrack. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#DV-11" class="section">
					<h4 id="DV-11"><strong class="req-handle">[DV-11]</strong> Support the simultaneous playback of both the described
          and non-described audio tracks so that one may be directed at separate
          outputs (e.g., a speaker and headphones). </h4>
				</div>
				<div typeof="bibo:Chapter" about="#DV-12" class="section">
					<h4 id="DV-12"><strong class="req-handle">[DV-12]</strong> Provide a means to prevent descriptions from carrying over
          from one program or channel when the user switches to a different program
          or channel. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#DV-13" class="section">
					<h4 id="DV-13"><strong class="req-handle">[DV-13]</strong> Allow the user to relocate the description track within
          the audio field, with the user setting overriding the author setting.
          The setting should be re-adjustable as the media plays. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#DV-14" class="section">
					<h4 id="DV-14"><strong class="req-handle">[DV-14]</strong> Support metadata, such as copyright information, usage rights,
          language, etc. </h4>
				</div>
			</div>
			<div id="text-video-description" typeof="bibo:Chapter" about="#text-video-description" class="section">
				<h3><span class="secno">3.2 </span> Text video description </h3>
				<p>Described video that uses text for the description source rather than
      a recorded voice creates specific requirements. </p>
				<p>Text video descriptions (TVDs) are delivered to the client as text and
      rendered locally by assistive technology such as a screen reader or a braille
      device. This can have advantages for screen-reader users who want full
      control of the preferred voice and speaking rate, or other options to control
      the speech synthesis. </p>
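				<p>By way of illustration, HTML5 already defines a <code>track</code> element
      whose <code>kind="descriptions"</code> value is intended for exactly this
      kind of textual cue. The sketch below, including the hypothetical file
      names and cue text, shows how a text video description might be associated
      with a video. </p>
				<pre class="example">
&lt;video src="physics-lecture.webm" controls&gt;
  &lt;track kind="descriptions" src="descriptions.vtt" srclang="en"
         label="English text video description"&gt;
&lt;/video&gt;</pre>
				<p>A corresponding WebVTT file gives each description cue a start time, plus
      an end time that a player may treat as a cut point: </p>
				<pre class="example">
WEBVTT

00:00:12.000 --&gt; 00:00:16.000
The lecturer draws a free-body diagram on the blackboard.

00:01:05.000 --&gt; 00:01:09.000
A second arrow, labeled "friction", points to the left.</pre>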
				<p>Text video descriptions are provided as text files containing start times
      for each description cue. Since the duration that a screen reader takes
      to read out a description cannot be determined during authoring of the
      cues, it is difficult to ensure they don't obscure the main audio or other
      description cues. There are at least three reasons for this: </p>
				<ul>
					<li> An author of text video descriptions may not have a screen reader,
        and so cannot check whether the description fits within the time
        frame. Even if the author has a screen reader, a user's screen reader may
        be set to a different reading speed and take longer to read the same
        sentence. </li>
					<li> Some screen-reader users (e.g., those who are elderly or have learning
        disabilities) may slow down the speech rate. </li>
					<li> A visually complicated scene (e.g., figures on a blackboard in an
        online physics class) may require more description time than is available
        in the program-audio track. </li>
				</ul>
				<p>Systems supporting text video descriptions must: </p>
				<div typeof="bibo:Chapter" about="#TVD-1" class="section">
					<h4 id="TVD-1"><strong class="req-handle">[TVD-1]</strong> Support presentation of text video descriptions through
          a screen reader or braille device, with playback speed control and
          voice control and synchronisation points with the video. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#TVD-2" class="section">
					<h4 id="TVD-2"><strong class="req-handle">[TVD-2]</strong> TVDs need to be provided in a format that contains the following
          information:</h4>
				</div>
				<ol class="list-in-req">
					<li>the start time and text of each description cue (the duration is determined
              dynamically, though an end time could provide a cut point) </li>
					<li>possibly a speech-synthesis markup to improve quality of
              the description (existing speech synthesis markups include <a href="http://www.w3.org/TR/speech-synthesis/">SSML</a> and <a href="http://www.w3.org/TR/css3-speech/">Speech
              CSS</a>) </li>
					<li>accompanying metadata providing labeling for speakers, language,
              etc. </li>
				</ol>
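				<p>By way of illustration only, cues carrying the information listed above
      might look as follows in a format such as WebVTT, delivered through a
      &lt;track kind="descriptions"&gt; element. Timings and text are hypothetical;
      the end time merely provides a cut point, while the actual duration depends
      on the user's speech synthesizer. </p>
				<pre class="example">
WEBVTT

cue-1
00:00:12.000 --> 00:00:18.000
A narrow mountain road. A red truck approaches, kicking up dust.

cue-2
00:01:03.500 --> 00:01:07.000
Maria hands Josh a crumpled photograph.
</pre>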
				<div typeof="bibo:Chapter" about="#TVD-3" class="section">
					<h4 id="TVD-3"><strong class="req-handle">[TVD-3]</strong> Where possible, provide a text or separate audio track privately
          to those who need it in a mixed-viewing situation, e.g., through headphones. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#TVD-4" class="section">
					<h4 id="TVD-4"><strong class="req-handle">[TVD-4]</strong> Where possible, provide options for authors and users to
          deal with the overflow case: continue reading, stop reading, or pause
          the video. (One solution from a user's point of view may be to pause
          the video and finish reading the TVD, for example.) User preferences
          should override the authored option. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#TVD-5" class="section">
					<h4 id="TVD-5"><strong class="req-handle">[TVD-5]</strong> Support control over speech-synthesis playback speed,
          volume and voice, and provide synchronisation points with the video. </h4>
				</div>
			</div>
			<div id="extended-video-descriptions" typeof="bibo:Chapter" about="#extended-video-descriptions" class="section">
				<h3><span class="secno">3.3 </span> Extended video descriptions </h3>
				<p>Video descriptions are usually provided as recorded speech, timed to play
      in the natural pauses in dialog or narration. In some types of material,
      however, there is not enough time to present sufficient descriptions. To
      meet such cases, the concept of extended description was developed. Extended
      descriptions work by pausing the video and program audio at key moments,
      playing a longer description than would normally be permitted, and then
      resuming playback when the description is finished playing. This will naturally
      extend the timeline of the entire presentation. This procedure has not
      been possible in broadcast television; however, hard-disk recording and
      on-demand Internet systems can make this a practical possibility. </p>
				<p>Extended video description (EVD) has been reported to benefit people
      with cognitive disabilities; for example, it might help people with Asperger's
      Syndrome and other Autistic Spectrum Disorders, in that it can make connections
      between cause and effect, point out what is important to look at, or explain
      moods that might otherwise be missed. </p>
				<p>Systems supporting extended video descriptions must: </p>
				<div typeof="bibo:Chapter" about="#EVD-1" class="section">
					<h4 id="EVD-1"><strong class="req-handle">[EVD-1]</strong> Support detailed user control as specified in <a href="http://www.w3.org/WAI/PF/src/media-a11y-req#TVD-4">[TVD-4]</a> for
          extended video descriptions. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#EVD-2" class="section">
					<h4 id="EVD-2"><strong class="req-handle">[EVD-2]</strong> Support automatically pausing the video and main audio tracks
          in order to play a lengthy description. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#EVD-3" class="section">
					<h4 id="EVD-3"><strong class="req-handle">[EVD-3]</strong> Support resuming playback of video and main audio tracks
          when the description is finished. </h4>
					<p>Note that this is an advanced feature and would only be expected by
      advanced systems. </p>
				</div>
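				<p>A minimal sketch of this pause-and-resume behaviour is given below, assuming
      the descriptions are delivered as a text track and spoken via the Web Speech
      API; the track index and event wiring are illustrative, and a shipping player
      would need more robust track selection. </p>
				<pre class="example">
// Sketch: pause video and main audio while a lengthy description
// is spoken, then resume ([EVD-2], [EVD-3]).
const video = document.querySelector('video');
const track = video.textTracks[0];     // assumed: the descriptions track
track.mode = 'hidden';                 // fire cue events without rendering

track.addEventListener('cuechange', () => {
  const cue = track.activeCues[0];
  if (!cue) return;
  video.pause();                       // halt video and main audio
  const utterance = new SpeechSynthesisUtterance(cue.text);
  utterance.onend = () => video.play();  // resume when finished
  speechSynthesis.speak(utterance);
});
</pre>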
			</div>
			<div id="clean-audio" typeof="bibo:Chapter" about="#clean-audio" class="section">
				<h3><span class="secno">3.4 </span> Clean audio </h3>
				<p>A relatively recent development in television accessibility is the concept
      of <a href="http://www.etsi.org/deliver/etsi_ts/101100_101199/101154/01.09.01_60/ts_101154v010901p.pdf">clean
      audio</a>, which takes advantage of the increased adoption of multichannel
      audio. This is primarily aimed at audiences who are hard of hearing, and
      consists of isolating the audio channel containing the spoken dialog and
      important non-speech information that can then be amplified or otherwise
      modified, while other channels containing music or ambient sounds are attenuated. </p>
				<p>Using the isolated audio track may make it possible to apply more sophisticated
      audio processing such as pre-emphasis filters, pitch-shifting, and so on
      to tailor the audio to the user's needs, since hearing loss is typically
      frequency-dependent, and the user may have usable hearing in some bands
      yet none at all in others. </p>
				<p>Systems supporting clean audio and multiple audio tracks must: </p>
				<div typeof="bibo:Chapter" about="#CA-1" class="section">
					<h4 id="CA-1"><strong class="req-handle">[CA-1]</strong> Support clean audio as a separate, alternative audio track
          from other audio-based alternative media resources. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#CA-2" class="section">
					<h4 id="CA-2"><strong class="req-handle">[CA-2]</strong> Support the synchronisation of multitrack audio either within
          the same file or from separate files - preferably both. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#CA-3" class="section">
					<h4 id="CA-3"><strong class="req-handle">[CA-3]</strong> Support separate volume control of the different audio tracks. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#CA-4" class="section">
					<h4 id="CA-4"><strong class="req-handle">[CA-4]</strong> Support pre-emphasis filters, pitch-shifting, and other audio-processing
          algorithms. </h4>
				</div>
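				<p>To make [CA-2] through [CA-4] concrete, the sketch below routes a dialog
      track and an ambience track through separate gain nodes and applies an
      illustrative high-shelf boost to the dialog; the element ids and all filter
      values are hypothetical. </p>
				<pre class="example">
// Sketch: independent volume control over a clean-audio dialog track
// and an ambience track, plus a simple pre-emphasis filter ([CA-3], [CA-4]).
const ctx = new AudioContext();
const dialog = ctx.createMediaElementSource(
    document.getElementById('dialog-track'));     // hypothetical id
const ambience = ctx.createMediaElementSource(
    document.getElementById('ambience-track'));   // hypothetical id

const dialogGain = ctx.createGain();
const ambienceGain = ctx.createGain();
const emphasis = ctx.createBiquadFilter();
emphasis.type = 'highshelf';        // boost bands the user hears best
emphasis.frequency.value = 2000;    // illustrative values
emphasis.gain.value = 6;

dialog.connect(emphasis).connect(dialogGain).connect(ctx.destination);
ambience.connect(ambienceGain).connect(ctx.destination);

dialogGain.gain.value = 1.5;        // amplify speech
ambienceGain.gain.value = 0.3;      // attenuate music/ambience
</pre>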
			</div>
			<div id="content-navigation-by-content-structure" typeof="bibo:Chapter" about="#content-navigation-by-content-structure" class="section">
				<h3><span class="secno">3.5 </span> Content navigation by content structure </h3>
				<p>Most people are familiar with fast forward and rewind in media content.
      However, because they progress through content based only on time, fast
      forward and rewind are ineffective, particularly when the content is being
      used for purposes other than entertainment. People with disabilities are
      especially disadvantaged if forced to rely solely on time-based
      fast forward and rewind to study content. </p>
				<p>Fortunately, most content is structured, and appropriate markup can expose
      this structure to forward and rewind controls: </p>
				<ul>
					<li> Books generally have chapters and perhaps subsections within those
        chapters. They also have structures such as page numbers, side-bars,
        tables, footnotes, tables of contents, glossaries, etc. </li>
					<li> Short music selections tend to have verses and repeating choruses. </li>
					<li> Larger classical-music works have movements which are further dividable
        by component parts such as exposition, development and recapitulation,
        or theme and variations. </li>
					<li> Operas, theatrical plays, and movies have acts and scenes within those
        acts. </li>
					<li> Television programs generally have clear divisions; e.g., newscasts
        have individual stories usually wrapped within larger structures called
        news, weather, or sports. </li>
					<li> A lecturer may first lay out a topic, then consider a series of approaches
        or illustrative examples, and finally draw a conclusion. </li>
				</ul>
                <p>This is, of course, a <abbr title="Document Object Model">DOM</abbr> view of 
                content. However, effective <abbr title="Document Object Model">DOM</abbr>-based
      navigation will require an additional control not typically available on
      current media players. This real-time control, which we are calling a &quot;granularity-level
      control,&quot; will allow the user to adjust the level of granularity applied
      to &quot;next&quot; and &quot;previous&quot; controls. This is necessary
      because next and previous are too cumbersome if accessing every <abbr title="Document Object Model">DOM</abbr> element,
      but unsatisfactorily broad and coarse if set to only the top hierarchical
      <abbr title="Document Object Model">DOM</abbr> level. Allowing the user to adjust the <abbr title="Document Object Model">DOM</abbr> level that next and previous
      go to has proven very effective; hence the real-time granularity-level
      control. </p>
				<p><strong>Two examples of granularity levels</strong> </p>
				<p>1. In a news broadcast, the most global level (analogous to   &lt;h1&gt;)
      might be the category called &quot;news, weather, and sports.&quot;    The
      second level (analogous to &lt;h2&gt;) would identify each individual news
      (or sports) story. With the granularity control set to level 1, &quot;next&quot; and &quot;previous&quot; would
      cycle among news, weather, and sports. Set at level 2, it would cycle among
      individual news (or sports) stories. </p>
				<p>2. In a bilingual audiobook-plus-e-text production of Dante Alighieri's &quot;La
      Divina Commedia,&quot; the user would choose whether to listen to the original
      medieval Italian or its modern-language translation—possibly toggling
      between them. Meanwhile, both the original and translated texts might appear
      on screen, with both the original and translated text highlighted, line
      by line, in sync with the audio narration. </p>
				<ul>
					<li> The most global (&lt;h1&gt;) level would be each individual book— &quot;Inferno,&quot; &quot;Purgatorio,&quot; and &quot;Paradiso.&quot; </li>
					<li> The second (&lt;h2&gt;) level would be each individual canto. </li>
					<li> The third (&lt;h3&gt;) level would be each individual verso. </li>
					<li> The fourth (&lt;h4&gt;) level would be each individual line of poetry. </li>
				</ul>
				<p>With granularity set at level 1, &quot;next&quot; and &quot;previous&quot; would
      cycle among the three books of &quot;La Divina Commedia.&quot;  Set at
      level 2, they would cycle among its cantos, at level 3 among its versos,
      and at level 4 among the individual lines of poetry text. </p>
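				<p>One possible encoding of such a hierarchy is sketched below as a WebVTT
      chapters track, where nesting follows from time containment and the cue
      identifiers encode the level; the identifier convention and all timings
      are illustrative only. </p>
				<pre class="example">
WEBVTT

NOTE
Illustrative only: cue identifiers encode the hierarchy level,
and nesting follows from time containment.

inferno
00:00:00.000 --> 02:10:00.000
Inferno

inferno/canto-01
00:00:00.000 --> 00:04:30.000
Inferno, Canto I

inferno/canto-01/line-001
00:00:00.000 --> 00:00:06.000
Nel mezzo del cammin di nostra vita
</pre>
				<p>Such a track could be attached via &lt;track kind="chapters"
      src="structure.vtt"&gt;, and a player's granularity control would then
      filter navigation targets by level. </p>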
				<p><strong>Navigating ancillary content</strong> </p>
				<p>There is a kind of structure, particularly in longer media resources,
      which requires special navigational consideration. While present in the
      media resource, it does not fit in the natural beginning-to-end progression
      of the resource. Its consumption tends to interrupt this natural beginning-to-end
      progression. A familiar example is a footnote or sidebar in a book. One
      must pause reading the text narrative to read a footnote or sidebar. Yet
      these structures are important and might require their own alternative
      media renditions. We have chosen to call such structures &quot;ancillary
      content structures.&quot; </p>
				<p>Commercials, news briefs, weather updates, etc., are familiar examples
      from television programming. While they are so prevalent that most of us may
      be inured to them, they do interrupt the primary television program.
      Users will want the ability to navigate past these ancillary structures—or
      perhaps directly to them. </p>
				<p>E-text-plus-audio productions of titles such as &quot;La Divina Commedia,&quot; described
      above, may well include reproductions of famous frescoes or paintings interspersed
      throughout the text, though these are not properly part of the text/content.
      Such illustrations must be programmatically discoverable by users. They
      also need to be described. However, the user needs the option of choosing
      when to pause for that interrupting description. </p>
				<p>One current HTML5 media-based example of ancillary content is the Mozilla
      Popcorn JavaScript library and API, which can be explored further through the
      following three resources: </p>
				<ul>
					<li> <a href="https://wiki.mozilla.org/PopcornOpenVideoAPI">Mozilla PopcornOpenVideoAPI</a> documentation </li>
					<li> <a href="http://www.webmonkey.com/2010/08/mozillas-popcorn-project-adds-extra-flavor-to-web-video/">Mozilla’s
          Popcorn Project Adds Extra Flavor to Web Video</a> blog post </li>
					<li> <a href="http://popcornjs.org/">Popcorn.js</a> script
        library </li>
				</ul>
				<p><strong>Additional note</strong> </p>
				<p>Media in HTML5 will be used heavily and broadly, so these accessibility
      user requirements will find correspondingly broad applicability. </p>
				<p>Just as the structures introduced particularly by nonfiction titles make
      books more usable, media is more usable when its inherent structure is
      exposed by markup. Markup-based access to structure is critical for persons
      with disabilities who cannot infer structure from purely presentational
      cues. </p>
				<p>Structural navigation has proven highly effective in various programs
      of electronic book publication for persons with print disabilities. Nowadays,
      these programs are based on the <a href="http://www.daisy.org/daisy-standard">ANSI/NISO
      Z39.86 specifications</a>. Z39.86 structural navigation is also supported
      by <a href="http://idpf.org/">e-publishing industry specifications</a>. </p>
				<p>The user can navigate along the timebase using a continuous scale, and
      by relative time units within rendered audio and animations (including
      video and animated images) that last three or more seconds at their default
      playback rate. (UAAG 2.0 4.9.6?) </p>
				<p>The user can navigate by semantic structure within the time-based media,
      such as by chapters or scenes, if present in the media (UAAG 2.0 4.9.7). </p>
				<p>Systems supporting content navigation must: </p>
				<div typeof="bibo:Chapter" about="#CN-1" class="section">
					<h4 id="CN-1"><strong class="req-handle">[CN-1]</strong> Provide a means to structure media resources so that users
          can navigate them by semantic content structure, e.g., through adding
          a track to the video that contains navigation markers (in table-of-contents
          style). This mechanism must allow authors to identify ancillary content
          structures, which may form a hierarchy. Support keeping
          all media representations synchronised when users navigate. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#CN-2" class="section">
					<h4 id="CN-2"><strong class="req-handle">[CN-2]</strong> The navigation track should provide for hierarchical structures
          with titles for the sections. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#CN-3" class="section">
					<h4 id="CN-3"><strong class="req-handle">[CN-3]</strong> Support both global navigation by the larger structural elements
          of a media work and navigation by the most localized atomic structures of that
          work, even though authors may not have marked up all levels of navigational
          granularity. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#CN-4" class="section">
					<h4 id="CN-4"><strong class="req-handle">[CN-4]</strong> Support third-party provided structural navigation markup. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#CN-5" class="section">
					<h4 id="CN-5"><strong class="req-handle">[CN-5]</strong> Keep all content representations in sync, so that moving
          to any particular structural element in media content also moves to
          the corresponding point in all provided alternate media representations
          (captions, described video, transcripts, etc) associated with that
          work. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#CN-6" class="section">
					<h4 id="CN-6"><strong class="req-handle">[CN-6]</strong> Support direct access to any structural element, possibly
          through URIs. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#CN-7" class="section">
					<h4 id="CN-7"><strong class="req-handle">[CN-7]</strong> Support pausing primary content traversal to provide access
          to such ancillary content in line. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#CN-8" class="section">
					<h4 id="CN-8"><strong class="req-handle">[CN-8]</strong> Support skipping of ancillary content so as not to interrupt
          content flow. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#CN-9" class="section">
					<h4 id="CN-9"><strong class="req-handle">[CN-9]</strong> Support access to each ancillary content item, including
          with &quot;next&quot; and &quot;previous&quot; controls, apart from
          accessing the primary content of the title. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#CN-10" class="section">
					<h4 id="CN-10"><strong class="req-handle">[CN-10]</strong> Support that in bilingual texts both the original and translated
          texts can appear on screen, with both the original and translated text
          highlighted, line by line, in sync with the audio narration. </h4>
				</div>
			</div>
			<div id="captioning" typeof="bibo:Chapter" about="#captioning" class="section">
				<h3><span class="secno">3.6 </span> Captioning </h3>
				<p>For people who are deaf or hard-of-hearing, captioning is a prime alternative
      representation of audio. Captions are in the same language as the main
      audio track and, in contrast to foreign-language subtitles, render a transcription
      of dialog or narration as well as important non-speech information, such
      as sound effects, music, and laughter. Historically, captions have been
      either closed or open. Closed captions are transmitted as data along
      with the video and are not visible until the user elects to turn them
      on, usually by invoking an on-screen control or menu selection. Open captions
      are always visible; they are merged with the video track and
      cannot be turned off. </p>
				<p>Ideally, captions should be a verbatim representation of the audio; however,
      captions are sometimes edited for various reasons— for example, for reading
      speed or for language level. In general, consumers of captions have expressed
      that the text should represent exactly what is in the audio track. If edited
      captions are provided, then they should be clearly marked as such, and
      the full verbatim version should also be available as an option. </p>
				<p>The timing of caption text can coincide with the mouth movement of the
      speaker (where visible), but this is not strictly necessary. For timing
      purposes, captions may sometimes precede or extend slightly after the audio
      they represent. Captioning should also use adequate means to distinguish
      between speakers as turn-taking occurs during conversation; this has in
      the past been done by positioning the text near the speaker, by assigning
      different colors to different speakers, or by putting the name and a colon
      in front of the text line of a speaker. </p>
				<p>Captions are useful to a wide array of users in addition to their originally
      intended audiences. Gyms, bars and restaurants regularly employ captions
      as a way for patrons to watch television while in those establishments.
      People learning to read, or learning the language of the country where
      they live as a second language, also benefit from captions: research has
      shown that captions help reinforce vocabulary and language. Captions can also
      provide a powerful search capability, allowing users and search engines
      to search the caption text to locate a specific video or an exact point
      in a video. </p>
				<p>Formats for captions, subtitles or foreign-language subtitles must: </p>
				<div typeof="bibo:Chapter" about="#CC-1" class="section">
					<h4 id="CC-1"><strong class="req-handle">[CC-1]</strong> Render text in a time-synchronized manner,
          using the media resource as the timebase master. </h4>
					<p class="note">Most of the time, the main audio track would be the best candidate
        for the timebase. Where a video without audio, but with a text track,
        is available, the video track becomes the timebase master. Also, there
        may be situations where an explicit timing track is available. </p>
				</div>
				<div typeof="bibo:Chapter" about="#CC-2" class="section">
					<h4 id="CC-2"><strong class="req-handle">[CC-2]</strong> Allow the author to specify erasures, i.e.,
          times when no text is displayed on the screen (no text cues are active). </h4>
					<p class="note">This should be possible both within media resources and caption
      formats. </p>
				</div>
				<div typeof="bibo:Chapter" about="#CC-3" class="section">
					<h4 id="CC-3"><strong class="req-handle">[CC-3]</strong> Allow the author to assign timestamps so
          that one caption/subtitle follows another, with no perceivable gap
          in between. </h4>
					<p class="note">This means that caption cues should be able to either let the
        start time of the subsequent cue be determined by the duration of the
        cue or have the end time be implied by the start of the next cue. For
        overlapping captions, explicit start and end times are then required. </p>
				</div>
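				<p>In WebVTT, for example, a gapless sequence simply gives one cue an end
      time equal to the next cue's start time; timings and text here are
      illustrative. </p>
				<pre class="example">
WEBVTT

00:00:01.000 --> 00:00:04.000
- Did you see that?

00:00:04.000 --> 00:00:07.500
- I did. Call it in.
</pre>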
				<div typeof="bibo:Chapter" about="#CC-4" class="section">
					<h4 id="CC-4"><strong class="req-handle">[CC-4]</strong> Be available in a text encoding. </h4>
					<p class="note">This means that the character encoding must be determinable
        - either by making it explicit or by enforcing a single default such
        as UTF-8. </p>
				</div>
				<div typeof="bibo:Chapter" about="#CC-5" class="section">
					<h4 id="CC-5"><strong class="req-handle">[CC-5]</strong> Support positioning in all parts of the
          screen - either inside the media viewport or in a determined
          space next to the media viewport. This is particularly important when
          multiple captions are on screen at the same time and relate to different
          speakers, or when in-picture text is to be avoided. </h4>
					<p class="note">The minimum requirement is a bounding box (with an optional background)
        into which text is flowed, and that probably needs to be pixel aligned.
        The absolute position of text within the bounding box is less critical,
        although it is important to be able to avoid bad word-breaks and have
        adequate white space around letters and so on. There is more on this
        in a separate requirement. </p>
					<p>The caption format could provide a min-width/min-height for its bounding
        box, which typically is calculated from the bottom of the video viewport,
        but can be placed elsewhere by the Web page, with the Web page being
        able to make that box larger and scale the text relatively, too. The
        positions inside the box should probably be into regions, such as top,
        right, bottom, left, center. </p>
				</div>
				<div typeof="bibo:Chapter" about="#CC-6" class="section">
					<h4 id="CC-6"><strong class="req-handle">[CC-6]</strong> Support the display of multiple regions
          of text simultaneously. </h4>
					<p class="note">This typically relates to multiple text cues that are defined
        on overlapping times. If the cues' rendering targets are different
        spatial regions, they can be displayed simultaneously. </p>
				</div>
				<div typeof="bibo:Chapter" about="#CC-7" class="section">
					<h4 id="CC-7"><strong class="req-handle">[CC-7]</strong> Display multiple rows of text when rendered
          as text in a right-to-left or left-to-right language. </h4>
					<p class="note">Internationalization is important not just for subtitles, as captions
      can be used in all languages. </p>
				</div>
				<div typeof="bibo:Chapter" about="#CC-8" class="section">
					<h4 id="CC-8"><strong class="req-handle">[CC-8]</strong> Allow the author to specify line breaks. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#CC-9" class="section">
					<h4 id="CC-9"><strong class="req-handle">[CC-9]</strong> Permit a range of font faces and sizes. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#CC-10" class="section">
					<h4 id="CC-10"><strong class="req-handle">[CC-10]</strong> Render a background in a range of colors,
          supporting a full range of opacities. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#CC-11" class="section">
					<h4 id="CC-11"><strong class="req-handle">[CC-11]</strong> Render text in a range of colors. </h4>
					<p class="note">The user should have final control over rendering styles like
      color and fonts; e.g., through user preferences. </p>
				</div>
				<div typeof="bibo:Chapter" about="#CC-12" class="section">
					<h4 id="CC-12"><strong class="req-handle">[CC-12]</strong> Enable rendering of text with a thicker
          outline or a drop shadow to allow for better contrast with the background. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#CC-13" class="section">
					<h4 id="CC-13"><strong class="req-handle">[CC-13]</strong> Where a background is used, it is preferable
          to keep the caption background visible even at times when no text
          is displayed, such that it minimises distraction. However, where captions
          are infrequent, the background should be allowed to disappear to enable
          the user to see as much of the underlying video as possible. </h4>
					<p class="note">It may be technically possible to have cues without text. </p>
				</div>
				<div typeof="bibo:Chapter" about="#CC-14" class="section">
					<h4 id="CC-14"><strong class="req-handle">[CC-14]</strong> Allow the use of mixed display styles—
          e.g., mixing paint-on captions with pop-on captions— within a single
          caption cue or in the caption stream as a whole. Pop-on captions are
          usually one or two lines of captions that appear on screen and remain
          visible for one to several seconds before they disappear. Paint-on
          captions are individual characters that are &quot;painted on&quot; from
          left to right, not popped onto the screen all at once, and usually
          are verbatim. Another often-used caption style in live captioning is
          roll-up: here, cue text follows double chevrons (&quot;greater than&quot; symbols),
          which are used to indicate different speaker identifications. Each sentence &quot;rolls
          up&quot; to about three lines. The top line of the three disappears
          as a new bottom line is added, allowing the continuous rolling up of
          new lines of captions. </h4>
					<p class="note">Similarly, in karaoke, individual characters are often &quot;painted
      on&quot;. </p>
				</div>
				<div typeof="bibo:Chapter" about="#CC-15" class="section">
					<h4 id="CC-15"><strong class="req-handle">[CC-15]</strong> Support positioning such that the lowest
          line of captions appears at least 1/12 of the total screen height above
          the bottom of the screen, when rendered as text in a right-to-left
          or left-to-right language. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#CC-16" class="section">
					<h4 id="CC-16"><strong class="req-handle">[CC-16]</strong> Use conventions that include inserting
          left-to-right and right-to-left segments within a vertical run (e.g.
          Tate-chu-yoko in Japanese), when rendered as text in a top-to-bottom
          oriented language. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#CC-17" class="section">
					<h4 id="CC-17"><strong class="req-handle">[CC-17]</strong> Represent content of different natural
          languages. In some cases a few foreign words form
          part of the original soundtrack and thus need to be in the same caption
          resource. Also allow for separate caption files for different languages
          and on-the-fly switching between them. This is also a requirement for
          subtitles. </h4>
					<p class="note">Caption/subtitle files that are alternatives in different languages
        are probably best provided as separate caption resources that are user
        selectable. Realistically, two languages present at the same time on
        screen is probably the limit. </p>
				</div>
				<div typeof="bibo:Chapter" about="#CC-18" class="section">
					<h4 id="CC-18"><strong class="req-handle">[CC-18]</strong> Represent content of at least those specific
          natural languages that may be represented with [Unicode 3.2], including
          common typographical conventions of that language (e.g., through the
          use of furigana and other forms of ruby text). </h4>
				</div>
				<div typeof="bibo:Chapter" about="#CC-19" class="section">
					<h4 id="CC-19"><strong class="req-handle">[CC-19]</strong> Present the full range of typographical
          glyphs, layout and punctuation marks normally associated with the natural
          language's print-writing system. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#CC-20" class="section">
					<h4 id="CC-20"><strong class="req-handle">[CC-20]</strong> Permit in-line mark-up for foreign words
          or phrases. </h4>
					<p class="note">Italics markup may be sufficient for a human user, but it is important
        to be able to mark up languages so that the text can be rendered correctly,
        since the same Unicode code points can be shared between languages and rendered
        differently in different contexts. This is mainly an I18n issue. It is also
        important for audio rendering, to get pronunciation correct. </p>
				</div>
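				<p>In WebVTT, for instance, such in-line language markup is expressed with
      the &lt;lang&gt; tag; the cue below is illustrative. </p>
				<pre class="example">
00:03:10.000 --> 00:03:14.000
He shrugged and said, &lt;lang fr&gt;c'est la vie&lt;/lang&gt;.
</pre>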
				<div typeof="bibo:Chapter" about="#CC-21" class="section">
					<h4 id="CC-21"><strong class="req-handle">[CC-21]</strong> Permit the distinction between different
          speakers. </h4>
					<p>Further, systems that support captions must: </p>
				</div>
				<div typeof="bibo:Chapter" about="#CC-22" class="section">
					<h4 id="CC-22"><strong class="req-handle">[CC-22]</strong> Support captions that are provided inside
          media resources as tracks, or in external files. </h4>
					<p class="note">It is desirable to expose the same API to both. </p>
				</div>
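				<p>For the external-file case, the association might look as follows; the
      file names are illustrative, and in-band tracks would surface through the
      same TextTrack API. </p>
				<pre class="example">
&lt;video src="movie.webm" controls&gt;
  &lt;track kind="captions" src="captions-en.vtt" srclang="en"
         label="English" default&gt;
  &lt;track kind="captions" src="captions-en-edited.vtt" srclang="en"
         label="English (edited)"&gt;
  &lt;track kind="subtitles" src="subtitles-fr.vtt" srclang="fr"
         label="Français"&gt;
&lt;/video&gt;
</pre>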
				<div typeof="bibo:Chapter" about="#CC-23" class="section">
					<h4 id="CC-23"><strong class="req-handle">[CC-23]</strong> Ascertain that captions are displayed in
          sync with the media resource. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#CC-24" class="section">
					<h4 id="CC-24"><strong class="req-handle">[CC-24]</strong> Support user activation/deactivation of
          caption tracks. </h4>
					<p class="note">This requires a menu of some sort that displays the available
      tracks for activation/deactivation. </p>
				</div>
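				<p>A minimal sketch of such a menu, built from the tracks the user agent
      exposes, follows; the container element is assumed to exist. </p>
				<pre class="example">
// Sketch: list available caption/subtitle tracks and let the
// user toggle them ([CC-24]).
const video = document.querySelector('video');
const menu = document.getElementById('caption-menu');   // assumed element
for (const track of Array.from(video.textTracks)) {
  if (track.kind === 'captions' || track.kind === 'subtitles') {
    const item = document.createElement('button');
    item.textContent = track.label || track.language;
    item.onclick = () => {
      track.mode = (track.mode === 'showing') ? 'disabled' : 'showing';
    };
    menu.appendChild(item);
  }
}
</pre>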
				<div typeof="bibo:Chapter" about="#CC-25" class="section">
					<h4 id="CC-25"><strong class="req-handle">[CC-25]</strong> Support edited and verbatim captions, if
          available. </h4>
					<p class="note">Edited and verbatim captions can be provided in two different
        caption resources. There is a need to expose to the user how they differ,
        similar to how there can be caption tracks in different languages. </p>
				</div>
				<div typeof="bibo:Chapter" about="#CC-26" class="section">
					<h4 id="CC-26"><strong class="req-handle">[CC-26]</strong> Support multiple tracks of foreign-language
          subtitles in different languages. </h4>
					<p class="note">These different-language &quot;tracks&quot; can be provided in
      different resources. </p>
				</div>
				<div typeof="bibo:Chapter" about="#CC-27" class="section">
					<h4 id="CC-27"><strong class="req-handle">[CC-27]</strong> Support live-captioning functionality. </h4>
				</div>
			</div>
			<div id="enhanced-captions-subtitles" typeof="bibo:Chapter" about="#enhanced-captions-subtitles" class="section">
				<h3><span class="secno">3.7 </span> Enhanced captions/subtitles </h3>
				<p>Enhanced captions are timed text cues that have been enriched with further
      information - examples are glossary definitions for acronyms and other
      initialisms, foreign terms (for example, Latin), jargon, or descriptions
      of other difficult language. They may be age-graded, so that multiple
      caption tracks are supplied, or the glossary function may be added dynamically
      through machine lookup. </p>
				<p>Glossary information can be added in the normal time allotted for the
      cue (e.g., as a callout or other overlay), or it might take the form of
      a hyperlink that, when activated, pauses the main content and allows access
      to more complete explanatory material. </p>
				<p>Such extensions can provide important additional information that enables
      or improves the understanding of the main content for users who rely on
      accessibility features. Enhanced text cues will be particularly useful to those with restricted
      reading skills, to subtitle users, and to caption users. Users may often
      come across keywords in text cues that lend themselves to further in-depth
      information or hyperlinks, such as an e-mail contact or phone number for
      a person, an unfamiliar term that needs a Wikipedia link for definition, or
      an idiom that needs comments to explain it to a foreign-language speaker. </p>
				<p>Systems that support enhanced captions must: </p>
				<div typeof="bibo:Chapter" about="#ECC-1" class="section">
					<h4 id="ECC-1"><strong class="req-handle">[ECC-1]</strong> Support metadata markup for (sections of)
          timed text cues. </h4>
					<p class="note">Such &quot;metadata&quot; markup can be realised through a @title
        attribute on a &lt;span&gt; of the text, through a hyperlink to another location
        where a term is explained, through an &lt;abbr&gt;, &lt;acronym&gt;, or
        &lt;dfn&gt; element, or through RDFa or microdata. </p>
				</div>
				<div typeof="bibo:Chapter" about="#ECC-2" class="section">
					<h4 id="ECC-2"><strong class="req-handle">[ECC-2]</strong> Support hyperlinks and other activation
          mechanisms for supplementary data for (sections of) caption text. </h4>
					<p class="note">This can be realised through inclusion of &lt;a&gt; elements or
        buttons into timed text cues, where additional overlays could be created
        or a different page be loaded. One needs to deal here with the need to
        pause the media timeline for reading of the additional information. </p>
				</div>
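				<p>A sketch of one way to realise this follows: cues are rendered into an
      HTML overlay, a hypothetical bracket convention in the cue payload is turned
      into links (WebVTT itself has no hyperlink syntax), and the timeline pauses
      while the user follows the supplementary material. </p>
				<pre class="example">
// Sketch ([ECC-2]): render cues with links and pause for reading.
// The {term|url} convention is hypothetical, not part of WebVTT.
const video = document.querySelector('video');
const overlay = document.getElementById('cue-overlay');  // assumed element
const track = video.textTracks[0];
track.mode = 'hidden';                // the application renders the cues

function renderCue(cue) {
  const span = document.createElement('span');
  const parts = cue.text.split(/\{(.+?)\|(.+?)\}/);
  span.append(parts[0] || '');
  if (parts.length >= 3) {            // a {term|url} part was found
    const a = document.createElement('a');
    a.href = parts[2];
    a.textContent = parts[1];
    span.append(a, parts[3] || '');
  }
  return span;
}

track.addEventListener('cuechange', () => {
  overlay.textContent = '';
  for (const cue of Array.from(track.activeCues)) {
    overlay.appendChild(renderCue(cue));
  }
});

// Pause the timeline while the user reads supplementary material.
overlay.addEventListener('click', (event) => {
  if (event.target.closest('a')) video.pause();
});
</pre>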
				<div typeof="bibo:Chapter" about="#ECC-3" class="section">
					<h4 id="ECC-3"><strong class="req-handle">[ECC-3]</strong> Support text cues that may be longer than
          the time available until the next text cue and thus overlap with it
          - in this case, a feature should be provided to decide whether the
          overlap is acceptable, whether the cue should be cut short, or whether
          the media resource should be paused while the caption is displayed.
          Timing would be provided by the author, but
          with the user being able to override it. </h4>
					<p class="note">This feature is analogous to extended video descriptions - where
        timing for a text cue is longer than the available time for the cue,
        it may be necessary to halt the media to allow for more time to read
        back on the text and its additional material. In this case, the pause
        is dependent on the user's reading speed, so this may imply user control
        or timeouts. </p>
				</div>
				<div typeof="bibo:Chapter" about="#ECC-4" class="section">
					<h4 id="ECC-4"><strong class="req-handle">[ECC-4]</strong> It needs to be possible to define timed
          text cues that are allowed to overlap with each other in time and be
          present on screen at the same time (e.g., those that come from speech
          of different speakers), as well as cues that are not allowed to overlap
          and thus cause media playback to pause so that users can catch up with
          their reading. </h4>
					<p class="note">This could be realised through a hint on the text cue or even
      for a whole track. </p>
				</div>
				<div typeof="bibo:Chapter" about="#ECC-5" class="section">
					<h4 id="ECC-5"><strong class="req-handle">[ECC-5]</strong> Allow users to define the reading speed
          and thus how long each text cue requires, and whether media
          playback needs to pause at times to let them catch up on their reading. </h4>
					<p class="note">This can be a setting in the UA, which will define user-interface
      behavior. </p>
				</div>
			</div>
			<div id="sign-translation" typeof="bibo:Chapter" about="#sign-translation" class="section">
				<h3><span class="secno">3.8 </span> Sign translation </h3>
				<p>Sign language shares the same concept as captioning: it presents both
      speech and non-speech information in an alternative format. Note that due
      to the wide regional variation in signing systems (e.g., American Sign
      Language vs British Sign Language), sign translation may not be appropriate
      for content with a global audience unless localized variants can be made
      available. </p>
				<p>Signing can be open (mixed into the video), offered as an entirely alternate
      stream, or closed (using some form of picture-in-picture or alpha-blending
      technology). It is possible to use quite low bit rates for much of the
      signing track, but it is important that facial, arm, hand and other body
      gestures be delivered at sufficient resolution to support legibility. Animated
      avatars may not currently be sufficient as a substitute for human signers,
      although research continues in this area and it may become practical at
      some point in the future. </p>
				<p>Acknowledging that not all devices will be capable of handling multiple
      video streams, this is a <em class="rfc2119" title="should">should</em> requirement for browsers where the hardware
      is capable of supporting it. Strong authoring guidance for content creators will
      mitigate situations where user agents are unable to support multiple video
      streams (WCAG); for example, on mobile devices that cannot support multiple
      streams, authors should be encouraged to offer two versions of the media
      stream, including one with signed captions burned into the media. </p>
				<p>Selecting from multiple tracks for different sign languages should be
      achieved in the same fashion that multiple caption/subtitle files are handled. </p>
				<p>Systems supporting sign language must: </p>
				<div typeof="bibo:Chapter" about="#SL-1" class="section">
					<h4 id="SL-1"><strong class="req-handle">[SL-1]</strong> Support sign-language video either as a track as part of
          a media resource or as an external file. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#SL-2" class="section">
					<h4 id="SL-2"><strong class="req-handle">[SL-2]</strong> Support the synchronized playback of the sign-language video
          with the media resource. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#SL-3" class="section">
					<h4 id="SL-3"><strong class="req-handle">[SL-3]</strong> Support the display of sign-language video either as picture-in-picture
          or alpha-blended overlay, as parallel video, or as the main video with
          the original video as picture-in-picture or alpha-blended overlay.
          Parallel video here means two discrete videos playing in sync with
          each other. It is preferable to have one discrete   &lt;video&gt; element
          contain all pieces for sync purposes rather than specifying multiple &lt;video&gt; elements
          intended to work in sync. </h4>
				</div>
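				<p>Where a single multi-track resource is not feasible and two discrete
      &lt;video&gt; elements must be used, a script can approximate the required
      synchronisation; the sketch below is illustrative, with hypothetical
      element ids and an arbitrarily chosen drift threshold. </p>
				<pre class="example">
// Sketch ([SL-2], [SL-3]): keep a separately hosted sign-language
// video in sync with the main video.
const main = document.getElementById('main-video');   // hypothetical ids
const sign = document.getElementById('sign-video');

main.addEventListener('play', () => sign.play());
main.addEventListener('pause', () => sign.pause());
main.addEventListener('seeked', () => { sign.currentTime = main.currentTime; });

// Correct gradual drift between the two decode pipelines.
setInterval(() => {
  if (!main.paused &amp;&amp; Math.abs(sign.currentTime - main.currentTime) > 0.3) {
    sign.currentTime = main.currentTime;
  }
}, 1000);
</pre>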
				<div typeof="bibo:Chapter" about="#SL-4" class="section">
					<h4 id="SL-4"><strong class="req-handle">[SL-4]</strong> Support multiple sign-language tracks in several sign languages. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#SL-5" class="section">
					<h4 id="SL-5"><strong class="req-handle">[SL-5]</strong> Support the interactive activation/deactivation of a sign-language
          track by the user. </h4>
				</div>
			</div>
			<div id="transcripts" typeof="bibo:Chapter" about="#transcripts" class="section">
				<h3><span class="secno">3.9 </span> Transcripts </h3>
				<p>While synchronized captions are generally preferable for people with hearing
      impairments, for some users they are not viable – those who are deaf-blind,
      for example, or those with cognitive or reading impairments that make it
      impossible to follow synchronized captions. And even with ordinary captions,
      it is possible to miss some information, as the captions and the video require
      two separate loci of attention. The full transcript supports different
      user needs and is not a replacement for captioning. A transcript can
      be presented simultaneously with the media material, which can assist slower
      readers or those who need more time to reference context, but it should
      also be made available independently of the media. </p>
				<p>A full text transcript should include information that would be in both
      the caption and video description, so that it is a complete representation
      of the material, as well as containing any interactive options. </p>
				<p>Systems supporting transcripts must: </p>
				<div typeof="bibo:Chapter" about="#T-1" class="section">
					<h4 id="T-1"><strong class="req-handle">[T-1]</strong> Support the provisioning of a full text transcript for the
          media asset in a separate but linked resource, where the linkage is
          programmatically accessible to <abbr title="Assistive Technology">AT</abbr>. </h4>
				</div>
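				<p>One possible approach, sketched below with illustrative names, is to
      place the transcript link next to the media and expose the association via
      aria-describedby so that it is programmatically discoverable. </p>
				<pre class="example">
&lt;video id="lecture" src="lecture.webm" controls
       aria-describedby="transcript-link"&gt;&lt;/video&gt;
&lt;p id="transcript-link"&gt;
  &lt;a href="lecture-transcript.html"&gt;Read the full transcript&lt;/a&gt;
&lt;/p&gt;
</pre>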
				<div typeof="bibo:Chapter" about="#T-2" class="section">
					<h4 id="T-2"><strong class="req-handle">[T-2]</strong> Support the provisioning of both scrolling and static display
          of a full text transcript with the media resource, e.g., in an area next
          to the video or underneath the video, which is also <abbr title="Assistive Technology">AT</abbr> accessible. </h4>
				</div>
			</div>
		</div>
		<div id="system-requirements" typeof="bibo:Chapter" about="#system-requirements" class="section">
			
<!-- OddPage -->
<h2><span class="secno">4. </span> System Requirements </h2>
			<div id="access-to-interactive-controls---menus" typeof="bibo:Chapter" about="#access-to-interactive-controls---menus" class="section">
				<h3><span class="secno">4.1 </span> Access to interactive controls / menus </h3>
				<p>Media elements offer a rich set of interaction possibilities to users.
      These interaction possibilities must be available to all users, including
      those who cannot use a pointer device for interaction. Further, they must
      be available however the controls are exposed - whether rendered natively
      by the user agent or scripted. Finally, the interaction possibilities
      need to be rich enough to give all users fine-grained control over media
      playback. </p>
				<p>It is imperative that controls be device independent, so that control
      may be achieved by keyboard, pointing device, speech, etc. </p>
				<p>Systems supporting keyboard accessibility must: </p>
				<div typeof="bibo:Chapter" about="#KA-1" class="section">
					<h4 id="KA-1"><strong class="req-handle">[KA-1]</strong> Support operation of all functionality via
          the keyboard on systems where a keyboard is (or can be) present, and
          where a unique focus object is employed. This does not forbid and should
          not discourage providing mouse input or other input methods in addition
          to keyboard operation. (UAAG 2.0 4.1.1) </h4>
					<p class="note">This means that all interaction possibilities with media elements
        need to be keyboard accessible; e.g., through being able to tab onto
        the play, pause, mute buttons, and to move the playback position from
        the keyboard. </p>
				</div>
				<div typeof="bibo:Chapter" about="#KA-2" class="section">
					<h4 id="KA-2"><strong class="req-handle">[KA-2]</strong> Support a rich set of native controls for
          media operation, including but not limited to play, pause, stop, jump
          to beginning, jump to end, scale player size (up to full screen), adjust
          volume, mute, captions on/off, descriptions on/off, selection of audio
          language, selection of caption language, selection of audio description
          language, location of captions, size of captions, video contrast/brightness,
          playback rate, content navigation on same level (next/prev) and between
          levels (up/down) etc. This is also a particularly important requirement
          on mobile devices or devices without a keyboard. </h4>
					<p class="note">This means that the @controls content attribute needs to provide
        an extended set of control functionality including functionality for
        accessibility users. </p>
				</div>
				<div typeof="bibo:Chapter" about="#KA-3" class="section">
					<h4 id="KA-3"><strong class="req-handle">[KA-3]</strong> All functionality available to native controls
          must also be available to scripted controls. The author would be able
          to choose any/all of the controls, skin them and position them. </h4>
					<p class="note">This means that new IDL attributes need to be added to the media
      elements for the extra controls that are accessibility related. </p>
				</div>
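				<p>The sketch below illustrates the spirit of [KA-1] and [KA-3]: scripted
      controls built from native button elements remain keyboard operable, since
      keyboard activation fires the same click event; the container id is assumed. </p>
				<pre class="example">
// Sketch: a keyboard-operable scripted play/pause control.
const video = document.querySelector('video');
const playBtn = document.createElement('button');
playBtn.textContent = 'Play';

playBtn.addEventListener('click', () => {
  if (video.paused) {
    video.play();
    playBtn.textContent = 'Pause';
  } else {
    video.pause();
    playBtn.textContent = 'Play';
  }
});
document.getElementById('controls').appendChild(playBtn);  // assumed element
</pre>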
				<div typeof="bibo:Chapter" about="#KA-4" class="section">
					<h4 id="KA-4"><strong class="req-handle">[KA-4]</strong> It must always be possible to enable native
          controls regardless of author preference, guaranteeing that such
          functionality is available and that user control essentially overrides
          author settings. This is also a particularly important requirement
          on mobile devices or devices without a keyboard. </h4>
					<p class="note">This could be enabled through a context menu, which is keyboard
      accessible and its keyboard access cannot be turned off. </p>
				</div>
				<div typeof="bibo:Chapter" about="#KA-5" class="section">
					<h4 id="KA-5"><strong class="req-handle">[KA-5]</strong> The scripted and native controls must go
          through the same platform-level accessibility framework (where it exists),
          so that a user presented with the scripted version is not shut out
          from some expected behaviour. </h4>
					<p class="note">This is below the level of HTML and means that the accessibility
      platform needs to be extended to allow access to these controls. </p>
				</div>
				<div typeof="bibo:Chapter" about="#KA-6" class="section">
					<h4 id="KA-6"><strong class="req-handle">[KA-6]</strong> Autoplay on media elements is a particularly
          difficult issue to manage for vision-impaired users, since the mouse
          allows other users to stop an auto-playing element on a page with a single
          interaction. Therefore, autoplay state needs to be exposed to the platform-level
          accessibility framework. The vision-impaired user must be able to stop
          autoplay either generally on all media elements through a setting,
          or for particular pages through a single keyboard user interaction. </h4>
					<p class="note">This could be enabled through encouraging publishers to use @autoplay,
        encouraging UAs to implement accessibility settings that allow users to turn
        off all autoplay, and encouraging <abbr title="Assistive Technology">AT</abbr> to implement a shortcut key to stop
        all autoplay on a Web page. </p>
				</div>
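				<p>A sketch of the single-interaction stop described above follows; the
      choice of the Escape key is illustrative and would in practice be an <abbr title="Assistive Technology">AT</abbr>
      or user-agent setting. </p>
				<pre class="example">
// Sketch ([KA-6]): one keystroke stops all auto-playing media.
document.addEventListener('keydown', (event) => {
  if (event.key === 'Escape') {
    const autoplaying = document.querySelectorAll(
        'video[autoplay], audio[autoplay]');
    for (const media of autoplaying) media.pause();
  }
});
</pre>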
			</div>
			<div id="granularity-level-control-for-structural-navigation" typeof="bibo:Chapter" about="#granularity-level-control-for-structural-navigation" class="section">
				<h3><span class="secno">4.2 </span> Granularity level control for structural navigation </h3>
				<p>As explained in &quot;Content Navigation&quot; above, a real-time control
      mechanism must be provided for adjusting the granularity of the structural
      navigation controls &quot;next&quot; and &quot;previous.&quot; Users must be able to set
      the range/scope of next and previous in real time. </p>
				<div typeof="bibo:Chapter" about="#CNS-1" class="section">
					<h4 id="CNS-1"><strong class="req-handle">[CNS-1]</strong> All identified structures, including ancillary content as
          defined in &quot;Content Navigation&quot; above, must be accessible
          with the use of &quot;next&quot; and &quot;previous,&quot; as refined
          by the granularity control. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#CNS-2" class="section">
					<h4 id="CNS-2"><strong class="req-handle">[CNS-2]</strong> Users must be able to discover, skip, play-in-line, or directly
          access ancillary content structures. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#CNS-3" class="section">
					<h4 id="CNS-3"><strong class="req-handle">[CNS-3]</strong> Users need to be able to access the granularity control
          using any input mode, e.g., keyboard, speech, pointer, etc. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#CNS-4" class="section">
					<h4 id="CNS-4"><strong class="req-handle">[CNS-4]</strong> Producers and authors may optionally provide additional
          access options to identified structures, such as direct access to any
          node in a table of contents. </h4>
				</div>
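				<p>Building on the illustrative chapters-track encoding sketched in section
      3.5, a granularity-refined &quot;next&quot; could be implemented roughly as
      follows; the level convention and track index remain hypothetical. </p>
				<pre class="example">
// Sketch ([CNS-1]): "next" at the currently selected granularity,
// where a cue's level is its number of "/"-separated id segments.
const video = document.querySelector('video');
const chapters = video.textTracks[0];   // assumed: the chapters track
chapters.mode = 'hidden';               // ensure the cues are loaded
let granularity = 1;                    // adjustable in real time

function next() {
  const cues = Array.from(chapters.cues).filter(
      (cue) => cue.id.split('/').length === granularity);
  const target = cues.find((cue) => cue.startTime > video.currentTime);
  if (target) video.currentTime = target.startTime;
}
</pre>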
			</div>
			<div id="time-scale-modification" typeof="bibo:Chapter" about="#time-scale-modification" class="section">
				<h3><span class="secno">4.3 </span> Time-scale modification </h3>
				<p>While not all devices may support the capability, a standard control API
      must support the ability to speed up or slow down content presentation
      without altering audio pitch. </p>
				<p class="note">While perhaps unfamiliar to some, this feature has been present
      on many devices, especially audiobook players, for some 20 years now. </p>
				<p>The user can adjust the playback rate of prerecorded time-based media
        content, such that all of the following are true (UAAG 2.0 4.9.5): </p>
				<div typeof="bibo:Chapter" about="#TSM-1" class="section">
					<h4 id="TSM-1"><strong class="req-handle">[TSM-1]</strong> The user can adjust the playback rate of the time-based
          media tracks to between 50% and 250% of real time. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#TSM-2" class="section">
					<h4 id="TSM-2"><strong class="req-handle">[TSM-2]</strong> Speech whose playback rate has been adjusted by the user
          maintains pitch in order to limit degradation of the speech quality. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#TSM-3" class="section">
					<h4 id="TSM-3"><strong class="req-handle">[TSM-3]</strong> All provided alternative media tracks remain synchronized
          across this required range of playback rates. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#TSM-4" class="section">
					<h4 id="TSM-4"><strong class="req-handle">[TSM-4]</strong> The user agent provides a function that resets the playback
          rate to normal (100%). </h4>
				</div>
				<div typeof="bibo:Chapter" about="#TSM-5" class="section">
					<h4 id="TSM-5"><strong class="req-handle">[TSM-5]</strong> The user can stop, pause, and resume rendered audio and
          animation content (including video and animated images) that last three
          or more seconds at their default playback rate. (UAAG 2.0 4.9.6) </h4>
				</div>
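				<p>In HTML5 terms, these requirements map onto the media element's
      playbackRate and preservesPitch attributes, as the sketch below
      illustrates for user agents that support them. </p>
				<pre class="example">
// Sketch ([TSM-1], [TSM-2], [TSM-4]): rate control with pitch preservation.
const video = document.querySelector('video');

function setRate(rate) {
  video.playbackRate = Math.min(2.5, Math.max(0.5, rate));  // 50%-250%
  video.preservesPitch = true;  // maintain speech pitch where supported
}

function resetRate() {
  video.playbackRate = 1.0;     // back to normal
}
</pre>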
			</div>
			<div id="production-practice-and-resulting-requirements" typeof="bibo:Chapter" about="#production-practice-and-resulting-requirements" class="section">
				<h3><span class="secno">4.4 </span> Production practice and resulting requirements </h3>
				<p>One of the biggest problems to date has been the lack of a universal system
      for media access. In response to user requirements, various countries and
      groups have defined systems to provide accessibility, especially captioning
      for television. However, these systems are typically not compatible. In
      some cases the formats can be inter-converted, but some formats — for example
      DVD sub-pictures — are image-based and are difficult to convert to text. </p>
				<p>Caption formats are often geared towards delivery of the media, for example
      as part of a television broadcast. They are not well suited to the production
      phases of media creation. Media creators have developed their own internal
      formats which are more amenable to the editing phase, but to date there
      has been no common format that allows interchange of this data. </p>
				<p>Any media-based solution should attempt to reduce, as far as possible, the
      layers of translation between production and delivery. </p>
				<p>In general, captioners use a proprietary workstation to prepare caption
      files; these can often export to various standard broadcast ingest formats,
      but in general the files are not inter-convertible. Most video editing suites
      are not set up to preserve captioning, so captions typically have to be
      added after the final edit is decided on; furthermore, since this work is
      often outsourced, the copyright holder may not hold the final editable
      version of the captions. Thus when programming is later re-purposed, e.g.,
      a shorter edit is made or a ‘director's cut’ produced, the captioning may
      have to be redone in its entirety. Similarly, and particularly for news
      footage, parts of the media may go to the Web before the final TV edit is made,
      and thus the captions that are produced for the final TV edit are not available
      for the Web version. </p>
				<p>It is important, when purchasing or commissioning media, that captioning
      and described video are taken into account and given equal priority, in terms
      of ownership, rights of use, etc., with the video and audio itself. </p>
				<p>This is primarily an authoring requirement. It is understood that a
      common time-stamp format must be declared in HTML5, so that authoring tools
      can conform to a required output. </p>
				<p>Systems supporting accessibility needs for media must: </p>
				<div typeof="bibo:Chapter" about="#PP-1" class="section">
					<h4 id="PP-1"><strong class="req-handle">[PP-1]</strong> Support existing production practice for alternative content
          resources, in particular allow for the association of separate alternative
          content resources to media resources. Browsers cannot support every
          time-stamp format in existence, just as they cannot support every
          image format. This necessitates a clear and unambiguous
          declared format, so that existing authoring tools can be configured
          to export finished files in the required format. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#PP-2" class="section">
					<h4 id="PP-2"><strong class="req-handle">[PP-2]</strong> Support the association of authoring and rights metadata
          with alternative content resources, including copyright and usage information. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#PP-3" class="section">
					<h4 id="PP-3"><strong class="req-handle">[PP-3]</strong> Support the simple replacement of alternative content resources
          even after publishing. This is again dependent on authoring practice
          - if the content creator delivers a final media file that contains
          related accessibility content inside the media wrapper (for example
          an MP4 file), then it will require an appropriate third-party authoring
          tool to make changes to that file - it cannot be demanded of the browser
          to do so. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#PP-4" class="section">
					<h4 id="PP-4"><strong class="req-handle">[PP-4]</strong> Typically, alternative content resources are created by different
          entities to the ones that create the media content. They may even be
          in different countries and not be allowed to re-publish the other one's
          content. It is important to be able to host these resources separately,
          associate them together through the Web page author, and eventually
          play them back synchronously to the user. </h4>
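					<p class="note">A minimal markup sketch of such an association, assuming
          a hypothetical third-party host captions.example.com for the caption
          resource:</p>
					<pre class="example">&lt;video controls>
  &lt;source src="movie.webm" type="video/webm">
  &lt;!-- The caption resource is hosted by a separate entity; the page
       author associates it with the media resource via markup.
       (Cross-origin delivery may additionally require a CORS opt-in.) -->
  &lt;track kind="captions" srclang="en" label="English captions"
         src="https://captions.example.com/movie-en.vtt">
&lt;/video></pre>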
				</div>
			</div>
			<div id="discovery-and-activation-deactivation-of-available-alternative-content-------by-the-user" typeof="bibo:Chapter" about="#discovery-and-activation-deactivation-of-available-alternative-content-------by-the-user" class="section">
				<h3><span class="secno">4.5 </span> Discovery and activation/deactivation of available alternative content
      by the user </h3>
				<p>As described above, individuals need a variety of media (alternative content)
      in order to perceive and understand the content. The author or some Web
      mechanism provides the alternative content. This alternative content may
      be part of the original content, embedded within the media container as
      'fallback content', or linked from the original content. The user is faced
      with discovering the availability of alternative content. </p>
				<p>Alternative content must be both discoverable by the user and accessible
      in device-agnostic ways. The development of APIs and user-agent controls
      should adhere to the following UAAG guidance: </p>
				<p>The user agent can facilitate the discovery of alternative content by
        following these criteria: </p>
				<div typeof="bibo:Chapter" about="#DAC-1" class="section">
					<h4 id="DAC-1"><strong class="req-handle">[DAC-1]</strong> The user has the ability to have indicators rendered along
          with rendered elements that have alternative content (e.g., visual
          icons rendered in proximity of content which has short text alternatives,
          long descriptions, or captions). In cases where the alternative content
          has different dimensions than the original content, the user has the
          option to specify how the layout/reflow of the document should be handled.
          (UAAG 2.0 3.1.1). </h4>
				</div>
				<div typeof="bibo:Chapter" about="#DAC-2" class="section">
					<h4 id="DAC-2"><strong class="req-handle">[DAC-2]</strong> The user has a global option to specify which types of alternative
          content by default and, in cases where the alternative content has
          different dimensions than the original content, how the layout/reflow
          of the document should be handled. (UAAG 2.0 3.1.2). </h4>
				</div>
				<div typeof="bibo:Chapter" about="#DAC-3" class="section">
					<h4 id="DAC-3"><strong class="req-handle">[DAC-3]</strong> The user can browse the alternatives and switch between
          them. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#DAC-4" class="section">
					<h4 id="DAC-4"><strong class="req-handle">[DAC-4]</strong> Synchronized alternatives for time-based media (e.g., captions,
          descriptions, sign language) can be rendered at the same time as their
          associated audio tracks and visual tracks (UAAG 2.0 3.1.3). </h4>
				</div>
				<div typeof="bibo:Chapter" about="#DAC-5" class="section">
					<h4 id="DAC-5"><strong class="req-handle">[DAC-5]</strong> Non-synchronized alternatives (e.g., short text alternatives,
          long descriptions) can be rendered as replacements for the original
          rendered content (UAAG 2.0 3.1.3). </h4>
				</div>
				<div typeof="bibo:Chapter" about="#DAC-6" class="section">
					<h4 id="DAC-6"><strong class="req-handle">[DAC-6]</strong> Provide the user with the global option to configure a cascade
          of types of alternatives to render by default, in case a preferred
          alternative content type is unavailable (UAAG 2.0 3.1.4). </h4>
				</div>
				<div typeof="bibo:Chapter" about="#DAC-7" class="section">
					<h4 id="DAC-7"><strong class="req-handle">[DAC-7]</strong> During time-based media playback, the user can determine
          which tracks are available and select or deselect tracks. These selections
          may override global default settings for captions, descriptions, etc.
          (UAAG 2.0 4.9.8) </h4>
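					<p class="note">A script-level sketch of how such selection can work in
          HTML5; a user-facing control would be built on the same interface:</p>
					<pre class="example">&lt;script>
  var video = document.querySelector('video');
  // Determine which tracks are available.
  for (var i = 0; i &lt; video.textTracks.length; i++) {
    var track = video.textTracks[i];
    console.log(track.kind + ' (' + track.language + '): ' + track.label);
  }
  // Setting mode to 'showing' selects a track for rendering;
  // 'disabled' deselects it, overriding any global default.
  video.textTracks[0].mode = 'showing';
&lt;/script></pre>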
				</div>
				<div typeof="bibo:Chapter" about="#DAC-8" class="section">
					<h4 id="DAC-8"><strong class="req-handle">[DAC-8]</strong> Provide the user with the option to load time-based media
          content such that the first frame is displayed (if video), but the
          content is not played until explicit user request. (UAAG 2.0 4.9.2) </h4>
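					<p class="note">A markup sketch with hypothetical file names:
          preload="metadata" lets the user agent present a first frame or
          poster, while playback waits for an explicit user request (no
          autoplay attribute):</p>
					<pre class="example">&lt;!-- The poster or first frame is displayed; nothing plays
     until the user activates the controls. -->
&lt;video src="news.webm" poster="news-frame1.jpg"
       preload="metadata" controls>&lt;/video></pre>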
				</div>
			</div>
			<div id="requirements-on-making-properties-available-to-the-accessibility-interface" typeof="bibo:Chapter" about="#requirements-on-making-properties-available-to-the-accessibility-interface" class="section">
				<h3><span class="secno">4.6 </span> Requirements on making properties available to the accessibility interface </h3>
				<p>Often forgotten in media systems, especially with the newer forms of packaging
      such as DVD menus and on-screen program guides, is the fact that the user
      needs to actually get to the content, control its playback, and turn on
      any required accessibility options. For user agents supporting accessibility
      APIs implemented for a platform, any media controls need to be connected
      to that API. </p>
				<p>On self-contained products that do not support assistive technology, any
      menus in the content need to provide information in alternative formats
      (e.g., talking menus). Products with a separate remote control, or that
      are self-contained boxes, should ensure the physical design does not block
      access, and should make accessibility controls, such as the closed-caption
      toggle, as prominent as the volume or channel controls. </p>
				<div typeof="bibo:Chapter" about="#API-1" class="section">
					<h4 id="API-1"><strong class="req-handle">[API-1]</strong> The existence of alternative-content tracks for a media
          resource must be exposed to the user agent. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#API-2" class="section">
					<h4 id="API-2"><strong class="req-handle">[API-2]</strong> Since authors will need access to the alternative content
          tracks, the structure needs to be exposed to authors as well, which
          requires a dynamic interface. </h4>
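					<p class="note">A sketch of such a dynamic interface, using the text-track
          API as it has emerged in HTML5; tracks may appear after metadata
          loads or when added via markup or script:</p>
					<pre class="example">&lt;script>
  var video = document.querySelector('video');
  // Static view: the current list of alternative content tracks.
  console.log(video.textTracks.length + ' tracks available');
  // Dynamic view: notification when tracks appear or disappear.
  video.textTracks.addEventListener('addtrack', function (event) {
    console.log('new track: ' + event.track.kind);
  });
  video.textTracks.addEventListener('removetrack', function (event) {
    console.log('a track was removed');
  });
&lt;/script></pre>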
				</div>
				<div typeof="bibo:Chapter" about="#API-3" class="section">
					<h4 id="API-3"><strong class="req-handle">[API-3]</strong> Accessibility APIs need to gain access to alternative content
          tracks no matter whether those content tracks come from within a resource
          or are combined through markup on the page. </h4>
				</div>
			</div>
			<div id="requirements-on-the-use-of-the-viewport" typeof="bibo:Chapter" about="#requirements-on-the-use-of-the-viewport" class="section">
				<h3><span class="secno">4.7 </span> Requirements on the use of the viewport </h3>
				<p>The video viewport plays a particularly important role with respect to
      alternative-content technologies. Mostly it provides a bounding box for
      many of the visually represented alternative-content technologies (e.g.,
      captions, hierarchical navigation points, sign language), although some
      alternative content does not rely on a viewport (e.g., full transcripts,
      descriptive video). </p>
				<p>One key principle to remember when designing player ‘skins’ is that the
      lower third of the video may be needed for caption text. Caption consumers
      rely on being able to make fast eye movements between the captions and
      the video content. If the captions are in a non-standard place, this may
      cause viewers to miss information. The use of this area for things such
      as transport controls, while appealing aesthetically, may lead to accessibility
      conflicts. </p>
				<div typeof="bibo:Chapter" about="#VP-1" class="section">
					<h4 id="VP-1"><strong class="req-handle">[VP-1]</strong> It must be possible to deal with three different
          cases for the relation between the viewport size, the position of media
          and of alternative content:</h4>
					<ol class="list-in-req">
						<li>the alternative content's extent is specified in relation
              to the media viewport (e.g., picture-in-picture video, lower-third
              captions) </li>
						<li>the alternative content has its own independent extent,
              but is positioned in relation to the media viewport (e.g., captions
              above the audio, sign-language video above the audio, navigation
              points below the controls) </li>
						<li>the alternative content has its own independent extent and
              doesn't need to be rendered in any relation to the media viewport
              (e.g., text transcripts) </li>
					</ol>
					<p>If alternative content has a different height or width than the media
        content, then the user agent will reflow the (HTML) viewport. (UAAG 2.0
        3.1.4). </p>
					<p class="note">This may create a need to provide an author hint to the Web page
        when embedding alternate content in order to instruct the Web page how
        to render the content: to scale with the media resource, scale independently,
        or provide a position hint in relation to the media. On small devices
        where the video takes up the full viewport, only limited rendering choices
        may be possible, such that the UA may need to override author preferences. </p>
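					<p class="note">A page-level sketch of case 2, with hypothetical ids and
        file names: the sign-language video has its own extent but is positioned
        in relation to the media viewport (synchronization is handled
        separately):</p>
					<pre class="example">&lt;style>
  /* Case 2: independently sized sign-language video,
     positioned in relation to the media viewport. */
  #stage   { position: relative; width: 640px; }
  #signing { position: absolute; right: 0; bottom: 0; width: 160px; }
&lt;/style>
&lt;div id="stage">
  &lt;video src="movie.webm" width="640" controls>&lt;/video>
  &lt;video id="signing" src="movie-asl.webm" muted>&lt;/video>
&lt;/div></pre>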
				</div>
				<div typeof="bibo:Chapter" about="#VP-2" class="section">
					<h4 id="VP-2"><strong class="req-handle">[VP-2]</strong> The user can change the following characteristics
          of visually rendered text content, overriding those specified by the
          author or user-agent defaults (UAAG 2.0 3.6.1). (Note: this should
          include captions and any text rendered in relation to media elements,
          so as to be able to magnify and simplify rendered text):</h4>
					<ol class="list-in-req">
						<li>text scale (i.e., the general size of text), </li>
						<li>font family, and </li>
						<li>text color (i.e., foreground and background). </li>
					</ol>
					<p class="note">This should be achievable through UA configuration or even through
        something like a <a href="http://www.greasespot.net/">greasemonkey script</a> or <a href="http://www.mozilla.org/unix/customizing.html#usercss">user
        CSS</a> which can override styles dynamically in the browser. </p>
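					<p class="note">One concrete form such an override can take, assuming
        captions delivered as WebVTT text tracks (using the ::cue pseudo-element
        as since specified):</p>
					<pre class="example">/* User style sheet: enlarge and recolor rendered caption cues. */
video::cue {
  font-size: 1.5em;        /* text scale */
  font-family: sans-serif; /* font family */
  color: yellow;           /* foreground color */
  background-color: black; /* background color */
}</pre>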
				</div>
				<div typeof="bibo:Chapter" about="#VP-3" class="section">
					<h4 id="VP-3"><strong class="req-handle">[VP-3]</strong> Provide the user with the ability to adjust
          the size of the time-based media up to the full height or width of
          the containing viewport, with the ability to preserve aspect ratio
          and to adjust the size of the playback viewport to avoid cropping,
          within the scaling limitations imposed by the media itself. (UAAG 2.0
          4.9.9) </h4>
					<p class="note">This can be achieved by simply zooming into the Web page, which
      will automatically rescale the layout and reflow the content. </p>
				</div>
				<div typeof="bibo:Chapter" about="#VP-4" class="section">
					<h4 id="VP-4"><strong class="req-handle">[VP-4]</strong> Provide the user with the ability to control
          the contrast and brightness of the content within the playback viewport.
          (UAAG 2.0 4.9.11) </h4>
					<p class="note">This is a user-agent device requirement and should already be
        addressed in the UAAG. In live content, it may even be possible to adjust
        camera settings to achieve this requirement. It is also a   &quot;<em class="rfc2119" title="should">should</em>&quot; level
        requirement, since it does not account for limitations of various devices. </p>
				</div>
				<div typeof="bibo:Chapter" about="#VP-5" class="section">
					<h4 id="VP-5"><strong class="req-handle">[VP-5]</strong> Captions and subtitles traditionally occupy
          the lower third of the video, where also controls are also usually
          rendered. The user agent must avoiding overlapping of overlay content
          and controls on media resources. This must also happen if, for example,
          the controls are only visible on demand. </h4>
					<p class="note">If there are several types of overlapping overlays, the controls
        should stay on the bottom edge of the viewport and the others should
        be moved above this area, all stacked above each other. </p>
				</div>
			</div>
			<div id="requirements-on-the-parallel-use-of-alternate-content-on-potentially-------multiple-devices-in-parallel" typeof="bibo:Chapter" about="#requirements-on-the-parallel-use-of-alternate-content-on-potentially-------multiple-devices-in-parallel" class="section">
				<h3><span class="secno">4.8 </span> Requirements on the parallel use of alternate content on potentially
      multiple devices in parallel </h3>
				<p>Multiple user devices must be directly addressable. It must be assumed
      that many users will have multiple video displays and/or multiple audio-output
      devices attached to an individual computer, or addressable via LAN. It
      must be possible to configure certain types of media for presentation on
      specific devices, and these configuration settings must be readily overridable
      on a case-by-case basis by users. </p>
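				<p class="note">An illustrative sketch only: the Audio Output Devices API,
      as it has since emerged, lets a script route a media element's audio to a
      chosen output device; the element id and device choice below are
      hypothetical:</p>
				<pre class="example">&lt;script>
  // Route the described-audio element to a second audio-output
  // device, e.g. a headset, leaving the main mix on the default.
  navigator.mediaDevices.enumerateDevices().then(function (devices) {
    var outputs = devices.filter(function (d) {
      return d.kind === 'audiooutput';
    });
    if (outputs.length > 1) {
      document.querySelector('#description-audio')
              .setSinkId(outputs[1].deviceId);
    }
  });
&lt;/script></pre>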
				<p>(A request to the UAAG on clarifications to a number of these points was
      made, and a detailed response was provided. The response requires review
      and integration into this document, but can be found today in the <a href="http://lists.w3.org/Archives/Public/public-html-a11y/2010Jul/0108.html">22 July 2010 message on this topic</a>). </p>
				<p>Systems supporting multiple devices for accessibility must: </p>
				<div typeof="bibo:Chapter" about="#MD-1" class="section">
					<h4 id="MD-1"><strong class="req-handle">[MD-1]</strong> Support a platform-accessibility architecture relevant to
          the operating environment. (UAAG 2.0 2.1.1) </h4>
				</div>
				<div typeof="bibo:Chapter" about="#MD-2" class="section">
					<h4 id="MD-2"><strong class="req-handle">[MD-2]</strong> Ensure accessibility of all user-interface components including
          the user interface, rendered content, and alternative content; make
          available the name, role, state, value, and description via a platform-accessibility
          architecture. (UAAG 2.0 2.1.2) </h4>
				</div>
				<div typeof="bibo:Chapter" about="#MD-3" class="section">
					<h4 id="MD-3"><strong class="req-handle">[MD-3]</strong> If a feature is not supported by the accessibility architecture(s),
          provide an equivalent feature that does support the accessibility architecture(s).
          Document the equivalent feature in the conformance claim. (UAAG 2.0
          2.1.3) </h4>
				</div>
				<div typeof="bibo:Chapter" about="#MD-4" class="section">
					<h4 id="MD-4"><strong class="req-handle">[MD-4]</strong> If the user agent implements one or more DOMs, they must
          be made programmatically available to assistive technologies. (UAAG
          2.0 2.1.4) This assumes the video element will write to the <abbr title="Document Object Model">DOM</abbr>. </h4>
				</div>
				<div typeof="bibo:Chapter" about="#MD-5" class="section">
					<h4 id="MD-5"><strong class="req-handle">[MD-5]</strong> If the user can modify the state or value of a piece of content
          through the user interface (e.g., by checking a box or editing a text
          area), the same degree of write access is available programmatically
          (UAAG 2.0 2.1.5). </h4>
				</div>
				<div typeof="bibo:Chapter" about="#MD-6" class="section">
					<h4 id="MD-6"><strong class="req-handle">[MD-6]</strong> If any of the following properties are supported by the accessibility-platform
          architecture, make the properties available to the accessibility-platform
          architecture (UAAG 2.0 2.1.6):</h4>
					<ol class="list-in-req">
						<li>the bounding dimensions and coordinates of rendered graphical
              objects; </li>
						<li>font family; </li>
						<li>font size; </li>
						<li>text foreground color; </li>
						<li>text background color; </li>
						<li>change state/value notifications. </li>
					</ol>
				</div>
				<div typeof="bibo:Chapter" about="#MD-7" class="section">
					<h4 id="MD-7"><strong class="req-handle">[MD-7]</strong> Ensure that programmatic exchanges between APIs proceed at
          a rate such that users do not perceive a delay. (UAAG 2.0 2.1.7). </h4>
				</div>
			</div>
		</div>
		<div class="appendix section" id="acknowledgements" typeof="bibo:Chapter" about="#acknowledgements">
			
<h2><span class="secno">A. </span>Acknowledgements</h2>
			<p>The following people contributed to the development of this document.</p>
			<div id="ack_group" typeof="bibo:Chapter" about="#ack_group" class="section">
				<h3><span class="secno">A.1 </span>Participants in the PFWG at the time of publication</h3>
				<ol>
					<li>David Bolter (Mozilla) </li>
					<li>Sally Cain (Royal National Institute of Blind People)</li>
					<li>Michael Cooper (<acronym title="World Wide Web Consortium">W3C</acronym>/<acronym title="Massachusetts Institute of Technology">MIT</acronym>)</li>
					<li>James Craig (Apple Inc.) </li>
					<li>Steve Faulkner (Invited Expert, The Paciello Group) </li>
					<li>Geoff Freed (Invited Expert, NCAM)</li>
					<li>Jon Gunderson (Invited Expert, UIUC)</li>
					<li>Markus Gylling (DAISY Consortium)</li>
					<li>Sean Hayes (Microsoft Corporation)</li>
					<li>Kenny Johar (Vision Australia) </li>
					<li>Matthew King (IBM Corporation)</li>
					<li>Gez Lemon (International Webmasters Association / HTML Writers Guild (IWA-HWG))</li>
					<li>Thomas Logan (HiSoftware Inc.)</li>
					<li>William Loughborough (Invited Expert)</li>
					<li>Shane McCarron (Invited Expert, Aptest)</li>
					<li>Charles McCathieNevile (Opera Software)</li>
					<li>Mary Jo Mueller (IBM Corporation)</li>
					<li>James Nurthen (Oracle Corporation) </li>
					<li>Joshue O'Connor (Invited Expert) </li>
					<li>Artur Ortega (Yahoo!, Inc.)</li>
					<li>Sarah Pulis (Media Access Australia)</li>
					<li>Gregory Rosmaita (Invited Expert)</li>
					<li>Janina Sajka (Invited Expert, The Linux Foundation)</li>
					<li>Joseph Scheuhammer (Invited Expert, Inclusive Design Research Centre, OCAD University) </li>
					<li>Stefan Schnabel (SAP AG) </li>
					<li>Richard Schwerdtfeger (IBM Corporation)</li>
					<li>Lisa Seeman (Invited Expert, Aqueous) </li>
					<li>Cynthia Shelly (Microsoft Corporation) </li>
					<li>Andi Snow-Weaver (IBM Corporation)</li>
					<li>Gregg Vanderheiden (Invited Expert, Trace)</li>
					<li>Léonie Watson (Invited Expert, Nomensa)</li>
					<li>Gottfried Zimmermann (Invited Expert, Access Technologies Group)</li>
				</ol>
			</div>
			<div id="ack_others" typeof="bibo:Chapter" about="#ack_others" class="section">
				<h3><span class="secno">A.2 </span>Other previously active PFWG participants and contributors</h3>
				<p> Jim Allan (TSB), Simon Bates, Chris Blouch (AOL), Judy Brewer (<acronym title="World Wide Web Consortium">W3C</acronym>/<acronym title="Massachusetts Institute of Technology">MIT</acronym>), Ben Caldwell (Trace), Charles Chen (Google, Inc.), Christian Cohrs, Dimitar Denev (Fraunhofer Gesellschaft), Donald Evans (AOL), Kentarou Fukuda (IBM Corporation), Becky Gibson (IBM), Alfred S. Gilman, Andres Gonzalez (Adobe Systems Inc.), Georgios Grigoriadis (SAP AG), Jeff Grimes (Oracle), Barbara Hartel, John Hrvatin (Microsoft Corporation), Masahiko Kaneko (Microsoft Corporation), Earl Johnson (Sun), Jael Kurz, Diego La Monica (International Webmasters Association / HTML Writers Guild (IWA-HWG)), Aaron Leventhal (IBM Corporation), Alex Li (SAP), Linda Mao (Microsoft), Anders Markussen (Opera Software), Matthew May (Adobe Systems Inc.), Lisa Pappas (Society for Technical Communication (STC)), Dave Pawson (RNIB), David Poehlman, Simon Pieters (Opera Software), T.V. Raman (Google, Inc.), Tony Ross (Microsoft Corporation), Martin Schaus (SAP AG), Marc Silbey (Microsoft Corporation), Henri Sivonen (Mozilla), Henny Swan (Opera Software), Vitaly Sourikov, Mike Squillace (IBM), Ryan Williams (Oracle), Tom Wlodkowski.</p>
			</div>
			<div id="ack_funders" typeof="bibo:Chapter" about="#ack_funders" class="section">
				<h3><span class="secno">A.3 </span>Enabling funders</h3>
				<p>This publication has been funded in part with Federal funds from the U.S. Department of Education, National Institute on Disability and Rehabilitation Research (NIDRR) under contract number ED-OSE-10-C-0067. The content of this publication does not necessarily reflect the views or policies of the U.S. Department of Education, nor does mention of trade names, commercial products, or organizations imply endorsement by the U.S. Government.</p>
			</div>
		</div>
	<div id="references" class="appendix section" typeof="bibo:Chapter" about="#references">
<h2><span class="secno">B. </span>References</h2><div id="normative-references" typeof="bibo:Chapter" about="#normative-references" class="section"><h3><span class="secno">B.1 </span>Normative references</h3><p>No normative references.</p></div><div id="informative-references" typeof="bibo:Chapter" about="#informative-references" class="section"><h3><span class="secno">B.2 </span>Informative references</h3><p>No informative references.</p></div></div></body></html>