<?xml version='1.0' encoding='utf-8' standalone="yes"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
                      "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

<html xml:lang="en-US" xmlns="http://www.w3.org/1999/xhtml" lang="en-US">
 <head>
   <title>Multimedia Vocabularies on the Semantic Web</title>
   <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
   <link rel="Home" href="http://www.w3.org/2005/Incubator/mmsem/Overview.html" />
   <link rel="stylesheet" type="text/css" href="http://www.w3.org/StyleSheets/TR/W3C-XGR"/>
 </head>

 <body>
   <div id="headings" class="head">
    <p>      
       <a href="http://www.w3.org/">
        <img height="48" width="72" alt="W3C" src="http://www.w3.org/Icons/w3c_home"/>
       </a>
       <a href="http://www.w3.org/2005/Incubator/XGR/">
        <img height="48" width="160" alt="W3C Incubator Report" src="http://www.w3.org/2005/Incubator/images/XGR"/>
       </a>
    </p>

   <h1>Multimedia Vocabularies on the Semantic Web</h1>
   <h2><a id="w3c-doctype" name="w3c-doctype"/>W3C Incubator Group Report 24 July 2007</h2>
   
   <dl>
    <dt>This version: </dt>
    <dd>
      <a href="http://www.w3.org/2005/Incubator/mmsem/XGR-vocabularies-20070724/">http://www.w3.org/2005/Incubator/mmsem/XGR-vocabularies-20070724/</a></dd>
    <dt>Latest version: </dt>
    <dd>
      <a href="http://www.w3.org/2005/Incubator/mmsem/XGR-vocabularies/">http://www.w3.org/2005/Incubator/mmsem/XGR-vocabularies/</a></dd>
    <dt>Previous version: </dt>
    <dd>
     This is the first public version.
    </dd>

    <dt>Editor: </dt>
    <dd><a href="mailto:michael.hausenblas@joanneum.at">Michael Hausenblas</a>, JOANNEUM RESEARCH</dd>
    
    <dt>Contributors: </dt>
    <dd><a href="http://mmit.informatik.uni-oldenburg.de/en">Susanne Boll</a>, University of Oldenburg</dd>
    <dd><a href="http://tobias.buerger.googlepages.com/">Tobias B&#252;rger</a>, Digital Enterprise Research Institute (DERI)</dd>
    <dd><a href="http://www.iua.upf.es/~ocelma/">Oscar Celma</a>, Music Technology Group, Pompeu Fabra University</dd>
    <dd><a href="http://www.mindswap.org/~chris/">Christian Halaschek-Wiener</a>, University of Maryland</dd>
    <dd><a href="mailto:erik.mannens@ugent.be">Erik Mannens</a>, IBBT-MMLab, University of Ghent</dd>
    <dd><a href="http://www.cwi.nl/~troncy/">Rapha&#235;l Troncy</a>, Center for Mathematics and Computer Science (CWI Amsterdam)</dd>

    <dt>&#160; </dt>
    <dd>Also see <a href="#acknowledgments">Acknowledgements</a>.</dd>
   </dl>

   <p class="copyright">
     <a href="http://www.w3.org/Consortium/Legal/ipr-notice#Copyright">Copyright</a> &#169;  2007 
     <a href="http://www.w3.org/"><acronym title="World Wide Web Consortium">W3C</acronym></a><sup>&#174;</sup> 
     (<a href="http://www.csail.mit.edu/"><acronym title="Massachusetts Institute of Technology">MIT</acronym></a>, 
     <a href="http://www.ercim.org/"><acronym title="European Research Consortium for Informatics and Mathematics">ERCIM</acronym></a>, 
     <a href="http://www.keio.ac.jp/">Keio</a>), All Rights Reserved.
     W3C <a href="http://www.w3.org/Consortium/Legal/ipr-notice#Legal_Disclaimer">liability</a>, 
     <a href="http://www.w3.org/Consortium/Legal/ipr-notice#W3C_Trademarks">trademark</a>
     and <a href="http://www.w3.org/Consortium/Legal/copyright-documents">document use</a> rules apply.
   </p>
   </div>
   <hr />

   <h2>
     <a id="abstract" name="abstract">
    Abstract
     </a>
   </h2>

   <p>
This document gives an overview of the state of the art in multimedia metadata formats. First, vocabularies of practical relevance for 
developers of Semantic Web applications are listed according to their modality scope. The second part of this document focuses
on the integration of the multimedia vocabularies into the Semantic Web, that is to say, formal representations of the vocabularies are discussed.
   </p>

   <h2>
      <a id="status" name="status">Status of This Document</a>
   </h2>
   <p>
<em>
This section describes the status of this document at the time of its publication. Other documents may supersede this document.
A list of <a href="http://www.w3.org/2005/Incubator/XGR/">Final Incubator Group Reports</a> is available. 
See also the <a href="http://www.w3.org/TR/">W3C technical reports index</a> at http://www.w3.org/TR/.
</em>
   </p>

   <p>
This document was developed by the W3C <a href="http://www.w3.org/2005/Incubator/mmsem/">Multimedia Semantics Incubator Group</a>,
part of the <a href="http://www.w3.org/2005/Incubator/">W3C Incubator Activity</a>.
   </p>

   <p>
Publication of this document by W3C as part of the <a href="http://www.w3.org/2005/Incubator/">W3C Incubator Activity</a> indicates
no endorsement of its content by W3C, nor that W3C has, is, or will be allocating any resources to the issues addressed by it. 
Participation in Incubator Groups and publication of Incubator Group Reports at the W3C site are benefits of 
<a href="http://www.w3.org/Consortium/join">W3C Membership</a>.
   </p>

   <p>Incubator Groups have as a <a 
href="http://www.w3.org/2005/Incubator/procedures.html#Patent">goal</a> 
to produce work that can be implemented on a Royalty Free basis, as 
defined in the W3C Patent Policy. Participants in this Incubator Group 
have made no statements about whether they will offer licenses according 
to the <a 
href="http://www.w3.org/Consortium/Patent-Policy-20030520.html#sec-Requirements">licensing 
requirements of the W3C Patent Policy</a> for portions of this Incubator 
Group Report that are subsequently incorporated in a W3C Recommendation.
   </p>

   <h2>
     <a id="scope" name="scope">
    Scope
     </a>
   </h2>
   <p>
This document targets Semantic Web developers who deal with multimedia. No prerequisites are assumed. The target audience
ranges from prosumers to professionals working with audio-visual archives, libraries, media production, and the broadcast industry.
   </p>
    
   <p>
After reading this document, readers may also be interested in related issues as presented in the
<a href="http://www.w3.org/2005/Incubator/mmsem/wiki/Tools_and_Resources">tools and resources</a> document.     
   </p>
    
   <p>
     <em>
      Note: A living version of this document is maintained at the <a href="http://www.w3.org/2005/Incubator/mmsem/">Multimedia Semantics Incubator Group</a>
            Wiki page: 
      <a href="http://www.w3.org/2005/Incubator/mmsem/wiki/Vocabularies">
      http://www.w3.org/2005/Incubator/mmsem/wiki/Vocabularies
      </a>.
     </em>    
   </p>

   <h2>
      <a id="objectives" name="objectives">Objectives</a>
   </h2>
   <p>
     This document aims at:
   </p>
   <ul>
     <li>Giving an overview of the state of the art in multimedia metadata formats and vocabularies, and</li>
     <li>Summarizing formalizations of multimedia metadata formats to be used on the Semantic Web.</li>
   </ul>
    
   <p>
Discussion of this document is invited on the public mailing list <a href="mailto:public-xg-mmsem@w3.org">public-xg-mmsem@w3.org</a>
(<a href="http://lists.w3.org/Archives/Public/public-xg-mmsem/">public archives</a>).
Public comments should include "[MMSEM-Vocabulary]" as subject prefix.
   </p>
   <hr />
    
   <div class="toc">
   <h2 class="notoc">
    <a id="contents" name="contents">Table of Contents</a>
   </h2>

   <ul id="toc" class="toc">
    <li class="tocline"><a href="#introduction"><b>1. Introduction</b></a>
     <ul class="toc">
      <li class="tocline"><a href="#namespaces">1.1 Declaration of Namespaces</a></li>
      <li class="tocline"><a href="#related">1.2 Related Pages</a></li>
     </ul>
    </li>
    <li class="tocline"><a href="#types"><b>2. Types of Multimedia Metadata</b></a></li>
    <li class="tocline"><a href="#existing"><b>3. Existing Multimedia Metadata Formats</b></a>
     <ul class="toc">
      <li class="tocline"><a href="#existing-SI">3.1 Multimedia Metadata Formats For Describing Still Images</a></li>
      <li class="tocline"><a href="#existing-A">3.2 Multimedia Metadata Formats For Describing  Audio Content</a></li>
      <li class="tocline"><a href="#existing-AV">3.3 Multimedia Metadata Formats For Describing Audio-Visual Content</a></li>
      <li class="tocline"><a href="#existing-MP">3.4 Multimedia Metadata Formats For Describing Multimedia Presentations</a></li>  
      <li class="tocline"><a href="#existing-WF">3.5 Multimedia Metadata Formats For Describing Specific Domains Or Workflows</a></li>  
      <li class="tocline"><a href="#existing-RE">3.6 Other Multimedia Metadata Related Formats</a></li> 
     </ul>
    </li>
    <li class="tocline"><a href="#formal"><b>4. Multimedia Ontologies</b></a></li>
    <li class="tocline"><a href="#references"><b>References</b></a></li>
    <li class="tocline"><a href="#acknowledgments"><b>Acknowledgments</b></a></li>
   </ul>
  </div>

  <h2>
   <a name="introduction" id="introduction">1. Introduction</a>
  </h2>
  <p>
   This document gives an overview of the state of the art in multimedia metadata formats.
   A special focus is set on usability with respect to the Semantic Web, that is to say, formal representations of existing vocabularies.
  </p>

  <h3>
   <a name="namespaces">1.1 Declaration of Namespaces</a>
  </h3>

  <p>
The syntax for all RDF code snippets in this document is <a href="http://www.w3.org/DesignIssues/Notation3">N3</a>;
the namespaces used herein are listed in <a href="#nsprefs">Table 1-1</a>. Note that the choice of a namespace prefix is arbitrary and hence not
significant semantically [<cite><a href="#XML-NS">XML NS</a></cite>].
  </p>

  <a name="nsprefs" id="nsprefs"></a>
  <table summary="Namespace prefixes usage in this document" border="1">
   <caption>Table 1-1. XML namespaces used in this document.</caption>
   <tbody>
    <tr>
     <th align="left" rowspan="1" colspan="1">Prefix</th>
     <th align="left" rowspan="1" colspan="1">URI</th>
    </tr>
    <tr>
     <td rowspan="1" colspan="1">xsd</td>
     <td rowspan="1" colspan="1">&lt;http://www.w3.org/2001/XMLSchema#&gt;</td>
    </tr>
    <tr>
     <td rowspan="1" colspan="1">rdf</td>
     <td rowspan="1" colspan="1">&lt;http://www.w3.org/1999/02/22-rdf-syntax-ns#&gt;</td>
    </tr>
    <tr>
     <td rowspan="1" colspan="1">rdfs</td>
     <td rowspan="1" colspan="1">&lt;http://www.w3.org/2000/01/rdf-schema#&gt;</td>
    </tr>
    <tr>
     <td rowspan="1" colspan="1">owl</td>
     <td rowspan="1" colspan="1">&lt;http://www.w3.org/2002/07/owl#&gt;</td>
    </tr>
    <tr>
     <td rowspan="1" colspan="1">dc</td>
     <td rowspan="1" colspan="1">&lt;http://purl.org/dc/elements/1.1/&gt;</td>
    </tr>
	  </tbody>
  </table>
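  <p>
For illustration, the prefixes of <a href="#nsprefs">Table 1-1</a> are declared in N3 as shown below. The resource
and the literal values in this snippet are purely illustrative:
  </p>
  <pre>
@prefix dc:  &lt;http://purl.org/dc/elements/1.1/&gt; .
@prefix xsd: &lt;http://www.w3.org/2001/XMLSchema#&gt; .

&lt;http://example.org/photo.jpg&gt;
    dc:title   "A sample photograph" ;
    dc:creator "Jane Doe" ;
    dc:date    "2007-07-24"^^xsd:date .
  </pre>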
    
  <h3>
   <a name="related" id="related">1.2 Related Pages</a>
  </h3>

  <p>
Complementary and related resources can be obtained at the following pages:
  </p>
  <ul>
   <li>The <a href="http://www.w3.org/2005/Incubator/mmsem/wiki/Tools_and_Resources">Tools and Resources</a> Wiki page of the MMSEM-XG.</li>
   <li>The <a href="http://www.w3.org/2005/Incubator/mmsem/XGR-mpeg7/">MPEG-7 and the Semantic Web</a> document of the MMSEM-XG.</li>
  </ul>  
  
  <h2>
   <a name="types" id="types">2. Types of Multimedia Metadata</a>
  </h2>
  <p>
   Based on [<a href="#Smit06">Smith et al., 2006</a>], the vocabularies in this document are described in terms of the following two tables.
   <a href="#tab-discriminators">Table 2-1</a> lists the discriminators used, and <a href="#tab-categories">Table 2-2</a> the categories. 
   The example column in both tables uses the <em>NewsML-G2</em> vocabulary.
  </p>

  <p>Note that a discriminator is defined in terms of its possible values, given as a comma-separated list. The possible values in
  <a href="#tab-discriminators">Table 2-1</a> are exhaustive: the range of a discriminator is fully defined by the content 
  of the corresponding column.
  A category, on the other hand, is a comma-separated list of items whose possible values are non-exhaustive. The range of an item of a category
  in <a href="#tab-categories">Table 2-2</a> is open; examples are listed in the content of the corresponding column.
  </p>

  <p>Description of the discriminators:</p>
  <ul>
    <li>Representation: the primary (official) serialization format of the multimedia standard.</li> 
    <li>Content Type: the type of media a given multimedia standard is able to describe.</li>
  </ul> 
 
  <a name="tab-discriminators" id="tab-discriminators"></a>
  <table summary="Discriminators for multimedia metadata standards used in this document" border="1">
   <caption>Table 2-1. Discriminators for multimedia metadata standards used in this document.</caption>
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Discriminator</th>
		  <th align="left" rowspan="1" colspan="1">Permitted Values</th>
		  <th align="left" rowspan="1" colspan="1">Example</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1">Representation</td>
		  <td rowspan="1" colspan="1">non-XML (nX), XML (X), RDF (R), OWL (O)</td>
		  <td rowspan="1" colspan="1">X, R</td>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1">Content Type</td>
		  <td rowspan="1" colspan="1">still-image (SI), video (V), audio (A), text (T), general purpose (G)</td>
		  <td rowspan="1" colspan="1">G, T, SI, V</td>
		 </tr>
	  </tbody>
  </table>
  
  <p>Description of the categories:</p>
  <ul>
    <li>Workflow: understood in terms of the Canonical Processes of Media Production, see [<a href="#Hardman05">Hardman, 2005</a>].</li> 
    <li>Domain: the main domain in which a multimedia vocabulary is intended to be used.</li>
    <li>Industry: the main branch of productive (commercial) usage.</li>
  </ul>
  
  <a name="tab-categories" id="tab-categories"></a>
  <table summary="Categories for multimedia metadata standards used in this document" border="1">
  	<caption>Table 2-2. Categories for multimedia metadata standards used in this document.</caption>
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Category</th>
		  <th align="left" rowspan="1" colspan="1">Items</th>
		  <th align="left" rowspan="1" colspan="1">Example</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1">Workflow</td>
		  <td rowspan="1" colspan="1">premeditation, production, publish, etc.</td>
		  <td rowspan="1" colspan="1">publish</td>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1">Domain</td>
		  <td rowspan="1" colspan="1">entertainment, news, sports, etc.</td>
		  <td rowspan="1" colspan="1">news</td>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1">Industry</td>
		  <td rowspan="1" colspan="1">broadcast, music, publishing, etc.</td>
		  <td rowspan="1" colspan="1">broadcast</td>
		 </tr>
	  </tbody>
  </table>
  
  
  <h2>
   <a name="existing" id="existing">3. Existing Multimedia Metadata Formats</a>
  </h2>
 
  <p>
     This section introduces common existing metadata formats that are of importance for the description and usage of multimedia content. 
     Each vocabulary description starts with a table listing the responsible party, the specification (if available), and the applicable discriminators and categories.
     The description of each vocabulary should give the reader an idea of its capabilities and limitations. 
  </p>
  
  <h3>
   <a name="existing-SI">3.1 Multimedia Metadata Formats For Describing Still Images</a>
  </h3>
  <p>
  	In the following, metadata formats are listed that deal with the description of still image content.
  </p>
  
  <h4>
   <a name="existing-SI-VRA">3.1.1. Visual Resource Association (VRA)</a>
  </h4>
  <a name="tab-sum-existing-SI-VRA" id="tab-sum-existing-SI-VRA"></a>
  <table summary="Summary for VRA" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Responsible</th>
		  <th align="left" rowspan="1" colspan="1">Specification</th>
		  <th align="left" rowspan="1" colspan="1">Formal Representation</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1"><a href="http://www.vraweb.org/">http://www.vraweb.org/</a></td>
		  <td rowspan="1" colspan="1">[<a href="#VraCore">VRA Core</a>]</td>
		  <td rowspan="1" colspan="1"><a href="#formal-VRA">VRA - RDF/OWL</a></td>
		 </tr>
	  </tbody>
  </table>
  <br />
  <a name="tab-discat-existing-SI-VRA" id="tab-discat-existing-SI-VRA"></a>
  <table summary="Discriminators and categories for VRA" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Representation</th>
		  <th align="left" rowspan="1" colspan="1">Content Type</th>
		  <th align="left" rowspan="1" colspan="1">Workflow</th>
  		  <th align="left" rowspan="1" colspan="1">Domain</th>
   		  <th align="left" rowspan="1" colspan="1">Industry</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1">nX</td>
		  <td rowspan="1" colspan="1">SI</td>
		  <td rowspan="1" colspan="1">publish</td>
		  <td rowspan="1" colspan="1">culture</td>
		  <td rowspan="1" colspan="1">archives</td>
		 </tr>
	  </tbody>
  </table>
  <p>
      The Visual Resource Association (VRA) is an organization of over 600 active members, including many American universities, 
      galleries, and art institutes. These often maintain large collections of (annotated) slides, images, and other representations of works of art. 
      The VRA has defined the VRA Core Categories to describe such collections. The VRA Core [<a href="#VraCore">VRA Core</a>] is a set of metadata elements used to describe works of 
      visual culture as well as the images that represent them. 
  </p>
  <p>
     Whereas Dublin Core [<a href="#DublinCore">Dublin Core</a>] specifies a small and commonly used vocabulary for on-line resources in general, VRA Core defines a similar set targeted
     especially at visual resources. Dublin Core and VRA Core both refer to the terms in their vocabularies as elements, and both use qualifiers to refine 
     elements in a similar way. The more general elements of VRA Core have direct mappings to comparable fields in Dublin Core. Furthermore, both vocabularies 
     are defined in a way that abstracts from implementation issues and underlying serialization languages. 
  </p>
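  <p>
The correspondence between the two vocabularies can be sketched in N3 using <code>rdfs:subPropertyOf</code>; note that the
<code>vra:</code> namespace and the element names below are hypothetical and serve only to illustrate the idea of such a mapping:
  </p>
  <pre>
@prefix dc:   &lt;http://purl.org/dc/elements/1.1/&gt; .
@prefix rdfs: &lt;http://www.w3.org/2000/01/rdf-schema#&gt; .
@prefix vra:  &lt;http://example.org/vra#&gt; .  # hypothetical namespace

# general VRA Core elements mapped to their Dublin Core counterparts
vra:title   rdfs:subPropertyOf dc:title .
vra:creator rdfs:subPropertyOf dc:creator .
  </pre>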
  
  <h4>
   <a name="existing-SI-Exif">3.1.2 Exchangeable image file format (Exif)</a>
  </h4>
  <a name="tab-sum-existing-SI-Exif" id="tab-sum-existing-SI-Exif"></a>
  <table summary="Summary for Exif" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Responsible</th>
		  <th align="left" rowspan="1" colspan="1">Specification</th>
		  <th align="left" rowspan="1" colspan="1">Formal Representation</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1"><a href="http://www.jeita.or.jp/english/">http://www.jeita.or.jp/english/</a></td>
		  <td rowspan="1" colspan="1">[<a href="#Exif">Exif</a>]</td>
		  <td rowspan="1" colspan="1"><a href="#formal-Exif">Exif - RDF/OWL</a></td>
		 </tr>
	  </tbody>
  </table>
  <br />
  <a name="tab-discat-existing-SI-Exif" id="tab-discat-existing-SI-Exif"></a>
  <table summary="Discriminators and categories for Exif" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Representation</th>
		  <th align="left" rowspan="1" colspan="1">Content Type</th>
		  <th align="left" rowspan="1" colspan="1">Workflow</th>
  		  <th align="left" rowspan="1" colspan="1">Domain</th>
   		  <th align="left" rowspan="1" colspan="1">Industry</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1">nX</td>
		  <td rowspan="1" colspan="1">SI</td>
		  <td rowspan="1" colspan="1">capture-distribute</td>
		  <td rowspan="1" colspan="1">generic</td>
		  <td rowspan="1" colspan="1">digital camera</td>
		 </tr>
	  </tbody>
  </table>
  <p>
One of the most commonly used metadata formats for digital images today is the Exchangeable Image File Format (Exif) [<a href="#Exif">Exif</a>]. 
The standard "specifies the formats to be used for images and sounds, and tags in digital still cameras and for other systems handling the image
and sound files recorded by digital cameras." The so-called Exif header carries the metadata for the captured image or sound. 
  </p>
  <p>
The metadata tags which the Exif standard provides cover the capture of the image and the context of the capturing.
This includes metadata related to the image data structure (e.g., height, width, orientation), capturing information (e.g., rotation, exposure time, flash), 
recording offset (e.g., image data location, bytes per compressed strip), image data characteristics (e.g., transfer function, color space transformation), 
as well as general tags (e.g., image title, copyright holder, manufacturer). Newer cameras also write GPS information into the header. 
Lastly, we point out that the metadata elements pertaining to the image are stored in the image file header and are identified by unique tags, 
which serve as element identifiers.
  </p>
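  <p>
A sketch of how such Exif header fields might be expressed in N3 is given below; the <code>exif:</code> namespace and the
property names are hypothetical and do not refer to an existing formalization:
  </p>
  <pre>
@prefix exif: &lt;http://example.org/exif#&gt; .  # hypothetical namespace
@prefix xsd:  &lt;http://www.w3.org/2001/XMLSchema#&gt; .

&lt;http://example.org/photo.jpg&gt;
    exif:imageWidth   "2048"^^xsd:integer ;
    exif:imageLength  "1536"^^xsd:integer ;
    exif:exposureTime "1/250" ;
    exif:copyright    "Copyright 2007, Jane Doe" .
  </pre>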

  <h4>
   <a name="existing-SI-NISOZ3987">3.1.3 NISO Z39.87</a>
  </h4>
  <a name="tab-sum-existing-SI-NISOZ3987" id="tab-sum-existing-SI-NISOZ3987"></a>
  <table summary="Summary for Z39.87" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Responsible</th>
		  <th align="left" rowspan="1" colspan="1">Specification</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1"><a href="http://www.niso.org/">http://www.niso.org/</a></td>
		  <td rowspan="1" colspan="1">[<a href="#Z3987">NISO Z39.87</a>]</td>
		 </tr>
	  </tbody>
  </table>
  <br />
  <a name="tab-discat-existing-SI-NISOZ3987" id="tab-discat-existing-SI-NISOZ3987"></a>
  <table summary="Discriminators and categories for Z39.87" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Representation</th>
		  <th align="left" rowspan="1" colspan="1">Content Type</th>
		  <th align="left" rowspan="1" colspan="1">Workflow</th>
  		  <th align="left" rowspan="1" colspan="1">Domain</th>
   		  <th align="left" rowspan="1" colspan="1">Industry</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1">X</td>
		  <td rowspan="1" colspan="1">SI</td>
		  <td rowspan="1" colspan="1">production</td>
		  <td rowspan="1" colspan="1">interoperability</td>
		  <td rowspan="1" colspan="1">image creation</td>
		 </tr>
	  </tbody>
  </table>
  <p>
The NISO Z39.87 standard [<a href="#Z3987">NISO Z39.87</a>] defines a set of metadata elements for raster digital images to enable users to develop,
exchange, and interpret digital image files.	
  </p>
  <p>
Tags cover a wide spectrum of metadata: basic image parameters, image creation, imaging performance assessment, and history.
This standard is intended to facilitate the development of applications to validate, manage, migrate, and otherwise process images of enduring value.
Such applications are viewed as essential components of large-scale digital repositories and digital asset management systems. 
  </p>
  <p>
The dictionary has been designed to facilitate interoperability between systems, services, and software as well as to support the long-term management 
and continuing access to digital image collections.
  </p>
  
  <h4>
   <a name="existing-SI-DIG35">3.1.4 DIG35</a>
  </h4>
  <a name="tab-sum-existing-SI-DIG35" id="tab-sum-existing-SI-DIG35"></a>
  <table summary="Summary for DIG35" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Responsible</th>
		  <th align="left" rowspan="1" colspan="1">Specification</th>
		  <th align="left" rowspan="1" colspan="1">Formal Representation</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1"><a href="http://www.i3a.org/">http://www.i3a.org/</a></td>
		  <td rowspan="1" colspan="1">[<a href="#DIG35">DIG35</a>]</td>
		  <td rowspan="1" colspan="1"><a href="#formal-DIG35">DIG35 - RDF/OWL</a></td>
		 </tr>
	  </tbody>
  </table>
  <br />
  <a name="tab-discat-existing-SI-DIG35" id="tab-discat-existing-SI-DIG35"></a>
  <table summary="Discriminators and categories for DIG35" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Representation</th>
		  <th align="left" rowspan="1" colspan="1">Content Type</th>
		  <th align="left" rowspan="1" colspan="1">Workflow</th>
  		  <th align="left" rowspan="1" colspan="1">Domain</th>
   		  <th align="left" rowspan="1" colspan="1">Industry</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1">X</td>
		  <td rowspan="1" colspan="1">SI</td>
		  <td rowspan="1" colspan="1">publish</td>
		  <td rowspan="1" colspan="1">archives</td>
		  <td rowspan="1" colspan="1">consumer</td>
		 </tr>
	  </tbody>
  </table>
  <p>
The DIG35 specification [<a href="#DIG35">DIG35</a>] includes a "standard set of metadata for digital images" which promotes interoperability and extensibility, 
as well as a "uniform underlying construct to support interoperability of metadata between various digital imaging devices."
  </p>
  <p>
The metadata properties are encoded within an XML Schema and cover:
  </p>
  <ul>
    <li>Basic Image Parameter (a general-purpose metadata standard);</li>
    <li>Image Creation (e.g. the camera and lens information);</li>
    <li>Content Description (who, what, when and where);</li>
    <li>History (partial information about how the image got to the present state);</li>
    <li>Intellectual Property Rights;</li>
    <li>Fundamental Metadata Types and Fields (defining the format of the fields used in all metadata blocks).</li>
  </ul>
  <p>
   <em>Note:</em> DIG35 Metadata Specification Version 1.1 is not free ($35). 
  </p>  
  
  <h4>
   <a name="existing-SI-PhotoRDF">3.1.5 PhotoRDF</a>
  </h4>
  <a name="tab-sum-existing-SI-PhotoRDF" id="tab-sum-existing-SI-PhotoRDF"></a>
  <table summary="Summary for PhotoRDF" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Responsible</th>
		  <th align="left" rowspan="1" colspan="1">Specification</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1"><a href="http://www.w3.org/">http://www.w3.org/</a></td>
		  <td rowspan="1" colspan="1">[<a href="#PhotoRDF">PhotoRDF</a>]</td>
		 </tr>
	  </tbody>
  </table>
  <br />
  <a name="tab-discat-existing-SI-PhotoRDF" id="tab-discat-existing-SI-PhotoRDF"></a>
  <table summary="Discriminators and categories for PhotoRDF" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Representation</th>
		  <th align="left" rowspan="1" colspan="1">Content Type</th>
		  <th align="left" rowspan="1" colspan="1">Workflow</th>
  		  <th align="left" rowspan="1" colspan="1">Domain</th>
   		  <th align="left" rowspan="1" colspan="1">Industry</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1">R</td>
		  <td rowspan="1" colspan="1">SI</td>
		  <td rowspan="1" colspan="1">capture-distribute</td>
		  <td rowspan="1" colspan="1">personal media</td>
		  <td rowspan="1" colspan="1">photo</td>
		 </tr>
	  </tbody>
  </table>
  <p>
PhotoRDF [<a href="#PhotoRDF">PhotoRDF</a>] is an attempt to standardize a set of categories and labels for personal photo collections. 
The standard was proposed in early 2002 but has not developed since; the latest version is a W3C Note from 19 April 2002. 
The standard serves as an umbrella for several other standards that together should solve the "project for describing &amp; retrieving 
(digitized) photos with (RDF) metadata". The metadata is separated into three different schemas: a Dublin Core schema, a technical schema, and a
content schema. As the standard aims to be short and simple, it covers only a small set of properties. The Dublin Core schema is adopted 
for those aspects of a photo that need a description of its creator, editor, title, date of publishing, and so on. With regard to the technical
aspects of a photo, however, the standard includes fewer properties than <a href="#existing-SI-Exif">EXIF</a>. For the actual description of the content, the content 
schema defines a very small set of keywords to be used in the "subject" field of the Dublin Core schema.
  </p>
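<p>
As an illustration of the Dublin Core part of such a description, the following sketch generates a minimal RDF/XML photo description using only the standard library. The photo URI is a made-up example; PhotoRDF's technical and content schemas would contribute further properties.
</p>

```python
import xml.etree.ElementTree as ET

RDF = 'http://www.w3.org/1999/02/22-rdf-syntax-ns#'
DC = 'http://purl.org/dc/elements/1.1/'
ET.register_namespace('rdf', RDF)
ET.register_namespace('dc', DC)

# rdf:Description about a (hypothetical) photo, with Dublin Core properties.
root = ET.Element(f'{{{RDF}}}RDF')
desc = ET.SubElement(root, f'{{{RDF}}}Description',
                     {f'{{{RDF}}}about': 'http://example.org/photo42.jpg'})
ET.SubElement(desc, f'{{{DC}}}title').text = 'Sunset at the lake'
ET.SubElement(desc, f'{{{DC}}}creator').text = 'A. Photographer'
ET.SubElement(desc, f'{{{DC}}}date').text = '2002-04-19'

xml_str = ET.tostring(root, encoding='unicode')
```
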
  <p>
PhotoRDF addressed the demand for a small standard describing personal photos, for personal media
management as well as for publishing and exchanging photos between different tools. It covers aspects of a photo ranging 
from the camera settings to the subject depicted. The standard fails, however, to cover central aspects of photos that
are needed for interoperability between photo tools and photo services. For example, neither the place or position of a photo 
nor photographic information such as aperture is addressed, and the content description property is limited to a small number of keywords.
The trend towards tagging had not been foreseen at the time the standard was developed.
  </p>
  
  <h3>
   <a name="existing-A">3.2 Multimedia Metadata Formats For Describing Audio Content</a>
  </h3>  
  <p>
  	This section covers metadata formats for audio content, whether related to music or speech.
  </p>
  
  <h4>
   <a name="existing-A-ID3">3.2.1 ID3</a>
  </h4>
  <a name="tab-sum-existing-A-ID3" id="tab-sum-existing-A-ID3"></a>
  <table summary="Summary for ID3" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Responsible</th>
		  <th align="left" rowspan="1" colspan="1">Specification</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1"><a href="http://www.id3.org/">http://www.id3.org/</a></td>
		  <td rowspan="1" colspan="1">[<a href="#ID3">ID3</a>]</td>
		 </tr>
	  </tbody>
  </table>
  <br />
  <a name="tab-discat-existing-A-ID3" id="tab-discat-existing-A-ID3"></a>
  <table summary="Discriminators and categories for ID3" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Representation</th>
		  <th align="left" rowspan="1" colspan="1">Content Type</th>
		  <th align="left" rowspan="1" colspan="1">Workflow</th>
  		  <th align="left" rowspan="1" colspan="1">Domain</th>
   		  <th align="left" rowspan="1" colspan="1">Industry</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1">nX</td>
		  <td rowspan="1" colspan="1">A</td>
		  <td rowspan="1" colspan="1">distribute</td>
		  <td rowspan="1" colspan="1">generic</td>
		  <td rowspan="1" colspan="1">music</td>
		 </tr>
	  </tbody>

  </table>
  <p>
ID3 [<a href="#ID3">ID3</a>] is a metadata container embedded in the MP3 audio file format. It allows stating information about a song, such as its title, artist, and album. 
The ID3 specification addresses a broad spectrum of metadata (represented in so-called 'frames'), ranging from encryption, involved people lists, lyrics, band, and relative volume adjustment to
ownership, artist, and recording dates. Additionally, users can define their own properties. A list of 80 genres is defined (from Blues to Hard Rock).
  </p>
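<p>
As an illustration of how such embedded metadata looks at the byte level, the following sketch parses the fixed 128-byte ID3v1 tag found at the end of an MP3 file. It is a simplification: ID3v2, with its variable-length frames, is not covered.
</p>

```python
def parse_id3v1(tag: bytes):
    """Parse a 128-byte ID3v1 tag (the trailing block of an MP3 file)."""
    assert len(tag) == 128 and tag[:3] == b'TAG', "no ID3v1 tag present"
    # Fields are fixed-width, null-padded Latin-1 strings.
    text = lambda b: b.split(b'\x00')[0].decode('latin-1').strip()
    return {
        'title':   text(tag[3:33]),
        'artist':  text(tag[33:63]),
        'album':   text(tag[63:93]),
        'year':    text(tag[93:97]),
        'comment': text(tag[97:127]),
        'genre':   tag[127],         # index into the fixed genre list (0 = Blues)
    }

# Build a synthetic tag for illustration (names are made up).
pad = lambda s, n: s.encode('latin-1').ljust(n, b'\x00')
tag = (b'TAG' + pad('My Song', 30) + pad('Some Artist', 30)
       + pad('Some Album', 30) + b'2007' + pad('', 30) + bytes([0]))
```
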

  <h4>
   <a name="existing-A-MB21">3.2.2 MusicBrainz Metadata Initiative 2.1</a>
  </h4>
  <a name="tab-sum-existing-A-MB21" id="tab-sum-existing-A-MB21"></a>
  <table summary="Summary for MusicBrainz Metadata Initiative 2.1" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Responsible</th>
		  <th align="left" rowspan="1" colspan="1">Specification</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1"><a href="http://musicbrainz.org/">http://musicbrainz.org/</a></td>
		  <td rowspan="1" colspan="1">[<a href="#MusicBrainz">MusicBrainz</a>]</td>
		 </tr>
	  </tbody>
  </table>
  <br />
  <a name="tab-discat-existing-A-MB21" id="tab-discat-existing-A-MB21"></a>
  <table summary="Discriminators and categories for MusicBrainz Metadata Initiative 2.1" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Representation</th>
		  <th align="left" rowspan="1" colspan="1">Content Type</th>
		  <th align="left" rowspan="1" colspan="1">Workflow</th>
    <th align="left" rowspan="1" colspan="1">Domain</th>
    <th align="left" rowspan="1" colspan="1">Industry</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1">R</td>
		  <td rowspan="1" colspan="1">A</td>
		  <td rowspan="1" colspan="1">production</td>
		  <td rowspan="1" colspan="1">generic</td>
		  <td rowspan="1" colspan="1">music</td>
		 </tr>
	  </tbody>
  </table>
  <p>
MusicBrainz defines an RDFS-based vocabulary comprising three namespaces [<a href="#MusicBrainz">MusicBrainz</a>]. The core set is capable of expressing basic music-related metadata such as artist, album, and track.
RDF instances are made available via a query language. The third namespace is reserved for future use in expressing extended music-related metadata such as contributors,
roles, and lyrics. 
  </p>
  
  <h4>
   <a name="existing-A-MXML">3.2.3 MusicXML</a>
  </h4>
  <a name="tab-sum-existing-A-MXML" id="tab-sum-existing-A-MXML"></a>
  <table summary="Summary for MusicXML" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Responsible</th>
		  <th align="left" rowspan="1" colspan="1">Specification</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1"><a href="http://www.recordare.com/">http://www.recordare.com/</a></td>
		  <td rowspan="1" colspan="1">[<a href="#MusicXML">MusicXML</a>]</td>
		 </tr>
	  </tbody>
  </table>
  <br />
  <a name="tab-discat-existing-A-MXML" id="tab-discat-existing-A-MXML"></a>
  <table summary="Discriminators and categories for MusicXML" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Representation</th>
		  <th align="left" rowspan="1" colspan="1">Content Type</th>
		  <th align="left" rowspan="1" colspan="1">Workflow</th>
    <th align="left" rowspan="1" colspan="1">Domain</th>
    <th align="left" rowspan="1" colspan="1">Industry</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1">X</td>
		  <td rowspan="1" colspan="1">A</td>
		  <td rowspan="1" colspan="1">production</td>
		  <td rowspan="1" colspan="1">generic</td>
		  <td rowspan="1" colspan="1">music</td>
		 </tr>
	  </tbody>
  </table>
  <p>
<em>Recordare</em> has developed the MusicXML technology [<a href="#MusicXML">MusicXML</a>] to create an Internet-friendly method for publishing musical scores, 
enabling musicians and music fans to get more out of their online music. 
  </p>
  <p>
MusicXML is a universal translator for common Western musical notation from the 17<sup>th</sup> century onwards. 
It is designed as an interchange format for notation, analysis, and retrieval in music notation and digital sheet music applications. 
The MusicXML format is open for use by anyone under a royalty-free license and is supported by over 75 applications.
  </p>
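<p>
To give an impression of the format, the following sketch generates a minimal MusicXML document containing a single whole note (middle C). The element names follow common MusicXML usage; the <code>version</code> attribute and part naming here are illustrative rather than taken from the specification.
</p>

```python
import xml.etree.ElementTree as ET

# One part with one measure containing a single whole note (middle C).
score = ET.Element('score-partwise', version='3.1')
part_list = ET.SubElement(score, 'part-list')
sp = ET.SubElement(part_list, 'score-part', id='P1')
ET.SubElement(sp, 'part-name').text = 'Music'

part = ET.SubElement(score, 'part', id='P1')
measure = ET.SubElement(part, 'measure', number='1')
note = ET.SubElement(measure, 'note')
pitch = ET.SubElement(note, 'pitch')
ET.SubElement(pitch, 'step').text = 'C'      # note name
ET.SubElement(pitch, 'octave').text = '4'    # octave 4 = middle C
ET.SubElement(note, 'duration').text = '4'
ET.SubElement(note, 'type').text = 'whole'

xml_str = ET.tostring(score, encoding='unicode')
```
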
  <h3>
   <a name="existing-AV">3.3 Multimedia Metadata Formats For Describing Audio-Visual Content</a>
  </h3>
  <p>
In this section, multimedia metadata formats for describing audio-visual content in general are described.
  </p>

  <h4>
   <a name="existing-AV-MPEG-7">3.3.1 Multimedia Content Description Interface (MPEG-7)</a>
  </h4>
 <a name="tab-sum-existing-AV-MPEG-7" id="tab-sum-existing-AV-MPEG-7"></a>
  <table summary="Summary for MPEG-7" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Responsible</th>
		  <th align="left" rowspan="1" colspan="1">Specification</th>
		  <th align="left" rowspan="1" colspan="1">Formal Representation</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1"><a href="http://www.iso.org/iso/en/prods-services/popstds/mpeg.html">http://www.iso.org/iso/en/prods-services/popstds/mpeg.html</a></td>
		  <td rowspan="1" colspan="1">[<a href="#MPEG-7">MPEG-7</a>]</td>
		  <td rowspan="1" colspan="1"><a href="#formal-MPEG-7">MPEG-7 - RDF/OWL</a></td>
		 </tr>
	  </tbody>
  </table>
  <br />
  <a name="tab-discat-existing-AV-MPEG-7" id="tab-discat-existing-AV-MPEG-7"></a>
  <table summary="Discriminators and categories for MPEG-7" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Representation</th>
		  <th align="left" rowspan="1" colspan="1">Content Type</th>
		  <th align="left" rowspan="1" colspan="1">Workflow</th>
    <th align="left" rowspan="1" colspan="1">Domain</th>
    <th align="left" rowspan="1" colspan="1">Industry</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1">X, nX</td>
		  <td rowspan="1" colspan="1">SI, V, A</td>
		  <td rowspan="1" colspan="1">archive-publish</td>
		  <td rowspan="1" colspan="1">generic</td>
		  <td rowspan="1" colspan="1">generic</td>
		 </tr>
	  </tbody>
  </table>  
  <p>
The MPEG-7 standard [<a href="#MPEG-7">MPEG-7</a>], formally named "Multimedia Content Description Interface", aims to be an overall standard for describing any multimedia content.
MPEG-7 standardizes so-called "description tools" for multimedia content: Descriptors (Ds), Description Schemes (DSs) and the relationships between them. 
Descriptors are used to represent specific features of the content, generally low-level features such as visual (e.g. texture, camera motion) or audio (e.g. melody),
while description schemes refer to more abstract description entities (usually a set of related descriptors). 
These description tools as well as their relationships are represented using the Description Definition Language (DDL), a core part of the standard.
The W3C XML Schema recommendation has been adopted as the most appropriate schema for the MPEG-7 DDL, adding a few extensions (array and matrix datatypes) 
in order to satisfy specific MPEG-7 requirements. MPEG-7 descriptions can be serialized as XML or in a binary format defined in the standard. 
 </p>
 
 <p>
MPEG-7's comprehensiveness results from the fact that the standard has been designed for a broad range of applications and thus employs very general and widely applicable concepts. 
The standard contains a large set of tools for diverse types of annotations on different semantic levels 
(the set of MPEG-7 XML Schemas define 1182 elements, 417 attributes and 377 complex types).
Its flexibility is largely based on the structuring tools, which allow descriptions to be modular and at different levels of abstraction.
MPEG-7 supports fine-grained description and provides the possibility to attach descriptors to arbitrary segments at any level of detail of the description. 
The possibility to extend MPEG-7 according to the conformance guidelines defined in part 7 provides further flexibility. 
Two main problems arise in the practical use of MPEG-7 from its flexibility and comprehensiveness: complexity and limited interoperability. 
The complexity is a result of the use of generic concepts, which allow deep hierarchical structures, the high number of different descriptors and description schemes,
and their flexible inner structure, i.e. the variability concerning types of descriptors and their cardinalities. 
This sometimes causes hesitance in adopting the standard. The interoperability problem is a result of the ambiguities that exist because of the flexible definition of many 
elements in the standard (e.g. the generic structuring tools). There can be several options to structure and organize descriptions which are similar or even identical in
terms of content, and they result in conformant, yet incompatible descriptions. The description tools are defined using the DDL; their semantics is described only textually in 
the standard documents. 
 </p>
 <p>
Due to the wide range of applications, the semantics of the description tools are often very general. Several works have already pointed out that the standard lacks the formal 
semantics that could extend the traditional textual descriptions into machine-understandable ones. 
These attempts to bridge the gap between the multimedia community and the Semantic Web, covering either the whole standard or just one of its parts, are detailed below. 
 </p>

 <h5>
  MPEG-7 Profiles and Levels
 </h5>
 <p>
Profiles and levels have been proposed as a means to reduce the complexity of MPEG-7 descriptions [<a href="#MPEG-7-Profiles">MPEG-7 Profiles</a>]. 
As in other MPEG standards, profiles are subsets of the standard that cover certain functionalities, while levels are flavours of profiles with different complexity. 
In MPEG-7, profiles are subsets of description tools for certain application areas; levels have not yet been used. The proposed process for defining a profile
consists of three steps: 	
 </p>
 
 <ul>
 	<li>Selection of tools supported in the profile, i.e. the subset of descriptors and description schemes that may be used in descriptions that conform to the profile.</li>
	<li>Definition of constraints on these tools, such as restrictions on the cardinality of elements and on the use of attributes.</li>
	<li>Definition of constraints on the semantics of the tools, which describe their use in the profile more precisely.</li>
 </ul>
 
 <p>
The results of tool selection and the definition of tool constraints are formalized using the MPEG-7 DDL and, as for the full standard, result in an XML schema. 
Several profiles have been under consideration for standardization and three profiles have been standardized (they constitute part 9 of the standard, with their XML schemas being
defined in part 11): 
 </p>
 
 <ul>
 	<li>
 		<em>Simple Metadata Profile (SMP)</em>. Allows describing single instances of multimedia content or simple collections. 
		The profile contains tools for global metadata in textual form only. The proposed Simple Bibliographic Profile is a subset of SMP.
		Mappings from ID3, 3GPP and EXIF to SMP have been defined. 
	</li>
	<li>
		<em>User Description Profile (UDP)</em>. Its functionality consists of tools for describing user preferences and usage history for
		the personalization of multimedia content delivery.
	</li>
	<li>
		<em>Core Description Profile (CDP)</em>. Allows describing image, audio, video and audiovisual content as well as collections of multimedia content.
		Tools for the description of relationships between content, media information, creation information, usage information and semantic information are included.
		The CDP does not include the visual and audio description tools defined in parts 3 and 4. 
    </li>
 </ul>
 
 <p>
The adopted profiles will not be sufficient for a number of applications. If an application requires additional description tools, a new profile must be specified; 
it will thus be necessary to define further profiles for specific application areas. For interoperability it is crucial that the definitions of these profiles are published, 
so that conformance to a certain profile can be checked and mappings between profiles can be defined. It has to be noted that all of the adopted profiles just define the subset of description 
tools to be included and some tool constraints; none of the profile definitions includes constraints on the semantics of the tools that clarify how they are to be used in the profile. 
 </p>
 <p>
Apart from the standardized ones, a profile for the detailed description of single audiovisual content entities called Detailed Audiovisual Profile (DAVP) 
[<a href="#DAVP">DAVP</a>] has been proposed. 
The profile includes many of the MDS tools, such as a wide range of structuring tools, as well as tools for the description of media, creation and production information, for textual 
and semantic annotation, and for summarization. In contrast to the adopted profiles, DAVP includes the tools for audio and visual feature description, which was one motivation for
the definition of the profile. The other motivation was to define a profile that supports interoperability between systems using MPEG-7 by avoiding possible ambiguities and clarifying
the use of the description tools in the profile. The DAVP definition thus includes a set of semantic constraints, which play a crucial role in the profile definition. 
Due to the lack of formal semantics in the DDL, these constraints are only described textually in the profile definition. 
 </p>
 
 <h5>
  Controlled vocabularies in MPEG-7
 </h5>
 
 <p>
 Annotation of content often contains references to semantic entities such as objects, events, states, places, and times.
 In order to ensure consistent descriptions (e.g. to make sure that persons are always referenced by the same name), some kind of controlled vocabulary should be used in these cases. 
 MPEG-7 provides a generic mechanism for referencing terms defined in controlled vocabularies. The only requirement is that the controlled vocabulary is identified by a URI, 
 so that a specific term in a specific controlled vocabulary can be referenced unambiguously. In the simplest case, the controlled vocabulary is just a list of possible values of 
 a property in the content description, without any structure. The list of values can be defined in a file accessed by the application or can be taken from some external source, 
 for example the list of countries defined in ISO 3166. The mechanism can also be used to reference terms from other external vocabularies, such as thesauri or ontologies. 
 </p>
 
 <p>
 Classification schemes (CSs) are an MPEG-7 description tool for describing a set of terms using MPEG-7 description schemes and descriptors. 
 They allow defining hierarchies of terms and simple relations between them, and allow term names and definitions to be multilingual. Part 5 of the MPEG-7 standard already defines
  a number of classification schemes, and new ones can be added. The CSs defined in the standard serve those description tools which require or encourage the use of
  controlled vocabularies, such as:	
 </p>
 <ul>
 	<li>Technical media information: encoding, physical media types, file formats, defects;</li>
	<li>Content classification: genre, format, rating;</li>
	<li>Other: affection, role of creator, dissemination format.</li>
 </ul>
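 <p>
 As an illustration, a classification scheme term with multilingual names might be serialized as follows. The element names follow the MPEG-7 MDS conventions for classification schemes, while the scheme URI and term are hypothetical examples.
 </p>

```python
import xml.etree.ElementTree as ET

# A tiny classification scheme: one term with multilingual names.
# The uri and termID values below are illustrative, not normative.
cs = ET.Element('ClassificationScheme', uri='urn:example:cs:GenreCS')
term = ET.SubElement(cs, 'Term', termID='1.2')
for lang, name in [('en', 'Documentary'), ('de', 'Dokumentarfilm')]:
    n = ET.SubElement(term, 'Name')
    # xml:lang carries the language of each term name.
    n.set('{http://www.w3.org/XML/1998/namespace}lang', lang)
    n.text = name

xml_str = ET.tostring(cs, encoding='unicode')
```

 <p>
 A description can then reference the term unambiguously by combining the scheme URI and the term identifier, as described above.
 </p>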
 <p>
  Note: Further descriptions of MPEG-7 will be available in the XGR <a href="http://www.w3.org/2005/Incubator/mmsem/XGR-mpeg7">MPEG-7 and the Semantic Web</a>. 
  </p>

  <h4>
   <a name="existing-AV-AAF">3.3.2 Advanced Authoring Format (AAF)</a>
  </h4>
  <a name="tab-sum-existing-AV-AAF" id="tab-sum-existing-AV-AAF"></a>
  <table summary="Summary for AAF" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Responsible</th>
		  <th align="left" rowspan="1" colspan="1">Specification</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1"><a href="http://www.aafassociation.org/">http://www.aafassociation.org/</a></td>
		  <td rowspan="1" colspan="1">[<a href="#AAF">AAF</a>]</td>
		 </tr>
	  </tbody>
  </table>
  <br />
  <a name="tab-discat-existing-AV-AAF" id="tab-discat-existing-AV-AAF"></a>
  <table summary="Discriminators and categories for AAF" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Representation</th>
		  <th align="left" rowspan="1" colspan="1">Content Type</th>
		  <th align="left" rowspan="1" colspan="1">Workflow</th>
    <th align="left" rowspan="1" colspan="1">Domain</th>
    <th align="left" rowspan="1" colspan="1">Industry</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1">nX</td>
		  <td rowspan="1" colspan="1">SI, V, A</td>
		  <td rowspan="1" colspan="1">production</td>
		  <td rowspan="1" colspan="1">content creation</td>
		  <td rowspan="1" colspan="1">broadcast</td>
		 </tr>
	  </tbody>
  </table>
  <p>
The Advanced Authoring Format (AAF) [<a href="#AAF">AAF</a>] is a cross-platform file format that allows the interchange of data between multimedia authoring tools. 
AAF supports the encapsulation of both metadata and essence, but its primary purpose involves the description of authoring information. 
The object-oriented AAF object model allows for extensive timeline-based modeling of compositions (i.e. motion picture montages), 
including transitions between clips and the application of effects (e.g. dissolves, wipes, flipping). 
Hence, the application domain of AAF is within the post production phase of an audiovisual product and it can be employed in specialized 
video work centers. Among the structural metadata contained for clips and compositions, AAF also supports storing event-related information
(e.g. time-based user annotations and remarks) or specific authoring instructions. 
  </p>    
  <p>
AAF files are fully agnostic as to how essence is coded and serve as a wrapper for any kind of essence coding specification. 
In addition to describing the current location and characteristics of essence clips, AAF also supports descriptions of the entire 
derivation chain of a piece of essence, from its current state back to the original storage medium, possibly a tape 
(identified by tape number and time code) or a film (identified, for example, by an edge code). 
  </p>
  <p>
The AAF data model and essence are independent of the specificities of how AAF files are stored on disk. 
The most common storage specification used for AAF files is the Microsoft Structured Storage format, but other storage formats (e.g. XML) 
can be used. 
  </p>  
  <p>
The AAF metadata specifications and object model are fully extensible (e.g. by subclassing existing objects), and the extensions are 
fully contained in a metadata dictionary stored in the AAF file. Because of the format's flexibility and its use of proprietary extensions, 
the Edit Protocol was established to achieve predictable interoperability between implementations created by different developers. 
The Edit Protocol combines a number of best practices and constraints as to how an Edit Protocol-compatible AAF
implementation must function and which subset of the AAF specification can be used in Edit Protocol-compliant AAF files. 
  </p>
  
  <h4>
   <a name="existing-AV-MXF-DMS-1">3.3.3 Material Exchange Format (MXF)</a>
  </h4>
  <a name="tab-sum-existing-AV-MXF-DMS-1" id="tab-sum-existing-AV-MXF-DMS-1"></a>
  <table summary="Summary for MXF DMS-1" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Responsible</th>
		  <th align="left" rowspan="1" colspan="1">Specification</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1"><a href="http://www.smpte.org/">http://www.smpte.org/</a></td>
		  <td rowspan="1" colspan="1">[<a href="#MXF">MXF</a>]</td>
		 </tr>
	  </tbody>
  </table>
  <br />
  <a name="tab-discat-existing-AV-MXF-DMS-1" id="tab-discat-existing-AV-MXF-DMS-1"></a>
  <table summary="Discriminators and categories for MXF DMS-1" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Representation</th>
		  <th align="left" rowspan="1" colspan="1">Content Type</th>
		  <th align="left" rowspan="1" colspan="1">Workflow</th>
 	  <th align="left" rowspan="1" colspan="1">Domain</th>
    <th align="left" rowspan="1" colspan="1">Industry</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1">nX</td>
		  <td rowspan="1" colspan="1">SI, V, A</td>
		  <td rowspan="1" colspan="1">production</td>
		  <td rowspan="1" colspan="1">content creation</td>
		  <td rowspan="1" colspan="1">broadcast</td>
		 </tr>
	  </tbody>
  </table>
  <p>
The Material Exchange Format (MXF) [<a href="#MXF">MXF</a>] is a streamable file format optimized for the interchange of material for the content creation industries.
MXF is a wrapper/container format intended to encapsulate and accurately describe one or more 'clips' of audiovisual essence (video, sound, pictures, etc.).
This file format is essence-agnostic, which means it should be independent of the underlying audio and video coding specifications in the file.
In order to process such a file, its header contains data about the essence.
An MXF file contains enough structural header information to allow applications to interchange essence without any a priori information. 
The MXF metadata allows applications to know the duration of the file, what essence codecs are required, what timeline complexity is involved 
and other key points to allow interchange. 
  </p>
  <p>
A 'Zero Divergence' doctrine states that any areas in which AAF and MXF overlap must be technologically identical. 
As such, MXF and AAF share a common data model: they use the same model to represent timelines, clips, descriptions of essence,
and metadata. The major difference between the two is that MXF has chosen not to include transition and layering functionality. 
This makes MXF the preferable file format in embedded systems, such as VTRs or cameras, where resources can be scarce. 
Essentially, this creates an environment in which raw essence can be created in MXF, post-produced in AAF, and then the 
finished content can be generated as an MXF file. 
  </p>
  <p>
MXF uses KLV coding throughout the file structure.
KLV is a data interchange format defined by the simple data construct <em>Key-Length-Value</em>, where the <em>Key</em> identifies the data meaning,
the <em>Length</em> gives the data length, and the <em>Value</em> is the data itself. This principle allows a decoder to identify each component by its key and 
to skip any component it does not recognize, using the length value to continue decoding at the next recognized key. 
KLV coding allows any kind of information to be coded: it is essentially a machine-friendly coding construct that is data-centric 
and not dependent on human language. Additionally, the KLV structure makes the MXF file format streamable. 
  </p>
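  <p>
  The skipping behaviour described above can be sketched as follows. This is a simplified parser assuming 16-byte keys and BER short/long-form lengths; real MXF keys are SMPTE Universal Labels with additional internal structure, and the example keys below are made up.
  </p>

```python
def parse_klv(stream: bytes):
    """Iterate over KLV triplets: 16-byte key, BER-encoded length, value.
    A decoder can skip unknown keys using the length field alone."""
    pos, triplets = 0, []
    while pos < len(stream):
        key = stream[pos:pos + 16]
        pos += 16
        first = stream[pos]
        pos += 1
        if first < 0x80:                       # BER short form: length in one byte
            length = first
        else:                                  # long form: next (first & 0x7F) bytes
            n = first & 0x7F
            length = int.from_bytes(stream[pos:pos + n], 'big')
            pos += n
        triplets.append((key, stream[pos:pos + length]))
        pos += length
    return triplets

# Two triplets with made-up 16-byte keys (real MXF keys are SMPTE ULs):
# one with a short-form length, one with a long-form length.
k1, k2 = bytes(range(16)), bytes(range(16, 32))
stream = k1 + bytes([5]) + b'hello' + k2 + bytes([0x81, 3]) + b'xyz'
```

  <p>
  A decoder that recognizes only <code>k1</code> would still advance correctly past the second triplet, since the length field alone tells it how many bytes to skip.
  </p>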
  <p>
   Structural Metadata is the way in which MXF describes different essence types and their relationship along a timeline.
   The structural metadata defines the synchronization of different tracks along a timeline. It also defines picture size, 
   picture rate, aspect ratio, audio sampling, and other essence description parameters. The MXF structural metadata is derived from the AAF data model.
   In addition to the structural metadata described above, MXF files may contain descriptive and <em>dark</em> metadata. 
  </p>
  <p>
   MXF descriptive metadata comprises information in addition to the structure of the MXF file. 
   Descriptive metadata is created during the planning or production of content. Possible information can concern the production, 
   a clip (e.g. which type of camera was used), or a scene (e.g. the actors in it). 
   <b>DMS-1</b> (Descriptive Metadata Scheme 1) [<a href="#MXF-DMS-1">MXF-DMS-1</a>] is an attempt to standardize such information within the MXF format. 
   Furthermore, DMS-1 is designed to interwork, as far as practical, with other metadata schemes such as MPEG-7, TV-Anytime, P/Meta and Dublin Core.  
   The <b>SMPTE Metadata Dictionary</b> [<a href="#MXF-RP210">MXF-RP210</a>] is a thematically structured list of metadata elements, defined by a key,
   the size of the value and its semantics. 
  </p>  
  <p>
   Dark metadata is the term for metadata that is unknown to the application processing an MXF file. It may be privately defined and generated, 
   it may consist of newly added properties, or it may be standard MXF metadata that is simply not relevant to the given application. 
   The MXF standard defines rules on the use of dark metadata to prevent numerical or namespace clashes when private metadata is added to
   a file that already contains dark metadata.
  </p>
     
  <h3>
   <a name="existing-MP">3.4 Multimedia Metadata Formats For Describing Multimedia Presentations</a>
  </h3>
  <p>
The formats listed in this section deal with multimedia presentations with appropriate support for metadata.
  </p>
  <h4>
   <a name="existing-MP-SMIL">3.4.1 Synchronized Multimedia Integration Language (SMIL)</a>
  </h4>
  <a name="tab-sum-existing-MP-SMIL" id="tab-sum-existing-MP-SMIL"></a>
  <table summary="Summary for SMIL" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Responsible</th>
		  <th align="left" rowspan="1" colspan="1">Specification</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1"><a href="http://www.w3.org/">http://www.w3.org/</a></td>
		  <td rowspan="1" colspan="1">[<a href="#SMIL">SMIL</a>]</td>
		 </tr>
	  </tbody>
  </table>
  <br />
  <a name="tab-discat-existing-MP-SMIL" id="tab-discat-existing-MP-SMIL"></a>
  <table summary="Discriminators and categories for SMIL" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Representation</th>
		  <th align="left" rowspan="1" colspan="1">Content Type</th>
		  <th align="left" rowspan="1" colspan="1">Workflow</th>
    <th align="left" rowspan="1" colspan="1">Domain</th>
    <th align="left" rowspan="1" colspan="1">Industry</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1">X</td>
		  <td rowspan="1" colspan="1">G</td>
		  <td rowspan="1" colspan="1">publish, distribution, presentation, interaction</td>
		  <td rowspan="1" colspan="1">generic</td>
		  <td rowspan="1" colspan="1">Web, mobile applications</td>
		 </tr>
	  </tbody>
  </table>
  <p>
  The Synchronized Multimedia Integration Language (SMIL) [<a href="#SMIL">SMIL</a>] is an XML-based language enabling simple authoring
  of interactive audiovisual presentations. SMIL is used to describe scenes with streaming audio, streaming video, still images,
  text or any other media type. SMIL can be integrated with other web technologies such as XML, DOM, SVG, CSS and XHTML. 
  </p>
  <p>
  Besides media, a SMIL scene comprises a spatial and temporal layout and supports animation and interactivity.
  SMIL also provides a timing mechanism for controlling animations and for synchronization. SMIL is based on the download-and-play concept;
  it also has a mobile profile, SMIL Basic.
  </p>
  <p>
   The SMIL 2.1 Metainformation module contains elements and attributes that allow the description of SMIL documents. It lets 
   authors describe documents with a very basic vocabulary (the <code>meta</code> element, inherited from SMIL 1.0), and in its
   most recent version the specification introduces new capabilities for describing metadata using RDF. 
  </p>
  
  <p>
  	Note: <a href="http://www.w3.org/TR/2007/WD-SMIL3-20070713/">SMIL 3.0</a> is a Last Call Working Draft at the time of publishing this XGR.
  </p>
  
  <h4>
   <a name="existing-MP-SVG">3.4.2 Scalable Vector Graphics (SVG)</a>
  </h4>
  <a name="tab-sum-existing-MP-SVG" id="tab-sum-existing-MP-SVG"></a>
  <table summary="Summary for SVG" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Responsible</th>
		  <th align="left" rowspan="1" colspan="1">Specification</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1"><a href="http://www.w3.org/">http://www.w3.org/</a></td>
		  <td rowspan="1" colspan="1">[<a href="#SVG">SVG</a>]</td>
		 </tr>
	  </tbody>
  </table>
  <br />
  <a name="tab-discat-existing-MP-SVG" id="tab-discat-existing-MP-SVG"></a>
  <table summary="Discriminators and categories for SVG" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Representation</th>
		  <th align="left" rowspan="1" colspan="1">Content Type</th>
		  <th align="left" rowspan="1" colspan="1">Workflow</th>
    <th align="left" rowspan="1" colspan="1">Domain</th>
    <th align="left" rowspan="1" colspan="1">Industry</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1">X</td>
		  <td rowspan="1" colspan="1">G</td>
		  <td rowspan="1" colspan="1">publish, presentation</td>
		  <td rowspan="1" colspan="1">generic</td>
		  <td rowspan="1" colspan="1">Web, mobile applications</td>
		 </tr>
	  </tbody>
  </table>
  <p>
Scalable Vector Graphics (SVG) [<a href="#SVG">SVG</a>] is a language for describing two-dimensional vector and mixed vector/raster graphics in XML.
It allows for describing scenes with vector shapes (e.g. paths consisting of straight lines, curves), text, and multimedia
(e.g. still images, video, audio). These objects can be grouped, transformed, styled and composited into previously rendered objects.
  </p>
  <p>
SVG files are compact and provide high-quality graphics on the Web, in print, and on resource-limited handheld devices. 
In addition, SVG supports scripting and animation, which makes it well suited for interactive, data-driven, personalized graphics. 
SVG is based on the download-and-play concept and also has a mobile profile, SVG Tiny, which is a subset of SVG.
  </p>
  <p>
Metadata included with SVG content is specified within <code>metadata</code> elements, 
whose contents come from other XML namespaces such as Dublin Core or RDF. 
  </p>
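<p>
As an illustration of this mechanism, the following Python sketch uses the standard-library ElementTree module to build a small SVG document whose <code>metadata</code> element carries a Dublin Core title. The element and attribute names follow the SVG and Dublin Core namespaces; the content values are invented.
</p>

```python
import xml.etree.ElementTree as ET

SVG = "http://www.w3.org/2000/svg"
DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("", SVG)
ET.register_namespace("dc", DC)

# an SVG scene whose <metadata> element embeds a Dublin Core title
svg = ET.Element(f"{{{SVG}}}svg", width="100", height="100")
meta = ET.SubElement(svg, f"{{{SVG}}}metadata")
title = ET.SubElement(meta, f"{{{DC}}}title")
title.text = "Red square"
ET.SubElement(svg, f"{{{SVG}}}rect",
              x="10", y="10", width="80", height="80", fill="red")

print(ET.tostring(svg, encoding="unicode"))
```

An SVG renderer simply ignores the <code>metadata</code> element, while a metadata-aware tool can extract the Dublin Core content from it.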
    
  <h3>
   <a name="existing-WF">3.5 Multimedia Metadata Formats For Describing Specific Domains Or Workflows</a>
  </h3>  
  <p>
Metadata formats listed in this section focus on a specific domain (e.g. news) or are concerned with workflow issues such as MPEG-21.
  </p> 
  
  <h4>
   <a name="existing-WF-NewsML">3.5.1 NewsML-G2</a>
  </h4>
  <a name="tab-sum-existing-WF-NewsML" id="tab-sum-existing-WF-NewsML"></a>
  <table summary="Summary for NewsML" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Responsible</th>
		  <th align="left" rowspan="1" colspan="1">Specification</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1"><a href="http://www.iptc.org/NAR/">http://www.iptc.org/NAR/</a></td>
		  <td rowspan="1" colspan="1">[<a href="#NewsML">NewsML-G2</a>]</td>
		 </tr>
	  </tbody>
  </table>
  <br />
  <a name="tab-discat-existing-WF-NewsML" id="tab-discat-existing-WF-NewsML"></a>
  <table summary="Discriminators and categories for NewsML" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Representation</th>
		  <th align="left" rowspan="1" colspan="1">Content Type</th>
		  <th align="left" rowspan="1" colspan="1">Workflow</th>
    <th align="left" rowspan="1" colspan="1">Domain</th>
    <th align="left" rowspan="1" colspan="1">Industry</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1">X</td>
		  <td rowspan="1" colspan="1">G</td>
		  <td rowspan="1" colspan="1">publish</td>
		  <td rowspan="1" colspan="1">news</td>
		  <td rowspan="1" colspan="1">news agencies</td>
		 </tr>
	  </tbody>
  </table> 
  <p>
  To ease the exchange of news, the <a href="http://www.iptc.org/pages/index.php">International Press Telecommunication Council</a> (IPTC) 
  has developed the News Architecture for G2-Standards [<a href="#NewsML">NewsML-G2</a>], whose goal is 
  <em>to provide a single generic model for exchanging all kinds of newsworthy 
  information, thus providing a framework for a future family of IPTC news exchange standards</em>.
  This family includes NewsML-G2, SportsML-G2, EventsML-G2, ProgramGuideML-G2 and a future WeatherML. 
  All are XML-based languages used for describing not only the news content (traditional metadata), but also its management and packaging,
  as well as aspects of the exchange itself (transportation, routing).
  </p>
  
  <h4>
   <a name="existing-WF-TVAnytime">3.5.2 TVAnytime</a>
  </h4>
  <a name="tab-sum-existing-WF-TVAnytime" id="tab-sum-existing-WF-TVAnytime"></a>
  <table summary="Summary for TVAnytime" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Responsible</th>
		  <th align="left" rowspan="1" colspan="1">Specification</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1"><a href="http://www.tv-anytime.org/">http://www.tv-anytime.org/</a></td>
		  <td rowspan="1" colspan="1">[<a href="#TVAnytime">TVAnytime</a>]</td>
		 </tr>
	  </tbody>
  </table>
  <br />
  <a name="tab-discat-existing-WF-TVAnytime" id="tab-discat-existing-WF-TVAnytime"></a>
  <table summary="Discriminators and categories for TVAnytime" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Representation</th>
		  <th align="left" rowspan="1" colspan="1">Content Type</th>
		  <th align="left" rowspan="1" colspan="1">Workflow</th>
    <th align="left" rowspan="1" colspan="1">Domain</th>
    <th align="left" rowspan="1" colspan="1">Industry</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1">X</td>
		  <td rowspan="1" colspan="1">G</td>
		  <td rowspan="1" colspan="1">distribute</td>
		  <td rowspan="1" colspan="1">Electronic Program Guides (EPG)</td>
		  <td rowspan="1" colspan="1">broadcast</td>
		 </tr>
	  </tbody>
  </table> 
  <p>
The <a href="http://www.tv-anytime.org/">TV Anytime Forum</a> is an association of organizations that seeks to develop specifications for 
value-added interactive services, such as electronic program guides, in the context of digital TV broadcasting. 
The forum identified metadata [<a href="#TVAnytime">TVAnytime</a>] as one of the key technologies enabling its vision and adopted MPEG-7 as the description language. 
It has extended the MPEG-7 vocabulary with higher-level descriptors such as the intended audience of a program or its broadcast 
conditions. 
  </p>
  
  <h4>
   <a name="existing-WF-MPEG-21">3.5.3 MPEG-21</a>
  </h4>
  <a name="tab-sum-existing-WF-MPEG-21" id="tab-sum-existing-WF-MPEG-21"></a>
  <table summary="Summary for MPEG-21" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Responsible</th>
		  <th align="left" rowspan="1" colspan="1">Specification</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1"><a href="http://www.iso.org/iso/en/prods-services/popstds/mpeg.html">http://www.iso.org/iso/en/prods-services/popstds/mpeg.html</a> (ISO/MPEG)</td>
		  <td rowspan="1" colspan="1">[<a href="#MPEG-21">MPEG-21</a>]</td>
		 </tr>
	  </tbody>
  </table>
  <br />
  <a name="tab-discat-existing-WF-MPEG-21" id="tab-discat-existing-WF-MPEG-21"></a>
  <table summary="Discriminators and categories for MPEG-21" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Representation</th>
		  <th align="left" rowspan="1" colspan="1">Content Type</th>
		  <th align="left" rowspan="1" colspan="1">Workflow</th>
    <th align="left" rowspan="1" colspan="1">Domain</th>
    <th align="left" rowspan="1" colspan="1">Industry</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1">nX, X</td>
		  <td rowspan="1" colspan="1">G</td>
		  <td rowspan="1" colspan="1">annotate, publish, distribute</td>
		  <td rowspan="1" colspan="1">generic</td>
		  <td rowspan="1" colspan="1">generic</td>
		 </tr>
	  </tbody>
  </table> 
  <p>
   The MPEG-21 [<a href="#MPEG-21">MPEG-21</a>] standard aims at defining a framework for multimedia delivery and consumption that supports a variety of businesses engaged
   in the trading of digital objects. MPEG-21 is quite different from its predecessors: it does not focus on the representation and coding of content,
   as MPEG-1 through MPEG-7 do, but instead on filling the gaps in the multimedia delivery chain. 
   MPEG-21 was developed with the vision of offering users transparent and interoperable consumption and delivery of rich 
   multimedia content. The standard consists of a set of tools and builds on the previous coding and metadata standards 
   MPEG-1, -2, -4 and -7, linking them together to produce a protectable universal package, the Digital Item, for collecting, relating, 
   referencing and structuring multimedia content for consumption by users. The vision of MPEG-21 is to enable transparent
   and augmented use of multimedia resources (e.g. music tracks, videos, text documents or physical objects) contained in Digital Items across a
   wide range of networks and devices.
  </p>
  <p>
  The two central concepts of MPEG-21 are the <em>Digital Item</em> and the <em>User</em>. 
  A User is any entity that interacts in the MPEG-21 environment or makes use of a Digital Item. A
  Digital Item is a structured digital object with a standard representation, identification and metadata within the MPEG-21 framework;
  it is the fundamental unit of distribution and transaction within this framework. In other words, the Digital Item groups 
  multimedia resources (e.g. audio, video, image, text) and metadata (such as identifiers, licenses, content-related and processing-related information)
  within a standardized structure, enabling interoperability among vendors and manufacturers.
  <p>
  The MPEG-21 standard consists of 18 parts, of which the following are the most relevant for the scope of the MMSEM-XG:  
  </p>
  <ul>
   <li>
   	   Part 2, Digital Item Declaration (DID), provides an abstract model and an XML-based representation thereof which is used to define Digital Items. 
   	   The DID Model defines digital items, containers, fragments or complete resources, assertions, statements, choices/selections, and annotations on digital items. 
   </li> 
   <li>
       Part 3, Digital Item Identification and Description (DII), is concerned with the ability to identify and refer to complete or partial Digital Items. 
   </li>	
   <li>
       Part 5, Rights Expression Language (REL), provides a machine-readable language to declare rights and permissions using the terms as defined in the Rights Data Dictionary. 
   </li> 
   <li>
       Part 17, Fragment Identification for MPEG Media Types, specifies a syntax for identifying parts (e.g., track of a CD/DVD) of MPEG resources via Uniform Resource Identifiers (URIs).
   </li>
  </ul>
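<p>
To make the DID model of Part 2 concrete, the following Python sketch assembles a minimal Digital Item Declaration: an Item carrying a Descriptor/Statement pair and a Component that binds in a media resource. The DIDL namespace URN and the example resource URL are assumptions for illustration only; a real declaration would follow the Part 2 schema in full.
</p>

```python
import xml.etree.ElementTree as ET

# DIDL namespace of MPEG-21 Part 2 (assumed here; check the edition in use)
DIDL = "urn:mpeg:mpeg21:2002:02-DIDL-NS"
ET.register_namespace("didl", DIDL)

didl = ET.Element(f"{{{DIDL}}}DIDL")
item = ET.SubElement(didl, f"{{{DIDL}}}Item")

# a Descriptor/Statement pair carries metadata about the Item
desc = ET.SubElement(item, f"{{{DIDL}}}Descriptor")
stmt = ET.SubElement(desc, f"{{{DIDL}}}Statement", mimeType="text/plain")
stmt.text = "A music track together with its metadata"

# a Component binds the actual media resource into the Item
comp = ET.SubElement(item, f"{{{DIDL}}}Component")
ET.SubElement(comp, f"{{{DIDL}}}Resource",
              mimeType="audio/mpeg", ref="http://example.org/track.mp3")

print(ET.tostring(didl, encoding="unicode"))
```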
  <p>
	MPEG-21 identifies and defines the mechanisms and elements needed to support the multimedia delivery chain as described above, 
	as well as the relationships between and the operations supported by them. Within the parts of MPEG-21, these elements are
	elaborated by defining the syntax and semantics of their characteristics, such as interfaces to the elements.
  </p>
  <p>
  	Note: For an overview on MPEG-21, see also <a href="http://www.chiariglione.org/MPEG/standards/mpeg-21/mpeg-21.htm">MPEG-21 Overview v.5</a> via Leonardo Chiariglione.
  </p>

  <h4>
   <a name="existing-WF-EBUPMeta">3.5.4 EBU P/Meta</a>
  </h4>
  <a name="tab-sum-existing-WF-EBUPMeta" id="tab-sum-existing-WF-EBUPMeta"></a>
  <table summary="Summary for EBU P/Meta" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Responsible</th>
		  <th align="left" rowspan="1" colspan="1">Specification</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1"><a href="http://www.ebu.ch/">http://www.ebu.ch/</a> European Broadcasting Union (EBU)</td>
		  <td rowspan="1" colspan="1">[<a href="#EBUPMETA">EBU P/Meta</a>]</td>
		 </tr>
	  </tbody>
  </table>
  <br />
  <a name="tab-discat-existing-WF-EBUPMeta" id="tab-discat-existing-WF-EBUPMeta"></a>
  <table summary="Discriminators and categories for EBU P/Meta" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Representation</th>
		  <th align="left" rowspan="1" colspan="1">Content Type</th>
		  <th align="left" rowspan="1" colspan="1">Workflow</th>
    <th align="left" rowspan="1" colspan="1">Domain</th>
    <th align="left" rowspan="1" colspan="1">Industry</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1">nX, X</td>
		  <td rowspan="1" colspan="1">A, V</td>
		  <td rowspan="1" colspan="1">publish</td>
		  <td rowspan="1" colspan="1">generic</td>
		  <td rowspan="1" colspan="1">broadcast</td>
		 </tr>
	  </tbody>
  </table> 
  <p>
The EBU P/Meta working group designed this standard as a metadata vocabulary [<a href="#EBUPMETA">EBU P/Meta</a>] for programme exchange in the professional broadcast industry.
It is not intended as an internal representation for a broadcaster's systems; rather, P/Meta was designed as a metadata format for business-to-business
exchange of broadcast-programme metadata between content producers, content distributors and archives.
The P/Meta definition uses a three-layer model: the <em>definition layer</em> specifies the semantics of the description;  
the <em>technology layer</em> defines the encoding used for exchange (currently KLV &#8212; key, length, value &#8212; and XML representations are specified);
and the lowest layer, the <em>data interchange layer</em>, is out of the scope of the specification. 
P/Meta consists of a number of attributes (some of them with a controlled list of values), which are organized into sets. 
The standard covers the following types of metadata:
  </p>
  <ul>
   <li>Identification</li>
   <li>Technical metadata</li>
   <li>Programme description and classification</li>
   <li>Creation and production information</li>
   <li>Rights and contract information</li>
   <li>Publication information</li>
  </ul>
  <p>
Note: the EBU is working on replacing P/Meta with <a href="#existing-WF-NewsML">NewsML-G2</a>.
  </p>

  <h3>
   <a name="existing-RE">3.6 Other Multimedia Metadata Related Formats</a>
  </h3> 

  <h4>
   <a name="existing-RE-DC">3.6.1 Dublin Core (DC)</a>
  </h4>
  <a name="tab-sum-existing-RE-DC" id="tab-sum-existing-RE-DC"></a>
  <table summary="Summary for Dublin Core" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Responsible</th>
		  <th align="left" rowspan="1" colspan="1">Specification</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1"><a href="http://dublincore.org/">http://dublincore.org/</a></td>
		  <td rowspan="1" colspan="1">[<a href="#DublinCore">Dublin Core</a>]</td>
		 </tr>
	  </tbody>
  </table>
  <br />
  <a name="tab-discat-existing-RE-DC" id="tab-discat-existing-RE-DC"></a>
  <table summary="Discriminators and categories for Dublin Core" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Representation</th>
		  <th align="left" rowspan="1" colspan="1">Content Type</th>
		  <th align="left" rowspan="1" colspan="1">Workflow</th>
    <th align="left" rowspan="1" colspan="1">Domain</th>
    <th align="left" rowspan="1" colspan="1">Industry</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1">X, R</td>
		  <td rowspan="1" colspan="1">G</td>
		  <td rowspan="1" colspan="1">publish</td>
		  <td rowspan="1" colspan="1">generic</td>
		  <td rowspan="1" colspan="1">generic</td>
		 </tr>
	  </tbody>
  </table> 
  <p>
The Dublin Core Metadata Initiative (DCMI) has defined a set of elements [<a href="#DublinCore">Dublin Core</a>] for cross-domain information resource description.
The set consists of a flat list of 15 elements describing common properties of resources, such as <code>title</code> and <code>creator</code>. 
Dublin Core recommends using controlled vocabularies to provide the values for these elements. 
  </p>
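<p>
Because a Dublin Core record is just a flat list of values for the 15 elements, checking a record against the element set is trivial. The sketch below hardcodes the DCMES element names; the sample record values are invented.
</p>

```python
# The 15 elements of the Dublin Core Metadata Element Set.
DC_ELEMENTS = {
    "title", "creator", "subject", "description", "publisher",
    "contributor", "date", "type", "format", "identifier",
    "source", "language", "relation", "coverage", "rights",
}

def non_dc_keys(record):
    """Return the keys of a flat record that are not Dublin Core elements."""
    return {key for key in record if key not in DC_ELEMENTS}

# a record is simply a flat set of element/value pairs ("genre" is not DC)
record = {"title": "A film", "creator": "Jane Doe", "genre": "drama"}
```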

  <h4>
   <a name="existing-RE-XMP_IPTC">3.6.2 XMP and IPTC Metadata for XMP</a>
  </h4>
  <a name="tab-sum-existing-RE-XMP_IPTC" id="tab-sum-existing-RE-XMP_IPTC"></a>
  <table summary="Summary for XMP/IPTC" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Responsible</th>
		  <th align="left" rowspan="1" colspan="1">Specification</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1"><a href="http://www.adobe.com/">http://www.adobe.com/</a></td>
		  <td rowspan="1" colspan="1">[<a href="#XMP">XMP</a>]</td>
		 </tr>
	  </tbody>
  </table>
  <br />
  <a name="tab-discat-existing-RE-XMP_IPTC" id="tab-discat-existing-RE-XMP_IPTC"></a>
  <table summary="Discriminators and categories for XMP" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Representation</th>
		  <th align="left" rowspan="1" colspan="1">Content Type</th>
		  <th align="left" rowspan="1" colspan="1">Workflow</th>
    <th align="left" rowspan="1" colspan="1">Domain</th>
    <th align="left" rowspan="1" colspan="1">Industry</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1">X, R</td>
		  <td rowspan="1" colspan="1">G</td>
		  <td rowspan="1" colspan="1">annotate, publish, distribute</td>
		  <td rowspan="1" colspan="1">generic</td>
		  <td rowspan="1" colspan="1">generic</td>
		 </tr>
	  </tbody>
  </table> 
  <p>
The main goal of XMP [<a href="#XMP">XMP</a>] is to attach richer metadata to media assets in order to enable better management of multimedia content
and better ways to search and retrieve it, improving the consumption of these multimedia assets.
Furthermore, XMP aims to enhance the reuse and repurposing of content and to improve interoperability between different vendors and systems.
  </p>
  <p>
The Adobe XMP specification standardizes the definition, creation, and processing of metadata by providing a data model,
storage model (serialization of the metadata as a stream of XML), and formal schema definitions 
(predefined sets of metadata property definitions that are relevant for a wide range of applications). 
XMP makes use of RDF in order to represent the metadata properties associated with a document.
  </p>
  <p>
With XMP, Adobe provides a method and format for expressing and embedding metadata in various multimedia file formats, 
including schemas for managing multimedia content (e.g. versioning support). 
The most important components of the specification are the data model and the predefined (and extensible) schemas:  
  </p>
  <ul>
   <li>
The XMP data model is a subset of the RDF data model. 
It allows metadata properties to be attached to a resource.
Property values can be simple types, structured values (structured properties) or arrays. 
Properties may themselves have properties (property qualifiers), which can provide additional information about the property value. 
   </li>
   <li>
XMP schemas are predefined sets of metadata property definitions; essentially, they are collections of statements about resources expressed using RDF. 
It is possible to define new external schemas, to extend existing ones, or to add new ones where necessary. 
The specification includes several predefined schemas, such as a Dublin Core schema, a basic rights schema and a media management schema. 
   </li>
  </ul>
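<p>
Because an XMP packet is serialized as plain XML delimited by <code>&lt;?xpacket&gt;</code> processing instructions, it can be located in most host formats with a simple byte scan. The following Python sketch illustrates this; it is not a complete reader (real files may carry extended or multiple packets).
</p>

```python
def extract_xmp(data: bytes):
    """Return the first XMP packet embedded in `data`, or None.

    An XMP packet is XML delimited by <?xpacket begin= ... ?> and
    <?xpacket end= ... ?> processing instructions, so a plain byte scan
    works for most host formats (JPEG, TIFF, PDF, ...). Sketch only.
    """
    start = data.find(b"<?xpacket begin=")
    if start == -1:
        return None
    end_pi = data.find(b"<?xpacket end=", start)
    if end_pi == -1:
        return None
    close = data.find(b"?>", end_pi)      # end of the closing PI
    if close == -1:
        return None
    return data[start:close + 2]
```

The packet returned here is ordinary RDF/XML and can be handed to any RDF or XML parser for further processing.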
  <p>
There is a growing number of <a href="http://www.adobe.com/products/xmp/partners.html">commercial applications</a> that already support XMP. 
For example, the <a href="http://www.iptc.org">International Press and Telecommunications Council (IPTC)</a> has integrated XMP into its image metadata specifications, and almost every Adobe
application, such as Photoshop or InDesign, supports XMP. <em>IPTC Metadata for XMP</em> can be considered a
multimedia metadata format for describing still images and may soon be the most widely used one.
  </p>
    
  <h2>
   <a name="formal" id="formal">4. Multimedia Ontologies</a>
  </h2>
   <p>
	This section discusses some known approaches for converting existing multimedia metadata into RDF [<a href="#RDF-Primer">RDF Primer</a>] / OWL
	[<a href="#OWL-Guide">OWL Guide</a>] for the purpose of interoperability, reasoning, etc.
  </p>
  <p>
  	Note: The formalizations presented in the following are subsumed under the more common term <b>Multimedia Ontology</b>, hence the title.
  </p>
  
  <h3>
   <a name="formal-VRA">4.1 VRA - RDF/OWL</a>
  </h3>  
  <p>
	At the time of writing, there exists no commonly accepted mapping from VRA Core to RDF/OWL. However, at least two conversions have been proposed: 
  </p>
  <ul>
  	<li><a href="http://www.w3.org/2001/sw/BestPractices/MM/vra-conversion.html">RDF/OWL Representation of VRA</a> by Mark van Assem, Vrije Universiteit Amsterdam, and </li>
  	<li><a href="http://simile.mit.edu/2003/10/ontologies/vraCore3">RDF/OWL VRA ontology</a> from SIMILE.</li>
  </ul>

  <h3>
   <a name="formal-Exif">4.2 Exif - RDF/OWL</a>
  </h3>
  <p>
   Recently, there have been efforts to represent the Exif metadata tags in an RDF Schema ontology. 
   The two approaches presented here are semantically very similar, yet both are described for completeness:  
  </p>
  <ul>
   <li>
	The <a href="http://www.kanzaki.com/ns/exif">Kanzaki Exif RDF Schema</a> provides an encoding of the basic Exif metadata tags in RDF Schema,
	with relevant domains and ranges. Kanzaki Exif additionally provides an Exif conversion service,
	Exif-to-RDF, which extracts Exif metadata from images and automatically maps it to the RDF encoding.
   </li>
   <li>
	The <a href="http://sourceforge.net/projects/jpegrdf">Norm Walsh Exif RDF Schema</a> provides another encoding of the basic Exif metadata tags in RDF Schema. 
	Walsh Exif additionally provides JPEGRDF, which is a Java application that provides an API to read and manipulate Exif metadata stored 
	in JPEG images. Currently, JPEGRDF can extract, query, and augment the Exif/RDF data stored in the file headers. In particular,
	we note that the API can be used to convert existing Exif metadata in file headers to the schema defined in Walsh Exif.
   </li>
  </ul>

  <h3>
   <a name="formal-DIG35">4.3 DIG35 - RDF/OWL</a>
  </h3>
  <p>
The <a href="http://multimedialab.elis.ugent.be/users/chpoppe/Ontologies/DIG35.zip">DIG35 ontology</a>, developed by the 
<a href="http://www.mmlab.be/">IBBT Multimedia Lab</a> (University of Ghent) in the context of the W3C Multimedia Semantics Incubator Group,
provides an OWL Schema covering the entire DIG35 specification. For the formal representation of DIG35, no other ontologies have been used. 
However, relations with other ontologies such as Exif, FOAF, etc. will be created to give the DIG35 ontology a broader semantic range.
The DIG35 ontology is an OWL Full ontology.
  </p>
  
  <h3>
   <a name="formal-MPEG-7">4.4 MPEG-7 - RDF/OWL</a>
  </h3>

  <p>
  	For MPEG-7, there is no commonly agreed upon mapping to RDF/OWL. However, this section lists existing approaches regarding
	the translation of (parts of) <a href="#existing-AV-MPEG-7">MPEG-7</a> into RDF/OWL. 
  </p>
  
  <h4>
   <a name="formal-MPEG-7-Hunter">4.4.1 MPEG-7 Upper MDS Ontology by Hunter</a>
  </h4>
  <a name="tab-sum-formal-MPEG-7-Hunter" id="tab-sum-formal-MPEG-7-Hunter"></a>
  <table summary="Summary for MPEG-7 Upper MDS Ontology by Hunter" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Ontology Source</th>
		  <th align="left" rowspan="1" colspan="1">Description</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1"><a href="http://metadata.net/mpeg7">http://metadata.net/mpeg7</a></td>
		  <td rowspan="1" colspan="1">[<a href="#Hunter01">Hunter, 2001</a>]</td>
		 </tr>
	  </tbody>
  </table>
  <p>
  	Chronologically the first, this MPEG-7 ontology was initially developed in RDFS, then converted into DAML+OIL, and is now available in OWL Full.
	The ontology covers the upper part of the Multimedia Description Scheme (MDS) part of the MPEG-7 standard. It comprises about 60 classes and 40 properties. 
  </p>
  
  <h4>
   <a name="formal-MPEG-7-Tsinaraki">4.4.2 MPEG-7 MDS Ontology by Tsinaraki</a>
  </h4>
  <a name="tab-sum-formal-MPEG-7-Tsinaraki" id="tab-sum-formal-MPEG-7-Tsinaraki"></a>
  <table summary="Summary for MPEG-7 MDS Ontology by Tsinaraki" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Ontology Source</th>
		  <th align="left" rowspan="1" colspan="1">Description</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1"><a href="http://elikonas.ced.tuc.gr/ontologies/av_semantics.zip">http://elikonas.ced.tuc.gr/ontologies/av_semantics.zip</a></td>
		  <td rowspan="1" colspan="1">[<a href="#Tsinaraki04">Tsinaraki et.al., 2004</a>]</td>
		 </tr>
	  </tbody>
  </table>
  <p>
  	Starting from the ontology developed by Hunter [<a href="#Hunter01">Hunter, 2001</a>], this MPEG-7 ontology covers the full Multimedia Description Scheme (MDS) 
	part of the MPEG-7 standard. It contains 420 classes and 175 properties. It is an OWL DL ontology.
  </p>
 
  <h4>
   <a name="formal-MPEG-7-Rhizomik">4.4.3 MPEG-7 Ontology by Rhizomik</a>
  </h4>
  <a name="tab-sum-formal-MPEG-7-Rhizomik" id="tab-sum-formal-MPEG-7-Rhizomik"></a>
  <table summary="Summary for MPEG-7 Ontology by Rhizomik" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Ontology Source</th>
		  <th align="left" rowspan="1" colspan="1">Description</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1"><a href="http://rhizomik.net/ontologies/mpeg7ontos">http://rhizomik.net/ontologies/mpeg7ontos</a></td>
		  <td rowspan="1" colspan="1">[<a href="#Garcia05">Garcia et.al., 2005</a>]</td>
		 </tr>
	  </tbody>
  </table>
  <p>
  	This MPEG-7 ontology has been produced fully automatically from the MPEG-7 standard in order to give it a formal semantics.
	For this purpose, a generic XSD2OWL mapping has been implemented. The definitions of the XML Schema types and elements of the ISO standard have been converted into 
	OWL definitions according to the table given in [<a href="#Garcia05">Garcia et.al., 2005</a>]. 
	This ontology can then serve as a top ontology, easing the integration of other, more specific ontologies such as <a href="#existing-A-MB21">MusicBrainz</a>. 
	The authors have also proposed to automatically transform the XML data (instances of MPEG-7) into RDF triples (instances of this top ontology).
  </p>
  <p>
  	This ontology aims to cover the whole standard and is thus the most complete of those mentioned above. 
	It contains 2372 classes and 975 properties. It is an OWL Full ontology, since it employs the <code>rdf:Property</code>
	construct to cope with properties that have both datatype and object-type ranges.
  </p>
  
  <h4>
   <a name="formal-MPEG-7-MMO">4.4.4 Core Ontology for Multimedia (COMM)</a>
  </h4>
  <a name="tab-sum-formal-MPEG-7-MMO" id="tab-sum-formal-MPEG-7-MMO"></a>
  <table summary="Summary for Core Ontology for Multimedia (COMM)" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Ontology Source</th>
		  <th align="left" rowspan="1" colspan="1">Description</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1"><a href="http://multimedia.semanticweb.org/COMM/">http://multimedia.semanticweb.org/COMM/</a></td>
		  <td rowspan="1" colspan="1">[<a href="#Arndt07">Arndt et.al., 2007</a>]</td>
		 </tr>
	  </tbody>
  </table>
  <p>
The Core Ontology for Multimedia (COMM) [<a href="#Arndt07">Arndt et.al., 2007</a>] is based on both the MPEG-7 standard and the
DOLCE [<a href="#Masolo02">Masolo et.al., 2002</a>] foundational ontology. 
COMM is an OWL DL ontology. It is composed of multimedia patterns specializing the DOLCE design patterns for Descriptions &amp; Situations and Information Objects.
The ontology covers a very large part of the MPEG-7 standard. The explicit representation of algorithms in the multimedia patterns also allows the
multimedia analysis steps to be described, something that is not possible in MPEG-7.
  </p>
  
  <h4>
   <a name="formal-MPEG-7-aceMedia">4.4.5 aceMedia Visual Descriptor Ontology</a>
  </h4>
  <a name="tab-sum-formal-MPEG-7-aceMedia" id="tab-sum-formal-MPEG-7-aceMedia"></a>
  <table summary="Summary for aceMedia Visual Descriptor Ontology" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Ontology Source</th>
		  <th align="left" rowspan="1" colspan="1">Description</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1"><a href="http://www.acemedia.org/aceMedia/files/software/m-ontomat/acemedia-visual-descriptor-ontology-v09.rdfs">http://www.acemedia.org/aceMedia/files/software/m-ontomat/acemedia-visual-descriptor-ontology-v09.rdfs</a></td>
		  <td rowspan="1" colspan="1">[<a href="#VDO">VDO</a>]</td>
		 </tr>
	  </tbody>
  </table>
  <p>
  	The Visual Descriptor Ontology (VDO), developed within the aceMedia project for semantic multimedia content analysis and reasoning, 
	contains representations of the MPEG-7 visual descriptors and models concepts and properties that describe the visual characteristics of objects. 
	The term descriptor refers to a specific representation of a visual feature (color, shape, texture, etc.) that defines the syntax and the 
	semantics of a specific aspect of the feature. For example, the dominant color descriptor specifies, among other things, the number and value of the 
	dominant colors present in a region of interest and the percentage of pixels that each associated color value has. 
	Although the construction of the VDO is tightly coupled with the specification of the MPEG-7 Visual Part, several modifications were carried 
	out in order to adapt the XML Schema provided by MPEG-7 to an ontology and to the data type representations available in RDF Schema. 
  </p>
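  <p>
   The following Turtle fragment sketches how such a dominant color description could look as instance data; the class and property names are hypothetical and do not reproduce the actual VDO terms:
  </p>

```turtle
# Hypothetical sketch of a dominant colour descriptor attached to an
# image region; all names below are illustrative only.
@prefix ex: <http://example.org/vdo-sketch#> .

ex:region1 ex:hasDescriptor ex:dc1 .

ex:dc1 a ex:DominantColorDescriptor ;
       ex:colorValue      "128 64 32" ;   # one dominant colour (RGB)
       ex:colorPercentage "0.45" .        # fraction of pixels with that colour
```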

  <h3>
   <a name="formal-Mindswap">4.5 Mindswap Image Region Ontology</a>
  </h3>
  <a name="tab-sum-formal-Mindswap" id="tab-sum-formal-Mindswap"></a>
  <table summary="Summary for Mindswap Image Region Ontology" border="1">
	  <tbody>
		<tr>
		  <th align="left" rowspan="1" colspan="1">Ontology Source</th>
		  <th align="left" rowspan="1" colspan="1">Description</th>
		 </tr>
		 <tr>
		  <td rowspan="1" colspan="1"><a href="http://www.mindswap.org/2005/owl/digital-media">http://www.mindswap.org/2005/owl/digital-media</a></td>
		  <td rowspan="1" colspan="1">[<a href="#Halaschek05">Halaschek-Wiener et al., 2005</a>]</td>
		 </tr>
	  </tbody>
  </table>
  <p>
   The Mindswap digital-media ontology is an OWL ontology which models concepts and relations covering various aspects of the digital media domain.
   The main purpose of the ontology is to provide the expressiveness to assert what is depicted within various types of digital media, 
   including images and videos. The ontology defines concepts such as image, video, video frame and region, as well as relations such as depicts,
   regionOf, etc. Using these concepts and their associated properties, it is therefore possible to assert that an image or image region depicts 
   some instance.
  </p>
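  <p>
   A minimal annotation in the spirit of this ontology could look as follows; the namespace and exact local names are assumptions based on the concepts and relations named above, and should be checked against the ontology itself:
  </p>

```turtle
# Sketch of an image-region annotation: a region of a photo depicts a person.
@prefix dm: <http://www.mindswap.org/2005/owl/digital-media#> .
@prefix ex: <http://example.org/photos#> .

ex:photo42 a dm:Image .

ex:region1 a dm:Region ;
           dm:regionOf ex:photo42 ;    # the region is part of the image
           dm:depicts  ex:JohnSmith .  # assert what the region shows
```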
   
  <h3>
   <a name="formal-VRA">4.6 Audio Ontologies</a>
  </h3>  
  <p>
The audio community is quite active in disseminating Semantic Web technologies; the known formalisations in the realm of audio (mainly music) are: 
  </p>
  <ul>
   <li>
<a href="http://musicontology.com/">Music Ontology Specification</a> by Frederick Giasson and Yves Raimond (<a href="http://www.zitgist.com/">Zitgist</a>). 
The Music Ontology Specification provides the main concepts and properties for describing music (i.e. artists, albums and tracks) on the Semantic Web.
It is based on (or inspired by) the <a href="#existing-A-MB21">MusicBrainz</a> editorial metadata.
   </li>
   <li>
Kanzaki's <a href="http://www.kanzaki.com/ns/music">music vocabulary</a>. 
A vocabulary to describe classical music and performances. Classes (categories) for musical work, event, instrument and performers, as well as related properties are defined.	
   </li>
   <li>
<a href="http://foafing-the-music.iua.upf.edu/">Music Recommendation</a> by Oscar Celma, Universitat Pompeu Fabra. 
The Foafing the Music system [<a href="#Celma06">Celma, 2006</a>] uses the Friend of a Friend (FOAF) and RDF Site Summary (RSS) vocabularies for recommending music to a user,
depending on the user's musical tastes and listening habits. It comprises a simple OWL DL ontology that defines basic information about artists 
(and their relationships) and songs. It includes some descriptors automatically extracted from the audio (beats per minute, key and mode, intensity, etc.).
  </li>
  </ul>
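  <p>
   As an example of the first of these vocabularies, the following Turtle fragment sketches an artist/album/track description; the class and property names follow the Music Ontology Specification as we understand it, but should be verified against the current release:
  </p>

```turtle
# Sketch of a Music Ontology description of an artist, an album and a track.
@prefix mo:   <http://purl.org/ontology/mo/> .
@prefix dc:   <http://purl.org/dc/elements/1.1/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix ex:   <http://example.org/music#> .

ex:artist1 a mo:MusicArtist ;
           foaf:name "Some Artist" .

ex:album1  a mo:Record ;
           dc:title   "Some Album" ;
           foaf:maker ex:artist1 ;
           mo:track   ex:track1 .

ex:track1  a mo:Track ;
           dc:title "Some Track" .
```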
 
  <!-- ===================================================================== -->
  <h2>
   <a name="references" id="references">References</a>
  </h2>

  <dl>
  	
   <dt>
    <a id="AAF" name="AAF">[AAF]</a>
   </dt>
   <dd>
    Advanced Media Workflow Association (formerly AAF Association),
    <a href="http://www.aafassociation.org/html/techinfo/index.html#aaf_specifications">AAF Specifications</a>
   </dd>

   <dt>
    <a id="Arndt07" name="Arndt07">[Arndt et al., 2007]</a>
   </dt>
   <dd>
R. Arndt, R. Troncy, S. Staab, L. Hardman and M. Vacura.
COMM: Designing a Well-Founded Multimedia Ontology for the Web.
In <a href="http://iswc2007.semanticweb.org/">6th International Semantic Web Conference (ISWC'2007)</a>, 
Busan, Korea, November 11-15, 2007.
   </dd>
   
   <dt>
    <a id="Celma06" name="Celma06">[Celma, 2006]</a>
   </dt>
   <dd>
     O. Celma.
	 <a href="http://iswc2006.semanticweb.org/items/swchallenge_celma.pdf">
     	Foafing the Music: Bridging the Semantic Gap in Music Recommendation
	 </a>. 
	Semantic Web Challenge 2006.
   </dd>
      
   <dt>
    <a id="DAVP" name="DAVP">[DAVP]</a>
   </dt>
   <dd>
   W. Bailer and P. Schallauer. <a href="http://www.joanneum.at/uploads/tx_publicationlibrary/img3052.pdf">The Detailed Audiovisual Profile: Enabling Interoperability between MPEG-7 Based Systems</a>. 
   In Proc. of 12th International Multi-Media Modeling Conference, Beijing, CN, 2006. 
   </dd>
   
   <dt>
    <a id="DIG35" name="DIG35">[DIG35]</a>
   </dt>
   <dd>
    Digital Imaging Group (DIG),
    <a href="http://xml.coverpages.org/FU-Berlin-DIG35-v10-Sept00.pdf">DIG35 Specification - Metadata for Digital Images - Version 1.0 August 30, 2000</a>
   </dd>

   <dt>
    <a id="DublinCore" name="DublinCore">[Dublin Core]</a>
   </dt>
   <dd>
    The Dublin Core Metadata Initiative,
    <a href="http://dublincore.org/documents/dces/">Dublin Core Metadata Element Set, Version 1.1: Reference Description</a> (2006-12-18)
   </dd>

   <dt>
    <a id="EBUPMETA" name="EBUPMETA">[EBU P/Meta]</a>
   </dt>
   <dd>
    European Broadcasting Union,
    <a href="http://www.ebu.ch/CMSimages/en/tec_doc_t3295_v0102_tcm6-40957.pdf">EBU Tech 3295: The EBU Metadata Exchange Scheme version 1.2 - Publication Release</a>
   </dd>


   <dt>
    <a id="Exif" name="Exif">[Exif]</a>
   </dt>
   <dd>
    Standard of Japan Electronics and Information Technology Industries Association,
    <a href="http://www.digicamsoft.com/exif22/exif22/html/exif22_1.htm">Exchangeable image file format for digital still cameras: Exif Version 2.2</a>
   </dd>

   <dt>
    <a id="Garcia05" name="Garcia05">[Garcia et al., 2005]</a>
   </dt>
   <dd>
     R. Garcia and O. Celma.
     <a href="http://rhizomik.net/content/roberto/papers/rgocsemannot2005.pdf">
     Semantic Integration and Retrieval of Multimedia Metadata
     </a>. 
In Proc. of the 5th International Workshop on Knowledge Markup and Semantic Annotation (SemAnnot 2005), Galway, Ireland, 7 November 2005.
   </dd>

   <dt>
    <a id="Halaschek05" name="Halaschek05">[Halaschek-Wiener et al., 2005]</a>
   </dt>
   <dd>
    C. Halaschek-Wiener, A. Schain, J. Golbeck, M. Grove, B. Parsia and J. Hendler. 
	<a href="http://www.mindswap.org/~chris/publications/PhotoStuffSemannot2005.pdf">
    	A Flexible Approach for Managing Digital Images on the Semantic Web
	</a>. 
	In Proc. of the 5th International Workshop on Knowledge Markup and Semantic Annotation (SemAnnot 2005), Galway, Ireland, 7 November 2005. 
   </dd>


   <dt>
    <a id="Hardman05" name="Hardman05">[Hardman, 2005]</a>
   </dt>
   <dd>
	 Lynda Hardman.
	 <a href="http://db.cwi.nl/rapporten/abstract.php?abstractnr=1896">
	 	Canonical Processes of Media Production
	 </a>.
	 In Proc. of the ACM workshop on Multimedia for human communication. ACM Press, 2005. 
   </dd>

   <dt>
    <a id="Hunter01" name="Hunter01">[Hunter, 2001]</a>
   </dt>
   <dd>
    J. Hunter. 
    <a href="http://citeseer.ist.psu.edu/hunter01adding.html">
    	Adding  Multimedia to the Semantic Web &#8212; Building an MPEG-7 Ontology
	</a>.
	In 
	<a href="http://www.semanticweb.org/SWWS/">
		International Semantic Web Working Symposium (SWWS 2001)
	</a>, Stanford University,  California, USA, July 30 - August 1, 2001
   </dd>
   
    <dt>
    <a id="ID3" name="ID3">[ID3]</a>
   </dt>
   <dd>
    Martin Nilsson et al.,
    <a href="http://www.id3.org/Developer_Information">ID3v2 documents</a>
   </dd>

   <dt>
    <a id="Masolo02" name="Masolo02">[Masolo et al., 2002]</a>
   </dt>
   <dd>
     C. Masolo and S. Borgo and A. Gangemi and N. Guarino and A. Oltramari and L. Schneider.
	 The WonderWeb Library of Foundational Ontologies (WFOL). 
	 Technical Report, WonderWeb Deliverable 17, 2002.
   </dd>

   <dt>
    <a id="MPEG-7" name="MPEG-7">[MPEG-7]</a>
   </dt>
   <dd>
    Information Technology - Multimedia Content Description Interface (MPEG-7).
    Standard No. ISO/IEC 15938:2001, International Organization for Standardization (ISO), 2001
   </dd>

   <dt>
    <a id="MPEG-21" name="MPEG-21">[MPEG-21]</a>
   </dt>
   <dd>
    Information Technology - Multimedia framework (MPEG-21).
    Standard ISO/IEC TR 21000-1:2004, International Organization for Standardization (ISO), 2004
   </dd>
      
   <dt>
    <a id="MPEG-7-Profiles" name="MPEG-7-Profiles">[MPEG-7-Profiles]</a>
   </dt>
   <dd>
    Information Technology - Multimedia Content Description Interface -- Part 9: Profiles and levels.
    Standard No. ISO/IEC 15938-9:2005, International Organization for Standardization (ISO), 2005
   </dd>
   	  
   <dt>
    <a id="MusicBrainz" name="MusicBrainz">[MusicBrainz]</a>
   </dt>
   <dd>
    MusicBrainz (MetaBrainz Foundation),
    <a href="http://musicbrainz.org/MM/">MusicBrainz Metadata Initiative 2.1</a>
   </dd>

   <dt>
    <a id="MusicXML" name="MusicXML">[MusicXML]</a>
   </dt>
   <dd>
    Recordare,
    <a href="http://www.recordare.com/xml.html">MusicXML Definition Version 2.0</a>
   </dd>

   <dt>
    <a id="MXF" name="MXF">[MXF]</a>
   </dt>
   <dd>
    SMPTE,
    Material Exchange Format (MXF) - File Format Specification (Standard). SMPTE 377M, 2004.
   </dd>
   
   <dt>
    <a id="MXF-DMS-1" name="MXF-DMS-1">[MXF-DMS-1]</a>
   </dt>
   <dd>
    SMPTE,
    Material Exchange Format (MXF) - Descriptive Metadata Scheme-1. SMPTE 380M, 2004.
   </dd>
   
   <dt>
    <a id="MXF-RP210" name="MXF-RP210">[MXF-RP210]</a>
   </dt>
   <dd>
    SMPTE,
    Metadata Dictionary Registry of Metadata Element Descriptions. SMPTE RP210.8, 2004.
   </dd>   

   <dt>
    <a id="Ossenbruggen04" name="Ossenbruggen04">[Ossenbruggen, 2004]</a>
   </dt>
   <dd>
    J. van Ossenbruggen, F. Nack, and L. Hardman. That Obscure Object of Desire: Multimedia Metadata on the Web (Part I). In:
    IEEE Multimedia 11(4), pp. 38-48 October-December 2004
   </dd>

   <dt>
    <a id="Nack05" name="Nack05">[Nack, 2005]</a>
   </dt>
   <dd>
    F. Nack, J. van Ossenbruggen, and L. Hardman. That Obscure Object of Desire: Multimedia Metadata on the Web (Part II). In:
    IEEE Multimedia 12(1), pp. 54-63 January-March 2005
   </dd>

   <dt>
    <a id="Z3987" name="Z3987">[NISO Z39.87]</a>
   </dt>
   <dd>
    American National Standards Institute,
    <a href="http://www.niso.org/standards/resources/Z39-87-2006.pdf">ANSI/NISO Z39.87-2006: Data Dictionary - Technical Metadata for Digital Still Images</a>
   </dd>

   <dt>
    <a id="NewsML" name="NewsML">[NewsML-G2]</a>
   </dt>
   <dd>
    IPTC,
    <a href="http://www.iptc.org/NAR/">News Architecture (NAR) for G2-Standards Specifications (released 30th May, 2007)</a>
   </dd>

  <dt>
    <a name="OWL-Guide" id="OWL-Guide">[OWL Guide]</a>
   </dt>
   <dd>
     <cite>
       <a href="http://www.w3.org/TR/2004/REC-owl-guide-20040210/">
        OWL Web Ontology Language Guide</a></cite>, Michael K.
        Smith, Chris Welty, and Deborah L. McGuinness, Editors, W3C
        Recommendation, 10 February 2004,
        <a href="http://www.w3.org/TR/owl-guide/">http://www.w3.org/TR/owl-guide/</a>
   </dd>

   <dt>
    <a name="OWL" id="OWL">[OWL Semantics and Abstract Syntax]</a></dt>
   <dd>
    <cite>
     <a href=
      "http://www.w3.org/TR/2004/REC-owl-semantics-20040210/">OWL Web
     Ontology Language Semantics and Abstract Syntax</a></cite>, Peter
     F. Patel-Schneider, Patrick Hayes, and Ian Horrocks, Editors, W3C
     Recommendation 10 February 2004,
    <a href="http://www.w3.org/TR/owl-semantics/">http://www.w3.org/TR/owl-semantics/</a></dd>

   <dt>
    <a id="PhotoRDF" name="PhotoRDF">[PhotoRDF]</a>
   </dt>
   <dd>
    W3C Note 19 April 2002,
    <a href="http://www.w3.org/TR/2002/NOTE-photo-rdf-20020419">Describing and retrieving photos using RDF and HTTP</a>
   </dd>

   <dt>
    <a id="PS" name="PS">[PhotoStuff]</a>
   </dt>
   <dd>
    PhotoStuff Project, <a
    href="http://www.mindswap.org/2003/PhotoStuff/">http://www.mindswap.org/2003/PhotoStuff/</a>
   </dd>

   <dt><a id="RDF-Primer" name="RDF-Primer">[RDF Primer]</a></dt>
   <dd>
     <cite><a href="http://www.w3.org/TR/2004/REC-rdf-primer-20040210/">RDF
   Primer</a></cite>, F. Manola, E. Miller, Editors, W3C Recommendation, 10 February 2004,
   <a href="http://www.w3.org/TR/rdf-primer/">http://www.w3.org/TR/rdf-primer/</a>
   </dd>

   <dt><a id="RDF" name="RDF">[RDF Syntax]</a></dt>
   <dd>
    <cite>
     <a href="http://www.w3.org/TR/2004/REC-rdf-syntax-grammar-20040210/">
      RDF/XML Syntax Specification (Revised)</a>
    </cite>, Dave Beckett,
      Editor, W3C Recommendation, 10 February 2004,
      <a href="http://www.w3.org/TR/rdf-syntax-grammar/">http://www.w3.org/TR/rdf-syntax-grammar/</a>
   </dd>

   <dt>
    <a id="SMIL" name="SMIL">[SMIL]</a>
   </dt>
   <dd>
    W3C Recommendation 13 December 2005,
    <a href="http://www.w3.org/TR/2005/REC-SMIL2-20051213/">Synchronized Multimedia Integration Language (SMIL 2.1)</a> -
	Chapter 8. The SMIL 2.1 Metainformation Module
   </dd>

   <dt>
    <a id="Smit06" name="Smit06">[Smith et al., 2006]</a>
   </dt>
   <dd>
   J. R. Smith and P. Schirling. Metadata Standards Roundup. IEEE MultiMedia, vol. 13, no. 2, pp. 84-88, Apr-Jun, 2006. 
   </dd>
 

   <dt>
    <a id="SVG" name="SVG">[SVG]</a>
   </dt>
   <dd>
   	W3C Recommendation 14 January 2003,
    <a href="http://www.w3.org/TR/2003/REC-SVG11-20030114/">Scalable Vector Graphics (SVG) 1.1 Specification</a> -
	Chapter 21. Metadata    
   </dd>

   <dt>
    <a id="Tsinaraki04" name="Tsinaraki04">[Tsinaraki et al., 2004]</a>
   </dt>
   <dd>
     C. Tsinaraki, P. Polydoros and S. Christodoulakis. <a
    href="http://www.music.tuc.gr/Staff/Director/Publications/publ_files/C_TSPC_CIVR_2004.pdf">
    Interoperability support for Ontology-based Video Retrieval Applications</a>. 
	In Proc. of 3rd International Conference on Image and Video Retrieval (CIVR 2004), Dublin, Ireland, 21-23 July 2004. 
   </dd>

   <dt>
    <a id="TVAnytime" name="TVAnytime">[TVAnytime]</a>
   </dt>
   <dd>
    IPTC,
    <a href="http://www.tv-anytime.org/workinggroups/wg-md.html#docs">WG Metadata - Important Documents</a>
   </dd>
 
   <dt>
    <a id="VDO" name="VDO">[VDO]</a>
   </dt>
   <dd>
    aceMedia Visual Descriptor Ontology, <a
    href="http://www.acemedia.org/aceMedia/reference/resource/index.html">
    http://www.acemedia.org/aceMedia/reference/resource/index.html</a>
   </dd>

   <dt>
    <a id="VraCore" name="VraCore">[VRA Core]</a>
   </dt>
   <dd>
    Visual Resources Association Data Standards Committee,
     VRA Core Categories, Version 4.0,
    <a href="http://www.vraweb.org/projects/vracore4/index.html">
     http://www.vraweb.org/projects/vracore4/index.html</a>
   </dd>
  
   <dt>
     <a id="XML-NS" name="XML-NS">[XML NS]</a>
    </dt>
    <dd>
     <cite>
     <a href="http://www.w3.org/TR/1999/REC-xml-names-19990114/">Namespaces
     in XML</a></cite>, Bray T., Hollander D., Layman A.
     (Editors), World Wide Web Consortium, 14 January 1999, 
<a href="http://www.w3.org/TR/REC-xml-names/">http://www.w3.org/TR/REC-xml-names/</a>
    </dd>

   <dt>
    <a id="XMP" name="XMP">[XMP]</a>
   </dt>
   <dd>
    Adobe,
    <a href="http://partners.adobe.com/public/developer/en/xmp/sdk/XMPspecification.pdf">XMP Specification</a>
   </dd>

  </dl>

  <!-- ======================================================================== -->

  <h2>
   <a id="acknowledgments" name="acknowledgments">Acknowledgments</a>
  </h2>

  <p>
   The editor would like to thank 
<a href="mailto:werner.bailer@joanneum.at">Werner Bailer</a> (JOANNEUM RESEARCH),
<a href="http://rhizomik.net/~roberto/">Roberto Garcia Gonzalez</a> (Rhizomik), 
<a href="http://www-itec.uni-klu.ac.at/~timse/">Christian Timmerer</a> (ITEC, Klagenfurt University)
and the contributing members of the XG for their feedback on earlier versions of this document.
  </p>

  <hr/>
  <p>$Id: Overview.html,v 1.5 2007/08/09 09:53:58 rtroncy Exp $</p>

  <!-- ======================================================================== -->

 </body>
</html>