<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE html
PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" lang="en-us"><head><title>Protocol for Media Fragments 1.0 Resolution in HTTP</title><style type="text/css">
/**/
code { font-family: monospace; }
div.constraint,
div.issue,
div.note,
div.notice { margin-left: 2em; }
ol.enumar { list-style-type: decimal; }
ol.enumla { list-style-type: lower-alpha; }
ol.enumlr { list-style-type: lower-roman; }
ol.enumua { list-style-type: upper-alpha; }
ol.enumur { list-style-type: upper-roman; }
dt.label { display: run-in; }
li, p { margin-top: 0.3em;
margin-bottom: 0.3em; }
.diff-chg { background-color: yellow; }
.diff-del { background-color: red; text-decoration: line-through;}
.diff-add { background-color: lime; }
table { empty-cells: show; }
table caption {
font-weight: normal;
font-style: italic;
text-align: left;
margin-bottom: .5em;
}
div.issue {
color: red;
}
.rfc2119 {
font-variant: small-caps;
}
div.exampleInner pre { margin-left: 1em;
margin-top: 0em; margin-bottom: 0em}
div.exampleOuter {border: 4px double gray;
margin: 0em; padding: 0em}
div.exampleInner { background-color: #d5dee3;
border-top-width: 4px;
border-top-style: double;
border-top-color: #d3d3d3;
border-bottom-width: 4px;
border-bottom-style: double;
border-bottom-color: #d3d3d3;
padding: 4px; margin: 0em }
div.exampleWrapper { margin: 4px }
div.exampleHeader { font-weight: bold;
margin: 4px}
div.boxedtext {
border: solid #bebebe 1px;
margin: 2em 1em 1em 2em;
}
span.practicelab {
margin: 1.5em 0.5em 1em 1em;
font-weight: bold;
font-style: italic;
}
span.practicelab { background: #dfffff; }
span.practicelab {
position: relative;
padding: 0 0.5em;
top: -1.5em;
}
p.practice
{
margin: 1.5em 0.5em 1em 1em;
}
@media screen {
p.practice {
position: relative;
top: -2em;
padding: 0;
margin: 1.5em 0.5em -1em 1em;
}
}
/**/ </style><link rel="stylesheet" type="text/css" href="http://www.w3.org/StyleSheets/TR/W3C-WD.css"/></head><body><div class="head"><p><a href="http://www.w3.org/"><img src="http://www.w3.org/Icons/w3c_home" alt="W3C" height="48" width="72"/></a></p>
<h1><a name="title" id="title"/>Protocol for Media Fragments 1.0 Resolution in HTTP</h1>
<h2><a name="w3c-doctype" id="w3c-doctype"/>W3C Working Draft 1 December 2011</h2><dl><dt>This version:</dt><dd>
<a href="http://www.w3.org/TR/2011/WD-media-frags-recipes-20111201/">http://www.w3.org/TR/2011/WD-media-frags-recipes-20111201/</a>
</dd><dt>Latest version:</dt><dd><a href="http://www.w3.org/TR/media-frags-recipes/">http://www.w3.org/TR/media-frags-recipes/</a></dd><dt>Editors:</dt><dd><a href="http://www.eurecom.fr/~troncy/">
Raphaël Troncy
</a>, EURECOM</dd><dd><a href="http://multimedialab.elis.ugent.be/emannens">
Erik Mannens
</a>, IBBT Multimedia Lab, University of Ghent</dd><dd><a href="http://blog.gingertech.net/">
Silvia Pfeiffer
</a>, W3C Invited Expert</dd><dd><a href="http://multimedialab.elis.ugent.be/dvdeurse">
Davy Van Deursen
</a>, IBBT Multimedia Lab, University of Ghent</dd><dt>Contributors:</dt><dd><a href="http://www.deri.ie/about/team/member/Michael_Hausenblas/">
Michael Hausenblas
</a>, DERI, National University of Ireland, Galway</dd><dd><a href="mailto:philipj@opera.com">
Philip Jägenstedt
</a>, Opera Software</dd><dd><a href="http://www.cwi.nl/~jack/">
Jack Jansen
</a>, CWI, Centrum Wiskunde & Informatica, Amsterdam</dd><dd><a href="mailto:ylafon@w3.org">
Yves Lafon
</a>, W3C</dd><dd><a href="http://www.kfish.org/">
Conrad Parker
</a>, W3C Invited Expert</dd><dd><a href="http://blog.tomayac.com/">
Thomas Steiner
</a>, Google, Inc.</dd></dl><p class="copyright"><a href="http://www.w3.org/Consortium/Legal/ipr-notice#Copyright">Copyright</a> © 2011 <a href="http://www.w3.org/"><acronym title="World Wide Web Consortium">W3C</acronym></a><sup>®</sup> (<a href="http://www.csail.mit.edu/"><acronym title="Massachusetts Institute of Technology">MIT</acronym></a>, <a href="http://www.ercim.eu/"><acronym title="European Research Consortium for Informatics and Mathematics">ERCIM</acronym></a>, <a href="http://www.keio.ac.jp/">Keio</a>), All Rights Reserved. W3C <a href="http://www.w3.org/Consortium/Legal/ipr-notice#Legal_Disclaimer">liability</a>, <a href="http://www.w3.org/Consortium/Legal/ipr-notice#W3C_Trademarks">trademark</a> and <a href="http://www.w3.org/Consortium/Legal/copyright-documents">document use</a> rules apply.</p></div><hr/><div>
<h2><a name="abstract" id="abstract"/>Abstract</h2><p>
This document complements the Media Fragments 1.0 specification. It describes various recipes for processing media fragment URIs when they are used
over the HTTP protocol.
</p></div><div>
<h2><a name="status" id="status"/>Status of this Document</h2><p>
<em>
This section describes the status of this document at the
time of its publication. Other documents may supersede this
document. A list of current W3C publications and the latest revision
of this technical report can be found in the <a href="http://www.w3.org/TR/">W3C technical reports index</a> at
http://www.w3.org/TR/.
</em>
</p><p>
This is the <a href="http://www.w3.org/2005/10/Process-20051014/tr.html#first-wd">
First
Public Working Draft
</a> of the Protocol for Media Fragments 1.0 Resolution in HTTP document. It has been
produced by the <a href="http://www.w3.org/2008/WebVideo/Fragments/">
Media
Fragments Working Group
</a>, which is part of the
<a href="http://www.w3.org/2008/WebVideo/">W3C Video on the Web Activity</a>.
The Working Group intends to publish this document as a Working Group Note, as a starting point for future work.
</p><p>
Please send comments about this document to the <a href="mailto:public-media-fragment@w3.org">public-media-fragment@w3.org</a>
mailing list (<a href="http://lists.w3.org/Archives/Public/public-media-fragment/">
public
archive
</a>).
</p><p>
Publication as a Working Draft does not imply endorsement by the
W3C Membership. This is a draft document and may be updated,
replaced or obsoleted by other documents at any time. It is
inappropriate to cite this document as other than work in
progress.
</p><p> This document was produced by a group operating under the <a href="http://www.w3.org/Consortium/Patent-Policy-20040205/">5 February 2004 W3C Patent Policy</a>. W3C maintains a <a rel="disclosure" href="http://www.w3.org/2004/01/pp-impl/42785/status">public list of any patent disclosures</a> made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains <a href="http://www.w3.org/Consortium/Patent-Policy-20040205/#def-essential">Essential Claim(s)</a> must disclose the information in accordance with <a href="http://www.w3.org/Consortium/Patent-Policy-20040205/#sec-Disclosure">section 6 of the W3C Patent Policy</a>. </p></div><div class="toc">
<h2><a name="contents" id="contents"/>Table of Contents</h2><p class="toc">1 <a href="#introduction">Introduction</a><br/>
2 <a href="#processing-protocol-frag">Protocol for URI fragment Resolution in HTTP</a><br/>
2.1 <a href="#processing-protocol-UA-mapped">UA mapped byte ranges</a><br/>
2.1.1 <a href="#processing-protocol-UA-mapped-new">UA requests URI fragment for the first time</a><br/>
2.1.2 <a href="#processing-protocol-UA-mapped-unchanged">UA requests URI fragment it already has buffered</a><br/>
2.1.3 <a href="#processing-protocol-UA-mapped-changed">UA requests URI fragment of a changed resource</a><br/>
2.2 <a href="#processing-protocol-Server-mapped">Server mapped byte ranges</a><br/>
2.2.1 <a href="#processing-protocol-server-mapped-default">Server mapped byte ranges with corresponding binary data</a><br/>
2.2.2 <a href="#processing-protocol-server-mapped-setup">Server mapped byte ranges with corresponding binary data and codec setup data</a><br/>
2.2.3 <a href="#processing-protocol-server-mapped-proxy">Proxy cacheable server mapped byte ranges</a><br/>
3 <a href="#processing-protocol-query">Protocol for URI query Resolution in HTTP</a><br/>
</p>
<h3><a name="appendices" id="appendices"/>Appendices</h3><p class="toc">A <a href="#rtsp-media-fragment-processing">Processing media fragment URIs in RTSP</a> (Non-Normative)<br/>
A.1 <a href="#mapping-mf-to-rtsp-methods">How to map Media Fragment URIs to RTSP protocol methods</a><br/>
A.1.1 <a href="#rtsp-mf-dimensions">Dealing with the media fragment URI dimensions in RTSP</a><br/>
A.1.1.1 <a href="#rtsp-temporal">Temporal Media Fragment URIs</a><br/>
A.1.1.2 <a href="#rtsp-track">Track Media Fragment URIs</a><br/>
A.1.1.3 <a href="#rtsp-spatial">Spatial Media Fragment URIs</a><br/>
A.1.1.4 <a href="#rtsp-id">Id Media Fragment URIs</a><br/>
A.1.2 <a href="#rtsp-combined-mf-dimensions">Putting the media fragment URI dimensions together in RTSP</a><br/>
A.1.3 <a href="#rtsp-caching">Caching and RTSP for media fragment URIs</a><br/>
B <a href="#acknowledgments">Acknowledgements</a> (Non-Normative)<br/>
</p></div><hr/><div class="body"><div class="div1">
<h2><a name="introduction" id="introduction"/>1 Introduction</h2><p>
Audio and video resources on the World Wide Web are currently treated as "foreign" objects, which can only be embedded using a plugin that is capable of decoding and interacting with the media resource. Specific media servers are generally required to provide for server-side features such as direct access to time offsets into a video without the need to retrieve the entire resource. Support for such media fragment access varies between different media formats and inhibits standard means of dealing with such content on the Web.
</p><p>
This specification provides for a media-format independent, standard means of addressing media fragments on the Web using Uniform Resource Identifiers (URI).
In the context of this document, media fragments are regarded along four different dimensions: temporal, spatial, track, and id.
The id dimension allows a temporal fragment to be marked with a name and then addressed through a URI using that name.
The specified addressing schemes apply mainly to audio and video resources; spatial fragment addressing may also be used on images.
</p><p>
The aim of this specification is to enhance the Web infrastructure for supporting the addressing and retrieval of subparts of time-based Web resources, as well as the automated processing of such subparts for reuse. Example uses are the sharing of such fragment URIs with friends via email, the automated creation of such fragment URIs in a search engine interface, or the annotation of media fragments with RDF. Such use case examples as well as other side conditions on this specification and a survey of existing media fragment addressing approaches are provided in the requirements <cite><a href="#">mf-req</a></cite> document that accompanies this specification document.
</p><p>
The media fragment URIs specified in this document have been implemented and demonstrated to work with media resources over the HTTP protocol.
This specification does not define, in its normative sections, the protocol aspects of handling a media fragment over RTSP. We expect the media fragment URI syntax to be
generic; a possible mapping between this syntax and RTSP messages can be found in the appendix <a href="#rtsp-media-fragment-processing"><b>A Processing media fragment URIs in RTSP</b></a>.
Existing media formats in their current representations and implementations provide varying degrees of support for this specification.
It is expected that over time, media formats, media players, Web Browsers, media and Web servers, as well as Web proxies will be extended
to adhere to the full specification. This specification will help make video a first-class citizen of the World Wide Web.
</p></div><div class="div1">
<h2><a name="processing-protocol-frag" id="processing-protocol-frag"/>2 Protocol for URI fragment Resolution in HTTP</h2><p>
This section defines the protocol steps in HTTP <cite><a href="#">rfc2616</a></cite> to resolve and deliver a media fragment specified as a URI fragment.
</p><p>
In a context where the MIME type of the requested resource is known, various recipes are proposed depending on the dimension
addressed in the media fragment URI, the container and codec formats used by the media resource, or advanced processing features
implemented by the User Agent.
Hence, if the container format of the media resource is fully indexable (e.g. MP4, Ogg or WebM) and the time dimension is requested
in the media fragment URI, the User Agent MAY favour the recipe described in section <a href="#processing-protocol-UA-mapped"><b>2.1 UA mapped byte ranges</b></a>,
since it is in a position to directly issue a normal Range request expressed in terms of byte ranges. On the other hand, if the
container format of the media resource is a legacy format such as AVI, the User Agent MAY favour the recipe described in section
<a href="#processing-protocol-Server-mapped"><b>2.2 Server mapped byte ranges</b></a>, issuing a Range request expressed with a custom unit such as seconds and waiting for the
server to provide the mapping in terms of byte ranges.
</p><p>
The User Agent MAY also implement a so-called optimistic processing of URI fragments in particular cases where the MIME type of the resource
requested is not yet known. Hence, if a URI fragment occurs within a particular context such as the value of the @src attribute of a
media element (audio, video or source) and if the time dimension is requested in the media fragment URI, the User Agent MAY follow the
scenario specified in section <a href="#processing-protocol-Server-mapped"><b>2.2 Server mapped byte ranges</b></a> and directly issue a range request using custom units,
assuming that the requested resource is likely to be a media resource. If the MIME type of this resource turns out to be a media type,
the server SHOULD interpret the Range request as specified in section <a href="#processing-protocol-Server-mapped"><b>2.2 Server mapped byte ranges</b></a>.
Otherwise it SHOULD just ignore the Range header.
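</p><p>
The following non-normative sketch illustrates this optimistic behaviour using Python's standard http.client module. The host and resource are the ones used in the examples of this document; a real User Agent would integrate such logic with its media pipeline and error handling.
</p><div class="exampleInner"><pre>
# Non-normative sketch: "optimistic" processing of http://www.example.com/video.ogv#t=10,20
# when the MIME type of the resource is not yet known.
import http.client

conn = http.client.HTTPConnection("www.example.com")
conn.request("GET", "/video.ogv", headers={
    "Accept": "video/*",
    "Range": "t:npt=10-20",          # custom time unit, see section 2.2
})
resp = conn.getresponse()

if resp.status == 206 and resp.getheader("Content-Range-Mapping"):
    # Media-fragment-aware server: the body holds only the mapped byte range(s).
    fragment_bytes = resp.read()
elif resp.status == 200:
    # The server ignored the Range header: the complete resource is returned.
    whole_resource = resp.read()
</pre></div><p>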
</p><table border="1" summary="Editorial note: Silvia"><tr><td align="left" valign="top" width="50%"><b>Editorial note: Silvia</b></td><td align="right" valign="top" width="50%"> </td></tr><tr><td colspan="2" align="left" valign="top">
<p>
If the UA needs to retrieve a large part of the resource or even the full resource, it will probably decide to make
multiple range requests rather than a single one. If the resource is, however, small, it may decide to just retrieve the full
resource without a range request. The UA should make this choice given context information, e.g. if it knows that it will be a
lot of data, it will retrieve it in smaller chunks. If it chooses to request the full resource in one go and not make use of a
Range request, the result will be a 200 rather than a 206.
</p>
</td></tr></table><div class="div2">
<h3><a name="processing-protocol-UA-mapped" id="processing-protocol-UA-mapped"/>2.1 UA mapped byte ranges</h3><table border="1" summary="Editorial note"><tr><td align="left" valign="top" width="50%"><b>Editorial note</b></td><td align="right" valign="top" width="50%"> </td></tr><tr><td colspan="2" align="left" valign="top">
<p>This section is ready to implement.</p>
</td></tr></table><p>
The optimal case is a user agent that knows how to map media fragments to byte ranges. This is typically the case where a user agent has already downloaded those parts of a media resource that allow it to compute or guess the mapping, e.g. the headers of a resource, or an index of a resource.
</p><p>
In this case, the HTTP exchanges are exactly the same as for any other Web resource where byte ranges are requested <cite><a href="#">rfc2616</a></cite>.
</p><p>
How the UA retrieves the byte ranges is dependent on the media type of the media resource.
We show examples here with only one byte range retrieval per time range; in practice,
several such retrievals may be necessary to acquire the correct
time range.
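</p><p>
As a non-normative illustration of the mapping step itself, the sketch below derives a byte range from a hypothetical seek-point index. The index values are invented so that the result matches the byte ranges used in the exchanges below; the actual mapping logic is format-specific.
</p><div class="exampleInner"><pre>
# Non-normative sketch: map a #t=start,end fragment to a covering byte range,
# given a (hypothetical) index of seek points as (seconds, byte offset) pairs.
import bisect

index = [(0.0, 0), (9.85, 19147), (21.16, 22881), (30.02, 28540)]   # illustrative values
total_bytes = 35614993

def time_to_byte_range(start, end):
    times = [t for t, _ in index]
    first = max(bisect.bisect_right(times, start) - 1, 0)   # last seek point <= start
    last = bisect.bisect_left(times, end)                    # first seek point >= end
    begin = index[first][1]
    stop = index[last][1] - 1 if last < len(index) else total_bytes - 1
    return begin, stop

print(time_to_byte_range(10, 20))    # e.g. (19147, 22880)
</pre></div><p>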
</p><p>Here are the three principal cases a media-fragment-enabled UA and a media server will encounter:</p><div class="div3">
<h4><a name="processing-protocol-UA-mapped-new" id="processing-protocol-UA-mapped-new"/>2.1.1 UA requests URI fragment for the first time</h4><p>A user requests a media fragment URI:</p><ul><li><p>User → UA (1):</p><div class="exampleInner"><pre>http://www.example.com/video.ogv#t=10,20</pre></div></li></ul><p>The UA has to check if a local copy of the requested fragment is available in its buffer - not in this case. But it knows how to map the fragment to byte ranges: 19147 - 22890. So, it requests these byte ranges from the server:</p><ul><li><p>UA (1) → Proxy (2) → Origin Server (3):</p><div class="exampleInner"><pre>
GET /video.ogv HTTP/1.1
Host: www.example.com
Accept: video/*
Range: bytes=19147-22890
</pre></div></li></ul><p>The server extracts the bytes corresponding to the requested range and replies in a 206 HTTP response:</p><ul><li><p>Origin Server (3) → Proxy (4) → UA (5):</p><div class="exampleInner"><pre>
HTTP/1.1 206 Partial Content
Accept-Ranges: bytes
Content-Length: 3743
Content-Type: video/ogg
Content-Range: bytes 19147-22880/35614993
Etag: "b7a60-21f7111-46f3219476580"
{binary data}
</pre></div></li></ul><p>Assuming the UA has received the byte ranges that it requires to serve t=10,20, which may well be slightly more, it will serve the decoded content to the User from the appropriate time offset. Otherwise it may keep requesting byte ranges to retrieve the required time segments.</p><img src="MF-SD-ClientSide-5.2.1.1.png" alt="Illustration of a UA requesting a URI fragment for the first time"/></div><div class="div3">
<h4><a name="processing-protocol-UA-mapped-unchanged" id="processing-protocol-UA-mapped-unchanged"/>2.1.2 UA requests URI fragment it already has buffered</h4><p>A user requests a media fragment URI:</p><ul><li><p>User → UA (1):</p><div class="exampleInner"><pre>http://www.example.com/video.ogv#t=10,20</pre></div></li></ul><p>The UA has to check if a local copy of the requested fragment is available in its buffer - it is in this case. But the resource could have changed on the server, so it needs to send a conditional GET. It knows the byte ranges: 19147 - 22890. So, it requests these byte ranges from the server under condition of it having changed:</p><ul><li><p>UA (1) → Proxy (2) → Origin Server (3):</p><div class="exampleInner"><pre>
GET /video.ogv HTTP/1.1
Host: www.example.com
Accept: video/*
If-Modified-Since: Sat, 01 Aug 2009 09:34:22 GMT
If-None-Match: "b7a60-21f7111-46f3219476580"
Range: bytes=19147-22890
</pre></div></li></ul><p>The server checks if the resource has changed by checking the date - in this case, the resource was not modified. So, the server replies with a 304 HTTP response. (Note that an If-Range header cannot be used, because if the entity has changed, the entire resource would be sent.)</p><ul><li><p>Origin Server (3) → Proxy (4) → UA (5):</p><div class="exampleInner"><pre>
HTTP/1.1 304 Not Modified
Accept-Ranges: bytes
Content-Length: 3743
Content-Type: video/ogg
Content-Range: bytes 19147-22880/35614993
Etag: "b7a60-21f7111-46f3219476580"
</pre></div></li></ul><p>So, the UA serves the decoded resource to the User out of its existing buffer.</p><img src="MF-SD-ClientSide-5.2.1.2.png" alt="Illustration of a UA requesting a URI fragment it already has buffered"/></div><div class="div3">
<h4><a name="processing-protocol-UA-mapped-changed" id="processing-protocol-UA-mapped-changed"/>2.1.3 UA requests URI fragment of a changed resource</h4><p>A user requests a media fragment URI and the UA sends the exact same GET request as described in the previous subsection.</p><p>This time, the server checks if the resource has changed by checking the date and it has been modified. Since the byte mapping may not be correct any longer, the server can only tell the UA that the resource has changed and leave all further actions to the UA. So, it sends a 412 HTTP response:</p><ul><li><p>Origin Server (3) → Proxy (4) → UA (5):</p><div class="exampleInner"><pre>
HTTP/1.1 412 Precondition Failed
Accept-Ranges: bytes
Content-Length: 3743
Content-Type: video/ogg
Content-Range: bytes 19147-22880/22222222
Etag: "xxxxx-yyyyyyy-zzzzzzzzzzzzz"
</pre></div></li></ul><p>So, the UA can only assume the resource has changed and re-retrieve what it needs in order to be able to retrieve fragments again. For most resources this may mean retrieving the header of the file. After this, byte range retrieval is possible again.</p><img src="MF-SD-ClientSide-5.2.1.3.png" alt="Illustration of a UA requesting a URI fragment it has buffered, but that changed"/></div></div><div class="div2">
<h3><a name="processing-protocol-Server-mapped" id="processing-protocol-Server-mapped"/>2.2 Server mapped byte ranges</h3><p>
Some User Agents cannot undertake the fragment-to-byte mapping themselves, because the mapping is not obvious.
This typically applies to media formats where the setup of the decoding pipeline does
not imply knowledge of how to map fragments to byte ranges, e.g. Ogg without OggIndex.
Thus, the User Agent would be capable of decoding a continuous resource, but would not
know which bytes to request for a media fragment.
</p><p>
In this case, the User Agent could either guess which byte ranges it has to retrieve,
in which case the retrieval follows the previous case, or it could rely on the server
providing a special service that allows it to retrieve the byte ranges by simply
requesting the media fragment ranges. The HTTP request of the User Agent will then include
a request for the fragment, hoping that the server can do the byte range mapping and send
back the appropriate byte ranges. This is realized by introducing new dimensions for the
HTTP Range header, next to the byte dimension.
</p><p>
The specification for all new Range Request Header dimensions is given through the following
ABNF as an extension to the HTTP Range Request Header definition (see
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.35.2):
</p><div class="exampleInner"><a name="rangerequestheaderdef" id="rangerequestheaderdef"/><pre>
Range = "Range" ":" ranges-specifier
ranges-specifier = byte-ranges-specifier | fragment-specifier
;
; note that ranges-specifier is extended from <cite><a href="#">rfc2616</a></cite>
; to cover alternate fragment range specifiers
;
fragment-specifier = "include-setup" | fragment-range *( "," fragment-range )
[ ";" "include-setup" ]
fragment-range = time-ranges-specifier | id-ranges-specifier
;
; note that this doesn't capture the restriction to one fragment dimension occurring
; maximally once only in the fragment-specifier definition.
;
time-ranges-specifier = npttimeoption / smptetimeoption / clocktimeoption
npttimeoption = pfxdeftimeformat "=" npt-sec "-" [ npt-sec ]
smptetimeoption = pfxsmpteformat "=" frametime "-" [ frametime ]
clocktimeoption = pfxclockformat "=" datetime "-" [ datetime ]
id-ranges-specifier = idprefix "=" idparam
</pre></div><p>
This specification is meant to be analogous to the one in URIs, but it is a bit stricter.
The time unit is not optional. For instance, it can be "npt", "smpte", "smpte-25",
"smpte-30", "smpte-30-drop" or "clock" for temporal ranges. Where "npt" is used for a temporal
range, only specification in seconds is possible. Where "clock" is used for a temporal
range, only "datetime" is possible and "walltime" is fully specified in HHMMSS with
fraction and full timezone. Indeed, all optional elements in the URI specification
basically become required in the Range header.
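</p><p>
The following non-normative Python sketch merely formats Range header values of the kind defined above. Only the npt and id forms that actually appear in the examples of this document are covered, and the helper names are illustrative.
</p><div class="exampleInner"><pre>
# Non-normative sketch: build Range header values following the ABNF above.
def npt_range(start_sec, end_sec=None, include_setup=False):
    end = "" if end_sec is None else format(end_sec, "g")
    value = "t:npt=%s-%s" % (format(start_sec, "g"), end)
    return value + (";include-setup" if include_setup else "")

def id_range(name, include_setup=False):
    return "id=%s" % name + (";include-setup" if include_setup else "")

print(npt_range(10, 20))                        # t:npt=10-20
print(npt_range(10, 20, include_setup=True))    # t:npt=10-20;include-setup
print(id_range("chapter1"))                     # id=chapter1
</pre></div><p>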
</p><p>
There is an optional 'include-setup' flag on the fragment range specifier - this
flag signals to the server whether delivery of the decoder setup information (i.e.
typically file header information) is also required as part of the reply to this
request. This can help avoid an extra roundtrip where a Media Fragment URI is, e.g.
directly typed into a Web browser.
</p><p>
Note that the specification does not foresee a Range dimension for spatial and track media
fragments since they are typically resolved and interpreted by the User Agent (i.e.,
spatial and track fragment extraction is not performed on the server side) for the following
reasons:
</p><ul><li><p>
spatial media fragments are typically not expressible in terms of byte ranges.
Spatial fragment extraction would thus require transcoding operations resulting
in new resources rather than fragments of the original media resource. Track media
fragments are expressible in terms of byte ranges but addressing one track in a media
resource typically results in a huge number of byte ranges (due to interleaved tracks).
Spatial and track fragment extraction is in this case better represented by URI queries.
</p></li><li><p>
When a User Agent receives an extracted spatial media fragment, it is not trivial
to visualize the context of this fragment.
More specifically, spatial context requires a meaningful background, which will not
be available at the User Agent when the spatial fragment is extracted by the
server.
</p></li></ul><p>
Next to the introduction of new dimensions for the HTTP Range request header, we also
introduce a new HTTP response header, called Content-Range-Mapping, which provides the
mapping of the retrieved byte range to the original Range request, which was not in
bytes. It serves two purposes:
</p><ul><li><p>
It indicates the actual mapped range in terms of fragment dimensions. This is
necessary since the server might not be able to provide a byte range mapping that
corresponds exactly to the requested range. Therefore, the User Agent needs to be
aware of this variance.
</p></li><li><p>
It provides context information regarding the parent resource in case the Range
request contained a temporal dimension. More specifically, the header contains the
start and end time of the parent resource. This way, the User Agent is able to
understand and visualize the temporal context of the media fragment.
</p></li></ul><p>
The specification for the Content-Range-Mapping header is based on the specification
of the Content-Range header
(see http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.16)
and is shown below. Note that, in the case of the temporal dimension, the Content-Range-Mapping
header adds the instance start and end in terms of seconds after a slash
"/" character, in analogy to the Content-Range header. Also, we introduce an extension
to the Accept-Ranges header
(see http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.5).
</p><div class="exampleInner"><a name="contentrangemappingheaderdef" id="contentrangemappingheaderdef"/><pre>
Content-Range-Mapping = "Content-Range-Mapping" ":" '{'
( content-range-mapping-spec [ ";" def-include-setup ] ) / def-include-setup
'}' '=' '{'
byte-content-range-mapping-spec '}'
def-include-setup = %x69.6E.63.6C.75.64.65.2D.73.65.74.75.70 ; "include-setup"
byte-content-range-mapping-spec = bytes-unit SP
byte-range-resp-spec *( "," byte-range-resp-spec ) "/"
( instance-length / "*" )
content-range-mapping-spec = time-mapping-spec | id-mapping-spec
time-mapping-spec = timeprefix ":" time-mapping-options
time-mapping-options = npt-mapping-option / smpte-mapping-option / clock-mapping-option
npt-mapping-option = deftimeformat SP npt-sec "-" npt-sec "/"
[ npt-sec ] "-" [ npt-sec ]
smpte-mapping-option = smpteformat SP frametime "-" frametime "/"
[ frametime ] "-" [ frametime ]
clock-mapping-option = clockformat SP datetime "-" datetime "/"
[ datetime ] "-" [ datetime ]
id-mapping-spec = idprefix SP idparam
Accept-Ranges = "Accept-Ranges" ":" acceptable-ranges
acceptable-ranges = 1#range-unit *( "," 1#range-unit )| "none"
;
; note this does not represent the restriction that range-units can only appear once at most
;
range-unit = bytes-unit | other-range-unit
bytes-unit = "bytes"
other-range-unit = token | timeprefix | idprefix
</pre></div><p>
Three cases can be distinguished when a User Agent needs assistance from a server to
perform the byte range mapping. In the next subsections, we go through the protocol
exchange step by step.
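</p><p>
Before walking through these cases, the following non-normative Python sketch shows how a User Agent might extract the time and byte mapping from a Content-Range-Mapping value of the form used in the examples of the following subsections. It only handles the npt form and makes no attempt to cover the full grammar.
</p><div class="exampleInner"><pre>
# Non-normative sketch: parse a Content-Range-Mapping value such as
#   { t:npt 9.85-21.16/0.0-653.79 } = { bytes 19147-22880/35614993 }
import re

def parse_npt_mapping(value):
    m = re.match(r"\{\s*t:npt\s+([\d.]+)-([\d.]+)/([\d.]+)-([\d.]+)\s*\}\s*=\s*"
                 r"\{\s*bytes\s+(\d+)-(\d+)/(\d+|\*)\s*\}", value)
    if not m:
        return None
    start, end, parent_start, parent_end = map(float, m.groups()[:4])
    first_byte, last_byte = int(m.group(5)), int(m.group(6))
    return (start, end), (parent_start, parent_end), (first_byte, last_byte)

print(parse_npt_mapping("{ t:npt 9.85-21.16/0.0-653.79 } = { bytes 19147-22880/35614993 }"))
# ((9.85, 21.16), (0.0, 653.79), (19147, 22880))
</pre></div><p>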
</p><div class="div3">
<h4><a name="processing-protocol-server-mapped-default" id="processing-protocol-server-mapped-default"/>2.2.1 Server mapped byte ranges with corresponding binary data</h4><ul><li><p>User → UA (1):</p><div class="exampleInner"><pre>http://www.example.com/video.ogv#t=10,20</pre></div></li></ul><p>
The UA has to check if a local copy of the requested fragment is available in its
buffer. If it is, we revert back to the processing described in sections
<a href="#processing-protocol-UA-mapped-unchanged"><b>2.1.2 UA requests URI fragment it already has buffered</b></a>
and <a href="#processing-protocol-UA-mapped-changed"><b>2.1.3 UA requests URI fragment of a changed resource</b></a>, since the UA already
knows the mapping to byte ranges. If the requested fragment is not available in
its buffer, the UA sends an HTTP request to the server, including a Range header
with a temporal dimension. The request is shown below:
</p><ul><li><p>UA (1) → Proxy (2) → Origin Server (3):</p><div class="exampleInner"><pre>
GET /video.ogv HTTP/1.1
Host: www.example.com
Accept: video/*
Range: t:npt=10-20
</pre></div></li></ul><p>
If the server does not understand a Range header, it MUST ignore the header field
that includes that range-set. This is in sync with the HTTP RFC <cite><a href="#">rfc2616</a></cite>.
This means that where a server does not support media fragments, the complete resource
will be delivered. It also means that we can combine both byte range and fragment range
headers in one request, since the server will only react to the Range header
it understands.
</p><p>
Assuming the server can map the given Range to one or more byte ranges, it will
reply with these in a 206 HTTP response. Where multiple byte ranges are required to
satisfy the Range request, these are transmitted as a multipart message-body. The
media type for this purpose is called "multipart/byteranges". This is in sync with
the HTTP RFC <cite><a href="#">rfc2616</a></cite>.
</p><p>Here is the reply to the example above, assuming a single byte range is sufficient:</p><ul><li><p>Origin Server (3) → Proxy (4) → UA (5):</p><div class="exampleInner"><pre>
HTTP/1.1 206 Partial Content
Accept-Ranges: bytes, t, id
Content-Length: 3743
Content-Type: video/ogg
Content-Range: bytes 19147-22880/35614993
Content-Range-Mapping: { t:npt 9.85-21.16/0.0-653.79 } = { bytes 19147-22880/35614993 }
Etag: "b7a60-21f7111-46f3219476580"
{binary data}
</pre></div></li></ul><p>
Note the presence of the new reply header called Content-Range-Mapping, which provides
the mapping of the retrieved byte range to the original Range request, which
was not in bytes. As both byte and temporal ranges are returned, the UA and any
intermediate caching proxy can map byte positions to time offsets and fall
back to a byte range request when the fragment is re-requested. Also note that the
extended list in the Accept-Ranges header makes it possible to identify which fragment
schemes a server supports.
</p><img src="MF-SD-ServerSide.png" alt="Illustration of a UA requesting a URI time to byte range mapping from the server "/><p>
In the case where a media fragment results in a multipart message-body, the
Content-Range headers will be spread throughout the binary data ranges, but the
Content-Range-Mapping of the media fragment will only be with the main header.
Note that requesting setup information with a temporal (or id) fragment typically results in multipart message-bodies, as will be illustrated in section <a href="#processing-protocol-server-mapped-setup"><b>2.2.2 Server mapped byte ranges with corresponding binary data and codec setup data</b></a>.
</p><p>
Note that a caching proxy that does not understand a Range header must not cache
"206 Partial Content" responses as per HTTP RFC <cite><a href="#">rfc2616</a></cite>. Thus, the
new Range requests won't be cached by legacy Web proxies.
</p><p>Id fragments can be requested in a similar way. The following example illustrates a request for the temporal fragment with name 'chapter1':</p><ul><li><p>UA (1) → Proxy (2) → Origin Server (3):</p><div class="exampleInner"><pre>
GET /video.ogv HTTP/1.1
Host: www.example.com
Accept: video/*
Range: id=chapter1
</pre></div></li></ul><p>Assuming the server can map the given id to one or more byte ranges, it will for instance reply with the following HTTP response:</p><ul><li><p>Origin Server (3) → Proxy (4) → UA (5):</p><div class="exampleInner"><pre>
HTTP/1.1 206 Partial Content
Accept-Ranges: bytes, t, id
Content-Length: 3743
Content-Type: video/ogg
Content-Range: bytes 19147-22880/35614993
Content-Range-Mapping: { id chapter1 } = { bytes 19147-22880/35614993 }
Etag: "b7a60-21f7111-46f3219476580"
{binary data}
</pre></div></li></ul></div><div class="div3">
<h4><a name="processing-protocol-server-mapped-setup" id="processing-protocol-server-mapped-setup"/>2.2.2 Server mapped byte ranges with corresponding binary data and codec setup data</h4><p>
When the User Agent needs help from the server to set up the initial decoding pipeline
(i.e., the User Agent has no codec setup information at its disposal), the User Agent
can request, next to the bytes corresponding to the requested fragment, the bytes
necessary to set up its decoder. This is possible by adding the 'include-setup' flag to
the Range header, as illustrated below:
</p><ul><li><p>UA (1) → Proxy (2) → Origin Server (3):</p><div class="exampleInner"><pre>
GET /video.ogv HTTP/1.1
Host: www.example.com
Accept: video/*
Range: t:npt=10-20;include-setup
</pre></div></li></ul><p>
Analogous to section <a href="#processing-protocol-server-mapped-default"><b>2.2.1 Server mapped byte ranges with corresponding binary data</b></a>, if the
server can map the given Range to one or more byte ranges, it will reply with these in
a 206 HTTP response. Additionally, the server adds the bytes corresponding to the
requested setup information to the response. Since this setup information usually
appears at the front of a media resource, the response typically results in a multipart
message-body. The response is shown below:
</p><ul><li><p>Origin Server (3) → Proxy (4) → UA (5):</p><div class="exampleInner"><pre>
HTTP/1.1 206 Partial Content
Accept-Ranges: bytes, t, id
Content-Length: 3795
Content-Range-Mapping: { t:npt 11.85-21.16/0.0-653.79;include-setup } = { bytes 0-52,19147-22880/35614993 }
Content-type: multipart/byteranges; boundary=THIS_STRING_SEPARATES
Etag: "b7a60-21f7111-46f3219476580"
--THIS_STRING_SEPARATES
Content-type: video/ogg
Content-Range: bytes 0-52/35614993
{binary data}
--THIS_STRING_SEPARATES
Content-type: video/ogg
Content-Range: bytes 19147-22880/35614993
{binary data}
--THIS_STRING_SEPARATES--
</pre></div></li></ul><p>
Note that the Content-Range-Mapping header indicates that the codec setup information
is included in the response. In this example, the response consists of two parts of byte
ranges: the first part corresponds to the setup information, the second part corresponds
to the requested fragment.
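</p><p>
The following non-normative Python sketch shows how a User Agent might split such a multipart/byteranges reply into its parts with the standard library email parser. The boundary is assumed to have been taken from the Content-Type header of the 206 response.
</p><div class="exampleInner"><pre>
# Non-normative sketch: split a multipart/byteranges body into (Content-Range, data) pairs.
from email.parser import BytesParser

def split_byteranges(body, boundary):
    # Wrap the payload in a minimal MIME header so that the email parser accepts it.
    wrapped = (b"Content-Type: multipart/byteranges; boundary=" +
               boundary.encode("ascii") + b"\r\n\r\n" + body)
    msg = BytesParser().parsebytes(wrapped)
    for part in msg.get_payload():
        yield part["Content-Range"], part.get_payload(decode=True)

# Usage, given the body of the 206 response above:
# for content_range, data in split_byteranges(body, "THIS_STRING_SEPARATES"):
#     ...   # "bytes 0-52/35614993" -> setup data, "bytes 19147-22880/..." -> fragment data
</pre></div><p>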
</p><img src="MF-SD-ServerSideSetup.png" alt="Illustration of a UA requesting a URI time to byte range mapping from the server, including the codec setup information "/></div><div class="div3">
<h4><a name="processing-protocol-server-mapped-proxy" id="processing-protocol-server-mapped-proxy"/>2.2.3 Proxy cacheable server mapped byte ranges</h4><p>
The server mapped byte ranges approach can be extended to work with the existing caching Web proxy infrastructure.
This is important, since video consumes a large share of the bandwidth on the current Internet,
and progressive download and direct access mechanisms for video rely heavily on this
caching functionality. Over time, the proxy infrastructure will learn how to cache media
fragment URIs directly as described in the previous section and will then no longer require
this extra effort.
</p><p>
To enable media-fragment-URI-supporting UAs to make their retrieval cacheable, we
introduce some extra HTTP headers, which will help tell the server and the proxy what
to do. There is an Accept-Range-Redirect request header which signals to the server
that only a redirect to the correct byte ranges is necessary and the result should be
delivered in the Range-Redirect header.
</p><p>The ABNF for these additional two HTTP headers is given as follows:</p><div class="exampleInner"><a name="rangeredirectdefs" id="rangeredirectdefs"/><pre>
Accept-Range-Redirect = "Accept-Range-Redirect" ":" bytes-unit
Range-Redirect = "Range-Redirect" ":" byte-range-resp-spec *( "," byte-range-resp-spec )
</pre></div><p>
Let us walk through an example. A user requests a media fragment URI:
</p><ul><li><p>User → UA (1):</p><div class="exampleInner"><pre>http://www.example.com/video.ogv#t=10,20</pre></div></li></ul><p>
The UA has to check if a local copy of the requested fragment is available in its
buffer. In our case here, it is not. If it were, we would revert back to the processing
described in sections <a href="#processing-protocol-UA-mapped-unchanged"><b>2.1.2 UA requests URI fragment it already has buffered</b></a> and
<a href="#processing-protocol-UA-mapped-changed"><b>2.1.3 UA requests URI fragment of a changed resource</b></a>, since the UA already knows the
mapping to byte ranges. The UA issues an HTTP GET request with the fragment,
requesting to retrieve just the mapping to byte ranges:
</p><ul><li><p>UA (1) → Proxy (2) → Origin Server (3):</p><div class="exampleInner"><pre>
GET /video.ogv HTTP/1.1
Host: www.example.com
Accept: video/*
Range: t:npt=10-20
Accept-Range-Redirect: bytes
</pre></div></li></ul><p>
The server converts the given time range to a byte range and sends an empty reply
that refers the UA to the right byte range for the correct time range.
</p><ul><li><p>Origin Server (3) → Proxy (4) → UA (5):</p><div class="exampleInner"><pre>
HTTP/1.1 307 Temporary Redirect
Location: http://www.example.com/video.ogv
Accept-Ranges: bytes, t, id
Content-Length: 0
Content-Type: video/ogg
Content-Range-Mapping: { t:npt 11.85-21.16/0.0-653.79 } = { bytes 19147-22880/* }
Range-Redirect: 19147-22880
Vary: Accept-Range-Redirect
</pre></div></li></ul><p>
Note that codec setup information can also be requested in combination with the
Accept-Range-Redirect header, which can be realized by adding the 'include-setup'
flag to the Range request header.
</p><p>
The UA proceeds to put the actual fragment request through as a normal byte range
request as in section <a href="#processing-protocol-UA-mapped-new"><b>2.1.1 UA requests URI fragment for the first time</b></a>:
</p><ul><li><p>UA (5) → Proxy (6) → Origin Server (7):</p><div class="exampleInner"><pre>
GET /video.ogv HTTP/1.1
Host: www.example.com
Accept: video/*
Range: bytes=19147-22880
</pre></div></li></ul><p>The Origin Server puts the data together and sends it to the UA:</p><ul><li><p>Origin Server (7) → Proxy (8) → UA (9):</p><div class="exampleInner"><pre>
HTTP/1.1 206 Partial Content
Accept-Ranges: bytes, t, id
Content-Length: 3743
Content-Type: video/ogg
Content-Range: bytes 19147-22880/35614993
Etag: "b7a60-21f7111-46f3219476580"
{binary data}
</pre></div></li></ul><p>
The UA decodes the data and displays it from the requested offset. The caching Web
proxy in the middle has now cached the byte range, since it adhered to the normal byte
range request protocol. All existing caching proxies will work with this. New caching
Web proxies may learn to interpret media fragments natively and so will not require the extra
packet exchange described in this section.
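</p><p>
The two-step exchange can be summarized in the following non-normative Python sketch, again using the standard http.client module; the Location header of the 307 response and all error handling are ignored for brevity.
</p><div class="exampleInner"><pre>
# Non-normative sketch: obtain the byte range mapping first, then issue a plain
# byte range request that existing caching proxies can serve.
import http.client

conn = http.client.HTTPConnection("www.example.com")

conn.request("GET", "/video.ogv", headers={
    "Accept": "video/*",
    "Range": "t:npt=10-20",
    "Accept-Range-Redirect": "bytes",
})
redirect = conn.getresponse()
redirect.read()                                     # empty body (Content-Length: 0)
byte_range = redirect.getheader("Range-Redirect")   # e.g. "19147-22880"

conn.request("GET", "/video.ogv", headers={
    "Accept": "video/*",
    "Range": "bytes=" + byte_range,
})
fragment_bytes = conn.getresponse().read()
</pre></div><p>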
</p><img src="MF-SD-ProxyCacheable.png" alt="Illustration of a UA requesting a URI time to byte range mapping from the server with proxy capability of byte ranges"/></div></div></div><div class="div1">
<h2><a name="processing-protocol-query" id="processing-protocol-query"/>3 Protocol for URI query Resolution in HTTP</h2><p>
This section describes the protocol steps used in HTTP <cite><a href="#">rfc2616</a></cite> to resolve and deliver a media fragment specified as a URI query.
</p><p>A user requests a media fragment URI using a URI query:</p><ul><li><p>User → UA (1):</p><div class="exampleInner"><pre>http://www.example.com/video.ogv?t=10,20</pre></div></li></ul><p>This is a full resource, so it is a simple HTTP retrieval process. The UA has to check if a local copy of the requested resource is available in its buffer. If yes, it does a conditional GET with e.g. an If-Modified-Since and If-None-Match HTTP header.</p><p>Assuming the resource has not been retrieved before, the following is sent to the server:</p><ul><li><p>UA (1) → Proxy (2) → Origin Server (3):</p><div class="exampleInner"><pre>
GET /video.ogv?t=10,20 HTTP/1.1
Host: www.example.com
Accept: video/*
</pre></div></li></ul><p>If the server doesn't understand these query parameters, it typically ignores them and returns the complete resource. This is not a requirement by the URI or the HTTP standard, but the way it is typically implemented in Web servers.</p><p>
A media-fragment-supporting server has to create a complete media resource for the URI query, which in the case of Ogg requires the creation of a new resource by adapting the existing Ogg file headers and combining them with the extracted byte range that relates to the given fragment. Some of the codec data may also need to be re-encoded since, e.g., t=10 may not fall exactly on a decoding boundary, but the retrieved resource must match the URI query as closely as possible. This new resource is sent back as a reply:
</p><ul><li><p>Origin Server (3) → Proxy (4) → UA (5):</p><div class="exampleInner"><pre>
HTTP/1.1 200 OK
Content-Length: 3782
Content-Type: video/ogg
Etag: "b7a60-21f7111-46f3219476580"
Link: <http://www.example.com/video.ogv#t=10,20>; rel="alternate"
{binary data}
</pre></div></li></ul><p>Note that a Link header MAY be provided indicating the relationship between the requested URI query and the original media fragment URI. This enables the UA to retrieve further information about the original resource, such as its full length. In this case, the user agent is also able to choose whether to display the dimensions of the primary resource or those created by the query.</p><p>The UA serves the decoded resource to the user. Caching in Web proxies works as it has always worked - most modern Web servers and UAs implement a caching strategy for URIs that contain a query using one of the three methods for marking freshness: heuristic freshness analysis, the Cache-Control header, or the Expires header. In this case, many copies of different segments of the original resource video.ogv may end up in proxy caches. An intelligent media proxy in the future may devise a strategy to buffer such resources in a more efficient manner, where headers and byte ranges are stored differently.</p><p>
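</p><p>
As a non-normative server-side illustration of the resource creation described above, the following WSGI sketch recognises the t query parameter and delegates the actual media processing to a hypothetical extract_fragment helper; producing a valid Ogg resource (rewriting the file headers and re-encoding around the cut points) is format-specific and not shown.
</p><div class="exampleInner"><pre>
# Non-normative sketch: a WSGI application answering URI queries such as ?t=10,20.
from urllib.parse import parse_qs

def application(environ, start_response):
    params = parse_qs(environ.get("QUERY_STRING", ""))
    if "t" in params:
        start, _, end = params["t"][0].partition(",")
        # extract_fragment is a hypothetical, format-specific helper.
        body = extract_fragment("video.ogv", float(start or 0), float(end) if end else None)
        headers = [("Content-Type", "video/ogg"),
                   ("Content-Length", str(len(body))),
                   ("Link", '<http://www.example.com/video.ogv#t=%s,%s>; rel="alternate"' % (start, end))]
        start_response("200 OK", headers)
        return [body]
    # No (understood) query parameters: return the complete resource.
    with open("video.ogv", "rb") as f:
        body = f.read()
    start_response("200 OK", [("Content-Type", "video/ogg"),
                              ("Content-Length", str(len(body)))])
    return [body]
</pre></div><p>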
Further, media fragment URI queries can be extended to enable UAs to use the Range-Redirect HTTP header to also revert back to a byte range request. This is analogous to section <a href="#processing-protocol-server-mapped-proxy"><b>2.2.3 Proxy cacheable server mapped byte ranges</b></a>.
</p><p>Note that a server that does not support media fragments through either URI fragment or query addressing will return the full resource in either case. It is therefore not possible to first try URI fragment addressing and, when that fails, to try URI query addressing.</p></div></div><div class="back"><div class="div1">
<h2><a name="rtsp-media-fragment-processing" id="rtsp-media-fragment-processing"/>A Processing media fragment URIs in RTSP (Non-Normative)</h2><p>
This appendix explains how the media fragment specification is mapped to an RTSP protocol activity.
We assume here that you have a general understanding of the RTSP protocol mechanism as defined in <cite><a href="#">rtsp</a></cite>.
The general sequence of messages sent between an RTSP UA and server can be summarized as follows:
</p><ul><li>from a DESCRIBE activity, in which the UA requests from the server what resources it has available,</li><li>through a SETUP activity, which sets up the communication between the UA and the server, including the requested tracks,</li><li>to a PLAY activity, where time ranges are requested by the UA from the server for playback.</li><li>A PAUSE is always possible in the middle of an RTSP communication, and</li><li>a TEARDOWN closes the communication.</li></ul><p>
Note that the RTSP protocol is intentionally similar in syntax and operation to HTTP.
</p><div class="div2">
<h3><a name="mapping-mf-to-rtsp-methods" id="mapping-mf-to-rtsp-methods"/>A.1 How to map Media Fragment URIs to RTSP protocol methods</h3><div class="div3">
<h4><a name="rtsp-mf-dimensions" id="rtsp-mf-dimensions"/>A.1.1 Dealing with the media fragment URI dimensions in RTSP</h4><p>
We illustrate for each of the four media fragment dimensions how it can be mapped onto RTSP commands.
The following examples are used to illustrate each of the dimensions: (1) temporal: #t=10,20 (2) tracks: #track=audio&track=video (3) spatial: #xywh=160,120,320,24 (4) id: #id=Airline%20Edit
</p><div class="div4">
<h5><a name="rtsp-temporal" id="rtsp-temporal"/>A.1.1.1 Temporal Media Fragment URIs</h5><p>In RTSP, temporal fragment URIs are provided through the PLAY method. A URI such as</p><div class="exampleInner"><pre>rtsp://example.com/media#t=10,20</pre></div><p>
will be executed as a series of the following methods (all shortened for readability).
</p><ul><li>UA->S: DESCRIBE rtsp://example.com/media</li><li>S->UA: RTSP/1.0 200 OK (with an SDP description)</li><li>UA->S: SETUP rtsp://example.com/media/video</li><li>S->UA: RTSP/1.0 200 OK</li><li>UA->S: SETUP rtsp://example.com/media/audio</li><li>S->UA: RTSP/1.0 200 OK</li></ul><p>
The actual temporal selection is provided in the PLAY method:
</p><div class="exampleInner"><pre>C->S: PLAY rtsp://example.com/media
Range: npt=10-20</pre></div><p>
The server tells the UA which temporal range is returned:
</p><div class="exampleInner"><pre>S->C: RTSP/1.0 200 OK
Range: npt=9.5-20.1</pre></div><p>
This mapping can be applied analogously to all of the time schemes defined for media fragments.
Also, several temporal media fragment URI requests can be sent as pipelined commands without having to re-send the DESCRIBE and SETUP commands.
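</p><p>
As a non-normative illustration, the following Python sketch maps the temporal dimension of a media fragment URI onto the Range value of the PLAY request; only plain npt seconds are handled.
</p><div class="exampleInner"><pre>
# Non-normative sketch: 't=10,20' -> 'npt=10-20' for the RTSP PLAY Range header.
def play_range_from_fragment(fragment):
    params = dict(p.split("=", 1) for p in fragment.split("&"))
    value = params.get("t")
    if value is None:
        return None
    start, _, end = value.partition(",")
    return "npt=%s-%s" % (start or "0", end)

print(play_range_from_fragment("t=10,20"))    # npt=10-20
</pre></div><p>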
</p></div><div class="div4">
<h5><a name="rtsp-track" id="rtsp-track"/>A.1.1.2 Track Media Fragment URIs</h5><p>In RTSP, track fragment URIs are provided through the SETUP method. A URI such as</p><div class="exampleInner"><pre>rtsp://example.com/media#track=audio&track=video</pre></div><p>
will be executed as a series of the following methods (all shortened for readability).
</p><ul><li>UA->S: DESCRIBE rtsp://example.com/media</li><li>S->UA: RTSP/1.0 200 OK (with an SDP description)</li><li>UA->S: SETUP rtsp://example.com/media/video</li><li>S->UA: RTSP/1.0 200 OK</li><li>UA->S: SETUP rtsp://example.com/media/audio</li><li>S->UA: RTSP/1.0 200 OK</li></ul><p>
The discovery of available tracks is provided through the SDP reply to DESCRIBE, but it could be done through alternative methods, too.
Several consecutive track media fragment URI requests can only be sent with new SETUP commands and cannot be pipelined.
</p></div><div class="div4">
<h5><a name="rtsp-spatial" id="rtsp-spatial"/>A.1.1.3 Spatial Media Fragment URIs</h5><p>In RTSP, spatial fragment URIs are not specifically provided for. Just like in HTTP, spatial fragments are interpreted at the UA and thus not communicated to the server. A URI such as</p><div class="exampleInner"><pre>rtsp://example.com/media#xywh=160,120,320,24</pre></div><p>
will be executed as the URL rtsp://example.com/media.
</p></div><div class="div4">
<h5><a name="rtsp-id" id="rtsp-id"/>A.1.1.4 Id Media Fragment URIs</h5><p>We see no easy way to support this in RTSP as currently standardised.</p></div></div><div class="div3">
<h4><a name="rtsp-combined-mf-dimensions" id="rtsp-combined-mf-dimensions"/>A.1.2 Putting the media fragment URI dimensions together in RTSP</h4><p>A URI such as</p><div class="exampleInner"><pre>rtsp://example.com/media#xywh=160,120,320,24&t=10,20&track=audio&track=video</pre></div><p>will be executed as a series of the following methods (all shortened for readability). The data selection is provided both in the SETUP method and the PLAY method:</p><div class="exampleInner"><pre>UA->S: DESCRIBE rtsp://example.com/media
S->UA: RTSP/1.0 200 OK (with an SDP description, see wiki)
UA->S: SETUP rtsp://example.com/media/video
S->UA: RTSP/1.0 200 OK
UA->S: SETUP rtsp://example.com/media/audio
S->UA: RTSP/1.0 200 OK
UA->S: PLAY rtsp://example.com/media
Range: npt=10-20
S->UA: RTSP/1.0 200 OK
Range: npt=9.5-20.1</pre></div><p>It is the UA's task to display only the rectangle xywh=160,120,320,24. It is true that the resolution of the dimensions is done at different levels of the protocol, but that does not create a problem.</p></div><div class="div3">
<h4><a name="rtsp-caching" id="rtsp-caching"/>A.1.3 Caching and RTSP for media fragment URIs</h4><p>Media fragment URIs rely only on existing protocol negotiations in RTSP. Therefore any RTSP caching scheme, assuming such a thing exists, will work fine with media fragments.</p></div></div></div><div class="div1">
<h2><a name="acknowledgments" id="acknowledgments"/>B Acknowledgements (Non-Normative)</h2><p>
This document is the work of the <a href="http://www.w3.org/2008/WebVideo/Fragments/">W3C Media Fragments Working Group</a>. Members of the Working Group are
(at the time of writing, and in alphabetical order):
Eric Carlson (Apple, Inc.),
Chris Double (Mozilla Foundation),
Michael Hausenblas (DERI Galway at the National University of Ireland, Galway, Ireland),
Philip Jägenstedt (Opera Software),
Jack Jansen (CWI),
Yves Lafon (W3C),
Erik Mannens (IBBT),
Thierry Michel (W3C/ERCIM),
Guillaume (Jean-Louis) Olivrin (Meraka Institute),
Soohong Daniel Park (Samsung Electronics Co., Ltd.),
Conrad Parker (W3C Invited Experts),
Silvia Pfeiffer (W3C Invited Experts),
Nobuhisa Shiraishi (NEC Corporation),
David Singer (Apple, Inc.),
Thomas Steiner (Google, Inc.),
Raphaël Troncy (EURECOM),
Davy Van Deursen (IBBT).
</p><p>
The people who have contributed to <a href="http://lists.w3.org/Archives/Public/public-media-fragment/">
discussions on public-media-fragment@w3.org
</a> are also gratefully acknowledged. In particular:
Olivier Aubert, Werner Bailer, Pierre-Antoine Champin, Cyril Concolato, Franck Denoual, Martin J. Dürst,
Jean Pierre Evain, Ken Harrenstien, Kilroy Hughes, Ryo Kawaguchi, Wim Van Lancker,
Véronique Malaisé, Henrik Nordstrom, Yannick Prié, Yves Raimond, Julian Reschke, Geoffrey Sneddon,
Felix Sasaki, Jakub Sendor, Philip Taylor, Christian Timmerer, Jorrit Vermeiren, Jeroen Wijering and Munjo Yu.
</p></div></div></body></html>