<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html lang='en' xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
<head>
<meta name="generator" content=
"HTML Tidy for Linux/x86 (vers 12 April 2005), see www.w3.org" />
<title>Media Fragments Working Group Teleconference -- 20 Oct
2008</title>
<link type="text/css" rel="STYLESHEET" href=
"http://www.w3.org/StyleSheets/base.css" />
<link type="text/css" rel="STYLESHEET" href=
"http://www.w3.org/StyleSheets/public.css" />
<link type="text/css" rel="STYLESHEET" href=
"http://www.w3.org/2004/02/minutes-style.css" />
<meta content="Media Fragments Working Group Teleconference"
name="Title" />
<meta content="text/html; charset=utf-8" http-equiv=
"Content-Type" />
</head>
<body>
<p><a href="http://www.w3.org/"><img src=
"http://www.w3.org/Icons/w3c_home" alt="W3C" border="0" height=
"48" width="72" /></a></p>
<h1>Media Fragments Working Group Teleconference</h1>
<h2>20 Oct 2008</h2>
<p><a href=
'http://www.w3.org/2008/WebVideo/Fragments/wiki/FirstF2FAgenda'>Agenda</a></p>
<p>See also: <a href=
"http://www.w3.org/2008/10/20-mediafrag-irc">IRC log</a></p>
<h2><a name="attendees" id="attendees">Attendees</a></h2>
<div class="intro">
<dl>
<dt>Present</dt>
<dd>Iles_C</dd>
<dt>Regrets</dt>
<dt>Chair</dt>
<dd>Erik, Raphael</dd>
<dt>Scribe</dt>
<dd>Jack</dd>
</dl>
</div>
<h2>Contents</h2>
<ul>
<li>
<a href="#agenda">Topics</a>
<ol>
<li><a href="#item01">1. Round of introductions</a></li>
<li><a href="#item02">2. Use Cases Discussion (Part
1)</a></li>
<li><a href="#item03">3. Use Case Discussion (Part
2)</a></li>
<li><a href="#item04">Media Delivery use case</a></li>
</ol>
</li>
<li><a href="#ActionSummary">Summary of Action Items</a></li>
</ul>
<hr />
<div class="meeting">
<p class='irc'><<cite>trackbot</cite>> Date: 20 October
2008</p>
<p class='irc'><<cite>nessy</cite>> Meeting opened
9:08</p>
<h3 id="item01">1. Round of introductions</h3>
<p class='irc'><<cite>nessy</cite>> Raphael</p>
<p class='irc'><<cite>nessy</cite>> Erik</p>
<p class='irc'><<cite>raphael</cite>> scribenick:
raphael</p>
<p class='phone'><cite>Davy:</cite> also in Multimedia Lab,
IBBT, Ghent (BE)</p>
<p class='phone'><cite>Silvia:</cite> involved in MPEG-7,
MPEG-21, developed Annodex (annotation format for Ogg media
files)<br />
... started my own start-up for measuring the audience of video
on the web + consultant for Mozilla<br />
... developed the TemporalURI specification, 6 years ago</p>
<p class='phone'>Guillaume Olivrin, South Africa, focus on
accessibility, how do you attach specific semantics to parts of
media</p>
<p class='phone'>Daniel Park, Samsung, co-chair of the Media
Annotations WG, focus on IPTV (background in wireless
networking)</p>
<p class='phone'>Andy Heath, Open University, UK, background in
e-learning, but develops far more general technologies, focus on
accessibility</p>
<p class='phone'><cite>scribe:</cite> experience in standards
such as LOM, DC, SCORM</p>
<p class='phone'>Colm Doyle: Blinkx</p>
<p class='phone'>Larry Masinter: Adobe, experience in
co-chairing HTTP group, focus on acquisition of metadata</p>
<p class='phone'>Khang Cham, Samsung, focus on IPTV</p>
<p class='phone'><cite>Yves:</cite> W3C team contact, expertise
in protocols, web services</p>
<p class='irc'><<cite>nessy</cite>> <a href=
"http://www.w3.org/2008/01/media-fragments-wg.html">http://www.w3.org/2008/01/media-fragments-wg.html</a></p>
<p class='irc'><<cite>nessy</cite>> ... working group
charter</p>
<p class='phone'><cite>Larry:</cite> important to first define
the requirements for what these URIs will be used for<br />
... it might happen that you cannot satisfy all the
requirements with a URI; don't put that out of scope now</p>
<h3 id="item02">2. Use Cases Discussion (Part 1)</h3>
<p class='phone'>Photo Use Case: <a href=
"http://www.w3.org/2008/WebVideo/Fragments/wiki/Use_Cases_%26_Requirements#Photobook_UC">
http://www.w3.org/2008/WebVideo/Fragments/wiki/Use_Cases_%26_Requirements#Photobook_UC</a></p>
<p class='phone'>Slides at: <a href=
"http://www.w3.org/2008/WebVideo/Fragments/meetings/2008-10-20-f2f_cannes/photobook_UC.pdf">
http://www.w3.org/2008/WebVideo/Fragments/meetings/2008-10-20-f2f_cannes/photobook_UC.pdf</a></p>
<p class='phone'>Erik goes through the slides</p>
<p class='phone'><cite>Erik:</cite> take parts of images ...
and assemble them together in a slideshow</p>
<p class='phone'><cite>Guillaume:</cite> the value of the
fragments here is unclear<br />
... I understand a fragment as taking a part of a larger thing</p>
<p class='phone'><cite>Larry:</cite> is it at all worth looking
at spatial URIs? Is it for doing partial retrieval?</p>
<p class='phone'><cite>Raphael:</cite> mentions map
applications</p>
<p class='phone'><cite>Larry:</cite> but they are
interactive!</p>
<p class='phone'><cite>Raphael:</cite> mentions multi-resolution
images; the image industry has a huge need and willingness to
expose high-resolution versions of images</p>
<p class='phone'><cite>Larry:</cite> they do have JPEG2000 and
protocols</p>
<p class='phone'><cite>Silvia:</cite> SMIL has elaborated on
the need for spatial fragments</p>
<p class='phone'><cite>Jack:</cite> important needs in the SMIL
community and SVG ... image maps, pan zoom, cropping</p>
<p class='phone'><cite>Erik:</cite> continues the presentation,
after temporally assemble parts of images into a slideshow,
assemble two parts of an image into a new one (stitch)<br />
... Existing technologies: RSS and Atom for the playlist
generation<br />
... W3C SMIL: XML-based markup language, requires a SMIL
player<br />
... MPEG-21: Part 17 for fragment identification of MPEG
Resources, client-side processing ... pseudo playlist<br />
... MPEG-A: MAF (Media Application Format) that combines MPEG
technologies<br />
... XSPF (spiff): XML Shareable Playlist Format: Xiph
Community<br />
... Discussion: is it out of scope or not? specific use cases
around? other technologies around?</p>
<p class='phone'><cite>Silvia:</cite> we are mainly looking at
audio and video files, but a video is a sequence of images</p>
<p class='phone'><cite>Larry:</cite> there are different
servers and clients</p>
<p class='phone'><cite>Silvia:</cite> one criterion to look at
is: is it a pure client-side issue, or a server-side +
client-side problem?</p>
<p class='phone'><cite>Larry:</cite> even if it is only a
client-side issue, it might be worth doing some
standardisation<br />
... the main point of still-image fragments is the
interactivity</p>
<p class='phone'><cite>Raphael:</cite> is interactivity the key
interest in spatial fragments?</p>
<p class='phone'><cite>Larry:</cite> there is a lot of work in
this area, would recommend focusing on the temporal issue<br />
... it is also a good exercise to look at the out-of-scope use
cases; it helps to shape the scope</p>
<p class='phone'><cite>Jack:</cite> a URI is good because it is
the web; the client is not necessarily aware of the time
dimension<br />
... HTML already has a notion of Area, so don't encode it in a
URI</p>
<p class='phone'><cite>Larry:</cite> need to be careful about
URIs, resources, representations<br />
... example of an image: need to decode it, take the parts,
re-encode it<br />
... JPEG2000 might have a direct way to do that</p>
<p class='phone'><cite>Guillaume:</cite> create mosaic, collage
of parts of media</p>
<p class='phone'><cite>Yves:</cite> it depends if the
transformation needs to be on the client or not</p>
<p class='phone'><cite>Jack:</cite> be careful not to put SVG
in a URI :-)<br />
... need a good balance between which processing can be on the
client side and what is worth putting in a URL<br />
... is it better to have the processing in the URL?</p>
<p class='phone'><cite>Erik:</cite> we question again the
interest of the spatial fragment</p>
<p class='phone'><cite>Silvia:</cite> is it a question of the
size of the media? Large: worth having a fragment; small: not
worth it</p>
<p class='phone'><cite>Larry:</cite> define what you mean by
media<br />
... it is reasonable to limit yourself to videos</p>
<p class='phone'><cite>Silvia:</cite> SMIL and Flash are
interactive media, not necessarily one timeline<br />
... we focus on a resource with one timeline<br />
... there is a whole suite of codec issues</p>
<p class='phone'><cite>Larry:</cite> define markers in
videos</p>
<p class='irc'><<cite>Yves</cite>> time... what is the
reference of time for a video, embedded time code? 0 for the
start?</p>
<p class='phone'>Coffee break</p>
<p class='phone'>Map Use Case: <a href=
"http://www.w3.org/2008/WebVideo/Fragments/wiki/Use_Cases_%26_Requirements/Map_Application_UC">
http://www.w3.org/2008/WebVideo/Fragments/wiki/Use_Cases_%26_Requirements/Map_Application_UC</a></p>
<p class='irc'><<cite>scribe</cite>> scribenick: erik</p>
<p class='phone'><cite>Raphael:</cite> Map UC Description</p>
<p class='irc'><<cite>nessy</cite>> <a href=
"http://www.w3.org/2008/WebVideo/Fragments/wiki/Use_Cases_%26_Requirements">
http://www.w3.org/2008/WebVideo/Fragments/wiki/Use_Cases_%26_Requirements</a></p>
<p class='phone'><cite>Raphael:</cite> Annotation is key</p>
<p class='irc'><<cite>Kangchan</cite>> Question: what is the
relation between the Geolocation Working Group (<a href=
"http://www.w3.org/2008/geolocation/">http://www.w3.org/2008/geolocation/</a>)
and Web Map Services?</p>
<p class='phone'><cite>Raphael:</cite> UC examples using Yahoo,
Google & Microsoft</p>
<p class='phone'><cite>Jack:</cite> what we see here are URIs
for the applications, not images</p>
<p class='phone'>Raphael will look deeper into different specs
over the next couple of weeks for this Map UC</p>
<p class='phone'>Davy & Jack: is this a valid UC? will our
spatial URL addressing scheme be used by map
applications?</p>
<p class='phone'><cite>Raphael:</cite> as Larry said this
morning, out-of-scope UCs are valid for coming up with our final
WG scope</p>
<p class='irc'><<cite>guillaume</cite>> Must document the
out of scope UC to explain why it is out of scope.</p>
<p class='phone'><cite>Silvia:</cite> there might be a UC when
we are talking about really large images (cf. medical images
at really high resolutions)<br />
... having a way to get a subpart of such a big image is nice
to have, but implementation is something different ... a lot of
complications, certainly in some server-side
implementations</p>
<p class='phone'><cite>Guillaume:</cite> codec issues are not to
be underestimated; a nice addressing scheme vs. server-side
complexity</p>
<p class='phone'><cite>Silvia:</cite> we should look further than
just server-side complexity; solutions for certain codecs will
come around eventually if needed</p>
<p class='phone'><cite>Jack:</cite> practical issues vs.
fundamental issues have to be taken into account within this
group<br />
... media fragments are needed because some things cannot be
expressed today</p>
<p class='phone'><cite>Raphael:</cite> is it worth having an
overview of the TimedText WG?</p>
<p class='irc'><<cite>nessy</cite>> Guillaume: URI
fragment identifier for text/plain: <a href=
"http://www.ietf.org/rfc/rfc5147.txt">http://www.ietf.org/rfc/rfc5147.txt</a></p>
<p class='irc'><<cite>Yves</cite>> (multi-resolution
formats, <a href=
"http://en.wikipedia.org/wiki/FlashPix">http://en.wikipedia.org/wiki/FlashPix</a>
is a good example of a single file containing multiple
resolutions, maybe better than the map application)</p>
<p class='phone'><cite>Raphael:</cite> Zoomify is a good example
of a UC with very big images (life sciences) using fragments<br />
... is it the task of this group to ensure interoperability between
different standards? (e.g. MPEG-21 URI to SVG)</p>
<p class='phone'><cite>Silvia:</cite> defining the mappings
should be out of scope for this WG</p>
<p class='phone'><cite>Jack:</cite> what is worthwhile is testing
our scheme against the others out there</p>
<p class='phone'><cite>Silvia:</cite> that is the last thing to do &
should be straightforward by then if we did a good job</p>
<p class='phone'><cite>Raphael:</cite> what about spatial
dimension?</p>
<p class='phone'><cite>Silvia:</cite> the temporal addressing need
is biggest, but the spatial addressing need is also valid</p>
<h3 id="item03">3. Use Case Discussion (Part 2)</h3>
<p class='phone'>Silvia presenting the Media Annotation UC</p>
<p class='irc'><<cite>raphael</cite>> <a href=
"http://www.w3.org/2008/WebVideo/Fragments/wiki/Use_Cases_%26_Requirements#Media_Annotation_UC">
http://www.w3.org/2008/WebVideo/Fragments/wiki/Use_Cases_%26_Requirements#Media_Annotation_UC</a></p>
<p class='irc'><<cite>raphael</cite>> Silvia: Annotation
can be attached to the full media resource or to fragments of
media resources</p>
<p class='irc'><<cite>raphael</cite>> scribenick:
raphael</p>
<p class='phone'><cite>Silvia:</cite> annotations on fragments
are relevant for this group</p>
<p class='phone'><cite>Guillaume:</cite> can the structure of
the video be represented in the URI</p>
<p class='phone'><cite>Silvia:</cite> difference between the
representation of the fragment and its semantics</p>
<p class='irc'><<cite>spark3</cite>> if necessary, what
about adding a new UC (naming use case for fragment) into the
Media Annotation WG UC ?</p>
<p class='phone'><cite>Silvia:</cite> drawing on the board</p>
<p class='irc'><<cite>erik</cite>> Jack: there's only 1
timeline for timed media</p>
<p class='irc'><<cite>erik</cite>> Jack: there's only 1
coordinate system for spatial media</p>
<p class='irc'><<cite>erik</cite>> Jack: Annotation UC is
important because we're reasoning on a higher abstraction
level</p>
<p class='phone'><cite>Jack:</cite> loves that use case since
it is purely about fundamental description and indexing of
media</p>
<p class='phone'><cite>Silvia:</cite> goes through the
advantages of a possible URI scheme for media fragments<br />
... actually motivating the need for media fragments<br />
... shows the picture at <a href=
"https://wiki.mozilla.org/Image:Video_Fragment_Linking.jpg">https://wiki.mozilla.org/Image:Video_Fragment_Linking.jpg</a><br />
... jumps into the track problems<br />
... there are actually 3 dimensions: space, time and track<br />
... temporalURI just deals with cropping, no track awareness</p>
<p class='phone'><cite>Jack:</cite> rename this use case into
'Anchoring'<br />
... annotation = RDF community<br />
... structuring = SMIL community</p>
<p class='phone'><cite>Silvia:</cite> agree to rename it into
Media Anchor Definition</p>
<p class='phone'>Lunch break</p>
<p class='phone'>Media Delivery Use Case: <a href=
"http://www.w3.org/2008/WebVideo/Fragments/wiki/Use_Cases_%26_Requirements#Media_Delivery_UC">
http://www.w3.org/2008/WebVideo/Fragments/wiki/Use_Cases_%26_Requirements#Media_Delivery_UC</a></p>
<p class='irc'><<cite>scribe</cite>> scribenick: Jack</p>
<p class='irc'><<cite>Yves</cite>> Scribe: Jack</p>
<p class='irc'><<cite>jackjansen</cite>> scribenick:
jackjansen</p>
<p class='irc'><<cite>raphael</cite>> scribenick:
jackjansen</p>
<h3 id="item04">Media Delivery use case</h3>
<p class='irc'><<cite>raphael</cite>> Davy going through
the slide at: <a href=
"http://www.w3.org/2008/WebVideo/Fragments/meetings/2008-10-20-f2f_cannes/media_delivery_UC.pdf">
http://www.w3.org/2008/WebVideo/Fragments/meetings/2008-10-20-f2f_cannes/media_delivery_UC.pdf</a></p>
<p class='phone'><cite>Various:</cite> (discussing slide 3, #
vs. ? or ,): Can we use # as the only user-visible marker and
use http-ranges or something similar?</p>
<p class='irc'><<cite>raphael</cite>> Silvia drawing a
communication channel between UA and servers</p>
<p class='irc'><<cite>raphael</cite>> Discussion about
the use of the "hash" character</p>
<p class='irc'><<cite>raphael</cite>> Yves: if the use case
is to extract a frame of a video and create a new image (so a
new resource), use a '?'</p>
<p class='irc'><<cite>raphael</cite>> ... if the use case is
to keep the context, use a '#'</p>
<p class='irc'><<cite>raphael</cite>> Summary: there are
use cases for both, should be further discussed tomorrow
morning</p>
<p class='phone'><cite>summary:</cite> there are use cases for
both. We will get back to the subject tomorrow.</p>
<p class='irc'><<cite>guillaume</cite>> déjà vu</p>
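<p>For illustration of the '#' vs. '?' question above, a minimal
sketch (Python) of what actually reaches the server in each case;
both URIs are illustrative, and the '?' form is an assumed
query-style variant:</p>
<pre>
from urllib.parse import urlsplit

# '#' keeps the fragment on the client; '?' makes it part of the resource name.
fragment_uri = "http://www.example.com/resource.ogv#t=20-30"  # context-preserving form
query_uri = "http://www.example.com/resource.ogv?t=20-30"     # hypothetical '?' form

for uri in (fragment_uri, query_uri):
    parts = urlsplit(uri)
    request_target = parts.path + ("?" + parts.query if parts.query else "")
    print(request_target, "| kept by the user agent:", parts.fragment or "(nothing)")

# prints:
#   /resource.ogv | kept by the user agent: t=20-30
#   /resource.ogv?t=20-30 | kept by the user agent: (nothing)
</pre>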
<p class='irc'><<cite>raphael</cite>> Davy: explains the
MPEG-21 Fragment identification</p>
<p class='irc'><<cite>raphael</cite>> ... use of the '#',
but no delivery protocol</p>
<p class='irc'><<cite>raphael</cite>> ... mentions also
the proposal of Dave Singer: UA gets the first N bytes representing
the headers with timing and byte offset information of the
media resource</p>
<p class='irc'><<cite>raphael</cite>> ... goes through an
explanation of MPEG-21: <a href=
"http://www.w3.org/2008/WebVideo/Fragments/wiki/State_of_the_Art#MPEG-21_Part_17:_Fragment_Identification_of_MPEG_Resources_.28Davy_.2F_Silvia.29">
http://www.w3.org/2008/WebVideo/Fragments/wiki/State_of_the_Art#MPEG-21_Part_17:_Fragment_Identification_of_MPEG_Resources_.28Davy_.2F_Silvia.29</a></p>
<p class='irc'><<cite>raphael</cite>> ... 4 schemes</p>
<p class='irc'><<cite>raphael</cite>> ... ffp for the
track</p>
<p class='irc'><<cite>raphael</cite>> ... offset for
byte ranges</p>
<p class='phone'><cite>all:</cite> discussing #mp() scheme</p>
<p class='irc'><<cite>raphael</cite>> ... mp for
specifying the temporal or spatial fragment (only for MPEG
mime-type resources)</p>
<p class='phone'><cite>Silvia:</cite> whoever controls the
mimetype also controls what is after the # in a URL</p>
<p class='phone'><cite>Jack:</cite> is surprised, but
pleasantly so.</p>
<p class='irc'><<cite>raphael</cite>> Davy: the 4th
scheme is 'mask' (only for MPEG resources)</p>
<p class='irc'><<cite>raphael</cite>> Jack: seems they
structure the video resource and point towards this
structure</p>
<p class='irc'><<cite>raphael</cite>> Raphael: how many
user agents can understand this syntax?</p>
<p class='phone'><cite>all:</cite> none, that we know of</p>
<p class='irc'><<cite>raphael</cite>> Davy: I'm not aware
of ... although there is a reference implementation</p>
<p class='irc'><<cite>raphael</cite>> Larry: http is not
necessarily the best protocol to transport video</p>
<p class='irc'><<cite>Yves</cite>> in video, it depends
if you want exact timing, control of the lag, and in that case
HTTP is not the best choice</p>
<p class='irc'><<cite>raphael</cite>> Silvia: I would say
that most videos are transported over http</p>
<p class='irc'><<cite>raphael</cite>> ... RTP and RTSP
have their own fragments, we should learn from them</p>
<p class='irc'><<cite>raphael</cite>> ... if they do not
satisfy all our requirements, we can feed them so they extend
the use of fragments in these protocols</p>
<p class='irc'><<cite>raphael</cite>> Davy: goes through
TemporalURI</p>
<p class='irc'><<cite>raphael</cite>> ... this is the
only one that specifies a delivery protocol over http</p>
<p class='phone'><cite>Silvia:</cite> Real used to allow
something similar to temporal URLs</p>
<p class='phone'><cite>Jack:</cite> thinks it may be part of
the .ram files</p>
<p class='phone'><cite>Guillaume:</cite> Flash allows the doc
author to export subparts by name; these can then be accessed
with url#name</p>
<p class='phone'><cite>Davy:</cite> continues with slide 6,
http media delivery</p>
<p class='irc'><<cite>guillaume</cite>> Guillaume: Flash
can also embed internal links in a movie attached to certain
frames. Once compiled with a specific option, fragments of the
Flash movie can be accessed using #</p>
<p class='irc'><<cite>raphael</cite>> Silvia: draw the
four-way handshake</p>
<p class='irc'><<cite>raphael</cite>> ... 1st exchange:
User requests <a href=
"http://www.example.com/resource.ogv#t=20-30">http://www.example.com/resource.ogv#t=20-30</a></p>
<p class='irc'><<cite>raphael</cite>> ... UA does a GET
<uri stripped of hash>, Range: time 20-30</p>
<p class='irc'><<cite>raphael</cite>> ... Server sends
back a Response 200, with the content-range: time 20-30 +
content-type + ogg header + time-range bytes 5000-20000</p>
<p class='irc'><<cite>raphael</cite>> ... (needs to
create a new http header, 'time-range')</p>
<p class='irc'><<cite>raphael</cite>> Raphael: can we use
content-range: bytes ... ?</p>
<p class='irc'><<cite>raphael</cite>> ... UA does a GET
<URI stripped of the hash>, Range: bytes 5000-20000</p>
<p class='irc'><<cite>raphael</cite>> ... Server sends
back a Response 200, with the content-range bytes + the cropped
data</p>
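<p>A minimal sketch of the four-way handshake drawn on the board.
The 'Range: time' request header and the 'time-range' response
header are the hypothetical headers proposed above (not standard
HTTP), and the server behaviour is assumed, not implemented
anywhere:</p>
<pre>
import http.client

conn = http.client.HTTPConnection("www.example.com")

# Exchange 1: the UA strips the hash from resource.ogv#t=20-30 and asks
# for the time range directly (hypothetical 'Range: time' header).
conn.request("GET", "/resource.ogv", headers={"Range": "time 20-30"})
resp1 = conn.getresponse()
setup_headers = resp1.read()             # e.g. the Ogg headers needed for playback
mapping = resp1.getheader("time-range")  # e.g. "bytes 5000-20000" (assumed mapping)

# Exchange 2: the UA fetches the byte range that the server mapped the
# time range to (written here in standard HTTP byte-range syntax).
conn.request("GET", "/resource.ogv", headers={"Range": "bytes=5000-20000"})
resp2 = conn.getresponse()
cropped_data = resp2.read()              # the media data for seconds 20 to 30

print(resp1.status, mapping, resp2.status, len(cropped_data))
</pre>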
<p class='irc'><<cite>raphael</cite>> Silvia: it is not
implemented yet as far as I know</p>
<p class='irc'><<cite>raphael</cite>> ... discussion
based on a lot of discussions with proxy vendors</p>
<p class='irc'><<cite>raphael</cite>> Davy: could we
apply the same four-way handshake with RTSP?</p>
<p class='irc'><<cite>raphael</cite>> ... RTSP specifies
a Range Header, similar to the HTTP byte range mechanism</p>
<p class='irc'><<cite>raphael</cite>> ... RTSP could
support temporal fragments by a two-way handshake (using Range
header)</p>
<p class='irc'><<cite>raphael</cite>> ... Problem:
spatial fragments are not supported!</p>
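<p>For comparison, a minimal sketch of the RTSP Range header just
mentioned: RTSP (RFC 2326) can already carry a temporal range in a
single PLAY request. Session set-up (DESCRIBE/SETUP) is omitted and
the URL and session id are illustrative:</p>
<pre>
# One RTSP PLAY request carrying a temporal range in npt (normal play time);
# this is the two-way handshake mentioned above: no byte mapping is needed.
play_request = (
    "PLAY rtsp://www.example.com/resource.ogv RTSP/1.0\r\n"
    "CSeq: 4\r\n"
    "Session: 12345678\r\n"
    "Range: npt=20-30\r\n"
    "\r\n"
)
print(play_request)
</pre>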
<p class='irc'><<cite>raphael</cite>> Jack: the spatial
problem is kind of orthogonal</p>
<p class='irc'><<cite>raphael</cite>> ... the spatial
fragment will not be about bytes range</p>
<p class='irc'><<cite>raphael</cite>> Davy: cropping is
more complex in images</p>
<p class='irc'><<cite>raphael</cite>> Jack: you're right,
I can create a non-contiguous QuickTime movie</p>
<p class='irc'><<cite>raphael</cite>> ... problem is it
is not necessarily possible to generate a byte range from a
time range</p>
<p class='irc'><<cite>raphael</cite>> Silvia: a single
byte range</p>
<p class='phone'><cite>all:</cite> the non-contiguous ranges
may occur more often than we like. But maybe<br />
... we can get away with ignoring them (because all relevant
formats also have a contiguous form).<br />
... need to discuss after the break.</p>
<p class='phone'><cite>raphael:</cite> suggest coffee break</p>
<p class='irc'><<cite>guillaume</cite>> or need to
coalesce</p>
<p class='phone'><cite>Larry:</cite> please decouple the
representation of how you refer to fragments from the
implementations<br />
... Also think about embedded metadata: if the original has a
copyright statement, do you get it with every fragment?</p>
<p class='phone'><cite>Silvia:</cite> (on the previous subject)
wonders whether HTTP can do multiple byte ranges</p>
<p class='phone'><cite>Larry:</cite> yes, I think so, with
multipart</p>
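<p>A minimal sketch of what Larry refers to: HTTP/1.1 lets a client
request several byte ranges at once, and a server that supports it
answers 206 with a multipart/byteranges body. The URL is illustrative
and many servers only honour single ranges:</p>
<pre>
import http.client

conn = http.client.HTTPConnection("www.example.com")
# Two byte ranges in one request, e.g. the header bytes plus one media chunk.
conn.request("GET", "/resource.ogv",
             headers={"Range": "bytes=0-4999,50000-59999"})
resp = conn.getresponse()

# A server supporting multi-range requests replies:
#   206 Partial Content
#   Content-Type: multipart/byteranges; boundary=...
# with one body part (each carrying its own Content-Range) per requested range.
print(resp.status, resp.getheader("Content-Type"))
</pre>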
<p class='irc'><<cite>davy</cite>> scribenick: davy</p>
<p class='phone'>Media Linking UC</p>
<p class='phone'>raphael discusses the description written by
Michael on the wiki</p>
<p class='phone'><cite>scribe:</cite> 3 things: bookmarking,
playlists, and interlinking multimedia</p>
<p class='phone'><cite>silvia:</cite> definition of playlists
is out of scope</p>
<p class='phone'><cite>guillaume:</cite> playlist is about
presentation</p>
<p class='phone'><cite>raphael:</cite> regarding interlinking:
temporal URIs can be described in RDF (an RDF doc describing an
audio file)<br />
... difference between URI and RDF (or SMIL, or ...): you need
to parse the metadata<br />
... an RDF description of a time segment could be replaced by a
temporal URI</p>
<p class='phone'><cite>silvia:</cite> interlinking multimedia
is already covered in other UCs</p>
<p class='phone'>Video Browser UC</p>
<p class='phone'><cite>silvia:</cite> large media files
introduce special challenges<br />
... requirement for server-side processing<br />
... dynamic creation of thumbnails through a URI mechanism</p>
<p class='phone'><cite>guillaume:</cite> link to PNG or
GIF<br />
... provide a preview function of the resource<br />
... trivial: get all the I-frames of a video resource<br />
... use them as thumbs<br />
... thumbnail extraction is quite easy</p>
<p class='phone'>silvia, jack: not so trivial, might be
processing-intensive</p>
<p class='phone'><cite>silvia:</cite> it should be possible to
point to one single frame with the URI scheme</p>
<p class='phone'><cite>jack:</cite> URI scheme should not know
that frame is 'the' thumbnail</p>
<p class='phone'><cite>guillaume:</cite> you can have multiple
thumbs per resource</p>
<p class='phone'><cite>raphael:</cite> URI scheme can point to
a frame, but does not have knowledge about thumbs<br />
... should we be able to address in terms of frames?</p>
<p class='phone'><cite>guillaume:</cite> no, too
coding-specific</p>
<p class='phone'><cite>silvia:</cite> previews of images?<br />
... preview is then a lower resolution image</p>
<p class='phone'><cite>guillaume:</cite> that is
processing<br />
... mostly, previews are already part of the media
resource<br />
... hence lower image resolutions are out of scope</p>
<p class='phone'><cite>jack:</cite> not too far?<br />
... is a preview embedded in a resource still a fragment?</p>
<p class='phone'><cite>guillaume:</cite> compare it with
tracks<br />
... preview is just another track</p>
<p class='phone'><cite>raphael:</cite> we keep this in mind and
make a decision later</p>
<p class='phone'><cite>silvia:</cite> previews are another sort
of tracks</p>
<p class='phone'><cite>raphael:</cite> should we also be
able to address metadata within the headers?</p>
<p class='phone'><cite>silvia:</cite> it is not a common
property of all the formats to have previews, therefore, it is
not a candidate to be standardized</p>
<p class='phone'><cite>raphael:</cite> after the first phase of
the WG: report the current limitations<br />
... and wait for feedback</p>
<p class='phone'>Moving Point Of Interest UC</p>
<p class='phone'><cite>raphael:</cite> complex UC<br />
... should be for the second phase</p>
<p class='phone'><cite>jack:</cite> is this ever going to be
used server-side?<br />
... if not, it is out of scope</p>
<p class='phone'><cite>raphael:</cite> you can share the link
of the moving region</p>
<p class='phone'><cite>erik:</cite> delivery to mobile devices
is a use case introduced by the public Flemish broadcaster</p>
<p class='phone'><cite>jack:</cite> there is no reason to use
URIs for that purpose, use metadata</p>
<p class='phone'><cite>raphael:</cite> it is like concatenating
spatial fragments over time</p>
<p class='phone'><cite>guillaume:</cite> we are addressing
points over space or time</p>
<p class='phone'><cite>raphael:</cite> refer to HTML image
maps<br />
... a region or interval can be defined by a combination of
points<br />
... you need more than one point</p>
<p class='phone'>Issues</p>
<p class='phone'><cite>raphael:</cite> we will discuss this
tomorrow</p>
</div>
<h2><a name="ActionSummary" id="ActionSummary">Summary of Action
Items</a></h2><!-- Action Items -->
[End of minutes]<br />
<hr />
<address>
Minutes formatted by David Booth's <a href=
"http://dev.w3.org/cvsweb/~checkout~/2002/scribe/scribedoc.htm">
scribe.perl</a> version 1.133 (<a href=
"http://dev.w3.org/cvsweb/2002/scribe/">CVS log</a>)<br />
$Date: 2008/10/26 10:50:37 $
</address>
</body>
</html>