Friday, February 9, 2007


Yesterday, I added these Mojiti spots to Michael Wesch's second draft of his "Web 2.0 ... The Machine is Us/ing Us". The Mojiti interface limits meta analysis, so I copied from the spotter's interface to see whether I can use it for meta reflection. Blogger strips the tabular formatting and keeps links (edit and delete) that are only useful to me. Note that some of the feedback here is really for Mojiti as much as it is for Wesch.

I tried to improve a spot that another user contributed, but the Mojiti interface obliterated the text and would not let me modify the link text. I ended up having to create a new spot and delete the one the other user had started rather than simply revising it. That is not a problem for such short text, but I hope they improve the tool.

4 Spots (4 created by You)

Created by You | Created on 02/08/2007

Visit Mike Wesch's homepage at Kansas State University.

Start: 00:00:05
End: 00:00:13
Spot Type: Normal

Created by You | Created on 02/08/2007

Wesch has created a surprising irony here. He uses video to present text. But only by superimposing Mojiti can we turn his representation of text into links!

Start: 00:00:32
End: 00:00:42
Spot Type: Normal

Created by You | Created on 02/08/2007

Kevin Kelly published the article "We Are the Web" in Wired in August 2005.

Start: 00:03:12
End: 00:03:25
Spot Type: Normal

Created by You | Created on 02/08/2007

To add a clickable hyperlink, select the text, right-click your mouse, and choose the "Add Hyperlink" option. On Macs, the ctrl+click equivalent works in the initial window but not in the revising window.

Start: 00:03:25
End: 00:03:35
Spot Type: Normal


Almansi said...


I got here from your Mojiti spot set on Michael Wesch's video. Re "the Mojiti interface limits meta analysis": imagine if the text allowed those funny codes where the content of a linked page can be previewed without leaving the first one. You could view this way a video response to Wesch's video seen through another Mojiti page, etc...

(Granted, the prospect of this kind of meta is a trifle e-metic).

How did you manage to copy your whole series of spots, please?



SC Spaeth said...

Thanks for your contributions to the international, multilingual, multimedia collaboration. I hope that we can demonstrate the power of such analyses so that the community will develop tools to support them.

I captured the list of my spots for Wesch's video by "showing" the list after I logged into my Mojiti account. The following URL should take you directly to that page:

After I found the structure of the URL that displays my list, I substituted the list ID from your list, "3443", and found a similar list of your spots:

"For an English transcript started by Jesper Rønn-Jensen - see his blog entry Web 2.0 Video: Just Text or Rethink Our Future? web-video-20-just-text-or-rethink-our-future/. The complete transcript is now available from Web 2.0 video: complete transcript. so I have unlocked this set: feel free to continue by pasting the transcript bit by bit into these comments/spots

Created by calmansi | Created on 02/08/2007 | 141 views
Web 2.0 ... The Machine is Us/ing Us

37 Spots (0 created by You)
Created by calmansi | Created on 02/08/2007

Text is linear
Start: 00:00:02
End: 00:00:07
Spot Type: Normal
... "

The source for the page is XML, so you can extract just the parts that you want.


SC Spaeth said...

Note: If you use the show links in the previous comment, e. g.,
and are not currently logged into an account, Mojiti challenges you with a login screen before it will allow you to view the list. There you can either log in to an existing account or create a new one. They understand recruitment through viral marketing.

SC Spaeth said...

At the end of my initial reply, I incorrectly wrote that the source was XML. In fact, it is highly structured HTML. The page uses many class attributes on span and div tags, so it still may be possible to extract the desired information.
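Since the page is structured HTML with class attributes, the extraction could be sketched with a small standard-library parser. The sample markup and class names below ("spot_text", "spot_time") are hypothetical stand-ins for illustration, not Mojiti's actual markup:

```python
from html.parser import HTMLParser

class SpotExtractor(HTMLParser):
    """Collect the text of every <span> whose class matches a target."""

    def __init__(self, target_class):
        super().__init__()
        self.target_class = target_class
        self.in_target = False
        self.results = []

    def handle_starttag(self, tag, attrs):
        # Enter "collect" mode when we hit a span with the target class
        if tag == "span" and dict(attrs).get("class") == self.target_class:
            self.in_target = True

    def handle_endtag(self, tag):
        if tag == "span":
            self.in_target = False

    def handle_data(self, data):
        if self.in_target and data.strip():
            self.results.append(data.strip())

def extract_spans(html, target_class):
    parser = SpotExtractor(target_class)
    parser.feed(html)
    return parser.results

# Markup mimicking the structure described above; the class names
# are assumptions, not Mojiti's real ones.
sample = """
<div class="spot">
  <span class="spot_text">Text is linear</span>
  <span class="spot_time">00:00:02</span>
</div>
"""
print(extract_spans(sample, "spot_text"))  # → ['Text is linear']
```

With the real class names read off the page source, the same pattern would pull out just the caption text or just the timings.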

Almansi said...

Thank you for all the explanations. The "showing" way is great. You see, I'm interested in making accessible captioning with SMIL, and a whole captured spot set might easily be translated into the .txt file that goes with the .smil file.
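That translation step could be sketched as follows. Since the exact .txt format the SMIL player expects isn't specified here, this sketch emits a simple "start --> end" block per spot as a stand-in; the spot tuples are taken from the lists shown above:

```python
# Turn a captured spot set (start, end, text) into a timed-text file.
# The output format below is an assumption, not a confirmed SMIL
# companion format.

spots = [
    ("00:00:02", "00:00:07", "Text is linear"),
    ("00:00:05", "00:00:13", "Visit Mike Wesch's homepage."),
]

def spots_to_timed_text(spots):
    blocks = []
    for start, end, text in spots:
        blocks.append(f"{start} --> {end}\n{text}")
    return "\n\n".join(blocks) + "\n"

print(spots_to_timed_text(spots))
```

Whatever format the target player actually wants, only the formatting line inside the loop would need to change.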

Re Jesper's transcript: actually, he invited me to the Google Docs page where he was working on it. But working together at the same time on the same Docs page doesn't work well, so I started the Mojiti transcription spot set and then copy-pasted the spots one by one into the Docs page. I will continue the transcription spot set, as you say it makes sense.

It's been a great experience for me. First and foremost because of the intrinsic interest of Michael Wesch's video, of course. But also because it demonstrated what was a hope for me: the possibility to informally yet efficiently collaborate on captioning.

When you caption alone, it takes a while to get into the automatic routines that speed things up. And then after a while - as when you play an instrument - automatisms go awry, and you have to stop, then start again, etc.

Yet captioning - or at least transcribing if you can't caption - is one of the W3C accessibility requirements for multimedia - and a matter of common decency and common sense.

And then there is the issue of cultural diversity. Doing captioning and translated captioning is damned expensive - or entails scandalous exploitation of workers. CastingWords charges $0.45 per minute of audio if you regularly subscribe to their service, which means ca. $0.09 per minute of actual work (1). Minus, in all likelihood, the margins kept by CastingWords and by Amazon Mechanical Turk, to which CastingWords outsources the transcripts.
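The quoted rates can be checked with a quick back-of-the-envelope calculation, using the 5:1 to 6:1 work-to-audio ratio from footnote (1):

```python
# CastingWords' quoted rate, per minute of audio
rate_per_audio_minute = 0.45

# Transcribing one audio minute takes roughly 5-6 minutes of work
for work_ratio in (5, 6):
    per_work_minute = rate_per_audio_minute / work_ratio
    per_hour = per_work_minute * 60
    print(f"{work_ratio}:1 -> ${per_work_minute:.3f}/work-minute, ${per_hour:.2f}/hour")
# → 5:1 -> $0.090/work-minute, $5.40/hour
# → 6:1 -> $0.075/work-minute, $4.50/hour
```

So even before the platforms take their margins, the transcriber earns roughly $4.50-$5.40 per hour of actual work.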

So in a way, people voluntarily doing transcriptions with Mojiti would undercut services like CastingWords/mturk. And this might translate into those services laying off people who need that money, even if they are shamefully underpaid.

But on the other hand, there are podcasts that no one is likely to volunteer to transcribe on Mojiti, and whose authors still want to get them transcribed. Let them pay a proper price or be exposed for exploitation. And conversely, there are language communities big enough to have enough volunteers to produce a translated transcript of crucial audio, but not affluent enough to afford commercial transcription, even at the present CastingWords rate.

I don't have the resources or training to do an economic analysis of - let alone a projection from - these data. Let's hope some university, or UNESCO, does. UNESCO blathers a lot about the importance of cultural diversity, so let them get into the nitty-gritty involved. And let them do it in a wiki, publicly - not in an experts' commission that will take so long that, by then, progress in automated voice-to-text and inter-language translation will have made their findings obsolete.



(1) A 5:1 - 6:1 ratio between the actual time required for a transcription and the length of the audio was mentioned some time ago by several transcribers on the Turkers' CastingWords forum.

SC Spaeth said...

For those who are not familiar with it, SMIL stands for Synchronized Multimedia Integration Language. It makes it possible to use scarce resources more efficiently and to make them more accessible. Let's think about ways we can facilitate these developments.