Wednesday 15 October 2008

Optiputer

This morning I went to a presentation by Larry Smarr on Optiportal.

Optiportal is one of the spinoffs of the Optiputer project. Essentially it's a large-scale visualisation system built out of commodity hardware, cheap to build, and essentially made out of some standard LCD displays and off-the-shelf mounting brackets. The displays are meshed together to give a single large display. So far so good. Your average night club or gallery of modern art could probably come up with something similar. However it was more than that: it allowed large-scale shared visualisation of data, and this was the more interesting part. Using high-speed optical links to move large amounts of data across the network would allow people to see the same data presented in near real time - rather like my former colleague's rather clever demo of audio mixing across a near-zero latency wide area network. It could equally be a simulation of neural processing or an analysis of radio telescope data (or indeed, continuing the digital preservation of manuscripts theme, a set of very high resolution images of source documents for input into downstream processing).

And it was the high-end data streaming that interested me most. Data is on the whole relatively slow to ship over a wide area network, which is why we still have people FedExing terabyte disks of data round the country, or indeed people buying station wagons and filling them with tapes. Large data sets remain intrinsically difficult to ship around.

When I worked on a digital archiving project one of our aims was to distribute subsets of the data to remote locations where people could interrogate the data in whatever way they saw fit. As these remote locations were at the end of very slow links, updates would have been slow and unpredictable; a set of update DVDs would work better. If we're talking about big projects such as the square kilometre array project we're talking about big data, bigger than can be easily transferred with the existing infrastructure. For example, transferring a terabyte over a 100 megabit per second link takes (8 * 1024 * 1024 * 1024 * 1024) / (100 * 1000 * 1000) / 60 / 60 hours - or almost a day - and that's without allowing for error correction and retries. Make it a 10 megabit link and we're talking 10 days - even slower than Australia Post!
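The back-of-envelope sum above can be sketched in a few lines of Python (the helper name is mine, and I'm assuming a 2^40-byte terabyte and a 10^8 bit per second link, with no allowance for protocol overhead or retries):

```python
def transfer_hours(size_bytes, link_bits_per_sec):
    """Hours to ship size_bytes over a link, ignoring protocol
    overhead, error correction and retries."""
    bits = size_bytes * 8
    return bits / link_bits_per_sec / 3600


TERABYTE = 1024 ** 4  # 2^40 bytes

# 1 TB over a 100 Mbit/s link: almost a day
print(round(transfer_hours(TERABYTE, 100e6), 1))       # ~24.4 hours

# the same terabyte over 10 Mbit/s: about ten days
print(round(transfer_hours(TERABYTE, 10e6) / 24, 1))   # ~10.2 days
```

The raw figure is the best case; in practice retransmissions and shared links only make the FedEx option look better.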

What you do with the data at the other end is the more interesting problem, given the limitations of computer backplanes and so on. Applications such as Optiportal serve to demonstrate the potential; it's not necessarily the only application ...

[update: today's Australian had a different take on Smarr's presentation]
