Thursday, 29 May 2008

More OOXML controversy


According to the IHT, the South African Bureau of Standards is appealing the ratification of OOXML as a standard. That, coupled with the recent announcement that the Dutch Council of State is open sourcing its document conversion tool, certainly makes either adopting ODF, or sitting on the fence by continuing to use .doc, seem the way to go for the moment.

A wholesale conversion to OOXML certainly looks very unlikely anytime soon.

North Carolina university does the virtual computer lab thing


I know I've been going on about this for years, but North Carolina University in the States has started providing virtual computer labs. The interesting thing, which differs from my simpler view that we use these technologies to give students access to a range of standard resources, is that they seem to be using the desktop-cloning ability inherent in Citrix and Sun's SGD, and then streaming applications onto the desktop, to give students access to those rarely used, never-play-nice apps that are such a pain to add to the standard image.

Certainly one of the major problems with student computer labs is the size of the image and the sheer number of apps, and of course the more apps you have, the more likely it is that some of them won't play nicely together.

Previously I'd been thinking along the lines of using VMware's ACE to distribute pre-rolled environments, but this coupling of Softricity-like application streaming with a thin-client-based presentation technology makes a great deal of sense ...


Sunday, 25 May 2008

An interesting if bizarre discovery ...

As I've previously blogged, I have the $83 Linux machine in the garage, connected to the internet via a Linksys wireless bridge. It used to run Ubuntu 7.10, but the time came to upgrade it to Ubuntu 8.04 (Hardy Heron) with long-term support (LTS).
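(For anyone wanting the command-line route rather than the graphical upgrade tool, it boils down to something like the following - a sketch, assuming a stock install where update-manager-core can be pulled in first; I actually used the graphical tool.)

    # make sure the release upgrader is present, then kick off the upgrade
    sudo apt-get update
    sudo apt-get install update-manager-core
    sudo do-release-upgrade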

The upgrade can be done over the internet via the package upgrade tool and takes a couple of hours over an ADSL connection. So yesterday afternoon I went into the garage, started up the machine and began the upgrade process, and, as it was getting dark and cold, closed the garage door with the remote and went and did something more useful - folding laundry in this case. I went back in an hour to see how it was going and found the connection had died. I restarted the upgrade and closed the door, but this time went into the study, and happened to notice that after a few minutes the lights on the ADSL router - a Telstra-supplied 2Wire ADSL gateway - were cycling. Initially I blamed Telstra, given that it was around five on a cold Canberra winter's evening and it could be that the contention rate at the exchange was too high, with everybody trying to use the internet.

Anyway, I went back and did the restart thing again, and this time closed the garage door with the remote from the study, and guess what - the lights started cycling around 60 seconds later. I waited for the connection to come back, repeated the experiment, and hey presto, killed the connection again.

Well, we were going to the theatre, so I closed everything down, took a shower, etc.

This morning - an equally cold Canberra one - I did the converse: opened the garage door, powered up the wireless bridge, let the connection stabilise, booted the $83 wonder and ran the upgrade procedure, but left the garage door open this time.

Despite the cat's attempts to help, the upgrade procedure ran flawlessly and two hours later I'd booted into Ubuntu 8.04 and had a fully patched and working machine.

So I can only guess that the remote control for the garage is knocking out either the gateway or the wireless bridge by causing a load of junk packets.

What's worrying of course is that maybe it's not just my own connection but some of the neighbouring ones as well ...

Friday, 23 May 2008

OpenSolaris ...

For reasons I've never really understood, I like playing with operating systems. So when OpenSolaris was released I couldn't resist. I'd previously played with Nexenta and Belenix, but neither had been that satisfactory.

Nexenta 0.6 sort of installed under Parallels and bitched about not finding UARTs, but did run. Belenix would start under Parallels, bitch about UARTs, load the live CD and hang once it had done about 80% of its install. Just for fun I tried Nexenta on an old Compaq we had in the lab and it installed, although it wouldn't start X. Trying the same trick with Belenix produced a scad of ATAPI CD-ROM drive errors, suggesting that it's finicky about hardware and doesn't play nice on four or five year old Compaqs.

So, OpenSolaris. Tried it on the old Compaq. Just as bad as Belenix. Tried it under Parallels on my Mac. It bitched about UARTs, produced several screens of error messages and tried to enter single-user maintenance mode. I suppose I could have read the documentation about installing it under Parallels, but I reckoned that if I could install Debian, Ubuntu, XP or Server 2003 without reading the manuals, the same ease of use should apply to OpenSolaris. To be fair, I should probably have done a comparative install test on the old Compaq with Ubuntu - XP did install on the Compaq.

So I tried the Innotek VirtualBox virtualisation application. Innotek is now owned by Sun, so I reckoned there was a chance there. First time around, the default memory setting of 128MB didn't work - the live CD image just hung. After upping the memory to 512MB and restarting, the live CD loaded and installed. It took an age, but it installed, came up again and booted into OpenSolaris.
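(If you prefer the command line to the GUI for that sort of tweak, VirtualBox's VBoxManage tool can do it - a sketch using the current double-dash syntax, with a made-up VM name; the 1.6-era releases spelled the option slightly differently.)

    # give the VM 512MB of RAM before booting the live CD again
    VBoxManage modifyvm "OpenSolaris" --memory 512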

The interface looked slick, but for some unfathomable reason there was no OpenOffice installed. In fact there was a fairly normal set of applications, much like you would get in any Linux distro, other than OpenOffice. Why?

However, this would have been a golden opportunity to test the package installation tool - except that, despite upgrading VirtualBox to 1.6, I couldn't get the networking to work (and yes, this time I did read the instructions). All I can say is that the user interface is no better and no worse than Synaptic.
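(Had the networking behaved, adding OpenOffice should in principle have been a matter of asking IPS from the command line - roughly the sketch below; the package name is a guess, and you obviously need a working network connection to reach the repository.)

    # refresh the catalogue, search the repository, then install
    pfexec pkg refresh
    pfexec pkg search -r openoffice
    pfexec pkg install openoffice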

The desktop interface is GNOME, works well, and comes with a set of sensible-seeming applications, apart from the missing office suite.

So would I use it in place of ubuntu?

Well, I probably wouldn't cry if it was forced on me, but as with SUSE I can't find a reason to make the switch. It seems to be a competent operating system, as is Ubuntu, as is Debian. And the thing that makes Ubuntu and Debian popular is the software base, the range of packages available. And here my inability to test the package manager leaves me shrugging my shoulders and going "might". That, and the pickiness about hardware and memory - Ubuntu is definitely more tolerant of hardware variations.

Really the only way to tell would be to buy an old ex-government OptiPlex and build it on that to see how it went. And while it's nice, I'm not so overwhelmed that I want to spend a hundred bucks to run it as a test operating system.

Thursday, 22 May 2008

Microsoft Office 2007 does ODF

Microsoft has announced that Office 2007 is to support ODF, albeit not until 2009.

Which is interesting. Still doesn't solve the problem of backwards compatibility and mixed environments with Office 2003 and 2007, but at least doing ODF is a step forward.

However, the cynical me wonders if this is a ploy by Microsoft to head off being beaten up by the EU over anti-competitive practices. Announce a statement of direction, talk the talk, and hope that, since most people upgrade PCs every three or four years, they'll have built a critical mass of .docx users by then and can claim that ODF is a very minority religion.

Certainly our uni experience has been that while corporately we're still on 2003 and .doc, students who have taken up the 'it's not cheating' $75 deal have forced us to deploy the compatibility pack, due to the number of .docx documents being emailed in as submissions.

Of course, what's really interesting is that OpenOffice is even cheaper - as in free - yet students won't jump for it. Crap marketing, crap distribution, or what?

New York Times online archive

Interesting article on the NYT's online archive - how they did it - with links to a video that gives greater detail and to other related articles.

If you're interested in the idea of using cloud computing/Hadoop to handle large amounts of data, it's certainly worth checking out ...
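(For a flavour of how approachable Hadoop can be from the outside, the streaming interface lets you wire ordinary Unix tools in as mapper and reducer. A minimal sketch - the paths and jar location are illustrative only, and nothing to do with the NYT's actual job.)

    # pass each input record straight through, then let wc total lines/words/bytes
    hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming.jar \
        -input /archive/raw \
        -output /archive/counts \
        -mapper /bin/cat \
        -reducer /usr/bin/wc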

Friday, 16 May 2008

Microsoft Office 2007 and interoperability

This is still rumbling on. Becta, the group for IT in schools and colleges in the UK, have referred their complaint regarding interoperability to the European Commission, having previously lodged it with the UK Office of Fair Trading.

Costing long-term data preservation

One of the great problems with long-term data archiving is sustainability, i.e. the cost of being able to maintain it into the future - simply because the technology is new and no one really knows what that cost will be.

JISC have recently published a report that attempts to quantify the costs, and certainly the numbers look fair, with the suggestion that it needs around 2.5 to 4 FTE staff to keep a service going and that the hardware and storage costs are not amazing.

The staffing level is probably about right - that's also around the same number of staff that used to run the old UK Mirror Service, which was likewise a big repository plus a database to tell you where everything was (sounds familiar?).

The hardware costs are probably wrong, however, as they've forgotten about resilience.

Most institutional repositories are exactly that - a single instance at a single institution. While there are undoubtedly steps taken to maintain the integrity of the data, there's no real attempt at resilience. And resilience is important. Even if your servers are clustered, if they're in the same building and the building burns down, you are basically stuffed. And remember, with a large multi-terabyte repository conventional backup to tape isn't that attractive: most of the data is unchanging, it only grows in quantity, which makes the backups increasingly slow.

Boxes such as Sun's Honeycomb get around the tape problem by storing multiple instances of the data across an array of disks - think BitTorrent in a box - but that only buys you redundancy within a single box at a single site, not real resilience.

At the old UK Mirror Service we had one set of servers at UKC and another at Lancaster and replicated between them. We never backed up the data, reckoning it was unlikely that both sets of servers would be lost at the same time, and that if the worst happened we could re-pull all the data again. We later added a third node at Reading, and that increased the resilience even more.

But we always had funding rows about doing this: duplication of hardware - even if a lot of it was pretty cheap commodity kit - and duplication of effort, in needing to keep people on each site to deal with local hardware failures. Yet we always had 99% uptime.

Doing it that way provided good disaster recovery and, simultaneously, good service. If institutions were to peer, i.e. I put a box in your machine room and you put a box in mine, the extra staffing costs are probably pretty minimal, hardware is cheap, and you're not paying for backup. It's probably not cost-neutral, but it's probably close...
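(The mechanics of that sort of peering needn't be exotic either - at its simplest it's a scheduled pull of your partner's repository onto your box, and vice versa. A minimal sketch, with invented host names and paths, and not the tooling we actually used at the Mirror Service:)

    # cron entry on the peer box: pull a copy of the partner's archive each night at 02:30
    30 2 * * *  rsync -az --delete --partial partner.example.ac.uk:/data/archive/ /data/partner-replica/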

Thursday, 1 May 2008

The rise of the vdi ...

In the last two days I've been to two presentations that mentioned VDI, virtual desktop infrastructure. Definitely this year's buzzword.

What it really is, is a mixture of old-fashioned thin-client technology (Citrix, Sun Global Desktop) and application packaging, so that each application runs in a separate container (SoftGrid is the best example of this).

Application separation is useful in the Windows world as it allows applications that don't play nicely together to coexist.

And, given that the client end only does screen draws, it can all be presented in a nice web-style window.

But it's basically classic thin client. The basic spruik is access to a standard environment from anywhere.

It's a slightly weird feeling to suddenly find that one of the things I've been writing about for years (1) is about to become mainstream ...

Keyboards as filthy as a loo seat?

There's an article in today's Guardian about how someone has found that keyboards are as filthy as, if not filthier than, a toilet seat.

No surprises there. To prove it to yourself, pick up your keyboard, put a bit of clean A4 on your desk and then turn your keyboard upside down and shake. If, like me, you habitually eat lunch at your desk, you'll produce a nice load of gunk - see my previous post on the subject.

The interesting thing is that nowhere does the article actually say how filthy a loo seat is compared with toilet door handles, ordinary door handles and the rest.

Except where precision-impaired males are concerned, loo seats probably aren't that dirty compared to door handles - basically because one's bum does not come into contact with dirty things like $10 notes and the like. And as toilet paper forms some sort of barrier to the transmission of bacteria - we don't normally come into that close contact with our poo and wee - I don't see there being that much extra contamination out there. There will be some, and basic personal cleanliness is important, but really I don't see it as that much of an issue.

Yes, if there were contaminants in an area anyway, as in a hospital, I could see a case for a shared computer keyboard being a risk. But most keyboards are personal and most people are reasonably clean. It's fear, uncertainty and doubt territory.

And while I know of one incident where a student urinated on a laser printer that failed to print a term assignment, I haven't yet come across anyone shoving a keyboard up their bum...