Wednesday, 23 January 2008

Microsoft moves into desktop virtualisation (possibly)

A very interesting post on desktop virtualisation over at Techworld - basically Microsoft is partnering with Citrix to redeploy what's fundamentally the thin client-style computing we know and love from the nineties.

In essence a lot of the compute is moved back onto servers, and there are some tweaks this time round to improve graphics performance.

Interestingly it's not a SoftGrid-type environment where each client hosts a minimal operating system and execution engine and runs virtual machines - the VMware style of solution.

That latter model means execution happens locally, you can maintain a relatively close coupling between the user and the processes being executed, and all you need is a cheap device on the desktop plus a deployment mechanism.

VMware are working on this. Microsoft have Virtual PC and SoftGrid, not to mention SMS, so why the return to thin client?

Monday, 21 January 2008

Zmanda does S3 backup

Now this is something that might change backup architectures:

Zmanda, who resell Amanda-based backup solutions, are now offering offsite backups to Amazon S3.

Bandwidth and data security issues aside, this offers a really interesting way of doing backup.

Backup has historically been expensive to do properly (tape libraries are not cheap) and offsite backup even more so. Having sat in some surreal discussions about putting duplicate tape libraries in other people's (commercial) data centres, I can say it's very expensive to do, and on top of that there are all the management issues that go with tape libraries.

S3 is resilient, is built on commodity hardware, and you pay for what you use. Now imagine a consortium of universities building a joint facility like that and siting it on AARNET - suddenly everybody who subscribes has offsite backup.

Easy. Backups are basically tar files no matter whose software you use, so if you use D2D2T locally you could keep multiple copies on the cloud host with some simple rsync-style scripts and delete rules - a poor man's HSM.
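
As a rough sketch of that idea - and it is only a sketch, assuming a present-day boto3-style S3 client rather than anything Zmanda actually ships, with the bucket name, paths and retention window all invented for illustration - the "multiple copies plus delete rules" approach might look something like this:

#!/usr/bin/env python
# Sketch only: push the nightly tar dumps to S3 and expire old copies.
# Assumes the boto3 library; bucket, prefix and paths are illustrative.

import os
from datetime import datetime, timedelta, timezone

import boto3

BUCKET = "example-offsite-backups"   # illustrative bucket name
PREFIX = "nightly/"                  # illustrative key prefix
LOCAL_DUMP_DIR = "/backup/d2d"       # where the local D2D copies land
RETENTION_DAYS = 30                  # the "delete rule"

s3 = boto3.client("s3")

# What's already offsite (a sketch - ignores pagination beyond 1000 keys).
resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX)
existing = {obj["Key"] for obj in resp.get("Contents", [])}

# 1. Copy up anything new - the rsync-like step.
for name in os.listdir(LOCAL_DUMP_DIR):
    if name.endswith(".tar.gz") and PREFIX + name not in existing:
        s3.upload_file(os.path.join(LOCAL_DUMP_DIR, name), BUCKET, PREFIX + name)

# 2. Expire copies older than the retention window - the poor man's HSM part.
cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
for obj in resp.get("Contents", []):
    if obj["LastModified"] < cutoff:
        s3.delete_object(Bucket=BUCKET, Key=obj["Key"])

In practice you'd want to handle pagination and failed uploads, but the point is how little machinery is needed once the backups are just tar files sitting in object storage.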

Friday, 18 January 2008

Yahoo does OpenID

Yahoo have announced that they will be moving to OpenID as of 30 January. Interestingly, Google seem to have played with OpenID inside Blogger, suggesting that they too are interested in OpenID.

Like all such identity management solutions, OpenID is same sign-on - i.e. your credentials are the same across all OpenID-enabled sites, but validation is still performed against your identity provider.

Unfortunately it doesn't get round the "on the internet, nobody knows you're a dog" problem - there is no validation of your original credentials, so unlike our identity management project using Shibboleth, you cannot be assured that an assertion like "is a member of the ANU" is almost certainly true.

All you know about an OpenID account is "person holds a Yahoo account" or "person holds a Six Apart account", which means only that they filled in some registration form appropriately.
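
To make the contrast concrete, here's a rough sketch - the classes and attribute names are invented for illustration, not taken from the OpenID or SAML specifications - of the two kinds of assertion and what a relying service can actually do with them:

from dataclasses import dataclass, field

# Hypothetical, simplified assertions - not real protocol messages.

@dataclass
class OpenIDAssertion:
    # All the provider vouches for: this identifier logged in with them.
    provider: str          # e.g. "yahoo.com"
    identifier: str        # e.g. "https://me.yahoo.com/someuser"

@dataclass
class ShibbolethAssertion:
    # A federated IdP can also vouch for institutionally validated attributes.
    idp: str                # e.g. "idp.anu.edu.au"
    identifier: str
    attributes: dict = field(default_factory=dict)

def can_treat_as_anu_member(assertion) -> bool:
    """A relying service can only act on what the provider actually vouches for."""
    attrs = getattr(assertion, "attributes", {})
    return attrs.get("affiliation", "").endswith("@anu.edu.au")

print(can_treat_as_anu_member(
    OpenIDAssertion("yahoo.com", "https://me.yahoo.com/someuser")))        # False
print(can_treat_as_anu_member(
    ShibbolethAssertion("idp.anu.edu.au", "u1234567",
                        {"affiliation": "member@anu.edu.au"})))            # True

The OpenID assertion tells the service who holds the account; the Shibboleth-style assertion carries an attribute the institution itself has validated, which is what makes it worth relying on.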

However, what is potentially interesting is the social networking side of it. If a site such as Facebook, which holds rich information about individuals, becomes involved, we start to get a degree of validation - for example you need an ANU id to join the ANU network on Facebook (though what happens when you leave is an interesting question) - and through these individual validations we can begin to accept these assertions as valid.

[This is an argument in progress - I haven't worked it all out, but basically external validation of information makes it worth more, in the same way that showing your driver's licence and passport allows you to open a bank account]

Friday, 11 January 2008

AAF mini-grant (again)

Some progress has been made - nothing startling, but we're off to a good start.

We have a test IdP for the project and are working on building an SP plus Lyceum in a dedicated Solaris zone. There's also been some work on building ShARPE, which we don't strictly require for this project but which we need in order to give a consistent way of accessing anonymised services such as UQ's Fez implementation. [Technical details: one : two] ShARPE will also allow users to modify their attribute release policy, so they can release information beyond the default attribute release policy.

The progress notes are now online if you want to check them out.

Monday, 7 January 2008

Sakai as an LMS replacement

LMS

Charles Sturt have been working on deploying Sakai as an LMS to replace WebCT and are rolling it out for the start of semester. Interestingly it even made the Canberra Times' computing section (no URL - for reasons best known to the CT's management, their computing section isn't online).

Collaboration tool in NZ

While googling to see if the article had turned up on some other Fairfax media website I came across the following - the Kiwis, like us, are also experimenting with using Sakai as a collaboration environment.

Thursday, 3 January 2008

Whoops, Microsoft have done it again ...

Way back in June, I blogged about the problems that Microsoft's DocX format was causing for scientific journals and document exchange. [This post resulted in an email exchange with a journalist at the WSJ and might have ended up as a source for a story, but the story didn't eventuate]

Well, Microsoft have done it again. According to Wired, the latest update for Office 2003 blocks access to a lot of pre-Office 97 documents.

Now a lot of people won't give a stuff, but if you're into digital archiving you might.

There's an unresolved argument in the digital archiving world between preserving legacy files exactly as generated and converting them to a standard format while verifying that the content remains the same. The core of the dispute is that parsers and conversion utilities can have bugs, especially around layout and fonts, and a convert-only policy can result in implicit meaning being lost - in poetry, for example. Pro-conversionists say that on the whole only the content matters and you can handle special cases by hand.

The pragmatic solution is to do both: that maintains access and keeps the original, just in case a parsing problem is uncovered later down the track.
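
As a sketch of what the convert-and-verify half of that looks like in practice - assuming a recent LibreOffice with its headless --convert-to batch mode, and with the directory names purely illustrative - you could do something like this:

#!/usr/bin/env python
# Sketch: convert legacy .doc files to ODF and do a crude content check,
# keeping the originals untouched. Assumes LibreOffice's
# "soffice --headless --convert-to" batch mode; paths are illustrative.

import subprocess
from pathlib import Path

LEGACY_DIR = Path("/archive/legacy-doc")
CONVERTED_DIR = Path("/archive/converted-odt")
CONVERTED_DIR.mkdir(parents=True, exist_ok=True)

def extract_text(path: Path, workdir: Path) -> str:
    """Round-trip a document to plain text via LibreOffice and return the text."""
    subprocess.run(
        ["soffice", "--headless", "--convert-to", "txt",
         "--outdir", str(workdir), str(path)],
        check=True,
    )
    return (workdir / (path.stem + ".txt")).read_text(errors="replace")

for doc in LEGACY_DIR.glob("*.doc"):
    # Write an ODF copy alongside the untouched original.
    subprocess.run(
        ["soffice", "--headless", "--convert-to", "odt",
         "--outdir", str(CONVERTED_DIR), str(doc)],
        check=True,
    )
    odt = CONVERTED_DIR / (doc.stem + ".odt")

    # Crude verification: compare the extracted text of original and copy.
    # This catches gross conversion failures but not layout or font problems,
    # which is exactly why the original is kept as well.
    if extract_text(doc, CONVERTED_DIR) != extract_text(odt, CONVERTED_DIR):
        print(f"WARNING: content mismatch for {doc.name} - needs a manual check")

A text-level comparison like this is deliberately crude - it tells you the words survived, not the layout - which is the whole reason the "do both" approach keeps the original bitstream as well.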

Now, while it's true that you can use OpenOffice and related products to read these legacy files, Microsoft's move lends weight to the pro-conversion camp. Almost everyone uses Office, and once people using Office can no longer read these formats there's less pressure on the OpenOffice crew to maintain the filters. So, little by little, a format dies, and if we want to maintain access to the content we have to convert - and perhaps we lose a little bit of implicit meaning.

And while we could convert these documents to PDF/A or ODF to avoid the risk of this happening again, verifying every document to ensure the conversion was accurate is such a big job that it's not going to happen. So a little bit of the human experience dies.

And while you could plausibly guess that 99.5% of these documents are of no significance whatsoever, consider the following story:

From the time of Augustus to the reforms and changes of the later Roman Empire - a period of around three hundred years - Roman soldiers were paid quarterly, and when they were paid they were given an individual statement of account covering deductions, payments for food, lost equipment and so on. During most of this time the army was around 40,000 men, so something like 4 x 40,000 x 300 of these statements must have been produced - around forty-eight million. So many that no single statement was of any significance whatsoever. So they got thrown away, used to wrap equipment, used in toilets and so on, with the result that there are only around three partial copies left, all of which are of tremendous significance.

And that's the problem with archiving: it's impossible to tell what's historically significant, yet saving everything is not an option, which means we have to try to get it right with what we do save. Microsoft's change just makes that a little more difficult.