Thursday 29 November 2012

User interfaces and memes


Most people who use computers (and tablets) have very little technical knowledge. And most of them have only used one or two user interfaces.

People are of course conservative, which is why change is difficult.

Most people know how to use the classic Windows interface – from 95 through NT, 2000, XP, Vista and 7 it became a meme: this is an interface, and if I do that, this will happen. Double-clicking on a little picture starts an application, and there's a menu thing down at the bottom left-hand corner.

Now people on the whole don't know Windows; they know how to do particular tasks.

This meant that you could give someone who 'knew' XP a computer with a Linux distro like College Linux, and once they'd found a word processor and so on they were happy – in the main because KDE 2 kind of looked like XP and things worked the same.

The same people tended to find Xfce, as found in Xubuntu, a step too far, though Mac users didn't have a lot of trouble adapting.

I'm basing this on real anecdotal experience – I've tried out both on naïve non-technical users with a reasonable degree of success.

The same people, on the whole, don't like Openbox or similar user interfaces like Fluxbox – 'what do you mean I right-click on the desktop to open a menu??'

And of course when we look at Ubuntu, which is probably the desktop Linux in widest use among non-technical users, the major complaint is about the Unity UI – not because it's difficult to use, it isn't, but because non-technical people have difficulty applying past experience. It's just a little too different.

Macs of course are different, but are consistent across the range. iOS is again consistent across iPads and iPhones. Android is close enough to allow you to transfer skills – just like XP and KDE 2. In fact the move to smartphones has made the user experience consistent across models, and we now differentiate phones on capabilities, in much the same way we differentiate laptops by disk vs memory vs weight – and actually we know they are all much the same.

Compare this to a few years ago, when each phone manufacturer had their own menu system and called things by different names - changing or upgrading your phone was a major challenge.

So we can say there are three user interface memes out there – the Mac meme, the tablet meme and the XP meme.

And then we come to Windows 8.

Now I haven't used Windows 8, but I've seen enough screenshots to know it looks different. How different I don't know, but it looks different.

(Confession time – prior to writing this post someone from Microsoft did ask me if I'd had a chance to play with it – I was momentarily tempted to ask to borrow a Surface, but then decided that was a little too cheeky and anyway I'd like to do any usability tests on generic kit).

Looking different isn't bad for a tablet. After all, chunky tiles kind of look like icons, so tapping on them should work. On a touchscreen device with a keyboard you evolve a hybrid mix of swipes and keyboard commands – or at least that's my experience of using a no-name seven-inch Android tablet with a keyboard.
On a non-touchscreen laptop the experience will be different.

Remember that most people silo their experience and expect to transfer past laptop skills to the new environment, rather than learn new skills or transfer tablet-style skills.

So just like putting people in front of Openbox they're going to boggle. This is why people complain of no start menu in Windows 8.

People will of course learn new things. Judging by the number of MacBooks round campus compared to even two years ago, a lot of people have moved across from Windows.

My guess is that this is not because of any inherent technical superiority on the part of OS X, but because Macs are seen in some subjective way as being 'better', and thus it's worth moving your skills across and learning new things.

As I said above, people are conservative with regard to computers. They will only do something, like change or upgrade, if they perceive it has value. If I was Microsoft I'd be trying to work out how to sell Windows 8 as being cool rather than better.

Most people don't care if it's better as long as they can get their stuff done, but, if it's cool that's even better.

Apple is of course the classic example of being cool rather than better. In the mid nineties Apple was nearly broke. At the time I was doing a lot of IT procurement, and the early old-style CRT iMacs and iBooks looked distinctly poor in comparison to what you could get from the likes of Dell, Toshiba and Compaq. I thought the products were interesting, especially with OS X as a Unix clone, but felt that they were a last twitch of the corpse, and that if anything Linux might be the serious competitor.



Round about the same time I'd spent a lot of time building Microsoft-lite environments, as a way of controlling or minimizing licensing costs by using alternative products, either open source or commercial.

Given that experience and the fairly rich set of desktop applications, including business tools, that Linux came with, building an open source desktop seemed not an unreasonable idea, and if it hadn't been for the power of the Microsoft Office brand to the exclusion of all others, it might well have had legs. The lesson is that people won't transfer to another product if the product they're using has 90% of the market – they'll worry about compatibility and being different. Like I said, people are intensely conservative about desktop computing – they don't want to be guinea pigs – they just want it to work for them.

I was also wrong about Apple.

Apple came back from the dead by building a brand on being cool and being easy to use. If I was Microsoft I'd be studying how they did it, because in doing it, Apple established the OS X meme – and Microsoft needs to establish a Windows 8 meme ...

Monday 12 November 2012

MOOCs are not the only disruptors

As well as writing about MOOCs and their potential to change the whole university experience, there's another disruptor out there - data publication.

No, really!

Up to now scientific publication has followed a nineteenth-century model. Write a paper and send it to a learned journal, which sends it to other people established in the field to make sure it's not complete bollocks, and which then publishes it and retains copyright, meaning that people have to buy the journal to read the content.

Academic journals are not cheap - several thousand dollars for an annual subscription. Basically universities and research institutions have to have them,  and yet the content has been produced by their own researchers.

The model has been very successful. The only problem has been that the multiplicity of journals means research libraries can increasingly no longer afford them all. The answer to this was bibliometrics - identify the journals with the most widely cited papers and buy those - that way you were getting the best bang for your buck.

So less impressive journals withered and died, and the more prestigious journals found they could ramp up their prices and still people would buy them, as they were the 'must haves' of the scientific publication scene. You also got strange aberrations where researchers were ranked by the number of papers published in highly ranked journals - 5 points for a paper in the Journal of Very Important Things but only one point for a publication in the Proceedings of the Bumpoo Classical Society, irrespective of the worth of the actual paper.

Things like open access publication represent only a partial fix - basically the researcher pays for the publication process and the validity checking. That way the content stays free or very cheap and libraries can afford them.

But they are still recognisably journals as we have grown to love them.

Possibly this is an important thing, but we are also beginning to see the emergence of a journal-free publication model:

Sites like arxiv.org have shown that all-digital publication is very cheap. Sure, the researchers still do all the work, but importantly the validity checking is done by the marketplace. If your research stands up, people will cite it. If it's defective, they won't. Basically a market-driven model whereby what's good is cited and what's not isn't.

Arxiv.org is just one site. Reputedly it started on a server under someone's desk. The important thing about data publication is that it's building a web of trust and the infrastructure to let you identify and follow up with individual researchers.

Apply it to the arxiv.org model and you end up with a working model for journal-less publication in the sciences.

Interestingly we sort of have this in the humanities where researchers are quite often  ranked on how well their books are received and how often they are invited to give seminars - effectively post publication peer review ...

MOOCs and disruptive change


Over the weekend, the Guardian published an article on the disruptive effect of MOOCs - massive open online courses.

As someone who's pontificated about universities in the past, I read it with a degree of interest.

And it does contain a word of warning to existing universities. For example, at the university where I work we've put a vast amount of coursework onto our VLE, which allows students to catch up when they miss classes and simplifies and speeds up marking of assignments, and also means that large classes can be taught more easily.

Interestingly, we have arrangements where students from some other universities that do not teach some of our specialities study them via our VLE but get credited for the module by their home institution. And it's not one-way; we do the same for specialities we lack the resources to teach.

MOOCs are an extension of this. They represent a step change because of their scale, but they are only an evolution of what's already happening.

The other thing to understand is that VLE based courses have limitations. They're great for all the basic knowledge functions, like naming anatomical structures or describing chemical reactions.

Great for what used to be called General degrees some thirty years ago in Scotland, where people studied a range of subjects and once they had enough credits qualified for a degree.

The real difference is where you want people to think and discuss material. In my discipline of animal behaviour it consisted of trying to work out what a behaviour meant.

In languages it consists of trying to understand better in order to better communicate complex material.

I'm sure anyone with a different academic heritage will have other examples, but the common thread is moving from demonstrating knowledge and competence by dealing with closed questions to being able to apply them to the analysis of open questions – something for which discussion and interaction are essential.

In other words, I've no doubts that MOOCs can replace lectures but not special topic tutorials. I may be being snotty and out of sorts with the times but I always thought the purpose of a university education was being able to think and analyse, and along the way being extremely knowledgeable about a specialist subject or two.

It's like IT training courses – it's one thing to learn how to install and configure an application – it's another thing entirely to understand the end-to-end design of the process in which it will be used. One is analytical, the other is not.

So MOOCs will be disruptive, but not in the way people expect. Some universities will use them as a way to supplement their teaching. Others will undoubtedly give credit for successfully completing them – either as foundation material or to allow students to skip some of the entry requirements for an advanced or honours course.

And some universities will stop teaching a whole range of courses purely because the MOOCs are better.

But the thing to remember about disruptive change is that it's disruptive – things will undoubtedly turn out differently to how we expect ….

Update

While we're on this theme, Clay Shirky has a well-argued post that's well worth a read, and much of what he says resonates with the above.

Tuesday 6 November 2012

A dilemma for these digital times


I am the proud owner of two e-book readers – a Cool-er and a Kindle.

Of the two, the Kindle is the newer, sleeker and more responsive, and is my preferred reading device.

The Cool-er natively uses epub, while the Kindle uses Amazon's azw and mobi formats.

Now I have a fair number of books in epub format – most came from Project Gutenberg, meaning that I can simply re-download them if I want to reread them on the Kindle. Or I could convert them using Calibre.
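
For the DRM-free ones the conversion step is straightforward with Calibre's ebook-convert command-line tool. A minimal sketch in Python – the folder and file names are just examples, and it assumes Calibre is installed and on the path – batch-converts a directory of epubs to mobi:

    # Batch-convert a folder of DRM-free epubs to mobi using Calibre's
    # ebook-convert command-line tool (the folder name is illustrative).
    import subprocess
    from pathlib import Path

    for epub in Path("gutenberg").glob("*.epub"):
        mobi = epub.with_suffix(".mobi")
        if not mobi.exists():  # skip books already converted
            subprocess.run(["ebook-convert", str(epub), str(mobi)], check=True)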

The problem really comes with the DRM-protected epubs obtained legitimately from other ebook vendors. In technical terms the solution is simple – I could take the files, process them to remove the DRM and then convert them appropriately.

I haven't done this because, whatever my opinions about DRM, it's unethical to break the conditions of use. But this does raise an interesting point:

You don't really own an ebook. You rent it on a long-term basis. This means that if you change platforms, say from epub to Kindle, you have to re-rent any of the content from a new supplier.

Ninety percent of the time this doesn't matter as most people don't reread most of their books, or if they do, fairly soon after acquiring them.

The problem is, where do you stand if your reading device dies (and remember that in the case of the Cool-er the manufacturer has gone to the great stock market in the sky) or the ebook vendor has likewise ceased to be (as in the case of Borders Australia) or stopped selling ebooks (Bookdepository)?

Yes, you still have your content, but you can't necessarily access it in your preferred manner unless you break the conditions of use.

This is different from a book – once you buy it, you have it, you can lend it to anyone you wish, or you can sell it if you no longer want it.

With an ebook that's not necessarily the case. And when I look at my collection of a thousand odd travel and history books I begin to wonder about what would happen if they were all digital and I was to lose access to them ...

Monday 5 November 2012

Spam Spam Spam !

Ever since I mentioned our possible trip to Siberia next year, I've noticed a sudden uptick in the amount of Russian spam in the spam sump.

Most of it is of the fairly transparent 'hello my name is Elena ... ' type or stuff pretending to be from an internet dating site 'Yuliya has updated her profile ...'.

Now spam's spam. The interesting thing is that instead of just broadcasting at random to a gazillion email addresses they must be running searches for keywords to build lists of likely targets.

Obviously it's a fairly simple process but it's interesting that they seem to be trying to build target lists ...
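
Purely by way of illustration, the sort of keyword-driven harvesting I'm imagining is trivial to sketch in Python – the keywords, pages and addresses below are entirely made up:

    # A toy illustration of keyword-based target harvesting: scan scraped
    # page text for keywords of interest and collect any email addresses
    # found on the pages that match. Everything here is made up.
    import re

    KEYWORDS = {"siberia", "russia", "travel"}

    pages = [  # stand-ins for scraped blog pages
        "Planning our trip to Siberia next year - contact me at blogger@example.com",
        "A post about e-readers, nothing of interest here",
    ]

    email_re = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

    targets = []
    for text in pages:
        words = set(re.findall(r"[a-z]+", text.lower()))
        if words & KEYWORDS:  # page mentions one of the keywords
            targets.extend(email_re.findall(text))

    print(targets)  # ['blogger@example.com']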

Friday 2 November 2012

Are e-readers dying?

With the launch of the iPad mini and various smaller-format Android tablets, there seems to be a growing belief out there that e-readers - by which most people mean dedicated single-purpose reading devices such as the standard Kindle - are transitional devices, i.e. devices once popular but destined to fall by the wayside, in much the same way that the once-universal iPod has been supplanted by the iPhone as a music player.

Rather than concentrate on the e-reader and try and list its advantages and disadvantages over a tablet let's look at its predecessor - the book.

Books are on the whole light, have a compact form factor, are superbly portable, do not require an external power source, and can be read anywhere there is an external light source.

A Kindle, or indeed any e-ink based reader, comes close to this usability: they are all compact, light and portable, and do not need recharging very often - a month between charges is normal. And like a book they can be used anywhere with an external light source.

And when I look at people reading on the bus, those using reading devices other than their phones are overwhelmingly using e-readers.

Compared with a book - or a dedicated e-reader - tablets are on the whole inconveniently heavy. Fine for resting on one's knees in bed, but just a tad too heavy for prolonged use on public transport. Some tablets have a rubbery, grippy back to make holding them one-handed an easier proposition, but most do not.

Tablets also have a shorter battery life - not so short as to make them unusable on a commute, but enough to mean that you need to think about charging - unlike a Kindle, you can't just whip one out and have a reasonable expectation of having more than enough battery for an hour or so's reading.

So, I have no doubt that classic e-readers will lose ground to tablets, especially small form factor devices. After all, not everyone is an obsessional reader, and why buy two devices when one will do most of the time?

However there will be a core of users - hard-core readers, LibraryThing members, travellers - who will continue to prefer the e-ink device for its usability as a device for linear reading ...