Monday, 31 March 2014

Installing Open Indiana

Over the years I've built a range of virtual machines using various Linux distributions, the better to understand their capabilities and potential.

This time, I decided to revisit the Open Solaris code tree to see what the Open Indiana fork could deliver. While Linux, usually in the form of Red Hat or CentOS, has supplanted Solaris as a platform in a lot of organisations, it's worth remembering that there is still a lot of legacy Solaris kit out there, and people still need to get to grips with Solaris.

Here are my installation/assessment notes for Open Indiana ...


  • Open Indiana is a fork of Open Solaris, which I played with back in 2008/9. Since then Sun has been taken over by Oracle, and support for Open Solaris has ended
  • Open Indiana is built on illumos, the open-sourced base for the various distros derived from Open Solaris
  • Open Indiana has a website at openindiana.org


  • Installation was done from a live CD image onto a VirtualBox VM
  • Initially the distribution would not boot, but making the CD the only boot device resolved that
  • booting into install mode was straightforward
  • the live CD desktop provides an application to install the system to disk
    • installation is a simple Q&A: keyboard, location, language, timezone, user accounts
    • installation time was less than 30 minutes, not including download time for the live CD image
  • system boot after install was not particularly fast
  • out of the box the installed application set was extremely sparse
    • no office suite included
    • many standard tools missing
  • the user is required to run the package manager to install office software
    • you need to change the package source from Open Indiana to all publishers
    • the Open Office distro is 3.1, sourced from the legacy Open Solaris repository
      • I found an alternative, more recent version via the Apache OpenOffice site
    • poor selection of software in repositories
    • good selection of compilers and development tools
      • necessary for the ./configure, make, make install cycle needed to build and add some applications
    • users need to install gcc etc. to build software from source
  • system reasonably fast and responsive when running
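For those who prefer the command line, the publisher change and package install above can also be done with the IPS pkg tool rather than the graphical package manager. A rough sketch follows; the publisher name and repository URL are placeholders for illustration, not the actual legacy repository details, and the package name may differ in your release:

```shell
# list the currently configured publishers
pkg publisher

# add another publisher as a package source
# (publisher name and URL are placeholders, not real values)
pkg set-publisher -g http://pkg.example.org/legacy legacy-publisher

# then search for and install the office suite
pkg search openoffice
pkg install openoffice
```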


Nice distro, nice installer, shame about the software base. Anyone planning to use it for a roll-out would need to be prepared to do a fair amount of building code from scratch.
What it does provide is a zero cost platform for anyone needing to get up to speed on legacy Solaris systems prior to experimenting with a live system, and that is perhaps its real value.

Thursday, 27 March 2014

So what do you use - Q1 2014 update

In truth there's not much change since my end of year posting back in December.

I'd add the 3G router as a major improvement. Otherwise the hardware in use is much the same except for my ongoing Chromebook problems.

The software in use is not much different either, except that I've moved from writing raw markdown to using a dedicated markdown editor, either ReText on Linux or Texts on Windows and the Mac. My Chromebook problems have pushed me back to writing more on my Windows netbook, and I even dug out my old Eee to use for some conferencing via Skype.

Working with both ReText and Texts shows how agnostic I have become about operating systems, or indeed applications. There's not much to judge between the two editors: Texts is WYSIWYG in style, while ReText requires an understanding of markdown syntax, but that's hardly a big ask.

Otherwise everything is much as it was ...

Tuesday, 25 March 2014

Clouds, outsourcing and risks ...

Cloud based technologies are, as we know, outsourcing by another name.

Outsourcing in IT is often touted as a means of getting costs under some sort of control, by turning a lot of capital expenditure into operational expenditure:

Essentially this means that capital expenditure, those large lumps of budget that traditionally went into buying new servers and new storage, can be smoothed into operational expenditure, in other words normal running costs.

This has a lot of advantages, the major one being predictability, and as long as you follow Mr Micawber's dictum on income versus expenditure, everything should be tickety-boo:
"Annual income twenty pounds, annual expenditure nineteen [pounds] nineteen [shillings] and six [pence], result happiness. Annual income twenty pounds, annual expenditure twenty pounds ought and six, result misery."
- Charles Dickens - David Copperfield
Large capital expenditures are a pain. Money has to be found, fought for and accounted for, and complex tender exercises and evaluations held, all of which cost money to carry out and all of which serve to reduce the agility of the organisation.

Basically they don’t sit well with the twenty first century model of a flexible, agile and responsive organisation.

Even more so with outsourcing services like email - no need to worry about a whole raft of things, including how to recruit and pay for the staff who look after the services.

Outsourced email and webmail are easy to use and self-supporting; instead of being highly technical they become commodity services.

The same with outsourcing your compute and storage. While it’s not true to say that Azure or AWS are so simple that the cat can configure them, you do certainly need a lot less in terms of server administrators or storage managers. You do have a risk of sprawl, especially if you have a lot of cost centres and no centralised review process, but that’s essentially a financial management problem.

In fact with outsourced services and storage, you find yourself in a situation where increasingly IT delivery is a financial management problem rather than a technical management problem.

This has advantages:
  • standard services are delivered in a cost effective and standard way
  • costs are more easily contained
  • expenditure patterns are predictable which improves long term planning
and risks:
  • loss of in house expertise
  • increasing inability to deploy innovative services
  • loss of the ability to assess suitability of competing services
Now I don’t want to get all doom and gloom. There’s a lot to be said for outsourcing commodity services - my pet example is desktop pc maintenance - in these days where they are basically highly reliable appliances they don’t go wrong very often.

Depending on the failure rate, it can be considerably more cost effective either to buy a maintenance service or even, gasp, just plain replace them when they fail, rather than employing a couple of maintenance technicians at $70k a year plus on-costs.

But there also needs to be an understanding in any outsourcing exercise that you are also losing in house expertise.

For maintaining commodity devices it possibly doesn’t matter. For something strategic, like your data archiving, you probably need to start thinking about risk analysis and what you would do if you needed a change of direction ….

Thursday, 20 March 2014

Chromebook woes - again !

Until it started crashing, I was enjoying my Chromebook.

Since then the wheels have kind of come off the experience, due to Acer’s inability to resolve the issue.
To recap, in the middle of January, it crashed so badly that it would not boot. Knowing my predilection for playing with things, you might ascribe this to my doing something odd, but I didn't - in fact my Chromebook was incredibly vanilla. Apart from playing with it, all I used it for was gmail, twitter, Inoreader, Docs and Stackedit.

Anyway I phoned up Acer, and the helpdesk were responsive, efficient and arranged a return.
They did whatever they did, I suspect a severe poke, and shipped it back. Except it didn’t boot. I phoned the helpdesk again and while I was on the phone it booted after sitting looking like a lump of plastic for fifteen minutes. Sod’s law.

A week or so later it crashed again, I phoned the help desk, and this time when we returned it to factory defaults it booted again, and appeared to run normally. At this time I was still tempted to blame a memory leak in ChromeOS rather than hardware.

Then, in early February it crashed in such a spectacular fashion that you couldn’t turn it off other than by pulling the battery out - and no it didn’t boot after that either. So off to the help desk again, and the return, severe poke and the rest of it.

This time when it came back it did boot, but into developer mode, not user mode, which showed a degree of sloppiness on the workshop’s part. Anyway, I treated it as a learning experience and flipped it into user mode.

A week or so later, it started crashing again, just like in January. By now I was suspecting a hardware fault, so when I phoned Acer again I asked them either to replace it or refund my money, given the sequence of faults and returns. Not only was it wasting my time, it was probably wasting theirs.
Quite properly, they told me that they couldn’t authorize a replacement or refund, but told me to email their problem escalation people.

Now I thought they’d be as responsive as the rest of Acer.

Mistake. I emailed them, I faxed them, to no effect. No response. I actually got so far as opening a case with our local consumer protection people.

Coincidentally, about then they suddenly came to life and contacted me to ask for the machine to be returned for evaluation.

Since then, nothing. I know via Australia Post’s tracking system that they’ve had the machine for over a week, but no acknowledgement, nothing.

Now, I’ve been acting as a private individual here. The days when I looked after hardware supply contracts are firmly in the past, but if one of my clients had been treated like this I’d have wanted the Account Manager in my office for a ‘Please explain’ session and apology.

So where does this leave me? Out of pocket certainly.

I still like the Chromebook concept and would be happy to buy another; it was a really useful bit of kit. However, if I do, it won’t be an Acer ...

Wednesday, 19 March 2014

e-commerce usage as a metric of internet penetration

I have just tweeted a link to a piece in the Santiago Times, an English-language newspaper in Chile, to the effect that internet sales in Chile have jumped massively in the last twelve months.

At first sight this might seem odd, but internet sales are a valuable metric for internet adoption in a country - if people start using the internet for their daily business, it means that home internet connections are widespread and of reasonable quality.

In the case of Chile that’s hardly surprising - good banks, reasonable infrastructure and the rest.
Sri Lanka is another example. Last year when I visited, the internet was available more or less everywhere, with public subscriber wifi services in the major centres, but while there were ATMs everywhere it was still very much a cash-based society, with large wads of scruffy, worn banknotes required for most transactions. This wasn’t because there were no credit or debit cards; the society had just not yet made the transition from cash to plastic.

In the case of Sri Lanka where small change is always a problem with not enough coins or low value notes to go around, debit cards would certainly help solve the problem of small change at the supermarket checkout, and to be fair the banks were all enthusiastically promoting the use of plastic in place of cash.

When looking at internet use in areas where development has lagged, there tends to be an assumption that countries will repeat the west’s path. I rapidly became disabused of this both in Turkey and Morocco at the end of the 1990s, when it was clear that mobile phones had replaced rather than supplemented the traditional, inadequate phone network - who needs wires when you have a mobile?

Just as with 3G data in Sri Lanka - rather than build fixed infrastructure everywhere go for a lower cost option and add fixed infrastructure as required.

In other words if you don’t have a significant legacy infrastructure you can skip forward.

I don’t doubt that in remoter areas of Chile internet access was poor - the fact that the government invested in broadband links for schools shows a deliberate intention to counteract digital disenfranchisement in rural areas. An increase in ecommerce means that the internet is spreading out of schools and businesses and into the home, and that infrastructure is improving and access is more affordable.

It would be interesting to see comparable figures from Peru, which has also experienced a period of recent economic growth but has large remote areas on the altiplano ...

Thursday, 13 March 2014

Linux initiatives in Latin America

Following on from my quick and dirty VirtualBox builds of Huayra and Canaima, I thought I’d try and pull together some information on Linux adoption in Latin America.

Pulling together the information turned out to be a little more difficult than I expected but the first pass is now online via my wiki site.

I’ll be working on it and updating it as I have time …

Written with StackEdit.

Disambiguation and Altmetrics

Elsewhere, I’ve posted about the fact that as part of the day job, we’ve developed a prototype solution to create ORCID identifiers for researchers programmatically, rather than the researcher having to go to the ORCID site to create one individually.

Why did we do this?

In a word: disambiguation. Increasingly, funding bodies require evidence of activity, and as the nature of scholarly publication changes this increasingly includes grey literature, conference presentations, exhibition catalogues and the like.

And people are just not consistent in the name they use. They change their surnames due to marriage (or divorce), they abbreviate their forenames, use a different forename informally from formally, they reverse their surname/forename order because they publish in a Hungarian excavation report, they adopt an informal western-sounding name to use among their peers but publish under their formal Asian name, etc etc.

Names basically are completely useless as a consistent and persistent identifier. While most people use only a small set of variations on their name, the sum of all the possible variations is amazing.

ORCID solves this problem. As a sixteen digit number it is totally free of cultural biases, and also too big to be easily remembered. It even covers the problem of what to do about those cultures that refer to someone by another name once they have passed on. (To explain: the Greeks and Romans had a belief that someone only lived on in the afterlife as long as their name was remembered - this is why they put names on headstones, and indeed why we do the same thing. They are not unique in this belief, but a number of other cultures have an equally deep-seated belief that referring to someone by the name they used when they were alive encourages their ghost to hang around and annoy people.)
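As an aside on the identifier itself: the final character of an ORCID iD is a check character computed over the preceding fifteen digits with the ISO 7064 MOD 11-2 algorithm, which is how systems can catch mistyped iDs. A minimal bash sketch of that calculation (my own illustration, not code from ORCID):

```shell
# Compute the ISO 7064 MOD 11-2 check character for the first
# fifteen digits of an ORCID iD (hyphens removed).
orcid_check_digit() {
  local base=$1 total=0 i d
  for (( i=0; i<${#base}; i++ )); do
    d=${base:i:1}
    total=$(( (total + d) * 2 ))   # fold each digit in, doubling as we go
  done
  local result=$(( (12 - total % 11) % 11 ))
  if [ "$result" -eq 10 ]; then echo "X"; else echo "$result"; fi
}

# 0000-0002-1825-0097 is a sample iD used in ORCID's documentation
orcid_check_digit 000000021825009   # prints 7
```

A check value of 10 is written as "X", which is why some iDs end in a letter.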

So ORCID works as an identifier, and it would be possible to build a database of other parallel identifiers to allow us to say that this name or that identifier maps to a particular ORCID identifier, and that would let us measure impact.

We then have to deal with altmetrics. Altmetrics is a move to try and measure the chatter around a researcher and, by implication, their degree of influence. This is an area fraught with difficulty, but there is also a problem - most influence measures are self-selecting - you have to sign up to Klout or ImpactStory or some other service.

This requires that individuals do this, which means that they have to be sufficiently interested and invested in the process to do so. And being human some will and some won’t and some will have to be induced with carrots.

At the moment though there are precious few carrots. There is also half an assumption that people keep their professional blogging and tweeting separate from their personal tweeting and blogging.

I suspect this is not the case. I started with one blog and then split it out. My twitter feed reflects my interests - some work based and technical, others to do with my interests in history and archaeology.

And this gives us a problem. How do we know which blog or tweet counts for impact?

We don’t. We can’t.

We can ask people to nominate particular rss feeds, be they tweets or blogs but that’s as far as it goes, and that brings us back to motivation.

And once we’ve solved that problem, we have the interesting problem of attaching twitter handles and blog authorship to names.

Some people, like me are fairly boring and predictable in their choice of handle, in my case it’s usually dgm or moncur_d or more rarely moncurdg.

Outliers like dougM are usually the result of some weird automatic allocation rule.

Other people have flights of fantasy - it’s a bit like the problem of Asian researchers who adopt a western-style pseudonym - sometimes it’s a rendering of the meaning of their name, sometimes it’s something that sounds like their formal Asian name, and sometimes it’s completely random.

Essentially this means one of two things - we either have a year zero for altmetrics, where everyone agrees to link their blog and twitter handles to something like their ORCID iDs, or we have a mess. If we have a mess, altmetrics will only ever give us a partial measure of impact.

I’m betting on a mess …


Tuesday, 11 March 2014

3G routers revisited

A couple of months ago I blogged about the fun I had with installing a 3G router as a backup to our then flaky home ADSL link.

That worked well - since then we’ve given our previous ISP the flick having finally found someone else prepared to provide a service over our flaky fixed infrastructure.

The end result is we now have a stable, if slower, service. When I said I’d trade speed for stability I was absolutely right - predictable and reliable beats fast - suddenly things you could never guarantee before just work, such as reading the paper online at breakfast rather than a physical copy.

Since we changed over the link has flipped over to the 3G backup link only two or three times - which means leaving it as a prepaid 3G service with a long expiry makes sense, rather than paying a monthly service fee.

This however presented us with a problem. One thing that J and I do quite often is go bush and rent a holiday cottage or two. These mostly come without internet, and increasingly we end up taking a laptop and a tablet or two on holiday. None of these have a 3G connection out of the box, and tethering to phones can be a pain.

Previously we’ve just taken a netbook and a usb stick with us, but this can be a little restricting - some sites like some news and weather sites are better accessed via their tablet app than directly over the web.

The solution is of course a portable 3G router, which gives us a portable wifi network and we can preconfigure all the devices we are likely to take with us.

I managed to find a recently discontinued slimline D-Link model on eBay for 18 bucks, and an unlocked 3G modem for $25.

I chose the Huawei E173 as it has a connector for an external aerial, and being the same model as my original 3G modem I know it performs well on the Optus network.

Rather than Virgin, I went with Amaysim this time, as they offer a 5 cents a meg deal, which means that you don’t need to keep buying credit and throwing it away while the unit is sitting idle. 5c/MB is of course extremely expensive for data, but what one does is buy short-expiry credit when you need it, let it lapse at the end of the trip, and only add more when you need it again. Buying an unlocked unit also means that I can change ISP if someone else offers a better deal.

After the fun I’d had with the TP-Link unit I expected some configuration gymnastics, but not this time. Popped in the SIM, plugged the modem into the router, clicked on the default Optus configuration (Amaysim resells Optus bandwidth) and there we were - a solid cerise light and a connection, confirmed by whatsmyip. Ten minutes max to configure.

Now last weekend was a long weekend, and we had a few days away. I meant to take the unit with us for a field test, but when I opened the geek box - the plastic box full of phone chargers, camera chargers, and all the other tech gubbins we take with us - I discovered I’d forgotten to pack the PSU for the portable router, and nothing else I had in the box was compatible.

So, the field test will have to be next time …
