Tuesday 25 January 2022

The Gibraltar scoop on Trafalgar

Something I did not know until last night: the first reports of the Battle of Trafalgar were printed in the Gibraltar Chronicle some two weeks before the news reached London.

(detailed image viewable on Christie's website)

This probably made no difference to how the news spread within the UK, but given that it was common practice for ships to carry newspapers to other ports, it may be that the news spread from Gibraltar to other Royal Navy bases in the Mediterranean and the Caribbean via the Gibraltar Chronicle and not via the London Gazette announcement of the 6th of November ...

Monday 24 January 2022

Portable surveying (take 2)

Back in July, I blogged about how a simple wheeled box had simplified the whole business of doing documentation down at Dow's.

And it most definitely did.

The only problem is that I've been through two of them since then. Despite being rated for a 35 kg payload, the first one cracked at one corner after a few weeks of use.

I put this down to bad luck and bought a second one.

Last Wednesday, the bottom (literally!) fell out of that one as I was lifting it out of the back of my car when I was down at Dow's. Fortunately everything, including my laptop, simply slid into the boot, but the box was a write-off - I ended up dumping it in a recycling bin in the car park.

Now this is a problem. My experience with the boxes was that this was definitely the way to go, but obviously the boxes themselves weren't up to it.

I could have tried a wheeled toolbox from a hardware store - and certainly they are more substantially made, but they won't hold a laptop.

However, I found that Bagworld had a special deal on wheeled computer backpacks, so I jumped at that, despite having had bad experiences in the past with airline baggage handlers destroying soft wheeled bags.

There's enough space (just) for my camera, laptop, iPad, and notebooks, not to mention a box of rubber gloves and my toolkit.

It looks as if it might do the job. Let's hope so ...

Sunday 23 January 2022

Neatness and legibility in the nineteenth century

A few days ago I made this slightly intemperate reply to a tweet bemoaning an Ofsted report on UK school performance having a fixation with neatness.

Well, while some degree of legibility is clearly required, when we look at the great and the good of the nineteenth century, it's clear that many of them - Darwin and Dickens to name but two - had appalling handwriting.

But what of more ordinary people?

This is where it gets interesting.

From my experience of looking at nineteenth century documents, both from the family history viewpoint and from my work documenting Dow's pharmacy, I'd say that most professional people, be they ministers of religion, pharmacists, doctors, lawyers, registrars etc., had okayish handwriting.

Madeleine Hamilton Smith, who wrote a lot of letters to her lover Emile L'Angelier, was only recently out of education, was not someone who wrote all the time, and was someone we might expect to be neater than most, yet only had okayish handwriting.

As another example, Charlotte Bronte, who of course wrote her novels in longhand, again only had okay handwriting:

Not perfect, but okay, and probably better than today's, and I'd conjecture that's because in the nineteenth century prescriptions, minutes, ledgers etc. were handwritten and had to be read by someone else.

For example, Lapenotiere's expenses claim for the cost of his overland journey from Falmouth to London to bring the news of Trafalgar is legible but not overly neat.

Likewise, at the other end of the century, when the vicar of Ingleton in North Yorkshire recorded the death of Queen Victoria in his parish register, it's again legible but not overly neat:

So where did the fixation with neatness come from?

Well, in the nineteenth century there was a class of professional writers who wrote out copies of official correspondence, company minutes and the like and they wrote beautifully in copperplate script, often working from draft documents that were written in a more ordinary manner.

I have this theory, and it is only a theory, that once universal education came in there was a great emphasis on neatness simply because being a copy clerk was a well paid and aspirational job.

And this fixation continued on for years - when I was very small and in primary school they first of all taught us our letters as printed letters in pencil in rough work jotters with terrible rough woodchip paper.

Later on, they began to teach us cursive writing, which was basically a simplified copperplate in ink on better paper.

Then the curriculum changed and it all became free expression, and being able to write neatly didn't get you any points, though people did care about legibility in these pre-computer days.

University assignments and so on were all typewritten, which of course meant that as long as you could read your own notes, it didn't matter if, like me, you had handwriting like a drunken spider crawling over an inkblot ...

Monday 10 January 2022

Voice recognition and an ape's reflexion

When I wrote my annual 'Technology and Me' post at the end of last year, I mentioned that I had bought a smart speaker on a whim and how we were both amazed at how good the voice recognition was.

I must admit to being a bit of a luddite about voice recognition, preferring keyboards - and I mean physical keyboards - over voice commands: more privacy, more thought, and of course the chance to review and edit.

It's why I've only just thrown out my last Nokia Asha - essentially a poor man's Blackberry with a responsive little keyboard you could type on, and importantly learn to type fast on, which I'd kept for years for overseas travel even though some of its services, such as push email, had stopped working some time ago.

However, luddite or not, I've helped set up voice recognition for people who for one reason or another had difficulty using keyboards, including one library systems admin who tried unsuccessfully to use Dragon Dictate with vi in a terminal emulator.

Not a great success - Gnu Nano however proved surprisingly usable as an alternative.

However I'd never used it seriously until we got a smart speaker.

While the voice recognition capability is impressive, like all such devices it's highly reliant on its backend data set, meaning the answers are fairly standard and after a while predictable - asked to play ABC Newsradio, for example, it sucks it in from I💗Radio, while it sources Classic directly from the ABC's own feed - even if, like all such devices, its response to a query sometimes almost seems like magic.

It's just as when, many years ago, I asked the GPS system in my car to take us to a hotel in Brisbane. The system threaded us through the urban motorway system, took us past the hotel heading west, then ducked us down and round an intersection and back up heading east - the GPS 'knew' that the parking garage for the hotel was only accessible from the eastbound lanes and directed us accordingly.

Of course it no more 'knew' than the fictional Emilybot knew about Emily Bronte's inner life; what it can answer and talk about is determined by the richness of the underlying dataset.

So, my car's GPS 'knew' you needed to be in an eastbound lane to get into the underground parking garage, just as the Google assistant in the smartclock knows about radio stations, knows we have a Spotify subscription, and so on.

It also has some strange aberrations - asked for the weather, it invariably gives us the weather for Blacktown in Sydney, which is where our outward-facing IP address resolves to. (Our ISP uses a combination of NAT and BGP that effectively obscures our internal NBN network address, meaning the external address we are allocated at any one time comes from one of their Sydney points of presence.)

However we can forgive it that; the Guardian invariably makes the same mistake, as do sites with those silly clickbait ads: 'The prices of cremations in Blacktown may surprise you'.

But to return to the main point, the assistant, for all its faults, gives the appearance of intelligence, just as my car's GPS did.

In fact neither system is intelligent. Unlike apes, or my cats, they are constrained by the richness of their dataset and, outside its parameters, flounder helplessly ...

Monday 3 January 2022

Finally gave in and joined whatsapp

For years I've resisted the Facebook, now Meta, services.

I didn't like the way they squirrelled away all your data and tried to take over your life.

Recently I softened and installed Instagram on my Huawei, because a number of regional archive services have taken to posting interesting material on Instagram.

I post little, other than the occasional cat picture, and really am just a lurker in the background.

I've also avoided Whatsapp for the same reason - the ownership, not the technology, but over the Christmas break I needed to talk to our health fund about the billing for J's op. Nothing major, just a query on one line item.

Normally, when you contact our fund you can chat with them via one of these chat apps, and it's relatively efficient - raise a query, upload a bill for payment, and it's all quite painless.

But of course, we've just had Christmas, and while the helpdesk was still there, they'd changed over to using whatsapp rather than their own chat service. Whether this is a permanent change or not I don't know, but the only way, other than sitting on hold until the heat death of the universe, was to use whatsapp.

So, ever the pragmatist, that's what I did.

I'm still not going to use Facebook tho' ...😼

AI, Eliza and the Ape's Reflexion

Yesterday I read a short story in the Guardian.

I enjoyed the story - without giving too much away, it imagines a world in which dead authors are reincarnated via AI as Blade Runner-like avatars and wheeled out to various 'meet the author' events like those found at literary festivals worldwide.

It's fiction, but it got me thinking about things that I hadn't thought about for a long time.

It would be entirely possible to train an AI bot on any large corpus of material - the digitised letters and notebooks of Charles Darwin, for example - and have it reply to questions based on that corpus.

What exactly you would get out of it I'm unsure, but it would at least generate replies in the style of Charles Darwin. It would of course get things hilariously wrong, which is how we would know it was a computer program and not a reincarnated version of Old Beardy himself.

Chatbots and interactive digital assistants are of course all around - Siri's an example, as is Alexa, but one of the more interesting examples is Codi, the one Telstra uses.

If you ever have the dubious pleasure of contacting Telstra about a problem you first of all have to have a chat session with Codi to try and route the problem appropriately rather than have you transferred between numerous helpdesks. 

Personally I've always found Codi completely useless, and got better service from the humans, who always seem to be called Ben or Susi, but whom you suspect are really Ravi and Sunita sequestered in a cube farm in Bangalore.

What is interesting is the way that Telstra have tried to make Codi part of the workflow and indistinguishable from a human.

All these bots and assistants take their inspiration from ELIZA, the first chatbot - a program that takes input from a human, processes it, and replies based on whatever model it has.
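The mechanism can be sketched in a few lines of Python - pattern-match the input, fill in a templated reply, fall back to a stock prompt. The rules below are my own invented examples, far cruder than Weizenbaum's actual script, but they show how shallow the trick is:

```python
import re

# A tiny, invented rule set in the spirit of ELIZA: each pattern maps a
# class of input to a templated reply. Note there is no understanding here,
# only surface pattern matching. (Real ELIZA also swapped pronouns,
# turning "my" into "your" and so on, which this sketch skips.)
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "How long have you felt {0}?"),
    (re.compile(r".*\bmother\b.*", re.I), "Tell me more about your family."),
]

def reply(text: str) -> str:
    """Return the first matching templated response, else a stock prompt."""
    for pattern, template in RULES:
        match = pattern.match(text.strip())
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(reply("I am worried about my handwriting"))
# -> Why do you say you are worried about my handwriting?
print(reply("hello"))
# -> Please go on.
```

Swap the hand-written rules for a model built from a large corpus and you have, in outline, Codi or the fictional Emilybot - the replies get richer, but they are still only as good as the dataset behind them.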

So Codi replies on the basis of whatever world model Telstra has created and the fictional Emilybot replies based on the corpus of Emily Bronte's writings.

Now, a long time ago Adrian Desmond wrote a book called The Ape's Reflexion, in which he argued that the 1960s experiments in which researchers attempted to teach chimpanzees American Sign Language were failures.

What Desmond argued was that apes are very bright: if you made a particular series of gestures and then presented a slice of watermelon, the apes would get the idea that making the gestures would get them more watermelon, and the complexity of these learned responses is indistinguishable from a true linguistic response.

Which of course opens up a whole can of philosophical and psychological worms about the nature of language and communication.

Let's take a simple case: when my cats want to go out they sit by the back door (they know they're only allowed out the back door, into a fenced-off play area). If you ask them what they want they'll look at you and then at the door, and if you point at the door they'll get up and walk towards it expecting to be let out.

No one taught them this; they worked out for themselves that a particular set of interactions would result in them being allowed out. Is it language? No. Communication? Definitely.

And so it is with digital assistants. The Eliza effect may lead us to think that they are cleverer than they are, but fundamentally they only parse their inputs in a certain way and respond on the basis of the dataset they hold.

So if there were ever such a thing as an Emilybot, she would appear to tell you things about, say, Charlotte, but only what was in her and Charlotte's letters and diaries. An uncritical observer might think they were being told secrets, but it would really only be stuff you could find in Juliet Barker's book on the Bronte sisters.

She wouldn't, for example, be able to tell you if her sister Charlotte smelled or any other personal secrets ...