I've played some more with it, and it's certainly very impressive.
It's remarkably good at producing formulaic text and you can, if you're prepared to put in some effort, get it to produce something clever and amusing.
But that's not the point.
It's simply (if "simply" is the right word here) a very clever text generation engine with a remarkably rich dataset from which to draw its information. As with all such software, it gives the appearance of intelligence due to the complexity and richness of its responses.
If, for example, I made my living writing real estate advertisements, or some of the formulaic advertorial that appears on real estate sites, I'd be worried - tools like Chat-GPT are admirably suited to that sort of task. But none of its output is so good that it doesn't need to be reviewed by a human.
And that brings me to my second point.
Chat-GPT's rise to prominence has been accompanied by a wave of education departments and universities banning its use in essays.
Too late, the horse has bolted.
It's out in the wild, and so it, or some competitor, will inevitably get used.
So it's time to embrace it.
Take, for example, my little Chat-GPT exercise on wet plate photography: give the class the example and ask them to critique and expand on it.
I would expect that students would immediately run off and raid Wikipedia and Camerapedia for information and use that as the basis for an answer. But we're after more than the facts here; we need some critical analysis as well.
Doing so successfully will show that they understand the topic, and it doesn't matter if they use an AI tool to generate the final text - it's the requirement to critique the answer that's important.