ChatGPT: can artificial intelligence create crosswords?
AI-generated clues are often bizarre and sometimes flat-out wrong – but, setters agree, that may not be a bad thing. Plus: a podcast returns
This week, some Things of Interest to Puzzlers That You Might Otherwise Miss.
Five chats with crossword makers …
First, if you’re a solver of the Mephisto series – which is unusual in giving the actual names of its setters – and have wondered what Paul McKenna does when he’s not setting, you can now find out. The same setter is the Financial Times’ Jason, and that paper interviews him as part of “an occasional series”:
Did your school mention crossword compiling in career discussions?
It was never mentioned as a career option. I am a construction manager in the oil and gas pipeline industry.
Happily, the Telegraph has also interviewed a setter, explaining:
It is still a rare event for us to welcome a new compiler to the series. While records of the early days of our flagship crossword are sketchy, it’s unlikely that there have been more than 35 compilers in all that time.
That setter is sometime languages teacher and novelist Robyn, known locally as Picaroon. Funnily enough, Robyn was also interviewed by the FT, under the name of Buccaneer. (Before that, it was guinea pig enthusiast and library manager Zamorca, known locally as Hectence.)
The last interview for now is a lengthy chat with New York Times crossword editor and general puzzle nabob Will Shortz. Because it’s conducted by the New Yorker’s puzzles and games editor, Liz Maynes-Aminzade, it’s pleasingly technical.
There’s a constructor who gave us the word ‘dispense’ as an across, and coming down is ‘nur’, which is an Islamic term. None of us on the editorial staff was familiar with the term ‘nur’. You could change the ‘N’ to an ‘R’ and make it ‘disperse’ and ‘R.U.R.,’ as in the Čapek play, so that is my preference for the grid.
I think it’s better as an ‘R.’ We’ll reach more solvers with that. But then there was the issue: did the constructor purposely choose ‘nur’ and think that was a significant thing for people to know? And people would get it from the crossings. The clue for ‘dispense’ would be really clear, and the other two across answers were readily gettable. In the old days, I definitely would’ve changed that to an ‘R.’ Nowadays, we’re talking about it.
They also discuss the experience of finding love at 70.
… three chats trying to make crosswords …
We’ve looked at the topic of artificial intelligence and crosswords before, and we’ll return to it before long; in the meantime, some experiments with varying results.
Software engineer James Williams has asked chatbot ChatGPT to try to solve some cryptic clues. The results are often bizarre …
1 Chap recalled skill: something frequently repeated (6)
Mantra – The word ‘chap’ is a shortened form of ‘chapman,’ which is an archaic term for a trader or merchant. ‘Recalled’ is a hint at the reversal of the word ‘trader’ to get ‘redarat’ [sic], which is a homophone for ‘mantra’. ‘Something frequently repeated’ is a description of the definition of a mantra.
… but by no means all bad. When it does work, James suggests, it identifies a definition and “forces the cryptic solution backwards from there”. Don’t we all.
Less successful are the efforts from Australian setter David Astle, encouraging ChatGPT to write a cryptic clue for “marmalade”:
So I lend a hand, inviting the bot to scramble marmalade. It shoots back ‘a lad, marry’, which any Charlie can see is off-beam. Still, I encourage my apprentice – as that’s how the power dynamic has shifted. I ask its e-brain to combine anagram and definition, resulting in, ‘A lad may marry this sweet spread (7) [sic]’.
And far, far less successful – counterintuitively – is ChatGPT’s attempt at a definitional crossword under the guidance of Nayanika Mukherjee of the Times of India. Here’s a sample clue:
Small four-legged animal with long ears
You will literally never guess the answer.
… and some chat about crosswords
Finally, good news for your ears: