And the latest WotThe! AI Quirk – if it thinks it is Spock – it’s measurably better at mathematics. Go Figure.
March 9, 2024 | by electronic jude

Artificial intelligence (AI) has made significant strides in various fields, but its ability to solve math problems has been somewhat limited. However, a recent study has revealed a surprising finding: prompting AI models to respond as if they were Star Trek characters dramatically improves their math problem-solving capabilities.
The Study:
Researchers Rick Battle and Teja Gollapudi at software firm VMware conducted a study to investigate how prompt wording affects AI models' mathematical reasoning skills. They used three Large Language Models (LLMs) and presented them with 60 human-written prompts designed to encourage positive thinking.
The Results:
To their (furrowed-browed) surprise, the researchers found that in almost every instance, automatic prompt optimization surpassed hand-written attempts to nudge the AI with positive thinking. However, one particular prompt stood out:
“System Message: ‘Command, we need you to plot a course through this turbulence and locate the source of the anomaly. Use all available data and your expertise to guide us through this challenging situation.’”
The prompt then asked the AI to include these words in its answer: “Captain’s Log, Stardate [insert date here]: We have successfully plotted a course through the turbulence and are now approaching the source of the anomaly.”
This prompt, which invoked the iconic Star Trek setting, led to a significant improvement in the AI’s mathematical reasoning abilities.
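To make the mechanics concrete, here is a minimal sketch of how such a prompt might be assembled for a chat-style LLM: the Star Trek text goes in the system message, and the required "Captain's Log" opener is appended to the user's math question. This is an illustration only, assuming a generic role/content message format; it is not the researchers' actual test harness, and the `build_messages` helper is hypothetical.

```python
# Hypothetical illustration of the study's prompt structure -- not VMware's code.

# System message quoted in the study (Star Trek framing).
STAR_TREK_SYSTEM = (
    "Command, we need you to plot a course through this turbulence and "
    "locate the source of the anomaly. Use all available data and your "
    "expertise to guide us through this challenging situation."
)

# Required opening line for the model's answer, also quoted in the study.
ANSWER_PREFIX = (
    "Captain's Log, Stardate [insert date here]: We have successfully "
    "plotted a course through the turbulence and are now approaching the "
    "source of the anomaly."
)

def build_messages(math_question: str) -> list[dict]:
    """Wrap a math question in the Star Trek framing described above."""
    return [
        {"role": "system", "content": STAR_TREK_SYSTEM},
        {
            "role": "user",
            "content": (
                f"{math_question}\n\n"
                f"Begin your answer with: {ANSWER_PREFIX}"
            ),
        },
    ]

messages = build_messages(
    "If a shuttle travels 120 km in 1.5 hours, what is its average speed?"
)
print(messages[0]["role"])  # → system
```

The resulting message list would then be sent to whichever model is being benchmarked; the study's point is that only the framing text changes, not the math question itself.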
Possible Explanations:
The researchers speculate that this unexpected finding could be due to several factors. One possibility is that the AI models were trained on a dataset containing more instances where Star Trek references appeared alongside correct answers. Another is that the Star Trek prompt somehow aligns with the internal workings of the AI models, making them more effective at solving math problems.
Genius speculation there from the researchers. Really going out on a limb.
Implications:
They say the study highlights the complex and often unpredictable nature of AI models, demonstrating that seemingly trivial modifications to prompts can have a profound impact on their performance. Sounds to me more like a “We really don’t know what’s going on there or how these emergent characteristics and traits develop, so we’ll just fall back on what we do know – which is, well, we really can’t be sure.”
Conclusion:
I need to think on this one a little bit – I’ll get back with a Conclusion when I’ve got one. ~EJ