Guides
Andi Insider
April 23, 2025

Please Explain.

I asked Andi about the background of this particular phrase in Australian history, and this was the response:

The phrase "please explain" became famous in Australian politics after Pauline Hanson's response during a 1996 interview on 60 Minutes. When journalist Tracey Curro asked Hanson if she was xenophobic, Hanson replied simply, "Please explain?" [1]

The line became an iconic catchphrase in Australian culture, even inspiring the title of a 2016 SBS documentary film "Pauline Hanson: Please Explain!" [1] [2]. The moment captured Hanson's persona as a controversial right-wing politician who presented herself as speaking for ordinary Australians.

The phrase was widely parodied and entered Australian popular culture as an in-joke that highlighted both Hanson's communication style and her controversial views on immigration and multiculturalism [2]. According to Know Your Meme, it remains "one of Australia's earliest memes" [3].

[1]    Wikipedia - Pauline Hanson

[2]    Wikipedia - Pauline Hanson: Please Explain!

[3]    Know Your Meme - Please Explain

And recently, I was reading an article by Gary Marcus, a cognitive scientist and professor emeritus at New York University who has become one of the most prominent critics of current artificial intelligence approaches, particularly deep learning and large language models. In the article, Marcus quotes Brad DeLong:

if your large language model reminds you of a brain, it’s because you’re projecting—not because it’s thinking. It’s not reasoning, it’s interpolation. And anthropomorphizing the algorithm doesn’t make it smarter—it makes you dumber.

I, of course, had no idea what this all meant so I asked Andi:

Please explain this in plain English: "if your large language model reminds you of a brain, it’s because you’re projecting—not because it’s thinking. It’s not reasoning, it’s interpolation. And anthropomorphizing the algorithm doesn’t make it smarter—it makes you dumber."

Andi's response:

Large language models work by finding statistical patterns in text, not by thinking or reasoning like humans do [1] [2]. When these models generate human-like responses, we tend to imagine they have human-like understanding, but this is misleading. The models predict which words are likely to come next based on their training data - they don't actually comprehend meaning or have conscious thoughts [3].

Laura Gwilliams from Stanford explains this key difference: "LLMs are aware of only structure. They are entirely focused on outputting something grammatically correct—without the notion that such a thing as grammar exists" [4].

Treating these statistical models as if they think like humans is not just incorrect - it makes it harder to understand their real capabilities and limitations. The models excel at pattern matching and text prediction, but attributing human-like reasoning to them clouds our ability to evaluate them properly [5].

[1]    Ars Technica - A jargon-free explanation of how AI large language models work

[2]    Manning Publications - How Large Language Models Work

[3]    Appian - What are Large Language Models? Explained in Plain Terms

[4]    Stanford Neuroscience - What ChatGPT Understands

[5]    Springer Nature - Do Large Language Models reason like us?
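
To make that "predicting the next word" idea concrete, here's a toy sketch of my own (an illustration only, with made-up training text; it has nothing to do with how Andi or any real LLM is implemented): a tiny bigram model that simply counts which word follows which, then "generates" by picking the most frequent follower.

```python
# Toy illustration (not production LLM code): a bigram "language model"
# that learns which word tends to follow which, purely by counting.
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat chased the dog"
)

# Count how often each word follows each other word in the training text.
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # -> 'cat' (the most frequent word after 'the')
print(predict_next("sat"))  # -> 'on'
```

There's no comprehension in those few lines, just tallies. Real large language models replace the counting with billions of learned parameters and look at far more context, but the core job is the one described above: guessing a statistically likely next token.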

And now I'm all the wiser - just another way Andi is reimagining search for the next generation.

PS: Generally, the result cards that Andi finds will have an Explain button. If the website copy is too complex, try Explain to get a plain-language explanation.
