On a recent episode of Loopcast, Erika Ahern unpacks a chilling case study out of Columbia University—one that might say more about the future of higher ed than any campus protest or policy memo.
Meet Lee, a student who used ChatGPT to float through Ivy League coursework with minimal effort. When asked why he fought so hard to get into Columbia only to outsource the actual learning, his answer was telling:
“It’s the best place to meet your co-founder and your wife.”
Not exactly a love of knowledge.
But Lee wasn’t idle. After spending over 600 hours grinding LeetCode to prep for technical interviews, he and a classmate launched Interview Coder—an AI-powered tool that silently helps users cheat in live coding interviews. Not take-home tests. Live interviews.
Columbia found him guilty of promoting a cheating tool and slapped him with probation. Lee’s response?
“I didn’t even violate Columbia’s AI policy.”
“Everyone is using AI. It’s just the new normal.”
Ahern isn’t so sure.
She points out that this mindset is already baked in. Back in January 2023—just two months after ChatGPT launched—90% of surveyed students had already used it to complete homework. Now?
Students at every type of institution—state schools, liberal arts colleges, Ivies, professional programs—use AI to take notes, generate essays, prep for exams, and more.
In STEM? The numbers run just as high: 90% of students use AI to analyze data, automate research tasks, and debug code.
The tools aren’t just ChatGPT anymore: Google Gemini, Anthropic Claude, Microsoft Copilot, and others are everywhere.
The most disturbing part? Schools’ AI policies remain vague, reactive, and easy to exploit. For many students, this isn’t cheating. It’s optimization.
But if students aren’t learning, and institutions aren’t enforcing standards, what exactly is left of higher education?