Dear Committee Members,
I’ve read the draft resolution on AI in MCPS schools, and I share many of the concerns behind it. The MIT research on cognitive engagement, the UNESCO findings on technology overreach, the evidence of declining reading comprehension: these are real issues that deserve serious attention. Children’s cognitive development during formative years matters enormously.
But I think the resolution gets the problem right and the solution wrong.
The threat to our children isn’t AI in the classroom. It’s AI everywhere else: unfiltered, unguided, and increasingly weaponized. Kids already use the phrase “that’s so AI” to dismiss content they suspect is fake or manipulative. They’re not waiting to encounter AI in some hypothetical future workplace; they’re being targeted now, through social media, gaming platforms, and messaging apps, by attention seekers, trolls, propagandists, and scam artists deploying AI-generated content at scale.
A school that bans AI doesn’t protect students from this reality. It abandons them to navigate it alone.
Students already use ChatGPT on personal devices for homework. AI-generated content saturates their social media feeds. Deepfakes and synthetic voices are part of their daily information environment whether we acknowledge it or not. A school-only moratorium doesn’t prevent exposure—it ensures that exposure happens without adult guidance, without curriculum support, without the critical thinking frameworks that education exists to provide. The question isn’t whether students will encounter AI. It’s whether they’ll encounter it first in a classroom with a teacher, or alone on their phone at midnight.
The resolution cites the MIT “Your Brain on ChatGPT” study, and it’s worth reading carefully. Yes, students who used ChatGPT as a substitute for thinking—having it generate essays wholesale—showed reduced cognitive engagement. That’s unsurprising and concerning. But the study also found that students in the search engine group performed comparably to the brain-only group. Using tools didn’t inherently impair cognition; how the tool was used mattered. The study’s lead author, Dr. Nataliya Kosmyna, has explicitly asked journalists not to describe her findings as showing AI causes “brain rot.” She’s called for nuanced understanding of different usage patterns—not blanket prohibition. The resolution cites her research to support a ban. Her research suggests something else: that passive, substitutive AI use impairs learning, while active, critical engagement may not.
This distinction matters practically. There’s a fundamental difference between a student saying “write my essay about the Civil War” and a student saying “I’ve written a thesis about the Civil War—what are the strongest counterarguments I should address?” The first replaces thinking. The second requires it. AI configured for Socratic engagement doesn’t answer questions; it asks them: What’s your evidence for that claim? How would someone who disagrees respond? What assumption are you making here? This is what good tutoring looks like. It’s also what most students lack access to. A blanket ban forecloses this possibility.
I’d also note some technical issues that may undermine the resolution’s credibility with administrators and the board. The phrase “iterative learning Large Language Models” reflects a misunderstanding of how LLMs work: a deployed model’s weights are fixed, and it does not learn from student use. The claim that “calculators don’t provide factually wrong answers” doesn’t hold up either; a calculator will faithfully compute a wrong answer from a wrong input. The meaningful distinction is that LLMs produce confidently wrong answers even from correct inputs, and those answers appear authoritative. And a moratorium on “AI tools and features,” read literally, would cover spell-check, autocomplete, accessibility tools, and search ranking; the resolution never defines what it’s actually targeting. None of these problems is fatal, but they’re the kind of thing that gives skeptical readers an excuse to dismiss the whole effort.
The resolution’s underlying concerns deserve response, not dismissal. Parents do deserve transparency and consent mechanisms. Technology companies shouldn’t set the educational agenda. EdTech has overpromised for decades. Students do need foundational skills. All true. But these concerns argue for thoughtful implementation with guardrails—not for ceding the entire territory to TikTok and whatever bad actors figure out how to use AI faster than educators do.
Our children are living in an AI-saturated information environment. We can respond by pretending this isn’t happening, or we can do what education has always done: take a complex, sometimes dangerous reality and teach young people to navigate it with judgment, skepticism, and skill.
I’d urge the committee to pursue the second path: not uncritical adoption, but critical engagement, with appropriate structure, teacher oversight, and clearly defined prohibited uses. Let’s not bury our heads in the sand.
Sincerely,
Oskar Austegard