ai in schools

@kumavis.me

i recently wrote about my experience introducing ai to young learners at my children's school. i also got to tour Punahou middle school in Honolulu and noticed they had their ai policy poster displayed in every classroom. their approach seemed nuanced and well developed (i'll detail it more below), and from this i became more interested in how different schools are integrating and setting rules around ai. this is intended to be a summary of different schools' approaches.

the problem with rules-only

initially some schools responded to the rise of ai tools with an outright ban. a policy that begins and ends with "you may not use ai for assignments" teaches students exactly one thing: that ai is something to hide. it doesn't build judgment. it doesn't develop critical thinking. and it doesn't prepare anyone for the world they're walking into.

the International Baccalaureate organization explicitly chose not to ban ai tools. their line is clear: ai-generated content must be cited and attributed like any other source, and submitting ai output as your own work is academic misconduct. in other words, use it to support your learning, but be transparent about it. that distinction is a good start.

the different questions inside "ai in schools"

let's break down "ai in schools" a bit:

  • ai as a tool for learning. if a student used ai to help them learn, that's acceptable. if they used it to pretend they did something they didn't, it's not. most schools jumped to this integrity question first, but it only covers one dimension.
  • ai as a subject of study. how do these systems work? what are their limitations and failure modes? a student who doesn't understand what a language model is doing has no basis for deciding when to trust it.
  • critical evaluation of ai outputs. ai produces confident-sounding text regardless of whether it's correct, biased, or missing context. teaching students to take that output apart is maybe the most important piece.
  • the institutional side. teacher training, policy development, community involvement, age-appropriate progression. you can't hand students a tool and tell them to be responsible with it if the adults haven't figured out the boundaries yet.

schools must address all of these at once.

what the interesting schools are doing

so what does it actually look like when a school takes this seriously? a few examples.

Punahou School in Honolulu set up an Emerging Technology Executive Committee and rolled out ai literacy units in all their ninth-grade English classes. their approach is explicitly human-centered: ai supports learning, but writing stays a priority because it's inseparable from critical thinking. students can use ai tools but only with instructor approval and proper citation, treating ai output the way you'd treat any other source.

Sidwell Friends School launched the AI Co-Lab, which started as professional development for 9 teachers and grew to over 400, and eventually connected 788 educators across 300+ independent schools. the key insight: teachers need ongoing experimentation, not a single workshop. they learn by doing, month after month. this feels right to me as we're still all figuring out what works and how ai affects our performance and behavior.

Phillips Academy Andover has a subject-specific approach. in computer science, students generate code with ai and then dissect it line by line, paired with oral assessments to confirm understanding. in foreign languages, ai produces grammar-focused paragraphs that students have to analyze and explain. the expectation across disciplines is analytical engagement. the approach is "use it and then take it apart."

Phillips Exeter Academy is weaving ai across the school community. faculty launched a Student AI Group where students engage directly with the technology, and one senior produced a podcast series about living alongside artificial intelligence as her capstone project. the school also runs Teaching in the Age of AI Leadership, an institute where educators explore ai's impact on curriculum and assessment. i like that students aren't just the subjects of ai policy here, they're active participants in the conversation.

Singapore American School created a parent advisory board specifically for ai and emerging technology, recognizing that many SAS parents work professionally with ai and can help shape school policy. they're also strengthening teachers' understanding of when and how to use generative ai in classrooms, and launching their first dedicated ai course for high school students. their approach involves the whole community in figuring this out together: parents, teachers, and students.

across grade levels

the schools that are doing this well share a rough structure across ages.

in the elementary years the focus is on foundations. computational thinking, pattern recognition, conversations about how machines learn from data. tools like Google's Teachable Machine let young kids experiment with ai concepts through play, building intuition they'll rely on later. you don't need to explain transformers to a second grader. but you can help them understand that a machine can learn patterns, and that those patterns have limits.
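that "patterns have limits" idea can be sketched in a few lines. this is a hypothetical toy, not how Teachable Machine actually works under the hood; a 1-nearest-neighbor rule is just about the simplest pattern learner there is, and it shows the failure mode kids should notice: the machine answers confidently even for inputs nothing like its examples.

```python
def nearest_label(examples, point):
    # return the label of the training example closest to the input
    return min(examples, key=lambda e: abs(e[0] - point))[1]

# training examples: (size, label) pairs a child might sort
examples = [(1, "small"), (2, "small"), (8, "big"), (9, "big")]

print(nearest_label(examples, 1.5))  # "small" -- fits the pattern
print(nearest_label(examples, 8.5))  # "big"   -- fits the pattern
# the limit: an input far outside anything it has seen still gets
# forced into one of the two labels, with no sense of "i don't know"
print(nearest_label(examples, 100))  # "big"   -- confident, but meaningless
```

the last line is the teachable moment: the pattern learner never says "that's outside what i learned," it just picks the nearest thing it knows.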

middle school introduces more direct engagement: algorithmic bias, filter bubbles, using ai-powered tools in structured settings. the goal is to raise informed digital citizens who understand not just how to use these tools but how they shape what we see and believe.

high school is where the critical muscles really develop. students use ai to draft, then critique. they fact-check ai output against scholarly sources. they analyze generated text for bias, missing context, rhetorical weakness. some schools ask students to submit both the ai-generated version and their revised version with a written reflection on their editing process.

teaching the critique

the most important shift here is treating ai output as a starting point for critical thinking, not a finished product. Stanford's ai literacy framework breaks this into four domains:

  • functional literacy — how to use the tools
  • critical literacy — understanding their limitations
  • ethical literacy — grappling with fairness, bias, and societal impact
  • pedagogical literacy — knowing when ai helps learning and when it gets in the way

in practice this means students learn to ask: what did this ai output get right? what did it miss? where is it confident about something it shouldn't be? what biases might be baked into the training data?

Digital Promise's AI Literacy Framework emphasizes that understanding and evaluating ai are prerequisites for using it well. students need to know how ai systems work, critically assess their outputs, and make informed decisions about when and how to use them. a student who can do all three is far better prepared than one who was simply told not to touch the tools.

what progressive policy actually looks like

clear guidance on when ai use is appropriate. not a blanket ban but subject-by-subject, assignment-by-assignment transparency. some work should be done without ai. some should deliberately incorporate it. students deserve to know which is which and why.

explicit expectations for citing and disclosing ai use. treat ai-generated material the way we treat any source: something to be acknowledged, evaluated, and built upon. this normalizes honest engagement rather than driving it underground.

age-appropriate progression. what a kindergartner needs from ai education is fundamentally different from what a high school junior needs. policy should reflect that.

ongoing teacher development. the Sidwell Friends model (sustained, collaborative, experimental) works because it treats teachers as learners too. a single PD day won't cut it.

student voice in the conversation. Exeter's Student AI Group isn't just a nice touch. it produces better policy because students understand how these tools are actually being used.

the stakes

UNESCO has published ai competency frameworks for both students and teachers alongside guidance for generative ai in education and research. the IB has updated its academic integrity guidelines.

schools that cling to prohibition-only policies risk producing graduates who are either naively dependent on ai or artificially ignorant of it. neither outcome serves students well. the schools leading the way are choosing a harder path: teaching students to think critically about the most powerful tools of their generation, to know when to lean on them and when to set them aside, and to always question what comes back.


further reading

  • the AI Co-Lab. Sidwell Friends' open professional development initiative for teachers, now spanning 300+ independent schools.
  • AI4K12. a national framework organizing ai literacy into five "Big Ideas" across K-12 grade bands.