certainly can "augment" the school experience, which is more than just academics, but also socializing kids.
obviously some efficiencies could be generated, eg higher student/teacher ratios.
as for childcare, there are a lot cheaper ways to do that.
There aren't many childcare options that are cheaper than school. Before my son started kindergarten we were spending more per month on daycare than we were on our mortgage. But, yes, I can see AI working as an "augmentation" for things like student/teacher ratios and the like, if thoughtfully implemented. It couldn't (and shouldn't) replace human teachers - especially for younger students - because that interaction and connection is very important, but as a "tool" for teachers and students it could be very helpful and beneficial.
In my (limited) experience, when a kid wants to learn, you can pretty much use any method and it'll work out (I know I'm exaggerating, but I'm not that far off either. Hell, some kids learn despite their teachers!). I tried Duolingo myself, but I thought it was rote and I just hate digital bells and whistles that tell me I haven't achieved my objective or that I could do this or that, but that's just me.
Re AI and teaching, I think it will definitely be easier to pretend that the education system's working if AI is used instead of teachers.
Just to put things in perspective, where I teach (a Bachelor's degree programme), students take a super wide variety of subjects, including chemistry.
They do lab sessions all year round: usually some preparation beforehand, the lab session itself, and then a report on what they did. My chemistry colleagues have done what I think is a good job of setting it up. Students have prep activities on the uni's digital workspace, they're with the teacher and lab tech through the session, they work in pairs, and their reports are graded and given back so that they can see what they got right or wrong. At the end of the year, they have a graded lab session, randomly selected from the pool of all lab sessions, and my colleagues have also made videos to help students revise. (Sorry for the lengthy context.)
A couple of weeks ago, I hear students complaining about the graded lab session, so I ask them what the problem is, and they tell me it was hard. I point out that they had done all of the experiments and lab work in previous sessions and that they were well equipped to revise... And that's when one of them said, "Sir, you don't understand, even with work, it would have been difficult"... So unless AI can distribute candy, or drugs, or whatever it is that students find rewarding, I really don't see how it's going to help them any better than a teacher would.
certainly can "augment" the school experience, which is more than just academics, but also socializing kids.
obviously some efficiencies could be generated, eg higher student/teacher ratios.
as for childcare, there are a lot cheaper ways to do that.
Clumsily stated but uncomfortably accurate. Lots of families would have to sacrifice half their income.
Approximately zero of my school time was actually geared toward how I learn best, even though in grade school I was guinea-pigged with the ITA alphabet (I could already read in kindergarten, so it was lost on me, but other kids got it and were probably (?) better off for it). In 3rd grade, our new building had no walls between the classrooms; grades 1-2, 3-4, and 5-6 were each lumped together and then divided into 4 clusters for each subject, so there were really 12 grades in grades 1-6, and we'd skip around based on what we were better at. So imagine 120 different lesson plans for 120 kids instead of 12 lesson plans.
I think it could really be great. I might even try DuoLingo if it's not rote. I need it.
Some newspapers around the country, including the Chicago Sun-Times and at least one edition of The Philadelphia Inquirer have published a syndicated summer book list that includes made-up books by famous authors.
Chilean American novelist Isabel Allende never wrote a book called Tidewater Dreams, described in the "Summer reading list for 2025" as the author's "first climate fiction novel."
Percival Everett, who won the 2025 Pulitzer Prize for fiction, never wrote a book called The Rainmakers, supposedly set in a "near-future American West where artificially induced rain has become a luxury commodity."
Only five of the 15 titles on the list are real. (...)
This is a list designed for people who pretend to have read books over the summer
Mind you, you can probably ask AI to pitch you a summary for that non-existent book.
I've read about this in the Guardian, too. They also have an interesting article on a book about Sam Altman (Empire of AI: Inside the Reckless Race for Total Domination, by Karen Hao) and while I'm at it, another on removing "safeguards" from AI.
I'm biased, but I find this comforting. De-bugging in another sense. I find it striking that consent is now rightly regarded as an imperative but that it somehow escapes the realm of data-digging AI. Thanks, Proclivities.
Some newspapers around the country, including the Chicago Sun-Times and at least one edition of The Philadelphia Inquirer have published a syndicated summer book list that includes made-up books by famous authors.
Chilean American novelist Isabel Allende never wrote a book called Tidewater Dreams, described in the "Summer reading list for 2025" as the author's "first climate fiction novel."
Percival Everett, who won the 2025 Pulitzer Prize for fiction, never wrote a book called The Rainmakers, supposedly set in a "near-future American West where artificially induced rain has become a luxury commodity."
Only five of the 15 titles on the list are real. (...)
Every vendor - EVERY VENDOR - that I deal with has an AI tool. They're the same tools as before, mostly, but they all know a buzzword when they hear one.
Every vendor - EVERY VENDOR - that I deal with has an AI tool. They're the same tools as before, mostly, but they all know a buzzword when they hear one.
the irony here is that you can use ai to illustrate ways to ai-proof your lessons
(prompting is key)
some of the solutions have already been mentioned here
see results/examples below:
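(rough illustration only: the prompt below is a paraphrase, not the exact one, and the python + OpenAI client is just an assumption for showing the idea - any chat model or plain web UI gets you the same kind of output as the framework that follows)

# illustration only: prompt wording, the model name, and the OpenAI python
# client are assumptions; any chat model (perplexity, gemini, grok, etc.) works too.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

prompt = (
    "I'm a teacher worried about students outsourcing assignments to AI. "
    "Give me a practical framework for making lessons and assessments "
    "AI-resistant. Organize it into named strategies, each with specific "
    "techniques, a concrete classroom example, and a short rationale, then "
    "finish with implementation tips, likely challenges, and a sample lesson plan."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)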
Teaching Framework: Fostering Authentic Learning in the Age of AI
This framework, called LEARN (Leverage Engagement, Assess Process, Redesign Tasks, Nurture Critical Thinking), provides actionable strategies for teachers to ensure students learn rather than rely on AI for assignments. Each component includes specific techniques, examples, and rationales, drawing on educational principles and current challenges with AI use in classrooms.
1. Leverage Engagement Through Active Learning
Goal: Create classroom environments where students are actively involved in learning, reducing the temptation to outsource tasks to AI.
Techniques:
In-Class Activities: Use interactive methods like group discussions, debates, or Socratic seminars to process concepts in real-time. For example, instead of assigning a written essay on a novel, host a class debate on the novel's themes, requiring students to articulate ideas on the spot.
Flipped Classroom: Assign readings or videos as homework, then use class time for hands-on activities like problem-solving or peer teaching. This shifts focus from AI-generated summaries to applying knowledge.
Real-Time Feedback: Use tools like polling apps (e.g., Kahoot) or whiteboards for students to demonstrate understanding during lessons, making AI use less relevant.
Example: In a history class, instead of a take-home report on the French Revolution, students participate in a role-play activity as historical figures, defending their actions in a mock trial. This requires understanding context and cannot be easily outsourced to AI.
Rationale: Active learning fosters engagement and immediate application of knowledge, reducing opportunities for AI reliance. Research (e.g., Freeman et al., 2014) shows active learning improves retention and critical thinking compared to passive methods.
2. Assess Process Over Product
Goal: Focus evaluations on the learning process rather than final outputs, which AI can easily generate.
Techniques:
Drafting and Revision Stages: Require students to submit outlines, drafts, or reflections alongside final assignments. For example, ask for an annotated bibliography or a "thinking log" detailing how students arrived at their conclusions.
In-Class Writing or Problem-Solving: Conduct assessments during class under supervised conditions, such as timed essays or math problem sets, to ensure originality.
Metacognitive Reflections: Ask students to write about their learning process, e.g., "What challenges did you face in this project, and how did you overcome them?" AI struggles to replicate personal, context-specific reflections.
Example: For an English essay, students submit a thesis statement, outline, and first draft for feedback before the final version. The teacher grades the process (e.g., quality of revisions) as heavily as the final essay.
Rationale: Emphasizing process makes it harder for students to submit AI-generated work, as teachers can track development. Studies (e.g., Nicol & Macfarlane-Dick, 2006) highlight that formative feedback on process enhances learning outcomes.
3. Redesign Tasks to Be AI-Resistant
Goal: Create assignments that are difficult for AI to complete effectively or that require unique, human-centric skills.
Techniques:
Personalized and Contextual Tasks: Assign projects tied to students' experiences or local contexts, e.g., "Interview a family member about a historical event and compare it to a primary source." AI cannot replicate personal narratives or local knowledge.
Complex, Open-Ended Problems: Design tasks requiring synthesis and judgment, such as case studies or ethical dilemmas, where AI's generic responses fall short. For example, "Propose a solution to a local environmental issue, justifying your approach with evidence."
Multimodal Outputs: Require presentations, videos, or live demonstrations instead of text-based submissions. For instance, a science project might involve creating a physical model and explaining it orally.
Example: In a social studies class, students analyze a recent local news event, incorporating a class discussion and a personal opinion piece that references their community. AI struggles with hyper-local or subjective tasks.
Rationale: AI excels at generic, text-based tasks but struggles with personalized, multimodal, or highly contextual assignments. This aligns with Bloom's Taxonomy, encouraging higher-order skills like evaluation and creation.
4. Nurture Critical Thinking and AI Literacy
Goal: Teach students to think critically and use AI as a tool, not a crutch, while understanding its limitations.
Techniques:
Teach AI Evaluation: Have students compare AI-generated outputs to human work, identifying flaws like factual inaccuracies or lack of nuance. For example, ask students to fact-check a ChatGPT response on a scientific topic.
Integrate AI Ethically: Allow limited AI use for specific tasks (e.g., brainstorming ideas) but require students to document how they used it and improve upon it. This promotes transparency and skill development.
Critical Inquiry Projects: Assign tasks that require questioning sources or assumptions, such as analyzing biases in AI-generated texts or evaluating primary vs. secondary sources.
Example: In a media studies class, students use an AI tool to generate a news article, then critique its accuracy, tone, and bias compared to a journalistâs article, submitting a written analysis.
Rationale: Teaching AI literacy empowers students to use technology responsibly while honing critical thinking. Research (e.g., UNESCO, 2023) emphasizes AI literacy as essential for 21st-century education.
5. Reinforce Accountability and Culture of Integrity
Goal: Build a classroom culture that values learning and discourages academic dishonesty, including AI misuse.
Techniques:
Honor Codes and Discussions: Discuss academic integrity openly, explaining why learning matters over grades. Have students sign an honor code tailored to AI use.
Detection Tools with Transparency: Use AI detection tools (e.g., Turnitin, GPTZero) to identify AI-generated text, but pair this with conversations about why originality matters, not just punishment.
Positive Reinforcement: Reward effort, creativity, and improvement through rubrics that prioritize these qualities, reducing pressure to cheat.
Example: A teacher starts the semester with a workshop on academic integrity, where students brainstorm the long-term benefits of learning (e.g., career skills) vs. short-term gains of cheating. Rubrics award points for originality and effort.
Rationale: A culture of integrity, supported by clear expectations and positive incentives, reduces cheating. Studies (e.g., McCabe et al., 2012) show honor codes and proactive discussions lower academic misconduct.
Implementation Tips
Start Small: Begin with one or two strategies (e.g., in-class assessments and process-based grading) to test effectiveness without overwhelming yourself or students.
Adapt to Subject and Age: Tailor tasks to the subject (e.g., math may focus on in-class problem-solving, while humanities emphasize personal reflections) and student age (e.g., younger students may need simpler tasks).
Leverage Technology: Use learning management systems (e.g., Canvas, Google Classroom) to track drafts and provide feedback, streamlining process-based assessment.
Professional Development: Teachers can seek training on AI in education through resources like ISTE or EdTech conferences to stay updated on tools and strategies.
Addressing Challenges
Time Constraints: Redesigning assignments and grading processes can be time-intensive. Solution: Use templates for rubrics and streamline tasks (e.g., peer reviews for drafts).
Student Resistance: Some students may prefer familiar, AI-friendly tasks. Solution: Explain the purpose of new methods and highlight benefits (e.g., better skills, fairer grades).
Access to AI: Students will still have access to AI tools. Solution: Focus on tasks where AI's output is less effective or easily detectable, and emphasize learning over punishment.
Equity Concerns: Not all students have equal access to technology or support. Solution: Ensure assignments can be completed with minimal tech (e.g., in-class work) and provide resources for struggling students.
Supporting Evidence
Educational Research: Active learning and formative assessment improve engagement and retention (Freeman et al., 2014; Nicol & Macfarlane-Dick, 2006). Process-oriented grading aligns with constructivist theories, emphasizing learning as a journey.
AI in Education: Sources like EdWeek (2023) and UNESCO (2023) note that AI misuse is common but can be countered with task redesign and critical thinking focus. Detection tools, while imperfect, support integrity when used transparently.
Teacher Insights: Discussions on X and education blogs (e.g., TeachThought, 2024) highlight successful strategies like personalized tasks and in-class assessments, with teachers reporting reduced AI reliance when students are engaged.
Example Lesson Plan (High School English)
Objective: Analyze themes in To Kill a Mockingbird without AI reliance.
Day 1 (Engage): Class discussion on justice themes, using a Socratic seminar. Students jot down ideas in real-time, graded for participation.
Day 2 (Redesign Task): Students write a short in-class response: "How does Scout's perspective on justice evolve, and how does it relate to your own views?" Teacher provides immediate feedback.
Day 3 (Assess Process): Students submit an outline for a longer essay, including a thesis and two quotes from the text. Homework is to revise the outline based on peer feedback.
Day 4 (Nurture Critical Thinking): Students compare an AI-generated summary of the novel (provided by the teacher) to the text, noting inaccuracies or oversimplifications.
Day 5 (Reinforce Integrity): Final essay is written in class, with a rubric rewarding original analysis and effort. Students sign an honor pledge before starting.
Outcome: Students demonstrate understanding through discussion, process, and original work, with minimal opportunity to use AI.
Thanks, yet again, R_P for this video. Great, great examples of thinking, teaching, and learning.
Funny, because it very much echoes with what I'm trying to teach, and why I'm teaching it, or doing it this way.
So I teach English to non-native speakers. One of the first things I go through is speaking and listening skills. I try to explain the automation of pronunciation (and of any kind of skill) with a driving analogy. When you first learn how to drive a car with a clutch and a gear stick, your brain actually has to focus a lot on the messages it's sending your left and right foot, metering the pressure and release so as not to stall or lurch; that would be system 2 in the video. Then, once you've done it enough (and it can take a few or many iterations), you don't have to "think" about it, you just do it and off you go; that would be system 1. (And you can still mess it up if there's a lot of extraneous cognitive load.)
And this is exactly why I think AI in education is so dangerous. As Derek Muller explains, what's going to happen when people don't "do" new things, tasks, essays etc. and ask the machine to do it for them? They'll just never learn how to do it for themselves, and that is very dangerous for all of us, I think.
In learning languages, there's definitely that question of cognitive overload, which means that you cannot reach even some degree of fluency without having mastered some of the component tasks (if a student has to figure out vocabulary, pronunciation, syntax and what tense to use all at once, they're not about to say anything in a reactive manner). And mastery won't be achieved without repetition, practice and feedback, multiplied and multiplied over time.
My concern is also that education departments and some teachers are perhaps not so interested in teaching as a means for people to learn how to think and to build up their knowledge and their cognitive systems, but rather in giving the 'appearance' of learning. As long as it looks like the system is working, churning out students and degrees, it's all good... When I first became a teacher, I went through a small depression; among the factors that caused it were my feeling of powerlessness at helping students learn, as well as the perception that my job was not much more than giving the appearance of equal opportunity and social fairness to my society. In essence, I felt like a stamping machine, stamping poorly performing students 'poor', average students 'average', and good students 'good'. I was of course more interested in helping them get better, regardless of where they were starting from.
In a way, AI is likely to make it easier to pretend that the education system works (just like using CTRL+C and CTRL+V in essays was before), as I don't think a lot of teachers are going to bother checking how different AIs answer the questions they have set and comparing that with the students' answers.
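Just to make that concrete, here is a minimal sketch (purely illustrative, not something I claim anyone actually runs) of what that kind of check could look like: paste your own question into a few chatbots, keep their answers, and flag student submissions that overlap heavily with any of them. The similarity measure and the threshold are arbitrary assumptions, and nothing replaces actually reading the work.

from difflib import SequenceMatcher

# Answers obtained by pasting the assignment question into a few chatbots
# (placeholder strings for illustration).
model_answers = [
    "The French Revolution was driven by fiscal crisis, social inequality and Enlightenment ideas.",
    "Key causes include state debt, food shortages and resentment of privilege.",
]

# Student submissions keyed by an anonymised name (placeholders again).
student_answers = {
    "student_a": "The Revolution was driven by fiscal crisis, social inequality and Enlightenment ideas.",
    "student_b": "My grandmother's village still marks 14 July, which got me thinking about...",
}

def similarity(a: str, b: str) -> float:
    # Crude character-level resemblance between 0 and 1; no semantics involved.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

for name, text in student_answers.items():
    score = max(similarity(text, m) for m in model_answers)
    flag = "worth a closer look" if score > 0.6 else "no obvious overlap"  # 0.6 is an arbitrary threshold
    print(f"{name}: {score:.0%} resemblance to a chatbot answer ({flag})")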
Finally, if one takes the GPS example in the video (you won't build up your orientation skills if you always use a GPS), it can lead to people driving into water... Happened in my country, and happened elsewhere too (here and here and there are other examples). Now, if you apply that example to the use of AI in learning, you can see where that would lead.
However, I also agree (reluctantly) with the fact that AI can be great in the learning process (as Derek says, by providing feedback and so on), but so far, I've mostly seen examples of AI used stupidly, in a way that reduces, rather than enhances, what the student is going to learn; and that's without even considering Grok's "white genocide".
Anyway, sorry for ranting, but I really enjoyed this video, and I might actually use it in class.
if you haven't tried experimenting with these or others you're missing out
in my world i pick something i know very well and then ask about it
results are good, very good
More and more meetings come with a follow-up AI transcription, and it's a lifesaver: I like to read whatever it is they just told me, because that's how I process. It can separate the channels when people are talking over the top of each other.
i've used perplexity, gemini and grok (in that order)
they are all pretty impressive (can't use for work)
if you haven't tried experimenting with these or others you're missing out
in my world i pick something i know very well and then ask about it
results are good, very good