🦃 Happy Thanksgiving!
As we gather with family and friends this week, we're grateful for the opportunity to help students build real AI skills that prepare them for the future.
Thank you for reading, sharing, and being part of our community - your support means everything as we work to prepare the next generation for an AI-driven world.
Note: There will be no newsletter next week as we take time to celebrate with our families. We'll be back in your inbox the following week!
🎁 Special Holiday Gift: We're offering $50 off your first month's subscription with code HOLIDAY50 - valid through the end of December. See details in our call-to-action below!
🌟 Founder musings
The need to ask “Should we?” before we ask “Can we?”
This week, I came across an article from IBM that is refreshingly honest about what the company has learned about AI agents in the enterprise. Those insights reinforce something we talk about constantly at Flintolabs: the challenge isn't AI capability - it's understanding when and how to use AI, rather than deciding to use it first and then hunting for problems to solve. IBM's research found that 99% of developers are exploring agents, but many are becoming "the hammer in search of a nail," deploying agents before identifying actual business needs or building the governance frameworks to handle what happens when things go wrong. For instance, if an agent accidentally deletes sensitive records, a human will be held accountable for damage that happened faster than anyone could intervene.
This same principle came up in our Flintolabs session this week when we covered brain-computer interfaces. We explored how BCIs can control a cursor or even prosthetic limbs just with thoughts, and discussed companies like Neuralink that are pushing these boundaries. The technology is remarkable—but what fascinated our students most was the question of when and how to use it responsibly. If a BCI can decode your thoughts to control devices, what happens when it accidentally "reads" something you only meant to think, not say aloud?
The possibilities are infinite, but only if we get the foundations right. As we build impressive technology, we need to think through the implications, establish ethical guardrails, and understand the problems we're actually trying to solve. These are the conversations that matter, and exactly the kind of critical thinking we need to nurture in the students of today.
-Janani
🗓️ Opportunities not to miss for high schoolers!
Registration Opens: December 3, 2025
Registration Closes: January 28, 2026
Competition Period: February 2 - April 14, 2026
What: A FREE data science competition from Wharton in which student teams analyze ice hockey data to make predictions about player performance, team dynamics, and game outcomes. Using comprehensive sports datasets, teams will apply analytical techniques to uncover what drives success on the ice.
Who: High school students worldwide (ages 14-18, typically grades 9-12). Teams of 3-5 students from the same school + 1 teacher advisor. Recommended: completion of Algebra 1.
Format: Three-phase competition:
Phase 1: Analyze ice hockey data and make predictions (auto-scored)
Phase 2 (Top 25 teams): Create detailed slide deck explaining methods and findings
Phase 3 (Top 5 teams): Live Zoom presentation to judges
What Makes It Special: This isn't just about sports - it's about developing highly marketable data analysis skills through a compelling real-world application. Students learn Python, R, and other analytics tools while working with the same type of data that professional sports teams use (the short sketch after this list gives a taste). Last year's competition brought together 490+ teams from 30+ countries.
Why It Matters for AI Students: Data science is the foundation of AI and machine learning. This competition teaches students to work with real datasets, identify patterns, make predictions, and communicate findings: exactly the skills needed for AI development. Plus, it demonstrates the practical application of analytical thinking.
Perfect for: Students interested in data science, sports analytics, AI/ML, or anyone who wants hands-on experience with real-world data analysis. No prior data science experience required - educational resources provided throughout.
Prizes: Winners are recognized on Wharton's website (great for college applications), and every participant develops a portfolio-worthy project and gains experience with industry-standard analytics tools.
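To make "analyze data and make predictions" concrete, here's a minimal Python sketch of the kind of workflow competitors build. The file hockey_games.csv and its column names are invented for illustration; the real competition supplies its own datasets:

```python
# Hypothetical example: predict whether the home team wins from simple
# team statistics. "hockey_games.csv" and its columns are invented for
# illustration; the real competition provides its own data.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

games = pd.read_csv("hockey_games.csv")

# Features: per-game team stats; target: 1 if the home team won, else 0
X = games[["home_shots_on_goal", "away_shots_on_goal",
           "home_faceoff_win_pct", "away_penalty_minutes"]]
y = games["home_team_won"]

# Hold out 20% of games to test predictions on data the model hasn't seen
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print(f"Accuracy on held-out games: {model.score(X_test, y_test):.2f}")
```

Real entries use richer features and more careful modeling, but the loop - load data, fit a model, predict, evaluate - stays the same.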
🚀 Stay Inspired
🚨 The AI data crisis: Why the internet is running out of training fuel
AI companies are hitting a wall - and it's not made of code. According to a study by the MIT-led Data Provenance Initiative, 5% of all data (and 25% of data from the highest-quality sources) in three commonly used AI training datasets has been restricted.
The implications? MIT graduate student Shayne Longpre warns we're "seeing a rapid decline in consent to use data across the web that will have ramifications not just for AI companies, but for researchers, academics and noncommercial entities."
Here's what's driving the crisis:
Copyright pushback is accelerating. Sites like Reddit and Stack Overflow have begun charging AI companies for access to their data, and publishers including The New York Times have taken legal action, suing OpenAI and Microsoft for copyright infringement over the use of news articles to train models without permission.
The "data wall" is real. AI executives worry about the point at which all training data on the public internet has been exhausted, with the rest hidden behind paywalls, blocked by robots.txt files, or locked up in exclusive deals.
Synthetic data can't save the day. While some companies believe they can use AI-generated synthetic data to train their models, many researchers doubt that today's AI systems are capable of generating enough high-quality synthetic data to replace the human-created data they're losing.
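A quick aside for students who want to see the "data wall" up close: robots.txt is just a plain-text file at a site's root that tells crawlers which pages they may fetch. This small Python sketch, using only the standard library, checks whether some well-known crawler user-agents are allowed on a page (example.com is a placeholder for any real site):

```python
# Check whether a site's robots.txt allows given crawlers to fetch a page.
# Standard library only. GPTBot is OpenAI's crawler, CCBot is Common
# Crawl's, and Googlebot is Google's; example.com is a placeholder.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.set_url("https://www.example.com/robots.txt")
parser.read()  # download and parse the site's robots.txt

for agent in ["GPTBot", "CCBot", "Googlebot"]:
    allowed = parser.can_fetch(agent, "https://www.example.com/articles/")
    print(f"{agent}: {'allowed' if allowed else 'blocked'}")
```

When a publisher adds a "Disallow: /" rule for an agent like GPTBot, can_fetch returns False for every page - the kind of restriction the MIT study found spreading rapidly across the web.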
The deeper issue? The Data Provenance Initiative's audit of over 1,800 text datasets found license omission rates of more than 70% and error rates of more than 50% on popular dataset hosting sites, highlighting a crisis in misattribution and informed use of the datasets driving many recent AI breakthroughs.
What this means for students: The AI landscape is shifting from "who can access the most data" to "who can use data most effectively and ethically." Students learning AI today need to understand not just how to build models, but the legal, ethical, and practical challenges around data sourcing, licensing, and responsible use.
This isn't just a technical problem - it's about understanding the foundations of how AI actually works and the real-world constraints shaping its development.
Read more: Data that powers AI is disappearing fast
📚 ChatGPT scored dead last on the exam. Three years later, it got a B
Three years ago, a Vanderbilt University computer science professor decided to run an experiment. He gave ChatGPT the same final exam he gave his students, creating a fictitious student named "Glen Peter Thompson" (GPT, get it?). The AI chatbot scored dead last, performing well below the mean. His students were relieved; they were significantly more prepared than ChatGPT for computer science jobs.
This spring, he repeated the experiment. ChatGPT's "younger sister" Gwen scored in the low 80s. The improvement happened in just three years, and it reveals the urgent challenge facing both students and educators: AI isn't replacing human capability yet, but the gap is closing fast.
When ChatGPT launched in late 2022, computer science major Max Moundus couldn't focus on his classes. He watched AI generate code in any language nearly instantaneously and spiraled into panic attacks: "Did everything I just learned over the past four years become obsolete?" By 2024, his fears seemed validated—Fordham University reported a one-third drop in computer science applications as coding jobs started disappearing.
But here's what's really happening in classrooms: a recent survey found 85% of undergraduates used generative AI for coursework in the past year. About half used it responsibly—for brainstorming, tutoring, and studying. But 19% admitted using AI to write full essays for them. As one student put it: "It makes it easier to do better."
The result? A trust crisis. AI detection tools are unreliable, sometimes flagging human-written work as AI-generated while missing actual AI use. Meanwhile, TikTok is flooded with students bragging: "I'm a senior at Stanford, and every single one of my essays has been written by AI." Professors are scrambling, assigning more in-class work and fewer take-home papers just to be sure the work is actually done by students.
English professor Leslie Clement at Johnson C. Smith University embraced AI, teaching students to use it responsibly, fact-check its outputs, and reflect on what they learn from the process. She created a course called "AI and the African Diaspora" and teaches students about tools beyond ChatGPT. Dan Cryer at Johnson County Community College sees things completely differently: he rates AI's benefit to humanities education as a "one or two out of ten," calling it actively harmful. His concern? Students are cheating themselves out of the learning process by using AI as a shortcut. He compares it to "bringing a forklift to the gym": sure, the weights move, but you're not building any muscle.
An MIT study backs up these concerns: researchers recorded brain activity of people using AI to write essays versus those using Google or their own brains. People using AI showed lower neural connectivity and engagement. One University of Minnesota student who interviewed her peers captured the tension perfectly: "We're constantly on Instagram, Snapchat, TikTok. There's always noise in our brains. Sometimes you don't have room to think. So even when you want to critically think, here's another outlet for you to not critically think."
Max Moundus is now an AI research engineer at Vanderbilt University, building AI tools for the institution. He realized his ability to understand and leverage AI was itself valuable: "My computer science knowledge wasn't obsolete. It was actually what enabled me to understand how to leverage this technology effectively."
This is a massive, unconsented experiment on an entire generation of students. As Fordham President Tanya Tetlow warns, doing nothing isn't an option, but neither is uncritical embrace: "Where we use AI as a tool to do important work better, it is responsible. Where we cede our judgment to technology without monitoring its accuracy, we have violated our own duties." If higher education gets this right, AI could supercharge learning. If we get it wrong, an entire generation might graduate without critical thinking skills, entering a job market AI has already eroded. The stakes couldn't be higher, and students learning to build with AI now, rather than just consume it, are the ones who'll determine which future we get.
Listen to the full story: NPR Sunday Story: AI in Higher Ed
🦄 Student spotlight
An app to practice mental wellness techniques
This week, we're highlighting a 7th grader from Seattle, Washington who recognized a gap that most adults miss: while schools are starting to teach about mental health, students often need ongoing support between those lessons.
During mental health awareness lessons at school, this student realized her classmates needed tools they could access anytime, anywhere - especially during moments when they're struggling but might not be ready to talk to someone.
So she built "Your Mental Wellness Journey" - a safe space for reflection, growth, and healing designed by a student, for students.
The app offers:
Daily mindfulness exercises and checklists to help students stay grounded
A personal journal for expressing thoughts and tracking their emotional journey over time
A private sanctuary available 24/7 where students can process feelings at their own pace
What makes this story remarkable? This Seattle 7th grader built this functional app after just 3 sessions in the October Foundations cohort at Flintolabs. Inspired by her school's mental health curriculum, she went from learning AI basics to creating a solution that could genuinely help her classmates - all in less than a month.
This is what happens when students don't just learn about important issues - they build solutions for them.
The app includes an important reminder: "If you're in crisis or need professional support, please contact a mental health professional or emergency services." It's a supportive tool for reflection and mindfulness, not a replacement for professional mental health care.
🔥 Ready to Move from AI Consumer to AI Builder?
🎁 Special Holiday Offer: $50 Off Your First Month!
The December cohort starts December 6th - just days away!
Use code HOLIDAY50 at checkout to get $50 off your first month's subscription. This special holiday gift is valid through December 31st, 2025.
Here's what makes this opportunity unique: spend just one hour every weekend for 6 months, and earn 3 college credits from an accredited university while building real AI applications.
That's one hour per week to:
Develop hands-on AI skills through actual projects, not lectures
Earn transferable college credit that counts toward your degree
Build a portfolio of real work that demonstrates capability
Move from AI consumer to AI creator
Our November cohort students have already built their first AI applications - from mental wellness apps to AI artists to problem-solving tools. They're not just learning about AI; they're using it to create solutions for challenges they care about.
Spots are limited - we cap each class at 20 students to ensure personalized attention.
Our program has a 5-star rating with reviews from both students and parents. If you have questions before signing up, email us at [email protected].
Found this valuable? Forward this newsletter to other high schoolers and parents who want to be informed about AI trends and what is needed to prepare for an AI-driven future. Every student deserves the chance to build real skills before college.
You're receiving this newsletter because you expressed interest in Flintolabs or crossed paths with our community. If you believe you received this in error, please feel free to unsubscribe using the link in this email.
