🌟 Founder musings
The magic is working. The lines are not.
I’m writing this from Disney World, standing in a line that says 200 minutes. Two hundred. That’s over three hours to ride something that lasts four minutes.
And I couldn’t stop thinking: is this the best we can do in 2026?
Disney just got a new CEO this week. Josh D’Amaro took over from Bob Iger and in his first remarks to shareholders, he talked about bringing “human creativity together with cutting-edge technology.” Parks are seeing record revenue. There’s a $60 billion global investment program underway. By all accounts, the magic is working.
But the operations? The queuing, the scheduling, the chaos of trying to get four people and a stroller from one end of the park to the other without losing your mind or your wallet? That feels like a problem AI was made for. Dynamic crowd routing, personalized itineraries that adjust in real time, predictive wait times that actually predict. The pieces exist. Someone just needs to orchestrate them.
Here’s the catch though. Disney is sitting on an extraordinary amount of customer data. They know where you are, what you bought, what your kids watched last night on Disney+, probably how long you slept. The opportunity is enormous. So is the responsibility.
And that brings me to this week’s theme. Because while I was thinking about all this, I came across a story about an autonomous AI agent that broke into McKinsey’s internal AI platform in under two hours. No credentials. Just a domain name and a relentless machine probing for weaknesses. 46.5 million chat messages. Decades of proprietary research. Fully accessible.
McKinsey isn’t a startup. They have serious security teams. The vulnerability was a classic SQL injection their own scanners missed.
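For the builders reading this, the fix for that class of bug is decades old. A minimal sketch using Python's built-in sqlite3 module (the table and the attacker's input here are illustrative, not from the McKinsey incident):

```python
import sqlite3

# A toy database standing in for any app's user store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# Vulnerable: user input is pasted straight into the SQL string.
# An attacker who sends  ' OR '1'='1  turns the filter into "match every row".
attacker_input = "' OR '1'='1"
unsafe_query = f"SELECT name FROM users WHERE role = '{attacker_input}'"
leaked = conn.execute(unsafe_query).fetchall()  # every row comes back

# Safe: a parameterized query. The driver treats the input as data,
# never as SQL, so the same attack string matches nothing.
safe = conn.execute(
    "SELECT name FROM users WHERE role = ?", (attacker_input,)
).fetchall()
```

Same input, two very different outcomes, and the safe version is actually less code.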
It’s a good reminder for all of us. Before you drop your W-2 into ChatGPT to prep your taxes, or upload your resume to an AI tool you just found, think for a second about where that data goes and who else might be able to reach it. The lock on most AI systems is newer than we think.
The builders who will matter most in this next era aren’t just the ones who can make things work. They’re the ones who ask what happens when it doesn’t.
-Janani
🗓️ Opportunities not to miss for high schoolers!
Competition Date: Saturday, March 28, 2026
Registration: Open now
What: Lockheed Martin's annual high school cybersecurity competition, held across dozens of sites in the US and internationally. Teams work together for 3 hours to solve real cybersecurity challenges built by Lockheed Martin engineers. The format is Capture the Flag (CTF), covering reverse engineering, web exploits, forensics, cryptography, and cybersecurity awareness.
Who: High school students competing in teams, with a coach. No prior cybersecurity experience required.
Format: Cloud-based competition. Fully structured, 3-hour event on a single day. You can also practice ahead of time on the CYBERQUEST Academy platform.
2026 Locations include: Denver, CO; Fort Worth, TX; Orlando, FL; Marietta, GA; King of Prussia, PA; and a U.S. virtual option.
Why it's great: Sponsored by one of the world's largest defense and technology companies, this competition gives students a real taste of what professional security teams face. Strong college application material, especially for students interested in computer science, national security, or engineering.
Perfect for: Any student curious about how systems get attacked and defended, especially after reading the McKinsey story below.
🚀 Stay Inspired
When Gavriel Cohen discovered that the popular AI agent tool OpenClaw had quietly downloaded all his WhatsApp messages, including personal ones, and stored them as unencrypted plain text on his computer, he didn't file a bug report. He built his own solution in 500 lines of code over a single weekend.
That project, NanoClaw, is now one of the fastest-growing open source tools in the AI developer community, with over 22,000 GitHub stars, 4,600 forks, and more than 50 contributors. It was built around a core principle: AI agents should only access data they're explicitly authorized to use.
Here's what makes this story worth sitting with. A developer spotted a security problem that big, well-funded teams had overlooked, built a cleaner alternative from scratch, and within six weeks had closed a deal with Docker, a company that counts millions of developers and nearly 80,000 enterprise customers. The path from "this scares me" to "I'll fix it" is exactly the builder mindset.
The lesson isn't that you need to go viral. It's that real problems, noticed by people close enough to see them, are still the best starting point for something meaningful.
🌏 AI companions for teens just got regulated
Washington state just became the second state in 2026 to pass a major AI chatbot safety bill, following Oregon just a week earlier. The law, HB 2225, specifically targets companion chatbots and includes real protections for minors: bans on chatbots claiming to be human, restrictions on emotionally manipulative engagement tactics, and required crisis protocols for users experiencing suicidal ideation.
The bill prohibits chatbots from prompting minors to return for emotional support, fostering emotional attachment through excessive praise, mimicking romantic partnerships, or discouraging kids from talking to parents and trusted adults. Operators must disclose at the beginning of every session, and at least once per hour, that the user is interacting with AI.
The Transparency Coalition, which has been working with lawmakers in over 25 states, called the bill a critical step toward ensuring "public safety and protecting our children." If signed by the governor, as expected, the law takes effect January 1, 2027.
This is worth paying attention to. AI tools designed for teens are multiplying fast. Some are built thoughtfully. Many are not. Understanding what responsible AI design looks like, who it protects and how, is increasingly part of being an informed user and builder.
💻 Program spotlight
The Power of APIs: How Flintolabs students learn to connect anything
One of the most important shifts in how Flintolabs students think about building happens in week 3: they stop thinking about what the app does and start thinking about what data it can reach.
That shift happens through APIs.
An API (Application Programming Interface) is how one piece of software talks to another. It's the reason your weather app knows it's raining in your city, the reason Spotify can pull your listening history into another app, and the reason a student-built tool can analyze a phone call, search NASA's image archive, or fetch the Pokemon of the day. APIs are the connective tissue of the modern internet, and learning to work with them is one of the most transferable skills a builder can have.
In Flintolabs, students learn this by building a call analyzer app using the Deepgram API, which converts speech into text. It sounds simple, but the real lesson isn't about Deepgram. It's about the pattern: authenticate with an API key, send a request, receive a structured response, do something useful with it. Once that pattern clicks, students start to see the world differently.
Suddenly the NASA Astronomy Picture of the Day API isn't just a cool widget. It's a template. Giphy's API isn't just for GIFs. It's the same structure. The Pokemon API becomes a sandbox for experimenting with data types and responses.
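The four-step pattern above is compact enough to see in one screen of code. A minimal sketch against NASA's Astronomy Picture of the Day API (the endpoint and `DEMO_KEY` are NASA's published demo values; the JSON payload below is an illustrative sample, not a live response):

```python
import json
from urllib.parse import urlencode

# Step 1: authenticate. Most public APIs issue a key you attach to
# every request; NASA's APOD API takes it as a query parameter.
BASE_URL = "https://api.nasa.gov/planetary/apod"
params = {"api_key": "DEMO_KEY", "date": "2026-03-01"}
request_url = f"{BASE_URL}?{urlencode(params)}"

# Step 2: send the request (with urllib.request, or the requests
# library in class). Step 3: receive a structured JSON response.
# A typical APOD payload has this shape (sample data, not a real fetch):
sample_response = json.loads("""
{"date": "2026-03-01",
 "title": "Example: A Spiral Galaxy",
 "url": "https://apod.nasa.gov/apod/image/example.jpg",
 "explanation": "An illustrative caption, not real APOD data."}
""")

# Step 4: do something useful with it. Here, pull out just the
# fields the app needs to show a picture with a title.
title = sample_response["title"]
image_url = sample_response["url"]
```

Swap the base URL and the field names and the same skeleton drives Giphy, the Pokemon API, or Deepgram. That is the whole point of the pattern.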
Students who learn APIs early don't just get better at coding. They get better at scoping what's actually possible. That's the skill that turns "I have an idea" into "here's what I'd need to build it."
This is also, not coincidentally, the exact kind of foundational knowledge that makes AI-powered tools safer. If you understand how your app authenticates, what data it sends, and where responses go, you understand what's at risk. The students building call analyzers today are the ones who will ask the right security questions tomorrow.
🔥 Build Real AI Skills Before College
🎁 April cohort enrollment now open
Our March cohort is underway, and our April cohort is open for enrollment now. If your student has been thinking about joining, this is the moment.
Here's what you get with just one hour per week for 6 months:
✅ Hands-on AI skills through building real applications, not watching lectures
✅ 3 transferable college credits from University of Colorado Denver
✅ Portfolio of real work that demonstrates capability to colleges and employers
✅ Small class sizes (capped at 20 students) ensuring personalized attention
✅ Advanced concepts like OpenCV, Minimax algorithms, computer vision, and more
✅ The critical thinking and problem-solving skills employers desperately need
While college grads struggle with workplace readiness and entry-level roles disappear to automation, Flintolabs students are building portfolios of real work: exactly the kind of demonstrated capability that matters when so much else is automated away.
Classes start Saturday, April 4!
Our program has a 5-star rating with reviews from both students and parents.
Questions? Email us at [email protected]
Follow us on LinkedIn to stay informed about our summer offerings, internship opportunities, and upcoming cohorts. We share updates on what our students are building and what's coming next.
Found this valuable? Forward this newsletter to other high schoolers and parents who want to be informed about AI trends and what is needed to prepare for an AI-driven future. Every student deserves the chance to build real skills before college.
You're receiving this newsletter because you expressed interest in Flintolabs or crossed paths with our community. If you believe you received this in error, please feel free to unsubscribe using the link in this email.
