Perils and Opportunities of ChatGPT: A High School Perspective

By Katherine McKean, Junior and President of her High School AI Club 

When ChatGPT first showed up in late 2022, it felt like something out of science fiction—amazing, a little scary, and maybe a little too good to be true. Two years later, I see it in group chats, late‑night panic essays, and even the code some students turn in for computer class. As president of our high school AI Club, I’ve watched ChatGPT get woven into the fabric of high school life. It offers opportunity. It raises questions. And sometimes it makes you do a double‑take at what “student work” really means these days.

For many teens, ChatGPT means faster writing, clearer drafts, and a safety net when deadlines loom. For educators, it has triggered concern, policy changes, and even outright bans. From where I sit, it’s complicated. ChatGPT isn’t just a tool. It’s also a mirror. And what we decide to do with it might teach us as much about ourselves as about AI.

what ChatGPT gets right: help, clarity, and speed

Not long ago, one of my classmates spent fifteen minutes pacing around her chemistry homework because she couldn’t articulate a lab‑report discussion. Then she opened ChatGPT, asked it to re‑explain the prompt in plain English, and suddenly she was writing. The report wasn’t perfect—but the thoughts were hers, helped along by easy‑to‑understand guidance. Sometimes that’s the difference between getting homework done and leaving it blank.

Peer‑reviewed research supports what many students feel about ChatGPT’s benefits. A 2025 paper reviewing dozens of studies found that ChatGPT and similar tools “meet students’ unique learning needs, foster autonomy and motivation, and help students apply knowledge to real‑life problems.” [Nature](https://www.nature.com/articles/s41599-025-04787-y) In other words: when used thoughtfully, generative AI can function like a tutor or study‑aid, rather than a shortcut — helping students clarify confusing material or draft ideas for essays and projects.

In our club, we’ve used ChatGPT to brainstorm ideas for creative writing, design simple coding projects, and simulate debates. It can help students with language difficulties draft better‑structured writing. It can give someone stuck on an outline a starting point. For students balancing sports, jobs, family, and school, it can feel less like cheating and more like a lifeline.

the flip side: mistakes, misinformation, and over‑reliance

Still, ChatGPT is not perfect. One risk is that some students trust it too much. Another is that it sometimes hallucinates—makes up facts, invents sources, or creates errors that look polished. A recent review of the effects of AI in education documented concerns about inaccuracy, bias, and decreased independent thinking when students over‑rely on tools like ChatGPT. [Taylor & Francis Online](https://www.tandfonline.com/doi/full/10.1080/0144929X.2024.2394886)

Imagine a student using ChatGPT to draft a history essay. If the AI invents a dubious quotation or misdates an event, and the student doesn’t check, what looks like a decent paper actually becomes misinformation. Worse, if enough students treat AI output as gospel, the class ends up in a collective echo chamber of unchecked errors.

Some studies raise alarm that frequent reliance on ChatGPT may erode critical thinking and knowledge retention. [Nature](https://www.nature.com/articles/s41599-025-04787-y) If you’re always using AI to summarize or explain, you may never build the mental muscle to read, reflect, or write on your own.

academic integrity: cheating or just new tools?

One of the biggest controversies is whether using ChatGPT counts as cheating. In a survey of teachers, a quarter said they had caught students submitting ChatGPT‑generated work. Some school districts responded by blocking access or banning generative AI entirely. [National Education Association](https://www.nea.org/nea-today/all-news-articles/chatgpt-enters-classroom-teachers-weigh-pros-and-cons)

Across colleges, the problem seems to be growing. A 2025 investigation by The Guardian found thousands of proven cases where university students used AI tools inappropriately — a sharp rise over previous years. [The Guardian](https://www.theguardian.com/education/2025/jun/15/thousands-of-uk-university-students-caught-cheating-using-ai-artificial-intelligence-survey)

But not everyone agrees the tool is the problem. In a well‑known piece from Stanford Graduate School of Education, researchers argued that the increase in cheating isn’t necessarily due to AI. Instead, they note: “Students have always found shortcuts.” What has changed is how visible those shortcuts have become. [Stanford Education](https://ed.stanford.edu/news/what-do-ai-chatbots-really-mean-students-and-cheating) If that’s true, banning AI might push misuse underground — not stop it.

equity and access: who gets left behind?

Generative AI like ChatGPT can level the field — when a student has reliable internet, a device, and an understanding of how to write prompts. But many don’t. In some districts, school‑issued devices are old, networks are slow or blocked, and not every student knows how to frame a good question. That creates a gap between those who can easily tap the AI assistant and those who can’t.

Teachers and administrators trying to respond to AI often lack a plan. A recent survey of over 100 high‑school and college administrators found that most schools still don’t have official AI policies. [arXiv](https://arxiv.org/abs/2403.15601) If schools can’t guarantee equitable access — to devices, training, or guidance — then using ChatGPT might reinforce existing inequities rather than fix them.

teacher workload and burnout: adding AI doesn’t always help

For teachers, ChatGPT isn’t necessarily a blessing. Many find themselves rewriting lesson plans, updating syllabi, or evaluating work more carefully to avoid plagiarism. [National Education Association](https://www.nea.org/nea-today/all-news-articles/chatgpt-enters-classroom-teachers-weigh-pros-and-cons) Some administrators are slowly rolling out professional development, but many classrooms remain under‑prepared for widespread generative AI use.

In interviews with educators, the recurring requests are for time, training, and clarity. Teachers want to know what responsible AI use looks like. They don’t want to spend their evenings chasing down AI‑generated papers. Without support, many default to banning or restricting AI access altogether. That may solve short‑term problems — but it doesn’t help anyone learn how to use the tools responsibly.

rethinking assignments and assessment

If students can use ChatGPT to get rough drafts, generate ideas, or outline arguments — maybe school assignments need to evolve too. Some educators suggest shifting toward in‑class writing, oral presentations, project‑based assessments, or portfolio work. [ACM Digital Library](https://dl.acm.org/doi/10.1145/3613904.3642785) These formats place priority on understanding, creativity, and critical thinking — skills AI struggles to duplicate reliably.

In our AI club, we proposed just such a change: a mock‑history project where students used AI to draft a perspective, then rewrote and critiqued it using primary sources. It forced everyone to engage, think, and reflect — rather than just accept what AI said. For many, it felt more like actual learning than the usual five‑paragraph essay ever did.

supporting AI literacy instead of banning AI

Banning ChatGPT entirely is one approach — but it misses the chance to teach AI literacy: how to prompt well, how to fact‑check, how to edit, and how to use AI ethically. In a 2024 editorial exploring AI in K–16 education, scholars argue that inclusion and education are better tools than prohibition. [CITE Journal](https://citejournal.org/volume-23/issue-1-23/editorial/editorial-chatgpt-challenges-opportunities-and-implications-for-teacher-education)

Imagine a class where students discuss bias, hallucinations, and limitations of generative AI. A class where they learn to treat AI output like a rough draft — helpful, but not authoritative. That’s not magic. That’s preparation.

what this all looks like for students

From where I sit in the Bay Area, high school feels more chaotic than ever. Some classmates treat ChatGPT like Wikipedia with personality. Others avoid it completely, worried it’ll get them in trouble. Some use it responsibly — as a brainstorming tool, a grammar helper, a tutor — while still doing their own thinking. Others… well, they disappear for an hour, then come back with polished essays that smell like they were copy‑pasted from a robot.

In our club, we lean toward caution plus curiosity. We use ChatGPT to explore coding ideas, improve grammar, and generate art prompts. But we always say: show your work, cite where AI helped, and treat AI like a tool, not a shortcut. It doesn’t always make things easy — but it keeps things honest.

a few stories from the real world

Last spring, a student at a university in the UK was caught using generative AI for multiple essays — and was expelled. The case, covered in UK higher‑education reporting, escalated quickly after an AI detector flagged the work. Fake quotes, unusual syntax, and suspicious consistency gave it away. [The Guardian](https://www.theguardian.com/education/2025/jun/15/thousands-of-uk-university-students-caught-cheating-using-ai-artificial-intelligence-survey)

Another high school in California tried a different approach: during final exams, students had to write by hand. Teachers asked for scanned pages together with a short reflection about how they used any AI assistance. Some students complained it was unfair. Others said it forced them to think instead of copy. It wasn’t perfect. But it showed another way.

this isn’t a “fix it now” moment — it’s a growing‑up moment

ChatGPT and tools like it aren’t going away. If anything, they’re just getting better. That doesn’t mean we should panic. It means we should learn. For schools, that means building policies, supporting teachers, teaching students AI literacy. For students, it means using AI responsibly — like a calculator for your brain, not a crutch for your effort.

For parents reading this: your students are growing up in a world where homework might look different than it did for you. Essays might be drafted by AI. Projects might start with a prompt. Tests might evolve. What matters is not just what they submit — but how they think. If we treat AI as a tool for thinking, not a shortcut from it, maybe we’ll come out of this smarter than we went in.

If you’re curious how students like me are leading that change, check this out: How to Start a High School AI Club in 6 Easy Steps.