Artificial intelligence (AI) is undoubtedly one of the biggest technological breakthroughs since the birth of the internet, and while AI has made giant strides in the last few years, experts say we are just getting started. To put it in perspective, we are at the same point with AI as we were with the internet when it first arrived in many of our homes in the 90s.
Over the last 30 years, we’ve evolved alongside the internet, learning its capabilities, opportunities, and pitfalls – and now it’s time to do the same with AI.
With children as young as 7 already embracing AI, many parents and teachers are concerned about the future of this technology, and whether or not dependence on AI could be detrimental to kids’ education and development.
What is AI?
You could think of AI as a robot that you're training to solve puzzles. You start by showing the robot lots of puzzles, and it attempts to solve them with smart guesses. Just as we get better at something with practice, the robot guesses correctly more and more often – until it can solve almost any puzzle you throw at it.
However, when people talk about AI in the context of homework, they're usually talking about the generative AI tool ChatGPT. After the platform launched in 2022, people from all walks of life were amazed at ChatGPT's ability to hold natural conversations, answer questions, and generate creative content – including jokes and songs!
How might kids use AI for homework?
To give you an idea of its capabilities, school children could use AI to:
- answer questions and carry out research
- translate languages
- outline an essay
- write code
- break down and explain difficult concepts
- solve math problems and explain methodology
- check spelling and grammar
- provide writing prompts
- generate practice questions in preparation for exams
- design study plans
- convert text to speech for visually impaired students, and speech to text for hearing-impaired students
Why are parents concerned about kids using AI for homework?
Shortly after ChatGPT was rolled out, some schools, including all those in New York City, banned its use due to concerns over cheating and its negative impact on student learning. However, these decisions were reversed once educators gained a better understanding of the technology. Today, many schools and educational institutions recognize the opportunities offered by AI tools and encourage their responsible use.
Although it may be allowed in schools in certain circumstances, many parents are cautious about their kids using AI for their studies and might have the following concerns.
Kids might become over-reliant on AI at the expense of their creativity and critical thinking skills
Homework is meant to help students develop skills, reinforce knowledge, and practice concepts. So if AI does too much of the work, there's a risk that children won't think for themselves and will miss out on valuable learning opportunities.
Although this is a valid concern, it's really a question of how AI is used rather than whether it's used. With clearly defined rules around its use, AI can foster creativity and exercise critical thinking skills. For example, a creative writing student might use AI to generate an image or a writing prompt to spark imagination for a short story or poem.
Using AI is cheating
A student who uses an AI tool like ChatGPT to complete their homework could be considered to be cheating, depending on how they use it and their school's policies.
For example, if a student uses AI to generate an entire essay or answer questions, and then submits the work as their own without stating they’ve used AI, that would be considered plagiarism.
Teachers set homework so that children can develop their understanding, critical thinking, problem-solving, and writing skills outside of class. Excessive or irresponsible use of AI can undermine all of this.
That's why many educational institutions now explicitly distinguish between responsible AI use and cheating.
AI might provide wrong or misleading information
When talking about ChatGPT’s limitations, its creators OpenAI admit that it “sometimes writes plausible-sounding but incorrect or nonsensical answers,” which isn’t exactly ideal for students who want to pass their classes!
ChatGPT’s authoritative voice makes it sound like it knows what it’s talking about, but it’s important to know where it gets its information. The AI chat tool relies on data it was trained on and statistical predictions rather than reasoning or accessing real-time knowledge.
This means it can generate information that is:
- Outdated. ChatGPT’s “built-in” knowledge is based on data available up to a certain point. For anything that happened after this, the tool would have to pull live information from the web.
- Biased or inaccurate. ChatGPT was trained on large amounts of publicly available information online – and we all know we shouldn’t trust everything we read on the internet!
- Just plain wrong. AI chat tools like ChatGPT generate text based on the data they were trained on – they don't “understand” it the way humans do. This can result in responses that sound plausible but are completely false or nonsensical.
While AI tools are undoubtedly becoming more capable, students should always verify responses against a reliable source (you can ask ChatGPT to provide its sources!).
Kids use technology too much as it is – AI just adds to the screen time
Balancing screen time with learning is a hot topic in education and parenting – and with kids becoming more reliant on AI tools, we could see time on devices go up.
But, as with other forms of technology, AI should be seen as a resource to be used in conjunction with books and other tech-free tools and materials. Some parents use a parental control tool like Qustodio to help their children develop a healthier relationship with AI and technology in general.
AI platforms will collect my child’s personal information
To learn and improve, AI requires data – lots of data – and it sometimes gets it by tracking usage and collecting sensitive information the user has entered. This is particularly worrying for parents wanting to protect the privacy of their children online.
Most AI platforms understand these concerns and have measures in place to protect user data. For example, according to OpenAI’s privacy policy, ChatGPT:
- anonymizes data in user conversations by removing personally identifiable information (PII).
- doesn’t request or store sensitive information like names, addresses, or other PII.
- allows users to opt out of having their conversations used to improve the model.
- has age restrictions. To maintain compliance with privacy regulations like COPPA, ChatGPT users must be 13+, and users aged 13 to 18 must have parental consent (although, with no age verification process, this can easily be ignored).
- encrypts data shared in conversations to protect it from unauthorized access.
- stores data for a limited time only.
Whichever AI platform your child uses, we recommend reading its privacy policy to get a better understanding of what it does with your kid’s data and to set your preferences.
Using AI for homework: 5 healthy AI habits
To start you and your child on the right path, here are 5 healthy AI habits:
1. Use AI as a supplement, not a substitute
No matter what some folks say, AI hasn't replaced old-fashioned brain power – and it can't. So rather than asking for answers to homework problems, kids can use AI to better understand concepts before tackling the problems themselves.
2. Ask better questions
The answers AI generates are only as good as the questions it’s asked – giving kids the opportunity to practice formulating clear and specific queries. Also, thoughtful follow-up questions can lead to deeper understanding.
3. Verify with trusted sources
While they might communicate with confidence, AI chat tools’ answers can sometimes be outdated, biased, inaccurate, and just plain wrong!
So instead of blindly taking AI’s answers as fact, we can help our kids practice their cross-referencing skills with information from reliable and trusted sources.
4. Play with AI together
Sit down with your child and spend time experimenting with AI – maybe by creating funny images or songs together.
Not only will it be a chance for you both to learn more about AI capabilities, but it will also help your child feel more comfortable coming to you with their concerns about AI and its potential risks.
5. Set fair limits
AI can be addictive and easy to overuse. As with other types of technology, it’s essential to establish a few ground rules for AI. Parents can use a parental control tool like Qustodio to block AI apps and websites from being opened, or set appropriate time limits for their usage.
With AI developing at such a rapid rate in recent years, it’s understandable that many people, especially parents, look at it with suspicion – and even fear. However, AI has the potential to be an incredible educational tool that can help our children on their learning journey. By teaching kids to use AI responsibly, we can not only help them pass classes and get better grades, but also better prepare them for an AI-powered future.