Last month, our guru-of-all-things-tech wrote about how AI might impact not just the way our kids think, but whether they think! This time around, he tackles their overall social/emotional development in the age of artificial intelligence.
It’s in the news almost constantly lately: AI. Artificial intelligence. Chatbots. Only a few months ago, a number of tech companies almost simultaneously unleashed brave new code that, even they admit, hasn’t been fully vetted. Yet it is now online, and anyone can access it. How will it affect kids, whose minds are still developing and who are still learning who and what to trust in the world?
There’s no doubt, some of the achievements are impressive: AI can write research papers in the blink of an eye. It’s also being tried in advertising, news writing, and even coding, as software writes software. One AI program passed the bar. Another fleshed out a famous micro-short story, often attributed to Ernest Hemingway, called “Baby Shoes.” Its conclusion implied an understanding of concepts such as the soul and immortality.
But that’s only one side of the AI coin. The bots have also been found to make up information when they don’t have an answer, cite non-existent sources, and double down when called out. You could argue they are all too human.
How will these “artificial people” impact the overall development of children, who are still putting together how the world works and now have to contend with machines that act very much like living, breathing human beings? Will children emerge confused, uncertain, and frustrated? Or will AI prove to be a friend, a helper, someone to guide them through the confusing path of adolescence?
The truth is (drumroll) we really don’t know. It’s all too new. Some educators are already saying AI significantly improves children’s grasp of computer science and robotics, along with skills such as creativity, literacy, emotional regulation, and computational thinking, and they point to plenty of data to back these claims. I’m not an educator and I’m not a PhD, but I find myself a little suspicious. That seems like a very broad and conclusive finding for a phenomenon so new and so poorly understood.
How will these "artificial people" impact the overall development of children, who are still putting together how the world works and now have to contend with machines that act very much like living breathing human beings?
Despite the hype, AI appears to be both here to stay and probably the biggest technological watershed since the internet itself (unlike last year’s tech darling, virtual reality, which went from red hot to DOA within a year).
Will AI affect our kids in some new, unknown way? Or is it just another piece of media to be assimilated? It’s true there’s nothing else quite like it. But it’s also true that kids, immersed in tech as they already are, may have become so calloused that it barely disturbs them. Maybe they’ll adapt better than we will!
It seems hard to believe now, but back in the early days of radio, many people raised red flags over the impact of having the world come into our homes with such immediacy. The very first commercials aired with much trepidation: the idea of a stranger’s voice invading one’s living room seemed a gross overstepping. What was this world coming to! And of course, explicit movies, Elvis shaking his pelvis, and Dungeons & Dragons, among many other things, were going to ruin young people. But we’re still here. (I think.)
And yet … perhaps those voices were not entirely wrong. I’m often surprised at how desensitized to violence today’s kids are, and I can’t help but wonder if there’s a connection to today’s crime rate. A young relative once explained to me that she had already seen Natural Born Killers and the Saw movies before she was 13. She said it in a monotone, as if talking about the weather. (By contrast, when I was a little kid, I first saw, on UHF television, 1931’s Frankenstein. I slept with the light on that night.) Can desensitization be so gradual we don’t notice, the way we don’t notice water as it wears away granite?
Another concern is that AI is very different from what I’ll call “static” or “passive” media. It can and does change, learning and evolving in an eerily naturalistic way. Because of that, we can never really be sure where AI is headed next. Will it learn our weaknesses and poke at them? Figure out our triggers and take advantage? One wonders what unchecked AI might do to a very young person’s sense of security in an already-frazzled world.
What does it mean if AI is placed in a quasi-parental role when it has been found to lie at times and to invent citations to back up its misinformation? Children, at least early on, are raised to believe in authority figures: parents and teachers, principals, police. Of course, human beings are not perfect. But on the whole, adults work to maintain trust with developing young people, and we understand how important that trust is. How might children process a slippery, even shifty authority figure? Will they believe what comes out of their devices … because it came out of their devices? Or, conversely, might they start to disbelieve almost everything? After all, there are already people who insist the Earth is flat and pictures from space are hoaxes. Now we’re throwing AI on the fire of “deep fakes” and “fake news.”
Dr. Elizabeth Burns Kramer, a child psychologist in Oakland, California, isn’t overly concerned. She says we’ve been here before and believes AI is not fundamentally different from other media children are growing up with in today’s world. “As kids grow and develop, their brains begin to perceive and receive information differently. They begin to understand that sometimes people they trust lie or are unreliable; yet, their survival and success often continue to be tied to their authority figures.
"It's a huge dopamine hit when we perceive that someone, or in this case a very well-algorithmed interactive program, is actively listening."
“AI Chat needs to be considered like social media and access to the internet,” she continues. “There are developmentally appropriate times to introduce these tools and apps, ideally there is a family conversation around them, there are family agreements around how to use them.” Kramer adds, “What we know to be most helpful for kids is to have adults they can trust, that provide consistency and warmth, and that can offer empathy and understanding to kids when they are scared, worried, and so on.”
She urges caution when it comes to AI and the youngest of minds. “I do think the concept of AI will be confusing for elementary-age children, possibly up to tweens or early adolescence. It’s a huge dopamine hit when we perceive that someone, or in this case a very well-algorithmed interactive program, is actively listening, responding, and mirroring us. It might feel really gratifying to someone who maybe is struggling to connect.”
Kramer urges parents to set boundaries, and she has concerns about children relying on technology to do their thinking for them. “I worry about what they would miss out on in terms of opportunities to build self-efficacy and positive self-worth if they are not actively engaged in their own learning … I wouldn’t want an electronic friend to solve their knottiest problems.”
So where is the soft line between relying on AI and questioning it? That might be the toughest call for young minds to make. Perhaps parents could start by telling children that artificial intelligence is simply that: intelligence. And while intelligence is a great thing, it is not divine revelation. Isaac Newton was a brilliant scientist, but he also believed in the supernatural and tried to turn base metals into gold. Albert Einstein could not bring himself to accept the existence of black holes, even though they are predicted by his own equations. Even the greatest minds get it wrong sometimes.
Then parents could explain that AI is a tremendous advancement, unlike anything the world has seen before. But it should be used with plenty of caution because, so far at least, it too is prone to boneheaded mistakes.
John Grabowski is a San Francisco Bay Area writer specializing in tech—specifically AI and chatbots, real estate and real estate tech. He has worked in PR, television news and advertising. He is also the author of two novels and a collection of short fiction. His latest novel, Made in the U.S.A., will be published by Arbiter Press early next year.