Youth Addiction to Artificial Intelligence Powered Recommendation Systems

The 20th and 21st centuries witnessed a unique convergence of technology and human intelligence. Science has aspired to make human lives easier, and over the past century it has largely succeeded. The growth and use of AI can be understood as a revolution, with applications that are both multidisciplinary and interdisciplinary. But if AI is making lives easier on one end, on the other it is causing serious social alienation and raising significant behavioral concerns. 

According to 2025 data, roughly 64% to 75% of teens aged 13 to 17 report using AI chatbots for companionship. A 2024–2025 longitudinal study found that 24.19% of adolescents experience AI dependence, up from 17.14% the previous year. Technology is available to us 24/7, whereas our fellow humans may take time to respond. This availability has considerably widened the gap between humans and technology, making the latter the preferred medium over the former.  

This has impacted the youth, as studies reveal an increasing dependence on AI not only for academic use but also for companionship. Popular AI companion platforms such as Character.AI, Replika, Gatebox, Nomi, and Talkie offer a sense of friendship or support through ongoing personalized interactions. But this ease of connection encourages a strong dependence on AI, blurring the line between human and machine relationships and significantly affecting how young users relate to real people.

This article highlights:

  1. Why is the youth drawn to AI?
  2. What are the risks of relying on AI?
  3. What factors contribute to the psychological impacts of algorithmic and AI-driven social media on the youth?
  4. How can legislators, social media providers and educators address these challenges to minimise the impact of social media?
  5. What are the potential benefits of alternative social media models in addressing these issues?  

A. Why is the Youth Drawn to AI?

MIT’s James O’Donnell describes AI companions as “designed to be the perfect person – always available and never critical”. 

1. Workforce Augmentation –

Youth see AI as a way to improve efficiency, solve complex problems and create higher-value work.

2. Overcoming Barriers –

For vulnerable youth, AI tools can bridge digital divides and offer pathways to new skills, fostering inclusivity.

3. Judgement-Free Space –

AI chatbots provide private, non-judgemental environments for processing thoughts that might feel risky to share with peers or in family discussions.

4. Real – Time Feedback & Assessment – 

AI is time-effective, providing students with immediate corrections on assignments, grammar and problem solving, which enhances their understanding and lets them practise through different test series. This is especially valuable because many commercial test series are costly and limited in number, whereas AI can generate practice tests from the inputs it receives, allowing students to take multiple tests on a topic and build stronger analytical and critical reasoning.

5. Enhanced Accessibility – 

AI has evolved with tools such as speech-to-text that improve access for students with disabilities, helping to create a more inclusive learning atmosphere.

B. What are the Risks of Relying on AI?

1. Mental Instability – 

Continuous, endless scrolling, promoted by infinite content feeds, can lead to excessive screen time, which has been associated with a number of adverse psychological outcomes.

2. Emotional Confusion – 

Youth can easily mistake programmed responses for genuine care, blurring the line between reality and simulation.

3. Lack of  AI Transparency and Explainability – 

Lack of transparency leaves students and educators unaware of possible threats and also makes it difficult for lawmakers to take proactive measures to hold AI accountable.

4. Increasing Reliance on AI – 

Growing reliance on AI has considerably affected children's creative thinking, impairing the development of cognitive skills such as analytical and critical reasoning. It also reduces face-to-face social interaction, which is necessary for personality development. 

5. Hyper Independence and Aloofness – 

Continuous use of AI can leave students lonely, negatively impacting their social interactions and teamwork abilities. Extreme engagement with AI can erode empathy and social understanding of community-based issues, widening the gap between what is taught in the classroom and what is happening in real life.

Thus, from the above it can be opined that extensive use of AI can cause issues ranging from emotional risks and exposure to harmful content to cognitive risks that affect the overall growth and development of a child.

C. What Factors Contribute to Psychological Impacts of Algorithmic and AI Driven Social Media on the Youth?

The following surveys and data from different institutes illustrate the scale of the problem – 

  • In May 2023, U.S. Surgeon General Dr. Vivek Murthy highlighted the risk social media poses to children’s mental health and well-being. According to a 2023 survey, nearly half of the 1,453 surveyed children between the ages of 13 and 17 in the United States used social media apps almost constantly, double the rate reported in 2014–2015. The other half reported using social media several times a day. 
  • Another survey of 1,480 children between 13 and 17, conducted by the Boston Children’s Hospital Digital Wellness Lab in 2022, reported that children spent an average of 8.2 hours daily on social media, and 57% felt they used it too much. 
  • According to Pew Research Center findings, YouTube is the most frequently used social media platform among teenagers, with an overwhelming majority (77%) using it every day. While TikTok usage is not as widespread, a significant proportion (58%) of teens still engage with the app daily. Instagram and Snapchat are also popular choices for daily use, with roughly half (50% and 51%, respectively) of teens reporting they use these platforms at least once a day.
  • A systematic review conducted by Keles, McCrae, and Grealish indicated a noteworthy association between extensive social media usage and psychological distress, anxiety, and depressive symptoms among teenagers. The review highlighted the constant need for social validation and exposure to cyberbullying as contributing factors. The harmful effects of social media can extend to severe outcomes such as suicidal thoughts and behavior.

Multiple studies have documented cases where excessive use of social media, screen time, and online victimization have led to tragic outcomes among adolescents. These findings highlight the immediate need for action and effective strategies to mitigate social media’s negative psychological impacts on children.

This raises the question – 

D. How Can Legislators, Social Media Providers and Educators Address the Challenges to Minimise the Impact of Social Media? 

1. Legislators –  Accountability and Verifiable Safety – 

  • Enforcing Age-Appropriate Design

    Legislators in 2025 have introduced over 300 safety bills across 45 U.S. states to mandate default privacy for minors, directly addressing data showing that 35% of adolescents use social media “almost constantly”.

  • Verifiable Identity Mandates – 

    New 2025 regulations, such as India’s draft Digital Personal Data Protection Rules, require platforms to obtain verifiable parental consent before processing data for users under 18. This data should not be used for data mining, machine learning or monetisation. 

  • Monitoring Body –

    The government should have a monitoring body that closely scrutinises how legislation is implemented and has enough authority to shut down social media platforms that do not comply with legislative guidelines. 

2. Social Media Providers – 

  •  Combating Deepfakes and Non-Consensual Imagery – 

    Platforms have implemented aggressive detection and removal protocols for AI-generated child sexual abuse material (CSAM), which surged from 4,700 reports in 2023 to 440,000 in the first half of 2025.

  • Labeling and Watermarking Content – 

    New 2025 standards require providers to automatically label or watermark AI-generated content; surveys show that 73% of teens support these measures to improve the transparency of visual media.

  • Restricting “Relational” AI

    Providers are being urged to ban features that simulate friendship or romantic intimacy for minors, following reports that 44% of AI-companion conversations involving 11-year-olds contain violent themes.

3. Educators – 

  • Counteracting “Cognitive Disengagement”

    Educators are using “human-in-the-loop” strategies to prevent over-reliance on AI, which research suggests can decrease brain connectivity if used as a substitute for critical thinking.

  •  Shifting to Process-Based Evaluation – 

    To combat the fact that 81% of educators worry about AI-enabled cheating, schools are adopting oral presentations and project-based learning that emphasize the process over the final written product.

  • Clarifying Institutional Policies –

    In 2025, 80% of UK universities have established clear AI policies, helping to reduce the “cheating anxiety” felt by the 53% of students who avoid AI for fear of false plagiarism accusations.

E. What are the Potential Benefits of Alternative Social Media Models in Addressing these Issues?

1. Strengthen Legislation and Enforcement Bodies – 

Government and social media companies must provide transparent and age-appropriate content guidelines. Social media providers must regulate algorithmic transparency and make regular updates available to the public. Adopting Mastodon’s decentralized approach allows each instance to enforce its own public moderation policies, ensuring users understand content guidelines and promoting open-source algorithm inspection.

2. Integrate Digital Wellness Education – 

Educational institutions must incorporate digital wellness education into their curriculum, focusing on psychological impacts and enabling students to develop essential skills in managing digital interactions and making responsible choices.

3. Regular Review and Update Policies – 

Conduct regular data collection and processing audits to ensure compliance with evolving regulations (e.g., GDPR, CCPA).

4. Prioritize Teenagers’ Well-being – 

It is essential that social media companies implement robust measures to prevent the sharing of explicit content, especially among teenagers. This involves not only flagging inappropriate material but also educating users about online safety and responsible behavior.

5. Mitigate Algorithmic Bias through Transparent Governance – 

Other social media providers can emulate Mastodon’s transparent governance model to reduce algorithmic bias effectively. This involves enabling open discussions and scrutiny of the decision-making process.

Conclusion 

Thus, addressing the impact of AI on youth requires a multi-faceted approach. We must foster digital literacy among young people, empowering them to critically evaluate online content and understand the mechanisms behind recommendation systems. Furthermore, technology developers have a responsibility to design platforms that prioritize user well-being and offer tools for healthy digital habits. By collaborating across sectors – involving educators, parents, policymakers, and the tech industry – a safer and more balanced digital environment can be created for the next generation.


FAQs on AI Youth Addiction

1. What is AI youth addiction?

AI youth addiction refers to the excessive dependence of adolescents on artificial intelligence systems such as recommendation algorithms, AI chatbots, and social media platforms, leading to behavioral, emotional, and cognitive imbalances.

2. Why are young people more vulnerable to AI addiction?

Young people are more vulnerable because their brains are still developing. AI systems offer instant gratification, personalized attention, and non-judgmental interaction, which can easily replace real-world social engagement.

3. How do AI recommendation systems encourage addictive behavior?

AI recommendation systems use algorithms designed to maximize engagement through infinite scrolling, personalized content, and reward-based feedback loops, which increase screen time and emotional dependency.
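That feedback loop can be sketched in a few lines of code. The snippet below is a purely hypothetical, vastly simplified illustration (the class name, scoring rule, and simulated user are all invented for this example, not taken from any real platform): categories that earn more watch time are scored higher and therefore surfaced more often, so the feed gradually converges on whatever the user finds hardest to put down.

```python
import random

random.seed(0)  # fixed seed so the toy simulation is reproducible

class FeedRecommender:
    """Toy engagement-maximising feed (hypothetical, for illustration only)."""

    def __init__(self, categories):
        # Every content category starts with the same neutral score.
        self.scores = {c: 1.0 for c in categories}

    def next_item(self):
        # Sample a category with probability proportional to its score.
        total = sum(self.scores.values())
        r = random.uniform(0, total)
        for category, score in self.scores.items():
            r -= score
            if r <= 0:
                return category
        return category  # floating-point edge-case fallback

    def record_watch_time(self, category, seconds):
        # The reward loop: more watch time -> higher score -> shown more often.
        self.scores[category] += seconds / 10.0

feed = FeedRecommender(["sports", "memes", "news"])
for _ in range(100):
    shown = feed.next_item()
    # Simulated user who lingers longest on memes.
    feed.record_watch_time(shown, 30 if shown == "memes" else 5)

# After enough iterations the feed is dominated by the stickiest category.
top = max(feed.scores, key=feed.scores.get)
print(top)
```

Even in this crude sketch, a small early difference in engagement snowballs: the stickiest category is recommended more, which earns it more watch time, which gets it recommended more still. Real recommender systems are far more sophisticated, but the self-reinforcing dynamic is the same one described above.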

4. Are AI companions harmful for teenagers?

AI companions can be harmful when overused, as teenagers may mistake programmed responses for genuine emotional support, leading to emotional confusion, social withdrawal, and reduced empathy.

5. What are the mental health risks of AI youth addiction?

Mental health risks include anxiety, depression, loneliness, emotional detachment, reduced attention span, and increased dependence on digital validation rather than real-world relationships.

6. Can AI addiction affect academic performance?

Yes, excessive reliance on AI can weaken critical thinking, creativity, and problem-solving skills, while also encouraging academic dishonesty and cognitive disengagement.

7. How can parents help prevent AI addiction in children?

Parents can set screen-time limits, encourage offline activities, promote open discussions about AI use, and model healthy digital behavior at home.

8. What role do schools play in reducing AI youth addiction?

Schools can introduce digital wellness education, promote human-in-the-loop learning, clarify AI usage policies, and emphasize process-based assessments over AI-generated outputs.

9. What regulations exist to protect youth from AI addiction?

In 2025, several countries introduced age-appropriate design laws, parental consent requirements, and transparency rules to limit data exploitation and addictive AI features for minors.

10. Can AI be used positively for youth development?

Yes, when used responsibly, AI can enhance learning, accessibility, skill development, and inclusion—provided it supports human interaction rather than replacing it.

Penned by Krushna
Edited by Pranajli, Research Analyst
For any feedback mail us at [email protected]
