As artificial intelligence (AI) becomes increasingly embedded in everyday life, children and young people are often among the earliest and most enthusiastic adopters. From learning tools and chatbots to voice assistants and creative platforms, AI is reshaping how people study, work and interact online.
Safer Internet Day 2026, observed on 10 February, highlights the importance of making informed, safe and responsible choices when using AI. This year’s theme, “Smart tech, safe choices: Exploring the safe and responsible use of AI,” reflects the growing need to balance rapid innovation with awareness, trust and digital responsibility.
India has emerged as one of the world’s leading adopters of AI, significantly outpacing global averages across both consumer and enterprise use. According to a Microsoft study, around 65% of Indians report having used AI, compared with a global average of 31%. Adoption is particularly high among millennials aged 25 to 44, with usage reaching 84%. AI-powered tools are now commonly used for answering queries, translation, student support, content creation and improving workplace productivity.
While this rapid uptake underscores India’s digital ambition, it also highlights the need to build greater awareness of how AI systems operate, how data is collected and used, and how individuals can protect themselves online. Safer Internet Day plays an important role in promoting practical digital hygiene, helping users benefit from AI without compromising safety or privacy.
Cybercrime Evolves Alongside AI Adoption
At the same time, the technologies driving innovation are also reshaping the cybercrime landscape. AI has lowered the barriers to entry for malicious activity, enabling individuals or small groups to carry out attacks that once required highly skilled teams. AI-supported toolchains now allow cybercriminals to scale operations quickly and efficiently, often with limited technical expertise.
Cybercrime ecosystems are also becoming more specialised. Different actors increasingly focus on specific functions such as reconnaissance, access brokering, lateral movement, monetisation and deception. Despite advances in automation, human vulnerability remains the primary attack surface. Attackers continue to exploit trust, urgency and authority, with AI amplifying these tactics through more convincing phishing, faster reconnaissance and rapid iteration of attack techniques.
According to FortiGuard Labs, cybercrime is entering a fourth industrial phase characterised by automation, integration and specialisation. Credential dumps are evolving into curated, “intelligent” datasets enriched with contextual and behavioural information. Dark web marketplaces increasingly resemble legitimate e-commerce platforms, offering customer support, reputation systems and escrow services, many of which are enhanced by AI. Fraud, money laundering and other illicit activities are also becoming more interconnected, creating resilient criminal ecosystems that are increasingly difficult to disrupt.
Best Practices for the Safe Use of AI
As AI becomes embedded in everyday tools, particularly chatbots and generative AI platforms, it is important to remember that many systems learn from the data users provide. Following a few essential safety practices can help individuals protect their personal information and digital identities.
Users should avoid sharing sensitive information such as passwords, banking details, home addresses or other personal identifiers with AI tools. Strong, unique passwords should be used for different accounts, with privacy settings reviewed regularly. Care should also be taken before uploading personal or family photos, as users may lose control over how images are stored or reused.
Critical thinking remains essential when consuming online content, especially as AI-generated scams and misinformation become more convincing. Important information, particularly relating to health, finance or legal matters, should always be verified with trusted experts or official sources. Transparency is equally important, and AI-generated content should be clearly labelled when shared publicly.
Where possible, users should rely on AI platforms recommended by reputable organisations, educators or parents, and take time to understand what data is collected and how it is used. Enabling two-factor authentication can provide an additional layer of protection against unauthorised access.
“Safer Internet Day 2026 is a reminder that no single company or individual can make the internet safe on their own,” said Vishak Raman, Vice President of Sales for India, SAARC, SEA and ANZ at Fortinet. “By working together, educators strengthening digital literacy, parents fostering open communication, technology providers building secure platforms, and government agencies collaborating with law enforcement, we can ensure the internet remains a space for growth, creativity and connectivity for everyone.”