
AI Dominates 2025 Cybersecurity Predictions

In 2025, artificial intelligence will dominate cybersecurity discussions, as analysts and security professionals weigh its implications for attackers and defenders alike.

Both adversaries and defenders will deploy artificial intelligence, but attackers will gain more from its use, according to Willy Leichter, CMO of AppSOC, a security and vulnerability management provider based in San Jose, Calif.

“We know AI will be used increasingly on both sides of the cyber war,” he told TechNewsWorld. “However, attackers will face fewer constraints because they are less concerned with AI accuracy, ethics, or unintended consequences. AI will enhance techniques like highly personalized phishing and the search for legacy vulnerabilities in networks.”

“While AI holds significant potential for defense, legal and practical constraints will slow its adoption,” he added.

Chris Hauk, a consumer privacy champion at Pixel Privacy, a site dedicated to online security and privacy guides, predicted that 2025 will be a year of AI versus AI, with defenders using AI to counter AI-powered cyberattacks.

“It will likely be a year of back-and-forth battles, as both sides use information from previous attacks to launch new offenses and defenses,” he told TechNewsWorld.

Reducing the Security Risks of AI

Leichter predicted that cyber adversaries would increasingly target AI systems. “AI technology significantly broadens the attack surface, introducing rapidly evolving threats to models, datasets, and machine learning operations systems,” he explained. “When AI applications are rushed from the lab to production, their full security implications won’t become clear until breaches inevitably occur.”

Karl Holmqvist, founder and CEO of Lastwall, an identity security company in Honolulu, shared a similar perspective. “The unchecked, widespread deployment of AI tools—often launched without robust security foundations—will bring severe consequences in 2025,” he told TechNewsWorld.

“Without adequate privacy safeguards and security frameworks, these systems will become prime targets for breaches and manipulation,” he warned. “This reckless approach to AI deployment will leave data and decision-making systems dangerously vulnerable, forcing organizations to prioritize foundational security controls, transparent AI frameworks, and continuous monitoring to address these escalating risks.”

Leichter asserted that security teams will need to take greater responsibility for safeguarding AI systems in 2025.

“This might seem obvious, but in many organizations, early AI projects have been led by data scientists and business specialists who often bypass traditional application security processes,” he explained. “Security teams won’t succeed if they try to block or slow down AI initiatives, but they must ensure that rogue AI projects are brought under the umbrella of security and compliance.”

He also emphasized that AI will further expand the attack surface for adversaries targeting software supply chains in 2025. “Supply chains have already become a major attack vector due to the reliance on complex software stacks that incorporate third-party and open-source code,” he noted. “The rapid adoption of AI enlarges this target, introducing new, complex attack vectors focused on datasets and models.”

“Tracking the lineage of models and ensuring the integrity of evolving datasets is a significant challenge,” he added. “Currently, there’s no effective way for an AI model to unlearn poisoned data.”
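As a minimal illustration of what tracking model and dataset lineage can look like in practice, the Python sketch below records a SHA-256 hash for every file in an artifact directory and later verifies that nothing has changed. The manifest format and file paths are assumptions for illustration; a hash check proves an artifact hasn't been altered since it was recorded, not that the original data was clean.

```python
# A minimal sketch of recording artifact lineage with content hashes.
# The manifest format and paths are illustrative, not a standard.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large model weights fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def hash_directory(artifact_dir: str) -> dict:
    """Map every file under an artifact directory (weights, datasets) to its hash."""
    root = Path(artifact_dir)
    return {
        str(p.relative_to(root)): sha256_of(p)
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }

def write_manifest(artifact_dir: str, manifest_path: str = "lineage.json") -> None:
    """Record the current hashes as a lineage manifest."""
    Path(manifest_path).write_text(json.dumps(hash_directory(artifact_dir), indent=2))

def verify_manifest(artifact_dir: str, manifest_path: str = "lineage.json") -> bool:
    """Re-hash the directory and compare against the recorded manifest."""
    recorded = json.loads(Path(manifest_path).read_text())
    return hash_directory(artifact_dir) == recorded
```

A team might run write_manifest when a model or dataset is approved and verify_manifest before each training or deployment step, failing the pipeline on any mismatch.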

Threats of Data Poisoning in AI Models

Michael Lieberman, CTO and co-founder of Kusari, a software supply chain security company in Ridgefield, Conn., identified poisoning large language models as a key concern for 2025. “Data poisoning attacks aimed at manipulating LLMs will become increasingly common, though this approach is more resource-intensive compared to simpler tactics, like distributing malicious open LLMs,” he told TechNewsWorld.

“Most organizations aren’t training their own models,” he explained. “They rely on pre-trained models, often provided for free. This lack of transparency around the origins of these models makes it easy for bad actors to introduce compromised versions, as demonstrated by the Hugging Face malware incident.” In early 2024, it was revealed that about 100 LLMs containing hidden backdoors capable of executing arbitrary code on users’ machines had been uploaded to the Hugging Face platform.
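Incidents like the Hugging Face case typically hinge on serialization formats such as Python's pickle, which can execute arbitrary code when a file is loaded. The sketch below shows one defensive habit, assuming PyTorch weights and hypothetical file names: prefer the safetensors format, which stores raw tensors and runs no code on load, and restrict legacy checkpoint loading.

```python
# A minimal sketch of loading downloaded model weights defensively.
# File names are hypothetical; assumes PyTorch and the safetensors package.
import torch
from safetensors.torch import load_file  # pip install safetensors

def load_weights(path: str) -> dict:
    """Load a state dict without letting embedded code run at load time."""
    if path.endswith(".safetensors"):
        # safetensors stores raw tensors only; nothing is executed on load.
        return load_file(path)
    # Legacy .pt/.bin checkpoints are pickle files; weights_only=True
    # (PyTorch 2.x) restricts unpickling to tensors and plain containers.
    return torch.load(path, weights_only=True)

state_dict = load_weights("downloaded_model.safetensors")
print(f"Loaded {len(state_dict)} tensors")
```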

“Future data poisoning efforts will likely target major players such as OpenAI, Meta, and Google, which train their models on vast datasets,” Lieberman predicted. “These attacks will be harder to detect due to the sheer scale and complexity of their operations.”

“In 2025, attackers will likely outpace defenders,” he noted. “Attackers are financially motivated, while defenders often face challenges securing adequate budgets since security is rarely seen as a revenue driver. It might take a significant AI supply chain breach—on the scale of the SolarWinds Sunburst incident—for the industry to recognize the gravity of the threat.”

AI’s advancement will also lead to more sophisticated attacks by a broader range of threat actors in 2025. “As AI becomes increasingly capable and accessible, the barrier to entry for less-skilled attackers will lower, while also accelerating the speed at which attacks are executed,” explained Justin Blackburn, a senior cloud threat detection engineer at AppOmni, a SaaS security management software company based in San Mateo, Calif.

“The rise of AI-powered bots will further enable attackers to carry out large-scale attacks with minimal effort,” he told TechNewsWorld. “With these AI-driven tools, even less experienced adversaries could gain unauthorized access to sensitive data and disrupt services at scales previously achievable only by more sophisticated, well-funded attackers.”

Script Kiddies Evolve

In 2025, the emergence of agentic AI—artificial intelligence capable of independent decision-making, environmental adaptation, and autonomous action—will present new challenges for defenders. “Advances in AI are expected to enable non-state actors to create autonomous cyber weapons,” said Jason Pittman, a collegiate associate professor at the University of Maryland Global Campus’s School of Cybersecurity and Information Technology in Adelphi, Md.

“Agentic AI operates autonomously with goal-driven behavior,” he explained to TechNewsWorld. “These systems can leverage advanced algorithms to identify vulnerabilities, infiltrate systems, and adapt their tactics in real time without human guidance.”

He emphasized the distinction between agentic AI and traditional systems. “Unlike AI reliant on predefined instructions and human input, agentic AI evolves dynamically to achieve its objectives,” Pittman noted.

He also warned of unintended consequences. “Similar to the Morris Worm from decades ago, the release of agentic cyber weapons could initially occur by accident, which is particularly concerning. The widespread availability of advanced AI tools and open-source machine learning frameworks significantly lowers the barrier to developing sophisticated cyber weapons. Once unleashed, the autonomy of agentic AI could enable it to bypass safety measures and operate beyond human control.”

While AI can be a threat in the hands of cybercriminals, it also holds the potential to enhance data security, including the protection of personally identifiable information (PII). “After analyzing more than six million Google Drive files, we found that 40% of them contained PII, putting businesses at risk of a data breach,” said Rich Vibert, co-founder and CEO of Metomic, a data privacy platform in London.

“As we move into 2025, more companies will prioritize automated data classification methods to reduce the amount of sensitive information inadvertently stored in publicly accessible files and collaborative workspaces across SaaS and cloud environments,” he added.

“Businesses will increasingly adopt AI-driven tools that can automatically identify, tag, and secure sensitive data,” Vibert explained. “This shift will help companies manage the vast amounts of data generated daily, ensuring sensitive information is continuously protected and unnecessary exposure is minimized.”
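As a rough sketch of the classification step Vibert describes, the Python below tags documents by the categories of PII they contain. The regex patterns and file names are illustrative stand-ins; production tools typically combine trained models with pattern matching and contextual checks.

```python
# A minimal, regex-based sketch of automated PII classification.
# Real AI-driven tools use trained models; these patterns are illustrative only.
import re
from typing import Dict, List

PII_PATTERNS: Dict[str, re.Pattern] = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify_document(text: str) -> List[str]:
    """Return the PII categories detected in a document."""
    return [label for label, pattern in PII_PATTERNS.items() if pattern.search(text)]

def tag_files(files: Dict[str, str]) -> Dict[str, List[str]]:
    """Tag each file (name -> contents) with the PII categories it contains."""
    return {name: classify_document(body) for name, body in files.items()}

# Example: flag a shared document holding an email address and an SSN.
print(tag_files({"q3_roster.txt": "Jane Doe, jane@example.com, 123-45-6789"}))
```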

However, 2025 may also bring a wave of disappointment among security professionals when the hype surrounding AI fails to meet expectations. “CISOs will reduce their focus on generative AI by 10% due to a lack of measurable value,” wrote Cody Scott, a senior analyst at Forrester Research, a market research company based in Cambridge, Mass.

“According to Forrester’s 2024 data, 35% of global CISOs and CIOs consider exploring and deploying generative AI use cases to improve employee productivity a top priority,” he noted. “The security product market has quickly hyped generative AI’s potential productivity benefits, but the lack of practical outcomes is leading to growing disillusionment.”

“The idea of an autonomous security operations center powered by generative AI generated significant buzz, but it’s far from becoming a reality,” he continued. “In 2025, this trend will persist, and security professionals will become even more disenchanted as challenges such as limited budgets and unfulfilled AI promises reduce the number of security-focused generative AI deployments.”
