January 2026: AI-enhanced cyberattacks are on the rise
- pradelconseil
According to the latest Allianz Risk Barometer 2026, published in mid-January, cyber risk remains the number one global threat, but AI has jumped from 10th to 2nd place, acting as a threat accelerator.
Cybersecurity: the age of “AI-augmented attacks”
It is the hottest topic of January 2026. Cybercriminals are no longer just hacking systems; they are industrializing their attacks with AI.
AI as a Ransomware Weapon:
Recent reports (notably the Allianz Risk Barometer 2026) show that AI is now being used to analyze a company’s vulnerabilities far faster than any human could. Recent breaches at major companies and suppliers (such as Luxshare and Under Armour earlier this year) illustrate the rise of large-scale data theft enabled by automated tools. Beyond financial gain, these attacks, which also captured R&D plans, open the door to industrial espionage and show how a vulnerability at a single supplier (Luxshare) can expose massive volumes of data belonging to its clients, including Apple, Nvidia, Tesla, and others.
Executive deepfakes (CEO Fraud 2.0):
The threat has become more sophisticated. It is no longer just about fake videos, but real-time voice cloning used during virtual meetings to authorize fraudulent wire transfers. Companies now need to implement “voice passwords” between humans to counter these AI-driven attacks.
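The principle behind such “voice passwords” is that approval must depend on a secret shared out of band, something a cloned voice cannot reproduce. A minimal, purely illustrative sketch in Python (the secret value, function names, and 8-character response length are assumptions for the example, not any vendor's protocol):

```python
import hashlib
import hmac
import secrets

# Hypothetical pre-shared secret, exchanged in person between the two
# humans (e.g. CFO and CEO), never over email or chat.
SHARED_SECRET = b"exchange-this-in-person-not-over-email"

def make_challenge() -> str:
    """The requesting party speaks a random one-time challenge."""
    return secrets.token_hex(8)

def respond(challenge: str, secret: bytes = SHARED_SECRET) -> str:
    """The approving party computes the short response to read back."""
    return hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()[:8]

def verify(challenge: str, response: str, secret: bytes = SHARED_SECRET) -> bool:
    """The requesting party checks the answer in constant time."""
    return hmac.compare_digest(respond(challenge, secret), response)
```

Because each challenge is random and single-use, replaying a recording of an earlier call yields nothing; in practice the same property is achieved with a simple agreed passphrase or a callback on a known number.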
“Shadow AI” attacks:
The number one risk for companies is not official, sanctioned AI, but “shadow” AI. Employees are using unsecured AI tools to process confidential data, creating invisible breaches that hackers can exploit.
Europe: the moment of regulatory truth with the AI Act
In Europe, 2026 is no longer the year of experimentation, but one of accountability. The full entry into force of the AI Act is reshaping the risk landscape.
It has become the number one topic in European boardrooms. Since January, regulators have moved beyond education and awareness.
Audits of high-risk AI systems:
Companies using AI systems classified as “High Risk”, such as AI in HR, credit scoring, or critical infrastructure applications like healthcare, are now subject to audits. Fines can reach up to 7% of global annual turnover.
As a result, there is a deliberate slowdown in the deployment of “consumer-grade” generative AI within enterprises, in favor of locally hosted Small Language Models (SLMs) that are easier to audit and control.
United Kingdom: the major industrial incident at Jaguar Land Rover
Although the attack began in late 2025, its repercussions have dominated British and European economic news in early 2026. A crippling cyberattack cost the automotive ecosystem nearly £2.5 billion, with full operations only resuming at the start of this year. The attack forced the group to shut down its IT systems to contain the intrusion and halt vehicle production. Unlike a conventional outage, this disruption lasted for weeks. Cars could no longer be assembled because robots and parts management systems (“just-in-time”) were offline.
This incident highlights the fragility of connected supply chains. AI, designed to optimize flows (“just-in-time”), made the system hyper-sensitive. When the AI stops, the factory can no longer operate. It is a warning to the entire industry: digital dependency has become a systemic risk.
France: double wave of DDoS attacks against La Poste during the holiday season
A pro-Russian hacker group targeted critical French infrastructure to sow disruption. Digital access to La Banque Postale accounts and to the La Poste website was cut off during the holiday period, an outage of unprecedented scale that left users unable to access their bank accounts.
Source: Siècle Digital - La Poste
95% of AI pilot projects fail, according to MIT
In its report “The GenAI Divide: State of AI in Business 2025,” MIT states that 95% of generative AI projects fail to achieve measurable value creation or return on investment. Companies struggle to integrate generative AI productively into their operations for several reasons:
- lack of customization of generic tools not adapted to the company’s context
- absence of workflow transformation
- insufficient training on the tools
- lack of high-quality data and technical integration
- absence of a governance framework
- shortage of specialized skills
- cultural resistance
By contrast, successful projects:
- deeply integrate AI systems into core business processes
- train and fine-tune the tools to improve performance
- form partnerships with specialized vendors
- focus on the right use cases (automation of repetitive tasks, etc.)
Source: MIT - “The GenAI Divide: State of AI in Business 2025”