The article identifies signs of a possible correction (a “bubble”) in the AI-driven cybersecurity market. After a year of “euphoric valuations in AI,” prominent voices such as the Governor of the Bank of England have warned of the risk of a “sudden correction” in the sector, while Jamie Dimon cautions that “a lot of assets look like they’re entering bubble territory” and Sir Nick Clegg describes current valuations as “crackers.”
It identifies three essential fault lines for leadership in post-hype cybersecurity:
- Geopolitics and supplier concentration: The article points to an excessive dependence on eleven large US and Israeli firms, which jeopardizes digital sovereignty and is described as a “dangerous bet” given the national importance of cybersecurity. In response, governments and companies are diversifying suppliers and architectures to prepare for future regulation.
- Cognitive security against real AI threats: Recent reports from OpenAI, Anthropic, and Google show that the greatest danger of AI lies in “cognitive manipulation” (phishing, disinformation, and fraud powered by automation) rather than in sophisticated technical attacks. The article therefore recommends expanding cybersecurity mandates to include deepfake detection, identity verification, and defense against disinformation campaigns.
- The importance of traditional fundamentals: Far from the rhetoric of “revolutionary” tools, the article emphasizes that “Securing AI agents… relies on the same cyber principles,” such as access control, vulnerability management, and third-party monitoring. Recent incidents show that attackers continue to exploit classic vulnerabilities, confirming that resilience rests on constant vigilance and basic principles.
The analysis concludes that market corrections in technology often bring structural improvements, and it therefore recommends investing in resilience and sound principles, rather than “speculation,” to meet the new cybersecurity paradigm. Leadership in the sector will depend on prioritizing fundamentals, anticipating regulation, and combating real threats such as cognitive manipulation.
Reference
Dixon, W. (2025, October 30). Is the AI-cyber bubble about to burst? World Economic Forum. https://www.weforum.org/stories/2025/10/is-the-ai-cyber-bubble-about-to-burst
