Abstract
This article explores the evolving cybersecurity landscape in automation, from AI and machine vision risks, defensive strategies, and hardware-level protections to current challenges and an outlook on the future, including the possible challenges that come with the dawn of quantum computing. Through insights from industry experts, we examine how security threats are emerging alongside innovation, and how engineers are developing new strategies to protect the future of automation.
Panelists

John Stih
Sensor Specialist at Future Connectivity Solutions

Don Gunn
Processor Specialist at Future Intelligent Solutions

Ed Baca
Regional Solutions Manager at Future Intelligent Solutions

Lazina Rahman
IoT/Connectivity Specialist
Mediator
Introduction
Automation and AI-driven systems play a key role in industry today and are enablers of the industry of tomorrow. This brings cybersecurity to the forefront: as more systems become fully automated, its role shifts from a precaution to a paramount necessity.
The market for automation is growing at an accelerated pace too: the global industrial automation market, valued at USD 205.86 billion in 2022, is expected to nearly double to USD 395.09 billion by 2029, growing at a CAGR of 9.8%*. But with innovation comes risk. As systems become more connected and intelligent, vulnerabilities multiply, in turn fueling the growth of the cybersecurity market from USD 190.4 billion in 2023 to a projected USD 298.5 billion by 2028**.
The rise of these new technologies raises important questions: How do we secure AI-powered systems? What role does hardware-level security play in preventing cyber threats? And how can engineers stay ahead of future threats?
To tackle these challenges, we brought together a panel of experts from different domains to explore the state of cybersecurity in automation:
- John Stih, Sensor Specialist at Future Connectivity Solutions
- Don Gunn, Processor Specialist at Future Intelligent Solutions
- Ed Baca, Regional Solutions Manager at Future Intelligent Solutions
- Lazina Rahman (IoT/Connectivity Specialist) as our mediator
Through the discussion, we unpacked the challenges of securing automation at every level, from AI-driven cybersecurity threats to hardware-based protections and the coming impact of quantum computing on encryption.
This article gathers the main ideas from that conversation into four key sections:
- The Growing Security Challenge in Automation
- Threats and Security Strategies
- Case Study: The STM32N6 and the Future of Hardware-Level Security
- The Future of Cybersecurity in Automation
Continue reading to dive deep into the present state and the future of cybersecurity in the age of automation.
*As per: Fortune Business Insights. (2024). Industrial automation market size, share, growth & industry analysis (2024–2032).
**As per: MarketsandMarkets. (2024). Cybersecurity market size, share, global growth analysis (2024–2029).
Our editorial approach:
This report is edited, structured, and verified by the Future Content Development team based on the insights from our panel discussions. Transcripts were refined and polished with the assistance of AI, blending automation and human expertise to deliver a clear and accurate final piece.
The Growing Security Challenge in Automation
In the past, cybersecurity concerns centered on data privacy—protecting sensitive information from breaches and ensuring encrypted communication. But as automation expands its reach, security is no longer just about privacy. It’s about safety.
Machine vision and AI-driven automation are bringing once-futuristic technologies, from autonomous vehicles to industrial robots, onto the near horizon. This exciting landscape is not without its caveats: the same technology that makes automation so powerful also makes these systems vulnerable to deception, exploitation, and attacks.
AI, Machine Vision, and Safety Risks
“Traditionally, security concerns were mostly about privacy,” said John Stih, sensor specialist at Future Connectivity Solutions. “But with automation, safety has become a major issue.”
Consider an autonomous vehicle navigating a highway. Its lane-detection system relies on machine vision to distinguish between pavement and road markings. If an attacker manipulates those visual inputs, the car’s decision-making could be thrown into chaos, leading to potentially serious consequences.
AI systems are continuously trained to interpret visual data and to generalize across countless scenarios. However smart they become, these systems can be fooled. Don Gunn, processor specialist at Future Intelligent Solutions, pointed out a relatively simple example: drivers in automated vehicles today must remain in active monitoring mode, ready to take control of the vehicle at any time. To enforce this, automated vehicle manufacturers first relied on steering wheel movement but are moving to internal cameras and machine vision that track the driver’s eye movements. But what if the driver wears sunglasses, or deliberately covers their eyes with printed images of open eyes so they can nap while the vehicle drives itself? It’s a simple, hypothetical premise, but can AI detect the trick?
RELATED: Vehicle automation: Levels of autonomy and what each means
This deception extends beyond vehicles. In manufacturing plants, facial recognition systems help monitor worker movement, ensure compliance, and improve safety. Yet some individuals have managed to fool these AI-powered cameras by wearing t-shirts with printed faces, tricking the system into misidentifying their presence. “That’s why liveness detection is crucial,” Gunn explained. Some AI systems now incorporate infrared (IR) cameras, detecting heat signatures to verify whether an image represents a living, breathing human rather than a static decoy.
The Trade-Off Between Automation and Security
Here’s where the equation gets more interesting. As automation continues to advance, we find more areas where we could benefit from it. But as Ed Baca, regional solutions manager at Future Intelligent Solutions, highlighted: “The more sensors we add to automation, the more points of attack emerge.”
Cameras and sensors enhance system intelligence, improving reliability and decision-making. But each additional component also introduces new vulnerabilities.
Every sensor, whether it is tracking movement in a factory or scanning for obstacles in an autonomous robot, is a potential entry point for an attacker. Do we use more cameras and risk security gaps, or fewer, better-protected cameras and accept reduced visibility?
Exploiting AI-Enabled Systems
It’s not just visual deception. Attackers will look for any open door to gain access; every entry point is critical, but some are more sensitive than others. A key target, for example, is the processors powering AI models.
Many AI-enhanced cameras and automation systems run on high-performance processors, often using Linux-based software.
“Since Linux is so well understood, bad actors can exploit vulnerabilities and load their own programs in the background,” Gunn warned. A compromised AI-powered camera, for instance, could be manipulated to stream unauthorized footage, or worse, provide false inputs to an industrial control system.
To counteract these threats, modern processors include built-in security mechanisms like secure boot protocols. These ensure that only authenticated, verified software can run on the system, preventing rogue programs from taking control.
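As a rough illustration of how a secure boot chain is structured, the sketch below shows the core decision flow: the device first checks that its public key matches a hash provisioned into one-time-programmable fuses, then verifies the firmware signature against that key, and only then jumps into the application. The helper functions, key sizes, and image layout are illustrative assumptions, not any specific vendor’s implementation.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical hardware/crypto hooks: a hash of the OEM public key stored in
 * one-time-programmable fuses, and a signature check over the firmware image.
 * Real MCUs implement these steps in ROM code or vendor boot firmware.      */
extern bool key_hash_matches_fuses(const uint8_t key_hash[32]);
extern bool signature_is_valid(const uint8_t *image, size_t image_len,
                               const uint8_t *sig, size_t sig_len,
                               const uint8_t public_key[64]);

extern const uint8_t oem_public_key[64];
extern const uint8_t oem_public_key_hash[32];

void jump_to_application(const uint8_t *image);   /* vector jump into firmware */
void enter_recovery(void);                        /* halt or request re-flash  */

/* Conceptual secure-boot flow: refuse to run firmware that was not signed
 * with the key the OEM provisioned into the device. */
void secure_boot(const uint8_t *image, size_t image_len,
                 const uint8_t *sig, size_t sig_len)
{
    if (!key_hash_matches_fuses(oem_public_key_hash)) {
        enter_recovery();           /* public key has been tampered with      */
        return;
    }
    if (!signature_is_valid(image, image_len, sig, sig_len, oem_public_key)) {
        enter_recovery();           /* unauthenticated firmware: do not boot  */
        return;
    }
    jump_to_application(image);     /* only verified code ever runs           */
}
```

The essential property is that every path that fails verification ends in recovery rather than in executing unverified code.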
Cybersecurity and Network Protection
Then there’s another level. Autonomous systems are rarely isolated machines. They usually form part of a web of networked devices that enhances their functionality, but, as you might guess, each device in that web becomes a potential vector for cyberattacks.
“Security isn’t just about the software running on a device—it’s about who can access it,” Gunn noted. A vulnerability in one node of a network can be exploited by an external hacker, a rogue insider, or an attacker leveraging a weakness in cloud-based infrastructure.
Consider a high-risk scenario like a power plant’s cooling system. If an unauthorized entity were to breach the control system and shut it down, the consequences could be catastrophic. The challenge, therefore, is not just to secure individual components but to establish robust network defenses, limiting access, isolating systems with firewalls, and ensuring critical infrastructure remains protected from intrusion.
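To make “limiting access” concrete, here is a minimal sketch of defense in depth on a networked controller: a command is acted on only if it arrives from an allowlisted source and carries a valid authentication tag. The addresses, message format, and verify_hmac() helper are hypothetical stand-ins.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_ALLOWED 4

typedef struct {
    uint32_t source_ip;       /* IPv4 address, host byte order            */
    uint8_t  payload[64];     /* the command itself                       */
    uint8_t  auth_tag[32];    /* e.g. an HMAC computed by the sender      */
} command_t;

static const uint32_t allowed_sources[MAX_ALLOWED] = {
    0xC0A80110u,              /* 192.168.1.16: engineering workstation (example) */
    0xC0A80111u,              /* 192.168.1.17: SCADA gateway (example)           */
};

/* Assumed helper: checks the tag against a shared secret held in secure storage. */
extern bool verify_hmac(const uint8_t *payload, size_t len, const uint8_t tag[32]);

bool command_is_trusted(const command_t *cmd)
{
    bool source_ok = false;
    for (int i = 0; i < MAX_ALLOWED; i++) {
        if (cmd->source_ip == allowed_sources[i]) {
            source_ok = true;
            break;
        }
    }
    /* Both layers must pass: network-level allowlisting and message authentication. */
    return source_ok && verify_hmac(cmd->payload, sizeof cmd->payload, cmd->auth_tag);
}
```

Neither check is sufficient on its own, since addresses can be spoofed and keys can leak, which is why firewalls, network segmentation, and authentication are layered together in practice.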
It’s an arms race. As automation becomes more intelligent, adversaries will try to outsmart it. For this reason, remaining vigilant and up to date is no longer optional; it’s imperative.
Threats and Security Strategies
This arms race raises a new question: can the same intelligence that enhances security be weaponized to break it? The battle between attackers and defenders is all about adaptation, with each breakthrough in AI security met by an equally sophisticated method of deception. But will one side ever definitively leave the other behind?
Could AI Compromise Machine Vision?
“Take Face ID,” said John Stih. “It started with passcodes, then came photo recognition—but people tricked it with a simple static image. So, we added 3D dot projection. Attackers responded with 3D-printed masks. Then came infrared detection to verify skin temperature, and even radar to detect breathing.” Each iteration was a step forward in security, yet attackers found ways to refine their methods and counter the upgrades.
This dynamic exposes a fundamental vulnerability of AI: pattern recognition is both its greatest strength and its Achilles’ heel. Machine vision systems, designed to identify faces, objects, and movements with near-human precision, can sometimes be fooled by minor inconsistencies.
Lazina Rahman noted a recent viral case exposing how sensitive these systems can be: people were no longer recognized by their phones after something as small as a minor facial injury or a change of makeup.
“That’s why security can’t rely on visual analysis alone,” Stih agreed.
But attacks extend far beyond facial recognition. Don Gunn pointed to an even greater looming threat: quantum computing. “Right now, AI is mostly used as a defensive tool. But in theory, AI could also be trained to crack encryption keys or automate cyberattacks. When quantum computing becomes viable, many of today’s security measures will be obsolete.”
Using AI to Defend Against Attacks
AI today remains one of the strongest lines of defense against attackers. But with the potential for AI-driven attacks, it’s paramount to anticipate and cover every possible scenario.
“To stay ahead, security AI has to evolve constantly,” Gunn explained. “That’s why modern authentication systems use multiple layers of verification.”
One example is NXP’s security solution, built around the i.MX RT1170 processor. Instead of relying solely on a camera, it combines:
- RGB cameras for facial detection,
- Infrared (IR) sensors to confirm heat signatures,
- Time-of-flight sensors to verify that a face is three-dimensional rather than a flat image.
See i.MX RT1170 development kits
“This makes it much harder for attackers to trick the system with simple photo-based deception,” Gunn said.
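The sketch below illustrates the kind of multi-sensor fusion Gunn describes, without claiming to reflect NXP’s actual algorithm: access is granted only when the RGB match score, the infrared temperature reading, and the time-of-flight depth profile all agree that a live, enrolled person is present. The thresholds and data structure are illustrative assumptions.

```c
#include <stdbool.h>

/* Conceptual multi-sensor liveness check. The readings, thresholds, and match
 * score are illustrative placeholders, not any vendor's production algorithm. */
typedef struct {
    float face_match_score;   /* RGB camera: similarity to enrolled face, 0..1 */
    float surface_temp_c;     /* IR sensor: apparent skin temperature          */
    float depth_variation_mm; /* time-of-flight: relief across the face region */
} face_observation_t;

bool grant_access(const face_observation_t *obs)
{
    const float MATCH_THRESHOLD = 0.90f;  /* assumed recognition confidence    */
    const float TEMP_MIN_C      = 30.0f;  /* plausible skin temperature range  */
    const float TEMP_MAX_C      = 40.0f;
    const float MIN_DEPTH_MM    = 10.0f;  /* a printed photo is nearly flat    */

    bool face_matches = obs->face_match_score >= MATCH_THRESHOLD;
    bool looks_alive  = obs->surface_temp_c >= TEMP_MIN_C &&
                        obs->surface_temp_c <= TEMP_MAX_C;
    bool is_3d        = obs->depth_variation_mm >= MIN_DEPTH_MM;

    /* All three independent checks must agree before access is granted. */
    return face_matches && looks_alive && is_3d;
}
```

Because the checks are independent, a printed photo that passes the RGB match still fails the temperature and depth tests.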
But AI-driven security isn’t just about blocking unauthorized access. It must also predict and neutralize emerging threats before they become real vulnerabilities.
“A security model might be safe today, but in six months, a new attack method could emerge,” said Ed Baca. “AI-based security has to continuously update and adapt, or it becomes obsolete.”
That adaptability, however, comes with a caveat. “If an AI model learns from real-world data, attackers can manipulate it,” Stih warned. “Going back to the makeup example, imagine an authentication system that updates every time it encounters an alteration in a face. An attacker could gradually train it to recognize someone who isn’t actually authorized. That’s why updates need to be carefully controlled.”
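One way to keep such updates “carefully controlled,” sketched below with an assumed 128-dimensional face embedding and illustrative thresholds, is to retain the original enrollment as an immutable anchor and reject any adaptation that would drift the working template too far from it. This is a conceptual illustration, not a description of any particular product.

```c
#include <stdbool.h>
#include <math.h>

#define EMBEDDING_DIM 128

typedef struct {
    float v[EMBEDDING_DIM];   /* face embedding produced by the vision model */
} embedding_t;

/* Euclidean distance between two embeddings. */
static float distance(const embedding_t *a, const embedding_t *b)
{
    float sum = 0.0f;
    for (int i = 0; i < EMBEDDING_DIM; i++) {
        float d = a->v[i] - b->v[i];
        sum += d * d;
    }
    return sqrtf(sum);
}

/* Returns true if the adaptive template was updated, false if the sample was rejected. */
bool maybe_update_template(const embedding_t *anchor,   /* immutable enrollment       */
                           embedding_t *adaptive,       /* template allowed to adapt  */
                           const embedding_t *sample)
{
    const float MATCH_RADIUS = 0.6f;   /* sample must match the current template      */
    const float MAX_DRIFT    = 0.8f;   /* adaptive template must stay near enrollment */
    const float LEARN_RATE   = 0.05f;  /* small, bounded adaptation step              */

    if (distance(adaptive, sample) > MATCH_RADIUS)
        return false;                          /* too different: do not learn from it */

    embedding_t candidate = *adaptive;
    for (int i = 0; i < EMBEDDING_DIM; i++)
        candidate.v[i] += LEARN_RATE * (sample->v[i] - adaptive->v[i]);

    if (distance(anchor, &candidate) > MAX_DRIFT)
        return false;              /* update would drift too far from the enrollment  */

    *adaptive = candidate;
    return true;
}
```

Small legitimate changes, like new makeup, are absorbed, while a sequence of adversarial nudges eventually hits the drift limit and is refused.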
Case Study: The STM32N6 and the Future of Hardware-Level Security
The battle for cybersecurity is often framed as a software problem, where developers patch vulnerabilities just as quickly as attackers find new ones. But as AI-driven automation scales, it’s time to think outside the box… which leads us back to the hardware.
Can security be embedded at a hardware level?
This shift is at the heart of STMicroelectronics’ STM32N6 microcontroller, a device engineered to bring security to the edge. Unlike traditional cybersecurity models that rely on firewalls and software updates, the STM32N6 integrates protection at the silicon level, ensuring security from the foundation of any design.
Bringing Security to the Edge
Historically, microcontrollers and image sensors existed in separate worlds. A camera captured data, a processor analyzed it, and a network connection sent the results elsewhere. But with the STM32N6, this equation changes.
“Traditionally, microcontrollers weren’t directly connected to image sensors,” Ed Baca explained. “But now, with the MIPI interface, the STM32N6 can integrate directly with high-resolution cameras, making it possible to process visual data at the edge—before it ever leaves the device.”
This edge-first approach reduces security risks by minimizing the transmission of raw data over potentially vulnerable networks. Instead, the microcontroller processes images locally and extracts only the most essential metadata, such as a “person detected” flag, rather than streaming an entire video feed.
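A minimal sketch of that “metadata out, pixels stay local” pattern might look like the following, where detect_person() and send_event() are hypothetical stand-ins for the local inference engine and the network stack:

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t timestamp;       /* seconds since boot or epoch         */
    uint16_t object_class;    /* e.g. 1 = person, 2 = vehicle        */
    uint8_t  confidence_pct;  /* 0..100                              */
    uint8_t  zone_id;         /* which monitored zone triggered      */
} detection_event_t;          /* 8 bytes vs. megabytes of raw video  */

/* Assumed stand-ins for the on-device inference engine and network stack. */
extern bool detect_person(const uint8_t *frame, uint32_t len,
                          uint8_t *confidence_pct, uint8_t *zone_id);
extern void send_event(const detection_event_t *evt);
extern uint32_t uptime_seconds(void);

void process_frame(const uint8_t *frame, uint32_t len)
{
    uint8_t confidence, zone;

    if (detect_person(frame, len, &confidence, &zone)) {
        detection_event_t evt = {
            .timestamp      = uptime_seconds(),
            .object_class   = 1,           /* person */
            .confidence_pct = confidence,
            .zone_id        = zone,
        };
        send_event(&evt);    /* only this small record leaves the device */
    }
    /* The raw frame is never transmitted; it can be discarded here. */
}
```

An attacker intercepting this traffic learns only that a person was detected in a given zone, not what the scene or the person looks like.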
Learn more: STMicroelectronics STM32N6 Microcontroller
Beyond its MIPI-enabled camera support, the STM32N6 embeds several other security mechanisms such as:
- On-the-fly encryption & secure data storage – Protects AI models and sensitive data from unauthorized access (see the conceptual sketch after this list).
- Hardware-based authentication – Ensures that only trusted software and firmware can run on the device.
- Edge AI inference with security – Reduces reliance on cloud-based processing, mitigating the risk of intercepted or altered data.
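To make the on-the-fly encryption idea above more concrete, here is a conceptual read path, not ST’s actual implementation: model weights stay encrypted in external flash and are decrypted only inside the chip, so probing the external memory bus never exposes plaintext. The crypto and flash helpers are hypothetical stand-ins for a hardware engine that would normally do this transparently.

```c
#include <stddef.h>
#include <stdint.h>

/* Assumed helpers: key retrieval from secure storage, a hardware AES-CTR
 * engine, and a raw external-flash read. In a real device the key would
 * never be visible to application code at all.                           */
extern void load_key_from_secure_storage(uint8_t key[32]);
extern void aes_ctr_decrypt(const uint8_t key[32], uint64_t counter,
                            const uint8_t *in, uint8_t *out, size_t len);
extern void flash_read(uint32_t address, uint8_t *buf, size_t len);

/* Fetch a block of model weights: ciphertext comes off the external bus,
 * plaintext exists only in internal RAM (internal_buf must hold len bytes). */
void fetch_model_block(uint32_t flash_addr, uint8_t *internal_buf, size_t len)
{
    uint8_t key[32];
    uint8_t ciphertext[256];

    if (len > sizeof ciphertext) {
        len = sizeof ciphertext;                  /* keep the sketch simple  */
    }

    flash_read(flash_addr, ciphertext, len);      /* encrypted at rest       */
    load_key_from_secure_storage(key);
    aes_ctr_decrypt(key, flash_addr / 16,         /* counter tied to address */
                    ciphertext, internal_buf, len);
}
```

In a real device this decryption happens in dedicated hardware on the memory read path rather than in application code; the sketch only shows where the trust boundary sits.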
The STM32N6 is particularly suited for privacy-sensitive applications, where security and efficiency go hand in hand.
Applications
- Smart Surveillance: Instead of transmitting continuous video streams to cloud servers, smart cameras using the STM32N6 analyze footage on-site, relaying only the necessary insights, such as motion detection or object classification results.
- Access Control & Authentication: Secure implementation of facial recognition and people counting for restricted areas.
- Industrial Automation: Real-time object detection and classification within manufacturing lines, reducing errors without exposing sensitive operational data.
- Smart Parking & Traffic Management: Instant vehicle counting, license plate recognition, and occupancy tracking, all processed at the edge to prevent data leaks.
These applications leverage the STM32N6’s powerhouse processing capabilities to deliver automation that is efficient, innovative, and secure by design.
The Shift to Embedded Security
Hardware-level security is not just a technological evolution; it’s a change in design paradigm. With regulations like the Cyber Resilience Act raising the stakes, manufacturers can no longer afford to treat security as an afterthought. Instead, resilience must be woven into the fabric of the systems we build.
The STM32N6 is a great example of how manufacturers are setting the blueprint for the future, where vulnerabilities are prevented instead of patched over.
The Future of Cybersecurity in Automation
Certainties about cybersecurity shift every day. As discussions of quantum computing step out of the realm of theoretical physics and onto a visible horizon, security architectures must stay on their toes.
The Post-Quantum Encryption Challenge
For now, 256-bit encryption is effectively unbreakable: it would take today’s most powerful supercomputers millions of years to brute-force their way through it. But with quantum computing power in play, that same protection could collapse in a matter of minutes.
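A quick back-of-the-envelope check, assuming a hypothetical classical attacker testing 10^18 keys per second, shows just how far out of reach brute force is today; if anything, “millions of years” is an understatement.

```c
#include <math.h>
#include <stdio.h>

/* Back-of-the-envelope check of the brute-force claim above. The attacker
 * rate of 1e18 guesses per second is an illustrative assumption; the key
 * space of a 256-bit key is 2^256.                                        */
int main(void)
{
    const double key_bits         = 256.0;
    const double guesses_per_sec  = 1e18;                  /* assumed rate  */
    const double seconds_per_year = 60.0 * 60.0 * 24.0 * 365.25;

    /* Work in log10 so the result stays readable:
     * log10(2^256) = 256 * log10(2) ~= 77.06                              */
    double log10_keys    = key_bits * log10(2.0);
    double log10_seconds = log10_keys - log10(guesses_per_sec);
    double log10_years   = log10_seconds - log10(seconds_per_year);

    printf("Brute-forcing a 256-bit key at 1e18 guesses/s takes ~10^%.0f years\n",
           log10_years);   /* prints roughly 10^52 years */
    return 0;
}
```

Quantum computers threaten this picture not by guessing faster in the classical sense but by changing the mathematics of the attack, which is why post-quantum algorithms are needed.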
“It’s going to be game-changing once quantum computing comes online,” Don Gunn warned. “For the foreseeable future, the biggest challenge will be developing post-quantum encryption—because once quantum decryption becomes viable, all existing security measures could be rendered useless.”
The industry is actively bracing for this shift. Manufacturers are already developing quantum-resistant cryptographic methods, ensuring that AI models, industrial systems, and automation frameworks can withstand the code-breaking capabilities of quantum machines. This will bring the focus to two key areas:
- Post-quantum cryptography (PQC): A new generation of encryption algorithms designed to resist quantum attacks.
- The evolution of secure firmware & hardware
For cybersecurity professionals, the race has started, and the post-quantum future will bring an entirely new kind of adversary.
Securing the Rise of Autonomous Systems
Beyond quantum threats, the next era of automation requires security models that are not just robust, but impossible to override.
Security in robotics, heavy equipment, and smart infrastructure is critical to keeping people safe. To that end, next-gen cybersecurity strategies are taking shape:
- Verified hardware, which ensures security mechanisms cannot be bypassed or disabled.
- Depth sensing & exclusion zones, which create invisible security perimeters to prevent unauthorized access.
- Self-healing AI & adaptive security, which introduce systems that respond dynamically to threats, detecting and neutralizing attacks in real time.
The Never-Ending Cybersecurity Evolution
The nature of cybersecurity is adaptive, so its future will not be defined by a single breakthrough but by a constant cycle of evolution.
It’s a balancing act, a constant management of trade-offs, and an ever-shifting landscape. The smarter AI and automation become, the more benefits they bring to people, and the more creative attackers will become in working against them. It’s a never-ending story, but one we can keep learning from and anticipating. Some key predictions from our discussion included:
- Zero-trust architectures will become the norm—every device, every transaction, and every user will have to be verified at every step.
- Dynamic encryption frameworks will constantly shift, making data far harder to predict or intercept.
- AI-driven cybersecurity will fight fire with fire, using machine learning to predict and neutralize attacks before they happen.
The race is on. The question is: Who will adapt faster?
Contact
In the race to secure automation, staying ahead of the trends is your best way forward.
At Future Electronics, we do more than distribute cutting-edge components. Our dedicated engineers support you at every step, from part selection to full system development. With our expertise and insights, we can help you stay ahead in an ever-evolving market.