AI-Powered Foreign Object Detection Boosts Safety in EV Wireless Charging

Electric vehicles are no longer a futuristic dream—they are rolling off production lines in record numbers, lining city streets, accelerating adoption curves, and reshaping the way we think about transportation, energy, and urban infrastructure. Yet behind the sleek exteriors and growing public enthusiasm lies a quieter but equally critical engineering frontier: the charging experience. For battery-electric vehicles to fully replace internal combustion engines on a global scale, recharging must be as seamless, fast, and safe as filling a tank with gasoline—perhaps even more so. Among the most promising yet persistently elusive technologies in this domain is wireless charging.

Unlike plug-in methods, which demand precise alignment, weather-resistant connectors, and—let’s be honest—a certain tolerance for bending over in the rain, wireless power transfer promises a future where drivers simply park, walk away, and let invisible magnetic fields do the rest. But despite more than a decade of R&D and dozens of pilot deployments worldwide, wireless charging has remained a niche solution rather than a mainstream one. The reasons are technical but tangible: efficiency losses over air gaps, misalignment sensitivity, electromagnetic compatibility concerns—and perhaps most pressingly, safety.

Enter foreign object detection, or FOD: the unsung gatekeeper standing between elegant physics and real-world risk. When a stray coin, a dropped wrench, or—more alarmingly—a curious cat wanders onto the charging pad, the consequences can be serious. Metal objects heat up rapidly due to induced eddy currents, potentially damaging pavement, degrading coil performance, and in worst-case scenarios, triggering fires. Living organisms—dogs, cats, even rodents—are vulnerable to localized heating, raising ethical and regulatory red flags that no automaker or infrastructure provider can ignore.

Historically, engineers tackled FOD with analog vigilance: embedded sensing coils monitoring impedance shifts, ultrasonic pulses scanning for anomalies, or radar modules triangulating reflections. Each method had merit, but also blind spots. Sensing coils demanded millivolt-level precision and complex calibration; ultrasonics struggled to differentiate a soda can from a squirrel; radar systems added cost and packaging headaches. Meanwhile, classical computer vision approaches—edge detection, color segmentation, template matching—faltered under variable lighting, cluttered backgrounds, and occlusions.

That is, until recently.

A novel approach, detailed in a landmark study published in Modern Electronics Technique, offers a compelling pivot: instead of fighting physics with more hardware, why not let intelligent software take the wheel? Researchers from Beijing Normal University at Zhuhai—Qian Qiang, Chen Hai, Zheng Yi, Yan Lihua, Liang Wenxi, and Wu Kairong—have developed and validated a machine-learning-based foreign object detection system that reimagines safety not as a circuit-level problem, but as a perception task.

At its core lies YOLOv5—a real-time object detection architecture that has gained traction not just in research labs, but in production environments where speed, accuracy, and robustness are non-negotiable. Trained on a custom dataset of 3,000 images capturing five high-risk object categories—cats, dogs, aluminum cans, screws, and coins—the model learns to recognize visual signatures far more nuanced than simple shape or reflectivity. A tabby cat curled on sun-bleached asphalt may share little chromatic contrast with its surroundings, yet its ear silhouette, whisker shadows, and limb articulation form a composite signal distinguishable from noise. Similarly, a rusted screw lying near tire marks doesn’t rely on brightness alone; its elongated form, metallic sheen under oblique light, and spatial relationship to parking lines contribute to reliable classification.
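For readers unfamiliar with how such a dataset is structured: YOLOv5 typically stores one plain-text annotation per image, each line holding a class index and a normalized bounding box. The snippet below is a minimal sketch of parsing that format for the five categories in the study; the class-index ordering is an illustrative assumption, not taken from the paper.

```python
# Illustrative class order; the paper does not specify index assignments.
CLASSES = ["cat", "dog", "aluminum_can", "screw", "coin"]

def parse_yolo_label(line: str) -> dict:
    """Parse one 'class cx cy w h' label line (coordinates normalized to [0, 1])."""
    cls_id, cx, cy, w, h = line.split()
    return {
        "class": CLASSES[int(cls_id)],
        "center": (float(cx), float(cy)),
        "size": (float(w), float(h)),
    }

# A fingertip-sized coin near the middle of the frame:
label = parse_yolo_label("4 0.512 0.430 0.031 0.029")
```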

The brilliance of the team’s design lies not solely in algorithmic choice, but in system integration. Imagine a standard curbside wireless charging station: embedded in the pavement between wheel stops sits a compact, weatherproof digital camera—OV2640 grade—capturing one high-resolution frame per second. No exotic optics, no LiDAR sweeps. Just a steady visual stream, encoded in JPEG, piped wirelessly via an ESP32 microcontroller using MQTT over Wi-Fi to China Mobile’s OneNET cloud platform.
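Conceptually, each upload is just a topic plus a JSON payload carrying the encoded frame. The sketch below assembles such a message in Python for illustration; the topic scheme and field names are assumptions for this example, since OneNET defines its own message format, and the real device-side code would run on the ESP32 rather than in Python.

```python
import base64
import json

def build_frame_message(device_id: str, jpeg_bytes: bytes) -> tuple[str, str]:
    """Assemble an (mqtt_topic, json_payload) pair for one camera frame.

    Topic layout and payload fields are hypothetical stand-ins for the
    OneNET message format used in the actual deployment.
    """
    topic = f"fod/{device_id}/frame"
    payload = json.dumps({
        "device": device_id,
        "format": "jpeg",
        # MQTT payloads are bytes; base64 keeps the image JSON-safe here.
        "image_b64": base64.b64encode(jpeg_bytes).decode("ascii"),
    })
    return topic, payload

topic, payload = build_frame_message("pad-07", b"\xff\xd8\xff\xe0fake-jpeg")
```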

Security is baked in: each device ID maps to a unique API key, ensuring only authorized firmware can transmit or receive data. Once uploaded, images are pulled down to a dedicated server—running Windows 10, backed by Redis for rapid caching and MySQL for logging—and handed off to the YOLOv5 inference engine. Crucially, the system doesn’t just detect; it acts. A positive identification of any prohibited object triggers a dual-path alert: an immediate halt command to the power electronics controller (cutting high-frequency excitation to the transmitter coil), and a push notification to the driver’s mobile app—“Charging paused: object detected under vehicle. Please inspect area.”
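The dual-path response reduces to a small piece of decision logic. The sketch below shows one plausible shape for it; the confidence cutoff, command names, and message text are illustrative assumptions, not the paper's actual implementation.

```python
PROHIBITED = {"cat", "dog", "aluminum_can", "screw", "coin"}
THRESHOLD = 0.25  # hypothetical decision threshold; not stated in the paper

def decide_actions(detections: list[tuple[str, float]]) -> list[str]:
    """Map (class, confidence) detections to the dual-path response:
    halt the power electronics, then notify the driver. Command strings
    here are placeholders for whatever the real controller expects.
    """
    flagged = [(c, s) for c, s in detections if c in PROHIBITED and s >= THRESHOLD]
    if not flagged:
        return []
    worst = max(flagged, key=lambda d: d[1])  # report the most confident hit
    return [
        "HALT_INVERTER",                     # cut excitation to the transmitter coil
        f"NOTIFY_DRIVER:object={worst[0]}",  # push alert to the mobile app
    ]
```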

In real-world validation across varying ground surfaces (concrete, asphalt, granite pavers), lighting conditions (midday glare, overcast dusk, artificial LED), and object scales (from fingertip-sized coins to full-grown house cats), the system maintained remarkable consistency. Inference ran at 62 frames per second—well within the latency budget for safe intervention. Precision reached 85.6%, meaning the large majority of alerts corresponded to real objects; recall reached 99.8%, meaning nearly every actual threat was caught. In detection parlance, this combination is golden: high recall ensures safety isn't compromised by missed objects; solid precision preserves user trust by limiting unnecessary interruptions.
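For readers who want the definitions behind those two figures: precision is the fraction of alerts that were real objects, and recall is the fraction of real objects that were alerted on. The counts below are invented purely to illustrate the arithmetic; they are not the study's raw numbers.

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# Hypothetical counts chosen so the ratios land near the reported
# 85.6% precision and 99.8% recall:
p, r = precision_recall(tp=998, fp=168, fn=2)
```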

One striking test case involved a gray tabby cat—its fur nearly indistinguishable from the speckled gray granite of the test pad. While simpler detectors might overlook it, the neural net registered a confidence score of 0.46: low, but above the decision threshold. In contrast, a black-and-white cat against light concrete scored 0.78—clear visual contrast translating to stronger certainty. Two dogs partially overlapping? Detected separately, with bounding boxes cleanly disentangling limbs and torsos. A shiny coin resting on a textured, similarly hued stone surface? Confidence of 0.90—proof that texture and edge cues outweighed raw color similarity.

Even trickier scenarios were handled with aplomb: an aluminum can whose red-and-white branding mimicked coin-like circular patterns was never misclassified; another can, painted the same white as the parking stripe and sitting directly atop it, was still isolated thanks to 3D cues like shadow casting and perspective distortion.

This isn’t just detection—it’s contextual awareness.

From an industry standpoint, the implications are profound. Regulatory bodies—from the U.S. Federal Communications Commission to the International Electrotechnical Commission—have long treated FOD as a must-solve before mass-market wireless charging can proceed. ISO 19363 and SAE J2954 standards explicitly mandate reliable detection of both metallic and living foreign objects, with performance thresholds on response time and false-negative rates. This machine-learning framework doesn’t merely comply; it exceeds expectations, offering a scalable, upgradeable solution.

Unlike hardware-dependent FOD methods—which require redesigns for every coil geometry or power level—software-based vision can evolve independently. New object classes (e.g., plastic bottles, fallen branches, children’s toys) can be added with additional training data, not new sensors. Model refinements—better backbone networks, attention mechanisms, temporal fusion across frames—can be deployed over-the-air, turning every charging station into a learning node in a distributed safety network.

Consider the maintenance angle. Traditional FOD circuits degrade over time—capacitors drift, coils fatigue, calibration slips. Cameras, by contrast, are mature, low-cost, and long-lived. Lens contamination (dust, water spots) is a known challenge, yes—but modern image preprocessing (defogging, contrast normalization, adaptive histogram equalization) mitigates these issues effectively. And crucially, visual data is interpretable: unlike a mysterious spike in coil impedance, a flagged image can be reviewed by human operators, enabling incident analysis, liability assessment, and continuous improvement.

Automakers are taking notice. Pilot programs by BMW, Mercedes-Benz, and Chinese EV leader NIO have all explored inductive charging, but commercial rollout has been cautious. Safety certification remains the bottleneck. A system like this—validated in peer-reviewed literature, deployable with off-the-shelf components, and delivering quantifiable performance—could accelerate timelines significantly.

Moreover, the architecture supports expansion beyond binary “stop/go” logic. Future iterations, as the authors hint, could fuse visual data with electromagnetic telemetry—monitoring shifts in resonant frequency, reflected power, or coil temperature—to create a multimodal safety layer. A visual alert plus a 3% impedance deviation would yield near-zero false positives. A slight thermal anomaly paired with a low-confidence animal detection could trigger a gentle warning (“Possible small animal nearby—please verify”) instead of an abrupt shutdown.
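A multimodal safety layer of that kind can be sketched as a small fusion rule. Everything below is an assumption for illustration: the thresholds, the rule structure, and the action names are hypothetical, since the authors only outline the idea as future work.

```python
def fused_response(visual_conf: float, is_animal: bool,
                   impedance_dev_pct: float) -> str:
    """Toy fusion of visual confidence with coil telemetry.

    All thresholds are illustrative guesses, not values from the paper.
    """
    if visual_conf >= 0.5 and impedance_dev_pct >= 3.0:
        return "shutdown"   # both modalities agree: hard stop
    if is_animal and 0.2 <= visual_conf < 0.5:
        return "warn"       # "Possible small animal nearby - please verify"
    if visual_conf >= 0.8:
        return "shutdown"   # high-confidence visual hit alone suffices
    return "continue"
```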

For fleet operators—taxi companies, delivery vans, municipal bus systems—the value proposition sharpens. Downtime is revenue lost. A wireless system that self-monitors and preemptively avoids hazardous events reduces not just insurance risk, but operational headaches. Imagine a depot where 50 buses park overnight on wireless pads: instead of manual sweeps at dawn, the system emails a morning report—“All units charged safely; 3 FOD events resolved automatically (1 coin, 2 leaves).”
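The morning report imagined above is trivial to generate once per-pad events are logged centrally. This sketch aggregates a night's detections into a one-line summary; the report wording and event format are assumptions for the sake of the example.

```python
from collections import Counter

def morning_report(events: list[str]) -> str:
    """Summarize overnight FOD events (as detected-object names)
    into a single fleet-report line. Format is illustrative."""
    if not events:
        return "All units charged safely; no FOD events."
    counts = Counter(events)
    detail = ", ".join(f"{n}x {obj}" for obj, n in sorted(counts.items()))
    return (f"All units charged safely; {len(events)} FOD events "
            f"resolved automatically ({detail}).")

msg = morning_report(["coin", "leaf", "leaf"])
```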

Consumers, too, stand to benefit psychologically. Public skepticism about wireless charging isn’t rooted in physics—it’s rooted in uncertainty. “What if my dog walks under the car while it’s charging?” is a question no spec sheet answers. But a system that sees, understands, and responds in real time transforms abstract risk into managed safety. It builds trust—not through marketing slogans, but through demonstrable competence.

That said, challenges remain. Extreme weather—blinding snow, torrential rain, fog—can degrade camera performance. Nighttime operation demands infrared or supplemental illumination, adding complexity. Adversarial conditions (deliberate occlusion, reflective surfaces causing glare) require ongoing dataset diversification. And while YOLOv5 is lightweight, edge deployment—running inference directly on the ESP32 or a co-located Raspberry Pi—would reduce cloud dependency and latency further. The Zhuhai team’s current cloud-centric approach is pragmatic for prototyping, but the next logical step is on-device AI.

Still, the foundation is solid. What makes this work stand out isn’t just technical novelty—it’s pragmatic innovation. No exotic sensors. No proprietary silicon. No reliance on perfect environmental control. Just smart use of widely available tools, rigorously tested in realistic conditions, documented transparently.

As wireless charging inches toward commercial reality—bolstered by initiatives from the U.S. Department of Energy, the EU’s Horizon Europe program, and China’s national EV strategy—safety infrastructure must keep pace. This research proves that artificial intelligence, when grounded in real-world constraints and engineered for reliability, isn’t just a buzzword. It’s the missing piece in the wireless puzzle.

The road ahead is charged—not just with electrons, but with possibility.


Qian Qiang, Chen Hai, Zheng Yi, Yan Lihua, Liang Wenxi, Wu Kairong — Beijing Normal University at Zhuhai
Modern Electronics Technique, Vol. 46, No. 13, July 2023
DOI: 10.16652/j.issn.1004-373x.2023.13.008
