AI Eyewear Frenzy Triggers a “Chip” Revolution

From Meta to Xiaomi, the AI-glasses segment is turning into a “battle of a hundred glasses.” As the new darling of wearables, AI glasses are adding features at a rapid clip, and behind every iteration sits a suite of chips. Whether through technology breakthroughs or an explosive surge in demand, semiconductors are the real engine driving this wave. In turn, the fast expansion of the AI-glasses market is injecting a fresh shot of adrenaline into the chip industry.

Core Computing Engine – Main-Control SoC

AI glasses rely on a symphony of chips, and the performance of each die directly determines frame rate, image-recognition accuracy, audio quality, and power efficiency. The system-on-chip (SoC) acts as the “brain,” integrating CPU, GPU, NPU, and ISP blocks to handle overall computation, graphics, AI tasks, and image pipelines.

Today, three mainstream SoC approaches dominate.

System-level SoC

All critical functions are merged into one die. Qualcomm’s AR1 Gen 1, built on 6 nm, exemplifies this path. A multi-core CPU/GPU plus 3rd-gen Hexagon NPU powers visual search, real-time translation, and directional audio. It is already inside Meta Ray-Ban Smart Glasses, Thunderbird X3 PRO, and Xiaomi AI Glasses. Drawbacks: higher power, heat, and cost—about US $55, or 31.6 % of BOM.
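
As a rough back-of-envelope check (the total is implied by the figures above, not stated in the article), a US $55 SoC at 31.6 % of the bill of materials points to a total BOM of roughly US $174:

    # Implied total BOM if the SoC costs ~$55 and that is 31.6% of the bill of materials.
    soc_cost_usd = 55.0       # approximate AR1 Gen 1 cost cited above
    soc_bom_share = 0.316     # 31.6% share of BOM
    implied_total_bom = soc_cost_usd / soc_bom_share
    print(f"Implied total BOM: ~US ${implied_total_bom:.0f}")  # ~US $174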

MCU-level SoC + external ISP

An MCU handles basic sensing and connectivity, while a discrete ISP covers imaging. The 6 nm BES2800 from Bestechnic follows this route. Cost drops to US $10–15 and active power to ~100 mW, but complex AI models are out of reach, so the solution targets entry-level frames.

SoC + MCU dual-core architecture

The heavy-lifting SoC runs a time-sharing OS and the AI algorithms; a low-power MCU manages audio and standby. StarFive’s SSC833 adopts this split, achieving a US $20–30 cost and the long battery life needed for all-day wear.
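
The split can be pictured as a simple task router: keep always-on audio and standby sensing on the MCU, and wake the SoC only when heavyweight AI work arrives. The sketch below is a minimal illustration of that idea; the task names and routing rules are hypothetical and are not taken from StarFive documentation.

    # Minimal sketch of dual-chip task routing (illustrative; task names are hypothetical).
    LIGHT_TASKS = {"wake_word", "audio_playback", "standby_sensing"}   # stays on the low-power MCU
    HEAVY_TASKS = {"real_time_translation", "scene_understanding"}     # needs the big SoC

    def route_task(task: str, soc_awake: bool) -> tuple[str, bool]:
        """Return (target chip, new SoC power state) for an incoming task."""
        if task in HEAVY_TASKS:
            return "SoC", True            # wake the SoC only for real AI work
        return "MCU", soc_awake           # everything else stays on the MCU

    soc_awake = False
    for task in ["standby_sensing", "wake_word", "real_time_translation", "audio_playback"]:
        chip, soc_awake = route_task(task, soc_awake)
        print(f"{task:22s} -> {chip} (SoC awake: {soc_awake})")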

Demand for AI compute is rising sharply. Yang Jungang, Deputy GM of CCID Consulting’s IC Center, notes that moving from simple voice recognition to real-time translation and scene understanding requires far more MIPS, and more milliwatts to deliver them. To break through the power wall, vendors are migrating to 6 nm and 4 nm nodes and pairing dynamic voltage and frequency scaling (DVFS) with big-little core scheduling. BES2800, for instance, toggles between a high-performance island and an ultra-low-power island to hold average power under 300 mW. 3D TSV stacking is next, promising higher energy efficiency by stacking memory and logic vertically.
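
The principle behind DVFS and big-little scheduling fits in a few lines: run each task on the cheapest operating point that can still serve it, and let a mostly idle duty cycle pull the average power far below the peak. The numbers below are invented for illustration; they are not BES2800 operating points.

    # Toy DVFS policy (illustrative; operating points and loads are invented, not vendor data).
    OPERATING_POINTS = [
        # (island, max load it can serve, typical power in mW)
        ("ultra_low_power",  0.10, 20),
        ("mid_frequency",    0.50, 120),
        ("high_performance", 1.00, 450),
    ]

    def pick_operating_point(load: float) -> tuple[str, int]:
        """Choose the lowest-power island whose capacity covers the current load (0.0-1.0)."""
        for island, capacity, power_mw in OPERATING_POINTS:
            if load <= capacity:
                return island, power_mw
        return OPERATING_POINTS[-1][0], OPERATING_POINTS[-1][2]

    loads = [0.05] * 9 + [0.9]      # 90% near-idle, 10% bursts of AI work
    avg_mw = sum(pick_operating_point(l)[1] for l in loads) / len(loads)
    print(f"Average power: {avg_mw:.0f} mW")   # ~63 mW with these invented numbers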

Vision – CMOS Image Sensor

The CMOS sensor is the “eye” of AI glasses, dictating image quality and AR interaction. Sony’s IMX681 currently rules the roost, appearing in Ray-Ban Meta, Thunderbird V3, Rokid Glasses, and Xiaomi AI Glasses.

IMX681 advantages

• Tiny: 3024 × 4032 pixels (about 12 MP) at a 1 µm pitch, giving a 4.03 mm × 3.02 mm active area, only ~25 % the size of a typical phone sensor (see the quick check after this list).

• Low power: back-illuminated stacked process cuts dissipation versus phone CIS.

• Global shutter: eliminates rolling-shutter distortion in motion-heavy AR tasks (OCR, gesture recognition, face ID).

• Mature ecosystem: deep co-optimization with Qualcomm AR1 shortens development cycles.
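
A quick arithmetic check of the size figures in the first bullet, using only the numbers already quoted:

    # Active area follows directly from pixel count x pixel pitch.
    pixels_h, pixels_v = 4032, 3024
    pixel_pitch_mm = 1.0 / 1000              # 1 um pitch expressed in mm
    width_mm  = pixels_h * pixel_pitch_mm    # 4.032 mm
    height_mm = pixels_v * pixel_pitch_mm    # 3.024 mm
    print(f"{width_mm:.2f} mm x {height_mm:.2f} mm active area, "
          f"{pixels_h * pixels_v / 1e6:.1f} MP")   # 4.03 mm x 3.02 mm, 12.2 MP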

Yet newcomers are emerging. OmniVision (Will Semiconductor) has entered Amazon’s AI+AR project, while SmartSens launched the 12 MP SC1200IOT, purpose-built for AI glasses, on 8 May 2025. The SC1200IOT uses a 1/3.57-inch optical format and 1 µm pixels in a 5.1 mm × 3.7 mm package, trimming power and raising sensitivity by 29 %. Samples are already available, with mass production slated for Q2 2025.

Data Hub – Memory Chips

Memory stores firmware, AI models, video, and images, and commands a notable share of the BOM. In Meta’s Ray-Ban glasses, the Biwin ePOP (combining ROM and RAM) costs ~US $11, about 7 % of hardware spend.

ePOP (Embedded Package-on-Package) stacks NAND flash and LPDDR atop the SoC, saving roughly 60 % of PCB area and lowering power. It has passed Qualcomm platform qualification and is already used in wearables from Meta (Facebook) and Google.

eMCP (Embedded Multi-Chip Package) combines eMMC and LPDDR in a single package; the eMMC’s built-in NAND controller offloads flash management from the host, lowering cost for mid-range frames.

As AI glasses evolve to 4K capture and larger models, capacity will climb from today’s 32 GB to 64 GB and beyond.
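
A rough sense of why: at a typical consumer 4K bitrate (the ~30 Mbit/s used below is an assumption for illustration, not a figure from the article), video alone fills a 32 GB part in a couple of hours, before firmware and on-device models are counted.

    # Rough storage math for 4K capture (assumed ~30 Mbit/s bitrate; illustrative only).
    bitrate_mbit_s = 30
    gb_per_hour = bitrate_mbit_s * 3600 / 8 / 1000     # Mbit/s -> GB per hour (~13.5 GB/h)
    for capacity_gb in (32, 64):
        print(f"{capacity_gb} GB holds roughly {capacity_gb / gb_per_hour:.1f} h of 4K video")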

Beyond the big three (SoC, CIS, and memory), AI glasses still need audio DSPs, PMICs, Wi-Fi/Bluetooth combo chips, and more. Their rapid growth is widening the addressable market for chip vendors while pushing the limits of performance, power, and size, and those pressures will ultimately spill over into other electronics segments.
