As smart wearables make their way into enterprises, executives are voicing concerns and offering guidance around data privacy.
The first wave of user-facing AI hardware arrived like a wave and receded just as quickly. Google Glass, more than a decade old now, never found many takers, and more recent attempts like the Humane AI Pin and the Rabbit R1 found even fewer users outside a small circle of enthusiasts.
That’s not to say AI hardware is dying out or won’t matter in the future. The first wave of passive, always-active devices has simply left the stage to make way for the second wave — but not without leaving behind lessons that the next generation of smart wearables is already building on.
We’ve already seen several launches. Google’s renewed push into smart glasses through Project Astra takes Gemini beyond Gemini Live on phones. Meta’s Ray-Ban partnership has broken records and delivered a future we thought was years away, even adding a sci-fi-like in-lens display for true augmented reality experiences, controlled from the wrist.
Then there’s Even Realities, whose glasses look like traditional eyewear, saving you from the bulk and the “nerdy” look.
Innovation in smart wearables is indeed happening, and rapidly. The pace may match the advancements in AI, but what isn’t evolving nearly as quickly are the safety and privacy safeguards, especially in business environments, where compliance and accountability demand far more than what passes in the consumer market.
AI wearable presence in enterprises
Smart wearables are gradually being adopted across the value chain. Apple’s Vision Pro, for instance, has proved useful in product design at France-based Dassault Systèmes. In retail, it’s helping customers visualize how redesigned kitchens or living rooms would look.
But these mixed-reality headsets are still very visible and deliberate in their use. The omnipresent, always-listening AI tech stays out of your line of sight, often to the point where you don’t even notice it’s there. These devices work passively and autonomously, picking up subtle behavioral cues with minimal human intervention.
“For me, the big shift is that AI hardware will stop feeling like a gadget and start feeling like a natural extension of you… Screens become optional, and the experience becomes ambient: subtle audio, minimal visuals, gestures. And finally, we move from the world of apps to a world of abilities. You won’t open an app to get something done; the device will simply enhance what you can do in the moment,” Pranav Mistry, founder and CEO of Two, an AR startup working on its own AI hardware product, told FutureNexus.
These kinds of invisible wearables — glasses, necklaces, lapel pins, and similar form factors — have already entered boardrooms to take meeting notes, eliminating the need for a dedicated assistant for this single task.
Plaud’s Note, for example, magnetically attaches to the back of your iPhone and can record hours of audio from calls and in-person meetings. Its clip-on sibling works like a collar mic for more portable meeting use. Devices like these automate transcription, and with LLMs working in the background, they can understand and parse multiple languages or even mixed-language conversations.
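Under the hood, that kind of pipeline is straightforward to prototype. As a rough sketch (not Plaud’s actual implementation), an open speech model such as Whisper can transcribe a recording locally and detect the spoken language in one pass; the audio file name below is a placeholder:

```python
# Illustrative sketch, not any vendor's pipeline: local speech-to-text with the
# open-source Whisper model, which detects the spoken language as it transcribes.
import whisper  # pip install openai-whisper

model = whisper.load_model("base")              # small model; feasible on modest hardware
result = model.transcribe("meeting_audio.wav")  # placeholder path to a recorded meeting

print(result["language"])  # detected language code, e.g. "en"
print(result["text"])      # full transcript, ready for an LLM to summarize
```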
But regardless of the form factor or where they sit in the value chain, privacy and safety concerns inevitably come up when devices are always seeing and listening.
Concerns around safety and privacy
There are two parts to this problem. First is the constant recording of the user’s environment, which effectively means constant surveillance of peers or bystanders without their knowledge or explicit consent. Even in an era where everyone already carries a phone with cameras and microphones, this concern is still legitimate because AI devices aren’t always as apparent as phones.
Second, and perhaps more important, is how that captured data is processed. An always-listening device will inevitably pick up conversations it shouldn’t, and processing that data on remote servers creates serious privacy risks.
To address this, Mistry, who previously served as the CEO of Samsung Technology & Advanced Research Labs, suggests four filters: capture only what you need, process and store data on-device, provide visible cues to people nearby, and give the wearer ownership of the recorded data rather than the company.
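Those four filters translate fairly directly into settings a device could enforce in software. The snippet below is purely illustrative, with hypothetical class and field names rather than any real wearable’s API, but it shows the idea of gating every sensor read behind an explicit, wearer-owned policy:

```python
# Hypothetical sketch of the four filters expressed as explicit device-side settings.
from dataclasses import dataclass


@dataclass
class CapturePolicy:
    capture_audio: bool = True           # capture only what the current task needs
    capture_video: bool = False
    process_on_device: bool = True       # raw data is meant to stay on the device
    recording_indicator_on: bool = True  # visible cue for people nearby
    data_owner: str = "wearer"           # the wearer, not the vendor, owns recordings


def may_capture(policy: CapturePolicy, sensor: str) -> bool:
    """Gate every sensor read through the policy before any data is stored."""
    allowed = {"microphone": policy.capture_audio, "camera": policy.capture_video}
    # Refuse capture outright if the visible indicator is off, so bystanders
    # are never recorded without a cue.
    return allowed.get(sensor, False) and policy.recording_indicator_on


if __name__ == "__main__":
    policy = CapturePolicy()
    print(may_capture(policy, "microphone"))  # True: audio allowed, indicator on
    print(may_capture(policy, "camera"))      # False: video not needed for this task
```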
In workplace settings, he says both organizations and users must share responsibility for the transparent use of wearables.
“Workplaces need more than generic surveillance rules. They need real boundaries around what these devices can sense in professional environments. This means defining clear modes — what is allowed inside a meeting room versus a sensitive zone — and ensuring both the wearer and the organization share responsibility. I also believe in having transparent, user-accessible logs, not just admin dashboards. AI wearables shouldn’t turn into invisible CCTV systems. Governance must be designed to protect workers first, not to monitor them,” he explained.
Visible cues when the device is recording, along with immediately purging any buffered recordings, can help prevent privacy violations for both wearers and bystanders.
These measures also ensure organizations aren’t using AI wearables to spy on their own employees. Putting the onus on organizations using these devices to be transparent and upfront about their policies will make the ecosystem more usable in daily operations, rather than letting it turn into a surveillance tool. That’s a crucial way to build trust.
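Put together, the governance Mistry describes, with per-zone modes, immediate purging, and logs the wearer can read, might look something like the sketch below. Everything here is assumed for illustration; the mode names, the 30-second buffer window, and the class itself are hypothetical, not drawn from any real product:

```python
# Hypothetical sketch: mode-based capture rules, a short rolling buffer that is
# purged when capture is not allowed, and a log the wearer (not just an admin) can read.
import time
from collections import deque

MODES = {
    "meeting_room": {"audio": True, "video": False},     # note-taking allowed, no video
    "sensitive_zone": {"audio": False, "video": False},  # nothing is captured here
}


class WearableRecorder:
    def __init__(self, mode: str = "sensitive_zone", buffer_seconds: int = 30):
        self.mode = mode
        self.buffer: deque = deque()   # rolling window of (timestamp, audio chunk)
        self.buffer_seconds = buffer_seconds
        self.user_log: list = []       # user-accessible event log, not an admin dashboard

    def set_mode(self, mode: str) -> None:
        self.mode = mode
        self._log(f"mode changed to {mode}")

    def on_audio_chunk(self, chunk: bytes) -> None:
        if not MODES[self.mode]["audio"]:
            self.buffer.clear()        # purge immediately when capture is not permitted
            self._log("audio discarded: capture not allowed in this mode")
            return
        now = time.time()
        self.buffer.append((now, chunk))
        # Drop anything older than the rolling window so nothing accumulates silently.
        while self.buffer and now - self.buffer[0][0] > self.buffer_seconds:
            self.buffer.popleft()

    def _log(self, event: str) -> None:
        self.user_log.append((time.time(), event))


if __name__ == "__main__":
    recorder = WearableRecorder(mode="meeting_room")
    recorder.on_audio_chunk(b"\x00\x01")   # kept: within the meeting-room rules
    recorder.set_mode("sensitive_zone")
    recorder.on_audio_chunk(b"\x00\x01")   # discarded, buffer purged
    print(len(recorder.buffer), recorder.user_log)
```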
Building public trust
Gaining public trust around AI wearables will be far harder than it was with early smartphone cameras. AI wearables, unlike a separate phone, are designed to be discreet and blend into everyday objects — eyewear, jewelry, wristbands, watches, and more.
“I want people to feel that these devices are genuinely on their side. Ideally, there should be a sense of empowerment — ‘this lets me do things I couldn’t do before’ — combined with a feeling of safety and ease… Ultimately, trust will come from simplicity and transparency, not complex documentation. If the device feels like a superpower that respects you, we’ve done our job,” Mistry added.
He also noted that trust must be built from the start, beginning with design and extending to communication with users, with full transparency. It has to be a part of the product itself instead of being just a marketing afterthought.
“Firstly, companies need to be honest about what the device can actually do — no exaggerations, no hidden behaviors. Secondly, the experience should be delightful without being addictive; people should feel enhanced, not manipulated. And finally, privacy must be enforced by design, not promised in footnotes. Build systems where even the company cannot violate user trust. AI hardware should feel like a helpful companion with boundaries, not a sensor extracting value from you,” he said.
Ultimately, the next wave of AI wearables will reshape how work is done, the same way AI tools like ChatGPT quietly became standard (sometimes even for the worse) before policies caught up. The balance can’t be accidental. Builders have to hardwire boundaries into the devices themselves, while organizations must be ready to absorb a new class of AI tools with clear rules, transparent logs, and real accountability. It’s not enough to react after adoption; enterprises need to brace for it now.
If AI hardware is going to feel like a natural extension of human ability, not a silent auditor, trust has to be designed before deployment. The real question isn’t whether these devices will enter boardrooms – it’s whether we’ll be ready when they do.

