Embedding AI into AR glasses

The current applications of consumer-focused AR glasses seem to be confined to entertainment (watching movies and playing video games). The potential of overlaying computer vision outputs directly onto our field of view seems too promising to ignore.

Our team is working towards embedding AI capabilities into AR glasses for consumers. However, we’re having trouble figuring out which applications consumers would actually find most useful on such devices.

Here’s your chance to help shape the future of this platform:

  • What do you think are the most important applications for AI-powered AR devices?

  • What information would you actually want to see on such a device, such that its value outweighs the costs (upfront price, potential social stigma, potential aesthetic cost)?

  • What bundle of capabilities pushes AR glasses over the minimum usefulness threshold for consumers?

To prime the discussion, here are some example applications:

  • Run image diffusion models and modify what you see in real time

  • Emotion / intent recognition on people

  • IRL ad-block (turn all ads into art, etc.)

  • Navigation overlay + reviews for places

  • Question the world around you (look at an object, ask questions about it; a rough sketch of this loop follows the list)
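To make that last example concrete, here is a minimal sketch of the "question the world" loop, assuming OpenCV for frame capture. `answer_question` is a hypothetical placeholder for whatever vision-language model the glasses would actually call; the camera index and the I/O are stand-ins for the glasses' hardware, not a real device API:

```python
# Minimal sketch of a "question the world" loop.
# Assumes: OpenCV (pip install opencv-python) for the camera feed;
# answer_question() is a hypothetical stub for a vision-language model call.
import cv2


def answer_question(frame, question: str) -> str:
    # Hypothetical: encode the frame, send it with the question to a
    # vision-language model (on-device or remote), and return its answer.
    h, w = frame.shape[:2]
    return f"(stub) a VLM would answer {question!r} about this {w}x{h} frame"


def main() -> None:
    cap = cv2.VideoCapture(0)  # stand-in for the glasses' forward camera
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            question = input("Ask about what you're looking at (blank to quit): ")
            if not question:
                break
            # On real AR glasses the answer would render as an overlay in the
            # wearer's field of view; here we just print it.
            print(answer_question(frame, question))
    finally:
        cap.release()


if __name__ == "__main__":
    main()
```

The interesting design question this sketch hides is where the model runs: on-device inference keeps latency and privacy under control, while offloading to a phone or the cloud buys model quality at the cost of both.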

Curious to see what you all think would be the “killer app” of this new computing platform. Really appreciate the feedback, and will keep you all updated!


Hi, what makes you think you can make this work when Google failed with Google Glass? I think people’s privacy concerns were the predominant showstopper.

If I wanted to sidestep this problem, I would make a product intended only for use in certain restricted areas, for example dedicated offices or private homes.