The first wave of AI products has been built around a core interaction: the user always initiates. The AI sits there, waiting for you to make a move. You have to prompt, and ideally prompt the right way. You have to feed it your context.

Coding agents are showing us a path for what’s going to happen next. We’re starting to see proactive coding agents across the software development lifecycle: agents plugged into your monitoring or bug reporting systems, automatically picking up tasks and starting work without you prompting them to. It’s still early days. Even Claude Code is still primarily used as a reactive agentic product, but it gives us a signal of where we’re heading.

What needs to change is the structure of the context the AI has on you. It has to become ambient. Your AI needs to be connected to everything that happens in your world, and it needs to observe you in real time.

It’s not about connecting your data once; it’s about creating a continuous stream of data where any single change in your personal context becomes an invisible prompt. Email received, meeting ended, location changed, message sent, purchase made. All of these are tiny context changes, and they can become invisible triggers for a proactive AI interaction.
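To make this concrete, here is a minimal sketch of what such an event-to-trigger layer could look like. Everything here is hypothetical: the event names (`email_received`, etc.), the `AmbientContextBus` class, and the manager-email rule are illustrative assumptions, not a description of any existing product.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: every context change is an event, and registered
# rules decide whether it should wake the agent (the "invisible prompt").

@dataclass
class ContextEvent:
    kind: str                              # e.g. "email_received", "meeting_ended"
    payload: dict = field(default_factory=dict)

class AmbientContextBus:
    def __init__(self):
        # List of (predicate, action) pairs; both take a ContextEvent.
        self._triggers = []

    def on(self, predicate, action):
        """Register a trigger: if predicate(event) is true, run action(event)."""
        self._triggers.append((predicate, action))

    def publish(self, event):
        """Fan an event out to every matching trigger; return the agent actions taken."""
        return [action(event)
                for predicate, action in self._triggers
                if predicate(event)]

# Example wiring: an email from your manager proactively drafts a reply.
bus = AmbientContextBus()
bus.on(
    lambda e: e.kind == "email_received" and e.payload.get("from") == "manager",
    lambda e: f"draft reply to: {e.payload['subject']}",
)

actions = bus.publish(ContextEvent("email_received",
                                   {"from": "manager", "subject": "Q3 plan"}))
# actions == ["draft reply to: Q3 plan"]
```

The point of the sketch is the inversion: the user never prompts, the stream does. A non-matching event (say, a meeting ending) simply produces no action until someone registers a rule for it.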

There is still a missing link, though. We need always-on AI wearables to capture our real-world context. Someone’s going to crack this soon, and when they do, the shift accelerates.

This shift from reactive to proactive is a new UX paradigm for AI products, and we’re about to see a second wave of AI products built around it, across consumer, prosumer and enterprise.