The hard part was never the model. It was knowing what to build around it.
I'm Prithvi. I spent a decade convincing computers to do exactly what I told them, then switched to convincing them to figure it out themselves — which is either a career pivot or a loss of control, depending on who you ask. Either way, I've shipped things that work in both paradigms, and I find the ambiguous middle between them unreasonably fun.
I've always been drawn to problems where the answer isn't obvious yet. That curiosity carried me through a decade of building systems that genuinely mattered: financial platforms where precision is non-negotiable, ML-driven logistics for autonomous vehicles, and enterprise tools used by creative teams at some of the world's most recognisable companies.
What I love about that journey is what it taught me about craft. Every domain had its own definition of quality — and getting good at each one sharpened my instinct for what makes a system trustworthy, not just functional. That foundation turned out to be exactly what AI engineering needs most.
Two years ago I went deep on LLMs and agentic systems, and something clicked. The discipline of evaluating whether an agent is actually working, of designing routing logic so the right model handles the right task, of building RAG pipelines that retrieve the right things for the right reasons — these are fundamentally engineering problems, and I'd been training for them for a decade without knowing it.
The Workfront Approvals Agent was where it all came together. No template, no blueprint — just a bold goal, a talented team, and the belief that if you get the architecture right, the agent can earn the user's trust. That's the kind of challenge I want to keep chasing.
I'm particularly interested in roles where the problem space is still being defined — agentic systems, human-AI interaction, information retrieval at scale. If that sounds like what you're working on, I'd love to hear about it.