About Quantiphi:
Quantiphi is an award-winning Applied AI and Big Data software and services company, driven by a deep desire to solve transformational problems at the heart of businesses. Our signature approach combines groundbreaking machine-learning research with disciplined cloud and data-engineering practices to create breakthrough impact at unprecedented speed.
Quantiphi has seen 2.5x year-over-year growth since its inception in 2013; we don't just innovate, we lead.
Headquartered in Boston, with 4,000+ professionals across the globe, Quantiphi leverages Applied AI technologies across multiple industry verticals (Telco, BFSI, HCLS, etc.) and is an established Elite/Premier Partner of NVIDIA, Google Cloud, AWS, Snowflake, and others.
We have been recognized with:
- 17x Google Cloud Partner of the Year awards in the last 8 years
- 3x AWS AI/ML award wins
- 3x NVIDIA Partner of the Year titles
- 2x Snowflake Partner of the Year awards
- Recognized as a Leader by Gartner, Forrester, IDC, ISG, Everest Group, and other leading analyst and independent research firms
- We offer first-in-class industry solutions across Healthcare, Financial Services, Consumer Goods, Manufacturing, and more, powered by cutting-edge Generative AI and Agentic AI accelerators
- We have been certified as a Great Place to Work for the third year in a row (2021, 2022, 2023)
Be part of a trailblazing team that’s shaping the future of AI, ML, and cloud innovation. Your next big opportunity starts here!
For more details, visit: Website or LinkedIn Page.
Role: Senior Product Manager
Experience level: 5+ years
Employment type: Full Time
Location: Toronto, ON
Description:
We are seeking an experienced Product Manager to lead the development of a centralized enterprise platform for managing AI agents powered by large language models (LLMs). This role is at the intersection of AI product strategy, developer platforms, and enterprise IT transformation, and will involve deep collaboration with multiple client teams building AI-driven solutions.
You will be responsible for defining and delivering shared platform capabilities (e.g., templates, agent lifecycle tooling, observability, governance frameworks) that accelerate the safe and scalable deployment of AI agents across a distributed technology landscape. This is a highly collaborative role requiring technical fluency, product leadership, and stakeholder alignment across varying levels of maturity and adoption readiness.
Key Responsibilities:
- Define the platform roadmap and product strategy for enabling enterprise-scale development and deployment of LLM-powered agents.
- Lead the design of core platform capabilities, including:
  - Reference patterns and reusable components for agent development.
  - Common tooling for testing, evaluation, observability, and governance.
  - Scalable inference and orchestration workflows.
  - Shared deployment templates and service hooks.
- Act as the bridge between centralized engineering teams and distributed business-unit-aligned development teams, who each have varying tech stacks, priorities, and adoption curves.
- Develop a pragmatic approach to platform standardization:
  - Build early champions and show value through rapid enablement.
  - Identify middle-ground use cases requiring tailored paths.
  - Establish minimum standards and governance for mandatory components.
- Prioritize and manage a backlog of platform features based on client needs, feedback from internal users, and evolving best practices in the AI/ML ecosystem.
- Drive developer experience improvements across onboarding, documentation, and tooling to ensure successful platform adoption.
Ideal Experience:
- 5+ years of experience in product management with a focus on internal platforms, developer tooling, or AI/ML systems.
- Strong understanding of modern AI agent architectures, orchestration frameworks, and LLM operations (LangChain, RAG, evaluation tooling, etc.).
- Familiarity with DevOps, CI/CD, SDLC governance, and cloud infrastructure in an enterprise context.
- Demonstrated success managing platform adoption across distributed or federated development teams.
- Exceptional communication and stakeholder management skills, with the ability to navigate organizational complexity and drive alignment.
- Experience working in or with consulting environments or matrixed client organizations is a plus.
Key Capabilities We Value:
- Platform Thinking – Ability to design modular, reusable systems that can scale across teams.
- Negotiation & Influence – Capable of earning trust and guiding decision-making across varied stakeholder groups.
- Technical Fluency – Comfortable speaking with engineers, understanding architecture tradeoffs, and guiding implementation paths.
- Execution Focus – Experience shipping platform features with clear success metrics and adoption KPIs.
- Client-Centric Mindset – Adaptable, professional, and focused on solving problems in dynamic and complex client environments.
What is in it for you:
- Be part of a team and company that has won NVIDIA's AI Services Partner of the Year three times in a row with an unparalleled track record of building production AI applications on DGX and Cloud GPUs.
- Strong peer learning that will accelerate your learning curve across Applied AI and GPU computing, as well as softer aspects such as technical communication.
- Exposure to working with highly experienced AI leaders at Fortune 500 companies and innovative market disruptors looking to transform their business with Generative AI.
- Access to state-of-the-art GPU infrastructure in the cloud and on-premises.
- Be part of the fastest-growing AI-first digital transformation and engineering company in the world.