Our client is a fast-moving, innovation-driven team at the forefront of artificial intelligence, based in Toronto and embedded in Canada's thriving AI ecosystem. They are seeking a Senior Data Engineer for a 12-month contract starting in January, with a strong possibility of extension or conversion to full-time. The role requires 2-3 days onsite at their Toronto office.
They specialize in transforming ambitious concepts into high-impact, real-world solutions. Their work spans intelligent automation, strategic forecasting, and agentic platforms that streamline complex workflows and unlock business value. From rapid prototyping to consortium-led initiatives, they are shaping the future of procurement and decision intelligence. Join them to be part of a team that values speed, creativity, and purpose, delivering AI that matters.
Who You Are and What You'll Do
We are looking for an experienced Senior Data Engineer to join our client's team and play a pivotal role in building and managing large-scale data pipelines, with a focus on supporting the development of Large Language Models (LLMs) and agent-based applications. Beyond your technical expertise, you will mentor and manage junior data engineers, ensuring high standards and fostering growth.
Responsibilities:
- Develop Data Pipelines: Design and implement scalable, reliable pipelines to handle growing data volumes and complexity for LLM and agent applications.
- LLM & GenAI Support: Develop and optimize data infrastructure for predictive modeling, machine learning, and generative AI applications.
- Collaborate Across Teams: Partner with data scientists, ML engineers, and business stakeholders to understand data needs and deliver solutions.
- Data Integration: Extract, transform, and load (ETL) large datasets from structured and unstructured sources via APIs and other technologies.
- Documentation & Best Practices: Maintain clear technical documentation for data engineering workflows and processes, and uphold engineering best practices across the team.
- Mentorship & Growth: Foster a collaborative environment by mentoring junior engineers on best practices, new technologies, and methodologies.
Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or a related field; an advanced degree is a plus.
- 5+ years of experience in data engineering or related roles, with at least 2 years working with LLMs, agent-based applications, or similar advanced ML technologies.
- Proficiency in SQL, Python, and ETL frameworks for building data pipelines.
- Strong experience working with APIs for ETL processes involving both structured and unstructured data (e.g., JSON, XML).
- Familiarity with machine learning models, especially LLMs and generative AI, and experience optimizing data pipelines to support them.
- Deep knowledge of data modeling and storage for structured and unstructured data, including cloud platforms like Google BigQuery and Azure Data Lake.
- Strong leadership skills to mentor junior engineers, with excellent communication and collaboration abilities.
- Proven problem-solving skills focused on data optimization, performance, and quality assurance.
Please note: Only candidates who meet the key criteria for this role and who submit their resume will be contacted. Thank you for your interest and time.