Role: Azure Data Engineer
Location: Toronto (Hybrid)
Responsibilities:
- Design, build, and optimize large-scale ETL/ELT pipelines using Databricks and PySpark.
- Develop and maintain data ingestion frameworks for structured and unstructured datasets in ADLS.
- Collaborate with data analysts, data scientists, and product teams to understand business requirements and implement scalable data solutions.
- Implement data transformations using functional programming principles so pipeline code stays reusable and modular (see the sketch after this list).
- Work with Azure Data Factory (ADF) to orchestrate data workflows and integrate multiple data sources.
- Ensure high levels of data quality, integrity, governance, and security across all data processes.
- Write clean, optimized, and testable code in Python and SQL.
- Monitor performance, troubleshoot issues, and optimize data jobs for cost and speed.
- Participate actively in Agile ceremonies: sprint planning, standups, retrospectives, and backlog refinement.
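
For context, a minimal, hedged sketch of the reusable, functionally styled PySpark transformations described above. All table, column, storage-account, and rate values are hypothetical, chosen only to illustrate composing pure functions with DataFrame.transform:

    from pyspark.sql import DataFrame, SparkSession
    from pyspark.sql import functions as F

    # Pure functions (no hidden state) compose cleanly and are easy to unit-test.
    def drop_incomplete_orders(df: DataFrame) -> DataFrame:
        # Keep rows where the hypothetical order_id and amount columns are present.
        return df.filter(F.col("order_id").isNotNull() & F.col("amount").isNotNull())

    def add_amount_cad(df: DataFrame, fx_rate: float) -> DataFrame:
        # Derive a CAD amount from a hypothetical USD amount column.
        return df.withColumn("amount_cad", F.col("amount") * F.lit(fx_rate))

    spark = SparkSession.builder.getOrCreate()

    # abfss:// is the standard scheme for reading ADLS Gen2 from Databricks;
    # the container, storage account, and path here are made up.
    orders = spark.read.format("parquet").load(
        "abfss://raw@examplestorage.dfs.core.windows.net/sales/orders"
    )

    curated = (
        orders
        .transform(drop_incomplete_orders)
        .transform(lambda df: add_amount_cad(df, fx_rate=1.35))
    )

Because each step is a plain function from DataFrame to DataFrame, the same building blocks can be reused across pipelines and exercised in isolation.
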
Required Skills:
- Hands-on experience with Databricks and PySpark for building large-scale data pipelines.
- Experience with Azure Data Lake Storage (ADLS) for structured and unstructured data.
- Strong proficiency in Python for data engineering workflows.
- Hands-on experience with Azure Data Factory (ADF) for data pipeline orchestration.
- Solid understanding of SQL and experience with performance tuning.
- Experience with functional programming concepts in data engineering projects (see the testing sketch after this list).
- Strong understanding of cloud data architecture and modern data engineering patterns.
- Experience working in an Agile/Scrum environment.
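
To illustrate the testability that the functional style above buys, a small hedged sketch (the function and data are hypothetical; a local Spark session keeps the test cluster-free and runnable under pytest):

    from pyspark.sql import DataFrame, SparkSession
    from pyspark.sql import functions as F

    def drop_incomplete_orders(df: DataFrame) -> DataFrame:
        # Same hypothetical pure function as in the earlier sketch.
        return df.filter(F.col("order_id").isNotNull() & F.col("amount").isNotNull())

    def test_drop_incomplete_orders() -> None:
        # local[1] runs Spark in-process, so the unit test needs no cluster.
        spark = SparkSession.builder.master("local[1]").getOrCreate()
        df = spark.createDataFrame([("o1", 10.0), (None, 5.0)], ["order_id", "amount"])
        assert drop_incomplete_orders(df).count() == 1
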
Regards,
Praveen Kumar
Talent Acquisition Group – Strategic Recruitment Manager