A client of Robert Half is looking for a highly skilled Data Engineer to design, build, and optimize the data pipelines and infrastructure that enable advanced analytics and business intelligence. The ideal candidate will have strong experience with big data technologies and cloud platforms, along with hands-on Databricks expertise.
Key Responsibilities
- Data Pipeline Development:
  - Design, develop, and maintain scalable ETL/ELT pipelines using Databricks and other tools.
  - Integrate data from multiple sources into data lakes and warehouses.
- Data Architecture & Modeling:
  - Implement robust data models for analytics and reporting.
  - Ensure data quality, consistency, and governance across systems.
- Performance Optimization:
  - Optimize Spark jobs and Databricks workflows for efficiency and cost-effectiveness.
  - Monitor and troubleshoot data pipeline performance issues.
- Collaboration & Support:
  - Work closely with data scientists, BI engineers, and business stakeholders to deliver data solutions.
  - Provide technical guidance on best practices for data engineering and cloud architecture.
Required Qualifications
- Bachelor’s degree in Computer Science, Information Systems, or a related field.
- 5+ years of experience in data engineering and big data technologies.
- Strong proficiency in Databricks and Apache Spark.
- Expertise in SQL and relational databases (e.g., SQL Server, PostgreSQL).
- Experience with cloud platforms (AWS, Azure, or GCP) and their data services.
- Hands-on experience with data lake and data warehouse architectures.
- Proficiency in Python or Scala for data processing.
- Solid understanding of ETL/ELT processes and data governance principles.
Preferred Skills
- Experience with Delta Lake and Lakehouse architecture.
- Familiarity with CI/CD pipelines for data workflows.
- Knowledge of big data tools (e.g., Kafka, Hadoop).
- Exposure to machine learning or advanced analytics.