Key Responsibilities
Own the health, reliability, and performance of production data pipelines across the company.
Build, maintain, and debug ETL/ELT pipelines feeding analytics, AI systems, and internal tools.
Ensure data quality, monitoring, and alerting so issues are caught early and fixed fast (a sketch of this kind of check follows this list).
Design and maintain integrations across the data stack (APIs, warehouses, orchestration, and downstream consumers).
Support and extend AI and agentic systems by building robust data and context pipelines.
Partner closely with data science, product, and engineering to enable data-driven decision-making.
Proactively improve infrastructure to prevent failures and scale with the business.
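To give a concrete feel for the ownership described above, here is a minimal sketch of a row-count quality check with alerting. The table name, threshold, and webhook URL are hypothetical; a real check would run against the warehouse inside an orchestrated pipeline.

    # Minimal data quality check: alert and fail when a load looks too small.
    # Table name, threshold, and webhook URL below are hypothetical.
    import json
    import urllib.request

    def check_row_count(table: str, actual_rows: int, expected_min: int) -> None:
        """Fail loudly (and alert) when a load produces suspiciously few rows."""
        if actual_rows >= expected_min:
            return
        message = (
            f"Data quality alert: {table} loaded {actual_rows} rows, "
            f"expected >= {expected_min}"
        )
        # Hypothetical Slack-style webhook; swap in whatever alerting the stack uses.
        payload = json.dumps({"text": message}).encode()
        req = urllib.request.Request(
            "https://hooks.example.com/alerts",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)
        raise ValueError(message)

    check_row_count("orders", actual_rows=120_000, expected_min=100_000)  # passes silently

A short load would send the alert and raise, failing the pipeline task instead of letting bad data flow downstream.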
Required Skills and Experience
Required
3+ years of professional experience in data engineering or analytics engineering.
Strong Python and SQL skills (non-negotiable).
Hands-on experience building and owning production ETL/ELT pipelines.
Experience with cloud data warehouses (Snowflake or similar).
Familiarity with orchestration tools (Airflow or equivalent); a sketch of a minimal DAG follows this list.
Experience implementing data quality checks, monitoring, and alerting.
Comfort debugging complex data issues in production environments.
Strong communication skills and ability to partner with non-data stakeholders.
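For a flavor of the orchestration work, here is a minimal sketch of a daily pipeline using Airflow's TaskFlow API. It assumes Airflow 2.4+ (for the schedule parameter); the DAG and task bodies are illustrative stand-ins, not our actual pipelines.

    # Minimal daily extract-and-load DAG; task bodies are illustrative stand-ins.
    from datetime import datetime
    from airflow.decorators import dag, task

    @dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
    def orders_pipeline():
        @task
        def extract() -> list[dict]:
            # Stand-in for a real API or database pull.
            return [{"order_id": 1, "amount": 42.0}]

        @task
        def load(rows: list[dict]) -> None:
            # Stand-in for a warehouse write (e.g. Snowflake).
            print(f"loading {len(rows)} rows")

        load(extract())

    orders_pipeline()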
Nice to have
Experience with AWS (ECS, Lambda, RDS).
Hands-on work with Airbyte, dbt, or similar tools.
Experience building API integrations or scraping pipelines (a sketch follows this list).
Exposure to LLMs, agentic systems, or AI data pipelines.
Experience supporting early-stage or fast-growing startups.
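And a minimal sketch of the kind of paginated API pull this role builds; the endpoint, page parameter, and bearer-token auth are hypothetical.

    # Paginated API pull with simple rate-limit handling.
    # Endpoint, page parameter, and auth scheme are hypothetical.
    import time
    import requests

    def fetch_all(base_url: str, token: str) -> list[dict]:
        rows, page = [], 1
        while True:
            resp = requests.get(
                base_url,
                params={"page": page},
                headers={"Authorization": f"Bearer {token}"},
                timeout=30,
            )
            if resp.status_code == 429:  # rate-limited: back off and retry
                time.sleep(int(resp.headers.get("Retry-After", "5")))
                continue
            resp.raise_for_status()
            batch = resp.json()
            if not batch:  # empty page signals the end
                return rows
            rows.extend(batch)
            page += 1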
Quick self-check
If you’ve owned pipelines end-to-end and been the person on the hook when they break, you’re likely a strong fit.