Lead Data Engineer
Your role at Dynatrace
The Lead Data Engineer builds the foundation everything else sits on. Without clean, well-structured, timely data, there are no models to train, no experiments to run, and no products to ship. You design, build, and operate the pipelines, data platforms, and transformation layers that power the team's AI systems — and you do this at enterprise scale, not just in sandboxed environments. You have a track record of moving beyond PoC data work into hardened production infrastructure: systems that handle real data volumes, real failure modes, and real business SLAs. You treat data lineage, contracts, and observability as engineering requirements — not afterthoughts. Snowflake is your primary platform; you know it deeply, including Cortex. This role reports into the VP of AI, Collaboration, and Data's organization.
- Architect and operate Snowflake as the core data platform — including Snowpark for ML, Cortex LLM functions, Cortex Search, and Dynamic Tables for AI-ready data products.
- Design and maintain batch and streaming data pipelines for ingestion, transformation, and delivery of training and inference data at enterprise scale.
- Build and own dbt transformation layers: modular data models, tests, documentation, and CI/CD-integrated dbt deployments — including data contracts that define and enforce schema agreements between data producers and consumers.
- Maintain full data lineage from source to training artifact to inference endpoint — enabling reproducibility, auditability, and root-cause analysis when model quality degrades.
- Implement AI data observability across all pipelines: data freshness monitoring, volume anomaly detection, schema drift alerting, and data quality SLAs using dbt tests, Great Expectations, or equivalent tooling.
- Design and expose MCP (Model Context Protocol) servers on top of Snowflake data assets — enabling AI agents to query, retrieve, and act on enterprise data securely and reliably, with access-controlled and auditable tool interfaces.
- Build and maintain Power BI dataflows and semantic models that connect Snowflake data to business-facing dashboards and operational reports.
- Build and operate feature stores and vector databases that serve both model training and real-time RAG inference — with lineage, versioning, and responsible data governance policies covering PII handling, consent, and data retention.
- Manage data access controls, anonymization, and governance policies — ensuring all datasets used for AI training and inference comply with data protection regulations and responsible AI standards.
- Optimize Snowflake and AWS compute costs and query performance; implement credit governance, resource monitoring, and cost allocation for AI workloads.
This is a remote-eligible position. Candidates who live within a 45-mile radius of Boston, MA; Detroit, MI; Denver, CO; or Mountain View, CA will be required to work hybrid (2 days per week) out of our Dynatrace office. Candidates are required to work EST hours for this position.
What will help you succeed
Minimum Requirements:
- 5+ years of data engineering experience with proven track record building and operating Snowflake in production at enterprise scale.
Preferred Requirements:
- Expert-level proficiency in Snowflake (Snowpark, Cortex, Dynamic Tables) and dbt (models, tests, documentation, data contracts, CI/CD integration).
- Expert-level proficiency in Python and SQL.
- Experience building and monitoring real-time data pipelines.
- Hands-on experience with AWS data services (S3, Glue, Lambda, Redshift, SageMaker Pipelines).
- Proven track record implementing data observability using Great Expectations, dbt tests, or Monte Carlo.
- Experience with orchestration tools and infrastructure-as-code.
- Demonstrated experience implementing data lineage and observability on a live platform.
- Track record applying responsible data governance practices: PII handling, access controls, anonymization, consent tracking, and data retention policies.
- Experience optimizing cloud compute costs and implementing resource monitoring for large-scale data workloads.
Why you will love being a Dynatracer
• A one-product software company creating real value for the largest enterprises and millions of end customers globally, striving for a world where software works perfectly.
• Working with the latest technologies at the forefront of innovation in tech at scale, but also in other areas like marketing, design, and research.
• A team that thinks outside the box, welcomes unconventional ideas, and pushes boundaries.
• An environment that fosters innovation, enables creative collaboration, and allows you to grow.
• A globally unique and tailor-made career development program recognizing your potential, promoting your strengths, and supporting you in achieving your career goals.
• A truly international mindset shaped by the diverse personalities, expertise, and backgrounds of our global team.
• A relocation team that is eager to help you start your journey in a new country, always by your side to support you.
• Attractive compensation packages and stock purchase options with numerous benefits and advantages.
Compensation and Rewards
Salary $150K - $180K, DOE, plus Health, Dental, Life, STD, LTD, 401(k), and PTO. Total compensation may vary depending on candidate experience/education and location.