GCP Data Engineer
About the position
GCP migration project (2 contractors) focused on migrating Snowflake data into GCP. The migration involves data model changes (e.g., converting snapshot loads to incremental loads, combining datasets). The role emphasizes building automated validation pipelines that combine data science and data engineering, and it requires hands-on experience with modern GCP data and AI services.
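To make the snapshot-to-incremental change concrete, here is a minimal sketch of an incremental load expressed as a BigQuery MERGE run from Python. All table and column names (`raw.customer_snapshot`, `analytics.customer`, `customer_id`, `updated_at`) and the watermark value are hypothetical placeholders, not details from this project.

```python
import datetime
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

# Watermark for the incremental filter; a real pipeline would persist this
# between runs (hypothetical value here).
last_load_ts = datetime.datetime(2024, 1, 1, tzinfo=datetime.timezone.utc)

# Instead of rewriting the full snapshot on every run, merge only rows that
# changed since the last load into the target table.
merge_sql = """
MERGE `analytics.customer` AS target
USING (
  SELECT customer_id, name, updated_at
  FROM `raw.customer_snapshot`
  WHERE updated_at > @last_load_ts
) AS source
ON target.customer_id = source.customer_id
WHEN MATCHED THEN
  UPDATE SET name = source.name, updated_at = source.updated_at
WHEN NOT MATCHED THEN
  INSERT (customer_id, name, updated_at)
  VALUES (source.customer_id, source.name, source.updated_at)
"""

job = client.query(
    merge_sql,
    job_config=bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("last_load_ts", "TIMESTAMP", last_load_ts)
        ]
    ),
)
job.result()  # block until the MERGE completes
```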
Responsibilities
- Perform data validations: compare migrated data against the Snowflake source and assess the impact of changed data models.
- Build automated validation pipelines (see the sketch after this list).
- Implement scalable data pipelines and system interfaces.
- Work with GCP cloud tools and services.
- Collaborate with stakeholders to translate complex data into clear, actionable recommendations.
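As a hedged illustration of one automated validation step, the sketch below compares row counts for a single table between the Snowflake source and its migrated BigQuery target. All connection parameters, the table name (ORDERS), and the dataset name (my_dataset) are placeholders.

```python
import snowflake.connector
from google.cloud import bigquery

TABLE = "ORDERS"  # hypothetical table present in both systems


def snowflake_row_count() -> int:
    # Placeholder credentials; in practice these come from a secret store.
    conn = snowflake.connector.connect(
        account="my_account",
        user="my_user",
        password="my_password",
        warehouse="my_wh",
        database="my_db",
        schema="public",
    )
    try:
        cur = conn.cursor()
        cur.execute(f"SELECT COUNT(*) FROM {TABLE}")
        return cur.fetchone()[0]
    finally:
        conn.close()


def bigquery_row_count() -> int:
    client = bigquery.Client()
    rows = client.query(f"SELECT COUNT(*) AS n FROM `my_dataset.{TABLE}`").result()
    return next(iter(rows)).n


def validate() -> None:
    src, tgt = snowflake_row_count(), bigquery_row_count()
    if src != tgt:
        raise AssertionError(f"{TABLE}: source={src}, target={tgt}")
    print(f"{TABLE}: row counts match ({src})")


if __name__ == "__main__":
    validate()
```

A fuller pipeline would extend this with column-level checksums and schema comparisons, and run it per table on a schedule.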
Requirements
- 3+ years of programming experience in Python, PySpark, and SQL.
- Hands-on experience with GCP data and AI services such as BigQuery, Vertex AI, GCP ADK, Cloud Functions, Cloud Storage, and Looker.
- Ability to design and implement scalable data pipelines.
- Highly technical and proactive, with strong communication skills and the ability to learn new tools quickly (example referenced: integrating GCP with Palantir Foundry via REST APIs, connection mechanisms, and egress/ingress policies; see the sketch after this list).
- Experience with modern AI frameworks (e.g., GCP Vertex AI, Gemini, Palantir Foundry AIP).
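A hedged sketch of the kind of GCP-to-Foundry hand-off the posting alludes to: read rows from BigQuery and POST them to a Foundry ingestion endpoint over REST. The Foundry URL, endpoint path, payload shape, and token handling are illustrative assumptions only; the real contract is defined by the Foundry API docs, and both egress from GCP and ingress into Foundry must be permitted by policy.

```python
import requests
from google.cloud import bigquery

FOUNDRY_URL = "https://example.palantirfoundry.com/api/ingest"  # placeholder URL
FOUNDRY_TOKEN = "..."  # placeholder; keep real tokens in Secret Manager


def push_to_foundry() -> None:
    client = bigquery.Client()
    # Hypothetical table and columns; values are assumed JSON-serializable.
    rows = client.query(
        "SELECT id, amount FROM `my_dataset.orders` LIMIT 100"
    ).result()
    payload = [dict(row) for row in rows]  # bigquery.Row converts to dict
    resp = requests.post(
        FOUNDRY_URL,
        json=payload,
        headers={"Authorization": f"Bearer {FOUNDRY_TOKEN}"},
        timeout=60,
    )
    resp.raise_for_status()


if __name__ == "__main__":
    push_to_foundry()
```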
Nice-to-haves
- Palantir Foundry experience.
- Domain background in healthcare, pharmacy, or pay analytics (especially finance or contract modeling).
- Experience with data science workflows and with building automated validations that combine data science and engineering.