Amtex Systems Inc
Department : Engineering
New York, New York | Full-time
Role Overview / Mission
This role is responsible for designing, implementing, and maintaining infrastructure and processes on Google Cloud Platform (GCP) to enable the development, deployment, and monitoring of machine learning models at scale. It bridges data science, data engineering, and infrastructure to ensure that machine learning systems are reliable, scalable, and optimized for GCP environments.
Key Responsibilities
- Design and implement pipelines for deploying machine learning models into production using GCP services such as AI Platform, Vertex AI, Cloud Run, and Cloud Composer, ensuring high availability and performance (see the deployment sketch after this list).
- Build and maintain scalable GCP-based infrastructure using services like Google Compute Engine, Google Kubernetes Engine (GKE), and Cloud Storage to support model training, deployment, and inference.
- Develop automated workflows for data ingestion, model training, validation, and deployment using GCP tools such as Cloud Composer and CI/CD pipelines integrated with GitLab and Bitbucket repositories.
- Implement monitoring solutions using Google Cloud Monitoring and Logging to track model performance, data drift, and system health, and take corrective actions as needed.
- Work closely with data scientists, data engineers, infrastructure, and DevOps teams to streamline the ML lifecycle and ensure alignment with business objectives.
- Manage versioning of datasets, models, and code using GCP tools like Artifact Registry or Cloud Storage to ensure reproducibility and traceability of machine learning experiments.
- Optimize model performance and resource utilization on GCP, leveraging containerization with Docker and GKE, and utilizing cost-efficient resources like preemptible VMs or Cloud TPU/GPU.
- Ensure ML systems comply with data privacy regulations (e.g., GDPR, CCPA) using GCP’s security tools like Cloud IAM, VPC Service Controls, and Data Loss Prevention (DLP).
- Integrate GCP-native tools (e.g., Vertex AI, Cloud Composer) and open-source MLOps frameworks (e.g., MLflow, Kubeflow) to support the ML lifecycle.
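Illustrative sketch (not part of the formal requirements): a minimal Vertex AI deployment flow of the kind described in the first bullet above. It assumes the `google-cloud-aiplatform` SDK, configured credentials, and a trained scikit-learn model artifact already in Cloud Storage; the project ID, region, bucket path, display name, and serving image tag are hypothetical placeholders.

```python
# Minimal sketch: registering and deploying a model on Vertex AI.
# Assumes `google-cloud-aiplatform` is installed, credentials are configured,
# and a trained model artifact already sits in Cloud Storage. The project ID,
# region, bucket path, and display name below are hypothetical.
from google.cloud import aiplatform

aiplatform.init(project="my-gcp-project", location="us-central1")

# Register the artifact in the Vertex AI Model Registry, using one of
# Google's prebuilt serving containers (exact image tag varies by framework
# and version).
model = aiplatform.Model.upload(
    display_name="churn-model",
    artifact_uri="gs://my-bucket/models/churn/",
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"
    ),
)

# Deploy to a managed endpoint with autoscaling bounds for high availability.
endpoint = model.deploy(
    machine_type="n1-standard-4",
    min_replica_count=1,
    max_replica_count=3,
)

# Online prediction against the deployed endpoint.
print(endpoint.predict(instances=[[0.3, 1.2, 5.0]]))
```

In practice a step like this would typically run as the final stage of a Cloud Composer or CI/CD pipeline rather than by hand.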
Required Qualifications / Skills
- Technical Skills:
  - Proficiency in programming languages such as Python.
  - Expertise in GCP services, including Vertex AI, Google Kubernetes Engine (GKE), Cloud Run, BigQuery, Cloud Storage, Cloud Composer (managed Airflow), and Dataproc with PySpark.
  - Experience with infrastructure-as-code using Terraform.
  - Familiarity with containerization and orchestration (Docker, GKE) and with CI/CD pipelines in GitLab and Bitbucket.
  - Knowledge of ML frameworks (TensorFlow, PyTorch, scikit-learn), MLOps tools compatible with GCP (MLflow, Kubeflow), and GenAI retrieval-augmented generation (RAG) applications.
  - Understanding of data engineering concepts, including ETL pipelines built with BigQuery, Dataflow, and Dataproc with PySpark (see the PySpark sketch after this section).
- Soft Skills:
  - Strong problem-solving and analytical abilities.
  - Excellent communication and collaboration skills.
  - Ability to work in a fast-paced, cross-functional environment.
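Illustrative sketch for the ETL bullet above: a PySpark job of the kind submitted to Dataproc, reading from and writing to BigQuery via the Spark BigQuery connector (bundled with recent Dataproc images). All dataset, table, and bucket names are hypothetical.

```python
# Minimal sketch: a PySpark ETL step of the kind run on Dataproc.
# Assumes the spark-bigquery connector is available on the cluster;
# dataset, table, and bucket names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("feature-etl").getOrCreate()

# Extract: read raw events from BigQuery.
events = (
    spark.read.format("bigquery")
    .option("table", "my-gcp-project.raw.events")
    .load()
)

# Transform: aggregate per-user features for downstream model training.
features = events.groupBy("user_id").agg(
    F.count("*").alias("event_count"),
    F.avg("session_seconds").alias("avg_session_seconds"),
)

# Load: write the feature table back to BigQuery; the connector stages
# data through a temporary Cloud Storage bucket for the write.
(
    features.write.format("bigquery")
    .option("table", "my-gcp-project.features.user_features")
    .option("temporaryGcsBucket", "my-staging-bucket")
    .mode("overwrite")
    .save()
)
```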
Preferred / Nice-to-Have Skills
- Experience with large-scale distributed ML systems on GCP, such as Vertex AI Pipelines or Kubeflow on GKE, and with Vertex AI Feature Store.
- Exposure to Generative AI (GenAI) and Retrieval-Augmented Generation (RAG) applications and deployment strategies.
- Familiarity with GCP’s model monitoring tools and techniques for detecting data drift or model degradation.
- Knowledge of microservices architecture and API development using Cloud Endpoints or Cloud Functions (see the sketch after this list).
- Google Cloud Professional certifications (e.g., Professional Machine Learning Engineer, Professional Cloud Architect).
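Illustrative sketch for the microservices/API bullet: an HTTP Cloud Function (2nd gen) built on the Functions Framework that fronts a Vertex AI endpoint. The endpoint ID, project, and request schema are hypothetical placeholders.

```python
# Minimal sketch: an HTTP microservice on Cloud Functions (2nd gen) using
# the Functions Framework. The project ID, Vertex AI endpoint ID, and
# request schema are hypothetical placeholders.
import functions_framework
from google.cloud import aiplatform

# Client and endpoint handle are created once at cold start and reused
# across invocations.
aiplatform.init(project="my-gcp-project", location="us-central1")
endpoint = aiplatform.Endpoint("1234567890")

@functions_framework.http
def predict(request):
    """Forward a JSON payload of feature rows to a Vertex AI endpoint."""
    payload = request.get_json(silent=True) or {}
    instances = payload.get("instances", [])
    if not instances:
        return {"error": "missing 'instances'"}, 400
    prediction = endpoint.predict(instances=instances)
    return {"predictions": prediction.predictions}, 200
```

A function like this would typically be deployed with `gcloud functions deploy` using an HTTP trigger, with IAM restricting who may invoke it.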
Location & Work Setup
This is a full-time remote position.
Timezone: America/Denver
Posted: Sep 09, 2025
Expires: Oct 09, 2025