Data Ops Engineer
About the position
SCRI’s Data Solutions team is looking for a Data Ops Engineer to support strategic data initiatives. In this role, you will design, construct, and maintain data architectures, databases, and large-scale processing systems that support SCRI data initiatives. You will contribute to data modernization efforts by leveraging cloud solutions and optimizing data processing workflows, create and maintain technical documentation, and communicate technical concepts to both technical and non-technical stakeholders.
Responsibilities
Design and implement scalable, efficient data pipelines to support data-driven initiatives
Collaborate with cross-functional teams to understand data requirements and contribute to the development of data architecture
Work on data integration projects, ensuring seamless, optimized data flow between systems
Implement data engineering best practices to ensure data quality, reliability, and performance
Contribute to data modernization efforts by leveraging cloud solutions and optimizing data processing workflows
Create and maintain technical documentation, including data mapping documents, solution design documents, and data dictionaries
Automate deployments and promotions across environments using GitHub CI/CD with GitHub Actions and Liquibase
Participate in the evaluation and identification of new technologies
Other duties as assigned
Requirements
5+ years of experience in data engineering
Bachelor’s degree in a related field (e.g., Computer Science, Information Technology, Data Science) or equivalent related experience
Technical expertise in building and optimizing data pipelines and large-scale processing systems
Technical expertise with Azure Cloud, Data Factory, Batch Service, and Databricks
Experience working with cloud solutions and contributing to data modernization efforts
Experience using Terraform and Bicep scripts to build Azure infrastructure
Experience implementing security changes using Azure RBAC
Experience building cloud infrastructure, including Data Factory, Batch Service, Azure Data Lake Storage Gen2, and Azure SQL Database
Proficiency in SQL and in languages and frameworks such as Python, PySpark, or Scala for data manipulation and transformation
Excellent understanding of data engineering principles, data architecture, and database management
Strong problem-solving skills, attention to detail, and excellent communication skills
Nice-to-haves
Knowledge of the healthcare, distribution, or software industries
Strong technical aptitude with a wide variety of technologies
Ability to rapidly learn and evaluate new tools or technologies
Demonstrated technical experience, innovative thinking, and a strong customer and quality focus
Experience or interest in incentive/bonus-linked compensation structures
Benefits
Comprehensive benefits supporting physical, mental, and financial well-being
Competitive compensation package
Annual bonus (may be offered)
Long-term incentive opportunities (may be offered)
Equity considerations
Compensation adjusted for performance, experience, skills, and geographic market evaluations