Role Summary
We are seeking an Azure DataOps Data Engineer – II with strong hands-on experience in building, operating, and supporting scalable data platforms on Microsoft Azure. This role focuses on operational excellence, production support, and continuous improvement of data systems. The ideal candidate should be comfortable working in a 24/7 rotational support model.
Key Responsibilities
- Support production data platforms, ensuring high availability, reliability, and performance.
- Monitor data pipelines and jobs, proactively identifying and resolving failures, performance issues, and data discrepancies.
- Perform root cause analysis (RCA) for incidents and implement preventive measures.
- Implement DataOps best practices including automation, monitoring, alerting, and operational dashboards.
- Collaborate with cross-functional teams to support reporting, analytics, and downstream consumption.
- Maintain documentation for pipelines, operational runbooks, and support procedures.
- Participate in on-call and rotational shift support, including weekends or night shifts as required.
Required Skills & Qualifications
- 3–5 years of experience in Data Engineering / DataOps roles.
- Strong hands-on experience with:
  - Azure Databricks (PySpark, Spark SQL, Delta Lake)
  - Azure Data Factory (ADF) – pipelines, triggers, parameters, monitoring
  - Azure Data Lake Storage (ADLS Gen2)
- Good understanding of ETL/ELT frameworks, batch and incremental processing.
- Strong SQL skills for data analysis and troubleshooting.
- Experience with production support, incident management, and SLA-driven environments.
- Familiarity with monitoring tools (Azure Monitor, Log Analytics, alerts).
- Understanding of Azure security concepts (RBAC, Managed Identity, Key Vault).
Good to Have
- Exposure to Microsoft Fabric (Lakehouse, Pipelines, Notebooks).
- Basic knowledge of Power BI and semantic models.
- Experience working in 24/7 support or rotational shift models.
Work Model
- Willingness to work in a rotational shift / on-call support model as part of a global operations team.
- Ability to handle high-priority incidents and work under time-sensitive conditions.
Key Traits
- Strong troubleshooting and analytical mindset.
- Ownership-driven and operationally focused.
- Clear communication and collaboration skills.
- Continuous learning attitude toward Azure data services and DataOps practices.
We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment. Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.
To apply for this job, please visit jobs.lever.co

