We are seeking a Senior DevOps Engineer with strong experience in designing, deploying, and maintaining
scalable data infrastructure and CI/CD pipelines. You will play a key role in ensuring the stability, efficiency,
and automation of data operations across enterprise environments. This role requires a balance of hands-on technical expertise, a DevOps mindset, and strong collaboration skills across cross-functional and cross-country teams.
Key Responsibilities
• Design, enhance, and maintain CI/CD pipelines for Data Warehouse (DWH) and Data Mart environments using tools such as GitLab CI/CD, Jenkins, or Azure DevOps, enabling automated and consistent deployment of schema and configuration changes across multiple environments.
• Automate operational processes using scripting and configuration management tools such as
Ansible, Bash, and Python, ensuring efficient provisioning, patching, and maintenance of DWH
servers and related components.
• Implement and optimize monitoring systems using Grafana, Prometheus, or equivalent tools to
provide real-time insights into system performance, resource utilization, and data workflows.
Configure alerting mechanisms and create custom scripts to improve visibility and reliability.
• Collaborate closely with Data Engineering and Infrastructure teams to streamline data pipelines,
improve deployment efficiency, and ensure high system availability.
• Continuously identify and implement automation opportunities to reduce manual intervention,
improve system resilience, and accelerate delivery cycles.
• Maintain infrastructure-as-code practices and promote DevOps best practices across the data
platform ecosystem.
Qualifications
• Bachelor’s degree in Computer Science, Information Systems, or a related field.
• Minimum 5 years of experience in DevOps, Data Engineering, or Data Infrastructure roles, with a
strong background in automation and system reliability.
• Proven experience in designing and managing CI/CD pipelines using GitLab CI/CD, Jenkins, or
similar tools.
• Hands-on experience in Linux administration and scripting (Bash, Python).
• Solid understanding of Data Warehouse concepts and environments (e.g., Snowflake, Redshift,
BigQuery, Azure Synapse, Teradata, or equivalent).
• Experience managing monitoring and alerting systems (Grafana, Prometheus, CloudWatch, etc.).
• Familiarity with containerization and orchestration tools such as Docker and Kubernetes is a
strong plus.
• Experience working in cross-functional and/or cross-country teams within enterprise
environments.
• Fluency in English, both written and spoken.