Analytics at TRACTIAN
The Data Engineering team is responsible for building and maintaining the infrastructure that handles massive datasets flowing through TRACTIAN’s systems. This department ensures the availability, scalability, and performance of data pipelines, enabling seamless access and processing of real-time and historical data. The team’s core objective is to architect robust, fault-tolerant data systems that support everything from analytics to machine learning, ensuring that the right data is in the right place, at the right time.
What you’ll do
As a Data Engineer, you will build data pipelines that handle data extraction, loading, and transformation across several contexts. The goal is a reliable, available, and trustworthy system that serves as the backbone of the entire analytics pipeline. The challenges range from large datasets to high-throughput systems, so the work is not limited to a small set of data-handling techniques. You will also lead initiatives on data pipeline reliability and observability.
Responsibilities
- Develop and maintain scalable data pipelines and ETL processes.
- Design, implement, and optimize data extraction and loading processes using appropriate data engineering design patterns.
- Lead data engineering reliability and observability efforts, surfacing issues in data flow processes to the analytics team before they become incidents.
- Collaborate with backend and analytics engineers in a holistic data engineering process, loading data in accordance with technical requirements.
- Ensure data quality and consistency across various sources by implementing data validation and cleansing techniques.
- Work with cloud-based data warehouses and analytics platforms to manage and store large datasets.
- Monitor and troubleshoot data pipelines to ensure reliable and timely delivery of data.
- Document data processes, workflows, and best practices to enhance team knowledge and efficiency.
- Create dashboards as internal data products.
Requirements
- Bachelor's degree in Data Science, Statistics, Computer Science, or a related field.
- Advanced English proficiency.
- 2+ years of experience in Data Engineering or Analytics.
- Highly experienced in SQL and database management systems such as PostgreSQL and ClickHouse.
- Strong understanding of data warehousing concepts and experience with ETL tools (e.g., Airflow, dbt).
- Strong experience with programming languages such as Python and the modern data engineering stack (e.g., DuckDB, Polars).
- Experience with streaming tools (e.g., Kafka).
- Experience with cloud-based data platforms like AWS Redshift.
- Experience with Go or Rust is a plus.
- Experience with observability tools (e.g., Datadog, Grafana) is a plus.
How to apply
To apply for this job, please visit jobs.lever.co

