Job Description

You will architect and implement robust data pipelines and modern analytics platforms, working in close collaboration with cross-functional teams across the US and Brazil. You will be responsible for building scalable data solutions leveraging modern cloud-native tools. Fluent English communication is essential for engaging with our global stakeholders and ensuring alignment across distributed teams.

Requirements for this challenge:

- Solid experience as a Data Developer.
- Strong SQL expertise, with the ability to optimize, refactor, and validate large-scale data transformations.
- Proficiency in Python (or a similar language) for scripting and automating data workflows.
- Hands-on experience with Snowflake, including performance tuning, data governance, masking, and workload management.
- Advanced knowledge and production experience with dbt for transformation logic, testing, documentation, and CI/CD integration.
- Proven experience implementing Data Vault 2.0 models, including Hubs, Links, Satellites, PIT tables, and business vault patterns, using AutomateDV or similar frameworks.
- Experience orchestrating ETL/ELT pipelines with Airflow, including DAG structuring, dependency management, and dynamic task generation.
- Familiarity with modern data orchestration tools such as Prefect, Dagster, or AWS Glue.
- Comfort working in environments with CI/CD pipelines in GitHub Actions, integrating dbt, testing, and deployment to Snowflake or similar platforms.
- Solid understanding of data modeling best practices, including normalization, dimensional modeling, and historization.
- Ability to translate business requirements into scalable data architectures and to communicate technical concepts effectively with stakeholders.

About CI&T

We are tech transformation specialists, uniting human expertise with AI to create scalable tech solutions.