Remote Data Jobs · Brazil

Job listings

Support the development of data pipelines, ensuring quality and organization in deliverables under the guidance of more experienced team members. Collaborate on the maintenance and evolution of existing solutions, contributing to continuous process improvement. Participate actively in agile ceremonies, clearly communicating progress and questions. Demonstrate technical curiosity and a willingness to learn new tools and good data engineering practices. Support the documentation of developed solutions, ensuring traceability and alignment with team standards.

We are looking for a talented and motivated person to act as a Data Developer, joining our team and contributing to the construction of scalable, intelligent solutions for large volumes of data. Responsibilities include designing, developing, and maintaining scalable and robust data pipelines, and creating ingestion, transformation, and data modeling solutions using Databricks, Spark/PySpark, Cloudera, and Azure Data Factory (ADF).

We are looking for a Senior Data Engineer to work on an international project, building and evolving the data ecosystem and ensuring scalability, quality, and governance across all layers. You will develop and maintain robust and scalable data pipelines and build and evolve data models using good engineering practices.

Responsible for designing, developing, and maintaining large-scale data ingestion and transformation pipelines on Databricks. A key contributor in implementing modern DataOps practices, ensuring data reliability, scalability, and alignment with business requirements through the integration of data contracts and automated quality checks. Design, build, and optimize data ingestion and transformation pipelines using Databricks and other modern cloud-based data platforms.
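The data contracts and automated quality checks mentioned above can be illustrated with a minimal pure-Python sketch. The contract fields and validation rules here are hypothetical assumptions for the example, not taken from any listing:

```python
# Minimal sketch of a data contract with an automated quality gate.
# Field names and rules are illustrative assumptions, not a real pipeline's schema.

CONTRACT = {
    "order_id": int,
    "amount": float,
    "currency": str,
}

def validate(record: dict) -> list[str]:
    """Return a list of contract violations for one record (empty means valid)."""
    errors = []
    for field, expected_type in CONTRACT.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            errors.append(f"{field}: expected {expected_type.__name__}")
    return errors

def quality_gate(records: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split a batch into records that pass the contract and records that fail."""
    passed, failed = [], []
    for r in records:
        (failed if validate(r) else passed).append(r)
    return passed, failed

batch = [
    {"order_id": 1, "amount": 9.99, "currency": "BRL"},
    {"order_id": "2", "amount": 5.00, "currency": "BRL"},  # order_id has the wrong type
]
good, bad = quality_gate(batch)
print(len(good), len(bad))  # → 1 1
```

In production this role is described as doing the same thing at scale on Databricks, where failing records would typically be routed to a quarantine table rather than dropped.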

The job involves being responsible for data ingestion, analysis, and migration, with a focus on Business Intelligence and data engineering. The candidate will develop, optimize, and maintain ETL/ELT pipelines that migrate data from various sources, working with PostgreSQL. They will also analyze business requirements and translate them into actionable insights.
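An ELT pipeline of the kind this listing describes can be sketched in a few lines. This uses `sqlite3` from the standard library as a self-contained stand-in for PostgreSQL; the table and column names are illustrative assumptions:

```python
# Minimal ELT sketch: land raw rows, then transform with SQL inside the database.
# sqlite3 stands in for PostgreSQL here; the schema is an illustrative assumption.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_sales (region TEXT, amount REAL)")

# Extract/Load: land source rows as-is in a raw table.
conn.executemany(
    "INSERT INTO raw_sales VALUES (?, ?)",
    [("north", 10.0), ("north", 5.0), ("south", 7.5)],
)

# Transform: build a curated aggregate table with SQL (the "T" in ELT).
conn.execute(
    """
    CREATE TABLE sales_by_region AS
    SELECT region, SUM(amount) AS total
    FROM raw_sales
    GROUP BY region
    """
)

rows = conn.execute(
    "SELECT region, total FROM sales_by_region ORDER BY region"
).fetchall()
print(rows)  # → [('north', 15.0), ('south', 7.5)]
```

The same raw-then-curated layering carries over directly to PostgreSQL, where the transform step would usually live in versioned SQL rather than application code.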

You'll play a crucial role in developing and maintaining scalable data pipelines and infrastructure to drive data analytics and machine learning solutions for our clients. Design and create robust, scalable data pipelines using Databricks, Apache Spark, and SQL to transform and process large datasets efficiently. You will collaborate with data architects to design data models and architecture that support data analytics and machine learning applications.

Lead the migration to a modern data platform built on Snowflake, dbt, and Prefect. As the technical architect, you will guide this transformation, define data movement, redesign workflows, and ensure business continuity throughout the transition. This is an architecture and leadership role that requires hands-on work when needed.
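The redesigned workflows this role owns come down to dependency-ordered execution, which is the core of what an orchestrator like Prefect provides (and dbt provides for SQL models). Here is a toy pure-Python sketch; the task names and dependency graph are illustrative assumptions:

```python
# Toy sketch of dependency-ordered workflow execution, the kind of scheduling
# an orchestrator such as Prefect handles in production.
# Task names and the dependency graph are illustrative assumptions.
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on.
dag = {
    "extract": set(),
    "load": {"extract"},
    "transform": {"load"},
    "publish": {"transform"},
}

order = list(TopologicalSorter(dag).static_order())
print(order)  # → ['extract', 'load', 'transform', 'publish']

for task in order:
    pass  # in a real flow, each step would run actual work here, with retries and logging
```

A real orchestrator adds what this sketch omits: retries, scheduling, observability, and parallel execution of independent branches, which is why migrations like the one described standardize on a tool rather than hand-rolled scripts.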

Work on data products, monetization, and sharing strategies. Ensure adherence to strategic plans and manage risks. Monitor client relationships to maintain and expand them. Organize roles and responsibilities to achieve effective project outcomes. Identify opportunities for new data products through research and feedback. Support business processes that foster a data-oriented culture. Help clients recognize the value of data in their strategic objectives. Support clients in creating tailored data solutions.