Source Job

US

  • Design and maintain data models that organize rich content into canonical structures optimized for product features, search, and retrieval.
  • Build high-reliability ETLs and streaming pipelines to process usage events, analytics data, behavioral signals, and application logs.
  • Develop data services that expose unified content to the application, such as metadata access APIs, indexing workflows, and retrieval-ready representations (a minimal sketch follows this list).
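For concreteness, a minimal Python sketch of the kind of canonical content record and retrieval-ready representation described above. The `ContentItem` dataclass and `to_search_document` helper are illustrative names, not part of this role's actual stack.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ContentItem:
    """Canonical representation of a piece of rich content (hypothetical schema)."""
    item_id: str
    title: str
    body: str
    tags: list[str] = field(default_factory=list)
    updated_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


def to_search_document(item: ContentItem) -> dict:
    """Flatten a canonical item into a retrieval-ready document for a search index."""
    return {
        "id": item.item_id,
        "title": item.title,
        # Concatenate the fields a search backend would typically analyze together.
        "text": f"{item.title}\n{item.body}",
        "tags": item.tags,
        "updated_at": item.updated_at.isoformat(),
    }


if __name__ == "__main__":
    doc = to_search_document(ContentItem("c-1", "Release notes", "What changed in v2.", ["docs"]))
    print(doc["id"], doc["tags"])
```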

Python SQL BigQuery

20 jobs similar to Senior Backend Engineer, Data Pipelines and Integrations

Jobs ranked by similarity.

Europe Unlimited PTO

Design, implement, and maintain scalable ETL/ELT pipelines using Python, SQL, and modern orchestration frameworks. Build and optimize data models and schemas for cloud warehouses and relational databases, supporting AI and analytics workflows. Lead large-scale data initiatives from planning through execution, ensuring performance, cost efficiency, and reliability.

This position is posted by Jobgether on behalf of a partner company.

  • Design, build, and maintain the pipelines that power all data use cases.
  • Develop intuitive, performant, and scalable data models that support product features, internal analytics, experimentation, and machine learning workloads.
  • Define and enforce standards for accuracy, completeness, lineage, and dependency management.

Patreon is a media and community platform where over 300,000 creators give their biggest fans access to exclusive work and experiences.

Americas EMEA

  • Build and scale data services by designing, developing, and maintaining scalable backend systems and APIs.
  • Collaborate on data architecture and models, partnering with engineering and analytics teams to optimize storage and processing workflows.
  • Contribute to standards, quality, and governance by building reliable, observable data systems with strong testing and validation.

Zapier builds and uses automation every day to make work more efficient, creative, and human.

$110,000–$140,000/yr
US

  • Design, build, and maintain scalable and reliable data pipelines.
  • Develop and maintain ETL data pipelines for large volumes of data, writing clean, maintainable, and efficient code.
  • Work closely with product managers, data scientists, and software engineers to create and prepare datasets from disparate sources.

Curinos empowers financial institutions to make better, faster, and more profitable decisions through industry-leading proprietary data, technologies, and insights.

Mexico

  • Design, build, and maintain highly scalable, reliable, and efficient ETL/ELT pipelines.
  • Ingest data from a multitude of sources and transform raw data into clean, structured, and AI/ML-ready formats.
  • Work closely with data scientists, machine learning engineers, and business analysts to understand their data needs.

Valtech exists to unlock a better way to experience the world by blending crafts, categories, and cultures, helping brands unlock new value in an increasingly digital world.

$120,000–$138,000/yr
US 3w PTO

  • Responsible for the collection, extraction, transformation, and correlation of business data across the Subsplash platform.
  • Administer and tune data systems to optimize for performance.
  • Work with data warehousing/data lake environments to provide data marts for business analysis and intelligence.

Subsplash is an award-winning team that builds The Ultimate Engagement Platform™ for churches, Christian ministries, non-profits, and businesses around the world.

Unlimited PTO

  • Design, develop, and maintain reliable, scalable ETL/ELT pipelines across ingestion, transformation, storage, and consumption layers.
  • Build and manage data models and transformations using dbt with strong testing and documentation practices.
  • Contribute to architectural decisions, technology selection, and data engineering standards.

Vitable is a health benefits platform making healthcare better for employers of everyday workers.

$180,000–$190,000/yr
US

Build end-to-end data solutions that include ingestion, logging, validation, cleaning, transformation, and security. Lead the design, development, and delivery of scalable data pipelines and ETL processes. Design and evolve robust data models and storage patterns that support analytics and efficiency use cases.

Founded in 1997, Expression provides data fusion, data analytics, AI/ML, software engineering, information technology, and electromagnetic spectrum management solutions.

$215,000–$240,000/yr
US

Own the design, build, and optimization of end-to-end data pipelines. Establish and enforce best practices in data modeling, orchestration, and system reliability. Collaborate with stakeholders to translate requirements into robust, scalable data solutions.

YipitData is the leading market research and analytics firm for the disruptive economy and most recently raised $475M from The Carlyle Group at a valuation of over $1B.

EMEA Asia

  • Design, build, and scale performant data pipelines and infrastructure, primarily using ClickHouse, Python, and dbt.
  • Build systems that handle large-scale streaming and batch data, with a strong emphasis on correctness and operational stability (see the batching sketch after this list).
  • Own the end-to-end lifecycle of data pipelines, from raw ingestion to clean, well-defined datasets consumed by downstream teams.
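A rough sketch of the buffering pattern that streaming-plus-batch pipelines like this often rely on: micro-batch incoming events and flush when either a size or an age threshold is reached. The `write_batch` callback stands in for the real sink (for example, a ClickHouse insert) and is an assumption, not code from this role.

```python
import time
from typing import Callable, Optional


class MicroBatcher:
    """Buffer streaming events and flush them in batches (size- or time-triggered)."""

    def __init__(self, write_batch: Callable[[list[dict]], None],
                 max_rows: int = 500, max_age_s: float = 5.0):
        self.write_batch = write_batch      # e.g. a warehouse insert wrapped by the caller
        self.max_rows = max_rows
        self.max_age_s = max_age_s
        self._buffer: list[dict] = []
        self._first_event_at: Optional[float] = None

    def add(self, event: dict) -> None:
        if self._first_event_at is None:
            self._first_event_at = time.monotonic()
        self._buffer.append(event)
        if len(self._buffer) >= self.max_rows or self._age() >= self.max_age_s:
            self.flush()

    def _age(self) -> float:
        return 0.0 if self._first_event_at is None else time.monotonic() - self._first_event_at

    def flush(self) -> None:
        if not self._buffer:
            return
        self.write_batch(self._buffer)      # keep the write idempotent upstream for correctness
        self._buffer = []
        self._first_event_at = None


if __name__ == "__main__":
    batcher = MicroBatcher(write_batch=lambda rows: print(f"flushed {len(rows)} rows"), max_rows=3)
    for i in range(7):
        batcher.add({"event_id": i})
    batcher.flush()  # drain the tail
```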

Nansen is a leading blockchain analytics platform that empowers investors and professionals with real-time, actionable insights derived from on-chain data. We’re building the world’s best blockchain analytics platform, and data is at the heart of everything we do.

Global

Build and improve processes to clean, enrich and structure large datasets. Integrate and manage existing ML models used for classification and enrichment. Apply NLP techniques to understand and categorize job descriptions and candidate profiles.

Jobgether uses an AI-powered matching process to ensure your application is reviewed quickly, objectively, and fairly against the role's core requirements.

Latam

Work with data end-to-end, exploring, cleaning, and assembling large, complex datasets. Analyze raw data from multiple sources and identify trends and patterns, maintaining reliable data pipelines. Build analytics-ready outputs and models that enable self-service and trustworthy insights across the organization.

Truelogic is a leading provider of nearshore staff augmentation services headquartered in New York, delivering top-tier technology solutions for over two decades.

$0–$200,000/yr
North America Latin America

  • Architect and maintain robust data pipelines to transform diverse data inputs.
  • Integrate data from various sources into a unified platform.
  • Build APIs with AI assistance to enable secure access to consolidated insights.

Abusix is committed to making the internet a safer place. They are a globally distributed team that spans multiple countries and thrives in a culture rooted in trust, ownership, and collaboration.

Brazil

  • Design, develop, and maintain data pipelines (batch and streaming) for ingesting, transforming, and serving data for analytics and application consumption.
  • Build and evolve analytical data models (bronze/silver/gold layers, data marts, star schemas, wide tables), ensuring consistency, documentation, and reuse (see the sketch after this list).
  • Implement data quality best practices (tests, validations, contracts, SLAs/SLOs, monitoring of freshness/completeness/accuracy) and drive incident resolution with root cause analysis (RCA).
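To make the bronze/silver/gold layering and the freshness check concrete, here is a small, hypothetical Python sketch; a real implementation would typically live in a warehouse or dbt project rather than plain Python, and every name and value below is illustrative.

```python
from datetime import datetime, timedelta, timezone

# Bronze: raw events as landed, untouched (example payloads, assumed shapes).
bronze_rows = [
    {"order_id": "A1", "amount": "10.50", "loaded_at": "2024-05-01T12:00:00+00:00"},
    {"order_id": "A1", "amount": "10.50", "loaded_at": "2024-05-01T12:05:00+00:00"},  # duplicate
    {"order_id": "B2", "amount": None,    "loaded_at": "2024-05-01T12:07:00+00:00"},  # null amount
]


def to_silver(rows: list[dict]) -> list[dict]:
    """Silver: typed, deduplicated, nulls filtered — ready for modeling."""
    seen, out = set(), []
    for r in rows:
        if r["order_id"] in seen or r["amount"] is None:
            continue
        seen.add(r["order_id"])
        out.append({"order_id": r["order_id"],
                    "amount": float(r["amount"]),
                    "loaded_at": datetime.fromisoformat(r["loaded_at"])})
    return out


def to_gold(rows: list[dict]) -> dict:
    """Gold: an aggregate ready for a data mart or wide table."""
    return {"orders": len(rows), "revenue": sum(r["amount"] for r in rows)}


def check_freshness(rows: list[dict], max_lag: timedelta) -> bool:
    """A freshness check of the kind an SLA/SLO monitor would run against a layer."""
    newest = max(r["loaded_at"] for r in rows)
    return datetime.now(timezone.utc) - newest <= max_lag


silver = to_silver(bronze_rows)
# A generous lag is used here only so the hardcoded example timestamps pass the check.
print(to_gold(silver), "fresh:", check_freshness(silver, timedelta(days=3650)))
```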

CI&T specializes in technological transformation, combining human expertise with AI to create scalable tech solutions. With over 8,000 CI&Ters around the world, they have partnered with more than 1,000 clients during their 30 years of history.

$155,000–$185,000/yr
US Unlimited PTO

As a Senior Data Engineer, shape a scalable data platform that drives business insights. Design and maintain robust data pipelines and collaborate with cross-functional teams. Tackle complex data challenges, implement best practices, and mentor junior engineers.

Jobgether is a Talent Matching Platform that partners with companies worldwide to efficiently connect top talent with the right opportunities through AI-driven job matching.

Europe

  • Design and deliver data access patterns and ingestion workflows.
  • Work with technical roles and domain experts to turn AI use cases into production-ready capabilities.
  • Build lightweight ingestion or sync pipelines to bring priority datasets from various source systems into platforms that enable AI use.

RWS is building the next generation of AI-enabled capabilities across our products, internal production systems, and enterprise platforms.

$200,000–$225,000/yr
US Canada

  • Develop data models and pipelines for customer-facing applications, research, reporting and machine learning.
  • Optimize data models to support efficient data storage and retrieval processes for performance and scalability.
  • Optimize ETL processes for ingesting, processing and transforming large volumes of structured and unstructured data into our data ecosystem.

Inspiren offers the most complete and connected ecosystem in senior living, bringing peace of mind to residents, families, and staff.

Looking for young talent ready to go all in. Offering significant equity to people who want to build something that matters. Define the future of AI in influencer marketing.

Influur is redefining how advertising works through creators, data, and AI, aiming to make influencer marketing as measurable, predictable, and scalable as paid ads.

$23,000–$322,000/yr
US

Build backend and pipeline systems that turn models into real search experiences for 110M+ daily users, owning data flows, ranking and retrieval services, and low-latency model-serving APIs. Integrate models into production through robust interfaces and DAGs, enabling fast iteration and powering discovery across the internet’s largest community platform. Ensure pipelines and systems support high scale, low latency, and operational excellence.
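As a generic illustration of the retrieve-then-rank flow described above (none of the names below come from this posting): fetch candidates from an index, score them with a model call, and return the top results. `fetch_candidates` and `score` are stand-ins for a retrieval service and a model-serving API.

```python
from typing import Callable


def search(query: str,
           fetch_candidates: Callable[[str], list[dict]],
           score: Callable[[str, dict], float],
           top_k: int = 10) -> list[dict]:
    """Retrieve-then-rank: pull candidates, score them with a model, return the best."""
    candidates = fetch_candidates(query)                  # retrieval service / index lookup
    ranked = sorted(candidates, key=lambda c: score(query, c), reverse=True)
    return ranked[:top_k]                                 # keep top_k small on the low-latency path


if __name__ == "__main__":
    corpus = [{"id": 1, "title": "cat pictures"}, {"id": 2, "title": "rust tutorials"}]
    hits = search(
        "rust",
        fetch_candidates=lambda q: corpus,                # stand-in for an index query
        score=lambda q, c: float(q in c["title"]),        # stand-in for a model-serving call
        top_k=1,
    )
    print(hits)
```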

Reddit is a community of communities built on shared interests, passion, and trust, and is home to the most open and authentic conversations on the internet.

Brazil

Design, build, and maintain a robust, self-service, scalable, and secure data platform. Create and edit data pipelines, considering business logic, levels of aggregation, and data quality. Enable teams to access and use data effectively through self-service tools and well-modeled datasets.

We are Grupo QuintoAndar, the largest real estate ecosystem in Latin America, with a diversified portfolio of brands and solutions across different countries.