Looking for young talent ready to go all in. Offering significant equity to people who want to build something that matters. Define the future of AI in influencer marketing.
20 jobs similar to Data Engineer
Jobs ranked by similarity.
- Design, develop, and maintain reliable, scalable ETL/ELT pipelines across ingestion, transformation, storage, and consumption layers.
- Build and manage data models and transformations using dbt with strong testing and documentation practices.
- Contribute to architectural decisions, technology selection, and data engineering standards.
Vitable is a health benefits platform making healthcare better for employers of everyday workers.
- Design, implement, and maintain scalable ETL/ELT pipelines using Python, SQL, and modern orchestration frameworks.
- Build and optimize data models and schemas for cloud warehouses and relational databases, supporting AI and analytics workflows.
- Lead large-scale data initiatives from planning through execution, ensuring performance, cost efficiency, and reliability.
This position is posted by Jobgether on behalf of a partner company.
- Own the design, build, and optimization of end-to-end data pipelines.
- Establish and enforce best practices in data modeling, orchestration, and system reliability.
- Collaborate with stakeholders to translate requirements into robust, scalable data solutions.
YipitData is the leading market research and analytics firm for the disruptive economy and most recently raised $475M from The Carlyle Group at a valuation of over $1B.
- Design, build, and maintain scalable and reliable data pipelines.
- Develop and maintain ETL data pipelines for large volumes of data, writing clean, maintainable, and efficient code.
- Work closely with product managers, data scientists, and software engineers to create and prepare datasets from disparate sources.
Curinos empowers financial institutions to make better, faster and more profitable decisions through industry-leading proprietary data, technologies and insights.
- Contribute to the design and implementation of highly scalable data infrastructure.
- Implement and maintain end-to-end data pipelines supporting batch and real-time analytics.
- Work with Product, Engineering, and Business teams to understand data requirements.
Docker makes app development easier so developers can focus on what matters, and its remote-first team spans the globe.
As a key member of our Data Engineering team, you will:
- Collaborate with Data Science, Reporting, Analytics, and other engineering teams to build data pipelines, infrastructure, and tooling to support business initiatives.
- Oversee the design and maintenance of data pipelines and contribute to the continual enhancement of the data engineering architecture.
- Collaborate with the team to meet performance, scalability, and reliability goals.
PENN Entertainment, Inc. is North America’s leading provider of integrated entertainment, sports content, and casino gaming experiences.
- Responsible for the collection, extraction, transformation, and correlation of business data across the Subsplash platform.
- Administer and tune data systems to optimize for performance.
- Work with data warehousing/data lake environments to provide data marts for business analysis and intelligence.
Subsplash is an award-winning team that builds The Ultimate Engagement Platform™ for churches, Christian ministries, non-profits, and businesses around the world.
As a Solution Engineer, you will meet with data partners, assess requirements, and recommend pipeline improvements to manage data export and delivery in an efficient, scalable process. You will be part of a small, cross-functional team that includes other engineers, a product manager, data scientists, and others. Success requires the ability to take on ambiguous, complex problems and promote innovative solutions to address immediate needs and support future growth.
Fetch is a rewards app that empowers consumers to live rewarded throughout their day and has delivered more than $1 billion in rewards and earned over 5 million five-star reviews.
- Assist in executing data engineering projects within the Customer Intelligence portfolio to meet defined timelines and deliverables.
- Build and maintain ETL pipelines based on user and project specifications to enable reliable data movement.
- Develop and update technical documentation for key systems and data assets.
Stryker is one of the world’s leading medical technology companies and, together with its customers, is driven to make healthcare better.
- Design, build, and maintain robust and scalable data pipelines from diverse sources.
- Leverage expert-level experience with dbt and Snowflake to structure, transform, and organize data.
- Collaborate with engineering, product, and analytics teams to deliver data solutions that drive business value.
Topstep offers an engaging work environment, ranging from fully remote to hybrid, and fosters a culture of collaboration.
- Design, build, and maintain the pipelines that power all data use cases.
- Develop intuitive, performant, and scalable data models that support product features, internal analytics, experimentation, and machine learning workloads.
- Define and enforce standards for accuracy, completeness, lineage, and dependency management.
Patreon is a media and community platform where over 300,000 creators give their biggest fans access to exclusive work and experiences.
- Build end-to-end data solutions that include ingest, logging, validation, cleaning, transformation, and security.
- Lead the design, development, and delivery of scalable data pipelines and ETL processes.
- Design and evolve robust data models and storage patterns that support analytics and efficiency use cases.
Founded in 1997, Expression provides data fusion, data analytics, AI/ML, software engineering, information technology, and electromagnetic spectrum management solutions.
- Work with data end-to-end, exploring, cleaning, and assembling large, complex datasets.
- Analyze raw data from multiple sources and identify trends and patterns, maintaining reliable data pipelines.
- Build analytics-ready outputs and models that enable self-service and trustworthy insights across the organization.
Truelogic is a leading provider of nearshore staff augmentation services headquartered in New York, delivering top-tier technology solutions for over two decades.
As a Senior Data Engineer, you will:
- Shape a scalable data platform that drives business insights.
- Design and maintain robust data pipelines and collaborate with cross-functional teams.
- Tackle complex data challenges, implement best practices, and mentor junior engineers.
Jobgether is a Talent Matching Platform that partners with companies worldwide to efficiently connect top talent with the right opportunities through AI-driven job matching.
- Build and scale data services by designing, developing, and maintaining scalable backend systems and APIs.
- Collaborate on data architecture and models, partnering with engineering and analytics teams to optimize storage and processing workflows.
- Contribute to standards, quality, and governance by building reliable, observable data systems with strong testing and validation.
Zapier builds and uses automation every day to make work more efficient, creative, and human.
- Design, build, and maintain cloud-native data infrastructure using Terraform for infrastructure as code (IaC).
- Develop and optimize data pipelines leveraging AWS services and Snowflake.
- Build and maintain LLM frameworks, ensuring high-quality and cost-effective outputs.
ClickUp is building the first truly converged AI workspace, unifying tasks, docs, chat, calendar, and enterprise search, all supercharged by context-driven AI.
- Design, build, and maintain highly scalable, reliable, and efficient ETL/ELT pipelines.
- Ingest data from a multitude of sources and transform raw data into clean, structured, and AI/ML-ready formats.
- Work closely with data scientists, machine learning engineers, and business analysts to understand their data needs.
Valtech exists to unlock a better way to experience the world by blending crafts, categories, and cultures, helping brands unlock new value in an increasingly digital world.
- Build and monitor Cribl’s core data tech stack including data pipelines and data warehouse.
- Develop cloud-native services and infrastructure that power scalable and reliable data systems.
- Support Cribl’s growing data science and agentic initiatives by preparing model-ready datasets.
Cribl provides a data engine for IT and Security teams across various industries.
- Design, build, and maintain scalable data pipelines and warehouses for analytics and reporting.
- Develop and optimize data models in Snowflake or similar platforms.
- Implement ETL/ELT processes using Python and modern data tools.
Jobgether uses an AI-powered matching process to ensure your application is reviewed quickly, objectively, and fairly against the role's core requirements. Jobgether identifies the top-fitting candidates and shares this shortlist directly with the hiring company; the final decision and next steps (interviews, assessments) are managed by the hiring company's internal team.
- Design, build, and scale performant data pipelines and infrastructure, primarily using ClickHouse, Python, and dbt.
- Build systems that handle large-scale streaming and batch data, with a strong emphasis on correctness and operational stability.
- Own the end-to-end lifecycle of data pipelines, from raw ingestion to clean, well-defined datasets consumed by downstream teams.
Nansen is a leading blockchain analytics platform that empowers investors and professionals with real-time, actionable insights derived from on-chain data. We’re building the world’s best blockchain analytics platform, and data is at the heart of everything we do.