Design, build, and maintain scalable data platforms using AWS to support analytics, machine learning, and emerging generative AI use cases.
Collaborate with data scientists, analysts, and engineering teams to translate business and AI requirements into scalable data solutions.
Work with large-scale datasets to build and optimize data pipelines using AWS services such as EMR (Spark, Trino), S3, Glue, Athena, and Airflow.
Experian is a global data and technology company, powering opportunities for people and businesses around the world. They invest in people and advanced technologies to unlock the power of data and to innovate. A FTSE 100 Index company listed on the London Stock Exchange, they have a team of 23,300 people across 32 countries.
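Pipelines on an AWS stack like the one named above (S3, Glue, Athena, Airflow) commonly lay data out in Hive-style partitioned object keys so that Athena and Glue can prune scans by partition. A minimal sketch of that layout convention — the function name, prefix, and table names are illustrative assumptions, not from any listing:

```python
from datetime import date

def partitioned_key(prefix: str, table: str, d: date, filename: str) -> str:
    """Build a Hive-style partitioned S3 key (year=/month=/day= directories),
    the layout Athena and Glue partition pruning expect."""
    return (
        f"{prefix}/{table}/"
        f"year={d.year}/month={d.month:02d}/day={d.day:02d}/{filename}"
    )

# Example: key for an events file landed on 2024-03-07
key = partitioned_key("raw", "events", date(2024, 3, 7), "part-0000.parquet")
# → "raw/events/year=2024/month=03/day=07/part-0000.parquet"
```

Queries that filter on `year`, `month`, and `day` then read only the matching prefixes instead of the whole table.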
Lead, mentor, and scale a high-performing data engineering team.
Design and evolve our core data infrastructure on AWS, Apache Airflow, and Apache Spark.
Tekmetric is an all-in-one, cloud-based platform helping auto repair shops run smarter, grow faster, and serve customers better. Officially founded in Houston in 2017, Tekmetric has grown from a single shop’s vision to the industry’s leading solution. They value transparency, integrity, innovation, and a service-first mindset.
Design and optimize scalable cloud-based data architectures.
Develop and maintain robust data pipelines, models, and systems across cloud platforms.
Mentor Data Engineers, guiding them in data modeling, troubleshooting, and best practices.
Personify Health has created a personalized health platform, bringing health plan administration, well-being solutions, and care navigation together. Their team is on a mission to empower people to lead healthier lives.
Design, build, and maintain scalable ELT pipelines.
Architect and manage event-driven data pipelines in AWS.
Write and maintain infrastructure-as-code to deploy and manage data ingestion workloads.
Imagine Pediatrics is a tech-enabled, pediatrician-led medical group that reimagines care for children with special health care needs. They deliver 24/7 virtual-first and in-home medical, behavioral, and social care, working alongside families, providers, and health plans.
You will join a team of talented engineers working closely with Data Scientists to build and scale our next-generation Ad EnGage data pipeline.
You will work with large-scale datasets (hundreds of TBs to petabyte-scale systems) using a modern data stack centered on AWS, Airflow, dbt, and Snowflake.
You’ll contribute to building reliable, high-quality data pipelines and improving the performance, scalability, and observability of our data platform.
EDO is the TV outcomes company. Their leading measurement platform connects convergent TV airings to the ad-driven consumer behaviors most predictive of future sales. They are headquartered in New York City and Los Angeles, with an office in San Francisco, and recognize the benefits of hybrid working.
Own and operate our data warehouse, pipelines, and transformation layer.
Design, build, and maintain scalable, reliable data pipelines that ingest data from across our platform and third-party sources, ensuring data is always available and trustworthy for downstream consumers.
Partner with data scientists and analysts to deliver clean, well-documented datasets and optimize query performance so teams spend less time wrangling data and more time generating insights.
Atticus makes it easy for any sick or injured person in crisis to get the life-changing aid they deserve. In the last six years, they've become the leading platform connecting people with disabilities to government benefits. In 2025, their team grew from 151 to 210, and they will grow again in 2026.
Design, build, and maintain efficient data pipelines (ETL processes) to integrate data from various source systems into the data warehouse.
Develop and optimize data warehouse schemas and tables to support analytics and reporting needs.
Write and refine complex SQL queries and use scripting (e.g., Python) to transform and aggregate large datasets.
Deel is an all-in-one payroll and HR platform tailored for global teams. As one of the largest globally distributed companies, Deel's 7,000 team members span over 100 countries, fostering a dynamic culture of continuous learning and innovation.
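The SQL-and-scripting transformation work described above often comes down to group-and-aggregate steps. A minimal stdlib sketch of the Python analogue of SQL's `GROUP BY ... SUM(...)` — the function and field names are illustrative, not from any listing:

```python
from collections import defaultdict

def aggregate_totals(rows, key, value):
    """Group a list of dict rows by `key` and sum `value` — the plain-Python
    equivalent of SQL's GROUP BY with SUM, handy when a transform step runs
    outside the warehouse."""
    totals = defaultdict(float)
    for row in rows:
        totals[row[key]] += row[value]
    return dict(totals)

# Example: total amounts per region
sales = [
    {"region": "east", "amt": 10.0},
    {"region": "east", "amt": 5.0},
    {"region": "west", "amt": 2.0},
]
# aggregate_totals(sales, "region", "amt") → {"east": 15.0, "west": 2.0}
```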
Strong programming skills in Python and Linux Bash for automation and data workflows.
Expertise in Hadoop ecosystem tools and managing SQL databases for data storage and query optimization.
3Pillar is dedicated to engineering solutions that challenge conventional norms, going beyond traditional software development. They are an elite team of visionaries shaping the future direction of various endeavors, redefining urban living, and driving innovation across industries.
Design, build, and maintain scalable data pipelines for clients across industries.
Architect and optimize cloud data warehouse solutions, adapting to each client's stack.
Collaborate with analysts and data scientists to ensure data is clean, reliable, and well-modeled.
NuView Analytics helps companies accelerate the time to insights from their data through data analytics, diligence, and fractional data science. They work with growth-stage companies looking to drive additional value from the data they are sitting on, and they value humility, intellectual rigor, and stewardship.
Design, develop, and maintain databases supporting IVR and contact center systems.
Design and maintain relational data models for IVR event, routing, and call data.
Ensure database availability, integrity, performance, and scalability in production environments.
Miratech is a global IT services and consulting company that brings together enterprise and start-up innovation, supporting digital transformation for some of the world's largest enterprises. They employ nearly 1,000 full-time professionals, sustain an annual growth rate exceeding 25%, and foster a values-driven organization with a culture of relentless performance.
Lead, manage, and mentor a group of data engineers.
Own the design and development of data pipelines and systems.
Partner cross-functionally with Data Science and Product managers.
TrueML is a mission-driven financial software company that aims to create better customer experiences for distressed borrowers. The TrueML team includes inspired data scientists, financial services industry experts, and customer experience fanatics building technology to serve people.
Build pipelines to load data from various systems into Dataiku via S3 or Snowflake.
Increase the robustness of existing production pipelines, identify bottlenecks, and set up reliable monitoring, testing processes, and documentation templates.
Build custom applications and integrations to automate manual tasks related to customer operations, helping Product Operations, Support, and SRE in their day-to-day activities.
Dataiku is the Platform for AI Success, the enterprise orchestration layer for building, deploying, and governing AI. The world’s leading companies rely on Dataiku to operationalize AI and run it as a true business performance engine delivering measurable value.
Design, build, and maintain data pipelines using Snowflake, Airflow, and dbt.
Lead architectural discussions around the modern data stack.
Develop scalable ETL and ELT processes using Python and SQL.
They are a well-funded healthcare technology company using AI and modern data infrastructure to transform how healthcare and public health decisions are made. The team is small, mission-driven, and building systems that turn raw healthcare data into actionable intelligence at scale.
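ELT pipelines on a Snowflake/Airflow/dbt stack like the one described above are typically loaded incrementally rather than as full refreshes. A minimal plain-Python sketch of the high-water-mark pattern behind incremental loads — the function and field names are illustrative assumptions, not from any listing:

```python
from datetime import datetime

def incremental_batch(rows, watermark):
    """Select only rows newer than the last successful load (the
    'high-water mark' pattern used by incremental ELT models) and
    return the new watermark to persist for the next run."""
    fresh = [r for r in rows if r["updated_at"] > watermark]
    new_mark = max((r["updated_at"] for r in fresh), default=watermark)
    return fresh, new_mark

# Example: only the row updated after the watermark is picked up,
# and the watermark advances to its timestamp.
rows = [
    {"updated_at": datetime(2024, 1, 1)},
    {"updated_at": datetime(2024, 1, 3)},
]
fresh, mark = incremental_batch(rows, datetime(2024, 1, 2))
```

Persisting `mark` between runs (in an orchestrator variable or a metadata table) is what makes each run process only new or changed rows.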
Manage and guide data teams to execute on enterprise data strategy.
Provide technical guidance and mentor team members on data technologies.
Design end-to-end data processing for enterprise data warehousing.
Jobgether uses an AI-powered matching process to ensure applications are reviewed quickly, objectively, and fairly against the role's core requirements. They identify the top-fitting candidates, and this shortlist is then shared directly with the hiring company.
Leverage data tools to deliver superior service to clients.
Conduct data analysis to understand customer needs.
Partner with customers in interpreting data and remediating issues.
They connect job seekers with companies using an AI-powered matching process. They ensure applications are reviewed quickly, objectively, and fairly against the role's core requirements.
Collaborate with stakeholders to build robust services using data pipeline and ETL tools and the Snowflake data warehouse.
Translate advanced business data and analytics problems into technical approaches that yield actionable recommendations.
Communicate results and educate others through visualizations, reports, and presentations.
CNG Holdings, Inc. serves consumers by providing financial solutions which fill a need and deliver value. They strive to make a difference in their customers’ lives and the communities they serve.
Lead and manage a team of ~6 data engineers, driving execution, performance, and career development.
Own Kin’s data platform, including ingestion, storage, transformation, pipeline orchestration, and governance.
Build and optimize scalable data pipelines and architectures using tools like Snowflake, Databricks, dbt, and Airflow.
Kin simplifies homeowners' lives with smarter insurance, expanding to meet all homeowner needs. They employ Kinfolk across 35+ states and are recognized for growth, customer satisfaction, and a focus on long-term sustainability, fostering a culture of meaningful work and real impact.
Lead data architecture design, API assessment, and ETL requirements during the Discovery & Design phase.
Develop and configure the CMIC ERP API integration to establish reliable data exchange between the ERP system and the AWS platform.
Design and implement data pipelines using AWS Glue for ETL processing of subcontractor documents and ERP data.
Capnexus is a comprehensive services provider whose team of experienced professionals designs, builds, and supports retail software. They operate as a build-as-a-service provider with a culture built on outcomes and delivery.
Own organization-wide data architecture, defining standards and designs.
Design and develop data pipelines, integrations, and platform features.
Partner with product managers to define new data features and capabilities.
They offer a connected equipment platform for managing mixed assets. The company values quality, continuous learning, and collaboration within a dynamic team environment.
Become a trusted data and AI advisor to clients, helping them translate business questions into AI-ready data architectures.
Design and implement AI-optimized data platforms, including cloud data warehouses, ETL/ELT pipelines, and analytic layers.
Engineer modern ELT/ETL pipelines that handle structured, semi-structured, and unstructured data to support AI and analytics use cases.
Aimpoint Digital is a dynamic, fully remote data and analytics consultancy. They work alongside the most innovative software providers in the data engineering space to solve their clients' toughest business problems, and they believe in blending modern tools and techniques with tried-and-true principles to deliver optimal data engineering solutions.