Design, build, and operate scalable data pipelines using batch and real-time processing technologies.
Build data infrastructure that ingests real-time events and stores them efficiently across databases.
Establish and enforce data contracts with backend engineering teams by implementing schema management.
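To make the data-contract responsibility concrete, here is a minimal, hypothetical sketch of schema enforcement at ingestion time. It assumes the jsonschema Python package; the event shape and field names are invented for illustration and are not taken from the posting.

```python
# Hypothetical illustration of enforcing a data contract at ingestion time.
# Assumes the `jsonschema` package; the event shape and field names are invented.
from jsonschema import validate, ValidationError

# The "contract" agreed with the backend team, versioned alongside the producer code.
PURCHASE_EVENT_V1 = {
    "type": "object",
    "required": ["event_id", "user_id", "amount_cents", "occurred_at"],
    "properties": {
        "event_id": {"type": "string"},
        "user_id": {"type": "string"},
        "amount_cents": {"type": "integer", "minimum": 0},
        "occurred_at": {"type": "string", "format": "date-time"},
    },
    "additionalProperties": False,
}

def accept_event(event: dict) -> bool:
    """Return True if the event satisfies the contract; log and reject otherwise."""
    try:
        validate(instance=event, schema=PURCHASE_EVENT_V1)
        return True
    except ValidationError as err:
        print(f"Contract violation: {err.message}")
        return False

if __name__ == "__main__":
    ok = accept_event({"event_id": "e1", "user_id": "u42", "amount_cents": 1299,
                       "occurred_at": "2024-01-01T12:00:00Z"})
    print("accepted" if ok else "rejected")
```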
Fetch provides a platform where millions of people earn rewards for buying brands they love. They have received investments from SoftBank, Univision, and Hamilton Lane, and have partnerships ranging from challenger brands to Fortune 500 companies. Fetch fosters a people-first culture rooted in trust, accountability, and innovation.
Design and build robust data pipelines that integrate data from diverse sources.
Build streaming data pipelines using Kafka and AWS services to enable real-time data processing (see the sketch below).
Create and operate data services that make curated datasets accessible to internal teams and external partners.
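As referenced above, a streaming pipeline of this shape typically pairs a Kafka consumer with an object-store sink. The sketch below assumes the confluent-kafka and boto3 packages; the broker, topic, and bucket names are placeholders, not details from the posting.

```python
# Minimal sketch of a streaming ingest loop: consume events from Kafka and batch
# them into S3. Broker, topic, and bucket names are invented placeholders.
import json
import time
import boto3
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker:9092",
    "group.id": "telemetry-ingest",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["driving-events"])
s3 = boto3.client("s3")

batch, last_flush = [], time.time()
try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"Kafka error: {msg.error()}")
            continue
        batch.append(json.loads(msg.value()))
        # Flush roughly once a minute so downstream jobs see fresh, reasonably sized files.
        if batch and time.time() - last_flush > 60:
            key = f"raw/driving-events/{int(time.time())}.json"
            s3.put_object(Bucket="example-data-lake", Key=key,
                          Body="\n".join(json.dumps(r) for r in batch))
            consumer.commit()
            batch, last_flush = [], time.time()
finally:
    consumer.close()
```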
Quanata aims to ensure a better world through context-based insurance solutions. Backed by State Farm, it is a customer-centered team creating innovative technologies, digital products, and brands, blending Silicon Valley talent with insurer expertise.
Design and implement robust data infrastructure in AWS, using Spark with Scala.
Evolve our core data pipelines to efficiently scale for our massive growth.
Store data in optimal engines and formats, matching your designs to our performance needs and cost factors.
tvScientific is the first CTV advertising platform purpose-built for performance marketers. They leverage data and cutting-edge science to automate and optimize TV advertising to drive business outcomes. tvScientific is built by industry leaders with backgrounds in programmatic advertising, digital media, and ad verification.
Collaborate with business leaders, engineers, and product managers to understand data needs.
Design, build, and scale data pipelines across a variety of source systems and streams (internal, third-party, and cloud-based), distributed/elastic environments, and downstream applications and/or self-service solutions.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
NBCUniversal is one of the world's leading media and entertainment companies, creating world-class content that it distributes across its portfolio of film, television, and streaming and brings to life through its global theme park destinations, consumer products, and experiences. It champions an inclusive culture and strives to attract and develop a talented workforce to create and deliver a wide range of content reflecting our world.
Design, build, and maintain pipelines that power all data use cases.
Develop intuitive, performant, and scalable data models that support product features.
Pay down technical debt, improve automation, and follow best practices in data modeling.
Patreon is a media and community platform where over 300,000 creators give their biggest fans access to exclusive work and experiences. They are leaders in the space, with over $10 billion generated by creators since Patreon's inception, and a team passionate about their mission.
Design and implement scalable, performant data models.
Develop and optimize processes to improve the correctness of third-party data.
Implement data quality principles to raise the bar for reliability of data.
SmithRx is a venture-backed Health-Tech company disrupting the Pharmacy Benefit Management (PBM) sector with a next-generation drug acquisition platform. They have a mission-driven and collaborative culture that inspires employees to transform the U.S. healthcare system.
Serve as a primary advisor to identify technical improvements and automation opportunities.
Build advanced data pipelines using the medallion architecture in Snowflake (sketched below).
Write advanced ETL/ELT scripts to integrate data into enterprise data stores.
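The medallion-style layering mentioned above can be sketched as a short sequence of Snowflake statements driven from Python. This is a rough illustration only; it assumes the snowflake-connector-python package, and the account, credentials, stage, and table names are placeholders.

```python
# Rough sketch of medallion-style layering (bronze -> silver -> gold) in Snowflake.
# All identifiers below are invented for illustration.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="etl_service",
    password="...",            # in practice, pulled from a secrets manager
    warehouse="ETL_WH",
    database="ANALYTICS",
)
cur = conn.cursor()

# Bronze: land raw JSON files as-is from an external stage into a VARIANT column.
cur.execute("""
    COPY INTO bronze.leads_raw
    FROM @landing_stage/leads/
    FILE_FORMAT = (TYPE = 'JSON')
""")

# Silver: deduplicate and standardize types.
cur.execute("""
    CREATE OR REPLACE TABLE silver.leads AS
    SELECT DISTINCT
        raw:lead_id::STRING           AS lead_id,
        raw:source::STRING            AS source,
        raw:created_at::TIMESTAMP_NTZ AS created_at
    FROM bronze.leads_raw
""")

# Gold: business-level rollup used by reporting.
cur.execute("""
    CREATE OR REPLACE TABLE gold.leads_by_source AS
    SELECT source, COUNT(*) AS lead_count
    FROM silver.leads
    GROUP BY source
""")
conn.close()
```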
Spring Venture Group is a digital direct-to-consumer sales and marketing company focused on the senior market. They have a dedicated team of licensed insurance agents and leverage technology to help seniors navigate Medicare.
Design, build, and optimize ETL/ELT workflows using Databricks, SQL, and Python/PySpark (see the sketch below).
Develop and maintain robust, scalable, and efficient data pipelines for processing large datasets.
Collaborate with cross-functional teams to deliver impactful data solutions.
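As a rough illustration of the ETL/ELT workflows referenced above, the PySpark sketch below reads raw events, deduplicates and types them, and writes a partitioned output; the paths and column names are placeholders.

```python
# Minimal PySpark ETL sketch: read raw events, clean them, write a partitioned table.
# Paths and column names are invented for illustration.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events_etl").getOrCreate()

raw = spark.read.json("s3://example-bucket/raw/events/")

clean = (
    raw.dropDuplicates(["event_id"])                     # keep re-runs idempotent
       .filter(F.col("user_id").isNotNull())             # drop unusable rows
       .withColumn("event_date", F.to_date("event_ts"))  # partition key
)

(clean.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://example-bucket/curated/events/"))
```

On Databricks the final write would more often target a Delta table (for example via .format("delta").saveAsTable(...)), but the shape of the job is the same.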
Jobgether is an AI-powered platform that helps job seekers find suitable opportunities. They connect top-fitting candidates with hiring companies, streamlining the recruitment process through objective and fair assessments.
Play a senior tech lead and architect role to build world-class data solutions and applications that power crucial business decisions throughout the organization.
Enable a world-class engineering practice, drive the approach with which we use data, develop backend systems and data models to serve the needs of insights, and play an active role in building Atlassian's data-driven culture.
Maintain a high bar for operational data quality and proactively address performance, scale, complexity, and security considerations.
Atlassian is motivated by a common goal: to unleash the potential of every team. Their software products help teams all over the planet, and their solutions are designed for all types of work. They ensure that their products and culture continue to incorporate everyone's perspectives and experience, and never discriminate based on race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status.
Design, develop, and maintain dbt data models that support our healthcare analytics products.
Integrate and transform customer data to conform to our data specifications and pipelines.
Design and execute initiatives that improve data platform and pipeline automation and resilience.
SmarterDx, a Smarter Technologies company, builds clinical AI that is transforming how hospitals translate care into payment. Founded by physicians in 2020, its platform connects clinical context with revenue intelligence, helping health systems recover millions in missed revenue, improve quality scores, and appeal every denial.
Build and optimize scalable, efficient ETL and data lake processes.
Own the ingestion, modeling, and transformation of structured and unstructured data.
Maintain and enhance database monitoring, anomaly detection, and quality assurance workflows.
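A toy version of the anomaly-detection idea above: flag a day whose ingested row count drifts well outside recent history. The counts and threshold are illustrative only; a real workflow would read them from monitoring tables.

```python
# Toy anomaly check: compare today's ingested row count to the trailing history.
import statistics

def is_anomalous(history: list[int], today: int, z_threshold: float = 3.0) -> bool:
    """Flag today's count if it sits more than z_threshold standard deviations
    from the trailing mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0   # guard against a flat history
    return abs(today - mean) / stdev > z_threshold

recent_counts = [10_120, 9_980, 10_240, 10_050, 9_900, 10_175, 10_010]
print(is_anomalous(recent_counts, today=4_300))    # True: worth alerting on
print(is_anomalous(recent_counts, today=10_090))   # False: normal variation
```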
Launch Potato is a digital media company that connects consumers with brands through data-driven content and technology. They have a remote-first team spanning over 15 countries and have built a high-growth, high-performance culture.
Be the Analytics Engineering lead within the Sales and Marketing organization.
Be the data steward for Sales and Marketing: architect and improve data collection.
Develop and maintain robust data pipelines and workflows for data ingestion and transformation.
Reddit is a community-driven platform built on shared interests and trust, fostering open and authentic conversations. With over 100,000 active communities and approximately 116 million daily active unique visitors, it serves as a major source of information on the internet.
Architect, design, implement, and operate end-to-end data engineering solutions.
Develop and manage robust data integrations with external vendors.
Collaborate closely with Data Analysts, Data Scientists, DBAs, and cross-functional teams.
SmartAsset is an online destination for consumer-focused financial information and advice, helping people make smart financial decisions. With over 59 million people reached each month, they operate the SmartAsset Advisor Marketing Platform (AMP) to connect consumers with fiduciary financial advisors.
Design, build, and maintain scalable data pipelines.
Develop and implement data models for analytical use cases.
Implement data quality checks and governance practices.
MO helps government leaders shape the future. They engineer scalable, human-centered solutions that help agencies deliver their mission faster and better. They are building a company where technologists, designers, and builders can serve the mission and grow their craft.
Contribute to the technical integrity and evolution of the Data Platform tech stack.
Design and implement core features and enhancements within the Data Platform.
Build and maintain robust data extraction, loading, and transformation processes.
Dimagi is an award-winning social enterprise and certified B-corp building software solutions and providing technology consulting services to improve the quality of essential services for underserved populations. They are passionate and committed to tackling complex health and social inequities and working towards a brighter future for all.
Design, build, and maintain ETL/ELT pipelines and integrations across legacy and cloud systems.
Model, store, and transform data to support analytics, reporting, and downstream applications.
Build API-based and file-based integrations across enterprise platforms.
Jobgether is a platform that uses an AI-powered matching process to ensure applications are reviewed quickly, objectively, and fairly. They identify the top-fitting candidates and share this shortlist directly with the hiring company.
Build and Maintain Bronze/Silver Layer Pipelines: ensure core data sources land accurately, on time, and with full lineage.
Lead Data Ingestion, Transformation, and Enrichment: own the end-to-end pipeline from raw file landing through cleansed, conformed staging tables, including deduplication, standardization, code mapping, and entity resolution.
Develop Automated Ingestion Pipelines: use Snowpipe, Matillion, or custom solutions with reliability, observability, and minimal manual intervention in mind.
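For the Snowpipe option named above, an automated ingestion pipe can be created from Python roughly as follows. This assumes the snowflake-connector-python package; the stage, table, and pipe names are hypothetical.

```python
# Hypothetical sketch: a Snowpipe that auto-loads files from an external stage
# into a bronze table as they land. All identifiers are invented placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="etl_service",
    password="...",            # in practice, pulled from a secrets manager
    warehouse="LOAD_WH",
    database="DATA_HUB",
    schema="BRONZE",
)
conn.cursor().execute("""
    CREATE PIPE IF NOT EXISTS member_files_pipe
      AUTO_INGEST = TRUE
      AS
      COPY INTO bronze.member_files
      FROM @landing_stage/member_files/
      FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
""")
conn.close()
```

With AUTO_INGEST enabled, Snowflake loads each new file in response to the stage's event notifications, which keeps manual intervention to a minimum; the pipe's copy history then gives a basic observability trail.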
Precision AQ is building a centralized Data Hub to consolidate fragmented data infrastructure, establish enterprise-wide data governance, and enable AI-ready analytics across its life sciences portfolio. They value innovation and a collaborative environment.
Design, build, and maintain scalable ETL pipelines for large-scale data processing.
Implement data transformations and workflows using PySpark at an intermediate to advanced level.
Optimize pipelines for performance, scalability, and cost efficiency across environments.
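One common performance and cost lever for PySpark pipelines like those described above is broadcasting a small dimension table so a join avoids shuffling the large side. A minimal sketch, with placeholder paths and column names:

```python
# Sketch of a broadcast join: keep the large fact data from being shuffled by
# broadcasting the small dimension table to every executor. Names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("join_optimization").getOrCreate()

facts = spark.read.parquet("s3://example-bucket/curated/transactions/")   # large
dims = spark.read.parquet("s3://example-bucket/reference/merchants/")     # small

enriched = facts.join(F.broadcast(dims), on="merchant_id", how="left")

enriched.write.mode("overwrite").parquet(
    "s3://example-bucket/curated/transactions_enriched/")
```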
Truelogic is a leading provider of nearshore staff augmentation services headquartered in New York. Their team of 600+ highly skilled tech professionals, based in Latin America, drives digital disruption by partnering with U.S. companies on their most impactful projects.
Creating and maintaining optimal data pipeline architecture.
Assembling large, complex data sets that meet functional & non-functional business requirements.
Building the infrastructure required for optimal extraction, transformation and loading of data from a wide variety of data sources using relevant technologies.
Mercer Advisors works with families to help them amplify and simplify their financial lives through integrated financial planning, investment management, tax, estate, and insurance services. They serve over 31,300 families in more than 90 cities across the U.S. and are ranked the #1 RIA Firm in the nation by Barron’s.
Collaborate with cross-functional teams and business units to understand and communicate Ion’s needs and goals.
Develop, improve, and maintain robust data pipelines for extracting and transforming data from log files and event streams.
Design models and algorithms to derive insights and metrics from large datasets.
Intuitive is a global leader in robotic-assisted surgery and minimally invasive care. Their technologies, like the da Vinci surgical system and Ion, have transformed how care is delivered for millions of patients worldwide. They are a team of engineers, clinicians, and innovators united by one purpose: to make surgery smarter, safer, and more human.