Own our data ingestion layer end-to-end, including completing our migration to open-source ingestion tooling and maintaining reliability as the stack evolves.
Build monitoring, failure alerting, and anomaly detection into the stack so issues surface proactively.
Partner with our Technology and Engineering Lead on platform infrastructure, system integrations, and technical initiatives where data is a core component.
Design, build, and maintain databases that power Hologram's operations.
Build and maintain ETL pipelines that move and transform data reliably.
Audit existing pipelines and data models, identify unnecessary complexity, and refactor poor design decisions.
Hologram is building the future of IoT connectivity, delivering internet access to millions of connected devices worldwide. They process over 5 billion transactions per month across their global infrastructure and value a fun, upbeat, and remote-first team united by their mission.
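The monitoring and anomaly-detection work described above can be illustrated with a minimal sketch. This is a hypothetical example, not Hologram's actual stack: it assumes a simple z-score test of the latest pipeline metric (e.g. hourly row count) against a recent baseline window, which is one common way to surface ingestion failures proactively.

```python
from statistics import mean, stdev

def is_anomalous(history, latest, threshold=3.0):
    """Flag `latest` if it deviates from the baseline `history`
    by more than `threshold` standard deviations (z-score test).
    `history` is a list of recent metric values, e.g. hourly row counts."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > threshold

# A sudden drop in ingested rows surfaces as an anomaly:
baseline = [1000, 1020, 980, 1010, 990]
is_anomalous(baseline, 40)    # → True  (ingestion likely broke)
is_anomalous(baseline, 1005)  # → False (within normal variation)
```

Testing the latest point against a trailing window (rather than including it in the statistics) keeps a large outlier from inflating the standard deviation and masking itself.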
Own, operate, and maintain the architecture and infrastructure of our data stack.
Ensure that the data we need to understand and serve our stakeholders is available, accurate, and accessible.
Partner with stakeholders to understand their data needs and identify places where the analytics stack can improve on what they’re doing.
Grafana Labs is a remote-first, open-source powerhouse with more than 20M users of Grafana around the globe. Grafana Labs helps more than 3,000 companies manage their observability strategies with the Grafana LGTM Stack.
Own high-volume data pipelines that ingest, normalize, and serve global hotel inventory, pricing, availability, and transaction data.
Strengthen data modeling & data quality foundations to establish scalable patterns for reusable data products across business domains.
Nuitée is building the API backbone for the global travel industry with a mission to transform a fragmented travel ecosystem. They are a global infrastructure provider trusted by industry leaders like Hopper, Expedia, Priceline, Google, and Uber with teams across the globe and hubs in London, New York, San Francisco, Palma de Mallorca and Casablanca.
Partner closely with business stakeholders to understand their challenges and design end-to-end architecture.
Design, develop, and own robust, efficient, and scalable data models in Snowflake and Iceberg using dbt and advanced SQL.
Build and manage reliable data pipelines and CI/CD workflows using tools like Airflow, Python, and Terraform.
Motive empowers people who run physical operations with tools to make their work safer, more productive, and more profitable. Motive serves nearly 100,000 customers and provides complete visibility and control across a wide range of industries.
Lead and develop three sub-teams: Platform Engineering & ETL, Analytics Engineering, and Data Science & Analytics.
Own the Data Lakehouse architecture: Trino, Iceberg/GCS, Airflow, Airbyte, Redpanda CDC, dbt.
Support product launches with data change management: coordinate data impact analysis for new products (fixed income, global stocks, perps, 24/5 trading) across downstream datasets, dashboards, and reverse ETL.
Alpaca is a US-headquartered self-clearing broker-dealer and brokerage infrastructure for stocks, ETFs, options, crypto, fixed income, 24/5 trading, and more. Their global team of 230+ members is a diverse group of experienced engineers, traders, and brokerage professionals who are working to achieve their mission of opening financial services to everyone on the planet.
Lead and grow a team of data engineers, providing mentorship and technical guidance.
Own execution of customer integrations across multiple product lines, ensuring on-time delivery.
Improve data quality and pipeline reliability by investing in better alerting and resilience.
Afresh is the leading AI company in fresh food, partnering with grocers to order billions of dollars of fresh food. They are on a mission to eliminate food waste and make fresh food accessible to all, and have prevented 200M lbs of food waste in 2025 alone.
Lead and manage a team of ~6 data engineers, driving execution, performance, and career development.
Own Kin’s data platform, including ingestion, storage, transformation, pipeline orchestration, and governance.
Build and optimize scalable data pipelines and architectures using tools like Snowflake, Databricks, dbt, and Airflow.
Kin simplifies homeowners' lives with smarter insurance, expanding to meet all homeowner needs. They employ Kinfolk across 35+ states and are recognized for growth, customer satisfaction, and a focus on long-term sustainability, fostering a culture of meaningful work and real impact.
Primarily responsible for analyzing data integrity challenges and identifying root causes.
Craft client code that is efficient, performant, testable, scalable, and secure.
Actively participate in agile software development, including daily stand-ups and sprint planning.
3Pillar is a company where senior software engineers can collaborate with industry leaders and spearhead transformative projects that redefine urban living, establish new media channels, or drive innovation in healthcare. They are a global team that values well-being and offers flexible work environments.
Design, build, and maintain production data pipelines for multi-phase algorithmic workflows using Python and an orchestration framework such as Prefect, Airflow, or Jenkins.
Build and optimize advanced SQL transformations in Snowflake, including window functions, CTEs, stored procedures, UDFs, and semi-structured data processing.
Build and maintain dbt models for data transformation, identity resolution, and slowly changing dimension (SCD Type 2) tracking across 80+ models and multiple pipeline stages.
Kalibri helps to redefine and rebuild the hotel industry. They are looking for passionate, energetic, and hardworking people with an entrepreneurial spirit, who dream big and challenge the status quo; their team is working on cutting-edge solutions for the industry.
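The slowly changing dimension (SCD Type 2) tracking named above is typically handled by dbt snapshots, but the core mechanic can be sketched in plain Python. This is an illustrative sketch with hypothetical field names (`valid_from`, `valid_to`, `is_current`), not Kalibri's actual models: when a tracked attribute changes, the current row is closed out and a new current row is appended, preserving full history.

```python
from datetime import date

def apply_scd2(dimension, key, attrs, today):
    """Apply an SCD Type 2 update: if the tracked attributes of `key`
    changed, close the current row and append a new current one.
    `dimension` is a list of dicts carrying validity metadata."""
    current = next((r for r in dimension
                    if r["key"] == key and r["is_current"]), None)
    if current and current["attrs"] == attrs:
        return dimension  # no change: keep history as-is
    if current:
        current["valid_to"] = today       # close out the old version
        current["is_current"] = False
    dimension.append({"key": key, "attrs": attrs,
                      "valid_from": today, "valid_to": None,
                      "is_current": True})
    return dimension

dim = []
apply_scd2(dim, "hotel-42", {"brand": "Ace"}, date(2024, 1, 1))
apply_scd2(dim, "hotel-42", {"brand": "Hilton"}, date(2024, 6, 1))
# dim now holds two rows: the closed-out "Ace" version and the current "Hilton" one
```

In dbt the same effect comes from a snapshot with `strategy: check` or `strategy: timestamp`; the point of the sketch is the close-then-insert pattern that keeps every historical version queryable.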
Design fault-tolerant dbt models to synthesize data from multiple sources into mart tables
Design and implement Sigma dashboards and Streamlit apps to provide clear insights into performance
Automate regular reporting workflows to reduce manual effort and increase data consistency
Weedmaps is a global leader in the cannabis industry. They are dedicated to transparency, education, and community and serve cannabis consumers and businesses in the U.S. and worldwide.
Design, build, and maintain scalable batch and real-time data pipelines that power analytics, experimentation, and machine learning
Partner cross-functionally with analytics, product, engineering and operations to deliver high-quality data solutions that drive measurable business impact
Champion data quality, reliability, and observability by implementing best practices in testing, monitoring, lineage, and incident response
Gopuff is reimagining how people purchase everyday essentials, from snacks to household goods to alcohol, all delivered in minutes. They are assembling a team of thinkers, dreamers and risk-takers who know the value of peace of mind in an unpredictable world.
Analyze current SSIS packages, SQL Server Agent jobs, and cross-database synchronization processes.
Design the target-state architecture using Databricks and Delta Lake.
Guide or support the implementation of Databricks pipelines, jobs, and notebooks that replace existing SSIS workloads.
Atmosera empowers businesses to redefine what's possible with modern technology and human expertise. They deliver cutting-edge, integrated solutions that drive business value as a Microsoft Partner with seven specializations, the GitHub AI Partner of the Year, a member of the GitHub Advisory Board, and a member of the prestigious Microsoft Intelligent Security Association (MISA).
Define and work within our data governance practices, including a catalog/dictionary and management of data quality.
Manage lights-out data operations of our ETL/ELT pipelines ranging from streaming inputs to batch file loads, to support customer reporting, development, and operations.
Untangle, normalize, and synthesize data as needed to permit joining and comparison across disparate sources, as well as further analysis, including ML processing.
Evermore is a technology company that administers Smart Benefits to connect people to products and services. They are backed by leading investors including General Catalyst, Define Ventures, Lightspeed Venture Partners, Pinegrove Capital Partners, and Qiming Venture Partners.
Define and execute HappyCo’s overall data strategy aligned with company and product goals
Design and implement a canonical entity model that connects data across operational systems
Oversee the architecture and evolution of HappyCo’s data platform, including the data warehouse, pipelines, and transformation layers
HappyCo builds modern property management software that helps owners and operators deliver better living experiences at scale. They are a values-driven company that offers a flexible, supportive culture. Their team is made up of thinkers, talkers, planners, makers, builders and everything in between.
Design and implement data ingestion and transformation pipelines using PySpark/SparkSQL on Databricks.
Own data pipelines end-to-end in production: freshness, correctness, availability, and SLA adherence.
Build and maintain Delta Lake tables following medallion architecture patterns.
Pismo, founded in 2016, provides a comprehensive processing platform for banking, card issuing, and financial market infrastructure. With over 500 employees across more than 10 countries and now part of Visa, they empower firms to build and launch financial products rapidly with high security and availability standards.
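The medallion architecture mentioned above layers Delta Lake tables as bronze (raw), silver (cleaned), and gold (aggregated). In production this would be PySpark writing Delta tables; the sketch below is a hypothetical pure-Python illustration of the layering idea only, with made-up field names.

```python
# Raw events land in bronze untouched; silver cleans and deduplicates;
# gold aggregates for analytics consumers.
bronze = [
    {"order_id": 1, "amount": "10.50", "country": "BR"},
    {"order_id": 1, "amount": "10.50", "country": "BR"},   # duplicate
    {"order_id": 2, "amount": "bad",   "country": "US"},   # malformed
    {"order_id": 3, "amount": "7.25",  "country": "US"},
]

def to_silver(rows):
    """Clean + deduplicate: parse amounts, drop malformed rows,
    keep one row per order_id."""
    seen, out = set(), []
    for r in rows:
        try:
            amount = float(r["amount"])
        except ValueError:
            continue  # quarantine malformed records in practice
        if r["order_id"] in seen:
            continue
        seen.add(r["order_id"])
        out.append({**r, "amount": amount})
    return out

def to_gold(rows):
    """Aggregate: total order amount per country."""
    totals = {}
    for r in rows:
        totals[r["country"]] = totals.get(r["country"], 0.0) + r["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'BR': 10.5, 'US': 7.25}
```

Keeping bronze immutable is what makes the pattern robust: if a cleaning rule changes, silver and gold can be rebuilt from the raw layer without re-ingesting from source systems.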
Design and implement scalable data models in Snowflake
Build and maintain transformation pipelines using dbt
Develop optimized star/snowflake schemas for analytics and reporting
We are looking for a highly skilled Snowflake Data Engineer. We work closely with business stakeholders and deliver high-quality data models and insights.
Create innovative solutions for handling petabytes of data with billions of rows and joins.
Create real-time and offline feature-generation pipelines, and manage our data infrastructure to keep it reliable and fast!
Develop and productionize data pipelines for our ML models in both bare-metal and cloud environments.
Kayzen is a mobile demand-side platform (DSP) dedicated to democratizing programmatic advertising. They enable leading apps, agencies, media buyers, and brands to run programmatic customer acquisition, retargeting, and brand performance campaigns through their self-serve and managed service options.
Collaborate with Product and Strategy teams to build customized reporting solutions.
Integrate new data sources and third-party APIs to enhance reporting capabilities.
Streamline processes for rapid prototyping to meet new client reporting needs.
Verve For Advertisers is a technology company that empowers brands and agencies to connect moments of discovery and drive measurable outcomes across screens. They have unified the company's demand-side offering, bringing together the largest on-site search intent dataset outside of walled gardens, direct SDK integrations with top apps, and data partnerships with 3M+ websites and LLMs.
Own and evolve the data infrastructure that powers Clever's core data products.
Maintain and improve data pipeline reliability, monitoring and resolving pipeline failures.
Design and implement ingestion for new operational data sources that support Clever's speed-to-match initiative.
Clever Real Estate is a venture-backed technology company aiming to revolutionize real estate transactions. They have built a leading online education platform helping consumers save money and have earned a 4.9 TrustPilot rating with over 3,800 reviews.