Design, build, and maintain enterprise-scale data pipelines on Snowflake Data Platform.
Design, build, and maintain cloud-native AI/ML solutions (AWS, Azure) that support advanced analytics and decision making.
Implement best practices for data quality, observability, lineage, and governance.
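To make the data-quality responsibility above concrete, here is a minimal, hypothetical sketch of a rule-based validator: rules, field names, and thresholds are all illustrative assumptions, not any specific platform's API.

```python
# Minimal data-quality check sketch (hypothetical rules and field names).
# Each rule returns True when a row passes; failures are collected so they
# can be surfaced for observability/alerting.

def not_null(field):
    return lambda row: row.get(field) is not None

def in_range(field, lo, hi):
    return lambda row: row.get(field) is not None and lo <= row[field] <= hi

def run_checks(rows, rules):
    """Return (passed_rows, failures); each failure pairs a row with its failed rule names."""
    passed, failures = [], []
    for row in rows:
        bad = [name for name, rule in rules.items() if not rule(row)]
        if bad:
            failures.append((row, bad))
        else:
            passed.append(row)
    return passed, failures

rules = {"id_present": not_null("id"), "amount_ok": in_range("amount", 0, 1_000_000)}
good, bad = run_checks(
    [{"id": 1, "amount": 50}, {"id": None, "amount": 50}, {"id": 2, "amount": -5}],
    rules,
)
```

In a real pipeline the failure list would typically feed a quarantine table or an alerting hook rather than being discarded.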
FUJIFILM Biotechnologies is dedicated to making a real difference in people’s lives. They partner with innovative biopharma companies to advance vaccines, cures, and gene therapies, fostering a culture that fuels passion and drive, known as Genki.
Architect and develop cloud-native data platforms, focusing on modern data warehousing, transformation, and orchestration frameworks.
Design scalable data pipelines and models, ensure data quality and observability, and contribute to backend services and infrastructure supporting data-driven features.
Collaborate across multiple teams, influence architectural decisions, mentor engineers, and implement best practices for CI/CD and pipeline delivery.
Jobgether uses an AI-powered matching process to ensure applications are reviewed quickly, objectively, and fairly against the role's core requirements. Their system identifies the top-fitting candidates, and this shortlist is then shared directly with the hiring company.
Design, develop, and maintain dbt data models that support our healthcare analytics products.
Integrate and transform customer data to conform to our data specifications and pipelines.
Design and execute initiatives that improve data platform and pipeline automation and resilience.
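"Transforming customer data to conform to a data specification" often reduces to mapping source field aliases onto canonical names. A hypothetical sketch, with an invented spec and field names:

```python
# Sketch: conform heterogeneous customer records to a target specification.
# The spec, aliases, and field names below are illustrative assumptions.

SPEC = {
    "patient_id": ("patientId", "pat_id", "id"),   # accepted source aliases, in priority order
    "visit_date": ("visitDate", "date_of_visit"),
}

def conform(record, spec=SPEC):
    """Map a raw record's fields onto the spec's canonical names."""
    out = {}
    for target, aliases in spec.items():
        for alias in aliases:
            if alias in record:
                out[target] = record[alias]
                break
        else:
            out[target] = None  # missing fields surface as None for downstream checks
    return out

row = conform({"pat_id": "P-17", "visitDate": "2024-03-01"})
```

In a dbt-based stack, the same mapping would usually live in staging models rather than application code.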
SmarterDx, a Smarter Technologies company, builds clinical AI that is transforming how hospitals translate care into payment. Founded by physicians in 2020, our platform connects clinical context with revenue intelligence, helping health systems recover millions in missed revenue, improve quality scores, and appeal every denial.
Lead discovery conversations to understand client goals.
Design and deliver technical roadmaps for data platform adoption.
Build modern, reliable data pipelines and ETL/ELT frameworks.
InterWorks is a tech consultancy that empowers clients with customized, collaborative solutions. They value unique contributions, and their people are the glue that holds their business together as they pursue innovation alongside people who inspire them.
Design, develop, and optimize EDP data pipelines using Python, Airflow, DBT, and Snowflake for scalable financial data processing.
Build performant Snowflake data models and DBT transformations following best practices, standards, and documentation guidelines.
Own ingestion, orchestration, monitoring, and SLA-driven workflows; proactively troubleshoot failures and improve reliability.
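One way to read "SLA-driven workflows; proactively troubleshoot failures" is a task wrapper that retries transient failures with backoff and reports whether the run met its SLA. A stdlib-only sketch (the timings and the flaky task are hypothetical, not Airflow's API):

```python
# Sketch: SLA-aware task wrapper with exponential-backoff retries.
import time

def run_with_retries(task, max_attempts=3, base_delay=0.01, sla_seconds=60.0):
    """Run `task`, retrying on exception with exponential backoff; report SLA compliance."""
    start = time.monotonic()
    for attempt in range(1, max_attempts + 1):
        try:
            result = task()
            break
        except Exception:
            if attempt == max_attempts:
                raise  # exhausted retries: surface the failure to the orchestrator
            time.sleep(base_delay * 2 ** (attempt - 1))  # backoff: 0.01s, 0.02s, ...
    elapsed = time.monotonic() - start
    return result, elapsed <= sla_seconds

calls = {"n": 0}
def flaky_load():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "loaded"

result, within_sla = run_with_retries(flaky_load)
```

Orchestrators such as Airflow provide retries and SLA callbacks natively; this sketch only shows the underlying pattern.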
Miratech is a global IT services and consulting company that brings together enterprise and start-up innovation, supporting digital transformation for some of the world's largest enterprises. They employ nearly 1,000 full-time professionals, and their culture of Relentless Performance has enabled over 99% of Miratech's engagements to succeed.
Architect, build, and operate data infrastructure that powers Tebra’s intelligent features.
Translate business requirements into software solutions that accelerate our ability to deploy AI.
Monitor data pipelines, detect anomalies, and implement automated recovery systems.
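Anomaly detection on pipeline metrics is often as simple as flagging values outside a trailing mean ± k·stddev band. A hypothetical sketch over daily row counts (window size and threshold are assumptions):

```python
# Sketch: flag anomalous pipeline metrics (e.g. daily load row counts) that fall
# outside mean ± k*stddev of a trailing window.
import statistics

def detect_anomalies(values, window=7, k=3.0):
    """Return indices whose value deviates more than k stddevs from the trailing window."""
    flagged = []
    for i in range(window, len(values)):
        hist = values[i - window : i]
        mu = statistics.mean(hist)
        sigma = statistics.pstdev(hist)
        if sigma and abs(values[i] - mu) > k * sigma:
            flagged.append(i)
    return flagged

counts = [100, 102, 98, 101, 99, 100, 103, 5]  # final load dropped sharply
bad_days = detect_anomalies(counts)
```

A flagged index could then trigger an automated recovery action, such as replaying the load for that day.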
Tebra unites Kareo and PatientPop, providing a digital backbone for practice well-being, supporting both products with a shared vision for modernized care. Over 100,000 providers trust Tebra to elevate patient experience and grow their practice, building the future of well-being with compassion and humanity.
Build and Maintain Bronze/Silver Layer Pipelines: ensure core data sources land accurately, on time, and with full lineage.

Lead Data Ingestion, Transformation, and Enrichment: own the end-to-end pipeline from raw file landing through cleansed, conformed staging tables, including deduplication, standardization, code mapping, and entity resolution.
Develop Automated Ingestion Pipelines: use Snowpipe, Matillion, or custom solutions with reliability, observability, and minimal manual intervention in mind.
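The "cleansed, conformed staging" step above (deduplication, standardization, entity resolution) can be sketched in a few lines; the normalization rules and entity key here are hypothetical:

```python
# Sketch: standardize fields, then deduplicate on a normalized entity key.

def normalize(record):
    """Standardize casing and whitespace so equivalent records compare equal."""
    return {
        "name": " ".join(record["name"].split()).title(),
        "zip": record["zip"].strip()[:5],  # drop ZIP+4 suffix for matching
    }

def dedupe(records):
    """Keep the first record seen for each (name, zip) entity key."""
    seen, unique = set(), []
    for rec in map(normalize, records):
        key = (rec["name"], rec["zip"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

rows = [
    {"name": "acme   corp", "zip": "10001-1234"},
    {"name": "Acme Corp", "zip": "10001"},
    {"name": "Globex", "zip": "60606"},
]
staged = dedupe(rows)
```

Production entity resolution usually adds fuzzy matching and survivorship rules; exact-key dedup is only the first pass.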
Precision AQ is building a centralized Data Hub to consolidate fragmented data infrastructure, establish enterprise-wide data governance, and enable AI-ready analytics across our life sciences portfolio. They value innovation and a collaborative environment.
Design, develop, and optimize data architecture and pipelines aligned with ETL/ELT principles.
Architect workflows using DBT to convert raw data into actionable analytics.
Maintain production data pipelines with Python, DBT, Matillion, and Snowflake.
Jobgether is a platform that connects job seekers with partner companies. They use AI-powered matching to ensure applications are reviewed quickly and fairly.
Design, build, and optimize scalable data lakes, warehouses, and data pipelines using Snowflake and modern cloud platforms.
Develop and maintain robust data models (ELT/ETL), ensuring clean, reliable, and well-documented datasets.
Design and manage end-to-end data pipelines that ingest, transform, and unify data from multiple systems.
Cobalt Service Partners is building the leading commercial access and security integration business in North America. Backed by Alpine Investors, with $15B+ in AUM, Cobalt has scaled rapidly since launch through acquisitions and is building a differentiated, data-driven platform.
Design scalable data models and transformation patterns across core data marts.
Partner cross-functionally to translate business problems into data solutions.
Define and drive data governance and certification standards.
AuditBoard is a leading audit, risk, ESG, and InfoSec platform, exceeding $300M ARR and experiencing continued growth. They empower over 50% of the Fortune 500 with their technology, fostering clarity and agility, and are recognized as one of the fastest-growing tech companies in North America.
Design, develop, and maintain ETL/ELT pipelines on cloud-based data platforms.
Build data ingestion, transformation, and orchestration workflows using tools such as Azure Data Factory, Airflow, Fivetran, or similar.
Develop transformations and data processing logic using platforms such as Databricks, Snowflake, or equivalent.
Ankura Consulting Group, LLC is an independent global expert services and advisory firm. They deliver services and end-to-end solutions to help clients at critical inflection points related to conflict, crisis, performance, risk, strategy, and transformation, and consists of more than 2000 professionals.
Assist in designing and implementing Snowflake-based analytics solutions.
Build and maintain data pipelines adhering to enterprise architecture principles.
Act as a technical leader within the team, ensuring quality deliverables.
Jobgether is a company that uses an AI-powered matching process to ensure your application is reviewed quickly, objectively, and fairly against the role's core requirements. They identify the top-fitting candidates, and this shortlist is then shared directly with the hiring company.
Build and operate backend services and automation for the Snowflake data platform.
Support data ingestion pipelines (RDS/Oracle → Snowflake) and reverse ETL (Snowflake → RDS).
Develop and maintain Airflow (AWS MWAA) workflows for ingestion, data quality, and ops automation.
Upwork is the world’s work marketplace, serving everyone from one-person startups to over 30% of the Fortune 100. They provide a powerful, trust-driven platform that enables companies and talent to work together in new ways that unlock their potential. Last year, more than $3.8 billion of work was done through Upwork.
Design Scalable Data Architecture: Build modern, cloud-native data platforms (AWS, Snowflake, Databricks) supporting batch and streaming use cases.
Develop Efficient Data Pipelines & Models: Automate ETL/ELT workflows, optimise data models, and enable self-serve analytics and AI.
End-to-End Data Ownership: Manage ingestion, storage, processing, and delivery of structured and unstructured data.
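A common building block behind "end-to-end ingestion" in batch pipelines is watermark-based incremental loading: pull only rows newer than the last high-water mark, then advance it. A hypothetical sketch (the source rows, state store, and field names are invented):

```python
# Sketch: watermark-based incremental ingestion.

def ingest_increment(source_rows, state):
    """Pull only rows newer than the stored watermark, then advance the watermark."""
    wm = state.get("watermark", 0)
    new_rows = [r for r in source_rows if r["updated_at"] > wm]
    if new_rows:
        state["watermark"] = max(r["updated_at"] for r in new_rows)
    return new_rows, state

state = {}
batch1, state = ingest_increment(
    [{"id": 1, "updated_at": 10}, {"id": 2, "updated_at": 20}], state
)
# A later run sees one old row and one new row; only the new row is ingested.
batch2, state = ingest_increment(
    [{"id": 2, "updated_at": 20}, {"id": 3, "updated_at": 30}], state
)
```

Streaming systems replace the polling loop with change feeds, but the watermark idea carries over as consumer offsets.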
Trustonic provides smartphone locking technology, enabling global access to devices and digital finance. They partner with mobile carriers, retailers, and financiers across 30+ countries, powering device financing solutions. They celebrate diversity and aim to do the right thing for each other, the community, and the planet.
Collaborate with cross-functional teams and business units to understand and communicate Ion’s needs and goals.
Develop, improve, and maintain robust data pipelines for extracting and transforming data from log files and event streams.
Design models and algorithms to derive insights and metrics from large datasets.
Intuitive is a global leader in robotic-assisted surgery and minimally invasive care. Their technologies, like the da Vinci surgical system and Ion, have transformed how care is delivered for millions of patients worldwide. They are a team of engineers, clinicians, and innovators united by one purpose: to make surgery smarter, safer, and more human.
Design, build, maintain, and operate scalable streaming and batch data pipelines.
Work with AWS services, including Redshift, EMR, and ECS, to support data processing and analytics workloads.
Develop and maintain data workflows using Python and SQL.
Southworks helps companies with software development and digital transformation. They focus on solving complex problems and delivering innovative solutions.
Re-architect the data warehouse and drive the expansion of our data platform.
Migrate data pipelines to a modern architecture, improving governance and enabling better internal data sharing.
Lead a high-performing data engineering team by driving collaboration, growth, and accountability.
Engine is transforming business travel into something personalized, rewarding, and simple. They are building a platform that brings together corporate travel, a charge card, and modern spend management in one place, and more than 20,000 companies already rely on Engine. Engine is cash flow positive with rapid growth and has been recognized as one of the fastest-growing travel and fintech platforms in North America.
Extend, optimize, and maintain core data models that support customer-facing reports, machine learning, and generative AI workloads.
Implement automation and operationalize ML model workflows that streamline operational processes, reduce manual work, and improve system efficiency.
Partner with engineering, product, and analytics teams to deliver seamless integrations and customer-facing data products.
Boulevard provides the first and only client experience platform for appointment-based, self-care businesses, empowering customers to give their clients more of the magical moments that matter most. They value diverse backgrounds and believe in equal opportunity for all.
Architect and implement Databricks Lakehouse solutions for large-scale data platforms.
Design and optimize batch & streaming data pipelines using Apache Spark (PySpark/SQL).
Implement Delta Lake best practices (ACID, schema enforcement, time travel, performance tuning).
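Of the Delta Lake practices listed, schema enforcement is easy to illustrate: appends whose records don't match the declared table schema are rejected before any data lands. The sketch below uses plain-Python stand-ins for the table and schema; it is not the Delta Lake API.

```python
# Sketch of schema enforcement on write, in the spirit of Delta Lake's behavior.
# The schema, table, and field names are hypothetical stand-ins.

SCHEMA = {"id": int, "amount": float}

def enforce_append(table, records, schema=SCHEMA):
    """Append records only if every one matches the schema; raise otherwise."""
    for rec in records:
        if set(rec) != set(schema):
            raise ValueError(f"schema mismatch: {sorted(rec)}")
        for field, typ in schema.items():
            if not isinstance(rec[field], typ):
                raise ValueError(f"{field} must be {typ.__name__}")
    table.extend(records)  # validation happens first, so a bad batch appends nothing
    return table

table = []
enforce_append(table, [{"id": 1, "amount": 9.5}])
try:
    enforce_append(table, [{"id": 2, "amount": "oops"}])  # wrong type: rejected
except ValueError:
    pass
```

Delta Lake additionally wraps each append in an ACID transaction, so a rejected write leaves no partial data behind; this all-or-nothing validation mimics that property at toy scale.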
They are looking for a Databricks Architect to design and lead modern Lakehouse data platforms using Databricks. The role focuses on building scalable, high-performance data pipelines and enabling analytics and AI use cases on cloud-native data platforms.
Build and maintain scalable data pipelines from ingestion through transformation and delivery.
Design, build, and maintain our data warehouse and data marts.
Partner with stakeholders to translate business needs into clean data models.
Gurobi Optimization focuses on mathematical optimization. They empower customers to expand their use of mathematical optimization technology in order to make smarter decisions and solve some of the world's toughest and most impactful business problems.