Design, build, and maintain the data foundation behind our next-generation AI agent platform. You'll work closely with AI/ML teams to power training, inference, and continuous learning through highly scalable data pipelines and cloud-native architectures. If you're passionate about data infrastructure, performance optimization, and driving data quality at scale, we want to hear from you.
This role involves leading the Data Solutions team, building and maintaining internal data products for analysts, data scientists, and product/business teams. You'll help democratize data at scale, enabling teams to generate business value through reliable, well-designed internal solutions covering the full data lifecycle, including ingestion, transformations, ML model development, and data serving.
At CoW DAO, data plays a central role in understanding on-chain activity. We're looking for a Data Engineer who's excited to dive deep into blockchain data, enrich it with insights from our off-chain auction system, and build pipelines and infrastructure to support operations and analytics. This is a hands-on role where you'll work across teams and design scalable systems.
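To make the enrichment concrete, here is a minimal sketch that joins hypothetical on-chain settlement records with off-chain auction metadata using pandas; the table and column names (settlements, auctions, auction_id, winning_solver, surplus_eth) are illustrative assumptions, not CoW DAO's actual schema.

    import pandas as pd

    # Hypothetical on-chain settlement events (e.g., decoded from transaction logs).
    settlements = pd.DataFrame({
        "tx_hash": ["0xabc", "0xdef"],
        "auction_id": [101, 102],
        "gas_used": [210_000, 185_000],
    })

    # Hypothetical off-chain auction metadata from the solver competition.
    auctions = pd.DataFrame({
        "auction_id": [101, 102],
        "winning_solver": ["solver_a", "solver_b"],
        "surplus_eth": [0.42, 0.13],
    })

    # Enrich on-chain activity with off-chain context for downstream analytics.
    enriched = settlements.merge(auctions, on="auction_id", how="left")
    print(enriched)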
Evolve our foundational data infrastructure, primarily in support of our finance department. You will own the analytics infrastructure for these products end-to-end -- from data ingestion, to reporting, to activation -- ensuring high data quality and availability for our data sets. Work with the business, engineering, and analytics teams to lay the foundation for the effective reporting and analysis needed to scale up these ventures.
As an Analytics Engineer, you will join JobTeaser's Analytics team and report to the Lead Data person. You will build data models: collecting requirements for data transformations, defining sources of truth for metrics and dimensions, and implementing the models in dbt. You will also govern the data model, maintaining overall consistency, defining policies and standards for data management, and ensuring data quality and reliability.
Help Orita handle massive amounts of data efficiently and reliably. Play a crucial role in unifying our data pipeline with an event taxonomy and normalization layer, ensuring our machine learning models have high-quality data to work with. Enhance our data infrastructure and drive innovation in our data processing capabilities.
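As an illustration of what an event taxonomy and normalization layer does, the sketch below maps source-specific event names onto one canonical schema; the event names and fields are invented for the example and are not Orita's actual taxonomy.

    from datetime import datetime, timezone

    # Hypothetical mapping from source-specific event names to canonical types.
    CANONICAL_EVENTS = {
        "purchase_completed": "order.placed",
        "checkout_success": "order.placed",
        "email_open": "email.opened",
    }

    def normalize_event(raw: dict) -> dict:
        """Map a raw event into one consistent shape so downstream ML features
        are computed from a single schema (field names are assumptions)."""
        return {
            "event_type": CANONICAL_EVENTS.get(raw.get("name"), "unknown"),
            "user_id": str(raw.get("user_id") or raw.get("uid") or ""),
            "occurred_at": raw.get("timestamp")
                or datetime.now(timezone.utc).isoformat(),
            "properties": raw.get("props", {}),
        }

    print(normalize_event({"name": "checkout_success", "uid": 42, "props": {"total": 19.99}}))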
Design, develop, and maintain scalable, secure, and high-performance data platforms. Build and manage data pipelines (ETL/ELT) using tools such as Apache Airflow, dbt, SQLMesh, or similar. Architect and optimize lakehouse solutions (e.g., Iceberg).
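For orientation, here is a minimal Airflow DAG sketch using the TaskFlow API (assuming Airflow 2.x); the task bodies, names, and schedule are placeholders rather than a production pipeline, and transformation would typically be handed off to dbt or SQLMesh models downstream.

    from datetime import datetime
    from airflow.decorators import dag, task

    @dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
    def example_elt():
        @task
        def extract() -> list[dict]:
            # Placeholder: pull raw records from a source system.
            return [{"id": 1, "amount": 10.0}]

        @task
        def load(rows: list[dict]) -> None:
            # Placeholder: write rows to the lakehouse / warehouse tables
            # that dbt or SQLMesh models would then transform.
            print(f"loading {len(rows)} rows")

        load(extract())

    example_elt()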
We are seeking a Data Engineer to help build and maintain a robust, trustworthy, and well-documented data platform: developing and maintaining data pipelines, implementing monitoring and alerting systems, establishing data quality checks, and ensuring that data structures and documentation stay up to date and aligned with business needs.
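As a sketch of what a basic data quality check might look like, the snippet below flags a column whose null rate exceeds a threshold; the table, column, and threshold are assumptions, and in practice such checks are often expressed with tools like dbt tests or Great Expectations instead.

    import pandas as pd

    def check_null_rate(df: pd.DataFrame, column: str, max_null_rate: float) -> None:
        """Raise (so monitoring/alerting can fire) if a column's null rate is too high."""
        null_rate = df[column].isna().mean()
        if null_rate > max_null_rate:
            raise ValueError(
                f"data quality check failed: {column} null rate {null_rate:.2%} "
                f"exceeds {max_null_rate:.2%}"
            )
        print(f"{column}: null rate {null_rate:.2%} within threshold")

    # Hypothetical usage on a freshly loaded table.
    orders = pd.DataFrame({"order_id": [1, 2, 3], "customer_id": [10, None, 12]})
    check_null_rate(orders, "customer_id", max_null_rate=0.5)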
Acquire data from various sources by developing scripts, workflows, and ETL pipelines. The job includes participating in modeling business processes with data models, maintaining the integrity and structure of existing data models in the data warehouse, and identifying internal process improvements. You will troubleshoot data pipelines, create ad-hoc datasets, and collaborate with other teams to understand their needs and objectives.
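As a toy example of the acquisition side, this sketch extracts a few records and loads them into SQLite, which stands in for the warehouse purely for illustration; the source, table, and column names are made up.

    import sqlite3

    def extract() -> list[dict]:
        # Placeholder for a real source call (API, file drop, upstream database);
        # fixed rows keep the sketch self-contained.
        return [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]

    def load(rows: list[dict]) -> None:
        # An in-memory SQLite database stands in for the warehouse here.
        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE IF NOT EXISTS customers (id INTEGER, name TEXT)")
        con.executemany("INSERT INTO customers (id, name) VALUES (:id, :name)", rows)
        print(con.execute("SELECT COUNT(*) FROM customers").fetchone()[0], "rows loaded")
        con.close()

    if __name__ == "__main__":
        load(extract())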
Design, develop, and maintain scalable, secure, and high-performance data platforms. Build and manage data pipelines (ETL/ELT) using tools such as Apache Airflow, dbt, SQLMesh, or similar. This role requires extensive experience in data engineering or platform engineering roles, strong programming skills in Python and Java, and a deep understanding of distributed systems.