The product analytics team is responsible for developing our portfolio of reports and data insights that support product development teams seeking to improve the health of our members. Using data spanning app engagement, clinical outcomes, behavioral health indicators, and claims, you will partner with product and engineering teams to identify and solve the biggest challenges across Omada's customers. The key job responsibilities include owning the data architecture, delivering key performance insights, and collaborating across multiple teams.
Job listings
As a Data Engineer, you will hold a crucial position within our dynamic team, actively contributing to thrilling projects that reshape data analytics for our clients, providing them with a competitive advantage in their respective industries. You will design, develop, and maintain data pipelines using dbt to transform and load data into a data warehouse.
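To give a concrete flavor of the dbt work this listing describes, here is a minimal sketch of a transformation model. dbt models are most commonly plain SQL; this sketch uses a dbt Python model (supported on adapters such as Snowflake and Databricks), and the model, table, and column names are hypothetical.

```python
# models/stg_subscriptions.py -- a hypothetical dbt Python model.
def model(dbt, session):
    # Materialize the transformed result as a table in the warehouse.
    dbt.config(materialized="table")

    # dbt.ref() resolves an upstream model into the adapter's DataFrame type
    # (a Snowpark DataFrame on Snowflake, a PySpark DataFrame on Databricks).
    raw = dbt.ref("raw_subscriptions")

    # Keep active rows and project the columns downstream marts rely on.
    return raw.filter(raw["STATUS"] == "active").select(
        "SUBSCRIPTION_ID", "MEMBER_ID", "STARTED_AT"
    )
```

dbt loads the returned DataFrame into the target table and tracks the model in its dependency graph, which is what makes pipelines like this maintainable at scale.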
As an Azure Data Engineer, you will be an expert in the design and implementation of sophisticated Azure-based data solutions in hybrid and cloud-only scenarios. You will also be a technical consultant for our customers, guiding them on their individual data project journeys. The role involves developing data pipelines and learning algorithms, performing data migrations, and integrating AI and machine learning to automate processes.
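As one illustration of the Azure side of such a role, the sketch below pulls a raw extract from Blob Storage into pandas for transformation; the container, blob, and environment-variable names are hypothetical placeholders.

```python
# A minimal sketch of one Azure pipeline step: pulling a raw CSV extract
# from Blob Storage into pandas. Names below are hypothetical placeholders.
import io
import os

import pandas as pd
from azure.storage.blob import BlobServiceClient

# Credentials are assumed to live in an environment variable.
service = BlobServiceClient.from_connection_string(os.environ["AZURE_STORAGE_CONN"])
blob = service.get_blob_client(container="raw-extracts", blob="sales/2024-01.csv")

# Download the blob into memory and parse it.
frame = pd.read_csv(io.BytesIO(blob.download_blob().readall()))

# A trivial automation step: flag rows needing review before loading onward.
frame["needs_review"] = frame["amount"].isna()
```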
The Senior Manager of Data & Analytics will play a key role in evolving the data practice at ZayZoon, managing a team of data specialists and building strong data foundations to support the next level of scale. This role involves migrating from legacy data systems to a modern data technology stack, introducing data governance, overseeing data operations, and fostering team growth through coaching and mentorship.
The ETL Data Architect will perform data analysis, ELT/ETL design, and support functions to deliver on strategic initiatives. Responsibilities include developing, documenting, and testing ELT/ETL solutions using industry-standard tools (Snowflake, Denodo Data Virtualization, Looker), recommending process improvements, and extracting data from multiple sources. The candidate should be willing to explore and learn new technologies.
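A minimal sketch of such an ELT step using the Snowflake Python connector is shown below; the account settings, stage, and table names are hypothetical.

```python
# A minimal ELT sketch against Snowflake: load staged files into a raw
# landing table, then transform in-warehouse with SQL. Names are hypothetical.
import os

import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)
cur = conn.cursor()
try:
    # Load: copy staged files into a raw landing table.
    cur.execute("COPY INTO RAW_ORDERS FROM @ORDERS_STAGE FILE_FORMAT = (TYPE = CSV)")
    # Transform: build a cleaned table from the raw landing table.
    cur.execute(
        """
        CREATE OR REPLACE TABLE CLEAN_ORDERS AS
        SELECT ORDER_ID, CUSTOMER_ID, TRY_TO_DATE(ORDER_DATE) AS ORDER_DATE
        FROM RAW_ORDERS
        WHERE ORDER_ID IS NOT NULL
        """
    )
finally:
    cur.close()
    conn.close()
```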
Participates in planning, definition, and high-level design of data solutions, exploring alternatives and evaluating new technologies. Develops and maintains scalable cloud-based data infrastructure, ensuring alignment with the organization's decentralized data management strategy. Designs and implements ETL pipelines using AWS services. Establishes and enforces data governance practices, including compliance with data privacy, access controls, and lineage requirements, across the organization's data assets.
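For illustration, one pipeline step on AWS might be orchestrated roughly as in this sketch, which starts and polls a pre-defined AWS Glue job via boto3; the job name and arguments are hypothetical, and real pipelines are usually driven by an orchestrator such as Step Functions or Airflow.

```python
# A minimal sketch of kicking off and monitoring one ETL step with AWS Glue
# via boto3. The job name and argument keys are hypothetical.
import time

import boto3

glue = boto3.client("glue", region_name="us-east-1")

# Start a pre-defined Glue job that extracts from S3 and loads the warehouse.
run = glue.start_job_run(
    JobName="nightly-claims-etl",
    Arguments={"--source_prefix": "s3://raw-bucket/claims/"},
)

# Poll until the run finishes (Glue job runs are asynchronous).
while True:
    status = glue.get_job_run(JobName="nightly-claims-etl", RunId=run["JobRunId"])
    state = status["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        break
    time.sleep(30)

print(f"Glue run finished with state: {state}")
```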
Analyze data to identify trends, patterns, and insights that can drive informed business decisions. Design and create visually appealing and user-friendly Tableau and Snowsight dashboards. Build and maintain data pipelines to ensure the availability of accurate and up-to-date data.
As a Senior Data Engineer at Nearform, your main task will be designing, building, and maintaining scalable data platforms, pipelines, and warehouses using SQL, Python, Spark, and other relevant technologies. You will play a key role in building efficient data solutions, optimizing performance, and ensuring seamless data integration, including ETL processes for large-scale data ingestion and transformation.
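A compact PySpark sketch of that kind of ingestion-and-transformation pipeline follows; the bucket paths and column names are hypothetical.

```python
# A minimal PySpark extract-transform-load sketch. Paths and column names
# are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-etl").getOrCreate()

# Extract: read raw JSON event files from object storage.
events = spark.read.json("s3a://raw-bucket/events/2024/")

# Transform: deduplicate and derive a date partition column.
daily = (
    events.dropDuplicates(["event_id"])
          .withColumn("event_date", F.to_date("event_ts"))
)

# Load: write partitioned Parquet for downstream warehouse ingestion.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3a://curated-bucket/events/"
)
```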
Looking for Databricks Engineers to support companies in cloud transformations. The work involves migrations, data collection, and solution optimization. The client values long-term collaboration and is looking for specialists for a project using Snowflake. Familiarity with cloud environments (Azure and/or AWS) and knowledge of Python and SQL are crucial, with Snowflake knowledge being a plus.
Design, develop, and maintain systems for the acquisition, storage, and retrieval of historical market data from multiple financial exchanges, brokers, and market data vendors. Build and optimize data storage solutions, ensuring they are scalable, high-performance, and capable of managing large volumes of time-series data; implement robust integrations with various market data providers, exchanges, and proprietary data sources.
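To make the storage pattern concrete, here is a minimal sketch that normalizes vendor bars into a canonical schema and appends them to a symbol/date-partitioned Parquet store (using pandas with a Parquet engine such as pyarrow installed); all paths and field names are hypothetical.

```python
# A minimal sketch of one storage pattern for historical market data:
# normalize vendor bars, then write them into a symbol/date-partitioned
# Parquet layout. Paths and field names are hypothetical.
from pathlib import Path

import pandas as pd

def store_daily_bars(raw: pd.DataFrame, root: Path) -> None:
    """Write one vendor's daily bars into a symbol/date-partitioned layout."""
    bars = raw.rename(columns={"ts": "timestamp", "sym": "symbol"})
    bars["timestamp"] = pd.to_datetime(bars["timestamp"], utc=True)
    bars["date"] = bars["timestamp"].dt.date

    # One file per (symbol, date) keeps backtest reads cheap and makes
    # re-ingesting a single bad day idempotent.
    for (symbol, date), chunk in bars.groupby(["symbol", "date"]):
        out = root / f"symbol={symbol}" / f"date={date}"
        out.mkdir(parents=True, exist_ok=True)
        chunk.drop(columns=["date"]).to_parquet(out / "bars.parquet", index=False)
```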