As a Machine Learning Engineer on the Data Engineering team, you’ll partner closely with the Data Science team to build out our foundational Machine Learning platform. You will build internal tools and services to accelerate UD’s model building and deployment process, lead technical initiatives, drive results in a fast-paced, dynamic environment, and lead code reviews to maintain code and data quality.
Remote Data Jobs · Spark
48 results
Job listings
This role requires prior, proven experience building highly scalable ETL pipelines using Databricks and AWS. We are looking for someone who has successfully designed, implemented, and maintained large-scale data infrastructure in a production environment. You will report to the Data Science Manager and work closely with both our Data Science and Product teams.
An exciting opportunity to design, develop, and lead advanced data engineering solutions in a fully remote environment, working with large, heterogeneous datasets to build and optimize data pipelines, data lakes, and data warehouses that drive business insights. You will collaborate closely with cross-functional teams and stakeholders to ensure high-quality deliverables, create impactful reports and dashboards, and implement scalable solutions.
This role leads the data engineering practice, overseeing data engineering, analytics, machine learning, and data science initiatives across complex, large-scale projects. You will set technical direction, define standards, and ensure delivery excellence while actively mentoring and managing a team of engineers. Working closely with cross-functional stakeholders, you will drive architecture decisions, implement scalable data solutions, and apply your expertise to optimize client outcomes.
As an Advanced Data Platform Engineer, design and implement scalable, cloud-native data platforms that integrate modern lakehouse technologies, distributed compute frameworks, and cloud-native services to support diverse analytical use cases and enterprise-scale insights. This role emphasizes technical depth, performance optimization, and governance best practices to deliver secure and reliable solutions. You'll work on systems leveraging Apache Spark, Delta Lake, and Iceberg.
We are growing our data capabilities and are looking for a Senior Data Engineer to help us build the next stage of our analytics and AI journey. This is a hands-on role where you will work across the full data lifecycle—ingesting, transforming, modeling, enriching, and applying advanced analytics. You’ll help us navigate and adopt the rapidly evolving Microsoft Fabric and Azure AI ecosystem.
As a Data Engineer, you will be part of the team delivering the different parts of our production-ready product, while co-designing and implementing an architecture that can scale with the product and the company. You will implement data extraction of all kinds, build solutions for application integrations and task automation, and design and build data processing pipelines for large volumes of data.
We are looking for a highly skilled Senior Data Engineer with strong experience in designing, building, and maintaining scalable data solutions. The ideal candidate has a deep understanding of ETL processes, big data technologies, and modern analytics platforms, and is comfortable working in fast-paced, data-driven environments. Key Responsibilities include designing, developing, and optimizing robust ETL pipelines to support data ingestion, transformation, and processing at scale.
Experian is looking for an experienced Senior AI Data Scientist to join our Analytic Innovations group to focus on developing agentic AI solutions and advanced marketing analytics models that drive strategic decision-making across the consumer credit lifecycle. This individual will have deep technical expertise in coding intelligent agents, experience in marketing analytics for consumer lending products, modeling knowledge, and experience applying AI to real-world challenges in financial services.
This role involves collaborating with clients to understand their IT environments and digital transformation goals. Responsibilities include collecting and managing large volumes of data, creating robust data pipelines for Data Products, and defining data models that integrate disparate data. The role also includes performing data transformations using tools such as Spark, Trino, and AWS Athena and developing, testing, and deploying Data API Products with Python and frameworks like Flask or FastAPI.