Remote Data Jobs · Python
419 results

You'll play a key role in shaping our products and decisions by working across product, engineering, and commercial teams to ensure data is at the center of everything we do. Mine and analyze large datasets to inform product development, marketing strategies, and business processes. Build predictive models that optimize customer experiences, operational efficiency, and revenue growth.
Architect, build, and operate real-time/batch ETL pipelines, agentic orchestration flows, and AI/ML endpoints for autonomous, multi-agent production systems. Contribute actively to team processes, documentation, and operational quality. Build event-driven data workflows, integrate with various connectors, and expose agentic features.
Join a high-impact Talent Pool for advanced analytics, AI, and enterprise data platforms if you’re a LATAM-based Data Scientist with strong English. Solve complex problems using Python, R, SQL, Machine Learning, statistical modeling, forecasting, and cloud-based data ecosystems. The ideal candidate will have experience building predictive and prescriptive models, strong communication skills in English, and a passion for experimentation.
As a Data Systems Engineer at Northbeam, you will translate customer feedback into scalable data pipelines and products, creating, maintaining, and improving integrations and transformations in a complex network. The system is powered by data that spans numerous ad platforms and order management systems, requiring curiosity, experience, and a desire to build data pipelines and applications at scale.
As a Data Scientist on Plaid’s Fraud Data team, you will analyze customer and network traffic to understand how Protect performs across different segments and use cases. You’ll build dashboards and performance metrics that create a clear, shared view of product health for both the team and our go-to-market partners.
The Data Engineering team is focused on the design, development, and support of 'all things data' at OppFi. This includes deploying the Postgres databases that support our applications, our Snowflake Data Warehouse, and multiple Airflow and Hevo ETL pipelines. You will work on PostgreSQL database administration and data engineering.
We’re hiring a Lead Product Scientist to help build our Product Analytics capabilities at Paddle. This role sits at the intersection of Product and Data — quantifying the impact of our product features through historical studies, setting up our experimentation culture, and enabling a cadence of reviewing feature performance.
As an intern, you'll contribute to the development of our in-house Large Language Model (LLM) by tackling the challenges of data collection, validation, and labeling, directly impacting the quality and effectiveness of our LLM. Work closely with our data science and engineering teams, participating in daily standups and collaborating on live production projects to gain hands-on experience in managing and preparing large datasets.
As a Data Engineer on the Growth org, you will collaborate with our cross-functional partners in Data Science, Product, and Finance to build and maintain high-quality data sources and build software that maximizes the value and efficacy of that data. The team operates independently of skill set, so your peers with software engineering backgrounds will execute on data engineering work, and you will contribute to software systems that don’t directly touch our data systems.
This role involves collaborating with clients to understand their IT environments and digital transformation goals. Responsibilities include collecting and managing large volumes of data, creating robust data pipelines for Data Products, and defining data models that integrate disparate data. The role also includes performing data transformations using tools such as Spark, Trino, and AWS Athena, and developing, testing, and deploying Data API Products with Python and frameworks like Flask or FastAPI.