Design, implement, and maintain robust data pipelines and transformation workflows. Work with cloud data warehouses, orchestration tools, and modern ETL frameworks. Collaborate with cross-functional stakeholders to standardize best practices.
Remote Data Jobs · Python
419 results
Job listings
Design, build, and maintain scalable batch and real-time data pipelines that power analytics, experimentation, and machine learning. Partner cross-functionally with analytics, product, engineering, and operations to deliver high-quality data solutions that drive measurable business impact. Develop and maintain curated, well-modeled datasets that serve as trusted sources of truth across the organization.
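The listing doesn't name a specific stack, so the sketch below is only a rough illustration of the batch side of this work: a minimal Python ETL job, with pandas and SQLite standing in for a real warehouse. The file path, column names, and table name are invented for the example.

```python
# Minimal batch ETL sketch, illustrative only: the listing does not specify a
# stack, so the CSV path, column names, and target table are hypothetical.
import sqlite3

import pandas as pd


def extract(csv_path: str) -> pd.DataFrame:
    """Read raw event data from a CSV export."""
    return pd.read_csv(csv_path, parse_dates=["event_time"])


def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Deduplicate events and aggregate daily counts per user."""
    df = df.drop_duplicates(subset=["event_id"])
    return (
        df.assign(event_date=df["event_time"].dt.date)
          .groupby(["user_id", "event_date"])
          .size()
          .reset_index(name="event_count")
    )


def load(df: pd.DataFrame, db_path: str = "warehouse.db") -> None:
    """Write the curated table to a local SQLite stand-in for the warehouse."""
    with sqlite3.connect(db_path) as conn:
        df.to_sql("daily_user_events", conn, if_exists="replace", index=False)


if __name__ == "__main__":
    load(transform(extract("events.csv")))
```

In practice a job like this would be scheduled, retried, and monitored by an orchestrator rather than run by hand.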
- Build and enhance cloud-based data tools and reporting systems.
- Develop robust, scalable data pipelines.
- Leverage emerging AI tools to accelerate development.
Shape and evolve the backbone of Blip's real-time data processing layer. Design and implement streaming and batch data systems that move and transform information reliably and efficiently across Blip’s platform. Champion data modeling excellence, ensuring high-quality, discoverable data for real-time consumption. Key responsibilities include data engineering & pipeline architecture, modeling & storage, and reliability, quality & observability.
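The listing doesn't name Blip's streaming stack, so the sketch below keeps things broker-agnostic: a plain Python generator stands in for a consumer, and the transform is a tumbling-window count per key, one common shape for a real-time aggregation.

```python
# Illustrative only: no broker or schema is specified in the listing, so a
# plain generator stands in for the stream and the event fields are invented.
from collections import defaultdict
from typing import Iterable, Iterator

Event = dict  # e.g. {"key": "ride_requested", "ts": 1712000000}


def tumbling_window_counts(events: Iterable[Event], window_s: int = 60) -> Iterator[dict]:
    """Emit per-key event counts each time a tumbling window closes.

    Assumes events arrive ordered by their integer timestamp ``ts``.
    """
    counts = defaultdict(int)
    window_start = None
    for ev in events:
        if window_start is None:
            window_start = ev["ts"] - ev["ts"] % window_s
        # Flush every window that closed before this event arrived.
        while ev["ts"] >= window_start + window_s:
            yield {"window_start": window_start, "counts": dict(counts)}
            counts.clear()
            window_start += window_s
        counts[ev["key"]] += 1


if __name__ == "__main__":
    stream = [{"key": "ride_requested", "ts": t} for t in (0, 10, 70, 130)]
    for window in tumbling_window_counts(stream):
        print(window)
```

A production version would read from the actual broker, handle late and out-of-order events, and emit results to a sink rather than printing them.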
- Lead the team’s fraud monitoring and detection efforts, owning fraud escalations as Affirm scales internationally. Work with cross-functional partners to develop fraud detection and decisioning systems (a toy rule sketch follows this list). The successful candidate will:
- Develop strategies for monitoring and preventing fraud.
- Leverage data analytics to optimize fraud management strategies.
- Report on fraud trends and loss drivers.
- Assist with the development of new features to improve fraud detection capabilities.
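Affirm's actual features, thresholds, and models are not described in the listing, so the snippet below is a purely hypothetical toy: a single hand-written rule of the kind an analyst might prototype while exploring the data, long before a real decisioning system.

```python
# Toy fraud-monitoring rule, hypothetical: the fields and thresholds here are
# invented for illustration and are not Affirm's actual signals.
from dataclasses import dataclass


@dataclass
class Transaction:
    amount: float
    account_age_days: int
    country_mismatch: bool  # billing country differs from device geolocation


def flag_for_review(txn: Transaction,
                    amount_threshold: float = 1000.0,
                    new_account_days: int = 7) -> bool:
    """Flag high-value transactions from very new accounts or mismatched geos."""
    risky_amount = txn.amount >= amount_threshold
    new_account = txn.account_age_days <= new_account_days
    return risky_amount and (new_account or txn.country_mismatch)


# Usage: this transaction trips the rule because it is large and the account is new.
print(flag_for_review(Transaction(amount=1500.0, account_age_days=2, country_mismatch=False)))
```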
Apply expertise in text mining, natural language processing, and data analysis as part of Amplity's innovative Intel team, delivering impactful data products for healthcare clients in this fully remote role. The ideal candidate is detail-oriented, self-motivated, and thrives in a fast-paced environment.
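Amplity's actual NLP tooling isn't named in the posting, so as a small, self-contained illustration of a text-mining step, the sketch below counts terms and bigrams across a few invented, de-identified notes using scikit-learn's CountVectorizer.

```python
# Illustrative only: the sample documents are invented and the real pipeline
# described in the listing is not specified.
from sklearn.feature_extraction.text import CountVectorizer

documents = [
    "patient reports improved adherence to therapy",
    "therapy discontinued due to adverse event",
    "adherence discussed at follow-up visit",
]

vectorizer = CountVectorizer(stop_words="english", ngram_range=(1, 2))
matrix = vectorizer.fit_transform(documents)

# Total count of each term/bigram across the corpus, most frequent first.
totals = matrix.sum(axis=0).A1  # .A1 flattens the 1-row matrix to a 1-D array
for term, count in sorted(zip(vectorizer.get_feature_names_out(), totals),
                          key=lambda pair: -pair[1])[:5]:
    print(term, int(count))
```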
Responsible for designing, building, and maintaining scalable data infrastructure that empowers the organization to access, process, and analyze information efficiently. Works closely with data science, IT, and business teams to optimize cloud-based solutions, automate data pipelines, and ensure real-time data availability. The role emphasizes hands-on experience with modern data technologies and a focus on data quality, efficiency, and reliability.
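As a small illustration of that data-quality focus, here is a hypothetical check for staleness and null values over a curated table; the column names, lag threshold, and sample data are assumptions, not details from the listing.

```python
# Hypothetical data-quality check; the timestamp column, threshold, and sample
# frame below are invented for illustration.
import pandas as pd


def check_freshness_and_nulls(df: pd.DataFrame,
                              ts_col: str = "updated_at",
                              max_lag: pd.Timedelta = pd.Timedelta(hours=1)) -> list:
    """Return a list of human-readable data-quality issues found in a table."""
    issues = []
    latest = pd.to_datetime(df[ts_col], utc=True).max()
    lag = pd.Timestamp.now(tz="UTC") - latest
    if lag > max_lag:
        issues.append(f"stale data: last update was {lag} ago")
    null_counts = df.isna().sum()
    for col, n in null_counts[null_counts > 0].items():
        issues.append(f"{n} null value(s) in column '{col}'")
    return issues


if __name__ == "__main__":
    sample = pd.DataFrame({
        "user_id": [1, 2, None],
        "updated_at": ["2024-01-01T00:00:00Z"] * 3,
    })
    print(check_freshness_and_nulls(sample))
```

Checks like these typically run after each pipeline load and alert the on-call engineer when they fail.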
You’ll be joining a small, high-impact data engineering team within Federato’s AI/ML organization, focused on building the infrastructure and internal frameworks that empower machine learning engineers. You’ll help ensure that data and tooling are reliable, scalable, and aligned with the needs of our AI-native platform, so that machine learning engineers can develop, deploy, and iterate on AI-powered features.