Remote Data Jobs · Python

Job listings

We are looking for a Data Engineering Intern eager to gain hands-on experience in database management, data pipeline development, and ETL processes. This internship provides an excellent opportunity to work alongside experienced data engineers, building foundational skills in data integration and data quality management. Ideal candidates should have some experience in scripting (Python preferred) and a curiosity about creating effective data solutions to support business operations.
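To give a flavor of the ETL work mentioned above, here is a minimal sketch of the kind of script a Data Engineering Intern might write. The file, table, and column names are hypothetical placeholders, not details from the listing.

```python
# Minimal ETL sketch: extract a CSV, apply basic data-quality transforms, load to SQLite.
# All file, table, and column names are hypothetical examples.
import sqlite3

import pandas as pd


def run_etl(source_csv: str = "orders.csv", target_db: str = "warehouse.db") -> None:
    # Extract: pull raw rows from the source file.
    df = pd.read_csv(source_csv)

    # Transform: dedupe, drop rows missing the key, and normalize types.
    df = df.drop_duplicates()
    df = df.dropna(subset=["order_id"])
    df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")

    # Load: write the cleaned table into a local SQLite database.
    with sqlite3.connect(target_db) as conn:
        df.to_sql("orders_clean", conn, if_exists="replace", index=False)


if __name__ == "__main__":
    run_etl()
```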

This is a 12-week internship within the Data Science team at Ever.ag, assisting in the continued development and deployment of leading data science applications in the food production supply chain space. Potential projects include cleaning and analyzing large, complex data sets for commodity food processors, translating customer problems into application development and data science requirements, assessing cutting-edge machine learning techniques, and developing data science forecasting techniques.
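The listing does not specify which forecasting techniques are used; as one simple illustration of the forecasting work described, here is a sketch of a seasonal-naive baseline on a weekly price series. The data and parameters are synthetic examples, not Ever.ag's methods.

```python
# Seasonal-naive baseline: forecast each future week with the value observed one season earlier.
# The series and season length here are synthetic illustrations.
import pandas as pd


def seasonal_naive_forecast(prices: pd.Series, season_length: int = 52, horizon: int = 4) -> pd.Series:
    """Forecast the next `horizon` points by repeating the last observed season."""
    last_season = prices.iloc[-season_length:]
    forecast_values = [last_season.iloc[h % season_length] for h in range(horizon)]
    future_index = pd.RangeIndex(len(prices), len(prices) + horizon)
    return pd.Series(forecast_values, index=future_index, name="forecast")


# Example usage with two synthetic "years" of weekly observations.
history = pd.Series(range(104), name="price")
print(seasonal_naive_forecast(history, season_length=52, horizon=4))
```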

Design, maintain, and scale streaming ETL pipelines for blockchain data. Build and optimize ClickHouse data models and materialized views for high-performance analytics. Develop and maintain data exporters using orchestration tools. Implement data transformations and decoding logic. Establish and improve testing, monitoring, automation, and migration processes for pipelines. Ensure timely delivery of new data features in alignment with product goals.
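As a rough illustration of the ClickHouse modeling described above, the sketch below creates a raw events table and a materialized view that pre-aggregates transfers per contract per day, using the clickhouse-driver Python client. Table names, columns, and connection details are hypothetical assumptions, not the employer's actual schema.

```python
# Hypothetical ClickHouse setup: a raw blockchain-transfer table plus a
# materialized view that maintains daily per-contract aggregates for analytics.
from clickhouse_driver import Client

client = Client(host="localhost")  # connection details depend on the deployment

client.execute("""
    CREATE TABLE IF NOT EXISTS raw_transfers (
        block_time DateTime,
        contract   String,
        amount     Float64
    ) ENGINE = MergeTree
    ORDER BY (contract, block_time)
""")

client.execute("""
    CREATE MATERIALIZED VIEW IF NOT EXISTS transfers_daily
    ENGINE = SummingMergeTree()
    ORDER BY (contract, day)
    AS SELECT
        contract,
        toDate(block_time) AS day,
        sum(amount)        AS total_amount,
        count()            AS transfers
    FROM raw_transfers
    GROUP BY contract, day
""")
```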

As a Data Scientist, you will turn into reality the business requirements of our digital transformation, in which cloud technology is a key enabler. You will work on the implementation of data MVPs as part of a Digital Builders Organization squad. You will collaborate with Business Units to gain a deep understanding of the business problems that AI and machine learning may solve, and provide technical expertise in executing the use cases.

US Unlimited PTO

Wavicle Data Solutions is hiring a Databricks Solution Architect who will be responsible for leading the design and implementation of scalable, optimized solutions that leverage the latest Databricks features. This individual will work closely with customers, understanding their needs and business drivers and helping them adopt and optimize Databricks for their analytics, data science, and AI/ML workloads.

Strengthen the analytics layer and accelerate advancements in clinical diagnostics and therapies designed to improve brain health. Design and maintain canonical data representations and automated data pipelines that power scientific products, from automated scientific reports to interactive dashboards. Develop and monitor the team’s automated biosignal processing pipelines that generate the analytics layer inputs. Turn scientific questions into robust, versioned data products, working alongside engineers, product managers, scientists, and clinicians.
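The listing does not describe the pipeline internals; as a sketch of one typical biosignal processing step that might feed such an analytics layer, here is a zero-phase band-pass filter using SciPy. The sampling rate, cutoff frequencies, and signal are hypothetical placeholders.

```python
# One illustrative biosignal-processing step: band-pass filter a raw signal
# before downstream analytics. Sampling rate and cutoffs are placeholders.
import numpy as np
from scipy.signal import butter, filtfilt


def bandpass(signal: np.ndarray, fs: float, low_hz: float = 1.0, high_hz: float = 40.0) -> np.ndarray:
    """Apply a 4th-order zero-phase Butterworth band-pass filter."""
    nyquist = fs / 2.0
    b, a = butter(N=4, Wn=[low_hz / nyquist, high_hz / nyquist], btype="bandpass")
    return filtfilt(b, a, signal)


# Example usage with a synthetic 10-second signal sampled at 250 Hz.
fs = 250.0
t = np.arange(0, 10, 1 / fs)
raw = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # 10 Hz tone plus noise
clean = bandpass(raw, fs)
```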

This listing is for an Insurance Analytics Manager who will lead a team in building an analytics and reporting framework that enables data-driven decisions and drives the insurance organization forward. You’ll be responsible not only for ensuring that core reporting delivered to business unit leaders is accurate, timely, and actionable, but also for pushing the boundaries of both the depth and breadth of analysis by leveraging new data sources as well as the tech stack.

$144,188–$203,560/yr

As Senior Data Analyst, Reporting, you will communicate performance clearly and effectively through dashboards, written summaries, and presentations tailored for executives and functional stakeholders. You will design, build, and maintain dashboards and recurring reports in Hex that serve as the official reporting suite, and you will define, document, and uphold the gold standard of metric definitions across Calendly.
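As a sketch of what a documented, single-source-of-truth metric definition might look like in a Python notebook cell (Hex cells are commonly Python or SQL), here is a hypothetical example. The metric name and column names are illustrative, not Calendly’s actual definitions.

```python
# Hypothetical example of a documented metric definition kept in one place
# so every dashboard and report computes it the same way.
import pandas as pd


def weekly_active_users(events: pd.DataFrame) -> pd.DataFrame:
    """Weekly Active Users: distinct users with at least one event in a calendar week.

    Expects columns: 'user_id', 'event_at' (timestamp).
    """
    df = events.copy()
    df["week"] = pd.to_datetime(df["event_at"]).dt.to_period("W").dt.start_time
    return (
        df.groupby("week")["user_id"]
          .nunique()
          .reset_index(name="weekly_active_users")
    )
```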