This is a remote Data Analyst position with our Columbus, OH client. They offer competitive salaries and benefits, as well as a great work-life balance. The position is remote, but you will occasionally go in for meetings or celebrations. The client emphasizes understanding their customers' and consumers' challenges and providing the right resolutions to solve them.
Remote Data Jobs · US
297 results
Job listings
We are seeking a Power BI Report Developer with strong experience in Databricks to design, build, and optimize high-impact analytics solutions. In this role, you will develop interactive dashboards and data visualizations that help transform complex data into actionable insights for business decision-makers. You’ll collaborate closely with cross-functional teams, including data engineering, analytics, and operations, to ensure data accuracy, performance, and scalability.
This leadership role involves building and leading a healthcare data analytics team, partnering with teams across the organization to shape a data roadmap, owning analytic datasets drawn from multiple sources, building automated self-service dashboards, supporting data governance initiatives, and leveraging data to support strategic projects. Requires 8+ years of experience in healthcare data analytics and 3+ years of management experience.
The Epic Clarity Data Analyst will use SQL to analyze clinical, operational, financial, and administrative data generated in Epic’s electronic health record (EHR) and practice management (PM) systems and stored in Epic’s back-end Clarity data model. This role involves leveraging Epic’s in-app reporting tools and building business intelligence dashboards to enable evidence-based decisions. The analyst will also develop data quality tests and build externally facing reports.
Support the Claims organization by developing advanced reporting and dashboard solutions, leveraging expertise in Power BI, DAX, and associated tools, to evaluate data and identify opportunities to improve efficiency and effectiveness in our programs. Utilize business intelligence tools to identify and project significant shifts in key business metrics and communicate observations to business owners and technical partners.
Looking for a Data Engineer II to help design, build, and continuously improve the GoGuardian Analytics and AI/ML ecosystem. This position sits on the Data Engineering team, a group responsible for building and maintaining the core data platform that powers analytics, product insights, and machine learning across the company. Collaborate closely with Data Science, Business Intelligence, and other teams to enable the next generation of data-driven products and AI capabilities.
Build and own PlanetScale's data and analytics efforts from zero to one, and play a leading role in shaping decision making through data. You will be the linchpin connecting the business with data to inform how we are executing against our company goals and where to direct our resources. Your scope will be broad, covering GTM Analytics, Product Analytics, and Analytics Engineering.
Design and build data warehouses and data lakes. Apply your expertise in large-scale data warehousing applications and databases such as Oracle, Netezza, and SQL Server. Work with public cloud-based data platforms, especially Snowflake and AWS. Requires experience designing and developing complex data pipelines, expertise in SQL and at least one scripting language, and skill in tracing and resolving data integrity issues.
Lead and execute the development, validation, and automation of analytical pipelines and statistical models that support metadata-driven clinical data processing, reporting, and regulatory submissions. This is a hands-on technical leadership role, ideal for a senior data scientist or statistical programmer who enjoys coding, problem-solving, and working cross-functionally to bring rigor, reproducibility, and automation to clinical reporting workflows.
Contribute to novel research by designing and building massive-scale deep learning systems focused on modularity, composability, verifiability, and continual learning. Responsibilities include pursuing novel research, partnering with researchers and production engineers to design and run novel experiments, and owning and maintaining experimental frameworks and test benchmarks for ML research in uniquely high-scale and decentralized settings. You will follow best practices: building in the open with a keen focus on designing, testing, and documenting your code.