Design, build, and maintain scalable data processing systems and analytics platforms. In this role, you'll lead our efforts to create robust, in-house data infrastructure that supports our growing data and business intelligence needs.

Data Architecture & System Design: Design and implement efficient data storage and processing solutions for large-scale datasets; architect a new data processing framework to replace existing third-party solutions.

Pipeline Development: Develop and optimize data pipelines for event data ingestion and processing; implement real-time analytics capabilities for clickstream data.

Technology Implementation: Evaluate and integrate open-source technologies such as Apache Druid, Spark, or similar tools based on project requirements and performance needs.

Cross-Team Collaboration: Work closely with backend engineering teams using Go in a Kubernetes environment; participate in architectural decisions for scalable data systems.

Performance Optimization: Optimize query performance and data access patterns for analytics platforms; ensure systems scale efficiently with growing data volumes.

Data Modeling: Design and implement data models that support business intelligence needs and enable efficient reporting.

Documentation & Best Practices: Create comprehensive documentation and establish data engineering best practices across the organization.

Technical Leadership: Provide architectural direction for data systems and mentor junior engineers on data engineering best practices.