Job Description:
• We are seeking a senior software engineer to help design and implement the core data platform powering our next-generation product architecture.
• This role will focus on building scalable data ingestion, transformation, and integration across multiple internal and external data sources, using AWS-native services.
• You will work closely with engineering leadership to establish a robust data lake and transformation layer that supports our APIs, microservices, and downstream analytics.
• The ideal candidate is comfortable working at the intersection of data engineering and application integration.
Requirements:
• Strong leadership skills - you are comfortable both leading software engineers and rolling up your sleeves to implement and deploy the best solution.
• Strong cross-functional collaboration with product, engineering, and analytics teams to ensure data availability, reliability, and performance.
• Strong communication skills and experience translating business requirements into system design, architecture diagrams, and technical documentation.
• Strong communication skills, including presenting complex concepts and solutions to diverse audiences through clear, concise writing (e.g., reports, emails) and effective verbal communication (e.g., presentations, stakeholder updates).
• Strong problem-solving skills and knowledge of applied algorithms for solving real-world problems efficiently.
• You have participated in a team’s on-call rotation, addressing complex problems in real time to keep services operational and highly available.
• Strong hands-on experience implementing and using AWS data services: S3, Glue (ETL jobs, Data Catalog, DataBrew), Athena, Lake Formation, Step Functions, Lambda, Kinesis Data Streams, and API Gateway.
• Expertise in designing and optimizing data pipelines for high-volume, multi-source ingestion of structured and semi-structured data in a multi-tenant data architecture, using data lake table and file formats such as Amazon S3 Tables, Apache Iceberg, Apache Hudi, and Apache Parquet.
• Experience with entity resolution, data cleansing, data quality, and anomaly detection.
• Expert-level skills developing back-end distributed systems and data pipelines using Python, PySpark, SQL, Step Functions, and Lambda.
• Comfort working with cloud data warehouses (Athena, Snowflake, Redshift, or similar).
• Experience building event-driven and notification systems (SNS/SQS, EventBridge, Pub/Sub, webhooks) and orchestration frameworks (Step Functions or equivalent).
• Experience integrating AWS data pipelines with external platforms (e.g., Snowflake, Metabase, reporting tools, etc.).
• Hands-on experience applying security best practices for data storage, transfer, and API access.
• 7-10 years of professional experience in a software development environment, ideally with exposure to big data and data-rich applications.
• Experience working across disciplines, partnering with BI, ML, and product teams to translate ideas into customer-facing features.
• 5+ years working in an AWS cloud-native environment.
Benefits:
• Competitive salary and health benefits for eligible full-time employees.
• 401(k) matching and a subsidy for internet or cell phone costs.
• Generous PTO, in addition to paid holidays, including two days to honor and celebrate the heritage, culture, or traditions that matter most to you -- just tell us when!
• Half Day Summer Fridays!
• A company actively working to dismantle bias in our hiring practices, foster cultural inclusivity, and continuously examine policies and practices to ensure equity.
• We recognize and believe that diversity, equity, and inclusion constantly make us better.
• A fast-paced, team-driven culture that helps you grow your career with learning opportunities, autonomy and ownership, and chances to succeed (or fail -- hey, we’ve all learned through failure).
Apply Now