About arenaflex – Driving Innovation in the Digital Economy
arenaflex is a global leader in next‑generation financial technology and digital services, empowering millions of customers each day with secure, fast, and reliable solutions. Our mission is to reinvent the way people interact with money by leveraging cutting‑edge data platforms, AI‑driven insights, and cloud‑native architectures. As a member of the arenaflex family, you will be part of a vibrant community that values curiosity, collaboration, and continuous learning. Whether you thrive in an agile startup‑like environment or prefer the stability of a well‑established enterprise, arenaflex offers the perfect blend of innovation, scale, and impact.
Why This Role Is a Game‑Changer
We are seeking a passionate, high‑performing Senior Big Data Engineer to join our Remote Engineering Hub in Phoenix. In this role, you will architect and build massive data pipelines that power real‑time analytics, fraud detection, and personalized experiences for our customers worldwide. You will work side by side with visionary architects, data scientists, and product owners, and you will have the autonomy to choose the optimal tools and technologies to solve complex problems.
Key Responsibilities
End‑to‑End Data Pipeline Development: Design, implement, and maintain scalable ETL/ELT pipelines using Hadoop, Spark, Hive, and PySpark to ingest, transform, and store petabyte‑scale data.
Automation & CI/CD: Build robust CI/CD workflows with GitHub/Bitbucket, Jenkins, and Docker/Kubernetes to accelerate deployment cycles and ensure repeatable releases.
Performance Tuning: Optimize Spark jobs, SQL queries, and data models for speed, cost‑efficiency, and reliability.
Cross‑Functional Collaboration: Partner with data scientists, product managers, and business analysts to translate business requirements into technical specifications.
Code Reviews & Quality Assurance: Lead peer code reviews, enforce best practices, and champion test‑driven development for data engineering code.
Infrastructure Management: Deploy and manage big‑data workloads on GCP/AWS, leveraging cloud services such as BigQuery, Redshift, EMR, and Dataflow.
Mentorship & Leadership: Guide a small team of junior engineers, foster a culture of learning, and help them develop technical depth and ownership.
Innovation & Open Source: Contribute to or create open‑source tools, share knowledge across the organization, and stay ahead of emerging technologies.
Documentation: Produce clear design documents, data dictionaries, and operational runbooks to ensure knowledge transfer and operational excellence.
Essential Qualifications
Bachelor’s degree in Computer Science, Software Engineering, Data Engineering, or a related technical discipline (or equivalent practical experience).
3+ years of professional experience building large‑scale data pipelines with Hadoop ecosystem tools (Hive, Pig, HBase) and Spark (Scala, Python, or Java).
Strong command of SQL and advanced analytics using Hive, Spark SQL, or PySpark DataFrames.
Hands‑on experience with Unix/Linux shell scripting for automation and workflow orchestration.
Proven track record delivering production‑grade data solutions in a fast‑paced, Agile environment.
Demonstrated ability to lead small technical teams, conduct effective code reviews, and mentor junior engineers.
Effective written and verbal communication skills, with the ability to convey complex technical concepts to non‑technical stakeholders.
Preferred Qualifications & Nice‑to‑Have Skills
Master’s degree or advanced certifications in Big Data, Cloud Architecture, or Data Engineering.
Experience with cloud platforms (Google Cloud Platform, AWS) and cloud‑native data services.
Familiarity with streaming technologies such as Apache Kafka, Flink, or Kinesis.
Knowledge of containerization (Docker) and orchestration (Kubernetes) for scalable deployments.
Exposure to NoSQL databases (MongoDB, Couchbase, HBase) and their integration into data pipelines.
Understanding of machine‑learning pipelines and experience collaborating with data‑science teams.
Contributions to open‑source projects or public technical blogs and presentations.
Core Skills & Competencies for Success
Technical Depth: Deep expertise in Java, Python, or Scala programming for data‑intensive workloads.
Analytical Mindset: Ability to dissect complex data problems, design elegant solutions, and measure impact.
Problem‑Solving: Proactive troubleshooting, root‑cause analysis, and rapid remediation of production incidents.
Collaboration: Thrive in cross‑functional squads, contribute to shared goals, and respect diverse viewpoints.
Continuous Learning: Stay current with industry trends, experiment with emerging tools, and share findings.
Ownership: Take end‑to‑end responsibility for the health, performance, and reliability of data services.
Career Growth & Learning Opportunities at arenaflex
arenaflex invests heavily in career development. As a Senior Big Data Engineer, you will have access to:
Sponsored certifications (e.g., Google Cloud Professional Data Engineer, AWS Certified Big Data – Specialty).
Mentorship programs pairing you with senior architects and industry thought leaders.
Annual tech conferences, hackathons, and internal “innovation days” where you can pitch new ideas.
Rotational assignments across product, analytics, and platform teams to broaden your skill set.
Clear promotion pathways from Senior Engineer to Staff Engineer, Principal Engineer, and eventually Director of Data Engineering.
Work Environment & Culture at arenaflex
Our remote‑first philosophy means you can work from anywhere in the United States while staying connected through daily stand‑ups, virtual coffee chats, and quarterly in‑person meet‑ups in Phoenix. We champion a culture built on:
Inclusivity: Diverse teams where every voice matters, and inclusive hiring practices that reflect the global community we serve.
Transparency: Open communication channels, regular all‑hands updates, and clear visibility into product roadmaps.
Work‑Life Balance: Flexible schedules, generous paid time off, and a strong emphasis on mental health resources.
Innovation: A “sandbox” environment where you can prototype new data solutions without fear of failure.
Recognition: Peer‑based recognition programs, performance bonuses, and spot awards for extraordinary contributions.
Compensation, Perks & Benefits
arenaflex offers a competitive rate of $28 per hour for this full‑time remote position, complemented by a comprehensive benefits package that includes:
Health, dental, and vision insurance with employer contribution.
401(k) plan with company match.
Generous paid parental leave and family‑friendly policies.
Remote‑work stipend for home‑office setup (ergonomic chair, monitor, high‑speed internet).
Annual tuition reimbursement for continued education.
Employee assistance program, mental‑health resources, and wellness initiatives.
Performance‑based bonuses and equity participation in arenaflex.
How to Apply
If you are ready to shape the future of data at a world‑class technology organization, we want to hear from you. Click the button below to submit your resume, cover letter, and any relevant portfolio links. Our recruitment team will review your application promptly and reach out to schedule a conversation.
Join arenaflex – Your Next Big Data Adventure Awaits
At arenaflex, your expertise will directly influence the products millions rely on daily. This is more than a job; it’s an opportunity to grow, innovate, and make a tangible impact on the financial technology landscape. Take the next step in your career and become part of a team that celebrates curiosity, rewards excellence, and builds the future of data‑driven experiences.