Software Engineer - Data Platform at Flume Health

Job details

About Flume Health

Flume Health is a software company that connects the fragmented healthcare data ecosystem for more efficient health plan administration. Flume’s Relay platform is a single, cloud-native integration platform that lets companies easily connect various systems and vendors for the efficient data exchange increasingly demanded of the modern health plan. Payers, third-party administrators, prescription benefits managers, and health solutions get the simplicity, speed, and security they need to automate data integration and movement between relevant stakeholders. Relay supports multiple data transmission protocols, data types, and file types. By streamlining data flows between payers and solutions, Relay opens up a world of opportunity to improve access to healthcare.

 

The Role:

At Flume Health, we're simplifying healthcare administration through technology. We're seeking a talented engineer to join our team and focus on building the core data processing platform that powers our ETL, data quality, and complex data integration capabilities. If you're passionate about designing and implementing robust, scalable, and efficient data processing systems – the kind of foundational tooling that enables entire product lines – this role is for you. You'll be instrumental in building the engine that handles diverse healthcare data (claims, eligibility, EDI) reliably and securely, drawing inspiration from best-in-class systems like AWS Glue, Google Cloud Dataflow, and Fivetran.
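
For a concrete, purely illustrative flavor of this kind of integration engine, here is a minimal Go sketch of connector-style abstractions such a platform might expose. Record, Source, Sink, and the stub implementations below are hypothetical names for this example, not Flume’s actual API.

    // Hypothetical connector abstractions; illustrative only, not Flume's API.
    package main

    import (
        "context"
        "fmt"
    )

    // Record models one unit of healthcare data (a claim line, an eligibility row, ...).
    type Record map[string]string

    // Source pulls records from an upstream system (SFTP drop, vendor API, database).
    type Source interface {
        Read(ctx context.Context, out chan<- Record) error
    }

    // Sink delivers records to a downstream payer or vendor system.
    type Sink interface {
        Write(ctx context.Context, in <-chan Record) error
    }

    // staticSource and printSink are stand-ins so the sketch runs end to end.
    type staticSource struct{ records []Record }

    func (s staticSource) Read(ctx context.Context, out chan<- Record) error {
        defer close(out)
        for _, r := range s.records {
            select {
            case out <- r:
            case <-ctx.Done():
                return ctx.Err()
            }
        }
        return nil
    }

    type printSink struct{}

    func (printSink) Write(ctx context.Context, in <-chan Record) error {
        for r := range in {
            fmt.Println(r)
        }
        return nil
    }

    func main() {
        ctx := context.Background()
        ch := make(chan Record)
        src := staticSource{records: []Record{{"type": "claim", "id": "C-001"}}}
        go func() { _ = src.Read(ctx, ch) }() // error handling trimmed for brevity
        _ = printSink{}.Write(ctx, ch)
    }

Production connectors would of course handle retries, batching, schema validation, and encryption rather than printing to stdout; this only shows the shape of the abstraction.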

What You’ll Do:
  • Design, build, and operate the core frameworks and infrastructure for distributed data processing using Go, Python, Apache Spark, and Google Cloud Dataproc.
  • Develop reusable tooling, libraries, and services that abstract complexities and empower other engineers to build reliable data pipelines for healthcare integrations (claims, eligibility, EDI); an illustrative sketch of this pattern follows this list.
  • Architect and implement scalable and resilient systems capable of handling both real-time streams and large batch data processing workloads efficiently.
  • Focus on the performance, reliability, observability, and cost-effectiveness of the underlying data processing platform.
  • Collaborate with product managers, backend engineers, and data scientists to understand requirements and deliver foundational data capabilities.
  • Define and champion best practices and standards for building, deploying, and operating data processing services within Flume.
  • Ensure the security and compliance of the data platform, particularly given the sensitive nature of healthcare data.
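
As a hedged illustration of the reusable, stream-and-batch tooling described above, here is a minimal Go sketch: one hypothetical Transform is written once and applied to both a streaming channel and an in-memory batch. The Claim type, normalizeStatus, ApplyStream, and ApplyBatch are invented for this example, not Flume’s actual API.

    // Illustrative only: a reusable transform applied to streaming and batch input.
    package main

    import (
        "fmt"
        "strings"
    )

    // Claim is a toy record type standing in for real claim data.
    type Claim struct {
        ID     string
        Status string
    }

    // Transform is the reusable unit of pipeline logic.
    type Transform func(Claim) Claim

    // normalizeStatus is one example transform: upper-case the status code.
    func normalizeStatus(c Claim) Claim {
        c.Status = strings.ToUpper(c.Status)
        return c
    }

    // ApplyStream runs a transform over a channel of records (real-time path).
    func ApplyStream(t Transform, in <-chan Claim) <-chan Claim {
        out := make(chan Claim)
        go func() {
            defer close(out)
            for c := range in {
                out <- t(c)
            }
        }()
        return out
    }

    // ApplyBatch reuses the same transform over an in-memory slice (batch path).
    func ApplyBatch(t Transform, batch []Claim) []Claim {
        result := make([]Claim, 0, len(batch))
        for _, c := range batch {
            result = append(result, t(c))
        }
        return result
    }

    func main() {
        // Batch path.
        fmt.Println(ApplyBatch(normalizeStatus, []Claim{{ID: "1", Status: "paid"}}))

        // Streaming path: feed two claims through a channel.
        in := make(chan Claim, 2)
        in <- Claim{ID: "2", Status: "denied"}
        in <- Claim{ID: "3", Status: "pending"}
        close(in)
        for c := range ApplyStream(normalizeStatus, in) {
            fmt.Println(c)
        }
    }

In a real platform the same idea would sit behind Spark or Dataproc jobs rather than plain channels, but the reuse principle is identical.
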
What You’ll Need:
  • 3+ years of professional software engineering experience, with a demonstrable focus on building backend systems, infrastructure, or platforms.
  • Strong proficiency in Go for building concurrent, high-performance systems-level software (see the concurrency sketch after this list).
  • Significant experience designing, building, and operating systems leveraging distributed data processing technologies (e.g., Apache Spark, Flink, Kafka Streams) and associated cluster managers (e.g., Dataproc, EMR, YARN).
  • Solid understanding of Python, particularly for data manipulation, scripting, or building frameworks.
  • Experience building, deploying, and operating services on cloud platforms (GCP strongly preferred, AWS acceptable) using containerization (Docker, Kubernetes).
  • Proven ability to design reusable components, libraries, or frameworks that improve developer velocity and system robustness.
  • Deep understanding of distributed systems concepts (scalability, reliability, consistency).
  • Excellent problem-solving skills and the ability to tackle ambiguous technical challenges.
  • Strong collaboration and communication skills.
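
For the Go concurrency requirement above, a small hedged sketch of the classic bounded worker-pool pattern using only goroutines, channels, and sync.WaitGroup from the standard library; the process function and integer records are invented for illustration.

    // Illustrative only: bounded-concurrency processing with goroutines and channels.
    package main

    import (
        "fmt"
        "sync"
    )

    // process stands in for real per-record work (validation, enrichment, etc.).
    func process(record int) string {
        return fmt.Sprintf("processed record %d", record)
    }

    func main() {
        const workers = 4

        jobs := make(chan int)
        results := make(chan string)

        // Fan out: a fixed pool of workers drains the jobs channel concurrently.
        var wg sync.WaitGroup
        for w := 0; w < workers; w++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for r := range jobs {
                    results <- process(r)
                }
            }()
        }

        // Close results once every worker has finished.
        go func() {
            wg.Wait()
            close(results)
        }()

        // Feed the pool; closing jobs lets the workers exit their range loops.
        go func() {
            for i := 0; i < 10; i++ {
                jobs <- i
            }
            close(jobs)
        }()

        for out := range results {
            fmt.Println(out)
        }
    }

Bounding the pool size keeps memory use and downstream load predictable, which matters when the records are large healthcare files rather than integers.
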
Nice to Have:
  • Familiarity with data lake architectures (e.g., Apache Iceberg, Delta Lake) and associated metadata solutions.
  • Experience in compliance-heavy environments (Healthcare/HIPAA, Finance).
  • Familiarity with healthcare data standards (EDI, HL7, FHIR).
  • Experience building internal developer platforms or tooling.
  • Performance tuning and optimization expertise for Spark or similar distributed systems.
  • Experience building data quality frameworks or systems.
Technologies We Use:
  • Languages: Go, Python, SQL
  • Cloud: Google Cloud, Kubernetes, Docker, AWS, Azure
  • Databases & APIs: PostgreSQL, RESTful API
Special Note:
  • Candidates will be required to travel for team meetings once a quarter.
Salary Range:
  • $140K to $180K

 

Flume’s Perks & Benefits
  • Flexible PTO - you’re going to be working hard so enjoy time off
  • A robust stock option plan to give our employees a direct stake in Flume’s success
  • WFH stipend - we’ve always been remote first 
  • Competitive compensation and 401k with a 4% match
  • Comprehensive health coverage (medical, dental, vision) 
Apply now