Engineer
Requisition #: CREQ249684


Data Engineer – Health & Total Rewards (Mercer Compensation AI)

Role Objective:
The Data Engineer is the architect of the Data Highway, responsible for designing and building automated ETL/ELT workflows that power Mercer’s consulting tools. The mission is to ingest fragmented data (from insurance carriers, TPAs, and client HRIS systems) and transform it into a structured format for actuarial modeling and compensation benchmarking, moving millions of sensitive records into a centralized, high-performance analytics environment.

Key Responsibilities

  Pipeline Development: Build and maintain robust data pipelines using Python, SQL, and Spark to process large-scale healthcare claims and salary survey data.
  Data Normalization: Develop logic to clean and standardize diverse data formats.
  Data Governance/Compliance: Build automated masking and de-identification routines to ensure HIPAA and GDPR compliance for Protected Health Information (PHI).
  Cloud Infrastructure: Deploy and monitor data workloads on Azure (Data Factory/Databricks) or AWS (Glue/Redshift) to ensure high availability and scalability.
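The masking and de-identification responsibility above can be sketched in miniature. This is a hypothetical, stdlib-only illustration of one common approach, keyed hashing (pseudonymization) of a direct identifier, so records remain joinable without exposing PHI; the field names and key handling are assumptions, not Mercer's actual implementation.

```python
import hashlib
import hmac

# Illustrative only: in production the key would come from a managed
# secret store (e.g. a key vault), never a hard-coded constant.
SECRET_KEY = b"replace-with-managed-secret"

def pseudonymize(member_id: str) -> str:
    """Return a stable, non-reversible token for a member identifier."""
    return hmac.new(SECRET_KEY, member_id.encode("utf-8"), hashlib.sha256).hexdigest()

# A toy claims record with its direct identifier masked before loading.
record = {"member_id": "A123456789", "claim_amount": 1250.75}
masked = {**record, "member_id": pseudonymize(record["member_id"])}
```

Because the hash is keyed and deterministic, the same member always maps to the same token, which preserves joins across carrier feeds while the raw identifier never reaches the analytics environment.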

Technical Stack & Requirements

  Languages: Expert-level SQL and Python (specifically for data manipulation via Pandas/PySpark).
  Big Data Tools: Hands-on experience with Databricks, Snowflake, or Hadoop ecosystems.
  Orchestration: Experience with Airflow or Azure Data Factory for managing complex job dependencies.
  Modeling: Understanding of Star/Snowflake schemas and Data Vault 2.0 for long-term analytical storage.
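To make the Star-schema requirement concrete, here is a minimal sketch using Python's built-in sqlite3: one fact table and one dimension table, queried with the typical dimensional join used for benchmarking aggregates. The schema and data are invented for illustration and are not Mercer's actual model.

```python
import sqlite3

# In-memory database with a tiny star schema: fact_claims references dim_plan.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_plan (plan_key INTEGER PRIMARY KEY, plan_name TEXT);
CREATE TABLE fact_claims (claim_id INTEGER, plan_key INTEGER, paid_amount REAL);
INSERT INTO dim_plan VALUES (1, 'PPO'), (2, 'HDHP');
INSERT INTO fact_claims VALUES (101, 1, 250.0), (102, 1, 75.5), (103, 2, 40.0);
""")

# A typical dimensional query: join the fact to its dimension and aggregate.
rows = conn.execute("""
SELECT p.plan_name, ROUND(SUM(f.paid_amount), 2) AS total_paid
FROM fact_claims f
JOIN dim_plan p ON p.plan_key = f.plan_key
GROUP BY p.plan_name
ORDER BY p.plan_name
""").fetchall()
# rows -> [('HDHP', 40.0), ('PPO', 325.5)]
```

The same fact/dimension separation scales up directly in Snowflake or Databricks; the design choice is that measures live in the fact table while descriptive attributes live in conformed dimensions.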

Qualifications

  Experience: 3–6 years in Data Engineering, ideally within Healthcare, Insurance, or FinTech.
  Domain Knowledge: Familiarity with ICD-10/CPT codes (medical) or global payroll structures.
  Education: Bachelor’s degree in Computer Science, Data Engineering, or a related quantitative field.

