Data Architect
This role focuses on ingesting operational data from source systems (primarily PostgreSQL), transforming and modeling that data in the data warehouse (currently Snowflake), and preparing reporting-ready datasets and optimized queries for analytics and reporting services.
The engineer partners closely with product and software engineering teams to ensure data is reliable, performant, and easy to consume, whether through optimized SQL queries or analytics-friendly data models.
This role combines deep technical expertise in data engineering with ownership of data architecture, a version-controlled database codebase, and support for multiple environments.
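To make the ingestion pattern above concrete, here is a minimal Snowflake SQL sketch of the PostgreSQL-to-warehouse flow: a batch exported from the operational database lands in a staging table, then merges into a curated table so reloads stay idempotent. Every stage, table, and column name here is hypothetical, not part of any actual codebase.

    -- Land a batch exported from PostgreSQL into a staging table.
    -- (@raw.pg_export_stage, raw.orders_stg, analytics.orders are made-up names.)
    COPY INTO raw.orders_stg
      FROM @raw.pg_export_stage/orders/
      FILE_FORMAT = (TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '"' SKIP_HEADER = 1);

    -- Merge the batch into the curated table; keying on order_id keeps reloads idempotent.
    MERGE INTO analytics.orders AS tgt
    USING raw.orders_stg AS src
      ON tgt.order_id = src.order_id
    WHEN MATCHED THEN UPDATE SET
      tgt.status      = src.status,
      tgt.order_total = src.order_total,
      tgt.updated_at  = src.updated_at
    WHEN NOT MATCHED THEN INSERT (order_id, customer_id, status, order_total, updated_at)
      VALUES (src.order_id, src.customer_id, src.status, src.order_total, src.updated_at);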
Responsibilities:
● Design, develop, and maintain a scalable data warehouse architecture to support analytics and reporting needs.
● Build and manage data ingestion pipelines that move operational data (from PostgreSQL) into the data warehouse (Snowflake) with high reliability and data quality.
● Transform and model raw data into reporting-friendly schemas (e.g., dimensional models, denormalized datasets, or analytics-optimized structures); see the dimensional-model sketch after this list.
● Collaborate with stakeholders to translate business requirements into scalable data solutions.
● Establish and promote data engineering best practices, including naming conventions, documentation, testing, and performance optimization.
● Monitor data pipelines and warehouse performance, proactively identifying and resolving data quality, latency, or scalability issues.
● Contribute to architectural decisions around data modeling, ingestion patterns, and warehouse optimization.
● Participate in agile development processes and take on additional tasks within the department as assigned by management.
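To illustrate the modeling responsibilities above, the sketch below shapes the curated orders data into a small star schema in Snowflake SQL: a customer dimension plus an order fact denormalized for reporting. It reuses the hypothetical names from the introduction's ingestion sketch and is only one of several reasonable designs.

    -- Customer dimension built from staged operational data (hypothetical names).
    CREATE OR REPLACE TABLE analytics.dim_customer AS
    SELECT
        customer_id AS customer_key,
        customer_name,
        region,
        signup_date
    FROM raw.customers_stg;

    -- Order fact keyed to the dimension, with the timestamp truncated to the
    -- daily grain most reports consume.
    CREATE OR REPLACE TABLE analytics.fct_orders AS
    SELECT
        o.order_id,
        o.customer_id AS customer_key,
        DATE_TRUNC('day', o.ordered_at) AS order_date,
        o.order_total
    FROM analytics.orders AS o;

With this shape, a typical reporting query reduces to a single join, e.g. revenue by region and day from fct_orders joined to dim_customer on customer_key.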
Qualifications:
● 4–8 years of professional experience in data engineering.
● Strong understanding of data modeling concepts (e.g., dimensional modeling, star/snowflake schemas, reporting-optimized structures).
● Familiarity with ELT/ETL concepts, data pipelines, and orchestration practices.
● Experience ingesting and transforming data from relational databases, particularly PostgreSQL.
● Solid experience with Git-based version control for database code and data transformations.
● Experience supporting multiple environments (development, staging, production) with controlled deployment processes; see the environment sketch after this list.
● Strong problem-solving skills and the ability to work both independently and collaboratively.
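For the version-control and multi-environment qualifications above, one common Snowflake arrangement (sketched here with hypothetical database names, not a description of the team's actual setup) is one database per environment: the same Git-tracked script is deployed to each environment by changing only the database qualifier, and development can be refreshed from production with a zero-copy clone.

    -- Refresh the development environment from production (zero-copy clone).
    -- analytics_dev and analytics_prod are made-up environment databases.
    CREATE OR REPLACE DATABASE analytics_dev CLONE analytics_prod;

    -- The same version-controlled transformation, deployed to dev first;
    -- promotion to staging and production changes only the database qualifier.
    CREATE OR REPLACE VIEW analytics_dev.reporting.daily_revenue AS
    SELECT order_date, SUM(order_total) AS revenue
    FROM analytics_dev.analytics.fct_orders
    GROUP BY order_date;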