Senior ETL Data Developer
Design, develop, and maintain ETL/ELT pipelines using SSIS (SQL Server Integration Services).
Build and optimize data workflows for ingestion, transformation, and loading into data warehouses.
Work with SQL Server databases for data extraction, transformation, and performance tuning.
Develop and support data pipelines and workflows in Databricks (PySpark/Spark SQL exposure required).
Integrate traditional ETL processes (SSIS) with modern cloud-based data platforms (Databricks/Lakehouse).
Handle data migration tasks from legacy systems to cloud or distributed environments.
Optimize existing SSIS packages for performance, reliability, and scalability.
Write complex SQL queries, stored procedures, and scripts for data processing.
Monitor and troubleshoot ETL jobs, ensuring data accuracy, completeness, and timeliness.
Implement data validation, error handling, and logging frameworks within ETL processes.
Collaborate with data architects, BI teams, and business stakeholders to understand requirements.
Support batch and near-real-time data processing pipelines where applicable.
Work with cloud platforms (Azure/AWS preferred) supporting Databricks-based architectures.
Ensure adherence to data governance, security, and compliance standards.
Participate in code reviews, documentation, and deployment activities following SDLC/Agile practices.