We are seeking a skilled and proactive Data Engineer to join our team. The ideal candidate will have hands-on experience with Snowflake and DBT, as well as foundational knowledge of Python for data scripting and automation. Exposure to AWS cloud environments is a plus. This role offers the opportunity to work on cutting-edge data solutions that support key business initiatives.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using DBT and Snowflake.
- Perform data modeling, transformation, and integration tasks to support business requirements.
- Write and maintain clean, efficient Python scripts for automation and workflow orchestration.
- Ensure data quality, integrity, and performance through best practices and testing.
- Collaborate with cross-functional teams including analytics, operations, and engineering.
- Participate in code reviews, sprint planning, and agile delivery processes.
- Monitor and optimize existing data workflows and troubleshoot issues as they arise.

Must-Have Skills:
- Strong experience with Snowflake (data warehouse design, performance tuning, SQL).
- Proficiency in DBT (Data Build Tool) for transforming and modeling data.
- Basic knowledge of Python, particularly for scripting and automation in a data environment.

Nice-to-Have Skills:
- Familiarity with AWS services such as S3, Lambda, Glue, or Redshift.
- Understanding of CI/CD and version control systems like Git.

Qualifications:
- 6+ years of experience in data engineering or a related discipline.
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration abilities.