Experience: At least 5 years of experience on AWS-based projects.
Technical skills: Proficiency in Python and PySpark for data engineering tasks.
Big Data: Strong knowledge of Big Data technologies and data warehousing concepts.
AWS services: Experience with the AWS data engineering stack, including S3, RDS, Athena, Glue, Lambda, and Step Functions.
SQL: Strong SQL skills for data manipulation and querying.
CI/CD: Experience with CI/CD and infrastructure-as-code tools such as Terraform and GitHub Actions.
Soft skills: Good communication skills and the ability to work in a multicultural team.
Design and implement data pipelines: Develop ETL jobs to ingest and move data within the AWS environment using tools such as AWS Glue.
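To illustrate the extract-transform-load pattern behind these Glue jobs, here is a minimal local sketch in plain Python. It is a toy, not a Glue script: in practice the same three steps would run as a PySpark job reading from and writing to S3, and all field names and sample data below are hypothetical.

```python
import csv
import io

def extract(raw_csv: str) -> list[dict]:
    """Extract: parse raw CSV records (stand-in for reading from S3)."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: drop incomplete records and normalize types."""
    cleaned = []
    for row in rows:
        if not row["amount"]:
            continue  # skip records with a missing amount
        cleaned.append({"customer": row["customer"].strip().lower(),
                        "amount": float(row["amount"])})
    return cleaned

def load(rows: list[dict]) -> dict:
    """Load: aggregate per customer (stand-in for writing to Redshift/S3)."""
    totals: dict = {}
    for row in rows:
        totals[row["customer"]] = totals.get(row["customer"], 0.0) + row["amount"]
    return totals

raw = "customer,amount\nAlice,10.5\nBOB,2\nAlice,4.5\nEve,\n"
print(load(transform(extract(raw))))  # {'alice': 15.0, 'bob': 2.0}
```

In a real pipeline, each stage would typically be a separate, independently testable unit, which is also how Glue jobs and Step Functions states are usually decomposed.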
Data storage and processing: Build and maintain systems for data collection, storage, processing, and analysis using AWS services such as S3, RDS, Athena, and Redshift.
Big Data technologies: Use Big Data technologies such as Hadoop (HDFS) and Spark for data processing.