PySpark Data Engineer JD

Responsibilities
- Data Pipeline Development: Design, develop, and maintain highly scalable, optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy.
- Data Ingestion: Implement and manage data ingestion processes from a variety of sources (relational databases, APIs, file systems) into the data lake or data warehouse on CDP.
- Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements.
- Performance Optimization: Tune PySpark code and Cloudera components to optimize resource utilization and reduce the runtime of ETL processes.
- Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline.
- Automation and Orchestration: Automate data workflows using Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem.
- Monitoring and Maintenance: Monitor pipeline performance, troubleshoot issues, and perform routine maintenance on the Cloudera Data Platform and associated data processes.
- Collaboration: Work closely with data engineers, analysts, product managers, and other stakeholders to understand data requirements and support data-driven initiatives.
- Documentation: Maintain thorough documentation of data engineering processes, code, and pipeline configurations.

Qualifications

Education and Experience
- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
- 3+ years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform.

Technical Skills
- PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques.
- Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
- Data Warehousing: Knowledge of data warehousing concepts and ETL best practices, plus experience with SQL-based tools (Hive, Impala).
- Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools.
- Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks.
- Scripting and Automation: Strong scripting skills in Linux environments.

Soft Skills
- Strong analytical and problem-solving skills.
- Excellent verbal and written communication abilities.
- Ability to work independently and collaboratively in a team environment.
- Attention to detail and commitment to data quality.
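
Illustrative example (not a requirement of the role): a minimal PySpark sketch of the ingest, transform, validate, and load pattern the responsibilities above describe, assuming a Hive-backed data lake on CDP. All database, table, and column names (raw_db.orders, curated_db.orders_clean, etc.) are hypothetical placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("orders-etl")       # job name shown in the Spark / YARN UI
    .enableHiveSupport()         # allows reading/writing Hive tables on CDP
    .getOrCreate()
)

# Ingestion: read a raw Hive table registered in the data lake (hypothetical name).
raw = spark.table("raw_db.orders")

# Transformation and cleansing: drop malformed rows, normalize types,
# and derive an analytics-friendly partition column.
clean = (
    raw
    .filter(F.col("order_id").isNotNull())
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("order_date", F.to_date("order_ts"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .dropDuplicates(["order_id"])
)

# Data quality check: fail fast if any amounts are negative.
bad_rows = clean.filter(F.col("amount") < 0).count()
if bad_rows > 0:
    raise ValueError(f"{bad_rows} rows failed the non-negative amount check")

# Load: write the curated result, partitioned for downstream Hive/Impala queries.
(
    clean.write
    .mode("overwrite")
    .partitionBy("order_date")
    .saveAsTable("curated_db.orders_clean")
)
```

In practice a script like this would be scheduled and parameterized through an orchestrator such as Oozie or Airflow, as noted under Automation and Orchestration.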