Skills Needed:
- Looking for profiles with 5-8 years of experience.
- Proficient in SQL and at least one programming language (Python, Java, or Scala).
- Experience with ETL tools and workflow orchestration (e.g., AWS Glue, Apache Airflow, Luigi, Talend).
- Hands-on experience with cloud platforms: AWS is a must; GCP or Azure is good to have. Familiarity with services such as S3, Redshift, BigQuery, or Data Factory.
- Familiarity with big data tools such as Hadoop, Spark, and Kafka.
- Knowledge of data modeling and warehousing concepts.
- Experience with version control tools (Git) and CI/CD processes.

Key Responsibilities:
- Design, build, and manage robust and scalable data pipelines (batch and real-time).
- Develop ETL processes to acquire, transform, and integrate data from multiple sources.
- Build and maintain data warehouses, data lakes, and other storage solutions.
- Optimize data systems for performance, reliability, and scalability.
- Collaborate with cross-functional teams to understand data requirements and deliver solutions.
- Ensure data quality, consistency, and integrity across systems.
- Implement data governance, privacy, and security best practices.
- Monitor and troubleshoot data pipelines and flows, ensuring high availability.
- Document data architecture, flows, and system designs.