- 10 years of experience in Big Data development, data engineering, and analytics across Investment Banking, Credit Risk, and AML domains.
- Experience setting up data pipelines on multi-cloud implementations (GCP / AWS / Azure).
- Strong development skills in PySpark, Python, Java, and the Hadoop ecosystem, with hands-on experience across GCP, AWS, Azure, and the Databricks ecosystem.
- Good knowledge of data governance, data lineage, and data pipeline setup for batch, transactional, and streaming data on Databricks.
- Hands-on experience leveraging Spark Structured Streaming with Auto Loader to build streaming pipelines that consume data from streaming and transactional sources.
- Extensive experience implementing data architectures: data lake, data warehouse, Lakehouse, Data Vault, and dimensional modelling.
- Strong hands-on experience with Delta Live Tables (DLT) pipelines for transforming data between layers.
- Expertise in setting up ingestion frameworks for various data formats (CSV, fixed-width, binary, JSON, XML) from various sources (Mainframe / ODS / API) into the Lakehouse following the medallion architecture.
- Experience leveraging Docker and Kubernetes to run data pipelines handling complex, high-volume data; orchestrating pipelines with Airflow across cloud platforms using in-house and custom-built operators; and building end-to-end data ingestion frameworks for analytics, ETL, and ELT processes.
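The multi-format ingestion framework mentioned above can be sketched, in plain Python with hypothetical names, as a format dispatcher that routes a raw payload to the right parser and normalizes the result into a list of records (the CSV/JSON/XML branches only; fixed-width and binary would follow the same pattern):

```python
import csv
import io
import json
import xml.etree.ElementTree as ET


def parse_records(payload: str, fmt: str) -> list[dict]:
    """Route a raw payload to a parser and return a list of dict records.

    Illustrative sketch: function name and record shape are assumptions,
    not part of any specific framework.
    """
    if fmt == "csv":
        # DictReader turns the header row into record keys
        return list(csv.DictReader(io.StringIO(payload)))
    if fmt == "json":
        data = json.loads(payload)
        # Normalize a single object into a one-element list
        return data if isinstance(data, list) else [data]
    if fmt == "xml":
        # One record per child element; fields from grandchildren
        root = ET.fromstring(payload)
        return [{field.tag: field.text for field in rec} for rec in root]
    raise ValueError(f"unsupported format: {fmt}")
```

In a medallion setup, a dispatcher like this would feed the Bronze layer; for example, `parse_records("id,name\n1,alice", "csv")` yields `[{"id": "1", "name": "alice"}]` regardless of the source format.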
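The Auto Loader streaming ingestion mentioned above typically reduces to a small set of `cloudFiles` reader options. A minimal sketch, assuming illustrative paths and a hypothetical helper name (the option keys themselves are standard Auto Loader settings):

```python
def autoloader_options(source_format: str, schema_location: str) -> dict[str, str]:
    """Build the cloudFiles option map for a Databricks Auto Loader stream.

    The option keys are real Auto Loader settings; the helper name and
    any paths passed in are illustrative placeholders.
    """
    return {
        "cloudFiles.format": source_format,
        "cloudFiles.schemaLocation": schema_location,  # where inferred schema is tracked
        "cloudFiles.inferColumnTypes": "true",
    }


# On Databricks these options would feed a streaming read (not runnable locally):
# df = (spark.readStream.format("cloudFiles")
#       .options(**autoloader_options("json", "/mnt/_schemas/orders"))
#       .load("/mnt/landing/orders"))
```

Keeping the options in one helper makes it easy to reuse the same ingestion pattern across landing zones while varying only the format and schema-tracking location.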