Experience

- Must have a minimum of 3 years of experience in Data Migration and Data Transformation projects on the Azure cloud
- Proven expertise in building large-scale batch and real-time data pipelines with data processing frameworks on the AWS and Azure cloud platforms
- Experience designing and developing pipelines, ETL, and analytical processes on the Azure cloud using data services such as Azure Data Factory, Azure Data Lake, Azure Databricks, and Azure Synapse, including Spark and SQL Pools
- Hands-on experience with Microsoft Fabric for implementing end-to-end data pipelines, Real-Time Analytics, OneLake integration, and Data Activator features
- Demonstrated experience with ETL development both on-premises and in the cloud using IoT, Azure Stream Analytics, Azure Event Hubs, Azure Databricks, Azure Synapse, SQL, Azure Data Lake, Azure Logic Apps, Power BI, and other ETL technologies
- Experience designing and developing highly available, resilient, scalable, and secure data platforms that adhere to best practices
- Proficiency in writing automation scripts using Python and Spark (Scala) for data processing and transformation
- Strong analytical skills for solving complex issues in both existing and new code
- Solid experience in building cloud data solutions, with a deep understanding of Cloud Storage, Big Data Platform Services, Serverless Architectures, Hadoop, RDBMS, DW/DM, NoSQL databases, and Cloud Security
- Extensive hands-on experience with Azure Data Architecture components: Data Ingestion, Data Integration, Data Pipeline, and Data Orchestration tools and streaming services such as Event Hubs, Databricks, Stream Analytics, Logic Apps, Data Factory, and Data Lakes
- Experience in Data Governance, including Data Security, Data Quality, and Access Controls
- Skilled in monitoring, debugging, diagnosing, and troubleshooting complex production systems
- Self-driven and able to work independently in dynamic, fast-paced environments

Responsibilities

- Translate business requirements into actionable technical specifications, defining application components, enhancement needs, data models, and integration workflows
- Mentor and coach application teams on the adoption of modern tools, technologies, design patterns, and Microsoft Fabric capabilities
- Design optimal cloud-hybrid architectures for Big Data, Data Warehousing, and orchestration pipelines, enabling smooth migration from on-premises systems to the Azure cloud and Microsoft Fabric
- Architect and implement end-to-end data platforms using Azure services and Microsoft Fabric, including Real-Time Analytics, OneLake, Data Activator, and unified pipeline orchestration
- Ensure all solutions are highly available, resilient, scalable, and secure, adhering to Azure and Fabric best practices
- Develop robust automation scripts and transformation logic using Python and Spark for scalable, high-performance data processing
- Define enterprise-wide strategies, policies, and best practices for Big Data and cloud-native data integration and analytics, leveraging both the Azure and Fabric platforms
- Create and maintain detailed technical design documentation and provide accurate estimates for storage, compute resources, cost efficiency, and operational readiness
- Collaborate with clients, internal stakeholders, and delivery teams to align business goals with technical execution, ensuring project success through clear communication and agile delivery