Databricks Architect
- Requisition #: 132705
- Post date: Nov 08, 2024
Skill: Databricks Architect
Role: T0
About the Role: As a Databricks Architect, you will play a key role in architecting, designing, and implementing data solutions on the Databricks platform. You will work closely with data engineers, data scientists, and other stakeholders to ensure scalable and efficient data pipelines, analytics, and machine learning solutions. Your expertise will guide the organization in leveraging Databricks' capabilities to meet business objectives. This role includes working with cross-functional teams to develop data-driven strategies, creating comprehensive reports, ensuring data integrity, and contributing to RFP processes.
Key Responsibilities:
- Solution Architecture:
- Design and architect scalable and robust data solutions on the Databricks platform.
- Develop blueprints and frameworks for large-scale data processing, transformation, and analysis.
- Provide architectural guidance for integrating Databricks with cloud platforms and services (e.g., Azure, AWS, GCP).
- Data Engineering:
- Lead the development of data pipelines, ETL processes, and data transformation jobs using Databricks and Spark.
- Optimize data workflows for performance, scalability, and cost-efficiency.
- Collaboration:
- Work closely with data engineers, data scientists, and business stakeholders to understand requirements and translate them into technical solutions.
- Mentor and guide junior data engineers and architects in best practices.
- Security & Compliance:
- Ensure data security, governance, and compliance in all Databricks implementations.
- Implement best practices for data security, including data encryption, access controls, and audit logging.
- Performance Tuning:
- Optimize Databricks clusters and jobs for performance and cost.
- Troubleshoot and resolve performance issues related to data processing and analytics.
- Innovation:
- Stay updated with the latest trends and technologies in data engineering, cloud computing, and big data.
- Drive continuous improvement initiatives in data architecture and engineering practices.
Qualifications:
- Education: Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
- Experience:
- 10+ years of experience in data engineering, data architecture, or related fields.
- 5+ years of experience with Databricks and Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP.
- Proven experience in designing and implementing large-scale data solutions.
- Technical Skills:
- Proficiency in Databricks, Apache Spark, and related technologies.
- Strong knowledge of Python, Scala, or Java.
- Experience with data warehousing, ETL tools, and big data technologies.
- Familiarity with data security, governance, and compliance best practices.
- Soft Skills:
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration skills.
- Ability to lead and mentor teams.
Preferred Qualifications:
- Certifications in Databricks or cloud platforms (AWS, Azure, GCP).
- Experience with machine learning and data science workflows.
- Familiarity with CI/CD practices and DevOps tools.