Automation & Benchmarking Engineer
Role Overview
We are seeking Automation & Benchmarking Engineers to build scalable pipelines and tools that automate dataset ingestion, evaluation, and benchmarking of AI systems.
This role blends software engineering, automation scripting, and AI performance analysis to accelerate evaluation workflows and generate insights across Google’s AI tools.
Key Responsibilities
- Design and develop end-to-end automation pipelines for evaluation workflows: prompt submission, response collection, result aggregation, and reporting (see the illustrative sketch after this list).
- Integrate evaluation tooling with developer surfaces like Gemini CLI, VS Code, and GitHub.
- Conduct competitive benchmarking against peer AI tools to measure correctness, verbosity, and usefulness.
- Build dashboards and visualization reports using Looker Studio, BigQuery, or Python-based tools.
- Optimize system performance, automate error logging, and maintain reproducibility across evaluations.
- Collaborate with TPM and data specialists to deliver evaluation automation at scale.
- Ensure compliant source code management and deployment practices in GitLab / Bitbucket environments.
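
To make the evaluation-workflow responsibility above concrete, here is a minimal Python sketch of the kind of pipeline this role involves. Every function, file name, and metric below is a hypothetical placeholder for illustration, not part of any Google or third-party tooling; a real pipeline would call an actual model endpoint and persist results to a warehouse such as BigQuery.

```python
"""Illustrative evaluation pipeline sketch: prompt submission,
response collection, result aggregation, and reporting."""

import csv
import statistics
from dataclasses import dataclass


@dataclass
class EvalResult:
    prompt_id: str
    response: str
    score: float  # e.g. correctness judged against a reference answer


def submit_prompt(prompt: str) -> str:
    # Placeholder: a real implementation would call the model under test.
    return f"echo: {prompt}"


def score_response(response: str, reference: str) -> float:
    # Placeholder metric: exact match; real pipelines would compute
    # task-specific metrics such as correctness, verbosity, or usefulness.
    return 1.0 if response.strip() == reference.strip() else 0.0


def run_pipeline(dataset: list[dict]) -> list[EvalResult]:
    # Prompt submission and response collection for each dataset row.
    results = []
    for row in dataset:
        response = submit_prompt(row["prompt"])
        results.append(
            EvalResult(row["id"], response, score_response(response, row["reference"]))
        )
    return results


def report(results: list[EvalResult], path: str = "eval_report.csv") -> None:
    # Result aggregation and reporting: per-item scores plus a summary metric.
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["prompt_id", "score"])
        writer.writerows((r.prompt_id, r.score) for r in results)
    print(f"mean score: {statistics.mean(r.score for r in results):.3f}")


if __name__ == "__main__":
    demo = [{"id": "1", "prompt": "2+2?", "reference": "echo: 2+2?"}]
    report(run_pipeline(demo))
```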
Required Skills & Experience
- 4–8 years of experience in software engineering, test automation, or AI evaluation tooling.
- Proficiency in Python, JavaScript, or Go for automation and data handling.
- Experience building pipelines or automation frameworks (Airflow, Beam, or custom orchestration).
- Strong understanding of REST APIs, LLM integration, and evaluation metrics computation.
- Hands-on experience with data visualization tools (Looker, Tableau, Plotly).
- Familiarity with cloud platforms (GCP preferred) and source control workflows (GitHub/GitLab).
Preferred Qualifications
- Experience benchmarking AI-assisted developer tools (e.g., Copilot, TabNine, Replit).
- Knowledge of ML model evaluation metrics and comparative performance analysis.
- Background in computer science, applied machine learning, or automation framework development.