Senior Staff Machine Learning Engineer - AI Safety and Guardrail

Airbnb · United States · Greenhouse
Posted Date:

September 8, 2025

Employment Type:

Not specified

Work Arrangement:

On-site/Hybrid

Skills & Technologies

Software Engineering (preferred)

Job Description

Airbnb was born in 2007 when two hosts welcomed three guests to their San Francisco home, and has since grown to over 5 million hosts who have welcomed over 2 billion guest arrivals in almost every country across the globe. Every day, hosts offer unique stays and experiences that make it possible for guests to connect with communities in a more authentic way.

The Community You Will Join:

Machine Learning and Artificial Intelligence are at the heart of the Airbnb product. From Trust to Payments, and from Customer Service to Marketing, we rely on ML and AI to ensure that guests and hosts have the best possible experience with Airbnb.

The Community Support Products (CSP) Machine Learning team is the core team responsible for driving CSxAI (Community Support x Artificial Intelligence) initiatives, adopting Generative AI technologies to enable an intelligent, scalable, and exceptional service experience. The team develops and enhances a range of AI models, ML services, and tools — including LLM fine-tuning and optimization, RAG/search, LLM evaluation and testing automation, feedback-based learning, and guardrails — for a wide range of applications at Airbnb.

The Difference You Will Make:

As a key member of Airbnb’s CSP Machine Learning team, the Guardrail and AI Safety Engineer ensures that our AI-powered systems — spanning chatbots, AI assistants, and Agent Copilot — are reliable, safe, and aligned with our trust and governance standards. You will collaborate cross-functionally with ML, engineering, product, and policy teams to build, monitor, and enforce technical safety mechanisms across the AI system lifecycle.

A Typical Day:

    • Collaborate with cross-functional teams to identify issues, evaluate risks, design monitoring systems, tailor safeguard measures, and deploy efficient solutions to ensure safe, robust, and responsible AI adoption in CS products.

    • Design and implement appropriate guardrails to mitigate risks like hallucinations, privacy breaches, prompt injections, harmful responses, or bias.

    • Set up continuous risk monitoring pipelines and alerting to enable human-in-the-loop feedback and mitigation.

    • Collaborate with trust, security, legal, and operations teams to enable risk management.

    • Collaborate with evaluation and data platform teams to design and build a data flywheel for fixing model failure modes and improving guardrails.

    • Partner with product, design, trust & safety, and legal teams to ensure AI-driven features meet global privacy and compliance standards.

    • Own documentation, guardrail white papers, and onboarding materials to support knowledge sharing and auditability.

    • Partner with ML infrastructure to scale safety features across Airbnb products.

Your Expertise:

    • PhD/Master’s degree, preferably in CS, or equivalent experience

    • 7/10+ years of work experience in developing and deploying machine learning models in production

    • Strong understanding of machine learning principles and algorithms

    • Hands-on programming experience in Python and in-depth knowledge of machine learning frameworks

    • 2+ years of experience with one or more of the following broader areas: Content Safety/Integrity, ML Fairness and Bias, Responsible AI, AI Model Security, or related areas

    • Nice to have: Experience with AI technologies in customer support products, LLM alignment techniques (SFT, RLHF, DPO, etc.), and LLM evaluation.

Your Location:

This position is US - Remote Eligible. The role may include oc