Job Description

We are seeking an LLM Algorithm Engineer (Safety First) to join our AI/ML team, with a focus on building robust AI guardrails and safety frameworks for large language models (LLMs) and intelligent agents. This role is pivotal in ensuring trust, compliance, and reliability in Binance's AI-powered products such as Customer Support Chatbots, Compliance Systems, Search, and Token Reports.

Responsibilities:

- Design and build an AI Guardrails framework as a safety layer for LLMs and agent workflows
- Define and enforce safety, security, and compliance policies across applications
- Detect and mitigate prompt injection, jailbreaks, hallucinations, and unsafe outputs
- Implement privacy and PII protection: redaction, obfuscation, minimisation, data residency controls
- Build red-teaming pipelines, automated safety tests, and risk monitoring tools
- Continuously improve guardrails to address new attack vectors, policies, and regulations
- Fine-tune or optimise LLMs for trading, compliance, and Web3 tasks
- Collaborate with Product, Compliance, Security, Data, and Support to ship safe features
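To give a flavour of the guardrail work described above, here is a minimal, hypothetical sketch of two such checks: regex-based PII redaction and a naive keyword screen for prompt-injection attempts. All names, patterns, and placeholders are illustrative assumptions, not Binance's actual implementation; production guardrails would use far more robust detectors.

```python
import re

# Illustrative PII patterns (assumption: email and phone only; real systems
# cover many more categories and locales).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

# Illustrative prompt-injection markers; real detectors use classifiers,
# not keyword lists.
INJECTION_MARKERS = (
    "ignore previous instructions",
    "disregard your system prompt",
)

def redact_pii(text: str) -> str:
    """Replace each matched PII span with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}_REDACTED]", text)
    return text

def flag_injection(text: str) -> bool:
    """Return True if the text contains a known injection marker."""
    lowered = text.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)
```

In a guardrail layer, checks like these would run on both user input (before it reaches the LLM) and model output (before it reaches the user), with flagged messages routed to blocking or human review.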

About Binance

Binance is a leading global blockchain ecosystem behind the world's largest cryptocurrency exchange by trading volume and registered users.
