In this role, you will act as a bridge between AI development, security, architecture, data privacy, and legal functions. You will help define and embed governance, risk, and compliance practices into AI solutions, ensuring safe and compliant-by-design AI usage across the organization.
Responsibilities
- Develop and maintain AI guidelines, policies, and technical guardrails for safe AI development and use
- Perform AI risk assessments and triage for new use cases and solutions
- Identify and evaluate risks related to AI systems (e.g., data leakage, misuse, third-party dependencies, governance gaps)
- Collaborate with Security, Architecture, Data Privacy, and Legal to embed secure and compliant AI practices
- Support AI vendor and third-party assessments as part of procurement and evaluation processes
- Design and deliver training and guidance on responsible and secure AI usage
Requirements
- Strong understanding of AI technologies, including GenAI and AI agents, and how they are built and deployed
- Solid knowledge of information security principles (secure architectures, access control, risk management)
- Familiarity with regulatory and standards frameworks such as the EU AI Act, GDPR, relevant ISO/IEC standards (e.g., ISO/IEC 42001), and the NIST AI Risk Management Framework
- Experience translating regulatory and risk requirements into practical technical guidance
- Proven experience facilitating risk assessments and cross-functional collaboration
- Excellent communication skills and the ability to work effectively with technical, security, legal, and business stakeholders
Nice to have
- Background in information security, AI/ML engineering, or enterprise architecture
- Experience with cloud-based or third-party AI platforms