Cybersecurity & AI Governance Specialist
Secure Enterprise AI Deployments
Sydney / Contract / Hybrid
Ongoing client assignments through Galileo Search
Deploying AI into an enterprise isn't just a technology decision — it's a security decision. When Claude accesses internal systems via MCP, processes sensitive documents, and generates outputs that employees act on, the attack surface changes. The risk profile changes. The governance requirements change. And most companies haven't caught up yet.
They need someone who has.
Galileo Search is building Australia's first specialist marketplace of certified Anthropic professionals. Cybersecurity & AI Governance Specialists are the people who ensure enterprise Claude deployments meet security standards, comply with regulations, and don't create risks that nobody thought about until it was too late.
What you'll do on client engagements
- Conduct AI security assessments for enterprise Claude deployments — evaluating prompt injection risks, data exfiltration vectors, access control gaps, and output validation requirements
- Design security architecture for MCP connections — ensuring Claude's access to internal systems follows least-privilege principles with comprehensive audit logging
- Develop AI governance frameworks aligned with the Australian AI Ethics Principles, the NIST AI RMF, and industry-specific regulatory and assurance requirements (APRA CPS 234, the ACSC Essential Eight, ISO 27001)
- Build AI risk registers and risk treatment plans — identifying, assessing, and mitigating risks specific to large language model deployments
- Advise boards and executive leadership on AI risk posture — translating technical security findings into business risk language
- Work with Data Governance Leads to ensure data access controls and classification are enforced at the AI layer
- Design incident response procedures for AI-specific scenarios — model misuse, data leakage through AI outputs, adversarial attacks
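The risk registers mentioned above are where much of this work gets recorded. As a purely illustrative sketch (the entry, scoring bands, and field names here are hypothetical, not a client or regulatory standard), a register row for an LLM-specific risk might look like:

```python
from dataclasses import dataclass, field

@dataclass
class AIRiskEntry:
    """One row in an AI risk register (all fields illustrative)."""
    risk_id: str
    description: str
    likelihood: int  # 1 (rare) .. 4 (almost certain)
    impact: int      # 1 (minor) .. 4 (severe)
    treatments: list = field(default_factory=list)

    def rating(self) -> str:
        # Collapse a simple 4x4 likelihood-impact matrix into a band.
        # Real schemes vary by client and framework.
        score = self.likelihood * self.impact
        if score >= 12:
            return "Extreme"
        if score >= 6:
            return "High"
        if score >= 3:
            return "Medium"
        return "Low"

# Example entry: prompt injection via content retrieved over MCP.
entry = AIRiskEntry(
    risk_id="AI-001",
    description="Prompt injection via untrusted documents retrieved over MCP",
    likelihood=3,
    impact=3,
    treatments=[
        "Input sanitisation",
        "Output validation before downstream use",
        "Least-privilege MCP tool scopes with audit logging",
    ],
)
print(entry.rating())  # -> High (3 * 3 = 9 falls in the High band)
```

In practice the specialist owns the scoring methodology and treatment plans; the structure above just shows how LLM-specific risks slot into a conventional register alongside traditional InfoSec entries.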
What you bring
- 7+ years in cybersecurity, information security, or IT risk management
- Experience conducting security assessments, penetration testing, or security architecture reviews
- Understanding of enterprise security frameworks — ISO 27001, NIST CSF, Essential Eight, CPS 234, or IRAP
- Ability to assess novel technology risks — you don't need a playbook for every scenario. You can think from first principles about what could go wrong, and design controls proportionate to the risk
- Strong communication skills — you can explain AI security risks to a board audit committee without either terrifying them or boring them
- Experience in regulated industries (government, financial services, healthcare, critical infrastructure) is highly valued
Bonus (not required)
- CISSP, CISM, CRISC, or equivalent security certification
- Experience with AI/ML security risks — adversarial attacks, prompt injection, model poisoning, data poisoning
- IRAP assessor status or experience supporting IRAP assessments
- Familiarity with Anthropic's security model, Claude's safety features, or LLM security research
- Experience with GRC platforms (ServiceNow GRC, Archer, OneTrust)
What Galileo offers
- Great contract day rates — reflecting the seniority and impact of the role
- You're the person who signs off on security before any Claude deployment goes live — that's real authority, not just an advisory role
- Free Anthropic certification — you'll understand Claude's architecture, safety mechanisms, and security model from the inside
- Work with Data Governance Leads, Solutions Architects, and Engineering teams who handle implementation while you own risk
- First-mover positioning in AI security — the professionals who develop AI governance expertise now will define the standards that every enterprise follows
AI governance is one of the fastest-growing specialties in cybersecurity. Every enterprise deploying AI needs someone who can bridge traditional InfoSec with the novel risks that LLMs introduce. Every board audit committee is asking about AI risk. Every regulator is developing AI-specific guidance. The security professionals who build this expertise now won't just be in demand — they'll be writing the frameworks everyone else follows.
To apply: Share your CV and a brief overview of a security assessment or governance framework you've developed — particularly any work involving emerging technology risks. If you've thought about AI security in any capacity, we want to hear about it.