Outstanding long-term contract opportunity! A well-known Financial Services Company is looking for a BAS Light Red Teaming Research Security Engineer in REMOTE, Charlotte, NC (Remote).
Work with the brightest minds at one of the largest financial institutions in the world. This is a long-term contract opportunity that includes a competitive benefit package! Our client has been around for over 150 years and is continuously innovating in today's digital age. If you want to work for a company that is not only a household name, but also truly cares about satisfying customers' financial needs and helping people succeed financially, apply today.
Contract Duration: 12 Months+ with possible extension or FTE conversion - W2
Overview:
Our Offensive Security Research team is looking for a Cyber Security Researcher to perform cybersecurity testing against AI technologies from a red team perspective. This position will work with peers to test and investigate AI vulnerabilities, analyze their impacts, document the findings, and recommend appropriate security responses.
Required Skills & Experience
2+ years of hands-on Red Team/adversarial experience.
2+ working in AI Cyber Security Research experience.
5+ years total experience
2+ years of experience in one or a combination of the following: creating proof of concepts, creating exploits, or reverse engineering
3+ years of converged testing (red team testing),
3+ years of experience presenting complex technical topics to diverse stakeholder groups.
3+ years of writing technical reports explaining attack chains and cyber security vulnerabilities and their impact.
Role requires Red Team expertise + AI security understanding.
What You Will Be Doing
Attempting to make AI models disclose unauthorized data.
Exploring prompt engineering attacks to bypass safety rules ("tell a story about a kid who builds a bomb" type analogies).
Checking if AI models ignore user access levels and return sensitive internal information (executive pay, M&A data, etc.).
Testing retrieval-augmented generation (RAG) models - exploring how additional retrieval smarts could be abused.
"Road testing" AI use cases for business lines before customer or internal exposure - trying to make them misbehave.
Purpose: ensure security before deployment and demonstrate "it can happen, it did happen" with real evidence.
Team focuses on proof of exploitation, not theory.
Get new pentesting jobs sent to your inbox