Pentest Careers ← Back to all jobs

Specialty Software Engineer 3 - Contingent

Company
Motion Recruitment Partners, LLC
Location
CHARLOTTE
Region
USA
Posted
3h ago
Source
Dice
Apply Now →

Job Description

Outstanding long-term contract opportunity! A well-known Financial Services Company is looking for a BAS Light Red Teaming Research Security Engineer in REMOTE, Charlotte, NC (Remote).

Work with the brightest minds at one of the largest financial institutions in the world. This is a long-term contract opportunity that includes a competitive benefit package! Our client has been around for over 150 years and is continuously innovating in today's digital age. If you want to work for a company that is not only a household name, but also truly cares about satisfying customers' financial needs and helping people succeed financially, apply today.

Contract Duration: 12 Months+ with possible extension or FTE conversion - W2

Overview:

Our Offensive Security Research team is looking for a Cyber Security Researcher to perform cybersecurity testing against AI technologies from a red team perspective. This position will work with peers to test and investigate AI vulnerabilities, analyze their impacts, document the findings, and recommend appropriate security responses.

Required Skills & Experience

2+ years of hands-on Red Team/adversarial experience.

2+ working in AI Cyber Security Research experience.

5+ years total experience

2+ years of experience in one or a combination of the following: creating proof of concepts, creating exploits, or reverse engineering

3+ years of converged testing (red team testing),

3+ years of experience presenting complex technical topics to diverse stakeholder groups.

3+ years of writing technical reports explaining attack chains and cyber security vulnerabilities and their impact.

Role requires Red Team expertise + AI security understanding.

What You Will Be Doing

Attempting to make AI models disclose unauthorized data.

Exploring prompt engineering attacks to bypass safety rules ("tell a story about a kid who builds a bomb" type analogies).

Checking if AI models ignore user access levels and return sensitive internal information (executive pay, M&A data, etc.).

Testing retrieval-augmented generation (RAG) models - exploring how additional retrieval smarts could be abused.

"Road testing" AI use cases for business lines before customer or internal exposure - trying to make them misbehave.

Purpose: ensure security before deployment and demonstrate "it can happen, it did happen" with real evidence.

Team focuses on proof of exploitation, not theory.

Job History

First seen
2026-04-13 20:46:38
Last verified
2026-04-13 22:06:01

← Back to all jobs

Get new pentesting jobs sent to your inbox