Assistant AI Security Researcher (Software Engineering Institute)
Job posting number: #154247 (Ref:2022489)
Job Description
Are you a cybersecurity and/or AI researcher who enjoys a challenge? Are you excited about pioneering new research areas that will impact academia, industry, and national security? If so, we want you for our team, where you’ll collaborate to deliver high-quality results in the emerging area of AI security.
The CERT Division of the Software Engineering Institute (SEI) is seeking applicants for the AI Security Researcher role. Originally created in 1988 in response to the Morris worm, one of the first computer viruses, CERT has remained a leader in cybersecurity research, in improving the robustness of software systems, and in responding to sophisticated cybersecurity threats. Ensuring the robustness and security of AI systems is the next big challenge on the horizon, and we are seeking lifelong learners in the fields of cybersecurity, AI/ML, or related areas who are willing to cross-train to address AI security.
The Threat Analysis Directorate is a group of security experts focused on advancing the state of the art in AI security at a national and global scale. Our tasks include vulnerability discovery and assessment, evaluation of the effectiveness and robustness of AI systems, exploit discovery and reverse engineering, and identification of new areas where security research is needed. We participate in communities of network defenders, software developers and vendors, security researchers, AI practitioners, and policymakers.
You'll get a chance to work with elite AI and cybersecurity professionals, university faculty, and government representatives to build new methodologies and technologies that will influence national AI security strategy for decades to come. You will co-author research proposals, execute studies, and present findings and recommendations to our DoD sponsors, decision makers within government and industry, and at academic conferences. The SEI is a non-profit, federally funded research and development center (FFRDC) at Carnegie Mellon University.
What you’ll do:
- Develop state-of-the-art approaches for analyzing the robustness of AI systems.
- Apply these approaches to understanding vulnerabilities in AI systems and how attackers adapt their tradecraft to exploit those vulnerabilities.
- Reverse engineer malicious code in support of high-impact customers, design and develop new analysis methods and tools, work to identify and address emerging and complex threats to AI systems, and effectively participate in the broader security community.
- Study and influence the AI security and vulnerability disclosure ecosystems.
- Evaluate the effectiveness of tools, techniques, and processes developed by industry and the AI security research community.
- Uncover and shape some of the fundamental assumptions underlying current best practice in AI security.
- Develop models, tools and data sets that can be used to characterize the threats to, and vulnerabilities in, AI systems, and publish those results. You will also use these results to aid in the testing, evaluation and transition of technologies developed by government-funded research programs.
- Identify opportunities to apply AI to improve existing cybersecurity research.
Who you are:
- You have a BS in machine learning, cybersecurity, statistics, or a related discipline.
- You have an interest in AI/ML and cybersecurity with a penchant for intellectual curiosity and a desire to make an impact beyond your organization.
- You have practical experience with applying cybersecurity knowledge toward vulnerability research, analysis, disclosure, or mitigation.
- You have experience with advising on a range of security topics based on research and expert opinion.
- You have familiarity with implementing and applying AI/ML techniques to solving practical problems.
- You have familiarity with common AI/ML software packages and tools (e.g., NumPy, PyTorch, TensorFlow, ART).
- You have knowledge of or familiarity with reverse engineering tools (e.g., NSA Ghidra, IDA Pro).
- You have experience with Python, C/C++, or low-level programming.
- You have experience developing frameworks, methodologies, or assessments to evaluate effectiveness and robustness of technologies.
- You have superb communication skills (oral and written), particularly regarding technical communications with non-experts.
- You enjoy mentoring and cross-training others and sharing knowledge within the broader community.
- Applicants with a solid technical background in AI/ML or cybersecurity, but not both, are encouraged to apply, provided they have a strong desire to learn rapidly on the job.
Location
Pittsburgh, PA
Job Function
Software/Applications Development/Engineering
Position Type
Staff – Regular
Full time/Part time
Full time
Pay Basis
Salary
More Information:
Please visit “Why Carnegie Mellon” to learn more about becoming part of an institution inspiring innovations that change the world.
Carnegie Mellon University is an Equal Opportunity Employer/Disability/Veteran.