AI Security Engineer, Research & Content
About CodeIntegrity
CodeIntegrity builds security infrastructure for AI agents. As AI systems connect to critical tools, sensitive data, and real-world actions, we help organizations prevent prompt injection, unauthorized tool use, and data exfiltration, using real-time detection and enforcement to stop attacks before they execute.
The Role
You'll work across security engineering, applied research, and technical content for AI systems. The role focuses on understanding how modern AI applications fail in practice, turning that understanding into focused experiments and prototypes, and publishing clear technical content for practitioners and customers.
What You'll Do
- Track emerging AI security risks. Stay current on new attack techniques, research papers, public incidents, and real-world failure modes across AI applications and agent workflows.
- Run practical investigations. Reproduce findings, build lightweight proofs of concept, and create internal test cases that help us evaluate risks and shape product direction.
- Develop technical narratives. Turn research, experiments, and product insights into high-signal technical blog posts, explainers, and research-backed content for a technical audience.
- Support product and engineering strategy. Work closely with the team to identify the security problems that matter most and translate them into product ideas, demonstrations, and technical guidance.
- Help define our point of view. Contribute to how CodeIntegrity explains AI security, communicates tradeoffs, and earns trust with practitioners, buyers, and technical teams.
Requirements
- 3+ years of experience in security engineering, software engineering, applied security research, or a closely related technical role.
- Strong security fundamentals. Familiarity with web security, APIs, authentication and authorization, access control, and common application security risks.
- Hands-on engineering ability. Comfort writing code in Python, Go, or a similar language to build prototypes, experiments, internal tools, or example applications.
- Research fluency. Ability to read technical papers, blog posts, incident writeups, and code repositories, then extract the practical ideas that matter.
- Strong technical writing. Ability to explain complex technical topics clearly, accurately, and credibly for engineers and security-minded audiences.
- Practical judgment. Ability to connect research and experiments to product needs, customer pain points, and deployment constraints.
- Independent execution. Comfort operating with ambiguity and driving work from idea to published output.
Nice to Have
- AI security experience. Familiarity with prompt injection, tool misuse, data exfiltration, confused deputy problems, or security issues in agentic systems.
- Technical content experience. Experience publishing technical blog posts, writeups, talks, demos, or other public-facing engineering content.
- Prototype and evaluation experience. Experience building small-scale security test harnesses, adversarial evaluations, or internal research tooling.
- Startup experience. Comfort working in a fast-moving environment where the product, market, and technical priorities are evolving quickly.
Benefits
- Salary. $170,000 to $190,000.
- Health coverage. Medical, dental, and vision fully covered.
- Equipment. MacBook Pro, a $500 stipend, and the tools you need.
- On-site collaboration. Work in person with the team in San Francisco.
Why Join
AI systems are becoming part of critical workflows, and the security risks are already real. You'll help organizations understand these problems, influence what we build, and produce technical work that connects research ideas to practical security outcomes.
CodeIntegrity is an equal opportunity employer.