top of page

AI Security Research

As Artificial Intelligence models become more prevalent, many organizations rely on AI-powered chatbots to interact with clients and customers. However, without proper safeguards, these systems are vulnerable to attacks such as prompt injection and unauthorized data disclosure, which can lead to major security breaches. Unlike traditional code exploitation, AI exploitation involves unique techniques. I research these methods and share my findings with the community by open-sourcing my prompts and presenting at conferences. A selection of my published AI research is available in the public domain as detailed below:

  1. AVAR Knowledge Series - Delivered a webinar training on security challenges and how to safeguard against them. More details can be found here.

  2. Defcon Bangkok - Demonstrating real examples of how different LLM models' security guardrails can be bypassed.

  3. Open Source Prompt Authoring - Contributing some of my prompt bypasses on open source for learning purposes, GitHub link here.

  4. Gamma AI - Discovery of hackers abusing Gamma in the wild by creating phishing pages. More details here.

Presenting AI Security Research in Defcon Bangkok

bottom of page