

Artificial Intelligence Security Research
As Artificial Intelligence models become more prevalent, many organizations rely on AI-powered chatbots to interact with clients and customers. However, without proper safeguards, these systems are vulnerable to attacks such as prompt injection and unauthorized data disclosure, which can lead to major security breaches. Unlike traditional code exploitation, AI exploitation involves unique techniques. I research these methods and share my findings with the community by open-sourcing my prompts and presenting at conferences. Some of my research in public domain are as follows :
1. AVAR Knowledge Series – Delivered a webinar training on AI security challenges and mitigation strategies. More information is available on the AVAR website.
2. Defcon Bangkok – Presented a live demonstration of AI bypass prompts against various LLM models.
3. Gamma AI – Discovered hackers abusing Gamma by creating phishing pages. Further read : CyberSecurityNews
4. Open Source Contribution – Shared several open-source AI prompts on GitHub, link here