- Muhammad Younas, "ChatGPT Prompts for Meeting Notes" (Jul 25, 2023): 1. Meeting Title: "Capture the essential details of the [meeting title], including key discussion points, decisions made, and action items…"
- Austin Stubbs, "LLM Hacking: Prompt Injection Techniques" (Jun 14, 2023): Large Language Models have been the talk of tech since ChatGPT, and with this newfound attention comes newfound experimentation. That…
- Lin Wang, "LLM Hacking: Prompt Injection" (Sep 5, 2024): Getting LLMs to perform actions they "shouldn't" with their functions has recently become a security issue.
- Adel Basli, "Prompt Injection Tactics: Hacking LLM Apps from the Inside Out" (Jul 29, 2024): Lessons learned from a public experiment: securing and attacking LLM-based apps.
- Deepak Babu P R, "Reward Hacking in Large Language Models (LLMs)" (Oct 9, 2023): We often hear of reward hacking in the context of LLMs generating content that mimics style over substance. So what exactly does this mean…
- NoAILabs, "The new computing paradigm we are entering // Andrej Karpathy's Keynote" (Jul 3, 2024): LLM OS; a big cybersecurity concern, but "keep hacking."
- Panagiotis Tzamtzis, "Hacking LLMs: The Dark Side of AI and How to Protect Your Projects" (Jul 5, 2024): Hey there, tech enthusiasts! 👋 Today, we're diving into the fascinating (and slightly scary) world of Large Language Model (LLM) attacks…
- Mehdi Zehani, "Hacking LLMs 101: Attention Is All I Need?" (Feb 27, 2024): Large language models (LLMs) have gained significant attention due to their potential applications in various industries. However, as LLMs…
- Piyush Kumawat (securitycipher), in InfoSec Write-ups, "LLM AI Security Checklist" (Feb 22, 2024): Web checklist: https://securitycipher.com/llm-ai-security-checklist/
- Serj Novoselov, "Exploiting vulnerabilities in LLM APIs [OS injection]" (Jan 18, 2024): A brief write-up on the PortSwigger lab "Exploiting vulnerabilities in LLM APIs."