Still, Enderman’s technique didn’t require any intense prompt engineering to get the AI to work around OpenAI’s blocks on creating product keys. Despite the moniker, AI systems like ChatGPT and GPT-4 are not actually “intelligent,” and they do not know when they’re being abused beyond explicit bans on generating “disallowed” content. This has more serious implications. Back in February, researchers at cybersecurity company Check Point showed malicious actors had used ChatGPT to “improve” basic malware. There are plenty of ways to get around OpenAI’s restrictions, and cybercriminals have shown they are capable of writing basic scripts or bots to abuse the company’s API. Earlier this year, cybersecurity researchers said they managed to get ChatGPT to create malware tools simply by crafting several authoritative prompts with multiple constraints. The chatbot eventually obliged and generated malicious code, and was even able to mutate it, creating multiple variants of the...