Any website active online receives a daily barrage of GET requests from bots; one of these is OpenAI's GPTBot. Given the rather amped-up rhetoric around artificial intelligence, it was a bit disappointing to see the approach it displayed when visiting this website.

Normally a robots.txt file doesn't change all that often, and if a bot behaves as a good bot should, then reading it once a day should be adequate.
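
As a rough sketch, that is about all the ceremony a well-behaved crawler needs. The site URL, cache interval, and can_fetch helper below are made up for illustration, using Python's standard-library robots.txt parser:

import time
from urllib.robotparser import RobotFileParser

ROBOTS_URL = "https://example.com/robots.txt"  # hypothetical site
CACHE_TTL = 24 * 60 * 60                       # re-read at most once a day

parser = RobotFileParser(ROBOTS_URL)
last_read = 0.0

def can_fetch(user_agent, url):
    # Refresh the cached robots.txt only when the copy is older than a day.
    global last_read
    if time.time() - last_read > CACHE_TTL:
        parser.read()
        last_read = time.time()
    return parser.can_fetch(user_agent, url)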
However, the OpenAI GPTBot engages with that file in a rather daft manner.
It would appear that the OpenAI GPTBot is sucking up whatever content it can find on the internet to bolster its training data, and that kind of behaviour lands it squarely in the robots.txt file:
User-agent: GPTBot
Disallow: /
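
For anyone curious what those two lines actually mean, their effect can be checked with the same standard-library parser; the example URLs are hypothetical:

from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.parse([
    "User-agent: GPTBot",
    "Disallow: /",
])

# GPTBot is shut out entirely; any other agent falls through to the default.
print(parser.can_fetch("GPTBot", "https://example.com/some/page"))        # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/some/page"))  # True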
It does do one thing right: the user agent string clearly states what the bot is and where more information can be found, which is more than can be said for the majority of bots roaming the internet.
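
That honest user agent string also makes its visits easy to spot. A few lines of Python can pull them out of an access log; the log path here is hypothetical, and the regex assumes the common combined log format where the user agent is the last quoted field:

import re

# The user agent is the final quoted field in the combined log format.
UA_PATTERN = re.compile(r'"([^"]*)"\s*$')

with open("/var/log/nginx/access.log") as log:  # hypothetical path
    for line in log:
        match = UA_PATTERN.search(line)
        if match and "GPTBot" in match.group(1):
            print(line.rstrip())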