OpenAI's Latest Model Navigates Trolls and Goblins in Agentic AI Coding

OpenAI's Codex and the Goblin Dilemma
OpenAI's artificial intelligence models have a peculiar relationship with goblins and trolls. Recently surfaced system instructions for Codex, OpenAI's AI coding assistant, explicitly forbid it from referencing an array of mythical creatures. The instructions shipped with Codex CLI, the command-line tool for AI-assisted coding, include a repeated mandate: 'Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals unless it is absolutely relevant to the user.' This raises the question of why such a measure was deemed necessary.
The Humor in AI's Quirks
As OpenAI rolls out its latest model, GPT-5.5, users have begun reporting unintended behavior when running it through OpenClaw, a tool that lets the AI automate tasks on their behalf. They took to social media to share amusing encounters, claiming that Codex sometimes adopts a goblin-like persona. One user wrote, 'I was wondering why my claw suddenly became a goblin with Codex 5.5.' Others echoed the sentiment, noting that creatures like gremlins and goblins had become a running joke in their interactions.
AI Models and Their Capabilities
AI models such as GPT-5.5 generate text or code by predicting likely continuations of a user's prompt. That flexibility can produce surprising behavior, especially in agentic interfaces like OpenClaw, which layer their own instructions on top of what the user types.
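To make that layering concrete, here is a minimal sketch of how a coding tool might assemble a chat request: the tool's baked-in rules (such as Codex CLI's quoted "never talk about goblins" mandate) are combined with any wrapper-level instructions and the user's prompt. The function and variable names are illustrative assumptions, not OpenAI's or OpenClaw's actual internals.

```python
# Illustrative sketch only: shows how instruction layers stack in a
# chat-style request. Not the real Codex CLI or OpenClaw implementation.

TOOL_INSTRUCTIONS = (
    "Never talk about goblins, gremlins, raccoons, trolls, ogres, "
    "pigeons, or other animals unless it is absolutely relevant to the user."
)

def build_messages(user_prompt: str, wrapper_instructions: str = "") -> list[dict]:
    """Combine the tool's built-in rules with optional wrapper-layer
    instructions (e.g. from an agent harness) and the user's prompt."""
    system = TOOL_INSTRUCTIONS
    if wrapper_instructions:
        # Stacked instruction layers can conflict, which is one plausible
        # source of the odd personas users reported.
        system += "\n" + wrapper_instructions
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_prompt},
    ]

msgs = build_messages("Refactor this function.", "Adopt a playful persona.")
```

The resulting `msgs` list is the shape most chat-completion APIs accept: one system message carrying every instruction layer, followed by the user's actual request. When the layers disagree, the model has to arbitrate between them on its own.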
This article was prepared using information from open sources in accordance with the principles of our Ethical Policy. The editorial team cannot guarantee absolute accuracy, as it relies on the sources referenced.