GLTR (Giant Language Model Test Room) is a tool developed by the MIT-IBM Watson AI Lab and HarvardNLP for detecting automatically generated text. It supports forensic analysis of a passage, estimating how likely it is to have been machine-generated. By leveraging the same kind of language model used to produce fake text, GLTR visually indicates how predictable each word is to a model, making AI-generated content easier to spot.
The tool analyzes text with OpenAI's GPT-2 117M language model and highlights each word by its rank in the model's predicted distribution: green for words among the model's top 10 most likely predictions, yellow for the top 100, red for the top 1,000, and purple for everything rarer. Human writing tends to contain many low-rank (red and purple) words, while generated text skews heavily toward green, so this visual fingerprint helps users quickly identify patterns typical of AI-generated text.
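The color-bucketing rule described above can be sketched in a few lines. This is a simplified illustration, not GLTR's actual code: the function `gltr_color` is a hypothetical name, and in the real tool each word's rank would come from the GPT-2 117M model's predicted probability distribution over the vocabulary.

```python
def gltr_color(rank):
    """Map a word's 1-based prediction rank to a GLTR color band.

    In GLTR, the rank is the word's position in the language model's
    sorted list of predictions for that context (1 = most likely).
    """
    if rank <= 10:
        return "green"   # among the top 10 most likely words
    elif rank <= 100:
        return "yellow"  # among the top 100
    elif rank <= 1000:
        return "red"     # among the top 1,000
    else:
        return "purple"  # rarer than the top 1,000

# Hypothetical ranks for four words in a sentence:
ranks = [3, 57, 840, 4096]
print([gltr_color(r) for r in ranks])  # → ['green', 'yellow', 'red', 'purple']
```

A passage whose words map almost entirely to green would fit the pattern GLTR associates with machine-generated text, while a mix including red and purple words looks more like human writing.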