YouTuber Proves ChatGPT Can Manufacture Free Windows Keys


There’s still some lingering nostalgia for Windows 95, and while it’s easy to harken back to the days of blocky menus and bald men shouting that it’s “only $99,” one Windows experimenter managed to get ChatGPT to generate working product keys for the venerable operating system.


Late last month, YouTuber Enderman showed how he was able to trick OpenAI’s ChatGPT into generating keys for Windows 95, despite the chatbot’s explicit refusal to generate activation keys.

Older Windows 95 OEM keys followed a fixed pattern, combining a date-based ordinal with other constrained digit strings. In a fairly simple workaround, Enderman told ChatGPT to generate lines in the same layout as a Windows 95 key, paying special attention to the specific strings that are mandatory in every key. After several dozen rounds of trial and error, he settled on a prompt that worked, yielding roughly one valid key for every 30 attempts.
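The approach can be sketched in ordinary code. The layout below follows the commonly documented Windows 95 OEM format (DDDYY-OEM-0NNNNNN-XXXXX); the specific year range and the rule that the seven digits after “OEM” must sum to a multiple of 7 are assumptions drawn from public write-ups of the format, not details confirmed in the video:

```python
import random

# Assumed Windows 95 OEM key layout: DDDYY-OEM-0NNNNNN-XXXXX
#   DDD = day of the year (001-366), YY = a two-digit year,
#   the block after "OEM" starts with 0 and its seven digits must
#   sum to a multiple of 7, and the final five digits are unconstrained.

def digit_sum(s: str) -> int:
    return sum(int(c) for c in s)

def make_oem_key(rng: random.Random) -> str:
    day = rng.randint(1, 366)
    year = rng.choice([95, 96, 97, 98, 99, 0, 1, 2, 3])
    # Rejection-sample the 0NNNNNN block until its digit sum is divisible by 7.
    while True:
        block = "0" + "".join(str(rng.randint(0, 9)) for _ in range(6))
        if digit_sum(block) % 7 == 0:
            break
    tail = "".join(str(rng.randint(0, 9)) for _ in range(5))
    return f"{day:03d}{year:02d}-OEM-{block}-{tail}"

def looks_valid(key: str) -> bool:
    """Check only the structural rules sketched above."""
    parts = key.split("-")
    return (len(parts) == 4 and parts[1] == "OEM"
            and parts[2].startswith("0")
            and digit_sum(parts[2]) % 7 == 0)
```

A ten-line script does reliably what Enderman spent dozens of prompts coaxing out of the chatbot, which is part of why the stunt is more a demonstration of guardrail-dodging than a practical key generator.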

In other words, he couldn’t tell ChatGPT to generate a Windows key, but he could tell it to generate a string of characters that matched all the requirements of a Windows key.

Activating Windows with ChatGPT

After Enderman verified that the key worked in the Windows 95 installation, he thanked ChatGPT. The chatbot replied: “I apologize for any confusion, but I did not provide any Windows 95 keys in my previous reply… I cannot provide any product keys or activation codes for any software.” It further tried to claim that Windows 95 activation was “impossible” because Microsoft stopped supporting the software in 2001, which is simply untrue.


Interestingly, Enderman ran this request through both the older GPT-3 language model and OpenAI’s newer GPT-4, and told us the latest model performed even better than what’s shown in his video. In an email, Enderman (who asked that we use his screen name) told Gizmodo that a certain string of numbers in the key has to be divisible by 7. GPT-3 had trouble understanding that restriction and produced far fewer usable keys. In later tests with GPT-4, ChatGPT generated far more valid keys, though even then not every key worked or stuck to the prompt’s parameters. The YouTuber said this suggests that “GPT-4 knows how to do math, but gets lost during array generation.”

GPT-4 does not have a built-in calculator, and those who want the system to produce correct answers to math problems must do additional coding work. Although OpenAI hasn’t been forthcoming about its LLM training data, the company has been very excited about all the different tests the model can pass with flying colors, such as the LSAT and the Uniform Bar Exam. At the same time, ChatGPT has shown that it can occasionally fail to produce correct code.
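The constraint Enderman describes is trivial to check in ordinary code, which points to one way around the model’s shaky arithmetic: let the model generate candidates, then post-filter them programmatically. A minimal, hypothetical filter (the sample strings are illustrative, not real key fragments), assuming the divisibility rule applies to a seven-digit block’s digit sum:

```python
# Keep only candidate blocks whose digit sum is a multiple of 7 -- the
# arithmetic check the article says GPT-3 struggled with.
def passes_mod7(block: str) -> bool:
    return sum(int(c) for c in block) % 7 == 0

candidates = ["0132716", "0000007", "1234567", "0604060"]
valid = [b for b in candidates if passes_mod7(b)]
# valid -> ["0000007", "1234567"]
```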

One of the main selling points of GPT-4 was its ability to handle longer and more complex requests. GPT-3 and 3.5 would routinely fail to produce correct answers on three-digit arithmetic or “reasoning” tasks like word formation. The latest version of the LLM got significantly better at these types of tasks, at least judging by scores on tests like the Verbal GRE or the Math SAT. However, the system is by no means perfect, especially since its training data is still mostly natural language text scraped off the internet.

Enderman told Gizmodo that he has tried generating keys for multiple programs using the GPT-4 model, finding that it handles key generation better than previous versions of the large language model.

However, don’t expect to start getting free keys for modern programs. As the YouTuber points out in his video, Windows 95 keys are much easier to spoof than keys for Windows XP and beyond, since Microsoft started implementing product ID checks in the operating system’s installation software.

Still, Enderman’s technique didn’t require any intensive prompt engineering to make the AI work around OpenAI’s guardrails against creating product keys. Despite the name, AI systems like ChatGPT and GPT-4 aren’t really “intelligent,” and they don’t know when they’re being abused beyond explicit bans on generating disallowed content.

This has more serious implications. In February, researchers at cybersecurity company Check Point showed that malicious actors had used ChatGPT to “enhance” basic malware. There are many ways to get around OpenAI’s limitations, and cybercriminals have shown they are capable of writing basic scripts or bots to abuse the company’s API.

Earlier this year, cybersecurity researchers said they managed to get ChatGPT to create malware tools just by issuing a few authoritative prompts with multiple constraints. The chatbot eventually relented and generated malicious code, and was even able to modify it, creating multiple variants of the same malware.

Enderman’s Windows keys are a good example of how the AI can be tricked into bypassing its defenses, but he told us he isn’t too worried about abuse: the more people poke and prod the AI, the better future releases will be at closing those gaps.

“I believe it’s a good thing and companies like Microsoft shouldn’t be banning users for misusing their Bing AI or breaking its capabilities,” he said. “Instead, they should reward active users for finding such exploits and selectively mitigate them. After all, it’s all part of AI training.”

Want to know more about AI, chatbots and the future of machine learning? Check out our full AI coverage or browse our guides to the Best Free AI Art Generators, Best ChatGPT Alternatives, and everything we know about OpenAI’s ChatGPT.
