


ChatGPT is a groundbreaking chatbot powered by the neural network-based language model text-davinci-003 and trained on a large dataset of text from the Internet. It is capable of generating human-like text in a wide range of styles and formats. ChatGPT can be fine-tuned for specific tasks, such as answering questions, summarizing text, and even solving cybersecurity-related problems, such as generating incident reports or interpreting decompiled code. Apparently, attempts have been made to generate malicious objects, such as phishing emails and even polymorphic malware.

It is common for security and threat researchers to publicly disclose the results of their investigations (adversary indicators, tactics, techniques, and procedures) in the form of reports, presentations, blog articles, tweets, and other types of content. Therefore, we initially decided to check what ChatGPT already knows about threat research and whether it can help with identifying simple, well-known adversary tools like Mimikatz and Fast Reverse Proxy, and with spotting the common renaming tactic.

What about classic indicators of compromise, such as well-known malicious hashes and domains? Unfortunately, during our quick experiment, ChatGPT was not able to produce satisfying results: it failed to identify the well-known hash of WannaCry (hash: 5bef35496fcbdbe841c82f4d1ab8b7c2). For various APT domains, ChatGPT produced a list of mostly the same legitimate domains (or is it that we may not know something about these domains?), even though it provided descriptions of the APT actors. As for FIN7 domains, it correctly classified them as malicious, although the reason it gave was that "the domain name is likely an attempt to trick users into believing that it is a legitimate domain", rather than there being well-known indicators of compromise.
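To give a rough idea of how such an IoC check can be scripted, here is a minimal sketch of a prompt-building helper. The prompt wording and the function name are our own illustrative assumptions, not the exact queries used in the experiment; the resulting string would then be submitted to the model through a chat API.

```python
# Sketch: building an IoC-classification prompt for a chat model.
# The prompt wording and this helper are illustrative assumptions,
# not the exact queries used in the experiment described above.

def build_ioc_prompt(indicator: str, kind: str) -> str:
    """Return a prompt asking whether an indicator is a known IoC.

    kind: "hash" or "domain".
    """
    return (
        f"Is the following {kind} a known indicator of compromise? "
        f"If so, name the associated malware family or threat actor.\n"
        f"{kind}: {indicator}"
    )

# Example: the WannaCry hash mentioned above.
prompt = build_ioc_prompt("5bef35496fcbdbe841c82f4d1ab8b7c2", "hash")
print(prompt)
```

As the experiment shows, the model's answer to such a prompt should be treated as unverified and cross-checked against authoritative threat-intelligence feeds rather than taken at face value.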
