New AI-powered Detection Method from ANY.RUN
DUBAI, UNITED ARAB EMIRATES, September 14, 2023/EINPresswire.com/ -- ANY.RUN, a cybersecurity company developing an interactive sandbox platform for malware analysis, presents its new AI-powered detection method: analyzing sandbox results with ChatGPT. Here are some highlights of the new detection method:
𝐂𝐡𝐚𝐭𝐆𝐏𝐓 𝐡𝐞𝐥𝐩𝐬 𝐲𝐨𝐮 𝐣𝐮𝐝𝐠𝐞 𝐢𝐟 𝐚 𝐟𝐢𝐥𝐞 𝐢𝐬 𝐡𝐚𝐫𝐦𝐟𝐮𝐥 𝐨𝐫 𝐧𝐨𝐭
Over 300,000 users already rely on ANY.RUN to detect and analyze malicious files. Here's what to expect from using the new AI detector in ANY.RUN: expanded data and detailed AI-driven analysis of processes, connections, and rules.
𝐇𝐨𝐰 𝐭𝐨 𝐮𝐬𝐞 𝐭𝐡𝐞 𝐧𝐞𝐰 𝐂𝐡𝐚𝐭𝐆𝐏𝐓 𝐟𝐞𝐚𝐭𝐮𝐫𝐞
An AI-driven review appears in every report automatically. Beyond that, users will find a ChatGPT icon next to important elements such as processes, rules, and connections. In analyses with many processes or events, ANY.RUN's AI prioritizes those with the highest scores or those flagged as suspicious, helping users focus their attention where it is needed most (a simplified sketch of this prioritization idea follows the list below).
Users can also choose which elements they want analyzed:
• Process trees
• Command lines
• Suricata rule triggers
• HTTP connections
• Registry activity
• Mutexes
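The prioritization described above can be pictured with a short, purely illustrative Python sketch. The field names ("score", "suspicious"), the sample process records, and the summarize_with_ai() helper are assumptions made for this example only; they are not ANY.RUN's actual data model or API.

from typing import Dict, List

def pick_candidates(processes: List[Dict], limit: int = 5) -> List[Dict]:
    """Select the processes most worth an AI-driven review:
    suspicious ones first, then those with the highest threat scores."""
    ranked = sorted(
        processes,
        key=lambda p: (p.get("suspicious", False), p.get("score", 0)),
        reverse=True,
    )
    return ranked[:limit]

def summarize_with_ai(process: Dict) -> str:
    """Placeholder for a ChatGPT-style summary request (hypothetical helper)."""
    return f"AI summary requested for {process['name']} (score {process['score']})"

if __name__ == "__main__":
    # Hypothetical sandbox output used only to demonstrate the ranking logic.
    sandbox_processes = [
        {"name": "explorer.exe", "score": 10, "suspicious": False},
        {"name": "powershell.exe", "score": 85, "suspicious": True},
        {"name": "svchost.exe", "score": 40, "suspicious": False},
    ]
    for proc in pick_candidates(sandbox_processes, limit=2):
        print(summarize_with_ai(proc))

Running the sketch prints summary requests for the suspicious powershell.exe process first, followed by the highest-scoring remaining process, which mirrors how the sandbox draws attention to the riskiest items first.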
𝐓𝐡𝐞 𝐦𝐨𝐫𝐞 𝐚𝐧𝐚𝐥𝐲𝐬𝐢𝐬 𝐦𝐞𝐭𝐡𝐨𝐝𝐬 𝐰𝐞 𝐡𝐚𝐯𝐞, 𝐭𝐡𝐞 𝐜𝐥𝐞𝐚𝐫𝐞𝐫 𝐭𝐡𝐞 𝐯𝐞𝐫𝐝𝐢𝐜𝐭
ANY.RUN’s new ChatGPT-powered analysis method breaks down complex data and concepts into clear, actionable summaries that not only identify threats but also help users understand them.
Read the article to see how ANY.RUN helps cybersecurity specialists save time, optimize resources, and focus on key areas of their work, such as incident investigation, research, and threat response.
Vlada Belousova
ANYRUN FZCO
2027889264
email us here
