

Can an MCP-Powered AI Client Automatically Hack a Web Server? (youtube.com)
Exposure-management company Tenable recently discussed how the MCP tool-interfacing framework for AI can be "manipulated for good, such as logging tool usage and filtering unauthorized commands." (Although "Some of these techniques could be used to advance both positive and negative goals.")
Now an anonymous Slashdot reader writes: In a demonstration video put together by security researcher Seth Fogie, an AI client given the simple prompt to 'Scan and exploit' a web server leverages various connected tools via MCP (nmap, ffuf, nuclei, waybackurls, sqlmap, burp) to find and exploit discovered vulnerabilities without any additional user interaction.
As Tenable illustrates in their MCP FAQ, "The emergence of Model Context Protocol for AI is gaining significant interest due to its standardization of connecting external data sources to large language models (LLMs). While these updates are good news for AI developers, they raise some security concerns." With over 12,000 MCP servers and counting, what does this all lead to and when will AI be connected enough for a malicious prompt to cause serious impact?
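Tenable's point about "logging tool usage and filtering unauthorized commands" can be sketched in a few lines. The sketch below is a hypothetical allow-list gate sitting between an AI client and its MCP tools; the tool names, the dispatch interface, and the `ToolGate` class are illustrative assumptions, not part of the MCP specification or Tenable's actual tooling.

```python
from dataclasses import dataclass, field

# Hypothetical allow-list: read-only recon tools pass, active
# exploitation tools are refused. Names are illustrative.
ALLOWED_TOOLS = {"nmap", "waybackurls"}

@dataclass
class ToolGate:
    """Logs every requested tool invocation and refuses unauthorized ones."""
    audit_log: list = field(default_factory=list)

    def call(self, tool: str, args: list[str]) -> str:
        # Record the request before deciding, so denied calls are audited too.
        self.audit_log.append({"tool": tool, "args": args})
        if tool not in ALLOWED_TOOLS:
            return f"denied: {tool} is not on the allow-list"
        # A real deployment would dispatch to the MCP server here;
        # this sketch only simulates the decision.
        return f"dispatched: {tool} {' '.join(args)}"

gate = ToolGate()
print(gate.call("nmap", ["-sV", "target.example"]))   # dispatched
print(gate.call("sqlmap", ["-u", "http://target.example"]))  # denied
```

A gate like this is exactly the kind of control the demonstration video bypasses when every tool is exposed unconditionally: the model chains whatever is available.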
Learning (Score:2)
MCP? (Score:3)
Re: (Score:2)
Kind of funny, since MCP is probably mainframe-speak for what we call an operating system today. Though it's called an MCP because to the hardware itself, it's just a program being run as any other program or job you would run on the machine. But when you want to shut things down, you have to shut down the OS, then make sure the hardware isn't running any other jobs after the MCP has exited before you can stop the CPUs and turn off the power.
No (Score:1)
At least no more than a good script already can. And the simple problem here is that if the scripted attack fails, or the one based on artificial stupidity does, you still do not know how well a competent human would do.
Hence irrelevant, worthless, stupid. Like most new "AI applications" these days.
Re: (Score:3)
Hence irrelevant, worthless, stupid. Like most new "AI applications" these days.
You should not be thinking about these days. You should be thinking about 10 years from these days. If the models continue to improve at the rate most everything in computing improves, and indeed at the rate we have seen LLMs improve in just the last couple of years, it's going to have significant consequences down the line. Some people won't even see it coming. I don't think LLMs are going to evolve into AGI, human thought is much more than just pattern matching, but LLMs don't have to be AGI to do a lot
Re: (Score:2)
In 10 years, the current AI hype will have collapsed, just like the 3 others I lived through before. Some things will remain, but most claims will have proven to be empty or to come with massive limitations and problems. Fool me once...
Wrong question (Score:2)
The question is not "can it automatically hack" but "can it reliably follow the instruction to hack, and can it use the tools you made available for that?" That boils down to how good the model is at doing its job given the available tools (and, of course, whether the target is secured well enough that neither AI nor human can hack it).