



Study Finds 50% of Workers Use Unapproved AI Tools
An anonymous reader quotes a report from SecurityWeek: An October 2024 study by Software AG suggests that half of all employees are Shadow AI users, and most of them wouldn't stop even if it was banned. The problem is the ease of access to AI tools, and a work environment that increasingly advocates the use of AI to improve corporate efficiency. It is little wonder that employees seek out their own AI tools to improve their personal efficiency and maximize their potential for promotion. It is frictionless, says Michael Marriott, VP of marketing at Harmonic Security. 'Using AI at work feels like second nature for many knowledge workers now. Whether it's summarizing meeting notes, drafting customer emails, exploring code, or creating content, employees are moving fast.' If the official tools aren't easy to access or if they feel too locked down, they'll use whatever's available, often via an open tab in their browser.
There is also almost never any malicious intent (absent, perhaps, the mistaken employment of rogue North Korean IT workers); merely a desire to do and be better. If this involves using unsanctioned AI tools, employees will likely not disclose their actions. The reasons may be complex, but combine a reluctance to admit that their efficiency is AI-assisted rather than natural with the knowledge that use of personal shadow AI might be discouraged. The result is that enterprises often have little knowledge of the extent of Shadow IT, nor the risks it may present. According to an analysis from Harmonic, ChatGPT is the dominant gen-AI model used by employees, with 45% of data prompts originating from personal accounts (such as Gmail). Image files accounted for 68.3%. The report also notes that 7% of employees were using Chinese AI models like DeepSeek, Baidu Chat and Qwen.
"Overall, there has been a slight reduction in sensitive prompt frequency from Q4 2024 (down from 8.5% to 6.7% in Q1 2025)," reports SecurityWeek. "However, there has been a shift in the risk categories that are potentially exposed. Customer data (down from 45.8% to 27.8%), employee data (from 26.8% to 14.3%) and security (6.9% to 2.1%) have all reduced. Conversely, legal and financial data (up from 14.9% to 30.8%) and sensitive code (5.6% to 10.1%) have both increased. PII is a new category introduced in Q1 2025 and was tracked at 14.9%."
Re: (Score:2)
It's about the people not using the approved tools. Say your company has subscribed to "Copilot" and the contract says it's ok to use it to summarize, review or generate internal documents. But the employees will use "ChatGPT" for the same purpose, without a contract. Not that I personally trust the contract, but your legal department does. It's about the same as saying you don't like Outlook and you like gmail more, so you send work emails from your personal address.
Re: Oh noes, the proles are getting one over on us (Score:2)
I'm on the other end of the spectrum and don't trust any ai tool for job related information.
More often than not those tools can misinterpret the situation and cause trouble.
Noble, sure, but unauthorized data disclosure (Score:2)
SpungRuAI's local agent is going to help make the Nelson report so much faster oh shit why did all my files just disappear?
Just because you use the tools (Score:2)
doesn't mean you are sending sensitive company info to AI tools. I still use unapproved tools, but I don't send any code or info that would be sensitive. Why? Because Gemini isn't that great and that is the only approved tool.
Re: Just because you use the tools (Score:2)
Way too many do send enough information to those tools that when stitched together will result in information patterns that can be used to influence stock market value.
Re: (Score:1)
The Cheeto has that beat by enough to make your fantasy market-manipulation seem laughable.
Re: (Score:3)
Way too many do send enough information to those tools that when stitched together will result in information patterns that can be used to influence stock market value.
A fucking tweet can influence stock market value. Is that the tweet's fault, or more the fault of an ignorant society assuming a stock market should have its proverbial ear anywhere near the social media grindstone?
Obvious answer, is obvious.
Re: (Score:2)
doesn't mean you are sending sensitive company info to AI tools. I still use unapproved tools, but I don't send any code or info that would be sensitive. Why? Because Gemini isn't that great and that is the only approved tool.
What you deem “sensitive” may not always align with your employer's definition. Especially tomorrow, when AI ownership and control changes or is revealed.
And please do not assume ALL of your fellow co-workers are anywhere near as diligent as you are. Or even understand why they should be.
It's irresistible (Score:2)
Copy Paste (Score:3)
My work was blocking integrations, but you could still go to ChatGPT or similar and get the code snippet you wanted. Not me, of course. We seem to have settled that out at the corporate level. Fine. I will follow the rules as long as current tools are available.
Still, what gets me is the tech screens I do for interviews. It's comical when a question requires some thinking. It gets very quiet. Then the eyes wander (remote Zoom interviews) and suddenly, nirvana! A solution: "I think we should use the leaky bucket algorithm." Interviews need to go back to the whiteboard.
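For anyone who had to look it up mid-interview too: the "leaky bucket" is a standard rate-limiting technique where requests fill a bucket that drains at a fixed rate, and anything that would overflow is rejected. A minimal sketch (class name and parameters are my own, not from any particular library):

```python
import time

class LeakyBucket:
    """Leaky-bucket rate limiter: each request adds to the bucket's level,
    the level drains at a fixed rate, and requests that would overflow
    the capacity are rejected."""

    def __init__(self, capacity: float, leak_rate: float):
        self.capacity = capacity      # maximum level before overflow
        self.leak_rate = leak_rate    # units drained per second
        self.level = 0.0
        self.last = time.monotonic()

    def allow(self, amount: float = 1.0) -> bool:
        now = time.monotonic()
        # Drain whatever leaked out since the last check.
        self.level = max(0.0, self.level - (now - self.last) * self.leak_rate)
        self.last = now
        if self.level + amount <= self.capacity:
            self.level += amount
            return True
        return False

bucket = LeakyBucket(capacity=3, leak_rate=1.0)
# Three requests in quick succession fit; the next two overflow.
print([bucket.allow() for _ in range(5)])
```

The point the whiteboard would reveal is whether the candidate can explain the drain step, not just name the algorithm.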
The other 50% are lying (Score:2)
Who *doesn't* use AI these days?
Money (Score:2)
Bricklayers? Housepainters? Oil Rig Drillers? (Score:2)
Shadow AI user (Score:2)
That sounds metal as fuck.
Brb, downloading shadow AI models.