
Study Finds 50% of Workers Use Unapproved AI Tools

An anonymous reader quotes a report from SecurityWeek: An October 2024 study by Software AG suggests that half of all employees are Shadow AI users, and most of them wouldn't stop even if it were banned. The problem is the ease of access to AI tools, and a work environment that increasingly advocates the use of AI to improve corporate efficiency. It is little wonder that employees seek out their own AI tools to improve their personal efficiency and maximize their potential for promotion. It is frictionless, says Michael Marriott, VP of marketing at Harmonic Security. 'Using AI at work feels like second nature for many knowledge workers now. Whether it's summarizing meeting notes, drafting customer emails, exploring code, or creating content, employees are moving fast.' If the official tools aren't easy to access or if they feel too locked down, they'll use whatever's available, which is often just an open tab in their browser.

There is also almost never any malicious intent (absent, perhaps, the mistaken employment of rogue North Korean IT workers); merely a desire to do and be better. If this involves using unsanctioned AI tools, employees will likely not disclose their actions. The reasons may be complex, but they combine a reluctance to admit that their efficiency is AI-assisted rather than natural with the knowledge that use of personal shadow AI might be discouraged. The result is that enterprises often have little knowledge of the extent of shadow AI use, or of the risks it may present.
According to an analysis from Harmonic, ChatGPT is the dominant gen-AI model used by employees, with 45% of data prompts originating from personal accounts (such as Gmail). Image files accounted for 68.3%. The report also notes that 7% of employees were using Chinese AI models like DeepSeek, Baidu Chat, and Qwen.

"Overall, there has been a slight reduction in sensitive prompt frequency from Q4 2024 (down from 8.5% to 6.7% in Q1 2025)," reports SecurityWeek. "However, there has been a shift in the risk categories that are potentially exposed. Customer data (down from 45.8% to 27.8%), employee data (from 26.8% to 14.3%) and security (6.9% to 2.1%) have all reduced. Conversely, legal and financial data (up from 14.9% to 30.8%) and sensitive code (5.6% to 10.1%) have both increased. PII is a new category introduced in Q1 2025 and was tracked at 14.9%."


Comments Filter:
  • SpungRuAI's local agent is going to help make the Nelson report so much faster... oh shit, why did all my files just disappear?

  • doesn't mean you are sending sensitive company info to AI tools. I still use unapproved tools, but I don't send any code or info that would be sensitive. Why? Because Gemini isn't that great and that is the only approved tool.

    • Way too many do send enough information to those tools that, when stitched together, will result in information patterns that can be used to influence stock market value.

      • by Anonymous Coward

        The Cheeto has that beat by enough to make your fantasy market-manipulation seem laughable.

      • Way too many do send enough information to those tools that, when stitched together, will result in information patterns that can be used to influence stock market value.

        A fucking tweet can influence stock market value. Is that the tweet's fault, or more the fault of an ignorant society assuming a stock market should have its proverbial ear anywhere near the social media grindstone?

        Obvious answer is obvious.

    • doesn't mean you are sending sensitive company info to AI tools. I still use unapproved tools, but I don't send any code or info that would be sensitive. Why? Because Gemini isn't that great and that is the only approved tool.

      What you deem “sensitive” may not always align with your employer's definition. Especially tomorrow, when AI ownership and control changes or is revealed.

      And please do not assume ALL of your fellow co-workers are anywhere near as diligent as you are. Or even understand why they should be.

  • I like Linux. Everything is customized down to the details. Other people like Apple. Many would argue it's a good brand. Everyone sticks with the brand they like. I doubt there is a way of hiding the use of your favorite brand from your employer... and it's a lesson in hypocrisy, because management wants to use AI to both monitor you and make you more efficient... just with the one *we* paid for. Also, I'm guessing that people form strong emotional attachments to the AI with the longest memory. So this is going to
  • by Mean Variance ( 913229 ) <mean.variance@gmail.com> on Friday April 18, 2025 @09:36PM (#65316201)

    My work was blocking integrations, but you could still go to ChatGPT or similar and get the code snippet you wanted. Not me, of course. We seem to have settled that out at the corporate level. Fine. I will follow the rules as long as current tools are available.

    Still, what gets me is the tech screens I do for interviews. It's comical when a question requires some thinking. It gets very quiet. Then the eyes wander (remote Zoom interviews) and suddenly, nirvana! A solution: "I think we should use the leaky bucket algorithm." Interviews need to go back to the whiteboard.

  • Who *doesn't* use AI these days?

  • The concept that people are getting promoted based on their use of AI is a scary one to me. This means that after a while, AI companies will be able to charge whatever they want for their services, because if everyone is using an AI to keep a company successful, then not paying whatever that AI company wants could be catastrophic for that company.
  • If you think that half of all workers use AI, you are living in someone's marketing bubble.
  • That sounds metal as fuck.

    Brb, downloading shadow AI models.
