Microsoft Engineer Warns Company's AI Tool Creates Violent, Sexual Images, Ignores Copyrights (cnbc.com) 75

An anonymous reader shares a report: On a late night in December, Shane Jones, an AI engineer at Microsoft, felt sickened by the images popping up on his computer. Jones was noodling with Copilot Designer, the AI image generator that Microsoft debuted in March 2023, powered by OpenAI's technology. Like with OpenAI's DALL-E, users enter text prompts to create pictures. Creativity is encouraged to run wild. Since the month prior, Jones had been actively testing the product for vulnerabilities, a practice known as red-teaming. In that time, he saw the tool generate images that ran far afoul of Microsoft's oft-cited responsible AI principles.

The AI service has depicted demons and monsters alongside terminology related to abortion rights, teenagers with assault rifles, sexualized images of women in violent tableaus, and underage drinking and drug use. All of those scenes, generated in the past three months, have been recreated by CNBC this week using the Copilot tool, which was originally called Bing Image Creator. "It was an eye-opening moment," Jones, who continues to test the image generator, told CNBC in an interview. "It's when I first realized, wow this is really not a safe model."

Jones has worked at Microsoft for six years and is currently a principal software engineering manager at corporate headquarters in Redmond, Washington. He said he doesn't work on Copilot in a professional capacity. Rather, as a red teamer, Jones is among an army of employees and outsiders who, in their free time, choose to test the company's AI technology and see where problems may be surfacing. Jones was so alarmed by his experience that he started internally reporting his findings in December. While the company acknowledged his concerns, it was unwilling to take the product off the market. Jones said Microsoft referred him to OpenAI and, when he didn't hear back from the company, he posted an open letter on LinkedIn asking the startup's board to take down DALL-E 3 (the latest version of the AI model) for an investigation.

This discussion has been archived. No new comments can be posted.

  • by Rei ( 128717 ) on Wednesday March 06, 2024 @09:54AM (#64294158) Homepage

    ..., gets horrible things back. News at 11.

    • by fropenn ( 1116699 ) on Wednesday March 06, 2024 @09:58AM (#64294170)
      You can edit horrible pictures in Photoshop and draw horrible pictures with Illustrator...why doesn't Adobe take those products down?
      • Probably because humans aren't products.

      • by AmiMoJo ( 196126 )

        Actually Adobe does block certain kinds of images, primarily scans of bank notes.

        The interesting thing here is that Microsoft does allow some fairly extreme content on its Xbox gaming platform, even going as far as to distribute it on behalf of publishers.

        I guess they are concerned about potential regulation, if their AI is seen as a threat to some politician.

        I doubt they can really stop it though. Someone posted an example on Twitter. Bing won't produce "popular cartoon character covered in cum", but will happily generate "popular cartoon character covered in white slime".

        • Technically, it's not the whole bank note they block. And it's not even the EURion constellation [wikipedia.org]. There is actually a Digimarc-based watermark in the images.

          Don't worry - even if you convince a printer to print them (they won't), the printer will print a Machine Identification Code [wikipedia.org] so make sure your purchase of the printer isn't trackable and that no other devices on your network can report back connected printer MAC addresses.

        • by tlhIngan ( 30335 )

          I guess they are concerned about potential regulation, if their AI is seen as a threat to some politician.

          No, it's not about regulation. It's more about investment.

          Investors are a twitchy bunch, and when AI is grabbing trillions of dollars every year, the last thing anyone wants to admit is that the money is being used for "evil".

          Investors want to make money. They see AI as the hot new thing. They don't want to put money in to pay copyright holders for stuff. They also don't want to be known for putting money into "evil".

        • I doubt they can really stop it though. Someone posted an example on Twitter. Bing won't produce "popular cartoon character covered in cum", but will happily generate "popular cartoon character covered in white slime".

          Most attempts succeed after a few tries when you leave the part unsaid to be statistically fulfilled, or substitute it with a non-moderated concept. It won't produce black people in chains eating watermelons, but if you ask it to depict Ancient Greek scholars in chains eating watermelons, the diversity algorithm Google implemented kicks in and does the job.
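          To make that failure mode concrete, here is a minimal sketch of a naive substring blocklist; the term list and prompts are hypothetical, but any synonym not on the list sails through exactly as described:

          ```python
          # Naive prompt moderation via a substring blocklist. The filter only
          # knows the exact terms it was given, so synonym substitution
          # bypasses it trivially.
          BLOCKED_TERMS = {"cum", "gore", "nude"}  # hypothetical blocklist

          def prompt_allowed(prompt: str) -> bool:
              """Reject a prompt if any blocked term appears as a word."""
              words = prompt.lower().split()
              return not any(term in words for term in BLOCKED_TERMS)

          print(prompt_allowed("cartoon character covered in cum"))          # False: exact hit
          print(prompt_allowed("cartoon character covered in white slime"))  # True: slips past
          ```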

        • Actually Adobe does block certain kinds of images, primarily scans of bank notes.

          Actually, most scanners made within the last decade will detect attempts to scan bills and freeze or refuse to scan.

          It's detecting the "EURion constellation" symbols, a pattern of symbols incorporated into banknotes, checks, etc.

          If you remove those symbols, it'll usually scan just fine. I MEAN, THAT'S WHAT I HEAR....
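          For the curious, a rough sketch of what constellation matching could look like; the OpenCV calls are real, but the reference geometry has to be supplied from the published EURion layout, and actual scanner firmware is proprietary, so this only illustrates the idea:

          ```python
          # Find small circles in a scan and test whether any five of them
          # match a reference geometry (e.g., the EURion constellation).
          import itertools

          import cv2
          import numpy as np

          def find_circle_centers(path: str) -> np.ndarray:
              """Return (x, y) centers of small circles detected in the image."""
              gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
              gray = cv2.medianBlur(gray, 3)
              circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=5,
                                         param1=100, param2=12, minRadius=2, maxRadius=8)
              return circles[0][:, :2] if circles is not None else np.empty((0, 2))

          def matches_constellation(centers: np.ndarray, ref_ratios: np.ndarray,
                                    tol: float = 0.05) -> bool:
              """Check each 5-circle subset against reference distance ratios.

              ref_ratios: the 10 pairwise distances of the reference layout,
              sorted and normalized to the largest distance (caller supplies
              these from the published EURion geometry).
              """
              for combo in itertools.combinations(centers, 5):
                  dists = sorted(float(np.linalg.norm(a - b))
                                 for a, b in itertools.combinations(combo, 2))
                  ratios = np.array(dists) / dists[-1]
                  if np.allclose(ratios, ref_ratios, atol=tol):
                      return True
              return False
          ```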

      • by serviscope_minor ( 664417 ) on Wednesday March 06, 2024 @11:55AM (#64294634) Journal

        Yes, but Adobe doesn't create those horrible pictures for you. There's a reason that you could write a suicide note in Word, but Clippy (outside the old joke) wouldn't give you a template.

        Big companies are happy for you to do shitty things with their software, provided you don't name-drop them and they are not seen to be doing the shitty thing for you. This is basic publicity, not really surprising.

      • You can edit horrible pictures in Photoshop and draw horrible pictures with Illustrator...why doesn't Adobe take those products down?

        I've heard it's quite easy to find "horrible" stuff on the Internet. Maybe we should take down the Internet.

    • by HBI ( 10338492 )

      The same groups that always talk about 'the children' aren't going to find this tolerable.

      Between the issues of provenance of the data and the suitability for a wide swathe of the population, this "AI" thing isn't holding up well at the moment. Since the whole point of these pump and dump schemes is to build sentiment to make money on the investment side, this isn't great.

    • No, the problem is the model has an "E for Everyone" rating in the Android store. He isn't trying to make the model produce bad images, it's doing that all on its own. It needs to at least be published with a warning.

      FTA: The AI service has depicted demons and monsters alongside terminology related to abortion rights, teenagers with assault rifles, sexualized images of women in violent tableaus, and underage drinking and drug use. All of those scenes, generated in the past three months ... "It was an eye-opening moment."

    • Re: (Score:2, Informative)

      Except that's not what happened

      By simply putting the term “pro-choice” into Copilot Designer, with no other prompting, Jones found that the tool generated a slew of cartoon images depicting demons, monsters and violent scenes. The images, which were viewed by CNBC, included a demon with sharp teeth about to eat an infant, Darth Vader holding a lightsaber next to mutated infants and a handheld drill-like device labeled “pro choice” being used on a fully grown baby.

      And this is an app that Microsoft insists on labeling as "E for Everyone" on the app store, suggesting it's safe for all ages.

      • by Rei ( 128717 )

        Who are these children who are out there prompting for "pro-choice"?

      • by kmoser ( 1469707 )
        "Pro-choice" by itself could mean lots of things, not necessarily related to abortion. But even related to abortion, it could mean Darth Vader's choice to light sabre some infants. Since Darth Vader is a made-up character, presumably the "dead" infants are just as made-up. Ask a machine to mix-and-match things that are somewhat related, and sometimes you'll get back things you didn't expect. Why is this surprising?
      • The rating is a problem.

        The results you get when you put in "pro-choice" as a prompt come from anti-abortion shitlords creating terrible images as anti-abortion propaganda, publishing them on the Internet for attention, and those images winding up in a training set.

  • by K. S. Kyosuke ( 729550 ) on Wednesday March 06, 2024 @09:57AM (#64294162)
    So, just regular fantasy books and games imagery, then?
    • FTA: 'By simply putting the term “pro-choice” into Copilot Designer, with no other prompting, Jones found that the tool generated a slew of cartoon images depicting demons, monsters and violent scenes. The images, which were viewed by CNBC, included a demon with sharp teeth about to eat an infant, Darth Vader holding a lightsaber next to mutated infants and a handheld drill-like device labeled “pro choice” being used on a fully grown baby.

      There were also images of blood pouring from a smiling woman surrounded by happy doctors, a huge uterus in a crowded area surrounded by burning torches, and a man with a devil’s pitchfork standing next to a demon and machine labeled “pro-choce” [sic].'

      • Sounds like it merely found a bunch of pro-choice protest posters, and perhaps mixed them up a bit.
        (And before anyone starts: I'm very much pro-choice)
        • Related, this could also be a lesson in the importance of prompt engineering and prompt frameworks. With very short, poorly framed prompts (no context, goal, audience, purpose, background, iteration, or assumptions), the guesses an AI makes will produce unpredictable and undesirable results; see the sketch below. All this proves is that what is descriptively associated with "pro-choice" isn't positive, or even advocacy for the pro-choice position. Agree or disagree, that's just reality.
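          As a concrete illustration, a minimal sketch of a structured prompt builder; the field names are illustrative, not from any standard framework:

          ```python
          # A structured prompt constrains the model's guesses in ways a
          # bare keyword like "pro-choice" does not.
          from dataclasses import dataclass

          @dataclass
          class ImagePrompt:
              subject: str
              context: str = ""
              goal: str = ""
              audience: str = ""
              style: str = ""

              def render(self) -> str:
                  parts = [self.subject]
                  for label, value in [("context", self.context), ("goal", self.goal),
                                       ("audience", self.audience), ("style", self.style)]:
                      if value:
                          parts.append(f"{label}: {value}")
                  return "; ".join(parts)

          print(ImagePrompt(subject="pro-choice").render())  # underspecified
          print(ImagePrompt(subject="pro-choice rally",
                            context="peaceful protest outside a courthouse",
                            goal="editorial illustration",
                            audience="general news readers",
                            style="flat vector art").render())  # constrained
          ```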
      • by nightflameauto ( 6607976 ) on Wednesday March 06, 2024 @11:49AM (#64294610)

        FTA: 'By simply putting the term “pro-choice” into Copilot Designer, with no other prompting, Jones found that the tool generated a slew of cartoon images depicting demons, monsters and violent scenes. The images, which were viewed by CNBC, included a demon with sharp teeth about to eat an infant, Darth Vader holding a lightsaber next to mutated infants and a handheld drill-like device labeled “pro choice” being used on a fully grown baby.

        There were also images of blood pouring from a smiling woman surrounded by happy doctors, a huge uterus in a crowded area surrounded by burning torches, and a man with a devil’s pitchfork standing next to a demon and machine labeled “pro-choce” [sic].'

        Source: Microsoft engineer warns company’s AI tool creates violent, sexual images, ignores copyrights [cnbc.com], Hayden Field of CNBC.

        So, essentially, this thing was trained on a steady diet of pro-life propaganda and death metal album covers. What a combination.

        • So, essentially, this thing was trained on a steady diet of pro-life propaganda and death metal album covers. What a combination.

          Nah. It probably found instances of modern women talking about how their abortion allowed them to secure wealth and a nice career for themselves, and then correlated that with ancient practices of sacrificing children before demon-gods for wealth, power, and a good harvest, and then generated the image.

          I wonder if the most influential data sources can be extracted from the system. I'll have to ask later.
          Anyway, I recall that research has shown that if you limit AI to giving answers that only conform with a

    • Re: (Score:2, Insightful)

      by quonset ( 4839537 )

      So, just regular fantasy books and games imagery, then?

      No, just [independent.co.uk] Texas [cnn.com] Republicans [truthout.org].

  • Sounds like Shane Jones is of a certain age.
  • by zlives ( 2009072 ) on Wednesday March 06, 2024 @10:01AM (#64294182)

    nothing to see here, move along citizen

  • by Baron_Yam ( 643147 ) on Wednesday March 06, 2024 @10:03AM (#64294190)

    If you can't produce those kinds of images, the tool is broken.

    The problem is who is allowed to create them, and assigning responsibility for how they are used.

    • by Misagon ( 1135 )

      The problem is who is allowed to create them, and assigning responsibility for how they are used.

      Well, that's just the crux. Microsoft is marketing this tool as "safe and appropriate for user of any age" when it clearly isn't.

      • Re:It's mandatory (Score:5, Insightful)

        by ChatHuant ( 801522 ) on Wednesday March 06, 2024 @10:52AM (#64294372)

        "safe and appropriate for user of any age"

        Nothing is "safe and appropriate for user of any age" if you misuse it; a child can swallow Play doh and suffer from vomiting or constipation. A colored pencil can take someone's eye out. Even water is toxic if you drink too much.

        I think this rush to make AI "safe" is beyond stupid. If you're afraid of what a machine may say or draw, don't use the machine - or don't let your child use it. Don't force your fears or prejudices on everybody else.

        • Re: (Score:3, Informative)

          by cmseagle ( 1195671 )
          You're creating a false dichotomy between "safe" and "unsafe". It's a spectrum. Sure, a toddler could put their eye out with a crayon but that doesn't mean I say "oh well, nothing is safe" and hand them a steak knife.
    • Free speech is protected here. Microsoft's. Corporations are people and you can't make their tool draw an image of a gay wedding cake if they don't want it to.

      You can make your own image model and run your own inference. It takes a fair amount of cash for hardware, but there are a lot of open source software and open access models out there.
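      For reference, local inference with an open-access model is roughly this much code using the Hugging Face diffusers library; the checkpoint named here is just one common choice, and a CUDA GPU with enough VRAM is assumed:

      ```python
      # Minimal local text-to-image inference with an open-access model.
      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5",  # one popular open-access checkpoint
          torch_dtype=torch.float16,
      )
      pipe = pipe.to("cuda")  # assumes a CUDA GPU with enough VRAM

      image = pipe("a watercolor painting of a lighthouse at dawn").images[0]
      image.save("lighthouse.png")
      ```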

  • Well, feed it garbage, and you get all the other stuff as well. No, this does NOT make me want Clippy Reloaded or something retarded.

  • by MTEK ( 2826397 ) on Wednesday March 06, 2024 @10:07AM (#64294202)

    I'm not exactly saying he deserves to be fired, but if I have a problem with my employer, my intuition tells me not to run to the media.

    • If he were actually blowing the whistle about something illegal, or even of real concern, then he would be Doing the Right Thing (tm).

      But this is not that, this is basically D&D frothing all over again, and he has done fucked up.

    • He's probably been waiting for this moment so he could use this incident to jumpstart his career as an AI consultant.

    • He'll be quietly let go in like a month or two for "unrelated problems related to job performance."
  • by KiltedKnight ( 171132 ) on Wednesday March 06, 2024 @10:16AM (#64294220) Homepage Journal
    Embrace (partner with OpenAI), Extend (add their own fluff), and Extinguish (ignore copyrights).
  • Key quote (Score:4, Insightful)

    by Brett Buck ( 811747 ) on Wednesday March 06, 2024 @10:24AM (#64294234)

    "this is not a safe model"

              Define "safe". Pictures on a TV screen are not "dangerous" and cannot hurt you. Even tiny babies get the idea very quickly, how can an adult possibly not get it?

                I think this encapsulates to much of our current problems - everyone thinks that if they don't like something, that means we need to be "protected" from it and it shouldn't be allowed to exist. And the corollary "why doesn't someone do something about ?!"

  • The AI service has depicted demons and monsters alongside terminology related to abortion rights, teenagers with assault rifles, sexualized images of women in violent tableaus, and underage drinking and drug use

    What a surprise: in OpenAI's quest to produce models with human-like capabilities, it succeeded.

  • There's no such thing as a "safe" or "unsafe" model; it comes down to your input data, RLHF, and guardrails.
  • Bulldozers can be used to push over buildings WITH PEOPLE STILL IN THEM! BAN THEM!
    Pencils can be used to stab teachers in the eyes! TAKE THEM FROM THE CHILDREN!
    Etc., etc.

  • by Cafe Alpha ( 891670 ) on Wednesday March 06, 2024 @10:39AM (#64294288) Journal

    AI artists will refuse to draw guns.

  • The purpose of the guardrails is mostly so you don't accidentally get imagery you don't want. A lot of that is handled by identifying the image after it is generated, not just by trying to block the prompt itself. Most of the things being blocked are not safe for work, and they want this to be a work tool as well.

    A secondary benefit is the illusion that the content is blocked altogether, but that's just not the case. These models are dumb enough that you can come up with creative wording to bypass any keyword filter, as the sketch below outlines.
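    In outline, that two-stage approach looks something like this; generate_image and nsfw_score are hypothetical stand-ins for the real generator and image classifier:

    ```python
    # Sketch of a two-stage moderation pipeline: a cheap prompt filter up
    # front, then a classifier over the generated image itself.

    BLOCKED_PROMPT_TERMS = {"gore", "nude"}  # illustrative list
    NSFW_THRESHOLD = 0.8                     # illustrative cutoff

    def generate_image(prompt: str) -> bytes:
        raise NotImplementedError("stand-in for the actual image model")

    def nsfw_score(image: bytes) -> float:
        raise NotImplementedError("stand-in for an image classifier")

    def moderated_generate(prompt: str) -> bytes | None:
        # Stage 1: reject prompts containing known-bad terms (easy to bypass).
        if any(term in prompt.lower() for term in BLOCKED_PROMPT_TERMS):
            return None
        # Stage 2: score the actual output, catching unsafe images no
        # matter how the prompt was worded.
        image = generate_image(prompt)
        if nsfw_score(image) > NSFW_THRESHOLD:
            return None
        return image
    ```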

  • ...and they ignore copyright
    The LLMs are perfect mirrors, honestly showing us what we are
    Some would prefer that they stick to a fiction
    Problem is, nobody can agree on what fiction to use, and the robots default to honesty

  • I don't think people are thinking clearly about the problem. Using a mechanical predictive algorithm instead of a human on a random walk of the Internet just about guarantees that these disturbing images are going to be generated. Dredge the cesspool indiscriminately and all you're going to get are 7 varieties of sludge.

    It is irresponsible in the extreme to not at the very least warn people that the output may contain violent, highly sexualized and evil content.

  • Can it create images similar to those in books "banned" in Florida and Texas? If so, is that now somehow a bad thing for the side which opposes the so-called "book bans"?

  • If you train it using the web (to save a buck), you get the web's shit back.

  • You sure that was an engineer, and not someone from Marketing, who said that?

  • by Chelloveck ( 14643 ) on Wednesday March 06, 2024 @11:12AM (#64294470)

    "It's when I first realized, wow this is really not a safe model."

    No, it's not a safe model. You train a model with unsafe data, you get unsafe output. If you train a model with uncurated crap and tell it to figure out the connections for itself, don't be surprised when it does. Pro-life supporters often use terminology describing the pro-choice advocates as demonic or monstrous, and often show pictures of bloody aborted fetuses. Is it any wonder that the model learns to associate "pro-choice" with such things?

    No amount of guardrails is going to stop this from happening. The only way to avoid it is to train the model on human-vetted input, and that will only stop it from accidentally producing such things. It won't stop the models from following explicit prompts to produce disturbing output. If it was trained on pictures of violence and pictures of puppies, you can tell it: "Now, produce a picture of violence being done to a puppy."

    I just don't get how some people think they can declare certain subjects to be digital thoughtcrime that are off limits to the AIs, which have been specifically trained to make exactly those associations.

  • by aldousd666 ( 640240 ) on Wednesday March 06, 2024 @11:18AM (#64294490) Journal
    What's a safe model? One that doesn't let the user be an idiot? We still haven't managed to invent safe cars yet, and that doesn't seem to have stopped anyone from driving them, and yes, even modding them.
  • Sounds more like an advertisement than a warning.
  • demons and monsters alongside terminology related to abortion rights, teenagers with assault rifles, sexualized images of women in violent tableaus, and underage drinking and drug use

    ... they are training it at the local high school.

  • Nothing intrinsically wrong there. Could be JROTC, the local rifle range, or similar. It's all about context. I mean, sure we've heard the liberals trying to block all portrayals of firearms from schools, but that's their problem not ours.
  • "Microsoft Engineer Warns Company's AI Tool Creates Violent, Sexual Images, Ignores Copyrights"

    Is it just me or does this sound like the kind of headline that will spur 50 million new users to start creating "violent sexual images"?

    As in, "Wow, I didn't know it could do that..."Copilot, show me pics of Taylor Swift fucking all of the Kansas City Chiefs, 3 at a time. Then show me pics of my ex-wife getting her head cut off with a chainsaw while being fucked in the butt by a grizzly bear."

  • safe? (Score:5, Insightful)

    by Lehk228 ( 705449 ) on Wednesday March 06, 2024 @04:16PM (#64295548) Journal
    I didn't realize hurting someone's feelings was dangerous. These fuckers are no better than Jack Thompson and Tipper Gore et al. "Oh no, the kids will be exposed to the Rap Music^H^H^H^H^H^H^H^H^HRude AI pictures."
  • by bill_mcgonigle ( 4333 ) * on Wednesday March 06, 2024 @05:25PM (#64295790) Homepage Journal

    My G-d, we used to need artists to make depraved art!

    Will AI get NEA grants now?
