
Machine Logic: Our Lives Are Ruled By Big Tech's 'Decisions By Data' (theguardian.com)

With the advent of artificial intelligence and machine learning, we are increasingly moving to a world where many decisions around us are shaped by calculations rather than traditional human judgement. The Guardian, citing many industry experts, reminds us that these technologies filter who and what counts, including "who is released from jail, and what kind of treatment you will get in hospital." A digital media professor said these digital companies allow us to act, but in a very fine-grained, datafied, algorithm-ready way: "They put life to work, by rendering life in Taylorist data points that can be counted and measured." From the report (edited and condensed):

Jose van Dijck, president of the Dutch Royal Academy and the conference's keynote speaker, expands further. Datafication is the core logic of what she calls "the platform society," in which companies bypass traditional institutions, norms and codes by promising something better and more efficient -- appealing deceptively to public values while obscuring private gain. Van Dijck and her peers have nascent, urgent ideas. They commence with a pressing agenda for strong interdisciplinary research -- something Kate Crawford is spearheading at Microsoft Research, as are many other institutions, including the new Leverhulme Centre for the Future of Intelligence. There's the old theory to confront: that this is a conscious move on the part of consumers and, if so, that there's always a theoretical opt-out. Yet even digital activists plot by Gmail, concedes Fieke Jansen of the Berlin-based advocacy organisation Tactical Tech. The Big Five tech companies, as well as the extremely concentrated sources of finance behind them, are at the vanguard of "a society of centralized power and wealth." "How did we let it get this far?" she asks. Crawford says there are very practical reasons why tech companies have become so powerful. "We're trying to put so much responsibility on to individuals to step away from the 'evil platforms,' whereas in reality, there are so many reasons why people can't. The opportunity costs to employment, to their friends, to their families, are so high," she says.
  • The problem is not that decisions are being made by machines with little human input. The problem is that humans are getting very little insight into how the decisions are being made, and thus very little input into the decision-making processes, and even less ability to find and correct errors.

    Machines making decisions can be a very good thing. Machines making decisions for reasons that humans are not given enough information to follow likely is not.
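    A minimal sketch of that distinction in Python, using an invented loan-scoring rule (the feature names, weights, and threshold are all hypothetical): both functions reach the same verdict, but only one returns enough of its reasoning for a human to find and correct errors.

```python
# Hypothetical linear scoring rule -- feature names and weights are invented.
WEIGHTS = {"income": 0.5, "debt": -0.5, "years_employed": 0.25}
THRESHOLD = 1.0

def opaque_decision(applicant):
    """Returns only the verdict: no insight, nothing to contest."""
    score = sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
    return score >= THRESHOLD

def transparent_decision(applicant):
    """Returns the verdict plus each feature's contribution to it."""
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    score = sum(contributions.values())
    # Exposing per-feature contributions lets a human spot, say, a
    # mis-entered debt figure dominating the outcome -- and correct it.
    return score >= THRESHOLD, contributions

approved, why = transparent_decision({"income": 3.0, "debt": 1.5, "years_employed": 2.0})
print(approved, why)  # True {'income': 1.5, 'debt': -0.75, 'years_employed': 0.5}
```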

    • by plopez ( 54068 )

      And of course GIGO: decisions made on incomplete and questionable data. Once a data stream is polluted, there is no going back, which is why non-ACID compliance enrages me.
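      A minimal sketch of why atomicity matters here, using Python's built-in sqlite3 (the table and batch are invented for illustration): with a transaction, a half-bad batch rolls back entirely instead of permanently polluting the stream.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor TEXT NOT NULL, value REAL NOT NULL)")
conn.commit()  # commit the schema before the demo transaction

batch = [("s1", 20.5), ("s2", None)]  # second record is bad input

try:
    with conn:  # transaction: commits on success, rolls back on any error
        conn.executemany("INSERT INTO readings VALUES (?, ?)", batch)
except sqlite3.IntegrityError:
    pass  # NOT NULL constraint fired and the whole batch was rolled back

# Atomicity means the "good half" of a bad batch never pollutes the data.
print(conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0])  # prints 0
```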

      • by Anonymous Coward

        Even if there are zero errors by the computer storing the data, there will always be problems with people inputting the data. Many people game whatever systems they're using to achieve whatever outcome they want, on an individual level. The easiest example I can think of is health care billing. The doctors, nurses, whoever might input a worse diagnosis than the patient actually has in order to get insurance to pay for it. If you go looking for real information on diseases or whatever, you're wading through...

  • Simple - *Give Me Convenience or Give Me Death*

  • Bring On The AI (Score:3, Insightful)

    by brunes69 ( 86786 ) <`gro.daetsriek' `ta' `todhsals'> on Saturday October 08, 2016 @06:00PM (#53038791)

    Indeed, the biggest problem with this article is that it starts out from a base assumption that is flawed - that for some unknown reason, human "moral judgement" is superior to that of an algorithm based on big data, without giving any logical reason WHY we should trust humans more.

    However, given the way humans act around the world globally - on average I would take a machine's judgement over a human's judgement any day of the week.

    • by Empiric ( 675968 )

      The problem is that what to "solve for" is something the machine can't self-determine. It's not a function of the data or the computer; it will always be specified by humans.

      The data itself can support "optimize for broadest human compassionate benefit" or "optimize for greatest profit" -- which is the better objective is a value judgment. Guess which one developers are going to be told by corporate management to code for?
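      A minimal sketch of that value judgment (the plans and numbers are invented): the data is identical either way; only the human-chosen objective differs, and the "decision by data" flips with it.

```python
# Same (invented) data, two human-chosen objectives. The optimizer cannot
# decide which objective is "better" -- that choice is made before any code runs.
plans = [
    {"name": "generic drug", "profit": 10, "patients_helped": 900},
    {"name": "premium drug", "profit": 80, "patients_helped": 300},
]

by_profit = max(plans, key=lambda p: p["profit"])
by_benefit = max(plans, key=lambda p: p["patients_helped"])

print(by_profit["name"])   # premium drug
print(by_benefit["name"])  # generic drug
```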

      • It's not a corporation's job to optimize anything for human benefit. That is the job of government, or it is SUPPOSED to be; unfortunately, what you have in the United States is a horribly broken system that is no longer a democracy.

        • by Empiric ( 675968 )

          Yes, as you suggest, here it will make little difference, because the government will outsource it.

          Elsewhere, they probably will as well, but if they don't, the government and those humans optimizing for the government's benefit will do no better.

          Neither profit nor politics will provide the "objective right thing according to the data".

        • by Anonymous Coward

          If corporations do not benefit humanity, why should they exist? Government cannot save you if corporations become more powerful and you demolish "the State".

    • by sjames ( 1099 )

      Here's a hint. HUMANS wrote the software. However, unlike the flawed humans making the decision openly, where they might be vaguely accountable and where some may be willing to do the right thing, the bad human thinking that writes the software gets to hide behind the machine and never even has to see the consequences of its flaws.

    • by Anonymous Coward

      The biggest problem with your comment is that it starts out from a base assumption that is flawed - that for some unknown reason, an algorithm based on big data is superior to that of human "moral judgement", without giving any logical reason WHY we should trust algorithms more.

      However, given the way algorithms act around the world globally - on average I would take a human's judgement over an algorithm's judgement any day of the week.

    • You use the words "algorithm based on big data" as if they had intrinsic value. They don't, period. The data may be a great foundation, but the algorithm is human-made.

      Afraid of machines? Not really. We're not making any real progress towards machines having any sort of free will. I'm afraid of the generation gap.

      At the moment we have a certain view of the world. This shapes our goals and interpretations. That shapes the algorithms we create. Goal functions. Criteria. Queries.

      Now, bugs aside, those algorithms will

    • by pnutjam ( 523990 )
      Current AI is really statistics (as in "lies, damned lies, and statistics"). It's designed with a bias; someone is making those decisions and hiding behind "AI".
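      A minimal sketch of that claim (the historical counts are invented): a purely statistical "model" replays whatever bias its training data encodes, while the human choice of that data hides behind it.

```python
from collections import Counter

# Invented history encoding a past human bias: group A was mostly
# approved, group B mostly denied.
history = ([("A", "approve")] * 90 + [("A", "deny")] * 10
           + [("B", "approve")] * 30 + [("B", "deny")] * 70)

def fit(records):
    """'Training' is just a majority vote per group -- pure statistics."""
    votes = {}
    for group, outcome in records:
        votes.setdefault(group, Counter())[outcome] += 1
    return {g: c.most_common(1)[0][0] for g, c in votes.items()}

model = fit(history)
print(model)  # {'A': 'approve', 'B': 'deny'} -- the old bias, now "decided by AI"
```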
  • by Anonymous Coward

    Software engineers should read the IEEE Code of Ethics [ieee.org], especially the part about "avoid injuring others, their property, reputation, or employment by false or malicious action."

    • by plopez ( 54068 )

      Software engineers will never be true engineers.

      • Software engineers will never be true engineers.

        There is or at least has been some real engineering done for the space program. Code designed to a purpose, debugged by many, many hands and then in some cases even proven. The majority of software engineers are not really doing true engineering, but that doesn't mean none of them are.

      • by gweihir ( 88907 )

        Some of them already are. The problem is that most are not and many currently educated will not be.

      • by Anonymous Coward

        Or maybe they already are:
        https://www.youtube.com/watch?v=NP9AIUT9nos

    • Hm. How would this affect IEEE engineers who develop weapons systems?

  • And their numbers have always been abstracted. The numbers behind "big data" come from somewhere - studies, manual input, algorithms written by humans to turn analog input into digital output - and all are prone to error, as they have been in the past. When all these numbers are compiled and presented in a particular context by an interested party, a human decision / consultation will probably ensue, also not infallible.

    I don't really see what changes here...

  • I suspect a lot more lives have been ruined by the incompetent hacks at The Grauniad than by Big Data.

  • I first realized something to this effect way back in 2002, when a company called Ctrax offered a download-based DRM music service for college students for a small fee (or was it free?). This was absolutely revolutionary in 2002, when Spotify, etc. didn't exist, so if you wanted to obtain large quantities of licensed music for free, this was basically one of the best ways to do it. I guess Ctrax came so early at the beginning of the "de-DRMing" of the music industry because college students were among the m

  • "The Guardian, citing many industry experts, reminds us that these technologies filter who and what counts..."

    The Guardian has been an unreliable rag since GCHQ made them smash up their hard-drives after the Snowden disclosures.
    https://en.wikipedia.org/wiki/... [wikipedia.org]

    Any worthwhile filter would have excluded crap written by Luke Harding and Polly Toynbee.

  • With the advent of artificial intelligence and machine learning, we are increasingly moving to a world where many decisions around us are shaped by calculations rather than traditional human judgement.

    Isn't the point of AI to be indistinguishable from traditional human judgement? When it's not, can we please stop calling it AI? It's just a decision made by a computer with a certain set of inputs.
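    A minimal sketch of that last point (the rule and coefficients are invented, echoing the jail-release example from the summary): a fixed formula mapping inputs to a verdict, of the kind frequently branded as AI.

```python
# An invented "decision by data": a fixed rule with human-chosen
# coefficients, not anything resembling human judgement.
def release_recommendation(prior_offenses: int, age: int) -> str:
    risk = 0.7 * prior_offenses - 0.02 * age  # weights picked by a human
    return "detain" if risk > 1.0 else "release"

print(release_recommendation(prior_offenses=3, age=30))  # detain
```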
