Some bits about AI

The impossibility of intelligence explosion

The basic premise is that, in the near future, a first “seed AI” will be created, with general problem-solving abilities slightly surpassing that of humans. This seed AI would start designing better AIs, initiating a recursive self-improvement loop that would immediately leave human intelligence in the dust, overtaking it by orders of magnitude in a short time. Proponents of this theory also regard intelligence as a kind of superpower, conferring its holders with almost supernatural capabilities to shape their environment (…)

This science-fiction narrative contributes to the dangerously misleading public debate that is ongoing about the risks of AI and the need for AI regulation. In this post, I argue that intelligence explosion is impossible — that the notion of intelligence explosion comes from a profound misunderstanding of both the nature of intelligence and the behavior of recursively self-augmenting systems.

Thorough post by François Chollet (author of Keras, currently working on deep learning at Google) refuting the dangerous misconceptions that fuel the AI debate.

via medium.com
December 10, 2017

We’re building a dystopia just to make people click on ads

So when people voice fears of artificial intelligence, very often, they invoke images of humanoid robots run amok. You know? Terminator? You know, that might be something to consider, but that’s a distant threat. Or, we fret about digital surveillance with metaphors from the past. “1984,” George Orwell’s “1984,” it’s hitting the bestseller lists again. It’s a great book, but it’s not the correct dystopia for the 21st century. What we need to fear most is not what artificial intelligence will do to us on its own, but how the people in power will use artificial intelligence to control us and to manipulate us in novel, sometimes hidden, subtle and unexpected ways. Much of the technology that threatens our freedom and our dignity in the near-term future is being developed by companies in the business of capturing and selling our data and our attention to advertisers and others: Facebook, Google, Amazon, Alibaba, Tencent.

Essential talk by sociologist Zeynep Tüfekçi on the (very current) risks of machine learning applications in social media and advertising networks.

via ted.com
October 30, 2017

The Myth of a Superhuman AI

The most common misconception about artificial intelligence begins with the common misconception about natural intelligence. This misconception is that intelligence is a single dimension. (…) Intelligence is not a single dimension. It is a complex of many types and modes of cognition, each one a continuum.

The main problem with AI remains finding a suitable definition of the concept of "intelligence".

via wired.com
August 16, 2017