Arvind Narayanan:

I will focus the rest of my talk on this third category [predicting social outcomes], where there’s a lot of snake oil.

I already showed you tools that claim to predict job suitability. Similarly, bail decisions are being made based on an algorithmic prediction of recidivism. People are being turned away at the border based on an algorithm that analyzed their social media posts and predicted a terrorist risk. […]

Compared to manual scoring rules, the use of AI for prediction has many drawbacks. Perhaps the most significant is the lack of explainability. Instead of points on a driver’s license, imagine a system in which every time you get pulled over, the police officer enters your data into a computer. Most times you get to go free, but at some point the black box system tells you you’re no longer allowed to drive.
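
To make the contrast concrete, here is a minimal sketch of the difference between a transparent points system and a black-box score – every rule, weight, and threshold below is invented for illustration:

```python
# Hypothetical illustration: a transparent points rule vs. an opaque model.
# All rules, weights, and thresholds here are invented for the example.

def points_rule(violations: list[str]) -> tuple[bool, str]:
    """Transparent: you can read off exactly why you lost your license."""
    points = {"speeding": 3, "red_light": 5, "dui": 10}
    total = sum(points[v] for v in violations)
    reason = f"{total}/12 points: " + ", ".join(violations)
    return total < 12, reason

def black_box(features: list[float]) -> bool:
    """Opaque: learned weights say yes or no, with no stated reason."""
    weights = [0.31, -1.7, 0.05, 2.2]  # learned, not chosen by anyone
    score = sum(w * x for w, x in zip(weights, features))
    return score < 1.0  # why 1.0? why these weights? the driver can't ask

print(points_rule(["speeding", "red_light"]))  # (True, '8/12 points: ...')
print(black_box([0.4, 0.1, 3.0, 0.2]))         # True or False, no explanation
```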

The bot that generates moths

@mothgenerator is a Twitter bot that generates both moths and their corresponding Latin names:

This bot tweets make-believe moths of all shapes, sizes, textures and iridescent colors. It’s programmed to generate variations in several anatomical structures of real moths, including antennas, wing shapes and wing markings.

Another program, which splices and recombines real Latin and English moth names, generates monikers for the moths. You can also reply to the account with name suggestions, and it will generate a corresponding moth.
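
As a rough sketch of the name-splicing idea (the fragment lists and recombination rule below are my own guesses, not the bot's actual code):

```python
# Guess at the name-splicing approach: recombine fragments cut from real
# moth names (Actias luna, Saturnia pavonia, Lymantria dispar, ...)
# into plausible-sounding new Latin binomials.
import random

genus_parts = ["Acti", "Satur", "Lyman", "Noctu", "Sphin"]
genus_ends = ["as", "nia", "tria", "idae", "gidae"]
species_parts = ["lu", "pavo", "dispa", "pronu", "ligus"]
species_ends = ["na", "nia", "r", "ba", "tri"]

def fake_moth_name() -> str:
    genus = random.choice(genus_parts) + random.choice(genus_ends)
    species = random.choice(species_parts) + random.choice(species_ends)
    return f"{genus.capitalize()} {species.lower()}"

print(fake_moth_name())  # e.g. "Saturtria pavoba"
```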

Yuval Harari and the useless human

Ezra Klein of Vox interviewed Yuval Harari, author of Sapiens and, more recently, Homo Deus. Speaking of AI, Harari distinguishes between intelligence and consciousness – the two go hand in hand in humans, but they don't necessarily have to coexist (nor does the latter have to be a prerequisite for the former) in an artificial intelligence:

Intelligence is not consciousness. Intelligence is the ability to solve problems. Consciousness is the ability to feel things. In humans and other animals, the two indeed go together. The way mammals solve problems is by feeling things. Our emotions and sensations are really an integral part of the way we solve problems in our lives. However, in the case of computers, we don’t see the two going together.

Over the past few decades, there has been immense development in computer intelligence and exactly zero development in computer consciousness. There is absolutely no reason to think that computers are anywhere near developing consciousness. They might be moving along a very different trajectory than mammalian evolution. In the case of mammals, evolution has driven mammals toward greater intelligence by way of consciousness, but in the case of computers, they might be progressing along a parallel and very different route to intelligence that just doesn’t involve consciousness at all.

An important passage in the book concerns the possibility that, as a consequence of automation, a large slice of humanity could lose its economic (and consequently political) value – that is, become 'useless' to the state and the economy. When and if that happens, the system will also lose its incentive to invest in this class of people (the reason we have universities, healthcare, and so on is that these things make us productive).

At that point, what do we do? One possibility often mentioned is that people will end up withdrawing and looking for meaning in their existence through virtual reality. A sad scenario, but not a new one, says Harari: for thousands of years we have been finding comfort and meaning in, and shaping our existence around, virtual realities that until now we have called 'religion':

You can think about religion simply as a virtual reality game. You invent rules that don’t really exist, but you believe these rules, and for your entire life you try to follow the rules. If you’re Christian, then if you do this, you get points. If you sin, you lose points. If by the time you finish the game when you’re dead, you gained enough points, you get up to the next level. You go to heaven.

People have been playing this virtual reality game for thousands of years, and it made them relatively content and happy with their lives. In the 21st century, we’ll just have the technology to create far more persuasive virtual reality games than the ones we’ve been playing for the past thousands of years. We’ll have the technology to actually create heavens and hells, not in our minds but using bits and using direct brain-computer interfaces.

For voice to work really well you need a narrow and predictable domain. You need to know what the user might ask and the user needs to know what they can ask. This was the structural problem with Siri – no matter how well the voice recognition part worked, there were still only 20 things that you could ask, yet Apple managed to give people the impression that you could ask anything, so you were bound to ask something that wasn’t on the list and get a computerized shrug.

Conversely, Amazon’s Alexa seems to have done a much better job at communicating what you can and cannot ask. Other narrow domains (hotel rooms, music, maps) also seem to work well, again, because you know what you can ask. You have to pick a field where it doesn’t matter that you can’t scale.
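
A toy sketch of that structural point, with invented intents and phrasings: a narrow-domain assistant works only because every utterance either hits a known intent or gets an explicit shrug:

```python
# Toy narrow-domain assistant: a fixed list of intents plus a fallback.
# The intents and keyword matching are invented for illustration.

INTENTS = {
    "play music": lambda: "Playing your playlist.",
    "set alarm": lambda: "Alarm set for 7:00.",
    "weather": lambda: "Sunny, 22 degrees.",
}

def handle(utterance: str) -> str:
    for trigger, action in INTENTS.items():
        if trigger in utterance.lower():
            return action()
    # The "computerized shrug": anything outside the known list fails,
    # no matter how well the speech recognition itself worked.
    return "Sorry, I can't help with that."

print(handle("Can you play music for me?"))   # Playing your playlist.
print(handle("What's the meaning of life?"))  # Sorry, I can't help with that.
```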

Try inspecting one of the images in your Facebook news feed: it will most likely have an alt tag, automatically populated with attributes that describe it – sun, mountain, nature, and so on, depending on the subject. These attributes aren't entered by users; Facebook adds them automatically, analyzing each uploaded image and trying to understand its content.

A Chrome extension gives you an idea of how much information Facebook can extract with this technique, letting you test its capabilities on any image.
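
A minimal sketch of what that inspection could look like programmatically, assuming a locally saved copy of the page's HTML (the file name is hypothetical; the "Image may contain" prefix reflects how Facebook's generated alt text has historically looked):

```python
# Extract Facebook's auto-generated alt text from a saved page.
# Requires: pip install beautifulsoup4. "feed.html" is a hypothetical
# local copy of a news-feed page.
from bs4 import BeautifulSoup

with open("feed.html", encoding="utf-8") as f:
    soup = BeautifulSoup(f, "html.parser")

for img in soup.find_all("img"):
    alt = img.get("alt", "")
    # Facebook's machine-generated descriptions have used this prefix.
    if "Image may contain" in alt:
        print(alt)  # e.g. "Image may contain: sky, mountain, nature"
```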

Eliezer Yudkowsky answers some questions on LessWrong about automation and unemployment – two things that (he says; I'm reporting and simplifying a lot: read it in full) might be related in a (very) distant future, when and if we have a superintelligent AI, but that for now are not cause and effect:

A. Many people would hire personal cooks or maids if we could afford them, which is the sort of new service that ought to come into existence if other jobs were eliminated – the reason maids became less common is that they were offered better jobs, not because demand for that form of human labor stopped existing. Or to be less extreme, there are lots of businesses who’d take nearly-free employees at various occupations, if those employees could be hired literally at minimum wage and legal liability wasn’t an issue. Right now we haven’t run out of want or use for human labor, so how could “The End of Demand” be producing unemployment right now? The fundamental fact that’s driven employment over the course of previous human history is that it is a very strange state of affairs for somebody sitting around doing nothing, to have nothing better to do. We do not literally have nothing better for unemployed workers to do. Our civilization is not that advanced.

[…]

Q. But AI will inevitably become a problem later?

A. Not necessarily.  We only get the Hansonian scenario if AI is broadly, steadily going past IQ 70, 80, 90, etc., making an increasingly large portion of the population fully obsolete in the sense that there is literally no job anywhere on Earth for them to do instead of nothing, because for every task they could do there is an AI algorithm or robot which does it more cheaply. That scenario isn’t the only possibility.

Harvard Business Review:

Technological revolutions tend to involve some important activity becoming cheap, like the cost of communication or finding information. Machine intelligence is, in its essence, a prediction technology, so the economic shift will center around a drop in the cost of prediction. The first effect of machine intelligence will be to lower the cost of goods and services that rely on prediction. This matters because prediction is an input to a host of activities including transportation, agriculture, healthcare, energy manufacturing, and retail.

When the cost of any input falls so precipitously, there are two other well-established economic implications. First, we will start using prediction to perform tasks where we previously didn’t. Second, the value of other things that complement prediction will rise.

A Google experiment that can recognize what you draw. You have twenty seconds to draw a concept, and within those same twenty seconds Google should manage to identify what you're drawing.

It guessed right almost every time with my decidedly horrible, muddled sketches. To put that in perspective: the human sitting across from me guessed what they represented less often than Google did.

(Other unsettling experiments here)

A new trailer for Lo and Behold, Werner Herzog's documentary about the internet and artificial intelligence. I'm eagerly awaiting its release.

It will be titled Lo And Behold: Reveries Of The Connected World, and it will premiere at the Sundance Festival, which starts tomorrow. Elon will be there too:

LO AND BEHOLD traces what Herzog describes as “one of the biggest revolutions we as humans are experiencing,” from its most elevating accomplishments to its darkest corners. Featuring original interviews with cyberspace pioneers and prophets such as Elon Musk, Bob Kahn, and world-famous hacker Kevin Mitnick, the film travels through a series of interconnected episodes that reveal the ways in which the online world has transformed how virtually everything in the real world works, from business to education, space travel to healthcare, and the very heart of how we conduct our personal relationships.

Ars Technica:

There are two main problems for any brain simulator. The first is that the human brain is extraordinarily complex, with around 100 billion neurons and 1,000 trillion synaptic interconnections. None of this is digital; it depends on electrochemical signaling with inter-related timing and analogue components, the sort of molecular and biological machinery that we are only just starting to understand.

Even much simpler brains remain mysterious. The landmark success to date for Blue Brain, reported this year, has been a small 30,000 neuron section of a rat brain that replicates signals seen in living rodents. 30,000 is just a tiny fraction of a complete mammalian brain, and as the number of neurons and interconnecting synapses increases, so the simulation becomes exponentially more complex—and exponentially beyond our current technological reach.

This yawning chasm of understanding leads to the second big problem: there is no accepted theory of mind that describes what “thought” actually is.

A post worth reading in full. For the near future, I think we can rest easy.
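
Putting the quoted figures side by side, a quick back-of-the-envelope calculation (using the human-brain numbers as the yardstick):

```python
# Back-of-the-envelope scale check using the figures quoted above.
neurons_human = 100e9         # ~100 billion neurons
synapses_human = 1000e12      # ~1,000 trillion synaptic interconnections
neurons_blue_brain = 30_000   # the simulated rat-brain section

print(synapses_human / neurons_human)      # ~10,000 synapses per neuron
print(neurons_blue_brain / neurons_human)  # ~3e-7: three ten-millionths
```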

Steven Levy sat down for a chat with Elon Musk (and Sam Altman) to better understand what their plans are for OpenAI:

I want to return to the idea that by sharing AI, we might not suffer the worst of its negative consequences. Isn’t there a risk that by making it more available, you’ll be increasing the potential dangers?

Altman: I wish I could count the hours that I have spent with Elon debating this topic and with others as well and I am still not a hundred percent certain. You can never be a hundred percent certain, right? But play out the different scenarios. Security through secrecy on technology has just not worked very often. If only one person gets to have it, how do you decide if that should be Google or the U.S. government or the Chinese government or ISIS or who? There are lots of bad humans in the world and yet humanity has continued to thrive. However, what would happen if one of those humans were a billion times more powerful than another human?

Musk: I think the best defense against the misuse of AI is to empower as many people as possible to have AI. If everyone has AI powers, then there’s not any one person or a small set of individuals who can have AI superpower.

Elon Musk, Sam Altman, and a rather conspicuous number of Silicon Valley names have unveiled OpenAI, a non-profit dedicated to advancing the state of artificial intelligence without focusing on profits, but on the "good" humanity can derive from it (if you want to scare yourself, there's a series of long posts on the blog Wait But Why explaining what could happen if it all goes wrong).

OpenAI:

Because of AI’s surprising history, it’s hard to predict when human-level AI might come within reach. When it does, it’ll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest.

We’re hoping to grow OpenAI into such an institution. As a non-profit, our aim is to build value for everyone rather than shareholders. Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world. We’ll freely collaborate with others across many institutions and expect to work with companies to research and deploy new technologies.

What machines dream of

Google asked itself what its machines see – the ones it has taught to recognize and analyze photos. Simulating a neural network, it tried to give an answer to this question (it's all well explained in their post).

These networks don't just have to recognize an image; they can also generate new ones. So the team asked what a machine sees when it starts from an arbitrary image (clouds, for example) – which objects it recognizes, and what it emphasizes.

They write:

This creates a feedback loop: if a cloud looks a little bit like a bird, the network will make it look more like a bird. This in turn will make the network recognize the bird even more strongly on the next pass and so forth, until a highly detailed bird appears, seemingly out of nowhere.
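
A minimal sketch of that feedback loop, using PyTorch and a pretrained torchvision network as stand-ins – Google's post uses its own Inception models and tooling, so this is only the shape of the idea:

```python
# Sketch of the DeepDream-style feedback loop: repeatedly nudge the
# input image in the direction that strengthens what the network
# already "sees" in it. Model choice and step size are placeholders.
import torch
from torchvision import models

layer = models.vgg16(weights="DEFAULT").features[:20].eval()

def dream(image: torch.Tensor, steps: int = 20, lr: float = 0.05) -> torch.Tensor:
    # image: a (1, 3, H, W) float tensor, e.g. a photo of clouds
    image = image.clone().requires_grad_(True)
    for _ in range(steps):
        loss = layer(image).norm()  # how strongly the layer responds
        loss.backward()
        with torch.no_grad():
            # If a cloud looks a little like a bird, this step makes it
            # look more bird-like; repeated, a detailed bird emerges.
            image += lr * image.grad / (image.grad.abs().mean() + 1e-8)
            image.grad.zero_()
    return image.detach()
```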

In other words, this is one of the many things the machines have imagined:

The right video to watch after reading the two Wait But Why pieces on AI.

(via @diegopetrucci)