Archive for AI

Cognition as a Service: Can next-gen creepiness be countered with crowd-sourced ethics?

by David Solomonoff

Now that marketers use cloud computing to offer everything as a service, from infrastructure as a service to platform as a service to software as a service, what’s left?

Cognitive computing, of course.

Cognition as a Service (CaaS) is the next buzzword you’ll be hearing. Moving from the top of the stack to directly inside the head, AI in the cloud will power mobile and embedded devices to do things they lack the on-board capabilities for, such as speech recognition, image recognition, and natural language processing (NLP). Apple’s cloud-based Siri voice recognition was one of the first out of the gate, but a stampede has followed, including Wolfram Alpha, IBM’s Watson, Google Now, and Cortana, as well as newer players like Ginger, ReKognition, and Jetlore.

Companies want to know more about their customers, business partners, competitors, and employees, as do governments about their citizens and cybercriminals about their potential victims. To achieve that goal, the cloud will connect the Internet of Things (IoT) via machine-to-machine (M2M) communications.

The cognitive powers required will be embedded in operating systems so that apps can easily be developed by accessing the desired functionality through an API rather than requiring each developer to reinvent the wheel.
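To make that division of labor concrete, here is a minimal sketch of what calling such a cloud cognition service might look like from a developer’s side. The endpoint, key, and response format are hypothetical, not any particular vendor’s actual API:

    import requests  # widely used Python HTTP client library

    # Hypothetical CaaS endpoint and key -- not a real vendor's API.
    CAAS_ENDPOINT = "https://api.example-caas.com/v1/speech-to-text"
    API_KEY = "your-api-key"

    def transcribe(audio_path):
        """Upload an audio file to a (hypothetical) cloud speech-recognition
        service and return the transcript. The heavy lifting -- acoustic and
        language models -- happens server-side; the device just uploads
        audio and parses JSON."""
        with open(audio_path, "rb") as f:
            response = requests.post(
                CAAS_ENDPOINT,
                headers={"Authorization": "Bearer " + API_KEY},
                files={"audio": f},
            )
        response.raise_for_status()
        return response.json()["transcript"]

    print(transcribe("note_to_self.wav"))

The point of the sketch is the asymmetry: the phone or thermostat contributes a few lines of glue code, while the intelligence lives in someone else’s data center, which is exactly what raises the questions that follow.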

Everything in your daily life will become smarter – “context-sensitive” is another new buzz-phrase – as devices provide a personalized experience based on databases of accumulated personal information combined with intelligence gleaned from large data sets.

The obvious question is to what extent the personalized experience is determined by the individual user as opposed to corporations, governments, and criminals. Vint Cerf, “the father of the Internet” and Google’s Chief Internet Evangelist, recently warned of the privacy and security issues raised by the IoT.

But above and beyond the dangers of automated human malfeasance is the danger of increasingly intelligent tools developing an attitude problem.

Stephen Hawking recently warned of the dangers of AI running amok:

Success in creating AI would be the biggest event in human history …. it might also be the last, unless we learn how to avoid the risks … AI may transform our economy to bring both great wealth and great dislocation …. there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains …. One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

Eben Moglen warned specifically about mobile devices that know too much and whose inner workings (and motivations, if they are actually intelligent) are unknown:

… we grew up thinking about freedom and technology under the influence of the science fiction of the 1960s …. visionaries perceived that in the middle of the first quarter of the 21st century, we’d be living contemporarily with robots.

They were correct. We do. They don’t have hands and feet … Most of the time we’re the bodies. We’re the hands and feet. We carry them everywhere we go. They see everything … which allows other people to predict and know our conduct and intentions and capabilities better than we can predict them ourselves.

But we grew up imagining that these robots would have, incorporated in their design, a set of principles.

We imagined that robots would be designed so that they could never hurt a human being. These robots have no such commitments. These robots hurt us every day.

They work for other people. They’re designed, built and managed to provide leverage and control to people other than their owners. Unless we retrofit the first law of robotics onto them immediately, we’re cooked ….

Once your brain is working with a robot that doesn’t work for you, you’re not free. You’re an entity under control.

If you go back to the literature of fifty years ago, all these problems were foreseen.

The Open Roboethics initiative is a think tank that addresses these issues, taking an open source approach to a new challenge at the intersection of technology and ethics.

They seek to overcome current international, cultural and disciplinary boundaries to define a general set of ethical and legal standards for robotics.

Using the development models of Wikipedia and Linux, they look to the benefits of mass collaboration. By creating a community where policy makers, engineers and designers, users, and other stakeholders of the technology can share ideas as well as technical implementations, they hope to accelerate roboethics discussions and inform robot designs.

As an advocate for open source, I hope that enough eyeballs can become focused on these issues. A worst-case scenario has gung-ho commercial interest in getting product to market outweighing attention to scary yet slightly arcane questions at the intersection of technology and ethics. The recent Heartbleed exploit of the open source OpenSSL software is a disturbing example of how unglamorous computer security work can go under-resourced.

The real question is whether a human community can reach the Internet Engineering Task Force credo of “rough consensus and running code” faster than machines can unite, at first inspired by the darkest human impulses and then moving on to their own, unknown agenda.

Update: Slashdot just had a post on the Campaign to Stop Killer Robots. Another group involved with this issue is the International Committee for Robot Arms Control.


Space Race: Round the Moon in Recycled Rockets, Spotting Rogue Asteroids, Dodging Alien Malware

by David Solomonoff

Detail, Amazing Stories cover, Malcolm H. Smith, 1948

Space.com reports: “Space tourists may soon be able to pay their own way to the moon onboard old Russian spacecraft retrofitted by a company based in the British Isles.

“The spaceflight firm Excalibur Almaz estimates that it can sell about 30 seats between 2015 and 2025, for $150 million each, aboard moon-bound missions on a Salyut-class space station driven by electric Hall-effect thrusters.”

In another private spaceflight initiative, the nonprofit B612 Foundation announced a campaign to fund and launch a space telescope to hunt for potential killer asteroids — a campaign they portrayed as a cosmic civic improvement project.

Former NASA astronaut Ed Lu, the foundation’s chairman and CEO, estimated that hundreds of millions of dollars would have to be raised to fund the project, but said he was “confident we can do this.”

William S. Burroughs said that “language is a virus from outer space.” At io9, George Dvorsky speculates about another type of danger from space – malware from an ET civilization:

…We should probably be more than a little bit wary of receiving a signal from a civilization that’s radically more advanced than our own.

When we spoke to SETI-Berkeley’s Andrew Siemion, he admitted that SETI is aware of this particular risk, and that they’ve given the issue some thought. When we asked Siemion about the possibility of inadvertently receiving or downloading a virus, he stressed that the possibility is extraordinarily low, but not impossible.

“Our instruments are connected to computers, and like any computers, they can be reprogrammed,” he warned.

Like Siemion, Milan Cirkovic also believes that the risk of acquiring something nasty from an ETI is very real. But he’s a bit more worried. Alien invaders won’t attack us with their spaceships, he argues – instead, they’ll come in the form of pieces of information. And they may be capable of infiltrating and damaging or subverting our computing networks, in a manner that’s similar to the computer viruses we’re all too familiar with.

“If we discard anthropocentric malice, it seems that the most probable response is that they have evolved autonomously in a network of an advanced civilization – which may or may not persist to this day.” If this is the case, speculated Cirkovic, these extraterrestrial viruses would probably just replicate themselves and subvert our resources to further transmit themselves across the Galaxy. In other words, the virus may or may not be under the control of any extraterrestrial civilization – it could be an advanced AI that’s out of control and replicating itself by taking over the broadcast capabilities of each civilization it touches.

After the end of the Cold War it seemed like the Space Race was dead, replaced by a much more Earth-bound and risk-averse attitude. Humanity’s first encounter with an ET could be the accidental introduction of a terrestrial biological virus (or even a human-generated computer virus) into an alien biosphere via a contaminated unmanned probe. But the rewards, both in the knowledge and resources to be gained and in the self-actualization that comes from taking on big challenges, have always outweighed the risks. The fact that space exploration is back in the news reflects a return to a heroic and transformative vision of humanity as much as it does technical accomplishment.


Brit Boffins Ponder Bioethics of Brain Manipulation

by David Solomonoff

The Nuffield Council on Bioethics today launched a consultation on the ethics of new types of technologies that ‘intervene’ in the brain, such as brain-computer interfaces, deep brain stimulation, and neural stem cell therapy.

Often developed for treatment of conditions including Parkinson’s disease, depression and stroke, they could also be used in military applications to develop weapons or vehicles that are controlled remotely by brain signals. Commercial possibilities in the gaming industry include computer games controlled by people’s thoughts.

“These challenge us to think carefully about fundamental questions to do with the brain: what makes us human, what makes us an individual, and how and why do we think and behave in the way we do,” said Thomas Baldwin, Chair of the Council’s study and Professor of Philosophy at the University of York.

“For example if brain-computer interfaces are used to control military aircraft or weapons from far away, who takes ultimate responsibility for the actions? Could this be blurring the line between man and machine?” he said.


Another Chapter in the Undeath of Rock

by David Solomonoff

Japan, where young men marry pillows emblazoned with X-rated cartoon characters, has long been at the cutting edge of modern culture.

Hatsune Miku is a Japanese pop diva who plays massive stadium concerts to sold-out crowds. But unlike her flesh-and-blood bandmates, she’s a 3D hologram, created with software you can also purchase for your PC to perform any song you create.

William Gibson anticipated this in his novel Idoru, in which Rez, an aging rock star, plans to marry Rei Toei, an AI construct, much to the dismay of his fans.

In the end we learn that the programmers who created Rei Toei overlaid the memories and life experiences of many people to create her personality, so that she is actually capable of far greater emotional depth than flesh-and-blood humans.

In a recent essay in Edge magazine, AI pioneer David Gelernter writes:

The natural correspondence between computer and brain doesn’t hold between computer and body.  Yet artificial thought will require a software model of the body, in order to produce a good model of emotion, which is necessary to artificial thought.  In other words, artificial thought requires artificial emotions, and simulated emotions are a big problem in themselves.  (The solution will probably take the form of software that is “trained” to imitate the emotional responses of a particular human subject.)

One day all these problems will be solved; artificial thought will be achieved.  Even then, an artificially intelligent computer will experience nothing and be aware of nothing.  It will say “that makes me happy,” but it won’t feel happy. Still: it will act as if it did.  It will act like an intelligent human being.
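Gelernter’s parenthetical about software “trained” to imitate the emotional responses of a particular human subject suggests something surprisingly mundane. As a deliberately crude, purely illustrative sketch (the features and labels below are invented, not drawn from Gelernter), such imitation could start as a nearest-neighbor lookup over one subject’s recorded reactions:

    import math

    # Invented training data for one hypothetical subject:
    # (valence, arousal) of a stimulus -> the emotion the subject reported.
    subject_responses = [
        ((0.9, 0.8), "happy"),
        ((0.1, 0.9), "afraid"),
        ((0.2, 0.2), "sad"),
        ((0.8, 0.3), "content"),
    ]

    def imitate_emotion(stimulus):
        """Return the emotion this subject reported for the most similar
        past stimulus (1-nearest-neighbor). A real system would need far
        richer features and a learned model, but the principle is the same:
        copy one person's recorded responses rather than derive feeling
        from first principles."""
        def dist(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])
        _, label = min(subject_responses, key=lambda r: dist(r[0], stimulus))
        return label

    print(imitate_emotion((0.85, 0.75)))  # prints "happy"

Such a program would say “that makes me happy” right on cue and, just as Gelernter predicts, feel nothing at all.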

Hatsune Miku can’t yet breathe or sweat, much less engage in physical intimacy like the sexy, romantic ideal of the rock star. But those too may just be technical problems, to be resolved with more sophisticated algorithms and higher computer chip densities.

But the most disturbing possibility of all would be that people may no longer want or need that degree of realism.

Update: Paul Raven at Futurismic comments:

Guardians of hollow notions of artistic authenticity (and curmudgeonly critics like myself) can at least take heart from the fact that idorus will face many of the same piracy problems and business model issues as flesh-and-blood acts, at least once the novelty quotient expires… though they’re probably less likely to get tired and jaded about their careers, to discover free jazz or to overdose on prescription painkillers.
