Archive for Cybersecurity

Cognition as a Service: Can next-gen creepiness be countered with crowd-sourced ethics?

by David Solomonoff

Now that marketers use cloud computing to offer everything as a service – infrastructure as a service, platform as a service, and software as a service – what’s left?

Cognitive computing, of course.

Cognition as a service (CaaS) is the next buzzword you’ll be hearing. Going from the top of the stack to directly inside the head, AI in the cloud will power mobile and embedded devices to do things they lack the on-board capabilities for, such as speech recognition, image recognition and natural language processing (NLP). Apple’s Siri cloud-based voice recognition was one of the first out of the gate, but a stampede has joined the fray, including Wolfram Alpha, IBM’s Watson, Google Now and Cortana, as well as newer players like Ginger, ReKognition, and Jetlore.

Companies want to know more about their customers, business partners, competitors and employees – as do governments about their citizens and cybercriminals about their potential victims. To achieve that goal, the cloud will connect the Internet of Things (IoT) via machine-to-machine (M2M) communications.

The cognitive powers required will be embedded in operating systems so that apps can easily be developed by accessing the desired functionality through an API rather than requiring each developer to reinvent the wheel.
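In practice that means an app offloads the hard cognitive work to a remote service with a few lines of code. The sketch below shows the shape of such an exchange; the endpoint, request schema and response fields are all hypothetical, since every CaaS provider defines its own API:

```python
import json
import urllib.request

# Hypothetical endpoint and schema, for illustration only --
# real CaaS providers each publish their own API.
API_URL = "https://api.example-caas.invalid/v1/nlp/analyze"

def build_request(text, api_key):
    """Package text for a cloud NLP service instead of processing it on-device."""
    payload = json.dumps({"text": text, "features": ["sentiment", "entities"]})
    return urllib.request.Request(
        API_URL,
        data=payload.encode("utf-8"),
        headers={"Authorization": "Bearer " + api_key,
                 "Content-Type": "application/json"},
    )

def parse_response(body):
    """Unpack the service's JSON answer into plain Python values."""
    result = json.loads(body)
    return result["sentiment"], result["entities"]

# The shape of the exchange, without touching the network:
req = build_request("Siri misheard me again", "demo-key")
sentiment, entities = parse_response('{"sentiment": "negative", "entities": ["Siri"]}')
```

Note that the raw text leaves the device: the privacy questions discussed below follow directly from this architecture.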

Everything in your daily life will become smarter – “context-sensitive” is another new buzz-phrase – as devices provide a personalized experience based on databases of accumulated personal information combined with intelligence gleaned from large data sets.

The obvious question is to what extent the personalized experience is determined by the individual user as opposed to corporations, governments and criminals. Vint Cerf, “the father of the Internet” and Google’s Chief Internet Evangelist, recently warned of the privacy and security issues raised by the IoT.

But above and beyond the dangers of automated human malfeasance is the danger of increasingly intelligent tools developing an attitude problem.

Stephen Hawking recently warned of the dangers of AI running amok:

Success in creating AI would be the biggest event in human history …. it might also be the last, unless we learn how to avoid the risks … AI may transform our economy to bring both great wealth and great dislocation …. there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains …. One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

Eben Moglen warned specifically about mobile devices that know too much and whose inner workings (and motivations, if they are actually intelligent) are unknown:

… we grew up thinking about freedom and technology under the influence of the science fiction of the 1960s …. visionaries perceived that in the middle of the first quarter of the 21st century, we’d be living contemporarily with robots.

They were correct. We do. They don’t have hands and feet … Most of the time we’re the bodies. We’re the hands and feet. We carry them everywhere we go. They see everything … which allows other people to predict and know our conduct and intentions and capabilities better than we can predict them ourselves.

But we grew up imagining that these robots would have, incorporated in their design, a set of principles.

We imagined that robots would be designed so that they could never hurt a human being. These robots have no such commitments. These robots hurt us every day.

They work for other people. They’re designed, built and managed to provide leverage and control to people other than their owners. Unless we retrofit the first law of robotics onto them immediately, we’re cooked ….

Once your brain is working with a robot that doesn’t work for you, you’re not free. You’re an entity under control.

If you go back to the literature of fifty years ago, all these problems were foreseen.

The Open Roboethics initiative is a think tank that takes an open source approach to this new challenge at the intersection of technology and ethics.

They seek to overcome current international, cultural and disciplinary boundaries to define a general set of ethical and legal standards for robotics.

Using the development models of Wikipedia and Linux, they look to the benefits of mass collaboration. By creating a community where policy makers, engineers, designers, users and other stakeholders of the technology can share ideas as well as technical implementations, they hope to accelerate roboethics discussions and inform robot designs.

As an advocate for open source, I hope that enough eyeballs can become focused on these issues. A worst-case scenario has gung-ho commercial interest in getting product to market outweighing eyeballs focused on scary yet slightly arcane issues at the intersection of technology and ethics. The recent Heartbleed exploit of the open source OpenSSL software is a disturbing example of the way non-sexy computer security issues can be under-resourced.

The real question is whether a human community can reach the Internet Engineering Task Force credo of “rough consensus and running code” faster than machines can unite – at first inspired by the darkest human impulses, and then moving on to their own, unknown agenda.

Update: Slashdot just had a post on the Campaign to Stop Killer Robots. Another group involved with this issue is the International Committee for Robot Arms Control.


Secure Cloud Computing: Virtualizing the FreedomBox

by David Solomonoff

In 2010 I asked Professor Eben Moglen to speak to the Internet Society of New York about software freedom, privacy and security in the context of cloud computing and social media. In his Freedom in the Cloud talk, he proposed the FreedomBox as a solution: a small inexpensive computer which would provide secure encrypted communications in a decentralized way to defeat data mining and surveillance by governments and large corporations. Having physical control and isolating the hardware can be crucial to maintaining computer security which is why data centers are kept under lock and key. Each FreedomBox user would physically possess their own machine.

The U.S. National Institute for Standards and Technology (NIST) defines cloud computing (PDF with full definition) as “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”

Cloud computing, for all its advantages in terms of flexibility and scalability, has been fundamentally insecure. While the technology exists to secure information while it is being stored and while it is in transit, computers must process information in an unencrypted form. This means that a rogue systems administrator, malicious hacker or government can extract information from the system while it is being processed.

This has hindered adoption of cloud computing services by large enterprises, except when they maintain a private cloud in their own facilities.

Homomorphic encryption allows data to be processed in an encrypted form so that only the end user can access it in a readable form. So far it has been too demanding for normal computers to handle. In 2012 I invited Shai Halevi, a cryptography researcher at IBM, to discuss work he was doing in this area. He was able to execute some basic functions slowly with specialized hardware but the technology was not ready for general use.
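The core idea can be demonstrated with the Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields a ciphertext of the sum, so a server can add numbers it cannot read. This toy Python version (not Halevi’s system) uses tiny primes for demonstration only and offers no real security:

```python
import math
import random

def lcm(a, b):
    return a * b // math.gcd(a, b)

def keygen(p, q):
    """Paillier keys from two primes; g = n + 1 simplifies decryption."""
    n = p * q
    lam = lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)            # valid because g = n + 1
    return (n, n + 1), (lam, mu)    # public, private

def encrypt(pub, m):
    n, g = pub
    n2 = n * n
    while True:                     # random r coprime to n blinds the ciphertext
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    n2 = n * n
    x = pow(c, lam, n2)
    return ((x - 1) // n) * mu % n

pub, priv = keygen(17, 19)          # toy primes -- never use sizes like this
c1, c2 = encrypt(pub, 5), encrypt(pub, 7)
c_sum = (c1 * c2) % (pub[0] ** 2)   # the server adds without decrypting
assert decrypt(pub, priv, c_sum) == 12
```

Fully homomorphic schemes of the kind Halevi worked on support arbitrary computation, not just addition, which is why they remain so much more demanding.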

Recently researchers at MIT have made breakthroughs that promise to bring homomorphic encryption to the mainstream, finally making secure cloud computing possible.

Mylar is a platform for building secure web applications.

Mylar stores only encrypted data on the server, and decrypts data only in users’ browsers. Beyond just encrypting each user’s data with a user key, Mylar addresses three other security issues:

  • It is a secure multi-user system – it can perform keyword search over encrypted documents, even if the documents are encrypted with different keys owned by different users

  • Mylar allows users to share keys and data securely in the presence of an active adversary

  • Mylar ensures that client-side application code is authentic, even if the server is malicious

Results with a prototype of Mylar built on top of the Meteor framework are promising: porting 6 applications required changing just 35 lines of code on average, and the performance overheads are modest, amounting to a 17% throughput loss and a 50 msec latency increase for sending a message in a chat application.
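The first of those properties, searching data the server cannot read, can be sketched with deterministic keyed hashes: the client derives an opaque token per word, and the server matches tokens without ever seeing plaintext. This is a minimal single-key sketch, not Mylar’s actual scheme – Mylar additionally handles documents encrypted under different users’ keys:

```python
import hashlib
import hmac

class SearchableIndex:
    """Server side: stores and matches opaque tokens, never plaintext words."""
    def __init__(self):
        self.index = {}                      # doc_id -> set of word tokens

    def add(self, doc_id, tokens):
        self.index[doc_id] = set(tokens)

    def search(self, token):
        return [d for d, toks in self.index.items() if token in toks]

def token(key, word):
    # Deterministic keyed hash: only the key holder can derive the token,
    # so the server learns which docs match but not what the word is.
    return hmac.new(key, word.lower().encode(), hashlib.sha256).hexdigest()

key = b"user-secret-key"                     # stays on the client
server = SearchableIndex()
server.add("doc1", {token(key, w) for w in "the cloud is insecure".split()})
server.add("doc2", {token(key, w) for w in "encrypt the cloud".split()})

assert sorted(server.search(token(key, "cloud"))) == ["doc1", "doc2"]
```

Even this toy version leaks which documents share a search term; real systems like Mylar spend much of their design effort limiting exactly that kind of leakage.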

To further secure a web app in the cloud, an encrypted distributed filesystem such as Tahoe-LAFS can be used. It distributes data across multiple servers so that even if some of the servers fail or are taken over by an attacker, the entire filesystem continues to function correctly, preserving privacy and security.
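Tahoe-LAFS achieves this with Reed-Solomon erasure coding (any k of N servers suffice) layered on encryption. A toy 2-of-3 parity split, the same idea RAID 5 uses, shows how any two of three servers can rebuild the data:

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def split_2_of_3(data):
    """Split data into three shares; any two reconstruct it (toy parity scheme)."""
    if len(data) % 2:
        data += b"\x00"        # pad to even length (a real system records the true length)
    half = len(data) // 2
    d1, d2 = data[:half], data[half:]
    return {1: d1, 2: d2, 3: xor_bytes(d1, d2)}   # share 3 is the parity

def recover(shares):
    """Rebuild the original from any two of the three shares."""
    if 1 in shares and 2 in shares:
        d1, d2 = shares[1], shares[2]
    elif 1 in shares:                              # have share 1 and the parity
        d1 = shares[1]
        d2 = xor_bytes(d1, shares[3])
    else:                                          # have share 2 and the parity
        d2 = shares[2]
        d1 = xor_bytes(d2, shares[3])
    return d1 + d2

shares = split_2_of_3(b"FreedomBox")
assert all(recover({i: shares[i], j: shares[j]}) == b"FreedomBox"
           for i, j in [(1, 2), (1, 3), (2, 3)])
```

In Tahoe-LAFS the shares are additionally encrypted before distribution, so a server that is compromised outright still learns nothing about the plaintext.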

By combining these two technologies, data can be encrypted at every point until it is accessed by its legitimate owner, combining privacy and security with the flexibility and scalability of cloud computing.

No longer confined behind a locked down private data center or hidden under the end user’s bed, a virtual FreedomBox can finally escape to the clouds.


Heartbleed bug not a technical problem – it’s an awareness and support problem

by David Solomonoff

While free/open source software (FOSS) may be a better development model and, as Richard Stallman argues, an ethical one, it doesn’t guarantee good software by itself. Software development, like any other human endeavor, depends on the skills, resources and motivations of the people doing it.

FOSS advocates argue that the inner workings of technology should be open to inspection and modification by their users.

While the Heartbleed bug was a technical problem that is being fixed, the real problem is the lack of awareness of, and interest in, the back-end technologies that we rely on.

Encryption is now critical Internet infrastructure, and unfortunately OpenSSL has not been allocated the resources it needs. That two-thirds of websites relied on security tools developed and maintained by four people, only one of them a paid full-time employee, is clearly a formula for disaster.

However the prospect of having a government maintain this type of infrastructure in the wake of the NSA spying scandals (as well as allegations that they were aware of the bug and exploited it) is not likely to gain a lot of traction.

FOSS uses a variety of business models but the reliance on volunteers for critical infrastructure may have hit its limit.

In the end the solution to security problems like Heartbleed may be one of funding and awareness rather than fixing a specific programming error.

All too often there has been confusion as to whether the “free” in FOSS refers to “free” speech or to “free beer”.

It looks like the bar tab has come due.


Space Race: Round the Moon in Recycled Rockets, Spotting Rogue Asteroids, Dodging Alien Malware

by David Solomonoff

Detail, Amazing Stories cover, Malcolm H. Smith, 1948

Space.com reports “Space tourists may soon be able to pay their own way to the moon onboard old Russian spacecraft retrofitted by a company based in the British Isles.

“The spaceflight firm Excalibur Almaz estimates that it can sell about 30 seats between 2015 and 2025, for $150 million each, aboard moon-bound missions on a Salyut-class space station driven by electric Hall-effect thrusters.”

In another private spaceflight initiative, the nonprofit B612 Foundation announced a campaign to fund and launch a space telescope to hunt for potential killer asteroids — a campaign they portrayed as a cosmic civic improvement project.

Former NASA astronaut Ed Lu, the foundation’s chairman and CEO, estimated that hundreds of millions of dollars would have to be raised to fund the project, but said he was “confident we can do this.”

William S. Burroughs said that “language is a virus from outer space.” At io9, George Dvorsky speculates about another type of danger from space – malware from an ET civilization:

…We should probably be more than a little bit wary of receiving a signal from a civilization that’s radically more advanced than our own.

When we spoke to SETI-Berkeley’s Andrew Siemion, he admitted that SETI is aware of this particular risk, and that they’ve given the issue some thought. When we asked Siemion about the possibility of inadvertently receiving or downloading a virus, he stressed that the possibility is extraordinarily low, but not impossible.

“Our instruments are connected to computers, and like any computers, they can be reprogrammed,” he warned.

Like Siemion, Milan Cirkovic also believes that the risk of acquiring something nasty from an ETI is very real. But he’s a bit more worried. Alien invaders won’t attack us with their spaceships, he argues – instead, they’ll come in the form of pieces of information. And they may be capable of infiltrating and damaging or subverting our computing networks, in a manner that’s similar to the computer viruses we’re all too familiar with.

“If we discard anthropocentric malice, it seems that the most probable response is that they have evolved autonomously in a network of an advanced civilization – which may or may not persist to this day.” If this is the case, speculated Cirkovic, these extraterrestrial viruses would probably just replicate themselves and subvert our resources to further transmit themselves across the Galaxy. In other words, the virus may or may not be under the control of any extraterrestrial civilization – it could be an advanced AI that’s out of control and replicating itself by taking over the broadcast capabilities of each civilization it touches.

After the end of the Cold War it seemed like the Space Race was dead, replaced by a much more Earth-bound and risk-averse attitude. Humanity’s first encounter with an ET could be the accidental introduction of a terrestrial biological virus into an alien biosphere via a contaminated unmanned probe – or even a human-generated computer virus. But the rewards have always outweighed the risks – both in the knowledge and resources to be gained and in the self-actualization that comes from taking on big challenges. The fact that space exploration is back in the news reflects a return to a heroic and transformative vision of humanity as much as it does technical accomplishment.


Network security pros say they won’t bet they can prevent compromises

by David Solomonoff

A survey of IT professionals found that more than two-thirds are only somewhat confident, or not at all confident, that an unauthorized person could not gain access to their networks. The top reasons respondents believe their networks may be vulnerable are malware, use of personal devices to access company resources, the sheer volume of attacks, and widespread use of remote network access.

via Dark Reading

