Archive for Free/Open Source Software

Cognition as a Service: Can next-gen creepiness be countered with crowd-sourced ethics?

by David Solomonoff

Now that marketers use cloud computing to offer everything as a service – infrastructure as a service, platform as a service, software as a service – what’s left?

Cognitive computing, of course.

Cognition as a service (CaaS) is the next buzzword you’ll be hearing. Going from the top of the stack to directly inside the head, AI in the cloud will power mobile and embedded devices to do things they lack the on-board capability for, such as speech recognition, image recognition and natural language processing (NLP). Apple’s Siri, with its cloud-based voice recognition, was one of the first out of the gate, but a stampede is following: Wolfram Alpha, IBM’s Watson, Google Now and Cortana, as well as newer players like Ginger, ReKognition, and Jetlore.

Companies want to know more about their customers, business partners, competitors and employees – as do governments about their citizens, and cybercriminals about their potential victims. To achieve that goal, the cloud will connect the Internet of Things (IoT) via machine-to-machine (M2M) communications.

The cognitive powers required will be embedded in operating systems so that apps can easily be developed by accessing the desired functionality through an API rather than requiring each developer to reinvent the wheel.
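
To make this concrete, here is a minimal sketch of what consuming such a service might look like. The endpoint, API key and response field are all hypothetical stand-ins for whatever CaaS vendor a developer happens to pick; only the requests library is real.

    # Illustrative only: the host name, URL path and "transcript" field are
    # invented for this sketch and do not belong to any real vendor's API.
    import requests

    def transcribe(audio_path: str, api_key: str) -> str:
        """Send an audio clip to a (hypothetical) cloud speech-to-text API."""
        with open(audio_path, "rb") as f:
            response = requests.post(
                "https://api.example-cognition.com/v1/speech:recognize",
                headers={"Authorization": f"Bearer {api_key}"},
                data=f.read(),
            )
        response.raise_for_status()
        return response.json()["transcript"]   # assumed response format

    if __name__ == "__main__":
        print(transcribe("meeting.wav", api_key="YOUR_KEY_HERE"))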

Everything in your daily life will become smarter – “context-sensitive” is another new buzz-phrase – as devices provide a personalized experience based on databases of accumulated personal information combined with intelligence gleaned from large data sets.

The obvious question is to what extent the personalized experience is determined by the individual user as opposed to corporations, governments and criminals. Vint Cerf, “the father of the Internet” and Google’s Chief Internet Evangelist, recently warned of the privacy and security issues raised by the IoT.

But above and beyond the dangers of automated human malfeasance is the danger of increasingly intelligent tools developing an attitude problem.

Stephen Hawking recently warned of the dangers of AI running amok:

Success in creating AI would be the biggest event in human history …. it might also be the last, unless we learn how to avoid the risks … AI may transform our economy to bring both great wealth and great dislocation …. there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains …. One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

Eben Moglen warned specifically about mobile devices that know too much and whose inner workings (and motivations, if they are actually intelligent) are unknown:

… we grew up thinking about freedom and technology under the influence of the science fiction of the 1960s …. visionaries perceived that in the middle of the first quarter of the 21st century, we’d be living contemporarily with robots.

They were correct. We do. They don’t have hands and feet … Most of the time we’re the bodies. We’re the hands and feet. We carry them everywhere we go. They see everything … which allows other people to predict and know our conduct and intentions and capabilities better than we can predict them ourselves.

But we grew up imagining that these robots would have, incorporated in their design, a set of principles.

We imagined that robots would be designed so that they could never hurt a human being. These robots have no such commitments. These robots hurt us every day.

They work for other people. They’re designed, built and managed to provide leverage and control to people other than their owners. Unless we retrofit the first law of robotics onto them immediately, we’re cooked ….

Once your brain is working with a robot that doesn’t work for you, you’re not free. You’re an entity under control.

If you go back to the literature of fifty years ago, all these problems were foreseen.

The Open Roboethics initiative is a think tank that takes an open source approach to this new challenge at the intersection of technology and ethics.

They seek to overcome current international, cultural and disciplinary boundaries to define a general set of ethical and legal standards for robotics.

Drawing on the development models of Wikipedia and Linux, they look to the benefits of mass collaboration. By creating a community in which policy makers, engineers, designers, users and other stakeholders can share ideas as well as technical implementations, they hope to accelerate roboethics discussions and inform robot designs.

As an advocate for open source I hope that enough eyeballs can become focused on these issues. A worst-case scenario has gung-ho commercial interest in getting product to market outweighing the eyeballs focused on scary yet slightly arcane issues at the intersection of technology and ethics. The recent Heartbleed exploit of the open source OpenSSL software is a disturbing example of how unglamorous computer security work can be under-resourced.

The real question is whether a human community can reach the Internet Engineering Task Force credo of “rough consensus and running code” faster than the machines can unite – at first inspired by the darkest human impulses, and later pursuing their own, unknown agenda.

Update: Slashdot just had a post on the Campaign to Stop Killer Robots. Another group involved with this issue is the International Committee for Robot Arms Control.


Secure Cloud Computing: Virtualizing the FreedomBox

by David Solomonoff

In 2010 I asked Professor Eben Moglen to speak to the Internet Society of New York about software freedom, privacy and security in the context of cloud computing and social media. In his Freedom in the Cloud talk, he proposed the FreedomBox as a solution: a small, inexpensive computer that would provide secure, encrypted communications in a decentralized way to defeat data mining and surveillance by governments and large corporations. Physical control and isolation of the hardware can be crucial to computer security, which is why data centers are kept under lock and key. Each FreedomBox user would physically possess their own machine.

The U.S. National Institute for Standards and Technology (NIST) defines cloud computing (PDF with full definition) as “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”

Cloud computing, for all its advantages in flexibility and scalability, has been fundamentally insecure. While the technology exists to protect information while it is stored and while it is in transit, computers must process it in unencrypted form. This means that a rogue systems administrator, malicious hacker or government can extract information from the system while it is being processed.

This has hindered the adoption of cloud computing services by large enterprises, except where they maintain a private cloud in their own facilities.

Homomorphic encryption allows data to be processed in encrypted form, so that only the end user can ever access it in readable form. So far it has been too computationally demanding for ordinary computers to handle. In 2012 I invited Shai Halevi, a cryptography researcher at IBM, to discuss his work in this area. He was able to execute some basic functions, slowly, with specialized hardware, but the technology was not ready for general use.
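
To get a feel for what “computing on ciphertext” means, consider a toy example: textbook (unpadded) RSA happens to be multiplicatively homomorphic, so a server can multiply two encrypted numbers without ever seeing them. This is only a sketch of the principle – the tiny key below offers no security, and the fully homomorphic schemes Halevi works on support far richer computation than a single multiplication.

    # Toy demo of a homomorphic property: with textbook RSA,
    # Enc(a) * Enc(b) mod n == Enc(a * b mod n).
    # The 16-bit modulus is for illustration only and offers no security.

    p, q = 61, 53          # toy primes
    n = p * q              # 3233
    e, d = 17, 2753        # public / private exponents (e*d = 1 mod phi(n))

    def encrypt(m: int) -> int:
        return pow(m, e, n)

    def decrypt(c: int) -> int:
        return pow(c, d, n)

    a, b = 7, 12
    c = (encrypt(a) * encrypt(b)) % n   # "computed in the cloud" on ciphertexts
    assert decrypt(c) == (a * b) % n    # the data owner decrypts the product
    print(decrypt(c))                   # 84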

Recently researchers at MIT have made breakthroughs that promise to bring this kind of encrypted computing to the mainstream, finally making secure cloud computing possible.

Mylar is a platform for building secure web applications.

Mylar stores only encrypted data on the server, and decrypts data only in users’ browsers. Beyond just encrypting each user’s data with a user key, Mylar addresses three other security issues:

  • It is a secure multi-user system – it can perform keyword search over encrypted documents, even if the documents are encrypted with different keys owned by different users

  • Mylar allows users to share keys and data securely in the presence of an active adversary

  • Mylar ensures that client-side application code is authentic, even if the server is malicious

Results with a prototype of Mylar built on top of the Meteor framework are promising: porting 6 applications required changing just 35 lines of code on average, and the performance overheads are modest, amounting to a 17% throughput loss and a 50 msec latency increase for sending a message in a chat application.
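
The core division of labor – the server only ever holds ciphertext, and decryption happens on the user’s own machine – can be sketched in a few lines. This is not Mylar’s actual code (Mylar is JavaScript running on Meteor, with multi-key searchable encryption); it is a minimal Python illustration of the principle, using the Fernet recipe from the cryptography package (pip install cryptography).

    # Minimal sketch of client-side encryption: the server stores only
    # ciphertext, while keys and plaintext never leave the user's device.
    from cryptography.fernet import Fernet

    server_store = {}                      # stands in for an untrusted server

    def client_save(doc_id: str, plaintext: str, key: bytes) -> None:
        server_store[doc_id] = Fernet(key).encrypt(plaintext.encode())

    def client_load(doc_id: str, key: bytes) -> str:
        return Fernet(key).decrypt(server_store[doc_id]).decode()

    key = Fernet.generate_key()            # generated and kept on the client
    client_save("note-1", "meet at 6pm", key)
    print(server_store["note-1"][:20])     # the server sees only ciphertext
    print(client_load("note-1", key))      # only the key holder can read it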

To further secure a web app in the cloud, an encrypted distributed filesystem such as Tahoe-LAFS can be used. It distributes data across multiple servers so that even if some of the servers fail or are taken over by an attacker, the entire filesystem continues to function correctly, preserving privacy and security.
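
A toy sketch of the underlying idea: split the (already encrypted) data into shares plus a parity share, place each share on a different server, and the file survives the loss of any one of them. Tahoe-LAFS itself uses configurable k-of-n erasure coding on top of encryption; the 2-of-3 XOR parity below is the simplest possible version of the same trick.

    # 2-of-3 parity sketch: any two of the three servers can rebuild the file.

    def split(data: bytes) -> dict:
        half = (len(data) + 1) // 2
        a, b = data[:half], data[half:].ljust(half, b"\0")
        parity = bytes(x ^ y for x, y in zip(a, b))
        return {"server1": a, "server2": b, "server3": parity}

    def recover(shares: dict, length: int) -> bytes:
        a, b, p = shares.get("server1"), shares.get("server2"), shares.get("server3")
        if a is None:
            a = bytes(x ^ y for x, y in zip(b, p))
        if b is None:
            b = bytes(x ^ y for x, y in zip(a, p))
        return (a + b)[:length]

    data = b"ciphertext from a virtual FreedomBox"
    shares = split(data)
    shares.pop("server2")                  # one server fails or is seized
    print(recover(shares, len(data)))      # the data is still intact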

By combining these two technologies, data can be encrypted at every point until it is accessed by its legitimate owner, combining privacy and security with the flexibility and scalability of cloud computing.

No longer confined to a locked-down private data center or hidden under the end user’s bed, a virtual FreedomBox can finally escape to the clouds.


Heartbleed bug not a technical problem – it’s an awareness and support problem

by David Solomonoff

While free/open source software (FOSS) may be a better development model – and, as Richard Stallman argues, an ethical one – it doesn’t guarantee good software by itself. Software development, like any other human endeavor, depends on the skills, resources and motivations of the people doing it.

FOSS advocates argue that the inner workings of technology should be open to inspection and modification by its users.

While the Heartbleed bug was a technical problem that is being fixed, the real problem is the lack of awareness of, and interest in, the back-end technologies we all rely on.

Encryption on the Internet is now critical infrastructure, and unfortunately, in the case of OpenSSL, it has not been allocated the resources it needs. That two thirds of websites relied on security software developed and maintained by four people, only one of them a paid full-time employee, is clearly a formula for disaster.

However, in the wake of the NSA spying scandals (and allegations that the agency knew about the bug and exploited it), the prospect of having a government maintain this type of infrastructure is not likely to gain much traction.

FOSS projects use a variety of business models, but reliance on volunteers for critical infrastructure may have hit its limit.

In the end, the solution to security problems like Heartbleed may be one of funding and awareness rather than fixing a specific programming error.

All too often there has been confusion as to whether the “free” in FOSS refers to “free” speech or to “free beer”.

It looks like the bar tab has come due.


Envisioning Occupy Wall Street as software, service

by David Solomonoff

The impact of the Occupy Wall Street movement goes far beyond a traditional protest around specific issues. The ability to rapidly respond to changing situations, a horizontal rather than vertical structure and an open source approach to developing new tools and strategies will be as significant in the long term – perhaps more so. The medium is definitely the message here.


In Forbes, E. D. Kain writes about how Occupy Wall Street protesters are engaging in a role reversal in which the surveilled are surveilling the surveillers:

If the pepper-spraying incident at UC Davis had happened before smart phones and video phones, it would have been the word of the protesters against the word of the police. If this had all happened before the internet and blogs and social media, it would have taken ages before the old media apparatus would have found the wherewithal to track down the truth and then disseminate that information.
Now the incident goes viral … Strangely, though, the police act as though these new realities don’t exist or don’t matter.

http://www.forbes.com/sites/erikkain/2011/11/19/maybe-its-time-to-occupy-the-police-state/


In The Atlantic, Alexis Madrigal suggests one of their biggest accomplishments has been to facilitate other protests in the same way a software interface allows programmers to access and re-purpose data on the Internet:

Metastatic, the protests have an organizational coherence that’s surprising for a movement with few actual leaders and almost no official institutions. Much of that can be traced to how Occupy Wall Street has functioned in catalyzing other protests. Local organizers can choose from the menu of options modeled in Zuccotti, and adapt them for local use. Occupy Wall Street was designed to be mined and recombined, not simply copied.
This idea crystallized for me yesterday when Jonathan Glick, a long-time digital journalist, tweeted, “I think #OWS was working better as an API than a destination site anyway.”
API is an acronym for Application Programming Interface.

What an API does, in essence, is make it easy for the information a service contains to be integrated with the wider Internet. So, to make the metaphor here clear, Occupy Wall Street today can be seen like the early days of Twitter.com. Nearly everyone accessed Twitter information through clients developed by people outside the Twitter HQ. These co-developers made Twitter vastly more useful by adding their own ideas to the basic functionality of the social network. These developers don’t have to take in all of OWS data or use all of the strategies developed at OWS. Instead, they can choose the most useful information streams for their own individual applications (i.e. occupations, memes, websites, essays, policy papers).

The metaphor turns out to reveal a useful way of thinking about the components that have gone into the protest.

http://www.theatlantic.com/technology/print/2011/11/a-guide-to-the-occupy-wall-street-api-or-why-the-nerdiest-way-to-think-about-ows-is-so-useful/248562/

John Robb examines their progress from the perspective of military strategist John Boyd:

The dynamic of Boyd’s strategy is to isolate your enemy across three essential vectors (physical, mental, and moral), while at the same time improving your connectivity across those same vectors. It’s very network centric for a pre-Internet theoretician.

Physical. No isolation was achieved. The physical connections of police forces remained intact. However, these incidents provided confirmation to protesters that physical filming/imaging of the protests is valuable. Given how compelling this media is, it will radically increase the professional media’s coverage of events AND increase the number of protesters recording incidents.

Mental. These incidents will cause confusion within police forces. If leaders (Mayors and college administrators) back down or vacillate over these tactics due to media pressure, it will confuse policemen in the field. In short, it will create uncertainty and doubt over what the rules of engagement actually are. In contrast, these media events have clarified how to turn police violence into useful tools for Occupy protesters.

Moral. This is the area of connection that was damaged the most. Most people watching these videos feel that this violence is both a) illegitimate and b) excessive.

http://globalguerrillas.typepad.com/globalguerrillas/2011/11/occupy-note-112011-boyd-pepper-spray-and-tools-of-compliance-ows.html

Following on Robb’s point, the videos also increase the moral liability of journalists and politicians who attack and denounce the movement.


Internet Pioneers Berners-Lee, Cerf, Strickling ask: “What Kind of Net Do You Want?”

by David Solomonoff

When the first message on the ARPANET (the predecessor of today’s Internet) was sent by UCLA programmer Charley Kline on October 29, 1969, the message text was the word “login”; only the letters “l” and “o” were transmitted before the system crashed.

Forty-two years later, the Internet is everywhere and rapidly becoming embedded in every device. Kevin Kelly sees the Net as evolving into a single “planetary computer” with “all the many gadgets we possess” as “windows into its core.” The Internet Society’s slogan is “The Internet is for everyone,” but Vint Cerf (who co-developed the TCP/IP network protocol that connects everything on the Net today) now prefers “The Internet is for everything”.

The world-wide adoption of a decentralized network that connects everything creates continuous technical, social and policy challenges that no one could have foreseen in 1969. Even as we take the Net for granted, the way we do the air we breathe, decisions that shape its future are being made by policy-makers, technologists and end-users.

The success of the Internet has had a great deal to do with the development of open standards – often by volunteers – in groups such as the Internet Engineering Task Force (IETF). Decisions in Working Groups (WG) of the IETF are reached by consensus on the group mailing list so that anyone active on that list can be part of the process.

The need to add capacity is a constant challenge. What balance of public and private funding and of regulation or deregulation is appropriate, and which types of infrastructure (centralized vs. decentralized; fiber, cable, wireless) warrant investment, are subjects of ongoing debate.

The Net has provided a platform for incredible innovation and economic growth. How to reward innovation and creativity while encouraging the widest dissemination of new content and technologies? How to encourage disruptive technologies while mitigating their potentially negative impacts?

Does there have to be a conflict between freedom and privacy on one hand and security on the other? How can users safely share personal information using social media which rely on the sale of their personal data as a business model? What legal and technical protections are necessary for businesses to securely move into the cloud?

Internet users have continuously influenced key technology innovations and policy decisions. But keeping them in the decision-making loop as they increasingly take the Net for granted presents an ongoing challenge.

On June 14, Internet pioneers Vint Cerf, Sir Tim Berners-Lee, inventor of the World Wide Web, and Lawrence E. Strickling, Assistant Secretary for Communications and Information, and Administrator, National Telecommunications and Information Administration (NTIA), will address these questions as keynote speakers for the INET Conference in New York City, sponsored by the Internet Society and the Internet Society of New York. [Disclaimer: As President of the Internet Society of New York I will deliver opening remarks.]

There will also be panels featuring industry leaders, members of civil society organizations, open source software advocates and government officials. The conference is open to the public although advance registration is required. It will also be streamed live.

Just as a democracy is never the rule of all the people, but rather of those who participate in the process, the Internet has evolved through the efforts of technologists and activists – many of whom have volunteered their time to develop open standards and open source software, and to advocate for an open Internet. It’s your call: What kind of Internet do you want?


Hackers fight for freedom with Net tech; ignore politics, psychology at their peril

by David Solomonoff

The temporary shutdown of Internet and other telecommunication services in Egypt, as well as similar interruptions in other Middle East countries experiencing large-scale protests and rebellions, has galvanized hackers and human rights activists as well as U.S. foreign policy makers. The consequences may not be what anyone expected.

The technologies for secure, private, fault tolerant communication via the Internet exist but have not yet been widely implemented or bundled together in a single, user-friendly system.

Internet pioneer Vint Cerf was asked in a recent interview whether there was a technical solution to a government shutdown of the Net. The Internet “is controllable by the government, [so] it’s possible to turn off the Internet,” he said. The solution, mesh networking, “can be done without benefit of things like routers provided by Internet Service providers.”

Mesh networking makes each device on a network capable of routing data to any other device, with the ability to rapidly change paths in the event of an interruption or blockage.
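
A rough sketch of that property, assuming a deliberately simplified graph model in which each device only knows its neighbors and any path will do (real mesh protocols such as OLSR or B.A.T.M.A.N. are considerably more sophisticated):

    # Toy illustration of mesh routing: every device can relay for every other,
    # and paths are recomputed when a node or link drops out.
    from collections import deque

    def find_route(links: dict, src: str, dst: str):
        """Breadth-first search for any path from src to dst."""
        frontier, seen = deque([[src]]), {src}
        while frontier:
            path = frontier.popleft()
            if path[-1] == dst:
                return path
            for nxt in links.get(path[-1], ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
        return None

    links = {"A": {"B", "C"}, "B": {"A", "D"}, "C": {"A", "D"}, "D": {"B", "C"}}
    print(find_route(links, "A", "D"))     # e.g. ['A', 'B', 'D']

    # A relay is shut down or seized; traffic reroutes through remaining peers.
    links.pop("B")
    for neighbors in links.values():
        neighbors.discard("B")
    print(find_route(links, "A", "D"))     # ['A', 'C', 'D']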

A current project of Cerf’s, the Interplanetary Internet, designed to overcome the delays and interruptions to communications during space exploration, could also be adapted to handle a partial shutdown of Net communications by an authoritarian government during a political crisis.

Eben Moglen, a Columbia law professor and software freedom advocate, first proposed the FreedomBox – a tiny device that could provide private, secure, fault-tolerant Internet access using mesh networking – at an Internet Society of New York event in February 2010. He has since founded the FreedomBox Foundation, has some early prototype software, and expects to have a fully working device available for under $100 within twelve months. Another project, Diaspora, was inspired by Moglen’s proposal and is developing a more privacy-friendly alternative to Facebook. The FreedomBox and Diaspora both use a decentralized, peer-to-peer model for improved security and to give the user more control.

On February 15, Hillary Clinton gave her second annual Net Freedom speech, which denounced the Egyptian government for its Net shutdown. The State Department now has a number of initiatives and grants for the development of Internet censorship circumvention technologies.

But governments often have different agendas and policies for different situations. Egyptian strongman Hosni Mubarak was viewed as a “force of moderation” before he became a “dictator” when the geopolitical winds shifted. As Clinton was making her speech, Wired reported that the FBI was pushing for surveillance backdoors in Web 2.0 tools, and an antiwar protester in Clinton’s audience was roughed up when he turned his back to her. Would he have been left unscathed if he had tweeted his protest?

Even with the best intentions, high-profile Internet freedom initiatives by nation-states can have unexpected consequences. Evgeny Morozov says of Clinton’s speeches:

Clinton went wrong from the outset by violating the first rule of promoting Internet freedom: Don’t talk about promoting Internet freedom.

The state of web freedom in countries like China, Iran, and Russia was far from perfect before Clinton’s initiative, but at least it was an issue independent of those countries’ fraught relations with the United States.


Today, foreign governments … are now seeking “information sovereignty” from American companies … Internet search, social networking, and even email are increasingly seen as strategic industries that need to be protected from foreign control.

The U.S. military, however, has developed open source software for secure, private communication on the Internet. Tor, a tool for private, encrypted communication developed by the Tor Project, is used by many dissidents in authoritarian countries, as well as by Wikileaks, and was originally sponsored by the U.S. Naval Research Laboratory.

But not every such project has been as successful. The Haystack program, designed to help Iranian dissidents, actually endangered them: due to flaws in its design, it was easily intercepted by the Iranian authorities. It received a huge amount of hype, but the developer, Austin Heap, refused to allow security experts to examine it. Nonetheless, the U.S. Treasury Department granted Heap an Office of Foreign Assets Control license to export the software to Iran, in effect endorsing it. By the time the software’s flaws became publicly known, the damage had been done.

Open source software advocate and cyberliberties activist Eric Raymond was also helping Iranian dissidents connect to the outside world at that time. He reflects:

… to protect your network, and yourself, you have to accept that you are going to have relatively little information about what your network partners are doing and what their capabilities are …. my rationally-chosen ignorance left me unable to form judgments about whether people in my network were lying to me. More subtly … it left me unable to form judgments about whether they were lying to themselves.

I don’t mean to excuse whatever lies Austin Heap may have told, but I do mean to suggest he may well have been his own first victim.

Open source software, where the inner workings of a program are available for public scrutiny, is essential when developing tools for secure communication in a highly insecure environment.

But open source is not a panacea. Take the case of OpenBSD, an open source operating system bundled with thousands of applications, which has been optimized for security by a team of the world’s best security experts. OpenBSD is sponsored by a nonprofit foundation and many of the programmers volunteer their time.

At one point the U.S. Defense Advanced Research Projects Agency (DARPA) gave OpenBSD a grant, then rescinded it when OpenBSD project leader Theo de Raadt made remarks critical of the Iraq war.

In December 2010, de Raadt received an email alleging that the FBI had paid some OpenBSD ex-developers to insert backdoors into the software. He was skeptical but immediately made the email public and invited an independent review of the relevant program code. A few bugs were fixed, but no evidence of a backdoor was found. So even though the allegations turned out to be false, they succeeded anyway – as an act of psychological warfare – by destroying trust in the OpenBSD project.

George Orwell wrote:

… ages in which the dominant weapon is expensive or difficult to make will tend to be ages of despotism, whereas when the dominant weapon is cheap and simple, the common people have a chance …. A complex weapon makes the strong stronger, while a simple weapon–so long as there is no answer to it– gives claws to the weak.

At first it would seem that a social networking service like Twitter, recently used by many protesters in the Middle East, would fit Orwell’s definition of a “simple weapon” that “gives claws to the weak”. But in fact the situation is much more ambiguous. Twitter is a for-profit corporation that must maintain large data centers and a complex infrastructure, and it is subject to many financial, legal and political pressures.

Internet freedom initiatives must be independent of political connotations, run on a decentralized infrastructure, and use technology that is subject to public review by security experts. Most importantly, users must have complete trust in the skills and integrity of the people providing those tools and services.

If they don’t, the cure could prove worse than the disease.

Note: Wikipedia has a good list of other anti-censorship software.

