Google's Chrome OS, Cloud-oriented Computing, and User Freedom; or The Emperor has No Clothes!

Google has recently announced plans to release a new open source operating system called Chrome OS, which they hope to make freely available by mid-2010. A quick review of write-ups about Google's announcement (like this piece by Miguel Helft and Ashlee Vance in the New York Times) reveals that many view the proposed Chrome OS as an innovative approach to desktop computing, since the idea is to make Google's Chrome browser the focus of the operating system. With few or no native applications to bog it down, Chrome OS would thus provide the user with quick access to the Internet, and nearly all computing tasks would take place through web applications such as Google Docs and Picasa.

It might at first seem counter-intuitive, but by giving away Chrome OS and their web services, Google will ultimately grow their business. Google makes its money from targeted advertising. Widespread usage of Chrome OS would mean that more people would be doing their computing in the cloud, which would in turn give Google the opportunity to serve more precisely targeted advertisements in greater quantity, and hence greater opportunity to increase revenue.

If Google's cloud-centric operating system catches on, it will almost certainly pose a challenge to businesses that currently sell traditional desktop operating systems, such as Microsoft and Apple. With an orientation toward native rather than web-based applications (e.g. Word, Windows Media Player/QuickTime, iPhoto, etc.), Windows and OS X stand in the way of Google making more money, since the more time one spends in those native applications, the less time one spends online using Google's web-based services. By offering Chrome OS for free, Google undercuts Windows, OS X, and their expensive software, driving consumers to use Google's services instead.

All this might sound like a big win for the consumer, since he or she will get to use quality applications on a purportedly more reliable and efficient operating system at no cost. Under any ordinary circumstance, who could argue with that? But herein lies the problem. This is no ordinary circumstance. We're now dealing with cloud-oriented computing, and with this, free comes at a high price.

Cloud computing can be quite useful. However, making an operating system completely dependent upon web services for its most basic functions poses certain dangers to the user. First, all one's computing becomes dependent on having an Internet connection, which means one must have an Internet service provider in order to utilize the system to its fullest potential. Google will likely further develop Google Gears, which currently allows the user to work with certain web-based applications offline, but it will probably never give web-based applications the same functionality one has with native applications. So for those who don't want to or can't afford to pay for an Internet connection (yes, I know, this is a small demographic in many nations), or for those who have no access to the Internet for an extended period of time, Chrome OS would appear to be practically useless. Even for those who do have an Internet connection, why would they want that cost to become an inherent part of their ability to use their computer?

Second, it isn't clear whether one will have the ability to write and run non-web-based applications on one's computer. Google may allow for such a feature, but it will probably be disabled by default, seriously restricted, or come at a price. These applications will compete for the user's time, which he or she would otherwise spend using the web-based applications that bring Google revenue. So the freedom of the user to write his or her own program and run it on his or her computer the way he or she sees fit will likely be restricted or taken away.

Finally, cloud-oriented computing means that one's private data will not (only?) be on one's personal hard drive, but will (also?) be sitting on the hard drive of some third-party server, meaning God-knows-who could have access to one's private information, doing God-knows-what with it. This raises a series of important related questions: is data in the cloud ultimately public? We're talking about a centralized storage location that contains very intimate details about the lives and dealings of billions of people the world over. This places unthinkable power in the hands of whoever owns that centralized storage location. If the information is there for another to access, in what sense is it still private information? Do the storage location owners only own the storage, or can they lay claim to the contents of the storage as well? What can they do with the information, and who or what is to stop them if they try to do something with it that they shouldn't? Certainly the private/public distinction begins to blur; at the very least, this would involve having limited control over any personal information stored in the cloud. And when all or nearly all one's computing takes place in the cloud, one would have limited control over a large bulk of that information.

When all is said and done, the Chrome OS platform may end up being cheaper, more efficient, and more innovative than Windows or OS X. It may be built around the Linux kernel. It may also be touted as a free/libre open source project, but it cannot help resulting in an arrangement at least as unethical as the arrangement between proprietary software companies and their end users, since users will end up surrendering to their service provider a modest amount of control over system functionality, as well as the security of personal data. Chrome OS may be free/libre open source in name and in practice, but the very nature of the relationship between user and provider that cloud-centric computing fosters entails that it cannot be free/libre open source in spirit.

Creative Commons License
Google's Chrome OS, Cloud-oriented Computing, and User Freedom; or The Emperor has No Clothes! by Nathan M. Blackerby is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License.


Has Web 2.0 had a Corrosive Effect on Democracy?

Below is an entry I published a while back on a collaborative blog that apparently (and unfortunately) isn't going to materialize anytime soon. The piece contains some of my thoughts on a particular criticism often leveled against the principles behind Web 2.0. I specifically address Andrew Keen's view in this piece, but his is representative of a select group of other media and technology analysts (e.g. Bryan Appleyard), so my remarks should be more widely applicable:

About two months ago, I watched a documentary on PBS entitled The Truth According to Wikipedia. As the title suggests, the film focuses on Wikipedia. But it's really an exploration of how the Internet has enabled worldwide collaborative ventures, and how this has affected the way the world gathers, assembles, shares, uses, and discusses information. The creators were able to present in a clear, informative manner (a rarity for issues-based documentaries, in my experience) the reasoning of both proponents and critics of Wikipedia-style approaches to online media.

One of the critics featured in the film is Andrew Keen, founder of the defunct website audiocafe.com. During the 90's and early 2000's, Keen was a strong proponent of Internet ideals, such as universal access to digital content, but he has more recently developed a profound distaste for all things Web 2.0. In the past few years, he has gone on the offensive, arguing that the core values of the New Internet - decentralization, participation, and user-generation of content - have had an increasingly corrosive effect on our economy, politics, and culture. Consider the following excerpt from the film, where Keen addresses an audience of Web 2.0 architects and enthusiasts who believe that their

'MyWorld' ... will lead to 'more democracy, more equality, and more freedom.' Now I [i.e. Keen] strongly disagree; that's the essence of my polemic. I argue 'me' - that this personalization of media, personalization of culture, the fragmentation of society, indeed, into 'me,' into everything becoming increasingly more personalized, is resulting in reality in less democracy, less equality, and less freedom. (The Truth According to Wikipedia, @ circa 12:15 - 13:01)

A bold thesis, to be sure. One that challenged me to evaluate my own intuitions on this issue and intrigued me enough to read Keen's book on the subject, The Cult of the Amateur. I wanted to see how strong his argument was in support of his view. It essentially boiled down to this:

1. In order for our culture to survive, society needs "gatekeepers," individuals whose judgments and abilities to perform certain duties can be trusted.

2. Experts and professionals are the gatekeepers of society.

3. But Web 2.0 principles destroy expertise and professionalism, since they require that one extol the anonymous amateur, elevating amateur judgment and performance to a level equal with and sometimes even superior to that of the expert or professional.

4. Therefore, Web 2.0 principles are a threat to the survival of our culture.

Keen spends virtually no time arguing for (1) and (2), instead opting to make the case for (3). He cites statistics about how, since the mid-90's, profits have increasingly fallen in professional journalism and the music and film industries; he highlights cases where misinformation spread via the Internet has had damaging effects on people's personal and professional lives; and he points to trends in marketing that increasingly blur the line between advertisement and content. According to Keen, this evidence not only shows that blogs, social networking sites, and peer-to-peer file sharing technologies are responsible for lost revenue in journalism and the entertainment industry, effectively ruining the careers of media professionals the world over; it also shows that amateurs and advertisers are taking over their roles and filling the Web with untrustworthy, low-quality content.

Of course, Keen's own interpretation of this evidence is questionable: one could maintain that professional journalism and the music and movie industries are seeing reduced profits because they employ a legacy business model ill-equipped for the digital age, and one could dig in one's heels and cite contrary evidence regarding the amount and quality of trustworthy content on the Internet. But debating the fine points of how Keen's evidence should be interpreted is really beside the point. His analysis suffers from a more basic problem: at best, he has shown only that increased untrustworthiness of Internet content and decreased revenue in professional journalism have coincided with the implementation of Web 2.0 principles, when what he needs to show in order for his argument to work is that the implementation of Web 2.0 principles has caused the supposed 'destruction of expertise and professionalism.'

But suppose for a moment that Keen does establish a causal connection. Should it then be beyond any reasonable doubt that Web 2.0 threatens to unravel our culture? This hinges on the plausibility of Keen's assumptions that gatekeepers are needed for the continuation of culture and that only experts and professionals can fill that niche. Now even granting that professionals are to be exclusively identified with gatekeepers, it doesn't necessarily follow that professionalization of a field or cultural activity will guarantee its survival. Indeed, as I have written elsewhere about my own field (i.e. philosophy), professionalization has largely proven to have a cannibalizing effect, and the key to philosophy's survival may involve some degree of "informalizing" and "amateurizing." So, contra Keen, Web 2.0 principles might enhance rather than threaten the survivability of culture in at least some cases.

Of course, Keen could admit that a healthy dose of amateurism is needed, while still maintaining that a society's culture can't do without its gatekeepers. But this raises the question: just what is it about the role of the gatekeeper that makes him or her so indispensable? According to Keen, culture is about truth, and "the gatekeeper is the key player in the truth, because the gatekeeper, whether they're an editor at an encyclopedia, or a record agent, or a newspaper publisher, they're the ones who determine truth" (The Truth According to Wikipedia, @ circa 24:00-24:20). And in The Cult of the Amateur, Keen draws on the work of anthropologist Ernest Gellner and political scientist Benedict Anderson to explain that gatekeepers provide society with cohesiveness by presenting the public a shared narrative and common worldview:

As anthropologist Ernest Gellner argues in his classic Nations and Nationalism, the core of the modern social contract is rooted in our common culture, in our language, and in our shared assumptions about the world. Modern man is socialized by what the anthropologist calls a common "high culture." Our community and cultural identity, Gellner says, come from newspapers and magazines, television, books, and movies. Mainstream media provides us with common frames of reference, a common conversation, and common values. Benedict Anderson, in Imagined Communities, explains that modern communities are established through the telling of common stories, the formation of communal myths, the shared sense of participating in the same daily narrative of life. (The Cult of the Amateur, p. 80)

The notion that truth, trustworthiness, and their intimate relationship (among other things) lie at the heart of culture should be unproblematic. However, Keen's description of culture's gatekeepers as determiners of truth sounds far less like our own cultural ideal - namely, a culture that is both free and democratic - and more like the ideal state of Plato's Republic. Indeed, Keen's gatekeepers are virtually indistinguishable from Plato's guardian class, whose role is to present a noble lie to an overwhelming mass of people inherently incapable of understanding truth. The culture Keen envisions is oligarchic, one in which societal control is placed in the hands of an elite class presumed to have exclusive access to truth and a monopoly on creativity. By contrast, a democratic society, whether it has professionals or not, leaves no room for Keen's gatekeepers. It assumes a fundamentally different epistemology and "technology" - one in which each person is presumed to be endowed with an inborn capability to discern truth and to utilize his or her creativity for productive purposes. Reality is supposed to be the determiner of truth, and it is through observation, conversation, and debate that we arrive at it. If anything, then, the principles of Web 2.0 would seem to complement or support the ideal of democratic culture, rather than usher in its demise.

Has Web 2.0 had a Corrosive Effect on Democracy? by Nathan M. Blackerby is licensed under a Creative Commons Attribution-No Derivative Works 3.0 United States License.


The Distributist Review - Three Acres and a Penguin: Why Distributists Should Try Linux

Originally published by Bill Powell at The Distributist Review.

Gripe, gripe, gripe. Globalization swallows the globe. Monsanto poisons your popcorn. Big Business and Big Government team up to embed RFID tracking chips in schoolkids. And distributists love to hate the whole mess. Cheers!

Well, friends, I have good news. Linux. It's time to free your computer.

Have you heard of Linux? Maybe you went to download Firefox (a free web browser), clicked around, and noticed that after "Windows" and "Mac" there was "Linux", with a little penguin. (His name is Tux.) Maybe you're periodically forced to interact with your IT department, and you've overheard "Linux" as they discuss their arcane secrets. Maybe you're way ahead of me, and are irritated because I'm probably not going to mention OpenBSD.

Or maybe you have no clue what I'm talking about. What is Linux? Basically, Linux is a pile of programs that lets you take your computer, strip it down to the bare hardware, and start fresh. Linux is an alternative operating system. If you just download Firefox, you're still in Microsoft Windows or OS X. When you download Linux, you're in Linux.

Why is this good news for distributists? Because Linux is free. Not only "free as in beer," but far more importantly, "free as in speech." You can download Linux and use it as you will. You can try free alternatives for almost any task you can think of: email, browser, word processor, spreadsheet, graphic design, typesetting, games, and many more. You can customize most of these programs, as well as the overall window manager, beyond your wildest pre-Linux dreams. You can also remove any application that annoys you. (Try removing IE.)

You can even read the source code; and if that sounds silly, you can rest assured that thousands of other programmers do read the source code. Why does this matter? Computer programs are made up of hundreds, thousands, or millions of lines of code, and in the world of Microsoft or Apple, that code is proprietary. It's generally illegal to read the code unless you work for Microsoft or Apple. In fact, when you buy the program, you don't even get the source code. You only get (no, you rent) the computer-readable binary code, which looks like gibberish and can't be altered. It works (hopefully) but you are not allowed to know how. Or fix it.

Imagine if you could only fill your car with gas from Exxon. Or only get an oil change at the dealership. Or if it was illegal to open the hood of your car unless you worked for the manufacturer. Even if you had no desire to be your own car mechanic, these rules would seem a bit draconian.

So why this paroxysm of intellectual property law for computer software? It's understandable; when advances in computing made it possible for companies to sell software to non-programmers, they quickly noted that you could pay a hundred thousand dollars to develop a vital program, and your competition could copy it the next day. They thought sharing wouldn't work. They were wrong.

Whether the proprietary model is moral is beyond my allotted portion. It's certainly obvious that, permissible or not, it drastically curtails the freedom of the user. It seriously tips the balance of power towards the corporation. How would you feel about a brake job if it was illegal to have a rival company check up on the work? You probably store plenty of private information on the same computer that mysteriously connects you to the Internet; wouldn't you prefer that this computer had no secrets?

Linux is exciting because it turns the proprietary model on its head, and it works. Linux is often called open source or simply free (or libre) software; the basic idea is that you can read the code, tweak it, add to it, re-release it, even charge money for it. For instance, I charge money for customizing an installation of web site software. You get a web site that's based on a common, powerful, well-supported program, but I make it unique for you. Anyone can do anything they like with the code except try to lock up the portions they used. The code stays free.

So where does all this code come from? Why do programmers spend millions of hours on code they will give away?

This also should excite distributists. Free software is a unique ecosystem. (I'm going to stop saying "Linux" now; it sounds cooler than "free software," but it actually has a definite technical meaning, and it isn't the only free OS in town, either.) A program is not like an apple. If I share my apple with you, we each only get half. (Which is why it matters who owns an apple tree.) If I share my program, we both have a full copy; and I benefit from your feedback.

Every program's niche is different. Many programs happen simply because the programmers want or need them. Major programs might be the work of a non-profit foundation, as with Apache (which runs more than half the servers on the Internet), or subsidized by a for-profit company so the code can be reused elsewhere, as with OpenOffice.org (a free office suite which also runs on Windows or a Mac). Some companies offer free software, and charge money for support. Some programmers seem to live on donations and advertising. People do what works.

Chesterton fought for economic liberty, and knew it was bound up with political liberty. Today, he would say that both are bound up with digital liberty. Do you own a computer? Especially a spare older computer you can wipe clean without fear? Try a few free Linux lessons. Or if you'd like to stay on your current operating system, at least try a free web browser or word processor. If these are the tools you use every day, why not choose tools you can make your own?

This work is licensed under a Creative Commons Attribution-No Derivative Works 3.0 United States License


Philosophy and Free Culture, Part I

This is the first in a series of articles that I will be writing for a new magazine, called The Coffee Companion.

During the last century and continuing through to the present day, philosophy has come to be identified increasingly with the work of the professional philosopher: its techniques and rich vocabulary needing years of study to master, its history seen as an artefactual object best suited for academic analysis, its practice relegated to classrooms and professional conferences, and its ideas monologically transmitted to a select audience of experts, eventually calcified in journals and books inaccessible or unknown to the general public. It is unclear whether this causes or is symptomatic of a focus on issues so esoteric and obscure as to appear altogether divorced from the questions and concerns that arise from reflection on everyday experience. What is clear is that philosophy's professionalization marks the beginning of its virtual extinction outside the cloistered halls of the University.

The logic of professionalism demands that the responsibility of doing philosophy rests on the shoulders of those who receive pay for it. The reality of professionalism demands that the non-philosopher have no time for it. This leaves the general impression nowadays that philosophers make a career of dealing with philosophical issues so that the public no longer needs to. Jane Doe, Eddy Punchclock, and Joe the Plumber can rest at night knowing that their tax dollars and payments on their children's college tuition support Steve the Scientist's research which will spark technological innovation, Bob the Business Professor's training of legions of entrepreneurs destined to create new markets or redefine old ones, and Bella the Biologist's work on fighting life-threatening diseases. The tasks of one's own profession coupled with the hustle and bustle of day to day living are often so consuming that simultaneously taking on the task of another profession becomes practically unimaginable. Jane, Eddy, and Joe aren't expected to perform the tasks that Steve, Bob, and Bella's respective professions demand. So why should philosophy be any different? What makes Pete the Philosopher's quest to tackle Life's Big Questions - or whatever it is that philosophers do - an exception?

Often coupled with the logic and reality of professionalism is the notion that the worth of an activity or discipline should be measured by the degree to which it can maximize productivity and financial benefit. All this coalesces to the point where utilization of one's talents and intellectual abilities for reflection on things beyond one's own profession becomes optional. This spells bad news for philosophy: not only does it “bake no bread,” it doesn't even help one effectively sell the bread one bakes. For all intents and purposes, the professional non-philosopher's engagement in philosophy reduces to recreation, and even the professional philosopher's work is regarded as being of marginal importance.

In the current state of the art, then, consideration about whether one should refrain from doing philosophy is virtually self-affirming, since philosophical reflection appears to bear little significance to action. Yet, ironically, a tinge of reflection on the above appraisal quickly reveals that one should proceed with caution in endorsing a system that takes action as the sole determinant of value. Though it may be true that certain principles are rejected or endorsed on account of the actions to which they lead, it is nevertheless also true that actions are treated with contempt or esteem on account of their agreement or disagreement with principles. Thus, if action is itself treated as the final evaluative principle, one should only unreflectively endorse those actions one already engages in.

The danger in all this is that as actions change, one would lack the sense to determine whether one's actions should have changed. What hangs in the balance here outstrips individual concern. An unreflective public in the habit of making irrationally uninformed decisions would be prepared to surrender voluntarily whatever social and political power it might have for the sake of salvaging or enhancing some feature of commercially productive action. Were the loss of critical self-awareness to become commonplace (as some may argue it already has), this would spell disaster for free and democratic culture, since the latter depends on individuals taking responsibility for making rationally informed decisions in the common interest. As such, widespread philosophical reflection held in high regard appears essential to the preservation of free, democratic culture. Yet in order for this to be realized, philosophy would need to be restored in some measure to its Socratic origins as an activity in which members of society participate in a collective, public, and sustained cross-examination of tacit assumptions about human conduct and the world. That is, philosophy must be understood to be more than mere profession.


The Utility of Ubuntu

Yesterday my friend Matt brought to my attention an article entitled "A Software Populist Who Doesn't Do Windows," which recently appeared in the Business Section of the New York Times. It's an interview story on Mark Shuttleworth, the founder of Canonical, but it is equally about the rise of Canonical's desktop Linux distribution, Ubuntu. Ashlee Vance, the author of the article, contends that Ubuntu may have the wherewithal to become a competitor in the desktop market precisely because it succeeds in an area where Linux has a reputation for failing: user-friendliness. The fact that it comes at no cost helps, too. But what Vance giveth with one hand, he taketh away with the other, arguing that usability, compatibility issues, and price are also its major stumbling blocks to success. Consider, for instance, the following statements:
While relatively easy to use for the technologically savvy, Ubuntu — and all other versions of Linux — can challenge the average user. Linux cannot run many applications created for Windows, including some of the most popular games and tax software, for example. And updates to Linux can send ripples of problems through the system, causing something as basic as a computer’s display or sound system to malfunction. (New York Times)
Parts of Mr. Shuttleworth’s venture continue to look quixotic. Linux remains rough around the edges, and Canonical’s business model seems more like charity than the next great business story. And even if the open Ubuntu proves a raging success, the operating system will largely be used to reach proprietary online services from Microsoft, Yahoo, Google and others. “Mark is very genuine and fundamentally believes in open source,” said Matt Asay, a commentator on open-source technology and an executive at the software maker Alfresco. “But I think he’s going to have a crisis of faith at some point.” Mr. Asay wonders if Canonical can sustain its “give everything away” model and “always open” ideology. (New York Times)
Press coverage of free/libre, open source software always has the potential to be a positive; you never know just whose curiosity it might pique. However, articles like Vance's seem to do more harm than good. Henry Kingman has described Vance's portrayal of Ubuntu as "the flawed plaything of an eccentric billionaire, an OS likely to appeal only to the disaffected, marginalized, deeply technical, or all of the above." I think Kingman is correct. Vance's comments about Ubuntu certainly seem to suggest that he believes it's an unstable system with too much of a learning curve for the non-specialist, that it has too many quirks to be functional, and that its sustained existence depends on Mark Shuttleworth's attention span. All this serves to scare the unsuspecting Windows or Mac user away from exploring the world of Linux. "Ubuntu may be free of cost," the warning begins, "but it's largely useless."

There are greater, ethical reasons for choosing to use a free, open-source operating system, but since Vance focuses on utility and price, I'll set those ethical reasons aside and make some brief remarks on how my own experience (as well as others') with Ubuntu suggests the opposite of Vance's claims.

I have used computers long enough, and have enough working experience with them, that, for instance, I don't freeze at the sight of a command-line interface or panic when a program needs code fixed in a text editor. Nevertheless, I'd still classify myself as a "regular user." Yet I run Ubuntu on each of my computers every day with relative ease. Others who are less "tech-savvy" than I am have had the same or similar experiences with Ubuntu. Neither they nor I find using Ubuntu a "challenge" at all (including system updates and upgrades).

Moreover, I don't miss Windows programs. There are two reasons for this. First, nearly every application that runs on Windows has its counterpart in Linux. For Microsoft Office there is OpenOffice.org; for Internet Explorer there are Firefox, Konqueror, and Opera; for Adobe Creative Suite there are F-Spot, the GIMP, Inkscape, and Blender; for Windows Media Player there are Amarok, Rhythmbox, MPlayer, and VLC. The list goes on. Second, if one fails to find, say, an adequate tax program that runs natively on Linux, he or she can always run his or her Windows tax program of choice in Linux through WINE.

So, contrary to Vance, my own experience and the experience of others stand as testament that Ubuntu is a stable operating system. Moreover, the fact that Ubuntu receives a steady stream of updates, that new versions are released every six months, and that it has a large and active community of developers and contributors indicates that Ubuntu should remain a stable system in the future as well.

The Utility of Ubuntu by Nathan M. Blackerby is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License.