A chapter from The Academic Library and the Net Gen Student by Susan Gibbons (ALA Editions)

Chapter Four
Web 2.0

Online games are not the only route to an engaging experience on the Net. The rest of the Web is rapidly catching up.

The Web has never been a purely static experience, but it has not been all that interactive either. With the vast majority of web pages, when a user arrives at a page of HTML code, displayed through an Internet browser, there is usually little more than text to be read and images to be viewed. Movement through the website is accomplished through user-initiated mouse clicks, to which the web server responds with a repeatable, usually predictable, response. The delivery of content is predominantly a one-way conversation, with the website, as proxy for its author, as the speaker and the web surfer as the listener. A productive experience with the Web requires the user to be able to locate and pull the appropriate web pages out of the vast sea of possible websites. This is the familiar world of Web 1.0, to which the majority of us have grown accustomed.

The concept of Web 2.0 promises to be very different. Tim O’Reilly, founder and CEO of O’Reilly Media, is credited with coining the term “Web 2.0” in 2004. Although there is certainly no consensus about what Web 2.0 fully entails, there are some shared principles, which were presented by O’Reilly in 2005 and are captured in figure 4-1. The first is the concept of “the Web as platform.” In the Web 1.0 world, a website with its static text and images is the deliverable. In the 2.0 world, however, the Web is just the platform or foundation, which supports the delivery of myriad dynamic services. O’Reilly (2005) uses Google to demonstrate the concept of the Web as platform:

Google’s service is not a server—though it is delivered by a massive collection of internet servers—nor a browser—though it is experienced by the user within the browser. Nor does its flagship search service even host the content it enables users to find. . . . Google happens in the space between browser and search engine and destination content server, as an enabler or middleman between the user and his or her online experience.

Figure 4-1 Web 2.0 meme map, summarizing the strategic positioning (“The Web as platform”), user positioning (“You control your own data”), and core competencies (services, not packaged software; architecture of participation; cost-effective scalability; remixable data source and data transformations; software above the level of a single device; harnessing collective intelligence) of Web 2.0. Originally published in Tim O’Reilly’s “What Is Web 2.0” (http://tim.oreilly.com/news/2005/09/30/what-is-web-20.html)
The Web has become a computing platform that can deliver a dizzying array of services through little more than a web browser, thereby eliminating the need for the end user to install special software on her own personal computer. As Google makes incremental changes to its products, we never have to download or install new releases. Rather, the web platform hosts these product changes on our behalf.

The second principle of Web 2.0 is the “harnessing of collective intelligence.” In the 1.0 world, when a user arrives at and engages with a website, that interaction has little consequence for the website, except to add another hit to the usage statistics. With Web 2.0 products, it is the user’s engagement with the website that literally drives it. Amazon.com is an excellent example of this. Each time you visit Amazon.com, you leave behind a virtual pile of useful data. The search terms you use, the sequence of books you examine, the reviews you read and write, and ultimately the books you buy are collected and combined with similar data from other users to form an enormous body of information about user behavior. Buried within are discernible patterns which, once recognized, can be leveraged and turned into new and improved features. For example, Amazon.com is using this immense collection of past usage data to create the features “Customers who bought this item also bought . . .” and “What do customers ultimately buy after viewing items like this?” which are marvelous recommender systems that no one person alone could create.

Digg (digg.com) is another example of a website harnessing the collective intelligence of its users. Digg users submit links to news stories they have found interesting. As explained on the website, “After you submit content, other digg users read your submission and digg [vote for] what they like best. If your story rocks and receives enough diggs, it is promoted to the front page for the millions of digg visitors to see.” The result is a news outlet where the community of users, not an elite group of individuals, acts as the editor.

The third Web 2.0 principle is the primacy of data and the databases that house it. At the core of Google’s service is an immense database of metadata for billions of web pages. A database of available books is at the core of Amazon.com; MapQuest (www.mapquest.com) rests on a database of maps. The successful firms of Web 2.0 are those that not only have the best data but also know how to harness it well. For example, although the book data Amazon.com controls is quite similar to that within a library catalog and Bowker’s Books in Print, the presentation, channeling, and harnessing of this data are strikingly different. Few would argue with the assessment that Amazon.com does a far better job than a library catalog of realizing the full potential of that data.

The “end of the software release cycle” is O’Reilly’s fourth principle. Successful Web 2.0 companies do not have rigid, predetermined software releases. Instead, the software is tweaked and improved on an ongoing, sometimes daily, basis, dependent upon a continuous flow of user feedback. This feedback is obtained by direct means, such as through a customer comment system, but also indirectly via the “real time monitoring of user behavior to see just which new features are used, and how they are used” (O’Reilly 2005). This continuous cycle of improvement actually places website users in the role of “codevelopers,” whether they are conscious of this or not.
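Before moving on, it may help to make this second principle more concrete. The short Python sketch below shows one very simple way that accumulated co-purchase data could drive a “Customers who bought this item also bought . . .” feature: count how often pairs of titles appear in the same order, then recommend the most frequent partners. The order data and book titles are invented, and the approach is only an illustration of the general idea, not Amazon.com’s actual, far more sophisticated method.

from collections import Counter
from itertools import combinations

# Invented order history: each set holds the books bought together in one order.
orders = [
    {"Book A", "Book B", "Book C"},
    {"Book A", "Book B"},
    {"Book B", "Book C"},
]

# Count how often each pair of books appears in the same order.
pair_counts = Counter()
for order in orders:
    for pair in combinations(sorted(order), 2):
        pair_counts[pair] += 1

def also_bought(book):
    """Return the books most often purchased together with the given title."""
    related = Counter()
    for (x, y), n in pair_counts.items():
        if book == x:
            related[y] += n
        elif book == y:
            related[x] += n
    return [title for title, _ in related.most_common()]

print(also_bought("Book A"))  # ['Book B', 'Book C']

The recommendations emerge entirely from the accumulated behavior of many past shoppers; no editor ever decides which books belong together.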
This continuous cycle of improvement also means that a Web 2.0 product is in “perpetual beta,” because there is never an official, finished product.

O’Reilly’s fifth principle is the reliance on lightweight programming models. A website undergoing continual change requires simplicity. Instead of tightly intertwining the various components of a website, Web 2.0 products strive for loosely coupled, often modular systems that allow pieces to be swapped in and out easily.

The sixth principle pushes this flexibility of options to the end user. The Web is no longer limited to personal computers but can embrace a whole suite of devices. For example, Apple’s digital music service iTunes (www.apple.com/itunes/) and TiVo (www.tivo.com), a digital video recorder for television, “are not web applications per se, but they leverage the power of the web platform, making it a seamless, almost invisible part of their infrastructure” (O’Reilly 2005).

When these principles are combined and actualized, the Web becomes a more interactive, dynamic experience for all users. There is, in essence, a continuous dialogue between the users and the web pages they encounter, and the result is an increasingly personalized, customized experience. This rich user experience need not, however, stop at the outer edges of an academic library’s website. Rather, the concept of Library 2.0 has recently been posited by several writers (see, e.g., Casey and Savastinuk 2006; Chad and Miller 2005; Miller 2005, 2006a, 2006b).

Library 2.0 is a concept of a very different library service, geared towards the needs and expectations of today’s library users. In this vision, the library makes information available wherever and whenever the user requires it, and seeks to ensure that barriers to use and reuse are removed. (Miller 2006b, 2)

In other words, the same concepts and technologies that are creating the Web 2.0 experience should also be used to build the Library 2.0 experience. Actualizing Web 2.0 is a growing set of simple yet powerful tools that are turning the Web into an interactive, context-rich, and highly personalized experience. This list of tools is continually expanding, and consequently any attempt to mention them all is rather futile. There are, however, several tools that have become the 2006 poster children for Web 2.0. This small subset is our focus for the remainder of this chapter and the next.

RSS

RSS, an acronym for Really Simple Syndication or Rich Site Summary, denotes a class of web feeds specified in XML (Extensible Markup Language). In layperson’s terms, RSS is a way to syndicate the content of a website. From a user’s perspective, this means that you do not have to visit a website continually to see if there is new information. Instead, you subscribe to the RSS feed, and every time the website changes, an update is delivered through the feed, alerting you to the change.

RSS feeds are easier to explain with an example. Suppose you are an avid reader of the New York Times online. Throughout the day, the Times website is regularly updated with breaking stories, and you find yourself constantly returning to www.nytimes.com to see what has been added since the last time you visited the site. RSS feeds provide an alternative to this time-consuming process. Instead of visiting the Times website again and again, you could subscribe to the Times RSS feeds. Whenever something is added, the headline, a short summary, and a link back to the full article are sent to your RSS reader (explained below).
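Because RSS is specified in XML, a feed is simply a structured text file that any program can read. The short Python sketch below parses a small, invented feed fragment and pulls out the three pieces an RSS reader typically displays for each entry: the headline, the short summary, and the link back to the full article. The feed content and URLs are made up for illustration; real feeds follow the same basic RSS 2.0 structure.

import xml.etree.ElementTree as ET

# An invented feed fragment in the common RSS 2.0 format.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example News</title>
    <link>http://www.example.com/</link>
    <item>
      <title>Breaking story headline</title>
      <description>A short summary of the new article.</description>
      <link>http://www.example.com/story123</link>
      <pubDate>Mon, 02 Oct 2006 14:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>"""

root = ET.fromstring(FEED)
for item in root.iter("item"):
    # These three fields are what a reader typically shows for each new entry.
    print(item.findtext("title"))
    print(item.findtext("description"))
    print(item.findtext("link"))

An RSS reader does essentially this for every feed you subscribe to, checking each one periodically and showing you only the items you have not yet seen.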
Although the Times has an all-encompassing RSS feed, it has also divided up its content into smaller, more refined feeds. Consequently, if your interest is only in “International News,” “College Basketball,” or “Movie Reviews,” you can subscribe to a feed limited to just that topic.

An RSS reader is the receiver and aggregator of all the RSS feeds to which you subscribe. The RSS reader can come in many different forms. Some readers work by sending you the RSS feed through your e-mail. As an example, “Blog Alert” is a free service that sends you daily e-mail notifications of new items in the RSS feeds you specify.1 No special software is needed. Just enter the URL of the RSS feed and your e-mail address into the web form, and the e-mail alerts start arriving daily.

If you would prefer not to clog your e-mail in-box, there are many RSS reader applications available for download onto your computer. Awasu (www.awasu.com), for example, is a free RSS reader that runs on Windows computers. Through a rich graphic interface, Awasu keeps track of all your subscribed RSS feeds and alerts you when something new arrives. As your list of RSS subscriptions grows, you can arrange the feeds into categories, or channels. The software keeps track of what you have already read so that you are not looking at the same content repeatedly.

If, however, you use many different computers throughout the day, you can avoid loading an RSS reader application onto all of them and eliminate the inevitable synchronization problems (e.g., you read a feed on your office computer, but your home computer still has it marked as new and unread) by using a web-based reader such as Bloglines (www.bloglines.com; see fig. 4-2). Any time you have access to the Web, you can log into your Bloglines account and get your latest RSS feeds. Registration and setup are simple and currently free.

In 2005 the Pew Internet and American Life Project found that 5% of Internet users in the United States use RSS readers “to get the news and other information delivered from blogs and content-rich Web sites as it is posted online” (Rainie 2005, 1). Although 5% may not seem significant, it becomes a much more impressive number when translated into 6 million Americans. RSS feeds can be used to stay current on content from a wide variety of information sources, including formal news outlets (e.g., New York Times and CNN.com), publishers (e.g., Nature and the U.S. Government Printing Office), alerting services (e.g., National Hurricane Center), and vendors (e.g., Target and iTunes Store). The bottom line is that RSS feeds are a cost-effective and time-effective way for anyone to stay current in this fast-paced, digital world.

Figure 4-2 Author’s RSS feeds in the Bloglines RSS reader, displaying an entry from Andrew K. Pace’s blog Hectic Pace

Blogs

In addition to the information sources listed above, one could also subscribe to the RSS feeds of interesting blogs. The term “blog” is actually a shortened version of the word “weblog.” Wikipedia describes a blog as “a type of website where entries are made (such as in a journal or diary), and displayed in reverse chronological order.”2 Blogs are simply online journals in which writers can easily jot down their thoughts or comments, accompanied by any related links.
It has always been possible for a plain HTML website to function as an online diary, but the popularity of blogs really began to flourish in the late 1990s with the availability of free and cheap blogging platforms such as Xanga (www.xanga.com), LiveJournal (www.livejournal.com), and Blogger (www.blogger.com). An individual’s blog is a personal communications venue through which to share thoughts, comments, beliefs, rants, and raves with the world.

A 2006 national survey by the Pew Internet and American Life Project found that 8% of Internet users in the United States (about 12 million adults) keep a blog. There are, however, significantly more blog readers, with an active audience of approximately 57 million American adults (Lenhart and Fox 2006, i). Over the past three years, the number of blogs has doubled every six months, with close to 175,000 new ones created each day. The total number of blogs exceeded 50 million in July 2006 (Lanchester 2006).

Blogger demographics are interesting. Just over half (54%) are under the age of 30, with an even split between male and female. Half of all bloggers live in the suburbs, and a third live in urban areas. Surprisingly, African Americans and English-speaking Hispanics have a greater representation in the blogger population than in the general Internet population (Lenhart and Fox 2006, ii). For the vast majority of bloggers (84%), blogging is just a hobby or casual pastime. Although some of the highest-profile blogs focus on politics, such as Daily Kos (www.dailykos.com) and Crooks and Liars (www.crooksandliars.com), the most popular blogging topic (37%) is one’s own life and experiences. Although blogging is by its nature a public activity, the Pew study found that “most bloggers view it as a personal pursuit,” and yet 87% of bloggers allow comments on their blogs, suggesting an awareness of visitors (Lenhart and Fox 2006, ii, iv).

As with RSS readers, blogs can be hosted locally or remotely. Locally installed blogging software such as Movable Type (www.movabletype.org) is more feature rich and able to support significant customization and branding. As Stephens (2006, 27) notes, however, the software can be difficult to install and requires some level of technical and programming support. The remotely hosted blogging systems, including Blogger and WordPress (wordpress.org), are accessible from any computer with an Internet connection and require no technical expertise or support. Customization is, however, limited, and the blogs reside on the host’s branded site as just one of several thousand hosted by the service.

The mechanics of creating a blog entry are quite simple. Through a straightforward, web-based form, the author enters text and adds any relevant links and images. When the entry is complete, the author submits it, and it then automatically appears at the top of the blog, date and time stamped. The blog owner/author can elect to make the blog public to the world or available to just a subset of people and can decide whether to allow others to comment on the blog entries. (See fig. 4-3.)

Figure 4-3 Screen shot demonstrating how to create a blog entry in Bloglines

Accompanying the explosion in the number of blogs is the emergence of blog-specific search engines that crawl the “blogosphere.” Popular examples include Technorati (www.technorati.com), Feedster (www.feedster.com), and IceRocket (www.icerocket.com).

Wikis

Blogs essentially follow a diary metaphor, with the entries in reverse chronological order and “penned” by a single, primary author.
Wikis, on the other hand, are subject-driven information sites that deliberately have a shared and distributed authorship (Ferris and Wilder 2006). Wiki is the Hawaiian word for “quick,” which characterizes the speed with which a person can use a wiki. A wiki, as described by the world’s most popular instantiation of it, Wikipedia, “is a type of website that allows the visitors themselves to easily add, remove and otherwise edit and change some available content, sometimes without the need for registration.”3

The basic component of a wiki is a web page with some informational content. Without the use of any special locally hosted software, a person can click on a page’s “edit” button, make changes to the content, and then save those changes. All of the older versions of the page are saved in a history log, whereby errors or malicious acts can be corrected by simply reverting to an older version of the page. (This edit-and-revert cycle is sketched in the brief example below.) Because wikis can allow literally anyone to add or edit their content, one might presume that the outcome would be of poor quality or chaotic form. In reality, it is the highly collaborative nature of wikis that ensures both quality and order. Wikis harness the power of collective knowledge, because presumably no single person could possibly create all the content. Rather, anyone with expertise, knowledge, interest, or enthusiasm can contribute to the effort. In addition, the numerous sets of eyes that work with the content ensure a high level of quality: “It is the community of users acting as quality control that keeps content in-line and on-topic” (Guenther 2005, 53).

As with RSS readers and blogs, there are many wiki software options available, with a range in complexity. At the easier end of the scale are the open-source Tipiwiki (tipiwiki.sourceforge.net) and JotSpot (www.jot.com), which was acquired by Google in 2006. The more complex and more fully featured wikis include Tikiwiki (tikiwiki.org) and the German system MoinMoin (moinmoin.wikiwikiweb.de). Tonkin (2005) and Stephens (2006) provide useful overviews of the features and functionalities of available wikis.

Wikis are used as the foundation of all sorts of projects. For example, the wiki Memory Alpha is a large and popular encyclopedic reference for all things related to Star Trek.4 There are wikis focused on political campaigns, comic books, travel, and cooking. Wikis can be found in many different languages as well, including French, Polish, Russian, Esperanto, Kurdish, and Bengali.5 By far the best-known wiki is Wikipedia (en.wikipedia.org), an immense, collaboratively authored encyclopedia. Founded in January 2001 by Larry Sanger and Jimmy Wales, Wikipedia began as “an effort to create and distribute a multi-lingual free encyclopedia of the highest possible quality to every single person on the planet in their own language.”6 Literally anyone can contribute to Wikipedia by adding new entries or editing the entries that already exist, and the result thus far has been quite astonishing.
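The edit-and-revert cycle described above can be modeled in a few lines. The Python sketch below is a toy example, not the code of any real wiki engine such as MediaWiki; it simply illustrates the underlying idea that every saved edit is retained, so a bad change is never more than one revert away from being undone.

class WikiPage:
    """A toy wiki page that keeps every saved version of its text."""

    def __init__(self, title, text):
        self.title = title
        self.history = [text]  # all saved versions, oldest first

    def current(self):
        return self.history[-1]

    def edit(self, new_text):
        # Saving an edit never discards anything; it simply adds a new version.
        self.history.append(new_text)

    def revert(self, version_index):
        # Restoring an older version is itself recorded as a new edit.
        self.history.append(self.history[version_index])

page = WikiPage("Example article", "A collaboratively written article.")
page.edit("Vandalized text.")  # a malicious or mistaken edit
page.revert(0)                 # any visitor can restore the earlier version
print(page.current())          # prints the original, restored text

Because nothing is ever truly lost, a wiki can afford to keep the barrier to contributing very low: mistakes and vandalism are cheap to repair.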