Networks, Societies, Spheres: Reflections of an Actor-Network Theorist
Reported by Drew Margolin & Anna Li
Bruno Latour, born in 1947 in Beaune, Burgundy, to a wine-grower family, was trained first as a philosopher and then as an anthropologist. From 1982 to 2006 he was a professor at the Centre de sociologie de l’innovation at the École nationale supérieure des mines in Paris and, for various periods, a visiting professor at UCSD, at the London School of Economics and in the history of science department of Harvard University. He is now a professor at Sciences Po Paris, where he is also the school’s vice-president for research.
Professor Latour’s lecture combines a discussion of the core themes of Actor-Network Theory with insights regarding the enormous quantities of data now being produced and made available to researchers. Using a broad array of examples, including Isaac Newton, the Space Shuttle disaster, and a comparison of Marcel Proust’s childhood to the world faced by youth today, Latour explains and elaborates on the notion that networks should replace objects as our locus of attention. In particular, Latour recalls the insights of Gabriel Tarde and his criticism of the idea that there exists “a society” which is an object separate from individuals.
Latour argues that the notion of society was invented to compensate for the lack of data available to researchers in earlier eras. Social theory, he suggests, is a function of the “datascape” — what we can record about behavior. Today we have access to enormous amounts of data, and it is a mistake to try to fit our treatments of these data into traditional constructs such as “the individual” and “society.” Instead of trying to understand individuals, who are irreducibly complex, we should focus attention on the networks through which they distribute action. Unlike the notion of society, these networks are simplifications rather than aggregations.
Latour ends the lecture by pointing to two challenges that face researchers. First, he argues that we must confront the technical and theoretical challenge posed by the new mass of data. Second, he cites Walter Lippmann and his concern with controversy and the fragility of public discourse. Latour addresses these remarks in particular to the climate change controversy and the role that scholars could play in re-inventing the newspaper.
The Dark Side of Metcalfe’s Law: Multiple and Growing Costs of Network Exclusion
Ernest J. Wilson III & Rahul Tongia
Reported by: Drew Margolin & Cuihua Shen
Ernest James Wilson III, Ph.D., is Walter Annenberg Chair in Communication and dean of the Annenberg School for Communication and Journalism at the University of Southern California. He is also a professor of political science, a faculty fellow at the USC Center on Public Diplomacy at the Annenberg School and an adjunct fellow at the Pacific Council on International Policy. He was elected the first African-American chairman of the Corporation of Public Broadcasting in September 2009.
Rahul Tongia is a Program Director at the Center for Study of Science, Technology and Policy (CSTEP) in Bangalore. His research focuses on infrastructure and technology for sustainable development, especially for underserved regions such as India or Africa. His current work covers the broad areas of Digital Divide, ICT for Sustainable Development, Smart Metering for Electricity Networks, and Energy for Developing Regions. He has a doctorate in Engineering and Public Policy from Carnegie Mellon, and an Sc.B. in Electrical Engineering from Brown University.
Dean Wilson introduces this work as an attempt to focus on the “dark side” of networks. Much research has emphasized the collective benefits of network size to the individuals who are included in a network. There is little research on the consequences to those who are excluded from the network. Wilson and Professor Tongia address this topic with the following questions:
Why is “exclusion” excluded from so much work on technology and networks?
Who benefits from inclusion and exclusion?
What are the distribution and utility of network membership?
In this work, the authors propose that inclusion should be paired with the issue of exclusion.
The discussion centers on the digital divide and related policy concerns, but it is informed by work in China, India and Africa where inclusion and exclusion from technology and institutions can be applied more broadly.
Tongia reviews several approaches to valuing networks based on their size. Laws proposed by Sarnoff, Odlyzko, Metcalfe, Nivi, and Reed each posit that as networks grow, the utility to those in the network also grows. These laws do not specify, however, the consequences in utility to those who are excluded. More broadly, there is a question of whether to value exclusion with an “inclusion framing,” wherein the cost of exclusion is the opportunity cost of foregone inclusion, or an “exclusion framing,” in which those who are excluded bear additional costs as their numbers fall. For example, being the “last person on earth” not in the network may bring unique and substantial disadvantages beyond the foregone benefits of inclusion.
Tongia proposes several ways that such exclusion costs might be calculated depending on the framing. Of particular interest is the case where growth in value is exponential, as in this case the excluded (or even those who join late) may be permanently prevented from capturing equal value. He points out that in the early phases of network formation and growth, the inclusion benefits must be substantial to get people to join. Conversely, when being part of the network is the norm, it may become “a necessity,” imposing additional costs on those who still remain outside.
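The scaling laws Tongia compares can be made concrete numerically. Below is a minimal sketch, assuming the conventional functional forms of the laws (Sarnoff: n; Metcalfe: the n(n-1)/2 possible pairs; Reed: the 2^n - n - 1 possible subgroups); the foregone-value calculation is one plausible operationalization of the “inclusion framing,” not the authors’ own model:

```python
# Conventional network-value laws (functional forms only; scaling constants omitted).
def sarnoff(n):
    # Broadcast value: proportional to audience size.
    return n

def metcalfe(n):
    # Value of all possible pairwise connections.
    return n * (n - 1) // 2

def reed(n):
    # Value of all possible subgroups of two or more members.
    return 2 ** n - n - 1

# One plausible "inclusion framing" of exclusion cost: the value an
# outsider forgoes by declining to join a network of n members.
def foregone_value(law, n):
    return law(n + 1) - law(n)

for law in (sarnoff, metcalfe, reed):
    print(law.__name__, law(10), foregone_value(law, 10))
```

Under Reed’s law the foregone value grows exponentially with network size, which illustrates Tongia’s point that under exponential growth those who join late may never capture equal value.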
What are the costs to society as a whole? In many areas societies must maintain parallel or dual networks: digital and analog television; a health insurance system and an emergency-room system; a cell phone network and a pay phone system. An exclusion framing of network utility would suggest that as the concentration of usage shifts from 50-50 to 85-15, the disutility to those excluded from the dominant system may outweigh the benefits to those who are included. Public phones fall into disrepair as cell phones become more popular. In some cases the included also bear costs for the excluded, as when unpaid emergency-room bills for the uninsured force hospitals to pass costs along to the insured.
Dean Wilson concludes the talk by asking why there is so little scholarly attention to the excluded. Is it because there are too few of them? Perhaps it is because it is difficult to gather data on the excluded. Or perhaps it is because of homophily — the included are doing the analysis and assume everyone is like them.
The discussion featured comments on a variety of issues, including the theoretical and methodological approaches in the paper as well as the larger social implications of Wilson and Tongia’s findings.
Carter Butts and Michael Macy suggest ways the model might be more clearly bounded and described. Butts argues that the notion of value in network inclusion models is framed from the operator’s point of view, whereas the paper asks about the value to the user. He also suggests that the function is polynomial rather than exponential. Michael Macy suggests this model captures average value to the user rather than marginal value, and is concerned with usual rather than extreme cases.
Jonathan Taplin asks how this argument can be used in policy debates. He asks, for example, whether public money should be used to subsidize broadband access for the poor. Tongia points out that the argument applies here, but also in any scenario where there are two infrastructures, what he refers to as “dual networks.” David Grewal points out that this dual-networks problem has been present in political economy for some time; Adam Smith, for example, argued that for a boy to go shoeless in Scotland there was no shame, but to go shoeless in London there was shame.
Fritjof Capra argues that the greatest harm brought by network exclusion is that alternative technologies are being eliminated, e.g., no more tellers, no more payphones. There is a simple solution: companies should support “lifelines” of basic services. A small tax on cell phone usage could pay for public phones. Woody Powell suggests that the paper shows how far we’ve moved, in terms of politics and policy, away from suggestions like Capra’s. He reminds the group that during the Carter administration the FTC seriously considered giving out “cable TV stamps” to be sure everyone had access. Now this idea is laughable.
Several participants considered the application of this work to network research. Noshir Contractor wonders whether the assumption that all individuals are completely connected to the network makes sense in all cases. What if individuals in a telephone network don’t have the social capital to know whom to call? How can the notion of exclusion be extended to an incompletely connected network? Peter Monge points out that fully connected networks are extremely rare. For example, the densification study by Kleinberg showed that in most networks the exponent of growth was very low: around 1.45 in co-citation networks and 1.1 in e-mail networks. Given these empirical findings, the fully connected network (as implied in the study) is not a very good generalization.
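Monge’s point can be illustrated with the densification power law, under which edge counts grow as e(t) ∝ n(t)^a; a fully connected network implies an exponent of 2, while the empirical exponents cited (roughly 1.45 and 1.1) fall well short of that. Below is a brief sketch that recovers the exponent from two observations; the counts are invented for illustration, not the study’s data:

```python
import math

def densification_exponent(n1, e1, n2, e2):
    # Slope of edges vs. nodes on log-log axes, i.e. the exponent a in e ∝ n^a.
    return math.log(e2 / e1) / math.log(n2 / n1)

# Illustrative node and edge counts for a network observed at two points in time.
a = densification_exponent(1000, 21_000, 4000, 157_000)
print(round(a, 2))  # well below the exponent of 2 implied by full connectivity
```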
Some scholars raise questions about the generalizability of this approach. Macy argues that the effects for telecom networks may not generalize to other kinds of networks, such as terrorist networks. Janet Fulk provides an example of a case where those included in the network experience costs because of those who refuse to join the network. She describes the findings of a study she did of law enforcement agencies trying to pool data about their operations. The failure of the network to recruit key municipalities undermined the strength of the network as a whole. Butts suggests there might be a competency trap here: those excluded may be doing very well. For example, some early elites cannot use email and the switching costs are high, so they have their assistants print out their e-mails for them. People who are last to adopt may be doing very well, and they may be the elites rather than the poor.
Manuel Castells is University Professor and the Wallis Annenberg Chair of Communication Technology and Society at the University of Southern California, Los Angeles, and Professor of Sociology and director of the Internet Interdisciplinary Institute at the Open University of Catalonia in Barcelona. He is also Professor Emeritus of Sociology and of City & Regional Planning at the University of California, Berkeley, where he taught for 24 years.
Professor Castells started his talk with his view of how theory serves his work. He suggested that theory should be used to produce knowledge through research; it is instrumental, not a value in itself. Therefore, theory should always be specific to context. Taking the network society as the background of the talk, Castells proposed that power relationships are fundamental. In any society, those with power determine the rules. Fortunately, power is always balanced by counter-power; in this way, society undergoes constant challenges and evolves. Echoing Bruno Latour’s keynote speech, Castells pointed out that systems move through the actions of individuals. In the current global network society, specific social structures are produced, characterized by key organizational forms organized in interwoven networks, in which microelectronics-based technologies function as the key elements underlying these networks.
In suggesting that all this supports a theory of power and counter-power in the network society, Castells identified four different forms of power:
1) Networking power refers to the power of the actors and organizations included in the networks that constitute the core of the global network society over human collectives or individuals who are not included in these global networks.
2) Network power (as Grewal defined it in the previous session) means the power resulting from the standards required to coordinate social interaction in the networks. In this case power is exercised not by exclusion from the networks but by the imposition of the rules of inclusion. Any social or network action requires social coordination, so it requires standards. These standards display network power. For example, once a protocol of communication gets accepted in the network, it becomes a form of power by imposing the rules of inclusion.
3) Networked power indicates the power of social actors over other social actors in the network. The forms and processes of networked power are specific to each network, and this type of power is the most complicated form. Throughout human history there have been two basic forms of power. The first is coercive power, by which actors can impose their will over others. The second is persuasive power, which functions in the minds of people through constructing the meaning of actions. These two forms of power can combine in different proportions, but having the capacity to construct meaning through discourses (persuasive power) is fundamental. In other words, shaping minds is more effective than torturing bodies.
A natural follow-up question is who holds networked power in a global network society. According to Castells, the answer is entirely undetermined. However, that does not mean that dominance does not exist. Essentially, the different forms of power organized in different networks of power are not unified. There is a distinction between the differentiation of a power elite and the formation of ad-hoc elites in particular contexts. The traditional definition of power is not useful here; how these different networked powers connect to each other requires specific analysis.
4) Finally, network-making power refers to the power to program specific networks according to the interests and values of the programmers, and the power to switch different networks by forming strategic alliances with different networks. Programmers and switchers are not abstract concepts; they are simply people or actors in the networks. For instance, MIT establishes networks between scientific and military networks, which ensures the domination of MIT in scientific networks and of the US in military technology. Ultimately, these ideas materialize in the brains of social actors. Therefore, the key becomes connecting human networks via communication networks. Here, Castells emphasized that the shaping of communication networks has a decisive effect on other networks (e.g., agenda setting, gatekeeping effect of traditional media).
Of course, there are also mechanisms of counter-power in any society. People are not passive; they receive, challenge, and produce their own products. Thus, counter-power is exercised in a manner symmetrical to power. For instance, in the financial markets, a number of new criteria such as environmental standards have been introduced. Counter-power also works by disrupting network switches. For example, protests against the FCC remind the FCC to take citizens’ rights into consideration.
Barzilai-Nahon: In terms of the identity of switchers and programmers, do these labels primarily refer to individuals, or can collectives have the same function? Is it possible that collectives create patterns of interaction which in turn provide the basis for the emergence of new switchers?
Castells: Actors in a network are not always individuals. When looking at the formation of an actor, however, individuals are at the root of collectives, and individuals are the ones who change the collective. Still, not all individuals in a collective are equal. Spontaneous networks of protest emerge when individuals, by responding to some event, suddenly form a collective. A second example of collective action is the movement to control the FCC. Some individuals created a loose activist structure, and the identity of those who joined the movement and their reasons for joining have a decisive impact on the identity of the collective. This also has implications for movement evolution, something that is understudied in social movement research: the motivations and background of the first individuals who created a collective before it grew big are important but usually not observed.
Capra: Networked power has existed throughout history, and might be even more representative of the Renaissance and other historical periods than of the network society.
Castells: Networked power is not unique to the network society. All of the four types of power presented above are present in the contemporary period. The important question in that respect is who is included in these networks, and who holds power positions within them. The answer to this question depends on the nature of the particular network, its goals, components, and technology.
Latour: In the term network power, neither “network” nor “power” is enlightening. First, he suggested abandoning the concept of power, asking whether there is anything that is not power. Second, he pointed to the inflationary, even hegemonic use of the term network today. Latour asserted that because of the traceability of human action today we tend to call every phenomenon a network, rather than using traditional categories such as territory, society, or the macro level. In addition, Latour disagreed with Castells about the role of theory: only theory, he argued, can give precision to a confused concept of networks.
Castells agreed to disagree completely with Latour. For him, power is not everywhere. It is a fundamental, but particular type of relationship. He also distinguished between power as relational and domination as an institutional concept. In contrast to earlier societies, the core activities of the network society are organized in networks, based on information and communication technology. Therefore, there are quantitative and qualitative differences between contemporary networks and those of other societies and historical periods.
Fulk: A theme of the conference has been that those excluded from networks are less powerful. Instead, depending on the network, the excluded can have more power than the included, as in the examples of small-business and police coordination networks.
Castells acknowledged this as a very relevant point. He stated that the included are more powerful in terms of the program of the network itself. Therefore, criminal networks have no problem in being excluded. A second order analysis of the relationship between networks is important; power resides in those networks that succeed in competition and are able to impose their will onto other networks.
Grewal: Does it make sense to distinguish between different kinds of programmers and switchers?
Castells agreed that programmers can be switchers and vice versa. Networks operate efficiently once they have a clear goal and program. He pointed to the connections between business and academic networks and how the former influence the latter’s research agendas. Therefore, switchers are important actors in all networks.
Tongia: What is your opinion regarding the recent Supreme Court ruling on corporations as individuals?
Castells returned to his point that corporations are ultimately run by individuals. He emphasized the connection between collective actors and the individual, which he hopes to connect ultimately to the individual brain.
Powell: Social scientists have so far been unable to measure power. He expressed his doubts about mashing up network theory and the phenomenon of power. Powell suggested that Castells’ presentation could just as well be titled the “technology of power,” and he is not sure whether the concept of networks is useful for an analysis of power.
Castells pointed to the transformative role of technology. He proposed a network theory of power in which power ultimately flows through communication networks. The construction of meaning is the most important form of power. For the first time in history, the system of communication networks provides the basis of this construction of meaning in immersive, interactive discourses that shape people’s minds.
Castells, M. (2009). Communication Power. New York: Oxford University Press.
Varieties of Networks, Varieties of Power: Network Multidimensionality in Historical Perspective
David Singh Grewal
Reported by: Sandi Evans & Anna Li
David Singh Grewal, a Junior Fellow at the Harvard Society of Fellows and a Director of the Biobricks Foundation, is a graduate student in Harvard University’s Government Department. He studies network power in the context of globalization and is the author of Network Power: The Social Dynamics of Globalization (2008).
Grewal began his talk on a tangent, discussing how the Biobricks Foundation relates to the previous talk on the semantic web. The Biobricks Foundation is a site for the emerging field of synthetic biology, and serves as an online registry for the standardization of biological parts. He described this integration of biological data and metadata as “Web 3.1,” meaning biopower plus network power. Here, as in the earlier semantic web talks, the issue of privacy loomed large over this potential informational boon, though this topic was not the focus of the rest of his presentation.
Multidimensionality and a historical perspective
Grewal’s starting point consisted of three research questions: 1) What kind of power is at work in the network society? 2) How do networks structure power? 3) Do different kinds of networks structure power differently? He then provided a review of the literature on network and anthropological theory. Because his presentation was exploratory, he elicited and received a great deal of interesting feedback from the audience.
Grewal addressed the methodological argument that a synchronic approach to studying networks provides only a single snapshot and does not measure change over time, which can be considered problematic. By taking a historical perspective, he suggests, one can analyze networks as processes. Grewal provided a review of the literature to support his ideas. First, he discussed various theories of network power, including his own definition from his book Network Power (2008), and emphasized Castells’ (2009) typology: networking power, network power, networked power, and network-making power. Second, he addressed a typology of network structures, with references to Ouchi’s framework on organizational failure (1980), Powell’s (1990) research on network forms of organization, Lipnack and Stamps’ (2000) research on virtual teams, and Ronfeldt’s (2006) research on organizational forms. Third, he addressed some anthropological views on networks and related topics such as tribes, covering the development of tribes, transitions, and concepts of exchange (reciprocity, redistribution), and relating these concepts to communication networks. He also addressed historical models, including the ancient, feudal and modern. This broad review of the literature was rich in its coverage of conceptualizations of power, such as the role of switchers and programmers and their function as the “new citizens” of the network society. He also provided a picture of the modern model, in which the state has removed the need for hierarchical reciprocity; instead, everybody can be connected through digital technology.
Questions from the audience
Because Grewal’s work was exploratory, the questions and comments from the audience were integral to this talk. Woody Powell suggested that he consider the political, economic and social networks as separate levels, each with its own network structure, and then compare them in order to assess issues of power.
Another theoretical question that emerged from the audience was: under what conditions could you predict a major transition in networks? Members of the audience agreed that the focus on networks in transition rather than stages or periodicity was central, though there were many questions about how this question could be studied effectively. One audience member suggested that Grewal consider using cities as a level of analysis because cities, defined as large concentrations of work, could be considered as singular large networks or as a population.
Manuel Castells noted that trying to map out an evolutionary theory of networks was akin to “stepping into a minefield,” but that it was a worthwhile endeavor. He noted that networks need to be put in context in order to observe how they operate, and that the role of technology is integral to the study of networks, particularly in relation to the concept of a Network Society.
Overall, Grewal’s talk brought up several intriguing questions about the role of time and history in network analysis, and he provided a review on both network and anthropological theory.
Grewal, D. S. (2008). Network Power: The Social Dynamics of Globalization. New Haven: Yale University Press.
Nigel Shadbolt is Professor of Artificial Intelligence (AI) and Deputy Head (Research) of the School of Electronics and Computer Science at the University of Southampton. He was a Founding Director of the Web Science Research Initiative, a joint endeavour between the University of Southampton and MIT, and is a Founding Director and Trustee of the Web Science Trust. He is also a Director of the World Wide Web Foundation. His current research focuses on developing Web-Based Semantic Technologies.
Nigel Shadbolt spoke about the Semantic Web and its potential for research. The Semantic Web refers to an emerging architecture that enables the sharing of structured data on the Web; it is also referred to as Linked Data and Web 3.0. The Semantic Web represents a rich opportunity for researchers because of the potential access to large amounts of data. Shadbolt also discussed Uniform Resource Identifiers (URIs), a scheme for identifying resources on the World Wide Web, and the Resource Description Framework (RDF), a data-modeling language standardized by the W3C.
In his lecture, Shadbolt noted that people developed a ‘romantic’ idea that the Semantic Web would be artificial intelligence (AI) ‘magic.’ It would create “proof and trust,” but AI never “had a hope of that.” He felt that people had gotten sidetracked from the point, which is its great potential for information sharing. The Semantic Web, according to Shadbolt, is about moving from a web of documents to “a web of data.” He noted that all HTTP (Hypertext Transfer Protocol) does is put “a thin layer of abstraction onto a hideous web of documents.” It creates physical connections between abstract machines. He cited Web addresses, domain name services, routing systems and HTML (Hypertext Markup Language) as examples of abstract protocols designed to “sit on top” of a variety of operating systems. What the Semantic Web does is create a method for abstracting and linking the internal components of this “web of data.” The essential idea, says Shadbolt, is to “give Web addresses to atomic facts.” What we have then is a set of principles for the Semantic Web that developers can attempt to scale.
Shadbolt brings up a few conceptual problems with the Semantic Web, comparing it to dark matter; it is ‘there’ but we can’t ‘feel’ it. A major difficulty lies in the problem of co-referencing. He notes that although he and Wendy Hall often work together they do not often publish together and thus the Semantic Web as such does not recognize that they are linked. It is thus necessary, argues Shadbolt, to take a closer look at how the Semantic Web is constituted.
Shadbolt discussed the significance of URIs, which are Web-based identifiers providing information about properties, values, objects, and relations (Uniform Resource Identifier, n.d.). Shadbolt defined RDF as a “knowledge representation language for the Web” that “represents information as sets of triples.” RDF is affiliated with the W3C and has become a widely used method for modeling information through syntax formats (Resource Description Framework, n.d.).
Examples of RDF sites
Shadbolt illustrated his discussion of Linked Data with several examples of current RDF resources, including DBpedia, SameAs.org, and data.gov.uk, which can be queried with SPARQL, the RDF query language. DBpedia is a site that extracts structured information from Wikipedia. It is unique in that it enables new mechanisms for navigating, linking to and building upon Wikipedia. According to Shadbolt, DBpedia describes about 3 million pieces of data. It is also an example of triple-store technology that enables browsing, navigation and semantic queries.
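Shadbolt’s “sets of triples” and the semantic queries a triple store supports can be sketched with a toy in-memory store. This is only an illustration of the idea, not DBpedia’s actual implementation, and the resource names below are invented for the example:

```python
# A toy triple store: facts held as (subject, predicate, object) triples,
# as in RDF, where each term would be a URI in a real store.
triples = {
    ("dbpedia:Berlin", "rdf:type", "dbpedia:City"),
    ("dbpedia:Berlin", "dbpedia:capitalOf", "dbpedia:Germany"),
    ("dbpedia:Paris", "rdf:type", "dbpedia:City"),
}

def query(s=None, p=None, o=None):
    # Return all triples matching the pattern; None is a wildcard,
    # the same idea as a variable in a SPARQL triple pattern.
    return {
        (ts, tp, to) for (ts, tp, to) in triples
        if (s is None or ts == s)
        and (p is None or tp == p)
        and (o is None or to == o)
    }

cities = query(p="rdf:type", o="dbpedia:City")
print(sorted(t[0] for t in cities))  # ['dbpedia:Berlin', 'dbpedia:Paris']
```

A real triple store answers the equivalent question with a SPARQL basic graph pattern such as `?s rdf:type dbpedia:City`.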
The UK site, data.gov.uk, is a second example of Linked Data. This site stems from a public service mandate by the UK government to provide open access to much government data, including health, education, crime, transportation and fiscal data. Shadbolt states that this site reflects themes of transparency and citizen engagement.
Opportunities and potential threats
This discussion brought up several opportunities and some potential challenges. Shadbolt stated that there is a need for further research into the “shape and structure” of networks. Noshir Contractor noted that these new forms of large, global data sets are huge opportunities for researchers. Though it may be a challenge to get access to some forms of data, publicly available data from sources like government agencies may be useful. Additionally, as the data.gov.uk site exemplifies, this form of data can act as both a public service and as a means to keep governments accountable by making data accessible and understandable. Arguing that data empowers, Shadbolt used the example of the UK government’s decision to make bike accident data available and the resulting production of accident-avoidance Web applications in under 24 hours. He proposed that similar linked data efforts in Haiti could aid in the coordination of relief efforts. URIs, according to Shadbolt, free data in a way that being “locked up inside spreadsheets or large databases” does not. It is Shadbolt’s conviction that governments “should establish the principle that all public services should publish in reusable form all objective data.”
However, the Semantic Web also brings up issues including privacy and data literacy. Shadbolt noted that although some people may feel comfortable with private firms, for example, Google, managing health records, governments have “a role and responsibility to the people.” He argues that it is time for the invocation of data portability and transparency. Shadbolt pointed out that the Obama administration has not adopted full data portability, and that a person who visits data.gov.uk is faced with large downloadable files that may or may not be useful. He anticipates the creation of semantic.data.gov.
Shadbolt noted also that some governments think that raw data can be too dangerous, and that some data should not be authorized for circulation because people are not data literate and cannot interpret it correctly. He then asked how this is different from the data literacy problems we witness in print media. In terms of data literacy, a seminar participant argued that this form of literacy is necessary to enable people to understand these newly accessible forms of data and metadata. In terms of privacy, one seminar participant asked what mechanisms exist to balance audience rights with the availability of information. She gave the example of the sex offender database in the U.S., which names offenders and has been controversial for taking away individuals’ privacy without providing enough context about the seriousness of past crimes. Hall responded that the Semantic Web is akin to the World Wide Web of 1994: it is new, and the rules are still being established. So far, there is no such privacy mechanism. Shadbolt also touched upon the issue of granularity in relation to privacy: if Semantic Web networks scale down to the level of the individual, this further raises the issue of privacy.
Reported by: Jaclyn Selby, Youngji Kim & Amanda Beacom
Dame Wendy Hall is Professor of Computer Science at the University of Southampton School of Electronics and Computer Science and a founding director of the Web Science Research Initiative, now the Web Science Trust (http://webscience.org). Her current research focuses on the Semantic Web and Web science. In her seminar presentation, Professor Hall provided a historical context for online networks, tracing the emergence and growth of the World Wide Web to the current development of the Semantic Web.
Hall began her presentation by observing that, throughout history, people have written about linking information and about how difficult it is to do. She noted that the brain does this very well, and so scholars have sought to develop tools that use the human brain as a model for the organization and management of information. With the creation of increasingly sophisticated machines, people began to think about how machines could be used to create cross-references, links, and associations between related units of information. In 1945, for example, Vannevar Bush, scientific advisor to U.S. President Franklin Roosevelt during World War II, wrote an Atlantic Monthly article titled “As We May Think,” which argued for new technologies that use the brain as a model for storing and finding information. This article, which Hall highlighted in her lecture as one of the inspirations for her own work, discusses how a machine could create a system of automatic, “associative indexing,” and uses terms such as “trails” and “web” to describe this system.
Hall described how innovations in computers beginning in the 1960s continued to reference or attempt extensions or augmentations of the human brain. She mentioned her colleague Ted Nelson, who coined the phrase “everything is deeply intertwingled” to express the complexity of interrelations in human knowledge. In the 1960s, Nelson first used the terms hypertext and hypermedia and created Xanadu, a hypermedia system; and Douglas Engelbart developed Augment, a project with hypertext features that envisioned the use of computers to enhance intellect. In the 1970s and ‘80s, hypertext systems were further developed in research labs and commercially with the introduction of personal computers. Hall and her colleagues created Microcosm (the Mountbatten Archive Application) in the late 1980s to store information links in databases. These ‘linkbases,’ as she referred to them, were to capture all the relationships between different pieces of information. Every link was stored as a triple: source, destination, and script. Hall noted that although the Internet existed, there was no real web. Her hypothesis was that hierarchical indexing was what was necessary to store information.
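The linkbase idea, in which links live in a database apart from the documents themselves and each link is a (source, destination, script) triple, can be sketched as follows. This is a minimal illustration only; the anchors, destinations, and scripts below are hypothetical, not Microcosm’s actual format.

```python
# Sketch of a Microcosm-style linkbase: link triples stored separately
# from documents, so any occurrence of an anchor can be resolved to its
# links at view time. All entries here are invented for illustration.
linkbase = [
    # (source anchor, destination, script)
    ("Mountbatten", "doc:mountbatten_biography", "follow"),
    ("Mountbatten", "img:mountbatten_portrait", "show"),
    ("India 1947", "doc:partition_overview", "follow"),
]

def links_for(anchor):
    """Resolve every link whose source matches the selected anchor."""
    return [(dest, script) for (src, dest, script) in linkbase if src == anchor]

print(links_for("Mountbatten"))
```

Because the links are held outside the documents, one selected word can resolve to several links, and relationships can be added without editing the documents themselves.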
Apple’s HyperCard became available on Macintosh computers in 1987. In 1989, Tim Berners-Lee began development of the World Wide Web to facilitate information sharing among scientists, creating a system of open protocols and universal standards involving Hypertext Transfer Protocol (HTTP) and Hypertext Mark-up Language (HTML). He wrote a paper called “Information Management: A Proposal” and then went on to build a demo of the ‘World Wide Web,’ which he debuted in 1991 to much skepticism. The ACM hypertext conference famously rejected Berners-Lee’s paper, but by 1993 the idea was widely accepted. In just a few years, the system became user-friendly with the introduction of the Mosaic browser, and later Netscape and Internet Explorer.
After outlining these key historical events, Hall offered some lessons learned in the development of the Web. First, she said, “big is beautiful,” meaning that, as Berners-Lee argued, the network is the most important feature of the Web as a hypertext information system. She emphasized that we had lost (for a time) conceptual and contextual linking and that the Web had been “a strangely linkless world,” with search engines filling the gap where the missing links were. Other such systems developed around the same time as the Web operated on stand-alone workstations and could be accessed only at those workstations; the Web, in contrast, may be accessed anywhere. Second, “scruffy works,” meaning that the system did not need to be perfect in order to be effective: links could fail. The third lesson, Hall said, is that “democracy rules.” The Web is based on non-proprietary protocols and universal standards, and demonstrates that either everyone uses such a system or no one will. Hall pointed out that, ironically, Web search engines such as Google, which are so integral to Web use today, operate in a spirit opposite to that of this third lesson. Whereas the Web is an open and transparent system, Google is a closed system with proprietary search algorithms and little transparency. The irony is that when Brin and Page published the paper on their page-ranking algorithm in 1997, they were told it would not scale. They had to get financial support and do the math to prove that it would, which they did in 1999. But they still could not make any money, so they came up with the idea of auctioning words, which turned out to be very successful. One of Hall’s key lessons regarding the rise of Google, which depends on the links we make to make its results more accurate, is that links equal power: if more people point to you, you are rewarded with status you do not have to pay for.
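The “links equal power” idea can be illustrated with a toy power-iteration computation in the spirit of PageRank. This is a minimal sketch over an invented three-page link graph, not Google’s proprietary algorithm.

```python
# Toy power-iteration sketch in the spirit of PageRank: each page shares
# its rank among the pages it links to, so pages with more (and better)
# in-links accumulate more "status". The link graph is hypothetical.
links = {
    "a": ["b", "c"],  # page "a" links to "b" and "c"
    "b": ["c"],
    "c": ["a"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with uniform rank
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            share = rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

ranks = pagerank(links)
# Page "c" is pointed to by both "a" and "b", so it ends up ranked highest:
# status it did not have to pay for, only attract.
print(max(ranks, key=ranks.get))
```

The ranks sum to one and converge after a few dozen iterations; the damping factor models a reader occasionally jumping to a random page rather than following links forever.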
Hall then described asking her students whether the Web is truly a hypertext system, given that its links are unidirectional and do not point back to where they came from. The World Wide Web was nonetheless so much better than what came before it that researchers did not care and busied themselves exploring “the new universe.” The web did not become a truly ‘social web,’ however, until it completed the transformation from a read-only Web to a read/write Web. Hall cited a number of revolutionary social sites (Wikipedia, Galaxy Zoo, Twitter) that are a product of our growing ability to ‘write’ to the web.
The lessons from the development of the Web, of course, also inform the development of the Semantic Web. Hall explained that whereas the Web is built on links between documents, the Semantic Web is built on links between data. This shift from documents to data allows for data re-use, reduces the requirements for human information processing, and releases the large quantity of currently inaccessible data stored in relational databases and Excel spreadsheets by allowing these data to be directly processed by machine. The building blocks of the Semantic Web are Uniform Resource Identifiers (URIs) and the Resource Description Framework (RDF), which describes and links the data, and which Hall equated to HTML. (Nigel Shadbolt’s seminar lecture, which followed Professor Hall’s, provided additional detail on these concepts.) Hall suggested that the aggregation of all this information in a standard manner might make it possible for people to pose queries to the system such as “where is the best place to study journalism?” and receive structured and useful answers.
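The shift from linking documents to linking data can be sketched with a toy triple store: every statement is a (subject, predicate, object) triple, and a question like Hall’s journalism example becomes pattern matching over triples. The URIs and predicate names below are invented illustrations, not real RDF vocabulary.

```python
# Minimal sketch of the Semantic Web's data model: facts as
# (subject, predicate, object) triples, each resource named by a URI.
# All identifiers here are hypothetical examples.
triples = [
    ("http://example.org/USC", "http://example.org/offers", "http://example.org/JournalismMA"),
    ("http://example.org/JournalismMA", "http://example.org/subject", "journalism"),
    ("http://example.org/USC", "http://example.org/locatedIn", "Los Angeles"),
]

def query(store, subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [
        (s, p, o) for (s, p, o) in store
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# "Which programmes are about journalism?"
matches = query(triples, predicate="http://example.org/subject", obj="journalism")
print([s for (s, p, o) in matches])
```

Because every fact is machine-readable and every resource has a global identifier, data published by different sources can be merged and queried together, which is what makes structured answers to questions like Hall’s conceivable.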
Hall posed the question of what will be the tipping points for widespread adoption and use of the Semantic Web. One possible tipping point, she said, is the use of the Semantic Web by governments. Both the administration of U.K. Prime Minister Gordon Brown and the administration of U.S. President Barack Obama have announced initiatives for using the Semantic Web. (See the following sites for more information: http://data.gov.uk/ and http://www.sitepoint.com/blogs/2009/03/19/obama-groundbreaking-use-semantic-web/.)
Hall concluded her talk by introducing the emerging interdisciplinary field of Web science, which she refers to also as part of Web 3.0. She envisions Web science as “a process of creative innovation, design and engineering, the social and the technical, and interpretation and analysis,” and “inter-/multi-/trans-disciplinary”—not the union of disciplines, but their intersection. Understanding the web, according to Hall, is a major challenge as large as any other global cause because nobody (as of yet) owns the web and there are possible scenarios which could end in its demise. She argued that the field—and the questions it will investigate—matter because the Web has become our cultural legacy and social heritage, and because we cannot afford to take the freedom to exchange information online for granted.
Q: What are the limits of the knowledge available online?
A: Aside from some archival data, most information is accessible on the Web.
Q: Is a limitation of the Semantic web its objective view of associations, given that the association one person makes between two pieces of information may differ from the association another person makes, depending on different ontologies?
A: Given that the World Wide Web functions without every link to every document, the Semantic Web should be able to function without all possible associations.
The Fuzziness of Inclusion/Exclusion: Network Gatekeeping Theory
Reported by: Amanda Beacom & Cuihua (Cindy) Shen
Karine Barzilai-Nahon is an assistant professor at the University of Washington Information School. She studies information policy and politics, particularly information control and gatekeeping, the digital divide, and e-government and e-business in comparative analysis. Recent work has focused on network gatekeeping theory, digital divide metrics, the organizational impact of digital natives, and the development of the concept of “cultured technology” to understand information control in secluded communities.
In her seminar presentation, Professor Barzilai-Nahon used network gatekeeping theory to examine inclusion, exclusion, and power in networks. Network gatekeeping theory, which Barzilai-Nahon has proposed and developed in a series of recent publications, departs from other approaches to information control and gatekeeping in several ways. First, it recognizes three means of exercising power in social networks: (1) decisions; (2) non-decisions; and (3) inactions that shape preferences and awareness. Previous research has emphasized decision-making by elites as a means of exerting power in social interactions. Barzilai-Nahon argues that while decision-making may be an easier mechanism for researchers to observe, non-decisions and the shaping of preferences and awareness are also significant tools for controlling information in networks.
Second, network gatekeeping theory gives equal weight to gatekeepers and the “gated,” which Barzilai-Nahon defines as “the entity subjected to gatekeeping.” In a comprehensive review of gatekeeping theories across a range of scholarly disciplines, Barzilai-Nahon found that most research has focused on the gatekeeper, and little attention has been paid to the concept of the gated. Network gatekeeping theory identifies four attributes of the gated that affect information control in networks: (1) their political power relative to the gatekeeper, an attribute commonly studied in political science; (2) their information production ability, an attribute of traditional interest to economists; (3) their relationships with the gatekeeper, a major focus of social network analysts; and (4) their alternatives in the context of gatekeeping, which is of particular interest to Barzilai-Nahon.
To consider the gated’s alternatives is to acknowledge potential fluidity in the boundaries between gatekeeper and gated in social networks. These fuzzy boundaries are a third distinct feature of network gatekeeping theory. In her seminar presentation, Barzilai-Nahon argued that there is a dynamic flow of power between the identities of the gatekeeper and the gated, and that elite status is transient. According to network gatekeeping theory, a gated actor may become a gatekeeper when the gated possesses the capability to control information and the appropriate social context exists. Gatekeeping is a dynamic status dependent upon social context. Barzilai-Nahon used the example of the Huffington Post to illustrate gatekeeper-gated dynamics. In the social context of its readers, the Huffington Post may be viewed as a gatekeeper of information, but in other contexts, such as that of news sources or of all non-readers, the Huffington Post may be viewed as a gated actor. New information and communication technologies offer novel contexts for gatekeeper-gated dynamics.
Finally, Barzilai-Nahon pointed out that fluidity or fuzziness exists not only in gatekeeper-gated status but also in the broader question of who or what is included or excluded in social networks. Barzilai-Nahon argued that inclusion/exclusion often reflects self-regulation and the social norms of specific contexts. As members of one network, we may highlight only certain characteristics of ourselves and exclude others. For example, in a professional network, an actor may not discuss a family vacation, whereas in a friend network, an actor may share vacation photos but not work projects. Information control occurs across multiple social dimensions and spheres, and therefore an actor may be included in one social network and excluded in another, or be a gatekeeper in one network and the gated in another.
Macy: When it is fuzzy, you throw someone out. So you identify the deviant and exclude them (e.g., trolls in a forum), which helps identify the inclusion criteria.
Capra: The Huffington Post only serves as gatekeeper for those who read it. Arianna Huffington had already adopted the political culture (she was selected by the culture as the gatekeeper). She appeals to a certain group and the group selects her; now she is influencing the group. The gatekeeper emerged. It is not the mob, it is the elite.
If elites exist, why would there be fuzziness of inclusion/exclusion? Because elites would like to dominate, and many self-regulation processes take them down different paths, producing unintended outcomes.
Taplin: There used to be gatekeepers and no alternatives (if a movie was made but had no distributor, the movie did not exist). Now there are gatekeepers and alternatives: you can put the movie on YouTube. The role of gatekeepers has changed – there are multiple dimensions of power.
Macy: “Never having to say sorry” (Love Story) is not love, but power. The structure guarantees that it will happen – power.
Borner: this is an active view of gatekeeping. But there are also passive ways of gatekeeping.
Barzilai-Nahon: It’s difficult to operationalize passive gatekeeping. Passive gatekeeping gets to the second dimension of power.
Tongia – it is interesting to see what is excluded – what is taken out.
Barzilai-Nahon: Google is a gatekeeper when it exercises information control, but it can be gated in other circumstances (such as in China).
Butts: In social network theory, brokerage (Burt’s argument) and exchange theory are very similar to your argument. What you have is an equilibrium of gatekeepers in different contexts. Gatekeeping is a global property, not a local property. It also depends on the structure of the network (context) in which people are embedded.
Castells: You are talking about the press and the internet, which is just one type of gatekeeping. There are other forms of gatekeeping, as in a club. Journalists were previously the powerful gatekeepers, but now journalism is crumbling. What is happening is a transformation of gatekeeping, and of the gatekeepers. It is still unfolding; we do not know for sure yet.
Macy – Nature open review experiment in 2006 is a good example.
Tongia – am I not the biggest gatekeeper for myself? We tend to look at gatekeepers from the supply side perspective.
Barzilai-Nahon, K. (2009). Gatekeeping: A critical review. Annual Review of Information Science and Technology, 43, 433-478.
Barzilai-Nahon, K. (2008). Towards a theory of network gatekeeping: A framework for exploring information control. Journal of the American Society for Information Science and Technology, 59(9), 1493-1512.