The ideas behind open source software are currently being applied to the production of encyclopedias. A sample of six English-language, text-based, neutral-point-of-view online encyclopedias of this kind is identified: h2g2, Wikipedia, Scholarpedia, Encyclopedia of Earth, Citizendium and Knol. How do these projects deal with the problem of trusting their participants to behave as competent and loyal encyclopedists? Editorial policies for soliciting and processing content are shown to range from high discretion to low discretion, that is, from granting unlimited trust to granting limited trust. Their conceptions of the proper role for experts are also explored, and it is argued that these conceptions to a great extent determine editorial policies. Subsequently, internal discussions about quality assurance at Wikipedia are rendered. All indications are that review and ‘super-review’ of new edits will become policy, to be performed by Wikipedians with a better reputation. Finally, while for encyclopedias the issue of organizational trust largely coincides with epistemological trust, a link is made with theories about the acceptance of testimony. It is argued that both non-reductionist views (the ‘acceptance principle’ and the ‘assurance view’) and reductionist ones (an appeal to background conditions, and a newly defined ‘expertise view’) have been implemented in editorial strategies over the past decade.
Can trust evolve on the Internet between virtual strangers? Recently, Pettit answered this question in the negative. Focusing on trust in the sense of ‘dynamic, interactive, and trusting’ reliance on other people, he distinguishes between two forms of trust: primary trust rests on the belief that the other is trustworthy, while the more subtle secondary kind of trust is premised on the belief that the other cherishes one’s esteem and will therefore reply to an act of trust in kind (‘trust-responsiveness’). Based on this theory, Pettit argues that trust between virtual strangers is impossible: they lack all evidence about one another, which prevents the imputation of trustworthiness and renders reliance on trust-responsiveness ridiculous. I argue that this argument is flawed, both empirically and theoretically. In several virtual communities amazing acts of trust between pure virtuals have been observed. I propose that these can be explained as follows. On the one hand, social cues, reputation, reliance on third parties, and participation in (quasi-)institutions allow imputing trustworthiness to varying degrees. On the other hand, trust-responsiveness itself is also relied upon, as a necessary supplement to primary trust. In virtual markets, esteem as a fair trader is coveted since it contributes to building up one’s reputation. In task groups, a hyperactive style of action may be adopted which amounts to assuming (not inferring) trust; trustors expect that their virtual co-workers will reply in kind, since such an approach is considered the most appropriate in cyberspace. In non-task groups, finally, members often display intimacies because they are confident someone else ‘out there’ will return them. This is facilitated by the one-to-many, asynchronous mode of communication within mailing lists.
Many virtual communities that rely on user-generated content (social news sites, citizen journals, and encyclopedias in particular) offer unrestricted and immediate ‘write access’ to every contributor. It is argued that these communities do not just assume that the trust granted by that policy is well placed; they have developed extensive mechanisms that underpin the trust involved (‘backgrounding’). These mechanisms target contributors (stipulating legal terms of use and developing etiquette, both underscored by sanctions) as well as the contents contributed by them (patrolling for illegal and/or vandalistic content, variously performed by humans and bots; voting schemes). Backgrounding trust is argued to be important since it facilitates the avoidance of bureaucratic measures that may easily cause unrest among community members and chase them away.
Open-content communities that focus on co-creation without requirements for entry have to face the issue of institutional trust in contributors. This research investigates the various ways in which these communities manage this issue. It is shown that open-source software communities continue to rely mainly on hierarchy (reserving write access for higher echelons), which substitutes for (the need for) trust. Encyclopedic communities, though, largely avoid this solution. In the particular case of Wikipedia, which is confronted with persistent vandalism, another arrangement has been pioneered instead. Trust (i.e. full write access) is ‘backgrounded’ by means of a permanent mobilization of Wikipedians to monitor incoming edits. Computational approaches have been developed for the purpose, yielding both sophisticated monitoring tools used by human patrollers and bots that operate autonomously. Measures of reputation are also under investigation within Wikipedia; their incorporation into monitoring efforts, as an indicator of the trustworthiness of editors, is envisaged. These collective monitoring efforts are interpreted as focusing on avoiding damage to Wikipedian spaces, thereby allowing the discretionary powers of editing to remain intact for all users. Further, the essential differences between backgrounding and substituting trust are elaborated. Finally, it is argued that the Wikipedian monitoring of new edits, especially in its heavy reliance on computational tools, raises a number of moral questions that need to be answered urgently.
Two property regimes for software development may be distinguished. Within corporations, on the one hand, a Private Regime obtains which excludes all outsiders from access to a firm’s software assets. It is shown how the protective instruments of secrecy and both copyright and patent have been strengthened considerably during the last two decades. On the other hand, a Public Regime among hackers may be distinguished, initiated by individuals, organizations or firms, in which source code is freely exchanged. It is argued that copyright is put to novel use here: claiming their rights, authors write ‘open source licenses’ that allow public usage of the code while at the same time regulating the inclusion of users. A ‘regulated commons’ is created. The analysis focuses successively on the most important open source licenses to emerge, the problem of possible incompatibility between them (especially as far as the dominant General Public License is concerned), and the fragmentation into several user communities that may result.
The English-language Wikipedia is constantly being plagued by vandalistic contributions on a massive scale. In order to fight them, its volunteer contributors deploy an array of software tools and autonomous bots. After an analysis of their functioning and of the ‘coactivity’ in use between humans and bots, this research ‘discloses’ the moral issues that emerge from the combined patrolling by humans and bots. Administrators provide the stronger tools only to trusted users, thereby creating a new hierarchical layer. Further, the surveillance exhibits several troubling features: questionable profiling practices, the use of the controversial measure of reputation, ‘oversurveillance’ where quantity trumps quality, and a prospective loss of the required moral skills whenever bots take over from humans. The most troubling aspect, though, is that Wikipedia has become a Janus-faced institution. One face is the basic platform of MediaWiki software, transparent to all. Its other face is the anti-vandalism system, which, in contrast, is opaque to the average user, in particular as a result of the algorithms and neural networks in use. Finally, it is argued that this secrecy impedes a much-needed discussion from unfolding; a discussion that should focus on a ‘rebalancing’ of the anti-vandalism system and the development of more ethical information practices towards contributors.
Hacker communities of the 1970s and 1980s developed a quite characteristic work ethos. Its norms are explored and shown to be quite similar to those which Robert Merton suggested govern academic life: communism, universalism, disinterestedness, and organized scepticism. In the 1990s the Internet multiplied the scale of these communities, allowing them to create successful software programs like Linux and Apache. After renaming themselves the ‘open source software’ movement, with an emphasis on software quality, they succeeded in gaining corporate interest. As one of the main results, their ‘open’ practices have entered industrial software production. The resulting clash of cultures, between the more academic CUDOS norms and their corporate counterparts, is discussed and assessed. In all, the article shows that software practices are a fascinating seedbed for the genesis of work ethics of various kinds, depending on their societal context.
In order to fight massive vandalism, the English-language Wikipedia has developed a system of surveillance which is carried out by humans and bots, supported by various tools. Central to the selection of edits for inspection is the use of filters or profiles. Can this profiling be justified? On the basis of a careful reading of Frederick Schauer’s books about rules in general (1991) and profiling in particular (2003), I arrive at several conclusions. The effectiveness, efficiency, and risk-aversion of edit selection all greatly increase as a result. The argument for increasing predictability suggests making all details of profiling manifestly public. Also, a wider distribution of the more sophisticated anti-vandalism tools seems indicated. As to the specific dimensions used in profiling, several critical remarks are developed. When patrollers use ‘assisted editing’ tools, severe ‘overuse’ of several features (anonymity, warned before) is a definite possibility, undermining profile efficacy. The easy remedy suggested is to render all of them invisible on the interfaces as displayed to patrollers. Finally, concerning not only assisted editing tools but anti-vandalism tools generally, it is argued that the anonymity feature is a sensitive category: anons have been in dispute for a long time (while being more prone to vandalism). Targeting them as a special category violates the social contract upon which Wikipedia is based. The feature is therefore a candidate for mandatory ‘underuse’: it should be banned from all anti-vandalism filters and profiling algorithms, and no longer be visible as a special edit trait.
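To make the profiling mechanism discussed in this abstract concrete, the following is a minimal illustrative sketch, not Wikipedia’s actual code: a toy profile that scores incoming edits on a few features (the abstract mentions anonymity and prior warnings) so that patrollers can inspect the most suspicious edits first. All feature names, weights, and thresholds here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Edit:
    anonymous: bool       # edit made without a registered account
    warned_before: bool   # contributor has received prior vandalism warnings
    chars_removed: int    # size of the deletion in this edit

# Hypothetical weights; a real system would tune these against labelled data.
WEIGHTS = {"anonymous": 2.0, "warned_before": 3.0, "large_removal": 1.5}

def suspicion_score(edit: Edit) -> float:
    """Higher score = higher priority for human inspection."""
    score = 0.0
    if edit.anonymous:
        score += WEIGHTS["anonymous"]
    if edit.warned_before:
        score += WEIGHTS["warned_before"]
    if edit.chars_removed > 500:  # arbitrary threshold for a 'large' removal
        score += WEIGHTS["large_removal"]
    return score

def triage(edits: list[Edit]) -> list[Edit]:
    """Queue edits for patrol, most suspicious first."""
    return sorted(edits, key=suspicion_score, reverse=True)
```

On this sketch, the ‘mandatory underuse’ argued for in the abstract would amount to deleting the anonymity term from the profile altogether, so that anonymous and registered edits are scored identically.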
During the last two decades, speeded up by the development of the Internet, several types of commons have been opened up for intellectual resources. In this article their variety is explored as to the kind of resources and the type of regulation involved. The open source software movement initiated the phenomenon by creating a copyright-based commons of source code that can be labelled ‘dynamic’: allowing both use and modification of resources. Additionally, such a commons may be either protected from appropriation (by ‘copyleft’ licensing) or unprotected. Around the year 2000, this approach was generalized by the Creative Commons initiative. In the process, it added a ‘static’ commons, in which only use of resources is allowed. This mould was applied to the sciences and the humanities in particular, and various Open Access initiatives unfolded. A final aspect of copyright-based commons is the distinction between active and passive commons: while the latter is only a site for obtaining resources, the former is also a site for the production of new resources by communities of volunteers (‘peer production’). Finally, several patent commons are discussed, which mainly aim at preventing patents from blocking the further development of science. Throughout, attention is drawn to the interrelationships between the various commons.
How may human agents come to trust (sophisticated) artificial agents? At present, since the trust involved is non-normative, this would seem to be a slow process, depending on the outcomes of the transactions. Some more options may soon become available, though. As debated in the literature, humans may meet (ro)bots as they are embedded in an institution. If they happen to trust the institution, they will also trust it to have tried out and tested the machines in its back corridors; as a consequence, they approach the robots involved as being trustworthy (“zones of trust”). Properly speaking, users rely on the overall accountability of the institution. Besides this option, we explore some novel ways for trust development: trust becomes normatively laden, and thereby the mechanism of exclusive reliance on the normative force of trust (as-if trust) may come into play, the efficacy of which has already been proven for persons meeting face-to-face or over the Internet (virtual trust). For one thing, machines may evolve into moral machines, or machines skilled in the art of deception. While both developments might seem to facilitate proper trust and turn as-if trust into a feasible option, they are hardly to be taken seriously (being science fiction, immoral, or both). For another, the new trend in robotics is towards coactivity between human and machine operators in a team (away from making robots as autonomous as possible). Inside the team, trust is a necessity for smooth operations. In support of this, humans in particular need to be able to develop and maintain accurate mental models of their machine counterparts. Nevertheless, the trust involved is bound to remain non-normative. It is argued, though, that excellent opportunities exist to build relations of trust toward outside users who are pondering their reliance on the coactive team. The task of managing this trust has to be allotted to the human operators of the team, who operate as a linking pin between the outside world and the team. Since the robotic team has now been turned into an anthropomorphic team, users may well develop normative trust towards it; correspondingly, trusting the team in as-if fashion becomes feasible.