To find section titles for a new article, the system first looks for similar existing articles. With these titles in hand, it searches the web for information, and finally applies content summarization and a paraphrasing algorithm. The researchers uploaded a number of the resulting articles to Wikipedia, and found that a portion of them survived. Some were heavily edited after upload, others not so much. While I was enthusiastic about the results, I was surprised by the suboptimal quality of the articles I reviewed – three that were mentioned in the paper. After a brief discussion with the authors, a wider discussion was initiated on the Wiki-research mailing list. This was followed by an entry on the English Wikipedia administrators’ noticeboard (which includes a list of all accounts used for this particular research paper). The discussion led to the removal of most of the remaining articles. It concerned the ethical implications of the research, and the use of Wikipedia for such an experiment without the consent of Wikipedia contributors or readers. The first author of the paper took an active part in the discussion; he initially showed a lack of awareness of these issues, and appeared to learn a lot from the exchange. He promised to take these lessons back to the relevant research community – a positive outcome. In general, this episode is instructive for engineers and computer scientists, who often show a lack of awareness of certain ethical issues in their research. Computer scientists are typically trained to think about bits and complexities, and rarely discuss in depth how their work impacts human lives. Whether it’s social networks experimenting with the mood of their users, current discussions of biases in machine-learned models, or the experimental upload of automatically created content to Wikipedia without community approval, computer science has generally not reached the level of awareness found in some other sciences about the possible effects of research on human subjects, at least as far as this reviewer can tell.
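To make the workflow concrete, here is a minimal sketch of the content-generation pipeline described at the start of this review. All function names and data shapes are hypothetical illustrations, not the authors' actual implementation, which the paper does not reproduce here.

```python
# Hypothetical sketch of the article-generation pipeline: infer section
# titles from similar existing articles, then fill each section via web
# search + summarization + paraphrasing. Not the authors' real code.

def section_titles_from_similar_articles(similar_articles):
    """Collect candidate section titles, ranked by how many similar
    existing articles use them."""
    counts = {}
    for article in similar_articles:
        for title in article["sections"]:
            counts[title] = counts.get(title, 0) + 1
    return sorted(counts, key=counts.get, reverse=True)

def build_article(topic, similar_articles, search, summarize, paraphrase):
    """Assemble a draft article: one web search, summarization pass, and
    paraphrasing pass per inferred section title. The search, summarize,
    and paraphrase callables are placeholders for the real components."""
    article = {}
    for title in section_titles_from_similar_articles(similar_articles):
        snippets = search(f"{topic} {title}")
        article[title] = paraphrase(summarize(snippets))
    return article
```

The point of the sketch is the division of labor: structure is borrowed from comparable articles, while content is harvested and rewritten from web sources.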
Even on Wikipedia, there’s no clear-cut, succinct policy I could have pointed the researchers to. The use of sockpuppets was a clear violation of policy, but an incidental component of the research, and WP:POINT was a stretch to cover the situation at hand. In the end, what we can suggest to researchers is to check back with the Wikimedia Research list. A lot of people there have experience with designing research plans with the community in mind, and that can help to avoid uncomfortable situations. See also our earlier coverage, “Bot detects theatre play scripts on the web and writes Wikipedia articles about them”, and other similarly themed papers the authors have published since then, such as “WikiKreator: Automatic Authoring of Wikipedia Content”.

The next paper provides justification for the efficacy of vandalism profiling, and largely absolves it of some of the objections that are commonly associated with the use of profiling in other settings. Although generally well-informed about both the practice and the academic research of vandalism fighting, it unfortunately fails to connect to an existing debate about very much the same topic – potential biases of artificial-intelligence-based anti-vandalism tools against anonymous edits – that began last year.
There, de Laat’s concerns included the fact that some stronger tools (rollback, Huggle, and STiki) are available only to trusted users and “cause a loss of the required moral skills in relation to newcomers”, and that the tools lack transparency about how they operate (in particular when more sophisticated artificial-intelligence/machine-learning algorithms such as neural networks are used). The present paper expands on a separate but related concern, about the use of “profiling” to pre-select which recent edits will be subject to closer human review. The author emphasizes that on Wikipedia this usually does not mean person-based offender profiling (building profiles of individuals committing vandalism), citing only one exception. Rather, “the anti-vandalism tools exemplify the broader type of profiling” that focuses on actions. Based on Schauer’s work, the author asks the following questions: “Is this profiling profitable, does it bring the rewards that are usually associated with it?” and “is this profiling approach towards edit selection justified? In particular, do any of the dimensions in use raise moral objections? If so, can these objections be met in a satisfactory fashion, or do such controversial dimensions have to be adapted or eliminated?”

(Image caption: “But snakes are much more dangerous!”)

According to Schauer, while general rules are always less fair than case-by-case decisions, their existence can be justified by other arguments. To answer the first question, the author turns to Schauer’s work on rules, in a brief summary that is worth reading for anyone interested in Wikipedia policies and guidelines – although de Laat instead applies the concept to the “procedural rules” implicit in vandalism profiling (such as that anonymous edits are more likely to be worth scrutinizing).
First, Schauer “resolutely pushes aside the argument from fairness: decision-making based on rules can only be less just than deciding each case on a particularistic basis”. […] “If change is on a society’s agenda, the stability argument turns into an argument against having (simple) rules.” The author cautions that these four arguments have to be reinterpreted when applying them to vandalism profiling, because it consists of “procedural rules” (which edits should be selected for inspection) rather than “substantive rules” (which edits should be reverted as vandalism, or which animals should be disallowed from the restaurant). While in the case of substantive rules their absence would mean having to judge everything on a case-by-case basis, the author asserts that procedural rules arise in a situation where the alternative would be to not judge at all in many cases, because “we have no means at our disposal to check and pass judgment on all of them; a selection of a kind has to be made. So it is here that profiling comes in”. With that qualification, Schauer’s second argument provides justification for “Wikipedian profiling […]”. Here, though, de Laat fails to explain the benefits of vandals being able to predict which kinds of edits will be subject to scrutiny. This also calls into question his subsequent remark that “it is unfortunate that the anti-vandalism system in use remains opaque to ordinary users”. The remaining two of Schauer’s four arguments are judged as less pertinent. But overall the paper concludes that it is possible to justify the existence of vandalism profiling rules as beneficial via Schauer’s theoretical framework. Next, de Laat turns to question 2, on whether vandalism profiling is also morally justified. Here he relies on later work by Schauer, a 2003 book titled “Profiles, Probabilities, and Stereotypes”, which studies such matters as profiling by tax officials (selecting which taxpayers have to undergo an audit), by airport security (selecting passengers for screening), and by police officers. While profiling of some kind is a necessity for all these officials, the particular characteristics (dimensions) used for profiling can be highly problematic. For de Laat’s study of Wikipedia profiling, “two types of complications are important”: (1) possible overuse of the profiling dimensions, and (2) the use of socially sensitive dimensions. The second type of complication is illustrated by the specific case of luggage screening in the US, where “the factors of race, religion, ethnicity, nationality, and gender have expressly been excluded from profiling”. Applying this to the case of Wikipedia’s anti-vandalism efforts, de Laat first observes that complication (1) (overuse) is not a concern for fully automated tools like ClueBot NG – obviously their algorithm applies the existing profile directly, without a human intervention that could introduce this kind of bias. For Huggle and STiki, however, “I see several possibilities for features to be overused by patrollers, thereby spoiling the optimum efficacy achievable by the profile embedded in those tools.” This is because the two tools not only use these features in their automatic pre-selection of edits to be reviewed, but also expose at least the fact of whether an edit was anonymous to the human patroller in the edit-review interface.
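The kind of metadata-based pre-selection at issue can be illustrated with a toy sketch. The features, weights, and function names below are invented for illustration; they are not taken from STiki, Huggle, or any other actual tool, which learn or configure such signals quite differently.

```python
# Toy illustration of profiling-based edit triage: score each edit from
# metadata features, then send only the highest-scoring edits to human
# patrollers. Features and weights are invented for this sketch.

def suspicion_score(edit, weights):
    """Combine binary edit features into a score; higher = inspect sooner."""
    return sum(w for feature, w in weights.items() if edit.get(feature))

def triage(edits, weights, budget):
    """Patrollers can only review `budget` edits, so rank by score and let
    the rest pass unreviewed -- the 'procedural rule' de Laat analyzes."""
    ranked = sorted(edits, key=lambda e: suspicion_score(e, weights),
                    reverse=True)
    return ranked[:budget]
```

The sketch makes the ethical question tangible: whichever features carry weight (anonymity, geolocation, and so on) determine whose edits are disproportionately scrutinized.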
(However, there seems to have been no attempt to study empirically whether this overuse actually occurs.) Regarding complication (2), whether some of the features used for vandalism profiling are socially sensitive, de Laat highlights that they include some amount of discrimination by nationality: IP edits geolocated to the US, Canada, and Australia have been found to contain vandalism more frequently, and are thus more likely to be singled out for inspection.
However, he does not consider this concern “strong enough to warrant banning the country-dimension and correspondingly sacrifice some profiling efficacy”, chiefly because there do not appear to be a lot of nationalistic tensions within the English Wikipedia community that could be stirred up by this. In contrast, de Laat argues against “the targeting of contributors who choose to remain anonymous […]”. Also, he rejects the concern that anonymous contributors might be more likely to be the victims of false positives (“normally […]”), and that, at any rate, the harm involved would seem to be small in comparison with the harassment of racial profiling. Instead of concerns about individual harm, “my main argument for the ban is a decidedly moral one. From the very beginning the Wikipedian community has operated on the basis of a […] Notice that I argue, in effect, that the Wikipedian community has only two choices: either accept anons as full citizens or not; but there is no morally defensible social contract in between.”