The Web Science Trust

Crowdsourcing in Article Evaluation

Peters, Isabella and Haustein, Stefanie and Terliesner, Jens (2011) Crowdsourcing in Article Evaluation. pp. 1-4. In: Proceedings of the ACM WebSci'11, June 14-17 2011, Koblenz, Germany.

Qualitative journal evaluation makes use of cumulated content descriptions of single articles. These can be represented by author-generated keywords, professionally indexed subject headings, automatically extracted terms, or reader-generated tags as used in social bookmarking systems. It is assumed that the users' view of article content in particular differs significantly from the authors' or indexers' perspectives. To verify this assumption, title and abstract terms, author keywords, Inspec subject headings, KeyWords Plus™ and tags are compared by calculating the overlap between the respective datasets. Our approach includes extensive term preprocessing (i.e., stemming and spelling unification) to obtain a homogeneous term collection. When term overlap is calculated for every single document in the dataset, similarity values are low. The study thus confirms the assumption that the different types of keywords each reflect a different perspective on the articles' contents, and that tags (cumulated across articles) can be used in journal evaluation to represent a reader-specific view of published content.
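The overlap calculation described in the abstract can be sketched as a Jaccard similarity over normalized term sets. In this illustrative sketch, a naive suffix-stripping rule stands in for the stemming and spelling unification the paper applies; the function names and the simplified stemmer are assumptions for demonstration, not the authors' implementation:

```python
def normalize(term: str) -> str:
    """Lowercase a term and apply naive suffix stripping as a
    stand-in for proper stemming/spelling unification."""
    t = term.lower().strip()
    for suffix in ("ies", "es", "s"):
        if t.endswith(suffix) and len(t) > len(suffix) + 2:
            stem = t[: -len(suffix)]
            # Map plural "-ies" back to a "-y" singular, e.g. folksonomies -> folksonomy.
            return stem + "y" if suffix == "ies" else stem
    return t


def jaccard_overlap(terms_a, terms_b) -> float:
    """Jaccard similarity between two keyword sets after normalization:
    |A ∩ B| / |A ∪ B|, in [0, 1]."""
    a = {normalize(t) for t in terms_a}
    b = {normalize(t) for t in terms_b}
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)
```

For example, comparing author keywords against reader tags for one article would give a per-document similarity value; low values across the dataset indicate that the two term sources describe content from different perspectives.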

Item Type: Conference or Workshop Item (Poster)
Web Science Comments: WebSci Conference 2011
Subjects: WS1 Computer Science
WS9 Sociology > WS92 Theory of Groups
Web Science Events > Web Science 2011
ID Code: 487
Deposited By: Lisa Sugiura
Deposited On: 07 Jun 2011 16:41
Last Modified: 25 Oct 2011 17:11

