June 1: an auspicious date, to be sure. For 'twas on such a date that Madison asked Congress to declare war on the UK. Then, precisely 155 years later, Sgt. Pepper was released. And now, in 2006, June 1 is to be the deadline
for the submission of abstracts of papers for this year's Classification
Research Workshop. The workshop is organized annually by the Special Interest
Group on Classification Research (SIG/CR) of the American Society for
Information Science and Technology (ASIS&T), as a preliminary to
ASIS&T's main Annual Meeting. This year, it's taking place in Austin, TX, on
November 4, and the theme is to be social classification. Y'know: tagging,
folksonomies, and whatnot. So, dear Dewey blog reader, sharpen your pencil and get cracking on your 500-word masterpiece. There are only ninety days to go!
Here's the full call for papers in all its glory.
SOCIAL CLASSIFICATION: PANACEA OR PANDORA?
The aims of
this year’s Classification Research Workshop are to provide a forum for
researchers, practitioners, and users to share their knowledge, perspectives,
and opinions on social classification (SC), and (in the form of the proceedings)
to make a lasting and authoritative contribution to our understanding of the
benefits that SC-based systems may provide. Papers on any aspect of the
conceptualization and/or evaluation of social classification are invited for
presentation at the workshop and publication in the open-access, peer-reviewed
proceedings.
Social
classification is a convenient, generic label that may be used to refer to any
of a number of broadly related processes by which the resources in a collection
are categorized by multiple people over an ongoing period, with the potential
result that any given resource will come to be represented by a set of labels or
descriptors that have been generated by different people. The specific processes
in question include indexing, tagging, bookmarking, annotation, and description
of the kinds that may be characterized as collaborative, cooperative,
distributed, dynamic, community-based, folksonomic, wikified, democratic,
user-assigned, or user-generated. The mid-2000s have seen rapid growth in interest in these kinds of techniques for generating descriptions of resources
for the purposes of discovery, access, and retrieval. Systems that provide
automated support for social classification may be implemented at low cost, and
are perceived to contribute to the democratization of classification by
empowering people who might otherwise remain strictly information consumers to
become information producers.
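To make the definition a little more concrete, here is a minimal sketch in Python (purely illustrative, and no part of the call itself; the names tag, labels_by_resource, and doc42 are invented for this example): several people independently attach whatever labels they like to the same resource, and the resource ends up represented by the pooled set of everyone's labels.

from collections import defaultdict

# Each resource accumulates the set of labels assigned to it,
# and we keep a record of who assigned what.
labels_by_resource = defaultdict(set)
assignments = []  # (person, resource, label) triples

def tag(person, resource, label):
    # Record one person's freely chosen label for one resource.
    assignments.append((person, resource, label))
    labels_by_resource[resource].add(label)

# Three classifiers describe the same resource in their own words...
tag("alice", "doc42", "folksonomy")
tag("bob", "doc42", "tagging")
tag("carol", "doc42", "social classification")

# ...and the resource is now findable under any of those labels.
print(sorted(labels_by_resource["doc42"]))
# ['folksonomy', 'social classification', 'tagging']

The questions posed below are essentially about what happens when a collection's index is built this way -- by many self-selected classifiers, with no controlled vocabulary -- rather than from an established scheme.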
Efforts to
conduct serious evaluations of the comparative effectiveness of such systems
have begun, but results are scattered and piecemeal. Compared with retrieval
systems based on traditional methods -- manual or automatic -- of classifying
resources, how effectively are users of SC-based systems able to find the
resources that they want? What is the impact on retrieval effectiveness of
systems designers’ decisions to pay limited attention to traditionally important
components such as vocabulary control, facet analysis, and systematic
hierarchical arrangement? Current implementations of SC tend to shy away, for
instance, from imposing the kind of vocabulary control on which classification
schemes and thesauri are conventionally founded: proponents argue that social
classifiers should be free, as far as possible, to supply precisely those class
labels that they believe will be useful to searchers in the future, whether or
not those labels have proven useful in the past. But do the advantages that are
potentially to be gained from allowing classifiers free rein in the choice of
labels outweigh those that may be obtainable by imposing some form of vocabulary and authority control, by offering browsing-based interfaces to hierarchically structured vocabularies, by establishing and complying with policies for the specificity and exhaustivity of sets of labels, or by other devices that are designed to improve classifier--searcher consistency?
Other
questions arise as a result of the reliance of SC-based systems on volunteer
labor. Given the distributed nature of SC, for example, how can it be ensured
that every resource attracts a critical mass of descriptors, rather than just
the potentially quirky choices of a small number of volunteers? Given the
self-selection of classifiers, how can it be ensured that they are motivated to
supply class labels that they would expect other searchers to use? In general,
are reductions in the costs of classification (borne by information producers) achieved only at the expense of increases in the costs of resource discovery (borne by information consumers)?
Abstracts
(500-1000 words) of papers should be submitted to both workshop co-chairs by
JUNE 1, 2006. Authors will
be notified of the program committee’s decision by JULY 1, 2006. Full papers
(3000-5000 words) should be submitted to both workshop co-chairs by SEPTEMBER 1,
2006. The workshop
will be held on NOVEMBER 4, 2006, as part of the Annual Meeting of the American
Society for Information Science and Technology (ASIS&T) in Austin, TX. It
will be the 17th in a series of annual workshops organized by ASIS&T’s
Special Interest Group on Classification Research. Workshop
co-chairs: Jonathan
Furner, Assistant
Editor, Dewey Decimal Classification, OCLC Online Computer Library Center, Inc.,
Washington, DC; Joseph Tennis, Assistant
Professor, School of Library, Archival and Information Studies, University of
British Columbia, Vancouver, BC.