
FoRC Rm S005 (assessment and impact)

Cloud created by:

Simon Buckingham Shum
15 August 2011

Working materials from the FoRC Workshop group meeting in this room.

Add notes, slides, links, references, comments, etc...

Anita de Waard
10:09pm 16 August 2011

Assessment / Impact session August 16


  • Eve Gray
  • Laura Czerniewicz
  • Ivan Herman (chair)
  • Herbert van den Sompel (notes)
  • Michael Kurtz
  • Jarkko Siren
  • Peter van den Besselaar
  • Anita de Waard (part of the time)

Opening statements:

- The current assessment mechanism is counterproductive to scholarly communication; we need to make policy makers realize and accept that. Only formal citations count, not other forms of impact.
- What is impact assessment? Assessment based on what? Do we need assessment of individuals?
- The Impact Factor doesn't work across disciplines. Should metrics be on people or on artifacts?
- The perspective should be about value and how value relates to business and impact. Measure value! But how?
- The scholarly communication system is skewed by impact assessment as it stands.
- The system is counterproductive, but measures are essential for assessing individuals and setting funding policy. Question: how to come up with other metrics that can be generated in an open and scalable way? Question: how to get those metrics accepted?

- Peter: does not think institutions and funding agencies base decisions on the Impact Factor, meaning it is not an element in deciding whether a project gets funded; it does play a role in setting funding policies. Considerable disagreement from others that it is not used for individuals.
- Different layers of Impact Factor usage: individual => aggregate / science funding policy.
- The National Research Council ranks departments and universities via the Impact Factor, every 10 years. The ranking is now too complex in its multiple dimensions to be usable.

Multidimensional metrics model: count various things that, possibly, apply across disciplines. Simplicity of the metric is important.
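A multidimensional model like the one discussed could be sketched as a profile of counts normalized per dimension, so that no single number (like the Impact Factor) dominates. The dimension names and baseline values below are purely illustrative assumptions, not anything agreed in the session.

```python
# Illustrative sketch only: dimension names, counts, and baselines are
# hypothetical, chosen to show the shape of a multidimensional profile.

def metrics_profile(counts, baselines):
    """Normalize raw counts against a per-discipline baseline so the
    dimensions stay comparable across fields."""
    return {dim: counts.get(dim, 0) / baselines[dim]
            for dim in baselines}

# Hypothetical raw counts for one article
counts = {"citations": 12, "downloads": 3400, "teaching_reuse": 2}
# Hypothetical per-discipline medians used as baselines
baselines = {"citations": 6, "downloads": 1700, "teaching_reuse": 1}

profile = metrics_profile(counts, baselines)
# Each dimension then reads as "times the disciplinary median",
# leaving decision makers free to weigh dimensions as they see fit.
```

Keeping the output as a profile rather than collapsing it into one score matches the point about simplicity: each dimension stays legible on its own.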

Do we also need to talk about, e.g., service to the community as part of assessment? Is that scholarly communication?

What are those new dimensions? Smells like alt.metrics. Nope, this ended up not smelling like alt.metrics at all.

Why broken
- researchers in Africa cannot publish in high-ranking journals even if the paper is about millions of people dying of some disease
- the way we conduct science has changed so fundamentally that a metrics mechanism that ignores this change is totally passé
- real impact is manifested in different ways now (e.g. we know who the core players are in a scientific community, and that is not based on "objective" metrics).

The stellar researchers are known by their community; all the others not necessarily, and metrics can help there. The further one gets removed from a given scientific community (towards other communities, the level of the institution, the level of the country, the funding agency), the more indicators become necessary.

What are metrics to assess research communication system, rather than to assess individuals?

Which dimensions:
- how do we measure how research contributes to society (e.g. development goals in Africa)
- Netherlands: "evaluate research in context" effort. The quality of communication between research and the community at large determines societal impact.
- local versus global impact
- economic impact
- quality of communication to the general public
- measures depend on goals. In many cases citations are good, but in nursing, for example, readership becomes important.
- need to be able to get at the metrics, otherwise you have done nothing
- download counts (better for measuring social impact). Can be gamed; can be used under the right conditions.
- crowd-sourced evaluation (e.g. Faculty of 1000)
- used for teaching (knowledge worth being transferred)
- used in lectures
- general level of reuse
- openness

Innovation systems thinking: Research => Patent => Commercial. We need to change that thinking.

The problem we see is that the IF and other simple metrics, taken individually, must be taken with a grain of salt, but if we used multiple dimensions we might get a more just system. Decision makers may choose which dimensions to use.

------ post meeting ------

The accessibility of metrics (or of the data from which to derive metrics) across systems is a big issue, e.g. download data is not consistently available; an API for obtaining a metric may only allow a limited number of calls per day; ...
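The daily call limits mentioned above typically force harvesting clients to throttle themselves. A minimal sketch of such a limiter follows; the quota value and class name are hypothetical, not taken from any real metrics API.

```python
import time

class DailyRateLimiter:
    """Refuse calls once a per-day quota is spent; resets after 24 hours.
    The quota is a hypothetical example, not any real API's limit."""

    def __init__(self, max_calls_per_day):
        self.max_calls = max_calls_per_day
        self.window_start = time.time()
        self.calls = 0

    def allow(self):
        now = time.time()
        if now - self.window_start >= 86400:  # a new 24h window: reset
            self.window_start = now
            self.calls = 0
        if self.calls < self.max_calls:
            self.calls += 1
            return True
        return False

# With a quota of 2, the first two calls pass and the third is refused
limiter = DailyRateLimiter(max_calls_per_day=2)
results = [limiter.allow() for _ in range(3)]
```

In practice a harvester would pair such a limiter with a local cache, so that metric values already fetched do not consume quota again.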

Author disambiguation - ORCID???

Reputation management

Output types and metrics for output types


