Giada Di Stefano in the SSRN Weekly Top 5 Papers

For the second time, the article Learning by Thinking: How Reflection Aids Performance, co-authored by Giada Di Stefano, has been ranked among the SSRN Weekly Top 5 Papers.

To find out more, click on this link to access the SSRN Blog.

5. Learning by Thinking: How Reflection Aids Performance by Giada Di Stefano (HEC Paris – Strategy & Business Policy), Francesca Gino (Harvard Business School), Gary Pisano (Harvard Business School), and Bradley Staats (University of North Carolina Kenan-Flagler Business School)

Productivity and time efficiency are significant concerns in modern Western societies, with time being perceived as a precious resource to guard and protect. In our daily battle against the clock, taking time to reflect on one’s work may seem to be a luxurious pursuit. Though reflection entails the high opportunity cost of one’s time, in this paper we argue and show that deliberate reflection is no wasteful pursuit: it can powerfully enhance the learning process. Learning, we find, can be augmented if one deliberately focuses on thinking about what one has been doing. Results from our analyses show that employees who spent the last 15 minutes of each day in their training period writing and reflecting on the lessons they had learned that day did 23% better in the final training test than employees who didn’t take time to consider how they had approached the task. This improvement, we find, is explained by greater self-efficacy, i.e. confidence in one’s ability to complete tasks competently and effectively.


Evaluating the h-index

Stacy Konkiel of ImpactStory has published an article entitled “Four great reasons to stop caring so much about the h-index”, questioning the reliability of the h-index in assessing a scholar’s prominence.

ImpactStory is an “open-source, web-based tool that helps researchers explore and share the diverse impacts of all their research products—from traditional ones like journal articles, to emerging products like blog posts, datasets, and software.” Their aim is to create a new recognition system for scholars based on data and web impact.

The h-index was developed by Jorge E. Hirsch, and is an index that attempts to measure the productivity and impact of scholars via their published works. The index is based on a scholar’s most-cited papers and the number of citations those papers have received from other scholars in other academic publications.
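
Concretely, the h-index is the largest number h such that h of a scholar’s papers have each been cited at least h times. Here is a minimal sketch in Python, using hypothetical citation counts:

```python
def h_index(citations):
    """Largest h such that the author has h papers with
    at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical scholar with five papers cited 10, 8, 5, 4 and 3 times:
# four papers have at least 4 citations each, but not five with at least 5.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Note that nothing in this calculation knows about field, career stage, or authorship, which is exactly where Konkiel’s critique begins.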

So why, according to Konkiel, should we ‘stop caring so much’ about it?

Firstly, Konkiel likens comparison via the h-index to ‘comparing apples and oranges.’
The h-index does not consider an author’s field of study. This makes it difficult to judge what counts as a ‘good’ h-index across domains: an author in medicine, for example, may have a much higher index than an author in mathematics, not necessarily because they are a better scholar, but simply because works in medicine may be published and cited more frequently. Yet the h-index does not take this into account.

Furthermore, the h-index does not differentiate according to age or career stage. A younger scholar will most likely have published fewer papers than one further along their career path, yet this is not taken into account. Similarly, a single author may even have several different h-indices, depending on the database consulted.

Secondly, Konkiel highlights that the h-index ignores research outputs that aren’t ‘shaped like an article.’ The h-index only counts academic articles; blog posts, patents, software, and even some books are omitted, which, the author notes, affects the h-index in fields such as chemistry.
The analysed sphere of influence is also limited to academic citations, so even if an article had great social impact or forced a change in policy, this is not taken into account.

The author goes on to question the validity of assessing an author by a single number. One figure is too narrow a measure to capture a scholar’s full prominence, and parallels are drawn with the limited accuracy of journal impact factors. Whilst the h-index may provide valid information in one area, a measure built on quantity rather than influence does not paint a full picture of an author.

Finally, Konkiel goes as far as to call the h-index ‘dumb’ when it comes to authorship: it does not weight papers according to whether they were written alone or collaboratively, nor according to an author’s position in the author list.
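
To make the authorship point concrete, here is a hypothetical fractional-counting variant in which each paper contributes 1/(number of authors) to its rank. The weighting scheme is an illustrative assumption in the spirit of fixes proposed in the bibliometrics literature, not a method from Konkiel’s article:

```python
def fractional_h_index(papers):
    """Sketch of an authorship-weighted h-index: each paper
    contributes 1 / (number of authors) to its effective rank,
    so heavily co-authored papers count for less.
    `papers` is a list of (citations, n_authors) pairs."""
    ranked = sorted(papers, key=lambda p: p[0], reverse=True)
    h, effective_rank = 0.0, 0.0
    for citations, n_authors in ranked:
        effective_rank += 1.0 / n_authors
        if citations >= effective_rank:
            h = effective_rank
        else:
            break
    return h

# The same citation counts as above, but with credit shared among co-authors:
# the score drops from 4 to roughly 2.58.
print(round(fractional_h_index([(10, 3), (8, 2), (5, 1), (4, 4), (3, 2)]), 2))
```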

In conclusion, the author details some of the attempted ‘fixes’ for the h-index, none of which have been widely implemented.

She therefore suggests looking at alternatives that base their rankings on a wider range of data, such as altmetrics.

You can find out more about altmetrics here.