A comparison of two ways of evaluating research units working in different scientific fields
  • Authors: Antonio Perianes-Rodriguez; Javier Ruiz-Castillo
  • Keywords: Citation analysis; Aggregation; All-sciences case; Field normalization
  • Journal: Scientometrics
  • Publication date: February 2016
  • Year: 2016
  • Volume: 106
  • Issue: 2
  • Pages: 539-561
  • Full-text size: 862 KB
  • Author affiliations: Antonio Perianes-Rodriguez (1)
    Javier Ruiz-Castillo (2)

    1. SCImago Research Group, Departamento de Biblioteconomía y Documentación, Universidad Carlos III, Madrid, Spain
    2. Departamento de Economía, Universidad Carlos III, Madrid, Spain
  • Subjects: Information Storage and Retrieval; Library Science; Interdisciplinary Studies
  • Publisher: Springer Netherlands
  • ISSN:1588-2861
Abstract
This paper studies the evaluation of research units that publish their output in several scientific fields. A possible solution relies on the prior normalization of the raw citations received by publications in all fields; in a second step, a citation indicator is applied to the units' field-normalized citation distributions. We also study an alternative solution that begins by applying a size- and scale-independent citation impact indicator to the units' raw citation distributions in all fields; in a second step, the citation impact of any research unit is calculated as the average, weighted by publication output, of the citation impact that the unit achieves in each field. The two alternatives are compared using the 500 universities in the 2013 edition of the CWTS Leiden Ranking, whose research output is evaluated according to two citation impact indicators with very different properties. We use a large Web of Science dataset consisting of 3.6 million articles published in the 2005–2008 period, and a classification system distinguishing between 5119 clusters. The two main findings are as follows. First, differences in production and citation practices between the 3332 clusters with more than 250 publications account for 22.5 % of overall citation inequality; after the standard field-normalization procedure, in which cluster mean citations are used as normalization factors, this quantity is reduced to 4.3 %. Second, the differences between the university rankings produced by the two solutions to the all-sciences aggregation problem are of a small order of magnitude for both citation impact indicators.
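The two aggregation routes described in the abstract can be sketched in a few lines. This is not the authors' code; it is a minimal illustration, with toy data and a deliberately simple indicator (mean field-normalized citations), of (1) normalizing citations by a field reference value first and then applying the indicator to the pooled distribution, versus (2) applying the indicator within each field and then taking the publication-weighted average across fields. All variable names and numbers below are illustrative assumptions.

```python
# Hedged sketch of the two solutions to the all-sciences aggregation
# problem discussed in the paper. Toy data, not the authors' dataset.
from collections import defaultdict

# One research unit's publications as (field, raw citations) pairs.
pubs = [("A", 10), ("A", 30), ("B", 1), ("B", 3)]

# Field reference means (in practice, mean citations per cluster of the
# classification system, computed over all publications in the field).
field_means = {"A": 20.0, "B": 2.0}

# Route 1: field-normalize first, then apply the indicator (here, the
# mean) to the unit's pooled normalized citation distribution.
normalized = [c / field_means[f] for f, c in pubs]
route1 = sum(normalized) / len(normalized)

# Route 2: apply the indicator within each field, then average across
# fields, weighting each field by the unit's publication output in it.
by_field = defaultdict(list)
for f, c in pubs:
    by_field[f].append(c)
impacts = {f: (sum(cs) / len(cs)) / field_means[f]
           for f, cs in by_field.items()}
route2 = sum(len(by_field[f]) * impacts[f] for f in by_field) / len(pubs)

# For a mean-based indicator the two routes coincide algebraically;
# for other indicators (e.g. percentile- or threshold-based ones, as
# studied in the paper) they can diverge.
print(route1, route2)  # both 1.0 for this toy unit
```

For the mean indicator the equality is exact, which is why the paper's empirical question only bites for indicators with different properties, such as the high-impact indicators applied to the Leiden Ranking universities.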
