A Sciento-text framework to characterize research strength of institutions at fine-grained thematic area level
  • Authors: Ashraf Uddin; Jaideep Bhoosreddy; Marisha Tiwari; Vivek Kumar Singh
  • Keywords: Computer science research; Research competitiveness; Field-based ranking; Scientometrics; UniversitySelectPlus
  • Journal: Scientometrics
  • Publication year: 2016
  • Publication date: March 2016
  • Volume: 106
  • Issue: 3
  • Pages: 1135-1150
  • Full-text size: 809 KB
  • Author affiliations: Ashraf Uddin (1)
    Jaideep Bhoosreddy (2)
    Marisha Tiwari (3)
    Vivek Kumar Singh (4)

    1. Department of Computer Science, South Asian University, New Delhi, India
    2. Department of Computer Science and Engineering, University at Buffalo, Buffalo, NY, USA
    3. DST-CIMS, Banaras Hindu University, Varanasi, India
    4. Department of Computer Science, Banaras Hindu University, Varanasi, 221005, India
  • Journal subjects: Information Storage and Retrieval; Library Science; Interdisciplinary Studies
  • Publisher: Springer Netherlands
  • ISSN: 1588-2861
Abstract
This paper presents a Sciento-text framework to characterize and assess the research performance of leading world institutions in fine-grained thematic areas. Whereas most popular university research rankings evaluate universities either on overall research performance or on a particular subject, we devise a system to identify strong research centres at the finer-grained level of research themes within a subject. The computer science (CS) research output of more than 400 universities worldwide is taken as a case in point to demonstrate the working of the framework. The Sciento-text framework comprises standard scientometric and text-analytics components. First, every research paper in the data is systematically classified into thematic areas; then standard scientometric methodology is used to identify and assess the research strengths of institutions in a particular research theme (say, Artificial Intelligence within the CS domain). The performance of the framework components is evaluated, and the complete system is deployed on the Web at www.universityselectplus.com. The framework is extendable to other subject domains with little modification.
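
The abstract describes a two-step pipeline: each paper is first assigned to a fine-grained thematic area, and standard scientometric indicators are then aggregated per institution and theme. The Python sketch below illustrates only that pipeline shape under stated assumptions; the keyword-based classifier (THEME_KEYWORDS, classify_paper), the choice of the h-index as the representative indicator, and all sample records are hypothetical stand-ins, not the paper's actual classification or indicator methodology.

from collections import defaultdict

# Hypothetical theme vocabularies: a few keywords per fine-grained CS theme.
# The paper's actual text-classification component is not reproduced in this
# record, so simple keyword matching here is only an illustrative stand-in.
THEME_KEYWORDS = {
    "Artificial Intelligence": ["neural network", "machine learning", "reasoning"],
    "Databases": ["query processing", "sql", "indexing"],
    "Computer Networks": ["routing", "wireless", "network protocol"],
}

def classify_paper(text):
    """Assign a paper to every theme whose keywords occur in its title/abstract."""
    text = text.lower()
    return {theme for theme, kws in THEME_KEYWORDS.items()
            if any(kw in text for kw in kws)}

def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def theme_strengths(papers):
    """Aggregate per (institution, theme): paper count, total citations, h-index.
    Each record is a dict: {"institution": str, "text": str, "citations": int}."""
    buckets = defaultdict(list)
    for paper in papers:
        for theme in classify_paper(paper["text"]):
            buckets[(paper["institution"], theme)].append(paper["citations"])
    return {key: {"papers": len(c), "citations": sum(c), "h_index": h_index(c)}
            for key, c in buckets.items()}

# Tiny made-up usage example.
sample = [
    {"institution": "Univ A", "text": "A neural network approach to automated reasoning", "citations": 12},
    {"institution": "Univ A", "text": "Indexing structures for fast query processing", "citations": 5},
    {"institution": "Univ B", "text": "Machine learning for wireless routing", "citations": 8},
]
for (institution, theme), stats in sorted(theme_strengths(sample).items()):
    print(institution, theme, stats)

In this sketch a paper may fall into several themes, so an institution contributes to every theme its papers touch; any standard indicator (publication counts, citation percentiles, and so on) could replace the h-index in the per-theme aggregation.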