Multi-Threaded Parallel I/O for OpenMP Applications
  • Authors: Kshitij Mehta (1); Edgar Gabriel (1)
    1. Department of Computer Science, University of Houston, Houston, TX 77204, USA
  • Keywords: Shared memory system; Parallel I/O; OpenMP
  • Journal: International Journal of Parallel Programming
  • Publication date: April 2015
  • Year: 2015
  • Volume: 43
  • Issue: 2
  • Pages: 286–309
  • Full-text size: 1,406 KB
  • Journal category: Computer Science
  • Journal subjects: Theory of Computation; Processor Architectures; Software Engineering, Programming and Operating Systems
  • Publisher: Springer Netherlands
  • ISSN: 1573-7640
Abstract
Processing large quantities of data is a common scenario for parallel applications. While distributed-memory applications can improve the performance of their I/O operations by using parallel I/O libraries, no comparable support exists today for applications using shared-memory programming models such as OpenMP. This paper presents parallel I/O interfaces for OpenMP. We discuss the rationale behind our design decisions, present the interface specification and an implementation within the OpenUH compiler, and discuss a number of optimizations performed. We demonstrate the benefits of this approach on different file systems for multiple benchmarks and application scenarios. In most cases, we observe significant improvements in I/O performance compared to the sequential version. Furthermore, we compare the OpenMP I/O functions introduced in this paper to MPI I/O and demonstrate the benefits of the new interfaces.
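The interface routines themselves are not reproduced in this record. Purely as an illustration of the access pattern the paper targets, and not of the interfaces it proposes, the sketch below writes a shared file from an OpenMP parallel region using POSIX pwrite(): each thread writes a disjoint block at its own offset, so no file locking or serialization through a single writer thread is needed. The file name, block size, and fill pattern are arbitrary choices for this example.

    /* Minimal sketch (not the paper's proposed interface): every OpenMP
       thread writes its own disjoint block of a shared file via pwrite(). */
    #include <fcntl.h>
    #include <omp.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define BLOCK_SIZE (4L * 1024 * 1024)   /* 4 MiB per thread, arbitrary */

    int main(void)
    {
        /* One shared descriptor; pwrite() takes an explicit offset, so
           concurrent calls do not race on the shared file position. */
        int fd = open("out.dat", O_CREAT | O_WRONLY | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        #pragma omp parallel
        {
            int tid = omp_get_thread_num();
            char *buf = malloc(BLOCK_SIZE);
            if (buf != NULL) {
                memset(buf, 'A' + (tid % 26), BLOCK_SIZE);

                /* Each thread targets its own disjoint region of the file. */
                off_t offset = (off_t)tid * BLOCK_SIZE;
                if (pwrite(fd, buf, BLOCK_SIZE, offset) != (ssize_t)BLOCK_SIZE)
                    fprintf(stderr, "thread %d: short or failed write\n", tid);

                free(buf);
            }
        }

        close(fd);
        return 0;
    }

Compiled with an OpenMP-capable compiler (e.g. gcc -fopenmp), the program produces one contiguous block per thread in out.dat.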
