IWOMP 2024 Call For Papers (Closed)
The International Workshop on OpenMP (IWOMP) is the annual workshop dedicated to the promotion and advancement of all aspects of parallel programming with OpenMP. This pioneering workshop has been attracting an international audience of leading academic and industrial experts since 2005 and is the premier forum to present and discuss issues, trends, recent research ideas, and results related to parallel programming with OpenMP.
Until June 21, 2024, we solicited high-quality submissions of unpublished technical papers detailing innovative, original research and development related to OpenMP. The program will be available soon.
IWOMP 2024 will be hosted by the Pawsey Supercomputing Research Centre in Perth, Australia.
Important Dates
Submission Deadline (Closed) | Friday, June 21, 2024 (AoE) |
Acceptance Notifications | Friday, July 12, 2024 (AoE) |
Camera Ready Copy Deadline | Friday, July 26, 2024 (AoE) |
Background
As computing hardware has evolved from simple core replication to advanced SIMD units, deeper memory hierarchies, and heterogeneous computing, OpenMP has also evolved and extended its application programming interface to harness new capabilities across the spectrum of hardware advances. The 5.0, 5.1, 5.2, and 6.0 versions of the OpenMP specification have established OpenMP as the leading API for on-node heterogeneous parallelism that supports all versions of the C/C++ and Fortran base programming languages.
Advances in technologies such as multicore processors, OpenMP devices (accelerators such as GPGPUs, DSPs, or FPGAs), and Multiprocessor Systems on a Chip (MPSoCs), together with recent developments in OpenMP itself (e.g., metadirectives and variants for selecting device- and architecture-specific directives), present new opportunities and challenges for software and hardware developers. Recent advances in the C, C++, and Fortran base languages also offer interesting opportunities and challenges to the OpenMP programming model.
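As an illustration of the device-selection mechanisms mentioned above, the following is a minimal sketch of an OpenMP metadirective that selects a directive variant based on the targeted device architecture. The `nvptx` architecture trait, array name, and loop body are illustrative assumptions only, not part of this call.

```c
// Minimal sketch: a metadirective selecting a device-specific directive
// variant (OpenMP 5.x; "otherwise" is the 5.2 spelling of "default").
#include <stdio.h>
#define N 1024

int main(void) {
    double x[N];
    for (int i = 0; i < N; i++) x[i] = (double)i;

    // When compiling for an NVIDIA (nvptx) device, offload the loop;
    // otherwise fall back to a host parallel loop.
    #pragma omp metadirective \
        when(device={arch("nvptx")}: target teams distribute parallel for map(tofrom: x[0:N])) \
        otherwise(parallel for)
    for (int i = 0; i < N; i++)
        x[i] *= 2.0;

    printf("x[42] = %f\n", x[42]);
    return 0;
}
```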
Theme for 2024:
Advancing OpenMP for Future Accelerators
Future HPC systems will include tighter integration of multiple compute devices, such as CPUs, GPUs, FPGAs, and even QPUs. Therefore, programming model support for handling multiple levels of parallelism and for managing data across memory spaces is growing in importance. Further, the diversity of compute architectures makes programming mechanisms that enable portability and performance portability essential. This year’s theme highlights OpenMP extensions, implementations, and applications that facilitate the use of such systems; papers that detail them are particularly welcome.
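To make the data-management aspect of this theme concrete, the following is a minimal sketch of the kind of device offloading and explicit mapping of data between host and device memory spaces that OpenMP supports; the array names, sizes, and computation are illustrative assumptions only.

```c
// Minimal sketch: offloading a loop to a device and managing data movement
// between host and device memory spaces with map clauses (OpenMP 4.5+).
#include <stdio.h>
#define N 100000

int main(void) {
    static double a[N], b[N];
    for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2.0 * i; }

    // Copy a and b to the device, run the loop there, and copy a back.
    #pragma omp target teams distribute parallel for \
        map(tofrom: a[0:N]) map(to: b[0:N])
    for (int i = 0; i < N; i++)
        a[i] += b[i];

    printf("a[N-1] = %f\n", a[N - 1]);
    return 0;
}
```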
Topics
The topics include but are not limited to the following:
- Accelerated computing and offloading to devices
- Applications (in any domain) that rely on OpenMP
- Data mining and analysis or text processing and OpenMP
- Machine learning and OpenMP
- Memory model
- Memory policies and management
- Performance analysis and modeling
- Performance portability
- Proposed OpenMP extensions
- Runtime environment
- Scientific and numerical computations
- Tasking
- Tools
- Vectorization
Proceedings
As in previous years, IWOMP 2024 will publish formal proceedings of the accepted papers in Springer’s Lecture Notes in Computer Science (LNCS) series.
Organizing Committee
General Co-Chairs
- Alexis Espinosa, Pawsey Supercomputing Centre, Australia
- Michael Klemm, AMD & OpenMP ARB, Germany
Program Co-Chairs
- Bronis R. de Supinski, Lawrence Livermore National Laboratory (LLNL)
- Maciej Cytowski, Pawsey Supercomputing Centre, Australia
Publications Co-Chairs
- Jannis Klinkenberg, RWTH Aachen University
- Sam Yates, Pawsey Supercomputing Centre, Australia
Program Committee
- Ilkhom Abdurakhmanov, Pawsey Supercomputing Centre
- Mark Bull, EPCC
- Mark Cheeseman, DUG
- Florina Ciorba, University of Basel
- Johannes Doerfert, Argonne National Laboratory
- Alejandro Duran, Intel
- Deepak Eachempati, Hewlett Packard Enterprise
- Jini George, AMD
- Joachim Jenke, RWTH Aachen University
- Emily Kahl, Pawsey Supercomputing Centre
- Jannis Klinkenberg, RWTH Aachen University
- Melissa Kozul, University of Melbourne
- Michael Kruse, AMD
- Kelvin Li, IBM
- Chunhua Liao, Lawrence Livermore National Laboratory
- Stephen Olivier, Sandia National Laboratories
- Swaroop Pophale, Oak Ridge National Laboratory
- Tom Scogland, Lawrence Livermore National Laboratory
- Xavier Teruel, Barcelona Supercomputing Center