FOA 25-3450: Funding opportunity solicitation in the Early Career Research Program; $136 million over 5 years (FY25-30)
Computer Science: Systems
Programming Models and Environments: Innovative programming models for developing applications on next-generation platforms, exploiting unprecedented parallelism, heterogeneity of memory systems (e.g., non-uniform memory access [NUMA], non-coherent shared memory, high-bandwidth memory [HBM], and scratchpads), and heterogeneity of processing (e.g., graphics processing units [GPUs], field-programmable gate arrays [FPGAs], coarse-grained reconfigurable architectures [CGRAs], other types of accelerators, big-small cores, and processing in and near memory), with particular emphasis on making it easier to program at scale. All phases of the software-development cycle are relevant, including, but not limited to, design, implementation, verification, optimization, and integration. Particularly welcome are methods that infuse artificial intelligence/machine learning into the programming environment.
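As a minimal sketch of the direction this topic describes (the kernel and problem size are hypothetical; requires C++17 parallel algorithms support, e.g. linking TBB with GCC, or a toolchain such as nvc++ -stdpar for GPU offload), a single standard algorithm call can express the parallelism while leaving the mapping to hardware to the toolchain:

    // Sketch: data parallelism through a standard C++17 execution policy
    // rather than a vendor-specific API. The same call can be mapped by the
    // toolchain to multicore CPUs or, with offloading compilers, to GPUs.
    #include <algorithm>
    #include <execution>
    #include <numeric>
    #include <vector>
    #include <cstdio>

    int main() {
        std::vector<double> x(1 << 20), y(1 << 20);
        std::iota(x.begin(), x.end(), 0.0);

        // y = 2*x + 1, run under an unordered parallel policy; the runtime,
        // not the application code, chooses threads and placement.
        std::transform(std::execution::par_unseq, x.begin(), x.end(), y.begin(),
                       [](double v) { return 2.0 * v + 1.0; });

        std::printf("y[42] = %f\n", y[42]);
        return 0;
    }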
Operating and Runtime Systems: System software that provides intelligent, adaptive resource management and support for highly parallel software and workflow-management systems, and that facilitates effective and efficient use of heterogeneous computing technologies, including diverse execution models, processors, accelerators, memory, and storage systems. Target workloads include modeling and simulation, data analysis, and the processing of large-scale, streaming data from experiments.
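One hedged illustration of the adaptive flavor intended here (the synthetic workload, batch size, and 0.95 improvement threshold are all hypothetical choices, not a prescribed design): a runtime loop that measures batch execution time and grows its worker pool only while the measurement keeps improving.

    // Sketch: adaptive resource management driven by runtime measurement.
    #include <algorithm>
    #include <chrono>
    #include <cmath>
    #include <cstdio>
    #include <thread>
    #include <vector>

    // Synthetic task standing in for a simulation or analysis kernel.
    static void task(std::size_t n) {
        volatile double acc = 0.0;
        for (std::size_t i = 0; i < n; ++i) acc += std::sqrt((double)i);
    }

    // Run one batch of tasks across `workers` threads; return seconds taken.
    static double run_batch(unsigned workers, unsigned tasks, std::size_t n) {
        auto t0 = std::chrono::steady_clock::now();
        std::vector<std::thread> pool;
        for (unsigned w = 0; w < workers; ++w)
            pool.emplace_back([=] {
                for (unsigned t = w; t < tasks; t += workers) task(n);
            });
        for (auto& th : pool) th.join();
        return std::chrono::duration<double>(
            std::chrono::steady_clock::now() - t0).count();
    }

    int main() {
        const unsigned maxw = std::max(1u, std::thread::hardware_concurrency());
        double best = 1e30;
        // Adaptive policy sketch: grow parallelism while throughput improves,
        // stop at the first sign of diminishing returns.
        for (unsigned workers = 1; workers <= maxw; workers *= 2) {
            double t = run_batch(workers, 64, 200000);
            std::printf("workers=%u  batch time=%.3fs\n", workers, t);
            if (t > 0.95 * best) break;  // no meaningful improvement
            best = t;
        }
        return 0;
    }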
Performance Portability and Co-design: Methods that support performance portability, the ability to use diverse hardware platforms efficiently with minimal changes to application source code, and/or hardware/software co-design, in which hardware and software are designed and/or adapted together as part of a holistic process. These methods include automated and semi-automated refinement from a high-level specification of an application and/or hardware design down to low-level code, optimized at compile time and/or, for software, at runtime, for different HPC platforms. The focus is on enabling performance portability of, and/or the design of future hardware for, applications developed for extreme-scale computing and beyond.
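A minimal single-source sketch of the performance-portability idea (the scale kernel is hypothetical, and backends beyond serial and OpenMP would slot in the same way): the loop semantics are written once, and the build configuration selects the mapping to hardware.

    // Sketch: one source file, backend chosen at build time.
    #include <cstdio>
    #include <vector>

    void scale(std::vector<double>& v, double a) {
    #if defined(_OPENMP)
        // Parallel backend: compiled with -fopenmp; threads chosen at runtime.
        #pragma omp parallel for
        for (long i = 0; i < (long)v.size(); ++i) v[i] *= a;
    #else
        // Portable serial fallback: same semantics, no extra dependencies.
        for (std::size_t i = 0; i < v.size(); ++i) v[i] *= a;
    #endif
    }

    int main() {
        std::vector<double> v(1'000'000, 1.0);
        scale(v, 2.0);
        std::printf("v[0] = %f\n", v[0]);
        return 0;
    }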
Memory-Aware Systems: Advances in memory technologies are creating new opportunities and challenges, and it is unclear how best to introduce or abstract memory awareness and composition. Memory is evolving in highly asymmetric and distributed directions, with new industry standards greatly expanding memory sharing and capacities, largely within backward-compatible system architectures. Research is needed to uncover new possibilities for solving larger scientific computing problems on such highly asymmetric and distributed memory architectures. Innovations in algorithms, software interfaces, and programming languages and models are also needed to effectively exploit processing-in-memory architectures, which are emerging as a relatively new paradigm for scientific computing. Memory safety needs to be revisited at a more fundamental level of programming languages, runtimes, and operating systems, considering the multi-developer and shared nature of modern scientific programming ecosystems. The smoothing of the spectrum from volatile to non-volatile memories should be investigated, revisiting out-of-core algorithms to expand the limits of scientific computing. On-the-fly compression and decompression need investigation as a way to increase problem sizes without detriment to performance. The intersection of machine learning with memory systems opens the potential for new solutions, including smarter ML-informed cache prefetching and replacement policies, potentially customizable for specific scientific applications via signatures and other mechanisms.
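As a minimal sketch of the on-the-fly compression idea mentioned above (assuming zlib is available, linking with -lz; the smooth synthetic field is hypothetical and chosen to compress well, as many structured scientific datasets do), a block of data can be held compressed in memory and expanded only when touched:

    // Sketch: compression as a way to hold larger working sets in memory.
    #include <cmath>
    #include <cstdio>
    #include <vector>
    #include <zlib.h>

    int main() {
        // Synthetic "field" data: smooth values compress far better than noise.
        std::vector<double> field(1 << 20);
        for (std::size_t i = 0; i < field.size(); ++i)
            field[i] = std::sin(i * 1e-4);

        const uLong srcBytes = field.size() * sizeof(double);
        std::vector<Bytef> packed(compressBound(srcBytes));
        uLongf packedBytes = packed.size();
        if (compress(packed.data(), &packedBytes,
                     reinterpret_cast<const Bytef*>(field.data()),
                     srcBytes) != Z_OK)
            return 1;
        std::printf("in-memory footprint: %lu -> %lu bytes (%.1fx)\n",
                    srcBytes, packedBytes, (double)srcBytes / packedBytes);

        // Decompress on demand when the block is touched again.
        std::vector<double> restored(field.size());
        uLongf outBytes = srcBytes;
        if (uncompress(reinterpret_cast<Bytef*>(restored.data()), &outBytes,
                       packed.data(), packedBytes) != Z_OK)
            return 1;
        std::printf("restored[100] = %f\n", restored[100]);
        return 0;
    }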
Applications are not restricted to a single Systems topic above and may span all of them, provided the scope of work remains appropriate for the program.