The field of High-Performance Computing (HPC) is seeing rapid advances in benchmarking and performance analysis. Researchers are developing methods that reduce the computational cost of benchmarks while maintaining ranking stability, and new frameworks aim to enable portable, efficient sampling across both simulators and real hardware. Notable developments include the Nugget framework, which cuts interval-analysis cost by orders of magnitude, and the BISection Sampling approach, which reduces computational cost by up to 99%. The adoption of exascale systems and novel storage solutions is also accelerating scientific discovery: the Aurora supercomputer, for example, leverages Intel's oneAPI programming environment and integrates the Distributed Asynchronous Object Storage (DAOS) solution.
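To make the sampling idea concrete, here is a minimal sketch of classic interval-based sampling (in the SimPoint style): cluster per-interval basic-block vectors, then simulate only one representative interval per cluster. This is an illustration of the general technique, not the actual Nugget or BISection algorithm, and the function name is hypothetical.

```python
# Illustrative interval-based sampling, SimPoint-style; not the actual
# Nugget or BISection implementation. Names here are hypothetical.
import numpy as np
from sklearn.cluster import KMeans

def pick_representative_intervals(bb_vectors, k=5, seed=0):
    """Cluster per-interval basic-block vectors and pick the interval
    closest to each centroid as that cluster's representative.

    bb_vectors: (n_intervals, n_basic_blocks) execution-frequency matrix.
    Returns (indices, weights): which intervals to simulate in detail,
    and the fraction of the full run each one stands in for.
    """
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(bb_vectors)
    reps, weights = [], []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(bb_vectors[members] - km.cluster_centers_[c], axis=1)
        reps.append(int(members[np.argmin(dists)]))
        weights.append(len(members) / len(bb_vectors))
    return reps, weights

# Detailed simulation runs only on the representatives; a full-run metric
# (e.g. CPI) is then estimated as the weight-averaged per-interval metric.
```

Because only a handful of representative intervals are simulated in detail, total simulation cost drops roughly in proportion to the number of clusters over the number of intervals.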
In data storage and analytics, there is growing interest in object-based storage systems that support column-oriented access and in-storage execution of data-reduction operators. OASIS, for instance, is a novel object-based analytics storage system, while GeoLayer is a geo-distributed graph storage framework. Other notable developments include the Peekaboo attack framework against dynamic searchable symmetric encryption, the Tiga design for geo-replicated, scalable transactional databases, and Membrane, a cryptographic access control system for data lakes.
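The core benefit of in-storage data reduction is that filtering happens before data crosses the storage/compute boundary. The sketch below illustrates that pushdown idea with a hypothetical column-object interface; it is not the OASIS API.

```python
# Hypothetical sketch of column-oriented object storage with an in-storage
# filter (data-reduction) operator; illustrative only, not the OASIS API.
from dataclasses import dataclass, field

@dataclass
class ColumnObject:
    """One object stores a dataset's columns, keyed by column name."""
    columns: dict = field(default_factory=dict)

    def put(self, name, values):
        self.columns[name] = list(values)

    def scan(self, name, predicate=None):
        """Column read with optional filtering *inside* the storage layer,
        so only qualifying values are returned to the compute side."""
        col = self.columns[name]
        return [v for v in col if predicate is None or predicate(v)]

store = ColumnObject()
store.put("temperature", [21.5, 35.2, 19.8, 40.1])
# Only rows above the threshold leave storage, reducing data movement.
hot = store.scan("temperature", predicate=lambda t: t > 30.0)
```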
The field of querying and indexing is moving towards more efficient and scalable solutions, focusing on query types such as nearest neighbor search and rank aggregation. Graph-based methods have shown superior efficiency and accuracy, and there is growing interest in dynamic, self-balancing data structures. Notable papers include Efficient Computation of Trip-based Group Nearest Neighbor Queries and SINDI, which introduces an efficient index for approximate maximum inner product search on sparse vectors.
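For sparse vectors, the standard indexing approach to inner product search is an inverted index over nonzero dimensions, with scores accumulated only along the query's nonzero coordinates. The sketch below shows that baseline technique; SINDI's actual design will differ in its optimizations.

```python
# Baseline inverted-index approach to sparse maximum inner product search;
# illustrative of the general technique, not SINDI's specific algorithm.
from collections import defaultdict
import heapq

def build_inverted_index(vectors):
    """vectors: list of sparse vectors, each a dict {dim: weight}.
    Maps each nonzero dimension to a posting list of (vector_id, weight)."""
    index = defaultdict(list)
    for vid, vec in enumerate(vectors):
        for dim, w in vec.items():
            index[dim].append((vid, w))
    return index

def top_k_inner_product(index, query, k=10):
    """Accumulate partial inner products over the query's nonzero
    dimensions only, then return the k highest-scoring vector ids."""
    scores = defaultdict(float)
    for dim, qw in query.items():
        for vid, w in index.get(dim, ()):
            scores[vid] += qw * w
    return heapq.nlargest(k, scores.items(), key=lambda kv: kv[1])
```

The work done per query scales with the posting lists touched by the query's nonzero dimensions rather than with the full collection size, which is what makes sparsity exploitable.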
In object detection and neural architecture search, researchers are exploring new approaches to improve the robustness and performance of detection models under adverse lighting and weather conditions. Multi-representation inputs combined with neural architecture search have shown promise for optimizing model performance and efficiency. Notable papers include Multi-Representation Adapter with Neural Architecture Search for Efficient Range-Doppler Radar Object Detection and SAR-NAS: Lightweight SAR Object Detection with Neural Architecture Search.
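At its simplest, neural architecture search is a loop that samples candidate architectures from a search space and keeps the one scoring best under some evaluation budget. The toy sketch below shows a random-search baseline with a made-up proxy score; the search space, score, and names are all hypothetical and do not reflect the methods in the papers above.

```python
# Toy random-search NAS loop; search space and proxy score are invented
# for illustration, not taken from the cited papers.
import random

SEARCH_SPACE = {
    "depth": [2, 3, 4],
    "width": [16, 32, 64],
    "kernel": [3, 5, 7],
}

def sample_architecture(rng):
    return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}

def proxy_score(arch):
    """Stand-in for validation accuracy minus a latency penalty; a real
    search would train (or cheaply estimate) each candidate on detection data."""
    accuracy = 0.5 + 0.05 * arch["depth"] + 0.001 * arch["width"]
    latency = arch["depth"] * arch["width"] * arch["kernel"] ** 2
    return accuracy - 1e-6 * latency

def random_search(n_trials=50, seed=0):
    rng = random.Random(seed)
    candidates = [sample_architecture(rng) for _ in range(n_trials)]
    return max(candidates, key=proxy_score)

best = random_search()
```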
The field of deep learning is moving towards more efficient and scalable systems, with gains sought through architecture design, model compression, and optimization. Notable papers include Principled Approximation Methods for Efficient and Scalable Deep Learning and A Continuous Encoding-Based Representation for Efficient Multi-Fidelity Multi-Objective Neural Architecture Search.
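Multi-fidelity search amortizes evaluation cost by scoring many candidates cheaply and reserving expensive evaluations for the survivors. A common scheme is successive halving, sketched below under the assumption of a user-supplied `evaluate(arch, budget)` function (a hypothetical name, e.g. validation accuracy after `budget` training epochs); this illustrates the multi-fidelity principle, not the cited paper's method.

```python
# Successive halving: a standard multi-fidelity evaluation scheme.
# `evaluate(arch, budget)` is an assumed user-supplied scoring function.
def successive_halving(candidates, evaluate, min_budget=1, eta=2, rounds=3):
    """Score all candidates at a small budget, keep the best 1/eta
    fraction, and re-evaluate survivors at eta-times the budget,
    repeating for the given number of rounds."""
    budget = min_budget
    pool = list(candidates)
    for _ in range(rounds):
        scored = sorted(pool, key=lambda a: evaluate(a, budget), reverse=True)
        pool = scored[: max(1, len(scored) // eta)]
        budget *= eta
    return pool[0]
```

Most of the total budget is thus spent on the few candidates that survived the cheap early rounds.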
Finally, neural network compression is advancing quickly, with innovative techniques that reduce model size and complexity while preserving accuracy; notable papers include Attention as an Adaptive Filter and Dynamic Sensitivity Filter Pruning using Multi-Agent Reinforcement Learning For DCNN's. Deep neural network acceleration and optimization is evolving in parallel, targeting higher performance, lower power consumption, and greater efficiency; notable papers here include Bit Transition Reduction by Data Transmission Ordering in NoC-based DNN Accelerator and COMET: A Framework for Modeling Compound Operation Dataflows with Explicit Collectives.
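The bit-transition idea is that dynamic power on a link scales with how many wires toggle between consecutive words, so reordering transmissions to keep consecutive words close in Hamming distance saves energy. The sketch below shows a simple greedy nearest-neighbor ordering as an illustration of that principle; it is not the scheduling scheme from the NoC paper.

```python
# Greedy reordering to reduce bit transitions on a link; illustrative of
# the general idea, not the cited paper's actual ordering scheme.
def hamming(a: int, b: int) -> int:
    """Number of bit positions in which two words differ."""
    return bin(a ^ b).count("1")

def order_for_min_transitions(words):
    """Always transmit next the pending word with the smallest Hamming
    distance to the last transmitted word, so fewer wires toggle
    between consecutive transfers."""
    pending = list(words)
    ordered = [pending.pop(0)]
    while pending:
        nxt = min(pending, key=lambda w: hamming(ordered[-1], w))
        pending.remove(nxt)
        ordered.append(nxt)
    return ordered

def total_transitions(seq):
    return sum(hamming(a, b) for a, b in zip(seq, seq[1:]))

words = [0b1010, 0b0101, 0b1011, 0b0100]
reordered = order_for_min_transitions(words)
# Here the original order costs 11 toggles; the greedy order costs 5.
print(total_transitions(words), total_transitions(reordered))
```

Greedy ordering is a heuristic, so it is not guaranteed optimal, but it typically cuts toggle counts substantially when the word stream has exploitable similarity.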