Research at the intersection of deep neural networks (DNNs) and Field-Programmable Gate Arrays (FPGAs) is rapidly advancing, with a focus on optimizing performance, energy efficiency, and area utilization. Researchers are exploring approaches that leverage bit-level sparsity, improve technology mapping, and enhance FPGA architecture to increase arithmetic density and reduce computation. Notably, computing schemes such as bit-serial and bit-column-serial processing are being developed to enable bit-wise sequential data processing and reduce memory accesses. New FPGA logic block designs are also being proposed to allow concurrent use of adders and look-up tables (LUTs), yielding area reductions and improved performance. Together, these advances have the potential to significantly improve the efficiency of DNN inference on FPGAs.

Noteworthy papers include:

- BitParticle, which proposes a MAC unit that leverages dual-factor sparsity through particlization, achieving a 29.2% improvement in area efficiency.
- BitWave, which introduces a bit-column-serial computing approach and a compatible architecture of the same name, achieving up to 13.25x higher speedup and 7.71x better efficiency than state-of-the-art sparsity-aware accelerators.
- Double Duty, which proposes a logic block architecture enabling the concurrent use of adders and LUTs, demonstrating area reductions of 21.6% on adder-intensive circuits.
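To make the idea of bit-serial computation with bit-level sparsity concrete, here is a minimal, hypothetical Python sketch (not taken from any of the papers above): a multiply-accumulate is decomposed into shift-adds over only the set bits of each weight, so zero bits contribute no work.

```python
# Illustrative sketch: bit-serial MAC that skips zero weight bits.
# Names and structure are assumptions for exposition, not the papers' designs.

def bit_serial_mac(activations, weights, width=8):
    """Accumulate sum(a * w) by iterating only over the set bits of each weight."""
    acc = 0
    for a, w in zip(activations, weights):
        for bit in range(width):
            if (w >> bit) & 1:       # zero bits are skipped entirely
                acc += a << bit      # shift-add replaces a full multiply
    return acc

def dense_mac(activations, weights):
    """Dense reference implementation for comparison."""
    return sum(a * w for a, w in zip(activations, weights))

acts = [3, 5, 7]
wts = [2, 0, 9]  # sparse bit patterns need fewer add cycles
assert bit_serial_mac(acts, wts) == dense_mac(acts, wts)
```

In hardware, each shift-add corresponds to one cycle of a serial datapath, which is why skipping zero bits translates directly into fewer cycles and lower energy.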