The field of hardware design and optimization is witnessing significant advances through the integration of Large Language Models (LLMs). Recent work marks a shift toward using LLMs as interactive agents that collaborate with compilers and hardware feedback to generate and iteratively improve code. This approach has shown promise in enhancing the efficiency and accuracy of hardware design, particularly for Register Transfer Level (RTL) design and CUDA kernel optimization. Notably, new frameworks and workflows are being proposed to address the challenges of noise propagation, constrained exploration of the reasoning space, and limited parametric knowledge. These advances have the potential to revolutionize the field by enabling cost-effective, generalizable, and high-performance hardware design and optimization. Noteworthy papers include VeriMoA, a training-free mixture-of-agents framework for spec-to-HDL generation that achieves 15-30% improvements in Pass@1 across diverse LLM backbones, and CudaForge, an agent framework with hardware feedback for CUDA kernel optimization that reaches 97.6% correctness of generated kernels and an average 1.68x speedup over PyTorch baselines.
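To make the agent-with-compiler-feedback pattern concrete, the sketch below shows a minimal generate-compile-refine loop in Python. It is an illustration of the general workflow, not the method of VeriMoA or CudaForge: the `ask_llm` helper, the prompt wording, and the `MAX_ROUNDS` budget are hypothetical stand-ins, and the `nvcc` invocation assumes the CUDA toolkit is installed and on the PATH.

```python
import subprocess
import tempfile
from pathlib import Path

MAX_ROUNDS = 5  # hypothetical iteration budget


def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; a real system would
    query a model API here and return CUDA source code."""
    raise NotImplementedError("plug in your model client")


def compile_kernel(source: str) -> tuple[bool, str]:
    """Compile the candidate kernel with nvcc and capture diagnostics.
    Assumes the CUDA toolkit is available on the PATH."""
    with tempfile.TemporaryDirectory() as tmp:
        src = Path(tmp) / "kernel.cu"
        src.write_text(source)
        result = subprocess.run(
            ["nvcc", "-c", str(src), "-o", str(Path(tmp) / "kernel.o")],
            capture_output=True,
            text=True,
        )
        return result.returncode == 0, result.stderr


def optimize(task: str) -> str | None:
    """Generate-compile-refine loop: compiler diagnostics (and, in a
    full system, profiler metrics and test results) are fed back into
    the next prompt instead of being discarded."""
    feedback = ""
    for _ in range(MAX_ROUNDS):
        prompt = f"Write a CUDA kernel for: {task}\n"
        if feedback:
            prompt += f"The previous attempt failed to compile:\n{feedback}\n"
        candidate = ask_llm(prompt)
        ok, diagnostics = compile_kernel(candidate)
        if ok:
            # A full system would go on to check correctness and
            # measure runtime before accepting the kernel.
            return candidate
        feedback = diagnostics
    return None  # budget exhausted without a compiling kernel
```

The design choice this illustrates is the feedback channel: rather than sampling candidates blindly, each round conditions the model on concrete compiler (or profiler) output, which is the mechanism the surveyed agent frameworks build on.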