The field of 3D content creation is rapidly evolving with the integration of generative AI, enabling fast, accessible creation of 3D models from text or image inputs. Researchers are exploring ways to translate these digital outputs into physical objects, weighing fabrication constraints, sustainability, time, functionality, and accessibility. A key challenge is developing fabrication methods that can adapt to the variability of generative AI outputs while ensuring the sustainability and usability of the resulting physical objects. Meanwhile, the application of generative AI in augmented reality (AR) environments is gaining traction, allowing users to generate, manipulate, and interact with 3D models in real time.

Noteworthy papers in this area include MagicCraft, which generates functional 3D objects from natural language prompts for metaverse platforms, significantly reducing the time and skill required to create 3D content, and ImaginateAR, a mobile AI-assisted AR authoring system that lets users build personalized AR content by speaking their imagination, producing more accurate outdoor scene graphs and generating 3D meshes faster than prior methods.
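As a concrete illustration of the fabrication challenge described above, a generated mesh must typically be checked and repaired before it can be printed. The following is a minimal sketch, not taken from any of the cited papers, of how such a validation step might look using the open-source trimesh library; the input file name is a placeholder for whatever a text-to-3D model produces.

```python
import trimesh

# Placeholder path: assume a text-to-3D model has exported this mesh.
mesh = trimesh.load("generated_model.obj", force="mesh")

# Generative outputs are highly variable; check basic printability
# properties before sending the mesh to a slicer.
print("watertight:", mesh.is_watertight)
print("consistent winding:", mesh.is_winding_consistent)

# Attempt simple repairs: close small holes and fix face normals.
mesh.fill_holes()
trimesh.repair.fix_normals(mesh)

# Scale the model so its largest dimension fits a 200 mm build volume
# (an assumed printer constraint for this sketch).
max_extent = mesh.bounding_box.extents.max()
if max_extent > 200.0:
    mesh.apply_scale(200.0 / max_extent)

# Export in a format most slicers accept.
mesh.export("printable.stl")
```

In practice, adapting to the variability of generative outputs usually requires more than hole filling, e.g. remeshing, thickening thin walls, or adding supports, but the same check-repair-export loop applies.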