3584 papers were published on arXiv in the cs.* categories; 397 were excluded by the clustering step as noise.

388 clusters were identified, with an average of 8.16 papers each.

Largest clusters:

  1. Advances in Adversarial Robustness and Defense - 25 papers
  2. Advancements in Robotic Design and Control - 24 papers
  3. Advancements in Large Language Model Agents - 24 papers
  4. Advancements in Efficient and Adaptive Reasoning for Large Language Models - 23 papers
  5. Advances in Mitigating Bias in AI Systems - 23 papers
  6. Advances in Machine Learning and Optimization - 22 papers
  7. Advances in Molecular Modeling and Machine Learning - 21 papers
  8. Advances in Multimodal Medical Models - 21 papers
  9. Advances in Federated Learning for Healthcare and Privacy-Preserving Applications - 20 papers
  10. Advances in Explainable AI and Interpretable Machine Learning - 20 papers

57 clusters of clusters (meta-clusters) were identified, with an average of 51.32 papers each.

Largest clusters of clusters:

  1. Advancements in Brain-Computer Interfaces, Robotics, and Artificial Intelligence - 120 papers
  2. Advances in Speech and Language Processing - 96 papers
  3. Advances in 3D Scene Understanding and Generation - 90 papers
  4. Immersive Technologies and Robotics: Advancements in Interaction and Collaboration - 84 papers
  5. Advances in Machine Learning and Optimization - 77 papers
  6. Emerging Trends in Error-Correcting Codes, Music Generation, and Multimodal Processing - 75 papers
  7. Mixture-of-Experts Models and Beyond: Advances in Scalability, Efficiency, and Performance - 75 papers
  8. Advances in Text-to-SQL, Recommender Systems, and Retrieval-Augmented Generation - 73 papers
  9. Advancements in Large Language Models - 67 papers
  10. Diffusion Models: Emerging Trends and Advances - 67 papers
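The "excluded as noise" figure above reflects how density-based clustering typically works: points without enough close neighbours receive the label -1 and are dropped. The sketch below is a deliberately naive, hypothetical illustration of that idea (a greedy distance-threshold rule on 1-D toy "embeddings"), not the actual method behind this report.

```python
def cluster(points, eps, min_size=2):
    """Greedily group points within eps of a seed; groups smaller than
    min_size are marked as noise with the conventional label -1."""
    labels = [-1] * len(points)
    next_label = 0
    for i, p in enumerate(points):
        if labels[i] != -1:
            continue  # already assigned to a cluster
        members = [j for j, q in enumerate(points)
                   if labels[j] == -1 and abs(q - p) <= eps]
        if len(members) >= min_size:
            for j in members:
                labels[j] = next_label
            next_label += 1
    return labels

# Six toy 1-D paper "embeddings": two tight groups and one outlier.
papers = [0.0, 0.1, 0.2, 5.0, 5.1, 9.9]
print(cluster(papers, eps=0.3))  # the isolated 9.9 keeps label -1 ("noise")
```

Re-applying the same grouping step to cluster centroids yields the "clusters of clusters" reported above.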

Advancements in Brain-Computer Interfaces, Robotics, and Artificial Intelligence - 120 papers

Researchers have developed innovative approaches to decoding brain signals, optimizing task allocation, and improving human-machine interaction through projects like EMG-UP and Neuroprobe. Notable papers such as "Bridging the behavior-neural gap" and "Uncovering the Computational Ingredients of Human-Like Representations in LLMs" have also advanced AI capabilities in areas like emotion recognition and large language models.

Advances in Speech and Language Processing - 96 papers

Researchers have developed innovative approaches such as simulated data augmentation and hierarchical evaluation frameworks to improve speaker diarization and speech recognition systems. New methods, including psycholinguistic features and contrastive learning, are also being explored to detect AI-generated text and improve speech recognition accuracy.

Advances in 3D Scene Understanding and Generation - 90 papers

Researchers have achieved a 36.8% improvement in mapping accuracy with Real-Time Indoor Object SLAM and proposed novel frameworks like SAGE for scene graph-aware guidance. Innovative approaches, such as physics-informed models and open-world part segmentation, are being explored to generate high-quality 3D content and enable dynamic relighting and faithful material recovery.

Immersive Technologies and Robotics: Advancements in Interaction and Collaboration - 84 papers

Researchers have made significant breakthroughs in immersive technologies, such as reducing VR sickness and improving task guidance in AR environments. Advances in robotics have also led to the development of more agile and adaptive systems, including compliant robots, humanoid robots, and legged robots with improved control and manipulation capabilities.

Advances in Machine Learning and Optimization - 77 papers

Researchers have proposed novel methods, such as DRIFT and MANI-Pure, to defend against adversarial attacks and improve model robustness. New techniques, including bilevel optimization and Bayesian optimization, have also been developed to improve system performance, trustworthiness, and calibration in various domains.

Emerging Trends in Error-Correcting Codes, Music Generation, and Multimodal Processing - 75 papers

Researchers are developing new error-correcting codes and decoding algorithms to improve data transmission efficiency and reliability. Innovations in music generation, multimodal processing, and audio generation are also being driven by advances in models such as diffusion-based models, transformer architectures, and multi-agent systems.

Mixture-of-Experts Models and Beyond: Advances in Scalability, Efficiency, and Performance - 75 papers

Researchers have made significant progress in designing novel training frameworks and routing mechanisms for Mixture-of-Experts models, enabling elastic inference-time expert utilization and improved model performance. Notable works, such as Elastic MoE and Dynamic Experts Search, have introduced innovative strategies for scalable and efficient model training and deployment.
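At the core of the routing mechanisms discussed above is top-k gating: a learned gate scores every expert for each token, and only the k best-scoring experts are executed, with their outputs mixed by renormalised gate weights. A minimal, dependency-free sketch of that primitive follows; the logits and k are illustrative assumptions, not values from Elastic MoE or Dynamic Experts Search.

```python
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def route(gate_logits, k=2):
    """Select the top-k experts for one token and renormalise their
    gate weights so the selected mixture weights sum to 1."""
    topk = sorted(range(len(gate_logits)),
                  key=lambda i: gate_logits[i], reverse=True)[:k]
    weights = softmax([gate_logits[i] for i in topk])
    return list(zip(topk, weights))  # (expert index, mixing weight) pairs

# One token's gate logits over 4 experts; only experts 3 and 0 will run.
print(route([2.0, -1.0, 0.5, 3.0], k=2))
```

Because only k of the experts run per token, compute grows with k rather than with the total expert count, which is what makes these models scale.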

Advances in Text-to-SQL, Recommender Systems, and Retrieval-Augmented Generation - 73 papers

Researchers have achieved state-of-the-art results in Text-to-SQL using reinforcement learning and test-time scaling, and in recommender systems using discrete diffusion models. Innovative approaches, such as graph-based retrieval-augmented generation and multi-agent systems, have also demonstrated significant performance gains in various benchmarks.

Advancements in Large Language Models - 67 papers

Researchers have developed innovative pretraining methods, such as curriculum learning and synthetic data techniques, to improve large language models' representation quality and performance. Notable advancements also include more efficient fine-tuning methods, like low-rank adaptation, and applications in various domains, such as cybersecurity, education, and finance.
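Low-rank adaptation, mentioned above, keeps the pretrained weight matrix frozen and trains only two small matrices whose product forms a rank-r update, scaled by alpha/r. The pure-Python sketch below shows just the forward pass, with toy dimensions and hypothetical values.

```python
def matmul(a, b):
    """Plain list-of-lists matrix multiply."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def lora_forward(x, W, A, B, alpha, r):
    """y = x @ W + (alpha / r) * x @ A @ B, with W frozen and only
    A (d_in x r) and B (r x d_out) trainable."""
    base = matmul(x, W)               # frozen pretrained path
    delta = matmul(matmul(x, A), B)   # low-rank adapter path
    scale = alpha / r
    return [[b + scale * d for b, d in zip(brow, drow)]
            for brow, drow in zip(base, delta)]

# Toy rank-1 adapter on a 2x2 identity weight.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0], [1.0]]          # 2 x 1
B = [[0.5, -0.5]]           # 1 x 2
print(lora_forward([[2.0, 3.0]], W, A, B, alpha=1.0, r=1))
```

With d_in = d_out = d, the adapter adds only 2·d·r trainable parameters instead of d², which is why the method is considered efficient.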

Diffusion Models: Emerging Trends and Advances - 67 papers

Diffusion models have achieved state-of-the-art performance in tasks like language generation, video generation, and image restoration, offering improved efficiency and accuracy. Notable papers have introduced novel frameworks, techniques, and decoding strategies, enabling significant speedups and performance gains in areas like logical reasoning, math reasoning, and video super-resolution.

Advances in Neurosymbolic Integration and Multimodal Reasoning - 67 papers

Researchers have introduced novel languages and frameworks that combine neural-network learning with symbolic reasoning, enabling more flexible and effective integration of data-driven rule learning with expert knowledge. Noteworthy models and architectures, such as WAVE and LOGicalThought, have improved performance in multimodal reasoning, video understanding, and logical inference.

Advances in Multi-Agent Systems and Reinforcement Learning - 65 papers

Researchers have proposed innovative techniques such as policy-based continuous extension and diffusion models to improve the efficiency and stability of multi-agent systems. These advancements have the potential to enable more reliable robotic navigation, efficient traffic control, and effective strategic decision-making in complex environments.

Advances in Medical Image Analysis and Related Fields - 65 papers

Researchers are using deep learning models, such as GANs and transformers, to improve medical image analysis and segmentation. Innovations like on-the-fly data augmentation, coreset selection, and intent-based management frameworks are also being explored to enhance efficiency and accuracy in various fields.

Advancements in Electronic Health Record Analysis and Time Series Forecasting - 62 papers

Researchers are developing innovative models that integrate physical priors and inductive biases to improve efficiency and efficacy in fields like healthcare and climate modeling. These advancements include new frameworks for handling irregular time series data, hybrid models for time series forecasting, and foundation models for scientific machine learning.

Advancements in Deep Learning, Semantic Communications, and Wireless Systems - 62 papers

Researchers have proposed innovative methods such as domain-informed monotonicity in deep neural networks and token-based multimodal interactive coding frameworks to improve model efficiency and performance. The integration of techniques like machine learning and semantic-aware paradigms is also enhancing wireless communication, spatial flexibility, and sensing accuracy in various systems.

Multimodal Large Language Models: Progress and Innovations - 62 papers

Researchers have developed innovative models like ProfVLM and EVLF-FM, which achieve superior accuracy with fewer parameters and unify diagnostic capability with explainability. New benchmarks and frameworks, such as Neural-MedBench and EAGLE, have also been proposed to evaluate and analyze MLLM performance and decisions.

Advancements in Vision-Language Navigation and Related Fields - 61 papers

Researchers have developed methods to disentangle foreground and background information and utilized spatiotemporal knowledge graphs to improve scene understanding. Diffusion models have also been successfully applied to generate high-quality synthetic images, particularly in medical image synthesis, and to improve text-to-image alignment.

Safety and Reliability in Large Language Models - 58 papers

Researchers have proposed novel approaches to inference-time safety, including reachability analysis and inverse reasoning, to mitigate risks associated with large language models. New defense mechanisms, such as system vectors and type-directed privilege separation, have been developed to prevent prompt injection attacks and improve model safety.

Advances in Large Language Models for Strategic Reasoning and Decision Making - 58 papers

Large language models are being integrated with traditional game-theoretic methods to improve performance in complex games and tasks. Researchers are also developing innovative approaches to enhance reasoning capabilities, such as reinforcement learning with verifiable rewards and bilinear relational structures, to enable more logically consistent decision-making.

Efficient and Adaptive Reasoning in Large Language Models - 58 papers

Techniques like adaptive reasoning, latent reasoning, and compressed knowledge distillation have improved the efficiency and effectiveness of large language models. Researchers have also developed methods to reduce overthinking, optimize computation, and improve explicit reasoning, leading to more accurate and reliable models.

Mitigating Sycophancy and Advancing AI Research in Labor Markets, Natural Language Processing, and Ethics - 57 papers

Researchers have proposed strategies like Visual Information Purification for Evidence-based Response (VIPER) to mitigate sycophancy in AI systems and developed frameworks like AutoPK for extracting pharmacokinetic data. Large language models have also shown promise in improving labor market analysis, biomedical data analysis, and natural language processing for low-resource languages.

Advances in Real-Time Object Detection and Autonomous Systems - 56 papers

Researchers have developed innovative architectures like MS-YOLO and HierLight-YOLO, achieving state-of-the-art performance in object detection. Novel methods, such as spatial-preserving token merging and heterogeneous graph reinforcement learning, have also shown promising results in vision transformers and autonomous vehicle navigation.

Mitigating Hallucinations in Large Language and Vision Models - 55 papers

Researchers are developing methods to mitigate hallucinations in large language models, including retrieval-augmented generation and uncertainty estimation. Innovations like Spectral Uncertainty and linguistic confidence are also being introduced to improve uncertainty estimation and management in natural language processing.

Advances in Artificial General Intelligence and Large Language Models - 55 papers

Entropy-regularized policy optimization and self-imitation learning have improved the performance and stability of Large Language Model agents. Novel frameworks and benchmarks, such as QuantMind and UltraHorizon, have also enabled the evaluation and improvement of LLM agents in complex, real-world scenarios.

Efficient Neural Architectures and Optimization Techniques - 55 papers

Researchers have introduced techniques like layer skipping and compute-in-memory-aware neural architecture search, achieving state-of-the-art results and significant reductions in energy consumption. New architectures and optimization methods have also enabled the training of deeper networks, improved model compression, and enhanced robustness and efficiency in various applications.

Large Language Models in Software Development and Testing - 54 papers

LLMs are being used to automate software testing, including generating unit tests and analyzing code for errors, with frameworks like JUnitGenie and TENET facilitating evaluation. Researchers have also made progress in using LLMs to detect bugs and security vulnerabilities in software and blockchain applications, such as reproducing Android app crashes and identifying subcontract misuse vulnerabilities.

Equivariant Learning and Graph Neural Networks: Emerging Trends and Innovations - 53 papers

Researchers have developed innovative models such as the Clebsch-Gordan Transformer and SIM(3)-equivariant shape completion network, achieving state-of-the-art results in tasks like 3D shape completion. Foundation models pretrained on synthetic graphs have also shown effectiveness in capturing complex graph structural dependencies and achieving state-of-the-art results on real-world graph datasets.

Trends in Interpretable AI: Transparency and Accountability in Oncology, Language Models, and Decision-Making Systems - 52 papers

Researchers have developed innovative models that provide accurate and transparent predictions, such as SHAPoint and "Automated and Interpretable Survival Analysis", which integrate clinical variables and medical imaging data. Noteworthy papers like CE-FAM, ACE, and MAGIC-MASK propose novel explanation methods and frameworks for interpretable AI in areas including oncology, image classification, and sports analytics.
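Attribution methods in this vein are often built on the Shapley value: a feature's credit is its marginal contribution averaged over all orderings in which features are revealed. The exact brute-force sketch below (feasible only for a handful of features) uses a hypothetical two-feature model, not any of the cited systems.

```python
from itertools import permutations

def shapley(model, baseline, x):
    """Exact Shapley values: average each feature's marginal contribution
    over every ordering of feature reveals (exponential; toy inputs only)."""
    n = len(x)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for perm in perms:
        z = list(baseline)
        prev = model(z)
        for i in perm:
            z[i] = x[i]            # reveal feature i
            cur = model(z)
            phi[i] += (cur - prev) / len(perms)
            prev = cur
    return phi

# Hypothetical model with an interaction term: f(a, b) = 2a + b + a*b.
f = lambda v: 2 * v[0] + v[1] + v[0] * v[1]
print(shapley(f, baseline=[0.0, 0.0], x=[1.0, 1.0]))
```

The attributions always sum to f(x) - f(baseline), which is the efficiency property that makes them useful for transparent risk scores.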

Advancements in Sequence Modeling and Natural Language Processing - 52 papers

Researchers are developing innovative architectures, such as hybrid models and sparse transformers, to improve long-sequence processing and contextual dependencies. Notable models, including StateX, SWAX, and ResFormer, have achieved significant advancements in efficient attention mechanisms, state tracking, and representation learning.

Advances in Digital Human Modeling and 3D Reconstruction - 49 papers

Researchers have developed innovative frameworks such as X-Streamer and StableDub, which enable multimodal human interactions and visual dubbing with high fidelity. Deep learning techniques, like those used in SDPose and LieHMR, have also achieved state-of-the-art results in human pose estimation and 3D reconstruction.

Efficient Training and Inference of Large Language Models - 49 papers

Quantization techniques, such as AxLLM and InfiR2, have achieved significant model compression while maintaining performance. Novel optimizers like Conda and AuON, and scaling laws, have also shown promising results in improving convergence speed, stability, and model accuracy.
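The quantization techniques named above compress weights by storing low-bit integers plus a scale. As a baseline illustration (not AxLLM's or InfiR2's actual scheme), the sketch below implements symmetric per-tensor int8 rounding, whose reconstruction error is bounded by half the quantization step.

```python
def quantize(weights, bits=8):
    """Symmetric per-tensor quantization: one shared scale, signed ints.
    Assumes at least one nonzero weight (toy code, no zero-scale guard)."""
    qmax = 2 ** (bits - 1) - 1                 # 127 for int8
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.02, -0.5, 0.31, 1.27]
q, s = quantize(w)
w_hat = dequantize(q, s)
err = max(abs(a - b) for a, b in zip(w, w_hat))
print(q, err <= s / 2 + 1e-12)  # rounding error stays within half a step
```

Storing 8-bit integers instead of 32-bit floats cuts weight memory by roughly 4x, which is the compression the section refers to.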

Fairness, Privacy, and Efficiency in Machine Learning - 47 papers

Researchers have developed innovative algorithms to learn fair representations without individual demographic information and proposed novel optimization techniques to achieve fairness and privacy. New frameworks and methods have also been introduced to address challenges such as imbalanced data, continual learning, and federated learning, preserving data privacy and improving model performance.
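Federated learning, mentioned above, commonly follows the federated-averaging pattern: raw data stays on each client, and the server aggregates only locally trained parameters, weighted by client dataset size. A minimal sketch of the server-side step, with hypothetical weights:

```python
def fedavg(client_weights, client_sizes):
    """Server step of federated averaging: a dataset-size-weighted mean
    of per-client parameter vectors (raw data never leaves the clients)."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    avg = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for j in range(dim):
            avg[j] += w[j] * n / total
    return avg

# Two clients with 100 and 300 local examples; the larger client
# contributes three times the weight to the global model.
print(fedavg([[1.0, 2.0], [2.0, 4.0]], [100, 300]))
```

Only parameter vectors cross the network, which is how these systems preserve data privacy while still training a shared model.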

Advances in Robot Learning and Manipulation - 45 papers

Researchers have developed novel frameworks for robot learning and manipulation, such as automatic dense reward generation and stage-aware reward modeling. These advances enable robots to perform complex tasks with improved dexterity, adaptability, and human-robot interaction, and have the potential to significantly improve robot manipulation and control.

Integrating Large Language Models and Embodied Intelligence for Autonomous Systems - 44 papers

Researchers have developed innovative frameworks that integrate large language models with embodied intelligence, enabling more efficient and autonomous systems in fields like ocean dynamics and robotics. Notable papers have proposed novel architectures and methods for planning, navigation, and decision-making, achieving significant improvements in task success rates and planning efficiency.

Advances in Sensor Fusion, Remote Sensing, and Geospatial Intelligence - 44 papers

Researchers have introduced novel sensor fusion methods, such as reducing computational costs in radar-LiDAR-inertial systems and fusing radar signal spectra with inertial data. Innovations in computer vision, multimodal learning, and deep learning have also improved remote sensing and geospatial analysis, enabling more accurate and robust localization, tracking, and decision-making.

Advances in Molecular Modeling and Multimodal Generation - 44 papers

Transformers and deep learning architectures are being used to develop more accurate models for predicting molecular properties and behavior. Researchers are also incorporating physical and biological priors into generative models, achieving state-of-the-art performance in tasks like protein structure prediction and image generation.

Integrating Renewable Energy and Enhancing Grid Stability - 42 papers

Novel power sharing schemes and data-driven frameworks are being developed to optimize power allocation and improve grid stability. Machine learning and physics-informed models are also being used to create more accurate and efficient solutions for power systems.

Human-AI Collaboration: Emerging Trends and Innovations - 42 papers

Researchers have developed novel frameworks and models that integrate human factors into AI systems, strengthening their robustness and decision-making capabilities. AI-assisted tools and systems are being designed to support personalized education, creative design, and collaborative workflows, with potential to enhance human capabilities and productivity.

Geometric Awareness in Machine Learning: Emerging Trends and Innovations - 41 papers

Researchers have developed innovative models, such as HyperHELM and CAT, that leverage hyperbolic geometry to improve performance in tasks like language modeling and sequence classification. New architectures and techniques, like asymmetric autoencoders and manifold-probabilistic projection models, are also enabling more accurate and robust modeling of complex data distributions.

Fractional Dynamics and Numerical Analysis: Emerging Trends and Innovations - 39 papers

Researchers have developed innovative methods, such as neural operators and physics-informed neural networks, to improve the accuracy and efficiency of numerical solutions for partial differential equations. These advancements have achieved significant results, including a six-order-of-magnitude acceleration in calculations and lower error rates, with applications in physics, engineering, and computer science.

Interpretable Neural Networks and Machine Learning - 39 papers

Researchers have developed innovative models such as sparse autoencoders and Concept Bottleneck Models, which provide insights into decision-making processes and improve interpretability. New techniques, including logic-based models and kernel methods, have also shown promising results in terms of predictive performance and transparency.

Molecular Optimization and Design: Leveraging Reinforcement Learning and Generative Models - 39 papers

Generative models and reinforcement learning are being integrated to efficiently explore chemical space and discover novel molecules with improved properties. This integration enables the simultaneous optimization of multiple therapeutic properties, leading to significant advancements in drug discovery, peptide design, and biomolecule generation.

Advancements in Log Analysis, Formal Methods, and Research Automation - 37 papers

Researchers are using large language models to improve log analysis, formal methods, and research automation, achieving breakthroughs in tasks like automated log diagnosis and formal verification. Notable papers like LogPilot, AssertGen, and EEsizer demonstrate the effectiveness of large language models in these areas.

Advancements in Large Language Models: Enhancing Reasoning and Efficiency - 37 papers

Model merging techniques and novel frameworks have achieved state-of-the-art results on various benchmarks by enhancing reasoning capabilities. New benchmarks have also revealed consistent reasoning deficiencies, driving innovations in memory and reasoning capabilities, such as human-inspired cognitive architectures and self-evolving frameworks.

Multimodal Knowledge Graphs and Embodied AI: Progress and Innovations - 35 papers

Researchers have proposed novel frameworks, such as knowledge graph-guided cross-modal hypergraph learning and hypercomplex-driven robust multi-modal knowledge graph completion, to improve pedestrian attribute recognition and embodied AI. New benchmarks, like RoboView-Bias and HomeSafeBench, have also been developed to evaluate embodied agents and vision-language models, enabling more robust assessment of visual bias and safety perception.

Efficient Inference and Interpretability in AI Models - 34 papers

Researchers have introduced innovative pruning methods and techniques like context-aware cache compression to improve computational efficiency in AI models. Notable models like KV-Efficient VLA, OjaKV, and DynaNav have achieved significant reductions in computational overhead and latency while maintaining performance.

Aligning Large Language Models with Human Preferences - 31 papers

Researchers have developed innovative techniques such as meta-frameworks, strategic error amplification, and integrative causal router training to improve model performance and safety. Notable methods like progressive weight loading, circuit distillation, and anchored supervised fine-tuning have also shown promising results in enhancing model truthfulness, calibration, and alignment with human preferences.

Advancements in Network Protocol Optimization, Cybersecurity, and Artificial Intelligence - 31 papers

Researchers are leveraging smartNICs, FPGAs, and novel machine learning approaches to optimize network protocols and enhance cybersecurity. Innovations in deep learning, such as influence-guided concolic testing and physics-informed machine learning, are also improving model resilience and anomaly detection accuracy.

Advances in Numerical Methods and Dimensionality Reduction - 30 papers

Researchers have developed well-balanced methods for hyperbolic systems and efficient algorithms for search, dimensionality reduction, and linear system solving. These advancements enable faster and more accurate solutions for complex problems in various fields, including fluid dynamics, materials science, biology, and medicine.

Advances in Physiological Monitoring and Signal Processing - 30 papers

Deep learning techniques and physics-informed models have improved the accuracy and robustness of physiological measurements, enabling innovations such as non-contact digital twin synthesis and remote PPG measurement. Novel wearable devices and datasets have also been developed to support emergency response, education, and healthcare outcomes, leveraging advancements in physiological signal processing and spiking neural networks.

Sustainable Computing and Energy Systems: Advances and Innovations - 29 papers

Researchers are developing innovative techniques such as automated code translation and energy-efficient software design to reduce energy consumption in computing. Novel solutions in edge computing, climate modeling, and predictive modeling are also being proposed to improve efficiency, reduce latency and carbon emissions, and optimize resource management.

Convergence of Speech, Music, and Human-Computer Interaction - 27 papers

Large language models are being used to improve text-to-speech synthesis, enabling robust and controllable speech generation. AI-based tools are also being developed to predict music trends, generate music variations, and facilitate natural human-computer interaction through full-duplex speech interaction.

Advancements in Image Editing and Blockchain Research - 26 papers

Researchers are developing innovative image editing methods using diffusion models and reinforcement learning, enabling precise control and high-fidelity results. New blockchain protocols and metrics, such as Voting-Bloc Entropy, are also being developed to improve decentralization, security, and resilience in decentralized systems.

Advances in Video Generation and Editing - 25 papers

Researchers have made significant breakthroughs in video generation, achieving state-of-the-art results in quality, temporal coherence, and generation speed. Notable papers have introduced innovative approaches, such as inversion-free methods and object-centric synthesis, enabling more precise editing and manipulation of images and videos.

Advances in Language Analysis and AI-Driven Discourse - 21 papers

Researchers have developed methods to analyze opinion shifts, media bias, and user preferences using large language models and computational frameworks. Noteworthy papers include C-QUERI, The Media Bias Detector, and ProPerSim, which showcase innovative approaches to personalized interactions, bias detection, and opinion shift analysis.

Smart Contract Generation and Code Development with Large Language Models - 19 papers

Researchers are developing new benchmarks and frameworks, such as SolContractEval, to evaluate the effectiveness of large language models in generating functional contracts. Innovative approaches like curriculum-guided reinforcement learning and Automatic Prompt Optimization are also being explored to improve smart contract synthesis and code generation.

Advances in Uncertainty Quantification and Adaptation - 18 papers

Deep learning models have achieved performance close to human experts in plant identification and traffic management, with notable results including a 29% reduction in average queue lengths. Techniques like transfer learning and conformal prediction are improving model robustness and reliability, enabling deployment in critical applications such as autonomous driving.
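Conformal prediction, one of the reliability techniques cited above, calibrates a score threshold on held-out data so that prediction sets cover the true label with probability at least 1 − alpha. A minimal split-conformal sketch for classification, with hypothetical calibration scores:

```python
import math

def conformal_quantile(cal_scores, alpha=0.1):
    """Finite-sample-corrected (1 - alpha) quantile of the calibration
    nonconformity scores (here: 1 minus the true class's probability)."""
    n = len(cal_scores)
    k = math.ceil((n + 1) * (1 - alpha))   # conservative rank
    return sorted(cal_scores)[min(k, n) - 1]

def prediction_set(probs, qhat):
    """Keep every class whose nonconformity score 1 - p is within qhat."""
    return [i for i, p in enumerate(probs) if 1 - p <= qhat]

# Nine hypothetical calibration scores and one test example's softmax.
cal = [0.05, 0.10, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80]
qhat = conformal_quantile(cal, alpha=0.2)
print(qhat, prediction_set([0.55, 0.35, 0.08, 0.02], qhat))
```

The size of the returned set doubles as an uncertainty signal: ambiguous inputs yield larger sets, which is what makes the method attractive for safety-critical deployment.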

Subsections

Unclustered (241 papers)
Advancements in Brain-Computer Interfaces, Robotics, and Artificial Intelligence (120 papers)
Advances in Speech and Language Processing (96 papers)
Advances in 3D Scene Understanding and Generation (90 papers)
Immersive Technologies and Robotics: Advancements in Interaction and Collaboration (84 papers)
Advances in Machine Learning and Optimization (77 papers)
Emerging Trends in Error-Correcting Codes, Music Generation, and Multimodal Processing (75 papers)
Mixture-of-Experts Models and Beyond: Advances in Scalability, Efficiency, and Performance (75 papers)
Advances in Text-to-SQL, Recommender Systems, and Retrieval-Augmented Generation (73 papers)
Advancements in Large Language Models (67 papers)
Diffusion Models: Emerging Trends and Advances (67 papers)
Advances in Neurosymbolic Integration and Multimodal Reasoning (67 papers)
Advances in Multi-Agent Systems and Reinforcement Learning (65 papers)
Advances in Medical Image Analysis and Related Fields (65 papers)
Advancements in Electronic Health Record Analysis and Time Series Forecasting (62 papers)
Advancements in Deep Learning, Semantic Communications, and Wireless Systems (62 papers)
Multimodal Large Language Models: Progress and Innovations (62 papers)
Advancements in Vision-Language Navigation and Related Fields (61 papers)
Safety and Reliability in Large Language Models (58 papers)
Advances in Large Language Models for Strategic Reasoning and Decision Making (58 papers)
Efficient and Adaptive Reasoning in Large Language Models (58 papers)
Mitigating Sycophancy and Advancing AI Research in Labor Markets, Natural Language Processing, and Ethics (57 papers)
Advances in Real-Time Object Detection and Autonomous Systems (56 papers)
Mitigating Hallucinations in Large Language and Vision Models (55 papers)
Advances in Artificial General Intelligence and Large Language Models (55 papers)
Efficient Neural Architectures and Optimization Techniques (55 papers)
Large Language Models in Software Development and Testing (54 papers)
Equivariant Learning and Graph Neural Networks: Emerging Trends and Innovations (53 papers)
Trends in Interpretable AI: Transparency and Accountability in Oncology, Language Models, and Decision-Making Systems (52 papers)
Advancements in Sequence Modeling and Natural Language Processing (52 papers)
Advances in Digital Human Modeling and 3D Reconstruction (49 papers)
Efficient Training and Inference of Large Language Models (49 papers)
Fairness, Privacy, and Efficiency in Machine Learning (47 papers)
Advances in Robot Learning and Manipulation (45 papers)
Integrating Large Language Models and Embodied Intelligence for Autonomous Systems (44 papers)
Advances in Sensor Fusion, Remote Sensing, and Geospatial Intelligence (44 papers)
Advances in Molecular Modeling and Multimodal Generation (44 papers)
Integrating Renewable Energy and Enhancing Grid Stability (42 papers)
Human-AI Collaboration: Emerging Trends and Innovations (42 papers)
Geometric Awareness in Machine Learning: Emerging Trends and Innovations (41 papers)
Fractional Dynamics and Numerical Analysis: Emerging Trends and Innovations (39 papers)
Interpretable Neural Networks and Machine Learning (39 papers)
Molecular Optimization and Design: Leveraging Reinforcement Learning and Generative Models (39 papers)
Advancements in Log Analysis, Formal Methods, and Research Automation (37 papers)
Advancements in Large Language Models: Enhancing Reasoning and Efficiency (37 papers)
Multimodal Knowledge Graphs and Embodied AI: Progress and Innovations (35 papers)
Efficient Inference and Interpretability in AI Models (34 papers)
Aligning Large Language Models with Human Preferences (31 papers)
Advancements in Network Protocol Optimization, Cybersecurity, and Artificial Intelligence (31 papers)
Advances in Numerical Methods and Dimensionality Reduction (30 papers)
Advances in Physiological Monitoring and Signal Processing (30 papers)
Sustainable Computing and Energy Systems: Advances and Innovations (29 papers)
Convergence of Speech, Music, and Human-Computer Interaction (27 papers)
Advancements in Image Editing and Blockchain Research (26 papers)
Advances in Video Generation and Editing (25 papers)
Advances in Language Analysis and AI-Driven Discourse (21 papers)
Smart Contract Generation and Code Development with Large Language Models (19 papers)
Advances in Uncertainty Quantification and Adaptation (18 papers)