Advances in Multimodal Analysis and Hate Detection

Multimodal analysis is evolving rapidly, with growing attention to methods for understanding and detecting hate speech in online content. Research increasingly leverages large vision-language models for joint image-text understanding, with applications in meme analysis and hate detection. Notable papers include CAMU, which introduces a context-augmentation framework for meme understanding and multimodal hate detection, and MemeBLIP2, a lightweight multimodal system for detecting harmful memes. In addition, Detecting and Mitigating Hateful Content in Multimodal Memes with Vision-Language Models proposes methods for transforming hateful meme content, promoting safer online environments.
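To make this line of work concrete, below is a minimal sketch of a multimodal meme hate classifier that fuses image and text embeddings from a pretrained vision-language model. The CLIP backbone, concatenation fusion, two-layer classification head, and the input file meme.png are all illustrative assumptions; this is not the architecture of CAMU, MemeBLIP2, or any other paper listed in the sources.

```python
"""Sketch: multimodal meme hate detection via a frozen CLIP backbone.

Assumptions (not from the cited papers): CLIP ViT-B/32 encoder,
simple concatenation fusion, and a small trainable MLP head.
"""
import torch
import torch.nn as nn
from PIL import Image
from transformers import CLIPModel, CLIPProcessor


class MemeHateClassifier(nn.Module):
    """Fuses frozen CLIP image and text embeddings, then classifies."""

    def __init__(self, clip_name: str = "openai/clip-vit-base-patch32"):
        super().__init__()
        self.clip = CLIPModel.from_pretrained(clip_name)
        for p in self.clip.parameters():  # keep the backbone frozen
            p.requires_grad = False
        dim = self.clip.config.projection_dim  # 512 for ViT-B/32
        self.head = nn.Sequential(
            nn.Linear(2 * dim, 256), nn.ReLU(), nn.Linear(256, 2)
        )

    def forward(self, pixel_values, input_ids, attention_mask):
        img = self.clip.get_image_features(pixel_values=pixel_values)
        txt = self.clip.get_text_features(
            input_ids=input_ids, attention_mask=attention_mask
        )
        fused = torch.cat([img, txt], dim=-1)  # concatenation fusion
        return self.head(fused)  # logits: [not-hateful, hateful]


processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model = MemeHateClassifier()
image = Image.open("meme.png")  # hypothetical meme image
inputs = processor(
    text=["overlaid meme caption"],  # hypothetical OCR'd caption
    images=image, return_tensors="pt", padding=True, truncation=True
)
with torch.no_grad():
    logits = model(
        inputs["pixel_values"], inputs["input_ids"], inputs["attention_mask"]
    )
print(logits.softmax(dim=-1))  # class probabilities
```

The head here would still need supervised training on labeled memes; the systems surveyed above differ mainly in how they augment context and fuse the two modalities before classification.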

Sources

CAMU: Context Augmentation for Meme Understanding

Clustering of return words in languages of interval exchanges

A Unified MDL-based Binning and Tensor Factorization Framework for PDF Estimation

Factor Analysis with Correlated Topic Model for Multi-Modal Data

Score-Debiased Kernel Density Estimation

Sequence Reconstruction for Sticky Insertion/Deletion Channels

How Cohesive Are Community Search Results on Online Social Networks?: An Experimental Evaluation

How Group Lives Go Well

Sequence Reconstruction under Channels with Multiple Bursts of Insertions or Deletions

MemeBLIP2: A novel lightweight multimodal system to detect harmful memes

Mining and Intervention of Social Networks Information Cocoon Based on Multi-Layer Network Community Detection

Efficiently Finding All Minimal and Shortest Absent Subsequences in a String

Belief System Dynamics as Network of Single Layered Neural Network

Clustering Internet Memes Through Template Matching and Multi-Dimensional Similarity

Detecting and Mitigating Hateful Content in Multimodal Memes with Vision-Language Models

Algorithmic Collective Action with Two Collectives
