SiMBA
Simplified Mamba-based architecture for vision and multivariate time series.
Transformers widely adopt attention networks for sequence mixing and MLPs for channel mixing, a design that has played a pivotal role in breakthroughs across domains.
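The sequence-mixing/channel-mixing split can be made concrete with a toy sketch. The code below is illustrative only and not from the paper: a fixed mixing matrix stands in for attention (which mixes across tokens) and a single linear map stands in for the MLP (which mixes across channels, independently per token); matrix names and shapes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy input: a sequence of 4 tokens, each a 3-dimensional channel vector.
x = rng.random((4, 3))

# Sequence (token) mixing: combine information ACROSS tokens.
# A 4x4 mixing matrix stands in for attention weights here.
seq_mix = rng.random((4, 4))
seq_mixed = seq_mix @ x        # each output token is a blend of all input tokens

# Channel mixing: combine information ACROSS channels, per token.
# A 3x3 linear map stands in for the transformer's MLP.
chan_mix = rng.random((3, 3))
chan_mixed = x @ chan_mix      # each token's channels are blended independently

# Both operations preserve the (sequence, channels) shape.
assert seq_mixed.shape == x.shape
assert chan_mixed.shape == x.shape
```

Note that sequence mixing acts on the left of `x` (over the token axis) while channel mixing acts on the right (over the feature axis); SiMBA's premise is to replace the attention-based sequence mixer with a Mamba-style state-space model.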