Computer Science, Georgia State University, USA.
World Journal of Advanced Research and Reviews, 2026, 29(01), 1886-1901
Article DOI: 10.30574/wjarr.2026.29.1.0242
Received on 22 December 2025; revised on 28 January 2026; accepted on 31 January 2026
Multimodal large language models (MLLMs) rely heavily on vision encoders to understand diverse image content. While recent approaches have explored combining multiple vision experts to address the limitations of single encoders, they typically perform image-level expert selection and fusion, ignoring the spatial heterogeneity within images where different regions may benefit from different experts. In this paper, we propose ViMoE (Vision Mixture of Experts with Multimodal Context Awareness), a novel MLLM that introduces three key innovations: (1) Token-Level Sparse Expert Activation (TLSEA) that enables different spatial tokens to utilize different expert combinations, allowing fine-grained, content-aware feature extraction; (2) Hierarchical Context Aggregation (HCA) that captures multi-scale visual context to guide expert routing at different granularities; and (3) Expert Confidence Calibration (ECC) that learns to estimate and calibrate expert contribution confidence to reduce noise from unreliable features. Through these innovations, ViMoE achieves more precise expert utilization by recognizing that a single image often contains diverse content requiring different visual expertise. Extensive experiments demonstrate that ViMoE achieves significant improvements over state-of-the-art methods across challenging multimodal benchmarks including MME, MMBench, and various VQA tasks, while maintaining computational efficiency through sparse activation patterns. Code is available at: https://arrel.github.io/vimoe/
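The paper does not include implementation details here, but the core idea of Token-Level Sparse Expert Activation, letting each spatial token pick its own top-k experts rather than selecting experts once per image, can be illustrated with a minimal sketch. All names (`token_level_sparse_routing`, the toy experts, the gating matrix) are hypothetical and assumed for illustration; this is not the authors' code.

```python
import numpy as np

def token_level_sparse_routing(tokens, expert_fns, gate_w, k=2):
    """Hypothetical sketch: route each spatial token to its own top-k experts.

    tokens:     (N, D) array of vision-token features
    expert_fns: list of E callables, each mapping (N, D) -> (N, D)
    gate_w:     (D, E) gating weights producing per-token expert logits
    k:          number of experts activated per token (sparse activation)
    """
    logits = tokens @ gate_w                         # (N, E) per-token expert scores
    topk = np.argsort(logits, axis=1)[:, -k:]        # top-k expert indices per token
    # Softmax over only the selected experts; unselected experts get zero weight
    sel = np.take_along_axis(logits, topk, axis=1)
    sel = np.exp(sel - sel.max(axis=1, keepdims=True))
    weights = sel / sel.sum(axis=1, keepdims=True)   # (N, k) mixing weights

    # For clarity every expert is evaluated densely here; a real sparse
    # implementation would dispatch only the tokens assigned to each expert.
    expert_out = [f(tokens) for f in expert_fns]
    out = np.zeros_like(tokens)
    for j in range(k):
        idx = topk[:, j]                             # expert chosen in slot j, per token
        w = weights[:, j:j + 1]
        out += w * np.stack([expert_out[e][n] for n, e in enumerate(idx)])
    return out, topk
```

Because `topk` is computed per row, two tokens in the same image can activate entirely different expert subsets, which is the fine-grained, content-aware behavior the abstract contrasts with image-level expert selection.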
Vision Mixture of Experts; Token-level routing; Multimodal large language model; Hierarchical context aggregation; Confidence calibration; Sparse expert activation
Adele Chinda. ViMoE: Vision Mixture of Experts with Multimodal Context Awareness. World Journal of Advanced Research and Reviews, 2026, 29(01), 1886-1901. Article DOI: https://doi.org/10.30574/wjarr.2026.29.1.0242.
Copyright © 2026. The Author(s) retain the copyright of this article. This article is published under the terms of the Creative Commons Attribution License 4.0.