World Journal of Advanced Research and Reviews
eISSN: 2581-9615 || CODEN (USA): WJARAI || Impact Factor: 8.2 || ISSN Approved Journal

ViMoE: Vision Mixture of Experts with Multimodal Context Awareness


Adele Chinda *

Computer Science, Georgia State University, USA.

Research Article

World Journal of Advanced Research and Reviews, 2026, 29(01), 1886-1901

Article DOI: 10.30574/wjarr.2026.29.1.0242

DOI url: https://doi.org/10.30574/wjarr.2026.29.1.0242

Received on 22 December 2025; revised on 28 January 2026; accepted on 31 January 2026

Multimodal large language models (MLLMs) rely heavily on vision encoders to understand diverse image content. While recent approaches have explored combining multiple vision experts to address the limitations of single encoders, they typically perform image-level expert selection and fusion, ignoring the spatial heterogeneity within images where different regions may benefit from different experts. In this paper, we propose ViMoE (Vision Mixture of Experts with Multimodal Context Awareness), a novel MLLM that introduces three key innovations: (1) Token-Level Sparse Expert Activation (TLSEA) that enables different spatial tokens to utilize different expert combinations, allowing fine-grained, content-aware feature extraction; (2) Hierarchical Context Aggregation (HCA) that captures multi-scale visual context to guide expert routing at different granularities; and (3) Expert Confidence Calibration (ECC) that learns to estimate and calibrate expert contribution confidence to reduce noise from unreliable features. Through these innovations, ViMoE achieves more precise expert utilization by recognizing that a single image often contains diverse content requiring different visual expertise. Extensive experiments demonstrate that ViMoE achieves significant improvements over state-of-the-art methods across challenging multimodal benchmarks including MME, MMBench, and various VQA tasks, while maintaining computational efficiency through sparse activation patterns. Code is available at: https://arrel.github.io/vimoe/ 
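The core idea behind TLSEA, as the abstract describes it, is that each spatial token selects its own top-k subset of vision experts rather than the whole image sharing one fusion. The sketch below is an illustrative reconstruction, not the paper's implementation (which is at the linked repository): the gating weights, tensor shapes, and the simple top-k softmax router are all assumptions made for demonstration.

```python
import numpy as np

def token_level_routing(tokens, expert_outputs, gate_w, k=2):
    """Illustrative token-level sparse expert activation.

    tokens:         (T, d)    per-token context features used for routing
    expert_outputs: (E, T, d) features each vision expert produced per token
    gate_w:         (d, E)    gating weights (random here; learned in practice)
    k:              number of experts activated per token
    """
    logits = tokens @ gate_w                        # (T, E) routing scores
    # Keep only the top-k experts per token; mask out the rest.
    topk = np.argsort(logits, axis=1)[:, -k:]       # (T, k) expert indices
    mask = np.zeros_like(logits)
    np.put_along_axis(mask, topk, 1.0, axis=1)
    masked = np.where(mask > 0, logits, -np.inf)
    # Softmax over the surviving experts -> sparse per-token mixture weights.
    w = np.exp(masked - masked.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)               # (T, E), zeros off top-k
    # Fuse: per-token weighted sum of expert features.
    fused = np.einsum('te,etd->td', w, expert_outputs)
    return fused, w

rng = np.random.default_rng(0)
T, E, d = 4, 3, 8                                   # tokens, experts, dim
tokens = rng.normal(size=(T, d))
experts = rng.normal(size=(E, T, d))
gate_w = rng.normal(size=(d, E))
fused, w = token_level_routing(tokens, experts, gate_w, k=2)
```

Because different tokens activate different expert pairs, a single image mixes expertise region by region, and the masked softmax means only k of the E expert feature maps contribute to any given token, which is where the claimed computational efficiency of sparse activation comes from.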

Keywords: Vision Mixture of Experts; Token-level routing; Multimodal large language model; Hierarchical context aggregation; Confidence calibration; Sparse expert activation

Full text PDF: https://journalwjarr.com/sites/default/files/fulltext_pdf/WJARR-2026-0242.pdf


Adele Chinda. ViMoE: Vision Mixture of Experts with Multimodal Context Awareness. World Journal of Advanced Research and Reviews, 2026, 29(01), 1886-1901. Article DOI: https://doi.org/10.30574/wjarr.2026.29.1.0242.

Copyright © 2026. The author(s) retain the copyright of this article, which is published under the terms of the Creative Commons Attribution License 4.0.
