LG Unveils EXAONE 4.5: Multimodal AI That Outperforms GPT-5 Mini

Joachim Høgby
April 9, 2026 · 3 min read

LG AI Research has launched EXAONE 4.5, its latest multimodal AI model capable of understanding and reasoning across both text and images simultaneously.

Announced on April 9, 2026, the model represents a significant upgrade over previous EXAONE versions. According to LG, EXAONE 4.5 surpasses OpenAI's GPT-5 mini and Alibaba's Qwen-3-VL on visual understanding benchmarks.

What Makes EXAONE 4.5 Special?

The new model combines text understanding with advanced image analysis in a unified architecture. This enables it to:

  • Analyze complex visual scenarios with contextual understanding
  • Integrate text and image data for more accurate responses
  • Reason across modalities, combining visual and textual evidence in a single answer

Performance Against Competitors

LG's internal testing shows that EXAONE 4.5 consistently outperforms established models like GPT-5 mini on standardized visual benchmarks. This positions it as a serious challenger to Western AI giants.

Strategic Significance

The launch underscores South Korea's ambition to compete with the US and China in AI. LG has invested heavily in AI research and sees EXAONE as a key technology for future products.

EXAONE 4.5 is available for developers and businesses looking to implement multimodal AI in their applications.

📬 Liked this one?

AI news for leaders. Curated by a CIO who builds it himself. Daily in your inbox.