
Top 10 Multimodal Models

July 16, 2024 | 5 mins

Artificial intelligence (AI) is undergoing a significant shift as its capabilities expand beyond straightforward predictions on tabular data. With greater computing power and state-of-the-art (SOTA) deep learning algorithms, large multimodal models are coming to dominate the AI landscape.

Reports suggest the multimodal AI market will grow by 35% annually to USD 4.5 billion by 2028 as the demand for analyzing extensive unstructured data increases. These models can comprehend multiple data modalities simultaneously and generate more accurate predictions than their traditional counterparts.

In this article, we will discuss what multimodal models are, how they work, the top models in 2024, current challenges, and future trends.

What are Multimodal Models?

Multimodal models are AI deep-learning models that simultaneously process different modalities, such as text, video, audio, and image, to generate outputs. Multimodal frameworks contain mechanisms to integrate multimodal data collected from multiple sources for more context-specific and comprehensive understanding.

In contrast, unimodal models process a single data modality at a time. For instance, You Only Look Once (YOLO) is a popular object detection model that only understands visual data.

Figure: Unimodal vs. Multimodal Framework

While unimodal models are less complex than multimodal algorithms, multimodal systems offer greater accuracy and enhanced user experience. Due to these benefits, multimodal frameworks are helpful in multiple industrial domains.

For instance, manufacturers use autonomous mobile robots that process data from multiple sensors to localize objects. Moreover, healthcare professionals use multimodal models to diagnose diseases using medical images and patient history reports.

How Do Multimodal Models Work?

Although multimodal models have varied architectures, most frameworks have a few standard components. A typical architecture includes an encoder, a fusion mechanism, and a decoder.

Figure: Multimodal model architecture

Encoders

Encoders transform raw multimodal data into machine-readable feature vectors or embeddings that models use as input to understand the data’s content. 

Figure: Embeddings

Multimodal models often have a separate encoder for each data type: image, text, and audio. A short code sketch of these encoders follows the list below.

  • Image Encoders: Convolutional neural networks (CNNs) are a popular choice for an image encoder. CNNs can convert image pixels into feature vectors to help the model understand critical image properties.
  • Text Encoders: Text encoders transform text descriptions into embeddings that models can use for further processing. They often use transformer models like those in Generative Pre-Trained Transformer (GPT) frameworks.
  • Audio Encoders: Audio encoders convert raw audio files into usable feature vectors that capture critical audio patterns, including rhythm, tone, and context. Wav2Vec2 is a popular choice for learning audio representations.
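
As a rough illustration of this encoding step, the sketch below uses a small CNN backbone as an image encoder and a pretrained transformer as a text encoder. It is not taken from any specific model in this article; the model names, pooling choice, and embedding sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18
from transformers import AutoModel, AutoTokenizer

# Image encoder: a CNN backbone whose pooled features serve as the image embedding.
# (weights=None keeps the example lightweight; a pretrained backbone is typical in practice.)
image_encoder = resnet18(weights=None)
image_encoder.fc = nn.Identity()  # drop the classifier head, keep the 512-d pooled features

# Text encoder: a pretrained transformer; token states are mean-pooled into one vector.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
text_encoder = AutoModel.from_pretrained("bert-base-uncased")

image = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image tensor
tokens = tokenizer("a dog catching a frisbee", return_tensors="pt")

with torch.no_grad():
    image_emb = image_encoder(image)                                  # shape: (1, 512)
    text_emb = text_encoder(**tokens).last_hidden_state.mean(dim=1)   # shape: (1, 768)
```

Whatever the backbone, the output of this stage is a fixed-size embedding per modality that the fusion step can work with.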

Fusion Mechanism Strategies

Once the encoders transform multiple modalities into embeddings, the next step is to combine them so the model can understand the broader context reflected in all data types. Developers can use various fusion strategies according to the use case.

The list below outlines the key fusion strategies; a short code sketch of early and late fusion follows the list.

  • Early Fusion: Combines all modalities before passing them to the model for processing.
  • Intermediate Fusion: Projects each modality onto a latent space and fuses the latent representations for further processing.
  • Late Fusion: Processes all modalities in their raw form and fuses the output for each.
  • Hybrid Fusion: Combines early, intermediate, and late fusion strategies at different model processing phases.
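
The sketch below contrasts two of these strategies in PyTorch. The embedding sizes, classification heads, and number of classes are placeholders, not taken from any model discussed here.

```python
import torch
import torch.nn as nn

text_emb = torch.randn(1, 768)   # placeholder text embedding
image_emb = torch.randn(1, 512)  # placeholder image embedding

# Early/intermediate-style fusion: concatenate the embeddings, then apply a joint head.
joint_head = nn.Sequential(nn.Linear(768 + 512, 256), nn.ReLU(), nn.Linear(256, 10))
early_logits = joint_head(torch.cat([text_emb, image_emb], dim=-1))

# Late fusion: each modality is processed by its own head; the outputs are averaged at the end.
text_head = nn.Linear(768, 10)
image_head = nn.Linear(512, 10)
late_logits = (text_head(text_emb) + image_head(image_emb)) / 2
```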

Fusion Mechanism Methods

While the list above mentions the high-level fusion strategies, developers can use multiple methods within each strategy to fuse the relevant modalities.

Attention-based Methods

Attention-based methods use the transformer architecture to convert embeddings from multiple modalities into a query-key-value structure. The technique emerged from the seminal 2017 paper "Attention Is All You Need."

Researchers initially employed the method for improving language models, as attention networks allowed these models to have longer context windows. However, developers now use attention-based methods in other domains, including computer vision (CV) and generative AI.

Attention networks allow models to understand relationships between embeddings for context-aware processing. Cross-modal attention frameworks fuse different modalities in a multimodal context according to the inter-relationships between each data type.

For instance, an attention filter will allow the model to understand which parts of a text prompt relate to an image’s visual embeddings, leading to a more efficient fusion output.
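
A minimal sketch of this idea uses PyTorch's built-in multi-head attention with text tokens as queries and image patches as keys and values; the dimensions and token counts below are arbitrary stand-ins.

```python
import torch
import torch.nn as nn

d_model = 256
text_tokens = torch.randn(1, 12, d_model)    # 12 text token embeddings (queries)
image_patches = torch.randn(1, 49, d_model)  # 49 image patch embeddings (keys/values)

cross_attention = nn.MultiheadAttention(embed_dim=d_model, num_heads=8, batch_first=True)

# Each text token attends over the image patches, producing text features
# conditioned on the visual content.
fused, attention_weights = cross_attention(
    query=text_tokens, key=image_patches, value=image_patches
)
print(fused.shape, attention_weights.shape)  # (1, 12, 256), (1, 12, 49)
```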

Concatenation

Concatenation is a straightforward fusion technique that merges multiple embeddings into a single feature representation.

For instance, the method will concatenate a textual embedding with a visual feature vector to generate a consolidated multimodal feature.

The method helps in intermediate fusion strategies by combining the latent representations for each modality.

Dot-Product

The dot-product method involves element-wise multiplication of feature vectors from different modalities. It helps capture the interactions and correlations between modalities, assisting models to understand the commonalities among different data types.

However, it only helps in cases where the feature vectors do not suffer from high dimensionality. Taking dot-products of high-dimensional vectors may require extensive computational power and result in features that only capture common patterns between modalities, disregarding critical nuances.
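
The snippet below sketches both techniques: plain concatenation, and a projection-then-element-wise-product fusion that first maps each modality into a shared, lower-dimensional space. All sizes are illustrative.

```python
import torch
import torch.nn as nn

text_emb = torch.randn(1, 768)
image_emb = torch.randn(1, 512)

# Concatenation keeps every feature from both modalities side by side.
concatenated = torch.cat([text_emb, image_emb], dim=-1)          # shape: (1, 1280)

# Element-wise (dot-product style) fusion requires a shared dimensionality,
# so each modality is first projected into a common, lower-dimensional space.
project_text = nn.Linear(768, 256)
project_image = nn.Linear(512, 256)
multiplied = project_text(text_emb) * project_image(image_emb)   # shape: (1, 256)
```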

Decoders

The last component is a decoder network that processes the feature vectors from different modalities to produce the required output.

Decoders can contain cross-modal attention networks to focus on different parts of input data and produce relevant outputs. For instance, translation models often use cross-attention techniques to understand the meanings of sentences in different languages simultaneously.

Recurrent neural networks (RNNs), convolutional neural networks (CNNs), and generative adversarial networks (GANs) are popular choices for constructing decoders for tasks involving sequential, visual, or generative processes.

Learn how multimodal models work in our detailed guide on multimodal learning

Multimodal Models: Use Cases

With recent advancements in multimodal models, AI systems can perform complex tasks involving the simultaneous integration and interpretation of multiple modalities.

These capabilities allow users to implement AI in large-scale environments with extensive and diverse data sources that require robust processing pipelines.

The list below mentions a few of these tasks that multimodal models perform efficiently.

  • Visual Question-Answering (VQA): VQA involves a model answering user queries about visual content. For instance, a healthcare professional may ask a multimodal model about the content of an X-ray scan. By combining visual and textual prompts, multimodal models provide relevant and accurate responses to help users perform VQA.
  • Image-to-Text and Text-to-Image Search: Multimodal models help users build powerful search engines in which natural language queries retrieve particular images. They can also build systems that retrieve relevant documents in response to image-based queries. For instance, a user may give an image as input to prompt the system to search for relevant blogs and articles containing the image.
  • Generative AI: Generative AI models help users with text and image generation tasks that require multimodal capabilities. For instance, multimodal models can help users with image captioning, where they ask the model to generate relevant labels for a particular image. They can also use these models for natural language processing (NLP) use cases that involve generating textual descriptions based on video, image, or audio data.
  • Image Segmentation: Image segmentation involves dividing an image into regions to distinguish between different elements within an image.

Figure: Segmentation

Multimodal models can help users perform segmentation more quickly by segmenting areas automatically based on textual prompts. For instance, users can ask the model to segment and label items in the image’s background.

Top Multimodal Models

Multimodal models are an active research area where experts build state-of-the-art frameworks to address complex issues using AI.

The following sections will briefly discuss the latest models to help you understand how multimodal AI is evolving to solve real-world problems in multiple domains.

CLIP

Contrastive Language-Image Pre-training (CLIP) is a multimodal vision-language model by OpenAI that performs image classification by pairing textual descriptions with corresponding images to generate relevant image labels.

Figure: CLIP

Key Features

  • Contrastive Framework: CLIP uses a contrastive loss function to optimize its learning objective. The approach minimizes the distance between matching text-image pairs and maximizes it for mismatched pairs, helping the model understand which text best describes an image’s content.
  • Text and Image Encoders: The architecture uses a transformer-based text encoder and a Vision Transformer (ViT) as an image encoder.
  • Zero-shot Capability: Once CLIP learns to associate text with images, it can quickly generalize to new data and generate relevant captions for new unseen images without task-specific fine-tuning.

Use Case

Due to its versatility, CLIP can help users perform multiple tasks, such as image annotation for creating training data, image retrieval for AI-based search systems, and generating textual descriptions from image prompts.
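
For example, a minimal zero-shot classification sketch with the Hugging Face transformers port of CLIP might look like the following; the checkpoint is the commonly used openai/clip-vit-base-patch32, and the image path and candidate labels are hypothetical.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("street_scene.jpg")  # hypothetical local image
labels = ["a photo of a car", "a photo of a bicycle", "a photo of a bus"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Softmax over the image-text similarity scores gives a probability per candidate label.
probabilities = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(labels, probabilities.squeeze().tolist())))
```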

Want to learn how to evaluate the CLIP model? Read our blog on evaluating CLIP with Encord Active

DALL-E

DALL-E is a generative model by OpenAI that creates images from text prompts using a framework similar to GPT-3. It can combine unrelated concepts to produce unique images involving objects, animals, and text.

Figure: DALL-E

Key Features

  • CLIP-based Architecture: DALL-E uses the CLIP model as a prior for associating textual descriptions with visual semantics. The method helps DALL-E encode the text prompt into a relevant visual representation in the latent space.
  • A Diffusion Decoder: The decoder module in DALL-E uses the diffusion mechanism to generate images conditioned on textual descriptions.
  • Larger Context Window: DALL-E is a 12-billion-parameter model that can process text and image data streams containing up to 1,280 tokens. This capability allows the model to generate images from scratch and manipulate existing images.

Use Case

DALL-E can help generate abstract images and transform existing ones. This functionality allows businesses to visualize new product ideas and helps students understand complex visual concepts.

LLaVA

Large Language and Vision Assistant (LLaVA) is an open-source large multimodal model that combines Vicuna and CLIP to answer queries containing images and text. The model achieves SOTA performance in chat-related tasks with a 92.53% accuracy on the Science QA dataset.

Figure: LLaVA

Key Features

  • Multimodal Instruction-following Data: LLaVA is trained on instruction-following data generated with ChatGPT/GPT-4. The data contains questions regarding visual content, with responses in the form of conversations, descriptions, and complex reasoning.
  • Language Decoder: LLaVA connects Vicuna as the language decoder with a CLIP visual encoder and fine-tunes the combined model on the instruction-following dataset.
  • Trainable Projection Matrix: The model implements a trainable projection matrix to map visual representations onto the language embedding space.

Use Case

LLaVA is a robust visual assistant that can help users create advanced chatbots for multiple domains. For instance, LLaVA can help create a chatbot for an e-commerce site where users can provide an item’s image and ask the bot to search for similar items across the website.
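
As a rough sketch of querying such a visual assistant, the snippet below assumes the community llava-hf/llava-1.5-7b-hf checkpoint hosted on Hugging Face; the prompt template, image file, and hardware setup are assumptions and may differ between LLaVA releases.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

# Assumed community checkpoint; running the 7B model needs a capable GPU
# and the accelerate package for device_map="auto".
model_id = "llava-hf/llava-1.5-7b-hf"
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

image = Image.open("product_photo.jpg")  # hypothetical e-commerce item image
prompt = "USER: <image>\nDescribe this item so similar products can be found. ASSISTANT:"

inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device, torch.float16)
output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0], skip_special_tokens=True))
```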

CogVLM

Cognitive Visual Language Model (CogVLM) is an open-source visual language foundation model that uses deep fusion techniques to achieve superior vision and language understanding. The model achieves SOTA performance on seventeen cross-modal benchmarks, including image captioning and VQA datasets.

Figure: CogVLM

Key Features

  • Attention-based Fusion: The model uses a visual expert module that includes attention layers to fuse text and image embeddings. The technique helps retain the performance of the LLM by keeping its layers frozen.
  • ViT Encoder: It uses EVA2-CLIP-E as the visual encoder and a multi-layer perceptron (MLP) adapter to map visual features onto the same space as text features.
  • Pre-trained Large Language Model (LLM): CogVLM 17B uses Vicuna 1.5-7B as the LLM for transforming textual features into word embeddings.

Use Case

Like LLaVA, CogVLM can help users perform VQA tasks and generate detailed textual descriptions based on visual cues. It can also supplement visual grounding tasks that involve identifying the most relevant objects within an image based on a natural language query.

Gen2

Gen2 is a powerful text-to-video and image-to-video model by Runway that can generate realistic videos based on textual and visual prompts. It uses diffusion-based models to create context-aware videos using image and text samples as guides.

Figure: Gen2

Key Features

  • Encoder: Gen2 uses an autoencoder to map input video frames onto a latent space and diffuse them into low-dimensional vectors.
  • Structure and Content: It uses MiDaS, an ML model that estimates the depth of input video frames, to capture structure. It also uses CLIP to encode video frames into content representations.
  • Cross-Attention: The model uses a cross-modal attention mechanism to merge the diffused vector with the content and structure representations derived from MiDaS and CLIP. It then performs the reverse diffusion process conditioned on content and structure to generate videos.

Use Case

Gen2 can help content creators generate video clips using text and image prompts. They can also generate stylized videos that map a particular image’s style onto an existing video.

ImageBind

ImageBind is a multimodal model by Meta AI that can combine data from six modalities, including text, video, audio, depth, thermal, and inertial measurement unit (IMU), into a single embedding space. It can then use any modality as input to generate output in any of the mentioned modalities.

Figure: ImageBind

Key Features

  • Output: ImageBind supports cross-modal generation and retrieval, including audio-to-image, image-to-audio, text-to-image-and-audio, and audio-and-image-to-image.
  • Image Binding: The model pairs image data with other modalities to train the network. For instance, it finds relevant textual descriptions related to specific images and pairs videos from the web with similar images.
  • Optimization Loss: It uses the InfoNCE loss, where NCE stands for noise-contrastive estimation. The loss function uses contrastive approaches to align non-image modalities with specific images.

Use Cases

ImageBind’s extensive multimodal capabilities make the model applicable in multiple domains. For instance, users can generate relevant promotional videos with the desired audio by providing a straightforward textual prompt.
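
The sketch below follows the usage pattern shown in Meta's open-source ImageBind repository at the time of writing (the imagebind package and the imagebind_huge checkpoint); the helper names and file paths are assumptions and may have changed since.

```python
import torch
from imagebind import data
from imagebind.models import imagebind_model
from imagebind.models.imagebind_model import ModalityType

device = "cuda" if torch.cuda.is_available() else "cpu"
model = imagebind_model.imagebind_huge(pretrained=True).to(device).eval()

# Embed text, an image, and an audio clip into the same space,
# then compare them with a simple similarity score.
inputs = {
    ModalityType.TEXT: data.load_and_transform_text(["a dog barking"], device),
    ModalityType.VISION: data.load_and_transform_vision_data(["dog.jpg"], device),   # hypothetical files
    ModalityType.AUDIO: data.load_and_transform_audio_data(["bark.wav"], device),
}
with torch.no_grad():
    embeddings = model(inputs)

text_vision_similarity = embeddings[ModalityType.TEXT] @ embeddings[ModalityType.VISION].T
print(text_vision_similarity)
```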

Flamingo

Flamingo is a vision-language model by DeepMind that can take videos, images, and text as input and generate textual responses regarding the image or video. The model allows for few-shot learning, where users provide a few samples to prompt the model to create relevant responses.

Figure: Flamingo

Key Features

  • Encoders: The model uses a frozen, pre-trained Normalizer-Free ResNet, trained with a contrastive objective, as the vision encoder. The encoder transforms image and video pixels into 1-dimensional feature vectors.
  • Perceiver Resampler: The perceiver resampler generates a small number of visual tokens for every image and video. This method helps reduce computational complexity in cases of images and videos with an extensive feature set.
  • Cross-Attention Layers: Flamingo incorporates cross-attention layers between the layers of the frozen LLM to fuse visual and textual features.

Use Case

Flamingo can help with image captioning, classification, and VQA. The user must frame these tasks as text-prediction problems conditioned on visual cues.

GPT-4o

GPT-4 Omni (GPT-4o) is a large multimodal model that can take audio, video, text, and image as input and generate text, audio, and image output in real time. The model offers a more interactive user experience as it can respond to prompts with human-level efficiency.

Figure: GPT-4o

Key Features

  • Response Time: The model can respond to audio prompts within 320 milliseconds on average, comparable to human conversational response times.
  • Multilingual: GPT-4o can understand over fifty languages, including Hindi, Arabic, Urdu, French, and Chinese.
  • Performance: The model achieves GPT-4 Turbo-level performance on multiple benchmarks covering text, reasoning, and coding.

Use Case

GPT-4o can generate text, audio, and images with nuances such as tone, rhythm, and emotion specified in the user prompt. This capability can help users create more engaging and relevant content for marketing purposes.
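
A minimal sketch of calling GPT-4o with a mixed text-and-image prompt through the OpenAI Python client is shown below; the image URL is a placeholder, and an OPENAI_API_KEY environment variable is assumed.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe the mood of this image in one sentence."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},  # placeholder URL
            ],
        }
    ],
)
print(response.choices[0].message.content)
```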

Gemini

Google Gemini is a family of multimodal models that can process audio, video, text, and image data. Gemini comes in three variants: Ultra for complex tasks, Pro for large-scale deployment, and Nano for on-device implementation.

Figure: Gemini

Key Features

  • Larger Context Window: The latest Gemini versions, 1.5 Pro and 1.5 Flash, have long context windows, making them capable of processing long-form video, text, and code. Gemini 1.5 Pro supports up to two million tokens, and 1.5 Flash supports up to one million tokens.
  • Transformer-based Architecture: Google trained the model on interleaved text, image, video, and audio sequences using a transformer. Using the multimodal input, the model generates images and text as output.
  • Post-training: The model uses supervised fine-tuning and reinforcement learning with human feedback (RLHF) to improve response quality and safety.

Use Case

The three Gemini model versions allow users to implement Gemini in multiple domains. For instance, Gemini Ultra can help developers generate complex code, Pro can help teachers check students’ hand-written answers, and Nano can help businesses build on-device virtual assistants.
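
The sketch below shows one of these use cases, checking a hand-written answer, using the google-generativeai Python package; the API key, file name, and model ID are placeholders.

```python
from PIL import Image
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-pro")

# A multimodal request: an image and a text instruction sent together.
image = Image.open("handwritten_answer.jpg")  # hypothetical scan of a student's answer
response = model.generate_content(
    [image, "Transcribe this handwritten answer and point out any mistakes."]
)
print(response.text)
```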

Claude 3

Claude 3 is a vision-language model by Anthropic that includes three variants in increasing order of performance: Haiku, Sonnet, and Opus. Opus exhibits SOTA performance across multiple benchmarks, including undergraduate and graduate-level reasoning.

Figure: Claude Intelligence vs. Cost by Variant

Key Features

  • Long Recall: Claude 3 can process input sequences of more than 1 million tokens with powerful recall.
  • Visual Capabilities: The model can understand photos, charts, graphs, and diagrams while processing research papers in less than three seconds.
  • Better Safety: Claude 3 recognizes and responds to harmful prompts with more subtlety, respecting safety protocols while maintaining higher accuracy.

Use Case

Claude 3 can be a significant educational tool as it comprehends dense data and technical language, including complex diagrams and figures.
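
As a sketch of that kind of use, the snippet below sends a diagram image and a question to Claude 3 Opus through the Anthropic Python SDK; the file name is hypothetical, and an ANTHROPIC_API_KEY environment variable is assumed.

```python
import base64

import anthropic

client = anthropic.Anthropic()  # assumes an ANTHROPIC_API_KEY environment variable is set

with open("circuit_diagram.png", "rb") as f:  # hypothetical figure from a textbook
    image_b64 = base64.b64encode(f.read()).decode()

message = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=500,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {"type": "base64", "media_type": "image/png", "data": image_b64},
                },
                {"type": "text", "text": "Explain what this diagram shows in simple terms."},
            ],
        }
    ],
)
print(message.content[0].text)
```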

Challenges and Future Trends

While multimodal models offer significant benefits through superior AI capabilities, building and deploying them is challenging. The list below mentions a few of these challenges along with possible mitigation strategies.

Challenges

  • Data Availability: Although data for each modality exists, aligning these datasets is complex and results in noise during multimodal learning. Helpful mitigation strategies include using pre-trained foundation models, data augmentation techniques, and few-shot learning techniques to train multimodal models.
  • Data Annotation: Annotating multimodal data requires extensive expertise and resources to ensure consistent and accurate labeling across different data types. Developers can address this issue using third-party annotation tools to streamline the annotation process.
  • Model Complexity: The complex architectural design makes training a multimodal model computationally expensive and prone to overfitting. Strategies such as knowledge distillation, quantization, and regularization can help mitigate these problems and boost generalization performance.

Future Trends

Despite the challenges, research in multimodal systems is ongoing, leading to productive developments concerning data collection and annotation tools, training methods, and explainable AI.

  • Data Collection and Annotation Tools: Users can invest in end-to-end AI platforms that offer multiple tools to collect, curate, and annotate complex datasets. For instance, Encord is an end-to-end AI solution that offers Encord Index to collect, curate, and organize image and video datasets, and Encord Annotate to label data items using micro-models and automated labeling algorithms.
  • Training Methods: Advancements in training strategies allow users to develop complex models using small data samples. For instance, few-shot, one-shot, and zero-shot learning techniques can help developers train models on small datasets while ensuring high generalization ability to unseen data.
  • Explainable AI (XAI): XAI helps developers understand a model’s decision-making process in more detail. For instance, attention-based networks allow users to visualize which parts of data the model focuses on during inference. Development in XAI methods will enable experts to delve deeper into the causes of potential biases and inconsistencies in model outputs.

Multimodal Models: Key Takeaways

Multimodal models are revolutionizing human-AI interaction by allowing users and businesses to implement AI in complex environments requiring an advanced understanding of real-world data.

Below are a few critical points regarding multimodal models:

  1. Multimodal Model Architecture: Multimodal models include an encoder to map raw data from different modalities into feature vectors, a fusion strategy to consolidate data modalities, and a decoder to process the merged embeddings to generate relevant output.
  2. Fusion Mechanism: Attention-based methods, concatenation, and dot-product techniques are popular choices for fusing multimodal data.
  3. Multimodal Use Cases: Multimodal models help in visual question-answering (VQA), image-to-text and text-to-image search, generative AI, and image segmentation tasks.
  4. Top Multimodal Models: CLIP, DALL-E, and LLaVA are popular multimodal models that can process video, image, and textual data.
  5. Multimodal Challenges: Building multimodal models involves challenges such as data availability, annotation, and model complexity. However, experts can overcome these problems through modern learning techniques, automated labeling tools, and regularization methods.


Written by Haziqa Sajid
Frequently asked questions
  • What are multimodal models? Multimodal models are AI algorithms that simultaneously process multiple data modalities, such as text, image, video, and audio, to generate more context-aware output.
  • How do large multimodal models differ from large language models? Large Multimodal Models (LMMs) process data from multiple data modalities, while Large Language Models (LLMs) only work with textual data.
  • Where are multimodal models used? Multimodal models have applications in healthcare for medical image analysis, in retail for visual search, and in education to teach students concepts through images, audio, text, and videos.
  • Which popular multimodal models were released in 2024? GPT-4o, Gemini, and ImageBind are a few popular multimodal models released in 2024.
  • What challenges do developers face when building multimodal models? Data availability, annotation, and model complexity are a few issues that developers face when building multimodal models.
