Learn how multimodal AI merges text, image, and audio for smarter models
Neocortex Unity SDK for Smart NPCs and Virtual Assistants
VLDBench: A large-scale benchmark for evaluating Vision-Language Models (VLMs) and Large Language Models (LLMs) on multimodal disinformation detection.
Gallery showcasing AI-generated images and videos created using the Nova model
This repository contains 12+ hands-on projects built using Google Gemini Pro, Gemini Flash, and Gemini Pro 1.5 models. It covers fine-tuning, training, and deploying generative AI applications for text generation, image synthesis, language translation, chatbots, and more.
Lab website
A basic application that uses Google's Gemini AI to automatically capture, analyze, and answer quiz questions from screenshots in real time. The current setup targets macOS and needs further testing.
Al-Asma'i: The Digital Poet is an innovative AI model. Given the name of an Arabic poem, it generates the poem's text and converts it into visual and audio presentations. By combining AI with traditional Arabic poetry, it offers a unique experience where users can see and hear their favorite poems in a new and expressive way.
This course teaches you to integrate text, images, and videos into applications using Gemini's state-of-the-art multimodal models. Learn advanced prompting techniques, cross-modal reasoning, and how to extend Gemini's capabilities with real-time data and API integration.
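To illustrate the kind of cross-modal prompting such a course covers, here is a minimal sketch of a mixed text-and-image request using the `google-generativeai` Python package. The `build_multimodal_prompt` helper and the quiz-screenshot question are hypothetical; the actual API call is guarded behind an environment check so the prompt-assembly logic runs on its own.

```python
# Sketch: assembling a multimodal (text + image) prompt for a Gemini model.
# Assumes the `google-generativeai` package; the helper name and question
# text are illustrative, not from any specific repository above.
import os

def build_multimodal_prompt(question: str, image_bytes: bytes,
                            mime_type: str = "image/png") -> list:
    """Assemble the mixed image+text parts list that Gemini models accept."""
    return [
        {"mime_type": mime_type, "data": image_bytes},  # inline image part
        question,                                       # text part
    ]

# Placeholder bytes stand in for a real screenshot.
parts = build_multimodal_prompt("What does this chart show?", b"\x89PNG-dummy")

if os.environ.get("GOOGLE_API_KEY"):  # only call the API when a key is set
    import google.generativeai as genai
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel("gemini-1.5-flash")
    response = model.generate_content(parts)
    print(response.text)
```

The same parts list works for video or audio by swapping the MIME type, which is what makes a single prompt interface span modalities.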