LLaVA – This Open Source Model Can SEE Just like GPT-4-V

In this video, we look at the newly released LLaVA-1.5-13B, the latest open-source multimodal model that can see images.

LLaVA is a novel end-to-end trained large multimodal model that combines a vision encoder with Vicuna for general-purpose visual and language understanding, achieving impressive chat capabilities that mimic the spirit of the multimodal GPT-4 and setting a new state-of-the-art accuracy on Science QA.
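If you want to try the model locally rather than through the demo, below is a minimal sketch of how you might query LLaVA-1.5-13B through the Hugging Face transformers library. The llava-hf/llava-1.5-13b-hf checkpoint, the prompt template, and the example image URL are assumptions based on the community conversion of the weights, not something covered in the video:

import torch
import requests
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

# Assumption: community conversion of the LLaVA-1.5-13B weights for transformers.
model_id = "llava-hf/llava-1.5-13b-hf"
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# LLaVA-1.5 uses a USER/ASSISTANT chat format with an <image> placeholder
# that the processor replaces with the encoded image tokens.
prompt = "USER: <image>\nWhat is shown in this image? ASSISTANT:"

# Example image from the LLaVA project page.
url = "https://llava-vl.github.io/static/images/view.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(text=prompt, images=image, return_tensors="pt").to(
    model.device, torch.float16
)
output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))

Note that the 13B checkpoint needs roughly 26 GB of VRAM in float16, so loading it in 4-bit (for example with load_in_4bit=True via bitsandbytes) is a common alternative on consumer GPUs.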

LET’S CONNECT:

Buy me a Coffee: https://ko-fi.com/promptengineering
Support my work on Patreon: Patreon.com/PromptEngineering
Discord: https://discord.com/invite/t4eYQRUcXB
Business Contact: [email protected]
Consulting: https://calendly.com/engineerprompt/consulting-call

LINKS:
LLaVA GitHub: https://llava-vl.github.io/
LLaVA Demo: https://llava.hliu.cc/

If you find this video useful, please share it with your friends and family.
