Paper: https://arxiv.org/pdf/2410.18967
The paper introduces Ferret-UI 2, a multimodal large language model (MLLM) that builds on its predecessor, Ferret-UI, to enable universal user interface (UI) understanding across diverse platforms: iPhone, Android, iPad, webpages, and AppleTV. Key improvements include multi-platform support, high-resolution perception through adaptive scaling, and training data generation for advanced tasks powered by GPT-4o with set-of-mark visual prompting. Ferret-UI 2 surpasses existing models on benchmarks for UI referring, grounding, and user-centric advanced tasks, and shows strong cross-platform transfer. The authors attribute these gains to the enhanced model architecture and higher-quality training data, and they conclude by outlining future work on broader platform coverage and a truly generalist UI navigation agent.
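To make the "adaptive scaling" idea concrete: one common way high-resolution MLLMs handle screenshots of very different aspect ratios (phone vs. TV) is to pick a tile grid that best matches the image's shape, resize with minimal distortion, and feed the tiles to the vision encoder alongside a low-resolution global view. The sketch below is illustrative only, not the authors' implementation; the `choose_grid` helper, the log-ratio heuristic, the tile budget `max_tiles=9`, and the tile size `TILE = 336` are all assumptions chosen for clarity.

```python
import math

def choose_grid(width: int, height: int, max_tiles: int = 9) -> tuple[int, int]:
    """Pick a (cols, rows) tile grid for a screenshot.

    Among all grids with at most `max_tiles` sub-images, choose the one
    whose aspect ratio (cols/rows) is closest to the screenshot's, so the
    image can be resized with minimal distortion before being split into
    fixed-size tiles for the vision encoder.
    """
    target = width / height
    best, best_err = (1, 1), float("inf")
    for cols in range(1, max_tiles + 1):
        for rows in range(1, max_tiles // cols + 1):
            # Log-ratio distance penalizes "2x too wide" and "2x too tall" equally.
            err = abs(math.log((cols / rows) / target))
            if err < best_err:
                best, best_err = (cols, rows), err
    return best

TILE = 336  # hypothetical encoder input size

# A 1170x2532 iPhone screenshot gets a tall grid; a 2560x1440 TV frame a wide one.
for w, h in [(1170, 2532), (2560, 1440)]:
    cols, rows = choose_grid(w, h)
    print(f"{w}x{h} -> grid {cols}x{rows}, resized to {cols * TILE}x{rows * TILE}")
```

This kind of shape-aware gridding is what lets a single model preserve the fine detail (small icons, dense text) needed for referring and grounding across phone, tablet, web, and TV screens without a fixed square resize.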
ai, artificial intelligence, arxiv, research, paper, publication, llm, genai, generative ai, large visual models, large language models, large multimodal models, nlp, text, machine learning, ml, nvidia, openai, anthropic, microsoft, google, technology, cutting-edge, meta, llama, chatgpt, gpt, elon musk, sam altman, deployment, engineering, scholar, science, apple, samsung