Abu Dhabi’s TII Launches Arabic Version Of Its Falcon LLM
Technology Innovation Institute (TII) has also unveiled a new model called Falcon-H1, which outperforms comparable offerings from Meta's LLaMA and Alibaba's Qwen.

Technology Innovation Institute (TII), the applied research arm of Abu Dhabi’s Advanced Technology Research Council (ATRC), has unveiled Falcon Arabic, the first-ever Arabic edition of its Falcon large language model series.
“We’re proud to finally bring Arabic to Falcon, and prouder still that the best-performing large language model in the Arab world was built in the UAE,” said HE Faisal Al Bannai, Advisor to the UAE President and Secretary General of ATRC, as he announced the new model at this year’s edition of the Make it in the Emirates event in Abu Dhabi.
Built on the seven-billion-parameter Falcon 3-7B, Falcon Arabic is billed as one of the most advanced Arabic AI models developed to date. Trained on a high-quality, native (non-translated) Arabic dataset spanning Modern Standard Arabic and regional dialects, it captures the full linguistic diversity of the Arab world.
According to the Open Arabic LLM Leaderboard benchmarks, Falcon Arabic is the best-performing Arabic model in its class, outperforming all other regionally available Arabic language models and reinforcing TII's leadership in sovereign, multilingual AI. It also matches the performance of models up to ten times its size, showing that smart architecture can outperform sheer scale.
Besides unveiling Falcon Arabic, Al Bannai also introduced Falcon-H1, a new TII model designed to dramatically expand access to high-performance artificial intelligence (AI) by reducing the computing power and technical expertise traditionally required to run advanced systems.
A statement from TII noted that Falcon-H1 outperforms comparable offerings from Meta's LLaMA and Alibaba's Qwen while remaining light enough to bring real-world AI to everyday devices and resource-limited settings.
“Today, AI leadership is not about scale for the sake of scale,” Al Bannai said. “It is about making powerful tools useful, usable, and universal. Falcon-H1 reflects our commitment to delivering AI that works for everyone—not just the few.”
In addition to supporting European languages, Falcon-H1 can, for the first time in the Falcon series, scale to support more than 100 languages, thanks to a multilingual tokenizer trained on diverse datasets.
Falcon-H1 takes the "H" in its name from its hybrid architecture, which combines the strengths of the Transformer and Mamba designs to deliver significantly faster inference and lower memory consumption while maintaining high performance across a range of benchmarks.
The Transformer, introduced by Google in 2017, is a neural network architecture known for its accuracy and scalability and is widely used in models such as GPT and BERT. Mamba is a newer state-space design that cuts memory use and speeds up processing by scanning the input sequentially with a fixed-size internal state. Together, they combine power and efficiency for advanced AI performance.
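To make the hybrid idea concrete, here is a minimal PyTorch sketch of a block that pairs an attention layer (the Transformer ingredient) with a toy state-space layer that scans the sequence with a fixed-size state (the Mamba-style ingredient). It is an illustrative simplification, not TII's actual Falcon-H1 architecture; all class names and dimensions here are invented for the example.

```python
# Minimal sketch of a hybrid Transformer/state-space block (illustrative only,
# not Falcon-H1's real implementation).
import torch
import torch.nn as nn

class SimpleSSM(nn.Module):
    """Toy state-space layer: steps through the sequence with a fixed-size
    recurrent state, so memory does not grow with context length."""
    def __init__(self, dim: int, state_dim: int = 16):
        super().__init__()
        self.in_proj = nn.Linear(dim, state_dim)
        self.out_proj = nn.Linear(state_dim, dim)
        self.decay = nn.Parameter(torch.full((state_dim,), 0.9))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        state = torch.zeros(x.size(0), self.decay.size(0), device=x.device)
        outputs = []
        for t in range(x.size(1)):
            # Sequential update: decay the old state, mix in the new token.
            state = self.decay * state + self.in_proj(x[:, t])
            outputs.append(self.out_proj(state))
        return torch.stack(outputs, dim=1)

class HybridBlock(nn.Module):
    """One hybrid layer: attention for global context, a state-space scan
    for cheap sequential mixing, each with a residual connection."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ssm = SimpleSSM(dim)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]  # Transformer half
        x = x + self.ssm(self.norm2(x))                    # state-space half
        return x

# Quick shape check: a batch of 2 sequences, 8 tokens, 64-dim embeddings.
block = HybridBlock(dim=64)
print(block(torch.randn(2, 8, 64)).shape)  # torch.Size([2, 8, 64])
```

The efficiency argument is visible in the sketch: the attention half compares every token with every other token, while the state-space half carries only a small fixed-size state through the sequence, which is why hybrids of this kind can lower memory use at long context lengths.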
“We approached Falcon-H1 not just as a research milestone but as an engineering challenge: how to deliver exceptional efficiency without compromise,” said Dr. Najwa Aaraj, CEO of TII. “This model reflects our commitment to building technically rigorous systems with real-world utility. Falcon isn’t just a model; it’s a foundation that empowers researchers, developers, and innovators, especially in environments where resources are limited but ambitions are not.”
The Falcon-H1 family spans six sizes: 34B, 7B, 3B, 1.5B, 1.5B-deep, and 500M parameters. These models offer a wide range of performance-to-efficiency trade-offs, allowing developers to choose the most appropriate model for their deployment scenario.
“The Falcon-H1 series demonstrates how new architectures can unlock new opportunities in AI training while showcasing the potential of ultra-compact models,” added Dr. Hakim Hacid, Chief Researcher at the AI and Digital Science Research Center at TII. “This fundamentally shifts what’s possible at the smallest scale, enabling powerful AI on edge devices where privacy, efficiency, and low latency are critical. Our focus has been on reducing complexity without compromising capability.”
All Falcon models are open source and available on Hugging Face and FalconLLM.TII.ae under the TII Falcon License, an Apache 2.0-based license that promotes responsible and ethical AI development.
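For readers who want to try the open-weight models, a minimal sketch of loading one from Hugging Face with the `transformers` library is shown below. The repository id used here is an assumption based on TII's "tiiuae" organization on the hub; check Hugging Face or FalconLLM.TII.ae for the exact model ids and license terms.

```python
# Minimal sketch: load a Falcon model from Hugging Face and generate text.
# The repo id below is assumed, not confirmed by the article; verify on the hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/Falcon-H1-0.5B-Instruct"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Tokenize a prompt, generate a short completion, and decode it.
inputs = tokenizer("What is the capital of the UAE?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```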