Bhavish Aggarwal, the founder of Ola, is investing $230 million in an AI startup he founded, as India aims to carve out a space in a sector dominated by companies from the U.S. and China.
According to a source familiar with the details, Aggarwal is funding the investment in Krutrim — a company focused on developing large language models (LLMs) for Indian languages — primarily through his family office, as reported by TechCrunch. In a recent post on X, Aggarwal stated that Krutrim aims to secure a total investment of $1.15 billion by next year, with plans to raise the remaining funds from external investors.
The funding announcement coincides with Krutrim, a unicorn startup, open sourcing its AI models and revealing plans to build what it says will be India's largest supercomputer, in partnership with Nvidia.
The lab has introduced Krutrim-2, a 12-billion-parameter language model that demonstrates strong capabilities in processing various Indian languages. In sentiment analysis tests shared by Krutrim on Tuesday, it scored 0.95, well above the 0.70 of competing models, and achieved an 80% success rate in code-generation tasks.
The lab has also made several specialized models available as open source, including those for image processing, speech translation, and text search, all tailored to Indian languages.
“We’re still far from global benchmarks, but we’ve made significant progress in just a year,” wrote Aggarwal, whose other enterprises have received backing from SoftBank, on X. “By open sourcing our models, we hope to foster collaboration within the broader Indian AI community to build a world-class AI ecosystem in India.”
This initiative comes as India strives to assert its presence in the artificial intelligence landscape, which is currently dominated by U.S. and Chinese firms. The recent introduction of DeepSeek’s R1 “reasoning” model, developed on what is described as a modest budget, has created considerable buzz in the tech arena.
Last week, India commended DeepSeek's advancements and said the nation would host the Chinese AI lab's LLMs on local servers. Krutrim's cloud division began offering DeepSeek on Indian servers that same week.
Krutrim has also established its own evaluation framework, BharatBench, to assess the effectiveness of AI models in Indian languages, addressing a noticeable gap in existing evaluations that mainly focus on English and Chinese.
The lab’s technical approach employs a 128,000-token context window, enabling its systems to manage longer texts and more intricate dialogues. Performance data released by the startup indicated that Krutrim-2 achieved impressive scores in grammar correction (0.98) and multi-turn conversations (0.91).
The investment follows the January debut of Krutrim-1, a 7-billion-parameter model that was India's first large language model. The Nvidia-partnered supercomputer is set to go online in March, with further expansions planned throughout the year.