A recent study by researchers at Carnegie Mellon University (CMU) shows that Google's latest large language model, Gemini Pro, lags behind GPT-3.5 and falls far behind GPT-4 in benchmarks.
The results contradict the information Google provided at the Gemini presentation and highlight the need for neutral benchmarking institutions or processes.