Google launches Gemini 1.5 with ‘experimental’ 1M token context

Google has unveiled its latest AI model, Gemini 1.5, which features what the company calls an "experimental" one million token context window. 

The new capability allows Gemini 1.5 to process extremely long text passages – up to one million tokens – to understand context and meaning. This dwarfs previous AI systems like Claude 2.1 and GPT-4 Turbo, which max out at 200,000 and 128,000 tokens respectively:

“Gemini 1.5 Pro achieves near-perfect recall on...
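The practical question such limits raise is whether a given document fits inside a model's context window. As a rough sketch only (not Google's method – exact counts require a real tokenizer such as a BPE or SentencePiece model), a common heuristic approximates English text at around four characters per token:

```python
# Rough heuristic: English text averages ~4 characters per token.
# Approximation only – use the model's actual tokenizer for exact counts.

def approx_token_count(text: str) -> int:
    """Estimate token count from character length."""
    return max(1, len(text) // 4)

def fits_in_context(text: str, context_limit: int) -> bool:
    """Check whether text fits within a model's context window."""
    return approx_token_count(text) <= context_limit

doc = "word " * 300_000  # ~1.5 million characters of sample text
print(approx_token_count(doc))          # 375000 tokens by this heuristic
print(fits_in_context(doc, 128_000))    # False – exceeds a GPT-4 Turbo-sized window
print(fits_in_context(doc, 1_000_000))  # True – fits Gemini 1.5's experimental window
```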

Google launches Gemini to replace Bard chatbot

Google has launched its Gemini AI chatbot, which replaces its short-lived Bard service.

Unveiled in February 2023, Bard was touted as a competitor to chatbots like ChatGPT but failed to impress in demos. Google staff even called the launch “botched” and slammed CEO Sundar Pichai.

Now rebranded as Gemini, Google says it represents the company's "most capable family of models" for natural conversations. Two experiences are being launched: Gemini Advanced and a mobile...

IBM and Hugging Face release AI foundation model for climate science

In a bid to democratise access to AI technology for climate science, IBM and Hugging Face have announced the release of the watsonx.ai geospatial foundation model.

The geospatial model, built from NASA's satellite data, will be the largest of its kind on Hugging Face and marks the first-ever open-source AI foundation model developed in collaboration with NASA.

Jeff Boudier, head of product and growth at Hugging Face, highlighted the importance of information sharing and...

OpenAI is not currently training GPT-5

Experts calling for a pause on AI development will be glad to hear that OpenAI isn’t currently training GPT-5.

OpenAI CEO Sam Altman spoke remotely at an MIT event and was quizzed about AI by computer scientist and podcaster Lex Fridman.

Altman confirmed that OpenAI is not currently developing a fifth version of its Generative Pre-trained Transformer model and is instead focusing on enhancing the capabilities of GPT-4, the latest version.

Altman was asked...

Meta’s NLLB-200 AI model improves translation quality by 44%

Meta has unveiled a new AI model called NLLB-200 that can translate 200 languages and improves quality by an average of 44 percent. 

Translation apps have been fairly adept at the most popular languages for some time. Even when they don’t offer a perfect translation, it’s normally close enough for a native speaker to understand.

However, there are hundreds of millions of people in regions with many languages – like Africa and Asia – who still suffer from...

State of ModelOps: 90% expect a dedicated budget within 12 months, 80% say risk management is a key AI barrier

The first annual State of ModelOps report highlights some interesting trends about the real-world adoption of AI in enterprises.

The study was conducted by independent research firm Corinium Intelligence on behalf of ModelOp and aims to summarise the state of model operationalisation today.

Stu Bailey, Co-Founder and Chief Enterprise AI Architect at ModelOp, said:

“As the report shows, enterprises increasingly view ModelOps as the key to ensuring operational...

NVIDIA breakthrough emulates images from small datasets for groundbreaking AI training

NVIDIA’s latest breakthrough generates new images from small existing datasets – a capability with groundbreaking potential for AI training.

The company demonstrated its latest AI model using a small dataset of artwork from the Metropolitan Museum of Art – just a fraction of the size typically needed to train a Generative Adversarial Network (GAN).

From the dataset, NVIDIA’s AI was able to create new images which replicate the style of the original artist’s work. These images...
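Augmentation – artificially expanding a small dataset with transformed copies of its images – is the general idea behind training GANs on limited data. A minimal sketch (illustration only, not NVIDIA's actual adaptive augmentation technique):

```python
# Minimal illustration of dataset augmentation (not NVIDIA's method):
# horizontally flipping each image doubles a small training set.

def hflip(image):
    """Reverse each pixel row of a 2D image (list of lists)."""
    return [row[::-1] for row in image]

# Two tiny placeholder "images" standing in for artwork scans.
small_dataset = [
    [[1, 2], [3, 4]],
    [[5, 6], [7, 8]],
]

# Original images plus their mirrored copies.
augmented = small_dataset + [hflip(img) for img in small_dataset]
print(len(augmented))  # 4 – twice the original dataset size
```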

MIT has removed a dataset which leads to misogynistic, racist AI models

MIT has apologised for, and taken offline, a dataset which trains AI models with misogynistic and racist tendencies.

The dataset in question is called 80 Million Tiny Images and was created in 2008. Designed for training AIs to detect objects, the dataset is a huge collection of pictures which are individually labelled based on what they feature.

Machine-learning models are trained using these images and their labels. An image of a street – when fed into an AI trained...
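The image-and-label pairing described above can be sketched in miniature (a hypothetical structure for illustration, not the actual 80 Million Tiny Images format):

```python
# Minimal sketch of a labelled image dataset. Images here are placeholder
# nested lists of pixel values; the real dataset stored small colour
# images, each tagged with a noun describing what it depicts.

from dataclasses import dataclass

@dataclass
class LabelledImage:
    pixels: list  # placeholder for raw image data
    label: str    # the noun the image is said to depict

dataset = [
    LabelledImage(pixels=[[0, 0], [0, 0]], label="street"),
    LabelledImage(pixels=[[1, 1], [1, 1]], label="tree"),
]

# Training loops consume (pixels, label) pairs – which is why a biased
# or offensive label propagates directly into the trained model.
for example in dataset:
    print(example.label)
```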