AWS and NVIDIA expand partnership to advance generative AI


Amazon Web Services (AWS) and NVIDIA have announced a significant expansion of their strategic collaboration at AWS re:Invent. The expanded partnership aims to provide customers with state-of-the-art infrastructure, software, and services to fuel generative AI innovations.

The collaboration brings together the strengths of both companies, integrating NVIDIA's latest multi-node systems with next-generation GPUs, CPUs, and AI software, along with AWS technologies such as the Nitro System...

Amazon is building an LLM to rival OpenAI and Google

Amazon is reportedly making substantial investments in the development of a large language model (LLM) named Olympus. 

According to Reuters, the tech giant is pouring millions into this project to create a model with a staggering two trillion parameters. OpenAI’s GPT-4, for comparison, is estimated to have around one trillion parameters.

This move puts Amazon in direct competition with OpenAI, Meta, Anthropic, Google, and others. The team behind Amazon’s initiative...

AWS announces nine major updates for its ML platform SageMaker

Amazon Web Services (AWS) has announced nine major new updates for its cloud-based machine learning platform, SageMaker.

SageMaker is a managed machine learning service that can be used to build, train, and deploy ML models for virtually any use case.
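
To illustrate that build-train-deploy flow, here is a minimal sketch using the SageMaker Python SDK; the training script, S3 bucket, and IAM role ARN are placeholders for illustration rather than anything from AWS's announcements.

```python
# Hypothetical build-train-deploy workflow with the SageMaker Python SDK.
# The entry-point script, S3 path, and IAM role below are placeholders.
import sagemaker
from sagemaker.sklearn.estimator import SKLearn

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/ExampleSageMakerRole"  # placeholder role

# Build: wrap a training script in a managed framework estimator.
estimator = SKLearn(
    entry_point="train.py",          # placeholder training script
    framework_version="1.2-1",
    instance_type="ml.m5.large",
    role=role,
    sagemaker_session=session,
)

# Train: launch a managed training job against data stored in S3.
estimator.fit({"train": "s3://example-bucket/training-data/"})

# Deploy: stand up a real-time inference endpoint and query it.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
print(predictor.predict([[0.1, 0.2, 0.3]]))  # illustrative input shape
```

The same estimator pattern applies to SageMaker's other framework containers, such as PyTorch, TensorFlow, and XGBoost, which is what makes the service applicable to virtually any use case.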

During this year’s re:Invent conference, AWS made several announcements to further improve SageMaker’s capabilities.

Swami Sivasubramanian, VP of Amazon Machine Learning at AWS,...

NVIDIA chucks its MLPerf-leading A100 GPU into Amazon’s cloud

NVIDIA’s A100 set a new record in the MLPerf benchmark last month, and now it’s accessible through Amazon’s cloud.

Amazon Web Services (AWS) first launched a GPU instance 10 years ago with the NVIDIA M2050. It’s rather poetic that, a decade on, NVIDIA is now providing AWS with the hardware to power the next generation of groundbreaking innovations.

The A100 outperformed CPUs in this year’s MLPerf by up to 237x in data centre inference. A single NVIDIA DGX A100...