Nvidia Archives - AI News
https://www.artificialintelligence-news.com/tag/nvidia/


NVIDIA presents latest advancements in visual AI
17 June 2024 – https://www.artificialintelligence-news.com/2024/06/17/nvidia-presents-latest-advancements-visual-ai/

NVIDIA researchers are presenting new visual generative AI models and techniques at the Computer Vision and Pattern Recognition (CVPR) conference this week in Seattle. The advancements span areas like custom image generation, 3D scene editing, visual language understanding, and autonomous vehicle perception.

“Artificial intelligence, and generative AI in particular, represents a pivotal technological advancement,” said Jan Kautz, VP of learning and perception research at NVIDIA.

“At CVPR, NVIDIA Research is sharing how we’re pushing the boundaries of what’s possible — from powerful image generation models that could supercharge professional creators to autonomous driving software that could help enable next-generation self-driving cars.”

Of the more than 50 NVIDIA research projects being presented, two papers have been selected as finalists for CVPR’s Best Paper Awards – one exploring the training dynamics of diffusion models and another on high-definition maps for self-driving cars.

Additionally, NVIDIA has won the CVPR Autonomous Grand Challenge’s End-to-End Driving at Scale track, outperforming over 450 entries globally. This milestone demonstrates NVIDIA’s pioneering work in using generative AI for comprehensive self-driving vehicle models, also earning an Innovation Award from CVPR.

One of the headlining research projects is JeDi, a new technique that allows creators to rapidly customise diffusion models – the leading approach for text-to-image generation – to depict specific objects or characters using just a few reference images, rather than the time-intensive process of fine-tuning on custom datasets.

Another breakthrough is FoundationPose, a new foundation model that can instantly understand and track the 3D pose of objects in videos without per-object training. It set a new performance record and could unlock new AR and robotics applications.

NVIDIA researchers also introduced NeRFDeformer, a method to edit the 3D scene captured by a Neural Radiance Field (NeRF) using a single 2D snapshot, rather than having to manually reanimate changes or recreate the NeRF entirely. This could streamline 3D scene editing for graphics, robotics, and digital twin applications.

On the visual language front, NVIDIA collaborated with MIT to develop VILA, a new family of vision language models that achieve state-of-the-art performance in understanding images, videos, and text. With enhanced reasoning capabilities, VILA can even comprehend internet memes by combining visual and linguistic understanding.

NVIDIA’s visual AI research spans numerous industries, including over a dozen papers exploring novel approaches for autonomous vehicle perception, mapping, and planning. Sanja Fidler, VP of NVIDIA’s AI Research team, is presenting on the potential of vision language models for self-driving cars.

The breadth of NVIDIA’s CVPR research exemplifies how generative AI could empower creators, accelerate automation in manufacturing and healthcare, and propel autonomy and robotics forward.

(Photo by v2osk)

See also: NLEPs: Bridging the gap between LLMs and symbolic reasoning

UAE unveils new AI model to rival big tech giants
15 May 2024 – https://www.artificialintelligence-news.com/2024/05/15/uae-unveils-new-ai-model-to-rival-big-tech-giants/

The UAE is making big waves by launching a new open-source generative AI model. This step, taken by a government-backed research institute, is turning heads and marking the UAE as a formidable player in the global AI race.

In Abu Dhabi, the Technology Innovation Institute (TII) unveiled the Falcon 2 series. As reported by Reuters, this series includes Falcon 2 11B, a text-based model, and Falcon 2 11B VLM, a vision-to-language model capable of generating text descriptions from images. TII is run by Abu Dhabi’s Advanced Technology Research Council.

As a major oil exporter and a key player in the Middle East, the UAE is investing heavily in AI. This strategy has caught the eye of U.S. officials, leading to tensions over whether to use American or Chinese technology. In a move coordinated with Washington, Emirati AI firm G42 withdrew from Chinese investments and replaced Chinese hardware, securing a US$1.5 billion investment from Microsoft.

Faisal Al Bannai, Secretary General of the Advanced Technology Research Council and an adviser on strategic research and advanced technology, proudly states that the UAE is proving itself as a major player in AI. The release of the Falcon 2 series is part of a broader race among nations and companies to develop their own large language models. While some opt to keep their AI code private, the UAE – like Meta with its Llama models – is making its work openly accessible.

Al Bannai is also excited about the upcoming Falcon 3 generation and expresses confidence in the UAE’s ability to compete globally: “We’re very proud that we can still punch way above our weight, really compete with the best players globally.”

Reflecting on his earlier statements this year, Al Bannai emphasised that the UAE’s decisive advantage lies in its ability to make swift strategic decisions.

It’s worth noting that Abu Dhabi’s ruling family controls some of the world’s largest sovereign wealth funds, worth about US$1.5 trillion. These funds, long used to diversify the UAE’s oil wealth, are now critical for accelerating growth in AI and other cutting-edge technologies. In fact, the UAE is emerging as a key player in producing advanced computer chips essential for training powerful AI systems. According to the Wall Street Journal, OpenAI CEO Sam Altman met with investors, including Sheik Tahnoun bin Zayed Al Nahyan, who runs Abu Dhabi’s major sovereign wealth fund, to discuss a potential US$7 trillion investment to develop an AI chipmaker to compete with Nvidia.

Furthermore, the UAE’s commitment to generative AI is evident in its recent launch of a ‘Generative AI’ guide. This guide aims to unlock AI’s potential in various fields, including education, healthcare, and media. It provides a detailed overview of generative AI, addressing the challenges and opportunities of digital technologies while emphasising data privacy. The guide is designed to help government agencies and the wider community leverage AI technologies, demonstrating 100 practical AI use cases for entrepreneurs, students, job seekers, and tech enthusiasts.

This proactive stance showcases the UAE’s commitment to participating in and leading the global AI race, positioning it as a nation to watch in the rapidly evolving tech scene.

NVIDIA unveils Blackwell architecture to power next GenAI wave
19 March 2024 – https://www.artificialintelligence-news.com/2024/03/19/nvidia-unveils-blackwell-architecture-power-next-genai-wave/

NVIDIA has announced its next-generation Blackwell GPU architecture, designed to usher in a new era of accelerated computing and enable organisations to build and run real-time generative AI on trillion-parameter large language models.

The Blackwell platform promises up to 25 times lower cost and energy consumption compared to its predecessor: the Hopper architecture. Named after pioneering mathematician and statistician David Harold Blackwell, the new GPU architecture introduces six transformative technologies.

“Generative AI is the defining technology of our time. Blackwell is the engine to power this new industrial revolution,” said Jensen Huang, Founder and CEO of NVIDIA. “Working with the most dynamic companies in the world, we will realise the promise of AI for every industry.”

The key innovations in Blackwell include the world’s most powerful chip with 208 billion transistors, a second-generation Transformer Engine to support double the compute and model sizes, fifth-generation NVLink interconnect for high-speed multi-GPU communication, and advanced engines for reliability, security, and data decompression.

Central to Blackwell is the NVIDIA GB200 Grace Blackwell Superchip, which combines two B200 Tensor Core GPUs with a Grace CPU over an ultra-fast 900GB/s NVLink interconnect. Multiple GB200 Superchips can be combined into systems like the liquid-cooled GB200 NVL72 platform with up to 72 Blackwell GPUs and 36 Grace CPUs, offering 1.4 exaflops of AI performance.
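For a rough sense of what that headline number implies per GPU, the short calculation below divides it across the rack. It assumes the 1.4 exaflops figure covers all 72 Blackwell GPUs and refers to low-precision AI throughput, which is an inference from how such figures are typically quoted rather than a detail stated in the announcement.

```python
# Back-of-the-envelope: per-GPU share of the GB200 NVL72's quoted AI performance.
# Assumption: the 1.4 exaflops figure covers all 72 Blackwell GPUs in the rack and
# refers to low-precision AI throughput.

system_exaflops = 1.4      # quoted AI performance for the NVL72 rack
num_gpus = 72              # Blackwell GPUs per rack

per_gpu_petaflops = system_exaflops * 1_000 / num_gpus   # 1 exaflop = 1,000 petaflops
print(f"~{per_gpu_petaflops:.1f} petaflops per GPU")      # roughly 19-20 petaflops each
```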

NVIDIA has already secured support from major cloud providers like Amazon Web Services, Google Cloud, Microsoft Azure, and Oracle Cloud Infrastructure to offer Blackwell-powered instances. Other partners planning Blackwell products include Dell Technologies, Meta, Microsoft, OpenAI, Oracle, Tesla, and many others across hardware, software, and sovereign clouds.

Sundar Pichai, CEO of Alphabet and Google, said: “We are fortunate to have a longstanding partnership with NVIDIA, and look forward to bringing the breakthrough capabilities of the Blackwell GPU to our Cloud customers and teams across Google to accelerate future discoveries.”

The Blackwell architecture and supporting software stack will enable new breakthroughs across industries from engineering and chip design to scientific computing and generative AI.

Mark Zuckerberg, Founder and CEO of Meta, commented: “AI already powers everything from our large language models to our content recommendations, ads, and safety systems, and it’s only going to get more important in the future.

“We’re looking forward to using NVIDIA’s Blackwell to help train our open-source Llama models and build the next generation of Meta AI and consumer products.”

With its massive performance gains and efficiency, Blackwell could be the engine to finally make real-time trillion-parameter AI a reality for enterprises.

See also: Elon Musk’s xAI open-sources Grok

AWS and NVIDIA expand partnership to advance generative AI
29 November 2023 – https://www.artificialintelligence-news.com/2023/11/29/aws-nvidia-expand-partnership-advance-generative-ai/

Amazon Web Services (AWS) and NVIDIA have announced a significant expansion of their strategic collaboration at AWS re:Invent. The collaboration aims to provide customers with state-of-the-art infrastructure, software, and services to fuel generative AI innovations.

The collaboration brings together the strengths of both companies, integrating NVIDIA’s latest multi-node systems with next-generation GPUs, CPUs, and AI software, along with AWS technologies such as Nitro System advanced virtualisation, Elastic Fabric Adapter (EFA) interconnect, and UltraCluster scalability.

Key highlights of the expanded collaboration include:

  1. Introduction of NVIDIA GH200 Grace Hopper Superchips on AWS:
    • AWS becomes the first cloud provider to offer NVIDIA GH200 Grace Hopper Superchips with new multi-node NVLink technology.
    • The NVIDIA GH200 NVL32 multi-node platform enables joint customers to scale to thousands of GH200 Superchips, providing supercomputer-class performance.
  2. Hosting NVIDIA DGX Cloud on AWS:
    • Collaboration to host NVIDIA DGX Cloud, an AI-training-as-a-service, on AWS, featuring GH200 NVL32 for accelerated training of generative AI and large language models.
  3. Project Ceiba supercomputer:
    • Collaboration on Project Ceiba, aiming to design the world’s fastest GPU-powered AI supercomputer with 16,384 NVIDIA GH200 Superchips and processing capability of 65 exaflops.
  4. Introduction of new Amazon EC2 instances:
    • AWS introduces three new Amazon EC2 instances, including P5e instances powered by NVIDIA H200 Tensor Core GPUs for large-scale generative AI and HPC workloads.
  5. Software innovations:
    • NVIDIA introduces software on AWS, such as NeMo Retriever microservice for chatbots and summarisation tools, and BioNeMo to speed up drug discovery for pharmaceutical companies.

This collaboration signifies a joint commitment to advancing the field of generative AI, offering customers access to cutting-edge technologies and resources.

Internally, Amazon robotics and fulfilment teams already employ NVIDIA’s Omniverse platform to optimise warehouses in virtual environments before real-world deployment.

The integration of NVIDIA and AWS technologies will accelerate the development, training, and inference of large language models and generative AI applications across various industries.

(Photo by ANIRUDH on Unsplash)

See also: Inflection-2 beats Google’s PaLM 2 across common benchmarks

Inflection-2 beats Google’s PaLM 2 across common benchmarks
23 November 2023 – https://www.artificialintelligence-news.com/2023/11/23/inflection-2-beats-google-palm-2-across-common-benchmarks/

Inflection, an AI startup aiming to create “personal AI for everyone”, has announced a new large language model dubbed Inflection-2 that beats Google’s PaLM 2.

Inflection-2 was trained on over 5,000 NVIDIA GPUs, using roughly 10²⁵ floating point operations (FLOPs) of compute and putting it in the same league as PaLM 2 Large. However, early benchmarks show Inflection-2 outperforming Google’s model on tests of reasoning ability, factual knowledge, and stylistic prowess.
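For context, a rough back-of-the-envelope calculation shows how a training run of that size lands around 10²⁵ FLOPs. The sustained per-GPU throughput and wall-clock time below are illustrative assumptions, not figures disclosed by Inflection.

```python
# Rough sanity check on the scale of the training run. The per-GPU throughput and
# wall-clock time are illustrative assumptions, not figures disclosed by Inflection.

gpus = 5_000
sustained_flops_per_gpu = 1.0e15   # ~1,000 TFLOP/s sustained per H100 (assumed)
seconds = 21 * 24 * 3600           # ~three weeks of training (assumed)

total_flops = gpus * sustained_flops_per_gpu * seconds
print(f"~{total_flops:.1e} FLOPs total")   # lands on the order of 10**25
```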

On a range of common academic AI benchmarks, Inflection-2 achieved higher scores than PaLM 2 on most. This included outscoring the search giant’s flagship on the Massive Multitask Language Understanding (MMLU) tests, as well as the TriviaQA, HellaSwag, and Grade School Math (GSM8k) benchmarks.

The startup’s new model will soon power its personal assistant app Pi to enable more natural conversations and useful features.

Inflection said its transition from NVIDIA A100 to H100 GPUs for inference – combined with optimisation work – will increase serving speed and reduce costs despite Inflection-2 being much larger than its predecessor.  

An Inflection spokesperson said this latest model brings them “a big milestone closer” towards fulfilling the mission of providing AI assistants for all. They added the team is “already looking forward” to training even larger models on their 22,000 GPU supercluster.

Safety is said to be a top priority for the researchers, with Inflection being one of the first signatories to the White House’s July 2023 voluntary AI commitments. The company said its safety team continues working to ensure models are rigorously evaluated and rely on best practices for alignment.

With impressive benchmarks and plans to scale further, Inflection’s latest effort poses a serious challenge to tech giants like Google and Microsoft who have so far dominated the field of large language models. The race is on to deliver the next generation of AI.

(Photo by Johann Walter Bantz on Unsplash)

See also: Anthropic upsizes Claude 2.1 to 200K tokens, nearly doubling GPT-4

Amdocs, NVIDIA and Microsoft Azure build custom LLMs for telcos
16 November 2023 – https://www.artificialintelligence-news.com/2023/11/16/amdocs-nvidia-microsoft-azure-build-custom-llms-for-telcos/

Amdocs has partnered with NVIDIA and Microsoft Azure to build custom Large Language Models (LLMs) for the $1.7 trillion global telecoms industry.

Leveraging the power of NVIDIA’s AI foundry service on Microsoft Azure, Amdocs aims to meet the escalating demand for data processing and analysis in the telecoms sector.

The telecoms industry processes hundreds of petabytes of data daily. With global data volumes anticipated to surpass 180 zettabytes by 2025, telcos are turning to generative AI to enhance efficiency and productivity.

NVIDIA’s AI foundry service – comprising the NVIDIA AI Foundation Models, NeMo framework, and DGX Cloud AI supercomputing – provides an end-to-end solution for creating and optimising custom generative AI models.

Amdocs will utilise the AI foundry service to develop enterprise-grade LLMs tailored for the telco and media industries, facilitating the deployment of generative AI use cases across various business domains.
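The announcement does not describe the training pipeline itself, but building a domain-specific LLM from a general-purpose foundation model typically follows a well-known pattern. The snippet below is a generic, minimal sketch of that pattern using parameter-efficient LoRA fine-tuning with Hugging Face libraries; the model name, the toy telco-support sample, and the hyperparameters are placeholders, and this is not the actual Amdocs/NVIDIA NeMo workflow.

```python
# Generic sketch of domain-adapting an open LLM with LoRA. Placeholder model and
# data only - this is NOT the Amdocs/NVIDIA pipeline, just the common pattern for
# producing a custom, domain-specific model from a general-purpose foundation model.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model

base_model = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"   # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Attach small trainable LoRA adapters instead of updating all base weights.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

# Tiny stand-in for a telco customer-care corpus.
texts = ["Customer: My 5G signal drops indoors. Agent: Let's review your cell settings..."]
dataset = Dataset.from_dict({"text": texts}).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512))

trainer = Trainer(
    model=model,
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # labels = inputs
    args=TrainingArguments(output_dir="telco-lora", per_device_train_batch_size=1,
                           num_train_epochs=1, logging_steps=1),
)
trainer.train()
model.save_pretrained("telco-lora-adapter")   # saves only the small adapter weights
```

In practice, the adapter approach keeps the custom weights small enough to swap per business domain (customer care, network operations, and so on) while the base model stays shared.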

This collaboration builds on the existing Amdocs-Microsoft partnership, ensuring the adoption of applications in secure, trusted environments, both on-premises and in the cloud.

Enterprises are increasingly focusing on developing custom models to perform industry-specific tasks. Amdocs serves over 350 of the world’s leading telecom and media companies across 90 countries. This partnership with NVIDIA opens avenues for exploring generative AI use cases, with initial applications focusing on customer care and network operations.

In customer care, the collaboration aims to accelerate the resolution of inquiries by leveraging information from across company data. In network operations, the companies are exploring solutions to address configuration, coverage, or performance issues in real-time.

This move by Amdocs positions the company at the forefront of ushering in a new era for the telecoms industry by harnessing the capabilities of custom generative AI models.

(Photo by Danist Soh on Unsplash)

See also: Wolfram Research: Injecting reliability into generative AI

Azure and NVIDIA deliver next-gen GPU acceleration for AI
9 August 2023 – https://www.artificialintelligence-news.com/2023/08/09/azure-nvidia-deliver-next-gen-gpu-acceleration-ai/

Microsoft Azure users are now able to harness the latest advancements in NVIDIA’s accelerated computing technology, revolutionising the training and deployment of their generative AI applications.

The integration of Azure ND H100 v5 virtual machines (VMs) with NVIDIA H100 Tensor Core GPUs and Quantum-2 InfiniBand networking promises seamless scaling of generative AI and high-performance computing applications, all at the click of a button.

This cutting-edge collaboration comes at a pivotal moment when developers and researchers are actively exploring the potential of large language models (LLMs) and accelerated computing to unlock novel consumer and business use cases.

NVIDIA’s H100 GPU achieves supercomputing-class performance through an array of architectural innovations. These include fourth-generation Tensor Cores, a new Transformer Engine for enhanced LLM acceleration, and NVLink technology that propels inter-GPU communication to unprecedented speeds of 900GB/sec.

The integration of the NVIDIA Quantum-2 CX7 InfiniBand – boasting 3,200 Gbps cross-node bandwidth – ensures flawless performance across GPUs, even at massive scales. This capability positions the technology on par with the computational capabilities of the world’s most advanced supercomputers.
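Because the NVLink figure is quoted in gigabytes per second and the InfiniBand figure in gigabits per second, a quick unit conversion helps put the intra-node and cross-node links side by side. The sketch below assumes the 3,200 Gbps figure is the aggregate cross-node bandwidth per VM, as described above.

```python
# Put the intra-node NVLink figure and the cross-node InfiniBand figure in the
# same units. Assumes 3,200 Gbps is the aggregate per-VM cross-node bandwidth
# and 900 GB/s is the per-GPU NVLink bandwidth quoted above.

nvlink_gb_per_s = 900                        # NVLink, gigabytes per second
infiniband_gbps = 3_200                      # Quantum-2 CX7 InfiniBand, gigabits per second

infiniband_gb_per_s = infiniband_gbps / 8    # bits -> bytes: 400 GB/s
print(f"Cross-node InfiniBand: {infiniband_gb_per_s:.0f} GB/s")
print(f"NVLink is ~{nvlink_gb_per_s / infiniband_gb_per_s:.2f}x the cross-node figure")  # ~2.25x
```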

The newly introduced ND H100 v5 VMs hold immense potential for training and running inference on increasingly intricate LLMs and computer vision models. These neural networks power the most complex and compute-intensive generative AI applications, spanning from question answering and code generation to audio, video, image synthesis, and speech recognition.

A standout feature of the ND H100 v5 VMs is their ability to achieve up to a 2x speedup in LLM inference, notably demonstrated by the BLOOM 175B model when compared to previous generation instances. This performance boost underscores their capacity to optimise AI applications further, fueling innovation across industries.

The synergy between NVIDIA H100 Tensor Core GPUs and Microsoft Azure empowers enterprises with unparalleled AI training and inference capabilities. This partnership also streamlines the development and deployment of production AI, bolstered by the integration of the NVIDIA AI Enterprise software suite and Azure Machine Learning for MLOps.

The combined efforts have led to groundbreaking AI performance, as validated by industry-standard MLPerf benchmarks.

The integration of the NVIDIA Omniverse platform with Azure extends the reach of this collaboration further, providing users with everything they need for industrial digitalisation and AI supercomputing.

(Image Credit: Uwe Hoh from Pixabay)

See also: Gcore partners with UbiOps and Graphcore to empower AI teams

Oracle teams up with NVIDIA to quicken enterprise AI adoption
19 October 2022 – https://www.artificialintelligence-news.com/2022/10/19/oracle-teams-up-with-nvidia-to-quicken-enterprise-ai-adoption/

Oracle and NVIDIA have formed a multi-year partnership to help customers solve business challenges with accelerated computing and AI.

The collaboration aims to bring the full NVIDIA accelerated computing stack – from GPUs to systems to software – to Oracle Cloud Infrastructure (OCI).

OCI is adding tens of thousands more NVIDIA GPUs, including the A100 and upcoming H100, to its capacity. Combined with OCI’s AI cloud infrastructure of bare metal, cluster networking, and storage, this provides enterprises a broad, easily accessible portfolio of options for AI training and deep learning inference at scale.

Safra Catz, CEO, Oracle, said: “To drive long-term success in today’s business environment, organizations need answers and insight faster than ever.

“Our expanded alliance with NVIDIA will deliver the best of both companies’ expertise to help customers across industries – from healthcare and manufacturing to telecommunications and financial services – overcome the multitude of challenges they face.”

“Accelerated computing and AI are key to tackling rising costs in every aspect of operating businesses,” said Jensen Huang, CEO and founder, NVIDIA. “Enterprises are increasingly turning to cloud-first AI strategies that enable fast development and scalable deployment. Our partnership with Oracle will put NVIDIA AI within easy reach for thousands of companies.”

NVIDIA and Oracle have been serving enterprises together for years with accelerated computing instances and software available via OCI. With the full NVIDIA AI platforms available on OCI instances, the extended partnership is designed to accelerate AI-powered innovation for a broad range of industries to better serve customers and support sales.

NVIDIA AI Enterprise, the globally adopted software of the NVIDIA AI platform, includes essential processing engines for each step of the AI workflow, from data processing and AI model training to simulation and large-scale deployment. NVIDIA AI enables organisations to develop predictive models to automate business processes and gain rapid business insights with applications such as conversational AI, recommender systems, computer vision, and more. The parties plan to make an upcoming release of NVIDIA AI Enterprise available on OCI, providing customers with easy access to NVIDIA’s accelerated, secure, and scalable platform for end-to-end AI development and deployment.

Additionally, Oracle is now offering early access to NVIDIA RAPIDS acceleration for Apache Spark data processing on OCI Data Flow, its fully managed Apache Spark service. Data processing is one of the top cloud computing workloads. To support this demand, OCI Data Science plans to offer support for OCI bare metal shapes, including BM.GPU.GM4.8 with NVIDIA A100 Tensor Core GPUs, across managed notebook sessions, jobs, and model deployment.
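For readers unfamiliar with RAPIDS on Spark, the accelerator is normally enabled through Spark configuration rather than code changes. The PySpark snippet below is a minimal sketch of that pattern; the plugin class name follows the RAPIDS Accelerator documentation, while the GPU resource setting is a placeholder and the exact OCI Data Flow configuration may differ.

```python
# Minimal sketch: enabling the RAPIDS Accelerator plugin for Apache Spark so that
# supported SQL/DataFrame operations run on NVIDIA GPUs. Assumes the RAPIDS
# accelerator jar is already on the cluster classpath (as a managed service would provide).
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("rapids-example")
    .config("spark.plugins", "com.nvidia.spark.SQLPlugin")   # RAPIDS SQL plugin
    .config("spark.rapids.sql.enabled", "true")              # turn GPU SQL execution on
    .config("spark.executor.resource.gpu.amount", "1")       # placeholder GPU allocation
    .getOrCreate()
)

df = spark.range(0, 1_000_000).selectExpr("id", "id % 10 AS bucket")
df.groupBy("bucket").count().show()   # eligible operations are offloaded to the GPU
spark.stop()
```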

NVIDIA Clara, a healthcare AI and HPC application framework for medical imaging, genomics, natural language processing, and drug discovery, is also coming soon. Oracle and NVIDIA are additionally collaborating on new AI-accelerated Oracle Cerner offerings for healthcare, which span analytics, clinical solutions, operations, patient management systems and more.

US introduces new AI chip export restrictions
1 September 2022 – https://www.artificialintelligence-news.com/2022/09/01/us-introduces-new-ai-chip-export-restrictions/

NVIDIA has revealed that it’s subject to new laws restricting the export of AI chips to China and Russia.

In an SEC filing, NVIDIA says the US government has informed the chipmaker of a new license requirement that impacts two of its GPUs designed to speed up machine learning tasks: the current A100, and the upcoming H100.

“The license requirement also includes any future NVIDIA integrated circuit achieving both peak performance and chip-to-chip I/O performance equal to or greater than thresholds that are roughly equivalent to the A100, as well as any system that includes those circuits,” adds NVIDIA.

The US government has reportedly told NVIDIA that the new rules are geared at addressing the risk of the affected products being used for military purposes.

“While we are not in a position to outline specific policy changes at this time, we are taking a comprehensive approach to implement additional actions necessary related to technologies, end-uses, and end-users to protect US national security and foreign policy interests,” said a US Department of Commerce spokesperson.

China is a large market for NVIDIA and the new rules could affect around $400 million in quarterly sales.

AMD has also been told the new rules will impact its similar products, including the MI200.

As of writing, NVIDIA’s shares were down 11.45 percent from the market open, while AMD’s were down 6.81 percent. However, it’s worth noting that it’s been another red day for the wider stock market.

(Photo by Wesley Tingey on Unsplash)

Nvidia exits from its proposed $40B acquisition of Arm
8 February 2022 – https://www.artificialintelligence-news.com/2022/02/08/nvidia-exits-from-its-proposed-40b-acquisition-of-arm/

Nvidia is walking away from its proposed $40 billion acquisition of British chip designer Arm.

The deal caught the attention of global regulators with anti-competition investigations launched in several jurisdictions including the UK, EU, and US.

In November 2021, UK Digital Secretary Nadine Dorries decided to block the merger pending the results of a 24-week ‘Phase 2’ investigation.

With the merger looking almost impossible to be approved by regulators, Nvidia has decided to throw in the towel.

Jensen Huang, Founder and CEO of Nvidia, said:

“Arm has a bright future, and we’ll continue to support them as a proud licensee for decades to come.

Arm is at the centre of the important dynamics in computing. Though we won’t be one company, we will partner closely with Arm.

The significant investments that Masa has made have positioned Arm to expand the reach of the Arm CPU beyond client computing to supercomputing, cloud, AI, and robotics.

I expect Arm to be the most important CPU architecture of the next decade.”

Arm has struggled with relatively flat revenues and rising costs despite the huge success of the company’s licensees such as Apple, Qualcomm, and Amazon.

SoftBank, Arm’s current owner, considered and subsequently rejected the idea of pursuing an IPO (Initial Public Offering) of the company in 2019 and again in early 2020.

“We contemplated an IPO but determined that the pressure to deliver short-term revenue growth and profitability would suffocate our ability to invest, expand, move fast, and innovate,” explained Simon Segars, CEO of Arm, last month.

Following the collapse of the Nvidia acquisition, SoftBank will now have to reconsider an IPO for Arm.

Dr Lil Read, Analyst in the Thematic Research Team at GlobalData, commented:

“Softbank now needs to think of Arm’s future. An initial public offering (IPO) looks likely – the UK government would surely like to see the home-grown chip designer float in London, and potential IPO reforms could create the perfect environment for this. 

Otherwise, Arm may be ripe for a takeover by a private equity consortium backed by chip-friendly giants such as Apple, Qualcomm, and TSMC – Arm’s largest customers.”

Some of Nvidia’s rivals are said to have offered to invest in Arm to help the company remain independent. A takeover by a private equity consortium looks to be Arm’s best option. If the company has to launch an IPO, it could struggle and will face some difficult choices.

Arm’s largest market, mobile, is saturated. The company will struggle to crack the datacentre and PC markets in the face of strong incumbents like Intel and AMD, which have established ecosystems of developers, software, systems, and peripherals, and profits that enable them to make large R&D investments.

In an earlier response to the UK’s Competition and Markets Authority, aiming to quell the regulator’s fears about its acquisition of Arm, Nvidia wrote:

“Nvidia is particularly concerned that these pressures would drive Arm to deprioritize datacenter and PC and to instead focus on its core mobile and growing IoT businesses.

The result would be a concentrated CPU market largely controlled by Intel/AMD (x86).”

Capital markets would likely expect Arm to cut costs to maximise the company’s value. However, SoftBank sounds bullish on its prospects.

“Arm is becoming a centre of innovation not only in the mobile phone revolution, but also in cloud computing, automotive, the Internet of Things, and the metaverse, and has entered its second growth phase,” said Masayoshi Son, Representative Director, Corporate Officer, Chairman, and CEO of SoftBank Group.

Arm has announced a management shake-up in the wake of Nvidia’s exit from the deal.

Rene Haas, the former head of Arm’s intellectual property unit, will take over as the company’s chief executive and lead it during these challenging times. Haas previously worked at Nvidia for seven years.

With the Nvidia acquisition off the table, we can only hope that Haas finds a way to ensure Arm can continue to deliver the semiconductor innovation that it has for three decades.

(Photo by Dustin Tramel on Unsplash)
