aws Archives - AI News
https://www.artificialintelligence-news.com/tag/aws/

AWS and NVIDIA expand partnership to advance generative AI (29 November 2023)
https://www.artificialintelligence-news.com/2023/11/29/aws-nvidia-expand-partnership-advance-generative-ai/

Amazon Web Services (AWS) and NVIDIA have announced a significant expansion of their strategic collaboration at AWS re:Invent. The collaboration aims to provide customers with state-of-the-art infrastructure, software, and services to fuel generative AI innovations.

The collaboration brings together the strengths of both companies, integrating NVIDIA’s latest multi-node systems with next-generation GPUs, CPUs, and AI software, along with AWS technologies such as Nitro System advanced virtualisation, Elastic Fabric Adapter (EFA) interconnect, and UltraCluster scalability.

Key highlights of the expanded collaboration include:

  1. Introduction of NVIDIA GH200 Grace Hopper Superchips on AWS:
    • AWS becomes the first cloud provider to offer NVIDIA GH200 Grace Hopper Superchips with new multi-node NVLink technology.
    • The NVIDIA GH200 NVL32 multi-node platform enables joint customers to scale to thousands of GH200 Superchips, providing supercomputer-class performance.
  2. Hosting NVIDIA DGX Cloud on AWS:
    • Collaboration to host NVIDIA DGX Cloud, an AI-training-as-a-service, on AWS, featuring GH200 NVL32 for accelerated training of generative AI and large language models.
  3. Project Ceiba supercomputer:
    • Collaboration on Project Ceiba, aiming to design the world’s fastest GPU-powered AI supercomputer with 16,384 NVIDIA GH200 Superchips and processing capability of 65 exaflops.
  4. Introduction of new Amazon EC2 instances:
    • AWS introduces three new Amazon EC2 instances, including P5e instances powered by NVIDIA H200 Tensor Core GPUs for large-scale generative AI and HPC workloads.
  5. Software innovations:
    • NVIDIA introduces software on AWS, such as NeMo Retriever microservice for chatbots and summarisation tools, and BioNeMo to speed up drug discovery for pharmaceutical companies.

This collaboration signifies a joint commitment to advancing the field of generative AI, offering customers access to cutting-edge technologies and resources.

Internally, Amazon robotics and fulfilment teams already employ NVIDIA’s Omniverse platform to optimise warehouses in virtual environments first before real-world deployment.

The integration of NVIDIA and AWS technologies will accelerate the development, training, and inference of large language models and generative AI applications across various industries.

(Photo by ANIRUDH on Unsplash)

See also: Inflection-2 beats Google’s PaLM 2 across common benchmarks

Amazon is building an LLM to rival OpenAI and Google (8 November 2023)
https://www.artificialintelligence-news.com/2023/11/08/amazon-is-building-llm-rival-openai-and-google/

Amazon is reportedly making substantial investments in the development of a large language model (LLM) named Olympus. 

According to Reuters, the tech giant is pouring millions into this project to create a model with a staggering two trillion parameters. OpenAI’s GPT-4, for comparison, is estimated to have around one trillion parameters.

This move puts Amazon in direct competition with OpenAI, Meta, Anthropic, Google, and others. The team behind Amazon’s initiative is led by Rohit Prasad, former head of Alexa, who now reports directly to CEO Andy Jassy.

Prasad, as the head scientist of artificial general intelligence (AGI) at Amazon, has unified AI efforts across the company. He brought in researchers from the Alexa AI team and Amazon’s science division to collaborate on training models, aligning Amazon’s resources towards this ambitious goal.

Amazon’s decision to invest in developing homegrown models stems from the belief that having their own LLMs could enhance the attractiveness of their offerings, particularly on Amazon Web Services (AWS).

Enterprises on AWS are constantly seeking top-performing models and Amazon’s move aims to cater to the growing demand for advanced AI technologies.

While Amazon has not provided a specific timeline for the release of the Olympus model, insiders suggest that the company’s focus on training larger AI models underscores its commitment to remaining at the forefront of AI research and development.

Training such massive AI models is a costly endeavour, primarily due to the significant computing power required.

Amazon’s decision to invest heavily in LLMs is part of its broader strategy, as revealed in an earnings call in April. During the call, Amazon executives announced increased investments in LLMs and generative AI while reducing expenditures on retail fulfilment and transportation.

Amazon’s move signals a new chapter in the race for AI supremacy, with major players vying to push the boundaries of the technology.

(Photo by ANIRUDH on Unsplash)

See also: OpenAI introduces GPT-4 Turbo, platform enhancements, and reduced pricing

Q&A: Felipe Chies, Amazon Web Services: Democratising ML (21 September 2022)
https://www.artificialintelligence-news.com/2022/09/21/qa-felipe-chies-amazon-web-services-democratising-ml/

Amazon Web Services (AWS), the leader in public cloud infrastructure, now has more than 200 fully featured services, including compute, storage, databases, networking, analytics, robotics, Internet of Things (IoT), mobile, security, hybrid, virtual and augmented reality (VR and AR), media, application development, deployment, management, and machine learning and artificial intelligence (AI). For the latter, the message is clear: AWS wants to democratise ML technologies.

AWS claims the most comprehensive set of AI and machine learning services for all skill levels. The best known is arguably Amazon SageMaker, a fully managed service that removes the heavy lifting, complexity, and guesswork from each step of the machine learning process, empowering everyday developers and scientists to successfully use machine learning. Since launching SageMaker in 2017, the company has added more than 150 capabilities and features, and by re:Invent in December 2020 – when the first machine learning keynote took place – the message was simple.

As SiliconAngle put it, the company’s ‘overall aim is to enable machine learning to be embedded into most applications before the decade is out by making it accessible to more than just experts.’

Ahead of the AI & Big Data Expo, taking place in Amsterdam on September 20-21, AI News spoke with Felipe Chies, senior business development manager for AI and ML for the Benelux at AWS. Chies has strong experience in the field, having co-founded semiconductor startup Axelera AI, which has since been incubated by Bitfury.

Chies is speaking on the subject of accelerating innovation with no-code and low-code machine learning, and AI News spoke with him about key use cases, industries, and the different AWS products:

AI News: Tell us about the overall AWS ML and AI product set, how you talk about them with clients and how they help democratise machine learning.

Felipe Chies: We are very proud to have the most robust and complete set of machine learning capabilities, and at AWS we always approach everything we do by focusing on our customers. We think of our machine learning offerings in three layers. First come Frameworks and Interfaces for machine learning practitioners. These are people comfortable building deep learning models, working with deep learning frameworks, building clusters, etc. They can get extremely deep. Second, the middle layer makes it much easier and more accessible for developers and data scientists to build, train, tune, and deploy machine learning models with Amazon SageMaker. And last, Application Services, which enable developers to plug pre-built AI functionality into their apps without having to worry about the machine learning models that power these services. Many of our API services require no machine learning expertise from customers, and in some cases end-users may not even realise machine learning is powering experiences with services like Amazon Kendra, Amazon CodeGuru, Contact Lens for Amazon Connect, and Amazon HealthLake. These services make it really easy to incorporate AI into applications without having to build and train ML algorithms.
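
To make the Application Services layer concrete, here is a minimal sketch (an illustration, not from the interview) of calling one of the services Chies mentions, Amazon Kendra, through boto3. The index ID and question are hypothetical placeholders, and it assumes an existing Kendra index in your account.

```python
import boto3

# Amazon Kendra sits in the pre-built Application Services layer:
# developers call an API without touching the underlying ML models.
kendra = boto3.client("kendra", region_name="eu-west-1")

# "my-index-id" is a hypothetical placeholder for an existing index.
response = kendra.query(
    IndexId="my-index-id",
    QueryText="What is our parental leave policy?",
)

# Each result carries the matched document title and an excerpt.
for item in response["ResultItems"]:
    title = (item.get("DocumentTitle") or {}).get("Text", "")
    excerpt = (item.get("DocumentExcerpt") or {}).get("Text", "")
    print(title, "-", excerpt)
```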

How does that help to democratise?

If we want machine learning to be as expansive as we really want it to be, we need to make it much more accessible to people who aren’t machine learning practitioners. Today, there are very few of these experts out there. So, when we built Amazon SageMaker, we designed it as a fully managed service that removes the heavy lifting, complexity, and guesswork from each step of the machine learning process, empowering everyday developers and scientists to successfully use machine learning. SageMaker is a step change in the ability of everyday developers and data scientists to access and build machine learning models.

To further democratise machine learning, we launched Amazon SageMaker Canvas, which enables business users and analysts to generate highly accurate machine-learning predictions using a visual point-and-click interface – with no coding required.

AI: How sophisticated does a customer of AWS have to be to use your AI/ML tools?

FC: AWS wants to take technology that until a few years ago was only within reach of a small number of well-funded organisations and make it as broadly distributed as possible. We’ve done that with storage, computing, analytics, databases, and data warehousing, and we’ve taken the exact same approach with machine learning. We want it to be as broadly distributed as possible.

AI: What are the common use cases and industries that you see, and how can you help?

FC: Today, more than 100,000 customers use AWS machine learning. Two examples of industries where we see a lot of usage are manufacturing and supply chain. With what has happened in the world most recently, there are many challenges in the supply chain area, so being able to forecast demand is very important. Customers ask us: ‘How can you help us to anticipate changes, to anticipate demand, to save cost, to make our customers happy and deliver on time?’ Those kinds of things are common. In manufacturing, predictive maintenance and quality control are easy use cases for applying machine learning – for quality control, for instance, you can use computer vision to automate inspection. In marketing and sales, it is again forecasting – an area where it is easy to understand the value machine learning brings to the business.

AI: What are the key roadblocks to ML adoption in your opinion and why?

FC: Many of the organisations I talk to already have a machine learning mindset, so that is not a problem. One of the biggest challenges nowadays is the backlog of work for development teams – there’s just a lot to do. One way to solve it is to get more people, but that’s another challenge: there just aren’t enough specialists – whether in data science, machine learning, or engineering, it’s really hard to find these people in the market.

This is really where the democratisation of machine learning comes in. Why not enable more people in the company to do machine learning? Instead of having only data scientists and machine learning engineers, why not also business analysts, or finance, or marketing people? An example of this is a tool like Amazon SageMaker Canvas. It enables business users and analysts to generate highly accurate machine-learning predictions using a visual point-and-click interface—with no coding required.

AI: What would you like attendees at the AI & Big Data Expo to learn from your keynote presentation?

FC: There are people who think machine learning is out of their reach – that they need to send a requirement to the data science team and wait for weeks. This is not really the case; they can get started in a few minutes. The awareness that people can use machine learning nowadays without needing to know how to build models – that is a key takeaway.

AWS announces nine major updates for its ML platform SageMaker (9 December 2020)
https://www.artificialintelligence-news.com/2020/12/09/aws-nine-major-updates-ml-platform-sagemaker/

Amazon Web Services (AWS) has announced nine major new updates for its cloud-based machine learning platform, SageMaker.

SageMaker aims to provide a machine learning service which can be used to build, train, and deploy ML models for virtually any use case.

During this year’s re:Invent conference, AWS made several announcements to further improve SageMaker’s capabilities.

Swami Sivasubramanian, VP of Amazon Machine Learning at AWS, said:

“Hundreds of thousands of everyday developers and data scientists have used our industry-leading machine learning service, Amazon SageMaker, to remove barriers to building, training, and deploying custom machine learning models. One of the best parts about having such a widely-adopted service like SageMaker is that we get lots of customer suggestions which fuel our next set of deliverables.

Today, we are announcing a set of tools for Amazon SageMaker that makes it much easier for developers to build end-to-end machine learning pipelines to prepare, build, train, explain, inspect, monitor, debug, and run custom machine learning models with greater visibility, explainability, and automation at scale.”

The first announcement is Data Wrangler, a feature which aims to automate the preparation of data for machine learning.

Data Wrangler enables customers to choose the data they want from their various data stores and import it with a single click. Over 300 built-in data transformers are included to help customers normalise, transform, and combine features without having to write any code.

Frank Farrall, Principal of AI Ecosystems and Platforms Leader at Deloitte, comments:

“SageMaker Data Wrangler enables us to hit the ground running to address our data preparation needs with a rich collection of transformation tools that accelerate the process of machine learning data preparation needed to take new products to market.

In turn, our clients benefit from the rate at which we scale deployments, enabling us to deliver measurable, sustainable results that meet the needs of our clients in a matter of days rather than months.”

The second announcement is Feature Store. Amazon SageMaker Feature Store provides a new repository that makes it easy to store, update, retrieve, and share machine learning features for training and inference.

Feature Store aims to overcome the problem of storing features which are mapped to multiple models. A purpose-built feature store makes it much easier for teams of developers and data scientists to name, organise, find, and share sets of features. Because it resides in SageMaker Studio – close to where ML models are run – AWS claims it provides single-digit millisecond inference latency.
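
As an illustration of how that works in practice (a minimal sketch, not AWS sample code), individual records can be written and read through the Feature Store runtime API in boto3; the feature group name and feature names below are hypothetical placeholders.

```python
import boto3

# The low-latency runtime client serves single records at inference time.
featurestore = boto3.client("sagemaker-featurestore-runtime")

# Write (or overwrite) one record, keyed by its identifier feature.
featurestore.put_record(
    FeatureGroupName="customers",  # hypothetical feature group
    Record=[
        {"FeatureName": "customer_id", "ValueAsString": "42"},
        {"FeatureName": "lifetime_value", "ValueAsString": "1234.5"},
    ],
)

# Read the same record back for online inference.
record = featurestore.get_record(
    FeatureGroupName="customers",
    RecordIdentifierValueAsString="42",
)
print(record["Record"])  # list of {"FeatureName": ..., "ValueAsString": ...}
```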

Mammad Zadeh, VP of Engineering, Data Platform at Intuit, says:

“We have worked closely with AWS in the lead up to the release of Amazon SageMaker Feature Store, and we are excited by the prospect of a fully managed feature store so that we no longer have to maintain multiple feature repositories across our organization.

Our data scientists will be able to use existing features from a central store and drive both standardisation and reuse of features across teams and models.”

Next up, we have SageMaker Pipelines – which AWS claims is the first purpose-built, easy-to-use continuous integration and continuous delivery (CI/CD) service for machine learning.

Developers can define each step of an end-to-end machine learning workflow, including the data-load steps, transformations from Amazon SageMaker Data Wrangler, features stored in Amazon SageMaker Feature Store, training configuration and algorithm setup, debugging steps, and optimisation steps.
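
A minimal sketch of what such a pipeline definition can look like with the SageMaker Python SDK’s workflow module (assuming sagemaker v2.x; the role ARN, container image, and S3 paths are hypothetical placeholders):

```python
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import TrainingStep

role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder

estimator = Estimator(
    image_uri="<training-image-uri>",  # placeholder container image
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/model-artifacts/",
)

# A single training step; real pipelines chain data-loading, Data Wrangler
# transformations, training, debugging, and optimisation steps the same way.
train_step = TrainingStep(
    name="TrainModel",
    estimator=estimator,
    inputs={"train": TrainingInput("s3://my-bucket/train/")},
)

pipeline = Pipeline(name="demo-pipeline", steps=[train_step])
pipeline.upsert(role_arn=role)  # create or update the pipeline definition
pipeline.start()                # launch an execution
```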

SageMaker Clarify may be one of the most important features debuted by AWS this week, given the ongoing scrutiny of bias in AI systems.

Clarify aims to provide bias detection across the machine learning workflow, enabling developers to build greater fairness and transparency into their ML models. Rather than turn to often time-consuming open-source tools, developers can use the integrated solution to quickly try and counter any bias in models.

Andreas Heyden, Executive VP of Digital Innovations for the DFL Group, says:

“Amazon SageMaker Clarify seamlessly integrates with the rest of the Bundesliga Match Facts digital platform and is a key part of our long-term strategy of standardising our machine learning workflows on Amazon SageMaker.

By using AWS’s innovative technologies, such as machine learning, to deliver more in-depth insights and provide fans with a better understanding of the split-second decisions made on the pitch, Bundesliga Match Facts enables viewers to gain deeper insights into the key decisions in each match.”

Deep Profiling for Amazon SageMaker automatically monitors system resource utilisation and provides alerts where required for any detected training bottlenecks. The feature works across frameworks (PyTorch, Apache MXNet, and TensorFlow) and collects system and training metrics automatically without requiring any code changes in training scripts.
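
As a rough sketch (assuming the SageMaker Python SDK’s debugger module; the training script, role, and data path are hypothetical placeholders), profiling is switched on through the estimator configuration rather than the training script itself:

```python
from sagemaker.debugger import FrameworkProfile, ProfilerConfig
from sagemaker.pytorch import PyTorch

# Sample system metrics every 500 ms and collect framework-level
# profiles of the training loop; train.py needs no code changes.
profiler_config = ProfilerConfig(
    system_monitor_interval_millis=500,
    framework_profile_params=FrameworkProfile(),
)

estimator = PyTorch(
    entry_point="train.py",  # hypothetical training script
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    framework_version="1.6.0",
    py_version="py3",
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    profiler_config=profiler_config,
)
estimator.fit("s3://my-bucket/train/")
```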

Next up, we have Distributed Training on SageMaker which AWS claims makes it possible to train large, complex deep learning models up to two times faster than current approaches.

Kristóf Szalay, CTO at Turbine, comments:

“We use machine learning to train our in silico human cell model, called Simulated Cell, based on a proprietary network architecture. By accurately predicting various interventions on the molecular level, Simulated Cell helps us to discover new cancer drugs and find combination partners for existing therapies.

Training of our simulation is something we continuously iterate on, but on a single machine each training takes days, hindering our ability to iterate on new ideas quickly.

We are very excited about Distributed Training on Amazon SageMaker, which we are expecting to decrease our training times by 90% and to help us focus on our main task: to write a best-of-the-breed codebase for the cell model training.

Amazon SageMaker ultimately allows us to become more effective in our primary mission: to identify and develop novel cancer drugs for patients.”

SageMaker’s Data Parallelism engine scales training jobs from a single GPU to hundreds or thousands by automatically splitting data across multiple GPUs, improving training time by up to 40 percent.
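
In the SageMaker Python SDK, enabling the data parallelism engine amounts to a single distribution flag on the estimator (a minimal sketch; the script, role, and data path are hypothetical placeholders):

```python
from sagemaker.pytorch import PyTorch

estimator = PyTorch(
    entry_point="train.py",  # hypothetical training script
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    framework_version="1.6.0",
    py_version="py3",
    instance_count=2,                # scale out across nodes/GPUs
    instance_type="ml.p3.16xlarge",  # 8 GPUs per instance
    # The flag that turns on SageMaker's data parallelism library:
    distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
)
estimator.fit("s3://my-bucket/train/")
```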

With edge computing advancements increasing rapidly, AWS is keeping pace with SageMaker Edge Manager.

Edge Manager helps developers to optimise, secure, monitor, and maintain ML models deployed on fleets of edge devices. In addition to helping optimise ML models and manage edge devices, Edge Manager provides the ability to cryptographically sign models, upload prediction data from devices to SageMaker for monitoring and analysis, and view a dashboard within the SageMaker console which tracks and provides a visual report on the operation of deployed models.

Igor Bergman, VP of Cloud and Software of PCs and Smart Devices at Lenovo, comments:

“SageMaker Edge Manager will help eliminate the manual effort required to optimise, monitor, and continuously improve the models after deployment. With it, we expect our models will run faster and consume less memory than with other comparable machine-learning platforms.

As we extend AI to new applications across the Lenovo services portfolio, we will continue to require a high-performance pipeline that is flexible and scalable both in the cloud and on millions of edge devices. That’s why we selected the Amazon SageMaker platform. With its rich edge-to-cloud and CI/CD workflow capabilities, we can effectively bring our machine learning models to any device workflow for much higher productivity.”

Finally, SageMaker JumpStart aims to make it easier for developers which have little experience with machine learning deployments to get started.

JumpStart provides developers with an easy-to-use, searchable interface to find best-in-class solutions, algorithms, and sample notebooks. Developers can select from several end-to-end machine learning templates (e.g. fraud detection, customer churn prediction, or forecasting) and deploy them directly into their SageMaker Studio environments.

AWS has been on a roll with SageMaker improvements—delivering more than 50 new capabilities over the past year. After this bumper feature drop, we probably shouldn’t expect any more until we’ve put 2020 behind us.

You can find coverage of AWS’ more cloud-focused announcements via our sister publication CloudTech here.

NVIDIA chucks its MLPerf-leading A100 GPU into Amazon’s cloud (3 November 2020)
https://www.artificialintelligence-news.com/2020/11/03/nvidia-mlperf-a100-gpu-amazon-cloud/

NVIDIA’s A100 set a new record in the MLPerf benchmark last month and now it’s accessible through Amazon’s cloud.

Amazon Web Services (AWS) first launched a GPU instance 10 years ago with the NVIDIA M2050. It’s rather poetic that, a decade on, NVIDIA is now providing AWS with the hardware to power the next generation of groundbreaking innovations.

The A100 outperformed CPUs in this year’s MLPerf by up to 237x in data centre inference. A single NVIDIA DGX A100 system – with eight A100 GPUs – provides the same performance as nearly 1,000 dual-socket CPU servers on some AI applications.

“We’re at a tipping point as every industry seeks better ways to apply AI to offer new services and grow their business,” said Ian Buck, Vice President of Accelerated Computing at NVIDIA, following the benchmark results.

Businesses can access the A100 in AWS’ P4d instance. NVIDIA claims the instances reduce the time to train machine learning models by up to 3x with FP16 and up to 6x with TF32 compared to the default FP32 precision.

Each P4d instance features eight NVIDIA A100 GPUs. If even more performance is required, customers are able to access over 4,000 GPUs at a time using AWS’s Elastic Fabric Adapter (EFA).
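
For illustration, here is a hedged boto3 sketch of launching a single P4d instance with an EFA-enabled network interface; the AMI, subnet, and security group IDs are hypothetical placeholders, and a deep learning AMI with the NVIDIA and EFA drivers preinstalled would typically be used.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="p4d.24xlarge",      # eight NVIDIA A100 GPUs per instance
    MinCount=1,
    MaxCount=1,
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "SubnetId": "subnet-0123456789abcdef0",  # placeholder subnet
        "Groups": ["sg-0123456789abcdef0"],      # placeholder security group
        "InterfaceType": "efa",  # Elastic Fabric Adapter networking
    }],
)
print(response["Instances"][0]["InstanceId"])
```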

Dave Brown, Vice President of EC2 at AWS, said:

“The pace at which our customers have used AWS services to build, train, and deploy machine learning applications has been extraordinary. At the same time, we have heard from those customers that they want an even lower-cost way to train their massive machine learning models.

Now, with EC2 UltraClusters of P4d instances powered by NVIDIA’s latest A100 GPUs and petabit-scale networking, we’re making supercomputing-class performance available to virtually everyone, while reducing the time to train machine learning models by 3x, and lowering the cost to train by up to 60% compared to previous generation instances.”

P4d supports 400Gbps networking and makes use of NVIDIA’s technologies including NVLink, NVSwitch, NCCL, and GPUDirect RDMA to further accelerate deep learning training workloads.

Some of AWS’ customers across various industries have already begun exploring how the P4d instance can help their business.

Karley Yoder, VP & GM of Artificial Intelligence at GE Healthcare, commented:

“Our medical imaging devices generate massive amounts of data that need to be processed by our data scientists. With previous GPU clusters, it would take days to train complex AI models, such as Progressive GANs, for simulations and view the results.

Using the new P4d instances reduced processing time from days to hours. We saw two- to three-times greater speed on training models with various image sizes while achieving better performance with increased batch size and higher productivity with a faster model development cycle.”

For an example from a different industry, the research arm of Toyota is exploring how P4d can improve their existing work in developing self-driving vehicles and groundbreaking new robotics.

“The previous generation P3 instances helped us reduce our time to train machine learning models from days to hours,” explained Mike Garrison, Technical Lead of Infrastructure Engineering at Toyota Research Institute.

“We are looking forward to utilizing P4d instances, as the additional GPU memory and more efficient float formats will allow our machine learning team to train with more complex models at an even faster speed.”

P4d instances are currently available in the US East (N. Virginia) and US West (Oregon) regions. AWS says further availability is planned soon.

You can find out more about P4d instances and how to get started here.

Amazon makes three major AI announcements during re:Invent 2019 (3 December 2019)
https://www.artificialintelligence-news.com/2019/12/03/amazon-ai-announcements-reinvent-2019/

Amazon has kicked off its annual re:Invent conference in Las Vegas and made three major AI announcements.

During a midnight keynote, Amazon unveiled Transcribe Medical, SageMaker Operators for Kubernetes, and DeepComposer.

Transcribe Medical

The first announcement we’ll be talking about is likely to have the biggest impact on people’s lives soonest.

Transcribe Medical is designed to transcribe medical speech for primary care. The feature is aware of medical speech in addition to standard conversational diction.

Amazon says Transcribe Medical can be deployed across “thousands” of healthcare facilities to provide clinicians with secure note-taking abilities.

Transcribe Medical offers an API and can work with most microphone-equipped smart devices. The service is fully managed and sends back a stream of text in real-time.

Furthermore, and most importantly, Transcribe Medical is covered under AWS’ HIPAA eligibility and business associate addendum (BAA). This means that any customer that enters into a BAA with AWS can use Transcribe Medical to process and store personal health information legally.
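
Real-time streaming uses Amazon’s dedicated streaming transcription SDK, but as a minimal illustration (not Amazon sample code), a recorded consultation can be submitted to Transcribe Medical as a batch job through boto3; the job name, bucket, and audio file are hypothetical placeholders.

```python
import boto3

transcribe = boto3.client("transcribe")

# Submit a recorded primary-care conversation for transcription.
transcribe.start_medical_transcription_job(
    MedicalTranscriptionJobName="demo-consultation-001",  # placeholder
    LanguageCode="en-US",
    Specialty="PRIMARYCARE",
    Type="CONVERSATION",  # or "DICTATION" for single-speaker notes
    Media={"MediaFileUri": "s3://my-bucket/consultation.wav"},
    OutputBucketName="my-transcripts-bucket",
)

# Poll for the result; production code would wait and handle failures.
job = transcribe.get_medical_transcription_job(
    MedicalTranscriptionJobName="demo-consultation-001"
)
print(job["MedicalTranscriptionJob"]["TranscriptionJobStatus"])
```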

SoundLines and Amgen are two partners which Amazon says are already using Transcribe Medical.

Vadim Khazan, president of technology at SoundLines, said in a statement:

“For the 3,500 health care partners relying on our care team optimisation strategies for the past 15 years, we’ve significantly decreased the time and effort required to get to insightful data.”

SageMaker Operators for Kubernetes

The next announcement is Amazon SageMaker Operators for Kubernetes.

Amazon’s SageMaker is a machine learning development platform, and this new feature lets data scientists using Kubernetes train, tune, and deploy AI models.

SageMaker Operators can be installed on Kubernetes clusters and jobs can be created using Amazon’s machine learning platform through the Kubernetes API and command line tools.
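
A rough sketch of what creating a training job from Python could look like with the official kubernetes client. The CRD group, version, and spec fields mirror the operator’s TrainingJob resource as best as can be recalled and should be treated as assumptions, as should the role, image, and S3 paths.

```python
from kubernetes import client, config

config.load_kube_config()  # uses your local kubeconfig
api = client.CustomObjectsApi()

# Assumed CRD shape for the SageMaker operator's TrainingJob resource;
# field names mirror SageMaker's CreateTrainingJob API in camelCase.
training_job = {
    "apiVersion": "sagemaker.aws.amazon.com/v1",
    "kind": "TrainingJob",
    "metadata": {"name": "demo-training-job"},
    "spec": {
        "roleArn": "arn:aws:iam::123456789012:role/SageMakerExecutionRole",
        "region": "us-east-1",
        "algorithmSpecification": {
            "trainingImage": "<training-image-uri>",
            "trainingInputMode": "File",
        },
        "outputDataConfig": {"s3OutputPath": "s3://my-bucket/output/"},
        "resourceConfig": {
            "instanceCount": 1,
            "instanceType": "ml.m5.xlarge",
            "volumeSizeInGB": 10,
        },
        "stoppingCondition": {"maxRuntimeInSeconds": 3600},
    },
}

api.create_namespaced_custom_object(
    group="sagemaker.aws.amazon.com",
    version="v1",
    namespace="default",
    plural="trainingjobs",
    body=training_job,
)
```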

In a blog post, AWS deep learning senior product manager Aditya Bindal wrote:

“Customers are now spared all the heavy lifting of integrating their Amazon SageMaker and Kubernetes workflows. Starting today, customers using Kubernetes can make a simple call to Amazon SageMaker, a modular and fully-managed service that makes it easier to build, train, and deploy machine learning (ML) models at scale.”

Amazon says that compute resources are pre-configured and optimised, only provisioned when requested, scaled as needed, and shut down automatically when jobs complete.

SageMaker Operators for Kubernetes is generally available in AWS server regions including US East (Ohio), US East (N. Virginia), US West (Oregon), and EU (Ireland).

DeepComposer

Finally, we have DeepComposer. This one is a bit more fun for those who enjoy playing with hardware toys.

Amazon calls DeepComposer the “world’s first” machine learning-enabled musical keyboard. The keyboard features 32 keys spanning two octaves and is designed for developers to experiment with pretrained or custom AI models.

In a blog post, AWS AI and machine learning evangelist Julien Simon explains how DeepComposer taps a Generative Adversarial Network (GAN) to fill in gaps in songs.

After recording a short tune, the composer selects a model for their favourite genre and sets the model’s parameters. Hyperparameters are then set, along with a validation sample.

Once this process is complete, DeepComposer then generates a composition which can be played in the AWS console or even shared to SoundCloud (then it’s really just a waiting game for a call from Jay-Z).

Developers itching to get started with DeepComposer can apply for a physical keyboard for when they become available, or get started now with a virtual keyboard in the AWS console.

Amazon joins calls to establish facial recognition standards (8 February 2019)
https://www.artificialintelligence-news.com/2019/02/08/amazon-calls-facial-recognition-standards/

Amazon has put its weight behind the growing number of calls from companies, individuals, and rights groups to establish facial recognition standards.

Michael Punke, VP of Global Public Policy at Amazon Web Services, said:

“Over the past several months, we’ve talked to customers, researchers, academics, policymakers, and others to understand how to best balance the benefits of facial recognition with the potential risks.

It’s critical that any legislation protect civil rights while also allowing for continued innovation and practical application of the technology.”

In a blog post today, Amazon highlighted five guidelines to ensure facial recognition is developed and used ethically.

The first of the five calls for facial recognition to follow existing laws which protect civil liberties. To ensure accountability, the second guideline wants all facial recognition to be reviewed by humans before any decision is taken.

Other guidelines include a call for transparency in how agencies are using facial recognition technology, and visual notices placed where it’s being used in public or commercial settings.

Facial Recognition Concerns

The company has faced criticism of its ‘Rekognition’ system which is used by police forces and has been pitched to agencies such as US Immigration and Customs Enforcement (ICE).

In a letter addressed to Amazon CEO Jeff Bezos, employees wrote:

“We refuse to build the platform that powers ICE, and we refuse to contribute to tools that violate human rights.

As ethically concerned Amazonians, we demand a choice in what we build and a say in how it is used.”

The letter was sent following ICE’s separation of immigrant children from their families at the US border and subsequent detainment. There’s no evidence ICE ultimately purchased or used Amazon’s technology.

In July last year, the American Civil Liberties Union tested Amazon’s facial recognition technology on members of Congress to see whether they would match against a database of criminal mugshots.

Rekognition compared pictures of all members of the House and Senate against 25,000 arrest photos. The false matches disproportionately affected members of the Congressional Black Caucus.

Dr Matt Wood, General Manager of AI at Amazon Web Services, commented on the ACLU’s findings later that month. He said the ACLU left Rekognition’s default confidence threshold of 80 percent in place, whereas Amazon suggests 95 percent or higher for law enforcement use.

Wood, however, went on to say it showed how standards are needed to ensure facial recognition systems are used properly. He called for “the government to weigh in and specify what temperature (or confidence levels) it wants law enforcement agencies to meet to assist in their public safety work.”
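
For illustration (a minimal boto3 sketch, not from the article), the caller sets that confidence threshold explicitly when comparing faces with Rekognition; the bucket and image keys are hypothetical placeholders.

```python
import boto3

rekognition = boto3.client("rekognition")

# Raise the threshold from the 80 percent default to the 95 percent
# Amazon recommends for law enforcement use cases.
response = rekognition.compare_faces(
    SourceImage={"S3Object": {"Bucket": "my-bucket", "Name": "query.jpg"}},
    TargetImage={"S3Object": {"Bucket": "my-bucket", "Name": "mugshot.jpg"}},
    SimilarityThreshold=95.0,
)

for match in response["FaceMatches"]:
    print(f"Match with similarity {match['Similarity']:.1f}%")
```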

The call for facial recognition standards extends beyond the US. In China, the CEO of SenseTime – the world’s most funded AI startup – also said he wants to see facial recognition standards established for a ‘healthier’ industry.

In the UK, Information Commissioner Elizabeth Denham announced her office has identified facial recognition technology as a priority to establish what protections are needed for the public.

SenseTime is so well funded not just because of its powerful facial recognition technology, but also because of its adoption by the Chinese government. The firm aims to process and analyse over 100,000 simultaneous real-time streams from traffic cameras, ATMs, and more as part of its ‘Viper’ system.

If such a system were deployed with biased algorithms, it would exacerbate current societal problems. Algorithmic Justice League founder Joy Buolamwini gave a fantastic presentation during the World Economic Forum last month on the need to fight AI bias.

As Spider-Man’s Uncle Ben would say: “With great power, comes great responsibility”.

AI to fuel cloud computing growth, says ACCA chairman (13 October 2017)
https://www.artificialintelligence-news.com/2017/10/13/ai-cloud-computing-growth-acca/

Addressing the audience at the Cloud Expo Asia conference in Singapore, Bernie Trudel, chairman of the Asia Cloud Computing Association (ACCA), said he expects AI to fuel cloud computing growth. Although AI currently accounts for only 1% of the global cloud computing market, he said, its share of the overall IT market is growing at 52%.

Trudel added, “We’re starting to see AI having a significant impact on cloud computing. If you extrapolate what the analysts are saying, there’s faster growth in AI, with 10% of cloud revenue expected to come from AI by 2025.”
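
As a back-of-the-envelope check on that extrapolation (the arithmetic below is ours, not from the article, and treats the 52% figure as annual growth of AI’s 1% share of cloud revenue):

```python
# Naively compound a 1% share (2017) at 52% a year.
share = 1.0  # percent
for year in range(2018, 2026):
    share *= 1.52
    print(f"{year}: {share:.1f}%")
# The share passes 10% around 2023 under this naive compounding,
# broadly consistent with the projection of 10% by 2025.
```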

Trudel noted that although major cloud suppliers are already offering AI capabilities, the cloud-based AI services market is still at a nascent stage. He said AWS’s approach to AI is interesting because it uses AI to help organisations easily determine the best open-source machine learning frameworks, such as TensorFlow or MXNet, to use for crunching different types of data. AWS also hosts Common Crawl, a publicly available dataset that contains petabytes of data collected over years of web crawling, Trudel said.

Trudel noted that Google has been doing a lot of work in AI, whether it is “helping developers choose the best algorithm for AI projects, using its DeepMind technology to build AI services or open-sourcing TensorFlow and Android to gain mindshare in AI developments”. Google is also using AI to improve the energy efficiency of its datacentres. “Now, national grid providers are also looking at leveraging the same [AI] models to drive efficiency in their networks,” he said.

Trudel said that major cloud suppliers are working towards delivering general AI services. He also expects companies such as Apple, Baidu, Alibaba, Intel, Tencent, and Facebook to join the fray with their own AI services.
