GAN Archives - AI News
https://www.artificialintelligence-news.com/tag/gan/

AI multi-speaker lip-sync has arrived
https://www.artificialintelligence-news.com/2023/12/07/ai-multi-speaker-lip-sync-has-arrived/
Thu, 07 Dec 2023

Rask AI, an AI-powered video and audio localisation tool, has announced the launch of its new Multi-Speaker Lip-Sync feature. With AI-powered lip-sync, Rask's 750,000 users can translate their content into more than 130 languages and sound as fluent as a native speaker.

For a long time, there has been a lack of synchronisation between lip movements and voices in dubbed content. Experts believe this is one of the reasons why dubbing is relatively unpopular in English-speaking countries. Synchronised lip movements make localised content more realistic and therefore more appealing to audiences.

A study by Yukari Hirata, a professor known for her work in linguistics, found that watching lip movements (rather than gestures) helps learners perceive difficult phonemic contrasts in a second language. Lip reading is also one of the ways we learn to speak in general.

Today, with Rask’s new feature, it’s possible to take localised content to a new level, making dubbed videos more natural.

The AI automatically restructures the lower face based on references. It takes into account how the speaker looks and what they are saying to make the end result more realistic. 

How it works:

  1. Upload a video with one or more people in the frame.
  2. Translate the video into another language.
  3. Press the ‘Lip Sync Check’ button and the algorithm will evaluate the video for lip sync compatibility.
  4. If the video passes the check, press ‘Lip Sync’ and wait for the result.
  5. Download the video.

According to Maria Chmir, founder and CEO of Rask AI, the new feature will help content creators expand their audience. The AI visually adjusts lip movements to make a character appear to speak the language as fluently as a native speaker. 

The technology is based on generative adversarial network (GAN) learning, which pits a generator against a discriminator: the two networks compete to stay one step ahead of each other. The generator produces content (lip movements), while the discriminator is responsible for quality control, judging how realistic that content looks.
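
That competition can be illustrated with a deliberately tiny sketch (a toy model, not Rask's actual system): here the "content" is just a one-dimensional distribution, the generator learns a single offset, and the discriminator is a logistic regression acting as quality control.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: "real" data is drawn from N(3, 0.5); the generator shifts
# noise by a learned offset theta, so matching the real distribution
# means learning theta close to 3.
REAL_MEAN, NOISE_STD = 3.0, 0.5

def real_batch(n):
    return rng.normal(REAL_MEAN, NOISE_STD, n)

def fake_batch(theta, n):
    return theta + rng.normal(0.0, NOISE_STD, n)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Discriminator: logistic regression D(x) = sigmoid(w*x + b), trained
# to output 1 for real samples and 0 for generated ones.
w, b, theta = 0.0, 0.0, 0.0
lr, batch = 0.05, 64

for step in range(4000):
    xr, xf = real_batch(batch), fake_batch(theta, batch)

    # Discriminator update: binary cross-entropy gradients for
    # logistic regression are (D(x) - y) * x and (D(x) - y).
    pr, pf = sigmoid(w * xr + b), sigmoid(w * xf + b)
    grad_w = np.mean((pr - 1.0) * xr) + np.mean(pf * xf)
    grad_b = np.mean(pr - 1.0) + np.mean(pf)
    w -= lr * grad_w
    b -= lr * grad_b

    # Generator update (non-saturating loss -log D(fake)):
    # d(-log D)/dtheta = -(1 - D(fake)) * w, so theta drifts toward
    # whatever the discriminator currently labels as "real".
    xf = fake_batch(theta, batch)
    pf = sigmoid(w * xf + b)
    theta += lr * np.mean(1.0 - pf) * w

print(round(theta, 2))  # should settle near the real mean of 3.0
```

The same adversarial loop underlies production lip-sync models; the generator there outputs mouth-region pixels and the discriminator judges whole frames, but the feedback structure is identical.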

The beta release is available to all Rask subscription customers.

(Editor’s note: This article is sponsored by Rask AI)

Google no longer accepts deepfake projects on Colab
https://www.artificialintelligence-news.com/2022/05/31/google-no-longer-accepts-deepfake-projects-on-colab/
Tue, 31 May 2022

Google has added “creating deepfakes” to its list of projects that are banned from its Colab service.

Colab is a product from Google Research that enables AI researchers, data scientists, or students to write and execute Python in their browsers.

The change was made with little fanfare.

Deepfakes use generative neural network architectures – such as autoencoders or generative adversarial networks (GANs) – to manipulate or generate visual and audio content.
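
The autoencoder half of that sentence is worth a sketch. A common face-swap recipe trains one shared encoder with a separate decoder per identity; swapping a face means encoding person A's frame and decoding it with person B's decoder. The version below is purely illustrative: the "faces" are synthetic 32-dimensional vectors and every network is a single linear map.

```python
import numpy as np

rng = np.random.default_rng(1)

# One shared encoder compresses any face to a small latent code;
# a decoder per identity reconstructs it from that code.
DIM, LATENT, N = 32, 8, 200

faces_a = rng.normal(0.0, 1.0, (N, DIM)) + 0.5   # identity A's frames
faces_b = rng.normal(0.0, 1.0, (N, DIM)) - 0.5   # identity B's frames

enc = rng.normal(0.0, 0.1, (LATENT, DIM))        # shared encoder
dec_a = rng.normal(0.0, 0.1, (DIM, LATENT))      # decoder for identity A
dec_b = rng.normal(0.0, 0.1, (DIM, LATENT))      # decoder for identity B

def mse(x, y):
    return float(np.mean((x - y) ** 2))

init_a = mse(faces_a @ enc.T @ dec_a.T, faces_a)
init_b = mse(faces_b @ enc.T @ dec_b.T, faces_b)

lr = 0.002
for _ in range(2000):
    for faces, dec in ((faces_a, dec_a), (faces_b, dec_b)):
        h = faces @ enc.T                 # latent codes, shape (N, LATENT)
        out = h @ dec.T                   # reconstructions, shape (N, DIM)
        g = (out - faces) / N             # reconstruction-error signal
        dec -= lr * (g.T @ h)             # gradient step for this decoder
        enc -= lr * (g @ dec).T @ faces   # gradient step for shared encoder

loss_a = mse(faces_a @ enc.T @ dec_a.T, faces_a)
loss_b = mse(faces_b @ enc.T @ dec_b.T, faces_b)

# The "swap": A's frames pushed through B's decoder.
swapped = faces_a @ enc.T @ dec_b.T
```

Because the encoder is shared, the latent code captures pose and expression while each decoder supplies the identity, which is what makes the swap plausible in real systems built on deep convolutional versions of this idea.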

The technology is often used for malicious purposes such as generating sexual content of individuals without their consent, fraud, and the creation of deceptive content aimed at changing views and influencing democratic processes.

Such concerns around the use of deepfakes are likely the reason behind Google's decision to ban related projects.

It’s a controversial decision. Banning such projects isn’t going to stop anyone from developing them and may also hinder efforts to build tools for countering deepfakes at a time when they’re most needed.

In March, a deepfake purportedly showing Ukrainian President Volodymyr Zelenskyy asking troops to lay down their arms in their fight to defend their homeland from Russia’s invasion was posted to a hacked news website.

“I only advise that the troops of the Russian Federation lay down their arms and return home,” Zelenskyy said in an official video to refute the fake. “We are at home and defending Ukraine.”

Fortunately, the deepfake was of low quality by today’s standards. The fake Zelenskyy had a comically large and noticeably pixelated head compared to the rest of his body. The video probably didn’t fool anyone, but it could have had serious consequences if people did believe it.

One Russia-linked influence campaign – removed by Facebook and Twitter in March – used AI-generated faces for a fake “editor-in-chief” and “columnist” for a linked propaganda website. That one was more believable and likely fooled some people.

However, not all deepfakes are malicious. They’re also used for music, activism, satire, and even helping police solve crimes.

Historical data from archive.org suggests Google silently added deepfakes to its list of projects banned from Colab sometime between 14 and 24 May 2022.

(Photo by Markus Spiske on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Apple’s former ML director reportedly joins Google DeepMind
https://www.artificialintelligence-news.com/2022/05/18/apple-former-ml-director-reportedly-joins-google-deepmind/
Wed, 18 May 2022

A machine learning exec who left Apple due to its return-to-office policy has reportedly joined Google DeepMind.

Ian Goodfellow is a renowned machine learning researcher. Goodfellow invented generative adversarial networks (GANs), developed a system for Google Maps that transcribes addresses from Street View car photos, and more.

In a departure note to his team at Apple, Goodfellow cited the company’s much-criticised lack of flexibility in its work policies.

Many companies were forced into supporting remote work during the pandemic, and many have since kept flexible working for the recruitment advantages, mental and physical health benefits, reduced commuting costs, improved productivity, and savings on office space.

Apple planned for employees to work from the office on Mondays, Tuesdays, and Thursdays, starting this month. However, following backlash, on Tuesday the company put the plan on hold—officially citing rising Covid cases.

Goodfellow had already decided to hand in his resignation and head to a company with more forward-looking working policies.

The machine learning researcher had worked for Apple since 2019. Before that, Goodfellow was a senior research scientist at Google.

Goodfellow is now reportedly returning to Google, albeit to its DeepMind subsidiary. Google is currently approving requests from most employees seeking to work from home.

More departures are expected from Apple if it proceeds with its return-to-office mandate.

“Everything happened with us working from home all day, and now we have to go back to the office, sit in traffic for two hours, and hire people to take care of kids at home,” a different former Apple employee told Bloomberg.

Every talented AI researcher like Goodfellow that leaves Apple is a potential win for Google and other companies.

(Photo by Viktor Forgacs on Unsplash)


MIT researchers develop AI to calculate material stress using images
https://www.artificialintelligence-news.com/2021/04/22/mit-researchers-developer-ai-calculate-material-stress-using-images/
Thu, 22 Apr 2021

Researchers from MIT have developed an AI tool for determining the stress a material is under through analysing images.

The pesky laws of physics have been used by engineers for centuries to work out – using complex equations – the stresses the materials they’re working with are being put under. It’s a time-consuming but vital task to prevent structural failures which could be costly at best or cause loss of life at worst.

“Many generations of mathematicians and engineers have written down these equations and then figured out how to solve them on computers,” says Markus Buehler, the McAfee Professor of Engineering, director of the Laboratory for Atomistic and Molecular Mechanics, and one of the paper’s co-authors.

“But it’s still a tough problem. It’s very expensive — it can take days, weeks, or even months to run some simulations. So, we thought: Let’s teach an AI to do this problem for you.”

By employing computer vision, the AI tool developed by MIT’s researchers can generate estimates of material stresses in real-time.

A Generative Adversarial Network (GAN) was used for the breakthrough. The network was trained using thousands of paired images—one showing the material’s internal microstructure when subjected to mechanical forces, and the other labelled with colour-coded stress and strain values.

Through this game-theoretic contest between the two networks, the GAN learns the relationship between a material's appearance and the stresses it's being put under.

“From a picture, the computer is able to predict all those forces: the deformations, the stresses, and so forth,” Buehler adds.

Even more impressively, the AI can recreate issues like cracks developing in a material that can have a major impact on how it reacts to forces.

Once trained, the neural network can run on consumer-grade computer processors. This makes the AI accessible in the field and enables inspections to be carried out with just a photo.

You can find a full copy of the paper here.

(Photo by CHUTTERSNAP on Unsplash)

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

Researchers find systems to counter deepfakes can be deceived
https://www.artificialintelligence-news.com/2021/02/10/researchers-find-systems-counter-deepfakes-can-be-deceived/
Wed, 10 Feb 2021

Researchers have found that systems designed to counter the increasing prevalence of deepfakes can be deceived.

The researchers, from the University of California – San Diego, first presented their findings at the WACV 2021 conference.

Shehzeen Hussain, a UC San Diego computer engineering PhD student and co-author on the paper, said:

“Our work shows that attacks on deepfake detectors could be a real-world threat.

More alarmingly, we demonstrate that it’s possible to craft robust adversarial deepfakes even when an adversary may not be aware of the inner-workings of the machine learning model used by the detector.”

Two scenarios were tested as part of the research:

  1. The attackers have complete access to the detector model, including the face extraction pipeline and the architecture and parameters of the classification model.
  2. The attackers can only query the machine learning model to figure out the probabilities of a frame being classified as real or fake.

In the first scenario, the attack’s success rate is above 99 percent for uncompressed videos. For compressed videos, it was 84.96 percent. In the second scenario, the success rate was 86.43 percent for uncompressed and 78.33 percent for compressed videos.
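
The second, query-only scenario can be illustrated with a deliberately simplified sketch. The "detector" below is a toy logistic model, not the deepfake detectors the researchers actually attacked, and the attack uses plain finite-difference gradient estimates in the spirit of black-box methods: the attacker never sees the weights, only the returned probability.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in "deepfake detector": the attacker may only call fake_prob()
# and never sees _secret_w, mirroring the paper's second scenario.
DIM = 16
_secret_w = rng.normal(0.0, 1.0, DIM)

def fake_prob(frame):
    return 1.0 / (1.0 + np.exp(-_secret_w @ frame))

# Start from a frame the detector confidently flags as fake
# (scaled so the initial detector score is exactly 3, i.e. ~95% "fake").
frame = _secret_w * (3.0 / (_secret_w @ _secret_w))

def estimate_gradient(f, x, eps=1e-3):
    """Finite-difference gradient estimate built only from queries to f."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        step = np.zeros_like(x)
        step[i] = eps
        g[i] = (f(x + step) - f(x - step)) / (2.0 * eps)
    return g

p0 = fake_prob(frame)
for _ in range(200):
    grad = estimate_gradient(fake_prob, frame)
    frame = frame - 0.5 * grad          # nudge the frame to lower the score
    if fake_prob(frame) < 0.5:          # detector now says "real"
        break

p1 = fake_prob(frame)
```

Each query costs one forward pass of the detector, which is why limiting or noising a detector's probability outputs is one of the mitigations discussed for this class of attack.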

“We show that the current state of the art methods for deepfake detection can be easily bypassed if the adversary has complete or even partial knowledge of the detector,” the researchers wrote.

Deepfakes use a Generative Adversarial Network (GAN) to create fake imagery and even videos with increasingly convincing results. So-called ‘DeepPorn’ has been used to cause embarrassment and even blackmail.

There’s the old saying “I won’t believe it until I see it with my own eyes,” which is why convincing fake content is such a concern. As humans, we’re rather hard-wired to believe what we think we can see with our eyes.

In an age of disinformation, people are gradually learning not to believe everything they read, especially when it comes from unverified sources. Teaching people not to necessarily believe the images and video they see is going to pose a serious challenge.

Some hope has been placed on systems to detect and counter deepfakes before they cause harm. Unfortunately, the UC San Diego researchers’ findings somewhat dash those hopes.

“If the attackers have some knowledge of the detection system, they can design inputs to target the blind spots of the detector and bypass it,” said Paarth Neekhara, another co-author on the paper.

In separate research from University College London (UCL) last year, experts ranked what they believe to be the most serious AI threats. Deepfakes ranked top of the list.

“People now conduct large parts of their lives online and their online activity can make and break reputations,” said Dr Matthew Caldwell of UCL Computer Science.

One of the most high-profile cases so far was that of US House Speaker Nancy Pelosi. In 2019, a doctored video circulated on social media which made Pelosi appear drunk and slurring her words.

The video of Pelosi was likely created with the intention of being amusing rather than particularly malicious, but it shows how such videos could be used to cause disrepute and even influence democratic processes.

As part of a bid to persuade Facebook to change its policies on deepfakes, last year Israeli startup Canny AI created a deepfake of Facebook CEO Mark Zuckerberg which made it appear like he said: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

Now imagine the precise targeting of content provided by platforms like Facebook combined with deepfakes which can’t be detected… actually, perhaps don’t, it’s a rather squeaky bum thought.

NVIDIA breakthrough emulates images from small datasets for groundbreaking AI training
https://www.artificialintelligence-news.com/2020/12/07/nvidia-emulates-images-small-datasets-ai-training/
Mon, 07 Dec 2020

NVIDIA’s latest breakthrough generates new images that emulate existing small datasets, with truly groundbreaking potential for AI training.

The company demonstrated its latest AI model using a small dataset – just a fraction of the size typically used for a Generative Adversarial Network (GAN) – of artwork from the Metropolitan Museum of Art.

From the dataset, NVIDIA’s AI was able to create new images which replicate the style of the original artist’s work. These images can then be used to help train further AI models.

The AI achieved this impressive feat by applying a breakthrough neural network training technique to the popular NVIDIA StyleGAN2 model.

The technique is called Adaptive Discriminator Augmentation (ADA) and NVIDIA claims that it reduces the number of training images required by 10-20x while still getting great results.
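
At its core, ADA is described in NVIDIA's paper as a feedback controller: an overfitting statistic r_t is measured during training (for instance, the fraction of real images the discriminator scores as confidently real), and the probability p of augmenting each image is nudged up when r_t exceeds a target and down otherwise. The sketch below reproduces only that controller logic; the `overfit_stat` function is a made-up stand-in for values that would really come from the discriminator during training.

```python
# ADA-style controller: adjust augmentation probability p so that the
# measured overfitting statistic r_t settles at a target value.
TARGET, STEP = 0.6, 0.01

def overfit_stat(p):
    # Toy stand-in: more augmentation -> less discriminator overfitting.
    # In real training this would be measured from discriminator outputs.
    return 0.9 - 0.6 * p

p = 0.0
for _ in range(500):
    r_t = overfit_stat(p)
    p += STEP if r_t > TARGET else -STEP
    p = min(max(p, 0.0), 1.0)   # p is a probability, so clamp to [0, 1]

# With this toy response curve, p settles where r_t == TARGET, i.e. ~0.5.
```

Because p adapts automatically, the same training recipe works whether two thousand or two hundred thousand images are available, which is the property that makes the 10-20x data reduction claim possible.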

David Luebke, VP of Graphics Research at NVIDIA, said:

“These results mean people can use GANs to tackle problems where vast quantities of data are too time-consuming or difficult to obtain.

I can’t wait to see what artists, medical experts and researchers use it for.”

Healthcare is a particularly exciting field where NVIDIA’s research could be applied. For example, it could help to create cancer histology images to train other AI models.

The breakthrough will also help with issues around many current datasets.

Large datasets are often required for AI training but aren’t always available. Conversely, when large datasets are used, it’s difficult to ensure their content is suitable and doesn’t unintentionally lead to algorithmic bias.

Earlier this year, MIT was forced to remove a large dataset called 80 Million Tiny Images. The dataset is popular for training AIs but was found to contain images labelled with racist, misogynistic, and other unacceptable terms.

A statement on MIT’s website claims it was unaware of the offensive labels and they were “a consequence of the automated data collection procedure that relied on nouns from WordNet.”

The statement goes on to explain that the 80 million images contained in the dataset – at sizes of just 32×32 pixels – made manual inspection almost impossible, meaning the removal of all offensive images couldn’t be guaranteed.

By starting with a small dataset that can be feasibly checked manually, a technique like NVIDIA’s ADA could be used to create new images which emulate the originals and can scale up to the required size for training AI models.

In a blog post, NVIDIA wrote:

“It typically takes 50,000 to 100,000 training images to train a high-quality GAN. But in many cases, researchers simply don’t have tens or hundreds of thousands of sample images at their disposal.

With just a couple thousand images for training, many GANs would falter at producing realistic results. This problem, called overfitting, occurs when the discriminator simply memorizes the training images and fails to provide useful feedback to the generator.”

You can find NVIDIA’s full research paper here (PDF). The paper is being presented at this year’s NeurIPS conference as one of a record 28 NVIDIA Research papers accepted to the prestigious conference.

Deepfake app puts your face on GIFs while limiting data collection
https://www.artificialintelligence-news.com/2020/01/14/deepfake-app-face-gifs-data-collection/
Tue, 14 Jan 2020

A new app called Doublicat allows users to superimpose their face into popular GIFs using deep learning technology.

Doublicat uses a Generative Adversarial Network (GAN) to do its magic. The GAN is called RefaceAI and is developed by a company of the same name.

RefaceAI was previously used in a face swapping app called Reflect. Elon Musk once used Reflect to put his face on Dwayne Johnson’s body. 

The app is a lot of fun, but – after concerns about viral Russian app FaceApp – many will be wondering just how much data is being collected in return.

Doublicat’s developers are upfront in asking for consent to store your photos upon first opening the app, and this is confirmed in their privacy policy: “We may collect the photos, that you take with your camera while using our application.”

However, Doublicat says that photos are only stored on their server for 24 hours before they’re deleted. “The rest of the time your photos used in Doublicat application are stored locally on your mobile device and may be removed any time by either deleting these photos from your mobile device’s file system.”

The app also collects data about facial features, but only the vector representations of each person’s face are stored. Doublicat assures that the facial recognition data collected “is not biometric data” and is deleted from its servers within 30 calendar days.
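
To illustrate what storing only a "vector representation" of a face means (a hypothetical sketch, not RefaceAI's actual pipeline), a face image can be reduced to a fixed-length embedding and compared by similarity, with the original pixels discarded. Here a fixed random projection stands in for a learned embedding network.

```python
import numpy as np

rng = np.random.default_rng(3)

# A face "embedding": some network maps pixels to a fixed-length vector,
# and only that vector is kept. A random projection is the stand-in here.
PIXELS, EMBED = 64, 16
projection = rng.normal(0.0, 1.0, (EMBED, PIXELS))

def embed(face_pixels):
    v = projection @ face_pixels
    return v / np.linalg.norm(v)        # unit-length embedding

def cosine(u, v):
    return float(u @ v)                 # both inputs already unit length

face = rng.normal(0.0, 1.0, PIXELS)
same_face = face + rng.normal(0.0, 0.05, PIXELS)   # same person, new frame
other_face = rng.normal(0.0, 1.0, PIXELS)

sim_same = cosine(embed(face), embed(same_face))
sim_other = cosine(embed(face), embed(other_face))
```

The embedding preserves enough geometry to match frames of the same face while the pixels themselves never need to be retained, which is the distinction such privacy policies lean on.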

“In no way will Doublicat use your uploaded content for face recognition as Doublicat does not introduce the face recognition technologies or other technical means for processing biometric data for the unique identification or authentication of a user.”

The amount of data Doublicat can collect is limited compared to some alternatives. Apps such as Zao require users to 3D model their face whereas Doublicat only takes a front-facing picture.

RefaceAI is now looking to launch an app which can make deepfake videos rather than just GIFs. The move is likely to be controversial given the concerns around deepfakes and how such videos could be used for things such as political manipulation.

A fake video of Nancy Pelosi, Speaker of the United States House of Representatives, went viral last year after purportedly showing her slurring her words as if she was intoxicated. The clip shows how even a relatively unsophisticated video (it wasn’t an actual deepfake in this case) could be used to cause reputational damage and even swing votes.

A report from the NYU Stern Center for Business and Human Rights last September, covered by our sister publication MarketingTech, highlighted the various ways disinformation could be used ahead of this year’s presidential elections. One of the eight predictions is that deepfake videos will be used “to portray candidates saying and doing things they never said or did”.

Earlier this month, Facebook announced new policies around deepfakes. Any deepfake video that is designed to be misleading will be banned. The problem with the rules is they don’t cover videos altered for parody or those edited “solely to omit or change the order of words,” which will not sound encouraging to anyone wanting a firm stance against manipulation.

Doublicat is available for Android and iOS.

Amazon makes three major AI announcements during re:Invent 2019
https://www.artificialintelligence-news.com/2019/12/03/amazon-ai-announcements-reinvent-2019/
Tue, 03 Dec 2019

Amazon has kicked off its annual re:Invent conference in Las Vegas and made three major AI announcements.

During a midnight keynote, Amazon unveiled Transcribe Medical, SageMaker Operators for Kubernetes, and DeepComposer.

Transcribe Medical

The first announcement we’ll be talking about is likely to have the biggest impact on people’s lives soonest.

Transcribe Medical is designed to transcribe medical speech for primary care. The feature is aware of medical speech in addition to standard conversational diction.

Amazon says Transcribe Medical can be deployed across “thousands” of healthcare facilities to provide clinicians with secure note-taking abilities.

Transcribe Medical offers an API and can work with most microphone-equipped smart devices. The service is fully managed and sends back a stream of text in real-time.

Furthermore, and most importantly, Transcribe Medical is covered under AWS’ HIPAA eligibility and business associate addendum (BAA). This means that any customer that enters into a BAA with AWS can use Transcribe Medical to process and store personal health information legally.

SoundLines and Amgen are two partners which Amazon says are already using Transcribe Medical.

Vadim Khazan, president of technology at SoundLines, said in a statement:

“For the 3,500 health care partners relying on our care team optimisation strategies for the past 15 years, we’ve significantly decreased the time and effort required to get to insightful data.”

SageMaker Operators for Kubernetes

The next announcement is Amazon SageMaker Operators for Kubernetes.

Amazon’s SageMaker is a machine learning development platform, and this new feature lets data scientists using Kubernetes train, tune, and deploy AI models.

SageMaker Operators can be installed on Kubernetes clusters and jobs can be created using Amazon’s machine learning platform through the Kubernetes API and command line tools.

In a blog post, AWS deep learning senior product manager Aditya Bindal wrote:

“Customers are now spared all the heavy lifting of integrating their Amazon SageMaker and Kubernetes workflows. Starting today, customers using Kubernetes can make a simple call to Amazon SageMaker, a modular and fully-managed service that makes it easier to build, train, and deploy machine learning (ML) models at scale.”

Amazon says that compute resources are pre-configured and optimised, only provisioned when requested, scaled as needed, and shut down automatically when jobs complete.

SageMaker Operators for Kubernetes is generally available in AWS server regions including US East (Ohio), US East (N. Virginia), US West (Oregon), and EU (Ireland).

DeepComposer

Finally, we have DeepComposer. This one is a bit more fun for those who enjoy playing with hardware toys.

Amazon calls DeepComposer the “world’s first” machine learning-enabled musical keyboard. The keyboard features 32 keys across two octaves and is designed for developers to experiment with pretrained or custom AI models.

In a blog post, AWS AI and machine learning evangelist Julien Simon explains how DeepComposer taps a Generative Adversarial Network (GAN) to fill in gaps in songs.

After recording a short tune, developers select a model for their favourite genre, set the model’s parameters, and choose hyperparameters along with a validation sample.

Once this process is complete, DeepComposer then generates a composition which can be played in the AWS console or even shared to SoundCloud (then it’s really just a waiting game for a call from Jay-Z).

Developers itching to get started with DeepComposer can apply for a physical keyboard for when they become available, or get started now with a virtual keyboard in the AWS console.
