mit Archives - AI News
https://www.artificialintelligence-news.com/tag/mit/

MIT publishes white papers to guide AI governance
https://www.artificialintelligence-news.com/2023/12/11/mit-publishes-white-papers-guide-ai-governance/
Mon, 11 Dec 2023 16:34:19 +0000

The post MIT publishes white papers to guide AI governance appeared first on AI News.

A committee of MIT leaders and scholars has published a series of white papers aiming to shape the future of AI governance in the US. The comprehensive framework outlined in these papers seeks to extend existing regulatory and liability approaches to effectively oversee AI while fostering its benefits and mitigating potential harm.

Titled “A Framework for U.S. AI Governance: Creating a Safe and Thriving AI Sector,” the main policy paper proposes leveraging current US government entities to regulate AI tools within their respective domains.

Dan Huttenlocher, dean of the MIT Schwarzman College of Computing, emphasises the pragmatic approach of initially focusing on areas where human activity is already regulated and gradually expanding to address emerging risks associated with AI.

The framework underscores the importance of defining the purpose of AI tools, aligning regulations with specific applications and holding AI providers accountable for the intended use of their technologies.

Asu Ozdaglar, deputy dean of academics in the MIT Schwarzman College of Computing, believes having AI providers articulate the purpose and intent of their tools is crucial for determining liability in case of misuse.

Addressing the complexity of AI systems existing at multiple levels, the brief acknowledges the challenges of governing both general and specific AI tools. The proposal advocates for a self-regulatory organisation (SRO) structure to supplement existing agencies, offering responsive and flexible oversight tailored to the rapidly evolving AI landscape.

Furthermore, the policy papers call for advancements in auditing AI tools—exploring various pathways such as government-initiated, user-driven, or legal liability proceedings.

The consideration of a government-approved SRO – akin to the Financial Industry Regulatory Authority (FINRA) – is proposed to enhance domain-specific knowledge and facilitate practical engagement with the dynamic AI industry.

MIT’s involvement in AI governance stems from its recognised expertise in AI research, positioning the institution as a key contributor to addressing the challenges posed by evolving AI technologies. The release of these white papers signals MIT’s commitment to promoting responsible AI development and usage.

You can find MIT’s series of AI policy briefs here.

(Photo by Aaron Burden on Unsplash)

See also: AI & Big Data Expo: Demystifying AI and seeing past the hype

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

MIT launches cross-disciplinary program to boost AI hardware innovation
https://www.artificialintelligence-news.com/2022/03/31/mit-launches-cross-disciplinary-program-boost-ai-hardware-innovation/
Thu, 31 Mar 2022 15:31:40 +0000

MIT has launched a new academia and industry partnership called the AI Hardware Program that aims to boost research and development.

“A sharp focus on AI hardware manufacturing, research, and design is critical to meet the demands of the world’s evolving devices, architectures, and systems,” says Anantha Chandrakasan, dean of the MIT School of Engineering, and Vannevar Bush Professor of Electrical Engineering and Computer Science. 

“Knowledge-sharing between industry and academia is imperative to the future of high-performance computing.”

There are five inaugural members of the program:

  • Amazon
  • Analog Devices
  • ASML
  • NTT Research
  • TSMC

As the diversity of the inaugural members shows, the program is intended to be a cross-disciplinary effort.

“As AI systems become more sophisticated, new solutions are sorely needed to enable more advanced applications and deliver greater performance,” commented Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing and Henry Ellis Warren Professor of Electrical Engineering and Computer Science.

“Our aim is to devise real-world technological solutions and lead the development of technologies for AI in hardware and software.”

A key goal of the program is to help create more energy-efficient systems.

“We are all in awe at the seemingly superhuman capabilities of today’s AI systems. But this comes at a rapidly increasing and unsustainable energy cost,” explained Jesús del Alamo, the Donner Professor in MIT’s Department of Electrical Engineering and Computer Science.

“Continued progress in AI will require new and vastly more energy-efficient systems. This, in turn, will demand innovations across the entire abstraction stack, from materials and devices to systems and software. The program is in a unique position to contribute to this quest.”

Other key areas of exploration include:

  • Analog neural networks
  • New CMOS designs
  • Heterogeneous integration for AI systems
  • Monolithic-3D AI systems
  • Analog nonvolatile memory devices
  • Software-hardware co-design
  • Intelligence at the edge
  • Intelligent sensors
  • Energy-efficient AI
  • Intelligent Internet of Things (IIoT)
  • Neuromorphic computing
  • AI edge security
  • Quantum AI
  • Wireless technologies
  • Hybrid-cloud computing
  • High-performance computation

It’s an extensive list and an ambitious project. However, the AI Hardware Program is off to a strong start, with the inaugural members bringing significant talent and expertise in their respective fields to the table.

“We live in an era where paradigm-shifting discoveries in hardware, systems communications, and computing have become mandatory to find sustainable solutions—solutions that we are proud to give to the world and generations to come,” says Aude Oliva, Senior Research Scientist in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and Director of Strategic Industry Engagement at the MIT Schwarzman College of Computing.

The program is being co-led by Jesús del Alamo and Aude Oliva. Anantha Chandrakasan will serve as its chair.

More information about the AI Hardware Program can be found here.

(Photo by Nejc Soklič on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

MIT researchers develop AI to calculate material stress using images
https://www.artificialintelligence-news.com/2021/04/22/mit-researchers-developer-ai-calculate-material-stress-using-images/
Thu, 22 Apr 2021 09:21:13 +0000

Researchers from MIT have developed an AI tool for determining the stress a material is under through analysing images.

For centuries, engineers have used the laws of physics to work out – via complex equations – the stresses the materials they’re working with are under. It’s a time-consuming but vital task to prevent structural failures, which could be costly at best or cause loss of life at worst.

“Many generations of mathematicians and engineers have written down these equations and then figured out how to solve them on computers,” says Markus Buehler, the McAfee Professor of Engineering, director of the Laboratory for Atomistic and Molecular Mechanics, and one of the paper’s co-authors.

“But it’s still a tough problem. It’s very expensive — it can take days, weeks, or even months to run some simulations. So, we thought: Let’s teach an AI to do this problem for you.”

By employing computer vision, the AI tool developed by MIT’s researchers can generate estimates of material stresses in real-time.

A Generative Adversarial Network (GAN) was used for the breakthrough. The network was trained using thousands of paired images—one showing the material’s internal microstructure when subjected to mechanical forces, and the other labelled with colour-coded stress and strain values.

Using game theory, the GAN is able to determine the relationships between the material’s appearance and the stresses it’s being put under.
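The “game theory” referred to is the standard adversarial objective: a generator G and a discriminator D play a minimax game. As a sketch, this is the generic GAN formulation rather than necessarily the paper’s exact conditional loss:

```latex
\min_G \max_D \;
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```

In the conditional, image-to-image setting described here, G maps a microstructure image to a predicted stress field, and D learns to distinguish generated stress fields from the ground-truth ones.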

“From a picture, the computer is able to predict all those forces: the deformations, the stresses, and so forth,” Buehler adds.

Even more impressively, the AI can recreate issues such as cracks developing in a material, which can have a major impact on how it reacts to forces.

Once trained, the neural network can run on consumer-grade computer processors. This makes the AI accessible in the field and enables inspections to be carried out with just a photo.

You can find a full copy of the paper here.

(Photo by CHUTTERSNAP on Unsplash)

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

MIT has removed a dataset which leads to misogynistic, racist AI models
https://www.artificialintelligence-news.com/2020/07/02/mit-removed-dataset-misogynistic-racist-ai-models/
Thu, 02 Jul 2020 15:43:05 +0000

MIT has apologised for, and taken offline, a dataset which trains AI models with misogynistic and racist tendencies.

The dataset in question is called 80 Million Tiny Images and was created in 2008. Designed for training AIs to detect objects, the dataset is a huge collection of pictures which are individually labelled based on what they feature.

Machine-learning models are trained using these images and their labels. Fed an image of a street, an AI trained on such a dataset could tell you about the things it contains, such as cars, streetlights, pedestrians, and bikes.

Two researchers – Vinay Prabhu, chief scientist at UnifyID, and Abeba Birhane, a PhD candidate at University College Dublin in Ireland – analysed the images and found thousands of concerning labels.

MIT’s training set was found to label women as “bitches” or “whores,” and people from BAME communities with the kind of derogatory terms I’m sure you don’t need me to write. The Register notes the dataset also contained close-up images of female genitalia labelled with the C-word.

The Register alerted MIT to the concerning issues found by Prabhu and Birhane with the dataset and the college promptly took it offline. MIT went a step further and urged anyone using the dataset to stop using it and delete any copies.

A statement on MIT’s website claims it was unaware of the offensive labels and they were “a consequence of the automated data collection procedure that relied on nouns from WordNet.”

The statement goes on to explain that because the dataset’s 80 million images measure just 32×32 pixels, manual inspection is all but impossible and the removal of every offensive image cannot be guaranteed.

“Biases, offensive and prejudicial images, and derogatory terminology alienates an important part of our community – precisely those that we are making efforts to include. It also contributes to harmful biases in AI systems trained on such data,” wrote Antonio Torralba, Rob Fergus, and Bill Freeman from MIT.

“Additionally, the presence of such prejudicial images hurts efforts to foster a culture of inclusivity in the computer vision community. This is extremely unfortunate and runs counter to the values that we strive to uphold.”

You can find a full pre-print copy of Prabhu and Birhane’s paper here (PDF).

(Photo by Clay Banks on Unsplash)

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

MIT’s AI paints a dire picture if social distancing is relaxed too soon
https://www.artificialintelligence-news.com/2020/04/17/mit-ai-social-distancing-relaxed-too-soon/
Fri, 17 Apr 2020 12:37:25 +0000

According to an AI system built by MIT to predict the spread of COVID-19, relaxing social distancing rules too early would be catastrophic.

Social distancing measures around the world appear to be having the desired effect. In many countries, the “curve” appears to be flattening with fewer deaths and hospital admissions per day.

No healthcare system in the world is prepared to handle a vast portion of its population being hospitalised at once. Even once-trivial ailments can become deadly if people cannot access the care they need. That’s why maintaining social distancing is vital even as lockdown measures ease, at least until a vaccine is found.

With the curve now flattening, the conversation is switching to how lockdowns can be lifted safely. Contact-tracing apps, which keep track of everyone an individual passes and alert them to self-isolate if they’ve been near anyone subsequently diagnosed with COVID-19, are expected to be key to easing measures.

MIT’s AI corroborates what many health officials are showing in their figures: that we should now be seeing new cases of COVID-19 levelling off in many countries.

“Our results unequivocally indicate that the countries in which rapid government interventions and strict public health measures for quarantine and isolation were implemented were successful in halting the spread of infection and prevent it from exploding exponentially,” the researchers wrote.

However, the situation could be similar to Singapore where lockdown measures almost completely flattened the curve before an early return to normal resulted in a massive resurgence in cases.

“Relaxing or reversing quarantine measures right now will lead to an exponential explosion in the infected case count, thus nullifying the role played by all measures implemented in the US since mid-March 2020.”

The team from MIT trained their AI using public data on COVID-19’s spread and how each government implemented various measures to contain it. It was trained on known data from January to March, and was then found to accurately predict the spread through April so far.
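MIT’s system is a trained neural network; purely as a much simpler illustration of the same fit-then-forecast idea, the sketch below fits a logistic growth curve to early case counts and extrapolates forward. The data and the crude grid-search fit are invented for illustration and are not the paper’s method:

```python
import math

def logistic(t, K, r, t0):
    """Logistic growth curve: cumulative cases saturate at K."""
    return K / (1.0 + math.exp(-r * (t - t0)))

def fit_logistic(days, cases):
    """Crude grid-search least-squares fit (illustrative only)."""
    best = None
    step = max(cases)
    for K in range(step, 20 * step, step):    # candidate saturation levels
        for r10 in range(1, 10):              # growth rate 0.1 .. 0.9
            r = r10 / 10.0
            for t0 in range(0, 60):           # candidate inflection day
                err = sum((logistic(t, K, r, t0) - c) ** 2
                          for t, c in zip(days, cases))
                if best is None or err < best[0]:
                    best = (err, K, r, t0)
    return best[1:]

# Synthetic "observed" cumulative counts for days 0..9 (made-up data).
days = list(range(10))
cases = [round(logistic(t, 5000, 0.4, 12)) for t in days]

K, r, t0 = fit_logistic(days, cases)
forecast = logistic(20, K, r, t0)  # extrapolate the epidemic to day 20
```

Fitting on January-to-March data and then checking the April prediction is this same train-then-validate pattern, just with a vastly more expressive model.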

While the researchers’ work focused on COVID-19 epidemics in the US, Italy, South Korea, and Wuhan, there’s no reason to think that relaxing social distancing rules anywhere else in the world at this stage would be any less dire.

You can find the full paper from MIT here.

(Photo by engin akyurt on Unsplash)

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

MIT researchers use AI to discover a welcome new antibiotic
https://www.artificialintelligence-news.com/2020/02/21/mit-researchers-use-ai-to-discover-a-welcome-new-antibiotic/
Fri, 21 Feb 2020 15:49:32 +0000

A team of MIT researchers have used AI to discover a welcome new antibiotic to help in the fight against increasing resistance.

Using a machine learning algorithm, the MIT researchers were able to discover a new antibiotic compound which did not develop any resistance during a 30-day treatment period on mice.

The algorithm was trained using around 2,500 molecules – including about 1,700 FDA-approved drugs and a set of 800 natural products – to seek out chemical features that make molecules effective at killing bacteria. 

After the model was trained, the researchers tested it on a library of about 6,000 compounds known as the Broad Institute’s Drug Repurposing Hub.
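The MIT model is a deep neural network over molecular structures; the toy sketch below illustrates only the train-then-screen workflow, using a hand-rolled logistic-regression scorer over binary “fingerprint” bits. Every compound name, feature, and label here is invented for illustration:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(fingerprints, labels, epochs=200, lr=0.5):
    """Fit logistic regression with plain stochastic gradient descent."""
    w = [0.0] * len(fingerprints[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(fingerprints, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            g = p - y  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def screen(library, w, b):
    """Rank unseen compounds by predicted probability of activity."""
    scored = [(sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b), name)
              for name, x in library]
    return sorted(scored, reverse=True)

# Invented training set: fingerprint bit 0 correlates with activity.
train_X = [[1, 0, 1], [1, 1, 0], [1, 0, 0], [0, 1, 1], [0, 0, 1], [0, 1, 0]]
train_y = [1, 1, 1, 0, 0, 0]
w, b = train(train_X, train_y)

# Invented screening library (a stand-in for the 6,000-compound hub).
library = [("cmpd-A", [1, 0, 1]), ("cmpd-B", [0, 1, 0]), ("cmpd-C", [1, 1, 1])]
ranking = screen(library, w, b)  # best candidates first
```

The value of the real system lies in the same separation of phases: train once on labelled molecules, then score an arbitrarily large unlabelled library cheaply.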

“We wanted to develop a platform that would allow us to harness the power of artificial intelligence to usher in a new age of antibiotic drug discovery,” explains James Collins, the Termeer Professor of Medical Engineering and Science in MIT’s Institute for Medical Engineering and Science (IMES) and Department of Biological Engineering.

“Our approach revealed this amazing molecule which is arguably one of the more powerful antibiotics that has been discovered.”

Antibiotic resistance is terrifying. Researchers have already discovered bacteria that are immune to current antibiotics, and we’re in real danger of illnesses that are now simple to treat becoming deadly once more.

Data from the Centers for Disease Control and Prevention (CDC) already indicates that antibiotic-resistant bacteria and antimicrobial-resistant fungi cause more than 2.8 million infections and 35,000 deaths a year in the United States alone.

“We’re facing a growing crisis around antibiotic resistance, and this situation is being generated by both an increasing number of pathogens becoming resistant to existing antibiotics, and an anaemic pipeline in the biotech and pharmaceutical industries for new antibiotics,” Collins says.

The recent coronavirus outbreak leaves many patients with pneumonia. With antibiotics, pneumonia is rarely fatal nowadays unless a patient has a substantially weakened immune system. The death toll from coronavirus would be far higher if antibiotic resistance were to set healthcare back to the 1930s.

MIT’s researchers claim their AI is able to check more than 100 million chemical compounds in a matter of days to pick out potential antibiotics that kill bacteria. This rapid checking reduces the time it takes to discover new lifesaving treatments and begins to swing the odds back in our favour.

The newly discovered molecule is called halicin – after HAL, the AI in the film 2001: A Space Odyssey – and has been found to be effective against E. coli. The team now hopes to develop halicin for human use (a separate machine learning model has already indicated that it should have low toxicity to humans, so early signs are positive).

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

MIT software shows how NLP systems are snookered by simple synonyms
https://www.artificialintelligence-news.com/2020/02/12/mit-software-shows-how-nlp-systems-are-snookered-by-simple-synonyms/
Wed, 12 Feb 2020 11:48:11 +0000

Here’s an example of how artificial intelligence can still seriously lag behind some human capabilities: tests have shown how natural language processing (NLP) systems can be tricked into misunderstanding text by merely swapping one word for a synonym.

A research team at MIT developed software, called TextFooler, which looked for words which were most crucial to an NLP classifier and replaced them. The team offered an example:

“The characters, cast in impossibly contrived situations, are totally estranged from reality”, and
“The characters, cast in impossibly engineered circumstances, are fully estranged from reality”

No problem for a human to decipher, yet the results on the AIs were startling. For instance, BERT, Google’s neural net, was worse by a factor of up to seven at identifying whether reviews on Yelp were positive or negative.

Douglas Heaven, writing a roundup of the study for MIT Technology Review, explained why the research was important. “We have seen many examples of adversarial attacks, most often with image recognition systems, where tiny alterations to the input can flummox an AI and make it misclassify what it sees,” Heaven wrote. “TextFooler shows that this style of attack also breaks NLP, the AI behind virtual assistants – such as Siri, Alexa and Google Home – as well as other language classifiers like spam filters and hate-speech detectors.”
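TextFooler ranks words by their influence on the target model and greedily swaps in synonyms until the prediction flips. The toy sketch below runs the same loop against a bag-of-words sentiment scorer instead of a neural network; the lexicon, synonym table, and sentence are invented for illustration. Note the toy model happens to score “fine” slightly negative; it is precisely this kind of brittleness the attack exploits:

```python
# Invented sentiment lexicon and synonym table, for illustration only.
LEXICON = {"great": 2.0, "fine": -0.5, "bland": -1.0, "awful": -2.0}
SYNONYMS = {"great": ["fine"], "awful": ["bland"]}

def score(words):
    """Bag-of-words sentiment score; positive means a positive review."""
    return sum(LEXICON.get(w, 0.0) for w in words)

def attack(words):
    """Greedily swap the most influential words for synonyms until the
    predicted sentiment flips (in the spirit of TextFooler)."""
    originally_positive = score(words) > 0
    words = list(words)
    # Rank positions by how strongly each word sways the classifier.
    by_influence = sorted(range(len(words)),
                          key=lambda i: -abs(LEXICON.get(words[i], 0.0)))
    for i in by_influence:
        for synonym in SYNONYMS.get(words[i], []):
            original = words[i]
            words[i] = synonym
            if (score(words) > 0) != originally_positive:
                return words      # prediction flipped; attack succeeded
            words[i] = original   # no flip; revert and keep trying
    return words

adversarial = attack("a great film".split())  # reads much the same to a human
```

Against a real neural classifier, word importance is measured by probing the model rather than reading off a lexicon, but the greedy swap-and-check loop is the same.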

This publication has explored various methods where AI technologies are outstripping human efforts, such as detecting breast cancer, playing StarCraft, and public debating. In other fields, resistance – however futile – remains. In December it was reported that human drivers were still overall beating AIs at drone racing, although the chief technology officer of the Drone Race League predicted that 2023 would be the year when AI took over.

The end goal for software such as TextFooler, the researchers hope, is to make NLP systems more robust.

Postscript: For those reading from outside the British Isles, China, and certain Commonwealth countries – to ‘snooker’ someone, deriving from the sport of the same name, is to ‘leave one in a difficult position.’ The US equivalent is ‘behind the eight-ball’, although that would have of course thrown the headline out.

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

Deepfake shows Nixon announcing the moon landing failed
https://www.artificialintelligence-news.com/2020/02/06/deepfake-nixon-moon-landing-failed/
Thu, 06 Feb 2020 16:42:59 +0000


In the latest creepy deepfake, former US President Nixon is shown to announce that the first moon landing failed.

Nixon was a divisive figure, but certainly a recognisable one. The video shows Nixon in the Oval Office, surrounded by flags, giving a presidential address to an eagerly awaiting world.

However, unlike the actual first moon landing – unless you’re a subscriber to conspiracy theories – this one failed.

“These brave men, Neil Armstrong and Edwin Aldrin, know that there is no hope for their recovery,” Nixon says in his trademark growl. “But they also know that there is hope for mankind in their sacrifice.”

What makes the video more haunting is that the speech itself is real. Although never broadcast, it was written for Nixon by speechwriter William Safire in the eventuality the moon landing did fail.

The deepfake was created by a team from MIT’s Center for Advanced Virtuality and put on display at the IDFA documentary festival in Amsterdam.

In order to recreate Nixon’s famous voice, the MIT team partnered with technicians from Ukraine and Israel and used advanced machine learning techniques.

We’ve covered many deepfakes here on AI News. While many are amusing, there are serious concerns that deepfakes could be used for malicious purposes such as blackmail or manipulation.

Ahead of the US presidential elections, some campaigners have worked to increase the awareness of deepfakes and get social media platforms to help tackle any dangerous videos.

Back in 2018, Speaker Nancy Pelosi was the victim of a deepfake that went viral across social media which made her appear drunk and slurring her words. Pelosi criticised Facebook’s response, or lack thereof, and later told California’s KQED: “I think they have proven — by not taking down something they know is false — that they were willing enablers of the Russian interference in our election.”

As part of a bid to persuade the social media giant to change its policies on deepfakes, Israeli startup Canny AI created a deepfake of Facebook CEO Mark Zuckerberg – making it appear like he said: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

Last month, Facebook pledged to crack down on deepfakes ahead of the US presidential elections. However, the new rules don’t cover videos altered for parody or those edited “solely to omit or change the order of words,” which will not sound encouraging to those wanting a firm stance against potential voter manipulation.

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

Microsoft and MIT develop AI to fix driverless car ‘blind spots’
https://www.artificialintelligence-news.com/2019/01/28/microsoft-mit-develop-ai-driverless-car/
Mon, 28 Jan 2019 16:18:30 +0000

Microsoft and MIT have partnered on a project to fix so-called virtual ‘blind spots’ which lead driverless cars to make errors.

Roads, especially when shared with human drivers, are unpredictable places. Training a self-driving car for every possible situation is a monumental task.

The AI developed by Microsoft and MIT compares the action taken by humans in a given scenario to what the driverless car’s own AI would do. Where the human decision is the better one, the vehicle’s behaviour is updated to match in similar future situations.

Ramya Ramakrishnan, an author of the report, says:

“The model helps autonomous systems better know what they don’t know.

Many times, when these systems are deployed, their trained simulations don’t match the real-world setting [and] they could make mistakes, such as getting into accidents.

The idea is to use humans to bridge that gap between simulation and the real world, in a safe way, so we can reduce some of those errors.”

For example, if an emergency vehicle is approaching then a human driver should know to let it pass if safe to do so. These situations can get complex depending on the surroundings.

On a country road, allowing the vehicle to pass could mean edging onto the grass. The last thing you, or the emergency services, want a driverless car to do is to handle all country roads the same and swerve off a cliff edge.

Humans can either ‘demonstrate’ the correct approach in the real world, or ‘correct’ by sitting at the wheel and taking over if the car’s actions are incorrect. A list of situations is compiled, along with labels indicating whether the car’s actions were deemed acceptable or unacceptable.
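The feedback loop described above could be sketched roughly as follows. This is an illustrative sketch only, and the situation and action names are hypothetical, not taken from the Microsoft/MIT work:

```python
# Illustrative sketch of the 'demonstrate'/'correct' feedback loop:
# the policy's action is labelled acceptable (1) when it matches what
# the human did in the same situation, and unacceptable (0) otherwise.
# Situation and action names below are hypothetical examples.
from collections import defaultdict

feedback = defaultdict(list)  # situation -> list of 0/1 labels


def record(situation, policy_action, human_action):
    """Label the policy's choice by comparing it to the human's."""
    feedback[situation].append(1 if policy_action == human_action else 0)


# The car's policy kept its speed while a human driver slowed for an ambulance,
# so that action is labelled unacceptable; a matching action is acceptable.
record("ambulance_behind", policy_action="maintain_speed", human_action="slow_down")
record("ambulance_behind", policy_action="slow_down", human_action="slow_down")
```

Each situation thus accumulates a list of noisy acceptable/unacceptable votes rather than a single verdict, which is what makes the probabilistic aggregation step below necessary.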

The researchers have ensured the driverless car’s AI does not regard an action as 100 percent safe simply because its outcomes have been safe so far. Using the Dawid-Skene machine learning algorithm, the AI applies probability calculations to spot patterns in the feedback and determine whether a situation is truly safe or still leaves the potential for error.

We’re yet to reach a point where the technology is ready for deployment; thus far, the scientists have only tested it with video games. However, the approach offers a lot of promise for helping to ensure driverless car AIs can one day safely respond to all situations.


The post Microsoft and MIT develop AI to fix driverless car ‘blind spots’ appeared first on AI News.

Researchers get public to decide who to save in a driverless car crash https://www.artificialintelligence-news.com/2018/10/25/researchers-save-driverless-car-crash/ https://www.artificialintelligence-news.com/2018/10/25/researchers-save-driverless-car-crash/#comments Thu, 25 Oct 2018 16:40:43 +0000 https://d3c9z94rlb3c1a.cloudfront.net/?p=4130 Researchers have conducted an experiment intending to solve the ethical conundrum of who to save if a fatal driverless car crash is unavoidable. A driverless car AI will need to be programmed with decisions such as who to prioritise if it came down to choices such as between swerving and hitting a child on the... Read more »

Researchers have conducted an experiment intending to solve the ethical conundrum of who to save if a fatal driverless car crash is unavoidable.

A driverless car AI will need to be programmed with decisions such as who to prioritise if it came down to choices such as between swerving and hitting a child on the left, or an elderly person on the right.

It may seem a fairly simple choice for some – children have their whole lives ahead of them, while the elderly have fewer years left. However, counterarguments could be made: younger people often have a greater chance of recovering from injuries, so both people could ultimately survive.

This is a fairly simple example, but matters could get even more controversial when factoring in choices such as between someone with a criminal record and a law-abiding citizen.

No single person should be made to take such decisions; nobody wants to be accountable for explaining to a family member why their loved one was chosen to die over another.

In their paper, the researchers wrote:

“Never in the history of humanity have we allowed a machine to autonomously decide who should live and who should die, in a fraction of a second, without real-time supervision.

We are going to cross that bridge any time now, and it will not happen in a distant theatre of military operations; it will happen in that most mundane aspect of our lives, everyday transportation.

Before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers that will regulate them.”

The researchers suggest the best way forward is to establish what the majority feel should happen in such accidents, creating collective accountability.

Researchers from around the world ran an experiment called the Moral Machine, in which millions of participants from more than 200 countries answered hypothetical questions of this kind.

Here are the results:

In the driverless car world, you’re relatively safe if you’re not:

    • A passenger
    • Male
    • Unhealthy
    • Considered poor / low status
    • Unlawful
    • Elderly
    • An animal

If you’re any of these, I suggest you start taking extra care crossing the road.

The research was conducted by researchers from Harvard University and MIT in the US, University of British Columbia in Canada, and the Université Toulouse Capitole in France.


The post Researchers get public to decide who to save in a driverless car crash appeared first on AI News.
