robots Archives - AI News
https://www.artificialintelligence-news.com/tag/robots/

The market size in the AI market is projected to reach $184bn in 2024
https://www.artificialintelligence-news.com/2024/05/14/the-market-size-in-the-ai-market-is-projected-to-reach-184bn-in-2024/
Tue, 14 May 2024

It is easy to get excited about breakthroughs in artificial intelligence and the seismic changes they promise for the future. However, as those interested in AI know, the technology is already embedded in so many of our day-to-day transactions that it is transforming the ways in which we work, rest, and play.

For decades, the media has jumped on big tech stories, including human-like robots that will do all the basic household chores for us. As far back as 1966, we were introduced to Mabel the Robot Housemaid, who was going to be doing all those tasks by 1976. While that never became a reality, AI has seamlessly integrated itself into our lives, and while there might not be any Mabels, many of us have assistants called Alexa, Siri, and Cortana.

These robots may not be able to do the ironing for us, but they can turn the lights on and off, program the oven, or control our heating systems when we are not around. Rather than taking over all the physical work, they help us in the background and are integrated into our homes. According to today's experts, by 2033 robots will be doing almost 40% of our housework. This sounds similar to the 1966 claims, but it is backed up by data from Japan's Ochanomizu University and the UK's University of Oxford, where 65 AI experts were asked to predict which everyday tasks will become automated within the next five to ten years.

The study looked at the question: "What kind of futures are imagined for unpaid work? If robots take our jobs, will they at least also take out the trash for us?" It suggests that the time people spend doing housework will decrease by 46% in the next decade, with grocery shopping the task most likely to become automated: the experts predict that by 2033 almost 60% of our grocery shopping will be performed by AI. However, it is unlikely that machines will be trusted with caring responsibilities such as looking after the elderly or children. Even if AI had the technical ability to undertake these tasks, the study's experts believe delegating childcare to machines would face acceptability issues due to potential developmental impacts on the child and privacy implications.

So, if AI is not looking after our children or doing the ironing, what tasks is it doing? Given the market size, this sector is a massive part of the global economy. The most recent statistics predict it will be worth US$184bn in 2024. However, that is small fry compared with forecasts for 2030: the market is expected to grow at an annual rate of almost 29% and be worth a staggering US$826bn by the end of the decade.
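As a quick sanity check, the two figures are consistent: the growth rate implied by US$184bn in 2024 and US$826bn in 2030 works out at roughly 28.5% per year (the exact rate below is an inference from the two endpoints, not a number stated in the forecast):

```python
# Sanity check: does ~28.5% annual growth take US$184bn (2024) to ~US$826bn (2030)?
start_bn, cagr, years = 184.0, 0.2846, 6   # 28.46% is the implied compound annual growth rate
projection = start_bn * (1 + cagr) ** years
print(round(projection))  # 827 -- consistent with the US$826bn forecast
```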

Here are some areas where AI plays an integral part in our lives, so much so that we almost forget how we functioned before.

We open our phones with Face ID, and it is AI that enables this functionality. Using biometrics, the device sees you in 3D, capturing images of your face with 30,000 invisible infrared dots. Machine learning algorithms then compare the scan of your face with what is stored on file to determine whether it is you or an intruder trying to access your phone. Apple claims that the chance of fooling its Face ID is one in a million.
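Under the hood, face matching of this kind typically boils down to comparing numerical "embeddings" of faces: unlock only if a fresh scan's embedding is close enough to the one stored at enrolment. The sketch below is a minimal illustration of that idea, not Apple's actual algorithm; the four-dimensional vectors and the 0.9 threshold are invented for the example (real systems use much larger learned embeddings):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def unlocks(enrolled, scan, threshold=0.9):
    """Unlock only if the fresh scan is close enough to the enrolled face."""
    return cosine_similarity(enrolled, scan) >= threshold

enrolled = [0.12, 0.85, 0.33, 0.47]   # embedding captured at enrolment (illustrative)
owner    = [0.11, 0.86, 0.31, 0.48]   # fresh scan of the owner
stranger = [0.90, 0.10, 0.75, 0.05]   # scan of someone else

print(unlocks(enrolled, owner))     # True
print(unlocks(enrolled, stranger))  # False
```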

Once our phones are open, there are many places we might choose to go. Some people check social media or catch up on the news; others use their phones for entertainment, such as online games or an online casino. AI and algorithms are integral to the functioning of these sites, involved in everything from customer service to verifying payments and paying out winnings. Players get a personalised experience: rather than trawling through all the latest releases, the system learns which games they have enjoyed before and offers them something similar to play next.

AI also curates social media feeds. What a user sees is personalised because the algorithm has learned which posts they react to based on their history. It also drives friend suggestions and news recommendations. The next step for AI is to better recognise and filter out misinformation and to prevent cyberbullying. Getting rid of fake news is all the more crucial given that 2024 is a year of general elections around the globe.

We use spell check and other tools like Grammarly when we write on our computers and phones, whether to send emails, messages, or reports. These help us create error-free messages by using natural language processing and suggestions. More AI is involved when we send and receive messages with spam filters, blocking some emails and sending them to our junk boxes. In addition, anti-virus software employs machine learning to protect our email accounts and computers.
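The machine-learning idea behind spam filtering can be illustrated with a toy word-weight scorer: each word nudges a message's score towards or away from "spam". This is a deliberately simplified sketch; real filters learn their weights from millions of messages (for example with naive Bayes or neural models), and every word and weight below is invented for the example:

```python
# Toy word-weight spam scorer -- a drastically simplified sketch of the idea
# behind learned spam filters (all words and weights here are invented).
SPAM_WEIGHTS = {"winner": 3.0, "free": 2.0, "prize": 2.5, "meeting": -2.0, "report": -1.5}

def spam_score(message):
    """Sum the weight of each known word; unknown words contribute nothing."""
    return sum(SPAM_WEIGHTS.get(w, 0.0) for w in message.lower().split())

def is_spam(message, threshold=2.0):
    """Flag the message as spam when its score clears the threshold."""
    return spam_score(message) > threshold

print(is_spam("winner you have a free prize"))     # True  -> junk box
print(is_spam("quarterly report before meeting"))  # False -> inbox
```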

While these examples all happen behind the scenes, one of the most notable changes in recent years is our use of digital voice assistants. Whether we want to get directions or find out what the weather will be like, Siri, Alexa, Google Home, and Cortana accompany us wherever we go. They have become indispensable for many people who use them as a co-pilot when driving and a general source of endless information around the home. These assistants use AI-driven natural language processing and generation to answer our questions. They are increasingly programmed to give 'human-like' responses and can even sound offended at times.

Since 1966, we have dreamed of robots doing the housework, and while that is not a reality, our homes are becoming increasingly ‘smart’. We have thermostats that allow us to control the heating from our phones and fridges that can create shopping lists based on what is no longer in the refrigerator. They can also recommend what you might like to buy as an accompaniment based on what is in your fridge, such as wine or condiments.

There is still no sign of Mabel, but maybe she will put in an appearance one of these days.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

National Robotarium pioneers AI and telepresence robotic tech for remote health consultations
https://www.artificialintelligence-news.com/2021/09/20/national-robotarium-pioneers-ai-and-telepresence-robotic-tech-for-remote-health-consultations/
Mon, 20 Sep 2021

The National Robotarium, hosted by Heriot-Watt University in Edinburgh, has unveiled an AI-powered telepresence robotic solution for remote health consultations.

Using the solution, health practitioners would be able to assess a person’s physical and cognitive health from anywhere in the world. Patients could access specialists no matter whether they’re based in the UK, India, the US, or anywhere else.

Iain Stewart, UK Government Minister for Scotland, said:

“It was fascinating to visit the National Robotarium and see first-hand how virtual teleportation technology could revolutionise healthcare and assisted living.

Backed by £21 million UK Government City Region Deal funding, this cutting-edge research centre is a world leader for robotics and AI, bringing jobs and investment to the area.”

The project is part of the National Robotarium’s assisted living lab which explores how to improve the lives of people living with various conditions.

Dr Mario Parra Rodriguez, an expert in cognitive assessment from the University of Strathclyde, is working on the project and believes the solution will enable more regular monitoring and health assessments that are critical for people living with conditions like Alzheimer’s disease and other cognitive impairments.

“The experience of inhabiting a distant robot through which I can remotely guide, assess, and support vulnerable adults affected by devastating conditions such as Alzheimer’s disease, grants me confidence that challenges we are currently experiencing to mitigate the impact of such diseases will soon be overcome through revolutionary technologies,” commented Rodriguez.

“The collaboration with the National Robotarium, hosted by Heriot-Watt University is combining experience from various disciplines to deliver technologies that can address the ever-changing needs of people affected by dementia.”

Dr Mauro Dragone is leading the research and explains how AI was vital for the project:

“Our prototype makes use of machine learning and artificial intelligence techniques to monitor smart home sensors to detect and analyse daily activities. We are programming the system to use this information to carry out a thorough, non-intrusive assessment of an older person’s cognitive abilities, as well as their ability to live independently.

Combining the system with a telepresence robot brings two major advances: Firstly, robots can be equipped with powerful sensors and can also operate in a semi-autonomous mode, enriching the capability of the system to deliver quality data, 24 hours a day, seven days a week. 

Secondly, telepresence robots keep clinicians and carers in the loop. These professionals can benefit from the data provided by the project’s intelligent sensing system, but they can also control the robot directly, over the Internet, to interact with the individual under their care. They can see through the eyes of the robot, move around the room or between rooms and operate its arms and hands to carry out more complex assessment protocols. They can also respond to emergencies and provide assistance when needed.”

Earlier this month, the UK government announced tax rises to fund social care, give people the dignity they deserve, and help the NHS recover from the pandemic.

However, some believe further rises are on the horizon. Innovative technologies could help to reduce costs while maintaining or improving care.

“Blackwood is always looking for solutions that help our customers to live more independently whilst promoting choice and control for the individual. Robotics has the potential to improve independent living, provide new levels of support, and integrate with our digital housing and care system CleverCogs,” said Mr Colin Foskett, Head of Innovation at Blackwood Homes and Care.

“Our partnership with the National Robotarium and the design of the assisted living lab ensures that our customers are involved in the co-design and co-creation of new products and services, increasing our investment in innovation and in the future leading to new solutions that will aid independent living and improve outcomes for our customers.”

Our sister publication, IoT News, reported on the construction of the £22.4 million National Robotarium earlier this year—including some of the facilities, equipment, and innovative projects that it hosts.

Find out more about Digital Transformation Week North America, taking place on 9-10 November 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.

Microsoft chief Brad Smith warns that killer robots are ‘unstoppable’
https://www.artificialintelligence-news.com/2019/09/23/microsoft-brad-smith-killer-robots-unstoppable/
Mon, 23 Sep 2019

Microsoft chief Brad Smith issued a warning over the weekend that killer robots are ‘unstoppable’ and a new digital Geneva Convention is required.

Most sci-fi fans will think of Terminator when they hear of killer robots. In the classic film series, a rogue military AI called Skynet gained self-awareness after spreading to millions of servers around the world. Concluding that humans would attempt to shut it down, Skynet sought to exterminate all of mankind in the interest of self-preservation.

While it was once just a popcorn flick, Terminator now offers a dire warning of what could be if precautions are not taken.

As with most technologies, AI will find itself increasingly used for military applications. The ultimate goal for general artificial intelligence is to self-learn. Combine both, and Skynet no longer seems the wild dramatisation that it once did.

Speaking to The Telegraph, Smith seems to agree. He points to developments in the US, China, the UK, Russia, Israel, South Korea, and elsewhere, where autonomous weapon systems are being developed.

Wars could one day be fought on battlefields entirely with robots, a scenario that has many pros and cons. On the one hand, it reduces the risk to human troops. On the other, it makes declaring war easier and runs the risk of machines going awry.

Many technologists have likened the race to militarise AI to the nuclear arms race. In a pursuit to be the first and best, dangerous risks may be taken.

There is still no clear answer as to who is responsible for deaths or injuries caused by an autonomous machine – the manufacturer, the developer, or an overseer. This has also been a subject of much debate with regard to how insurance will work with driverless cars.

With military applications, many technologists have called for AI to never make a combat decision – especially one that would result in fatalities – on its own. While AI can make recommendations, a final decision must be made by a human.

Preventing unimaginable devastation

The story of Soviet lieutenant colonel Stanislav Petrov in 1983 offers a warning of how a machine without human oversight could cause unimaginable devastation.

Petrov's computers reported that an intercontinental missile had been launched by the US towards the Soviet Union. Soviet strategy in such a scenario called for an immediate and compulsory nuclear counter-attack. Trusting his instinct that the computer was wrong, Petrov decided against reporting the launch up the chain of command – and he was right.

Had the decision in 1983 been left solely to the computer, a nuclear missile would have been launched and met with retaliatory launches from the US and its allies.

Smith wants to see a new digital Geneva Convention in order to bring world powers together in agreement over acceptable norms when it comes to AI. “The safety of civilians is at risk today. We need more urgent action, and we need it in the form of a digital Geneva Convention, rules that will protect civilians and soldiers.” 

Many companies – including thousands of Google employees, following backlash over a Pentagon contract to develop AI tech for drones – have pledged not to develop AI technologies for harmful use.

Smith has launched a new book called Tools and Weapons. At the launch, Smith also called for stricter rules over the use of facial recognition technology. “There needs to be a new law in this space, we need regulation in the world of facial recognition in order to protect against potential abuse.”

Last month, a report from Dutch NGO PAX said leading tech firms are putting the world ‘at risk’ of killer AI. Microsoft, along with Amazon, was ranked among the highest risk. Microsoft itself warned investors back in February that its AI offerings could damage the company’s reputation. 

“Why are companies like Microsoft and Amazon not denying that they’re currently developing these highly controversial weapons, which could decide to kill people without direct human involvement?” said Frank Slijper, lead author of PAX’s report.

A global campaign simply titled the Campaign to Stop Killer Robots now includes 113 NGOs across 57 countries and has doubled in size over the past year.

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

NHS report suggests AI will give docs more patient time
https://www.artificialintelligence-news.com/2019/02/11/nhs-report-ai-docs-patient-time/
Mon, 11 Feb 2019

A report from the NHS suggests the impending technological ‘revolution’ in healthcare will increase the amount of time doctors can spend with patients.

NHS doctors are overburdened – a problem only made worse by a growing and ageing population and insufficient funding.

The report was led by US academic Eric Topol and calls for a reskilling of NHS staff to harness new digital skills. AI and robotics can reduce the burden on healthcare professionals, but only if they’re utilised effectively.

Doctors will not be replaced by robots but instead will have their abilities “enhanced” to improve care. Around 90 percent of all NHS jobs are predicted to require digital skills within the next 20 years.

The use of virtual assistants such as those offered by Apple, Google, and Amazon are expected to be among the closest innovations to being ready.

Assistants can help with checking whether symptoms require urgent care, a GP appointment, or whether a doctor needs to be seen at all. This would help prevent the misuse of A&E by people with trivial ailments or the booking of GP appointments for otherwise healthy adults with things such as a common cold.

Virtual assistants could also be used to book appointments and send reminders, helping to reduce the number of missed appointments and freeing slots that someone else could have needed.

Yet another concept is a 'mental health triage bot' that engages in conversations while analysing text and voice for suicidal ideation and emotion. This could help reduce the approximately 6,000 suicides per year in the UK.

The main concern preventing uptake is the potential for errors, which in healthcare could be fatal.

AI News previously reported on the findings of NHS consultant ‘Dr Murphy’ who reached out to us after using ‘GP at Hand’ from Babylon Health, an AI-powered service promoted by health secretary Matt Hancock.

Dr Murphy has since documented many flawed experiences with the service. In one example, a "48yr old obese 30/day male smoker develop[ing] sudden onset central chest pain & sweating" was advised to book a GP appointment – symptoms for which anyone with common sense would say to call 999 urgently.

That example could have meant the difference between life and death, and it shows that, while such a system could one day provide huge benefits, it must first undergo rigorous testing.

Commenting on the report, Hancock said:

“Our health service is on the cusp of a technology revolution and our brilliant staff will be in the driving seat when it happens.

Technology must be there to enhance and support clinicians. It has the potential to make working lives easier for dedicated NHS staff and free them up to use their medical expertise and do what they do best: care for patients.”

In the NHS report, it’s claimed the use of virtual assistants could save 5.7 million hours of GPs’ time across England per year.

Further AI use cases include speeding up the interpretation of scans, improving accuracy, and enabling treatment to begin sooner. We’ve created a dedicated ‘healthcare’ category on AI News highlighting the incredible advances in this area.

When it comes to robotics, assistance in surgery could be expanded, in addition to robots taking on important but time-consuming tasks such as dispensing medicines.

Other emerging technologies such as VR also present exciting opportunities. Virtual reality could help with pain reduction and treating mental conditions such as post-traumatic stress, anxiety, and phobias.

The report’s authors conclude: “Our review of the evidence leads us to suggest that these technologies will not replace healthcare professionals, but will enhance them … giving them more time to care for patients.”

Interested in hearing industry leaders discuss subjects like this and their use cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo, and Cyber Security & Cloud Expo.

Experts warn of AI disasters leading to research lockdown
https://www.artificialintelligence-news.com/2018/09/13/experts-warn-ai-disasters-research/
Thu, 13 Sep 2018

Experts from around the world have warned of potential AI disasters that could lead to a subsequent lockdown of research.

Andrew Moore, the new head of AI at Google Cloud, is one such expert who has warned of scenarios that would lead to public backlash and restrictions that would prevent AI from reaching its full potential.

Back in November, Moore spoke at the Artificial Intelligence and Global Security Initiative. In his keynote, he said:

“If an AI disaster happens – and that would, for instance, be an autonomous car killing people due to serious bugs – then at some point AI systems are going to be locked down for development, at least in the US.

There are some even more horrible scenarios — which I don’t want to talk about on the stage, which we’re really worried about — that will cause the complete lockdown of robotics research.”

Autonomous vehicles have indeed already been involved with accidents.

Back in March, just four months after Moore’s warning, an Uber self-driving vehicle caused a fatality. The subsequent investigation found that Elaine Herzberg and her bicycle were detected by the car’s sensors but then flagged as a ‘false positive’ and dismissed.

Following years of sci-fi movies featuring out-of-control AI robots, it’s unsurprising the public are on edge about the pace of recent developments. There’s a lot of responsibility on researchers to conduct their work safely and ethically.

Professor Jim al-Khalili, the incoming president of the British Science Association, told the Financial Times:

“It is quite staggering to consider that until a few years ago AI was not taken seriously, even by AI researchers.

We are now seeing an unprecedented level of interest, investment and technological progress in the field, which many people, including myself, feel is happening too fast.”

In the race between world powers to become AI leaders, many fear it will lead to rushed and dangerous results. This is of particular concern with regards to AI militarisation.

Many researchers believe AI should not be used for military purposes. Several Google employees recently left the company over its contract with the Pentagon to develop recognition software for its drones.

Over 4,000 other employees signed a petition demanding that Google’s management cease the project and promise to never again ‘build warfare technology.’

Google has since made the decision not to renew its Pentagon contract when it expires. However, it’s already caused ripples across Silicon Valley with many employees for companies such as Microsoft and Amazon demanding not to be involved with military contracts.

Much like the development of nuclear weapons, however, AI being developed for military purposes seems inevitable and there will always be players willing to step in. Last month, AI News reported Booz Allen secured an $885 million Pentagon AI contract.

From a military standpoint, maintaining similar capabilities as a potential adversary is necessary. Back in July, China announced plans to upgrade its naval power with unmanned AI submarines that provide an edge over the fleets of their global counterparts.

Russian President Vladimir Putin, meanwhile, recently said: “[AI] comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”

Few dispute that AI will have a huge impact on the world, but the debate rages on about whether it will be primarily good or bad. Beyond the potential dangers of rogue AIs, there’s also the argument over the impact on jobs.

Al-Khalili wants to see AI added to school curriculums – as well as public information programmes launched – to teach good practices, prepare the workforce, and reduce fears created by sci-fi.

What are your thoughts on AI fears? Let us know in the comments.

Interested in hearing industry leaders discuss subjects like this and sharing their use cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo, and Cyber Security & Cloud Expo so you can explore the future of enterprise technology in one place.

AI robots will solve underwater infrastructure damage checks
https://www.artificialintelligence-news.com/2018/07/20/ai-robots-underwater-infrastructure/
Fri, 20 Jul 2018

Robots will be paired with a versatile AI that can quickly adapt to unpredictable conditions when examining underwater infrastructure.

Some of a nation’s most vital infrastructure hides beneath the water. The difficulty in accessing most of it, however, makes important damage checks infrequent.

Sending humans down requires significant training, and divers can take several weeks to recover due to the often extreme depths. There are far more underwater structures than skilled divers to inspect them.

Robots have been designed to carry out some of these dangerous tasks. The problem is that, until now, they’ve lacked the smarts to deal with the unpredictable and rapidly changing nature of underwater conditions.

Researchers from Stevens Institute of Technology are working on algorithms which enable these underwater robots to check and protect infrastructure.

Their work is led by Brendan Englot, Professor of Mechanical Engineering at Stevens.

“There are so many difficult disturbances pushing the robot around, and there is often very poor visibility, making it hard to give a vehicle underwater the same situational awareness that a person would have just walking around on the ground or being up in the air,” says Englot.

Englot and his team are using reinforcement learning to train their algorithms. Rather than relying on an exact mathematical model, the robot performs actions and observes whether they help it attain its goal.

Through trial and error, the algorithm is updated with the collected data to figure out the best ways to deal with changing underwater conditions. This enables the robot to successfully manoeuvre and navigate even in previously unmapped areas.
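The trial-and-error loop described above is the essence of reinforcement learning. As a minimal illustration (not the team's actual system), here is tabular Q-learning on a toy one-dimensional world where an agent must learn to move towards a goal; all states, rewards, and hyperparameters are invented for the example:

```python
import random

random.seed(0)

# Toy 1-D world: states 0..4 with the goal at state 4; actions: 0 = left, 1 = right.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2        # learning rate, discount, exploration rate

Q = [[0.0, 0.0] for _ in range(N_STATES)]    # Q-table: value estimate per (state, action)

def step(state, action):
    """Move left or right (clamped to the world); small cost per move, reward at goal."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == GOAL else -0.1
    return nxt, reward

for _ in range(500):                         # trial-and-error episodes
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        a = random.randrange(2) if random.random() < EPSILON else max((0, 1), key=lambda x: Q[s][x])
        nxt, r = step(s, a)
        # Q-learning update: nudge the estimate toward reward + discounted best future value.
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[nxt]) - Q[s][a])
        s = nxt

# The learned greedy policy should point right (action 1) in every non-goal state.
policy = [max((0, 1), key=lambda x: Q[s][x]) for s in range(N_STATES)]
print(policy[:4])
```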

A robot was recently sent on a mission to map a pier in Manhattan.

“We didn’t have a prior model of that pier,” says Englot. “We were able to just send our robot down and it was able to come back and successfully locate itself throughout the whole mission.”

The robots gather data using sonar, widely regarded as the most reliable means of undersea navigation. It works similarly to a dolphin's echolocation, measuring how long it takes for high-frequency chirps to bounce off nearby structures.
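The time-of-flight principle is simple to sketch. The figure of 1500 m/s below is a typical speed of sound in seawater, not a value from the article, and in practice it varies with temperature, salinity, and depth.

```python
SPEED_OF_SOUND_WATER = 1500.0  # metres per second, assumed constant

def range_from_echo(round_trip_seconds: float) -> float:
    """Distance to a structure from the round-trip time of a sonar chirp.

    The chirp travels out and back, so divide the total path by two.
    """
    return SPEED_OF_SOUND_WATER * round_trip_seconds / 2.0

# A chirp that returns after 20 ms implies a structure about 15 m away
print(range_from_echo(0.020))  # 15.0
```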

A pitfall of this approach is that the resulting imagery is similar to a grayscale medical ultrasound. Englot and his team believe that once a structure has been mapped out, a second pass by the robot could use a camera to capture high-resolution images of critical areas.

For now it’s early days, but Englot’s project is an example of how AI is enabling a new era of robotics that improves efficiency while reducing the risks to humans.

What are your thoughts on the use of AI-powered robots for underwater checks? Let us know in the comments.

 Interested in hearing industry leaders discuss subjects like this and sharing their use-cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London and Amsterdam to learn more. Co-located with the  IoT Tech Expo, Blockchain Expo and Cyber Security & Cloud Expo so you can explore the future of enterprise technology in one place.

Scientists pledge not to build AIs which kill without oversight https://www.artificialintelligence-news.com/2018/07/18/scientists-build-ai-kill-oversight/ Wed, 18 Jul 2018 13:44:50 +0000

The post Scientists pledge not to build AIs which kill without oversight appeared first on AI News.

Thousands of scientists have signed a pledge not to have any role in building AIs which have the ability to kill without human oversight.

When many think of AI, they give at least some passing thought to the rogue AIs of sci-fi movies, such as the infamous Skynet in Terminator.

In an ideal world, AI would never be used in any military capacity. However, it will almost certainly be developed one way or another because of the advantage it would provide against an adversary without similar capabilities.

Russian President Vladimir Putin, when asked his thoughts on AI, recently said: “Whoever becomes the leader in this sphere will become the ruler of the world.”

Putin’s words sparked fears of a race in AI development similar to that of the nuclear arms race, and one which could be potentially reckless.

Rather than attempting to stop military AI development, a more attainable goal is to at least ensure any AI decision to kill is subject to human oversight.

Demis Hassabis at Google DeepMind and Elon Musk from SpaceX are among the more than 2,400 scientists who signed the pledge not to develop AI or robots which kill without human oversight.

The pledge was created by The Future of Life Institute and calls on governments to agree on laws and regulations that stigmatise and effectively ban the development of killer robots.

“We the undersigned agree that the decision to take a human life should never be delegated to a machine,” the pledge reads. It goes on to warn that “lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual.”

Programming Humanity

Human compassion is difficult to program; we’re certainly many years away from being able to do so. Yet it’s vital when it comes to life-or-death matters.

Consider a missile defense AI set up to protect a nation. Based on pure logic, it may determine that wiping out another nation which begins a missile program is the best way to protect its own. Humans would take into account that these are people’s lives, and would seek alternatives such as diplomatic resolutions.

Robots may one day be used for policing to reduce the risk to human officers. They could be armed with firearms or tasers, but the responsibility to fire should always rest with a human operator.

Although it will undoubtedly improve with time, AI has been proven to have a serious bias problem. A 2010 study by researchers at NIST and the University of Texas at Dallas found that algorithms designed and tested in East Asia are better at recognising East Asians, while those designed in Western countries are more accurate at detecting Caucasians.

An armed robot that mistakenly identifies someone as another person could end up killing that individual simply due to a flaw in its algorithms. Confirming the AI’s assessment with a human operator may be enough to prevent such a disaster.
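The human-oversight requirement the pledge calls for can be sketched as a simple authorisation gate. This is a hypothetical design illustration, not any real weapons-control interface: the names and structure are invented, and the point is only that model confidence never substitutes for an operator's decision.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    """A machine-generated target identification (hypothetical)."""
    target_id: str
    confidence: float  # classifier confidence, which may be biased or wrong

def authorise(assessment: Assessment, operator_confirms: bool) -> bool:
    # Regardless of how confident the model is, the machine alone never
    # decides: force is authorised only with explicit human sign-off.
    return operator_confirms

risky = Assessment(target_id="suspect-7", confidence=0.99)
print(authorise(risky, operator_confirms=False))  # False: no human sign-off
```

The design choice is that `confidence` is deliberately ignored in the decision itself; it can inform the operator, but cannot replace them.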

Read more: INTERPOL investigates how AI will impact crime and policing

Do you agree with the pledge made by the scientists? Let us know in the comments.

