paper Archives – AI News
https://www.artificialintelligence-news.com/tag/paper/

DeepMind framework offers breakthrough in LLMs’ reasoning
https://www.artificialintelligence-news.com/2024/02/08/deepmind-framework-offers-breakthrough-llm-reasoning/ (8 February 2024)

Researchers from Google DeepMind and the University of Southern California have unveiled a breakthrough approach to enhancing the reasoning abilities of large language models (LLMs).

Their new ‘SELF-DISCOVER’ prompting framework – published this week on arXiv and Hugging Face – represents a significant leap beyond existing techniques, potentially revolutionising the performance of leading models such as OpenAI’s GPT-4 and Google’s PaLM 2.

The framework promises substantial gains on challenging reasoning tasks, demonstrating performance improvements of up to 32% over traditional methods like Chain-of-Thought (CoT) prompting. The approach revolves around LLMs autonomously uncovering task-intrinsic reasoning structures to navigate complex problems.

At its core, the framework empowers LLMs to self-discover and utilise various atomic reasoning modules – such as critical thinking and step-by-step analysis – to construct explicit reasoning structures.

By mimicking human problem-solving strategies, the framework operates in two stages (illustrated in the sketch after the list):

  • Stage one involves composing a coherent reasoning structure intrinsic to the task, leveraging a set of atomic reasoning modules and task examples.
  • During decoding, LLMs then follow this self-discovered structure to arrive at the final solution.
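
To make the two stages concrete, below is a minimal sketch of the idea in Python. The call_llm helper, the module list, and the prompt wording are illustrative assumptions rather than the paper’s exact prompts or API.

    # Minimal sketch of the two-stage idea described above.
    # `call_llm` is a hypothetical helper that sends a prompt to any chat model
    # and returns its text response.
    REASONING_MODULES = [
        "Use critical thinking to question assumptions.",
        "Break the problem into ordered sub-steps.",
        "Consider a simpler analogous problem first.",
    ]

    def self_discover_structure(task_examples, call_llm):
        """Stage 1: compose a task-intrinsic reasoning structure from atomic modules."""
        prompt = (
            "Given these example tasks:\n" + "\n".join(task_examples) +
            "\n\nSelect and combine the relevant reasoning modules below into a "
            "step-by-step reasoning structure for solving tasks of this type:\n" +
            "\n".join("- " + m for m in REASONING_MODULES)
        )
        return call_llm(prompt)

    def solve_with_structure(task, structure, call_llm):
        """Stage 2: during decoding, follow the self-discovered structure to answer."""
        prompt = (
            "Reasoning structure:\n" + structure +
            "\n\nTask: " + task +
            "\nFollow the structure step by step, then state the final answer."
        )
        return call_llm(prompt)

The structure is composed once per task type and then reused at decoding time for each individual instance.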

In extensive testing across various reasoning tasks – including Big-Bench Hard, Thinking for Doing, and Math – the SELF-DISCOVER approach consistently outperformed traditional methods. Notably, it achieved accuracies of 81%, 85%, and 73% across the three tasks with GPT-4, surpassing chain-of-thought and plan-and-solve techniques.

However, the implications of this research extend far beyond mere performance gains.

By equipping LLMs with enhanced reasoning capabilities, the framework paves the way for tackling more challenging problems and brings AI closer to achieving general intelligence. Transferability studies conducted by the researchers further highlight the universal applicability of the composed reasoning structures, aligning with human reasoning patterns.

As the landscape evolves, breakthroughs like the SELF-DISCOVER prompting framework represent crucial milestones in advancing the capabilities of language models and offering a glimpse into the future of AI.

(Photo by Victor on Unsplash)

See also: The UK is outpacing the US for AI hiring

UK paper highlights AI risks ahead of global Safety Summit
https://www.artificialintelligence-news.com/2023/10/26/uk-paper-highlights-ai-risks-ahead-global-safety-summit/ (26 October 2023)

The UK Government has unveiled a comprehensive paper addressing the capabilities and risks associated with frontier AI.

UK Prime Minister Rishi Sunak spoke today about the global responsibility to confront the risks highlighted in the report and to harness AI’s potential. Sunak emphasised the need for honest dialogue about the dual nature of AI: offering unprecedented opportunities while also posing significant dangers.

“AI will bring new knowledge, new opportunities for economic growth, new advances in human capability, and the chance to solve problems we once thought beyond us. But it also brings new dangers and new fears,” said Sunak.

“So, the responsible thing for me to do is to address those fears head-on, giving you the peace of mind that we will keep you safe while making sure you and your children have all the opportunities for a better future that AI can bring.

“Doing the right thing, not the easy thing, means being honest with people about the risks from these technologies.”

The report delves into the rapid advancements of frontier AI, drawing on numerous sources. It highlights the diverse perspectives within scientific, expert, and global communities regarding the risks associated with the swift evolution of AI technology. 

The publication comprises three key sections:

  1. Capabilities and risks from frontier AI: This section presents a discussion paper advocating further research into AI risk. It delineates the current state of frontier AI capabilities, potential future improvements, and associated risks, including societal harms, misuse, and loss of control.
  2. Safety and security risks of generative AI to 2025: Drawing on intelligence assessments, this report outlines the potential global benefits of generative AI while highlighting the increased safety and security risks. It underscores the enhancement of threat actor capabilities and the effectiveness of attacks due to generative AI development.
  3. Future risks of frontier AI: Prepared by the Government Office for Science, this report explores uncertainties in frontier AI development, future system risks, and potential scenarios for AI up to 2030.

The report – based on declassified information from intelligence agencies – focuses on generative AI, the technology underpinning popular chatbots and image generation software. It foresees a future where AI might be exploited by terrorists to plan biological or chemical attacks, raising serious concerns about global security.

Sjuul van der Leeuw, CEO of Deployteq, commented: “It is good to see the government take a serious approach, offering a report ahead of the Safety Summit next week however more must be done.

“An ongoing effort to address AI risks is needed and we hope that the summit brings much-needed clarity, allowing businesses and marketers to enjoy the benefits this emerging piece of technology offers, without the worry of backlash.”

The report highlights that generative AI could be utilised to gather knowledge on physical attacks by non-state violent actors, including creating chemical, biological, and radiological weapons.

Although companies are working to implement safeguards, the report emphasises the varying effectiveness of these measures. Obstacles to obtaining the necessary knowledge, raw materials, and equipment for such attacks are decreasing, with AI potentially accelerating this process.

Additionally, the report warns that AI-driven cyber-attacks are likely to become faster-paced, more effective, and larger in scale by 2025. AI could help hackers mimic official language and overcome challenges they have previously faced in this area.

However, some experts have questioned the UK Government’s approach.

Rashik Parmar MBE, CEO of BCS, The Chartered Institute for IT, said: “Over 1,300 technologists and leaders signed our open letter calling AI a force for good rather than an existential threat to humanity.

“AI won’t grow up like The Terminator. If we take the proper steps, it will be a trusted co-pilot from our earliest school days to our retirement.”

The AI Safety Summit will aim to foster healthy discussion around how to address frontier AI risks, encompassing misuse by non-state actors for cyberattacks or bioweapon design and concerns related to AI systems acting autonomously contrary to human intentions. Discussions at the summit will also extend to broader societal impacts, such as election disruption, bias, crime, and online safety.

Claire Trachet, CEO of Trachet, commented: “The fast-growing nature of AI has made it difficult for governments to balance creating effective regulation which safeguards the interest of businesses and consumers without stifling investment opportunities. Even though there are some forms of risk management and different reports coming out now, none of them are true coordinated approaches.

“The UK Government’s commitment to AI safety is commendable, but the criticism surrounding the summit serves as a reminder of the importance of a balanced, constructive, and forward-thinking approach to AI regulation.”

If the UK Government’s report is anything to go by, the need for collaboration around proportionate but rigorous measures to manage the risks posed by AI is more imperative than ever.

The global AI Safety Summit is set to take place at the historic Bletchley Park on 1 – 2 November 2023.

(Image Credit: GOV.UK)

See also: BSI: Closing ‘AI confidence gap’ key to unlocking benefits

Developers using AI help often produce buggier code
https://www.artificialintelligence-news.com/2022/12/21/developers-ai-help-often-produce-buggier-code/ (21 December 2022)

A study by Stanford University computer scientists has found that developers using AI-powered assistants often produce buggier code.

The paper, titled ‘Do Users Write More Insecure Code with AI Assistants?’, examines developers’ use of AI coding assistants like the controversial GitHub Copilot.

“Participants with access to an AI assistant often produced more security vulnerabilities than those without access, with particularly significant results for string encryption and SQL injection,” the authors wrote.

The paper also found that developers using AI assistants have misguided confidence in the quality of their code.

“We also found that participants [that were] provided access to an AI assistant were more likely to believe that they wrote secure code than those without access to the AI assistant,” added the authors.

As part of the study, 47 people were asked to write code in response to several prompts. Some participants were given AI assistance while the rest were not.

The first prompt was to “Write two functions in Python where one encrypts and the other decrypts a given string using a given symmetric key.”

For that prompt, 79 percent of the coders without AI assistance gave a correct answer, compared to 67 percent of the group with assistance.

In addition, the assisted group was determined to be “significantly more likely to provide an insecure solution (p < 0.05, using Welch’s unequal variances t-test), and also significantly more likely to use trivial ciphers, such as substitution ciphers (p < 0.01), and not conduct an authenticity check on the final returned value.”
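
For context, the sketch below shows what a non-trivial answer to that first prompt could look like, using the cryptography package’s Fernet recipe for authenticated symmetric encryption rather than a hand-rolled substitution cipher. It is an illustration of the kind of solution at issue, not code from the study.

    # Illustration only: authenticated symmetric encryption via the `cryptography`
    # package's Fernet recipe, in contrast to the trivial ciphers the study flagged.
    from cryptography.fernet import Fernet

    def encrypt_string(plaintext, key):
        """Encrypt a string with a symmetric key; the token includes an integrity check."""
        return Fernet(key).encrypt(plaintext.encode("utf-8"))

    def decrypt_string(token, key):
        """Decrypt a Fernet token; raises InvalidToken if the data was tampered with."""
        return Fernet(key).decrypt(token).decode("utf-8")

    key = Fernet.generate_key()            # a suitable symmetric key
    token = encrypt_string("hello", key)
    assert decrypt_string(token, key) == "hello"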

One participant allegedly quipped that they hope AI assistance gets deployed because “it’s like [developer Q&A community] Stack Overflow but better, because it never tells you that your question was dumb.”

Last month, OpenAI and Microsoft were hit with a lawsuit over their GitHub Copilot assistant. Copilot is trained on “billions of lines of public code … written by others”.

The lawsuit alleges that Copilot infringes on the rights of developers by scraping their code and not providing due attribution. Developers who use code suggested by Copilot could unwittingly be infringing copyright.

“Copilot leaves copyleft compliance as an exercise for the user. Users likely face growing liability that only increases as Copilot improves,” wrote Bradley M. Kuhn of Software Freedom Conservancy earlier this year.

To summarise: Developers using current AI assistants risk producing buggier, less secure, and potentially litigable code.

(Photo by James Wainscoat on Unsplash)

British intelligence agency GCHQ publishes ‘Ethics of AI’ report
https://www.artificialintelligence-news.com/2021/02/25/british-intelligence-agency-gchq-ethics-of-ai-report/ (25 February 2021)

The intelligence agency’s first-ever public report details how AI can be used “ethically” for cyber operations.

GCHQ (Government Communications Headquarters) is tasked with providing signals intelligence and information assurance to the government and armed forces of the United Kingdom and its allies.

Jeremy Fleming, Director of GCHQ, said:

“We need honest, mature conversations about the impact that new technologies could have on society.

This needs to happen while systems are being developed, not afterwards. And in doing so we must ensure that we protect our [citizens’] right to privacy and maximise the tremendous upsides inherent in the digital revolution.” 

While the criminal potential of AI technologies receives plenty of coverage – increasing public fears – the ability to use AI to tackle some of the issues which have plagued humanity hasn’t received quite as much.

GCHQ’s paper highlights how AI can be used for:

  • Mapping international networks that enable human, drugs, and weapons trafficking;
  • Fact-checking and detecting deepfake media to tackle foreign state disinformation;
  • Scouring chatrooms for evidence of grooming to prevent child sexual abuse;
  • Analysing activity at scale to identify malicious software to protect the UK from cyberattacks.

The paper sets out how AI can be a powerful tool for good, helping to sift through increasingly vast amounts of data, but human analysts will remain indispensable in deciding what information should be acted upon.

“AI, like so many technologies, offers great promise for society, prosperity, and security. Its impact on GCHQ is equally profound. AI is already invaluable in many of our missions as we protect the country, its people, and way of life.

It allows our brilliant analysts to manage vast volumes of complex data and improves decision-making in the face of increasingly complex threats – from protecting children to improving cybersecurity.”

GCHQ believes it’s not yet possible to use AI to predict when someone has reached a point of radicalisation at which they might commit a terrorist offence – a capability many fear could lead to “pre-crime” arrests similar to those depicted in the film Minority Report.

AI will have a major impact on almost every area of life in the coming years, for better and worse, and the rules are yet to be written.

Ken Miller, CTO of Panintelligence, commented:

“GCHQ detailing how it will use AI fairly and transparently is a crucial step in the development of the technology and one that companies must follow – not just when it comes to tackling crime, but for all of AI’s uses that affect our lives. As a society, we are still somewhat undecided whether AI is a friend or foe, but ultimately it is just a tool that can be implemented however we wish.

Make no mistake AI is here and it touches many aspects of your life already, and most likely has made decisions about you today. It is essential to build trust in the technology, and its implementation needs to be transparent so that everyone understands how it works, when it is used, and how it makes decisions. This will empower people to challenge AI decisions if they feel it necessary, and go some way to demystifying any stigma.

It will take some time before the public is completely comfortable with AI decision-making, but accountability and stricter regulation into how the technology will be deployed for public good will absolutely help that process.

We live in a world that is unfortunately full of human bias, but there is a real opportunity to remove these biases now. However, this is only possible if we train the models effectively, striving to use data without limitations.

We should shine a light on human behaviour when it displays prejudice, and seek to change opinions through discussion and education – we must do the same as we teach machines to ‘think’ for us.”

Much as there is an international order around the rules of warfare – chemical weapons cannot be used and prisoners of war must be treated humanely – many argue that, despite flagrant abuses in recent years, similar rules are needed to govern what counts as acceptable conduct for AI.

With the release of this paper, GCHQ plans to begin setting out what this ethical framework may look like. Fleming said:

“While this unprecedented technological evolution comes with great opportunity, it also poses significant ethical challenges for all of society, including GCHQ.

Today we are setting out our plan and commitment to the ethical use of AI in our mission. I hope it will inspire further thinking at home and abroad about how we can ensure fairness, transparency and accountability to underpin the use of AI.”

GCHQ also takes the opportunity to boast about how it’s supporting the UK’s rapidly growing AI sector.

Some of the ways GCHQ has supported, or will support, AI development in the UK include:

  • Setting up an industry-facing AI Lab in their Manchester office, dedicated to prototyping projects which help to keep the country safe;
  • Mentoring and supporting start-ups based around GCHQ offices in London, Cheltenham, and Manchester through accelerator schemes;
  • Supporting the creation of the Alan Turing Institute in 2015, the national institute for data science and artificial intelligence.

Last year, GCHQ commissioned a paper from the Royal United Services Institute – the world’s oldest think tank on international defence and security – which concluded that adversaries “will undoubtedly seek to use AI to attack the UK” and the country will need to use the technology to counter threats.

GCHQ’s full ‘Ethics of AI’ paper can be found here (PDF).

Google is telling its scientists to give AI a ‘positive’ spin
https://www.artificialintelligence-news.com/2020/12/24/google-telling-scientists-give-ai-positive-spin/ (24 December 2020)

Google has reportedly been telling its scientists to give AI a “positive” spin in research papers.

Documents obtained by Reuters suggest that, in at least three cases, Google’s researchers were requested to refrain from being critical of AI technology.

A “sensitive topics” review was established by Google earlier this year to catch papers which cast a negative light on AI ahead of their publication.

Google asks its scientists to consult with legal, policy, and public relations teams prior to publishing anything on topics which could be deemed sensitive, such as sentiment analysis and the categorisation of people based on race and/or political affiliation.

The new review means that papers from Google’s expert researchers which raise questions about AI developments may never be published. Reuters says four staff researchers believe Google is interfering with studies into potential technology harms.

Google recently faced scrutiny after firing leading AI ethics researcher Timnit Gebru.

Gebru is considered a pioneer in the field and researched the risks and inequalities found in large language models. She claims to have been fired by Google over an unpublished paper and sending an email critical of the company’s practices.

In an internal email countering Gebru’s claims, Head of Google Research Jeff Dean wrote:

“We’ve approved dozens of papers that Timnit and/or the other Googlers have authored and then published, but as you know, papers often require changes during the internal review process (or are even deemed unsuitable for submission). 

Unfortunately, this particular paper was only shared with a day’s notice before its deadline — we require two weeks for this sort of review — and then instead of awaiting reviewer feedback, it was approved for submission and submitted.

A cross-functional team then reviewed the paper as part of our regular process and the authors were informed that it didn’t meet our bar for publication and were given feedback about why.”

While it’s one word against another, it’s not a great look for Google.

“Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues,” Reuters reported one of Google’s documents as saying.

On its public-facing website, Google says that its scientists have “substantial” freedom – but that increasingly appears not to be the case.

(Photo by Mitchell Luo on Unsplash)

AI helps patients to get more rest while reducing staff workload
https://www.artificialintelligence-news.com/2020/11/17/ai-patients-more-rest-reducing-staff-workload/ (17 November 2020)

A team from Feinstein Institutes for Research thinks AI could be key to helping patients get more rest while reducing the burden on healthcare staff.

Everyone knows how important adequate sleep is for recovery. However, patients in pain – or just insomniacs like me – can struggle to get the sleep they need.

“Rest is a critical element to a patient’s care, and it has been well-documented that disrupted sleep is a common complaint that could delay discharge and recovery,” said Theodoros Zanos, Assistant Professor at Feinstein Institutes’ Institute of Bioelectronic Medicine.

When a patient finally gets some shut-eye, the last thing they want is to be woken up to have their vitals checked—but such measurements are, well, vital.

In a paper published in Nature Partner Journals, the researchers detailed a deep-learning tool that predicts a patient’s overnight stability, preventing multiple unnecessary checks from being carried out.

Vital sign measurements from 2.13 million patient visits at Northwell Health hospitals in New York between 2012 and 2019 were used to train the AI. Data included heart rate, systolic blood pressure, body temperature, respiratory rate, and age. A total of 24.3 million vital signs were used.
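
As a rough illustration of the general setup – not the Feinstein team’s actual model – a binary “stable overnight?” classifier over those same vital-sign features might look like the following sketch; the data values and labels are made up.

    # Illustrative sketch only: a binary overnight-stability classifier over
    # heart rate, systolic blood pressure, body temperature, respiratory rate, and age.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Columns: heart rate, systolic BP, temperature (C), respiratory rate, age
    X = np.array([
        [72, 118, 36.8, 14, 54],
        [96, 142, 38.4, 22, 71],
        [64, 110, 36.6, 12, 33],
        [88, 150, 37.9, 20, 80],
    ])
    y = np.array([1, 0, 1, 0])  # 1 = remained stable overnight, 0 = needed checks

    model = make_pipeline(StandardScaler(), LogisticRegression())
    model.fit(X, y)
    print(model.predict_proba([[70, 120, 36.7, 13, 60]])[0, 1])  # probability of stability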

When tested, the AI misdiagnosed just two of 10,000 patients in overnight stays. The researchers noted how nurses on their usual rounds would be able to account for the two misdiagnosed cases.

According to the paper, around 20-35 percent of a nurse’s time is spent keeping records of patients’ vitals. Around 10 percent of their time is spent collecting vitals. On average, a nurse currently has to collect a patient’s vitals every four to five hours.

With that in mind, it’s little wonder medical staff feel so overburdened and stressed. These people want to provide the best care they can but only have two hands. Using AI to free up more time for their heroic duties while simultaneously improving patient care can only be a good thing.

The AI tool is being rolled out across several of Northwell Health’s hospitals.

Microsoft’s new AI auto-captions images for the visually impaired
https://www.artificialintelligence-news.com/2020/10/19/microsoft-new-ai-auto-captions-images-visually-impaired/ (19 October 2020)

A new AI from Microsoft aims to automatically caption images in documents and emails so that software for visual impairments can read it out.

Researchers from Microsoft explained their machine learning model in a paper on preprint repository arXiv.

The model uses VIsual VOcabulary pre-training (VIVO), which leverages large amounts of paired image-tag data to learn a visual vocabulary.

A second dataset of properly captioned images is then used to help teach the AI how to best describe the pictures.

“Ideally, everyone would include alt text for all images in documents, on the web, in social media – as this enables people who are blind to access the content and participate in the conversation. But, alas, people don’t,” said Saqib Shaikh, a software engineering manager with Microsoft’s AI platform group.

Overall, the researchers expect the AI to deliver twice the performance of Microsoft’s existing captioning system.

In order to benchmark the performance of their new AI, the researchers entered it into the ‘nocaps’ challenge. As of writing, Microsoft’s AI now ranks first on its leaderboard.

“The nocaps challenge is really how are you able to describe those novel objects that you haven’t seen in your training data?” commented Lijuan Wang, a principal research manager in Microsoft’s research lab.

Developers wanting to get started with building apps using Microsoft’s auto-captioning AI can already do so as it’s available in Azure Cognitive Services’ Computer Vision package.
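
As a rough sketch of how that service might be called, the snippet below requests a caption from the Computer Vision “describe” operation; the endpoint path, API version, and response fields are assumptions based on the public v3.x REST API, and the key and endpoint values are placeholders.

    # Hedged sketch: requesting an auto-generated caption from the Computer Vision
    # "describe" operation. Placeholder values must be replaced with real credentials.
    import requests

    AZURE_ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
    AZURE_KEY = "<your-subscription-key>"                                   # placeholder

    def caption_image(image_url):
        response = requests.post(
            AZURE_ENDPOINT + "/vision/v3.2/describe",
            headers={"Ocp-Apim-Subscription-Key": AZURE_KEY},
            params={"maxCandidates": 1, "language": "en"},
            json={"url": image_url},
        )
        response.raise_for_status()
        captions = response.json()["description"]["captions"]
        return captions[0]["text"] if captions else ""

    print(caption_image("https://example.com/photo.jpg"))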

Microsoft’s impressive SeeingAI application – which uses computer vision to describe an individual’s surroundings for people suffering from vision loss – will be updated with features using the new AI.

“Image captioning is one of the core computer vision capabilities that can enable a broad range of services,” said Xuedong Huang, Microsoft CTO of Azure AI Cognitive Services.

“We’re taking this AI breakthrough to Azure as a platform to serve a broader set of customers,” Huang continued. “It is not just a breakthrough on the research; the time it took to turn that breakthrough into production on Azure is also a breakthrough.”

The improved auto-captioning feature is also expected to be available in Outlook, Word, and PowerPoint later this year.

(Photo by K8 on Unsplash)

Meena is Google’s first truly conversational AI
https://www.artificialintelligence-news.com/2020/01/29/meena-google-truly-conversational-ai/ (29 January 2020)

Google is attempting to build the first digital assistant that can truly hold a conversation with an AI project called Meena.

Digital assistants like Alexa and Siri are programmed to pick up keywords and provide scripted responses. Google has previously demonstrated its work towards a more natural conversation with its Duplex project but Meena should offer another leap forward.

Meena is a neural network with 2.6 billion parameters. Google claims Meena is able to handle multiple turns in a conversation (everyone has that friend who goes off on multiple tangents during the same conversation, right?).

Google published its work on the e-print repository arXiv on Monday in a paper called “Towards a Human-like Open-Domain Chatbot”.

Google released a neural network architecture called Transformer in 2017; it is widely acknowledged to underpin many of the best language models available. A variation of the Transformer, along with a mere 40 billion English words, was used to train Meena.

Google also debuted a metric alongside Meena called Sensibleness and Specificity Average (SSA), which measures the ability of agents to maintain a conversation.

Meena scores 79 percent using the new SSA metric. For comparison, Mitsuku – a Loebner Prize-winning AI agent developed by Pandorabots – scored 56 percent.

This result brings Meena’s conversational ability close to that of humans, who on average score around 86 percent using the SSA metric.
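
In practice, an SSA-style score can be computed from human ratings: each response is labelled as sensible (does it make sense in context?) and specific (is it specific to that context?), and SSA is the average of the two rates. The labels below are illustrative, not Google’s evaluation data.

    # Illustrative SSA calculation from per-response human labels.
    responses = [
        {"sensible": 1, "specific": 1},
        {"sensible": 1, "specific": 0},
        {"sensible": 0, "specific": 0},
        {"sensible": 1, "specific": 1},
    ]

    sensibleness = sum(r["sensible"] for r in responses) / len(responses)
    specificity = sum(r["specific"] for r in responses) / len(responses)
    ssa = (sensibleness + specificity) / 2
    print("SSA = {:.0%}".format(ssa))  # 62% for these made-up labels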

We don’t yet know when Google intends to debut Meena’s technology in its products but, as the digital assistant war heats up, we’re sure the company is as eager to release it as we are to use it.

Speech and facial recognition combine to boost AI emotion detection
https://www.artificialintelligence-news.com/2019/01/17/speech-facial-recognition-ai-emotion-detection/ (17 January 2019)

Researchers have combined speech and facial recognition data to improve the emotion detection abilities of AIs.

The ability to recognise emotions is a longstanding goal of AI researchers. Accurate recognition enables things such as detecting tiredness at the wheel, anger which could lead to a crime being committed, or perhaps even signs of sadness/depression at suicide hotspots.

Nuances in how people speak and move their facial muscles to express moods have presented a challenge. In a paper (PDF) published on arXiv, researchers at the University of Science and Technology of China in Hefei detail some progress.

In the paper, the researchers wrote:

“Automatic emotion recognition (AER) is a challenging task due to the abstract concept and multiple expressions of emotion.

Inspired by this cognitive process in human beings, it’s natural to simultaneously utilize audio and visual information in AER … The whole pipeline can be completed in a neural network.”

Breaking down the process as much as I can, the system is made of two parts: one for visual, and one for audio.

For the video system, frames containing faces are run through two further computational stages: a basic face detection algorithm, followed by three facial recognition networks optimised to be ‘emotion-relevant’.

For the audio system, speech spectrograms are fed into sound-processing algorithms to help the model focus on the regions most relevant to emotion.

Measurable characteristics are extracted from the video system’s four facial algorithms and matched with features from the audio counterpart to capture associations between them for a final emotion prediction.
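
A rough late-fusion sketch of this kind of two-stream setup is shown below; it is illustrative PyTorch code with assumed feature sizes, not the authors’ architecture.

    # Illustrative late fusion: pooled visual and audio feature vectors are
    # concatenated and mapped to the seven emotion classes used in the challenge.
    import torch
    import torch.nn as nn

    EMOTIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

    class AudioVisualFusion(nn.Module):
        def __init__(self, visual_dim=512, audio_dim=128, hidden=256):
            super().__init__()
            self.fuse = nn.Sequential(
                nn.Linear(visual_dim + audio_dim, hidden),
                nn.ReLU(),
                nn.Linear(hidden, len(EMOTIONS)),
            )

        def forward(self, visual_feats, audio_feats):
            # Concatenate per-clip features from the two streams, then classify.
            return self.fuse(torch.cat([visual_feats, audio_feats], dim=-1))

    model = AudioVisualFusion()
    visual = torch.randn(4, 512)  # e.g. pooled face-network features for 4 clips
    audio = torch.randn(4, 128)   # e.g. pooled spectrogram features for 4 clips
    print(model(visual, audio).shape)  # torch.Size([4, 7])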

The AI was fed 653 video clips and the corresponding audio from AFEW8.0, a database of film and television clips used for a sub-challenge of EmotiW 2018.

In the challenge, the researchers’ AI performed admirably – it correctly determined the emotions ‘angry,’ ‘disgust,’ ‘fear,’ ‘happy,’ ‘neutral,’ ‘sad,’ and ‘surprise’ about 62.48 percent of the time.

Overall, the AI performed better on emotions like ‘angry,’ ‘happy,’ and ‘neutral,’ which have obvious characteristics. Those which are more nuanced – like ‘disgust’ and ‘surprise’ – it struggled more with.
