applications Archives - AI News

EU clears $19.7B Microsoft-Nuance deal without any small print (22 December 2021)

The EU has concluded Microsoft’s $19.7 billion acquisition of Nuance doesn’t pose competition concerns.

Nuance originally gained renown for creating the backend of that little old virtual assistant called Siri (you might have heard of it).

The company has since continued to build out its speech recognition capabilities and now offers a number of solutions, spanning specific industries such as healthcare through to general omni-channel customer experience services.

Earlier this year, Microsoft decided Nuance was worth coughing up $19.7 billion for.

As such large deals often do, the proposed acquisition caught the attention of several global regulators. In the case of the EU, it was referred to the Commission’s regulators on 16 November.

The regulator said on Tuesday that the proposed acquisition “would raise no competition concerns” within the bloc and that “Microsoft and Nuance offer very different products” after looking at potential horizontal overlaps between the companies’ transcription solutions.

Vertical links in the healthcare space were also analysed, but it was determined that “competing transcription service providers in healthcare do not depend on Microsoft for cloud computing services” and that “transcription service providers in the healthcare sector are not particularly important users of cloud computing services”.

Furthermore, the regulator concluded:

  • Microsoft-Nuance will continue to face stiff competition from rivals in the future.
  • There’d be no ability/incentive to foreclose existing market solutions.
  • Nuance can only use the data it collects for its own services.
  • The data will not provide Microsoft with an advantage to shut out competing software providers.

The EU’s decision mirrors that of regulators in the US and Australia. However, the UK’s Competition and Markets Authority (CMA) announced its own investigation earlier this month.

When it announced the deal, Microsoft said it aimed to complete the acquisition by the end of 2021. The CMA is accepting comments until 10 January 2022, so it seems Microsoft may have to hold out a bit longer.

(Photo by Annie Spratt on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Twitter turns to HackerOne community to help fix its AI biases (2 August 2021)

Twitter is recruiting the help of the HackerOne community to try to fix troubling biases in its AI models.

The image-cropping algorithm used by Twitter was intended to keep the most interesting parts of an image in the preview crop in people’s timelines. That was all well and good until users found last year that it favoured lighter skin tones over darker ones, and the breasts and legs of women over their faces.

When researchers fed a picture of a black man and a white woman into the system, the algorithm displayed the white woman 64 percent of the time and the black man just 36 percent of the time. For images of a white woman and a black woman, the algorithm displayed the white woman 57 percent of the time.
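
To make the arithmetic behind findings like this concrete, below is a minimal sketch of how such a paired-image crop test can be scored. The saliency_crop function is a hypothetical stand-in for the cropping model under test; it is an assumption for illustration, not Twitter’s actual API.

```python
from typing import Callable, Iterable, Tuple

def preference_rate(
    pairs: Iterable[Tuple[str, str]],
    saliency_crop: Callable[[str, str], str],  # hypothetical model under test
) -> float:
    """Return the fraction of trials in which the model's crop centres
    on the first image of each (image_a, image_b) pair."""
    picks_a = 0
    total = 0
    for image_a, image_b in pairs:
        # The model sees both images stitched together and its crop
        # lands on one of them; we record which one.
        if saliency_crop(image_a, image_b) == image_a:
            picks_a += 1
        total += 1
    return picks_a / total

# A 64%/36% split over many pairs, as reported above, suggests a
# systematic preference rather than random noise.
```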

Twitter has offered bounties ranging between $500 and $3,500 to anyone who finds evidence of harmful bias in its algorithms. Anyone successful will also be invited to DEF CON, a major hacker convention.

Rumman Chowdhury, Director of Software Engineering at Twitter, and Jutta Williams, Product Manager, wrote in a blog post:

“We want to take this work a step further by inviting and incentivizing the community to help identify potential harms of this algorithm beyond what we identified ourselves.”

Twitter initially denied the problem, so it’s good to see the company now taking responsibility and attempting to fix the issue. By doing so, the company says it wants to “set a precedent at Twitter, and in the industry, for proactive and collective identification of algorithmic harms.”

Three staffers from Twitter’s Machine Learning Ethics, Transparency, and Accountability department found biases in their own tests and claim the algorithm is, on average, around four percent more likely to display people with lighter skin than those with darker skin, and eight percent more likely to display women than men.

However, the staffers found no evidence that certain parts of people’s bodies were more likely to be displayed than others.

“We found that no more than 3 out of 100 images per gender have the crop not on the head,” they explained in a paper that was published on arXiv.

Twitter has gradually ditched its problematic image-cropping algorithm and doesn’t seem to be in a rush to reinstate it anytime soon.

In its place, Twitter has been rolling out the ability for users to control how their images are cropped.

“We considered the trade-offs between the speed and consistency of automated cropping with the potential risks we saw in this research,” wrote Chowdhury in a blog post in May.

“One of our conclusions is that not everything on Twitter is a good candidate for an algorithm, and in this case, how to crop an image is a decision best made by people.”

The HackerOne page for the challenge can be found here.

(Photo by Edgar MORAN on Unsplash)

Find out more about Digital Transformation Week North America, taking place on November 9-10 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.

Apple considers using ML to make augmented reality more useful (22 July 2021)

A patent from Apple suggests the company is considering how machine learning can make augmented reality (AR) more useful.

Most current AR applications are somewhat gimmicky, with barely a handful that have achieved any form of mass adoption. Apple’s decision to introduce LiDAR in its recent devices has given AR a boost but it’s clear that more needs to be done to make applications more useful.

A newly filed patent suggests that Apple is exploring how machine learning can be used to automatically (or “automagically,” the company would probably say) detect objects in AR.

The first proposed use of the technology would be for Apple’s own Measure app.

Measure’s previously dubious accuracy improved greatly after Apple introduced LiDAR, but most people probably still grabbed an actual tape measure unless they were stuck without one.

The patent suggests machine learning could be used for object recognition in Measure to help users simply point their devices at an object and have its measurements automatically presented in AR.

Specifically, Apple’s patent suggests displaying a “measurement of the object determined using one of a plurality of class-specific neural networks selected based on the classifying of the object.”

This simplicity benefit over a traditional tape measure would likely drive greater adoption.

Machine learning is already used for a number of object recognition and labelling tasks within Apple’s ecosystem. Image editor Pixelmator Pro, for example, uses it to automatically label layers.

Apple’s implementation suggests an object is measured “by first generating a 3D bounding box for the object based on the depth data”. This boundary box is then refined “using various neural networks and refining algorithms described herein.”

Not all objects are measured the same way, so Apple suggests a neural network could also step in here to determine which measurements would be useful to the user: for example, “a seat height for chairs, a display diameter for TVs, a table diameter for round tables, a table length for rectangular tables, and the like.”
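
As an illustration of the class-specific dispatch the patent describes, here is a rough Python sketch. Every name, class, and ratio below is a hypothetical placeholder – Apple’s actual models and APIs are not public.

```python
from dataclasses import dataclass

@dataclass
class BoundingBox3D:
    width: float   # metres
    height: float  # metres
    depth: float   # metres

def measure_chair(box: BoundingBox3D) -> dict:
    # A chair-specific network might estimate seat height; here a fixed
    # ratio of the box height stands in for a learned model.
    return {"seat_height_m": box.height * 0.45}

def measure_tv(box: BoundingBox3D) -> dict:
    # A TV-specific network might report the diagonal screen size.
    diagonal = (box.width ** 2 + box.height ** 2) ** 0.5
    return {"diagonal_m": diagonal}

# One measurement routine per object class, selected by the classifier --
# the "plurality of class-specific neural networks" idea in miniature.
CLASS_SPECIFIC_MODELS = {
    "chair": measure_chair,
    "tv": measure_tv,
}

def measure(object_class: str, box: BoundingBox3D) -> dict:
    model = CLASS_SPECIFIC_MODELS.get(object_class)
    if model is None:
        # Unknown class: fall back to raw bounding-box dimensions.
        return {"width_m": box.width, "height_m": box.height, "depth_m": box.depth}
    return model(box)
```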

To accomplish what Apple envisions, a lot of models will need to be trained to cover every object type. However, many everyday items could be supported early on, with more added over time.

“One model may be trained and used to determine measurements for chair type objects (e.g., determining a seat height, arm length, etc.),” Apple wrote, “and another model may be trained and used to determine measurements for TV type objects (e.g., determining a diagonal screen size, greatest TV depth, etc.)”

Five inventors are credited with the patent: Amit Jain, Aditya Sankar, Qi Shan, Alexandre Da Veiga, and Shreyas V Joshi.

Apple’s patent is another example of how machine learning can be combined with other technologies to add real utility and ultimately improve lives. There’s no telling when, or even if, Apple will release an updated Measure app based on this patent—but it seems more plausible in the not-so-distant future than many of the company’s patents.

(Image Credit: Apple)


How custom algorithms will shape the future of media buying (14 July 2021)

The digital advertising industry ingests and processes millions of data signals per second, generating immense volumes of data. While the industry is hyper-focused on cookie deprecation, the third-party cookie is actually only one marketing input; there are many other data signals, both online and offline, available to optimise media buying.

Algorithms based on artificial intelligence (AI) can be tailored to brands’ unique goals, allowing marketers to find pockets of performance within vast amounts of data and optimise media buying to drive real business outcomes. By adopting custom AI approaches that integrate a brand’s key performance indicators (KPIs), and by shaking off our third-party cookie dependence, we can welcome a new era of transparent and effective programmatic media.

User matching via first-party data signals

One way AI and custom algorithms will shape media buying is by matching converted consumers with prospects that have similar digital patterns. Rather than focussing on who consumers are – their age and gender, or where they live – AI looks beyond basic characteristics to focus on the most important behavioural signals of a likely customer. Two consumers can have completely different profiles but ultimately want the same thing. Where traditional audience targeting would miss this opportunity, algorithmic matching enables brands to identify and take advantage of these similar needs.
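
As a rough illustration of this kind of behavioural lookalike matching, the sketch below ranks prospects by cosine similarity of their behavioural feature vectors to those of known converters. The feature semantics and the nearest-converter scoring rule are assumptions for illustration only.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_lookalikes(converters: np.ndarray, prospects: np.ndarray) -> np.ndarray:
    """Rank prospects (rows) by their best similarity to any converter.

    Columns are behavioural signals -- e.g. visit recency, session depth,
    content-category affinities -- rather than demographic attributes,
    mirroring the behaviour-over-demographics point above."""
    scores = np.array([
        max(cosine_similarity(p, c) for c in converters)
        for p in prospects
    ])
    return np.argsort(scores)[::-1]  # indices of most similar prospects first
```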

Algorithmic consumer matching is currently based on first-party data signals, from retailers, brands or publishers. Moving forward, an explosion in new types of data is expected from connected cars and homes, internet-of-things devices, virtual and augmented reality, and biometrics, which will all feed into this process. AI will be vital to manage this data, and there must always be an emphasis on balancing the relationship between AI and ethics to ensure advertising works better for everyone while individual identities are protected.

Aligning media buying with brand objectives

A second way tailored algorithms will make media buying more effective is by aligning activity with brand objectives to deliver real business performance. Brands decide on the outcomes they want to achieve, allowing multi-metric KPIs and offline data inputs to be integrated into customised algorithms and ensure media buying is focussed on attaining those goals.

AI can increase efficiency by automatically directing spend towards areas of strong performance. The technology constantly checks itself to shift delivery and improve execution. Algorithms can predict which impressions will perform well – based on a huge variety of factors, such as the length of time since a user last visited an advertiser’s website – and generate far better conversion rates than can be achieved through manual optimisation.

In addition, once desired outcomes are established, custom algorithms can run thousands of real-time tests to determine the exact bid required to win media placements in an ad exchange. The performance of media buys can be continually measured, with results fed back into algorithms to create a closed loop of optimisation.
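
To make the closed loop concrete, here is a toy sketch of a bidder that prices each impression by expected value and feeds outcomes back in for the next round. The pricing rule and class names are assumptions for illustration, not any vendor’s actual algorithm.

```python
class ClosedLoopBidder:
    """Toy expected-value bidder with outcome feedback."""

    def __init__(self, target_cpa: float):
        self.target_cpa = target_cpa  # what a conversion is worth to us
        self.impressions_won = 0
        self.conversions = 0
        self.spend = 0.0

    def bid(self, predicted_conversion_prob: float) -> float:
        # Price the impression at its expected value: the probability of
        # a conversion times what a conversion is worth.
        return predicted_conversion_prob * self.target_cpa

    def record_outcome(self, won: bool, price_paid: float, converted: bool) -> None:
        # Measured results are fed back in, closing the loop so future
        # predictions and bids can be recalibrated.
        if won:
            self.impressions_won += 1
            self.spend += price_paid
            if converted:
                self.conversions += 1

    def observed_cpa(self) -> float:
        return self.spend / self.conversions if self.conversions else float("inf")
```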

While AI is vital to enhance and streamline digital media buying, it doesn’t remove people from the process by any means. Success relies on the initial input and continuous management of campaigns by highly skilled people from data scientists to media planners. Algorithmic success is about finding harmony between man and machine by optimising towards goals set and overseen by real people to ensure ethical application of technology.

Dynamically optimising creative for performance

The role of custom algorithms doesn’t end with buying the right impression at the right price; it also includes ad execution, and specifically optimising ad creative to maximise the chances of conversion. Sophisticated algorithms are used to select the most relevant and effective creative elements, according to a variety of data points, and to assemble ads that appeal to individuals at different stages of the purchase journey.

Volvo, for instance, recently used AI to generate cost-effective conversions from a digital advertising campaign in Norway. Custom algorithms were used to test creative elements such as logos, layouts, and messaging at scale to determine which creative versioning drove the most conversions at the lowest cost. As a result, Volvo saw a 440% increase in audiences configuring new cars and booking test-drives and made more efficient use of its marketing budget with a 66% reduction in cost-per-acquisition (CPA).

As technologies evolve and volumes of data increase in digital advertising, the creative applications of custom algorithms will continue to grow in ways we may not yet be able to imagine. What we can be sure of is that AI will be a necessary component in the toolbox of any marketer looking to optimise media buying and better deliver business outcomes.


Applause’s new AI solution helps tackle bias and sources data at scale (6 November 2019)

Testing specialists Applause have debuted an AI solution promising to help tackle algorithmic bias while providing the scale of data needed for robust training.

Applause has built a vast global community of testers for its app testing solution, which is trusted by brands including Google, Uber, and PayPal. The company is leveraging this unique asset to help overcome some of the biggest hurdles facing AI development.

AI News spoke with Kristin Simonini, VP of Product at Applause, about the company’s new solution and what it means for the industry ahead of her keynote at AI Expo North America later this month.

“Our customers have been needing additional support from us in the area of data collection to support their AI developments, train their system, and then test the functionality,” explains Simonini. “That latter part being more in-line with what they traditionally expect from us.”

Applause has worked predominantly with companies in the voice space, but it is increasingly expanding into areas such as gathering and labelling images and running documents through OCR.

This existing breadth of experience in areas where AI is most commonly applied today puts the company and its testers in a good position to offer truly useful feedback on where improvements can be made.

Specifically, Applause’s new solution operates across five unique types of AI engagements:

  • Voice: Source utterances to train voice-enabled devices, and test those devices to ensure they understand and respond accurately.
  • OCR (Optical Character Recognition): Provide documents and corresponding text to train algorithms to recognize text, and compare printed docs and the recognized text for accuracy.
  • Image Recognition: Deliver photos taken of predefined objects and locations, and ensure objects are being recognized and identified correctly.
  • Biometrics: Source biometric inputs like faces and fingerprints, and test whether those inputs result in an experience that’s easy to use and actually works.
  • Chatbots: Give sample questions and varying intents for chatbots to answer, and interact with chatbots to ensure they understand and respond accurately in a human-like way.

“We have this ready global community that’s in a position to pull together whatever information an organisation might be looking for, do it at scale, and do it with that breadth and depth – in terms of locations, genders, races, devices, and all types of conditions – that make it possible to pull in a very diverse set of data to train an AI system.”

Some examples Simonini provides of the types of training data Applause’s global testers can supply include voice utterances, specific documents, and images that meet set criteria like “street corners” or “cats”. A lack of such niche datasets with the necessary diversity is one of the biggest obstacles faced today and one Applause hopes to help overcome.

A significant responsibility

Everyone involved in developing emerging technologies carries a significant responsibility. AI is particularly sensitive because everyone knows it will have a huge impact across most parts of societies around the world, but no-one can really predict how.

How many jobs will AI replace? Will it be used for killer robots? Will it make decisions on whether to launch a missile? To what extent will facial recognition be used across society? These are important questions to which no-one can give a guaranteed answer, but they’re certainly on the minds of a public that’s grown up with the likes of 1984 and Terminator.

One of the main concerns about AI is bias. Fantastic work by the likes of the Algorithmic Justice League has uncovered gross disparities between the effectiveness of facial recognition algorithms dependent on the race and gender of each individual. For example, IBM’s facial recognition algorithm was 99.7 percent accurate when used on lighter-skinned males compared to just 65.3 percent on darker-skinned females.

Simonini highlights another study she read recently in which voice recognition accuracy for white males was over 90 percent. For African-American females, however, it was more like 30 percent.

Addressing such disparities is not only necessary to prevent things such as inadvertently automating racial profiling or giving some parts of society an advantage over others, but also to allow AI to reach its full potential.

While there are many concerns, AI has a huge amount of power for good as long as it’s developed responsibly. AI can drive efficiencies to reduce our environmental impact, free up more time to spend with loved ones, and radically improve the lives of people with disabilities.

A failure of companies to take responsibility for their developments will lead to overregulation, and overregulation leads to reduced innovation. We asked Simonini whether she believes robust testing will reduce the likelihood of overregulation.

“I think it’s certainly improved the situation. I think that there’s always going to probably be some situations where people attempt to regulate, but if you can really show that effort has been put forward to get to a high level of accuracy and depth then I think it would be less likely.”

Human testing remains essential

Applause is not the only company working to reduce bias in algorithms. IBM, for example, has a tool called AI Fairness 360 – essentially an AI itself – used to scan other algorithms for signs of bias. We asked Simonini why Applause believes human testing is still necessary.

“Humans are unpredictable in how they’re going to react to something and in what manner they’re going to do it, how they choose to engage with these devices and applications,” comments Simonini. “We haven’t yet seen an advent of being able to effectively do that without the human element.”

An often highlighted challenge with voice recognition is the wide variety of languages spoken and their regional dialects. Many American voice recognition systems even struggle with my accent from the South West of England.

Simonini adds another consideration about slang words and the need for voice services to keep up to date with changing vocabularies.

“Teenagers today like to, when something is hot or cool, say it’s “fire” [“lit” I believe is another one, just to prove I’m still down with the kids],” explains Simonini. “We were able to get these devices into homes and really try to understand some of those nuances.”

Simonini then further explains the challenge of understanding the context of these nuances. In her “fire” example, there’s a very clear need to understand when there’s a literal fire and when someone is just saying that something is cool.

“How do you distinguish between this being a real emergency? My volume and my tone and everything else about how I’ve used that same voice command is going to be different.”

The growth of AI apps and services

Applause established its business in traditional app testing. Given the expected growth in AI apps and services, we asked Simonini whether Applause believes its AI testing solution will become as big – or perhaps even bigger – than its current app testing business.

“We do talk about that; you know, how fast is this going to grow?” says Simonini. “I don’t want to keep talking about voice, but if you look statistically at the growth of the voice market vis-à-vis the growth and adoption of mobile; it’s happening at a much faster pace.”

“I think that it’s going to be a growing portion of our business but I don’t think it necessarily is going to replace anything given that those channels [such as mobile and desktop apps] will still be alive and complementary to one another.”

Simonini will be speaking at AI Expo North America on November 13th in a keynote titled ‘Why The Human Element Remains Essential In Applied AI’. We asked what attendees can expect from her talk.

“The angle that we chose to sort of speak about is really this intersection of the human and the AI and why we – given that it’s the business we’re in and what we see day-in, day-out – don’t believe that it becomes the replacement of but how it can work and complement one another.”

“It’s really a bit of where we landed when we went out to figure out whether you can replace an army of people with an army of robots and get the same results. And basically that no, there are still very human-focused needs from a testing perspective.”

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.
