Featured Archives - AI News

British court disagrees with Australia, rules that AIs cannot be patent inventors (22 September 2021)

The UK and Australia may have made a historic pact last week, but one thing they can’t agree on is whether AIs can be patent inventors.

AIs are increasingly being used to come up with new ideas and there’s an argument they should therefore be listed as the inventor by patent agencies. However, opponents say that patents are a statutory right and can only be granted to a person.

US-based Dr Stephen Thaler, the founder of Imagination Engines, has been leading the fight to give credit to machines for their creations.

Dr Thaler’s AI device, DABUS, consists of neural networks and was used to invent an emergency warning light, a food container that improves grip and heat transfer, and more.

In August, a federal court in Australia ruled that AI systems can be credited as inventors under patent law after Ryan Abbott, a professor at the University of Surrey, filed applications in the country on behalf of Dr Thaler. Similar applications were also filed in the UK, US, and New Zealand.

In the UK, the Intellectual Property Office rejected the applications on the grounds that – under the country’s Patents Act – only mere mortals can be recognised as inventors. The decision was appealed to the High Court in London, where the challenge failed once again.

Undeterred, Thaler took the case to the Court of Appeal back in July this year. He argued that he truly believed DABUS was the inventor, which should be enough to satisfy section 13(2)(a) of the aforementioned Patents Act.

Lord Justice Birss entertained the idea and stated that “Dr Thaler has complied with his legal obligations under s13(2)(a)”.

“The fact that no inventor, properly so-called, can be identified, simply means that there is no name which the Comptroller has to mention on the patent as the inventor,” Lord Justice Birss went on to say.

However, Lord Justice Birss’ colleagues – Lord Justice Arnold and Lady Justice Elisabeth Laing – ultimately disagreed and dismissed the appeal. Their reasoning goes back to the view that only humans can be credited as inventors.

“Dr Thaler did not identify ‘the person or persons whom he believes to be the inventor or inventors’ as required,” said Lord Justice Arnold.

“A patent is a statutory right and it can only be granted to a person,” added Lady Justice Laing. “Only a person can have rights. A machine cannot.”

The fight to give machines rights will have to continue another day.

(Photo by Jem Sahagun on Unsplash)

National Robotarium pioneers AI and telepresence robotic tech for remote health consultations (20 September 2021)

The National Robotarium, hosted by Heriot-Watt University in Edinburgh, has unveiled an AI-powered telepresence robotic solution for remote health consultations.

Using the solution, health practitioners would be able to assess a person’s physical and cognitive health from anywhere in the world. Patients could access specialists no matter whether they’re based in the UK, India, the US, or anywhere else.

Iain Stewart, UK Government Minister for Scotland, said:

“It was fascinating to visit the National Robotarium and see first-hand how virtual teleportation technology could revolutionise healthcare and assisted living.

Backed by £21 million UK Government City Region Deal funding, this cutting-edge research centre is a world leader for robotics and AI, bringing jobs and investment to the area.”

The project is part of the National Robotarium’s assisted living lab which explores how to improve the lives of people living with various conditions.

Dr Mario Parra Rodriguez, an expert in cognitive assessment from the University of Strathclyde, is working on the project and believes the solution will enable more regular monitoring and health assessments that are critical for people living with conditions like Alzheimer’s disease and other cognitive impairments.

“The experience of inhabiting a distant robot through which I can remotely guide, assess, and support vulnerable adults affected by devastating conditions such as Alzheimer’s disease, grants me confidence that challenges we are currently experiencing to mitigate the impact of such diseases will soon be overcome through revolutionary technologies,” commented Rodriguez.

“The collaboration with the National Robotarium, hosted by Heriot-Watt University, is combining experience from various disciplines to deliver technologies that can address the ever-changing needs of people affected by dementia.”

Dr Mauro Dragone is leading the research and explains how AI was vital for the project:

“Our prototype makes use of machine learning and artificial intelligence techniques to monitor smart home sensors to detect and analyse daily activities. We are programming the system to use this information to carry out a thorough, non-intrusive assessment of an older person’s cognitive abilities, as well as their ability to live independently.

Combining the system with a telepresence robot brings two major advances: Firstly, robots can be equipped with powerful sensors and can also operate in a semi-autonomous mode, enriching the capability of the system to deliver quality data, 24 hours a day, seven days a week. 

Secondly, telepresence robots keep clinicians and carers in the loop. These professionals can benefit from the data provided by the project’s intelligent sensing system, but they can also control the robot directly, over the Internet, to interact with the individual under their care. They can see through the eyes of the robot, move around the room or between rooms and operate its arms and hands to carry out more complex assessment protocols. They can also respond to emergencies and provide assistance when needed.”
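To make the idea of monitoring smart home sensors to detect daily activities a little more concrete, the following is a purely illustrative sketch (not the National Robotarium’s system) of how timestamped sensor events might be aggregated into simple behavioural features that a clinician could review; the sensor names and the fields chosen are hypothetical.

```python
from collections import Counter
from datetime import datetime

# Hypothetical smart-home events: (ISO timestamp, sensor id).
events = [
    ("2021-09-20T07:02:00", "bedroom_motion"),
    ("2021-09-20T07:10:00", "kettle_power"),
    ("2021-09-20T12:31:00", "fridge_door"),
    ("2021-09-20T19:45:00", "kettle_power"),
    ("2021-09-20T23:05:00", "bedroom_motion"),
]

def daily_activity_summary(events):
    """Aggregate raw sensor events into coarse daily-activity features."""
    counts = Counter(sensor for _, sensor in events)
    times = [datetime.fromisoformat(ts) for ts, _ in events]
    return {
        "kitchen_use": counts["kettle_power"] + counts["fridge_door"],
        "first_activity_hour": min(times).hour,
        "last_activity_hour": max(times).hour,
        "total_events": len(events),
    }

print(daily_activity_summary(events))
```

In practice, such features would be compared against each person’s own baseline, with deviations surfaced to the clinicians and carers who, as Dr Dragone notes, remain in the loop via the telepresence robot.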

Earlier this month, the UK government announced tax rises to fund social care, give people the dignity they deserve, and help the NHS recover from the pandemic.

However, some believe further rises are on the horizon. Innovative technologies could help to reduce costs while maintaining or improving care.

“Blackwood is always looking for solutions that help our customers to live more independently whilst promoting choice and control for the individual. Robotics has the potential to improve independent living, provide new levels of support, and integrate with our digital housing and care system CleverCogs,” said Mr Colin Foskett, Head of Innovation at Blackwood Homes and Care.

“Our partnership with the National Robotarium and the design of the assisted living lab ensures that our customers are involved in the co-design and co-creation of new products and services, increasing our investment in innovation and in the future leading to new solutions that will aid independent living and improve outcomes for our customers.”

Our sister publication, IoT News, reported on the construction of the £22.4 million National Robotarium earlier this year—including some of the facilities, equipment, and innovative projects that it hosts.

Web data is driving AI development (14 September 2021)

Artificial Intelligence (AI) is fast shaping the world around us and is becoming increasingly important within business operations. In fact, research by Deloitte shows that 73% of IT and line-of-business executives see AI as an indispensable part of their current business. It’s clear to see there is great potential for AI in virtually all areas of our lives, but AI systems can only ever be as powerful as the information that they are built on. With huge quantities of very specific data needed to effectively train systems in the right way, we’ll explore the key points behind the data required and how it is being sourced. 

Web data – The AI goldmine 

First, we will look at where the data comes from, and it is more easily available than you might have assumed. That’s because it often comes from the largest source of information that has ever existed – publicly available web data. Public social media data, to give just one example, is being utilised by organisations as a source of information about consumer sentiment and behaviour. This data is being used to develop AI systems by businesses in industries as varied as insurance, market research, consumer finance, and real estate to gain an edge over their competition.

In these instances, information such as Twitter posts and online reviews is leveraged to develop the AI insights needed to stay afloat in a volatile business environment. For example, hiring announcements on Twitter or other job websites for positions in the automotive industry could indicate an economic rebound in that sector, or that the industry itself anticipates an uptick in demand.
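As a loose illustration of how such signals can be derived, the snippet below counts hiring-related mentions of a sector across a handful of hypothetical public posts and groups them by week; a real pipeline would work at a vastly larger scale, with properly sourced and cleaned data.

```python
from collections import defaultdict
from datetime import date

# Hypothetical public posts: (date, text).
posts = [
    (date(2021, 8, 2), "We're hiring automotive engineers for our new EV line"),
    (date(2021, 8, 4), "Great quarter for used car sales"),
    (date(2021, 8, 11), "Now hiring: assembly technicians, automotive plant"),
    (date(2021, 8, 12), "Job opening: automotive supply chain analyst"),
]

HIRING_TERMS = ("hiring", "job opening", "vacancy")

def weekly_hiring_signal(posts, keyword="automotive"):
    """Count posts per ISO week mentioning both a hiring term and the sector keyword."""
    signal = defaultdict(int)
    for day, text in posts:
        lowered = text.lower()
        if keyword in lowered and any(term in lowered for term in HIRING_TERMS):
            signal[day.isocalendar()[1]] += 1  # ISO week number
    return dict(signal)

print(weekly_hiring_signal(posts))  # {31: 1, 32: 2}
```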

Overcoming data hurdles 

Although the data is widely available, accessing public web data at this mammoth scale is not without its challenges. Organisations are often blocked – by competitors or for other reasons – in the process of retrieving data, or they encounter difficulties accessing data in every region they are looking to target globally. Therefore, it is important that businesses adopt a web data platform that can consistently feed them the data they need. It will need to be a global network, with the capacity to handle gargantuan data volumes.

Being able to access the correct data is essential: teaching AI systems properly is impossible without following the proper data retrieval protocols, because only “clean”, accurate data can create the right level of ROI for businesses. Often, requests seen as coming from data centres are blocked by websites, or fed incorrect information, because businesses want to prevent their competition from accessing data and gaining a competitive advantage. Using a flexible web data platform solves this problem, as it provides you with the transparent view of the internet that the web was originally intended to offer.
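To make the blocking problem concrete, here is a minimal, generic sketch of fetching a public page through a rotating set of outbound routes with retries. The proxy endpoints are placeholders, and this is not tied to any particular vendor’s platform.

```python
import itertools
import requests  # third-party HTTP client

# Placeholder proxy endpoints; a commercial web data platform would manage these for you.
PROXIES = [
    "http://proxy-eu.example:8080",
    "http://proxy-us.example:8080",
    "http://proxy-apac.example:8080",
]

def fetch_with_rotation(url, max_attempts=3, timeout=10):
    """Try each outbound route in turn until one returns a 200 response."""
    for _, proxy in zip(range(max_attempts), itertools.cycle(PROXIES)):
        try:
            resp = requests.get(
                url,
                proxies={"http": proxy, "https": proxy},
                timeout=timeout,
            )
            if resp.status_code == 200:
                return resp.text
        except requests.RequestException:
            pass  # blocked, timed out, or unreachable: rotate to the next route
    raise RuntimeError(f"Could not retrieve {url} after {max_attempts} attempts")
```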

The power of correct data 

Data is growing at an exponential rate and, although businesses can benefit from this, they must take steps to ensure the right technology and processes are in place to generate real value. When looking at building an AI system, you could liken it to building a house. You can have the best architect or the best team of builders on the planet, but if there are any flaws with the raw materials, if they are the wrong type, or if there are simply not enough of them, there are going to be serious issues with the final product. If you build on a foundation consisting of clean and accurate web data sources, you will have a robust base on which to build powerful AI systems. These systems will be able to provide effective, dependable, and relevant business insights despite the unprecedented volatility in market trends.
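Continuing the raw-materials analogy, a first pass at that foundation is often nothing more exotic than de-duplicating records and rejecting those with missing fields before anything reaches a model. The record schema below is invented purely for illustration.

```python
def clean_records(records, required_fields=("url", "text", "timestamp")):
    """Drop duplicates and records missing required fields before training."""
    seen = set()
    cleaned = []
    for record in records:
        if any(not record.get(field) for field in required_fields):
            continue  # incomplete record: unusable raw material
        key = (record["url"], record["timestamp"])
        if key in seen:
            continue  # exact duplicate of something already kept
        seen.add(key)
        cleaned.append(record)
    return cleaned

raw = [
    {"url": "https://example.com/a", "text": "review one", "timestamp": "2021-09-01"},
    {"url": "https://example.com/a", "text": "review one", "timestamp": "2021-09-01"},
    {"url": "https://example.com/b", "text": "", "timestamp": "2021-09-02"},
]
print(len(clean_records(raw)))  # 1
```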

Editor’s note: This article is in association with Bright Data

(Photo by calvin chou on Unsplash)

Twitter begins labelling ‘good’ bots on the social media platform (10 September 2021)

Twitter is testing a new feature that will give the good kind of bots some due recognition.

Bots have become a particularly hot topic in recent years, but mainly for negative reasons. We’ve all seen their increased use to share propaganda to sway democratic processes and spread disinformation around things like COVID-19 vaccines.

However, despite their image problem, bots can be an important tool for good.

Some bots share critical information around things like severe weather, natural disasters, active shooters, and other emergencies. Others can be educational and provide facts or dig up historical events and artifacts to remind us of the past as we’re browsing on our modern devices.

On Thursday, Twitter announced that it’s testing a new label to let users know that an account is automated but shares legitimate content.

Twitter says the new feature is based on user research which found that people want more context about non-human accounts.

A study by Carnegie Mellon University last year found that almost half of Twitter accounts tweeting about the coronavirus pandemic were likely automated accounts. Twitter says it will continue to remove fake accounts that break its rules.

The move could be likened to Twitter’s verified accounts scheme that puts a little blue tick mark next to a user’s name to show others that it belongs to the person in question and isn’t a fake, often created for scam purposes.

However, unlike Twitter’s verified accounts scheme that provides no guarantees about the content of a user’s tweets, the social network is taking a bit of a gamble that tweets from a ‘good’ bot account will remain accurate.

(Photo by Jeremy Bezanger on Unsplash)

Janine Lloyd-Jones, Faculty: On the ethical considerations of AI and ensuring it’s a tool for positive change (6 September 2021)

The benefits of AI are becoming increasingly clear as deployments ramp up, but fully considering the technology’s impact must remain a priority to build public trust.

AI News caught up with Janine Lloyd-Jones, director of marketing and communication at Faculty, to discuss how the benefits of AI can be unlocked while ensuring deployments remain a tool for positive change.

AI News: What are the constraints, ethical considerations, and potential for deep reinforcement learning?

Janine Lloyd-Jones: Whilst reinforcement learning can be used in video games, robotics and chatbots, the reality is that we can’t fully unlock the power of these tools as the risks are high and models like these are hard to maintain.

As AI makes more and more critical decisions about our everyday lives, it becomes even more important to know it’s operating safely. We’ve developed a first-of-its-kind explainability tool which generates explanations quickly and incorporates causality, making it easier to improve the performance of models because users understand how they make decisions. This has been integral to our Early Warning System (EWS), allowing NHS staff to understand and interpret each forecast, which has increased the adoption of the tool.

It’s our view that clearer regulation is needed to ensure AI is being used safely, but this needs to be informed by what’s practical and possible to implement. We also need to ensure we don’t stifle innovation. Any regulation needs to be context-dependent. For example, when AI is used to make decisions in a medical diagnostics context, safety becomes far more important than if an AI algorithm is trying to choose which advertisement to show you. As we acquire the right tools and regulation, it’s exciting to see what complex AI like deep reinforcement learning will achieve in our industry and society.
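Faculty’s explainability tool is proprietary, so as a generic stand-in for the same idea (understanding which inputs drive a model’s decisions), the sketch below uses permutation importance from scikit-learn on synthetic data; it illustrates the general approach rather than Faculty’s causality-based method.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in data; a real deployment would use domain data and a tuned model.
X, y = make_classification(n_samples=500, n_features=6, n_informative=3, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much the score drops:
# a large drop means the model leans heavily on that feature.
# (Scored on the training set here purely for brevity.)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```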

AN: Faculty is among the companies taking admirable steps to offset its carbon emissions. How can AI play a role in combating climate change?

JL: We’re here because we believe that AI can change the world – we want to take this technology and use it to solve real, tangible, important problems.

JL: Like many tech companies, we find that the biggest source of our carbon emissions is cloud computing (the tech sector has a greater carbon footprint than the aviation industry), but sustainable AI can be part of the solution. Our work includes analysing data for Arctic Basecamp and regulating pressure on the UK gas grid. We’re expanding our sustainable AI work with environmental organisations, supporting them to tackle climate change.

AN: How quickly do you think most factories will either go entirely “dark” – as in having no or very few humans working in them – or at least have a portion of them being fully autonomous? How can the workforce prepare for such changes?

JL: AI is not universal just yet, so we don’t expect we’ll see factories going entirely dark anytime soon. Most companies are using AI to automate, save time and increase productivity, but the potential of AI is huge – it will transform industries. AI can become the unconscious mind of an organisation, processing vast volumes of data quickly, and freeing humans to focus on what they’re best at and where their input is needed; humans have a far greater appreciation for nuance and context for example.

We’ve already helped clients across industries do this: cutting a backlog of cases from four years to just four weeks, developing models which detect harmful content online with a positive rate of 94%, and helping large retailers ensure they are marketing to the customers most likely to purchase, increasing profits by 5%.

AN: The NHS was able to enhance its forecasting abilities thanks to its partnership with Faculty. What successes were achieved and was anything learnt from the experience that could improve future predictions?

JL: We’re really proud of our partnership with the NHS; our groundbreaking Early Warning System (EWS) was crucial in the NHS’ nationwide pandemic data strategy, forecasting spikes in Covid-19 cases and hospital admissions weeks in advance. These forecasts allowed the NHS to ensure there were enough staff, beds and vital equipment allocated for patients. There are over 1000 users of the model across the NHS.

Following the success of the tool, we are addressing new areas where AI forecasting can be used to improve service delivery and patient care in the NHS, including predicting A&E demand and winter pressures. The EWS uses our Operational Intelligence software, leveraging Bayesian hierarchical modelling to form forecasts from a national level down to an individual trust level. We’ve used the same software in scenarios where demand forecasting is needed, including for consumer goods.
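As a rough illustration of the hierarchical idea behind forecasting from a national level down to individual trusts, the toy calculation below applies simple partial pooling: each trust’s estimate is shrunk towards the national average in proportion to how little local data it has. The figures are made up, and this is not Faculty’s Operational Intelligence software.

```python
import statistics

# Hypothetical weekly admissions observed at a handful of trusts.
trust_admissions = {
    "trust_a": [120, 130, 125, 140],
    "trust_b": [30, 45],          # little data: leans on the national picture
    "trust_c": [80, 82, 79, 85, 90],
}

def partial_pool(trust_admissions, prior_strength=4.0):
    """Shrink each trust's mean towards the national mean; the less data, the more shrinkage."""
    all_obs = [x for obs in trust_admissions.values() for x in obs]
    national_mean = statistics.mean(all_obs)
    pooled = {}
    for trust, obs in trust_admissions.items():
        weight = len(obs) / (len(obs) + prior_strength)
        pooled[trust] = weight * statistics.mean(obs) + (1 - weight) * national_mean
    return national_mean, pooled

national, per_trust = partial_pool(trust_admissions)
print(round(national, 1), {k: round(v, 1) for k, v in per_trust.items()})
```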

AN: Faculty continues to expand rapidly and recently raised £30m that it expects to use to create 400 new jobs and accelerate its international expansion. What else is a key focus for Faculty over the coming years?

JL: We’re excited to be able to bring the power of AI to even more customers, helping them to make effective decisions with real-world impact. We are enhancing our technology offering, hiring 400 new people over the next few years and accelerating our international expansion. We are also doubling down on our AI safety research programme, so our customers have the assurance that all of our AI models are always performing safely and to the best of their ability.

AN: What will Faculty be sharing with the audience at this year’s AI Expo Global?

JL: We’re glad to be at in-person events again, and we’re looking forward to meeting fellow exhibitors and attendees. Our focus at this year’s AI Expo will be on our Customer Intelligence software – which we are predominantly using within the consumer industries to demonstrate the impact marketing has on individual customer behaviour. Millions in marketing spend are wasted each year on the wrong people. With our technology, marketers will finally have the insight to know when, and on whom, they should be focusing their efforts.

We’re also sharing more about our Faculty Fellowship, our in-house L&D programme where organisations looking to expand their data science teams can work with top data scientists for six weeks before deciding whether to hire them. This is particularly critical as the UK tech industry looks to hire and attract the top talent. We’ve already had some great companies take part in this programme, from Virgin Media and Vodafone through to leading startups like The Trade Desk and JustEat.

AN: It’s the 20th anniversary of the Faculty Fellowship in October – what’s the focus for the Fellowship over the coming years?

JL: Faculty began with the Fellowship, so it’s a really special milestone to be celebrating the 20th anniversary. With demand for data scientists at an all-time high – over 100,000 vacancies in 2020 alone – it’s a competitive space. We expanded the programme this year to include an additional fellowship, and we’re continuously working to ensure we are attracting top talent and making the process as easy as possible for our partner companies.

Overstretched teams are fed up with spending their time on hiring and long interview rounds—the fellowship is designed so companies only invest 2-3 hours in total but have an elite data scientist embedded in their team within weeks.

(Photo by Clark Tibbs on Unsplash)

Faculty will be sharing their invaluable insights during this year’s AI & Big Data Expo Global which runs from 6-7 September 2021. Faculty’s stand number is 178. Find out more about the event here.

Sebastian Santibanez, SoftServe: On helping enterprises successfully use AI in their digital transformations (3 September 2021)

AI News spoke with Sebastian Santibanez, Associate Director of the Advanced Technologies Group at SoftServe, about how the company is helping enterprises to successfully use AI in their digital transformations.

AI News: What work do you do in the artificial intelligence space? 

Sebastian Santibanez: We understand that the truly successful data-minded organizations are very fluid in their definition of AI, and SoftServe has embraced this fluidity by thinking of AI organizations and solutions as those that touch, even transversally, on ML, big data, XR, IoT, robotics, and many other advanced technologies. With that said, our AI work spans the full business cycle, from strategic digital consulting to solution design and build to maintenance. Depending on the maturity level of our clients, we support them in different ways:

  • Clients who are at the beginning of their digital journey get more value when we work together revealing the possibilities of technology. Especially now that a larger share of business value is linked to certain AI initiatives, our clients trust us to design a sound digital strategy around their AI-related goals and, conversely, to ensure that their AI dreams advance well anchored to a digital strategy. We see companies who want to start building AI projects before they have a sound strategy, and we sometimes help them step back and reframe their strategy before moving forward too quickly. Often, building PoCs is part of finding the strategy.
  • Clients who have already taken their first steps in their digital journey often see more value when SoftServe helps drive their transformation. In this area, we do a lot of work with our clients accelerating their innovation and IP generation, as well as developing their raw ideas into market-ready AI solutions.
  • Clients who are more digitally savvy often engage us to accelerate and optimize their AI-backed initiatives. We normally see extremely valuable market solutions that were created in a semi-artisanal fashion and are, of course, very hard to optimize and maintain effectively. This is where our experience in cloud AI and XOps really shines, as we are able to transform good AI ideas into well-tuned production machines.

From a technical and organizational point of view, we support our clients with our Centers of Excellence in data science, big data analytics, IoT, XR, robotics, and cybersecurity, in addition to our in-house R&D department and our vast organizational experience in cloud, DevOps, and general software development. We’re prime partners of all major cloud providers and were just awarded Google Cloud Partner of the Year in Machine Learning. We have 10,000 associates around the world, with a very well-established presence in Europe and North America and a fast-growing presence in the Middle East, Latin America, and Asia.

Transversally, we are known in the market for our obsession with driving measurable solutions. Long ago, we collectively realized that many clients were really struggling with identifying the value potential of their current AI initiatives, or designing AI solutions that drove measurable value, which of course was hurting their stance in front of shareholders and leadership.

Our work in AI and associated technologies goes very deeply into identifying the actual business value of the solutions we design and finding ways to effectively measure and communicate the outputs of our clients’ AI initiatives. Historically, clients have been tempted to measure the outcome of their AI solutions in terms of cost savings and revenue increases, which are of course important but certainly not the only metrics that matter.

AI has the tremendous potential of driving a competitive edge by accelerating speed to value when correctly aligned with an organization’s digital journey. We make sure that our clients develop their business with these goals in mind.

AN: What are the latest trends you’ve noticed developing in artificial intelligence and how do you think this will impact businesses and society in general? 

SS: Over the last few years, we have seen a transition in the market from a one-off experimental AI mindset to a more intentional, mature, and business-centred approach to data-backed solutions, which is likely fuelled by the availability of empirical data on what makes AI organizations successful.

Organizations are finding the right recipes to delight their customers and increase loyalty with AI and are investing in the right things: strengthening their data management tools and practices; improving (or even initiating) their data governance programs, better aligning AI initiatives to strategy, and creating a more AI-friendly culture.

We have no doubt that this paradigmatic shift is positive for businesses and for us and our communities. This mature, business-centred approach to AI means that a larger number of optimal solutions will reach the market and will positively affect the lives of billions.

As consumers, we will enjoy access to higher quality, cheaper goods and services which are optimized with AI, and as members of our communities we might see that our essential services such as public transportation, infrastructure, or health also become more efficient and affordable for all, which, of course, has the added value of reducing the negative impacts of our lifestyles on our planet.

From a more technical point of view, we’re seeing rising expectations of how AI and related technologies like robotics and XR can benefit organizations. Take the case of manufacturers as an example: more of them are accelerating their transition from reactive maintenance to predictive maintenance informed by a combination of IoT, big data, and AI, and more of them are also evolving their sample-based quality controls to 100% sample methodologies assisted by computer vision, XR, edge computing, and other technologies.
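(As an editorial aside, the reactive-to-predictive shift described above often starts with something as simple as flagging sensor readings that drift outside a rolling band; the vibration figures below are invented and the thresholding is deliberately naive, purely to illustrate the concept rather than SoftServe’s implementations.)

```python
import statistics

# Hypothetical vibration readings streamed from a machine (mm/s).
readings = [2.1, 2.0, 2.2, 2.1, 2.3, 2.2, 2.1, 3.9, 4.2, 4.5]

def flag_anomalies(readings, window=5, z_threshold=3.0):
    """Flag readings far from the rolling mean of the previous `window` values."""
    alerts = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mean = statistics.mean(history)
        stdev = statistics.stdev(history) or 1e-6  # guard against a flat window
        z = (readings[i] - mean) / stdev
        if abs(z) > z_threshold:
            alerts.append((i, readings[i], round(z, 1)))
    return alerts

print(flag_anomalies(readings))  # the jump to 3.9 trips an alert
```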

These new expectations add a burden to AI-adjacent technologies like IoT or MLOps because they demand the enablement of heavy workloads at the edge and the continuous development and management of algorithms to satisfy fast-evolving needs, which in turn requires complex containerization and orchestration of physical resources and code across the globe. The industry is, in general, responding well to this challenge and we’re observing a mindset change from creating siloed solutions that are conceived with a focus on one part of the value chain, to a mindset that values convergence of technologies along the value chain.

Clients are also sunsetting their Hadoop clusters and switching back to SQL-based solutions like cloud-native warehouses and distributed query engines, which tremendously help to streamline the cloud-native AI lifecycle. We’re also constantly hearing about the desire to virtualize processes, which is something that digital twins, simulations, reinforcement learning, and other data science methods, along with sensorization, are enabling.

Organizations are using or gearing towards using this virtualization to analyse a variety of scenarios in the safety of the cloud and optimize their real-world operations; not only operations of their physical assets and systems of course, but also process optimization via process twinning, which helps organizations optimize their business workflows. Clients have seen the first wave of successful projects in these areas in the past years and are much more comfortable in investing in these solutions.

If these rising expectations remain informed by empirical evidence and stay within the goldilocks zone of the art of the possible, I think the implications for businesses and society in general are going to be very positive. The call to action, however, is to be very careful in identifying which expectations are rooted in solid evidence and which need to be treated as pie in the sky. Both have their place and need to exist for a healthy AI market, but we can’t let the audience confuse the two.

Another aspect we are also starting to see, even if just more recently and not yet forming a critical mass, is an increased awareness of security issues, fairness, and explainability in AI. Executives are starting to understand how fragile some AI solutions can be to attacks that manipulate data in order to change an AI result, and are designing their solutions with that added layer of robustness in mind.

Curiously enough, this security awareness seems to have started unidirectionally, from the AI layer towards the data-generating layers, but it hasn’t yet reached the data-generating end of the AI lifecycle; there is still a lot of work to do in the industry so the numerous sensorization efforts are as secure as the cloud workloads.

On the fairness and explainable AI front, policymakers and technologists are coming to terms with some societal implications of trusting AI to make decisions that directly affect people. We are seeing more social actors asking the right questions of “what criteria is this algorithm using to decide on X or Y”, and at the same time, technologists are starting to promote more and more the use of explainable AI models.

As a matter of fact, only in the last year or so have the three largest cloud providers joined the efforts initiated by IBM a few years back in promoting explainable and fair AI tools. Again, the business and societal implications of these aspects are in general very positive. Safer workloads and transparent analytics mean that life-impacting decisions can be well informed by AI, which is of course in everyone’s best interest. The big caveat will be making sure that technologists and policymakers work together in ensuring that we are able to secure the whole data pipeline, from collection to analytics.

AN: The company recently became an advisor and technology partner on UNICEF Ukraine. What does this partnership entail and why did you choose to partner with UNICEF?

SS: We are expanding our strategic partnership with UNICEF Ukraine through 2023. SoftServe will now serve as an advisor and technology partner on UNICEF Ukraine’s projects working toward the goals of sustainable development for children. We have outlined opportunities for cooperation in software development and other activities to support UNICEF programs in Ukraine in education, health, child protection, social policy, communication for development, and others.

Our partnership with UNICEF Ukraine began in April 2020. To date, we have implemented numerous initiatives, including a platform for collecting and analyzing COVID-19 statistics in Ukraine, the launch of the country’s National Volunteer Platform, a web portal dedicated to reforming Ukraine’s school nutrition system, an infant care app for young parents, and an evidence-based medicine website. In 2021, SoftServe will also work on updating the national vaccination portal.

UNICEF’S projects in Ukraine systematically address social issues in child protection. The goal of these initiatives – to enable talented people to change the world – aligns perfectly with SoftServe’s mission.

AN: The company has also become an official member of the United Nations (UN) Global Compact. What do you hope to achieve as part of the Global Compact?

SS: It’s an opportunity for us to become part of the global movement of companies that are changing the world for the better and it’s a new step for us in creating a sustainable business. We are committed to the UN Global Compact initiative and its principles in the areas of human rights, labour, the environment, and anti-corruption. 

Our cooperation with the UN began in 2019. We participated in the ‘Hack for Locals’ hackathon that aimed to develop creative digital solutions to solve problems in local communities.

This year, we joined ‘Co-create with Locals’, the pilot program for the United Nations Development Programme (UNDP), which aims to engage activists in developing innovative solutions in public safety and social cohesion and will be implemented on SoftServe’s Innovation Platform.

AN: Finally, what other notable latest developments have there been recently at SoftServe?

SS: SoftServe surpassed ten thousand employees, a significant milestone, as of July 2021. Our headcount has grown by 26% since the beginning of the year thanks to the growing demand for digital services and an expanding customer base.

SoftServe also won the 2020 Google Cloud Global Specialization Partner of the Year – Machine Learning award.

Finally, SoftServe appointed Adriyan Pavlykevych as Chief Information Security Officer (CISO) as of June 2021. Pavlykevych has almost 20 years of experience with SoftServe. As CISO, he will be responsible for shaping and implementing SoftServe’s information governance and security strategy, including ensuring the secure delivery of the company’s engineering services and maintaining and developing its cyber defense capabilities.

(Photo by Cytonn Photography on Unsplash)

Santibanez will be sharing his invaluable insights during this year’s AI & Big Data Expo Global, which runs from 6-7 September 2021. Find out more about his sessions and how to attend here.

The UK is changing its data laws to boost its digital economy (26 August 2021)

Britain will diverge from EU data laws that have been criticised as being overly strict and driving investment and innovation out of Europe.

Culture Secretary Oliver Dowden has confirmed the UK Government’s intention to diverge from key parts of the infamous General Data Protection Regulation (GDPR). Estimates suggest there is as much as £11 billion worth of trade that goes unrealised around the world due to barriers associated with data transfers.

“Now that we have left the EU, I’m determined to seize the opportunity by developing a world-leading data policy that will deliver a Brexit dividend for individuals and businesses across the UK,” said Dowden.

When GDPR came into effect, it received its fair share of both praise and criticism.  On the one hand, GDPR admirably sought to protect the data of consumers. On the other, “pointless” cookie popups, extra paperwork, and concerns about hefty fines have caused frustration and led many businesses to pack their bags and take their jobs, innovation, and services to less strict regimes.

GDPR is just one example. Another would be Articles 11 and 13 of the EU Copyright Directive, which some – including the inventor of the World Wide Web, Sir Tim Berners-Lee, and Wikipedia founder Jimmy Wales – have opposed as being an “upload filter”, “link tax”, and “meme killer”. A blog post from YouTube explained why creators should care about Europe’s increasingly strict laws.

Mr Dowden said the new reforms would be “based on common sense, not box-ticking” but uphold the necessary safeguards to protect people’s privacy.

What will the impact be on the UK’s AI industry?

AI is, of course, powered by data—masses of it. The idea of mass data collection terrifies many people but is harmless so long as it’s truly anonymised. Arguably, it’s a lack of data that should be more concerning as biases in many algorithms today are largely due to limited datasets that don’t represent the full diversity of our societies.

Western facial recognition algorithms, for example, have far more false positives against minorities than they do white men—leading to automated racial profiling. A 2010 study by researchers at NIST and the University of Texas found that algorithms designed and tested in East Asia are better at recognising East Asians.
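One way to make such bias measurable is to compute error rates separately for each demographic group rather than in aggregate, as in the sketch below; the evaluation records are fabricated purely to show the calculation.

```python
from collections import defaultdict

# Fabricated evaluation records: (group, predicted_match, actual_match).
records = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
]

def false_positive_rate_by_group(records):
    """False positives divided by actual negatives, computed per demographic group."""
    fp = defaultdict(int)
    negatives = defaultdict(int)
    for group, predicted, actual in records:
        if not actual:
            negatives[group] += 1
            if predicted:
                fp[group] += 1
    return {g: fp[g] / negatives[g] for g in negatives if negatives[g]}

# group_b's rate comes out higher: exactly the kind of gap an audit should flag.
print(false_positive_rate_by_group(records))
```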

However, the data must be collected responsibly and checked as thoroughly as possible. Last year, MIT was forced to take offline a popular dataset called 80 Million Tiny Images that was created in 2008 to train AIs to detect objects after discovering that images were labelled with misogynistic and racist terms.

While the UK is a European leader in AI, few people are under any illusion that it could become a world leader in pure innovation and deployment because it’s simply unable to match the funding and resources available to powers like the US and China. Instead, experts believe the UK should build on its academic and diplomatic strengths to set the “gold standard” in ethical artificial intelligence.

“There’s an opportunity for us to set world-leading, gold standard data regulation which protects privacy, but does so in as light touch a way as possible,” Mr Dowden said.

As it diverges from the EU’s laws in the first major regulatory shakeup since Brexit, the UK needs to show it can strike a fair balance between the EU’s strict regime and the arguably too lax protections in many other countries.

The UK also needs to promote and support innovation while avoiding the “Singapore-on-Thames”-style model of a race to the bottom in standards, rights, and taxes that many Remain campaigners feared would happen if the country left the EU. Similarly, it needs to prove that “Global Britain” is more than just a soundbite.

To that end, Britain’s data watchdog is getting a shakeup and John Edwards, New Zealand’s current privacy commissioner, will head up the regulator.

“It is a great honour and responsibility to be considered for appointment to this key role as a watchdog for the information rights of the people of the United Kingdom,” said Edwards.

“There is a great opportunity to build on the wonderful work already done and I look forward to the challenge of steering the organisation and the British economy into a position of international leadership in the safe and trusted use of data for the benefit of all.”

The UK is also seeking global data partnerships with six countries: the United States, Australia, the Republic of Korea, Singapore, the Dubai International Finance Centre, and Colombia. Over the long term, it is hoped that agreements with fast-growing markets like India and Brazil can be struck to facilitate data flows in scientific research, law enforcement, and more.

Commenting on the UK’s global data plans Andrew Dyson, Global Co-Chair of DLA Piper’s Data Protection, Privacy and Security Group, said:

“The announcements are the first evidence of the UK’s vision to establish a bold new regulatory landscape for digital Britain post-Brexit. Earlier in the year, the UK and EU formally recognised each other’s data protection regimes—that allowed data to continue to flow freely after Brexit.

This announcement shows how the UK will start defining its own future regulatory pathways from here, with an expansion of digital trade a clear driver if you look at the willingness to consider potential recognition of data transfers to Australia, Singapore, India and the USA.

It will be interesting to see the further announcements that are sure to follow on reforms to the wider policy landscape that are just hinted at here, and of course the changes in oversight we can expect from a new Information Commissioner.”

An increasingly punitive EU is not likely to react kindly to the news; the bloc added clauses to the recent deal reached with the UK to prevent the country diverging too far from its own standards.

Mr Dowden, however, said there was “no reason” the EU should react with too much animosity as the bloc has reached data agreements with many countries outside of its regulatory orbit and the UK must be free to “set our own path”.

(Photo by Massimiliano Morosinotto on Unsplash)

NIST: VisionLabs, IDEMIA, and CloudWalk lead in facial recognition accuracy (25 August 2021)

A report from the US government’s National Institute of Standards and Technology (NIST) reveals the accuracy of various facial recognition algorithms.

The latest edition of the report currently has VisionLabs, IDEMIA, and CloudWalk in the lead:

In the report’s results, higher numbers are better as they indicate a lower prevalence of false positives.

The “N” values represent the number of individuals enrolled in each simulation of aircraft boarding. The N = 42,000 simulation, for example, is designed to represent an airport security line where many people are expected. The “k” values give the number of images of each enrollee in each gallery.
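As a schematic of what a false positive means in this kind of identification test, the toy simulation below enrols N subjects with k images each and then probes the gallery with people who are not enrolled; any probe whose best similarity score clears the match threshold counts as a false positive. The scores are random stand-ins for a real face matcher, so only the shape of the calculation, not the numbers, is meaningful.

```python
import random

random.seed(0)

def simulate_fpir(n_enrolled=420, k_images=4, n_probes=500, threshold=0.7):
    """Probe a gallery with non-enrolled faces and measure how often one falsely matches."""
    false_positives = 0
    comparisons = n_enrolled * k_images
    for _ in range(n_probes):
        # Stand-in for a matcher: impostor similarity scores drawn from a
        # distribution well below the threshold, one per enrolled image.
        best = max(random.gauss(0.3, 0.1) for _ in range(comparisons))
        if best >= threshold:
            false_positives += 1
    return false_positives / n_probes  # false positive identification rate

# Larger galleries mean more chances for a stray high score, so accuracy
# at N = 42,000 is a much harder test than at N = 420.
print(simulate_fpir(n_enrolled=420))
```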

In the test simulating an airport security line, it’s CloudWalk that currently leads the pack in accuracy.

CloudWalk is a controversial facial recognition software developer based in Guangzhou, China that has been sanctioned by the United States government for allegedly participating in major human rights abuses.

According to US officials, CloudWalk was “complicit in human rights violations and abuses committed in China’s campaign of repression, mass arbitrary detention, forced labour and high-technology surveillance against Uighurs, ethnic Kazakhs, and other members of Muslim minority groups in the Xinjiang Uighur Autonomous Region”.

France-based IDEMIA ranks a respectable second in the test overall while beating CloudWalk in the N = 420 simulations and avoiding the same controversies.

Jean-Christophe Fondeur, Chief Technology Officer at IDEMIA, said:

“IDEMIA’s technologies are based on over 30 years of expertise in deep learning and artificial intelligence and we see it as our responsibility to bring this expertise to the everyday traveler, keeping passengers safe across the world.

NIST’s results confirm the robustness of our technologies with regard to managing different demographics. IDEMIA’s facial recognition technology achieves the most accurate results and delivers a key competitive advantage when handling complex scenarios.”

Amsterdam-based VisionLabs is the leader in both of the N = 420 simulations. The company boasts a face descriptor that is five times smaller than its competitors’ and a matching speed that is at least 50 times faster.

(Image Credit: IDEMIA)

Unity devs aren’t too happy their work is being sold for military AI purposes (24 August 2021)

Developers from Unity are calling for more transparency after discovering their AI work is being sold to the military.

Video games have pioneered AI developments since Nim was released in 1951. In the decades since, game developers have worked to improve AIs to provide a more enjoyable experience for a growing number of people around the world.

Just imagine the horror if those developers found out their work was instead being used for real military purposes without their knowledge. That’s exactly what developers behind the popular Unity game engine discovered.

According to a Vice report, three former and current Unity employees confirmed that much of the company’s contract work involves AI programming. That’s of little surprise and wouldn’t be of too much concern if it weren’t conducted under the “GovTech” department with a seemingly high degree of secrecy.

“It should be very clear when people are stepping into the military initiative part of Unity,” one of Vice’s sources said, on condition of anonymity for fear of reprisal.

Vice discovered several deals with the Department of Defense, including two six-figure contracts for “modeling and simulation prototypes” with the US Air Force.

Unity bosses clearly understand that some employees may not be entirely comfortable with knowing their work could be used for war. One memo instructs managers to use the terms “government” or “defense” instead of “military.”

In an internal Slack group, Unity CEO John Riccitiello promised to have a meeting with employees.

“Whether or not I’m working directly for the government team, I’m empowering the products they’re selling,” wrote Riccitiello. “Do you want to use your tools to catch bad guys?”

That question is likely to receive some passionate responses. After all, few of us are going to forget the backlash and subsequent resignation of Googlers following revelations about the company’s since-revoked ‘Project Maven’ contract with the Pentagon.

You can find Vice’s full report here.

(Photo by Levi Meir Clancy on Unsplash)

Nvidia-Arm merger in doubt as CMA has ‘serious’ concerns (23 August 2021)

The proposed merger between Nvidia and British chip technology giant Arm is looking increasingly doubtful as the CMA (Competition & Markets Authority) believes the deal “raises serious competition concerns”.

A $40 billion merger of two of the biggest names in chip manufacturing was always bound to catch the eye of regulators, especially when it’s received such vocal opposition from around the world.

Hermann Hauser, co-founder of Arm, even suggested that “surrendering the UK’s most powerful trade weapon to the US is making Britain a US vassal state”. As the UK continues to sign post-Brexit trade deals as part of its “Global Britain” endeavour and seeks to join growing economic partnerships such as the CPTPP, it’s a strong argument.

Given the size of the acquisition and its potential impact, UK Culture Secretary Oliver Dowden referred the deal to the Competition and Markets Authority (CMA) and asked the regulator to prepare a report on whether the deal is anti-competitive.

The executive summary of the CMA’s report has now been published.

Andrea Coscelli, chief executive of the CMA, said:

“We’re concerned that Nvidia controlling Arm could create real problems for Nvidia’s rivals by limiting their access to key technologies, and ultimately stifling innovation across a number of important and growing markets. This could end up with consumers missing out on new products, or prices going up.

The chip technology industry is worth billions and is vital to products that businesses and consumers rely on every day. This includes the critical data processing and datacentre technology that supports digital businesses across the economy, and the future development of artificial intelligence technologies that will be important to growth industries like robotics and self-driving cars.”

As the global tech industry continues to struggle from a chip shortage, further restricting access to new developments would provide a double-hit that’s bound to impact businesses and their consumers.

Rivals have understandably also raised concerns, of which the CMA claims to have received “a substantial number”. Some of Nvidia’s competitors have offered to invest in Arm if it helps the company to remain independent.

Nvidia, for its part, has promised to work with UK regulators to alleviate concerns. The company has already pledged to keep the business in the UK and hire more staff. Nvidia also announced a new AI centre in Cambridge – home to an increasing number of leading startups in the field such as FiveAI, Prowler.io, Fetch.ai, and Darktrace – that features an Arm/Nvidia-based supercomputer, set to be one of the most powerful in the world.

Following receipt of the CMA’s report, Dowden will now make a decision on whether to ask the CMA to conduct a ‘Phase Two’ investigation.

An executive summary of the CMA’s report is available here.

(Photo by Alireza Khatami on Unsplash)
