edge computing Archives - AI News
https://www.artificialintelligence-news.com/tag/edge-computing/

AI & Big Data Expo: Unlocking the potential of AI on edge devices
https://www.artificialintelligence-news.com/2023/12/15/ai-big-data-expo-unlocking-potential-ai-on-edge-devices/
Fri, 15 Dec 2023

In an interview at AI & Big Data Expo, Alessandro Grande, Head of Product at Edge Impulse, discussed issues around developing machine learning models for resource-constrained edge devices and how to overcome them.

During the discussion, Grande provided insightful perspectives on the current challenges, how Edge Impulse is helping address these struggles, and the tremendous promise of on-device AI.

Key hurdles with edge AI adoption

Grande highlighted three primary pain points companies face when attempting to productise edge machine learning models: difficulty determining optimal data collection strategies, scarce AI expertise, and cross-disciplinary communication barriers between hardware, firmware, and data science teams.

“A lot of the companies building edge devices are not very familiar with machine learning,” says Grande. “Bringing those two worlds together is the third challenge, really, around having teams communicate with each other and being able to share knowledge and work towards the same goals.”

Strategies for lean and efficient models

When asked how to optimise for edge environments, Grande emphasised first minimising required sensor data.

“We are seeing a lot of companies struggle with the dataset. What data is enough, what data should they collect, what data from which sensors should they collect the data from. And that’s a big struggle,” explains Grande.

Selecting efficient neural network architectures helps, as do compression techniques like quantisation, which reduce numerical precision without substantially impacting accuracy. Engineers must also balance sensor and hardware constraints against functionality, connectivity needs, and software requirements.
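To make the quantisation idea concrete, below is a minimal NumPy sketch of symmetric post-training int8 quantisation. It is a generic illustration of the technique, not Edge Impulse's implementation; the `quantise_int8` helper is invented for this example.

```python
import numpy as np

def quantise_int8(weights: np.ndarray):
    """Symmetric post-training quantisation: map float32 weights to int8."""
    scale = np.max(np.abs(weights)) / 127.0  # one scale for the whole tensor
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantise(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights from the int8 representation."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, size=(64, 64)).astype(np.float32)

q, scale = quantise_int8(w)
w_hat = dequantise(q, scale)

# int8 storage is 4x smaller than float32, while the reconstruction
# error stays small relative to the weights themselves.
rel_error = np.abs(w - w_hat).max() / np.abs(w).max()
print(f"max relative error: {rel_error:.4f}")
```

Production toolchains go further (per-channel scales, calibration data, quantised activations), but the principle is the same: trade a little precision for a large reduction in the memory and compute a constrained edge device must supply.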

Edge Impulse aims to enable engineers to validate and verify models themselves pre-deployment using common ML evaluation metrics, ensuring reliability while accelerating time-to-value. The end-to-end development platform seamlessly integrates with all major cloud and ML platforms.

Transformative potential of on-device intelligence

Grande highlighted innovative products already leveraging edge intelligence to provide personalised health insights without reliance on the cloud, such as sleep tracking with the Oura Ring.

“It’s sold over a billion pieces, and it’s something that everybody can experience and everybody can get a sense of really the power of edge AI,” explains Grande.

Other exciting opportunities exist around preventative industrial maintenance via anomaly detection on production lines.

Ultimately, Grande sees massive potential for on-device AI to greatly enhance utility and usability in daily life. Rather than just raw data, edge devices can interpret sensor inputs to provide actionable suggestions and responsive experiences not previously possible—heralding more useful technology and improved quality of life.

Unlocking the potential of AI on edge devices hinges on overcoming current obstacles inhibiting adoption. Grande and other leading experts provided deep insights at this year’s AI & Big Data Expo on how to break down the barriers and unleash the full possibilities of edge AI.

“I’d love to see a world where the devices that we were dealing with were actually more useful to us,” concludes Grande.

Watch our full interview with Alessandro Grande below:

(Photo by Niranjan _ Photographs on Unsplash)

See also: AI & Big Data Expo: Demystifying AI and seeing past the hype

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Dave Barnett, Cloudflare: Delivering speed and security in the AI era
https://www.artificialintelligence-news.com/2023/10/13/dave-barnett-cloudflare-delivering-speed-and-security-in-ai-era/
Fri, 13 Oct 2023

AI News sat down with Dave Barnett, Head of SASE at Cloudflare, during Cyber Security & Cloud Expo Europe to delve into how the firm uses its cloud-native architecture to deliver speed and security in the AI era.

According to Barnett, Cloudflare’s cloud-native approach allows the company to continually innovate in the digital space. Notably, a significant portion of their services are offered to consumers for free.

“We continuously reinvent, we’re very comfortable in the digital space. We’re very proud that the vast majority of our customers actually consume our services for free because it’s our way of giving back to society,” said Barnett.

Barnett also revealed Cloudflare’s focus on AI during their anniversary week. The company aims to enable organisations to consume AI securely and make it accessible to everyone. Barnett says that Cloudflare achieves those goals in three key ways.

“One, as I mentioned, is operating AI inference engines within Cloudflare close to consumers’ eyeballs. The second area is securing the use of AI within the workplace, because, you know, AI has some incredibly positive impacts on people … but the problem is there are some data protection requirements around that,” explains Barnett.

“Finally, is the question of, ‘Could AI be used by the bad guys against the good guys?’ and that’s an area that we’re continuing to explore.”

Just a day earlier, AI News heard from Raviv Raz, Cloud Security Manager at ING, during a session at the expo that focused on the alarming potential of AI-powered cybercrime.

Regarding security models, Barnett discussed the evolution of the zero-trust concept, emphasising its practical applications in enhancing both usability and security. Cloudflare’s own journey with zero-trust began with a focus on usability, leading to the development of its own zero-trust network access products.

“We have servers everywhere and engineers everywhere that need to reboot those servers. In 2015, that involved VPNs and two-factor authentication… so we built our own zero-trust network access product for our own use that meant the user experiences for engineers rebooting servers in far-flung places was a lot better,” says Barnett.

“After 2015, the world started to realise that this approach had great security benefits so we developed that product and launched it in 2018 as Cloudflare Access.”

Cloudflare’s innovative strides also include leveraging NVIDIA GPUs to accelerate machine learning AI tasks on an edge network. This technology enables organisations to run inference tasks – such as image recognition – close to end-users, ensuring low latency and optimal performance.

“We launched Workers AI, which means that organisations around the world – in fact, individuals as well – can run their inference tasks at a very close place to where the consumers of that inference are,” explains Barnett.

“You could ask a question, ‘Cat or not cat?’, to a trained cat detection engine very close to the people that need it. We’re doing that in a way that makes it easily accessible to organisations looking to use AI to benefit their business.”

For developers interested in AI, Barnett outlined Cloudflare’s role in supporting the deployment of machine learning models. While machine learning training is typically conducted outside Cloudflare, the company excels in providing low-latency inference engines that are essential for real-time applications like image recognition.

Our conversation with Barnett shed light on Cloudflare’s commitment to cloud-native architecture, AI accessibility, and cybersecurity. As the industry continues to advance, Cloudflare remains at the forefront of delivering speed and security in the AI era.

You can watch our full interview with Dave Barnett below:

(Photo by ryan baker on Unsplash)

See also: JPMorgan CEO: AI will be used for ‘every single process’

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo, Edge Computing Expo, and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The need for ruggedised edge: Bringing data centre-class performance closer to your data
https://www.artificialintelligence-news.com/2022/10/11/the-need-for-ruggedised-edge-bringing-data-centre-class-performance-closer-to-your-data/
Tue, 11 Oct 2022

Oil and gas stations, automotive manufacturing plants, warehouses, and remote store locations are environments that are not as conducive to traditional computing. But instead of a natural trade-off in performance, edge computing – much like the Internet of Things (IoT) before it – is seeing a rightful place in these rugged environments.

These industry applications can be similar to the Industrial Internet of Things (IIoT). Take underground mining, for instance, with its remote-controlled and driverless equipment and its need for insights on predictive maintenance and energy management. Computers in these conditions must operate reliably under vibration, shock, and extreme temperatures.

Gartner forecasts that 75 percent of data will be processed outside the cloud by 2025. Organisations therefore need servers that offer higher performance, lower data latency, and remote manageability.

Alongside withstanding extreme conditions, ruggedised edge platforms require a range of other capabilities: real-time processing through an array of performance accelerators, sufficient storage capacity, and rich I/O to remain compatible with both new and legacy machines.

In collaboration with Arrow, Dell's PowerEdge XR11 and XR12 servers aim to offer enterprise compute capabilities in the harshest edge environments. On the performance side, the XR12 is the more expandable of the two, with third-generation Intel Xeon Scalable processors, flexible I/O choices, and GPU options supporting up to two NVIDIA T4 cards or two A100, A10, or A40 GPUs. Storage for the XR11 and XR12 includes the Intel Optane Persistent Memory 200 series. At 16 inches, the chassis maintains performance while being less than half the depth of a standard server.

“The servers help OEM customers address the edge computing challenges faced outside the data center,” Dell notes. “Businesses can move workloads to the network edge and run AI algorithms to analyze and act on data near where it’s generated, reducing latency and providing quicker access to data for real-time decision making, saving time and money.”

By bringing computing power to the edge, innovative enterprises are realising that a mix of centralised cloud and distributed edge environments is essential. Meanwhile, vendors can now deliver data centre-class performance right where the action is for their customers' growing edge deployments. Arrow can provide customised Dell solutions for enterprises ready to take the next step, supporting the entire lifecycle from ideation and development through prototyping, manufacturing, and global distribution.

IBM's AI-powered Mayflower ship crosses the Atlantic
https://www.artificialintelligence-news.com/2022/06/06/ibm-ai-powered-mayflower-ship-crosses-the-atlantic/
Mon, 06 Jun 2022

A groundbreaking AI-powered ship designed by IBM has successfully crossed the Atlantic, albeit not quite as planned.

The Mayflower – named after the ship which carried Pilgrims from Plymouth, UK to Massachusetts, US in 1620 – is a 50-foot crewless vessel that relies on AI and edge computing to navigate the often harsh and unpredictable oceans.

IBM’s Mayflower has been attempting to autonomously complete the voyage that its predecessor did over 400 years ago but has been beset by various problems.

The initial launch was planned for June 2021 but a number of technical glitches forced the vessel to return to Plymouth.

Back in April 2022, the Mayflower set off again. This time, an issue with the generator forced the boat to divert to the Azores Islands in Portugal.

The Mayflower was patched up and pressed on until late May when a problem developed with the charging circuit for the generator’s starter batteries. This time, a course for Halifax, Nova Scotia was charted.

More than five weeks after departing Plymouth, the modern Mayflower is now docked in Halifax. While it's yet to reach its final destination, the Mayflower has successfully crossed the Atlantic (hiccups aside).

While mechanically the ship leaves a lot to be desired, IBM says the autonomous systems have worked flawlessly—including the AI captain developed by MarineAI.

It is beyond current AI systems to instruct and control robotics to carry out mechanical repairs for every potential failure. However, the fact that the Mayflower's onboard autonomous systems successfully navigated the ocean and reported back mechanical issues is an incredible achievement.

“It will be entirely responsible for its own navigation decisions as it progresses so it has very sophisticated software on it—AIs that we use to recognise the various obstacles and objects in the water, whether that’s other ships, boats, debris, land obstacles, or even marine life,” Robert High, VP and CTO of Edge Computing at IBM, told Edge Computing News in an interview.

IBM designed Mayflower 2.0 with marine research nonprofit Promare. The ship uses a wind/solar hybrid propulsion system and features a range of sensors for scientific research on its journey including acoustic, nutrient, temperature, and water and air samplers.

You can find out more about the Mayflower and view live data and webcams from the ship here.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Google picks ASUS IoT to help scale its Coral edge AI platform
https://www.artificialintelligence-news.com/2022/05/06/google-picks-asus-iot-scale-coral-edge-ai-platform/
Fri, 06 May 2022

Google has picked ASUS IoT to help scale the manufacturing, distribution, and support of its Coral edge AI platform.

Coral was launched in 2019 with the goal of making edge AI more accessible. Google says that it’s witnessed strong demand since its launch – across industries and geographies – and needs a reliable partner able to help it scale.

ASUS IoT is a sub-brand of the wider ASUS brand that has decades of experience in global electronics manufacturing.

The sub-brand was the first partner to launch a Coral SoM (System-on-Module) product with the Tinker Edge T development board. Since then, ASUS IoT has integrated Coral accelerators into their intelligent edge computers and was first to release a multi Edge TPU device with the AI Accelerator PCIe Card.

In a blog post, Google wrote:

“We continue to be impressed by the innovative ways in which our customers use Coral to explore new AI-driven solutions.

And now with ASUS IoT bringing expanded sales, support, and resources for long-term availability, our Coral team will continue to focus on building the next generation of privacy-preserving features and tools for neural computing at the edge.”

Google will remain in control of the Coral brand and product portfolio but ASUS IoT will become the primary channel for sales, distribution, and support.

ASUS IoT will work to make Coral available in more countries while Google focuses its efforts on “building the next generation of privacy-preserving features and tools for neural computing at the edge.”

(Image Credit: Google)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

AI system inspects astronauts' gloves for damage in real-time
https://www.artificialintelligence-news.com/2022/04/05/ai-system-inspects-astronauts-gloves-damage-real-time/
Tue, 05 Apr 2022

Microsoft and Hewlett Packard Enterprise (HPE) are working with NASA scientists to develop an AI system for inspecting astronauts' gloves.

Space is an unforgiving environment and equipment failures can be catastrophic. Gloves are particularly prone to wear and tear as they’re used for just about everything, including repairing equipment and installing new equipment.

Currently, astronauts will send back images of their gloves to Earth to be manually examined by NASA analysts.

“This process gets the job done with the ISS’s low orbit distance of about 250 miles from Earth, but things will be different when NASA once again sends people to the moon, and then to Mars – 140 million miles away from Earth,” explains Tom Keane, Corporate Vice President of Mission Engineering at Microsoft, in a blog post.

Harnessing the power of HPE’s Spaceborne Computer-2, the teams from the three companies are developing an AI system that can quickly detect even small signs of wear and tear on astronauts’ gloves that could end up compromising their safety.

Astronauts' gloves are built to be robust and have five layers. The outer layer features a rubber coating for grip and acts as the first line of defence. Next is a layer of Vectran®, a cut-resistant material. The final three layers maintain pressure and protect against the extreme temperatures of space.

However, space does all it can to get through these defences, and problems can occur once the Vectran® layer is reached. Aside from the day-to-day wear that affects gloves even here on Earth, astronauts' gloves must contend with a variety of additional hazards.

Micrometeorites, for example, create numerous sharp edges on handrails and other components. And at destinations like the moon and Mars, the lack of natural erosion means rock particles are more like broken glass than sand.

To create the glove analyser, the project team started with images of new, undamaged gloves and of gloves featuring wear and tear from spacewalks and terrestrial training. NASA engineers went through the images and tagged specific types of wear using Azure Cognitive Services' Custom Vision.

A cloud-based AI system was trained using the data and the results were comparable to NASA’s own actual damage reports. The tool generates a probability score of damage to areas of each glove.

In space, images would be taken of astronauts’ gloves while they remove their equipment in the airlock. These images would then be analysed locally using HPE’s Spaceborne Computer-2 for signs of damage and, if any is detected, a message will be sent to Earth with areas highlighted for additional human review by NASA engineers.
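The flag-and-review loop can be sketched as below. The `damage_score` function is a toy stand-in invented for this illustration (the real system uses the model trained with Custom Vision); what the sketch shows is the pattern of scoring image patches locally and forwarding only the flagged regions for human review on the ground.

```python
import numpy as np

def damage_score(patch: np.ndarray) -> float:
    """Toy stand-in for the trained classifier: high local contrast
    is treated as possible damage. Returns a probability-like score."""
    return min(1.0, float(patch.std()) / 64.0)

def scan_glove(image: np.ndarray, patch: int = 32, threshold: float = 0.5):
    """Slide a window over the glove image; return regions to highlight
    for human review, as (row, col, score) tuples."""
    flagged = []
    for y in range(0, image.shape[0] - patch + 1, patch):
        for x in range(0, image.shape[1] - patch + 1, patch):
            score = damage_score(image[y:y + patch, x:x + patch])
            if score >= threshold:
                flagged.append((y, x, score))
    return flagged

rng = np.random.default_rng(1)
glove = np.full((128, 128), 120.0)                   # mostly uniform surface
glove[32:64, 32:64] += rng.normal(0, 90, (32, 32))   # one 'worn' region

print(scan_glove(glove))  # only the worn region is flagged
```

Running the scan at the edge means only small highlighted crops, rather than full-resolution imagery, need to travel over the long link back to Earth.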

“What we demonstrated is that we can perform AI and edge processing on the ISS and analyse gloves in real-time,” said Ryan Campbell, senior software engineer at Microsoft Azure Space. 

“Because we’re literally next to the astronaut when we’re processing, we can run our tests faster than the images can be sent to the ground.”

The project serves as a great example of the power of AI combined with edge computing, in areas with as limited connectivity as space.

Going forward, the project could extend to detecting early damage to other areas like docking hatches before they become a serious problem. Microsoft even envisions that a device like HoloLens 2 or a successor could be used to enable astronauts to visually scan for damage in real-time.

“Bringing cloud computing power to the ultimate edge through projects like this allows us to think about and prepare for what we can safely do next – as we expect longer-range human spaceflights in the future and as we collectively begin pushing that edge further out,” concludes Jennifer Ott, Data and AI Specialist at Microsoft. 

(Photo by NASA on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Editorial: Our predictions for the AI industry in 2022
https://www.artificialintelligence-news.com/2021/12/23/editorial-our-predictions-for-the-ai-industry-in-2022/
Thu, 23 Dec 2021

The AI industry continued to thrive this year as companies sought ways to support business continuity through rapidly-changing situations. For those already invested, many are now doubling-down after reaping the benefits.

As we wrap up the year, it’s time to look ahead at what to expect from the AI industry in 2022.

Tackling bias

Our ‘Ethics & Society’ category got more use than most others this year, and with good reason. AI cannot thrive when it’s not trusted.

Biases are present in algorithms that are already causing harm. They’ve been the subject of many headlines, including a number of ours, and must be addressed for the public to have confidence in wider adoption.

Explainable AI (XAI) is a partial solution to the problem. XAI is artificial intelligence in which the results of the solution can be understood by humans.

Robert Penman, Associate Analyst at GlobalData, comments:

“2022 will see the further rollout of XAI, enabling companies to identify potential discrimination in their systems’ algorithms. It is essential that companies correct their models to mitigate bias in data. Organisations that drag their feet will face increasing scrutiny as AI continues to permeate our society, and people demand greater transparency. For example, in the Netherlands, the government’s use of AI to identify welfare fraud was found to violate European human rights.

Reducing human bias present in training datasets is a huge challenge in XAI implementation. Even tech giant Amazon had to scrap its in-development hiring tool because it was claimed to be biased against women.

Further, companies will be desperate to improve their XAI capabilities—the potential to avoid a PR disaster is reason enough.”

To that end, expect a large number of acquisitions of startups specialising in synthetic data training in 2022.
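One widely used model-agnostic XAI technique is permutation feature importance: shuffle one feature at a time and measure how much the model's error grows. The sketch below uses a least-squares fit as a stand-in for any fitted model; the data and feature weights are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
X = rng.normal(size=(n, 3))
# The label depends strongly on feature 0, weakly on feature 1,
# and not at all on feature 2.
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=n)

# "Model": an ordinary least-squares fit stands in for any predictor.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def mse(X, y):
    return float(np.mean((X @ coef - y) ** 2))

baseline = mse(X, y)
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])      # break the feature-label link
    importance.append(mse(Xp, y) - baseline)  # error increase = importance

print([round(v, 3) for v in importance])  # feature 0 dominates
```

An irrelevant feature barely changes the error when shuffled, so an auditor can see which inputs actually drive the model's decisions, which is exactly the kind of transparency regulators are starting to demand.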

Smoother integration

Many companies don’t know how to get started on their AI journeys. Around 30 percent of enterprises plan to incorporate AI into their company within the next few years, but 91 percent foresee significant barriers and roadblocks.

If the confusion and anxiety that surrounds AI can be tackled, it will lead to much greater adoption.

Dr Max Versace, PhD, CEO and Co-Founder of Neurala, explains:

“Similar to what happened with the introduction of WordPress for websites in early 2000, platforms that resemble a ‘WordPress for AI’ will simplify building and maintaining AI models. 

In manufacturing for example, AI platforms will provide integration hooks, hardware flexibility, ease of use by non-experts, the ability to work with little data, and, crucially, a low-cost entry point to make this technology viable for a broad set of customers.”

AutoML platforms will thrive in 2022 and beyond.

From the cloud to the edge

The migration of AI from the cloud to the edge will accelerate in 2022.

Edge processing has a plethora of benefits over relying on cloud servers including speed, reliability, privacy, and lower costs.

Versace commented:

“Increasingly, companies are realising that the way to build a truly efficient AI algorithm is to train it on their own unique data, which might vary substantially over time. To do that effectively, the intelligence needs to directly interface with the sensors producing the data. 

From there, AI should run at a compute edge, and interface with cloud infrastructure only occasionally for backups and/or increased functionality. No critical process – for example,  in a manufacturing plant – should exclusively rely on cloud AI, exposing the manufacturing floor to connectivity/latency issues that could disrupt production.”

Expect more companies to realise the benefits of migrating from cloud to edge AI in 2022.
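The split Versace describes, deciding locally and touching the cloud only occasionally while tolerating its absence, can be sketched as below. The `EdgeNode` class, its threshold "model", and the upload hook are all illustrative placeholders, not any vendor's API.

```python
from collections import deque

class EdgeNode:
    """Illustrative edge loop: inference is local; cloud backup is
    periodic and best-effort, so connectivity loss never blocks work."""

    def __init__(self, upload, sync_every: int = 100):
        self.upload = upload          # cloud backup hook (may raise)
        self.sync_every = sync_every
        self.buffer = deque()         # results not yet backed up

    def infer(self, reading: float) -> bool:
        return reading > 0.8          # placeholder anomaly threshold

    def handle(self, reading: float) -> bool:
        result = self.infer(reading)  # the decision is made at the edge
        self.buffer.append((reading, result))
        if len(self.buffer) >= self.sync_every:
            try:
                self.upload(list(self.buffer))
                self.buffer.clear()
            except ConnectionError:
                pass                  # cloud unreachable: keep buffering
        return result

# Example: anomalies are caught immediately; backups happen in batches.
backups = []
node = EdgeNode(upload=backups.append, sync_every=3)
decisions = [node.handle(x) for x in (0.1, 0.9, 0.2, 0.95)]
print(decisions, len(backups))
```

The key design choice is that `handle` returns before any network activity matters: a dropped connection delays the backup, never the decision, which is why no critical process on a production line needs to wait on the cloud.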

Doing more with less

Among the early concerns about the AI industry is that it would be dominated by “big tech” due to the gargantuan amount of data they’ve collected.

However, innovative methods are now allowing algorithms to be trained with less information. Training using smaller but more unique datasets for each deployment could prove to be more effective.

We predict more startups will prove the world doesn’t have to rely on big tech in 2022.

Human-powered AI

While XAI systems will provide results which can be understood by humans, the decisions made by AIs will be more useful because they’ll be human-powered.

Varun Ganapathi, PhD, Co-Founder and CTO at AKASA, said:

“For AI to truly be useful and effective, a human has to be present to help push the work to the finish line. Without guidance, AI can’t be expected to succeed and achieve optimal productivity. This is a trend that will only continue to increase.

Ultimately, people will have machines report to them. In this world, humans will be the managers of staff – both other humans and AIs – that will need to be taught and trained to be able to do the tasks they’re needed to do.

Just like people, AI needs to constantly be learning to improve performance.”

Greater human input also helps to build wider trust in AI. Involving humans helps to counter narratives about AI replacing jobs and concerns that decisions about people’s lives could be made without human qualities such as empathy and compassion.

Expect human input to lead to more useful AI decisions in 2022.

Avoiding captivity

The telecoms industry is currently pursuing an innovation called Open RAN which aims to help operators avoid being locked to specific vendors and help smaller competitors disrupt the relative monopoly held by a small number of companies.

Enterprises, likewise, are looking to avoid being held captive by any AI vendor.

Doug Gilbert, CIO and Chief Digital Officer at Sutherland, explains:

“Early adopters of rudimentary enterprise AI embedded in ERP / CRM platforms are starting to feel trapped. In 2022, we’ll see organisations take steps to avoid AI lock-in. And for good reason. AI is extraordinarily complex.

When embedded in, say, an ERP system, control, transparency, and innovation are handed over to the vendor, not the enterprise. AI shouldn’t be treated as a product or feature: it’s a set of capabilities. AI is also evolving rapidly, with new AI capabilities and continuously improved methods of training algorithms.

To get the most powerful results from AI, more enterprises will move toward a model of combining different AI capabilities to solve unique problems or achieve an outcome. That means they’ll be looking to spin up more advanced and customizable options and either deprioritising AI features in their enterprise platforms or winding down those expensive but basic AI features altogether.”

In 2022 and beyond, we predict enterprises will favour AI solutions that avoid lock-in.

Chatbots get smart

Hands up if you’ve ever screamed (internally or externally) that you just want to speak to a human when dealing with a chatbot—I certainly have, more often than I’d care to admit.

“Today’s chatbots have proven beneficial but have very limited capabilities. Natural language processing will start to be overtaken by neural voice software that provides near real time natural language understanding (NLU),” commented Gilbert.

“With the ability to achieve comprehensive understanding of more complex sentence structures, even emotional states, break down conversations into meaningful content, quickly perform keyword detection and named entity recognition, NLU will dramatically improve the accuracy and the experience of conversational AI.”

In theory, this will have two results:

  • Augmenting human assistance in real time, such as suggesting responses based on behaviour or skill level.
  • Changing how a customer or client perceives they’re being treated, with NLU delivering a more natural and positive experience.
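To make the distinction concrete, here is a minimal sketch of the keyword detection and named entity recognition Gilbert mentions, in plain Python. The intents, keywords, and order-ID pattern are all hypothetical; a production NLU system would use trained models rather than hand-written rules like these.

```python
import re

# Hypothetical intents and trigger keywords for the example.
INTENT_KEYWORDS = {
    "refund": {"refund", "money back", "reimburse"},
    "order_status": {"where is", "track", "status"},
}

def detect_intent(utterance: str) -> str:
    """Return the first intent whose keywords appear in the utterance."""
    text = utterance.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return intent
    return "unknown"

def extract_entities(utterance: str) -> dict:
    """Toy named-entity recognition: spot order IDs like '#12345'."""
    entities = {}
    match = re.search(r"#(\d+)", utterance)
    if match:
        entities["order_id"] = match.group(1)
    return entities

msg = "Where is my parcel? Order #98321"
print(detect_intent(msg))     # order_status
print(extract_entities(msg))  # {'order_id': '98321'}
```

The gap between this rule-based approach and genuine NLU (handling paraphrases, emotional states, and complex sentence structures) is exactly what the neural voice software Gilbert describes aims to close.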

In 2022, chatbots will get much closer to offering a human-like experience.

It’s not about size, it’s about the quality

A robust AI system requires two things: a functioning model and underlying data to train that model. Collecting huge amounts of data is a waste of time if it’s not high quality and labelled correctly.

Gabriel Straub, Chief Data Scientist at Ocado Technology, said:

“Andrew Ng has been speaking about data-centric AI, about how improving the quality of your data can often lead to better outcomes than improving your algorithms (at least for the same amount of effort).

So, how do you do this in practice? How do you make sure that you manage the quality of data at least as carefully as the quantity of data you collect?

There are two things that will make a big difference: 1) making sure that data consumers are always at the heart of your data thinking and 2) ensuring that data governance is a function that enables you to unlock the value in your data, safely, rather than one that focuses on locking down data.”
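As a toy illustration of putting data quality first, the sketch below screens a labelled dataset for missing fields and invalid labels before it ever reaches a training job. The record format, field names, and checks are invented for the example; real data governance involves far richer validation than this.

```python
def data_quality_report(rows, required_fields, valid_labels):
    """Flag records that would silently degrade a model if trained on."""
    issues = []
    for i, row in enumerate(rows):
        missing = [f for f in required_fields if row.get(f) in (None, "")]
        if missing:
            issues.append((i, f"missing fields: {missing}"))
        if row.get("label") not in valid_labels:
            issues.append((i, f"invalid label: {row.get('label')!r}"))
    return issues

rows = [
    {"text": "great service", "label": "positive"},
    {"text": "", "label": "positive"},              # empty text
    {"text": "slow delivery", "label": "postive"},  # typo in label
]
report = data_quality_report(rows, ["text", "label"], {"positive", "negative"})
print(report)
```

Gating training data on checks like these is a small, concrete example of treating data consumers, and the models they feed, as the heart of your data thinking.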

Expect the AI industry to make the quality of data a priority in 2022.

(Photo by Michael Dziedzic on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

The post Editorial: Our predictions for the AI industry in 2022 appeared first on AI News.

Paravision boosts its computer vision and facial recognition capabilities https://www.artificialintelligence-news.com/2021/09/29/paravision-boosts-its-computer-vision-and-facial-recognition-capabilities/ https://www.artificialintelligence-news.com/2021/09/29/paravision-boosts-its-computer-vision-and-facial-recognition-capabilities/#respond Wed, 29 Sep 2021 13:06:14 +0000 http://artificialintelligence-news.com/?p=11143

The post Paravision boosts its computer vision and facial recognition capabilities appeared first on AI News.

US-based Paravision has announced updates to boost its computer vision and facial recognition capabilities across mobile, on-premise, edge, and cloud deployments.

“From cloud to edge, Paravision’s goal is to help our partners develop and deploy transformative solutions around face recognition and computer vision,” said Joey Pritikin, Chief Product Officer at Paravision.

“With these sweeping updates to our product family, and with what has become possible in terms of accuracy, speed, usability and portability, we see a remarkable opportunity to unite disparate applications with a coherent sense of identity that bridges physical spaces and cyberspace.”

A new Scaled Vector Search (SVS) capability acts as a search engine to provide accurate, rapid, and stable face matching on large databases that may contain tens of millions of identities. Paravision claims the SVS engine supports hundreds of transactions per second with extremely low latencies.
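Paravision hasn’t published how SVS works internally, but the underlying task, matching a probe face embedding against a gallery of identity templates, can be sketched with plain cosine similarity. The embeddings, gallery, and threshold below are made up for illustration; at tens of millions of identities a real engine would use an approximate nearest-neighbour index rather than this brute-force scan.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(gallery, probe, threshold=0.8):
    """Return (identity, score) of the best match above threshold, else (None, score)."""
    best_id, best_score = None, -1.0
    for identity, template in gallery.items():
        score = cosine(template, probe)
        if score > best_score:
            best_id, best_score = identity, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

# Hypothetical 3-dimensional embeddings; real face templates are far larger.
gallery = {"alice": [0.9, 0.1, 0.0], "bob": [0.0, 1.0, 0.2]}
probe = [0.88, 0.12, 0.01]
print(search(gallery, probe))  # best match: alice, score near 1.0
```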

Another scaling solution called Streaming Container 5 enables the processing of video at over 250 frames per second from any number of streams. The solution features advanced face tracking to ensure that identities remain accurate even in busy environments.

With more enterprises than ever looking to the latency-busting and privacy-enhancing benefits of edge computing, Paravision has partnered with Teknique to co-create a series of hardware and software reference designs that enable the rapid development of face recognition and computer vision capabilities at the edge.

Teknique is a leader in the development of hardware based on designs from California-based fabless semiconductor company Ambarella.

Paravision’s Face SDK has been enhanced for smart cameras powered by Ambarella CVflow chipsets. The update enables facial recognition on CVflow-powered cameras to achieve up to 40 frames per second full pipeline performance.

A new Liveness and Anti-spoofing SDK also adds new safeguards for Ambarella-powered facial recognition solutions. The toolkit uses Ambarella’s visible light, near-infrared, and depth-sensing capabilities to determine whether the camera is seeing a live subject or whether it’s being tricked by recorded footage or a dummy image.

On the mobile side, Paravision has released its Face SDK for Android. The SDK includes face detection, landmarks, quality assessment, template creation, and 1-to-1 or 1-to-many matching. Reference applications with UI/UX recommendations and tools are also included.

Last but certainly not least, Paravision has announced the availability of its first person-level computer vision SDK. The new SDK is designed to go “beyond face recognition” to detect the presence and position of individuals and unlock new use cases.

Provided examples of real-world applications for the computer vision SDK include occupancy analysis, tailgating detection, and custom intention or subject attributes.

“With Person Detection, users could determine whether employees are allowed access to a specific area, are wearing a mask or hard hat, or appear to be in distress,” the company explains. “It can also enable useful business insights such as metrics about queue times, customer throughput or to detect traveller bottlenecks.”
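Metrics such as occupancy are straightforward to derive once a detector emits person bounding boxes. A minimal sketch, with invented box coordinates and zone (illustrative only, not Paravision’s SDK):

```python
def occupancy(detections, zone):
    """Count person detections whose centre falls inside a rectangular zone.

    detections: list of (x1, y1, x2, y2) bounding boxes
    zone: (x1, y1, x2, y2) region of interest, e.g. a queue area
    """
    zx1, zy1, zx2, zy2 = zone
    count = 0
    for x1, y1, x2, y2 in detections:
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2  # box centre
        if zx1 <= cx <= zx2 and zy1 <= cy <= zy2:
            count += 1
    return count

boxes = [(10, 10, 30, 60), (200, 40, 230, 100), (50, 20, 70, 80)]
queue_zone = (0, 0, 100, 100)
print(occupancy(boxes, queue_zone))  # 2
```

Sampling this count over time gives queue-length and throughput metrics of the kind the company describes.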

With these exhaustive updates, Paravision is securing its place as one of the most exciting companies in the AI space.

Paravision is ranked the US leader across several of NIST’s Face Recognition Vendor Test evaluations including 1:1 verification, 1:N identification, performance for paperless travel, and performance with face masks.

(Photo by Daniil Kuželev on Unsplash)

Find out more about Digital Transformation Week North America, taking place on 9-10 November 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.

AI, Captain: IBM’s edge AI-powered ship Mayflower sets sail https://www.artificialintelligence-news.com/2021/06/18/ai-captain-ibm-edge-ai-powered-ship-mayflower-sets-sail/ https://www.artificialintelligence-news.com/2021/06/18/ai-captain-ibm-edge-ai-powered-ship-mayflower-sets-sail/#respond Fri, 18 Jun 2021 12:07:56 +0000 http://artificialintelligence-news.com/?p=10711

The post AI, Captain: IBM’s edge AI-powered ship Mayflower sets sail appeared first on AI News.

IBM’s fully-autonomous edge AI-powered ship Mayflower has set off on its crewless voyage from Plymouth, UK to Plymouth, USA.

The ship is named after the Mayflower vessel which transported pilgrim settlers from Plymouth, England to Plymouth, Massachusetts in 1620. On its 400th anniversary, it was decided that a Mayflower for the 21st century should be built.

Mayflower 2.0 is a truly modern vessel packed with the latest technological advancements. Onboard edge AI computing enables the ship to carry out scientific research while navigating the harsh environment of the ocean—often without any connectivity.

“It will be entirely responsible for its own navigation decisions as it progresses so it has very sophisticated software on it—AIs that we use to recognise the various obstacles and objects in the water, whether that’s other ships, boats, debris, land obstacles, or even marine life,” Robert High, VP and CTO of Edge Computing at IBM, recently told Edge Computing News in an interview.

The Weather Company, which IBM acquired back in 2016, has been advising on the departure window for Mayflower’s voyage. Earlier this week, the Mayflower was given the green light to set sail.

Mayflower’s AI captain was developed by MarineAI using IBM’s artificial intelligence technology. A fun fact: the AI had to be trained specifically to ignore seagulls, as they could appear to be large objects and lead to Mayflower taking unnecessary action to manoeuvre around them.

The progress of Mayflower can be viewed using a dashboard built by IBM’s digital agency iX.

A livestream from Mayflower’s onboard cameras is also available, but it can understandably be a little temperamental. IBM partnered with Videosoft, a company that specialises in live-streaming in challenging environments, to enable streaming over speeds of just 6kbps. However, there are times when Mayflower will be fully-disconnected—which even the best algorithms can’t overcome.

If the livestream is currently available, you can view it here.

Unlike its predecessor, Mayflower 2.0 won’t rely solely on wind power; it employs a wind/solar hybrid propulsion system with a backup diesel generator. The new ship also trades a compass and nautical charts for a state-of-the-art GNSS positioning system with SATCOM, RADAR, and LIDAR.

A range of sensors are onboard for scientific research including acoustic, nutrient, temperature, and water and air samplers. Edge devices will store and analyse data locally until connectivity is available. When a link has been established, the data will be uploaded to edge nodes onshore.
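IBM hasn’t detailed Mayflower’s data pipeline, but the store-locally-and-upload-later pattern described above is a classic store-and-forward buffer. A hypothetical in-memory sketch, with a list standing in for the real onboard storage and shore upload:

```python
class EdgeBuffer:
    """Store sensor readings locally; flush to shore when a link is available."""

    def __init__(self):
        self.pending = []   # samples awaiting upload (on-device storage)
        self.uploaded = []  # stand-in for data received by onshore edge nodes

    def record(self, sample):
        self.pending.append(sample)

    def sync(self, connected: bool):
        """Upload everything pending, but only if a link is established."""
        if connected and self.pending:
            self.uploaded.extend(self.pending)
            self.pending.clear()

buf = EdgeBuffer()
buf.record({"sensor": "temperature", "value": 14.2})
buf.sync(connected=False)   # mid-ocean, no link: data stays on the device
buf.record({"sensor": "acoustic", "value": 0.7})
buf.sync(connected=True)    # link up: both samples are uploaded
print(len(buf.pending), len(buf.uploaded))  # 0 2
```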

Mayflower is a fascinating project and we look forward to following its voyage. AI News will keep you updated on any relevant developments.

(Image Credit: IBM)

Find out more about Digital Transformation Week North America, taking place on November 9-10 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.

AWS announces nine major updates for its ML platform SageMaker https://www.artificialintelligence-news.com/2020/12/09/aws-nine-major-updates-ml-platform-sagemaker/ https://www.artificialintelligence-news.com/2020/12/09/aws-nine-major-updates-ml-platform-sagemaker/#comments Wed, 09 Dec 2020 14:47:48 +0000 http://artificialintelligence-news.com/?p=10096

The post AWS announces nine major updates for its ML platform SageMaker appeared first on AI News.

Amazon Web Services (AWS) has announced nine major new updates for its cloud-based machine learning platform, SageMaker.

SageMaker aims to provide a machine learning service which can be used to build, train, and deploy ML models for virtually any use case.

During this year’s re:Invent conference, AWS made several announcements to further improve SageMaker’s capabilities.

Swami Sivasubramanian, VP of Amazon Machine Learning at AWS, said:

“Hundreds of thousands of everyday developers and data scientists have used our industry-leading machine learning service, Amazon SageMaker, to remove barriers to building, training, and deploying custom machine learning models. One of the best parts about having such a widely-adopted service like SageMaker is that we get lots of customer suggestions which fuel our next set of deliverables.

Today, we are announcing a set of tools for Amazon SageMaker that makes it much easier for developers to build end-to-end machine learning pipelines to prepare, build, train, explain, inspect, monitor, debug, and run custom machine learning models with greater visibility, explainability, and automation at scale.”

The first announcement is Data Wrangler, a feature which aims to automate the preparation of data for machine learning.

Data Wrangler enables customers to choose the data they want from their various data stores and import it with a single click. Over 300 built-in data transformers are included to help customers normalise, transform, and combine features without having to write any code.
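As a flavour of what such transformers do under the hood, here is min-max normalisation, one of the most common preparation steps, in plain Python. This illustrates the concept only; it is not the Data Wrangler API.

```python
def min_max_scale(values):
    """Normalise a numeric column to the [0, 1] range."""
    lo, hi = min(values), max(values)
    if hi == lo:
        # Constant column: every value maps to 0.0.
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

prices = [10.0, 20.0, 40.0, 10.0]
print(min_max_scale(prices))  # [0.0, 0.333..., 1.0, 0.0]
```

Data Wrangler’s value is letting users apply hundreds of transformations like this through a visual interface, with no code at all.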

Frank Farrall, Principal of AI Ecosystems and Platforms Leader at Deloitte, comments:

“SageMaker Data Wrangler enables us to hit the ground running to address our data preparation needs with a rich collection of transformation tools that accelerate the process of machine learning data preparation needed to take new products to market.

In turn, our clients benefit from the rate at which we scale deployments, enabling us to deliver measurable, sustainable results that meet the needs of our clients in a matter of days rather than months.”

The second announcement is Feature Store. Amazon SageMaker Feature Store provides a new repository that makes it easy to store, update, retrieve, and share machine learning features for training and inference.

Feature Store aims to overcome the problem of storing features which are mapped to multiple models. A purpose-built feature store makes it much easier for teams of developers and data scientists to name, organise, find, and share sets of features. Because it resides in SageMaker Studio – close to where ML models are run – AWS claims it provides single-digit millisecond inference latency.
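The core idea, a central repository of named features keyed by entity so that training and inference read the same values, can be sketched in a few lines. This toy in-memory class is purely illustrative and bears no relation to the actual SageMaker Feature Store API; the entity and feature names are invented.

```python
class FeatureStore:
    """Toy central feature repository keyed by entity ID."""

    def __init__(self):
        self._records = {}

    def put(self, entity_id, features: dict):
        """Write or update named features for one entity."""
        self._records.setdefault(entity_id, {}).update(features)

    def get(self, entity_id, feature_names):
        """Read a subset of features; missing values come back as None."""
        record = self._records.get(entity_id, {})
        return {name: record.get(name) for name in feature_names}

store = FeatureStore()
# One team writes features...
store.put("customer-42", {"days_since_last_order": 3, "lifetime_value": 812.5})
# ...another team reuses them for a different model, by name.
print(store.get("customer-42", ["lifetime_value"]))  # {'lifetime_value': 812.5}
```

The standardisation and reuse Intuit describes comes precisely from this shared naming layer: every model reads the same feature by the same key.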

Mammad Zadeh, VP of Engineering, Data Platform at Intuit, says:

“We have worked closely with AWS in the lead up to the release of Amazon SageMaker Feature Store, and we are excited by the prospect of a fully managed feature store so that we no longer have to maintain multiple feature repositories across our organization.

Our data scientists will be able to use existing features from a central store and drive both standardisation and reuse of features across teams and models.”

Next up, we have SageMaker Pipelines—which AWS claims is the first purpose-built, easy-to-use continuous integration and continuous delivery (CI/CD) service for machine learning.

Developers can define each step of an end-to-end machine learning workflow including the data-load steps, transformations from Amazon SageMaker Data Wrangler, features stored in Amazon SageMaker Feature Store, training configuration and algorithm set up, debugging steps, and optimisation steps.
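Conceptually, such a workflow is an ordered chain of named steps in which each step consumes the previous step’s output. A minimal sketch, with toy stand-in steps (the real SageMaker Pipelines SDK defines steps declaratively; nothing here is its API):

```python
def run_pipeline(steps, artifact):
    """Run named steps in order, passing each step's output to the next."""
    log = []
    for name, fn in steps:
        artifact = fn(artifact)
        log.append(name)
    return artifact, log

steps = [
    ("prepare", lambda data: [x * 2 for x in data]),      # stand-in wrangling
    ("train",   lambda data: {"weights": sum(data)}),     # stand-in training
    ("evaluate", lambda model: {**model, "score": 0.9}),  # stand-in evaluation
]
model, log = run_pipeline(steps, [1, 2, 3])
print(log)    # ['prepare', 'train', 'evaluate']
print(model)  # {'weights': 12, 'score': 0.9}
```

A managed CI/CD service adds what this sketch lacks: versioning, retries, lineage tracking, and automatic re-runs when data or code changes.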

SageMaker Clarify may be one of the most important features debuted by AWS this week, given ongoing concerns about bias in machine learning systems.

Clarify aims to provide bias detection across the machine learning workflow, enabling developers to build greater fairness and transparency into their ML models. Rather than turning to often time-consuming open-source tools, developers can use the integrated solution to quickly detect and counter any bias in models.
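One simple example of the kind of metric a bias-detection tool can report is the demographic parity difference: the gap in positive-outcome rates between groups. The predictions and group labels below are invented, and Clarify’s actual metric suite is considerably broader.

```python
def demographic_parity_difference(predictions, groups):
    """Gap in positive-outcome rate between the best- and worst-treated groups.

    predictions: 0/1 model outputs; groups: group label per prediction.
    """
    rates = {}
    for group in set(groups):
        selected = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(selected) / len(selected)
    values = sorted(rates.values())
    return values[-1] - values[0]

preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
# Group "a" gets positive outcomes 75% of the time, group "b" only 25%.
print(demographic_parity_difference(preds, groups))  # 0.5
```

A value near zero suggests the model treats the groups similarly on this one axis; a large gap is a prompt to investigate, not a verdict on its own.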

Andreas Heyden, Executive VP of Digital Innovations for the DFL Group, says:

“Amazon SageMaker Clarify seamlessly integrates with the rest of the Bundesliga Match Facts digital platform and is a key part of our long-term strategy of standardising our machine learning workflows on Amazon SageMaker.

By using AWS’s innovative technologies, such as machine learning, to deliver more in-depth insights and provide fans with a better understanding of the split-second decisions made on the pitch, Bundesliga Match Facts enables viewers to gain deeper insights into the key decisions in each match.”

Deep Profiling for Amazon SageMaker automatically monitors system resource utilisation and provides alerts when training bottlenecks are detected. The feature works across frameworks (PyTorch, Apache MXNet, and TensorFlow) and collects system and training metrics automatically, without requiring any code changes in training scripts.

Next up, we have Distributed Training on SageMaker which AWS claims makes it possible to train large, complex deep learning models up to two times faster than current approaches.

Kristóf Szalay, CTO at Turbine, comments:

“We use machine learning to train our in silico human cell model, called Simulated Cell, based on a proprietary network architecture. By accurately predicting various interventions on the molecular level, Simulated Cell helps us to discover new cancer drugs and find combination partners for existing therapies.

Training of our simulation is something we continuously iterate on, but on a single machine each training takes days, hindering our ability to iterate on new ideas quickly.

We are very excited about Distributed Training on Amazon SageMaker, which we are expecting to decrease our training times by 90% and to help us focus on our main task: to write a best-of-the-breed codebase for the cell model training.

Amazon SageMaker ultimately allows us to become more effective in our primary mission: to identify and develop novel cancer drugs for patients.”

SageMaker’s Data Parallelism engine scales training jobs from a single GPU to hundreds or thousands by automatically splitting data across multiple GPUs, improving training time by up to 40 percent.
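The essence of data parallelism can be shown in miniature: split each batch across workers, let every worker compute a gradient on its own shard, then average the gradients before the weight update. The toy model below fits y = 3x with plain Python; the averaging step stands in for the all-reduce a real multi-GPU system performs.

```python
def shard(data, n_workers):
    """Split a batch evenly across workers (the data-parallel split)."""
    k = len(data) // n_workers
    return [data[i * k:(i + 1) * k] for i in range(n_workers)]

def local_gradient(samples, weight):
    """Toy gradient of mean-squared error for the model y = w * x."""
    return sum(2 * x * (weight * x - y) for x, y in samples) / len(samples)

def data_parallel_step(data, weight, n_workers, lr=0.01):
    grads = [local_gradient(s, weight) for s in shard(data, n_workers)]
    avg = sum(grads) / len(grads)  # stands in for all-reduce across GPUs
    return weight - lr * avg

data = [(x, 3.0 * x) for x in range(1, 9)]  # ground truth: w = 3
w = 0.0
for _ in range(200):
    w = data_parallel_step(data, w, n_workers=4)
print(round(w, 2))  # 3.0
```

The speed-up in real systems comes from the shards being processed concurrently on separate GPUs; this single-process sketch only shows that the averaged update converges to the same answer.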

With edge computing advancements increasing rapidly, AWS is keeping pace with SageMaker Edge Manager.

Edge Manager helps developers to optimise, secure, monitor, and maintain ML models deployed on fleets of edge devices. In addition to helping optimise ML models and manage edge devices, Edge Manager also provides the ability to cryptographically sign models, upload prediction data from devices to SageMaker for monitoring and analysis, and view a dashboard which tracks and provides a visual report on the operation of deployed models within the SageMaker console.

Igor Bergman, VP of Cloud and Software of PCs and Smart Devices at Lenovo, comments:

“SageMaker Edge Manager will help eliminate the manual effort required to optimise, monitor, and continuously improve the models after deployment. With it, we expect our models will run faster and consume less memory than with other comparable machine-learning platforms.

As we extend AI to new applications across the Lenovo services portfolio, we will continue to require a high-performance pipeline that is flexible and scalable both in the cloud and on millions of edge devices. That’s why we selected the Amazon SageMaker platform. With its rich edge-to-cloud and CI/CD workflow capabilities, we can effectively bring our machine learning models to any device workflow for much higher productivity.”

Finally, SageMaker JumpStart aims to make it easier for developers who have little experience with machine learning deployments to get started.

JumpStart provides developers with an easy-to-use, searchable interface to find best-in-class solutions, algorithms, and sample notebooks. Developers can select from several end-to-end machine learning templates (e.g. fraud detection, customer churn prediction, or forecasting) and deploy them directly into their SageMaker Studio environments.

AWS has been on a roll with SageMaker improvements—delivering more than 50 new capabilities over the past year. After this bumper feature drop, we probably shouldn’t expect any more until we’ve put 2020 behind us.

You can find coverage of AWS’ more cloud-focused announcements via our sister publication CloudTech here.

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.
