computer vision Archives - AI News
https://www.artificialintelligence-news.com/tag/computer-vision/

NVIDIA presents latest advancements in visual AI
https://www.artificialintelligence-news.com/2024/06/17/nvidia-presents-latest-advancements-visual-ai/
Mon, 17 Jun 2024 16:05:03 +0000

NVIDIA researchers are presenting new visual generative AI models and techniques at the Computer Vision and Pattern Recognition (CVPR) conference this week in Seattle. The advancements span areas like custom image generation, 3D scene editing, visual language understanding, and autonomous vehicle perception.

“Artificial intelligence, and generative AI in particular, represents a pivotal technological advancement,” said Jan Kautz, VP of learning and perception research at NVIDIA.

“At CVPR, NVIDIA Research is sharing how we’re pushing the boundaries of what’s possible — from powerful image generation models that could supercharge professional creators to autonomous driving software that could help enable next-generation self-driving cars.”

Among the over 50 NVIDIA research projects being presented, two papers have been selected as finalists for CVPR’s Best Paper Awards – one exploring the training dynamics of diffusion models and another on high-definition maps for self-driving cars.

Additionally, NVIDIA has won the CVPR Autonomous Grand Challenge’s End-to-End Driving at Scale track, outperforming over 450 entries worldwide and earning an Innovation Award from CVPR. The win demonstrates NVIDIA’s pioneering work in using generative AI for comprehensive self-driving vehicle models.

One of the headlining research projects is JeDi, a new technique that allows creators to rapidly customise diffusion models – the leading approach for text-to-image generation – to depict specific objects or characters using just a few reference images, rather than the time-intensive process of fine-tuning on custom datasets.

Another breakthrough is FoundationPose, a new foundation model that can instantly understand and track the 3D pose of objects in videos without per-object training. It set a new performance record and could unlock new AR and robotics applications.

NVIDIA researchers also introduced NeRFDeformer, a method to edit the 3D scene captured by a Neural Radiance Field (NeRF) using a single 2D snapshot, rather than having to manually reanimate changes or recreate the NeRF entirely. This could streamline 3D scene editing for graphics, robotics, and digital twin applications.

On the visual language front, NVIDIA collaborated with MIT to develop VILA, a new family of vision language models that achieve state-of-the-art performance in understanding images, videos, and text. With enhanced reasoning capabilities, VILA can even comprehend internet memes by combining visual and linguistic understanding.

NVIDIA’s visual AI research spans numerous industries, including over a dozen papers exploring novel approaches for autonomous vehicle perception, mapping, and planning. Sanja Fidler, VP of NVIDIA’s AI Research team, is presenting on the potential of vision language models for self-driving cars.

The breadth of NVIDIA’s CVPR research exemplifies how generative AI could empower creators, accelerate automation in manufacturing and healthcare, and propel autonomy and robotics forward.

(Photo by v2osk)

See also: NLEPs: Bridging the gap between LLMs and symbolic reasoning

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Amazon will use computer vision to spot defects before dispatch
https://www.artificialintelligence-news.com/2024/06/04/amazon-use-computer-vision-spot-defects-before-dispatch/
Tue, 04 Jun 2024 11:44:26 +0000

Amazon will harness computer vision and AI to ensure customers receive products in pristine condition and further its sustainability efforts. The initiative – dubbed “Project P.I.” (short for “private investigator”) – operates within Amazon fulfilment centres across North America, where it will scan millions of products daily for defects.

Project P.I. leverages generative AI and computer vision technologies to detect issues such as damaged products or incorrect colours and sizes before they reach customers. The AI model not only identifies defects but also helps uncover the root causes, enabling Amazon to implement preventative measures upstream. This system has proven highly effective in the sites where it has been deployed, accurately identifying product issues among the vast number of items processed each month.

Before any item is dispatched, it passes through an imaging tunnel where Project P.I. evaluates its condition. If a defect is detected, the item is isolated and further investigated to determine if similar products are affected.

Amazon associates review the flagged items and decide whether to resell them at a discount via Amazon’s Second Chance site, donate them, or find alternative uses. This technology aims to act as an extra pair of eyes, enhancing manual inspections at several North American fulfilment centres, with plans for expansion throughout 2024.
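The workflow described above amounts to a simple scan-flag-route pipeline. The sketch below illustrates that logic in Python; all names, thresholds, and categories are invented for illustration and are not Amazon’s actual implementation:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch of the scan -> flag -> route flow described above.
# All names and thresholds here are assumptions, not Amazon's real system.

@dataclass
class ScanResult:
    item_id: str
    defect_score: float            # 0.0 (pristine) to 1.0 (clearly damaged)
    defect_type: Optional[str] = None

def route_item(scan: ScanResult, threshold: float = 0.5) -> str:
    """Decide what happens to an item after the imaging tunnel."""
    if scan.defect_score < threshold:
        return "dispatch"          # no defect detected: ship as normal
    # Defect detected: isolate the item for associate review; cosmetic
    # damage may be resold at a discount, otherwise donate or repurpose.
    if scan.defect_type == "cosmetic":
        return "resell_discounted"
    return "donate_or_repurpose"

print(route_item(ScanResult("item-1", 0.1)))              # dispatch
print(route_item(ScanResult("item-2", 0.8, "cosmetic")))  # resell_discounted
```

The interesting engineering in a real deployment sits upstream of this routing step, in the vision models that produce the defect score and in tracing flagged defects back to their root cause.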

Dharmesh Mehta, Amazon’s VP of Worldwide Selling Partner Services, said: “We want to get the experience right for customers every time they shop in our store.

“By leveraging AI and product imaging within our operations facilities, we are able to efficiently detect potentially damaged products and address more of those issues before they ever reach a customer, which is a win for the customer, our selling partners, and the environment.”

Project P.I. also plays a crucial role in Amazon’s sustainability initiatives. By preventing damaged or defective items from reaching customers, the system helps reduce unwanted returns, wasted packaging, and unnecessary carbon emissions from additional transportation.

Kara Hurst, Amazon’s VP of Worldwide Sustainability, commented: “AI is helping Amazon ensure that we’re not just delighting customers with high-quality items, but we’re extending that customer obsession to our sustainability work by preventing less-than-perfect items from leaving our facilities, and helping us avoid unnecessary carbon emissions due to transportation, packaging, and other steps in the returns process.”

In parallel, Amazon is utilising a generative AI system equipped with a Multi-Modal LLM (MLLM) to investigate the root causes of negative customer experiences.

When defects reported by customers slip through initial checks, this system reviews customer feedback and analyses images from fulfilment centres to understand what went wrong. For example, if a customer receives the wrong size of a product, the system examines the product labels in fulfilment centre images to pinpoint the error.

This technology is also beneficial for Amazon’s selling partners, especially the small and medium-sized businesses that make up over 60% of Amazon’s sales. By making defect data more accessible, Amazon helps these sellers rectify issues quickly and reduce future errors.

(Photo by Andrew Stickelman)

See also: X now permits AI-generated adult content

Meta claims its new AI supercomputer will set records
https://www.artificialintelligence-news.com/2022/01/25/meta-claims-new-ai-supercomputer-will-set-records/
Tue, 25 Jan 2022 09:25:47 +0000

Meta (formerly Facebook) has unveiled an AI supercomputer that it claims will be the world’s fastest.

The supercomputer, called the AI Research SuperCluster (RSC), is not yet complete. However, Meta’s researchers have already begun using it to train large natural language processing (NLP) and computer vision models.

RSC is set to be fully built by mid-2022. Meta says it will then be the fastest in the world, with the aim of training models with trillions of parameters.

“We hope RSC will help us build entirely new AI systems that can, for example, power real-time voice translations to large groups of people, each speaking a different language, so they can seamlessly collaborate on a research project or play an AR game together,” wrote Meta in a blog post.

“Ultimately, the work done with RSC will pave the way toward building technologies for the next major computing platform — the metaverse, where AI-driven applications and products will play an important role.”

Meta expects RSC to be 20x faster than its current V100-based clusters for production workloads. RSC is also estimated to be 9x faster at running the NVIDIA Collective Communication Library (NCCL) and 3x faster at training large-scale NLP workflows.

A model with tens of billions of parameters can finish training in three weeks compared with nine weeks prior to RSC.

Meta says that its previous AI research infrastructure leveraged only open-source and other publicly available datasets. RSC was designed with security and privacy controls that allow Meta to train on real-world examples from its production systems.

What this means in practice is that Meta can use RSC to advance research for vital tasks such as identifying harmful content on its platforms—using real data from them.

“We believe this is the first time performance, reliability, security, and privacy have been tackled at such a scale,” says Meta.

(Image Credit: Meta)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

GTC 2021: Nvidia debuts accelerated computing libraries, partners with Google, IBM, and others to speed up quantum research
https://www.artificialintelligence-news.com/2021/11/09/gtc-2021-nvidia-debuts-accelerated-computing-libraries-partners-with-google-ibm-and-others-to-speed-up-quantum-research/
Tue, 09 Nov 2021 13:06:58 +0000

Nvidia has unveiled 65 new and updated software development kits at GTC 2021, alongside a partnership with industry leaders to speed up quantum research.

The company’s roster of accelerated computing kits now exceeds 150 and supports the almost three million developers in NVIDIA’s Developer Program.

Four of the major new SDKs are:

  • ReOpt – Automatically optimises logistical processes using advanced, parallel algorithms. This includes vehicle routes, warehouse selection, and fleet mix. The dynamic rerouting capabilities – shown in an on-stage demo – can reduce travel time, save fuel costs, and minimise idle periods.
  • cuNumeric – Implements the popular NumPy application programming interface and enables scaling to multi-GPU and multi-node systems with zero code changes.
  • cuQuantum – Designed for quantum computing, it enables large quantum circuits to be simulated faster. Quantum researchers can use it to simulate areas such as near-term variational quantum algorithms for molecules and error correction algorithms for identifying fault tolerance, and to accelerate popular quantum simulators from Atos, Google, and IBM.
  • CUDA-X accelerated DGL container – Helps developers and data scientists working on graph neural networks to quickly set up a working environment. The container makes it easy to work in an integrated, GPU-accelerated GNN environment combining DGL and PyTorch.

Some existing AI-related SDKs that have received notable updates are:

  • DeepStream 6.0 – introduces a new graph composer that makes computer vision accessible with a visual drag-and-drop interface.
  • Triton 2.15, TensorRT 8.2, and cuDNN 8.4 – assist with the development of deep neural networks by providing new optimisations for large language models and inference acceleration for gradient-boosted decision trees and random forests.
  • Merlin 0.8 – boosts recommendation systems with its new capabilities for predicting a user’s next action with little or no user data and support for models larger than GPU memory.

Accelerating quantum research

Nvidia has established a partnership with Google, IBM, and a number of small companies, national labs, and university research groups to accelerate quantum research.

“It takes a village to nurture an emerging technology, so Nvidia is collaborating with Google Quantum AI, IBM, and others to take quantum computing to the next level,” explained the company in a blog post.

The first library in the new cuQuantum SDK, called cuStateVec, is Nvidia’s initial contribution to the partnership. It accelerates the state vector simulation method, which tracks the full state of the system in memory and can scale to tens of qubits.
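To make the state vector method concrete: a simulator of this kind stores all 2**n complex amplitudes of an n-qubit system and rewrites them each time a gate is applied. The toy Python sketch below shows the idea for a single-qubit Hadamard gate; it is purely illustrative and does not reflect cuStateVec’s actual API:

```python
import math

# Toy state-vector simulation: an n-qubit state is a list of 2**n
# amplitudes, and applying a single-qubit gate updates amplitudes in
# pairs. Purely illustrative -- this is the method that GPU simulators
# like cuStateVec accelerate, not their actual interface.

def apply_single_qubit_gate(state, gate, target):
    """Apply a 2x2 gate to the `target` qubit of a state vector."""
    new_state = state[:]
    step = 1 << target
    for i in range(len(state)):
        if i & step == 0:          # pair amplitude i with amplitude i|step
            a, b = state[i], state[i | step]
            new_state[i] = gate[0][0] * a + gate[0][1] * b
            new_state[i | step] = gate[1][0] * a + gate[1][1] * b
    return new_state

# Hadamard gate: maps |0> to an equal superposition of |0> and |1>
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

state = apply_single_qubit_gate([1.0, 0.0], H, target=0)
print(state)  # ~[0.707, 0.707]
```

Because the vector doubles with every qubit, a 30-qubit state already holds over a billion amplitudes, which is why memory capacity and GPU acceleration matter at the scales cuStateVec targets.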

cuStateVec has been integrated into Google Quantum AI’s state vector simulator qsim and can be used through the open-source framework Cirq.

“Quantum computing promises to solve tough challenges in computing that are beyond the reach of traditional systems,” commented Catherine Vollgraff Heidweiller at Google Quantum AI.

“This high-performance simulation stack will accelerate the work of researchers around the world who are developing algorithms and applications for quantum computers.”

In December, cuStateVec will also be integrated with Qiskit Aer—a high-performance simulator framework for quantum circuits from IBM.

Among the national labs using cuQuantum to accelerate their research are Oak Ridge, Argonne, Lawrence Berkeley National Laboratory, and Pacific Northwest National Laboratory. University research groups include those at Caltech, Oxford, and MIT.

Nvidia is helping developers to get started by creating a ‘DGX quantum appliance’ that puts its simulation software in a container optimised for its DGX A100 systems. The software will be available early next year via the company’s NGC Catalog.

(Image Credit: Nvidia)

Looking to revamp your digital transformation strategy? Learn more about the Digital Transformation Week event taking place in Amsterdam on 23-24 November 2021 and discover key strategies for making your digital efforts a success.

Paravision boosts its computer vision and facial recognition capabilities
https://www.artificialintelligence-news.com/2021/09/29/paravision-boosts-its-computer-vision-and-facial-recognition-capabilities/
Wed, 29 Sep 2021 13:06:14 +0000

US-based Paravision has announced updates to boost its computer vision and facial recognition capabilities across mobile, on-premise, edge, and cloud deployments.

“From cloud to edge, Paravision’s goal is to help our partners develop and deploy transformative solutions around face recognition and computer vision,” said Joey Pritikin, Chief Product Officer at Paravision.

“With these sweeping updates to our product family, and with what has become possible in terms of accuracy, speed, usability and portability, we see a remarkable opportunity to unite disparate applications with a coherent sense of identity that bridges physical spaces and cyberspace.”

A new Scaled Vector Search (SVS) capability acts as a search engine to provide accurate, rapid, and stable face matching on large databases that may contain tens of millions of identities. Paravision claims the SVS engine supports hundreds of transactions per second with extremely low latencies.
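Face matching at this scale is, at its core, a nearest-neighbour search over embedding vectors: each enrolled face is stored as a fixed-length vector, and a probe face is compared against the gallery with a similarity measure such as cosine similarity. The brute-force Python sketch below shows only the underlying principle; Paravision’s SVS engine is proprietary, and production systems use approximate indexes rather than exhaustive scans:

```python
import heapq
import math

# Brute-force cosine-similarity search over face embeddings.
# Shows the principle only; large-scale engines use approximate
# nearest-neighbour indexes instead of scanning every vector.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def top_k_matches(probe, gallery, k=3):
    """Return the k (identity, score) pairs most similar to the probe."""
    scored = ((cosine(probe, vec), name) for name, vec in gallery.items())
    return [(name, score) for score, name in heapq.nlargest(k, scored)]

# Invented 3-dimensional embeddings; real face embeddings are
# typically hundreds of dimensions.
gallery = {
    "alice": [0.9, 0.1, 0.0],
    "bob":   [0.0, 1.0, 0.2],
    "carol": [0.8, 0.3, 0.1],
}
print(top_k_matches([0.85, 0.2, 0.05], gallery, k=2))  # alice, then carol
```

The engineering challenge in a system like SVS is doing this over tens of millions of identities at hundreds of queries per second, which is where indexing and hardware acceleration come in.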

Another scaling solution called Streaming Container 5 enables the processing of video at over 250 frames per second from any number of streams. The solution features advanced face tracking to ensure that identities remain accurate even in busy environments.

With more enterprises than ever looking to the latency-busting and privacy-enhancing benefits of edge computing, Paravision has partnered with Teknique to co-create a series of hardware and software reference designs that enable the rapid development of face recognition and computer vision capabilities at the edge.

Teknique is a leader in the development of hardware based on designs from California-based fabless semiconductor company Ambarella.

Paravision’s Face SDK has been enhanced for smart cameras powered by Ambarella CVflow chipsets. The update enables facial recognition on CVflow-powered cameras to achieve up to 40 frames per second full pipeline performance.

A new Liveness and Anti-spoofing SDK also adds new safeguards for Ambarella-powered facial recognition solutions. The toolkit uses Ambarella’s visible light, near-infrared, and depth-sensing capabilities to determine whether the camera is seeing a live subject or whether it’s being tricked by recorded footage or a dummy image.

On the mobile side, Paravision has released its Face SDK for Android. The SDK includes face detection, landmarks, quality assessment, template creation, and 1-to-1 or 1-to-many matching. Reference applications with UI/UX recommendations and tools are also included.

Last but certainly not least, Paravision has announced the availability of its first person-level computer vision SDK. The new SDK is designed to go “beyond face recognition” to detect the presence and position of individuals and unlock new use cases.

Provided examples of real-world applications for the computer vision SDK include occupancy analysis, the ability to spot tailgating, and custom intention or subject attributes.

“With Person Detection, users could determine whether employees are allowed access to a specific area, are wearing a mask or hard hat, or appear to be in distress,” the company explains. “It can also enable useful business insights such as metrics about queue times, customer throughput or to detect traveller bottlenecks.”

With these extensive updates, Paravision is securing its place as one of the most exciting companies in the AI space.

Paravision is ranked the US leader across several of NIST’s Face Recognition Vendor Test evaluations including 1:1 verification, 1:N identification, performance for paperless travel, and performance with face masks.

(Photo by Daniil Kuželev on Unsplash)

Find out more about Digital Transformation Week North America, taking place on 9-10 November 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.

Luca Boschin, CEO, VISUA: The complex and diverse world of Visual-AI
https://www.artificialintelligence-news.com/2021/08/17/luca-boschin-ceo-visua-the-complex-and-diverse-world-of-visual-ai/
Tue, 17 Aug 2021 13:30:00 +0000

AI News sat down with Luca Boschin, CEO and co-founder of Visual-AI solutions firm VISUA, to discuss the growth of the company’s offering in recent years and the latest trends in visual artificial intelligence.

AI News: What unique solutions do VISUA bring to the AI industry?

Luca Boschin: VISUA has applied Visual-AI (also known as computer vision or vision AI) to numerous use cases since our inception in 2016. This started with brand monitoring, where we process hundreds of millions of images per month along with tens of thousands of hours of video to find brands mentioned visually, be that through a logo in an image or a brand name appearing in a video.

We also combine this with object and scene detection and visual search to extract key visual signals. For instance, it’s one thing knowing that Budweiser appears in 500,000 images in a month, but what is really critical to know is where Budweiser shows up. How often is it next to food? How often is it with football on the TV in the background? Does Corona show up more outdoors than indoors? This kind of data is really useful for marketers.

Recently however, we have adapted our tech stack for highly specific tasks, like sponsorship monitoring in live video feeds, counterfeit product detection, copyright infringement detection, and digital piracy monitoring. Most recent of all, we’ve added visual authentication of holograms and the detection of graphical attack vectors in phishing attacks.

In each of these use cases we look for what we call ‘visual signals’. This is the important unstructured data that is locked in visual media. Our Visual-AI can extract that data and report on it, delivering the insights required for each specific use case.
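The co-occurrence statistics Boschin describes (which brands show up alongside which scenes) can be illustrated in a few lines of Python; the detection data below is entirely invented for the example:

```python
from collections import Counter

# Aggregate per-image detections into brand/context co-occurrence
# counts -- the kind of "visual signal" described above. The detections
# here are invented, purely for illustration.

detections = [
    {"brands": {"budweiser"}, "scene": {"food", "indoor"}},
    {"brands": {"budweiser"}, "scene": {"football", "tv", "indoor"}},
    {"brands": {"corona"},    "scene": {"beach", "outdoor"}},
    {"brands": {"corona"},    "scene": {"outdoor", "food"}},
]

def co_occurrence(brand, detections):
    """Count which scene/object labels appear alongside a brand."""
    counts = Counter()
    for det in detections:
        if brand in det["brands"]:
            counts.update(det["scene"])
    return counts

print(co_occurrence("corona", detections))  # "outdoor" counted twice
```

At production scale the same aggregation runs over hundreds of millions of images, with the brand and scene labels coming from logo detection and scene classification models rather than hand-written dictionaries.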

AN: What are some of the latest developments at VISUA?

LB: We recently added holographic authentication to our offering in partnership with De La Rue. Holograms have really revolutionised the world of brand protection because they allow brands to inexpensively provide a visual cue of their authenticity. But perhaps because of their popularity, bad actors started to create fake holograms to go with the fake products. These fake holograms were virtually indistinguishable to the naked eye without specific training or a genuine comparison. De La Rue, a key leader in the area of hologram labelling, needed a way to solve this and having reviewed many different offerings, chose VISUA to help them deliver a solution for quickly and automatically authenticating holograms. Just point a smartphone at the hologram and it will tell you if the product is genuine or fake within a few seconds.

Secondly, we’re really proud of the work we’ve done in cyber security, and particularly phishing detection. It’s amazing that bad actors are also using AI. But they use it to make detection difficult. Most recently they’re also using graphics to confuse victims and hiding trigger words. That makes these elements really difficult to catch. So platform providers, managed detection & response companies, and threat intelligence services all need more data and early warning systems to allow them to quarantine suspicious emails and websites for deeper analysis. Our Visual-AI provides that to them.

AN: What are VISUA’s plans for the coming year?

LB: We’re working really hard in the cyber security space. There is a great deal of interest in tackling this growing issue of graphical attack vectors. So, we’re working with various companies in this space to help them detect and block malicious content more effectively. Meanwhile, we are looking at other possible opportunities and verticals to see which are the most viable and worthwhile to pursue. I am sure that other AI providers feel the same way, but our problem is not one of finding opportunities, rather it’s identifying which are the best opportunities to pursue among the many that present themselves to us.

AN: What trends are VISUA noticing in the AI industry?

LB: Too often we see companies underestimating the complexity of computer vision. Companies like Microsoft, Google and Amazon offer APIs that allow you to access impressive computer vision technologies. But in most cases these are either limited in their ability to be adapted or require extensive knowledge to implement. They might look ‘off-the-shelf’ but in reality, they’re not even Lego blocks; they’re the plans that allow you to mould the Lego blocks to build your system.

We took a decision from the start to not be an API company. There’s a good reason for that. It’s relatively easy to build a prototype using off-the-shelf solutions. It gets shown to the board and everyone claps and says, ‘OK, let’s create the full production version’. That’s when things go wrong because scaling computer vision so that it’s accurate, efficient and cost-effective is really hard! Several clients tried doing it themselves. A year later, with lots of wasted budget and lost opportunity, they admitted defeat and approached us for help. Within weeks they were operational!

API companies tend not to offer good support. If you need help you pay a lot of money for an extra support pack or you hire in a consultancy firm to help. Even then, if you haven’t set the parameters and brief for the project, the wheels can come off really fast. We saw this issue early on and we kind of bucked the trend. We saw that companies wanted to implement Visual-AI, but their brief needed padding out. Sometimes they didn’t even know what questions to ask themselves to develop their brief. That’s where we come in. We are the Visual-AI experts and can help guide these projects to success. We’re not consultants, and that’s not the main focus of our engagement, but we recognised that that’s what companies needed, just as much as access to our API.

AN: What does VISUA plan to discuss at TechEx Global?

LB: We’re participating in the Cyber Security & Cloud Expo and our marketing director, Franco De Bonis, will be talking about the growing threat of graphical attack vectors and how Visual-AI can help mitigate or even eliminate that threat. But although we’re there for the cyber security event, we love discussing all things computer vision and we love a challenge. So if someone reading this has a particularly gnarly project that they think Visual-AI could fix, come and find us!

You can now buy AI technologies from TikTok
https://www.artificialintelligence-news.com/2021/07/05/you-can-now-buy-ai-technologies-from-tiktok/
Mon, 05 Jul 2021 12:15:09 +0000

The post You can now buy AI technologies from TikTok appeared first on AI News.

]]>
From the company’s owner, not your favourite TikTok influencer.

Behind every successful TikTok video is a bunch of clever algorithms helping to make it a viral sensation. The company’s owner, ByteDance, launched a new division last month called BytePlus which sells TikTok’s AI technologies.

Up for grabs is the recommendation algorithm behind the For You feed, computer vision tech, automatic speech-to-text and text-to-speech, data analysis tools, and more.

A look at the division’s website shows that it has already generated interest from major players including WeBuy, GOAT, and Wego.

Wego is one of the provided case studies and claims to have improved the relevance of its search results using BytePlus Recommend’s machine learning algorithm. The company reportedly increased its conversions per user by 40 percent.

The battle-tested recommendation engine will probably generate the most interest of all the current offerings from BytePlus.

On TikTok, by (somewhat creepily) keeping tabs on just about everything you do on the platform – including the videos you like or comment on, the hashtags you use, your device type, and your location – the Recommend algorithm behind the For You feed can make some scarily accurate assumptions.
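The model itself is proprietary, but the general shape of this kind of engagement-signal scoring can be sketched. Everything below (the signal names, weights, and record fields) is invented for illustration; it is not BytePlus Recommend’s actual system:

```python
# Hypothetical sketch of engagement-signal scoring for a "For You"-style
# feed. Signals and weights are illustrative assumptions only.

def score_video(video, user):
    """Weight a candidate video by how well it matches the user's history."""
    weights = {"liked_creator": 3.0, "hashtag_overlap": 2.0, "same_region": 0.5}
    score = 0.0
    if video["creator"] in user["liked_creators"]:
        score += weights["liked_creator"]
    # Each shared hashtag adds to the score.
    score += weights["hashtag_overlap"] * len(set(video["hashtags"]) & set(user["hashtags"]))
    if video["region"] == user["region"]:
        score += weights["same_region"]
    return score

def recommend(candidates, user, k=2):
    """Return the top-k highest-scoring candidate videos."""
    return sorted(candidates, key=lambda v: score_video(v, user), reverse=True)[:k]

user = {"liked_creators": {"alice"}, "hashtags": {"surfing"}, "region": "UK"}
videos = [
    {"id": 1, "creator": "alice", "hashtags": ["surfing"], "region": "UK"},
    {"id": 2, "creator": "bob", "hashtags": ["cooking"], "region": "US"},
    {"id": 3, "creator": "carol", "hashtags": ["surfing"], "region": "UK"},
]
top = recommend(videos, user)
print([v["id"] for v in top])  # → [1, 2] ordering by score: [1, 3]
```

In a production system the weights would be learned from engagement data rather than hand-set, and the candidate pool would be filtered long before scoring.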

There have been dozens, if not hundreds, of cases where people claim TikTok’s algorithm knew their sexuality or certain mental health conditions before they did and guided them towards relevant communities of people.

BytePlus will be competing against players with large resources including Microsoft, Amazon, IBM, Google, and others. Given that some governments have expressed concern that TikTok could be used by the Chinese state to collect data about their citizens and/or influence their decisions, many companies outside of China may be wary about using BytePlus’ solutions.

(Photo by Solen Feyissa on Unsplash)

Find out more about Digital Transformation Week North America, taking place on November 9-10 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.

The post You can now buy AI technologies from TikTok appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2021/07/05/you-can-now-buy-ai-technologies-from-tiktok/feed/ 0
Razer and ClearBot are using AI and robotics to clean the oceans https://www.artificialintelligence-news.com/2021/06/08/razer-clearbot-using-ai-robotics-clean-oceans/ https://www.artificialintelligence-news.com/2021/06/08/razer-clearbot-using-ai-robotics-clean-oceans/#respond Tue, 08 Jun 2021 08:59:40 +0000 http://artificialintelligence-news.com/?p=10658 Razer has partnered with marine waste cleaning startup ClearBot to advance the use of AI and robotics to reduce ocean pollution. The pair announced their partnership in celebration of World Oceans Day and is part of Razer’s 10-year #GoGreenWithRazer campaign that will see the company make green investments to support environment- and sustainability-focused startups. Patricia... Read more »

The post Razer and ClearBot are using AI and robotics to clean the oceans appeared first on AI News.

]]>
Razer has partnered with marine waste cleaning startup ClearBot to advance the use of AI and robotics to reduce ocean pollution.

The pair announced their partnership in celebration of World Oceans Day. The collaboration is part of Razer’s 10-year #GoGreenWithRazer campaign, which will see the company make green investments to support environment- and sustainability-focused startups.

Patricia Liu, Chief of Staff at Razer, said:

“We are extremely happy to have the opportunity to work with a startup focused on saving the environment.

ClearBot’s unique AI and advanced machine learning technology will enable and empower governments and organisations around the world to broaden their sustainability efforts.

We urge other innovative startups to reach out to Razer for collaboration opportunities as we strive to make the world a safer place for future generations.”

Around eight million metric tons of plastic are dumped into the oceans each year. For perspective, that’s about 17.6 billion pounds’ worth, or the equivalent of 57,000 blue whales.

As wave action and sun exposure break these plastics into smaller pieces, the resulting microplastics end up in our food chain while also releasing chemicals that further contaminate the sea.

Hypoxic ‘dead’ zones – areas of such low oxygen concentration that animal life suffocates and dies – are on the increase. In 2004, scientists found 146 hypoxic zones. By 2008, that number had swelled to 405. In 2017, scientists found a dead zone in the Gulf of Mexico equivalent to the size of New Jersey.

The team behind ClearBot designs robots that leverage AI-powered computer vision to identify marine waste and retrieve it for responsible disposal.

Sidhant Gupta, Chief Executive Officer at ClearBot, commented:

“The Razer team’s action-oriented approach to solving marine waste issues was extremely eye-opening. We are grateful to the team who volunteered their time for this project.

With the new model, we’re confident in extending our reach globally to protect marine waters, starting with partners which include marine harbour operators in Asia and NGOs who have already expressed interest.

Together with Razer, we look forward to effecting positive change for the world.”

ClearBot is calling on the community to upload photos of marine plastic waste commonly found in open waters to their website that will be used to help improve the robot’s waste-detection AI algorithm.

You can find out more about World Oceans Day from the United Nations’ website here.

The post Razer and ClearBot are using AI and robotics to clean the oceans appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2021/06/08/razer-clearbot-using-ai-robotics-clean-oceans/feed/ 0
Salesforce-backed AI project SharkEye aims to protect beachgoers https://www.artificialintelligence-news.com/2020/11/24/salesforce-ai-project-sharkeye-protect-beachgoers/ https://www.artificialintelligence-news.com/2020/11/24/salesforce-ai-project-sharkeye-protect-beachgoers/#comments Tue, 24 Nov 2020 13:32:04 +0000 http://artificialintelligence-news.com/?p=10050 Salesforce is backing an AI project called SharkEye which aims to save the lives of beachgoers from one of the sea’s deadliest predators. Shark attacks are, fortunately, quite rare. However, they do happen and most cases are either fatal or cause life-changing injuries. Just last week, a fatal shark attack in Australia marked the eighth... Read more »

The post Salesforce-backed AI project SharkEye aims to protect beachgoers appeared first on AI News.

]]>
Salesforce is backing an AI project called SharkEye which aims to save the lives of beachgoers from one of the sea’s deadliest predators.

Shark attacks are, fortunately, quite rare. However, they do happen and most cases are either fatal or cause life-changing injuries.

Just last week, a fatal shark attack in Australia marked the eighth of the year, the country’s highest annual death toll in almost a century. Meanwhile, once-rare sightings at Southern California beaches are becoming increasingly common as sharks favour the warmer waters close to shore.

Academics from the University of California and San Diego State University have teamed up with AI researchers from Salesforce to create software which can spot when sharks are swimming around popular beach destinations.

Sharks are currently tracked – when they’re tracked at all – either by keeping tabs on tagged animals online or by someone on a paddleboard keeping an eye out. It’s an inefficient system ripe for some AI innovation.

SharkEye uses drones to spot sharks from above. The drones fly preprogrammed paths at a height of around 120 feet to cover large areas of the ocean while preventing marine life from being disturbed.

If a shark is spotted, a message can be sent instantly to people including lifeguards, surf instructors, and beachside homeowners to take necessary action. Future alerts could also be sent directly to beachgoers who’ve signed up for them or pushed via social channels.

The drone footage is also feeding further research into movement patterns. The researchers hope that by combining it with data such as ocean temperature and the movement of other marine life, an AI will be able to predict when and where sharks are most likely to be in areas that may pose a danger to people.
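A prediction layer of that kind could be as simple as a weighted combination of environmental signals. The sketch below is purely illustrative: SharkEye’s actual model is not public, and every factor, weight, and threshold here is an assumption.

```python
# Illustrative only: SharkEye's real model isn't public. The factors,
# weights, and thresholds below are assumptions for this sketch.

def shark_risk(water_temp_c, recent_sightings, prey_activity):
    """Fold simple environmental signals into a 0-1 risk score."""
    # Warmer near-shore water (assumed 14-24 C band) raises the score.
    temp_factor = max(0.0, min(1.0, (water_temp_c - 14.0) / 10.0))
    # Recent drone-confirmed sightings; contribution saturates at five.
    sighting_factor = min(1.0, recent_sightings / 5.0)
    # Activity of prey species (e.g. seals), already normalised to 0-1.
    prey_factor = max(0.0, min(1.0, prey_activity))
    return 0.4 * temp_factor + 0.4 * sighting_factor + 0.2 * prey_factor

def alert_needed(risk, threshold=0.6):
    """Notify lifeguards and subscribers once risk crosses a threshold."""
    return risk >= threshold

# Warm water, several sightings, active prey: alert fires.
print(alert_needed(shark_risk(22.0, 4, 0.8)))  # → True
```

A real system would learn these weights from historical sighting data rather than fixing them by hand, but the alerting pipeline around the score would look much the same.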

SharkEye is still considered to be in its pilot stage but has been tested for the past two summers at Padaro Beach in Santa Barbara County.

A shark is suspected to have bitten a woman at Padaro Beach over the summer, when the team wasn’t flying a drone due to the coronavirus shutdown. Fortunately, her injuries were minor. However, a 26-year-old man was killed in a shark attack a few hours north in Santa Cruz just eight days later.

Attacks can also lead to sharks being killed or injured in bids to protect human life. Using AI to help find safer ways for sharks and humans to share the water can only be a good thing.

(Photo by Laura College on Unsplash)

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

The post Salesforce-backed AI project SharkEye aims to protect beachgoers appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2020/11/24/salesforce-ai-project-sharkeye-protect-beachgoers/feed/ 1
Microsoft’s new AI auto-captions images for the visually impaired https://www.artificialintelligence-news.com/2020/10/19/microsoft-new-ai-auto-captions-images-visually-impaired/ https://www.artificialintelligence-news.com/2020/10/19/microsoft-new-ai-auto-captions-images-visually-impaired/#respond Mon, 19 Oct 2020 11:07:34 +0000 http://artificialintelligence-news.com/?p=9957 A new AI from Microsoft aims to automatically caption images in documents and emails so that software for visual impairments can read it out. Researchers from Microsoft explained their machine learning model in a paper on preprint repository arXiv. The model uses VIsual VOcabulary pre-training (VIVO) which leverages large amounts of paired image-tag data to... Read more »

The post Microsoft’s new AI auto-captions images for the visually impaired appeared first on AI News.

]]>
A new AI from Microsoft aims to automatically caption images in documents and emails so that screen-reading software for people with visual impairments can read them aloud.

Researchers from Microsoft explained their machine learning model in a paper on preprint repository arXiv.

The model uses VIsual VOcabulary pre-training (VIVO), which leverages large amounts of paired image-tag data to learn a visual vocabulary.

A second dataset of properly captioned images is then used to teach the AI how best to describe the pictures.
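The full details are in the paper, but the two-stage idea can be caricatured in a few lines. The toy sketch below is not Microsoft’s implementation: stage one is stood in for by per-tag feature averaging and stage two by a single sentence template, purely to show why objects absent from the caption data can still be named.

```python
from math import sqrt

# Toy caricature of the two-stage idea, not Microsoft's implementation.
# Stage 1 "learns a visual vocabulary" by averaging the feature vectors
# of every image carrying a tag; stage 2 is reduced to one sentence
# template standing in for patterns learned from captioned data.

def learn_visual_vocabulary(tagged_images):
    """Stage 1: one embedding per tag from cheap image-tag pairs."""
    by_tag = {}
    for features, tags in tagged_images:
        for tag in tags:
            by_tag.setdefault(tag, []).append(features)
    return {tag: [sum(col) / len(col) for col in zip(*vecs)]
            for tag, vecs in by_tag.items()}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def caption(features, vocab, template="a photo of a {}"):
    """Stage 2 (vastly simplified): name the nearest tag, fill the template."""
    best = max(vocab, key=lambda tag: cosine(features, vocab[tag]))
    return template.format(best)

tagged = [([1.0, 0.0, 0.0], ["dog"]),
          ([0.9, 0.1, 0.0], ["dog"]),
          ([0.0, 1.0, 0.0], ["cat"])]
vocab = learn_visual_vocabulary(tagged)
print(caption([0.95, 0.05, 0.0], vocab))  # → a photo of a dog
```

The point of the split is that a tag like "dog" never needs to appear in the caption training set: the vocabulary learned in stage one lets the captioner name it anyway, which is exactly what the nocaps benchmark measures.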

“Ideally, everyone would include alt text for all images in documents, on the web, in social media – as this enables people who are blind to access the content and participate in the conversation. But, alas, people don’t,” said Saqib Shaikh, a software engineering manager with Microsoft’s AI platform group.

Overall, the researchers expect the AI to deliver twice the performance of Microsoft’s existing captioning system.

In order to benchmark the performance of their new AI, the researchers entered it into the ‘nocaps’ challenge. At the time of writing, Microsoft’s AI ranks first on the leaderboard.

“The nocaps challenge is really how are you able to describe those novel objects that you haven’t seen in your training data?” commented Lijuan Wang, a principal research manager in Microsoft’s research lab.

Developers wanting to get started with building apps using Microsoft’s auto-captioning AI can already do so as it’s available in Azure Cognitive Services’ Computer Vision package.

Microsoft’s impressive SeeingAI application – which uses computer vision to describe an individual’s surroundings for people suffering from vision loss – will be updated with features using the new AI.

“Image captioning is one of the core computer vision capabilities that can enable a broad range of services,” said Xuedong Huang, Microsoft CTO of Azure AI Cognitive Services.

“We’re taking this AI breakthrough to Azure as a platform to serve a broader set of customers,” Huang continued. “It is not just a breakthrough on the research; the time it took to turn that breakthrough into production on Azure is also a breakthrough.”

The improved auto-captioning feature is also expected to be available in Outlook, Word, and PowerPoint later this year.

(Photo by K8 on Unsplash)

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

The post Microsoft’s new AI auto-captions images for the visually impaired appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2020/10/19/microsoft-new-ai-auto-captions-images-visually-impaired/feed/ 0