TechEx Archives - AI News

Igor Jablokov, Pryon: Building a responsible AI future (25 April 2024)

As artificial intelligence continues to rapidly advance, ethical concerns around the development and deployment of these world-changing innovations are coming into sharper focus.

In an interview ahead of the AI & Big Data Expo North America, Igor Jablokov, CEO and founder of AI company Pryon, addressed these pressing issues head-on.

Critical ethical challenges in AI

“There’s not one, maybe there’s almost 20 plus of them,” Jablokov stated when asked about the most critical ethical challenges. He outlined a litany of potential pitfalls that must be carefully navigated: from AI hallucinations and the generation of falsehoods, to data privacy violations and intellectual property leaks from training on proprietary information.

Bias and adversarial content seeping into training data is another major worry, according to Jablokov. Security vulnerabilities like embedded agents and prompt injection attacks also rank highly on his list of concerns, as well as the extreme energy consumption and climate impact of large language models.

Pryon’s origins can be traced back to the earliest stirrings of modern AI over two decades ago. Jablokov previously led an advanced AI team at IBM where they designed a primitive version of what would later become Watson. “They didn’t greenlight it. And so, in my frustration, I departed, stood up our last company,” he recounted. That company, also called Pryon at the time, went on to become Amazon’s first AI-related acquisition, birthing what’s now Alexa.

The current incarnation of Pryon has aimed to confront AI’s ethical quandaries through responsible design focused on critical infrastructure and high-stakes use cases. “[We wanted to] create something purposely hardened for more critical infrastructure, essential workers, and more serious pursuits,” Jablokov explained.

A key element is offering enterprises flexibility and control over their data environments. “We give them choices in terms of how they’re consuming their platforms…from multi-tenant public cloud, to private cloud, to on-premises,” Jablokov said. This allows organisations to ring-fence highly sensitive data behind their own firewalls when needed.

Pryon also emphasises explainable AI and verifiable attribution of knowledge sources. “When our platform reveals an answer, you can tap it, and it always goes to the underlying page and highlights exactly where it learned a piece of information from,” Jablokov described. This allows human validation of the knowledge provenance.
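The mechanism Jablokov describes can be pictured as retrieval that never separates an answer from its source location. The toy sketch below (Python, with entirely hypothetical names; Pryon's actual implementation is not public) shows the idea: every answer carries the document ID and character offsets a human reviewer would need to jump to the underlying page and verify the highlighted span.

```python
# Toy illustration of verifiable attribution: an answer is only ever returned
# together with a pointer to the exact source span it was drawn from.
# All names here are hypothetical, not Pryon's real API.
from dataclasses import dataclass

@dataclass
class AttributedAnswer:
    text: str     # span shown to the user
    doc_id: str   # which source document it came from
    start: int    # character offsets of the supporting span,
    end: int      # so a reviewer can highlight it on the page

def answer_with_provenance(query: str, corpus: dict) -> "AttributedAnswer | None":
    """Naive keyword lookup that records exactly where the answer was found."""
    needle = query.lower()
    for doc_id, body in corpus.items():
        pos = body.lower().find(needle)
        if pos != -1:
            return AttributedAnswer(body[pos:pos + len(needle)],
                                    doc_id, pos, pos + len(needle))
    return None

corpus = {"manual_p12": "The pump must be vented before restart."}
answer = answer_with_provenance("vented before restart", corpus)
print(answer.doc_id)  # the reviewer can jump straight to this document
```

A real system would use semantic retrieval rather than keyword matching, but the contract is the same: no answer without a verifiable source span.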

In some realms like energy, manufacturing, and healthcare, Pryon has implemented human-in-the-loop oversight before AI-generated guidance goes to frontline workers. Jablokov pointed to one example where “supervisors can double-check the outcomes and essentially give it a badge of approval” before information reaches technicians.
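The approval workflow Jablokov sketches amounts to a simple gate: AI-generated guidance is queued, and only items a supervisor has signed off on are released to frontline workers. A minimal sketch, with illustrative data and field names of my own:

```python
# Human-in-the-loop gate: AI-generated guidance waits in a queue until a
# supervisor grants it a "badge of approval". Structure is illustrative only.
def release_approved(queue: list) -> list:
    """Only guidance carrying a supervisor's approval reaches technicians."""
    return [item["guidance"] for item in queue if item["approved"]]

queue = [
    {"guidance": "Replace the valve seal before restart", "approved": True},
    {"guidance": "Bypass the safety interlock", "approved": False},  # held back
]
print(release_approved(queue))  # only the approved item is released
```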

Ensuring responsible AI development

Jablokov strongly advocates for new regulatory frameworks to ensure responsible AI development and deployment. While welcoming the White House’s recent executive order as a start, he expressed concerns about risks around generative AI like hallucinations, static training data, data leakage vulnerabilities, lack of access controls, copyright issues, and more.  

Pryon has been actively involved in these regulatory discussions. “We’re back-channelling to a mess of government agencies,” Jablokov said. “We’re taking an active hand in terms of contributing our perspectives on the regulatory environment as it rolls out…We’re showing up by expressing some of the risks associated with generative AI usage.”

On the potential for an uncontrolled, existential “AI risk” – as has been warned about by some AI leaders – Jablokov struck a relatively sanguine tone about Pryon’s governed approach: “We’ve always worked towards verifiable attribution…extracting out of enterprises’ own content so that they understand where the solutions are coming from, and then they decide whether they make a decision with it or not.”

The CEO firmly distanced Pryon’s mission from the emerging crop of open-ended conversational AI assistants, some of which have raised controversy around hallucinations and lacking ethical constraints.

“We’re not a clown college. Our stuff is designed to go into some of the more serious environments on planet Earth,” Jablokov stated bluntly. “I think none of you would feel comfortable ending up in an emergency room and having the medical practitioners there typing in queries into a ChatGPT, a Bing, a Bard…”

He emphasised the importance of subject matter expertise and emotional intelligence when it comes to high-stakes, real-world decision-making. “You want somebody that has hopefully many years of experience treating things similar to the ailment that you’re currently undergoing. And guess what? You like the fact that there is an emotional quality that they care about getting you better as well.”

At the upcoming AI & Big Data Expo, Pryon will unveil new enterprise use cases showcasing its platform across industries like energy, semiconductors, pharmaceuticals, and government. Jablokov teased that they will also reveal “different ways to consume the Pryon platform” beyond the end-to-end enterprise offering, including potentially lower-level access for developers.

As AI’s domain rapidly expands from narrow applications to more general capabilities, addressing the ethical risks will become only more critical. Pryon’s sustained focus on governance, verifiable knowledge sources, human oversight, and collaboration with regulators could offer a template for more responsible AI development across industries.

You can watch our full interview with Igor Jablokov below:

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Igor Jablokov, Pryon: Building a responsible AI future appeared first on AI News.

Rishabh Mehrotra, research lead, Spotify: Multi-stakeholder thinking with AI (24 September 2021)

Streaming behemoth Spotify hosts more than seventy million songs and close to three million podcast titles on its platform.


Delivering this without artificial intelligence (AI) would be comparable to traversing the Amazon rainforest armed with nothing but a spoon.

To cut – or scoop – through this jungle of music, Spotify’s research team deploy hundreds of machine learning models that improve the user experience, all the while trying to balance the needs of users and creators.

AI News caught up with Spotify research lead Rishabh Mehrotra at the AI & Big Data Expo Global on September 7 to learn more about how AI supports the platform.

AI News: How important is AI to Spotify’s mission?


Rishabh Mehrotra: AI is at the centre of what we do. Machine learning (ML) specifically has become an indispensable tool for powering personalised music and podcast recommendations to more than 365 million users across the world. It enables us to understand user needs and intents, which then helps us to deliver personalised recommendations across various touch points on the app.

It’s not just about the actual models which we deploy in front of users but also the various AI techniques we use to adopt a data driven process around experimentation, metrics, and product decisions.

We use a broad range of AI methods to understand our listeners, creators, and content. Some of our core ML research areas include understanding user needs and intents, matching content and listeners, balancing user and creator needs, using natural language understanding and multimedia information retrieval methods, and developing models that optimise long term rewards and recommendations.

What’s more, our models power experiences across around 180 countries, so we have to consider how they are performing across markets. Striking a balance between pushing global music but still facilitating local musicians and music culture is one of our most important AI initiatives.

AN: Spotify users might be surprised to learn just how central AI is to almost every aspect of the platform’s offering. It’s so seamless that I suspect most people don’t even realise it’s there. How crucial is AI to the user experience on Spotify?

RM: If you look at Spotify as a user then you typically view it as an app which gives you the content that you’re looking for. However, if you really zoom in you see that each of these different recommendation tools are all different machine learning products. So if you look at the homepage, we have to understand user intent in a far more subtle way than we would with a search query. The homepage is about giving recommendations based on a user’s current needs and context, which is very different from a search query where users are explicitly asking for something. Even in search, users can seek open and non-focused queries like ‘relaxing music’, or you could be searching the name of a specific song.

Looking at sequential radio sessions, our models try to balance familiar music with discovery content, aimed at not only recommending content users could enjoy at the moment, but optimising for long term listener-artist connections.

A good amount of our ML models are starting to become multi-objective. Over the past two years, we have deployed a lot of models that try to fulfil listener needs whilst also enabling creators to connect with and grow their audiences.

AN: Are artists’ wants and needs a big consideration for Spotify or is the focus primarily on the user experience?

RM: Our goal is to match the creators with the fans in an enriching way. While understanding user preferences is key to the success of our recommendation models, it really is a two-sided market in a lot of ways. We have the users who want to consume audio content on one side and the creators looking to grow their audiences on the other. Thus a lot of our recommendation products have a multi-stakeholder thinking baked into them to balance objectives from both sides.
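One common way to bake multi-stakeholder thinking into a ranker is to score each candidate on both sides of the marketplace and blend the objectives. The sketch below is a deliberately simplified illustration of that idea (the weights, field names, and linear blend are my own assumptions, not Spotify's models):

```python
# Minimal multi-objective ranking sketch for a two-sided marketplace:
# each candidate gets a listener-relevance score and a creator-benefit score,
# and the final ranking blends the two. Everything here is illustrative.
def blended_score(candidate: dict,
                  user_weight: float = 0.7,
                  creator_weight: float = 0.3) -> float:
    """Scalarise the two objectives into a single ranking score."""
    return (user_weight * candidate["relevance"]         # listener satisfaction
            + creator_weight * candidate["creator_boost"])  # artist exposure

def rank(candidates: list) -> list:
    return sorted(candidates, key=blended_score, reverse=True)

tracks = [
    {"id": "familiar_hit", "relevance": 0.9, "creator_boost": 0.1},
    {"id": "rising_artist", "relevance": 0.6, "creator_boost": 0.9},
]
print([t["id"] for t in rank(tracks)])  # the rising artist wins the blend
```

Production systems learn such trade-offs rather than hand-tuning weights, but the core tension, relevance now versus creator growth, is the same.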

AN: Apart from music recommendations and suggestions, does AI support Spotify in any other ways?

RM: AI plays an important role in driving our ‘algotorial’ approach: expert curators with an excellent sense for what’s up and coming quite literally teach our machine learning system. Through this approach, we can create playlists that not only look at past data but also reflect cultural trends as they’re happening. Across all regions, we have editors who bring in deep domain expertise about music culture that we use proactively in our products. This allows us to develop and deploy human-in-the-loop AI techniques that leverage editorial input to bootstrap decisions that various ML models can then scale.

AN: What about podcasts? Do you utilise AI differently when applying it to podcasts over music?

RM: Users’ podcast journeys can differ in a lot of ways compared to music. While music is a lot about the audio and acoustic properties of songs, podcasts depend on a whole different set of parameters. For one, it’s much more about content understanding – understanding speakers, types of conversations and topics of discussions.

That said, we are seeing some very interesting results using music taste for podcast recommendations too. Members of our group have recently published work that shows how our ML models can leverage users’ music preferences to recommend podcasts, and some of these results have demonstrated significant improvements, especially for new podcast users.

AN: With so many models already turning the cogs at Spotify, it’s difficult to see how new and exciting use cases could be introduced. What are Spotify’s AI plans for the coming years?

RM: We’re working on a number of ways to elevate the experience even further. Reinforcement learning will be an important focus point as we look into ways to optimise for a lifetime of fulfilling content, rather than optimise for the next stream. In a sense, this isn’t about giving users what they want right now; it’s about evolving their tastes and looking at their long-term trajectories.

AN: As the years go on and your models have more and more data to work with, will the AI you use naturally become more advanced?

RM: A lot of our ML investments are not only about incorporating state-of-the-art ML into our products, but also extending the state-of-the-art based on the unique challenges we face as an audio platform. We are developing advanced causal inference techniques to understand the long term impact of our algorithmic decisions. We are innovating in the multi-objective ML modelling space to balance various objectives as part of our two-sided marketplace efforts. We are gravitating towards learning from long term trajectories and optimising for long term rewards.

To make data-driven decisions across all such initiatives, we rely heavily on solid scientific experimentation techniques, which also heavily relies on using machine learning.

Reinforcement learning furthers the scope of longer term decisions – it brings that long term perspective into our recommendations. So a quick example would be facilitating discovery on the platform. As a marketplace platform, we want users to not only consume familiar music but to also discover new music, leveraging the value of recommendations. There are 70 million tracks on the platform and only a few thousand will be familiar to any given user, putting aside the fact that it would take an individual several lifetimes to actually go through all this content. So tapping into that remaining 69.9 million and surfacing content users would love to discover is a key long-term goal for us.

How to fulfil users’ long-term discovery needs, when to surface such discovery content, by how much, for which sets of users, and across which recommended sets: these are a few examples of the higher-abstraction, long-term problems that RL approaches allow us to tackle well.
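The familiarity-versus-discovery trade-off Mehrotra describes can be pictured, in its most stripped-down form, as an exploration policy: with some probability, surface unfamiliar content to build long-term taste; otherwise exploit known preferences. The epsilon-greedy sketch below illustrates only the idea; Spotify's actual RL models are far richer, and the names and numbers here are mine:

```python
# Epsilon-greedy sketch of the familiarity/discovery balance: with probability
# epsilon, recommend from the discovery pool; otherwise exploit known tastes.
import random

def pick_track(familiar: list, discovery: list,
               epsilon: float, rng: random.Random) -> str:
    """Explore the catalogue with probability epsilon, else exploit."""
    pool = discovery if rng.random() < epsilon else familiar
    return rng.choice(pool)

rng = random.Random(0)  # seeded for reproducibility
picks = [pick_track(["known_song"], ["new_song"], epsilon=0.3, rng=rng)
         for _ in range(1000)]
share_discovery = picks.count("new_song") / len(picks)
print(round(share_discovery, 2))  # roughly 0.3, by construction
```

Real systems would additionally decide *when* and *for whom* to explore, which is exactly the class of problem RL is suited to.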

AN: Finally, considering the involvement Spotify has in directing users’ musical experiences, does the company have to factor in any ethical issues surrounding its usage of AI?

RM: Algorithmic responsibility and causal influence are topics we take very seriously and we actively work to ensure our systems operate in a fair and responsible manner, backed by focused research and internal education to prevent unintended biases.

We have a team dedicated to ensuring we approach these topics with the right research-informed rigour and we also share our learnings with the research community.

AN: Is there anything else you would like to share?

RM: On a closing note, one thing I love about Spotify is that we are very open with the wider industry and research community about the advances we are making with AI and machine learning. We actively publish at top tier venues, give tutorials, and we have released a number of large datasets to facilitate academic research on audio recommendations.

For anyone who is interested in learning more about this I would recommend checking out our Spotify Research website which discusses our papers, blogs, and datasets in greater detail.

Find out more about Digital Transformation Week North America, taking place on 9-10 November 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.

The post Rishabh Mehrotra, research lead, Spotify: Multi-stakeholder thinking with AI appeared first on AI News.

Luca Boschin, CEO, VISUA: The complex and diverse world of Visual-AI (17 August 2021)

AI News sat down with Luca Boschin, CEO and co-founder of Visual-AI solutions firm VISUA, to discuss the growth of the company’s offering in recent years and the latest trends in visual artificial intelligence.

AI News: What unique solutions do VISUA bring to the AI industry?


Luca Boschin: VISUA has applied Visual-AI (also known as computer vision or vision AI) to numerous use cases since our inception in 2016. This started with brand monitoring, where we process hundreds of millions of images per month along with tens of thousands of hours of video to find brands mentioned visually, be that through a logo in an image or a brand name appearing in a video.

We also combine this with object and scene detection and visual search to extract key visual signals. For instance, it’s one thing knowing that Budweiser appears in 500,000 images in a month, but what is really critical to know is where Budweiser shows up. How often is it next to food? How often is it with football on the TV in the background? Does Corona show up more outdoors than indoors? This kind of data is really useful for marketers.
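The contextual insights Boschin describes boil down to aggregating co-occurrence statistics over per-image detections: which scene tags show up alongside which logos. A minimal sketch of that aggregation step, with made-up detection labels (the detection models themselves are the hard part and are stubbed out here):

```python
# Turning raw per-image detections into contextual brand signals:
# count which scene tags co-occur with a given logo. Labels are illustrative.
from collections import Counter

detections = [  # one entry per processed image: (logos found, scene tags found)
    ({"budweiser"}, {"food", "indoor"}),
    ({"budweiser"}, {"football", "tv"}),
    ({"corona"}, {"outdoor", "beach"}),
    ({"budweiser", "corona"}, {"food", "outdoor"}),
]

def cooccurrence(brand: str) -> Counter:
    """Count which scene tags appear alongside a given brand's logo."""
    counts = Counter()
    for logos, scenes in detections:
        if brand in logos:
            counts.update(scenes)
    return counts

print(cooccurrence("budweiser")["food"])  # Budweiser appears with food twice
```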

Recently however, we have adapted our tech stack for highly specific tasks, like sponsorship monitoring in live video feeds, counterfeit product detection, copyright infringement detection, and digital piracy monitoring. Most recent of all, we’ve added visual authentication of holograms and the detection of graphical attack vectors in phishing attacks.

In each of these use cases we look for what we call ‘visual signals’. This is the important unstructured data that is locked in visual media. Our Visual-AI can extract that data and report on it, delivering the insights required for each specific use case.


AN: What are some of the latest developments at VISUA?

LB: We recently added holographic authentication to our offering in partnership with De La Rue. Holograms have really revolutionised the world of brand protection because they allow brands to inexpensively provide a visual cue of their authenticity. But perhaps because of their popularity, bad actors started to create fake holograms to go with the fake products. These fake holograms were virtually indistinguishable from genuine ones to the naked eye without specific training or a genuine article to compare against. De La Rue, a key leader in the area of hologram labelling, needed a way to solve this and having reviewed many different offerings, chose VISUA to help them deliver a solution for quickly and automatically authenticating holograms. Just point a smartphone at the hologram and it will tell you if the product is genuine or fake within a few seconds.

Secondly, we’re really proud of the work we’ve done in cyber security, and particularly phishing detection. It’s amazing that bad actors are also using AI. But they use it to make detection difficult. Most recently they’re also using graphics to confuse victims and hiding trigger words. That makes these elements really difficult to catch. So platform providers, managed detection & response companies, and threat intelligence services all need more data and early warning systems to allow them to quarantine suspicious emails and websites for deeper analysis. Our Visual-AI provides that to them.
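One concrete visual phishing signal implied here is a brand's logo rendered in a message whose sender domain does not belong to that brand. The sketch below stubs out the hard part (the logo detector) and shows only the flagging logic; the function names, domain list, and example are hypothetical, not VISUA's API:

```python
# Visual phishing signal sketch: flag messages that display a known brand's
# logo but come from a domain that brand does not own. Detector is stubbed.
KNOWN_BRAND_DOMAINS = {
    "paypal": {"paypal.com"},
    "microsoft": {"microsoft.com"},
}

def visual_phishing_flags(detected_logos: set, sender_domain: str) -> list:
    """Return brands whose logo appears despite a mismatched sender domain."""
    return [brand for brand in detected_logos
            if brand in KNOWN_BRAND_DOMAINS
            and sender_domain not in KNOWN_BRAND_DOMAINS[brand]]

# A message rendering the PayPal logo but sent from a look-alike domain:
print(visual_phishing_flags({"paypal"}, "paypa1-secure.net"))
```

Such a flag would not block a message outright; as described above, it feeds quarantine and deeper analysis by the security platform.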


AN: What are VISUA’s plans for the coming year?

LB: We’re working really hard in the cyber security space. There is a great deal of interest in tackling this growing issue of graphical attack vectors. So, we’re working with various companies in this space to help them detect and block malicious content more effectively. Meanwhile, we are looking at other possible opportunities and verticals to see which are the most viable and worthwhile to pursue. I am sure that other AI providers feel the same way, but our problem is not one of finding opportunities; rather, it’s identifying the best opportunities to pursue among the many that present themselves to us.

AN: What trends are VISUA noticing in the AI industry?

LB: Too often we see companies underestimating the complexity of computer vision. Companies like Microsoft, Google and Amazon offer APIs that allow you to access impressive computer vision technologies. But in most cases these are either limited in their ability to be adapted or require extensive knowledge to implement. They might look ‘off-the-shelf’ but in reality, they’re not even Lego blocks; they’re the plans that allow you to mould the Lego blocks to build your system.

We took a decision from the start to not be an API company. There’s a good reason for that. It’s relatively easy to build a prototype using off-the-shelf solutions. It gets shown to the board and everyone claps and says, ‘OK, let’s create the full production version’. That’s when things go wrong because scaling computer vision so that it’s accurate, efficient, and cost-effective is really hard! Several clients tried doing it themselves. A year later, with lots of wasted budget and lost opportunity, they admitted defeat and approached us for help. Within weeks they were operational!

API companies tend not to offer good support. If you need help you pay a lot of money for an extra support pack or you hire in a consultancy firm to help. Even then, if you haven’t set the parameters and brief for the project, the wheels can come off really fast. We saw this issue early on and we kind of bucked the trend. We saw that companies wanted to implement Visual-AI, but their brief needed padding out. Sometimes they didn’t even know what questions to ask themselves to develop their brief. That’s where we come in. We are the Visual-AI experts and can help guide these projects to success. We’re not consultants, and that’s not the main focus of our engagement, but we recognised that that’s what companies needed, just as much as access to our API.

AN: What does VISUA plan to discuss at TechEx Global?


LB: We’re participating in the Cyber Security & Cloud Expo and our marketing director, Franco De Bonis, will be talking about the growing threat of graphical attack vectors and how Visual-AI can help mitigate or even eliminate that threat. But although we’re there for the cyber security event, we love discussing all things computer vision and we love a challenge. So if someone reading this has a particularly gnarly project that they think Visual-AI could fix, come and find us!

The post Luca Boschin, CEO, VISUA: The complex and diverse world of Visual-AI appeared first on AI News.
