Ethical, trust and skill barriers hold back generative AI progress in EMEA

76% of consumers in EMEA think AI will have a significant impact over the next five years, yet 47% question the value that AI will bring and 41% are worried about its applications.

This is according to research from enterprise analytics AI firm Alteryx.

Since the release of ChatGPT by OpenAI in November 2022, there has been significant buzz about the transformative potential of generative AI, with many considering it one of the most revolutionary technologies of our time. 

Although a significant 79% of organisations report that generative AI contributes positively to business, a gap clearly remains to be addressed in demonstrating AI’s value to consumers in both their personal and professional lives. According to the ‘Market Research: Attitudes and Adoption of Generative AI’ report, which surveyed 690 IT business leaders and 1,100 members of the general public in EMEA, key issues of trust, ethics and skills are prevalent, potentially impeding the successful deployment and broader acceptance of generative AI.

The impact of misinformation, inaccuracies, and AI hallucinations

AI hallucinations – where a model generates incorrect or illogical outputs – are a significant concern, and trusting what generative AI produces is a substantial issue for both business leaders and consumers. Over a third of the public are anxious about AI’s potential to generate fake news (36%) and its misuse by hackers (42%), while half of business leaders report that their organisations are grappling with misinformation produced by generative AI.

Moreover, the reliability of information provided by generative AI has been questioned: half of the general public report receiving inaccurate data from AI, and 38% perceive it as outdated. On the business front, concerns include generative AI infringing on copyright or intellectual property rights (40%) and producing unexpected or unintended outputs (36%).

A critical trust issue for businesses (62%) and the public (74%) revolves around AI hallucinations. For businesses, the challenge involves applying generative AI to appropriate use cases, supported by the right technology and safety measures, to mitigate these concerns. Close to half of the consumers (45%) are advocating for regulatory measures on AI usage.

Ethical concerns and risks persist in the use of generative AI

In addition to these challenges, there are strong and similar sentiments on ethical concerns and the risks associated with generative AI among both business leaders and consumers. More than half of the general public (53%) oppose the use of generative AI in making ethical decisions. Meanwhile, 41% of business respondents are concerned about its application in critical decision-making areas. There are distinctions in the specific areas where its use is discouraged; consumers notably oppose its use in politics (46%), and businesses are cautious about its deployment in healthcare (40%).

These concerns find some validation in the research findings, which highlight worrying gaps in organisational practices. Only a third of leaders confirmed that their businesses ensure the data used to train generative AI is diverse and unbiased. Furthermore, only 36% have set ethical guidelines, and 52% have established data privacy and security policies for generative AI applications.

This lack of emphasis on data integrity and ethical considerations puts firms at risk. 63% of business leaders cite ethics as their major concern with generative AI, closely followed by data-related issues (62%). This scenario emphasises the importance of better governance to create confidence and mitigate risks related to how employees use generative AI in the workplace. 

The rise of generative AI skills and the need for enhanced data literacy

As generative AI evolves, establishing relevant skill sets and enhancing data literacy will be key to realising its full potential. Consumers are increasingly using generative AI technologies in various scenarios, including information retrieval, email communication, and skill acquisition. Business leaders say they use generative AI for data analysis, cybersecurity, and customer support. Despite the reported success of pilot projects, several challenges remain, including security problems, data privacy issues, and output quality and reliability.

Trevor Schulze, Alteryx’s CIO, emphasised the necessity for both enterprises and the general public to fully understand the value of AI and address common concerns as they navigate the early stages of generative AI adoption.

He noted that addressing trust issues, ethical concerns, skills shortages, fears of privacy invasion, and algorithmic bias are critical tasks. Schulze underlined the necessity for enterprises to expedite their data journey, adopt robust governance, and allow non-technical individuals to access and analyse data safely and reliably, addressing privacy and bias concerns in order to genuinely profit from this ‘game-changing’ technology.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Kamal Ahluwalia, Ikigai Labs: How to take your business to the next level with generative AI

AI News caught up with president of Ikigai Labs, Kamal Ahluwalia, to discuss all things gen AI, including top tips on how to adopt and utilise the tech, and the importance of embedding ethics into AI design.

Could you tell us a little bit about Ikigai Labs and how it can help companies?

Ikigai is helping organisations transform sparse, siloed enterprise data into predictive and actionable insights with a generative AI platform specifically designed for structured, tabular data.  

A significant portion of enterprise data is structured, tabular data, residing in systems like SAP and Salesforce. This data drives the planning and forecasting for an entire business. While there is a lot of excitement around Large Language Models (LLMs), which are great for unstructured data like text, Ikigai’s patented Large Graphical Models (LGMs), developed out of MIT, are focused on solving problems using structured data.  

Ikigai’s solution focuses particularly on time-series datasets, as enterprises run on four key time series: sales, products, employees, and capital/cash. Understanding how these time series come together in critical moments, such as launching a new product or entering a new geography, is crucial for making better decisions that drive optimal outcomes. 
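
To make the shape of that problem concrete, here is a minimal, generic sketch of forecasting one such business time series with an off-the-shelf Holt-Winters model from statsmodels. It is purely illustrative of the task; Ikigai’s Large Graphical Models are proprietary and are not implemented this way.

```python
# A minimal, generic sketch of forecasting one business time series
# (monthly sales) with a classical Holt-Winters model from statsmodels.
# Purely illustrative: Ikigai's Large Graphical Models are proprietary
# and are not implemented this way.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic monthly sales with an upward trend and yearly seasonality
idx = pd.date_range("2020-01-01", periods=48, freq="MS")
rng = np.random.default_rng(0)
t = np.arange(48)
sales = pd.Series(
    1000 + 10 * t + 150 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 25, 48),
    index=idx,
)

model = ExponentialSmoothing(sales, trend="add", seasonal="add", seasonal_periods=12)
print(model.fit().forecast(6).round(1))  # forecast the next six months
```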

How would you describe the current generative AI landscape, and how do you envision it developing in the future? 

The technologies that have captured the imagination, such as LLMs from OpenAI, Anthropic, and others, come from a consumer background. They were trained on internet-scale data, and the training datasets are only getting larger, which requires significant computing power and storage. It took $100m to train GPT-4, and GPT-5 is expected to cost $2.5bn. 

This reality works in a consumer setting, where costs can be shared across a very large user set, and some mistakes are just part of the training process. But in the enterprise, mistakes cannot be tolerated, hallucinations are not an option, and accuracy is paramount. Additionally, the cost of training a model on internet-scale data is just not affordable, and companies that leverage a foundational model risk exposure of their IP and other sensitive data.  

While some companies have gone the route of building their own tech stack so LLMs can be used in a safe environment, most organisations lack the talent and resources to build it themselves. 

In spite of the challenges, enterprises want the kind of experience that LLMs provide. But the results need to be accurate – even when the data is sparse – and there must be a way to keep confidential data out of a foundational model. It’s also critical to find ways to lower the total cost of ownership, including the cost to train and upgrade the models, reliance on GPUs, and other issues related to governance and data retention. All of this leads to a very different set of solutions than what we currently have. 

How can companies create a strategy to maximise the benefits of generative AI? 

While much has been written about Large Language Models (LLMs) and their potential applications, many customers are asking “how do I build differentiation?”  

With LLMs, nearly everyone will have access to the same capabilities, such as chatbot experiences or generating marketing emails and content – if everyone has the same use cases, it’s not a differentiator. 

The key is to shift the focus from generic use cases to finding areas of optimisation and understanding specific to your business and circumstances. For example, if you’re in manufacturing and need to move operations out of China, how do you plan for uncertainty in logistics, labour, and other factors? Or, if you want to build more eco-friendly products, materials, vendors, and cost structures will change. How do you model this? 

These use cases are some of the ways companies are attempting to use AI to run their business and plan in an uncertain world. Finding specificity and tailoring the technology to your unique needs is probably the best way to use AI to find true competitive advantage.  

What are the main challenges companies face when deploying generative AI and how can these be overcome? 

Listening to customers, we’ve learned that while many have experimented with generative AI, only a fraction have pushed things through to production due to prohibitive costs and security concerns. But what if your models could be trained just on your own data, running on CPUs rather than requiring GPUs, with accurate results and transparency around how you’re getting those results? What if all the regulatory and compliance issues were addressed, leaving no questions about where the data came from or how much data is being retrained? This is what Ikigai is bringing to the table with Large Graphical Models.  

One challenge we’ve helped businesses address is the data problem. Nearly 100% of organisations are working with limited or imperfect data, and in many cases, this is a barrier to doing anything with AI. Companies often talk about data clean-up, but in reality, waiting for perfect data can hinder progress. AI solutions that can work with limited, sparse data are essential, as they allow companies to learn from what they have and account for change management. 

The other challenge is how internal teams can partner with the technology for better outcomes. Especially in regulated industries, human oversight, validation, and reinforcement learning are necessary. Adding an expert in the loop ensures that AI is not making decisions in a vacuum, so finding solutions that incorporate human expertise is key. 

To what extent do you think adopting generative AI successfully requires a shift in company culture and mindset? 

Successfully adopting generative AI requires a significant shift in company culture and mindset, with strong commitment from executives and continuous education. I saw this firsthand at Eightfold when we were bringing our AI platform to companies in over 140 countries. I always recommend that teams first educate executives on what’s possible, how to do it, and how to get there. They need the commitment to see it through, which involves some experimentation and a committed course of action. They must also understand the expectations placed on colleagues, so they can be prepared for AI becoming a part of daily life.

Top-down commitment and communication from executives go a long way, as there’s a lot of fear-mongering suggesting that AI will take jobs. Executives need to set the tone that, while AI won’t eliminate jobs outright, everyone’s job is going to change in the next couple of years – not just for people at the bottom or middle levels, but for everyone. Ongoing education throughout the deployment is key for teams learning how to get value from the tools and adapting the way they work to incorporate the new skillsets.

It’s also important to adopt technologies that play to the reality of the enterprise. For example, you have to let go of the idea that you need to get all your data in order to take action. In time-series forecasting, by the time you’ve taken four quarters to clean up data, there’s more data available, and it’s probably a mess. If you keep waiting for perfect data, you won’t be able to use your data at all. So AI solutions that can work with limited, sparse data are crucial, as you have to be able to learn from what you have. 

Another important aspect is adding an expert in the loop. It would be a mistake to assume AI is magic. There are a lot of decisions, especially in regulated industries, where you can’t have AI just make the decision. You need oversight, validation, and reinforcement learning – this is exactly how consumer solutions became so good.  

Are there any case studies you could share with us regarding companies successfully utilising generative AI? 

One interesting example is a Marketplace customer that is using us to rationalise their product catalogue. They’re looking to understand the optimal number of SKUs to carry, so they can reduce their inventory carrying costs while still meeting customer needs. Another partner does workforce planning, forecasting, and scheduling, using us for labour balancing in hospitals, retail, and hospitality companies. In their case, all their data is sitting in different systems, and they must bring it into one view so they can balance employee wellness with operational excellence. But because we can support a wide variety of use cases, we work with clients doing everything from forecasting product usage as part of a move to a consumption-based model, to fraud detection. 

You recently launched an AI Ethics Council. What kind of people are on this council and what is its purpose? 

Our AI Ethics Council is all about making sure that the AI technology we’re building is grounded in ethics and responsible design. It’s a core part of who we are as a company, and I’m humbled and honoured to be a part of it alongside such an impressive group of individuals. Our council includes luminaries like Dr. Munther Dahleh, the Founding Director of the Institute for Data Systems and Society (IDSS) and a Professor at MIT; Aram A. Gavoor, Associate Dean at George Washington University and a recognised scholar in administrative law and national security; Dr. Michael Kearns, the National Center Chair for Computer and Information Science at the University of Pennsylvania; and Dr. Michael I. Jordan, a Distinguished Professor at UC Berkeley in the Departments of Electrical Engineering and Computer Science, and Statistics.

The purpose of our AI Ethics Council is to tackle pressing ethical and security issues impacting AI development and usage. As AI rapidly becomes central to consumers and businesses across nearly every industry, we believe it is crucial to prioritise responsible development and cannot ignore the need for ethical considerations. The council will convene quarterly to discuss important topics such as AI governance, data minimisation, confidentiality, lawfulness, accuracy and more. Following each meeting, the council will publish recommendations for actions and next steps that organisations should consider moving forward. As part of Ikigai Labs’ commitment to ethical AI deployment and innovation, we will implement the action items recommended by the council. 

Ikigai Labs raised $25m funding in August last year. How will this help develop the company, its offerings and, ultimately, your customers? 

We have a strong foundation of research and innovation coming out of our core team with MIT, so the funding this time is focused on making the solution more robust, as well as bringing on the team that works with the clients and partners.  

We can solve a lot of problems but are staying focused on solving just a few meaningful ones through time-series super apps. We know that every company runs on four time series, so the goal is covering these in depth and with speed: things like sales forecasting, consumption forecasting, discount forecasting, how to sunset products, catalogue optimisation, etc. We’re excited and looking forward to putting GenAI for tabular data into the hands of as many customers as possible. 

Kamal will take part in a panel discussion titled ‘Barriers to Overcome: People, Processes and Technology’ at the AI & Big Data Expo in Santa Clara on June 5, 2024. You can find all the details here.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Why data quality is critical for marketing in the age of GenAI

A recent survey reveals that CMOs around the world are optimistic and confident about GenAI’s future ability to enhance productivity and create competitive advantage. Seventy per cent are already using GenAI and 19 per cent are testing it. And the main areas they’re exploring are personalisation (67%), content creation (49%) and market segmentation (41%).

However, for many consumer brands, the divide between expectations and reality looms large. Marketers envisioning a seamless, magical customer experience must recognise that AI’s effectiveness depends on high-quality underlying data. Without that, the AI falls flat, leaving marketers grappling with a less-than-magical reality.

AI-powered marketing fail

Let’s take a closer look at what AI-powered marketing with poor data quality could look like. Say I’m a customer of a general sports apparel and outdoor store, and I’m planning for my upcoming annual winter ski trip. I’m excited to use the personal shopper AI to give me an experience that’s easy and customised to me.

I need to fill in some gaps in my ski wardrobe, so I ask the personal shopper AI to suggest some items to purchase. But the AI is creating its responses based on data about me that’s been scattered across the brand’s multiple systems. Without a clear picture of who I am, it asks me for some basic information that it should already know. Slightly annoying… I’m used to entering my info when I shop online, but I was hoping the AI upgrade to the experience would make things easier for me. 

Because my data is so disconnected, the AI concierge only has an order associated with my name from two years ago, which was actually a gift. Without a full picture of me, this personal shopper AI is unable to generate accurate insights and ends up sharing recommendations that aren’t helpful.

Ultimately this subpar experience makes me less excited about purchasing from this brand, and I decide to go elsewhere. 

The culprit behind a disconnected and impersonal generative AI experience is data quality — poor data quality = poor customer experience. 

AI-powered marketing for the win

Now, let’s revisit this outdoor sports retailer scenario, but imagine that the personal shopper AI is powered by accurate, unified data that has a complete history of my interactions with the brand from first purchase to last return. 

I enter my first question, and I get a super-personalised and friendly response, already starting to create the experience of a one-on-one connection with a helpful sales associate. It automatically references my shopping history and connects my past purchases to my current shopping needs. 

Based on my prompts and responses, the concierge provides a tailored set of recommendations to fill in my ski wardrobe along with direct links to purchase. The AI is then able to generate sophisticated insights about me as a customer and even make predictions about the types of products I might want to buy based on my past purchases, driving up the likelihood of me purchasing and potentially even expanding my basket to buy additional items. 

Within the experience, I am able to actually use the concierge to order without having to navigate elsewhere. I also know my returns or any future purchases will be incorporated into my profile. 

Because it knew my history and preferences, Generative AI was able to create a buying experience for me that was super personalised and convenient. This is a brand I will keep returning to for future purchases.

In other words, when it comes to AI for marketing, better data = better results.

So how do you actually address the data quality challenge? And what could that look like in this new world of AI?

Solving the data quality problem

The critical first element to powering an effective AI strategy is a unified customer data foundation. The tricky part is that accurately unifying customer data is hard due to its scale and complexity — most consumers have at least two email addresses, have moved over eleven times in their lifetimes and use an average of five channels (or if they are millennials or Gen Z, it’s actually twelve channels).

Many familiar approaches to unifying customer data are rules-based and use deterministic/fuzzy matching, but these methods are rigid and break down when data doesn’t match perfectly. This, in turn, creates an inaccurate customer profile that can actually miss a huge portion of a customer’s lifetime history with the brand and not account for recent purchases or changes of contact information. 

A better way to build a unified data foundation actually involves using AI models (a different flavour of AI than generative AI for marketing) to find the connections between data points to tell if they belong to the same person with the same nuance and flexibility of a human but at massive scale. 
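
As a toy illustration of the difference, the sketch below shows an exact-match rule failing on two records that plainly describe the same person, while a simple similarity score still connects them. The fields, weights, and threshold are invented for illustration; production identity resolution relies on trained models and blocking at far larger scale.

```python
# Toy contrast between a rules-based exact match and a similarity-based
# matcher for deciding whether two customer records are the same person.
# Fields, weights, and threshold are invented for illustration; real
# identity resolution uses trained models and blocking at much larger scale.
from difflib import SequenceMatcher

rec_a = {"name": "Jonathan Smith", "email": "jon.smith@gmail.com"}
rec_b = {"name": "Jon Smith", "email": "jsmith@work-mail.com"}

# Rules-based: demands perfect equality, so these records never unify
exact_match = rec_a["name"] == rec_b["name"] and rec_a["email"] == rec_b["email"]
print(exact_match)  # False

# Similarity-based: scores each field, then combines the evidence
def sim(x: str, y: str) -> float:
    return SequenceMatcher(None, x.lower(), y.lower()).ratio()

score = 0.7 * sim(rec_a["name"], rec_b["name"]) + 0.3 * sim(rec_a["email"], rec_b["email"])
same_person = score > 0.6  # in practice the threshold/model would be learned
print(same_person)  # True despite the messy, non-identical fields
```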

When your customer data tools can use AI to unify every touchpoint in the customer journey from first interaction to last purchase and beyond (loyalty, email, website data, etc…), the result is a comprehensive customer profile that tells you who your customers are and how they interact with your brand. 

How data quality in generative AI drives growth

For the most part, marketers have access to the same set of generative AI tools, therefore, the fuel you input will become your differentiator. 

Data quality to power AI provides benefits in three areas: 

  • Customer experiences that stand out — more personalised, creative offers, better customer service interactions, a smoother end-to-end experience, etc.
  • Operational efficiency gains for your teams — faster time to market, less manual intervention, better ROI on campaigns, etc.
  • Reduced compute costs — better-informed AI doesn’t need to go back and forth with the user, which saves on racking up API calls that quickly get expensive

As generative AI tools for marketing continue to evolve, they bring the promise of getting back to the level of one-to-one personalisation that customers would expect in their favourite stores, but now at a massive scale. That won’t happen on its own, though — brands need to provide AI tools with accurate customer data to bring the AI magic to life.

The dos and don’ts of AI in marketing

AI is a helpful sidekick to many industries, especially marketing — as long as it’s leveraged appropriately. Here’s a quick ‘cheat-sheet’ to help marketers on their GenAI journey:

Do:

  • Be explicit about the specific use cases where you plan to use data and AI and specify the expected outcomes. What results do you expect to achieve?
  • Carefully evaluate if Gen AI is the most appropriate tool for your specific use case.
  • Prioritise data quality and comprehensiveness — establishing a unified customer data foundation is essential for an effective AI strategy.

Don’t:

  • Rush to implement GenAI across all areas. Start with a manageable, human-in-the-loop use case, such as generating subject lines.

(Editor’s note: This article is sponsored by Amperity)

Financial services introducing AI but hindered by data issues

According to research by EXL, around 89 percent of insurance and banking firms in the UK have introduced AI solutions over the past year. However, issues with data optimisation could hinder their impact.

The researchers surveyed executives at top UK insurers and lenders about their AI strategies and found that 44 percent have deployed AI across eight or more business functions—especially in marketing, business development, and regulatory compliance. 

Nearly 9 in 10 financial services leaders reported investing upwards of £7.9 million in AI over their last fiscal year. Over a third invested £39 million or more, exemplifying the industry’s willingness to commit major capital to AI implementation.

Despite the positive strides in AI integration, the study suggests that organisations might be overlooking the importance of prioritising their data operations. Nearly half (47%) admitted their organisations are only “minimally data driven,” raising concerns about the effectiveness of AI implementation without a solid data foundation.

“It’s clear industry leaders recognise AI’s potential, but external pressures to implement quickly can lead to unchecked investment,” commented Kshitij Jain, EMEA Practice Head at EXL. “The risk is that ensuring operations are truly data driven gets deprioritised, which can prove very costly.”

The research also identified a group of “Strivers,” representing 45 percent of respondents, who are implementing AI more narrowly across around four functions. Their focused approach has allowed them to efficiently leverage AI for cost-cutting, outperforming early AI adopters by 23 percentage points.

Additionally, over half of respondents are investing more in AI specifically due to advancements in generative AI. However, 70 percent voiced deep concerns about risks related to generative AI like potential brand damage and inaccurate data outcomes.

“The key with any AI rollout is a measured, strategic approach—getting the data architecture right, testing solutions, and training employees,” Jain concluded. “For enterprise adoption to succeed, boards must buy into AI’s capabilities and ensure investment is being used effectively.”  

A full copy of the research can be found here (registration required).

(Photo by Alev Takil on Unsplash)

See also: NCSC: AI to significantly boost cyber threats over next two years

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Stephen Almond, ICO: Prioritise privacy when adopting generative AI

The Information Commissioner’s Office (ICO) is urging businesses to prioritise privacy considerations when adopting generative AI technology.

According to new research, generative AI has the potential to become a £1 trillion market within the next ten years, offering significant benefits to both businesses and society. However, the ICO emphasises the need for organisations to be aware of the associated privacy risks.

Stephen Almond, the Executive Director of Regulatory Risk at the ICO, highlighted the importance of recognising the opportunities presented by generative AI while also understanding the potential risks.

“Businesses are right to see the opportunity that generative AI offers, whether to create better services for customers or to cut the costs of their services. But they must not be blind to the privacy risks,” says Almond.

“Spend time at the outset to understand how AI is using personal information, mitigate any risks you become aware of, and then roll out your AI approach with confidence that it won’t upset customers or regulators.”

Generative AI works by generating content based on extensive data collection from publicly accessible sources, including personal information. Existing laws already safeguard individuals’ rights, including privacy, and these regulations extend to emerging technologies such as generative AI.

In April, the ICO outlined eight key questions that organisations using or developing generative AI that processes personal data should be asking themselves. The regulatory body is committed to taking action against organisations that fail to comply with data protection laws.

Almond reaffirms the ICO’s stance, stating that they will assess whether businesses have effectively addressed privacy risks before implementing generative AI, and will take action if there is a potential for harm resulting from the misuse of personal data. He emphasises that businesses must not overlook the risks to individuals’ rights and freedoms during the rollout of generative AI.

“We will be checking whether businesses have tackled privacy risks before introducing generative AI – and taking action where there is a risk of harm to people through poor use of their data. There can be no excuse for ignoring risks to people’s rights and freedoms before rollout,” explains Almond.

“Businesses need to show us how they’ve addressed the risks that occur in their context – even if the underlying technology is the same. An AI-backed chat function helping customers at a cinema raises different questions compared with one for a sexual health clinic, for instance.”

The ICO is committed to supporting UK businesses in their development and adoption of new technologies that prioritise privacy.

The recently updated Guidance on AI and Data Protection serves as a comprehensive resource for developers and users of generative AI, providing a roadmap for data protection compliance. Additionally, the ICO offers a risk toolkit to assist organisations in identifying and mitigating data protection risks associated with generative AI.

For innovators facing novel data protection challenges, the ICO provides advice through its Regulatory Sandbox and Innovation Advice service. To enhance their support, the ICO is piloting a Multi-Agency Advice Service in collaboration with the Digital Regulation Cooperation Forum, aiming to provide comprehensive guidance from multiple regulatory bodies to digital innovators.

While generative AI offers tremendous opportunities for businesses, the ICO emphasises the need to address privacy risks before widespread adoption. By understanding the implications, mitigating risks, and complying with data protection laws, organisations can ensure the responsible and ethical implementation of generative AI technologies.

(Image Credit: ICO)

Related: UK will host global AI summit to address potential risks

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Infocepts CEO Shashank Garg on the D&A market shifts and impact of AI on data analytics

Could you tell us a little bit about your company, Infocepts?

On a mission to bridge the gap between the worlds of business and analytics, Infocepts was founded in 2004 by me and Rohit Bhayana, both with more than 20 years of experience in the Data and Analytics (D&A) industry. People often use the term business analytics as one phrase, but if you have worked in the industry for a long time and if you talk to a lot of people, you’ll realise just how big the gap is.

And that’s Infocepts’ focus. We are an end-to-end D&A solutions provider with an increasing focus on AI, and our solutions combine our processes, expertise, and proprietary technologies, packaged together to deliver predictable outcomes for our clients. We work for marquee enterprise clients across industries. Infocepts has the highest overall ranking on Gartner peer reviews amongst our competitors, and we are a Great Place to Work certified firm. So, we’re very proud that our clients and our people love us.

The data & analytics technology market is evolving very fast. What’s your view of it?

I love being in the data industry and a large reason is the pace at which it moves. In less than 10 years we have gone from about 60-70 technologies to 1,400+ and growing. But the problems have not grown 20X. That means, we now have multiple ways to solve the same problem.

Similarly, on the buyer side, we have seen a huge change in the buyer persona. Today, I don’t know of any business leader who is not a data leader. Today’s business leaders were born in the digital era and are super comfortable not just with insights but with the lowest level data. They know the modeling methods and have an intuitive sense of where AI can help. Most executives in today’s world also have a deeper understanding about what data quality means, its importance, and how it will change the game in the long run.

So, we are seeing a big change both on the supply and demand side.

What are some of the key challenges you see in front of business & data leaders focused on data-driven transformation?

The gap between the worlds of business and analytics is a very, very real one. I would like to quote a leadership survey which highlights this contradiction. Talking about D&A initiatives which are adding value: 90% of data leaders believe their company’s data products provide real business value, but only 39% of business leaders feel so. That’s a huge gap. Ten years ago, the number would have been lower, but the gap was still the same. This is not a technology issue. What it tells us is that the most common roadblocks to the success of D&A initiatives are all human-related challenges like skills shortages, lack of business engagement, difficulty accepting change and poor data literacy throughout the organisation.

We all know the power of data and we spoke about business leaders being data leaders, but there are still people in organisations who need to change. Data leaders are still not speaking the language of business and are under intense pressure to demonstrate the value of D&A initiatives to business executives.

The pace at which technologies have changed and evolved is the pace at which you will see businesses evolving due to human-centric changes. The next five years look very transformational and positive for the industry.

Can you also talk about some of the opportunities you see in front of the D&A leaders?

The first big opportunity is to improve productivity to counter the economic uncertainty. Companies are facing a financial crunch because of ongoing economic uncertainty, including the very real possibility of a recession in the next 12-18 months. Data shows that there are companies that come out strong after a recession, with twice the industry averages in revenue & profits. These are the companies who are proactive in preparing & executing against multiple scenarios backed by insights. They redeploy their resources towards the highest value activities in their strategy and manage other costs. Companies need to stop paying for legacy technologies and fix their broken managed services model. To keep up with the changing technology landscape, it’s important to choose on-demand talent. 

Secondly, companies and people should innovate with data and become data fluent. Many organisations have invested in specialised teams for delivering data, but the real value from data comes only when your employees use it. Data fluency is an organisational capability that enables every employee to understand, access, and apply data as fluently as one can speak their own language. With more fluent people in an organisation, productivity increases, turnover reduces, and innovation thrives without relying only on specialised teams. Companies should assess their organisational fluency and consider establishing a data concierge. It’s like a ten layered structure instead of a very big team – a concierge which can help you become more fluent and introduce interventions across the board to strengthen access, democratise data, and increase trust and adoption.

Lastly, there’s a huge opportunity to reimagine how we bring value to the business using data. Salesforce and Amazon pioneered commercially successful IaaS, PaaS, and SaaS models in cloud computing that gradually shifted significant portions of responsibility for bringing value from the client to the service provider. The benefits of agility, elasticity, economies of scale, and reach are well known. Similarly, data & analytics technologies need to go through a similar paradigm shift and go one step further towards productised services – what we call at Infocepts: Solutions as a Service!

Can you talk more about your Solutions as a Service approach?

What we mean by Solutions as a Service is a combination of products, problem solving & expertise together in one easy to use solution. This approach is inevitable given the sheer pace at which technology is evolving. This new category requires a shift in thinking and will give you a source of advantage like how the early cloud adopters received during the last decade. Infocepts offers many domain-driven as-a-service solutions in this category such as e360 for people analytics, AutoMate for business automations and DiscoverYai (also known as AI-as-a-Service) for realising the benefits of AI.

There is a lot of buzz around AI. What does AI mean for the world of D&A and how real is the power of AI?

Oh! It’s very real. In the traditional BI paradigm, business users struggled to get access to their data, but even if they crossed that hurdle, they still needed to know what questions to ask. AI can be an accelerator and educator by helping business folks know what to look for in their data in the first place.

AI-driven insights can help uncover the “why” in your data. For example, augmented analytics can help you discover why sales are increasing and why market penetration varies from city to city, guiding you towards hidden insights for which you didn’t know where to look.

Another example is the use of chatbots or NLP driven generative AI solutions that can understand and translate queries such as, “What are sales for each category and region?” Thanks to modern computing and programming techniques combined with the power of AI, these solutions can run thousands of analyses on billions of rows in seconds, use auto ML capabilities to identify best fit models & produce insights to answer such business questions.
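
As a rough sketch of that translation step, the snippet below shows how such a question might be parsed into a structured plan and executed against the data with pandas. The intermediate plan format is a made-up illustration, not any vendor’s actual specification.

```python
# Rough sketch of the translation step such an assistant performs: a
# natural-language question becomes a structured plan, which is executed
# against the data. The "plan" format is a made-up illustration, not a spec.
import pandas as pd

question = "What are sales for each category and region?"

# What the NLP/LLM layer might emit after parsing the question
plan = {"metric": "sales", "agg": "sum", "group_by": ["category", "region"]}

df = pd.DataFrame({
    "category": ["ski", "ski", "camp", "camp"],
    "region":   ["EU", "US", "EU", "US"],
    "sales":    [120.0, 95.5, 60.2, 80.9],
})

answer = df.groupby(plan["group_by"])[plan["metric"]].agg(plan["agg"])
print(answer)  # sales summed for each (category, region) pair
```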

Then, through natural language generation, the system can present the AI-driven insights to the user in an intuitive fashion, including results to questions that the user might not have thought to ask. With user feedback and machine learning, the AI can become more intelligent about which insights are most useful.

In addition to insights generation, AI can also play a role in data management & engineering by automating data discovery, data classification, metadata enhancements, data lifecycle governance, data anonymisation and more.

On the data infra side, models trained in machine learning can be used to solve classification, prediction, and control problems to automate activities & add or augment capabilities such as – predictive capacity & availability monitoring, intelligent alert escalation & routing, anomaly detection, ChatOps, root cause identification and more. 
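
To illustrate one of those capabilities, anomaly detection, here is a minimal sketch using scikit-learn’s IsolationForest. The metrics, values, and contamination rate are assumptions for demonstration, not a production monitoring setup.

```python
# Minimal sketch of ML-based anomaly detection on infrastructure metrics,
# using scikit-learn's IsolationForest. Metrics, values, and the
# contamination rate are assumptions, not a production monitoring setup.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Normal operation: [cpu_pct, latency_ms] clustered around typical values
normal = rng.normal(loc=[40.0, 120.0], scale=[5.0, 15.0], size=(500, 2))

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score fresh observations: -1 flags an anomaly worth alerting on
fresh = np.array([[42.0, 118.0],   # healthy
                  [95.0, 900.0]])  # CPU spike plus latency blow-up
print(detector.predict(fresh))     # expected: [ 1 -1]
```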

Where can AI create immediate impact for businesses? Can you share some examples?

AI is an enabler for data and analytics rather than a technology vertical by itself. As an example, let’s look at the retail industry: use cases like store activity monitoring, order management, fraud/threat detection, and assortment management have existed for a while now. With AI, you can deliver them far faster.

In media, some of the use cases that we are helping our clients with are around demand prediction, content personalisation, content recommendation, synthetic content generation – both text & multimedia. AI also has vast applications in banking. We again have fraud detection, and coupled with automation, now it’s not just detection but you can also put controls in real time to stop fraud.

We have also implemented AI use-cases within Infocepts. We leverage AI to increase our productivity & employee engagement. Our HR team launched ‘Amber’, an AI bot that redefines employee experience. We use AI assistants to record, transcribe and track actions from voice conversations. Our marketing & comms teams use generative AI tools for content generation.

The advancement we have seen in the tech space in the last few years is what you will see in the next 3 to 4 years on the people side. And I think AI assisted tech processes and solutions will play a huge role there.

What advice would you give business leaders who are looking to get started with AI?                             

Embrace an AI-first mindset! Instead of taking the traditional approach to complex business problems – sifting through data and wrestling with analysis for months before you see any results – let AI do the heavy lifting. AI-driven auto-analysis uncovers hidden patterns and trends so analysts can get to “why” faster and help their business users take action, giving data teams access to the dark corners of their data. Let your AI tools do most of the grunt work faster than your traditional approaches. And now, with generative AI technologies bolted on top of these solutions, you can make the experience conversational using voice or natural language search capabilities.

Solutions like Infocepts DiscoverYai do just this, giving organisations the opportunity to make smart choices based on data-driven insights. Our process starts by identifying clients’ objectives, then leverages advanced AI strategies to quickly assess data quality, highlight key relationships in the data, identify the drivers impacting results, and surface actionable recommendations with maximum impact potential, along with an impact analysis – all delivered through an effective combination of tried-and-tested practices and cutting-edge AI-driven processes.

Secondly, to gain the most from AI-driven insights, you’ll need to be ready for a little experimentation. Embrace getting it wrong and use those discoveries as learning opportunities. Hackathons/innovation contests are a great way to generate quick ideas, fail fast and succeed faster.

It’s also essential that your team can confidently understand data; this enables them to recognise useful actions generated by artificial intelligence without hesitation. So, while you use AI, ensure that it is explainable.

Lastly, help your organisation set up systems which will make sure your AI models don’t become obsolete in an ever-evolving landscape – keep upping their training so they remain ready to take on even harder challenges!

About Shashank Garg

Shashank Garg is the co-founder and CEO of Infocepts, a global leader in Data & AI solutions. As a trusted advisor to CXOs of several Fortune 500 companies, Shashank has helped leaders across the globe disrupt and transform their businesses using the power of data analytics and AI. Learn more about him on LinkedIn.

About Infocepts

Infocepts enables improved business results through more effective use of data, AI & user-friendly analytics. Infocepts partners with its clients to resolve the most common & complex challenges standing in their way of using data to strengthen business decisions. To learn more, visit: infocepts.com or follow Infocepts on LinkedIn.

Looking to revamp your intelligent automation strategy? Learn more about the Intelligent Automation Event & Conference to discover the latest insights surrounding unbiased algorithms, future trends, RPA, cognitive automation and more!

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Devang Sachdev, Snorkel AI: On easing the laborious process of labelling data

Correctly labelling training data for AI models is vital to avoid serious problems, as is using sufficiently large datasets. However, manually labelling massive amounts of data is time-consuming and laborious.

Using pre-labelled datasets can be problematic, as evidenced by MIT having to pull its 80 Million Tiny Images dataset. For those unaware, the popular dataset was found to contain thousands of racist and misogynistic labels that could have been used to train AI models.

AI News caught up with Devang Sachdev, VP of Marketing at Snorkel AI, to find out how the company is easing the laborious process of labelling data in a safe and effective way.

AI News: How is Snorkel helping to ease the laborious process of labelling data?

Devang Sachdev: Snorkel Flow changes the paradigm of training data labelling from the traditional manual process—which is slow, expensive, and unadaptable—to a programmatic process that we’ve proven accelerates training data creation 10x-100x.

Users are able to capture their knowledge and existing resources (both internal, e.g., ontologies and external, e.g., foundation models) as labelling functions, which are applied to training data at scale. 

Unlike a rules-based approach, these labelling functions can be imprecise, lack coverage, and conflict with each other. Snorkel Flow uses theoretically grounded weak supervision techniques to intelligently combine the labelling functions to auto-label your training data set en-masse using an optimal Snorkel Flow label model. 
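
For readers who want to see the underlying idea in code, the open-source snorkel package exposes the same core pattern of labelling functions combined by a label model. The minimal sketch below uses a toy spam task; note that this is the open-source library’s API rather than Snorkel Flow itself, and the heuristics are invented examples.

```python
# Minimal sketch of programmatic labelling with the open-source `snorkel`
# package (pip install snorkel) on a toy spam task. Snorkel Flow is a
# separate commercial platform; the heuristics here are invented examples.
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

SPAM, HAM, ABSTAIN = 1, 0, -1

@labeling_function()
def lf_contains_free(x):
    return SPAM if "free" in x.text.lower() else ABSTAIN  # noisy promo signal

@labeling_function()
def lf_call_to_action(x):
    return SPAM if "click" in x.text.lower() else ABSTAIN

@labeling_function()
def lf_short_reply(x):
    return HAM if len(x.text.split()) < 5 else ABSTAIN  # short replies look legit

df_train = pd.DataFrame({"text": [
    "Free money, click now!", "thanks, see you then",
    "Get a free iPhone today", "ok sounds good",
]})

# Apply every labelling function to every example -> label matrix
lfs = [lf_contains_free, lf_call_to_action, lf_short_reply]
L_train = PandasLFApplier(lfs=lfs).apply(df_train)

# The label model estimates LF accuracies and combines their noisy votes
label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train, n_epochs=200, seed=123)
probs = label_model.predict_proba(L_train)  # probabilistic training labels
```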

Using this initial training data set, users train a larger machine learning model of their choice (with the click of a button from our ‘Model Zoo’) in order to:

  1. Generalise beyond the output of the label model.
  2. Generate model-guided error analysis to know exactly where the model is confused and how to iterate. This includes auto-generated suggestions, as well as analysis tools to explore and tag data to identify what labelling functions to edit or add. 

This rapid, iterative, and adaptable process becomes much more like software development rather than a tedious, manual process that cannot scale. And much like software development, it allows users to inspect and adapt the code that produced training data labels.

AN: Are there dangers to implementing too much automation in the labelling process?

DS: The labelling process can inherently introduce dangers simply because, as humans, we’re fallible. Human labellers can be fatigued, make mistakes, or have a conscious or unconscious bias which they encode into the model via their manual labels.

When mistakes or biases occur—and they will—the danger is the model or downstream application essentially amplifies the isolated label. These amplifications can lead to consequential impacts at scale. For example, inequities in lending, discrimination in hiring, missed diagnoses for patients, and more. Automation can help.

In addition to these dangers—which have major downstream consequences—there are also more practical risks of attempting to automate too much or taking the human out of the loop of training data development.

Training data is how humans encode their expertise to machine learning models. While there are some cases where specialised expertise isn’t required to label data, in most enterprise settings, there is. For this training data to be effective, it needs to capture the fullness of subject matter experts’ knowledge and the diverse resources they rely on to make a decision on any given datapoint.

However, as we have all experienced, having highly in-demand experts label data manually one-by-one simply isn’t scalable. It also leaves an enormous amount of value on the table by losing the knowledge behind each manual label. We must take a programmatic approach to data labelling and engage in data-centric, rather than model-centric, AI development workflows. 

Here’s what this entails: 

  • Elevating how domain experts label training data from tediously labelling one-by-one to encoding their expertise—the rationale behind what would be their labelling decisions—in a way that can be applied at scale. 
  • Using weak supervision to intelligently auto-label at scale—this is not auto-magic, of course; it’s an inherently transparent, theoretically grounded approach. Every training data label that’s applied in this step can be inspected to understand why it was labelled as it was. 
  • Bringing experts into the core AI development loop to assist with iteration and troubleshooting. Using streamlined workflows within the Snorkel Flow platform, data scientists and subject matter experts are able to collaborate to identify the root cause of error modes and how to correct them by making simple labelling function updates or additions, or, at times, correcting ground truth or “gold standard” labels that error analysis reveals to be wrong.

AN: How easy is it to identify and update labels based on real-world changes?

DS: A fundamental value of Snorkel Flow’s data-centric approach to AI development is adaptability. We all know that real-world changes are inevitable, whether that’s production data drift or business goals that evolve. Because Snorkel Flow uses programmatic labelling, it’s extremely efficient to respond to these changes.

In the traditional paradigm, if the business comes to you with a change in objectives, say from a three-way document classification to a 10-way schema, you’d effectively need to relabel your training data set (often thousands or hundreds of thousands of data points) from scratch. This would mean weeks or months of work before you could deliver on the new objective. 

In contrast, with Snorkel Flow, updating the schema is as simple as writing a few additional labelling functions to cover the new classes and applying weak supervision to combine all of your labelling functions and retrain your model. 
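A hedged sketch of what that schema change might look like with the open-source Snorkel library (the class IDs, heuristic, and variable names are hypothetical; Snorkel Flow wraps this workflow in its platform):

```python
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

ABSTAIN = -1
INVOICE = 3  # one of the newly added document classes

@labeling_function()
def lf_mentions_invoice(x):
    # New heuristic covering one of the added classes.
    return INVOICE if "invoice" in x.text.lower() else ABSTAIN

# Reuse the existing labelling functions (existing_lfs and df_train
# carry over from the earlier sketches) and append the new ones...
lfs = existing_lfs + [lf_mentions_invoice]
L_train = PandasLFApplier(lfs=lfs).apply(df=df_train)

# ...then refit the label model with the widened schema.
label_model = LabelModel(cardinality=10)  # was 3-way, now 10-way
label_model.fit(L_train=L_train, n_epochs=500, seed=123)
```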

To identify data drift in production, you can rely on your monitoring system or use Snorkel Flow’s production APIs to bring live data back into the platform and see how your model performs against real-world data.

As you spot performance degradation, you’re able to follow the same workflow: use error analysis to understand patterns, apply auto-suggested actions, and iterate in collaboration with your subject matter experts to refine and add labelling functions. 

AN: MIT was forced to pull its ‘80 Million Tiny Images’ dataset after it was found to contain racist and misogynistic labels due to its use of an “automated data collection procedure” based on WordNet. How is Snorkel ensuring that it avoids this labelling problem that is leading to harmful biases in AI systems?

DS: Bias can enter anywhere in the system – pre-processing, post-processing, task design, modelling choices, and so on – and, in particular, through issues with labelled training data.

To understand underlying bias, it is important to understand the rationale used by labellers. This is impractical when every datapoint is hand-labelled and the logic behind labelling it one way or another is not captured. Moreover, information about label authorship and dataset versioning is rarely available. Often, labelling is outsourced, or in-house labellers have moved on to other projects or organisations. 

Snorkel AI’s programmatic labelling approach helps discover, manage, and mitigate bias. Instead of discarding the rationale behind each manually labelled datapoint, Snorkel Flow, our data-centric AI platform, captures the labellers’ (subject matter experts, data scientists, and others) knowledge as a labelling function and generates probabilistic labels using theoretically grounded algorithms encoded in a novel label model.

With Snorkel Flow, users can understand exactly why a certain datapoint was labelled the way it was. This process, along with labelling function and dataset versioning, allows users to audit, interpret, and even explain model behaviours. This shift from manual to programmatic labelling is key to managing bias.

AN: A group led by Snorkel researcher Stephen Bach recently had their paper on Zero-Shot Learning with Common Sense Knowledge Graphs (ZSL-KG) published. I’d direct readers to the paper for the full details, but can you give us a brief overview of what it is and how it improves over existing WordNet-based methods?

DS: ZSL-KG improves graph-based zero-shot learning in two ways: richer models and richer data. On the modelling side, ZSL-KG is based on a new type of graph neural network called a transformer graph convolutional network (TrGCN).

Many graph neural networks learn to represent nodes in a graph through linear combinations of neighbouring representations, which is limiting. TrGCN uses small transformers at each node to combine neighbourhood representations in more complex ways.
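As a rough schematic of the idea (not the paper’s actual implementation; the layer sizes, pooling, and input layout below are assumptions), each node’s neighbour embeddings pass through a small transformer encoder, allowing non-linear pairwise interactions, before being pooled:

```python
import torch
import torch.nn as nn

class TransformerNeighbourhoodAgg(nn.Module):
    """Schematic TrGCN-style aggregation: a small transformer mixes a
    node's neighbour embeddings non-linearly before pooling, rather
    than taking a fixed linear combination of them."""

    def __init__(self, dim: int, heads: int = 2):
        super().__init__()
        self.encoder = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True
        )

    def forward(self, neighbours: torch.Tensor) -> torch.Tensor:
        # neighbours: (num_nodes, max_neighbours, dim)
        mixed = self.encoder(neighbours)  # pairwise neighbour interactions
        return mixed.mean(dim=1)          # pool to one embedding per node

agg = TransformerNeighbourhoodAgg(dim=64)
print(agg(torch.randn(10, 5, 64)).shape)  # torch.Size([10, 64])
```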

On the data side, ZSL-KG uses common sense knowledge graphs, which use natural language and graph structures to make explicit many types of relationships among concepts. They are much richer than the typical ImageNet subtype hierarchy.
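For a sense of what such a graph contains: ConceptNet is one widely used common sense knowledge graph (and one the ZSL-KG line of work draws on). Its public REST API at api.conceptnet.io returns natural-language relations around a concept:

```python
import requests

# Query ConceptNet's public API for relations around the concept "dog".
resp = requests.get("http://api.conceptnet.io/c/en/dog").json()
for edge in resp["edges"][:5]:
    # e.g. "IsA -> animal", "CapableOf -> bark"
    print(edge["rel"]["label"], "->", edge["end"]["label"])
```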

AN: Gartner designated Snorkel a ‘Cool Vendor’ in its 2022 AI Core Technologies report. What do you think makes you stand out from the competition?

DS: Data labelling is one of the biggest challenges for enterprise AI. Most organisations realise that current approaches are unscalable and often riddled with quality, explainability, and adaptability issues. Snorkel AI not only provides a solution for automating data labelling but also uniquely offers an AI development platform to adopt a data-centric approach and leverage knowledge resources, including subject matter experts and existing systems.

In addition to the technology, Snorkel AI brings together 7+ years of R&D (which began at the Stanford AI Lab) and a highly talented team of machine learning engineers, success managers, and researchers to successfully assist and advise customer development as well as bring new innovations to market.

Snorkel Flow unifies all the necessary components of a programmatic, data-centric AI development workflow—training data creation/management, model iteration, error analysis tooling, and data/application export or deployment—while also being completely interoperable at each stage via a Python SDK and a range of other connectors.

This unified platform also provides an intuitive interface and streamlined workflow for critical collaboration between SME annotators, data scientists, and other roles, to accelerate AI development. It allows data science and ML teams to iterate on both data and models within a single platform and use insights from one to guide the development of the other, leading to rapid development cycles.

The Snorkel AI team will be sharing their invaluable insights at this year’s AI & Big Data Expo North America. Find out more here and swing by Snorkel’s booth at stand #52.

The post Devang Sachdev, Snorkel AI: On easing the laborious process of labelling data appeared first on AI News.

Ash Damle, TMDC: Data-based business decisions in real-time
https://www.artificialintelligence-news.com/2022/09/22/ash-damle-tmdc-data-based-business-decisions-in-real-time/

Ash Damle, Head of AI and Data Science at TMDC, explains how the company is humanising and democratising data access.

AI News: The Modern Data Company (TMDC) aims to “democratise” data access. What are the benefits to enterprises? 

Ash Damle: Modern companies are data companies. When a data company’s best asset, its data, is only accessible by a handful of individuals, then the company is only scratching the surface of what data can do.

Democratisation of data enables every individual in the company to better perform, innovate, and meet business goals. Modern offers enterprises the ability to put data to work — that requires data to be available and to be trusted. 

AN: Can you still apply different levels of access to data based on individuals/teams? 

AD: You can absolutely still apply different levels of access to data within an organisation. In fact, our approach to governance is a key factor in enabling unprecedented levels of data access, transparency, and usability.

Our attribute-based access control (ABAC) approach provides granular governance controls, so admins can let data flow to stakeholders without risking privacy or security loopholes. Users can search for and see what data is available for use, while stewards can see who is using data, when, and why.

Regardless of business size or industry, it is fully scalable and allows the organisation to apply compliance and governance rules to all data systematically. This is an entirely new way to approach governance. 
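To illustrate the general flavour of attribute-based access control (a generic sketch; this is not DataOS’s actual policy syntax or API, and the attribute names are invented):

```python
from dataclasses import dataclass

@dataclass
class User:
    role: str
    department: str

@dataclass
class Resource:
    classification: str      # e.g. "public" or "pii"
    owner_department: str

def can_read(user: User, resource: Resource) -> bool:
    # The decision is derived from attributes of the user and the
    # resource, not from a fixed role-to-dataset mapping.
    if resource.classification == "public":
        return True
    if resource.classification == "pii":
        return user.role == "steward" and user.department == resource.owner_department
    return False

print(can_read(User("analyst", "sales"), Resource("public", "sales")))  # True
print(can_read(User("analyst", "sales"), Resource("pii", "finance")))   # False
```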

AN: What features are in place to ensure compliance with data regulations? 

AD: Modern gives companies the flexibility to define and apply governance rules at the most granular levels. Our approach also enables admins and decision-makers to view their data ecosystem as a whole for critical governance questions such as: 

  • Who is using data and how are they using it? 
  • Where is data located and stored? 
  • Which business and risk processes does data impact? 
  • What dependencies exist downstream? 

AN: Another key goal of Modern is to “humanise” data. What does that mean in practice? 

AD: Being human involves intelligence and the ability to use that intelligence to inform and formulate dialogue. DataOS gives data an organised “voice,” enabling users to trust data to inform their decision-making. It acts as a data engineering partner, allowing users to have a real dialogue with data. 

AN: What are some of the other key problems with traditional data platforms that your solution, DataOS, aims to fix? 

AD: Most data solutions look at a database like it’s just a box of data. Most also operate within a data silo, which may help solve one problem but can’t serve as an end solution.

The challenge for enterprises is they don’t exist on just one database. DataOS accounts for that, offering a unified source of truth and then empowering users to easily act on the data — no matter the source — with outcome-based data engineering. A user can choose the outcome they need and DataOS will build the right process for them while ensuring that the process is compliant with all security and governance policies.  

AN: How do you ensure your platform is accessible for all employees regardless of their technical skills or background? 

AD: DataOS allows data access and use for individuals according to granular rules set by the organisation. How the company manages access often depends on particular roles and responsibilities, as well as their in-house approach to security.  

AN: What data formats are supported by DataOS? 

AD: DataOS deals with heterogeneous formats, such as SQL databases, CSVs, Excel files, and many more. It also extracts data and allows enterprises to do more intelligent things with imagery, access essential data easily, and see metadata so they can leverage all data assets across the board. 

AN: Bad data is worse than no data. How do you check and report the quality of data? 

AD: With DataOS, organisations define their own rules for what to do with data before making it available. DataOS then automates enforcement of those rules, ensuring data adheres to the right distributions and that the necessary quality checks are applied. DataOS ensures that you’re always getting the best data possible and that you’re always alerted to any data quality issues.
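As a generic illustration of rule-based quality checks of this kind (a minimal sketch; the rule syntax below is invented rather than DataOS’s):

```python
import pandas as pd

# Declarative quality rules: column name -> predicate over that column.
rules = {
    "age": lambda s: s.between(0, 120).all(),
    "email": lambda s: s.str.contains("@").all(),
    "revenue": lambda s: s.notna().mean() >= 0.95,  # at most 5% missing
}

def failed_checks(df: pd.DataFrame) -> list:
    """Return the names of columns that violate their rule."""
    return [col for col, check in rules.items() if col in df and not check(df[col])]

df = pd.DataFrame({
    "age": [34, 29],
    "email": ["a@example.com", "b@example.com"],
    "revenue": [10.0, None],
})
print(failed_checks(df))  # ['revenue'] — half the values are missing
```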

The Modern Data Company (TMDC) is sponsoring this year’s AI & Big Data Expo North America. Swing by their booth at stand #66 to learn more.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

UK eases data mining laws to support flourishing AI industry
https://www.artificialintelligence-news.com/2022/06/29/uk-eases-data-mining-laws-support-flourishing-ai-industry/

The UK is set to ease data mining laws in a move designed to further boost its flourishing AI industry.

We all know that data is vital to AI development. Tech giants are in an advantageous position due to either having existing large datasets or the ability to fund/pay for the data required. Most startups rely on mining data to get started.

Europe has notoriously strict data laws. Advocates of regulations like GDPR believe they’re necessary to protect consumers, while critics argue they drive innovation, investment, and jobs out of Europe to countries like the USA and China.

“You’ve got your Silicon Valley startup that can access large amounts of money from investors, access specialist knowledge in the field, and will not be fighting with one arm tied behind its back like a competitor in Europe,” explained Peter Wright, Solicitor and MD of Digital Law UK.

An announcement this week sets out how the UK intends to support its National AI Strategy from an intellectual property standpoint.

The announcement comes via the country’s Intellectual Property Office (IPO) and follows a two-month cross-industry consultation period with individuals, large and small businesses, and a range of organisations.

Text and data mining

Text and data mining (TDM) allows researchers to copy and harness disparate datasets for their algorithms. As part of the announcement, the UK says it will now allow TDM “for any purpose,” which provides much greater flexibility than an exception made in 2014 that allowed AI researchers to use such TDM for non-commercial purposes.

In stark contrast, the EU’s Directive on Copyright in the Digital Single Market offers a TDM exception only for scientific research.

“These changes make the most of the greater flexibilities following Brexit. They will help make the UK more competitive as a location for firms doing data mining,” wrote the IPO in the announcement.

AIs still can’t be inventors

Elsewhere, the UK is more or less sticking to its previous stances—including that AI systems cannot be credited as inventors in patents.

The most high-profile case on the subject is that of US-based Dr Stephen Thaler, the founder of Imagination Engines. Dr Thaler has been leading the fight to give credit to machines for their creations.

An AI device created by Dr Thaler, DABUS, was used to invent an emergency warning light, a food container that improves grip and heat transfer, and more.

In August 2021, a federal court in Australia ruled that AI systems can be credited as inventors under patent law after Ryan Abbott, a professor at the University of Surrey, filed applications in the country on behalf of Dr Thaler. Similar applications were also filed in the UK, US, and New Zealand.

The UK’s IPO rejected the applications at the time, claiming that – under the country’s Patents Act – only humans can be credited as inventors. Subsequent appeals were also rejected.

“A patent is a statutory right and it can only be granted to a person,” explained Lady Justice Laing. “Only a person can have rights. A machine cannot.”

In the IPO’s latest announcement, the body reiterates: “For AI-devised inventions, we plan no change to UK patent law now. Most respondents felt that AI is not yet advanced enough to invent without human intervention.”

However, the IPO highlights the UK is one of only a handful of countries that protects computer-generated works. Any person who makes “the arrangements necessary for the creation of the [computer-generated] work” will have the rights for 50 years from when it was made.

Supporting a flourishing AI industry

Despite being subject to strict data regulations, the UK has become Europe’s hub for AI with pioneers like DeepMind, Wayve, Graphcore, Oxbotica, and BenevolentAI. The country’s world-class universities churn out in-demand AI talent, and its tech investments are more than double those of other European countries.

(Credit: Atomico)

More generally, the UK is regularly considered one of the best places in the world to set up a business. All eyes are on how the country will use its post-Brexit freedoms to diverge from EU rules to further boost its industries.

“The UK already punches above its weight internationally and we are ranked third in the world behind the USA and China in the list of top countries for AI,” said Chris Philp, DCMS Minister.

“We’re laying the foundations for the next ten years’ growth with a strategy to help us seize the potential of artificial intelligence and play a leading role in shaping the way the world governs it.”

There will undoubtedly be debates over the decisions made by the UK to boost its AI industry, especially regarding TDM, but the policies announced so far will support entrepreneurship and the country’s attractiveness for relevant investments.

(Photo by Chris Robert on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is also co-located with the Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

How sports clubs achieve a slam dunk in loyalty with data
https://www.artificialintelligence-news.com/2022/06/06/how-sports-clubs-achieve-a-slam-dunk-in-loyalty-with-data/

The way we watch, engage, and interact with our favourite sports clubs is undergoing a seismic shift. Recent UK research suggests that data will now have a more important role in fan engagement than ever before. In this article, we take a closer look at what this means for sports clubs serious about future-proofing their strategy to attract and retain loyal fans.

Matchday may be the ‘pinnacle’ for sports fans, but for sports clubs, the real battleground is the period between the live action, where a deep and enriching relationship with their fan base is actually created. As competition heats up to win the hearts and minds of fans through relevant marketing and cut through the ‘noise’, the pressure is on sports clubs and associations to become more innovative. 

But despite the vast data sets at the fingertips of sports marketers, there is much room for improvement when it comes to delivering relevant, personalised communications, experiences, or content to fans in real time.

Creating a relationship beyond matchday 

To create a relationship that goes beyond game day, sports brands must connect with fans on the right channels at the right time. With zero-party data, or data willingly shared by fans, it’s possible to know what makes fans tick as well as the best ways to engage with them.

How do sports clubs encourage fans to share more of their personal information? You know, the “good stuff” that goes beyond names and email addresses to who they’re attending matches with and if they also watch the game at home, for example. It’s all about the value exchange. And the value exchange begins with data. 

Revolutionising engagement with data 

Data allows sports clubs to move to a more enriched understanding of who their fans are. It gives them insight into their motivations and preferences. The biggest success of the sports clubs we work with is that, through Cheetah Experiences, fans willingly share their information.

To improve every fan’s experience along their digital journey, it’s vital that the communications they receive from the club are tailored. They have to be personalised to their particular wants and desires. That’s where the data comes in. While content is perhaps the “shiniest” element of the marketing mix, it’s the data and the insights that really make a difference. These elements provide clubs with all the information they need to create bespoke communications, helping to foster that one-to-one relationship with fans.  

Data is also key in creating effective partnerships with brands that want to sponsor sports clubs. Once clubs know more about the fans, their behaviours, and motivations at a country level, the value of sponsorships can be greatly enriched. That’s because partners are looking for clubs with an engaged fan base, and the only way to get an engaged fan base is to know and create meaningful relationships with them. This, in turn, allows clubs to have successful commercial partnerships, which drives revenue into the club – revenue that allows them to invest back into the team and secure top-end spots in competition.

Turning challenges into opportunities

Not too long ago, the customer experience began and ended on matchday. Today, however, that’s simply not the case. In this new digital era, passionate fans are engaging with clubs on different platforms, 24/7. There’s no winter break, pre-season, or rest days for fan engagement – it’s constantly game-on.

Even when the pandemic toppled the athletic landscape and sports ground to a halt with no indication of coming back, it wasn’t the time to stop engaging fans. Instead, it was more vital than ever to keep their passion alive. Developing new ways to build off a captive audience that was still hungry for sports was the first order of business for sports clubs and absolutely key to their survival.

But first, these clubs had to turn their unknown audience into a known audience. Digital channels and engagement are vital to helping clubs connect with their fans, allowing them to achieve deep, long-lasting, and meaningful relationships. Once fans feel connected to their clubs, their love grows, and that creates a foundation that supports revenue creation and successful commercial partnerships.

However, this is nearly impossible to do without insights from data. Many clubs still have their data in silos where the ticketing team only sees their data, the hospitality team only sees their data, and so on. Getting away from silos and gaining a unified understanding of fans – who they are, what life stage they’re in, and what they want from the club – from top to bottom throughout the organisation is vital to revolutionising engagement.

Take a look at the Barcelona Spotify deal. If Barcelona had truly known its fans, the deal could have been worth a lot more. However, since they didn’t, they were only able to target about 1% of their fan base — the rest were essentially invisible to them. 

The key takeaway from Barcelona’s unfortunate situation is just how crucial it is to get your fans to share information and permissions with you willingly. It’s absolutely essential in marketing to them more effectively.

And, of course, we can’t talk about effective marketing in today’s world without bringing up the death of the cookie. Never has there been a greater need to get fans to share their personal and preference data willingly than now. Unfortunately, it’s not an “ask and you shall receive” kind of arrangement. Fans are increasingly wary when it comes to handing over their personal information. That’s why sports clubs need to offer an enticing value exchange.

Leverage data for a game-winning loyalty strategy

When it comes to the value exchange, savvy sports clubs know that it doesn’t always have to be a discount or a red-letter prize that entices fans to share their details. Access to exclusive content and community initiatives can also be the catalyst for zero-party data collection.

According to Cheetah Digital’s report for sports teams and associations, 55% of fans will share psychographic data points like purchase motivations and product feedback with sports brands. Even more, half of all fans surveyed said they desire incentives like coupons, loyalty points, or exclusive access in return for their data. 

With Cheetah Digital’s Customer Engagement Suite, there’s an entire platform that makes it easy to build the most relevant, integrated, and profitable customer experiences. Take a look:

  • Cheetah Engagement Data Platform: This foundational data layer and personalisation engine enables marketers to drive data from intelligent insights to action at speed and scale.
  • Cheetah Experiences: Interactive digital acquisition experiences are delivered to delight customers, collect first- and zero-party data, and secure valuable permissions needed to execute compliant and successful marketing campaigns.
  • Cheetah Messaging: Enables marketers to create and deliver relevant, personalised marketing campaigns across all channels and touchpoints.
  • Cheetah Loyalty: Provides marketers with the tools to create and deliver unique loyalty programs that generate an emotional connection between brands and their customers.
  • Cheetah Personalisation: Enables marketers to leverage the power of machine learning and automated journeys to connect with customers on a one-to-one basis.

Acquisition helps to turn an “unknown” audience into a “known” audience. Why is this important? Well, with “known” fans comes a lot of potential in the form of direct revenue, partner revenue, and participation.

The sports clubs to watch

Cheetah Digital has partnered with some of the world’s top sports brands and organisations to create and launch an array of successful campaign experiences with ease. Whether to boost match-day excitement, connect with fans, monetise a global audience, or increase content relevancy to reach a specific demographic; sports organisations are using Cheetah Experiences to create impactful digital experiences that drive results.

Below, we look at how Arsenal Football Club (F.C.) and the FA are leveraging a fully-fledged, zero-party data strategy to connect with fans on every digital channel and collect the preference insights and permissions required to drive personalisation initiatives. 

Arsenal F.C.

Arsenal F.C. intelligently uses data to enhance digital engagement amongst one of the largest and most passionate fan bases in the world – estimated to be upwards of 750 million people. The club built out its omnichannel campaign strategies through various technologies, with Cheetah Digital as the main platform. That ensures the communications it sends out are relevant to fans and delivered on the right platforms at the right times in the right tone. 

Adam Rutzler, Senior Campaign and Insight Manager at Arsenal, says the most crucial aspect of his team’s work is ensuring that fans receive the best content that’s most relevant to them. “We work with a magic triangle, the power of three – transactional data, a demographic segmentation, persona-led approach, and behavioural data,” he explains.

“We get a solid understanding of our fans by taking the combination of these three things and hitting the sweet spot in the middle. What are our fans buying, who are they, and how do they engage with our football club – that’s when we really get the power of understanding our fans, what they want from us, and how we can best give that to them.”

For example, Arsenal has found the score predictor game is well received by fans. It encourages them to guess the score of the upcoming match to win a prize. And that prize can be anything from signed shirts to training kits — whatever fans would desire. 

Where Arsenal has noticed the most traction and where it’s getting some real buy-in from fans, however, is in giving away those money-can’t-buy prizes, such as corner flags from matches. Fans are really excited about these types of prizes. That memorabilia from clubs is truly meaningful to fans who are very passionate about their teams.

Therefore, the experiences that we’re offering and serving up on behalf of the clubs that we work with need to be in tune with fans. They need to offer something fun and something that’s on-brand.

Going forward, Adam says he’s excited about all the possibilities data opens up for the club. “What’s exciting about the insights we’re working with right now to continue understanding our global following is the possibility of turning our triangle into a square by adding psychographic data in.

“We want to understand the fans’ attitudes, aspirations, and personalities. That will allow us to find out what motivates them to engage with certain communications of ours. If we understand that, it would provide us with some very powerful insights,” he says.

The Football Association (FA)

The FA has a grand ambition to double its contactable CRM database by 2024. Achieving this will drive direct revenue, boosting sales for the FA directly. It will increase partner revenue, expanding their reach and resonance with partners. And it will also drive participation in the sport at a grassroots level, which is basically the cornerstone of what the FA does.

In terms of value exchange, the club is achieving above-average conversion rates, using a diverse set of tools like team selectors, man-of-the-match polls, and score predictors for upcoming FA Cup competitions. According to Paul Brierley, CRM & Membership Lead at the FA, the reason the FA’s strategy has been so effective boils down to its value proposition and relevance.

“Cheetah experiences, in particular, are helping us to drive an incredibly effective value exchange with fans. The combination of sought-after prizes, relevance and timing of that prize, and a compelling gamification experience is producing a highly successful channel for fan experience and data growth,” he says.

Future success

Going forward, there’s no way for a sports club to be successful without understanding its fan base. It’s paramount to capture fans’ motivations, intentions, and preferences at scale to provide a truly personalised experience. By leveraging Cheetah Experiences and offering a value exchange, fans will tell all – the products they desire, what they look for in a loyalty program, and what motivates them to engage. And that information translates to a hugely successful club both now and into the future.

Download this campaign guide packed with examples from leading sports brands and associations that are delivering engaging, interactive experiences in return for fans’ opt-ins and preference data, and then using this data to deliver true personalisation.

(Editor’s note: This article is in association with Cheetah Digital)
