EU AI legislation sparks controversy over data transparency

The European Union recently introduced the AI Act, a new governance framework compelling organisations to enhance transparency regarding their AI systems’ training data.

Should this legislation come into force, it could penetrate the defences that many in Silicon Valley have built against such detailed scrutiny of AI development and deployment processes.

Since the public release of OpenAI’s Microsoft-backed ChatGPT 18 months ago, there has been significant growth in interest and investment in generative AI technologies. These applications, capable of writing text, creating images, and producing audio content at record speeds, have attracted considerable attention. However, the surge in AI activity prompts a pressing question: how do AI developers actually source the data needed to train their models, and does it involve the unauthorised use of copyrighted material?

Implementing the AI Act

The EU’s AI Act, intended to be implemented gradually over the next two years, aims to address these issues. New laws take time to embed, and a gradual rollout gives regulators time to adapt and businesses time to adjust to their obligations. However, the implementation of some rules remains in doubt.

One of the more contentious sections of the Act stipulates that organisations deploying general-purpose AI models, such as ChatGPT, must provide “detailed summaries” of the content used to train them. The newly established AI Office has announced plans to release a template for organisations to follow in early 2025, following consultation with stakeholders.

AI companies have expressed strong resistance to revealing their training data, describing this information as trade secrets that would provide competitors with an unfair advantage if made public. The level of detail required in these transparency reports will have significant implications for both smaller AI startups and major tech companies like Google and Meta, which have positioned AI technology at the centre of their future operations.

Over the past year, several top technology companies—Google, OpenAI, and Stability AI—have faced lawsuits from creators who claim their content was used without permission to train AI models. Under growing scrutiny, however, some tech companies have in the past two years negotiated content-licensing deals with individual media outlets and websites. Some creators and lawmakers remain concerned that these measures do not go far enough.

European lawmakers’ divide

In Europe, differences among lawmakers are stark. Dragos Tudorache, who led the drafting of the AI Act in the European Parliament, argues that AI companies should be required to open-source their datasets. Tudorache emphasises the importance of transparency so that creators can determine whether their work has been used to train AI algorithms.

Conversely, under the leadership of President Emmanuel Macron, the French government has privately opposed introducing rules that could hinder the competitiveness of European AI startups. French Finance Minister Bruno Le Maire has emphasised the need for Europe to be a world leader in AI, not merely a consumer of American and Chinese products.

The AI Act acknowledges the need to balance the protection of trade secrets with the facilitation of rights for parties with legitimate interests, including copyright holders. However, striking this balance remains a significant challenge.

Attitudes vary across the industry. Matthieu Riouf, CEO of the AI-powered image-editing firm Photoroom, compares the situation to culinary practices, claiming there’s a secret part of the recipe that the best chefs wouldn’t share. His is just one of many arguments companies make for keeping training data under wraps. However, Thomas Wolf, co-founder of one of the world’s top AI startups, Hugging Face, argues that while there will always be an appetite for transparency, that doesn’t mean the entire industry will adopt a transparency-first approach.

A series of recent controversies has driven home just how complicated this all is. OpenAI demonstrated the latest version of ChatGPT in a public session and was roundly criticised for using a synthetic voice that sounded nearly identical to that of actress Scarlett Johansson. Such examples point to the potential for AI technologies to violate personal and proprietary rights.

Throughout the development of these regulations, there has been heated debate about their potential effects on future innovation and competitiveness in the AI world. In particular, the French government has urged that innovation, not regulation, should be the starting point, given the dangers of regulating aspects that have not been fully comprehended.

The way the EU regulates AI transparency could have significant impacts on tech companies, digital creators, and the overall digital landscape. Policymakers thus face the challenge of fostering innovation in the dynamic AI industry while simultaneously guiding it towards safe, ethical decisions and preventing IP infringement.

In sum, if adopted, the EU AI Act would be a significant step toward greater transparency in AI development. However, the practical implementation of these regulations, and their effects on the industry, could still be some way off. As this new regulatory paradigm takes shape, the balance between innovation, ethical AI development, and the protection of intellectual property will remain a central and contested issue for stakeholders of all stripes to grapple with.

See also: Apple is reportedly getting free ChatGPT access

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

AI pioneers turn whistleblowers and demand safeguards

OpenAI is facing a wave of internal strife and external criticism over its practices and the potential risks posed by its technology. 

In May, several high-profile employees departed from the company, including Jan Leike, the former head of OpenAI’s “superalignment” efforts to ensure advanced AI systems remain aligned with human values. Leike’s exit came shortly after OpenAI unveiled its new flagship GPT-4o model, which it touted as “magical” at its Spring Update event.

According to reports, Leike’s departure was driven by constant disagreements over security measures, monitoring practices, and the prioritisation of flashy product releases over safety considerations.

Leike’s exit has opened a Pandora’s box for the AI firm. Former OpenAI board members have come forward with allegations of psychological abuse levelled against CEO Sam Altman and the company’s leadership.

The growing internal turmoil at OpenAI coincides with mounting external concerns about the potential risks posed by generative AI technology like the company’s own language models. Critics have warned about the imminent existential threat of advanced AI surpassing human capabilities, as well as more immediate risks like job displacement and the weaponisation of AI for misinformation and manipulation campaigns.

In response, a group of current and former employees from OpenAI, Anthropic, DeepMind, and other leading AI companies have penned an open letter addressing these risks.

“We are current and former employees at frontier AI companies, and we believe in the potential of AI technology to deliver unprecedented benefits to humanity. We also understand the serious risks posed by these technologies,” the letter states.

“These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction. AI companies themselves have acknowledged these risks, as have governments across the world, and other AI experts.”

The letter, which has been signed by 13 employees and endorsed by AI pioneers Yoshua Bengio and Geoffrey Hinton, outlines four core demands aimed at protecting whistleblowers and fostering greater transparency and accountability around AI development:

  1. That companies will not enforce non-disparagement clauses or retaliate against employees for raising risk-related concerns.
  2. That companies will facilitate a verifiably anonymous process for employees to raise concerns to boards, regulators, and independent experts.
  3. That companies will support a culture of open criticism and allow employees to publicly share risk-related concerns, with appropriate protection of trade secrets.
  4. That companies will not retaliate against employees who share confidential risk-related information after other processes have failed.

“They and others have bought into the ‘move fast and break things’ approach and that is the opposite of what is needed for technology this powerful and this poorly understood,” said Daniel Kokotajlo, a former OpenAI employee who left due to concerns over the company’s values and lack of responsibility.

The demands come amid reports that OpenAI has forced departing employees to sign non-disclosure agreements preventing them from criticising the company or risk losing their vested equity. OpenAI CEO Sam Altman admitted being “embarrassed” by the situation but claimed the company had never actually clawed back anyone’s vested equity.

As the AI revolution charges forward, the internal strife and whistleblower demands at OpenAI underscore the growing pains and unresolved ethical quandaries surrounding the technology.

See also: OpenAI disrupts five covert influence operations


X now permits AI-generated adult content

Social media network X has updated its rules to formally permit users to share consensually produced AI-generated NSFW content, provided it is clearly labelled. This change aligns with previous experiments under Elon Musk’s leadership, which involved hosting adult content within specific communities.

“We believe that users should be able to create, distribute, and consume material related to sexual themes as long as it is consensually produced and distributed. Sexual expression, visual or written, can be a legitimate form of artistic expression,” X’s updated ‘adult content’ policy states.

The policy further elaborates: “We believe in the autonomy of adults to engage with and create content that reflects their own beliefs, desires, and experiences, including those related to sexuality. We balance this freedom by restricting exposure to adult content for children or adult users who choose not to see it.”

Users can mark their posts as containing sensitive media, ensuring that such content is restricted from users under 18 or those who haven’t provided their birth dates.

X’s rules on violent content contain similar guidelines, but the platform maintains a strict stance against excessively gory content and depictions of sexual violence. Explicit threats and content inciting or glorifying violence remain prohibited.

X’s decision to allow graphic content is aimed at enabling users to participate in discussions about current events, including sharing relevant images and videos. 

Although X has never outright banned porn, these new clauses could pave the way for developing services centred around adult content, potentially creating a competitor to services like OnlyFans and enhancing its revenue streams. This would further Musk’s vision of X becoming an “everything app,” similar to China’s WeChat.

A 2022 Reuters report, citing internal company documents, indicated that approximately 13% of posts on the platform contained adult content. This percentage has likely increased, especially with the proliferation of porn bots on X.

See also: Elon Musk’s xAI secures $6B to challenge OpenAI in AI race


Nicholas Brackney, Dell: How we leverage a four-pillar AI strategy

Dell is deeply embedded in the AI landscape, leveraging a comprehensive four-pillar strategy to integrate the technology across its products and services.

Nicholas Brackney, Senior Consultant in Product Marketing at Dell, discussed the company’s AI initiatives ahead of AI & Big Data Expo North America.

Dell’s AI strategy is structured around four core principles: AI-In, AI-On, AI-For, and AI-With:

  1. “Embedding AI capabilities in our offerings and services drives speed, intelligence, and automation,” Brackney explained. This ensures that AI is a fundamental component of Dell’s offerings.
  2. The company also enables customers to run powerful AI workloads on its comprehensive portfolio of solutions, from desktops to data centres, across clouds, and at the edge.
  3. AI innovation and tooling are applied for Dell’s business to enhance operations and share best practices with customers.
  4. Finally, Dell collaborates with strategic partners within an open AI ecosystem to simplify and enhance the AI experience.

Dell is well-positioned to help customers navigate AI workloads, emphasising choice and adaptability through the various evolutions of emerging technology. Brackney highlighted Dell’s commitment to serving customers from the early stages of AI adoption to achieving AI at scale.

“We’ve always believed in providing choice and have been doing it through the various evolutions of emerging technology, including AI, and understanding the challenges that come with them,” explained Brackney. “We fully leverage our unique operating model to serve customers in the early innings of AI to a future of AI at scale.”

Looking to the future, Dell is particularly excited about the potential of AI PCs.

“We know organisations and their knowledge workers are excited about AI, and they want to fit it into all their workflows,” Brackney said. Dell is focused on integrating AI into software and ensuring it runs efficiently on the right systems, enhancing end-to-end customer journeys in AI.

Ethical concerns in AI deployment are also a priority for Dell. Addressing issues such as deepfakes, transparency, and bias, Brackney emphasised the importance of a shared, secure, and sustainable approach to AI development.

“We believe in a shared, secure, and sustainable approach. By getting the foundations right at their core, we can eliminate some of the greatest risks associated with AI and work to ensure it acts as a force for good,” explained Brackney.

User data privacy in AI-driven products is another critical focus area. Brackney outlined Dell’s strategy of integrating AI with existing security investments without introducing new risks. Dell offers a suite of secure products, comprehensive data protection, advanced cybersecurity features, and global support services to safeguard user data.

On the topic of job displacement due to AI, Brackney underscored that Dell views AI as augmenting human potential rather than replacing it.

“The roles may change but the human element will always be key,” Brackney stated. “At Dell, we encourage our team members to understand, explore, and, where appropriate, use tools based on AI to learn, evolve, and enhance the overall work experience.”

Looking ahead, Brackney envisions a transformative role for AI within Dell and the tech industry. “We see customers in every industry wanting to become leaders in AI because it is critical to their organisation’s innovation, growth, and productivity,” he noted.

Dell aims to support this evolution by providing the necessary architectures, frameworks, and services to assist its customers on this transformative journey.

Dell is a key sponsor of this year’s AI & Big Data Expo. Check out Dell’s keynote presentation From Data Novice to Data Champion – Cultivating Data Literacy Across the Organization and swing by Dell’s booth at stand #66 to hear about AI from the company’s experts.

OpenAI takes steps to boost AI-generated content transparency

OpenAI is joining the Coalition for Content Provenance and Authenticity (C2PA) steering committee and will integrate the open standard’s metadata into its generative AI models to increase transparency around generated content.

The C2PA standard allows digital content to be certified with metadata proving its origins, whether created entirely by AI, edited using AI tools, or captured traditionally. OpenAI has already started adding C2PA metadata to images from its latest DALL-E 3 model output in ChatGPT and the OpenAI API. The metadata will be integrated into OpenAI’s upcoming video generation model Sora when launched more broadly.
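For the technically curious, C2PA manifests in JPEG files travel inside JUMBF boxes embedded in APP11 application segments, so their presence can be detected by walking the file’s segment structure. Below is a minimal sketch of that check; it is a heuristic for illustration only (the filename is hypothetical, and production code should verify manifests with the official C2PA SDK or c2patool rather than a byte scan).

```python
import struct

def has_c2pa_manifest(path: str) -> bool:
    """Heuristic check: does this JPEG appear to carry a C2PA (JUMBF) manifest?"""
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":  # SOI marker missing: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break  # lost sync with the segment structure; stop scanning
        marker = data[i + 1]
        if marker == 0xDA:  # Start of Scan: metadata segments are behind us
            break
        (length,) = struct.unpack(">H", data[i + 2 : i + 4])
        segment = data[i + 4 : i + 2 + length]
        # C2PA embeds its manifest store in JUMBF boxes inside APP11 (0xEB)
        if marker == 0xEB and b"c2pa" in segment:
            return True
        i += 2 + length
    return False

# Hypothetical filename, for illustration
print(has_c2pa_manifest("dalle3_output.jpg"))
```

Note that, as OpenAI points out below, the absence of a manifest proves nothing, since metadata is trivially stripped; it is the presence of a valid manifest that is hard to forge.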

“People can still create deceptive content without this information (or can remove it), but they cannot easily fake or alter this information, making it an important resource to build trust,” OpenAI explained.

The move comes amid growing concerns about the potential for AI-generated content to mislead voters ahead of major elections in the US, UK, and other countries this year. Authenticating AI-created media could help combat deepfakes and other manipulated content aimed at disinformation campaigns.

While technical measures help, OpenAI acknowledges that enabling content authenticity in practice requires collective action from platforms, creators, and content handlers to retain metadata for end consumers.

In addition to the C2PA integration, OpenAI is developing new provenance methods, such as tamper-resistant watermarking for audio, along with image detection classifiers that identify AI-generated visuals.

OpenAI has opened applications for access to its DALL-E 3 image detection classifier through its Researcher Access Program. The tool predicts the likelihood an image originated from one of OpenAI’s models.

“Our goal is to enable independent research that assesses the classifier’s effectiveness, analyses its real-world application, surfaces relevant considerations for such use, and explores the characteristics of AI-generated content,” the company said.

Internal testing shows high accuracy in distinguishing non-AI images from DALL-E 3 visuals, with around 98% of DALL-E images correctly identified and less than 0.5% of non-AI images incorrectly flagged. However, the classifier struggles to distinguish DALL-E images from those produced by other generative AI models.
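Those two figures describe the classifier, not the images it flags: how much a flag should be trusted also depends on what share of images in the checked stream are actually AI-generated. A short illustration of this base-rate effect using Bayes’ theorem (the prevalence values are assumptions for illustration, not OpenAI’s numbers):

```python
def flagged_precision(tpr: float, fpr: float, prevalence: float) -> float:
    """P(image is DALL-E | classifier flags it), via Bayes' theorem."""
    true_flags = tpr * prevalence
    false_flags = fpr * (1 - prevalence)
    return true_flags / (true_flags + false_flags)

# Reported rates: ~98% true positives, <0.5% false positives
for prevalence in (0.5, 0.1, 0.01):
    p = flagged_precision(tpr=0.98, fpr=0.005, prevalence=prevalence)
    print(f"{prevalence:5.0%} AI-generated -> {p:.1%} of flags correct")
```

At a prevalence of 1%, roughly a third of flagged images would be false alarms even at these accuracy levels, which illustrates why such classifiers are offered for research rather than as definitive verdicts.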

OpenAI has also incorporated watermarking into its Voice Engine custom voice model, currently in limited preview.

The company believes increased adoption of provenance standards will lead to metadata accompanying content through its full lifecycle to fill “a crucial gap in digital content authenticity practices.”

OpenAI has also joined forces with Microsoft to launch a $2 million societal resilience fund to support AI education and understanding, including through AARP, International IDEA, and the Partnership on AI.

“While technical solutions like the above give us active tools for our defences, effectively enabling content authenticity in practice will require collective action,” OpenAI states.

“Our efforts around provenance are just one part of a broader industry effort – many of our peer research labs and generative AI companies are also advancing research in this area. We commend these endeavours—the industry must collaborate and share insights to enhance our understanding and continue to promote transparency online.”

See also: Chuck Ros, SoftServe: Delivering transformative AI solutions responsibly


Chuck Ros, SoftServe: Delivering transformative AI solutions responsibly

As the world embraces the transformative potential of AI, SoftServe is at the forefront of developing cutting-edge AI solutions while prioritising responsible deployment.

Ahead of AI & Big Data Expo North America – where the company will showcase its expertise – Chuck Ros, Industry Success Director at SoftServe, provided valuable insights into the company’s AI initiatives, the challenges faced, and its future strategy for leveraging this powerful technology.

Highlighting a recent AI project that exemplifies SoftServe’s innovative approach, Ros discussed the company’s unique solution for a software company in the field service management industry. The vision was to create an easy-to-use, language model-enabled interface that would allow field technicians to access service histories, equipment documentation, and maintenance schedules seamlessly, enhancing productivity and operational efficiency.

“Our AI engineers built a prompt evaluation pipeline that seamlessly considers cost, processing time, semantic similarity, and the likelihood of hallucinations,” Ros explained. “It proved to be an extremely effective architecture that led to improved operational efficiencies for the customer, increased productivity for users in the field, competitive edge for the software company and for their clients, and—perhaps most importantly—a spark for additional innovation.”
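The article doesn’t describe the pipeline’s internals, but the general shape of such an evaluation is easy to sketch. The following hypothetical illustration (field names, weights, and numbers are all assumptions, not SoftServe’s implementation) scores candidate prompts across the four criteria Ros lists:

```python
from dataclasses import dataclass

@dataclass
class PromptRun:
    """Measurements from trialling one candidate prompt (all hypothetical)."""
    prompt: str
    cost_usd: float            # API spend for the evaluation run
    latency_s: float           # end-to-end processing time
    similarity: float          # 0-1 semantic similarity to a reference answer
    hallucination_risk: float  # 0-1, e.g. share of claims unsupported by sources

def score(run: PromptRun) -> float:
    """Weighted score; in practice each metric would first be normalised
    to a comparable scale and the weights tuned per use case."""
    return (0.4 * run.similarity
            - 0.2 * run.cost_usd
            - 0.2 * run.latency_s
            - 0.2 * run.hallucination_risk)

runs = [
    PromptRun("terse instructions", 0.004, 1.2, 0.81, 0.10),
    PromptRun("verbose instructions with examples", 0.011, 2.9, 0.90, 0.04),
]
best = max(runs, key=score)
print(f"Selected prompt: {best.prompt!r} (score={score(best):.3f})")
```

The design point is simply that prompt selection becomes an automated optimisation over measured trade-offs rather than a matter of taste.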

While the potential of AI is undeniable, Ros acknowledged the key mistakes businesses often make when deploying AI solutions, emphasising the importance of having a robust data strategy, building adequate data pipelines, and thoroughly testing the models. He also cautioned against rushing to deploy generative AI solutions without properly assessing feasibility and business viability, stating, “We need to pay at least as much attention to whether it should be built as we do to whether it can be built.”

Recognising the critical concern of ethical AI development, Ros stressed the significance of human oversight throughout the entire process. “Managing dynamic data quality, testing and detecting for bias and inaccuracies, ensuring high standards of data privacy, and ethical use of AI systems all require human oversight,” he said. SoftServe’s approach to AI development involves structured engagements that evaluate data and algorithms for suitability, assess potential risks, and implement governance measures to ensure accountability and data traceability.

Looking ahead, Ros envisions AI playing an increasingly vital role in SoftServe’s business strategy, with ongoing refinements to AI-assisted software development lifecycles and the introduction of new tools and processes to boost productivity further. SoftServe’s findings suggest that GenAI can accelerate programming productivity by as much as 40 percent.

“I see more models assisting us on a daily basis, helping us write emails and documentation and helping us more and more with the simple, time-consuming mundane tasks we still do,” Ros said. “In the next five years I see ongoing refinement of that view to AI in SDLCs and the regular introduction of new tools, new models, new processes that push that 40 percent productivity hike to 50 percent and 60 percent.”

When asked how SoftServe is leveraging AI for social good, Ros explained that the company delivers solutions ranging from machine learning models that help students discover their passions and aptitudes, enabling personalised learning experiences, to tools that assist teachers with their daily tasks and make their jobs easier.

“I love this question because one of SoftServe’s key strategic tenets is to power our social purpose and make the world a better place. It’s obviously an ambitious goal, but it’s important to our employees and it’s important to our clients,” explained Ros.

“It’s why we created the Open Eyes Foundation and have collected more than $15 million with the support of the public, our clients, our partners, and of course our employees. We naturally support the Open Eyes Foundation with all manner of technology needs, including AI.”

At the AI & Big Data Expo North America, SoftServe plans to host a keynote presentation titled “Revolutionizing Learning: Unleashing the Power of Generative AI in Education and Beyond,” which will explore the transformative impact of generative AI and large language models in the education sector.

“As we explore the mechanisms through which generative AI leverages data – including training methodologies like fine-tuning and Retrieval Augmented Generation (RAG) – we will pinpoint high-value, low-risk applications that promise to redefine the educational landscape,” said Ros.

“The journey from a nascent idea to a fully operational AI solution is fraught with challenges, including ethical considerations and risks inherent in deploying AI solutions. Through the lens of a success story at Mesquite ISD, where generative AI was leveraged to help students uncover their passions and aptitudes enabling the delivery of personalised learning experiences, this presentation will illustrate the practical benefits and transformative potential of generative AI in education.”
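For readers unfamiliar with the RAG technique Ros mentions, it grounds a model’s answers in documents retrieved at query time rather than in training data alone. A minimal sketch of the retrieval step follows; the documents are invented, and the placeholder embedding is random, so only the mechanics (embed, rank by cosine similarity, stuff the winners into the prompt) are meaningful. A real system would call an actual embedding model.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: a real system would call an embedding model here."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=256)
    return v / np.linalg.norm(v)

documents = [
    "Course catalogue: advanced robotics elective, grades 9-12.",
    "District policy on AI tools in classrooms.",
    "Student interest survey results, spring term.",
]
doc_vecs = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    scores = doc_vecs @ q  # unit vectors, so dot product = cosine similarity
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

context = "\n".join(retrieve("Which electives suit an interest in robotics?"))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: ..."
print(prompt)
```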

Additionally, the company will participate in panel discussions on topics such as “Getting to Production-Ready – Challenges and Best Practices for Deploying AI” and “Navigating the Data & AI Landscape – Ensuring Safety, Security, and Responsibility in Big Data and AI Systems.” These sessions will provide attendees with valuable insights from SoftServe’s experts on overcoming deployment challenges, ensuring data quality and user acceptance, and mitigating risks associated with AI implementation.

As a key sponsor of the event, SoftServe aims to contribute to the discourse surrounding the responsible and ethical development of AI solutions, while sharing its expertise and vision for leveraging this powerful technology to drive innovation, enhance productivity, and address global challenges. 

“We are, of course, always interested in both sharing and hearing about the diversity of business cases for applications in AI and big data: the concept of the rising tide lifting all boats is definitely relevant in AI and GenAI in particular, and we’re proud to be a part of the AI technology community,” Ros concludes.


OpenAI faces complaint over fictional outputs

European data protection advocacy group noyb has filed a complaint against OpenAI over the company’s inability to correct inaccurate information generated by ChatGPT. The group alleges that OpenAI’s failure to ensure the accuracy of personal data processed by the service violates the General Data Protection Regulation (GDPR) in the European Union.

“Making up false information is quite problematic in itself. But when it comes to false information about individuals, there can be serious consequences,” said Maartje de Graaf, Data Protection Lawyer at noyb. 

“It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.”

The GDPR requires that personal data be accurate, and individuals have the right to rectification if data is inaccurate, as well as the right to access information about the data processed and its sources. However, OpenAI has openly admitted that it cannot correct incorrect information generated by ChatGPT or disclose the sources of the data used to train the model.

“Factual accuracy in large language models remains an area of active research,” OpenAI has argued.

The advocacy group highlights a New York Times report that found chatbots like ChatGPT “invent information at least 3 percent of the time – and as high as 27 percent.” In the complaint against OpenAI, noyb cites an example where ChatGPT repeatedly provided an incorrect date of birth for the complainant, a public figure, despite requests for rectification.

“Despite the fact that the complainant’s date of birth provided by ChatGPT is incorrect, OpenAI refused his request to rectify or erase the data, arguing that it wasn’t possible to correct data,” noyb stated.

OpenAI claimed it could filter or block data on certain prompts, such as the complainant’s name, but not without blocking all information about the individual. The company also failed to adequately respond to the complainant’s access request, which the GDPR requires companies to fulfil.

“The obligation to comply with access requests applies to all companies. It is clearly possible to keep records of training data that was used to at least have an idea about the sources of information,” said de Graaf. “It seems that with each ‘innovation,’ another group of companies thinks that its products don’t have to comply with the law.”

European privacy watchdogs have already scrutinised ChatGPT’s inaccuracies, with the Italian Data Protection Authority imposing a temporary restriction on OpenAI’s data processing in March 2023 and the European Data Protection Board establishing a task force on ChatGPT.

In its complaint, noyb is asking the Austrian Data Protection Authority to investigate OpenAI’s data processing and measures to ensure the accuracy of personal data processed by its large language models. The advocacy group also requests that the authority order OpenAI to comply with the complainant’s access request, bring its processing in line with the GDPR, and impose a fine to ensure future compliance.

You can read the full complaint here (PDF)

See also: Igor Jablokov, Pryon: Building a responsible AI future


Igor Jablokov, Pryon: Building a responsible AI future

As artificial intelligence continues to rapidly advance, ethical concerns around the development and deployment of these world-changing innovations are coming into sharper focus.

In an interview ahead of the AI & Big Data Expo North America, Igor Jablokov, CEO and founder of AI company Pryon, addressed these pressing issues head-on.

Critical ethical challenges in AI

“There’s not one, maybe there’s almost 20 plus of them,” Jablokov stated when asked about the most critical ethical challenges. He outlined a litany of potential pitfalls that must be carefully navigated, from AI hallucinations and the generation of falsehoods to data privacy violations and intellectual property leaks from training on proprietary information.

Bias and adversarial content seeping into training data is another major worry, according to Jablokov. Security vulnerabilities like embedded agents and prompt injection attacks also rank highly on his list of concerns, as well as the extreme energy consumption and climate impact of large language models.

Pryon’s origins can be traced back to the earliest stirrings of modern AI over two decades ago. Jablokov previously led an advanced AI team at IBM where they designed a primitive version of what would later become Watson. “They didn’t greenlight it. And so, in my frustration, I departed, stood up our last company,” he recounted. That company went on to become Amazon’s first AI-related acquisition, birthing what’s now Alexa.

The current incarnation of Pryon has aimed to confront AI’s ethical quandaries through responsible design focused on critical infrastructure and high-stakes use cases. “[We wanted to] create something purposely hardened for more critical infrastructure, essential workers, and more serious pursuits,” Jablokov explained.

A key element is offering enterprises flexibility and control over their data environments. “We give them choices in terms of how they’re consuming their platforms…from multi-tenant public cloud, to private cloud, to on-premises,” Jablokov said. This allows organisations to ring-fence highly sensitive data behind their own firewalls when needed.

Pryon also emphasises explainable AI and verifiable attribution of knowledge sources. “When our platform reveals an answer, you can tap it, and it always goes to the underlying page and highlights exactly where it learned a piece of information from,” Jablokov described. This allows human validation of the knowledge provenance.

In some realms like energy, manufacturing, and healthcare, Pryon has implemented human-in-the-loop oversight before AI-generated guidance goes to frontline workers. Jablokov pointed to one example where “supervisors can double-check the outcomes and essentially give it a badge of approval” before information reaches technicians.

Ensuring responsible AI development

Jablokov strongly advocates for new regulatory frameworks to ensure responsible AI development and deployment. While welcoming the White House’s recent executive order as a start, he expressed concerns about risks around generative AI like hallucinations, static training data, data leakage vulnerabilities, lack of access controls, copyright issues, and more.  

Pryon has been actively involved in these regulatory discussions. “We’re back-channelling to a mess of government agencies,” Jablokov said. “We’re taking an active hand in terms of contributing our perspectives on the regulatory environment as it rolls out…We’re showing up by expressing some of the risks associated with generative AI usage.”

On the potential for an uncontrolled, existential “AI risk” – as has been warned about by some AI leaders – Jablokov struck a relatively sanguine tone about Pryon’s governed approach: “We’ve always worked towards verifiable attribution…extracting out of enterprises’ own content so that they understand where the solutions are coming from, and then they decide whether they make a decision with it or not.”

The CEO firmly distanced Pryon’s mission from the emerging crop of open-ended conversational AI assistants, some of which have raised controversy around hallucinations and lacking ethical constraints.

“We’re not a clown college. Our stuff is designed to go into some of the more serious environments on planet Earth,” Jablokov stated bluntly. “I think none of you would feel comfortable ending up in an emergency room and having the medical practitioners there typing in queries into a ChatGPT, a Bing, a Bard…”

He emphasised the importance of subject matter expertise and emotional intelligence when it comes to high-stakes, real-world decision-making. “You want somebody that has hopefully many years of experience treating things similar to the ailment that you’re currently undergoing. And guess what? You like the fact that there is an emotional quality that they care about getting you better as well.”

At the upcoming AI & Big Data Expo, Pryon will unveil new enterprise use cases showcasing its platform across industries like energy, semiconductors, pharmaceuticals, and government. Jablokov teased that they will also reveal “different ways to consume the Pryon platform” beyond the end-to-end enterprise offering, including potentially lower-level access for developers.

As AI’s domain rapidly expands from narrow applications to more general capabilities, addressing the ethical risks will become only more critical. Pryon’s sustained focus on governance, verifiable knowledge sources, human oversight, and collaboration with regulators could offer a template for more responsible AI development across industries.


Kamal Ahluwalia, Ikigai Labs: How to take your business to the next level with generative AI

AI News caught up with president of Ikigai Labs, Kamal Ahluwalia, to discuss all things gen AI, including top tips on how to adopt and utilise the tech, and the importance of embedding ethics into AI design.

Could you tell us a little bit about Ikigai Labs and how it can help companies?

Ikigai is helping organisations transform sparse, siloed enterprise data into predictive and actionable insights with a generative AI platform specifically designed for structured, tabular data.  

A significant portion of enterprise data is structured, tabular data, residing in systems like SAP and Salesforce. This data drives the planning and forecasting for an entire business. While there is a lot of excitement around Large Language Models (LLMs), which are great for unstructured data like text, Ikigai’s patented Large Graphical Models (LGMs), developed out of MIT, are focused on solving problems using structured data.  

Ikigai’s solution focuses particularly on time-series datasets, as enterprises run on four key time series: sales, products, employees, and capital/cash. Understanding how these time series come together in critical moments, such as launching a new product or entering a new geography, is crucial for making better decisions that drive optimal outcomes. 
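To make the time-series point concrete, here is a deliberately simple sketch of forecasting one of those four series. It uses textbook Holt’s linear exponential smoothing, not Ikigai’s patented LGM approach, and the quarterly sales figures are invented for illustration:

```python
def holt_forecast(series, alpha=0.5, beta=0.3, horizon=4):
    """Holt's linear (level + trend) exponential smoothing forecast."""
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + (h + 1) * trend for h in range(horizon)]

# Hypothetical quarterly sales figures -- illustrative, not Ikigai data
sales = [112, 118, 131, 127, 140, 152, 149, 163]
print([round(x, 1) for x in holt_forecast(sales)])
```

Enterprise platforms differ from this toy example in how they handle sparse or missing data, many correlated series at once, and explainability, which is where Ikigai positions its LGMs.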

How would you describe the current generative AI landscape, and how do you envision it developing in the future? 

The technologies that have captured the imagination, such as LLMs from OpenAI, Anthropic, and others, come from a consumer background. They were trained on internet-scale data, and the training datasets are only getting larger, which requires significant computing power and storage. It took $100m to train GPT-4, and GPT-5 is expected to cost $2.5bn. 

This reality works in a consumer setting, where costs can be shared across a very large user set, and some mistakes are just part of the training process. But in the enterprise, mistakes cannot be tolerated, hallucinations are not an option, and accuracy is paramount. Additionally, the cost of training a model on internet-scale data is just not affordable, and companies that leverage a foundational model risk exposure of their IP and other sensitive data.  

While some companies have gone the route of building their own tech stack so LLMs can be used in a safe environment, most organisations lack the talent and resources to build it themselves. 

In spite of the challenges, enterprises want the kind of experience that LLMs provide. But the results need to be accurate – even when the data is sparse – and there must be a way to keep confidential data out of a foundational model. It’s also critical to find ways to lower the total cost of ownership, including the cost to train and upgrade the models, reliance on GPUs, and other issues related to governance and data retention. All of this leads to a very different set of solutions than what we currently have. 

How can companies create a strategy to maximise the benefits of generative AI? 

While much has been written about Large Language Models (LLMs) and their potential applications, many customers are asking “how do I build differentiation?”  

With LLMs, nearly everyone will have access to the same capabilities, such as chatbot experiences or generating marketing emails and content – if everyone has the same use cases, it’s not a differentiator. 

The key is to shift the focus from generic use cases to finding areas of optimisation and understanding specific to your business and circumstances. For example, if you’re in manufacturing and need to move operations out of China, how do you plan for uncertainty in logistics, labour, and other factors? Or, if you want to build more eco-friendly products, materials, vendors, and cost structures will change. How do you model this? 

These use cases are some of the ways companies are attempting to use AI to run their business and plan in an uncertain world. Finding specificity and tailoring the technology to your unique needs is probably the best way to use AI to find true competitive advantage.  

What are the main challenges companies face when deploying generative AI and how can these be overcome? 

Listening to customers, we’ve learned that while many have experimented with generative AI, only a fraction have pushed things through to production due to prohibitive costs and security concerns. But what if your models could be trained just on your own data, running on CPUs rather than requiring GPUs, with accurate results and transparency around how you’re getting those results? What if all the regulatory and compliance issues were addressed, leaving no questions about where the data came from or how much data is being retrained? This is what Ikigai is bringing to the table with Large Graphical Models.  

One challenge we’ve helped businesses address is the data problem. Nearly 100% of organisations are working with limited or imperfect data, and in many cases, this is a barrier to doing anything with AI. Companies often talk about data clean-up, but in reality, waiting for perfect data can hinder progress. AI solutions that can work with limited, sparse data are essential, as they allow companies to learn from what they have and account for change management. 

The other challenge is how internal teams can partner with the technology for better outcomes. Especially in regulated industries, human oversight, validation, and reinforcement learning are necessary. Adding an expert in the loop ensures that AI is not making decisions in a vacuum, so finding solutions that incorporate human expertise is key. 

To what extent do you think adopting generative AI successfully requires a shift in company culture and mindset? 

Successfully adopting generative AI requires a significant shift in company culture and mindset, with strong commitment from executives and continuous education. I saw this firsthand at Eightfold when we were bringing our AI platform to companies in over 140 countries. I always recommend that teams first educate executives on what’s possible, how to do it, and how to get there. They need to have the commitment to see it through, which involves some experimentation and some committed course of action. They must also understand the expectations placed on colleagues, so they can be prepared for AI becoming a part of daily life. 

Top-down commitment and communication from executives go a long way. There’s a lot of fear-mongering suggesting that AI will take jobs, so executives need to set the tone that, while AI won’t eliminate jobs outright, everyone’s job is going to change in the next couple of years, not just for people at the bottom or middle levels, but for everyone. Ongoing education throughout the deployment is key for teams learning how to get value from the tools and adapting the way they work to incorporate the new skillsets.  

It’s also important to adopt technologies that play to the reality of the enterprise. For example, you have to let go of the idea that you need to get all your data in order to take action. In time-series forecasting, by the time you’ve taken four quarters to clean up data, there’s more data available, and it’s probably a mess. If you keep waiting for perfect data, you won’t be able to use your data at all. So AI solutions that can work with limited, sparse data are crucial, as you have to be able to learn from what you have. 

Another important aspect is adding an expert in the loop. It would be a mistake to assume AI is magic. There are a lot of decisions, especially in regulated industries, where you can’t have AI just make the decision. You need oversight, validation, and reinforcement learning – this is exactly how consumer solutions became so good.  

Are there any case studies you could share with us regarding companies successfully utilising generative AI? 

One interesting example is a Marketplace customer that is using us to rationalise their product catalogue. They’re looking to understand the optimal number of SKUs to carry, so they can reduce their inventory carrying costs while still meeting customer needs. Another partner does workforce planning, forecasting, and scheduling, using us for labour balancing in hospitals, retail, and hospitality companies. In their case, all their data is sitting in different systems, and they must bring it into one view so they can balance employee wellness with operational excellence. But because we can support a wide variety of use cases, we work with clients doing everything from forecasting product usage as part of a move to a consumption-based model, to fraud detection. 

You recently launched an AI Ethics Council. What kind of people are on this council and what is its purpose? 

Our AI Ethics Council is all about making sure that the AI technology we’re building is grounded in ethics and responsible design. It’s a core part of who we are as a company, and I’m humbled and honoured to be a part of it alongside such an impressive group of individuals. Our council includes luminaries like Dr. Munther Dahleh, the Founding Director of the Institute for Data Systems and Society (IDSS) and a Professor at MIT; Aram A. Gavoor, Associate Dean at George Washington University and a recognised scholar in administrative law and national security; Dr. Michael Kearns, the National Center Chair for Computer and Information Science at the University of Pennsylvania; and Dr. Michael I. Jordan, a Distinguished Professor at UC Berkeley in the Departments of Electrical Engineering and Computer Science, and Statistics.  

The purpose of our AI Ethics Council is to tackle pressing ethical and security issues impacting AI development and usage. As AI rapidly becomes central to consumers and businesses across nearly every industry, we believe it is crucial to prioritise responsible development and cannot ignore the need for ethical considerations. The council will convene quarterly to discuss important topics such as AI governance, data minimisation, confidentiality, lawfulness, accuracy and more. Following each meeting, the council will publish recommendations for actions and next steps that organisations should consider moving forward. As part of Ikigai Labs’ commitment to ethical AI deployment and innovation, we will implement the action items recommended by the council. 

Ikigai Labs raised $25m funding in August last year. How will this help develop the company, its offerings and, ultimately, your customers? 

We have a strong foundation of research and innovation coming out of our core team with MIT, so the funding this time is focused on making the solution more robust, as well as bringing on the team that works with the clients and partners.  

We can solve a lot of problems but are staying focused on solving just a few meaningful ones through time-series super apps. We know that every company runs on four time series, so the goal is covering these in depth and with speed: things like sales forecasting, consumption forecasting, discount forecasting, how to sunset products, catalogue optimisation, etc. We’re excited and looking forward to putting GenAI for tabular data into the hands of as many customers as possible. 

Kamal will take part in a panel discussion titled ‘Barriers to Overcome: People, Processes and Technology’ at the AI & Big Data Expo in Santa Clara on June 5, 2024. You can find all the details here.


UK and South Korea to co-host AI Seoul Summit

The UK and South Korea are set to co-host the AI Seoul Summit on the 21st and 22nd of May. This summit aims to pave the way for the safe development of AI technologies, drawing on the cooperative framework laid down by the Bletchley Declaration.

The two-day event will feature a virtual leaders’ session, co-chaired by British Prime Minister Rishi Sunak and South Korean President Yoon Suk Yeol, and a subsequent in-person meeting among Digital Ministers. UK Technology Secretary Michelle Donelan, and Korean Minister of Science and ICT Lee Jong-Ho will co-host the latter.

This summit builds upon the historic Bletchley Park discussions held in the UK last year, emphasising AI safety, inclusion, and innovation. It aims to ensure that AI advancements benefit humanity while minimising potential risks and enhancing global governance of tech innovation.

“The summit we held at Bletchley Park was a generational moment,” stated Donelan. “If we continue to bring international governments and a broad range of voices together, I have every confidence that we can continue to develop a global approach which will allow us to realise the transformative potential of this generation-defining technology safely and responsibly.”

Echoing this sentiment, Minister Lee Jong-Ho highlighted the importance of the upcoming Seoul Summit in furthering global cooperation on AI safety and innovation.

“AI is advancing at an unprecedented pace that exceeds our expectations, and it is crucial to establish global norms and governance to harness such technological innovations to enhance the welfare of humanity,” explained Lee. “We hope that the AI Seoul Summit will serve as an opportunity to strengthen global cooperation on not only AI safety but also AI innovation and inclusion, and promote sustainable AI development.”

Innovation remains a focal point for the UK, evidenced by initiatives like the Manchester Prize and the formation of the AI Safety Institute: the first state-backed organisation dedicated to AI safety. This proactive approach mirrors the UK’s commitment to international collaboration on AI governance, underscored by a recent agreement with the US on AI safety measures.

Accompanying the Seoul Summit will be the release of the International Scientific Report on Advanced AI Safety. This report, independently led by Turing Award winner Yoshua Bengio, represents a collective effort to consolidate the best scientific research on AI safety. It underscores the summit’s role not only as a forum for discussion but as a catalyst for actionable insight into AI’s safe development.

The agenda of the AI Seoul Summit reflects the urgency of addressing the challenges and opportunities presented by AI, from model safety evaluations to fostering sustainable AI development. As the world embraces AI innovation, the summit embodies a concerted effort to shape a future where technology serves humanity safely and delivers prosperity and inclusivity for all.

See also: US and Japan announce sweeping AI and tech collaboration

