AI Ethics & Society News | Ethical Considerations for AI | AI News

EU AI legislation sparks controversy over data transparency

The European Union recently introduced the AI Act, a new governance framework compelling organisations to enhance transparency regarding their AI systems’ training data.

Should this legislation come into force, it could penetrate the defences that many in Silicon Valley have built against such detailed scrutiny of AI development and deployment processes.

Since the public release of OpenAI’s Microsoft-backed ChatGPT 18 months ago, interest and investment in generative AI technologies have grown rapidly. These applications, capable of writing text, creating images, and producing audio content at record speed, have attracted considerable attention. But the surge in AI activity raises a pressing question: how do AI developers actually source the data needed to train their models, and does it rely on unauthorised use of copyrighted material?

Implementing the AI Act

The EU’s AI Act, intended to be implemented gradually over the next two years, aims to address these issues. New laws take time to embed; a gradual rollout gives regulators time to prepare for enforcement and businesses time to adjust to their new obligations. However, the implementation of some rules remains in doubt.

One of the more contentious sections of the Act stipulates that organisations deploying general-purpose AI models, such as ChatGPT, must provide “detailed summaries” of the content used to train them. The newly established AI Office has announced plans to release a template for organisations to follow in early 2025, following consultation with stakeholders.
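What these “detailed summaries” must contain remains to be defined by the AI Office’s template. As a purely illustrative sketch, a machine-readable disclosure in the spirit of “datasheets for datasets” might look something like the following; every field name and value here is an assumption for illustration, not the AI Office’s actual format.

```python
# Purely illustrative: the AI Office's real template is not expected until early 2025.
# All field names and values are assumptions, modelled loosely on
# "datasheets for datasets" practice.
training_data_summary = {
    "model_name": "example-gpt",  # hypothetical model
    "data_sources": [
        {
            "category": "web crawl",
            "description": "Publicly accessible web pages collected 2021-2023",
            "approximate_share": 0.60,
            "copyright_status": "mixed; robots.txt and opt-out signals honoured",
        },
        {
            "category": "licensed content",
            "description": "News archives obtained under commercial licence",
            "approximate_share": 0.15,
            "copyright_status": "licensed",
        },
    ],
    "languages_covered": ["en", "de", "fr"],
    "personal_data_handling": "PII filtered before training",
}
```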

AI companies have expressed strong resistance to revealing their training data, describing this information as trade secrets that would provide competitors with an unfair advantage if made public. The level of detail required in these transparency reports will have significant implications for both smaller AI startups and major tech companies like Google and Meta, which have positioned AI technology at the centre of their future operations.

Over the past year, several top technology companies—Google, OpenAI, and Stability AI—have faced lawsuits from creators who claim their content was used without permission to train AI models. Under growing scrutiny, however, some tech companies have in the past two years broken ranks and negotiated content-licensing deals with individual media outlets and websites. Some creators and lawmakers remain concerned that these measures are not sufficient.

European lawmakers’ divide

In Europe, differences among lawmakers are stark. Dragos Tudorache, who led the drafting of the AI Act in the European Parliament, argues that AI companies should be required to open-source their datasets. Tudorache emphasises the importance of transparency so that creators can determine whether their work has been used to train AI algorithms.

Conversely, under the leadership of President Emmanuel Macron, the French government has privately opposed introducing rules that could hinder the competitiveness of European AI startups. French Finance Minister Bruno Le Maire has emphasised the need for Europe to be a world leader in AI, not merely a consumer of American and Chinese products.

The AI Act acknowledges the need to balance the protection of trade secrets with the facilitation of rights for parties with legitimate interests, including copyright holders. However, striking this balance remains a significant challenge.

Views differ across the industry. Matthieu Riouf, CEO of the AI-powered image-editing firm Photoroom, compares the situation to culinary practice, arguing that there is a secret part of the recipe the best chefs would never share; his is just one voice in an industry sharply divided on disclosure. Thomas Wolf, co-founder of Hugging Face, one of the world’s top AI startups, counters that while there will always be an appetite for transparency, that doesn’t mean the entire industry will adopt a transparency-first approach.

A series of recent controversies has driven home just how complicated this all is. OpenAI demonstrated the latest version of ChatGPT in a public session and was roundly criticised for using a synthetic voice that sounded nearly identical to that of actress Scarlett Johansson. Such examples point to the potential for AI technologies to violate personal and proprietary rights.

Throughout the development of these regulations, there has been heated debate about their potential effects on future innovation and competitiveness in the AI world. In particular, the French government has urged that innovation, not regulation, should be the starting point, given the dangers of regulating aspects of the technology that are not yet fully understood.

The way the EU regulates AI transparency could have significant impacts on tech companies, digital creators, and the overall digital landscape. Policymakers thus face the challenge of fostering innovation in the dynamic AI industry while simultaneously guiding it towards safe, ethical decisions and preventing IP infringement.

In sum, if adopted, the EU AI Act would mark a significant step toward greater transparency in AI development. How these rules will be implemented in practice, and how the industry will respond, remains to be seen. Moving forward, at the dawn of this new regulatory paradigm, the balance between innovation, ethical AI development, and the protection of intellectual property will remain a central and contested issue for stakeholders of all stripes.

See also: Apple is reportedly getting free ChatGPT access


Musk ends OpenAI lawsuit while slamming Apple’s ChatGPT plans

Elon Musk has dropped his lawsuit against OpenAI, the company he co-founded in 2015. Court filings from the Superior Court of California reveal that Musk called off the legal action on June 11th, just a day before an informal conference was scheduled to discuss the discovery process.

Musk had initially sued OpenAI in March 2024, alleging breach of contract, unfair business practices, and breach of fiduciary duty. He claimed that his contributions to the company were made “in exchange for and in reliance on promises that those assets were irrevocably dedicated to building AI for public benefit, with only safety as a countervailing concern.”

The lawsuit sought remedies for “breach of contract, promissory estoppel, breach of fiduciary duty, unfair business practices, and accounting,” as well as specific performance, restitution, and damages.

However, Musk’s filings to withdraw the case provided no explanation for abandoning the lawsuit. OpenAI had previously called Musk’s claims “incoherent”, arguing that his inability to produce a contract made his breach claims difficult to prove and stating that documents provided by Musk “contradict his allegations as to the alleged terms of the agreement.”

The withdrawal of the lawsuit comes at a time when Musk is strongly opposing Apple’s plans to integrate ChatGPT into its operating systems.

During Apple’s keynote event announcing Apple Intelligence for iOS 18, iPadOS 18, and macOS Sequoia, Musk threatened to ban Apple devices from his companies, calling the integration “an unacceptable security violation.”

Despite assurances from Apple and OpenAI that user data would only be shared with explicit consent and that interactions would be secure, Musk questioned Apple’s ability to ensure data security, stating, “Apple has no clue what’s actually going on once they hand your data over to OpenAI. They’re selling you down the river.”

Since bringing the lawsuit against OpenAI, Musk has also created his own AI company, xAI, and secured over $6 billion in funding for his plans to advance the Grok chatbot on his social network, X.

While Musk’s reasoning for dropping the OpenAI lawsuit remains unclear, his actions suggest a potential shift in focus towards advancing his own AI endeavours while continuing to vocalise his criticism of OpenAI through social media rather than the courts.

See also: DuckDuckGo releases portal giving private access to AI models


DuckDuckGo releases portal giving private access to AI models

DuckDuckGo has released a platform that allows users to interact with popular AI chatbots privately, ensuring that their data remains secure and protected.

The service, accessible at Duck.ai, is globally available and features a light and clean user interface. Users can choose from four AI models: two closed-source models and two open-source models. The closed-source models are OpenAI’s GPT-3.5 Turbo and Anthropic’s Claude 3 Haiku, while the open-source models are Meta’s Llama-3 70B and Mistral AI’s Mixtral 8x7B.

What sets DuckDuckGo AI Chat apart is its commitment to user privacy. Neither DuckDuckGo nor the chatbot providers can use user data to train their models, ensuring that interactions remain private and anonymous. DuckDuckGo also strips away metadata, such as server or IP addresses, so that queries appear to originate from the company itself rather than individual users.

The company has agreements in place with all model providers ensuring that any saved chats are completely deleted within 30 days and that none of the chats made on the platform can be used to train or improve the models. This makes preserving privacy easier than adjusting the privacy settings of each service individually.
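DuckDuckGo has not published its relay code, but the mechanism described above (terminating the user’s connection, discarding identifying metadata, and re-issuing the request under the company’s own credentials) can be sketched in a few lines. The following is a minimal illustration assuming a generic upstream chat API; the endpoint, payload shape, and key handling are placeholders, not DuckDuckGo’s actual implementation.

```python
# Minimal sketch of a metadata-stripping relay; not DuckDuckGo's actual code.
# The upstream URL, payload shape, and credential are placeholder assumptions.
from flask import Flask, jsonify, request
import requests

app = Flask(__name__)
UPSTREAM_URL = "https://api.example-provider.com/v1/chat"  # placeholder
RELAY_API_KEY = "relay-owned-credential"  # the relay's key, never the user's

@app.post("/chat")
def relay():
    # Forward only the conversation payload. The client's IP address, cookies,
    # user agent, and other identifying headers are simply never passed on,
    # so the upstream provider sees the relay as the origin of every request.
    payload = {"messages": request.json.get("messages", [])}
    upstream = requests.post(
        UPSTREAM_URL,
        json=payload,
        headers={"Authorization": f"Bearer {RELAY_API_KEY}"},
        timeout=30,
    )
    return jsonify(upstream.json())
```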

In an era where online services are increasingly hungry for user data, DuckDuckGo’s AI Chat service is a breath of fresh air. The company’s commitment to privacy is a direct response to the growing concerns about data collection and usage in the AI industry. By providing a private and anonymous platform for users to interact with AI chatbots, DuckDuckGo is setting a new standard for the industry.

DuckDuckGo’s AI service is free to use within a daily limit, and the company is considering launching a paid tier to reduce or eliminate these limits. The service is designed to be a complementary partner to its search engine, allowing users to switch between search and AI chat for a more comprehensive search experience.

“We view AI Chat and search as two different but powerful tools to help you find what you’re looking for – especially when you’re exploring a new topic. You might be shopping or doing research for a project and are unsure how to get started. In situations like these, either AI Chat or Search could be good starting points,” the company explained.

“If you start by asking a few questions in AI Chat, the answers may inspire traditional searches to track down reviews, prices, or other primary sources. If you start with Search, you may want to switch to AI Chat for follow-up queries to help make sense of what you’ve read, or for quick, direct answers to new questions that weren’t covered in the web pages you saw.”

To accommodate that user workflow, DuckDuckGo has made AI Chat accessible through DuckDuckGo Private Search for quick access.

The launch of DuckDuckGo AI Chat comes at a time when the AI industry is facing increasing scrutiny over data privacy and usage. The service is a welcome addition for privacy-conscious individuals, following the recent launch of Venice AI by crypto entrepreneur Erik Voorhees. Venice AI features an uncensored AI chatbot and image generator that doesn’t require accounts and doesn’t retain data.

As the AI industry continues to evolve, it’s clear that privacy will remain a top concern for users. With the launch of DuckDuckGo AI Chat, the company is taking a significant step towards providing users with a private and secure platform for interacting with AI chatbots.

See also: AI pioneers turn whistleblowers and demand safeguards


AI pioneers turn whistleblowers and demand safeguards

OpenAI is facing a wave of internal strife and external criticism over its practices and the potential risks posed by its technology. 

In May, several high-profile employees departed from the company, including Jan Leike, the former head of OpenAI’s “super alignment” efforts to ensure advanced AI systems remain aligned with human values. Leike’s exit came shortly after OpenAI unveiled its new flagship GPT-4o model, which it touted as “magical” at its Spring Update event.

According to reports, Leike’s departure was driven by constant disagreements over security measures, monitoring practices, and the prioritisation of flashy product releases over safety considerations.

Leike’s exit has opened a Pandora’s box for the AI firm. Former OpenAI board members have come forward with allegations of psychological abuse levelled against CEO Sam Altman and the company’s leadership.

The growing internal turmoil at OpenAI coincides with mounting external concerns about the potential risks posed by generative AI technology like the company’s own language models. Critics have warned about the imminent existential threat of advanced AI surpassing human capabilities, as well as more immediate risks like job displacement and the weaponisation of AI for misinformation and manipulation campaigns.

In response, a group of current and former employees from OpenAI, Anthropic, DeepMind, and other leading AI companies have penned an open letter addressing these risks.

“We are current and former employees at frontier AI companies, and we believe in the potential of AI technology to deliver unprecedented benefits to humanity. We also understand the serious risks posed by these technologies,” the letter states.

“These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction. AI companies themselves have acknowledged these risks, as have governments across the world, and other AI experts.”

The letter, which has been signed by 13 employees and endorsed by AI pioneers Yoshua Bengio and Geoffrey Hinton, outlines four core demands aimed at protecting whistleblowers and fostering greater transparency and accountability around AI development:

  1. That companies will not enforce non-disparagement clauses or retaliate against employees for raising risk-related concerns.
  2. That companies will facilitate a verifiably anonymous process for employees to raise concerns to boards, regulators, and independent experts.
  3. That companies will support a culture of open criticism and allow employees to publicly share risk-related concerns, with appropriate protection of trade secrets.
  4. That companies will not retaliate against employees who share confidential risk-related information after other processes have failed.

“They and others have bought into the ‘move fast and break things’ approach and that is the opposite of what is needed for technology this powerful and this poorly understood,” said Daniel Kokotajlo, a former OpenAI employee who left due to concerns over the company’s values and lack of responsibility.

The demands come amid reports that OpenAI has forced departing employees to sign non-disclosure agreements preventing them from criticising the company or risk losing their vested equity. OpenAI CEO Sam Altman admitted being “embarrassed” by the situation but claimed the company had never actually clawed back anyone’s vested equity.

As the AI revolution charges forward, the internal strife and whistleblower demands at OpenAI underscore the growing pains and unresolved ethical quandaries surrounding the technology.

See also: OpenAI disrupts five covert influence operations


X now permits AI-generated adult content

Social media network X has updated its rules to formally permit users to share consensually produced AI-generated NSFW content, provided it is clearly labelled. This change aligns with previous experiments under Elon Musk’s leadership, which involved hosting adult content within specific communities.

“We believe that users should be able to create, distribute, and consume material related to sexual themes as long as it is consensually produced and distributed. Sexual expression, visual or written, can be a legitimate form of artistic expression,” X’s updated ‘adult content’ policy states.

The policy further elaborates: “We believe in the autonomy of adults to engage with and create content that reflects their own beliefs, desires, and experiences, including those related to sexuality. We balance this freedom by restricting exposure to adult content for children or adult users who choose not to see it.”

Users can mark their posts as containing sensitive media, ensuring that such content is restricted from users under 18 or those who haven’t provided their birth dates.

While X’s violent content rules have similar guidelines, the platform maintains a strict stance against excessively gory content and depictions of sexual violence. Explicit threats or content inciting or glorifying violence remain prohibited.

X’s decision to allow graphic content is aimed at enabling users to participate in discussions about current events, including sharing relevant images and videos. 

Although X has never outright banned porn, these new clauses could pave the way for developing services centred around adult content, potentially creating a competitor to services like OnlyFans and enhancing its revenue streams. This would further Musk’s vision of X becoming an “everything app,” similar to China’s WeChat.

A 2022 Reuters report, citing internal company documents, indicated that approximately 13% of posts on the platform contained adult content. This percentage has likely increased, especially with the proliferation of porn bots on X.

See also: Elon Musk’s xAI secures $6B to challenge OpenAI in AI race


OpenAI disrupts five covert influence operations

In the last three months, OpenAI has disrupted five covert influence operations (IO) that attempted to exploit the company’s models for deceptive activities online. As of May 2024, these campaigns have not shown a substantial increase in audience engagement or reach due to OpenAI’s services.

OpenAI claims its commitment to designing AI models with safety in mind has often thwarted the threat actors’ attempts to generate desired content. Additionally, the company says AI tools have enhanced the efficiency of OpenAI’s investigations.

Detailed threat reporting by distribution platforms and the open-source community has significantly contributed to combating IO. OpenAI is sharing these findings to promote information sharing and best practices among the broader community of stakeholders.

Disrupting covert IO

The disrupted operations used OpenAI’s models for a range of tasks, such as generating short comments, creating fake social media profiles, conducting open-source research, debugging simple code, and translating texts.

Specific operations disrupted include:

  • Bad Grammar: A previously unreported operation from Russia targeting Ukraine, Moldova, the Baltic States, and the US. This group used OpenAI’s models to debug code for running a Telegram bot and to create political comments in Russian and English, posted on Telegram.
  • Doppelganger: Another Russian operation generating comments in multiple languages on platforms like X and 9GAG, translating and editing articles, generating headlines, and converting news articles into Facebook posts.
  • Spamouflage: A Chinese network using OpenAI’s models for public social media activity research, generating texts in several languages, and debugging code for managing databases and websites.
  • International Union of Virtual Media (IUVM): An Iranian operation generating and translating long-form articles, headlines, and website tags, published on a linked website.
  • Zero Zeno: A commercial company in Israel, with operations generating articles and comments posted across multiple platforms, including Instagram, Facebook, X, and affiliated websites.

The content posted by these operations focused on various issues, including Russia’s invasion of Ukraine, the Gaza conflict, Indian elections, European and US politics, and criticisms of the Chinese government.

Despite these efforts, none of these operations showed a significant increase in audience engagement attributable to OpenAI’s models. On Brookings’ Breakout Scale, which assesses the impact of covert IO, none of the five operations scored higher than a 2, indicating activity on multiple platforms but no breakout into authentic communities.

Attacker trends

Investigations into these influence operations revealed several trends:

  • Content generation: Threat actors used OpenAI’s services to generate large volumes of text with fewer language errors than human operators could achieve alone.
  • Mixing old and new: AI was used alongside traditional formats, such as manually written texts or copied memes.
  • Faking engagement: Some networks generated replies to their own posts to create the appearance of engagement, although none managed to attract authentic engagement.
  • Productivity gains: Threat actors used AI to enhance productivity, summarising social media posts and debugging code.

Defensive trends

OpenAI’s investigations benefited from industry sharing and open-source research. Defensive measures include:

  • Defensive design: OpenAI’s safety systems imposed friction on threat actors, often preventing them from generating the desired content.
  • AI-enhanced investigation: AI-powered tools improved the efficiency of detection and analysis, reducing investigation times from weeks or months to days.
  • Distribution matters: IO content, like traditional content, must be distributed effectively to reach an audience. Despite their efforts, none of the disrupted operations managed substantial engagement.
  • Importance of industry sharing: Sharing threat indicators with industry peers increased the impact of OpenAI’s disruptions. The company benefited from years of open-source analysis by the wider research community.
  • The human element: Despite using AI, threat actors were prone to human error, such as publishing refusal messages from OpenAI’s models on their social media and websites.

OpenAI says it remains dedicated to developing safe and responsible AI. This involves designing models with safety in mind and proactively intervening against malicious use.

While admitting that detecting and disrupting multi-platform abuses like covert influence operations is challenging, OpenAI claims it’s committed to mitigating the dangers.

(Photo by Chris Yang)

See also: EU launches office to implement AI Act and foster innovation


Nicholas Brackney, Dell: How we leverage a four-pillar AI strategy

Dell is deeply embedded in the AI landscape, leveraging a comprehensive four-pillar strategy to integrate the technology across its products and services.

Nicholas Brackney, Senior Consultant in Product Marketing at Dell, discussed the company’s AI initiatives ahead of AI & Big Data Expo North America.

Dell’s AI strategy is structured around four core principles: AI-In, AI-On, AI-For, and AI-With:

  1. AI-In: “Embedding AI capabilities in our offerings and services drives speed, intelligence, and automation,” Brackney explained. This ensures that AI is a fundamental component of Dell’s offerings.
  2. AI-On: The company enables customers to run powerful AI workloads on its comprehensive portfolio of solutions, from desktops to data centres, across clouds, and at the edge.
  3. AI-For: AI innovation and tooling are applied to Dell’s own business to enhance operations, with best practices shared with customers.
  4. AI-With: Dell collaborates with strategic partners within an open AI ecosystem to simplify and enhance the AI experience.

Dell is well-positioned to help customers navigate AI workloads, emphasising choice and adaptability through the various evolutions of emerging technology. Brackney highlighted Dell’s commitment to serving customers from the early stages of AI adoption to achieving AI at scale.

“We’ve always believed in providing choice and have been doing it through the various evolutions of emerging technology, including AI, and understanding the challenges that come with them,” explained Brackney. “We fully leverage our unique operating model to serve customers in the early innings of AI to a future of AI at scale.”

Looking to the future, Dell is particularly excited about the potential of AI PCs.

“We know organisations and their knowledge workers are excited about AI, and they want to fit it into all their workflows,” Brackney said. Dell is focused on integrating AI into software and ensuring it runs efficiently on the right systems, enhancing end-to-end customer journeys in AI.

Ethical concerns in AI deployment are also a priority for Dell. Addressing issues such as deepfakes, transparency, and bias, Brackney emphasised the importance of a shared, secure, and sustainable approach to AI development.

“We believe in a shared, secure, and sustainable approach. By getting the foundations right at their core, we can eliminate some of the greatest risks associated with AI and work to ensure it acts as a force for good,” Brackney explained.

User data privacy in AI-driven products is another critical focus area. Brackney outlined Dell’s strategy of integrating AI with existing security investments without introducing new risks. Dell offers a suite of secure products, comprehensive data protection, advanced cybersecurity features, and global support services to safeguard user data.

On the topic of job displacement due to AI, Brackney underscored that Dell views AI as augmenting human potential rather than replacing it.

“The roles may change but the human element will always be key,” Brackney stated. “At Dell, we encourage our team members to understand, explore, and, where appropriate, use tools based on AI to learn, evolve, and enhance the overall work experience.”

Looking ahead, Brackney envisions a transformative role for AI within Dell and the tech industry. “We see customers in every industry wanting to become leaders in AI because it is critical to their organisation’s innovation, growth, and productivity,” he noted.

Dell aims to support this evolution by providing the necessary architectures, frameworks, and services to assist its customers on this transformative journey.

Dell is a key sponsor of this year’s AI & Big Data Expo. Check out Dell’s keynote presentation From Data Novice to Data Champion – Cultivating Data Literacy Across the Organization and swing by Dell’s booth at stand #66 to hear about AI from the company’s experts.


EU launches office to implement AI Act and foster innovation

The European Union has launched a new office dedicated to overseeing the implementation of its landmark AI Act, which is regarded as one of the most comprehensive AI regulations in the world. This new initiative adopts a risk-based approach, imposing stringent regulations on higher-risk AI applications to ensure their safe and ethical deployment.

The primary goal of this office is to promote the “future development, deployment and use” of AI technologies, aiming to harness their societal and economic benefits while mitigating associated risks. By focusing on innovation and safety, the office seeks to position the EU as a global leader in AI regulation and development.

According to Margrethe Vestager, the EU competition chief, the new office will play a “key role” in implementing the AI Act, particularly with regard to general-purpose AI models. She stated, “Together with developers and a scientific community, the office will evaluate and test general-purpose AI to ensure that AI serves us as humans and upholds our European values.”

Sridhar Iyengar, Managing Director for Zoho Europe, welcomed the establishment of the AI office, noting, “The establishment of the AI office in the European Commission to play a key role with the implementation of the EU AI Act is a welcome sign of progress, and it is encouraging to see the EU positioning itself as a global leader in AI regulation. We hope to continue to see collaboration between governments, businesses, academics and industry experts to guide on safe use of AI to boost business growth.”

Iyengar highlighted the dual nature of AI’s impact on businesses, pointing out both its benefits and concerns. He emphasised the importance of adhering to best practice guidance and legislative guardrails to ensure safe and ethical AI adoption.

“AI can drive innovation in business tools, helping to improve fraud detection, forecasting, and customer data analysis to name a few. These benefits not only have the potential to elevate customer experience but can increase efficiency, present insights, and suggest actions to drive further success,” Iyengar said.

The office will be staffed by more than 140 individuals, including technology specialists, administrative assistants, lawyers, policy specialists, and economists. It will consist of various units focusing on regulation and compliance, as well as safety and innovation, reflecting the multifaceted approach needed to govern AI effectively.

Rachael Hays, Transformation Director for Definia, part of The IN Group, commented: “The establishment of a dedicated AI Office within the European Commission underscores the EU’s commitment to both innovation and regulation which is undoubtedly crucial in this rapidly evolving AI landscape.”

Hays also pointed out the potential for workforce upskilling that this initiative provides. She referenced findings from their Tech and the Boardroom research, which revealed that over half of boardroom leaders view AI as the biggest direct threat to their organisations.

“This initiative directly addresses these fears as employees across various sectors are given the opportunity to adapt and thrive in an AI-driven world. The AI Office offers promising hope and guidance in developing economic benefits while mitigating risks associated with AI technology, something we should all get on board with,” she added.

As the EU takes these steps towards comprehensive AI governance, the office’s work will be pivotal in driving forward both innovation and safety in the field.

(Photo by Sara Kurfeß)

See also: Elon Musk’s xAI secures $6B to challenge OpenAI in AI race


GPT-4o delivers human-like AI interaction with text, audio, and vision integration

OpenAI has launched its new flagship model, GPT-4o, which seamlessly integrates text, audio, and visual inputs and outputs, promising to enhance the naturalness of machine interactions.

GPT-4o, where the “o” stands for “omni,” is designed to cater to a broader spectrum of input and output modalities. “It accepts as input any combination of text, audio, and image and generates any combination of text, audio, and image outputs,” OpenAI announced.

The model can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, similar to human response times in conversation.

Pioneering capabilities

The introduction of GPT-4o marks a leap from its predecessors by processing all inputs and outputs through a single neural network. This approach enables the model to retain critical information and context that were previously lost in the separate model pipeline used in earlier versions.

Prior to GPT-4o, ‘Voice Mode’ could handle audio interactions with latencies of 2.8 seconds for GPT-3.5 and 5.4 seconds for GPT-4. The previous setup involved three distinct models: one for transcribing audio to text, another for textual responses, and a third for converting text back to audio. This segmentation led to loss of nuances such as tone, multiple speakers, and background noise.

As an integrated solution, GPT-4o boasts notable improvements in vision and audio understanding. It can perform more complex tasks such as harmonising songs, providing real-time translations, and even generating outputs with expressive elements like laughter and singing. Examples of its broad capabilities include preparing for interviews, translating languages on the fly, and generating customer service responses.
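The architectural difference is easiest to see side by side. The sketch below is purely schematic: every function is an illustrative stub standing in for a model, not a real API.

```python
# Schematic comparison only; all functions are illustrative stubs, not real APIs.

def transcribe(audio: bytes) -> str:    # stands in for a speech-to-text model
    return "transcribed text"

def generate(text: str) -> str:         # stands in for a text-only language model
    return f"reply to: {text}"

def synthesise(text: str) -> bytes:     # stands in for a text-to-speech model
    return text.encode()

def omni_model(audio: bytes) -> bytes:  # stands in for a single multimodal network
    return b"audio reply, tone and context preserved"

def old_voice_mode(audio_in: bytes) -> bytes:
    # Three hops: each boundary adds latency and discards signal the next
    # model never sees, such as tone, multiple speakers, and background noise.
    return synthesise(generate(transcribe(audio_in)))

def gpt4o_voice_mode(audio_in: bytes) -> bytes:
    # One network maps audio tokens directly to audio tokens, so
    # paralinguistic context can survive end to end.
    return omni_model(audio_in)
```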

Nathaniel Whittemore, Founder and CEO of Superintelligent, commented: “Product announcements are going to inherently be more divisive than technology announcements because it’s harder to tell if a product is going to be truly different until you actually interact with it. And especially when it comes to a different mode of human-computer interaction, there is even more room for diverse beliefs about how useful it’s going to be.

“That said, the fact that there wasn’t a GPT-4.5 or GPT-5 announced is also distracting people from the technological advancement that this is a natively multimodal model. It’s not a text model with a voice or image addition; it is a multimodal token in, multimodal token out. This opens up a huge array of use cases that are going to take some time to filter into the consciousness.”

Performance and safety

GPT-4o matches GPT-4 Turbo’s performance on English text and coding tasks but significantly outperforms it in non-English languages, making it a more inclusive and versatile model. It sets a new benchmark in reasoning with a high score of 88.7% on 0-shot CoT MMLU (general knowledge questions) and 87.2% on the 5-shot no-CoT MMLU.

The model also excels in audio and translation benchmarks, surpassing previous state-of-the-art models like Whisper-v3. In multilingual and vision evaluations, it demonstrates superior performance, enhancing OpenAI’s multilingual, audio, and vision capabilities.

OpenAI has built robust safety measures into GPT-4o by design, filtering training data and refining the model’s behaviour through post-training safeguards. The model has been assessed under OpenAI’s Preparedness Framework and complies with the company’s voluntary commitments. Evaluations in areas like cybersecurity, persuasion, and model autonomy indicate that GPT-4o does not exceed a ‘Medium’ risk level in any category.

Further safety assessments involved extensive external red teaming with over 70 experts in various domains, including social psychology, bias, fairness, and misinformation. This comprehensive scrutiny aims to mitigate risks introduced by the new modalities of GPT-4o.

Availability and future integration

Starting today, GPT-4o’s text and image capabilities are available in ChatGPT—including a free tier and extended features for Plus users. A new Voice Mode powered by GPT-4o will enter alpha testing within ChatGPT Plus in the coming weeks.

Developers can access GPT-4o through the API for text and vision tasks, benefiting from its doubled speed, halved price, and enhanced rate limits compared to GPT-4 Turbo.
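As a rough sketch of what that looks like in practice, the snippet below sends a combined text-and-image request through the OpenAI Python SDK’s chat completions interface; the image URL is a placeholder.

```python
# Minimal sketch: text + image input to GPT-4o via the OpenAI Python SDK.
# Requires OPENAI_API_KEY in the environment; the image URL is a placeholder.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is in this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```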

OpenAI plans to expand GPT-4o’s audio and video functionalities to a select group of trusted partners via the API, with broader rollout expected in the near future. This phased release strategy aims to ensure thorough safety and usability testing before making the full range of capabilities publicly available.

“It’s hugely significant that they’ve made this model available for free to everyone, as well as making the API 50% cheaper. That is a massive increase in accessibility,” explained Whittemore.

OpenAI invites community feedback to continuously refine GPT-4o, emphasising the importance of user input in identifying and closing gaps where GPT-4 Turbo might still outperform.

(Image Credit: OpenAI)

See also: OpenAI takes steps to boost AI-generated content transparency


OpenAI takes steps to boost AI-generated content transparency

OpenAI is joining the Coalition for Content Provenance and Authenticity (C2PA) steering committee and will integrate the open standard’s metadata into its generative AI models to increase transparency around generated content.

The C2PA standard allows digital content to be certified with metadata proving its origins, whether created entirely by AI, edited using AI tools, or captured traditionally. OpenAI has already started adding C2PA metadata to images from its latest DALL-E 3 model output in ChatGPT and the OpenAI API. The metadata will be integrated into OpenAI’s upcoming video generation model Sora when launched more broadly.

“People can still create deceptive content without this information (or can remove it), but they cannot easily fake or alter this information, making it an important resource to build trust,” OpenAI explained.
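C2PA defines its own detailed manifest and certificate format, but the underlying idea, binding a signature to a hash of the content plus its provenance claims so that alteration is detectable even though outright removal remains possible, can be illustrated with standard-library primitives. The sketch below is a simplified analogue, not the actual C2PA specification or OpenAI’s implementation; real C2PA signing uses certificate-based signatures rather than a shared secret.

```python
# Simplified analogue of signed provenance metadata; not the real C2PA format.
import hashlib
import hmac
import json

SIGNING_KEY = b"issuer-secret"  # real C2PA uses certificate-based signatures

def attach_manifest(content: bytes, claims: dict) -> dict:
    """Bind provenance claims to the content's hash and sign the pair."""
    manifest = {"content_sha256": hashlib.sha256(content).hexdigest(), **claims}
    blob = json.dumps(manifest, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, blob, hashlib.sha256).hexdigest()
    return {"manifest": manifest, "signature": signature}

def verify_manifest(content: bytes, signed: dict) -> bool:
    """Accept only if the signature is valid and the content hash matches."""
    blob = json.dumps(signed["manifest"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, blob, hashlib.sha256).hexdigest()
    signature_ok = hmac.compare_digest(expected, signed["signature"])
    hash_ok = signed["manifest"]["content_sha256"] == hashlib.sha256(content).hexdigest()
    return signature_ok and hash_ok

image = b"...image bytes..."
signed = attach_manifest(image, {"generator": "DALL-E 3", "ai_generated": True})
assert verify_manifest(image, signed)             # intact content verifies
assert not verify_manifest(image + b"x", signed)  # any edit breaks verification
```

As the quote above notes, the value of such metadata lies in making undetected alteration difficult, not in making removal impossible.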

The move comes amid growing concerns about the potential for AI-generated content to mislead voters ahead of major elections in the US, UK, and other countries this year. Authenticating AI-created media could help combat deepfakes and other manipulated content aimed at disinformation campaigns.

While technical measures help, OpenAI acknowledges that enabling content authenticity in practice requires collective action from platforms, creators, and content handlers to retain metadata for end consumers.

In addition to C2PA integration, OpenAI is developing new provenance methods like tamper-resistant watermarking for audio and image detection classifiers to identify AI-generated visuals.

OpenAI has opened applications for access to its DALL-E 3 image detection classifier through its Researcher Access Program. The tool predicts the likelihood an image originated from one of OpenAI’s models.

“Our goal is to enable independent research that assesses the classifier’s effectiveness, analyses its real-world application, surfaces relevant considerations for such use, and explores the characteristics of AI-generated content,” the company said.

Internal testing shows high accuracy in distinguishing non-AI images from DALL-E 3 visuals: around 98% of DALL-E images are correctly identified, while less than 0.5% of non-AI images are incorrectly flagged. However, the classifier struggles more to differentiate between images produced by DALL-E and those from other generative AI models.
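Those two rates are enough to estimate how trustworthy a positive flag would be in practice. The worked example below applies Bayes’ rule to the stated figures; the 10% share of AI-generated images in the corpus is an illustrative assumption, not an OpenAI number.

```python
# Worked example: precision of the DALL-E 3 classifier at OpenAI's stated rates.
# The 10% base rate is an illustrative assumption, not an OpenAI figure.
tpr = 0.98        # ~98% of DALL-E 3 images correctly identified
fpr = 0.005       # <0.5% of non-AI images incorrectly flagged
base_rate = 0.10  # assumed share of DALL-E 3 images in the corpus

flagged_ai = tpr * base_rate          # true positives
flagged_real = fpr * (1 - base_rate)  # false positives
precision = flagged_ai / (flagged_ai + flagged_real)

print(f"Precision: {precision:.1%}")  # ~95.6% of flags are genuine at this mix
```

At a much lower base rate, say 1% AI-generated content, precision falls to roughly 66%, which is one reason such classifiers are framed as research tools rather than definitive arbiters.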

OpenAI has also incorporated watermarking into its Voice Engine custom voice model, currently in limited preview.

The company believes increased adoption of provenance standards will lead to metadata accompanying content through its full lifecycle to fill “a crucial gap in digital content authenticity practices.”

OpenAI is joining Microsoft to launch a $2 million societal resilience fund to support AI education and understanding, including through AARP, International IDEA, and the Partnership on AI.

“While technical solutions like the above give us active tools for our defences, effectively enabling content authenticity in practice will require collective action,” OpenAI states.

“Our efforts around provenance are just one part of a broader industry effort – many of our peer research labs and generative AI companies are also advancing research in this area. We commend these endeavours—the industry must collaborate and share insights to enhance our understanding and continue to promote transparency online.”

(Photo by Marc Sendra Martorell)

See also: Chuck Ros, SoftServe: Delivering transformative AI solutions responsibly

