AI Privacy News | AI Privacy Issues & Solutions | AI News

Musk ends OpenAI lawsuit while slamming Apple’s ChatGPT plans
Wed, 12 Jun 2024

Elon Musk has dropped his lawsuit against OpenAI, the company he co-founded in 2015. Court filings from the Superior Court of California reveal that Musk called off the legal action on June 11th, just a day before an informal conference was scheduled to discuss the discovery process.

Musk had initially sued OpenAI in March 2024, alleging breach of contracts, unfair business practices, and failure in fiduciary duty. He claimed that his contributions to the company were made “in exchange for and in reliance on promises that those assets were irrevocably dedicated to building AI for public benefit, with only safety as a countervailing concern.”

The lawsuit sought remedies for “breach of contract, promissory estoppel, breach of fiduciary duty, unfair business practices, and accounting,” as well as specific performance, restitution, and damages.

However, Musk’s filings to withdraw the case provided no explanation for abandoning the lawsuit. OpenAI had previously called Musk’s claims “incoherent” and argued that his inability to produce a contract made his breach claims difficult to prove, noting that documents provided by Musk “contradict his allegations as to the alleged terms of the agreement.”

The withdrawal of the lawsuit comes at a time when Musk is strongly opposing Apple’s plans to integrate ChatGPT into its operating systems.

During Apple’s keynote event announcing Apple Intelligence for iOS 18, iPadOS 18, and macOS Sequoia, Musk threatened to ban Apple devices from his companies, calling the integration “an unacceptable security violation.”

Despite assurances from Apple and OpenAI that user data would only be shared with explicit consent and that interactions would be secure, Musk questioned Apple’s ability to ensure data security, stating, “Apple has no clue what’s actually going on once they hand your data over to OpenAI. They’re selling you down the river.”

Since bringing the lawsuit against OpenAI, Musk has also created his own AI company, xAI, and secured over $6 billion in funding for his plans to advance the Grok chatbot on his social network, X.

While Musk’s reasoning for dropping the OpenAI lawsuit remains unclear, his actions suggest a potential shift in focus towards advancing his own AI endeavours while continuing to vocalise his criticism of OpenAI through social media rather than the courts.

See also: DuckDuckGo releases portal giving private access to AI models

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

DuckDuckGo releases portal giving private access to AI models
Fri, 07 Jun 2024

DuckDuckGo has released a platform that allows users to interact with popular AI chatbots privately, ensuring that their data remains secure and protected.

The service, accessible at Duck.ai, is globally available and features a light and clean user interface. Users can choose from four AI models: two closed-source models and two open-source models. The closed-source models are OpenAI’s GPT-3.5 Turbo and Anthropic’s Claude 3 Haiku, while the open-source models are Meta’s Llama 3 70B and Mistral AI’s Mixtral 8x7B.

What sets DuckDuckGo AI Chat apart is its commitment to user privacy. Neither DuckDuckGo nor the chatbot providers can use user data to train their models, ensuring that interactions remain private and anonymous. DuckDuckGo also strips away metadata, such as server or IP addresses, so that queries appear to originate from the company itself rather than individual users.

The company has agreements in place with all model providers to ensure that any saved chats are completely deleted within 30 days, and that none of the chats made on the platform can be used to train or improve the models. This makes preserving privacy easier than changing the privacy settings for each service.

In an era where online services are increasingly hungry for user data, DuckDuckGo’s AI Chat service is a breath of fresh air. The company’s commitment to privacy is a direct response to the growing concerns about data collection and usage in the AI industry. By providing a private and anonymous platform for users to interact with AI chatbots, DuckDuckGo is setting a new standard for the industry.

DuckDuckGo’s AI service is free to use within a daily limit, and the company is considering launching a paid tier to reduce or eliminate these limits. The service is designed to be a complementary partner to its search engine, allowing users to switch between search and AI chat for a more comprehensive search experience.

“We view AI Chat and search as two different but powerful tools to help you find what you’re looking for – especially when you’re exploring a new topic. You might be shopping or doing research for a project and are unsure how to get started. In situations like these, either AI Chat or Search could be good starting points,” the company explained.

“If you start by asking a few questions in AI Chat, the answers may inspire traditional searches to track down reviews, prices, or other primary sources. If you start with Search, you may want to switch to AI Chat for follow-up queries to help make sense of what you’ve read, or for quick, direct answers to new questions that weren’t covered in the web pages you saw.”

To accommodate that user workflow, DuckDuckGo has made AI Chat accessible through DuckDuckGo Private Search for quick access.

The launch of DuckDuckGo AI Chat comes at a time when the AI industry is facing increasing scrutiny over data privacy and usage. The service is a welcome addition for privacy-conscious individuals, joining the recent launch of Venice AI by crypto entrepreneur Erik Voorhees. Venice AI features an uncensored AI chatbot and image generator that doesn’t require accounts and doesn’t retain data.

As the AI industry continues to evolve, it’s clear that privacy will remain a top concern for users. With the launch of DuckDuckGo AI Chat, the company is taking a significant step towards providing users with a private and secure platform for interacting with AI chatbots.

See also: AI pioneers turn whistleblowers and demand safeguards

Ethical, trust and skill barriers hold back generative AI progress in EMEA
Mon, 20 May 2024

76% of consumers in EMEA think AI will have a significant impact over the next five years, yet 47% question the value that AI will bring and 41% are worried about its applications.

This is according to research from enterprise analytics AI firm Alteryx.

Since the release of ChatGPT by OpenAI in November 2022, there has been significant buzz about the transformative potential of generative AI, with many considering it one of the most revolutionary technologies of our time. 

While a significant 79% of organisations report that generative AI contributes positively to business, it is evident that a gap needs to be addressed to demonstrate AI’s value to consumers in both their personal and professional lives. According to the ‘Market Research: Attitudes and Adoption of Generative AI’ report, which surveyed 690 IT business leaders and 1,100 members of the general public in EMEA, key issues of trust, ethics and skills are prevalent, potentially impeding the successful deployment and broader acceptance of generative AI.

The impact of misinformation, inaccuracies, and AI hallucinations

AI hallucinations – where AI generates incorrect or illogical outputs – are a significant concern, and trusting what generative AI produces is a substantial issue for both business leaders and consumers. Over a third of the public are anxious about AI’s potential to generate fake news (36%) and its misuse by hackers (42%), while half of business leaders report that their organisations are grappling with misinformation produced by generative AI.

Moreover, the reliability of information provided by generative AI has been questioned. Feedback from the general public indicates that half of the data received from AI was inaccurate, and 38% perceived it as outdated. On the business front, concerns include generative AI infringing on copyright or intellectual property rights (40%), and producing unexpected or unintended outputs (36%).

A critical trust issue for businesses (62%) and the public (74%) revolves around AI hallucinations. For businesses, the challenge involves applying generative AI to appropriate use cases, supported by the right technology and safety measures, to mitigate these concerns. Close to half of the consumers (45%) are advocating for regulatory measures on AI usage.

Ethical concerns and risks persist in the use of generative AI

In addition to these challenges, there are strong and similar sentiments on ethical concerns and the risks associated with generative AI among both business leaders and consumers. More than half of the general public (53%) oppose the use of generative AI in making ethical decisions. Meanwhile, 41% of business respondents are concerned about its application in critical decision-making areas. There are distinctions in the specific areas where its use is discouraged; consumers notably oppose its use in politics (46%), and businesses are cautious about its deployment in healthcare (40%).

These concerns find some validation in the research findings, which highlight worrying gaps in organisational practices. Only a third of leaders confirmed that their businesses ensure the data used to train generative AI is diverse and unbiased. Furthermore, only 36% have set ethical guidelines, and 52% have established data privacy and security policies for generative AI applications.

This lack of emphasis on data integrity and ethical considerations puts firms at risk. 63% of business leaders cite ethics as their major concern with generative AI, closely followed by data-related issues (62%). This scenario emphasises the importance of better governance to create confidence and mitigate risks related to how employees use generative AI in the workplace. 

The rise of generative AI skills and the need for enhanced data literacy

As generative AI evolves, establishing relevant skill sets and enhancing data literacy will be key to realising its full potential. Consumers are increasingly using generative AI technologies in various scenarios, including information retrieval, email communication, and skill acquisition. Business leaders claim they use generative AI for data analysis, cybersecurity, and customer support. Despite the reported success of pilot projects, several challenges remain, including security problems, data privacy issues, and output quality and reliability.

Trevor Schulze, Alteryx’s CIO, emphasised the necessity for both enterprises and the general public to fully understand the value of AI and address common concerns as they navigate the early stages of generative AI adoption.

He noted that addressing trust issues, ethical concerns, skills shortages, fears of privacy invasion, and algorithmic bias are critical tasks. Schulze underlined the necessity for enterprises to expedite their data journey, adopt robust governance, and allow non-technical individuals to access and analyse data safely and reliably, addressing privacy and bias concerns in order to genuinely profit from this ‘game-changing’ technology.

FT and OpenAI ink partnership amid web scraping criticism
Mon, 29 Apr 2024

The Financial Times and OpenAI have announced a strategic partnership and licensing agreement that will integrate the newspaper’s journalism into ChatGPT and collaborate on developing new AI products for FT readers. However, just because OpenAI is cozying up to publishers doesn’t mean it’s not still scraping information from the web without permission.

Through the deal, ChatGPT users will be able to see selected attributed summaries, quotes, and rich links to FT journalism in response to relevant queries. Additionally, the FT became a customer of ChatGPT Enterprise earlier this year, providing access for all employees to familiarise themselves with the technology and benefit from its potential productivity gains.

“This is an important agreement in a number of respects,” said John Ridding, FT Group CEO. “It recognises the value of our award-winning journalism and will give us early insights into how content is surfaced through AI.”

In 2023, technology companies faced numerous lawsuits and widespread criticism for allegedly using copyrighted material from artists and publishers to train their AI models without proper authorisation.

OpenAI, in particular, drew significant backlash for training its GPT models on data obtained from the internet without obtaining consent from the respective content creators. This issue escalated to the point where The New York Times filed a lawsuit against OpenAI and Microsoft last year, accusing them of copyright infringement.

While emphasising the FT’s commitment to human journalism, Ridding noted the agreement would broaden the reach of its newsroom’s work while deepening the understanding of reader interests.

“Apart from the benefits to the FT, there are broader implications for the industry. It’s right, of course, that AI platforms pay publishers for the use of their material. OpenAI understands the importance of transparency, attribution, and compensation – all essential for us,” explained Ridding.

Earlier this month, The New York Times reported that OpenAI was utilising scripts from YouTube videos to train its AI models. According to the publication, this practice violates copyright laws, as content creators who upload videos to YouTube retain the copyright ownership of the material they produce.

However, OpenAI maintains that its use of online content falls under the fair use doctrine. The company, along with numerous other technology firms, argues that their large language models (LLMs) transform the information gathered from the internet into an entirely new and distinct creation.

In January, OpenAI asserted to a UK parliamentary committee that it would be “impossible” to develop today’s leading AI systems without using vast amounts of copyrighted data.

Brad Lightcap, COO of OpenAI, expressed his enthusiasm about the FT partnership: “Our partnership and ongoing dialogue with the FT is about finding creative and productive ways for AI to empower news organisations and journalists, and enrich the ChatGPT experience with real-time, world-class journalism for millions of people around the world.”

This agreement between OpenAI and the Financial Times is the most recent in a series of new collaborations that OpenAI has forged with major news publishers worldwide.

While the financial details of these contracts were not revealed, OpenAI’s recent partnerships with publishers will enable the company to continue training its algorithms on web content, with the crucial difference that it has now obtained the necessary permissions to do so.

Ridding said the FT values “the opportunity to be inside the development loop as people discover content in new ways.” He acknowledged the potential for significant advancements and challenges with transformative technologies like AI but emphasised, “what’s never possible is turning back time.”

“It’s important for us to represent quality journalism as these products take shape – with the appropriate safeguards in place to protect the FT’s content and brand,” Ridding added.

The FT has embraced new technologies throughout its history. “We’ll continue to operate with both curiosity and vigilance as we navigate this next wave of change,” Ridding concluded.

(Photo by Utsav Srestha)

See also: OpenAI faces complaint over fictional outputs

OpenAI faces complaint over fictional outputs
Mon, 29 Apr 2024

European data protection advocacy group noyb has filed a complaint against OpenAI over the company’s inability to correct inaccurate information generated by ChatGPT. The group alleges that OpenAI’s failure to ensure the accuracy of personal data processed by the service violates the General Data Protection Regulation (GDPR) in the European Union.

“Making up false information is quite problematic in itself. But when it comes to false information about individuals, there can be serious consequences,” said Maartje de Graaf, Data Protection Lawyer at noyb. 

“It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.”

The GDPR requires that personal data be accurate, and individuals have the right to rectification if data is inaccurate, as well as the right to access information about the data processed and its sources. However, OpenAI has openly admitted that it cannot correct incorrect information generated by ChatGPT or disclose the sources of the data used to train the model.

“Factual accuracy in large language models remains an area of active research,” OpenAI has argued.

The advocacy group highlights a New York Times report that found chatbots like ChatGPT “invent information at least 3 percent of the time – and as high as 27 percent.” In the complaint against OpenAI, noyb cites an example where ChatGPT repeatedly provided an incorrect date of birth for the complainant, a public figure, despite requests for rectification.

“Despite the fact that the complainant’s date of birth provided by ChatGPT is incorrect, OpenAI refused his request to rectify or erase the data, arguing that it wasn’t possible to correct data,” noyb stated.

OpenAI claimed it could filter or block data on certain prompts, such as the complainant’s name, but not without blocking all information about the individual. The company also failed to adequately respond to the complainant’s access request, which the GDPR requires companies to fulfil.

“The obligation to comply with access requests applies to all companies. It is clearly possible to keep records of training data that was used to at least have an idea about the sources of information,” said de Graaf. “It seems that with each ‘innovation,’ another group of companies thinks that its products don’t have to comply with the law.”

European privacy watchdogs have already scrutinised ChatGPT’s inaccuracies, with the Italian Data Protection Authority imposing a temporary restriction on OpenAI’s data processing in March 2023 and the European Data Protection Board establishing a task force on ChatGPT.

In its complaint, noyb is asking the Austrian Data Protection Authority to investigate OpenAI’s data processing and measures to ensure the accuracy of personal data processed by its large language models. The advocacy group also requests that the authority order OpenAI to comply with the complainant’s access request, bring its processing in line with the GDPR, and impose a fine to ensure future compliance.

You can read the full complaint here (PDF).

(Photo by Eleonora Francesca Grotto)

See also: Igor Jablokov, Pryon: Building a responsible AI future

EU approves controversial AI Act to mixed reactions
Wed, 13 Mar 2024

The European Parliament today approved the AI Act, the first ever regulatory framework governing the use of AI systems. The legislation passed with an overwhelming majority of 523 votes in favour, 46 against and 49 abstentions.

“This is a historic day,” said Italian lawmaker Brando Benifei, co-lead on the AI Act. “We have the first regulation in the world which puts a clear path for safe and human-centric development of AI.”

The AI Act will categorise AI systems into four tiers based on their potential risk to society. High-risk applications like self-driving cars will face strict requirements before being allowed on the EU market. Lower-risk systems will have fewer obligations.

“The main point now will be implementation and compliance by businesses and institutions,” Benifei stated. “We are also working on further AI legislation for workplace conditions.”

His counterpart, Dragoş Tudorache of Romania, said the EU aims to promote these pioneering rules globally. “We have to be open to work with others on how to build governance with like-minded parties.”

The general AI rules take effect in May 2025, while obligations for high-risk systems kick in after three years. National oversight agencies will monitor compliance.

Differing viewpoints on impact

Reaction was mixed on whether the Act properly balances innovation with protecting rights.

Curtis Wilson, a data scientist at Synopsys, believes it will build public trust: “The strict rules and punishing fines will deter careless developers, and help customers be more confident in using AI systems…Ensuring all AI developers adhere to these standards is to everyone’s benefit.”

However, Mher Hakobyan from Amnesty International criticised the legislation as favouring industry over human rights: “It is disappointing that the EU chose to prioritise interests of industry and law enforcement over protecting people…It lacks proper transparency and accountability provisions, which will likely exacerbate abuses.”

Companies now face the challenge of overhauling practices to comply.

Marcus Evans, a data privacy lawyer, advised: “Businesses need to create and maintain robust AI governance to make the best use of the technology and ensure compliance with the new regime…They need to start preparing now to not fall foul of the rules.”

After years of negotiations, the AI Act signals the EU intends to lead globally on this transformative technology. But dissenting voices show challenges remain in finding the right balance.

(Photo by Tabrez Syed on Unsplash)

See also: OpenAI calls Elon Musk’s lawsuit claims ‘incoherent’

Reddit is reportedly selling data for AI training
Mon, 19 Feb 2024

Reddit has negotiated a content licensing deal to allow its data to be used for training AI models, according to a Bloomberg report.

Just ahead of a potential $5 billion initial public offering (IPO) debut in March, Reddit has reportedly signed a $60 million deal with an undisclosed major AI company. This move could be seen as a last-minute effort to showcase potential revenue streams in the rapidly growing AI industry to prospective investors.

Although Reddit has yet to confirm the deal, the decision could have significant implications. If true, it would mean that Reddit’s vast trove of user-generated content – including posts from popular subreddits, comments from both prominent and obscure users, and discussions on a wide range of topics – could be used to train and enhance existing large language models (LLMs) or provide the foundation for the development of new generative AI systems.

However, this decision by Reddit may not sit well with its user base, as the company has faced increasing opposition from its community regarding its recent business decisions.

Last year, when Reddit announced plans to start charging for access to its application programming interfaces (APIs), thousands of Reddit forums temporarily shut down in protest. Days later, a group of Reddit hackers threatened to release previously stolen site data unless the company reversed the API plan or paid a ransom of $4.5 million.

Reddit has recently made other controversial decisions, such as removing years of private chat logs and messages from users’ accounts. The platform also implemented new automatic moderation features and removed the option for users to turn off personalised advertising, fuelling additional discontent among its users.

This latest reported deal to sell Reddit’s data for AI training could generate even more backlash from users, as the debate over the ethics of using public data, art, and other human-created content to train AI systems continues to intensify across various industries and platforms.

(Photo by Brett Jordan on Unsplash)

See also: Amazon trains 980M parameter LLM with ’emergent abilities’


The post Reddit is reportedly selling data for AI training appeared first on AI News.

AI-generated Biden robocall urges Democrats not to vote https://www.artificialintelligence-news.com/2024/01/23/ai-generated-biden-robocall-urges-democrats-not-to-vote/ Tue, 23 Jan 2024 17:04:04 +0000

An AI-generated robocall impersonating President Joe Biden has urged Democratic Party members not to vote in the upcoming primary on Tuesday.

Kathy Sullivan – a prominent New Hampshire Democrat and former state party chair – is calling for the prosecution of those responsible, describing the incident as “an attack on democracy.”

The call began with a dismissive “What a bunch of malarkey,” a phrase that’s become associated with the 81-year-old president. It then went on to discourage voting in the primary, suggesting that Democrats should save their votes for the November election.

Sullivan, an attorney, believes the call may violate several laws and is determined to uncover the individuals behind it. New Hampshire Attorney General John Formella has urged voters to disregard the call’s contents.

The robocall controversy has sparked an investigation, with NBC News releasing a recording of the call. Sullivan’s phone number was included in the message, raising concerns about privacy and potential harassment.

This incident comes amid a wider debate about the use of AI in political campaigns. OpenAI recently suspended the developer of a ChatGPT-powered bot called Dean.Bot that mimicked Democratic candidate Dean Phillips.

As concerns about AI manipulation in elections grow, advocacy groups like Public Citizen are pushing for federal regulation. A petition from Public Citizen calls on the Federal Election Commission (FEC) to regulate AI use in campaign ads. The FEC chair, Sean Cooksey, acknowledged the issue but stated that resolving it might take until early summer.

The deepfake call and the politician-impersonating chatbot have intensified calls for swift action to address the potential chaos AI could cause in elections. With state lawmakers also considering bills to tackle the practice, the incident raises questions about the vulnerability of democratic processes to AI manipulation in a crucial election year.

(Photo by Manny Becerra on Unsplash)

See also: OpenAI launches GPT Store for custom AI assistants


The post AI-generated Biden robocall urges Democrats not to vote appeared first on AI News.

OpenAI: Copyrighted data ‘impossible’ to avoid for AI training https://www.artificialintelligence-news.com/2024/01/09/openai-copyrighted-data-impossible-avoid-for-ai-training/ Tue, 09 Jan 2024 15:45:05 +0000

OpenAI made waves this week with its bold assertion to a UK parliamentary committee that it would be “impossible” to develop today’s leading AI systems without using vast amounts of copyrighted data.

The company argued that advanced AI tools like ChatGPT require such broad training that adhering to copyright law would be utterly unworkable.

In written testimony, OpenAI stated that between expansive copyright laws and the ubiquity of protected online content, “virtually every sort of human expression” would be off-limits for training data. From news articles to forum comments to digital images, little online content can be utilised freely and legally.

According to OpenAI, attempts to create capable AI while avoiding copyright infringement would fail: “Limiting training data to public domain books and drawings created more than a century ago … would not provide AI systems that meet the needs of today’s citizens.”

While defending its practices as compliant, OpenAI conceded that partnerships and compensation schemes with publishers may be warranted to “support and empower creators.” But the company gave no indication that it intends to dramatically restrict its harvesting of online data, including paywalled journalism and literature.

This stance has opened OpenAI up to multiple lawsuits, including from media outlets like The New York Times alleging copyright breaches.

Nonetheless, OpenAI appears unwilling to fundamentally alter its data collection and training processes, given the “impossible” constraints that self-imposed copyright limits would bring. The company instead hopes to rely on broad interpretations of fair use allowances to legally leverage vast swathes of copyrighted data.

As advanced AI continues to demonstrate uncanny abilities in emulating human expression, legal experts expect vigorous courtroom battles over infringement by systems intrinsically designed to absorb enormous volumes of protected text, media, and other creative output.

For now, OpenAI is betting against copyright maximalists in favour of near-boundless copying to drive ongoing AI development.

(Photo by Levart_Photographer on Unsplash)

See also: OpenAI’s GPT Store to launch next week after delays


The post OpenAI: Copyrighted data ‘impossible’ to avoid for AI training appeared first on AI News.

Biden issues executive order to ensure responsible AI development https://www.artificialintelligence-news.com/2023/10/30/biden-issues-executive-order-responsible-ai-development/ Mon, 30 Oct 2023 10:18:14 +0000

President Biden has issued an executive order aimed at positioning the US at the forefront of AI while ensuring the technology’s safe and responsible use.

The order establishes stringent standards for AI safety and security, safeguards Americans’ privacy, promotes equity and civil rights, protects consumers and workers, fosters innovation and competition, and enhances American leadership on the global stage.

Key actions outlined in the order:

  1. New standards for AI safety and security: The order mandates that developers of powerful AI systems share safety test results and critical information with the US government. Rigorous standards, tools, and tests will be developed to ensure AI systems are safe, secure, and trustworthy before public release. Additionally, measures will be taken to protect against the risks of using AI to engineer dangerous biological materials and to combat AI-enabled fraud and deception.
  2. Protecting citizens’ privacy: The President calls on Congress to pass bipartisan data privacy legislation, prioritising federal support for privacy-preserving techniques, especially those using AI. Guidelines will be developed for federal agencies to evaluate the effectiveness of privacy-preserving techniques, including those used in AI systems.
  3. Advancing equity and civil rights: Clear guidance will be provided to prevent AI algorithms from exacerbating discrimination, especially in areas like housing and federal benefit programs. Best practices will be established for the use of AI in the criminal justice system to ensure fairness.
  4. Standing up for consumers, patients, and students: Responsible use of AI in healthcare and education will be promoted, ensuring that consumers are protected from harmful AI applications while benefiting from its advancements in these sectors.
  5. Supporting workers: Principles and best practices will be developed to mitigate the harms and maximise the benefits of AI for workers, addressing issues such as job displacement, workplace equity, and health and safety. A report on AI’s potential labour-market impacts will be produced, identifying options for strengthening federal support for workers facing labour disruptions due to AI.
  6. Promoting innovation and competition: The order aims to catalyse AI research across the US, promote a fair and competitive AI ecosystem, and expand the ability of highly skilled immigrants and non-immigrants to study, stay, and work in the US to foster innovation in the field.
  7. Advancing leadership abroad: The US will collaborate with other nations to establish international frameworks for safe and trustworthy AI deployment. Efforts will be made to accelerate the development and implementation of vital AI standards with international partners and promote the responsible development and deployment of AI abroad to address global challenges.
  8. Ensuring responsible and effective government adoption: Clear standards and guidelines will be issued for government agencies’ use of AI to protect rights and safety. Efforts will be made to help agencies acquire AI products and services more rapidly and efficiently, and an AI talent surge will be initiated to enhance government capacity in AI-related fields.

The executive order signifies a major step forward in the US towards harnessing the potential of AI while safeguarding individuals’ rights and security.

“As we advance this agenda at home, the Administration will work with allies and partners abroad on a strong international framework to govern the development and use of AI,” wrote the White House in a statement.

“The actions that President Biden directed today are vital steps forward in the US’ approach on safe, secure, and trustworthy AI. More action will be required, and the Administration will continue to work with Congress to pursue bipartisan legislation to help America lead the way in responsible innovation.”

The administration’s commitment to responsible innovation is paramount and sets the stage for continued collaboration with international partners to shape the future of AI globally.

(Photo by David Everett Strickler on Unsplash)

See also: UK paper highlights AI risks ahead of global Safety Summit


The post Biden issues executive order to ensure responsible AI development appeared first on AI News.
