fraud Archives - AI News
https://www.artificialintelligence-news.com/tag/fraud/

Large language models could ‘revolutionise the finance sector within two years’
https://www.artificialintelligence-news.com/2024/03/27/large-language-models-could-revolutionsise-the-finance-sector-within-two-years/
Wed, 27 Mar 2024

Large Language Models (LLMs) have the potential to improve efficiency and safety in the finance sector by detecting fraud, generating financial insights and automating customer service, according to research by The Alan Turing Institute.
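The report itself does not publish code, but the transaction-screening idea can be illustrated with a deliberately simple statistical stand-in: flagging an amount that sits far outside an account's spending history. The z-score rule, the sample history and the threshold below are all illustrative assumptions; the LLM-based systems the report discusses work very differently.

```python
import statistics

# Toy transaction screen: flag an amount whose z-score against the
# account's history exceeds a threshold. This heuristic is a stand-in
# for illustration only, not the report's LLM-based approach.

def is_suspicious(history, amount, threshold=3.0):
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return amount != mean
    return abs(amount - mean) / stdev > threshold

history = [42.0, 38.5, 45.0, 40.0, 39.5, 41.0, 43.0]
print(is_suspicious(history, 41.5))   # False: in line with past spending
print(is_suspicious(history, 950.0))  # True: far outside the account's norm
```

In practice a production fraud system would combine many such signals (merchant, location, timing) with learned models; the appeal of LLMs noted in the report is their ability to reason over unstructured text alongside signals like these.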

Because LLMs can analyse large amounts of data quickly and generate coherent text, there is growing recognition of their potential to improve services across a range of sectors, including healthcare, law, education and financial services such as banking, insurance and financial planning.

This report, the first to explore the adoption of LLMs across the finance ecosystem, shows that people working in this area have already begun to use LLMs to support a variety of internal processes, such as the review of regulations, and are assessing their potential for supporting external activity like the delivery of advisory and trading services.

Alongside a literature survey, researchers held a workshop with 43 professionals from major high street and investment banks, regulators, insurers, payment service providers, government and the legal profession.

The majority of workshop participants (52%) are already using these models to enhance performance in information-orientated tasks, from the management of meeting notes to cyber security and compliance insight, while 29% use them to boost critical thinking skills, and another 16% employ them to break down complex tasks.

The sector is also establishing systems to enhance productivity through rapid analysis of large amounts of text, simplifying decision-making, supporting risk profiling and improving investment research and back-office operations.

When asked about the future of LLMs in the finance sector, participants felt that LLMs would be integrated into services like investment banking and venture capital strategy development within two years.

They also thought it likely that LLMs would be integrated to improve interactions between people and machines; for example, dictation and embedded AI assistants could reduce the complexity of knowledge-intensive tasks such as the review of regulations.

But participants also acknowledged that the technology poses risks which will limit its usage. Financial institutions are subject to extensive regulatory standards and obligations, which limit their ability to use AI systems that they cannot explain and that do not generate output predictably, consistently or without risk of error.

Based on their findings, the authors recommend that financial services professionals, regulators and policy makers collaborate across the sector to share and develop knowledge about implementing and using LLMs, particularly related to safety concerns. They also suggest exploring the growing interest in open-source models, which could be adopted and maintained effectively, while treating the mitigation of security and privacy concerns as a high priority.

Professor Carsten Maple, lead author and Turing Fellow at The Alan Turing Institute, said: “Banks and other financial institutions have always been quick to adopt new technologies to make their operations more efficient and the emergence of LLMs is no different. By bringing together experts across the finance ecosystem, we have managed to create a common understanding of the use cases, risks, value and timeline for implementation of these technologies at scale.”

Professor Lukasz Szpruch, programme director for Finance and Economics at The Alan Turing Institute, said: “It’s really positive that the financial sector is benefiting from the emergence of large language models and their implementation into this highly regulated sector has the potential to provide best practices for other sectors. This study demonstrates the benefit of research institutes and industry working together to assess the vast opportunities as well as the practical and ethical challenges of new technologies to ensure they are implemented safely.”

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Biden issues executive order to ensure responsible AI development
https://www.artificialintelligence-news.com/2023/10/30/biden-issues-executive-order-responsible-ai-development/
Mon, 30 Oct 2023

President Biden has issued an executive order aimed at positioning the US at the forefront of AI while ensuring the technology’s safe and responsible use.

The order establishes stringent standards for AI safety and security, safeguards Americans’ privacy, promotes equity and civil rights, protects consumers and workers, fosters innovation and competition, and enhances American leadership on the global stage.

Key actions outlined in the order:

  1. New standards for AI safety and security: The order mandates that developers of powerful AI systems share safety test results and critical information with the US government. Rigorous standards, tools, and tests will be developed to ensure AI systems are safe, secure, and trustworthy before public release. Additionally, measures will be taken to protect against the risks of using AI to engineer dangerous biological materials and combat AI-enabled fraud and deception.
  2. Protecting citizens’ privacy: The President calls on Congress to pass bipartisan data privacy legislation, prioritising federal support for privacy-preserving techniques, especially those using AI. Guidelines will be developed for federal agencies to evaluate the effectiveness of privacy-preserving techniques, including those used in AI systems.
  3. Advancing equity and civil rights: Clear guidance will be provided to prevent AI algorithms from exacerbating discrimination, especially in areas like housing and federal benefit programs. Best practices will be established for the use of AI in the criminal justice system to ensure fairness.
  4. Standing up for consumers, patients, and students: Responsible use of AI in healthcare and education will be promoted, ensuring that consumers are protected from harmful AI applications while benefiting from its advancements in these sectors.
  5. Supporting workers: Principles and best practices will be developed to mitigate the harms and maximise the benefits of AI for workers, addressing issues such as job displacement, workplace equity, and health and safety. A report on AI’s potential labour-market impacts will be produced, identifying options for strengthening federal support for workers facing labour disruptions due to AI.
  6. Promoting innovation and competition: The order aims to catalyse AI research across the US, promote a fair and competitive AI ecosystem, and expand the ability of highly skilled immigrants and non-immigrants to study, stay, and work in the US to foster innovation in the field.
  7. Advancing leadership abroad: The US will collaborate with other nations to establish international frameworks for safe and trustworthy AI deployment. Efforts will be made to accelerate the development and implementation of vital AI standards with international partners and promote the responsible development and deployment of AI abroad to address global challenges.
  8. Ensuring responsible and effective government adoption: Clear standards and guidelines will be issued for government agencies’ use of AI to protect rights and safety. Efforts will be made to help agencies acquire AI products and services more rapidly and efficiently, and an AI talent surge will be initiated to enhance government capacity in AI-related fields.

The executive order marks a major step towards harnessing the potential of AI in the US while safeguarding individuals’ rights and security.

“As we advance this agenda at home, the Administration will work with allies and partners abroad on a strong international framework to govern the development and use of AI,” wrote the White House in a statement.

“The actions that President Biden directed today are vital steps forward in the US’ approach on safe, secure, and trustworthy AI. More action will be required, and the Administration will continue to work with Congress to pursue bipartisan legislation to help America lead the way in responsible innovation.”

The administration’s commitment to responsible innovation is paramount and sets the stage for continued collaboration with international partners to shape the future of AI globally.

(Photo by David Everett Strickler on Unsplash)

See also: UK paper highlights AI risks ahead of global Safety Summit


Wozniak warns AI will power next-gen scams
https://www.artificialintelligence-news.com/2023/05/09/wozniak-warns-ai-will-power-next-gen-scams/
Tue, 09 May 2023

Apple co-founder Steve Wozniak has raised concerns over the potential misuse of AI-powered tools by cybercriminals to create convincing online scams. 

Wozniak fears that AI will fall into the wrong hands and lead to increased scams and more difficult-to-spot online fraud.

The renowned engineer has called for the regulation of AI technology to limit its use by bad actors who deceive people about their identity in order to obtain sensitive information.

Wozniak’s comments come at a time when the use of AI technology is on the rise. Many businesses are turning to AI-powered tools to automate their processes, improve their efficiency, and create new products and services.

OpenAI’s ChatGPT and Google’s Bard are among a growing number of generative AI tools that can converse with humans in written form in a natural, human-like way.

According to a report by Goldman Sachs, the technology is expected to impact an estimated 300 million workplace roles in the coming years, though it added that many of these jobs will likely be assisted by AI rather than replaced.

However, Wozniak believes that AI technology is open to abuse by cybercriminals, who can use it to clone a person’s voice and trick their friends or relatives into handing over money. Wozniak hopes that AI can be trained to recognise such scams and alert the target to take appropriate action to protect themselves.

Wozniak was one of around 1,000 technology experts who put their names to a letter in March calling for a six-month pause on the development of some AI tools so that guidelines for their safe deployment can be drawn up.

He wants the regulation of major tech companies that “feel they can kind of get away with anything” to ensure that they stay within certain boundaries. However, Wozniak also pondered whether such regulation would be effective, stating that “the forces that drive for money usually win out, which is sort of sad.”

As AI technology continues to evolve, it is essential to ensure that its use is regulated to prevent cybercriminals from using it for fraudulent activities. At the same time, it is vital to balance regulation with innovation to enable AI technology to be developed in a responsible and safe manner.

Similar: AI ‘godfather’ warns of dangers and quits Google


AI targets cryptocurrency ‘pump-and-dump’ schemes
https://www.artificialintelligence-news.com/2018/12/06/ai-cryptocurrency-pump-dump-schemes/
Thu, 06 Dec 2018

Pump-and-dump schemes result in serious penalties in regulated markets, but in the wild west of cryptocurrencies, they’re a regular occurrence.

For those unaware, a pump-and-dump is typically a group of individuals who use their combined buying power to artificially inflate the price of an asset before selling off near its peak. Outsiders believe the price is a sign of increased interest and end up buying high.

Anonymous messaging app Telegram is often used for organising these groups, the most notorious being ‘Official McAfee Pump Signals’ with 12,333 members.

Typically, organisers of these groups have a separate chat – sometimes accessed via a subscription fee – where users are notified ahead of those in a public group that a pump is about to occur, so they can buy in at the floor price.

Academics from Imperial College London used machine learning to predict these illicit schemes. By analysing over 300 Telegram channels, the researchers identified 220 ‘pump events’ used to build their model.

The result was the ability to predict a pump with 80 percent accuracy.
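The paper's actual features and classifier are not reproduced here, but the detection task can be sketched with a toy heuristic: flag minutes where price and volume spike together, far above their recent rolling averages. The window size, multipliers and synthetic market data below are all illustrative assumptions, not the researchers' model.

```python
# Toy pump detector: flag a minute as a suspected pump when both price
# and volume jump far above their recent rolling averages. A heuristic
# stand-in for illustration; the Imperial College model described in the
# paper uses richer features and a trained classifier.

def detect_pump(prices, volumes, window=10, price_mult=1.5, vol_mult=3.0):
    """Return indices of minutes where price and volume spike together."""
    flags = []
    for i in range(window, len(prices)):
        p_avg = sum(prices[i - window:i]) / window
        v_avg = sum(volumes[i - window:i]) / window
        if prices[i] > price_mult * p_avg and volumes[i] > vol_mult * v_avg:
            flags.append(i)
    return flags

# Quiet market for 12 minutes, then a coordinated buy.
prices  = [1.0] * 12 + [3.2, 3.0]
volumes = [100] * 12 + [5000, 4500]
print(detect_pump(prices, volumes))  # [12, 13]
```

An exchange deploying something along these lines could surface a warning to users as soon as the first flagged minute appears, which is exactly the intervention the next paragraph imagines.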

Legitimate exchanges could one day implement such an AI to warn users of an impending pump-and-dump scheme, or block them from occurring. Some exchanges, however, are involved themselves.

The researchers also highlight the YoBit exchange as itself organising pump-and-dumps.

One case highlighted by the academics was that of the ‘BVB’ coin, a cryptocurrency said to have been created for supporters of Borussia Dortmund football club. It’s not been active since 2016.

People started buying the coin at around $0.0014 before it peaked at $0.0045, offering a potential 3x profit. The coin traded at less than its original value after just three and a half minutes.

The researchers estimate pump-and-dump schemes account for almost $7 million of monthly cryptocurrency trading volume. With a current 24 hour volume of $15.9 billion, that still only represents 0.0438 percent of the market.
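Both figures above can be checked with quick arithmetic, using the article's rounded numbers (and noting that it compares a monthly pump estimate against a single day's trading volume):

```python
# BVB pump: entry around $0.0014, peak around $0.0045.
multiple = 0.0045 / 0.0014
print(f"{multiple:.1f}x")  # ~3.2x, consistent with the 'potential 3x profit'

# Market share: ~$7M of monthly pump volume against $15.9B of 24-hour volume.
share = 7_000_000 / 15_900_000_000 * 100
print(f"{share:.4f}%")  # ~0.0440%, matching the article's 0.0438 percent to rounding
```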

You can read the full paper here (PDF).

Interested in hearing leading global brands discuss subjects like this? Find out more at the Blockchain Expo World Series with events in London, Europe, and North America.
