cybersecurity Archives - AI News
https://www.artificialintelligence-news.com/tag/cybersecurity/

Gil Pekelman, Atera: How businesses can harness the power of AI
Tue, 28 May 2024 15:32:37 +0000

TechForge recently caught up with Gil Pekelman, CEO of all-in-one IT management platform, Atera, to discuss how AI is becoming the IT professionals’ number one companion.

Can you tell us a little bit about Atera and what it does?

We launched the Atera all-in-one platform for IT management in 2016, so quite a few years ago. And it’s very broad. It’s everything from technical things like patching and security to ongoing support, alerts, automations, ticket management, reports, and analytics, etc. 

Atera is a single platform that manages all your IT in a single pane of glass. The power of it – and we’re the only company that does this – is it’s a single codebase and single database for all of that. The alternative, for many years now, has been to buy four or five different products, and have them all somehow connected, which is usually very difficult. 

Here, the fact is it’s a single codebase and a single database. Everything is connected and streamlined and very intuitive. So, in essence, you sign up or start a trial and within five minutes, you’re already running with it and onboarding. It’s that intuitive.

We have 12,000+ customers in 120 countries around the world. The UK is our second-largest country in terms of business, currently. The US is the first, but the UK is right behind them.

What are the latest trends you’re seeing develop in AI this year?

From the start, we’ve been dedicated to integrating AI into our company’s DNA. Our goal has always been to use data to identify problems and alert humans so they can fix or avoid issues. Initially, we focused on leveraging data to provide solutions.

Over the past nine years, we’ve aimed to let AI handle mundane IT tasks, freeing up professionals for more engaging work. With early access to ChatGPT and OpenAI tools a year and a half ago, we’ve been pioneering a new trend we call Action AI.

Unlike generic Generative AI, which creates content like songs or emails, Action AI operates in the real world, interacting with hardware and software to perform tasks autonomously. Our AI can understand IT problems and resolve them on its own, moving beyond mere dialogue to real-world action.

Atera offers Copilot and Autopilot. Could you explain what these are?

Autopilot is autonomous. It understands a problem you might have on your computer. It’s a widget on your computer, and it will communicate with you and fix the problem autonomously. However, it has boundaries on what it’s allowed to fix and what it’s not allowed to fix. And everything it’s allowed to deal with has to be bulletproof: 100% secure and private, with no opportunity to do any damage or anything like that.

So if a ticket is opened up, or a complaint is raised, if it’s outside of these boundaries, it will then activate the Copilot. The Copilot augments the IT professional.

They’re both companions. The Autopilot is a companion that takes away password resets, printer issues, installs software, etc. – mundane and repetitive issues – and the Copilot is a companion that will help the IT professional deal with the issues they deal with on a day-to-day basis. And it has all kinds of different tools. 

The Copilot is very elaborate. If you have a problem, you can ask it and it will not only give you an answer like ChatGPT, but it will research and run all kinds of tests on the network, the computer, and the printer, and it will come to a conclusion, and create the action that is required to solve it. But it won’t solve it. It will still leave that to the IT professional to think about the different information and decide what they want to do. 

Copilot can save IT professionals nearly half of their workday. While it’s been tested in the field for some time, we’re excited to officially launch it now. Meanwhile, Autopilot is still in the beta phase.

What advice would you give to any companies that are thinking about integrating AI technologies into their business operations?

I strongly recommend that companies begin integrating AI technologies immediately, but it is crucial to research and select the right and secure generative AI tools. Incorporating AI offers numerous advantages: it automates routine tasks, enhances efficiency and productivity, improves accuracy by reducing human error, and speeds up problem resolution. That being said, it’s important to pick the right generative AI tool to help you reap the benefits without compromising on security. For example, with our collaboration with Microsoft, our customers’ data is secure—it stays within the system, and the AI doesn’t use it for training or expanding its database. This ensures safety while delivering substantial benefits.

Our incorporation of AI into our product focuses on two key aspects. First, your IT team no longer has to deal with mundane, frustrating tasks. Second, for end users, issues like non-working printers, forgotten passwords, or slow internet are resolved in seconds or minutes instead of hours. This provides a measurable and significant improvement in efficiency.

There are all kinds of AIs out there. Some of them are more beneficial, some are less. Some are just ChatGPT in disguise with a very thin layer on top. What we do literally changes the whole interaction with IT. And we know that when IT has a problem, things stop working, and you stop working. Our solution ensures everything keeps running smoothly.

What can we expect from AI over the next few years?

AI is set to become significantly more intelligent and aware. One remarkable development is its growing ability to reason, predict, and understand data. This capability enables AI to foresee issues and autonomously resolve them, showcasing an astonishing level of reasoning.

We anticipate a dual advancement: a rapid acceleration in AI’s intelligence and a substantial enhancement in its empathetic interactions, as demonstrated in the latest OpenAI release. This evolution will transform how humans engage with AI.

Our work exemplifies this shift. When non-technical users interact with our software to solve problems, AI responds with a highly empathetic, human-like approach. Users feel as though they are speaking to a real IT professional, ensuring a seamless and comforting experience.

As AI continues to evolve, it will become increasingly powerful and capable. Recent breakthroughs in understanding AI’s mechanisms will not only enhance its functionality but also ensure its security and ethical use, reinforcing its role as a force for good.

What plans does Atera have for the next year?

We are excited to announce the upcoming launch of Autopilot, scheduled for release in a few months. While Copilot, our comprehensive suite of advanced tools designed specifically for IT professionals, has already been instrumental in enhancing efficiency and effectiveness, Autopilot represents the next significant advancement.

Autopilot, which is currently in beta (so whoever wants to try it already can), directly interacts with end users, automating and resolving common IT issues that typically burden IT staff, such as password resets and printer malfunctions. By addressing these routine tasks, Autopilot allows IT professionals to focus on more strategic and rewarding activities, ultimately improving overall productivity and job satisfaction.

For more information, visit atera.com

Atera is a sponsor of TechEx North America 2024 on June 5-6 in Santa Clara, US. Visit the Atera team at booth 237 for a personalised demo, or to test your IT skills with the company’s first-of-its-kind AIT game, APOLLO IT, for a chance to win a prize.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Ethical, trust and skill barriers hold back generative AI progress in EMEA
Mon, 20 May 2024 10:17:00 +0000

76% of consumers in EMEA believe AI will have a significant impact within the next five years, yet 47% question the value that AI will bring and 41% are worried about its applications.

This is according to research from enterprise analytics AI firm Alteryx.

Since the release of ChatGPT by OpenAI in November 2022, there has been significant buzz about the transformative potential of generative AI, with many considering it one of the most revolutionary technologies of our time. 

With a significant 79% of organisations reporting that generative AI contributes positively to business, there is an evident gap to be addressed in demonstrating AI’s value to consumers in both their personal and professional lives. According to the ‘Market Research: Attitudes and Adoption of Generative AI’ report, which surveyed 690 IT business leaders and 1,100 members of the general public in EMEA, key issues of trust, ethics and skills are prevalent, potentially impeding the successful deployment and broader acceptance of generative AI.

The impact of misinformation, inaccuracies, and AI hallucinations

Hallucinations – where AI generates incorrect or illogical outputs – are a significant concern, and trusting what generative AI produces is a substantial issue for both business leaders and consumers. Over a third of the public are anxious about AI’s potential to generate fake news (36%) and its misuse by hackers (42%), while half of business leaders report their organisations grappling with misinformation produced by generative AI.

Moreover, the reliability of information provided by generative AI has been questioned. Half of the general public reported receiving inaccurate data from AI, and 38% perceived it as outdated. On the business front, concerns include generative AI infringing on copyright or intellectual property rights (40%) and producing unexpected or unintended outputs (36%).

A critical trust issue for businesses (62%) and the public (74%) revolves around AI hallucinations. For businesses, the challenge involves applying generative AI to appropriate use cases, supported by the right technology and safety measures, to mitigate these concerns. Close to half of the consumers (45%) are advocating for regulatory measures on AI usage.

Ethical concerns and risks persist in the use of generative AI

In addition to these challenges, there are strong and similar sentiments on ethical concerns and the risks associated with generative AI among both business leaders and consumers. More than half of the general public (53%) oppose the use of generative AI in making ethical decisions. Meanwhile, 41% of business respondents are concerned about its application in critical decision-making areas. There are distinctions in the specific areas where its use is discouraged; consumers notably oppose its use in politics (46%), and businesses are cautious about its deployment in healthcare (40%).

These concerns find some validation in the research findings, which highlight worrying gaps in organisational practices. Only a third of leaders confirmed that their businesses ensure the data used to train generative AI is diverse and unbiased. Furthermore, only 36% have set ethical guidelines, and 52% have established data privacy and security policies for generative AI applications.

This lack of emphasis on data integrity and ethical considerations puts firms at risk. 63% of business leaders cite ethics as their major concern with generative AI, closely followed by data-related issues (62%). This scenario emphasises the importance of better governance to create confidence and mitigate risks related to how employees use generative AI in the workplace. 

The rise of generative AI skills and the need for enhanced data literacy

As generative AI evolves, establishing relevant skill sets and enhancing data literacy will be key to realising its full potential. Consumers are increasingly using generative AI technologies in various scenarios, including information retrieval, email communication, and skill acquisition. Business leaders say they use generative AI for data analysis, cybersecurity, and customer support. Yet despite the reported success of pilot projects, several challenges remain, including security problems, data privacy issues, and output quality and reliability.

Trevor Schulze, Alteryx’s CIO, emphasised the necessity for both enterprises and the general public to fully understand the value of AI and address common concerns as they navigate the early stages of generative AI adoption.

He noted that addressing trust issues, ethical concerns, skills shortages, fears of privacy invasion, and algorithmic bias are critical tasks. Schulze underlined the necessity for enterprises to expedite their data journey, adopt robust governance, and allow non-technical individuals to access and analyse data safely and reliably, addressing privacy and bias concerns in order to genuinely profit from this ‘game-changing’ technology.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

NCSC: AI to significantly boost cyber threats over next two years
Wed, 24 Jan 2024 16:50:10 +0000

A report published by the UK’s National Cyber Security Centre (NCSC) warns that AI will substantially increase cyber threats over the next two years. 

The centre warns of a surge in ransomware attacks in particular; involving hackers deploying malicious software to encrypt a victim’s files or entire system and demanding a ransom payment for the decryption key.

The NCSC assessment predicts AI will enhance threat actors’ capabilities mainly in carrying out more persuasive phishing attacks that trick individuals into providing sensitive information or clicking on malicious links.

“Generative AI can already create convincing interactions like documents that fool people, free of the translation and grammatical errors common in phishing emails,” the report states. 

This ability to produce polished, error-free lures is identified as a key contributor to the rising threat landscape over the next two years.

The NCSC assessment identifies challenges in cyber resilience, citing the difficulty in verifying the legitimacy of emails and password reset requests due to generative AI and large language models. The shrinking time window between security updates and threat exploitation further complicates rapid vulnerability patching for network managers.

James Babbage, director general for threats at the National Crime Agency, commented: “AI services lower barriers to entry, increasing the number of cyber criminals, and will boost their capability by improving the scale, speed, and effectiveness of existing attack methods.”

However, the NCSC report also outlined how AI could bolster cybersecurity through improved attack detection and system design. It calls for further research on how developments in defensive AI solutions can mitigate evolving threats.

Currently, access to quality data, skills, tools, and time makes advanced AI-powered cyber operations feasible mainly for highly capable state actors. But the NCSC warns these barriers to entry will progressively fall as capable groups monetise and sell AI-enabled hacking tools.

[Table: Extent of capability uplift by AI over the next two years. Credit: NCSC]

Lindy Cameron, CEO of the NCSC, stated: “We must ensure that we both harness AI technology for its vast potential and manage its risks – including its implications on the cyber threat.”

The UK government has allocated £2.6 billion under its Cyber Security Strategy 2022 to strengthen the country’s resilience to emerging high-tech threats.

AI is positioned to substantially change the cyber risk landscape in the near future. Continuous investment in defensive capabilities and research will be vital to counteract its potential to empower attackers.

A full copy of the NCSC’s report can be found here.

(Photo by Muha Ajjan on Unsplash)

See also: AI-generated Biden robocall urges Democrats not to vote

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

McAfee unveils AI-powered deepfake audio detection
Mon, 08 Jan 2024 10:49:16 +0000

McAfee has revealed a pioneering AI-powered deepfake audio detection technology, Project Mockingbird, during CES 2024. This proprietary technology aims to defend consumers against the rising menace of cybercriminals employing fabricated, AI-generated audio for scams, cyberbullying, and manipulation of public figures’ images.

Generative AI tools have enabled cybercriminals to craft convincing scams, including voice cloning to impersonate family members seeking money or manipulating authentic videos with “cheapfakes.” These tactics manipulate content to deceive individuals, creating a heightened challenge for consumers to discern between real and manipulated information.

In response to this challenge, McAfee Labs developed an industry-leading AI model, part of the Project Mockingbird technology, to detect AI-generated audio. This technology employs contextual, behavioural, and categorical detection models, achieving an impressive 90 percent accuracy rate.

Steve Grobman, CTO at McAfee, said: “Much like a weather forecast indicating a 70 percent chance of rain helps you plan your day, our technology equips you with insights to make educated decisions about whether content is what it appears to be.”
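McAfee has not published Project Mockingbird’s internals, but Grobman’s forecast analogy maps naturally onto a weighted blend of detector scores. The sketch below is purely illustrative: the three detector categories come from the article, while the weights, score ranges, and combination rule are our own assumptions.

```typescript
// Illustrative only: not McAfee's actual model. Shows the general shape of
// fusing contextual, behavioural, and categorical detector outputs into a
// single "likely AI-generated" probability, per the forecast analogy above.

interface DetectorScores {
  contextual: number;   // 0..1: does the audio fit its surrounding context?
  behavioural: number;  // 0..1: cadence, breathing, synthesis artefacts
  categorical: number;  // 0..1: similarity to known generator families
}

// Hypothetical weights; a real system would learn these from labelled data.
const WEIGHTS = { contextual: 0.4, behavioural: 0.35, categorical: 0.25 };

function deepfakeLikelihood(s: DetectorScores): number {
  return (
    s.contextual * WEIGHTS.contextual +
    s.behavioural * WEIGHTS.behavioural +
    s.categorical * WEIGHTS.categorical
  );
}

// deepfakeLikelihood({ contextual: 0.8, behavioural: 0.7, categorical: 0.9 })
// ≈ 0.79, i.e. "a 79% chance this clip is AI-generated" - much like a
// forecast giving a 70% chance of rain.
```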

Project Mockingbird offers diverse applications, from countering AI-generated scams to tackling disinformation. By empowering consumers to distinguish between authentic and manipulated content, McAfee aims to protect users from falling victim to fraudulent schemes and ensure a secure digital experience.

Deep concerns about deepfakes

As deepfake technology becomes more sophisticated, consumer concerns are on the rise. McAfee’s December 2023 Deepfakes Survey highlights:

  • 84% of Americans are concerned about deepfake usage in 2024
  • 68% are more concerned than a year ago
  • 33% have experienced or witnessed a deepfake scam, rising to 40% among 18–34 year-olds
  • Top concerns include election influence (52%), undermining public trust in media (48%), impersonation of public figures (49%), proliferation of scams (57%), cyberbullying (44%), and sexually explicit content creation (37%)

McAfee’s unveiling of Project Mockingbird marks a significant leap in the ongoing battle against AI-generated threats. As countries like the US and UK enter a pivotal election year, it’s crucial that consumers are given the best chance possible at grappling with the pervasive influence of deepfake technology.

(Photo by Markus Spiske on Unsplash)

See also: MyShell releases OpenVoice voice cloning AI

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Global AI security guidelines endorsed by 18 countries
Mon, 27 Nov 2023 10:28:13 +0000

The UK has published the world’s first global guidelines for securing AI systems against cyberattacks. The new guidelines aim to ensure AI technology is developed safely and securely.

The guidelines were developed by the UK’s National Cyber Security Centre (NCSC) and the US’ Cybersecurity and Infrastructure Security Agency (CISA). They have already secured endorsements from 17 other countries, including all G7 members.

The guidelines provide recommendations for developers and organisations using AI to incorporate cybersecurity at every stage. This “secure by design” approach advises baking in security from the initial design phase through development, deployment, and ongoing operations.  

Specific guidelines cover four key areas: secure design, secure development, secure deployment, and secure operation and maintenance. They suggest security behaviours and best practices for each phase.

The launch event in London convened over 100 industry, government, and international partners. Speakers included representatives from Microsoft, the Alan Turing Institute, and cyber agencies from the US, Canada, Germany, and the UK.

NCSC CEO Lindy Cameron stressed the need for proactive security amidst AI’s rapid pace of development. She said, “security is not a postscript to development but a core requirement throughout.”

The guidelines build on existing UK leadership in AI safety. Last month, the UK hosted the first international summit on AI safety at Bletchley Park.

US Secretary of Homeland Security Alejandro Mayorkas said: “We are at an inflection point in the development of artificial intelligence, which may well be the most consequential technology of our time. Cybersecurity is key to building AI systems that are safe, secure, and trustworthy.

“The guidelines jointly issued today by CISA, NCSC, and our other international partners, provide a common-sense path to designing, developing, deploying, and operating AI with cybersecurity at its core.”

The 18 endorsing countries span Europe, Asia-Pacific, Africa, and the Americas. Here is the full list of international signatories:

  • Australia – Australian Signals Directorate’s Australian Cyber Security Centre (ACSC)
  • Canada – Canadian Centre for Cyber Security (CCCS) 
  • Chile – Chile’s Government CSIRT
  • Czechia – Czechia’s National Cyber and Information Security Agency (NUKIB)
  • Estonia – Information System Authority of Estonia (RIA) and National Cyber Security Centre of Estonia (NCSC-EE)
  • France – French Cybersecurity Agency (ANSSI)
  • Germany – Germany’s Federal Office for Information Security (BSI)
  • Israel – Israeli National Cyber Directorate (INCD)
  • Italy – Italian National Cybersecurity Agency (ACN)
  • Japan – Japan’s National Center of Incident Readiness and Strategy for Cybersecurity (NISC) and Japan’s Secretariat of Science, Technology and Innovation Policy, Cabinet Office
  • New Zealand – New Zealand National Cyber Security Centre
  • Nigeria – Nigeria’s National Information Technology Development Agency (NITDA)
  • Norway – Norwegian National Cyber Security Centre (NCSC-NO)
  • Poland – Poland’s NASK National Research Institute (NASK)
  • Republic of Korea – Republic of Korea National Intelligence Service (NIS)
  • Singapore – Cyber Security Agency of Singapore (CSA)
  • United Kingdom – National Cyber Security Centre (NCSC)
  • United States of America – Cybersecurity and Infrastructure Security Agency (CISA), National Security Agency (NSA), and Federal Bureau of Investigation (FBI)

UK Science and Technology Secretary Michelle Donelan positioned the new guidelines as cementing the UK’s role as “an international standard bearer on the safe use of AI.”

“Just weeks after we brought world leaders together at Bletchley Park to reach the first international agreement on safe and responsible AI, we are once again uniting nations and companies in this truly global effort,” adds Donelan.

The guidelines are now published on the NCSC website alongside explanatory blogs. Developer uptake will be key to translating the secure by design vision into real-world improvements in AI security.

(Photo by Jan Antonin Kolar on Unsplash)

See also: Paul O’Sullivan, Salesforce: Transforming work in the GenAI era

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

DHS AI roadmap prioritises cybersecurity and national safety
Wed, 15 Nov 2023 10:10:47 +0000

The Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) has launched its inaugural Roadmap for AI.

Viewed as a crucial step in the broader governmental effort to ensure the secure development and implementation of AI capabilities, the move aligns with President Biden’s recent Executive Order.

“DHS has a broad leadership role in advancing the responsible use of AI and this cybersecurity roadmap is one important element of our work,” said Secretary of Homeland Security Alejandro N. Mayorkas.

“The Biden-Harris Administration is committed to building a secure and resilient digital ecosystem that promotes innovation and technological progress.” 

Following the Executive Order, DHS is mandated to globally promote AI safety standards, safeguard US networks and critical infrastructure, and address risks associated with AI—including potential use “to create weapons of mass destruction”.

“In last month’s Executive Order, the President called on DHS to promote the adoption of AI safety standards globally and help ensure the safe, secure, and responsible use and development of AI,” added Mayorkas.

“CISA’s roadmap lays out the steps that the agency will take as part of our Department’s broader efforts to both leverage AI and mitigate its risks to our critical infrastructure and cyber defenses.”

CISA’s roadmap outlines five strategic lines of effort, providing a blueprint for concrete initiatives and a responsible approach to integrating AI into cybersecurity.

CISA Director Jen Easterly highlighted the dual nature of AI, acknowledging its promise in enhancing cybersecurity while warning of the immense risks it poses.

“Artificial Intelligence holds immense promise in enhancing our nation’s cybersecurity, but as the most powerful technology of our lifetimes, it also presents enormous risks,” commented Easterly.

“Our Roadmap for AI – focused at the nexus of AI, cyber defense, and critical infrastructure – sets forth an agency-wide plan to promote the beneficial uses of AI to enhance cybersecurity capabilities; ensure AI systems are protected from cyber-based threats; and deter the malicious use of AI capabilities to threaten the critical infrastructure Americans rely on every day.”

The outlined lines of effort are as follows:

  • Responsibly use AI to support our mission: CISA commits to using AI-enabled tools ethically and responsibly to strengthen cyber defense and support its critical infrastructure mission. The adoption of AI will align with constitutional principles and all relevant laws and policies.
  • Assess and Assure AI systems: CISA will assess and assist in secure AI-based software adoption across various stakeholders, establishing assurance through best practices and guidance for secure and resilient AI development.
  • Protect critical infrastructure from malicious use of AI: CISA will evaluate and recommend mitigation of AI threats to critical infrastructure, collaborating with government agencies and industry partners. The establishment of JCDC.AI aims to facilitate focused collaboration on AI-related threats.
  • Collaborate and communicate on key AI efforts: CISA commits to contributing to interagency efforts, supporting policy approaches for the US government’s national strategy on cybersecurity and AI, and coordinating with international partners to advance global AI security practices.
  • Expand AI expertise in our workforce: CISA will educate its workforce on AI systems and techniques, actively recruiting individuals with AI expertise and ensuring a comprehensive understanding of the legal, ethical, and policy aspects of AI-based software systems.

“This is a step in the right direction. It shows the government is taking the potential threats and benefits of AI seriously. The roadmap outlines a comprehensive strategy for leveraging AI to enhance cybersecurity, protect critical infrastructure, and foster collaboration. It also emphasises the importance of security in AI system design and development,” explains Joseph Thacker, AI and security researcher at AppOmni.

“The roadmap is pretty comprehensive. Nothing stands out as missing initially, although the devil is in the details when it comes to security, and even more so when it comes to a completely new technology. CISA’s ability to keep up may depend on their ability to get talent or train internal folks. Both of those are difficult to accomplish at scale.”

CISA invites stakeholders, partners, and the public to explore the Roadmap for Artificial Intelligence and gain insights into the strategic vision for AI technology and cybersecurity here.

See also: Google expands partnership with Anthropic to enhance AI safety

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

OpenAI battles DDoS against its API and ChatGPT services
Thu, 09 Nov 2023 15:50:14 +0000

OpenAI has been grappling with a series of distributed denial-of-service (DDoS) attacks targeting its API and ChatGPT services over the past 24 hours.

While the company has not yet disclosed specific details about the source of these attacks, OpenAI acknowledged that they are dealing with “periodic outages due to an abnormal traffic pattern reflective of a DDoS attack.”

Users affected by these incidents reported encountering errors such as “something seems to have gone wrong” and “There was an error generating a response” when accessing ChatGPT.

This recent wave of attacks follows a major outage that impacted ChatGPT and its API on Wednesday, along with partial ChatGPT outages on Tuesday, and elevated error rates in Dall-E on Monday.

OpenAI displayed a banner across ChatGPT’s interface, attributing the disruptions to “exceptionally high demand” and reassuring users that efforts were underway to scale their systems.

Threat actor group Anonymous Sudan has claimed responsibility for the DDoS attacks on OpenAI. According to the group, the attacks are in response to OpenAI’s perceived bias towards Israel and against Palestine.

The attackers utilised the SkyNet botnet, which recently incorporated support for application layer attacks or Layer 7 (L7) DDoS attacks. In Layer 7 attacks, threat actors overwhelm services at the application level with a massive volume of requests to strain the targets’ server and network resources.
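To make the Layer 7 distinction concrete, the sketch below shows one common application-level mitigation: a per-client token bucket that rejects excess requests before any expensive work (such as model inference) is performed. This is a generic illustration rather than OpenAI’s or any vendor’s actual defence, and the capacity and refill figures are arbitrary.

```typescript
// Generic token-bucket rate limiter for Layer 7 traffic. Each request looks
// like legitimate application traffic, so mitigation has to happen per client
// at the application edge rather than at the network layer.

interface Bucket {
  tokens: number;     // remaining request allowance
  lastRefill: number; // epoch milliseconds of the last refill
}

const CAPACITY = 20;      // burst allowance per client (arbitrary)
const REFILL_PER_SEC = 5; // sustained requests/second per client (arbitrary)
const buckets = new Map<string, Bucket>();

function allowRequest(clientKey: string, now: number = Date.now()): boolean {
  const b = buckets.get(clientKey) ?? { tokens: CAPACITY, lastRefill: now };

  // Refill tokens in proportion to elapsed time, capped at CAPACITY.
  const elapsedSec = (now - b.lastRefill) / 1000;
  b.tokens = Math.min(CAPACITY, b.tokens + elapsedSec * REFILL_PER_SEC);
  b.lastRefill = now;

  const allowed = b.tokens >= 1;
  if (allowed) b.tokens -= 1; // spend a token; otherwise answer HTTP 429
  buckets.set(clientKey, b);
  return allowed;
}
```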

Brad Freeman, Director of Technology at SenseOn, commented:

“Distributed denial of service attacks are internet vandalism. Low effort, complexity, and in most cases more of a nuisance than a long-term threat to a business. Often DDOS attacks target services with high volumes of traffic which can be ‘off-ramped’ by their cloud or Internet service provider.

However, as the attacks are on Layer 7 they will be targeting the application itself, therefore OpenAI will need to make some changes to mitigate the attack. It’s likely the threat actor is sending complex queries to OpenAI to overload it; I wonder if they are using AI-generated content to attack AI content generation.”

However, the attribution of these attacks to Anonymous Sudan has raised suspicions among cybersecurity researchers. Some experts suggest that this could be a false flag operation and that the group might instead have connections to Russia, which, along with Iran, is suspected of stoking the conflict and international outrage to benefit its domestic interests.

The situation once again highlights the ongoing challenges faced by organisations dealing with DDoS attacks and the complexities of accurately identifying the perpetrators.

(Photo by Johann Walter Bantz on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Biden issues executive order to ensure responsible AI development
Mon, 30 Oct 2023 10:18:14 +0000

President Biden has issued an executive order aimed at positioning the US at the forefront of AI while ensuring the technology’s safe and responsible use.

The order establishes stringent standards for AI safety and security, safeguards Americans’ privacy, promotes equity and civil rights, protects consumers and workers, fosters innovation and competition, and enhances American leadership on the global stage.

Key actions outlined in the order:

  1. New standards for AI safety and security: The order mandates that developers of powerful AI systems share safety test results and critical information with the US government. Rigorous standards, tools, and tests will be developed to ensure AI systems are safe, secure, and trustworthy before public release. Additionally, measures will be taken to protect against the risks of using AI to engineer dangerous biological materials and combat AI-enabled fraud and deception.
  2. Protecting citizens’ privacy: The President calls on Congress to pass bipartisan data privacy legislation, prioritising federal support for privacy-preserving techniques, especially those using AI. Guidelines will be developed for federal agencies to evaluate the effectiveness of privacy-preserving techniques, including those used in AI systems.
  3. Advancing equity and civil rights: Clear guidance will be provided to prevent AI algorithms from exacerbating discrimination, especially in areas like housing and federal benefit programs. Best practices will be established for the use of AI in the criminal justice system to ensure fairness.
  4. Standing up for consumers, patients, and students: Responsible use of AI in healthcare and education will be promoted, ensuring that consumers are protected from harmful AI applications while benefiting from its advancements in these sectors.
  5. Supporting workers: Principles and best practices will be developed to mitigate the harms and maximise the benefits of AI for workers, addressing issues such as job displacement, workplace equity, and health and safety. A report on AI’s potential labour-market impacts will be produced, identifying options for strengthening federal support for workers facing labour disruptions due to AI.
  6. Promoting innovation and competition: The order aims to catalyse AI research across the US, promote a fair and competitive AI ecosystem, and expand the ability of highly skilled immigrants and non-immigrants to study, stay, and work in the US to foster innovation in the field.
  7. Advancing leadership abroad: The US will collaborate with other nations to establish international frameworks for safe and trustworthy AI deployment. Efforts will be made to accelerate the development and implementation of vital AI standards with international partners and promote the responsible development and deployment of AI abroad to address global challenges.
  8. Ensuring responsible and effective government adoption: Clear standards and guidelines will be issued for government agencies’ use of AI to protect rights and safety. Efforts will be made to help agencies acquire AI products and services more rapidly and efficiently, and an AI talent surge will be initiated to enhance government capacity in AI-related fields.

The executive order signifies a major step forward in the US towards harnessing the potential of AI while safeguarding individuals’ rights and security.

“As we advance this agenda at home, the Administration will work with allies and partners abroad on a strong international framework to govern the development and use of AI,” wrote the White House in a statement.

“The actions that President Biden directed today are vital steps forward in the US’ approach on safe, secure, and trustworthy AI. More action will be required, and the Administration will continue to work with Congress to pursue bipartisan legislation to help America lead the way in responsible innovation.”

The administration’s commitment to responsible innovation is paramount and sets the stage for continued collaboration with international partners to shape the future of AI globally.

(Photo by David Everett Strickler on Unsplash)

See also: UK paper highlights AI risks ahead of global Safety Summit

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Enterprises struggle to address generative AI’s security implications
Wed, 18 Oct 2023 15:54:37 +0000

In a recent study, cloud-native network detection and response firm ExtraHop unveiled a concerning trend: enterprises are struggling with the security implications of employee generative AI use.

Their new research report, The Generative AI Tipping Point, sheds light on the challenges faced by organisations as generative AI technology becomes more prevalent in the workplace.

The report delves into how organisations are dealing with the use of generative AI tools, revealing a significant cognitive dissonance among IT and security leaders. Astonishingly, 73 percent of these leaders confessed that their employees frequently use generative AI tools or large language models (LLMs) at work. Despite this, a staggering majority admitted to being uncertain about how to effectively address the associated security risks.

When questioned about their concerns, IT and security leaders expressed more worry about the possibility of inaccurate or nonsensical responses (40%) than critical security issues such as exposure of customer and employee personal identifiable information (PII) (36%) or financial loss (25%).

Raja Mukerji, Co-Founder and Chief Scientist at ExtraHop, said: “By blending innovation with strong safeguards, generative AI will continue to be a force that will uplevel entire industries in the years to come.”

One of the startling revelations from the study was the ineffectiveness of generative AI bans. About 32 percent of respondents stated that their organisations had prohibited the use of these tools. However, only five percent reported that employees never used these tools—indicating that bans alone are not enough to curb their usage.

The study also highlighted a clear desire for guidance, particularly from government bodies. A significant 90 percent of respondents expressed the need for government involvement, with 60 percent advocating for mandatory regulations and 30 percent supporting government standards for businesses to adopt voluntarily.

Despite a sense of confidence in their current security infrastructure, the study revealed gaps in basic security practices.

While 82 percent felt confident in their security stack’s ability to protect against generative AI threats, less than half had invested in technology to monitor generative AI use. Alarmingly, only 46 percent had established policies governing acceptable use and merely 42 percent provided training to users on the safe use of these tools.

The findings come in the wake of the rapid adoption of technologies like ChatGPT, which have become an integral part of modern businesses. Business leaders are urged to understand their employees’ generative AI usage to identify potential security vulnerabilities.

You can find a full copy of the report here.

(Photo by Hennie Stander on Unsplash)

See also: BSI: Closing ‘AI confidence gap’ key to unlocking benefits

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Dave Barnett, Cloudflare: Delivering speed and security in the AI era
Fri, 13 Oct 2023 15:39:34 +0000

AI News sat down with Dave Barnett, Head of SASE at Cloudflare, during Cyber Security & Cloud Expo Europe to delve into how the firm uses its cloud-native architecture to deliver speed and security in the AI era.

According to Barnett, Cloudflare’s cloud-native approach allows the company to continually innovate in the digital space. Notably, a significant portion of their services are offered to consumers for free.

“We continuously reinvent, we’re very comfortable in the digital space. We’re very proud that the vast majority of our customers actually consume our services for free because it’s our way of giving back to society,” said Barnett.

Barnett also revealed Cloudflare’s focus on AI during their anniversary week. The company aims to enable organisations to consume AI securely and make it accessible to everyone. Barnett says that Cloudflare achieves those goals in three key ways.

“One, as I mentioned, is operating AI inference engines within Cloudflare close to consumers’ eyeballs. The second area is securing the use of AI within the workplace, because, you know, AI has some incredibly positive impacts on people … but the problem is there are some data protection requirements around that,” explains Barnett.

“Finally, is the question of, ‘Could AI be used by the bad guys against the good guys?’ and that’s an area that we’re continuing to explore.”

Just a day earlier, AI News heard from Raviv Raz, Cloud Security Manager at ING, during a session at the expo that focused on the alarming potential of AI-powered cybercrime.

Regarding security models, Barnett discussed the evolution of the zero-trust concept, emphasising its practical applications in enhancing both usability and security. Cloudflare’s own journey with zero-trust began with a focus on usability, leading to the development of its own zero-trust network access products.

“We have servers everywhere and engineers everywhere that need to reboot those servers. In 2015, that involved VPNs and two-factor authentication… so we built our own zero-trust network access product for our own use that meant the user experiences for engineers rebooting servers in far-flung places was a lot better,” says Barnett.

“After 2015, the world started to realise that this approach had great security benefits so we developed that product and launched it in 2018 as Cloudflare Access.”
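As a rough illustration of what a zero-trust access product hands to the application sitting behind it, the sketch below checks the signed identity assertion that a proxy such as Cloudflare Access attaches to each request. The header name and certs path follow Cloudflare’s published conventions, but the team domain and audience tag are placeholders, and this is our sketch rather than Cloudflare reference code.

```typescript
// Sketch: trust a request only if it carries a valid assertion from the
// zero-trust proxy. Uses the widely available "jose" JWT library.
import { createRemoteJWKSet, jwtVerify } from "jose";

const TEAM_DOMAIN = "https://example-team.cloudflareaccess.com"; // placeholder
const AUDIENCE = "your-application-audience-tag";                // placeholder

// Public signing keys are fetched (and cached) from the proxy's certs endpoint.
const jwks = createRemoteJWKSet(new URL(`${TEAM_DOMAIN}/cdn-cgi/access/certs`));

export async function assertAuthenticated(request: Request): Promise<void> {
  const token = request.headers.get("Cf-Access-Jwt-Assertion");
  if (!token) {
    throw new Error("Request did not traverse the access proxy");
  }
  // Throws if the signature, issuer, audience, or expiry check fails.
  await jwtVerify(token, jwks, { issuer: TEAM_DOMAIN, audience: AUDIENCE });
}
```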

Cloudflare’s innovative strides also include leveraging NVIDIA GPUs to accelerate machine learning AI tasks on an edge network. This technology enables organisations to run inference tasks – such as image recognition – close to end-users, ensuring low latency and optimal performance.

“We launched Workers AI, which means that organisations around the world – in fact, individuals as well – can run their inference tasks at a very close place to where the consumers of that inference are,” explains Barnett.

“You could ask a question, ‘Cat or not cat?’, to a trained cat detection engine very close to the people that need it. We’re doing that in a way that makes it easily accessible to organisations looking to use AI to benefit their business.”
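Barnett’s ‘cat or not cat?’ example translates into a small edge function. The sketch below shows roughly what that looks like as a Cloudflare Worker with an AI binding; the model identifier and input format are assumptions based on Cloudflare’s public model catalogue, so treat this as a hedged sketch rather than verified sample code.

```typescript
// Hypothetical Worker: classify an uploaded image at the edge location
// closest to the user, keeping inference latency low.

interface Env {
  // The AI binding's real type lives in @cloudflare/workers-types;
  // a minimal structural type keeps this sketch self-contained.
  AI: { run(model: string, input: unknown): Promise<unknown> };
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    if (request.method !== "POST") {
      return new Response("POST an image to classify", { status: 405 });
    }
    const bytes = new Uint8Array(await request.arrayBuffer());

    // Assumed catalogue model; swap in whichever classifier you deploy.
    const result = await env.AI.run("@cf/microsoft/resnet-50", {
      image: Array.from(bytes),
    });

    // e.g. [{ label: "tabby cat", score: 0.93 }, ...]
    return Response.json(result);
  },
};
```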

For developers interested in AI, Barnett outlined Cloudflare’s role in supporting the deployment of machine learning models. While machine learning training is typically conducted outside Cloudflare, the company excels in providing low-latency inference engines that are essential for real-time applications like image recognition.

Our conversation with Barnett shed light on Cloudflare’s commitment to cloud-native architecture, AI accessibility, and cybersecurity. As the industry continues to advance, Cloudflare remains at the forefront of delivering speed and security in the AI era.

You can watch our full interview with Dave Barnett below:

(Photo by ryan baker on Unsplash)

See also: JPMorgan CEO: AI will be used for ‘every single process’

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo, Edge Computing Expo, and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.
