AI Security News | Latest AI in Security News | AI News
https://www.artificialintelligence-news.com/categories/ai-security/

EU AI legislation sparks controversy over data transparency
Fri, 14 Jun 2024
https://www.artificialintelligence-news.com/2024/06/14/eu-ai-legislation-sparks-controversy-over-data-transparency/

The European Union recently introduced the AI Act, a new governance framework compelling organisations to enhance transparency regarding their AI systems’ training data.

Should this legislation come into force, it could penetrate the defences that many in Silicon Valley have built against such detailed scrutiny of AI development and deployment processes.

Since the public release of OpenAI’s Microsoft-backed ChatGPT 18 months ago, there has been significant growth in interest and investment in generative AI technologies. These applications, capable of writing text, creating images, and producing audio content at record speeds, have attracted considerable attention. However, the accompanying surge in AI activity prompts a pressing question: how do AI developers actually source the data needed to train their models, and does that sourcing rely on unauthorised copyrighted material?

Implementing the AI Act

The EU’s AI Act, intended to be implemented gradually over the next two years, aims to address these issues. New laws take time to embed, and a gradual rollout gives regulators the necessary time to adapt and businesses time to adjust to their new obligations. However, the implementation of some rules remains in doubt.

One of the more contentious sections of the Act stipulates that organisations deploying general-purpose AI models, such as ChatGPT, must provide “detailed summaries” of the content used to train them. The newly established AI Office has announced plans to release a template for organisations to follow in early 2025, following consultation with stakeholders.

AI companies have expressed strong resistance to revealing their training data, describing this information as trade secrets that would provide competitors with an unfair advantage if made public. The level of detail required in these transparency reports will have significant implications for both smaller AI startups and major tech companies like Google and Meta, which have positioned AI technology at the centre of their future operations.

Over the past year, several top technology companies—Google, OpenAI, and Stability AI—have faced lawsuits from creators who claim their content was used without permission to train AI models. Under growing scrutiny, however, some tech companies have, in the past two years, broken with industry practice and negotiated content-licensing deals with individual media outlets and websites. Some creators and lawmakers remain concerned that these measures are not sufficient.

European lawmakers’ divide

In Europe, differences among lawmakers are stark. Dragos Tudorache, who led the drafting of the AI Act in the European Parliament, argues that AI companies should be required to open-source their datasets. Tudorache emphasises the importance of transparency so that creators can determine whether their work has been used to train AI algorithms.

Conversely, under the leadership of President Emmanuel Macron, the French government has privately opposed introducing rules that could hinder the competitiveness of European AI startups. French Finance Minister Bruno Le Maire has emphasised the need for Europe to be a world leader in AI, not merely a consumer of American and Chinese products.

The AI Act acknowledges the need to balance the protection of trade secrets with the facilitation of rights for parties with legitimate interests, including copyright holders. However, striking this balance remains a significant challenge.

Different industries vary on this matter. Matthieu Riouf, CEO of the AI-powered image-editing firm Photoroom, compares the situation to culinary practice, claiming there’s a secret part of the recipe that the best chefs wouldn’t share. His stance is typical of the many companies that treat training data as a closely guarded competitive asset. However, Thomas Wolf, co-founder of one of the world’s top AI startups, Hugging Face, argues that while there will always be an appetite for transparency, it doesn’t mean that the entire industry will adopt a transparency-first approach.

A series of recent controversies have driven home just how complicated this all is. OpenAI demonstrated the latest version of ChatGPT in a public session, where the company was roundly criticised for using a synthetic voice that sounded nearly identical to that of actress Scarlett Johansson. These examples point to the potential for AI technologies to violate personal and proprietary rights.

Throughout the development of these regulations, there has been heated debate about their potential effects on future innovation and competitiveness in the AI world. In particular, the French government has urged that innovation, not regulation, should be the starting point, given the dangers of regulating aspects that have not been fully comprehended.

The way the EU regulates AI transparency could have significant impacts on tech companies, digital creators, and the overall digital landscape. Policymakers thus face the challenge of fostering innovation in the dynamic AI industry while simultaneously guiding it towards safe, ethical decisions and preventing IP infringement.

In sum, if adopted, the EU AI Act would be a significant step toward greater transparency in AI development. However, the practical implementation of these regulations and their industry results could be far off. Moving forward, especially at the dawn of this new regulatory paradigm, the balance between innovation, ethical AI development, and the protection of intellectual property will remain a central and contested issue for stakeholders of all stripes to grapple with.

See also: Apple is reportedly getting free ChatGPT access

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Gil Pekelman, Atera: How businesses can harness the power of AI
Tue, 28 May 2024
https://www.artificialintelligence-news.com/2024/05/28/gil-pekelman-atera-how-businesses-can-harness-the-power-of-ai/

TechForge recently caught up with Gil Pekelman, CEO of all-in-one IT management platform, Atera, to discuss how AI is becoming the IT professionals’ number one companion.

Can you tell us a little bit about Atera and what it does?

We launched the Atera all-in-one platform for IT management in 2016, so quite a few years ago. And it’s very broad. It’s everything from technical things like patching and security to ongoing support, alerts, automations, ticket management, reports, and analytics, etc. 

Atera is a single platform that manages all your IT in a single pane of glass. The power of it – and we’re the only company that does this – is it’s a single codebase and single database for all of that. The alternative, for many years now, has been to buy four or five different products, and have them all somehow connected, which is usually very difficult. 

Here, the fact is it’s a single codebase and a single database. Everything is connected and streamlined and very intuitive. So, in essence, you sign up or start a trial and within five minutes, you’re already running with it and onboarding. It’s that intuitive.

We have 12,000+ customers in 120 countries around the world. The UK is our second-largest country in terms of business, currently. The US is the first, but the UK is right behind them.

What are the latest trends you’re seeing develop in AI this year?

From the start, we’ve been dedicated to integrating AI into our company’s DNA. Our goal has always been to use data to identify problems and alert humans so they can fix or avoid issues. Initially, we focused on leveraging data to provide solutions.

Over the past nine years, we’ve aimed to let AI handle mundane IT tasks, freeing up professionals for more engaging work. With early access to ChatGPT and OpenAI tools a year and a half ago, we’ve been pioneering a new trend we call Action AI.

Unlike generic Generative AI, which creates content like songs or emails, Action AI operates in the real world, interacting with hardware and software to perform tasks autonomously. Our AI can understand IT problems and resolve them on its own, moving beyond mere dialogue to real-world action.

Atera offers Copilot and Autopilot. Could you explain what these are?

Autopilot is autonomous. It’s a widget on your computer that understands a problem you might have, and it will communicate with you and fix the problem autonomously. However, it has boundaries on what it’s allowed to fix and what it’s not allowed to fix. And everything it’s allowed to deal with has to be bulletproof: 100% secure and private, with no opportunity to do any damage or anything like that.

So if a ticket is opened up, or a complaint is raised, if it’s outside of these boundaries, it will then activate the Copilot. The Copilot augments the IT professional.

They’re both companions. The Autopilot is a companion that takes away password resets, printer issues, installs software, etc. – mundane and repetitive issues – and the Copilot is a companion that will help the IT professional deal with the issues they deal with on a day-to-day basis. And it has all kinds of different tools. 

The Copilot is very elaborate. If you have a problem, you can ask it and it will not only give you an answer like ChatGPT, but it will research and run all kinds of tests on the network, the computer, and the printer, and it will come to a conclusion, and create the action that is required to solve it. But it won’t solve it. It will still leave that to the IT professional to think about the different information and decide what they want to do. 
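The Autopilot/Copilot division of labour described above can be sketched as a simple triage rule. This is an illustrative assumption of the pattern only; the function names and the set of "safe" issue types are invented for the example and are not Atera's actual API.

```python
# Hypothetical sketch of the triage pattern described in the interview:
# issues inside a strict, pre-approved boundary are resolved autonomously;
# anything else is handed to a human-assisted Copilot with diagnostics.
# The names and the safe-issue set below are illustrative assumptions.

AUTOPILOT_SAFE_ISSUES = {"password_reset", "printer_issue", "software_install"}

def triage(ticket: dict) -> str:
    """Route a ticket: auto-resolve within Autopilot's boundaries,
    otherwise escalate to the Copilot for a human decision."""
    issue = ticket["issue_type"]
    if issue in AUTOPILOT_SAFE_ISSUES:
        return f"autopilot: resolved {issue} autonomously"
    # Outside the boundary: prepare diagnostics, leave the decision to IT.
    return f"copilot: diagnostics gathered for {issue}, awaiting IT professional"

print(triage({"issue_type": "password_reset"}))
print(triage({"issue_type": "database_corruption"}))
```

The key design point the interview makes is that the autonomous path is deliberately narrow: escalation, not automation, is the default for anything outside the pre-approved set.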

Copilot can save IT professionals nearly half of their workday. While it’s been tested in the field for some time, we’re excited to officially launch it now. Meanwhile, Autopilot is still in the beta phase.

What advice would you give to any companies that are thinking about integrating AI technologies into their business operations?

I strongly recommend that companies begin integrating AI technologies immediately, but it is crucial to research and select the right and secure generative AI tools. Incorporating AI offers numerous advantages: it automates routine tasks, enhances efficiency and productivity, improves accuracy by reducing human error, and speeds up problem resolution. That being said, it’s important to pick the right generative AI tool to help you reap the benefits without compromising on security. For example, with our collaboration with Microsoft, our customers’ data is secure—it stays within the system, and the AI doesn’t use it for training or expanding its database. This ensures safety while delivering substantial benefits.

Our incorporation of AI into our product focuses on two key aspects. First, your IT team no longer has to deal with mundane, frustrating tasks. Second, for end users, issues like non-working printers, forgotten passwords, or slow internet are resolved in seconds or minutes instead of hours. This provides a measurable and significant improvement in efficiency.

There are all kinds of AIs out there. Some of them are more beneficial, some are less. Some are just ChatGPT in disguise, with a very thin layer on top. What we do literally changes the whole interaction with IT. And we know that when IT has a problem, things stop working, and you stop working. Our solution ensures everything keeps running smoothly.

What can we expect from AI over the next few years?

AI is set to become significantly more intelligent and aware. One remarkable development is its growing ability to reason, predict, and understand data. This capability enables AI to foresee issues and autonomously resolve them, showcasing an astonishing level of reasoning.

We anticipate a dual advancement: a rapid acceleration in AI’s intelligence and a substantial enhancement in its empathetic interactions, as demonstrated in the latest OpenAI release. This evolution will transform how humans engage with AI.

Our work exemplifies this shift. When non-technical users interact with our software to solve problems, AI responds with a highly empathetic, human-like approach. Users feel as though they are speaking to a real IT professional, ensuring a seamless and comforting experience.

As AI continues to evolve, it will become increasingly powerful and capable. Recent breakthroughs in understanding AI’s mechanisms will not only enhance its functionality but also ensure its security and ethical use, reinforcing its role as a force for good.

What plans does Atera have for the next year?

We are excited to announce the upcoming launch of Autopilot, scheduled for release in a few months. While Copilot, our comprehensive suite of advanced tools designed specifically for IT professionals, has already been instrumental in enhancing efficiency and effectiveness, Autopilot represents the next significant advancement.

Currently in beta (so anyone who wants to try it already can), Autopilot directly interacts with end users, automating and resolving common IT issues that typically burden IT staff, such as password resets and printer malfunctions. By addressing these routine tasks, Autopilot allows IT professionals to focus on more strategic and rewarding activities, ultimately improving overall productivity and job satisfaction.

For more information, visit atera.com

Atera is a sponsor of TechEx North America 2024 on June 5-6 in Santa Clara, US. Visit the Atera team at booth 237 for a personalised demo, or to test your IT skills with the company’s first-of-its-kind AIT game, APOLLO IT, for a chance to win a prize.

Ethical, trust and skill barriers hold back generative AI progress in EMEA
Mon, 20 May 2024
https://www.artificialintelligence-news.com/2024/05/20/ethical-trust-and-skill-barriers-hold-back-generative-ai-progress-in-emea/

76% of consumers in EMEA think AI will significantly impact the next five years, yet 47% question the value that AI will bring and 41% are worried about its applications.

This is according to research from enterprise analytics AI firm Alteryx.

Since the release of ChatGPT by OpenAI in November 2022, there has been significant buzz about the transformative potential of generative AI, with many considering it one of the most revolutionary technologies of our time. 

With a significant 79% of organisations reporting that generative AI contributes positively to business, it is evident that a gap needs to be addressed to demonstrate AI’s value to consumers both in their personal and professional lives. According to the ‘Market Research: Attitudes and Adoption of Generative AI’ report, which surveyed 690 IT business leaders and 1,100 members of the general public in EMEA, key issues of trust, ethics and skills are prevalent, potentially impeding the successful deployment and broader acceptance of generative AI.

The impact of misinformation, inaccuracies, and AI hallucinations

These hallucinations – where AI generates incorrect or illogical outputs – are a significant concern, and trusting what generative AI produces is a substantial issue for both business leaders and consumers. Over a third of the public are anxious about AI’s potential to generate fake news (36%) and its misuse by hackers (42%), while half of business leaders report that their organisations are grappling with misinformation produced by generative AI.

Moreover, the reliability of information provided by generative AI has been questioned. Feedback from the general public indicates that half of the data received from AI was inaccurate, and 38% perceived it as outdated. On the business front, concerns include generative AI infringing on copyright or intellectual property rights (40%), and producing unexpected or unintended outputs (36%).

A critical trust issue for businesses (62%) and the public (74%) revolves around AI hallucinations. For businesses, the challenge involves applying generative AI to appropriate use cases, supported by the right technology and safety measures, to mitigate these concerns. Close to half of the consumers (45%) are advocating for regulatory measures on AI usage.

Ethical concerns and risks persist in the use of generative AI

In addition to these challenges, there are strong and similar sentiments on ethical concerns and the risks associated with generative AI among both business leaders and consumers. More than half of the general public (53%) oppose the use of generative AI in making ethical decisions. Meanwhile, 41% of business respondents are concerned about its application in critical decision-making areas. There are distinctions in the specific areas where its use is discouraged; consumers notably oppose its use in politics (46%), and businesses are cautious about its deployment in healthcare (40%).

These concerns find some validation in the research findings, which highlight worrying gaps in organisational practices. Only a third of leaders confirmed that their businesses ensure the data used to train generative AI is diverse and unbiased. Furthermore, only 36% have set ethical guidelines, and 52% have established data privacy and security policies for generative AI applications.

This lack of emphasis on data integrity and ethical considerations puts firms at risk. 63% of business leaders cite ethics as their major concern with generative AI, closely followed by data-related issues (62%). This scenario emphasises the importance of better governance to create confidence and mitigate risks related to how employees use generative AI in the workplace. 

The rise of generative AI skills and the need for enhanced data literacy

As generative AI evolves, establishing relevant skill sets and enhancing data literacy will be key to realising its full potential. Consumers are increasingly using generative AI technologies in various scenarios, including information retrieval, email communication, and skill acquisition. Business leaders say they use generative AI for data analysis, cybersecurity, and customer support. Despite the reported success of pilot projects, several challenges remain, including security problems, data privacy issues, and output quality and reliability.

Trevor Schulze, Alteryx’s CIO, emphasised the necessity for both enterprises and the general public to fully understand the value of AI and address common concerns as they navigate the early stages of generative AI adoption.

He noted that addressing trust issues, ethical concerns, skills shortages, fears of privacy invasion, and algorithmic bias are critical tasks. Schulze underlined the necessity for enterprises to expedite their data journey, adopt robust governance, and allow non-technical individuals to access and analyse data safely and reliably, addressing privacy and bias concerns in order to genuinely profit from this ‘game-changing’ technology.

IBM and Tech Mahindra unveil new era of trustworthy AI with watsonx
Fri, 17 May 2024
https://www.artificialintelligence-news.com/2024/05/17/ibm-and-tech-mahindra-unveil-new-era-of-trustworthy-ai-with-watsonx/

Tech Mahindra, a global provider of technology consulting and digital solutions, has collaborated with IBM to help organisations sustainably accelerate generative AI use worldwide.

This collaboration combines Tech Mahindra’s range of AI offerings, TechM amplifAI0->∞, and IBM’s watsonx AI and data platform with AI Assistants.

Customers can now combine IBM watsonx’s capabilities with Tech Mahindra’s AI consulting and engineering skills to access a variety of new generative AI services, frameworks, and solution architectures. This enables the development of AI apps in which organisations can use their trusted data to automate processes. It also provides a basis for businesses to create trustworthy AI models, promotes explainability to help manage risk and bias, and enables scalable AI adoption across hybrid cloud and on-premises environments.

According to Kunal Purohit, Tech Mahindra’s chief digital services officer, organisations are focused on responsible AI practices and on incorporating generative AI technologies to revitalise their enterprises.

“Our work with IBM can help advance digital transformation for organisations, adoption of GenAI, modernisation, and ultimately foster business growth for our global customers,” Purohit added.

To further enhance business capabilities in AI, Tech Mahindra has established a virtual watsonx Centre of Excellence (CoE), which is already operational. This CoE functions as a co-innovation centre, with a dedicated team tasked with maximising synergies between the two companies and producing unique offerings and solutions based on their combined capabilities.

The collaborative offerings and solutions developed through this partnership could help enterprises achieve their goals of constructing machine learning models using open-source frameworks while also enabling them to scale and accelerate the impact of generative AI. These AI-driven solutions have the potential to help organisations enhance efficiency and productivity responsibly.

Kate Woolley, GM of IBM Ecosystem, emphasised the collaboration’s potential, adding that generative AI may serve as a catalyst for innovation, unlocking new market opportunities when built on a foundation of explainability, transparency, and trust. 

Woolley said: “Our work with Tech Mahindra is expected to expand the reach of watsonx, allowing even more customers to build trustworthy AI as we seek to combine our technology and expertise to support enterprise use cases such as code modernisation, digital labour, and customer service.”

This collaboration aligns with Tech Mahindra’s continuous endeavour to transform enterprises with advanced AI-led offerings and solutions, including their recent additions like Vision amplifAIer, Ops amplifAIer, Email amplifAIer, Enterprise Knowledge Search offering, Evangelize Pair Programming, and Generative AI Studio.

It is worth mentioning that the two companies have previously collaborated. Earlier this year, Tech Mahindra announced the opening of a Synergy Lounge in conjunction with IBM on the company’s Singapore campus. This Lounge seeks to accelerate digital adoption for APAC organisations. It aids in operationalising and leveraging next-generation technologies such as AI, intelligent automation, hybrid cloud, 5G, edge computing, and cybersecurity.

Beyond Tech Mahindra, IBM watsonx has been used in other collaborations to speed up the deployment of generative AI. Earlier this year, the GSMA and IBM announced a new partnership to support the use and capabilities of generative AI in the telecom industry by launching GSMA Advance’s AI Training program and the GSMA Foundry Generative AI program.

In addition, there is a digital version of the program that covers both the commercial strategy and technology fundamentals of generative AI. This initiative uses IBM watsonx to provide hands-on training for architects and developers seeking in-depth practical generative AI knowledge.

80% of AI decision makers are worried about data privacy and security
Wed, 17 Apr 2024
https://www.artificialintelligence-news.com/2024/04/17/80-of-ai-decision-makers-are-worried-about-data-privacy-and-security/

Organisations are enthusiastic about generative AI’s potential for increasing their business and people productivity, but lack of strategic planning and talent shortages are preventing them from realising its true value.

This is according to a study conducted in early 2024 by Coleman Parkes Research and sponsored by data analytics firm SAS, which surveyed 300 US GenAI strategy or data analytics decision makers to pulse check major areas of investment and the hurdles organisations are facing.

Marinela Profi, strategic AI advisor at SAS, said: “Organisations are realising that large language models (LLMs) alone don’t solve business challenges. 

“GenAI should be treated as an ideal contributor to hyper automation and the acceleration of existing processes and systems rather than the new shiny toy that will help organisations realise all their business aspirations. Time spent developing a progressive strategy and investing in technology that offers integration, governance and explainability of LLMs are crucial steps all organisations should take before jumping in with both feet and getting ‘locked in.’”

Organisations are hitting stumbling blocks in four key areas of implementation:

• Increasing trust in data usage and achieving compliance. Only one in 10 organisations has a reliable system in place to measure bias and privacy risk in LLMs. Moreover, 93% of US businesses lack a comprehensive governance framework for GenAI, and the majority are at risk of noncompliance when it comes to regulation.

• Integrating GenAI into existing systems and processes. Organisations reveal they’re experiencing compatibility issues when trying to combine GenAI with their current systems.

• Talent and skills. In-house GenAI is lacking. As HR departments encounter a scarcity of suitable hires, organisational leaders worry they don’t have access to the necessary skills to make the most of their GenAI investment.

• Predicting costs. Leaders cite prohibitive direct and indirect costs associated with using LLMs. Model creators provide a token cost estimate (which organisations now realise can be prohibitive), but the additional work of private knowledge preparation, training and ModelOps management is lengthy and complex, making total costs hard to forecast.
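The token-cost arithmetic behind the last point can be illustrated with a toy estimator. The per-token prices below are invented placeholders, not any vendor's actual rates, and the function deliberately covers only the direct token cost the survey says model creators quote:

```python
# Toy LLM cost estimator. Prices are invented placeholders for
# illustration -- real per-token rates vary by vendor and model.

def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  price_in_per_1k: float = 0.01,
                  price_out_per_1k: float = 0.03) -> float:
    """Direct token cost only. Excludes private knowledge preparation,
    training and ModelOps overhead, which the survey flags as the
    harder-to-predict part of total spend."""
    return (prompt_tokens / 1000) * price_in_per_1k \
         + (completion_tokens / 1000) * price_out_per_1k

# 1M prompt tokens plus 200k completion tokens at the placeholder rates:
print(round(estimate_cost(1_000_000, 200_000), 2))  # → 16.0
```

The point the survey makes is precisely that this easy arithmetic is the smaller, visible slice of cost; the indirect costs scale with data and operations rather than tokens, which is why leaders find them hard to predict.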

Profi added: “It’s going to come down to identifying real-world use cases that deliver the highest value and solve human needs in a sustainable and scalable manner. 

“Through this study, we’re continuing our commitment to helping organisations stay relevant, invest their money wisely and remain resilient. In an era where AI technology evolves almost daily, competitive advantage is highly dependent on the ability to embrace the resiliency rules.”

Details of the study were unveiled today at SAS Innovate in Las Vegas, SAS Software’s AI and analytics conference for business leaders, technical users and SAS partners.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post 80% of AI decision makers are worried about data privacy and security appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2024/04/17/80-of-ai-decision-makers-are-worried-about-data-privacy-and-security/feed/ 0
NCSC: AI to significantly boost cyber threats over next two years https://www.artificialintelligence-news.com/2024/01/24/ncsc-ai-significantly-boost-cyber-threats-next-two-years/ https://www.artificialintelligence-news.com/2024/01/24/ncsc-ai-significantly-boost-cyber-threats-next-two-years/#respond Wed, 24 Jan 2024 16:50:10 +0000 https://www.artificialintelligence-news.com/?p=14257 A report published by the UK’s National Cyber Security Centre (NCSC) warns that AI will substantially increase cyber threats over the next two years.  The centre warns of a surge in ransomware attacks in particular; involving hackers deploying malicious software to encrypt a victim’s files or entire system and demanding a ransom payment for the... Read more »

The post NCSC: AI to significantly boost cyber threats over next two years appeared first on AI News.

]]>
A report published by the UK’s National Cyber Security Centre (NCSC) warns that AI will substantially increase cyber threats over the next two years. 

The centre warns of a surge in ransomware attacks in particular; involving hackers deploying malicious software to encrypt a victim’s files or entire system and demanding a ransom payment for the decryption key.

The NCSC assessment predicts AI will enhance threat actors’ capabilities mainly in carrying out more persuasive phishing attacks that trick individuals into providing sensitive information or clicking on malicious links.

“Generative AI can already create convincing interactions like documents that fool people, free of the translation and grammatical errors common in phishing emails,” the report states. 

This capability is identified as a key contributor to the rising threat landscape over the next two years.

The NCSC assessment identifies challenges in cyber resilience, citing the difficulty in verifying the legitimacy of emails and password reset requests due to generative AI and large language models. The shrinking time window between security updates and threat exploitation further complicates rapid vulnerability patching for network managers.
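The verification difficulty the assessment describes is one reason machine-checkable signals matter more as AI-written prose improves: convincing wording can be generated, but email authentication results cannot. A minimal sketch of extracting those verdicts (the header text and parsing approach here are a hypothetical illustration, not an NCSC recommendation):

```python
import re

# Parse SPF/DKIM/DMARC verdicts out of an Authentication-Results
# header. AI-polished phishing prose can pass a human read, but it
# cannot forge these cryptographic and DNS-based checks.
def auth_verdicts(header: str) -> dict:
    verdicts = {}
    for mech in ("spf", "dkim", "dmarc"):
        m = re.search(rf"\b{mech}=(\w+)", header)
        if m:
            verdicts[mech] = m.group(1)
    return verdicts

# Hypothetical header for illustration:
example = "mx.example.net; spf=pass smtp.mailfrom=news.example; dkim=pass header.d=example; dmarc=fail"
print(auth_verdicts(example))  # → {'spf': 'pass', 'dkim': 'pass', 'dmarc': 'fail'}
```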

James Babbage, director general for threats at the National Crime Agency, commented: “AI services lower barriers to entry, increasing the number of cyber criminals, and will boost their capability by improving the scale, speed, and effectiveness of existing attack methods.”

However, the NCSC report also outlined how AI could bolster cybersecurity through improved attack detection and system design. It calls for further research on how developments in defensive AI solutions can mitigate evolving threats.

Currently, the quality data, skills, tools, and time that advanced AI-powered cyber operations require make them feasible mainly for highly capable state actors. But the NCSC warns these barriers to entry will progressively fall as capable groups monetise and sell AI-enabled hacking tools.

Extent of capability uplift by AI over next two years:

(Credit: NCSC)

Lindy Cameron, CEO of the NCSC, stated: “We must ensure that we both harness AI technology for its vast potential and manage its risks – including its implications on the cyber threat.”

The UK government has allocated £2.6 billion under its Cyber Security Strategy 2022 to strengthen the country’s resilience to emerging high-tech threats.

AI is positioned to substantially change the cyber risk landscape in the near future. Continuous investment in defensive capabilities and research will be vital to counteract its potential to empower attackers.

A full copy of the NCSC’s report can be found here.

(Photo by Muha Ajjan on Unsplash)

See also: AI-generated Biden robocall urges Democrats not to vote


]]>
https://www.artificialintelligence-news.com/2024/01/24/ncsc-ai-significantly-boost-cyber-threats-next-two-years/feed/ 0
Global AI security guidelines endorsed by 18 countries https://www.artificialintelligence-news.com/2023/11/27/global-ai-security-guidelines-endorsed-by-18-countries/ https://www.artificialintelligence-news.com/2023/11/27/global-ai-security-guidelines-endorsed-by-18-countries/#respond Mon, 27 Nov 2023 10:28:13 +0000 https://www.artificialintelligence-news.com/?p=13954 The UK has published the world’s first global guidelines for securing AI systems against cyberattacks. The new guidelines aim to ensure AI technology is developed safely and securely. The guidelines were developed by the UK’s National Cyber Security Centre (NCSC) and the US’ Cybersecurity and Infrastructure Security Agency (CISA). They have already secured endorsements from... Read more »

The post Global AI security guidelines endorsed by 18 countries appeared first on AI News.

]]>
The UK has published the world’s first global guidelines for securing AI systems against cyberattacks. The new guidelines aim to ensure AI technology is developed safely and securely.

The guidelines were developed by the UK’s National Cyber Security Centre (NCSC) and the US’ Cybersecurity and Infrastructure Security Agency (CISA). They have already secured endorsements from 17 other countries, including all G7 members.

The guidelines provide recommendations for developers and organisations using AI to incorporate cybersecurity at every stage. This “secure by design” approach advises baking in security from the initial design phase through development, deployment, and ongoing operations.  

Specific guidelines cover four key areas: secure design, secure development, secure deployment, and secure operation and maintenance. They suggest security behaviours and best practices for each phase.

The launch event in London convened over 100 industry, government, and international partners. Speakers included representatives from Microsoft, the Alan Turing Institute, and cyber agencies from the US, Canada, Germany, and the UK.

NCSC CEO Lindy Cameron stressed the need for proactive security amidst AI’s rapid pace of development. She said, “security is not a postscript to development but a core requirement throughout.”

The guidelines build on existing UK leadership in AI safety. Last month, the UK hosted the first international summit on AI safety at Bletchley Park.

US Secretary of Homeland Security Alejandro Mayorkas said: “We are at an inflection point in the development of artificial intelligence, which may well be the most consequential technology of our time. Cybersecurity is key to building AI systems that are safe, secure, and trustworthy.

“The guidelines jointly issued today by CISA, NCSC, and our other international partners, provide a common-sense path to designing, developing, deploying, and operating AI with cybersecurity at its core.”

The 18 endorsing countries span Europe, Asia-Pacific, Africa, and the Americas. Here is the full list of international signatories:

  • Australia – Australian Signals Directorate’s Australian Cyber Security Centre (ACSC)
  • Canada – Canadian Centre for Cyber Security (CCCS) 
  • Chile – Chile’s Government CSIRT
  • Czechia – Czechia’s National Cyber and Information Security Agency (NUKIB)
  • Estonia – Information System Authority of Estonia (RIA) and National Cyber Security Centre of Estonia (NCSC-EE)
  • France – French Cybersecurity Agency (ANSSI)
  • Germany – Germany’s Federal Office for Information Security (BSI)
  • Israel – Israeli National Cyber Directorate (INCD)
  • Italy – Italian National Cybersecurity Agency (ACN)
  • Japan – Japan’s National Center of Incident Readiness and Strategy for Cybersecurity (NISC) and Japan’s Secretariat of Science, Technology and Innovation Policy, Cabinet Office
  • New Zealand – New Zealand National Cyber Security Centre
  • Nigeria – Nigeria’s National Information Technology Development Agency (NITDA)
  • Norway – Norwegian National Cyber Security Centre (NCSC-NO)
  • Poland – Poland’s NASK National Research Institute (NASK)
  • Republic of Korea – Republic of Korea National Intelligence Service (NIS)
  • Singapore – Cyber Security Agency of Singapore (CSA)
  • United Kingdom – National Cyber Security Centre (NCSC)
  • United States of America – Cybersecurity and Infrastructure Security Agency (CISA); National Security Agency (NSA); Federal Bureau of Investigation (FBI)

UK Science and Technology Secretary Michelle Donelan positioned the new guidelines as cementing the UK’s role as “an international standard bearer on the safe use of AI.”

“Just weeks after we brought world leaders together at Bletchley Park to reach the first international agreement on safe and responsible AI, we are once again uniting nations and companies in this truly global effort,” adds Donelan.

The guidelines are now published on the NCSC website alongside explanatory blogs. Developer uptake will be key to translating the secure by design vision into real-world improvements in AI security.

(Photo by Jan Antonin Kolar on Unsplash)

See also: Paul O’Sullivan, Salesforce: Transforming work in the GenAI era


]]>
https://www.artificialintelligence-news.com/2023/11/27/global-ai-security-guidelines-endorsed-by-18-countries/feed/ 0
DHS AI roadmap prioritises cybersecurity and national safety https://www.artificialintelligence-news.com/2023/11/15/dhs-ai-roadmap-prioritises-cybersecurity-national-safety/ https://www.artificialintelligence-news.com/2023/11/15/dhs-ai-roadmap-prioritises-cybersecurity-national-safety/#respond Wed, 15 Nov 2023 10:10:47 +0000 https://www.artificialintelligence-news.com/?p=13893 The Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) has launched its inaugural Roadmap for AI. Viewed as a crucial step in the broader governmental effort to ensure the secure development and implementation of AI capabilities, the move aligns with President Biden’s recent Executive Order. “DHS has a broad leadership role in... Read more »

The post DHS AI roadmap prioritises cybersecurity and national safety appeared first on AI News.

]]>
The Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) has launched its inaugural Roadmap for AI.

Viewed as a crucial step in the broader governmental effort to ensure the secure development and implementation of AI capabilities, the move aligns with President Biden’s recent Executive Order.

“DHS has a broad leadership role in advancing the responsible use of AI and this cybersecurity roadmap is one important element of our work,” said Secretary of Homeland Security Alejandro N. Mayorkas.

“The Biden-Harris Administration is committed to building a secure and resilient digital ecosystem that promotes innovation and technological progress.” 

Following the Executive Order, DHS is mandated to globally promote AI safety standards, safeguard US networks and critical infrastructure, and address risks associated with AI—including potential use “to create weapons of mass destruction”.

“In last month’s Executive Order, the President called on DHS to promote the adoption of AI safety standards globally and help ensure the safe, secure, and responsible use and development of AI,” added Mayorkas.

“CISA’s roadmap lays out the steps that the agency will take as part of our Department’s broader efforts to both leverage AI and mitigate its risks to our critical infrastructure and cyber defenses.”

CISA’s roadmap outlines five strategic lines of effort, providing a blueprint for concrete initiatives and a responsible approach to integrating AI into cybersecurity.

CISA Director Jen Easterly highlighted the dual nature of AI, acknowledging its promise in enhancing cybersecurity while warning of the immense risks it poses.

“Artificial Intelligence holds immense promise in enhancing our nation’s cybersecurity, but as the most powerful technology of our lifetimes, it also presents enormous risks,” commented Easterly.

“Our Roadmap for AI – focused at the nexus of AI, cyber defense, and critical infrastructure – sets forth an agency-wide plan to promote the beneficial uses of AI to enhance cybersecurity capabilities; ensure AI systems are protected from cyber-based threats; and deter the malicious use of AI capabilities to threaten the critical infrastructure Americans rely on every day.”

The outlined lines of effort are as follows:

  • Responsibly use AI to support our mission: CISA commits to using AI-enabled tools ethically and responsibly to strengthen cyber defense and support its critical infrastructure mission. The adoption of AI will align with constitutional principles and all relevant laws and policies.
  • Assess and Assure AI systems: CISA will assess and assist in secure AI-based software adoption across various stakeholders, establishing assurance through best practices and guidance for secure and resilient AI development.
  • Protect critical infrastructure from malicious use of AI: CISA will evaluate and recommend mitigation of AI threats to critical infrastructure, collaborating with government agencies and industry partners. The establishment of JCDC.AI aims to facilitate focused collaboration on AI-related threats.
  • Collaborate and communicate on key AI efforts: CISA commits to contributing to interagency efforts, supporting policy approaches for the US government’s national strategy on cybersecurity and AI, and coordinating with international partners to advance global AI security practices.
  • Expand AI expertise in our workforce: CISA will educate its workforce on AI systems and techniques, actively recruiting individuals with AI expertise and ensuring a comprehensive understanding of the legal, ethical, and policy aspects of AI-based software systems.

“This is a step in the right direction. It shows the government is taking the potential threats and benefits of AI seriously. The roadmap outlines a comprehensive strategy for leveraging AI to enhance cybersecurity, protect critical infrastructure, and foster collaboration. It also emphasises the importance of security in AI system design and development,” explains Joseph Thacker, AI and security researcher at AppOmni.

“The roadmap is pretty comprehensive. Nothing stands out as missing initially, although the devil is in the details when it comes to security, and even more so when it comes to a completely new technology. CISA’s ability to keep up may depend on their ability to get talent or train internal folks. Both of those are difficult to accomplish at scale.”

CISA invites stakeholders, partners, and the public to explore the Roadmap for Artificial Intelligence and gain insights into the strategic vision for AI technology and cybersecurity here.

See also: Google expands partnership with Anthropic to enhance AI safety


]]>
https://www.artificialintelligence-news.com/2023/11/15/dhs-ai-roadmap-prioritises-cybersecurity-national-safety/feed/ 0
GitLab’s new AI capabilities empower DevSecOps https://www.artificialintelligence-news.com/2023/11/13/gitlab-new-ai-capabilities-empower-devsecops/ https://www.artificialintelligence-news.com/2023/11/13/gitlab-new-ai-capabilities-empower-devsecops/#respond Mon, 13 Nov 2023 17:27:18 +0000 https://www.artificialintelligence-news.com/?p=13876 GitLab is empowering DevSecOps with new AI-powered capabilities as part of its latest releases. The recent GitLab 16.6 November release includes the beta launch of GitLab Duo Chat, a natural-language AI assistant. Additionally, the GitLab 16.7 December release sees the general availability of GitLab Duo Code Suggestions. David DeSanto, Chief Product Officer at GitLab, said:... Read more »

The post GitLab’s new AI capabilities empower DevSecOps appeared first on AI News.

]]>
GitLab is empowering DevSecOps with new AI-powered capabilities as part of its latest releases.

The recent GitLab 16.6 November release includes the beta launch of GitLab Duo Chat, a natural-language AI assistant. Additionally, the GitLab 16.7 December release sees the general availability of GitLab Duo Code Suggestions.

David DeSanto, Chief Product Officer at GitLab, said: “To realise AI’s full potential, it needs to be embedded across the software development lifecycle, allowing DevSecOps teams to benefit from boosts to security, efficiency, and collaboration.”

GitLab Duo Chat – arguably the star of the show – provides users with invaluable insights, guidance, and suggestions. Beyond code analysis, it supports planning, security issue comprehension and resolution, troubleshooting CI/CD pipeline failures, aiding in merge requests, and more.

As part of GitLab’s commitment to providing a comprehensive AI-powered experience, Duo Chat joins Code Suggestions as the primary interface into GitLab’s AI suite within its DevSecOps platform.

GitLab Duo comprises a suite of 14 AI capabilities:

  • Suggested Reviewers
  • Code Suggestions
  • Chat
  • Vulnerability Summary
  • Code Explanation
  • Planning Discussions Summary
  • Merge Request Summary
  • Merge Request Template Population
  • Code Review Summary
  • Test Generation
  • Git Suggestions
  • Root Cause Analysis
  • Planning Description Generation
  • Value Stream Forecasting

In response to the evolving needs of development, security, and operations teams, Code Suggestions is now generally available. This feature assists in creating and updating code, reducing cognitive load, enhancing efficiency, and accelerating secure software development.

GitLab’s commitment to privacy and transparency stands out in the AI space. According to the GitLab report, 83 percent of DevSecOps professionals consider implementing AI in their processes essential, with 95 percent prioritising privacy and intellectual property protection in AI tool selection.

The State of AI in Software Development report by GitLab reveals that developers spend just 25 percent of their time writing code. The Duo suite aims to address this by reducing toolchain sprawl—enabling 7x faster cycle times, heightened developer productivity, and reduced software spend.

Kate Holterhoff, Industry Analyst at Redmonk, commented: “The developers we speak with at RedMonk are keenly interested in the productivity and efficiency gains that code assistants promise.

“GitLab’s Duo Code Suggestions is a welcome player in this space, expanding the available options for enabling an AI-enhanced software development lifecycle.”

(Photo by Pankaj Patel on Unsplash)

See also: OpenAI battles DDoS against its API and ChatGPT services


]]>
https://www.artificialintelligence-news.com/2023/11/13/gitlab-new-ai-capabilities-empower-devsecops/feed/ 0
OpenAI battles DDoS against its API and ChatGPT services https://www.artificialintelligence-news.com/2023/11/09/openai-battles-ddos-against-api-chatgpt-services/ https://www.artificialintelligence-news.com/2023/11/09/openai-battles-ddos-against-api-chatgpt-services/#respond Thu, 09 Nov 2023 15:50:14 +0000 https://www.artificialintelligence-news.com/?p=13866 OpenAI has been grappling with a series of distributed denial-of-service (DDoS) attacks targeting its API and ChatGPT services over the past 24 hours. While the company has not yet disclosed specific details about the source of these attacks, OpenAI acknowledged that they are dealing with “periodic outages due to an abnormal traffic pattern reflective of... Read more »

The post OpenAI battles DDoS against its API and ChatGPT services appeared first on AI News.

]]>
OpenAI has been grappling with a series of distributed denial-of-service (DDoS) attacks targeting its API and ChatGPT services over the past 24 hours.

While the company has not yet disclosed specific details about the source of these attacks, OpenAI acknowledged that they are dealing with “periodic outages due to an abnormal traffic pattern reflective of a DDoS attack.”

Users affected by these incidents reported encountering errors such as “something seems to have gone wrong” and “There was an error generating a response” when accessing ChatGPT.

This recent wave of attacks follows a major outage that impacted ChatGPT and its API on Wednesday, along with partial ChatGPT outages on Tuesday, and elevated error rates in Dall-E on Monday.

OpenAI displayed a banner across ChatGPT’s interface, attributing the disruptions to “exceptionally high demand” and reassuring users that efforts were underway to scale their systems.

Threat actor group Anonymous Sudan has claimed responsibility for the DDoS attacks on OpenAI. According to the group, the attacks are in response to OpenAI’s perceived bias towards Israel and against Palestine.

The attackers utilised the SkyNet botnet, which recently incorporated support for application layer attacks, also known as Layer 7 (L7) DDoS attacks. In Layer 7 attacks, threat actors overwhelm services at the application level with a massive volume of requests to strain the target’s server and network resources.

Brad Freeman, Director of Technology at SenseOn, commented:

“Distributed denial of service attacks are internet vandalism. Low effort, complexity, and in most cases more of a nuisance than a long-term threat to a business. Often DDOS attacks target services with high volumes of traffic which can be ‘off-ramped’ by their cloud or Internet service provider.

However, as the attacks are on Layer 7 they will be targeting the application itself, therefore OpenAI will need to make some changes to mitigate the attack. It’s likely the threat actor is sending complex queries to OpenAI to overload it, I wonder if they are using AI-generated content to attack AI content generation.”

However, the attribution of these attacks to Anonymous Sudan has raised suspicions among cybersecurity researchers. Some experts suggest this could be a false flag operation and that the group might instead have connections to Russia, which, along with Iran, is suspected of stoking the bloodshed and international outrage to benefit its domestic interests.

The situation once again highlights the ongoing challenges faced by organisations dealing with DDoS attacks and the complexities of accurately identifying the perpetrators.

(Photo by Johann Walter Bantz on Unsplash)


]]>
https://www.artificialintelligence-news.com/2023/11/09/openai-battles-ddos-against-api-chatgpt-services/feed/ 0