Nicholas Brackney, Dell: How we leverage a four-pillar AI strategy

Nicholas Brackney, Senior Consultant in Product Marketing at Dell, discussed the company’s AI initiatives ahead of AI & Big Data Expo North America.
Dell’s AI strategy is structured around four core principles: AI-In, AI-On, AI-For, and AI-With.
Dell is well-positioned to help customers navigate AI workloads, emphasising choice and adaptability as emerging technologies evolve. Brackney highlighted Dell’s commitment to serving customers from the early stages of AI adoption through to AI at scale.
“We’ve always believed in providing choice and have been doing it through the various evolutions of emerging technology, including AI, and understanding the challenges that come with them,” explained Brackney. “We fully leverage our unique operating model to serve customers in the early innings of AI to a future of AI at scale.”
Looking to the future, Dell is particularly excited about the potential of AI PCs.
“We know organisations and their knowledge workers are excited about AI, and they want to fit it into all their workflows,” Brackney said. Dell is focused on integrating AI into software and ensuring it runs efficiently on the right systems, enhancing end-to-end customer journeys in AI.
Ethical concerns in AI deployment are also a priority for Dell. Addressing issues such as deepfakes, transparency, and bias, Brackney emphasised the importance of a shared, secure, and sustainable approach to AI development.
“We believe in a shared, secure, and sustainable approach. By getting the foundations right at their core, we can eliminate some of the greatest risks associated with AI and work to ensure it acts as a force for good,” explained Brackney.
User data privacy in AI-driven products is another critical focus area. Brackney outlined Dell’s strategy of integrating AI with existing security investments without introducing new risks. Dell offers a suite of secure products, comprehensive data protection, advanced cybersecurity features, and global support services to safeguard user data.
On the topic of job displacement due to AI, Brackney underscored that Dell views AI as augmenting human potential rather than replacing it.
“The roles may change but the human element will always be key,” Brackney stated. “At Dell, we encourage our team members to understand, explore, and, where appropriate, use tools based on AI to learn, evolve, and enhance the overall work experience.”
Looking ahead, Brackney envisions a transformative role for AI within Dell and the tech industry. “We see customers in every industry wanting to become leaders in AI because it is critical to their organisation’s innovation, growth, and productivity,” he noted.
Dell aims to support this evolution by providing the necessary architectures, frameworks, and services to assist its customers on this transformative journey.
Dell is a key sponsor of this year’s AI & Big Data Expo. Check out Dell’s keynote presentation ‘From Data Novice to Data Champion – Cultivating Data Literacy Across the Organization’ and swing by Dell’s booth at stand #66 to hear about AI from the company’s experts.
Ben Ball, IBM: Revolutionising technology operations with IBM Concert
IBM’s current focal point in AI research and development lies in applying it to technology operations. As Ben Ball explained, “As people try to build applications out in the world, it’s an increasingly complex situation. There are so many tools, there are so many environments that go into building and maintaining an application over time that a lot of teams are just drowned under all of the data that’s involved.”
To tackle this challenge, the company has announced IBM Concert, which will harness AI to make sense of the vast amount of data involved in application development and maintenance. “It’s using AI to figure out actually how your application works, and then provides recommendations about how to make it better,” Ball said.
According to Ball, a current opportunity is organising the unstructured data that feeds into AI models. “There can be a gap between the unstructured amoeba of data and then what you want in AI, which is sorted, ready to go,” he acknowledged. However, IBM is actively working to bridge this gap, with IBM Concert set to evolve and incorporate tools to organise data into a format more digestible for AI engines.
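To make that idea concrete, here is a minimal, hypothetical sketch of the ‘unstructured to structured’ step: parsing free-form log lines into records a downstream AI engine could consume. The log format, field names, and parsing rule are illustrative assumptions, not IBM Concert’s actual pipeline.

```python
import json
import re

# Hypothetical log format: "<timestamp> <level> <service>: <message>"
LOG_LINE = re.compile(r"(?P<ts>\S+) (?P<level>\w+) (?P<service>\S+): (?P<msg>.*)")

def to_record(line: str):
    """Turn one free-form log line into a structured record, or None if unparseable."""
    match = LOG_LINE.match(line)
    return match.groupdict() if match else None

raw = "2024-05-01T12:00:00Z ERROR checkout: payment gateway timeout"
print(json.dumps(to_record(raw), indent=2))
```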
Explainability is another critical aspect of AI that IBM is addressing with IBM Concert. Ball emphasised the importance of not blindly accepting AI recommendations, stating, “We’re actually building in a function that you can question the recommendation so that you can question what the AI comes up with, and sort of dig a little bit deeper into how it came to that conclusion.”
Beyond IBM Concert, IBM offers a suite of AI technologies and tools, such as watsonx and AI governance solutions. As Ball explained, IBM aims to provide a “use-case-neutral” approach, allowing customers to leverage IBM’s AI capabilities for their specific needs.
One area where IBM has seen early success with IBM Concert is in addressing the data overload faced by many organisations. Ball shared that design partners have been “amazed at what we’re able to do, the insights that we’re able to show, even at a very basic level.” As IBM Concert’s capabilities continue to evolve, IBM expects to deliver even more profound insights and conclusions, ultimately improving application performance, security, and overall management.
For organisations considering adopting AI for the first time, Ball’s advice is clear: “Be deliberate about what you want to do with it. Don’t come in just thinking that the technology itself is the goal, but have a real use case in mind, a real goal in mind that you want AI to accomplish.”
At the upcoming Intelligent Automation Conference, where IBM is a key sponsor, the company plans to showcase IBM Concert and its potential to transform technology operations through the power of AI.
As the interview concluded, Ball expressed excitement about the possibilities of IBM Concert, stating, “We’re really excited about this, and we think our customers are going to be really excited about it too.”
Gain further insights from Ben Ball as he shares his expertise in his day one presentation titled ‘Leveraging Gen AI to proactively mitigate security vulnerabilities in your applications’ at the Intelligent Automation Conference.
How to safeguard your business from AI-generated deepfakes
Deepfakes are forms of digitally altered media, including photos, videos, and audio clips, that appear to depict a real person. They are created by training an AI system on real clips featuring that person, then using the system to generate realistic (yet inauthentic) new media.

Deepfake use is becoming more common. A recent case in Hong Kong, in which a finance worker was reportedly tricked into transferring roughly US$25 million after a video call with deepfake recreations of senior colleagues, was the latest in a series of high-profile incidents. Fake, explicit images of Taylor Swift circulated on social media; the political party of an imprisoned election candidate in Pakistan used a deepfake video of him to deliver a speech; and a deepfake ‘voice clone’ of President Biden called primary voters to tell them not to vote.
Less high-profile cases of deepfake use by cybercriminals have also been rising in both scale and sophistication. In the banking sector, cybercriminals are now attempting to overcome voice authentication by using voice clones of people to impersonate users and gain access to their funds. Banks have responded by improving their abilities to identify deepfake use and increasing authentication requirements.
Cybercriminals have also targeted individuals with ‘spear phishing’ attacks that use deepfakes. A common approach is to deceive a person’s family members and friends by using a voice clone to impersonate someone in a phone call and ask for funds to be transferred to a third-party account. Last year, a McAfee survey found that 70% of respondents were not confident they could distinguish between a real voice and an AI clone, and that nearly half would respond to a request for funds if the caller, posing as a family member or friend, claimed to have been robbed or involved in a car accident.
Cybercriminals have also called people pretending to be tax authorities, banks, healthcare providers and insurers in efforts to gain financial and personal details.
In February, the Federal Communications Commission ruled that phone calls using AI-generated human voices are illegal unless made with prior express consent of the called party. The Federal Trade Commission also finalized a rule prohibiting AI impersonation of government organizations and businesses and proposed a similar rule prohibiting AI impersonation of individuals. This adds to a growing list of legal and regulatory measures being put in place around the world to combat deepfakes.
To protect employees and brand reputation against deepfakes, leaders should put deliberate safeguards in place.
Though deepfakes are a cybersecurity concern, companies should also think of them as complex and emerging phenomena with broader repercussions. A proactive and thoughtful approach to addressing deepfakes can help educate stakeholders and ensure that measures to combat them are responsible, proportionate and appropriate.
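As one concrete illustration, the sketch below gates high-risk requests behind out-of-band verification, a safeguard commonly recommended against voice- and video-based impersonation. The threshold, channel names, and fields are assumptions for demonstration, not a complete control.

```python
HIGH_RISK_THRESHOLD = 10_000  # e.g. flag transfers above $10,000 (assumed policy)

def requires_out_of_band_check(request: dict) -> bool:
    """Flag requests that arrived over easily faked channels for manual verification."""
    risky_channel = request.get("channel") in {"voice_call", "video_call"}
    large_amount = request.get("amount", 0) >= HIGH_RISK_THRESHOLD
    return risky_channel and large_amount

request = {"channel": "video_call", "amount": 25_000_000, "payee": "unverified"}
if requires_out_of_band_check(request):
    print("Hold transfer: confirm via a known phone number or in person first.")
```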
(Photo by Markus Spiske)
See also: UK and US sign pact to develop AI safety tests
Databricks claims DBRX sets ‘a new standard’ for open-source LLMs
Databricks says the 132-billion-parameter DBRX model surpasses popular open-source LLMs like LLaMA 2 70B, Mixtral, and Grok-1 across language understanding, programming, and maths tasks. It even outperforms Anthropic’s closed-source model Claude on certain benchmarks.
DBRX demonstrated state-of-the-art performance among open models on coding tasks, beating out specialised models like CodeLLaMA despite being a general-purpose LLM. It also matched or exceeded GPT-3.5 across nearly all benchmarks evaluated.
The state-of-the-art capabilities come thanks to a more efficient mixture-of-experts architecture that makes DBRX up to 2x faster at inference than LLaMA 2 70B, largely because far fewer of its parameters are active for any given input. Databricks claims training the model was also around 2x more compute-efficient than dense alternatives.
“DBRX is setting a new standard for open source LLMs—it gives enterprises a platform to build customised reasoning capabilities based on their own data,” said Ali Ghodsi, Databricks co-founder and CEO.
DBRX was pretrained on a massive 12 trillion tokens of “carefully curated” text and code data selected to improve quality. It leverages technologies like rotary position encodings and curriculum learning during pretraining.
Customers can interact with DBRX via APIs or use the company’s tools to fine-tune the model on their proprietary data. It’s already being integrated into Databricks’ AI products.
“Our research shows enterprises plan to spend half of their AI budgets on generative AI,” said Dave Menninger, Executive Director, Ventana Research, part of ISG. “One of the top three challenges they face is data security and privacy.
“With their end-to-end Data Intelligence Platform and the introduction of DBRX, Databricks is enabling enterprises to build generative AI applications that are governed, secure and tailored to the context of their business, while maintaining control and ownership of their IP along the way.”
Partners including Accenture, Block, Nasdaq, Prosus, Replit, and Zoom praised DBRX’s potential to accelerate enterprise adoption of open, customised large language models. Analysts said it could drive a shift from closed to open source as fine-tuned open models match proprietary performance.
Mike O’Rourke, Head of AI and Data Services at Nasdaq, commented: “Databricks is a key partner to Nasdaq on some of our most important data systems. They continue to be at the forefront of the industry in managing data and leveraging AI, and we are excited about the release of DBRX.
“The combination of strong model performance and favourable serving economics is the kind of innovation we are looking for as we grow our use of generative AI at Nasdaq.”
You can find the DBRX base and fine-tuned models on Hugging Face. The project’s GitHub has further resources and code examples.
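For developers who want to experiment, below is a hedged sketch of loading the instruction-tuned variant with the transformers library. The repo id is an assumption based on Databricks’ naming, and running the full 132B-parameter model requires hundreds of gigabytes of GPU memory.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "databricks/dbrx-instruct"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # shard across available GPUs
    torch_dtype="auto",
    trust_remote_code=True,
)

inputs = tokenizer("What does mixture-of-experts mean?", return_tensors="pt")
outputs = model.generate(**inputs.to(model.device), max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```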
(Photo by Ryan Quintal)
See also: Large language models could ‘revolutionise the finance sector within two years’
NVIDIA unveils Blackwell architecture to power next GenAI wave
The Blackwell platform promises up to 25 times lower cost and energy consumption compared to its predecessor, the Hopper architecture. Named after pioneering mathematician and statistician David Harold Blackwell, the new GPU architecture introduces six transformative technologies.
“Generative AI is the defining technology of our time. Blackwell is the engine to power this new industrial revolution,” said Jensen Huang, Founder and CEO of NVIDIA. “Working with the most dynamic companies in the world, we will realise the promise of AI for every industry.”
The key innovations in Blackwell include the world’s most powerful chip with 208 billion transistors, a second-generation Transformer Engine to support double the compute and model sizes, fifth-generation NVLink interconnect for high-speed multi-GPU communication, and advanced engines for reliability, security, and data decompression.
Central to Blackwell is the NVIDIA GB200 Grace Blackwell Superchip, which combines two B200 Tensor Core GPUs with a Grace CPU over an ultra-fast 900GB/s NVLink interconnect. Multiple GB200 Superchips can be combined into systems like the liquid-cooled GB200 NVL72 platform with up to 72 Blackwell GPUs and 36 Grace CPUs, offering 1.4 exaflops of AI performance.
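A quick back-of-the-envelope check of those figures (an illustration only; NVIDIA’s exaflops number refers to low-precision AI arithmetic, not FP64):

```python
# Implied per-GPU AI throughput of the GB200 NVL72, from the figures above.
exaflops_total = 1.4   # quoted NVL72 AI performance
gpu_count = 72         # Blackwell GPUs per NVL72 system

per_gpu_petaflops = exaflops_total * 1_000 / gpu_count  # 1 exaflop = 1,000 petaflops
print(f"~{per_gpu_petaflops:.1f} petaflops of AI compute per GPU")  # ~19.4
```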
NVIDIA has already secured support from major cloud providers like Amazon Web Services, Google Cloud, Microsoft Azure, and Oracle Cloud Infrastructure to offer Blackwell-powered instances. Other partners planning Blackwell products include Dell Technologies, Meta, Microsoft, OpenAI, Oracle, Tesla, and many others across hardware, software, and sovereign clouds.
Sundar Pichai, CEO of Alphabet and Google, said: “We are fortunate to have a longstanding partnership with NVIDIA, and look forward to bringing the breakthrough capabilities of the Blackwell GPU to our Cloud customers and teams across Google to accelerate future discoveries.”
The Blackwell architecture and supporting software stack will enable new breakthroughs across industries from engineering and chip design to scientific computing and generative AI.
Mark Zuckerberg, Founder and CEO of Meta, commented: “AI already powers everything from our large language models to our content recommendations, ads, and safety systems, and it’s only going to get more important in the future.
“We’re looking forward to using NVIDIA’s Blackwell to help train our open-source Llama models and build the next generation of Meta AI and consumer products.”
With its massive performance gains and efficiency, Blackwell could be the engine to finally make real-time trillion-parameter AI a reality for enterprises.
See also: Elon Musk’s xAI open-sources Grok
Wipro and IBM collaborate to propel enterprise AI
The extended partnership combines Wipro’s extensive industry expertise with IBM’s leading AI innovations. The collaboration seeks to develop joint offerings that help clients implement robust, reliable, and enterprise-ready AI.
The Wipro Enterprise AI-Ready Platform harnesses various components of the IBM watsonx suite, including watsonx.ai, watsonx.data, and watsonx.governance, alongside AI assistants. It offers clients a comprehensive suite of tools, large language models (LLMs), streamlined processes, and robust governance mechanisms, laying a solid foundation for the development of future industry-specific analytic solutions.
Jo Debecker, Managing Partner & Global Head of Wipro FullStride Cloud, said: “This expanded partnership with IBM combines our deep contextual cloud, AI, and industry expertise with IBM’s leading AI innovation capabilities.”
A key aspect of this collaboration is the establishment of the IBM TechHub@Wipro, a centralised tech hub aimed at supporting joint client pursuits. This initiative will bring together subject matter experts, engineers, assets, and processes to drive and support AI initiatives.
Kate Woolley, General Manager of IBM Ecosystem, commented: “We’re pleased to reach this new milestone in our 20-year partnership to support clients through the combination of Wipro’s and IBM’s joint expertise and technology, including watsonx.”
The Wipro Enterprise AI-Ready Platform offers infrastructure and core software for AI and generative AI workloads, enhancing automation, dynamic resource management, and operational efficiency in the enterprise. Moreover, it caters to specialised industry use cases, such as banking, retail, health, energy, and manufacturing, offering tailored solutions for customer support, marketing, feedback analysis, and more.
Nagendra Bandaru, Managing Partner and President of Wipro Enterprise Futuring, highlighted the flexibility of the platform, stating: “Wipro’s Enterprise AI-Ready Platform will allow clients to easily integrate and standardise multiple data sources augmenting AI- and GenAI-enabled transformation across business functions.”
In addition to facilitating AI governance throughout the AI lifecycle, the platform prioritises responsible AI practices, ensuring transparency, data protection, and compliance with relevant laws and regulations.
As part of this collaboration, Wipro associates will undergo training in IBM hybrid cloud, AI, and data analytics technologies, further enhancing their capabilities in developing joint solutions.
(Photo by Carson Masterson on Unsplash)
See also: Reddit is reportedly selling data for AI training
Google launches Gemini 1.5 with ‘experimental’ 1M token context
The new capability allows Gemini 1.5 to process extremely long inputs (up to one million tokens) to understand context and meaning. This dwarfs previous AI systems like Claude 2.1 and GPT-4 Turbo, which max out at 200,000 and 128,000 tokens respectively.
“Gemini 1.5 Pro achieves near-perfect recall on long-context retrieval tasks across modalities, improves the state-of-the-art in long-document QA, long-video QA and long-context ASR, and matches or surpasses Gemini 1.0 Ultra’s state-of-the-art performance across a broad set of benchmarks,” said Google researchers in a technical paper (PDF).
The efficiency of Google’s latest model is attributed to its innovative Mixture-of-Experts (MoE) architecture.
“While a traditional Transformer functions as one large neural network, MoE models are divided into smaller ‘expert’ neural networks,” explained Demis Hassabis, CEO of Google DeepMind.
“Depending on the type of input given, MoE models learn to selectively activate only the most relevant expert pathways in its neural network. This specialisation massively enhances the model’s efficiency.”
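For readers unfamiliar with the technique, here is a minimal sketch of a mixture-of-experts layer with top-k routing in PyTorch. The dimensions, expert count, and routing scheme are illustrative assumptions; Gemini 1.5’s actual architecture is not public and is far more sophisticated.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, d_model=512, n_experts=8, top_k=2):
        super().__init__()
        # Each "expert" is a small feed-forward network.
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, 4 * d_model),
                           nn.GELU(),
                           nn.Linear(4 * d_model, d_model))
             for _ in range(n_experts)]
        )
        # The router scores the experts for each token.
        self.router = nn.Linear(d_model, n_experts)
        self.top_k = top_k

    def forward(self, x):  # x: (tokens, d_model)
        scores = self.router(x)                         # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # pick the k best experts
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        # Compute runs only for each token's selected experts, which is why
        # MoE models can have many parameters but few *active* ones per token.
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k:k+1] * expert(x[mask])
        return out

x = torch.randn(10, 512)
print(MoELayer()(x).shape)  # torch.Size([10, 512])
```

Because each token flows through only `top_k` of the experts, compute per token scales with active rather than total parameters, which is the efficiency gain Hassabis describes.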
To demonstrate the power of the 1M token context window, Google showed how Gemini 1.5 could ingest the entire 326,914-token Apollo 11 flight transcript and then accurately answer specific questions about it. It also summarised key details from a 684,000-token silent film when prompted.
Google is initially providing developers and enterprises free access to a limited Gemini 1.5 preview with a one million token context window. A 128,000 token general release for the public will come later, along with pricing details.
For now, the one million token capability remains experimental. But if it lives up to its early promise, Gemini 1.5 could set a new standard for AI’s ability to understand complex, real-world text.
Developers interested in testing Gemini 1.5 Pro can sign up in AI Studio. Google says that enterprise customers can reach out to their Vertex AI account team.
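For those with access, a call might look something like the following sketch using the google-generativeai Python SDK; the model identifier and the local file used here are assumptions and may differ during the limited preview.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-pro-latest")  # assumed preview model id

# Long-context usage: pass an entire transcript as part of one prompt.
with open("apollo11_transcript.txt") as f:  # hypothetical local file
    transcript = f.read()

response = model.generate_content(
    [transcript, "When did Armstrong first step onto the lunar surface?"]
)
print(response.text)
```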
(Image Credit: Google)
See also: Amazon trains 980M parameter LLM with ’emergent abilities’
JumpCloud report reveals SMEs conflicted about AI
The latest edition of JumpCloud’s SME IT Trends Report delves into the impact of artificial intelligence (AI) on identity management, security challenges, economic uncertainties, and the growing reliance on managed service providers (MSPs) in IT operations. JumpCloud commissioned this biannual survey of SME IT admins to gain unique insights into the day-to-day experiences of IT professionals who power and secure operations without enterprise-level budgets and staff.
The most recent survey results, gathered from admins in the US, UK, and India, indicate how quickly AI has impacted identity management and highlight that IT professionals have both big hopes and big fears in response. With a strong majority of respondents planning or actively implementing AI within the next year and advocating for AI investment, IT leaders clearly see potential benefits from deploying AI in their workplaces. But IT admins report notable concerns about their organisations’ current ability to secure against related threats, as well as personal concerns about AI’s impact on their careers.
“While AI is the buzzword that grabs headlines, it’s security that remains a paramount concern for IT teams given the increasing sophistication of external threats and rising regulatory pressures,” said Rajat Bhargava, CEO, JumpCloud. “And it’s only getting worse. We found that 56% of admins agree that they’re more concerned about their organisation’s security posture now than they were six months ago. To reduce this complexity and anxiety, organisations should look toward solutions that offer a unified, open identity and IT management approach. This can enhance security, mitigate operational disruptions, and alleviate admin burnout.”
AI adoption: Optimism and concerns
A vast majority of admins see AI as a net positive for their organisation and think their organisation is approaching AI at the right pace—though this optimism is tempered by significant concerns about AI’s potential impact on security and individual careers.
Topline AI findings include:
Uncertainty for IT
The start of 2024 finds SMEs continuing to wrestle with economic uncertainties and IT teams unsure about what that means for their organisations and their operations.
Topline IT management findings include:
Security challenges persist as admins adjust their response
IT teams report that security concerns continue to dominate the various challenges and responsibilities they manage. With the rise of AI and the evolving sophistication of cybersecurity threats, IT admins are adapting their responses and deploying additional layers of protection.
Topline security findings include:
MSPs play major role in IT operations
MSPs are increasingly crucial to SME IT operations, with growing numbers of SMEs turning to them for IT management.
Topline MSP findings include:
Survey methodology
JumpCloud surveyed 1,213 SME IT decision-makers in the UK, US, and India, including managers, directors, vice presidents, and executives. Each survey respondent represented an organisation with 2,500 or fewer employees across a variety of industries. The online survey was conducted by Propeller Insights from November 14 to November 27, 2023.
The findings from the JumpCloud Q1 2024 SME IT Trends Report can be found in “State of IT 2024: The Rise of AI, Economic Uncertainty, and Evolving Security Threats,” here.
(Editor’s note: This article is sponsored by JumpCloud)
Fetch.ai and Deutsche Telekom partner to converge AI and blockchain
As the first major corporate partner of the Fetch.ai Foundation, Deutsche Telekom joins forces with Bosch and Fetch.ai in supporting an open AI and blockchain platform aimed at widespread adoption. Its subsidiary, Deutsche Telekom MMS, will also serve as a validator on the decentralised Fetch.ai network, helping secure transactions on the blockchain.
At the core of Fetch.ai’s technology are autonomous software agents that can manage resources, conduct transactions, and analyse data flows independently, thanks to AI algorithms. These agents unlock a myriad of real-world applications across sectors like automotive, supply chain, healthcare, and digital identity.
For example, AI agents could optimise production schedules based on supply chain data or match patients to clinical trials using health records. The tamper-proof nature of blockchain also enables secure transmission and access to sensitive data.
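As a flavour of what such agents look like in practice, below is a minimal sketch using Fetch.ai’s open-source uagents framework (pip install uagents). The agent’s name, schedule, and task are illustrative assumptions rather than a production deployment.

```python
from uagents import Agent, Context

# The seed phrase determines the agent's identity; use a secret value in practice.
agent = Agent(name="supply_monitor", seed="example recovery phrase")

@agent.on_interval(period=60.0)  # run every 60 seconds
async def check_supply_chain(ctx: Context):
    # A real agent might pull sensor or ERP data here and negotiate
    # with other agents over the Fetch.ai network.
    ctx.logger.info("Checking supply-chain data feed...")

if __name__ == "__main__":
    agent.run()
```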
“The convergence of blockchain, AI and IoT is trailblazing the digital transformation of entire industries,” said Dirk Röder, Head of Deutsche Telekom’s Web3 Infrastructure & Solutions team. “Autonomous agents will automate industrial services, simplifying processes securely thanks to blockchain.”
As a validator, Deutsche Telekom MMS will ensure network security as more devices, users, and services integrate with the Fetch.ai blockchain. Built on the Cosmos protocol, Fetch.ai operates as a permissionless decentralised network with open-source code that is accessible globally.
The collaboration demonstrates how blockchain can unlock AI’s potential by providing reliable, transparent data while AI can help securely analyse blockchain transactions. Together, these technologies lay the foundations for a decentralised Web3 internet that empowers user privacy and control.
“This partnership signals real progress in integrating AI and Web3 innovations into the machine economy,” said Fetch.ai CEO Humayun Sheikh.
Deutsche Telekom and Fetch.ai will be working together at one of Europe’s largest AI and IoT hackathons, Bosch Connected Experience, on 28-29 February 2024.
See also: Telcos to spend $20B on AI network orchestration by 2028
Telcos to spend $20B on AI network orchestration by 2028
The researchers predict this investment growth will be necessary as telcos expand 5G networks globally and develop future 6G networks. AI software will play a vital role in optimising network performance and security, the two most critical areas, which are expected to account for over 50 percent of operator spending on AI by 2028.
As enterprises make increasing use of cellular connectivity for smart factories, self-driving vehicles, and other bandwidth-intensive applications, the report argues AI orchestration will be essential for telcos to maximise efficiency, reduce costs, and provide the best quality of service.
Automating functions like real-time network analysis and rapid adjustments to changing demands can minimise expenses tied to network operations and provisioning.
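Conceptually, that kind of automation reduces to a monitor-decide-act loop. The sketch below is purely illustrative, with made-up cells, telemetry, and threshold, and is not tied to any vendor’s API.

```python
import random

CELLS = ["cell-a", "cell-b", "cell-c"]

def read_utilisation(cell: str) -> float:
    """Stand-in for real network telemetry."""
    return random.uniform(0.2, 1.0)

def rebalance(cell: str) -> None:
    print(f"[orchestrator] shifting traffic away from {cell}")

# In production this loop would run continuously in near real time.
for _ in range(3):
    for cell in CELLS:
        if read_utilisation(cell) > 0.85:  # threshold an AI model might tune dynamically
            rebalance(cell)
```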
“As operators compete on the quality of their networks, AI will be essential to maximising the value of using a cellular network for connectivity,” said Frederick Savage, author of the report.
“High-spending users will gravitate to those networks that can provide the best service conditions.”
Telcos that fail to incorporate AI may ultimately struggle to keep pace with customer demands for performance and security.
A full copy of the report can be found here (paywall).
(Photo by Larisa Birta on Unsplash)
See also: The UK is outpacing the US for AI hiring