AI Robotics News | Latest Robotics in AI Developments | AI News
https://www.artificialintelligence-news.com/categories/ai-robotics/
Mon, 25 Mar 2024 10:40:55 +0000

Stanhope raises £2.3m for AI that teaches machines to ‘make human-like decisions’
https://www.artificialintelligence-news.com/2024/03/25/stanhope-raises-2-3m-for-ai-that-teaches-machines-to-make-human-like-decisions/
Mon, 25 Mar 2024 10:40:00 +0000

Stanhope AI – a company applying decades of neuroscience research to teach machines how to make human-like decisions in the real world – has raised £2.3m in seed funding led by the UCL Technology Fund.

Creator Fund also participated, along with MMC Ventures, Moonfire Ventures, Rockmount Capital, and leading angel investors.

Stanhope AI was founded as a spinout from University College London, supported by UCL Business, by three of the most eminent names in neuroscience and AI research: CEO Professor Rosalyn Moran (former Deputy Director of King’s Institute for Artificial Intelligence); Director Karl Friston, Professor at the UCL Queen Square Institute of Neurology; and Technical Advisor Dr Biswa Sengupta (MD of AI and Cloud products at JP Morgan Chase).

By applying key neuroscience principles to AI and mathematics, Stanhope AI is at the forefront of the new generation of AI technology known as ‘agentic’ AI. The team has built algorithms that, like the human brain, are always trying to guess what will happen next, learning from any discrepancies between predicted and actual events to continuously update their “internal models of the world.” Instead of training vast LLMs to make decisions based on seen data, Stanhope AI’s agentic models are in charge of their own learning. They autonomously decode their environments and rebuild and refine their “world models” using real-time data, continuously fed to them via onboard sensors.

The rise of agentic AI

This approach, and Stanhope AI’s technology, are based on the neuroscience principle of Active Inference – the idea that our brains, in order to minimise free energy, are constantly making predictions about incoming sensory data around us. As this data changes, our brains adapt and update our predictions in response to rebuild and refine our world view. 
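The predict-compare-update loop described above can be sketched in a few lines. This is an illustrative toy only – a single scalar state and a fixed learning rate standing in for precision – and not Stanhope AI’s actual algorithm:

```python
# Toy predict-compare-update loop in the spirit of Active Inference.
# One hidden quantity is estimated from noisy sensor readings by
# repeatedly shrinking the prediction error (a crude stand-in for
# minimising free energy).
import random

random.seed(0)

true_state = 5.0   # the real quantity out in the environment
mu = 0.0           # the agent's internal "world model" (a single number here)
precision = 0.1    # how strongly a prediction error updates the model

for step in range(200):
    observation = true_state + random.gauss(0, 0.5)  # noisy sensor reading
    prediction_error = observation - mu              # the "surprise" signal
    mu += precision * prediction_error               # update the internal model

print(mu)  # settles close to 5.0
```

The agent never sees the true state directly; it only ever reduces the gap between what it predicted and what its sensor reported, which is the core of the predict-and-update picture above.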

This is very different from the traditional machine learning methods used to train today’s AI systems, such as LLMs. Today’s models can only operate within the realms of the training they are given, and can only make best-guess decisions based on the information they have. They can’t learn on the go. They require enormous amounts of processing power and energy to train and run, as well as vast amounts of seen data.

By contrast, Stanhope AI’s Active Inference models are truly autonomous. They can constantly rebuild and refine their predictions. Uncertainty is minimised by default, which removes the risk of hallucinations about what the AI thinks is true, and this moves Stanhope’s models towards reasoning and human-like decision-making. What’s more, by drastically reducing the size of the models and the energy required to run them, Stanhope AI’s technology can operate on small devices such as drones.

“The most all-encompassing idea since natural selection”

Stanhope AI’s approach is possible because of its founding team’s extensive research into the neuroscience principles of Active Inference and free energy. Indeed, Professor Friston, a world-renowned neuroscientist at UCL whose work has been cited twice as many times as Albert Einstein’s, is the inventor of the Free Energy Principle.

Friston’s theory centres on how our brains minimise surprise and uncertainty. It explains that all living things are driven to minimise free energy, and thus the energy needed to predict and perceive the world. Such is its impact that the Free Energy Principle has been described as the “most all-encompassing idea since the theory of natural selection.” Active Inference sits within this theory to explain the process our brains use to minimise this energy. This idea infuses Stanhope AI’s work, led by Professor Moran, a specialist in Active Inference and its application through AI, and Dr Biswa Sengupta, whose doctoral research at the University of Cambridge was in dynamical systems, optimisation and energy efficiency.
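For readers who want the formal statement, the quantity being minimised is the variational free energy, an upper bound on surprise. This is the standard formulation from the Active Inference literature, not a Stanhope-specific equation:

```latex
% Variational free energy F for observations o, hidden states s,
% and an approximate posterior q(s):
F[q] = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(s, o)\right]
     = D_{\mathrm{KL}}\!\left[q(s)\,\|\,p(s \mid o)\right] - \ln p(o)
     \;\geq\; -\ln p(o)
```

Because the KL divergence is non-negative, driving $F$ down both improves the internal model $q(s)$ and minimises the surprise $-\ln p(o)$ of incoming sensory data.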

Real-world application

In the immediate term, the technology is being tested with delivery drones and autonomous machines used by partners including Germany’s Federal Agency for Disruptive Innovation and the Royal Navy. In the long term, the technology holds huge promise in the realms of manufacturing, industrial robotics and embodied AI. The investment will be used to further the company’s development of its agentic AI models and the practical application of its research.  

Professor Rosalyn Moran, CEO and co-founder of Stanhope AI, said: “Our mission at Stanhope AI is to bridge the gap between neuroscience and artificial intelligence, creating a new generation of AI systems that can think, adapt, and decide like humans. We believe this technology will transform the capabilities of AI and robotics and make them more impactful in real-world scenarios. We trust the math, and we’re delighted to have the backing of investors like UCL Technology Fund who deeply understand the science behind this technology; their support will be significant on our journey to revolutionise AI technology.”

David Grimm, partner at UCL Technology Fund, said: “AI startups may be some of the hottest investments right now, but few have the calibre and deep scientific and technical know-how of the Stanhope AI team. This is emblematic of their unique approach, combining neuroscience insights with advanced AI, which presents a groundbreaking opportunity to advance the field and address some of the most challenging problems in AI today. We can’t wait to see what this team achieves.”

Marina Santilli, associate director at UCL Business, added: “The promise offered by Stanhope AI’s approach to artificial intelligence is hugely exciting, providing hope for powerful yet energy-light models. UCLB is delighted to have been able to support the formation of a company built on decades of fundamental research at UCL led by Professor Friston, developing the Free Energy Principle.”

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Stanhope raises £2.3m for AI that teaches machines to ‘make human-like decisions’ appeared first on AI News.

Hugging Face is launching an open robotics project
https://www.artificialintelligence-news.com/2024/03/08/hugging-face-launching-open-robotics-project/
Fri, 08 Mar 2024 17:37:22 +0000

Hugging Face, the startup behind the popular open source machine learning codebase and ChatGPT rival Hugging Chat, is venturing into new territory with the launch of an open robotics project.

The ambitious expansion was announced by former Tesla staff scientist Remi Cadene in a post on X.

In keeping with Hugging Face’s ethos of open source, Cadene stated the robot project would be “open-source, not as in Open AI” in reference to OpenAI’s legal battle with Cadene’s former boss, Elon Musk.

Cadene – who will be leading the robotics initiative – revealed that Hugging Face is hiring robotics engineers in Paris, France.

A job listing for an “Embodied Robotics Engineer” sheds light on the project’s goals, which include “designing, building, and maintaining open-source and low cost robotic systems that integrate AI technologies, specifically in deep learning and embodied AI.”

The role involves collaborating with ML engineers, researchers, and product teams to develop innovative robotics solutions that “push the boundaries of what’s possible in robotics and AI.” Key responsibilities range from building low-cost robots using off-the-shelf components and 3D-printed parts to integrating deep learning and embodied AI technologies into robotic systems.

Until now, Hugging Face has primarily focused on software offerings like its machine learning codebase and open-source chatbot. The robotics project marks a significant departure into the hardware realm as the startup aims to bring AI into the physical world through open and affordable robotic platforms.

(Photo by Possessed Photography on Unsplash)

See also: Google engineer stole AI tech for Chinese firms

The post Hugging Face is launching an open robotics project appeared first on AI News.

AUKUS trial advances AI for military operations
https://www.artificialintelligence-news.com/2024/02/05/aukus-trial-advances-ai-for-military-operations/
Mon, 05 Feb 2024 16:29:13 +0000

The UK armed forces and Defence Science and Technology Laboratory (Dstl) recently collaborated with the militaries of Australia and the US as part of the AUKUS partnership in a landmark trial focused on AI and autonomous systems. 

The trial, called Trusted Operation of Robotic Vehicles in Contested Environments (TORVICE), was held in Australia under the AUKUS partnership between the three countries. It aimed to test robotic vehicles and sensors in situations involving electronic attacks, GPS disruption, and other threats to evaluate the resilience of the autonomous systems expected to play a major role in future military operations.

Understanding how to ensure these AI systems can operate reliably in the face of modern electronic warfare and cyber threats will be critical before the technology can be more widely adopted.  

The TORVICE trial featured US and British autonomous vehicles carrying out reconnaissance missions while Australian units simulated battlefield electronic attacks on their systems. Analysis of the performance data will help strengthen the protections and safeguards needed to prevent system failures or disruptions.

Guy Powell, Dstl’s technical authority for the trial, said: “The TORVICE trial aims to understand the capabilities of robotic and autonomous systems to operate in contested environments. We need to understand how robust these systems are when subject to attack.

“Robotic and autonomous systems are a transformational capability that we are introducing to armies across all three nations.” 

This builds on the first AUKUS autonomous systems trial, held in April 2023 in the UK. It also represents a step forward following the AUKUS defence ministers’ December announcement that Resilient and Autonomous Artificial Intelligence Technologies (RAAIT) would be integrated into the three countries’ military forces beginning in 2024.

Dstl military advisor Lt Col Russ Atherton said that successfully harnessing AI and autonomy promises to “be an absolute game-changer” that reduces the risk to soldiers. The technology could carry out key tasks like sensor operation and logistics over wider areas.

“The ability to deploy different payloads such as sensors and logistics across a larger battlespace will give commanders greater options than currently exist,” explained Lt Col Atherton.

By collaborating, the AUKUS allies aim to accelerate development in this crucial new area of warfare, improving interoperability between their forces, maximising their expertise, and strengthening deterrence in the Indo-Pacific region.

As AUKUS continues to deepen cooperation on cutting-edge military technologies, this collaborative effort will significantly enhance military capabilities while reducing risks for warfighters.

(Image Credit: Dstl)

See also: Experts from 30 nations will contribute to global AI safety report

The post AUKUS trial advances AI for military operations  appeared first on AI News.

AWS and NVIDIA expand partnership to advance generative AI
https://www.artificialintelligence-news.com/2023/11/29/aws-nvidia-expand-partnership-advance-generative-ai/
Wed, 29 Nov 2023 14:30:14 +0000

Amazon Web Services (AWS) and NVIDIA have announced a significant expansion of their strategic collaboration at AWS re:Invent. The collaboration aims to provide customers with state-of-the-art infrastructure, software, and services to fuel generative AI innovations.

The collaboration brings together the strengths of both companies, integrating NVIDIA’s latest multi-node systems with next-generation GPUs, CPUs, and AI software, along with AWS technologies such as Nitro System advanced virtualisation, Elastic Fabric Adapter (EFA) interconnect, and UltraCluster scalability.

Key highlights of the expanded collaboration include:

  1. Introduction of NVIDIA GH200 Grace Hopper Superchips on AWS:
    • AWS becomes the first cloud provider to offer NVIDIA GH200 Grace Hopper Superchips with new multi-node NVLink technology.
    • The NVIDIA GH200 NVL32 multi-node platform enables joint customers to scale to thousands of GH200 Superchips, providing supercomputer-class performance.
  2. Hosting NVIDIA DGX Cloud on AWS:
    • Collaboration to host NVIDIA DGX Cloud, an AI-training-as-a-service, on AWS, featuring GH200 NVL32 for accelerated training of generative AI and large language models.
  3. Project Ceiba supercomputer:
    • Collaboration on Project Ceiba, aiming to design the world’s fastest GPU-powered AI supercomputer with 16,384 NVIDIA GH200 Superchips and processing capability of 65 exaflops.
  4. Introduction of new Amazon EC2 instances:
    • AWS introduces three new Amazon EC2 instances, including P5e instances powered by NVIDIA H200 Tensor Core GPUs for large-scale generative AI and HPC workloads.
  5. Software innovations:
    • NVIDIA introduces software on AWS, such as NeMo Retriever microservice for chatbots and summarisation tools, and BioNeMo to speed up drug discovery for pharmaceutical companies.

This collaboration signifies a joint commitment to advancing the field of generative AI, offering customers access to cutting-edge technologies and resources.

Internally, Amazon’s robotics and fulfilment teams already employ NVIDIA’s Omniverse platform to optimise warehouses in virtual environments before real-world deployment.

The integration of NVIDIA and AWS technologies will accelerate the development, training, and inference of large language models and generative AI applications across various industries.

(Photo by ANIRUDH on Unsplash)

See also: Inflection-2 beats Google’s PaLM 2 across common benchmarks

The post AWS and NVIDIA expand partnership to advance generative AI appeared first on AI News.

Open X-Embodiment dataset and RT-X model aim to revolutionise robotics
https://www.artificialintelligence-news.com/2023/10/04/open-x-embodiment-dataset-rt-x-model-aim-revolutionise-robotics/
Wed, 04 Oct 2023 14:36:10 +0000

A consortium of researchers spanning 33 academic labs worldwide has unveiled a revolutionary approach to robotics.

Traditionally, robots have excelled in specific tasks but struggled with versatility, requiring individual training for each unique job. However, this limitation might soon be a thing of the past.

Open X-Embodiment: The gateway to generalist robots

At the heart of this transformation lies the Open X-Embodiment dataset, a monumental effort pooling data from 22 distinct robot types.

With the contributions of over 20 research institutions, this dataset comprises over 500 skills, encompassing a staggering 150,000 tasks across more than a million episodes.

This treasure trove of diverse robotic demonstrations represents a significant leap towards training a universal robotic model capable of multifaceted tasks.
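The pooling idea can be made concrete with a small sketch. Everything below is invented for illustration – the real dataset is distributed as structured episodes per robot platform – but it shows the basic move of merging heterogeneous embodiments into one shuffled training stream:

```python
# Hypothetical sketch of cross-embodiment data pooling.
# Robot names, episode fields, and step contents are all made up;
# the point is that one training stream mixes every embodiment.
import random

random.seed(42)

def episodes(robot_type, n):
    """Yield toy episodes: each has a robot tag and a short list of steps."""
    for i in range(n):
        steps = [((robot_type, i, t), f"{robot_type}-action-{t}") for t in range(3)]
        yield {"robot": robot_type, "steps": steps}

# Merge data from several embodiments, then shuffle, mirroring how a
# single model is trained on demonstrations from all robots at once.
pool = []
for robot in ["franka-arm", "mobile-base", "quadruped"]:
    pool.extend(episodes(robot, 2))
random.shuffle(pool)

robots_seen = {ep["robot"] for ep in pool}
print(sorted(robots_seen))  # ['franka-arm', 'mobile-base', 'quadruped']
```

A model trained on such a mixed stream sees every embodiment’s demonstrations interleaved, which is what lets skills learned on one robot transfer to another.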

RT-1-X: A general-purpose robotics model

Accompanying this dataset is RT-1-X, created by training RT-1 – a real-world robotic control model – on the pooled cross-embodiment data; the same recipe was applied to RT-2, a vision-language-action model, to produce RT-2-X. The resulting RT-1-X exhibits exceptional skills transferability across various robot embodiments.

In rigorous testing across five research labs, RT-1-X outperformed its counterparts by an average of 50 percent.

The success of RT-1-X signifies a paradigm shift, demonstrating that training a single model with diverse, cross-embodiment data dramatically enhances its performance on various robots.

Emergent skills: Leaping into the future

The experimentation did not stop there. Researchers explored emergent skills, delving into uncharted territories of robotic capabilities.

RT-2-X, an advanced version of the vision-language-action model, exhibited remarkable spatial understanding and problem-solving abilities. By incorporating data from different robots, RT-2-X demonstrated an expanded repertoire of tasks, showcasing the potential of shared learning in the robotic realm.

A responsible approach

Crucially, this research emphasises a responsible approach to the advancement of robotics. 

By openly sharing data and models, the global community can collectively elevate the field—transcending individual limitations and fostering an environment of shared knowledge and progress.

The future of robotics lies in mutual learning, where robots teach each other, and researchers learn from one another. The momentous achievement unveiled this week paves the way for a future where robots seamlessly adapt to diverse tasks, heralding a new era of innovation and efficiency.

(Photo by Brett Jordan on Unsplash)

See also: Amazon invests $4B in Anthropic to boost AI capabilities

The post Open X-Embodiment dataset and RT-X model aim to revolutionise robotics appeared first on AI News.

UK commits £13M to cutting-edge AI healthcare research
https://www.artificialintelligence-news.com/2023/08/10/uk-commits-13m-cutting-edge-ai-healthcare-research/
Thu, 10 Aug 2023 14:51:26 +0000

The UK has announced a £13 million investment in cutting-edge AI research within the healthcare sector.

The announcement, made by Technology Secretary Michelle Donelan, marks a major step forward in harnessing the potential of AI in revolutionising healthcare. The investment will empower 22 winning projects across universities and NHS trusts, from Edinburgh to Surrey, to drive innovation and transform patient care.

Dr Antonio Espingardeiro, IEEE member and software and robotics expert, comments:

“As it becomes more sophisticated, AI can efficiently conduct tasks traditionally undertaken by humans. The potential for the technology within the medical field is huge—it can analyse vast quantities of information and, when coupled with machine learning, search through records and infer patterns or anomalies in data, that would otherwise take decades for humans to analyse.

“We are just starting to see the beginning of a new era where machine learning could bring substantial value and transform the traditional role of the doctor. The true capabilities of this technology as an aid to the healthcare sector are yet to be fully realised. In the future, we may even be able to solve some of the biggest challenges and issues of our time.”
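As a toy illustration of the pattern-and-anomaly mining Espingardeiro describes – entirely synthetic numbers, nothing like a real clinical system – a simple z-score can flag the outlying record in a batch:

```python
# Flag anomalous readings in a batch of synthetic patient records
# using a z-score threshold. Illustrative only.
heart_rates = [72, 75, 70, 68, 74, 71, 73, 140, 69, 76]  # one planted outlier

mean = sum(heart_rates) / len(heart_rates)
var = sum((x - mean) ** 2 for x in heart_rates) / len(heart_rates)
std = var ** 0.5

# A reading more than two standard deviations from the mean is flagged.
anomalies = [x for x in heart_rates if abs(x - mean) / std > 2]
print(anomalies)  # [140]
```

Real medical AI layers far more sophisticated models on top of this idea, but the principle is the same: learn what “normal” looks like across many records, then surface the records that deviate from it.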

One of the standout projects receiving funding is University College London’s Centre for Interventional and Surgical Sciences. With a grant exceeding £500,000, its researchers aim to develop a semi-autonomous surgical robotics platform designed to enhance the removal of brain tumours. This pioneering technology promises to improve surgical outcomes, minimise complications, and expedite patient recovery times.

“With the increased adoption of AI and robotics, we will soon be able to deliver the scalability that the healthcare sector needs and establish more proactive care delivery,” added Espingardeiro.

The University of Sheffield’s project, backed by £463,000, is focused on a crucial aspect of healthcare: chronic nerve pain. Its innovative approach aims to widen and improve treatments for the condition, which affects one in ten adults over 30.

The University of Oxford’s project, bolstered by £640,000, seeks to expedite research into a foundational AI model for clinical risk prediction. By analysing an individual’s existing health conditions, this AI model could accurately forecast the likelihood of future health problems and revolutionise early intervention strategies.

Meanwhile, Heriot-Watt University in Edinburgh has secured £644,000 to develop a groundbreaking system that offers real-time feedback to trainee surgeons practising laparoscopy procedures, also known as keyhole surgeries. This technology promises to enhance the proficiency of aspiring surgeons and elevate the overall quality of healthcare.

Finally, the University of Surrey’s project – backed by £456,000 – will collaborate closely with radiologists to develop AI capable of enhancing mammogram analysis. By streamlining and improving this critical diagnostic process, AI could contribute to earlier cancer detection.

Ayesha Iqbal, IEEE senior member and engineering trainer at the Advanced Manufacturing Training Centre, said:

“The emergence of AI in healthcare has completely reshaped the way we diagnose, treat, and monitor patients.

Applications of AI in healthcare include finding new links between genetic codes, performing robot-assisted surgeries, improving medical imaging methods, automating administrative tasks, personalising treatment options, producing more accurate diagnoses and treatment plans, enhancing preventive care and quality of life, predicting and tracking the spread of infectious diseases, and helping combat epidemics and pandemics.”

With the UK healthcare sector already witnessing AI applications in improving stroke diagnosis, heart attack risk assessment, and more, the £13 million investment is poised to further accelerate transformative healthcare breakthroughs.

Health and Social Care Secretary Steve Barclay commented:

“AI can help the NHS improve outcomes for patients, with breakthroughs leading to earlier diagnosis, more effective treatments, and faster recovery. It’s already being used in the NHS in a number of areas, from improving diagnosis and treatment for stroke patients to identifying those most at risk of a heart attack.

This funding is yet another boost to help the UK lead the way in healthcare research. It comes on top of the £21 million we recently announced for trusts to roll out the latest AI diagnostic tools and the £123 million invested in 86 promising technologies through our AI in Health and Care Awards.”

However, the announcement was made the same week as NHS waiting lists hit a record high. Prime Minister Rishi Sunak made reducing waiting lists one of his five key priorities for 2023 on which to hold him “to account directly for whether it is delivered.” Hope is being pinned on technologies like AI to help tackle waiting lists.

This pivotal move is accompanied by the nation’s preparations to host the world’s first major international summit on AI safety, underscoring its commitment to responsible AI development.

Scheduled for later this year, the AI safety summit will provide a platform for international stakeholders to collaboratively address AI’s risks and opportunities.

As Europe’s AI leader, and the third-ranking globally behind the USA and China, the UK is well-positioned to lead these discussions and champion the responsible advancement of AI technology.

(Photo by National Cancer Institute on Unsplash)

See also: BSI publishes guidance to boost trust in AI for healthcare

The post UK commits £13M to cutting-edge AI healthcare research appeared first on AI News.

SK Telecom outlines its plans with AI partners
https://www.artificialintelligence-news.com/2023/06/20/sk-telecom-outlines-its-plans-with-ai-partners/
Tue, 20 Jun 2023 16:37:32 +0000

SK Telecom (SKT) is taking significant steps to solidify its position in the global AI ecosystem. 

The company recently held a meeting at its Silicon Valley headquarters with CEOs from four new AI partners – CMES, MakinaRocks, Scatter Lab, and FriendliAI – to discuss business cooperation and forge a path towards leadership in the AI industry.

SKT has been actively promoting AI transformation through strategic partnerships and collaborations with various AI companies. During MWC 2023, the company announced partnerships with seven AI companies: SAPEON, Bespin Global, Moloco, Konan Technology, Swit, Phantom AI, and Tuat.

During the meeting, SKT’s CEO Ryu Young-sang outlined the company’s AI vision and discussed its business plans with the AI partners. The executives from SKT and its AI partners engaged in in-depth discussions on major global AI trends, the latest technological achievements, ongoing R&D projects, and global business and investment opportunities.

One of the notable discussions took place between SKT and CMES, an AI-powered robotics company.

SKT and CMES exchanged views and ideas on pricing plans for ‘Robot as a Service (RaaS)’ and on subscription-based business models for AI-driven RaaS tailored to enterprises.

RaaS is gaining attention as a cost-effective alternative to additional manpower or infrastructure investment for automation. The demand for RaaS is expected to grow rapidly in sectors such as logistics, delivery, construction, and healthcare.
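The cost argument behind RaaS can be illustrated with a back-of-the-envelope comparison. All figures below are invented for illustration; they are not SKT or CMES pricing:

```python
# Hypothetical comparison of buying a robot outright versus a
# Robot-as-a-Service (RaaS) subscription. Every number here is an
# illustrative assumption, not vendor pricing.

def ownership_cost(years, purchase=80_000, annual_maintenance=8_000):
    """Total cost of buying a robot and maintaining it yourself."""
    return purchase + annual_maintenance * years

def raas_cost(years, monthly_fee=2_500):
    """Total cost of a RaaS subscription (maintenance included)."""
    return monthly_fee * 12 * years

for years in (1, 3, 5):
    own, rent = ownership_cost(years), raas_cost(years)
    cheaper = "RaaS" if rent < own else "ownership"
    print(f"{years} yr: buy ${own:,} vs RaaS ${rent:,} -> {cheaper} cheaper")
```

Under these assumed numbers, the subscription wins for short deployments while outright purchase wins over longer horizons, which is why RaaS is pitched as an alternative to up-front infrastructure investment.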

Furthermore, SKT aims to collaborate with Scatter Lab, a renowned AI startup known for its Lee Lu-da chatbot. The company plans to integrate an emotional AI agent into its AI service ‘A.’ (pronounced ‘A dot’).

Additionally, SKT discussed strategies for synergy creation with MakinaRocks, a startup specialising in industrial AI solutions, and FriendliAI, a startup that provides a platform for developing generative AI models. By joining forces, the companies aim to establish a leading position in the global AI market.

Ryu Young-sang, CEO of SKT, commented:

“Now with our AI partners on board, we have completed the blueprint for driving new growth in the global market.

We will work together to develop diverse cooperation opportunities in AI, and bring our AI technologies and services to the global market.”

By harnessing the expertise and technologies of its AI partners, SKT is well-positioned to lead the global AI ecosystem and deliver innovative AI solutions to the market.

(Photo by Brett Jordan on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

The post SK Telecom outlines its plans with AI partners appeared first on AI News.

Tesla’s AI supercomputer tripped the power grid
https://www.artificialintelligence-news.com/2022/10/03/tesla-ai-supercomputer-tripped-power-grid/
Mon, 03 Oct 2022

Tesla’s purpose-built AI supercomputer ‘Dojo’ is so powerful that it tripped the power grid.

Dojo was unveiled at Tesla’s annual AI Day last year but the project was still in its infancy. At AI Day 2022, Tesla unveiled the progress it has made with Dojo over the course of the year.

The supercomputer has transitioned from just a chip and training tiles into a full cabinet. Tesla claims that it can replace six GPU boxes with a single Dojo tile, which it says is cheaper than one GPU box.

Per tray, there are six Dojo tiles. Tesla claims that each tray is equivalent to “three to four fully loaded supercomputer racks”. Two trays can fit in a single Dojo cabinet with a host assembly.

Such a supercomputer naturally has a large power draw. Dojo requires so much power that it managed to trip the grid in Palo Alto.

“Earlier this year, we started load testing our power and cooling infrastructure. We were able to push it over 2 MW before we tripped our substation and got a call from the city,” said Bill Chang, Tesla’s Principal System Engineer for Dojo.

In order to function, Tesla had to build custom infrastructure for Dojo with its own high-powered cooling and power system.

An ‘ExaPOD’ (consisting of a few Dojo cabinets) has the following specs:

  • 1.1 EFLOPS of compute
  • 1.3 TB of SRAM
  • 13 TB of DRAM

Seven ExaPODs are currently planned to be housed in Palo Alto.
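Those headline figures can be sanity-checked against the hierarchy described above (six tiles per tray, two trays per cabinet). The cabinet count per ExaPOD is an assumption here; the article only says an ExaPOD consists of a few cabinets:

```python
# Sanity-check of the Dojo numbers reported above.
# From the article: 6 tiles per tray, 2 trays per cabinet,
# 1.1 EFLOPS per ExaPOD. The cabinets-per-ExaPOD figure is an
# assumption, not from the article.
TILES_PER_TRAY = 6
TRAYS_PER_CABINET = 2
CABINETS_PER_EXAPOD = 10          # assumption
EXAPOD_FLOPS = 1.1e18             # 1.1 EFLOPS

tiles = TILES_PER_TRAY * TRAYS_PER_CABINET * CABINETS_PER_EXAPOD
per_tile_pflops = EXAPOD_FLOPS / tiles / 1e15

print(f"{tiles} tiles per ExaPOD, ~{per_tile_pflops:.1f} PFLOPS per tile")
# -> 120 tiles per ExaPOD, ~9.2 PFLOPS per tile
```

Working backwards like this gives roughly 9 PFLOPS per tile under the assumed cabinet count.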

Dojo is purpose-built for AI and will greatly improve Tesla’s ability to train neural nets using video data from its vehicles. These neural nets will be critical for Tesla’s self-driving efforts and its humanoid robot ‘Optimus’, which also made an appearance during this year’s event.

Optimus

Optimus was also first unveiled last year and was even more in its infancy than Dojo. In fact, all it was at the time was a person in a spandex suit and some PowerPoint slides.

While it’s clear that Optimus still has a long way to go before it can do the shopping and carry out dangerous manual labour tasks, as Tesla envisions, we at least saw a working prototype of the robot at AI Day 2022.

“I do want to set some expectations with respect to our Optimus robot,” said Tesla CEO Elon Musk. “As you know, last year it was just a person in a robot suit. But, we’ve come a long way, and compared to that it’s going to be very impressive.”

Optimus can now walk around and, when suspended from an overhead apparatus, perform basic tasks like watering plants.

The Optimus prototype was reportedly developed in the past six months, and Tesla hopes to have a working design within the “next few months… or years”. Musk put the price tag at “probably less than $20,000”.

All the details of Optimus are still vague at the moment, but at least there’s more certainty around the Dojo supercomputer.


The post Tesla’s AI supercomputer tripped the power grid appeared first on AI News.

Chess robot breaks child’s finger after premature move
https://www.artificialintelligence-news.com/2022/07/25/chess-robot-breaks-childs-finger-after-premature-move/
Mon, 25 Jul 2022

A robot went rogue at a Moscow chess tournament and broke a kid’s finger after he made a move prematurely. 

The robot, which uses AI to play three chess games at once, grabbed and pinched the child’s finger. Despite several people rushing to help, the robot broke the child’s finger.

According to Moscow Chess Federation VP Sergey Smagin, the robot has been used for 15 years and this is the first time such an incident has occurred.

Reports suggest the robot expects its human rival to leave a set amount of time after it makes its play. The child played too quickly and the robot didn’t know how to handle the situation.

“There are certain safety rules and the child, apparently, violated them. When he made his move, he did not realise he first had to wait,” Smagin said. “This is an extremely rare case, the first I can recall.”
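The failure mode described here, an arm actuating while a human hand was still in its workspace, is a classic interlock problem. A minimal sketch of the kind of timing guard that could prevent it (hypothetical; the tournament robot’s actual software is not public):

```python
import time

class ArmInterlock:
    """Gate robot-arm actuation on workspace safety, not on turn order.

    Hypothetical sketch: the idea is that the arm moves only when the
    board area has been still for a settle period, regardless of whose
    turn the game logic thinks it is.
    """

    def __init__(self, settle_time_s=2.0):
        self.settle_time_s = settle_time_s
        self.last_motion_at = time.monotonic()

    def motion_detected(self):
        """Called by a vision/presence sensor whenever a hand is seen."""
        self.last_motion_at = time.monotonic()

    def safe_to_actuate(self):
        """True only once the workspace has been still long enough."""
        return time.monotonic() - self.last_motion_at >= self.settle_time_s

interlock = ArmInterlock(settle_time_s=2.0)
interlock.motion_detected()           # a hand enters the board area
print(interlock.safe_to_actuate())    # False: the arm must wait
```

Gating on sensed workspace state rather than on an expected waiting time means an unexpectedly quick human move cannot put a hand in the arm’s path.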

It doesn’t paint Russia’s robotics scene in the best light, and it’s quite surprising the story even made it past the country’s notorious censorship.

Fortunately, the child’s finger has been put in a cast and he is expected to make a quick and complete recovery. There doesn’t appear to be any lasting mental trauma either as he played again the next day.

A study in 2015 found that industrial robots kill one person each year in the US alone. As robots become ever more prevalent in our work and personal lives, that number is likely to increase.

Most injuries and fatalities with robots are from human error, so it’s always worth being cautious.

(Photo by GR Stocks on Unsplash)


The post Chess robot breaks child’s finger after premature move appeared first on AI News.

IBM’s AI-powered Mayflower ship crosses the Atlantic
https://www.artificialintelligence-news.com/2022/06/06/ibm-ai-powered-mayflower-ship-crosses-the-atlantic/
Mon, 06 Jun 2022

A groundbreaking AI-powered ship designed by IBM has successfully crossed the Atlantic, albeit not quite as planned.

The Mayflower – named after the ship which carried Pilgrims from Plymouth, UK to Massachusetts, US in 1620 – is a 50-foot crewless vessel that relies on AI and edge computing to navigate the often harsh and unpredictable oceans.

IBM’s Mayflower has been attempting to autonomously complete the voyage that its predecessor did over 400 years ago but has been beset by various problems.

The initial launch was planned for June 2021 but a number of technical glitches forced the vessel to return to Plymouth.

Back in April 2022, the Mayflower set off again. This time, an issue with the generator forced the boat to divert to the Azores Islands in Portugal.

The Mayflower was patched up and pressed on until late May when a problem developed with the charging circuit for the generator’s starter batteries. This time, a course for Halifax, Nova Scotia was charted.

More than five weeks after departing Plymouth, the modern Mayflower is now docked in Halifax. While it has yet to reach its final destination, the Mayflower has successfully crossed the Atlantic (hiccups aside).

While mechanically the ship leaves a lot to be desired, IBM says the autonomous systems have worked flawlessly—including the AI captain developed by MarineAI.

Instructing and controlling robotics to carry out mechanical repairs for any number of potential failures is beyond current AI systems. However, the fact that Mayflower’s onboard autonomous systems were able to successfully navigate the ocean and report back mechanical issues is an incredible achievement.

“It will be entirely responsible for its own navigation decisions as it progresses so it has very sophisticated software on it—AIs that we use to recognise the various obstacles and objects in the water, whether that’s other ships, boats, debris, land obstacles, or even marine life,” Robert High, VP and CTO of Edge Computing at IBM, told Edge Computing News in an interview.
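The decision-making High describes can be pictured as a rule over classified detections. The sketch below is invented for illustration; it is not MarineAI’s implementation, and the labels, ranges, and actions are all assumptions:

```python
# Toy sketch of obstacle-driven course decisions, illustrating the kind
# of reasoning an autonomous "AI captain" performs. NOT MarineAI's
# software; all thresholds and labels here are invented.

def decide(obstacles):
    """Pick an action from detections given as (label, bearing_deg, range_m).

    Negative bearings are to port, positive to starboard. Closest
    contacts are considered first.
    """
    for label, bearing, rng in sorted(obstacles, key=lambda o: o[2]):
        if rng < 50:                       # anything this close: stop
            return "all stop"
        if label == "vessel" and rng < 500:
            # give way by turning away from the contact's side
            return "alter course starboard" if bearing < 0 else "alter course port"
        if label == "debris" and rng < 200:
            return "alter course starboard"
    return "hold course"

print(decide([("vessel", -20, 400), ("debris", 10, 800)]))
# -> alter course starboard
```

A real marine autonomy stack layers collision-regulation (COLREGs) logic, sensor fusion, and route planning on top of a classifier; the point here is only the shape of the classify-then-decide loop.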

IBM designed Mayflower 2.0 with marine research nonprofit Promare. The ship uses a wind/solar hybrid propulsion system and features a range of sensors for scientific research on its journey including acoustic, nutrient, temperature, and water and air samplers.

You can find out more about the Mayflower and view live data and webcams from the ship here.


The post IBM’s AI-powered Mayflower ship crosses the Atlantic appeared first on AI News.
