Conversational AI Archives – AI News

Amazon trains 980M parameter LLM with ‘emergent abilities’
15 February 2024

Researchers at Amazon have trained a new large language model (LLM) for text-to-speech that they claim exhibits “emergent” abilities. 

The 980-million-parameter model, called BASE TTS, is the largest text-to-speech model yet created. The researchers trained models of various sizes on up to 100,000 hours of public domain speech data to see whether they would observe the same performance leaps that occur in natural language processing models once they grow past a certain scale.

They found that their medium-sized, 400-million-parameter model – trained on 10,000 hours of audio – showed a marked improvement in versatility and robustness on tricky test sentences.

The test sentences contained complex lexical, syntactic, and paralinguistic features like compound nouns, emotions, foreign words, and punctuation that normally trip up text-to-speech systems. While BASE TTS did not handle them perfectly, it made significantly fewer errors in stress, intonation, and pronunciation than existing models.

“These sentences are designed to contain challenging tasks—none of which BASE TTS is explicitly trained to perform,” explained the researchers. 

The largest, 980-million-parameter version of the model – trained on 100,000 hours of audio – did not demonstrate further abilities beyond those of the 400-million-parameter version.

While the work remains experimental, the creation of BASE TTS demonstrates that these models can reach new versatility thresholds as they scale—an encouraging sign for conversational AI. The researchers plan further work to identify the optimal model size for emergent abilities.

The model is also designed to be lightweight and streamable, packaging emotional and prosodic data separately. This could allow the natural-sounding spoken audio to be transmitted across low-bandwidth connections.
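To make the streamability claim concrete, here is a minimal sketch of how such a delivery model might work. The packet layout, field names, and JSON encoding are illustrative assumptions of our own, not BASE TTS’s actual format:

```python
import json

def stream_utterance(speech_codes, prosody, chunk_size=16):
    """Yield small packets that pair compact speech codes with a
    separately packaged prosody/emotion track, so a client on a
    low-bandwidth link can start synthesising before the full
    utterance arrives."""
    for i in range(0, len(speech_codes), chunk_size):
        yield json.dumps({
            "codes": speech_codes[i:i + chunk_size],   # compact audio tokens
            "prosody": prosody[i:i + chunk_size],      # emotional/prosodic data
        }).encode()

# Toy usage: 64 dummy codes, all tagged with a neutral prosody label.
for packet in stream_utterance(list(range(64)), ["neutral"] * 64):
    pass  # in practice, write each packet to the network connection
```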

The full BASE TTS paper is available on arXiv.

(Photo by Nik on Unsplash)

See also: OpenAI rolls out ChatGPT memory to select users

OpenAI set to unveil custom GPT-4 chatbot creator
6 November 2023

As OpenAI gears up for its inaugural developer conference, some major announcements appear to have leaked.

The leak includes screenshots and videos showcasing a custom chatbot creator utilising GPT-4. This advanced version of ChatGPT boasts features such as web browsing and data analysis, enhancing its capabilities significantly.

According to the leaked information, OpenAI will introduce a new marketplace where users can share their custom chatbots or explore creations made by others.

The leaker, a Twitter user named CHOI, provided a summary of the anticipated updates.

Additionally, SEO tools developer Tibor Blaho shared a video demonstrating the user interface of the new feature—revealing a GPT Builder option that enables users to input prompts and create bespoke chatbots.

The GPT Builder interface offers a user-friendly experience, allowing individuals to select a default language, tone, and writing style for their chatbot. Users can configure the bot by providing a name, description, and instructions, along with the ability to upload files for a personalised knowledge base.

The tool also allows toggling of features such as web browsing and image generation, giving users unprecedented control over their chatbot’s capabilities. Custom actions can be added to enhance the bot’s functionality.
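Piecing the leaked screenshots together, a builder configuration might look something like the following mock-up. Every field name here is a hypothetical stand-in for illustration; the leak did not reveal OpenAI’s actual schema:

```python
# Hypothetical GPT Builder configuration, reconstructed from the leaked
# screenshots described above. Field names are illustrative guesses.
custom_gpt = {
    "name": "Recipe Helper",
    "description": "Suggests dishes based on ingredients a user has on hand.",
    "instructions": "Be concise. List ingredients before steps.",
    "default_language": "en-GB",
    "tone": "friendly",
    "writing_style": "conversational",
    "knowledge_files": ["family_recipes.pdf"],  # personalised knowledge base
    "capabilities": {
        "web_browsing": True,        # toggleable features
        "image_generation": False,
    },
    "custom_actions": [],            # e.g. calls out to external APIs
}
```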

Furthermore, the leaked information suggests that OpenAI plans to launch an enterprise-level “Team” subscription plan with both “Flexible” and “Annual” options.

The Team plan reportedly offers benefits such as unlimited high-speed GPT-4 usage and extended context capabilities. Pricing is said to be $25 per user, per month on the annual subscription and $30 per user, per month on the flexible option, with a minimum of three users, meaning an annual-plan team would start at $75 per month.

OpenAI has recently rolled out several beta features for ChatGPT, including live web results, image generation, and voice chat.

The company is set to provide a preview of the new tools at the upcoming developer conference, offering the tech community a firsthand look at the future of conversational AI.

(Photo by Jem Sahagun on Unsplash)

See also: NIST announces AI consortium to shape US policies

Microsoft acquires Nuance to usher in ‘new era of outcomes-based AI’
8 March 2022

Microsoft has completed its acquisition of Siri backend creator Nuance in a bumper deal that it says will usher in a “new era of outcomes-based AI”.

“Completion of this significant and strategic acquisition brings together Nuance’s best-in-class conversational AI and ambient intelligence with Microsoft’s secure and trusted industry cloud offerings,” said Scott Guthrie, Executive Vice President of the Cloud + AI Group at Microsoft. 

“This powerful combination will help providers offer more affordable, effective, and accessible healthcare, and help organisations in every industry create more personalised and meaningful customer experiences. I couldn’t be more pleased to welcome the Nuance team to our Microsoft family.”

Nuance became a household name (in techie households, anyway) for creating the speech recognition engine that powers Apple’s smart assistant, Siri. However, Nuance has been in the speech recognition business since 2001, when it was known as ScanSoft.

While it may not have made many big headlines in recent years, Nuance has continued to make some impressive advancements—which caught the attention of Microsoft.

Microsoft announced its intention to acquire Nuance for $19.7 billion last year, in what was the company’s second-largest deal, behind its $26.2 billion acquisition of LinkedIn (both deals would be blown out of the water by Microsoft’s proposed $70 billion purchase of Activision Blizzard).

The proposed acquisition of Nuance caught the attention of global regulators. It was cleared in the US relatively quickly, while the EU’s regulator got in the festive spirit and cleared the deal just prior to last Christmas. The UK’s Competition and Markets Authority finally gave it a thumbs-up last week.

Regulators examined whether there may be anti-competition concerns in some verticals where both companies are active, such as healthcare. However, after investigation, the regulators determined that competition shouldn’t be affected by the deal.

The EU, for example, determined that “competing transcription service providers in healthcare do not depend on Microsoft for cloud computing services” and that “transcription service providers in the healthcare sector are not particularly important users of cloud computing services”.

Furthermore, the EU’s regulator concluded:

  • Microsoft-Nuance will continue to face stiff competition from rivals in the future.
  • Microsoft would have neither the ability nor the incentive to foreclose existing market solutions.
  • Nuance can only use the data it collects for its own services.
  • That data will not provide Microsoft with an advantage that shuts out competing software providers.

The companies appear keen to ensure that people are aware the deal is about more than just healthcare.

“Combining the power of Nuance’s deep vertical expertise and proven business outcomes across healthcare, financial services, retail, telecommunications, and other industries with Microsoft’s global cloud ecosystems will enable us to accelerate our innovation and deploy our solutions more quickly, more seamlessly, and at greater scale to solve our customers’ most pressing challenges,” said Mark Benjamin, CEO of Nuance.

Benjamin will remain the CEO of Nuance and will report to Guthrie.

(Photo by Omid Armin on Unsplash)

Why AI needs human intervention
19 January 2022

In today’s tight labour market and hybrid work environment, organisations are increasingly turning to AI to support various functions within their business, from delivering more personalised experiences to improving operations and productivity to helping organisations make better and faster decisions. That is why the worldwide market for AI software, hardware, and services is expected to surpass $500 billion by 2024, according to IDC.

Yet, many enterprises aren’t ready to have their AI systems run independently and entirely without human intervention – nor should they do so. 

In many instances, enterprises simply don’t have sufficient expertise in the systems they use, as AI technologies are extraordinarily complex. In other instances, rudimentary AI is built into enterprise software; these built-in tools can be fairly static and remove the control over data parameters that most organisations need. But even the most AI-savvy organisations keep humans in the equation to avoid risks and reap the maximum benefits of AI.

AI Checks and Balances

There are clear ethical, regulatory, and reputational reasons to keep humans in the loop. Inaccurate data can be introduced over time, leading to poor decisions or even dire circumstances in some cases. Biases can also creep into the system, whether introduced while training the AI model, as a result of changes in the training environment, or through trending bias, where the AI system reacts to recent activities more than previous ones. Moreover, AI is often incapable of understanding the subtleties of a moral decision.

Take healthcare, for instance. The industry perfectly illustrates how AI and humans can work together to improve outcomes, or cause great harm if humans are not fully engaged in the decision-making process. For example, in diagnosing or recommending a care plan for a patient, AI is well suited to making a recommendation to the doctor, who then evaluates whether that recommendation is sound before counselling the patient.

Having a way for people to continually monitor AI responses and accuracy will avoid flaws that could lead to harm or catastrophe, while providing a means of continuous training so the models keep improving. That is why IDC expects more than 70% of G2000 companies to have formal programmes for monitoring their digital trustworthiness by 2022.

Models for Human-AI Collaboration

Human-in-the-Loop (HitL) Reinforcement Learning and Conversational AI are two examples of how human intervention supports AI systems in making better decisions.

HitL allows AI systems to leverage machine learning to learn by observing humans dealing with real-life work and use cases. HitL models are like traditional AI models except they are continuously self-developing and improving based on human feedback while, in some cases, augmenting human interactions. It provides a controlled environment that limits the inherent risk of biases—such as the bandwagon effect—that can have devastating consequences, especially in crucial decision-making processes.
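As a rough illustration of the pattern (not any particular vendor’s system), the sketch below bootstraps a model on labelled data, automates only its confident predictions, routes uncertain cases to a human, and periodically retrains on the human’s answers. The confidence threshold and retraining cadence are arbitrary assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Bootstrap on an initial labelled set (random stand-in data).
X_seed = np.random.rand(100, 4)
y_seed = np.random.randint(0, 2, 100)
model = LogisticRegression().fit(X_seed, y_seed)

CONFIDENCE_THRESHOLD = 0.85   # arbitrary cut-off for automation
feedback_X, feedback_y = [], []

def ask_human(x):
    """Stand-in for a human review queue."""
    return int(input(f"Label for {x}? (0/1): "))

def predict_with_human(x):
    proba = model.predict_proba([x])[0]
    if proba.max() >= CONFIDENCE_THRESHOLD:
        return int(proba.argmax())          # confident: automate the decision
    label = ask_human(x)                    # uncertain: escalate to a human
    feedback_X.append(x)
    feedback_y.append(label)
    if len(feedback_y) % 20 == 0:           # periodically fold feedback back in
        model.fit(np.vstack([X_seed, feedback_X]),
                  np.concatenate([y_seed, feedback_y]))
    return label
```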

We can see the value of the HitL model in industries that manufacture critical parts for vehicles or aircraft requiring equipment that is up to standard. In situations like this, machine learning increases the speed and accuracy of inspections, while human oversight provides added assurances that parts are safe and secure for passengers.

Conversational AI, on the other hand, provides near-human-like communication. It can offload simpler problems from employees while knowing when to escalate more complex issues to humans. Contact centres provide a prime example.

When a customer reaches out to a contact centre, they have the option to call, text, or chat virtually with a representative. The virtual agent listens and understands the needs of the customer and engages back and forth in a conversation. It uses machine learning and AI to decide what needs to be done based on what it has learned from prior experience. Most AI systems within contact centres generate speech to help communicate with the customer and mimic the feeling of a human doing the typing or talking.

For most situations, a virtual agent is enough to help service customers and resolve their problems. However, there are cases where the AI can stop typing or talking and make a seamless transfer to a live representative, who takes over the call or chat. Even in these cases, the AI system can shift from automation to augmentation by continuing to listen to the conversation and providing recommendations that aid the live representative in their decisions.
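A toy version of that routing logic might look like this; the intent names, sentiment score, and suggestion helper are invented for the sketch rather than drawn from any real contact-centre product:

```python
ROUTINE_INTENTS = {"check_balance", "reset_password", "track_order"}

def suggest_reply(transcript: list) -> str:
    """Stand-in for a model that drafts a recommended response."""
    return "Apologise and offer to investigate the billing discrepancy."

def handle_turn(intent: str, sentiment: float, transcript: list) -> str:
    # Routine request from a reasonably calm customer: automate it.
    if intent in ROUTINE_INTENTS and sentiment > -0.5:
        return f"bot: resolving '{intent}' automatically"
    # Complex issue or frustrated customer: seamless transfer, with the AI
    # shifting to augmentation by suggesting replies to the live agent.
    return f"human agent takes over (AI suggests: {suggest_reply(transcript)})"

print(handle_turn("track_order", 0.2, []))            # handled by the bot
print(handle_turn("billing_dispute", -0.8, ["..."]))  # escalated to a human
```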

Going beyond conversational AI with cognitive AI, these systems can learn to understand the emotional state of the other party, handle complex dialogue, provide real-time translation and even adjust based on the behaviour of the other person, taking human assistance to the next level of sophistication.

Blending Automation and Human Interaction Leads to Augmented Intelligence

AI is best applied when it is both monitored by people and augments them. When that happens, people move up the skills continuum, taking on more complex challenges, while the AI continually learns, improves, and is kept in check, avoiding potentially harmful effects. Using models like HitL, conversational AI, and cognitive AI in collaboration with real people who possess expertise, ingenuity, empathy, and moral judgement ultimately leads to augmented intelligence and more positive outcomes.

(Photo by Arteum.ro on Unsplash)

Hi Auto brings conversational AI to drive-thrus using Intel technology
20 May 2021

Hi Auto is increasing the efficiency of drive-thrus with a conversational AI system powered by Intel technologies.

Drive-thru usage has rocketed over the past year with many indoor restaurants closed due to pandemic-induced restrictions. In fact, research suggests that drive-thru orders in the US alone increased by 22 percent in 2020.

Long queues at drive-thrus have therefore become part of the “new normal” and fast food is no longer the convenient alternative to cooking after a long day of Zoom calls.

Israel-based Hi Auto has created a conversational AI system that greets drive-thru guests, answers their questions, suggests menu items, and enters their orders into the point-of-sale system. If an unrelated question is asked – or the customer orders something that is not on the standard menu – the AI system automatically switches over to a human employee.
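The handoff described above boils down to a menu lookup with a human fallback. Here is a toy sketch, with an invented menu and a stand-in for the point-of-sale call:

```python
MENU = {"famous chicken meal": 8.99, "coleslaw": 2.49, "sweet tea": 1.99}

def take_order(utterance: str, pos_order: list) -> str:
    item = utterance.lower().strip()
    if item in MENU:
        pos_order.append((item, MENU[item]))  # enter into the point-of-sale system
        return f"Added {item} -- anything else?"
    # Off-menu item or unrelated question: switch over to a human employee.
    return "transferring to staff"

order = []
print(take_order("Sweet tea", order))             # handled by the AI
print(take_order("Do you take bitcoin?", order))  # escalated to a human
```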

The first restaurant to trial the system is Lee’s Famous Recipe Chicken in Ohio.

Chuck Doran, Owner and Operator at Lee’s Famous Recipe Chicken, said:

“The automated AI drive-thru has impacted my business in a simple way. We don’t have customers waiting anymore. We greet them as soon as they get to the board and the order is taken correctly.

It’s amazing to see the level of accuracy with the voice recognition technology, which helps speed up service. It can even suggest additional items based on the order, which helps us increase our sales.

If a person is running the drive-thru, they may suggest a sale in one out of 20 orders. With Hi Auto, it happens in every transaction where it’s feasible. So, we see improvements in our average check, service time, and improvements in consistency and customer service.

And, because the cashier is now less stressed, she can focus on customer service as well. A less-burdened employee will be a happier employee and we want happy employees interacting with our customers.”

By reducing the number of staff needed for customer service, more employees can be put to work on fulfilling orders to serve as many people as possible. A recent survey of small businesses found that 42 percent have job openings that can’t be filled, so ensuring that every worker is optimally utilised is critical.

Roy Baharav, CEO and Co-Founder at Hi Auto, commented:

“At Lee’s, we met a team that puts its heart and soul into serving its customers.

We operationalised our AI system based on what we learned from the owners, general managers, and employees. They have embraced the solution and within a short time began reaping the benefits.

We are now applying the process and lessons learned at Lee’s at additional customer sites.”

Hi Auto’s solution runs on Intel Xeon processors in the cloud and on Intel NUC devices.

Joe Jensen, VP in the Internet of Things Group and GM of Retail, Banking, Hospitality and Education at Intel, said:

“We’re increasingly seeing restaurants interested in leveraging AI to deliver actionable data and personalise customer experiences.

With Hi Auto’s solution powered by Intel technology, quick-service restaurants can help their employees be more productive while increasing customer satisfaction and, ultimately, their bottom line.”

Lee’s Famous Recipe Chicken plans to roll out Hi Auto’s solution at more of its branches.

Going forward, Hi Auto plans to add Spanish language support and continue optimising its conversational AI solution. The company says pilots are already underway with some of the largest quick-service restaurants.

(Image Credit: Lee’s Famous Recipe Chicken)

Meena is Google’s first truly conversational AI
29 January 2020

Google is attempting to build the first digital assistant that can truly hold a conversation with an AI project called Meena.

Digital assistants like Alexa and Siri are programmed to pick up keywords and provide scripted responses. Google has previously demonstrated its work towards a more natural conversation with its Duplex project but Meena should offer another leap forward.

Meena is a neural network with 2.6 billion parameters. Google claims Meena is able to handle multiple turns in a conversation (everyone has that friend who goes off on multiple tangents during the same conversation, right?).

Google published its work on the e-print repository arXiv on Monday in a paper called “Towards a Human-like Open-Domain Chatbot”.

Google released a neural network architecture called Transformer in 2017, and it is widely acknowledged to underpin many of the best language models available. A variation of Transformer, along with a mere 40 billion English words, was used to train Meena.

Google also debuted a metric alongside Meena called Sensibleness and Specificity Average (SSA) which measures the ability of agents to maintain a conversation.

Meena scores 79 percent using the new SSA metric. For comparison, Mitsuku – a Loebner Prize-winning AI agent built on Pandorabots – scored 56 percent.

This result brings Meena’s conversational ability close to that of humans, who score around 86 percent on average using the SSA metric.
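As described in the paper, human raters judge every response on two questions: does it make sense in context, and is it specific to that context rather than a vague catch-all? SSA averages the two rates. A rough sketch of the calculation, with made-up labels:

```python
# Made-up rater labels for four chatbot responses.
responses = [
    {"sensible": True,  "specific": True},   # on-topic, contextual reply
    {"sensible": True,  "specific": False},  # e.g. a generic "I don't know"
    {"sensible": False, "specific": False},  # nonsensical reply
    {"sensible": True,  "specific": True},
]

sensibleness = sum(r["sensible"] for r in responses) / len(responses)
specificity = sum(r["specific"] for r in responses) / len(responses)
ssa = (sensibleness + specificity) / 2
print(f"Sensibleness {sensibleness:.0%}, Specificity {specificity:.0%}, SSA {ssa:.1%}")
# -> Sensibleness 75%, Specificity 50%, SSA 62.5%
```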

We don’t yet know when Google intends to debut Meena’s technology in its products but, as the digital assistant war heats up, we’re sure the company is as eager to release it as we are to use it.

Watch out Google Duplex, Microsoft also has a chatty AI
23 May 2018

Not content with being outdone by Google’s impressive (yet creepy) Duplex demo, Microsoft has shown it also has an AI capable of making human-like phone calls.

The company first launched its XiaoIce project back in August 2017. In April, Microsoft said it had achieved full duplexing — the ability to speak and listen at the same time, similar to humans.

Microsoft’s announcement was made before Google’s demonstration earlier this month but, unlike Google, the company had nothing to show at the time.

XiaoIce has now been demonstrated in action during a London event.

The chatbot is only available in China at this time, but it’s become incredibly popular with more than 500 million users.

XiaoIce also features over 230 skills and has been used for tasks such as creating news and hosting radio programmes as part of its ‘Content Creation Platform’.

In a blog post, Microsoft VP of AI Harry Shum revealed that more than 600,000 people have spoken on the phone with XiaoIce since it launched in August.

“Most intelligent agents today like Alexa or Siri focus on IQ or task completion, providing basic information like weather or traffic,” wrote Shum. “But we need agents and bots to balance the smarts of IQ with EQ – our emotional intelligence.”

“When we communicate, we use tone of voice, word play, and humour, things that are very difficult for computers to understand. However, Xiaoice has the ability to have human-like verbal conversations, which the industry calls full duplex.”

As many have called for since the Duplex demo, and Google has promised, Microsoft ensures a human participant is aware they’re speaking to an AI.

One thing we’d love to see is a conversation between XiaoIce and Google Duplex to see how well they each hold up. However, let’s keep our hands on the kill switch in case world domination becomes a topic.

What are your thoughts on conversational AIs like XiaoIce and Duplex? Let us know in the comments.
