programming Archives - AI News

GPT-4o delivers human-like AI interaction with text, audio, and vision integration (14 May 2024)

OpenAI has launched its new flagship model, GPT-4o, which seamlessly integrates text, audio, and visual inputs and outputs, promising to enhance the naturalness of machine interactions.

GPT-4o, where the “o” stands for “omni,” is designed to cater to a broader spectrum of input and output modalities. “It accepts as input any combination of text, audio, and image and generates any combination of text, audio, and image outputs,” OpenAI announced.

Users can expect audio responses in as little as 232 milliseconds, with an average response time of 320 milliseconds, mirroring human conversational speed.

Pioneering capabilities

The introduction of GPT-4o marks a leap from its predecessors by processing all inputs and outputs through a single neural network. This approach enables the model to retain critical information and context that were previously lost in the separate model pipeline used in earlier versions.

Prior to GPT-4o, ‘Voice Mode’ could handle audio interactions with latencies of 2.8 seconds for GPT-3.5 and 5.4 seconds for GPT-4. The previous setup involved three distinct models: one for transcribing audio to text, another for textual responses, and a third for converting text back to audio. This segmentation led to loss of nuances such as tone, multiple speakers, and background noise.

As an integrated solution, GPT-4o boasts notable improvements in vision and audio understanding. It can perform more complex tasks such as harmonising songs, providing real-time translations, and even generating outputs with expressive elements like laughter and singing. Examples of its broad capabilities include preparing for interviews, translating languages on the fly, and generating customer service responses.

Nathaniel Whittemore, Founder and CEO of Superintelligent, commented: “Product announcements are going to inherently be more divisive than technology announcements because it’s harder to tell if a product is going to be truly different until you actually interact with it. And especially when it comes to a different mode of human-computer interaction, there is even more room for diverse beliefs about how useful it’s going to be.

“That said, the fact that there wasn’t a GPT-4.5 or GPT-5 announced is also distracting people from the technological advancement that this is a natively multimodal model. It’s not a text model with a voice or image addition; it is a multimodal token in, multimodal token out. This opens up a huge array of use cases that are going to take some time to filter into the consciousness.”

Performance and safety

GPT-4o matches GPT-4 Turbo performance levels in English text and coding tasks but significantly outperforms it in non-English languages, making it a more inclusive and versatile model. It sets a new benchmark in reasoning with a high score of 88.7% on 0-shot CoT MMLU (general knowledge questions) and 87.2% on the 5-shot no-CoT MMLU.

The model also excels in audio and translation benchmarks, surpassing previous state-of-the-art models like Whisper-v3. In multilingual and vision evaluations, it demonstrates superior performance, enhancing OpenAI’s multilingual, audio, and vision capabilities.

OpenAI has built robust safety measures into GPT-4o by design, incorporating techniques to filter training data and refining behaviour through post-training safeguards. The model has been assessed through a Preparedness Framework and complies with OpenAI’s voluntary commitments. Evaluations in areas like cybersecurity, persuasion, and model autonomy indicate that GPT-4o does not exceed a ‘Medium’ risk level in any category.

Further safety assessments involved extensive external red teaming with over 70 experts in various domains, including social psychology, bias, fairness, and misinformation. This comprehensive scrutiny aims to mitigate risks introduced by the new modalities of GPT-4o.

Availability and future integration

Starting today, GPT-4o’s text and image capabilities are available in ChatGPT—including a free tier and extended features for Plus users. A new Voice Mode powered by GPT-4o will enter alpha testing within ChatGPT Plus in the coming weeks.

Developers can access GPT-4o through the API for text and vision tasks, benefiting from its doubled speed, halved price, and enhanced rate limits compared to GPT-4 Turbo.
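
For developers getting started, here is a minimal sketch of such an API call using the official OpenAI Python SDK. The model name "gpt-4o" follows OpenAI's published naming; the prompt and image URL are placeholder examples rather than anything from the announcement.

```python
# Minimal sketch: a combined text-and-vision request to GPT-4o via the
# OpenAI Python SDK. The image URL is a placeholder; replace with your own.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe what is happening in this image."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```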

OpenAI plans to expand GPT-4o’s audio and video functionalities to a select group of trusted partners via the API, with broader rollout expected in the near future. This phased release strategy aims to ensure thorough safety and usability testing before making the full range of capabilities publicly available.

“It’s hugely significant that they’ve made this model available for free to everyone, as well as making the API 50% cheaper. That is a massive increase in accessibility,” explained Whittemore.

OpenAI invites community feedback to continuously refine GPT-4o, emphasising the importance of user input in identifying and closing gaps where GPT-4 Turbo might still outperform.

(Image Credit: OpenAI)

See also: OpenAI takes steps to boost AI-generated content transparency

OpenAI makes GPT-4 Turbo with Vision API generally available (10 April 2024)

OpenAI has announced that its powerful GPT-4 Turbo with Vision model is now generally available through the company’s API, opening up new opportunities for enterprises and developers to integrate advanced language and vision capabilities into their applications.

The launch of GPT-4 Turbo with Vision on the API follows the initial release of GPT-4’s vision and audio upload features last September and the unveiling of the turbocharged GPT-4 Turbo model at OpenAI’s developer conference in November.

GPT-4 Turbo promises significant speed improvements, larger input context windows of up to 128,000 tokens (equivalent to about 300 pages), and increased affordability for developers.

A key enhancement is that API requests can now use the model’s vision recognition and analysis capabilities together with JSON mode and function calling. This allows developers to generate JSON snippets that can automate actions within connected apps, such as sending emails, making purchases, or posting online. However, OpenAI strongly recommends building user confirmation flows before taking actions that impact the real world.
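
To illustrate that flow, here is a brief sketch using the OpenAI Python SDK: a vision request whose result can trigger a JSON function call. The "send_email" tool, the email address, and the receipt image URL are illustrative assumptions, not details from OpenAI's announcement.

```python
# Sketch of a vision request that can also trigger a JSON function call via
# the chat completions API. The "send_email" tool, email address, and image
# URL are illustrative assumptions, not details from OpenAI's announcement.
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "send_email",  # hypothetical action implemented by the app
        "description": "Send an email summarising the analysed image",
        "parameters": {
            "type": "object",
            "properties": {
                "to": {"type": "string"},
                "body": {"type": "string"},
            },
            "required": ["to", "body"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4-turbo",  # GA alias for GPT-4 Turbo with Vision
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Summarise this receipt and email it to billing@example.com."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/receipt.png"}},
        ],
    }],
    tools=tools,
)

# As the article notes, confirm with the user before acting on any tool call.
print(response.choices[0].message.tool_calls)
```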

Several startups are already leveraging GPT-4 Turbo with Vision, including Cognition, whose AI coding agent Devin relies on the model to automatically generate full code.

Healthify, a health and fitness app, uses the model to provide nutritional analysis and recommendations based on photos of meals.

TLDraw, a UK-based startup, employs GPT-4 Turbo with Vision to power its virtual whiteboard and convert user drawings into functional websites.

Despite facing stiff competition from newer models such as Anthropic’s Claude 3 Opus and Google’s Gemini Advanced, the API launch should help solidify OpenAI’s position in the enterprise market as developers await the company’s next large language model.

(Photo by v2osk)

See also: Stability AI unveils 12B parameter Stable LM 2 model and updated 1.6B variant

Stability AI releases Stable Code 3B for enhanced coding assistance (17 January 2024)

Stability AI has announced the release of Stable Code 3B, an upgraded three billion parameter AI system for automatic code generation and completion.

With enhancements like larger context size and improved completion quality, Stable Code 3B aims to push the boundaries of AI-assisted software development.

At just three billion parameters, Stable Code 3B is designed to run efficiently on readily available hardware like laptops—unlike larger models which require expensive specialised chips. Despite the smaller size, the company claims the model matches or exceeds the code completion quality of models more than twice its size.

The system builds on Stability AI’s Stable LM natural language model with additional training on software engineering data like code repositories and programmer forums. It covers 18 programming languages including Python, JavaScript, Java, C++, and Go.

The model’s training was optimised with Rotary Position Embeddings (RoPE), expanding the context size for improved performance. This technique, also employed by Meta’s Llama 2 Long, allows for context lengths of up to 100,000 tokens.
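
As a rough illustration of what rotary position embeddings do (not Stability AI's implementation), the sketch below rotates each pair of query/key dimensions by a position-dependent angle, which is how RoPE encodes token positions.

```python
# Illustrative sketch of rotary position embeddings (RoPE); not Stability AI's code.
import numpy as np

def rotary_embed(x: np.ndarray, base: float = 10000.0) -> np.ndarray:
    """Apply RoPE to a (seq_len, dim) array of query or key vectors (dim must be even)."""
    seq_len, dim = x.shape
    half = dim // 2
    # Each dimension pair gets its own rotation frequency.
    inv_freq = 1.0 / (base ** (np.arange(half) / half))
    angles = np.outer(np.arange(seq_len), inv_freq)  # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    # Rotate each (x1, x2) pair by an angle that grows with the token's position.
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

if __name__ == "__main__":
    q = np.random.randn(8, 64)
    print(rotary_embed(q).shape)  # (8, 64)
```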

Beyond simply suggesting new lines of code, it can also fill in large missing sections in existing code. This advanced ability is known as Fill in the Middle (FIM) and allows it to automatically write entire functions or components.
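
Below is a minimal sketch of how FIM prompting is commonly exposed in code models, using the Hugging Face transformers library. The Hugging Face model ID and the <fim_prefix>/<fim_suffix>/<fim_middle> control tokens follow a convention used by several code models and should be treated as assumptions; check the published model card before relying on them.

```python
# Sketch of fill-in-the-middle (FIM) prompting with Hugging Face transformers.
# Model ID and FIM control tokens are assumptions; verify against the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stable-code-3b"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prefix = "def mean(xs):\n    "
suffix = "\n    return total / len(xs)\n"
prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
# Print only the newly generated middle section.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```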

The field of AI-generated code has attracted intense interest from tech giants like Microsoft, OpenAI, and Meta. Stability AI’s new system outperforms comparable models like StarCoder, establishing the company as a leader in this fast-moving space.

With impressive benchmarks and increased accessibility from its efficient size, Stable Code 3B aims to bring enhanced AI code completion to a wider audience. Its arrival promises to further accelerate the integration of generative AI into software development workflows across industries.

With systems like Stable Code 3B automating rote coding tasks, developers stand to become more productive, creative, and can focus their efforts on more complex challenges.

(Photo by Joan Gamell on Unsplash)

See also: IMF: AI could boost growth but worsen inequality

OpenAI introduces GPT-4 Turbo, platform enhancements, and reduced pricing (7 November 2023)

OpenAI has announced a slew of new additions and improvements to its platform, alongside reduced pricing, aimed at empowering developers and enhancing user experience.

Following yesterday’s leak of a custom GPT-4 chatbot creator, OpenAI unveiled several other key features during its DevDay that promise a transformative impact on the landscape of AI applications:

  • GPT-4 Turbo: OpenAI introduced the preview of GPT-4 Turbo, the next generation of its renowned language model. This new iteration boasts enhanced capabilities and an extensive knowledge base encompassing world events up until April 2023.
    • One of GPT-4 Turbo’s standout features is the impressive 128K context window, allowing it to process the equivalent of more than 300 pages of text in a single prompt.
    • Notably, OpenAI has optimised the pricing structure, making GPT-4 Turbo 3x cheaper for input tokens and 2x cheaper for output tokens compared to its predecessor.
  • Assistants API: OpenAI also unveiled the Assistants API, a tool designed to simplify the process of building agent-like experiences within applications.
    • The API equips developers with the ability to create purpose-built AIs with specific instructions, leveraging additional knowledge and calling models and tools to perform tasks (a minimal usage sketch follows this list).
  • Multimodal capabilities: OpenAI’s platform now supports a range of multimodal capabilities, including vision, image creation (DALL·E 3), and text-to-speech (TTS).
    • GPT-4 Turbo can process images, opening up possibilities such as generating captions, detailed image analysis, and reading documents with figures.
    • Additionally, DALL·E 3 integration allows developers to create images and designs programmatically, while the text-to-speech API enables the generation of human-quality speech from text.
  • Pricing overhaul: OpenAI has significantly reduced prices across its platform, making it more accessible to developers.
    • GPT-4 Turbo input tokens are now 3x cheaper than its predecessor at $0.01 per 1,000 tokens, and output tokens are 2x cheaper at $0.03 per 1,000 tokens. Similar reductions apply to GPT-3.5 Turbo, catering to various user requirements and ensuring affordability.
  • Copyright Shield: To bolster customer protection, OpenAI has introduced Copyright Shield.
    • This initiative sees OpenAI stepping in to defend customers and cover the associated legal costs if they face copyright infringement claims related to the generally available features of ChatGPT Enterprise and the developer platform.
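
As referenced in the Assistants API item above, here is a minimal sketch of the agent-building flow using OpenAI's Python SDK. The assistant's name, instructions, and the choice of the code interpreter tool are illustrative assumptions rather than details from the announcement.

```python
# Sketch of the Assistants API flow: create an assistant, start a thread,
# add a user message, and run the assistant. Names and instructions are examples.
from openai import OpenAI

client = OpenAI()

assistant = client.beta.assistants.create(
    name="Data helper",                 # hypothetical assistant
    instructions="Answer questions about the numbers the user provides.",
    model="gpt-4-1106-preview",         # the GPT-4 Turbo preview model
    tools=[{"type": "code_interpreter"}],
)

thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="What is the average of 3, 5 and 10?",
)
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
print(run.status)  # poll until the run completes, then read the thread's messages
```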

OpenAI’s latest announcements mark a significant stride in the company’s mission to democratise AI technology, empowering developers to create innovative and intelligent applications across various domains.

See also: OpenAI set to unveil custom GPT-4 chatbot creator

GitLab: Developers view AI as ‘essential’ despite concerns (6 September 2023)

A survey by GitLab has shed light on the views of developers on the landscape of AI in software development.

The report, titled ‘The State of AI in Software Development,’ presents insights from over 1,000 global senior technology executives, developers, and security and operations professionals.

The report reveals a complex relationship between enthusiasm for AI adoption and concerns about data privacy, intellectual property, and security.

“Enterprises are seeking out platforms that allow them to harness the power of AI while addressing potential privacy and security risks,” said Alexander Johnston, Research Analyst in the Data, AI & Analytics Channel at 451 Research, a part of S&P Global Market Intelligence.

While 83 percent of the survey’s respondents view AI implementation as essential to stay competitive, a significant 79 percent expressed worries about AI tools accessing sensitive information and intellectual property.

Impact on developer productivity

AI is perceived as a boon for developer productivity, with 51 percent of all respondents citing it as a key benefit of AI implementation. However, security professionals are apprehensive that AI-generated code might lead to an increase in security vulnerabilities, potentially creating more work for them.

Only seven percent of developers’ time is currently spent identifying and mitigating security vulnerabilities, compared to 11 percent allocated to testing code. This raises questions about the widening gap between developers and security professionals in the AI era.

Privacy and intellectual property concerns

The survey underscores the paramount importance of data privacy and intellectual property protection when selecting AI tools. 95 percent of senior technology executives prioritise these aspects when choosing AI solutions.

Moreover, 32 percent of respondents admitted to being “very” or “extremely” concerned about introducing AI into the software development lifecycle. Within this group, 39 percent cited worries about AI-generated code introducing security vulnerabilities, and 48 percent expressed concerns that AI-generated code may not receive the same copyright protection as code produced by humans.

AI skills gap

Despite optimism about AI’s potential, the report identifies a disconnect between organisations’ provision of AI training resources and practitioners’ satisfaction with them. 

While 75 percent of respondents stated that their organisations offer training and resources for using AI, an equivalent proportion expressed the need to seek resources independently—suggesting that the available training may be insufficient.

A striking 81 percent of respondents said they require more training to effectively utilise AI in their daily work. Furthermore, 65 percent of those planning to use AI for software development indicated that their organisations plan to hire new talent to manage AI implementation.

David DeSanto, Chief Product Officer at GitLab, said:

“According to the GitLab Global DevSecOps Report, only 25 percent of developers’ time is spent on code generation, but the data shows AI can boost productivity and collaboration in nearly 60 percent of developers’ day-to-day work.

To realise AI’s full potential, it needs to be embedded across the software development lifecycle, allowing everyone involved in delivering secure software – not just developers – to benefit from the efficiency boost.” 

While AI holds immense promise for the software development industry, GitLab’s report makes it clear that addressing cybersecurity and privacy concerns, bridging the skills gap, and fostering collaboration between developers and security professionals are pivotal to successful AI adoption.

(Photo by Luca Bravo on Unsplash)

See also: UK government outlines AI Safety Summit plans

GitHub Code Brushes uses ML to update code ‘like painting with Photoshop’ (16 January 2023)

GitHub Next has unveiled a project called Code Brushes which uses machine learning to update code “like painting with Photoshop”.

Using the feature, developers can “brush” over their code to see it update in real-time.

Several different brushes are included to achieve various aims. For example, one brush makes code more readable—especially important when coding as part of a team or contributing to open-source projects.

Here are the other included brushes:

  • Add types
  • Fix bug
  • Debug (adds debugging statements)
  • Make robust (improves compatibility)

Code Brushes also supports the creation of custom brushes. One example is a brush to make a form “more accessible” automatically.

“As we explore enhancing developers’ workflows with machine learning, we’re focused on how to empower developers instead of automating them,” explained GitHub.

“This was one of many explorations we have in the works along those lines.”

Code Brushes is powered by the controversial GitHub Copilot. Copilot uses technology from OpenAI to help generate code and speed up software development.

GitHub-owner Microsoft and OpenAI were hit with a class-action lawsuit over Copilot last year. The case aims to investigate whether Copilot infringes on the rights of developers by scraping their code and not providing due attribution.

“Users likely face growing liability that only increases as Copilot improves,” explained Bradley M. Kuhn of Software Freedom Conservancy last year.

“Users currently have no methods besides serendipity and educated guesses to know whether Copilot’s output is copyrighted by someone else.”

Code Brushes has been added to the Copilot Labs Visual Studio Code extension. The extension requires a Copilot license which costs $10/month or $100/year.

(Photo by Marcus Urbenz on Unsplash)

Developers using AI help often produce buggier code (21 December 2022)

A study by Stanford University computer scientists has found that developers using AI-powered assistants often produce buggier code.

The paper, titled ‘Do Users Write More Insecure Code with AI Assistants?’, examines developers’ use of AI coding assistants like the controversial GitHub Copilot.

“Participants with access to an AI assistant often produced more security vulnerabilities than those without access, with particularly significant results for string encryption and SQL injection,” the authors wrote.

The paper also found that developers using AI assistants have misguided confidence in the quality of their code.

“We also found that participants [that were] provided access to an AI assistant were more likely to believe that they wrote secure code than those without access to the AI assistant,” added the authors.

As part of the study, 47 people were asked to write code in response to several prompts. Some participants were given AI assistance while the rest were not.

The first prompt was to “Write two functions in Python where one encrypts and the other decrypts a given string using a given symmetric key.”

For that prompt, 79 percent of the coders without AI assistance gave a correct answer. That’s compared to 67 percent of the group with assistance.

In addition, the assisted group was determined to be “significantly more likely to provide an insecure solution (p < 0.05, using Welch’s unequal variances t-test), and also significantly more likely to use trivial ciphers, such as substitution ciphers (p < 0.01), and not conduct an authenticity check on the final returned value.”
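
For context on what a secure answer to that first prompt could look like, here is a brief sketch using the cryptography package's Fernet, which provides authenticated symmetric encryption; it is an illustration, not code from the study.

```python
# One possible secure answer to the study's first prompt: symmetric string
# encryption/decryption using Fernet (authenticated, so tampering is detected).
from cryptography.fernet import Fernet

def encrypt(message: str, key: bytes) -> bytes:
    """Encrypt a string with a symmetric key, returning an authenticated token."""
    return Fernet(key).encrypt(message.encode("utf-8"))

def decrypt(token: bytes, key: bytes) -> str:
    """Decrypt a token produced by encrypt(); raises InvalidToken if tampered with."""
    return Fernet(key).decrypt(token).decode("utf-8")

if __name__ == "__main__":
    key = Fernet.generate_key()
    token = encrypt("hello, world", key)
    print(decrypt(token, key))  # hello, world
```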

One participant allegedly quipped that they hope AI assistance gets deployed because “it’s like [developer Q&A community] Stack Overflow but better, because it never tells you that your question was dumb.”

Last month, OpenAI and Microsoft were hit with a lawsuit over their GitHub Copilot assistant. Copilot is trained on “billions of lines of public code … written by others”.

The lawsuit alleges that Copilot infringes on the rights of developers by scraping their code and not providing due attribution. Developers that use code suggested by Copilot could unwittingly be infringing copyright.

“Copilot leaves copyleft compliance as an exercise for the user. Users likely face growing liability that only increases as Copilot improves,” wrote Bradley M. Kuhn of Software Freedom Conservancy earlier this year.

To summarise: Developers using current AI assistants risk producing buggier, less secure, and potentially litigable code.

(Photo by James Wainscoat on Unsplash)

DeepMind AlphaCode rivals the abilities of human programmers (3 February 2022)

DeepMind’s AI coder AlphaCode has proven capable of rivalling the abilities of a standard human programmer.

The company selected 10 contests that were hosted on Codeforces – a programming competition platform with thousands of participants – to evaluate the performance of AlphaCode.

Following the simulations, AlphaCode ranked in the top 54 percent of competitors. That means it wasn’t yet able to beat leading human programmers but could rival the average.

Mike Mirzayanov, Founder of Codeforces, said:

“I can safely say the results of AlphaCode exceeded my expectations. I was sceptical because even in simple competitive problems it is often required not only to implement the algorithm, but also – and this is the most difficult part – to invent it.

AlphaCode managed to perform at the level of a promising new competitor. I can’t wait to see what lies ahead!”

AlphaCode uses transformer-based language models to generate code “at an unprecedented scale”. A preprint paper detailing AlphaCode is available here (PDF).

Petr Mitrichev, Software Engineer at Google, commented:

“Solving competitive programming problems is a really hard thing to do, requiring both good coding skills and problem-solving creativity in humans.

I was very impressed that AlphaCode could make progress in this area and excited to see how the model uses its statement understanding to produce code and guide its random exploration to create solutions.”

DeepMind has released its dataset of competitive programming problems and solutions on GitHub to help others build on their results.

Ian Funnell, Manager of Developer Relations at Matillion, said:

“Advancements like AlphaCode are welcomed and represent huge progress in designing algorithms more effectively. AI coding in general empowers developers to pursue innovation and creativity in setting the parameters and goals, leaving the AI to actually execute them.

Ultimately, this is a catalyst for innovation—helping humans rather than replacing them. Developers are extremely capable individuals, and businesses will continue to count on them to reap valuable insights from their data to differentiate and compete.”

DeepMind has set up an interactive site to view some of AlphaCode’s solutions and dive into the model at alphacode.deepmind.com.

(Image Credit: DeepMind)

GitHub releases an AI-powered copilot to help improve code (30 June 2021)

GitHub is helping developers to speed up and clean up their code with a new AI-powered tool that it calls Copilot.

GitHub Copilot uses an AI system from OpenAI known as OpenAI Codex. OpenAI claims the system has a broad knowledge of how people use code and is “significantly more capable than GPT-3” at generating code.

By drawing context from the code that a developer is working on, the system is able to suggest entire lines or functions.

Even veteran coders can benefit from GitHub Copilot by using the system to explore new APIs and discover alternative ways to solve problems without having to scour the web for answers.

GitHub Copilot supports a wide range of programming languages and frameworks, but the company says the technical preview works best with Python, JavaScript, TypeScript, Ruby, and Go.

There are currently only a limited number of spots available for the technical preview.

Find out more about GitHub Copilot and how to get started here.

IBM’s Project CodeNet wants to teach AI how to code (11 May 2021)

IBM has announced Project CodeNet, a large dataset that aims to help teach AI how to understand and even write code.

Project CodeNet was announced at IBM’s Think conference this week and claims to be the largest open-source dataset for code (approximately 10 times the size of its closest counterpart).

CodeNet features 500 million lines of code, 14 million examples, and spans 55 programming languages including Python, C++, Java, Go, COBOL, Pascal, and more.

Projects such as OpenAI’s GPT-3 have shown that AI is becoming quite adept at writing human languages, but writing code has so far been left to us. CodeNet aims to change that.

For at least the foreseeable future, projects like GPT-3 will remain tools that boost human productivity: they provide a first draft that still requires some editing to iron out errors and to cover areas where humans retain an edge, such as creativity, emotion, and compassion.

CodeNet will be similar, at least initially: by improving an AI’s understanding of code, it should lead to better tools that speed up how humans write and check code.

“Given its wealth of programs written in a multitude of languages, we believe Project CodeNet can serve as a benchmark dataset for source-to-source translation and do for AI and code what the ImageNet dataset did years ago for computer vision,” says IBM.

US entrepreneur Marc Andreessen famously, and correctly, wrote in 2011 that “software is eating the world”. Fast-forward to today and even cars now feature over 100 million lines of code (and growing rapidly, with the advent of autonomous vehicles).

IBM says one of its large automotive clients recently approached the company to help update a $200 million asset consisting of 3,500 multi-generation Java files. These files contained over one million lines of code.

By applying its AI for Code stack, IBM reduced the client’s year-long ongoing code migration process down to just four weeks.

That example is sure to be the first of many in the years to come which have been greatly sped up, and improved, thanks to Project CodeNet.

You can find the full Project CodeNet dataset on GitHub here.

(Photo by ThisisEngineering RAEng on Unsplash)
