In March, Google made waves across the internet and the technology sector by launching Google Bard, an AI chatbot and rival to ChatGPT. Following the success of ChatGPT, the search-engine giant developed and released Bard, which works much like ChatGPT but, unlike its competitor, pulls real-time information from the web.
Three months later, the company announced a major Bard update aimed at sharpening its reasoning and logic capabilities. In addition to drawing on regularly updated information, Bard could already write code; with the new update, the chatbot can also execute code by itself, allowing it to solve problems more thoroughly and effectively than current AI models.
Google’s AI tool gains these abilities through ‘implicit code execution’, a technique that lets Bard detect computational prompts and run code in the background, improving its accuracy on maths and coding tasks by roughly 30 percent. Because Bard tests the computed result before delivering it, mathematical and coding questions receive more accurate responses.
“The new method improved Bard’s accuracy for coding and maths problems by roughly 30 percent during internal tests,” claimed Google.
Before this update, large language models (LLMs) like Bard and ChatGPT were mainly suited to language and creative tasks, since their training data lets them predict what words are likely to come next. Those strengths fall short on reasoning and mathematics tasks, however, because next-word prediction involves no genuine step-by-step computation. With these limitations in mind, Google’s researchers modified their AI model.
According to Google’s blog post, the new update allows Bard to write and run code to “boost its reasoning and maths abilities.” The researchers also said, “With the latest update, we’ve combined the capabilities of both LLMs and traditional code to improve accuracy in Bard’s responses. Through implicit code execution, Bard identifies prompts that might benefit from logical code, writes it ‘under the hood,’ executes it, and uses the result to generate a more accurate response.”
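Google’s post describes the pipeline but not its internals, so the following is only a minimal Python sketch of the idea it outlines: detect a prompt that calls for computation, generate a code snippet, execute it, and build the reply from the computed result. The names `looks_computational`, `generate_code`, and `answer` are invented for illustration, and the regex-based routing and generation are toy stand-ins for behaviour that Bard’s model learns.

```python
import re

def looks_computational(prompt: str) -> bool:
    """Crude stand-in for the model's learned routing: flag prompts
    that contain an arithmetic expression."""
    return bool(re.search(r"\d+\s*[-+*/]\s*\d+", prompt))

def generate_code(prompt: str) -> str:
    """Stand-in for Bard writing code 'under the hood'. Here we only
    extract a plain arithmetic expression from the prompt."""
    match = re.search(r"\d+(?:\s*[-+*/]\s*\d+)+", prompt)
    if match is None:
        raise ValueError("no expression found")
    return f"result = {match.group(0)}"

def answer(prompt: str) -> str:
    if looks_computational(prompt):
        scope: dict = {}
        # Execute the generated snippet in a bare namespace, then use
        # the computed value in the reply instead of a guessed one.
        exec(generate_code(prompt), {"__builtins__": {}}, scope)
        return f"The answer is {scope['result']}."
    return "(fall back to ordinary language-model generation)"

print(answer("What is 17 * 23 + 4?"))  # -> The answer is 395.
```

In Bard itself, both the routing decision and the generated code come from the LLM; only the execute-then-respond structure is mirrored here.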
However, as technologically advanced as AI chatbots are, accuracy is never guaranteed. Despite the upgrade, Google included a disclaimer saying Bard “won’t always get it right.”
AI chatbots sometimes produce inaccurate or fabricated information, known as ‘hallucinations’, and deliver it with confidence and assurance, often misleading users.
ChatGPT developer OpenAI also announced a new method to combat AI misinformation last month, in which two AI models argue and debate with each other until they agree on the correct answer.
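The article does not spell out how that approach works, so this is just a hypothetical sketch of the debate loop it describes: two models answer in turn, each seeing the shared transcript of earlier answers, and the process ends when their answers match. The `debate` function and both stubs are illustrative, not OpenAI’s implementation.

```python
from typing import Callable

# A "model" takes the question plus the debate transcript and returns an answer.
Model = Callable[[str, list[str]], str]

def debate(question: str, model_a: Model, model_b: Model, max_rounds: int = 5) -> str | None:
    """Let two models answer repeatedly, each seeing the other's prior
    answers, until they agree or the round budget runs out."""
    transcript: list[str] = []
    for _ in range(max_rounds):
        a = model_a(question, transcript)
        b = model_b(question, transcript)
        transcript += [f"A: {a}", f"B: {b}"]
        if a == b:        # agreement ends the debate
            return a
    return None           # no consensus within the budget

# Toy stand-ins: B starts out wrong but concedes once it sees A's answer.
def stub_a(q, t): return "395"
def stub_b(q, t): return "395" if any("A: 395" in line for line in t) else "400"

print(debate("What is 17 * 23 + 4?", stub_a, stub_b))  # -> 395
```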