Gemini Advanced failed these simple coding tests that ChatGPT aced. Here's what it got wrong

Google recently rebranded its AI chatbot from Bard to Gemini and introduced a more capable paid tier called Gemini Advanced. Much as OpenAI offers a free ChatGPT alongside ChatGPT Plus, Google charges $20 per month for access to its more powerful model. To gauge how well these assistants handle real programming work, a series of coding challenges compared the performance of ChatGPT, Bard (now Gemini), and Gemini Advanced.

The tests covered three coding challenges: writing a WordPress plugin, rewriting a string function, and finding a bug in existing code. Gemini Advanced struggled with all of them. In the WordPress plugin test, ChatGPT generated working back-end code, while Gemini Advanced produced a front-end interface element that never implemented the requested functionality. In the string-function test, Gemini Advanced's rewrite failed to handle non-decimal inputs correctly, missing a basic requirement of the task; the sketch below illustrates the kind of validation involved.
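To make the string-function test concrete, here is a minimal sketch in Python of the sort of validation the task calls for: accepting both whole-dollar and dollars-and-cents amounts. The article does not show the original code, so the function name, regex, and test values here are illustrative assumptions rather than the code from the test.

```python
import re

# Illustrative sketch only: an assumption of what "supporting both decimal
# and non-decimal inputs" means for a currency-string validator.
CURRENCY_PATTERN = re.compile(r"^\d+(\.\d{2})?$")

def is_valid_dollar_amount(value: str) -> bool:
    """Return True for whole-dollar ("20") and dollars-and-cents ("20.00") strings."""
    return bool(CURRENCY_PATTERN.match(value.strip()))

# A rewrite that only accepts decimals, e.g. r"^\d+\.\d{2}$", would reject
# plain integers like "20" -- the kind of gap the article describes.
print(is_valid_dollar_amount("20"))     # True
print(is_valid_dollar_amount("20.00"))  # True
print(is_valid_dollar_amount("20.0"))   # False (cents must be two digits here)
```

The point of such a test is not the regex itself but whether the assistant notices that both input forms must pass; a version that handles only one of them silently breaks valid inputs.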

Furthermore, when asked to identify a bug in a piece of code, Gemini Advanced failed to give a clear answer, suggesting the problem lay elsewhere in the plugin rather than pinpointing the actual error; ChatGPT identified the issue correctly. Overall, Gemini Advanced fell short of ChatGPT across the board, raising questions about whether it justifies its subscription fee.

In conclusion, while AI coding assistants like ChatGPT have shown promise in speeding up development and boosting productivity, Gemini Advanced's underperformance underscores the importance of testing these tools thoroughly before relying on them for real coding work. With ChatGPT outperforming Gemini Advanced in every one of these tests, users may find more value in the former for their coding needs.