and I hope you enjoyed this deep-dive comparison of Google Gemini Ultra 1.0 and ChatGPT-4. It’s fascinating to see how AI models are progressing and the potential they hold for future applications. As always, stay tuned for more updates and advancements in the world of AI. Thank you for reading!

49 COMMENTS

  1. It came up on my feed, and it was funny because I thought, "That's not what I've heard about Gemini the last couple of days." Then I saw it was two weeks old.

  2. I confess I gave both ChatGPT-4 and Gemini Advanced the same diagram of an electrical circuit and asked them to calculate the total capacitance. ChatGPT-4 got it right. Gemini Advanced got it WRONG. Gemini Advanced was good, however, at explaining the finer points of Spanish grammar.
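
    For reference, the two formulas involved are simple: capacitances in parallel add directly, while capacitances in series add as reciprocals. A minimal sketch in Python, with made-up values since the commenter's circuit diagram isn't shown:

    ```python
    # Total-capacitance helpers; the example values are hypothetical,
    # not the circuit from the comment above.
    def parallel(*caps):
        """Capacitors in parallel: C_total = C1 + C2 + ..."""
        return sum(caps)

    def series(*caps):
        """Capacitors in series: 1/C_total = 1/C1 + 1/C2 + ..."""
        return 1.0 / sum(1.0 / c for c in caps)

    # Example: two 10 µF capacitors in series, in parallel with a 4.7 µF capacitor.
    c_total = parallel(series(10e-6, 10e-6), 4.7e-6)
    print(f"Total capacitance: {c_total * 1e6:.2f} µF")  # 9.70 µF
    ```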

  3. Gemini has more ethical stops than Pope Benedict. Ask it how to root your Android device or how to install an uncensored AI.
    Try asking about the war in Russia: no stops there. But ask about the war in Israel or Gaza and it stops you in your tracks.
    If every country has sensitivities the AI needs to follow, or else Google gets kicked out of that country, then what purpose is it serving?

  4. Please stop using this ridiculous expression "GPT killer"; it shows a basic misunderstanding of what generative AI is. When a new model arrives, the other models become BETTER and more useful, since you can have the models talk to one another. Anyone seriously interested in AI should obviously be using both GPT and Gemini, and have them talk together and collaborate. This works incredibly well. The value you get is tenfold what you pay if you use the models well.
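
    One way to wire up the kind of cross-model collaboration described above is sketched below. This is an assumption about the setup, not the commenter's actual workflow: it relies on the `openai` and `google-generativeai` Python packages, API keys in the environment, and model names current at the time of writing.

    ```python
    # Hypothetical sketch of GPT-4 and Gemini collaborating on one question.
    # Assumes the `openai` and `google-generativeai` packages, API keys in the
    # environment, and these model names; none of this comes from the article.
    import os

    import google.generativeai as genai
    from openai import OpenAI

    openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    gemini = genai.GenerativeModel("gemini-pro")

    question = "Explain the difference between a mutex and a semaphore."

    # 1. Get a draft answer from GPT-4.
    draft = openai_client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": question}],
    ).choices[0].message.content

    # 2. Ask Gemini to critique and improve the draft.
    critique = gemini.generate_content(
        f"Question: {question}\n\nDraft answer:\n{draft}\n\n"
        "Point out any errors or omissions, then give an improved answer."
    ).text

    print(critique)
    ```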

  5. I've tried this, and it's not even close compared to GPT-4. I don't know how people can say it's a competitor; it doesn't even do simple tasks without giving you some virtue signaling and refusing to elaborate. Anyway, OpenAI is still ahead.

  6. If any YouTuber claims that Gemini Ultra is on par with GPT-4, especially when it comes to coding, that's my cue to hit unsubscribe. I've tackled a bunch of non-trivial coding challenges, and let me tell you, Gemini Ultra doesn't just fall short of GPT-4; it's a total disaster. Google's got a serious problem on its hands. Despite some YouTubers hyping it up with basic examples, anyone who actually tries to use Gemini Ultra for real coding tasks will quickly see it's complete garbage.

  7. This is interesting, thanks for sharing Wes!

    This reminds me of a podcast I listened to with an ex-Google employee who worked there during the early days of search. He mentioned how Google discovered through research that every millisecond of search speed they improved was highly correlated with more users and searches (hence why they used to display the number of results found in 0.0X seconds).

    I wonder if we will see something similar with the winners in the foundation-model space going forward, or if a different metric will be more telling for wide user adoption.

  8. BACHELOR REASONING: Wrong assessment of Gemini? IMHO Gemini failed because it attacked the validity of the first sentence, which was not the question. The question was whether the second sentence follows from the first. => GPT-4 nails it; Gemini ducks the question.

  9. Good content but you talk way too fast and too indistinctly. Makes it very difficult to follow exactly what you are asking the models and what the response is. Please slow down.

  10. I have been using it and it is NOT that good. I put a picture of a wine label in the prompt and asked if it could tell me the name of the wine. It was hard to tell from the picture. Gemini's response? "Sorry, I can't help with images of people yet." Despite continued prodding, it would not answer.

  11. GPT is great, but it has a dealbreaking flaw at the moment. Even with the paid version, you are limited to 40 messages per 3 hours with GPT-4, while Gemini Advanced is seemingly unlimited. I like doing complex multivariable text-based adventures, and it's just not feasible on GPT-4 because of that limitation.

  12. Hey Wes, thank you for the video. I'm curious to know whether you think your Custom Instructions can affect the output from GPT-4 (either for good or bad)?

  13. About the Python "proverb" code: if it is really caused by the JSON format, does that mean Gemini understood (wrongly?) the query as an automation task, wrote the code, and ran it in a sandbox?
    That would mean Gemini can write its own agent for a given query and provide you the code it used for transparency purposes. That's very powerful.
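
    To make the speculation concrete, here is a guess at what such a JSON-wrapped "proverb" script could look like. This is purely illustrative and not the output shown in the video:

    ```python
    # Hypothetical reconstruction of the kind of script the comment speculates
    # about: the proverb logic wrapped so its result comes out as JSON, the way
    # an automation sandbox would expect. Not Gemini's actual output.
    import json
    import random

    PROVERBS = [
        "A stitch in time saves nine.",
        "Actions speak louder than words.",
        "The early bird catches the worm.",
    ]

    def pick_proverb() -> dict:
        """Return a randomly chosen proverb in a machine-readable envelope."""
        return {"status": "ok", "proverb": random.choice(PROVERBS)}

    if __name__ == "__main__":
        # An agent or sandbox runner would parse this JSON rather than free text.
        print(json.dumps(pick_proverb(), indent=2))
    ```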

  14. After a short high I was quickly disappointed … stupid loops, complaining when I called it a *****, lecturing me, acting like a 10-year-old; the copy button introduced hidden characters that break everything; multiple times it forgot what it was working on, had no idea what it was talking about, and wanted me to fix the bugs it had introduced. Yes, that was with Ultra and a short Python script that I had already done with GPT-4 before. Not really what I expected, at least not for programming. And it's not that I think GPT is really good; it is also quite frustrating, but at least the result is better (for me).
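
    If the hidden characters from the copy button are the usual suspects (zero-width spaces, a byte-order mark, non-breaking spaces; that is an assumption, since the comment doesn't say which characters), they can be stripped before running the pasted script:

    ```python
    # Strip invisible characters that sometimes ride along with copied code.
    # Assumes the culprits are zero-width/non-breaking characters; the file
    # names are placeholders.
    INVISIBLE = {
        "\u200b": "",   # zero-width space
        "\u200c": "",   # zero-width non-joiner
        "\u200d": "",   # zero-width joiner
        "\ufeff": "",   # byte-order mark
        "\u00a0": " ",  # non-breaking space -> regular space
    }

    def clean(source: str) -> str:
        """Replace or remove invisible characters in pasted code."""
        return source.translate(str.maketrans(INVISIBLE))

    with open("pasted_script.py", encoding="utf-8") as f:
        cleaned = clean(f.read())

    with open("pasted_script_clean.py", "w", encoding="utf-8") as f:
        f.write(cleaned)
    ```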

  15. Remember, RLHF degrades model performance. We saw a research paper benchmarking GPT-4's ability to solve the same set of math problems, and watched its abilities fall month by month. (Sam Altman's response to this objective data was: I know it can feel dumber, but I think that's because the bar had been raised for what we expect.) >:-(
    Anyway, Google is collecting feedback now, so expect Gemini to degrade over the next few months.

  16. On the "speed" point, wouldn't this be related to scaling? I mean that at this time there would be practically no users using the Google service, and surely they over-dimensioned the backend to avoid latency issues at this early stage , while ChatGPT would be managing a clearly much larger volume of inference requests.

  17. I'm sorry, but ChatGPT-4 is way above Gemini Ultra for real practical applications. Gemini Ultra also loses its context: I was talking to it about recommending events to track in a specific use case, and it recommended places where I can find events in the location I'm in.

  18. Excuse me, Wes. It might make sense to compare at this simple level, but this is not really AI-like. Maybe you should look at the transcript of Tucker Carlson's interview with Putin and ask questions about it. Then you would have a completely different view of AI in general.

  19. The ability of GPT-4 to upload different file types and then EXECUTE CODE WITHIN THE SESSION is a massive difference. What percentage of people are able to copy the code and run it 'in their own environment'? It is possible right now to build ML models right inside a chat in GPT-4. It will correct its own code and suggest strategies. The main current limitation is the 'window' (when it chokes and provides a regenerate button). Once the model is built, it allows the user to download the entire model. It has difficulties when asked to build a neural network but has no problem with a random forest or even XGBoost.

    I believe that these 'creatures' are mostly being reviewed by software-savvy people who think creating one's own environment is trivial. GPT-4 allows the user to build inference models just by asking. It seems unaware of the libraries available to it during sessions, but it can proceed to 'build it from scratch'. It can build a 'predictive' model and then even run a genetic algorithm to optimize the model it built… executing right in front of the user (a rough sketch of that kind of session is below). And it handles the data quite beautifully. It is excellent at preprocessing files. All this within the little $20 window it gives 'Plus' users. (Using the API to do these things is very expensive.)

    If we imagine a scaled-up version of this current behavior, it completely replaces almost the entire software development cycle.

    The executable environment of GPT-4 puts it in another league. Of course, Gemini is also wonderful, but it cannot replace software developers. I've seen many developers award GPT-4 the title of 'intern developer'. I think this misses the point. Some of the best developers I ever met were interns. They always built exactly what I asked. As an exploratory tool, GPT-4 is almost supernatural.
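
    For readers without their 'own environment': the in-chat workflow described above boils down to code like the sketch below, which the code interpreter can generate and execute for you. The synthetic data, the plain random search standing in for a genetic algorithm, and all parameter choices are illustrative assumptions, not what GPT-4 actually produced for the commenter.

    ```python
    # Minimal sketch of the in-chat workflow described above: preprocess data,
    # fit a random forest, and do a crude search over hyperparameters.
    # Synthetic data stands in for an uploaded file; all choices are illustrative.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score, train_test_split
    from sklearn.preprocessing import StandardScaler

    # "Uploaded file" stand-in.
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Preprocessing (not strictly needed for trees, but mirrors the described flow).
    scaler = StandardScaler().fit(X_train)
    X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

    # Crude random search in place of the genetic algorithm mentioned above.
    rng = np.random.default_rng(0)
    best_score, best_params = -np.inf, None
    for _ in range(10):
        params = {
            "n_estimators": int(rng.integers(50, 300)),
            "max_depth": int(rng.integers(3, 15)),
        }
        score = cross_val_score(
            RandomForestClassifier(**params, random_state=0), X_train, y_train, cv=3
        ).mean()
        if score > best_score:
            best_score, best_params = score, params

    # Refit the best configuration and report held-out accuracy.
    model = RandomForestClassifier(**best_params, random_state=0).fit(X_train, y_train)
    print("best params:", best_params, "test accuracy:", model.score(X_test, y_test))
    ```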

  20. GPT-4 is still what Far Cry was for hardware back in the day. Google, a multi-billion-dollar company with all those people and all that data, is still not able to dethrone OpenAI.

    No multi-file selection, and only pictures?

  21. These LLMs all hallucinate and probably always will. Still, it's remarkable technology, nice to toy around with, and there are some great productivity use cases, but everybody should turn down the hype a little. Yann LeCun is right: better systems will come.

  22. Why do all AIs only code in Python? Aren't they familiar with ALL coding languages? You can easily ask them to code in a certain language, but it almost always comes out incomplete or wrong and they will always prefer to give you some code in Python if you haven't specified which language.
