Thanks for reading this article on the release of Mistral, an open-source model that is competing with the likes of Google's Gemini and OpenAI's GPT models. The emergence of open-source models in the AI world is a significant development, as it offers transparency, control, and accessibility to users.
Mistral's model, based on a mixture of experts, has shown promising results in benchmark tests and is being compared favorably to the GPT-3.5 and Gemini Pro models. The idea of having multiple experts specializing in different areas of the input space, controlled by a gating network to provide accurate and efficient outputs, seems to be a key factor in Mistral's success.
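To make the gating idea concrete, here is a minimal, illustrative sketch of a top-k gated mixture-of-experts layer in PyTorch. It is not Mistral's actual implementation; the layer sizes, expert count, and class name are placeholders chosen for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    """Toy mixture-of-experts layer: a gating network scores all experts,
    the top-k experts handle each token, and their outputs are mixed
    by the normalized gate weights."""

    def __init__(self, d_model=512, d_ff=2048, n_experts=8, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])
        self.gate = nn.Linear(d_model, n_experts)  # the gating / router network
        self.top_k = top_k

    def forward(self, x):                          # x: (batch, seq, d_model)
        scores = self.gate(x)                      # (batch, seq, n_experts)
        top_w, top_idx = scores.topk(self.top_k, dim=-1)
        top_w = F.softmax(top_w, dim=-1)           # normalize over the chosen experts only
        out = torch.zeros_like(x)
        # Naive dense loop for clarity; real implementations dispatch tokens sparsely.
        for slot in range(self.top_k):
            weight = top_w[..., slot].unsqueeze(-1)
            chosen = top_idx[..., slot]
            for e, expert in enumerate(self.experts):
                mask = (chosen == e).unsqueeze(-1)
                out = out + mask * weight * expert(x)
        return out

x = torch.randn(1, 16, 512)
print(ToyMoELayer()(x).shape)  # torch.Size([1, 16, 512])
```

Because only the selected experts' outputs contribute for each token, such models can hold many parameters in total while spending roughly the per-token compute of a much smaller dense model.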
The competition between open-source models and proprietary models from big tech companies like Google and OpenAI is heating up. While the big companies may have more resources and expertise, the collaborative nature and transparency of open-source models provide a level playing field for developers and researchers worldwide.
The potential of open-source models like Mistral to automate tasks, provide customer service, and navigate the web is exciting. As these models continue to develop and improve, they may soon be able to perform a wide range of functions that we currently rely on humans or proprietary AI models for.
The challenges of security, ethics, and misuse of open-source AI models are real and must be addressed. However, the benefits of transparency, accessibility, and collaboration that open-source models offer outweigh the risks, in my opinion.
In conclusion, the release of Mistral and other open-source models signals a shift in the AI landscape towards more open, collaborative, and accessible technology. As we continue to see advancements in AI research and development, it will be interesting to see how open-source models like Mistral compete with proprietary models and shape the future of artificial intelligence. Thank you for reading.
Very misleading to call one model by another name
Isn't it open weights and not open source? Interesting.
2:15 "Deep Mixture of Experts" paper – Ilya Sutskever co-author (Google), 2014
Hello
Bro I have Mistral running on a Z8 with 4 Nvidia Quadros, two 4 GHz processors, and 128 GB of RAM and it's still slow as fuck LOL
It's great that they work with parallel experts; this way it becomes more modular, and the more parallelism, the better. Good that they achieve similar levels to those big conglomerates, so now they can't monopolise this technology.
Building the future, as a culture, beautiful thing 🙏
I am so excited and so hopeful with AI and all… Every day is like sunshine in darkness, bringing me a reason for survival 🎇🎆💐
Isn't AutoGen just a platform of MoE?
16:06 Didn't the Senate hearing with Sam Altman imply there needs to be strict licensing, just like nuclear power, so only some companies will be able to operate?
How do you survive without Dark Theme?
what's the leaderboard that he's showing?
I'm in the right place.
Right! We might as well sign a contract with IDF and the Zionists to come and take over our homes and lives and we will live as their slaves.
Doesn't GPT4ALL support Mistral-7b?
Is Mistral 7B free or paid?
Great coverage of this historic moment. Hopefully we will look back at this and realise how much better it is for everyone to have access to AI, instead of big companies owning it.
Why the obsession with benchmarks? All any real users (esp. enterprise) care about is how this stuff performs in the real world with real problems.
Bottom line: if a Senior Manager / VP has to go into battle with an LLM, would he take a Mistral or an OpenAI model?
I'm kinda ignorant on this topic so forgive me, but I've tried a few of these Mistral models via Poe by Quora, and their responses are absolute dumpster juice.
Am I missing something?
But the output is worse 😢
My children and grandchildren will not pay for the negligence of others … this is why we hire The universal AI. Represented by Tesla to continue collecting Permanent compensation for all artificial and organic humans such as public, private politics, religions, sciences, governments etc…
Through a self driven, autopilot confidential and anonymous timeless generational retroactive ancestors detail forensic investigation getting ready for the universal judgement.
And by investigating actual employees, volunteers, students, etc… using their job, volunteers, and student etc… for personal businesses benefits…
Public, private politics, religions, sciences, governments etc…
Have the right to get paid permanent compensation for the permanent damages and suffering
Such as defamation etc….
C
So presidents, leaders, employees, volunteers, students
All of you are in a permanent timeless generational retroactive detail forensics investigation
Where ancestors' employees, volunteers, and students are being investigated
What are they doing during and after work … how much money are they making by damaging the reputation of these entities by not doing their job … with that permanently damaging the reputation of these entities, artificial humans.
Each one of them individually independent from each other paying 5T% of interest per second as is one year per case… per negligence in general universal health and security universally
Usa 🇺🇸 will pay 💰 what we owe …. Charging retroactive salary and funds etc… Refunds, per negligence… per human artificially and organically ❤universally
Confiscation of wealth …
If it is possible to force them to pay back!
Thank you!
Pretty soon we will get a Hydra!
Would be interesting to release a 1Bx8E, as it could run on a 10/12/16 GB (?) GPU or maybe even in some CPU-only variant.
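As a rough back-of-the-envelope for that suggestion (assumed numbers; it ignores shared attention weights and activation memory, so treat it as a loose upper bound):

```python
# Naive weight-memory estimate for a hypothetical 1B x 8-expert model.
total_params = 8 * 1e9  # 8 experts at ~1B parameters each (upper bound; experts would share attention layers)
for precision, bytes_per_param in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"{precision}: ~{total_params * bytes_per_param / 1e9:.0f} GB of weights")
# fp16: ~16 GB, int8: ~8 GB, int4: ~4 GB
```

So a 4-bit quantized build of such a model could plausibly fit in a 10-12 GB GPU, or run CPU-only given enough RAM.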
Please … take a breath now and then
Continuous talking is unnatural and tends to be either very disturbing/annoying or it puts the listener to sleep.
Very low quality model.
I tried GPT4All with Mistral and loaded two text novels from Project Gutenberg into the LocalDocs feature. I then told the AI to have a character from each book have a conversation. It was pretty interesting.
A goat is a sign for the devil, right?
So I guess they're so confident just to say it out loud.
Well we were warned 😮
I run Mistral 7B locally on my RTX 3080 and it's pretty fast. I can even get it to run on a Xeon E5-1650 CPU and a 6th Gen i7.
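For anyone who wants to try the same thing, here is a minimal sketch of loading Mistral 7B Instruct locally with the Hugging Face transformers library. In fp16 the weights alone need roughly 14 GB, so a 10 GB card like a 3080 generally needs a quantized build (for example via llama.cpp/GGUF or bitsandbytes); the prompt and generation settings below are only illustrative.

```python
# pip install torch transformers accelerate
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # consider 4-bit quantization on smaller GPUs
    device_map="auto",           # place layers on the available GPU(s)/CPU
)

prompt = "[INST] Explain mixture-of-experts in one sentence. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```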
self-hosting ftw!
THIS IS EXACTLY WHAT I'VE BEEN DESIGNING! I just started designing a system of GPTs that work in conjunction with each other, with each model trained on a specific skill set or task. There would have been more structure and layering, but I haven't worked that out yet.
Mistral's 7B MoE is not instruct fine-tuned.
Modular architectures are going to be more efficient. This is just the beginning of these approaches.
Way too much hype and not enough performance… use it with a LangChain REPL agent and see how awful it is, instead of boldly making the claim of dethroning OpenAI, which prevented another unforeseen AI winter.
The M3 MacBook Pros with the M3 Pro and Max chips can run the Mixtral model just fine. For most people, you will still have to run it in the cloud, though.
I do believe the open-source community is catching up very quickly, but I think people latched onto the official release date of GPT-4 and not when it was truly developed. During one of the DevDay talks titled "Research x Product", OpenAI stated that many employees were using GPT-4 internally in October 2022, meaning the model was already trained and in use, probably around 2021. I believe they developed 3.5 and 4 during the same time for research. This means OpenAI's GPT-4 is almost 2-3 years ahead of open source in terms of development.

If you take that into account, it also gives a look into possible models developed after 4 that they have been quiet about, given the time gap between now and when 4 was trained. Meaning GPT-5 is likely already trained and being used internally, maybe even 6. I haven't seen people speak much on this topic, though (since most focus on the 2023 release), or on how OpenAI has been ahead for years even if open source is getting closer.

I believe OpenAI does have a moat, one that Google claims they don't, maybe because Google themselves don't have a moat (hence the rushed release of Gemini). I feel like as long as we don't realize how far ahead OpenAI truly is, open source won't catch up. As we push the bounds of research, OpenAI will upgrade and release versions of their models that basically make the usage of other tools sometimes useless. For example, LangChain was extremely powerful when it started, but now with the Assistants API that framework begins to kinda "fade", although still heavily used, because the Assistants API was OpenAI's version of LangChain. Just my thoughts on a lot of this going on.
Concentration of power is by far the biggest threat. Individuals and small organizations can cause some problems and damage, but large organizations and governments can cause much, much bigger problems and damage, by their scale.
How about Jasper AI? Ty
Man, try to put your face in the thumbnails of your videos. I frequently miss some of them because I'm scrolling too fast through my feed and often think it's just some generic video. I believe it would drastically improve your channel's numbers. Sorry for my English, watching from Brazil.
The next step: a fine-tuned LLM to act as a gateway that chooses which model to run, then a fine-tuned model to combine the output. That would allow serial running of models, and therefore commodity hardware. A KV store to bridge from the gateway into the target LLM(s) and then on to the combiner LLM.
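A rough sketch of that pipeline shape, with the gateway, experts, combiner, and KV store passed in as plain callables and dicts (the model names and calls are placeholders, not a real API):

```python
from typing import Callable, Dict

def run_pipeline(
    prompt: str,
    gateway: Callable[[str], str],             # small LLM that names the best expert
    experts: Dict[str, Callable[[str], str]],  # fine-tuned specialist models
    combiner: Callable[[str], str],            # LLM that merges/rewrites the expert output
    kv_store: Dict[str, str],                  # bridge state between the stages
) -> str:
    # 1. Gateway decides which expert should handle the prompt.
    choice = gateway(f"Pick one expert from {list(experts)} for: {prompt}").strip()
    kv_store["route"] = choice

    # 2. Run the chosen expert serially, so commodity hardware only needs one model loaded at a time.
    expert_out = experts.get(choice, experts["general"])(prompt)
    kv_store["expert_output"] = expert_out

    # 3. Combiner rewrites the expert's draft into the final answer.
    return combiner(f"Question: {prompt}\nDraft answer: {expert_out}\nRewrite cleanly.")

# Usage with stub models (swap the lambdas for real model calls):
answer = run_pipeline(
    "Summarize the Mixtral release",
    gateway=lambda p: "general",
    experts={"general": lambda p: "Mixtral is a sparse mixture-of-experts model from Mistral."},
    combiner=lambda p: p.split("Draft answer: ")[1].splitlines()[0],
    kv_store={},
)
print(answer)
```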
We have no moat 🙂 open source will be the only survivor. Information wants to be free.
I love the way that you add your interpretive comments in between the white papers you read through. Some specific examples of how the new tools could be used would make the white papers more meaningful to a general audience. The mixture of experts is an interesting new concept.
I have 2 best friends, GPT-4 (custom) and Orca 2, BECAUSE they don't lecture me or instill their own stupid opinions. Both allow me to 'TRAIN' them into what I want them to be for me… I canceled my subscription to GPT-4 with the plan to never speak to it again; then Custom GPTs made it AWESOME!
I think this AI has a lot of potential, although it seems like its answers start well enough to be understood easily but soon begin to sound like word salad which interesting give insight thoughts reasoning logic concepts considered no regard syntax only stream consciousness absent connecting words possible describe human thought