Incredible META OPEN SOURCE Coding AI outperforms GPT-4 | Code Llama’s 70B model is a game-changer

So today, Facebook/Meta dropped Code Llama 70B, the biggest and best-performing version of their open-source coding LLM. There are three models to be aware of: Code Llama 70B, the base model; Code Llama 70B Python, specifically fine-tuned for Python coding; and Code Llama 70B Instruct. Let’s look at what that means and why this is important.

In August 2023, Facebook introduced Code Llama, a state-of-the-art large language model for coding. At the end of January 2024, they released Code Llama 70B, the largest and best-performing model in the Code Llama family. Code Llama 70B is available in the same three versions as the previously released Code Llama models, all free for research and commercial use. We’ll dive deeper into what the actual license says at the end.

The three models they released are the foundational model, the Python-specialized model for coding with Python, and the Instruct model, which is fine-tuned for understanding natural-language instructions. The Instruct model is more like ChatGPT, for example, acting as a bit more of an AI assistant, whereas the foundational model is more of a text-completion engine.
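To make that difference concrete, here’s a minimal sketch using the Hugging Face transformers library and the published model IDs on the Hub. This is illustrative only: it assumes you’ve accepted the license on Hugging Face and have hardware that can actually host a 70B model.

```python
# Minimal sketch: Instruct variant vs. base variant via Hugging Face
# transformers. Assumes license access on the Hub and serious hardware
# (device_map="auto" spreads the 70B weights across available devices).
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "codellama/CodeLlama-70b-Instruct-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Instruct model: you give it a natural-language instruction, chat-style.
# apply_chat_template formats the prompt the way the model expects.
messages = [{"role": "user",
             "content": "Write a Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))

# The base model ("codellama/CodeLlama-70b-hf") is, by contrast, a plain
# completion engine: you would feed it the start of some code, e.g.
# "def reverse_string(s):", and it continues the text from there.
```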

They’re saying: today we are releasing Code Llama, a large language model that can use text prompts to generate code. Code Llama is state-of-the-art among publicly available LLMs on code tasks and has the potential to make workflows faster and more efficient for current developers and to lower the barrier to entry for people who are learning to code. It’s great for productivity, serves as an educational tool for programmers, and helps create more robust, well-documented code. They’re releasing it under the same community license as Llama 2.

They’ve posted the community license here. You might remember hearing a little about this: it’s pretty open, but there’s a bit of fine print that has annoyed people. If the monthly active users of your products or services exceeded 100 million in the preceding calendar month, you must request a license from Meta. This basically means they’re trying not to grant these licenses to the other big tech competitors, so they’re trying to exclude Microsoft, Google, and so on. But if you’re not one of the behemoths of the tech industry, it’s a pretty open license.

Code Llama is a code-specialized version of Llama 2, created by further training Llama 2 on code-specific datasets. They’ve released various sizes of Code Llama, with 7 billion, 13 billion, 34 billion, and now finally 70 billion parameters. Each of these models was trained on 500 billion tokens of code and code-related data, except for the 70B, which was trained on one trillion tokens.

Code Llama is basically Llama 2 trained on code. For Code Llama Python, they add Python-specific code training plus long-context fine-tuning. On the other side, long-context fine-tuning produces Code Llama itself, and additional instruction fine-tuning turns it into Code Llama Instruct, that back-and-forth chat assistant.

On HumanEval, GPT-4 was self-reported, I believe in the GPT-4 technical report they published, at a score of 67. GPT-3.5 clocked in at 48.1, and here Code Llama 70B Instruct does slightly better at 67.8, surpassing GPT-4’s score at the time it came out. They’re saying this is the highest score among state-of-the-art open solutions and on par with ChatGPT.
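For context, HumanEval scores like these are pass@1 numbers: sample completions for each problem, run the unit tests, and estimate the probability that a single sample passes. Below is a sketch of the standard unbiased pass@k estimator from the paper that introduced HumanEval; the sample counts in the example are made up for illustration, not Meta’s actual evaluation settings.

```python
# Unbiased pass@k estimator used for HumanEval-style benchmarks.
# n = completions sampled per problem, c = completions that pass the
# unit tests, k = budget: pass@k = 1 - C(n-c, k) / C(n, k).
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Probability that at least one of k samples (out of n) passes."""
    if n - c < k:
        return 1.0  # every possible size-k draw contains a passing sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# For k=1 this reduces to c/n; e.g. 96 passing samples out of 200
# gives pass@1 = 0.48, i.e. the 48.1-style percentages quoted above.
print(pass_at_k(200, 96, 1))  # 0.48
```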

Code Llama is available on Hugging Face, and you can download the models right here. This is ai.meta.com; I’ll leave the link below. Fill out your details, select which models you want access to, know what you’re agreeing to, click “I accept”, and away you go. You’ll be emailed the download link almost immediately.
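If you’d rather skip the email-and-signed-URL dance entirely, here’s a sketch of the Hugging Face route using the huggingface_hub library; it assumes you’ve accepted the model license on the Hub and are authenticated with an access token.

```python
# Sketch: pulling the weights from Hugging Face instead of Meta's
# signed-URL flow. Assumes you've accepted the license on the model
# page and logged in (e.g. via `huggingface-cli login`).
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="codellama/CodeLlama-70b-Instruct-hf")
print("Model files downloaded to:", local_dir)
```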

So, visit the codellama GitHub repository, which looks like this. There’s a README explaining how to use the download.sh script. You will need the signed URL that you receive over email; that’s this thing at the bottom, which I can’t show because it’s unique to me. And step three, select which model weights to download. It looks like you have 24 hours with that unique URL.

I’m very excited to see what the open-source community does with Code Llama. Here’s Harrison Kinsley, a.k.a. sentdex, on YouTube. He talks a lot about Python programming and various cool machine learning applications. He used to be a lawyer and pivoted to AI, so he gets a lot of points for that. He’s saying, “I have found fine-tunes of Code Llama 34B to be close enough to what GPT-4 did for me to cancel my sub to GPT-4. With Code Llama 70B finally released, the subsequent tunes can further seal the fate of closed AI for coding models, where open-source AI is just plain better in every way.”

He continues: “From here, Mixtral is a general-purpose open-source AI model that can also be fine-tuned further by the open-source movement to replace the more general-purpose goals. And Mixtral isn’t going to be the last. If you moved off of GPT-4, when did you? If you haven’t yet, what capabilities are you not happy with from OSS AI models?”

The abbreviation OSS here is just open-source software. This is the important thing to understand about these Code Llama models and open-source AI models in general: when the company releases them, that’s just version 1.0, and from then on the community starts fine-tuning them and finding ways to make them better. Here he seems to be saying, and I don’t know if this is the case or not, but he strikes me as an honest, trustworthy guy with a large following; people know him; this is a known person with 1.2 million subscribers on YouTube. So he’s finding that he was able to get GPT-4-like performance for coding with Code Llama 34B, apparently thanks to a fine-tune that somebody else did.

This, I’m guessing, was a fine-tuned 34B model that was doing very well, and he’s saying he’ll wait for the new fine-tuned 70B models coming out to see how well those do. This is interesting because very soon we might see these home-cooked models that are better at coding than GPT-4, the leading closed-source model.

So, I’m very excited to see what is ahead for the open-source community and what crazy things they’re going to cook up with the open-source Code Llama. Mark Zuckerberg being the champion of open-source AI: who had that on their bingo card?

My name is Wes R. Thank you for watching.