OpenAI Dev Day: A Game Changer in AI Technology

Okay, so OpenAI Dev Day just happened.

"Welcome to our first-ever OpenAI Dev Day." This was apparently their biggest keynote of the year: "We've got some great stuff to announce today." And what they announced goes beyond what anybody was expecting, trust me.

Exciting Announcements

There were a lot of rumors out there, but even though the rumors were big, they managed to over-deliver. ChatGPT got way better in many ways, including a brand-new GPT Store: "We're going to launch the GPT Store," where people can build their own bots.

GPT-4 API Upgrade

Then they upgraded their GPT-4 API, which is huge even for non-coders, because this model will also be used in the ChatGPT web interface. And then they announced Assistants: you'll just ask the computer for what you need, and it'll do all of those tasks for you. These are essentially OpenAI-native agents, even if they don't call them that yet: "We're thrilled to introduce GPTs." They're more potent than some of the agents that are out there today.

So let's talk about all of this, what it means for you as a user, and then briefly look at what it means for the AI space. Because even before this conference they were ahead of their competition, but apparently they didn't consider that good enough, and they released all of this today. Starting with GPT-4 Turbo, and I think it makes sense that we start the conversation with this one.

GPT-4 Turbo

It's going to be the new underlying model behind GPT-4 when you use ChatGPT in the web interface. Big news that some people shared on Twitter over the last week was that it actually got updated: the knowledge cutoff isn't September 2021 anymore. "We are just as annoyed as all of you, probably more, that GPT-4's knowledge about the world ended in 2021." As of now, the cutoff is April 2023: "GPT-4 Turbo has knowledge about the world up to April of 2023." So yeah, you can just go into ChatGPT and it's up to date, and they even announced that they're going to keep updating it, something they haven't been doing up until now. So this date is going to keep creeping closer to today's date over time.

Beyond that, there's a whole list of developer-focused improvements, like a massive context window of 128k tokens. That's about 300 pages of a standard book, which makes it even larger than Claude's and means it can effectively take in an entire Harry Potter book, all with apparently greater accuracy: "In addition to longer context length, you'll notice that the model is much more accurate over a long context." Again, these are updates for the API, so they won't be available in the web interface as of today, but I think they're still interesting. So let's cover the other points here, because in my opinion a huge thing they're coming out with is the so-called Copyright Shield.
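For developers, the new model is addressed by name through the regular chat completions endpoint. Here's a minimal sketch using the official `openai` Python SDK (v1 style); `gpt-4-1106-preview` was the preview identifier mentioned around Dev Day, and the `build_request` helper is just an illustrative wrapper, not part of the SDK:

```python
# Minimal sketch of calling GPT-4 Turbo through the chat completions API.
# Assumes the official `openai` Python SDK (v1+) and an OPENAI_API_KEY
# environment variable; the model name is the Dev Day preview identifier.
MODEL = "gpt-4-1106-preview"  # the announced 128k-context preview model

def build_request(prompt: str) -> dict:
    """Assemble the chat-completion payload (kept as a pure helper)."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }

def main() -> None:
    """Live call; requires `pip install openai` and a valid API key."""
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(**build_request("When is your knowledge cutoff?"))
    print(resp.choices[0].message.content)

# main()  # uncomment to run against the live API
```

Since you pay per token, keeping the payload assembly separate from the live call also makes it easy to inspect what you're about to send before spending money on it.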

Copyright Shield and Pricing

"Copyright Shield means that we will step in and defend our customers and pay the costs incurred if you face legal claims around copyright infringement, and this applies both to ChatGPT Enterprise and the API." So this protects all the companies working with the GPT-4 API from getting sued for copyright infringement; OpenAI is going to take the hit for you if any lawsuit should come up. That's pretty huge.

Oh, and talking about big improvements: "I'm super excited to announce that we worked really hard on this, and GPT-4 Turbo, a better model, is considerably cheaper." They made the whole thing two to three times cheaper to use. GPT-4 was pretty pricey; recently, during product development, we had to generate tens of thousands of prompts, and that ran up the bill very quickly. Legitimately, $100 evaporates like this. Now it's going to be up to three times cheaper to do that.

Oh, and the last but definitely not least point here is that they're releasing APIs for DALL·E 3, the vision model, and voice: "DALL·E 3, GPT-4 Turbo with vision, and the new text-to-speech model are all going into the API today." So you're going to be able to generate high-quality images with text in them, and you're going to be able to read images, so a whole new set of applications is going to pop up. I'm guessing there are going to be wearable cameras that use the GPT-4 Vision API: they'll take photos of the world, analyze them, and then run software on top of that. That is pretty crazy and something I personally am very excited about; I can't wait to see what people come up with here. And all of this is available now, right? People can build on top of it. So these are the updates for the GPT-4 Turbo API.
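To give a feel for the vision side, here is a hedged sketch of what a GPT-4-with-vision request looks like through the chat completions API: a user message whose content mixes text parts and image-URL parts. The model name `gpt-4-vision-preview` and the example image URL are assumptions for illustration:

```python
# Sketch of a multimodal (text + image) request for GPT-4 with vision.
# The content-parts message format is what the vision API accepts;
# the model name and the image URL below are illustrative assumptions.
def vision_message(question: str, image_url: str) -> dict:
    """Build one user message pairing a text question with an image URL."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

def main() -> None:
    """Live call; requires `pip install openai` and a valid API key."""
    from openai import OpenAI

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4-vision-preview",
        messages=[vision_message("What is in this photo?", "https://example.com/photo.jpg")],
        max_tokens=300,
    )
    print(resp.choices[0].message.content)

# main()  # uncomment to run against the live API
```

The same message shape is what a wearable-camera app would assemble: snap a photo, host or inline it, and send it alongside a question.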

Revolutionizing AI Space

That's the reason they started with it, and it's also why I wanted to cover it first: this is really going to bring the biggest real-world upgrades to apps from today on out, and over time we'll be covering those on the channel. But what might be more relevant to most people watching this video is what they're bringing to the ChatGPT web interface, because they're changing a lot. Now, you might have heard the news that they're bringing all the models together. You're not going to have Code Interpreter, GPT-4 or DALL·E 3, and the choice of GPT-3.5. No, they're merging it all into one thing, so you don't have to pick a model anymore. If you have ChatGPT Plus, you're just going to log in and everything's going to be there. If you ask it to do a task, it's going to decide by itself which tools are right for the job.

Custom Chat Bots and Marketplace

ChatGPT will just know what to use and when you need it. And while that's all well and good, the big news here is that they're bringing out a whole new concept, so take a deep breath and try to wrap your head around this. What they're coming out with, they're calling GPTs, and what they essentially introduced here are custom chatbots that you can create yourself with natural language. In the new GPT maker, you're just going to be able to type what you're looking for, drag and drop a PDF you want it to know about, hit the little checkbox that says "use DALL·E 3 in my custom chatbot," and it's going to do all that and so much more. And the best part is, once you set it up exactly the way you want it to function, you don't have to publish it; you can set it to private and use it just for yourself or your organization.

But they're not stopping there. They're going a step beyond and creating a whole marketplace where you're going to be able to share these GPTs. You're even going to be able to sell them if they're good enough. So if you train them on specific transcripts, or give them specific knowledge that only you might have, and the resulting chatbot works so well that other people keep using it, well, you're going to be making money off of that. And yes, effectively you'll be entering into a revenue-share agreement with OpenAI, but that is absolutely incredible: you get to build a chatbot with their tool on their website, sell it through their marketplace, and reap the benefits. So if you have some type of unique data, well, now might be your time to shine: build your own GPT and offer it to the public. I love that.

They briefly showed the store, and one example that I caught on there was a custom GPT that was just there to create slides.

Custom GPTs in Action

So if you're creating PowerPoint presentations, there's no need to open up GPT-4 and stumble your way through the entire process; there's going to be a customized GPT that does one thing and one thing only: create PowerPoint slides. The same goes for every coding language and many other use cases, which we'll have to discover here together over the coming days and weeks. You can bet that I'll be live-streaming my experiences with building some of these, so tune in for that.

And yeah, as mentioned, all of these GPTs have custom knowledge bases, so you can upload any documents to them, and you don't have to worry about technical topics like how the document is going to be chunked, whether the various chunks will be retrieved correctly, or whether it's just going to mess up along the way. These are all problems that people building custom chatbots were facing on a daily basis. None of that anymore: you just speak to it and drag and drop your files. I love that, I'll be using it, and I hope a lot of you will join me on that journey. And if you thought this was amazing already, well, we're not done yet.

Introducing Autonomous Assistants

They're bringing auto-GPTs to their own playground: OpenAI is taking what we've so far called auto-GPTs and giving it its very own API, which they're calling the Assistants API. What this essentially is, is autonomous agents, but they're not calling it that, because in their eyes, and in mine too, it's not there yet. But rather than talking about it, let me just show it to you, because it's available already. You can go to platform.openai.com/assistants and sign in with your API key, which, by the way, if you're not aware, means you're paying for every single request, so just be careful with that. You can, and should, set up usage limits, just in case.

Inside this playground you get to create assistants. So let's just hit Create. You get to name it, then you get to give it instructions, and this is something like the Custom Instructions feature inside ChatGPT: you can just tell it "you are a helpful assistant," which would be the default behavior of ChatGPT, or you can go into great detail and give it rich context. At this point I just have to mention the product we've been developing for the past four months, because it does exactly this: it gives you a super-rich character that you can use inside ChatGPT's custom instructions or inside any chatbot, just like this. So if I wanted this assistant to take on the role of art director, I would just go in here, pick the art director, copy the entire character with all the rich details, and paste it into the instructions. And there you go: now we have an art director assistant with all the details fleshed out. By the way, this product also comes with 20,000 prompts specific to all the characters; we're going to be updating it to a thousand characters, and you get a course, a website, a Notion template, and a prompt generator you can use on any of these characters. So it's really the ultimate suite for building assistants like this.

But in this case, we're just going to run through this test, and the best part is that they already made the brand-new model with the 128k-token context window available; you just have to pick the gpt-4-1106-preview model.

Now you can do things like feeding it super-long instructions. And here at the bottom is the best part, something that wasn't easy to do until now: you can activate either Retrieval or the Code Interpreter, or even add custom functions with code. What these do: Code Interpreter allows the assistant to run Python code inside a sandbox, meaning it can actually execute code, create graphs, run data analysis, and process photos and video. All the things you can do with Code Interpreter, this assistant will now be able to do by itself. And Retrieval is one of the most convenient functions, because it lets you simply upload files. Look, now I can do something like this: I just Google "Nike design manual," add "PDF," and upload this Nike design guide into my new assistant. Now I have an art director character that is aware of all of Nike's design guidelines, and that's it. This took me about two minutes while explaining it, right? In this case I'll just turn off the Code Interpreter, because I don't really need it for this assistant, and I'll simply hit Save. Now I get to interact with this brand-new assistant.
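The same point-and-click flow can also be scripted against the beta Assistants API. Below is a rough sketch of what that looked like at launch, using the v1 `openai` SDK: upload a file, create an assistant with the retrieval tool, then start a thread and a run. The file name, assistant name, and instructions are made up for the example, and the beta surface (`client.beta.assistants`, the `retrieval` tool, `file_ids`) may have changed since:

```python
# Sketch of the launch-era Assistants API flow: file upload + retrieval.
# The helper below just assembles the assistant-creation arguments;
# the file path, name, and instructions are illustrative only.
def assistant_spec(name: str, instructions: str, file_ids: list) -> dict:
    """Arguments for creating an assistant with the retrieval tool enabled."""
    return {
        "name": name,
        "instructions": instructions,
        "model": "gpt-4-1106-preview",
        "tools": [{"type": "retrieval"}],
        "file_ids": file_ids,
    }

def main() -> None:
    """Live calls; require `pip install openai` and a valid API key."""
    from openai import OpenAI

    client = OpenAI()
    # Upload the knowledge file (hypothetical path).
    pdf = client.files.create(file=open("nike_design_guide.pdf", "rb"), purpose="assistants")
    assistant = client.beta.assistants.create(
        **assistant_spec("Art Director", "You are an experienced art director.", [pdf.id])
    )
    # Start a conversation thread and run the assistant on it.
    thread = client.beta.threads.create()
    client.beta.threads.messages.create(
        thread_id=thread.id, role="user", content="Summarize the brand guidelines."
    )
    run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
    print("Run started:", run.id)

# main()  # uncomment to run against the live API
```

Note that runs are asynchronous: after creating one, you poll its status and then read the new messages off the thread, which is exactly the loop the playground handles for you.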

At any time, I could give it access to the Code Interpreter and to more PDFs just like this one, all in seconds, and it uses all the perks of the GPT-4 Turbo model we talked about at the beginning of the video. Oh, by the way, they mentioned that you're going to have voice input and output with these too, so you'll be able to create your very own assistants, talk to them, and have them talk back: "I am JARVIS, a virtual artificial intelligence." If you want them to have more knowledge, you upload more files; if you want to customize their personality or how they behave, you customize the instructions in here. This all just became really simple, even for non-coders. And if you go to platform.openai.com/playground (link will be in the description), you can actually access the brand-new GPT-4 Turbo model with the 128k context window today. Look, this is it right here; you just send your message in here. And by the way, for all ChatGPT Plus users: if you go to GPT-4 right now and ask it when its knowledge cutoff is, it will tell you "my knowledge is up to date until April 2023," meaning we have GPT-4 Turbo inside ChatGPT already.

But now it's time to sum all of this up. So what does this mean? There are a lot of new updates; they overhauled their most important products. Their flagship API just got so much better: cheaper and more up to date. ChatGPT Plus users now have one model that does it all. Plus, there are going to be these custom GPTs that are really good at one specific thing, and they're adding a store for them. Plus, if that's not enough, you're going to have these assistants that just run by themselves.

Conclusion

The story of the day here is that all of these developments point toward one thing: they gave us the base models, then they let the developer community figure out all the best use cases, and then they just built those themselves and included them as features of their platform. These assistants were AutoGPT, BabyAGI, and all the other apps built around the concept of autonomous agents a few months back. The GPTs and the store are all the various AI wrappers, right? You have one that is really good for creating slides, one that is really good at helping you train your dog. From today on out, most of those are obsolete, because you have them right inside ChatGPT. And you can expect the same to happen with all the new APIs: people are going to build Vision API apps, and in a few months OpenAI will probably come out with its own versions of the best ones.

What this nets out to is that it's quite bad for the big players who have put a lot of money into developing their very own assistants, whereas now people can do this in a few clicks. But who it's really good for is the users, the people who use it every single day. You can now build your own assistant, you can now build your own GPTs; it's cheaper than ever and easier than ever, and the next round of innovation starts today with the release of their Vision API. I personally can't wait to see what people build with it. If you care to learn more about the best use cases of the Vision API, which is probably the next big chapter of the OpenAI story, then check out this video where I dissected a research paper that showed off over 100 vision use cases. I'll see you there.


43 COMMENTS

  1. FYI, there is a failure of direct retrieval with GPT-4 using the new OpenAI Assistants API. GPT tokenizes text and creates its own vector embeddings based on its specific training data. The new terms and sequences may not connect well to the pretrained knowledge in GPT's weight tensors.
    There was no semantic similarity between the new API terms and GPT's existing vector space. This is a fundamental issue with retrieval-augmentation systems like RAG: external knowledge is not truly integrated into the model's learned weights. Adding more vector stores cannot solve this core problem.
    The solution is to have multiple learned "knowledge planes" with trained weight tensors for specific tasks that can be switched in. This is better than just retrieving separate vector representations.

  2. I want GPT-V included in SD's latent or regional prompting with ControlNet line art, and linked to my photo editor. Then, when I draw sketches and click a button on my keyboard, it saves the current image and gives it to GPT-V; GPT-V tries to make sense of it and, for each position in the image (using regional prompting), puts a word to what that is. After it finishes, the SD extensions run, and SD generates 4 sets of the picture given the previous info. That way I can work as an artist and at the same time see my little assistant's generations with my favorite model, and why not with my favorite LoRAs too. The prompting will be done by GPT-V, not me; I only draw. 😀

  3. Super excited for GPTs. I've started creating one that I've had in mind for almost a year. It's taking a while to train, but it'll be so worth it. I'm also thrilled to hear that we'll no longer have to pick which model to work with. Being stuck with a model for the duration of a chat sucks when new ones come along, so I'm wondering if that feature will be retroactive, i.e., "old" chats will now be upgraded to the most current all-inclusive model. Now THAT would be amazing.

  4. The major concern remains: Privacy.

    Imagine what Microsoft will have access to with all these private files and data uploaded to OpenAI’s servers.

  5. I'm highly confused. So, we ChatGPT users won't get larger context windows? It's only for API and Playground? I hope not because that would be unfair, and that would mean Claude has ChatGPT beat. $20 a month is not cheap for most people. We should get the same as API users. Am I missing something?

  6. Hi Igor, can you make a video on a few of the different use cases for different custom GPTs, what it means for increased context window sizes, and how APIs can be used? Some of this content is abstract and difficult to apply to our day-to-day lives.

  7. Could you please explain what I can do with OpenAI as if I’m a confused grandfather?

    This is very confusing. I barely understand what you are talking about.

    I want to use this for my hotel business, and to help me write a book.

    Thank you 😅

  8. For me currently, it says GPT-4 has DALL·E 3, data analysis, and web browsing, but I guess it's not turned on yet or something, because it says it can't generate images or browse the web.

  9. We don't all actually have GPT-4 Turbo inside of ChatGPT yet; OpenAI just programmed GPT-4 to say that's its cutoff. Try uploading a 100k-token file or larger and it will not respond, because it says it is too long. Once the model picker is gone for you, that means you have GPT-4 Turbo. They have already started rolling it out to people.

  10. 01:03 bro, please use a pop filter with your mic.
    Also, these vocals are bad: there's too much bass and they're muddy. Please EQ your vocals before uploading.

  11. Were you able to fit your entire character prompt description into the AI assistant instructions? I would have assumed there's some sort of a character limit

  12. They should add a way to record and save the text to speech thread so people can easily create audiobooks and not rely on extra subscriptions from other services
