AI News Breakdown

In this video I’m going to break down the AI news, share some really cool research and tech that’s coming out, and at the end of the video I’m going to share who won the Meta Ray-Ban glasses now that I’ve passed 500,000 subscribers. So let’s get into it, starting with this email from OpenAI.

Email from OpenAI

OpenAI sent this email out to pretty much everybody on their API list. The announcement says: “Dear GPT Builder, we want to let you know that we will launch the GPT Store next week. If you’re interested in sharing your GPT in the store, you’ll need to…” and then it gives some instructions on how to share your own GPT in the store. If you don’t remember what the GPT Store is, Sam Altman actually announced it back at DevDay in November 2023. It’s a store where you can create your own custom chatbots inside of ChatGPT and then resell them: you can list a GPT there, and “we’ll be able to feature the best and the most popular GPTs. Of course, we’ll make sure that GPTs in the store follow our policies before they’re accessible. Revenue sharing is important to us. We’re going to pay people who build the most useful and the most used GPTs a portion of our revenue.” Essentially, you’ll be able to create your own GPTs using their GPT Builder and, after some sort of approval process, you’ll actually be able to monetize those GPTs. That feature is coming next week.

One interesting thing to note here, and this is from Nick Dobos, who is the creator of the Grimoire GPT: if somebody actually uses your GPT and needs some customer support on it, it looks like people can send support emails directly to the creator of the GPT. You can see in this screenshot that somebody can write a support message, and it says “send a support email to the builder of Grimoire.” So if you do build a really cool, really successful GPT, you could probably expect a whole bunch of customer support tickets to manage, which is likely going to be a deterrent for some people who would otherwise create GPTs.

While we’re on the subject of OpenAI, it recently came out that the New York Times is suing OpenAI because they were able to get ChatGPT to give back articles written by the New York Times, with the text reproduced verbatim from the articles. I talked about that in last Friday’s news video. Well, it turns out that OpenAI is actually trying to get a lot of these news companies to allow training on their news; however, they’re only offering them deals between $1 million and $5 million for this information. But if people start going to OpenAI and ChatGPT to get all of their news instead of going to the original news sources, those sources stand to lose a lot more than the $1 to $5 million they would make off of their deal with OpenAI. To put it into even more perspective, it says here that Apple is looking to partner with media companies to use content for AI training and is offering at least $50 million over a multi-year period for the data. For some of the smaller news sources, a deal like this would be an amazingly phenomenal deal. For some of the bigger news sources, like the New York Times or Forbes, it would be a pretty horrible deal, because they stand to lose a lot more than they’d gain from it. But as we enter this new era of AI, where more and more people are using AI to get their news, their articles, and their information, I personally think a lot of these news companies should really start to rethink how they approach their business model, because they may be fighting tooth and nail to hold on to a business model that could be fairly obsolete not too far in the future. Now, I did mention in past videos that I do believe the people doing the investigative journalism, the people doing the research, and the people
spending the time to create the content that chatbots are getting trained on do deserve to be compensated. There still needs to be an incentive to do the research and put in the time to create this content, but the model just needs to be rethought. I think people are getting increasingly fed up with every dang news site these days having a paywall, and if there’s no paywall, being inundated with insane amounts of ads all over the place that really screw up the user experience when you’re trying to read the content. There has to be another way other than paywalling it all or making our eyes bleed with advertising, so hopefully some new, more future-proof models for content creators, journalists, and the people who are actually out there reporting on the news get figured out in 2024.

Microsoft’s Copilot Key

This week Microsoft introduced a new key that’s going to go on future Windows keyboards. It’s called the Copilot key, it’s going to look like the little Copilot logo, and when you press it, it pops up the little sidebar where you can access Copilot on the right-hand side. If you’re on one of the newer versions of Windows, you’ve got a little button down next to your search bar in the bottom left of your screen; if you click on that, it brings up your Copilot bar. This new button that Microsoft wants to add to future keyboards is a single key press to bring that up. Not the most exciting news in the world, but it does show that Microsoft is really, really committed to baking AI into everything they do in the future.

It looks like Google is getting ready to release Gemini Ultra pretty soon, but it’s going to come at a premium price tag. Right now, Google Bard uses Gemini Pro; all of the demos we saw from Gemini were actually demos of Gemini Ultra, which is their really powerful model. According to a developer over at Google named Dylan Roussel, it will be called Bard Advanced, and you’ll be able to get three months of access to it when you sign up for Google One, which is, I guess, a sort of more expensive Google Drive that gives you access to more storage and things like that. We don’t know when we’re going to get access to this; all we’re really working off of is this one tweet from Dylan.

We’ve been seeing a lot of amazing text-to-video and image-to-video models lately from companies like Runway, Pika, and Stable Video Diffusion, and recently Leonardo entered the game. Well, now Alibaba Group is entering the game with I2VGen-XL, or Image-to-Video Generation XL. Supposedly this model is capable of generating higher-resolution videos as well as longer videos. From what I can tell, it does require an image input; it’s not text-to-video, it’s image-to-video. Some of the examples here look pretty impressive, although not a whole lot longer than what we’ve seen out of the other image-to-video models. Unlike most of the other video models out there, like Pika and Runway, this one is available as open source, and we can see right here on Alibaba’s GitHub page that all of the code is currently available, along with all of the installation instructions to install it locally or on some sort of cloud computer. I haven’t personally installed this yet, but if I can get it working on my own computer and get some good results out of it, you can bet I will make a tutorial video and walk through how I got it to work.
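For reference, one way to poke at the model without wiring up the repo’s own scripts is through Hugging Face’s diffusers library, which, as far as I can tell, ships an I2VGenXLPipeline wrapping the released ali-vilab/i2vgen-xl weights. Here’s a minimal sketch under that assumption; the input path, prompt, and sampler settings are just illustrative placeholders:

    import torch
    from diffusers import I2VGenXLPipeline
    from diffusers.utils import export_to_gif, load_image

    # Load the released I2VGen-XL weights in half precision to fit consumer GPUs
    pipe = I2VGenXLPipeline.from_pretrained(
        "ali-vilab/i2vgen-xl", torch_dtype=torch.float16, variant="fp16"
    )
    pipe.enable_model_cpu_offload()  # trades some speed for lower VRAM usage

    image = load_image("input.jpg")  # the required image input (placeholder path)
    frames = pipe(
        prompt="a sailboat drifting across a calm lake at sunset",  # illustrative
        image=image,
        num_inference_steps=50,
        guidance_scale=9.0,
        generator=torch.Generator("cpu").manual_seed(0),  # reproducible sampling
    ).frames[0]

    export_to_gif(frames, "i2vgen_xl_output.gif")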
Lately, when it comes to chips being used for AI, we’ve been hearing a lot from companies like Nvidia and AMD, but not a ton from Intel. Well, Intel just spun off a new AI company called Articul8 AI (spelled with an 8 in it), and this appears to be the division that’s going to be fully focused on AI. According to this article, Articul8 plans to deliver new software products and services, including GenAI-powered products. From what I can tell, this Articul8 company isn’t going to be creating AI chips; it’s more about providing cloud compute services and GPUs that companies can access to train and run their AI models on.

There’s a new AI out that can read your mind, called DeWave. A lot of the other research we’ve seen so far around AI mind-reading tech either requires somebody to slide into an MRI machine or get a brain implant like the Neuralink. With this one, you just wear a funny-looking hat: an electroencephalogram (EEG) cap. They did put out a demo video on their website that sort of explains how the technology works; if I’m being totally honest, it’s a little bit over my head at the moment, but let me fast-forward to them actually showing it in action. In the demo, the person silently thinks something like “Yes, I’d like a bowl of chicken soup, please,” and the decoded text comes back as “Yes, a bowl of beef soup.” I did speed that up a little bit, but essentially this person is wearing the EEG cap, they think silently, there’s a little bit of a pause, and then their thoughts show up as text on the screen and an audio voice reads out the text. The idea is that this could help paralysis patients better communicate in the future.

This week the Supreme Court in the US also put out its year-end report on the federal judiciary, and much of the report is actually about AI and how the courts should really be considering how to use AI’s advantages and understand its disadvantages. Starting on page five: “we face the latest technological frontier: artificial intelligence. At its core, AI combines algorithms and enormous data sets to solve problems.” It goes on to sort of explain what AI is (facial recognition, voice recognition), and it talks about how a lot of people are worried that AI is going to replace a lot of jobs in the legal system, like maybe a future of AI judges or AI lawyers. But it also cites cases where lawyers actually presented past cases that didn’t exist, because the AI hallucinated them. So it covers not only where AI can benefit the courts and the legal system, but also the things we need to worry about. They say here: “machines cannot fully replace key actors in court. Judges, for example, measure the sincerity of a defendant’s allocution at sentencing. Nuance matters: much can turn on a shaking hand, a quivering voice, a change of inflection, a bead of sweat, a moment’s hesitation, a fleeting break in eye contact. And most people still trust humans more than machines to perceive and draw the right inferences from these clues.” I’d argue that AI is not too far off from drawing the right inferences from those clues as well, but as it stands right now, yes, I do believe there still need to be humans in the loop when it comes to the court system and determining somebody’s innocence or guilt. The report goes on to say that humans should be in the loop, but they do see AI as something that can help make their jobs easier. It’s a really interesting read; I did read the whole thing, but for the most part I just gave you the CliffsNotes of what they said regarding AI.

2024, in my opinion, is going to be the year of robotics. I mentioned that in my predictions video, and I believe it even more now; we’re kicking off the year with so much robotics news in just the first week, including Google DeepMind showing off some of its latest research. This includes AutoRT, which harnesses large models to better train robots. You can see a little bit of a diagram here of how this works: it maps the environment, describes what it sees, generates tasks based on what it sees, filters out the tasks it can and can’t do, does the tasks it can, comes back with some sort of score, and then repeats the process.
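To make that loop a bit more concrete, here’s a purely hypothetical sketch of the cycle as the diagram describes it; every name below (vlm.describe, llm.propose_tasks, and so on) is a placeholder I made up, not DeepMind’s actual API:

    # Hypothetical sketch of one AutoRT-style cycle, matching the diagram:
    # observe -> describe -> generate tasks -> filter -> execute -> score -> repeat
    def auto_rt_cycle(robot, vlm, llm, critic):
        scene = robot.observe()                           # map the environment
        description = vlm.describe(scene)                 # describe what it sees
        candidates = llm.propose_tasks(description)       # generate candidate tasks
        feasible = [t for t in candidates
                    if critic.is_doable(t, description)]  # filter out what it can't do
        for task in feasible:
            outcome = robot.attempt(task)                 # do the tasks it can
            critic.score(task, outcome)                   # come back with a score
        # ...and then the whole cycle repeats on the next scene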
They also announced SARA-RT, or Self-Adaptive Robust Attention for Robotics Transformers. This is a way of making models more efficient through fine-tuning, or what they call “up-training”: basically converting a lot of the complex computations the model is doing under the hood into less complex computations. It says here: “when we applied SARA-RT to a state-of-the-art RT-2 model with billions of parameters, it resulted in faster decision-making and better performance on a wide range of robotic tasks.” You can see some examples here: closing a drawer, knocking over a Coke can, moving an orange can near a green rice chip bag, moving a Red Bull can near a blueberry RX bar, etc.
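My reading of “complex computations into less complex computations” is the classic attention trade-off: descriptions of SARA-RT suggest up-training converts the transformer’s quadratic-cost softmax attention into linear attention. Purely as a generic illustration of that swap (this is the textbook linearization, not necessarily SARA-RT’s exact recipe):

    import torch
    import torch.nn.functional as F

    def softmax_attention(q, k, v):
        # Standard attention: the (n x n) score matrix makes the cost
        # quadratic in the sequence length n.
        scores = (q @ k.transpose(-2, -1)) / q.shape[-1] ** 0.5
        return F.softmax(scores, dim=-1) @ v

    def linear_attention(q, k, v, eps=1e-6):
        # Swap softmax for a positive feature map, then reassociate the matmuls
        # so only (d x d) summaries are built; the cost becomes linear in n.
        q, k = F.elu(q) + 1, F.elu(k) + 1
        kv = k.transpose(-2, -1) @ v  # (d, d) summary, independent of n
        normalizer = q @ k.sum(dim=-2, keepdim=True).transpose(-2, -1) + eps
        return (q @ kv) / normalizer

    # e.g. q = k = v = torch.randn(512, 64): both return a (512, 64) tensor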
And finally there’s RT-Trajectory, which helps robots generalize. It says: “RT-Trajectory automatically adds visual outlines that describe robot motions in training videos. RT-Trajectory takes each video in a training dataset and overlays it with a 2D trajectory sketch of the robot arm’s gripper as it performs the task.” In this demo, on the right we have a human drawing the path they want the robot to take, and on the left we see the robot mimicking that path.

While we’re on the topic of robotics, these videos have been going somewhat viral on X lately, for “Mobile ALOHA: Learning Bimanual Mobile Manipulation with Low-Cost Whole-Body Teleoperation” (they really like to make these names fun). Here we see a general-purpose robot that can do all sorts of things. For example, in this demo video we can see it cooking shrimp totally autonomously. That video is sped up to six times speed, but down here we can see some other skills shown in real time, like wiping wine off a counter, pressing the button to call an elevator and then actually getting on the elevator when it comes, using cabinets, rinsing a pan, and pushing chairs in and out; it can even high-five people. All of the demos we’re seeing right here are being done completely autonomously. Then there’s the teleoperation, where we can actually see somebody behind the robot operating it. It’s kind of hard to tell in this video, but if I move forward a little bit, we can see the guy’s legs behind the machine: this robot is being operated by a human. I don’t know if this is used for training purposes as well or if it’s just a way to not have to use your hands, but I would assume a lot of the teleoperation is how it’s actually trained to then do these things autonomously in the future. Here’s another example, at 10x speed, of the teleoperation, with somebody behind the machine doing all the work that the robot follows. I don’t totally understand the purpose of this if it’s not being used for training, so digging into the paper a little bit, it says: “Our teleoperation system is capable of multiple hours of consecutive usage, such as cooking three-course meals, cleaning public bathrooms, and doing laundry. Our imitation learning results also hold across a wide range of complex tasks, such as opening a two-door wall cabinet to store heavy cooking pots, calling an elevator, pushing in chairs, and cleaning up spilled wine. With co-training we are able to achieve over 80% success on these tasks with only 50 human demonstrations per task.” Moving down a little bit, it says: “Augmenting the ALOHA system with a mobile base and whole-body teleoperation allows us to collect high-quality demonstrations on complex mobile manipulation tasks. Then, through imitation learning co-trained with static ALOHA data, Mobile ALOHA can learn to perform these tasks with only 20 to 50 demonstrations.” So, based on my understanding, you can teleoperate it to help train it along with training it without teleoperation, but using the teleoperation features to help train it makes it much more effective.
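For the curious, here’s a hypothetical sketch of what that co-training recipe could look like; the policy and dataset objects are stand-ins I invented for illustration, not the authors’ actual training code:

    import random

    # Behavior-cloning co-training in the spirit of the Mobile ALOHA paper:
    # each update samples from either the small set of new mobile teleoperation
    # demos or the larger existing static ALOHA dataset, so 20-50 mobile
    # demonstrations get stretched by the static corpus.
    def cotrain(policy, mobile_demos, static_demos, steps, mix_ratio=0.5):
        for _ in range(steps):
            demos = mobile_demos if random.random() < mix_ratio else static_demos
            observation, expert_actions = random.choice(demos)  # one demo chunk
            loss = policy.imitation_loss(observation, expert_actions)
            loss.backward()             # standard supervised (imitation) update
            policy.optimizer_step()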
Here’s what’s really, really cool about Mobile ALOHA: the whole thing is completely open source. You can build one of these for yourself right now, today, with existing parts that are available to buy on the internet. If I click on this little tutorial button here, it brings me to a Google Doc with a step-by-step tutorial on how to build one of these yourself, and all it’s going to cost you in parts is $31,798. All of these parts are orderable over the internet, like these robot arms, which you need two of, and these other robot arms, which you also need two of. Once you’ve got it built, the hardware code and the learning code are both available on GitHub, as well as some of the datasets they’ve already trained it on, like the ability to push in chairs, wipe up wine, operate an elevator, or operate a cabinet. So if you feel like doing a $32,000 fun weekend experiment, I’ve got one for you.

There have also been some interesting announcements about announcements. For example, this company at rabbit.tech says that on January 9th at 10:00 a.m. Pacific time they’re releasing something interesting, and I actually find their trailer kind of funny. I’m muting the trailer here because I don’t know the status of the copyright on the background music, but I find it funny because they’re holding whatever this rabbit device is in their hands and it’s completely blurred out and pixelated, and whenever I see that, it just reminds me of, I don’t know, some sort of adult toy or something in their hand; trying to imagine what they’re pixelating and hiding just makes me laugh for some reason. It appears to be some sort of device that you speak into, somewhat of a pocket assistant. Some of the examples are people just speaking into it and saying “order me an Uber.” The guy on the screen right here is saying, “that was delicious; look in my fridge, see what ingredients I need, and order them for me so I can make that same recipe again tomorrow,” and that’s the prompt he’s speaking into this unknown, blurred box. So we’ll find out more about what this rabbit.tech thing is when they finally reveal it to us on January 9th. I’m not necessarily sure people are going to want another gadget outside of their mobile phone in their pocket; if this is something else, like you have a phone and you have this gadget that you talk to, I can’t really see it doing super well, because eventually we’ll just have this tech in our phones anyway. But I’m still curious, because they’re not telling us what it is, and they’re being very mysterious with their blurred, open-loop concept of an ad.

The other announcement of an announcement is that Samsung is going to make an announcement on January 17th. So on January 17th we’re going to get a sneak peek at Samsung’s next phone; Samsung said that its latest device will offer an “all-new mobile experience powered by AI.” We don’t know what that means yet, but we’ll find out on January 17th, I guess.

And finally, next week is the annual Consumer Electronics Show in Las Vegas, and the theme of this year’s show is pretty much “AI is going to be in everything.” So although the year has started off somewhat slow with AI news, I imagine after CES we’re going to be flooded with it. There are announcements coming from Nvidia; from LG, which is featuring its new AI processors; and from Samsung, which, it says here, will make us wait until the 17th to find out about its new phone. They won’t be announcing that at CES, but in a press conference; the theme of Samsung at CES is “AI for All: Connectivity in the Age of AI,” so whatever they’re going to be talking about, it’s going to revolve around AI. Now, I’m actually headed out to CES; I’m going to be there from Sunday evening all the way through Friday, for the entire event. My goal while I’m at CES is to try to find the coolest, craziest, most innovative tech, probably mostly related to AI, but probably some that’s not; I love augmented reality, virtual reality, cool displays, gaming, all of that stuff as well. I’m going to be trying to hunt down the craziest, coolest, most futuristic tech that I can possibly find at CES and then making videos about it while I’m there. The plan is to record a bunch of footage of this cool tech on the CES floor, go back to my hotel room, and then compile it together into a handful of videos.

20 COMMENTS

  1. Just curious if anyone can help: where can I look for any existing open-source LLM models/frameworks that can be used to embed chat functionality into apps, with the ability to function offline (i.e., without needing cloud APIs)?

  2. Did you delete the video about setting up a website with Hostinger? I was hoping to find some tips and discount codes, but I couldn't find your video :(

  3. Wow, this video on AI is truly mind-blowing! The advancements in technology never cease to amaze me, and with AI, the possibilities seem endless. Thank you for sharing such an insightful and informative video. It's exciting to think about the potential impact AI could have on our future. Keep up the great work!

  4. Hi Matt,

    Greetings! I've been an avid follower of your content, finding immense value in the comprehensive AI insights your channel provides.

    Outside of my main channel here, I'm currently working on developing a foundation similar to that of the Innocence Project, dedicated to addressing the longstanding issues within our flawed judicial system. Our focus lies in leveraging AI solutions to revolutionize legal research and disclosure review, specifically targeting entry-level legal tasks. This involves training AI on legal systems and datasets, enabling it to navigate case law, formulate legal arguments, and scrutinize disclosures for inconsistencies.

    Our mission is clear: we aim to counteract the injustices perpetrated by the legal system, which often exploit the limited resources of those wrongfully imprisoned. By leveling the playing field through AI assistance, we strive to reduce the number of innocent individuals facing conviction due to inadequate defense. Alarming statistics indicate that a significant portion of us will encounter the shortcomings of the current legal system directly or indirectly during our lifetimes.

    I'm reaching out in the hope that you and your knowledgeable audience could contribute to our brainstorming efforts. We're in search of AI solutions that can be accessible to individuals, not just qualified law practitioners. The ultimate goal is to democratize legal assistance and empower those who lack the resources to defend themselves adequately.

    In my research, PaxtonAI seems promising, although clarity is needed regarding whether their services are exclusive to qualified law practitioners or open to the wider public. I've initiated contact with their headquarters and will provide updates as they become available.

    Your insights and suggestions, as well as those from your engaged audience, would be immensely valuable in our pursuit to make legal support more accessible and equitable.

    Thank you for considering our cause, and I look forward to any guidance or collaboration that may arise from this outreach.
