Another massive release just happened: Midjourney V5 just came out
And I already found some tricks to get better results. But why would you even want to use this? And once you do, what do you need to look out for? We’re going to discuss all of that today, plus how to use GPT-4 to get superior Midjourney V5 results. So let’s get into it!
Accessing Midjourney V5
First up, I need to tell you that only paying Midjourney members have access to V5. If you’re a paying subscriber (plans start at $10 a month), switching is quite easy: go into Discord, type “/settings,” and you can change the model to V5. This is the most comfortable way to do it. Otherwise, you need to append “--v 5” to the end of your prompts.
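For reference, the two routes look roughly like this in Discord (the prompt text and the exact button label here are only illustrative, not taken from the video):

```
/settings                 → click the "MJ version 5" button in the menu that appears
/imagine prompt: a portrait of an elderly fisherman at sunrise --v 5
```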
Reasons to Use Midjourney V5
The biggest reason to use V5 over V4 right now is a desire for more realistic images, especially of humans. The hand situation has been fixed, and the photorealistic qualities have improved significantly. These pictures look as if they were taken with a camera, like they’re 95% of the way there, with great skin texture, proper hands, and even details like teeth now being rendered correctly, because this new model was trained to do exactly that. You can start using it in place of human models right now.
Midjourney V5’s Superior Features
The more realistic rendering baked into the system isn’t everything, though. Overall, you can view V5 as a more professional model than V4 was: you need to go into more detail to get better results. So as opposed to GPT-4, which got smarter and a little easier to prompt, Midjourney moved in the other direction. You can get better results, but your prompts need to be better.
Using GPT-4 with Midjourney V5
One of the biggest changes with V5 is that keyword-only prompting is no longer recommended. You should use natural language, just like I’m speaking to you right now. Natural language carries so much more detail and direction that the model can craft better results from it than from a string of keywords.
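To make the difference concrete, here is an illustrative pair of prompts for the same image; the exact wording is my own example, not one from the video:

```
Keyword style (old habit):
  old fisherman, portrait, sunrise, 85mm, bokeh, photorealistic

Natural language (recommended for V5):
  A photorealistic portrait of an elderly fisherman mending his net at sunrise,
  shot on an 85mm lens with soft backlight and shallow depth of field --v 5
```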
Another change in V5 is that the initial results already come out upscaled, so there’s no more waiting for a slow upscaling step, and the photorealistic capabilities are baked in. Using GPT-4 to create detailed prompts can be incredibly useful for getting superior results out of Midjourney V5.
Getting Better Results with GPT-4
Creating detailed prompts with GPT-4 can help you get the most out of Midjourney V5. Including photography-related terminology, a specific lens, and a detailed description of the lighting (for example, volumetric lighting) can lead to better and more realistic outputs. GPT-4 essentially acts as a photography prompt generator: once you specify the input and output you want, it can generate detailed descriptions of scenes that you’d like to recreate inside Midjourney V5.
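If you prefer to script this priming step instead of pasting it into the ChatGPT interface, here is a minimal sketch using the OpenAI Python library (openai>=1.0). The system-message wording, the make_prompt helper name, and the example scene are my own assumptions for illustration, not the exact instructions from the video:

```python
# Minimal sketch: prime GPT-4 as a photography prompt generator for Midjourney V5.
# Assumes the openai Python package (>=1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative priming text; adjust to taste.
SYSTEM = (
    "You are a photography prompt generator for the Midjourney V5 image model. "
    "Input: a short description of a scene. "
    "Output: one detailed natural-language prompt that names a camera, a specific "
    "lens and focal length, the lighting (for example volumetric light), and the "
    "composition, ending with the parameters --ar 3:2 --v 5."
)

def make_prompt(scene: str) -> str:
    """Turn a short scene description into a detailed Midjourney V5 prompt."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": scene},
        ],
        temperature=0.8,
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(make_prompt("an elderly fisherman repairing a net at sunrise"))
```

The generated text can then be pasted after /imagine in Discord as usual.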
Further Learning
If you want to get even better at Midjourney, you should consider learning more about photography and lenses. Understanding lenses and how they influence the look of an image can be beneficial in crafting better prompts for Midjourney V5. Resources are provided in the description of this article for further learning.
Creating Amazing Images
If you create some amazing images with Midjourney V5, join our Discord server and share them with the community. Many of us there post our creations and discuss what’s going on in the AI space on a daily basis. And if you want to get even better at Midjourney, check out this video, because it will teach you the ultimate hack for getting the most out of it with just one keyword. See you there!
It's not about creating something great but creating a certain something you WANT to create. GPT can't help with that.
With the new /shorten command, it's looking like the devs are telling people to look at how little of their prompts is being processed. Have you tried /shorten on these paragraphs? I've used the Bing chatbot to generate prompts for MJ. They're shorter, and are (I hope) what MJ is expecting. It doesn't seem to me that MJ does well with normal speech patterns. This is a command-line interface, and we need to understand the parameters and keywords, and how they are interpreted, to get good results.
🤔 I copied your provided prompt into ChatGPT. There are no photographic details in the output. Did I miss something?
Does it take long to generate? This is actually so cool! What are your views on BlueWillow ? please share tips on this new tool next for us designers currently using it. That would truly be amazing 🤩
One word: AMAZING 😄 Can I use BlueWillow for this process? Please share for us beginners who just started using this new tool. Your insight would be amazing
Great info! I used the first prompt with my own description and was flagged for a possible content policy violation. Nothing in my description seems to get even close to Open AI's content prohibitions and ChatGPT won't (can't?) diagnose the problematic content. If you know of a good resource for content policy trigger terms, please share. Thanks for the excellent videos.
can’t wait for in painting 😮
thx igor
Thanks for sharing this useful information! Does anyone know how to use --v 5 and a certain aspect ratio in the same prompt?👀
Oof. AI art, the epitome of talentless hackery
I just started using ChatGPT, but where is the button to get to GPT-4?
Priming ChatGPT with prompts before requesting is absolutely genius, and the new Midjourney works are absolutely stunning. I also used GPT-4 for BlueWillow prompts and got some really cool images
I really enjoyed this clip, it was so entertaining and informative. What are your views on BlueWillow? I just started using this new tool and I would really appreciate your help. Please share. I believe there is a lot I can learn from you 😄
Wait, you can use a bad image taken with your phone and make it better?
How come YouTube videos nowadays are 98% talk and 2% content?
If you add "always start the prompt with "/imagine prompt: "" to your instructions it'll save you from having to type /imagine all the time
Seriously, it seems that people who are fans of Midjourney have no eyes. No matter what you do, nothing works in these pictures, nothing. At first you have the impression of having something, and then you look, and any child can realize that it's just not a normal thing: the proportions make no sense, there is no detail, the anatomy is completely messed up, the colors look filthy, and the overall sense is distorted. In short, nothing works. Apart from people who hate others, hate the image, and just need fast and tasteless content, I don't know who can say wow, except someone who has never seen an image in their life
ALSO, NEGATIVE PROMPTS can have NEGATIVE effects on the output; it all DEPENDS on which model you are using and HOW MUCH negative you are using. Sometimes NO negatives gives a BETTER result, sometimes a SIMPLE negative word fixes everything, and sometimes LOTS of negatives are NEEDED if a model requires it, but not always…
Sorry, but it has been tested MANY times: if you use a ChatGPT-generated prompt, PLEASE EDIT IT AND REDUCE the excess… there is no NEED to use a prompt like the ones you are showing. YES, we can use MORE words than before, and YES, even without editing the chatbot output you will get NICE results, the SAME as NOT using a long paragraph… you just need to write BETTER, more DETAILED prompts, not LONGER prompts. I apologize, but it's just too much text for achieving the SAME thing as less text…
Midjourney should begin making plans to raise their 25 free limit right away. As an alternative, AI technologies like Blue Willow will become more and more popular over time. Blue Willow, in my opinion, will be the one to take down MidJourney
So Midjourney themselves said that it basically ignores everything after roughly 60 words, this was before V5 however. It's probably worth finding out if it's still the same, but it's something you should take into consideration while prompting.
2:40 It's not that the image upscales really fast, but rather the 4 results are already upscaled. They said they are working on further upscaling images to even higher resolutions but it's not implemented yet.
For what it's worth, if you'll append "Return the completed prompt inside of a code box." It'll return the response inside a code box with the "Copy" button at the top right. Nothing groundbreaking there. It just makes the copy and paste that much easier.
Man’s been smoking way too much, that does not look like a real photo bro
The sound of your video is so bad, my dude; the compressor is insane and it sounds like my ears are shutting down every time you start talking. Just a tip
I tried this and just got a bunch of pictures of cameras lol…
I keep getting the AI putting GIANT images of cameras right in the middle of the images. it's really annoying lmao
you need to readjust the sound compressor, the attack is too slow, sounds terrible… but the content is great, thanks
Thanks, the lens info and using GPT to prompt Mid-j v5 has helped improve my images by leaps and bounds, thank you.
I think your mic is very sensitive to sound direction. The volume changes a lot.
Your audio settings seem a little off. Your mic gate seems to continuously open and close. Your voice is getting louder and quieter through the whole video. Might take a look into it. Good video though 🙂
Hey Igor, I just watched your video and I must say that it was really informative and well-made. I loved your videos. I was wondering if I could help you edit your videos and also make highly engaging shorts for you.
this is ssooooo amazing!!! so helpful!! thank you
the link to the different lens types has stopped working 🙁 is there another one?
If I purchase any plan on Midjourney, am I able to sell (commercial use) on sites like Shutterstock, Adobe Stock, Freepik, Getty Images, etc.?
great content, but check your audio compression and mic work to improve the audio.
honestly there is a thing in our heads, called the brain… and together with something a little more metaphysical, called your mind, we can get inspired and start to create art… why would I use such boring software that takes away exactly what creating is all about, the creative process?… that's really for lazy people… Sorry, one doesn't need to jump on every train. Very soon people will have avatars having sex on their behalf… ha ha ha ha…
That's really cool. I'm gonna build something similar into a new app I'm making; I'd be thrilled to have you try it! I got access to the full GPT-4 API, so there are some interesting things that can be done!
When I can create long-form video (1h plus) with text-to-video, I'll be impressed.
I like how you explain everything slowly and calmly. Midjourney should begin making plans to raise their 25 free limit right away. As an alternative, AI technologies like Blue Willow will become more and more popular over time