AI/Generative Video Animation Exploration

Over the past year I have been experimenting with various AI/generative image, video, and 3D workflows. There are currently no "best practices"; it's the wild west, no seatbelts or handrails, so wear a helmet and bring your own weapons. Recently I started making simple collages of scenes for generative video, with better than expected results.

This might seem like a no-brainer, even to some who are coming into this cold. Most people, myself included, have been generating imagery from text or image-to-image prompting, iterating in ComfyUI with Flux, Stable Diffusion, and various LoRAs (smaller files that contain more detailed info about a subject), touching it all up in Photoshop, then moving to a third-party animation platform such as Runway or Kling AI, and Luma is looking promising (edit: there are now open-source options to run on your personal machine, and Luma's Ray2 is outstanding). Yup, skipping all that and going straight into Photoshop, cutting/masking up a variety of images and collaging them together.

I was a fan of the B/W photocopier collage aesthetic often found in punk rock fanzines in the '90s. At the time it inspired me to make my own, drastically and forever altering the page folding of any sketchbook I owned from that point on (sticky pages, and not for the usual teen reasons). But it's not just the similarity with the aesthetic of that era; it's the energy of it all. This is fun! I can now do things I wanted to do 10 to 20 years ago and never could, because of either time, resources, or simply that it was not possible at the time. I'm going off on a tangent. So back to what is oddly working way better than I imagined.

Here's what worked for me. Develop an idea of what the system you are planning to use can do, get a good sense of its limitations, then build your collage. Upload it to an animation platform and do a couple of test generations with your collage at low credit settings. If it looks promising, then continue. If you get nothing, or a jumbled mess of an animation with, say, backwards body parts, that's an indication the model has no training data to pull from to help you animate anything useful for your concept, and it's time to improvise.

I have had the best results with Kling AI. The caveat: it has no unlimited option for its service, and iterating within it can get expensive. Runway, Hailuo AI, and Luma have unlimited options (edit: Luma's Ray2 is now competitive with Kling AI). I knew Kling was good with natural motion, and it gave me what I wanted for this fair-use satirical and educational exploration using old fast food mascots: McDonald's "Hamburglar" and Burger King's "King" characters. How can the two interact? Will the AI work with characters who look more like cartoons or toys than people? As you can see, yes it can, and look what it did: facial movements on both characters. I did not expect that. And the lighting: notice how the King's movements cast a shadow onto the Hamburglar. This was not the only generation that did this; out of eight generations, most produced similar results. I just happened to prefer this one.

Once I had my generation, I decided to turn it into a bogus ad for a TV show, similar to the gazillions I worked on in the past. I needed some title art for this fictitious show and wanted to see what Google Labs could do with its ImageFX… Yikes! All I can say right now is "do not sleep on Google". ImageFX is powerful! Probably the… no, no "probably", it is the best. It has the best prompt adherence of any image-generating platform I have used. Try it yourself. Again, going off on a tangent. I asked it to give me some fast food icons for a restaurant called "Burglar King", with a cartoonish burger wearing a crown and a bandit's mask. It delivered, and boy can it iterate, with one or two or however many color options and illustration styles.

I personally love illustrating; give me a pen and paper and I can occupy myself for hours, so I found this to be "scary good" at what it does. For those who make a living illustrating, do not keep your head in the sand. You can still get ahead of this by using it in your own work. These generative "AI" systems are not in and of themselves creative. You are! But they are getting "good enough", and yes, the legality of IP generated with them is still up in the air, on top of it not even being true AI. But as many say, "this is as bad as it will ever be", and it continues to improve. There are no two steps forward and one back with this. It's more like two steps forward, then five, with a little pirouette in the middle, and it's eating your sandwich. Do not put your hopes in any laws or government to protect you from this. Remember, there are other nations out there with little to no interest in our laws or standards protecting IP, and they will keep on chugging along without that legal burden. Our government (I'm assuming you are in the free country of the USA) will want to stay competitive, so make of that what you will.

With all that said, it's still rough around the edges, but every day those edges are smoothing out, and as of today I would say this is a usable platform for rapidly generating consistent creative content.