Author: admin

  • Decommissioned Atlas-D ICBM Launch Complex

    Scattered throughout the rolling grasslands of America are many forgotten historic sites from the Cold War. ICBM development during the '50s and '60s moved at a breakneck pace, with rapid design iterations, so these launch facilities were often obsolete by the time they were completed and put into service.

  • Rocky Mountain Arsenal

    Here is a 3D Radiance capture of some structural WWII remains found at the Rocky Mountain Arsenal Wildlife Refuge. The site was used during the war years to create chemical weapons and continued operating into the Cold War until nuclear weapons became the US's primary deterrent. Searching for peacetime uses, the facility manufactured fuel for NASA and leased some of its property to Shell Chemical Company for the production of agricultural chemicals. Eventually the site was demilitarized, cleaned up, and turned into a wildlife refuge.

    It’s a picturesque location close to Denver, and they did a great job restoring how it may have looked prior to the installation. It’s easy to see some bison/buffalo if you are visiting Denver and are looking to do so.

  • Gloriana is still the best song of this millennium

    Sorry, Taylor Swift, but it’s true! “Gloriana” by Quickspace (94–05), a song from their 2000 album “The Death of Quickspace,” is still, and will always be, the best song of this millennium. Pure science went into this determination: not only double-blind and triple-blind but quad-blind studies were conducted, sleeves were rolled up, and dogs barked at the sun. The study was so intense that a once well-known Ivy League school went bankrupt and was erased from history and public memory trying to disprove the research. Having such an old, well-loved, and storied institution redacted from the fabric of time often has unusual consequences; as a result this song is not well known, but it is still the best song of this millennium.

  • David Lynch Interactive: Focus 0.1


    Here is the first iteration of an interactive module using a combination of real content from David Lynch’s movie Mulholland Dr. and generative content. With this I wanted to test the use of transparent videos and generative AI content. I used Google’s Gemini, Luma AI Ray2, and OpenAI’s 4o, plus a decent amount of work in After Effects. I attempted something like this back in 2013 using Flash, but it did not work well; it was clunky and slow and was never used in any work projects. I wish I still had the file.

  • Decommissioned Atlas-D Coffin Launch Bunker

    Somewhere north of Cheyenne, Wyoming, monolithic remains of the Cold War still stand.

  • Quentin Tarantino’s Death Proof Loop.

    The above video loop was generated from a single studio photo from Quentin Tarantino’s 2007 Death Proof.

    The process was not overly complicated. I used Luma Labs to produce the initial good generation, because there you can iterate without worrying about eating credits the way you do in Kling. Once I had a good generation, I used the last frame of that generation as the first frame in a new generation render. From there you can go on for as long as you want. The trouble arrived with the end frame: in Luma you could not use their Ray2 model with an end frame (at least you could not until today), and falling back on their earlier model resulted in a noticeable quality loss. So I moved the final generation’s last frame to Kling and gave it the very first frame of the original generation as its last frame. That lets you make smooth, seamless video loops.
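    The frame hand-off step above (the last frame of one generation becomes the first frame of the next) can be automated locally. Here is a minimal sketch assuming ffmpeg is on your PATH; the generation platforms themselves are web UIs, so only the frame extraction is scripted, and the filenames are hypothetical:

    ```python
    import subprocess

    def last_frame_cmd(video_path: str, out_image: str) -> list[str]:
        """Build an ffmpeg command that grabs the final frame of a clip."""
        return [
            "ffmpeg",
            "-sseof", "-0.1",   # seek to 0.1 s before the end of the input
            "-i", video_path,
            "-frames:v", "1",   # write exactly one video frame
            "-q:v", "1",        # best JPEG quality
            "-y",               # overwrite the output if it exists
            out_image,
        ]

    # Hand the extracted still to the next generation as its first frame:
    # subprocess.run(last_frame_cmd("gen_03.mp4", "gen_03_last.jpg"), check=True)
    ```

    Seeking from the end of the file with `-sseof` avoids decoding the whole clip just to reach the final frame.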

    Then, since this is my little proof-of-concept project, I wanted to have some fun with the layout. I brought it into After Effects, ran some stock filters over the whole thing, and used the bulge filter to give it a more in-your-face kind of feel. Watch the video again. Look at the lines in the road on the right side of the video. The bulge filter should be way more noticeable now.

  • AI/Generative Video Animation Exploration

    Over the past year I have been experimenting with various AI/generative image, video, and 3D workflows. There are currently no “best practices”; it’s the wild west, no seatbelts or handrails, so wear a helmet and bring your own weapons. Recently I started making simple collages of scenes for generative video, with better-than-expected results.

    This might seem like a no-brainer to some who are coming into this cold. Most people, myself included, have been generating imagery from text or image-to-image prompting, iterating in ComfyUI with Flux, Stable Diffusion, and various LoRAs (smaller files that contain more detailed info about a subject), touching it all up in Photoshop, then moving to a third-party platform for animation such as Runway or Kling AI, and Luma is looking promising (edit: there are now open-source options to run on your personal machine, and Luma’s Ray2 is outstanding). Yup, I’m skipping all that and going straight into Photoshop, cutting/masking up a variety of images and collaging them together. I was a fan of the B/W photocopier collage aesthetic often found in punk rock fanzines in the ’90s, and at the time it inspired me to make my own, drastically and forever altering the page folding of any sketchbook I owned from that point on (sticky pages, and not for the usual teen reasons). But it’s not just the similarity with the aesthetic of that era, it’s the energy of it all. This is fun! I can now do things I wanted to do 10 to 20 years ago and never could, whether for lack of time or resources, or because it simply was not possible at the time. I’m going off on a tangent. So back to what is oddly working way better than I imagined.

    Here’s what worked for me. Develop an idea of what the system you are planning to use can do, have a good sense of its limitations, then build your collage. Upload it to an animation platform and do a couple of test generations with your collage at low credit settings. If it looks promising, continue. If you get nothing, or a jumbled mess of an animation (say, backwards body parts), that’s an indication the model has no training data to pull from to help you animate anything useful for your concept, and it’s time to improvise. I have had the best results with Kling AI, with one caveat: it has no unlimited option for its service, and iterating within it can be expensive. Runway, Hailuo AI, and Luma have unlimited options (edit: Luma’s Ray2 is now competitive with Kling AI). I knew Kling was good with natural motion, and it gave me what I wanted for this fair-use satirical and educational exploration using old fast-food mascots: McDonald’s “Hamburglar” and Burger King’s “King” characters. How could the two interact? Would the AI work with characters who look more like cartoons or toys than people? As you can see, yes it can, and look what it did: facial movements on both characters. I did not expect that. And the lighting: notice how the King’s movements cast a shadow onto the Hamburglar. This was not the only generation that did this; out of 8 generations, most produced similar results, I just happened to prefer this one.

    Once I had my generation, I decided to turn it into a bogus ad for a TV show, similar to the gazillions I worked on in the past. I needed some title art for this fictitious show and wanted to see what Google Labs could do with its ImageFX… Yikes! All I can say right now is “do not sleep on Google.” ImageFX is powerful! Probably the… no, drop the “probably,” it is the best. It has the best prompt adherence of any image-generating platform I have used. Try it yourself. Again, going off on a tangent. I asked it to give me some fast-food icons for a restaurant called “Burglar King,” with a cartoonish burger wearing a crown and a bandit’s mask. It delivered, and boy can it iterate, with one or two or however many color options and illustration styles you want. I personally love illustrating; give me a pen and paper and I can occupy myself for hours. So I found this to be “scary good” at what it does. For those who make a living illustrating: do not keep your head in the sand. You can still get ahead of this by using it in your own work. These generative “AI” systems are not in and of themselves creative. You are! But they are getting “good enough,” and yes, the legality of IP generated with them is still up in the air, on top of this not even being true AI. But as many say, “this is as bad as it will ever be,” and it continues to improve. There are no two steps forward and one back with this. It’s more like two steps forward, then five, with a little pirouette in the middle, and it’s eating your sandwich. Do not put your hopes in any laws or government to protect you from this. Remember, there are other nations out there with little to no interest in our laws or standards protecting IP, and they will keep chugging along without that legal burden; our government (I’m assuming you are in free-country USA) will want to stay competitive, so make of that what you will.

  • The Leone Boyz chapter 1: Silent Serenade

    Here is my exploration into AI video. The concept is a less violent spaghetti western. I used Stable Diffusion/Flux, Runway, and Hailuo AI.

  • Manhattan Project National Historical Park

    In Los Alamos, New Mexico, you can find this memorial of Dr. Oppenheimer & Gen. Groves. It’s about a block away from Oppenheimer’s Manhattan Project home. The whole block is maintained by the National Park Service as a National Historical Park.

    I have a strong interest in our atomic history and Cold War so I have visited Los Alamos many times through the years. The Oppenheimer movie has noticeably made the mesa and surrounding area more popular.

  • Wyoming Atlas Rocket Launch Facility

    Did some Gaussian Splatting over the July 4th holiday. With its mix of electric & gas, there is no “range anxiety” with the 2015 Chevy Volt. It had no trouble reaching this abandoned Atlas rocket launch facility in the middle of Wyoming.