I have been self-teaching and using Houdini as my primary 3D tool for the past 4-ish months and have learned so, so many things. One area that I wanted to dive deeper into was exploring Houdini’s newer capability to handle the entire character pipeline. With the recent update to their rigging toolset, you can do pretty much everything without leaving the software. So I went through the whole character development process, with every step being entirely procedural! Very exciting, very cool. This was just a first stab at this whole pipeline within Houdini, so I’m excited to iterate in the future and learn from mistakes I made along the way.
Floppy, soft-body character.
Other Computational Stuff
This one I have not yet gotten to since it will take quite a while. I will be turning in the final render as my final documentation.
Right now I am taking a short, project-centered course taught by the animator Christopher Rutledge, who uses Houdini as his main tool for animation. The course goes through the entire character development process specific to Houdini, which covers some things I didn’t know yet, like KineFX, Houdini’s rigging toolset.
He has also used the VR sculpting workflow for character creation.
The awesome thing about Houdini is that I can do all of the normal character development process, but also use the simulation power of Houdini to computationally make the scenes/characters themselves more interesting and fun. For example, the clip below showcases a really charming character design with added physics simulation that makes the movement that much more appealing, funny, and unexpected.
For my final project, I am going to use Houdini to make a very short animated scene that involves a character. I hope to use the VR headset as well to be able to quickly sculpt some 3D characters that I can use in the short. I haven’t completely decided yet if I want to bring my Houdini stuff into Unity to make it an interactive project or simply render it in Houdini (maybe both?). I think the most important thing I want to focus on with this project is just having fun. I’m thinking I’ll end up with something kinda goofy, but we’ll see. At the moment, I want to repurpose an idea I came up with earlier in the semester for this class and make something fun out of it. It will involve a lot of soft-body, springy simulation. I’m not completely sure, though, if I’m set on this idea.
This work is an archival and conceptual project started in 2008 by artist Jon Rafman. These images are all screenshots from Google Street View’s image database, back when it was a new initiative without the kinds of regulations that it has nowadays. At that time it was (and still somewhat is) “A massive, undiscerning machine for image-making whose purpose [was] to simply capture everything, Street View takes photographs without apparent concern for ethics or aesthetics, from a supposedly neutral point of view.” The work meditates on the implications of this kind of automation of image-making at this kind of scale.
I am so glad I came across this project. It really struck me going through all of these images; I just could not stop looking. As a collection for photography’s sake, they work together to really poetically capture subtle moments of life, whether they be ugly, mundane, funny, or really beautiful. But as you look through these photos, it feels really eerie too. It feels like you shouldn’t be looking at a lot of these (I mean… because we shouldn’t) and yet here they are, moments of life captured with no regard for their subjects. Some of these photos made me audibly gasp, like the one of an inmate running or one showcasing what looks like a kidnapping. And then in the same collection you have a guy mooning the camera. Or a set of white laundry billowing in the sun. It’s just strange and so eerie. I really, really love it though.
I think this also holds up so well in our current times with the development of computer vision and artificial intelligence. The technology is certainly developing and yes, the ethical side is too – but definitely not enough considering many of the issues posed in Rafman’s work are still relevant today.
I’ll keep this semi short and also just add a few things Jean didn’t mention in her post.
The first idea, as discussed with Golan, was based on the idea of a space for release via limitless destruction on the internet. It would’ve been similar to a rage room, except because it isn’t real, you could destroy things you wouldn’t normally be able to. It’s also extremely accessible because it’s a public website. However, this ended up just not being feasible within the constraints of a telematic environment, so we decided to go with another idea of making a more subtle, calming ripple environment. We did actually plan to add sound, but just didn’t get to it. The goal was that if just one person clicked, it would sound like a soft water droplet, and as more people joined and started clicking, it would sound like a quiet rainfall. Thus, it would have a few different peaceful states based on collective effort/how many people are using the site at that moment. We also just thought there was something poetic about being able to see the effects of some anonymous person’s actions in a visible ripple. In addition, one of our preliminary ideas included a detailed background, but it diluted the focus of this idea and instead made it more about the cute environment – which is not what we wanted.
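For the sound idea we never got to, one simple way it could work is a pure function that maps the number of people currently clicking to a “rainfall” density and per-droplet volume. This is just a sketch of the concept; the function name and all the numbers here are made up, not from our actual code:

```javascript
// Map concurrent active users to rain parameters.
// One lone clicker ≈ a single soft droplet; more users blend toward a quiet rainfall.
function rainParams(activeUsers) {
  const clamped = Math.max(0, activeUsers);
  return {
    // Cap the rate so it always stays a *quiet* rainfall, never a storm.
    dropletsPerSecond: Math.min(clamped * 1.5, 30),
    // Start soft and ease up toward (but never past) a gentle ceiling.
    dropletVolume: 0.2 + 0.6 * Math.min(clamped / 20, 1),
  };
}
```

With the Web Audio API, each ripple click could then trigger a short, quiet noise burst at `dropletVolume`, scheduled up to `dropletsPerSecond` times a second, so the soundscape shifts between peaceful states as people come and go.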
From Joy Buolamwini’s talk: “1 in 2 adults in the U.S. have their face in facial recognition networks”… a terrifying fact because, as she says, these networks are very often wrong. Misidentifying someone in the context of policing and the justice system takes this fact to an entirely new level of terrifying. There are many people out there who, because they do not know how these systems work (or do, but know that others don’t), take them to be foolproof and factual, using these “facts” to leverage their goals.
In Kyle McDonald’s Appropriating New Technologies: Face as Interface, he describes how “Without any effort, we maintain a massively multidimensional model that can recognize minor variations in shape and color,” going further to reference a theory that says “color vision evolved in apes to help us empathize.” I found this super interesting and read the article that it linked to. The paper, published by a team of California Institute of Technology researchers, “[suggested] that we primates evolved our particular brand of color vision so that we could subtly discriminate slight changes in skin tone due to blushing and blanching.” This is just so funny to me; we are such emotional and empathetic creatures.
I think this may have been my looking outwards from the first ML unit I did… funny how I found myself drawn to it once again. It’s just so good – so fleshy and painterly but also not recognizable at all.
This is a major WIP. I have other homework, so I need to stop for now, but this is not what I really wanted to do with it. I want it to be interactive, but I don’t have time right now, so I am just demonstrating some of its characteristics. It also isn’t supposed to go beyond the canvas, but I’m too tired to fix that bug. That’s that for now.
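Note for future me on the off-canvas bug: the usual fix is to clamp or bounce the position every frame instead of letting it drift. A minimal, generic sketch (the function and its names are hypothetical, not from my actual sketch):

```javascript
// Keep a 1D position inside [min, max]; reflect it back in and flip
// the velocity when it crosses an edge ("bounce"). Call once per axis per frame.
function bounce(pos, vel, min, max) {
  if (pos < min) return { pos: min + (min - pos), vel: -vel };
  if (pos > max) return { pos: max - (pos - max), vel: -vel };
  return { pos, vel };
}
```

In a p5.js draw loop this would run on both x (against the canvas width) and y (against the height) before drawing the shape.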
I find this work of Sofia’s to be fascinating. It feels as if you’re looking at an old image, but what you’re actually looking at is fairly obscure. It’s a sort of uncanny plant life. It’s almost as if you’re looking at a historical image from a different world/universe.