One of the things I am watching as technology and interactive design advance is the influence they may have on clothing. Back in 2015, there was an article about a company, Print All Over Me, that was trying to give programmers a creative outlet and prompt people outside creative fields to explore that side of things.
They push for collaboration, and the company's partnerships with the Processing Foundation, SoSoLimited, and LIA all extend this customization in hopes of giving creators and consumers a new design platform. When the article was released in 2015, the feature of uploading your own code and having it printed wasn't available yet; instead, the system was a bit more primitive: you could customize the clothing via keywords, number keys, and so on, and the code would search the internet for a fitting image.
Although I am not sure how long this whole project has taken, looking at their website in 2017, they seem rather successful and have honed in on their niche pretty well. It will be interesting to see how much further this can be pushed as technology advances.
Droning Like a Ur, by Ian Cheng, is a live simulation with an infinite duration. Cheng works with a video game engine that continually generates new combinations which allow the simulations to evolve.
This means that his simulations make their own decisions based on Cheng's programming. These actions cannot be foreseen and are not completely predetermined. Cheng says that his live simulations are like video games that play themselves.
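To make that idea concrete, here is a minimal sketch of what a self-playing simulation can look like in code. This is purely my own illustration of the general pattern, not Cheng's actual engine (he works in a video game engine): the rules are authored, but random drives make every run evolve differently.

```python
import random

class Agent:
    def __init__(self, name):
        self.name = name
        self.energy = random.uniform(0.5, 1.0)

    def act(self, world):
        # The rules are fixed by the author, but the outcome of any run is not.
        choice = random.choice(["wander", "rest", "interact"])
        if choice == "wander":
            self.energy -= 0.1
        elif choice == "rest":
            self.energy += 0.05
        elif choice == "interact":
            other = random.choice(world)
            other.energy -= 0.05  # interactions ripple outward unpredictably
        return choice

world = [Agent(f"creature-{i}") for i in range(3)]
for step in range(5):  # in a live simulation, this loop would never end
    for agent in world:
        print(step, agent.name, agent.act(world))
```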
I love the way the creatures and environments in his work maintain ambiguous, strange shapes while remaining recognizable. As long as characters keep a mostly “human” appearance, we still identify them as such, even if they are unlike the humans in our world.
At the beginning of this video, there are strange squishing sounds as well as a bug-like buzzing. Coupled with the imagery of bodies twisting around, these sounds create an uneasy tension. This is one of the features of Cheng’s work that I love: he generates worlds with depth and spirit. Art does not have to be lifelike to achieve this.
While I admire this piece, I believe it could have been more effective if the dialogue in the video were less understandable and less recognizably human. The voice seemed to contradict the unpredictability of Cheng’s simulations; it felt too absolute, even though it was not fully comprehensible.
I fell in love with a company this summer called CW&T. They caught my attention with a pen they designed that blew my mind with its precision machining and how visually stunning it was. After seeing this first product, I looked at their other work and found a project that involved both coding and engineering. Simply put, it is a counter: a button and a display that records every press up to 999,999 and then stops. Each element of the construction is rated to last just that long, and the code itself has a self-destruct function set to run after the 999,999th press.
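The logic is simple enough to sketch. This is just my reading of the product description, with names of my own invention, not CW&T's actual firmware:

```python
MAX_COUNT = 999_999

class Counter:
    def __init__(self):
        self.count = 0
        self.alive = True

    def press(self):
        if not self.alive:
            return  # the device has "self-destructed": it ignores all input
        self.count += 1
        if self.count >= MAX_COUNT:
            self.alive = False  # the final press permanently disables it

    def display(self):
        return f"{self.count:06d}"  # six digits, matching the physical display

c = Counter()
c.press()
print(c.display())  # 000001
```

What I find compelling is that the hardware and the software are rated to die at the same moment: the end of the object's life is part of its design.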
Imaging Systems Lab, Rochester Institute of Technology, Summer 2015
Yiquin Cui and Kevin Farnand
One of the most notable projects I have had the pleasure to see developed was a 3D camera built from Raspberry Pis. The project’s goal was to open up new possibilities for the Raspberry Pi and make the first “DIY,” or “do it yourself,” 3D camera. The group was composed of three Rochester Institute of Technology students and two high school interns. They had six weeks to make a 3D camera out of two Raspberry Pis, lenses, and wires, and to write their own code. Utilizing a combination of JavaScript and Python, they were able to create the software that controlled the commercial hardware.
Inspired by the numerous “DIY” projects that have been created utilizing the Raspberry Pi, they were determined to find a low-cost and simple solution to create a camera that, when viewed with 3D glasses, would produce an image that contains depth. I really admire how determined their team was, and how they overcame all the challenges and roadblocks that crossed their path. It still amazes me how accessible technology is with coding knowledge and how there are still ideas out there that have not yet been created.
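For a sense of how two flat captures can produce depth, here is a hedged Python sketch of merging a left and right photo into a red/cyan anaglyph for 3D glasses. The file names are placeholders, and the team's actual JavaScript/Python pipeline may well have worked differently:

```python
from PIL import Image

left = Image.open("left_pi.jpg").convert("RGB")    # capture from Pi #1
right = Image.open("right_pi.jpg").convert("RGB")  # capture from Pi #2

r, _, _ = left.split()   # red channel comes from the left eye's view
_, g, b = right.split()  # green and blue channels from the right eye's view
anaglyph = Image.merge("RGB", (r, g, b))
anaglyph.save("anaglyph.jpg")  # view with red/cyan glasses to see depth
```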
In early 2014, James Murphy (best known as the founder and leader of the band LCD Soundsystem, and co-founder of DFA Records) created about 400 hours of sound using data from tennis matches. He teamed up with members of IBM, mainly developer Patrick Gunderson, to convert data from US Open tennis matches into music. They used a unique algorithm to synthesize the movements of the tennis players through the IBM cloud. Following this process, James Murphy created an album of 12 remixed tracks from the original sounds that were generated. The results are hypnotic, alluring electronic music.
IBM was originally gathering and analyzing data to “figure out what makes the athletes great.” They eventually invited James Murphy to put a spin on the original research by turning the data into music. Since Murphy is used to working physically, with instruments and tools, and has no experience with code, developers like Patrick Gunderson built an interface that presented the necessary tools more tangibly; it looked more like GarageBand, with dials and switches. This allowed Murphy to work with Gunderson and the other programmers more easily, and ideas could be translated more smoothly.
James Murphy likely derived inspiration for this project from his own musical creations, which are often full of unique sounds inspired by simple sounds he has heard in his lifetime. In one interview, he mentions being fascinated by the whirring sound of a running refrigerator. This project is an example of how odd sources or seemingly simple things can be turned into works of art.
This certainly offers opportunities for artists to derive their work from unusual sources. Even the act of taking raw data and converting it to audio or visuals through an algorithm is a process shaped by where the data comes from and what it is being turned into.
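As a tiny illustration of that conversion step, here is a minimal sonification sketch. It is my own, not IBM's algorithm: a stream of match statistics is linearly mapped onto MIDI-style pitches so the shape of the data becomes the shape of a melody. The sample data is hypothetical:

```python
def sonify(values, low_note=48, high_note=84):
    """Linearly map each data point into a pitch range."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid dividing by zero for constant data
    return [round(low_note + (v - lo) / span * (high_note - low_note))
            for v in values]

rally_lengths = [3, 7, 12, 5, 9, 21, 4]  # hypothetical per-point data
print(sonify(rally_lengths))  # one pitch per data point, ready for a synth
```

Change the input source and the mapping, and the same few lines would produce entirely different music, which is exactly the dependency described above.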
If you like games, you’ve probably heard of Firewatch, an award-winning narrative exploration game (a walking simulator to some) made by Campo Santo. It is heavily narrative-based; the player plays as Henry, a fire lookout in the Shoshone National Forest. The game highlights feelings of isolation and human emotional connection through Henry’s (your) conversations with his supervisor, Delilah. These conversations happen as you traverse the open world and are carried out over a handheld radio. Other than that disembodied chatter, you are completely alone.
Firewatch was made by ten people using Unity 4.5 as the game engine. They faced several challenges, being a relatively large team for what Unity provided at the time. Two of those challenges were building a contiguous open world and letting multiple people work on the same scene simultaneously. For the first, they implemented door-and-portal scene streaming, using trigger volumes to load and unload assets as the player enters and leaves an area. For the second, they used plugins that separated and recompiled the scene by discipline: for instance, one person could place trees while another edited post-processing, and the plugin would combine the two without destroying either person’s changes. This, on top of custom tree shaders, custom atmospheric fog that changes color over distance, and their custom skybox generator, pushed the functional and artistic capabilities of Unity well beyond its generic form.
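Here is an engine-agnostic sketch of the trigger-volume streaming pattern, in Python for readability. It is my reconstruction of the general technique, not Campo Santo's Unity code; the bounds are simplified to one dimension:

```python
class TriggerVolume:
    def __init__(self, bounds, region):
        self.bounds = bounds   # (xmin, xmax) for this 1D sketch
        self.region = region
        self.player_inside = False

    def update(self, player_x):
        inside = self.bounds[0] <= player_x <= self.bounds[1]
        if inside and not self.player_inside:
            print(f"loading {self.region}")    # an async load in a real engine
        elif not inside and self.player_inside:
            print(f"unloading {self.region}")  # free memory behind the player
        self.player_inside = inside

# Overlapping volumes mean the next region is already loaded before
# the previous one unloads, so the world feels contiguous.
volumes = [TriggerVolume((0, 50), "meadow"), TriggerVolume((40, 100), "cave")]
for x in [10, 45, 90]:  # the player walking through the world
    for v in volumes:
        v.update(x)
```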
Artistically, Firewatch focuses on strong colors and silhouettes, balancing realism and stylization to convince the player and immerse them in the narrative.
I came across Tabletop Whale, a website by biology PhD student Eleanor Lutz, who spends her time creating science infographics. Although she makes most of her infographics in Photoshop and Illustrator, some of her blog posts are GIFs or animated, interactive explanations of her topic of the month.
She first researches a theme or topic; then, once she is sure she understands it well enough to teach others, she goes ahead and designs the infographic. Her main purpose is to inform. Eleanor is inspired by the idea that such illustrations can surprise people and spark interest in topics they may not fully understand. This kind of project shows that some ideas are best presented through diagrams, illustrations, or moving pictures. What I appreciate about her approach is that she uses different media for different concepts: computational 3D modeling, GIFs assembled from multiple Photoshop image frames, and Python code. It shows that today and in the future, complex information can be visualized rather than only read as words in textbooks.
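The frame-to-GIF step she describes is quite approachable in Python. Here is a minimal sketch using the Pillow library; the file names are placeholders, and Lutz's own pipeline may differ:

```python
from PIL import Image

# Load the individual frames exported from Photoshop (hypothetical names).
frames = [Image.open(f"frame_{i:02d}.png") for i in range(12)]

frames[0].save(
    "infographic.gif",
    save_all=True,            # write every frame, not just the first
    append_images=frames[1:],
    duration=80,              # milliseconds per frame
    loop=0,                   # loop forever
)
```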
Game name: Framed
Developer: Loveshack
Publisher: Noodlecake Studios Inc
The trailer video of Framed
Framed is a mobile game where you rearrange the panels of an animated comic book to change the outcome of the story. It was developed by the Australian studio Loveshack, which was a three-member team at the time (there are now four members). The unique narrative-based puzzle mechanic, the outstanding art style, and the seamless animation impressed me a lot. Framed does a great job of combining interaction and storytelling, letting players actually affect the story throughout instead of just watching as viewers. Unlike the usual design of moving the protagonist or objects within a scene, the player changes the order of the comic panels to change the story.
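A toy sketch of the core mechanic, purely my own illustration and not Loveshack's code: the story is simply the panels played in their current order, so rearranging panels rearranges cause and effect.

```python
panels = ["climb_fire_escape", "guard_looks_up", "hero_hides"]

def play(panel_order):
    # The hero escapes only if they hide before the guard looks up.
    caught = (panel_order.index("guard_looks_up")
              < panel_order.index("hero_hides"))
    return "caught!" if caught else "clean getaway"

print(play(panels))  # caught!
print(play(["hero_hides", "guard_looks_up", "climb_fire_escape"]))  # clean getaway
```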
From the art side, this game has a polished silhouette comic style. The comic style matches the game mechanic very well and gives the player an intuitive form of indirect control. Beyond the visuals, the music also does a good job of pulling the player into the story. There is no dialogue or traditional sound effect in the game, which is why the background music becomes so important. As the story goes on, the player will find that the three characters each have their own theme. Moreover, the music has different layers: while the player is arranging panels, meaning the story is paused, the background music is a basic accompaniment; when the play button is pressed to start the story, the theme melody joins in. This design helps indicate the pace and emotion of the story, creating an immersive atmosphere.
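That layered structure is a standard adaptive-audio pattern, and it can be sketched in a few lines. Again, this is my own illustration of the idea, not Framed's audio code:

```python
class AdaptiveMusic:
    def __init__(self):
        self.story_playing = False

    def active_layers(self):
        layers = ["accompaniment"]  # always on, even while arranging panels
        if self.story_playing:
            layers.append("character_theme")  # joins when play is pressed
        return layers

music = AdaptiveMusic()
print(music.active_layers())  # ['accompaniment']
music.story_playing = True
print(music.active_layers())  # ['accompaniment', 'character_theme']
```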
When Loveshack created Framed, the team had three members: Joshua Boggs, Ollie Browne, and Adrian Moore. Joshua was responsible for programming and design, Ollie was the artist and a designer, and Adrian handled music and design. The studio spent about two years developing the game. The rough idea for Framed was born when Joshua read Understanding Comics by Scott McCloud and saw that the position of the gaps between panels in a comic can change the meaning of the story. He immediately realized it could be a great game mechanic.
So far, Framed has been a great success for Loveshack: it has received over 30 awards and accolades, and the studio released a sequel, Framed 2, in July 2017.