For my looking outwards topic, I chose Garfield Kart.
Garfield Kart is a living masterpiece. Appearing on many different platforms, it can be seen as one of the greatest pieces of interactive media known to mankind.
I've spent over 150 hours playing Garfield Kart and I have yet to scratch the surface of the complexity of the mechanics and lore in the GCU.
My favourite installment of Garfield Kart has to be Garfield Kart: Furious Racing; I feel like the sequel brought the game to a whole new level, introducing interesting new mechanics while refreshing the old and outdated ones.
10/10 can’t have lasagna cause I’m lactose intolerant.
A video game that I find inspiring is Outer Wilds. It's definitely one of those games where, if you've heard anything about it, it's probably been from somebody who goes on and on about it being a masterpiece, and I'm definitely in that camp. In short, it's a mystery game on the scale of a solar system, where you have free rein over where you go, and you uncover the central mystery by exploring and at times solving environmental puzzles in the settlements and technological structures of an extinct alien race, all while stuck in a time loop where you die every 22 minutes. It's the only game I can really think of where the "lore" of the game is basically the game itself, and the only piece of media where I've actually cared about truly understanding the mechanics of weird sci-fi alien tech. The mystery is ingeniously crafted, and there were many times when I learned some vital new piece of information, or some obscure mechanic clicked, and it absolutely dumbfounded me and completely recontextualized my understanding of the game world and everything I had done up to that point. It's the type of game that has reframed my understanding of what games can be and how to use games as a medium for storytelling.
What makes the game particularly inspiring to me, besides it being one of my favorite games, is that it had fairly simple and low-scale beginnings. The game's creator, Alex Beachum, created the game's alpha as a master's thesis for USC's Interactive Media and Games Division, and his original motivation was not to create a complex space mystery or to push the boundaries of games as a medium, but simply a desire to fly around in a spaceship in a physics-based solar system. Everything else sprang from that original goal, and I think that stands as a testament that not every great piece of art was planned out meticulously in advance; most of the time great art develops and changes, and sometimes must, over time. From what I can find, the developers behind the game were originally a team of 6, but the team has since grown to 13.
Holly Herndon is one artist I like who uses AI and technology in a particularly well-balanced way. She is a composer, musician, and sound artist who uses an AI model, "Spawn," to generate sound for her music. When you listen to her music, it's not immediately obvious that this is the case; the technology is not right in front of your face. In fact, her music sounds less "electronic" than that of some of her experimental counterparts whose work is not as technology-intensive (at least involving no AI), such as Ryuichi Sakamoto. Compared to other generative music technology, such as OpenAI's Jukebox (which I played with for a while with an unsatisfying level of success), the artist still has great control over her AI model in the creation of her music, which makes it sound human while being surreal and otherworldly.
One of her most recent projects is Holly+ (https://holly.plus/), a digital twin of hers. The audience is welcome to upload audio files of their own to have them processed into her style. It is also worth mentioning that Spawn, the AI model used to create much of her music, is trained on audio segments of her own and her friends'. The model is also surprisingly lightweight compared to Jukebox (https://openai.com/blog/jukebox/), a transformer-based monstrosity that was trained on 1.2 million songs and takes 10 hours to generate a short segment on a Tesla V100. So I also appreciate her work from the technical perspective.
Architecture of Radio by Richard Vijgen is an Augmented Reality app that visualizes "the infosphere": the cell, Wi-Fi, and satellite signals flying invisibly around you. The concept of the project, the medium, and the interaction of use are intrinsically linked and make each other stronger. The concept is to make visible these otherwise invisible signals that allow our technology to work and talk to each other; the medium of using a screen as a window or frame then makes perfect sense as a lens to see the invisible (as a device, the tablet itself must feed off signals both to work and to sense them) and also reinforces the notion of this alternate invisible world; and the interaction encourages exploration, as only a narrow field of view can be "illuminated" at one time. I appreciate this restraint and the unified nature of the project because it almost seems it could not exist another way, and it uses code in a way that is essential but does not call attention to itself or distract from the deeper concept.
The project was made using Three.js and the Ionic Framework. At first I thought it used cell signals actually being received by the tablet itself (which makes it really feel like magic), but it actually just uses the GPS location to pull data on nearby cell towers from OpenCellID and satellite data from JPL. In a site-specific installation of the piece, Vijgen also incorporated wired communication infrastructure. Apart from these tools, he had to incorporate models to calculate the "shapes" of signal radiation based on the distance between the user and the transmitter. In that sense, Vijgen explains in interviews, it is more of a simulation; but I think the magic of it is that it prompts you to think and feel the same things as if you could see the actual radiation, and makes that feeling accessible.
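Vijgen hasn't published his actual model, but the kind of distance-based simulation he describes can be sketched very roughly. Everything below (the function names, the inverse-square falloff, the flat-plane coordinates) is my own assumption for illustration, not his code:

```python
import math

def signal_strength(user, tower, power=1.0):
    """Toy inverse-square falloff between a user and one transmitter.

    `user` and `tower` are (x, y) positions in meters on a local flat
    plane; a real app would first project lat/lon coordinates from GPS
    and a tower database into such a plane.
    """
    dx, dy = tower[0] - user[0], tower[1] - user[1]
    d = math.hypot(dx, dy)
    return power / max(d * d, 1.0)  # clamp to avoid blow-up at d = 0

def field_at(user, towers):
    """Total simulated signal intensity at the user's position."""
    return sum(signal_strength(user, t) for t in towers)

# A tower 10 m away contributes 100x more than one 100 m away.
towers = [(10.0, 0.0), (100.0, 0.0)]
total = field_at((0.0, 0.0), towers)
```

A renderer (Three.js, in Vijgen's case) would then sample a field like this around the camera and draw the "shape" of each signal as geometry.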
There are many precedents for this line of thinking – especially with the accessibility of communications technology in the late 90s, artists like Dunne and Raby began speculating on what it meant to live with the invisible presence of technology in projects like Placebo Project (2001).
This reminds me of a BERG project (https://vimeo.com/7022707) that visualizes the spatial qualities of RFID emitters using nothing more than an RFID emitter and an RFID probe linked to an LED. In a way, I think the RFID project is even stronger in that it uses the latent qualities of the technology to visualize itself; but from what I've seen, it was more of a research exercise for the purpose of design, not an experiential art piece.
The work I selected is Rain Room by Random International, an interactive installation that allows visitors to walk through a downpour without getting wet: motion sensors detect visitors' movements as they navigate through the space. Although I've never gotten the chance to visit this work in person, I still really enjoy it. Walking through rain without getting wet seems to give the visitor a sense of control over the rain, and replicating the experience of walking in the rain in an indoor space can encourage some interesting reflection on our relationship with technology and the environment.
Rain Room is equipped with 3D motion sensors that track movement underneath the water valves. When they sense a person walking inside the piece, the sensors turn off the water valves in the area around that person. This effectively creates a circle with no rainfall centered on that person, which follows them as they move around the piece.
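Random International hasn't published its control code, but the sensor-to-valve logic described above can be sketched in a few lines. The grid dimensions, cell size, radius, and names here are illustrative assumptions, not the installation's real parameters:

```python
def open_valves(grid_w, grid_h, people, radius=1.5, cell=0.5):
    """Return the set of (i, j) valve indices that should stay open.

    `people` is a list of (x, y) positions in meters from the tracking
    system; any valve whose center falls within `radius` meters of a
    person is shut off, carving a dry circle that follows them.
    """
    open_set = set()
    for i in range(grid_w):
        for j in range(grid_h):
            x, y = (i + 0.5) * cell, (j + 0.5) * cell
            if all((x - px) ** 2 + (y - py) ** 2 > radius ** 2
                   for (px, py) in people):
                open_set.add((i, j))
    return open_set

# With one visitor standing in the space, the valves near them close.
dry = open_valves(10, 10, people=[(2.5, 2.5)])
```

Re-running this against fresh sensor readings every frame is enough to make the dry circle track a moving visitor.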
According to the artists, "the idea originated in a three-second spark that came up during a discussion where we had looked at a (too) complicated process of printing information with water onto very large hydrochromic surfaces. It seemed that we somehow shared a curiosity to see how it would feel to be immersed in a rainstorm that wouldn't physically affect you. So, we just knew, we had to do this." It then took four years of research and development, plus support from Stuart and Maxine Frankel and their Art Foundation, to develop and build the first Rain Room, which was shown at the Barbican in 2012.
I am intrigued by Yayoi Kusama's series of Infinity Mirror Rooms, especially The Souls of Millions of Light Years Away (2013): installations held in rooms surrounded by mirrors. The mirrors create endless reflections, producing an immersive experience for the audience. The Souls of Millions of Light Years Away makes use of hundreds of small LED lights, which together produce a dotted pattern in a seemingly infinite space.
I admire this work for its heightened sensory experience and its ability to foster emotional connections. On the sensory side, I love how Kusama coordinated the rhythmic system of LED lights with the series of mirrors to construct visual depth. I believe the synchronized visual effect builds an isolated environment for the audience to contemplate their existence. On the second aspect, I am drawn to how Kusama includes a part of herself: her major inspiration came from her childhood experience of striking hallucinations of fields of patterns and dots. She began drawing her hallucinations as a way of managing her mental illness and controlling her anxiety, and that became a foundation for her repetitive elements. Moreover, when the audience enters the room, they are immediately faced with a Droste effect. This effect is widespread on online platforms, which leads the audience to capture themselves in the infinity room with their phones. By capturing an infinite version of themselves spread throughout the room, the audience can visually observe their size in proportion to the infinite space and develop their own connection to the artwork.
In terms of the creation of the artwork, the series of LED lights hanging from the ceiling has been programmed to turn on and off in rhythmic patterns. It begins with subtle changes producing a soft glow, moving into more rapid changes that form a staccato effect; at one moment, the audience is left in complete darkness. Given how the rhythmic pattern of the lights changes, I assume code was used in the creation of this work, but I wasn't able to find clear information about how it was made.
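Since the actual control code isn't public, here is a purely speculative sketch of how such a rhythm could be programmed: a brightness curve for one LED that fades softly, blinks in a staccato, and then goes dark. The timings and the function itself are my invention, not Kusama's:

```python
import math

def brightness(t, cycle=12.0):
    """Toy brightness curve for one LED over a repeating cycle (seconds).

    0-6 s: a slow sine fade in and out (soft glow); 6-10 s: rapid
    hard on/off blinking (staccato); 10-12 s: complete darkness,
    loosely mimicking the phases described above.
    """
    t = t % cycle
    if t < 6.0:
        # soft glow: one smooth fade up and back down
        return 0.5 * (1 - math.cos(2 * math.pi * t / 6.0))
    elif t < 10.0:
        # staccato: 5 Hz square-wave blinking
        return 1.0 if int(t * 10) % 2 == 0 else 0.0
    else:
        # total darkness before the cycle repeats
        return 0.0
```

A driver loop would sample this curve (with a per-LED phase offset to desynchronize the dots) and write the value to each light.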
This is a "procedurally-generated vector-format infinitely-scrolling Chinese landscape for the browser." I've always been really drawn to this project of LingDong's. This style of painting is one of my favorites, and for him to be able to recreate it so beautifully using code is just amazing. He captures the nuance of the style so well that, from far away, it would be hard to tell it was digital at all. His work tends to make you forget the technicality and the technology that was used to make it. I think that is why I love this project (and his work in general) so much. While I don't see myself necessarily doing this exact type of work in the future, I am very inspired by the attention to detail and the way he so effortlessly immerses the viewer… You just get lost in the experience and the beauty. That is the type of work I want to create.
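LingDong's real pipeline is far more involved (noise-driven ridgelines rendered as layered, brush-textured vector strokes), but the kernel of procedural terrain can be sketched with layered value noise. All of the code below is my own toy illustration, not his:

```python
import random

def ridge(width, seed=0, octaves=4):
    """A toy 1-D value-noise mountain silhouette: heights in [0, 1].

    Layers several octaves of smoothly interpolated random values,
    halving the amplitude and period each time, so large hills carry
    fine detail -- the basic idea behind natural-looking ridgelines.
    """
    rng = random.Random(seed)
    heights = [0.0] * width
    amp, period = 0.5, width
    for _ in range(octaves):
        # random control points ("knots") spaced one period apart
        knots = [rng.random()
                 for _ in range(width // max(int(period), 1) + 2)]
        for x in range(width):
            t = x / period
            i = int(t)
            f = t - i
            f = f * f * (3 - 2 * f)  # smoothstep interpolation
            heights[x] += amp * (knots[i] * (1 - f) + knots[i + 1] * f)
        amp, period = amp / 2, period / 2
    return heights
```

Drawing each `ridge()` as a filled silhouette, back to front with lighter washes in the distance, already starts to read as an ink landscape; the artistry is in everything layered on top of that.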
The Legend of Zelda: Breath of the Wild (2017) is a 3D open-world RPG developed by Nintendo as part of the Legend of Zelda series. The player plays as Link, a swordsman exploring the kingdom of Hyrule, which lies in ruins 100 years after a catastrophe. I love this game for its expansive and immersive landscapes, and for how its story and worldbuilding are grounded in player exploration. I think every aspect of the game, from its semi-realistic art style and realistic physics to its sound design, contributes to creating a world that feels entirely real and inhabited. One of the most interesting parts of the game to me is its strong sense of the passage of time; the regions the player explores feel like they have existed for thousands of years, littered as they are with ruins and giant skeletons.

There were about 300 developers involved in the game, and it took around five years to develop (2012-2017). The developers used an in-house game engine and modified an existing physics engine (Havok) for the game. The game's director stated that it was inspired by Minecraft and Terraria, and the art style was inspired by plein air / gouache painting, as well as Studio Ghibli films. There is a sequel set for release (hopefully) sometime this year.
The piece that I selected is The Event of a Thread by Ann Hamilton. I enjoy that there is more than one way to interact with this piece, and multiple dimensions within each manner of interaction. You can choose to just sit on one of the swings, or move around on one; alternatively, you can sit under the curtain and watch it shift above you; or, as a third option, you can take a view from further back and watch the overall shifting of the piece and how it relates to the motion of the people on the swings. This is what I feel interactive pieces should be all about. You should be able to see how external stimuli affect the piece, as this works as an implicit user's manual that allows people who interact with the piece to manipulate it in the ways they would like to. Without that aspect, the piece may as well be moving without any input, since the point of interactivity is being able to see what effects your input has on the piece. The Event of a Thread captures this perfectly: you can trace any one of the swings to its terminus at the curtain and see how the motion of the swing affects the curtain locally.
This piece takes up a total of 55,000 square feet between the curtain and the swings!
The artist then applied the same algorithm to many different datasets and got different results, but he didn't cite where the data came from.
It looks like, after the genetic algorithm, the artist tried style transfer to get the following results, but it is unclear which algorithms were used.
From the images themselves, I can tell that they are generated by CNNs, due to certain texture qualities. This seems to be resolved by the higher processing power of the graphics card in the later generations of the fish. It makes me wonder whether, instead of end-to-end generation, it would be better to have the network generate a model and render it through traditional computer graphics; that way, pixel quality could be ensured.
From these artworks and failures, I see a general theme and interest from the artist. The game Cyberpunk 2077 and the collection Bored Ape Yacht Club were mentioned in his documentation, so I suspect they influenced this collection.
Documentation from the artist:
The artist also tried to sell these artworks on OpenSea.