I actually remember seeing this project last year, and I think it’s genius. Coming from the perspective of a designer, I’m interested in the kinds of interactions that Deep Learning makes possible. After doing a little digging on Cyril Diagne, I realized they are also the person behind cleanup.pictures, which has the same kind of magic. It’s such a simple interaction from the user’s perspective, and so intuitively what we’d want to do with technology: drag something we see through our camera in the real world directly into screen world. To me, a project like this seems almost obvious after the fact, but it involves noticing the moments of frustration many people (designers, in this case) have with technology but accept and put up with, and taking a step back to ask whether there is actually a way around them.
I found this project interesting because it gathers pieces of information from thousands of digital images and generates a cohesive piece. I also like that it is interactive: the original images appear when a user clicks on a certain part of the image. I would personally like to explore ways to receive user input and derive information from that input to generate an interactive work.
This project was commissioned by Google and created by the Google Brand Studio in 2018. It uses neural networks to identify real-life versions of emojis captured through one’s phone camera.
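The core game loop can be sketched roughly: an image classifier predicts object labels from the camera feed, and the game checks whether a prediction matches the emoji it asked the player to find. Below is a minimal, hypothetical sketch of that matching step; the label names, emoji mapping, and confidence threshold are illustrative assumptions, not the project’s actual vocabulary or code.

```python
# Hypothetical sketch of the Emoji Scavenger Hunt matching step.
# The labels, emoji mapping, and 0.6 threshold are made up for illustration.

EMOJI_TARGETS = {
    "dog": "🐶",
    "coffee": "☕",
    "shoe": "👟",
}

def check_prediction(target: str, predictions: list[tuple[str, float]],
                     threshold: float = 0.6) -> bool:
    """Return True if the classifier saw the target object confidently enough."""
    return any(label == target and score >= threshold
               for label, score in predictions)

# Example: the game asks for 🐶, and the classifier scores the camera frame.
preds = [("dog", 0.82), ("cat", 0.10)]
print(check_prediction("dog", preds))  # True: a real-life dog "matches" the emoji
```

The inversion discussed below lives in exactly this lookup: the emoji is the fixed target, and the real-world object is merely evidence for it.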
This project truly baffled me because of its implications. For people who were already adults at the conception of the emoji, or who spent their later developmental years growing up with it, it is taken for granted that emojis are cartoonish representations of real-life objects. For example, the usual thought process is that there is a dog emoji which represents a real-life thing: a dog. This project inverts that logic: there is a real-life version of the dog emoji, namely a dog. The thing that is considered first is flipped, which subconsciously gives more weight to the emoji than to the object it represents. The “real” thing is no longer the dog, but the emoji.
This writing is just an extremely deep dive into an objectively shallow thing, but regardless there is more to say. As mentioned above, current adults and older adolescents accept that emojis are representations of real-life things. That is not necessarily the case for younger children, who have seen or experienced more through the phone than in real life, especially because of the pandemic. Those who are around 0–7 right now have likely grown up more online than in person, and have therefore seen more emojis than the “real” versions of emojis. It is likely that they consider the emoji more “real” than its “real” counterpart: they consider the emoji first and the object second, so their logic runs backwards with respect to ours (the adults’ and older adolescents’).
This project, the Emoji Scavenger Hunt, forces this backwards perspective onto its user. While it may initially seem like a harmless, fun game of I Spy, it is also, on some level, a commentary on the difference in perspective between the older and newer generations with respect to technology.
This piece really caught my attention because I took a high-school class on creating logos and saw how many iterations go into such simple shapes and typography. The piece takes a hand-drawn image and, using pix2pix, smooths and subtly changes it into a logo. I feel like a future version of this technology could let graphic designers prototype logos much more rapidly.
They call this style “Augmented Hand,” which I thought was a very interesting perspective on AI drawing. This was a two-year project in which the artist attempted to enhance their own traditional art skills and tools for large-scale gallery paintings, using tools like pix2pix, Photoshop, and Python to bring these creations to life. I really like the lively colours and shapes; they all seem to have a geometric aspect to them that I find very pleasing.
The project that caught my attention was Font Map by Kevin Ho. The project uses machine learning to find similarities between fonts and map them in relation to each other on a plane. This interested me because I spend a lot of time with typefaces, and being able to see visually which fonts are similar would honestly be a very useful tool for designing things. The tool not only makes you aware of fonts that are similar to a typeface you like, but also brings in an aspect of exploration.
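The “map fonts onto a plane” idea can be sketched in miniature: represent each font as a high-dimensional feature vector, then project the vectors down to 2D so that similar fonts land near each other. The sketch below uses random stand-in vectors and a plain PCA projection via SVD; the real Font Map learns its features from glyph images and may use a different projection method, so everything here is an illustrative assumption.

```python
import numpy as np

# Toy sketch of a font-similarity map: each font gets a feature vector
# (random stand-ins here; Font Map learns real ones from glyph images),
# then the vectors are projected to 2D so similar fonts sit close together.
# PCA via SVD is used for simplicity; the actual project may use another
# dimensionality-reduction technique.

rng = np.random.default_rng(0)
fonts = ["Serif A", "Serif B", "Sans A", "Sans B"]
features = rng.normal(size=(4, 64))                      # 64-dim vector per font
features[1] = features[0] + 0.05 * rng.normal(size=64)   # make Serif B ~ Serif A

centered = features - features.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ vt[:2].T                             # 2D map coordinates

for name, (x, y) in zip(fonts, coords):
    print(f"{name}: ({x:+.2f}, {y:+.2f})")
```

Because Serif B was built as a small perturbation of Serif A, the two end up neighbors on the 2D map, which is exactly the exploration affordance the project offers: look near a typeface you like.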
Refik Anadol explores the mind of machines by representing memories of New York City in his work Machine Hallucinations. The work is a 30-minute piece of experimental cinema that showcases the memories of New York by processing hundreds of millions of images. I found the idea of a machine hallucinating incredibly interesting because the visuals reflect how we consciously and unconsciously perceive the city we live in. I am enchanted by Anadol’s manipulation of traditional cinematic storytelling techniques to present the memories of New York through a machine that has the “capacity” to dream. It awes me that data can produce such stunning visuals and immersive experiences, and that machine learning can use existing information to “dream” about possible futures. I like how suggestive the visuals are: seemingly familiar yet distant, since they represent no real place, which lets them portray something nostalgic, like a memory.