tale-LookingOutwards04

I really liked Draw to Art by Google Creative Lab, Google Arts & Culture Lab, and IYOIYO. Draw to Art is a program that uses machine learning to match a user’s input drawing to drawings, paintings, sculptures, and many other artworks found in museums around the world.

Not only did I find this educational and informative, I also thought having this interactive program in museums would enhance the museum experience by making the visit more enjoyable (if the user gets the opportunity to visit in person). It’s definitely a more interesting way to learn about another piece of art in a museum somewhere in the world. If the data the program was trained on were limited to pieces from one museum, it could also turn into a game of finding the artwork the program matched, providing a more memorable experience at the museum. There are so many possible positive experiences this program could provide, so I really like the concept.

tale-mask

A little different from what I expected, but I’m satisfied with the amount of new things I’ve learned and tried.

Before motion (mouth open):

After motion (mouth open):

Original render image:

(Somehow one of the texture files looked different in Lens Studio than in Blender…)

tale-facereadings

  • Clearview.ai

I didn’t know about Clearview.ai until I watched “Face Recognition” from Last Week Tonight with John Oliver. I was extremely surprised that a great number of people aren’t aware of the critical problems with these face recognition programs and with accessing/collecting data from the internet. Although I had thought about the possibility that people I don’t know might access my photos on my SNS page, for example, and use them for purposes I haven’t consented to, knowing that there is a company that actively collects and analyzes photos from the internet really concerns me. I think I’ll be thinking twice before I upload anything to my SNS…

  • Inaccuracy in face recognition leading to social justice issues

From Joy Buolamwini’s TED Talk, I learned that inaccuracy in face recognition software could lead to severe consequences. I too remember laughing at inaccurately tagged names on social media, but just as Joy Buolamwini said, it is no longer something trivial we can laugh over when it comes to crimes and suspects. People could be blamed for something they weren’t even aware of, at the most unexpected moments. I strongly agree that face recognition failing to recognize the faces of people of color reinforces the pre-existing racism in the country, and that this problem should be brought to the surface even more and fixed as soon as possible.

tale-mobiletelematic

Shake2Break 

( code | app )

Shake2Break is a quick, competitive phone-shaking game that can be used to settle a minor dispute by deciding the rank/order of people. (2 players required to play)

Project Reflection:

This project was a nice opportunity for me to learn how to handle client and server data in server.js, as well as how to utilize sensor data from a mobile device. Although I was initially intimidated by the idea of using server.js and mobile sensor data (as I’d never used nor seen code for either of them before), I found implementing server.js and manipulating mobile sensor data very interesting and fun to play with. Since it was my first time using both, I wanted to begin with a project that is simple yet to the point: an active, synchronous shaking game. The server requires two players to be synchronously present to begin the game. Once there are two players, the competition begins: each player shakes their mobile device back and forth to engage the accelerometer inside the device, specifically the accelerometer data along the z-axis.
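The two-player pairing rule could be sketched roughly as follows. This is a plain-JavaScript sketch of the lobby logic only; all function and field names here are hypothetical, and the actual project implements this inside server.js with socket events:

```javascript
// Minimal sketch of the "wait until two players are present" lobby logic.
// Hypothetical names; the real version reacts to socket connect events.
function createLobby() {
  return { players: [], started: false };
}

// A player tries to join; the game starts once exactly two are present.
// Returns false if the lobby is full or the game has already started.
function joinLobby(lobby, playerId) {
  if (lobby.started || lobby.players.length >= 2) {
    return false;
  }
  lobby.players.push(playerId);
  if (lobby.players.length === 2) {
    lobby.started = true; // both players present: begin the competition
  }
  return true;
}
```

In a socket-based server, `joinLobby` would be called from the connection handler, and setting `started` would trigger a broadcast telling both clients to begin.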

Demo Video:

Demo Video with Animation Intro:

Game Start:

Each shake is counted only when the player shakes the phone “hard enough” (i.e. the acceleration along the z-axis is greater than 70). The circle’s radius is drawn with the intention of helping the player visualize the acceleration of the phone.

There are 10 rounds of 10 counts to fulfill (hence a total of 100 shakes), and the player’s progress is represented both numerically and visually (by filling up the square).
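The shake rule and round progress described above could be sketched like this. The threshold of 70 and the 10×10 structure come from the project, but this plain-JavaScript version and all of its names are my own invention, not the project’s code:

```javascript
// Sketch of the shake counter: a shake registers only when the z-axis
// acceleration exceeds 70, and progress runs through 10 rounds of 10
// counts (100 shakes total).
const SHAKE_THRESHOLD = 70;
const COUNTS_PER_ROUND = 10;
const TOTAL_ROUNDS = 10;

function createProgress() {
  return { shakes: 0, finished: false };
}

// Called on every accelerometer reading; returns true if a shake counted.
function registerReading(progress, accelZ) {
  if (progress.finished || accelZ <= SHAKE_THRESHOLD) return false;
  progress.shakes += 1;
  if (progress.shakes >= COUNTS_PER_ROUND * TOTAL_ROUNDS) {
    progress.finished = true; // reached 100 shakes: this player is done
  }
  return true;
}

// Current round (1..10) and count within it, for the numeric display.
function roundInfo(progress) {
  const round = Math.min(
    TOTAL_ROUNDS,
    Math.floor(progress.shakes / COUNTS_PER_ROUND) + 1
  );
  return { round, countInRound: progress.shakes % COUNTS_PER_ROUND };
}
```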

As a player, you can see both your own progress (indicated as “ME” in the top left corner) and your opponent’s.

Mid-game:

(Shaking each phone very hard…)

Game End:

Once a player reaches 100 shakes, the rank is assigned and shown based on who reached 100 counts first.

Sketches:

^(initial ideation of the project)

^(project sketch)

tale-SoliSandbox

New Media Experience: Virtual Squash

( code | app )

New Media Experience: Virtual Squash is an app that recreates the experience of practicing squash in a squash court using the Soli sensor on the Google Pixel 4.

This app was made to motivate myself to work out in the COVID-19 era.

Overview:

I made a virtual squash game that can be played with the Soli sensor. Left and right swipes mimic squash swings, and a tap reclaims the squash ball. Every time the squash ball hits the front wall, the number of hits in the top left corner increments by one. Similarly, every time the ball is reclaimed, the number of resets below the number of hits increments by one.
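The hit/reset counters described here might look roughly like this. This is a plain-JavaScript sketch of the scoring state only; Soli’s actual gesture callbacks and the ball physics are abstracted away, and every name here is hypothetical:

```javascript
// Sketch of the squash scoring state: a front-wall hit increments the
// hit counter, a tap reclaims the ball and increments the reset counter.
function createScore() {
  return { hits: 0, resets: 0, ballInPlay: true };
}

// Called when the ball (while in play) reaches the front wall.
function onFrontWallHit(score) {
  if (score.ballInPlay) score.hits += 1;
}

// Called when the player misses the ball; play stops until a tap.
function onBallLost(score) {
  score.ballInPlay = false;
}

// A tap reclaims the ball; resets are displayed below the hit count.
function onTap(score) {
  score.ballInPlay = true;
  score.resets += 1;
}
```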

Reflection:

Through this project, I learned how to create space with WEBGL in p5.js, manipulate the camera function to change the viewpoint, and incorporate a sound file that plays every time a certain action takes place.

Quick Demo Video:

~35-minute-long video of playing squash with Soli:

 

* I used an aluminum bottle so that Soli detects my motion better. As you can see throughout the video, I noticed Soli recognized almost every move I made when the bottle was in a certain position and moving across the screen at a certain speed, so I tried to swipe with that fixed position & speed.

Fast-forwarded version of the video above:

tale-soli-checkin

Screen Recording

(using left/right arrow keys for swipes, enter key for tap)

Google Pixel 4 w. Soli sensor

Concept Decision:

I decided to create an app that helps me “work out” since I haven’t done any exercise for a while due to Covid 🙁

I’ve been playing squash for years and I really love it, but I haven’t been able to play since March and I really miss being on court ( big sad 🙁 ). So for this project, I recreated the squash practicing experience!

Current Problem:

One thing I somehow didn’t anticipate is that the phone is vertical, which cuts off the sides of the court. Considering that rail shots (hitting straight and deep to make the ball bounce parallel to the side wall) are the most commonly practiced shots in squash, not being able to see the sides makes the experience feel incomplete.

Potential Changes to Make:

My goal for the next deliverable is to make it playable horizontally as well. I could also add a sound effect every time the ball bounces off the wall, if I can find a nice, clean sound file of a squash ball hitting the front wall. (Oftentimes the court echoes a lot, so it’s hard to record the sound without other noises.)

 

 

tale-soli-sketches

I wanted to have more options and hear others’ opinions on these, so I came up with more than 3 ideas. I’m not sure which idea is the most interesting/possible to implement within a week’s timeframe, so I’ll include sketches for all of them, but I’ll only explain the three ideas I find most suitable/interesting for this project.

1. Virtual Bookshelf

Idea: 

Oftentimes I find myself not reading enough books. Libraries are either closed or operating with limited hours due to COVID-19, making reading books even harder for everyone. To be honest, reading a book is totally achievable even without libraries being open, but our book options are relatively limited. I wanted to recreate the experience of having some options to choose from when reading a book / searching for books related to a certain topic.

How it works:

I’m thinking of having “swipe” shift through the options on the bookshelf and “tap” select a book.

Each book title will be a broad category, and the content of the book will be some random information on that topic. For example, “recipes” would give a recipe for a random dish, and “motivation” would give some random quote that motivates you. If the first piece of content doesn’t satisfy you, you can “swipe” to generate another one.

Hopefully I can connect to some sort of data set/API (if one exists) from which I could draw a random piece of information.

Potential Challenges:

  • COLLECTING DATA: I’m currently unaware of any big data set/API of random facts/information/quotes for a given category.

2. Flapping Bird (bottom sketch)

Idea:

Exercising is not happening in my life these days and I’ve been losing a lot of muscle… A lot of my friends also related to the lack of movement due to COVID-19 and social distancing, so I wondered if I could make something that at least works out my arm. So I thought of this idea, where the player is required to move their arm to “tap” to make the bird fly through the sky.

How it works:

“Tap” to make the bird flap. The bird might fall (i.e. game over) if you don’t “tap” often, but you don’t have to “tap” all the time because oftentimes birds ride air currents instead of flapping vigorously.

Potential Add-on:

“swipe” to make the bird go left/right of the screen?

Potential Challenges:

I realized Soli can’t recognize consecutive motions as well as a single independent motion. It also often recognizes one motion as another (e.g. a “tap” as a “swipe” or vice versa), so it might lead to game over even though the user has been moving their arms a lot trying to get the tapping motion across…

On the bright side, it’d lead to more arm exercise? Although I get annoyed when the sensor keeps reading my intended motion as another, so perhaps it might annoy others too…

3. Stay Clean (covid19 edition?)

Idea:

Due to COVID-19, many people, including myself, began to use disinfecting products much more often. I came up with this little app that swipes between three interactive pieces related to our daily lives with COVID-19.

How it works:

Swipe left or right to move to the previous/next interactive piece:

a. hand sanitizer

“Tap” or “Swipe down” to push down the hand sanitizer bottle and make a little pile of hand sanitizer next to the bottle.

b. wiping surfaces

“Swipe up” & “Swipe down” to wipe out the virus.

c. wear mask properly

“Tap” to begin/end the game, and “swipe up/left/right” accordingly to make the person on the screen wear their mask correctly. I got the idea from all those people who don’t wear their masks properly in public spaces. (A mask is not a chin guard…)

Potential Challenges:

  • Would I be able to implement all three within the week? I’d be drawing all the visuals in Illustrator on top of coding the interactive part. How realistic is it to get all this done in a week while preparing for midterms?
  • Same as with the second idea: Soli often detects one motion as another, so what if the user is having fun wiping surfaces, for example, and is so close to being done, but then Soli reads their motion as swiping left/right and changes to the next piece?

tale-CriticalInterface

10. The interface uses metaphors that create illusions: I am free, I can go back, I have unlimited memory, I am anonymous, I am popular, I am creative, it’s free, it’s neutral, it is simple, it is universal. Beware of illusions!

Did you know that the first photographic camera, the first washing machine, the first transistor radio, the first Mac and the first Windows had the same slogan? “YOU JUST CLICK, WE DO THE REST”

Imagine your desktop is a kitchen, a garden, a hospital, a computer. Now, imagine it using no metaphor

This tenet and its propositions talk about the critical role our imagination plays in the interface. I found the second quote, one of the propositions listed under this tenet, particularly interesting because it means that many new technologies/tools were introduced/advertised in such a way that the advertisement exploits people’s imaginations and thus makes the product more interesting and valuable. I think people’s ability to imagine is boundless — the more knowledge you gain, the more you can imagine based on all the information your brain has collected.

I remember reading about what distinguishes humans from other species; the main takeaway was that humans have the ability to imagine and believe in fiction, while other species don’t. For example, we enjoy dramas, movies, musicals, plays, literature, etc. in our everyday lives, and these are all possible because humans have the ability to pretend and collectively believe in something that isn’t there. The characters and events in fiction (usually) don’t exist and didn’t happen anywhere in the world, yet everyone still enjoys them, knowing that everything is made up.

Similarly, the concept of money and currency relies entirely on the ability to imagine, as what we call money is either a piece of paper/metal or some number presented in a computer system.

tale-LookingOutwards03

I found this kinetic light bulb sculpture presented by Build UP LLC (a kinetic art distributor company in the UAE) very interesting. Although the idea of color-changing light bulbs moving up and down is not as mind-blowing these days as it was in the past, I find the concept of generating certain shapes or motions through the moving light bulbs interesting.

So far, every installation I’ve seen that uses a set of light bulbs hung from the ceiling on strings has either remained static or simply moved up and down, creating the impression of shining particles. What they have done instead is arrange the light bulbs so that they look like part of the interior design and/or move in a satisfying mathematical motion.

These are some screenshots from the video: