I had to upload my project as a zip file because my canvas width was 650 pixels, a little over the maximum size WordPress allows. Continue reading to learn how to access the game.
Instructions on accessing the game: To begin, download the zip file attached above. Once it is downloaded, a zip file called “Monica-Chang-Final-Project” should appear. Open it and a folder called “104final” will appear; then open “sketch.js” to see the code behind the game.
Because this game lives outside of WordPress and my code implements sound, it is crucial to serve the sketch from a local server so that the sound files will play. Open the game by following the instructions below (taken from Lab Week 10):

1. Open Terminal (on a Mac) or cmd (on Windows).
2. Navigate to the project folder: cd path-to-your-directory (e.g. cd Desktop/104final).
3. Start a local server with python -m SimpleHTTPServer, or, if you are using Python 3, with python -m http.server.
4. Open http://localhost:8000 in your browser to test your sketch.

Description: For this final project, I developed a game in which the player, a virtual “photographer,” is responsible for capturing pictures of the flying birds with a virtual camera by pressing the SPACEBAR. The problem is that the birds fly by far too fast, yet the player must take enough pictures to reach a “CAPTURED” score of 100. There is no way to lose points in this game.
Another element that I implemented is changing weather. Since the initial weather is gloomy and rainy, the player may click the mouse to change the screen to a happy blue sky. The field audio and sounds change accordingly.
For this final project, I wanted to develop a game called “Capture Red”. In this game, the player, a virtual “photographer,” is responsible for capturing birds with a virtual camera by pressing the spacebar. The problem is that the birds fly by far too fast, but the player has to capture as many as 100 “photos.” Birds of different colors will also appear, and if the player accidentally captures one of them, they will lose points.
The default/beginning screen will include a frame that resembles what the player would see looking through a lens, but in a more exaggerated way. I will also include field audio to incorporate the sounds of the environment. The lens will look toward the sky, where the player will see moving clouds (I am also thinking of changing the sky throughout the game with different weather patterns and different weather sounds like wind, rain, thunder, etc.). UPDATE: I have incorporated rain as the weather element of the default screen, with rainSounds as the background noise. When the player clicks the screen, the weather changes from raining (gloomy) to blue skies (with clouds).
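A minimal sketch of how such a click-driven weather toggle can work (variable and sound names here are illustrative, not the project's actual code): a single flag flips on every mouse press, and the draw loop and ambient audio branch on it.

```javascript
// Hypothetical weather state: the sketch starts gloomy and rainy.
var isRaining = true;

// Flip the weather on each click; returns which ambient loop should play.
// In the real sketch, mousePressed() would also stop the old p5.SoundFile
// and start looping the new one.
function toggleWeather() {
  isRaining = !isRaining;
  return isRaining ? "rainSounds" : "fieldSounds";
}
```

In p5.js, mousePressed() would simply call toggleWeather(), and draw() would paint either the grey rain scene or the blue sky depending on isRaining.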
Whenever the player “captures” a bird, the screen flashes and a camera-flash sound plays. When the player “captures” a bird, they will also see, in the top corner, that they have gained a point.
Once the player has captured enough birds to reach a “CAPTURED” count of 100, the game ends and the player must refresh the page to start again.
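The scoring rule described above is simple enough to sketch in a few lines (names are illustrative; this is not the project's actual code): every successful capture adds a point, points are never subtracted, and the game ends once the count reaches 100.

```javascript
var GOAL = 100;   // the "CAPTURED" count that ends the game
var captured = 0; // current score

// Called whenever a SPACEBAR press successfully photographs a bird.
// Returns true once the game is over.
function recordCapture() {
  captured += 1; // there is no way to lose points in this game
  return captured >= GOAL;
}
```

In the sketch itself, keyPressed() would check for the spacebar and a bird under the viewfinder before calling something like recordCapture().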
For this final project, I want to utilize a generative landscape and sound to create an imaginative, colorful world. There are many artists I will be thinking about over the course of developing this project, but two will be prioritized.
One of these artists is Maxim Zhestkov, a media artist who concentrates on film, installation and computational design to stretch the boundaries of visual language. He is famous for his digitally rendered gallery designs, which produce choreographed, calculated movements of elements and shapes using physics and computers.
Another artist I would like to explore further during this final project is Mike Tucker, an interactive designer and director who focuses on audio-visual exploration. He co-created the app Tónandi, which features a VR experience of a fantasy-like landscape.
Although Mike Tucker created an environment his audience could interact with, I felt Maxim Zhestkov's piece would also have been more interesting if the audience could interact with the visuals, even though it is fully computer-generated. Both, however, beautifully generate the kind of new, fantastical world that fascinates me most.
//Monica Chang
//mjchang@andrew.cmu.edu
//Section D
//Project 11 - Generative Landscapes
//LANDSCAPE DESCRIPTION:
// SURPRISE! THERE IS HIDDEN LAVA BEHIND THE FIRST TERRAIN!
var tethered = [];
var terrainSpeed = 0.0008;      // speed of orange terrain and middle terrain
var terrainSpeedThree = 0.0007; // speed of very back mountain
var terrainDetail = 0.008;
var terrainDetailTwo = 0.001;
var terrainDetailThree = 0.02;  // smoothness of the terrains
function setup() {
    createCanvas(480, 480);
    frameRate(20);
    // initial lava
    for (var i = 0; i < 30; i++) {
        var tx = random(width);
        var ty = random(300, height);
        tethered[i] = makeTethered(tx, ty);
    }
}
function draw() {
    // lavender background
    background(236, 225, 250);
    // arrange the landscape elements (three terrains, lava spots)
    renderTerrainTwo();         // middle, low-opacity mountain
    renderTerrainThree();       // third mountain in the very back
    updateAndDisplayTethered(); // hidden lava behind the front terrain
    renderTerrainOne();         // first terrain in the very front
}
function displayTethered() {
    // draw the "tethered" lava
    noStroke(); // no outline
    fill(255, 11, 5); // red tethered coat color
    push();
    translate(this.x0, this.y0); // place the lava body at (x0, y0)
    ellipse(5, 5, 10, 5); // tethered lava body
    pop();
}
function makeTethered(birthLocationX, birthLocationY) {
    var theTethered = {x0: birthLocationX,
                       y0: birthLocationY,
                       tx: random(0, width),
                       ty: random(300, height),
                       speed: -3.0,
                       move: moveTethered,
                       display: displayTethered};
    return theTethered;
}
function moveTethered() {
    this.x0 += this.speed; // speed of lava moving
    if (this.x0 <= -10) { // new lava appears at the right as pieces disappear to the left
        this.x0 += width + 10;
    }
}
function updateAndDisplayTethered() {
    for (var i = 0; i < tethered.length; i++) {
        tethered[i].move();
        tethered[i].display();
    }
}
function renderTerrainThree() {
    // draw the terrain in the back
    noStroke();
    fill(51, 16, 84);
    beginShape();
    for (var i = 0; i < width; i++) {
        var t = (i * terrainDetailThree) + (millis() * terrainSpeedThree);
        // terrain's y coordinate
        var y = map(noise(t), 0, 1.5, height / 8, height);
        // keep drawing terrain
        vertex(i, y);
    }
    // terrain constraints
    vertex(width, height);
    vertex(0, height);
    endShape();
}
function renderTerrainTwo() {
    // draw terrain number two (in the middle)
    noStroke();
    fill(71, 11, 6, 200); // low-opacity maroon
    beginShape();
    for (var a = 0; a < width; a++) {
        var b = (a * terrainDetail) + (millis() * terrainSpeed);
        var c = map(noise(b), 0, 1, 0, height / 4);
        vertex(a, c);
    }
    vertex(width, height);
    vertex(0, height);
    endShape(CLOSE);
}
function renderTerrainOne() {
    // draw the terrain in the very front
    noStroke();
    fill(235, 64, 52);
    beginShape();
    for (var x = 0; x < width; x++) {
        var t = (x * terrainDetailTwo) + (millis() * terrainSpeed);
        var y = map(noise(t), 0, 1, 0.55, height + 100);
        vertex(x, y);
    }
    vertex(width, height);
    vertex(0, height);
    endShape(CLOSE);
}
I was originally inspired by the horror film ‘Us’, which was released this year, and wanted to illustrate the “Tethered”. However, they ended up looking more like lava because of their color, so I created a landscape with holes of lava instead. This project was really fun and helped me understand objects better.
Her website: http://rosa-menkman.blogspot.com/
Rosa Menkman is a Dutch curator, visual artist and researcher who specializes in digital and analog media, specifically noise artifacts: glitches, encoding and feedback artifacts. With her artwork she emphasizes the idea that the process of imposing efficiency, order and functionality does not just involve creating procedures and solutions, but also relies on ambiguous compromises and the forever unseen and forgotten.
Menkman is considered one of the most iconic video-glitch artists, as she often utilizes software glitches to develop her stunning pieces. One of her algorithmic pieces, ‘Xilitla’, is a hallucinatory, futuristic 3D architectural environment formed from polygons and other unconventional objects. Using game-like controls, the viewer can navigate this graphic landscape via the head-piece in the center. Menkman also considers this particular piece the one that best describes the rest of her body of work.
//Monica Chang
//mjchang@andrew.cmu.edu
//Section D
//Project-10-Sonic-Sketch
var classroom; // the chalkboard image
var cough, giggle, sneeze, fart, writing, sniffles; // one sound per student
function preload() {
    // call loadImage() and loadSound() for all media files here
    // the chalkboard
    var classroomURL = "https://i.imgur.com/HudNKW3.png";
    classroom = loadImage(classroomURL);
    // load six sounds, one for each student
    cough = loadSound("https://courses.ideate.cmu.edu/15-104/f2019/wp-content/uploads/2019/11/cough.wav");
    giggle = loadSound("https://courses.ideate.cmu.edu/15-104/f2019/wp-content/uploads/2019/11/giggle.wav");
    sneeze = loadSound("https://courses.ideate.cmu.edu/15-104/f2019/wp-content/uploads/2019/11/achoo.wav");
    fart = loadSound("https://courses.ideate.cmu.edu/15-104/f2019/wp-content/uploads/2019/11/fart.wav");
    writing = loadSound("https://courses.ideate.cmu.edu/15-104/f2019/wp-content/uploads/2019/11/writing.wav");
    sniffles = loadSound("https://courses.ideate.cmu.edu/15-104/f2019/wp-content/uploads/2019/11/sniffles.wav");
}
function setup() {
    createCanvas(365, 329);
}
function draw() {
    // image of classroom
    background(255, 223, 171);
    image(classroom, 0, 0);
}
function mousePressed() {
    // play the cough noise when the mouse is pressed on the female student in the front (with a ponytail)
    if (mouseX > 135 && mouseX < 210 && mouseY > 85 && mouseY < 165) {
        cough.play();
    }
    // play the fart noise when the mouse is pressed on the male student closer to the teacher
    if (mouseX > 53 && mouseX < 120 && mouseY > 135 && mouseY < 245) {
        fart.play();
    }
    // play writing sounds when the mouse is pressed on the female student with black hair (no ponytail)
    if (mouseX > 210 && mouseX < 285 && mouseY > 124 && mouseY < 200) {
        writing.play();
    }
    // play the giggle sound when the mouse is pressed on the female student with a bun
    if (mouseX > 113 && mouseX < 186 && mouseY > 187 && mouseY < 270) {
        giggle.play();
    }
    // play the sneeze noise when the mouse is pressed on the male student in the back
    if (mouseX > 280 && mouseX < width && mouseY > 155 && mouseY < 260) {
        sneeze.play();
    }
    // play the sniffling noise when the mouse is pressed on the female student in the back (with a ponytail)
    if (mouseX > 170 && mouseX < 250 && mouseY > 228 && mouseY < 310) {
        sniffles.play();
    }
}
For this project, I used an image of a classroom and integrated sounds you would typically hear in a class setting (besides the farting noise) via the students. The sounds I used were farting, sniffling, coughing, writing, giggling and sneezing.
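As an aside, the six repeated if-blocks in mousePressed() above could be collapsed into one data-driven lookup. A sketch of that refactor (using the same rectangles as the code above; the sound names here stand in for the loaded p5.SoundFile objects):

```javascript
// Each student's clickable rectangle, paired with the sound it triggers.
var hitRegions = [
  { x1: 135, x2: 210, y1: 85,  y2: 165, sound: "cough" },
  { x1: 53,  x2: 120, y1: 135, y2: 245, sound: "fart" },
  { x1: 210, x2: 285, y1: 124, y2: 200, sound: "writing" },
  { x1: 113, x2: 186, y1: 187, y2: 270, sound: "giggle" },
  { x1: 280, x2: 365, y1: 155, y2: 260, sound: "sneeze" },
  { x1: 170, x2: 250, y1: 228, y2: 310, sound: "sniffles" }
];

// Return the sound name for a click at (mx, my), or null if no student was hit.
function soundAt(mx, my) {
  for (var i = 0; i < hitRegions.length; i++) {
    var r = hitRegions[i];
    if (mx > r.x1 && mx < r.x2 && my > r.y1 && my < r.y2) {
      return r.sound;
    }
  }
  return null;
}
```

mousePressed() would then call soundAt(mouseX, mouseY) and play the matching sound whenever the result is not null, so adding a seventh student would only mean adding one entry to the array.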
by Imogen Heap
I first discovered the MiMu Gloves through a participating music artist: Ariana Grande. Having been a fan of her music for a long time, I became aware of this new, technological way of expressing and performing music through the performer's movement during her tour in 2015.
These gloves, however, were created by the award-winning musician and technology innovator Imogen Heap. With these gloves, a wide variety of musicians have explored different ways of performing: vocalists, classical pianists, pop artists, beatboxers and guitarists all participated in earlier versions of the gloves after their release in 2010.
Once Heap began collaborating with a range of musical artists, the MiMu design team expanded with engineers, designers, and artists specializing in software, textiles, electronics, sensors, UX/UI and music! With this team and these gloves, she continues her search for a relationship between music software and hardware as a musical tool.
//Monica Chang
//mjchang@andrew.cmu.edu
//Section D
//Project 09 - Computational Portrait
var itMe; // the portrait image
function preload() {
    var myImageURL = "https://i.imgur.com/3WVgXfE.jpg";
    itMe = loadImage(myImageURL); // load my image
}
function setup() {
    createCanvas(360, 480);
    background(0);
    itMe.loadPixels(); // load the image's pixel data so it can be sampled
    frameRate(4000); // rate of generating pixels
}
function draw() {
    // pick a random point and sample the image's color there
    var px = random(width);
    var py = random(height);
    var size = random(3, 8);
    var offset = 15;
    var cx = constrain(floor(px), 0, width - 1);
    var cy = constrain(floor(py), 0, height - 1);
    var imgColor = itMe.get(cx, cy);
    noStroke();
    fill(imgColor);
    ellipse(px, py, size);
    textSize(size);
    textFont("Georgia");
    text("M", px + offset, py);
}
I chose to approach this project with a self-portrait.
I think this was one of the easiest but also one of the more fun projects we have done this semester. Just like all the other projects, it was very open-ended, which allowed me to explore different options comfortably, although I struggled to find what else I could do with the image. It also gave me a chance to revisit some beautiful photos I had abandoned.
For this Looking Outwards post, I was intrigued by one of Kristine Kim's Looking Outwards posts, in which she looked into the artist Refik Anadol. Something that really pulls me toward Anadol's portfolio is his ability to create visuals that take viewers to an alternate universe. Kristine also articulates how he plays with architectural functionality, which makes sense given that his pieces often take the presentation space into account.
One particular piece Kristine included in her response was Anadol's Melting Memories, built around the concept of materializing memory. It was interesting to see something as fragile and non-tactile as memory manifested in visual, perceptible form. With methods like this, Anadol is able to create a new world through digital processes, as Kristine also mentions in her response. Anadol's body of work materializes places and things that one would not normally be able to visualize in physical form, and he wants to create a path toward a future that connects the digital world with the one that already surrounds us.
Mike Tucker’s Lecture (second video): https://vimeo.com/channels/eyeo2019/page:4
Tónandi project: https://www.magicleap.com/experiences/tonandi
Mike Tucker is an interactive designer and director at Magic Leap, a company that focuses on the future of spatial computing. With his skills in audio-visual exploration, he has collaborated with Radiohead's Jonny Greenwood and with Encyclopedia Pictura on Kanye West's video game. He has also worked with Universal Everything (which I have blogged about before, in Looking Outwards 05), a collective of designers and digital artists whose work spans technology and the humanities, curated by Matt Pyke.
Tucker has worked with various concepts and technologies, such as hand and eye tracking, spatial controllers and optics. Spatial music interactivity became his next experiment when he collaborated with Sigur Rós at Magic Leap.
Tucker states that he hopes to inspire spatial designers all over the world by offering a new mindset for approaching technology and using it to design a “mixed-reality future.”