bumble_b-ProjectToShare

I’d like to share the work of an artist, Jasper Léonard, who specializes in a specific type of photography I love, miniature faking.

Miniature faking uses a tilt-shift lens to create an extremely shallow plane of focus, which makes a normally-sized subject look like a tiny scale model. Léonard captures life-sized scenes in this fun and inventive way, and as a lover of all things miniature and scale model (things that are smaller than they should be are just so darn cute), this process of capturing the world really speaks to me.
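
Léonard creates the effect optically with an actual tilt-shift lens, but a rough digital approximation sketches out why it works: keep a narrow horizontal band sharp, blur everything above and below it, and boost the colors. A minimal Pillow sketch of that idea (file names and parameter values are just placeholders, not anything from Léonard's process):

```python
# Rough digital approximation of the tilt-shift "miniature" look, assuming
# Pillow and NumPy: keep a horizontal band in focus, blur everything above
# and below it, and boost saturation so the scene reads like a toy model.
from PIL import Image, ImageEnhance, ImageFilter
import numpy as np

def fake_miniature(path, focus_center=0.55, focus_height=0.18, max_blur=12):
    img = Image.open(path).convert("RGB")
    w, h = img.size

    # Blurred copy of the whole frame.
    blurred = img.filter(ImageFilter.GaussianBlur(max_blur))

    # Mask: 255 (sharp) inside the focus band, falling off to 0 (blurry)
    # above and below it.
    y = np.arange(h) / h
    dist = np.abs(y - focus_center) / focus_height
    weights = np.clip(1.0 - dist, 0.0, 1.0) * 255
    mask = Image.fromarray(
        np.tile(weights.astype(np.uint8)[:, None], (1, w)), mode="L")

    out = Image.composite(img, blurred, mask)
    out = ImageEnhance.Color(out).enhance(1.5)      # punchy, toy-like colors
    out = ImageEnhance.Contrast(out).enhance(1.1)
    return out

# Hypothetical usage on a city photo.
fake_miniature("nyc_rooftop.jpg").save("nyc_rooftop_miniature.jpg")
```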

Yes, these are actual photos of New York City that are made to look like toys!

He has a few books that focus on specific cities captured this way, such as New York Resized, Antwerp Resized, and Belgium Resized. I have New York Resized, if anyone is interested and wants me to bring it to class!

Looking Outwards: 06.01.2020 18.39 (2022)

06.01.2020 18.39 (2022). Video projection, sound and fans

This is an installation by Alfredo Jaar exhibited at the Whitney Biennial 2022. The piece consists of video footage from the June 2020 Black Lives Matter protests in Washington, DC, following the murder of George Floyd.

In this installation, the video shows police in riot gear using flash grenades and tear gas in an effort to disperse crowds of protesters, and a police helicopter brought in to scatter them further.

In one key moment, the helicopter flies incredibly low, threatening the crowd. The fans and the sound in the installation simulate the noise and movement of that helicopter: the fans spin incredibly fast while the loud soundtrack mimics the engine, situating the audience inside the threatening environment. The work opens up discussions of social justice, violence, and surveillance.

I found this work inspiring and relevant to this class because it records, captures, and displays the event in a way that challenges the viewers to physically engage with video works.

Links: 

https://news.artnet.com/art-world/alfredo-jaar-whitney-biennial-2022-2093016

https://www.theartnewspaper.com/2022/04/25/2022-whitney-biennial-video-artists

PARIS | Guy Trefler

Work title: Paris

Author: Guy Trefler

Type: Short Video

This video mixes reality with Trefler's personal interpretation and imagination. He alternates between the real urban fabric of Paris and wild, crazy, even childish doodles layered over it. What inspires me in this video is that what the camera has already captured can still be changed! By doing this, Trefler says he created a new story for Paris, and I think it gives people multiple perspectives from which to interpret the city.

Looking Outwards

Neural Swamp is a multi-channel video installation commissioned by Future Fields. Martine Syms's work engages with a wealth of digital media that challenge the gendered and racialized politics of images and of the systems that enable their consumption and production. The piece features YouTube videos, video game clips, and other found footage. Syms uses complex computer algorithms known as neural networks, machine learning systems designed to mimic the functions of the human brain. She investigates the intersections of Blackness and Womanhood in a hyper-digital world. The exhibition is constantly evolving and will continue to grow while it is on view.

I found this work to be incredibly consuming and immersive. I was able to view the installation this summer, and it blew my mind. The room was a hyper-green color that immediately put me in the mindset of the digital. While walking through the space, each monitor had potent content unraveling before me. I am intrigued by the way the screen is used as a device to navigate the viewer into multiple spaces. I believe this is relevant to the class because of its challenging use of the digital lens and the way the piece keeps growing through its algorithm.



Martine Syms Website link

Sexy AI? “technological possibilities are irresistible to [hu]man”

The ethnography of Nafus and Sherman (2014) shows how those in the Quantified Self movement “collect extensive data about their own bodies” to become more aware of their mood, health, and habits, redeeming the liberatory and interpretive potential from the same technologies which usually “attract the most hungrily panoptical of the data aggregation businesses” in service of capital, carceral, or managerial ends.

In this “Looking Outwards” report, I encountered Berlin-based fashion designer and sports scientist Anna Franziska Michel, who creates designs for fabric and clothes based on self-capture of her health and sport data. In her presentation, she wears a red and blue marble-patterned dress she created using an “AI” “neural painter” from her self-tracking data. She observes that the prominence of red demonstrates that she is sitting more after founding a fashion design company.
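
Michel's "neural painter" isn't public, but the basic move, turning self-tracking data into color and pattern, can be sketched very simply. A toy version follows; the activity log and the red-for-sitting mapping are invented for illustration and are not her actual method:

```python
# Toy version of "self-tracking data -> fabric pattern": each day's activity
# mix becomes one stripe whose red/blue balance reflects sitting vs. moving.
# This is not Michel's neural painter; the data and mapping are invented.
from PIL import Image, ImageDraw

# Invented log: fraction of each day spent sitting vs. running/cycling.
days = [
    {"sitting": 0.55, "active": 0.45},
    {"sitting": 0.70, "active": 0.30},   # more sitting -> redder stripe
    {"sitting": 0.80, "active": 0.20},
    {"sitting": 0.60, "active": 0.40},
]

W, H = 800, 400
img = Image.new("RGB", (W, H))
draw = ImageDraw.Draw(img)
stripe = W // len(days)

for i, day in enumerate(days):
    red = int(255 * day["sitting"])
    blue = int(255 * day["active"])
    draw.rectangle([i * stripe, 0, (i + 1) * stripe, H], fill=(red, 40, blue))

img.save("activity_pattern.png")
```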

However, I found that this work fell short in ways I see other new media art falling short: exciting conceptual impulses motivate the exploration of new technological possibilities, but without a coherent link in the other direction. In what way do the affordances of the hyped new technological artifact inform new conceptual ideas or possibilities in turn? For example, how does her outfit inform her sense of self, help her better understand her health as she wears it, or comment on the idea of self more generally? When she sells designs based on her own data for others to wear, what does this represent for the wearers? Do they feel any connection to her or to the data they are wearing, or do they just receive it as a cool-looking design, made sexy by the imprimatur of "AI"?

***

Nafus, Dawn, and Jamie Sherman. This One Does Not Go Up to 11: The Quantified Self Movement as an Alternative Big Data Practice. 2014, p. 11.

Michel, Anna Franziska. Using Running And Cycling Data To Inform My Fashion. https://quantifiedself.com/show-and-tell/?project=1098. Quantified Self Conference.

The title quotes John von Neumann, from: Chandler, Daniel. Technological or Media Determinism. 1995.

 

Looking Outwards – DALL·E

The project I would like to share is DALL·E, a machine-learning program that generates images from natural-language descriptions. DALL·E has become hugely popular in recent months among artists who use it to create "AI art".
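
For context, this is roughly what using the hosted model looks like from code: a minimal sketch with OpenAI's Python client, where the prompt mirrors the example image below and the model name, size, and output file name are just placeholder choices (an API key is assumed to be set in the environment):

```python
# Minimal sketch of text-to-image generation with OpenAI's Python client,
# assuming the `openai` package (v1+) and an OPENAI_API_KEY in the environment.
import urllib.request
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-2",
    prompt="a painting of a fox sitting in a field at sunrise "
           "in the style of Claude Monet",
    n=1,
    size="512x512",
)

# Download the generated image to disk.
urllib.request.urlretrieve(response.data[0].url, "fox_monet.png")
print("saved fox_monet.png")
```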

There is a lot of room for improvement in the future of DALL·E. Currently, because of how the algorithm effectively "mushes together" images found online, generated images usually have less defined edges and distorted features. They appear more abstract, like a "fever dream" in which every object resembles something but the details are all wrong. This is part of the reason artists now mostly use it to generate abstract concept art or touch the images up afterward by hand. However, with better algorithms it could generate much more precise images in the future.

DALL·E has sparked heated discussions among traditional as well as digital artists about whether or not "machines will overtake human artists". This artificial intelligence is changing the art industry drastically and forcing artists to rethink the definition, meaning, and scope of art. It reminds me of how the invention of photography caused revolution and panic in the traditional art world. DALL·E, and the many other AI-based image generators, should not be feared in the art world; they mark a turning point in art-related technology. Now that we can turn the images in our heads into pictures with the click of a button, what's next?

An example of how DALL·E 1 vs DALL·E 2 compare for the prompt “a painting of a fox sitting in a field at sunrise in the style of Claude Monet” (Image credit: OpenAI)

 

miniverse – looking outwards

There are hundreds of thousands of tourist pictures of Notre Dame, each taken from a slightly different angle and location on the ground. CV experts at UW had the idea of using these free photos as photogrammetry inputs to estimate a 3D model of Notre Dame. This requires very intense knowledge of CV, because each tourist photo needs to be mapped to a physical location around the building and then used to build up the 3D model; it is like a much more complicated version of epipolar reconstruction. I think it's interesting how this could be applied to virtually any location with enough footage or images. I took a class in CV, but I still don't really understand how enough 2D information plus a lot of math can construct a 3D model. Now that parts of Notre Dame have burned, processes like this allow retroactive 3D models of a location that can no longer be captured.
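
A minimal two-view sketch of the underlying idea, assuming OpenCV and two overlapping photos (the file names and the intrinsics guess are hypothetical): match features between the images, recover the relative camera pose from the essential matrix, and triangulate a sparse 3D point cloud. The UW pipeline does this across thousands of photos with bundle adjustment; this is only the toy version.

```python
# Minimal two-view structure-from-motion sketch (a toy version of the
# photogrammetry pipeline described above), assuming OpenCV and NumPy.
import cv2
import numpy as np

# Two overlapping tourist photos (hypothetical filenames).
img1 = cv2.imread("notre_dame_1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("notre_dame_2.jpg", cv2.IMREAD_GRAYSCALE)

# A rough guess at the camera intrinsics; a real pipeline would calibrate
# or pull the focal length from EXIF data.
h, w = img1.shape
K = np.array([[1.2 * w, 0, w / 2],
              [0, 1.2 * w, h / 2],
              [0, 0, 1]], dtype=np.float64)

# 1. Detect and match local features between the two views.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# 2. Estimate the relative camera pose from the essential matrix.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# 3. Triangulate a sparse 3D point cloud from the inlier matches.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
inliers = mask.ravel() > 0
pts4d = cv2.triangulatePoints(P1, P2, pts1[inliers].T, pts2[inliers].T)
pts3d = (pts4d[:3] / pts4d[3]).T
print(f"Reconstructed {len(pts3d)} sparse 3D points from two photos")
```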

https://www.washington.edu/news/2007/11/01/vacation-photos-create-3d-models-of-world-landmarks/

Looking Outwards – Inbar Hagai

One of my all-time favorite experimental documentaries is Leviathan (2012) by Lucien Castaing-Taylor and Véréna Paravel, which was produced as part of the Sensory Ethnography Lab at Harvard University. Its 87 minutes were shot aboard an industrial fishing vessel at open sea (200 miles off the coast of New Bedford, Massachusetts) using GoPro cameras, which often seem to be mounted on different parts of the ship. It's one of the most potent texts about the food industry I've ever seen, yet barely a sentence is spoken throughout the entire film.

Leviathan offers a highly meditative and beautiful viewing experience, yet that beauty is created out of a somewhat abstract representation of an extremely cruel and violent reality (for both humans and non-humans). In that respect, a potential criticism of the film is that it transforms violence into visual pleasure to such an extent that it weakens any activist potential that could have arisen. As Harun Farocki suggests in his film "Inextinguishable Fire" (1969), the question of how to represent reality in a way that leads to its disclosure and understanding, in a way that moves the viewer to action, is a paradoxical one, since a violent image can also mask or lead to not seeing, to oblivion and numbness.

That being said, from my perception of what art is, this film is an extraordinary example of poetic precision between form and content. It does not preach or impose a meaning; there is no trace of pedagogy or didacticism, yet it holds it all within it and allows for in-depth observation and examination of a distant reality of labor and death that the average person relies on in their day-to-day life without giving it any thought. I would argue that this context, along with the film's potent visual imagery, which is unbelievably mesmerizing, and its very clever, almost hidden soundtrack treatment, elevates Leviathan to a masterpiece.

(A side note: other than Leviathan, Harvard’s Sensory Ethnography Lab is the home of some more pretty amazing experimental explorations – https://sel.fas.harvard.edu/)

ultrablack-lookingoutwards

Is there a correspondence between the subjective feelings we perceive and objective data and indicators? And how does that correspondence work, given that both are information encoded in different formats?

Science can explain part of the truth in this world, but humans still face so many things that are unknown or unexplainable, things beyond our understanding or perception.

Sound artist Maxime Michaud turned his body into an instrument: generator, actuator, and receiver. He used a Muse brainwave headset to capture the rise and fall of his brain's electrical activity and used sound to interpret the brainwaves. He tried to enter different states of mind, thinking hard, meditating, and so on, to trigger different sound effects. The final presentation was a live performance in a theatre.
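
A toy sketch of the general brainwave-to-sound idea (not Michaud's actual setup): take one-second windows of an EEG signal, estimate alpha-band (8–12 Hz) power with an FFT, and map that power to the pitch of a sine tone. The EEG below is synthetic; a real version would stream samples from a headset like the Muse.

```python
# Toy sketch of sonifying EEG: map alpha-band power in a signal window to
# the pitch of a sine tone, written out as a WAV file.
import numpy as np
import wave

FS_EEG = 256          # EEG sampling rate (Hz), typical for consumer headsets
FS_AUDIO = 44100      # audio sampling rate (Hz)

def alpha_power(window, fs=FS_EEG):
    """Return the mean spectral power in the 8-12 Hz (alpha) band."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    band = (freqs >= 8) & (freqs <= 12)
    return spectrum[band].mean()

def power_to_tone(power, duration=0.5, lo=220.0, hi=880.0):
    """Map a normalized band power to a sine tone between lo and hi Hz."""
    freq = lo + (hi - lo) * np.clip(power, 0.0, 1.0)
    t = np.linspace(0, duration, int(FS_AUDIO * duration), endpoint=False)
    return 0.5 * np.sin(2 * np.pi * freq * t)

# Fake a minute of EEG: noise plus an alpha rhythm that slowly grows
# stronger, as it might during meditation.
t = np.arange(60 * FS_EEG) / FS_EEG
eeg = np.random.randn(len(t)) + (t / 60) * np.sin(2 * np.pi * 10 * t)

# Sonify each one-second window and write the result to a WAV file.
tones = []
for i in range(0, len(eeg) - FS_EEG, FS_EEG):
    p = alpha_power(eeg[i:i + FS_EEG])
    tones.append(power_to_tone(p / 50.0))   # crude normalization constant
audio = np.concatenate(tones)

with wave.open("brain_tones.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(FS_AUDIO)
    f.writeframes((audio * 32767).astype(np.int16).tobytes())
```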

Also, the website braintoneart is a real-time experience that turns brainwaves into digital art. It makes me think of the Rorschach inkblot test. (Further research needed.)

 

As a person who meditates a lot, noticing a lot of subtle mental movement and sensitively experiencing bodily sensations I can't explain in words, I'm interested in the gap between human perception, what is actually happening in the world of matter, and how we communicate beside/beyond the format of language.

Looking Outwards

My post focuses less on a specific project and more on a method of capturing that has been used throughout generations that I really find fascinating.

Kirlian photography – more popularly known as aura photography – started off as a method that did not use a lens at all. You would place an object directly onto photographic film resting on a metal plate. A high-voltage current applied to the plate would then expose the film, and the developed print would show a glowing "aura" around the object.

Two Kirlian photos comparing the aura prints of the tip of David Bowie’s forefinger before and after consuming coke. These were used for album cover artwork.

This idea was later remixed into the AuraCamera, a device in which people place their hands on biofeedback sensors said to measure their electromagnetic field; an attached data box converts the energy readings into frequencies that correspond to certain colors. These newer cameras create the more recognizable version of aura portraits that we see regularly today: Polaroids with clouds of color layered over them.
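
A toy illustration of that last reading-to-color step (the AuraCamera's real mapping is proprietary; the readings, ranges, and hue mapping below are invented): map each sensor reading to a hue and blend soft color clouds over a portrait.

```python
# Toy illustration of mapping sensor readings to a color overlay, in the
# spirit of the AuraCamera's reading-to-color step. The mapping is invented.
from PIL import Image, ImageDraw, ImageFilter
import colorsys
import random

def reading_to_color(reading, lo=0.0, hi=100.0):
    """Map a reading in [lo, hi] to an RGB color along the hue wheel."""
    hue = (reading - lo) / (hi - lo)
    r, g, b = colorsys.hsv_to_rgb(hue * 0.8, 0.9, 1.0)
    return int(r * 255), int(g * 255), int(b * 255)

def aura_portrait(portrait, readings):
    """Overlay soft color clouds (one per reading) on a portrait image."""
    overlay = Image.new("RGB", portrait.size, "black")
    draw = ImageDraw.Draw(overlay)
    w, h = portrait.size
    for reading in readings:
        color = reading_to_color(reading)
        cx, cy = random.randint(0, w), random.randint(0, h // 2)
        r = min(w, h) // 3
        draw.ellipse([cx - r, cy - r, cx + r, cy + r], fill=color)
    overlay = overlay.filter(ImageFilter.GaussianBlur(radius=min(w, h) // 8))
    return Image.blend(portrait, overlay, alpha=0.45)

# Usage with a hypothetical portrait file and made-up sensor readings.
portrait = Image.open("portrait.jpg").convert("RGB")
aura_portrait(portrait, readings=[23.0, 55.0, 81.0]).save("aura_portrait.jpg")
```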

Halo Auragraphic

I am interested in how the interpretation of these images has changed due to popular culture and business practices. Aura photography is still a prospering business (its recent revival on social media apps may have aided in this), even though we can now easily look up an explanation of how these photos are actually captured. It seems that people would rather lean into the debunked paranormal myths. I also find it fascinating that Kirlian photography was first created to capture the energies of living things that were not human. You can take "spiritual" snapshots of literal objects, which I think also makes people re-evaluate what we should believe to have a spirit and what we should not.