I played with this model years ago. The model tries very hard to fit both low-level patterns and border structure to the input data, which creates funny-looking images. But the model is only good at interpreting thin lines with equal stroke weight and no color.
The Current Team made a strange and unpleasant video using 3D reconstructions. It uses badly made 3D models to address the issue of consumerism, which is clever.
Originally I wanted to make a worm using the joints shown in class today. Because the act of showing made the project less interesting, I decided to add more to it using some PID controls, which I had never implemented myself before. But then the spider-like movement came to mind, an effect I have long wanted to reproduce. So here it is. The problem is more non-trivial than I thought, and it takes some hacks. The rotation of the spider legs is the hardest part to implement, so I used some tricks to fake it.
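For context, here is a minimal sketch of the kind of PID loop I mean; the gains and the target-angle source are made up for illustration, not the values from my actual sketch:

```typescript
// Minimal PID controller driving a leg angle toward a target angle.
class PID {
  private integral = 0;
  private prevError = 0;
  constructor(
    private kp: number,
    private ki: number,
    private kd: number
  ) {}

  // Control output from the error between target and current, given dt seconds.
  update(target: number, current: number, dt: number): number {
    const error = target - current;
    this.integral += error * dt;
    const derivative = (error - this.prevError) / dt;
    this.prevError = error;
    return this.kp * error + this.ki * this.integral + this.kd * derivative;
  }
}

// Usage: each animation frame, nudge the leg by the controller output.
const pid = new PID(2.0, 0.1, 0.3); // gains chosen for illustration only
let legAngle = 0;
function stepLeg(targetAngle: number, dt: number): void {
  legAngle += pid.update(targetAngle, legAngle, dt) * dt;
}
```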
This is nuts: the best generated tree I have seen in my lifetime. Strangely, I can’t find a single root in the image (use of transparency?), and the coloring algorithm is very effective.
Interestingly, I would argue that Effective Complexity is fully determined by human perception.
Mathematical equations, to some people, are mostly chaos: changing the numbers in an equation invokes no psychological change in them, because they are not interested in extracting information from a blackboard of nonsense. To them, the image above reflects low effective complexity. To mathematicians, however, every bit of information is important, and thus the image above reflects high effective complexity.
My Personal Note
“Generative Art Theory” describes generative art as the repeated execution of rule-based art, a definition that incorporates many ancient generative artworks not executed by computers.
“Generative Art Theory” by Philip Galanter discusses Effective Complexity. The intuition is that although the trajectory of an individual gas molecule is not predictable, the overall properties of the gas are well known, with only a little random error. But this is just an intuition; is there a way to find a mathematical definition of Effective Complexity? If we could systematically quantify such a metric, the next generation of GANs could be optimized to achieve high Effective Complexity! (Given that the metric is computable and well defined, we could have a genetic algorithm do the generation; it doesn’t have to be differentiable.)
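As a sketch of what I mean, suppose we had a computable scoring function `effectiveComplexity` (which is exactly the open question); a gradient-free genetic loop over rule parameters could then look like this:

```typescript
// Hypothetical: a computable Effective Complexity score for the image
// produced by a given rule-parameter vector. This function is the open
// question; it does not exist yet.
declare function effectiveComplexity(params: number[]): number;

// Minimal genetic algorithm: needs only the score, no gradients.
// Assumes popSize is even.
function evolve(popSize: number, dims: number, generations: number): number[] {
  let population = Array.from({ length: popSize }, () =>
    Array.from({ length: dims }, () => Math.random())
  );
  for (let g = 0; g < generations; g++) {
    // Score each individual once, keep the top half as parents.
    const scored = population
      .map((p) => ({ p, s: effectiveComplexity(p) }))
      .sort((a, b) => b.s - a.s);
    const parents = scored.slice(0, popSize / 2).map((x) => x.p);
    // Refill the population with slightly mutated copies of the parents.
    population = parents.concat(
      parents.map((p) => p.map((x) => x + (Math.random() - 0.5) * 0.1))
    );
  }
  return population[0]; // best parameters found in the last ranking
}
```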
Information theory counts every detail in the system as bits of information, but human perception clearly does not.
Does Effective Complexity only exist given human perception, or is it something more fundamental?
One way to model a complex system is to use statistical tools, like computing the mean and standard deviation. Two gas systems containing different information will still have similar means and standard deviations, which aligns with human perception.
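A toy illustration of that point, with two different random samples standing in for two gas systems:

```typescript
// Two "gas systems": different microstates, same macrostatistics.
function sample(n: number): number[] {
  return Array.from({ length: n }, () => Math.random());
}

function meanAndStd(xs: number[]): [number, number] {
  const mean = xs.reduce((a, b) => a + b, 0) / xs.length;
  const variance = xs.reduce((a, b) => a + (b - mean) ** 2, 0) / xs.length;
  return [mean, Math.sqrt(variance)];
}

const [m1, s1] = meanAndStd(sample(1_000_000));
const [m2, s2] = meanAndStd(sample(1_000_000));
// Both print roughly mean 0.5 and std 0.289, despite the two samples
// sharing essentially none of their underlying bits of "information".
console.log(m1, s1, m2, s2);
```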
My opinion about The Problem of Authorship: by defining generative art as the repeated execution of rule-based art, all information is therefore encodable and can be represented by the rules themselves. If the final product follows exactly what the rules describe, then the final product, as a reflection of the rules, does not add additional meaning to the work. In this case, the authorship should fully belong to whoever wrote the rules. However, in the case of random number generation (especially pseudo-random numbers), decisions (about which random number to use) are made by the computer, not the artist. Say you wrote a program that uses total randomness to generate 100px by 100px images. Most of the time the resulting image is white noise. However, it is still possible, with small probability, for the computer to generate something meaningful. This problem is magnified in artwork that involves a latent space (typically in GANs), as this probability becomes larger: a computer can find an interesting random input to the latent space and thereby “discover” an interesting artwork. At that point, we should attribute some authorship to the computer for choosing the right input. The “amount” of authorship we attribute to the executor should be proportional to the size of the search space. The link to computational complexity is intuitive: as the search space shrinks, the rules become more restrictive, and therefore a larger share of authorship should be awarded to the rule-writer instead of the executor. In summary, for computer-generated art with uncertainty, I think the authorship should be split between the rule-writer and the executor based on how restrictive the rules are.
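To make “proportional to the search space” slightly more concrete, here is one possible formalization; it is purely my own sketch (the constant $c$ is made up), not anything from Galanter:

```latex
% Executor's share grows with the size |S| of the search space the
% rules leave open; the rule-writer receives the remainder.
% c > 0 is an arbitrary constant weighting the rules' contribution.
\mathrm{share}_{\mathrm{executor}} = \frac{\log_2 |S|}{\log_2 |S| + c},
\qquad
\mathrm{share}_{\mathrm{writer}} = 1 - \mathrm{share}_{\mathrm{executor}}
```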
I am very interested in creating photorealistic quality using non-PBR tools. A map is relatively achievable, and when those hand strokes are simulated by the machine, the quality increases while the authenticity somehow remains, as if it were crafted by hand. The entire image is made of many layers of clipping, many canvases, many Perlin noises, and a lot of high school geometry. However, I strongly encourage the audience not to pay attention to the map drawn on the parchment, because the map is clearly unfinished and badly made.
The most artistically challenging part is making the parchment look real with a nice texture. The most technically difficult part is figuring out how clipping can work on pixel arrays. Of course, the project is over-scoped, as mine usually are. But since I think I learn the most from an over-scoped project, I will keep doing them, and the time they take will gradually shrink as I get increasingly familiar with 2D canvas coding.
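For the clipping-on-pixel-arrays problem, a minimal sketch of the approach I mean, masking one RGBA pixel buffer with another (the buffers here are placeholders for whatever the canvases produce):

```typescript
// Clip one pixel array against a mask: keep a pixel only to the extent
// that the mask's alpha allows. Both buffers are RGBA, width * height * 4.
function clipPixels(
  src: Uint8ClampedArray,
  mask: Uint8ClampedArray
): Uint8ClampedArray {
  const out = new Uint8ClampedArray(src.length);
  for (let i = 0; i < src.length; i += 4) {
    const a = mask[i + 3] / 255; // mask alpha in [0, 1]
    out[i] = src[i];
    out[i + 1] = src[i + 1];
    out[i + 2] = src[i + 2];
    out[i + 3] = src[i + 3] * a; // scale alpha instead of a hard cut
  }
  return out;
}
```

On an actual canvas, setting `ctx.globalCompositeOperation = 'destination-in'` on an offscreen canvas does this in one call; the loop above is the pixel-array equivalent.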
Let me introduce to you the greatest invention of the century: distributed time. Have you ever experienced inaccurate time displays when you just got off an airplane from another time zone? Do you know that the time display on your phone, laptop, or even on the wall can be manipulated maliciously by a third party, and would therefore cause you to miss a meeting or make your code crash? Taking a step further, do you really trust that the locations of the sun and the moon are not controlled by an alien civilization (since all physical processes are Turing complete and can be simulated)? It is “time” to solve all these problems! The Distributed Time Project is a project to enable global time consensus based on distributed blockchain technology. It provides you with the most accurate time regardless of your location on Earth (as long as it is on Earth; space-time dilation is negligible). You can literally speed up time if you wish to make your meeting earlier in the day (how convenient is that), but of course, that comes at a cost. This is just one application. The other important thing about the Distributed Time Project is its convenience for space travel. If you are at Alpha Centauri, it is painful, meaningless, and stupid for you to keep Earth time. It will be even more annoying to find your time out of sync if you are a delivery person who doesn’t want to make an Amazon delivery late. Put simply: the current timekeeping consensus breaks down during space travel. Also, if Twitter wants to serve both people on Earth and people on the planets of Alpha Centauri, it is more intuitive for the server to display posts’ Distributed Time relative to the poster, rather than the time of either planet or the time of the server, since distributed servers sit at different locations in the universe. For centuries, time was governed by the sun and the moon. In the Distributed Time Project, we believe that time is created by the people and for the people.
It is made simply and designed for color blindness, as the colors can still be read in grayscale. It strives for simplicity in design while carrying artistic taste. The browser queries a blockchain node for the current block time. Because block time, although in consensus, is not linear with respect to natural time, the graphical interface needs to be responsive in predicting when the next block will be mined.
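A minimal sketch of that query-and-predict loop, assuming an Ethereum-style JSON-RPC node (the URL is a placeholder, and the prediction is a naive running average, not my actual heuristic):

```typescript
// Fetch the latest block's timestamp from an Ethereum-style JSON-RPC
// node, and estimate the next block from the average observed gap.
const RPC_URL = "https://example-node.invalid"; // placeholder endpoint

async function latestBlockTime(): Promise<number> {
  const res = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "eth_getBlockByNumber",
      params: ["latest", false],
    }),
  });
  const { result } = await res.json();
  return parseInt(result.timestamp, 16); // seconds since epoch
}

// Naive prediction: next block = last timestamp + average observed gap.
const gaps: number[] = [];
let last = 0;
async function predictNextBlock(): Promise<number> {
  const t = await latestBlockTime();
  if (last > 0 && t > last) gaps.push(t - last);
  last = t;
  const avg = gaps.length
    ? gaps.reduce((a, b) => a + b, 0) / gaps.length
    : 12; // assume roughly 12 s until we have samples
  return t + avg;
}
```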
This project is focused more on the conceptual aspect. It would be best if you could view it on a spaceship of some sort that travels fast enough to experience time dilation.
I did not have a chance to finish the project as I originally planned. This is a cut-down version, as I did not implement a full eth2 node based on PoS. If I had, the clock would look more interesting, since eth2 introduces heartbeat and sharding mechanisms. I spent about 15 hours on the project, and this might be considered the most unsuccessful part of it.
The main bone of the fish does not flow with the rest of the body. I can’t think of a way to coordinate all of this with `rect`. I have already spent too much time on this piece, so I decided to move on.
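For future reference, a minimal sketch of one way to make `rect` segments flow, by chaining translate and rotate so each segment inherits the bend of the ones before it (the wave driving the angles is made up):

```typescript
// Draw a spine of rect segments that flow: each segment inherits the
// accumulated rotation of the previous ones, like a chain of joints.
function drawSpine(
  ctx: CanvasRenderingContext2D,
  t: number,
  segments = 10,
  segLen = 20
): void {
  ctx.save();
  ctx.translate(100, 100); // head position (arbitrary)
  for (let i = 0; i < segments; i++) {
    // Made-up wave: each joint bends slightly, phase-shifted along the body.
    ctx.rotate(0.3 * Math.sin(t + i * 0.6));
    ctx.fillRect(0, -4, segLen, 8); // one bone segment
    ctx.translate(segLen, 0); // move to the next joint
  }
  ctx.restore();
}
```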
Here is my stylus and electronic-paper sketch. Hope it is fine.
The video about timekeeping devices made me imagine and think through many interesting problems we will need to solve in the future regarding timekeeping on the scale of light-years. With relativity, timekeeping is non-trivial and genuinely demands research. Many applications, such as interplanetary Amazon delivery and interplanetary Twitter, could benefit hugely from the next generation of timekeeping devices.
(Visit https://kokecacao.me/page/Course/S22/60-212/report/2022-02-06.md for a better experience)
2022-02-06
Unlike many looped GIFs, this one carries a nice conceptual meaning beyond visual satisfaction. Titled “Rush Hour”, the GIF highlights the complexity of humans’ decentralized route-planning ability. Think of the subway in NYC: somehow people are able to navigate through the crowd without any communication. Even with full communication, this is an NP-hard problem. The image, with its high visual complexity, highlights this fact vividly.
It is hard to play with colors of high saturation, and this image demonstrates that well.
It is just an image visualizing a 2D function, but it made me stare at it for a long time. As art, it does not sacrifice any accuracy to make the visualization more interesting.