Mengyu Chen, Mert Toka
Environment
2nd floor, room 2005
GANesis is an artificial-intelligence-mediated immersive space in which all the visual forms and content are computed and generated in real time by a 3D generative adversarial network (3D-GAN). By training a generator capable of producing voxelized shapes and point clouds of plants, everyday objects, mathematical geometries, and human bodies, we are able to produce a fictional virtual environment filled with variations of abstract, natural, and artificial objects.
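For readers curious how such a voxel generator is typically structured, the sketch below shows a minimal 3D-GAN-style generator in PyTorch. It is illustrative only: the class name, layer sizes, latent dimension, and 64^3 output resolution are assumptions, not the exact model used in the installation.

```python
import torch
import torch.nn as nn

class VoxelGenerator(nn.Module):
    """Maps a latent vector z to a 64x64x64 occupancy grid via transposed 3D convolutions."""
    def __init__(self, z_dim=200):
        super().__init__()
        self.net = nn.Sequential(
            # z (z_dim x 1 x 1 x 1) -> 4^3 feature volume
            nn.ConvTranspose3d(z_dim, 256, kernel_size=4, stride=1, padding=0),
            nn.BatchNorm3d(256), nn.ReLU(inplace=True),
            # 4^3 -> 8^3
            nn.ConvTranspose3d(256, 128, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm3d(128), nn.ReLU(inplace=True),
            # 8^3 -> 16^3
            nn.ConvTranspose3d(128, 64, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm3d(64), nn.ReLU(inplace=True),
            # 16^3 -> 32^3
            nn.ConvTranspose3d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm3d(32), nn.ReLU(inplace=True),
            # 32^3 -> 64^3, one occupancy channel in [0, 1]
            nn.ConvTranspose3d(32, 1, kernel_size=4, stride=2, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, z):
        # z: (batch, z_dim) -> (batch, 1, 64, 64, 64)
        return self.net(z.view(z.size(0), -1, 1, 1, 1))

# Sample one voxel object from a random latent code.
G = VoxelGenerator()
voxels = G(torch.randn(1, 200))  # occupancy probabilities; threshold (e.g. at 0.5) for a solid shape
```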
A GAN consists of two neural networks: a generator, which produces synthetic data, and a discriminator, which is tasked with distinguishing the generator's synthetic data from real data. The dataset we use to train the discriminator is curated to contain various types of objects that imply life forms. In doing so, we let the generator enact a metaphorical narrative as a perpetual machine that constantly tries to turn itself into a conscious being. At the same time, the visual narrative of GANesis, though partially defined by our methods of point cloud rendering and visualization, is constantly changing with no preset form, as the generated artificial objects can interpolate from one type to another in the latent space learned by the generator and form new environments. The audience can thus navigate the borderless space, diving into the deep mind of artificial intelligence.
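The latent-space interpolation mentioned above can be illustrated with a short sketch, reusing the hypothetical VoxelGenerator from the previous snippet: blending two latent codes step by step morphs one generated object into another, which is what allows the environment to drift between forms with no preset shape.

```python
import torch

z_a = torch.randn(1, 200)  # latent code for one object
z_b = torch.randn(1, 200)  # latent code for another

steps = 8
with torch.no_grad():
    for t in torch.linspace(0.0, 1.0, steps):
        z = (1 - t) * z_a + t * z_b  # linear interpolation in latent space
        voxels = G(z)                # intermediate shape between the two objects
        # render `voxels` (or a point cloud derived from it) into the scene here
```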