The figure of the cyborg, defined by Haraway as ‘a cybernetic mechanism, a hybrid of machine and organism’ (1991, 149), has become a familiar one across different media. This talk will examine the ‘place’ of the body in contemporary cyberculture from an historical perspective: from Mary Shelley’s patchwork monster in Frankenstein, to the female android in Fritz Lang’s Metropolis, not forgetting the replicants of Ridley Scott’s Blade Runner and the Japanese Astroboy and Gigantor, precursors of the various Terminators and Robocops. Our aim is to discuss the ways in which such representations of the cyborg address real contemporary fears (and desires) about the post-human body, cloning technologies, and the simulated (hyper)reality of the cyborg world.


Dr Anna Notaro
Programme Leader
Contemporary Media Theory
School of Media Arts & Imaging
Duncan of Jordanstone College of Art & Design
13 Perth Road
Dundee  DD1 4HT (UK)


The machines are created because they are a natural integration of my innate abilities and passions. The act of working is, in essence, play and the machines that result are the byproducts of these moments. I am moved by many challenges: engineering, figuring things out, and imagining ways to manipulate materials. Little bits of wire, motors, or things that I encounter in my daily life are assembled into systems that physically move, and thereby are animated with a kind of “life-energy”.

The process is important and the resulting machines are what is left over. There is no goal. There is no over-reaching direction. There is no separation between the making of machines and my life as a whole.




This talk outlines the work that is being done at the University of Essex and the University of Bristol on an EPSRC Adventure Fund project to build a conscious robot. Given the early state of consciousness research and no hint of a solution to the so-called hard problem, this might appear to be a rather premature and preposterous project. However, there are some grounds for thinking that a start could be made on the development of machine consciousness. These include rapid progress in our knowledge about consciousness, a large body of work on the neural correlates of consciousness and the establishment of a number of theories about consciousness that have a substantial amount in common. All of this suggests that although we cannot design a conscious robot in the same way that we can design a red robot, we can at least build a robot that includes many correlates of consciousness and which will be attributed some form of consciousness by a majority of the mainstream theories.
To maximise the chances of consciousness in the robot we are pursuing a biological approach wherever possible and the design of the robot’s body and ‘brain’ has been closely based on the human body and brain. Human beings are the paradigmatic examples of conscious creatures, and so it can be argued that a close copy of the human body and brain is more likely to become conscious than artefacts with little or no connection to human biology – something that is especially true if consciousness is an emergent phenomenon.
The main components of this project are a hardware robot called CRONOS, a virtual copy of this robot known as SIMNOS, a spiking neural simulator (SpikeStream) and a systematic approach to describing the synthetic phenomenology of the system. After covering each of these in more detail I will explain how we plan to identify signs of consciousness in the robot and finish with an outline of previous work in this area and some directions for future work on machine consciousness.
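The character of a spiking neural simulator like SpikeStream can be conveyed with the classic leaky integrate-and-fire model of a single neuron. This is a generic textbook sketch with illustrative constants, not code or parameters from the CRONOS/SIMNOS project itself:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential leaks
# toward a resting value, is driven up by input current, and emits a spike
# (then resets) whenever it crosses a threshold. All constants are illustrative.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Integrate a sequence of input currents; return the spike times."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Leaky integration: decay toward rest plus the driving input.
        v += (-(v - v_rest) + i_in) * dt / tau
        if v >= v_threshold:          # threshold crossing -> spike
            spikes.append(step * dt)
            v = v_reset               # reset after the spike
    return spikes

# A constant suprathreshold current produces a regular spike train.
spike_times = simulate_lif([1.5] * 200)
print(len(spike_times))
```

A full simulator runs many thousands of such units connected by weighted synapses; the point here is only that "spiking" means discrete, timed events rather than continuous activation values.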




The computer is a programmable machine that can store, retrieve, and process data. It is also a metamachine, a machine for creating virtual machines. Each time one starts a program on a computer, it is turned into a different virtual machine following different rules. These rules, a series of actions, define a process. Processes appear at several levels in the computer. First, text instructions are formalised to define the inner mechanism of a system. These instructions are translated into a program through an interpretation process. Then the generated machine-language rules are set into motion, resulting in a complete piece of software, a virtual machine. This software can in turn describe rules, and artificial entities that follow them.
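The layering described above can be made concrete with a toy interpreter. This hypothetical example is not from the artwork itself; it only illustrates the idea that a text instruction is interpreted into a rule, and an artificial entity then follows that rule as a running process:

```python
# Toy illustration of a metamachine's layers: a text instruction is
# interpreted into an executable rule, and an entity repeatedly applying
# that rule constitutes a process. The instruction language is invented.

def interpret(source):
    """Translate a tiny text instruction like 'move 3' into a rule (a function)."""
    op, arg = source.split()
    if op == "move":
        return lambda state: state + int(arg)
    raise ValueError(f"unknown instruction: {op}")

rule = interpret("move 3")   # interpretation: text -> virtual-machine rule
state = 0
for _ in range(4):           # the process: an entity following the rule over time
    state = rule(state)
print(state)                 # the state after four applications of the rule
```

The same shape recurs at every level the text names: the interpreter is itself a program that some lower-level machine is executing.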

The work Machine/Process, rooted in the history of conceptual and process artworks, explores the layers of machines and processes present in a metamachine. The piece is a work of generative art in itself: it employs computational algorithms to create abstract geometric animations in real time. The result is an animated, living environment created by continuously changing dynamic graphics. Furthermore, it also reveals the invisible, elusive inner workings of virtual and real machines through the creation of a community of artificial agents.



// Boutique Vizique

being lost

quote from ‘A Field Guide to Getting Lost’, by Rebecca Solnit
(The word “lost” comes from the Old Norse los, meaning the disbanding of an army, and this origin suggests soldiers falling out of formation to go home, a truce with the wide world.)

“Leave the door open for the unknown, the door into the dark. That’s where most important things come from, where you yourself came from, and where you will go.”

In 1994, we met at the Royal Academy of Fine Arts in Ghent, Belgium, where we both studied photography. Neither of us was really practicing the profession when, in 2000, we put our computers side by side in the living room to work together on a VJ set. I think we were totally ‘lost’ at that point. We created a setting that felt like coming home to us, but we had no clue what we were really doing. It turned out that for the next three years, those computers stayed in that living room. And looking back, I think that is about the only thing that stayed the same. Generally, our work process would look like this: we would engage with some piece of soft- or hardware for a while and, once we sort of ‘controlled’ it, move on to something else, something new. Often we would force ourselves into projects we had no experience with at all. It resulted in a series of short videos, TV commercials, some graphic design and a long series of VJ sets, trying to make a living out of it while at the same time learning all these new tools.

realtime video processing

And that’s how it went on. We were doing a lot of video in collaboration with several musicians, but missed the possibility of changing our images on the fly.
So, we started writing our own AV software, in the Max/MSP/Jitter programming environment, to create the desired results. During several AV performances we would basically be on top of the musicians, trying to get some sort of input from their actions. When we reached the point where their sound influenced our images directly, we realized that our presence on stage was no longer important.
Well, … what if we turned the audience into the performer?!

audience as performer

Originally still working with video as the main medium, our work all of a sudden became three-dimensional. This was a big change for us, and we both look back on those early experiments with a blush on our faces. Somehow we were lucky enough to let things sprout from these early try-outs. Several of our installations have gone through a series of developments in order to become what they are today. We were given multiple chances to fail, try, and fail again.

Soon we realized that the most interesting aspect of this kind of artwork is that the line between creator and audience becomes blurry. Every visitor lives in his/her personal universe and, while interacting with the installation, awakens part of that world and shares it with the other people present.
And that is one of the initial ideas from which our Dustbunnies arose.

// Dustbunnies

So, our goal was to create an interface that would allow both individual and multi-user interaction. As a reflection of human social behavior, we came up with the idea of creating a group of separate but similar interfaces that would itself behave as a group. Their internal communication became the centre of our focus, creating an installation that, in its reactive behavior, prompts visitors to consider their own actions and movements. As a result, communication, in whatever form, among the different visitors, young and old, is inevitable.
tools are toys are tools are ..
Technically, the dustbunnies are seven ball-shaped, soft sculptures that contain a whole set of electronic components. And although we use some of the latest technology, all of this is hidden as far as possible. It’s their guts and brains, but just like ours, no fun to look at. Yet we care about all the pluses and minuses, because they give the dustbunnies a soul that gets reflected in their actions. Their cute look aims to break social boundaries more easily; and playfulness is a recurring element in all of our work.

In order to penetrate their mysterious world you have to give up some of your own personal freedom. If that’s out of the question, you may try to reveal their secrets by taking a closer look. But be careful! As soft as they look and feel, touching a dustbunny will cause furious screaming by all members of the colony. The whole group will show its dislike. By doing so, they stimulate a certain group exploration by the visitors. One person interacting will cause a totally different reaction than a group intervention.

// Q&A
A major change happened within the Boutique Vizique structure in 2003. Fifty percent of our little collective moved to another country far away. The other fifty percent stayed loyal to the home base!
Originally, our practice consisted mainly of interventions together with other artists. As it turned out, this collaborative aspect became less important the moment we geographically parted. Our woolgathering, however, survived and, focusing more on our own personal ways of expression, engendered a series of affable interactive installations. We hope to go on for a long time!



The machine may become smarter than you

(from Aftenposten, 22.08.03):

What is the point of creating something smarter than human beings? Artificial, superhuman hyperintelligence may turn out to be exactly what we need to solve the riddle of cancer and countless other problems we are unable to solve today. But such an intelligence also has the potential to throw the world economy off balance or, in the most extreme case, to take control of the planet away from us, just as countless science fiction novels have described. The surprising thing is not that such prospects exist, but that artificial intelligence (AI) may become a reality before most of us realize it.
Associate Professor Jørn Hokland of the Department of Computer and Information Science at NTNU is a computer scientist and mathematician, but he has had an intense interest in psychology and neurophysiology all his adult life. He believes he is on the trail of the knowledge that will be the most revolutionary insight we have ever had: how to create a machine that is smarter than ourselves. But to do so, we humans must first understand how the brain learns.
- The brain is the most complex object we know of, and therefore the area in which our understanding has advanced the least. That does not mean that learning is a mystical process. The thoughts and feelings we have are the result of electrical signals between neurons in the brain. We know a great deal about the neuron, but the actual mechanism of learning is still unknown. I am quite certain that sooner or later we will understand how the neuron works and how the brain acquires knowledge. All we need is to understand how a single neuron varies its sensitivity to other neurons so that the brain learns. Once we understand this and can describe it mathematically, constructing an intelligent machine is a pure engineering task, says Jørn Hokland.
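Hokland's point, that learning hinges on how a single neuron varies its sensitivity to others, can be illustrated with the classical Hebbian rule ("cells that fire together, wire together"). This is a generic textbook sketch of one well-known candidate mechanism, not Hokland's own model:

```python
# Hebbian plasticity sketch: a synaptic weight (the "sensitivity" of one
# neuron to another) grows when pre- and postsynaptic activity coincide.
# The learning rate and activity values are illustrative.

def hebbian_update(w, pre, post, lr=0.1):
    """Strengthen the weight in proportion to coincident activity."""
    return w + lr * pre * post

w = 0.0
# Correlated activity: both neurons repeatedly active at the same time,
# so the connection between them strengthens step by step.
for _ in range(10):
    w = hebbian_update(w, pre=1.0, post=1.0)
print(round(w, 2))
```

Whether some refinement of a rule like this actually describes cortical learning is precisely the open question Hokland refers to.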

He stresses that whether we should build such a machine is an entirely different question. Hokland believes that society must be aware of this issue and take a position on it.


While biological systems have served as mimetic maps from which to study, emulate and create artificial life artworks, biological intelligences have never existed in individuals alone. In biological systems there is a complex symbiotic coupling between all levels of living beings (micro and macro), as well as a structural coupling with the matter, energy and information that exist in an ecosystem. Consequently, approaches to artificial life art installation should be expanded to incorporate and consider complex environmental and behavioral entities (humans), which may contribute to emergent properties exhibited in these systems.
Natural living systems are competitive, communicative, and symbiotically intertwined, and yet we have few examples of functioning machine installations designed to exhibit bio-machinic symbiosis. Ken Rinaldo will review early works as important precedents to his later artworks and scientific investigations, which expand notions of artificial intelligence, biological art, transpecies communication and artificial life. He proposes that an awareness of ecology and symbiosis in biological systems can point to software and hardware approaches that look to the environment in which our intelligent machines may arise, emerge and intertwine.



I am fascinated by the idea of mechanical devices which have unpredictable “lives of their own”… sets of internal rules and cycles which give them autonomous and surprising behaviors. Since 1966, I have been building kinetic devices that have deliberately minimal visual appeal, yet a strong behavioral dimension. Their behavior derives partly from the materials that I employ… electronic components, motors, pulleys, gears, etc. The tendency for such materials to wear and break echoes my own mortality and provides me with yet another way to transcend my own intention and control.

As an artist, I am concerned with the social framework in which I present my creations to the public. I believe that for too long Western society has clung to the idea that exhibiting in galleries and museums is integral to art practice. The result has been the alienation of large sectors of a society who feel intimidated by the highly controlled, self-conscious aura of the average gallery. Therefore my projects over the last twelve years have sought out ways to bring art to all people of a given place, especially those people who tend to avoid institutionalized art venues.

The above conceptual mix — experimentation, machine behavior and vulnerability, computer physicality, broadened public access — has led me inevitably to robotics. It has become for me but another form of portraiture, rife with myriad possibilities of introspection, irony, drama, farce, and social commentary.



Even though there is no general agreement on the definition of life, there is a cluster of properties connected to life: growth, reproduction, adaptation, responsiveness, metabolism, movement. Sometimes autonomy, development, and evolution are also mentioned. In general, life is regarded as a complex biochemical machine. These bio-machines¹ have been available for study, as carbon-based life forms, for many years now, but at the same time researchers and artists have always had a strong interest in simulating biological phenomena through the use of biochemistry, mechanics, robotics or computer models.

It started with simple captures of living organisms in statues, drawings and paintings with movable body parts that needed human power to be activated. After further advances like the early Egyptian water clocks (clepsydrae), based on the technology of water transport, and the Pneumatics produced by Hero of Alexandria, the first more complicated forms of simulation were developed in the age of mechanical clocks. Complex internal linkages made the simulation of life-like motion possible. Famous inventions like Vaucanson’s Duck and the Lady Musician of Pierre Jaquet-Droz (1774) followed and opened a new direction in artificial life. The copies of nature became more and more complex, leading to contemporary high-tech robots which, in certain respects, simulate human senses and movements almost perfectly.

As we can see, one important push for building artificial life was the desire for machines that could help make everyday life more comfortable. Today these support functions of robots are very complex and only traceable by teams of experts and computer-based systems. Nevertheless, the fascination of creating life is still present: not to realize basic functions, but as an opportunity to communicate ideas of life and its philosophy in an artistic context. Our motivation is the enthusiasm of creating living things, observing their independent behaviors in the lab and in nature, and people’s reactions when they come into contact with simple life forms. In this case, art is technology. We do not rebuild organic creatures with the feeling of being forced to use ugly technology. We explored technology, especially small electronic components and their functions, which made us think of the ’elf project. It is fascinating to take very un-organic material, put it together in a way that is still recognizable, and add some simple, pure function that gives it this living expression.

The whole idea of this project is the exploration of technology and its placement in a new context/environment/perspective that questions the relationship between technology, nature and humans.




“The Evolution of an Idea”

In the mid-1960s, when I first encountered the use of computers to make music, the world was abuzz with visions of a new musical utopia enabled by technology. I will trace some of the history of these ideas and then narrate my experiences to the present day, exploring the realization of these dreams: how they have been realized, how they have evolved, how they have fulfilled our expectations or failed to do so, and how the original ideas seem in retrospect.



Busby Berkeley choreographed dancers to mimic the motions of machines and modern inventions. “AutoGene” is the flipside of this. It is a simple, aesthetically minimal robot composed of eight modified umbrellas mounted in a circular pattern. A cocktail of air hoses and electrical cables joins these umbrellas to a central computer, which enables “AutoGene” to produce a choreographed dance to music that erodes the machine’s mechanical qualities and transforms the mundane umbrellas into magical animated objects.



At times I have described my work as falling under the categories of wearable sound, interactive electronic installation, and digital imaging. At other times I have grouped it in with the disciplines of video telepresence, animation, performance, and robotics. Like many artists working amongst and between these categories, the final materials that are used in my production tend to be of less importance than the continuity of the artistic research and ideas behind them. I am driven by a fascination with the design and construction of systems that are organic-electronic hybrids and that have a true function that goes beyond their existence as cultural objects. I am interested in making creative machines - machines that independently make art when cross-pollinated with human interaction. These artworks’ behaviors are based in generative algorithms embedded within them. I use networks, reflexivity, and the interplay of randomness and pattern to initiate a genuine engagement of the machines with human visitors and vice versa. I view such innovations as tools for creating new experiential frameworks that question the nature of creativity and authorship.

In 1999 I began experimenting with ideas of how common elements of household and commercial building interiors could be repurposed into informative and metaphoric interfaces. My first artwork along this line of inquiry was the Unstoppable Hum (1999). This was a large sculptural artwork that monitored various electromechanical activities in a building such as automatic doors, security, ventilation, computer and phone systems. It tracked their patterns of activity and then created musical compositions on the fly that expressed the complexity of that activity. Humans thereby were informed of when and how much the structure around them was working towards maintaining the safety and comfort of their habitat. In 2002 I created Dry Translator, which included two custom-designed audio vests and an interactive wall. When gallery visitors touched the drywall of the gallery, they heard the sound of their touch not locally, where their fingers hit the wall, but on their own torso, as the sound was amplified and transmitted wirelessly to the audio vest they were wearing. The walls thereby became skin-like extensions of the participant’s own body. Participants were also able to record a series of touches or gestures on the drywall via an interactive console and, in doing so, leave a “touch message” for the next participant to play back on the vest.

My most recent research continues to focus on architectural interfaces but now via telepresence and with an aesthetic of weightlessness. I am producing a new series of video-enabled rovers (robotic vehicles), some of which may be operated over the Internet. This current work is inspired in part by the exploratory rovers sent to Mars as well as the concurrent proliferation of personal webcams. Webcams have become a phenomenally pervasive tool for exploring remote public and domestic landscapes. The first work completed in this series is searchstoretrash (2003). For this piece, I custom fabricated a track of thin translucent material that extends up, down, and through a gallery space. Visitors are able to navigate the track via a remote-controlled rover with a live video feed. They thereby “experience” the gallery space from either a mouse’s or a fly’s eye point of view depending on the rover’s location. This is an opportunity for people to explore the gallery environment from odd and unusual vantage points and perhaps, through this process, gain some perspective, both literally and figuratively, on that environment.

Artists of influence to my work are David Rokeby, George Gessert, Joe Davis, Sarah Sze, Charlie White, Inigo Manglano-Ovalle, R. Buckminster Fuller, and Louis Bec.






Partial Head, Walking Head & Extra Ear.

The EXTRA EAR has now been constructed on my arm. A right ear on a left arm. An ear that does not hear but transmits. A facial feature has been replicated, relocated and repositioned elsewhere. Excess skin was created with an implanted skin expander in the forearm. By injecting saline solution into a subcutaneous port, the kidney-shaped silicone implant stretched the skin, forming a pocket of excess skin that was used in surgically constructing the ear. When electronically complete it will form part of a distributed Bluetooth headset. I will be able to speak to the remote person through the Extra Ear but will hear the sound of the person speaking to me in my mouth. If my mouth is closed, only I will be able to hear them. If I open my mouth and someone is close by, they will hear the sound of the remote person from within my mouth.

The PARTIAL HEAD was a project that was generated from the image of the flattened digital skin that was made for the PROSTHETIC HEAD (a computer-generated head that speaks to the person who interrogates it). But with the PARTIAL HEAD, my face was scanned. A hominid skull was scanned. We digitally transplanted the face over the skull, constructing a third face, one that is post-hominid but pre-human in form. A scaffold of ABSi thermal plastic was formed using a 3D printer. The scaffold was seeded with living cells. The PARTIAL HEAD is a partial portrait of the artist, partially living. Its life-support system was a custom-engineered bioreactor, incubator and circulatory system which immersed the head in nutrient solution kept at 37°C.

The WALKING HEAD is a 2 m diameter, six-legged autonomous walking robot. Vertically mounted on its chassis is an LCD screen imaging a computer-generated human-like head. The robot has a scanning ultrasound sensor that detects the presence of a person in front of it. It sits still until someone comes into the gallery space; then it stands, selects from a set of movements in its library of preprogrammed motions and performs the choreography. It then stops and waits until it detects someone else. The robot performs on a 4 m diameter platform, and its tilt-sensor system detects when it is close to the edge, at which point it backs off, walking in another direction. The Walking Head robot will become an actual-virtual system in that its mechanical leg motions will actuate its facial behaviours of nods, turns, tilts, blinks and its vocalizations.

The recent projects tentatively and imperfectly explore alternate anatomical architectures that incorporate physiologically plausible structures and re-wirings. They also postulate hybrids of biology and technology, and actual-virtual chimeras: operational and living systems as mixed and augmented realities. In so doing they expose the obsolescence of the body and question its present form and functions.


Heide Museum of Modern Art, Melbourne, Australia
18 July- 29 October, 2006
Photograph: Stelarc


Melbourne, 2003
Photograph: Polixeni Papapetrou
