Tuesday, December 11, 2012

Into The New Dimensions We Go




----recreating the concept of senses through augmented reality


On numerous occasions, I’ve had frustrating experiences at the Kinko’s copy centre. “Sorry, the oversize scanner is not working.” “Sorry, we’re busy right now; you won’t be able to pick up your prints until 8pm tonight. Or you can come back next week!” Somehow I’ve always had to carry my horrendously large portfolio bag all around town, through the billowy east coast winter, in search of a properly working scanner, just to get the costly service, after which, mind you, I still have to adjust the colour and scale myself. The print shop’s monopoly on the technology limits my ability, as a painter, to explore new horizons with different surfaces and materials. I couldn’t help but wonder: if all images are just multitudes of dots carrying information about their colours and positions, why can’t we figure out a more flexible way to transfer them from one surface to another, instead of passing them through machine after machine and risking a loss of quality at every step? I can’t help but wonder how much it would cost to print the backdrop for a theatre scene, as well as the marquee, the posters, and everything else whose size cannot be managed in a single pass.


I believe that digital media and Internet technology can help us achieve much, and that we can use basic human behavioural patterns to predict the next steps of development in this industry, especially at the frontier of augmented reality, where everything imaginable is possible.

Creations in technology are mostly motivated by the desire to re-mediate activities achievable only at one scale or in one dimension onto another: from something as “simple” as making Instagram compatible with Android phones, to something as complicated as what Pattie Maes and Pranav Mistry’s SixthSense can achieve: borrowing the entire internet as an external brain and retrieving information to help understand, or capture, pretty much anything in one’s environment.


Zoom In and Zoom Out
----the way a computer “sees”


Technology has enabled us to zoom in and see the most basic particles. As a matter of fact, such a notion is nothing new in the art world. Over a century ago, Georges Seurat was already blending coloured dots to create his luminous paintings, whose perception depends heavily on the viewers as their retinas receive the bits of colour information and their visual receptors mix them together. Vision is probably the first sense that humans recreated with a machine. A camera can “see” and then break whatever it captures into pixels of different colours, each labelled with the most basic data so it can be easily processed (quite similar to the rods and cones in human eyes!). Standing at a distance, our eyes cannot possibly take in the information in every pixel of an image, although we can make out the whole picture: we make a mental scan of the image based on what we simultaneously receive from a continuum of space. Closing in, we take in bits of colour and curve and put the pieces together, forming a more detailed, accurate scan in our visual cortex. If we remediate that process digitally, we need a camera (a visual receptor) that captures all the visual information and translates it into digital data. With a Bluetooth connection, the data can be transmitted to a storage system, from which a printer can carry out the output process, or, as an alternative, a projector can recreate the image on any surface. Such a concept is already in the works, as portable scanners with Bluetooth can now scan standard-sized documents in seconds --- with that in mind, scanning oversized material won’t be a far stretch.
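To make that remediation concrete, here is a minimal sketch of the pipeline the paragraph describes: a capture step turns a surface into labelled pixels, a transfer step moves them into storage, and an output step rebuilds them on a new surface. This is purely my own illustration under those assumptions, not how any particular scanner or printer actually works, and every name in it is hypothetical.

    from dataclasses import dataclass
    from typing import List, Tuple

    # A pixel is just position plus colour information, as described above.
    @dataclass
    class Pixel:
        x: int
        y: int
        rgb: Tuple[int, int, int]

    def capture(surface: List[List[Tuple[int, int, int]]]) -> List[Pixel]:
        """The 'visual receptor': read a grid of colours into labelled pixels."""
        return [Pixel(x, y, rgb)
                for y, row in enumerate(surface)
                for x, rgb in enumerate(row)]

    def transmit(pixels: List[Pixel], storage: List[Pixel]) -> None:
        """Stand-in for the Bluetooth hop: move the data into a storage system."""
        storage.extend(pixels)

    def reproject(storage: List[Pixel], width: int, height: int):
        """Rebuild the image on any 'surface' (printer, projector, screen...)."""
        canvas = [[(255, 255, 255)] * width for _ in range(height)]
        for p in storage:
            canvas[p.y][p.x] = p.rgb
        return canvas

    # A 2x2 'painting' travels from one surface to another without a copy shop.
    original = [[(255, 0, 0), (0, 255, 0)],
                [(0, 0, 255), (0, 0, 0)]]
    store: List[Pixel] = []
    transmit(capture(original), store)
    assert reproject(store, 2, 2) == original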

The visual applications of augmented reality can be quite beneficial in the performing arts, where large-scale scenes need to be recreated and changed as quickly as possible. If we could build stage scenery and present plays and musicals with augmented reality, I dare say theatre designers would have much more liberty to push the boundaries of what is possible on stage with limited space and budget. Similarly, interior designers would be able to test their concepts and lay out their designs in the actual space -- one step more advanced than the 3-D modelling software used today.

This kind of computerized visual sense is only the first step. Making photocopies, making projections true to the original images, are simply “receiving the images and give out the exact same mirror feedback”. There’s no logical inference taking place. Let’s now take another look at how human brain processes a visual stimulation. After the receptor captures the image, the language cortex gets stimulated. Usually we receive more than one stimulant at once, as a result, simultaneous cognitive functioning takes place, retrieving stored mental images from long term memory that are related to current vision. This process is recreated by Pattie Maes and Pranav Mistry’s SixSenses. The camera received image, followed by the projector giving out feedback of information related to what the sensor “sees” - the only difference (besides the fact that all the sensors are artificial) is that the database it retrieve knowledge from is the entire internet as opposed to individual brain. In other words, if said technology becomes accessible on the market, the entire world will be sharing one single brain eventually: the response to images becomes highly objective and doesn’t vary based on emotions or personalities; our reactions to perceived information becomes purely intellect...now that’s a scary notion, but I dare say, it seems to be the direction we are heading towards.
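The difference between a mirror copy and an inference can be sketched in a few lines. The example below is purely illustrative and has nothing to do with SixthSense’s real implementation: one function returns the image unchanged, the way a photocopier does, while the other looks the image up in a shared knowledge base and returns related information, the way the projector overlays facts about what the camera sees. The labels and “facts” are invented.

    # A toy contrast between 'mirror feedback' and associative retrieval.
    # The knowledge base stands in for the shared, internet-scale 'brain';
    # its contents are made up for illustration only.
    KNOWLEDGE_BASE = {
        "boarding_pass": ["flight delayed 40 minutes", "gate changed to B12"],
        "book_cover": ["4.2/5 average reader rating", "author interview available"],
    }

    def mirror(image_label: str) -> str:
        """A photocopier or projector: give back exactly what was received."""
        return image_label

    def associate(image_label: str) -> list:
        """A SixthSense-like step: return stored information related to the image."""
        return KNOWLEDGE_BASE.get(image_label, ["no related information found"])

    print(mirror("boarding_pass"))     # -> 'boarding_pass'
    print(associate("boarding_pass"))  # -> ['flight delayed 40 minutes', ...]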

Hearing is not Recording; Memorizing is not Archiving
---computerized auditory system

Similarly, machines’ auditory sensors are becoming more “logical” and “flexible” nowadays. Of course, we’re not just talking about a machine’s ability to “hear” sounds, but in particular its ability to decipher language and communicate with human operators. From Siri to Android’s voice recognition, the “little people living inside the mystery boxes” can not only record and store sounds but also give responses in a logical fashion. This is surely what we’ve been trying to achieve throughout human history: we hold the belief that machines don’t lie, so we’ll get the most accurate feedback (as long as those responses are based on what we consider facts). Of course, at this point our computers’ ability to hold a conversation is still similar to that of an Alzheimer’s patient: they have perfect long-term memory of all the facts of the universe --- in fact, once something is heard, it can never be erased --- yet they cannot logically relate a user’s two consecutive questions.


One example is Cleverbot, a kind of interactive artificial-intelligence engine, similar to the computer-generated chat avatars AIM used to have. Try asking a question and then a follow-up question based on the answer, and you’ll get something completely out of context.

Here’s a brief conversation I had with Cleverbot:

User: I need an idea for my final paper on augmented reality.
Cleverbot: I will give you one, but I need to know your gender.
User: Female.
Cleverbot: And your age?
User: 23.
Cleverbot: You were 14 before?
User: Yes I were.
Cleverbot: That sentence had horrible grammar.
User: Yes indeed. You haven't given me a topic on augmented reality yet.
Cleverbot: I know but do you think I can't be funny?
User: You better be sarcastic than funny
Cleverbot: I believe that.


Cleverbot is definitely one step up from traditional AI in that it can be witty and generate conversation that seems to make sense. However, we still can’t expect it to be truly creative or to give sound advice in response to open-ended questions. Personally, I think it would be more entertaining if Cleverbot were named Eleanor Rigby, because I’ve learned that people will spend hours talking to this lonely woman.
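The “Alzheimer’s patient” problem above is really the absence of conversational state. The sketch below is a deliberately crude illustration of that point and has nothing to do with Cleverbot’s actual engine: the stateless function answers every question in isolation, while the small class keeps the previous turns so a follow-up has something to bind to. All keywords and canned answers are invented.

    # A crude illustration of why follow-up questions derail a stateless bot.
    CANNED = {
        "topic": "How about the ethics of augmented reality?",
        "age": "Why do you ask?",
    }

    def stateless_reply(question: str) -> str:
        """Each question is answered in isolation; 'it' and 'that' mean nothing."""
        for keyword, answer in CANNED.items():
            if keyword in question.lower():
                return answer
        return "I don't follow."  # any follow-up that leans on context lands here

    class ContextualBot:
        """Keeps the conversation history so follow-ups have something to refer to."""
        def __init__(self):
            self.history = []

        def reply(self, question: str) -> str:
            answer = stateless_reply(question)
            if answer == "I don't follow." and self.history:
                # Fall back to the last thing we actually talked about.
                answer = f"Are you still asking about: {self.history[-1][0]!r}?"
            self.history.append((question, answer))
            return answer

    bot = ContextualBot()
    print(bot.reply("Give me a topic for my paper."))  # canned topic suggestion
    print(bot.reply("Can you narrow it down a bit?"))  # refers back to the last turn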

An even more coherent AI conversationalist was created by the same developer in the style of a British intelligence officers’ exam inspired by the 007 films. The AI, this time with the speech pattern of an agent, gives the user a scenario, based on which the user has to come up with instructions. The conversation seems even more logical, as if you were talking to a real person. Compared with the open-ended conversation Cleverbot generates, the Secret Agent game has a single topic, which is more manageable for a programmed AI. Similarly, the automated voice-recognition systems at various companies (e.g. banks, phone companies, postal services) also process only single instructions, although they can now recognize speech under ideal conditions (that is to say, the user needs to speak clearly and there must be no external noise or interference). This might be the most annoying development in technology, for I believe everyone has experienced frustrating and pointless conversations with answering systems that simply refuse to transfer you to a human being right away. Of course, the faith remains that eventually these systems will become more user-friendly: they will recognize more complex sentences and different speech patterns, form logical connections between dialogues, and answer questions more flexibly based on individual needs.


As We Stroke the Illusions
---Computerized Tactile Senses


Since the invention of the touch screen, our relationship with computers has become closer than ever. First we saved space by eliminating the keyboard, bringing all operations onto one surface. It marvels us to see how we can push the boundaries of interfaces and spread our everyday tasks across one continuum. As we have seen from Corning’s innovations, the future of modern civilization is becoming more and more, shall we say, fool-proof.

Here’s the expanded version of A Day Made of Glass




     

In this version, the applications of Corning’s technology are demonstrated in various fields, including education, medicine, science, and history. There’s also an augmented reality element at play: the little girls in the video have a vivid interaction, through a thin layer of glass, with dinosaurs from millions of years ago; brain surgeons can assess patients from a perspective like never before: technology reduces the risk of surgical damage and enables medical workers to make a more accurate evaluation before taking action; children learn about the world without having to acquire any tools, for the entire world is at their fingertips... However, it also gives me a sense of uneasiness to see that in the near future we will be receiving information exclusively by touching a piece of glass, tapping and pressing on invisible buttons with no difference in texture one way or another. It seems to me that we are creating a moat between the world and ourselves in the process of diminishing the distance of space --- indeed, the end of the world might be within arm’s reach thanks to the internet and augmented reality, but who will be embracing the real reality? Children of the next century, or even sooner, the next decade, will no longer smell the waxy scent of crayons as they learn about different colours. They will no longer learn to endure falling down and bruising themselves on field trips to learn about nature: we will perceive the world increasingly like a computer, for everything will be presented to us through screens; we will be touching everything so easily, with nothing to hold on to.

Of course, scientists have thought of that problem too, and are trying to come up with an alternative to what I call the monotonous sensory experience optimized by technology, and to find ways to recreate tactile sensations. In other words, as humans receive and process information increasingly like machines, machines are gaining increasingly human sensitivities.

Augmented reality is no longer limited to the visual and auditory, but is incorporating tactile sensations as well. In other words, you can practically touch anything, wherever it might be, however impossible it might seem: you can touch a tiger’s nose and feel its breath without fear of being eaten; you can touch the bottom of the ocean and feel the sand miles under water... You might want to ask: does that mean teleportation will be made possible? Perhaps the theory isn’t yet conceivable and we might not be able to transmit ourselves to different places, but we could achieve some of the goals we wanted teleportation for in the first place. Imagine this: during a conference call in the not-so-far-away future, someone presents a product sample. Obviously we can easily evaluate its colour, shape, the sound it makes, and so on. We see the illusion of the object through a screen, much as we do through Corning’s glass, but what if we could also touch it and feel its texture, its weight, its temperature, as well as every bump and edge on its surface, as if we were holding it? Is it still an illusion, or can we practically call it teleportation?

Disney Research recently collaborated with Carnegie Mellon University and developed REVEL, a device that makes tactile sensations programmable, demonstrated at SIGGRAPH 2012 in California. On the Disney Research website, the actual research paper as well as a detailed demonstration video are available to anyone interested.

"REVEL is a new wearable tactile technology that modifies the user’s tactile perception of the physical world. Current tactile technologies enhance objects and devices with various actuators to create rich tactile sensations, limiting the experience to the interaction with instrumented devices. In contrast, REVEL can add artificial tactile sensations to almost any surface or object, with very little if any instrumentation of the environment. As a result, REVEL can provide dynamic tactile sensations on touch screens as well as everyday objects and surfaces in the environment, such as furniture, walls, wooden and plastic objects, and even human skin.


REVEL is based on Reverse Electrovibration. It injects a weak electrical signal into anywhere on the user’s body, creating an oscillating electrical field around the user’s skin. When sliding his or her fingers on a surface of the object, the user perceives highly distinctive tactile textures that augment the physical object. Varying the properties of the signal, such as the shape, amplitude and frequency, can provide a wide range of tactile sensations." (DisneyResearch)
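As a thought experiment only (the actual signal design is in the Disney Research paper, not here), the quoted idea of varying shape, amplitude, and frequency can be sketched as a small signal generator: each “texture” is just a differently parameterized oscillation. The parameter values below are invented placeholders.

    import math

    def texture_signal(shape: str, amplitude: float, frequency: float,
                       duration: float = 0.05, sample_rate: int = 8000):
        """Generate a parameterized oscillation: one hypothetical 'texture' per setting."""
        samples = []
        for n in range(int(duration * sample_rate)):
            t = n / sample_rate
            phase = 2 * math.pi * frequency * t
            if shape == "sine":
                value = math.sin(phase)
            elif shape == "square":
                value = 1.0 if math.sin(phase) >= 0 else -1.0
            else:  # a simple sawtooth as the remaining example shape
                value = 2 * (frequency * t - math.floor(0.5 + frequency * t))
            samples.append(amplitude * value)
        return samples

    # Two made-up 'textures': the settings are purely illustrative.
    smooth = texture_signal("sine", amplitude=0.3, frequency=120)
    rough = texture_signal("square", amplitude=0.8, frequency=40)
    print(len(smooth), len(rough))  # 400 samples each at these settings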

Here’s a demonstration video, so you can see at a glance what it’s capable of:

In a way, REVEL can break the boundaries of distance and accessibility and give us a more “hands-on” experience. One practical use of such technology is in interior design: I trust that with REVEL, we will be able to lie in a field of grass inside our living room. Performing arts would be another field to benefit from this technology. Recently, site-specific theatre returned to people’s attention and gained much popularity thanks to the success of Sleep No More, a live art/interactive theatre experience in which audience members explore the space as the story takes place. The cost of putting together such a show is extremely high, as the designer has to recreate an environment as realistic as possible according to the dramaturge, who researches the elements of the era, the background of the play, and so on. If we installed REVEL in the performance space, creating different textures for the installations, the budget of producing a show would be much smaller, and art would become more accessible to the public rather than a luxury for the rich. Combine this tactile recreation technology with what SixthSense can do, and we can enter a multi-dimensional augmented reality and examine what is in front of us like never before: with perhaps one simple photograph, we will be able to retrieve its background information and all the stories related to it, and with REVEL, we’ll be able to feel the objects within the picture --- if that isn’t what teleportation would be like, I can’t imagine what would be.

Such a combination of technologies could be widely used in museums, which might be its most important application. Looking at a display, taking careful notes from the description plates, doing research once we’re back home... we shed information as we pick it up, and we can’t easily make connections between what we see and what we’re looking for. Under such circumstances, augmented reality provides us with an aura of information so rich that we can immerse ourselves in it: a dimension that cannot be described or limited by the concepts of time and space, but a new kind of dimension, to which I believe virtual reality and cyberspace also belong, for it stretches in all directions; it occupies no space yet is infinitely big.

It’s a dangerous notion to think about what technology will enable us to do tomorrow. With new media, we process information differently. The passages from sources to destinations are more fluid and flexible. What’s the purpose of a medium? It creates a path for the traffic of information from point A to point B, but nowadays we have to rephrase that concept, for information is exploding exponentially around us. The problem is no longer acquiring information, but finding ways to put it in the right places. It’s interesting to think about the reversed roles of the human brain and the computer-server “brain”, that virtual central system where all the digital information is stored. As human beings, we are always trying to optimize our memories by training our brains like muscles. The method of loci (the memory palace) is a great way to develop our cognitive abilities: we put the information we receive through different senses into boxes and rooms. It’s almost a reverse remediation: we use our brains to carry out the task usually handled by filing cabinets, because the retrieval cues are not strong enough on their own to make things click and surface the information buried deep inside our wells of memory.
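The filing-cabinet analogy can be made literal in a few lines. This is just my own toy model of a memory palace, not a cognitive claim: each locus along an imagined route is a key, each item to remember is a value, and recall means walking the route in order. The route and the items are invented.

    from collections import OrderedDict

    # A toy memory palace: loci along a familiar route map to items to remember.
    palace = OrderedDict([
        ("front door", "umbrella"),
        ("hallway mirror", "theatre tickets"),
        ("kitchen table", "portfolio bag"),
    ])

    def recall(memory_palace: OrderedDict) -> list:
        """Walk the route in order; each locus acts as a retrieval cue for its item."""
        return [item for locus, item in memory_palace.items()]

    print(recall(palace))  # ['umbrella', 'theatre tickets', 'portfolio bag']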


The award-winning short film Her Method of Loci examines human memory and its occasional inaccuracy.

<Her Method of Loci>

from Nathan Coetzee on Vimeo.

On the other hand, we are seeing more and more of the “cloud” concept in digital media. We store our information externally, in cyberspace. As we search for information each day, we also contribute to its contents. The line between sources and destinations becomes blurry. A digital database is fluid and ever-changing, while a printed database is solid and takes time and energy to update --- and maybe that’s the biggest difference.

If we consider information as a dimension parallel to the world we live in, we limit its ability to expand. Now that we are recreating the tactile sense with machines and creating experiences close to “teleportation” and “telekinesis”, we are also moving, one step at a time, ever closer to a kind of time travel. The Corning glass brought back dinosaurs from millions of years ago, but that is a programmed, fixed routine used only for educational purposes, rather than an active process of acquiring knowledge of the past.

There are, I found, essentially two types of augmented reality visualization. In the first, we add a layer in front of the three-dimensional vision-scape before us and use that invisible canvas to visualize external information, whether commentary, footnotes, or anything else. It’s an excellent way to assist lecturers, as in the example below: as he snaps his fingers and points to different areas of the graph, dots and data appear, making seemingly complicated information easy to process.

Hans Rosling's famous lectures combine enormous quantities of public data with a sports commentator's style to reveal the story of the world's past, present and future development. Now he explores stats in a way he has never done before - using augmented reality animation. In this spectacular section of 'The Joy of Stats' he tells the story of the world in 200 countries over 200 years using 120,000 numbers - in just four minutes. Plotting life expectancy against income for every country since 1810, Hans shows how the world we live in is radically different from the world most of us imagine.
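For a sense of the kind of chart Rosling animates, here is a minimal sketch: income per person on a logarithmic axis, life expectancy on the other, one dot per country, stepped through the years. The numbers below are invented placeholders, not the Gapminder data he actually uses, and matplotlib is assumed to be available.

    import matplotlib.pyplot as plt

    # Invented placeholder data: (income per person, life expectancy) per country per year.
    data = {
        1810: {"Country A": (700, 32), "Country B": (1500, 38)},
        1910: {"Country A": (1200, 45), "Country B": (4000, 52)},
        2010: {"Country A": (9000, 68), "Country B": (32000, 80)},
    }

    fig, ax = plt.subplots()
    for year, countries in data.items():
        incomes = [v[0] for v in countries.values()]
        lifespans = [v[1] for v in countries.values()]
        ax.scatter(incomes, lifespans, label=str(year))

    ax.set_xscale("log")                  # Rosling plots income on a log scale
    ax.set_xlabel("income per person (USD)")
    ax.set_ylabel("life expectancy (years)")
    ax.legend(title="year")
    plt.show()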




Another example of this kind of additional layer is more personal. This developing concept of an augmented reality contact lens aims to provide the user with additional information about their surroundings and the people around them.


The other use of visualization with augmented reality is quite the opposite: we “peel off” layers of reality to reveal the past as we add a screen between our environment and ourselves. We can thus experience a sense of travelling through time, of stepping into a parallel universe --- and with the development of augmented reality technology, such as wearables like the AR contact lens, I believe it will become more and more realistic.

This type of application would be perfect for tourists at historical sites, or for archaeologists working in the field. The following video shows how augmented reality helps tourists see how historical buildings and streets have changed through the decades. On the other hand, we can also peel off the surface and enter a magazine or a book, to learn what lies behind what we see on a two-dimensional piece of paper -- a notion almost like something out of a Harry Potter book.




To take it one step further, computer technicians nowadays can create computerized figures in augmented reality as well. This is a video of a date with Hatsune Miku, a computerized pop star, in augmented reality. Pathetic as it seems, I do believe such technology can be of great use in the future. For example, we will be able to create augmented tour guides, either regular ones or special ones for people with impaired vision or hearing. There could also be augmented caretakers for seniors or children, reminding them of daily tasks.


Sometimes I think the phrase “augmented reality” becomes arbitrary: we are evolving as a species, and through technology we are reconsidering the capacity of our senses and adding dimensions to conceivable reality. There is more to see, more to hear, more to feel, more to learn about; augmented reality, if we have to call it that, makes it easier for us to get there. However, in the end we should not be enslaved by technology but master it and apply it to our lives, making our limited time on this earth feel infinite, for we can now stretch our lives into all dimensions, even unseen ones.

Augmented reality creates new dimensions, and as we set foot in them, we need to re-evaluate the way we see, listen to, and sense our environment. The world becomes mouldable and can feel rather small: augmented reality forces us to rethink how much we can see within a space. The content of media is no longer fixed but interactive, and as it constantly receives feedback and input from its receivers, it evolves as it presents itself. Of course, I can also see such developments in new media giving rise to the other extreme: sooner or later, people will experience the stress of information overload and go back to more organic, traditional methods of communication and life without mediated technology.

In the theatre industry, for example, the recent popularity of craftsmanship and people’s rediscovered appreciation for handmade set pieces and props is an interesting phenomenon. It’s exciting and comforting to see people adopting a more balanced attitude towards new developments in technology and new media: we welcome the convenience they provide, but we can also recognize their limitations and try to address the potential dangers. Recent natural disasters have made us understand that a society overly dependent on wifi and electricity cannot survive catastrophes that destroy the functionality of modern technology…we always have to go back to basics and pick ourselves up when we are hit by the most primal forces of nature. New media and the newly found computerized senses can help us learn so much, move so much faster, and create like never before…it’s important to make the most of them, steady-handedly and level-headedly.

Sources:

Cleverbot. Cleverbot.com

Coetzee, Nathan. Her Method of Loci http://vimeo.com/19599373

Disney Research. Revel: Programming the Sense of Touch http://www.disneyresearch.com/project/revel-programming-the-sense-of-touch/

Mistry, Pranav. http://pranavmistry.com/






