This concept allowed me to reach the world finals, which in turn gave me the opportunity to be identified and hired by Gregory Renard, co-founder and CEO of Wygwam, Xbrainsoft and Xbrainlab.
How long have you been interested in augmented reality, and how did you discover AR?
To be more precise, the term we should use is "Augmented Perception": through a tablet, a smartphone, etc., it is our perception of the world that is augmented, not its reality. I think this distinction is very important; it opens the way to a new paradigm, closer to what augmented reality is in the collective imagination, which I personally call "Alternative Reality".
My past experience as an illusionist magician gives me a unique capacity for observing and understanding how people interact with the real and virtual worlds. Because, to a certain extent, magic is already augmented or virtual reality! I have learned to decipher what unconsciously drives human behavior, and this is now what feeds my creations. While computing shrinks to the point of no longer being visible, and our insatiable need for immediacy leads us to dematerialize physical media, there comes a time when humans must act and interact with this invisible, pervasive computing, this digital dust. Creating the vectors that recompose this information, or creating an anchor as a tangible link between the real world and its computer model, is precisely my job. That is why I am an innovative HMI designer: my creative process is based not on the technologies of the market, but on a personal intuition of the future.
I know that interfaces and pervasive computing are subjects that fascinate you. How do you think we will interact with our environment in the future?
This missing link will be the conductor that attunes the digital and the real. By constantly maintaining a numerical copy of space, time, objects, and the living, we will naturally obtain a reference grid in which object-oriented programming is intertwined with each object's real and augmented attributes. By unifying all the data generated by sensors, cameras, satellites, drones, smartphones, connected devices, and everything else, we will bring about a singularity of the Earth, endowing Gaia with an omniscient digital consciousness.
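The idea of a reference grid where object-oriented programming is intertwined with an object's real and augmented attributes can be sketched in code. Everything below is a hypothetical illustration, not an existing system: the class names, the sensor fields, and the `sync` method are all invented to show how one mutualised digital copy could serve every consumer.

```python
# Hypothetical sketch of the "reference grid": a digital copy of a real
# object whose class carries both its real attributes (fed by sensors)
# and its augmented, purely digital attributes. All names are invented.

from dataclasses import dataclass, field


@dataclass
class RealAttributes:
    # Continuously refreshed from sensors, cameras, drones, etc.
    position: tuple = (0.0, 0.0, 0.0)
    temperature: float = 20.0


@dataclass
class AugmentedAttributes:
    # Exists only in the digital layer, never in the physical world.
    label: str = ""
    color: str = "neutral"


@dataclass
class DigitalTwin:
    name: str
    real: RealAttributes = field(default_factory=RealAttributes)
    augmented: AugmentedAttributes = field(default_factory=AugmentedAttributes)

    def sync(self, **sensor_readings):
        # Mutualised update: one numerical copy serves every consumer
        # (cars, drones, robots), instead of each recalculating the world.
        for key, value in sensor_readings.items():
            setattr(self.real, key, value)


twin = DigitalTwin("street-lamp-42")          # invented example object
twin.sync(position=(4.5, 0.0, 12.0), temperature=31.5)
twin.augmented.label = "needs maintenance"
print(twin.real.temperature)                  # 31.5
```

The point of the sketch is the pairing itself: real attributes are written only by sensor synchronisation, while augmented attributes are freely programmable, yet both live on the same object in the grid.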
It is logical and inevitable. Whether it is driverless cars that need to move, drones that need to fly, robots evolving among us, or even an Internet of Things built from inanimate objects, rather than recalculating the environment for each entity, mutualisation will make the whole more efficient and less resource-hungry, while providing an accurate model for everyone.
Initiatives in this direction already exist, such as Kinect@Home, or the European project RoboEarth, which aims to mutualise robots' various knowledge in the cloud. Google Maps, Bing Maps, Nokia Maps and the French UBICK are busy digitizing our environment. We already have plenty of sensors that could transcribe any of our actions, not to mention the astonishing number of technologies developed by universities and research labs that the general public does not even suspect: 3D tracking of people and objects with simple surveillance cameras, automatic 3D reconstruction by photogrammetry from movies or pictures, prediction algorithms that indicate where future crimes are likely to take place and even where one will be in 24 hours, laser vision by reverberation that can capture a room in 3D without entering it, and so on. Not to mention the countless microphones around us that are able to capture and interpret what we say.
Today, it is humanly impossible to store all this data, let alone process it. Great futurists such as Ray Kurzweil rely on an exponential technology curve and predict that in a few years, storage will no longer be a problem. As for data processing, one can reasonably count on an artificial intelligence that will have far exceeded the cognitive abilities of Man and will be able to put everything in order... in short: the Ordinator! To finish with this A.I., it will be able to reconstruct our past environment by anastylosis and, by learning from our behavioral models, fill in the gaps in our past behavior and simulate our likely immediate future.
So, to simplify: this awakening world, which I call the Egosystem, will be a real-time 3D copy of our world. It will be the perfect mesh of our environment, and if we consider this 3D mesh like that of a video game, we understand the incredible reach of the tool. It enables us to create an unlimited range of possibilities, since everything is programmable and configurable!
For example, at breakfast, a simple cup in front of me will have its own programmable digital layer. I can change its color depending on the temperature of its content. While taking my morning coffee, I can display information around the cup's periphery, such as the day's weather forecast... always useful to know before going to work. One can also imagine rich, cross-modal interactions: turning the cup on the table like a potentiometer, for instance, to raise or lower the music volume in the room. And it is of course through augmented reality devices that I can see and interact with these different states of the object. In short, with the Egosystem, seeing through walls, having the gift of ubiquity, looking at the picture of one's relatives to instantly communicate with them, and so on, would boil down to just a few lines of code!
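The cup scenario really can fit in a few lines, if we imagine what such a digital layer's programming interface might look like. Everything here is an assumption: the `DigitalLayer` class, the sensor names, and the rule model are invented purely to illustrate the "few lines of code" claim.

```python
# Hypothetical sketch of the Egosystem's "digital layer" on an everyday cup.
# None of these classes exist anywhere; they only illustrate how the
# augmented behaviors described above could reduce to a few rules.

class DigitalLayer:
    """Programmable augmented attributes attached to a real object."""

    def __init__(self, name):
        self.name = name
        self.rules = []

    def on(self, sensor, action):
        # Register a rule: when `sensor` reports a value, run `action`.
        self.rules.append((sensor, action))

    def update(self, sensor, value):
        # Called by the Egosystem each time a sensor reading arrives.
        for registered_sensor, action in self.rules:
            if registered_sensor == sensor:
                action(value)


def temperature_to_color(celsius):
    # Map the coffee's temperature to a display color on the cup.
    return "red" if celsius > 60 else "blue"


cup = DigitalLayer("morning-cup")
state = {}

# Rule 1: the cup's augmented color follows its content's temperature.
cup.on("temperature", lambda t: state.update(color=temperature_to_color(t)))

# Rule 2: rotating the cup acts as a volume potentiometer
# (0..360 degrees mapped to 0..100 percent volume).
cup.on("rotation", lambda deg: state.update(volume=max(0, min(100, deg * 100 // 360))))

cup.update("temperature", 72)   # hot coffee just poured
cup.update("rotation", 180)     # half a turn of the cup on the table

print(state)  # {'color': 'red', 'volume': 50}
```

Two one-line rules are enough to give the cup both behaviors from the interview; the augmented reality device would then simply render `state` onto the physical object.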