“Everything we call real is made up of things that cannot be regarded as real” – Niels Bohr
For many, the “virtual world” of AI and the “real world” in which we all physically live represent vastly different places that have been difficult to connect. With AI having been largely academic in nature for most of the past ~60 years, and perceived until recently as something purely logical, with no physicality or rules of physics, it’s easy to see why many question its potential to be truly useful or disruptive in our “real world.” But perhaps these two seemingly different worlds really aren’t that different at all?
Let’s challenge this premise of separate worlds with an experiment. Close your eyes. Now picture your favorite person. Imagine that person speaking your name and touching your face. Notice how your mind is able to intelligently enable you to see, hear, and feel that person in your head? Now close your eyes and imagine adding 131 + 17. Notice how the mind visually presents those numbers to you, and enables you to think through the problem in your own voice? These things are not occurring in the physical world, are they? This non-physical world where our mental activities occur is a similar place to where an AI’s thinking occurs.
The moment we open our eyes following this exercise, it’s easy to look through them and hold onto the traditional Newtonian view that the foundation of our universe is physical, material reality. This is because our five keen senses (what we see, smell, touch, hear, and taste) do such a great job of connecting and converging our conscious, subconscious, and unconscious mind with the physical world. So good that the two seemingly appear as one.
Without any senses, it could be argued that our minds would be incapable of connecting to the physical world altogether, let alone converging both. This is the same argument that is made regarding AI’s inability to connect in a meaningful way to our physical world. That said, just like human consciousness, AI can rely on senses similar to our own in order to connect its thought processes to our physical world.
The Internet of Things (IoT) provides ready access to sensors that allow more meaningful sensory access to our physical world, thus enabling AI to come to life. Cameras give the AI eyes to see into the world. Microphones give it a “good ear.” Accelerometers and gyroscopes give it a “good feel for things.” Particulate and chemical sensors give it “a good nose.” In many ways these sensors can empower the AI with “superhuman” capabilities compared to you and me, such as:
- Access to many more sensory inputs than we have as people, in many different places, at the same time
- Infrared “eyes” so that the AI can see heat variances, as well as standard object detection, counting and classification
- Ultrasonic “hearing” that enables access to frequency ranges beyond the human spectrum
- Accelerometers that capture haptic movements with much finer detail than our own fingertips
AI is what empowers perception and meaning of these IoT-driven sensory inputs. Effectively, the IoT sensor measures and reports the physical data property, and the AI provides the intellect that perceives what the physical data represents. This is what allows sensors to evolve into senses.
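To make the sensor-versus-sense distinction concrete, here is a minimal, purely illustrative Python sketch. Everything in it is hypothetical: the sensor value, the thresholds, and the labels are invented for illustration. One function stands in for the IoT sensor and reports a raw number; a second function stands in for the AI layer and assigns that number meaning.

```python
def read_temperature_sensor():
    # Stand-in for a raw IoT sensor reading, in degrees Celsius.
    # A real deployment would poll actual hardware here.
    return 41.5

def perceive(reading_c):
    # Stand-in for the AI "intellect" layer: maps a raw measurement
    # to what it represents. Thresholds and labels are hypothetical.
    if reading_c < 0:
        return "freezing conditions"
    elif reading_c < 35:
        return "normal ambient temperature"
    else:
        return "heat anomaly: possible fire or equipment fault"

raw = read_temperature_sensor()
print("sensor value:", raw)            # the measurement (the sensor)
print("perception:", perceive(raw))    # the meaning (the sense)
```

The sensor alone only yields 41.5; it is the perception layer, however simple, that turns the measurement into something actionable.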
IoT sensors are what empower the senses that bring AI to life. These factors, along with the broad availability of low-cost distributed compute (the cloud), the open source software movement, machine learning advancements, and mobile-driven advances in microelectronics (ARM), connect the dots that make AI a reality. In future articles, we’ll explore these various drivers. In the meantime, after so many years of being academic in nature, it’s time for the AI Revolution.
Note: I originally wrote this piece for the awesome folks at ReadWrite. You can also check it out here.