The next few years will see the emergence of Anticipatory Computing applications

This is an exciting time to be developing computer applications. We are in the midst of a fundamental shift in the way we interact with our computing devices. At the same time, applications are becoming much smarter and more capable of responding to natural inputs like voice, touch, and gesture.
Our computing devices no longer tether us to our desks. Instead, they are always with us. Today, we may have a smartphone in our hand and a tablet on the table in front of us. Very soon, we may also have a smart panel on the wall next to us and perhaps even a computer embedded in our eyeglasses.
These devices know who we are from our online profiles and status updates. They also know where we are and where we have been. In some cases, we may also want them to hear what we hear and see what we see. All of these inputs can become useful signals that an application can use to learn and anticipate our needs.
Just a few years ago, the technology that could make sense of these information signals was beyond reach. Due to recent advances in cloud computing and machine learning, applications have become far more intelligent. In the past two years, we have seen examples of server clusters that can defeat world champions in Jeopardy!, as well as smartphone assistants that can answer a surprising range of spoken questions. We believe this is just the beginning.
We are entering an era in which technology can begin to infer insight from the information we see and hear every day. In this world, the applications we use will continuously pay attention to our activity and learn from the world around us. In some cases, these applications may understand us well enough to anticipate the information we need before we even ask for it. We call applications like this “Anticipatory Computing” applications.
Our first step is to make conversations easier and more productive

For the past two years, Expect Labs has been developing a technology platform to serve as the foundation for new types of intelligent applications. We call it our “Anticipatory Computing Engine”. As an initial use case for this platform, we have focused on improving the way we all have conversations.
Whether it is a phone call, a video conference, or an in-person meeting, we all have conversations, sometimes several times a day. All too often, someone will mention something that rings a bell: the name of a mutual acquaintance, a book they read, or a restaurant where they ate. We would love to run a quick search to jog our memory, but by the time we pull out our phone, find the right app, and type our query, the moment has passed.
We believe that, during our conversations, our computing devices and applications can do a better job of getting us the facts we need, when we need them. By paying attention to our interactions with others and listening to what we say, applications can, in some cases, anticipate the information we may need. When this happens, our conversations become easier and more productive.
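To make the idea concrete, here is a minimal sketch of what an anticipatory lookup loop might look like. It is not our actual engine, just an illustration: it scans a chunk of conversation transcript, picks out salient terms, and proactively runs a search before anyone asks. The `search` callable and the simple frequency-based term extraction are hypothetical stand-ins; a real system would use speech recognition, entity extraction, and relevance ranking.

```python
import re
from collections import Counter

# Common words to ignore when looking for salient terms.
STOPWORDS = {
    "the", "a", "an", "and", "or", "but", "i", "you", "we", "they",
    "it", "is", "are", "was", "were", "to", "of", "in", "on", "that",
    "this", "for", "with", "at", "have", "had", "do", "did", "so",
    "one", "about", "ever", "did", "talked",
}

def salient_terms(transcript, top_n=3):
    """Pick the most frequent non-stopword terms in a transcript chunk.

    A real system would use entity recognition and topic modeling;
    naive frequency counting stands in for that here.
    """
    words = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [term for term, _ in counts.most_common(top_n)]

def anticipate(transcript, search):
    """Proactively fetch results for whatever the conversation is about.

    `search` is a hypothetical callable (e.g. a wrapper around a web
    search API) that maps a query string to a list of results.
    """
    return {term: search(term) for term in salient_terms(transcript)}

if __name__ == "__main__":
    # A stubbed search function shows the flow end to end.
    fake_search = lambda q: [f"result about {q}"]
    chunk = ("Did you ever read that book Sarah mentioned? The one about "
             "Antarctica? We talked about it at that sushi restaurant.")
    for term, hits in anticipate(chunk, fake_search).items():
        print(term, "->", hits)
```

The point of the sketch is the flow, not the components: the application listens continuously, decides what in the conversation is worth looking up, and has results ready before the user reaches for a search box.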
With that purpose in mind, we have built an iPad app called MindMeld. Its initial release is an exploration of how the ambient information surrounding every conversation can be captured and used to make our conversations more effective. Our first focus is to streamline information discovery and sharing during a conversation on a touch-driven device. As our technology improves, we suspect it may also be able to assist our conversations in other ways, such as archiving and collaboration. Our ultimate goal is to build a general-purpose conversational assistant that can facilitate the wide range of common tasks we perform during our conversations.