The following guest post was written by Edo Segal (@edosegal).
Late last week, as the day was coming to an end and I was speaking with my 5-year-old at bedtime, we shared the highlights of our day. I started by telling him that the company that created the iPhone is about to come out with . . . I paused—how do I describe it?—well, a "big iPhone," I said. About this big, I gestured, holding my hands roughly 10 inches apart. “Wow, amazing!” was his instant reaction as his eyes lit up. Even my 5-year-old knows that bigger is better, especially when it comes to tactile interfaces. In fact, the advantages are probably more obvious to his generation than they are to ours.
For this first generation born into a world of the iPhone, the Wii and soon the Xbox's Project Natal, the distance between the metaphor created by these devices and the reality of their interaction is constantly shrinking. My wife is currently doing her PhD research on the merits of tangible interfaces for young children in education, and the data is telling. There is no doubt that tactile computing holds great potential to enhance learning. Through that lens, the “bigger iPhone” is akin to a bigger yard to play in or a bigger room. For these kids, the iPhone’s primary function is by no means a phone. It is first and foremost a gaming device, followed by a networked camera, followed by everything else. Seen this way, one can appreciate the importance of the iPad in the historical trajectory of human-computer interaction. What's lost in all the complaints about what the iPad is lacking (multitasking, a camera, etc.) is that people need to judge the iPad on more than its merits as a first-generation product. Rather, they need to understand it in the context of the evolutionary arc of computing.