I'm here at the IDF 2010 conference in San Francisco, and during this morning's keynote session, I had the opportunity to see some pretty cool technology. One of the things they demonstrated was essentially the next generation of handheld game controllers. Think of the Wii remotes, but with more accuracy in three dimensions. Wii remotes use triangulation and technically only detect two dimensions (although clever programming in the Wii helps pick out a certain amount of a third dimension). But the devices they demonstrated had full three-dimensional handling.

The new technology relies on the latest and greatest Intel technology, called Sandy Bridge. That might seem surprising since we're talking about game controllers, but that's precisely the point being made at this conference: the technologies are converging into single chips and platforms. The new Sandy Bridge generation of Intel technology puts multi-core processing, graphics, and floating-point processing all on a single chip, making the chips usable in a wide range of applications and removing the need for specialized chips. In the past we saw chips devoted to graphics, chips devoted to floating-point operations (such as in digital signal processing designs), and so on. But now a single chip can handle all of this. So Intel's idea is this: Why build specialized chips and platforms when the current processor can do it all for you? That leaves programmers able to build powerful applications and focus on the "what" rather than the "how". Need powerful image analysis operations? The capability is there. Need 3D realtime handling? The processor can handle it, so you can write the code that makes use of it. (But I should clarify: the processors have the capabilities built in and include updated assembler instructions. The programmers, then, will write code that makes use of these advanced instructions, either through language extensions or assembler libraries.)

The 3D controllers they demonstrated were pretty sweet. On stage the Intel presenter held two of them, one in each hand, and manipulated a 3D model. He resized objects and moved them around, and even performed Boolean operations on them, all without a keyboard. He was able to rotate the virtual world, looking at it from different angles. And at one point he even opened up a menu, which appeared as a small handheld device on the screen itself. His left hand controlled where the virtual handheld was and what angle it was at, and his right hand typed on it with a stylus. Of course, in reality he only had the two controllers, one in each hand; no actual handheld device or stylus existed. But on the screen was the image of the handheld and a stylus, which he easily manipulated.

At the end he mentioned that the SDK would be made available for these controllers. He didn't say when, but I'll be getting more information while at the conference, and will keep DaniWeb readers updated. These are fun times to be a programmer!

Sounds awesome ... Anyone have a picture of what the prototypes look like?