I took a look at 10/GUI’s new desktop concept last night and was very impressed. It’s always hard to break backwards compatibility, and harder still when the place you’re breaking it is the weakest link in the interaction chain – the user’s comfort zone. However, it’s fair to say that comfort zone is already expanding for a lot of casual users through the incorporation of multi-touch interfaces in some mobile phones (the iPhone springs to mind).
There are two distinct conceptual areas in the video – the multi-touch interface, and the Con10uum desktop environment itself. They meld (apparently) seamlessly, but I started to wonder how easy it would be to separate them out in order to bring the benefits of the desktop design to those who won’t or can’t easily use dexterity as their primary input mechanism. In the same way that some people have trouble using a mouse but are able to cope with a trackball, physical barriers to the use of some technology will always be present.
In thinking about how well Con10uum would interface with a mouse, the major constraint is the typical two-button mouse. With so few buttons available, using one would require a gesture-heavy input style. This is, of course, exactly what the multi-touch screen uses, but the key difference is that the multi-touch environment is very friendly to gestures; it’s easy to imagine pinch-zooming or scrolling with a sweep of a finger because the result is close to the physical gesture itself. Gesturing with a mouse is very much an exercise in training the user to remember what does what.
Luckily, the mouse is at heart a device to move a pointer from point a,b to point x,y. Buttons, scrollwheels, navigation buttons, etc. are just gravy – well, one of those buttons isn’t gravy, but the rest certainly are. Accordingly, I think it’s fair game to choose, if not some tooled-up übermouse, then at least one with a clickable scrollwheel and two navigation buttons. They fit ergonomically onto a well-designed mouse (some debate about the scrollwheel may be inserted here by sufferers of carpal tunnel syndrome) and exist in a mouse that you can just stroll down to your local hardware store and buy.
What, then, can these buttons and gestures be mapped to?
Con10uum has global and local “edges” defined at the extreme left- and right-hand edges that are activated when a finger-press is detected in one. This is nice and easy – let’s assume that if the mouse pointer hits either edge and remains there for 1 second the menu pops up (all times are subject to arguments, bar fights, etc).
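To make the dwell idea concrete, here’s a minimal sketch of how that edge-and-timer check might look. Everything here is assumed: the `EdgeDwellDetector` name, the 4-pixel edge-zone width, and the idea of feeding it pointer samples with timestamps – the 1-second dwell time is the only number taken from the text.

```python
EDGE_WIDTH = 4       # pixels from the screen edge that count as "in the edge" (assumed)
DWELL_SECONDS = 1.0  # dwell time before the menu pops up, as suggested above


class EdgeDwellDetector:
    """Tracks pointer samples and reports when the pointer dwells in an edge."""

    def __init__(self, screen_width):
        self.screen_width = screen_width
        self.entered_at = None  # timestamp when the pointer entered an edge zone

    def update(self, x, timestamp):
        """Feed one pointer sample; returns 'left'/'right' when a dwell completes."""
        in_left = x <= EDGE_WIDTH
        in_right = x >= self.screen_width - EDGE_WIDTH
        if not (in_left or in_right):
            self.entered_at = None  # pointer left the edge; reset the timer
            return None
        if self.entered_at is None:
            self.entered_at = timestamp  # pointer just arrived; start timing
            return None
        if timestamp - self.entered_at >= DWELL_SECONDS:
            self.entered_at = None  # fire once per dwell, then re-arm
            return "left" if in_left else "right"
        return None
```

A real implementation would hook this up to the windowing system’s pointer events rather than polled samples, but the shape of the logic would be much the same.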
Scrolling left/right across the workspace can be assigned to the navigation buttons – one click equals one window to the left/right as appropriate. Holding down either navigation button while using the scroll wheel would drop to the thumbnail view of all open applications.
Zooming in and out is – yawn – a function of the scroll wheel.
Repositioning windows to the left/right is done by grabbing the title bar and gesturing sharply to the left/right (had to be a gesture in there somewhere!).
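The usual way to tell a “sharp” gesture apart from an ordinary drag is a velocity threshold: if the title bar is thrown left or right fast enough, snap the window; otherwise treat it as a normal move. A rough sketch, where the `classify_drag` name and the 1500 px/s threshold are both my own assumptions:

```python
FLICK_SPEED = 1500.0  # horizontal px/s that counts as a "sharp" gesture (assumed)


def classify_drag(x_start, x_end, duration_s):
    """Classify a title-bar drag by its average horizontal speed.

    A fast throw snaps the window left or right; anything slower is an
    ordinary window move.
    """
    if duration_s <= 0:
        return "move"  # degenerate sample; treat as a plain move
    speed = (x_end - x_start) / duration_s  # signed px/s, negative = leftward
    if speed <= -FLICK_SPEED:
        return "snap-left"
    if speed >= FLICK_SPEED:
        return "snap-right"
    return "move"
```

In practice you’d want to measure speed over just the last few samples of the drag rather than the whole thing, so a slow reposition ending in a quick flick still registers – but the threshold idea carries over directly.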
Expanding windows would be done by an expand-to-visible-area button that doesn’t appear in the video. I think it could be worked in easily enough, given all the other compromises we’re making at this point 🙂
I personally think the Con10uum desktop has got some legs to it, and I wouldn’t be at all surprised to see someone knock up at the very least a prototype for Linux. I think the multi-touch input device – powerful as it is – will take a while to achieve mass market popularity and therefore shouldn’t be forced to slow anything else down.
However this turns out, it’s going to be interesting.