TSMC’s A9 Chip Outperforming Samsung’s in Early iPhone 6s Battery Benchmarks
“Remember a week ago when everyone was saying they were going to return their TSMC devices for Samsung devices?”

via macrumors.com
The future of photography is computational, not optical. This is a massive shift in paradigm and one that every company that makes or uses cameras is currently grappling with. There will be repercussions in traditional cameras like SLRs (rapidly giving way to mirrorless systems), in phones, in embedded devices and everywhere that light is captured and turned into images.
Sometimes this means that the cameras we hear about will be much the same as last year’s, as far as megapixel counts, ISO ranges, f-numbers and so on. That’s okay. With some exceptions these have gotten as good as we can reasonably expect them to be: Glass isn’t getting any clearer, and our vision isn’t getting any more acute. The way light moves through our devices and eyeballs isn’t likely to change much.
What those devices do with that light, however, is changing at an incredible rate. This will produce features that sound ridiculous, or pseudoscience babble on stage, or drained batteries. That’s okay, too. Just as we have experimented with other parts of the camera for the last century and brought them to varying levels of perfection, we have moved onto a new, non-physical “part” which nonetheless has a very important effect on the quality and even possibility of the images we take.
The present of photography is already computational, and has been since the advent of the digital camera. From the very moment an imaging sensor’s output signal is digitized, billions of operations turn a stream of electrical levels into a colorless bitmap, reconstruct colors by interpolation, correct gamma, white balance, and lens aberrations, reduce noise, and compress the image by discarding information invisible to the human eye, to name just a few of the basic steps performed by a typical image processing pipeline. There is no such thing as #nofilter.
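To make the point concrete, here is a toy sketch (in Python with NumPy) of just two of the pipeline stages mentioned above: per-channel white balance followed by gamma encoding. The gain and gamma values are illustrative assumptions, not any real camera’s tuning, and real pipelines add demosaicing, lens correction, denoising, and compression on top.

```python
import numpy as np

def process_raw(linear_rgb, wb_gains=(2.0, 1.0, 1.5), gamma=2.2):
    """Toy sketch of two pipeline stages: per-channel white balance
    followed by gamma encoding. Values here are illustrative; real
    pipelines also demosaic, correct lenses, denoise, and compress."""
    # White balance: scale R, G, B by channel gains, clip to [0, 1].
    balanced = np.clip(linear_rgb * np.asarray(wb_gains), 0.0, 1.0)
    # Gamma encode: compress linear light for display/storage.
    return balanced ** (1.0 / gamma)

# A tiny 1x2 "image" in linear light: one dim pixel, one mid-gray pixel.
img = np.array([[[0.1, 0.2, 0.1], [0.4, 0.4, 0.4]]])
out = process_raw(img)
print(out)
```

Even this stripped-down sketch shows why #nofilter is a fiction: the numbers that reach the screen are already several transformations away from what the sensor recorded.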
What we are seeing now is an unprecedented rate of innovation in image processing enabled by huge advancements in computing power and integration, bound to marginalize the traditional photographic industry.
via techcrunch.com

The key to the Mac therefore becomes that which the iPad/iPhone isn’t: an indirect input device. The keyboard and mouse/trackpad are what define the Mac. The operating system, the apps, the UX, are all oriented around the indirect input method. The iPhone’s capacitive touch brought about the direct input method, a third pivot in input methods (first was mouse, second trackpad/scroll wheel). Each pivot launched a new set of platforms and the Mac is the legacy of the second. (…)
The touchbar coupled to the other two inputs is a totally new way to interact with computing products. It’s not an “easy” interface as it’s not direct manipulation. It remains indirect, a defining characteristic of the second wave. Indirect inputs are powerful and lend themselves to muscle memory with practice. This is the way professional users become productive. The same way keyboard shortcuts are hard to learn but pay off with productivity, touchbar interactions are fiddly but will pay off with a two-handed interaction model. They are not something you “get” right away. They require practice and persistence for a delayed payoff. But, again, that effort is what professionals are accustomed to investing.
Horace Dediu perfectly nails the purpose of the Touch Bar: an indirect, context-aware input method that fits neatly into the existing UI model while enabling a whole new class of interactions.
via asymco.com

Although the actual implementation of the 3D Touch is somewhat problematic, the approach taken to the functionality assigned to this feature is the correct one: 3D Touch should be an enhancement to the user experience, not a requirement to achieving a user task. Indeed, so far, all the functionality provided by 3D Touch, whether in quick actions or peek-and-pop mode, is redundant: users who don’t have the latest iPhone or have trouble with the 3D Touch can still do their tasks without using it and achieve the same kinds of actions, albeit in a more roundabout way. This redundancy is the right solution to the problems that gestures pose: lack of affordance and memorability, as well as difficulty in performing them.
Great in-depth analysis of 3D Touch by Raluca Budiu, Nielsen Norman Group. Adding a whole new dimension of interaction can be a double-edged sword, but Apple seems to have nailed it by encouraging the adoption of microsession-oriented patterns focused on efficiency, like “Quick Actions” and “Peek and Pop”.
via nngroup.com