In all of these cases, the back pressure that gives wide review any force, beyond a moral high ground, is the fact of multiple implementations. To put it another way, why would implementers listen to wide review if not for the implied threat that a particular feature will not be implemented by other engines?
So yes, I absolutely think multiple implementations are a good thing for the web. Without multiple implementations, none of this positive stuff would have happened; we’d have a much more boring, less diverse, and less vibrant web platform. Proponents of a “move fast and break things” approach to the web tend to defend their approach as defending the web from the dominance of native applications. I absolutely think that situation would be worse right now if it weren’t for the pressure for wide review that multiple implementations have put on the web.
Microsoft’s release of its new, Chromium-based Edge browser has sparked renewed concerns about the rapidly decreasing diversity of browser engines. “All browsers becoming Chrome” is problematic in many ways, and while having bigger contributors like Microsoft in the Chromium project could actually help steer the project away from its Google-centric agenda, the issues intrinsic to relying on a single implementation remain open.
via torgo.com
Compared to DSLR cameras, smartphone cameras have smaller sensors, which limits their spatial resolution; smaller apertures, which limits their light gathering ability; and smaller pixels, which reduces their signal-to-noise ratio. The use of color filter arrays (CFAs) requires demosaicing, which further degrades resolution. In this paper, we supplant the use of traditional demosaicing in single-frame and burst photography pipelines with a multi-frame super-resolution algorithm that creates a complete RGB image directly from a burst of CFA raw images. We harness natural hand tremor, typical in handheld photography, to acquire a burst of raw frames with small offsets. These frames are then aligned and merged to form a single image with red, green, and blue values at every pixel site. This approach, which includes no explicit demosaicing step, serves to both increase image resolution and boost signal to noise ratio.
Further indications that mobile photography is disrupting the digital imaging industry thanks to a tighter integration of hardware and software.
via sites.google.com
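The align-and-merge idea is easy to demystify with a toy model. The sketch below is emphatically not the paper’s algorithm (which operates on raw CFA data with subpixel alignment and robustness weighting); it only illustrates, assuming grayscale frames and known integer offsets, why merging a burst of slightly shifted noisy frames improves signal-to-noise ratio:

```python
import numpy as np

def merge_burst(frames, offsets):
    """Naively align a burst of noisy frames by integer offsets, then average.

    frames:  list of 2-D arrays (grayscale, for simplicity)
    offsets: list of (dy, dx) integer shifts mapping each frame
             back onto the reference frame
    """
    aligned = [np.roll(f, shift=(-dy, -dx), axis=(0, 1))
               for f, (dy, dx) in zip(frames, offsets)]
    return np.mean(aligned, axis=0)

# Simulate hand tremor: the same scene captured with small shifts plus sensor noise.
rng = np.random.default_rng(0)
scene = rng.uniform(0, 1, size=(32, 32))
offsets = [(0, 0), (1, 0), (0, 1), (1, 1)]
frames = [np.roll(scene, shift=(dy, dx), axis=(0, 1)) + rng.normal(0, 0.1, scene.shape)
          for dy, dx in offsets]

merged = merge_burst(frames, offsets)
single_err = np.abs(frames[0] - scene).mean()  # error of one noisy frame
merged_err = np.abs(merged - scene).mean()     # error after merging four frames
print(merged_err < single_err)
```

Averaging n aligned frames cuts the noise standard deviation by roughly a factor of √n; that statistical payoff is the foundation the actual multi-frame super-resolution pipeline builds on.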
The future of photography is computational, not optical. This is a massive shift in paradigm and one that every company that makes or uses cameras is currently grappling with. There will be repercussions in traditional cameras like SLRs (rapidly giving way to mirrorless systems), in phones, in embedded devices and everywhere that light is captured and turned into images.
Sometimes this means that the cameras we hear about will be much the same as last year’s, as far as megapixel counts, ISO ranges, f-numbers and so on. That’s okay. With some exceptions these have gotten as good as we can reasonably expect them to be: Glass isn’t getting any clearer, and our vision isn’t getting any more acute. The way light moves through our devices and eyeballs isn’t likely to change much.
What those devices do with that light, however, is changing at an incredible rate. This will produce features that sound ridiculous, or pseudoscience babble on stage, or drained batteries. That’s okay, too. Just as we have experimented with other parts of the camera for the last century and brought them to varying levels of perfection, we have moved onto a new, non-physical “part” which nonetheless has a very important effect on the quality and even possibility of the images we take.
The present of photography is already computational, and it has been since the advent of the digital camera: from the very moment an imaging sensor’s output signal is digitized, billions of operations are performed to turn a stream of electrical levels into a colorless bitmap, reconstruct colors by means of interpolation, correct gamma, white balance, and lens aberrations, reduce noise, and compress the image by discarding information not visible to the human eye, just to name a few basic operations performed by a typical image processing pipeline. There is no such thing as #nofilter.
What we are seeing now is an unprecedented rate of innovation in image processing, enabled by huge advancements in computing power and integration, and bound to marginalize the traditional photographic industry.
via techcrunch.com
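For the curious, two of the basic pipeline stages mentioned above, white balance and gamma correction, each fit in a few lines. This is a deliberately simplified sketch, not a production pipeline:

```python
import numpy as np

def white_balance(img, gains):
    """Scale each channel by its gain (e.g. estimated via a gray-world assumption)."""
    return np.clip(img * np.asarray(gains), 0.0, 1.0)

def gamma_encode(img, gamma=2.2):
    """Apply a standard gamma curve so linear sensor values look right on screen."""
    return np.power(img, 1.0 / gamma)

# A linear RGB image straight off the (demosaiced) sensor, values in [0, 1].
raw = np.full((2, 2, 3), 0.18)            # 18% gray in linear light
balanced = white_balance(raw, (1.0, 1.0, 1.0))
encoded = gamma_encode(balanced)
print(encoded[0, 0, 0])                   # ≈ 0.46: linear 18% gray encodes near middle gray
```

Even this trivial pair of operations shows why #nofilter is a fiction: the pixels you see were never the pixels the sensor recorded.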
Modern cars work, let’s say for the sake of argument, at 98% of what’s physically possible with the current engine design. Modern buildings use just enough material to fulfill their function and stay safe under the given conditions. All planes have converged on the optimal size/form/load and basically look the same. Only in software is it fine if a program runs at 1% or even 0.01% of the possible performance. Everybody just seems to be ok with it. People are often even proud of how inefficient it is, as in “why should we worry, computers are fast enough”:
Software engineering shifted from craftsmanship to being an industrial process without learning any lesson from other… industries.
Now it’s easy to hate on Electron’s inefficiency or to blame the overengineering that has become standard practice in “modern” software development, but if we really want to tackle the issue, we should focus on changing the perception of the underlying economics that push businesses to accept the tradeoff between performance and the ability to ship products faster. Nothing comes for free, and today’s competitive advantage is tomorrow’s technical debt.
It’s totally acceptable to build products that are “good enough”, but we should never stop challenging how good is good enough.
via tonsky.me
My recommendation is to go for replaceability instead of re-use. Replaceability leads in the right direction: you think about how to separate responsibilities in order to make them replaceable, which gives you loose coupling. You think about separating clients from the implementation in order to keep it replaceable, which gives you client-driven APIs.
And if you realize after a while that a component tends to be used several times, you go the extra mile to make it really re-usable in terms of additional testing, documentation, and hardening. Therefore I would like to add one more line to the learnings: strive for replaceability, not re-use. It will lead you the right way.
Premature abstraction really is the worst type of evil in engineering, as well as in any other design discipline.
via blog.codecentric.de
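A minimal sketch of what designing for replaceability can look like in code. All names here are illustrative, not taken from the article: the client depends only on a small, client-driven contract, so any implementation can be swapped without touching callers:

```python
from typing import Protocol

class Notifier(Protocol):
    """Client-driven API: shaped by what callers need, not by any implementation."""
    def send(self, recipient: str, message: str) -> None: ...

class EmailNotifier:
    def send(self, recipient: str, message: str) -> None:
        print(f"email to {recipient}: {message}")

class ConsoleNotifier:
    """A stand-in used in tests; swapping it in requires no change to clients."""
    def __init__(self) -> None:
        self.sent: list[tuple[str, str]] = []
    def send(self, recipient: str, message: str) -> None:
        self.sent.append((recipient, message))

def greet(notifier: Notifier, user: str) -> None:
    # The client knows only the Notifier contract, so the concrete
    # implementation stays replaceable (loose coupling).
    notifier.send(user, "welcome!")

fake = ConsoleNotifier()
greet(fake, "ada")
print(fake.sent)
```

Because `greet` knows nothing about e-mail, the test double drops in with zero changes to client code; that loose coupling is exactly what makes a component replaceable, and only the components that earn repeated use need the extra hardening for genuine re-use.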
It’ll be some time before computational notebooks replace PDFs in scientific journals, because that would mean changing the incentive structure of science itself. Until journals require scientists to submit notebooks, and until sharing your work and your data becomes the way to earn prestige, or funding, people will likely just keep doing what they’re doing.
It is incredibly depressing that we live in a world where scientific knowledge is still shared mostly by means of PDF documents, but the title of this article is misleading at best. The future of science communication will not be built on yet another proprietary document format. On the other hand, the Web platform has all the technical capabilities needed to create any sort of “computational” papers, but it still lacks appropriate authoring tools to empower scientists to do it by themselves.
The reports of the scientific paper’s death have been (unfortunately) greatly exaggerated.
Complexity bias is a logical fallacy that leads us to give undue credence to complex concepts.
Faced with two competing hypotheses, we are likely to choose the most complex one. That’s usually the option with the most assumptions and regressions. As a result, when we need to solve a problem, we may ignore simple solutions — thinking “that will never work” — and instead favor complex ones.
To understand complexity bias, we need first to establish the meaning of three key terms associated with it: complexity, simplicity, and chaos.
Nice piece on the risks of being seduced by unnecessary complexity, especially in the broader context of language. It reminded me of an old essay by Italo Calvino, “L’antilingua”—literally: “the anti-language”—in which he comically shows the effects of replacing simple words with increasingly grotesque jargon. To paraphrase Calvino, the anti-language is the language of people who prefer saying “utilize” instead of “use”, people who are scared of showing familiarity with the subject of their talk. According to him, speaking the anti-language is a sign of being out of touch with life, and ultimately represents the death of language itself.
via fs.blog
For the past decade we’ve invested in and celebrated companies through the lens of network effects, Amazon’s power in retail, and measuring the potential of a brand by its scale and path to category dominance. We assumed that antiquated monolithic brands would be attacked by new modern brands that take over consumer consciousness en masse. But instead, old and big brands are fighting against thousands of tiny brands with low overhead, high-on-design merchandise, and supremely efficient customer acquisition tactics. Many of us failed to recognize the collective impact of the long tail of micro brands.
A lot of questions here: how many of these brands will survive the test of time? Isn’t it too risky to build a business on top of a single, closed platform? How can you tell which brand is authentic and which is not? Does it even matter? Only consumers hold the answers.
via medium.com
At this point, we are left to answer a critical question. How can we decide when to overrule our common sense? What should we do in the many, almost daily, situations where it’s impossible to verify the validity of a statement? When can we trust common talk? […]
I suspect the answer cannot be found in a positive theory of certainty, but in the acceptance that, as humans, our destiny is to live, and act, in doubt.
I finally managed to read this terrific piece by Stefano Zorzi and I can’t help but think the answer to these questions lies in fully embracing hermeneutics not as just a tool, but instead as the very foundation of being, as brilliantly argued by philosopher Gianni Vattimo in his groundbreaking 1983 essay “Dialectics, Difference, Weak Thought”.
via ribbonfarm.com
The basic premise is that, in the near future, a first “seed AI” will be created, with general problem-solving abilities slightly surpassing that of humans. This seed AI would start designing better AIs, initiating a recursive self-improvement loop that would immediately leave human intelligence in the dust, overtaking it by orders of magnitude in a short time. Proponents of this theory also regard intelligence as a kind of superpower, conferring its holders with almost supernatural capabilities to shape their environment (…)
This science-fiction narrative contributes to the dangerously misleading public debate that is ongoing about the risks of AI and the need for AI regulation. In this post, I argue that intelligence explosion is impossible — that the notion of intelligence explosion comes from a profound misunderstanding of both the nature of intelligence and the behavior of recursively self-augmenting systems.
Exhaustive post by François Chollet (author of Keras, currently working on deep learning at Google) refuting the dangerous misconceptions that fuel the AI debate.
via medium.com
Without beautiful, precise pictures of the product we wish to create, how do we gain resources to actually make them a reality? This new approach would turn the process on its head: it makes building and designing something one and the same. Rather than creating and presenting a design prototype, only to dismantle it in order to build and present a functional prototype (often at a lower quality), the functional prototype itself becomes the presented artefact, greatly reducing the cost of making it a stable, complete product.
The burden and responsibility of precise, perfect design should be shared between designers and engineers. The fact that this is true for every other related industry—architecture, industrial design, printed matter—and not for digital product design is indicative of nothing but the immaturity of our tools, processes, and philosophy.
Daniel Eden on the current state of design tools and workflows, short and on point.
I have always believed we need to stop treating design and development as two distinct disciplines and start following a more cohesive and authentic approach.
via daneden.me
For centuries, Europeans who could read did so aloud. The ancient Greeks read their texts aloud. So did the monks of Europe’s dark ages. But by the 17th century, reading society in Europe had changed drastically. Text technologies, like moveable type, and the rise of vernacular writing helped usher in the practice we cherish today: taking in words without saying them aloud, letting them build a world in our heads.
If a relatively small typographic advancement had such a profound impact on the way we construct reality, imagine what being constantly interconnected with every other individual in the world will mean for future generations.
via quartzy.qz.com
People in our industry think they stopped doing waterfall and switched to agile. In reality they just switched to high-frequency waterfall.
Agile became synonymous with speed. Everybody wants more, faster. And one thing most teams aren’t doing fast enough is shipping. So cycles became “sprints” and the metric of success, “velocity.”
But speed isn’t the problem. And cycles alone don’t help you ship. The problems are doing the wrong things, building to specs, and getting distracted. (…)
If a team works to a spec, there’s no point in iterating. The purpose of working iteratively is to change direction as you go. Defining the project in advance forces the team into a waterfall process. If every detail of the plan must be built, teams have no choice when they discover something is harder than expected, less important than expected, or when reality contradicts the plan.
Required reading on some of the most common misconceptions about product management and software development.
via m.signalvnoise.com
The problem with technology-enforced restrictions isn’t that they allow legitimate enforcement of rights; it’s the collateral damage they cause in the process. In my personal opinion the problems are (very concisely) that they:
- quantise and prejudge discretion,
- reduce “fair use” to “historic use”,
- empower a hierarchical agent to remain in the control loop, and
- condemn content to become inaccessible.
Insightful analysis by Simon Phipps of the troubling collateral damage of Digital Rights Management technologies. While the short-term disadvantages of DRM may seem obvious, especially from a UX perspective, the long-term implications could be far more damaging in terms of access to our cultural heritage.
via meshedinsights.com
Nice video from Vox and 99% Invisible’s Roman Mars about biomimicry.
It reminds me of Bruno Munari’s analytical study of plants and fruits in his must-read Design as Art, in which he meticulously describes and praises the essential features of natural objects as a source of inspiration:
This object [an orange, an almost perfect object where shape, function and use display total consistency] is made up of a series of modular containers shaped very much like the segments of an orange arranged in a circle around the vertical axis. Each container or section has its straight side flush with the axis and its curved side turned outwards. In this way the sum of their curved sides forms a globe, a rough sphere.
Mainstream artists are at the center of a circle, with each larger concentric ring representing artists of decreasing popularity. The average U.S. teen is very close to the center of the chart — that is, they’re almost exclusively streaming very popular music. Even in the age of media fragmentation, most young listeners start their musical journey among the Billboard 200 before branching out.
And that is exactly what happens next. As users age out of their teens and into their 20s, their path takes them out of the center of the popularity circle. Until their early 30s, mainstream music represents a smaller and smaller proportion of their streaming. And for the average listener, by their mid-30s, their tastes have matured, and they are who they’re going to be.
Two factors drive this transition away from popular music. First, listeners discover less-familiar music genres that they didn’t hear on FM radio as early teens, from artists with a lower popularity rank. Second, listeners are returning to the music that was popular when they were coming of age — but which has since phased out of popularity. Interestingly, this effect is much more pronounced for men than for women.
Great analysis and insights on this fascinating phenomenon, which I suspect is more about the lack of desire to discover new things than about popularity itself.
via skynetandebert.com
It saddens me to say it, but we are approaching the end of the automotive era.
The auto industry is on an accelerating change curve. For hundreds of years, the horse was the prime mover of humans and for the past 120 years it has been the automobile. Now we are approaching the end of the line for the automobile because travel will be in standardized modules. The end state will be the fully autonomous module with no capability for the driver to exercise command. You will call for it, it will arrive at your location, you’ll get in, input your destination and go to the freeway. (…)
Most of these standardized modules will be purchased and owned by the Ubers and Lyfts and God knows what other companies that will enter the transportation business in the future. (…)
This transition will be largely complete in 20 years.
Bob Lutz, former vice chairman and head of product development at General Motors, on the future of the automotive industry. We have been talking about this for a long time, but it’s impressive to hear it from an industry leader with decades of experience in the field.
via autonews.com
I quit Facebook seven months ago.
Despite its undeniable value, I think Facebook is at odds with the open web that I love and defend. This essay is my attempt to explain not only why I quit Facebook but why I believe we’re slowly replacing a web that empowers with one that restricts and commoditizes people. And why we should, at the very least, stop and think about the consequences of that shift.
Another good piece on the current state of the Web as an open platform, with some very practical examples.
While the idea of quitting social media may be unrealistic for most people, it’s important to raise awareness about what is at stake. No platform is inherently bad, as long as users understand what they are giving away in exchange for the service.
via neustadt.fr
So when people voice fears of artificial intelligence, very often, they invoke images of humanoid robots run amok. You know? Terminator? You know, that might be something to consider, but that’s a distant threat. Or, we fret about digital surveillance with metaphors from the past. “1984,” George Orwell’s “1984,” it’s hitting the bestseller lists again. It’s a great book, but it’s not the correct dystopia for the 21st century. What we need to fear most is not what artificial intelligence will do to us on its own, but how the people in power will use artificial intelligence to control us and to manipulate us in novel, sometimes hidden, subtle and unexpected ways. Much of the technology that threatens our freedom and our dignity in the near-term future is being developed by companies in the business of capturing and selling our data and our attention to advertisers and others: Facebook, Google, Amazon, Alibaba, Tencent.
Essential talk by sociologist Zeynep Tüfekçi on the (very current) risks of machine learning applications in social media and advertising networks.
via ted.com
The most common misconception about artificial intelligence begins with the common misconception about natural intelligence. This misconception is that intelligence is a single dimension. (…) Intelligence is not a single dimension. It is a complex of many types and modes of cognition, each one a continuum.
The main problem with AI remains finding a suitable definition of the concept of “intelligence”.
via wired.com
For the moment, reports of the e-book’s death are exaggerated. If the disinterest of Amazon and resistance from the book trade continue, however, there is a chance that the e-book is killed — in my view, prematurely. Publishers should see e-books as complementary to print rather than as competition. Letting the e-book die may benefit print sales in the short-term, but the wider transition to digital media consumption presents a longer-term threat. Books need to remain visible and distinct from other genres of writing in the competition for attention. Publishers may wish to build upon the success of e-book/audiobook bundling to build a sustainable future for the e-book.
Up to this moment, existing publishers have spent most of their efforts trying to replicate traditional publishing paradigms and business practices in the digital domain, mainly in order to preserve the status quo.
I believe that if we want to leverage the full potential of digital publishing we have to shift to an entirely new, social-driven approach.
I have been saying this for years; in fact, I literally took these paragraphs from a pitch I made in 2014.
via thebookseller.com
Can you think of any other demographic, ethnic or social group that anyone would claim is best influenced by targeting someone else? The whole science of marketing is based on finding the most relevant message and delivering it to the most probable buyer. Except when it comes to people over 50. Then all the rules are suspended. Because these people don’t count. They just “skew the data.”
So why are marketers and advertisers ignoring people over 50?
In this post from 2013, Bob Hoffman makes some great points against the universally accepted notion that teenagers are the most valuable demographic to target in advertising.
via adcontrarian.blogspot.se
So why do these sensibilities differ? Why is it that French people won’t talk about their salaries, but will take off their bikini tops? Why is it that Americans comply with court discovery orders that open essentially all of their documents for inspection, but refuse to carry identity cards? Why is it that Europeans tolerate state meddling in their choice of baby names? Why is it that Americans submit to extensive credit reporting without rebelling?
These are not questions we can answer by assuming that all human beings share the same raw intuitions about privacy. We do not have the same intuitions, as anybody who has lived in more than one country ought to know. What we typically have is something else: We have intuitions that are shaped by the prevailing legal and social values of the societies in which we live.
All these varieties of speech are beautiful, just as the varieties of butterflies are beautiful. No matter what your first language, you should treasure it all your life. If it happens not to be standard English, and if it shows itself when you write standard English, the result is usually delightful, like a very pretty girl with one eye that is green and one that is blue.
I myself find that I trust my own writing most, and others seem to trust it most, too, when I sound most like a person from Indianapolis, which is what I am. What alternatives do I have? The one most vehemently recommended by teachers has no doubt been pressed on you, as well: to write like cultivated Englishmen of a century or more ago.
Kurt Vonnegut, as brilliant as ever, explaining the importance of simplicity, clarity, and authenticity in writing.
via novelr.com
Profits focus the mind. There are so many things we could do as a company, but far fewer that really constitute the essence of why we’re here. Profits help us concentrate on what to do and what not to do. They help us shed things beyond that scope and keep the company fit, without accumulated layers of fat from chasing a thousand potential directions at once.
Jason Fried, founder & CEO at Basecamp, on the benefits of building a profitable business and living outside the tech bubble.
via m.signalvnoise.com
Typesetting on the web has evolved from a quirky afterthought into an invaluable practice. Within a span of twenty years complex interfaces that adapt to their environment, as well as an overwhelming number of typefaces, have bloomed all around us. Likewise, using animations and transitions or balancing display text in conjunction with powerful OpenType features became not only possible but expected. So where do we go from here? What are the skills we need to contribute to the future of typography? And what do two ghostly figures from the 15th century have to do with that future?
The big change is: there’s now no difference between ads and content. Content, the information you see on your feed, is targeted at you just like ads. And that content can be anything and serve any purpose. There’s no implied social contract for content to be true. Content is now weaponized for a purpose. In other words: content is now propaganda.
A more fundamental question lies here, hidden in plain sight: what makes fake news “fake”? The answers to this question could be way scarier than the technology itself.
via highscalability.com
“Part of the beautiful thing about books, unlike refrigerators or something, is that sometimes you pick up a book that you don’t know,” says Katherine Flynn, a partner at Boston-based literary agency Kneerim & Williams. “You get exposed to things you wouldn’t have necessarily thought you liked. You thought you liked tennis, but you can read a book about basketball. It’s sad to think that data could narrow our tastes and possibilities.”
In the recommendation age, algorithms have the power to confine us in “taste bubbles”, where we are only able to reinforce existing tastes, rather than develop new ones.
Automatic recommendations based on existing data can be really useful, but serendipitous discovery still plays a vital part in the way we shape our tastes. If skating to where the puck is going is our preferred strategy, we are going to miss out. A lot.
via wired.com
A politician, as well as a chemical engineer and entrepreneur, Olivetti had a philosophical view of entrepreneurship, one that put people and communities at the center of a business. He was a firm believer in the competitive advantage of treating workers fairly and investing in their wellbeing. Andrea Granelli, president of Associazione Archivio Storico Olivetti (Olivetti’s historic archive association), told Quartz “the profits from sales were invested in innovation, expansion, higher salaries, social services.”
Looking back at the rise and fall of Adriano Olivetti’s vision for sustainable and socially responsible entrepreneurship is at once depressing and extremely inspiring. So many lessons to learn from that experience.
via qz.com
The key to the Mac therefore becomes that which the iPad/iPhone isn’t: an indirect input device. The keyboard and mouse/trackpad are what define the Mac. The operating system, the apps, the UX, are all oriented around the indirect input method. The iPhone’s capacitive touch brought about the direct input method, a third pivot in input methods (first was mouse, second trackpad/scroll wheel). Each pivot launched a new set of platforms and the Mac is the legacy of the second. (…)
The touchbar coupled to the other two inputs is a totally new way to interact with computing products. It’s not an “easy” interface as it’s not direct manipulation. It remains indirect, a defining characteristic of the second wave. Indirect inputs are powerful and lend themselves to muscle memory with practice. This is the way professional users become productive. The same way keyboard shortcuts are hard to learn but pay off with productivity, touchbar interactions are fiddly but will pay off with a two-handed interaction model. They are not something you “get” right away. They require practice and persistence for a delayed payoff. But, again, that effort is what professionals are accustomed to investing.
Horace Dediu perfectly nails the purpose of the Touch Bar: an indirect, context-aware input method that fits neatly into the existing UI model while enabling a whole new class of interactions.
via asymco.com
The Italian type foundry Nebiolo of Turin was the biggest type and printing equipment manufacturer in Italy. It started in 1852 and thrived in the first half of the 20th century, but never made the transition to phototype. The foundry closed in 1978.
Great profile of Nebiolo, one of the most influential type foundries in Italy. The text in the article was aptly set in Forma, Nebiolo’s answer to Helvetica, designed by Aldo Novarese in 1968 and digitally revived by David Jonathan Ross.
via djr.typenetwork.com
This is a perennial question from non-designers and folks who don’t use typefaces. They do, of course, need them on a daily basis. Modern life would grind to a halt if every typeface suddenly vanished overnight. Typefaces are so ingrained into our existence that it seems like they’ve always been there. It’s a “problem” that’s been “solved”. Most people don’t see the typeface, not consciously anyway; they read the words. To notice the forms of the letters is a learned, higher-level process and largely unnecessary for daily life. If meaning and information have been successfully extracted from the words, conscious recognition of the typeface is unnecessary: any old typeface will do.
However, if this were strictly true, the purpose of typography would be to merely convey information, to crystallise spoken words into symbols. It would thus render people as simple automatons blithely absorbing data. Efficient, but utterly joyless. Our relationship to typography is like our relationship to food—we eat for pleasure, not simply for nutrition.
Delightful short piece about the meaning and purpose of typography by Klim Type Foundry.
via klim.co.nz
Archivio Grafica Italiana is the first digital archive dedicated to the Italian graphic design heritage. A growing overview to spread and promote the culture of quality that distinguishes the Italian design tradition. From the greatest classics to the best contemporary projects, commissioned by Italian clients or made by Italian designers, to explore and discover the fundamental aesthetic and cultural contribution brought by the Italian graphic design all over the world.
Typography is the visual component of the written word. But the converse is also true: without typography, a text has no visual characteristics. A goblet can be invisible because the wine is not. But text is already invisible, so typography cannot be. Rather than wine in a goblet, a more apt parallel might be helium in a balloon: the balloon gives shape and visibility to something that otherwise cannot be seen.
Matthew Butterick on the false dichotomy of form and substance.
via practicaltypography.com
Self-driving cars went viral again recently, when Tesla dropped a $2,500 software update on its customers that promised a new “autopilot” feature. The videos are fascinating to watch, mostly because of what’s not happening. There’s one, titled “Tesla Autopilot tried to kill me!” where a guy drives with his hands off the wheel for the first time. He hasn’t replaced driving with, say, watching a movie or relaxing—instead, he’s replaced the stress of driving with something worse. (…)
Somewhere in between where we stand now, annoyed at how much time we waste sitting in traffic, and the future, where we’re driven around by robots, there will be hundreds of new cars. Their success doesn’t simply depend on engineering. The success depends on whether we, the people, understand what some new button in our brand-new car can do. Can we guess how to use it, even if we’ve never used it before? Do we trust it? Getting this right isn’t about getting the technology right—the technology exists, as the Tesla example proved so horribly. The greater challenge lies in making these technologies into something we understand—and want to use.
Great piece about one of the most compelling questions around the advent of self-driving cars: how do we build trust in a machine?via fastcodesign.com
“I look at my Instagram feed and it’s a network; I’m seeing through the eyes of people around the world,” says the image-sharing app’s Teru Kuwayama. Following two decades as a noted photojournalist, covering war and humanitarian crises in Iraq, Afghanistan, Pakistan, and Kashmir, the TED Senior Fellow now works on the community team at Instagram, specifically with photojournalists and the wider photo community. “So many eyes and so many minds are coming online and being harnessed to this grid,” he says. For Kuwayama, this collective network and its unprecedented audience serves as the greatest draw for his involvement. “It’s unlocked a totally different spectrum of reporting,” he says.
So far, Instagram has succeeded where Twitter has failed. They say a picture is worth more than 140 characters, don’t they?via artsy.net
If history is a guide, the costs and convenience of radical transparency will once again take us back to our roots as a species that could not even conceive of a world with privacy. It’s hard to know whether complete and utter transparency will realize a techno-utopia of a more honest and innovative future.
But, given that privacy has only existed for a sliver of human history, its disappearance is unlikely to doom mankind. Indeed, transparency is humanity’s natural state.
A really interesting journey through the history and evolution of the concept of “privacy”, although arguing that privacy is (or isn’t) a natural condition seems as problematic as trying to define nature itself.via medium.com
A visual history of the design process behind Morning Boost’s brand new website: morningboost.co.uk
As a project I worked on comes to life, I like to compile a snapshot of the key iterations that led to its final, or rather initial, form, in order to spot early mistakes and snatch unexpected insights. It’s like a near-death experience, but safer.via dribbble.com
The web is not print. Webpages are not books. Therefore, the goal of Tufte CSS is not to say “websites should look like this interpretation of Tufte’s books” but rather “here are some techniques Tufte developed that we’ve found useful in print; maybe you can find a way to make them useful on the web”.
Agreed. I’ve always been fascinated with Edward Tufte’s distinctive typographic style, and this experiment is an interesting, web-focused interpretation of Tufte’s principles.via edwardtufte.github.io
Although the actual implementation of 3D Touch is somewhat problematic, the approach taken to the functionality assigned to this feature is the correct one: 3D Touch should be an enhancement to the user experience, not a requirement for achieving a user task. Indeed, so far, all the functionality provided by 3D Touch, whether in quick actions or peek-and-pop mode, is redundant: users who don’t have the latest iPhone, or who have trouble with 3D Touch, can still complete their tasks without using it and achieve the same kinds of actions, albeit in a more roundabout way. This redundancy is the right solution to the problems that gestures pose: lack of affordance and memorability, as well as difficulty in performing them.
Great in-depth analysis of 3D Touch by Raluca Budiu, Nielsen Norman Group. Adding a whole new dimension of interaction can be a double-edged sword, but Apple seems to have nailed it by encouraging the adoption of microsession-oriented patterns focused on efficiency, like “Quick Actions” and “Peek and Pop”.via nngroup.com
Naming artworks has always been important, not only because it’s useful to have a way to refer to the piece, but also, and much more importantly, to present a window into the creator’s vision and ideas, to clarify his intentions when creating the piece and to provide additional context to the visual. In ‘artistic’ photography the situation seems similar, and an image’s title sometimes holds much more than can be seen in the image itself, insinuating the photographer’s motives and feelings and hinting at things that could otherwise be missed. Even ‘Untitled’ images are often left untitled for a good reason. The title, or lack thereof, is a critical part of the art.
While watching Mr. Robot, Sam Esmail’s terrific new TV series, I couldn’t help noticing a small detail on main character Elliot Alderson’s personal computer.
I find it fascinating that in 2015 we still use Eurostile, the iconic typeface designed by Aldo Novarese in 1962 for the Turin based Nebiolo foundry, to convey modernity and technological edge.
Not only has Eurostile stood the test of time, it still dictates time.
The modern day automobile is not intuitive. Drivers need to learn to operate an automobile. Passengers have to conform to a car’s existing seating arrangement with only marginal modification. Software has the potential to change all of these limitations.
While Tesla has become a pioneer in electric vehicles and BMW continues to slowly build momentum in the space, neither company has actually altered the automobile’s fundamental purpose.
Great piece by Neil Cybart on how Apple uses design as an asset to marginalize industries.via aboveavalon.com
“Remember a week ago when everyone was saying they were going to return their TSMC devices for Samsung devices?”
Samsung Electronics on Wednesday forecast that its operating profits would rise in the third quarter, putting an end to an almost two-year decline — though the recovery is reportedly based mostly on chip sales.
Apple’s components business is more of a Samsung-saviour than any Galaxy phone has ever been an iPhone-killer.via appleinsider.com
“I can think of nothing that has done more harm to the Internet than ad tech,” says Bob Hoffman, a veteran ad executive, industry critic, and author of the blog the Ad Contrarian. “It interferes with everything we try to do on the Web. It has cheapened and debased advertising and spawned criminal empires.” Most ridiculous of all, he adds, is that advertisers are further away than ever from solving the old which-part-of-my-budget-is-working problem. “Nobody knows the exact number,” Hoffman says, “but probably about 50 percent of what you’re spending online is being stolen from you.”