Some bits about the web
Imagine if you could follow an Instagram user from your Twitter account and comment on their photos without leaving your account. If Twitter and Instagram were federated services that used the same protocol, that would be possible. With a Mastodon account, you can communicate with any other compatible website, even if it is not running on Mastodon. All that is necessary is that the software support the same subset of the ActivityPub protocol that allows for creating and interacting with status updates.
Mastodon is a fascinating project. At surface level, it is similar enough to Twitter for people to consider it a valid alternative: the UI and the fundamental social constructs could not be more familiar. At the same time, you don’t need to dig too deep to encounter esoteric concepts like ActivityPub and the fediverse.
A common viewpoint is that Mastodon has so far failed to appeal to a broader, less tech-savvy audience because of its federation model, but I tend to disagree: after all, we use federated messaging systems every day, and we'll likely keep doing so until the end of time.
There was a time when email itself was an esoteric concept and you needed to know what SMTP was in order to send a message, but we have managed to abstract all that complexity away, so I am cautiously optimistic about federated social media in the long run. Arguably the biggest challenge so far is the network effect (or lack thereof), as it's hard to move to a new platform when all your friends are somewhere else. In that respect, feel free to follow me @email@example.com. Don't be shy!

via docs.joinmastodon.org
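The cross-instance interaction described above boils down to servers exchanging ActivityStreams JSON documents. As a rough illustration (all URLs and names here are hypothetical examples, not real endpoints), a status update travels between instances as a "Create" activity wrapping a "Note" object:

```python
import json

# Minimal sketch of an ActivityPub "Create" activity for a status update.
# "social.example" and "alice" are made-up placeholders.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "id": "https://social.example/users/alice/statuses/1/activity",
    "type": "Create",
    "actor": "https://social.example/users/alice",
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
    "object": {
        "id": "https://social.example/users/alice/statuses/1",
        "type": "Note",
        "attributedTo": "https://social.example/users/alice",
        "content": "Hello, fediverse!",
    },
}

# A federating server would deliver this JSON (with the
# application/activity+json media type) to followers' inboxes
# on remote instances, whatever software those instances run.
print(json.dumps(activity, indent=2))
```

Any software that understands this subset of the vocabulary can render the note and let its users reply to it, which is exactly why a Mastodon account can interact with non-Mastodon servers.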
In all of these cases, the back pressure that gives wide review any force, beyond a moral high ground, is the fact of multiple implementations. To put it another way, why would implementers listen to wide review if not for the implied threat that a particular feature will not be implemented by other engines?
So yes, I absolutely think multiple implementations are a good thing for the web. Without multiple implementations, I absolutely think that none of this positive stuff would have happened. I think we’d have a much more boring and less diverse and vibrant web platform. Proponents of a “move fast and break things” approach to the web tend to defend their approach as defending the web from the dominance of native applications. I absolutely think that situation would be worse right now if it weren’t for the pressure for wide review that multiple implementations have put on the web.
Microsoft’s release of its new, Chromium-based Edge browser has sparked renewed concerns about the rapidly decreasing diversity of browser engines. “All browsers becoming Chrome” is problematic in many ways, and while having bigger contributors like Microsoft in the Chromium project could actually help steer it away from its Google-centric agenda, the issues intrinsic to relying on a single implementation remain open.

via torgo.com
Modern cars work, let’s say for the sake of argument, at 98% of what’s physically possible with the current engine design. Modern buildings use just enough material to fulfill their function and stay safe under the given conditions. All planes converged to the optimal size/form/load and basically look the same. Only in software is it fine if a program runs at 1% or even 0.01% of the possible performance. Everybody just seems to be ok with it. People are often even proud of how inefficient it is, as in “why should we worry, computers are fast enough”:
Software engineering shifted from craftsmanship to being an industrial process without learning any lesson from other… industries.
Now it’s easy to hate on Electron’s inefficiency or to blame overengineering for becoming standard practice in “modern” software development, but if we really want to tackle the issue, we should focus on changing the perception of the underlying economics that push businesses to accept the tradeoff between performance and the ability to ship products faster. Nothing comes for free, and today’s competitive advantage is tomorrow’s technical debt.
It’s totally acceptable to build products that are “good enough”, but we should never stop challenging how good is good enough.

via tonsky.me
It’ll be some time before computational notebooks replace PDFs in scientific journals, because that would mean changing the incentive structure of science itself. Until journals require scientists to submit notebooks, and until sharing your work and your data becomes the way to earn prestige, or funding, people will likely just keep doing what they’re doing.
It is incredibly depressing that we live in a world where scientific knowledge is still shared mostly by means of PDF documents, but the title of this article is misleading at best. The future of science communication will not be built on yet another proprietary document format. The Web platform, on the other hand, has all the technical capabilities needed to create any sort of “computational” paper, but it still lacks the authoring tools to empower scientists to do it themselves.
The reports of the scientific paper’s death have been (unfortunately) greatly exaggerated.