Attention conservation notice: 1000-word grudging concession that a bête noire might have a point, followed immediately and at much greater length by un-constructive hole-poking; about social media, by someone who's given up on using social media; also about the economics of recommendation engines, by someone who is neither an economist nor a recommendation engineer.
Because he hates me and wants to make sure that I never get back to any (other) friend or collaborator, Simon made me read Jack Dorsey endorsing an idea of Stephen Wolfram's. Much as it pains me to say, Wolfram has the germ of an interesting idea here, which is to start separating out different aspects of the business of running a social network, as that's currently understood. I am going to ignore the stuff about computational contracts (nonsense on stilts, IMHO), and focus just on the idea that users could have a choice about the ranking / content recommendation algorithms which determine what they see in their feeds. (For short I'll call them "recommendation engines" or "recommenders".) There are still difficulties, though.
— Back in the dreamtime, before the present was widely distributed, Vannevar Bush imagined the emergence of people who'd make their livings by pointing out what, in the vast store of the Memex, would be worth others' time: "there is a new profession of trail blazers, those who find delight in the task of establishing useful trails through the enormous mass of the common record." Or, again, there's Paul Ginsparg's vision of new journals erecting themselves as front ends to arxiv. Appealing though such visions are, they've just not happened in any sustained, substantial way. (All respect to Maria Popova for Brain Pickings, but how many like her are there, who can do it as a job and keep doing it?) Maybe the obstacles here are ones of scale, and making content-recommendation a separate, algorithmic business could help fulfill the vision. Maybe.
"Presumably", Wolfram says, "the content platform would give a commission to the final ranking provider". So the recommender is still in the selling-ads business, just as Facebook, Twitter, etc. are now. I don't see how this improves the incentives at all. Indeed, it'd presumably mean the recommender is a "publisher" in the digital-advertizing sense, and Facebook's and Twitter's core business situation is preserved. (Perhaps this is why Dorsey endorses it?) But the concerns about the bad and/or perverse effects of those incentives (e.g.) are not in the least alleviated by having many smaller entities channeled in the same direction.
On the other hand, I imagine it's possible that people would pay for recommendations, which would at least give the recommenders a direct financial incentive to please the users. This might still not be good for the users, but at least it would align the recommenders more with users' desires, and the diversity of those desires could push towards a diversity of recommendations. Of course, there would be the usual difficulty of fee-based services competing against free-to-user, ad-supported services.
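To make the incentive contrast concrete, here is a minimal toy sketch in Python. Every number and item in it is invented; nothing here comes from Wolfram's proposal. The point is only that a recommender paid by commission and a recommender paid by its users are maximizing different quantities, and so will in general lead the feed with different things.

```python
# Toy model (all numbers invented) of the incentive contrast above.
# Each candidate item gets an "engagement" score (a proxy for ad
# impressions, hence commission revenue) and a "satisfaction" score
# (a proxy for how much the user actually values seeing it).
items = [
    # (name, engagement, satisfaction)
    ("outrage-bait thread",  0.9, 0.2),
    ("friend's baby photos", 0.5, 0.8),
    ("long technical essay", 0.2, 0.9),
]

# Commission-paid recommender: revenue tracks engagement, so it
# promotes whatever maximizes engagement.
by_commission = max(items, key=lambda item: item[1])

# User-paid recommender: revenue tracks retention, hence satisfaction.
by_subscription = max(items, key=lambda item: item[2])

print("commission-paid recommender promotes:", by_commission[0])
print("user-paid recommender promotes:", by_subscription[0])
```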
Further: as Wolfram proposes it, the features used to represent content would already be calculated by the platform operator. This can of course impose all sorts of biases and "editorial" decisions centrally, ones which the recommenders would have difficulty overriding, if they could do so at all.
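As a sketch of why this matters, consider the interface a third-party recommender would actually see. The names and features below are my inventions, not anything in Wolfram's spec; the point is only that whatever the platform declines to encode in its feature vectors is invisible to every recommender downstream.

```python
from typing import Callable

# Hypothetical division of labor: the platform computes a fixed feature
# vector per post; a third-party recommender can only reorder posts
# using those features.
def make_feed(posts: list, score: Callable) -> list:
    """Rank posts with a recommender-supplied scoring function that
    sees only the platform's pre-computed features."""
    return sorted(posts, key=score, reverse=True)

# One recommender's rule, expressible only over platform features. If
# the platform never encodes, say, "is this post a correction or a
# retraction?", no downstream recommender can rank on that, however
# much its users might want it to.
def calm_feed_score(f: dict) -> float:
    return f.get("topicality", 0.0) - 2.0 * f.get("toxicity", 0.0)

posts = [{"topicality": 0.9, "toxicity": 0.8},
         {"topicality": 0.6, "toxicity": 0.0}]
print(make_feed(posts, calm_feed_score))
```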
Normally I'd say there'd also be switching costs to lock users in to the first recommender they seriously use, but I could imagine the network operators imposing data formats and input-output requirements to make it easy to switch from one recommender to another without losing history.
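Purely for illustration, a mandated interchange format might look something like the record below; no such standard exists, as far as I know. If every recommender had to accept and emit interaction histories in one agreed format, switching recommenders wouldn't mean starting over from a cold start.

```python
import json

# Illustrative interchange record (all field names hypothetical):
# the outgoing recommender exports the user's interaction history,
# and the incoming one imports it and trains on the same events.
history_export = {
    "format_version": "0.1",  # hypothetical standard's version string
    "user": "opaque-user-id-123",
    "events": [
        {"post": "post-789", "action": "liked", "ts": "2021-03-01T12:00:00Z"},
        {"post": "post-790", "action": "hid",   "ts": "2021-03-01T12:05:00Z"},
    ],
}

blob = json.dumps(history_export)   # outgoing recommender exports...
imported = json.loads(blob)         # ...incoming recommender imports
assert imported["events"][0]["action"] == "liked"
```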
— Not quite so long ago as "As We May Think", but still well before the present was widely distributed, Carl Shapiro and Hal Varian wrote a quietly brilliant book, Information Rules, on the strategies firms in information businesses should follow to actually make money. The four keys were economies of scale, network externalities*, lock-in of users, and control of standards. The point of all of these is to reduce competition. These principles work — it is no accident that Varian is now the chief economist of Google — and they will apply here.
Someone else must have proposed this already. This conclusion is an example of induction by simple enumeration, which is always hazardous, but compelling on this subject. I would be interested to read about those earlier proposals, since I suspect they'll have thought about how it could actually work.
*: Back of the envelope, say the prediction error is $O(n^{-1/2})$, as it often is. The question is then how utility to the user scales with error. If it were simply inversely proportional to the error, we'd get utility scaling like $O(n^{1/2})$, which is a lot less than the $O(n)$ claimed for classic network externalities by the Metcalfe's-law rule-of-thumb. On the other hand, it feels more sensible to say that going from an error of $\pm 1$ on a 5-point scale to $\pm 0.1$ is a lot more valuable to users than going from $\pm 0.1$ to $\pm 0.01$, not much less valuable (as inverse proportionality would imply). Indeed, we might expect that even perfect prediction would have only finite utility to users, so the utility would be something like $c - O(n^{-1/2})$. This suggests that we could have multiple very large services, especially if there is a cost to switch between recommenders. But it also suggests that there'd be a minimum viable size for a service, since if it's too small, a customer would be paying the switching cost to get worse recommendations.
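To put rough numbers on that envelope, here is a minimal sketch in Python; the ceiling $c$, the constant in the $O(n^{-1/2})$ term, and the switching cost are all invented for illustration.

```python
import math

C, A = 10.0, 5.0      # invented: utility ceiling and error-scaling constant
SWITCH_COST = 0.05    # invented: one-time cost of changing recommenders

def utility(n: int) -> float:
    """Utility of a recommender trained on n users: C - A * n**(-1/2)."""
    return C - A / math.sqrt(n)

for n in (10**3, 10**4, 10**6, 10**8):
    print(f"n = {n:>11,d}  utility = {utility(n):.4f}")

# The gap between the million-user and hundred-million-user services is
# about 0.0045, less than the switching cost, so several very large
# services can coexist; the thousand-user service trails the million-user
# one by about 0.15, far more than the switching cost, so no customer
# would switch down to it.
```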
The Dismal Science; Actually, "Dr. Internet" Is the Name of the Monster's Creator
Posted at March 26, 2021 14:03 | permanent link