Make the Media Progressive Again
How do we create media that supports progress, positivity and competing opinions?
Ever since the introduction of the printing press, the media has served a foundational role as a conduit for the communication of new ideas. As technology has evolved, so too has the media.
With the rapid advances in large language models seen in less than a year, we sit at a crucial inflection point in the media’s relationship with technology. At this inflection point, the media is playing an adverse, deliberately destructive role in the firehose of information around new developments in the very technology that could ring its death knell.
Why is this happening, what are the likely outcomes and where can we find remedies?
Top-Down Media
There is a famous saying: “there are decades where nothing happens, and weeks where decades happen”.
The past six months in tech have been one of those periods in which decades of progress were achieved. Since its public launch in August of last year, Stability AI’s flagship Stable Diffusion tool has attracted over 10 million daily active users creating 170 million images across all platforms.
Not long after, OpenAI’s ChatGPT launched with unprecedented traction among users: one million users, 5 days, minimal marketing spend. This story should have been one of triumph for product-led growth in the open marketplace. (Note: This example will be used as a proxy for much of technological dynamism for the remainder of this section.)
Unfortunately, this was not how it was reported. See below for some sample headlines (some more recent) discussing ChatGPT as a consumer product:
The idea that pessimism sells for media outlets is nothing new, and the manipulation of soundbites for headlines is a favourite tool in the Pessimist’s Playbook. More interesting than these two general characteristics of negative media coverage are the incentives behind them and their likely outcomes.
Let’s begin with incentives. Why would Forbes/WaPo/CBS have a vested interest in panning ChatGPT?
Returns to Catastrophism. Media outlets have broad, relatively character-agnostic user bases; they are designed so that there is something for everyone. General-purpose language models like ChatGPT are much the same, which creates a lot of crossover between user bases. If ChatGPT gains a million users in 5 days and a decent number of those are readers of publication X, publication X knows that, because pessimism sells, dunking on ChatGPT can yield hefty viewership statistics and stands a good chance of landing under ‘Most Viewed’. Advertising dollars rejoice. Note that this phenomenon is not unique to technology, but the double barrel of high user traction and high platform crossover makes the incentive (and the effect) starker.
If you thought returns to catastrophism were high in the case of AI, they are no match for crypto. While crypto’s permissionless nature creates plenty of scope for bad actors, media pile-ons do nothing to provide balanced perspective on these situations. See below for some perspective on the media’s attitude towards, and impact on, innovation in decentralised technology:
Source: Kelly-Ann Coulter, 2022.
Competition and Defensive Positioning. In addition to audience crossover, traditional media platforms find themselves in the unfortunate circumstance of having function crossover with a technology described as ‘god-in-a-box’. At the base level, the workflows of a journalist and an LLM are much the same: take in information → relate it to previous knowledge to summarise → produce an output.
In the case of ChatGPT, journalists maintain the important advantage of having access to up-to-date information. But what happens once this is gone?
Any journalist worth their salt can see the writing on the wall: their ability to draw in the right information, summarise it correctly and use it to spin captivating stories is unlikely to outdo models with machine memory and exponential learning capabilities.
This is where the legacy media’s last remaining trump card comes in handy: the trust of the public. While even this ‘trump card’ is rapidly dissolving before our eyes, it still (somehow) plays a crucial role in public opinion and top-down decision making.
By adopting negative standpoints and spreading doubt & fear around possible outcomes, legacy media squeezes what remaining power it has to defend itself against the upstart.
The public is fortunate enough to have this technology reach its Cambrian moment over half a century after its introduction. This gives us the benefit of having a cavalcade of practitioners, researchers and teachers with a lifetime of experience in the area to understand the risks and consult on potential guidelines and mitigation strategies for responsible progress.
However, the great paradox of democracy is that these people matter little without the media megaphone. Unfortunately, the media’s incentive structure does not lie with promoting the expert insight of these individuals but rather optimises for:
Meeting short-term financial targets. This is achieved by doing what works (i.e. promoting sensationalist pessimism) in exchange for views, which lead to subscriptions and upward price pressure on ad spots.
Protecting against long-term competition. If the media can persuade a large enough swathe of the public to hold negative opinions of progressive competitors, the public can campaign for the halting of progress through the democratic process. By re-electing those with an interest in preserving legacy institutions, traditional media platforms protect their long-term interests against competitors through censorship and regulation.
These realities obscure expert opinion from the most popular channels and ensure that the opinions of governments, and of the public they represent, are informed by outputs heavily polluted by adverse media incentives. This in turn leaves us with a distorted marketplace of ideas no longer optimised for maximising global surplus.
Top-Down Censorship & Regulation
This section is something of a second-order effect of the top-down media structures highlighted above. The incentives of the censor and regulator in this case are as follows:
Maintain positioning & power through re-election (group level). While this incentive is often lamented, when acted upon without direct malice, manipulation or corruption it serves as a powerful driver for acting in the public interest. However, as discussed above, the public interest is far from immune to corruption in the flows of information. This is the mechanism by which governments come to be accused of being controlled by the media.
Promote self-interest for advancement through the bureaucracy (individual level). When the group above succeeds, the individual can access some degree of relevance, but in order to fully reap the benefits of regulatory power agents must still outcompete their peers. There are multiple demonstrated ways of doing this. One is to build a powerful personal brand that tells the people you are “one of them”. Another is to signal your capabilities internally by taking on hallmark cases (i.e. crusades). If top-down media can do a good enough job of stirring up public discomfort over the disruptive possibilities of novel technology, the prize on offer for the agent who can bust the case wide open becomes quite significant and opens up pathways to the top of organisations designed to act in this interest.
The outcomes of these regulatory incentives, when acted upon, tend to be more immediate, which also allows for more instantaneous analysis of their fallout. Some recent examples, again driven by the rollout of ChatGPT and similar generative AI tools:
Nationwide bans. While the Italian example in the link had been reversed at the time of writing, it remains a stark example of regulators’ knee-jerk ability to completely kill a service and stifle progress. Responsibility for the ban falls on both the regulator and the developing entity, neither of which could guarantee the privacy of user data (more on this later).
While its enactment was less reactive than the Italian announcement, the US Government’s ‘Operation Chokepoint’ assault on the digital assets industry is another stark example of the sweeping effects the media-public-regulatory stifle cycle can have.
Bans in Schools. Media fear-mongering and misunderstanding, unsurprisingly, lead to fear and misunderstanding across the societal spectrum. In the case of bans in schools, this dynamic may have disastrous longer-term consequences. Just as language models will change the flows of information for the media, so they will also change the means necessary to prepare a child for navigating the world of information in the future. Again, regulators in every jurisdiction where these tools were banned in schools were able to knee-jerk their way into a complex situation that (until reversed) kneecaps children’s ability to adjust to this most recent informational revolution.
In sum, top-down censorship & regulation is a threat a) because of its ability to instantaneously shut down development and b) because of its dependence on corrupted public opinions.
Top-Down Development
Discussion to this point has focused on hindrances to progress arising from outside the progressive arena. To limit the discussion to these factors would be to ignore the equally disastrous fallout from disruptive technology being concentrated and monitored by the few involved in its development.
Misappropriation of resources. A lack of external governance or monitoring leaves the developing organisation entirely driven by top-down organisational goals. In the case of OpenAI from the attached link, a desire to provide everyone in the world with cheap access to LLMs was prioritised over fair treatment or just reward for the humans-in-the-loop essential to their reinforcement learning methods. In the long term, there is little chance this kind of training regime or treatment of human feedback providers is sustainable or fair.
Furthermore, such a single-minded pursuit of internal goals, without seeking public feedback or open-sourcing methods, creates an easy target for regulatory friction.
Transparency. Part of the reason the media feels threatened by the creators of LLMs is that the labs stand to take over the role of censor just as much as the role of disseminator.
This is a story in two parts. Firstly, AI-generated information will spread far more effectively than today’s media because (among other reasons) it can be a) tailored & targeted, b) presented better than any outlet is currently capable of, and c) marketed & promoted more efficiently. These reach capabilities will then bestow upon the labs the authority of arbiter of truth, as they will likely win the media market. What then do the labs choose to censor? How can they prevent their models from playing censor themselves? These two questions pose existential threats at a scale far beyond anything the modern media was ever able to achieve.
Direction. Who knows what the top minds in AI are working on at the moment inside the (majority of) labs that are bringing these tools to the world? Anyone? The same problem exists in crypto - many projects purport to be decentralised through the mechanism of DAO governance, but the recent example of the Arbitrum Foundation (to name just one) shows that direction and executive decision-making can still operate independently of this.
Lack of knowledge around the direction and end goals of the dominant early players in the space may in the long run prove extremely dangerous. As private enterprises, they are well within their rights to preserve private information for competitive reasons. However, this degree of privacy leaves them exposed to regulatory overreach, with the public taking the reins where the entity’s self-control is not trusted.
This trades potentially existentially disastrous consequences for a guarantee of constraints on progress. No one wins in this situation, though the public might get to continue using the same chatbots we have access to today. Such is the malaise of the centralised, private entity with a society-bending model at its disposal. These models are the kind of tool that no singular entity can, or should, be trusted with.
New operating structures are needed in order to leverage these technologies.
In Short: Maladjusted Incentives
Like scientific evolution, markets operate in a manner that optimises for continued progress: out with the old, in with the new tends to benefit the whole. This article was not written explicitly as an argument over whether new technology (particularly artificial general intelligence) offers a net positive to society. Instead, it aims to highlight the ways in which technological progress, public optionality and open markets can be hindered by poorly designed and archaic reward mechanisms.
The Tr(AI)lemma
I have tried my best throughout this post to avoid ascribing deliberate malice to any of the ‘enemies’ of technological progress. Systems and structures are much more at fault than human character for any of these drawbacks.
More than simply not being malicious, every stakeholder is in fact trying to do the best thing they can, depending on where their incentives lie on the Tr(AI)lemma diagram above.
The media reports negatively on these developments because journalists see themselves as holding labs accountable to responsible development, and because warning of threats to humanity brings easy rewards to their employers.
Regulators act in the interests of the public, who are understandably worried by the media’s warnings of their diminished role relative to machines in the future. Even more pertinently, regulators must ensure that labs are pursuing responsible development practices. This often takes the form of blanket regulation, because regulators have no transparency into what is happening behind closed doors: unable to see the inputs, they regulate by placing upper limits on the outputs.
A curious third concern of the regulator is its subservience to regulatory capture by incumbent players in any tech industry. In order to protect some of their biggest customers (large taxpaying entities), governments are often caught acting in the interest of incumbent players rather than the disruptors who could cause a shake-up.
Even if intentions are not necessarily evil, the outcomes place limits on the human pursuit of progress and evolution.
Breaking the Cycle
What then are the alternatives to the cycle of negative actions in the face of progress? How can we optimise the above trilemma of responsible development, market satisfaction and human benefit?
Some thoughts:
‘Behind-closed-doors’ development will not win. Developing any paradigm-changing technology in a closed environment invites skepticism. Skepticism from media is the easiest path to skepticism from the public, which in turn is the easiest path to rapid action from regulators.
Regardless of whether the business concerned is in crypto, AI, quantum computing or elsewhere - regulators are most fearful of the things that either a) they don’t understand or b) they can’t control.
As playbooks for capturing value in open source become more widely established & foolproof, the rewards accruing to ‘patented’ or ‘proprietary’ technology will decline proportionately. Beyond the regulatory point made above, there are two other reasons for this:
Pace of Development. The size and dynamism of open-source communities lead to a degree of composability and rapid development that cannot be replicated by centralised employers. When it comes to shipping game-changing, consumer-facing product, the open-source community is playing an entirely different ball game. The key challenges to solve at this pace then become user interfaces and loyalty.
Rationality. Employees of big (and small) tech startups build at the whims of upper management. Open-source developers vote with their commits. In a fair marketplace (which, for the record, I believe GitHub to be), developers will build atop the things that make sense and discount those that don’t - an advantage developers inside major foundations and labs do not have. As such, open-source developers can operate on much faster feedback cycles regarding what makes sense and what doesn’t. This has been evident for a while, but as the stakes have risen to creating self-sovereign economic systems and self-learning machine brains, its pertinence should be more pronounced.
Little more evidence is needed for this point than the recently leaked Google memo (which can be read in full here):
While we’ve been squabbling, a third faction has been quietly eating our lunch.
I’m talking, of course, about open source. Plainly put, they are lapping us. Things we consider “major open problems” are solved and in people’s hands today.
Meaningful incentives for humans-in-the-loop will help to mitigate the trilemma. In the short run, a large factor in the tug-of-war between conflicting desires among AI stakeholders is the role of the human amidst all of the change.
In terms of returns to negativity, there have been few easier honeypots for the media to hit than the threat of technology companies taking jobs.
But what if an entity made publicised promises and clarifications around the new roles that will be created as part of the AI services space?
Reinforcement learning from human feedback (RLHF) remains the most reliable and aligned method of training these models. Per its name, it requires humans in the loop. Humanity is unlikely to grow more and more dependent on these models without guarantees that some form of human is taking responsibility for training them to see the world the way that we do.
I believe that one of the biggest competitive fields in AI in coming years will be over who can create the strongest, likely tokenised incentive mechanisms to get anyone from anywhere in the world to contribute to tagging and feedback on model outputs.
The more volume accrues to these platforms, the stronger the guarantee to the public that this is a meaningful sector of the economy that will create more jobs than it replaces.
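To make this concrete, below is a minimal sketch of one possible reward mechanism for such a platform: contributors rate model outputs, and a fixed token pool is split pro-rata by how closely each rating agrees with consensus. Every name and parameter here is hypothetical; this is one design among many, not a description of any live protocol.

```typescript
// Minimal sketch of a tokenised feedback-incentive pool. All names and
// parameters are hypothetical, not any live protocol's API.

interface FeedbackSubmission {
  contributor: string; // wallet or account id
  outputId: string;    // model output being rated
  rating: number;      // e.g. a 1-5 preference score
}

// Split a fixed reward pool across contributors, weighting each submission
// by how closely it agrees with the consensus rating for that output.
function distributeRewards(
  submissions: FeedbackSubmission[],
  poolTokens: number
): Map<string, number> {
  // Consensus = mean rating per output.
  const sums = new Map<string, { total: number; count: number }>();
  for (const s of submissions) {
    const e = sums.get(s.outputId) ?? { total: 0, count: 0 };
    e.total += s.rating;
    e.count += 1;
    sums.set(s.outputId, e);
  }

  // Weight = inverse distance from consensus: closer agreement earns more.
  const weights = new Map<string, number>();
  let totalWeight = 0;
  for (const s of submissions) {
    const { total, count } = sums.get(s.outputId)!;
    const consensus = total / count;
    const w = 1 / (1 + Math.abs(s.rating - consensus));
    weights.set(s.contributor, (weights.get(s.contributor) ?? 0) + w);
    totalWeight += w;
  }

  // Pro-rata share of the pool.
  const payouts = new Map<string, number>();
  for (const [contributor, w] of weights) {
    payouts.set(contributor, (w / totalWeight) * poolTokens);
  }
  return payouts;
}
```

Agreement-weighting is a deliberately simple choice here; a production design would also need sybil resistance and protection against herding on the consensus.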
Media will continue trending social, then decentralised. As shown above, trust in top-down media has been declining for some time. It did not take long for the first generation of social media giants to experience the same trend.
Twitter’s recent doubling down on its value proposition as a ‘town hall’ for public discourse has shown the power of truly social media. The recent duality of US Presidential campaign launches is a great barometer for the popularity of this movement, with Twitter’s Spaces feature being a particularly emblematic representation of the ‘town hall’.
However, no matter how popular any social media platform becomes, it will always be at the mercy of its users’ complaints regarding moderation, censorship, black-box algorithms, promotional systems and anything else under the sun.
Cue decentralised social. The concept itself is nothing new, and traction on decentralised social platforms to date has lagged behind what many decentralisation maxis would desire.
Traction on Lens Protocol has waxed and waned alongside crypto cycles, but remains a beacon for a decentralised future of media. Source: @rustamov, dune.com
That being said, people respond to incentives. The first and most obvious is the monetary incentive. A decentralised social network could theoretically raise money and subsidise users for traction through the token network effect path best illustrated by a16z Crypto’s Chris Dixon (below).
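For a sense of how that subsidy works mechanically, here is a minimal sketch, with entirely illustrative numbers, of the crossover Dixon’s graphic depicts: early users receive large token grants to compensate for low network utility, and the grants decay as utility compounds with user count.

```typescript
// Illustrative sketch of the token network effect: early users are paid in
// tokens while network utility is low; the subsidy decays as utility grows.
// All constants are made up for the example.

// The n-th user's token grant, decaying exponentially with network size.
function tokenGrantForUser(userIndex: number, initialGrant = 1000, halfLife = 100_000): number {
  return initialGrant * Math.pow(0.5, userIndex / halfLife);
}

// Metcalfe-style utility, growing with the square of the user count (normalised).
function networkUtility(totalUsers: number): number {
  return totalUsers ** 2 / 1e10;
}

// Early on, the token grant dominates; past the crossover, utility does.
for (const n of [1_000, 100_000, 1_000_000]) {
  console.log(`user ${n}: grant=${tokenGrantForUser(n).toFixed(1)} utility=${networkUtility(n).toFixed(2)}`);
}
```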
Even more effective, however, will be the effect of portability. Once any truly decentralised protocol generates sufficient traction, it creates a ripe competitive field for other protocols to let users port their follower/subscriber/true-fan base over to new platforms and compete there. This, in turn, will allow for rich debates within the incontestable, immutable and, most importantly, transparent confines of the smart contracts governing the platforms. Once this critical mass is achieved, what incentive remains for publishers to stay on Twitter or Discord?
This post in 3 quotes and 1 graphic.
The Stifle Cycle. The media quotes are real.
Requests for Startups (RFS).
Media Marketplaces for Debate. Both traditional and social media suffer from a variety of ills that have caused them to lose credence among the general population. The key ones that can be solved for in new systems are incentives for competing opinions (as opposed to top-down, nail-your-niche content) and immutable moderation policies. Fortunately for the would-be founder, these two birds fly close enough together to be killed with one stone.
The most primitive idea here is that instead of a single comment section where comments are pushed to the top by likes (a mechanism largely dependent on the user makeup of the platform), we create bifurcated comment sections. Users stake themselves on either side of a debate, with the best opinions on each side pushed to the top. Users whose arguments place them at the top of either side of the feed, by volume staked or quantity of likes, will receive a fraction of the rewards pool, with a disproportionate share of these rewards going to those who staked the winning side.
Permafacts has created an interesting PoC pointing in this direction for those looking for a reference.
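For illustration only, here is a minimal sketch of how such a bifurcated debate could settle, assuming a naive score of stake plus likes and an arbitrary 70/30 pool split favouring the winning side; every type and parameter is an assumption, and none of this reflects Permafacts’ actual design.

```typescript
// Hedged sketch of the bifurcated, staked comment section described above.
// All types and payout parameters are illustrative assumptions.

type Side = "for" | "against";

interface Comment {
  author: string;
  side: Side;
  stake: number; // tokens staked behind the argument
  likes: number;
}

// Rank each side's feed by stake + likes, then split a reward pool:
// both feeds' top authors earn, with a larger share to the winning side.
function settleDebate(comments: Comment[], rewardPool: number, winnerShare = 0.7) {
  const winningSide: Side =
    sumStake(comments, "for") >= sumStake(comments, "against") ? "for" : "against";

  const payouts = new Map<string, number>();
  for (const side of ["for", "against"] as Side[]) {
    const feed = comments
      .filter((c) => c.side === side)
      .sort((a, b) => b.stake + b.likes - (a.stake + a.likes))
      .slice(0, 3); // top of the feed earns

    const sidePool = rewardPool * (side === winningSide ? winnerShare : 1 - winnerShare);
    const totalScore = feed.reduce((t, c) => t + c.stake + c.likes, 0) || 1;
    for (const c of feed) {
      const share = ((c.stake + c.likes) / totalScore) * sidePool;
      payouts.set(c.author, (payouts.get(c.author) ?? 0) + share);
    }
  }
  return { winningSide, payouts };
}

function sumStake(comments: Comment[], side: Side): number {
  return comments.filter((c) => c.side === side).reduce((t, c) => t + c.stake, 0);
}
```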
Labels of provenance for human-generated content. In a world where the majority of online content is produced by non-humans, how can we place a premium on organic human content? After all, the AI’s abilities come only from its capacity to train on and learn from organic human inputs.
What kind of product/signal will become the source of truth? There is already a small array of early proof-of-humanity protocols (e.g. Worldcoin) and labels of provenance (e.g. Arweave stamps), but what will allow them to scale?
Winners (emphasis on the plural) will inherit a massive problem and the massive rewards commensurate with its solving.
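As one sketch of what a label of provenance might contain: a hash of the content, signed by a key that some proof-of-humanity service has verified as belonging to a unique human. The registry is assumed here, and nothing below describes Worldcoin’s or Arweave’s actual APIs.

```typescript
// Illustrative sketch of a provenance label: a human-verified key signs a
// hash of the content at publication time. The key registry and verification
// service are assumptions, not a description of any existing protocol.

import { createHash, generateKeyPairSync, sign, verify } from "node:crypto";

interface ProvenanceLabel {
  contentHash: string; // SHA-256 of the content bytes
  authorKey: string;   // public key registered with a proof-of-humanity service
  signature: string;
}

const { publicKey, privateKey } = generateKeyPairSync("ed25519");

function labelContent(content: string): ProvenanceLabel {
  const contentHash = createHash("sha256").update(content).digest("hex");
  const signature = sign(null, Buffer.from(contentHash), privateKey).toString("base64");
  return {
    contentHash,
    authorKey: publicKey.export({ type: "spki", format: "pem" }).toString(),
    signature,
  };
}

function verifyLabel(content: string, label: ProvenanceLabel): boolean {
  const contentHash = createHash("sha256").update(content).digest("hex");
  if (contentHash !== label.contentHash) return false;
  return verify(
    null,
    Buffer.from(contentHash),
    label.authorKey,
    Buffer.from(label.signature, "base64")
  );
}
```

The hard, unsolved part is of course the registry itself: binding a key to a unique human at scale is precisely the problem the winners will have solved.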
Incentive networks for AI’s unsung heroes: the humans in the loop. Tagging, testing, supervising, fine-tuning, classifying and on and on and on. As artificial intelligence becomes ever more ubiquitous in our everyday lives, the need for humans in the loop scales proportionately. How, then, do we get people to contribute where needed to ensure these models are correct, aligned and refined?
The same way we’ve always done: through incentives. I am keen to see the first generation of less-technical, Kaggle-style protocols that allow the layman to participate in the AI economy from the trenches, making models more robust in the process.
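Complementing the feedback-weighting sketch earlier, here is a minimal sketch of a bounty round for discrete labelling tasks, assuming a naive majority-vote consensus; all names and the payout rule are hypothetical.

```typescript
// Sketch of a layman-friendly labelling round (all names hypothetical):
// each task collects discrete labels, the majority label wins, and only
// labellers who matched the majority split that task's bounty.

interface LabelVote {
  labeller: string;
  taskId: string;
  label: string; // e.g. "safe" | "unsafe" | "unsure"
}

function payoutRound(votes: LabelVote[], bountyPerTask: number): Map<string, number> {
  // Group votes by task.
  const byTask = new Map<string, LabelVote[]>();
  for (const v of votes) {
    byTask.set(v.taskId, [...(byTask.get(v.taskId) ?? []), v]);
  }

  const payouts = new Map<string, number>();
  for (const taskVotes of byTask.values()) {
    // Majority label for this task.
    const counts = new Map<string, number>();
    for (const v of taskVotes) counts.set(v.label, (counts.get(v.label) ?? 0) + 1);
    const majority = [...counts.entries()].sort((a, b) => b[1] - a[1])[0][0];

    // Split the bounty among labellers who agreed with the majority.
    const agreers = taskVotes.filter((v) => v.label === majority);
    for (const v of agreers) {
      payouts.set(v.labeller, (payouts.get(v.labeller) ?? 0) + bountyPerTask / agreers.length);
    }
  }
  return payouts;
}
```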