Suggested by Chris Bura over 1 year ago
For better or worse, we're now online communicators. But for thousands of years, we as a species have communicated in person, read body language, and had context or background on those we've agreed and disagreed with. If someone floated the idea of committing violence, they wouldn't find a community of encouragers; instead the idea would be swiftly nipped in the bud.
I'm not saying social media is inherently bad - we can all think of numerous amazing things social media has brought about. Nonetheless there is a great unmet need.
If we look at the incumbent social media players that focus on thought, dialogue, and debate, it's really Facebook, Twitter, and Reddit. (There are others like Quora and the partisan sites Truth Social and Parler, and to a lesser extent Substack.) But let's focus on Twitter. As Musk states, "Twitter is the de facto town square". Even though just 10% of Twitter users account for 92% of its activity, Twitter affects all our daily lives more than any other platform. The prevailing winds of Twitter drive corporate initiatives and government policies. It's the megaphone of politicians, activists, media outlets, and influencers, which affects everything from local school board and District Attorney elections to congressional seats and presidential races. Twitter can inform us like no other, and ignite rhetorical wildfires leading to boycotts, harassment, death threats, and career endings.
Twitter didn't ask to be the town square or set the mood for the country, which is good, because it's poorly engineered to do so:
Some great reads:
Here's where I apologize for coming off overly dramatic... :)
We as a nation (and many other democracies) have never been more polarized, more nudged to the extremes, and more pessimistic about our country's future. Based on my own experience, interviews with others, and the work of many smart thinkers, our current options for online discourse are only exacerbating the problem.
“At what point then is the approach of danger to be expected? I answer, if it ever reach us, it must spring up amongst us. It cannot come from abroad. If destruction be our lot, we must ourselves be its author and finisher. As a nation of freemen, we must live through all time, or die by suicide.” - Abraham Lincoln
Are you interested in addressing this Unmet Need?
Generalist
I spent 2021 working at a pre-seed AI startup using ML and NLP to build a novel platform seeking to address mis/dis-information. We looked at Wikipedia, Quora, Reddit, Twitter, etc. Lots of thoughts on this subject.
CEO | Founder | Managing Partner @ Platform Venture Studio
Awesome - would love to hear more about what you learned!
Researching my next Startup
From Jack Dorsey today responding to a tweet about how he feels about Twitter.
CEO | Founder | Managing Partner @ Platform Venture Studio
@Tim Connors and I have discussed these issues a lot and I agree with the vast majority of what you write.
A core problem is who is doing the fact-checking, how can that practically happen at scale, and who is paying for it? i.e. how can the "fact-checker in the loop" model of traditional journalism be replaced by something that works at Internet scale?
Researching my next Startup
I think the ONLY way fact-checking (or Truth Vetting, as I like to call it) can happen at scale is via a gamified thought market.
Think of the "side-bar" courtroom concept. Imagine an impassioned post such as "The Inflation Reduction Act will NOT reduce inflation!!" followed by a bunch of comments supporting, opposing, etc. Someone can decide to drag this to the Truth Vetting widget within the platform. But unlike Quora, you can't just answer with your opinion. You have to include at least one piece of external evidence, such as a link, for your support or refutation. Then the crowd (who are already invested in the original post) can jump into the truth vetter and upvote whichever side they believe in. The side with the most votes is determined to be the truth. Back of the envelope visual.
Sounds crazy - but it's not unlike Wikipedia. Get enough people invested, AND design good anti-gaming features against frivolous or activist voting, and I think you get valid results. Then those who've contributed evidence have their credibility scores raised or lowered based on the result - which gives them greater (or lesser) voting power, and greater reach within the system.
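Here's a minimal sketch of that resolution step in Python. All the names (`Submission`, `resolve`) and the simple +1/-1 credibility adjustment are my own illustrative assumptions, not a spec:

```python
from dataclasses import dataclass

@dataclass
class Submission:
    """One piece of supporting or refuting evidence (hypothetical model)."""
    author: str
    side: str            # "support" or "refute"
    evidence_url: str    # at least one external link is required
    votes: int = 0       # upvotes from the crowd

def resolve(submissions, credibility):
    """Tally votes per side; the majority side is marked the truth,
    and each contributor's credibility moves up or down with the result."""
    tally = {"support": 0, "refute": 0}
    for s in submissions:
        tally[s.side] += s.votes
    winner = max(tally, key=tally.get)
    for s in submissions:
        delta = 1 if s.side == winner else -1
        credibility[s.author] = credibility.get(s.author, 0) + delta
    return winner, credibility
```

In a real system the credibility delta would itself be one of the tunable rules, not a fixed ±1.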
Advisor, Product Strategy @ Various Startups
In this system, what would qualify as outside evidence? Is it any external link?
Researching my next Startup
Hi - great question. Any external link, or, in the case of whistleblowing or not-yet-public evidence, uploads of videos or documents. The crowd can not only upvote, they can mark evidence as irrelevant. And if a trend of those marks occurs, the credibility score of the evidence submitter goes down.
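That "trend of marks" check could look something like this. The threshold and penalty values are illustrative assumptions (and would themselves be adjustable):

```python
def apply_irrelevance_flags(flags, total_votes, credibility, submitter,
                            trend_threshold=0.3, penalty=1):
    """If the share of 'irrelevant' marks on a submitter's evidence
    crosses a threshold, lower that submitter's credibility score.
    Threshold and penalty are hypothetical tunable values."""
    if total_votes and flags / total_votes >= trend_threshold:
        credibility[submitter] = credibility.get(submitter, 0) - penalty
    return credibility
```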
Advisor, Product Strategy @ Various Startups
Cool. What anti-gaming features do you have in mind?
Researching my next Startup
Didn't realize how skinny replies of replies get in the UI. LOL
Researching my next Startup
love the question - anti-gaming is unfortunately one of the biggest challenges.
I see any human-behavior-based algorithm as a series of dials on a big panel. Each dial is basically a rule. Turn a dial down to zero and the rule is effectively off; anything above zero gives the rule some degree of effect. Dials can be adjusted via machine learning or administratively.
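A quick sketch of that dial panel, assuming each rule produces a signal in [0, 1] and its dial is a multiplicative weight (class and rule names are hypothetical):

```python
class RulePanel:
    """A panel of adjustable 'dials': each rule has a weight; 0 turns the
    rule off, anything > 0 scales its contribution. Weights could be tuned
    by machine learning or set administratively."""
    def __init__(self, **weights):
        self.weights = weights                 # rule name -> dial setting

    def set_dial(self, rule, weight):
        self.weights[rule] = max(0.0, weight)  # dials don't go negative

    def score(self, signals):
        """Combine per-rule signals (each in [0, 1]) into one score."""
        return sum(self.weights.get(r, 0.0) * v for r, v in signals.items())
```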
Here's an example of a Truth Vetting question. Let's say the evidence is pretty overwhelming for a "yes", but a "yes" hurts some ideology, so you have a bunch of activists emailing and messaging people to come vote "no". How do you prevent this?

First, you require some effort. Users can't just vote without reviewing the evidence, so you add an event handler to verify that a link has been clicked or a document opened (plus ID obfuscation techniques in case crawler bots made it past registration). That eliminates the laziest of votes.

Next you have to deal with the dedicated ones who do click the evidence but still vote in bad faith. This is where scale and history come into play. The system can infer a user's ideology from who they follow and who follows them (I think I'd also ask whether a person leans right or left during signup). The system also knows how many times a particular voter ended up on the "winning" or "losing" side of previous Truth Vettings, and what the ideologies of the other voters on both sides were. (Do they ever cross their ideological aisle and end up on the "winning" side?) The system also considers when they voted. Are they leaders or followers? If history shows X votes come in per hour, did they vote early, middle, or late in the voting lifecycle? It also looks at the gap between this vote and their registration date. Was it below X days/hours? If so, it could indicate an activist-campaign voter.

These signals are not exhaustive by any means. All these dials (rules) are adjustable, but together they equate to a "Credibility Score". Someone with a great track record of voting - mostly ending up on the winning side, voting across the ideological aisle, rarely getting down-voted on civility - will have a voting weight greater than 1:1; they might have an impact of 10:1. A person who meets all the criteria of an activist vote might be weighted 0, or maybe 1/10:1 (but again, this is a dial that can be adjusted as well).
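The weighting idea above could be sketched like this. Every field name, dial value, and multiplier here is an illustrative assumption, not a proposed spec:

```python
def vote_weight(user, dials):
    """Combine the anti-gaming signals described above into one vote
    weight: did the voter open the evidence, how new is the account,
    and do they have a history of crossing the ideological aisle."""
    weight = 1.0
    if not user["opened_evidence"]:
        return 0.0                              # laziest votes count for nothing
    if user["account_age_days"] < dials["min_age_days"]:
        weight *= dials["new_account_factor"]   # possible activist-campaign voter
    if user["cross_aisle_wins"] > 0:
        weight *= dials["cross_aisle_bonus"]    # history of voting against own lean
    return weight
```

A brand-new account that did open the evidence might count for a tenth of a vote; a voter with a cross-aisle track record might count for several.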
Advisor, Product Strategy @ Various Startups
Let's see if we can get these columns to < 3 chars wide!
Really interesting ideas. Would love to see if these would be durable in the real world. Truth and civility are such hard problems to solve for.
CEO | Founder | Managing Partner @ Platform Venture Studio
Seems like we need a min width!