Tweet Transparency

So, what is Twitter hiding?

It seems that there is a little bit of turmoil since Elon Musk made an offer to buy Twitter and then asked for information on the number of fake accounts on the platform - something they seem unwilling to provide, putting the takeover in jeopardy. While apparently he is entitled to ask for this information, I think the Twitter board knows that if they give it to him, he will make it public, and that would be detrimental, as the real figure likely far exceeds the 5% they claim.

If you look at the market reaction to Netflix losing subscribers for the first time in a decade, you can imagine what would happen if Twitter had to admit to that many fake accounts - it would put a serious dent in their credibility and their ad revenue business. As you can imagine, they don't want to give away their "secrets" unless they have to. Ironically, after kicking him off the platform, Twitter withholding this information is a bit like Trump not releasing his tax returns.

But I think this is a bigger problem across the platforms, as I imagine all of the ad-revenue-reliant companies do some level of fudging in order to improve their apparent profitability. And without transparency on these metrics, there is little way to verify them, so it becomes a process of,

Trust us

Do you trust the word of those who benefit from you trusting them?

And there is obviously a very large difference between the likes of Twitter and Netflix, because Netflix runs a paid subscription model, meaning users actually have to pay to be on the platform. While there are people sharing login information, there is no such thing as a "fake account" on Netflix, as there is no benefit in having one. On Twitter, however, there are benefits to fake accounts because they inflate the metrics, and since advertisers pay for impressions and click-throughs, there is an incentive to keep them churning out content and interaction.
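To make that incentive concrete, here is a minimal sketch of the arithmetic. All of the numbers, the function name and the use of CPM (cost per 1,000 impressions, ignoring click-throughs) are my own assumptions for illustration, not Twitter's real figures or pricing.

```python
# Hypothetical illustration: how a fake-account share dilutes ad value.
# Every number here is an assumption for the sake of the example.

def effective_cpm(quoted_cpm: float, fake_share: float) -> float:
    """Cost per 1,000 *genuine* impressions when a fraction of
    reported impressions comes from fake accounts."""
    if not 0 <= fake_share < 1:
        raise ValueError("fake_share must be in [0, 1)")
    return quoted_cpm / (1 - fake_share)

quoted_cpm = 6.50  # hypothetical price per 1,000 reported impressions

for fake_share in (0.05, 0.20, 0.50):
    print(f"fake accounts {fake_share:>4.0%} -> "
          f"effective CPM for real impressions: "
          f"${effective_cpm(quoted_cpm, fake_share):.2f}")
```

The same dilution logic is why the claimed 5% matters so much to an ad-revenue valuation: every extra percentage point of fakes means advertisers are paying more per real human, and sooner or later they reprice accordingly.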

For instance, I have long suspected that a lot of the players on Words with Friends are actually "players" - bot accounts designed to keep people like me engaged. When playing with friends it is possible to chat, but there are game modes where you play against a "stranger" and chat is not possible, nor is it possible to add them as a friend. And often in these game modes, the timed rounds are nearly always pushed to the last seconds, maximizing the "time on site" metrics, whereas when I take my turn, I am happy to play in the first few seconds if I have a word. While this doesn't affect me directly, it does change the pricing model for paying advertisers.

I wonder, if Twitter is monitoring their fake accounts, how many of them are actually Twitter's own fakes, designed by the platform itself to drive engagement and interaction - to create drama. This is a question for Facebook and Instagram too, because they are also able to heavily inflate the "impressions" they report to advertisers by manipulating these metrics, and they are no strangers to employing behavioral economists to gamify, manipulate and steer user behavior.

It is an interesting "problem" in many respects, especially considering that there are various laws that force advertisements to be labelled as such, while shares from users don't fall under those conditions. This means that user-generated advertising impressions are able to fly under the radar, creating a very strong incentive to increase this kind of content.

And in the case of Musk and Twitter, if there is a high number of fake accounts, the offer price significantly overvalues the platform, and if the numbers are released and are as high as many suspect, the share price is going to tumble hard. What the employees should really be worried about is not who is buying, but whether anyone is interested in buying at all.

I predict that over the coming few years, the lack of transparency on these centralized, ad-revenue-model platforms is going to get increasing attention. This will come from several directions, including the accountability of the platforms to manage and curate the content on their services. If they are legally responsible for what is hosted there and for the speed at which it gets taken down, they are going to have to very quickly identify accounts that distribute "risky" material. To meet that legislation, they will either have to catch everything, or admit and prove that they are incapable of identifying all instances and therefore wear the cost of breaking those laws. On top of this, there will be an increasing number of user-generated lawsuits, where people claim that they have been negatively affected by the platforms' inability to properly curate.

Centralized curation is not something they want to do, so they have to maintain their "we can't do anything" position, which puts them in a bit of a predicament: which is it - are they unable to accurately identify fake accounts, or are they unwilling to? Either way, it isn't good for them.

All of these very public questions will put increasing pressure on the platforms to give answers they don't want to give, and every time they fail, they will lose a little more ground - a few more users, a couple more content creators, a slice more advertising income. With their high-overhead models and shareholders looking for ROI, eventually it becomes unsustainable and they start making cuts, costing them yet more market share.

Five years from now, the social landscape that has been so influential in driving culture and behavior over the last 15 years is going to have evolved a lot, and one of the major components that will take it forward is Web3 tokenization and the real ability to earn from and own the platforms. However, this doesn't mean all platforms will be able to pivot to this model, and one thing should be remembered: no business is too big to fail, because history tells us that all businesses fail eventually.

Will the sale go through? It is 50/50 at this point, but one thing is becoming clear: the industry of society is changing.

We are ahead of the curve.

Taraz [ Gen1: Hive ]

Posted Using LeoFinance Beta