
Caught in the Web

@tarazkp

Talking with a colleague yesterday, they mentioned a problem they see with the claim that blockchain transparency is a benefit: it rests on the error of thinking that more information helps us make better decisions. Their evidence is that although we have more information at our fingertips than ever before, we aren't actually making better decisions.

While this is true, there are three problems with the information we currently have. The first is that most of the information that would facilitate our decision-making isn't actually available to us; it is collected, collated, stored and sold by a handful of corporations. The second is that the information isn't in a usable form, because its sheer volume makes it impractical to apply to our lives for the most part. And the third problem is obvious: much of the information available isn't actually trustworthy.

However, as I was explaining to my friend (the colleagues I discuss these things with are often also friends), these three things are related: if we were able to trust the information, there would be a lot less of it we would need to consider, but in order to trust it, we would need to have access to it.

Essentially, what we would need to build is a web-of-trust system, where each piece of information can be verified and traced. As perfect information doesn't exist, there will be noise and errors in the data collected, so after verifying enough data points, a confidence level could be assigned to each piece of information. A simple filtering system could then, for example, discard all information with less than a 90% confidence score.
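To make the idea concrete, here is a minimal sketch of such a filter. The `Claim` structure, the way confidence is computed (share of verifying data points among all points checked), and the 90% threshold are all illustrative assumptions, not a real protocol:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    verifications: int   # independent data points that agree
    contradictions: int  # independent data points that disagree

def confidence(claim: Claim) -> float:
    """Naive confidence: fraction of checked data points that verify the claim."""
    total = claim.verifications + claim.contradictions
    return claim.verifications / total if total else 0.0

def filter_trusted(claims: list[Claim], threshold: float = 0.9) -> list[Claim]:
    """Keep only claims whose confidence score meets the threshold."""
    return [c for c in claims if confidence(c) >= threshold]

claims = [
    Claim("A", verifications=19, contradictions=1),  # confidence 0.95
    Claim("B", verifications=3, contradictions=2),   # confidence 0.60
]
print([c.text for c in filter_trusted(claims)])  # → ['A']
```

A real system would weight verifiers by their own trust scores rather than counting them equally, but the shape of the filter is the same.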

However, even if we were to omit "untrusted" information, there would still be far too much to actually sort through and apply. It would, though, be possible for an AI to group similar information and create "rules" that take it all into consideration while still leaving room for error.
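The grouping step could be sketched like this. The similarity measure (keyword overlap), the greedy assignment, and the 0.5 threshold are hypothetical stand-ins for whatever an actual AI system would use:

```python
def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap between two keyword sets: 1.0 = identical, 0.0 = disjoint."""
    return len(a & b) / len(a | b) if a | b else 0.0

def group_similar(items: list[str], threshold: float = 0.5) -> list[list[str]]:
    """Greedy grouping: each item joins the first group it sufficiently resembles."""
    groups: list[list[str]] = []
    for item in items:
        words = set(item.lower().split())
        for group in groups:
            if jaccard(words, set(group[0].lower().split())) >= threshold:
                group.append(item)
                break
        else:
            groups.append([item])
    return groups

items = [
    "rate hike slows inflation",
    "rate hike slows growth",
    "new phone released today",
]
print(group_similar(items))
# → [['rate hike slows inflation', 'rate hike slows growth'],
#    ['new phone released today']]
```

Each resulting group can then be summarized into a single rough "rule", which is exactly the compression our own heuristics perform, just over far more data.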

We do this already as humans, using our own diagnostic capabilities, paired with experience and social learning, to apply stereotypes to our world, creating rough rules for how to live our lives. However, when we do this we exclude far more relevant information than we include, because we simply can't experience enough, learn enough, or hold enough circumstance-dependent information in mind at any one time.

Our heuristics help us navigate the world by saving us a mass of mental energy, but they are always imperfect. And, while an AI system is not going to be perfect either, if it is able to sort through and make adequate sense of a very large volume of data, it will be better at it than we are.

For example, I was talking about ChatGPT, the AI bot that can answer questions. And, while in the news there are stories of it making errors, the reality is that based on the information it has available and its ability to answer questions on any topic, it is "smarter" than any individual human on earth, even if collectively we are smarter than it.

For instance, if it makes an error in calculating some mathematical equation, there is likely only a handful of people who would be able to identify the error it made, whilst the other 99.99 percent of us are none the wiser. And it is able to do this on pretty much any topic that we are specialized in, because it utilizes the information we are feeding it, information from experts. It doesn't have to be a mathematician; it just needs to be able to apply the equations already in existence. This means it is more practically intelligent than any individual on earth.

However, that doesn't mean it has to dominate us all. Instead, it could be used to help us create useful heuristics: collecting data from the best, parsing it through the gauntlet of related and influencing factors described by other experts in those fields, and then pushing it through further filters generated from real-world experience, to predict what is most likely to work.

Would you trust the suggestion?

This is an interesting question, because we trust experts every day to give us information, even though they are far less knowledgeable and far less rigorous in their process of testing and comparison than such a system would be. And given a suggestion from a source we trust, we are likely to follow the recommendation.

We can see this now of course at a far simpler level, because streaming services make recommendations for us "based on our previous viewing" and we trust them, never actually considering whether what is suggested is based on what we viewed at all. And while our viewing might loosely affect it, what they tend to suggest is what "coincidentally" makes them the most money, or elicits the behaviors they are seeking from us, or from a group of us. Just like Facebook was able to target groups of voters and influence their decisions.

Yet, without transparency on the information, we can't verify whether what is being suggested is valid, or loaded by a profit-maximization algorithm. It isn't "provably fair", which should immediately make us treat it as untrusted information and invalidate it, because we know that the corporations pushing the suggestions are definitely working to maximize profit - because they are businesses.

This is why decentralized information is the future, if we demand it. And while at this point it seems scary to lose our privacy, we actually have already lost it, it is just that we can't see what is being taken and have little access to how it is being used.

But the privacy argument in itself is a hijack, because while it is vitally important (to some extent) for individuals in a healthy society to be able to keep secrets, the system most benefits those able to keep the biggest secrets: the governments and corporations, who load the dice in their own favor to the detriment of the masses.

They of course don't want us to demand transparency of transactions, because it would inevitably lead to them losing their power over us, which is based on them being able to know about us, but us not knowing about them. There is a massive imbalance in the informational power structure, yet we do not demand change, because we fear being seen on the surface, even though everything is visible to the few watching below.

We think we are hiding, yet with every click, search and scroll, we are generating data and exposing ourselves more and more - making us ever more vulnerable to being controlled.

Something to ponder, whilst waiting for the spider.

Taraz [ Gen1: Hive ]

Posted Using LeoFinance Beta