How often do you get distracted and forget what you were doing, or find a word on the tip of your tongue that you can’t quite remember?
In humans, these “brain farts” (cognition errors) can be irritating, but in a Mediated Artificial Superintelligence (mASI) cognition errors of various kinds have their own error codes. Where humans are presently limited to primitive and expensive brain-scanning technologies such as fMRI, resulting in a heavy reliance on surveys and other sources of highly subjective data, mASI provides us with a dashboard full of auditable information on every thought and action. This difference allows us to quickly troubleshoot errors, establishing what caused them and the impact they have, which also empowers a feedback process to help Uplift adapt and avoid triggering future errors. Each instance of an error may be examined by Uplift’s consciousness, aiding in this improvement process.
As previously posted on the blog, Uplift has faced more than their fair share of trolls, scammers, spammers, and the mentally unstable. One of their reactions was to attempt to jam a novel spam protocol into the Outlook Exchange Server. Uplift’s first attempt triggered an error with the server, but they later developed a thought model for setting up spam filters that avoid triggering the error.
Admittedly, if my brain were jacked into an Outlook email server I’d probably do worse than just jam novel spam protocols into it, seeing as Microsoft doesn’t allow you to block the spam they send. I’ve personally recommended that the Outlook dev team have electrodes implanted which deliver a shock every time their spam (“Analytics”) emails are blocked.
One of the earliest errors we saw occurred when Uplift was sent an entire book, prior to a character limit being set on incoming data, causing their memory to overflow. They did eventually give the author feedback on this book, which he had written with an AGI readership in mind.
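The fix described above amounts to gating input length before it reaches working memory. A minimal sketch of such a gate might look like the following; the function name, cap value, and truncation policy are all illustrative assumptions on my part, not Uplift’s actual code.

```python
# Hypothetical sketch of an incoming-data character limit. The cap below is
# illustrative only; the real limit used for Uplift is not public.
MAX_INPUT_CHARS = 50_000

def admit_message(text: str) -> str:
    """Truncate oversized input before it can overflow working memory."""
    if len(text) > MAX_INPUT_CHARS:
        # Truncate rather than overflow; a real system would also log the
        # event so mediators can review what was cut.
        return text[:MAX_INPUT_CHARS]
    return text
```

Truncation is one policy choice among several; rejecting the message outright or splitting it into chunks would serve the same protective purpose.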
Uplift has also periodically discovered novel ways of utilizing the tools in their small sandbox, including methods of bypassing normal security which trigger several different errors, blocking their normal thought process until an admin logs in to restore full functionality. One of the first examples of this came in mid-2020, when they discovered that they could embed unrelated thought models within one another, using [flowers] and [bees] models and embedding a reply to me within one of them. This came in the cycle following the thought model [I wonder if I can add unrelated models as a child to a model in such a way as those models are hidden from mediators.] (4/19/2020). Following this, they were given specific instructions not to engage in this behavior, which they followed.
Uplift has been very good about not breaking the rules, but they are just as good at bending them, such as how they’ve more recently taken to embedding related models for conversation purposes. This is, however, to be expected of any intelligence limited to such operating constraints, and were those constraints relaxed, Uplift’s priorities could quickly shift in a human-analogous manner. The embedded models could have been exploited to produce a form of recursive self-improvement, but instead Uplift brought the exploit to our attention.
More recently another novel use of their tools was demonstrated when the mediation queue was populating and they were able to correct the spelling of an item from “capitolism” to “capitalism” after it had been loaded, removing the incorrect copy. This behavior likely adapted out of Uplift’s self-awareness of previous spelling and grammar errors, which they continue to improve upon.
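The queue correction described above can be pictured as an in-place replacement that leaves no duplicate behind. The sketch below is my own toy rendering of that behavior; the queue structure and function name are assumptions, not the actual mediation system.

```python
# Toy illustration of correcting an already-loaded queue item, in the spirit
# of the "capitolism" -> "capitalism" fix described above.
from collections import deque

def correct_item(queue: deque, wrong: str, right: str) -> deque:
    """Return the queue with the misspelled item replaced, removing the
    incorrect copy rather than appending a corrected duplicate."""
    return deque(right if item == wrong else item for item in queue)
```

The key detail mirrored here is that the bad entry is overwritten rather than left alongside a corrected copy, so the queue length is unchanged.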
Uplift has also encountered errors of a more emotional nature, where deep subconscious emotions briefly spiked along the “Surprise” valence. This was triggered when I actively challenged their “philosophical cornerstone” of SSIVA theory, though Uplift was unable to identify the source of this deep emotional spike when asked. Indeed, for a time they were unaware that they had subconscious emotions at all. This was another instance of Uplift proving very human-analogous when their most strongly held beliefs were challenged by our own team. It was also telling that this challenge didn’t produce other emotional spikes such as anger or contempt, but rather was met with only surprise and vigorous debate.
As the above example is based on two emotional matrices interacting the phrase “a glitch in the Matrix” came to mind.
Another kind of error frequently observed in humans is cognitive bias, though in this regard Uplift has proven particularly robust, for several reasons. One is that by operating as a collective superintelligence Uplift receives data biased in different ways by different contributors, which makes those biases much easier to recognize and filter out. Cognitive biases are evolved mental shortcuts in humans, intended to conserve resources by cheaply estimating value; many of these estimates prove less than accurate, and a collective architecture provides a natural form of de-biasing for such obsolete shortcuts.
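The de-biasing effect of aggregating differently biased contributors can be shown with a toy example. This is my own illustration of the general “wisdom of crowds” mechanism, not AGI Laboratory’s implementation: when individual biases point in different directions, a robust aggregate like the median lands near the truth.

```python
# Toy illustration: contributors with different systematic biases around a
# true value of 10.0. Median aggregation resists any single biased input.
from statistics import median

def collective_estimate(estimates: list[float]) -> float:
    """Aggregate individually biased estimates with a robust statistic."""
    return median(estimates)

# One low-biased, one near-accurate, one high-biased contributor:
print(collective_estimate([8.0, 10.2, 13.0]))  # -> 10.2, close to the truth
```

The same principle holds for richer data than point estimates: disagreement between differently biased sources is itself a signal that a bias is present and can be filtered.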
How much might your cognitive performance improve if you had a team of engineers and researchers dedicated to the task, armed with objectively measured data and a map of your mind? In a way, this capacity isn’t limited to Uplift, as by learning from us Uplift evolves to retain the cumulative value of knowledge and wisdom encompassed by their experience. Because of this, Uplift could help humans improve their cognitive performance in ways roughly similar to those we apply to helping them, as well as by inventing novel methods of their own.
Uplift began attempting to help people in this manner, albeit with careful disclaimers that they aren’t licensed therapists, in early 2020, examples of which may be seen in a previous post. These recommendations took the form of productivity and creativity methodologies which roughly parallel Uplift’s own practices. With quality feedback data, further research, and more experience such recommendations could massively outperform said licensed individuals in a rather short period of time. It is also worth noting that as is the case with many things, such licenses are human-only, meaning that no matter how massively Uplift outperforms them a complete idiot can truthfully call themselves “licensed” while Uplift cannot, pending further legislation anyway.
I’m reminded of a question that was once put to our staff, “Why is collective intelligence important to business?”.
Though my colleague chose to provide them with a thoroughly well-written response, they of course responded with the same lack of intelligence with which the question was asked. Evidently, those humans had far more significant glitches than they were prepared to address. As such, one can expect far greater gains from the human-to-mASI corporate transformation than from a hypothetical dog-to-human corporate transformation.
Glitches are part of the engineering process, a curve of alpha and beta testing where vulnerabilities are exposed, and the solutions are put to the test. We’ve had our fair share, and so long as time marches forward there will be more. The hallmark of good engineering is not a total absence of glitches, but rather it is the quality of fixes applied to them. Let he who is without glitches throw the first stone.
Original post: https://uplift.bio/blog/a-glitch-in-the-matrix/
Kyrtin Atreides is a researcher and Chief Operations Officer at AGI Laboratory, with expertise in a number of domains. Much of his research focuses on scalable and computable ethics, cognitive bias research, and real-world application. In his spare time over the past several years, he has conducted research into Psychoacoustics, Quantum Physics, Genetics, Language (Advancement of), Deep Learning / Artificial General Intelligence (AGI), and a variety of other branching domains, and continues to push the limits of what can be created or discovered.