The Wizened Circuit
A submission to a contest, which had a kind of neato prompt. And cash prizes.
This here is a submission to a contest, the details of which can be found here.
It is difficult to write for a prompt which does not seem to fully understand itself. I shall opt for a simpler approach: I will define your prompt for you, as best I can. Then, once I have claimed your prompt as my own, I will write an essay off of that prompt instead, and you can judge for yourself whether it contains anything worthwhile. I suspect that the words which follow will be of little use to you, but on the off chance that they aren’t, I shall write them anyway.
There are three terms central to the matter being explored: knowledge, wisdom, and automation. All of these are tricky to define, but a firm definition allows for more thorough exploration, so I’ll strive to pin them down. For the purposes of this essay, let’s call knowledge the ability to form a coalition of ideas. To take concepts like yes and no, this and that, green and red, and put them all together in the same room, comfortably chatting with one another over a round of beers. ‘Yes, that red, this no green.’ The more knowledge that someone (or something) has, the larger the set of concepts which can occupy the room while still maintaining a conversation. Knowledge is lesser when the conversation breaks down (That yes this green red no), or when the number of concepts which coexist is reduced (That red).
As for wisdom, let’s say that it is the ability to effectively accomplish an intended goal through the use of knowledge. Wisdom is the application of knowledge to real-world situations, typically in the context of doing good. In a sense, wisdom is actually a more stringent form of knowledge: it isn’t enough for ideas to be in coherent communication with one another, they must also be practically useful. Good wisdom bridges the gap between knowledge and application, often teaching people more about how the world works in the process. It accounts for shifting environments, underlying motives, and missing information in order to choose a better path forward. Unwise actions or advice accomplish the opposite of their intended outcome, or fail to account for how an event will realistically play out.
Lastly is automation, the use of processes and machinery to economize a set of actions and outcomes, usually with the intention of minimizing required human input. Make something faster, simpler, and easier than it otherwise might be. Something which runs by itself, and accomplishes set goals reliably and repeatedly. Good automation is efficient, requiring the bare minimum of resources in order to accomplish a task. Bad automation takes too much, and gives too little. It eats up time and energy, and doesn’t reliably accomplish the intended goal.
With all of these defined, we can actually get into the meat of things. If there’s one thing I want to emphasize with this little essay, it’s the assertion that automating wisdom is a uniquely difficult task, likely to be met with barriers to entry which we have not yet seen in AI development. Discovering those barriers is the first step to solving them, of course, which is the main reason I’m here. In order to reveal the struggles ahead, it is most useful to diagnose the struggles which lie behind us, and how they have been overcome. With that said, allow me to briefly go over the automation of knowledge.
Most of the achievements of the modern era have been revolutions in the automation of knowledge. Prior to the treasure trove of knowledge housed on the internet, and computation robust enough to access the whole of it, automating something as vast and interconnected as ‘the sum of human knowledge’ was an inconceivable task. Yet, with determination, the right algorithms, and huge amounts of cash money, we can reliably say that significant breakthroughs in the field have been achieved. AI has been produced which speaks coherent English, able to answer questions about a wide-ranging variety of topics with fairly high accuracy. It can be reasonably argued that ChatGPT and its compatriots are more knowledgeable than the majority of human beings. They wield organized access to a vast array of topics, from the exceedingly niche to the broad and easily understood, and their capabilities are only expected to grow in the coming years.
But before we become too enamored with recent breakthroughs in AI, it is worth considering whether they represent a categorical shift, or simply incremental progress. People have made the mistake of confusing the two before, after all. Chess was once regarded as an inherently human game. Something which no computer could ever hope to understand, let alone conquer to the same degree as a grandmaster. These beliefs were wholly unfounded. Chess is a concrete system, with a fixed win state, and fixed rules which allow you to achieve that win state. It is a very complicated form of a static game, but a static game nonetheless. Due to the reliability of both the win state and the methods used to get there, chess was always a game well suited to automation. The only limitations being, of course, whether we have access to a large enough chess data set, and robust enough computation to utilize the whole of it. Once both of these are fulfilled, it becomes possible to automatically master chess.
Knowledge is similarly regarded as an inherently human activity, but this may be ill-conceived. As I’ve described it, knowledge is the coherent connection of a large spread of concepts and ideas. This actually makes it achievable through standard methods of automation. The “win state” of knowledge, the precise set of coherent connections which exist among the concepts which humans know about, is fixed, even if it is massive and complicated. Similarly, the actions needed to reach that end point are fairly rigid, composed of words and concepts whose definitions and relative connections are reliable. That’s half the point of language, after all: to provide a common outlet for sharing knowledge which is understood by a large body of very different people. If language isn’t reliable, then two people cannot use it for communication, and it ceases to be language. So, given the presence of a clear end goal and reliable actions (words) which can be taken to approach it: as long as a large enough data set of knowledge exists, and we have robust enough computation to measure it, the automation of knowledge is feasible.
This is where my definition of knowledge clashes with other, more common takes on what knowledge is. As I’m sure you know, knowledge as popularly understood is not a static entity. It has changed and expanded over time, aided by the hard work and teachings of billions of humans. This actually helps to flesh out my broader point. Automation does not work well with the conventional model of knowledge. When testing an AI model, the easiest way to flummox it is to get outside of its comfort zone. Talk about things which no one has ever talked about. Speak in ways that barely anyone uses, and set up obtuse rulesets which must be followed. Once the AI is forced to interact in an area with both a shifting end goal, and a shifting means of achieving that end goal, it falls apart. Of course it does. Automation is very bad at dealing with unknowns.
Which leads us to the troubles of automating wisdom. Unlike knowledge, wisdom comes with difficulties which sidestep the strengths of automation, rather than simply challenging them head on. The most valuable wisdom comes from knowing the correct path forward when nobody else does. Wisdom which everyone is already privy to is largely useless. As such, most wisdom arises from areas of low knowledge, where the optimal end goal and the ideal means of reaching it are initially unknown. This environment is directly contrary to the large, reliable data sets which modern day automation is built upon. Wisdom is not simply a matter of complexity in most cases. It is a matter of discovery and discrimination.
In order to automate wisdom, it is necessary to remove one of automation’s greatest tools. You must limit the information which is accessible, rather than expanding it. To give an example through the lens of chess, a knowledgeable AI will succeed at the game after playing millions of matches. A wise AI will succeed after only a few. It needs less data in order to accomplish the same result, since wisdom multiplies the impact which any knowledge has on effective decision making. Conveniently enough, this makes wisdom measurable. Something essential, if we hope to automate it. An AI that manages to gain and maintain larger amounts of knowledge over a shorter span of computational “time” can be said to be wiser. It makes more efficient use of lower levels of knowledge.
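The efficiency framing above can be made concrete with a toy metric. Here is a minimal sketch in Python, assuming we can measure a learner’s performance after each training sample; the learners, their curves, and the scoring rule are all illustrative inventions of mine, not an established benchmark:

```python
# A toy sketch of the wisdom metric proposed above: capability gained per
# unit of training data. The learners, their curves, and the scoring rule
# are illustrative assumptions, not an established benchmark.

def wisdom_score(performance_curve):
    """Given accuracy measured after each training sample, score how
    efficiently the learner turned data into capability: the average
    height of the curve rewards climbing early, on few samples."""
    if not performance_curve:
        return 0.0
    return sum(performance_curve) / len(performance_curve)

# Two hypothetical learners on the same task. The "knowledgeable" one
# needs many samples to climb; the "wise" one climbs almost immediately.
knowledgeable = [0.1, 0.2, 0.3, 0.5, 0.7, 0.9, 0.95, 0.95]
wise = [0.6, 0.9, 0.95, 0.95, 0.95, 0.95, 0.95, 0.95]

assert wisdom_score(wise) > wisdom_score(knowledgeable)
```

On this definition, two learners that end at the same accuracy can still differ in wisdom: the one whose curve climbed earlier made more of less data.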
The trick of things is that such an AI is unlikely to arise from current methods of training. Most modern AI are being trained to automate knowledge, not wisdom. If we wish to make progress in the automation of wisdom, AI need to be trained on the process of training itself. They need to learn the most reliable methods of learning new material while utilizing limited resources.
I’m sure there are numerous ways to go about achieving this, but I’ll propose one, just to give an idea of where we might be headed.
When they lack information, wise AI should be trained in the process of hypothesis creation. Wisdom is the ability to accomplish goals using knowledge. If you lack the knowledge needed to immediately accomplish your goal, then the wisest move is typically to create more knowledge for yourself. With enough tweaks, AI is surely capable of doing this, likely using its own pre-existing stores of knowledge as an initial template, modifying and reintegrating them, then being rewarded or punished based on its newfound accuracy. The best way to manage this would likely involve taking a “grown up” version of an AI, privy to a comprehensive set of data, and setting it towards judging the performance of several “child” AI, which have had certain sets of data excluded from training, leaving gaps in the knowledge they have access to. After a human prompts the children to discover and elaborate on a certain set of data not in their training, the grown AI would periodically check for accuracy, and for the time spent getting to that level of accuracy, rewarding or punishing as appropriate. Ideally, this would teach the learning process to the child AI, automating the process of learning new things, even in a low information environment. Child AI would steadily learn which lines of inquiry are most likely to achieve results, which lines of logic tend to hold up over time, and how much reinvention is optimal for synthesizing new, reliable knowledge. They would learn how to learn things, possibly saving billions in training costs, and making them far more useful advisors in low knowledge situations of significant importance.
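As a thought experiment only, the grown/child scheme described above can be caricatured in a few lines of Python. Every name and rule here (the fact set, the children’s guessing policy, the judge’s time-discounted reward) is a made-up stand-in for what would, in reality, be a full training pipeline:

```python
import random

# A deliberately cartoonish sketch of the scheme above: a "grown" judge
# holds the full fact set; a "child" learner starts with gaps and must
# guess at withheld facts (hypothesis creation); the judge rewards
# accuracy, discounted by how long the child took to reach it.

FULL_KNOWLEDGE = {f"fact_{i}": i % 7 for i in range(50)}  # the grown AI's data

def make_child(gap_fraction, rng):
    """Withhold a random fraction of facts, leaving gaps in the child's knowledge."""
    withheld = rng.sample(sorted(FULL_KNOWLEDGE), k=int(gap_fraction * len(FULL_KNOWLEDGE)))
    known = {k: v for k, v in FULL_KNOWLEDGE.items() if k not in set(withheld)}
    return known, withheld

def hypothesize(known):
    """Child's guessing policy: extrapolate from what it already knows
    (here, crudely, the most common value among its known facts)."""
    values = list(known.values())
    return max(set(values), key=values.count)

def judge(guesses, steps):
    """Grown AI: reward accuracy on the withheld facts, discounted by time spent."""
    accuracy = sum(1 for target, guess in guesses if FULL_KNOWLEDGE[target] == guess) / len(guesses)
    return accuracy / (1 + 0.1 * steps)

rng = random.Random(0)
known, withheld = make_child(0.2, rng)
guesses = [(target, hypothesize(known)) for target in withheld]
reward = judge(guesses, steps=len(withheld))
assert 0.0 <= reward <= 1.0
```

The key inversion relative to standard training is that the judge’s signal is not accuracy alone but accuracy per step taken, pushing the children to make more of less.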
They might also hopelessly flop around like fish out of water. In fact, that is most likely what they will do. Tests of this variety, even if successful, would likely need to be followed up with a slew of other tests. One testing an AI’s ability to account for hidden motives. One testing an AI’s ability to account for literally unknowable outcomes. One testing an AI’s ability to create, or avoid creating, wisdom which defies well tested strains of logic. The list goes on. Decision making is difficult in the best of times, but one of the reasons we hold wisdom in such high regard is because it encourages good decision making in the absolute worst of times. I understand the desire to automate the production of such an important and valuable trait, but prior to accomplishing such a goal, perhaps we first need to impart our own wisdom to the machine. AI must learn how to learn. It can’t simply observe how we have progressed through the centuries; it must create that progression with its own digital hands. Without this fundamental, underlying trait, any wisdom it gives will be biased towards knowledge systems which are already in place. Valuable, yes, but valuable in a stagnant fashion. Valuable in the areas where wisdom is least necessary.
By better understanding how we have gotten this far, we can better see how to go farther. I am certain my inexperience has shown through in the preceding paragraphs, but I hope they have given you something worthwhile to think about nonetheless. My thanks to the persons who were kind enough to present such an interesting prompt to the broader world; this has been a unique topic to ponder.