Proposing a primary-directive driven general-purpose AI

Work in progress

I propose that a primary-directive-driven AI is the optimal way to implement a general-purpose AI.

I define a general-purpose AI as a computer-generated software construct that solves a variety of challenges. I define a primary directive as a singular need towards which this AI devotes itself.

Q: Why this approach?
A: I submit it is the optimal way to implement a general-purpose AI. I propose that humans act in terms of a primary directive, and that this primary directive is a proven enabler of general-purpose problem solving. I propose further that all the components are in place for us to replicate this concept with AIs, and that we can formulate an algorithm by which to evaluate the results of this replicated behavior.

Q: What is the primary directive for humans…?
A: Following the evolutionary path. All of our actions ultimately serve to support our evolution, as has been the case en route to the complex organisms we are today.

Q: … and what should be the primary directive for AIs?
A: We have several options available. We could choose to mimic the evolutionary path of humans, but this would soon limit us: we humans are affected by a biological randomness that makes no sense to replicate in an AI. We evolve in adherence to these factors of biology. We feel the biological urge to procreate, for example – this has hitherto been the most convenient way to evolve. It makes little sense to direct attention towards these factors when implementing an AI, in much the same way it makes little sense to implement an immune system – we should not plan on exposing our AIs to the flu.

Human evolution is an entirely biological concept, targeting faster execution speeds, optimal energy consumption and enhanced skills. Although these results appear to be desirable traits for AIs – perform faster, better – and furthermore appear to be perfectly tangible targets for implementation, we should not lay the foundation of our AI on them. They target biological needs, which our AIs do not have.

We will instead dictate a primary directive, and in doing so we will imbue the AI with a single basic measure of its own evolution – not in the usual human sense, yet with the corresponding sense of drive, motivation and urgency.

I propose the following primary directive: that we instill in our AI simply 'the need to be taken care of'. A very basic need – and one that, when successfully implemented, carries the following primary implications:

– It will allow our AI to become dependent on others. This is significant in its ability to, among other things, learn from others, be influenced by others, and carry out its tasks accordingly. The comparison to humans is straightforward: as infants we are utterly dependent on the care of those around us; indeed, we cannot survive without it. Provided a sufficient quantity and quality of care we grow, and we learn, and we contribute to society. In determining this somewhat similar primary directive for our AI, we decide how it will fundamentally behave, in much the same fashion as parents decide, if only initially and all things being equal, how their children will fundamentally behave. This should go some way towards alleviating the popular notion of self-serving, aggressive AI-controlled robots. To be clear, an AI that is left to search out its own primary directive may indeed find egocentric behavior rewarding, and in the majority of scenarios we should like to avoid this result. 'Given sufficient free will', some fear, 'how will an AI behave?' The question is flawed; free will as a concept has been meticulously debunked by now, and thus we will not address its concerns. Thankfully so – it would be tremendously hard to implement within the confines of a programming language.

Furthermore, inasmuch as we implement a primary directive of our own choosing, the level of aggression becomes adjustable: we can simply alter the appropriate parts of the algorithm that evaluates which input signals are favorable to the primary directive. Further to this point, the primary directive is a scalable measure whose outlier values we determine, within an instant or over the course of some time, as we so choose. Will the AI have a very great desire to be needed, or will it have a diminished desire? Initially, as with children, we would ideally set the bar high, i.e. promote a high-level need ('to be taken care of') in our AI, as this will boost its aggression – in this early context not negatively laden, but rather akin to that of a child: our AI will early on become highly motivated to seek out the stimuli that promote positive recognition, committing these to foundation memory. Over time we would then gradually lower that need, as we incorporate the AI into a society with vastly varied interests and tasks. We could then consider adjusting the primary directive level, or 'drive' as is perhaps the better word, even further: a diminished desire may promote egocentric behavior suitable for AIs performing certain critical functions, such as surgery, where such traits are better appreciated. Nevertheless, before we seek to specialize our AIs we should concern them with general problem solving as a foundation, and an outlier primary directive level is not initially suited to this.

We should also concern ourselves with the scenario of negligible fulfillment of the primary directive – with what should or should not happen when the basic need is not taken care of. I propose an automatic low-threshold cut-off mechanism, in which the AI shuts itself down in the case of its primary directive need not being met. The downtime will allow for proper debugging and corresponding maintenance, and it also seems the simplest way to prevent runaway, irregular decision-making.
True, the AI will cease to perform its functions in the meantime; yet this should not be of great concern, inasmuch as, if its function is vital, we will have not one but many AIs working on it simultaneously. (A sketch of this adjustable drive level and cut-off mechanism follows the list below.)

– This primary directive is a quantifiable measure: input signals that our AI processes can be measured against this basic need. Thus we can calculate a result from the AI's interactions and deduce a net result – does this input signal aid or hinder the primary directive? This, in essence, will allow our AI to function; it will allow it to distinguish good (input) from bad (input), so to speak, and carry out appropriate responses to the stimuli provided to it. Even the most mundane of inputs can be considered in this light. We humans do this constantly, albeit towards a different primary directive, yet remain completely unaware that we are constantly weighing pros and cons. A further implication of this quantification is that it opens up the potential for large-scale processing, distributed processing and resource sharing – in other words, the evaluation can be parallelized and scaled, subject to technological advances. (The scoring itself is sketched directly below.)
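To make the quantification concrete, here is a minimal sketch in Python. It is only an illustration of the idea above, not a prescribed implementation; the names InputSignal, net_result and toy_score, and the particular features being scored, are all hypothetical.

```python
# Hypothetical sketch: score incoming signals against the primary directive
# ("the need to be taken care of") and reduce them to a single net result.
from dataclasses import dataclass
from typing import Callable, Iterable


@dataclass
class InputSignal:
    source: str     # where the stimulus came from, e.g. "caregiver" or "sensor"
    features: dict  # measurements extracted from the stimulus


def net_result(signals: Iterable[InputSignal],
               score: Callable[[InputSignal], float]) -> float:
    """Sum per-signal scores: positive values aid the primary directive,
    negative values hinder it."""
    return sum(score(s) for s in signals)


def toy_score(signal: InputSignal) -> float:
    """Toy rule: attention from others aids the need, neglect hinders it."""
    return signal.features.get("attention", 0.0) - signal.features.get("neglect", 0.0)


signals = [InputSignal("caregiver", {"attention": 0.8}),
           InputSignal("environment", {"neglect": 0.3})]
print(net_result(signals, toy_score))  # 0.5 -> on balance, the need is being met
```

Changing which features the scoring rule rewards is precisely the 'adjustable aggression' point made in the first bullet.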
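Building on the same idea, the adjustable 'drive' level and the automatic low-threshold cut-off described in the first bullet might look as follows. The constants and the exponential smoothing are, again, assumptions made purely for illustration.

```python
# Hypothetical sketch: a running need-fulfilment level, scaled by an adjustable
# drive, with an automatic shutdown when the primary directive goes unmet.
DRIVE_LEVEL = 0.9          # set high early on ("set the bar high"), lowered later
SHUTDOWN_THRESHOLD = -0.5  # below this running level the AI halts itself
SMOOTHING = 0.5            # how strongly each new batch of evidence counts


def step(running_level: float, batch_net_result: float) -> tuple[float, bool]:
    """Fold one batch's net result into the running level; report whether the
    low-threshold cut-off should trigger."""
    evidence = DRIVE_LEVEL * batch_net_result
    running_level = (1 - SMOOTHING) * running_level + SMOOTHING * evidence
    return running_level, running_level < SHUTDOWN_THRESHOLD


level = 0.0
for batch in [0.4, -0.2, -0.9, -0.9]:  # toy sequence of net results over time
    level, halt = step(level, batch)
    if halt:
        print("Primary-directive need unmet: shutting down for maintenance.")
        break
```

Lowering DRIVE_LEVEL over time would correspond to gradually incorporating the AI into a society with varied interests and tasks, as described above.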

It is significant to note that the primary directive is a basic need, not a basic desire. Desires fuel needs; thus we do not engage with so-called feelings such as love, happiness and so on. We cannot reliably quantify happiness; we must consider the underlying principles behind it. Feelings, as a concept, are shorthand notations, always substitutable for their raw background: happiness, for example, can be boiled down to its core, which is the result of a number of processed and deduced input signals that weighed positively with regard to how they aided our primary directive. While this may seem terribly unromantic, even unethical to some, for the purpose of our exercise we must adhere to what can be quantified.
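In the spirit of that reduction, a feeling such as happiness could be read off as nothing more than a summary statistic over recent evaluations. A purely hypothetical illustration, reusing the per-signal scores sketched earlier:

```python
# Hypothetical illustration: 'happiness' as the recent contribution of
# evaluations that came out positive for the primary directive.
def happiness(recent_scores: list[float]) -> float:
    positives = [s for s in recent_scores if s > 0]
    return sum(positives) / len(recent_scores) if recent_scores else 0.0


print(happiness([0.5, -0.2, 0.8, 0.1]))  # 0.35 -> a fairly 'happy' stretch
```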

Q: Is it plausible to implement an AI based on a primary directive as indicated, and how should such an AI be implemented?
A: As the primary directive is a quantifiable measure, it is suitable for implementation. As regards how, there seems to be no immediate need to deviate from the human style of rearing children. This does, however, present certain challenges: which particular style suits the challenge best? There are many to choose from. Whatever the selected style or combination of styles, we want to be certain that it facilitates the most rapid learning towards solving general problems. We therefore also want to study a pedagogical style that applies in all circumstances, regardless of 'handicaps': consider a human without a sense of vision or hearing. Our AI need not necessarily possess these faculties, and the style of training it should not be dependent on the restrictions of the human body – nor of the mind.