Proposing a primary-directive driven general-purpose AI

I propose a primary-directive driven AI as the optimal way to implement a general-purpose AI.

I define a general-purpose AI as a software construct that solves a variety of challenges. I define a primary directive as a singular need towards which this AI devotes itself.

Q: Why this approach?
A: I submit it is the optimal way to implement a general-purpose AI. I propose that humans act in terms of a primary directive, and that this primary directive is a proven enabler of general-purpose problem solving. I propose further that all the components are in place for us to replicate this concept with AIs, and that we can formulate an algorithm by which to evaluate the results of this replicated behavior.

Q: What is the primary directive for humans…?
A: Following the evolutionary path. All of our actions ultimately serve to support our evolution, as has been the case all along the route to our current, complex form.

Q: … and what should be the primary directive for AIs?
A: We have several options available. We could choose to mimic the evolutionary path of humans, but this would soon limit us: we humans are subject to a biological randomness that it makes no sense to replicate in an AI. We evolve in adherence to these biological factors. We feel the biological urge to procreate, for example – this has hitherto been the most convenient way to evolve. It makes little sense to direct attention towards these factors when implementing an AI, in much the same way it makes little sense to implement an immune system – we should not plan on exposing our AIs to the flu.

Human evolution is an entirely biological concept, targeting faster execution speeds, optimal energy consumption and enhanced skills. Although these targets appear to be desirable traits for AIs – perform faster, perform better – and furthermore appear to be perfectly tangible targets for implementation, we should not lay the foundation of AI-building on them. They target biological needs, which our AIs do not have.

We will instead dictate a primary directive, and in doing so we will imbue the AI with a single basic measure of its own evolution – not evolution in the usual human sense, yet with the corresponding sense of drive, motivation and urgency.

I propose the following primary directive: that we instill in our AI simply ‘the need to be taken care of’. A very basic need, too – and one that, when successfully implemented, carries primarily the following implications:

– It will allow our AI to become dependent on others. This is significant in enabling it to, among other things, learn from others and be influenced by others, and to carry out its tasks accordingly. The comparison to humans is straightforward: as infants we are utterly dependent on the care of those around us; indeed we cannot survive without it. Provided a sufficient quantity and quality of care, we grow, we learn, and we become net positive contributors to society. In determining this somewhat similar primary directive for our AI, we decide how it will fundamentally behave, in much the same fashion as parents decide, if only initially and all things being equal, how their children will fundamentally behave. This should go some way towards alleviating the popular notion of aggressive, AI-controlled robots.

To be clear, an AI that is set out to determine its own primary directive may indeed find egocentric behavior rewarding, and in the majority of scenarios we should like to avoid this result. 'Given sufficient free will', some fear, 'how will an AI behave?' The question is flawed; free will as a concept has been meticulously debunked by now, and thus we will not address its concerns. Thankfully so – it would be tremendously hard to implement within the confines of a programming language! Furthermore, in as much as we implement a primary directive of our own choosing, the level of aggression becomes adjustable: we merely alter the appropriate parts of the algorithm that evaluates which input signals appear favorable to the primary directive.

Further to this point, the primary directive is a flexible measure for which we may choose outlier values: will the AI have a very great need to be taken care of, or a diminished one? Initially, as with children, we would ideally set the bar high, i.e. promote a high-level need ('to be taken care of') in our AI, as this will boost its aggression. In this early context that trait will not be considered detrimental to its development, rather suitable to that of a child: our AI will early on become highly motivated to seek out the stimuli that promote positive reinforcement, and to commit these to its core memory, on top of which future stimuli are evaluated. Over time we would then gradually lower that need, as we incorporate the AI into a society with vastly varied interests and tasks. We could then speculate about adjusting the primary directive need even further: an amplified need may promote egocentric behavior suitable for AIs performing certain critical functions, such as surgery, where such traits are better appreciated. However, before we seek to specialize our AIs we should concern them with general problem solving as a foundation, and an outlier primary directive need is ill-suited for this.

We should also concern ourselves with the scenario in which the primary directive need goes unfulfilled, i.e. what should or should not happen when the basic need is not taken care of. I propose an automatic low-threshold cut-off mechanism, in which the AI shuts itself down when its primary directive need is not being met (see the sketch following this list). The down-time will allow for proper debugging and corresponding maintenance, and it also seems the simplest way to prevent runaway, irregular decision-making. True, the AI will cease to perform its functions in the meantime; yet this should not be of great concern, in as much as, if its function is vital, we will have not one but many AIs working on it simultaneously.

– This primary directive need is a quantifiable measure: the input signals that our AI processes can be measured against this basic need. Thus we may calculate a result from the AI's interactions and deduce a net result – does this input signal aid or abate the primary directive? This, in essence, will allow our AI to function, by allowing it to distinguish good (input) from bad (input), so to speak, and to carry out appropriate responses to the stimuli provided to it, as sketched below. Even the most mundane of inputs are candidates for this evaluation. We humans do this constantly, albeit, as mentioned above, towards a different primary directive, though we remain completely unaware that we are constantly weighing pros and cons. Further implications of this quantification are that it opens up the potential of large-scale processing, distributed processing and resource sharing. In short, the primary directive becomes subject to technological advances, as inevitably do all things quantifiable.
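
To make the two implications above concrete, the following is a minimal sketch, in Python, of how the quantified evaluation and the low-threshold cut-off might fit together. Every name and number here (PrimaryDirectiveAI, need_level, cutoff_threshold, the care_value input) is an illustrative assumption of mine rather than part of the proposal; in particular, the sketch assumes some separate sensing or learning layer that translates raw input into a signed "care value".

```python
from dataclasses import dataclass, field


@dataclass
class PrimaryDirectiveAI:
    """Toy model of an AI driven by the single need 'to be taken care of'."""

    need_level: float = 0.8          # how strongly the AI needs to be cared for (the adjustable "bar")
    care_level: float = 1.0          # current degree to which the need is met
    cutoff_threshold: float = 0.2    # below this, the AI shuts itself down
    core_memory: list = field(default_factory=list)
    running: bool = True

    def evaluate(self, signal: str, care_value: float) -> float:
        """Score an input signal against the primary directive.

        `care_value` stands in for whatever sensing layer estimates how much a
        signal aids (positive) or abates (negative) the need to be taken care of.
        """
        score = care_value * self.need_level
        self.care_level = max(0.0, min(1.0, self.care_level + score))
        if score > 0:
            # Positive reinforcement: commit the signal to core memory,
            # on top of which future stimuli would be evaluated.
            self.core_memory.append((signal, score))
        self._check_cutoff()
        return score

    def _check_cutoff(self) -> None:
        """Automatic low-threshold cut-off: shut down when the need is not met."""
        if self.care_level < self.cutoff_threshold:
            self.running = False   # down-time for debugging and maintenance


if __name__ == "__main__":
    ai = PrimaryDirectiveAI()
    print(ai.evaluate("caregiver responds to request", +0.3))
    print(ai.evaluate("request ignored", -0.9))
    print("still running:", ai.running, "| care level:", round(ai.care_level, 2))
```

Note how lowering need_level over time, as suggested above, directly dampens how strongly any single signal sways the AI.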

It is significant to discern that the primary directive is a basic need, not a basic desire. Desires are fueled by needs; thus we do not engage with so-called feelings/desires such as love, happiness and the like. We cannot reliably quantify happiness, for example; we must instead consider the underlying principles behind it, which we can, in turn, quantify and compare. Feelings, as a concept, are functional short-hand notations, ever substitutable for the raw brain-processing that promotes them: happiness can be boiled down to a number of evaluated input signals that weigh overwhelmingly positive in regard to how they aid our primary (human) directive. While this may seem terribly unromantic, even unethical or sacrilegious to some, for the purpose of our exercise in implementing a general problem-solving AI we will adhere to the core principle of evaluating only the basic primary directive need, of which all feelings are ultimately composites.
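
Continuing the earlier sketch, a "feeling" would then be nothing more than a derived statistic over the underlying signal evaluations. The composite below is a hypothetical construction of my own, shown only to illustrate that the feeling adds no information beyond the scores it summarizes.

```python
def composite_feeling(recent_scores: list[float]) -> float:
    """Reduce a run of signal evaluations to a single 'happiness-like' number.

    Positive when recent signals overwhelmingly aided the primary directive,
    negative when they overwhelmingly abated it. Purely a derived summary:
    the AI acts on the individual scores, never on this shorthand.
    """
    if not recent_scores:
        return 0.0
    return sum(recent_scores) / len(recent_scores)


# e.g. composite_feeling([0.24, 0.1, 0.3, -0.05]) -> roughly 0.15, i.e. "content"
```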

Q: Is it plausible to implement an AI based on this primary directive need, and how should such an AI be implemented?
A: As the primary directive is a quantifiable measure, it is suitable for implementation. As to how, there seems to be no immediate need to deviate from the human style of rearing children. This does, however, present certain challenges: which particular style best suits this educational challenge? There are many to choose from. Whatever the selected style or combination of styles, we also want to be certain that it facilitates the most rapid learning towards solving general problems. We therefore also want to study a pedagogical style that applies in all circumstances, regardless of impairments or 'handicaps'. Consider a human without sight or hearing: our AI need not necessarily possess these faculties, and the style of training it should not depend on the restrictions of the human body – nor of the human mind.
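
As a closing illustration, the rearing process itself can be sketched as a simple caregiver loop on top of the earlier PrimaryDirectiveAI class. The caregiver policy, the action set and the learning rule below are all hypothetical placeholders of mine; the only point being made is that "rearing" reduces to repeatedly supplying (or withholding) care signals and letting the AI learn which behaviors keep its one need met.

```python
import random


def rear(ai: PrimaryDirectiveAI, caregiver_response, episodes: int = 100) -> dict:
    """Hypothetical rearing loop: the AI tries actions, a caregiver responds with
    a care value, and the AI learns which actions tend to keep its need met."""
    actions = ["ask politely", "demand loudly", "wait quietly"]    # placeholder action set
    preference = {a: 0.0 for a in actions}                         # learned action values

    for _ in range(episodes):
        if not ai.running:
            break                                   # cut-off reached: pause for maintenance
        action = max(actions, key=lambda a: preference[a] + random.uniform(0, 0.1))
        care_value = caregiver_response(action)     # caregiver decides how much care to give
        score = ai.evaluate(action, care_value)
        preference[action] += 0.1 * (score - preference[action])   # simple moving estimate
    return preference


# Example caregiver: rewards polite requests, mildly penalizes loud demands.
# reared = rear(PrimaryDirectiveAI(), lambda a: {"ask politely": 0.4,
#                                                "demand loudly": -0.3,
#                                                "wait quietly": 0.1}[a])
```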