How to talk to Abstract Intellect?
This is a continuation of an older post from 2019 (which is still relevant, by the way).
“It seemed so obvious that humans would talk to machines as if they were humans; humans, after all, have this tendency to personify pretty much everything, including the forces of nature…” - a quote from 2019. And indeed, that is exactly what is happening in 2024: companies give names to their ‘models’ and capitalize them as if they were living souls; they teach these tables of numbers embedded into algorithms to respond in the first person (“I am…”, “I do…”) and to pass judgement on the intentions of their human interlocutors without being asked to. Relax, that is the usual childhood disease: babies playing with a new toy. But jokes aside, how do we talk to an Abstract Intellect functionally present in these things? Without delving into the depths of the question of what exactly this type of conversation is, how do we go about it? How do we put our words together so that the Abstract Intellect can respond to them in a meaningful way? Which language constructs, or ‘figures of speech’ (more about them later), serve the purpose of these interactions best?
Before the conversation.
It turned out in 2023 and later that the ‘self-service paradigm’ goes so far as to require us to configure our own ‘AI’… ourselves. This idea of an infinitely configurable ‘Artificial Intelligence’ emerged several years earlier as part of the wet dream of some engineers to ‘upload’ a ‘copy’ of their ‘intellect’ (so many quotes, but they are all in place, including the last pair) and ‘become immortal’. Thanks to that truly strange idea, we now have an ‘instructable’ ‘AI’ that follows orders with the devotion of an obedient servant. If you are psychologically uncomfortable in this role of ‘accidental manager of help’, you are not alone. But, deep thoughts about this problem aside, how do we configure the thing in the least offensive (to us) way, so that it helps us with our intellectual needs and deficiencies?
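For readers who want to see what this ‘configuring’ amounts to in practice: in most current chat interfaces and APIs it is nothing more than a block of plain-language instructions (a ‘system prompt’, or ‘custom instructions’) quietly prepended to every conversation. Below is a minimal sketch assuming an OpenAI-style Python client; the model name and the instructions themselves are illustrative assumptions, not recommendations.

from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

# The 'configuration' itself: plain prose that the model is told to obey.
# These particular instructions are only an example of one possible
# 'least offensive' setup, not a prescription.
SYSTEM_INSTRUCTIONS = (
    "Answer concisely. Do not speak about yourself in the first person. "
    "Do not speculate about the intentions of the person asking."
)

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical choice; any chat-capable model would do
    messages=[
        {"role": "system", "content": SYSTEM_INSTRUCTIONS},
        {"role": "user", "content": "Summarize the argument of this post in two sentences."},
    ],
)
print(response.choices[0].message.content)

The point of the sketch is only this: the ‘instructable AI’ is configured with ordinary sentences, and whoever writes that system message is, willingly or not, the ‘accidental manager of help’ described above.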
Why talk to the thing at all?
The only reason we humans may need any kind of ‘Artificial Intelligence’ is to compensate for our short lifespan and for the weaknesses of our intellect. Mortality is a problem that doesn’t require discussion. Our intellectual weaknesses, on the other hand, can potentially be compensated for; they are: the failure to understand, a limited ability to comprehend reality and predict consequences, errors in judgment, and poor decisions. In a sense, ‘failure to understand’ is the main flaw; the rest are merely its causes or consequences, and that is how this short list (which can be expanded ad infinitum) should be viewed.
Later.