In a previous text we argued that it does not make sense to ask whether a robot can display consciousness. Once consciousness is defined as the subjective experience which can only be "felt" by the experiencer, there is no way for an external observer to demonstrate the presence of consciousness in any other person, or machine. We said, in this regard, "no Turing test for consciousness", meaning that the Turing test can rightly be applied to discriminate a human from a robot when the question is simply about knowledge, but it makes no sense in the case of consciousness.

Let us turn now to cognition, which is a different and, in a way, simpler notion than consciousness. Cognition is generally defined as the process of acquiring knowledge through thinking or through sense experience, and therefore appears more directly accessible than the notion of consciousness. Here lies a first, important point: does the term "acquiring knowledge" also mean "understanding"? For example, a robot can accumulate data in an archive; does this mean that the robot understands all these data? Clearly, the question is not trivial.

In order to proceed more rigorously, we need a more precise definition of the term cognition. In particular, let us turn to the field of biology, where cognition is directly involved in the action of living. The term has been used extensively in the Santiago school of Maturana and Varela, in their work on autopoiesis and cognition.

Let us start, then, with autopoiesis. Any living system is autopoietic, meaning that it is a molecular system capable of self-maintenance, thanks to an internal mechanism which regenerates all the components that are consumed in its reactions. My haemoglobin is continuously degraded and continuously re-made by my body. In mammals, as in fish and plants, cells are continuously destroyed and reconstructed anew from within the system.

Autopoiesis is thus the signature of life, in the sense that there is no form of life on our Earth which does not comply with this basic description. In general, the living can be seen as a thermodynamically open homeostatic system which maintains its characteristic, constant behaviour while carrying out thousands of chemical transformations within its boundary. The basis for all this is, of course, energy and nutrients from the environment. Cognition, accordingly, has to do with the interaction between the living and the environment, in the sense that each organism is provided with the physiological means to recognize its proper environment and to interact positively with it: the fish with water, the bird with air, the earthworm with the mud, and so on. The living organism does what it does according to its internal rules, in order to maintain and implement its own autopoiesis (Maturana and Varela, 1980).

Characteristic of all this is also the notion of operational closure, which states that, in order to implement this internal organization, the organism does not need any information from outside. The ant behaves like an ant, the earthworm like an earthworm, due only to the information existing inside the living system. The environment can only trigger the internal mechanism of behaviour. And if there are changes in the environment, the organism may adapt in order to maintain its autopoiesis, or die.

Important in this picture is the fact that the observer, the scientist, should not attribute any particular aim or design to the behaviour of the living, as this would be an anthropomorphic projection; the observing scientist can only say that the living does what it has to do. For example, for an observer to say that the amoeba moves in a sugar gradient in order to get food would be an unacceptable extrapolation, since the amoeba, from its internal rules, does not know anything about sugar or about feeding: it does what it has to do according to its internal program. This also means that the observer cannot "see" the world from the inside of the observed living system, which goes back to the initial point about consciousness: subjective experience cannot be felt by any other person or thing. It also suggests the existence of many worlds, as many as the different observers.

Now, let us turn to a robot which moves and does things in a given place. The robot is generally programmed to recognize a given environment and to interact properly with it, be it cleaning a room in an apartment or attending patients in a hospital. If I were to put this robot in a swimming pool, it would be unable to do anything; it would "die". But one could have programmed this robot so that it also recognizes water and has a program to move in the swimming pool. The robot would then be capable of doing so, or even of working in an atmosphere at 90 degrees, or in an atmosphere dense with carbon dioxide or ammonia. In fact, as is well known, one of the advantages of using robots is that they can work in environments which are prohibitive to humans. Of course, the robot does not adapt in our biological sense; it can display cognition in a different environment only if it is programmed for that.
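The contrast between adaptation and mere re-programming can be made concrete with a toy sketch (all names and behaviours here are hypothetical, chosen only for illustration): the robot's conduct is a fixed mapping defined entirely from within, the environment can only trigger one of the pre-programmed responses, and an unrecognized environment triggers nothing.

```python
# A toy illustration of "operational closure" in a robot:
# behaviour is wholly determined by the internal program;
# the environment can only trigger a pre-programmed response.

class Robot:
    def __init__(self):
        # the internal "organization": a fixed mapping from
        # recognized environments to behaviours
        self.program = {
            "apartment": "clean the room",
            "hospital": "attend patients",
        }

    def act(self, environment: str) -> str:
        # an unrecognized environment triggers nothing: the robot "dies"
        return self.program.get(environment, "inactive ('dead')")


r = Robot()
print(r.act("apartment"))      # clean the room
print(r.act("swimming pool"))  # inactive ('dead')

# "Adaptation" here is nothing but re-programming from outside:
r.program["swimming pool"] = "move in the pool"
print(r.act("swimming pool"))  # move in the pool
```

The point of the sketch is precisely that the extension to a new environment comes from the external programmer, not from the robot's own organization.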

Now, let us go back to the relation between cognition and autopoiesis. On the basis of what we have said, we can accept that robots may be conceived and constructed so as to be cognitive, the big difference with living organisms being this: that animals and plants interact with the environment primarily in order to implement their own autopoiesis, that is, to keep a homeostatic behaviour going. Can the same be said for robots?

Here the question becomes more difficult. Let us consider a specific case.

Suppose a robot programmed in such a way that it regenerates its own energy when necessary, by charging itself at a wall plug; and suppose it can even be programmed to repair and renew certain simple parts of its interior, for example to fix simple electric circuits or to replace light bulbs. Wouldn't this be a kind of self-maintenance, corresponding roughly to homeostatic self-maintenance? And if we answer yes to this question, we would then be asked whether robots are thus living, as they would satisfy the criteria of autopoiesis and cognition simultaneously...
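Such a "homeostatic" robot can likewise be sketched in a few lines (the thresholds, part names, and spare-part stock below are invented for the example): energy is consumed by ordinary activity and restored when it falls below a set level, and broken parts are replaced from an internal stock.

```python
# A minimal sketch of robotic "self-maintenance":
# recharge when energy runs low, replace simple worn parts.

class SelfMaintainingRobot:
    def __init__(self):
        self.battery = 100
        self.parts = {"light_bulb": "ok", "circuit": "ok"}
        self.spares = {"light_bulb": 2, "circuit": 1}

    def tick(self):
        # ordinary activity consumes energy, then maintenance runs
        self.battery -= 30
        self.maintain()

    def maintain(self):
        # recharge at the wall plug below a fixed threshold
        if self.battery < 50:
            self.battery = 100
        # replace any broken part for which a spare is stocked
        for part, state in self.parts.items():
            if state == "broken" and self.spares.get(part, 0) > 0:
                self.spares[part] -= 1
                self.parts[part] = "ok"


r = SelfMaintainingRobot()
r.parts["light_bulb"] = "broken"
r.tick()
print(r.parts["light_bulb"])  # ok
```

Note what the sketch makes visible: the "homeostasis" is entirely parasitic on an externally supplied program and an externally supplied stock of spares, which is exactly where the analogy with molecular autopoiesis breaks down.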

Here even more caution is necessary. First of all, I hear the admonition of Maturana, particularly frequent and strong in recent years: that autopoiesis, and the living in general, has to do with molecular structures and molecular mechanisms; and that, therefore, everything that is not molecular, like machines or even social systems, cannot be brought into the framework of autopoiesis, and hence of the living. This is clearly an important clarification; on the other hand, the notion of autopoiesis has for several years now been extended to non-molecular systems, see the social autopoiesis of Luhmann (1984). A robot which could regenerate itself from within: would it be autopoietic, and then living?

We are confronted here with an old problem of general validity: once you give and accept a definition, here the equivalence between autopoiesis and life, you are then obliged to be consistent. At this point, I would really go back to the caveat of Maturana and talk about life only in the case of molecular mechanisms. The self-regenerating robot, assuming that something like that could be constructed, would be something else, for which we should find a new name.

But certainly, we can say that in robots there may be cognition without life. After all, this is not breaking news. The reconnaissance vehicles which have been sent to the Moon or to Mars are certainly devices endowed with cognition, as are other AI machines, like the drones which are now commonly used, and not for the most peaceful of aims. Should we use here a term different from cognition, just to avoid confusion with the "living" cognition of Maturana and Varela?

This is again a general question: whether, and to what extent, when talking about AI systems, robots in the first place, we should use, or abandon, the terminology we commonly use for the living. And this leads to a general outline, almost a simple-minded conclusion, from these few notes. I believe that the most general observation is this: that, on the basis of an apparent analogy, we should not simply extend to robots and machines the anthropomorphic notions and terminology that we use for living systems. That would amount to a trivial anthropomorphic projection. Robots, and other AI devices, are new things which necessitate their own vocabulary. This is true also for other AI devices; I am thinking now of electronic circuits, for which we often use, sic et simpliciter, the language used for the brain's neuronal networks. More work to do with robots and AI also at the semantic level? Certainly so.

References
H. Maturana and F. Varela, The Tree of Knowledge, rev. ed., Boston, Shambhala, 1998
H. Maturana and F. Varela, Autopoiesis and Cognition, Dordrecht, Reidel, 1980
P.L. Luisi, The Emergence of Life, 2nd ed., Cambridge University Press, 2016
N. Luhmann, Soziale Systeme, Frankfurt, Suhrkamp, 1984