I’m taking a philosophy of knowledge class, and it’s been an absolute eye-opener. With AI emerging at a dizzying pace and many of us caught up in the endless hustle and bustle, I have come to realize how important it is to carve out time (as best we can) to revisit classical theories of mind and body.
This week we read “Computing Machinery and Intelligence”, the paper in which the “Turing test” originated. Machines have evolved at an exponential pace, yet it remains difficult to say with confidence whether they sufficiently resemble humans in functionality and identity. It’s a must-read for anyone wanting to form a clearer idea of what machines are and aren’t (compared with us).
I am pasting my paper for this assignment below; perhaps it will spark some discussion. These questions are, and will remain for centuries, relevant to thinking about machines vs. humans, but we still have to keep probing where exactly the differences lie.
Here is the original reading on informality of behavior.
It is not possible to produce a set of rules purporting to describe what a man should do in every conceivable set of circumstances. One might for instance have a rule that one is to stop when one sees a red traffic light, and to go if one sees a green one, but what if by some fault both appear together? One may perhaps decide that it is safest to stop. But some further difficulty may well arise from this decision later. To attempt to provide rules of conduct to cover every eventuality, even those arising from traffic lights, appears to be impossible. With all this I agree. From this it is argued that we cannot be machines. I shall try to reproduce the argument, but I fear I shall hardly do it justice. It seems to run something like this. “if each man had a definite set of rules of conduct by which he regulated his life he would be no better than a machine. But there are no such rules, so men cannot be machines.” The undistributed middle is glaring. I do not think the argument is ever put quite like this, but I believe this is the argument used nevertheless. There may however be a certain confusion between “rules of conduct” and “laws of behaviour” to cloud the issue. By “rules of conduct” I mean precepts such as “Stop if you see red lights,” on which one can act, and of which one can be conscious. By “laws of behaviour” I mean laws of nature as applied to a man’s body such as “if you pinch him he will squeak.” If we substitute “laws of behaviour which regulate his life” for “laws of conduct by which he regulates his life” in the argument quoted the undistributed middle is no longer insuperable. For we believe that it is not only true that being regulated by laws of behaviour implies being some sort of machine (though not necessarily a discrete-state machine), but that conversely being such a machine implies being regulated by such laws. However, we cannot so easily convince ourselves of the absence of complete laws of behaviour as of complete rules of conduct. The only way we know of for finding such laws is scientific observation, and we certainly know of no circumstances under which we could say, “We have searched enough. There are no such laws.” We can demonstrate more forcibly that any such statement would be unjustified. For suppose we could be sure of finding such laws if they existed. Then given a discrete-state machine it should certainly be possible to discover by observation sufficient about it to predict its future behaviour, and this within a reasonable time, say a thousand years. But this does not seem to be the case. I have set up on the Manchester computer a small programme using only 1,000 units of storage, whereby the machine supplied with one sixteen-figure number replies with another within two seconds. I would defy anyone to learn from these replies sufficient about the programme to be able to predict any replies to untried values.
Here’s a rough summary of what the above says:
- It is not possible to produce a set of rules describing what a person should do in every conceivable circumstance.
  - Turing cites a traffic light example in which the red and green lights come on at the same time. One might decide it is safest to stop, but no set of rules can cover every such eventuality or what each person would do in it.
- Turing grants, however, that the argument as stated commits the fallacy of the undistributed middle: from (a) anyone whose life was regulated by a definite set of rules of conduct would be no better than a machine, and (b) there are no such rules, it does not follow that humans are not machines.
- One response is to substitute “laws of behaviour which regulate his life” for “laws of conduct by which he regulates his life” in the argument; with that substitution, the undistributed middle is no longer insuperable. And the only way we know of for discovering such laws of behaviour is scientific observation.
  - But this is also difficult in practice, because there is no point at which we could say we have “searched enough” and definitively conclude that “there are no such laws”.
- Given all of the above, could we ever be sure of finding such laws (and so of predicting every plausible behavior) by observation? Turing argues we could not.
  - He supports this by a kind of reductio: suppose such laws existed and we could be sure of finding them; then, by observing a discrete-state machine, we should be able to discover enough about it to predict its future behaviour within a reasonable time (say, “a thousand years”).
  - But this does not seem to be the case. He cites a small program he set up on the Manchester computer which, supplied with one sixteen-figure number, replies with another within two seconds. The difficulty is that we cannot predict its replies to untried inputs (or reconstruct the program itself) solely from the replies we have already seen. (A toy sketch of such a program follows below.)
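To make Turing’s Manchester example a little more concrete, here is a minimal Python sketch of the kind of program he describes: entirely deterministic, yet practically opaque to anyone who only observes its replies. The hash-based mapping is my own illustrative choice; Turing does not say how his programme computed its replies.

```python
import hashlib

def reply(sixteen_figure_number: int) -> int:
    """Deterministically map one sixteen-figure number to another.

    The rule is completely fixed, yet an observer who has only seen replies
    to tried values gains little practical ability to predict the reply to
    an untried value.
    """
    digest = hashlib.sha256(str(sixteen_figure_number).encode()).hexdigest()
    # Reduce the digest so the reply is again a number of at most sixteen figures.
    return int(digest, 16) % 10**16

if __name__ == "__main__":
    for x in (1234567890123456, 1234567890123457):
        print(x, "->", reply(x))
```

Nothing here is random; the opacity comes purely from the gap between observing the machine’s behavior and recovering the rule that produces it.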
Counterarguments:
The relevance of the “informality of behavior” hypothesis in differentiating machines from humans.
- Even if Turing’s “no deterministic laws for every circumstance” hypothesis were true, informality of behavior does not really help distinguish humans from machines, because machines can also exhibit behavior that looks non-deterministic.
  - Revisiting the traffic light example where both the red and green lights are on simultaneously:
  - I agree that it is not possible to lay out all possible human behaviors. But machines exhibit the same kind of apparent non-determinism: some machines will stop because they do not know what to do (and throw a system error), while others will move forward because the green light takes logical precedence over the red (i.e., if green, proceed; otherwise stop); see the sketch after this list.
  - A machine is “deterministic” only in the sense that it follows whatever its software and hardware were programmed to do.
  - Furthermore, we cannot always know or predict what machines will do in every possible circumstance. Why? One reason is that humans design machines, and a machine can only carry out conduct that its human designers could themselves anticipate. For circumstances we have never considered, we could not program a machine to act predictably, because we do not know how we ourselves would act.
- Following from the above, with the exponentially increasing complexity of machines (e.g., ChatGPT), “informality of behavior” has become a less relevant feature for differentiating us from machines. It has become so difficult to establish a definite set of rules and laws even for today’s machines (let alone for humans, as Turing suggests) that distinguishing the two on the basis of informality of behavior hardly seems possible.
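As a hedged illustration of the point above, here is a small Python sketch of two controllers facing the same conflicting signal (both lights on). Their different reactions come entirely from how their designers happened to program them; the controller names and precedence rules are my own assumptions, not anything from Turing’s text.

```python
def cautious_controller(red_on: bool, green_on: bool) -> str:
    # This design treats a simultaneous red/green as an unanticipated state
    # and halts with an error, i.e., the machine "stops because it does not
    # know what to do."
    if red_on and green_on:
        raise RuntimeError("conflicting signal: entering fail-safe stop")
    return "STOP" if red_on else "GO"

def green_priority_controller(red_on: bool, green_on: bool) -> str:
    # This design gives the green light logical precedence over the red,
    # mirroring the "if green, proceed; otherwise stop" rule above.
    if green_on:
        return "GO"
    return "STOP" if red_on else "GO"

if __name__ == "__main__":
    try:
        print("cautious:", cautious_controller(red_on=True, green_on=True))
    except RuntimeError as err:
        print("cautious:", err)
    print("green-priority:", green_priority_controller(red_on=True, green_on=True))
```

Both machines are perfectly rule-governed, yet from the outside they behave differently in the same situation, which is the sense in which machine behavior can look just as “informal” as ours.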
Norms
- The norm problem: the “should” in “[it is not possible to produce a set of rules describing] what a man should do in every [conceivable circumstance]” is ambiguous, because “should/ought” describes what one does normatively. “Should” is therefore a different construct from predictability or determinacy; what differentiates us is not predictability but how we humans incorporate normative perspectives into our actions.
  - From a physicalist point of view, every action we carry out does have consequences governed by the laws of physics.
  - Revisiting the traffic light example once more: if we stop, we won’t be hit by a car; if we cross while a car is coming, we will be. That much is simple, deterministic physics.
- What we ought to do, on the other hand, is a normative matter. We “ought” not to cross when both lights are on, because we should value safety. But if a person were having a heart attack across the street, one could argue that we should cross regardless of the lights, given the perceived urgency (most machines surely won’t recognize the importance of saving someone from a heart attack; see the sketch after this list).
- Therefore, it is not so much the “deterministic” nature of machines that differentiates us; it is the fluidity of norms (societal, moral, folkways) that does.
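To illustrate the norm point in code (a sketch under my own assumptions; the rule names and situations are invented for the example), consider a rule-based crossing policy. It encodes only the situations its designer anticipated, so the normative pull of “someone across the street needs help” is simply not part of its decision unless a human writes it in.

```python
# A fixed rule table: only the (light, situation) pairs the designer
# anticipated are represented at all.
CROSSING_RULES = {
    ("red", "clear"): "WAIT",
    ("green", "clear"): "CROSS",
    ("red", "emergency_across_street"): "WAIT",   # no notion of moral urgency here
    ("green", "emergency_across_street"): "CROSS",
}

def decide(light: str, situation: str) -> str:
    # Unlisted combinations fall back to the "safe" default. The machine waits,
    # whatever a human might judge it *ought* to do in that moment.
    return CROSSING_RULES.get((light, situation), "WAIT")

if __name__ == "__main__":
    # The rule table, not any norm, settles the outcome.
    print(decide("red", "emergency_across_street"))  # -> WAIT
```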
Machines are designed by humans.
- Finally, machines are designed by humans, after all. They are (arguably) copies of ourselves.
  - This implies that machines are implicitly shaped by the biases, norms, and principles (physical, moral, etc.) of the humans who designed and built them; consequently, machines do follow laws of behavior to some degree.
  - Thus, machines’ behaviors, both deterministic and non-deterministic, reflect in large part who we are. And as machines grow more complex, it becomes difficult to lay out a finite, definitive set of laws even for machines, let alone for humans.