Back in 1950, the scientist Alan Turing proposed an operational test of whether a machine can think. The test is whether a machine can do well in an “imitation game,” simulating a human in a text-only conversation well enough to fool a real human.
This has become known, unsurprisingly, as the “Turing test,” and 64 years after that first publication on the subject, headlines recently proclaimed that a machine (or, more precisely, an algorithm) has in fact passed the Turing test. Uh oh, are they taking over at last?
On June 7th, after a Turing test competition, the organizer declared a winner, an algorithm called Eugene Goostman. Goostman fooled 10 out of 30 judges, meeting the agreed-upon threshold for this particular competition. Arguments have sprung up over whether (because of that threshold or for other reasons) this was a fair run of the Turing test, and even over whether the “Turing test” really has anything to teach us after 64 years anyway.
Ten out of 30 is of course more than 30%, and in his famous essay Turing made a passing reference to a future time when computers could fool humans more than 30% of the time. He did not make the 30% figure out to be any particularly profound AI threshold, but he did put the number out there, and his acolytes seized upon it. That helps explain the burst of publicity for Goostman (which seems to be a fairly ordinary sort of algorithm that relies upon some unimpressive trickery).
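The arithmetic behind the headline is simple enough to check directly (the judge counts are from the competition report; the comparison to 30% is Turing's offhand benchmark):

```python
# Goostman's result vs. the 30% figure Turing mentioned in passing
judges_fooled = 10
judges_total = 30
pass_rate = judges_fooled / judges_total

print(f"{pass_rate:.1%}")  # → 33.3%, just clearing the 30% benchmark
assert pass_rate > 0.30
```

Three judges fewer and the headline evaporates, which is part of why the threshold itself drew criticism.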
But the news is intriguing, coming as it does as regulatory agencies in the U.S. and elsewhere struggle with questions of how to regulate algorithms whose operations notably don’t simulate humans, but who – or which? – do manage, in effect, to fool humans.
Gregory Scopino, of Cornell Law, in a forthcoming issue of the Florida Law Review, expresses concerns about self-learning automated trading systems. By definition, a self-learning ATS can modify its trading strategies independently of human intervention. So as it modifies itself and gets further and further away from the original template, who is engaging in the trades?
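To make the “drift” concrete, here is a toy sketch – not any real trading system, and every name and update rule here is invented – of a strategy that nudges its own parameters after each trade, with no human approving the updates:

```python
import random

class SelfLearningATS:
    """Toy strategy: buys when price dips below a threshold.
    After each profitable trade it shifts its own threshold --
    no human reviews or approves the modification."""

    def __init__(self, threshold: float):
        self.threshold = threshold  # the original human-written "template"

    def decide(self, price: float) -> str:
        return "BUY" if price < self.threshold else "HOLD"

    def learn(self, price: float, profit: float) -> None:
        # Self-modification: drift the threshold toward prices that paid off.
        if profit > 0:
            self.threshold += 0.1 * (price - self.threshold)

random.seed(0)  # deterministic toy run
ats = SelfLearningATS(threshold=100.0)
for _ in range(1000):
    price = random.uniform(90, 110)
    if ats.decide(price) == "BUY":
        ats.learn(price, profit=random.uniform(-1, 1))

# After many unsupervised updates, the live strategy no longer
# matches the template its programmers wrote.
print(ats.threshold)
```

The legal puzzle Scopino raises maps onto the last comment: by the end of the loop, the parameters doing the trading were chosen by the loop, not by any person.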
Suppose a self-learning ATS drifts over time into a habit of making trades that look a lot like banging the close. As Scopino writes, most “relevant causes of action” upon which the Commodity Futures Trading Commission could intervene with a charge of manipulative or disruptive trading practices “require proof of a culpable mental state of at least recklessness.”
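For readers unfamiliar with the term, “banging the close” means concentrating trades in the closing window to push the settlement price. A crude screen for that pattern might look like the following – a hypothetical heuristic for illustration, not the CFTC’s actual surveillance methodology:

```python
def closing_volume_share(trades, close_window_start):
    """trades: list of (minute, volume) pairs for one trader's day.
    Returns the fraction of daily volume executed at or after
    close_window_start -- a crude proxy for 'banging the close'."""
    total = sum(v for _, v in trades)
    closing = sum(v for t, v in trades if t >= close_window_start)
    return closing / total if total else 0.0

# Toy 390-minute trading day: steady small trades, then two huge
# orders packed into the final minutes.
day = [(minute, 5) for minute in range(0, 380)] + [(385, 2000), (389, 2000)]
share = closing_volume_share(day, close_window_start=380)
print(f"{share:.0%} of volume in the closing window")
assert share > 0.5  # on this toy threshold, worth a closer look
```

The hard part, as Scopino notes, is not spotting the pattern but attributing a mental state to whatever produced it.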
Recklessness is a mental state that disregards the possible consequences of one’s actions, or persists in them despite foreseeing the risk. If one human being points a gun at another and pulls the trigger, then pleads ignorance of fact – “I didn’t know it was loaded” – the natural answer is “why didn’t you check?” Pointing a gun and pulling the trigger without even checking whether the gun is loaded is a quintessentially reckless action, and the shooter will not escape culpability by virtue of that sort of willful ignorance.
But what about robots? Now that Goostman has passed the Turing test, can Goostman be liable for crimes? Or civil/regulatory infractions? It would be easy enough to put a robot in prison, but one can’t really fine it until robots have bank accounts in their own names.
More practically, who or what is responsible for an ATS’ self-learning behavior?
Presumably the algorithm has human supervisors and maintainers in the proximity. But what is the statutory/regulatory basis for a charge against them, or against the organization that employs them?
Scopino points us in the direction of Regulation 166.3, which provides that each CFTC registrant, “except an associated person who has no supervisory duties, must diligently supervise the handling by its partners, officers, employees and agents (or persons occupying a similar status or performing a similar function) of all commodity interest accounts carried, operated, advised, or introduced by the registrant… relating to its business as a Commission registrant.”
Scopino isn’t really suggesting that the algos themselves be treated as the “agents” of their human principals, so that the latter have a duty to supervise the handling of accounts by algos. He is making the more mundane suggestion that if “a registrant’s ATS engages in practices that mimic (aside from the mental state requirement as an ATS does not ‘intend’ anything) banging the close, wash trading, or spoofing, then the CFTC could arguably bring a Regulation 166.3 claim.”
All in all, though, I think it is time coders agreed to include Asimov’s three laws in all their future work.