The Turing Test: Its Significance in Relation to Computer Software Systems

In 1950, Alan Turing proposed a test to determine whether machines (computers) have the ability to think. This idea, which has become known simply as the Turing Test, involves three participants: a computer, a human, and a second human who poses questions to the other two. The respondents send their answers to the interrogator in a neutral fashion, through a teletype device for example, and the interrogator must determine which respondent is the machine solely from the content of the conversation. In Turing's formulation the computer is free to deceive the interrogator in order to appear human, while the human respondent is expected to answer truthfully. The general idea is that if the interrogator cannot tell which respondent is the computer, the computer is said to have passed the Turing Test. According to French (1990), when Turing came up with his test he appeared to be making two claims:

  1. The Philosophical Claim: If a machine can pass the test, it acts sufficiently intelligently, and therefore it is intelligent.
  2. The Pragmatic Claim: It will eventually be possible to build a machine that can pass the test.
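
The mechanics of the exchange described above can be sketched in a few lines of Python; the respondent behaviors, anonymous labels, and sample question below are purely illustrative assumptions, not part of Turing's description.

    import random

    # Hypothetical sketch of the imitation game's structure: an interrogator
    # exchanges text with two unlabeled respondents over a neutral channel and
    # must decide which one is the machine from the transcript alone.

    def machine_respondent(question):
        # Placeholder answer; a real contender would try to answer as a human would.
        return "I'd rather keep that to myself."

    def human_respondent(question):
        # The human respondent is expected to answer truthfully.
        return "Yes, although I prefer short poems."

    def run_session(questions):
        # Randomly assign the machine and the human to the anonymous labels A and B
        # so the interrogator cannot rely on ordering.
        players = [machine_respondent, human_respondent]
        random.shuffle(players)
        respondents = dict(zip("AB", players))
        transcript = []
        for question in questions:
            for label in "AB":
                transcript.append((label, question, respondents[label](question)))
        return transcript  # the interrogator sees only labels, questions, and answers

    for label, question, answer in run_session(["Do you enjoy poetry?"]):
        print(f"{label}  Q: {question}  A: {answer}")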

According to Cullen (2009), “French (1990) introduces the notion of asking sub-cognitive questions that are intentionally designed to reveal the Turing Test participant as not human, due to possible representational differences in the ‘brain’.” French’s sub-cognitive questions can probe any number of associations that feel immediately familiar to a human interrogator, such as how a made-up word ‘sounds’ as a product name, but for which the machine participant has no cultural or experiential context. Both Cullen (2009) and French (1990) argue that the Turing Test is flawed in that “the Test provides a guarantee not of intelligence but of culturally-oriented human intelligence.”

While the Turing Test may be inherently flawed by such perceptual and contextual factors, the cognitive abilities of computers are increasing at an accelerating rate. Kurzweil (2001) describes this acceleration in his Law of Accelerating Returns:

Evolution applies positive feedback in that the more capable methods resulting from one stage of evolutionary progress are used to create the next stage. As a result, the rate of progress of an evolutionary process increases exponentially over time. Over time, the “order” of the information embedded in the evolutionary process increases. In another positive feedback loop, as a particular evolutionary process (e.g., computation) becomes more effective (e.g., cost effective), greater resources are deployed toward the further progress of that process.

For the purpose of examining the significance of the Turing Test for computer software systems, the Law of Accelerating Returns implies that with each generation of computer processing, the potential for a machine to pass the test grows. Each generation increases the capacity to store and process information, and with each passing year more information and more models of human experience become digitally available through the internet and social media platforms. It is therefore not inconceivable that a system could eventually be built that draws on this growing repository of human experience and knowledge in order to pass the Turing Test.
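
To make the compounding effect concrete, the following is a minimal numerical sketch in Python; the starting value, feedback rate, and generation count are arbitrary illustrations rather than Kurzweil's figures. When each generation's progress is proportional to the capability already accumulated, capability grows exponentially with the number of generations.

    # Minimal sketch of the positive-feedback idea: more capability means faster
    # progress, so the series compounds exponentially rather than growing linearly.

    def accelerating_returns(initial_capability=1.0, feedback_rate=0.5, generations=10):
        capability = initial_capability
        history = [capability]
        for _ in range(generations):
            progress = feedback_rate * capability  # progress scales with current capability
            capability += progress
            history.append(capability)
        return history

    for generation, capability in enumerate(accelerating_returns()):
        print(f"generation {generation:2d}: capability {capability:8.1f}")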

In recent years the Turing Test model has been applied to internet security. The Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) is a method developed to prevent automated attacks and spamming; in this version of the Turing Test, the machine takes the role of the interrogator. The most common CAPTCHAs exploit the limitations of optical character recognition (OCR) by presenting distorted text that humans can read but OCR software cannot reliably decode. According to Shirali-Shahreza et al. (2007), “OCR systems are used for the automatic reading of texts. But this software faces difficulty in reading printed texts with low quality or the handwritten texts and can only read typed articles which are of high quality and which follow the common and standard patterns.” Even these defenses must grow more complex as OCR programs improve: Chestnut (2005) shows how CAPTCHA can be beaten through the use of adaptive AI and vector image processing techniques.
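
As a rough illustration of this idea, the following sketch (assuming the third-party Pillow imaging library; the dimensions, noise levels, and output file name are arbitrary choices) renders a random challenge string with enough visual clutter that simple OCR tends to struggle while a human can still read it.

    import random
    import string

    from PIL import Image, ImageDraw, ImageFont  # assumes Pillow is installed

    def make_text_captcha(length=5, size=(200, 70), out_path="captcha.png"):
        # Build a random challenge string of letters and digits.
        challenge = "".join(random.choices(string.ascii_uppercase + string.digits, k=length))
        image = Image.new("RGB", size, "white")
        draw = ImageDraw.Draw(image)
        font = ImageFont.load_default()

        # Scatter the characters at slightly irregular positions.
        for i, ch in enumerate(challenge):
            x = 20 + i * 30 + random.randint(-3, 3)
            y = 25 + random.randint(-8, 8)
            draw.text((x, y), ch, font=font, fill="black")

        # Add line and point noise to frustrate straightforward OCR.
        for _ in range(4):
            draw.line(
                [(random.randrange(size[0]), random.randrange(size[1])),
                 (random.randrange(size[0]), random.randrange(size[1]))],
                fill="grey", width=1)
        for _ in range(300):
            draw.point((random.randrange(size[0]), random.randrange(size[1])), fill="grey")

        image.save(out_path)
        return challenge  # the server keeps this to check the user's reply

    if __name__ == "__main__":
        print("expected answer:", make_text_captcha())

A server using such a scheme would keep the returned challenge string and compare it against whatever the user types back, granting access only on a match.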

The Turing Test was originally designed to test whether a machine could pass as human through question-and-answer sessions. The test has been modified and applied in many ways over the years, including online security and the continuing development of AI systems. Cullen (2009) argues that “contrary to Turing’s apparent intent, it can be shown that Turing’s Test is essentially a test for humans only.” Cullen’s argument is, in a sense, supported by the test’s use in the development of CAPTCHA. Though the Turing Test may have its flaws, it will likely remain a constant in the development of AI systems until such time as a machine actually passes the test.

References

Chestnut, C. (2005, January 30). Using AI to beat CAPTCHA and post comment spam. Retrieved from http://www.brains-n-brawn.com/default.aspx?vDir=aicaptcha

Cullen, J. (2009). Imitation versus Communication: Testing for Human-Like Intelligence. Minds and Machines, 19(2), 237-254. doi:10.1007/s11023-009-9149-3

French, R. M. (1990). Subcognition and the Limits of the Turing Test. Mind, 99(393), 53-65.

Kurzweil, R. (2001, March 7). The Law of Accelerating Returns. Retrieved from http://www.kurzweilai.net/the-law-of-accelerating-returns

Shirali-Shahreza, S. S., Shirali-Shahreza, M. M., & Manzuri-Shalmani, M. T. (2007). Easy and Secure Login by CAPTCHA. International Review on Computers & Software, 2(4), 393-400. Retrieved from EBSCOhost.

Skyttner, L. (2006). General Systems Theory: Problems, Perspectives, Practice. World Scientific Publishing Co.