{"id":48372,"date":"2024-06-10T09:45:49","date_gmt":"2024-06-10T08:45:49","guid":{"rendered":"https:\/\/www.innovationnewsnetwork.com\/?p=48372"},"modified":"2024-06-10T09:45:49","modified_gmt":"2024-06-10T08:45:49","slug":"ignore-human-like-robots-imitation-is-no-longer-the-sincerest-form-of-flattery","status":"publish","type":"post","link":"https:\/\/www.innovationnewsnetwork.com\/ignore-human-like-robots-imitation-is-no-longer-the-sincerest-form-of-flattery\/48372\/","title":{"rendered":"Ignore human-like robots, imitation is no longer the sincerest form of flattery"},"content":{"rendered":"

Following the 70th anniversary of Alan Turing\u2019s death, former Cambridge University computer science lecturer and software CEO Dr John Bates argues that a key part of his legacy needs a rethink.<\/h2>\n

On the 70th anniversary of his untimely passing<\/a> this June, many of us quietly raised a glass to honour the Father of Computer Science, Alan Turing\u2014a figure whose contributions have profoundly shaped the field of information technology.<\/p>\n

However, one aspect of his legacy that warrants a reassessment is the famous Turing Test<\/a>\u2014the benchmark for when a machine intelligence could be said to be \u2018thinking\u2019.<\/p>\n

When Turing first published his reflections on artificial intelligence in the journal <\/a>Mind<\/em><\/a> in 1950<\/a>, his idea centred on a machine imitating responses well enough to deceive an interlocutor into presuming they were speaking with another person rather than a set of programming instructions. This construct was devised to answer his opening question: \u2018Can machines think?\u2019<\/p>\n

As the foundation for his conviction that machines would soon think, Turing offered this prediction: \u201cI believe that in about fifty years\u2019 time it will be possible to programme computers, with a storage capacity of about 10\u2079, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.\u201d<\/p>\n

AI systems of the 1970s and 1980s were notoriously brittle<\/h3>\n

10 to the power of 9, or a billion (1 followed by 9 zeroes), was Turing\u2019s conservative estimate of the storage capacity required; he anticipated it would take half a century to achieve what we now call a gigabyte of memory. But Turing\u2019s paper isn\u2019t an engineering paper; it\u2019s a philosophical one, and the question it poses deserves a serious answer.<\/p>\n

To compress decades of AI research into a single paragraph: it didn\u2019t take 50 years to develop machines capable of mimicking human responses. Within just 14 years, a language programme called Eliza became so convincing a conversationalist that interlocutors began confiding all their secrets to her<\/a>. One could argue that the Turing Test was effectively passed in that era.<\/p>\n

But that perception is misleading. Even the most impressive AI systems of the 1970s and 1980s were notoriously \u2018brittle<\/a>\u2019. They excelled at one specific task but lacked the transferable capability to adapt to different use cases. Only the most ardent AI scientists thought of them as anything more than sophisticated appliances.<\/p>\n

Now, however, a significant shift is underway, exemplified by ChatGPT<\/a>. This system and its counterparts, known as large language models, rest on genuinely remarkable technology: models with billions of parameters, trained on a vast corpus of text covering substantial portions of the public internet.<\/p>\n

Yet it will be clear to anyone who spends time using these systems that, while remarkably proficient, they do not in fact possess true intelligence. They are exceptionally good pattern matchers and autocomplete algorithms, and their ability to generate coherent answers to our questions and prompts creates the illusion of genuine sentience.<\/p>\n

Alan Turing predicted limited intelligence<\/h3>\n

What\u2019s fascinating to me is that Alan Turing anticipated this. Just consider some of the questions he believed an intelligent computer should be able to handle:<\/p>\n