The Perceptron was inspired by human neurons.

The Perceptron was the first neural network, a rudimentary version of the profoundly more complex "deep" neural networks behind much of modern artificial intelligence (AI). But nearly 70 years on, there is still no serious rival to the human brain.

"What we've got today are artificial parrots," says Prof Mark Girolami, chief scientist at the Alan Turing Institute, in London. "That in itself is a fantastic advance, it will give us great tools for the good of humanity, but let's not run away with ourselves."

The history of AI, at least as written today, has no shortage of fathers. Rosenblatt is sometimes referred to as the father of deep learning, a title shared with three other men.

Alan Turing, the wartime codebreaker at Bletchley Park and founder of computer science, is considered a father of AI. He was one of the first people to take seriously the idea that computers could think.

In a 1948 report, Intelligent Machinery, Turing surveyed how machines might mimic intelligent behaviour. One route to a "thinking machine", he mused, was to replace a person's parts with machinery: cameras for eyes, microphones for ears, and "some sort of electronic brain".

To find things out for itself, the machine "should be allowed to roam the countryside", Turing quipped. "The danger to the ordinary citizen would be serious," he noted, dismissing the idea as too slow and impractical.

But many of Turing's ideas have stuck. Machines could learn just as children learn, he said, with help from rewards and punishments. Some machines could modify themselves by rewriting their own code. Today, machine learning, rewards and modifications are basic concepts in AI.

As a means of marking progress towards thinking machines, Turing proposed the Imitation Game, commonly known as the Turing test, which rests on whether a human can discern whether a set of written exchanges comes from a human or a machine.

It's an ingenious test, but attempts to pass it have fuelled immense confusion. In one recent eyebrow-raiser, researchers claimed to have passed the test with a chatbot that claimed to be a 13-year-old Ukrainian with a pet guinea pig that squealed Beethoven's Ode to Joy.

Turing made another hefty contribution to AI that is often overlooked, Girolami says. A declassified paper from the scientist's time at Bletchley Park reveals how he drew on a method called Bayesian statistics to decode encrypted messages. Word by word, Turing and his team used the statistics to answer questions such as: "What is the probability that this particular German word generated this encrypted set of letters?"

A similar Bayesian approach now powers generative AI programs to produce essays, works of art and images of people that never existed. "There's been a whole parallel universe of activity on Bayesian statistics over the past 70 years that completely enabled the generative AI we see today, and we can trace that all the way back to Turing's work on encryption," says Girolami.

The term "artificial intelligence" didn't appear until 1955. John McCarthy, a computer scientist at Dartmouth College, in New Hampshire, used the phrase in a proposal for a summer school. He was supremely optimistic about the prospects for progress. "We think that a significant advance can be made … if a carefully selected group of scientists work on it together for a summer," he wrote.

"This is the postwar period," says Dr Jonnie Penn, an associate teaching professor of AI ethics at the University of Cambridge. "The US government had understood nuclear weapons to have won the war. So science and technology could not have been on a higher high."

In the event, those gathered made negligible progress. Nevertheless, researchers threw themselves into a golden age of building programs and sensors that equipped computers to perceive and respond to their environments, to solve problems and plan tasks, and to grapple with human language.

Computerised robots carried out commands made in plain English on clunky cathode-ray tube monitors, while labs demonstrated robots that trundled around bumping into desks and filing cabinets.

Speaking to Life magazine in 1970, the Massachusetts Institute of Technology's Marvin Minsky, a towering figure in AI, said that in three to eight years the world would have a machine with the general intelligence of an average human. It would be able to read Shakespeare, grease a car, tell jokes, play office politics and even have a fight.
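The question Turing's team asked word by word is an application of Bayes' rule: the probability of a candidate plaintext word given the ciphertext is proportional to how likely the word was in the first place times how likely it would produce those encrypted letters. A minimal sketch in Python follows; the candidate German words, priors and likelihoods are invented for illustration, not drawn from Bletchley Park records.

```python
# Bayes' rule, as Turing applied it word by word at Bletchley Park:
# posterior P(word | ciphertext) is proportional to prior * likelihood.
# All numbers below are hypothetical, for illustration only.

# Prior probability of each candidate German word appearing here.
priors = {"WETTER": 0.5, "ANGRIFF": 0.3, "GEHEIM": 0.2}

# Likelihood: probability the observed encrypted letters would be
# produced if this word were the plaintext (made-up values).
likelihoods = {"WETTER": 0.08, "ANGRIFF": 0.01, "GEHEIM": 0.02}

def posterior(priors, likelihoods):
    """Return the normalized posterior P(word | ciphertext)."""
    unnormalized = {w: priors[w] * likelihoods[w] for w in priors}
    total = sum(unnormalized.values())
    return {w: p / total for w, p in unnormalized.items()}

post = posterior(priors, likelihoods)
for word, p in sorted(post.items(), key=lambda kv: -kv[1]):
    print(f"{word}: {p:.3f}")
```

With these invented numbers the word with the highest prior-times-likelihood product dominates the posterior; the same proportionality is what lets evidence accumulate word by word across a message.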
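Rosenblatt's Perceptron, mentioned earlier, reduces to a few lines: a weighted sum of inputs passed through a threshold, with the weights nudged after each mistake. A minimal sketch follows; the AND-gate training data and the learning rate are illustrative choices, not Rosenblatt's original setup.

```python
# A minimal perceptron: weighted sum plus threshold, trained with
# the error-correction rule (nudge weights toward the target after
# each wrong answer). Training data and learning rate are illustrative.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out            # -1, 0, or +1
            w[0] += lr * err * x1         # shift weights toward target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn the AND function: output 1 only when both inputs are 1.
and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_gate)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

A single such unit can only separate its inputs with a straight line, which is why the Perceptron remained rudimentary next to the stacked, "deep" networks that followed.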