
Man Vs Computer.

Updated: Mar 30, 2020



The phrase artificial intelligence (A.I.) was first coined by John McCarthy back in 1955 (Anne Sraders, 2019). Over the last decade, the technology has entered the frame of our day-to-day lives. Today we talk to chatbots online for assistance; businesses use cognitive insights to increase profits; some cities are even beginning to introduce self-driving cars. The systems being built are advancing faster than even their creators could anticipate. Take the deep learning system Google uses to recognise objects in photos. Originally it was programmed to identify things like faces, dogs and cats, and the engineers were unsure how to program it to recognise nondescript items such as staplers or shredders (Jack Clark, 2013). But it turns out they didn't need to; the system taught itself how to identify these items, and the Google engineers have no idea how!

This Google deep learning system isn't the first time A.I. has outsmarted humans. Examples of A.I. challenging humans are almost as old as the technology itself. The first system to challenge us was MANIAC back in 1956. Developed at Los Alamos Scientific Laboratory, MANIAC became the first computer to beat a human at a chess-like game, defeating a novice player using a simplified set of rules (the Los Alamos rules). Despite this development, Hubert Dreyfus, a professor of philosophy at MIT, wrote a book named 'What Computers Can't Do', stating that no computer program could defeat even a 10-year-old child in a proper chess match. The computer science faculty at MIT, who were at the forefront of artificial intelligence development, took this as a challenge. A team of students challenged Dreyfus to beat student Richard Greenblatt's chess program Mac Hack VI. Dreyfus lost what was described as a close game, and Mac Hack VI went on to compete in tournaments and become the first computer to win a tournament game.

Chess A.I. continued to develop through the 70s and 80s, beating masters and grandmasters and even winning tournaments. Then in 1997 Deep Blue, a chess-playing computer developed by IBM, defeated the reigning world champion Garry Kasparov 3 1/2 to 2 1/2 in a six-game thriller. Deep Blue's win was seen as evidence that artificial intelligence was catching up with human intelligence (Dirk Knemeyer and Jonathan Follett, 2019).

Deep Blue. Nowadays, the level of chess Deep Blue was capable of can be matched by the average smartphone.


Artificial intelligence went on to beat humans at a host of other games, such as Go (a two-player Chinese strategy game), checkers and Jeopardy!, and even took on solving a Rubik's Cube!


Artificial intelligence smashing the Rubik's Cube world record.

Human record = 3.47 seconds


But the next step for A.I. is to see whether it can beat multiple humans at once. To answer this question, programmers took A.I. to the poker table. Poker is a game of imperfect information, as the cards opponents are holding are concealed. The outcome of a hand is incredibly difficult to predict; players must use betting patterns, bet sizing, fold frequencies and any other information they can gather in order to make predictions. This gap in information poses a difficult challenge for an A.I. system.

Last year an A.I. program named Pluribus was created to attempt to beat multiple pros at once. The system was developed with the help of poker great Darren Elias, who would play the bot and alert programmers when it made mistakes. According to The Verge (2019), the system was built with as little as $150 worth of cloud computing resources. It trained by competing against copies of itself and correcting its mistakes, and it reached the level of a world-class poker player within the space of a few weeks. The key for Pluribus was its ability to randomly apply different styles. Human players usually have their own style and struggle when they try to mix it up, and that tendency to stick to a preferred style leaves some room for predictability. Pluribus removed any hope of uncovering a style through sheer randomness.

Once trained, Pluribus was put to the test: 10,000 hands over 12 days against 12 different pros in six-player no-limit Texas hold 'em. Pluribus won on average $5 per hand, or $1,000 an hour. Researchers called it a "decisive margin of victory". Another massive step for artificial intelligence.
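To make the two ideas above a little more concrete, here is a toy sketch of what "playing copies of itself" and "mixing styles at random" might look like. This is not Pluribus's actual algorithm, and every name, style label and threshold in it is made up for illustration; it only shows self-play between identical agents that each pick a style at random so no fixed pattern emerges.

```python
import random

# Illustrative sketch only: Pluribus's real training was far more sophisticated.
# This toy shows (1) copies of the same agent playing each other, and
# (2) a "mixed strategy" where each copy picks a style at random every hand,
# so an observer can't pin down a single predictable style.

STYLES = ["tight", "loose", "aggressive", "passive"]

def choose_action(hand_strength, style):
    """Map a crude hand-strength score (0..1) to an action, coloured by style."""
    threshold = {"tight": 0.7, "loose": 0.4, "aggressive": 0.5, "passive": 0.6}[style]
    if hand_strength > threshold:
        return "raise" if style == "aggressive" else "call"
    return "fold"

def play_hand(agent_names):
    """One simplified 'hand': every copy of the agent randomises its style."""
    results = {}
    for name in agent_names:
        style = random.choice(STYLES)   # mixed strategy: no fixed style to exploit
        strength = random.random()      # stand-in for a real card evaluation
        results[name] = (style, choose_action(strength, style))
    return results

# Self-play loop: six copies of the same agent play each other repeatedly.
for hand in range(3):
    print(play_hand([f"copy_{i}" for i in range(6)]))
```

Each run produces a different mix of actions from identical agents, which is the basic intuition behind why a style-randomising opponent is so hard to read.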


Pluribus pulling off a glorious bluff.


The knowledge gained from Pluribus is said to have many potential uses in future research. According to study co-author Dr. Tuomas Sandholm, Professor of Computer Science at Carnegie Mellon University, the strategic reasoning technologies have a range of applications, from poker and video games to strategy optimisation in investment banking, political campaigns, and even steering evolution and biological adaptation, "such as for medical treatment planning and synthetic biology and so on." So what's next for artificial intelligence? Can you think of any other games it could beat us at? Personally, I'd like to see a self-driving car take on Lewis Hamilton.

References:
