Getting GPT-4 to Play Connect4

Ever since the dawn of artificial intelligence (AI), there's been a fascination with its potential to compete with or surpass human intelligence in a variety of tasks. Famous instances of this include IBM's Deep Blue defeating chess grandmaster Garry Kasparov in 1997, and Google DeepMind's AlphaGo triumphing over world champion Go player Lee Sedol in 2016. This led me to a question: can GPT-4, the latest language model from OpenAI, play Connect4, and if so, would it be unbeatable?

Firstly, a quick refresher on Connect4. The objective of this two-player game is simple: be the first to form a horizontal, vertical, or diagonal line of four of your coloured discs. It's a game of strategy and anticipation, where each move changes the state of the board and shapes the possibilities that follow.
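
A minimal sketch of that win condition in code (my own illustration, not part of the original experiment), assuming a 6x7 board stored as a grid of "R", "Y" and "." cells:

```python
ROWS, COLS = 6, 7

def has_connect4(board: list[list[str]], token: str) -> bool:
    """Return True if `token` ("R" or "Y") has four in a line anywhere on `board`."""
    # Check each cell as the start of a line running right, down,
    # down-right, or down-left.
    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]
    for r in range(ROWS):
        for c in range(COLS):
            if board[r][c] != token:
                continue
            for dr, dc in directions:
                cells = [(r + i * dr, c + i * dc) for i in range(4)]
                if all(
                    0 <= rr < ROWS and 0 <= cc < COLS and board[rr][cc] == token
                    for rr, cc in cells
                ):
                    return True
    return False
```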

For this experiment, I set up a simple browser-based frontend to run the game, with the human as Player 1 and GPT-4 as Player 2. The operation was straightforward: whenever it was GPT-4's turn, the current state of the board was sent to GPT-4 along with a prompt asking it to make a move. This allowed GPT-4 to focus on devising the best possible strategy.
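
To give a sense of what that exchange might look like, here is a minimal sketch of the turn handler, assuming the official openai Python client; the board encoding, prompt wording, and function name are illustrative rather than the exact code used in the experiment:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_gpt4_for_move(board: list[list[str]]) -> str:
    """Send the current 6x7 board to GPT-4 and return its raw reply."""
    board_text = "\n".join(" ".join(row) for row in board)
    prompt = (
        "You are Player 2 (Yellow) in a game of Connect4.\n"
        "Columns are numbered 0-6 from left to right.\n"
        f"Current board ('.' = empty, 'R' = Red, 'Y' = Yellow):\n{board_text}\n"
        "Reply with JSON containing 'objective', 'thought' and 'move' "
        "(the column you want to drop your token into)."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```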

Example response from GPT-4 after analysing the board:
objective: "Prioritise winning, block opponent, anticipate moves, offensive and defensive strategies, and avoid enabling opponent's win."
thought: "I have the opportunity to create a winning condition by placing my token in column 3, which will create a vertical line of three tokens with the potential for a fourth. This move also blocks the opponent's horizontal line."
move: 3
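
Assuming the reply comes back as JSON with the fields shown above, the frontend only needs to read the "move" field and drop a token into that column. A small sketch of that step (again my own illustration, not the experiment's actual code):

```python
import json

def apply_gpt4_move(board: list[list[str]], raw_reply: str) -> int:
    """Parse GPT-4's reply and drop a Yellow token into the chosen column."""
    reply = json.loads(raw_reply)
    column = int(reply["move"])
    # Tokens fall to the lowest empty row in the chosen column (row 0 is the top).
    for row in reversed(board):
        if row[column] == ".":
            row[column] = "Y"
            return column
    raise ValueError(f"Column {column} is already full")
```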

It was evident that GPT-4 understood the objective and could explain the reasoning behind its moves. I was happy with the setup and decided it was time to pit my skills against the AI.

Since I am only a mortal, I started off the game. I was represented by the Red tokens, and GPT-4 by the Yellow. The “Thought” notes were written by GPT-4 to accompany each of its moves.

My first move was to occupy the center.

GPT-4 countered by blocking – a good move.

I added another token to the center column – GPT-4 blocked again, a sound strategy.

I added a token to column 5, and GPT-4 responded by stacking another in column 4. It seemed like an okay move, but I wondered whether it was neglecting its ground game.

I extended my horizontal connection into column 6. Surprisingly, GPT-4 made a questionable move by adding to column 4 again, passing up the opportunity to block my potential win.

And there it was – Connect4! Human won.

In conclusion, while GPT-4 is impressively adaptable and can certainly play Connect4 with the right prompting, it clearly doesn't match up to a dedicated Connect4 solver. Connect4 is a solved game: perfect-play solvers compute their moves in milliseconds and, when moving first, can force a win every single time.

This experiment serves as a reminder that while large language models like GPT-4 have extraordinary capabilities, they aren't the solution to every problem.
