Anxiety over an artificial intelligence tool called ChatGPT is spreading across a wide range of sectors, from education to business to cybersecurity circles. Numerous articles have demonstrated ChatGPT’s effectiveness at drafting phishing emails, as well as its ability to pass medical and business school exams. Its capacity to write, converse, and answer queries across a wide range of subjects as competently as many humans, along with its ability to find vulnerabilities in computer systems, has raised legitimate concerns that it could be used to create effective phishing campaigns at scale.
While today it’s a toy, a parlor trick that people take out to show how much AI has improved, businesses and government institutions should be worried about what’s going to happen in two to five years, as AI models continue to improve and bad actors take advantage of their capabilities. Organizations need to take steps now to strengthen their cyber defenses against both current threats and those lurking around the corner.
AI’s Versatility Creates Risks
ChatGPT, created by OpenAI, has been available for queries since November 2022 in an open-ended beta testing period. OpenAI, a research and deployment company that pursues innovations in AI, says it created the chatbot to interact in a conversational way, study user feedback, and learn the model’s strengths and weaknesses. It’s been used to explore scientific subjects, help write a poem or a song, and even apply for a job. ChatGPT does make mistakes. The coding platform StackOverflow