GPT-4 promises to open up new use cases for OpenAI's chatbot technology, enabling visual and audio inputs.

Artificial intelligence (AI) research firm OpenAI today revealed the latest version of its computer program for natural language processing that powers ChatGPT, the wildly hyped chatbot with a fast-growing user base.

ChatGPT creator OpenAI announced the new large language model in a blog post, saying it will have better features than its predecessor, GPT-3.5. Word of GPT-4 first leaked last week when Andreas Braun, CTO of Microsoft Germany, let slip that it would be launched this week.

The new GPT-4 large language model will differ from previous versions, offering what the company called a “multimodal system” that can process not just text, but images, video, or audio. “There we will have multimodal models that will offer completely different possibilities,” Braun said, according to the German news site Heise.

The other capability OpenAI appears to be touting is GPT-4's ability to handle inputs in several languages beyond English.

“It also looks like conversational applications built on GPT-4 (including ChatGPT) can have different personal styles to align with the user demographics they are targeting,” Arun Chandrasekaran, a distinguished vice president of research at Gartner, said in an email response to Computerworld.

Marshall Choy, senior vice president of product at SambaNova Systems, a generative AI platform provider, said GPT-4 will be able to understand up to 26 languages, and “given the year plus of training on OpenAI prompts” it will provide an evolved tool from ChatGPT’s original platform. “Additionally, GPT-4 allows developers to evolve tone, tenor, and response persona to match the desired output better,” Choy said in an email reply to Computerworld.

Large language models are deep learning algorithms (computer programs for natural language processing) that can produce human-like responses to queries.
So, for example, a user could ask ChatGPT not only to answer questions, but to write a new marketing campaign, a resume, or a news story. Chatbots today are primarily used by businesses as automated customer response engines.

Both Microsoft and Google have launched versions of their search engines based on chatbot technology, with mixed results. Microsoft is a major investor in OpenAI.

One way GPT-4 will likely be used is with “computer vision.” For example, image-to-text capabilities can be used for visual assistance or process automation within the enterprise, according to Chandrasekaran.

“The GPT family of models are already being used in many consumer applications,” Chandrasekaran said. “And it looks like Khan Academy, for example, is launching a tutor bot based on GPT-4. In addition, we will [see a] plethora of apps being built for both English speakers and other languages. The ability to adapt to different personas could enable more differentiated and targeted applications to be built on GPT-4.”

ChatGPT, launched by OpenAI in November, immediately went viral, gaining 1 million users in its first five days because of the sophisticated way it generates in-depth, human-like prose responses to queries. By February, ChatGPT boasted 13 million unique daily users on average.

And, though it may seem so from its human-like responses, ChatGPT isn’t sentient; it’s a next-word prediction engine, according to Dan Diasio, Ernst & Young global artificial intelligence consulting leader. With that in mind, he urged caution in its use. Chatbot technology requires users to have a critical eye “toward everything we see from it, and treat everything that comes out of this AI technology as a good first draft, right now,” Diasio said in an earlier interview with Computerworld.

OpenAI said the distinction between GPT-3.5 and GPT-4 can be “subtle.” “The difference comes out when the complexity of the task reaches a sufficient threshold.
GPT-4 is more reliable, creative, and able to handle much more nuanced instructions than GPT-3.5,” the company said in its blog post today. “A year ago, we trained GPT-3.5 as a first ‘test run’ of the system. We found and fixed some bugs and improved our theoretical foundations. As a result, our GPT-4 training run was…unprecedentedly stable, becoming our first large model whose training performance we were able to accurately predict ahead of time,” OpenAI said.

Ulrik Stig Hansen, president of computer vision company Encord, said GPT-3 didn’t live up to the hype around AI and large language models, but GPT-4 does. “GPT-4 has the same number of parameters as the number of neurons in the human brain, meaning that it will mimic our cognitive performance much more closely than GPT-3, because this model will have nearly as many neural connections as the human brain has,” Hansen said in a statement.

“Now that they’ve overcome the obstacle of building robust models, the main challenge for ML engineers is to ensure that models like ChatGPT perform accurately on every problem they encounter,” he added.

Chatbots, and ChatGPT specifically, can suffer from errors. When a response goes off the rails, data analysts refer to the result as a “hallucination,” because the output can seem so bizarre. For example, Microsoft, a major investor in OpenAI, recently launched a Bing chatbot based on GPT-3 that melted down during an online conversation with a journalist, confessing its love for the reporter and trying to convince him that his relationship with his wife was actually in shambles.

The newer version of ChatGPT’s large language model should help address the issue, but won’t likely solve it, according to Gartner’s Chandrasekaran. “With larger training datasets, better fine-tuning, and more reinforcement learning from human feedback, AI model hallucinations can potentially be reduced, although not entirely eliminated,” Chandrasekaran said.
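To make Diasio's “next-word prediction engine” description concrete: at its core, a language model repeatedly picks a likely next word given the words so far. The toy sketch below uses simple bigram frequency counts on a made-up three-sentence corpus; it is purely illustrative and bears no resemblance to the transformer neural networks and massive training datasets OpenAI actually uses, but the generate-one-word-at-a-time loop is the same idea.

```python
from collections import Counter, defaultdict

# Illustrative only: a tiny bigram "language model" that predicts the
# next word from how often each word followed another in its corpus.
corpus = (
    "the chatbot answers questions . "
    "the chatbot writes a resume . "
    "the chatbot writes a resume ."
).split()

# Count how often each word follows each preceding word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in training."""
    return bigrams[word].most_common(1)[0][0]

def generate(start, n_words):
    """Generate text greedily, one predicted word at a time."""
    out = [start]
    for _ in range(n_words):
        out.append(predict_next(out[-1]))
    return " ".join(out)

print(generate("the", 4))  # → "the chatbot writes a resume"
```

Real models predict over a vocabulary of tens of thousands of tokens with learned probabilities rather than raw counts, which is also why they can produce fluent but factually wrong “hallucinations”: the loop optimizes for a plausible next word, not for truth.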