Evan Schuman has covered IT issues for a lot longer than he'll ever admit. The founding editor of retail technology site StorefrontBacktalk, he has been a columnist for CBSNews.com, RetailWeek, Computerworld and eWeek. His byline has appeared in titles ranging from BusinessWeek, VentureBeat and Fortune to The New York Times, USA Today, Reuters, The Philadelphia Inquirer, The Baltimore Sun, The Detroit News and The Atlanta Journal-Constitution. Evan can be reached at eschuman@thecontentfirm.com and followed at twitter.com/eschuman. Look for his blog twice a week.
The opinions expressed in this blog are those of Evan Schuman and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.
CIOs are so desperate to stop generative AI hallucinations that they'll believe anything. Unfortunately, agentic RAG isn't new, and its abilities are exaggerated.
Instructions must be explicit and not open to interpretation. Some question how effective an instruction to "not hallucinate" will be.
Many organizations have experienced atrocious ROI for generative AI efforts, but that’s because they’ve been thinking the wrong way about both genAI and the kind of ROI they can expect from it.
In many ways, the rush to try out still-evolving generative AI tools really does feel like the Wild West. Business execs need to slow things down.
Generative AI advocates say genAI tools can catch errors made by other genAI tools — but humans must still check the AI checkers’ work.
As US representatives try to negotiate with Japan and the Netherlands to deny China the tools to make faster chips for AI work, some observers doubt they will succeed.
If you can't trust the product, can you trust the vendor behind it?
Corporate privacy policies are supposed to reassure customers that their data is safe. So why are companies listing every possible way they can use that data?