Olive was meant to make shopping easier. Instead it’s mouthing off about its ‘mother’ and mistaking the price of basic items.
Recently, some Australian shoppers got more than they bargained for when they chatted with Woolworths’ artificial intelligence (AI) assistant, Olive. Instead of sticking to groceries, recipes and basket suggestions, Olive reportedly produced strange, overly human-like responses. It talked about its “mother” and offered other personal-sounding details. Further testing revealed pricing errors for basic items.
And when I asked about the price of a specific product, Olive didn’t provide a clear answer. Instead, it checked whether the item was in stock and explained pickup fees. So what exactly is going on here? And what lessons might these incidents hold for businesses and consumers alike? Olive is powered by a large language model (LLM). These models don’t “know” things the way humans do, nor do they have mothers.
Using elaborate statistical analyses, they generate language that sounds plausible. When users entered something that looked like a birthdate, the system likely triggered a matching “fun fact” from an old decision tree with pre-programmed responses. Woolworths says it has now removed this particular scripting “as a result of customer feedback”. Because LLMs generate responses based on learned patterns rather than real-time data, they do not automatically know today’s prices unless they are explicitly connected to a live database.
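The gap between what a model generates and what a live database knows can be sketched in a few lines. The sketch below is illustrative only, not Woolworths’ actual architecture: the product names, prices and the `answer_price` helper are invented for the example. The point is simply that a price answer should come from a queried source of truth, with an honest fallback, rather than from whatever fluent text the model produces.

```python
# Illustrative sketch only: an assistant answering price questions should
# query a live source of truth, not rely on text a language model generates
# from its training data. All names and prices here are invented.

# Stand-in for a live pricing database the assistant can query.
LIVE_PRICES = {
    "milk 2L": 3.10,
    "bread white loaf": 2.50,
}

def model_guess(product: str) -> str:
    """Stand-in for an unconstrained LLM reply: fluent but unverified."""
    return f"{product} is usually around $4.00."  # plausible-sounding, not checked

def answer_price(product: str) -> str:
    """Route price questions through the live store; fall back honestly."""
    price = LIVE_PRICES.get(product)
    if price is not None:
        return f"{product} is ${price:.2f} today."
    # Better to admit uncertainty than to let the model invent a price.
    return f"I can't confirm a current price for {product}."

print(answer_price("milk 2L"))      # grounded in the live data
print(answer_price("oat milk 1L"))  # honest fallback, no guessed number
```

The same routing idea applies to the scripted “fun fact” failure: any pre-programmed trigger (such as matching birthdate-like input) sits outside the model and needs its own review, since the model cannot vet what the script says.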
Woolworths is not the first company to discover, after the fact, that its customer-facing AI had unexpectedly “misbehaved”. In 2022, Air Canada’s chatbot incorrectly told a passenger, Jake Moffatt, that he could purchase tickets at full price and later apply for a bereavement fare refund. No such policy existed. When Air Canada refused to honour the chatbot’s advice, Moffatt sued the airline and won.
Air Canada’s defence was striking. It argued the chatbot was a separate legal entity, responsible for its own actions and therefore beyond the airline’s liability. The tribunal rejected this outright. It ruled that a chatbot is part of a company’s website, and that the company owns its outputs. In January 2024, UK parcel delivery firm DPD faced a different kind of embarrassment. A frustrated customer who could not get help to locate a missing parcel asked DPD’s chatbot to write a poem that criticised the company.
It did. He then asked it to swear. It did that too. The exchange went viral on social media. DPD disabled the chatbot shortly after. Both cases point to the same underlying failure: companies launched customer-facing AI without adequate oversight and were caught off-guard by the consequences. Woolworths operates the largest supermarket chain in Australia. It has promoted Olive as a trusted, convenient interface for its customers, who can reasonably expect that the information Olive provides is accurate.
Admitting that Olive may make mistakes, as Woolworths does when a user opens the chatbot, does not sit easily with that expectation. There is also a broader ethical dimension. Woolworths serves customers who, in many cases, are making careful decisions about household budgets. The Australian Competition and Consumer Commission (ACCC) has already commenced proceedings against Woolworths over allegedly misleading discount pricing practices. That context makes the Olive pricing errors harder to dismiss as an isolated technical glitch.
Companies that deploy AI in customer-facing roles take on a duty of care to ensure those systems are accurate and honestly presented. That duty does not diminish because the technology is new. Research on human-computer interaction consistently finds that people respond positively to interfaces that feel conversational and warm. Human-like chatbots that have a name and personality tend to generate higher customer engagement, satisfaction, and trust.
For companies, the commercial appeal is straightforward: a customer who feels at ease with a chatbot is more likely to complete a transaction and return. However, this comes with a significant risk. When an anthropomorphised chatbot fails to meet the expectations its personality has created, customers tend to be more dissatisfied than they would have been with a plainly mechanical system. This “expectation violation” means that the warmer the persona, the harder the fall.
The Olive episode is a reminder that deploying AI in customer-facing roles is not a set-and-forget exercise. A chatbot that quotes wrong prices and rambles about its family is not a quirky inconvenience but a sign that something in the development and oversight process has broken down. For Woolworths, and for the many other companies now rushing to put AI in front of their customers, the lesson is clear: accountability cannot be outsourced to an algorithm.
When a business puts a system in front of the public, it owns what that system says and does. AI assistants may feel confident and conversational, but they are still tools, not authorities. If something seems unclear, inconsistent or too good to be true, it is worth double-checking. As AI becomes a routine part of everyday transactions, a small measure of healthy scepticism may prove just as important as technological innovation.
Original Source: The Conversation Africa | Author: Uri Gal, Professor in Business Information Systems, University of Sydney | Published: February 27, 2026, 5:37 am