This reminds me a bit of the Eliza vs Racter test...
I guess everyone knows Eliza. If not, here is the gist:
Eliza was implemented by Joseph Weizenbaum in MAD-SLIP (it is often misremembered as LISP), apparently to demonstrate how superficial communication between humans and computers really was, but a lot of people misunderstood it and took its responses seriously.
It worked on simple keyword matching: if you typed a sentence containing "father", for example, it would respond with "Tell me more about your family."
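That keyword mechanism is easy to sketch. Here is a minimal toy version of the idea, not Weizenbaum's actual script; the keywords, patterns, and canned replies are invented for illustration:

```python
import re

# Each rule is (keyword pattern, reply template). The first matching
# rule wins; captured groups can be spliced into the reply.
RULES = [
    (r"\b(father|mother|sister|brother)\b", "Tell me more about your family."),
    (r"\bI am (.+)", "Why do you say you are {0}?"),
    (r"\bcomputer\b", "Do computers worry you?"),
]

# Fallback when no keyword matches -- Eliza had stock phrases like this.
DEFAULT_REPLY = "Please go on."

def respond(sentence):
    for pattern, template in RULES:
        match = re.search(pattern, sentence, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return DEFAULT_REPLY
```

With rules this shallow, `respond("My father was strict")` yields the family reply no matter what the rest of the sentence says, which is exactly why the illusion of understanding breaks down so quickly.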
I tested it once, and ran into a grammar issue (in the reply) after my third sentence or so.
There is another old AI program called Racter, which had certain topics it would talk about but jumped between them rather erratically, which is why (according to an article) it was occasionally used to train people in communicating with schizophrenia patients.
It even "wrote" a book, titled "The Policeman's Beard Is Half Constructed".
Someone thought it would be funny to have Racter talk to Eliza.
Let's just say, Eliza was a bit overwhelmed by Racter.