There is no need to be nice to a machine, not an objective one at least.

In the context of robots pretending to be people (text-based NLP bots), we rarely consider the feelings (as opposed to the utility) of our customer.

Twenty years ago, a socially balanced individual would have had no reason to drop manners and niceties when approaching a company, say about an issue with a recent purchase.

That is not necessarily the case today. If you know you're conversing with an algorithm, you may drop the niceties, to little effect. Beyond philosophizing, the machine only registers your perceived tone if it fits a model (say sentiment analysis: are you happy or angry?).
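To make that concrete, here is a minimal sketch of the kind of model such a machine might apply: a toy lexicon-based sentiment check. The word lists and function are hypothetical, for illustration only, not any real product's implementation.

```python
# Toy lexicon-based sentiment model (hypothetical word lists).
ANGRY_WORDS = {"broken", "refund", "unacceptable", "terrible", "angry"}
HAPPY_WORDS = {"thanks", "great", "love", "perfect", "happy"}

def perceived_tone(message: str) -> str:
    """Classify a customer message as 'angry', 'happy', or 'neutral'."""
    words = set(message.lower().split())
    angry_hits = len(words & ANGRY_WORDS)
    happy_hits = len(words & HAPPY_WORDS)
    if angry_hits > happy_hits:
        return "angry"
    if happy_hits > angry_hits:
        return "happy"
    return "neutral"
```

Note that a politely worded message like "please could you kindly help" scores as neutral here: the manners and niceties fall entirely outside the model, which is exactly the point.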

As has been much discussed in various places (including here), current machines lack basic human mechanisms when it comes to actual conversation. That is why we're seeing more and more seamless human-machine collaborations (say, in a brand's messaging environment): when a bot gets stuck, the system passes you on to a human, often within the same window.

Shouldn't we alert the other side of the conversation whether they're talking binary (to a bot) or symbolically (to a human)?

How can we (1) design processes that send routine communication to a machine and more abstract problems to a human, (2) in a way that eliminates miscommunication and orients our customer?
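One way to sketch both requirements at once: route by intent, and label every handler so the customer always knows who they are talking to. The intent names and labels below are hypothetical placeholders, not a proposal for any specific system.

```python
# Hypothetical router: routine requests go to a machine, everything
# else escalates to a human, and each reply carries an explicit label
# that orients the customer about who is on the other side.
ROUTINE_INTENTS = {"order status", "opening hours", "reset password"}

def route(intent: str) -> dict:
    """Return the handler for an intent plus a customer-facing label."""
    if intent in ROUTINE_INTENTS:
        return {
            "handler": "bot",
            "label": "You are chatting with our automated assistant.",
        }
    return {
        "handler": "human",
        "label": "Connecting you with a member of our team.",
    }
```

The label field is the part that addresses miscommunication: the switch from bot to human is announced rather than silent.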

Introducing robots to our communication landscape warrants a closer look at how we talk to robots (maybe the same way we talk to each other), and more importantly at how and when we switch from talking to robots to talking to humans.

Daily Note

March 11, 2019