Microsoft software will let us talk to the dead

It might sound like the latest macabre stunt by some geek with a passion for the occult, but it is in fact one of the projects in the pipeline at Bill Gates’ company. The news emerged in recent days, after an article on the Ubergizmo website reported it. There’s no need to worry: no mediums will be possessed by the souls of our loved ones, and no ouija boards or séances are involved. This is, once again, about developments in technology and artificial intelligence, and the result may be far less frightening than it sounds.

According to the popular tech blog, the Redmond-based company has filed a patent for software, specifically a chatbot, that simulates the way a real person talks in minute detail, mimicking their communication styles and patterns. Since it is a computer program, with a life cycle obviously different from that of a human being and the ability to store information for a potentially unlimited time, it goes without saying that when the people who lend themselves to the software’s development are no longer around, their voices will continue to exist, in all their nuances.

Chatbots are already used by several companies, especially in customer service, where they give automatic answers to consumers’ questions. One example is Ryanair’s customer service, where it is also possible to arrange vouchers and refunds without the mediation of a human assistant. The artificial intelligence behind the software we are used to, however, is not that advanced, since it can generally answer only simple questions. When the language gets too complex, our artificial interlocutor struggles to keep the conversation alive.

For Microsoft’s chatbot to feel as real as possible, it would need access to a whole range of personal data, such as images, e-mails, written letters, social media posts and voice recordings. This data would feed the machine learning algorithms that learn to duplicate a person’s linguistic styles and behaviors. The biggest obstacle in its development lies precisely in this data collection phase, since it poses a serious risk to privacy. Microsoft would probably need to ask for specific permission to access the necessary data.
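To give a rough idea of what “learning someone’s linguistic style from their messages” means in practice, here is a deliberately tiny sketch. It is not Microsoft’s patented method, just a toy Markov-chain model: it records which words tend to follow which in a person’s past messages, then strings words together accordingly. The sample messages and function names are invented for illustration.

```python
import random
from collections import defaultdict

def train_bigram_model(messages):
    """Map each word to the list of words that follow it in the corpus."""
    model = defaultdict(list)
    for msg in messages:
        words = msg.lower().split()
        for current, following in zip(words, words[1:]):
            model[current].append(following)
    return model

def generate(model, start, length=5, seed=0):
    """Generate text by repeatedly sampling a word that followed the previous one."""
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    word, output = start, [start]
    for _ in range(length - 1):
        followers = model.get(word)
        if not followers:
            break  # dead end: the word never had a successor in the corpus
        word = rng.choice(followers)
        output.append(word)
    return " ".join(output)

# Hypothetical snippets of one person's past messages
messages = [
    "I love a good cup of tea",
    "a good book and a good cup of tea",
]
model = train_bigram_model(messages)
print(generate(model, "a"))
```

A real system would of course use far larger neural models and a much richer corpus (e-mails, posts, voice data), but the principle is the same: the more of a person’s output the model sees, the more faithfully it can echo their patterns.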

The company founded by Bill Gates has already attempted a similar experiment. A few years ago it launched a Twitter chatbot called Tay. The experiment did not go well: in a very short time the software learned to be racist, to praise Hitler and to hit on its interlocutors. After less than 24 hours, Tay was silenced.
