Losing Personality to the Code

When I was a pastoral intern at a Filipino church, one event highlighted the difference between my mentor and me. A singing group came to give an evening concert. It was good. Pastor Joy, my mentor, had a short opportunity to speak. Before he got up he looked at me and said, “Time me, I will be five minutes.” Dutifully, I did. Twelve minutes later he sat down. Now, he could get away with it. Going long never seemed to drag on with him, as it does with some people.

If I had said five minutes, I likely would have sat down at four and a half. This is who I am. I appreciate what Pastor Joy taught me, but we were different. What would be sad is an intern who tried for twelve while everyone wished he had quit at three. Such people exist. They need to understand their own personality, their abilities, and even how to speak publicly. I fear that the AI revolution will make their lives worse, because the proper conception of personality will atrophy into a blighted, generic stump.

AI takes large batches of data and collates them; in the process, the data gets smoothed over. What it creates is functional, but it is merely average. The problem is that no one is truly average. We might be average in certain areas, but not always and everywhere.

What does smoothing the data imply? The interesting turn of phrase or the unique name gets lost in the plethora of adequate descriptions. The popular phrases repeat and always will, because they are, in the end, overused by the very nature of being popular. What is new will have a hard time gaining traction, because it is rare in the dataset. When something is unique, it has not been repeated yet, nor has it been incorporated into a derivative work, so it is unlikely to be used by AI. In addition, AI has been shown to have poor discernment. (See my prior article.) It trusts a large dataset to ameliorate any problems. A large dataset is both a solution and a problem. A large set of data, as a whole, is boring. Parts of it are interesting, but the whole is banal. Programmers and users are hoping there is enough data to produce decent results. Yet there may be a problem.

What if a dataset is incomplete? I have seen reports of investment banks forbidding the use of AI to keep their data private. No one should be surprised. The confidentiality of their data is part of how they function. Moreover, I am sure they are not alone in this policy. What other datasets are going to be kept behind a wall? That is a choice for the owner, and it will change the effectiveness of AI.

With data limited and used only in a style consistent with the whole, innovation will be stifled, if not killed. This will be the decline of personality in the West. Driven by our blind love of technology, we may not realize what we are losing. Instead of a diverse group of people, you will have a society more uniform than plain yogurt, and somehow less interesting. This is the natural outcome of the modern liberal project, and AI will merely be a catalyst.

A simple example: when I get a new phone, I always turn off predictive text immediately. It is always a pain. I run in several different circles, and each has a different vocabulary. Technical words from seminary do not need to be auto-filled into a text to the volunteer fire department. These are very different contexts, and I need to talk differently to each group. AI may handle this problem better, but it still does not create something new; it just regurgitates old data blended together.

This is about who the audience is and who the writer is. While AI may be able to tailor its writing to an audience, the function of AI is to create from a large dataset. This means writing that is an average of all writers. This also implies that the writing will become generic. The personality of the writer will become generic, especially if the more interesting data is withheld. It is the death of personality.

This does not mean that the technology does not have its uses, but we should never see it as a replacement for a unique person doing their own work. We should not trust it to do the work, and my next article will highlight how untrustworthy AI truly is.

Photo by Evgeniy Alyoshin on Unsplash