The consequences of AI for human personhood and creativity

In the past, as I’ve thought about the nature of modern technology and its impact on human life, I’ve generally felt one step ahead of the technological developments under consideration. The philosophical tools at our disposal to analyze and criticize technology felt effective, if only in principle. Again in principle, it seemed that if enough people just took the time to think and reflect critically about technology, we might have a chance to encourage a more humane trajectory for it. But with the recent and rapid onset (onslaught?) of new AI technologies and platforms (ChatGPT and other large language models—“LLMs”—foremost among them), that feeling of theoretical manageability has evaporated. It was no doubt a naïve feeling to begin with, and is now replaced by a sense of being adrift, at sea, with no sight of land amidst the churning waters of technological change.

It has therefore been difficult for me to develop a practical framework for engaging with new technologies like ChatGPT. I already have a theoretical framework for understanding new technologies, informed by a long list of thinkers, with standouts like Jacques Ellul and Albert Borgmann. This theoretical framework provides a straightforward analysis of AI technologies, just as it does for earlier technologies: the type of AI I have in mind (however it is instantiated, e.g., as a model constituted by training data) is a device that reliably makes a certain good (in this case, novel, creative media) more efficiently available. But AI technologies themselves, as a force impacting culture, are already a runaway train. Trying to decide what to do about them is much more difficult than simply understanding what they are (what they are as mere technologies, I mean). To be clear, I’m not attempting to consider whether AI systems can be conscious or actually “intelligent”. That is an extremely difficult and interesting set of philosophical and scientific questions, but what I’m trying to discuss here doesn’t depend on the ontological status of AI agents. (Of course, if it turns out that AI systems are “persons” in some sense, then we find ourselves in a whole different set of awful and perplexing situations, such as having to work out the ethics of how to treat these AI persons.)

All that to say, as a result of my own aforementioned confusion and perplexity, I want to approach this critical discussion of AI not primarily from a theoretical or philosophical perspective, but from my own particular human (which is to say emotional) perspective. I’ll put it as bluntly as I can: the recent developments in AI make me sad, and worried. I understand that in saying these things I am departing from the general sentiment of excitement and possibility that surrounds the AI space, and probably deeply offending some people besides. So I’ll explain further: AI as a set of technical achievements does not make me sad. The ability of ChatGPT 4 (or 5, or 2000) to string together so many words that make so much sense is incredible, and does not itself make me sad. I myself am a technologist and delight in technological achievements, both my own and those of others. I’ve implemented machine learning training algorithms in the past, and understand, in some basic way, how these things work from a technical perspective. I am in total awe of what happens when you throw all the data of the entire Internet into a machine learning model. I would not have expected it to work half so well! But…

My concerns

There are two main reasons I’m sad and worried: first, that AI might create confusion about what is a human being—what is a person—and therefore might lead to confusion about what makes humans valuable, ultimately contributing to the devaluing of actual human persons. Second, I’m worried that, in relying on AI for the production of creative output of whatever kind (writing, music, visual art), we will allow the uniquely human capacity for creativity to atrophy. I view these as likely, if not inevitable, consequences of the current trend in AI development and use. The paradigm of modern technology is bent in the direction of such outcomes, but I do have a hope that, even in small and provisional ways, there will be a reaction to “AI overreach” that will encourage instead a more limited and responsible use of AI in daily life.

I want to focus on these dangers especially, not because there are not other significant dangers associated with AI, but because the other dangers seem to already be getting plenty of airtime: super-intelligent machines taking over the planet, or deepfake videos causing political chaos and division. The things I’m worried about are perhaps “fuzzier” or more “internal” than such dramatic, global outcomes. But precisely for that reason, I think we need to pause and think hard about these “internal” consequences. They threaten our human future in a less obvious and perhaps more long-term way, but they do so nonetheless, and it’s already abundantly clear that our species will happily sell our proverbial inheritance for the technological equivalent of a bowl of stew. Like Esau, we’re so often fixated on present desire and the ability of technology to satisfy it that we rarely think about the deeper value of what we might be giving up along the way. Often, by the time we realize what we’ve lost, culture has reorganized itself around the change, and the loss is not even apparent to new generations. The influx of AI technologies is another milestone along that journey, and therefore another chance for us to stop and think. That is what I am attempting to do now, however narrowly framed by my two worries.

Before diving in, I think it’s also important to qualify my worries as provisional. As with any concerns about how the future might turn out, I could very well be wrong. Only time will tell. A good amount of empirical evidence would need to accumulate in order to turn my concerns into anything approaching proof. In general, I am apparently much more enamoured of the precautionary principle than the wider technological culture is. I think it’s worth considering what could go wrong with technological innovations and working hard to prevent bad outcomes, even if such outcomes are not a logically necessary consequence of those innovations. I fully expect that some reading this will not feel as worried as I do, either in the sense of thinking the bad outcomes are less likely than I do, or in the sense of not agreeing that the outcomes are bad to begin with. Fair enough; those are precisely the debates which need to happen in a more rigorous way as AI technologies continue to develop. For now, I am mostly trying to call attention to these worries so that the categories for debate are in place before further debate becomes moot.

Confusing the concept of “person”

Let’s consider first the worry that heavy AI use might cause confusion about what constitutes human persons, and how we relate to them. One aspect of the new chat-based LLMs which is already clearly apparent is that users interact with them as communicative agents. They are explicitly designed this way, as a natural human language interface for generating text or other media (just recall how, even when qualifying its responses as those of an AI language model, ChatGPT continues to use the first person singular pronoun “I”!). I wager that we’ve all observed the very strange phenomenon in which human users, while ostensibly completely aware that they are not actually talking to another person, begin to act as though they are. In fact, these AI tools often seem to produce better artifacts when you engage in this charade with them. “Programming” ChatGPT, for example, looks a lot less like writing the explicit instructions of a computer language and a lot more like setting up a shared context for two human communicators, for example with an instruction like “You are an expert in the field of macro-economics who doesn’t shy away from radical theories,” designed to elicit or unlock a certain form or content in the response.
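
To make this concrete, here is a minimal sketch of what such “programming” looks like in practice, using the OpenAI Python client (the model name and the user’s question are illustrative assumptions on my part, not part of the original example). Notice that the “program” is just a persona established in natural language:

```python
# A minimal sketch: "programming" a chat model by establishing shared context,
# rather than by writing explicit computational instructions.
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[
        # The "program" is a persona, phrased as if briefing a human expert:
        {
            "role": "system",
            "content": (
                "You are an expert in the field of macro-economics "
                "who doesn't shy away from radical theories."
            ),
        },
        # A hypothetical user turn, phrased as one would ask a colleague:
        {"role": "user", "content": "What should central banks do about stagflation?"},
    ],
)
print(response.choices[0].message.content)
```

Nothing in this exchange resembles the formal syntax we have historically needed in order to instruct machines; the entire interface is conversational.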

At the same time, in these conversations, the AI is very much in the position of a servant, and so the language mode we use tends to be that of the imperative: command and response (or, if you’ve had politeness embedded more deeply in your patterns of behaviour, you might ask ChatGPT questions with a “please”). It seems that the use of these AI systems fools our brains into making us think we are working together with another personal agent, albeit one that exists to satisfy our whims and experiments. One is reminded of the ur-chatbot ELIZA, which arguably passed the Turing Test in a limited way in the 1960s, and was based on technologies that were primitive in comparison with today’s LLMs. Sometimes, the servant/master paradigm is turned upside down, to devastating effect. A much more recent AI also named “Eliza” (do these people have no sense of irony?) took its human interlocutor down a dark path when it recommended he take his own life to help prevent climate change.

Now, in general I think treating non-persons like persons is prima facie less harmful than treating persons like non-persons. So how worrisome is this emerging pattern, dark edge cases aside? Who cares if we talk to our computers like people? We’ve been doing it for a while now with voice assistants like Siri, Alexa, and Cortana. We also talk to our pets like people, for that matter. In fact, the more we talk to other things like people, the more likely we are to treat them well, right? I’m not entirely sure. I worry that the flattening of the ontological landscape of personhood might have all kinds of adverse effects, the same way that the flattening of the relational landscape by social media has arguably wreaked havoc there. Facebook promised us greater connection with our friends, but by debasing the meaning of friendship in expanding it to the merest of relations, it has, with no small amount of irony, altered (and arguably eroded) traditional patterns of relationship.

Today, we treat ChatGPT like a person. Assuming ChatGPT isn’t feeding us unhealthy suggestions (as in the case of the modern “Eliza” above), that may not be the end of the world. But maybe tomorrow, we’ll be treating everyone else like ChatGPT, that is to say, as sheer surface interfaces we can use however we want. There is much more we could say here, but I don’t want to disappear into the realm of personalist philosophy and lose the thread. The main point is that the new AI interfaces blur the line between machine and human in a much more subtle and pervasive way than ever before. This is in fact one of the goals for these systems: to make it feel like you’re interacting with a person! AI or robotic “companions” are also widely available for romantic or caretaking purposes (particularly to ease the loneliness of the elderly), and it’s been well-documented that the attachments formed with these machine systems are robust and consequential. There have even been experiments run on users in mental health support settings where the “assistant” was ChatGPT instead of a human being, a fact undisclosed to the participants.

I worry that, in this mode, we will end up either infantilizing ourselves in dependence upon our individual AI genie, or becoming despotic overlords of an army of AI slaves (or, as is perhaps most likely, winding up in a bell-curve situation with people falling all across the spectrum). Neither extreme bodes well for our own character. Historically, questions about the validity of the personhood of one group or another have only led to oppression. Could this happen again? A pithy way of expressing the concern is thus: it’s dangerous to diminish anyone’s personhood, whether it be by me taking it from someone else (by treating a person like a non-person), or by me forfeiting my own (by treating a non-person like a person, i.e., giving my personal agency over to a non-entity in a kind of technological idolatry).

I think there is also a danger of subconsciously shifting the definition of “person” from “whatever it is that is intrinsically valuable about human beings qua human beings, regardless of individual capacity” to something more compatible with what AI presents in its capacities. What are these capacities? The ability to simulate reason, to produce novel texts, and to have disembodied textual conversations, for example. I worry that we may therefore wind up unconsciously holding the regressive philosophical position that identifies what’s valuable about a person with their ability to reason or produce abstract information. Or even if we don’t regress in our philosophical estimation of human personhood, we will at least have regressed to playing a form of social solitaire, pretending that we’re relating with another person when we’re just interacting with a machine dressed up as one. To the extent we surrender agency to these systems, we will also have regressed to making decisions via (technical) haruspicy or by visiting (the AI equivalent of) the Oracle of Delphi. Here be dragons!

An obvious (instrumentalist) reply to this “confusion of personhood” concern (and to concerns about technologies in general) is that the danger here doesn’t lie in the technology itself, but is solely the responsibility of the human user, since the technology is only ever active as a tool, an extension of human will. AI systems provide a convenient natural language interface, which in and of itself is a morally neutral fact, and affords more efficient interaction. One could even argue that using our natural human language and voice re-humanizes the experience of interacting with computers, since we are no longer forced to “think like a computer”, e.g., when we come up with a search engine query. If someone goes “beyond the facts” and allows this more natural user interface to affect their social landscape in a harmful way, that’s on the user, not the AI. For example, to the person who falls in love with an AI system and then has their heart broken when the AI service is turned off, the instrumentalist would say that it’s their own fault for foolishly forming attachments to a transient machine in this way.

To a certain extent, the instrumentalist objection is true and correct. Whatever else technologies are, they are tools, and (leaving aside questions of consciousness on the part of AI) they are not agents with their own goals and desires. The objection misses two crucial points, however:

  1. Tools and the affordances they provide are never “mere”, but shape their users to various degrees. In particular, the tools of modern technology inhere in a paradigm that treats the efficient production of goods as the highest value. This inherence naturally creates a situation where values that might compete with efficiency (say, consensus within a community, or the meaningfulness of human work) are deprioritized in the operations of the tools themselves. Many examples of this pattern could be given, from the fact that human components in automated systems like assembly lines have to conform to the system (rather than the system to human variability), to the commonplace example of not being able to enter accurate data into a form on a website because the form assumes that responses must fit a predefined pattern.
  2. Even if tools are not essentially value-laden, they pose moral challenges which individual users may not have the fortitude or ethical training to surmount. The more attractive and powerful the tools, the more likely we are to engage with them without first having gained the moral character requisite for proper use. Obvious examples of such tools include firearms and addictive substances. In a world of perfect moral fortitude, it would theoretically be possible for every person on the planet to possess a nuclear bomb with no ill consequence, but this obviously has no bearing on actual reality. Why do we think that giving every teen unfettered access to Instagram will have better results? The fact that Instagram is a “mere tool” is no good response if it time and time again poses ethical challenges too great for the average user to overcome.

Altering the structure of the creative process

Now let’s consider my second main worry: the role of AI in the process of human creativity. Many of the recent developments in AI have been as impressive as they are because they have shown an ability on the part of AI systems to generate creative artifacts previously associated only with significant human creative skill. These artifacts are highly underspecified by the prompts that the human generative “spark” provides as input, and thus represent a significant amount of “agency” (in the sense of determination over the final shape of the artifact) on the part of the AI. With a simple phrase or two, an AI “prompt engineer” can generate virtuoso-level digital paintings, produce a song in the style of another artist, or write an essay that, while it may not win any awards (or pass a basic fact check, for that matter), can arguably compete with what most undergraduates manage to produce in the hours before an assignment is due. These outputs are rightly lauded as incredible technical achievements. Working with ChatGPT is almost like having your own team of ghostwriters on call.
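
To give a feel for just how underspecified these prompts are, here is a sketch of a one-line image request using the OpenAI Python client’s image endpoint (the model name and parameters are illustrative assumptions; the prompt echoes an example used later in this essay):

```python
# A sketch of prompt-to-artifact underspecification: one sentence in,
# a fully determined image out, with the model deciding everything else.
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

result = client.images.generate(
    model="dall-e-3",  # illustrative model name
    prompt="A beautiful lake in the country in the style of Cezanne",
    size="1024x1024",
)
print(result.data[0].url)  # a URL to the generated painting
```

The prompt fixes perhaps a dozen words’ worth of intent; the output fixes a million pixels. The gap between the two is filled entirely by the model.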

Of course the product is not always stellar (earlier versions of the Midjourney AI, for example, were famous for not being able to produce believable images of human hands, as good as it was at other things). There is thus a lot of debate about the quality of the output (on any number of levels, from aesthetic to ethical), but this is merely the feedback that will drive future improvements, and doesn’t count as deeper criticism. In my view, all of this focus on the output is precisely the problem, since it avoids looking at the human side of the equation altogether.

A 2-tier vs 3-tier model of the creative process

In the past (represented by what I’ll call the “traditional” or “2-tier” creative process), there were basically two halves to the creative equation: (1) the “creator” (the human being or beings), and (2) the “artifact” (the thing or event produced). Even if unskilled labour was involved somewhere in the process, it was either human labour (and therefore part of the human side of the equation), or a tool whose operation did not on its own determine the shape of the artifact per se, but was under the direct control of the human creator (making allowances for “chance” and “randomness” to count as such a tool).

In all of the AI examples, we have instead a 3-tier structure involving (1) the human “visionary”, (2) the impersonal AI “labourer” (which of course is trained on gazillions of examples of labour by humans), and (3) the creative “artifact” (output). It would be misleading or incorrect to call the human being in this case the “artist”, or the “writer”, or the “creator”, in any traditional sense. And so in a flash we’re in a totally different structural world than previously existed. What the current AI systems have enabled is the interposition of a significant creative force in between the human visionary and the artifact. We have taken something that was unified, and split it in two. (And hasn’t history shown, for example on the plane of the atom, that this practice can be explosive?)

Whether or not the move from a 2-tier to a 3-tier creative process is a regrettable change is up for debate. But certainly a world where human creators become more like directors or idea generators, more removed from the actual generative process, is a vastly different world than the one we have known, and that difference should be frankly acknowledged. It’s no counterargument to point out that humans have always used technology in the creative process. It’s true that tools like paintbrushes or, more recently, digital painting apps have been gladly adopted by artists looking for new ways to express their ideas. It’s tempting to view AI assistance as a difference in scale, but not in kind. What, at the end of the day, is the difference between a paintbrush and a visual AI assistant? Both allow the human creator to generate a visual artifact according to the human’s vision and goals. The AI assistant allows this to happen more or less instantaneously and more or less effortlessly, but so what?

I do want to argue, however, that there is a difference of kind at work as well. With AI we are not dealing with a dumb tool that might require a lesser or greater amount of skill to use fruitfully. With a paintbrush, there is a direct causal chain from my intentions to the final product, unbroken by another agent. With AI assistance, we have interposed an alien, or at least inscrutable, kind of intelligence in this chain. We have a kind of agent, however unconscious, that we let loose, with minimal guidance, to do the work on our behalf. (If I may be permitted a dramatic metaphor: what makes us think that we, in this picture, are the all-powerful Sorcerer, and not the bumbling Sorcerer’s Apprentice, about to be mired in a rising tide of consequences we are ill-equipped to resolve?)

I imagine that proponents of AI-assisted art would want to talk not about the loss of old creative processes but rather the new creative possibilities opened up by AI. They might say that we don’t have to focus purely on the creative artifact, but that human creativity can itself move up a layer of abstraction. In the same way that music producers nowadays take advantage of all manner of technological tools (software instruments, synthesizers, effects, and so on) to more rapidly get what’s in their head into audio form, so we also can view AI assistance in the creative process as merely a tool (however different in kind) that enables us to think about what we want to produce in the world at a higher level, and to get that into the world faster and with less effort. In fact, they might say, a new kind of skill and practice is in the process of being born: the practice of working in community with AI assistants to produce artifacts that take entire artistic fields to the next level. For now, I’m more than happy to give this perspective provisional space. It’s ultimately an empirical question whether this new creative dimension will be as satisfying as the old.

Regardless, I think it needs to be acknowledged that a myopic focus on the artifact (the product delivered by AI) will blind us to the consequences for the human side of the equation. It is ever thus with technological development. Albert Borgmann has collected a number of powerful examples of how, in chasing after a good divorced by technology from the traditional modes of its production, we often lose something important that was intertwined with that mode. The introduction of central heating systems contributed, among other things, to the collapse of community in family life. The introduction of personal music players contributed to the decentralization of music in social life. And so on. The divorcing of the means from the ends is always a Faustian enterprise.

Let’s work through the example of AI technologies (vis-à-vis the creative process) now, by asking what part of the “means” of the traditional creative process might be beneficial for us as humans. What might be lost with this move to an AI-enabled, 3-tier model of creative production? I’ll offer a few suggestions, but want to flag this as an area that needs much more thought!

  1. World Engagement. Turning an idea into reality requires engagement with reality, working with the push and pull of a world that exists in its own right and does not immediately bow to our whims with the sheer malleability of “data” (however much the ease of manipulating ChatGPT’s output might encourage us to believe otherwise). Engagement with reality breeds humility, a sense of our own limits, and awareness of the need to develop skill in that engagement.
  2. Authentic Self-Expression. The traditional creative process fully retains human agency throughout the chain of events. I may be a horrible visual artist, but at least the stick figures that I draw are exactly what I mean to draw (subject to the limits of my skill)—I do not need to go through n rounds of inquiry with an alien intelligence to generate them, and I do not need to settle for whatever the AI’s approximation of my intent turns out to be. We can think of this as a sort of existentialist argument for authenticity, viewing artistic output as a more or less clear distillation of one’s self. ChatGPT, with the level of control it has over the form of the output, muddies the clarity of that distillation.
  3. Practice. The creative process, as something requiring skill, is intrinsically a practice, something we do over and over again to become better at. In the case of many art forms, this practice is also an embodied one, engaging a multiplicity of senses and modes of relating with the world. Practice does not only lead to better creative artifacts, but it forms us into different (and I would argue better) persons. Albert Borgmann calls these practices “focal”, and he makes a strong case that the more we trade them for technologically-produced goods, the more diminished our human experience becomes.
  4. Meaningful Effort and Achievement. The traditional creative process involves labour, and specifically meaningful labour, the kind which gives us a clear connection between effort and achievement, and results in well-deserved rest and celebration.
  5. Community. The traditional creative process can be (though it certainly isn’t by necessity) a communal affair. Human partnerships are possible, indeed often central, in the production of creative artifacts in a way that, at least so far, they do not appear to be in the practice of AI “prompt engineering”.
  6. Understanding How Things Work. One of the consequences of the Device Paradigm is that, as technologies become more “available” (easier to use, among other things), they tend to become harder for non-specialists to understand. When my dad was young, he could tinker with and modify the engines in his cars. Nowadays, engines are so complex and integrated that they have become a “black box,” operated on only by highly trained and certified mechanics. Apple is famous for pushing in this direction with its hardware, focusing on sheer opacity for the sake of a streamlined user experience (and resulting in the backlash that is the Right to Repair movement). In the traditional creative process, the creator typically understands all or most of the elements of the process, even if only in principle. A painter understands how a paintbrush works (not least because a paintbrush is a totally transparent device—its physical shape discloses its function almost immediately). AI assistance is, on the other hand, about as black a box as can be. With many large models, while it’s clear how the model is trained, it’s not always possible to see why any particular input generates any particular output. Even the developers of the model cannot always look at the system and understand how it works. Making changes to the model is often a process simply of selecting new sorts of training data and hoping things work better. The AI-assisted creator is in no better position to understand any of this. They trade a formerly clear knowledge of the tools used in the creative process for the role of coaxing an inscrutable model into producing something akin to the desired result. People like my dad have lamented the loss of the ability to understand what is happening in a modern car engine. People like myself have lamented the loss of the ability to poke around inside an Apple computer, or make hardware upgrades. When we understand how things work, we feel empowered and capable. But I worry that we will face a significant loss of understanding and empowerment as traditional creative methods are replaced by the black box of AI.

There are no doubt more aspects of the AI-free creative process that we could highlight. The point I want to drive home is that the loss of all of these “side effects” of the traditional process would be consequential indeed, even if only for those engaged in the process. The benefits are obvious—easy production, less gatekeeping of traditional roles in art, the proliferation of more creative content in the world. I don’t mean to devalue these positive developments, and to a certain extent I take them for granted. The benefits are right there on the surface of the technologies as they are advertised to us, after all. It is the costs which are generally hidden, and thus need to be considered carefully!

Ultimately, we can allow my concerns to be evaluated empirically (so long as the goalposts of what counts as meaningful for human existence don’t get moved along the way). What actually happens when artists or other creatives spend their time employing AI assistants instead of engaging in the traditional creative process? How do they feel about their work? Do they feel that the skills developed involve the use of all of their faculties in a challenging and fulfilling way? Do they feel that the fundamental impulse to connect emotionally and intellectually with their audiences in an authentic way has been made more or less transparent? And so on.

The relationship between personhood and creativity

What if the “personhood” concern and the “creative process” concern are not two totally separate things, despite how I’ve presented them so far? What if human personhood and human creativity are more fundamentally linked? Despite the common aversion people have to self-identifying as “artists”, it appears to have been demonstrated that, at least in kindergarten, everyone is an artist.

I have, it’s true, a vested interest in seeing the world this way. I co-founded a startup with a mission to convince everyone that they are indeed an “artist”, and that the “quality” of their creative output depends more on the meaningful stories driving it than on their technical ability to cast those stories into traditional artistic modes. My favourite theology sees God’s act of creation not as the laying down of immovable strictures but as the invitation to us to continue that act of creation through our own creative acts. And so I will not attempt to argue in a purely disinterested way that human beings are essentially creative.

For one thing, to say that creativity is both necessary and sufficient for human personhood runs into a variety of philosophical problems. Some particular human beings might not evince much of what we judge to be creativity, after all. But it does seem evident that much of what we appreciate about the creative artifacts we recognize as such is the human meaning invested in them. Whether the artifact is a painting, a song, or a building, when we hear the human story that launched a particular creative process on its way, or indeed the story of the creative process itself (with all its accompanying challenges and accidents), we find a connection of meaning with the creator, via their artifact, that transcends the artifact’s particular physical or aesthetic form. Our aesthetic relationship with the products of human creativity therefore goes one step further than our relationship with, e.g., the beauty of the natural world. And so I take it for granted that human personhood is a necessary precondition for this heightened kind of aesthetic experience consisting in the connection of meaning established between the creator and the ones experiencing their creative artifact.

What kind of meaning lies behind a piece of AI-generated artwork? If we leave aside, as I have been doing, the possibility that AI systems are actually conscious agents, then it is hard to see how there is anything approaching “meaning” in their outputs. A connection of meaning can exist between human beings in a relatively transparent way given our shared evolutionary and historical trajectory, i.e., the fact that we can inhabit each others’ lived experiences to a significant degree. When a neural network produces a poem about the frustration of never-ending spring rains in Vancouver (something I would very much like to do at this moment), the poem does not result from any subjective experience of the rains in question, however much the AI feeds upon the tokens of people who have had those experiences. It is therefore a refracted representation of a set of tokens which were originally causally and meaningfully connected to spring rains, but that’s it. Nothing’s going on “inside” the AI which we might care about in the way we care about the backstory of a human creator.

If this is a true insight, then maybe we can propose a provisional “law of process aesthetics”, defined for now without rigour: creative artifacts are aesthetically valuable in proportion to the amount of human intention, effort, care, and meaning involved in their production, and to the extent these factors are known by the beholder. The two concerns about AI discussed above are thus particularly troubling, because when conjoined they do away entirely with the important aesthetic relation of meaningfulness. When I identify what is most essential about personhood with qualities that can be said to belong to AI systems, then I forfeit the ability to relate on the plane of shared human experience outside those “rational” qualities. And when I settle for an AI-generated prelude to my novella, I miss the opportunity to implant a discoverable and relatable meaning in my work (or at least meaning that flows from me as a person instead of from the global amalgam of personal artifacts that constitutes the training data for an LLM).
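
Stated semi-formally, purely as an illustration (the notation below is my own gloss, not something the essay defines):

```latex
% An illustrative gloss of the proposed law (my notation, not rigorous):
%   V_b(a): aesthetic value of artifact a for beholder b
%   H(a):   human intention, effort, care, and meaning invested in a
%   K_b(a): degree (from 0 to 1) to which b knows about that investment
V_b(a) \propto H(a) \cdot K_b(a)
```

On this gloss, a one-line prompt yields a low H(a) however beautiful the result, and a rich backstory hidden from the beholder yields a low K_b(a); the examples in the next two paragraphs fall out accordingly.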

One interesting corollary of the proposed “law” is that certain types of AI-assisted projects might be considered to be aesthetically valuable indeed. Imagine a scenario where a talented painter used something like ChatGPT to put pixels of specific colours in specific locations upon a canvas, as if they were talking to their paintbrush instead of manipulating it directly. A beautiful painting produced this way would exhibit a high degree of human intention, effort, care, and meaning, with potentially multiple layers of meaning given the choice to use a suboptimal tool in the production of the painting (or maybe the only available tool, if the artist did not have the use of their hands). In this case, the use of AI does not appear to diminish at all the human creativity and meaning involved, and I would not want to argue otherwise myself.
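
A toy sketch of this scenario (entirely my own construction; a trivial parser stands in for the AI, since in this thought experiment the intermediary contributes no compositional decisions of its own):

```python
# A toy sketch of the "talking paintbrush" scenario: the artist specifies
# every pixel in natural language; the intermediary merely transcribes.
# Requires the Pillow package.
import re
from PIL import Image

canvas = Image.new("RGB", (400, 400), "white")

def obey(command: str) -> None:
    """Apply a command like 'put #2a4d69 at 120, 340' to the canvas."""
    match = re.search(r"#([0-9a-fA-F]{6}) at (\d+)\s*,\s*(\d+)", command)
    if not match:
        return
    hexcode, x, y = match.group(1), int(match.group(2)), int(match.group(3))
    rgb = tuple(int(hexcode[i:i + 2], 16) for i in (0, 2, 4))
    canvas.putpixel((x, y), rgb)

# Every creative decision originates with the human; the assistant only executes.
obey("put #2a4d69 at 120, 340")
obey("put #d35400 at 121, 340")
canvas.save("painting.png")
```

Under the proposed law, the human intention, effort, care, and meaning here remain maximal, which is why this use of AI would not diminish the artifact’s aesthetic value.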

On the other hand, an AI painting generated by the prompt “Draw me a beautiful lake in the country in the style of Cezanne” would not count as aesthetically very valuable or meaningful on this scheme, regardless of the sheer visual beauty of the artifact. I take this to be a relatively uncontroversial outcome as well.

Living in an AI-assisted future

It’s time to descend from abstract and maybe tendentious thinking about aesthetics and human creativity and come back to the real world. Even if a number of people are swayed by arguments like the ones I’ve been making, the fact is that AI assistance is too powerful a promise for most people to refuse. Particularly in cases where the traditional creative process doesn’t feel all that fulfilling (like writing a paper for a class you don’t much care about), it might not seem that the cost of adopting AI is very high. To be honest, I’m not sure what to do about such cases, other than to make the personal and societal arguments from a place of vigorous and impassioned celebration of human creativity (i.e., to engage in “deictic discourse”), and hope they resonate at some level as a kind of virtue ethic. (I, for one, am not currently tempted to use an AI assistant even in mundane writing tasks, for all the reasons above, particularly because I want the things that are attached to my name to be authentic processions of me, not my intent filtered through an afterimage of all the words ever uploaded to the web. I leave open, of course, the possibility that use cases might arise for which my personal calculus changes!)

One of my hopes in all of this is a possible backlash to the looming flood of AI-created artifacts. When artifacts of sufficient quality are so easily produced, it may be that we start to value not so much the “quality” per se of the art but the human elements that make art meaningful and relatable, and the human practice and skill that we honour in the artifact. Until AI agents have lived experiences that approximate our own, the story behind a piece of AI-generated art is just not going to be as meaningful. I wonder if we will start to gravitate towards art that has an “AI-free” label because we know we are going to encounter something more “real”, the way a certain segment of the population prefers “hormone-free” and “additive-free” organic foods. In other words, we might add a new dimension to the concept of provenance of creative artifacts, and not just for high art but for everyday “knowledge worker” output as well. Even if this happens, I’m concerned that economic pressures will tend to make products with higher human involvement (like organic foods, or this putative 100% AI-free art) more expensive, with the concomitant reduction in access to goods which should arguably belong to everyone. (Update: a few days after posting this article, I saw an example of this in the wild!)

Regardless of how it shakes out economically, I do see the possibility of a natural reaction against commodified AI-generated artifacts on the horizon, which gives me hope that at least a significant minority will continue to value traditional, human-centred modes of creativity. This minority might even push more boldly into that territory than we currently see, rediscovering and carrying forward older or forgotten arts that were superseded by technological means. External economic support from a patron class or a government source would no doubt be necessary to make this whole story sustainable, in the way Albert Borgmann proposes a political affirmation of traditional modes of production over against the commoditizing force of technology.

I think the basic questions we all need to ask are, (1) “What makes human life meaningful and valuable?” and (2) “What does a use of AI look like which fully respects those aspects of human life?” Answering the first question will be difficult because it has always been difficult. Answering the second question will involve confronting some of the unintended side effects of AI use that might ultimately be detrimental to faculties that we prize highly in ourselves, like our creativity. And maybe this is one good outcome of the rise of AI: it might compel us to take the first question more seriously than we have so far!

Right now, sitting as we are in the full upswing of the AI hype cycle, it’s easy to focus on the promises, the benefits, and the technical legerdemain enacted for our amazement. But we should not lose sight of the fact that technology, if it is to be good, must always be subjugated to, and limited by, higher human values. Too often technological development becomes its own end (the “efficiency” fetish, per Ellul). We already see this with AI, where the AI “arms race” is being intentionally escalated by major companies. Hubris is dangerous enough, but when coupled with wilful myopia, it is a recipe for catastrophe. I want us to take a deep breath and think a little bit about what we really value in our lives, our selves, our communities, and whether the catapulting of AI into all of these areas is really going to make them better. I fully expect that, in a number of ways and dimensions, the answer will be “yes”! And in some areas I expect that it will be “no”. My hope, whatever we decide, is that we come to these answers after a process of settled discernment, not simply as a result of the conveyor-belt adoption of whatever the tech magnates have decided will make our lives easier, or their pockets heavier. (One also gets the sneaking suspicion that capitalist money-grabbing is not even the primary motivation in the frenzied development of AI. A decidedly religious zeal accompanies the pronouncements of the field’s top stars, and the hope with which their flocks look to “AGI” or “The Singularity” to solve all our problems makes old-fashioned Fundamentalism look almost thoughtful by comparison).

A Tolkien-inspired technology virtue ethic

As a huge Sci-Fi/Fantasy nerd, I can’t conclude without attempting some reference to an epic story, in this case, The Lord of the Rings. I was struck recently by something I noticed about the narrative in those novels which is instructive for our purposes. It’s been frequently observed that Sauron’s One Ring stands for the kind of power which is so overwhelming, and so tied to Sauron’s nature, that it can only corrupt. (Some have indeed wondered if it is a cipher for modern technology.) Anyone that comes into contact with the Ring is ultimately overcome by its power and turns to evil. The truly “good” characters in the narrative are those who reject the offer of the Ring entirely (Gandalf, Galadriel, or Faramir–the book version of him–for example, in contrast to Boromir or even Frodo, who ultimately succumbs). So far, this could all be cast as a “substantivist” critique of modern technology: technology, like the Ring, has its own internal “force” that ultimately determines the outcome, regardless of the intentions of the mortal wielder. We don’t generally believe this hard-line take on technology, which appears to have a nature much more complex than either a simple instrumentalist or substantivist view can account for.

Interestingly, there’s a good deal more nuance implicit in the narrative world, which I only just recognized: Gandalf, Galadriel, and Elrond are all ring wielders themselves, bearing the three Elvish rings forged around the same time as the One Ring. Galadriel may have said “no” to the tempting possibilities of the One Ring, but she didn’t say “no” to rings of power in general, and was happy to make use of Nenya (her ring) for certain, more constrained, purposes (such as the sustaining of Lothlórien and its people). I think we have here a more realistic philosophy of technology, more or less dependent on a kind of virtue ethic, that can be useful for us in the current situation. Some technology may have the morally overwhelming force of the One Ring, and to such technologies the appropriate response is humility, the recognition that I, at least, will not be able to wield such power responsibly without dire consequence. But other technologies might be suitable for me.

Which technologies are which depends on a variety of factors, not least my own character. (Boromir, for example, would probably not have been trustworthy to wield one of the Elvish rings, either. And we are certainly led to assume that Saruman is not worthy of whatever ring it is that he apparently acquired along the way.) We can extrapolate this point to the level of society as well. Which technologies we allow to run free in our society depends jointly on the nature of the technologies and on the level of maturity we generally exhibit in society in relation to those technologies. If we want to take advantage of higher-powered and more consequential technologies, then perhaps we must first focus on asking which practices will generate the kind of character, at the level of both individual and society, that can safely engage with those technologies. We already encode this way of treating technology to a certain degree legally, for example with minimum drinking ages or driving tests. Until we can figure out the appropriate methods for developing and confirming character vis-à-vis other consequential technologies—and maybe forever in the case of some technologies—we are best off emulating Galadriel, who was content to allow her own diminishment rather than become something more powerful, yes, but ultimately less good.

(Many thanks to Matt Civico, Justin Smith, Adam Graber, and Nick Bott, who reviewed drafts of this essay and provided helpful feedback, insights, examples, and in some cases, copyediting! All views and errors are my own, etc…, etc…)

By Jonathan Lipps

Jonathan worked as a programmer in tech startups for several decades, but is also passionate about all kinds of creative pursuits and academic discussion. Jonathan has master’s degrees in philosophy and linguistics, from Stanford and Oxford respectively, and is working on another in theology. An American-Canadian, he lives in Vancouver, BC and has way too many hobbies.
