Artificial Intelligence in the Arts and Emerging Policies

Generated with KREA AI.

“Some people worry that artificial intelligence will make us feel inferior, but then, anybody in his right mind should have an inferiority complex every time he looks at a flower.”
–Alan Kay

There’s an ineffable complexity that surfaces when we try to understand the world, advance technology, and develop frameworks that optimize our lives and work. It’s no different with the technologies of Artificial Intelligence (AI): we butt up against glass ceilings we seek to break and strain to understand the technology in human terms. Today, AI increasingly permeates global culture and art, raising philosophical as well as political issues and necessitating viable solutions for arts policy. This essay addresses the historical roots of AI, common societal concerns, ethics, concerns specific to art, and the laws and policies moving to meet the technology and create safeguards.

Origins: A Brief History of AI

Human beings have sought since time immemorial to advance technology with the purpose of increasing the ease, longevity, and enjoyment of life. Although AI didn’t begin during the Industrial Revolution, we can look to the historical evidence of this time and see progress as a high human pursuit. From the 1793 invention of the cotton gin, which revolutionized the cotton industry, to Henry Ford’s moving assembly line of 1913, we see a past that has raced to optimize and innovate production and process. The Industrial Revolution illustrates the pursuit of technological advancement and its reshaping of culture and the way we work. Because of this revolution, we moved from “cottage industries”—small-scale manufacturing—and agriculture to mass-scale manufacturing and an urban society.

By 1935, we meet the father of AI, Alan Turing, then a fellow at King’s College, Cambridge. Turing was a British mathematician and logician who developed the conceptual framework for the modern computer in the mid-1930s. One of his most significant contributions was his cryptography and codebreaking work during World War II (WWII). Yet it was his 1936 concept of the Turing Machine and his 1950 proposal of the Turing Test that set the groundwork for AI research. With the Turing Machine, we see a demonstration of how computer programs run on a set of instructions or ingredients dictated by the programmer—much like a recipe you follow to bake a loaf of bread or, in this case, to run a computer program. The Turing Test, which came later, is an experiment to test whether individuals can tell the difference between AI and a human. Today, this test is considered a benchmark for measuring the success of AI research. Turing’s groundbreaking work in computer science and AI laid the foundation, and by the 1950s all the key ingredients for the modern computer had been developed.
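Turing’s “recipe” idea can be made concrete with a toy simulation. The sketch below is an illustrative Python example (not drawn from this essay’s sources): a tape, a read/write head, and a table of rules that fully determines the machine’s behavior—here, a recipe that flips every binary digit on the tape.

```python
# A toy Turing machine: a tape, a read/write head, and a table of rules
# ("the recipe") that fully determines the machine's behavior.

def run_turing_machine(tape, rules, state="start"):
    """Follow the rule table step by step until the machine halts."""
    tape = list(tape)
    head = 0
    while state != "halt":
        # Read the symbol under the head; past the tape's end is a blank "_".
        symbol = tape[head] if head < len(tape) else "_"
        write, move, state = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += 1 if move == "R" else -1
    return "".join(tape).rstrip("_")

# Rule table: (state, symbol read) -> (symbol to write, move, next state).
# This particular "recipe" inverts each binary digit, then halts at the blank.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("10110", rules))  # prints 01001
```

Swap in a different rule table and the same machinery computes something else entirely; that substitutability is precisely the point of Turing’s universal model.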

During the 1950s to 1970s, the “golden age” of AI was bustling and thriving. It was during this time, in 1956, that the American academic and Dartmouth College professor John McCarthy organized a summer workshop about “thinking machines” and coined the term “artificial intelligence.” McCarthy also founded the Stanford Artificial Intelligence Laboratory in 1965 to study machine intelligence, autonomous vehicles, and graphic computing.

McCarthy’s Stanford Artificial Intelligence Laboratory would be a pivotal space for AI advancement, and in 1966 the nearby SRI (then the Stanford Research Institute) created SHAKEY the robot, the first machine able to reason about its own actions. SHAKEY could plan, find routes, and rearrange simple objects. Life magazine famously dubbed SHAKEY the “first electronic person” in 1970. During the same period, SHRDLU—a natural-language-understanding computer program developed by Terry Winograd between 1968 and 1970—was closely tied to AI research efforts. SHRDLU mattered for AI because a user could carry on a conversation with the program.

Unfortunately, by 1974, AI would begin experiencing a series of boom-and-bust cycles with decreases in funding. Below, Google’s Ngram chart displays the peaks and valleys of interest in AI research and advancement. It reveals a peak in discourse during the 1980s that rapidly dropped in the 1990s.

By the 2010s, however, AI experienced a resurgence of interest and now holds more public and academic attention than the World Wide Web did in the 1990s. Today, AI is applied across numerous industries including education, science, commerce, agriculture, healthcare, entertainment, media, and the arts. With this AI flood come common concerns about what its infiltration into our lives means.

Will Robots Take Over? Common Concerns Regarding AI

Before diving into specific concerns, it’s relevant to address the sensationalism around AI and Ray Kurzweil’s work “The Singularity Is Near.” Science fiction and news stories leverage titillating headlines that grab viewers and readers, playing up how robots will take over the world. Digesting these dramatized, fear-inducing works causes anxiety about what AI means. In his 2005 book “The Singularity Is Near,” Kurzweil claims that computer intelligence will swiftly surpass that of humans. The issue with Kurzweil’s argument is that it hinges on advances in hardware rather than in the machine learning (ML) actually associated with intelligence. Computer hardware is not itself linked to intelligence, and AI software, due to its computational complexity, evolves at a slower pace than hardware. This computational complexity makes it difficult to create a level of intelligence that could “enslave” humanity.

With the “terminator narrative” out of the way, we can examine what might actually go wrong. At the forefront of societal concerns are changes to the nature of work. To understand this, we loop back to the historical relationship between technology and work addressed earlier with the Industrial Revolution. Between 1760 and 1840, the Industrial Revolution shifted labor from agriculture to manufacturing; that balance shifted again in the 1970s and 1980s. During this later period, globalization moved manufacturing offshore to countries such as China, undermining the economic viability of “boom towns.” At the same time, the newly developed microprocessor offset these economic effects by creating more jobs than were destroyed. Looking at the historical record of technology and work, it’s inevitable that AI will change how we work; the question is primarily how that change will impact our world.

In a study at the University of Oxford, Carl Frey and Michael Osborne found that 47% of U.S. employment—across the 702 occupations they examined—is susceptible to automation by AI. Frey and Osborne delineated three buckets of occupations: high-risk, low-risk, and additionally reduced-risk. High-risk occupations include telemarketers, hand sewers, insurance underwriters, data entry clerks, telephone operators, salespeople, engravers, and cashiers. Low-risk occupations include therapists, dentists, counselors, physicians, surgeons, and teachers. Finally, occupations with even less risk are those involving a substantial degree of creativity, strong social skills, or degrees of dexterity that would be challenging to automate. Occupations requiring substantial creativity include those in the arts, media, and science. Robust social skills—understanding and managing the subtleties of human interactions and relationships—would also be difficult to automate. Lastly, in terms of dexterity, the detailed work of plumbers, carpenters, and electricians would also be a stretch, at this time, to automate.

Beyond changes to the nature of work, another common concern about AI is technological abuse, which includes algorithmic bias, diversity, and fake news issues. Kate Crawford defined two forms of algorithmic harm: allocative harm and representational harm. Allocative harm occurs when a group is denied or favored with respect to a specific resource—for instance, when an AI banking system determines whether a potential client is a good candidate for a loan. Representational harm occurs when a system creates or reinforces stereotypes or prejudices. Google experienced representational harm in 2015 when its photo classification system labeled Black people as gorillas. Representational harm can also be introduced through biases in the data fed to an AI program.

Diversity also poses a problem on the AI front. AI research has historically been dominated by white men, who still make up the majority of its researchers today. This lack of diversity can be off-putting to potential female or BIPOC scientists and deny the field valuable talent. It also prompts a philosophical question: if our AI is developed predominantly by men, is AI male, embodying a single worldview?

Fake news, defined as “false, misleading information presented as fact,” is the last form of technological abuse we’ll explore. According to an article by the Department of Homeland Security, one of its concerns is AI deep fakes, which are “synthetic media, [that] utilize a form of artificial intelligence/machine learning (AI/ML) to create believable, realistic videos, pictures, audio, and text of events which never happened.” Deep fakes are a form of disinformation disseminated into the media that can harm the person who is the subject of the deep fake as well as those misled by it. AI also plays a role in how misinformation spreads, because it decides what to show you based on your “likes,” comments, links, and whom you follow on digital media platforms. Between deep fakes and recommendation algorithms, AI can spread fake news and sculpt a biased perception of the world.

Can AI Be Considered Art? Concerns Around AI and Art

Art, in its copious forms, has shaped culture through its visual communication of ideas, values, and themes; it is arguably one of the most powerful modes of cultural shaping alongside, possibly, religion. The creative industries, like other industries, have not been immune to technological advancement. Generative AI—programs used to generate pictures, visual aids, and/or “art”—analyzes substantial amounts of input data to find patterns in the information or visuals fed into the program, assigning labels such as “face” or “landscape.” In the digital art world, generative AI programs such as Midjourney, ChatGPT, Adobe Firefly, and even Photoshop’s built-in generative features have begun to take the stage. Having examined the foundations of AI and its common concerns, we ask ourselves: can AI-generated imagery be considered art? This section looks at AI art in relation to common definitions of creativity and artistic value, types of autonomy, types of creative thinking, authorship issues, and what all of this means for creators.

Common Definitions of Creativity and Artistic Value

Aaron Hertzmann, principal scientist at Adobe Research in San Francisco, describes computational images as artworks produced by human procedure or instruction, making the human the author of the created work. He also suggests that no software can be an artist and that art is ultimately a social activity. Hertzmann’s view of creativity assumes that AI output can be considered art, but that AI cannot be the artist. This contrasts with programs such as Midjourney, whose terms and conditions dictate that no AI-generated artwork on the platform can be copyrighted or claimed under sole ownership. This raises questions and debates around authorship, discussed in a later section.

In terms of artistic versus aesthetic value, we look to the British art critic John Berger and the British philosopher Paul Crowther. Through direct quotation, we examine what art means to each of them.

John Berger writes:

“An artist has seen or experienced something which he or she has managed to render in such a way that the work acquires a life of its own. It carries on a dialogue with the artist during the creative work and later with spectators from different times and places.”

We might surmise Berger’s meaning as the connection between the artist, their instrument, and the direct production of art through their hand—the soul of the artist being infused into the paint, the clay, or the pencil.

Paul Crowther writes:

“An original work presents a way of seeing the world that has not been done before.”

With Crowther, we can assume that art is defined by the originality with which we portray the world. Under Berger’s and Crowther’s definitions, AI imagery may not be considered art, possibly due to a lack of connection between artist and medium, and to the question of whether a composite of multiple data points combined and reshaped through AI constitutes an artwork.

Autonomy and Creative Thinking

What is freedom, and is AI free-thinking and creatively autonomous? Margaret Boden’s work on the freedom of AI programs holds that a program’s freedom is determined by the programmer or software engineer so long as there is no mechanism for the program to change itself. By understanding autonomy and creative thinking, we can better grasp the artistic authorship issues of AI.

Boden defines two types of autonomy: self-organizing (non-human) autonomy and autonomy as freedom. Self-organizing autonomy explains how programs might adapt and learn. Human autonomy, by contrast, is exercised when people choose differently in various scenarios based on plans, motivations, and reasoning about alternatives, consequences, and probabilities. The second type, autonomy as freedom, is this mentally intentional sense of freedom, which requires reasoning, planning, and motivation. Boden’s argument is that AI’s self-organizing autonomy is neither necessary nor sufficient for the creation of art, and that human freedom is required for the creation of artworks.

Autonomy is also linked to the types of thinking that occur when creative works are produced. Here we examine AI thinking and human thinking and their relevance to the creation of art. AI can be looked at through two modes of thinking: analytical and instrumental. Analytical thinking is an AI program’s ability to quickly analyze large datasets and find patterns in them; the discovery of these patterns is used in planning, prediction, and control. Instrumental thinking is the second mode, which uses precise instructions to clarify goals, obstacles, and tasks so that they can be solved quickly and accurately.

When we examine human thinking, we see hermeneutic, empathetic, and critical thinking. Hermeneutic thinking requires the interpretation and understanding of the meaning and significance of signs and situations; it is often related to existential human problems such as suffering, sorrow, loneliness, and death, which are also themes in art. Empathetic thinking is essential to care and care ethics, and critical thinking involves identifying assumptions and finding knowledge gaps.

AI and human thinking differ in ways critical to defining what art is and who the artist is. Although AI is invaluable for problem-solving that requires fast, reliable information, it is insufficient for problem-solving that involves nuanced interpretation and understanding of situations, persons, or actions. It can then be suggested that human hermeneutic and empathetic thinking are essential to the creation of art, and that without them, AI art can lack depth and a human dimension no matter how aesthetically pleasing it may appear.

Authorship and What It Means for Creators

We’ve reviewed the basic premises of creativity, autonomy, and thinking. With these definitions in hand, it’s necessary to ask what is required for authorship of AI-generated artworks. One method of resolving authorship disputes is to apply the Vancouver Recommendations, which stipulate criteria for authorship. The criteria state that for authorship, the following must be present: substantial contribution to the conception of the work; drafting and critically revising the work; final approval by the author of the version to be published; and an agreement to be accountable for all aspects of the work. Thus, if a human artist uses AI to generate an image with the intention of creating art, authorship might be attributed to the human artist and not the program.

Bestowing authorship on the AI program itself would carry moral implications, resting on the assumption that AI is sentient and has a consciousness on par with that of humans. According to Walter Glannon, “For many, the capacity for consciousness is an essential property of being a person.” To attribute authorship to an AI program would imply that AI has the properties of a person; materially, it doesn’t, and thereby can’t have moral status.

If the human individual is using AI as a tool to create art, can the result still be considered art? Here the deeper issues around AI and art surface. First, artists have historically worked “in a tradition against a tradition…they borrow and transform,” learning to create a representation of the world from their predecessors. Creators are also inspired by their environment—nature, sounds, relationships, discussions, and feelings—something a generative adversarial network (GAN) is incapable of. Although artists, like AI, borrow and transform other materials, can AI output be considered art if it is related not to one person’s experience of the world but to a pool of many?

Let’s look at this from another angle: technology has nearly always affected the arts, through advances such as the printing press, photography, and computer-aided graphic design. Seen in this light, it may be necessary for creators to release some of their opposition to AI and view it as a collaborator in the creative process. AI can also be viewed as an extension of us rather than as the end product of a single text prompt fed into a machine intelligence program. The technology can optimize the creative process and allow us to explore abundant amounts of data more quickly. Again, AI is not a result but a process of engagement and inspiration when used with ethical integrity.

Creative Examples and Statistics: Robots Vs. Michelangelo

It would be remiss not to discuss the genius of Michelangelo’s marble sculptures alongside the recent advancement of robots sculpting the same Italian marble. Much of this paper discusses generative AI, but it’s also worth examining an example of fine art using technology to create artworks. Carrara marble, procured from Tuscany’s Apuan Alps, has supplied artists with raw material since ancient Roman times. In 1497, Michelangelo—then twenty-two years old—came to these mountains to find the ideal piece of marble for his Pietà. He would chisel and chip away at a block of marble until “the figure revealed itself.”

Today, the company Litix, founded by Filippo Tincolini, puts robots to work on Carrara marble to create sculptures. A trained Carrara sculptor, Tincolini saw an opportunity to take the process of using diamond-beaded band saws and pneumatic chisels a step further: a 3D scan of an artist’s sculpture is used to program the robots to carve it. After a robot completes 99% of a sculpture, its finishing details are done by hand to fine-tune any imperfections.

With Litix taking on commissions of artists, architects, and designers, we see an example of the scope of how technology in art is reshaping our approach to its creative process with the intent to optimize it.

How do artists and the public feel about art generated by technology?

AI and technology in the arts have produced mixed feelings among both artists and the public. At one end of the spectrum is the sale of the AI-generated artwork “Portrait of Edmond Belamy,” created by the art collective Obvious using a generative adversarial network and sold at a Christie’s auction in 2018 for $432,000.

At the other end of the spectrum is varied evidence from data collected by the Academy of Animated Art. One data point, from PsyPost and YouGov, is that 56% of those who have seen AI-generated artworks say they enjoy them but, when judging between human and AI art, prefer the human-created work. Another data point, from Playform, is that 65% of artists have used text-to-image technology to expand their ideas. Together, these figures illustrate the spectrum of value placed on generative AI art and on the use of the technology.

Emerging Policies and Ethics of AI

Through an examination of history, concerns, and statistics, it’s critical for policymakers to meet these challenges, concerns, and ethical dilemmas regarding AI. This section discusses proposed ethics for AI as well as policies emerging to meet them.

Ethical AI

Formal ethical examinations of AI began in California between 2015 and 2017, when a group of AI scientists developed the twenty-three Asilomar AI Principles. To name a few: principle one states that the goal of AI research should be to create beneficial intelligence; principle six says that AI systems should be safe and secure; and principle twelve holds that “people should have the right to access, manage, and control data that relates to them.”

Virginia Dignum has proposed three core elements for ethical AI: accountability, responsibility, and transparency. Accountability, according to Dignum, means that when AI makes a decision that significantly affects an individual, that individual has a right to an explanation of the decision. Responsibility requires that it be clear who is responsible for a decision—for example, artists being responsible for analyzing how the datasets behind their AI-generated works could be misinterpreted. Dignum’s last element is transparency: the data a system uses should be made clear and available to the public. For artists, transparency would mean accurately representing the roles of machine and person in the creation of an artwork.

These ethical dictates from the Asilomar Principles and Virginia Dignum can become a solid groundwork for the development of laws and policies to manage and govern issues surrounding AI. By defining the ethics, we define the why of arts policy for AI.

Copyright and Fair Use

Copyright is an essential arts and cultural policy rooted in the framework of freedom of expression and the creation of art. It was also the first international cultural policy implemented at the Berne Convention of 1886 to protect the moral rights of human artists. These moral rights encompass the right of attribution, right to anonymity, and the right to preserve the integrity of a work.

Regarding AI, the United States is considering the Generative AI Copyright Disclosure Act in response to challenges around the fair use doctrine. The fair use doctrine allows limited use of copyrighted materials without permission and is the argument most often leveraged by AI developers. The scale of AI’s use of copyrighted materials, however, is prompting the government to create clearer guidelines. The Generative AI Copyright Disclosure Act would require developers to file with the Copyright Office a complete list of the copyrighted works used in an AI model’s training. Although AI companies have defended the practice as “fair use,” most individuals in the creative industries deem it copyright infringement. According to ASCAP CEO Elizabeth Matthews, the act would ensure that the use of copyrighted works in training artificial intelligence provides due compensation to creators and that the law puts humans first.

Now, let’s return to a question raised earlier: can AI artworks be copyrighted? Under U.S. law, there are three requirements for copyright: fixation, originality, and human authorship. Fixation, per Section 101 of the Copyright Act, means a work is permanent enough to be “perceived, reproduced, or communicated for a period” that isn’t temporary or fleeting; a painting meets the fixation criterion, whereas a bowl of spaghetti does not. Originality requires independent creation and creativity, or possessing “some creative spark.” The last requirement, human authorship, is the current dilemma for artists creating AI art. Even though human authorship is not mentioned in the Copyright Act, it is the position of the U.S. Copyright Office to register works by human authors and not by AI.

The copyright dispute around the graphic novel Zarya of the Dawn is one such example. In 2023 the work was granted limited copyright protection covering the arrangement of text and images, but not the images themselves, which were generated in Midjourney by Kris Kashtanova. The larger question hovering within the government’s decision is whether Kashtanova’s interactions with Midjourney were sufficient to warrant the images’ status as independent creative work.

Conclusion: Big Philosophical Questions

Arts law and policy coax us to examine thoroughly the larger implications of AI for the creative economy and its workforce. How do we integrate this budding technology while simultaneously protecting the moral rights of artists? It’s not just a political question but a philosophical one, floating in the space between possible answers about the nature of consciousness and creativity. Is AI another paintbrush the human artist holds between their fingertips, or is it the program weaving data-driven images together that creates the artwork? We are navigating a complex web of nuances, with rapid-fire issues rising from the depths like a Loch Ness monster rearing its head for us to address promptly. This paper is a broad brushstroke on AI, intended to raise awareness of its origins, to pose thought-provoking questions about creativity and authorship, and to show how AI is shaping today’s policies and laws. There will likely always be artists who work in traditional mediums but, much as with the inception of the camera and photography, there will be artists who explore the possibilities of AI technology in the creation of their artworks. We can also expect AI to invariably shift artistic standards, aesthetics, and how we see the world and our cultures. Ultimately, throughout this journey, we will need to be prepared to “problematize” the use of AI as it evolves.


Sources

Analla, T. (2023, March 6). Zarya of the Dawn: How AI is Changing the Landscape of Copyright Protection. Harvard Journal of Law & Technology. https://jolt.law.harvard.edu/digest/zarya-of-the-dawn-how-ai-is-changing-the-landscape-of-copyright-protection  

Armstrong, H., & Dixon, K. D. (2021). Big Data, Big Design: Why Designers Should Care About Artificial Intelligence. Princeton Architectural Press.

Bennington-Castro, J. (2024, May 30). Alan Turing: Biography, code breaking, Computer & Death. History.com. https://www.history.com/topics/world-war-ii/alan-turing  

Caldwell, M. (2023, December 11). What Is an “Author”? Copyright Authorship of AI Art Through a Philosophical Lens. Houston Law Review. https://houstonlawreview.org/article/92132-what-is-an-author-copyright-authorship-of-ai-art-through-a-philosophical-lens  

CHM. (2024, June 19). John McCarthy. https://computerhistory.org/profile/john-mccarthy/  

Department of Homeland Security. (n.d.). Increasing threat of Deepfake identities. https://www.dhs.gov/sites/default/files/publications/increasing_threats_of_deepfake_identities_0.pdf  

Haigh, T., Hazzan, O., & Geer, D. (2024, January 26). How the AI Boom Went Bust. Communications of the ACM. https://cacm.acm.org/opinion/how-the-ai-boom-went-bust/  

Harvard Business School. (n.d.). Local enterprise: The pre-industrial era - railroads and the transformation of capitalism: Harvard Business School. Local Enterprise: The Pre-Industrial Era. https://www.library.hbs.edu/hc/railroads/pre-industrial-era.html  

Hermeren, G. (2024). Cambridge Elements: Art and Artificial Intelligence. Cambridge University Press.

History.com Editors. (2018, August 21). Automobile History. History.com. https://www.history.com/topics/inventions/automobiles  

History.com Editors. (2023, March 27). Industrial revolution: Definition, inventions & dates ‑ history. History.com. https://www.history.com/topics/industrial-revolution/industrial-revolution  

SRI International. (2024, May 28). Shakey the Robot. https://www.sri.com/hoi/shakey-the-robot/  

Lim, D. (2024, July 10). Artists’ Rights in the Age of Generative AI. Georgetown Journal of International Affairs. https://gjia.georgetown.edu/2024/07/10/innovation-and-artists-rights-in-the-age-of-generative-ai/  

LLNL Science and Technology. (n.d.). The Birth of Artificial Intelligence (AI) research. https://st.llnl.gov/news/look-back/birth-artificial-intelligence-ai-research  

Smithsonian Magazine. (2023, December 1). Can Robots Replace Michelangelo? https://www.smithsonianmag.com/innovation/can-robots-replace-michelangelo-180983240/  

U.S. Copyright Office. (2023, November). Fair Use Index. https://www.copyright.gov/fair-use/  

Ramberg, B. T., Halvorsen, M., Førde, R., Ambur, O. H., & Borge, A. I. H. (2019, August 8). Co-authorship – a bone of contention. Tidsskrift for Den norske legeforening. https://tidsskriftet.no/en/2019/08/kronikk/co-authorship-bone-contention#:~:text=Box%201%20According%20to%20the,for%20important%20intellectual%20content%3B%20AND  

Robinson, K. (2024, April 9). New federal Bill could require disclosure of songs used in AI training. Billboard. https://www-billboard-com.cdn.ampproject.org/c/s/www.billboard.com/business/legal/federal-bill-ai-training-require-disclosure-songs-used-1235651089/amp/  

Rosenstein, C. (2018). Understanding Cultural Policy. Routledge.
