
Perceptions of AI Generated Images in Comparison to Human-made Artwork: Evidence from Eye-tracking

For context: this is my dissertation. My full undergraduate dissertation. I do plan on making a shorter version, however for now, this is for those interested in reading the full and proper thing. If you'd rather read the shorter version, sign up to my mailing list; I only send emails a maximum of once per month, and you'll get to read the shortened version when it comes out!

(also, given it is quite literally copy-pasted from the word document... I hope you are willing to forgive some flaws)





ABSTRACT

 

Though generative AI has become an increasingly popular topic in computer science, art, and society in general, there is very little psychological research exploring how the images it generates are perceived. In the present study, eye-tracking technology was used to study how 10 art and 10 psychology students from Aberystwyth University viewed a series of images: 25 pieces of artwork created by humans and 25 images generated by AI, presented in a random order. Data on aesthetic preference, pupil size, average fixation duration and fixation count were gathered and analysed using a mixed factorial ANOVA. Results showed that pupil size was significantly greater when viewing human art than when viewing AI generated images. However, although some non-significant trends emerged that may be worth exploring in future research, there were no significant differences between human artwork and AI generated images on any of the other variables.




Acknowledgements


I would like to thank my supervisor, Victoria Wright, for guiding me through this project. I would also like to thank the Aberystwyth School of Art for supporting my project and helping me to seek art student participants, and David ****** from the computer science department of Aberystwyth University, for providing code with which I could measure and control for the colour values of the image stimuli, and for helping me to understand the more technical computer science aspects of this project.





INTRODUCTION

 

Development of generative AI has been a major point of focus in art and computer science for the past few years; however, there has been little psychological research exploring how the images it produces are perceived, and how that perception might differ from that of images and artwork created by humans. The following study intends to fill this gap in the literature, develop understanding of our perceptions of AI generated images, and pave the way for future research on this relatively new topic.

The recent increase in popularity of tools using generative AI to generate new images has left many artists fearing for the future of their careers, alongside creating many concerns over their art being used without their consent to train these AIs (Oppenlaender et al, 2023a; Business Insider, 2023). This fear is not unfounded, as there are already examples of AI being used in place of real artists. For example, as far back as 2018, a “painting” created by AI was sold at auction for the first time, for over $400,000 (BBC, 2018), and the quality and accessibility of AI image generation has improved greatly since then. More recently, artists claim to have noticed fewer entry-level jobs in the industry, as companies have started using AI to replace certain creative positions in the development of games, films, etc. as a way of cutting costs (Business Insider, 2023). In 2022, an AI generated image even won a major fine arts competition, to much controversy (Gault, 2022).

Anantrasirichai & Bull (2021) reviewed how AI could be used in place of and alongside humans in creative industries, outlining examples in which this had already been done: creating stories and scriptwriting, making short films and trailers, designing aspects of video games, and writing articles based on structured data, among many others. As for the concern over the use of people’s art without consent, generative AI is trained using media collected from the web, most frequently without the original creators’ consent (Oppenlaender et al, 2023a). This has led to much criticism of generative AI, and to multiple court cases relating to copyright infringement. Getty Images, for example, raised a complaint against Stability AI for the use of over 12 million of their images, without consent, to train the AI (Getty Images (US), Inc. v. Stability AI, Inc., 2023).

Artists aren’t the only ones whose jobs are at risk because of recent developments in AI. In fact, Frey & Osborne (2017) estimated that around 47% of US employment is at high risk of automation in the next decade or two, and evidence from the OECD (2021) shows that exposure to AI and familiarity with its use is already affecting employability, particularly in ‘white-collar’ jobs. Some generative AI users even consider “prompt engineering”, the process of writing a prompt that leads the AI to produce an image matching what one is looking for, a creative skill in itself (Oppenlaender et al, 2023b). Even Google, and similarly other search engines, are threatened by the development of AI, as people can ask questions and receive a direct answer from an AI such as ChatGPT rather than sifting through masses of search results to find information (Grant & Metz, 2022). However, with all this in mind, advancements in technology, particularly AI, are incredibly difficult to predict. In 2016, Halal et al outlined predictions that by 2020 a $1,000 PC would equal the human brain in intelligence, and that development of AI would result in the elimination of almost half of present jobs by 2025. Of course, the prediction of PC intelligence did not come to pass, and as of 2024, though AI has automated and eliminated many jobs, the number is so far nowhere near half.

Public opinion on text-to-image generation was investigated by Oppenlaender et al (2023a), who asked attendees of a university research event to fill out an online survey about their technical understanding, thoughts on its ethics, personal experience with its use, and views on its societal impact. The main findings showed that 60% of participants did not understand how it worked, and most stated that it had no importance or use in their personal lives, yet they showed concern over its misuse and the problems it may cause in politics, as well as job loss, copyright, and even loss of creativity. The vast majority of participants noted its potential use in creative domains and believed it could threaten to replace creative professionals, while some considered its use in other domains such as entertainment, education, therapy, and accessibility. This study was limited in that its participants were all attending a research event, restricting the sample to generally higher-educated individuals, although the demographics questions did show that participants came from a variety of fields. Additionally, only 35 participants completed the survey, which is relatively low for an online survey, and more data from a more varied source could have improved its validity.

 

Since mainstream public use of generative AI is only a fairly recent phenomenon, there is little psychological research surrounding the concept, let alone research comparing human-made artwork and AI generated images. A large proportion of the limited research in the area explores the ability to distinguish AI generated images from photography, with a particular focus on faces. An early example is Mader et al (2017), a study which took place before text-to-image generation became quite as widespread, in which participants attempted to categorise a set of images into photos and AI-generated images, both before and after receiving training in fake-image identification. They found that participants could accurately distinguish real images from fake ones, with an even greater accuracy of 85-90% after training, which suggested that providing training would significantly increase the ability to identify fake images and could be crucial in avoiding misuse of AI. This study, however, is vulnerable to criticism of its temporal validity, as the technology has evolved greatly since 2017. AI images are of much higher quality now than they were then, and no more recent research on fake-image identification training was found. More recent research on general recognition of AI generated images (e.g. Hulzebosch et al, 2020), however, shows much lower identification accuracy. Lago et al (2021) showed this to an extreme level: participants were provided with faces from four categories, either real or generated by one of three different AIs, and identified real images as real with only 56% accuracy, whereas images generated by StyleGAN2 were correctly identified as AI only 26% of the time, suggesting that the faces generated by this AI were perceived as more “real” than the real images. Human accuracy in consciously distinguishing AI-generated images is currently so low that AIs are being developed to identify fake images instead (e.g. Gangan et al, 2022). Korshunov & Marcel (2020) found evidence that humans and computers were equally fooled by “deepfake” videos; however, in more recent research, Lu et al (2023) compared the fake-image detection accuracy of an AI with that of human participants viewing images from Fake2M, a fake-image dataset, finding a 38.7% failure rate in humans compared to a 13% failure rate for the AI.

Though the previously stated evidence clearly suggests that conscious identification of fake images is low, this still leaves the question of whether any differences are noticed subconsciously, and whether neural processing of fake images differs from that of real ones. This has been explored through a couple of EEG studies, both of which used real and AI-generated facial images as their primary stimuli. Moshel et al (2022) set out to explore potential differences in neural processing, using 25 fake images of faces, cars and bedrooms alongside real images drawn from the AI’s training data. Their study involved two experiments: a behavioural test, in which 200 participants filled out an online questionnaire asking them to identify whether images were real or fake, with each image presented both upright and inverted, and an EEG experiment, in which 27 participants were shown the series of images while wearing an EEG cap and were not informed until afterwards that AI had been used at all. In the EEG experiment the stimuli were presented in very rapid succession, at either 5Hz or 20Hz, so little time was given to look closely at the images. The behavioural testing showed that participants were able to accurately distinguish unrealistic AI-generated stimuli, but not realistic ones, from the real photos, and that orientation had very little impact on accuracy. The EEG experiment, however, showed that neural activity differed when participants were presented with the computer-generated images compared to the real ones, suggesting that even though conscious identification of fake images was low, as shown in the behavioural experiment, a difference between them was identified subconsciously and they were processed differently. Though this study was well controlled, using the training data as the “real image equivalent” of the computer-generated stimuli and using the same stimuli in both the behavioural testing and the EEG experiment, a potential limitation is that each experiment used different participants drawn from different sources. The participants of the behavioural testing were recruited from Amazon MTurk, whereas those of the EEG experiment were recruited from the University of Sydney, meaning the EEG participants may have been better able to consciously identify the computer-generated images. Individuals providing services on Amazon MTurk vary greatly, including many people without higher education, whereas the EEG participants were all gathered from a university, and so were all either undertaking or had completed higher education. No research was found exploring any association between education and accuracy in distinguishing AI generated images from real ones, however this is a major difference between the two participant groups, and if such an association exists, it might have heavily influenced the results. This could have been controlled for by asking the EEG participants to indicate, after the experiment, which images they thought were real. Additionally, images generated by multiple AIs could have been used, as the quality and style of different AIs vary significantly. Similar research was performed more recently by Tarchi et al (2023). Also an EEG experiment, this study presented 23 healthy participants with real photographic and “deepfaked” faces expressing positive, neutral, or negative facial expressions. In this study, participants were aware that they would be viewing a mix of computer-generated and real faces, and were asked to judge whether they thought each face was real while being presented with the stimuli, thereby addressing the criticism of the aforementioned experiment. Participants were able to consciously identify real faces from computer-generated ones with 77% accuracy; whether this disparity was due to a different participant group or different stimuli from past research is unknown, which suggests that more research in this area, controlling for more variables, is needed. Another possibility is that, although the quality of generative AI is improving, human exposure to its creations is also growing more frequent, so people may be becoming more familiar with generative AI and better at distinguishing real images from fake ones. This study did, however, back up the finding that neural processing of real faces differs from that of “deepfaked” faces, specifically finding more activation in the theta, alpha, high beta and gamma bands of the right hemisphere when viewing the real faces, and in the delta band in frontal and occipital areas with the computer-generated ones. These were the only studies found on the subconscious processing of computer-generated images in comparison to real photos, and none were found comparing them to human artwork, presenting a large gap in the literature. With the increasing concern for, and social interest in, the impact of fast-evolving generative AI on modern society, more research in this area is needed.

 

Although there is little research on perceptions of computer-generated images when participants are unaware of what they are, there is significantly more research on people’s opinions of “computer-generated artwork” when they are told it is such. This goes back as far as 2009, when the concern was not yet with generative AI, but with human creation of digital artwork. Kirk et al (2009) showed participants a series of images, labelled randomly as either ‘from an art gallery’ or ‘created by an experimenter on Photoshop’, and asked them to rate the images based on their aesthetic preference. Participants rated images much higher when they were labelled as being from an art gallery than when they were supposedly created by someone on Photoshop. These results may have been a product of their time, as digital art made by humans is a much more respected medium now than it was a decade ago, but they still show that bias against art made with modern technology was in place long before the common use of generative AI. One potential issue with this study is that it might not be an accurate measure of bias against the use of technology in art, as the ‘from an art gallery’ label suggests that the artwork has the support of an art critic, and participants could have been influenced by the implication that a professional must have given their praise. An alternative label could have been that the work was ‘created by a local artist’; this way the study could have measured more definitively whether the disparity in ratings resulted from the use of technology, rather than from reliance on a hypothetical ‘more qualified individual’s’ opinion. A more recent study by Chamberlin et al (2018) also explored bias when viewing computer-generated ‘artwork’, within a more modern context. This study involved two parts. The first was an online questionnaire, with participants recruited from a university mailing list, in which participants were shown 60 images and asked to rate them and to determine whether they were made by humans or AI. Each image was either abstract or representational, and either AI-generated or made by a human. Participants were randomly allocated into either a ‘rate first’ or a ‘categorise first’ group to counterbalance the study. The results showed that more images were categorised as human-made, that abstract images were more difficult to categorise accurately, and that images categorised as computer-generated were rated lower aesthetically, supporting the idea of an anti-AI bias. The second part of the study involved artwork created physically by robots in an exhibition, and split participants into three conditions: the first group watched the robots create the images, the second group were told that the images were created by robots, and the final group were not told anything about the images’ origins. Each group was asked to rate the images aesthetically. The second group rated the images significantly lower than the other two groups, supporting the suggested anti-AI bias, yet the first group, who watched the robots create the drawings, scored them highest. The robots all created the drawings in different ways (one drew hurriedly and was seen as ‘anxious’, another seemed more ‘relaxed’, etc.), giving them the appearance of a more human nature, and it was suggested that this anthropomorphisation of the robots made people more invested and interested in what they created. This suggestion is supported by past research by Riek et al (2009), who found that robots that looked more human were treated with more empathy when seen being mistreated, and therefore that anthropomorphisation mediates bias against computers. It might instead, however, link to Kruger et al’s (2004) findings that, when participants evaluated poems, paintings, and armour, an appearance of time and effort was associated with higher ratings; seeing the robots create the drawings and spend time on them may be the factor that led to the high ratings in the first condition. Either of these explanations, however, would lead to the expectation that generative AI, with no physical appearance and the ability to create images in an instant, would be subject to bias.

Ragot et al (2020) also performed a study exploring bias against the use of modern technology in art, but used images produced by generative AI in direct comparison with human artwork in an online survey of 565 participants. Participants were informed which images were human-made and which were generated by AI, and were asked to rate them on four dimensions: liking, beauty, novelty and meaning. The findings showed that participants significantly preferred the human artwork. The comparison would have been stronger had participants been randomly allocated into two conditions, one told which images were AI-generated and the other not informed until after the experiment; as it stands, however, the survey supports the evidence of bias against computer-generated ‘artwork’. Even more recently, a study with a much larger participant group of 1708 across four experiments, which controlled for this flaw, was performed by Millet et al (2023). This study found that the same artwork was significantly preferred when labelled as created by humans than when labelled as computer-generated, and that this effect was even stronger in those with stronger anthropocentric creativity beliefs.

There are a few studies and theories on the perceived value of art that might explain the overall bias towards human artwork, including the previously discussed findings of Kruger et al (2004) that greater time spent in something’s creation gives it a higher value. Hawley-Dolan & Winner (2011) and Snapper et al (2015), in very similar studies, provided evidence suggesting that the “mind” behind the art is visible and is what gives the art value. Both studies were performed in response to the common criticism of modern art that a child could create it; their findings can also be applied to computer-generated artwork, as they found that even the untrained eye preferred the abstract creations of an adult artist to visually similar creations of a child. From this, it was suggested that the thought process behind the artwork is evident, which could result in anti-AI bias, as AI images are created algorithmically rather than with any thought. Jucker et al (2014) also found results that would support the existence of this bias, finding that art appreciation involves recognition of the artist’s intentions, rather than just the work’s beauty. Another account of the value of artwork is that of Newman & Bloom (2012), who suggested that originality and creativity are both determinants of value in artwork. Given that generative AI is incapable of creating anything truly original, since it must use training data in order to form an image, this could also explain the anti-AI bias.

           

With all this in mind, there is a major gap in the literature relating to AI generated images and the viewing behaviour of individuals presented with them in comparison to human artwork, which could have important implications, particularly in creative domains. Artwork, and how people view it, is commonly explored through eye-tracking: Massaro et al (2012) used eye-tracking to investigate how people viewed paintings involving a human subject in comparison to landscapes, Savazzi et al (2014) used it to explore the viewing behaviour of adolescents compared to adults when viewing paintings, Cruc (2020) used it to examine the impact of a ‘vanishing point’ in an art piece on how it was looked at, and Mandolesi et al (2022) explored how non-experts explored the ‘Studiolo del Duca’, a historical room full of art. A study by Mitrovic et al (2020) explored the idea that ‘beauty demands longer looks’, testing whether, when viewing similar artworks, the one rated higher aesthetically would be looked at for longer. This concept had been explored before using many forms of stimuli, but Mitrovic et al wanted to see whether it still held when the stimuli could not be associated with any adapted preferences, such as the adapted preference for healthier-looking individuals. They used 50 pairs of simple abstract artworks from a pre-made dataset, created to look extremely similar, with minor differences that make one more aesthetically pleasing than the other. The study involved four parts: 1) participants viewed the images placed next to each other for 20 seconds, 2) they were asked which of the two they liked more, 3) the images were presented individually in a randomised order and participants rated how much they liked each on a scale of 1-7, and 4) they were asked to choose, from each pair, which they thought an expert would rate more highly. The last three parts primarily served as checks on each other and on the reliability of the rating systems; their results approximately matched, supporting that reliability. They found evidence to back up the initial idea that ‘beauty demands longer looks’, as participants tended to look longer at the artworks they then rated more highly. There was, however, a flaw in their participant group, as all participants were female psychology students, and a more varied group would have been more representative. Additionally, eye-tracking has been used to find evidence of associations between conscious preference in artwork and pupil size, with consciously reported aesthetic preference correlating with larger pupil size (Johnson et al, 2010).

Eye-tracking is also used to compare how groups with different levels of experience in various fields view relevant stimuli. Examples include Kundel et al (2008), who found different viewing behaviour in participants with experience in radiology compared to those without when shown mammography images; Bilalić et al (2011), who found different viewing behaviour in participants considered experts in chess compared to non-experts when shown a board of pieces from a chess game; and Perra et al (2022), who found reduced fixation durations in expert musicians viewing sheet music in comparison to non-expert musicians. Furthermore, this technique has been used to compare how individuals view artwork depending on differences in experience, in a study by Pihko et al (2011). Specifically, comparisons were made between the aesthetic judgements, emotional evaluations, gaze patterns and electrodermal activity of 20 art history experts and 20 laypersons viewing artwork at differing levels of abstraction. Focusing on the gaze patterns, the eye-tracking data revealed that, though both groups’ viewing patterns (the number and duration of fixations and the length of scan paths) varied with abstraction, there were clear overall differences between the art history experts and the laypersons, suggesting that the viewing behaviour of those with greater experience in the field of art differs from that of those with less experience when viewing artwork.

Despite the frequent use of eye-tracking to explore how people view artwork, and despite generative AI and its questionable use in creative occupations being a ‘hot topic’ in modern society, no past research was found using eye-tracking technology to explore participants’ viewing behaviour when presented with computer-generated images. There is also an extreme lack of research comparing human art with AI generated images intended to be used in its place, with most research instead comparing real photography with fake images or ‘deepfakes’. The discussed EEG studies by Moshel et al (2022) and Tarchi et al (2023) were both examples of this, having compared real and fake facial images; however, their evidence that neural processing of the two differs raises the question of whether viewing behaviour differs between the two as well, and whether this might also apply to human artwork and computer generated equivalents. The present study intends to look into these questions and explore subconscious differences in how human artwork and AI-generated equivalents are perceived, through participants’ eye-movements. Additionally, the present review of past literature uncovered differences in the viewing behaviour of individuals with differing expertise in the relevant field. Pihko et al (2011) showed that this was the case in artistic fields, with art history experts showing different viewing behaviour from laypersons when shown artwork. This suggests a potential difference in how experts and laypersons might view human artwork compared to computer generated images of a similar nature. Taking this into account, the present study gathered participants from two groups, art students and psychology students, to examine whether creative experience was a mediating factor. Overall, the present study intends to further the understanding of how we perceive AI generated images as opposed to human artwork. To do this, human-made art and AI-generated images of a similar nature will be shown to participants with differing experience in art while eye-tracking takes place, focusing on differences in pupil size, fixation duration and fixation count, as well as conscious ratings of preference. Since there is very limited past research on the topic, few research-backed predictions can be made; however, the EEG evidence that neural processing of AI generated images differs from that of real photography potentially suggests a difference in the way the images are viewed, and the past eye-tracking studies showing differences in the viewing behaviour of laypersons and experts in relevant fields suggest that artistic expertise and experience may act as a mediating factor in the present study.




METHOD

 

 

Participants

The participant group consisted of ten art or photography students and ten psychology students, aged between 18 and 38, at Aberystwyth University, recruited by different means. The psychology students were primarily recruited through the University’s SONA system and received SONA credits, which counted towards their grade, as an incentive. Since SONA is only in place for psychology students, art students could not be recruited in this way, and no incentive could be offered to them. The primary method of sampling for art student participants was opportunity sampling: students were recruited by directly asking individuals in the School of Art to take part. Volunteer sampling was also used to collect more participants in both groups. Recruitment flyers with details of the study [Appendix A], excluding the disclosure that AI was involved, were placed around the university campus, asking people to get in contact if they were interested in taking part. A message was also added to various university mailing lists [Appendix B], and some lecturers and the student rep in the art department sent emails to their students or peers respectively. The sample size was relatively low: an equal number of art and psychology students was wanted, and between the lack of incentives and the fact that the lab’s location was less convenient for art students to access, not many were willing to take part. A greater sample size would be preferable for future research.

 

Materials

Human art stimuli were gathered first, as the computer-generated images were based on them. They were found in the ‘daily deviation’ section of the art-sharing site DeviantArt, with the exclusion criterion that a piece could not be used if AI was involved in any part of its creation. A message was sent to each artist requesting consent to use their work in the study [Appendix C]. This message had to be crafted in a more casual and friendly manner, as many spam and/or scam messages appear on these types of sites, and a more formal message would have been more likely to be ignored. 25 pieces of artwork were collected. The AI counterparts to these images were generated with Bing Copilot (DALL-E 3, 2023), and their resolution was enhanced on the website ‘Waifu2x’ (Waifu2x, 2023), as images generated on the free version of Bing Copilot have a much lower resolution. Since this site uses AI to enhance images, it could not be used on the human art, so only human art of a sufficiently high resolution could be collected. The AI ‘art’ was created using prompts based on each of the human artworks, so each piece had a set AI counterpart [e.g. figure i]. All images generated by Bing Copilot are square, so they were cropped to make them less identifiable, with an attempt made to make the cropping look as natural as possible. The human images could not be cropped, as conscious thought was placed into the composition of the human artwork, and the same is not true of the AI images. Differences in the colour of the stimuli were also controlled for: the average hue, saturation and brightness were found for each image [Appendix D], as shown in figure iii, and then averaged for the AI generated and human artwork stimulus groups.
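As an illustration of the kind of colour measurement described above (this is not the actual script provided by the computer science department; the libraries and file names below are assumptions), the mean hue, saturation and brightness of each image could be computed along these lines:

```python
# A minimal sketch of estimating an image's average hue, saturation and
# brightness with Pillow and NumPy. Illustrative only; not the script
# actually used in the study.
from PIL import Image
import numpy as np

def mean_hsb(path):
    """Return mean hue (degrees), saturation (%) and brightness (%) of an image."""
    img = Image.open(path).convert("HSV")     # Pillow stores H, S, V as values in 0-255
    h, s, v = np.asarray(img, dtype=float).transpose(2, 0, 1)
    return (h.mean() / 255 * 360,             # hue rescaled to degrees (simple linear mean)
            s.mean() / 255 * 100,             # saturation as a percentage
            v.mean() / 255 * 100)             # brightness (value) as a percentage

# Average the per-image values within each stimulus group
# (file names here are purely hypothetical).
human = [mean_hsb(f"stimuli/human_{i:02d}.png") for i in range(1, 26)]
ai = [mean_hsb(f"stimuli/ai_{i:02d}.png") for i in range(1, 26)]
print("Human group means (H, S, B):", np.mean(human, axis=0))
print("AI group means (H, S, B):", np.mean(ai, axis=0))
```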

All stimuli were then placed on a plain black background, scaled to a similar size on this background, with the overall resolution equal to that of the monitor they would be shown on [figure ii]. The eye-tracker used was the Tobii TX300, and participants viewed the stimuli on a 1920x1080px monitor.
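As a rough sketch of this preparation step (again assuming Pillow; the file names and exact sizing choices are hypothetical, not the study's actual process), each stimulus could be centred on a black 1920x1080 canvas as follows:

```python
# A minimal sketch, assuming Pillow, of placing a stimulus on a plain black
# canvas matching the monitor resolution. The scaling choice here is illustrative.
from PIL import Image

def place_on_canvas(path, canvas_size=(1920, 1080)):
    canvas = Image.new("RGB", canvas_size, "black")          # plain black background
    img = Image.open(path).convert("RGB")
    # Shrink (if needed) so each stimulus occupies a similar area of the canvas,
    # preserving its original aspect ratio.
    img.thumbnail((canvas_size[0] // 2, canvas_size[1] - 100))
    offset = ((canvas_size[0] - img.width) // 2, (canvas_size[1] - img.height) // 2)
    canvas.paste(img, offset)
    return canvas

# Example usage with a hypothetical file name:
place_on_canvas("stimuli/human_01.png").save("prepared/human_01.png")
```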

 

 

              Human      AI
Hue           121.2      119.36
Saturation    30.04%     28.2%
Brightness    52.52%     62.4%

Figure iii – Mean colour values of stimuli groups


 

Procedure

After informed consent was given [Appendix E & F], participants were asked to sit at the desk, approximately 60cm away from the screen. They were told that they would be shown a series of images while their eye movements were tracked; the presence of AI generated images was not disclosed. When they were ready, the eye-tracker was calibrated, which involved the participants following a dot on the screen with their eyes.

The experiment itself was divided into two blocks. The first block was the eye-tracking study: participants were shown text explaining the first part of the experiment and instructing them to press the space bar when they were ready to continue. They were then shown all 50 images, each on screen for 7 seconds while their eye-movements were tracked. The images were shown in a random order to control for order effects. Between each image, a blank screen showing a small cross on the left side, which participants were instructed to look at, was displayed for 0.5 seconds; this was intended to reset their viewing patterns, since if one image were shown directly after another, the starting viewpoint for a given stimulus would be affected by the previous image. At the end of the first block, another text screen was shown, explaining how to perform the following task, and participants pressed the space bar when they were ready to proceed to the second block. In this block, the images were shown in a randomised order again, but eye-tracking data were not taken into account; participants were asked to rate the images on a scale of 1 to 5 using the keyboard.
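For illustration, the trial structure described above might look roughly like the following PsychoPy-style sketch. This is only an assumed reconstruction of how such a procedure could be scripted, not the software actually used with the Tobii TX300, and the file names are hypothetical; eye-tracker recording and data logging are omitted.

```python
# A simplified sketch of the two experimental blocks, written in a PsychoPy
# style purely for illustration.
import random
from psychopy import visual, core, event

win = visual.Window(size=(1920, 1080), color="black", units="pix", fullscr=True)
fixation = visual.TextStim(win, text="+", pos=(-800, 0), color="white")  # small cross on the left

# Hypothetical file names for the 25 human and 25 AI stimuli
images = [f"stimuli/human_{i:02d}.png" for i in range(1, 26)] + \
         [f"stimuli/ai_{i:02d}.png" for i in range(1, 26)]

# Block 1: free viewing, 7 s per image, 0.5 s fixation screen between images
random.shuffle(images)                                   # random order controls for order effects
for path in images:
    fixation.draw(); win.flip(); core.wait(0.5)          # reset the starting viewpoint
    visual.ImageStim(win, image=path).draw(); win.flip(); core.wait(7.0)

# Block 2: aesthetic ratings on a 1-5 scale, new random order, no eye-tracking analysis
random.shuffle(images)
ratings = {}
for path in images:
    visual.ImageStim(win, image=path).draw(); win.flip()
    key = event.waitKeys(keyList=["1", "2", "3", "4", "5"])[0]
    ratings[path] = int(key)

win.close()
```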

After the experiment was completed, participants were provided with a debrief sheet, explaining that half of the images were generated by AI [Appendix G]. They were also provided with a small information booklet showing which stimuli were AI generated and providing the artist credit for the human artwork [Appendix H].

 

Ethical considerations

As mentioned, many artists fear the loss of their jobs because of generative AI, which can make this a very sensitive topic. Additionally, the knowledge that they had potentially been fooled by certain AI ‘art’ could have caused participants stress relating to how difficult it is to distinguish between real and fake artwork. Because of this, participants were reminded that they could withdraw their data at any point over the following 2 weeks, and the debrief sheet included details of where they could access student support services if the experiment caused them any stress.

The use of deception in this study was crucial, as one of the flaws of past research that the study intended to control for was the bias against AI when participants are given knowledge of its use. The past literature has already shown a bias against images created by AI when participants are aware of its presence (Chamberlin et al, 2018; Ragot et al, 2020; Millet et al, 2023), so preventing participants from knowing that AI generated images were present in the stimuli was important for avoiding this confounding variable. Because of this, participants were also asked not to discuss the true nature of the study outside of the lab, in case word spread to another student who was intending to participate. The use of deception, and its potential impacts, were taken into consideration when creating the debrief sheet provided to participants at the end of the experiment.



RESULTS

 

Data were drawn from the eye-tracker on pupil size (mm), average fixation duration (seconds) and fixation count for both psychology and art students viewing human artwork and AI generated images, alongside the rating data, measured on a 5-point Likert scale. A 2x2 mixed factorial analysis of variance (ANOVA) was performed for each variable [Appendix I]. Descriptive statistics were also calculated and are shown in the tables below.
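As an illustration of this analysis, a 2x2 mixed ANOVA of the kind described could be run as in the sketch below, assuming long-format data and the Python pingouin library (the column and file names are hypothetical; the study's own analysis may have been run in other statistics software).

```python
# A minimal sketch of a 2x2 mixed factorial ANOVA on the pupil-size data,
# assuming a long-format CSV with one row per participant per image type.
import pandas as pd
import pingouin as pg

# Expected (hypothetical) columns:
# participant | group ("art" / "psychology") | image_type ("human" / "ai") | pupil_mm
df = pd.read_csv("pupil_size_long.csv")

aov = pg.mixed_anova(data=df,
                     dv="pupil_mm",          # dependent variable
                     within="image_type",    # repeated-measures factor
                     between="group",        # between-subjects factor
                     subject="participant")
print(aov)  # main effects of image_type and group, plus their interaction
```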

 

 

 

Psychology students

         Pupil size (mm)    Fixation duration (sec)    Fixation count      Rating
         M       SD         M       SD                 M        SD         M       SD
Human    3.31    .48        .24     .03                23.10    1.52       3.32    .52
AI       3.27    .46        .26     .02                23.20    .92        3.28    .40


Art students

         Pupil size (mm)    Fixation duration (sec)    Fixation count      Rating
         M       SD         M       SD                 M        SD         M       SD
Human    3.39    .40        .27     .05                23.00    3.46       3.25    .47
AI       3.34    .43        .27     .06                23.10    3.76       3.33    .42

 

 

Eye tracking data


Pupil size


There was a significant main effect of image type on pupil size, F(1, 18) = 18.625, MSE = .21, p < .001. Participants’ pupil size, measured in millimetres, was significantly greater in response to human-made artwork (M = 3.348) than when shown AI generated equivalents (M = 3.302).

No significant difference was found between the pupil size of art students and that of psychology students, F(1, 18) = .128, MSE = .05, p > .05. The pupil size of the psychology students (M = 3.29) was lower than that of the art students (M = 3.36); however, this difference was not statistically significant, perhaps because it was a between-subjects comparison and therefore based on fewer data points per condition.

There was also no significant evidence of an interaction between artistic experience and image type regarding pupil size, F(1, 18) = .148, p > .05; hence there is no evidence that the difference in pupil size when viewing AI images compared to human artwork is mediated by one’s artistic experience.

 

Average fixation duration

 

There was no significant effect of image type on average fixation duration, F(1, 18) = .124, MSE = 1.44, p = .729. The average fixation duration when viewing AI generated images (M = .226) was almost identical to, and only marginally greater than, that when viewing human artwork (M = .225), which is nowhere near sufficient evidence to suggest any difference between the two.

There was also no significant difference between the average fixation duration of art students and psychology students, F(1, 18) = .112, MSE = .000, p = .742. The average fixation duration of psychology students (M = .263) was slightly lower than that of art students (M = .269); however, this is once again not a statistically significant difference.

            Finally, there was no significant evidence of interaction between artistic experience and image type regarding average fixation duration, F(1, 18) = .311, p > .05.


Number of fixations

 

            There was no significant main effect of image type on the average number of fixations, F(1, 18) = .228, MSE = .100, p = .639. The average number of fixations when viewing AI generated images (M = 23.150) was again very similar to, but slightly greater than, the number of fixations when viewing human artwork (M = 23.050); this is insufficient evidence to suggest a difference between the two.

There was also no significant difference between the average number of fixations of art students and psychology students, F(1, 18) = .007, MSE = .100, p = .934. The average number of fixations of psychology students (M = 23.150) was slightly greater than that of art students (M = 23.050); however, this is once again not a statistically significant difference.

Finally, there was no significant evidence of an interaction between artistic experience and image type regarding the number of fixations, F(1, 18) = .000, p > .05.

 

 

Aesthetic preference ratings


Rating data


There was no significant main effect of image type on the participants’ aesthetic preferences, F(1, 18) = .101, MSE = .004, p = .754. Participants’ aesthetic ratings of the AI generated images (M = 3.308) were slightly greater than their ratings of the human images (M = 3.288), but this is not sufficient evidence to suggest there is a difference.

There was also no significant difference between the aesthetic preferences of art students and psychology students, F(1, 18) = .004, MSE = .001, p = .951. The overall ratings from the psychology students (M = 3.304) were slightly higher than those of the art students (M = 3.292); however, this is once again not a statistically significant difference.

Finally, there was no significant evidence of an interaction between artistic experience and image type regarding aesthetic ratings, F(1, 18) = .910, p > .05. The average aesthetic rating of art students was greater for AI generated images (M = 3.332) than for human made artwork (M = 3.252), whereas the aesthetic ratings of psychology students were greater for human made art (M = 3.324) than for AI generated images (M = 3.284). As interesting as this difference is, it is not statistically significant, and no conclusion can be drawn from the present research that artistic experience mediates ratings of AI generated images compared to human artwork.

 

As an additional, unmeasured and anecdotal, finding of the present study: all participants from the art department were unsurprised, when it was revealed to them at the end of the experiment, that AI had been used. However, from discussing the stimuli with them whilst viewing the images on the artist credit sheet, their identification of which images were AI generated and which were human made was relatively low, and they showed a lot of uncertainty about their judgements; they were only able to identify that AI was present somewhere within the stimuli. The psychology students, in comparison, were more often surprised at the use of AI: about half of them were able to tell that AI had been used before it was revealed to them, and the other half responded with surprise to the debrief in which this was brought up.



DISCUSSION

 

The original intention of this study was to provide some early insight into potential differences in how people perceive AI generated images in comparison to human art, and to pave the way for future research exploring this topic in further detail. With the rapid advancement of generative AI and the public concern over what this might entail, research on and further understanding of the topic from a psychological perspective is of great public interest, especially to those who feel their jobs are under threat. By exploring the viewing behaviour of individuals when shown AI generated images in comparison to when they are shown human artwork, any potential differences in how the two are viewed can be identified, helping to further the understanding of any unconsciously noticed differences between them.

Although the only statistically significant result of this study suggests that image type, or whether the image presented was created by a human or by AI, influences pupil size, and that this difference is not mediated by artistic experience, there are a few non-significant trends to take note of. For example, the mean ratings showed that, surprisingly, the art students tended to prefer the AI generated images, whereas the psychology students tended to prefer the human made artwork. Though this trend was not statistically significant, it is an unexpected and interesting pattern which should be considered in future research, and a repeat of this aspect of the study with a greater number of participants could potentially yield an interesting result.

Overall, however, the primary finding of this study is that participants’ pupil size was greater when viewing human made artwork than when viewing its AI generated counterparts, and that this difference was not mediated by artistic experience. This result could implicate a number of ideas, such as subconscious preference or differences in colour, which will be considered in more depth in the following paragraphs. The rest of the eye-tracking data showed almost identical ways of viewing both types of image: the number of fixations, or the number of times the participants’ eyes stopped to fixate, as well as their average duration, were almost exactly the same between the two stimulus categories. No difference in these results was found between the art students and psychology students either, so there was no difference relating to artistic experience. As for the behavioural data, as previously mentioned, a non-significant trend warranting further exploration in future research was found in the ratings. The mean conscious rating of the AI generated images was slightly higher than that of the human artwork, with the psychology students rating the human artwork higher than the AI generated images, but the art students rating the AI generated images higher than the human artwork. Though this is an interesting trend to take note of and explore in further research, this particular finding was not statistically significant, and no conclusion should be drawn from it. Future research with a greater sample size would be needed to examine this idea properly.

 

Past research suggests a number of theories and ideas about what a difference in pupil size might imply. The previously discussed research from Johnson et al (2010) is an example of evidence for the widely accepted theory that pupil size is associated with aesthetic preference, and that greater pupil size when presented with a stimulus could be a result of greater aesthetic enjoyment of it. Taking this concept into account, the related findings in this study would suggest, at least subconsciously, a preference towards human artwork, despite the fact that the conscious ratings shown in the results of the behavioural experiment do not reflect this. Another potential explanation for this finding relates to colour differences between images produced by AI and those used in human artwork. An alternative proposal about what differences in pupil size when viewing various stimuli might suggest was put forward by Taniyama et al (2023), whose recent study suggested that, when viewing artwork, pupil size is more related to the colour composition of a painting, and that larger pupil size was strongly associated with more natural colours in a painting. As previously mentioned, the colour of the stimuli displayed in the present study was controlled for; though there is no numerical measure of natural colour, the average hue, saturation, and brightness were measured for each image, the results of which are shown in figure iii. When considering natural colours as the factor potentially impacting pupil size, hue and saturation are the most relevant, as natural colours can appear at many different brightnesses depending on lighting. As shown in figure iii, the mean hue (Human, M = 121.2; AI, M = 119.36) and saturation (Human, M = 30.04%; AI, M = 28.2%) are extremely similar, which might provide evidence against the claim that the difference in pupil size results from this. However, as mentioned, there is no numerical measure of how ‘natural’ the colours used in artwork are, so this possibility cannot be ruled out.

 

 

              Human      AI
Hue           121.2      119.36
Saturation    30.04%     28.2%
Brightness    52.52%     62.4%

Figure iii – Mean colour values of stimuli groups

 

The results did, to a limited extent, meet expectations. There was an assumption of some difference between how people view AI generated images and human artwork, as this would be in line with the differences in neural processing, and hence possibly in subconscious viewing, of AI generated faces and real human faces found in the previously discussed EEG studies (Moshel et al, 2022; Tarchi et al, 2023). Despite this, no other significant differences in how the images were viewed were observed in the average duration and number of fixations, and there were not even any non-significant trends; the means of these values were almost identical between the two categories of stimuli, and there was a similar lack of difference between the two groups of participants. This would suggest that, when taking only eye-movements into consideration rather than pupil size, AI generated images and human artwork are viewed in an identical fashion.

The behavioural experiment showing no significant difference between the image types was an unexpected result based on past research, which showed a preference against AI (Chamberlin et al, 2018; Ragot et al, 2020; Millet et al, 2023). This suggests that when participants are unaware of an image’s status as human-made or AI generated, their aesthetic judgements are not influenced by it, and that participants’ experience in art, as represented by the subject they were studying, did not influence aesthetic preferences. The difference here was that in most past studies, participants were informed of whether each image was AI generated or made by a human; by not disclosing the distinction between, or even the presence of, the AI generated images alongside the human artwork, the present study controlled for the bias found in much of the previous literature.

Taken together, the significant finding of increased pupil size for human artwork, coupled with the null finding for the aesthetic preference ratings, suggests that subtle differences between human artwork and AI-generated images may be discernible at a subconscious, but not a conscious, level of experience.

The more surprising finding, however, is the non-significant trend which suggested that art students rated the AI generated images more highly, whereas the psychology students rated the human artwork more highly. This is an unexpected result because, though it was not formally recorded or measured, all art students mentioned before the debrief that they had been aware AI was used, as they were able to tell that some of the images were generated by AI; in comparison, fewer than half of the psychology students noticed this. This would contradict what would be expected based on the anti-AI bias identified by past research; however, since the results were not significant, it cannot be assumed that they are anything more than chance. A repeat of the behavioural experiment with a greater sample size, and a measurement of identification accuracy between AI and human images following the ratings, could clarify these results in future research. Additionally, although the art students recognised the presence of AI, their accuracy of identification was low, and when given the booklet showing which images were created by humans and which by AI, they showed surprise at a lot of the images in both categories.

In summary, these results reflect the rapid development of generative AI: AI generated images were perceived almost identically to human artwork, aside from a difference in average pupil size when viewing the images. They also show that artistic experience, when measured by degree type, was not a mediating factor in any differences between how AI images were viewed in comparison to human artwork, nor in how they were rated. They do, however, show some non-significant trends that would require further testing in future research.

 

            Though extraneous variables were controlled for in this study, e.g. by creating the text prompts for the AI counterparts based on the human artwork and ensuring that the AI generated images were all cropped and resolution-boosted so that the aspect ratio and image size would not impact results, there were still limitations that could have had an impact on the final results.

            The most obvious of these limitations was the small sample size. This was due to the intention of having the same size groups of art and psychology students, with art students being significantly more difficult to recruit. Psychology students, as previously mentioned, were offered incentives to participate in the study, however ethical approval to provide art students with incentives was not given. Additionally, the lab was more easily accessible to psychology students than art students, which was another limiting factor to art student participation.

Another limitation related to the source of the stimuli. Given that all human art was gathered from the same website and all computer generated images were created using Bing Copilot, the stimuli may not have been representative of the AI-generated images and artwork that one is generally exposed to. Additionally, if Oppenlaender et al (2023a) are correct in suggesting that ‘AI prompt engineering’ is a skill, then the fact that the images were prompted by someone unfamiliar with that skill is a potential flaw. An ideal solution would be the development of a dataset containing artwork gathered from a variety of sources, alongside paired images of a similar nature generated by a variety of AIs and prompted by someone familiar with ‘AI prompt engineering’. This dataset would also preferably contain more than 25 images of each category in order to better represent the variety in art. The number of images was limited in the present study because the experiment was already relatively long, and a longer one might have discouraged participants from taking part.

Additionally, to refer back to the colour values of the stimuli used, though the hue and saturation values are very similar, there is a notable difference between the brightness of the two stimulus groups. Though this was a relatively small difference, it does limit the conclusions that can be drawn from the data, as evidence from previous research (Mathôt & Van der Stigchel, 2015; Binda et al, 2013; etc.) already suggests that greater brightness is associated with smaller pupil size. The aforementioned dataset for future research would ideally use pairs of images for which brightness is more carefully controlled.

Finally, the sample may not have been representative: the use of art and psychology students as a proxy for ‘artistic experience’ may not have been accurate. For example, the art students may have included first-year students with little experience in art as of yet, and the psychology students may have had an interest in art outside of their course. Because of this, course may have been a poor indicator of artistic experience. An alternative could be proposed: participants could fill out a short questionnaire asking about their artistic experience and be categorised based on their scores. Otherwise, a third category of professionals in artistic careers could have been beneficial. Additionally, though this demographic was not kept track of in order to avoid collecting too many personal details, most of the participants were in their early 20s, and the inclusion criteria stated that participants had to be 18-36 years old. Including older participants could also have influenced the results of the study, as older individuals tend to be more critical of new technologies.

 

This is a very under-studied area of research due to the only recent evolution of text-to-image technology, and given the public interest in and lack of understanding of it, as shown in Oppenlaender et al’s (2023a) previously discussed questionnaire, further research could be of great interest to the public, especially those whose jobs are threatened by generative AI. Though this study had several limitations that could have impacted the results, it may give way to future research aiming to further the understanding of perceptions of AI ‘artwork’ and how they could differ from perceptions of human art. As mentioned, one of the primary aims of this study was to pave the way for future research and to provide an initial understanding of a very underdeveloped topic within psychology, so the future research that could be developed from the ideas presented here is of great interest. Future repetitions of the present experiment controlling for the aforementioned limitations, to test the study’s reliability, would be beneficial in themselves; however, research involving similar concepts could also be followed up in future using this study as a basis.

            A potentially interesting future experiment could combine the present study with the earlier discussed study of Mitrovic et al (2020). Given the idea that ‘beauty demands longer looks’, even when, as shown in the results of their study, that preference is in no way adaptive, it would be interesting to see whether there is a significant difference in the amount of time spent looking at either image if a piece of human artwork and its AI counterpart were presented on the screen at the same time. This would be an interesting way of exploring any subconscious distinction and/or preference for one form of image or the other. The potential issue is that participants could be more likely to work out the true nature of the study if there is a consistent pattern of one AI image and one human art piece being shown together, knowledge of which could influence the way they look at the images; for example, they may look for longer at whichever image they suspect to be AI in order to find any giveaway signs.
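            To make the proposed measure concrete, the sketch below shows one way the total dwell time on each half of the screen could be computed from exported fixation data for such a side-by-side trial. The column names, example values and screen width are hypothetical assumptions rather than the output format of any particular eye-tracker.

```python
# Minimal sketch (hypothetical data format): total dwell time on each half of
# the screen for a trial where a human artwork and its AI counterpart are shown
# side by side. Column names and the 1920 px screen width are assumptions.
import pandas as pd

SCREEN_WIDTH_PX = 1920  # assumed horizontal screen resolution

# Each row is one fixation: horizontal position (px) and duration (ms).
fixations = pd.DataFrame({
    "fix_x": [350, 420, 1300, 1500, 600, 1450],
    "duration_ms": [220, 310, 180, 260, 240, 200],
})

# Label each fixation by the half of the screen it landed on.
fixations["side"] = [
    "left" if x < SCREEN_WIDTH_PX / 2 else "right" for x in fixations["fix_x"]
]

# Total dwell time per side; if the human artwork were on the left for this
# trial, comparing the two totals gives the measure of interest.
dwell_ms = fixations.groupby("side")["duration_ms"].sum()
print(dwell_ms)
```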

            As previously mentioned, the participants of the present study were young, and a larger variety of ages would have been interesting; adding age as a variable in itself could also uncover some interesting results. On the one hand, older individuals would be expected to have had more experience with and exposure to art, having simply been alive for longer; on the other hand, they are usually considered to be less familiar with new technologies.

            Finally, a potentially interesting alteration to the present study was suggested by an art student participant, who shared that, when asked to analyse a piece of artwork, they were expected to view it for far longer and often had to write about it. A future experiment could build on this comment by presenting fewer stimuli, again in a random order from a set of human artwork and AI counterparts, but with longer viewing times and with participants asked to write a short analysis of each image after it had been shown. The expectation of producing a written analysis could encourage them to view each image more critically and more closely, which could yield different results in both the eye-tracking and rating data.

 

            The study at hand aimed to fill a large gap in psychological literature pertaining to the perception of images created by generative AI in comparison to that of artwork created by humans. The few past studies performed within this field, in combination with other relevant literature outside of it, led to the hypothesis that individuals’ viewing behaviour would differ when presented with human artwork compared to AI generated counterparts, and that this difference would be mediated by artistic experience. This was measured using eye-tracking data, specifically pupil size, fixation count and fixation duration, alongside aesthetic ratings. The data were analysed using a 2x2 mixed factorial ANOVA, the results of which showed no significant differences in fixation count, fixation duration or rating data between AI generated images and human artwork; however, mean pupil size was significantly greater when participants viewed human artwork than AI generated images, which supports the hypothesis to a limited extent. These findings pave the way for future research to further develop understanding of the psychological aspects of generative AI, as well as providing more information on how responses to it may differ from responses to artwork made by humans.
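            For readers wishing to replicate the analysis, the sketch below illustrates how a 2x2 mixed factorial ANOVA of this kind could be run in Python using the pingouin package. The data frame, column names and values are purely illustrative and do not reflect the study’s actual data or analysis script.

```python
# Minimal sketch of a 2x2 mixed factorial ANOVA like the one described above,
# here applied to hypothetical mean pupil size data using pingouin.
# Column names and values are illustrative, not the study's actual data.
import pandas as pd
import pingouin as pg

data = pd.DataFrame({
    "participant": [1, 1, 2, 2, 3, 3, 4, 4],
    "course": ["art", "art", "art", "art",
               "psych", "psych", "psych", "psych"],  # between-subjects factor
    "image_type": ["human", "ai"] * 4,               # within-subjects factor
    "pupil_size_mm": [3.9, 3.6, 4.1, 3.8, 3.7, 3.5, 4.0, 3.7],
})

aov = pg.mixed_anova(
    data=data,
    dv="pupil_size_mm",    # dependent variable
    within="image_type",   # repeated-measures factor (human art vs AI image)
    subject="participant",
    between="course",      # between-subjects factor (proxy for artistic experience)
)
print(aov.round(3))
```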



REFERENCES


Anantrasirichai, N. & Bull, D. (2021) Artificial intelligence in the creative industries: a review. Artificial Intelligence Review, 55, p. 589-656, doi: https://doi.org/10.1007/s10462-021-10039-7 

 

BBC (2018) Portrait by AI program sells for $432,000. url: https://www.bbc.co.uk/news/technology-45980863

 

Bilalić, M., Kiesel, A., Pohl, C., Erb, M. & Grodd, W. (2011) It takes two – skilled recognition of objects engages lateral areas in both hemispheres, PLoS ONE, 6(1), doi: https://doi.org/10.1371/journal.pone.0016202

 

Binda, P., Pereverzeva, M. & Murray, S. O. (2013) Pupil constrictions to photographs of the sun, Journal of Vision, 13(6), doi: https://doi.org/10.1167/13.6.8

 

Business Insider (2023) Workers are worried about AI taking their jobs. Artists say it’s already happening. url: https://www.businessinsider.com/ai-taking-jobs-fears-artists-say-already-happening-2023-10?r=US&IR=T#

 

Chamberlain, R., Mullin, C., Scheerlinck, B., Wagemans, J. (2018) Putting the Art in Artificial: Aesthetic responses to Computer-Generated Art, Psychology of Aesthetics, Creativity, and the Arts, 12(2), p. 177-192, doi: https://doi.org/10.1037/aca0000136

 

Crucq, A. (2020) Viewing-patterns and perspectival painting: An eye-tracking study on the effect of the vanishing point, Journal of eye movement research, 13(2), doi: https://doi.org/10.16910/jemr.13.2.15

 

DALL.E 3 (2023) Bing CoPilot, https://www.bing.com/images/create

 

Frey, C. B. & Osborne, M. A. (2017) The future of employment: How susceptible are jobs to computerization? Technological Forecasting & Social Change, 114 p. 254-280, doi: https://doi.org/10.1016/j.techfore.2016.08.019

 

Gangan, M. P., Anoop, K., Lajish, V. L. (2022) Distinguishing natural and computer generated images using Multi-Colorspace fused EfficientNet, Journal of Information Security and Applications, 68, doi: https://doi.org/10.1016/j.jisa.2022.103261

 

 

Grant, N. & Metz, C. (2022) A New Chat Bot Is a ‘Code Red’ for Google’s Search Business. url: https://www.nytimes.com/2022/12/21/technology/ai-chatgpt-google-search.html

 

Gault, M. (2022) An AI-Generated Artwork Won First Place at a State Fair Fine Arts Competition, and Artists Are Pissed. url: https://www.vice.com/en/article/bvmvqm/an-ai-generated-artwork-won-first-place-at-a-state-fair-fine-arts-competition-and-artists-are-pissed

 

Halal, W., Kolber, J., Davies, O. (2016) Forecasts of AI and Future Jobs in 2023: Muddling Through Likely, with Two Alternative Scenarios. Journal of Futures Studies, 21(2), p. 83-96, doi: 10.6531/JFS.2016.21(2).R83

 

Hawley-Dolan, A. & Winner, E. (2011) Seeing the mind behind the art: people can distinguish abstract expressionist paintings from highly similar paintings by children, chimps, monkeys, and elephants, Psychological Science, 22(4), doi: https://doi.org/10.1177/0956797611400915

 

Hulzebosch, N., Ibrahimi, S., Worring, M. (2020) Detecting CNN-Generated Facial Images in Real-world Scenarios in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops

 

Johnson, M. G., Muday, J. A., Schirillo, J. A. (2010) When viewing variations in paintings by Mondrian, aesthetic preferences correlate with pupil size, Psychology of Aesthetics, Creativity, and the Arts, 4(3), doi: 10.1037/a0018155

 

Jucker, J-L., Barrett, J. L., Wlodarski, R. (2014) “I just don’t get it”: Perceived Artists’ Intentions Affect Art Evaluations, Empirical Studies of the Arts, 32(2), doi: https://doi.org/10.2190/EM.32.2.c

 

Kirk, U., Skov, M., Hulme, O., Christensen, M. S., Zeki, S. (2009) Modulation of aesthetic value by semantic context: An fMRI study, NeuroImage, 44(3), p. 1125-1132, doi: https://doi.org/10.1016/j.neuroimage.2008.10.009

 

Korshunov, P. & Marcel, S. (2020) Deepfake detection: humans vs. machines, arXiv:2009.03155, doi: https://doi.org/10.48550/arXiv.2009.03155

 

Kruger, J., Wirtz, D., Boven, L. V., Altermatt, T. W. (2004) The Effort Heuristic, Journal of Experimental Social Psychology, 40(1), p.91-98, doi: https://doi.org/10.1016/S0022-1031(03)00065-9

 

Kundel, H. L., Nodine, C. F., Krupinski, E. A. & Mello-Thoms, C. (2008) Using gaze-tracking data and mixture distribution analysis to support a holistic model for the detection of cancers on mammograms, Academic Radiology, 15(7) p. 881-886, doi: https://doi.org/10.1016/j.acra.2008.01.023

 

Lago, F., Pasquini, C., Böhme, R., Dumont, H., Goffaux, V., Boato, G. (2021) More Real than Real: A Study on Human Visual Perception of Synthetic Faces. IEEE Signal Processing Magazine, 39(1), p. 109-116, doi: 10.1109/MSP.2021.3120982

 

Lu, Z., Huang, D., Bai, L., Qu, J., Wu, C., Liu, X., Ouyang, W. (2023) Seeing is not Always Believing: Benchmarking Human and Model Perception of AI-Generated Images, arXiv:2304.13023, doi: https://doi.org/10.48550/arXiv.2304.13023

 

Mader, B., Banks, M. S., Farid, H. (2017) Identifying Computer-Generated Portraits: The Importance of Training and Incentives. Perception. 46(9), doi: https://doi.org/10.1177/0301006617713633

 

Mandolesi, S., Gambelli, D., Naspetti, S., Zanoli, R. (2022) Exploring Visitors’ Visual Behaviour Using Eye-Tracking: The Case of the “Studiolo Del Duca”, J. Imaging, 8(1) doi: https://doi.org/10.3390/jimaging8010008

 

Massaro, D., Savazzi, F., Dio, C. D., Freedberg, D., Gallese, V., Gilli, G., Marchetti, A. (2012) When Art Moves the Eyes: A Behavioral and Eye-Tracking Study, Plos One, 7(5), doi: https://doi.org/10.1371/journal.pone.0037285

 

Mathôt, S. & Van der Stigchel, S. (2015) New light on the mind’s eye: The pupillary response as active vision, Current directions in psychological science, 24(5), doi: https://doi.org/10.1177/096372141559372

 

Millet, K., Buehler, F., Du, G., Kokkoris, M. D. (2023) Defending humankind: Anthropocentric bias in the appreciation of AI art, Computers in Human Behavior, 143, doi: https://doi.org/10.1016/j.chb.2023.107707

 

Mitrovic, A., Hegelmaier, L. M., Leder, H., Pelowski, M. (2020) Does beauty capture the eye, even if it’s not (overtly) adaptive? A comparative eye-tracking study of spontaneous attention and visual preference with VAST abstract art, Acta Psychologica, 209, doi: https://doi.org/10.1016/j.actpsy.2020.103133

 

Moshel, M. L., Robinson, A. K., Carlson, T. A., Grootswagers, T. (2022) Are you for real? Decoding realistic AI-generated faces from neural activity, Vision Research, 199, doi: https://doi.org/10.1016/j.visres.2022.108079

 

Newman, G. E & Bloom, P. (2012) Art and authenticity: The importance of originals in judgements of value, Journal of Experimental Psychology: general, 141(3), p. 558-569, doi: https://doi.org/10.1037/a0026035 

 

OECD (2021) Artificial Intelligence and Employment: New evidence from occupations most exposed to AI. url: https://policycommons.net/artifacts/3864727/artificial-intelligence-and-employment/4670682/

 

Oppenlaender, J., Silvennoinen, J., Paananen, V., Visuri, A. (2023a) Perceptions and Realities of Text-to-Image Generation. Academic Mindtrek.

 

Oppenlaender, J., Linder, R., Silvennoinen, J. (2023b) Prompting AI Art: An Investigation into the Creative Skill of Prompt Engineering. arXiv preprint, arXiv:2303.13534.

 

Perra, J., Latmier, Z., Poulin-Charronnat, B., Baccino, T. & Drai-Zerbib, V. (2022) A meta-analysis on the effect of expertise on eye movements during music reading, Journal of eye movement research, 15(4), doi: https://doi.org/10.16910/jemr.15.4.1

 

Pihko, E., Virtanen, A., Saarinen, V-M., Pannasch, S., Hirvenkari, L., Tossavainen, T., Haapala, A., Hari, R. (2011) Experiencing art: the influence of expertise and painting abstraction level, Frontiers in Human Neuroscience, 5, doi: 10.3389/fnhum.2011.00094

 

Ragot, M., Martin, N., Cojean, S. (2020) AI-generated Vs. Human Artworks. A Perception Bias Towards Artificial Intelligence? In Extended abstracts of the 2020 CHI conference on human factors in computing systems, p. 1–10, doi: https://doi.org/10.1145/3334480.3382892

 

Riek, L. D., Rabinowitch, T-C., Chakrabarti, B., Robinson, P. (2009) How anthropomorphism affects empathy towards robots, 4th ACM/IEEE International Conference on Human-Robot Interaction, p. 245-246, doi: 10.1145/1514095.1514158

 

Savazzi, F., Massaro, D., Dio, C. D., Freedberg, D., Gallese, V., Gilli, G., Marchetti, A. (2014) Exploring Responses to Art in Adolescence: A Behavioural and Eye-Tracking Study, Plos One, 9(7), doi: https://doi.org/10.1371/journal.pone.0102888

 

Snapper, L., Oranç, C., Hawley-Dolan, A., Nissel, J., Winner, E. (2015) Your kid could not have done that: Even untutored observers can discern intentionality and structure in abstract expressionist art, Cognition, 137, p. 154-165, doi: https://doi.org/10.1016/j.cognition.2014.12.009

 

Taniyama, Y., Suzuki, Y., Kondo, T., Minami, T., Nakauchi, S. (2023) Pupil dilation is driven by perceptions of naturalness of colour composition in paintings, Psychology of Aesthetics, Creativity and the Arts, doi: https://doi.org/10.1037/aca0000580

 

Tarchi, P., Lanini, M. C., Frassineti, L., Lanatà, A. (2023) Real and Deepfake Face Recognition: An EEG Study on Cognitive and Emotional Implications, Brain Sciences, 13(9) doi: https://doi.org/10.3390/brainsci13091233

 

Waifu2X (2023) https://www.waifu2x.net/ 

