References
- Adorno, T., & Horkheimer, M. (1944). The Culture Industry: Enlightenment as Mass Deception.
- Ahmed, S. (2004). Affective Economies. Social Text, 22(2), 117–139. https://doi.org/10.1215/01642472-22-2_79-117.
- Ahmed, S. (2010). The Promise of Happiness. Duke University Press. https://doi.org/10.2307/j.ctv125jkj2.
- Amoore, L. (2019). Doubt and the Algorithm: On the Partial Accounts of Machine Learning. Theory, Culture & Society, 36(6), 147–169. https://doi.org/10.1177/0263276419851846.
- Amoore, L. (2020). Cloud ethics: Algorithms and the attributes of ourselves and others. Duke University Press.
- Ananny, M. (2023). Making Mistakes: Constructing Algorithmic Errors to Understand Sociotechnical Power. Osiris, 38, 223–241. https://doi.org/10.1086/725146.
- Appadurai, A., & Alexander, N. (2020). Failure. Polity press.
- Armandpour, M., Sadeghian, A., Zheng, H., Sadeghian, A., & Zhou, M. (2023). Re-imagine the Negative Prompt Algorithm: Transform 2D Diffusion into 3D, alleviate Janus problem and Beyond (Version 3). arXiv. https://doi.org/10.48550/ARXIV.2304.04968.
- Ballatore, A., & Natale, S. (2023). Technological failures, controversies and the myth of AI. In S. Lindgren (Ed.), Handbook of Critical Studies of Artificial Intelligence (pp. 237–244). Edward Elgar Publishing. https://doi.org/10.4337/9781803928562.00026.
- Ban, Y., Wang, R., Zhou, T., Cheng, M., Gong, B., & Hsieh, C.-J. (2024). Understanding the Impact of Negative Prompts: When and How Do They Take Effect? (Version 1). arXiv. https://doi.org/10.48550/ARXIV.2406.02965.
- Barassi, V. (2024). Toward a Theory of AI Errors: Making Sense of Hallucinations, Catastrophic Failures, and the Fallacy of Generative AI. Harvard Data Science Review, Special Issue 5. https://doi.org/10.1162/99608f92.ad8ebbd4.
- Benjamin, R. (2019). Race after technology: Abolitionist tools for the New Jim Code. Polity.
- Berlant, L. (2011). Cruel Optimism. Duke University Press. https://doi.org/10.2307/j.ctv1220p4w.
- Broussard, M. (2023). More than a glitch: Confronting race, gender, and ability bias in tech. The MIT Press.
- Chesher, C., & Albarrán-Torres, C. (2023). The emergence of autolography: The ‘magical’ invocation of images from text through AI. Media International Australia, 189(1), 57–73. https://doi.org/10.1177/1329878X231193252
- Chun, W. H. K. (2008). On “Sourcery,” or Code as Fetish. Configurations, 16(3), 299–324. https://doi.org/10.1353/con.0.0064.
- Chun, W. H. K. (2016). Updating to remain the same: Habitual new media. The MIT Press.
- Cohen, H. (1979). What is an image. International Joint Conference on Artificial Intelligence. https://api.semanticscholar.org/CorpusID:62789816.
- Dhariwal, P., & Nichol, A. (2021). Diffusion Models Beat GANs on Image Synthesis (Version 4). arXiv. https://doi.org/10.48550/ARXIV.2105.05233.
- Du, Y., Li, S., & Mordatch, I. (2020). Compositional Visual Generation and Inference with Energy Based Models (Version 3). arXiv. https://doi.org/10.48550/ARXIV.2004.06030.
- Feuerriegel, S., Hartmann, J., Janiesch, C., & Zschech, P. (2024). Generative AI. Business & Information Systems Engineering, 66(1), 111–126. https://doi.org/10.1007/s12599-023-00834-7.
- Galanter, P. (2016). Generative Art Theory. In C. Paul (Ed.), A Companion to Digital Art (1st ed., pp. 146–180). Wiley. https://doi.org/10.1002/9781118475249.ch5.
- Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., & Bengio, Y. (2014). Generative Adversarial Networks (Version 1). arXiv. https://doi.org/10.48550/ARXIV.1406.2661.
- Graham, S., & Thrift, N. (2007). Out of Order: Understanding Repair and Maintenance. Theory, Culture & Society, 24(3), 1–25. https://doi.org/10.1177/0263276407075954.
- Koulischer, F., Deleu, J., Raya, G., Demeester, T., & Ambrogioni, L. (2024). Dynamic Negative Guidance of Diffusion Models (Version 3). arXiv. https://doi.org/10.48550/ARXIV.2410.14398.
- Larsson, S., & Viktorelius, M. (2024). Reducing the contingency of the world: Magic, oracles, and machine-learning technology. AI & SOCIETY, 39(1), 183–193. https://doi.org/10.1007/s00146-022-01394-2.
- Lotringer, S., & Virilio, P. (2005). The accident of art. Semiotexte.
- Mansimov, E., Parisotto, E., Ba, J. L., & Salakhutdinov, R. (2015). Generating Images from Captions with Attention (Version 2). arXiv. https://doi.org/10.48550/ARXIV.1511.02793.
- Morozov, E. (2014). To save everything, click here: The folly of technological solutionism (Paperback 1. publ). PublicAffairs.
- Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York university press.
- Oppenlaender, J. (2024). The Cultivated Practices of Text-to-Image Generation. In R. Rousi, C. Von Koskull, & V. Roto (Eds.), Humane Autonomous Technology (pp. 325–349). Springer International Publishing. https://doi.org/10.1007/978-3-031-66528-8_14.
- Rombach, R., Blattmann, A., Lorenz, D., Esser, P., & Ommer, B. (2021). High-Resolution Image Synthesis with Latent Diffusion Models (Version 2). arXiv. https://doi.org/10.48550/ARXIV.2112.10752.
- Ronge, R., Maier, M., & Rathgeber, B. (2025). Towards a Definition of Generative Artificial Intelligence. Philosophy & Technology, 38(1), 31. https://doi.org/10.1007/s13347-025-00863-y.
- Simon, H. A., & Newell, A. (1971). Human problem solving: The state of the theory in 1970. American Psychologist, 26(2), 145–159. https://doi.org/10.1037/h0030806.
- Wachter-Boettcher, S. (2017). Technically wrong: Sexist apps, biased algorithms, and other threats of toxic tech (First edition). W.W. Norton & Company.
- Weizenbaum, J. (1966). ELIZA – a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36–45. https://doi.org/10.1145/365153.365168.