References
1. Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J. W., &
Brundage, M. (2021). Evaluating CLIP: Towards Characterization of
Broader Capabilities and Downstream Implications
(arXiv:2108.02818). arXiv. https://doi.org/10.48550/arXiv.2108.02818
2. Ali, J., Kleindessner, M., Wenzel, F., Budhathoki, K., Cevher, V.,
& Russell, C. (2023). Evaluating the Fairness of Discriminative
Foundation Models in Computer Vision. Proceedings of the 2023
AAAI/ACM Conference on AI, Ethics, and Society, 809–833. https://doi.org/10.1145/3600211.3604720
3. Andreessen, M. (2023, October 16). The techno-optimist manifesto.
Andreessen Horowitz. https://a16z.com/the-techno-optimist-manifesto/
4. Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23).
Machine Bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
5. Baack, S., & Mozilla Insights. (2024). Training Data for the
Price of a Sandwich: Common Crawl’s Impact on Generative AI. https://foundation.mozilla.org/en/research/library/generative-ai-training-data/common-crawl/
6. Bach, J. (2023, February 26). Joscha Bach: Open Sourcing AI & its
implications (C. Schuhmann, Interviewer) [Interview]. https://www.youtube.com/watch?v=MVm9FVfGrFQ
7. Bagdasaryan, E., & Shmatikov, V. (2019). Differential Privacy
Has Disparate Impact on Model Accuracy (arXiv:1905.12101). arXiv.
https://doi.org/10.48550/arXiv.1905.12101
8. Beaumont, R. (2022, March 31). A Call to Protect Open-Source AI in
Europe. LAION. https://laion.ai/notes/letter-to-the-eu-parliament
9. Beaumont, R. (2022, March 31). Laion-5B: A New Era of Open
Large-Scale Multi-Modal Datasets. LAION. https://laion.ai/blog/laion-5b
10. Bennett, C. L., Gleason, C., Scheuerman, M. K., Bigham, J. P., Guo,
A., & To, A. (2021). “It’s Complicated”: Negotiating
Accessibility and (Mis)Representation in Image Descriptions of Race,
Gender, and Disability. Proceedings of the 2021 CHI Conference on
Human Factors in Computing Systems, 1–19. https://doi.org/10.1145/3411764.3445498
11. Bianchi, F., Kalluri, P., Durmus, E., Ladhak, F., Cheng, M., Nozza,
D., Hashimoto, T., Jurafsky, D., Zou, J., & Caliskan, A. (2023).
Easily Accessible Text-to-Image Generation Amplifies Demographic
Stereotypes at Large Scale. 2023 ACM Conference on Fairness,
Accountability, and Transparency, 1493–1504. https://doi.org/10.1145/3593013.3594095
12. Bigham, J. P., Kaminsky, R. S., Ladner, R. E., Danielsson, O. M.,
& Hempton, G. L. (2006). WebInSight: Making Web Images Accessible.
Proceedings of the 8th International ACM SIGACCESS Conference on
Computers and Accessibility, 181–188. https://doi.org/10.1145/1168987.1169018
13. Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von
Arx, S., Bernstein, M. S., Bohg, J., Bosselut, A., Brunskill, E.,
Brynjolfsson, E., Buch, S., Card, D., Castellon, R., Chatterji, N.,
Chen, A., Creel, K., Davis, J. Q., Demszky, D., … Liang, P. (2022).
On the Opportunities and Risks of Foundation Models
(arXiv:2108.07258). arXiv. https://doi.org/10.48550/arXiv.2108.07258
14. Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J.,
Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A.,
Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R.,
Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., … Amodei, D. (2020).
Language Models are Few-Shot Learners (arXiv:2005.14165).
arXiv. https://doi.org/10.48550/arXiv.2005.14165
15. Brynjolfsson, E., Li, D., & Raymond, L. (2023). Generative
AI at Work (Working Paper 31161). National Bureau of Economic
Research. https://doi.org/10.3386/w31161
16. Buolamwini, J., & Gebru, T. (2018). Gender Shades:
Intersectional Accuracy Disparities in Commercial Gender Classification.
Proceedings of the 1st Conference on Fairness, Accountability and
Transparency, 77–91. https://proceedings.mlr.press/v81/buolamwini18a.html
17. Cai, K., & Martin, I. (2024, March 29). How Stability AI’s
Founder Tanked His Billion-Dollar Startup. Forbes. https://www.forbes.com/sites/kenrickcai/2024/03/29/how-stability-ais-founder-tanked-his-billion-dollar-startup/
18. Castells, M. (2010). The Rise of the Network Society (2nd
edition). Wiley-Blackwell.
19. Chun, W. H. K. (with Barnett, A.). (2021). Discriminating Data:
Correlation, Neighborhoods, and the New Politics of Recognition.
The MIT Press.
20. Chung, J. J. Y., Kim, W., Yoo, K. M., Lee, H., Adar, E., &
Chang, M. (2022). TaleBrush: Sketching Stories with Generative
Pretrained Language Models. CHI Conference on Human Factors in
Computing Systems, 1–19. https://doi.org/10.1145/3491102.3501819
21. Collins, E., & Wang, M. (2025). Federated Learning: A Survey
on Privacy-Preserving Collaborative Intelligence
(arXiv:2504.17703). arXiv. https://doi.org/10.48550/arXiv.2504.17703
22. Crawford, K., & Joler, V. (2018). Anatomy of an AI System:
The Amazon Echo as an Anatomical Map of Human Labor, Data and Planetary
Resources. https://anatomyof.ai/
23. Crawford, K. (2022). Atlas of AI: Power, Politics, and the
Planetary Costs of Artificial Intelligence. Yale University Press.
24. Dauvergne, P. (2022). Is Artificial Intelligence Greening Global
Supply Chains? Exposing the Political Economy of Environmental Costs.
Review of International Political Economy, 29(3),
696–718. https://doi.org/10.1080/09692290.2020.1814381
25. Denton, E., Hanna, A., Amironesei, R., Smart, A., Nicole, H., &
Scheuerman, M. K. (2020). Bringing the People Back In: Contesting
Benchmark Machine Learning Datasets (arXiv:2007.07399). arXiv. https://doi.org/10.48550/arXiv.2007.07399
26. Denton, E., Hanna, A., Amironesei, R., Smart, A., & Nicole, H.
(2021). On the genealogy of machine learning datasets: A critical
history of ImageNet. Big Data & Society, 8(2),
20539517211035955. https://doi.org/10.1177/20539517211035955
27. DiBona, C., & Ockman, S. (1999). Open Sources: Voices from
the Open Source Revolution (1st edition). O’Reilly Media.
28. Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2023).
GPTs are GPTs: An Early Look at the Labor Market Impact Potential of
Large Language Models (arXiv:2303.10130). arXiv. https://doi.org/10.48550/arXiv.2303.10130
29. Epstein, Z., Levine, S., Rand, D. G., & Rahwan, I. (2020). Who
gets credit for AI-generated art? iScience, 23(9),
101515. https://doi.org/10.1016/j.isci.2020.101515
30. Epstein, Z., Schroeder, H., & Newman, D. (2022). When happy
accidents spark creativity: Bringing collaborative speculation to life
with generative AI (arXiv:2206.00533). arXiv. https://doi.org/10.48550/arXiv.2206.00533
31. Epstein, Z., Hertzmann, A., Akten, M., Farid, H., Fjeld, J., Frank,
M. R., Groh, M., Herman, L., Leach, N., Mahari, R., Pentland, A.
“Sandy”, Russakovsky, O., Schroeder, H., Smith, A., & Smith, A.
(2023). Art and the science of generative AI. Science,
380(6650), 1110–1111. https://doi.org/10.1126/science.adh4451
32. Ferrara, E. (2023). Fairness and bias in artificial intelligence: a
brief survey of sources, impacts, and mitigation strategies.
Sci, 6(1), 3. https://doi.org/10.3390/sci6010003
33. Ferrari, F. (2023). Neural Production Networks: AI’s Infrastructural
Geographies. Environment and Planning F, 2(4),
459–476. https://doi.org/10.1177/26349825231193226
34. Fletcher, R. (2024). How many news websites block AI crawlers?
Reuters Institute for the Study of Journalism. https://doi.org/10.60625/risj-xm9g-ws87
35. Frenkel, S., & Thompson, S. A. (2023, July 15). “Not for
Machines to Harvest”: Data Revolts Break Out Against A.I. The
New York Times. https://www.nytimes.com/2023/07/15/technology/artificial-intelligence-models-chat-data.html
36. Gadekallu, T. R., Dev, K., Khowaja, S. A., Wang, W., Feng, H., Fang,
K., Pandya, S., & Wang, W. (2025). Framework, Standards,
Applications and Best Practices of Responsible AI: A Comprehensive
Survey (arXiv:2504.13979; Version 1). arXiv. https://doi.org/10.48550/arXiv.2504.13979
37. Garg, N., Schiebinger, L., Jurafsky, D., & Zou, J. (2018). Word
embeddings quantify 100 years of gender and ethnic stereotypes.
Proceedings of the National Academy of Sciences,
115(16). https://doi.org/10.1073/pnas.1720347115
38. Gehman, S., Gururangan, S., Sap, M., Choi, Y., & Smith, N. A.
(2020). RealToxicityPrompts: Evaluating Neural Toxic Degeneration in
Language Models (arXiv:2009.11462). arXiv. https://doi.org/10.48550/arXiv.2009.11462
39. Ghosh, A., & Fossas, G. (2022). Can there be art without an
artist? (arXiv:2209.07667). arXiv. https://doi.org/10.48550/arXiv.2209.07667
40. Gleason, C., Carrington, P., Cassidy, C., Morris, M. R., Kitani, K.
M., & Bigham, J. P. (2019). “It’s almost like they’re trying
to hide it”: How User-Provided Image Descriptions Have Failed to
Make Twitter Accessible. The World Wide Web Conference,
549–559. https://doi.org/10.1145/3308558.3313605
41. Goetze, T. S. (2024). AI Art is Theft: Labour, Extraction, and
Exploitation: Or, On the Dangers of Stochastic Pollocks. The 2024
ACM Conference on Fairness, Accountability, and Transparency,
186–196. https://doi.org/10.1145/3630106.3658898
42. Goh, G., Cammarata, N., Voss, C., Carter, S., Petrov, M., Schubert,
L., Radford, A., & Olah, C. (2021). Multimodal Neurons in Artificial
Neural Networks. Distill, 6(3), e30. https://doi.org/10.23915/distill.00030
43. Gordon, S., Mahari, R., Mishra, M., & Epstein, Z. (2022).
Co-creation and ownership for AI radio (arXiv:2206.00485).
arXiv. https://doi.org/10.48550/arXiv.2206.00485
44. Gorska, A. M., & Jemielniak, D. (2023). The invisible women:
uncovering gender bias in AI-generated images of professionals.
Feminist Media Studies, 23(8), 4370–4375. https://doi.org/10.1080/14680777.2023.2263659
45. Hanley, M., Barocas, S., Levy, K., Azenkot, S., & Nissenbaum, H.
(2021). Computer Vision and Conflicting Values: Describing People with
Automated Alt Text. Proceedings of the 2021 AAAI/ACM Conference on
AI, Ethics, and Society, 543–554. https://doi.org/10.1145/3461702.3462620
46. Heikkilä, M. (2022, September 16). This artist is dominating
AI-generated art. And he’s not happy about it. MIT Technology
Review. https://www.technologyreview.com/2022/09/16/1059598/this-artist-is-dominating-ai-generated-art-and-hes-not-happy-about-it/
47. Hong, R., Agnew, W., Kohno, T., & Morgenstern, J. (2024). Who’s
in and who’s out? A case study of multimodal CLIP-filtering in DataComp.
Proceedings of the 4th ACM Conference on Equity and Access in
Algorithms, Mechanisms, and Optimization, 1–17. https://doi.org/10.1145/3689904.3694702
48. Howard, J. (2023, April 2). Jeremy Howard: His vision for
fast.ai & large language models (C. Schuhmann, Interviewer)
[Interview]. https://www.youtube.com/watch?v=J5DdTjIvd_E
49. 黄孙权 (Ed.). (2017). 让我们平台合作社吧 [Let’s do platform cooperativism]. 网络社会研究所 [Institute of Network Society].
50. Hutson, J., & Harper-Nichols, M. (2023). Generative AI and
Algorithmic Art: Disrupting the Framing of Meaning and Rethinking the
Subject-Object Dilemma. Global Journal of Computer Science and
Technology: D, 23(1). https://digitalcommons.lindenwood.edu/faculty-research-papers/461
51. Inflection. (2024, March 19). The new Inflection: An important
change to how we’ll work. Inflection. https://inflection.ai/the-new-inflection
52. Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le,
Q. V., Sung, Y., Li, Z., & Duerig, T. (2021). Scaling Up Visual
and Vision-Language Representation Learning With Noisy Text
Supervision (arXiv:2102.05918). arXiv. https://doi.org/10.48550/arXiv.2102.05918
53. Kak, A., & West, S. M. (2023). 2023 Landscape: Confronting
Tech Power. AI Now Institute. https://ainowinstitute.org/2023-landscape
54. Kaltheuner, F., Saari, L., Kak, A., & West, S. M. (Eds.).
(2024). Redirecting Europe’s AI Industrial Policy: From
Competitiveness to Public Interest. AI Now Institute. https://ainowinstitute.org/wp-content/uploads/2024/10/AI-Now_EU-AI-Industrial-Policy_Oct.-2024.pdf
55. Kelty, C. M. (2008). Two Bits: The Cultural Significance of Free
Software (Illustrated edition). Duke University Press Books. https://twobits.net/
56. Khan, S. M., & Mann, A. (2020). AI Chips: What They Are and
Why They Matter. Center for Security and Emerging Technology. https://doi.org/10.51593/20190014
57. Kilcher, Y. (2022, April 22). LAION-5B: 5 billion
image-text-pairs dataset (with the authors) (LAION, Interviewer)
[Interview]. https://www.youtube.com/watch?v=AIOE1l1W0Tw
58. Kim, J., Lee, J., Jang, K. M., & Lourentzou, I. (2024).
Exploring the limitations in how ChatGPT introduces environmental
justice issues in the United States: A case study of 3,108 counties.
Telematics and Informatics, 86, 102085. https://doi.org/10.1016/j.tele.2023.102085
59. Krishna, R., Zhu, Y., Groth, O., Johnson, J., Hata, K., Kravitz, J.,
Chen, S., Kalantidis, Y., Li, L.-J., Shamma, D. A., Bernstein, M. S.,
& Li, F.-F. (2016). Visual Genome: Connecting Language and
Vision Using Crowdsourced Dense Image Annotations
(arXiv:1602.07332). arXiv. https://doi.org/10.48550/arXiv.1602.07332
60. LAION. (2023, April 28). A Call to Protect Open-Source AI in
Europe. https://laion.ai/notes/letter-to-the-eu-parliament
61. LAION. (2023, March 29). Petition for keeping up the progress
tempo on AI research while securing its transparency and safety. https://laion.ai/blog/petition
62. LAION. (2023, December 19). Safety Review for LAION 5B.
LAION. https://laion.ai/notes/laion-maintenance
63. LAION. (2024, August 30). Releasing Re-LAION 5B: transparent
iteration on LAION-5B with additional safety fixes. LAION. https://laion.ai/blog/relaion-5b
64. Li, Z., Zhang, W., Zhang, H., Gao, R., & Fang, X. (2024). Global
digital compact: A mechanism for the governance of online discriminatory
and misleading content generation. International Journal of
Human-Computer Interaction. https://doi.org/10.1080/10447318.2024.2314350
65. Liesenfeld, A., & Dingemanse, M. (2024). Rethinking open source
generative AI: open washing and the EU AI Act. The 2024 ACM
Conference on Fairness, Accountability, and Transparency,
1774–1787. https://doi.org/10.1145/3630106.3659005
66. Lin, T.-Y., Maire, M., Belongie, S., Bourdev, L., Girshick, R.,
Hays, J., Perona, P., Ramanan, D., Zitnick, C. L., & Dollár, P.
(2015). Microsoft COCO: Common Objects in Context
(arXiv:1405.0312). arXiv. https://doi.org/10.48550/arXiv.1405.0312
67. Lindgren, S. (2023). Critical Theory of AI (1st edition).
Polity. https://www.wiley.com/en-cn/Critical+Theory+of+AI-p-9781509555772
68. Liu, G. (2022, June 21). DALL-E 2 made its first magazine cover.
Cosmopolitan. https://www.cosmopolitan.com/lifestyle/a40314356/dall-e-2-artificial-intelligence-cover/
69. 刘雅典. (2023). 生成式人工智能艺术形式与情感关系辨 [On the
relationship between form and emotion in generative AI art]. 文艺争鸣, 2023(7), 77–85.
70. Lu, Z., Huang, D., Bai, L., Qu, J., Wu, C., Liu, X., & Ouyang,
W. (2023). Seeing is not always believing: Benchmarking Human and Model
Perception of AI-Generated Images. Advances in Neural Information
Processing Systems, 36, 25435–25447. https://proceedings.neurips.cc/paper_files/paper/2023/hash/505df5ea30f630661074145149274af0-Abstract-Datasets_and_Benchmarks.html
71. Luccioni, A. S., & Viviano, J. D. (2021). What’s in the Box?
A Preliminary Analysis of Undesirable Content in the Common Crawl
Corpus (arXiv:2105.02732). arXiv. https://doi.org/10.48550/arXiv.2105.02732
72. Ma, D., Song, H., & Thomas, N. (2020). Supply Chain Jigsaw:
Piecing Together the Future Global Economy. https://macropolo.org/analysis/supply-chain-ai-semicondutor-lithium-oled-global-economy/
73. Maluleke, V. H., Thakkar, N., Brooks, T., Weber, E., Darrell, T.,
Efros, A. A., Kanazawa, A., & Guillory, D. (2022). Studying bias
in GANs through the lens of race (arXiv:2209.02836). arXiv. https://doi.org/10.48550/arXiv.2209.02836
74. Marcelline, M. (2023, March 26). Microsoft Reportedly Threatens
to Restrict Search Data From Rival AI Tools. PCMag. https://www.pcmag.com/news/microsoft-reportedly-threatens-to-restrict-search-data-from-rival-ai-tools
75. Maslej, N., Fattorini, L., Parli, V., Reuel, A., Brynjolfsson, E.,
Etchemendy, J., Ligett, K., Lyons, T., Manyika, J., Niebles, J. C.,
Shoham, Y., Wald, R., & Clark, J. (2024). Artificial
intelligence index report 2024. Institute for Human-Centered AI. https://aiindex.stanford.edu/wp-content/uploads/2024/04/HAI_AI-Index-Report-2024.pdf
76. McCormack, J., Cruz Gambardella, C., Rajcic, N., Krol, S. J., Llano,
M. T., & Yang, M. (2023). Is writing prompts really making art? In
C. Johnson, N. Rodríguez-Fernández, & S. M. Rebelo (Eds.),
Artificial Intelligence in Music, Sound, Art and Design (pp.
196–211). Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-29956-8_13
77. McCormack, J., Llano, M. T., Krol, S. J., & Rajcic, N. (2024).
No longer trending on artstation: prompt analysis of generative AI
art (arXiv:2401.14425). arXiv. https://doi.org/10.48550/arXiv.2401.14425
78. Mitchell, M. (2019). Artificial Intelligence: A Guide for
Thinking Humans (1st ed.). Farrar, Straus and Giroux.
79. Moruzzi, C. (2020, July 8). Should human artists fear AI? A
report on the perception of creative AI. xCoAx 2020.
80. Naik, R., & Nushi, B. (2023). Social Biases through the
Text-to-Image Generation Lens. Proceedings of the 2023 AAAI/ACM
Conference on AI, Ethics, and Society, 786–808. https://doi.org/10.1145/3600211.3604711
81. Narayan, D. (2022). Platform Capitalism and Cloud Infrastructure:
Theorizing a Hyper-Scalable Computing Regime. Environment and
Planning A: Economy and Space, 54(5), 911–929. https://doi.org/10.1177/0308518X221094028
82. Newman, M., & Cantrill, A. (2023, April 24). The Future of AI
Relies on a High School Teacher’s Free Database. Bloomberg. https://www.bloomberg.com/news/features/2023-04-24/a-high-school-teacher-s-free-image-database-powers-ai-unicorns
83. Newstead, T., Eager, B., & Wilson, S. (2023). How AI can
perpetuate - or help mitigate - gender bias in leadership.
Organizational Dynamics, 52(4), 100998. https://doi.org/10.1016/j.orgdyn.2023.100998
84. Noever, D. A., & Noever, S. E. M. (2021). Reading Isn’t
Believing: Adversarial Attacks On Multi-Modal Neurons
(arXiv:2103.10480). arXiv. https://doi.org/10.48550/arXiv.2103.10480
85. Noy, S., & Zhang, W. (2023). Experimental evidence on the
productivity effects of generative artificial intelligence.
Science, 381(6654), 187–192. https://doi.org/10.1126/science.adh2586
86. NTIA. (2024). AI Accountability Policy Report. https://www.ntia.gov/sites/default/files/publications/ntia-ai-report-final.pdf
87. O’Neil, C. (2016). Weapons of Math Destruction: How Big Data
Increases Inequality and Threatens Democracy (1st edition). Crown.
88. Otterbacher, J., Barlas, P., Kleanthous, S., & Kyriakou, K.
(2019). How Do We Talk About Other People? Group (Un)Fairness in Natural
Language Image Descriptions. Proceedings of the AAAI Conference on
Human Computation and Crowdsourcing, 7, 106–114. https://aaai.org/ojs/index.php/HCOMP/article/view/5267
89. Patton, D. U., Frey, W. R., McGregor, K. A., Lee, F.-T., McKeown,
K., & Moss, E. (2020). Contextual Analysis of Social Media.
Proceedings of the AAAI/ACM Conference on AI, Ethics, and
Society, 337–342. https://doi.org/10.1145/3375627.3375841
90. Perrigo, B. (2023, January 18). Exclusive: The $2 Per Hour
Workers Who Made ChatGPT Safer. TIME. https://time.com/6247678/openai-chatgpt-kenya-workers/
91. Petrie, H., Harrison, C., & Dev, S. (2005). Describing images on
the web: a survey of current practice and prospects for the future.
Proceedings of Human Computer Interaction International (HCII),
71(2), 1–10. https://www.academia.edu/download/30388620/hcii05_alt_text_paper.pdf
92. Pi, Y. (2024). Missing value chain in generative AI governance:
China as an example (arXiv:2401.02799). arXiv. https://doi.org/10.48550/arXiv.2401.02799
93. Pilz, K., & Heim, L. (2023). Compute at Scale: A Broad
Investigation into the Data Center Industry (arXiv:2311.02651).
arXiv. https://doi.org/10.48550/arXiv.2311.02651
94. Prabhu, V. U., & Birhane, A. (2020). Large image datasets: A
pyrrhic win for computer vision? (arXiv:2006.16923). arXiv. https://doi.org/10.48550/arXiv.2006.16923
95. Qadri, R., Shelby, R., Bennett, C. L., & Denton, R. (2023). AI’s
regimes of representation: a community-centered study of text-to-image
models in south asia. 2023 ACM Conference on Fairness,
Accountability, and Transparency, 506–517. https://doi.org/10.1145/3593013.3594016
96. Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal,
S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., &
Sutskever, I. (2021). Learning transferable visual models from
natural language supervision (arXiv:2103.00020). arXiv. https://doi.org/10.48550/arXiv.2103.00020
97. Ramesh, A., Pavlov, M., Goh, G., Gray, S., Voss, C., Radford, A.,
Chen, M., & Sutskever, I. (2021). Zero-Shot Text-to-Image
Generation (arXiv:2102.12092). arXiv. https://doi.org/10.48550/arXiv.2102.12092
98. Raymond, E. S. (2014). 大教堂与集市 [The cathedral and the bazaar]
(卫剑钒, Trans.). 机械工业出版社 [China Machine Press].
99. Richter, F. (2025, February 27). Amazon and Microsoft Stay Ahead
in Global Cloud Market. Statista Daily Data. https://www.statista.com/chart/18819/worldwide-market-share-of-leading-cloud-infrastructure-service-providers
100. Romero, A. (2024, April 3). The state of generative AI,
2024. The Algorithmic Bridge. https://www.thealgorithmicbridge.com/p/the-state-of-generative-ai-2024
101. Roose, K. (2022, September 2). An A.I.-Generated Picture Won an
Art Prize. Artists Aren’t Happy. The New York Times. https://www.nytimes.com/2022/09/02/technology/ai-artificial-intelligence-artists.html
102. Schaffer, S. (1994). Babbage’s Intelligence: Calculating Engines
and the Factory System. Critical Inquiry, 21(1),
203–227. https://www.jstor.org/stable/1343892
103. Schick, N. (2023, April 24). Nina Schick: How could societies
adapt to generative AI? (C. Schuhmann, Interviewer) [Interview]. https://www.youtube.com/watch?v=9HKzZIDqX_Y
104. Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R.,
Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M.,
Schramowski, P., Kundurthy, S., Crowson, K., Schmidt, L., Kaczmarczyk,
R., & Jitsev, J. (2022). LAION-5B: An open large-scale dataset
for training next generation image-text models (arXiv:2210.08402).
arXiv. https://doi.org/10.48550/arXiv.2210.08402
105. Schuhmann, C. (2023, June 21). AI as a Superpower: LAION and
the Role of Open Source in Artificial Intelligence (devmio,
Interviewer) [Interview]. https://mlconference.ai/blog/ai-as-a-superpower-laion-and-the-role-of-open-source-in-artificial-intelligence/
106. Schuhmann, C. (2023, May 1). Christoph Schuhmann on Open Source
AI (A. Chan, Interviewer) [Interview]. https://theinsideview.ai/christoph
107. Selbst, A. D., Boyd, D., Friedler, S. A., Venkatasubramanian, S.,
& Vertesi, J. (2019). Fairness and Abstraction in Sociotechnical
Systems. Proceedings of the Conference on Fairness, Accountability,
and Transparency, 59–68. https://doi.org/10.1145/3287560.3287598
108. Shankar, S., Halpern, Y., Breck, E., Atwood, J., Wilson, J., &
Sculley, D. (2017). No classification without representation:
assessing geodiversity issues in open data sets for the developing
world (arXiv:1711.08536). arXiv. https://doi.org/10.48550/arXiv.1711.08536
109. 申一方. (2023). 从亚像似符理论看人工智能艺术生成趋势 [Trends in
AI art generation viewed through the theory of hypoicons]. 符号与传媒, 2023(2), 162–174.
110. Smith, A., Schroeder, H., Epstein, Z., Cook, M., Colton, S., &
Lippman, A. (2023). Trash to treasure: using text-to-image models to
inform the design of physical artefacts (arXiv:2302.00561). arXiv.
https://doi.org/10.48550/arXiv.2302.00561
111. Srinivasan, R., & Chander, A. (2021). Biases in AI Systems: A
survey for practitioners. Queue, 19(2), 45–64. https://doi.org/10.1145/3466132.3466134
112. Stewart, R., Andriluka, M., & Ng, A. Y. (2016). End-to-End
People Detection in Crowded Scenes. 2016 IEEE Conference on Computer
Vision and Pattern Recognition (CVPR), 2325–2333. https://doi.org/10.1109/CVPR.2016.255
113. Stout, K. (2025). ICLE Comments to OSTP on Development of an AI
Action Plan. https://laweconcenter.org/wp-content/uploads/2025/03/OSTP-AI-2025-comments-v-1.pdf
114. Sun, L., Wei, M., Sun, Y., Suh, Y. J., Shen, L., & Yang, S.
(2023). Smiling Women Pitching Down: Auditing Representational and
Presentational Gender Biases in Image-Generative AI. Journal of
Computer-Mediated Communication, 29(1), zmad045. https://doi.org/10.1093/jcmc/zmad045
115. Suresh, H., & Guttag, J. (2021). A Framework for Understanding
Sources of Harm throughout the Machine Learning Life Cycle. Equity
and Access in Algorithms, Mechanisms, and Optimization, 1–9. https://doi.org/10.1145/3465416.3483305
116. 汤筠冰. (2024). 生成式AI影响下的艺术媒介本体论转向 [The
ontological turn of the art medium under the influence of generative
AI]. 上海师范大学学报(哲学社会科学版), 1, 59–67.
117. Thomas, R. J., & Thomson, T. J. (2023). What does a journalist
look like? Visualizing journalistic roles through AI. Digital
Journalism. https://doi.org/10.1080/21670811.2023.2229883
118. Thompson, A. D. (2022). What’s in my AI? A Comprehensive
Analysis of Datasets Used to Train GPT-1, GPT-2, GPT-3, GPT-NeoX-20B,
Megatron-11B, MT-NLG, and Gopher. https://lifearchitect.ai/whats-in-my-ai/
119. Thylstrup, N. B. (2022). The ethics and politics of data sets in
the age of machine learning: deleting traces and encountering remains.
Media, Culture & Society, 44(4), 655–671. https://doi.org/10.1177/01634437211060226
120. Torralba, A., Fergus, R., & Freeman, W. T. (2008). 80 Million
Tiny Images: A Large Data Set for Nonparametric Object and Scene
Recognition. IEEE Transactions on Pattern Analysis and Machine
Intelligence, 30(11), 1958–1970. https://doi.org/10.1109/TPAMI.2008.128
121. Tufano, M., Agarwal, A., Jang, J., Moghaddam, R. Z., &
Sundaresan, N. (2024). AutoDev: Automated AI-Driven Development
(arXiv:2403.08299). arXiv. https://doi.org/10.48550/arXiv.2403.08299
122. Turk, V. (2023, October 10). How AI reduces the world to
stereotypes. Rest of World. https://restofworld.org/2023/ai-image-stereotypes/
123. Turkle, S. (2017). Alone Together: Why We Expect More from
Technology and Less from Each Other. Basic Books.
124. Valyaeva, A. (2023, August 15). AI image statistics: how much
content was created by AI. Everypixel Journal. https://journal.everypixel.com/ai-image-statistics
125. Van Der Vlist, F., Helmond, A., & Ferrari, F. (2024). Big AI:
Cloud infrastructure dependence and the industrialisation of artificial
intelligence. Big Data & Society, 11(1),
20539517241232630. https://doi.org/10.1177/20539517241232630
126. Vipra, J., & West, S. M. (2023). Computational Power and
AI. AI Now Institute. https://ainowinstitute.org/publication/policy/compute-and-ai
127. Wang, Q., Bian, T., Yin, Y., Xu, T., Cheng, H., Meng, H. M., Zheng,
Z., Chen, L., & Wu, B. (2023). Language agents for detecting
implicit stereotypes in text-to-image models at scale
(arXiv:2310.11778). arXiv. https://doi.org/10.48550/arXiv.2310.11778
128. Widder, D. G., West, S., & Whittaker, M. (2023). Open (For
Business): Big Tech, Concentrated Power, and the Political Economy of
Open AI (SSRN Scholarly Paper 4543807). https://doi.org/10.2139/ssrn.4543807
129. Williams, S. (2015).
若为自由故:自由软件之父理查德·斯托曼传 [Free as in freedom: A
biography of Richard Stallman, father of free software] (邓楠 &
李凡希, Trans.). 人民邮电出版社 [Posts & Telecom Press].
130. Wojtkiewicz, K. (2023). How do you solve a problem like DALL-E 2?
The Journal of Aesthetics and Art Criticism, 81(4),
454–467. https://doi.org/10.1093/jaac/kpad046
131. Wolfe, R., & Caliskan, A. (2022). American == White in
Multimodal Language-and-Image AI. Proceedings of the 2022 AAAI/ACM
Conference on AI, Ethics, and Society, 800–812. https://doi.org/10.1145/3514094.3534136
132. Wu, D., Yu, Z., Ma, N., Jiang, J., Wang, Y., Zhou, G., Deng, H.,
& Li, Y. (2023). StyleMe: towards intelligent fashion generation
with designer style. Proceedings of the 2023 CHI Conference on Human
Factors in Computing Systems, 1–16. https://doi.org/10.1145/3544548.3581377
133. Xu, J., Li, H., & Zhou, S. (2015). An overview of deep
generative models. IETE Technical Review, 32(2),
131–139. https://doi.org/10.1080/02564602.2014.987328
134. Zhang, B., & Carpano, D. (2023). Chromium as a tool of
logistical power: A material political economy of open-source. Big
Data & Society, 10(1), 20539517231182399. https://doi.org/10.1177/20539517231182399
135. Zhou, J., & Guo, J. (2020). Why AI Alt Text Generator
Fail. https://ecommons.cornell.edu/server/api/core/bitstreams/cd4e8a1a-c42c-428b-96d4-c86da92c8dd1/content
136. Zhu, Y., Baca, J., Rekabdar, B., & Rawassizadeh, R. (2023).
A Survey of AI Music Generation Tools and Models
(arXiv:2308.12982). arXiv. https://doi.org/10.48550/arXiv.2308.12982