Gender Perception in Artificial Intelligence: Deepseek, Gemini and ChatGPT Comparison

Authors

  • Nebile Erogul Ministry of National Education, General Directorate of Measurement, Evaluation and Examination Services, Serhat mah. 1290. Sok. No: 8/B 06374 Yenimahalle Ankara, Turkey

DOI:

https://doi.org/10.15503/jecs2025.3.87.116

Keywords:

gender, gender perception, stereotypes, Artificial Intelligence, biased algorithm

Abstract

Aim. Based on the premise that artificial intelligence has a shaping, guiding, and reconstructive impact on gender perception, the main aim of this research is to contribute to understanding the reproduction processes of gender in the digital age by examining how gender representations are constructed in AI algorithms.

Method. This research follows a qualitative research design. Responses from Deepseek, Gemini, and ChatGPT to six user queries were analysed using critical discourse analysis. Themes such as dichotomous positioning, stereotyping, different experiences, activism, and breaking stereotypes were identified in the study.

Results. All three tools position femininity and masculinity dichotomously, clearly separating them by physical appearance, roles, and social expectations. Compared to Deepseek and Gemini, ChatGPT stereotypes gender roles more heavily through media and social media representations, presenting masculinity as strong and protective and femininity as self-sacrificing and passive. In addressing the cultural and historical changes in gender roles, Deepseek emphasises the transformation of male roles, Gemini stresses the diversity of female roles, and ChatGPT foregrounds cultural diversity and the conflict between traditional and modern gender relations. Deepseek and Gemini underscore women’s fight for social change, while ChatGPT also includes male activists, providing a more inclusive representation of activism. All three tools promote the breaking of gender stereotypes through equal responsibility, caregiving men, and shared household duties.

Conclusion. The research reveals that Deepseek, Gemini, and ChatGPT reproduce traditional femininity and masculinity roles in their gender representations. While all tools highlight attributes traditionally associated with women, such as motherhood, domestic responsibilities, and caregiving roles, they position men as authority figures, breadwinners, and protectors. ChatGPT places more emphasis on the passive and obedient traits of women and the strong and rational traits of men, presenting more stereotypes. Deepseek draws attention to sexuality-based stereotypes in masculinity, while Gemini highlights the diversity of women’s experiences. All three tools acknowledge men’s caregiving roles, though in ways that remain limited in challenging traditional norms.

Author Biography

  • Nebile Erogul, Ministry of National Education, General Directorate of Measurement, Evaluation and Examination Services, Serhat mah. 1290. Sok. No: 8/B 06374 Yenimahalle Ankara, Turkey

    Researcher specializing in gender and critical masculinity studies in Türkiye. She holds a PhD in Sociology, with a dissertation focusing on the sociology of critical masculinities. Her main areas of study include education and the cultural dynamics of femininity and masculinity. She currently works at the Ministry of National Education (General Directorate of Measurement, Evaluation and Examination Services) and teaches undergraduate courses at Gazi University. Her recent work has appeared in national and international journals focusing on gender and education.

Published

2025-09-23

How to Cite

Erogul, N. (2025). Gender Perception in Artificial Intelligence: Deepseek, Gemini and ChatGPT Comparison. Journal of Education Culture and Society, 16(2), 87-116. https://doi.org/10.15503/jecs2025.3.87.116