AI Systems Exhibit Gender and Racial Biases When Learning Language (3 of 3)
Caption
Predicted percentage of women in a given occupation. This material relates to a paper that appeared in the April 14, 2017, issue of Science, published by AAAS. The paper, by A. Caliskan at Princeton University in Princeton, NJ, and colleagues, was titled "Semantics derived automatically from language corpora contain human-like biases."
Credit
Aylin Caliskan
Usage Restrictions
Please cite the owner of the material when publishing. This material may be freely used by reporters as part of news coverage, with proper attribution. Non-reporters must contact Science for permission.
License
Licensed content