“And then I thought I should read the paper, so then I started reading the paper and remained outraged.” Excluding citations, the paper is 36 pages long, far more verbose than most AI papers you’ll see, and fairly labyrinthine in describing the results of the authors’ experiments and their justifications for their findings.
Kosinski asserted in an interview with Quartz that, whatever the methods of his paper, his research was in service of gay and lesbian people whom he sees as under siege in modern society.
Last week, The Economist published a story about Stanford Graduate School of Business researchers Michal Kosinski and Yilun Wang’s claim that they had built artificial intelligence that could tell whether we are gay or straight based on a few images of our faces.
It seemed that Kosinski, an assistant professor at Stanford’s graduate business school who had previously gained some notoriety for establishing that AI could predict someone’s personality based on 50 Facebook Likes, had done it again; he’d surfaced another uncomfortable truth about technology.
But unlike a nuke, the fundamental architecture of today’s best AI makes the margin between success and failure fuzzy and unknowable, and in the end accuracy doesn’t matter if some autocrat likes the idea and runs with it.
I had done my best to pay as little attention to the matter as possible.
Do our faces show the world clues to our sexuality?
The Council that ruled included Judges Scirica, Sloviter, McKee, Rendell, Barry, and Ambro.
(I tweeted some thoughts about that proceeding here and here.) Third, Judge Scirica was also involved in the current matter, as Heidi Bond recounts: On the advice of two friends, I spoke to several people in the federal judiciary—first, Jeffrey Minear, Counselor to Chief Justice Roberts, and then, at his referral, Judge Scirica of the Third Circuit, in his capacity as chair of the Committee on Judicial Conduct and Disability.