Tajha Chappellet-Lanier, FedScoop, Nov. 17, 2017 - "The use of algorithms for screening immigrants could easily lead to an “inaccurate and biased” process, a group of artificial intelligence and machine learning experts argues.
In a letter to acting Secretary of Homeland Security Elaine Duke, 54 computer scientists and mathematicians from Google, Microsoft, MIT and more strongly criticize the proposed use of automation in “extreme vetting” — a process by which immigration enforcement would consider an immigrant’s internet and social media presence as part of their visa application.
While the extreme vetting initiative has not yet begun, U.S. Immigration and Customs Enforcement is actively looking for a contractor to deliver a service that “automates, centralizes and streamlines the current manual vetting process.” According to the Brennan Center for Justice, ICE wants to award a contract by September 2018. The directive comes from President Donald Trump’s January 2017 executive order which calls for a screening process to “evaluate the applicant’s likelihood of becoming a positively contributing member of society and the applicant’s ability to make contributions to the national interest.”
Herein lies the problem, experts say — “As far as we are aware, neither the federal government nor anyone else has defined, much less attempted to quantify, these characteristics,” they write. “Algorithms designed to predict these undefined qualities could be used to arbitrarily flag groups of immigrants under a veneer of objectivity.”
“Inevitably, because these characteristics are difficult (if not impossible) to define and measure, any algorithm will depend on ‘proxies’ that are more easily observed and may bear little or no relationship to the characteristics of interest,” the letter continues. “For example, developers could stipulate that a Facebook post criticizing U.S. foreign policy would identify a visa applicant as a threat to national interests.”
It’s an interesting argument — while there is much potential for AI and machine learning in areas of government, the group says, this is not one of those areas. “We respectfully urge you to abandon the Extreme Vetting Initiative,” the letter concludes.
The experts' letter is not the only pushback: a group of 56 civil rights organizations also sent a letter to Secretary Duke opposing the policy."