‘Deep State’ federal surveillance can now use AI to analyze human faces


If you feel invaded, disrespected, and mistreated by the level of surveillance to which innocent people are subjected these days, whether that means rights violations by the TSA at airports or the intense identity verification required to get a job, you will not like the latest twist on this Big Brother-style method. But you definitely need to be aware of it.

This latest craziness is a Deep State type of activity that involves imposing surveillance on regular people by using artificial intelligence (AI) to analyze human faces and label certain people as criminals. The information comes from Cornell University scientists.

How it’s done

The AI technology determines that certain people are more likely to be criminals based on specific facial features. These attributes include:

  • Mouth-nose angle: Supposedly, if imaginary lines were drawn from the corners of the mouth to the tip of the nose in photo subjects with “criminal” face types, the angle between those lines is about 20% smaller than in noncriminal faces.
  • Lip curvature: In people with “criminal” faces, the curvature of the upper lip is about 23% greater than in people with noncriminal faces.
  • Eye inner corner distance: When the distance between the eyes’ inner corners is measured, that distance is about 6% shorter in criminals’ photos than in noncriminals’ images.
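To make these measurements concrete, here is a minimal sketch of how such geometric features could be computed from facial landmark coordinates. The function names and the (x, y) points are hypothetical illustrations; the article does not specify how the study actually extracted its measurements.

```python
import math

def mouth_nose_angle(nose_tip, mouth_left, mouth_right):
    """Angle (in degrees) at the nose tip between the lines drawn
    to each corner of the mouth. Points are (x, y) pairs."""
    v1 = (mouth_left[0] - nose_tip[0], mouth_left[1] - nose_tip[1])
    v2 = (mouth_right[0] - nose_tip[0], mouth_right[1] - nose_tip[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos_angle = dot / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(cos_angle))

def inner_eye_distance(left_inner_corner, right_inner_corner):
    """Straight-line distance between the inner corners of the eyes."""
    return math.hypot(right_inner_corner[0] - left_inner_corner[0],
                      right_inner_corner[1] - left_inner_corner[1])

# Example with made-up landmark coordinates: nose tip at the origin,
# mouth corners symmetric below it.
angle = mouth_nose_angle((0, 0), (-1, -2), (1, -2))
dist = inner_eye_distance((-1, 0), (1, 0))
```

Claims such as “20% smaller” would then come from comparing these per-face numbers across two groups of photos, which is exactly the kind of statistical comparison the red flags below call into question.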

Xi Zhang and Xiaolin Wu, the two scientists who performed the study, said:

“the faces of general law-biding public have a greater degree of resemblance compared with the faces of criminals, or criminals have a higher degree of dissimilarity in facial appearance than normal people.”

Gigantic red flags

None of us can control the unique features that make up our faces. And it is quite shaky to leap to the conclusion that having certain facial features makes a person likely to be a criminal. It is disturbing that people can be perceived and labeled as criminals or can in any other way have assumptions made about them based solely on photo recognition technology.

Cornell University’s law website, from the same Cornell University whose arXiv repository hosts the study about AI facial recognition, provides this definition:

“presumption of innocence: One of the most sacred principles in the American criminal justice system, holding that a defendant is innocent until proven guilty. In other words, the prosecution must prove, beyond a reasonable doubt, each essential element of the crime charged.”

The “innocent until proven guilty” concept is foundational to our lawful society. And the AI facial recognition photo technology, if used to peg people as criminals when they are innocent, flies in the face of that.

What’s the point of this AI technology, anyway? What is its intended purpose? There are more questions about the topic than there are answers. And both the questions and the likely answers are quite unsettling.

Would a government simply decide, based on AI photos that determine a person’s facial features are those of a criminal, that the person should be labeled as a criminal when they hadn’t even committed a crime?

But here’s a novel idea: How about going after the real criminals rather than treating law-abiding, innocent citizens with suspicion? That would make more sense.

Keep informed when these types of technologies try to rear their ugly heads. Speak out and push back when it becomes increasingly apparent that the technologies are intended to be used for surveillance of innocent citizens, whether that means contacting your U.S. Senators and Congressional representatives or sharing information with loved ones. In this day and age, staying silent and doing nothing are not options for those who care about the future of freedom in our great nation.

Sources:

WakingTimes.com

CornellUniversity.edu

CornellUniversity.edu

AmericanConservative.com