The way we train AI is fundamentally flawed

by David Solomonoff

AI is increasingly used in critical applications: medicine, law, infrastructure maintenance, law enforcement, and the military, domains that can involve life-or-death decisions, including the deployment of deadly force. The "loss of trust" referenced in the article seems like an understatement.

The process used to build most machine-learning models today cannot tell us which models will work in the real world and which won't.

Models that should have been equally accurate performed differently when tested on real-world data.

Some stress tests are also at odds with each other: models that were good at recognizing pixelated images were often bad at recognizing images with high contrast, for example. It might not always be possible to train a single model that passes all stress tests.
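A minimal sketch of how this can happen (a toy illustration, not the setup from the article): when two features are perfectly correlated in the training data, a model may latch onto either one. Both choices look identical on held-in data, but they diverge as soon as the correlation breaks under distribution shift.

```python
# Toy training set: feature A and feature B always agree,
# so a model relying on either one fits the labels perfectly.
train = [((1, 1), 1), ((0, 0), 0), ((1, 1), 1), ((0, 0), 0)]

# Stress-test set: the correlation between A and B is broken.
shifted = [((1, 0), 1), ((0, 1), 0)]

model_a = lambda x: x[0]  # predicts from feature A
model_b = lambda x: x[1]  # predicts from feature B

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

print(accuracy(model_a, train))    # 1.0
print(accuracy(model_b, train))    # 1.0 -- indistinguishable in training
print(accuracy(model_a, shifted))  # 1.0
print(accuracy(model_b, shifted))  # 0.0 -- fails under shift
```

The training process alone cannot distinguish `model_a` from `model_b`; only a stress test that breaks the spurious correlation reveals the difference.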

Katherine Heller, who works on AI for healthcare at Google, says: "We've lost a lot of trust when it comes to the killer applications, that's important trust that we want to regain."

Source: The way we train AI is fundamentally flawed – MIT Technology Review
