Google open-sources LIT, a toolset for evaluating natural language models

submitted by
Style Pass
2020-08-14 16:54:53

Google-affiliated researchers today released the Language Interpretability Tool (LIT), an open source, framework-agnostic platform and API for visualizing, understanding, and auditing natural language processing models. It focuses on questions about AI model behavior, such as why a model made a certain prediction or why it performs poorly on particular inputs, and it incorporates aggregate analysis into a browser-based interface designed to enable exploration of text generation behavior.

Advances in modeling have led to unprecedented performance on natural language processing tasks, but questions remain about models’ tendencies to behave according to biases and heuristics. There’s no silver bullet for analysis — data scientists must often employ several techniques to build a comprehensive understanding of model behavior.

That’s where LIT comes in. The toolset is architected so that users can hop between visualizations and analyses to form hypotheses and validate them over a data set. New data points can be added on the fly and their effect on the model visualized immediately, while side-by-side comparison allows two models, or two data points, to be viewed simultaneously. And LIT calculates and displays metrics for entire data sets as well as for the current selection and for manually or automatically generated subsets, spotlighting patterns in model performance.
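The idea of computing a metric over the full data set and over selected subsets ("slices") can be sketched in plain Python. This is an illustration of the concept only, not LIT's actual API; all names below are hypothetical.

```python
# Conceptual sketch: compute a metric (accuracy) over a whole data set
# and over named slices, as a metrics table like LIT's might display.

def accuracy(examples, predict):
    """Fraction of examples whose prediction matches the label."""
    if not examples:
        return 0.0
    correct = sum(1 for ex in examples if predict(ex["text"]) == ex["label"])
    return correct / len(examples)

def slice_metrics(examples, predict, slicers):
    """Accuracy on the full set plus each named subset."""
    results = {"all": accuracy(examples, predict)}
    for name, keep in slicers.items():
        results[name] = accuracy([ex for ex in examples if keep(ex)], predict)
    return results

# Toy "model": predicts positive iff the text contains "good".
predict = lambda text: "pos" if "good" in text else "neg"

data = [
    {"text": "good movie", "label": "pos"},
    {"text": "bad movie", "label": "neg"},
    {"text": "not good at all", "label": "neg"},  # model gets this wrong
    {"text": "great film", "label": "pos"},       # and this one too
]

metrics = slice_metrics(data, predict, {
    "contains_negation": lambda ex: "not" in ex["text"],
})
print(metrics)  # accuracy drops to 0.0 on the negation slice
```

Slicing like this is what surfaces heuristics such as a model keying on individual words rather than sentence meaning: the aggregate number looks acceptable while a targeted subset exposes the failure.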
