
Introducing the Model Card Toolkit for Easier Model Transparency Reporting



Machine learning (ML) model transparency is important across a wide variety of domains that impact people’s lives, from healthcare to personal finance to employment. The information needed by downstream users varies, as do the details developers need in order to decide whether a model is appropriate for their use case. This need for transparency led us to develop Model Cards, which provide a structured framework for reporting on ML model provenance, usage, and ethics-informed evaluation, and which give a detailed overview of a model’s suggested uses and limitations that can benefit developers, regulators, and downstream users alike.

Over the past year, we’ve launched Model Cards publicly and worked to create Model Cards for open-source models released by teams across Google. For example, the MediaPipe team creates state-of-the-art computer vision models for a number of common tasks, and has included Model Cards for each of their open-source models in their GitHub repository. Creating Model Cards like these takes substantial time and effort, often requiring a detailed evaluation and analysis of both data and model performance. In many cases, creators additionally need to evaluate how a model performs on different subsets of data, noting any areas where it underperforms. Further, Model Card creators may want to report on the model’s intended uses and limitations, as well as any ethical considerations potential users might find useful, and to compile and present that information in a format that’s accessible and understandable.

To streamline the creation of Model Cards for all ML practitioners, we are sharing the Model Card Toolkit (MCT), a collection of tools that support developers in compiling the information that goes into a Model Card and that aid in the creation of interfaces that will be useful for different audiences. To demonstrate how the MCT can be used in practice, we have also released a Colab tutorial that builds a Model Card for a simple classification model trained on the UCI Census Income dataset.
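To give a quick taste before the deeper walkthrough below, the snippet that follows sketches the basic flow from the toolkit’s launch-time README: install with `pip install model-card-toolkit`, scaffold a card, and export it as HTML. The output directory and model name are illustrative placeholders, and method names may differ in later releases.

```python
import model_card_toolkit

# Initialize the toolkit with a directory in which to store generated assets
# ('model_card_assets' is an arbitrary example path).
mct = model_card_toolkit.ModelCardToolkit('model_card_assets')

# Scaffold a ModelCard object, which can then be freely populated.
model_card = mct.scaffold_assets()
model_card.model_details.name = 'Census Income Classifier'  # example name

# Write the Model Card data to a JSON file.
mct.update_model_card_json(model_card)

# Render the Model Card document as an HTML page.
html = mct.export_format()
```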

Introducing the MCT
To guide the Model Card creator in organizing model information, we provide a JSON schema that specifies the fields to include in the Model Card. Using the model provenance information stored with ML Metadata (MLMD), the MCT automatically populates the JSON with relevant information, such as class distributions in the data and model performance statistics. We also provide a ModelCard data API to represent an instance of the JSON schema and visualize it as a Model Card. The Model Card creator can choose which metrics and graphs to display in the final Model Card, including metrics that highlight areas where the model’s performance might deviate from its overall performance.
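As a rough sketch of how that auto-population might be wired up: the MLMD connection boilerplate below is standard, but the `mlmd_store` and `model_uri` parameter names reflect the launch-era API, and the file paths are hypothetical.

```python
import model_card_toolkit
from ml_metadata.metadata_store import metadata_store
from ml_metadata.proto import metadata_store_pb2

# Connect to the MLMD store that recorded the model's training lineage.
connection_config = metadata_store_pb2.ConnectionConfig()
connection_config.sqlite.filename_uri = 'metadata.sqlite'  # hypothetical path
connection_config.sqlite.connection_mode = 3  # READWRITE_OPENCREATE
store = metadata_store.MetadataStore(connection_config)

# Pointing the toolkit at the store and the trained model's URI lets it
# pre-populate the JSON with dataset statistics and evaluation metrics.
mct = model_card_toolkit.ModelCardToolkit(
    output_dir='model_card_assets',
    mlmd_store=store,
    model_uri='path/to/trained/model')  # hypothetical URI

model_card = mct.scaffold_assets()  # fields are auto-filled from MLMD
```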
Once the MCT has populated the Model Card with key metrics and graphs, the Model Card creator can supplement this with information regarding the model’s intended usage, limitations, trade-offs, and any other ethical considerations that would otherwise be unknown to people using the model. If a model underperforms for certain slices of data, the limitations section would be another place to acknowledge this, along with suggested mitigation strategies to help developers address these issues. This type of information is critical in helping developers decide whether or not a model is suitable for their use case, and helps Model Card creators provide context so that their models are used appropriately. Right now, we’re providing one UI template to visualize the Model Card, but you can create different templates in HTML should you want to visualize the information in other formats.
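Continuing the sketch above, the human-authored sections can be filled in through the same ModelCard object before re-exporting. The field names follow the launch-era data API and may vary by version, and the strings are purely illustrative, not taken from the actual Colab.

```python
# Supplement the auto-populated card with human-authored context.
model_card.considerations.use_cases = [
    'Estimating whether annual income exceeds $50K from census features.']
model_card.considerations.limitations = [
    'The model may underperform on sparsely represented demographic slices; '
    'consider re-weighting or collecting additional data before deployment.']

# Re-serialize the card and render it with the default HTML template.
mct.update_model_card_json(model_card)
html = mct.export_format()
```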

Currently, the MCT is available to anyone using TensorFlow Extended (TFX) in open source or on Google Cloud Platform. Users who are not serving their ML models via TFX can still leverage the JSON schema and the methods for visualizing Model Cards with the HTML template.
Here is an example of the completed Model Card from the Colab tutorial, which leverages the MCT and the provided UI template.
Conclusion
Currently, the MCT includes a standard template for reporting on ML models broadly, but we’re continuing to create UI templates for more specific applications of ML. If you’d like to join the conversation about what fields are important and how best to leverage the MCT for different use cases, you can get started here or with the Colab tutorial. Let us know how you’ve leveraged the MCT for your use case by emailing us at model-cards@google.com. You can learn more about Google’s efforts to promote responsible AI in the TensorFlow ecosystem on our TensorFlow Responsible AI page.

Acknowledgements
Huanming Fang, Hui Miao, Karan Shukla, Dan Nanas, Catherina Xu, Christina Greer, Neoklis Polyzotis, Tulsee Doshi, Tiffany Deng, Margaret Mitchell, Timnit Gebru, Andrew Zaldivar, Mahima Pushkarna, Meena Natarajan, Roy Kim, Parker Barnes, Tom Murray, Susanna Ricco, Lucy Vasserman, and Simone Wu