The Dataset Nutrition Label Project

Empowering data scientists and policymakers with practical tools to improve AI outcomes

The Problem

Garbage in, Garbage out

Incomplete, misunderstood, and historically problematic data can negatively influence AI algorithms.

Algorithms matter, and so does the data they’re trained on. To improve the accuracy and fairness of algorithms that determine everything from navigation directions to mortgage approvals, we need to make it easier for practitioners to quickly assess the viability and fitness of datasets they intend to train AI algorithms on.

There’s a missing step in the AI development pipeline: assessing datasets based on standard quality measures that are both qualitative and quantitative. We are working on packaging these measures into an easy-to-use Dataset Nutrition Label.

Our Solution

Standard interactive reports

A "nutrition label" for datasets.

The Dataset Nutrition Label aims to create a standard for interrogating datasets for measures that will ultimately drive the creation of better, more inclusive algorithms.

Our current prototype includes a highly generalizable, interactive data diagnostic label that allows for exploring any number of domain-specific aspects of a dataset. Similar to a nutrition label on food, our label aims to highlight the key ingredients in a dataset, such as metadata and populations, as well as unique or anomalous features regarding distributions, missing data, and comparisons to other ‘ground truth’ datasets. We are currently testing our label on several datasets, with an eye towards open sourcing this effort and gathering community feedback.
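
As a rough illustration of the quantitative side of the label, the Python sketch below (not the prototype's actual code; the function name and output structure are ours) computes a few of the 'ingredients' mentioned above for a tabular dataset: missing data, distribution summaries, and simple outlier flags.

```python
# A minimal sketch (not the prototype's actual code) of the kinds of
# quantitative "ingredients" a Dataset Nutrition Label might report:
# missing data, per-column distribution summaries, and simple anomaly flags.
import pandas as pd


def summarize_dataset(df: pd.DataFrame) -> dict:
    """Return illustrative label facts for a tabular dataset."""
    facts = {
        "n_rows": len(df),
        "n_columns": df.shape[1],
        "missing_fraction": df.isna().mean().round(3).to_dict(),
    }
    numeric = df.select_dtypes(include="number")
    facts["distributions"] = numeric.describe().round(3).to_dict()
    # Flag values falling far outside each numeric column's interquartile range.
    q1, q3 = numeric.quantile(0.25), numeric.quantile(0.75)
    iqr = q3 - q1
    outliers = ((numeric < q1 - 1.5 * iqr) | (numeric > q3 + 1.5 * iqr)).mean()
    facts["outlier_fraction"] = outliers.round(3).to_dict()
    return facts


if __name__ == "__main__":
    df = pd.read_csv("dollars_for_docs_sample.csv")  # hypothetical file path
    print(summarize_dataset(df))
```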

The design utilizes a ‘modular’ framework that can be leveraged to add or remove areas of investigation based on the domain of the dataset. For example, Nutrition Labels for data about people may include modules about the representation of race and gender, while Nutrition Labels for data about trees may not require that module.
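
That modular idea could be sketched roughly as follows; the registry and module names are hypothetical and only show how domain-specific checks might be added or removed per dataset.

```python
# A hypothetical sketch of the 'modular' framework: each module is a small
# check, and only the modules relevant to the dataset's domain are run.
from typing import Callable, Dict

import pandas as pd

# Registry of optional label modules (names are illustrative).
MODULES: Dict[str, Callable[[pd.DataFrame], dict]] = {
    "missing_data": lambda df: {"missing_fraction": df.isna().mean().to_dict()},
    "gender_representation": lambda df: {
        "gender_shares": df["gender"].value_counts(normalize=True).to_dict()
    },
}


def build_label(df: pd.DataFrame, modules: list) -> dict:
    """Run only the modules chosen for this dataset's domain."""
    return {name: MODULES[name](df) for name in modules}


people_df = pd.DataFrame({"age": [34, 29, None], "gender": ["F", "M", "F"]})
tree_df = pd.DataFrame({"height_m": [12.1, None, 8.4], "species": ["oak", "elm", "ash"]})

# Data about people might include the representation module...
print(build_label(people_df, ["missing_data", "gender_representation"]))
# ...while data about trees would simply omit it.
print(build_label(tree_df, ["missing_data"]))
```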

To learn more, check out our live prototype, built on the Dollars for Docs dataset from ProPublica. A first draft of our paper can be found here.

Our Team

We are a group of researchers and technologists working together to tackle the challenges of ethics and governance of Artificial Intelligence as part of the Assembly program, run by the Berkman Klein Center for Internet & Society at Harvard University and the MIT Media Lab.

Please note: This project is the work of individuals who participated in the Assembly program. If named, participants' employers are provided for identification purposes only.

Kasia Chmielinski

Product Management
Technologist at the U.S. Digital Service working to improve government technology for the American public. Previously the Product and Partnerships Lead for Scratch, a project at the MIT Media Lab. Ex-Googler, native Bostonian. Dabbled in architecture at the Chinese University of Hong Kong before graduating with a degree in physics from Harvard University. Avid bird-watcher.

Ahmed Hosny

Data Science
Machine learning at Dana-Farber Cancer Institute & Harvard Medical School. Design Technology at Harvard Graduate School of Design. Previously Research Fellow at Wyss Institute and Research Affiliate at MIT Media Lab. Architect in a former life. Private Pilot. Chocolate chip cookie addict.

Sarah Holland

Research and Public Policy
Public policy at Google, working on AI, emerging technology, and privacy issues. Previously worked in the U.S. Senate to pass laws on consumer protection and internet accessibility, and on global security and foreign policy issues. Degrees from the University of Arkansas and Johns Hopkins University, with a stint at the American University in Cairo, Egypt. Aspiring drummer and gardener. Keen baker.

Sarah Newman

Research
Creative Researcher at metaLAB at Harvard, Fellow at the Berkman Klein Center for Internet & Society, and AI Grant Fellow. BA Philosophy, Washington University in St. Louis, MFA Imaging Arts, Rochester Institute of Technology. Creates interactive art installations to explore social, cultural, and philosophical dimensions of artificial intelligence.

Josh Joseph

Data Science
AI/ML consultant. Previous startups in data distribution for hedge funds and ML-driven, fully-autonomous proprietary trading. Aero/Astro PhD on modeling and planning in the presence of complex dynamics from MIT. BS in Applied Mathematics and BS in Mechanical Engineering from RIT. Spends too much time arguing about consciousness. Terrible improviser.

Frequently Asked Questions

A few questions you might have

Q. Do you have a prototype or more information?

Yes, we do! You can take a look at a live prototype of the Dataset Nutrition Label for the Dollars for Docs dataset that our friends at ProPublica have made available to our group. We are also currently working on a paper describing our work, the prototype, and future directions.

Q. What inspired this project?

We believe that algorithm developers want to build responsible and smart AI models, but that there is a key step missing in the standard way these models are built: interrogating the dataset for the imbalances or other problems it may have, and ascertaining whether it is the right dataset for the model. We are inspired by the FDA's Nutrition Facts label, which provides basic yet powerful facts that highlight issues in an accessible way. We aspire to do the same for datasets.
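
As a concrete, hypothetical example of what that interrogation step can look like in practice, the sketch below flags a heavily imbalanced label column before any model is trained; the column name and threshold are placeholders, not part of our prototype.

```python
# Hypothetical pre-training check: flag a heavily imbalanced label column
# before fitting a model on it (column name and threshold are placeholders).
import pandas as pd


def check_label_balance(df: pd.DataFrame, label_col: str, threshold: float = 0.8) -> None:
    shares = df[label_col].value_counts(normalize=True)
    if shares.iloc[0] > threshold:
        print(f"Warning: '{shares.index[0]}' makes up {shares.iloc[0]:.0%} of "
              f"'{label_col}'; consider rebalancing or rethinking the dataset.")
    else:
        print(f"'{label_col}' looks reasonably balanced: {shares.round(2).to_dict()}")


df = pd.DataFrame({"approved": ["yes"] * 9 + ["no"]})
check_label_balance(df, "approved")
```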

Q. Whom have you been speaking with?

We have been speaking with researchers in academia, practitioners at large technology companies, individual data scientists, organizations, and government institutions that host or open datasets to the public. If you’re interested in getting involved, please contact us.

Q. Is your work open source?

Yes. You can view our live prototype here, and the code behind the prototype on GitHub.

Q. Who is the intended beneficiary of this work?

The primary audience for the Dataset Nutrition Label is the data science and developer community building AI models. However, we believe that a larger conversation must take place in order to shift the industry. Thus, we are also engaging with educators, policymakers, and researchers on the best ways to amplify and highlight the potential of the Dataset Nutrition Label and the importance of data interrogation before model creation. If you’re interested in getting involved, please contact us.

Q. How will this project scale?

We believe that the Dataset Nutrition Label addresses a broad need in the model development ecosystem, and that the project will scale to address that need. Feedback on our prototype and opportunities to build additional prototypes on more datasets will certainly help us make strides.

Q. Is this a Harvard/MIT project?

This is a project of Assembly, a program run by the MIT Media Lab and the Berkman Klein Center.

Contact

Are you a data scientist who would like to learn more about this project and use our prototype label? A researcher curious about building a nutrition label for your dataset or offering feedback on our label components? A policymaker or technologist who is working on a similar or related idea? A member of the public with comments or feedback? All or none of the above? We’d love to hear from you!
Work with us