We propose to use the visual denotations of linguistic expressions (i.e. the set of images they describe) to define novel denotational similarity metrics, which we show to be at least as beneficial as distributional similarities for two tasks that require semantic inference. To compute these denotational similarities, we construct a denotation graph, i.e. a subsumption hierarchy over constituents and their denotations, based on a large corpus of 30K images and 150K descriptive captions.
Disclaimer: The team releasing Flickr30k did not upload the dataset to the Hub and did not write a dataset card. These steps were done by the Hugging Face team.
Each example in the dataset contains a quintet of similar sentences (the five captions of one image) and is formatted as a dictionary with a single key, "set", whose value is the list of sentences:
{"set": [sentence_1, sentence_2, sentence3, sentence4, sentence5]} {"set": [sentence_1, sentence_2, sentence3, sentence4, sentence5]} ... {"set": [sentence_1, sentence_2, sentence3, sentence4, sentence5]}
This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar pairs of sentences; a minimal training sketch is also included below.
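As a rough illustration (not part of the original card), each quintet can be split into (anchor, positive) pairs and used with `MultipleNegativesRankingLoss` from the sentence-transformers library. The base checkpoint `all-MiniLM-L6-v2`, the batch size, and the epoch count below are arbitrary choices for the sketch, not recommendations:

```python
from torch.utils.data import DataLoader
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, InputExample, losses

dataset = load_dataset("embedding-data/flickr30k-captions")

# Base checkpoint is an arbitrary choice for this sketch.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Pair the first caption of each quintet with each of the other four,
# so every caption set yields four (anchor, positive) training pairs.
train_examples = []
for example in dataset["train"]:
    anchor, *positives = example["set"]
    for positive in positives:
        train_examples.append(InputExample(texts=[anchor, positive]))

train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=64)
# Other pairs in the same batch serve as in-batch negatives.
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1)
```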
Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with:
```python
from datasets import load_dataset

dataset = load_dataset("embedding-data/flickr30k-captions")
```
The dataset is loaded as a `DatasetDict` and has the following format:
```python
DatasetDict({
    train: Dataset({
        features: ['set'],
        num_rows: 31783
    })
})
```
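The shape shown above can be verified programmatically; this is an illustrative snippet, not part of the original card:

```python
# Confirm the split name, feature, and row count shown above.
assert list(dataset.keys()) == ["train"]
assert dataset["train"].num_rows == 31783
assert "set" in dataset["train"].features
```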
Review an example `i` with:
dataset["train"][i]["set"]
Thanks to Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier for adding this dataset.