WiC: The Word-in-Context Dataset

A reliable benchmark for the evaluation of context-sensitive word embeddings



Depending on its context, an ambiguous word can refer to multiple, potentially unrelated, meanings. Mainstream static word embeddings, such as Word2vec and GloVe, are unable to reflect this dynamic semantic nature. Contextualised word embeddings attempt to address this limitation by computing dynamic representations for words that can adapt to the context in which they appear.

The task of a system on WiC is to identify the intended meaning of a word in context. WiC is framed as a binary classification task: each instance has a target word w, either a verb or a noun, for which two contexts are provided, each triggering a specific meaning of w. The task is to identify whether the occurrences of w in the two contexts correspond to the same meaning or not. The dataset can therefore also be viewed as an application of Word Sense Disambiguation in practice.
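
To make the task format concrete, the sketch below shows one way instances could be loaded and a system's binary predictions scored. The tab-separated layout (target word, PoS, token indices, context-1, context-2) with a separate gold file of T/F labels is an assumption about the release format, and the file names are placeholders, not part of this page.

    # Minimal sketch of WiC as a binary classification task.
    # Assumed layout (not specified on this page): a tab-separated data file
    # with columns target, PoS, token indices, context-1, context-2, and a
    # separate gold file with one T/F label per line.

    def read_wic(data_path, gold_path):
        instances = []
        with open(data_path, encoding="utf-8") as f:
            for line in f:
                target, pos, indices, ctx1, ctx2 = line.rstrip("\n").split("\t")
                instances.append({"target": target, "pos": pos, "indices": indices,
                                  "context1": ctx1, "context2": ctx2})
        with open(gold_path, encoding="utf-8") as f:
            labels = [line.strip() == "T" for line in f]  # T = same meaning
        return instances, labels

    def accuracy(predictions, gold):
        # predictions: booleans, True meaning "same sense in both contexts"
        return sum(p == g for p, g in zip(predictions, gold)) / len(gold)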

WiC features multiple interesting characteristics:

  • It is suitable for evaluating a wide range of applications, including contextualised word and sense representation and Word Sense Disambiguation;
  • It is framed as a binary classification dataset in which, unlike Stanford Contextual Word Similarity (SCWS), identical words are paired with each other (in different contexts); hence, a context-insensitive word embedding model would perform similarly to a random baseline (see the sketch after this list);
  • It is constructed using high-quality annotations curated by experts.
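
To illustrate the point about context-insensitive models, the sketch below shows a simple cosine-similarity threshold baseline: embed the target word in each context and predict "same meaning" when the similarity exceeds a tuned threshold. The embed function and the threshold value are placeholders. With a static embedding, embed ignores the context, the two vectors for the identical target word are the same, and every prediction is driven by a constant similarity of 1.0, which is why such a model cannot do better than the random baseline.

    import numpy as np

    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    def predict_same_meaning(embed, instance, threshold=0.6):
        # embed(context, target) is assumed to return a vector for the target
        # word as used in that context. A contextualised model returns
        # different vectors for different senses; a static model ignores the
        # context, so the cosine is always 1.0 and the output is constant.
        v1 = embed(instance["context1"], instance["target"])
        v2 = embed(instance["context2"], instance["target"])
        return cosine(v1, v2) >= threshold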


Download


Participate in WiC's CodaLab competition: submit your results on the development set and see where you stand on the leaderboard!
Link: WiC CodaLab Competition

NOTE: the test set will be released as part of a shared task (details to be announced here soon). In the meantime, you can use the development set for your evaluations.




Dataset details

Please see the following paper:

Mohammad Taher Pilehvar and Jose Camacho-Collados. 2019. WiC: the Word-in-Context Dataset for Evaluating Context-Sensitive Meaning Representations. In Proceedings of NAACL 2019.

Examples from the dataset (T: the target word has the same meaning in both contexts; F: it has different meanings)

Label | Target  | Context-1                                                  | Context-2
F     | bed     | There's a lot of trash on the bed of the river             | I keep a glass of water next to my bed when I sleep
F     | land    | The pilot managed to land the airplane safely              | The enemy landed several of our aircrafts
F     | justify | Justify the margins                                        | The end justifies the means
T     | beat    | We beat the competition                                    | Agassi beat Becker in the tennis championship
T     | air     | Air pollution                                              | Open a window and let in some air
T     | window  | The expanded window will give us time to catch the thieves | You have a two-hour window of clear weather to finish working on the lawn


State-of-the-Art

System                          | Accuracy (%)
Contextualised word embeddings
  Context2vec                   | 59.2
  ELMo-3                        | 57.4
  ELMo-1                        | 56.3
Sense representations
  DeConf                        | 59.4
  SW2V                          | 58.1
  JBT                           | 53.9
Sentence-level baselines
  Sentence Bag-of-words         | 59.3
  Sentence LSTM                 | 53.2
Random baseline                 | 50.0


Performance upper bound

System                  | Accuracy (%)
Human-level performance | 80.5

