Abstract
There is considerable interest in the task of automatically generating image captions. However, evaluation is challenging. Existing automatic evaluation metrics are primarily sensitive to n-gram overlap, which is neither necessary nor sufficient for the task of simulating human judgment. We hypothesize that semantic propositional content is an important component of human caption evaluation, and propose a new automated caption evaluation metric defined over scene graphs coined SPICE. Extensive evaluations across a range of models and datasets indicate that SPICE captures human judgments over model-generated captions better than other automatic metrics (e.g., system-level correlation of 0.88 with human judgments on the MS COCO dataset, versus 0.43 for CIDEr and 0.53 for METEOR). Furthermore, SPICE can answer questions such as 'which caption-generator best understands colors?' and 'can caption-generators count?'
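SPICE scores a candidate caption by comparing its parsed scene graph against those of the reference captions, computing an F-score over the sets of semantic tuples (objects, attributes, relations). The sketch below shows that F-score computation in minimal form; the tuple sets and the exact-match comparison are simplifications, since the full metric parses captions into scene graphs and matches tuples with WordNet synonym handling.

```python
def spice_fscore(candidate_tuples: set, reference_tuples: set) -> float:
    """F-score over semantic tuples, the core of the SPICE metric.

    Tuples are e.g. ("girl",), ("girl", "young"), ("girl", "standing").
    Exact-match intersection stands in for SPICE's synonym-aware matching.
    """
    matched = candidate_tuples & reference_tuples
    precision = len(matched) / len(candidate_tuples) if candidate_tuples else 0.0
    recall = len(matched) / len(reference_tuples) if reference_tuples else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


# Illustrative tuples (hypothetical example, not from the paper):
cand = {("girl",), ("girl", "young"), ("girl", "standing")}
ref = {("girl",), ("girl", "young")}
print(spice_fscore(cand, ref))  # 0.8
```

Because the comparison happens in tuple space rather than n-gram space, two captions with no word overlap can still score highly if they encode the same propositions.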
URL
https://arxiv.org/abs/1607.08822