Model: laion/clap-htsat-fused

Model card for CLAP

Model card for CLAP: Contrastive Language-Audio Pretraining

Table of Contents

  • Overview
  • Model Details
  • Usage
  • Uses
  • Citation
    Overview

    The abstract of the paper states: Contrastive learning has shown remarkable success in the field of multimodal representation learning. In this paper, we propose a pipeline of contrastive language-audio pretraining to develop an audio representation by combining audio data with natural language descriptions. To accomplish this target, we first release LAION-Audio-630K, a large collection of 633,526 audio-text pairs from different data sources. Second, we construct a contrastive language-audio pretraining model by considering different audio encoders and text encoders. We incorporate the feature fusion mechanism and keyword-to-caption augmentation into the model design to further enable the model to process audio inputs of variable lengths and enhance performance. Third, we perform comprehensive experiments to evaluate our model across three tasks: text-to-audio retrieval, zero-shot audio classification, and supervised audio classification. The results demonstrate that our model achieves superior performance in text-to-audio retrieval. In audio classification, the model achieves state-of-the-art performance in the zero-shot setting and obtains performance comparable to models trained in the non-zero-shot setting. Both LAION-Audio-630K and the proposed model are available to the public.

    Usage

    You can use this model for zero-shot audio classification or to extract audio and/or text features.

    Uses

    Perform zero-shot audio classification

    Using pipeline

    from datasets import load_dataset
    from transformers import pipeline

    # Load an example clip from the ESC-50 environmental sound dataset
    dataset = load_dataset("ashraq/esc50")
    audio = dataset["train"]["audio"][-1]["array"]

    # Zero-shot classification: the model scores the audio against free-form candidate captions
    audio_classifier = pipeline(task="zero-shot-audio-classification", model="laion/clap-htsat-fused")
    output = audio_classifier(audio, candidate_labels=["Sound of a dog", "Sound of vacuum cleaner"])
    print(output)
    >>> [{"score": 0.999, "label": "Sound of a dog"}, {"score": 0.001, "label": "Sound of vacuum cleaner"}]
    

    Run the model:

    You can also get the audio and text embeddings using ClapModel.

    Run the model on CPU:

    from datasets import load_dataset
    from transformers import ClapModel, ClapProcessor

    # Load a single example clip from the LibriSpeech dummy split
    librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
    audio_sample = librispeech_dummy[0]

    model = ClapModel.from_pretrained("laion/clap-htsat-fused")
    processor = ClapProcessor.from_pretrained("laion/clap-htsat-fused")

    # The processor converts the raw waveform into the spectrogram features expected by the audio encoder
    inputs = processor(audios=audio_sample["audio"]["array"], return_tensors="pt")
    audio_embed = model.get_audio_features(**inputs)
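
    The snippet above extracts audio embeddings; text embeddings can be obtained the same way, since ClapProcessor also wraps the text tokenizer and ClapModel exposes get_text_features. A minimal sketch (the captions are illustrative):

    from transformers import ClapModel, ClapProcessor

    model = ClapModel.from_pretrained("laion/clap-htsat-fused")
    processor = ClapProcessor.from_pretrained("laion/clap-htsat-fused")

    # Tokenize free-form captions and project them into the shared audio-text embedding space
    text_inputs = processor(text=["Sound of a dog", "Sound of a vacuum cleaner"], return_tensors="pt", padding=True)
    text_embed = model.get_text_features(**text_inputs)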
    

    Run the model on GPU:

    from datasets import load_dataset
    from transformers import ClapModel, ClapProcessor

    librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
    audio_sample = librispeech_dummy[0]

    # Move the model to the first CUDA device
    model = ClapModel.from_pretrained("laion/clap-htsat-fused").to(0)
    processor = ClapProcessor.from_pretrained("laion/clap-htsat-fused")

    # Inputs must live on the same device as the model
    inputs = processor(audios=audio_sample["audio"]["array"], return_tensors="pt").to(0)
    audio_embed = model.get_audio_features(**inputs)
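
    Because audio and text embeddings share one space, zero-shot scores can also be computed directly from the model's contrastive logits instead of going through the pipeline. A minimal sketch on CPU, reusing the LibriSpeech sample above (the candidate captions are illustrative):

    import torch
    from datasets import load_dataset
    from transformers import ClapModel, ClapProcessor

    librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
    audio_sample = librispeech_dummy[0]

    model = ClapModel.from_pretrained("laion/clap-htsat-fused")
    processor = ClapProcessor.from_pretrained("laion/clap-htsat-fused")

    # Encode candidate captions and the audio clip in a single call
    inputs = processor(
        text=["Sound of a dog", "a person speaking"],
        audios=audio_sample["audio"]["array"],
        return_tensors="pt",
        padding=True,
    )

    with torch.no_grad():
        outputs = model(**inputs)

    # logits_per_audio holds audio-to-text similarity scores; softmax turns them into label probabilities
    probs = outputs.logits_per_audio.softmax(dim=-1)
    print(probs)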
    

    Citation

    If you are using this model in your work, please consider citing the original paper:

    @misc{https://doi.org/10.48550/arxiv.2211.06687,
      doi = {10.48550/ARXIV.2211.06687},
      url = {https://arxiv.org/abs/2211.06687},
      author = {Wu, Yusong and Chen, Ke and Zhang, Tianyu and Hui, Yuchen and Berg-Kirkpatrick, Taylor and Dubnov, Shlomo},
      keywords = {Sound (cs.SD), Audio and Speech Processing (eess.AS), FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering},
      title = {Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation},
      publisher = {arXiv},
      year = {2022},
      copyright = {Creative Commons Attribution 4.0 International}
    }