Model:
pszemraj/grammar-synthesis-small
This model is a fine-tuned version of google/t5-small-lm-adapt for grammar correction on an expanded version of the JFLEG dataset.
Usage in Python (after `pip install transformers`):
```python
from transformers import pipeline

corrector = pipeline(
    'text2text-generation',
    'pszemraj/grammar-synthesis-small',
)

raw_text = 'i can has cheezburger'
results = corrector(raw_text)
print(results)
```
Check out a simple demo in Google Colab here.
The intent was to create a text2text language model that successfully completes "single-shot grammar correction" on potentially grammatically incorrect text that may contain a large number of errors, without making semantic changes to text/information that is already grammatically correct.
You can see the difference by comparing some examples with heavier errors against other grammar correction models :)
Obviously, this section is rather general, since a lot can be done with "universal single-shot grammar correction". Some ideas or use cases:
Here is an example of running the model with beam search on a CPU:
```
original response:
ive heard it attributed to a bunch of different philosophical schools, including stoicism, pragmatism, existentialism and even some forms of post-structuralism. i think one of the most interesting (and most difficult) philosophical problems is trying to let dogs (or other animals) out of cages. the reason why this is a difficult problem is because it seems to go against our grain (so to

synthesizing took 306.12 seconds

Final response in 1294.857 s:
I've heard it attributed to a bunch of different philosophical schools, including solipsism, pragmatism, existentialism and even some forms of post-structuralism. i think one of the most interesting (and most difficult) philosophical problems is trying to let dogs (or other animals) out of cages. the reason why this is a difficult problem is because it seems to go against our grain (so to speak)
```
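A run like the one above might look roughly like the following sketch. The generation parameters here (`num_beams`, `max_length`) are illustrative assumptions, not the exact settings used to produce the output shown:

```python
from transformers import pipeline

# Load the grammar-correction pipeline; device=-1 forces CPU inference.
corrector = pipeline(
    'text2text-generation',
    'pszemraj/grammar-synthesis-small',
    device=-1,
)

raw_text = (
    "ive heard it attributed to a bunch of different philosophical schools, "
    "including stoicism, pragmatism, existentialism and even some forms of "
    "post-structuralism."
)

# Generation kwargs are forwarded to model.generate();
# num_beams > 1 enables beam search.
results = corrector(raw_text, num_beams=4, max_length=128)
print(results[0]['generated_text'])
```

Beam search trades extra compute for higher-quality corrections, which is why CPU runs like the one above can take a while on long inputs.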
Note: I used some additional logic that removes the period at the end of the last sentence in this chatbot setting.
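That post-processing step could be sketched like this (a hypothetical helper for illustration, not the exact logic used):

```python
def strip_trailing_period(text: str) -> str:
    """Remove a single trailing period, since a chatbot reply
    often reads more naturally without one. Leaves ellipses intact."""
    text = text.rstrip()
    if text.endswith('.') and not text.endswith('...'):
        return text[:-1]
    return text

print(strip_trailing_period('It seems to go against our grain (so to speak).'))
# → It seems to go against our grain (so to speak)
```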
Need more information?
The following hyperparameters were used during training: