Model:
pszemraj/flan-t5-large-grammar-synthesis
A fine-tuned version of google/flan-t5-large for grammar correction, trained on an expanded version of the JFLEG dataset. Demo on HF Spaces.
Compare it against the original grammar-synthesis-large.
There's a Colab notebook that already has this basic version implemented (click the Open in Colab button).
After `pip install transformers`, run the following code:
```python
from transformers import pipeline

corrector = pipeline(
    'text2text-generation',
    'pszemraj/flan-t5-large-grammar-synthesis',
)
raw_text = 'i can has cheezburger'
results = corrector(raw_text)
print(results)
```
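The pipeline returns a list with one dict per input; the corrected text is under the `generated_text` key:

```python
# results is a list like [{'generated_text': '...'}]
corrected = results[0]['generated_text']
print(corrected)
```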
For batch inference: see this discussion thread for details. In short, the dataset consists of several sentences per example, so I'd recommend running inference the same way: batches of roughly 64-96 tokens (or 2-3 sentences, split with a regex). A sketch of this follows.
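As a minimal sketch of that approach (the regex splitter and chunk size below are illustrative assumptions, not part of the model card):

```python
import re
from transformers import pipeline

corrector = pipeline(
    'text2text-generation',
    'pszemraj/flan-t5-large-grammar-synthesis',
)

def split_into_chunks(text, sentences_per_chunk=2):
    # naive regex sentence split on ., !, or ? followed by whitespace
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    return [
        ' '.join(sentences[i:i + sentences_per_chunk])
        for i in range(0, len(sentences), sentences_per_chunk)
    ]

raw_text = 'i can has cheezburger. it are very tasty! i eated two of them'
chunks = split_into_chunks(raw_text)
# the pipeline accepts a list of inputs and batches them internally
results = corrector(chunks, batch_size=8)
print(' '.join(r['generated_text'] for r in results))
```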
The intent is to create a text2text language model that successfully performs "single-shot grammar correction": given a potentially incorrect text that may contain many mistakes, it corrects them, with the important qualifier that it does not semantically change text/information that IS grammatically correct.
Compare some of the heavier-error examples against other grammar correction models to see the difference :)
This model has been converted to ONNX and can be loaded/used with Hugging Face's optimum library.
First, install optimum:
```bash
pip install optimum[onnxruntime]
# ^ if you want to use a different runtime, read their docs
```
Then load the model with the optimum pipeline:
```python
from optimum.pipelines import pipeline

corrector_model_name = "pszemraj/flan-t5-large-grammar-synthesis"
corrector = pipeline(
    "text2text-generation",
    model=corrector_model_name,
    accelerator="ort",
)
# use as normal
```
If trading a slight decrease in grammatical correction quality for faster inference makes sense for your use case, check out the base and small checkpoints, fine-tuned from the corresponding t5 checkpoints.
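The smaller checkpoints drop in with the same pipeline API; only the model ID changes (the exact ID below, pszemraj/grammar-synthesis-small, is assumed from the naming above, so verify it on the Hub):

```python
from transformers import pipeline

# same usage as the large model; only the checkpoint name changes
corrector = pipeline('text2text-generation', 'pszemraj/grammar-synthesis-small')
print(corrector('i can has cheezburger'))
```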
Obviously, this section is quite general, as there are many things one can use "general single-shot grammar correction" for. One such use case, shown below, is cleaning up chatbot responses.
An example of this model running on CPU with beam search:
```text
Original response:
ive heard it attributed to a bunch of different philosophical schools, including stoicism, pragmatism, existentialism and even some forms of post-structuralism. i think one of the most interesting (and most difficult) philosophical problems is trying to let dogs (or other animals) out of cages. the reason why this is a difficult problem is because it seems to go against our grain (so to

synthesizing took 306.12 seconds

Final response in 1294.857 s:
I've heard it attributed to a bunch of different philosophical schools, including solipsism, pragmatism, existentialism and even some forms of post-structuralism. i think one of the most interesting (and most difficult) philosophical problems is trying to let dogs (or other animals) out of cages. the reason why this is a difficult problem is because it seems to go against our grain (so to speak)
```
Note: I have some extra logic that removes the period at the end of the final sentence in this chatbot setting, to avoid the response coming off as passive-aggressive.
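A hypothetical sketch of that kind of post-processing (this helper is illustrative, not the actual logic used):

```python
def soften_reply(reply: str) -> str:
    # drop a single trailing period so the final sentence reads less clipped;
    # leave ellipses and other punctuation alone
    reply = reply.rstrip()
    if reply.endswith('.') and not reply.endswith('...'):
        return reply[:-1]
    return reply
```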
If you find this fine-tuned model useful in your work, please consider citing it :)
```bibtex
@misc{peter_szemraj_2022,
    author    = { {Peter Szemraj} },
    title     = { flan-t5-large-grammar-synthesis (Revision d0b5ae2) },
    year      = 2022,
    url       = { https://huggingface.co/pszemraj/flan-t5-large-grammar-synthesis },
    doi       = { 10.57967/hf/0138 },
    publisher = { Hugging Face }
}
```