Model:

cardiffnlp/twitter-roberta-base-sep2022

English

Twitter September 2022 (RoBERTa-base, 169M)

This is a RoBERTa-base model trained on 168.86M tweets up to the end of September 2022 (in increments of 15M tweets). More details and performance scores are available in the TimeLMs paper.

Below, we provide some usage examples using the standard Transformers interface. For another interface better suited to comparing predictions and perplexity scores across models trained over different time intervals, check the TimeLMs repository.

For other models trained until different periods, check this table.

Preprocess Text

Replace usernames and links with the placeholders "@user" and "http". If you are interested in retaining verified users, which were also retained during training, you may keep the users listed here.

def preprocess(text):
    """Replace user mentions with '@user' and links with 'http'."""
    preprocessed_text = []
    for t in text.split():
        if len(t) > 1:
            # Single-character tokens (e.g. a lone '@') are kept as-is.
            t = '@user' if t[0] == '@' and t.count('@') == 1 else t
            t = 'http' if t.startswith('http') else t
        preprocessed_text.append(t)
    return ' '.join(preprocessed_text)
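
As a quick sanity check, the function can be run on a made-up tweet (the handle and URL below are purely illustrative):

print(preprocess("Shoutout to @jane_doe for the link https://example.com"))
# Shoutout to @user for the link http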

Example Masked Language Model

from transformers import pipeline, AutoTokenizer

MODEL = "cardiffnlp/twitter-roberta-base-sep2022"
fill_mask = pipeline("fill-mask", model=MODEL, tokenizer=MODEL)
tokenizer = AutoTokenizer.from_pretrained(MODEL)

def pprint(candidates, n):
    for i in range(n):
        token = tokenizer.decode(candidates[i]['token'])
        score = candidates[i]['score']
        print("%d) %.5f %s" % (i+1, score, token))

texts = [
    "So glad I'm <mask> vaccinated.",
    "I keep forgetting to bring a <mask>.",
    "Looking forward to watching <mask> Game tonight!",
]
for text in texts:
    t = preprocess(text)
    print(f"{'-'*30}\n{t}")
    candidates = fill_mask(t)
    pprint(candidates, 5)

Output:

------------------------------
So glad I'm <mask> vaccinated.
1) 0.60140  not
2) 0.15077  getting
3) 0.12119  fully
4) 0.02203  still
5) 0.01020  all
------------------------------
I keep forgetting to bring a <mask>.
1) 0.05812  charger
2) 0.05040  backpack
3) 0.05004  book
4) 0.04548  bag
5) 0.03992  lighter
------------------------------
Looking forward to watching <mask> Game tonight!
1) 0.39552  the
2) 0.28083  The
3) 0.02029  End
4) 0.01878  Squid
5) 0.01438  this
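
The fill-mask pipeline returns five candidates per mask by default; if you need more (or fewer), it accepts a top_k argument. A minimal sketch, reusing fill_mask and pprint from above:

candidates = fill_mask("So glad I'm <mask> vaccinated.", top_k=10)
pprint(candidates, 10)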

Example Tweet Embeddings

from transformers import AutoTokenizer, AutoModel
import numpy as np
from scipy.spatial.distance import cosine
from collections import Counter

def get_embedding(text):  # naive approach for demonstration
    text = preprocess(text)
    encoded_input = tokenizer(text, return_tensors='pt')
    features = model(**encoded_input)
    features = features[0].detach().cpu().numpy()
    return np.mean(features[0], axis=0)


MODEL = "cardiffnlp/twitter-roberta-base-sep2022"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)

query = "The book was awesome"
tweets = ["I just ordered fried chicken ?", 
          "The movie was great",
          "What time is the next game?",
          "Just finished reading 'Embeddings in NLP'"]

sims = Counter()
for tweet in tweets:
    sim = 1 - cosine(get_embedding(query), get_embedding(tweet))
    sims[tweet] = sim

print('Most similar to: ', query)
print(f"{'-'*30}")
for idx, (tweet, sim) in enumerate(sims.most_common()):
    print("%d) %.5f %s" % (idx+1, sim, tweet))

Output:

Most similar to:  The book was awesome
------------------------------
1) 0.98914 The movie was great
2) 0.96194 Just finished reading 'Embeddings in NLP'
3) 0.94603 What time is the next game?
4) 0.94580 I just ordered fried chicken ?
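
The per-tweet loop above is fine for a handful of texts. For larger batches, a sketch along these lines (our addition, not part of the original card) tokenizes with padding and mean-pools only over non-padding tokens via the attention mask:

import torch

def get_embeddings_batched(texts):
    # Tokenize all texts at once, padding to the longest sequence.
    texts = [preprocess(t) for t in texts]
    encoded = tokenizer(texts, return_tensors='pt', padding=True, truncation=True)
    with torch.no_grad():
        features = model(**encoded)[0]  # (batch, seq_len, hidden_size)
    # Zero out padding positions before averaging.
    mask = encoded['attention_mask'].unsqueeze(-1).float()
    return ((features * mask).sum(dim=1) / mask.sum(dim=1)).numpy()

embeddings = get_embeddings_batched([query] + tweets)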

Example Feature Extraction

from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np

MODEL = "cardiffnlp/twitter-roberta-base-sep2022"
tokenizer = AutoTokenizer.from_pretrained(MODEL)

text = "Good night ?"
text = preprocess(text)

# PyTorch
model = AutoModel.from_pretrained(MODEL)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy() 
features_mean = np.mean(features[0], axis=0) 
#features_max = np.max(features[0], axis=0)

# # TensorFlow
# model = TFAutoModel.from_pretrained(MODEL)
# encoded_input = tokenizer(text, return_tensors='tf')
# features = model(encoded_input)
# features = features[0].numpy()
# features_mean = np.mean(features[0], axis=0) 
# #features_max = np.max(features[0], axis=0)
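
Depending on the downstream use, you may prefer the representation of the first token (<s>, RoBERTa's CLS-equivalent) over mean pooling. A minimal PyTorch sketch reusing the model and encoded_input from above (our addition, not from the original card):

import torch

with torch.no_grad():  # no gradients needed for pure feature extraction
    features = model(**encoded_input)[0]
# Representation of the first token (<s>), often used as a sentence-level feature.
features_cls = features[0, 0].numpy()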