How to use the GPT2 pretrained model
flyfish
The transformers library provides a unified API for all models.

Installation:

pip install transformers

The GPT2 model consists mainly of the following files:
config.json
merges.txt
model.safetensors
tokenizer.json
tokenizer_config.json
vocab.json

Model directory:
\.cache\huggingface\hub\models--openai-community--gpt2\blobs

Model file links:

.cache\huggingface\hub\models--openai-community--gpt2\snapshots

config.json [..\..\blobs\10c66461e4c109db5a2196bff4bb59be30396ed8]
merges.txt [..\..\blobs\226b0752cac7789c48f0cb3ec53eda48b7be36cc]
model.safetensors [..\..\blobs\248dfc3911869ec493c76e65bf2fcf7f615828b0254c12b473182f0f81d3a707]
tokenizer.json [..\..\blobs\4b988bccc9dc5adacd403c00b4704976196548f8]
tokenizer_config.json [..\..\blobs\be4d21d94f3b4687e5a54d84bf6ab46ed0f8defd]
vocab.json [..\..\blobs\1f1d9aaca301414e7f6c9396df506798ff4eb9a6]

The files can be downloaded here:

Link: https://pan.baidu.com/s/1A8MLV_BxcJLEIr4_oOVsUQ
Extraction code: 0000

A simple example:

from transformers import AutoTokenizer, GPT2Model
import torch

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
model = GPT2Model.from_pretrained("openai-community/gpt2")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state

Using GPT2 with neuralforecast:
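The neuralforecast example that follows holds out the last 12 monthly observations of each series as a test set. That split idiom can be sketched first on a toy panel; the data below is synthetic and only the column names (`unique_id`, `ds`, `y`, neuralforecast's long format) are assumed:

```python
import pandas as pd

# Toy panel in long format: one series, 144 monthly observations
df = pd.DataFrame({
    "unique_id": "Airline1",
    "ds": pd.date_range("1949-01-01", periods=144, freq="MS"),
    "y": range(144),
})

# The 12th-from-last timestamp is the cutoff: everything before it
# is training data, the last 12 rows are the test set
cutoff = df["ds"].values[-12]
train = df[df["ds"] < cutoff]
test = df[df["ds"] >= cutoff].reset_index(drop=True)

print(len(train), len(test))  # 132 12
```

The real example applies the same `values[-12]` cutoff to AirPassengersPanel, giving the 132/12 train/test split noted in its comments.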
from neuralforecast import NeuralForecast
from neuralforecast.models import TimeLLM
from neuralforecast.utils import AirPassengersPanel, augment_calendar_df
from transformers import GPT2Config, GPT2Model, GPT2Tokenizer

AirPassengersPanel, calendar_cols = augment_calendar_df(df=AirPassengersPanel, freq='M')

Y_train_df = AirPassengersPanel[AirPassengersPanel.ds < AirPassengersPanel['ds'].values[-12]]  # 132 train
Y_test_df = AirPassengersPanel[AirPassengersPanel.ds >= AirPassengersPanel['ds'].values[-12]].reset_index(drop=True)  # 12 test

gpt2_config = GPT2Config.from_pretrained('openai-community/gpt2')
gpt2 = GPT2Model.from_pretrained('openai-community/gpt2', config=gpt2_config)
gpt2_tokenizer = GPT2Tokenizer.from_pretrained('openai-community/gpt2')

prompt_prefix = "The dataset contains data on monthly air passengers. There is a yearly seasonality"

timellm = TimeLLM(h=12,
                  input_size=36,
                  llm=gpt2,
                  llm_config=gpt2_config,
                  llm_tokenizer=gpt2_tokenizer,
                  prompt_prefix=prompt_prefix,
                  batch_size=24,
                  windows_batch_size=24)

nf = NeuralForecast(models=[timellm], freq='M')

nf.fit(df=Y_train_df, val_size=12)
forecasts = nf.predict(futr_df=Y_test_df)
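`nf.predict` returns a long-format DataFrame with one column per fitted model (named after the model, here `TimeLLM`). A common next step is aligning it with the held-out actuals and computing an error metric. A minimal sketch with toy frames standing in for `Y_test_df` and `forecasts` (the numbers are invented; only the column-naming convention is taken from neuralforecast):

```python
import numpy as np
import pandas as pd

# Toy stand-ins for the actuals and the predict() output
ds = pd.date_range("1960-01-01", periods=12, freq="MS")
actuals = pd.DataFrame({"unique_id": "Airline1", "ds": ds,
                        "y": np.linspace(400.0, 430.0, 12)})
preds = pd.DataFrame({"unique_id": "Airline1", "ds": ds,
                      "TimeLLM": np.linspace(402.0, 428.0, 12)})

# Align forecasts with actuals on series id and timestamp, then score
merged = actuals.merge(preds, on=["unique_id", "ds"])
mae = (merged["y"] - merged["TimeLLM"]).abs().mean()
print(f"MAE: {mae:.2f}")  # MAE: 1.09
```

With the real objects, replacing `actuals` by `Y_test_df` and `preds` by `forecasts` gives the test-set MAE of the TimeLLM model.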