Google's pre-trained model tops the leaderboard again, and this time it's called T5 (leaderboard included)
Published: 2019-05-26



Source: Scientific Spaces (科学空间)

This article is about 1,000 words; suggested reading time is 5 minutes.

It introduces Google's newly released pre-trained model.

Google has pulled out another big move, and this time it's called T5:
T5 serves primarily as code for reproducing the experiments in Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. The bulk of the code in this repository is used for loading, preprocessing, mixing, and evaluating datasets. It also provides a way to fine-tune the pre-trained models released alongside the publication.
T5 can be used as a library for future model development by providing useful modules for training and fine-tuning (potentially huge) models on mixtures of text-to-text tasks.
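
To make the text-to-text idea concrete, here is a minimal sketch in plain Python (not the T5 library's actual API) of how heterogeneous tasks are all cast as string-in/string-out pairs via task prefixes; the three pairs are modeled on Figure 1 of the paper:

```python
# A minimal sketch of the "text-to-text" framing: every task, whether
# translation, classification, or summarization, becomes a string-to-string
# mapping distinguished only by a task prefix.

examples = [
    # Machine translation
    ("translate English to German: That is good.",
     "Das ist gut."),
    # Grammatical acceptability (CoLA): the label itself is emitted as text
    ("cola sentence: The course is jumping well.",
     "not acceptable"),
    # Summarization
    ("summarize: state authorities dispatched emergency crews tuesday to "
     "survey the damage after an onslaught of severe weather in mississippi ...",
     "six people hospitalized after a storm in attala county ."),
]

for source, target in examples:
    print(f"input:  {source}")
    print(f"target: {target}")
    print()
```

Because inputs and outputs are always plain text, a single model, loss, and decoding procedure covers every task; no task-specific heads are needed.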

The results come in roughly 4% above the previous best, RoBERTa, and on BoolQ the model has already surpassed human performance:

[Figure: screenshot of the SuperGLUE leaderboard]
The leaderboard is here: https://super.gluebenchmark.com/leaderboard

This time the parameter count reaches up to 11 billion at the largest!
[Figure: table of T5 model sizes]
GitHub also provides a simple usage tutorial; of course, with this many parameters, you can probably only run it on a TPU:
https://github.com/google-research/text-to-text-transfer-transformer
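
As a quick sanity check on the TPU remark, a back-of-the-envelope sketch of the memory footprint (my own arithmetic, not a number from the repo):

```python
# Back-of-the-envelope memory estimate for the largest (11B-parameter) T5.
params = 11e9                    # 11 billion parameters
weight_bytes = params * 4        # float32: 4 bytes per weight

print(f"weights alone:        {weight_bytes / 2**30:.0f} GiB")      # ~41 GiB

# Training with Adam roughly triples this (two extra moment buffers per
# weight), before even counting gradients and activations.
print(f"weights + Adam state: {3 * weight_bytes / 2**30:.0f} GiB")  # ~123 GiB
```

Even the raw float32 weights exceed the memory of any single consumer GPU of the time, so sharding across a TPU pod is essentially mandatory.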

Besides this, the GitHub page also highlights another contribution: a library called tensorflow-datasets, which makes it easy to load many datasets already converted to the tf.data format, plugging seamlessly into TensorFlow.
Details: https://www.tensorflow.org/datasets
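
For example, loading a dataset is a one-liner that returns ready-made tf.data pipelines (a minimal sketch; "glue/cola" is just one of the many available dataset names):

```python
import tensorflow_datasets as tfds

# Downloads (on first use) and loads the CoLA subset of GLUE as a
# tf.data.Dataset object.
ds_train = tfds.load("glue/cola", split="train")

# Each element is a dict of tensors, ready for a tf.data input pipeline.
for example in ds_train.take(1):
    print(example["sentence"], example["label"])
```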

Oh, and there is also the paper: https://arxiv.org/abs/1910.10683
Its full title is
"Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer"

Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.
Perhaps "Exploring the Limits of TPU" would be a more fitting title?
Having skimmed the paper, it appears to use a UniLM-like Seq2Seq pre-training approach, throwing all kinds of labeled and unlabeled data into the mix.
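
For intuition, here is a toy sketch of the denoising ("span corruption") objective the paper converges on: random spans of the input are replaced with sentinel tokens (written here in the <extra_id_n> style), and the target reconstructs only the dropped spans. This helper is my own simplified illustration, not code from the repo:

```python
import random

def span_corrupt(tokens, span_len=2, n_spans=2, seed=0):
    """Toy span corruption: replace random non-overlapping spans of the
    input with sentinel tokens; the target spells out only the dropped
    spans, each preceded by its sentinel."""
    rng = random.Random(seed)
    tokens = list(tokens)
    # Resample until the chosen span starts do not overlap.
    while True:
        starts = sorted(rng.sample(range(len(tokens) - span_len), n_spans))
        if all(b - a >= span_len for a, b in zip(starts, starts[1:])):
            break
    inp, tgt, prev = [], [], 0
    for i, s in enumerate(starts):
        sentinel = f"<extra_id_{i}>"
        inp += tokens[prev:s] + [sentinel]
        tgt += [sentinel] + tokens[s:s + span_len]
        prev = s + span_len
    inp += tokens[prev:]
    tgt.append(f"<extra_id_{n_spans}>")  # final end-of-targets sentinel
    return " ".join(inp), " ".join(tgt)

src = "Thank you for inviting me to your party last week .".split()
inp, tgt = span_corrupt(src)
print(inp)  # e.g. "Thank you <extra_id_0> me to your party <extra_id_1> week ."
print(tgt)  # e.g. "<extra_id_0> for inviting <extra_id_1> last <extra_id_2>"
```

Since the target contains only the corrupted spans rather than the full input, the decoder sequences stay short, which keeps pre-training cheap relative to plain full-sequence denoising.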

For now this is just a quick post to ride the hype; I will share more detailed thoughts after a careful read.
Editor: Wang Jing
Proofreading: Wang Xin

Repost source: http://zouqi.baihongyu.com/
