Publishing NLP papers in core journals


Both are reasonably easy to publish in. OpenCV (CV) work is oriented toward practice and applications, and its research results are expected to guide real-world use. NLP is the more challenging side algorithmically: in CV, a video can be split into individual frames whose pixels form a finite, regular grid that computers parse readily, whereas natural language offers no such bounded structure.

Combining AI with basic medical knowledge has become a new way to bring AI into clinical exploration, and this approach has now been shown to hold real promise.

Recently, a research team led by Professor Xia Huimin of the Guangzhou Women and Children's Medical Center and Professor Zhang Kang of the University of California, San Diego, working with the AI company Yitu Technology and others, built an AI-based disease diagnosis system that incorporates a medical knowledge graph, allowing the AI to "diagnose" from electronic medical records (EMRs) much as a human physician does.

The results are encouraging: tested on the 55 common pediatric diseases and several critical conditions covered by the system, the AI diagnosed at the level of a pediatric attending physician.

The study, "Evaluation and accurate diagnoses of pediatric diseases using artificial intelligence," was published online in mid-February in Nature Medicine.

The platform's defining feature is the combination of deep learning with a professional medical knowledge graph. Ni Hao, president of Yitu Healthcare, told this writer that going forward, learning from clinical data to give physicians broader diagnostic assistance (more disease types) through deep learning plus a knowledge graph is "very feasible."

To give the platform professional pediatric knowledge, the team had it learn the diagnostic logic contained in 1.36 million electronic text medical records from 567,000 children. These records, drawn from the Guangzhou Women and Children's Medical Center between January 2016 and July 2017, cover 101.6 million data points whose initial diagnoses span 55 common pediatric diseases.

Beyond integrating medical knowledge, the team used Yitu Technology's natural language processing (NLP) technology to build a model that annotates these EMRs: by standardizing the records, the model could roughly classify clinical information even without task-specific "training."

"Rough classification means taking the whole EMR as input and the expert diagnosis as output. But that does not truly understand the disease itself, and it is hard to explain why a given diagnosis was made," Ni Hao told this writer. The NLP model bridges the gap between medical-record text and computer-readable language, but the knowledge graph is what gives the AI diagnostic platform its expert-level capability.
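To make the "whole record in, expert diagnosis out" setup concrete, here is a minimal sketch in Python. It is only an illustration under stated assumptions: the records and labels are invented toy data, and the study's actual NLP model is not public; a TF-IDF bag-of-words pipeline in scikit-learn simply stands in for any text classifier of this crude end-to-end kind.

```python
# Minimal sketch of the "whole record in, diagnosis out" baseline described
# above. Records and labels are hypothetical toy data; the study's actual
# NLP model is not public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

records = [
    "fever 39C, vesicular rash on hands and feet, oral ulcers",
    "cough, runny nose, sore throat, mild fever",
    "persistent high fever, conjunctivitis, strawberry tongue, swollen lymph nodes",
]
diagnoses = ["hand-foot-mouth disease", "upper respiratory infection", "Kawasaki disease"]

# Treat the entire free-text record as input and the expert diagnosis as the label.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(records, diagnoses)

print(model.predict(["low-grade fever with rash on palms and mouth sores"]))
```

Such a classifier can match records to labels, but, as Ni Hao notes below, it cannot say which clinical features drove the call.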

That was the team's next major task: a group of more than 30 senior pediatricians and more than 10 informatics researchers manually annotated 6,183 medical charts from the EMRs, with continuous validation and iteration, to ensure diagnostic accuracy.

After "training, optimization, and validation" on the expert-annotated charts, the researchers found that the deep-learning NLP model annotated the EMRs well, reaching its highest sensitivity on physical-examination items and its highest precision on chief-complaint items. In other words, the model could accurately read the information recorded in EMRs and produce annotations that meet clinical standards, and this was the most critical part of the entire study.

"Introducing the knowledge graph to deeply deconstruct the EMRs for each disease is what gave the NLP model the ability to understand them. For example, which features are closely associated with hand-foot-mouth disease, or which features are most relevant to Kawasaki disease, so that on top of an accurate diagnosis the model gains better medical interpretability," Ni Hao explained. "With the knowledge graph, deep learning can deconstruct the EMRs and truly understand the clinical data. On that foundation, machine-learning classifiers and other algorithms have room to work; if you treat the EMR as a black box, you cannot build a high-accuracy, interpretable model."
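As a loose illustration of the interpretability a knowledge graph adds, the toy sketch below maps diseases to salient clinical features and reports which features found in a record support a candidate diagnosis. The diseases, feature sets, and the `explain` helper are all hypothetical; the study's actual graph is far richer.

```python
# Toy illustration (not the study's actual knowledge graph): map diseases to
# their clinically salient features, then report which features extracted from
# a record support a candidate diagnosis.
KNOWLEDGE_GRAPH = {
    "hand-foot-mouth disease": {"vesicular rash", "oral ulcers", "fever"},
    "Kawasaki disease": {"persistent fever", "conjunctivitis", "strawberry tongue"},
}

def explain(diagnosis: str, extracted_features: set) -> set:
    """Return the knowledge-graph features of `diagnosis` present in the record."""
    return KNOWLEDGE_GRAPH.get(diagnosis, set()) & extracted_features

features = {"fever", "vesicular rash", "cough"}
print(explain("hand-foot-mouth disease", features))  # supporting features: fever, vesicular rash
```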

By deconstructing the EMR data with deep learning plus the medical knowledge graph, the researchers built a high-quality intelligent disease library, which then made it relatively easy to build various diagnostic models on top of it.

Building a multi-level diagnostic model was the second step in turning the AI platform into a "pediatrician." Ni Hao explained that this model, built from logistic regression classifiers, first sorts a case into a major organ-system category such as respiratory, gastrointestinal, or generalized systemic disease, and then subdivides within each category, mimicking a human physician's diagnostic pathway by judging the patient's data level by level.
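A minimal sketch of such two-level routing, assuming hypothetical feature vectors and invented system/disease codes (the paper's real hierarchy, features, and calibration are more elaborate):

```python
# Sketch of "organ system first, then specific disease" routing with logistic
# regression, on hypothetical data. Disease codes are nested inside systems.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))                       # toy EMR feature vectors
systems = rng.integers(0, 3, size=200)               # 0=respiratory 1=GI 2=systemic
diseases = systems * 10 + rng.integers(0, 4, 200)    # toy disease codes within systems

top = LogisticRegression(max_iter=1000).fit(X, systems)

# One sub-classifier per organ system, trained only on that system's cases.
sub = {s: LogisticRegression(max_iter=1000).fit(X[systems == s], diseases[systems == s])
       for s in np.unique(systems)}

def diagnose(x):
    s = top.predict(x.reshape(1, -1))[0]              # level 1: organ system
    return s, sub[s].predict(x.reshape(1, -1))[0]     # level 2: disease within system

print(diagnose(X[0]))
```

Routing each case through a system-level classifier first keeps every sub-classifier's label space small, which is the same divide-and-conquer logic a physician's differential diagnosis follows.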

The results showed that, based on the data read accurately by the NLP model, the AI diagnostic model could diagnose pediatric diseases precisely: roughly 90% mean accuracy, rising to 98% for neuropsychiatric disorders.

The model also performed well at categorizing and diagnosing specific pediatric diseases, with 89% accuracy on upper respiratory diseases and 87% on lower respiratory diseases. It was likewise accurate on common systemic and high-risk conditions: for example, 90% for infectious mononucleosis, 93% for varicella, 93% for roseola, 94% for influenza, 97% for hand-foot-mouth disease, and 93% for bacterial meningitis.

This indicates that the system can judge common pediatric diseases with fairly high accuracy from the clinical data annotated by the NLP system.

The researchers then used 11,926 clinical cases to compare the AI system against five groups of clinicians of increasing clinical experience and seniority. The AI system's mean F1 score, a measure of a model's overall performance, was higher than that of the two groups of junior physicians but slightly below that of the three groups of senior physicians.
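For readers unfamiliar with the metric: F1 is the harmonic mean of precision and recall, so a diagnostician (human or machine) scores well only by both catching most true cases and avoiding false alarms. A quick check with made-up numbers:

```python
# F1 is the harmonic mean of precision and recall. The inputs below are
# made-up numbers, not figures from the paper.
def f1(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

print(f1(0.90, 0.85))  # ~0.874
```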

The paper argues that the AI system can therefore assist junior clinical teams in diagnosis and raise their overall performance.

On January 1 this year, the system went into clinical use at the Guangzhou Women and Children's Medical Center. In just the 20 days from January 1 to January 21, the hospital's physicians invoked it 30,276 times for diagnostic assistance, with an 87.4% concordance rate between its output and the clinical diagnosis. Sun Xin, director of the center's medical affairs department, said after trying the system that its grouping and classification of diseases is "fairly scientific."

After the paper was published, The New York Times commented that the study drew on data from the hundreds of thousands of Chinese children who visited the pediatric hospital over 18 months, and that having such a vast amount of data available for research is one of China's advantages in the global AI race.

"Data was indeed one of the keys to this result," Ni Hao said. "But high-quality, standardized data comes from a strong joint team; we built a dedicated data-standardization system and did a great deal of data annotation."

Xia Huimin, a corresponding author and professor at the Guangzhou Women and Children's Medical Center, said the paper's broader lesson is that "by systematically learning from text medical records, AI may come to diagnose more diseases." He cautioned, however, that much foundational work still needs to be done well; integrating high-quality data, for instance, is a long-term process.

This writer learned that over the past three years the hospital has focused on standardizing and structuring its data, connecting more than 50 diagnostic data subsystems so they can exchange information, which laid the groundwork for deploying the system.

"Moreover, even after the AI has learned from massive data, the accuracy of its diagnoses still needs to be validated and compared against data from a much wider range of sources," Xia Huimin said.

Among the four elements of putting AI technology into practice, the application scenario also matters greatly. Zhang Kang, the paper's other corresponding author, believes that taking pediatric disease as the study's subject was significant.

"Diagnosing pediatric disease is a major pain point in medicine. Some pediatric diseases are dangerous and need prompt treatment, yet children are poor at describing their symptoms, so fast, accurate pediatric diagnosis is essential," Zhang Kang said. With pediatricians in severely short supply, the AI diagnostic system built in the paper can substantially augment strained medical resources.

Paper information: DOI: 10.1038/s41591-018-0335-9

Publishing NLP papers

Here are seven of the most important papers in NLP, based on 学术范's standard evaluation system:

1. Deep contextualized word representations
Abstract: We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Our word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pre-trained on a large text corpus. We show that these representations can be easily added to existing models and significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment and sentiment analysis. We also present an analysis showing that exposing the deep internals of the pre-trained network is crucial, allowing downstream models to mix different types of semi-supervision signals.
Full text: Deep contextualized word representations (学术范)

2. GloVe: Global Vectors for Word Representation
Abstract: Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition.
Full text: GloVe: Global Vectors for Word Representation (学术范)

3. SQuAD: 100,000+ Questions for Machine Comprehension of Text
Abstract: We present the Stanford Question Answering Dataset (SQuAD), a new reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage. We analyze the dataset to understand the types of reasoning required to answer the questions, leaning heavily on dependency and constituency trees. We build a strong logistic regression model, which achieves an F1 score of 51.0%, a significant improvement over a simple baseline (20%). However, human performance (86.8%) is much higher, indicating that the dataset presents a good challenge problem for future research. The dataset is freely available at this https URL
Full text: SQuAD: 100,000+ Questions for Machine Comprehension of Text (学术范)

4. Sequence to Sequence Learning with Neural Networks
Abstract: Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT-14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous state of the art. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.
Full text: Sequence to Sequence Learning with Neural Networks (学术范)

5. The Stanford CoreNLP Natural Language Processing Toolkit
Abstract: We describe the design and use of the Stanford CoreNLP toolkit, an extensible pipeline that provides core natural language analysis. This toolkit is quite widely used, both in the research NLP community and also among commercial and government users of open source NLP technology. We suggest that this follows from a simple, approachable design, straightforward interfaces, the inclusion of robust and good quality analysis components, and not requiring use of a large amount of associated baggage.
Full text: The Stanford CoreNLP Natural Language Processing Toolkit (学术范)

6. Distributed Representations of Words and Phrases and their Compositionality
Abstract: The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of "Canada" and "Air" cannot be easily combined to obtain "Air Canada". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.
Full text: Distributed Representations of Words and Phrases and their Compositionality (学术范)

7. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank
Abstract: Semantic word spaces have been very useful but cannot express the meaning of longer phrases in a principled way. Further progress towards understanding compositionality in tasks such as sentiment detection requires richer supervised training and evaluation resources and more powerful models of composition. To remedy this, we introduce a Sentiment Treebank. It includes fine grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences and presents new challenges for sentiment compositionality. To address them, we introduce the Recursive Neural Tensor Network. When trained on the new treebank, this model outperforms all previous methods on several metrics. It pushes the state of the art in single sentence positive/negative classification from 80% up to 85.4%. The accuracy of predicting fine-grained sentiment labels for all phrases reaches 80.7%, an improvement of 9.7% over bag of features baselines. Lastly, it is the only model that can accurately capture the effects of negation and its scope at various tree levels for both positive and negative phrases.
Full text: Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank (学术范)

Hope this helps. 学术范 is a newly launched one-stop academic discussion community: it offers a wealth of foreign-language computer-science literature, the latest news from each research field, handy tools for reading and managing papers, and countless like-minded students and researchers ready to join you in lively, high-quality academic discussion. Come join us!

Getting an NLP paper published

What direction do you plan to take your thesis in? Has your advisor approved the topic, and have you shown them an outline to confirm the writing direction? Has your advisor suggested which direction would work best? Before writing, always draft an outline so the advisor can settle the framework, which avoids major rewrites during revision. Mind the school's formatting and writing requirements, or the draft may well be sent back for rework. If anything is still unclear, feel free to ask; I hope you graduate smoothly and move on to a new stage of life.

1. Why topic selection matters

First, choosing the topic is the first step of a thesis: it settles "what to write," that is, the direction of the argument. If "what to write" is unclear, "how to write" cannot even be discussed, so finishing the thesis smoothly depends on a suitable topic.

Second, the thesis is both a comprehensive test of what you have learned over these years and a full examination of the breadth and depth of your thinking. The topic therefore matters greatly: it must account for both the scope the thesis will cover and its social value.

2. Principles of topic selection

(1) Relevance to your major. The topic must be tied closely to your field of study, chosen from academic questions worth exploring within the courses you have taken; going beyond that range defeats the teaching goal of applying learned theory to real problems. For a business administration major, the topic should stay within business administration and enterprise management rather than drifting into public administration or finance. Academic research is never finished: any existing theory has room for improvement, and that is the opening for a topic; starting there, it is not hard to find and pose questions.

(2) Value. A thesis must have scholarly value. Disguised plagiarism, patchworks of copied material, and empty grandstanding detached from reality have none. The topic should carry some academic significance: novelty, practicality, and a degree of theoretical meaning. A business administration student might choose a question of theoretical and practical significance in enterprise management, an issue of general relevance to raising the management level of our enterprises, or the application of a new management method. The value of a thesis hinges on whether it contains your own original insight: not merely arranging and summarizing the views of books or predecessors, but enriching some part of the discipline with new facts or new theory, or applying your specialized knowledge to solve a real problem.

(3) Feasibility. Weigh the topic's breadth and depth against the material you can actually obtain. Have the courage to take on something difficult, but also act within your means. A topic too large or too hard to finish in the time available will not do; one too small or too easy, which cannot exercise your abilities, will not do either. Start from reality: consider whether the topic suits your strengths and interests, whether enough material and information can be collected, and whether it is close to the work you do. Take subjective and objective conditions and the deadline into account, and choose a topic that fits your situation and can be expected to succeed. The size of the topic depends on the author, so hard rules are difficult. A student with real ability may attempt a large piece with theoretical breakthroughs, which is excellent; but for most students, a smaller topic is wiser. A small topic arguing one or two points can still be treated expansively, analyzed from multiple levels and angles; your theoretical ability gets room to show, and the thesis itself comes out full and substantial. A well-chosen small topic, especially one closely tied to your own work or life, makes material easier to collect, the problem easier to see clearly, the argument more thorough, and the conclusion more accurate.

3. Methods of topic selection

First, browse and capture. Read the collected material quickly and in volume, and settle on a topic through comparison. Browsing is best done in a concentrated period once the material reaches a certain quantity, which makes focused comparison and screening easier. Its purpose is to raise questions and find your own topic while digesting the material. That requires reading everything comprehensively: major and minor sources, different angles, different views, without letting preconceptions decide what to keep; analyze all of it calmly and objectively, absorb what the rich material offers, and after repeated reflection a discovery will come, from which you can fix a topic suited to your own situation.

Second, trace and verify. Start from a tentative idea, then verify it by reading. Form a preliminary view from your own accumulation, a rough direction, title, or range of topics, then check whether your idea duplicates others' work or can supplement their views; if no one has made your point yet but you lack sufficient grounds to argue it, stop and rethink. Learn to seize a flash of insight and not let go: while reading or investigating, an idea may surface suddenly, simple, hazy, and unformed, but never discard it lightly.

Third, transfer knowledge. After four years of study, you hold a systematic new grasp of some body of theory (economics, law, or another field), an extension and renewal of older knowledge. On that basis, you will view and solve problems through what you have newly learned, forming new viewpoints. The organic meeting of theory and reality often sparks creative, pioneering thought, giving the thesis topic a sound practical and theoretical footing.

Fourth, watch hot topics. Hot issues are those in modern society that draw wide public attention, touching national welfare or the currents of the era, always attracting notice and provoking thought and debate. Most students already follow international affairs, current news, and economic reform. Choosing a hot social issue as the thesis topic is very worthwhile: it catches the advisor's attention, stirs readers' interest and reflection, matters for understanding and solving real problems, and makes collecting and organizing material and completing the thesis much easier.

Fifth, select through investigation. This resembles the hot-topic method, though some of the issues investigated are social hot spots and some are not. Social investigation helps you understand a problem's history, current state, and trends, sharpens your grasp of its reality, and lets you offer targeted opinions and suggestions. Taking an investigation project as your thesis topic has real practical significance: it can supply valuable material and data for local economic construction and social development, and open a good path toward solving real social problems.

NLP papers published on Twitter

A 27-year-old Chinese-American made his name in one stroke: he built the most accurate COVID-19 forecasting model in the US, single-handedly beating professional institutions.

At just 27, he was described by Bloomberg as a coronavirus data superstar.

Why?

Working alone, with a model built in a single week, his forecasts beat those of institutions backed by billions of dollars and decades of experience.

He is Youyang Gu, who holds a master's degree in electrical engineering and computer science from MIT, along with a degree in mathematics.

Notably, though, he was a complete novice in medicine and epidemiology.

His model even drew high praise from Jeremy Howard, the noted data scientist and founder of fast.ai:

"The only model that looked reasonable."

"He was the only one who actually looked at the data and did it right."

What's more, his model was adopted by the US Centers for Disease Control and Prevention.

So what kind of forecasting model is it?

The story goes back to early last year.

As the epidemic spread around the world, the public looked to modeling to predict the impact it would bring.

Most eyes turned hopefully to forecasting systems built by two professional institutions: Imperial College London and the Seattle-based Institute for Health Metrics and Evaluation (IHME).

But the two institutions' predictions were worlds apart:

Imperial College London: by summer, US deaths from the coronavirus would reach 2 million.

IHME: deaths would reach about 60,000 by August.

(Reality later put the figure at 160,000.)

How could forecasts from two professional institutions differ so wildly?

That question caught the attention of Youyang Gu, then just 26.

He had no experience in medicine or epidemiology, but he was convinced that data-driven prediction could be of real use here.

So around mid-April, working from home, Gu spent just one week building his own predictor, along with a website to display the results.

The method he used was not especially advanced; on the contrary, it was fairly simple.

He first considered the relationships among COVID test counts, hospitalizations, and other factors, but found in the process that the data provided by the individual states and the federal government were inconsistent.

That raised the question: which data could be trusted?

Gu concluded that the most reliable numbers seemed to be the daily death counts:

"Other models used many data sources, but I decided to use past deaths to predict future deaths."

As for why, Gu explained that using deaths as the sole input "helps filter the signal from the noise."
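The sketch below is a loose illustration of that "past deaths as the sole input" idea, not Gu's actual model, which was considerably more sophisticated. Using an invented daily death series, it smooths reporting noise with a moving average, fits a short-term log-linear trend, and extrapolates roughly a week ahead:

```python
# Loose illustration only - not Gu's actual model. It shows the spirit of
# "past deaths as the sole input": smooth the noisy daily series, fit a
# short-term log-linear growth trend, and extrapolate cumulative deaths.
import numpy as np

daily = np.array([310, 355, 401, 380, 452, 498, 530, 515, 580, 622], float)  # hypothetical

smooth = np.convolve(daily, np.ones(3) / 3, mode="valid")   # 3-day moving average
t = np.arange(len(smooth))
slope, intercept = np.polyfit(t, np.log(smooth), 1)         # log-linear growth fit

horizon = np.arange(len(smooth), len(smooth) + 7)           # project ~one week ahead
projected_daily = np.exp(intercept + slope * horizon)
print(daily.sum() + projected_daily.sum())                  # projected cumulative total
```

The appeal of the single-input approach is exactly what Gu describes: every extra noisy data source is another way to be wrong, while the death series, however grim, was the most consistently reported signal available.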

So how did the predictions fare?

Remarkably well.

Right after finishing the model, he predicted the US would reach 80,000 deaths on May 9; the actual count that day was 79,926.

IHME's forecast at the time, by contrast, was that deaths "would not exceed 80,000 in all of 2020."

Gu further predicted deaths would reach 90,000 by May 18 and 100,000 by May 27.

Events proved both predictions right as well.

Beyond the precise numbers, Gu predicted, as many states moved from lockdown toward reopening, that a second large wave of infections and deaths would follow.

On the very day Gu published that prediction, Trump was declaring that IHME's forecast of 60,000 deaths showed the epidemic would soon be over...

Perhaps because the model proved so accurate, more and more people began paying attention to Gu's work.

On Twitter, Gu tagged journalists and emailed epidemiology experts, asking them to check his numbers.

In late April last year, Carl Bergstrom, the noted University of Washington biologist, shared Gu's model on Twitter.

Soon after, the US Centers for Disease Control and Prevention posted Gu's numbers on its COVID forecasting site.

What's more, as the epidemic evolved, Gu, himself a Chinese immigrant, joined regular meetings organized by teams of American experts, all of whom wanted to help improve his model.

Traffic to his website exploded, with millions of people checking his numbers every day.

Typically, the figures Gu's model predicted for a few weeks out came very close to the actual death counts when those dates arrived.

As similar forecasting models multiplied, Nicholas Reich, associate professor of biostatistics and epidemiology at the University of Massachusetts Amherst, tracked 50 of them:

Gu's model consistently ranked near the top.

But in November last year, Gu made a surprising decision: he wound down his forecasting work.

Reich's comment:

"Youyang Gu is a very humble person. He saw that other people's models were also doing well and felt his work was done."

A month before Gu stopped the project, he predicted that deaths would reach 231,000 by November 1; the actual number was 230,995.

IHME's Chris Murray, however, argued:

Gu's machine-learning approach works fairly well for short-term forecasts but does not really understand "what is happening in the big picture."

Gu did not respond to the criticism of his model; instead, he said:

"I'm very grateful for the work Dr. Chris Murray and his team have done; without them, I wouldn't be where I am today."

After taking a break, Gu threw himself back into the effort.

This time he set out to predict such questions as how many Americans have been infected with the coronavirus, how fast vaccines would roll out, and when, if ever, the US might reach herd immunity.

His projections suggested that by June of this year roughly 61% of the US population should have some form of immunity, whether from vaccination or prior infection.

Gu has long hoped for work with a large social impact that avoids the politics, bias, and baggage large institutions sometimes carry. In his view:

"There are a lot of shortcomings in this field that someone with my background can help improve."

Who is Youyang Gu?

Youyang Gu was born to Chinese immigrants in the US and grew up in Illinois and California.

He loved math and science from childhood but did not truly touch computer science until after high school. He credits his entry into the field to his father, a computer professional.

Gu did both his undergraduate and graduate studies at MIT, where he earned dual bachelor's degrees in computer science and mathematics and a master's degree in computer science.

After graduating, he spent a year doing research in the NLP group of MIT's renowned CSAIL lab, publishing a paper at EMNLP 2016 that same year.

That was his first contact with big data, and his first experience building statistical models to make predictions from it.

He did not continue in academic research, however, but went into industry. After leaving MIT, he joined the financial sector, writing algorithms for high-frequency trading systems.

There his data-modeling skills were honed further, since in financial trading the data must be rigorously quantitative and as accurate as possible.

He later moved into the sports world, continuing his big-data research. This cross-disciplinary experience taught him how to step into a new field and build accurate models in it.

In his own words, his specialty is using machine learning to understand data, separate the signal from the noise, and make accurate predictions.

When building the COVID death model, he initially considered the relationships among confirmed cases, hospitalizations, and other factors. He then found that state and federal reporting was inconsistent, and that the most reliable figure was the daily death count.

Gu's view is that if the input data is of poor quality, then the more data you add, the worse the output becomes.

Within a week, he had built a simple model from the death data alone and put the forecasting site online.

Since last April, Gu has volunteered several thousand hours on the project, entirely unpaid.

In an interview with Eric Topol, editor-in-chief of the medical site Medscape, Gu said he was working full time on the COVID forecasting site, with no side job and no income, living off past savings.

Even so, this public-interest project drew sniping from some Twitter users, but he persisted.

Starting in December, covid19-projections.com began accepting donations from supporters and has since met its $50,000 fundraising goal.

Beyond infection counts, Gu's COVID site gained a new function: from December last year, covid19-projections.com began tracking and modeling vaccinations and the path to herd immunity.

This month, Gu renamed "herd immunity" to "returning to normal," because his model's projections suggest the US is unlikely to reach theoretical herd immunity in 2021.

What comes next? What are Gu's career plans once the epidemic ends?

He says it is far too early to tell: his current job is predicting the epidemic's course, but he finds it hard to predict what he himself will be doing in three months or a year.

Thanks to this work, universities and companies around the world have already come courting.

In short, the articles people have shared describe a young Chinese-American who, in very little time, built a COVID death-prediction model, and the impressive part is that its accuracy beat models built with enormous budgets.
