Table of Contents

3.1 The NLTK Toolkit
3.1.1 Common Corpora and Lexical Resources
3.1.2 Common NLP Tools
3.2 The LTP Toolkit
3.3 PyTorch Basics
3.3.1 Tensor Basics
3.3.2 Basic Tensor Operations
3.3.3 Automatic Differentiation
3.3.4 Reshaping Tensors
3.3.5 Broadcasting
3.3.6 Indexing and Slicing
3.3.7 Removing and Adding Dimensions
3.4 Large-Scale Pretrained Models

3.1 The NLTK Toolkit

3.1.1 Common Corpora and Lexical Resources
Downloading corpora

import nltk
nltk.download()   # opens the interactive downloader for corpora and models

Stop words

from nltk.corpus import stopwords
print(stopwords.words('english'))

['i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', ...]
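A typical use of the stop-word list is filtering function words out of a token list. A minimal sketch (the sample tokens are made up for illustration):

from nltk.corpus import stopwords

words = ['This', 'is', 'an', 'example', 'of', 'stop', 'word', 'filtering']
filtered = [w for w in words if w.lower() not in stopwords.words('english')]
print(filtered)   # ['example', 'stop', 'word', 'filtering']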
Common lexical resources

(1) WordNet

from nltk.corpus import wordnet
syns = wordnet.synsets('bank')
print(syns[0].name())         # name of the first synset
print(syns[0].definition())   # its definition
print(syns[0].examples())     # example sentences
print(syns[0].hypernyms())    # hypernyms (more general synsets)

bank.n.01
sloping land (especially the slope beside a body of water)
['they pulled the canoe up on the bank', 'he sat on the bank of the river and watched the currents']
[Synset('slope.n.01')]
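WordNet can also quantify how close two senses are. A minimal sketch using path similarity (scores lie in (0, 1]; higher means more similar):

from nltk.corpus import wordnet

dog = wordnet.synset('dog.n.01')
cat = wordnet.synset('cat.n.01')
print(dog.path_similarity(cat))   # 0.2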
3.1.2 Common NLP Tools

Sentence splitting: dividing a long document into individual sentences.
from nltk.corpus import gutenberg
from nltk.tokenize import sent_tokenize

text = gutenberg.raw('austen-emma.txt')
sentences = sent_tokenize(text)
print(sentences[0])
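NLTK also provides word-level tokenization via word_tokenize; a minimal sketch (the sample sentence is made up):

from nltk.tokenize import word_tokenize

print(word_tokenize("Isn't this easy?"))
# ['Is', "n't", 'this', 'easy', '?']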
3.2 The LTP Toolkit

from ltp import LTP

ltp = LTP()
segment, hidden = ltp.seg(["南京市长江大桥。"])
print(segment)

AttributeError: 'LTP' object has no attribute 'seg'

Something went wrong here: seg() belongs to an older LTP 4.x API, and the installed version no longer provides it.
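A sketch of the newer interface, assuming LTP >= 4.2, where tasks such as Chinese word segmentation ("cws") are requested through pipeline(); the model name "LTP/small" is an assumption and may need adjusting:

from ltp import LTP

ltp = LTP("LTP/small")   # assumption: a pretrained model identifier
output = ltp.pipeline(["南京市长江大桥。"], tasks=["cws"])
print(output.cws)        # word segmentation result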
3.3 PyTorch Basics

PyTorch is a tensor (Tensor) based numerical computing package that provides two high-level features:

Powerful tensor computation accelerated by the GPU (graphics processing unit, i.e. the graphics card);
automatic differentiation, which makes it possible to optimize model parameters with gradient-based methods.
3.3.1 Tensor Basics
import torch

print(torch.empty(2, 3))                      # uninitialized memory
print(torch.rand(2, 3))                       # uniform on [0, 1)
print(torch.randn(2, 3))                      # standard normal
print(torch.zeros(2, 3, dtype=torch.long))    # set the data type
print(torch.zeros(2, 3, dtype=torch.double))
print(torch.tensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]))  # from a custom list
print(torch.arange(10))                       # the integer sequence 0..9

tensor([[-8.5596e-30,  8.4358e-43, -8.5596e-30],
        [ 8.4358e-43, -1.1837e-29,  8.4358e-43]])
tensor([[0.7292, 0.9681, 0.8636],
        [0.3833, 0.8089, 0.5729]])
tensor([[-1.7307,  1.2082,  1.9423],
        [ 0.2461,  2.3273,  0.1628]])
tensor([[0, 0, 0],
        [0, 0, 0]])
tensor([[0., 0., 0.],
        [0., 0., 0.]], dtype=torch.float64)
tensor([[1., 2., 3.],
        [4., 5., 6.]])
tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
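Tensors also interconvert with NumPy arrays, which is handy when loading data; a minimal sketch:

import numpy as np
import torch

a = np.array([1.0, 2.0, 3.0])
t = torch.from_numpy(a)   # shares memory with the NumPy array
print(t)                  # tensor([1., 2., 3.], dtype=torch.float64)
print(t.numpy())          # [1. 2. 3.]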
Using the GPU

print(torch.rand(2, 3).cuda())           # move the tensor to the GPU
print(torch.rand(2, 3).to('cuda'))       # equivalent, via .to()
print(torch.rand(2, 3, device='cuda'))   # allocate directly on the GPU
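The three calls above fail on a machine without a CUDA-capable GPU; a portable pattern is to choose the device at runtime:

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(torch.rand(2, 3, device=device))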
3.3.2 Basic Tensor Operations

PyTorch arithmetic, simply put, keeps the data in tensors and operates on them elementwise: +, -, *, /.
x = torch.tensor([1, 2, 3], dtype=torch.double)
y = torch.tensor([4, 5, 6], dtype=torch.double)
print(x + y)
print(x - y)
print(x * y)
print(x / y)
print(x.dot(y))   # dot product
print(x.sin())
print(x.exp())

tensor([5., 7., 9.], dtype=torch.float64)
tensor([-3., -3., -3.], dtype=torch.float64)
tensor([ 4., 10., 18.], dtype=torch.float64)
tensor([0.2500, 0.4000, 0.5000], dtype=torch.float64)
tensor(32., dtype=torch.float64)
tensor([0.8415, 0.9093, 0.1411], dtype=torch.float64)
tensor([ 2.7183,  7.3891, 20.0855], dtype=torch.float64)

x = torch.tensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
print(x.mean(dim=0))                 # mean of each column
print(x.mean(dim=0, keepdim=True))   # same, keeping the reduced dimension
print(x.mean(dim=1))                 # mean of each row
print(x.mean(dim=1, keepdim=True))
y = torch.tensor([[7.0, 8.0, 9.0], [10.0, 11.0, 12.0]])
print(torch.cat((x, y), dim=0))      # concatenate along rows
print(torch.cat((x, y), dim=1))      # concatenate along columns

tensor([2.5000, 3.5000, 4.5000])
tensor([[2.5000, 3.5000, 4.5000]])
tensor([2., 5.])
tensor([[2.],
        [5.]])
tensor([[ 1.,  2.,  3.],
        [ 4.,  5.,  6.],
        [ 7.,  8.,  9.],
        [10., 11., 12.]])
tensor([[ 1.,  2.,  3.,  7.,  8.,  9.],
        [ 4.,  5.,  6., 10., 11., 12.]])
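x.dot(y) above is defined for 1-D tensors; 2-D matrix multiplication goes through torch.mm or the @ operator. A minimal sketch with the same x and y as above:

x = torch.tensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
y = torch.tensor([[7.0, 8.0, 9.0], [10.0, 11.0, 12.0]])
print(x @ y.transpose(0, 1))   # (2, 3) @ (3, 2) -> (2, 2)
# tensor([[ 50.,  68.],
#         [122., 167.]])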
3.3.3 Automatic Differentiation

Autograd automatically computes the derivative of a function with respect to a variable at a given value.
x = torch.tensor([2.], requires_grad=True)
y = torch.tensor([3.], requires_grad=True)
z = (x + y) * (y - 2)
print(z)
z.backward()    # run backpropagation to compute the gradients
print(x.grad, y.grad)

tensor([5.], grad_fn=<MulBackward0>)
tensor([1.]) tensor([6.])

These match the analytic derivatives: with z = (x + y)(y - 2), dz/dx = y - 2 = 1 and dz/dy = x + 2y - 2 = 6 at x = 2, y = 3.
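One detail to keep in mind: gradients accumulate in .grad across backward() calls, so training loops reset them between steps. A minimal sketch (backward() needs retain_graph=True to be run twice on the same graph):

x = torch.tensor([2.], requires_grad=True)
y = torch.tensor([3.], requires_grad=True)
z = (x + y) * (y - 2)
z.backward(retain_graph=True)   # keep the graph so backward can run again
z.backward()                    # gradients are added to the previous ones
print(x.grad, y.grad)           # tensor([2.]) tensor([12.])
x.grad.zero_()                  # reset before the next step
y.grad.zero_()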
3.3.4 Reshaping Tensors

x = torch.tensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
print(x, x.shape)
print(x.view(2, 3))
print(x.view(3, 2))
print(x.view(-1, 3))    # -1 is inferred from the remaining dimensions
y = torch.tensor([[7.0, 8.0, 9.0], [10.0, 11.0, 12.0]])
print(y.transpose(0, 1))

tensor([[1., 2., 3.],
        [4., 5., 6.]]) torch.Size([2, 3])
tensor([[1., 2., 3.],
        [4., 5., 6.]])
tensor([[1., 2.],
        [3., 4.],
        [5., 6.]])
tensor([[1., 2., 3.],
        [4., 5., 6.]])
tensor([[ 7., 10.],
        [ 8., 11.],
        [ 9., 12.]])
3.3.5 Broadcasting
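When two tensors have different but compatible shapes, PyTorch broadcasts them: size-1 dimensions are expanded to match, without copying data, and the operation is applied elementwise. A minimal sketch:

x = torch.arange(1, 4).view(3, 1)   # shape (3, 1)
y = torch.arange(4, 6).view(1, 2)   # shape (1, 2)
print(x + y)                        # both broadcast to shape (3, 2)
# tensor([[5, 6],
#         [6, 7],
#         [7, 8]])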
3.3.6 Indexing and Slicing
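Basic indexing and slicing work as in NumPy. A minimal sketch:

x = torch.arange(12).view(3, 4)
print(x[1])          # second row: tensor([4, 5, 6, 7])
print(x[:, 2])       # third column: tensor([ 2,  6, 10])
print(x[1:3, 1:3])   # sub-matrix: tensor([[ 5,  6],
                     #                     [ 9, 10]])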
3.3.7 Removing and Adding Dimensions
x = torch.tensor([1.0, 2.0, 3.0, 4.0])
print(x.shape)
y = torch.unsqueeze(x, dim=0)   # insert a new size-1 dimension at position 0
print(y, y.shape)
y = x.unsqueeze(dim=0)          # equivalent method form
print(y, y.shape)
z = y.squeeze()                 # drop all size-1 dimensions
print(z, z.shape)

torch.Size([4])
tensor([[1., 2., 3., 4.]]) torch.Size([1, 4])
tensor([[1., 2., 3., 4.]]) torch.Size([1, 4])
tensor([1., 2., 3., 4.]) torch.Size([4])

3.4 Large-Scale Pretrained Models