PyTorch tensors
Convert a list to a tensor
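The conversion code itself is missing here; a minimal sketch that reproduces the output below:

import torch

x = torch.tensor([1, 2, 3, 6])  # build a tensor directly from a Python list
print(x, x.dtype, type(x))      # dtype defaults to torch.int64 for Python ints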
Output
tensor([1, 2, 3, 6]) torch.int64 <class 'torch.Tensor'>
torch.ones_like()
x1 = torch.ones_like(x)  # returns a tensor with the same shape as the input,
                         # but filled with ones, hence the name "ones_like"
print(x1, x, f'shape {x1.shape}', sep='\n')
torch.rand_like()
x2 = torch.rand_like(x1, dtype=torch.float16)  # similar to ones_like, but fills the
                                               # tensor with random numbers
x2_ = torch.rand_like(x, dtype=torch.float16)
print(x2,x2_,sep='\n')
Output
torch.Tensor.numpy() & torch.from_numpy()
Convert a tensor to a NumPy ndarray, and an ndarray back to a tensor.
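The code is missing here; a minimal sketch, reusing the tensor x from above:

n = x.numpy()            # tensor -> NumPy ndarray (shares memory on CPU)
t = torch.from_numpy(n)  # ndarray -> tensor (also shares memory)
print(n, type(n))
print(t, type(t))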
Output
torch.rand(), torch.ones(), torch.zeros()
print('rand(): ', torch.rand((1,2)))
print('ones(): ', torch.ones((2,3)))
print('zeros(): ', torch.zeros((3,3)))
# each of these takes the desired shape as its argument
Output
Move tensors between devices
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(f'device is {device}')
x = x.to(device)
print(x.device, '\t', torch.cuda.get_device_name())  # note: get_device_name() errors on CPU-only machines
Indexing
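The indexing code is missing; a plausible sketch consistent with the outputs below (the 2-D shape and the variable name t are assumptions):

t = torch.rand(6, 5)
print(t[2])     # third row: a tensor of 5 elements
print(t[:, 2])  # third column: a tensor of 6 elements, sharing t[2, 2] with the row above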
Output
tensor([0.0695, 0.9168, 0.0840, 0.4950, 0.1248])
Output
tensor([0.5354, 0.6784, 0.0840, 0.7625, 0.4684, 0.9083])
Concatenation
print('Before concatenation')
print(x2_, x2)
print('After concatenation')
print(torch.cat((x2_, x2), dim=0))
Output
o = torch.ones(3, 3)
z = torch.zeros(3, 3)
display(o, z)
print('After concatenation')
print(torch.cat((o, z), dim=0))  # for this, the number of columns
                                 # of the two tensors must be the same
print(torch.cat((o, z), dim=1))  # for this, the number of rows must be the same
Output
tensor([[1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.]])
tensor([[0., 0., 0.],
        [0., 0., 0.],
        [0., 0., 0.]])
After concatenation
tensor([[1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.],
        [0., 0., 0.],
        [0., 0., 0.],
        [0., 0., 0.]])
tensor([[1., 1., 1., 0., 0., 0.],
        [1., 1., 1., 0., 0., 0.],
        [1., 1., 1., 0., 0., 0.]])
Multiply two tensors using matmul
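The matmul code itself is missing; a minimal sketch whose operand shapes (assumptions) reproduce the output shape below:

a = torch.rand(5, 3, 4, 2)
b = torch.rand(5, 3, 2, 1)
print(torch.matmul(a, b).size())  # batched matmul: (4, 2) @ (2, 1) -> (4, 1) per batch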
Output
torch.Size([5, 3, 4, 1])
Broadcasting in tensors
Broadcasting is how PyTorch performs element-wise operations, such as addition, on tensors of different shapes: if you add two tensors whose shapes differ, PyTorch stretches ("broadcasts") them so that the addition becomes possible. A few rules must hold:
- Each tensor must have at least one dimension.
- Align the two shapes from the trailing (rightmost) dimension. Say the shapes are [a,b,c,d] and [x,y,z]: each pair of corresponding dimensions must be equal, or one of them must be 1, or one of them must not exist.
Example: consider the shapes (5,1,4,1) and (3,1,1). These two can be added because every pair of corresponding dimensions satisfies the second condition (1 vs 1, 4 vs 1, 1 vs 3, and 5 vs a missing dimension). But (5,1,5,7) and (1,1,7,9) can't be added, because at dimension 2 the sizes are 5 and 7: they are unequal and neither is 1 (dimension 3, with 7 and 9, fails the same way).
The resulting tensor takes the larger size along each dimension, i.e. (max(x.dim0,y.dim0), max(x.dim1,y.dim1), max(x.dim2,y.dim2), ...), after the shorter shape is padded on the left with 1s.
Output
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[52], line 3
1 x=torch.ones(5,1,4,1)
2 y=torch.ones( 3,5,1)
----> 3 print((x+y).size())
RuntimeError: The size of tensor a (4) must match the size of tensor b (5) at non-singleton dimension 2
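The code for the successful case is not shown; a sketch with an assumed shape for y (one of several that broadcast to the same result):

x = torch.ones(5, 1, 4, 1)
y = torch.ones(3, 1, 1)  # left-padded to (1, 3, 1, 1); every dimension pair is equal or 1
print((x + y).size())    # -> torch.Size([5, 3, 4, 1])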
Output
torch.Size([5, 3, 4, 1])