What is a Tensor?

If you're new to machine learning, you've almost certainly seen the word "tensor." Tensors are common data structures in machine learning and deep learning (Google's open-source software library for machine learning is even called TensorFlow). But what is a tensor, exactly?

In simple terms, a tensor is a multi-dimensional data structure. Vectors are one-dimensional data structures and matrices are two-dimensional data structures. Tensors are superficially similar to these other data structures, but the difference is that they can exist in any number of dimensions, from zero to n. The number of dimensions is referred to as the tensor's rank, so a first-rank tensor, for example, is one-dimensional.
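As a rough sketch of what rank means in practice (using NumPy arrays as a stand-in, since that is how most machine learning libraries store tensor data), arrays of increasing rank simply gain dimensions:

```python
import numpy as np

scalar = np.array(5.0)                    # rank-0 tensor: a single number
vector = np.array([1.0, 2.0, 3.0])        # rank-1 tensor: one dimension
matrix = np.array([[1.0, 2.0],
                   [3.0, 4.0]])           # rank-2 tensor: two dimensions
cube = np.zeros((2, 2, 2))                # rank-3 tensor: three dimensions

for t in (scalar, vector, matrix, cube):
    print(t.ndim, t.shape)                # ndim is the rank; shape lists each dimension's size
# 0 ()
# 1 (3,)
# 2 (2, 2)
# 3 (2, 2, 2)
```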

This surface similarity is often what makes tensors difficult to grasp at first. For instance, a second-rank tensor can be represented as a matrix. The stress on "can be" is important because tensors have properties that not all matrices share. Using the logic of "all squares are rectangles, but not all rectangles are squares," Steven Steinke explains that "any rank-2 tensor can be represented as a matrix, but not every matrix is a rank-2 tensor."

Tensor vs Matrix - What is the Difference?

The critical difference that sets tensors apart from matrices is that tensors are dynamic: a tensor obeys specific transformation rules as part of the structure it inhabits. If the coordinate system (or the other mathematical entities in that structure) transforms in a regular way, the tensor's components transform along with it, according to the rules established for the system. Not all matrices have this property, which is why not every matrix is a second-rank tensor.
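Here is a minimal sketch of that transformation rule, assuming a simple 2-D rotation of the coordinate system (the numbers are arbitrary): a rank-2 tensor's components change in a prescribed way when the basis rotates, whereas a plain grid of numbers carries no such rule.

```python
import numpy as np

# A rank-2 tensor expressed as a matrix of components in the original basis.
T = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# Rotate the coordinate system by 30 degrees.
theta = np.deg2rad(30)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Transformation rule for a rank-2 tensor under this change of basis:
# the components in the new coordinates are R @ T @ R.T.
T_new = R @ T @ R.T
print(T_new)
```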

Why are Tensors Used in Machine Learning?

Now that we have a working definition for tensors, why are they so popular in machine learning? Well, computers need data to learn, and tensors are a more natural, intuitive way of processing many kinds of data, especially big and complex data sets. Here's an example:

Video is a series of images correlated over time. We can use tensors to represent that correlation more directly and intuitively than by flattening everything into two-dimensional matrices. A rank-3 tensor can encode all the aspects of a single image (height, width, and color), while a rank-4 tensor can also hold information about the time or order of the images.
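Here is a minimal sketch of that layout (the dimensions and random values below are placeholders, not any particular dataset):

```python
import numpy as np

# One color image: height x width x color channels -> a rank-3 tensor.
image = np.random.rand(480, 640, 3)
print(image.ndim, image.shape)   # 3 (480, 640, 3)

# A short video clip: frames x height x width x channels -> a rank-4 tensor.
# The extra leading dimension holds the time/order of the images.
video = np.random.rand(30, 480, 640, 3)
print(video.ndim, video.shape)   # 4 (30, 480, 640, 3)
```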

Thus, tensors allow powerful computers to solve big data problems more quickly, and they allow deep learning systems and neural networks (which often work with data in hundreds or thousands of dimensions) to process that data more naturally.
