Matrices (or linear transformations in general) are important examples of tensors. There's a nice adjunction between the tensor product A(x)B and the space of linear transformations B=>C, given by:
Hom(A(x)B, C) = Hom(A, B=>C)
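Concretely, this adjunction is just currying. A sketch in NumPy (array shapes and variable names are my own choices, not anything canonical): a 3-index array encodes a linear map out of A(x)B, and fixing its first argument yields a linear map B -> C, i.e. an element of Hom(A, B=>C).

```python
import numpy as np

rng = np.random.default_rng(0)
a_dim, b_dim, c_dim = 2, 3, 4

# A linear map A(x)B -> C, given by its components T[i, j, k].
T = rng.standard_normal((a_dim, b_dim, c_dim))

u = rng.standard_normal(a_dim)  # element of A
v = rng.standard_normal(b_dim)  # element of B

# Uncurried view: feed the pure tensor u(x)v into T directly.
uncurried = np.einsum('ijk,i,j->k', T, u, v)

# Curried view: applying u first yields a linear map B -> C
# (a b_dim x c_dim array), i.e. an element of B=>C; then apply it to v.
# This is the Hom(A, B=>C) side of the adjunction.
map_B_to_C = np.einsum('ijk,i->jk', T, u)
curried = np.einsum('jk,j->k', map_B_to_C, v)

assert np.allclose(uncurried, curried)
```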
In the case of TensorFlow, I think they do still talk about linear transformations of some kind, so it's perfectly fine to call them tensors.
Tensors are introduced by physicists to ensure various physical quantities (which involve coordinates and their derivatives) do not depend on the arbitrarily chosen coordinate system. This is ensured through the transformation properties of tensors.
The name "tensor" itself comes from the theory of elasticity: the Cauchy stress tensor (which, BTW, is uniform in many practical cases) obeys the following tensor transformation rule:
https://en.wikipedia.org/wiki/Cauchy_stress_tensor#Transform...
like any other (contravariant) tensor must.
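That transformation rule for a rank-2 tensor under a rotation R is sigma' = R sigma R^T. A small NumPy sketch (the stress values and rotation angle here are arbitrary illustrative numbers): scalar invariants like the trace and determinant survive the coordinate change, which is exactly what the tensor rule guarantees.

```python
import numpy as np

# An example Cauchy stress tensor, in the original coordinates
# (values are arbitrary, for illustration only).
sigma = np.array([[10.0,  2.0, 0.0],
                  [ 2.0,  5.0, 0.0],
                  [ 0.0,  0.0, 1.0]])

# Rotate the coordinate axes by 30 degrees about z.
theta = np.radians(30.0)
c, s = np.cos(theta), np.sin(theta)
R = np.array([[  c,   s, 0.0],
              [ -s,   c, 0.0],
              [0.0, 0.0, 1.0]])

# Rank-2 tensor transformation rule: sigma' = R sigma R^T.
sigma_rot = R @ sigma @ R.T

# Coordinate-independent quantities are unchanged by the transformation.
assert np.isclose(np.trace(sigma_rot), np.trace(sigma))
assert np.isclose(np.linalg.det(sigma_rot), np.linalg.det(sigma))
```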
Matrices are not examples of tensors. Matrices can be used to represent tensors, in which case the tensor product becomes the Kronecker product, but matrices in general don't have to represent tensors. You can put anything in a matrix, including your favorite colors or a list of random numbers, and it won't be a tensor in general, not unless it transforms like a tensor under changes of coordinate system.
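The "tensor product becomes the Kronecker product" point can be checked directly in NumPy (random matrices here, just for illustration): the Kronecker product of two matrices represents the tensor product of the corresponding linear maps, acting on pure tensors componentwise.

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear maps A: V -> V and B: W -> W, represented as matrices.
A = rng.standard_normal((2, 2))
B = rng.standard_normal((3, 3))

u = rng.standard_normal(2)  # vector in V
v = rng.standard_normal(3)  # vector in W

# The Kronecker product represents the tensor product of the maps,
# and satisfies the mixed-product property: (A(x)B)(u(x)v) = (Au)(x)(Bv).
lhs = np.kron(A, B) @ np.kron(u, v)
rhs = np.kron(A @ u, B @ v)
assert np.allclose(lhs, rhs)
```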
Similarly, a TensorFlow "tensor" is just a multidimensional data array with no transformation rules enforced on it, and is therefore not a tensor.
it’s like some people invented a new word and won’t tell you what it actually means in sufficient detail to differentiate it from all the other words you know. so you keep using it with others in the hopes that contextual information will finally make it clear. one day.
Thankfully, there is a great historical example of this. The electric field vector \vec{E} = (E_x, E_y, E_z) is not a tensor: it doesn't obey the tensor transformation law under Lorentz transformations. Similarly, the magnetic field vector is not a tensor. These are arrays of components, but not tensors.
As you know, the electromagnetic tensor [1] is the object that correctly transforms under coordinate transformations, and hence allows different observers to agree with each other.
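This can be demonstrated numerically. A sketch, assuming units with c = 1, metric signature (+,-,-,-), and the standard contravariant component layout of F^{mu nu} (the particular field values and boost speed are arbitrary): applying the rank-2 transformation rule F' = L F L^T for a boost along x reproduces the textbook mixing of E and B.

```python
import numpy as np

def field_tensor(E, B):
    """Contravariant F^{mu nu}, units with c = 1, signature (+,-,-,-)."""
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    return np.array([[0.0, -Ex, -Ey, -Ez],
                     [ Ex, 0.0, -Bz,  By],
                     [ Ey,  Bz, 0.0, -Bx],
                     [ Ez, -By,  Bx, 0.0]])

# Lorentz boost along x with speed beta (arbitrary example value).
beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
L = np.array([[        gamma, -gamma * beta, 0.0, 0.0],
              [-gamma * beta,         gamma, 0.0, 0.0],
              [          0.0,           0.0, 1.0, 0.0],
              [          0.0,           0.0, 0.0, 1.0]])

E = np.array([1.0, 2.0, 3.0])   # arbitrary field values
B = np.array([0.5, -1.0, 2.0])
F = field_tensor(E, B)

# The tensor transformation rule: F' = L F L^T.
F_prime = L @ F @ L.T

# Read the transformed fields back off F'.
E_prime = F_prime[1:, 0]
B_prime = np.array([F_prime[3, 2], F_prime[1, 3], F_prime[2, 1]])

# Textbook result for a boost along x: E and B mix.
assert np.isclose(E_prime[0], E[0])                          # E_x unchanged
assert np.isclose(E_prime[1], gamma * (E[1] - beta * B[2]))  # E_y mixes with B_z
assert np.isclose(B_prime[2], gamma * (B[2] - beta * E[1]))  # B_z mixes with E_y
```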
But not every multidimensional data array is a valid tensor.
Sure there are: any basis of the underlying vector space(s) induces a basis of the tensor space. Components with respect to some basis are coordinates. You can then investigate what happens to the induced basis (or rather, to the respective components) under a basis transformation of the underlying vector space(s), which is where the "physicist's" definition of tensors originates.
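A minimal NumPy sketch of that component picture (the change-of-basis matrix and component values are random, for illustration): vector components transform contravariantly, a bilinear form's components transform covariantly, and the pairing of the two is basis-independent, which is exactly the "transforms like a tensor" requirement.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3

# Change of basis: columns of P are the new basis vectors,
# expressed in the old basis (random, almost surely invertible).
P = rng.standard_normal((n, n))
assert abs(np.linalg.det(P)) > 1e-8

# A vector's components transform contravariantly: x' = P^{-1} x.
x = rng.standard_normal(n)
x_new = np.linalg.solve(P, x)

# A bilinear form's components (a (0,2)-tensor g) transform
# covariantly: g' = P^T g P.
g = rng.standard_normal((n, n))
g_new = P.T @ g @ P

# The scalar g(v, v) doesn't depend on the basis chosen.
assert np.isclose(x @ g @ x, x_new @ g_new @ x_new)
```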
A change of coordinates does indeed induce a change of basis, but a change of basis isn't really a change of coordinates. And strictly speaking, some vector spaces don't have an obvious basis (without invoking the axiom of choice), so making a basis a prerequisite for the definition is not ideal.
The whole requirement that a tensor is 'something that transforms like [...] under a coordinate transformation' is just how physicists have chosen to phrase the fact that a vector bundle is only well defined if its definition doesn't depend on some arbitrary choice of coordinates. In my opinion this requirement is more easily apparent in the mathematical definition, where there is no choice of coordinates in the first place, than in the physicists' way of working with some choice of coordinates and checking how things transform.
However, physicists get introduced to tensors, e.g. the moment-of-inertia tensor when discussing rigid bodies, far earlier than any excursion into differential geometry.