The tensor product v₁⊗v₂ of vectors from V is the bilinear form on the dual space V* defined by v₁⊗v₂(ω₁,ω₂) = v₁(ω₁)v₂(ω₂). The tensor product V⊗V is the space of all bilinear forms on V*.
This generalizes easily to products of forms (covariant tensors) and to mixed tensor products. Getting the coordinate representation in a basis (E₁ ... Eₙ) is straightforward: (v₁⊗v₂)ₖ,ₗ = v₁⊗v₂(εₖ,εₗ), where εₖ is dual to Eₖ...
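In coordinates this is just the outer product of the coordinate vectors; a minimal NumPy sketch (with made-up vectors) of the formula (v₁⊗v₂)ₖ,ₗ = (v₁)ₖ(v₂)ₗ:

```python
import numpy as np

# Coordinate representation of v1 ⊗ v2 in the standard basis:
# (v1 ⊗ v2)_{k,l} = v1_k * v2_l, i.e. the outer product.
v1 = np.array([1.0, 2.0, 3.0])
v2 = np.array([4.0, 5.0, 6.0])

T = np.outer(v1, v2)  # T[k, l] == v1[k] * v2[l]

# Same thing written as an explicit index contraction:
assert np.allclose(T, np.einsum('k,l->kl', v1, v2))
print(T[1, 2])  # v1[1] * v2[2] = 2 * 6 = 12
```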
This is only true for finite-dimensional V. In general, bilinear forms on V* correspond to linear maps `V*⊗V* -> F` (where F is the base field), i.e. to elements of `(V*⊗V*)*`. For infinite-dimensional V this space is much larger than `V⊗V`.
The general characterization is that `V⊗W` is the vector space (together with the associated bilinear map `⊗: V x W -> V⊗W`) such that for every bilinear map `h: V x W -> T` there is a unique linear `g` such that `h(v,w) = g(v⊗w)`. Intuitively, it adds "just enough" elements to `V x W` to encode arbitrary bilinear maps as linear ones.
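A finite-dimensional sketch of that universal property (with made-up matrices): any bilinear map h(v,w) = vᵀMw factors through ⊗ as a linear functional g on V⊗W, with g(T) = Σᵢⱼ Mᵢⱼ Tᵢⱼ:

```python
import numpy as np

# A bilinear map h on V x W is encoded by a matrix M; the corresponding
# linear map g acts on the single tensor argument v ⊗ w, and the
# universal property says h(v, w) = g(v ⊗ w).
rng = np.random.default_rng(0)
M = rng.standard_normal((3, 4))    # encodes the bilinear map h
v = rng.standard_normal(3)
w = rng.standard_normal(4)

h = v @ M @ w                      # bilinear in the pair (v, w)
g = np.sum(M * np.outer(v, w))     # linear in the single argument v ⊗ w

assert np.isclose(h, g)
```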
1) A video[0] by Michael Penn presenting this idea of the "tensor product of vector spaces." It's close to what is presented in this blog post, but more rigorous/complete.
2) Two videos[1][2] by Frederic Schuller. They are each from bigger courses (roughly, Differential Geometry for Physics and Quantum Mechanics, respectively), but I think they are self-contained enough to be intelligible. They each present tensors in a different setting, so one will have to work a little to unify it all. I particularly like how in [1] he really takes the time to first distinguish between all the different tensor products (of spaces, of vectors, of operators): the usual notation/terminology can be needlessly confusing for beginners.
[0]: https://www.youtube.com/watch?v=K7f2pCQ3p3U
That's an odd thing for the author to say, because s/he gives the answer later in the very same passage:
> but perhaps the appropriateness of tensor products shouldn't be too surprising. The tensor product itself captures all ways that basic things can "interact" with each other!
That is the answer. It's the tensor product because there are logically no other possibilities. The tensor product says everything you can possibly say about the interactions of two systems whose states are described by a (possibly infinite) set of numbers and whose interactions correspond to some basic constraints, like being time-reversible. It just so happens that nature behaves according to those constraints, and that is why the tensor product describes the behavior of nature.
There are many other possibilities, unless you can provide satisfactory answers (from first principles) to the following questions: Why would we expect superpositions of quantum states to be encoded as a vector sum of the individual state vectors? Why is time evolution in quantum mechanics a linear operation on those state vectors?
If those things weren't true, tensor products would be utterly useless to describe product states.
Because that's what the word superposition means. If you don't have linear dynamics, it's not that you have superpositions that aren't sums of other states; you just don't have superpositions at all.
> Why is time evolution in quantum mechanics a linear operation on those state vectors?
This, on the other hand, is an open physical question. An answer to "Why QM and not some completely different theory?" is probably not in the cards, but as long as we're only considering "nearby" theories, nonlinearity gets you either superluminal communication (bad) or basis-dependent observables (worse), depending on which bullets you bite.
"The tensor product itself captures all ways that basic things can 'interact' with each other!"
The tensor product is also the way to go when combining classical probabilistic systems.
And you need the tensor product already for pure states in QM.
(mixed states need density matrices)
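A minimal sketch of the classical case: for two independent systems, the joint distribution is the tensor (outer) product of the marginals.

```python
import numpy as np

# Joint distribution of two independent classical systems:
# P(coin = i, die = j) = P(i) * P(j), i.e. the outer product.
coin = np.array([0.5, 0.5])     # fair coin
die = np.full(6, 1 / 6)         # fair six-sided die

joint = np.outer(coin, die)

assert np.isclose(joint.sum(), 1.0)      # still a probability distribution
assert np.isclose(joint[0, 3], 0.5 / 6)  # P(heads AND rolled a 4)
```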
In quantum physics, "interacting" usually has a different meaning. So one should use these terms more carefully.
>And you need the tensor product already for pure states in QM.
No. Pure states are just vectors (or more precisely: rays) in Hilbert space. The usual inner product is sufficient to work with them. An outer (=tensor) product of such a state with itself will just give you a density matrix with tr(ρ²)=1.
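A quick numerical check of that last claim, for a made-up normalized qubit state: the projector |ψ⟩⟨ψ| is a density matrix with purity tr(ρ²) = 1.

```python
import numpy as np

# The outer product of a normalized pure state with itself
# is a rank-1 density matrix: tr(rho) = 1 and tr(rho^2) = 1.
psi = np.array([1.0, 1.0j]) / np.sqrt(2)   # a normalized qubit state

rho = np.outer(psi, psi.conj())            # |psi><psi|

assert np.isclose(np.trace(rho).real, 1.0)        # tr(rho) = 1
assert np.isclose(np.trace(rho @ rho).real, 1.0)  # tr(rho^2) = 1 (pure)
```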
Say you have 2x2 matrices A, B, C. Any component of the tensor product Tensor(A, B, C) is a product of one entry from each factor: if the entry of A is indexed by (a1, a2), the entry of B by (b1, b2), and the entry of C by (c1, c2), its value is A[a1,a2]·B[b1,b2]·C[c1,c2]. The coordinate of that component is the concatenation of the matching indices along each dimension: ((a1,b1,c1), (a2,b2,c2)).

In many computational natural sciences, people use tensor products to store and manipulate data; it is also how the mathematical equations are written on paper.
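That indexing rule can be checked with `np.kron` (made-up 2x2 matrices; concatenated indices read as base-2 digits):

```python
import numpy as np

# For 2x2 factors, the ((a1,b1,c1),(a2,b2,c2)) component of A⊗B⊗C
# equals A[a1,a2] * B[b1,b2] * C[c1,c2], where the row/column index
# of the big matrix is the concatenation of the per-factor indices.
rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((2, 2)) for _ in range(3))

K = np.kron(np.kron(A, B), C)    # an 8x8 matrix

a1, a2, b1, b2, c1, c2 = 1, 0, 0, 1, 1, 1
row = (a1 * 2 + b1) * 2 + c1     # concatenated row index, base 2
col = (a2 * 2 + b2) * 2 + c2     # concatenated column index, base 2

assert np.isclose(K[row, col], A[a1, a2] * B[b1, b2] * C[c1, c2])
```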
But from a computer science perspective, dealing with dense tensor matrices is simply a waste of memory, since 90% of the time people are dealing with sparse systems: systems whose matrices are dominated by zeros. Also, it would be much clearer if people just wrote if-then pseudocode instead of cryptic half-baked tensor expressions. People tend to invent their own notation while writing papers.
Not really, no. The way mathematicians actually express if conditions is with the word "if". The obvious pointlessly formal way to do it is with a pair of functions `ThingConditionedOn -> {0, 1}` and `{0, 1} -> Result`, but why would you?
> dealing with tensor matrix is simply a waste of memory since 90% of the time people are dealing with sparse system.
Tensors are not their components, any more than locations are their coordinates. Whether you choose a sparse or dense (or symbolic) encoding does not change the object being encoded.
Here I am talking about real memory space. There exist more memory-efficient (but mathematically equivalent) representations of the same object, but since mathematicians don't usually know computers, the inefficient dense tensor matrix is the default. That's the problem.
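The memory point can be made concrete with SciPy's sparse Kronecker product (a sketch with made-up identity factors): the product of sparse matrices stays sparse, so only nonzeros are stored.

```python
from scipy import sparse

# Kronecker product of two sparse n x n identities: the result is
# n^2 x n^2, but only n^2 entries are nonzero, so sparse storage
# holds ~1/n^2 of what a dense representation would.
n = 100
A = sparse.identity(n, format='csr')   # n nonzeros out of n^2 entries
B = sparse.identity(n, format='csr')

K = sparse.kron(A, B, format='csr')    # 10000 x 10000

dense_entries = K.shape[0] * K.shape[1]  # 10^8 values if stored densely
assert K.nnz == n * n                    # only 10^4 stored sparsely
print(K.nnz / dense_entries)             # fraction that is nonzero
```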
The author seems to be related to https://www.sandboxaq.com/.
"SandboxAQ leverages the compound effects of AI and Quantum technologies (AQ) to solve hard challenges impacting society."
This reads like a scam (don't know anything about it).