Not necessarily. Any serious user of autoencoders applies some form of L1 regularization or another sparsity constraint to the learned codes, so that the autoencoder does not learn the principal components of the data but instead an analogous sparse decomposition of it (on the assumption that sparse representations generalize better).
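To make that concrete, here's a minimal sketch (shapes, weights, and the penalty weight `lam` are all made up for illustration) of how the objective changes: plain reconstruction loss alone recovers the principal subspace, while adding an L1 term on the hidden codes pushes toward a sparse decomposition instead.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 samples, 20 features (hypothetical sizes).
X = rng.normal(size=(100, 20))

# One-hidden-layer autoencoder weights (randomly initialized here;
# in practice you'd train these by gradient descent).
W_enc = rng.normal(scale=0.1, size=(20, 8))
W_dec = rng.normal(scale=0.1, size=(8, 20))

def loss(X, W_enc, W_dec, lam=0.1):
    H = np.maximum(X @ W_enc, 0.0)        # hidden codes (ReLU)
    X_hat = H @ W_dec                     # reconstruction
    recon = np.mean((X - X_hat) ** 2)     # plain AE objective
    sparsity = lam * np.mean(np.abs(H))   # L1 penalty on the codes
    return recon + sparsity

# With lam=0 this is the objective whose minimizers span the
# principal subspace; lam>0 breaks that equivalence.
print(loss(X, W_enc, W_dec))
```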
Also, I don't think any of the techniques you mentioned is being passed off as "not SVD" by its practitioners. People know they're SVD. These names are just labels for use cases of SVD, each with its specific (and crucial) bells and whistles. And yes, these labels are useful.
Cognition is fundamentally dimensionality reduction over a space of information, so it's no surprise that most ML algorithms turn out to be isomorphic to SVD in some way. More interesting to me are the really non-obvious ways in which that happens (e.g. neural nets learning word embeddings with skip-gram are implicitly factorizing a matrix of pointwise mutual information of word-context pairs over a local window...)
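The "embeddings are secretly matrix factorization" view can be sketched directly: build a word-context co-occurrence matrix over a local window, convert it to positive pointwise mutual information, and take a truncated SVD. The corpus, window size, and rank below are arbitrary toy choices, not anything from a real pipeline.

```python
import numpy as np

# Tiny toy corpus; in practice this is a large text stream.
corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

# Co-occurrence counts over a +/-2-word local context window.
window = 2
C = np.zeros((V, V))
for i, w in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if i != j:
            C[idx[w], idx[corpus[j]]] += 1

# Positive PMI: max(log p(w,c) / (p(w) p(c)), 0).
total = C.sum()
p_wc = C / total
p_w = C.sum(axis=1, keepdims=True) / total
p_c = C.sum(axis=0, keepdims=True) / total
with np.errstate(divide="ignore", invalid="ignore"):
    pmi = np.log(p_wc / (p_w * p_c))
ppmi = np.where(np.isfinite(pmi) & (pmi > 0), pmi, 0.0)

# Rank-k truncated SVD of the PPMI matrix gives the embeddings:
# one k-dimensional vector per word.
U, S, Vt = np.linalg.svd(ppmi)
k = 3
embeddings = U[:, :k] * S[:k]
print(embeddings.shape)  # (V, k)
```

Skip-gram with negative sampling doesn't literally run this SVD, of course; the point is that its optimum corresponds to a factorization of a (shifted) PMI matrix like the one built above.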
That doesn't make these algorithms any less valuable.