Maybe you can argue that, in the absence of any information, your best bet is to assume a Gaussian distribution, but it is definitely not safe to assume so. Your data might not be symmetrically or even unimodally distributed, and making these assumptions can lead to completely wrong conclusions.
If you know that your data has a well-defined mean and standard deviation, but you know nothing else about it, then you start with a Gaussian distribution. This isn't an assumption. Among all distributions on R with that mean and standard deviation, the Gaussian has the maximum entropy and hence encodes the least possible information about the data.
Then as you learn more about your data, you would update this distribution using Bayes' theorem. This could give rise to skew or multiple maxima.
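As a toy illustration of how Bayesian updating can produce multiple maxima, here is a sketch (my own hypothetical setup, not from the thread): suppose we observe only the magnitudes |x| of samples from N(mu, 1), so the sign of mu is not identified. Starting from a flat prior on a grid and multiplying in likelihoods gives a bimodal posterior:

```python
import numpy as np

rng = np.random.default_rng(1)
mu_true = 2.0
# Observe only magnitudes |x| of N(mu_true, 1) draws, so the sign of mu
# is lost; the posterior over mu should end up bimodal at roughly +/- mu_true.
xs = np.abs(rng.normal(mu_true, 1.0, size=50))

grid = np.linspace(-5, 5, 1001)          # grid of candidate mu values
log_post = np.zeros_like(grid)           # flat prior in log space
for x in xs:
    # Likelihood of observing |X| = x when X ~ N(mu, 1):
    # proportional to phi(x - mu) + phi(x + mu)
    log_post += np.log(np.exp(-0.5 * (x - grid) ** 2)
                       + np.exp(-0.5 * (x + grid) ** 2))

# Normalize to a proper density on the grid.
post = np.exp(log_post - log_post.max())
post /= post.sum() * (grid[1] - grid[0])
# post now has two peaks, near +2 and -2, and is tiny near 0.
```

The point is just that the starting Gaussian is a provisional description; the data can push the posterior into shapes the prior never had.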
I would consider a well-defined mean and standard deviation an assumption. The maximum-entropy distribution is determined by the constraints, and those constraints have to be assumed. If instead you constrain your problem to have nonzero probability only on a fixed interval, then a uniform distribution has maximum entropy.
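To make this concrete, here is a small check using closed-form differential entropies (in nats); the specific comparison distributions are my own choices for illustration:

```python
import math

# Constraint set A: variance = 1, support = all of R.
# The Gaussian is the max-entropy distribution under this constraint.
h_gauss = 0.5 * math.log(2 * math.pi * math.e)   # N(0, 1), about 1.419
h_unif_var1 = math.log(2 * math.sqrt(3))         # Uniform(-sqrt(3), sqrt(3)),
                                                 # also variance 1, about 1.242

# Constraint set B: support = [0, 1], no moment constraints.
# The uniform is the max-entropy distribution under this constraint.
h_unif01 = math.log(1.0)                         # Uniform(0, 1): exactly 0
h_beta22 = 5 / 3 - math.log(6)                   # Beta(2, 2) on [0, 1],
                                                 # about -0.125

print(h_gauss > h_unif_var1)   # Gaussian wins under constraint A
print(h_unif01 > h_beta22)     # uniform wins under constraint B
```

Same question, different constraints, different max-entropy answer.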
I agree that there is an assumption that the data are well described by a distribution on R instead of some interval [a,b]. I don't know how well justified this is.
The assumption of a well-defined mean and standard deviation is weak, and it becomes better supported each time you collect more data. If your standard deviation is ill-defined, you'll find that your sample standard deviation diverges (keeps increasing) as you add more data points.
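You can see this directly by comparing running sample standard deviations for a finite-variance distribution against one with undefined variance; the standard Cauchy is the classic example (the exact numbers below depend on the random seed):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
normal = rng.normal(size=n)          # variance exists (= 1)
cauchy = rng.standard_cauchy(size=n) # variance undefined

# The normal's sample std settles near 1; the Cauchy's keeps blowing up
# as occasional huge outliers arrive.
for k in (100, 1_000, 10_000, 100_000):
    print(f"n={k:>7}  normal std={normal[:k].std():6.3f}  "
          f"cauchy std={cauchy[:k].std():10.2f}")
```

So "collect more data and watch the sample std" is a practical, if informal, diagnostic for the well-defined-variance assumption.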