1. As I said, most people don't use plain k-anonymity, as it can leak information about the sensitive attribute when the values of that attribute within a group are (almost) all the same. This is why extensions like l-diversity and t-closeness exist: l-diversity ensures that each group contains at least l different values of the sensitive attribute, while t-closeness ensures that the distribution of sensitive attribute values within a group is close (as measured e.g. by the "earth mover's distance") to the distribution of that attribute in the entire dataset. Given the original and the anonymized data sets, it's pretty easy to measure the information gain (e.g. using a Bayesian approach) of an attacker who knows which group a given person is in. In that sense, k-anonymity (with l-diversity/t-closeness) can be analyzed in a formal framework just like e.g. differential privacy.
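To make the two criteria concrete, here is a minimal sketch (in Python, with made-up group data). The t-closeness check uses total variation distance, which for categorical attributes with a 0/1 ground distance coincides with the earth mover's distance:

```python
from collections import Counter

def l_diverse(groups, l):
    """Check that each group contains at least l distinct sensitive values."""
    return all(len(set(g)) >= l for g in groups)

def t_close(groups, t):
    """Check that each group's sensitive-value distribution is within
    distance t of the overall distribution (total variation distance,
    which equals EMD for a 0/1 ground distance on categories)."""
    overall = Counter(v for g in groups for v in g)
    n = sum(overall.values())
    p = {v: c / n for v, c in overall.items()}
    for g in groups:
        q = Counter(g)
        dist = 0.5 * sum(abs(p[v] - q.get(v, 0) / len(g)) for v in p)
        if dist > t:
            return False
    return True

# Two anonymized groups with their sensitive attribute (diagnosis)
groups = [["flu", "flu", "hiv"], ["flu", "hiv", "cold"]]
print(l_diverse(groups, 2))   # True: every group has >= 2 distinct values
print(t_close(groups, 0.2))   # True: both groups stay within distance 0.2
```

Note how a group like `["hiv", "hiv", "hiv"]` would pass plain k-anonymity (for k <= 3) but fail both checks above, which is exactly the leak these extensions address.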
2. Yes, that's what I mentioned at the end; k-anonymity is no different from most other techniques here: if you use differential privacy with the Laplace mechanism and repeatedly publish independently anonymized versions of the same underlying data, you will leak information, as an attacker can average the released values to get an estimate of the true value.
3. Yes, sensitive attributes are often quasi-identifiers as well (at least in combination with other quasi-identifiers). They are treated differently because the underlying risk model does not regard a (non-sensitive) quasi-identifier as something that needs to be protected: inferring e.g. your gender from your zip code, age and body weight using an anonymized data set is (usually) not considered problematic, whereas learning that you are HIV-positive (almost always) is, hence the distinction. Also, sensitive attributes are treated as a group when applying k-anonymity: if we have two binary attributes (HIV, syphilis), we apply the anonymization criteria to the combinations of the attributes ((true, true), (false, true), (true, false), (false, false)), not to each attribute individually (as the latter can cause information leakage).
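A minimal sketch of this "combinations, not individual attributes" point, with a hypothetical record layout of (quasi-identifier tuple, sensitive tuple):

```python
from collections import defaultdict

def distinct_sensitive_combinations(records):
    """Group records by their quasi-identifier tuple and count the distinct
    *combinations* of sensitive attributes in each group. Any diversity
    criterion should then be applied to these joint combinations."""
    groups = defaultdict(set)
    for qi, sensitive in records:
        groups[qi].add(sensitive)
    return {qi: len(vals) for qi, vals in groups.items()}

# Made-up records: quasi-identifiers (zip prefix, age range),
# sensitive attributes (HIV, syphilis) as a joint tuple
records = [
    (("1005*", "20-30"), (True, False)),
    (("1005*", "20-30"), (False, False)),
    (("1005*", "20-30"), (True, False)),
    (("1012*", "30-40"), (False, True)),
    (("1012*", "30-40"), (False, False)),
]
print(distinct_sensitive_combinations(records))
# {('1005*', '20-30'): 2, ('1012*', '30-40'): 2}
```

Checking each sensitive attribute separately could report a group as diverse even when the joint combinations reveal, say, that everyone with HIV in the group also has syphilis; counting joint combinations avoids that leak.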
4. I honestly don't know what to reply to this, as l-diversity/t-closeness are well-specified methods that were designed to overcome the (known) limitations of k-anonymity. Yes, these methods are not completely trivial to use, but if used correctly they can provide good and quantifiable protection. Not using them because they are hard to implement correctly is like saying we shouldn't use cryptographic algorithms like RSA because it's hard to get all the implementation details right.