There are several issues with that study design that leave it open to experimental bias:
* not randomized, as you mention
* The control is not great: "treatment as usual" is a weak control in general and a particular problem in psychiatric and behavioral research. With this design you cannot attribute any improvement specifically to the diet; the improvement could be driven by better coaching, more frequent contact with health professionals, being part of a group, a psychological/placebo benefit, etc.
A better design would use an active control, where control subjects get exactly the same program as the Virta patients but follow a low-fat diet, all-plant diet, etc. instead of a low-carb diet.
* Different endpoints for the active vs. control arm: only the Virta arm's primary outcomes are measured at 3 months, while both the control and Virta arms are measured at 12 and 24 months. There are differences for other endpoints as well. This is not necessarily a big issue; they may just want to measure exploratory endpoints without spending the extra money to follow control patients. For a primary endpoint, though, it seems odd.
A cynic could say that Virta could claim success on a primary endpoint if the Virta arm improves at 3 months, even if it does not differ from control at 12 and 24 months. This study is probably being done by Virta for marketing, and no one is really overseeing it, so I wouldn't really call them out on it.
* Different inclusion criteria for the active arm: only the active arm can enroll pre-diabetic subjects; the control arm cannot. If these patients aren't pooled with the others when analyzing results, this is less concerning; otherwise it's a pretty big deal in my opinion.