Just today, I asked GPT and Bard (Gemini) to write code using Slint; neither of them had any idea what Slint is. Slint is a relatively new library, roughly two and a half years old as of its 0.1 release and one and a half as of 0.2 [1], so it is not something they were trained on.
Natural language doesn't change much over a handful of years, but in coding, two years back may as well be a century. My argument is that small LMs are not only relevant, they are actually desirable, if the best solution is to retrain from scratch.
If, on the other hand, a billion-token context window proves practical, or the RAG technique covers most use cases, then LLMs might suffice. But could RAG stay aware of the millions of git commits landing daily across many projects, and keep its knowledge base up to date? I don't know about that.
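To make the RAG question concrete, here is a minimal sketch of what "keeping up with commits" would mean mechanically: new commit messages are appended to an index as they land, and a query retrieves the most relevant one. This toy uses a bag-of-words vector and cosine similarity in place of a real embedding model and vector database; the `CommitIndex` type and its methods are hypothetical names invented for illustration, not any existing library's API.

```rust
use std::collections::HashMap;

// Toy "embedding": a bag-of-words term-count vector.
// A real RAG system would use a learned embedding model instead.
fn embed(text: &str) -> HashMap<String, f64> {
    let mut v = HashMap::new();
    for word in text
        .to_lowercase()
        .split(|c: char| !c.is_alphanumeric())
        .filter(|w| !w.is_empty())
    {
        *v.entry(word.to_string()).or_insert(0.0) += 1.0;
    }
    v
}

// Cosine similarity between two sparse vectors.
fn cosine(a: &HashMap<String, f64>, b: &HashMap<String, f64>) -> f64 {
    let dot: f64 = a
        .iter()
        .filter_map(|(k, va)| b.get(k).map(|vb| va * vb))
        .sum();
    let norm = |v: &HashMap<String, f64>| v.values().map(|x| x * x).sum::<f64>().sqrt();
    let denom = norm(a) * norm(b);
    if denom == 0.0 { 0.0 } else { dot / denom }
}

// Hypothetical incremental index: ingest commits as they land,
// retrieve the closest match at query time.
struct CommitIndex {
    commits: Vec<(String, HashMap<String, f64>)>,
}

impl CommitIndex {
    fn new() -> Self {
        Self { commits: Vec::new() }
    }

    fn ingest(&mut self, message: &str) {
        self.commits.push((message.to_string(), embed(message)));
    }

    fn retrieve(&self, query: &str) -> Option<&str> {
        let q = embed(query);
        self.commits
            .iter()
            .max_by(|(_, a), (_, b)| {
                cosine(&q, a).partial_cmp(&cosine(&q, b)).unwrap()
            })
            .map(|(m, _)| m.as_str())
    }
}

fn main() {
    let mut index = CommitIndex::new();
    // Commit messages are made up for the example.
    index.ingest("fix panic in renderer when window is resized");
    index.ingest("add placeholder-text property to LineEdit widget");
    index.ingest("update README with installation instructions");

    let hit = index.retrieve("how to set placeholder text on a LineEdit").unwrap();
    println!("{}", hit);
}
```

The point of the sketch is that ingestion is cheap per commit, so staying current is plausible; the open question is whether retrieval over that much churn actually surfaces the right context.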
use slint::slint;

slint! {
    import { VerticalBox, Button, LineEdit } from "std-widgets.slint";

    export component DialogBox inherits Window {
        width: 400px;
        height: 200px;
        title: "Input Dialog";

        // Expose the entered text to Rust via a callback.
        callback submitted(string);

        VerticalBox {
            padding: 20px;
            spacing: 10px;

            // LineEdit (not the bare TextInput) provides placeholder-text.
            input-field := LineEdit {
                placeholder-text: "Enter text here";
            }

            Button {
                text: "Submit";
                // Rust code such as println! cannot run inside slint!
                // markup; forward the event to Rust instead.
                clicked => { root.submitted(input-field.text); }
            }
        }
    }
}

fn main() {
    let dialog = DialogBox::new().unwrap(); // Slint 1.x: new() returns a Result
    dialog.on_submitted(|text| println!("Input: {}", text));
    dialog.run().unwrap();
}
I do not have a GPT-4 subscription; I didn't bother because it is so slow, has limited queries, etc. If the knowledge cutoff is improved, e.g. updated periodically, I may think about it. (Late response, I forgot about the comment!)