I am a complete LLM beginner but would like to get into the practical application of the technology by interfacing a GPT instance with an internal database at work.
I understand that creating LLM agents to handle unstructured data is quite a common use case. Ironically, more experienced peers have told me that making agents work with structured data can actually be more challenging.
Are there any resources you could point me to regarding best practices and tools for tackling such a project? In my mind, I would 'magically feed' the DB schema to the LLM, have it write valid SQL queries, and translate the results back into text. Does this make sense? Are there better ways to do this?
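To make the question concrete, here is a minimal sketch of the loop I'm imagining, using SQLite and a stubbed-out `ask_llm` function (a real implementation would call an actual chat-completion API there; the function name, the prompt wording, and the example `orders` table are all my own placeholders):

```python
import sqlite3

def get_schema(conn):
    # Pull the CREATE TABLE statements straight from sqlite_master,
    # so the LLM sees exactly what the database looks like.
    rows = conn.execute(
        "SELECT sql FROM sqlite_master WHERE type = 'table'"
    ).fetchall()
    return "\n".join(r[0] for r in rows)

def build_prompt(schema, question):
    return (
        "You are given this SQLite schema:\n"
        f"{schema}\n\n"
        f"Write a single SQL query that answers: {question}\n"
        "Return only the SQL, nothing else."
    )

def ask_llm(prompt):
    # Placeholder for a real LLM call; hard-coded here so the
    # sketch runs without an API key.
    return "SELECT name, total FROM orders ORDER BY total DESC LIMIT 1"

def answer(conn, question):
    prompt = build_prompt(get_schema(conn), question)
    sql = ask_llm(prompt)
    rows = conn.execute(sql).fetchall()
    # A second LLM call would normally turn the rows back into
    # prose; here we just return them as-is.
    return rows

# Tiny demo database (made up for illustration).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (name TEXT, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("alice", 40.0), ("bob", 75.5)])
result = answer(conn, "Who placed the largest order?")
print(result)  # [('bob', 75.5)]
```

In a production setting you would presumably also validate the generated SQL (e.g. read-only access, allow-listed tables) before executing it, since the model's output is untrusted input.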
The problem is that I have a few private projects I would love to try out, and they all involve coding. How do you find the motivation to work on such things when you also code for work?
Personally, I think it's probably healthy to pursue completely different activities professionally and privately, but at the same time I feel like not honing one's professional skills during one's time off is a bit of a wasted opportunity.
So far, I have not found a good solution to efficiently include LaTeX in any presentation software.
There are plugins, but they rely on a local LaTeX distribution which, let's be honest, is always a pain to manage.
These days, I find myself using a browser app to render equations, then downloading and importing the math as SVG or PNG. This somewhat works; the problem is that I have yet to find a web app that supports both a wide range of commands (like https://katex.org/) and export to SVG (like https://viereck.ch/latex-to-svg/).
Any suggestions on how to optimize my workflow?