There’s nothing stopping any LLM-backed chatbot from using plugins; the ReAct pattern discussed recently on HN is a general technique for incorporating them.
The main limit is token budget: unless the tools are integral and trained in (which is less flexible), each tool's description takes space in the prompt, and every interaction with a tool also consumes tokens, all of which shrinks the context available to the main conversation.
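A minimal sketch of the ReAct loop makes the token cost visible: the tool descriptions and every Thought/Action/Observation exchange all get appended to the same prompt. Everything here (the `call_llm` stub, the `Action: tool[input]` format, the toy calculator) is a hypothetical illustration, not any particular library's API.

```python
# Toy tool registry; each entry's description occupies prompt space.
TOOLS = {
    "calculator": lambda expr: str(eval(expr)),  # illustration only
}
TOOL_DESCRIPTIONS = "\n".join(f"- {name}" for name in TOOLS)

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call; scripted for this sketch."""
    if "Observation: 4" in prompt:
        return "Final Answer: 4"
    return "Action: calculator[2 + 2]"

def react(question: str, max_steps: int = 5) -> str:
    # The prompt grows with every step, reducing the budget left
    # for the actual conversation.
    prompt = f"Tools:\n{TOOL_DESCRIPTIONS}\n\nQuestion: {question}\n"
    for _ in range(max_steps):
        reply = call_llm(prompt)
        if reply.startswith("Final Answer:"):
            return reply.removeprefix("Final Answer:").strip()
        # Parse "Action: tool[input]" and run the named tool.
        tool, _, arg = reply.removeprefix("Action: ").partition("[")
        observation = TOOLS[tool](arg.rstrip("]"))
        prompt += f"{reply}\nObservation: {observation}\n"
    return "no answer"

print(react("What is 2 + 2?"))  # → 4
```

Note that the final prompt contains the tool list plus the full action/observation transcript, which is exactly the overhead the comment above is describing.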