And by demo, I mean they actually ingested the AWS documentation, so it is in their best interest to wire this up to docs-staging.aws.amazon.com or some such, with any necessary "this is not supported, it may go away at any time" disclaimer. They're playing with house money, after all.
OpenSearch actually does support some of the OpenAI embedding models: as long as your embeddings fit in the dense vector field type, you can use them. OpenSearch has an advantage over Elasticsearch here because it supports higher-dimensional vectors, which means you can use the fancier, newer models from e.g. OpenAI.
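For the curious: in OpenSearch the relevant field type is called `knn_vector`. A minimal index mapping might look something like this sketch (the field names are assumptions; the dimension of 3072 matches OpenAI's text-embedding-3-large):

```python
# Sketch of an OpenSearch index mapping for k-NN vector search.
# Field names are illustrative; 3072 is the output dimension of
# OpenAI's text-embedding-3-large model.
index_body = {
    "settings": {"index": {"knn": True}},  # enable the k-NN plugin for this index
    "mappings": {
        "properties": {
            "content": {"type": "text"},
            "embedding": {
                "type": "knn_vector",
                "dimension": 3072,
            },
        }
    },
}
```

You'd send this body when creating the index, then index documents with both the raw text and the precomputed embedding.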
I've been diving a bit into vector search lately, from the perspective of someone who isn't necessarily interested in (or skilled at) creating bespoke AI models, but who is interested in sticking bits and pieces of off-the-shelf technology together to implement search functionality. Basically, there are all these vector search engines out there, and they all loosely do the same things: 1) given some blob of content, and some chunk of extremely expensive-to-run software that creates vectors for that content, 2) store those vectors, and 3) let people do distance search on those vectors with a second vector calculated from the query, typically using ANN (approximate nearest neighbor) search.
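The three steps above fit in a few lines of toy code. Here's a sketch that swaps the expensive embedding model for a trivial bag-of-words stand-in and ANN for an exact linear scan, just to show the shape of the pipeline (the vocabulary and documents are made up):

```python
import math

# Toy stand-in for the "expensive software that creates vectors":
# a bag-of-words count over a tiny fixed vocabulary. A real pipeline
# would call an actual embedding model here.
VOCAB = ["search", "vector", "cat", "dog", "index"]

def embed(text: str) -> list[float]:
    words = text.lower().split()
    return [float(words.count(w)) for w in VOCAB]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Step 2: "store" the vectors (a real engine persists these in an index).
docs = ["vector search index", "dog and cat pictures"]
store = [(d, embed(d)) for d in docs]

# Step 3: embed the query and rank by similarity. Real engines use ANN
# (e.g. HNSW graphs) instead of this exact scan over every vector.
def query(q: str) -> str:
    qv = embed(q)
    return max(store, key=lambda item: cosine(qv, item[1]))[0]
```

So `query("vector search")` returns the first document. The whole point of the dedicated engines is making steps 2 and 3 fast at scale; step 1 is entirely your problem.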
That's it. There's a lot of hand-waviness around creating these embedding vectors, which is not what most of these products solve. Not even a little bit. You typically need to provide your own embeddings, and there are many ways to do that. The simplest are running some docker container (e.g. easybert), writing a simple python script, or using something like the OpenAI embeddings API with a suitable model.
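The "simple python script" route against the OpenAI embeddings API looks roughly like this (a sketch, not a definitive implementation: the model choice and batch size of 100 are assumptions, and it needs `OPENAI_API_KEY` in the environment):

```python
def embed_batch(texts: list[str], model: str = "text-embedding-3-small") -> list[list[float]]:
    """Embed a batch of texts via the OpenAI embeddings API.

    Assumes OPENAI_API_KEY is set. The import is deferred so the rest
    of the sketch can run without the openai package installed.
    """
    from openai import OpenAI  # openai >= 1.0 client

    client = OpenAI()
    resp = client.embeddings.create(model=model, input=texts)
    return [d.embedding for d in resp.data]

def batched(items: list, size: int = 100):
    # Pure helper: chunk the corpus so each API call stays a reasonable
    # size. The batch size of 100 is an arbitrary assumption.
    for i in range(0, len(items), size):
        yield items[i : i + size]
```

You'd loop over `batched(corpus)`, call `embed_batch` on each chunk, and bulk-index the resulting vectors into whichever engine you picked.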
The hard part is picking the right model and evaluating its performance. Mostly the performance tends to be underwhelming, especially for short queries. And there's a trade-off: all the fancy models produce huge vectors, which are expensive to query and store.
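Some back-of-the-envelope arithmetic makes the trade-off concrete. The dimensions below are real model output sizes (384 for a small sentence-transformers model like all-MiniLM-L6-v2, 3072 for OpenAI's text-embedding-3-large); the one-million-document corpus is an arbitrary assumption:

```python
# Raw storage cost of float32 embeddings: 4 bytes per dimension per vector.
# Ignores index overhead (HNSW graphs etc.), which adds more on top.
def storage_gb(dimensions: int, num_vectors: int) -> float:
    return dimensions * 4 * num_vectors / 1e9

small = storage_gb(384, 1_000_000)   # small sentence-transformers model
large = storage_gb(3072, 1_000_000)  # OpenAI text-embedding-3-large
# small -> ~1.5 GB, large -> ~12.3 GB: the fancy model costs 8x, before
# you've measured whether it actually retrieves better for your queries.
```

And that's just storage; the same factor hits distance computations at query time.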