There's no need for research. The answer is simple: you can't run Erlang concurrency on a GPU. GPUs fundamentally get their advantage by running the same operation on a huge number of cores, each working on different data. They aren't just Platonically faster than CPUs; they're faster than CPUs on very, very specific tasks. Outside of those tasks, they are in fact massively slower.
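To make that concrete, here's a minimal sketch (names like vecAdd are illustrative, not from any particular codebase) of the kind of workload GPUs are actually built for: thousands of threads executing the identical instruction, each on its own element.

```cuda
// Every thread runs the same instruction over a different element.
__global__ void vecAdd(const float *a, const float *b, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // each thread owns one index
    if (i < n)
        out[i] = a[i] + b[i];                       // identical op, different data
}

// Launch roughly one thread per element:
//   int threads = 256;
//   int blocks  = (n + threads - 1) / threads;
//   vecAdd<<<blocks, threads>>>(d_a, d_b, d_out, n);
```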
Some of the operations Erlang leans on constantly, including basic things like pattern matching, are exactly the sort of code GPUs do not want to run at all.
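A hypothetical sketch of why (the tags and handlers are made up for illustration): pattern-match-style dispatch means data-dependent branching, and threads in the same warp that take different branches get serialized by the hardware.

```cuda
__global__ void dispatch(const int *tag, const float *x, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    switch (tag[i]) {                                 // branch on a per-element tag,
        case 0:  out[i] = x[i] + 1.0f;        break;  // roughly what matching on a
        case 1:  out[i] = x[i] * x[i];        break;  // constructor looks like
        case 2:  out[i] = sqrtf(fabsf(x[i])); break;
        default: out[i] = 0.0f;               break;
    }
    // If neighbouring elements carry different tags, each case executes in turn
    // while the other lanes in the warp sit idle.
}
```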
"Erlang" is being over specific here. No conventional CPU language makes sense on a GPU at all.