cat tweets.txt | layer language-embed | layer sentiment > out.txt
In your example, the sentiment layer will work without re-training or fine-tuning only if it is preceded by the exact same language-embed layer it was trained on. You can't swap in another layer there. Even a layer with the exact same dimensions, the exact same structure, the exact same training algorithm and hyperparameters, and the exact same training data, but a different random seed for initialization, won't be a plug-in replacement. It will generate different language embeddings than the original - i.e. the meaning of output neuron #42 being 1.0 will be completely unrelated to what your sentiment layer expects in that position, and your sentiment layer will output total nonsense. There often (but not always!) exists a linear transformation that aligns the two embedding spaces, but you'd have to calculate it explicitly, e.g. by training a transformation layer. In the absence of that, if you want to invoke that particular version of the sentiment layer, you have no choice about the preceding layers: you have to invoke the exact same versions that were used during training.
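A toy sketch of the alignment idea, using invented stand-ins rather than real embedding layers: two "embeddings" that differ only by their random basis are incompatible neuron-for-neuron, yet a linear map learned by least squares on paired outputs can align them. In this artificial linear setup the fit is exact; with real nonlinear embedding layers it usually wouldn't be, which is the "but not always!" caveat.

```python
# Hypothetical illustration: emb_a and emb_b stand in for two embedding
# layers trained identically except for the random seed. Neuron #k means
# something different in each space, but a learned linear map W can
# translate between them here because the toy setup is purely linear.
import numpy as np

d = 8          # embedding dimension (toy size)
n = 100        # number of paired examples

# Different seeds -> different random bases -> incompatible spaces.
basis_a = np.random.default_rng(seed=1).normal(size=(d, d))
basis_b = np.random.default_rng(seed=2).normal(size=(d, d))

inputs = np.random.default_rng(seed=0).normal(size=(n, d))
emb_a = inputs @ basis_a   # what the sentiment layer was trained against
emb_b = inputs @ basis_b   # the would-be "drop-in replacement" output

# Learn W such that emb_b @ W approximates emb_a (ordinary least squares).
W, residuals, rank, _ = np.linalg.lstsq(emb_b, emb_a, rcond=None)

aligned = emb_b @ W
print(np.allclose(aligned, emb_a, atol=1e-6))
```

With real models you'd fit W on pairs of embeddings of the same inputs from both layers; whether a linear map suffices is an empirical question, not a guarantee.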
Solving that dependency problem requires strong API contracts about the structure and meaning of the data passed between layers. It could be done, but that's not how we commonly do it nowadays, and it would be a much larger task than this project. Alternatively, what could be useful is this: if you want to pipe the tweets to sentiment_model_v123, a system could automatically look up in that model's metadata that the text needs to be transformed by transformation_A followed by fasttext_embeddings_french_v32, since there's no reasonable choice anyway.
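The metadata-lookup idea could be sketched as a tiny model registry. Everything here is invented for illustration (the registry layout, the transform names from the example above, and the placeholder functions standing in for real layers); the point is only that the caller names the model and the system resolves the mandatory preprocessing chain from its metadata.

```python
# Hypothetical sketch of metadata-driven pipeline resolution. The model's
# metadata records the exact preprocessing chain it was trained with, so
# the caller never has to guess which layers to invoke.

REGISTRY = {
    "sentiment_model_v123": {
        "requires": ["transformation_A", "fasttext_embeddings_french_v32"],
    },
}

# Placeholder callables standing in for the real transformation layers.
TRANSFORMS = {
    "transformation_A": lambda text: text.lower(),
    "fasttext_embeddings_french_v32": lambda text: f"<emb:{text}>",
}

def resolve_pipeline(model_name):
    """Build the preprocessing pipeline mandated by the model's metadata."""
    steps = REGISTRY[model_name]["requires"]
    def pipeline(text):
        for step in steps:
            text = TRANSFORMS[step](text)
        return text
    return pipeline

pipe = resolve_pipeline("sentiment_model_v123")
print(pipe("Bonjour"))   # -> <emb:bonjour>
```

In a real system the registry would live alongside the model artifacts (and the "transforms" would be actual versioned layers), but the resolution step itself is this simple: there is exactly one valid chain, so the system can just apply it.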