Yes, when you construct a mutex, you give it ownership of the vector you want it to protect. Due to the way Rust works, once you've given ownership to something else, you can no longer access it yourself.
The only way you can get access to the data again is to "borrow" a reference to it, but this borrow has a "lifetime", which is tied to the period for which you're holding the mutex. This is how the compiler can spot that you've tried to access the vector after the mutex has been released.
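Here's a minimal sketch of that ownership-plus-borrow pattern (the variable names are my own invention):

```rust
use std::sync::Mutex;

fn main() {
    // The mutex takes ownership of the vector at construction;
    // from here on, `data` is the only way in.
    let data = Mutex::new(vec![1, 2, 3]);

    {
        // lock() hands back a guard; the borrow of the vector
        // lives only as long as the guard does.
        let mut guard = data.lock().unwrap();
        guard.push(4);
    } // guard dropped here, lock released

    // Keeping a reference from inside the guard and using it out
    // here would be a compile-time error, not a runtime bug.
    assert_eq!(data.lock().unwrap().len(), 4);
}
```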
As you say, this means that each vector can only have one mutex wrapping it - at least for this implementation of mutex, known as std::sync::Mutex ( https://doc.rust-lang.org/std/sync/struct.Mutex.html).
However, if you're looking for something like a read/write lock, Rust supports that too via std::sync::RwLock - see https://doc.rust-lang.org/std/sync/struct.RwLock.html for more details.
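A quick sketch of the RwLock version, which allows any number of concurrent readers but gives writers exclusive access:

```rust
use std::sync::RwLock;

fn main() {
    let data = RwLock::new(vec![1, 2, 3]);

    // Multiple read guards can be held at the same time...
    let r1 = data.read().unwrap();
    let r2 = data.read().unwrap();
    assert_eq!(r1.len() + r2.len(), 6);
    drop(r1);
    drop(r2);

    // ...but write() gives exclusive access (it would block if
    // any read guard were still alive).
    data.write().unwrap().push(4);
    assert_eq!(data.read().unwrap().len(), 4);
}
```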
Obviously, the example uses a vector, but Rust has a pretty strong system of generics, so your mutex (or read/write lock) can wrap pretty much any type, e.g. a struct you've defined.
Hope that makes sense - this was one of the areas of Rust that impressed me most!
The mutex owns the vector. Locking it returns a MutexGuard, which is a smart pointer/reference to the data it owns[0]. You can nest mutexes and structures if you need something slightly more flexible than a single big lock.
Having multiple (side-by-side) locks for a single structure sounds like a recipe for concurrency bugs & deadlocks.
> Which seems quite limiting?
If your semantics call for a mutex it makes perfect sense. There are other types of locks if you don't want a 1:1 relationship, e.g. RwLock allows multiple readers XOR a single writer.
[0] mutable, so you can replace the structure behind the guard, you just can't keep a reference to it after you release the lock due to Rust's borrow checking semantics
It is, however, often necessary in order to reduce an undesirable level of contention. For example, hash tables often use fine-grained locking (with one lock per bucket) in order to increase actual parallelism and to prevent the hash table from becoming a serialization bottleneck.
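A toy sketch of that per-bucket idea (names like `StripedMap` are made up for illustration; a real concurrent hash map is far more involved):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::sync::Mutex;

// N independently locked buckets: threads touching different
// buckets never contend with each other.
struct StripedMap {
    buckets: Vec<Mutex<Vec<(String, i32)>>>,
}

impl StripedMap {
    fn new(n: usize) -> Self {
        StripedMap {
            buckets: (0..n).map(|_| Mutex::new(Vec::new())).collect(),
        }
    }

    // Hash the key to pick which bucket (and hence which lock) to use.
    fn bucket(&self, key: &str) -> &Mutex<Vec<(String, i32)>> {
        let mut h = DefaultHasher::new();
        key.hash(&mut h);
        &self.buckets[(h.finish() as usize) % self.buckets.len()]
    }

    fn insert(&self, key: &str, val: i32) {
        // Only this key's bucket is locked; all others stay free.
        let mut b = self.bucket(key).lock().unwrap();
        match b.iter_mut().find(|(k, _)| k == key) {
            Some(entry) => entry.1 = val,
            None => b.push((key.to_string(), val)),
        }
    }

    fn get(&self, key: &str) -> Option<i32> {
        self.bucket(key)
            .lock()
            .unwrap()
            .iter()
            .find(|(k, _)| k == key)
            .map(|(_, v)| *v)
    }
}

fn main() {
    let map = StripedMap::new(16);
    map.insert("a", 1);
    map.insert("b", 2);
    assert_eq!(map.get("a"), Some(1));
    assert_eq!(map.get("missing"), None);
}
```

Note that no unsafe is needed here, because each bucket is a perfectly ordinary Mutex; the fine-grained part is just the routing.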
Another common example is the implementation of FIFO queues with two locks (one for the head and one for the tail), which allows threads that read from the queue to avoid/reduce contention with threads that write to the queue.
This is both normal and frequently necessary for performance.
I think an approach similar to this (except with multiple sub-hash-tables, each of which is individually locked) is used by https://github.com/veddan/rust-concurrent-hashmap.
For the two-lock FIFO case, you can definitely do this in Rust but you might need to mark it "unsafe". (I'm happy for someone to prove me wrong on this, though!) "Unsafe" tells Rust "look, I know what I'm doing here" and allows you to do things like operate on raw pointers and generally avoid checks. However, since Rust is no longer checking these assumptions, you need to be very sure of them yourself!
On the plus side, the rest of your code (e.g. the code using your two-lock FIFO) need not know that an implementation is unsafe - the Rust compiler can still check all the assumptions in that code.