Because Go's GC supports internal pointers, Go can use a pointer-and-length representation for substring references. Java's GC lacks that support, so a Java string sharing a larger char array needed a pointer to the start of the array plus a separate offset and count to pull off the same substring-reference trick. (And, I'm saying, that helps explain why Java and Go now do substrings differently.)
There are other places where Go's ability to use internal pointers is exposed more directly to the programmer: for example, Go lets you take the address of an array element or struct field and pass around the resulting pointer.
Only if the String class is implemented in pure Java, which it currently is. But it doesn't have to be that way. Oracle could bypass the Java language and implement the String class in native code, just as Go does with several builtin types. You may be right that it would be harder to do than in Go because of garbage collector specifics.
But I guess the real issue is a philosophical one. Is it a good idea to let the standard library use features that are not available to users of the language?
Given that GC design, Go-style two-word substring references (pointer into the middle of a string, plus a count) wouldn't work in Java; even if String were a builtin, a no-internal-pointers GC would still require at least three words (pointer to the start of the string, offset, and count).
tl;dr of my larger point: I think Java needed a few extra bytes per String to support substrings by reference because its GC works differently from Go's, and I think that explains why Java decided to remove its substring-by-reference trick while Go didn't. (And I'm not trying to say either way is worse, just trying to really grok why they're different.)