Yes, sorry for being inexact/overusing the term--I understand the tests drive the recording.
What I meant by manual is getting the e2e system into your test's initial state.
E.g. tests are invariably "world looks like X", "system under test does Y", "world looks like Z".
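That shape can be sketched in a few lines; this is a minimal illustration with an in-memory dict standing in for the upstream system, and all names here are hypothetical, not from any real framework:

```python
upstream = {}  # stand-in for the external system under test

def create_account(account_id, balance):
    # "world looks like X": the precondition is coded in the test itself
    upstream[account_id] = {"balance": balance}

def deposit(account_id, amount):
    # "system under test does Y"
    upstream[account_id]["balance"] += amount

def test_deposit():
    create_account("acct-1", balance=100)        # X: explicit and documented
    deposit("acct-1", 50)                        # Y
    assert upstream["acct-1"]["balance"] == 150  # Z

test_deposit()
print("ok")
```

The point is just that X lives in the test file, where anyone re-running or re-recording can see exactly what the world is supposed to look like.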
In record/replay, "world looks like X" is not coded, isolated, or documented in your test; it's implicit in "whatever the upstream system looked like when I hit record".
Which is almost always "the developer manually clicked around a test account to make it look like X".
This is basically a giant global variable that will change and come back to haunt you when recordings fail, b/c you have to a) re-divine what "world looks like X" was for this test, and then b) manually restore the upstream system to that state.
If no one has touched the upstream test data for this specific test case, you're good, but once you get into 10s/100s of tests: it's tempting to share test accounts and someone accidentally changes one; or you're testing mutations and your test itself changes the state (so you need to undo the mutation before re-recording); or you wrote the test 2 years ago and the upstream system aged off your data.
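One mitigation (not specific to record/replay, just the classic fixture pattern) is to rebuild "world looks like X" from scratch on every run, so mutations and aged-off data never leak between recordings. A minimal sketch, again with an in-memory stand-in and hypothetical names:

```python
import contextlib

upstream = {}  # stand-in for the external system

@contextlib.contextmanager
def world_looks_like_x():
    upstream.clear()
    upstream["acct-1"] = {"balance": 100}  # X is rebuilt fresh every time
    try:
        yield
    finally:
        upstream.clear()  # undo mutations so the next run starts clean

def test_withdraw():
    with world_looks_like_x():
        upstream["acct-1"]["balance"] -= 30          # Y: the mutation
        assert upstream["acct-1"]["balance"] == 70   # Z

test_withdraw()
test_withdraw()  # safe to re-run/re-record: state is rebuilt each time
print("ok")
```

Of course the whole pain point is that a real upstream system doesn't give you a cheap `clear()`, which is exactly why the implicit-state version bites.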
All of these lead to manually clicking around to re-set up "world looks like X", so yes, that is what I should have limited the "manual" term to.