After testing the initial burst interaction, I realized I wanted to transcribe the notes and relate them to each other in a hierarchy. Other features came naturally, like geotagging each note and swiping during recording to change the “temperature” (importance) of a note.
The app is open source and written in Flutter.
I had a dream at the beginning of the pandemic that people who spend all day in Zoom calls might be able to spend all day hiking as well. I tried quite hard to make it work a couple of times - big USB battery in the backpack - but LTE signal was never good enough up in the hills here.
For my particular dataset it worked quite well. It was the best offline ASR that I tried.
My go-to brain-dumping tool is Simplenote, but there's too much separation between an ephemeral thought and the process of recording it.
Especially clever because I think all the tools to do this have existed basically as long as Android has, but this is a very good application of those tools.
One feature request would be 'activate by AirPod tap' so I wouldn't even have to hold my phone, just tap my AirPods to make a note.
I use Notion a TON but it’s not great for the most immediate, time-sensitive notes.
Do you think you could integrate Google Calendar / Reminders in some way? A lot of the notes I would record in this kind of app have some form of deadline, or are only relevant after a certain date or time. For example, “check in on this thread Monday morning”. I use Google Calendar reminders for this right now, but they stack up and it’s not a great solution.
- check this thing that opens tomorrow at 6pm
- talk to the lawyers about x before Friday
- Fill in my PLF some time tomorrow so it’s ready for my trip
Etc.
Looks like there's an issue: https://github.com/maxkrieger/voiceliner/issues/35
I don't understand why in 2021 Bluetooth degrades to "worse than a 1970s landline" in quality as soon as something tries to use the microphone.
https://web.archive.org/web/20161231051940/http://getmyle.co...
https://shop.ainaptt.com/ptt-devices/21-ptt-voice-responder....
I've thought about taking voice notes before, though I've imagined that as more of a private hands-free thing. I'm curious what your experience of using it in public or with others around (the walks with friends, but also family or colleagues?) is like?
(I sometimes go on a walk while I talk myself through a problem; I've noticed I almost always stop speaking while someone else is in earshot. I suspect I'd also be inclined to avoid taking voice notes with others around.)
I also recently rolled out a “create text note” escape hatch in the menu.
I'll be trying this out as a replacement.
Example: I spend 5 weeks recording 200 sound bites about real estate development in PR. I do no organization. I click a button in the app marked "Organize by opportunity". It sorts my recordings into 4 folders with 2-3 nested with titles like "The Tulum project" and "Evan's group".
I don't particularly need transcription because I don't want to do any of the work implementing the feature I just described ...
As it is, it looks neat, but I'll stick with the iOS built-in recorder.
It could do a loose keyword match but unless you used the words "Tulum" or "Evan" how would it know to link notes together without context on who Evan is?
> It could do a loose keyword match but unless you used the words "Tulum" or "Evan" how would it know to link notes together without context on who Evan is?
Does it need to know? Fairly vanilla NLP can provide the data to categorize (or index) by identified parts of speech, such as verbs, proper nouns, etc. If you have a large enough pile of notes, categorizing or subcategorizing by combinations would be useful.
There are pitfalls, such as lacking sufficient context to disambiguate between identically named people (e.g. your sister Mary vs. Mary from work), but that doesn't negate the utility of such a feature.
Further refinements for association and disambiguation would be highly contingent, but that very contingency can be modeled with Bayesian classification (or more advanced attentional mechanisms) that learns when to apply them. For example, a bit of sentiment analysis could help associate Mary (that you're often mad at) with the words 'project' and 'report', but Mary (that you like) with 'barbecue' and 'holiday' for clustering purposes.
These supplementary techniques necessarily operate on 'small data', and the real challenge is finding natural UI flows and affordances to suggest them when appropriate and solicit feedback without overwhelming the user.
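A toy sketch of the "categorize by proper nouns" idea above, in Python. A real implementation would use an actual POS tagger (e.g. spaCy or NLTK); here, capitalized mid-sentence tokens stand in for proper nouns, and the note texts, entity names, and helper functions are all hypothetical illustrations, not anything from the app:

```python
import re
from collections import defaultdict

def extract_entities(note: str) -> set[str]:
    """Toy stand-in for proper-noun tagging: collect capitalized
    tokens that are not at the start of a sentence."""
    entities = set()
    for sentence in re.split(r"[.!?]\s+", note):
        tokens = sentence.split()
        for tok in tokens[1:]:  # skip the sentence-initial word
            word = tok.strip(".,!?\"'")
            if word[:1].isupper():
                entities.add(word)
    return entities

def cluster_by_entity(notes: list[str]) -> dict[str, list[str]]:
    """Group notes under every entity they mention; a note can
    land in several clusters."""
    clusters = defaultdict(list)
    for note in notes:
        for entity in extract_entities(note):
            clusters[entity].append(note)
    return dict(clusters)

notes = [
    "Ask Evan about the zoning permits.",
    "The site near Tulum needs a survey.",
    "Follow up with Evan before the Tulum trip.",
]
clusters = cluster_by_entity(notes)
print(clusters["Evan"])   # both notes that mention Evan
print(clusters["Tulum"])  # both notes that mention Tulum
```

Swapping `extract_entities` for a real named-entity recognizer is where the disambiguation problems mentioned above (which Mary?) start to matter.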
This is currently my main criterion. I want something that captures my thoughts while hiking without seeing or touching the screen. Currently dabbling with Siri shortcuts, but they're pretty buggy and lacking.
So if Voiceliner could either support the Shortcuts API and/or switch into a press-to-record or start-stop mode that somehow works through the connected AirPods only, that would be awesome.
Bonus points for re-reading the transcription to me and very light editing on top (like document switching).
Is there something like that?
The taps are finicky for me, maybe because of my phone case. I might try them out again and see if it's worth ditching my case.
I've wanted exactly this for years. I've sketched a few versions, but it stayed on the back burner for me, partly because friends etc. didn't see the appeal.
I'm really excited to try it out.
- Sync the audio + text somewhere (although maybe this can be done with Syncthing already?)
- Add a widget / app action to support one-tap voice notes from the home screen
Thank you, thank you, thank you.
And open sourcing it too - can't love you more ;)
Love the initial setup wizard. Great way to teach the user, clarifying what various permissions are for.