Our domain is precision oncology. In brief, this is about matching cancer patients with available targeted therapies by examining the genome of their cancer. This is different from how the majority of cancer patients are treated today: surgery and {chemo,radio}therapy.
Here is a problem in our domain where tech is one of the limiting factors.
If you look at the DNA of any cell and compare it against the reference genome, you'll find a lot of differences, aka variants. Typically even more so if you're looking at a sequenced tumour (~1e6 variants). This is your haystack. And a variant that can be medically targeted to treat the cancer is the needle. The definition of which variants are "clinically relevant" is layered, context-dependent, and (partially) regulated. Software is responsible for automating away the majority of variants, say down to 10-100, in a justifiable, traceable way. It's also responsible for giving the clinician tools to deal with the remaining ones. This manual step typically involves an informed line of questioning about each variant, backed by hundreds of supporting data points about it.
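To make the "justifiable, traceable" part concrete, here's a minimal sketch of what one tier of automated filtering might look like. The field names (gnomad_af, clinvar, consequence) mimic common annotation sources but are illustrative assumptions, not any real pipeline's schema; the key idea is that every variant, kept or discarded, carries a machine-readable audit trail.

```python
# Hypothetical sketch of tiered variant triage: each filtered-out variant
# records *why* it was removed, so the reduction from ~1e6 candidates to a
# reviewable short list stays justifiable and traceable.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Variant:
    gene: str
    hgvs: str                  # variant description, e.g. "c.1799T>A"
    gnomad_af: float           # population allele frequency (assumed source)
    clinvar: Optional[str]     # "pathogenic", "benign", or None if unreported
    consequence: str           # "missense", "synonymous", "frameshift", ...
    audit: List[str] = field(default_factory=list)  # traceability trail

def triage(variants):
    """Apply filters in order; annotate every decision."""
    kept = []
    for v in variants:
        if v.gnomad_af > 0.01:
            v.audit.append("filtered: common in population (AF > 1%)")
        elif v.clinvar == "benign":
            v.audit.append("filtered: asserted benign in knowledge base")
        elif v.consequence == "synonymous":
            v.audit.append("filtered: synonymous, no protein change")
        else:
            v.audit.append("kept: passed all automated filters")
            kept.append(v)
    return kept

variants = [
    Variant("BRAF", "c.1799T>A", 1e-5, "pathogenic", "missense"),
    Variant("TTN",  "c.2T>C",    0.12, None,         "missense"),
    Variant("TP53", "c.215C>G",  0.30, "benign",     "missense"),
]
shortlist = triage(variants)   # only the BRAF variant survives
```

A real pipeline has many more tiers (quality filters, panel membership, tumour-vs-normal subtraction, ...) and each threshold is itself a clinical and regulatory decision, but the shape is the same: ordered filters plus an audit log.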
Without these two tech pieces, interpreting a single molecular pathology report can and does take many hours of (expensive) expert time, instead of minutes. For a rough sense of scale: the human genome has ~3e9 nucleotides (ACTG), ~3e4 known genes, ~1e6 known gene interactions, and ~1e8 known variants. Typical whole genome sequencing produces > 30GB of raw data (compressed).
This is probably the first problem everyone runs into. There are plenty of other ones, some more challenging and interesting than others. Feel free to send me an e-mail if you'd like to discuss this more! amir[at]streamlinegenomics.com