I don't think the problem is that we can't develop a practical implementation; it's that the constant hidden by the asymptotic notation is inherently huge, making the algorithm impractical at the input sizes we actually care about.
There are a lot of problems where this shows up, notably primality testing (AKS runs in polynomial time, something like O(log^6 n) in the size of the input iirc, but randomized tests like Miller-Rabin beat it in practice) and fast matrix multiplication (Strassen's algorithm is only mildly affected and does get used for large matrices, but the asymptotically faster Coppersmith-Winograd-style algorithms have constants so big they're never run).
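The matrix multiplication case is easy to demonstrate concretely. Here's a minimal sketch of Strassen's algorithm (assuming power-of-two dimensions, with a `cutoff` parameter I've picked arbitrarily): it does 7 recursive multiplications instead of 8, giving O(n^2.81) instead of O(n^3), but the 18 extra matrix additions per level are exactly the kind of constant-factor overhead that decides where (or whether) it wins in practice.

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Strassen's matrix multiplication: 7 recursive products instead of 8.

    Asymptotically O(n^2.81) vs O(n^3) schoolbook, but the extra additions
    mean it only pays off above some crossover size -- a mild version of the
    hidden-constant problem. Assumes square matrices with power-of-2 size.
    """
    n = A.shape[0]
    if n <= cutoff:
        # Below the cutoff the plain product is faster; recurse no further.
        return A @ B
    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]

    # The 7 Strassen products (each costs one recursive multiply).
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)

    # Reassemble the four quadrants of the result.
    C = np.empty((n, n))
    C[:k, :k] = M1 + M4 - M5 + M7
    C[:k, k:] = M3 + M5
    C[k:, :k] = M2 + M4
    C[k:, k:] = M1 - M2 + M3 + M6
    return C
```

Even here the crossover point where Strassen beats a tuned O(n^3) routine is implementation-dependent and often in the hundreds; for the Coppersmith-Winograd family the crossover is so large it exceeds any matrix anyone will ever multiply.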