For the edge-coloring problem, the optimal number of colors needed to properly color the edges of a graph G is always either Delta(G) (the maximum degree of G) or Delta(G) + 1; this dichotomy is Vizing's theorem. Deciding which of the two is the true optimum, however, is an NP-complete problem.
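To make the dichotomy concrete: a path with three edges has maximum degree 2 and can be edge-colored with 2 colors, while a triangle also has maximum degree 2 but needs 3. For tiny graphs you can check which case you are in by brute force. The Python sketch below (names like chromatic_index are my own, not from the paper) only has to test whether Delta(G) colors suffice, since the theorem guarantees the answer is never worse than Delta(G) + 1.

```python
from itertools import combinations, product

def is_proper(edges, coloring):
    """True if no two edges sharing an endpoint got the same color."""
    for i, j in combinations(range(len(edges)), 2):
        if coloring[i] == coloring[j] and set(edges[i]) & set(edges[j]):
            return False
    return True

def chromatic_index(edges):
    """Brute-force the optimal number of edge colors for a tiny graph.

    By the Delta / Delta + 1 dichotomy, it suffices to test whether
    Delta colors work; if not, Delta + 1 is the answer.
    Assumes a simple graph with at least one edge.
    """
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    delta = max(degree.values())
    if any(is_proper(edges, c)
           for c in product(range(delta), repeat=len(edges))):
        return delta
    return delta + 1

print(chromatic_index([(0, 1), (1, 2), (2, 3)]))  # path: Delta = 2, prints 2
print(chromatic_index([(0, 1), (1, 2), (2, 0)]))  # triangle: Delta = 2, prints 3
```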
Nevertheless, you can always properly edge-color a graph with Delta(G) + 1 colors. Finding such a coloring could in principle be slow, though: the original proof that Delta(G) + 1 colors always suffice amounted to an O(e(G) * v(G)) algorithm, where e(G) and v(G) denote the number of edges and vertices of G, respectively. This is polynomial, but nowhere near linear. What the paper in question shows is how, given any graph G, to find an edge coloring with Delta(G) + 1 colors in O(e(G) * log(Delta(G))) time, which is linear in the number of edges whenever the maximum degree is bounded by a constant.
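For contrast, here is what the naive approach looks like: a minimal Python sketch of greedy edge coloring (my own illustration, not anything from the paper). Color the edges in arbitrary order, giving each one the smallest color not already used at either endpoint. This runs in roughly O(e(G) * Delta(G)) time but only guarantees 2 * Delta(G) - 1 colors; squeezing that down to Delta(G) + 1 is exactly where the fan-and-alternating-path machinery of Vizing's proof, and the cleverness of the new algorithm, come in.

```python
def greedy_edge_coloring(edges):
    """Color each edge with the smallest color free at both endpoints.

    Uses at most 2 * Delta - 1 colors (not the Delta + 1 of Vizing's
    theorem): an edge (u, v) conflicts with at most
    (deg(u) - 1) + (deg(v) - 1) <= 2 * Delta - 2 already-colored edges,
    so some color in {0, ..., 2 * Delta - 2} is always available.
    """
    used = {}       # vertex -> set of colors already on its incident edges
    coloring = {}
    for u, v in edges:
        banned = used.setdefault(u, set()) | used.setdefault(v, set())
        c = 0
        while c in banned:      # smallest color free at both u and v
            c += 1
        coloring[(u, v)] = c
        used[u].add(c)
        used[v].add(c)
    return coloring

# Triangle plus a pendant edge: Delta = 3; greedy happens to use
# exactly 3 colors here, though in general it can need up to 2 * Delta - 1.
print(greedy_edge_coloring([(0, 1), (1, 2), (2, 0), (0, 3)]))
```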