Rather than hiding bugs, I usually wind up finding them when doing this: teasing apart the different concerns that were developed in parallel during the hacking session (while keeping the codebase compiling and tests passing at every step) tends to expose coupling issues you wouldn't notice when everything's there at once.
It's basically a one-person code review. And when you're done you have a coherent story (in commits) which is perfectly suited for other people to review, rather than just a big diff (or smaller messy diffs).
It also lets me commit whenever I want to during development, even if the build is broken. This is useful for finding bugs during development, since you'll have more recorded states to fall back on, e.g. the last working state when you screw something up. And in-development commits can be more like notes to myself about the current state of development than well-reasoned prose about the features contained.
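A self-contained sketch of that workflow (file names and commit messages are made up for the demo; the point is that even broken states get recorded):

```shell
# Throwaway demo repo so the commands below run anywhere.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email "dev@example.com" && git config user.name "Dev"

# Commit freely during development, even when things are broken.
echo "step 1: works" > feature.txt
git add feature.txt && git commit -qm "WIP: first cut of feature"

echo "step 2: broken, mid-refactor" > feature.txt
git commit -qam "WIP: does not compile, refactoring parser"

echo "step 3: works again" > feature.txt
git commit -qam "WIP: parser refactor done"

# Later, the recorded states make it easy to step back to where
# things last worked:
git log --oneline
```

The WIP messages are notes to yourself; they get rewritten into a coherent story before anyone else sees the branch.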
I realize not everyone agrees with it, but I hope I've described some good reasons why I think modifying history (suitably constrained by the don't-do-it-once-you've-given-your-branch-to-the-public rule) is a good thing, not something to be shunned.
Rewriting local history seems no different than rewriting code in your editor.
Rewriting shared history is (almost) always bad.
There are a (very) few instances where you'd want to rewrite something pushed to a shared repo. One is when there's a shared understanding that the branch will be rewritten. Examples include git's own "pu" and "next" branches: "pu" is rebased every time it changes, and "next" is rebased after every release. Everyone knows this and knows not to base work off these branches. There's also the occasional "brown paper bag" cleanup, like when proprietary information gets into the repository by mistake and all the contributors have to cooperate to get it removed. But all of these require out-of-band communication of some kind.
Everyone knows that it's "my branch" and that they're absolutely not supposed to use it for anything until it's merged back into master or whatever authoritative branch.
We use the forking model for bigger projects with more developers, and the branching model for smaller projects. It works out very nicely.
A lot of git is "magic" to many developers, and the way rebase works is certainly one of its more poorly understood features.
Yes, that's why Git doesn't allow you to push rewrites, at least not without '--force'.
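That refusal can be seen in a self-contained sketch (paths and branch names are made up for the demo); `--force-with-lease` is shown as the safer variant of `--force`, since it only overwrites the remote if it's still where you last saw it:

```shell
# Set up a bare "shared" repo and a working clone in a temp dir.
tmp=$(mktemp -d) && cd "$tmp"
git init -q --bare origin.git
git init -q work && cd work
git config user.email "dev@example.com" && git config user.name "Dev"
git remote add origin ../origin.git

echo one > file && git add file && git commit -qm "first"
git push -q origin HEAD:main

# Rewrite the published history locally...
git commit -q --amend -m "first, reworded"

# ...and a plain push of the rewrite is rejected (non-fast-forward):
git push -q origin HEAD:main 2>/dev/null || echo "push rejected"

# --force-with-lease only overwrites if the remote ref still matches
# our last-seen value, which is safer than a bare --force:
git push -q --force-with-lease origin HEAD:main && echo "forced push ok"
```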
Agreed. The one counterexample that I have is Github pull requests. Those are actually branches in your fork, and you do want to rewrite those when you get feedback on a pull request. That makes it easier for the owner of the repo to do the merge later.
* In Subversion, people track patches using tools like quilt to manage them before actually putting them together into a commit.
* In Mercurial, people use `hg mq`, which is like a more featureful version of `git-stash`.
These are basically all ways to track a series of patches prior to 'committing' them into the code base shared with others.
Rebase is a part of code review. If someone spots a typo and a "fix typo" commit follows it up, as happens in a good proportion of GitHub-model projects, I cringe. This information is utterly useless to the project's history and should be rebased in as a fixup. Only once code review is done should a commit be considered for merge. It's at that point that rewriting becomes a problem.
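Git supports exactly this with `--fixup` and `--autosquash`. A self-contained sketch (file names and messages are illustrative; `GIT_SEQUENCE_EDITOR=true` just accepts the generated todo list so the demo runs unattended — normally you'd review it in your editor):

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email "dev@example.com" && git config user.name "Dev"

echo base > README && git add README && git commit -qm "initial commit"
base=$(git rev-parse HEAD)

echo "fetaure code" > feature.txt        # oops: typo
git add feature.txt && git commit -qm "add feature"
target=$(git rev-parse HEAD)

# Reviewer spots the typo. Instead of a standalone "fix typo" commit,
# record the fix as a fixup of the commit it repairs:
echo "feature code" > feature.txt
git commit -qa --fixup="$target"

# --autosquash reorders the "fixup!" commit next to its target and
# squashes it in during the interactive rebase:
GIT_SEQUENCE_EDITOR=true git rebase -q -i --autosquash "$base"

git log --oneline   # one clean "add feature" commit, typo gone
```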
I think most people forget where Git came from; it is designed from the ground up for this! When someone emails a series of patches to the kernel mailing list for review, they iterate that series of commits over and over until it's ready. They don't keep adding new patches on top, like the pull-request model of GitHub/GitLab etc. does.
There's something to be said for having every commit pass tests/work (or if it doesn't saying explicitly in the commit message), if anyone is ever going to step over this commit.
The problem is they ask /me/ to rebase it; I think they should take a little ownership in the potential rewriting of history.
The team I work with has recently started making sure every commit passes the build, and it's had fantastic results for our productivity. We know every individual commit passes on its own. If we cherry-pick something in, it's most likely going to pass; and if it fails, the problem is usually in that specific commit, not one made days or weeks ago.
Indeed, I think the widespread rewriting of history that goes on in the Git world makes it more likely that there will be failing commits, because every time you rewrite, you create a sheaf of commits that have never been tested.
Now, in your case, it sounds like you have set up processes to check these commits, and that's absolutely great. Everyone should do this! But why not combine this with a non-rewriting, test-before-commit process that produces fewer broken commits in the first place?
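One built-in way to check every rewritten commit is `git rebase --exec`, which replays each commit and runs a command at every stop, halting at the first failure. A self-contained sketch (the trivial `check.sh` stands in for a real test suite):

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email "dev@example.com" && git config user.name "Dev"

# A stand-in "test suite": passes as long as app.txt is non-empty.
printf 'test -s app.txt\n' > check.sh
echo v1 > app.txt
git add . && git commit -qm "initial"
echo v2 > app.txt && git commit -qam "second"
echo v3 > app.txt && git commit -qam "third"

# Replay the last two commits, running the checks after each one;
# the rebase stops at the first commit whose checks fail.
git rebase -q --exec "sh check.sh" HEAD~2 && echo "all commits pass"
```

On a real project the command would be something like `make test`, run against the branch's merge base rather than `HEAD~2`.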
As long as the components are logically separate, it's usually a good idea to make those changes in separate commits. While you can do that using selective git add, I personally often find it more convenient to just have a whole bunch of rather small "WIP" commits that you later group and squash together in a rebase.
Not least of the reason is that I like to make local commits often in general anyway, even when I know that the current state does not even compile. It's a form of backup. In that case, I really don't want to have to run tests before making commits.
And obviously, all of this only applies to my local work that will never be used by anybody else.
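The WIP-then-squash route can be sketched non-interactively with `git reset --soft` (one alternative to an interactive rebase; file names and messages are made up for the demo):

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email "dev@example.com" && git config user.name "Dev"
echo base > app.txt && git add app.txt && git commit -qm "initial"

# Pile up small WIP commits while working (the other route is
# `git add -p`, picking hunks into separate commits as you go):
echo "parser" >> app.txt && git commit -qam "WIP parser"
echo "lexer, half done" >> app.txt && git commit -qam "WIP lexer half-done"
echo "lexer, finished" >> app.txt && git commit -qam "WIP lexer finished"

# Later, squash: move the branch pointer back but keep the working
# tree and index, then recommit as one coherent change.
git reset -q --soft HEAD~3
git commit -qm "add lexer and parser"

git log --oneline   # just "initial" and the one clean commit
```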
The problem I have with non-linear commit history is that I find it impossible to keep all the paths straight in my head when I am trying to understand a series of changes. Maybe you can do that, and I think that's awesome, but I like to see a master branch and then smaller feature branches that break off and then combine back with master.
To my understanding, Gerrit does grouped commits as part of its flow. Even better, it groups all review-triggered commits under the same final commit, with the nice, extensive description that one carved out for the change. It's regrettable that GitHub popularized the fork/pull-request model instead.
Isn't this the same rationalization that drives Git Flow's feature branches and merging via --no-ff ? You can see the messy real work in the feature branch, but it gets merged to the main branch as one clean commit.
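A self-contained sketch of that `--no-ff` behavior (branch and file names are illustrative): even though the merge could fast-forward, `--no-ff` records a merge commit, so the first-parent history of the main branch shows the feature as a single point while the messy step-by-step work stays visible on the side branch.

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email "dev@example.com" && git config user.name "Dev"
echo base > app.txt && git add app.txt && git commit -qm "initial"

git checkout -qb feature
echo work1 >> app.txt && git commit -qam "feature: step 1"
echo work2 >> app.txt && git commit -qam "feature: step 2"

git checkout -q -          # back to the main branch
git merge -q --no-ff --no-edit feature

git log --oneline --graph  # merge bubble with the feature on the side
```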
Replaying changes is much more comfortable to me, especially while I still have them in short-term memory; it's certainly easier than merging other people's stuff into your files.
My average feature is around 7-10 commits, all replayed on top of the latest commit on the branch. It forces me to catch up with other people's work in shared areas, and gives me quite a bit more confidence that the merge isn't messing up problematic files.
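That replay workflow can be sketched in a self-contained way (the demo rebases onto a local main branch; on a shared repo it would be `git fetch` followed by `git rebase origin/master` or similar):

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email "dev@example.com" && git config user.name "Dev"
main=$(git symbolic-ref --short HEAD)   # "master" or "main", per config
echo base > shared.txt && git add shared.txt && git commit -qm "initial"

git checkout -qb feature
echo mine > mine.txt && git add mine.txt && git commit -qm "my change"

# Meanwhile, others land work on the main branch:
git checkout -q "$main"
echo theirs > theirs.txt && git add theirs.txt && git commit -qm "their change"

# Replay my commits on top of the latest main instead of merging,
# surfacing any conflicts in shared areas commit by commit:
git checkout -q feature
git rebase -q "$main"

git log --oneline   # linear: initial, their change, my change
```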