The power of git is the ability to work in parallel without getting in each other's way. No longer having a linear history is an acceptable consequence; in reality, the history was never linear to begin with. It is better to have a messy but realistic history if you want to trace back what happened and who did and tested what at what time. I prefer my code to be clean and my history to be correct.
I prefer to start with GitHub flow (merging feature branches without rebasing), and I described how to deal with different environments and release branches in GitLab Flow: https://about.gitlab.com/2014/09/29/gitlab-flow/
The whole point of history is to have a record of what happened. If you're going around and changing it, then you no longer have a record of what happened, but a record of what you kind of wish had actually happened.
How are you going to find out when a bug was introduced, or see the context in which a particular bit of code was written, when you may have erased what actually happened and replaced it with a whitewashed version? What is the point of having commits in your repository which represent a state that the code was never actually in?
It always feels to me like people just being image-conscious. Some programmers really want to come across as careful, conscientious, thoughtful programmers, but can't actually accomplish it, so instead they do the usual mess, try to clean it up, then go back and make it look like the code was always clean. It doesn't actually help anything, it just makes them look better. The stuff about nonlinear history being harder to read is just rationalization.
Rather than hiding bugs, usually I wind up finding bugs when doing this because teasing apart the different concerns that were developed in parallel in the hacking session (while keeping your codebase compiling/tests running at every step) tends to expose codependence issues that you wouldn't find when everything's there at once.
It's basically a one-person code review. And when you're done you have a coherent story (in commits) which is perfectly suited for other people to review, rather than just a big diff (or smaller messy diffs).
It also lets me commit whenever I want to during development, even if the build is broken. This is useful for finding bugs during development, as you'll have more recorded states to fall back on, e.g. the last working state when you screw something up. And in-development commits can be more like notes to myself about the current state of development rather than well-reasoned prose about the features contained.
I realize not everyone agrees with it, but I hope I've described some good reasons why I think modifying history (suitably constrained by the don't-do-it-once-you've-given-your-branch-to-the-public rule) is a good thing, not something to be shunned.
However in my observation I have found that more than any other revision control system I have used, the person ultimately responsible for the code spends far more time cleaning up history and recovering from developer mistakes on projects using git than any I can recall, and that goes back to CVS and Visual Source Safe, also including svn and hg.
I know a lot of people use git and love it so I'm prepared to accept that they're all smarter than I am. But IMHO, the version control system should be incidental to my work. It should not demand any significant fraction of my brainpower: that should be devoted to the code I'm working on. If I have to stop and THINK about the VCS every time I use it, or if it gives me some obscure "PC LOAD LETTER" type of response (which seems to happen to me when I use git) then it is a net negative. If I need to have a flowchart on my wall or keep some concept of a digraph in the front of my thinking or use a cheat sheet to work with the VCS, then it's just one more thing that gets in my way.
I think git probably has a place on very large codebases, with very distributed developers. For the typical case of a few developers who all work in the same office, I think in most cases it's overkill and people would be more productive using something simpler.
839a882 Fix bad code formatting [James Kyle]
6583660 Updated plugin paths for publish env [James Kyle]
847b8f3 First stab at a mobile friendly style. [James Kyle]
a70d3f7 Added new articles, updated a couple. [James Kyle]
b743ec3 format changes on article [James Kyle]
68231e7 Some udpates, added an article [James Kyle]
2a92c5e Added plugins to publish conf. [James Kyle]
6dec1e1 Added share_post plugin support. [James Kyle]
070bbd0 Added pep8, pylint, and nose w/ xunit article [James Kyle]
eb8dbcc Corrected spelling mistake [James Kyle]
0b89761 Minor article update [James Kyle]
677f635 Added TLS Docker Remote API article [James Kyle]
d8e94fd Fixed more bad code formatting in nose [James Kyle]
f06dc2d Syntax error for code in nose. [James Kyle]
606ac2b Removed stupid refactor for testing code. [James Kyle]
This might be a very short one. If the work goes on for a couple of days, there could be dozens of commits like this. In the end, it'd be a veritable puzzle what I was trying to send upstream. Also, the merger has to trawl through multiple commits and history. It's plain annoying.
So you rebase and send them something like this:
947d3e7 Implemented mobile friendly style. [James Kyle]
And if they want more, they can see the full log with a bullet list:

947d3e7 Implemented mobile friendly style.
- Added plugins x, y,
- Implemented nose tests to account for new feature
Rebasing is about taking a discombobulated, stream-of-thought workflow and condensing it into a single commit with an accurate, descriptive log entry. It makes everyone's life easier.
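One way to sketch that condensing step is a soft reset to the fork point followed by a single, well-described commit. This is a self-contained demo; the repo, branch name, file names, and messages are all made up for illustration:

```shell
#!/bin/sh
set -e
# Demo repo: a feature branch with three noisy commits gets condensed
# into one commit via a soft reset (an alternative to interactive rebase).
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email demo@example.com && git config user.name demo
base=$(git symbolic-ref --short HEAD)   # "master" or "main", depending on git version
echo base > style.css && git add style.css && git commit -qm "base"
git checkout -qb feature/mobile-style
for msg in "First stab at a mobile friendly style." "format changes" "Fix bad code formatting"; do
  echo "$msg" >> style.css && git commit -qam "$msg"
done
# The squash: move HEAD back to where the branch forked while keeping all
# the combined changes staged, then record one commit with a real message.
git reset --soft "$(git merge-base "$base" HEAD)"
git commit -qm "Implemented mobile friendly style."
```

`git rebase -i` achieves the same result with finer control (you can keep more than one commit), but the soft-reset form is the simplest way to get the single-commit outcome described above.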
Edit: it's also very nice to take out frustration-generated commits like "fuck fuck fuck fuck fuck!!!" before committing upstream to your company's public repository. ;)
Maybe. I'm not actually sure, to be honest what's a good idea with git history, this included. Feedback welcome.
That seems overly broad. It seems to me that most people who use git agree that public history shouldn't be rewritten, especially on master.
> The whole point of history is to have a record of what happened.
On the other hand, a bunch of "Derp" or "Whoops" type commits aren't very useful. It's definitely beneficial to clean that sort of stuff up by rewriting local history before pushing.
Would I like to get away from that and do it from the get-go? Oh yes, it'd be great. But I'm not there yet and so re-writing history is nice. And doing so forces me to think about the code I've written and where the boundaries of the changes I've made are. Granted, I haven't done it on very long lived feature branches (or big ones) - that may be where most of the penalties are manifest.
Every author is "image-conscious" because they want to present their thoughts clearly to the world. That's where your rather substantial misconceptions about the application and utility of rebasing come from. This isn't about rewriting published history, which is rightly and nearly universally considered A Bad Idea(tm) in the git world. The recommendations around rebasing are essentially identical to authors editing their text before publication. Note "before". Before {an article, some code} is published, edit, rewrite, cleanup all you want. After it's published, an explicit annotation is the best practice. For an author, perhaps an "Updated" note in an article or a printing number in a book. For a developer, add a new commit recording the change.
For my part, I use rebasing extensively and lightly before I publish code. By "extensively" I mean, I just don't hesitate to edit for clarity. This is the same as I'd do in authoring a post or email. By "lightly", I mean that I don't waste time doing radical history surgery but I regularly do things like squash a commit into an earlier logical parent commit. E.g. I started a refactor, then a little while later found some more instances of the same change. Often, this is just amending the HEAD commit, but occasionally I need to go back a short ways on my working branch.
This also fluidly extends to use of git's index and the stash for separating out logical commits from what's in the working copy. A typical example:
1. git add <files for a logical change>
2. git stash -k # put everything not added into the stash
3. # run tests
4. git commit
5. git stash pop
Once you're used to the above workflow, an understanding of git's commit amending and rebasing tools extends this authoring capability into recent history. This is wonderful because it takes pressure off of committing, meaning that git history becomes a powerful first-class, editable history/undo stack.
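The "squash a commit into an earlier logical parent" move described above maps directly onto git's `--fixup` and `--autosquash` options. A self-contained sketch (repo contents and commit subjects are hypothetical):

```shell
#!/bin/sh
set -e
# Demo repo: fold a late fix into its earlier logical parent commit.
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email demo@example.com && git config user.name demo
echo base > README && git add README && git commit -qm "base"
echo v1 > widget.c && git add widget.c && git commit -qm "Refactor widget API"
target=$(git rev-parse HEAD)            # the logical parent commit
echo hi > other.c && git add other.c && git commit -qm "Unrelated work"
# A little while later: found one more instance of the same refactor.
echo v2 >> widget.c && git add widget.c
git commit -q --fixup "$target"         # creates "fixup! Refactor widget API"
# --autosquash reorders the todo list and folds the fixup in automatically;
# GIT_SEQUENCE_EDITOR=true accepts the prepared list non-interactively.
GIT_SEQUENCE_EDITOR=true git rebase -q -i --autosquash "$target~1"
```

After the rebase, "Refactor widget API" contains both changes and the fixup commit is gone, with no manual editing of the todo list.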
In most organizations, we don't have anywhere near that number of participants and we don't want charismatic developers, we want something that works right now and we're confident that changing it is not merely a possible outcome but very very likely.
Editing draft commits is fine. Editing public commits is less fine. The problem is that git has no way to distinguish draft and public commits except by social convention.
Mercurial Evolve actually enforces the separation between draft and public commits, and can also allow setting up servers where people can collaboratively edit draft commits.
My talk about it:
At the end, it might all be squashed down into a single bug-fix commit for the devel branch.
The commit granularity that's desirable and effective for an individual is very different to the history you want in the main feature branches.
I disagree, and it's actually impossible not to use it. Rebase rewrites history. If you have a long-running feature branch you need to merge back into master, you have to rebase it against the current master. There's really no other choice.
> The whole point of history is to have a record of what happened.
Define "what happened" in this context...are we talking about what the feature's changes end up looking like, or the entire linear history of the work on this feature starting from the point at which the programmer experimented with a bunch of dead-ends before finding the right path?
Personally, I feel like an extremely detailed history of my personal problem-solving adventure on every complex ticket is irrelevant. At the end of the day, the code reviewer just wants to know what changed. When I review code, I prefer to look at a massive diff of everything that's been done, not read commit-by-commit. I'd rather see exactly what I'm going to pull in when I merge it into master.
I would also disagree with you here that the whole point of source control is to maintain a history of what happened, and argue that the point of source control is communicating changes between developers on a team. The fact that it backs up your code and keeps a history of what changed are merely secondary features to the central value of providing a way of communicating changes to a codebase between developers. I think Git is the best version control system for doing this, because it allows you to rewrite history. That said, rewriting history is very dangerous, and if you use it incorrectly (the cardinal rule being never, ever rewrite history on a branch other people have to pull from), you're going to cause your team real pain.
> If you're going around and changing it, then you no longer have a record of what happened, but a record of what you kind of wish had actually happened.
If you're using Git, this is a complete falsehood if you are the person who made the commits. The reflog provides a reference to every single change made to your repository, so you can just reset back to the point before you rebased and voila, like magic everything is back to the way it was. This isn't a "hack", that's what reflog is for. It's a giant undo list for your local clone of the repo.
So in essence, history is never destroyed. It's just hidden from view. You can always go back in Git unless you actually `rm -rf .git/`.
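The reflog recovery described above looks like this in practice. A self-contained demo (the repo and messages are made up; an amend stands in for any history rewrite, including a bad rebase):

```shell
#!/bin/sh
set -e
# Demo repo: a rewrite is never destructive locally -- the reflog keeps
# a reference to every prior state of HEAD.
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email demo@example.com && git config user.name demo
echo v1 > f && git add f && git commit -qm "original commit"
# Rewrite history:
echo v2 > f && git add f && git commit -q --amend -m "rewritten commit"
git reflog                       # HEAD@{0} is the amend, HEAD@{1} the original
git reset -q --hard 'HEAD@{1}'   # and voila, everything is back the way it was
```

Note that reflog entries do eventually expire (90 days by default), so "never destroyed" holds for your working timescale rather than forever.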
> Some programmers really want to come across as careful, conscientious, thoughtful programmers, but can't actually accomplish it, so instead they do the usual mess, try to clean it up, then go back and make it look like the code was always clean.
You might be correct in some cases, but I think for the majority of the time you are confusing explicitness with vanity. Programmers want other people on their team to know what they did, or at least the intention of their code, and having commit messages that "tell a story" and make sense are vital for doing that.
I see statements like "The power of git is the ability to work in parallel without getting in each-others way" and get really worried about what people are trying to achieve. I want my team's code to be continuously integrated so that problems are identified early, not at some arbitrary point down the line when two features are finished but conflict with each other when both are merged. We seem to be reversing all the good work the continuous integration movement gave us; constant integration makes integration issues smaller and easier to fix.
I personally prefer to use toggling and techniques like branch by abstraction to enable a single mainline approach. Martin Fowler has a very good article on it here http://martinfowler.com/bliki/FeatureBranch.html
Even so, I see value in very short-lived branches for code review. Ideally a branch exists for about four hours before it is merged to master.
* most git UIs don't provide branch filtering or --left-only (which hides "accessory"/"temporary" merged branches unless explicitly requested)
* developers won't necessarily care for correct merge order, breaking "left-only" providing a mainline view
The end result is, especially for largeish projects, merge-based workflows lead to completely unreadable logs.
If you test a commit and it passes, and then merge that commit into master, the merge may have changed the code that the commit modified, or something that the commit's code depended on. The green flag you had on the commit is no longer valid because the commit is in a new context now and may not pass.
If you rebase the commit onto master, you're explicitly changing the context of the commit. Yes, you get a different SHA and you're not linked to the original CI result anymore, but that CI result wasn't valid anymore anyway. This is exactly the same situation that the merge put you into, but without the false assurance of the green flag on the original test result.
As many others have noted, rebasing is only recommended on private branches to prepare them for sharing on a public branch. If you're running CI it's probably only on the public branches, so rebasing wouldn't affect that. But if you're running CI on your private branch too, then you're going to want to run it after rebasing onto the public branch and before merging into the public branch. That gives you assurance that your code works on the public branch before you share it. Again, if you're using a merge-based workflow you'd have to do the same testing regardless of your earlier test results.
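The rebase-then-test-then-fast-forward sequence above can be sketched as follows. This is a self-contained demo where the "public" branch is simulated locally; branch names and the `test -f` stand-in for a real test suite are hypothetical:

```shell
#!/bin/sh
set -e
# Demo repo: rebase onto the public branch, re-validate in the new
# context, then fast-forward merge so exactly the tested commits land.
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email demo@example.com && git config user.name demo
base=$(git symbolic-ref --short HEAD)
echo base > app && git add app && git commit -qm "base"
git checkout -qb feature/foo
echo feature > feature.txt && git add feature.txt && git commit -qm "add feature"
# Meanwhile the public branch moved on:
git checkout -q "$base"
echo fix > hotfix.txt && git add hotfix.txt && git commit -qm "hotfix"
# Re-validate the private branch in its new context before sharing:
git checkout -q feature/foo
git rebase -q "$base"            # replay the branch on the latest public history
test -f hotfix.txt               # stand-in for running the real test suite here
git checkout -q "$base"
git merge -q --ff-only feature/foo   # the tested commits land unchanged
```

Because the merge is fast-forward only, the commits that reach the public branch are byte-for-byte the ones that were just tested, with no new merge context to invalidate the result.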
1. If reviews + CI tests go well we fast forward merge onto master.
2. If the commit's parent isn't the latest commit on master, it is automatically rebased and the CI suite is kicked off again.
3. Upon successful fast forward merge into master, all in-flight reviews are automatically rebased on master's new head and CI's kicked off again.
4. Any open commit can become the top of master without worry it will break the build.
For our team of ~10 this works exceptionally well, with master not having been broken by our code in the last ~6 months. (edit: formatting)

^^^ Couldn't agree more.
However, I don't know why people want to avoid rebasing feature branches. Rebasing feature branches means that you only have to resolve the conflict once and have a clean history for your release branch. Granted, it works well in my team where only a single developer owns a given feature branch.
If you have a feature branch with a number of changes in the same place, rebasing on to a branch that also changes in the same spot means you need to fix all of the related commits during the rebase. If it's a big feature that could end up being a huge task.
I may be wrong about that as I'm no git guru.
How does GitLab store the code-review data? Is it stored in the (or a) git repo? Is the feature compatible with rebasing feature branches before merge?
Also, pricing: I only just noticed that your pricing was per YEAR, not per MONTH. Most bootstrap-pricing-page software is priced monthly and the user/year text is lowlighted. This has to be costing you sales.
When you accept a merge request the title, description and a link to the merge request are stored as the commit message. For example see https://gitlab.com/gitlab-org/gitlab-ce/commit/6c0db42951d65... This allows you to see any other things that were discussed. Hopefully any line comments were resolved with a commit (thus documenting them) or were based on a misunderstanding.
Thanks for the pricing tip, we'll fix it https://gitlab.com/gitlab-com/www-gitlab-com/issues/348
Are you sure your developers feel the same as you do? Are you sure they're willing to be open enough to you about their misgivings?
The only evil is a willingness to force push the updated history over our branches before the PR goes up. But no one shares branches usually. Or the collaborators on a branch are few and they agree when to rewrite history.
I disagree very, very strongly with this. I wrote about this years ago at http://www.darwinweb.net/articles/the-case-for-git-rebase, but it's due for an update to clarify my thinking.
Basically my beef is the idea that never rebasing is "true" history, and rebasing gives you "corrupt" or "whitewashed" history. In fact, the only thing you have weeks, months, years after pushing code is a logical history. It's not as if git automatically records every keystroke or every thought in someone's head—that would be an overwhelming amount of information and difficult to utilize anyway—instead it's all based on human decisions of what to commit and when. Rebasing doesn't "destroy" history, it's just a decision of where a commit goes that is distinct from the time it was first written, but in fact you lose almost no information—the original date is still there, and from that you can infer more or less where it originally branched from.
"But," you say, "surely complete retention of history is preferable to almost-complete retention?". Well, sure, all else being equal I would agree with that. But here's the crux of the issue: merging also loses information. What happens when you merge two conflicting commits is that bugs are swallowed up in merge commits rather than being pinned down to one changeset. This is true whether it is a literal merge conflict that git detects, or a silent logic error that no one discovers until the program blows up. With two branches merging that are logically incompatible, whose responsibility is it to fix their branch? Well, whoever merges last of course, and where does that fix happen under a never-rebase policy? In that single monstrous merge commit that can not be reasoned about or bisected.
But if you always rebase before merging to master, then the second integrator has to go back and correct each commit in the context of the new branch. In essence, they have to fix their work for the current state of the world, as if they had written it today. In this way each tweak is made where it is visible and bisectable instead of squashed into an intractable merge commit.
I get that there is some inconvenience around rebasing coordination and tooling integration (although GitHub PRs handle it pretty well), but the idea that the unadulterated original history has significant value is a straw man. If the branch as written was incompatible at the point it got merged, there is no value in retaining the history of that branch in an incompatible state because you won't be able to use it anyway. In extreme cases you might decide the entire branch is useless and just pitch it away entirely, and certainly no one is arguing to save history that doesn't make it onto master right?
It's simple. Read backwards down the `develop` branch and read off the `feature/whatever` branches. Just because the graph isn't "pretty" doesn't mean it's useless.
In general, I'm starting to dislike "XXX considered harmful" articles. It seems to me like you can spout any opinion under a title of that format and generate lingering doubt, even if the article itself doesn't hold water. Not to generalize, of course--not all "XXX considered harmful" articles are harmful. They generally make at least some good points. I just think the title format feels kind of clickbaity at this point.
That said, kudos to the author for suggesting an alternative rather than just enumerating the shortcomings of GitFlow.
And yet people who want to argue against the use of it simply because they don't want to learn something new now have a useful link to throw around as "proof" that a very successful strategy is "harmful". I guarantee in the next year I will have to go point by point and refute this damn article to some stubborn team lead or another senior dev.
Nobody should ever write an article outright bashing a strategy that they either don't fully understand or personally have not managed to integrate successfully in their own day-to-day. Bare minimum, if you're going to publish an article critical of a tool, don't name it so aggressively as to sound like it's fact rather than a single personal point of view.
Specifically, git log --first-parent (--oneline --decorate) would look much better with the documented strategy. Instead of seeing all the commits in the branch, all that's shown is the merge commit. If you used the article's branch names, all you'd see is:
* Merge branch 'feature/SPA-138' into develop
* Merge branch 'feature/SPA-156' into develop
* Merge branch 'feature/SPA-136' into develop
If you actually used descriptive branch names, that would seem to be quite useful - you'd immediately see the features being added without seeing all the gritty details!
- branches: Work in progress.
- develop: Code ready to share with others. It can break the build (merge conflicts, etc) and it won't be the end of the world.
- master: This shouldn't be broken. It needs to point to a commit that has already been proven not to break the build and to pass all the required tests.
As always, you need to find a balance with these things and adapt to the peculiarities of your code base and team. I really see them as suggestions...
- Feature branch: do whatever you want
- Develop: should be good enough for the client (product owner) to look at
- Release branch: should be good enough to be tested by the test/QA team
- Master: should be good enough for website visitors
Branches are meant to be shortlived and merged (and code reviewed) into develop as soon as possible. We use feature toggles to turn off functionalities that end up in develop but can not go to production.
For example:
Team is working on feature A and feature B. Each feature is developed on its own branch.
Feature A is ready for testing/integration, it is merged into develop.
Feature B is ready, it is merged into develop.
Now here is the problem:
Feature B is ready for release, but feature A is not. It is now not possible to merge develop into master without including both features.
The solution I use is to have master and branches. That's it. Master represents currently-running production code. Branches contain everything else.
This also happens to be how GitHub works. This article explains how it works with various levels of deployment before production - http://githubengineering.com/deploying-branches-to-github-co...
If Feature B isn't ready, it should stay in its own branch until it is.
Develop is for code that the developers say is ready. You might have bugs, poor merge resolution, etc, but any fixes made should be quick and should pave the path toward code that can be merged into master. If the problems are major, revert develop.
Master, on the other hand, should always be stable and rock solid. You can then have production servers that always pull master automatically, and staging servers that always pull develop automatically.
I've worked on multiple teams this way, and it works out quite well.
Let's say next release is Release-1.10
1. We merge Feature A and Feature B to it.
2. Feature A is tested and ready to be deployed. Feature B is not.
3. You either revert Feature B commit OR re-create the release branch with only Feature A.
4. Deploy the release branch.
5. Merge the release branch in to master.
This is exactly what we do at my current workplace where we have 4 developers working on changes with different release schedule.
We have one particular repo at work that is just a pain in the ass to work with (Typescript analytics code that has to be backwards compatible to about forever), and we've pretty much abandoned the develop branch since releases got held up due to bugs in less important features that had been merged in without comprehensive testing. Pretty much everything now gets tested extensively on a feature branch and then gets merged directly into a release branch.
We might have swung a little too far in the other direction, I'm thinking we want to at least merge well tested bits back onto develop, but at least we can release the features that are actually done and cut those that are still having issues without having to do any major git surgery.
I think you're describing "GitHub flow", or how most people starting out with git would probably use it given no exposure to git-flow.
Github flow introduction: http://scottchacon.com/2011/08/31/github-flow.html
Documentation (it's really unnecessary if you get the idea): https://guides.github.com/introduction/flow/
The main disadvantage, as the article rightly points out, is that it makes it much harder to read the history. But that's easily solved with a simple switch: --no-merges. It works with git-log, gitk, tig, and probably others too. Use --no-merges, and get a nice looking linear history without those pesky merge commits.
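The two readable-log views (`--no-merges` and its counterpart `--first-parent`, mentioned elsewhere in this thread) can be compared side by side. A self-contained demo with made-up branch and commit names:

```shell
#!/bin/sh
set -e
# Demo repo: one feature branch merged with --no-ff, then two log views.
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email demo@example.com && git config user.name demo
base=$(git symbolic-ref --short HEAD)
echo base > f && git add f && git commit -qm "base"
git checkout -qb feature/x
echo a > a && git add a && git commit -qm "gritty detail 1"
echo b > b && git add b && git commit -qm "gritty detail 2"
git checkout -q "$base"
git merge -q --no-ff -m "Merge branch 'feature/x'" feature/x
git log --oneline --no-merges      # linear view: every commit except merges
git log --oneline --first-parent   # mainline view: one merge commit per feature
```

The first view hides the merge commits themselves; the second hides the per-branch detail and shows only the mainline, so each suits a different question about the history.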
One trick that can really help with this is `git commit --amend`, which lets you amend the last commit. If you spot a bug or a typo in your last commit, add your fix to the index and then run `git commit --amend`. This replaces your last commit with a new one that contains your latest fix. Of course, this should only be done if you have not yet pushed that commit to the remote.
For fixes to earlier commits, I don't bother much and just live with the trivial commit. Though if I end up making several trivial commits in one sitting, I do a cleanup and merge these fixes into one commit before pushing.
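The amend trick in a self-contained demo (file contents and message are hypothetical):

```shell
#!/bin/sh
set -e
# Demo repo: fold a typo fix into the last (not yet pushed) commit
# instead of adding a noise commit.
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email demo@example.com && git config user.name demo
echo "teh feature" > f && git add f && git commit -qm "Add feature"
# Spot the typo, fix it, and rewrite the tip commit in place:
echo "the feature" > f && git add f
git commit -q --amend --no-edit   # same message, updated content, new SHA
```

`--no-edit` keeps the original message; drop it if you want to reword the message too.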
We use feature branches and rebase before merging to master (mostly for the reason stated above - keep things clean and treating the branch as a single logical unit, not caring about higher-resolution history within the branch).
However, some times, especially since we typically merge on Github from a PR, it's easy to forget to rebase, or notice that there are more commits. So our history is mostly clean, but occasionally contain some "messy" commits.
I know we can reset master, but it feels a bit too risky compared to living with a bit of noise when mistakes happen.
Anyone knows of some way to prevent / alert / notify or otherwise help us avoid this rather common mistake before it happens?
If anyone has a solution to this problem please share!
I like this theory, and generally like merge commits because of it -- but in practice, I've found it still _really really hard_ to revert a feature even if I have a merge commit. Simply reverting the SHA of the merge commit does not, I think, do it; git complains/warns about that. I have to admit I still haven't figured out how to do it reliably, even with a merge commit!
Master should always be latest production code, development branch contains all code pre-release. That's the core.
The other branches let you scale gitflow - if you need to track upcoming release bugfixes etc, you can use a release branch. A team of maybe 6 or 7 would likely start to need a release branch. Feature branches at this point are best left local on the developers repository. They rebase to fixup commits, and then merge those into develop when they're ready.
If you get into bigger teams - like maybe 6 agile teams working on different larger features, then you can introduce feature branches for the teams to use on sprints to keep the work separate.
The issue with gitflow is the lack of continuous integration, so I personally like to get teams to work only on a develop branch during sprints and use feature toggles to commit work to the develop branch without breaking anything.
As I see it, gitflow and CI are at odds and that's my biggest gripe with integrating lots of feature branching for teams - everyone has to integrate at the end of the day.
So I believe the model can and should be scaled back as far as possible, using only develop and master and release as primary workflow branches, introducing the others when the need arises - doing it just because it says so in the doc isn't the right approach.
This seems more of a: "This tool is popular but it doesn't work for me so it's bad".
In fact, as you say, he dislikes the tool (from the get go):
> I remember reading the original GitFlow article back when it first came out. I was deeply unimpressed - I thought it was a weird, over-engineered solution to a non-existent problem. I couldn't see a single benefit of using such a heavy approach. I quickly dismissed the article and continued to use Git the way I always did (I'll describe that way later in the article). Now, after having some hands-on experience with GitFlow, and based on my observations of others using (or, should I say more precisely, trying to use) it, that initial, intuitive dislike has grown into a well-founded, experienced distaste.
Throwing my two cents. There's no perfect methodology and teams that communicate and adhere to a set of standards will probably find a good way to work productively with git. They can always be helped with scripts like the gitflow plugin or some other helper if they think the possibility of human errors is big.
I also have anecdotal experience of working with and without and, being fine with either although I do appreciate git flow in any project that starts getting releases and supporting bug fixes, hot fixes and has been living for a while so it incorporate orthogonal features at the same time.
He is suggesting to use 90% of what GitFlow suggests (feature/hotfix/release branches) but doesn't like the suggestion of non-fast-forward merge and master/Dev and that makes GitFlow harmful? I don't think I agree.
I think having the Dev branch is useful. Consider this actual scenario at my current workplace.
1. We have 4 developers. Nature of the project is such that we can all work independently on different features/changes.
2. We have Production/QA/Dev environment.
3. When we are working on our individual features, we do the work in individual branches and merge into the Dev branch (which is continuously deployed). This lets us know of potential code conflicts between developers in advance.
4. When a particular feature is 'developer tested', he/she merges it into a rolling release branch (Release-1.1, Release-1.2 etc) and this is continuously deployed to QA environment. Business user does their testing in QA environment and provides sign off.
5. We deploy the artifacts of the signed off release branch to Production and then merge it in to the master and tag it.
Without the development branch, the only place to find out code conflicts will be in the release branch. I and others on my team personally prefer the early feedback we can get thanks to the development branch.
Advantages of an explicit merge commit:
1. Creating the merge commit makes it trivial to revert your merge. [Yes, I know it is possible to revert without a merge commit, but it's not exactly a one-step process.]
2. Being able to visually see that a set of commits belongs to a feature branch. This is more important to me (and my team) than the 'linear history' that the author loves.
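Point 1 above can be demonstrated in a scratch repo; with an explicit merge commit, backing out a whole feature is a single `git revert -m 1` (the `-m 1` picks the mainline parent to revert to). Branch and file names here are made up:

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q -b master && git config user.email you@example.com && git config user.name you
echo base > app.txt && git add . && git commit -qm base
git checkout -qb feature
echo extra > extra.txt && git add . && git commit -qm "feature work"
git checkout -q master
git merge -q --no-ff --no-edit feature   # explicit merge commit

# Reverting the entire feature is one step
git revert -m 1 --no-edit HEAD
```

After the revert, `extra.txt` is gone from the working tree in a single commit, regardless of how many commits the feature branch contained.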
We have diverged from GitFlow in only one way: we create all feature/release/bugfix branches from 'master' and not 'develop'.
Now, don't get me wrong, GitFlow is not simple, but it's not as complicated as the author seems to suggest. I think the author would have been better served by a title like 'What I don't like in GitFlow'.
The idea of CI is that you integrate all commits, so you must integrate the develop branch - build the software, run the tests, deploy it to a production-like environment, test it there too.
So naturally, most testing happens in that environment; and then you make a production release starting from the master branch, and then install that -- and it's not the same binary that you tested before.
Sure, you could have two test/staging environments, but I don't think I could get anybody to test their features in both environments. That's just not practical.
I guess this does open up the possibility that merging master (with hotfixes) back into development could cause a regression, but we certainly try to keep hotfixes minimal and simple.
Now database changes...that's the real pain point. Both master and development need their own DB to apply changes scripts to. Otherwise, deltas from development make testing master an issue.
Ultimately they decided to move to team branches, where each team branch was free to operate however it wanted so long as the team branch itself built successfully before merging into master. I think most teams adopted the more natural-feeling GitHub Flow.
Personally, for me it's not even the god-awful history that makes me despise gitflow, but its reliance on additional tools to effectively manage the process. This should be a huge red flag to anyone seeking to change a process, and it's complained about a lot. Coworkers not knowing what git-flow is doing under the covers is dangerous. I consider myself pretty versatile with git at this point, but I have no idea what the tool does under the covers. I'm sure I could find out; however, when you're handed a piece of software, generally you learn the contract/API it provides, and most of us aren't going to delve into the implementation details.
So if you have just one environment for testing, you can decide to deploy the develop branch to it, in which case you deploy untested builds (from the master branch) to production.
Or you can decide to always deploy the master branch to the testing environment, in which case you have to do a release each time you want to show somebody your progress (and you can't easily show it in dev); that's just annoying extra work, and goes against the idea of continuous integration.
1. Merging vs Rebasing
Open source projects should stick with merging over cherry-picking and rebasing, especially if you want others to contribute. Unless you feel fine doing all of the rebasing and cherry-picking for them. Otherwise, good luck gathering a large enough pool of people to contribute. Simplicity always wins here.
2. GitFlow vs X
Once again do what is good for your company and the people around you. If you have a lot of developers having multiple branches is actually /beneficial/ as Master is considered ALWAYS working. Develop branch contains things that are ready to ship, and only things that are READY TO SHIP. So if your feature isn't ready yet, it can't go to develop, and it won't hit master. Your features are done in other branches.
3. Rewriting history
Never do this. Seriously, it will come to bite you in the ass.
4. Have fun.
Arguing is for people who don't get shit done.
To be fair, it's a cookie-cutter approach that resonates with people unfamiliar with git but not ready/willing to invest the time to understand it deeply. That is understandable; a lot of people come from other systems and just need to get going right away, and git's poor reputation for command-line consistency etc. is well-earned.
(To be clear, I am not a fan of git flow.)
If anyone is interested in truly understanding git, start here: http://ftp.newartisans.com/pub/git.from.bottom.up.pdf
(I believe that git flow is definitely better than "everyone does things their way", and that's one competing "rule-book" for a team new to git.)
I'm pretty confident that understanding the tool better will help you to judge how to use it more effectively. The best way to understand git is to understand its data-model.
Merge commits are great. They are there to group a list of commits into a logical set. This logical set could represent one "feature", but not necessarily. It is up to you to decide whether commits A B C D should or shouldn't be grouped by a merge commit. Merge commits also make regression searches (i.e. git bisect) a lot faster. And to top it off, they will make your history extremely readable, provided you merge correctly... and that is where git rebase and git merge --no-ff come into play.
At my company, every developer must rebase their topical branch on top of the master branch before merging. Once the topical branch is rebased, the merge is done with --no-ff. With this extremely easy flow, you end up with a linear history, made of a master branch going straight up with, every once in a while, a merge commit.
Our commit history looks like this:
*-------------*---------*---------*----------*----*------->
\-----------/ \---------/ \--/
Following the simple rule "commit, commit, commit..., rebase, merge --no-ff" has avoided the merge spaghetti a lot of people complain about. Although, I have to admit our repository is small (6583 commits to date). This works even when multiple devs work on the same branch: they must get in touch on a regular basis, rebase the branch they are working on, and force-push it. Rewriting the history of topical branches is only bad if it is not agreed on. As long as it is done in a controlled manner, nothing's wrong with it.
Another rule we follow is to always "git pull --rebase" (or set git config branch.autosetuprebase always).
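The "commit, rebase, merge --no-ff" rule above can be sketched end to end in a scratch repo. Branch names here are hypothetical; the point is that the topic branch is replayed on top of master before the --no-ff merge, so the only non-linear thing left in history is the one merge commit:

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q -b master && git config user.email you@example.com && git config user.name you
git config branch.autosetuprebase always   # new tracking branches pull with --rebase
echo base > app.txt && git add . && git commit -qm base

git checkout -qb topic
echo work > topic.txt && git add . && git commit -qm "topic: commit 1"

# master moves on in the meantime
git checkout -q master
echo more >> app.txt && git commit -qam "master: unrelated change"

# the rule: rebase the topic branch, then merge with --no-ff
git checkout -q topic && git rebase -q master
git checkout -q master && git merge --no-ff --no-edit topic
```

`git log --graph --oneline` on the result shows exactly the shape drawn in the ASCII diagram above: a straight mainline with one small loop per topic branch.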
Our approach might not, however, scale for larger teams or open source projects.
From what I can tell, no-ff exists to satisfy the aesthetic preference of your local team pedant. It gives them something to do between harping on whether your behavior is in the correct "domain", deciding whether list comprehensions are truly "pythonic", and spending that extra month perfecting the event sourcing engine to revolutionize the "contact us" section of your site.
> From what I can tell no-ff exists to satisfy the aesthetic preference of your local team pedant.

Incredible irony.
I mean, it's certainly possible that in some tiny fraction of cases I might say "man, I could fix this a lot easier if I had the merge commit"; it's just that in the 10,000s of examples that form my experience I haven't stepped on that particular landmine yet.
Even with that said, my development philosophy compels me to choose "Simplicity over Completeness" and is utilitarian to the core. I will choose whatever is most effective in the vast majority of cases.
Some folks look at "Source Control History" as some pristine, historical record of how things went down. Since I am not an accountant or auditor this has little value to me. It encumbers the day to day to optimize for a case that is almost certain to never happen. A first-order approximation of the history that optimizes for the day-to-day needs of an organization is far more suitable in almost every case.
When I use the term "local team pedant" it's not a bad thing. Some folks just have a need for things to be "complete" and feel compelled to pursue that for irrational (usually expensive) reasons. In my own experience, the person who is the "no rebase / never fast forward" cheerleader can never give a solid, objective answer as to what the benefit is. It's usually something like what this no-ff-er suggests (http://walkingthestack.blogspot.com/2012/05/why-you-should-u...). Things like "I can see what's on a branch, etc." That in itself is not a justification; it's just words. If you could somehow demonstrate how this reduces development costs or offers a better way to organize work, and is simultaneously better than the more idiomatic alternatives, then I'm all for it.
This is my point: I find the 12 commits to be unnecessary. I've never been burned by squashing. The only arguments I've heard against it are ideological (you're destroying history, etc.).
The time it takes to carefully rebase a branch onto another, and to compress commits for a feature into one, is still much longer than the time it takes for my eyes to pass over so-called "empty" merge commits.
If I want to look at when a feature entered a branch, I can look at its merge commit. And the feature branches are there to show how a feature was built; bugs could be the result of a design decision that happened in one of the midway commits.
I looked at OP's example pic in the blog, and I read all of his words, but I wasn't sold. His picture looks like a normal git history to me. It requires almost no effort to find what I'm looking for.
And that's not even touching his rage against the idea of a canonical release branch (master). But that's for another day.
And I must say I agree with all you say in that post.
Thanks!
GitFlow has been working great for us. A team of 15 developers, working with feature branches, we have our CircleCI configured to automatically deploy the "develop" branch to our "QA environment", and our "master" branch to "production" environment.
The "hotfix" and "release" branches have proven useful to us too; we just need to have effective communication within our team, so that everybody rebases their feature branch after a merge into our main branches.
* "master" is the current stable release
* "develop" is the current "mostly stable" development version
The first time you clone a repository this is an extremely helpful convention to quickly get your head around the state of things.
If you're doing it right (and don't use --no-ff, which I agree is unreasonable), I can't think of a scenario where this causes extra merge commits. Merges to master should always be fast forward merges.
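The "merges to master should always be fast forward" rule above can be enforced mechanically with `--ff-only`, which refuses to create a merge commit and fails instead if the branches have diverged. A small sketch with made-up branch names:

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q -b master && git config user.email you@example.com && git config user.name you
echo base > app.txt && git add . && git commit -qm base

git checkout -qb release
echo fix >> app.txt && git commit -qam "release work"

git checkout -q master
# Succeeds only because master hasn't moved since release branched off;
# otherwise it errors out instead of silently adding a merge commit.
git merge --ff-only release
```

If master had diverged, the command would abort, prompting you to rebase the release branch first.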
Also, I don't see his point about that messy history. I can see exactly what happened in that history (though the branch names could have been more informative). With multiple people working on the same project, feature branches will save your sanity when you need to do a release, and one feature turns out to not be ready.
https://www.kernel.org/pub/software/scm/git/docs/gitworkflow...
I like subsetting gitworkflows(7) because you can incrementally add process when the tangible benefits (like increased reliability and experimental access for eager users) outweigh their process cost (which depends on team experience). I wrote about these issues here:
http://mail-archive.com/search?l=mid&q=87zjx4x417.fsf@mcs.an...
This diagram represents a workflow that uses 'maint', 'master', and 'next' branches.
I think that is because I am used to using hg's branches, bookmarks, and tags for different use cases.
If I want to mark a revision as a particular release number (which is something we don't really do here but I can see the value) then I would use hg tag. Tag's are permanent.
If I want to mark a revision as "production" and then have some automated process take over based on the the updated info, I would use hg bookmark. Bookmarks are the closest equivalent to git's branches. Bookmarks can be updated to a new revision or removed.
If I wanted to work on a parallel branch of development for an experimental feature or if I am attempting to upgrade some dependencies, I can use hg branch. This creates a named branch in the code base which is permanent. This branch can eventually be either closed or merged back into the main.
I apologize for the self-promotion, but this answer on Stack Overflow (and the question) talks about this difference between Git and Mercurial, and includes links to articles that explain it better than I could:
One main branch is great, and when working with a large number of contributors I really like a clean history; it makes things much easier to review.
It's kind of a shame something got branded with a slick name like "GitFlow", when "doing it the way you ought to be doing it" doesn't have a slick name :)
Not for any other project where maintenance releases are the norm. This includes strict API-compatibility projects, semantically versioned frameworks/plugins/libraries, many forms of desktop/offline apps, some Android apps, most enterprise apps, etc.; more or less anywhere developers don't have the liberty to thrust the latest master on their users.
I'm not against CD, and not a big fan of Git Flow either. But different things have their own uses. I'm really liking GitHub Flow and GitLab Flow though!
From an open source developer's perspective I need more "eternal" branches because I need to plan future releases. Putting everything into master makes the decision for me (if I have a breaking change I have to bump a major version even if maybe I want to delay doing that).
Have you not found the ability to investigate/audit bugs hindered by non-linear histories?
I've used all kinds of branching models... I've used just a master branch and you commit directly to master. I've used full git-flow.
I think the branching model you use is dependent on the people and the project. But really no matter which model I've used it seemed to me to be fine... And if it wasn't fine, we extended it to meet our requirements.
You should still rebase your feature branch on top of whatever you're merging into whenever you can, even if you're using git-flow. That's just common sense. When you do, your history looks almost the same as in his 'pretty graph', there's just one more 'link' back to the previous feature merge.
The advantages of this additional context are important. Firstly, you can get a compressed view of only the features that were merged (without detailed commits) with something like `git log --first-parent`. I guess the only way to do that in the OP's approach is `git log | grep 'SPA-'`? Rather... unreliable.
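The `--first-parent` view is easy to demonstrate. In this sketch (the SPA-* branch names echo the issue-prefix convention mentioned above, but are otherwise made up), each feature is merged with --no-ff, so the mainline's first-parent chain contains only the base commit and one merge per feature:

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q -b master && git config user.email you@example.com && git config user.name you
echo base > app.txt && git add . && git commit -qm base

for f in SPA-1 SPA-2; do
  git checkout -qb "$f" master
  echo "$f" > "$f.txt" && git add . && git commit -qm "$f: detail work"
  git checkout -q master && git merge -q --no-ff --no-edit "$f"
done

# Compressed feature-level view: only the merge commits on the mainline
git log --first-parent --oneline
```

Full history has five commits (base, two detail commits, two merges), but the first-parent walk shows just three lines: the two feature merges and the base.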
Using no-ff also means you don't have to do the silly thing of putting your issue name / branch name in every commit title. Titles are pretty short already, having to allocate ~10% of it to tracking the name of the branch is just wasteful. With no-ff it's obvious which feature the commit is for (the branch name in the merge). If your tool fails to present that in a reasonable fashion, that's disappointing, but the data includes this context and that's the most important thing.
As to the master/develop split, yeah I could be convinced it's unnecessary. Still, I think it's convenient to have a clear separation of 'this code is in production', 'this code is in development'. If you just make a release branch then merge it into develop, you have to know the exact tag before being able to find the latest release. 'master' being the alias for 'latest release' is fine.
I'd also suggest that you want to make sure you consider the full totality of costs, because it's very humanly easy to see this one feature that you recall using a lot, when in fact you can easily recall it precisely because it is a rare event (and thus worthy of memory), whereas the costs of a complicated branching structure are continuous and ongoing.
I'm not saying that linear is therefore guaranteed to win for you, just pointing out the cognitive danger of seeing the big, rare expensive costs and missing the continual drip of small ones.
That said, I'm not necessarily 100% linear myself, but I do sometimes feel like git made branches easy and some people overreacted. If you've got a branch that lived for at least, say, a week, and had significant independent work within it, then by all means merge it and keep a merge commit. But this workflow creates branches upon branches upon branches, and then keeps them around forever in the history. I'm not convinced that last bit is necessarily a good thing... I create a ton of branches, sure, but I only keep big ones that actually mean something, not every little bug branch with one commit of one line. There is a happy medium available here, too.
- Checkout master.
- Start an interactive rebase of master onto the last
commit before the series of commits you wish to remove.
- Mark all the commits you don't care about as "skip".
- Let the rebase run and resolve conflicts on the way,
the same as you'd do with your current workflow.

While it's dated at this point, I've always felt that the GitHub flow [1] works best (for the projects I'm involved with anyway).
This also allows us to keep merging master into feature branches, (where there is only a single commit that might need to be manually merged) instead of rebasing feature branches on master (in which case it can be necessary to manually merge multiple intermediate commits).
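The "mark commits as skip" rebase procedure described above can also be scripted non-interactively, which is handy for illustrating what it does. This sketch drops a middle commit by rewriting the rebase todo list via GIT_SEQUENCE_EDITOR (it assumes GNU sed for `sed -i`; commit names are made up):

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q -b master && git config user.email you@example.com && git config user.name you
echo base > base.txt && git add base.txt && git commit -qm base
echo a > a.txt && git add a.txt && git commit -qm keep-1
echo b > b.txt && git add b.txt && git commit -qm drop-me
echo c > c.txt && git add c.txt && git commit -qm keep-2

# The todo list shows keep-1, drop-me, keep-2 (oldest first);
# turning line 2's "pick" into "drop" removes the middle commit.
GIT_SEQUENCE_EDITOR="sed -i '2s/^pick/drop/'" git rebase -q -i HEAD~3
```

Because the dropped commit touches its own file, the rebase replays cleanly; when dropped changes overlap later commits, you resolve conflicts exactly as the steps above describe.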
What cleared up git merge --squash for me was a comment showing that:

git checkout master
git merge --squash feature

is the equivalent of doing:

git checkout feature
git diff master > feature.patch
git checkout master
patch -p1 < feature.patch
git add .

Git history has a lot of commits. That's OK.
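That equivalence is easy to verify in a scratch repo: after `git merge --squash`, the combined diff of the feature branch sits staged in the index with no commit made, exactly as if you had applied the patch and run `git add`. Branch and file names below are illustrative:

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q -b master && git config user.email you@example.com && git config user.name you
echo base > app.txt && git add . && git commit -qm base

git checkout -qb feature
echo one > one.txt && git add . && git commit -qm "feature: part 1"
echo two > two.txt && git add . && git commit -qm "feature: part 2"

git checkout -q master
git merge --squash -q feature
# Nothing committed yet: both feature files are simply staged on master
git status --short
```

It's then up to you to write the single squashed commit message with a plain `git commit`.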
We're currently having lots of success with this:
* Always work in a feature branch.
* Pull master + rebase feature branch when done.
* Merge to master with --no-ff --edit and include a summary.
Rebasing feature branches keeps them readable and avoids continuous merges. Disabling fast-forward keeps the log for /master abstracted to feature level, but the details are available in the graph.
Major releases are branched, minors (bugfixes) are tagged. Bugfixes are made in master and cherry-picked into the release where possible.
Currently our CI build only works on /master, but in the coming month it'll build all feature branches which have been pushed to the main repository.
This is very similar to how Perforce streams work, but it's distributed. If you really hate distributed version control and love GUIs then I can recommend Perforce.
What the author describes is fairly close to what I've been using in a number of companies now for the last 9 years or so.
Whether to rebase is a personal preference. I tell developers to always rebase local work before pushing. Unobserved changes might as well not exist (if a tree falls in the forest and no one is there to hear it, does it make a sound?), so if you haven't pushed your work, rebase it. No one cares when you did the work.
As for feature branches, it depends. If the history is clean and there aren't too many at one time, we might merge without rebasing. But I still prefer to clean up the commit history and rebase. I don't understand the obsession with "true history". History is written by victors, in this case — resulting work/code.
So absolute minimum you need one persistent branch per old release, if you ever hotfixed it and still have it deployed in the field. GitFlow falls over here, because it only has one master. But at least it does recognize the fact that repairing released code is different from pushing the unreleased state of the art forward.
I've lost count of the number of times that two eternal branches and feature branches with pull requests (+ code review) has saved major flaws from getting to production.
The develop branch is perfect for automatically deploying our bleeding edge to our test server.
Although, if we move to a more continuous deployment approach, we may transition away from two eternal branches. But when GitFlow was first written about, continuous deployment really wasn't the trend that it is now.
Methods will continue to evolve...
Gitflow thinks about branches as lanes. Git branches are actually labels. What's the difference? In the gitflow model every commit belongs (implicitly) to a branch (or a lane). Git branches don't work that way. One could actually implement "lane" as an additional commit metadata and tweak git-log (and other git utilities) to always show lanes in straight lines in the graph.
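The "branches are labels" point is easy to see in a scratch repo: a branch is just a ref pointing at a commit, and two branches can label the same commit without any commit "belonging" to either:

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q -b master && git config user.email you@example.com && git config user.name you
echo base > app.txt && git add . && git commit -qm base

# A branch is a movable label: a ref that resolves to a commit id
git rev-parse refs/heads/master

# A second label on the same commit; no new "lane" comes into existence
git branch lane master
git rev-parse lane
```

Both rev-parse calls print the same commit id, which is why git itself has no notion of which "lane" a commit was made on; gitflow's lanes only exist in the diagrams.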
We deploy once a week, but if we need to get something out the door quickly, we make a hotfix branch off of master, then merge it into both develop and master. This way, if we find something that needs to be fixed before the next release, but don't want to push half-done updates, we can seamlessly do it.
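The hotfix flow just described can be sketched as follows (branch names are illustrative); the key step is merging the fix into both master and develop so neither side loses it:

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q -b master && git config user.email you@example.com && git config user.name you
echo base > app.txt && git add . && git commit -qm base
git branch develop

# Urgent fix: branch off master...
git checkout -qb hotfix-login master
echo fix > hotfix.txt && git add . && git commit -qm "hotfix: login issue"

# ...then merge into BOTH master and develop
git checkout -q master  && git merge -q --no-edit hotfix-login
git checkout -q develop && git merge -q --no-edit hotfix-login
git branch -d hotfix-login
```

Half-done work sitting in develop never touches the hotfix branch, which is what makes the quick production push safe.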
I would rather branch off of master, bring changes in via git am or rebasing when ready, then tag a release when it is ready to be released. If there is something wrong with master, the tagged releases serve as easy points to branch off of.
It might be fun to compute the number of branches needed as a function of the number of devs in your team.
SCM system discussions should be banned on HN; they're as pointless and heated as vi vs. Emacs discussions.
To me what matters more is the consistency.
Also, the attitude and tone of this article straight up stinks.