Hello! Since I keep encountering things like this (especially recently), I'd like to write this post about version control. (NO! Don't leave yet!)

Why am I writing this?

Recently, by talking to people in various communities and by simply browsing around, I keep finding people who don't use version control when they should... To be honest, it bothers me even when I'm not involved in the project at all, just knowing that something serious is being developed without the help of a version control system. (If you have no clue what I'm talking about, please read on.) Maybe it's just me, maybe it's a "once you go ___, you never go back" type of thing, but it really concerns me. How does someone or a group of people create a public folder on Dropbox or some similar service and sync the code with their buddies? How does this work? When I see this sort of thing, I tell people to use version control software such as git (really the best). Some know it exists but claim it won't work for their situation, or that they don't want to lose their code (far less likely with git...); these people are simply misinformed. And some have absolutely no idea what it is. Finally, to answer the initial question: I'm writing this because I want to point people in the right direction about version control, people who may not have used it, heard of it, or understood its benefits.

What is version control?

Some people have absolutely no clue what version control is, which is fine; I like to think I'm doing them a tremendous favor by writing this post. git-scm has a fantastic article explaining the basics of what version control is, and I will try to summarize it here (read their article though). Version control records the history of your project, i.e., the changes you have made over time, and is a fantastic tool for keeping track of your project. This is handy if you mess some code up and need to revert to a previous state, and even if you lose the code completely, there are (depending on your project/setup) copies of it in multiple places. You may not consider this necessary for your solo projects (it may be, depending on the scale; best to be safe), but it's good practice for the future, and it's incredibly handy for group projects, with all of the features these various systems have to offer.

Git

Okay, here's the main idea of this post: just learn git. It's not that difficult, and it will benefit you enormously down the road. Github provides an outstanding introductory tutorial for git.

I'm not going to write all about version control here; I'm just attempting to point someone who is unfamiliar with git, or with version control in general, in the right direction. I know some of you out there have never used it, and I hope this helps you (it will).

Thanks for the read, and good luck.

(For those of you that use version control I realise I may have left some things out but it's almost 1AM and I'm extremely tired...)

How does someone or a group of people create a public folder on Dropbox or some similar service and sync the code with their buddies?

I did that a few times for non-code stuff (Word documents), and it's the worst thing ever. One time it was really bad: a team of 9 people working on a 900-page Word document over a drop-box-style thing. We basically had to do the check-out / commit through emails (e.g., send an email to say "I'm currently working on it.", download the latest doc, port your changes one-by-one from your local (old) version into the latest doc, upload the latest doc back, and email to say "Ok, I'm done!"). This was hell! And the sad thing is, we were following the recommended methodology that our supervisors at the European Space Agency told us to follow! So, this whole problem of people being oblivious to the existence of version control is worse than you think (not to mention the absurdity of creating large documents in MS Word, as opposed to a scalable text engine like LaTeX).

This is handy if you mess some code up and need to revert to a previous state

I remember making that (very obvious) point to someone and getting the reply: "I already have that with the 'undo' button in my editor"... talk about not knowing what you're missing.

Git

Just another thing I want to mention about Git: it's really awesome because it's decentralized (it does not need a server; every copy of the repository is a full copy, with full history). This is great both because it doubles as a backup system (all clones of the repo are backups) and because you can run it off (internal/external) hard drives, on local networks, or on private computers... in other words, you can use Git everywhere, at virtually no "setup cost" (time or money).
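
To make that concrete, here is a minimal sketch (the paths are made up, adjust to your own): cloning your repository onto an external drive gives you a complete backup, history included, which you can refresh whenever you like:

$ git clone /home/me/my_project /media/usb/my_project_backup

# ...later, after more commits in the original (assuming it is still at the same path):
$ cd /media/usb/my_project_backup
$ git pull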

"once you go ___, you never go back"

Yeah, many good things are like that ;) I programmed (as a hobby) for quite some time before "getting the news" about version control, and the benefits are so great that I don't think I've written any serious code without it since. It's one of those things that you either don't know about at all, or you use all the time.

I wish it were taught in CS curricula (i.e., introduce students to it, and require that they use it). Has anyone heard of it being required in some CS courses? I sure haven't.

iamthwee:

I tried to use version control... such as github and bitbucket.

But I don't work in a team. I found the process interfered with my work flow. Instead I just create a new revision with a folder name plus an incremented number on the end.

Instead I just create a new revision with a folder name plus an incremented number on the end.

That's all well and good... until you want to know what changes you made, and when. Then the history features of a source control system are amazingly helpful. Rolling back specific parts, rather than the whole project, is also handy during development.

All very good points!
iamthwee:
Git is the version control software; github and bitbucket simply host your git repositories. I suggest you read more about git, because it doesn't seem like you realize how beneficial it can be.

But I don't work in a team. I found the process interfered with my work flow. Instead I just create a new revision with a folder name plus an incremented number on the end.

You don't need github or bitbucket, or any other "host" for using Git. For a very simple "one man" project, it could be as simple as turning your project's folder into a git repository (which you do with $ git init in the top-level folder), and then, just "commit" your changes once in a while (e.g., every day or every time you complete a good chunk of code). This requires very little work, and does not really disturb any kind of work flow (unless your work flow is extremely disorganized!). Doing things like copying folders (and tagging them by version number or date) or stashing tar-ballz of older versions is far worse in terms of work needed and far less convenient in terms of features.
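
As a concrete sketch of that minimal "one man" routine (the folder name and messages are made up):

$ cd ~/my_project
$ git init
$ git add .
$ git commit -m "Initial commit"

# ...then, after each good chunk of work:
$ git commit -a -m "Short description of what changed"
# (note: -a only picks up files git already tracks; brand new files still need a 'git add')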

Here is a typical situation:

I'm working on a project (just me, local repo), and I run some piece of code which fails (error, crash, whatever...). I find this odd, because it used to work at some point, i.e., the last time I ran that particular test code, maybe a couple of months ago. I'm pretty sure I didn't change too many things, but I can't figure out or remember what could be causing this error. So, I go to my project directory and check the log for the file I suspect is broken:

$ git log suspected_file.cpp

and I get a history of the commits that included changes to "suspected_file.cpp". I find a commit from about the time when I think things still worked, and I do:

$ git checkout [good_commit_hash]

then I test that it works... and maybe I try to check out a commit further up-stream or down-stream to find, more or less, the last commit where it still worked. Then, I just restore the most current commit:

$ git checkout master

And then, I can check what has changed between the last good commit and now, maybe with this:

$ git diff [last_good_commit]..HEAD -- suspected_file.cpp

or, between the last good commit and the first bad commit:

$ git diff [last_good_commit]..[first_bad_commit] -- suspected_file.cpp

and I might look at any other file, or all of them, between the last-good and first-bad commits (or the current one). And if you commit frequently (e.g., every day), finding the error won't be very difficult, as there are typically only a small number of changed lines between any given pair of commits.
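
As an aside (this goes a bit beyond what I described above, but it's the same idea automated): git can do that up-stream / down-stream search for you, as a binary search, with git bisect:

$ git bisect start
$ git bisect bad                          # the current commit is broken
$ git bisect good [good_commit_hash]      # a commit you know still worked
# git now checks out a commit halfway in between; run your test, then tell it:
$ git bisect good                         # (or: git bisect bad)
# ...repeat until git reports the first bad commit, then clean up:
$ git bisect reset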

And remember, none of the above requires a server, or any kind of "distribution" of the repository... we are still just talking about a "git'ed" folder. With a git'ed folder, you get all the advantages of version control, without any of the maintenance "trouble" typically associated with centralized version control systems (svn, cvs, etc.).

It's true that if you use version-numbered or dated folders to back up your code once in a while, you could achieve roughly the same effect as in the example above (using the "diff" tool). But it will never be as easy, as versatile, or as fine-grained as with version control. And it is unlikely that you will ever even try, because of how much trouble it is and how bad the signal-to-noise ratio can be (you get flooded with insignificant changes, with the erroneous change buried among them). And if your "code history" is too coarse (too much distance between snapshots of the code), or your means of exploring and inspecting the changes are too difficult or inconvenient, then that completely defeats the purpose of keeping the history at all, because you will never refer back to it. And that's why these manual methods of keeping the history (tagged folders or tar-ballz) are just a waste of time: they might make you feel good (like "hey, I'm being cautious!"), but they don't really serve much of a purpose at all.

That was a fantastic example of use! I may also add that versioning can be achieved with tags.
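
For instance (a minimal sketch; the tag names are just examples):

$ git tag -a v1.0 -m "First stable version"
$ git tag                    # list existing tags
$ git checkout v1.0          # inspect the project exactly as it was at v1.0
$ git checkout master        # and come back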

iamthwee:

Hmmm, maybe I'm a bit biased, but my first experience with version control was bitbucket on a mac.

Why did I choose bitbucket? Because it allows you to sign up and create private repos: stuff I don't want the general public viewing. Second, I wanted a cloud-based solution, so I'd know my code was safe in the cloud; otherwise I'd have to do backups onto usb.

Issues... there was no good client for bitbucket on the mac at the time. As a result I spent a lot of time in the terminal. Not necessarily a bad thing, but having to remember all those commands interfered a bit too much with my workflow.

Additionally, because it was cloud based it seemed to slow everything down.

The other thing I didn't like was the roll backs. To get to an old piece of code you had to do a roll back... then a roll forward.

I might want to amend bits of code in an old folder but use stuff in a newer folder. Do I really want to issue a command to download/change the folder, then take it out for testing and after testing recommit it?

Hmmm, maybe... but all these things greatly slowed me down. The other annoyance is that one of my machines is a mac and the other is linux.

I work on both equally, and using git in both environments was a ballache; so much so, in fact, that I just quit and opted for what I'm doing now.

One folder, with the different revisions of the project in other folders. Each folder is newer than the last, with small chunks of the code amended in each one, so my 'rolling back' happens in smaller increments and is therefore more fine-grained.

This works perfectly for me, without any fuss.

So, to cut a long story short: yes, I understand the benefits of version control, but don't assume it applies to everyone and all circumstances.

Issues... there was no good client for bitbucket on the mac at the time. As a result I spent a lot of time in the terminal.

As far as I'm concerned, there will never be good GUI clients for version control, because nothing beats the speed, effectiveness, and power of terminal-based interaction with it. Then again, I hear that GUIs for Git have gotten a bit better lately, but I don't know.

Additionally, because it was cloud based it seemed to slow everything down.

This is really odd; that statement is extremely surprising to me, because I have absolutely no idea how bitbucket (or github) could "slow everything down". All your coding is done locally, on your local folders and files. There is absolutely nothing running in the background or connecting with the "cloud" (i.e., a server) as you are coding. In fact, you don't even need an internet connection at all. You'll have to explain this a bit more, because I really don't understand what this means or how it is even possible. I suspect there is something wrong with the way you used it, but I can't imagine what.

To get to an old piece of code you had to do a roll back... then a roll forward.

Maybe your limited GUI client required you to do that, but it is certainly not necessary. In git, if you just want to get an older version of a particular file, you just do:

$ git show [old_commit_hash]:path/to/file.txt

or, to save that to a file:

$ git show [old_commit_hash]:path/to/file.txt > file_old.txt

or, like I showed before, to get a diff-file:

$ git diff [old_commit_hash]..HEAD -- path/to/file.txt > file_diff.txt

Or, revert a single file to a previous commit:

$ git checkout [old_commit_hash] -- path/to/file.txt
$ git commit -a

I mean, the options are limitless and very fine-grained. And the commands are very intuitive, once you get a basic feel for it.

Do I really want to issue a command to download/change the folder, then take it out for testing and after testing recommit it?

That does not sound right. Again, your idea of how to work with a version control system seems very wrong. First, when you do a check-out or a branch switch, you are not really "downloading" anything; you are just applying diffs (kind of like "undo/redo" buttons), and everything is easily reversible. In other words, it's a simple flip of a switch, not some sort of massive operation. Second, you never have to "take out" anything for testing, because your folder is your sandbox: that's where you work. You flip things forward and back, test in between, revert, do whatever, and commit when you're done and it works. You are never supposed to copy things around or create a "testing" folder on the side, or anything like that. If you add the right patterns to your ".gitignore" file to ignore temporaries, binaries, and any other "generated" files, then the git repo stays clean while you do all your testing / compiling / etc. within that folder. That's how you are supposed to do it.
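
For example, a minimal ".gitignore" for a C++ project might look something like this (the patterns are just typical examples, to be adapted to your own build setup):

# build output and binaries
build/
*.o
*.exe

# editor and OS temporaries
*~
*.swp
.DS_Store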

And yes, after testing, you need to commit. As in, when the tests are successful and your code is working, you are supposed to take a snapshot of it (i.e., a commit). That's just good coding practice, whether you use version control or not; nothing is going to take away that responsibility. And it could not possibly be easier than it is in Git (just type $ git commit -a and write a short description of the change).

The other annoyance is that one of my machines is a mac and the other is linux.

Well, those two are not very different. One neat thing with Git, for example, is that you can set it up on Windows to replace the new-lines in all files with Windows-style new-lines when it checks them out of the repository, and convert them back to Unix-style new-lines when it checks them back in (commit). These kinds of features make cross-platform development a walk in the park.
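
The setting in question is core.autocrlf, which is standard git configuration (the --global flag just makes it apply to all your repositories):

$ git config --global core.autocrlf true     # on Windows: check out CRLF, commit back LF
$ git config --global core.autocrlf input    # on Linux / Mac: leave checkouts alone, commit LF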

Yes, I understand the benefits of version control, but don't assume it applies to everyone and all circumstances.

I can agree with that; I'm sure some circumstances lend themselves better to version control than others. But the way I see it, the three common situations are: (1) the project is very small and short, with one developer; (2) the project is medium-sized, with mainly one developer and maybe a couple of others on occasion; and (3) the project is "serious", with a team of developers. In the first situation, you don't need version control or any other kind of "history tracking". In the second situation, you might not need a server-based version control setup (or a github / bitbucket repository); you can just work locally with a git'ed folder. And in the third situation, you absolutely need a full-blown version control setup, with guidelines and all that.

Personally, I have a hard time seeing anything that could be squeezed in between cases (1) and (2) that would mandate this kind of "poor man's version control" based on folder snapshots. But maybe I am biased, because I find version control so useful and easy to use that I don't see why one would deliberately go for the "stone-age" version of it.

iamthwee:

Additionally, because it was cloud based it seemed to slow everything down.

Sorry, I didn't explain myself properly. The 'slow down' has nothing to do with git itself, but with the upload into the cloud, i.e., with the hosting service (bitbucket). I'd have to wait maybe five minutes to upload a big project to the cloud...

Perhaps it is totally wrong to make this comparison if we are just talking about git/version control, and not actual apps or web-based services.

iamthwee:

Maybe your limited GUI client required you to do that, but it is certainly not necessary. In git, if you just want to get an older version of a particular file, you just do:

$ git show [old_commit_hash]:path/to/file.txt

Again, maybe it's just me and my bad habits, but I would never roll back just a file; I'd rather roll back the entire project. I don't know, it's just how I work.

iamthwee:

Well, those two are not very different. One neat thing with Git, for example, is that you can set it up on Windows to replace the new-lines in all files with Windows-style new-lines when it checks them out of the repository, and convert them back to Unix-style new-lines when it checks them back in (commit). These kinds of features make cross-platform development a walk in the park.

I'm a bit confused about this one. Are you talking just about the quirks with unix and newlines? I'm not. I'm talking about committing the same changes to a single project on linux and then on mac.

Basically, doing some work on the mac, then doing some work on linux, while making sure both end up identical. I couldn't find an easy solution... but I'm all ears.

So I have my git folder set up on the mac. I do some work on it, back it up to usb, go home, open my linux machine, and overwrite the git folder on linux with the one from my mac usb. But my mac usb file system isn't compatible with linux, or at least can't be written to.

There is another headache.

There are now cloud-based version control systems. I've never used one, and probably wouldn't for sensitive programs such as government classified work, but for non-sensitive stuff it would be pretty handy, because you wouldn't be restricted to accessing the data from a single computer or usb drive. So if you had a team of programmers scattered all over the world, a cloud-based version control system would be ideal, IMO.

But my mac usb file system isn't compatible with linux, or at least can't be written to.

Well, that's easy to solve. To read/write Mac file systems, you just need to install the "hfsplus" package on Linux... normally, I think it should be there by default. And if you don't have the correct file permissions set up on your folder, that's easy to fix too. I'm also a bit baffled as to why you would use a Mac file system (HFS+) on a USB stick... why not a more portable file system like NTFS?

Basically, doing some work on the mac, then doing some work on linux, while making sure both end up identical. I couldn't find an easy solution... but I'm all ears.

So I have my git folder set up on the mac. I do some work on it, back it up to usb, go home, open my linux machine, and overwrite the git folder on linux with the one from my mac usb.

That's not the way you should do it at all. You should never "overwrite" folders like that, not when you have git set up on them.

I have a similar work-flow (except it's Linux on both ends) between my work computer (at the office) and my home computer. Here is how I normally do things. I'll explain it in detail, because it can be useful to people.

Initial Setup (which may seem long, but it's done just once, and it isn't much work, just a few quick commands in a couple of places):
First, I create a folder for my project on one computer:

$ cd /home/computer1
$ mkdir new_project

Then, copy my project files (let's say I had some stuff done without version control):

$ cd new_project
$ cp -R /home/computer1/old_project/* ./

And then, turn the folder into a git repository:

$ git init

And then, commit the initial stuff I have in there:

$ git add .
$ git commit -m "Initial commit. Copied files from old_project folder."

Then, I plug in my external HDD (or USB stick), and I "clone" the git repository:

$ cd /media/externalHDD
$ mkdir new_project
$ cd new_project
$ git clone file:///home/computer1/new_project/ ./

And rename the "origin":

$ git remote add computer1 file:///home/computer1/new_project/

Then, I would setup an additional "remote" on computer1's repository, with this:

$ cd /home/computer1/new_project
$ git remote add externalHDD file:///media/externalHDD/new_project/

And, then, when I get to the second computer, I would plug in the external HDD, and then, clone the repository again:

$ cd /home/computer2
$ mkdir new_project
$ cd new_project
$ git clone file:///media/externalHDD/new_project/ ./
$ git remote rename origin externalHDD

And, finally, I would setup an additional remote on the external HDD:

$ cd /media/externalHDD/new_project
$ git remote add computer2 file:///home/computer2/new_project

And that's it for the setup..

The typical work-flow (daily):

After working all day on my project, in my office (on "computer1"), then I would do this:

$ cd /home/computer1/new_project
$ git add .
$ git commit -m "Today, I did X, Y, and Z."

And, then, I would plug in my external HDD, and do this:

$ cd /media/externalHDD/new_project
$ git pull computer1 master

Unmount / unplug my external HDD, and then, leave the office, and come back home, on to "computer2". And then, I would just plug in my external HDD and do this:

$ cd /home/computer2/new_project
$ git pull externalHDD master

And then, I can continue working from computer2... and I repeat the process in the other direction the next morning... and so on. Well, maybe not every day, of course; every time you do work on the project.

Now, if the two computers are not running the same OS, I don't see how that makes any difference at all. I could see how it could be an issue if you were constantly doing a full backup or full overwrite of the directories (because that also overwrites the git configuration in the .git folder), but that's not something you are ever supposed to do. Initially, you clone the repositories between your multiple places (HDD, computers, or servers); after that, you just commit code, and pull and push between the repositories to synchronize them. The pulling and pushing is done in a way that respects both the "ignored" patterns (temporary files, build directories, binaries, OS-specific stuff, etc.) and any file-system or other OS-specific differences. And, of course, pulling and pushing only copies the differences between the revisions, instead of a full copy of the files.

I haven't done this between Mac and Linux, but I have done it between Windows and Linux, and it works flawlessly. And if anything works flawlessly between Windows and Linux, it must surely work pretty well between Mac and Linux (which are nearly identical operating systems, under the hood).

Again, maybe it's just me and my bad habits, but I would never roll back just a file; I'd rather roll back the entire project. I don't know, it's just how I work.

Now, I'm getting confused... in your previous post, as I understood it, you were complaining about having to roll back the entire project when you just wanted to retrieve a piece of it (i.e., a file or sub-folder, I presume). And now you seem to say that you prefer that to rolling back on a file-by-file basis... In any case, you can do both just as easily in Git, so whichever you prefer doesn't really matter.

The 'slow down' has nothing to do with git itself, but with the upload into the cloud, i.e., with the hosting service (bitbucket). I'd have to wait maybe five minutes to upload a big project to the cloud...

Obviously, if the server is really slow or your internet connection is bad, then any pushing and pulling you do with the server is going to take a bit of time, although 5 minutes seems very long. Also, pulling and pushing with the server is probably not something you should be doing every day. Normally (at least, in my kind of projects), writing a "feature" or a significant addition to a project takes a few days to a couple of weeks, with lots of testing to make sure it works well. During those days or weeks, I normally do local commits every day or more, but I don't upload those changes (through a push / pull) until I have completed a whole "feature" or a whole chunk of code. So, even if you have a really slow server and it takes 5 minutes to push your latest changes to it, that cannot possibly be such a big problem if you only do it once in a while.
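
In other words, the day-to-day commits are local and instant, and only the occasional synchronization ever touches the network (the remote and branch names here are just the usual defaults):

$ git commit -a -m "Progress on feature X"    # local, instant, no network involved
# ...days later, once the whole feature is done and tested:
$ git push origin master                      # the only step that talks to the server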

And then, of course, this is not a problem with version control; it's a problem of having a good internet connection and a good server to rely on. By the way, you don't need github or bitbucket to set up a Git server... any remote computer with ssh/RSA capability will do. For example, I set up a few Git servers on my lab's dedicated storage space on the university's network (just a Solaris file server), which was just a matter of doing more or less the same as above, but remotely on that file server. It's as easy as that; no magic involved.
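
For instance, a bare repository on any ssh-accessible machine acts as the "server" (the hostname and paths here are made up):

$ ssh user@fileserver 'git init --bare /srv/git/new_project.git'
$ git remote add fileserver ssh://user@fileserver/srv/git/new_project.git
$ git push fileserver master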

Wow, mike_2000_17 is handling all of my git elitist work for me, thank you sir! :D

Lots of good information being exchanged in this thread.

I hadn't looked too deeply into version control before; the closest I have come to it was using Tortoise SVN on a small office LAN to maintain all the procedural documents. With all the updates and changes being made within the company, people were always wondering which version of, say, a Word document was the most up-to-date. Beyond that, we never needed anything much more sophisticated. But it is good to know what is out there and what it can do.

The main problem I have with git is the same problem I have with anything "cloud": your data is out there, out of your control, with nothing but the promise of the current owner of the server/service that they won't take it for their own and sell it, and/or shut you off from your own data, as a guarantee of future access.

@jwenting Have you considered hosting your own Mercurial server?

I have in the past hosted my own CVS and SVN servers, and those worked nicely for me. When and if I once again have need of a version control system, SVN is the most likely candidate.
Good tool integration with the rest of my environment makes it a logical choice.
Already knowing how to set it up helps too :)

Mercurial support in my chosen tools (including my brain) is severely lacking.

@jwenting It seems you misunderstand what version control (or git, for that matter) is if you equate it with the cloud. The only time "your data is out there" is if YOU put it out there, on Github, Bitbucket, or various other services.

Exactly; talking about version control is completely different from talking about cloud storage or anything of the sort. You can use version control any way you want: just locally on one computer (e.g., a "version-controlled folder"), across a small local network, across a company network, to a private server (e.g., a VPS) somewhere, or using one of those hosting sites like github / bitbucket / etc. But you don't have to; that's important to understand.

A git repository is, in fact, nothing more than a folder that contains a hidden record of the entire history of revisions (compressed snapshots and deltas, stored in the .git sub-folder) dating back to the creation of that repository. That's all it is. All the "magic" happens in the way the git programs can manipulate and present that data to you, and how they can synchronize (push/pull) between different clones of the repository. Where you choose to put those repositories is entirely up to you. Basically, git can deal with repositories on the same computer or somewhere remote that is accessible by ssh (or https).

The buzzword "cloud" in this context is pretty misleading. To me, "cloud" is just a buzzword to sell the idea of storing stuff on a server to people who don't know what a server is. But to people who know what a server is, then clearly "the cloud" is just another word for "a server". And (savvy) people have been using servers to store their files for decades. They just call it "the cloud" since they made cutesy apps for the average-Joe to be able to easily store data on a server. And, I agree with jwenting, I do not trust those cutesy apps when it comes to privacy and all that, and therefore I don't like "the cloud", but this is different from using a server that you control or trust to host a version control repository with only secure channels (e.g., ssh / https). Using "the cloud" and using a server are apples and oranges because of the level of control you have over one and not the other.

iamthwee:

Mike, the process you described has a lot of steps.

I will check out that thing for writing to mac-formatted usb drives on linux. Obviously, the reason I'm using that format is that I use a mac: that's what the usb drive needs to be formatted as in order to write to it there. But the linux thing looks promising.

I'm still not convinced, and that list of steps to clone a repo and work at home says it all. I still think it is far easier to move through directories both forwards and backwards without git (for me personally).

Maybe another day in the future I'll pick it up again.

I said the same thing about MVC but now I won't use anything less!

@mike - The reason I prefer an IDE like Visual Studio over managing code and makefiles by manually creating config files and compiling/linking via the command line is that I prefer coding to managing code. I looked at your example of git on the previous page, and my first (and second and third) impression was that I had been dropped back into 1980. When I create a VB project, everything is created in a project folder. When I get it to a point where I want a checkpoint, I type "archive projfolder" and it creates a zip file named

yyyy-mm-dd hh-mm projfolder.zip

I don't have to recall multiple command-line options and parameters. Granted, I don't manage any massively large projects, but until there is an IDE-friendly front end, I don't see the point in adding still more overhead to my coding.

When I get it to a point where I want a checkpoint, I type "archive projfolder" and it creates a zip file

Does VS do that for you, or are you using something like WinZip? I looked all over a VS C++ project and couldn't find such a menu item.

but until there is an IDE-friendly front end, I don't see the point in adding still more overhead to my coding.

Most decent IDEs have support for version control. In my IDE (KDevelop), you can view your code under version control (the "Review" view) to see the history of changes and things like that (basically, most of what you can do on the command line) through the GUI. There is also a "commit" button to commit your code after a session of coding. The IDE automatically detects whether your folder is under version control (and under which system), and adds those functions to the GUI if it is. I personally don't use that much, because I'm more productive on the command line, but if you're a GUI-monkey then you have that choice too.

Even Visual Studio has GUI support for git.

The reason I prefer an IDE like Visual Studio over managing code and makefiles by manually creating config files and compiling/linking via the command line is that I prefer coding to managing code.

Weird. I use command-line tools, version control, and build-systems for exactly the same reason that you like using an IDE. I like IDEs that can natively use the tools that I use (such as KDevelop), because then the IDE works for me; I don't work for the IDE. For example, I like Git (for many of the reasons mentioned here) and I like cmake (a robust and flexible cross-platform build-system), and when I load up my IDE, it just notices "ah, you like cmake and git, alright, I'll work with you on that... you're the boss". Whenever I use Visual Studio, I feel that the relationship is the opposite; it says "oh, I see you're trying to work on a non-VS project (or the wrong version), that's bad, you're a naughty programmer, and as punishment, you'll have to spend the next hour or day re-creating all your configurations in a VS-friendly format... oh, and you'll also have to re-build all your external dependencies... that'll teach you! you'll think twice the next time you're tempted to deviate from the MS software stack!". Of course, I have enough mojo to overpower Visual Studio and bend it to my will, but I shouldn't have to.

I don't have to do work to manage my code, because my build-system automates the managing of my code.
I don't have to do work to manage the revision history of my code, because git automates that.
That's the point. Of course, version control and a build-system will require input from me here and there, like writing the commit messages (one sentence) or adding a build target in the config file, but, overall, that work is minimal compared to anything else I've ever had to deal with. And those systems are so flexible that they allow me to do things that you could never do otherwise (and I'm far from being an expert in Git or cmake, so I can't even imagine the possibilities that I haven't explored yet).

Does VS do that for you

It was a vbs file I threw together:

archive.vbs



'
'  Name:
'
'    Archive.vbs
'
'  Description:
'
'    This script will (recursively) archive the given directory. The zip file will be named
'    as YYYY-MM-DD HH-MM dirname.zip.
'
'  Usage:
'
'    archive dirname [exclude]
'
'    dirname:      the name of the directory to archive
'    exclude:      file name (or pattern) to exclude
'
'  Example:
'
'    Let's say you are going to make code changes to one or more modules in the weather
'    application in f:\apps\weather. You want to archive the existing code in case it blows
'    up after the changes are made. You want to be able to restore the application files,
'    but you don't want to back up or restore the log files. Go to the f:\apps
'    directory and type
'
'    archive weather *.log
'
'    All files (in weather and below) will be archived, except for files of the form *.log.
'
'  Audit:
'
'    2006-01-12  jd  changes to allow use from context menu
'    2001-07-30  jd  added exclude option
'

set wso = CreateObject("Wscript.Shell")
set fso = CreateObject("Scripting.FileSystemObject")

set arg = Wscript.Arguments

if arg.Count >= 1 then

   if arg.Count >1 then
      exclude = " -x " & arg(1) & " "
   else
      exclude = ""
   end if

   currdate = Now()

   path = fso.GetParentFolderName(arg(0))
   fold = fso.GetFileName(arg(0))

   archive = Year(currdate)                        _
           & "-" & Right("0" & Month(currdate),2)  _
           & "-" & Right("0" & Day(currdate),2)    _
           & " " & Right("0" & Hour(currdate),2)   _
           & "-" & Right("0" & Minute(currdate),2) _
           & " " & fold & ".zip"

   ' BuildPath inserts the path separator that plain "path &" concatenation was missing
   archive = fso.BuildPath(path, archive)

   wso.run "zip -r " & """" & archive & """" & " " & """" & arg(0) & """" & exclude

end if

@mike - I can see how you'd like that, but since I retired I've had the luxury of only developing (and maintaining skills) in one environment: Visual Studio. The only configuration I've ever had to do after installing is setting the default folder for my projects. Having said that, I'll have a look at git for Visual Studio. Merci for the link.

@NardCake I'm fully aware of what version control is, thank you very much. I've been doing it for 20 years now, longer than many of the visitors to this site have been alive.

Git stores your data off-site, in what's these days called "the cloud". And that's a problem. Data security is vital for me, and that includes controlling my data, my source code.
Yes, you can buy a license to host your own repository, but there are plenty of other options out there that are just as good (like SVN) where there's no such cost involved.

From what I understand, git allows you to set up a repository on your local system. Storing it in an off-site repository is an option, not a requirement.

A further comment on command line tools for version control...

Having to use command-line tools with multiple arguments and switches is like requiring a carpenter to mill his own lumber every time he wants to build something, or forcing a bartender to distill his own vodka. Sure, it's good to have had the experience of using the command line to compile/link, or of building a GUI using only a text editor to create and configure the controls, but once or twice is sufficient to get an appreciation for the guts. After that, I prefer the IDE to generate the necessary code.
