I have been programming on Windows with C++ and I am tiptoeing into Linux (mainly because of its more up-to-date C++ support and also because it is open source).
I want to do multi-platform programming.

The programming part is easy: I don't need help there.

The part I need help with is the choice of a machine setup.

Possibilities:
1) separate computers with different OSes joined in a network, sharing code files. Takes a lot of physical space but may not be as bad an idea as it first seems
2) separate drives with different OSes in the same computer, chosen at boot time (can only program in one OS at a time)
3) dual-booting (same as 2: can only program in one OS at a time)
4) virtualisation: that sounds great in theory, but my many years of experience in computing lead me to believe it's probably too good to be true.

Virtualisation:

I tried to use VirtualBox, but after a day of trying to get a virtual Windows 7 running on Linux I had to give up. As far as I can read in the forums, I am not the only one unable to do so: we are all stuck on the "Fatal: no bootable medium found" error, even though there is a brand-new factory W7 disc in a perfectly accessible and bootable CD/DVD-ROM drive (exactly like the others complaining in the forums). Myriad pieces of advice on the subject exist on YouTube and in forums, and none of them solves it (this is soooo typical of Linux).

So:

1) Has anybody tried it the other way: VirtualBox on Windows 7 loading Linux (in theory that should probably work better, since Windows is known to be finicky)?

2) Also, even if it works, I would not be able to program on Mac OS X unless I use a Mac (licensing problem). So if I ever choose a Mac for my next computer, does the VMware stuff work well for Linux and Windows, or will I experience myriad incompatibilities and spend other days and weeks of frustration trying to make it work?

I am running Windows 7 and if I need another OS I use VMware Player to run the different OS. Once it is running I normally RDP into it for a cleaner look and feel.

When doing cross-platform work, there isn't really a need to constantly use both platforms or move between them all the time for the actual coding, only for compiling and resolving issues. You code on one platform (whichever you prefer), and frequently check that it compiles on your other target platforms and that your unit tests pass. Virtualization or dual-booting will do just fine; even remote desktopping works well to just make sure things compile and pass the test suite (there's nothing easier than SSH'ing to another computer, running the build+test script, and continuing to code while you wait for the results).
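
Just to illustrate that remote check (a sketch only: the hostname and script name are placeholders, and it assumes key-based SSH login is already set up):

ssh builduser@other-box 'cd ~/myproject && ./build_and_test.sh' > remote_build.log 2>&1 &
tail -f remote_build.log    # keep an eye on the remote build/test output while continuing to code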

The main challenges are not in the setup of the computer(s), but rather in the methodology used while coding.

First, you need a decent way to easily transport your code-base from place to place, whether it's between a Linux-friendly folder/partition and a Windows-friendly folder/partition, or between two distinct computers, or onto a third-party computer, or onto the computer of a collaborator / contributor.

Second, you should keep track of changes, to be able to see what changed between the last successful cross-platform build and the current one (if it failed). You can solve both of these problems by hosting your code-base on a server like SourceForge or GitHub (I highly recommend GitHub); it's very convenient to commit all your daily changes to the server and be able to pull them from any other machine, anywhere (and it even does the Windows / Unix new-line conversions automatically).

Third, you're going to need a cross-platform build system, so that you don't have to separately (manually) maintain one build configuration per OS or per IDE (e.g., a Visual Studio project file + a makefile + a qmake configuration + etc.). Unsurprisingly, I highly recommend cmake, which has become the de-facto standard for this purpose.

Fourth, you need to get into the habit of compiling on as many compilers and systems as possible, and doing so regularly, otherwise you end up having to solve a gazillion new problems every time (well, I'm exaggerating, it's not that bad).

Fifth, if you've never done cross-platform coding before, you will learn quite a few things about what is really standard C++, how non-conformant all compilers are (especially Microsoft compilers), and how to amend your coding style to avoid those hurdles.

And finally, the most important thing is to isolate all your dependencies (external libraries) as much as possible (with the exception of the standard C++ library and the Boost libraries). Put all platform-specific or dependency-specific code in isolated source files (the compilation firewall is an essential idiom for that).
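
To make the git + cmake part concrete, the daily cycle looks roughly like this (a sketch only; the commit message, branch, and test setup are placeholders for whatever your project actually uses):

# On the main development platform, at the end of a session:
git add -u
git commit -m "describe what changed"
git push origin master

# On the other platform (e.g., a Windows box with a git + cmake toolchain installed):
git pull origin master
mkdir -p build && cd build
cmake ..            # generates the native build files (makefiles, a Visual Studio solution, etc.)
cmake --build .     # builds with whatever compiler cmake configured
ctest               # runs the unit tests registered with add_test() in the CMakeLists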

1) separate computers with different OSes joined in a network, sharing code files.

You don't need a network for sharing the code files, in fact, you probably shouldn't do that. If I understand, you mean a network-shared folder with your code files. That's not a good idea. If you use a version control system such as Git, SVN, CVS, or Mercurial, you can automatically keep multiple copies of your code folder synchronized, and it also serves as a backup system (which is important!). You can also get the same effect with a simpler data-synchronization program like rsync. And, even better, host the code-base on a web server, then it is accessible from anywhere, no need for a permanent network setup.
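
For example, either of these gets you synchronized copies without a shared network folder (the machine names and paths are just examples):

# Plain one-way synchronization of a working copy to another machine:
rsync -avz --delete ~/dev/myproject/ user@other-box:~/dev/myproject/

# Or, with a hosted repository, any machine can get a full copy on demand:
git clone https://github.com/yourname/myproject.git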

2) separate drives with different OSes in the same computer, chosen at boot time (can only program in one OS at a time)
3) dual-booting (same as 2: can only program in one OS at a time)

These two options are the same; whether the OSes are on separate drives or on separate partitions makes no difference. In any case, it's a pretty good option, although having to reboot is annoying when you just want to test compilation. It depends a bit on how well you want to maintain support for the second or third platform (after your main development platform). I use dual-booting, develop everything under Linux, and rarely switch to Windows because I have no real reason (beyond masochism) to maintain Windows support (and most of my external dependencies do not work in Windows either, so I can only support a reduced-feature version for Windows). If that (or the converse) is your case, this option might be good for you too.

4) virtualisation: that sounds great in theory, but my many years of experience in computing lead me to believe it's probably too good to be true.

I never tried virtualization, and I do agree that it sounds very good, in theory. I'm not sure how well it can work for the purpose of compiling on different virtualized platforms. But it's worth a try if you want to actively support multiple platforms.


To NathanOliver:
"Once it is running I normally RDP into it for a cleaner look and feel"
What do you mean?
I have tried VirtualBox on Windows with Ubuntu as a guest and it looks 100% like the real Ubuntu.

Also why are you using vmware instead of virtualbox?

To Mike:

"The main challenges are not in the setup of the computer(s), but rather in the methodology used while coding. "
It may possibly be the case for some (and I doubt it very much, as any programmer can deal with this pretty much straightforwardly), and it is surely not the case for me. As I said earlier, the coding part is not the problem and I don't need advice there. The REAL challenge (for me), the extremely time-consuming part, is the computer setup.

I spent literally MONTHS trying to make Linux work on my many old Dell Inspiron 8600s (1900x1200 screens!) without any success (including trying different distributions, different versions of distributions, and reading all the insanity of supposed "fixes" on the internet that never worked). Do you realise just how much time reading and trying these represents? It's completely insane.

So seriously, the coding is a breeze in comparison.

So what I want here, as I said, is not advice on the programming aspect, but advice on the SETUP of machines. You seem to like dual-booting with a shared folder/partition, but it is a hassle to reboot and not have both OSes available at the same time.

So I am keenly interested in virtualisation.

Trying different virtualisation software, on different machines, with different OS combinations, with different bug fixes working (or most likely not working), can easily take another month (probably 2 or 3). So, really, why reinvent the wheel? Let's just digest the wisdom of battle-weary survivors lurking out there, such as perhaps NathanOliver.

I am also interested in knowing what actual make of computer works best.

The reason I use VMware Player is that we use VMware at work, so I am more comfortable with it. When I said that I RDP into the virtual machine, that means I do a remote desktop into the machine, so it is like I am just remotely accessing the machine instead of being in the player. I'm not sure what kind of challenges you have been facing, but I haven't had any issues starting a new virtual machine from an ISO file to create the OS. I do have an i5 processor with 16GB of RAM in my machine, so I can run my main OS plus 2 virtual machines at the same time and have a pretty seamless experience.

With VMware Player all I do is tell the software what specs I want for the virtual machine and then the location of the ISO file that has the OS installation on it. I tell it to create it, and the player opens up a screen to the new machine. It will read from the ISO file and install it just like if you put the disc into a brand-new machine. After the OS installs it is good to go.

" I do a remote desktop into the machine so it like I am just remotely accessing the machine instead of being in the player."

1) Ok, but what are the advantages compared to just simply using it in the player?

2) What is the OS of your host?

3) Have you tried other OS as your host?

VMware or VirtualBox - they are basically the same in that they allow you to run other operating systems in virtual machines on a single host. Myself, I like VirtualBox (open source and free) and use it on both Linux and Windows hosts. For my personal work, I run VBox on a Scientific Linux (RHEL clone) host, and for work I run VBox on a Windows 7 host. Not much difference that I can tell.

As Mike2k said, cross-platform development should be done on the system you are most comfortable with, and then recompiled and tested on the target platforms. I've been doing this for well over 20 years. Developing code that runs identically on Windows and Linux/Unix, however, means that you need to develop some good conditional macros that will "tag" classes and functions appropriately for Windows, but not for Linux/Unix (import/export, etc.). Also, if you want to build stuff that will conform to COM and such for Windows, then there are more issues you will need to resolve.

1) It is just the way I like to do it. It really doesn't make a difference. I have dual monitors, so it makes the second monitor look more like it is its own computer. Like I said, it's just my preference.

2) My host OS is windows 7 pro 64bit.

3) I have not used another OS for the host machine.

Rubberman:
I am glad VirtualBox seems to be working on both hosts (Win and Linux).
I am making progress in getting it to work.

NathanOliver:
I want to add your RDP hack to my virtualisation experiments.
Can you provide me some quick clues on the settings (including network settings) to use? (I have close to zero knowledge of networking, except that I know how to access the internet with Firefox :-) )

I also started cross-platform coding a while ago... maybe two or three months ago.

I use Windows 8 Pro and installed Linux on Hyper-V. Remote Desktop into the Hyper-V machine and set it to full screen :)

I can copy-paste my code/folders/projects from Windows to Linux and just press compile. Vice versa works too.

/*
(sorry, but I need to comment everything out, since otherwise, for reasons unknown to me, the post does not work)

Progress report

First impressions:

Note: these tests were done on a computer a few years old running a dual-core CPU. On a newer computer the results could be quite different.

Linux version: Ubuntu 12.04 PAE
Windows version: 7 Professional

A) Windows host, Linux guest (with guest additions installed)
i) VirtualBox:
Linux works correctly but is slow. Noticeable and annoying lag when dragging windows.
ii) VMware Player:
Results are exactly the same.
However, it has more polished looks and features and is easier to use.
Perhaps NathanOliver's RDP approach solves the window-dragging lag. We look forward to what he has to say on this matter.

B) Linux host, Windows guest (with guest additions installed)
i) VirtualBox:
 Quite amazing.
 Windows as a guest works without any noticeable difference compared to running solo.
 Did not try to activate Windows: I am unsure whether the activation used for running solo on the computer would still work as a guest, and I didn't want to risk invalidating the first one.
 Unfortunately I was not able to load Windows (as a guest) from a W7 system-image disc previously used solo on the same computer.
ii) VMware Player:
Not tried

Personal decisions:
 I am fortunate that my dual-core laptop comes with a base that has a tray for a hard drive.
 I will be using 2 separate drives, swapped with the tray: 1 W7, 1 Linux. This is different from, and preferable to, multi-booting from different partitions on the same drive, since the latter introduces all kinds of difficulties created by boot loaders when reinstalling one or the other OS, difficulties completely avoided by using 2 separate drives.
 The W7 drive will be used mainly for the transition, reorganising files for transfer to Linux, and also as a fallback if my experiment with Linux unfortunately doesn't turn out as expected.
 Programming will now be done in Linux: indeed there are much better tools there (a fabulous choice of IDEs, from the quick-and-easy, lovely Geany to the full-featured, better-than-Visual-Studio Code::Blocks, not to mention the promising Anjuta, especially geared (at first sight) toward GTK user interfaces). The immense flexibility and possibilities provided by:
 a) superb modularity
 b) understanding the innards (to fine-tune performance) and the ability to modify/combine parts, thanks to the open-source aspect
 c) the stability over time (no need to relearn every couple of years how to redo the same things in different manners, as forced on us by Microsoft for marketing/sales purposes)
 d) the complete control of the computer + the privacy aspect. I just read on Microsoft's site that they were selling information about my usage of my computer to no less than 114 companies without my consent or knowledge (it is not pleasant to have your computer running an operating system from an OS company run by 2 psychopaths)
... 
 all this proved to be simply irresistible, despite the considerable annoyance of the huge learning curve of Linux (hopefully only gone through once for a long time of usage).
 Once the conversion is completed and once I know Linux sufficiently well not to be dangerous, virtualisation inside Linux might be used to replace the separate drives and might be useful for occasional porting (though I am moving more and more toward the idea of simply using Linux for everything)

Next actions:
  completely clean the Linux drive and reinstall everything, since all the experiments probably left lots of garbage, potentially a source of future problems
  reinstall the C++11 compilers Clang 3.3 and GCC 4.8
  install IDEs
  etc.

Next question to knowledgeable people out there:
  If I install gcc 4.8, and then later I install other software (automatically with ubuntu software manager or manually), will this wreck everything (at least in some cases) as (possibly) the software downloaded is written for an *earlier* version of gcc (4.6) and might be incompatible with gcc 4.8 and crash on compilation or later? 
  If so, should I instead install everything I think I will need *before* installing gcc4.8? Would that work or will I be facing other problems?
  Or is there nothing to worry about here?
*/        

If I install gcc 4.8, and then later I install other software (automatically with ubuntu software manager or manually), will this wreck everything (at least in some cases) as (possibly) the software downloaded is written for an earlier version of gcc (4.6) and might be incompatible with gcc 4.8 and crash on compilation or later?
If so, should I instead install everything I think I will need before installing gcc4.8? Would that work or will I be facing other problems?
Or is there nothing to worry about here?

This mostly depends on where you install GCC 4.8. It is surely a good idea to install the standard stuff first (from the repositories), and for GCC, you don't have a choice, since you need a compiler to compile the compiler!

For general guidelines, I think I explained this recently in another thread, but here it is again. In Linux (or any Unix variant), there are a number of folders where applications and development libraries get installed:

  • /usr/bin : where top-level executable programs or scripts are put (i.e., when you execute a command like gcc, it fetches the executable /usr/bin/gcc automatically).
  • /usr/share : where you find all the "application data" for all the applications installed (e.g., icon images, docs, etc.).
  • /usr/lib : where you find all the compiled libraries (static or dynamic) (more or less equivalent to Windows' "System" or "System32" folders).
  • /usr/include : where you find all the header files (C / C++, whatever else) for the libraries. When you install packages marked as "-dev" or "-devel", what they usually do is put a folder of header files into this directory.

N.B.: This kind of very predictable and development-oriented structure for the folders is one of the strengths of Unix-like systems (as opposed to Windows which was clearly never designed to be a development-platform, just a user-platform).

Now, the above folders are where all packages get installed (executables, data, and libs are split into their respective folders). If you want to build some software or library from source and install it on your system, outside of the package management system, there could be conflicts. In some sense the package management system is just a system to keep tabs on what is in those folders (and what package they belong to, what they depend on, etc.). If you introduce things that are unaccounted for by the package manager, you might get some issues. Usually, if the custom software or libraries do not interact with packages from the repositories, then there is no issue. If they depend on some packages from the repository, then the only issue might be that in the future you will have to re-build and re-install your custom software because of a major change in one of the packages it depends on; however, this is very rare, because dev-package developers do not mess with the API or ABI unless there is a major issue to resolve (or the library is still at an experimental stage).

If, however, the custom software or libraries are in direct conflict with existing packages (e.g., GCC), then you have to be a bit more careful. The general advice is that you should use a different set of subfolders, located in /usr/local (i.e., Unix-like systems have the folders /usr/local/bin, /usr/local/share, /usr/local/lib and /usr/local/include). This folder is sort of your own little local playground for these sorts of things, and you can configure the system (if it hasn't already been configured this way by default) to give precedence to things found in that local playground over things found in the system folders. So, things appear as if they were system software or libraries, but you can be sure that there won't be conflicts with package management (i.e., custom-built files being overwritten by package updates, etc.).
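
As a generic sketch of what that looks like for a typical autoconf-style library (the library name is made up, and the exact configure options vary from project to project):

# Build and install into the local playground instead of the system folders:
./configure --prefix=/usr/local
make
sudo make install
sudo ldconfig                    # refresh the dynamic linker's cache so the new library is found

# /usr/local/bin normally precedes /usr/bin in the search path; you can check with:
echo $PATH
ldconfig -p | grep libmylib      # shows which copies of the library the dynamic linker knows about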

In the case of GCC, the GNU people have set in place mechanisms to deal with this, after all, if you are a developer of GCC, you'd probably end up with a ton of custom versions of GCC installed. So, if you go to a terminal, and move to the folder /usr/bin and type the command ls -l | grep gcc, you will see, among other things, the following:

/usr/bin$ ls -l | grep gcc
lrwxrwxrwx 1 root   root           7 Sep 21 12:25 gcc -> gcc-4.7
-rwxr-xr-x 1 root   root      275952 Jul  2  2012 gcc-4.5
-rwxr-xr-x 1 root   root      357344 Sep 18 19:22 gcc-4.6
-rwxr-xr-x 1 root   root      578808 Sep 21 13:33 gcc-4.7
-rwxr-xr-x 3 root   root     2115466 Nov  5 23:59 gcc-roll

As you can see, the executable called gcc is actually a symbolic link to another executable that is marked with a version number. GCC uses the same mechanism for everything else too. So, on my system, I have GCC versions 4.5.x, 4.6.x, 4.7.x, and a mystery one called "roll". This is because whenever a new version of GCC is installed by the package manager, it installs the version-tagged file (e.g., gcc-4.7) and then updates the symbolic link to make the gcc command refer to the new version. If you build GCC from source, a similar version tag will also be put on it. So, issues will arise if your custom-built version of GCC has the same major-minor version number as the one installed by the package manager, because every official update to that version of GCC will overwrite your custom installation, and that's annoying. This is why I have the suffix -roll on my custom build of GCC ("roll" for "rolling version"). So, I configure my build of GCC as follows:

~/dev/gnu-gcc/build$ ../src/configure --prefix=/usr --program-suffix=-roll

This tells the configuration tool that I want the install destination to be the system folder /usr and the suffix to be -roll instead of the default (which would be -4.7 or -4.8). The GNU people recommend that you use /usr/local instead (and it is the default), which is probably a good idea if you want to be safe.

Finally, if you want to make your custom version the default version, then it is only a matter of rewiring the symbolic links to point to your custom version. I use a simple script for that because there are quite a few symbolic links to rewire (i.e., GCC is a collection of many programs). The script is rather simple, looks like this:

#!/usr/bin/bash
cd /usr/bin
sudo ln -s -f cpp-roll cpp
sudo ln -s -f g++-roll g++
sudo ln -s -f gappletviewer-roll gappletviewer
sudo ln -s -f gc-analyze-roll gc-analyze
sudo ln -s -f gcc-roll gcc
sudo ln -s -f gcj-roll gcj
.....

And I have similar scripts to switch back to the system-installed versions. Whenever GCC gets updated by the system (which is quite rare, it usually follows distro updates), the symbolic links will be overwritten by the package manager, but it's a simple matter to re-run the script. You can also avoid rewiring the symbolic links altogether, just by configuring your build systems (IDE or cmake, or whatever else) to pick a specific compiler (setting the CC and CXX environment variables), the same way you would do it when switching from GCC to Clang or any other compiler.
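
For instance, with a cmake-based project, picking a particular compiler is just this (using the -roll suffix from my own setup above; adapt the paths and names to whatever you actually installed):

# Tell cmake which compilers to use, without touching any symbolic links:
CC=/usr/bin/gcc-roll CXX=/usr/bin/g++-roll cmake ..

# Or export them for the whole shell session, e.g. to compare against Clang:
export CC=clang
export CXX=clang++
cmake ..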

will this wreck everything (at least in some cases) as (possibly) the software downloaded is written for an earlier version of gcc (4.6) and might be incompatible with gcc 4.8 and crash on compilation or later?

No, or rather, it depends. For binaries (already-compiled software or libraries), usually installed from the package manager but possibly from elsewhere (or an earlier custom build), the version of GCC (or whatever else) they were compiled with does not matter at all. GCC has an ABI (Application Binary Interface) that has not changed in any significant way in a long while (and is largely standardized in POSIX standards), and that's a good thing: it's very stable, and all other compilers (except Microsoft and Borland) use that very same ABI (even in non-Unix environments). This means you can link to and use binary applications or libraries compiled by any older version of GCC without any issue, as long as the version of GCC is not completely ancient (before major version 3).

N.B.: If under Windows, and with the Microsoft compiler, the above is not true at all, every new version of MSVC potentially breaks the previous ABI (and often does) and requires a rebuild of everything that wants to support that new compiler version. That's why many people still use Visual C++ 2003 or 2005 versions, because they are stuck there due to an ABI issue with a closed-source library they are using. This is an issue all other compilers have solved since 2001. I think MS wants to force everyone to adopt the new versions by deliberately breaking the previous ABI, but it actually has the opposite effect, it stops people from being able to upgrade.

If you are talking about source files for software or libraries that you are trying to build on your system (which is very rarely needed), or headers for development libraries, then it will depend on the library in question. About 99% of the time, there won't be any issues. Although now, with C++11, there are some libraries that might not yet be ready to compile with a new version of GCC using the new standard; although it is backward compatible in principle, there are a few corner issues. One notable example is that Boost versions 1.48.0 and earlier cannot compile under the C++11 standard (because of an implicit deletion of a move-constructor in Boost's shared_ptr implementation), but this has been fixed in all later versions. So, that might occur, but it is quite rare; GCC is pretty stable in that regard. And if there are issues, you take them up with the library developers themselves (file a bug report), but this is extremely rare, unless you use a highly experimental library (i.e., one that isn't really ready for "prime time" yet).

Thank You very much Mike.
Immensely useful post for me.
Just what I needed at this point.
I have to take some time to digest it and relate it to how I installed my working gcc 4.8 before, to see if the steps I followed were similar to your suggestions.
Thanks again.

I tried to re-install gcc and clang on a totally fresh Ubuntu 12.04.
I was expecting it to be done easily now.
It proved to be horrendously difficult.
It didn't work as expected on the first try.
The gcc instructions didn't work.
Re-trying worked in the sense that each time another part of something would start to work. After a while g++ was installed with the correct version 4.8, but gcc remained stubbornly at 4.6, etc.
After many retries it finally worked. But I don't know why, and sincerely I don't think the folks at GCC know either (or perhaps a handful of guys that wrote the stuff).
It's truly botched work. (Yes, I know what you and they will say: I'm lazy, I'm ignorant, installation should be left to the priesthood who keep the secret instructions for themselves. Etc.)
It's really no wonder that Linux has the market share it has, even being free and even with the immense advantages (mentioned earlier) it would be blessed with if it were possible to use it in a reasonable way.

Installation instructions should be clear and they should work. Period. They should be tested before being released.

And it is not the only thing that is botched.
Here we are, about 50 years into the history of computing... and the folks who wrote the file manager used by Ubuntu can't get the logic of copying a file correct! UNBELIEVABLE.
Example:
you copy a large file
you press 'cancel' to abort the operation
guess what: the incompletely copied file is left in the directory with the same name it would have had if it had been copied correctly (no indication that it is a partial file, no '.part', no erasing)
If these people wanted to make life a misery for everyone, they wouldn't do it differently.

But, if there is no ill-will on their part and just blatant incompetence, I will offer them an incredible insight to use freely (I am even GPL-ing this advice):

Advice (GPL licensed)
"if a file copy is aborted, the tentative target file should be either erased or clearly marked as being incomplete."

Should the programmers need more explanation or advice on how to do this, I will be pleased to explain it further in even simpler terms, if that is possible.

Hello! We're in 2013!!!! Intel is spewing out incredible microprocessors, and Linux programmers can't write a file copy operation correctly! No wonder getting anything else working is nearly impossible; one can imagine all the botched corners everywhere else if they aren't able to write code to copy a file!

But, like any incomplete work, Linux is quite uneven: some things work very well, others don't, and most are in between, and there is seemingly no ethic of not releasing a product (like a file copy operation) until it is well done. (There is only a slightly better ethic at Microsoft: at least they got the file copy operation right!)

....

Back to programming.

Code::Blocks is (at first sight) wonderful... but how do you get to the C++ help file?

Are there any?

Any plan to have some?

Any plan to let you click on a keyword to get a refresher on the syntax?

I see you vented a bit there.

I will admit that GCC's build configuration is not the greatest, mostly because they use an antiquated system called "autoconf". This is a configuration tool they wrote decades ago (long before Linux), and it simply hasn't been able to keep up with all the additional complexity of modern-day operating systems (remember, in the days of its inception, the entire operating system could fit on a floppy disk (1.2 Mb)). Switching to a newer and better configuration system (e.g., cmake) would be an enormous task, and a very boring and painful one, something that open-source developers aren't rushing in to volunteer for.

As for the file copying, you must know that all the "real stuff" in GNU/Linux is done under-the-hood by small command-line utility programs (e.g., cp in this case). These programs are extremely reliable and well-behaved, you can be sure of that, and it is largely the reason why Linux/Unix/Solaris servers can run for years without any major hiccup, while Windows servers can at best run continuously for about 1 month before something "weird" happens, requiring them to go down and reboot. If you experience something quirky in Linux, like what you described, it is usually due to the GUI program, because these are typically written by novice programmers and enthusiasts that are just doing this to learn or make their mark. The GUI programs and desktop environments are one of the weakest points of Linux, since they simply haven't been the focus for most of Linux/Unix history. For this particular problem, I would suspect it is a simple matter of the GUI not refreshing the list of files after the cancellation of the copy. I use Dolphin as the file explorer, and it has the same kind of quirks (sometimes doesn't refresh its file-list when it should), you hit F5 to refresh it.

Code::Blocks is (at first sight) wonderful... but how do you get to the C++ help file?

That reminds me of something you said earlier: "The programming part is easy: I don't need help there." ;)

I'm not sure what you mean by "the C++ help file". There are resources like http://www.cplusplus.com , or http://www.cppreference.com , or the web as a whole. If you mean something like the "Help" menu in Visual Studio, then I don't know what Code::Blocks has, if anything. Remember, open-source software is largely written on a voluntary basis (with some exceptions), meaning that the more boring and long-winded a task is, and the less "cool" or useful it is as a feature, the less chance there is that someone got around to doing it. In this case, writing an entire set of documentation and tutorials on C++, just to be integrated into one IDE, is one such boring and useless task, given that there are already a few good community-driven resources for that, such as those I mentioned. For offline docs, just use wget on the reference page of your choice.
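
For example, something along these lines will pull down a browsable local copy (pick whichever reference site you prefer; the URL here is only an example):

# Mirror the site, rewrite the links for offline use, and grab the images/CSS, without wandering up the site:
wget --mirror --convert-links --page-requisites --no-parent http://en.cppreference.com/w/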

Any plan to let you click on a keyword to get a refresher on the syntax?

I don't know about Code::Blocks specifically. I use KDevelop, and it has integrated documentation in the form of Qt compressed help files (a cross-platform equivalent to Windows compressed help files, which is what pops up in just about any Help menu in Windows), it has fast code completion both for guessing what you intend to write (variable names, etc.) and for syntax completion (but I always turn that off), and it has integrated gathering and formatting of doxygen documentation, popping it up as tool-tips when you hover the mouse over a class name or whatever. These are pretty standard features for a decent IDE, but I think Code::Blocks has a bit of catching up to do here. Qt Creator or KDevelop are much better in this department.

BTW, you can get a Qt compressed help file for the C++ standard library here.


"For this particular problem, I would suspect it is a simple matter of the GUI not refreshing the list of files after the cancelation of the copy. I use Dolphin as the file explorer, and it has the same kind of quirks (sometimes doesn't refresh its file-list when it should), you hit F5 to refresh it."

Except that I am not stupid, and a minor glitch like the one you suggest would be no significant problem at all, as anybody trying to open the file would receive an error or force a refresh; it wouldn't get my attention.

No, it's significantly more serious, and I am quite a bit puzzled by your sluggishness, in that you haven't already tested it yourself on your 'Dolphin', which is probably a copy (with a different name) of the 'Nautilus' I am using.
1) It has nothing to do with refreshing.
2) In effect it leaves behind damaged files, and therefore
3) Puts at risk the data of a user
4) Can crash the system if it's a binary file
5) Can mess up any program that uses that data file to direct its actions
6) Will propagate inconspicuously to the backups, because the file name is the same but the modified date is later

That you had not realised the implications and had not tested a major programming error with such dramatic consequences indicates you must be very, very tired today.

"As for the file copying, you must know that all the "real stuff" in GNU/Linux is done under-the-hood by small command-line utility programs (e.g., cp in this case). These programs are extremely reliable and well-behaved,"

Dream, dream, dream ... I'd like to have your coolness and confidence.

But let's test it yourself as I did and see the result:

1) Use cp to copy a large file (say a 8gb file) to a usb drive.
2) See how cp returns before the file is actually finished copying (see the flashing light blink on your usb). Neat, would you normally think: the system can do the task in the background! But wait...
3) Now while the file is still copying in the background and your USB light is flashing: umount your drive!
4) Surprise, surprise: umount returns successfully BEFORE the file copying has been completed.
5) And remember not every usb flash drive or external hard drive has a light to indicate activity (typically my SATA external drives don't have any).
6) Now I let you conclude.

Normally a successful umount means you can remove your drive, except that in this case you would have another corrupted file, which fortunately your USB file system will weed out (if the file system is transaction-based and can recover...), but this is not what was expected at all: it was expected that the file had been copied and saved.

So unless there is another specific quirk of Linux here that can explain this and reassure me (and you) that something else occurred than what seems to have occurred, a quirk kept well hidden in some code comment somewhere in one of those millions of lines of code, then this gets me worried... REAL WORRIED.

And the nonchalance of all you folks on these serious matters is disturbing (as explained next).

" it is usually due to the GUI program, because these are typically written by novice programmers and enthusiasts that are just doing this to learn or make their mark. "

You can't be serious here.
Wait, we are not talking about just any part. We are talking about one of the most important parts used every day by everyone in a lot of Linux versions: the file manager.

Now this file manager is not only used by elementary school kids or your 100-year-old senile grandmother who might not understand what a command line is: it is likely used by you and many other programmers. It is (hopefully, but I doubt it very much) also used by Mark Shuttleworth, the guy behind the Ubuntu success.
And this file manager is also supposed to accommodate novice users who won't have a clue what is going on when they lose their files, some of which may be the result of years of work or be an important souvenir, and whose backups may be corrupted as explained above.

So here is the crucial matter:
1) how come neither Mark nor any other Ubuntu developer considered it essential to do a minimum of testing or verification on this particular core part of the distribution?
2) how come no Linux programmers supposedly using Linux (and I doubt very much that any serious programmer is going to use a Linux desktop if, as you said, it is written haphazardly by novice and incompetent programmers) noticed it, when it is straight in your face the moment you use it for a few days?

These simple considerations are likely to dramatically alter, once again, my platform decision and also my view of Linux.

Well, I dug around a bit and it appears there is a bug report about this problem with Nautilus here. Feel free to vent your misfortunes there, and petition to have it assigned a higher priority.

I am quite a bit puzzled by your sluggishness, in that you haven't already tested it yourself on your 'Dolphin', which is probably a copy (with a different name) of the 'Nautilus' I am using.

Why would I do that? I have no stake in this. And Dolphin is not a copy of Nautilus. I use a KDE-based distribution; things like progress bars for file transfers / copies are handled differently, and it has been flawless in my experience. But, of course, for any kind of heavy-duty file manipulation / transfer, I do it in the terminal anyway, because it's faster.

But let's test it yourself as I did and see the result:

1) Use cp to copy a large file (say a 8gb file) to a usb drive.
2) See how cp returns before the file is actually finished copying (see the flashing light blink on your usb). Neat, would you normally think: the system can do the task in the background! But wait...
3) Now while the file is still copying in the background and your USB light is flashing: umount your drive!
4) Surprise, surprise: umount returns successfully BEFORE the file copying has been completed.
5) And remember not every usb flash drive or external hard drive has a light to indicate activity (typically my SATA external drives don't have any).
6) Now I let you conclude.

OK:

1) Done.
2) The command cp does not return until the entire file has been copied, or if an error occurred, the operation is rolled back. So, I did the other tasks in parallel in another terminal, while cp was operating.
3) I executed the command: $ sudo umount /dev/sdg1 (sdg1 is my USB key).
4) I obtained the following error:

umount: /media/test_usb: device is busy.
        (In some cases useful info about processes that use
         the device is found by lsof(8) or fuser(1))

5) Don't care about the flashing light, I have the protection of an iron-clad OS.
6) I conclude that all is fine and I can sleep easy tonight knowing that 70% of all servers around the world are not going to crash tomorrow because of this.

Wait, we are not talking about just any part. We are talking about one of the most important parts used every day by everyone in a lot of Linux versions: the file manager.

You do understand that the GUI has never been a central part of Linux. By far the vast majority of Linux/Unix-like operating systems running today don't even have a GUI (or only a very limited one) because there is no use for it. Servers, routers, micro-controllers, televisions, auto-pilot chips, etc... that has always been the largest "market share" of Linux, and still is today. Most of these systems have no use for a GUI, and are almost entirely used via a shell (terminal). It has only been around 5-10 years since you could reasonably use a Linux distribution as an OS for a PC, with enough features and (GUI) usability to replace Windows. You can argue whether or not that point has been reached today. Complex GUI systems (desktop environments) are still rather young and immature in Linux, but don't mistake the GUI for the OS. Desktop environments are getting better by the minute too; I think KDE is quite mature/robust now, but I wouldn't say the same about Gnome / Unity, which I have always found a bit flimsy.

1) how come neither Mark nor any other Ubuntu developer considered it essential to do a minimum of testing or verification on this particular core part of the distribution?

Have you ever developed complex graphical user interfaces? They are extremely difficult to test because there are no bounds to the stupidity of the users, and your code has to somehow predict and handle every stupid thing someone could do. Well, the only way to do it, unless you have a big staff of paid testers, is to put it out there and collect bug reports.

2) how come no Linux programmers supposedly using Linux () noticed it, when it is straight in your face the moment you use it for a few days?

I don't know, maybe they didn't care. Some people are more passionate about certain things than others. If you're passionate about this file-copying issue, report the bug and push for a fix on it.

and I doubt very much that any serious programmer is going to use a Linux desktop if, as you said, it is written haphazardly by novice and incompetent programmers

When it comes to serious, competent programmers that I have ever come across, the vast majority (probably 9/10 or so) were using Linux primarily (many of which, without a GUI at all, just terminal windows). In my field, robotics, which is all about programming, pretty much everything is done in Linux and with Linux desktop computers. And I personally only boot into Windows once in a (long) while to compile some code, to check that it works in Windows, but other than that, I don't need it and certainly don't miss it.


You must indeed be very tired, because the test you did proves absolutely nothing except that you didn't read or understand what I was saying.
I never said that the umount could be done while the cp command is still busy running synchronously, which clearly it can't.
What I said is that after the cp command has returned (asynchronously), and the umount has been done afterwards and did unmount the drive, the data was still being written to the flash drive; and if the drive didn't have the light, one would believe it safe to remove it at that time, even though the data had still not all been transferred yet.
That's EXTREMELY different.

"Don't care about the flashing light, I have the protection of an iron-clad OS"

It's more like:
You should care about the flashing light
and
You think you have the protection of an OS which you think is iron-clad but has likely never been adequately tested in that typical desktop situation.

"Servers, routers, micro-controllers, televisions, auto-pilot chips, etc... that has always been the largest "market share" of Linux, and still is today"

And that's indeed probably the reasonable explanation: Linux is meant to be used that way. It is not meant yet for a desktop, as you correctly said: it is still being tested (well, sort of... it's more like WE test it at our own expense and wounds; it's pretty much (the desktop part, that is) like they throw out anything that sort of seems to work).
So if someone wants to use it as a desktop, it's at their own risk and peril (including severe risk of data loss) until many years (a decade?) pass and experience and error, a sort of biological evolution, has left us with something that works.
The problem with this: nobody has ever told the users about it. (In my opinion, it's as deceitful as the Microsoft practice of selling private information, and in some ways much more hurtful for the customer, especially for the time lost.)

That explains well the blatant errors in the GUI and the mismanagement of flash USB drives: it's just that since a server never deals with this, nobody has ever cared about those issues.

But a desktop user does care! And while this data loss is quite acceptable when you use your computer simply to play songs, watch videos or surf the net, it is wholly inadequate for any type of serious work. It's no wonder there is no Linux in offices: the IT managers have already figured this out.

Personal conclusion:

Server Linux is good:
for a very reliable server

Desktop Linux is good:
as a cheap way to surf the net, listen to songs, watch movies and write simple letters or do some calculations, as long as data integrity is not essential, as long as the time lost to poor-quality software, bug discovery and forum reading to solve a myriad of incompatibilities and problems is acceptable, and as long as one is willing to accept quite often sub-par drivers that usually remain sufficiently functional but at other times can make it impossible to use some part of a machine

Desktop Linux is totally inadequate:
- in a business setting where data integrity is paramount
- in any setting where time is highly valued
- when the quality of the result is primordial: video/sound processing, etc
- when an effective and safe GUI is desired

Therefore,
- it is not acceptable for me as my main programming platform
- might be useful eventually as a server

But
- I still want C++11

So next action:
- return to windows and try to see if I can make clang 3.3 work on it

otherwise
- continue to program in old C++ until Visual Studio finally catches up (but it could take a few more years; it actually took 6 years for C++03!)
OR
- wait for the new SSD memory chip technology expected to hit the market next year and buy an Apple computer using it, using Clang/Xcode and VMware Fusion for porting to Windows, with the advantage of learning BSD, which is apparently an even more solid platform (and apparently provides a more secure server than Linux).

I'll make myself even more clear:

"The command cp does not return until the entire file has been copied"
FALSE, as shown by the continuing blinking light of the USB indicating data continuing to be written AFTER cp has returned and EVEN after the drive has been successfully unmounted (and umount has returned).

"or if an error occurred, the operation is rolled back"
Except the end user won't be notified.

So if the user removes the USB after cp and umount return, but before the blinking light indicating the continuing data transfer stops, and looks only at the result of his command line, he will think the file has been copied / properly backed up to his USB, which won't be the case.

Therefore it does not matter that the main computer rolls back the operation. What matters is that the end user thinks he has a proper backup (and has no way to know otherwise, except by looking for a blinking light, and many USB keys don't have a light) but he DOESN'T.

"
. So, I did the other tasks in parallel in another terminal, while cp was operating.
3) I executed the command: $ sudo umount /dev/sdg1 (sdg1 is my USB key).
4) I obtained the following error:

umount: /media/test_usb: device is busy.
(In some cases useful info about processes that use
the device is found by lsof(8) or fuser(1))

"
The mistake of this test is that you executed the other task in parallel.
Instead you should have done:
cp your_file <yourusbdrive>/your_backupfile
... wait for cp to return... and THEN, and only THEN type:
umount /media/test_usb
... in which case NO error will show, yet the data is still being transferred, as shown by the data-transfer blinking light; the operation has therefore not truly completed, and removing the USB at this time will be disastrous.

If you still don't understand, or if you still play the ostrich and just think "this cannot be, because I was told this was rock solid", and if you are not worried and don't perform additional tests (perhaps this is a particularly bad specific USB driver, or a special problem linked to a specific USB key), then I can't really do anything for you: you would be at the level of that clueless Nautilus programmer who doesn't understand why it is important to copy a file the proper way. Fortunately I think you are more intelligent...

More info on the platform used:
file system of origin file: ext4
file system of destination file (on usb): ntfs
usb drive: FlashVoyagerGT usb3.0 64gb
The file transferred must be very large (in my case multi-gigabyte) so that the delay between the return of the commands and the end of the operation is sufficiently large to be noticeable

trantran, I just wanted to let you know that I use xrdp in Linux in order to RDP into the machine. As for the problems you are experiencing, I would have to try it on my own machine later, but I can say I have never had a problem with this.
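
If it helps, the rough idea is something like this (assuming an Ubuntu guest; adapt it to your distro and network setup):

# Inside the Linux virtual machine:
sudo apt-get install xrdp    # an RDP server that sits in front of the Linux desktop
ifconfig                     # note the VM's IP address

# Then, from the Windows host, run mstsc (Remote Desktop Connection) and point it
# at that IP address; xrdp listens on the standard RDP port (3389).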

I also wanted to let you know that if you think that something that is open source and changing daily is a stable platform, then you might need to rethink what you are using. Look at all of the SPs and patches Microsoft has to come up with, and they are a multi-billion-dollar company. I have a server here at work that is Linux, and we are running VMware on it for our virtual infrastructure. We last shut down that server in October 2010. I would say that having an entire company running for 2+ years of solid uptime is pretty stable.

After digging a bit, this appears to be a bug reported against Gnome 3.2 / Nautilus, and it has been fixed upstream. The bug was reported on Dec. 7th, 2012, and was fixed on Dec. 20th, 2012. See the history of the bug here, from Fedora's bug reports (Fedora uses the same desktop environment (gnome-shell) as Ubuntu). You might want to mirror the bug report on Ubuntu's bug tracker, in case they haven't been made aware of it. The fix is (or will be) upstream, so if you want it, you will have to sync to an upstream repository. It appears that the main cause is the newer cached operations on flash drives and external HDDs, which weren't handled correctly in the gnome-shell / Nautilus code.

Thanks NathanOliver.

Mike, I think you are again not understanding.
I was reporting 2 bugs.
a) one is the Nautilus beginner-programmer bug: he simply does not know how the logic of a file copy works. It is extremely easy to solve this by adding a single line of code: just erase the incompletely copied target file if the user cancels the operation. High school kids learning programming know how to do this. I simply can't believe Mark can leave such extremely dangerous junk in his Ubuntu and let this dangerous programmer work without supervision.
b) the delayed-write problem has nothing to do with Nautilus, which is not involved at all in this. I can think right away of many reasons this bug could occur: some relating to serialised background threads launched by the kernel that are not reported back to the command window nor known to the typical user (like you); possible USB driver misprogramming (quite possible); or another possibility is that this particular USB drive has its own embedded circuit that moves the data inside itself from a fast memory to a slower memory and that it flashes during that time (even though the data has completed its writing to the fast memory and therefore should be safe even if the USB is removed), though that hopeful and reassuring scenario of delayed writing inside the USB drive is somewhat far-fetched, but still a possibility.

Of course it would help if people would just test the situation on their own usb drive.

But I have come to realise that desktop Linux is typically not used in data-integrity-crucial activities, but only as a research playground and programming-learning environment where it is expected that the system will break quite frequently while interacting with embedded systems, robots, new microprocessors, file-system testing, etc. Data safety when copying to a USB drive is therefore pretty much irrelevant in those scenarios, and I now better understand the apathy on this matter.

Unfortunately, Mark's Ubuntu presents itself as a serious desktop replacement for everyone. That's pure folly, and these guys are reckless to do this in the current situation of Linux. It's deceitful and will hurt a lot of people and steal away their precious time.

b) the delayed-write problem has nothing to do with Nautilus, which is not involved at all in this. I can think right away of many reasons this bug could occur: some relating to serialised background threads launched by the kernel that are not reported back to the command window nor known to the typical user (like you); possible USB driver misprogramming (quite possible); or another possibility is that this particular USB drive has its own embedded circuit that moves the data inside itself from a fast memory to a slower memory and that it flashes during that time (even though the data has completed its writing to the fast memory and therefore should be safe even if the USB is removed), though that hopeful and reassuring scenario of delayed writing inside the USB drive is somewhat far-fetched, but still a possibility.

Read the bug report I linked to in my previous post. The issue has been identified and fixed, you don't need to speculate about what the source is. The issue is that newer flash drives and hard-drives have a hardware cache unit to speed up the reading / writing by buffering the data. The umount command can handle this already via a flag (command-line option to the command) that tells it to first request a sync'ing of the hardware cache (i.e., flushing the cache). Linux distributions like Fedora and Ubuntu use a desktop environment called "Gnome" which wraps those kinds of file operations and gives them additional functionality (e.g., the kind of things you need to be able to make a progress-bar appear, cancel a job, etc., i.e., the kinds of bells and whistles that aren't needed in non-desktop environments). This bug is essentially that Gnome didn't request the sync'ing before the un-mounting of the drives, leaving a possibility that the drive would be pulled (and thus, powered off) while the cache was still in the process of emptying into the persistent storage of the drive. It was a simple matter of fixing it in Gnome by setting the correct flag in the underlying call to umount.
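
In the meantime, from the command line you can protect yourself regardless of what the GUI does; a minimal sketch, reusing the device and mount point from the earlier test (adapt them to your own drive):

sync                            # block until all pending writes have been flushed to the devices
sudo umount /media/test_usb     # then unmount; at this point it is safe to pull the drive

# Or mount the drive with synchronous writes in the first place (slower, but nothing stays cached):
sudo mount -o sync /dev/sdg1 /media/test_usb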

a) one is the Nautilus beginner-programmer bug: he simply does not know how the logic of a file copy works. It is extremely easy to solve this by adding a single line of code: just erase the incompletely copied target file if the user cancels the operation. High school kids learning programming know how to do this. I simply can't believe Mark can leave such extremely dangerous junk in his Ubuntu and let this dangerous programmer work without supervision.

This is also a bug in Gnome, and I linked to the bug report earlier too. The problem hasn't been solved yet because it wasn't deemed to have enough importance. Feel free to make your views heard on that bug report and push for higher priority for it (or even fix it yourself by contributing a patch). There is no point in whining about it here.

It is important to understand that any Linux distribution is a layered cake. You have the Linux kernel, then the GNU tools (thus the full name "GNU/Linux"). Then you have the desktop environment (e.g., Gnome, KDE, LXDE, or XFCE, etc.). Then you have the GUI libraries (e.g., Qt, GTK+, wxWidgets, etc.). And finally, you have the GUI applications (e.g., Nautilus, LibreOffice, Pidgin, Google-Chromium, etc.). Any operating system is layered in a similar fashion; this is not specific to Linux, of course. But more layers also means more people to blame, so be careful where you direct your ire. And like digging into the soil, the deeper you go, the more solid it gets. But, of course, there will be bugs and omissions from time to time, especially in mid- to up-stream versions.

It is also important to understand that there is a rather wide spectrum of "stability" with Linux distributions, especially for desktop use. Down-stream (i.e., older versions), you'll find distributions like Red Hat Enterprise Linux (RHEL) and Novell SUSE Enterprise Server. These down-stream distributions are usually the product of many years of development and testing, and most of the development and a large part of that testing, in the real-world, is done via up-stream derivatives such as Fedora (from RHEL) or OpenSUSE (from SUSE) which also serve as (free) alternatives to their down-stream parents, with no service-cost (cheaper) and with more recent features / software support (and thus, a bit less stable). Ubuntu, and other Debian derivatives, tend to hang mid-stream with respect to kernel versions, but up-stream for the upper layers. And finally, further up-stream, you can sync to development / pre-release versions, or even using a rolling distribution (e.g., Arch Linux, Gentoo, etc.). So, at the end of the day, there is something for every need, but it might be hard to know where you prefer to sit, between the highly stable and the bleeding-edge.

N.B.: I doubt that Mark Shuttleworth does much coding these days. And certainly not in Gnome, which is developed and maintained by Red Hat, Inc., in addition to whoever else wants to contribute code to it (under supervision of Red Hat maintainers, of course).

N.B.2: About my earlier comment about mainly novice programmers being involved: that was only referring to GUI programs like Nautilus and the like. The lower layers are programmed by far more experienced programmers (it's their full-time job). These projects are usually backed financially, or even steered in-house, by organisations like Oracle (incl. Sun Microsystems), Cisco, CERN, Novell, Red Hat, Google, Facebook, Nokia, etc., which obviously have a very large stake in having robust Linux systems, but also have very different needs from those of desktop users. That's why issue (b) was deemed critical and solved very quickly, while issue (a) is given less priority.

"Read the bug report I linked to in my previous post. "
I did.

"The issue is that newer flash drives and hard-drives have a hardware cache unit to speed up the reading / writing by buffering the data."

What you are talking about is nowhere to be seen in the bug report you linked. The word "cache" does not appear anywhere on that page, not even once.

"The umount command can handle this already via a flag (command-line option to the command) that tells it to first request a sync'ing of the hardware cache (i.e., flushing the cache)"

Same thing: nowhere is this mentioned.
There is no such umount flag mentioned when you type umount -h on a command line.

If there is one such flag, then what is it?

And how did you learn about it?

And how is a user supposed to know of its existence, given that, in all likelihood, just a few hours ago an experienced user like you didn't know about it either and was probably putting his backups at risk of corruption by not using that flag with the umount command? (They may already be corrupted without your knowledge, assuming you do your own backups.)

What you are talking about is nowhere to be seen in the bug report you linked. The word "cache" does not appear anywhere on that page, not even once.

From the first sentence: "if you tried to eject a flash drive and all data were still not synced to it". If that isn't clear enough for you, I don't know what would be. If you tread in the realm of developers, you have to understand the vocabulary.

Same thing: nowhere is this mentioned.
There is no such umount flag mentioned when you type umount -h on a command line.

Here's what 1 minute of googling can do for you:

--mount-options sync

I can't keep spoon-feeding you this stuff.

Clearly, Mike, you don't know what you are talking about at all.

You
"The issue is that newer flash drives and hard-drives have a hardware cache unit to speed up the reading / writing by buffering the data."
Me
What you are talking about is nowhere to be seen in the bug report you linked. The word "cache" does not appear anywhere on that page, not even once.
You
From the first sentence: "if you tried to eject a flash drive and all data were still not synced to it"

You are confusing the cache created in the computer's RAM when writing to a slower unit with a hardware cache on the flash drive itself (if such a thing even exists!!!), which is not mentioned here at all and could give quite different results if it truly exists. So where did you find that? Did you make it up? Probably...

You
"The umount command can handle this already via a flag"
"It was a simple matter of fixing it in Gnome by setting the correct flag in the underlying call to umount."

Now, a post later, having realised there is no such flag in either the command or the function (you were making this up), you quickly spill out whatever you found on Google:

"Here's what 1 minute of googling can do for you: --mount-options sync"

and you try to mask your obvious ignorance by feigning to be irritated and knowledgeable: "I can't keep spoon-feeding you this stuff." (And may I kindly remind you that it was me spoon-feeding you on C++ not so long ago: you did not correctly understand the semantics of a keyword as basic as protected ... hum... hum...)

Sorry, Mike, but you are still completely misunderstanding everything on the subject of this USB bug (and you have been, all along, really).

So, for your education, here it is:

sync:
The "sync" mount option specifies that input and output to the filesystem are done synchronously. When you copy a file to removable media (like a floppy drive) with the "sync" option set, the changes are physically written to the floppy at the same time you issue the copy command.

async:
The "async" mount option specifies that input and output to the filesystem are done asynchronously. When you copy a file to removable media (like a floppy drive) with the "async" option set, the changes are physically written to the floppy some time after you issue the copy command. If the "async" option is set and you remove the media without using the "umount" command, some of the changes you made may be lost.

So the benefit of sync is that you don't need to use the umount command before removing the media.
But if you use async and then umount, it is effectively the same as using sync.
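
For reference, at the syscall level the "sync" mount option corresponds to the MS_SYNCHRONOUS flag passed to mount(2); the shell equivalent is mount -o sync. A minimal sketch, assuming Linux and root privileges, with a made-up device, mount point and filesystem type:

// A sketch only: mount a filesystem with the "sync" option from code.
// Shell equivalent (as root):  mount -o sync /dev/sdb1 /mnt/usb
#include <sys/mount.h>  // mount, MS_SYNCHRONOUS
#include <cstdio>       // perror

int main()
{
    // MS_SYNCHRONOUS is the kernel flag behind the "sync" mount option:
    // writes are committed to the device before each write call returns.
    if (mount("/dev/sdb1", "/mnt/usb", "vfat", MS_SYNCHRONOUS, nullptr) != 0) {
        std::perror("mount");
        return 1;
    }
    return 0;
}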

Now go back and reread: the umount command HAD BEEN ISSUED, yet the data was still being written to the drive. So the problem has nothing to do with the sync or async mount options.

So clearly you have absolutely no clue on this subject and are trying to mask your ignorance.
This is indeed typical Linux-guy behavior:
- they write a file-copy routine when they don't have a clue how to do it correctly (the Nautilus folks)
- when someone (me) kindly shows them a bug that has possibly already caused serious damage to their backups, a bug they were wholly ignorant of, instead of a thank you they treat him with nasty words (as you did)
- when it is plainly obvious they (you) don't know anything about what they are talking about, they don't simply say "I'm sorry, I don't know the answer"; they just make one up to look like they know, even if they don't
- when it is clear there is a major risk that data loss has already occurred on a large number of USB backups (as implicitly acknowledged by an obscure bug report), they (the Gnome/Nautilus/Ubuntu folks) don't issue a noticeable public warning to all users to let them know and urge them to check their backups and redo them immediately, as any ethical company would
- they market a product that is clearly unfit for its suggested usage (Ubuntu) and use it to collect private data that they sell to others (Ubuntu/Amazon)

Sorry, but I don't want to be associated with such an unethical and deceitful crowd.

That ends my participation.

On the positive side, it has been quite an education on the subject, in ways I would never, ever have expected, and the conclusion was really quite a surprise.
