Here's a basic question: what is the fastest way to transfer a MySQL database?

Here are the variables at play:
1) Database is just shy of 3GB
2) Transfer is between two servers running different versions of MySQL
3) The database will not be in use during the transfer; I can even shut down MySQL if need be.
4) The database is mostly MyISAM, with a few InnoDB tables and a couple of Memory tables (it's a vBulletin database).
5) Both servers are Linux based and I have root access on both.

I've done the transfer using a mysqldump, but the import took ages (more than a working day).
I've done the transfer using a hotcopy, but that's no good when going across MySQL versions: all the tables come out corrupt and need repair, and even after that they don't work quite right. Plus I haven't found any batch repair function; I have to repair tables one by one, which is no good.
I've also tried turning off MySQL and copying the raw database files directly, but that has the same issues as hotcopy: it can't be done across different versions.

I've got Navicat, and using it to do a data transfer solves the mismatched-version issue, but like a dump it takes a long time. During tests, still over 7 hours.

Here are some questions then:
I'd like to use Navicat, or some other database manager, because I have multiple databases to manage and back up and I'm looking for a good tool. That said, I'm not sure whether I can make Navicat any faster, or what to set its options to (lock or unlock tables, use transactions, complete and/or extended inserts, etc.).
Maybe this is normal, but over 7 hours for a data transfer of an under-3GB database seems ridiculous, given that some DBs are way larger than this and transferring data is such a routine task. I'm kind of lost. Is it because my work computer is a "middle man", funnelling data between the two servers? Is there a way to go on one server and "pull" the data in directly and save time? Can I tune Navicat's transfer settings so it's faster? Can I import data from a hotcopy directly on the one server but have it "update" the structure and tables for the newer version of MySQL automatically?

I'm looking for options here!

Thanks for the help, I'm on a deadline and don't really want a 7+ hour transfer for this!

I have personally used MySqlcc (MySQL Control Center), which is a free download. It works really fast, with no corruption of data, etc. Try it and see.

Downloadable from http://sourceforge.net/projects/mysqlcc/

In my experience the fastest method would be:

1) Copy only the structure of the source to the target.
2) Dump the table contents to individual CSV (TXT) files.
3) Load the text files with "LOAD DATA ..." (a sketch follows below).
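
As a rough illustration of steps 2 and 3, assuming the FILE privilege on the source and placeholder table and path names:

-- on the source: write one CSV file per table (the file lands on the server's own filesystem)
SELECT * FROM mytable INTO OUTFILE '/tmp/mytable.csv'
FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n';

-- on the target, once the structure exists: bulk-load the file
LOAD DATA INFILE '/tmp/mytable.csv' INTO TABLE mytable
FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n';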

There are over 4 million records; in fact, Navicat has been transferring data for about 24 hours straight, and it's only at 6% as I write this! So probably a lot more than that.

That just seems ridiculous; the DB isn't even 3GB, but 6% in 24 hours? Something is seriously wrong here.

I'm not sure a CSV would handle it, though (too big?). I know a SQL dump works, but the import took a long time; now I'm thinking it was quick compared to Navicat.

A follow-up question: is there a method to remove the middle man (my work PC) and, with some MySQL or Linux tool, pull the data directly onto the new server from the old, bypassing Navicat? For example, hook the two databases together as a remote server, then sync the databases via some command?

Remember, it's different versions of MySQL, so whether I use a dump file or something else, the import has to automatically convert whatever it needs to.

Again, use MySqlcc. It lets you connect to different databases, servers, synchronise etc. This is what you need, and it is fast.

If you are using Navicat, don't expect performance. Navicat is an easy interface, but it isn't fast. It also sounds like you are transferring databases from one server to another using your local PC as the middle man, which means all the traffic is downloaded to your PC and then uploaded to the second server.
Look into mysqldump and MySQL's "LOAD DATA" statement. These are the fastest tools for the task.
Dump the source database, zip it, transfer it to the target, unzip it, and feed it into the target (a SQL dump goes through the mysql client; tab-separated dumps go through "LOAD DATA"). You will of course need shell access to both the source and target servers.
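
For what it's worth, the whole dump-compress-transfer-import sequence can be collapsed into one pipeline run on the target server, which also removes the middle-man PC; a sketch with placeholder host and database names:

# pull the dump from the old server, decompress and import it in one stream
ssh user@old_server "mysqldump mydb | gzip" | gunzip | mysql mydb

# for a big dump file, relaxing checks first can speed up the InnoDB tables
( echo "SET unique_checks=0; SET foreign_key_checks=0;"; cat dump.sql ) | mysql mydb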

Well, I used MySqlcc and there aren't any functions to transfer data, sync, or any of that. I can connect to multiple DBs, but I see nothing for sync, transfer, backup, restore, etc. I've right-clicked everything and gone through the menus and toolbars. Where can I sync or back up/import?

I attempted to use LOAD DATA, but before I can use it I have to create all the per-table files, and I can't do that. Even though I can log in to the server with root access, and MySQL is running under root, I keep getting "permission denied" when saving files. In other words, I cannot use the "INTO OUTFILE" method of backup. I tried creating folders, setting permissions, logging in with different users, using the /tmp folder; nothing worked. The MySQL users have full privileges, yet it refuses to save the files. So I can't use LOAD DATA, as that seems to be just for individual tables, not a whole DB from mysqldump, unless I'm missing something.
I would love to see how fast the import goes without using INSERTs, but I can't get the dang files to export!

Create the table structure with
mysqldump myDatabase -w0 | mysql myNewDatabase
(-w0 adds a WHERE clause that matches no rows, so only the structure comes across). Then dump your contents with mysqldump -T /some/dir myDatabase.
That leaves you with a bunch of files (one per table) which you can load with "LOAD DATA".
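
Putting those pieces together, a minimal sketch with placeholder paths and names (/var/dump_dir must be writable by the mysqld process):

# structure only: -w0 matches no rows, so no data is dumped
mysqldump -w0 mydb | mysql -h new_server mydb
# write one .sql and one tab-separated .txt file per table
mysqldump -T /var/dump_dir mydb
# on the target: mysqlimport wraps LOAD DATA; --local reads the files client-side
mysqlimport --local mydb /var/dump_dir/*.txt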

And are you sure your mysql daemon runs as root? That would be a very unusual setup. If I remember right, "SELECT ... INTO OUTFILE" runs with the privileges of the mysqld process, not with those of the MySQL user or the Unix user who called mysql.
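
Two quick sanity checks along those lines (a sketch; the second variable only exists on newer MySQL versions, and an empty value means unrestricted):

# which Unix account actually owns the mysqld process?
ps -C mysqld -o user=
# does the server restrict where INTO OUTFILE may write?
mysql -e "SHOW VARIABLES LIKE 'secure_file_priv'"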

Thanks AndreRet, I use Navicat as well as SQLyog. I like Navicat's interface better; it tells me more "at a glance" than other software does. For example, when I click on the DB in Navicat I can see how many tables there are; Yog doesn't seem to show that, which makes a quick structural comparison harder.
BUT, Yog has the advantage when synchronizing DBs, because my DB apparently has a bunch of tables with no primary key, and Navicat will simply quit the sync upon hitting a table with no primary key, whereas Yog will throw an error, perhaps skip the table, but will keep going.

I wish I could get the OUTFILE and LOAD DATA method working; I will probably look deeper into it when I have time. I'm sure it has something to do with how the server is set up. Even though I buy dedicated servers, these companies usually configure them in their own way and block a lot of stuff.
In cPanel my MySQL users are set to all privileges, yet some are still missing. I tried to import a dump and got a permission error, just to import a dump! Even though my user reports all privileges in cPanel, I still didn't have permission to import, or to flush privileges.

In any event, since I'm transferring a vBulletin database, I found out I don't need the postindex table. It was used by our vB3, but since upgrading to vB4 it isn't used. That table had over 28 million records and was importing at only about 70 inserts a second from the dump file; I was copying it needlessly. Without those large index tables, the import of the roughly 1.3GB dump file will (hopefully) take less than 4 hours.
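
For anyone else trimming a dump the same way, mysqldump can skip tables at dump time; a sketch with a placeholder database name (repeat the flag for each table to exclude):

# dump everything except the unused search-index table
mysqldump --ignore-table=mydb.postindex mydb > dump.sql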

I'm not done looking for solutions, but for now a dump file is still how I'm doing it, with Yog to sync it later and make sure they are identical.

I'm sorry if I could not help out more here. I do hope you find the answer soon. If you do, please post the solution here for others to see, thanks.

Also, if you are not going to make use of this post anymore, please mark it as solved for us, thanks.

There are other solutions I want to try, but for this particular issue it ended up being a straight mysqldump, imported directly from the new server.

I know there are faster ways, but I was on a timeline.

Thanks!

I use dbForge Studio for MySQL, and for me the fastest way of transferring a database is its copy-database feature. It is the simplest method and requires the fewest clicks.
