Dear All,
I have a system with the web server (Apache) and the database (MySQL) server installed on separate machines. At times, when traffic is high, I cannot access my system. So first, I would like to know where the error is and where the error log file is stored. Secondly, below I have copied a snippet from the db server's /var/lib/mysql/mysql-bin.err:

110124 11:52:12 [ERROR] /usr/libexec/mysqld: Sort aborted
110124 11:52:12 [ERROR] /usr/libexec/mysqld: Sort aborted
110124 11:52:12 [ERROR] /usr/libexec/mysqld: Sort aborted
110124 11:52:14 [Note] /usr/libexec/mysqld: Shutdown complete

110124 11:52:19 [Note] /usr/libexec/mysqld: ready for connections.
Version: '5.0.67-log'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Source distribution
110124 12:30:27 [Note] /usr/libexec/mysqld: Normal shutdown

110124 12:30:30 [Note] /usr/libexec/mysqld: Shutdown complete

110124 12:30:31 [Note] /usr/libexec/mysqld: ready for connections.
Version: '5.0.67-log'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Source distribution
110124 15:17:16 [Note] /usr/libexec/mysqld: Normal shutdown

110124 15:17:19 [Note] /usr/libexec/mysqld: Shutdown complete

110124 15:17:19 [Note] /usr/libexec/mysqld: ready for connections.
Version: '5.0.67-log'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Source distribution
110125 14:21:53 [Note] /usr/libexec/mysqld: Normal shutdown

110125 14:21:58 [Note] /usr/libexec/mysqld: Shutdown complete

110125 14:24:20 [Note] /usr/libexec/mysqld: ready for connections.
Version: '5.0.67-log'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Source distribution
110125 21:46:56 [Note] /usr/libexec/mysqld: Normal shutdown

110125 21:46:58 [Note] /usr/libexec/mysqld: Shutdown complete

110125 21:46:59 [Note] /usr/libexec/mysqld: ready for connections.
Version: '5.0.67-log'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Source distribution
110126 12:18:53 [Note] /usr/libexec/mysqld: Normal shutdown

110126 12:18:55 [Note] /usr/libexec/mysqld: Shutdown complete

110126 12:18:56 [Note] /usr/libexec/mysqld: ready for connections.
Version: '5.0.67-log'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Source distribution
110126 12:43:08 [Note] /usr/libexec/mysqld: Normal shutdown

110126 12:43:11 [Note] /usr/libexec/mysqld: Shutdown complete

110126 12:43:12 [Note] /usr/libexec/mysqld: ready for connections.
Version: '5.0.67-log'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Source distribution
110126 14:24:26 [Note] /usr/libexec/mysqld: Normal shutdown

110126 14:24:29 [Note] /usr/libexec/mysqld: Shutdown complete

110126 14:24:32 [Note] /usr/libexec/mysqld: ready for connections.
Version: '5.0.67-log'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Source distribution
110126 14:33:43 [Note] /usr/libexec/mysqld: Normal shutdown

110126 14:33:45 [Note] /usr/libexec/mysqld: Shutdown complete

110126 14:33:46 [Note] /usr/libexec/mysqld: ready for connections.
Version: '5.0.67-log'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Source distribution
110126 17:27:54 [Note] /usr/libexec/mysqld: Normal shutdown

110126 17:27:56 [Note] /usr/libexec/mysqld: Shutdown complete

110126 17:30:17 [Note] /usr/libexec/mysqld: ready for connections.
Version: '5.0.67-log'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Source distribution

Below is a snippet from /var/log/mysqld.log:

110124 11:52:15  mysqld started
110124 12:30:30  mysqld ended

110124 12:30:31  mysqld started
110124 15:17:19  mysqld ended

110124 15:17:19  mysqld started
110125 14:21:58  mysqld ended

110125 14:24:19  mysqld started
110125 21:46:58  mysqld ended

110125 21:46:59  mysqld started
110126 12:18:55  mysqld ended

110126 12:18:56  mysqld started
110126 12:43:11  mysqld ended

110126 12:43:12  mysqld started
110126 14:24:29  mysqld ended

110126 14:24:30  mysqld started
110126 14:33:45  mysqld ended

110126 14:33:46  mysqld started
110126 17:27:56  mysqld ended

110126 17:30:16  mysqld started

So what else must I do to find out where the error is? Is there a general log file which stores errors on Fedora? Could the error be due to Apache itself? What must I tune further, given that the max connections for the db is already set to 900? Thank you.

You "cannot access your system"? What is your system: your web server? Do you mean a browser connection timeout, or some other kind of error?
Are those manual MySQL server restarts in the log, or are they induced by the safe_mysqld script? If the latter, the question is: why does mysqld keep restarting?
Run "top" or "uptime" on both servers and capture the output. Which server has the really high load, the database or the web server?
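To put a number on it, a sketch like this can be run on each box (assumes Linux, i.e. /proc/loadavg and nproc are available):

```shell
# 1-minute load average per core; a sustained value above 1.0 per core
# suggests the CPU is saturated. Assumes Linux (/proc/loadavg, nproc).
load=$(cut -d ' ' -f1 /proc/loadavg)
cores=$(nproc)
awk -v l="$load" -v c="$cores" 'BEGIN { printf "load per core: %.2f\n", l / c }'
```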

Dear Smantscheff,
My system here means the web server. Yes, I do get connection problems where I have to keep waiting before I can log into my system. I did some manual restarts, but not that many. On the other hand, my database is also being updated in the background by GPS devices which keep inserting data into it, so both servers are busy with their tasks. I will run the top command; what do you want me to look out for?

First you have to check whether it is the web server or the database server which has too much load, "too much" meaning a load average above 1.0 over longer periods of time. If it is the web server, then check the number of page impressions or hits and post it together with the basic machine data (processor, speed, RAM). If it is the database server, check the slow query log. If there isn't one, configure MySQL to log slow queries, then check where they come from and how you can optimize them.
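On MySQL 5.0 the slow query log is switched on in my.cnf and requires a server restart; a minimal fragment might look like this (the log path and threshold are examples):

```ini
[mysqld]
# Log statements slower than 2 seconds to this (example) file:
log-slow-queries = /var/log/mysql-slow.log
long_query_time  = 2
# Optionally also log statements that scan without using an index:
log-queries-not-using-indexes
```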

Dear Smantscheff,
How do I decide whether the load is above 1.0? Is it the value from top? OK, I have captured one top session for the web server and one for the db server; both are below. What I notice is that not all CPUs are being utilised to the maximum. Can you see anything here? In the meantime, I am in the process of setting up the slow query log. Thank you.

Web Server

top - 00:57:58 up 188 days, 15:22,  1 user,  load average: 0.55, 0.61, 0.59
Tasks: 331 total,   1 running, 328 sleeping,   2 stopped,   0 zombie
Cpu0  : 11.4%us,  0.7%sy,  0.0%ni, 87.5%id,  0.0%wa,  0.0%hi,  0.4%si,  0.0%st
Cpu1  : 11.3%us,  0.6%sy,  0.0%ni, 87.8%id,  0.0%wa,  0.0%hi,  0.3%si,  0.0%st
Cpu2  :  8.2%us,  0.5%sy,  0.0%ni, 91.0%id,  0.0%wa,  0.0%hi,  0.3%si,  0.0%st
Cpu3  :  6.9%us,  0.4%sy,  0.0%ni, 92.4%id,  0.0%wa,  0.0%hi,  0.3%si,  0.0%st
Cpu4  :  3.0%us,  0.2%sy,  0.0%ni, 96.7%id,  0.0%wa,  0.0%hi,  0.1%si,  0.0%st
Cpu5  : 42.1%us,  2.4%sy,  0.0%ni, 39.6%id,  0.0%wa,  9.7%hi,  6.1%si,  0.0%st
Cpu6  :  3.2%us,  0.2%sy,  0.0%ni, 96.4%id,  0.0%wa,  0.0%hi,  0.2%si,  0.0%st
Cpu7  :  2.8%us,  0.2%sy,  0.0%ni, 96.8%id,  0.0%wa,  0.0%hi,  0.1%si,  0.0%st
Mem:  16468596k total,  8274420k used,  8194176k free,   370672k buffers
Swap: 18513912k total,        0k used, 18513912k free,  6083752k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND            
28634 apache    20   0  338m  13m 6112 S 27.2  0.1   7:14.81 httpd              
28637 apache    20   0  339m  14m 6616 S 27.2  0.1   8:12.80 httpd              
28512 apache    20   0  338m  14m 6876 S  9.7  0.1  16:05.03 httpd              
29908 root      20   0 15008 1228  792 R  1.9  0.0   0:00.01 top                
    1 root      20   0  4080  864  616 S  0.0  0.0   0:50.87 init               
    2 root      15  -5     0    0    0 S  0.0  0.0   0:00.00 kthreadd           
    3 root      RT  -5     0    0    0 S  0.0  0.0   0:53.13 migration/0

Db server top. What I notice is that the load fluctuates; at times it drops to 0.5, and at the time of this capture it was 1.05.

top - 01:37:05 up 14:06,  1 user,  load average: 1.05, 0.76, 0.83
Tasks: 278 total,   1 running, 275 sleeping,   2 stopped,   0 zombie
Cpu(s):  8.3%us,  1.7%sy,  0.0%ni, 86.5%id,  0.5%wa,  0.6%hi,  2.4%si,  0.0%st
Mem:  33009800k total, 32839088k used,   170712k free,    83532k buffers
Swap: 35061752k total,        4k used, 35061748k free, 29950332k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND            
 6576 mysql     20   0  906m 460m 4256 S  5.9  1.4   1623:05 mysqld             
26455 root      20   0 14876 1172  776 R  2.0  0.0   0:00.01 top                
    1 root      20   0  4080  852  608 S  0.0  0.0   0:01.58 init               
    2 root      15  -5     0    0    0 S  0.0  0.0   0:00.00 kthreadd           
    3 root      RT  -5     0    0    0 S  0.0  0.0   0:00.06 migration/0        
    4 root      15  -5     0    0    0 S  0.0  0.0   0:00.63 ksoftirqd/0        
    5 root      RT  -5     0    0    0 S  0.0  0.0   0:00.00 watchdog/0         
    6 root      RT  -5     0    0    0 S  0.0  0.0   0:01.07 migration/1        
    7 root      15  -5     0    0    0 S  0.0  0.0   0:00.21 ksoftirqd/1        
    8 root      RT  -5     0    0    0 S  0.0  0.0   0:00.00 watchdog/1         
    9 root      RT  -5     0    0    0 S  0.0  0.0   0:00.02 migration/2        
   10 root      15  -5     0    0    0 S  0.0  0.0   0:00.27 ksoftirqd/2        
   11 root      RT  -5     0    0    0 S  0.0  0.0   0:00.00 watchdog/2

If the database server is running at or above 1.0 most of the time that means that the CPU is busy all the time. This explains the timeouts.
Have a look at the mysql slow query log on the database server and try to identify the problematic queries.
Could it be that your web server does not close the connections properly? What does
mysql> show status
tell you about the connected and running threads?
Can you help your slow queries by defining indexes?
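Since the full SHOW STATUS output is very long, a filtered look at just the relevant counters (standard MySQL syntax) is easier to read:

```sql
-- Narrow SHOW STATUS to the connection/thread counters in question:
SHOW STATUS LIKE 'Threads%';
SHOW STATUS LIKE 'Max_used_connections';
SHOW STATUS LIKE 'Aborted%';
```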

Dear Smantscheff,
How about the top output for the web server: does it indicate anything? The problem is that besides the web server, in the background I also have many GPS devices connecting, which both update and insert data into the same database, so of course the database is under very heavy processing load. Below are the SHOW STATUS results. So far I have set the slow query time to 5 seconds; do you think I should increase it, because the log keeps being updated with slow queries?

Aborted_clients 0 
Aborted_connects 0 
Binlog_cache_disk_use 0 
Binlog_cache_use 0 
Bytes_received 138 
Bytes_sent 87 
Com_admin_commands 0 
Com_alter_db 0 
Com_alter_table 0 
Com_analyze 0 
Com_backup_table 0 
Com_begin 0 
Com_call_procedure 0 
Com_change_db 0 
Com_change_master 0 
Com_check 0 
Com_checksum 0 
Com_commit 0 
Com_create_db 0 
Com_create_function 0 
Com_create_index 0 
Com_create_table 0 
Com_create_user 0 
Com_dealloc_sql 0 
Com_delete 0 
Com_delete_multi 0 
Com_do 0 
Com_drop_db 0 
Com_drop_function 0 
Com_drop_index 0 
Com_drop_table 0 
Com_drop_user 0 
Com_execute_sql 0 
Com_flush 0 
Com_grant 0 
Com_ha_close 0 
Com_ha_open 0 
Com_ha_read 0 
Com_help 0 
Com_insert 0 
Com_insert_select 0 
Com_kill 0 
Com_load 0 
Com_load_master_data 0 
Com_load_master_table 0 
Com_lock_tables 0 
Com_optimize 0 
Com_preload_keys 0 
Com_prepare_sql 0 
Com_purge 0 
Com_purge_before_date 0 
Com_rename_table 0 
Com_repair 0 
Com_replace 0 
Com_replace_select 0 
Com_reset 0 
Com_restore_table 0 
Com_revoke 0 
Com_revoke_all 0 
Com_rollback 0 
Com_savepoint 0 
Com_select 0 
Com_set_option 1 
Com_show_binlog_events 0 
Com_show_binlogs 0 
Com_show_charsets 0 
Com_show_collations 0 
Com_show_column_types 0 
Com_show_create_db 0 
Com_show_create_table 0 
Com_show_databases 0 
Com_show_errors 0 
Com_show_fields 0 
Com_show_grants 0 
Com_show_innodb_status 0 
Com_show_keys 0 
Com_show_logs 0 
Com_show_master_status 0 
Com_show_ndb_status 0 
Com_show_new_master 0 
Com_show_open_tables 0 
Com_show_privileges 0 
Com_show_processlist 0 
Com_show_slave_hosts 0 
Com_show_slave_status 0 
Com_show_status 1 
Com_show_storage_engines 0 
Com_show_tables 0 
Com_show_triggers 0 
Com_show_variables 0 
Com_show_warnings 0 
Com_slave_start 0 
Com_slave_stop 0 
Com_stmt_close 0 
Com_stmt_execute 0 
Com_stmt_fetch 0 
Com_stmt_prepare 0 
Com_stmt_reset 0 
Com_stmt_send_long_data 0 
Com_truncate 0 
Com_unlock_tables 0 
Com_update 0 
Com_update_multi 0 
Com_xa_commit 0 
Com_xa_end 0 
Com_xa_prepare 0 
Com_xa_recover 0 
Com_xa_rollback 0 
Com_xa_start 0 
Compression OFF 
Connections 56341 
Created_tmp_disk_tables 0 
Created_tmp_files 6 
Created_tmp_tables 1 
Delayed_errors 0 
Delayed_insert_threads 0 
Delayed_writes 0 
Flush_commands 1 
Handler_commit 0 
Handler_delete 0 
Handler_discover 0 
Handler_prepare 0 
Handler_read_first 0 
Handler_read_key 0 
Handler_read_next 0 
Handler_read_prev 0 
Handler_read_rnd 0 
Handler_read_rnd_next 0 
Handler_rollback 0 
Handler_savepoint 0 
Handler_savepoint_rollback 0 
Handler_update 0 
Handler_write 132 
Innodb_buffer_pool_pages_data 0 
Innodb_buffer_pool_pages_dirty 0 
Innodb_buffer_pool_pages_flushed 0 
Innodb_buffer_pool_pages_free 0 
Innodb_buffer_pool_pages_latched 0 
Innodb_buffer_pool_pages_misc 0 
Innodb_buffer_pool_pages_total 0 
Innodb_buffer_pool_read_ahead_rnd 0 
Innodb_buffer_pool_read_ahead_seq 0 
Innodb_buffer_pool_read_requests 0 
Innodb_buffer_pool_reads 0 
Innodb_buffer_pool_wait_free 0 
Innodb_buffer_pool_write_requests 0 
Innodb_data_fsyncs 0 
Innodb_data_pending_fsyncs 0 
Innodb_data_pending_reads 0 
Innodb_data_pending_writes 0 
Innodb_data_read 0 
Innodb_data_reads 0 
Innodb_data_writes 0 
Innodb_data_written 0 
Innodb_dblwr_pages_written 0 
Innodb_dblwr_writes 0 
Innodb_log_waits 0 
Innodb_log_write_requests 0 
Innodb_log_writes 0 
Innodb_os_log_fsyncs 0 
Innodb_os_log_pending_fsyncs 0 
Innodb_os_log_pending_writes 0 
Innodb_os_log_written 0 
Innodb_page_size 0 
Innodb_pages_created 0 
Innodb_pages_read 0 
Innodb_pages_written 0 
Innodb_row_lock_current_waits 0 
Innodb_row_lock_time 0 
Innodb_row_lock_time_avg 0 
Innodb_row_lock_time_max 0 
Innodb_row_lock_waits 0 
Innodb_rows_deleted 0 
Innodb_rows_inserted 0 
Innodb_rows_read 0 
Innodb_rows_updated 0 
Key_blocks_not_flushed 4996 
Key_blocks_unused 56693 
Key_blocks_used 50478 
Key_read_requests 3729731 
Key_reads 50478 
Key_write_requests 46608 
Key_writes 1 
Last_query_cost 0.000000 
Max_used_connections 49 
Ndb_cluster_node_id 0 
Ndb_config_from_host   
Ndb_config_from_port 0 
Ndb_number_of_data_nodes 0 
Not_flushed_delayed_rows 0 
Open_files 165 
Open_streams 0 
Open_tables 119 
Opened_tables 0 
Prepared_stmt_count 0 
Qcache_free_blocks 220 
Qcache_free_memory 168316528 
Qcache_hits 1220332 
Qcache_inserts 8115 
Qcache_lowmem_prunes 0 
Qcache_not_cached 42797 
Qcache_queries_in_cache 2655 
Qcache_total_blocks 5864 
Questions 1504058 
Rpl_status NULL 
Select_full_join 0 
Select_full_range_join 0 
Select_range 0 
Select_range_check 0 
Select_scan 1 
Slave_open_temp_tables 0 
Slave_retried_transactions 0 
Slave_running OFF 
Slow_launch_threads 0 
Slow_queries 0 
Sort_merge_passes 0 
Sort_range 0 
Sort_rows 0 
Sort_scan 0 
Ssl_accept_renegotiates 0 
Ssl_accepts 0 
Ssl_callback_cache_hits 0 
Ssl_cipher   
Ssl_cipher_list   
Ssl_client_connects 0 
Ssl_connect_renegotiates 0 
Ssl_ctx_verify_depth 0 
Ssl_ctx_verify_mode 0 
Ssl_default_timeout 0 
Ssl_finished_accepts 0 
Ssl_finished_connects 0 
Ssl_session_cache_hits 0 
Ssl_session_cache_misses 0 
Ssl_session_cache_mode NONE 
Ssl_session_cache_overflows 0 
Ssl_session_cache_size 0 
Ssl_session_cache_timeouts 0 
Ssl_sessions_reused 0 
Ssl_used_session_cache_entries 0 
Ssl_verify_depth 0 
Ssl_verify_mode 0 
Ssl_version   
Table_locks_immediate 30486 
Table_locks_waited 1622 
Tc_log_max_pages_used 0 
Tc_log_page_size 0 
Tc_log_page_waits 0 
Threads_cached 28 
Threads_connected 7 
Threads_created 49 
Threads_running 1 
Uptime 1455 
Uptime_since_flush_status 1455

The connections seem OK.
I don't know about the table locks. Is there a lot of transaction processing going on?
Can you identify the queries in the slow query log which take the most time? That might be a starting point for optimizing your apps.
The rest looks fine to me, so I fear you might have to consider moving to a bigger machine.
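Once the slow query log has data, mysqldumpslow (shipped with MySQL) can summarize it; the log path below is an example, and the command is guarded in case the tool is not installed:

```shell
# Summarize the slow log: top 10 query patterns by total time.
# -s t sorts by total query time, -t 10 limits the output to ten entries.
if command -v mysqldumpslow >/dev/null; then
  mysqldumpslow -s t -t 10 /var/log/mysql-slow.log
else
  echo "mysqldumpslow not found; install the MySQL server/client tools"
fi
```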

Dear Smantscheff,
When you say the connections seem OK, are you referring to the web or the db server? Yes, I have to agree that a lot of transactions are going on, especially in the background: nearly 2k+ devices are inserting into and updating the db, and at the same time some users are using it from the front-end web application. My problem now is that I don't know how to size the right machine for this traffic. Any guide? Must I move to a bigger machine for my web server too? How do you find the web server: is that machine sufficient? Anything to do there? On the other hand, yesterday I increased the max clients for Apache from 512 to 612 and I still see these messages:

[Thu Jan 27 02:56:15 2011] [notice] mod_python: Creating 4 session mutexes based on 612 max processes and 0 max threads.
[Thu Jan 27 02:56:15 2011] [notice] mod_python: using mutex_directory /tmp

Thank you.

For your web server you could quite easily duplicate the installation and install a load balancer. This would allow you to add more (small) machines as necessary.
But on your web server the swap space has not even been touched yet, so you still seem to be on the safe side there. Therefore I assume that the problem is the performance of the database server.
I rented some shared space for my web and db servers on a quite large machine. Here is the typical top output:

top - 21:18:54 up 100 days,  9:32,  1 user,  load average: 0.03, 0.01, 0.00
Tasks:  24 total,   2 running,  22 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  109036776k total,    99396k used, 108937380k free,        0k buffers

If I had your problem I would rent such a machine (at www.netclusive.de; if you call them you can also have a one-week test drive on the machine) and install the database on it. If the problem goes away, you can be sure that the DB performance was your problem.

Dear Smantscheff,
Yes, I agree with you about the load balancing. I plan for the first 1000 users to use one web server and the next 1000 users another web server. Will this work, or do you have specific load-balancing software to do it? Besides the slow queries, what else can I do with my current db server? Do you think it will help if I increase the file descriptors?

There are (Linux) load balancers out there (I forget which) which switch incoming traffic randomly to various web servers (www1, www2, etc.). So install one of those, see how it works, and install more web servers as needed. It does not make sense to count users or hits; just have the traffic dispersed over more than one web server.
Regarding the database server, I do not see much potential for optimization. Increasing the file descriptors would help if the system ran out of them; does it? I see the most potential in optimizing your tables and queries, and if that cannot be done or does not lead to significantly less load, consider upgrading your database server.
You might also want to look into database partitioning, which might be a chance to reduce the server load with several smaller machines. You might consider running replicated database servers, too, but if your apps do a lot of updating, this might be the wrong way.
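Before raising the file descriptor limit, it is worth checking whether mysqld is anywhere near it. A sketch for Linux (/proc); MYSQLD_PID is a placeholder of my own, and the current shell's PID is used as a stand-in when it is unset:

```shell
# Compare a process's open file descriptors against its per-process limit.
# MYSQLD_PID is hypothetical; set it to the real mysqld PID (e.g. via pgrep).
pid=${MYSQLD_PID:-$$}
echo "fds in use: $(ls "/proc/$pid/fd" | wc -l)"
grep 'Max open files' "/proc/$pid/limits"
```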

Dear Smantscheff,
Just to share my scenario: most of the data is about GPS readings like latitude, longitude, speed, etc. I notice that one of the entries in the slow query log concerns a table which stores geofence details; among its fields are latitude and longitude values. Do you think tables of this sort can benefit from indexing? I am now considering partitioning too. What I plan is that one database keeps all the historical latitude and longitude values, and another db keeps only the latest ones. So anything requiring reporting functionality goes to the former db, and recent values come from the latter. What is your comment? Thank you.

Replication and data partitioning can both help, but you need to modify your application to work with replication. Replication is useful for read-intensive sites, and you can also use circular replication to distribute write load.

You can use Ldirectord to load balance your users across multiple database and web servers.
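For reference, circular (master-master) replication in MySQL 5.0 makes each server a master and a slave of the other; the my.cnf fragment below is a sketch for one of two nodes (IDs and offsets are examples), with the auto-increment settings keeping generated keys from colliding:

```ini
# Node A of a two-node circle; node B uses server-id = 2 and offset = 2.
[mysqld]
server-id                = 1
log-bin                  = mysql-bin
auto_increment_increment = 2   # step size = number of masters in the circle
auto_increment_offset    = 1   # unique starting point per master
```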

Dear Wasif,
Yes, I am planning to replicate; any idea how to do it? Is MySQL master-slave the way to go, or is master-to-master better? I am very new to all this. What is circular replication? And is there any other load-balancing software? Thank you once again.

Here are the MySQL replication details and how to set it up. I didn't read much of this thread. What is your application's ratio of read requests to write requests?

I have been using Ldirectord for the last 2 years and am quite satisfied with it. Many high-traffic sites use it.

Dear Wasif,
Sorry, I will go one step at a time: first, I guess, I will learn replication and implement it. Actually, my application is GPS-based. In the backend I have around 2k+ devices inserting location data into the database at different intervals. On the front end we have a few hundred users consuming this GPS data. So what is your idea or solution? I don't know how to tell you the exact ratio.

What is your comment?

Optimize the slow queries, which may mean: optimize your data structures and your app. Without a test case I cannot be more specific.
Then move the database to a really fast machine. Hardware is so cheap these days that you should at least give it a try before setting up partitioning and replication.

Dear Smantscheff,
Below is some sample data for the table which I mentioned. I also have other tables, but this is just a sample. Any idea what to optimise? Thank you.

geoFenceDetailsID	geoFenceID	geoFenceLat	geoFenceLong
15	3	5.24793104840194	100.49681425094604
14	3	5.250121236524251	100.4990029335022
13	3	5.248134041770815	100.50144910812378
12	3	5.245847691588138	100.49918532371521
6	2	5.379924108386956	100.30695140361786
7	2	5.379811951847182	100.30693262815475
8	2	5.379534230802661	100.30695140361786
9	2	5.3795422419884105	100.3071928024292
10	2	5.379988197828936	100.30712574720382
11	2	5.379924108386956	100.30695140361786
16	3	5.245847691588138	100.49918532371521

I cannot tell without seeing the table definition and the queries which take the most time.

Dear Smantscheff,
Below is my table definition. See if you can figure out anything. Thank you.

CREATE TABLE IF NOT EXISTS `geoFenceDetails` (
  `geoFenceDetailsID` int(11) NOT NULL auto_increment,
  `geoFenceID` int(11) NOT NULL,
  `geoFenceLat` varchar(50) NOT NULL,
  `geoFenceLong` varchar(50) NOT NULL,
  PRIMARY KEY  (`geoFenceDetailsID`)
) ENGINE=MyISAM  DEFAULT CHARSET=latin1 AUTO_INCREMENT=15894 ;

Either you submit a complete test case with all relevant table definitions, test data, and the queries which take too long, or you will not get substantial help.
Why do you define latitude and longitude as varchar? Wouldn't they be better as floats? Are any slow queries operating on lat/lon ranges? And with only 15,000 rows in the table, there should not be serious performance issues.
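If the application only ever treats the coordinates as numbers, converting the columns and indexing the fence lookup would look roughly like this (an untested sketch against the posted definition; the index name is my own):

```sql
-- Store coordinates numerically and index lookups by geoFenceID:
ALTER TABLE `geoFenceDetails`
  MODIFY `geoFenceLat`  DOUBLE NOT NULL,
  MODIFY `geoFenceLong` DOUBLE NOT NULL,
  ADD INDEX `idx_geoFenceID` (`geoFenceID`);
```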

Dear Smantscheff,
Sorry, I am a bit confused: when you say table definitions, isn't that what I posted in my previous post? There are no queries operating on lat/lon ranges. The problem is that this is not the only table being queried; many other tables are involved as well.

Your problem is bad performance and slow queries. To help you with them, we need to see the queries and the table structures. What you posted is the definition of exactly one table. As you say, and as I assume, there are more tables involved. What is their definition? What are the slow queries?

I had the same issue. I found out there were two MySQL architectures installed on this server, so I removed the 32-bit one:

rpm -e mysql-5.0.95-1.el5_7.1.i386

It's working well now. Hope this helps.


Server configuration:
* CentOS 5.7 64 bit
* MySQL 5.0.95
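To check for this situation first, a standard rpm query lists each installed package with its architecture (only applicable on RPM-based systems, so the call is guarded here):

```shell
# List installed MySQL packages with their architecture; two lines for the
# same package (i386 and x86_64) indicate the duplicate-install situation.
if command -v rpm >/dev/null; then
  rpm -qa --qf '%{NAME}-%{VERSION}.%{ARCH}\n' | grep -i mysql \
    || echo "no MySQL packages found"
else
  echo "rpm not available on this system"
fi
```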

Be a part of the DaniWeb community
