I'm working on a software project that uses MSSQL Express. My question is...
I've looked at competing software that does the same thing I want to do, and it seems the competition uses a file-based database system. I intend to use SQL because it seems easier and more reliable to me. Anyway, here's the issue: the competition stores its data in generic DBF files that can be opened in Excel or any similar program. Looking at their data structure, they store all addresses in one file, all contact names in another file, all billing information in one file, all billing addresses in another, and so on.

What I'm wondering is this: I know the company won't tell me if I ask, so what would be the reason for splitting the information into so many fragments? I assume that, because the system is file-based, it helps avoid read/write I/O contention, but I can't tell. I would think that having to look in multiple files to assemble one "record" would cause significant performance problems. I do know that once data is retrieved from a file, the file is "closed," so to speak, so that other instances across the network can view or modify the data (I tricked them into telling me that), but that's about all I could get out of them.
Any thoughts?
If I'm using a SQL database to store my information, would it improve or hinder performance to split the data across many tables the way they do in their file-based system? I just can't believe a company would ship file-based software like that. These files could grow to many gigabytes over the years; storing everything in flat files on disk seems like a major performance oversight. Am I missing something? Would it really be better to be file-based?
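For clarity, here's a minimal sketch of what I mean by splitting the data into separate tables in MSSQL. The table and column names are just my guesses at what their DBF layout might map to, not anyone's actual schema:

```sql
-- Rough sketch only: one table per "file" in the competitor's layout.
-- All names here are my own guesses, not their real structure.
CREATE TABLE Contacts (
    ContactID INT IDENTITY(1,1) PRIMARY KEY,
    FullName  NVARCHAR(100) NOT NULL
);

CREATE TABLE Addresses (
    AddressID INT IDENTITY(1,1) PRIMARY KEY,
    ContactID INT NOT NULL REFERENCES Contacts(ContactID),
    Street    NVARCHAR(200),
    City      NVARCHAR(100)
);

CREATE TABLE BillingInfo (
    BillingID  INT IDENTITY(1,1) PRIMARY KEY,
    ContactID  INT NOT NULL REFERENCES Contacts(ContactID),
    CardNumber NVARCHAR(25)
);

CREATE TABLE BillingAddresses (
    BillingAddressID INT IDENTITY(1,1) PRIMARY KEY,
    BillingID        INT NOT NULL REFERENCES BillingInfo(BillingID),
    Street           NVARCHAR(200),
    City             NVARCHAR(100)
);

-- Reassembling one "record" then means joining across the tables:
SELECT c.FullName, a.Street, a.City, b.CardNumber
FROM Contacts c
JOIN Addresses a   ON a.ContactID = c.ContactID
JOIN BillingInfo b ON b.ContactID = c.ContactID
WHERE c.ContactID = 42;
```

My worry is that final SELECT: joining everything back together feels like it would hurt performance the same way opening four separate files would. Is that fear justified?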