I was wondering if there is a way to set the size of the DB at creation time to avoid fragmentation on the NTFS file system.
The DB will probably end up using most of the hard drive space, but I don't want the files to become fragmented, so I'd like to set their maximum size from the get-go.
My bonus question: what happens if I run out of space on the hard drive? Can I 'expand' the DB to use another hard drive?
Do I have to use the logical drive feature in Windows, or is it something Texis manages directly?
And maybe more importantly: which is better, adding a new hard drive or replacing the existing one with a larger one?
You can use the tablespace and indexspace settings to put tables or indexes in different directories or on different drives. The indexes can be on a different disk than the table, which can help performance. Since each drive has a limit on how fast it can operate, having several physical drives can help, as long as each is big enough to hold a complete table or index.
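A minimal sketch of how that might look, assuming the tablespace and indexspace settings are applied via SQL SET statements before the objects are created (the paths, table, and column names here are hypothetical; check the Texis documentation for the exact syntax in your version):

    -- Hypothetical drive layout: tables on D:, indexes on E:
    SET tablespace = 'D:\texisdata\tables';
    SET indexspace = 'E:\texisdata\indexes';

    -- Objects created after the SET statements land in those directories.
    CREATE TABLE articles (id COUNTER, title VARCHAR(128), body VARCHAR(2048));
    CREATE INDEX articles_title_x ON articles (title);

With the index on a separate physical drive, table reads and index lookups are not competing for the same disk.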
The database will simply grow as data is added; you cannot preallocate space.
Thanks for the answer.
Out of curiosity, what's a reasonable size for a table?
The NTFS file system can hold very large files, but is it realistic for the table file to reach that limit?
I expect my table to be quite large (tens of GB), and I'm wondering whether I'd be better off designing multiple tables to share the 'load'. But then I foresee conflicts with the indexes: merging indexes, running queries against several indexes, and so on.
Any suggestions?
It really depends on how you are accessing the table. You can use RAID or logical drives to spread the load of a single file across physical devices. If your application logic can partition the data so that any given query hits only one of the split-up tables, that can help, especially if the data is dynamic, as it reduces contention and also reduces the index size for a given query; see the sketch below. It will, however, increase the overall index overhead, which will make file caching less effective.
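For illustration, a hedged sketch of that application-level partitioning, assuming the data can be split on a natural key such as a date range (all table and column names here are hypothetical); each query then touches only one table and its smaller index:

    -- One table and one index per partition; names are hypothetical.
    CREATE TABLE logs_2023 (id COUNTER, ts DATE, msg VARCHAR(2048));
    CREATE TABLE logs_2024 (id COUNTER, ts DATE, msg VARCHAR(2048));

    CREATE INDEX logs_2023_ts_x ON logs_2023 (ts);
    CREATE INDEX logs_2024_ts_x ON logs_2024 (ts);

    -- The application picks the table from the query's date range,
    -- so only one partition's index is searched:
    SELECT id, msg FROM logs_2024
    WHERE ts >= '2024-01-01' AND ts <= '2024-03-31';

The trade-off mentioned above still applies: more tables means more indexes overall, so the file cache has more to keep warm.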