Insert/delete/update speed with large tables

tboyer
Posts: 68
Joined: Mon Aug 28, 2006 4:43 pm

Insert/delete/update speed with large tables

Post by tboyer »

Do you have any strategies we might employ to improve our speed for transactional operations on large tables? We have a large table (2 million records, a few gigabytes on disk) and routine inserts/updates/deletes seem to be taking longer and longer to run. Selects are super fast (we have appropriate indexing, I think) but inserts/deletes/updates are getting to the point where we can only do a few per second. Adding or deleting a few thousand records is a coffee-break kind of operation on this table.
John
Site Admin
Posts: 2597
Joined: Mon Apr 24, 2000 3:18 pm
Location: Cleveland, OH

Insert/delete/update speed with large tables

Post by John »

There are a couple of possibilities.

How many indexes do you have on the table? All the indexes have to be updated for an insert or delete, so if there are a lot, those updates may be taking some time.
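
If it helps to double-check, a query against the SYSINDEX system table should list them all; something like this, with 'yourtable' as a placeholder for your table's name:

select * from SYSINDEX where TBNAME = 'yourtable';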

Which OS and filesystem are you using? We have noticed that some can take a long time to seek near the end of large files. Using BLOBs can help keep the file size small.
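
For example, a schema along these lines (the names are made up) moves the bulky variable-length data out of the main table file:

create table docs (id counter, title varchar(80), body blob);

The blob data is stored separately, so the table file itself stays small and seeks near its end stay cheap.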

Are you doing a lot of selects while updating? Is it potentially a lock contention issue? Running ltest can help see if that is happening.
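
If you haven't run it before, I believe ltest just takes the path to the database directory, e.g. (the path is hypothetical):

ltest /path/to/your/db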
John Turnbull
Thunderstone Software
mark
Site Admin
Posts: 5513
Joined: Tue Apr 25, 2000 6:56 pm

Insert/delete/update speed with large tables

Post by mark »

Also, how are you doing the inserts? A bunch of SQL statements through tsql? timport? Vortex (if so, provide an outline)?
tboyer
Posts: 68
Joined: Mon Aug 28, 2006 4:43 pm

Insert/delete/update speed with large tables

Post by tboyer »

John, Mark, this is Linux on the TrendTrack hardware Mark has set up for us. The processor is screaming fast and memory is more than adequate.

This table is 2 million records, about 6 GB in size, with 12 indices on about 15 fields. The fields are varchar and counter.

We use Vortex, and our transactions are usually one record at a time based on record ids. The extra indices surely cost us, but we need them for other functions.
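
For reference, each transaction is roughly this shape (a simplified sketch; the table and field names are made up):

<A NAME=updrec>
  <SQL "update bigtab set status = $status where id = $id"></SQL>
</A>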

The idea of using BLOBs for our variable-length data in order to keep the table small might be helpful. That implies that table size itself does affect performance -- so would breaking up the table be worth trying?

I was also wondering if fixed-length text fields would perform better for inserts/deletes/updates than varchar when you get to a table of this size. Is a table of all fixed-length records going to run faster because of more uniform memory and disk allocation?

Also, are there any techniques we might use, like storing changes in a small temp table (or a text file) and then batch-inserting/updating into the main table (maybe using tsql for the batch operation), along the lines of the sketch below?
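
Roughly this, where "changes" and "bigtab" are made-up names and I'm assuming insert ... select is supported here. Stage each change as it happens:

insert into changes (id, data) values ($id, $data);

Then periodically fold the batch into the main table in one pass:

insert into bigtab select id, data from changes;
delete from changes;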

These are all things I can test out for our situation but it would be great if you could suggest anything that has worked for others.

thanks,

Tom
tboyer
Posts: 68
Joined: Mon Aug 28, 2006 4:43 pm

Insert/delete/update speed with large tables

Post by tboyer »

John and Mark, one more very simple example.

I inserted 6000 records into the table I referred to above (now about 2.2 million records). Now I'm deleting them using a single SQL statement (inside a Vortex script).

The SQL is "delete from table where id > '-1 hour'" and there is an index on id.

The delete is taking more than 5 minutes. I suspect it would be a little faster with NOVARS, but not greatly. This would be almost instantaneous on a small table, I think.

Obviously we pay a performance penalty on a large table, but I'm curious if there are techniques for reducing that penalty, since our tables are always getting larger!

Thanks.

Tom
mark
Site Admin
Posts: 5513
Joined: Tue Apr 25, 2000 6:56 pm

Insert/delete/update speed with large tables

Post by mark »

Don't forget other activity that may be going on on the system. You may be seeing contention for the disk and/or locks. Use "ltest" to watch the locks on the database. Once in ltest, hit "1" to get more frequent screen updates.

Use iostat and/or sar to watch disk activity:
iostat 1 100
sar -b 1 100
See previous average activity with:
sar -b
tboyer
Posts: 68
Joined: Mon Aug 28, 2006 4:43 pm

Insert/delete/update speed with large tables

Post by tboyer »

Yes, thanks, I have watched it with ltest. This is running on a test database and it's the only process going, so it's not waiting for locks.

I'm fairly certain it's just table size. Writing to a small table takes 10-20% of the time it takes on these large tables with lots of indexes. So my task is to write to the large tables as efficiently as possible.
mark
Site Admin
Posts: 5513
Joined: Tue Apr 25, 2000 6:56 pm

Insert/delete/update speed with large tables

Post by mark »

Any software fixes since Feb 2008 that would help?

Commercial Version 5.01.1202977031 20080214 (i686-unknown-linux2.6.9-64-32)
mark
Site Admin
Posts: 5513
Joined: Tue Apr 25, 2000 6:56 pm

Insert/delete/update speed with large tables

Post by mark »

I wonder if the journaling filesystem is part of the problem. Journaling increases the I/O somewhat to improve integrity if there's a system crash, but on a stable system with a good UPS, crashes should be rare.
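
If you want to experiment on the test box, mounting the data partition with lighter journaling is one thing to try; for ext3, something like this (the device and mount point are hypothetical):

mount -o noatime,data=writeback /dev/sdb1 /data

Even noatime by itself can cut a fair amount of write traffic.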