write_outbuf error while indexing
Posted: Tue May 17, 2005 5:27 pm
We're using the Linux version of Texis, release # 4.04.1067366033.
We have a 185 GB drive partition, on which a database sits. Right now the table, blob file and indexes take up 130 GB.
When we try to index the (by far) largest field in the table we eventually get an error that reads: "006 Can't write 0x20000 bytes at 0x2EC170E38 to KDBF file /database/558/mytable/mytable_DOCTEXT.dat: error 28: No space left on device in the function write_outbuf".
We estimate that when the indexing is done this field's index will be ~15GB in size.
When the indexing died out it left behind four T*.dat files (and four associated .btrs). The .dats were all in the 900 MB range, which from what I've seen is the typical Texis pattern before it starts the next "a.dat, b.dat, c.dat..." in the series.
So... we have 55 GB free on the volume at the moment, which I'd think is enough to get the job done? We looked at our tmp directory, but it isn't filled up by any means.
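In case it's useful, here's a rough Python sketch of how we can watch free space and the temporary merge files while the index builds (the paths come from the error message above; the 60-second interval is arbitrary):

    import os
    import shutil
    import time

    VOLUME = "/database"                  # mount point of the 185 GB partition
    TABLE_DIR = "/database/558/mytable"   # table directory from the error message

    while True:
        # Free space on the volume as a whole.
        total, used, free = shutil.disk_usage(VOLUME)
        # Combined size of the temporary merge files Texis leaves behind (T*.dat).
        tmp_bytes = sum(
            os.path.getsize(os.path.join(TABLE_DIR, f))
            for f in os.listdir(TABLE_DIR)
            if f.startswith("T") and f.endswith(".dat")
        )
        print(f"free: {free / 2**30:6.1f} GiB   T*.dat: {tmp_bytes / 2**30:6.1f} GiB")
        time.sleep(60)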
Can you think of what might be happening? If you think it's a disk space issue due to temp files, we can expand the 185 GB partition to something larger.