Monday, February 27, 2012

Help me! The log file for database is full

Hi,
I'm working with SQL Server 2000 and I have a database with the simple recovery model. Each day I get the following error:

"The log file for database x is full. Backup the transaction log for the database to free up some log space"

I tried to limit the transaction log file to 500 MB, but then I get this error. I have shrunk the transaction log file manually, but the next day I get the same error again. If I don't limit it, the file grows a lot (1 GB) and then I don't have enough disk space. Can you help me, please?

Thanks a lot.
Memupi

I will offer you two solutions.

1 - if you wish to retain the log file information, then back it up regularly (see the sketch after option 2). SQL Server will then reuse the log space it has backed up. The log file may still grow, although it should level out.

2 - if you don't care about the information held in the log file (I suspect this is true, given your simple recovery model), then you can set this via Query Analyzer (substitute your database name):

exec sp_dboption 'YourDatabase', 'trunc. log on chkpt.', 'on'

This will throw away log file segments in which all the transactions are committed, and so keep your log file small. Note that it means you cannot recover using the transaction log if you use this method.
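For option 1, a regular log backup might look something like the following sketch (the database name and backup path here are only placeholders):

BACKUP LOG YourDatabase
    TO DISK = 'D:\Backups\YourDatabase_log.bak'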

Hope this answers your question.

I have tested the value of this option using the following select:
SELECT DATABASEPROPERTY ('Northwind', 'IsTruncLog')

and the returned value was 1.

Also, if I look at the database options, I can see that the "autoshrink" option is set as well.

So I don't know what the problem is.

Thanks a lot.

OK - try DBCC LOGINFO in Query Analyzer in the database you are having problems with. It returns a Status field (amongst others): 2 means a segment is active, 0 means inactive. If all the segments are active then the log file will have to grow, and it also suggests that the truncation is not happening. Inactive segments will be reused.
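For example (using Northwind as a stand-in for your database):

USE Northwind
DBCC LOGINFO
-- Status = 2 marks the active segment(s); Status = 0 marks inactive
-- segments whose space can be reused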

Another point to note (re autoshrink): the log file can only shrink from the end backwards - i.e. if the active segment is at the end of the file it will not shrink.

I have executed the LOGINFO query and the result is only one active segment. But this segment is the last one. So the shrink is not effective? What can I do at this point?

But if I executed a manual command to shrink just the transaction log directly from SQL Enterprise Manager, the log file would get smaller. Why is that? I can test this point.
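In T-SQL, that manual shrink would be roughly this (YourDatabase_log is a placeholder for the log's logical file name, and 100 is a target size in MB):

DBCC SHRINKFILE (YourDatabase_log, 100)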

On the other hand, I have limited the file to 500 MB. What happens if I execute a big transaction and there isn't enough space in the log file to record it all? Could I have problems with this? When I didn't limit the size of the log file I never got any error (apart from the disk space problem, of course).

Where the active segment is at the end, I usually create a dummy table and update the columns in it until the active segment "moves". Using the DBCC command I can track this.
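A rough sketch of that trick (the table and column names here are made up for illustration):

-- hypothetical filler table; repeated updates generate log records
CREATE TABLE dbo.LogFiller (id INT, filler CHAR(8000))
INSERT INTO dbo.LogFiller (id, filler) VALUES (1, 'x')

-- repeat this update, checking DBCC LOGINFO in between, until the
-- active segment (Status = 2) moves away from the end of the file
UPDATE dbo.LogFiller SET filler = REPLICATE('y', 8000)
DBCC LOGINFO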

As for a log file of 500 MB - few transactions would require that much space, I think, although I suspect the transaction would fail if you exceed your imposed limit - definitely if there is no disk space.

I have tested this, and every time the active segment changes, it becomes a new segment that is again the last one. By "last" I mean the segment with the highest FSeqNo.

Can I remove the transaction log, or configure the database not to use one?

Or, how could I move the transaction log file to another disk? Then I would not need to limit the log.

You can't run without a transaction log - that's not possible.

You can add a second log file through Enterprise Manager quite easily. I've never actually moved a log file, although you should be able to - have a look in BOL at the ALTER DATABASE command, perhaps.
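Adding a second log file on another disk would look roughly like this (the logical name, path and sizes are placeholders):

ALTER DATABASE Northwind
ADD LOG FILE
(
    NAME = Northwind_log2,
    FILENAME = 'E:\SQLLogs\Northwind_log2.ldf',
    SIZE = 100MB,
    FILEGROWTH = 50MB
)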

OK. Thanks for everything. You have helped me a lot.

Hi,

Since your drive is running out of space, you can either free up some space on the drive where the log file currently lives, or look for another drive that has free space:

1. Create a folder to store the log files.
2. Go to the properties of the job that takes the backup of the log.
3. Go to edit and change the location (drive) from the previous drive to the one with enough space.

Hope it will work.
