Hi!
We have a very large list of URLs for dynamic pages (press releases; initially approx. 4000-5000 URLs).

I've got this URL file indexed properly in the first run. The problem is that, due to the large number of URLs, we don't want to index the whole list again every day. The solution would be (after the initial walk) to write only the new press-release URLs into the text file (approx. 5 URLs per day) and do a rewalk of type "refresh", so that the index from the first walk is kept and the new URLs are added.

But this doesn't work.

Is there a way to do this? Please help! It's very urgent...
Best Regards
Kai Tallafus