Offline pages / Temporary Internet Files


siwalker66
18th Jan 2002, 23:52
I would appreciate any advice on this problem very much:
I wish to view a web page and its associated links to other pages and graphics offline. I have added it to Favourites, made it available offline, synchronised it etc., no problems. However, what I want is to be able to delete the Temporary Internet Files (TIF) without losing the offline page; it seems the info required to display the page is stored in TIF rather than in the Offline Web Pages folder or elsewhere. There must surely be a way of doing this within Windows 98/IE6.
Thanks in anticipation

RW-1
19th Jan 2002, 01:05
Very simple answer...

Save the page on your hard drive. File, save as ... (In IE anyways ...)

Give it a location, and you will get the html page plus a folder containing the relevant graphics etc. for the page, which you can then bring up locally from your hard drive.

You may then delete TIF with impunity. :)

Of course, if you are deleting temporary internet files from the Control Panel, after you select Delete you are given the option of whether or not to delete all offline content as well; leaving that unticked should remove the temporary internet files without affecting the pages you chose for offline viewing.

[ 18 January 2002: Message edited by: RW-1 ]

siwalker66
19th Jan 2002, 02:27
Thanks RW-1.
I use Eraser to delete my TIF more securely; it does not have a facility to distinguish offline content from other data, hence I wanted to be able to delete it all.
Re saving to hard drive - the page I am trying to save is a series of lessons; to go to each successive lesson you go to a new page. Each page has a lot of graphics which need to be clicked on to view or enlarge. How do I use Save As to get all of these?
Thanks for your time

bblank
19th Jan 2002, 03:21
ch66, even if you saved all the graphics with the original web-posted names, the web pages would not load properly from your local drive if the webmaster did not use relative paths in the html anchors and image tags.
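
For example, a page that points to its graphics with a relative path, e.g.

    <img src="pics/fig1.gif">

will load them from your hard drive alongside the saved page, whereas one that uses an absolute URL, e.g.

    <img src="http://www.site.com/pics/fig1.gif">

will always go back to the web server. (The file and site names here are just made up for illustration.)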

Best bet is to download a website capture utility. I use this one:
<a href="http://www.nonags.com/files/redirect/?http://www.httrack.com/httrack-3.10.exe" target="_blank">http://www.nonags.com/files/redirect/?http://www.httrack.com/httrack-3.10.exe</a>
It is free (with a GNU-type license) and the source code is available. I haven't tried any other applications of this type to compare it with, but this one is very good.

King Chile
19th Jan 2002, 09:50
Whilst not free, it is cheap; if you've a need to download either whole web sites or parts thereof, have a look at Memo Web (http://www.goto.fr/uk/TEL/TelPresMW.htm).

siwalker66
20th Jan 2002, 01:16
Thanks for the advice. I am aware of page-grabbing utilities like the ones suggested. HTTrack I can't make head or tail of; Memo Web looks quite good, though. However what I really wanted to be able to do was to use IE6/Windows itself - surely it must have this ability?
If not which is the best utility?
Thanks a lot

bblank
20th Jan 2002, 03:57
HTTrack I can't make head or tail of

Download HTTrack, install it, open it, and cancel the proxy dialogue box. For "Project name" type the name of the folder you want the website copied into. If you don't want this folder in "c:\My web sites" then enter an alternative base path. Click "Next", enter the URL, click "Next", then click "Finish".
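
If you would rather skip the wizard, HTTrack also ships with a command-line version; something like the following should do the same job (example.com standing in for the real site):

    httrack "http://www.example.com/" -O "c:\My web sites\example"

The -O switch just tells it where to put the copy.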

As a test I just downloaded my entire website by specifying a path that placed it in the middle of my directory tree. It downloaded the whole thing, right up to my public_html root: 1048 files amounting to 20MB, recreating the entire directory structure on my hard drive.

what I really wanted to be able to do was to use IE6/Windows itself - surely it must have this ability?

Surely not! There is no need for a browser to have a spidering function, or to rewrite a lot of html tags so that the paths are correct on a local drive. That said, I don't know IE6, so I can't say what it does or does not do.

siwalker66
20th Jan 2002, 20:15
Thanks Brian Blank
OK so I need to use a separate utility.
But what I don't understand about HTTrack is how to tell how deep you need to explore to limit the size of a download. The content I want is a series of 12 ECG tutorials, each requiring about 10 graphics which open in new windows when you click on an icon. Without limiting this you are looking at a 13MB download - does this mean a lot of unnecessary stuff is being downloaded?

bblank
20th Jan 2002, 22:16
ch66, for now HTTrack is purely a mirroring tool, but as the download progresses there is a Skip button beside each file being received.
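
If you would rather cap the size before the download starts, there is a "Set options" button in the wizard with (if I remember the dialogue boxes correctly) a Limits tab where you can set the maximum mirroring depth; from the command line the equivalent is the -rN switch, e.g. (the URL is made up):

    httrack "http://www.example.com/lessons/" -O "c:\My web sites\lessons" -r2

which stops it following links more than two levels away from the start page.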

If you only want to save a small portion of a website then you can follow RW-1's suggestion and use the "Save" menu choice. You will have to visit each page you want and separately save each graphic.

Then you have to hope that the web author used relative paths in his html or else when you load the page your browser will look for the on-line source. (That problem is easy enough to fix and may be worth the time if you use the pages a lot.)
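
If you have a lot of pages to fix, a few lines of script will do the job. Here is a rough sketch in Python (the base URL and folder layout are made up, so adjust to suit): it strips the site's address from every saved page, turning the absolute links into relative ones that load from the local copy.

    # fixpaths.py - rough sketch: make a site's absolute links relative
    import glob

    BASE = "http://www.example.com/lessons/"   # made-up site root

    for name in glob.glob("*.html"):           # every saved page in this folder
        with open(name) as f:
            text = f.read()
        with open(name, "w") as f:
            f.write(text.replace(BASE, ""))    # absolute -> relative
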
I used to get a kick out of earlier versions of HTTrack because on loading, its splash screen showed the building in which the program was developed. I once worked there. But I have no connections to the program and only recommended it because it does what I want and I have no need to look further.

If HTTrack does not meet your needs, then you might want to look into the following applications. *I have not used them so I cannot recommend them.* I skimmed over the descriptions but did not see any mention that html files are rewritten for local viewing.

<a href="http://subfiles.net/webcow/" target="_blank">http://subfiles.net/webcow/</a>
<a href="http://home.global.co.za/~antonia/netspider/" target="_blank">http://home.global.co.za/~antonia/netspider/</a>
<a href="http://www.netvampire.com/" target="_blank">http://www.netvampire.com/</a>