How to check if the value on a website has changed

Edit: I hadn't realized you were just looking for the problem with your script. Here's what I think is the problem, followed by my original answer which addresses another approach to the bigger problem you're trying to solve.

Your script is a great example of the dangers of a blanket except statement: it catches everything, including, in this case, your sys.exit(0) (which works by raising SystemExit).

I'm assuming your try block is there to catch the case where D:\Download\htmlString.p doesn't exist yet. That error is an IOError, and you can catch it specifically with except IOError:

Here is your script, plus a bit of code in front to make it run, fixed for the except issue:

import sys
import pickle
import urllib2

request = urllib2.Request('http://www.iana.org/domains/example/')
response = urllib2.urlopen(request) # Make the request
htmlString = response.read()

try: 
    file = pickle.load( open( 'D:\\Download\\htmlString.p', 'rb'))
    if file == htmlString:
        print("Values haven't changed!")
        sys.exit(0)
    else:
        pickle.dump( htmlString, open( 'D:\\Download\\htmlString.p', "wb" ) )  
        print('Saving')
except IOError: 
    pickle.dump( htmlString, open( 'D:\\Download\\htmlString.p', "wb" ) )
    print('Created new file.')

As a side note, you might consider using os.path for your file paths -- it will help anyone later who wants to use your script on another platform, and it saves you the ugly double back-slashes.
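For example, something like this (the drive and folder here just stand in for your D:\Download path):

import os.path

# Build the path from components instead of hard-coding the separators.
pickle_path = os.path.join('D:\\', 'Download', 'htmlString.p')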

Edit 2: Adapted for your specific URL.

There is a dynamically-generated number for the ads on that page which changes with each page load. It's right near the end after all the content, so we can just split the HTML string at that point and take the first half, discarding the part with the dynamic number.

import sys
import pickle
import urllib2

request = urllib2.Request('http://ecal.forexpros.com/e_cal.php?duration=weekly')
response = urllib2.urlopen(request) # Make the request
# Grab everything before the dynamic double-click link
htmlString = response.read().split('<iframe src="http://fls.doubleclick')[0]

try: 
    saved_html = pickle.load( open( 'D:\\Download\\htmlString.p', 'rb'))
    if saved_html == htmlString:
        print("Values haven't changed!")
        sys.exit(0)
    else:
        pickle.dump( htmlString, open( 'D:\\Download\\htmlString.p', "wb" ) )  
        print('Saving')
except IOError: 
    pickle.dump( htmlString, open( 'D:\\Download\\htmlString.p', "wb" ) )
    print('Created new file.')

Note that the saved string is no longer a valid HTML document, if that matters. If it does, you might remove just the offending part instead. There is probably a more elegant way of doing this -- perhaps deleting the number with a regex, as sketched below -- but this at least answers your question.
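For instance, instead of the split, something along these lines might work (a rough sketch; the exact pattern is a guess at the ad markup and would need adjusting to the real page):

import re

# Remove the doubleclick iframe that carries the dynamic number, rather than
# truncating the document. (?s) lets . match newlines inside the tag.
htmlString = re.sub(r'(?s)<iframe src="http://fls\.doubleclick.*?</iframe>',
                    '', response.read())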

Original Answer -- an alternate approach to your problem.

What do the response headers from the web server look like? HTTP specifies a Last-Modified header that you could use to check whether the content has changed (assuming the server tells the truth). Use it with a HEAD request, as Uku showed in his answer, if you'd like to conserve bandwidth and be nice to the server you're polling.

And there is also an If-Modified-Since header, which sounds like what you might be looking for.

If we combine them, you might come up with something like this:

import sys
import os.path
import urllib2

url = 'http://www.iana.org/domains/example/'
saved_time_file = 'last time check.txt'

request = urllib2.Request(url)
if os.path.exists(saved_time_file):
    """ If we've previously stored a time, get it and add it to the request"""
    last_time = open(saved_time_file, 'r').read()
    request.add_header("If-Modified-Since", last_time)

try:
    response = urllib2.urlopen(request) # Make the request
except urllib2.HTTPError, err:
    if err.code == 304:
        print "Nothing new."
        sys.exit(0)
    raise   # some other http error (like 404 not found etc); re-raise it.

last_modified = response.info().get('Last-Modified', False)
if last_modified:
    open(saved_time_file, 'w').write(last_modified)
else:
    print("Server did not provide a last-modified property. Continuing...")
    """
    Alternately, you could save the current time in HTTP-date format here:
    http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.3
    This might work for some servers that don't provide Last-Modified, but do
    respect If-Modified-Since.
    """

"""
You should get here if the server won't confirm the content is old.
Hopefully, that means it's new.
HTML should be in response.read().
"""

Also check out this blog post by Stii, which may provide some inspiration. I don't know enough about ETags to have put them in my example, but his code checks for them as well.
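For completeness, here is a rough sketch of how an ETag check could be layered on top of the same pattern (assuming the server actually sends an ETag header; the file name is just an example):

import sys
import os.path
import urllib2

url = 'http://www.iana.org/domains/example/'
etag_file = 'last etag.txt'  # example file name

request = urllib2.Request(url)
if os.path.exists(etag_file):
    # Send the ETag we saw last time; the server replies 304 if it still matches.
    request.add_header('If-None-Match', open(etag_file, 'r').read())

try:
    response = urllib2.urlopen(request)
except urllib2.HTTPError, err:
    if err.code == 304:
        print "Nothing new."
        sys.exit(0)
    raise

etag = response.info().get('ETag')
if etag:
    open(etag_file, 'w').write(etag)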


You can always detect ANY difference between the locally stored file and the remote data by hashing the contents of both. This is commonly employed to verify the integrity of downloaded data. For a continuous check, you will need a while loop.

import hashlib
import urllib

num_checks = 20
last_check = 1
while last_check != num_checks:
    remote_data = urllib.urlopen('http://remoteurl').read()
    remote_hash = hashlib.md5(remote_data).hexdigest()

    local_data = open('localfilepath', 'rb').read()
    local_hash = hashlib.md5(local_data).hexdigest()
    if remote_hash == local_hash:
        print('right now, we match!')
    else:
        print('right now, we are different')
    last_check += 1  # advance the counter so the loop eventually stops

If the actual data never needs to be saved locally, I would only ever store the md5 hash and recalculate the remote one on the fly when checking.


This answer is an extension of @DeaconDesperado's answer

For the sake of simplicity and faster execution, one could create a local hash initially (instead of storing a copy of the page) and compare it with the newly obtained hash.

To create the locally stored hash initially, one can use this code:

import hashlib
import urllib

remote_data = urllib.urlopen('http://remoteurl').read()
remote_hash = hashlib.md5(remote_data).hexdigest()

# Open the file with access mode 'w' so re-running this script
# overwrites the old hash instead of appending to it
file_object = open('localhash.txt', 'w')
# Write the hash
file_object.write(remote_hash)
# Close the file
file_object.close()

and then, in the checking loop, replace the local_data / local_hash lines with local_hash = open('localhash.txt').read() (pointing at wherever you saved the hash).

That is:

import hashlib
import urllib

num_checks = 20
last_check = 1
while last_check != num_checks:
    remote_data = urllib.urlopen('http://remoteurl').read()
    remote_hash = hashlib.md5(remote_data).hexdigest()

    local_hash = open('localhash.txt').read()

    if remote_hash == local_hash:
        print('right now, we match!')
    else:
        print('right now, we are different')
    last_check += 1

Sources: https://thispointer.com/how-to-append-text-or-lines-to-a-file-in-python/ and DeaconDesperado's answer.


It would be more efficient to do a HEAD request and check the Content-Length of the document.

import urllib2

# Read the length saved on a previous run; 'content_length.txt' is just an
# example file name standing in for wherever you keep it.
old_length = open('content_length.txt').read()

request = urllib2.Request('http://www.yahoo.com')
request.get_method = lambda: 'HEAD'  # send a HEAD request instead of GET

response = urllib2.urlopen(request)
new_length = response.info()["Content-Length"]
if old_length != new_length:
    print "something has changed"

Note that it is possible, although unlikely, for the content to change while the Content-Length stays exactly the same, but this is still the most efficient approach in terms of bandwidth. Whether it is suitable depends on what kind of changes you expect.
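To make the comparison work across runs, you would also want to write the new length back out afterwards -- a small sketch, reusing the example file name from the snippet above:

# Save the freshly observed length so the next run has something to compare.
open('content_length.txt', 'w').write(new_length)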
