Protect an application from being modified
The scenario you describe is very similar to the concept of "remote attestation". There has been a lot of research on this, and there are two major results:

1. You need a trust anchor, such as a TPM or a trusted system service, to securely measure your app and report the results to the server. Otherwise an attacker can always build a simulator that generates the correct responses [1].

2. Even once you have deployed all of the trusted computing infrastructure, you still can't prevent, or even detect, exploitation of vulnerabilities in your app with any more assurance than today's standard anti-buffer-overflow technology provides.
So, if you want to deploy this today, your best option is code obfuscation. Essentially, you would be implementing a copy protection mechanism of the kind that has been implemented and broken for decades.
[1] There have been some very cool advances that exploit the computation and communication limits of the client platform, but this is still sci-fi.
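To make the simulator point concrete, here is a minimal sketch, assuming a naive challenge-response scheme in which the server sends a random nonce and the client is supposed to return SHA-256(nonce || binary). The names and the placeholder binary are made up for illustration:

```python
import hashlib
import os

# A stand-in for the shipped application binary; in reality this would be
# the bytes of the executable on disk.
PRISTINE_BINARY = b"\x7fELF...pretend this is the real program..."

def expected_response(nonce: bytes) -> bytes:
    # What the server computes from its own reference copy of the binary.
    return hashlib.sha256(nonce + PRISTINE_BINARY).digest()

def simulator(nonce: bytes) -> bytes:
    # The attacker's stand-in for the app: it never runs the real code at
    # all; it just answers every challenge from a saved, unmodified copy.
    return hashlib.sha256(nonce + PRISTINE_BINARY).digest()

# The server's challenge-response check. The simulator passes every time,
# which is why a software-only measurement proves nothing by itself.
nonce = os.urandom(16)
assert simulator(nonce) == expected_response(nonce)
```

The nonce prevents replay of a single recorded answer, but it doesn't help: anyone holding an unmodified copy of the binary can compute a fresh, correct answer to every challenge without running your code.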
What are the flaws in this scenario?
The basic flaw here is that you are assuming that the remote party is adhering to your rules.
You have a server with a program.
You receive a download request.
At this point you have no idea who or what the remote downloader is.
Once it is downloaded, an adversary can save it wherever they please. They can run your program under any of a number of debuggers or emulators, or on whatever physical hardware they choose. It is quite trivial for the adversary to prevent your program from communicating with any remote system while they debug, disassemble, run, modify, or analyze it.
That someone will be able to disassemble the algorithm that computes the hash from the program, and later modify the program to send the same value to the server?
The basic problem with sending the hash to the server is that the user has no incentive to allow the program to send that hash.
Imagine a standard device with an application firewall. The user downloads and runs your program unmodified. When your program attempts to send the hash to the server, the application firewall pops up a dialogue asking 'Application YourApp is attempting to connect to the internet, do you want to allow this?' The user has no incentive to say yes.
Now imagine an adversary who has modified your program. They run your application on a standard device with an application firewall. When the application firewall pops up its dialogue, the adversary has every incentive not to allow the program to communicate with the server.
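And even if the report does get through, the concern quoted above is well founded. A minimal sketch, assuming the check is a plain hash of the program's own bytes; the byte strings are illustrative placeholders:

```python
import hashlib

ORIGINAL = b"original program bytes"
PATCHED  = b"original program bytes, patched"

# What the unmodified program sends: a hash of its own bytes.
honest_report = hashlib.sha256(ORIGINAL).hexdigest()

# After disassembling the hash routine, the attacker does not even need to
# re-implement it: the known-good value can simply be hardcoded into the
# patched build.
KNOWN_GOOD = hashlib.sha256(ORIGINAL).hexdigest()

def spoofed_report() -> str:
    # The patched program (PATCHED) sends this constant instead of hashing
    # itself, so the server receives the same report either way.
    return KNOWN_GOOD

assert spoofed_report() == honest_report
```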
Do you think that this is still a lot of work for someone to do, and that it may not be worth it?
The problem of remotely verifying software is a fundamental one that has been worked on for decades. There are approaches that work well for specific scenarios and specific attackers. However, there is no general solution.
For your scenario, in which some remote users have complete control over the target hardware platform, the application binary is easy to disassemble, and there is no incentive for a user to allow your application to communicate with your server, you will at best detect very few of the total modifications. At worst, every single modification will go unreported, and the unmodified reports will give you a false sense that no one is modifying your software.
You can make it harder for an attacker to modify the files, but you cannot prevent modification of anything you give away to the attacker in the end.
That is assuming the attacker has full access to their computer. There is some work being done by the Trusted Computing Group and other vendors to restrict the abilities of the owner. Those trusted computing modules are mostly used in game consoles and smartphones. But as soon as this protection is moved out of the way (e.g. the phone is "rooted"), the above paragraph applies.
Such modifications include the checksum sent to the server. To give a noteworthy example: in the Netherlands, the CEO of Nedap claimed that their voting computers were dedicated special-purpose machines that could be used only for elections and nothing else. WVSN and the CCC ported an open source chess program to them to prove the CEO wrong. As a special protection feature, the voting computers have a button that calculates the checksum of the program and displays it, in order to detect manipulation. The chess program displayed the same number. (There is a Heise article on this in German.)
You can make it more difficult by using many different checksums, and by not using them in plain comparisons but as inputs to calculations: Skype uses checksums to calculate the destinations of JMPs as an anti-debugging technique. A standard breakpoint modifies the debugged program and thus causes it to jump to the wrong places. Skype was eventually reverse engineered by running a second copy as an oracle. (See Silver Needle in the Skype.)
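Here is a toy sketch of that idea, using Python bytecode as a stand-in for the machine code Skype actually protects; the handler names and the checksum function are made up. In the real technique, a debugger's INT3 breakpoint byte (0xCC) written into the protected region silently changes the checksum and therefore the computed jump target:

```python
def real_work():
    return "checksum intact: doing real work"

def decoy_a():
    return "derailed"

def decoy_b():
    return "also derailed"

handlers = [decoy_a, real_work, decoy_b]

def checksum(code_bytes: bytes) -> int:
    return sum(code_bytes) & 0xFFFF

# "Build step": the protector records an offset so that, for the pristine
# bytes, the computed index lands on real_work (index 1).
PRISTINE = real_work.__code__.co_code
OFFSET = (1 - checksum(PRISTINE)) % len(handlers)

def dispatch():
    # The checksum is recomputed over the live code bytes and used directly
    # as the jump index; there is no "if checksum != expected" branch for
    # an attacker to find and patch out.
    idx = (checksum(real_work.__code__.co_code) + OFFSET) % len(handlers)
    return handlers[idx]()

print(dispatch())  # reaches real_work, unless the bytes have been altered
```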
Another example, which probably fits your goals better, is Second Life. Second Life is an online game in which players can create and sell their own content. Despite its DRM, every type of information that is sent to the client (textures, animations, sound) has been illegally copied. Only the information that is kept on the server (user scripts) is secure. I went into more detail on this at Are there DRM techniques to effectively prevent pirating?
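A minimal sketch of that design principle, with hypothetical names: the valuable logic stays on the server, and only inputs and observable results cross the wire, so there is nothing for the client to copy:

```python
class Server:
    def __init__(self):
        # The valuable asset lives here; the client never receives this code.
        # Stand-in for a user script that moves an object one step right.
        self._script = lambda position: (position[0] + 1, position[1])

    def handle(self, event):
        # Run the secret logic server-side and return only the result.
        return self._script(event)

class Client:
    # Receives textures, sounds, positions: anything sent here can be
    # copied, but the script itself never arrives.
    def __init__(self, server):
        self.server = server

    def move(self, position):
        return self.server.handle(position)

client = Client(Server())
print(client.move((0, 0)))  # (1, 0): the result crosses the wire, the logic does not
```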