Why do operating systems have file size limits?
Filesystems need to store file sizes (either in bytes, or in some filesystem-dependent unit such as sectors or blocks). The number of bits allocated to the size field is usually set in stone when the filesystem is designed.
If you allow too many bits for the size, you make every file take a little more room, and every operation a little slower. On the other hand, if you allow too few bits for the size, then one day people will complain because they're trying to store a 20EB file and your crap filesystem won't let them.
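For a rough feel of the tradeoff, here is a small sketch (the field widths are illustrative, not taken from any real filesystem) that prints the largest byte count each width can record:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Illustrative size-field widths, in bits; sizes are byte counts. */
    int widths[] = { 16, 32, 64 };

    for (int i = 0; i < 3; i++) {
        int bits = widths[i];
        /* Largest value an unsigned field of that width can hold. */
        uint64_t max = (bits == 64) ? UINT64_MAX
                                    : ((uint64_t)1 << bits) - 1;
        printf("%2d-bit size field -> at most %llu bytes per file\n",
               bits, (unsigned long long)max);
    }
    return 0;
}
```

(Note that even an unsigned 64-bit byte count tops out around 16EB, so that hypothetical 20EB file really wouldn't fit.)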
At the time the filesystems you mention were designed, having a disk big enough to run into the limit sounded like science fiction. (Except for FAT32, but the company that promoted it intended it as an interim measure before everyone adopted their shiny new NTFS, plus they were never very good at anticipating growing requirements.)
Another thing is that until the end of the last century, most consumer (and even server) hardware could only do fast arithmetic on 32-bit values, and operating systems tended to use 32-bit values for most things, including file sizes. An unsigned 32-bit byte count tops out at 4GB, so operating systems tended to be limited to 4GB files regardless of the filesystem, and often even to 2GB because they used signed integers. Any serious desktop or server OS nowadays uses 64 bits for file sizes and offsets, which puts the limit at 8EB.
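A minimal sketch of where those 2GB and 8EB figures come from, assuming the usual convention that file offsets are signed (so only half the integer's range is usable):

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Assumed convention: signed file offsets, as most OS APIs use. */
    int32_t max_off32 = INT32_MAX;   /* 2^31 - 1, just under 2GB */
    int64_t max_off64 = INT64_MAX;   /* 2^63 - 1, just under 8EB */

    printf("signed 32-bit offset limit: %ld bytes (just under 2GB)\n",
           (long)max_off32);
    printf("signed 64-bit offset limit: %lld bytes (just under 8EB)\n",
           (long long)max_off64);
    return 0;
}
```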
The on-disk data structures are usually the limit. Research how these operating systems format their disks and how they track the portions of files on the disk, and you'll understand why they have these limitations. The FAT filesystem is pretty well documented online (see Wikipedia, for instance), and you can see that the choice of integer widths for some of the on-disk structure fields ends up limiting the maximum size of a file you can store with that disk format.
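As an illustration, here is a sketch of the classic FAT short directory entry layout, based on the public FAT documentation (the field names are mine). The last field is the one that matters: the file size is a single unsigned 32-bit integer, which is why no file on a FAT volume can exceed 4GB minus one byte, no matter how big the disk is:

```c
#include <stdint.h>
#include <stdio.h>

#pragma pack(push, 1)            /* match the on-disk layout exactly */
struct fat_dir_entry {
    uint8_t  name[11];           /* 8.3 short name                        */
    uint8_t  attr;               /* attribute flags                       */
    uint8_t  reserved;
    uint8_t  create_time_tenths;
    uint16_t create_time;
    uint16_t create_date;
    uint16_t access_date;
    uint16_t first_cluster_hi;   /* high half of start cluster (FAT32)    */
    uint16_t write_time;
    uint16_t write_date;
    uint16_t first_cluster_lo;   /* low half of start cluster             */
    uint32_t file_size;          /* file size in bytes: only 32 bits wide */
};
#pragma pack(pop)

int main(void)
{
    printf("directory entry: %zu bytes on disk\n", sizeof(struct fat_dir_entry));
    printf("largest recordable file size: %lu bytes\n",
           (unsigned long)UINT32_MAX);   /* 4GB minus one byte */
    return 0;
}
```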