Teaching a loved one about secure coding practices
I would say a great way to learn is for her to break the applications she has already written.
Assuming she is writing web applications, point her towards the OWASP Top 10. Have her see if she can find any of those flaws in her own code. There is no better way to learn about security concepts than actually seeing them happen in your own code.
Once a flaw has been found, have her rewrite the application to fix it. Doing so will help her appreciate the effect of things like sanitization and validation of user input, and parameterized queries.
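To make that concrete, here's a minimal sketch of the kind of before-and-after she might produce for one of the OWASP Top 10 flaws (injection). The table and values are hypothetical, using Python's built-in sqlite3 for illustration; the same idea applies to any database driver:

```python
import sqlite3

# Hypothetical users table for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

# Vulnerable: user input concatenated straight into the SQL string.
user_input = "alice' OR '1'='1"
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '%s'" % user_input
).fetchall()
# The injected OR clause is parsed as SQL and matches every row.

# Fixed: a parameterized query treats the input as a value, never as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
# No user is literally named "alice' OR '1'='1", so nothing matches.
```

Seeing her own query return rows it shouldn't, and then watching the one-line fix close the hole, teaches more than any slide deck.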
Take incremental steps. I wouldn't jump straight into designing a new application with security in mind before truly understanding what kind of code results in security flaws.
I'm going to take the position that may get me flambéed...
The problem I see is that secure programming is taught as an add-on. Best practices (including security) should be taught from the beginning. The lie people are taught is that practice makes perfect. The truth is that practice makes permanent. So if you are doing it wrong, you have to unlearn what you have learned. That is a bassackwards approach.
I would say that secure coding practices should be taught from day one. There's no reason to learn how to do it, and then learn how to do it securely. It's a waste of time and money...
My 2 bits, 1, and zero more to say...
While I agree in principle with Everett, there is another point of view. The point of a lesson is to learn a concept, which can then be further built on. This lessens the slope of the learning curve. Teaching too much too fast is overwhelming; when faced with an onslaught of information, most brains "leak".
It's great to say "Secure coding practices should be taught from day one", and very hard to demonstrate how that day-one "Hello World" program may be vulnerable, especially when "what is a computer program" is a new concept for the class. A website project touching shared data stores (which I didn't really see till at least halfway through college) is easier to show weaknesses in, but often those weaknesses are inherent in setting up a basic proof-of-concept web environment with the "default" settings. It's very easy to prove an application has vulnerabilities, when the vulnerabilities are known. It's harder (impossible, really) to prove none exist.
I think similarly to the OP; at some point in the development of a programmer, the idea of "how could someone use this code to do things you don't want them to do, and how can you prevent it" can be merged into what you're doing. That starting point is probably somewhere around beginning to learn either about external communication or persistence (reading/writing data to a hard drive or database, or sending it across a network channel), or the object-oriented principles (inheritance, overloading/overriding, etc), whichever comes first. This is where a coder can begin to write programs that have the power to do damage in the wrong hands, and thus care should be taken to ensure the wrong hands are not able to misuse the program.
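As one example of asking "how could someone misuse this?" at exactly that persistence/external-communication stage: a beginner's file-serving code usually trusts whatever filename it's handed. This is a hypothetical sketch (the directory name and function are mine, not from the thread) of checking a requested path against a base directory before touching the disk:

```python
from pathlib import Path

# Hypothetical: the app only ever intends to serve files from this directory.
BASE_DIR = Path("/var/app/uploads")

def is_safe(requested: str) -> bool:
    """Reject names that would escape BASE_DIR, e.g. '../../etc/passwd'."""
    resolved = (BASE_DIR / requested).resolve()
    # Path.is_relative_to requires Python 3.9+.
    return resolved.is_relative_to(BASE_DIR.resolve())

print(is_safe("report.pdf"))        # stays inside the directory
print(is_safe("../../etc/passwd")) # escapes it via traversal
```

The point isn't this particular check; it's the habit of asking, at the moment the program first reads or writes shared data, what an attacker could pass in.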
Some concepts are easy to grasp; "My program works with data that is a secret of its users; I am trusted with it and must ensure that only those who should see the data are allowed to". Some are harder; I've seen people who are genuinely shocked to discover that program binaries can be decompiled and read to discover hardcoded credentials or keys (and that some environments like .NET put enough metadata into the binaries to produce almost exactly the same source code used to build them), or that their assembled binary can be piggybacked on in its compiled form by an attacker who plugs in using public unsealed classes or members, and then has access to any secrets those classes work with. These are the basic gotchas that should be illustrated, and then solutions to them should be illustrated, and how and why they work explained.
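The "hardcoded credentials survive compilation" gotcha is easy to demonstrate in a few lines. Here Python bytecode stands in for any compiled binary; the same idea applies to running strings(1) on a C executable or a decompiler on a .NET assembly. The source snippet and password are made up for illustration:

```python
# A "compiled" program with a hardcoded credential.
source = '''
def connect():
    password = "hunter2"   # hardcoded credential
    return password
'''

code = compile(source, "<app>", "exec")

# The function's string literals sit in its code object's constants,
# in plain text, with no reverse engineering required.
nested = [c for c in code.co_consts if hasattr(c, "co_consts")][0]
print("hunter2" in nested.co_consts)
```

Watching the secret fall out of the compiled artifact, verbatim, is usually what converts the genuinely shocked into believers.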