Unit Tests vs. Acceptance Tests
As a summary of all the above:
- Acceptance tests make sure that you're building the right thing
- Unit tests make sure that you're building the thing, right
Acceptance and integration tests tell you whether your code is working and complete; unit tests tell you where it's failing.
If you've done a good job with acceptance and integration tests, and they're passing, your code implements all the functionality it's supposed to, and it's working. That's great to know (and it's just as useful to know when it isn't). But if the code isn't working, an acceptance test won't give you much insight into what has gone wrong; because it exercises many units of functionality at once, it gives you only a bird's-eye view of the failure. This is where unit tests shine. Good unit tests tell you exactly what went wrong, in exactly which part of your code. It's harder to know whether you've written enough unit tests than enough acceptance tests, but when you have a failing acceptance test without a corresponding failing unit test - it's time to write that unit test.
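As a rough illustration, here is a minimal sketch in Python with pytest. The names (`Cart`, `apply_discount`, the checkout flow) are hypothetical, not from any particular codebase; the point is only the contrast between the two kinds of failure report. If the acceptance-style test fails, you know checkout is broken somewhere; the unit test names the exact function.

```python
# Hypothetical example, not from any real project; pytest is assumed.
import pytest


def apply_discount(price: float, percent: float) -> float:
    """Return the price reduced by the given percentage."""
    return price * (1 - percent / 100)


class Cart:
    def __init__(self):
        self.items = []

    def add(self, name: str, price: float) -> None:
        self.items.append((name, price))

    def total(self, discount_percent: float = 0.0) -> float:
        subtotal = sum(price for _, price in self.items)
        return apply_discount(subtotal, discount_percent)


def test_checkout_flow_acceptance():
    # Bird's-eye view: exercises several units of functionality at once.
    cart = Cart()
    cart.add("book", 20.0)
    cart.add("pen", 5.0)
    assert cart.total(discount_percent=10) == pytest.approx(22.5)


def test_apply_discount_unit():
    # Pinpoint view: if the discount math is wrong, this test names the culprit.
    assert apply_discount(100.0, 10) == pytest.approx(90.0)
```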
That covers the testing perspective. But of course, TDD (and ATDD) isn't really about testing. With respect to driving your design, acceptance tests give you a broad roadmap ("here's where you want to go") while unit tests take you to the next intersection ("turn left"). They're both valuable in this regard and, again, their roles complement one another.
Don't confuse them; don't intermingle them. Unit tests, in particular, shouldn't depend on anything else, and it would be a mistake to constrain your unit tests by making acceptance tests dependent on them. They can share some framework code, of course, but they should remain independent.
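To make "share framework code, but stay independent" concrete, here is one way it might look, again as a hedged sketch with made-up names (`format_receipt`, `make_order`) and everything squeezed into one file for brevity. In a real project the shared helper would sit in its own module (e.g. a conftest.py) and the two test functions would live in separate unit and acceptance suites; the point is that both suites use the same helper, but neither test calls the other.

```python
# One-file sketch (hypothetical names): shared framework code, independent tests.

def format_receipt(order):
    """Application code under test: render an order as a receipt string."""
    lines = [f"{name}: {price:.2f}" for name, price in order["items"]]
    lines.append(f"TOTAL: {order['total']:.2f}")
    return "\n".join(lines)


def make_order(*line_items):
    """Shared framework code: both suites build orders this way."""
    return {"items": list(line_items),
            "total": sum(price for _, price in line_items)}


def test_receipt_total_line_unit():
    # Unit test: depends only on the shared helper and the unit under test.
    order = make_order(("book", 20.0))
    assert format_receipt(order).endswith("TOTAL: 20.00")


def test_customer_sees_full_receipt_acceptance():
    # Acceptance-style test: uses the same helper, but never calls the
    # unit test above -- the two remain independent.
    order = make_order(("book", 20.0), ("pen", 5.0))
    receipt = format_receipt(order)
    assert "book: 20.00" in receipt
    assert "pen: 5.00" in receipt
    assert receipt.endswith("TOTAL: 25.00")
```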
To minimize redundant work, is it a good idea to incorporate unit tests into acceptance tests -- that is, to have the latter [acceptance] call the former [unit]?
No.
Does going in the opposite direction make any sense?
Don't bother.
Acceptance tests are often political. You show them to people who -- based on their gut instinct -- decide to accept or reject.
Then you argue about the validity of the acceptance tests.
Then you argue about the scope of work and the next release.
Acceptance tests aren't -- generally -- technical. If they were, then you'd have formal unit tests and that would be that.
Don't try to finesse the politics. Embrace it. Let it happen.
You can hope that Acceptance Test-Driven Development (ATDD) leads to "acceptance tests are written and agreed upon by the entire team before development begins." But you have to face the reality that anything written in advance is negotiable at best and specious at worst.
The premise behind all Agile methods is that you can only agree to get to something releasable. Everything after that is negotiable.
The premise behind all test-first approaches (TDD, ATDD, or anything else) is that a test is an iron-clad agreement. Except it isn't. With any TDD (or ATDD) method you can agree -- in principle -- to the test results, but you haven't really agreed to the test itself.
It may arise that the test cannot easily be written. Or worse, cannot be written at all. You may agree to results that seem testable, but turn out to be poorly-defined. What now? These are things you can't know until you start development and get to details.
All testing is important. And no particular kind of testing can be a superset or subset of any other kind of testing. They're always partially overlapping sets. Trying to combine them to save some work is likely to be a waste of time.
More testing beats clever test consolidation: the union of all your tests has more value than any forced subset-superset relationship among them.