Some time ago, I was in the always-super-happy and not-stressful-at-all situation of looking for a job, sitting in those makeshift interview rooms, ready for anything they would throw at me.
At one company I was grilled by two heavyweights, including the chief architect. I presented a challenging feature I had tackled in the past, along with two possible approaches to implementing it - each carrying its own merits. At which point I was asked: How would you judge which of the two solutions is better? Could you make a generalized statement about the guidelines to apply? At the end of the interview, the architect-***-guru said: You know, the correct answer to that question is different; the better solution is the one that is easier to test. He paused for a bit, then remarked: I bet you've never heard that answer before, eh?
I did pass that interview regardless, and I'm not as much of a zealot as that guy was, but there's something in that remark I'd like to raise here.
This point is not about the cool tools you could use. Testing tools are plentiful: JUnit variants, clever hackery for mocking, Selenium and the like for testing websites, and the list goes on. With all due respect to all of these, I've come to realize that if you want to automatically test your product, testability must be one of your prime considerations when designing your app, and this is especially true when developing the client side. A server usually has well-defined entry points - the "API" the server exposes - and naturally you use these APIs for automated tests; your real concerns usually involve having a good test setup (realistic data entities or mocks, a lightweight container instead of the full one, etc.). UIs, however, don't have a natural API - just a maze of features and possible usage scenarios to walk through. To me, this fact of life makes complex UIs usually much harder to develop than servers. Their usage patterns are just not as orderly as server code; they are as chaotic as some users seem to be.
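To make the server side of that contrast concrete, here is a minimal sketch of what "testing through well-defined entry points" looks like. All the names here are hypothetical - the point is only that the test setup is an in-memory stand-in for the real store, while the test itself drives exactly the same API that real clients would call.

```python
# Hypothetical example: a server exposes a well-defined entry point,
# so a test only needs a lightweight setup (an in-memory repository
# standing in for the real database).

class InMemoryUserRepo:
    """A lightweight stand-in for a real data store."""
    def __init__(self):
        self._users = {}

    def add(self, user_id, name):
        self._users[user_id] = name

    def get(self, user_id):
        return self._users.get(user_id)


def rename_user(repo, user_id, new_name):
    """A server-side entry point: clients and tests both go through here."""
    if repo.get(user_id) is None:
        raise KeyError(user_id)
    repo.add(user_id, new_name)
    return repo.get(user_id)


# The test exercises the same API the real clients use.
repo = InMemoryUserRepo()
repo.add(1, "Alice")
assert rename_user(repo, 1, "Alicia") == "Alicia"
```

There is nothing clever here, and that is the point: the hard part of server testing is the setup, not finding something to call.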
If you've ever recorded a macro in Microsoft Excel or Word, you might have noticed that every action you perform has a parallel in script. This seems to be a rule that guides the Office developers: every action must be exposed through the COM API, period. One would guess that actions in the Office UI are just a thin layer over pretty much the same API. If these people weren't so obsessive about this, that level of scripting coverage would not be possible, and it would probably break again and again on every commit.
This made me realize that if you want to test, you must build that same breadth and clarity of API into your UI as well. Your UI must be just a layer over a data and action model that other (scripted) clients can also use. Nothing in that observation is new, I know, but it is so easy to break: imagine your app has a tree control which renders some hierarchical list of entities, each with its own actions, state, permissions, etc. Would you make sure that none of that logic is implemented inside the control itself? If so, you're lucky - this is one of the things that tends to break and leak all over the place over time, even if you started out cleanly enough.
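A minimal sketch of what that separation could look like for the tree-control example, with entirely illustrative names: the hierarchy, its actions, and its permissions live in a model object, and the control is nothing but a renderer over it. A scripted client - or a test - drives the model directly, never the control.

```python
# Hypothetical sketch: state, actions and permissions live in a model
# that any client (the UI control, a script, a test) can drive.

class Node:
    def __init__(self, name, can_delete=True):
        self.name = name
        self.can_delete = can_delete
        self.children = []


class TreeModel:
    """All the logic: actions, state, permissions. No rendering here."""
    def __init__(self, root):
        self.root = root

    def delete(self, parent, child):
        if not child.can_delete:
            raise PermissionError(child.name)
        parent.children.remove(child)


class TreeControl:
    """A thin rendering layer over the model; it adds no logic of its own."""
    def __init__(self, model):
        self.model = model

    def render(self, node=None, depth=0):
        node = node or self.model.root
        lines = ["  " * depth + node.name]
        for child in node.children:
            lines.extend(self.render(child, depth + 1))
        return lines


root = Node("root")
docs = Node("docs")
system = Node("system", can_delete=False)
root.children = [docs, system]

model = TreeModel(root)
model.delete(root, docs)            # a scripted client uses the model API
print(TreeControl(model).render())  # prints ['root', '  system']
```

Note that the test never has to inspect the control: once the permission check sits in `TreeModel.delete`, asserting that `system` cannot be deleted needs no UI at all. The moment that check migrates into the control's button handler, this stops being true - which is exactly the leak described above.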
All that is well and good, but then... there's this thing called websites - and we would just love to test those automatically. Unfortunately, web UIs do not lend themselves so easily to being API-friendly. If you work with Selenium, you might find yourself validating some feature by looking for a span inside a div inside another div, holding some specific piece of text which is bound to change without notice. If you take the Cucumber approach on top of WebDriver, you might find that you're still doing pretty much the same, but with a nice (really nice) descriptive layer on top.
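One common way to contain that fragility is the page-object pattern: the brittle "span inside a div inside a div" selector lives in exactly one place, and tests talk only to a named API. A minimal sketch, with a trivial stub standing in for a real Selenium WebDriver so the example is self-contained; all the class names and the selector are illustrative.

```python
# Hypothetical page-object sketch. StubDriver fakes a browser by
# mapping CSS selectors to text; in real life you would hand the page
# object a Selenium WebDriver instead.

class StubDriver:
    """Pretends to be a browser: maps CSS selectors to element text."""
    def __init__(self, dom):
        self._dom = dom

    def find_text(self, css_selector):
        return self._dom[css_selector]


class OrderPage:
    """Page object: the only place that knows the markup details."""
    STATUS_SELECTOR = "div.order > div.meta > span.status"  # illustrative

    def __init__(self, driver):
        self.driver = driver

    def order_status(self):
        return self.driver.find_text(self.STATUS_SELECTOR)


# A test talks to the page object, never to the markup.
driver = StubDriver({"div.order > div.meta > span.status": "shipped"})
assert OrderPage(driver).order_status() == "shipped"
```

When the markup inevitably changes, only `STATUS_SELECTOR` moves; the tests stay put. That is a real improvement - but notice it only papers over the problem this post is about: the page object is a substitute for the API the UI never had.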