SLIDE 34

testability API to perform the rename. The tests bypass the rename dialog completely.
At this level, multiple components are tested together, but without the entire system under test (SUT) being present. In the rename example, the tests targeting this level use the testability API to exercise the feature, as at the object model level. The difference is that Visual Studio itself is not loaded while running the tests. Instead, only the C# editor and a couple of dependent components are loaded in isolation, and the tests interact directly with the editor component.
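As a rough illustration of component-level isolation (not the team's actual harness, and in Python rather than C# for brevity), the component under test can be loaded with a minimal fake host that stands in for the IDE services it depends on. The FakeHost and EditorComponent names here are entirely hypothetical:

```python
# Hypothetical component-level sketch: the editor component is constructed
# with a minimal fake host instead of the full IDE, and the test drives the
# component's API directly.
class FakeHost:
    """Stands in for the IDE services the editor component depends on."""
    def get_file_contents(self, name):
        return {"test.cs": "class Class1 { }"}[name]

class EditorComponent:
    """Toy stand-in for the C# editor component."""
    def __init__(self, host):
        self.host = host

    def rename(self, file, old, new):
        # A real rename is semantic; plain text replacement is enough
        # to illustrate the isolation idea.
        return self.host.get_file_contents(file).replace(old, new)

editor = EditorComponent(FakeHost())
result = editor.rename("test.cs", "Class1", "NewClass1")
print(result)
```

Because the component talks only to the host interface, the same test runs without the rest of the system being present.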
At this level, individual APIs are tested in isolation, without taking into account the interactions between components. Unit tests fall into this category. For the rename example, unit tests were written to test each public member of the classes that implement the feature.
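A unit-level test of this kind might look like the following sketch (in Python for brevity; the SymbolTable class and its rename method are made up here to illustrate testing a single public member in isolation):

```python
# Hypothetical unit-level sketch: one public API of a class that implements
# part of the rename feature is exercised directly, with no editor or UI.
import unittest

class SymbolTable:
    """Toy stand-in for a class participating in the rename feature."""
    def __init__(self, names):
        self.names = set(names)

    def rename(self, old, new):
        if old not in self.names:
            raise KeyError(old)
        self.names.remove(old)
        self.names.add(new)

class RenameTests(unittest.TestCase):
    def test_rename_replaces_symbol(self):
        table = SymbolTable({"Class1"})
        table.rename("Class1", "NewClass1")
        self.assertEqual(table.names, {"NewClass1"})

    def test_rename_missing_symbol_raises(self):
        table = SymbolTable({"Class1"})
        with self.assertRaises(KeyError):
            table.rename("Missing", "X")
```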
Layered approach – Implementation
Let’s look at how the layered approach was actually implemented. The overall test strategy for the C# editor focused on all four layers mentioned above. The degree of focus varied with the specific feature being tested and the point in the product cycle. For example, toward the beginning of the product cycle, when the UI was constantly changing, the tests were written against the object level so that they would not break because of UI changes. As the product UI stabilized toward the end of the cycle, the tests were targeted to run through the UI.

The team wanted to reuse tests as much as possible across multiple levels and avoid writing duplicate tests for each level. In the older approach, tests contained too much information about the test execution environment: which dialogs to open, which buttons to click to navigate to the next UI element, and so on. Because the tests were tightly coupled to one target level, they could not be reused at other levels. The test data was also embedded in the tests themselves, resulting in a lot of duplication across different test data. For example, in the rename scenario, separate tests were written to rename a method in a class, to rename a property in a class, and so on, even though the steps in all of these tests were essentially the same.

To avoid these problems, a simple test engine was written that abstracted the execution details away from the tests. A test then became simply the set of actions a user would perform on the SUT; the test engine took care of interacting with the product. This abstraction layer made it possible to execute the same tests against multiple layers, when needed, at little additional cost. The test data was also separated from the tests, so the same tests could be executed against multiple sets of test data (data-driven testing).

Going back to the rename example, tests for the rename scenario were written as a series of actions to be performed on the SUT, in XML format as shown below. This acts as a test ‘intent’ file. Note that the file contains no information about how these actions are to be performed.

<action name="CreateProject" />
<action name="OpenFile" fileName="test.cs" />
<action name="Rename" oldName="Class1" newName="NewClass1" />
<action name="AddConditionalSymbol" symbol="VERIFY" />
<action name="Build" />
<action name="Run" />
<action name="CleanProject" />