MichalKrawczyk
Active Contributor

When we start a project, functional consultants and developers are jointly responsible for describing a set of test scenarios which always need to be executed to check that the interface works properly. Functional consultants contribute all of the important business scenarios which need to work, and developers extend those with cases where they know the interface is built in a complex way (multiple line items, summarizations and other complex mapping logic). Thanks to this cooperation we get a pretty decent set of integration scenarios which, once run, gives us confidence that the interface scenario works correctly. Running all of the prepared test scripts needs to happen in a few project phases:

a) during the first integration testing phase (when the interface is being executed end to end for the first time ever)

b) after we implement each change to the interface scenario during integration testing, user acceptance testing and any other testing phase which may be performed in between those two but before the first go-live

c) after go-live, when we need to fix an existing scenario or add new functionality to it

What does that look like in reality (at least from my 12 years of experience with >25 clients)?

a) during the first integration testing phase we check all possible scenarios - otherwise the interface would not work

b) after we implement each change to the interface scenario, we're usually in the middle of "rapid" development where everything needs to be finished ASAP, and in many cases the development was already approved, so testing is only run with a subset of the subset (maximum 1-2 test scripts)

c) after go-live, when we need to fix an existing scenario or add new functionality to it, we have a few choices:

- hot fix - needs to be done immediately (ASAP is too slow), so we fix it, run one test case and move to production (praying that it will not cause failures in any other scenario)

- new functionality - depending on the lead time: a small change either gets implemented if the lead time is short (meaning we don't test too much), or we don't implement the change at all (the testing team would need to run all possible test scripts, which takes 10 days, so the business realizes it can live without the change - sad, but it also happens)

What does that mean in reality? That we only have two choices:

a) we can push for running all prepared test scripts, but then we risk huge project delays or end up simply rejecting any changes to the existing interface scenarios

b) we can stop testing (hence the article's title), run one or two test scripts, and keep on praying when we transport to the production environment

What is the reason for that? I've asked myself the same question many times and I came to the conclusion that it's the lack of interface scenario testing tools. I'm not saying such tools don't exist, I'm only saying that they do not respond to the needs of both business and developers. What would those two groups need? I'm hoping for your input here, but let me present my own short list.

Developers:

a) being able to run the full set of interface scenario tests with a single click after implementing each change, without waiting for anyone else (especially on the business side) - see the sketch after this list

b) not having to go into any transaction/entry screen, as module knowledge cannot be mandatory for retesting an interface after a change

c) being able to test the interface both on development and on quality boxes (not only on quality after the change is transported)
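To make the "one click, full regression, on any box" idea more concrete, here is a minimal sketch in Python. It is only an assumption of how such a runner could look, not a description of any existing SAP tool: the endpoint URLs, the test_cases folder layout (one folder per scenario with input.xml and expected_output.xml) and the plain text comparison are all hypothetical and purely for illustration.

```python
import sys
import difflib
from pathlib import Path
from urllib import request

# Hypothetical endpoints - replace with the real dev/QA interface URLs.
ENDPOINTS = {
    "dev": "http://dev-middleware.example.com/interface/orders",
    "qa":  "http://qa-middleware.example.com/interface/orders",
}


def run_case(case_dir: Path, endpoint: str) -> bool:
    """Send the stored input payload and compare the response
    with the stored expected output."""
    payload = (case_dir / "input.xml").read_bytes()
    expected = (case_dir / "expected_output.xml").read_text()

    req = request.Request(endpoint, data=payload,
                          headers={"Content-Type": "application/xml"})
    with request.urlopen(req) as resp:
        actual = resp.read().decode("utf-8")

    if actual == expected:
        return True
    # Show a readable diff so the developer sees exactly what changed.
    print(f"FAILED: {case_dir.name}")
    print("\n".join(difflib.unified_diff(expected.splitlines(),
                                         actual.splitlines(),
                                         "expected", "actual", lineterm="")))
    return False


def main() -> None:
    # Pick the target system on the command line: "dev" or "qa".
    system = sys.argv[1] if len(sys.argv) > 1 else "dev"
    cases = sorted(p for p in Path("test_cases").iterdir() if p.is_dir())
    failed = [c.name for c in cases if not run_case(c, ENDPOINTS[system])]
    print(f"{len(cases) - len(failed)}/{len(cases)} scenarios passed on {system}")
    sys.exit(1 if failed else 0)


if __name__ == "__main__":
    main()
```

Running something like "python run_regression.py qa" after each change would replay every recorded scenario on the quality box (or the development box) without the developer opening a single transaction or waiting for anyone from the business - which is exactly what points a), b) and c) above ask for.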

Business:

a) being able to record a test case from any existing document which was processed in the past and posted correctly, without the need to recreate it from scratch

b) being certain that all of the fields will always be validated (and not only the ones selected during the initial test script preparation) - see the sketch after this list

c) test script execution in the background every day, validating all transports and changes done by the development teams (team members can change often and may not be aware of what needs to be retested from the technical perspective)
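For point b), here is a small sketch of what "all fields, always" could mean in practice: instead of checking only a handful of hand-picked fields, the payload produced after a change is flattened and compared field by field against the originally posted document. Again, this is just an illustrative assumption - the function names and the example XML are mine, and the same kind of script could run as a nightly background job after each batch of transports (point c).

```python
import xml.etree.ElementTree as ET


def flatten(element, path=""):
    """Turn an XML document into {full/field/path: value} so that
    every single field takes part in the comparison. The child
    position is kept in the path so repeated line items do not
    overwrite each other."""
    fields = {}
    current = f"{path}/{element.tag}"
    if element.text and element.text.strip():
        fields[current] = element.text.strip()
    for name, value in element.attrib.items():
        fields[f"{current}@{name}"] = value
    for index, child in enumerate(element):
        fields.update(flatten(child, f"{current}[{index}]"))
    return fields


def compare_payloads(expected_xml: str, actual_xml: str) -> list:
    """Return human-readable differences; an empty list means the new
    document matches the recorded original in every field."""
    expected = flatten(ET.fromstring(expected_xml))
    actual = flatten(ET.fromstring(actual_xml))
    problems = []
    for field in sorted(set(expected) | set(actual)):
        if expected.get(field) != actual.get(field):
            problems.append(f"{field}: expected {expected.get(field)!r}, "
                            f"got {actual.get(field)!r}")
    return problems


if __name__ == "__main__":
    old = "<Order><Header><Currency>EUR</Currency></Header></Order>"
    new = "<Order><Header><Currency>USD</Currency></Header></Order>"
    for line in compare_payloads(old, new):
        print(line)  # -> /Order[0]/Header[0]/Currency: expected 'EUR', got 'USD'
```

Because the original document is recorded as it was actually posted (point a), nobody has to decide up front which fields are "important enough" to validate - every field that ever changes shows up in the comparison.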

Request:

Does anyone have any input on this topic? I could also organize a session (SAP Mentor expert table) at SAP TechEd 2016 (Barcelona or Las Vegas) if someone is interested in discussing how to test/retest integration scenarios, or in showing how it's being done at their company. I'd kindly ask you to provide any input if you think this is a valid but not widely discussed topic.

Important info:

If the testing process looks completely different than described here, please do let me know, as I can only speak from what I've experienced.
