Mobile automation: Boss wants 100% coverage. How feasible is that?
I've just started a new mobile automation role on an iOS app built with React Native.
I'm not new to automation, but I am new to React Native (with Detox and Jest). The learning curve is slow, but we're getting there.
My boss wants a complete regression pack. That's fine, no problem.
However, he also wants 100% coverage. That's my issue.
I will be using grey-box testing, though I do think certain principles are universal regardless.
I was always taught "don't automate all the things; automate the RIGHT things".
I will be automating on a simulator (an iPhone X). The thought has occurred to me: can I automate on a real device with grey-box testing? I'd like to, as that would give me more robust smoke and sanity tests.
As I'm new to this role and still learning, discerning the RIGHT things is the hard part, because I don't want to create unnecessary tests that become roadblocks.
Since I'm building this framework from scratch, has anyone been in this position before?
How did you deal with it?
automated-testing test-automation-framework mobile reactjs
Unless the application is incredibly trivial, it is impossible. One of the seven principles of software testing is that exhaustive testing is impossible. If your boss is asking for 100% coverage, remind them that execution time and all associated costs will rise. Testing based on risk and priority would be far more beneficial than a "test everything" approach.
– trashpanda, yesterday
When you say 100% coverage, are you talking about all code paths in the project, or 100% coverage of the different phone devices?
– Alex KeySmith, yesterday
@AlexKeySmith I think that is a very good question. This is what I've noticed in the last 10 days: I will be automating on a simulator and not a real device.
– fypnlp, yesterday
You don't have 100% test coverage unless you've tested every possible interrupt at every possible CPU instruction in the program. And that assumes the program has no input. It's worse for real programs.
– Joshua, yesterday
Coverage is like the speed of light: you can get arbitrarily close to 100% (with your investment growing exponentially as you get closer and closer to it), but you can never actually reach it.
– vsz, 21 hours ago
asked yesterday by fypnlp, edited yesterday
6 Answers
Your boss doesn't want a flat "No", or to hear that their request is impractical.
They want to reduce the risk of releasing changes to the application.
Make your boss choose your priorities; that is one of the roles of a manager.
1. Gather scenarios based on team knowledge, and work with your boss to rank them by risk.
2. Implement the top-priority scenarios.
3. Go back to step 1.
Eventually you will hit diminishing returns and can move on to the next task.
– rickjr82, answered yesterday
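The risk-ranking loop above can be sketched in a few lines. This is a minimal illustration, not a real framework; the scenario names and the likelihood/impact scores are invented for the example.

```javascript
// Hypothetical sketch: score candidate scenarios by risk (likelihood x impact),
// rank them, and pick the top few to automate in the next iteration.
const scenarios = [
  { name: "login", likelihood: 3, impact: 5 },
  { name: "checkout", likelihood: 4, impact: 5 },
  { name: "settings screen", likelihood: 2, impact: 1 },
  { name: "push notifications", likelihood: 3, impact: 3 },
];

function rankByRisk(items) {
  return [...items]
    .map((s) => ({ ...s, risk: s.likelihood * s.impact }))
    .sort((a, b) => b.risk - a.risk);
}

const ranked = rankByRisk(scenarios);
// Automate the highest-risk scenarios first, e.g. the top two:
const nextBatch = ranked.slice(0, 2).map((s) => s.name);
console.log(nextBatch); // nextBatch is ["checkout", "login"]
```

The point is that the scoring comes from the team and the boss, not from the tooling; once the list is ranked, "which tests to write next" stops being your problem alone.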
I love that. That's an excellent idea.
– fypnlp, yesterday
John Ruberto wrote an article some years ago on StickyMinds entitled "Is 100% Unit Test Coverage Enough?". The article can be found here:
https://www.stickyminds.com/article/100-percent-unit-test-coverage-not-enough
In it he argues that there are different kinds of coverage. You could cover 100% of requirements, but that doesn't include the edge cases a tester might look for during exploratory testing. It doesn't include security tests or UI tests. It doesn't cover unit tests. Now we're up to five passes over the application, or "500%" testing, and we haven't even covered the explosion of tests that configuration testing will bring.
Your boss is throwing out a number. What he actually wants is assurance that the thing that got built is going to do what it's supposed to do. Instead of getting hung up on a number as a minimum pass requirement, show him that, based on your experience, you covered as much of the application in as many ways that matter as you could think of, and found this list of things that probably shouldn't happen. Based on that information, your boss or his boss will have to make a go/no-go decision.
Lee Copeland offers a different take on the subject, including answers to the question "When do I stop testing?", in his presentation "The Banana Principle for Testers", found here:
http://www.squadco.com/Conference_Presentations/Banana_Principle.pdf
Asking for a minimum pass requirement as a percentage of anything is management by numbers, and it takes the go/no-go decision out of your boss's hands. He's trying to absolve himself of responsibility for failure by placing the requirement on you. In a shop where people make mistakes and learn from them instead of laying blame, this type of requirement doesn't come up.
– Jerry Penner, answered yesterday, edited 14 hours ago
Can I just say this is a shining example of how to link-and-summarize? Bravo!
– corsiKa♦, yesterday
It's always good to use numbers to make your point:
- There are N models and sub-models of iPhone (for some applications you should also count the phone's network sub-type), each with M available iOS versions and sub-versions (this is not entirely accurate, as some models and iOS versions don't work together, but never mind that now).
- Older iOS versions need different automation tools.
- You can communicate over WiFi, 3G or 4G, and maybe Bluetooth and USB.
- You will want to test P app versions back on newer iOS and iPhone versions.
Multiply the above and present the result to your boss.
– Rsf, answered yesterday
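The multiplication above fits on a napkin. The counts below are illustrative assumptions, not real Apple figures; the point is only how quickly the product grows.

```javascript
// Back-of-the-envelope sketch of the combinatorial explosion described above.
// Every count here is an assumption picked for illustration.
const models = 20;      // iPhone models and sub-models (N)
const iosVersions = 8;  // iOS versions and sub-versions per model (M)
const networks = 4;     // WiFi, 3G, 4G, Bluetooth
const appVersions = 3;  // app versions you still support (P)

const configurations = models * iosVersions * networks * appVersions;
console.log(configurations); // 1920
```

If one full regression run takes an hour per configuration, "100% coverage" of even this modest matrix is 1,920 machine-hours per release, before you add a single new test.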
It depends what they consider 100%, but my basic answer would be "no", especially if these are UI-level tests. Tests at this level are slow and expensive to write and maintain.
It looks like you know this already, but write tests that cover the broad paths through the application and its functionality.
– anonygoose, answered yesterday
I've spent a fair bit of time working on safety-related software. "Shit fails, people die" kind of safety.
The first thing to note is that we had 100% test coverage of requirements. However, that testing didn't have to be automated, sometimes because it wasn't practical and sometimes because it wasn't physically possible. No safety-related standard requires 100% automated testing even for the most critical of systems, so the idea of doing it for an iPhone app is pretty ludicrous.
The next thing to note is that there are various sorts of coverage you might want for unit testing. Statement coverage is a nice start, because it guarantees that at least all the code is reachable, but it says nothing about whether the code has been tested for correctness. Branch coverage is better, but you still can't be sure why it branched. Condition coverage is more thorough and checks that for a comparison you've exercised the less-than, equal and greater-than cases, but now your testing is expanding. Add combinatorial logic testing to check one of each condition true, all conditions true, and no conditions true. Add boundary coverage to check maximum/minimum value inputs and outputs. And so on.
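A toy example makes the gap between these coverage levels concrete. The function and inputs below are invented purely for illustration, but they show how a single test can score 100% statement coverage while leaving branches and conditions untested.

```javascript
// One test call can execute every statement while missing branches
// and individual conditions. Toy function, purely for illustration.
function discount(age, isMember) {
  let rate = 0;
  if (age >= 65 || isMember) {
    rate = 0.1;
  }
  return rate;
}

// Statement coverage: one call taking the true branch executes every line...
discount(70, false); // ...but the false branch is never exercised.

// Branch coverage adds the case where the `if` is not taken:
discount(30, false);

// Condition coverage also needs each sub-condition driven both ways:
discount(30, true); // age < 65, but isMember alone triggers the discount

// Boundary coverage would add age = 64 and age = 65, and so on: each
// stricter criterion multiplies the number of tests required.
```

This is why "100% coverage" is meaningless until someone says coverage *of what*: each stricter criterion subsumes the previous one and costs more tests.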
On an average project, we reckoned on around 5-10% of our time being spent on coding, 10% on requirements, and 20% on design. The rest was testing. And that was for the lower safety levels. For serious safety work where we had to unit test like that, we reckoned on doubling that test time.
We actually analysed this and decided to abandon function-level unit tests for most things. We found that the defect-to-cost ratio wasn't justifiable, and many of the bugs could not in fact manifest in the system because of limits higher up. Instead, we ramped up the amount of module-level and system-level testing, because the evidence showed that this was where the majority of customer-visible bugs were found.
So your boss needs to decide why he thinks this is a good idea. He needs to define what "100%" measurements you should be aiming for. And he needs to justify a 10x increase in the time to complete your project, because that's what your estimated delivery date should now say.
– Graham, answered yesterday
@Wildcard I get where you're coming from, but we put a lot more effort into requirements capture, so it covered a lot of stuff that might more normally be left to design. When requirements can be implemented unambiguously, design falls out much more naturally. If anything, I've overestimated the coding time, though.
– Graham, 18 hours ago
@Wildcard I just read your first link. That's what we aimed for. Like the guy in there said, it's all about getting it in the spec.
– Graham, 18 hours ago
Welcome to managing your manager. I agree with the sentiment that your boss doesn't want to hear a bare "No"; on its own that isn't a sufficient answer. But the answer is almost assuredly "no".
What you need is a way to communicate how and why to limit your test scope. Even if you tested all the permutations of every make and model, screen size and OS, you would soon see that, oh, there are different versions of each OS, and then, oh, different browsers, and oh, those browser versions. Oh wow, now it's next year and you have a new job because you missed the bug in your other product.
So really you need a risk-based approach. Knowing your test variables and what they mean for your likelihood of finding a bug is crucial.
Find out what your customer base is using. Someone should be monitoring or collecting this. And if you haven't rolled this out yet, get some consulting from a company like Perfecto.
Speaking of Perfecto, you will need to vary your tests in your automation by desired capabilities or something like that. If you have to write different automation for each of the mobile device profiles in your test bed, then forget about it. Use a visual context, or make sure the site or app you are testing is testable so that you can actually hook into the DOM.
But more than anything, decide where the risk is, and have confidence that you are limiting your test scope appropriately so you can find the bugs instead of buying the magic beans of infinite test coverage.
New contributor
add a comment |
6 Answers
6
active
oldest
votes
6 Answers
6
active
oldest
votes
active
oldest
votes
active
oldest
votes
up vote
18
down vote
Your boss doesn't want a flat "No" or to hear their request is impractical.
They want to reduce the risk of releasing changes to the application.
Make your boss choose your priorities, that is one of the roles of a manager.
- Add scenarios based on team knowledge and work with your boss to rank them by risk.
- Implement top priority scenarios
- Go back to step 1
Eventually, you will get to diminishing returns and you can move onto the next task.
New contributor
I love that. That's an excellent idea.
– fypnlp
yesterday
add a comment |
up vote
18
down vote
Your boss doesn't want a flat "No" or to hear their request is impractical.
They want to reduce the risk of releasing changes to the application.
Make your boss choose your priorities, that is one of the roles of a manager.
- Add scenarios based on team knowledge and work with your boss to rank them by risk.
- Implement top priority scenarios
- Go back to step 1
Eventually, you will get to diminishing returns and you can move onto the next task.
New contributor
I love that. That's an excellent idea.
– fypnlp
yesterday
add a comment |
up vote
18
down vote
up vote
18
down vote
Your boss doesn't want a flat "No" or to hear their request is impractical.
They want to reduce the risk of releasing changes to the application.
Make your boss choose your priorities, that is one of the roles of a manager.
- Add scenarios based on team knowledge and work with your boss to rank them by risk.
- Implement top priority scenarios
- Go back to step 1
Eventually, you will get to diminishing returns and you can move onto the next task.
New contributor
Your boss doesn't want a flat "No" or to hear their request is impractical.
They want to reduce the risk of releasing changes to the application.
Make your boss choose your priorities, that is one of the roles of a manager.
- Add scenarios based on team knowledge and work with your boss to rank them by risk.
- Implement top priority scenarios
- Go back to step 1
Eventually, you will get to diminishing returns and you can move onto the next task.
New contributor
New contributor
answered yesterday
rickjr82
2812
2812
New contributor
New contributor
I love that. That's an excellent idea.
– fypnlp
yesterday
add a comment |
I love that. That's an excellent idea.
– fypnlp
yesterday
I love that. That's an excellent idea.
– fypnlp
yesterday
I love that. That's an excellent idea.
– fypnlp
yesterday
add a comment |
up vote
13
down vote
John Ruberto wrote an article some years ago on Stickyminds entitled, "Is 100% Unit Test Coverage Enough?". The article can be found here:
https://www.stickyminds.com/article/100-percent-unit-test-coverage-not-enough
In it he presents the argument that there are different kinds of coverage. One could cover 100% of requirements, but that doesn't include edge cases a tester might look for during Exploratory Testing. It doesn't include security tests or UI tests. It doesn't cover unit tests. Now we're up to 5 passes over the application, or 500% testing. We haven't even covered the explosion of tests that configuration testing will bring.
Your boss is throwing out a number. What he actually wants is an assurance that the thing that got built is going to do what it's supposed to do. Instead of getting hung up on a number as a minimum pass requirement, show him that based on your experience, you covered as much of the application in as many ways as you can think of that matter, and found this list of things that probably shouldn't happen. Based on that information, your boss or his boss will have to make a go/no go decision.
Lee Copeland offers a different take on the subject, including answers to the question, "When do I stop testing?" in his presentation "The Banana Principle for Testers", found here:
http://www.squadco.com/Conference_Presentations/Banana_Principle.pdf
Asking for a minimum pass requirement as a percentage of anything is Management by Numbers and takes the go/no go decision out of your boss' hands. He's trying to absolve himself of the responsibility of failure by placing the requirement on you. In a shop where people make mistakes and learn from them instead of laying blame, this type of requirement doesn't come up.
Can I just say this is a shining example of how to link-and-summarize? Bravo!
– corsiKa♦
yesterday
add a comment |
up vote
13
down vote
John Ruberto wrote an article some years ago on Stickyminds entitled, "Is 100% Unit Test Coverage Enough?". The article can be found here:
https://www.stickyminds.com/article/100-percent-unit-test-coverage-not-enough
In it he presents the argument that there are different kinds of coverage. One could cover 100% of requirements, but that doesn't include edge cases a tester might look for during Exploratory Testing. It doesn't include security tests or UI tests. It doesn't cover unit tests. Now we're up to 5 passes over the application, or 500% testing. We haven't even covered the explosion of tests that configuration testing will bring.
Your boss is throwing out a number. What he actually wants is an assurance that the thing that got built is going to do what it's supposed to do. Instead of getting hung up on a number as a minimum pass requirement, show him that based on your experience, you covered as much of the application in as many ways as you can think of that matter, and found this list of things that probably shouldn't happen. Based on that information, your boss or his boss will have to make a go/no go decision.
Lee Copeland offers a different take on the subject, including answers to the question, "When do I stop testing?" in his presentation "The Banana Principle for Testers", found here:
http://www.squadco.com/Conference_Presentations/Banana_Principle.pdf
Asking for a minimum pass requirement as a percentage of anything is Management by Numbers and takes the go/no go decision out of your boss' hands. He's trying to absolve himself of the responsibility of failure by placing the requirement on you. In a shop where people make mistakes and learn from them instead of laying blame, this type of requirement doesn't come up.
Can I just say this is a shining example of how to link-and-summarize? Bravo!
– corsiKa♦
yesterday
add a comment |
up vote
13
down vote
up vote
13
down vote
John Ruberto wrote an article some years ago on Stickyminds entitled, "Is 100% Unit Test Coverage Enough?". The article can be found here:
https://www.stickyminds.com/article/100-percent-unit-test-coverage-not-enough
In it he presents the argument that there are different kinds of coverage. One could cover 100% of requirements, but that doesn't include edge cases a tester might look for during Exploratory Testing. It doesn't include security tests or UI tests. It doesn't cover unit tests. Now we're up to 5 passes over the application, or 500% testing. We haven't even covered the explosion of tests that configuration testing will bring.
Your boss is throwing out a number. What he actually wants is an assurance that the thing that got built is going to do what it's supposed to do. Instead of getting hung up on a number as a minimum pass requirement, show him that based on your experience, you covered as much of the application in as many ways as you can think of that matter, and found this list of things that probably shouldn't happen. Based on that information, your boss or his boss will have to make a go/no go decision.
Lee Copeland offers a different take on the subject, including answers to the question, "When do I stop testing?" in his presentation "The Banana Principle for Testers", found here:
http://www.squadco.com/Conference_Presentations/Banana_Principle.pdf
Asking for a minimum pass requirement as a percentage of anything is Management by Numbers and takes the go/no go decision out of your boss' hands. He's trying to absolve himself of the responsibility of failure by placing the requirement on you. In a shop where people make mistakes and learn from them instead of laying blame, this type of requirement doesn't come up.
John Ruberto wrote an article some years ago on Stickyminds entitled, "Is 100% Unit Test Coverage Enough?". The article can be found here:
https://www.stickyminds.com/article/100-percent-unit-test-coverage-not-enough
In it he presents the argument that there are different kinds of coverage. One could cover 100% of requirements, but that doesn't include edge cases a tester might look for during Exploratory Testing. It doesn't include security tests or UI tests. It doesn't cover unit tests. Now we're up to 5 passes over the application, or 500% testing. We haven't even covered the explosion of tests that configuration testing will bring.
Your boss is throwing out a number. What he actually wants is an assurance that the thing that got built is going to do what it's supposed to do. Instead of getting hung up on a number as a minimum pass requirement, show him that based on your experience, you covered as much of the application in as many ways as you can think of that matter, and found this list of things that probably shouldn't happen. Based on that information, your boss or his boss will have to make a go/no go decision.
Lee Copeland offers a different take on the subject, including answers to the question, "When do I stop testing?" in his presentation "The Banana Principle for Testers", found here:
http://www.squadco.com/Conference_Presentations/Banana_Principle.pdf
Asking for a minimum pass requirement as a percentage of anything is Management by Numbers and takes the go/no go decision out of your boss' hands. He's trying to absolve himself of the responsibility of failure by placing the requirement on you. In a shop where people make mistakes and learn from them instead of laying blame, this type of requirement doesn't come up.
edited 14 hours ago
answered yesterday
Jerry Penner
586210
586210
Can I just say this is a shining example of how to link-and-summarize? Bravo!
– corsiKa♦
yesterday
add a comment |
Can I just say this is a shining example of how to link-and-summarize? Bravo!
– corsiKa♦
yesterday
Can I just say this is a shining example of how to link-and-summarize? Bravo!
– corsiKa♦
yesterday
Can I just say this is a shining example of how to link-and-summarize? Bravo!
– corsiKa♦
yesterday
add a comment |
up vote
8
down vote
It's always good to use numbers to make your point-
There are N models and sub models of iPhones (for some applications you should also count the phone's network sub-type), each with M available iOS versions and sub versions (this is not entirely accurate, some models and iOS versions don't work together, but never mind that now)
Older iOS versions needs different automation tools.
You can communicate over WiFi, 3G or 4G and maybe Bluetooth and USB
You will want to test P versions back on newer iOS and iPhone versions
Multiply the above and present to your boss
add a comment |
up vote
8
down vote
It's always good to use numbers to make your point-
There are N models and sub models of iPhones (for some applications you should also count the phone's network sub-type), each with M available iOS versions and sub versions (this is not entirely accurate, some models and iOS versions don't work together, but never mind that now)
Older iOS versions needs different automation tools.
You can communicate over WiFi, 3G or 4G and maybe Bluetooth and USB
You will want to test P versions back on newer iOS and iPhone versions
Multiply the above and present to your boss
add a comment |
up vote
8
down vote
up vote
8
down vote
It's always good to use numbers to make your point-
There are N models and sub models of iPhones (for some applications you should also count the phone's network sub-type), each with M available iOS versions and sub versions (this is not entirely accurate, some models and iOS versions don't work together, but never mind that now)
Older iOS versions needs different automation tools.
You can communicate over WiFi, 3G or 4G and maybe Bluetooth and USB
You will want to test P versions back on newer iOS and iPhone versions
Multiply the above and present to your boss
It's always good to use numbers to make your point-
There are N models and sub models of iPhones (for some applications you should also count the phone's network sub-type), each with M available iOS versions and sub versions (this is not entirely accurate, some models and iOS versions don't work together, but never mind that now)
Older iOS versions needs different automation tools.
You can communicate over WiFi, 3G or 4G and maybe Bluetooth and USB
You will want to test P versions back on newer iOS and iPhone versions
Multiply the above and present to your boss
answered yesterday
Rsf
4,05911425
4,05911425
add a comment |
add a comment |
up vote
3
down vote
It depends what they consider 100%, but my basic answer would be 'no' especially if these are UI level tests. Tests at this level are slow and expensive to write and maintain.
It looks like you know this already, but write tests that cover the broad paths through the application/its functionality.
answered yesterday
anonygoose
up vote
3
down vote
I've spent a fair bit of time working on safety-related software. The "shit fails, people die" kind of safety.
The first thing to note is that we had 100% test coverage of requirements. However, that didn't have to be automated: sometimes because it wasn't practical, and sometimes because it wasn't physically possible. No safety-related standard requires 100% automated testing even for the most critical of systems, so the idea of doing it for an iPhone app is pretty ludicrous.
The next thing to note is that there are various sorts of coverage which you might want for unit testing. Statement coverage is a nice start, because it guarantees that at least all code is reachable, but it doesn't say anything about whether it's been tested for correctness. Branch coverage is better, but you still can't be sure about why it branched. Condition coverage is more thorough and checks that for a comparison you've given it less than, equal and greater than conditions, but now your testing is expanding. Add combinatorial logic testing to check one of each condition true, all conditions true and no conditions true. Add boundary coverage to check for maximum/minimum value inputs and outputs. And so on.
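A toy illustration of the gap between statement coverage and correctness (the function and its bug are invented for this example):

```python
def senior_discount(age):
    """Intended spec: customers aged 65 and over get 20% off.
    Deliberate bug: '>' should be '>='."""
    if age > 65:
        return 0.20
    return 0.0

# These two checks achieve 100% statement AND branch coverage...
assert senior_discount(70) == 0.20
assert senior_discount(30) == 0.0

# ...yet only a boundary-value test exposes the off-by-one:
print(senior_discount(65))  # prints 0.0, but the spec says 0.20
```

A coverage tool would happily report 100% here, which is exactly why "100% coverage" on its own guarantees so little.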
On an average project, we reckoned on around 5-10% of our time being spent on coding, 10% on requirements, and 20% on design. The rest was testing. And that was for lower safety levels. For serious safety stuff where we had to unit test like that, we reckoned on doubling that test time.
We actually analysed this and decided to abandon function-level unit tests for most things. We found that the defect-to-cost ratio was not justifiable, and many of the bugs could not in fact manifest in the system because of limits higher up. Instead, we ramped up the amount of module-level and system-level testing, because evidence showed that this was where the majority of customer-visible bugs were found.
So your boss needs to decide why he thinks this is a good idea. He needs to define what "100%" measurements you should be aiming for. And he needs to justify a 10x increase in the time to complete your project, because that's what your estimated delivery date should now say.
edited 8 hours ago
answered yesterday
Graham
@Wildcard I get where you're coming from, but we put a lot more effort into requirements capture, so it covered a lot of stuff which might more normally be left to design. When requirements can be implemented unambiguously, design falls out much more naturally. If anything, I've overestimated coding time though.
– Graham
18 hours ago
@Wildcard I just read your first link. That's what we aimed for. Like the guy in there said, it's all about getting it in the spec.
– Graham
18 hours ago
up vote
0
down vote
Welcome to managing your manager. I agree with the sentiment that your boss doesn't want to hear a bare "no", and a bare "no" isn't enough on its own. But the answer is almost assuredly "no".
What you need is a way to communicate how and why to limit your test scope. Even if you tested every permutation of make/model, screen size and OS, you would soon discover there are different OS versions, then different browsers, then different browser versions. Before you know it, it's next year and you have a new job because you missed the bug in your other product.
So, really, you need a risk-based approach. Knowing your test variables and what they mean for your likelihood of finding a bug is crucial.
Find out what your customer base is using. Someone should be monitoring or collecting this data. And if you haven't rolled this out yet, get some consulting from a company like Perfecto.
Speaking of Perfecto, you will need to vary your tests by desired capabilities or something similar. If you have to write different automation for each of your mobile device profiles in your test bed, forget about it. Use a visual context, or make sure the site or app you are testing is testable so that you can actually hook into the DOM.
But, more than anything, decide where the risk is and have confidence that you are limiting your test scope appropriately so you can find the bugs instead of buying the magic beans of infinite test coverage.
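One way to make "decide where the risk is" concrete is a simple likelihood-times-impact score per candidate test; the features and scores below are invented for illustration:

```python
# Hypothetical risk register: (feature, likelihood 1-5, impact 1-5).
candidates = [
    ("login flow",          5, 5),
    ("checkout / payment",  4, 5),
    ("settings toggles",    3, 2),
    ("profile photo crop",  2, 2),
]

# Automate from the top of the risk-ordered list down, and stop when
# the remaining items cost more to automate than they would ever save.
by_risk = sorted(candidates, key=lambda c: c[1] * c[2], reverse=True)
for feature, likelihood, impact in by_risk:
    print(f"{likelihood * impact:>2}  {feature}")
```

The point is not the arithmetic but the conversation it forces: every feature gets an explicit, defensible reason for being in or out of the automated pack.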
answered 9 hours ago
ModelTester
Thanks for contributing an answer to Software Quality Assurance & Testing Stack Exchange!
7
Unless the application is incredibly trivial, it is impossible. One of the seven principles of software testing is that exhaustive testing is impossible. If your boss is asking for 100% coverage, remind them that execution time and all associated costs will rise. Testing based on risk and priority would be loads more beneficial than a 'test everything' approach.
– trashpanda
yesterday
11
When you say 100% coverage are you talking about all code paths in a project or are you talking about 100% coverage of the different phone devices?
– Alex KeySmith
yesterday
@AlexKeySmith I think that is a very good question. This is what I've noticed in the last 10 days: I will be automating on a simulator and not a real device.
– fypnlp
yesterday
1
You don't have 100% test coverage unless you've tested every possible interrupt at every possible CPU instruction in the program. And this assumes the program has no input. It's worse for real programs.
– Joshua
yesterday
5
Coverage is like the speed of light. You can get arbitrarily close to 100% (with your investment growing exponentially as you get closer and closer to it), but you can never actually reach it.
– vsz
21 hours ago
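To put rough numbers on the comments above: even exhaustively testing a pure function of a single 64-bit input is out of reach, before interrupts or real program state enter the picture (the throughput figure is an optimistic assumption):

```python
# Time to exhaustively test a pure function of one 64-bit input,
# assuming an optimistic one billion test executions per second.
total_inputs = 2 ** 64
tests_per_second = 10 ** 9
seconds_per_year = 60 * 60 * 24 * 365

years = total_inputs / (tests_per_second * seconds_per_year)
print(f"{years:.0f} years")  # roughly 585 years, for one 64-bit argument
```

Real programs have many inputs plus persistent state, so the true number is astronomically larger; this is why every practical strategy samples the input space rather than enumerating it.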