Mobile automation: Boss wants 100% coverage. How feasible is that?











I just started a new mobile automation role working on a React Native app for iOS.

I'm not new to automation, but I am new to React Native (with Detox and Jest). The learning curve is slow, but we are getting there.

My boss wants a complete regression pack. That's fine... no problem.

However, he also wants 100% coverage. That's my issue.

I will be using grey box testing, but I do think certain principles are universal regardless of approach.

I was always taught "don't automate all the things... automate the RIGHT things".

I will be automating against a simulator (e.g. iPhone X). The thought has occurred to me: can I also automate against a real device in grey box testing? I would like to, as that would give me more robust smoke and sanity tests.
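For reference, the simulator target lives in the Detox configuration. This is roughly what the relevant section of package.json looks like, sketched here as a JavaScript object so it can carry comments; the app name, scheme and paths are placeholders rather than the real project:

```js
// The "detox" section of package.json, shown as a JS object for readability.
// App name, scheme and build paths below are placeholders, not a real project.
const detoxSection = {
  configurations: {
    'ios.sim.debug': {
      type: 'ios.simulator',   // Detox drives the iOS Simulator for this configuration
      name: 'iPhone X',        // which simulator to boot
      binaryPath: 'ios/build/Build/Products/Debug-iphonesimulator/MyApp.app',
      build:
        'xcodebuild -workspace ios/MyApp.xcworkspace -scheme MyApp ' +
        '-configuration Debug -sdk iphonesimulator -derivedDataPath ios/build',
    },
  },
};
```

Whether the same grey box approach can be pointed at a physical device is exactly the part I'm unsure about.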



As I'm new to this role and still learning, my difficulty is discerning the RIGHT things, because I don't want to create unnecessary tests that become road blocks.

As I'm starting this framework from scratch, I am wondering: has anyone been in this position before?

How did you deal with it?

automated-testing test-automation-framework mobile reactjs










  • Unless the application is incredibly trivial, it is impossible. One of the seven principles of software testing is that exhaustive testing is impossible. If your boss is asking for 100% coverage, remind them that execution time and all associated costs will rise. Testing based on risk and priority would be far more beneficial than a 'test everything' approach. – trashpanda

  • When you say 100% coverage, are you talking about all code paths in the project, or about covering all the different phone devices? – Alex KeySmith

  • @AlexKeySmith I think that is a very good question. This is what I've noticed in the last 10 days: I will be automating on a simulator and not a real device. – fypnlp

  • You don't have 100% test coverage unless you've tested every possible interrupt at every possible CPU instruction in the program. And this assumes the program has no input. It's worse for real programs. – Joshua

  • Coverage is like the speed of light: you can get arbitrarily close to 100% (with your investment growing exponentially as you get closer and closer to it), but you can never actually reach it. – vsz















6 Answers
Your boss doesn't want a flat "No", or to hear that their request is impractical.

They want to reduce the risk of releasing changes to the application.

Make your boss choose your priorities; that is one of the roles of a manager.

  1. Add scenarios based on team knowledge and work with your boss to rank them by risk.

  2. Implement the top-priority scenarios.

  3. Go back to step 1.

Eventually you will hit diminishing returns, and you can move on to the next task.
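One lightweight way to make that ranking executable (a sketch, not something from this answer's author; the "[P1]"/"[P3]" naming convention, scenarios and testIDs are assumptions) is to tag each scenario's priority in its test name:

```js
// e2e/checkout.e2e.js -- Detox/Jest style sketch; scenarios and testIDs are invented
describe('Checkout', () => {
  it('[P1] completes a purchase with a saved card', async () => {
    // top-ranked risk from step 1 -- belongs in every smoke run
    await element(by.id('buyButton')).tap();
    await expect(element(by.id('orderConfirmation'))).toBeVisible();
  });

  it('[P3] shows a validation message for an expired card', async () => {
    // lower-ranked risk -- only exercised by the full regression pack
    await element(by.id('payWithExpiredCard')).tap();
    await expect(element(by.id('cardError'))).toBeVisible();
  });
});
```

Jest's `--testNamePattern` (`-t`) option can then select only the `[P1]` scenarios for a quick pass, while the full regression pack simply runs everything.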






  • I love that. That's an excellent idea. – fypnlp


















John Ruberto wrote an article some years ago on Stickyminds entitled, "Is 100% Unit Test Coverage Enough?". The article can be found here:



https://www.stickyminds.com/article/100-percent-unit-test-coverage-not-enough



In it he presents the argument that there are different kinds of coverage. One could cover 100% of requirements, but that doesn't include edge cases a tester might look for during Exploratory Testing. It doesn't include security tests or UI tests. It doesn't cover unit tests. Now we're up to 5 passes over the application, or 500% testing. We haven't even covered the explosion of tests that configuration testing will bring.



Your boss is throwing out a number. What he actually wants is assurance that the thing that got built is going to do what it's supposed to do. Instead of getting hung up on a number as a minimum pass requirement, show him that, based on your experience, you covered as much of the application as you could, in as many ways as you could think of that matter, and found this list of things that probably shouldn't happen. Based on that information, your boss or his boss will have to make a go/no go decision.



Lee Copeland offers a different take on the subject, including answers to the question, "When do I stop testing?" in his presentation "The Banana Principle for Testers", found here:



http://www.squadco.com/Conference_Presentations/Banana_Principle.pdf



Asking for a minimum pass requirement as a percentage of anything is Management by Numbers and takes the go/no go decision out of your boss' hands. He's trying to absolve himself of the responsibility of failure by placing the requirement on you. In a shop where people make mistakes and learn from them instead of laying blame, this type of requirement doesn't come up.






  • Can I just say this is a shining example of how to link-and-summarize? Bravo! – corsiKa


















It's always good to use numbers to make your point:

  • There are N models and sub-models of iPhone (for some applications you should also count the phone's network sub-type), each with M available iOS versions and sub-versions (this is not entirely accurate; some models and iOS versions don't work together, but never mind that now).

  • Older iOS versions need different automation tools.

  • You can communicate over WiFi, 3G or 4G, and maybe Bluetooth and USB.

  • You will want to test P versions of your app back on newer iOS and iPhone versions.

Multiply the above and present the result to your boss.
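To make the multiplication concrete, here is a back-of-the-envelope sketch; every count below is an assumption for illustration, not real market data:

```js
// Rough order-of-magnitude estimate of the configuration space (all counts invented)
const models = 20;       // iPhone models and sub-models still in circulation
const osVersions = 8;    // iOS versions and point releases worth caring about
const networks = 4;      // WiFi, 3G, 4G, offline
const appVersions = 3;   // current release plus two versions back

const configurations = models * osVersions * networks * appVersions;
console.log(configurations); // 1920 -- and that is before multiplying by the number of test cases
```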






It depends on what they consider 100%, but my basic answer would be 'no', especially if these are UI-level tests. Tests at this level are slow and expensive to write and maintain.

It looks like you know this already, but write tests that cover the broad paths through the application and its functionality.
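For illustration, a "broad path" check in the OP's Detox/Jest stack might look something like this; the screens, credentials and testIDs are invented for the sketch:

```js
// e2e/happyPath.e2e.js -- one broad, high-value path instead of exhaustive UI permutations
describe('Happy path smoke test', () => {
  beforeEach(async () => {
    await device.reloadReactNative(); // start each run from a known state
  });

  it('logs in and reaches the home screen', async () => {
    await element(by.id('emailInput')).typeText('user@example.com');
    await element(by.id('passwordInput')).typeText('correct-horse');
    await element(by.id('loginButton')).tap();
    await expect(element(by.id('homeScreen'))).toBeVisible();
  });
});
```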






I've spent a fair bit of time working on safety-related software. The "shit fails, people die" kind of safety.



      The first thing to note is that we had 100% test coverage of requirements. However that didn't have to be automated, sometimes because it wasn't practical, and sometimes because it wasn't physically possible. No safety-related standard requires 100% automated testing even for the most critical of systems, so the idea of doing it for an iPhone app is pretty ludicrous.



The next thing to note is that there are various sorts of coverage which you might want for unit testing. Statement coverage is a nice start, because it guarantees that at least all code is reachable, but it doesn't say anything about whether it's been tested for correctness. Branch coverage is better, but you still can't be sure about why it branched. Condition coverage is more thorough and checks that for each comparison you've exercised the less-than, equal and greater-than cases, but now your testing is expanding. Add combinatorial logic testing to check each condition true on its own, all conditions true, and no conditions true. Add boundary coverage to check maximum/minimum value inputs and outputs. And so on.
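A tiny invented example of the first point (not from this answer): a single test can execute every statement, report 100% statement coverage, and still sail past the bug.

```js
// discount.js -- deliberately buggy: members spending over 100 should get 20% off,
// but the code always applies 10%
function discount(total, isMember) {
  let rate = 0;
  if (isMember) rate = 0.1;
  return total * (1 - rate);
}

// discount.test.js -- this one test touches every statement above (100% statement coverage)
test('member discount', () => {
  expect(discount(50, true)).toBe(45); // passes; the coverage report is green
});
// The untested input -- discount(150, true) -- is exactly where the bug lives,
// which is why statement coverage alone says nothing about correctness.
```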



      On an average project, we reckoned on around 5-10% of our time being spent on coding, 10% on requirements, and 20% on design. The rest was testing. And that was for lower safety levels. For serious safety stuff where we had to unit test like that, we reckoned on doubling that test time.



      We actually analysed this and decided to abandon function-level unit tests for most things. We found that the defect-to-cost ratio was not justifiable, and many of the bugs could not in fact manifest in the system because of limits higher up. Instead, we ramped up the amount of module-level and system-level testing, because evidence showed that this was where the majority of customer-visible bugs were found.



      So your boss needs to decide why he thinks this is a good idea. He needs to define what "100%" measurements you should be aiming for. And he needs to justify a 10x increase in the time to complete your project, because that's what your estimated delivery date should now say.






  • @Wildcard I get where you're coming from, but we put a lot more effort into requirements capture, so it covered a lot of stuff which might more normally be left to design. When requirements can be implemented unambiguously, design falls out much more naturally. If anything, I've overestimated coding time though. – Graham

  • @Wildcard I just read your first link. That's what we aimed for. Like the guy in there said, it's all about getting it in the spec. – Graham


















Welcome to managing your manager. I agree with the sentiment that your boss doesn't want to hear a flat "No", and a bare "No" isn't informative enough on its own. But the answer is almost assuredly "No".

What you need is a way to communicate how and why to limit your test scope. Even if you tested all the permutations of every make and model, screen size and OS, you will soon see that oh, there are different versions of each OS, and then oh, different browsers, and oh, those browser versions. Oh wow, now it's next year and you have a new job because you missed the bug in your other product.

So really you need a risk-based approach. Knowing your test variables, and what each one means for your likelihood of finding a bug, is crucial.

Find out what your customer base is actually using. Someone should be monitoring or collecting this data already, and if you haven't rolled that out yet, get some consulting from a company like Perfecto.

Speaking of Perfecto, you will need to vary your test automation by desired capabilities or something similar. If you have to write different automation for each of the mobile device profiles in your test bed, then forget about it. Use a visual context, or make sure the site or app you are testing is testable so that you can actually hook into the DOM.
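As a rough sketch of what "varying by desired capabilities" can look like (the device list is invented, and this is generic Appium-style capability data rather than any specific vendor's API), the idea is that one test body is fed a series of device profiles:

```js
// deviceProfiles.js -- one set of tests, many device profiles (values are illustrative)
const profiles = [
  { platformName: 'iOS', platformVersion: '12.1', deviceName: 'iPhone X' },
  { platformName: 'iOS', platformVersion: '11.4', deviceName: 'iPhone 7' },
  { platformName: 'iOS', platformVersion: '12.1', deviceName: 'iPad Pro (10.5-inch)' },
];

for (const caps of profiles) {
  // In an Appium/Perfecto-style setup these capabilities are handed to the driver
  // when the session starts; the test code itself stays identical across profiles.
  console.log('would start a session with', { ...caps, app: '/path/to/MyApp.ipa' });
}
```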



      But, more than anything, decide where the risk is and have confidence that you are limiting your test scope appropriately so you can find the bugs instead of buying the magic beans of infinite test coverage.






      share|improve this answer








      New contributor




      ModelTester is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
      Check out our Code of Conduct.


















        Your Answer








        StackExchange.ready(function() {
        var channelOptions = {
        tags: "".split(" "),
        id: "244"
        };
        initTagRenderer("".split(" "), "".split(" "), channelOptions);

        StackExchange.using("externalEditor", function() {
        // Have to fire editor after snippets, if snippets enabled
        if (StackExchange.settings.snippets.snippetsEnabled) {
        StackExchange.using("snippets", function() {
        createEditor();
        });
        }
        else {
        createEditor();
        }
        });

        function createEditor() {
        StackExchange.prepareEditor({
        heartbeatType: 'answer',
        convertImagesToLinks: false,
        noModals: true,
        showLowRepImageUploadWarning: true,
        reputationToPostImages: null,
        bindNavPrevention: true,
        postfix: "",
        imageUploader: {
        brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
        contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
        allowUrls: true
        },
        onDemand: true,
        discardSelector: ".discard-answer"
        ,immediatelyShowMarkdownHelp:true
        });


        }
        });














        draft saved

        draft discarded


















        StackExchange.ready(
        function () {
        StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fsqa.stackexchange.com%2fquestions%2f36694%2fmobile-automation-boss-wants-100-coverage-how-feasible-is-that%23new-answer', 'question_page');
        }
        );

        Post as a guest















        Required, but never shown

























        6 Answers
        6






        active

        oldest

        votes








        6 Answers
        6






        active

        oldest

        votes









        active

        oldest

        votes






        active

        oldest

        votes








        up vote
        18
        down vote













        Your boss doesn't want a flat "No" or to hear their request is impractical.



        They want to reduce the risk of releasing changes to the application.



        Make your boss choose your priorities, that is one of the roles of a manager.




        1. Add scenarios based on team knowledge and work with your boss to rank them by risk.

        2. Implement top priority scenarios

        3. Go back to step 1


        Eventually, you will get to diminishing returns and you can move onto the next task.






        share|improve this answer








        New contributor




        rickjr82 is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
        Check out our Code of Conduct.


















        • I love that. That's an excellent idea.
          – fypnlp
          yesterday















        up vote
        18
        down vote













        Your boss doesn't want a flat "No" or to hear their request is impractical.



        They want to reduce the risk of releasing changes to the application.



        Make your boss choose your priorities, that is one of the roles of a manager.




        1. Add scenarios based on team knowledge and work with your boss to rank them by risk.

        2. Implement top priority scenarios

        3. Go back to step 1


        Eventually, you will get to diminishing returns and you can move onto the next task.






        share|improve this answer








        New contributor




        rickjr82 is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
        Check out our Code of Conduct.


















        • I love that. That's an excellent idea.
          – fypnlp
          yesterday













        up vote
        18
        down vote










        up vote
        18
        down vote









        Your boss doesn't want a flat "No" or to hear their request is impractical.



        They want to reduce the risk of releasing changes to the application.



        Make your boss choose your priorities, that is one of the roles of a manager.




        1. Add scenarios based on team knowledge and work with your boss to rank them by risk.

        2. Implement top priority scenarios

        3. Go back to step 1


        Eventually, you will get to diminishing returns and you can move onto the next task.






        share|improve this answer








        New contributor




        rickjr82 is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
        Check out our Code of Conduct.









        Your boss doesn't want a flat "No" or to hear their request is impractical.



        They want to reduce the risk of releasing changes to the application.



        Make your boss choose your priorities, that is one of the roles of a manager.




        1. Add scenarios based on team knowledge and work with your boss to rank them by risk.

        2. Implement top priority scenarios

        3. Go back to step 1


        Eventually, you will get to diminishing returns and you can move onto the next task.







        share|improve this answer








        New contributor




        rickjr82 is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
        Check out our Code of Conduct.









        share|improve this answer



        share|improve this answer






        New contributor




        rickjr82 is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
        Check out our Code of Conduct.









        answered yesterday









        rickjr82

        2812




        2812




        New contributor




        rickjr82 is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
        Check out our Code of Conduct.





        New contributor





        rickjr82 is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
        Check out our Code of Conduct.






        rickjr82 is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
        Check out our Code of Conduct.












        • I love that. That's an excellent idea.
          – fypnlp
          yesterday


















        • I love that. That's an excellent idea.
          – fypnlp
          yesterday
















        I love that. That's an excellent idea.
        – fypnlp
        yesterday




        I love that. That's an excellent idea.
        – fypnlp
        yesterday










        up vote
        13
        down vote













        John Ruberto wrote an article some years ago on Stickyminds entitled, "Is 100% Unit Test Coverage Enough?". The article can be found here:



        https://www.stickyminds.com/article/100-percent-unit-test-coverage-not-enough



        In it he presents the argument that there are different kinds of coverage. One could cover 100% of requirements, but that doesn't include edge cases a tester might look for during Exploratory Testing. It doesn't include security tests or UI tests. It doesn't cover unit tests. Now we're up to 5 passes over the application, or 500% testing. We haven't even covered the explosion of tests that configuration testing will bring.



        Your boss is throwing out a number. What he actually wants is an assurance that the thing that got built is going to do what it's supposed to do. Instead of getting hung up on a number as a minimum pass requirement, show him that based on your experience, you covered as much of the application in as many ways as you can think of that matter, and found this list of things that probably shouldn't happen. Based on that information, your boss or his boss will have to make a go/no go decision.



        Lee Copeland offers a different take on the subject, including answers to the question, "When do I stop testing?" in his presentation "The Banana Principle for Testers", found here:



        http://www.squadco.com/Conference_Presentations/Banana_Principle.pdf



        Asking for a minimum pass requirement as a percentage of anything is Management by Numbers and takes the go/no go decision out of your boss' hands. He's trying to absolve himself of the responsibility of failure by placing the requirement on you. In a shop where people make mistakes and learn from them instead of laying blame, this type of requirement doesn't come up.






        share|improve this answer























        • Can I just say this is a shining example of how to link-and-summarize? Bravo!
          – corsiKa
          yesterday















        up vote
        13
        down vote













        John Ruberto wrote an article some years ago on Stickyminds entitled, "Is 100% Unit Test Coverage Enough?". The article can be found here:



        https://www.stickyminds.com/article/100-percent-unit-test-coverage-not-enough



        In it he presents the argument that there are different kinds of coverage. One could cover 100% of requirements, but that doesn't include edge cases a tester might look for during Exploratory Testing. It doesn't include security tests or UI tests. It doesn't cover unit tests. Now we're up to 5 passes over the application, or 500% testing. We haven't even covered the explosion of tests that configuration testing will bring.



        Your boss is throwing out a number. What he actually wants is an assurance that the thing that got built is going to do what it's supposed to do. Instead of getting hung up on a number as a minimum pass requirement, show him that based on your experience, you covered as much of the application in as many ways as you can think of that matter, and found this list of things that probably shouldn't happen. Based on that information, your boss or his boss will have to make a go/no go decision.



        Lee Copeland offers a different take on the subject, including answers to the question, "When do I stop testing?" in his presentation "The Banana Principle for Testers", found here:



        http://www.squadco.com/Conference_Presentations/Banana_Principle.pdf



        Asking for a minimum pass requirement as a percentage of anything is Management by Numbers and takes the go/no go decision out of your boss' hands. He's trying to absolve himself of the responsibility of failure by placing the requirement on you. In a shop where people make mistakes and learn from them instead of laying blame, this type of requirement doesn't come up.






        share|improve this answer























        • Can I just say this is a shining example of how to link-and-summarize? Bravo!
          – corsiKa
          yesterday













        up vote
        13
        down vote










        up vote
        13
        down vote









        John Ruberto wrote an article some years ago on Stickyminds entitled, "Is 100% Unit Test Coverage Enough?". The article can be found here:



        https://www.stickyminds.com/article/100-percent-unit-test-coverage-not-enough



        In it he presents the argument that there are different kinds of coverage. One could cover 100% of requirements, but that doesn't include edge cases a tester might look for during Exploratory Testing. It doesn't include security tests or UI tests. It doesn't cover unit tests. Now we're up to 5 passes over the application, or 500% testing. We haven't even covered the explosion of tests that configuration testing will bring.



        Your boss is throwing out a number. What he actually wants is an assurance that the thing that got built is going to do what it's supposed to do. Instead of getting hung up on a number as a minimum pass requirement, show him that based on your experience, you covered as much of the application in as many ways as you can think of that matter, and found this list of things that probably shouldn't happen. Based on that information, your boss or his boss will have to make a go/no go decision.



        Lee Copeland offers a different take on the subject, including answers to the question, "When do I stop testing?" in his presentation "The Banana Principle for Testers", found here:



        http://www.squadco.com/Conference_Presentations/Banana_Principle.pdf



        Asking for a minimum pass requirement as a percentage of anything is Management by Numbers and takes the go/no go decision out of your boss' hands. He's trying to absolve himself of the responsibility of failure by placing the requirement on you. In a shop where people make mistakes and learn from them instead of laying blame, this type of requirement doesn't come up.






        share|improve this answer














        John Ruberto wrote an article some years ago on Stickyminds entitled, "Is 100% Unit Test Coverage Enough?". The article can be found here:



        https://www.stickyminds.com/article/100-percent-unit-test-coverage-not-enough



        In it he presents the argument that there are different kinds of coverage. One could cover 100% of requirements, but that doesn't include edge cases a tester might look for during Exploratory Testing. It doesn't include security tests or UI tests. It doesn't cover unit tests. Now we're up to 5 passes over the application, or 500% testing. We haven't even covered the explosion of tests that configuration testing will bring.



        Your boss is throwing out a number. What he actually wants is an assurance that the thing that got built is going to do what it's supposed to do. Instead of getting hung up on a number as a minimum pass requirement, show him that based on your experience, you covered as much of the application in as many ways as you can think of that matter, and found this list of things that probably shouldn't happen. Based on that information, your boss or his boss will have to make a go/no go decision.



        Lee Copeland offers a different take on the subject, including answers to the question, "When do I stop testing?" in his presentation "The Banana Principle for Testers", found here:



        http://www.squadco.com/Conference_Presentations/Banana_Principle.pdf



        Asking for a minimum pass requirement as a percentage of anything is Management by Numbers and takes the go/no go decision out of your boss' hands. He's trying to absolve himself of the responsibility of failure by placing the requirement on you. In a shop where people make mistakes and learn from them instead of laying blame, this type of requirement doesn't come up.







        share|improve this answer














        share|improve this answer



        share|improve this answer








        edited 14 hours ago

























        answered yesterday









        Jerry Penner

        586210




        586210












        • Can I just say this is a shining example of how to link-and-summarize? Bravo!
          – corsiKa
          yesterday


















        • Can I just say this is a shining example of how to link-and-summarize? Bravo!
          – corsiKa
          yesterday
















        Can I just say this is a shining example of how to link-and-summarize? Bravo!
        – corsiKa
        yesterday




        Can I just say this is a shining example of how to link-and-summarize? Bravo!
        – corsiKa
        yesterday










        up vote
        8
        down vote













        It's always good to use numbers to make your point-




        • There are N models and sub models of iPhones (for some applications you should also count the phone's network sub-type), each with M available iOS versions and sub versions (this is not entirely accurate, some models and iOS versions don't work together, but never mind that now)


        • Older iOS versions needs different automation tools.


        • You can communicate over WiFi, 3G or 4G and maybe Bluetooth and USB


        • You will want to test P versions back on newer iOS and iPhone versions



        Multiply the above and present to your boss






        share|improve this answer

























          up vote
          8
          down vote













          It's always good to use numbers to make your point-




          • There are N models and sub models of iPhones (for some applications you should also count the phone's network sub-type), each with M available iOS versions and sub versions (this is not entirely accurate, some models and iOS versions don't work together, but never mind that now)


          • Older iOS versions needs different automation tools.


          • You can communicate over WiFi, 3G or 4G and maybe Bluetooth and USB


          • You will want to test P versions back on newer iOS and iPhone versions



          Multiply the above and present to your boss






          share|improve this answer























            up vote
            8
            down vote










            up vote
            8
            down vote









            It's always good to use numbers to make your point-




            • There are N models and sub models of iPhones (for some applications you should also count the phone's network sub-type), each with M available iOS versions and sub versions (this is not entirely accurate, some models and iOS versions don't work together, but never mind that now)


            • Older iOS versions needs different automation tools.


            • You can communicate over WiFi, 3G or 4G and maybe Bluetooth and USB


            • You will want to test P versions back on newer iOS and iPhone versions



            Multiply the above and present to your boss






            share|improve this answer












            It's always good to use numbers to make your point-




            • There are N models and sub models of iPhones (for some applications you should also count the phone's network sub-type), each with M available iOS versions and sub versions (this is not entirely accurate, some models and iOS versions don't work together, but never mind that now)


            • Older iOS versions needs different automation tools.


            • You can communicate over WiFi, 3G or 4G and maybe Bluetooth and USB


            • You will want to test P versions back on newer iOS and iPhone versions



            Multiply the above and present to your boss







            share|improve this answer












            share|improve this answer



            share|improve this answer










            answered yesterday









            Rsf

            4,05911425




            4,05911425






















                up vote
                3
                down vote













                It depends what they consider 100%, but my basic answer would be 'no' especially if these are UI level tests. Tests at this level are slow and expensive to write and maintain.



                It looks like you know this already, but write tests that cover the broad paths through the application/its functionality.






                share|improve this answer

























                  up vote
                  3
                  down vote













                  It depends what they consider 100%, but my basic answer would be 'no' especially if these are UI level tests. Tests at this level are slow and expensive to write and maintain.



                  It looks like you know this already, but write tests that cover the broad paths through the application/its functionality.






                  share|improve this answer























                    up vote
                    3
                    down vote










                    up vote
                    3
                    down vote









                    It depends what they consider 100%, but my basic answer would be 'no' especially if these are UI level tests. Tests at this level are slow and expensive to write and maintain.



                    It looks like you know this already, but write tests that cover the broad paths through the application/its functionality.






                    share|improve this answer












                    It depends what they consider 100%, but my basic answer would be 'no' especially if these are UI level tests. Tests at this level are slow and expensive to write and maintain.



                    It looks like you know this already, but write tests that cover the broad paths through the application/its functionality.







                    share|improve this answer












                    share|improve this answer



                    share|improve this answer










                    answered yesterday









                    anonygoose

                    1312




                    1312






















                        up vote
                        3
                        down vote













                        I've spent a fair bit of time working on safety-related software. Shit fails, people die, kind of safety.



                        The first thing to note is that we had 100% test coverage of requirements. However that didn't have to be automated, sometimes because it wasn't practical, and sometimes because it wasn't physically possible. No safety-related standard requires 100% automated testing even for the most critical of systems, so the idea of doing it for an iPhone app is pretty ludicrous.



                        The next thing to note is that there are various sorts of coverage which you might want for unit testing. Statement coverage is a nice start, because it guarantees that at least all code is reachable, but it doesn't say anything about whether it's been tested for correctness. Branch coverage is better, but you still can't be sure about why it branched. Condition coverage is more thorough and checks that for a comparison you've given it less than, equal and greater than conditions, but now your testing is expanding. Add combinatorial logic testing to check one of each condition true, all conditions true and no conditions true. Add boundary coverage to check for maximum/minimum value inputs and outputs. And so on.



                        On an average project, we reckoned on around 5-10% of our time being spent on coding, 10% on requirements, and 20% on design. The rest was testing. And that was for lower safety levels. For serious safety stuff where we had to unit test like that, we reckoned on doubling that test time.



                        We actually analysed this and decided to abandon function-level unit tests for most things. We found that the defect-to-cost ratio was not justifiable, and many of the bugs could not in fact manifest in the system because of limits higher up. Instead, we ramped up the amount of module-level and system-level testing, because evidence showed that this was where the majority of customer-visible bugs were found.



                        So your boss needs to decide why he thinks this is a good idea. He needs to define what "100%" measurements you should be aiming for. And he needs to justify a 10x increase in the time to complete your project, because that's what your estimated delivery date should now say.






                        share|improve this answer










                        New contributor




                        Graham is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
                        Check out our Code of Conduct.


















                        • @Wildcard I get where you're coming from, but we put a lot more effort into requirements capture, so it covered a lot of stuff which might more normally be left to design. When requirements can be implemented unambiguously, design falls out much more naturally. If anything, I've overestimated coding time though.
                          – Graham
                          18 hours ago










                        • @Wildcard I just read your first link. That's what we aimed for. Like the guy in there said, it's all about getting it in the spec.
                          – Graham
                          18 hours ago















                        up vote
                        3
                        down vote













                        I've spent a fair bit of time working on safety-related software. Shit fails, people die, kind of safety.



                        The first thing to note is that we had 100% test coverage of requirements. However that didn't have to be automated, sometimes because it wasn't practical, and sometimes because it wasn't physically possible. No safety-related standard requires 100% automated testing even for the most critical of systems, so the idea of doing it for an iPhone app is pretty ludicrous.



                        The next thing to note is that there are various sorts of coverage which you might want for unit testing. Statement coverage is a nice start, because it guarantees that at least all code is reachable, but it doesn't say anything about whether it's been tested for correctness. Branch coverage is better, but you still can't be sure about why it branched. Condition coverage is more thorough and checks that for a comparison you've given it less than, equal and greater than conditions, but now your testing is expanding. Add combinatorial logic testing to check one of each condition true, all conditions true and no conditions true. Add boundary coverage to check for maximum/minimum value inputs and outputs. And so on.



                        On an average project, we reckoned on around 5-10% of our time being spent on coding, 10% on requirements, and 20% on design. The rest was testing. And that was for lower safety levels. For serious safety stuff where we had to unit test like that, we reckoned on doubling that test time.



                        We actually analysed this and decided to abandon function-level unit tests for most things. We found that the defect-to-cost ratio was not justifiable, and many of the bugs could not in fact manifest in the system because of limits higher up. Instead, we ramped up the amount of module-level and system-level testing, because evidence showed that this was where the majority of customer-visible bugs were found.



                        So your boss needs to decide why he thinks this is a good idea. He needs to define what "100%" measurements you should be aiming for. And he needs to justify a 10x increase in the time to complete your project, because that's what your estimated delivery date should now say.






                        share|improve this answer










                        New contributor




                        Graham is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
                        Check out our Code of Conduct.


















                        • @Wildcard I get where you're coming from, but we put a lot more effort into requirements capture, so it covered a lot of stuff which might more normally be left to design. When requirements can be implemented unambiguously, design falls out much more naturally. If anything, I've overestimated coding time though.
                          – Graham
                          18 hours ago










                        • @Wildcard I just read your first link. That's what we aimed for. Like the guy in there said, it's all about getting it in the spec.
                          – Graham
                          18 hours ago













                        up vote
                        3
                        down vote










                        up vote
                        3
                        down vote









                        I've spent a fair bit of time working on safety-related software. Shit fails, people die, kind of safety.



                        The first thing to note is that we had 100% test coverage of requirements. However that didn't have to be automated, sometimes because it wasn't practical, and sometimes because it wasn't physically possible. No safety-related standard requires 100% automated testing even for the most critical of systems, so the idea of doing it for an iPhone app is pretty ludicrous.



                        The next thing to note is that there are various sorts of coverage which you might want for unit testing. Statement coverage is a nice start, because it guarantees that at least all code is reachable, but it doesn't say anything about whether it's been tested for correctness. Branch coverage is better, but you still can't be sure about why it branched. Condition coverage is more thorough and checks that for a comparison you've given it less than, equal and greater than conditions, but now your testing is expanding. Add combinatorial logic testing to check one of each condition true, all conditions true and no conditions true. Add boundary coverage to check for maximum/minimum value inputs and outputs. And so on.



                        On an average project, we reckoned on around 5-10% of our time being spent on coding, 10% on requirements, and 20% on design. The rest was testing. And that was for lower safety levels. For serious safety stuff where we had to unit test like that, we reckoned on doubling that test time.



                        We actually analysed this and decided to abandon function-level unit tests for most things. We found that the defect-to-cost ratio was not justifiable, and many of the bugs could not in fact manifest in the system because of limits higher up. Instead, we ramped up the amount of module-level and system-level testing, because evidence showed that this was where the majority of customer-visible bugs were found.



                        So your boss needs to decide why he thinks this is a good idea. He needs to define what "100%" measurements you should be aiming for. And he needs to justify a 10x increase in the time to complete your project, because that's what your estimated delivery date should now say.






                        share|improve this answer










                        New contributor




                        Graham is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
                        Check out our Code of Conduct.









                        I've spent a fair bit of time working on safety-related software. Shit fails, people die, kind of safety.



                        The first thing to note is that we had 100% test coverage of requirements. However that didn't have to be automated, sometimes because it wasn't practical, and sometimes because it wasn't physically possible. No safety-related standard requires 100% automated testing even for the most critical of systems, so the idea of doing it for an iPhone app is pretty ludicrous.



                        The next thing to note is that there are various sorts of coverage which you might want for unit testing. Statement coverage is a nice start, because it guarantees that at least all code is reachable, but it doesn't say anything about whether it's been tested for correctness. Branch coverage is better, but you still can't be sure about why it branched. Condition coverage is more thorough and checks that for a comparison you've given it less than, equal and greater than conditions, but now your testing is expanding. Add combinatorial logic testing to check one of each condition true, all conditions true and no conditions true. Add boundary coverage to check for maximum/minimum value inputs and outputs. And so on.



                        On an average project, we reckoned on around 5-10% of our time being spent on coding, 10% on requirements, and 20% on design. The rest was testing. And that was for lower safety levels. For serious safety stuff where we had to unit test like that, we reckoned on doubling that test time.



                        We actually analysed this and decided to abandon function-level unit tests for most things. We found that the defect-to-cost ratio was not justifiable, and many of the bugs could not in fact manifest in the system because of limits higher up. Instead, we ramped up the amount of module-level and system-level testing, because evidence showed that this was where the majority of customer-visible bugs were found.



                        So your boss needs to decide why he thinks this is a good idea. He needs to define what "100%" measurements you should be aiming for. And he needs to justify a 10x increase in the time to complete your project, because that's what your estimated delivery date should now say.







                        share|improve this answer










                        New contributor




                        Graham is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
                        Check out our Code of Conduct.









                        share|improve this answer



                        share|improve this answer








                        edited 8 hours ago





















                        New contributor




                        Graham is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
                        Check out our Code of Conduct.









                        answered yesterday









                        Graham

                        1314




                        1314




                        New contributor




                        Graham is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
                        Check out our Code of Conduct.





                        New contributor





                        Graham is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
                        Check out our Code of Conduct.






                        Graham is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
                        Check out our Code of Conduct.












                        • @Wildcard I get where you're coming from, but we put a lot more effort into requirements capture, so it covered a lot of stuff which might more normally be left to design. When requirements can be implemented unambiguously, design falls out much more naturally. If anything, I've overestimated coding time though.
                          – Graham
                          18 hours ago










                        • @Wildcard I just read your first link. That's what we aimed for. Like the guy in there said, it's all about getting it in the spec.
                          – Graham
                          18 hours ago




















                        up vote
                        0
                        down vote













Welcome to managing your manager. I agree with the sentiment that your boss doesn't want to hear a flat "no", and a bare "no" isn't a complete answer on its own. But the honest answer is almost assuredly "no".

What you need is a way to communicate how and why to limit your test scope. Even if you tested every permutation of make and model, screen size and OS, you would soon discover there are also OS versions, then browsers, then browser versions. Before you know it a year has gone by and you are looking for a new job because you missed the bug that actually mattered.

So, really, you need a risk-based approach. Knowing your test variables and how each one affects your likelihood of finding a bug is crucial.

Find out what your customer base is actually using. Someone should already be monitoring or collecting this data; if the product hasn't been rolled out yet, get some consulting from a company like Perfecto.

Speaking of Perfecto, you will need to vary your test automation by device profile, using desired capabilities or something similar. If you have to write separate automation for each mobile device profile in your test bed, forget about it. Use visual checkpoints, or make sure the site or app you are testing is testable so that you can actually hook into the DOM.
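
As a sketch of how that can look in practice: the device names and versions below are invented, but the idea is one list of device profiles (ideally drawn from real usage data) feeding a single test definition, shown here with WebdriverIO and Appium-style capabilities:

    // deviceMatrix.js (illustrative sketch, not a drop-in config)
    const { remote } = require('webdriverio');

    // Keep one list of profiles instead of one test suite per device.
    const deviceProfiles = [
      { deviceName: 'iPhone X', platformVersion: '16.4' },
      { deviceName: 'iPhone SE (3rd generation)', platformVersion: '16.4' },
    ];

    async function runSmokeOn(profile) {
      const driver = await remote({
        hostname: 'localhost', // adjust for your grid or cloud vendor
        port: 4723,
        capabilities: {
          platformName: 'iOS',
          'appium:automationName': 'XCUITest',
          'appium:deviceName': profile.deviceName,
          'appium:platformVersion': profile.platformVersion,
        },
      });
      try {
        // ...run the same smoke checks here for every profile...
      } finally {
        await driver.deleteSession();
      }
    }

    (async () => {
      for (const profile of deviceProfiles) {
        await runSmokeOn(profile);
      }
    })();

The point is that adding a device to the matrix becomes a one-line change, not a new test suite.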



                        But, more than anything, decide where the risk is and have confidence that you are limiting your test scope appropriately so you can find the bugs instead of buying the magic beans of infinite test coverage.






share|improve this answer

answered 9 hours ago

ModelTester

New contributor

ModelTester is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
Check out our Code of Conduct.

























