Subtracting very small probabilities - How to compute? [duplicate]



























This question already has an answer here:




  • Computation of likelihood when $n$ is very large, so likelihood gets very small?

    2 answers




This question is an extension of a related question about adding small probabilities. Suppose you have log-probabilities $\ell_1 \geqslant \ell_2$, where the corresponding probabilities $\exp(\ell_1)$ and $\exp(\ell_2)$ are too small to be distinguished from zero in the initial computational facility being used (e.g., base R). We want to find the log-difference of these probabilities, which we denote by:



$$\ell_- \equiv \ln \big( \exp(\ell_1) - \exp(\ell_2) \big)$$



Questions: How can you effectively compute this log-difference? Can this be done in base R? If not, what is the simplest way to do it with package extensions?























marked as duplicate by Xi'an, kjetil b halvorsen, mdewey, Carl, Ferdi Dec 18 at 16:11


This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.















  • "where the corresponding probabilities exp(ℓ1) and exp(ℓ2) are too small to be distinguished from zero" - I think you mean one instead of zero?
    – Don Hatch
    Dec 18 at 7:20










  • @Don Hatch: In the context of small probabilities it will usually be the case that $\ell_1$ and $\ell_2$ are high-magnitude negative numbers, so $\exp(\ell_1) \approx \exp(\ell_2) \approx 0$. The former are the log-probabilities and the latter are the actual probabilities, which are near zero.
    – Ben
    Dec 18 at 7:25












  • Ah, I see. Thank you.
    – Don Hatch
    Dec 18 at 7:28










  • Why convert log probabilities rather than just take the difference of the logarithms of probabilities, which is the same as examining the log of the ratio of probabilities? This latter would seem more logical because if we did a log transformation for good reason, we should not abandon that good reason by back transforming before doing a calculation, but rather do testing in the preferred transformed system.
    – Carl
    Dec 18 at 13:44












  • @Carl: There are many instances when one wishes to add or subtract small probability values. In these cases, performing an entirely different operation (e.g., ratio) is not helpful. The purpose of the logarithmic transformation is to be able to store very small probabilities on a scale where they are distinguishable from zero.
    – Ben
    Dec 18 at 22:39
















r computational-statistics underflow






asked Dec 18 at 4:15









Ben

2 Answers
To see how to deal with differences of this kind, we first note a useful mathematical result concerning differences of exponentials:



$$\begin{equation} \begin{aligned}
\exp(\ell_1) - \exp(\ell_2)
&= \exp(\ell_1) (1 - \exp(-(\ell_1 - \ell_2))). \\[6pt]
\end{aligned} \end{equation}$$



This result converts the difference to a product, which allows us to present the log-difference as:



$$\begin{equation} \begin{aligned}
\ell_-
&= \ln \big( \exp(\ell_1) - \exp(\ell_2) \big) \\[6pt]
&= \ln \big( \exp(\ell_1) (1 - \exp(-(\ell_1 - \ell_2))) \big) \\[6pt]
&= \ell_1 + \ln (1 - \exp(-(\ell_1 - \ell_2))). \\[6pt]
\end{aligned} \end{equation}$$



In the case where $\ell_1 = \ell_2$ we obtain $\ell_- = \ell_1 + \ln 0 = -\infty$. Using the Maclaurin series expansion for $\ln(1-x)$ we obtain the formula:



$$\begin{equation} \begin{aligned}
\ell_-
&= \ell_1 - \sum_{k=1}^\infty \frac{\exp(-k(\ell_1 - \ell_2))}{k} \quad \quad \quad \text{for } \ell_1 \neq \ell_2. \\[6pt]
\end{aligned} \end{equation}$$



Since $\exp(-(\ell_1 - \ell_2)) < 1$, the terms in this expansion diminish rapidly (faster than exponential decay). If $\ell_1 - \ell_2$ is large then the terms diminish particularly rapidly. In any case, this expression allows us to compute the log-difference to any desired level of accuracy by truncating the infinite sum after a desired number of terms.
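As a sketch of the truncation idea (the function name logdiff_series and the default of 30 terms are illustrative choices, not part of the answer), the truncated series can be evaluated directly in base R:

```r
# Truncated Maclaurin series for the log-difference.
# n = 30 terms is an arbitrary illustrative choice; since the terms decay
# like exp(-k*(l1-l2))/k, more terms are needed when l1 - l2 is very small.
logdiff_series <- function(l1, l2, n = 30) {
  k <- seq_len(n)
  l1 - sum(exp(-k * (l1 - l2)) / k)
}

logdiff_series(-2000, -2002)  # close to -2000 + log(1 - exp(-2))
```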





Implementation in base R: It is possible to compute this log-difference accurately in base R using the log1p function. This is a primitive function in the base package that computes $\ln(1+x)$ for an argument $x$ (with accurate computation even for $|x| \ll 1$). This primitive function can be used to give a simple function for the log-difference:



logdiff <- function(l1, l2) { l1 + log1p(-exp(-(l1 - l2))) }


Implementation with the VGAM package: Mächler (2012) analyses accuracy issues in evaluating the function $\ln(1-\exp(-|x|))$, and suggests that use of the base R functions may involve a loss of accuracy. It is possible to compute this log-difference more accurately using the log1mexp function in the VGAM package. This gives an alternative function for the log-difference:



logdiff <- function(l1, l2) { l1 + VGAM::log1mexp(l1 - l2) }
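As a quick sanity check (a sketch with arbitrary test values): for inputs where the naive computation still works both approaches agree, while for very negative inputs only the stable base-R version returns a finite answer:

```r
# Stable log-difference via log1p, as defined above.
logdiff <- function(l1, l2) { l1 + log1p(-exp(-(l1 - l2))) }

log(exp(-5) - exp(-7))   # naive computation: fine at this scale
logdiff(-5, -7)          # same value

log(exp(-2000) - exp(-2002))  # naive: -Inf, since both terms underflow to 0
logdiff(-2000, -2002)         # stable: about -2000.145
```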





answered Dec 18 at 4:15

Ben

    The following workaround is often very useful for these sorts of problems:




    1. Subtract the smaller of l1 and l2 from each of l1 and l2 (effectively, we have divided the probabilities by the constant exp(min(l1,l2))).

    2. Now compute the difference using standard functions (you can use log1p and expm1 if you want).

    3. Afterwards, add back the quantity you subtracted in step 1.


    A simple example:



    l1 <- -2000 ## exp(-2000) is computational zero
    l2 <- -2002
    z <- min(l1,l2)
    l1 <- l1 - z
    l2 <- l2 - z
    ## now we are guaranteed that one of them is zero
    y <- log(exp(l1) - exp(l2))
    y + z


    returns the correct value -2000.145, which is equal to -2000 + log(1-exp(-2)).
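    The three steps above can be wrapped into a small function (the name logdiff_shift is hypothetical, for illustration only):

```r
# Shift both log-probabilities so the smaller becomes 0, take the
# difference on the ordinary scale, then undo the shift.
logdiff_shift <- function(l1, l2) {
  z <- min(l1, l2)
  log(exp(l1 - z) - exp(l2 - z)) + z
}

logdiff_shift(-2000, -2002)  # about -2000.145, matching the example above
```

    Note that this relies on exp(l1 - z) not overflowing, i.e., on l1 and l2 being of broadly similar magnitude.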






    edited Dec 18 at 9:07
    answered Dec 18 at 9:02

    JDL