Building a system of ODEs, efficiently updating parameters and evaluating vectors of ODEs
I am building quite large systems of ODEs programmatically, and I intend to find the steady states of these ODEs under different parameter values. A boiled-down example of my code is shown below. Comments with # explain what the code does; comments with """ """ point out the problems in the code.

The first problem I run into is that the evaluations of __call__ in the R1 - R3 and ODE classes are fairly slow (I looked at it with cProfile). I am generally wondering why that is and how to make it faster. In the ODE class and the system function below I run through a loop, which I expect is an inefficient way of doing things; I just want to send the same state variable to all functions in some vector. For this I already have a related Stack Overflow question.
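To make the intent concrete, this is roughly the kind of call I would like to replace the loop with (a sketch only; funcs here is a placeholder for the vector of R1/R2/R3 callables, and I realise map and np.fromiter still iterate in Python under the hood):

change = np.fromiter((f(state) for f in funcs), dtype=float)  # evaluate every term at the same state
# or, equivalently, with map:
change = np.array(list(map(lambda f: f(state), funcs)))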
The second problem is that I think the current solution for updating parameter values through the F and Parameters classes is terrible. Refactoring becomes tedious, since you need to change a parameter's name in the Parameters object and then again in the corresponding @property of the F class. It also seems to be slow. What are your suggestions for updating parameters? How do I make it feel like call by reference from the R1 - R3 classes?
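To illustrate what I mean by "call by reference": conceptually I would like every R1 - R3 object to read its rate constant from one shared container that I can overwrite in place, roughly along these lines (a sketch only, not my current code; k_values and idx are made-up names for the illustration):

k_values = [1.0, 0.4, 0.5, 1.0]        # one shared container of rate constants

class R1_shared:
    def __init__(self, idx, i):
        self.idx = idx                  # index into the shared parameter container
        self.i = i

    def __call__(self, state):
        return k_values[self.idx] * state[self.i]

r = R1_shared(idx=1, i=0)
k_values[1] = 0.8                       # the update is immediately visible inside r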
import numpy as np
from scipy import integrate
import pandas as pd
import matplotlib.pyplot as plt


class F:  # All R1 - Rn have this as a common parent class. It just serves to dish out parameters from a Parameters object.
    """
    Together with the Parameters class this is the one that I would like to change!
    It seems arduous to have a class just to be able to handle parameter changes efficiently.
    It is also hard to maintain, since I will need to change the @property every
    time I change the name of some parameter like k.
    When profiling my code it also emerged that calling @property k in this class is actually quite slow.
    I was wondering why that is and what to do about it.
    """
    def __init__(self, param):
        self.param = param

    @property
    def k(self):
        return self.param.k


class Parameters:
    """
    This class only exists to store parameters.
    The idea is to emulate call-by-reference value lookup from classes R1 - R3.
    """
    def __init__(self, k):
        self.k = k
class R1(F):  # its object represents an equation needed to build an ODE function for some state variable
    def __init__(self, param, i):
        F.__init__(self, param)
        self.i = i

    def __call__(self, state):
        return self.k * state[self.i]


class R2(F):  # its object represents an equation needed to build an ODE function for some state variable
    def __init__(self, param, i, j):
        F.__init__(self, param)
        self.i = i
        self.j = j

    def __call__(self, state):
        return self.k * state[self.i] * state[self.j]


class R3(F):  # its object represents an equation needed to build an ODE function for some state variable
    def __init__(self, param):
        F.__init__(self, param)

    def __call__(self, state):
        return self.k  # HERE we are, for instance, looking up the most current value of the parameter in the Parameters object
class ODE:  # its object holds the full ODE function
    def __init__(self):
        self.r = []
        self.s = []

    def __add__(self, other):  # has behavior such that ODE += other (e.g. R1)
        self.r.append(other)
        self.s.append(1)
        return self

    def __sub__(self, other):  # has behavior such that ODE -= other (e.g. R1)
        self.r.append(other)
        self.s.append(-1)
        return self

    def finalize(self):
        self.r = np.array(self.r)
        self.s = np.array(self.s)
        self.change = np.zeros(self.s.shape[0])

    def __call__(self, state):  # when called the full ODE is evaluated, for example ODE = R1 - R3 + R2
        """
        In the Stack Overflow question above I already inquired whether this is the fastest way to evaluate the ODE.
        I would like to find a way to do: self.change = map(input_to_apply, list_of_functions), so that
        the same input is applied to all functions without the for loop.
        When profiling my code the __call__ functions are by far the major bottleneck time-wise. If this
        could be made more efficient somehow, that would be beautiful.
        """
        for i, rr in enumerate(self.r):
            self.change[i] = self.s[i] * rr(state=state)
        return self.change.sum()
state0 = np.array([  # initial state of the system
    1.0,  # A
    2.0,  # B
    0.0   # C
])

S = np.array([  # nice representation of the 3 states and how the variables interact
    # k1  k2  k3  k4
    [  1, -1, -1,  0],  # A
    [  0,  1, -1,  0],  # B
    [  0,  0,  1, -1]   # C
])

init_parameters = np.array([  # IMPORTANT: these are the parameters I would like to vary
    1.0,  # k1
    0.4,  # k2
    0.5,  # k3
    1.0,  # k4
])

new_parameters = np.array([  # as an example these will be the new parameters
    1.0,  # k1
    0.4,  # k2
    1.5,  # k3
    1.0,  # k4
])

p = np.array([Parameters(k=i) for i in init_parameters])  # generate parameter objects
ode = np.array([ODE() for i in state0])  # generate 3 ODE objects for all state elements A, B & C

# Here we fill the ODE objects with the elements that govern change in state.
# In my original code this is done automatically, and there are 10s to 100s of ODEs depending on the system.
# Also, here we create 3 R2 objects with the exact same parameters, whereas
# in the real code these would have different values for i and j.
ode[0] += R3(p[0])            # k1
ode[0] -= R1(p[1], i=0)       # k2
ode[0] -= R2(p[2], i=0, j=1)  # k3
ode[1] += R1(p[1], i=0)       # k2
ode[1] -= R2(p[2], i=0, j=1)  # k3
ode[2] += R2(p[2], i=0, j=1)  # k3
ode[2] -= R1(p[3], i=2)       # k4

for o in ode:  # just to make np.array out of lists in the ODE objects
    o.finalize()
change = np.zeros(3)  # vector to temporarily store state changes

def system(t, state):  # this is the system that we are going to solve
    """
    Same thing, I would like to somehow do: change = map(input_to_apply, list_of_functions)
    """
    for i, o in enumerate(ode):
        change[i] = o(state)
    return change

def solve(ax):  # solves the system of ODEs and plots it directly to a matplotlib axis
    sol = integrate.solve_ivp(fun=system, t_span=(0, 12), y0=state0, method='LSODA')
    res = pd.DataFrame(sol.y.T, columns=['A', 'B', 'C'])
    res.index = sol.t
    res.plot(ax=ax)

f, (ax1, ax2) = plt.subplots(1, 2, sharey=True, figsize=(15, 6))
solve(ax1)  # solve and plot for init_parameters

for pp, new_k in zip(p, new_parameters):  # change parameters to new parameters
    pp.k = new_k
solve(ax2)  # solve and plot for new_parameters
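For completeness, the fully vectorised form I am ultimately hoping to approach would use the S matrix defined above together with a single rate vector, something like the sketch below (it hard-codes this 3-species example instead of being generated programmatically, and the parameter vector k would still have to be passed to the solver somehow):

def rates(state, k):
    A, B, C = state
    return np.array([k[0], k[1] * A, k[2] * A * B, k[3] * C])  # r1..r4

def system_vectorised(t, state, k=init_parameters):
    return S @ rates(state, k)  # d(state)/dt as a single matrix-vector product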
python performance python-3.x numerical-methods scipy