Lecture 19
Autoregulation (continued)
Can we say something quantitative about…
- Noise reduction for autorepression.
- Switching time for auto-activation.
Last lecture we came up with an expression for the steady-state probability distribution for an autoregulating protein, but the expression was unwieldy.
\[p_n = \frac{p_0}{n!r^n} \prod_{j=0}^{n-1}f_j\]This is the steady state of the master equation below, with production rate \(f_n\) and degradation rate \(rn\):\[\dot{p}_n = f_{n-1}p_{n-1}+r(n+1)p_{n+1}-f_np_n-rnp_n\]In this class we will use the linear noise approximation (LNA) to get a nicer form.
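As a quick numerical illustration (not from the lecture), here is a minimal Python sketch that evaluates the product formula for an assumed repressive Hill production rate \(f(n) = k\,n_0^h/(n_0^h+n^h)\), the same form used later in this lecture; all parameter values are illustrative.

```python
import numpy as np

# Illustrative parameters (assumptions for this sketch, not lecture values)
k, r, n0, h = 20.0, 1.0, 10.0, 2.0
N = 200  # truncate the state space at N molecules

def f(n):
    """Repressive Hill production rate f(n) = k * n0^h / (n0^h + n^h)."""
    return k * n0**h / (n0**h + n**h)

# Build p_n recursively from p_n / p_{n-1} = f(n-1) / (r n),
# which is the product formula above in ratio form.
p = np.empty(N)
p[0] = 1.0
for n in range(1, N):
    p[n] = p[n - 1] * f(n - 1) / (r * n)
p /= p.sum()  # normalization fixes p_0

ns = np.arange(N)
nbar = ns @ p
var = (ns - nbar) ** 2 @ p
print(f"mean = {nbar:.2f}, variance = {var:.2f}, Fano = {var / nbar:.2f}")
```

With these numbers the mean lands near \(n_0 = 10\) and the Fano factor comes out close to \(1/2\), a preview of the noise-reduction result derived below.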
Linear Noise Approximation
The LNA requires the following:
- Molecule number is large, \(n \gg 1\).
- Noise is small, \(\eta \ll \bar{n}\).
We will treat \(n\) as a continuous variable, since for large molecule numbers a change of \(\pm 1\) is relatively tiny. We can then Taylor expand the shifted terms of the master equation about \(n\).
\[\begin{align} g(n-1) &\approx g(n)-(1)g'(n)+\frac{1}{2}(1)^2g''(n) \\ h(n+1) &\approx h(n) + (1)h'(n)+\frac{1}{2}(1)^2h''(n) \end{align}\]Substituting these into the master equation leads, after a little algebra, to the following expression for the time derivative of the probability.
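Spelling out that algebra (notation mine: \(g(n)\equiv f(n)p(n)\) collects the production terms and \(h(n)\equiv rn\,p(n)\) the degradation terms), the master equation reads
\[\dot{p}(n) = \big[g(n-1)-g(n)\big] + \big[h(n+1)-h(n)\big] \approx \left[-g'(n)+\frac{1}{2}g''(n)\right] + \left[h'(n)+\frac{1}{2}h''(n)\right]\]and collecting the first- and second-derivative terms gives: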
\[\Rightarrow \dot{p}(n) = -\partial_n \left\{ \left[ f(n)-rn \right]p(n) \right\} + \frac{1}{2} \partial_n^2 \left\{ \left[ f(n)+rn \right]p(n) \right\} \\ F(n) \equiv f(n)-rn \\ \Delta (n) \equiv f(n)+rn\] \[\dot{p} = -(pF)' + \frac{1}{2}(p\Delta)''\]This is the Fokker-Planck equation. The first term on the right-hand side represents drift of the probability density, whereas the second term represents diffusion. We can further simplify the equation in the following way:
\[\text{let } j(n) \equiv p(n)F(n)-\frac{1}{2} \partial_n \left[p(n)\Delta(n) \right] \\ \Rightarrow \dot{p}(n)=-\partial_nj(n)\]Here \(j(n)\) is the probability current. At steady state the time derivative of the probability distribution equals zero.
\[\dot{p}(n)= 0 = -\partial_nj(n) \\ \Rightarrow j(n) = \text{constant}\]Now let’s figure out what that constant is. For our probability distribution to be normalizable, it must go to zero as the number of molecules approaches infinity.
\[p(n\to\infty)=0 \ \ \text{and} \ \ p'(n\to\infty)=0 \\ \therefore j(n\to\infty)=0\]Since \(j\) is a constant, it must equal zero not just at infinity but for all values of \(n\).
\[j = 0\] \[\Rightarrow 0 = p(n)F(n)-\frac{1}{2} \partial_n \left[p(n)\Delta(n) \right]\\ p(n)F(n) = \frac{1}{2} \partial_n \left[p(n)\Delta(n) \right]\]Now let’s use the second approximation we stated at the very beginning, that noise is small. To use it, we change variables to \(n = \bar{n}+\eta\), so that our functions are expressed in terms of the small fluctuation \(\eta\) instead of \(n\).
\[\frac{\partial n}{\partial \eta} = 1 \ \Rightarrow \ \frac{d}{dn}=\frac{d}{d\eta}\]For probability to be conserved the following must be true:
\[p(n)dn = p(\eta)d\eta \\ \Rightarrow p(n) = p(\eta)\]We can do a series expansion since \(\eta\) is small.
\[F(n) = F(\bar{n}+\eta)=F(\bar{n})+\eta F'(\bar{n})+\frac{1}{2}\eta^2F''(\bar{n}) + ...\]Note that \(F(\bar{n})=f(\bar{n})-r\bar{n}=0\), since at the steady-state mean production balances degradation. Defining \(\bar{F}' \equiv F'(\bar{n})\) and ignoring terms of order \(\eta^2\) or higher in the expansion gives the following:
\[F(n) \approx \eta \bar{F}'\]We can do the same sort of thing for \(\Delta(n)\).
\[\Delta(n) = \Delta(\bar{n}+\eta)= \Delta(\bar{n})+\eta\Delta'(\bar{n})+... \\ \Delta(n) \approx \Delta(\bar{n}) \equiv \bar{\Delta}\] \[\therefore \eta p(\eta)\bar{F}' = \frac{1}{2} \partial_\eta \left[p(\eta)\bar{\Delta} \right] \\ \eta p \bar{F}' = \frac{1}{2} \frac{\partial p}{\partial \eta} \bar{\Delta}\]If you want to take my word for it, the solution for \(p(\eta)\) is a Gaussian:
\[p = Ae^{-\eta^2/2\sigma^2}\]And after doing some math we get the following relationship:
\[\sigma^2 = \frac{\bar{\Delta}}{(-2\bar{F}')}\]The denominator is a positive quantity, since \(\bar{F}'\) is negative at a stable steady state (the net production rate \(F\) decreases through zero at \(\bar{n}\)).
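If you would rather not take it on faith, the first-order equation above separates directly (a two-line check):
\[\frac{p'(\eta)}{p(\eta)} = \frac{2\bar{F}'}{\bar{\Delta}}\,\eta \ \Rightarrow \ \ln p = \frac{\bar{F}'}{\bar{\Delta}}\eta^2 + \text{const} \ \Rightarrow \ p = A\exp\left(-\frac{\eta^2}{2\sigma^2}\right) \ , \quad \sigma^2 = \frac{\bar{\Delta}}{-2\bar{F}'}\]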
Now let’s evaluate the variance for autorepression.
\[\text{Recall } f(n)=k\frac{n_0^h}{n_0^h+n^h} \ , \text{ and let } n_0\approx \bar{n}\ .\] \[\therefore \bar{\Delta} = \Delta(\bar{n})=f(\bar{n})+r\bar{n} = \frac{k}{2}+rn_0\] \[\bar{F}'=f'(\bar{n})-r \ , \qquad f'(n_0) = -\frac{kh}{4n_0}\] \[\Rightarrow \sigma^2 = \frac{k/2 + rn_0}{ \frac{kh}{2n_0}+2r}\]
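A further simplification (my algebra, using only results already on the board): at the steady-state mean, \(F(\bar{n})=0\) gives \(f(\bar{n})=r\bar{n}\), and with \(n_0\approx\bar{n}\) we have \(f(\bar{n})=k/2\), so \(k=2rn_0\). Substituting into \(\sigma^2\):
\[\sigma^2 = \frac{2rn_0}{rh+2r} = \frac{2\bar{n}}{h+2} \ \Rightarrow \ \frac{\sigma^2}{\bar{n}} = \frac{2}{h+2}\]
For an unregulated gene (\(h=0\)) this recovers the Poisson result \(\sigma^2=\bar{n}\); autorepression (\(h>0\)) pushes the Fano factor below one, which is the quantitative noise reduction promised at the top of the lecture.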
Switching Times in a Bistable System
The probability distribution of a bistable system has two local maxima, indicating the two separate stable states of the system. Let A be the location of the first local maximum and B the location of the second. We also define the local minimum of the probability distribution between A and B, which we designate C.
Let’s separate the probability distribution into two parts. One part is the probability of being below point C (associated with stable point A) and the other is the probability of being above point C (associated with stable point B).
\[\Pi_A = \sum_{n=0}^{n_c} p_n \\ \Pi_B = \sum_{n=n_c+1}^\infty p_n\]The average time the system spends near each stable point is proportional to the total probability of the corresponding region.
\[\Rightarrow \frac{\tau_A}{\tau_B} = \frac{\Pi_A}{\Pi_B}\]Kramers’ escape rate relates the rate of switching out of A or out of B to the effective energies associated with those states.
\[p \cdot \Delta \sim e^{-\beta U}\]Let’s assume the system starts at point A and calculate the escape rate to the other stable point, B.
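Before writing the rates, it helps to see where this effective potential comes from (a short derivation from the zero-flux condition, not given explicitly in lecture): set \(q(n)\equiv p(n)\Delta(n)\) in \(j=0\):
\[0 = pF - \frac{1}{2}(p\Delta)' \ \Rightarrow \ \frac{q'}{q} = \frac{2F(n)}{\Delta(n)} \ \Rightarrow \ p(n)\Delta(n) \propto \exp\left(\int^n \frac{2F(n')}{\Delta(n')}dn'\right) \equiv e^{-\beta U(n)}\]
so \(\beta U(n) \equiv -\int^n 2F/\Delta \, dn'\). Its minima sit where \(F=0\) with \(F'<0\) (the stable points), and at such a point \(\beta U''(\bar{n}) = -2\bar{F}'/\bar{\Delta} = 1/\sigma^2\), so the curvatures appearing in Kramers’ formula below are just inverse LNA variances.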
\[r_{A\to B} = D \sqrt{\beta | U''_A |} \sqrt{\beta | U''_C |}\ e^{-\beta(U_C-U_A)} = \frac{1}{\tau_A} \\ r_{B\to A} = D \sqrt{\beta | U''_B |} \sqrt{\beta | U''_C |}\ e^{-\beta(U_C-U_B)} = \frac{1}{\tau_B} \\ \frac{\tau_A}{\tau_B} = \frac{ \sqrt{\beta | U''_B |} }{ \sqrt{\beta | U''_A |}}\frac{e^{-\beta U_A}}{e^{-\beta U_B}}\]The barrier terms involving \(U_C\) cancel in the ratio, which is consistent with \(\tau_A/\tau_B = \Pi_A/\Pi_B\) above. Let’s calculate \(\beta | U'' |\) using Kramers’ escape rate.
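As a numerical sanity check on \(\tau_A/\tau_B = \Pi_A/\Pi_B\), here is a minimal Python sketch (parameters are illustrative assumptions; the auto-activating form \(f(n) = k_0 + k\,n^h/(n_0^h+n^h)\) is my choice of bistable circuit, not the lecture’s) that builds the exact steady-state distribution, locates the two modes A and B and the trough C between them, and computes the occupancy ratio:

```python
import numpy as np

# Illustrative bistable parameters (assumptions, not lecture values):
# basal production k0 plus an auto-activating Hill term.
k0, k, n0, h, r = 4.0, 76.0, 30.0, 4.0, 1.0
N = 200  # truncate the state space

def f(n):
    """Auto-activating production rate with basal level k0."""
    return k0 + k * n**h / (n0**h + n**h)

# Exact steady state from the product formula: p_n / p_{n-1} = f(n-1) / (r n)
p = np.empty(N)
p[0] = 1.0
for n in range(1, N):
    p[n] = p[n - 1] * f(n - 1) / (r * n)
p /= p.sum()

# Locate the two local maxima (A, B) and the minimum C between them
peaks = [n for n in range(1, N - 1) if p[n - 1] < p[n] >= p[n + 1]]
A, B = peaks[0], peaks[-1]
C = A + int(np.argmin(p[A:B]))

Pi_A, Pi_B = p[: C + 1].sum(), p[C + 1 :].sum()
print(f"A = {A}, C = {C}, B = {B}")
print(f"tau_A / tau_B = Pi_A / Pi_B = {Pi_A / Pi_B:.3g}")
```

Computing the same ratio from the curvature formula above, via \(\beta U(n) = -\int^n 2F/\Delta\,dn'\), should agree up to the accuracy of the saddle-point approximation.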