Lecture 17

Network Motifs

In this lecture we will broaden our scope to a whole genetic network.

We define a motif as a pattern that is over-represented in a network. A pattern is judged to be over-represented by comparing how often it appears in the real network with how often it appears in a randomized network. We can create a randomized network as follows (a minimal code sketch follows the list below):

  • Conserve the number of nodes and edges from the real, original network.
  • Randomly reassign edges to the nodes.
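
As a minimal sketch of this randomization, assuming the network is stored as a list of directed (source, target) pairs (the representation and function name are illustrative, not from the lecture):

```python
import random

def randomize_network(num_nodes, edges, seed=0):
    """Return a randomized edge list with the same node and edge counts.

    Each edge keeps its source but is given a uniformly random target,
    which conserves N and E while scrambling the wiring.
    """
    rng = random.Random(seed)
    return [(src, rng.randrange(num_nodes)) for src, _ in edges]

# Toy example: 5 nodes, 5 edges, one self-edge (2 -> 2) in the original.
edges = [(0, 1), (1, 2), (2, 2), (3, 4), (4, 0)]
rand_edges = randomize_network(5, edges)
print(rand_edges)
print("self-edges:", sum(1 for s, t in rand_edges if s == t))
```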

Recall that a self-edge in a network represents autoregulation: a gene whose product regulates its own expression.

Example Network

Let our example network consist of \(N\) nodes and \(E\) edges. If each edge is assigned its target node uniformly at random, the probability that a given edge is a self-edge is

\[q = \frac{1}{N} \ .\]

Let \(p_n\) be the probability that \(n\) out of \(E\) edges are self-edges.

\[p_n = {E \choose n} q^n (1-q)^{E-n}\]

This is a binomial distribution, with the following mean and variance:

\[\bar{n} = Eq \ , \ \ \ \ \ \sigma^2 = Eq(1-q)\]
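
As a quick numerical sanity check of these properties (the edge and node counts here are purely illustrative), one can compare them against `scipy.stats.binom`:

```python
from scipy.stats import binom

E, N = 500, 400      # illustrative edge and node counts
q = 1.0 / N          # probability that a given edge is a self-edge

dist = binom(E, q)
print("mean    :", dist.mean(), "  E*q       =", E * q)
print("variance:", dist.var(), "  E*q*(1-q) =", E * q * (1 - q))
```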

We can derive these expressions by constructing a generating function.

Generating Function

\[\begin{align} G(z) &= \sum_{n=0}^E p_n z^n = \sum_{n=0}^E {E \choose n} q^n (1-q)^{E-n} z^n \\ G(z) &= \sum_{n=0}^E {E \choose n} (qz)^n (1-q)^{E-n} \\ G(z) &= (qz + 1 - q)^E \\ G(z) &= \left[q(z-1)+1\right]^E \\ F(z) &\equiv \ln G(z) = E \ln\left[q(z-1)+1\right] \end{align}\]

The mean and the variance can be calculated using the following formulas:

\[\bar{n} = F'(1) \\ \sigma^2 - \bar{n} = F''(1)\]
\[F'(z) = \frac{Eq}{q(z-1)+1} \\ F''(z) = -\frac{Eq^2}{\left[q(z-1)+1\right]^2}\]
\[\Rightarrow \bar{n}=Eq \\ \Rightarrow \sigma^2 = Eq - Eq^2 = Eq(1-q)\]
\[\text{Since } q=\frac{1}{N}\ll 1 \\ \sigma^2 \approx Eq = \bar{n}\]
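
Here is a small symbolic check of this derivation with `sympy`, confirming that \(F'(1)\) and \(F''(1)+\bar{n}\) reduce to the stated mean and variance:

```python
import sympy as sp

E, q, z = sp.symbols('E q z', positive=True)

# F(z) = ln G(z) for the binomial generating function G(z) = [q(z-1)+1]^E.
F = E * sp.log(q * (z - 1) + 1)

n_bar = sp.diff(F, z).subs(z, 1)                 # F'(1)
sigma2 = sp.diff(F, z, 2).subs(z, 1) + n_bar     # F''(1) + n_bar

print(sp.simplify(n_bar))    # E*q
print(sp.factor(sigma2))     # equivalent to E*q*(1 - q)
```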

So in this limit the binomial distribution looks more and more like a Poisson distribution with mean \(\bar{n}=Eq\).
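
To see this numerically, here is a short comparison of the two distributions for a case with \(q = 1/N \ll 1\) (the numbers are chosen only for illustration):

```python
from scipy.stats import binom, poisson

E, N = 1000, 500
q = 1.0 / N
lam = E * q          # the Poisson parameter equals the binomial mean E*q

print(" n   binomial   poisson")
for n in range(6):
    print(f"{n:2d}   {binom.pmf(n, E, q):.6f}   {poisson.pmf(n, lam):.6f}")
```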

Now let's turn our attention back to the example network. Let's assign some realistic values to our network and see how it compares to the random network we derived.

\[E_\text{real}= 519 \ , \ \ \ N_\text{real}=424 \ , \ \ \ n_\text{real}=40\]

Using the real network’s number of nodes and edges, we can compare the number of self-edges observed in the real network to the number expected in a random network.

\[n_\text{rand} = \bar{n} \approx 1.2 \pm 1.1\]
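
This number follows directly from the formulas above; as an illustration, one can also ask how unlikely 40 or more self-edges would be by chance in the Poisson approximation:

```python
import math
from scipy.stats import poisson

E, N, n_real = 519, 424, 40
q = 1.0 / N

n_bar = E * q
sigma = math.sqrt(E * q * (1 - q))
print(f"random expectation: {n_bar:.1f} +/- {sigma:.1f}")    # ~1.2 +/- 1.1

# Probability of seeing n_real or more self-edges in the random network:
print("P(n >= 40) ~", poisson.sf(n_real - 1, n_bar))          # astronomically small
```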

The number of self-edges in the real network is far larger than that expected in a random network! This means that autoregulation is over-represented. Over-representation, however, does not necessarily mean that the corresponding feature is important, which leads to the question of whether autoregulation is purposefully kept inside the genetic network.

Is autoregulation actively conserved?

Let's use E. coli as an example for our argument. E. coli cells divide with a characteristic time

\[\tau_\text{div} \sim 20 \ \text{min}\]

The probability of the cell making an error in the process of copying its DNA is

\[p_\text{error} \sim 10^{-9}/\text{ base pair}\]

So if we start out with 1 cell, after 12 hours (about 36 divisions) there will be approximately \(10^{11}\) cells. Therefore, at any given base pair, about 100 of those cells will carry a mutation, so mutations that disable an autoregulatory binding site arise all the time. If autoregulation were not functionally important to E. coli, the corresponding self-edges would therefore be lost on a time scale of days.
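
The arithmetic behind this estimate, written out with the numbers quoted above:

```python
tau_div_min = 20       # division time, minutes
hours = 12
p_error = 1e-9         # copying errors per base pair per replication

divisions = hours * 60 // tau_div_min    # 36 generations in 12 hours
cells = 2 ** divisions                   # ~7e10, i.e. of order 10^11 cells
mutants_per_bp = cells * p_error         # ~70 cells mutated at any given base pair

print(divisions, cells, mutants_per_bp)  # of order 100 mutants per base pair
```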

In summary:

  • Autoregulation is over-represented.
  • Autoregulation is conserved.

Autoregulation can be either autoactivation or autorepression. Let's first examine why autorepression could be useful to a cell.

Autorepression

Autorepression can be useful in the following ways:

  1. Reduces the response time of the gene.
  2. Buffers against fluctuations.

Recall our simple model for autorepression.

\[\dot{x} = \underbrace{\frac{k}{V}\frac{K_d^h}{K_d^h+x^h}}_{f_-(x)} -rx\] \[\dot{x} = f_-(x)-rx\]

As time progresses the concentration approaches its steady-state value.

\[f_-(x^*) = rx^* \ , \ \ \ x^* = \text{steady state concentration}\]

The response time is the characteristic time scale related to how fast the concentration approaches its steady state.
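
As a numerical illustration, the sketch below integrates the autorepression model above and reads off the time at which the concentration first reaches half of its steady-state value; all parameter values are made up for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Illustrative parameters (not from the lecture): production strength k/V,
# repression threshold K_d, Hill coefficient h, removal rate r.
k_over_V, K_d, h, r = 10.0, 1.0, 2, 1.0

def f_minus(x):
    """Repressive production term f_-(x) = (k/V) K_d^h / (K_d^h + x^h)."""
    return k_over_V * K_d**h / (K_d**h + x**h)

def rhs(t, y):
    return [f_minus(y[0]) - r * y[0]]

# Steady state from f_-(x*) = r x*.
x_star = brentq(lambda x: f_minus(x) - r * x, 1e-9, 1e3)

sol = solve_ivp(rhs, (0.0, 20.0), [0.0], dense_output=True, rtol=1e-8, atol=1e-10)
t_grid = np.linspace(0.0, 20.0, 4001)
x_of_t = sol.sol(t_grid)[0]

tau = t_grid[np.argmax(x_of_t >= 0.5 * x_star)]   # first time x crosses x*/2
print(f"x* = {x_star:.3f}, tau = {tau:.3f}, ln(2)/r = {np.log(2)/r:.3f}")
```

With these made-up numbers the half-rise time comes out well below \(\ln 2 / r\), the response time of an unregulated gene derived below.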

Strong Repression

In the case of strong repression we expect the response time to be short. In this limit we have the following relationship:

\[K_d \ll x^* \\ \Rightarrow \dot{x} = \frac{k}{V} \left(\frac{K_d}{x}\right)^h-rx \\ x^h \dot{x} = \underbrace{\frac{k}{V} K_d^h}_{\equiv \, A} -rx^{h+1}\]
\[\text{Let } u \equiv x^{h+1} \\ \dot{u} = (h+1)x^h\dot{x} \\ \Rightarrow \frac{1}{h+1}\dot{u} = A-ru\]
\[\dot{u} = 0 \ \Rightarrow u^* = \frac{A}{r} \\ \dot{u} = (h+1)r(u^*-u) \\ \Rightarrow u(t) = u^*\left[1-e^{-(h+1)rt}\right] \ \ \text{(taking } u(0)=0\text{)} \\ x(t) = x^*\left[1-e^{-(h+1)rt}\right]^{1/(h+1)}\]
\[\text{Define } x(\tau)=\frac{1}{2}x^* \\ \therefore \tau = \frac{1}{r}\left[\frac{1}{h+1}\ln\left(\frac{2^{h+1}}{2^{h+1}-1}\right)\right]\]
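
A quick numerical consistency check (with made-up values of \(r\) and \(x^*\)) that this closed-form \(\tau\) really is the time at which the analytic \(x(t)\) reaches half of \(x^*\):

```python
import numpy as np

def tau_analytic(r, h):
    """Closed-form response time derived above."""
    return np.log(2.0 ** (h + 1) / (2.0 ** (h + 1) - 1)) / ((h + 1) * r)

def x_analytic(t, x_star, r, h):
    """x(t) = x* [1 - exp(-(h+1) r t)]^(1/(h+1)) from the derivation above."""
    return x_star * (1.0 - np.exp(-(h + 1) * r * t)) ** (1.0 / (h + 1))

r, x_star = 1.0, 2.0   # illustrative values
for h in (1, 2, 4):
    tau = tau_analytic(r, h)
    print(h, round(tau, 4), x_analytic(tau, x_star, r, h) / x_star)  # last value: 0.5
```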

Let's check the limiting case of no autorepression.

\[\Rightarrow h = 0 \\ \tau = \frac{1}{r}\ln 2\]

The other limit is more intuitive. Let's check the limit of very strong autorepression. We expect the response time to become very small since the repression is so strong, and indeed we recover this result.

\[\Rightarrow h\to \infty \\ \tau \to 0\]
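
Evaluating the same expression for \(\tau\) over a range of Hill coefficients shows both limits (with \(r = 1\) purely for illustration):

```python
import numpy as np

def tau(r, h):
    return np.log(2.0 ** (h + 1) / (2.0 ** (h + 1) - 1)) / ((h + 1) * r)

r = 1.0
print("h =  0:", tau(r, 0), "  (equals ln 2 =", np.log(2), ")")
for h in (1, 2, 5, 10, 50):
    print(f"h = {h:2d}:", tau(r, h))   # decreases toward 0 as h grows
```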


Written on March 25, 2015