Let $K_{n}$ denote an $n$-clique and $K_{n,m}$ denote a complete bipartite
graph. We are interested in the least number $m$, denoted as $m(n)$,
such that $K_{n}$ can be edge-decomposed into $m$ complete bipartite
graphs. Observe that $m(1)=0$, $m(2)=1$, $m(3)=2$,
$m(4)=3$, etc. It is not difficult to deduce $m(n)\le n-1$: we can
obtain a decomposition of $K_{n+1}$ by adding the star $K_{1,n}$ to a
decomposition of $K_{n}$, which implies $m(n+1)\le m(n)+1$.
The nontrivial part lies in the reverse direction: how can we prove a matching
lower bound for $m(n)$? If we tried to establish the lower bound by
analyzing the graphs directly, we would have to show that no $K_n$ admits a decomposition smaller than the one we offer, which is far from easy.
R. Graham proposed a very clever method to establish $m(n)\ge n-1$
based on elementary linear algebra. He used $n$ variables
$x_{1},...,x_{n}$ to represent the $n$ vertices $v_{1},...,v_{n}$ of $K_{n}$, so that each edge $\overline{v_{i}v_{j}}$ corresponds to the monomial $x_{i}x_{j}$.
The key observation is: if $(A_{1},B_{1}),...,(A_{m},B_{m})$
is a decomposition of $K_{n}$ into complete bipartite graphs, then
the equation
\[
\sum_{1\le i < j\le n}x_{i}x_{j}=\sum_{k=1}^{m}\sum_{x\in A_{k}}\sum_{y\in B_{k}}xy=\sum_{k=1}^{m}\left(\sum_{x\in A_{k}}x\right)\left(\sum_{y\in B_{k}}y\right)
\]
must hold for any assignment of $x_{1},...,x_{n}$: every edge $\overline{v_{i}v_{j}}$ belongs to exactly one of the bipartite graphs, so the term $x_{i}x_{j}$ appears exactly once on each side. (An identity like this is called double counting in combinatorics jargon.)
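As a quick sanity check, the identity can be verified symbolically for a concrete decomposition. The following minimal Python sketch (assuming the sympy library is available; the decomposition of $K_4$ into three stars is hand-picked for illustration) confirms it:

# Verify the double-counting identity for a decomposition of K_4 into
# the stars ({v1},{v2,v3,v4}), ({v2},{v3,v4}), ({v3},{v4}).
from itertools import combinations
from sympy import symbols, expand

x = symbols('x1:5')  # the variables x1, ..., x4

lhs = sum(x[i] * x[j] for i, j in combinations(range(4), 2))

decomposition = [({0}, {1, 2, 3}), ({1}, {2, 3}), ({2}, {3})]
rhs = sum(sum(x[i] for i in A) * sum(x[j] for j in B)
          for A, B in decomposition)

assert expand(lhs - rhs) == 0  # the two sides agree as polynomials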
With the observation in mind, he considered the following homogeneous
system of linear equations:
\[
\begin{cases}
\sum_{i=1}^{n}x_{i}=0,\\
\sum_{x\in A_{k}}x=0, & k=1,...,m
\end{cases}
\]
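(As an aside, one can check numerically that for the star decomposition achieving $m=n-1$, this system admits only the trivial solution, so the counting argument below indeed has no traction there. A minimal sketch, assuming numpy is available:)

# For the star decomposition A_k = {v_k}, k = 1..n-1, the coefficient
# matrix of the system has full rank n, so its nullspace is trivial.
import numpy as np

n = 7
rows = [np.ones(n)] + [np.eye(n)[k] for k in range(n - 1)]
M = np.stack(rows)          # the (n x n) coefficient matrix
assert np.linalg.matrix_rank(M) == n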
Now suppose that $m < n-1$. The system then has $n$ unknowns but only $m+1 < n$ equations, so it admits a nontrivial solution. Fix such a nontrivial assignment of $x_{1},...,x_{n}$. Since $\sum_{x\in A_{k}}x=0$ for each $k$, the double-counting identity gives $\sum_{1\le i < j\le n}x_{i}x_{j}=0$, and since $\sum_{i=1}^{n}x_{i}=0$ as well,
\[ 0=2\cdot\sum_{1\le i < j\le n}x_{i}x_{j}=\left(\sum_{i=1}^{n}x_{i}\right)^{2}-\sum_{i=1}^{n}x_{i}^{2}=-\sum_{i=1}^{n}x_{i}^{2}.
\]
This forces $x_{1}=...=x_{n}=0$, contradicting nontriviality. It follows that $m(n)\ge n-1$, and hence $m(n)=n-1$.
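To make the matching upper bound concrete, here is a minimal Python sketch (pure standard library; the construction is the inductive one described above) that builds the star decomposition of $K_n$ and checks that its $n-1$ stars cover every edge exactly once:

# The k-th part is the star ({v_k}, {v_{k+1}, ..., v_n}); together the
# n-1 stars cover each edge of K_n exactly once.
from itertools import combinations

def star_decomposition(n):
    return [({k}, set(range(k + 1, n))) for k in range(n - 1)]

def check(n):
    parts = star_decomposition(n)
    covered = [frozenset((a, b)) for A, B in parts for a in A for b in B]
    assert len(covered) == len(set(covered))  # no edge covered twice
    assert set(covered) == {frozenset(e) for e in combinations(range(n), 2)}
    return len(parts)

for n in range(2, 8):
    assert check(n) == n - 1  # exactly n-1 complete bipartite graphs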
Monday, September 8, 2014
A note on expectation transformer semantics
An expectation is a function that maps program states to nonnegative
real values; in practice it is written as a non-negative real-valued expression
over the state variables. When the program contains randomized operations, an expectation is in effect a nonnegative random variable over the distribution induced by the reachability probabilities of program states. An expectation is called a post-expectation if it is to
be evaluated on final states, and a pre-expectation
if it is to be evaluated on initial states.
Intuitively, we may regard a post-expectation as a random variable representing nonnegative "rewards" of final states. One can examine the program beforehand to determine, for each initial state, the expected reward obtained when the program runs repeatedly from there. This quantity, called the weakest pre-expectation, is a nonnegative function that maps an initial state to the expected reward over the final states reachable from that state. In other words, a weakest pre-expectation evaluates to the expected value of the post-expectation over the distribution of final states produced by program executions.
Given a probabilistic program $prog$ and a post-expectation $postE$ over final states, we use $wp(prog,postE)$ to denote the weakest pre-expectation of $prog$ with respect to $postE$. As a function over state variables, $wp(prog,postE)$ maps an initial state of $prog$ to the expected value of $postE$ in the final distribution reached via executions of $prog$. In particular, $wp(prog,1)$ maps an initial state to the probability that the program will terminate from that state, and $wp(prog,[pred])$ is the probability that the program will terminate in a state satisfying predicate $pred$. (Here we use square brackets $[\cdot]$ to denote an indicator function, so that given a predicate $pred$, $[pred]$ evaluates to 1 on states satisfying $pred$ and evaluates to 0 otherwise.)
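To make these definitions concrete, here is a minimal Python sketch; the one-line program, its representation as a function from an initial state to a final-state distribution, and all names in it are illustrative assumptions rather than notation from the literature. It computes $wp(prog,postE)$ for a single coin flip by enumerating the final distribution:

# Toy program: set x to 1 with probability 1/2, and to 2 otherwise.
# prog maps an initial state to a list of (probability, final state).
def prog(state):
    return [(0.5, {**state, 'x': 1}), (0.5, {**state, 'x': 2})]

def wp(prog, post_e):
    # wp(prog, postE): initial state |-> expected value of postE
    # over the final-state distribution induced by prog.
    return lambda s: sum(p * post_e(t) for p, t in prog(s))

def indicator(pred):          # [pred] in the notation above
    return lambda s: 1.0 if pred(s) else 0.0

init = {'x': 0}
print(wp(prog, lambda s: s['x'])(init))                   # expected x: 1.5
print(wp(prog, indicator(lambda s: s['x'] == 1))(init))   # P(x = 1): 0.5
print(wp(prog, lambda s: 1.0)(init))                      # termination: 1.0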
Expectations are the quantitative analogue of predicates in predicate transformer semantics; they yield an expectation transformer semantics for probabilistic programs. An expectation transformer is a total function mapping expectations on the states of a program to expectations. The expectation transformer $wp(prog,postE)$ gives the least expected value of $postE$ over the distributions produced by the executions of $prog$. The annotation $\langle {preE} \rangle\; prog\;\langle {postE} \rangle$ holds for total correctness if and only if \begin{equation} preE\le wp(prog,postE),\label{eq:pre-wp-post} \end{equation} where $\le$ is interpreted point-wise. In other words, $preE$ gives, for each initial state, a lower bound on the expected value of $postE$ over final states when $prog$ starts from that initial state.
In the special case when expectations $preE$ and $postE$ are given by $[pre]$ and $[post]$, respectively, where $pre$ and $post$ are predicates, annotation $\langle {preE} \rangle\; prog\;\langle {postE} \rangle$ is just the embedded form of a standard Hoare-triple specification $\{pre\}\; prog\;\{post\}$, i.e., we can assume the truth of $post$ after the execution of $prog$, provided $pre$ holds in the initial state. More precisely, we have \[ [pre]\le wp(prog,[post])\quad\mbox{iff}\quad pre\Rightarrow wp(prog,post). \]In this view, inequality between expectations in expectation transformer semantics can be seen as a generalization of implication between predicates in predicate transformer semantics.
Reference
McIver, Annabelle, and Charles Carroll Morgan. "Abstraction, refinement and proof for probabilistic systems." Springer, 2006.
Monday, September 1, 2014
A note on predicate transformers and loop invariants
For standard programs, the computational model of execution supports a "logical" view—given a set of final states, we can examine the program to determine the largest set of initial states from which execution of the program is guaranteed to reach the final states. The sets of states are characterized by logical formulas called predicates, and the program is regarded as a predicate transformer, i.e., a total function that maps one predicate to another. We give some important terminology before we proceed:
- An assertion is a predicate placed in a program to assert that it holds at that point
- A precondition is an assertion placed before a program segment
- A postcondition is an assertion placed after a program segment
- A formula in form $\{precond\} \; program \; \{postcond\}$ is called a Hoare triple
- A Hoare triple is valid iff whenever the precondition holds before the program segment executes and the execution terminates, the postcondition holds afterwards
- A loop invariant is an assertion supposed to be true before and after each iteration of a loop
// Precondition: (x=0 and y=0 and n≥0)
while (y < n) {
    // Loop invariant: 2*x = y*(y-1)
    x += y;
    y++;
}
// Postcondition: 2*x = n*(n-1)

Observe that $\{x=0 \wedge y=0 \wedge n\ge0\}$ while(y<n){x+=y;y++;} $\{2x=n(n-1)\}$ is a valid Hoare triple and $2x = y(y-1)$ is an invariant for the while loop.
Predicate transformer semantics provide an effective algorithm to verify a Hoare triple by reducing the verification problem to the problem of proving logical entailment: Given a Hoare triple $\{pre\} \; prog \; \{post\}$, we first compute the weakest precondition of executing $prog$ starting from $post$, and then check that it entails $pre$. This amounts to proving that $$pre \Rightarrow wp(prog, post).$$When $prog$ doesn't contain a loop, the weakest precondition is simply given by syntactic rules. However, the presence of loops may pose a problem because the precondition is given in terms of a least fixed point. In addition, a separate proof is required to establish that the loops terminate from their initial states. Formally, if $G$ is a predicate and $body$ is a loop-free program segment, then \begin{equation} wp({\rm while}(G)\{body\}, post) = \mu X.((G\Rightarrow wp(body, X)) \wedge (\neg G\Rightarrow post)),\label{loop-wp} \end{equation} where $\mu$ is the least fixed point operator w.r.t. logical implication. Given that the loop terminates and the least fixed point exists, the construction of the weakest precondition can be sketched in set theory as follows. Let $\Sigma$ denote the state space. We define a family $\{A_k:k\ge0\}$ of subsets of $\Sigma$ by induction over $k$: \[ \begin{array}{rcl} A_0 & = & \emptyset \\ A_{k+1} & = & \left\{\ y \in \Sigma: ((G \Rightarrow wp(body, x \in A_k)) \wedge (\neg G \Rightarrow post))[x \leftarrow y]\ \right\} \\ \end{array} \] Informally, $A_k$ represents the set of initial states that make the postcondition $post$ satisfied after less than $k$ iterations of the loop. We can then define the weakest precondition in \eqref{loop-wp} as the first-order predicate $\exists k.\, x \in A_k$.
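For illustration, here is a minimal Python sketch of the $A_k$ iteration for the example loop above, with $post$ being $2x=n(n-1)$. The state space is artificially bounded (my assumption, to keep the computation finite), so the result is exact only away from the boundary:

# Iterate A_0 = {}, A_{k+1} = { s : (G => step(s) in A_k) and (~G => post(s)) }
# over a bounded space of states (x, y, n), until a fixed point is reached.
from itertools import product

N = 6
states = {(x, y, n) for x, y, n in product(range(40), range(N), range(N))}

guard = lambda s: s[1] < s[2]                    # G: y < n
step  = lambda s: (s[0] + s[1], s[1] + 1, s[2])  # body: x += y; y++
post  = lambda s: 2 * s[0] == s[2] * (s[2] - 1)  # post: 2x = n(n-1)

A = set()
while True:
    A_next = {s for s in states
              if (not guard(s) or step(s) in A) and (guard(s) or post(s))}
    if A_next == A:
        break
    A = A_next

# The weakest precondition is \exists k. x \in A_k, i.e. membership in the
# fixed point A; in particular it holds on the initial states x=0, y=0.
assert all((0, 0, n) in A for n in range(N))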
While an abstract construction as above is elegant, it cannot be handled efficiently by theorem provers in practice. Fortunately, using special assertions called loop invariants, we may by and large avoid the need to calculate fixed points. Suppose that we want to verify \begin{equation} pre \Rightarrow wp({\rm while} (G) \{body\}, post).\label{loop-hoare} \end{equation} Instead of calculating the least fixed point of the loop directly, we may divide the problem into simpler subtasks:
- Find an assertion $I$ such that $pre \Rightarrow I$ and $I\wedge\neg G \Rightarrow post$.
- Show that $I$ is an invariant: $I\wedge G \Rightarrow wp(body, I)$.
- Show that $I$ is sound: $I \Rightarrow wp({\rm while}(G)\{body\}, I\wedge\neg G)$.
With invariant $I$, the relation between $pre$ and $post$ in \eqref{loop-hoare} is established as $$pre \Rightarrow I \Rightarrow wp({\rm while}(G)\{body\}, I\wedge\neg G)\quad {\rm and }\quad I\wedge\neg G \Rightarrow post.$$ Checking correctness for code without loops turns out to be easy in practice. Hence, the crucial point in verifying a Hoare triple is to discover the necessary loop invariants for each loop in the program.
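For the example loop above, the three subtasks can be discharged by brute force over a finite range of integer states (a minimal pure-Python sketch; in practice an SMT solver would check the implications over all integers). Note that the invariant $2x=y(y-1)$ alone does not entail the postcondition when the loop is entered with $y>n$, so the sketch uses the strengthened invariant $2x=y(y-1)\wedge y\le n$:

# Check pre => I, I /\ G => wp(body, I), and I /\ ~G => post by exhaustive
# testing; the body x += y; y++ is handled by substitution.
from itertools import product

pre  = lambda x, y, n: x == 0 and y == 0 and n >= 0
G    = lambda x, y, n: y < n
I    = lambda x, y, n: 2 * x == y * (y - 1) and y <= n
post = lambda x, y, n: 2 * x == n * (n - 1)

for x, y, n in product(range(-5, 30), range(-5, 10), range(-5, 10)):
    assert not pre(x, y, n) or I(x, y, n)                         # pre => I
    assert not (I(x, y, n) and G(x, y, n)) or I(x + y, y + 1, n)  # invariance
    assert not (I(x, y, n) and not G(x, y, n)) or post(x, y, n)   # exit => post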
References
Gries, David. "The science of programming." Heidelberg: Springer, 1981.