Hypercontractivity is a notion that connects to the Gaussian logarithmic Sobolev inequality and the data processing inequality.
In this post we will look at hypercontractivity,
beginning by describing contractivity as preparation.
This post is based on the lecture by Himanshu Tyagi.
Norm
For $p \ge 1$ and any function $f$ of a random variable $X$, the $p$-norm of $f$ is
$$
\|f\|_p = \mathbb{E}\left[ |f(X)|^p \right]^{1/p}.
$$
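As a minimal numerical sketch of this definition (the distribution and function values below are my own toy example, not from the lecture):

```python
# A minimal sketch of the p-norm for a finitely supported X.
# The distribution and function values below are my own toy example.

def p_norm(f_vals, probs, p):
    """||f||_p = E[|f(X)|^p]^(1/p) under the distribution `probs`."""
    return sum(px * abs(fx) ** p for fx, px in zip(f_vals, probs)) ** (1.0 / p)

# X uniform on {0, 1, 2}, f(x) = x + 1.
probs = [1 / 3, 1 / 3, 1 / 3]
f_vals = [1.0, 2.0, 3.0]
print(p_norm(f_vals, probs, 1))  # E[|f|] = 2.0
print(p_norm(f_vals, probs, 2))  # sqrt(E[f^2]) = sqrt(14/3) ≈ 2.16
```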
Markov Kernel
Consider two random variables $X$ and $Y$ with joint distribution $P_{XY}$.
Letting $g$ be a function of $Y$, we define $f$ such that
$$
f(x) = \mathbb{E}[\, g(Y) \mid X = x \,],
$$
where $x$ is the realization of $X$.
The function $f$ is denoted by $Tg$.
That is, $T$ is the operator that converts $g$ to $f$ as $f = Tg$, and it corresponds to the Markov kernel $P_{Y \mid X}$.
$T$ can be thought of as a transition matrix, possibly with infinitely many rows and columns.
The following figure illustrates how $T$ works.

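On a finite alphabet, $T$ is literally a row-stochastic matrix and $Tg$ is a matrix-vector product. The kernel values below are my own toy numbers, not from the lecture:

```python
# T as a transition matrix over finite alphabets (toy numbers of my own):
# (Tg)(x) = E[g(Y) | X = x] = sum_y P(y|x) g(y), a matrix-vector product.

# Rows are indexed by x in {0, 1}, columns by y in {0, 1}; each row sums to 1.
P_Y_given_X = [[0.9, 0.1],
               [0.2, 0.8]]

def apply_T(kernel, g_vals):
    """Return (Tg)(x) for each x: the expectation of g(Y) given X = x."""
    return [sum(p * g for p, g in zip(row, g_vals)) for row in kernel]

g_vals = [1.0, -1.0]  # g(0) = 1, g(1) = -1
print(apply_T(P_Y_given_X, g_vals))  # ≈ [0.8, -0.6]
```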
It would be ideal if we could get $g(Y)$ directly,
but what we can observe is a signal passing through some noisy channel.
The noise in the channel is decided by the Markov kernel $P_{Y \mid X}$, and what we obtain depends on the realization of $X$.
Thus, what is available to us is
$$
(Tg)(x) = \mathbb{E}[\, g(Y) \mid X = x \,].
$$
Contraction for Norm
In the above setting, $T$ is a contraction for the $p$-norm; that is, for $p \ge 1$,
$$
\|Tg\|_p \le \|g\|_p,
$$
where the norm on the left is taken with respect to $P_X$ and the one on the right with respect to $P_Y$.
The proof begins with
$$
\|Tg\|_p^p = \mathbb{E}\left[ \left| \mathbb{E}[\, g(Y) \mid X \,] \right|^p \right].
$$
By Jensen's inequality (as $|\cdot|^p$ is convex for $p \ge 1$), the RHS is bounded as
$$
\mathbb{E}\left[ \left| \mathbb{E}[\, g(Y) \mid X \,] \right|^p \right]
\le \mathbb{E}\left[ \mathbb{E}\left[\, |g(Y)|^p \mid X \,\right] \right].
$$
Because the inner conditional expectation is canceled by the outer expectation, the above becomes
$$
\mathbb{E}\left[ |g(Y)|^p \right] = \|g\|_p^p.
$$
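The contraction can be sanity-checked numerically. In the sketch below, the marginal $P_X$ and the kernel are my own toy choices; $\|Tg\|_p$ is computed under $P_X$ and $\|g\|_p$ under the induced marginal $P_Y$:

```python
import random

# Randomized sanity check of the contraction ||Tg||_p <= ||g||_p.
# The marginal P_X and kernel P_{Y|X} below are my own toy choices.

def p_norm(vals, probs, p):
    return sum(px * abs(v) ** p for v, px in zip(vals, probs)) ** (1.0 / p)

random.seed(0)
P_X = [0.3, 0.7]
K = [[0.6, 0.4],
     [0.1, 0.9]]  # K[x][y] = P(Y = y | X = x)
P_Y = [sum(P_X[x] * K[x][y] for x in range(2)) for y in range(2)]

for _ in range(1000):
    g = [random.uniform(-5.0, 5.0) for _ in range(2)]
    p = random.uniform(1.0, 4.0)
    Tg = [sum(K[x][y] * g[y] for y in range(2)) for x in range(2)]
    # ||Tg||_p is under P_X; ||g||_p is under the marginal P_Y.
    assert p_norm(Tg, P_X, p) <= p_norm(g, P_Y, p) + 1e-9
print("contraction held in 1000 random trials")
```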
Hypercontractivity
While the contraction above is a relation between the $p$-th norm of $Tg$ and the $p$-th norm of $g$,
hypercontractivity allows us to connect the $q$-th norm of $Tg$ to the $p$-th norm of $g$ where $q > p$.
A joint distribution of $X$ and $Y$, $P_{XY}$, is $(p, q)$-hypercontractive for $q > p \ge 1$ if, for every $g$,
$$
\|Tg\|_q \le \|g\|_p.
$$
Since the norm is a non-decreasing function of its order, $\|Tg\|_p \le \|Tg\|_q$,
so the hypercontractivity condition is stronger than the contraction when $q > p$.
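The monotonicity of the norm in its order (a power-mean inequality), which is what makes the $q$-norm bound stronger, can itself be checked numerically; the distribution and random functions below are my own toy sketch:

```python
import random

# Sanity check that p -> ||g||_p is non-decreasing over a probability
# distribution (a power-mean inequality). Toy numbers are my own.

def p_norm(vals, probs, p):
    return sum(px * abs(v) ** p for v, px in zip(vals, probs)) ** (1.0 / p)

random.seed(1)
probs = [0.2, 0.5, 0.3]
for _ in range(1000):
    g = [random.uniform(-3.0, 3.0) for _ in range(3)]
    p = random.uniform(1.0, 3.0)
    q = p + random.uniform(0.0, 3.0)  # q >= p
    assert p_norm(g, probs, p) <= p_norm(g, probs, q) + 1e-9
print("norm monotonicity held in 1000 random trials")
```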
Two-function hypercontractivity
The hypercontractivity condition holds if and only if, for all functions $f$ and $g$,
$$
\mathbb{E}[\, f(X)\, g(Y) \,] \le \|f\|_{q'}\, \|g\|_p,
$$
where $q'$ is the Hölder conjugate of $q$, i.e., $1/q + 1/q' = 1$.
From hypercontractivity to the two-function form
We first see the proof of the direction from hypercontractivity to the two-function form.
As
$$
\mathbb{E}[\, f(X)\, g(Y) \,] = \mathbb{E}\big[\, f(X)\, \mathbb{E}[\, g(Y) \mid X \,] \,\big] = \mathbb{E}[\, f(X)\, (Tg)(X) \,],
$$
by applying Hölder's inequality, the RHS is bounded as
$$
\mathbb{E}[\, f(X)\, (Tg)(X) \,] \le \|f\|_{q'}\, \|Tg\|_q.
$$
Since we are assuming hypercontractivity, the above is bounded as
$$
\|f\|_{q'}\, \|Tg\|_q \le \|f\|_{q'}\, \|g\|_p,
$$
saying that the two-function form is true.
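The Hölder step in this direction can be checked numerically on a finite alphabet; the marginal, kernel, and random functions below are my own illustration:

```python
import random

# Numerical check of the Hölder step:
#   E[f(X) (Tg)(X)] <= ||f||_{q'} ||Tg||_q  with  1/q + 1/q' = 1.
# The toy marginal P_X and kernel K are my own choices.

def p_norm(vals, probs, p):
    return sum(px * abs(v) ** p for v, px in zip(vals, probs)) ** (1.0 / p)

random.seed(2)
P_X = [0.4, 0.6]
K = [[0.7, 0.3],
     [0.25, 0.75]]  # K[x][y] = P(Y = y | X = x)

for _ in range(1000):
    f = [random.uniform(-2.0, 2.0) for _ in range(2)]
    g = [random.uniform(-2.0, 2.0) for _ in range(2)]
    q = random.uniform(1.1, 4.0)
    qp = q / (q - 1.0)  # Hölder conjugate q'
    Tg = [sum(K[x][y] * g[y] for y in range(2)) for x in range(2)]
    lhs = sum(P_X[x] * f[x] * Tg[x] for x in range(2))
    assert lhs <= p_norm(f, P_X, qp) * p_norm(Tg, P_X, q) + 1e-9
print("Hölder step held in 1000 random trials")
```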
From the two-function form to hypercontractivity
Suppose the two-function form holds for any $f$,
and consider a nonnegative $g$ with $\|g\|_q < \infty$.
Then,
$$
\mathbb{E}[\, f(X)\, g(Y) \,] = \mathbb{E}\big[\, \mathbb{E}[\, f(X)\, g(Y) \mid X \,] \,\big] = \mathbb{E}\big[\, f(X)\, \mathbb{E}[\, g(Y) \mid X \,] \,\big],
$$
by the tower property, similar to the cancellation by the outer expectation seen above. By the definition of $T$, the above becomes
$$
\mathbb{E}[\, f(X)\, g(Y) \,] = \mathbb{E}[\, f(X)\, (Tg)(X) \,].
$$
Defining $f = (Tg)^{q-1}$, we first derive $\|f\|_{q'}$.
Since $(q - 1)\, q' = q$,
$$
\|f\|_{q'} = \mathbb{E}\big[\, (Tg)(X)^{(q-1)\, q'} \,\big]^{1/q'} = \mathbb{E}\big[\, (Tg)(X)^{q} \,\big]^{1/q'} = \|Tg\|_q^{q-1},
$$
and applying the contractivity seen above yields
$$
\|f\|_{q'} = \|Tg\|_q^{q-1} \le \|g\|_q^{q-1} < \infty,
$$
where the last inequality comes from the assumption that $\|g\|_q$ is finite,
stating that $\|f\|_{q'} < \infty$.
Since $\|f\|_{q'}$ is finite, we can apply the assumed two-function form to this choice of $f$, yielding
$$
\mathbb{E}[\, f(X)\, g(Y) \,] \le \|f\|_{q'}\, \|g\|_p.
$$
The LHS is simply
$$
\mathbb{E}[\, f(X)\, g(Y) \,] = \mathbb{E}[\, f(X)\, (Tg)(X) \,] = \mathbb{E}\big[\, (Tg)(X)^{q} \,\big] = \|Tg\|_q^q.
$$
Regarding the RHS, we know that
$$
\|f\|_{q'}\, \|g\|_p = \|Tg\|_q^{q-1}\, \|g\|_p.
$$
Putting them together, we obtain
$$
\|Tg\|_q^{q} \le \|Tg\|_q^{q-1}\, \|g\|_p.
$$
It is equal to
$$
\|Tg\|_q^{q-1} \left( \|Tg\|_q - \|g\|_p \right) \le 0,
$$
and since $\|Tg\|_q^{q-1} > 0$ (the case $\|Tg\|_q = 0$ is trivial), it becomes
$$
\|Tg\|_q \le \|g\|_p.
$$
The proof of the direction from the two-function form to hypercontractivity is done.
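The two identities used in this direction, $\mathbb{E}[f(X)g(Y)] = \|Tg\|_q^q$ and $\|f\|_{q'} = \|Tg\|_q^{q-1}$ for the choice $f = (Tg)^{q-1}$ with $g \ge 0$, can be verified numerically; the toy joint distribution below is my own sketch:

```python
import random

# Verify the two identities behind the choice f = (Tg)^{q-1} (with g >= 0):
#   E[f(X) g(Y)] = ||Tg||_q^q   and   ||f||_{q'} = ||Tg||_q^{q-1}.
# The marginal P_X and kernel K are my own toy choices.

def p_norm(vals, probs, p):
    return sum(px * abs(v) ** p for v, px in zip(vals, probs)) ** (1.0 / p)

random.seed(3)
P_X = [0.35, 0.65]
K = [[0.8, 0.2],
     [0.3, 0.7]]  # K[x][y] = P(Y = y | X = x)

for _ in range(200):
    g = [random.uniform(0.1, 3.0) for _ in range(2)]  # nonnegative g
    q = random.uniform(1.5, 4.0)
    qp = q / (q - 1.0)  # Hölder conjugate q'
    Tg = [sum(K[x][y] * g[y] for y in range(2)) for x in range(2)]
    f = [t ** (q - 1.0) for t in Tg]
    # E[f(X) g(Y)] under the joint P_X(x) K[x][y].
    lhs = sum(P_X[x] * K[x][y] * f[x] * g[y]
              for x in range(2) for y in range(2))
    assert abs(lhs - p_norm(Tg, P_X, q) ** q) < 1e-9
    assert abs(p_norm(f, P_X, qp) - p_norm(Tg, P_X, q) ** (q - 1.0)) < 1e-9
print("both identities held in 200 random trials")
```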
In this post, we saw the one-dimensional case.
We will see the extension to the multi-dimensional case in the next post.