A series inequality problem

The following is probably a math contest problem. I have been unable to locate the original source.

Suppose that $\{a_n\}$ is a sequence of positive real numbers and the series $$\sum_{n = 1}^\infty \frac{1}{a_n}$$ converges.

Show that $$\sum_{n = 1}^\infty \frac{n^2a_n}{(a_1+\cdots+a_n)^2}$$

also converges.


First define some quantities to simplify notation for the rest of the proof:

- $$C^2:=\sum_{n=1}^{+\infty}\frac{1}{a_n}.$$
- $$A_n=a_1+\dotso+a_n.$$

Moreover let $$P_N:=\sum_{n=1}^N\frac{n^2a_n}{(a_1+\dotso+a_n)^2}.$$ Clearly $P_{N+1}>P_N$, that is, $P_N$ is an increasing sequence. If we can prove that it is also bounded above, we are done with the proof. To reach this goal, notice that $$\begin{split}P_N<&\frac{1}{a_1}+\sum_{n=2}^N\frac{n^2(A_n-A_{n-1})}{A_nA_{n-1}}\\ =&\frac{1}{a_1}+\sum_{n=2}^N\left(\frac{n^2}{A_{n-1}}-\frac{n^2}{A_n}\right).\end{split}\tag{1}$$ Since $(n+1)^2-n^2=2n+1<5n$ for every $n\in\mathbb N$, one gets from $(1)$ that $$\begin{split}P_N<&\frac{1}{a_1}+\frac{4}{a_1}+\sum_{n=2}^{N-1}\frac{2n+1}{A_n}-\frac{N^2}{A_N}\\ <&\frac{5}{a_1}+\frac{5}{A_2}+\dots+\frac{2N-1}{A_{N-1}}-\frac{N^2}{A_N}\\<&5\left(\frac{1}{A_1}+\frac{2}{A_2}+\dots+\frac{N}{A_N}\right).\end{split}\tag{2}$$ By Cauchy-Schwarz we also have $$\sqrt{\left(\frac{1}{a_1}+\dots+\frac{1}{a_N}\right)}\sqrt{\left(\frac{a_1}{A_1^2}+\dots+\frac{N^2a_N}{A_N^2}\right)}\geq\left(\frac{1}{A_1}+\frac{2}{A_2}+\dots+\frac{N}{A_N}\right),\tag{3}$$ from which, together with $(2)$, it turns out that $$P_N<5C\sqrt{P_N}.$$ It is then clear that the sequence $P_N$ is bounded from above, since for every $N\in\mathbb N$ we have established $$P_N<25C^2.$$ Therefore, since $P_N$ is also increasing as observed at the beginning, we can conclude that $P_N$ converges. This concludes the proof.
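As a quick numeric sanity check of the bound $P_N<25C^2$ (not a proof), one can compute the partial sums for a sample sequence; the choice $a_n=n^2$ below is only an illustration, and any sequence with $\sum 1/a_n$ convergent would do.

```python
# Sanity check of P_N < 25*C^2 for the sample sequence a_n = n^2.
# C^2 is approximated by a partial sum, which only lowers the bound,
# so the comparison is conservative.

def partial_sums(N=2000):
    a = [n * n for n in range(1, N + 1)]
    C2 = sum(1.0 / x for x in a)           # partial sum approximating C^2
    A = 0.0                                # running A_n = a_1 + ... + a_n
    P = 0.0                                # running P_N
    for n, an in enumerate(a, start=1):
        A += an
        P += n * n * an / (A * A)          # n^2 a_n / A_n^2
    return P, 25 * C2

P, bound = partial_sums()
print(P < bound)  # expect True
```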


I wrote this answer for the [closed duplicate of this question](https://math.stackexchange.com/questions/241436/showing-a-series-is-convergent), but it works here as well.

Define
$$
\bar{p}_n=\frac1n\sum_{k=1}^np_k\tag{1}
$$
then the series in question is
$$
\sum_{k=1}^\infty\frac{p_k}{\bar{p}_k^2}\tag{2}
$$
Simply, for $n\ge m$, we have that
$$
\bar{p}_n=\frac1n\sum_{k=1}^np_k\ge\frac mn\frac1m\sum_{k=1}^mp_k=\frac mn\bar{p}_m\tag{3}
$$
which, for $n\ge1$, says that
$$
\bar{p}_{n+1}\ge\frac12\bar{p}_n\quad\text{and}\quad\bar{p}_{2n+1}\ge\frac23\bar{p}_{2n}\tag{4}
$$
Furthermore,
$$
\begin{align}
\sum_{k=1}^\infty\frac{p_{k+1}}{\bar{p}_{k+1}\bar{p}_k}
&=\sum_{k=1}^\infty\frac{(k+1)\bar{p}_{k+1}-k\bar{p}_k}{\bar{p}_{k+1}\bar{p}_k}\\
&=\sum_{k=1}^\infty\left(\frac{k}{\bar{p}_k}+\frac1{\bar{p}_k}-\frac{k+1}{\bar{p}_{k+1}}+\frac1{\bar{p}_{k+1}}\right)\\
&=2\sum_{k=1}^\infty\frac1{\bar{p}_k}\tag{5}
\end{align}
$$
Combining $(4)$ and $(5)$ yields
$$
\begin{align}
\sum_{k=1}^\infty\frac{p_k}{\bar{p}_k^2}
&=\frac1{p_1}+\sum_{k=1}^\infty\frac{p_{k+1}}{\bar{p}_{k+1}^2}\\
&\le\frac1{p_1}+2\sum_{k=1}^\infty\frac{p_{k+1}}{\bar{p}_{k+1}\bar{p}_k}\\
&=\frac1{p_1}+4\sum_{k=1}^\infty\frac1{\bar{p}_k}\tag{6}
\end{align}
$$
Use $\color{#C00000}{(4)}$, [$\color{#00A000}{\text{Jensen's Inequality}}$](http://en.wikipedia.org/wiki/Jensen%27s_inequality), and a change in the $\color{#0000FF}{\text{order of summation}}$ to get
$$
\begin{align}
\sum_{k=1}^\infty\frac1{\bar{p}_k}
&=\frac1{p_1}+\sum_{k=1}^\infty\left(\frac1{\bar{p}_{2k}}+\frac1{\bar{p}_{2k+1}}\right)\\
&\le\frac1{p_1}+\color{#C00000}{\frac52\sum_{k=1}^\infty\frac1{\bar{p}_{2k}}}\\
&\le\frac1{p_1}+5\sum_{k=1}^\infty\frac1{\displaystyle\small\frac2{2k}\sum_{j=k+1}^{2k}p_j}\\
&\le\frac1{p_1}+5\sum_{k=1}^\infty\color{#00A000}{\frac1k\sum_{j=k+1}^{2k}\frac1{p_j}}\\
&=\frac1{p_1}+5\color{#0000FF}{\sum_{j=2}^\infty\frac1{p_j}\sum_{k=\lceil j/2\rceil}^{j-1}\frac1k}\\
&\le\frac1{p_1}+5\sum_{j=2}^\infty\frac1{p_j}\tag{7}
\end{align}
$$
Combining $(6)$ and $(7)$ gives
$$
\sum_{k=1}^\infty\frac{p_k}{\bar{p}_k^2}\le20\sum_{j=1}^\infty\frac1{p_j}\tag{8}
$$
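The constant $20$ in $(8)$ can be sanity-checked numerically on a few sample positive sequences; this is illustrative only, not part of the argument.

```python
# Check sum p_k / pbar_k^2 <= 20 * sum 1/p_k on sample sequences.

def lhs_rhs(p):
    csum = 0.0
    lhs = 0.0
    for k, pk in enumerate(p, start=1):
        csum += pk
        pbar = csum / k                    # \bar{p}_k, the running average
        lhs += pk / pbar ** 2
    return lhs, 20 * sum(1.0 / pk for pk in p)

results = []
for seq in ([k * k for k in range(1, 500)],     # p_k = k^2
            [2.0 ** k for k in range(1, 60)],   # geometric growth
            [1.0] * 100):                       # constant sequence
    lhs, rhs = lhs_rhs(seq)
    results.append(lhs <= rhs)
print(all(results))  # expect True
```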


Assume this fact $(\clubsuit)$: https://math.stackexchange.com/questions/214634/prove-that-sum-k-1n-frac2k1a-1a-2-a-k4-sum-k-1n-frac1a-k/223836#223836.

If you define $P_N=\sum_{n=1}^N a_n\,,\; C=\sum_{n=1}^{+\infty}\frac{1}{a_n}$ and
$S_N=\sum_{n=1}^{N}\frac{n^2 a_n}{P_n^2}$, you have:

$$S_N < \frac{1}{a_1} + \sum_{n=2}^{N}\frac{n^2(P_n-P_{n-1})}{P_n P_{n-1}} = \frac{5}{a_1}+\sum_{n=2}^{N-1}\frac{2n+1}{P_n}-\frac{N^2}{P_N},$$

so, in virtue of $(\clubsuit)$, you have:

$$S_N < \frac{2}{a_1} + 4C.$$


Prove that for $a_k>0$, $k=1,2,\dots,n$,

$$\sum_{k=1}^n \frac{2k+1}{a_1+a_2+\ldots+a_k}<4\sum_{k=1}^n\frac1{a_k}\;.$$


I must confess this problem took me a **very** long time!

**Step1.** If $a_1,a_2,\alpha,\beta,\gamma$ are positive real numbers and $\gamma=\alpha+\beta$ holds,
$$\frac{\gamma^2}{a_1+a_2}\leq \frac{\alpha^2}{a_1}+\frac{\beta^2}{a_2}$$
holds too, since it is equivalent to $(\alpha a_2-\beta a_1)^2\geq 0$.

**Step2.** If $a_1,a_2,a_3,\alpha,\beta,\gamma,\delta$ are positive real numbers and $\delta=\alpha+\beta+\gamma$ holds,
$$\frac{\delta^2}{a_1+(a_2+a_3)}\leq \frac{\alpha^2}{a_1}+\frac{(\beta+\gamma)^2}{a_2+a_3}\leq\frac{\alpha^2}{a_1}+\frac{\beta^2}{a_2}+\frac{\gamma^2}{a_3}$$
holds too, in virtue of Step1. By induction, it is easy to prove the analogous statement for $k$ variables $a_1,\ldots,a_k$. In fact, this is useless for the proof, but quite interesting in itself :)

**Step3.**
By Step1,
$$\sum_{k=1}^{n}\frac{2k+1}{a_1+\ldots+a_k}-\frac{4}{a_n}\leq \sum_{k=1}^{n-1}\frac{2k+1}{a_1+\ldots+a_k}+\frac{(\sqrt{2n+1}-2)^2}{a_1+\ldots+a_{n-1}}\leq \sum_{k=1}^{n-2}\frac{2k+1}{a_1+\ldots+a_k}+\frac{n^2}{a_1+\ldots+a_{n-1}}$$

**Step4.**
By Step3,
$$\sum_{k=1}^{n}\frac{2k+1}{a_1+\ldots+a_k}-\left(\frac{4}{a_n}+\frac{4}{a_{n-1}}\right)\leq \sum_{k=1}^{n-2}\frac{2k+1}{a_1+\ldots+a_k}+\frac{(n-2)^2}{a_1+\ldots+a_{n-2}}\leq \sum_{k=1}^{n-3}\frac{2k+1}{a_1+\ldots+a_k}+\frac{(n-1)^2}{a_1+\ldots+a_{n-2}}. $$

**Step5.**
By Step3, Step4, induction and Step1 again:
$$\sum_{k=1}^{n}\frac{2k+1}{a_1+\ldots+a_k}\leq \frac{3}{a_1}+\frac{9}{a_1+a_2}+\sum_{j=3}^{n}\frac{4}{a_j}\leq \sum_{j=1}^{n}\frac{4}{a_j}.$$

In fact, it is much easier to prove the stronger inequality:

$$\sum_{k=1}^{n}\frac{2k+1}{a_1+\ldots+a_k}\leq -\frac{n^2}{a_1+\ldots+a_n}+\sum_{k=1}^{n}\frac{4}{a_k}.$$
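The stronger inequality lends itself to a quick randomized check (illustrative only; note that equality holds at $n=1$, hence the small tolerance):

```python
import random

# Randomized check of
#   sum_{k<=n} (2k+1)/(a_1+...+a_k) <= -n^2/(a_1+...+a_n) + 4*sum 1/a_k.

random.seed(0)
ok = True
for _ in range(200):
    n = random.randint(1, 30)
    a = [random.uniform(0.1, 10.0) for _ in range(n)]
    A = 0.0
    lhs = 0.0
    for k, ak in enumerate(a, start=1):
        A += ak                            # A = a_1 + ... + a_k
        lhs += (2 * k + 1) / A
    rhs = -n * n / A + 4 * sum(1.0 / x for x in a)
    ok = ok and (lhs <= rhs + 1e-9)
print(ok)  # expect True
```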


In virtue of this result: https://math.stackexchange.com/questions/214634/prove-that-sum-k-1n-frac2k1a-1a-2-a-k4-sum-k-1n-frac1a-k/223836#223836 it is possible to state that, if

$$\sum_{n=1}^{+\infty}\frac{1}{a_n}$$

is a convergent series with positive terms, then

$$\sum_{n=1}^{+\infty}\frac{n}{a_1+\ldots+a_n}<2\sum_{n=1}^{+\infty}\frac{1}{a_n},$$

and this is exactly the statement of [Hardy's inequality][1] for $p=-1$.

**(1)** Is the constant $2$ in the RHS the best possible constant?

**(2)** Does the integral analogue hold? I.e., is it true that, if $f$ is a positive function belonging to $L^1(\mathbb{R}^+)$,

$$\int_{0}^{+\infty}\left(\frac{1}{x}\int_{0}^{x}\frac{dy}{f(y)}\right)^{-1}\,dx<2\int_{0}^{+\infty}f(x)\,dx\;?$$

**(3)** Does the Hardy-type inequality with negative exponent

$$\int_{0}^{+\infty}\left(\frac{1}{x}\int_{0}^{x}\frac{dy}{f(y)^p}\right)^{-1/p}\,dx< C_p \int_{0}^{+\infty}f(x)\,dx$$

hold for every $p\geq 1$? If so, what are the best possible constants $C_p$?

  [1]: http://en.wikipedia.org/wiki/Hardy%27s_inequality 


I managed to prove many things through the following inequality.

For any $p\geq 1$ and for every $a,b,\alpha,\beta>0$ we have:
$$\left(\frac{(\alpha+\beta)^{p+1}}{a^p+b^p}\right)^{1/p}\leq \left(\frac{\alpha^{p+1}}{a^p}\right)^{1/p}+ \left(\frac{\beta^{p+1}}{b^p}\right)^{1/p}.$$
If we set $b/a=x$, it is sufficient to prove that the minimum of the function $f:\mathbb{R}^+\to \mathbb{R}^+$ defined by:
$$ f(x) = \alpha^{\frac{p+1}{p}}(1+x^p)^{1/p} + \beta^{\frac{p+1}{p}}(1+x^{-p})^{1/p} $$
is exactly $(\alpha+\beta)^{\frac{p+1}{p}}$. To do that, it is sufficient to consider that $f'(x)$ has a unique zero at
$$ x = \left(\frac{\beta}{\alpha}\right)^{1/p}. $$

Using this inequality, I managed to show that for any real number $p\geq 1$ there exists a constant $C_p\in\mathbb{R}^+$ such that, if $a_1,\ldots,a_N$ are positive real numbers,
$$\sum_{n=1}^{N}\left(\frac{n}{a_1^p+\ldots+a_n^p}\right)^{1/p}< C_p\sum_{n=1}^{N}\frac{1}{a_n}$$
holds. I prove that there exists a positive increasing function $f:\mathbb{N}_0\to\mathbb{R}^+$ for which:
$$(\diamondsuit)\quad\left(\frac{f(N)}{\sum_{n=1}^N a_n^p}\right)^{1/p}+\sum_{n=1}^{N}\left(\frac{n}{a_1^p+\ldots+a_n^p}\right)^{1/p} \leq \frac{C_p}{a_N}+ \left(\frac{f(N-1)}{\sum_{n=1}^{N-1} a_n^p}\right)^{1/p}+\sum_{n=1}^{N-1}\left(\frac{n}{a_1^p+\ldots+a_n^p}\right)^{1/p},$$
so that, by induction, we have:
$$ \left(\frac{f(N)}{\sum_{n=1}^N a_n^p}\right)^{1/p}+\sum_{n=1}^{N}\left(\frac{n}{a_1^p+\ldots+a_n^p}\right)^{1/p} \leq \frac{1+f(1)^{1/p}}{a_1}+\sum_{n=2}^{N}\frac{C_p}{a_n}.$$
In order for $(\diamondsuit)$ to imply the discrete "reverse Hardy" inequality it is sufficient to have $f(1)\leq(C_p-1)^p$ and:
$$ \forall N\geq 2,\qquad \left(\frac{f(N)}{\sum_{n=1}^N a_n^p}\right)^{1/p}+\left(\frac{N}{\sum_{n=1}^N a_n^p}\right)^{1/p} \leq \frac{C_p}{a_N}+\left(\frac{f(N-1)}{\sum_{n=1}^{N-1} a_n^p}\right)^{1/p}.$$
In virtue of the initial inequality, if we have $f(N)^{1/p}+N^{1/p}\geq C_p^{p/(p+1)}$, then:
$$ \frac{f(N)^{1/p}+N^{1/p}}{\left(\sum_{n=1}^N a_n^p\right)^{1/p}} \leq \frac{C_p}{a_N}+\frac{\left(\left(f(N)^{1/p}+N^{1/p}\right)^{\frac{p}{p+1}}-C_p^{\frac{p}{p+1}}\right)^{\frac{p+1}{p}}}{\left(\sum_{n=1}^{N-1}a_n^p\right)^{1/p}},$$
so it suffices to find an $f$ such that:
$$ \left(f(N)^{1/p}+N^{1/p}\right)^{\frac{p}{p+1}}\leq f(N-1)^{\frac{1}{p+1}}+C_p^{\frac{p}{p+1}}.$$
Now we take $C_p = (1+p)^{\frac{1}{p}}$, since this is the best possible constant in the "reverse Hardy" inequality if $a_n=n$; then we take $f(N) = k\cdot N^{p+1}$, and the previous inequality becomes:
$$ (\spadesuit)\quad k^{\frac{1}{p+1}} N \left(1+\frac{1}{N k^{1/p}}\right)^{\frac{p}{p+1}}\leq k^{\frac{1}{p+1}}(N-1)+ (1+p)^{\frac{1}{p+1}}.$$
In virtue of the Bernoulli inequality we have:
$$ \left(1+\frac{1}{N k^{1/p}}\right)^{\frac{p}{p+1}} \leq 1 + \frac{p}{N (p+1) k^{1/p}}, $$
so, if we find a positive $k$ such that:
$$ (\heartsuit)\qquad \frac{p}{p+1}\,k^{-\frac{1}{p(p+1)}}+k^{\frac{1}{p+1}}\leq C_p^{\frac{p}{p+1}}=(p+1)^{\frac{1}{p+1}},$$
the inequality $(\spadesuit)$ is fulfilled. By studying the stationary points of $g(x)=A x^{-\alpha}+ x^{\beta}$ it is quite simple to prove that, with the choice
$$ k = (p+1)^{-p}, $$
$(\heartsuit)$ holds as an equality. The last thing is to verify that, with the choice $f(N) = \frac{N^{p+1}}{(p+1)^p}$, we have $f(1)\leq (C_p-1)^p$, or $C_p\geq 1+\frac{1}{p+1}$, or:
$$ (p+1)^{\frac{1}{p}} \geq 1+\frac{1}{p+1}. $$
By multiplying both sides by $(p+1)$ we see that the inequality is equivalent to:
$$ (p+1)^{\frac{p+1}{p}} \geq p+2, $$
which is a consequence of the Bernoulli inequality, since:
$$ (p+1)^{\frac{p+1}{p}} \geq 1 + \frac{p+1}{p}\cdot p = p+2. $$

This proves that for any $p\geq 1$ and for any sequence $a_1,\ldots,a_N$ of positive real numbers we have:

$$ \frac{N^{\frac{p+1}{p}}}{(p+1)\left(a_1^p+\ldots+a_N^p\right)^{1/p}}+\sum_{n=1}^N \left(\frac{n}{a_1^p+\ldots+a_n^p}\right)^{1/p} \leq (1+p)^{\frac{1}{p}}\sum_{n=1}^{N}\frac{1}{a_n},$$
which is a substantial extension of the discrete Hardy inequality to negative exponents, with an optimal constant, too.

Once the discrete version is proven, proving the integral version should be quite straightforward.
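The final inequality above can be probed numerically for random $p$ and random positive sequences; this checks the constant $(1+p)^{1/p}$ together with the extra leading term (a sanity check, not a verification of the proof):

```python
import random

# Randomized check of the inequality with constant (1+p)^{1/p}.

random.seed(1)
ok = True
for _ in range(100):
    p = random.uniform(1.0, 4.0)
    n = random.randint(1, 25)
    a = [random.uniform(0.1, 5.0) for _ in range(n)]
    psum = 0.0                             # a_1^p + ... + a_k^p
    lhs = 0.0
    for k, ak in enumerate(a, start=1):
        psum += ak ** p
        lhs += (k / psum) ** (1.0 / p)
    # extra leading term N^{(p+1)/p} / ((p+1) * (sum a^p)^{1/p})
    lhs += n ** ((p + 1) / p) / ((p + 1) * psum ** (1.0 / p))
    rhs = (1 + p) ** (1.0 / p) * sum(1.0 / x for x in a)
    ok = ok and (lhs <= rhs + 1e-9)
print(ok)  # expect True
```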


(2) is also a direct consequence of Godunova's inequality
$$ \int_0^{+\infty} \phi\left(\frac{1}{x}\int_0^x g(t)\,dt\right)\frac{dx}{x}\leq\int_0^{+\infty}\phi(g(x))\frac{dx}{x},$$
which holds for any positive convex function $\phi$ over $\mathbb{R}^+$. It suffices to consider $\phi(x)=\frac{1}{x}$ and then take the change of variable $y=x^2$ to get:
$$\int_0^{+\infty}\frac{dy}{\frac{1}{y}\int_0^y g(z)\,dz}\leq 2\int_0^{+\infty}\frac{dy}{g(y)}.$$



If a series $\sum\limits_{n=1}^\infty a_n$ is convergent, and $a_n\gt0$...

1. Without referring to [Carleman's inequality][1] or [Hardy's inequality][2],
show that the series

$$\sum_{n=1}^\infty \frac n{\frac1{a_1}+\frac1{a_2}+\dotsb+\frac1{a_n}} $$

is also convergent.

2. What is the minimum positive real number $k$ such that the following
inequality holds for all convergent series with $a_n\gt0$?


$$\sum_{n=1}^\infty \frac n{\frac1{a_1}+\frac1{a_2}+\dotsb+\frac1{a_n}}\le k\sum_{n=1}^\infty a_n$$

3. Does there exist a positive real number $l$ such that

$$\sum_{n=1}^\infty a_n \le l\sum_{n=1}^\infty \frac n{\frac1{a_1}+\frac1{a_2}+\dotsb+\frac1{a_n}}$$

holds?

[1]: http://en.wikipedia.org/wiki/Carleman%27s_inequality
[2]: http://en.wikipedia.org/wiki/Hardy%27s_inequality


**Edit** One step was flawed in the proof of 1). Many thanks to my friend Luc for pointing this out. I have fixed it, thanks to a lemma that I prove at the end.

Using the AM-GM inequality, we see that the general term is not greater than $\sqrt[n]{a_1\cdots a_n}$. Hence Carleman's inequality indeed shows that 2) holds with constant $e$. But we need a significantly different approach if we want to get the optimal constant in this case, since it turns out to be $2$. I found this method by investigating the integral version of the problem first. It is indeed easier (the lemma below is exactly what is tedious in the discrete case, while straightforward in the integral case) to see that
$$
\int_0^\infty \frac{x}{\int_0^x\frac{1}{f(t)}dt}dx\leq 2\int_0^\infty f(t)dt
$$
for every positive measurable function $f$, and that **$2$ is optimal**.

**The answer to 3) is no**.

**Remark** When proving 1), we can assume that $a_k$ is nonincreasing without loss of generality, since reordering the sequence does not affect the rhs, and making the sequence nonincreasing can only increase the lhs. Maybe we could take advantage of that observation to simplify the argument in 1).

___________________

1) We will show that for every positive sequence $a_n$, we have
>$$
\sum_{n=1}^\infty \frac{n}{\sum_{k=1}^n \frac{1}{a_k}}\leq 2 \sum_{k=1}^\infty a_k
$$


By Faulhaber's formula and Cauchy-Schwarz,
$$
\frac{n^2(n+1)^2}{4}=\left( \sum_{k=1}^nk\right)^2=\left(\sum_{k=1}^n k\sqrt{a_k}\frac{1}{\sqrt{a_k}}\right)^2\leq \sum_{k=1}^n k^2a_k\sum_{k=1}^n \frac{1}{a_k}
$$
whence
$$
\frac{n}{\sum_{k=1}^n \frac{1}{a_k}}\leq \frac{4}{n(n+1)^2}\sum_{k=1}^n k^2a_k
$$
Therefore
$$
\sum_{n=1}^\infty\frac{n}{\sum_{k=1}^n \frac{1}{a_k}}\leq 4\sum_{n=1}^\infty\frac{1}{n(n+1)^2} \sum_{k=1}^n k^2a_k
$$
(by Fubini)
$$
=4 \sum_{k=1}^\infty k^2a_k\sum_{n=k}^\infty\frac{1}{n(n+1)^2}
$$
(by the lemma below)
$$
\qquad \leq 4 \sum_{k=1}^\infty k^2a_k\frac{1}{2k^2}
=2 \sum_{k=1}^\infty a_k
$$

2) We will now show that $2$ is optimal. Assume $C>0$ is such that
$$
\sum_{n=1}^\infty \frac{n}{\sum_{k=1}^n \frac{1}{a_k}}\leq C \sum_{k=1}^\infty a_k
$$
for every positive sequence $a_n$. Then for every $\alpha >1$, consider $a_n=\frac{1}{n^\alpha}$. This yields
$$
C\sum_{n=1}^\infty \frac{1}{n^\alpha}\geq \sum_{n=1}^\infty\frac{n}{\sum_{k=1}^nk^\alpha}\geq \sum_{n=1}^\infty\frac{n}{\int_{x=1}^{n+1}x^\alpha dx}
=(\alpha+1) \sum_{n=1}^\infty \frac{n}{(n+1)^{\alpha+1}-1}
$$
Now
$$
\sum_{n=1}^\infty \frac{n}{(n+1)^{\alpha+1}-1}\geq \sum_{n=1}^\infty \frac{n}{(n+1)^{\alpha+1}}
=\sum_{n=1}^\infty\left( \frac{1}{(n+1)^{\alpha}}- \frac{1}{(n+1)^{\alpha+1}}\right)
$$
$$
=\sum_{n=1}^\infty \left(\frac{1}{n^{\alpha}}- \frac{1}{n^{\alpha+1}}\right)
\geq \sum_{n=1}^\infty \left(\frac{1}{n^{\alpha}}- \frac{1}{n^{2}}\right)
=\sum_{n=1}^\infty \frac{1}{n^{\alpha}} -\frac{\pi^2}{6}
$$
Since $\lim_{\alpha\rightarrow 1^+} \sum_{n=1}^\infty \frac{1}{n^{\alpha}}=\sum_{n=1}^\infty\frac{1}{n}=+\infty $ by monotone convergence,
$$
\frac{C}{\alpha+1}\geq \frac{\sum_{n=1}^\infty \frac{1}{n^{\alpha}} -\frac{\pi^2}{6}}{\sum_{n=1}^\infty \frac{1}{n^{\alpha}}}\quad\Rightarrow\quad \frac{C}{2}\geq 1
$$
by letting $\alpha>1$ tend to $1$. So $C\geq 2$, which proves that $2$ is optimal.

3) If such a constant existed, we would get, by the AM-GM inequality,
$$
\sum_{n=1}^\infty a_n\leq C\sum_{n=1}^\infty \frac{n}{\sum_{k=1}^n\frac{1}{a_k}}\leq C\sum_{n=1}^\infty\sqrt[n]{a_1\cdots a_n}
$$
for every positive sequence. That is, setting $b_n=\sqrt[n]{a_1\cdots a_n}\iff a_n=\frac{b_n^n}{b_{n-1}^{n-1}}$,
$$
\sum_{n=1}^\infty\left(\frac{b_n}{b_{n-1}}\right)^nb_{n-1}\leq C\sum_{n=1}^\infty b_n
$$
for every positive sequence $b_n$. Of course this is true with $C=1$ if $b_n$ is nonincreasing. To get our contradiction, we can construct a converging $\sum b_n$ such that the lhs diverges. For example, set $b_n:=\frac{1}{2^n}$ for every $n$, but $b_n=\frac{4}{2^n}$ for, say, every $n=2^k$. That is: we make a bump at every $n=2^k$ which is large enough to make the lhs diverge, but small enough to keep the rhs converging.

__________
>**Lemma** For every $k\geq 1$ we have
$$
\sum_{n\geq k} \frac{1}{n(n+1)^2}\leq \frac{1}{2k^2}
$$

**Proof** Let us consider the sequence
$$
x_k:= \frac{1}{2k^2} - \sum_{n\geq k} \frac{1}{n(n+1)^2}
$$
Note that $\lim_{k\rightarrow +\infty} x_k=0$.
Then compute
$$
x_{k}-x_{k+1}=\frac{1}{2k^2}-\frac{1}{k(k+1)^2}-\frac{1}{2(k+1)^2}=\frac{1}{2k^2(k+1)^2}\geq 0
$$
So $x_k$ is nonincreasing and converges to $0$.
Therefore $x_k\geq 0$ for every $k\geq 1$. $\Box$
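The lemma is easy to check numerically as well; truncating the infinite tail only makes the left side smaller, so the comparison is one-sided:

```python
# Check sum_{n>=k} 1/(n(n+1)^2) <= 1/(2k^2) for small k (truncated tails).

def tail(k, upto=100000):
    return sum(1.0 / (n * (n + 1) ** 2) for n in range(k, upto))

ok = all(tail(k) <= 1.0 / (2 * k * k) for k in range(1, 30))
print(ok)  # expect True
```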


Let the sequence $a_{n}>0$, $n\in \mathbb N^{+}$, be such that
$\displaystyle\sum_{n=1}^{\infty}\dfrac{1}{a_{n}}$ is convergent. Show that
$$\sum_{n=1}^{\infty}\dfrac{n^2}{a^2_{1}+a^2_{2}+\cdots+a^2_{n}}$$ is also convergent.


Jack, a related result: I guess this also holds?
$$\sum_{k=1}^{n}\dfrac{k^2}{a^2_{1}+\cdots+a^2_{k}}\le\left(\dfrac{1}{a_{1}}+\cdots+\dfrac{1}{a_{n}}\right)^2$$


The Polya-Knopp inequality (an instance of Hardy's inequality for negative exponents) states that for any $p\geq 1$ and for every positive sequence $\{a_n\}_{n\in\mathbb{N}}$ we have:

$$ \frac{N^{\frac{p+1}{p}}}{(p+1)\left(a_1^p+\ldots+a_N^p\right)^{1/p}}+\sum_{n=1}^N \left(\frac{n}{a_1^p+\ldots+a_n^p}\right)^{1/p} \leq (1+p)^{\frac{1}{p}}\sum_{n=1}^{N}\frac{1}{a_n},\tag{1}$$
hence by taking $p=2$ it follows that:

$$ \sum_{n= 1}^{N}\frac{\sqrt{n}}{\sqrt{a_1^2+a_2^2+\ldots+a_n^2}}\leq \sqrt{3}\sum_{n=1}^{N}\frac{1}{a_n}.\tag{2} $$

Now we rewrite the LHS of $(2)$ by partial summation.

Let $Q_n^2\triangleq a_1^2+\ldots+a_n^2$ and $h(n)\triangleq\sum_{k=1}^{n}\sqrt{k}$:

$$\sum_{n=1}^N \frac{\sqrt{n}}{Q_n}=\frac{h(N)}{Q_N}-\sum_{n=1}^{N-1}h(n)\left(\frac{1}{Q_{n+1}}-\frac{1}{Q_n}\right)=\frac{h(N)}{Q_N}+\sum_{n=1}^{N-1}h(n)\frac{a_{n+1}^2}{Q_n Q_{n+1}(Q_{n+1}+Q_n)}. $$
Since $h(n)\geq\frac{2}{3}n^{3/2}$, it follows that:

$$ \frac{2N\sqrt{N}}{Q_N}+\sum_{n=1}^{N-1}\frac{n^{3/2} a_{n+1}^2}{Q_{n+1}^3}\leq 3\sqrt{3}\sum_{n=1}^{N}\frac{1}{a_n}.\tag{3}$$
If we let $g(n)=\sum_{k=1}^{n}k^2$ and apply partial summation to the original series we get:

$$ \sum_{n=1}^{N}\frac{n^2}{Q_n^2}=\frac{g(N)}{Q_N^2}+\sum_{n=1}^{N-1}g(n)\frac{a_{n+1}^2}{Q_n^2 Q_{n+1}^2},\tag{4}$$
hence by $(3)$ we just need to show that $\frac{g(n)}{Q_n Q_{n+1}}$ is bounded by some constant times $\frac{h(n)}{Q_n+Q_{n+1}}$, or:

$$ g(n)\left(Q_n+ Q_{n+1}\right) \leq K \cdot h(n) Q_n Q_{n+1}, $$
or:
$$ \frac{1}{Q_n}+\frac{1}{Q_{n+1}}\leq K\cdot\frac{h(n)}{g(n)},\tag{5}$$
which follows from the fact that $\frac{\sqrt{n}}{Q_n}$ is summable by $(2)$.

**Edit:** A massive shortcut. If a positive sequence $\{b_n\}$ is such that $\sum b_n$ is convergent and $\{n b_n\}$ is bounded, then $\sum n b_n^2$ is convergent too. Here we apply this with $b_n=\frac{\sqrt{n}}{Q_n}$: the boundedness of $\{n b_n\}$ holds because $Q_n$ is nondecreasing, so $\sum_{k=1}^{n}b_k\geq\frac{1}{Q_n}\sum_{k=1}^{n}\sqrt{k}\geq\frac{2}{3}\,n b_n$, and the left-hand side is bounded by $(2)$. So we can just use this lemma and $(2)$ to prove our claim.
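Inequality $(2)$, which the shortcut relies on, can itself be sanity-checked on random positive sequences:

```python
import math
import random

# Randomized check of sum sqrt(n)/Q_n <= sqrt(3) * sum 1/a_n,
# where Q_n^2 = a_1^2 + ... + a_n^2.

random.seed(3)
ok = True
for _ in range(200):
    n = random.randint(1, 40)
    a = [random.uniform(0.1, 10.0) for _ in range(n)]
    Q2 = 0.0
    lhs = 0.0
    for k, ak in enumerate(a, start=1):
        Q2 += ak * ak                      # Q_k^2
        lhs += math.sqrt(k) / math.sqrt(Q2)
    ok = ok and (lhs <= math.sqrt(3) * sum(1.0 / x for x in a) + 1e-9)
print(ok)  # expect True
```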


How can one show that for real numbers $a_1,a_2,\cdots,a_n >0$ and for $n \ge 3$:

$$\sum_{k=1}^{n}\dfrac{k}{a_{1}+a_{2}+\cdots+a_{k}}\le\left(2-\dfrac{7\ln{2}}{8\ln{n}}\right)\sum_{k=1}^{n}\dfrac{1}{a_{k}}$$

This version seems stronger than the inequality mentioned [here](https://math.stackexchange.com/a/847709/129017).

**Addition:** *A sister problem*: For real numbers $a_1,a_2,\cdots,a_n >0$ and for $n \ge 2$, we have the version:

$$\displaystyle \dfrac{1}{1+a_{1}}+\dfrac{1}{1+a_{1}+a_{2}}+\cdots+\dfrac{1}{1+a_{1}+a_{2}+\cdots+a_{n}} \le \sqrt{\dfrac{1}{a_{1}}+\dfrac{1}{a_{2}}+\cdots+\dfrac{1}{a_{n}}}$$

The stronger version claims that, for each $n$, with $c_n = \left(1-\dfrac{\ln{n}}{2n}\right)$, we have:
$$\displaystyle \dfrac{1}{1+a_{1}}+\dfrac{1}{1+a_{1}+a_{2}}+\cdots+\dfrac{1}{1+a_{1}+a_{2}+\cdots+a_{n}} \le c_n\sqrt{\dfrac{1}{a_{1}}+\dfrac{1}{a_{2}}+\cdots+\dfrac{1}{a_{n}}}$$


I haven't had any progress with the original problem, but I have made significant progress on the "sister" problem:

Let $A_k=\sum\limits_{i=1}^k a_i$; thus the problem can be rewritten as
$$\sum\limits_{k=1}^n \frac{1}{1+A_k} \leq \sqrt{\sum\limits_{k=1}^n \frac{1}{a_k}}$$
or, since both sides are positive,
$$\left( \sum\limits_{k=1}^n \frac{1}{1+A_k}\right)^2 \leq {\sum\limits_{k=1}^n \frac{1}{a_k}}$$
Now this is very similar to the Cauchy-Schwarz inequality, that is, for any positive real numbers $x_1,x_2,\ldots,x_n$ and $y_1,y_2,\ldots,y_n$, the following inequality holds:
$$\left( \sum\limits_{k=1}^n x_ky_k\right)^2 \leq \left( \sum\limits_{k=1}^n x_k^2\right)\left( \sum\limits_{k=1}^n y_k^2\right)$$
So, taking $x_k=\frac{1}{\sqrt{a_k}}$, we get $y_k=\frac{\sqrt{a_k}}{1+A_k}$;
now if $ \sum\limits_{k=1}^n y_k^2 =\sum\limits_{k=1}^n \frac{a_k}{(1+A_k)^2}\leq1$,
then the original inequality holds. A very nice proof of this due to r9m himself can be found [here][1].

In the case of the stronger version
$$\sum\limits_{k=1}^n \frac{1}{1+A_k} \leq c_n\sqrt{\sum\limits_{k=1}^n \frac{1}{a_k}}$$
using the Cauchy-Schwarz inequality in a similar fashion, we find that in order for the original inequality to hold,
$$\sum\limits_{k=1}^n \frac{a_k}{(1+A_k)^2}\leq c_n$$ must also hold.
To prove this, let us look at the maximal values of $\sum\limits_{k=1}^n \frac{a_k}{(1+A_k)^2}$. To obtain the maxima we must solve a system of $n$ partial derivatives of this sum set equal to zero, or notationally:

$$\begin{cases}
\frac{\partial}{\partial a_1}\sum\limits_{k=1}^n \frac{a_k}{(1+A_k)^2}=0 \\
\frac{\partial}{\partial a_2}\sum\limits_{k=1}^n \frac{a_k}{(1+A_k)^2}=0\\
\vdots\\
\frac{\partial}{\partial a_n}\sum\limits_{k=1}^n \frac{a_k}{(1+A_k)^2}=0
\end{cases}$$

Note that if we define $S(i)=\sum\limits_{k=i}^n \frac{a_k}{(1+A_k)^2}$, then, since no terms of our sum with index less than $i$ contain $a_i$, we may rewrite the system as:
$$\begin{cases}
\frac{\partial}{\partial a_1}S(1)=0 \\
\frac{\partial}{\partial a_2}S(2)=0\\
\vdots\\
\frac{\partial}{\partial a_n}S(n)=0
\end{cases}$$

Note that $S(n)= \frac{a_n}{(1+A_{n-1}+a_n)^2}$, thus $$\frac{\partial}{\partial a_n}S(n)=\frac{1+A_{n-1}-a_n}{(1+A_{n-1}+a_n)^3},$$
which is zero if $a_n=1+A_{n-1}$.
Now let us show that, in order for the maxima to be attained, $a_{i}=b_{n-i}(1+A_{i-1})$ for all natural $i\le n$, for some sequence $b_i$. We have just found that $a_{n}=b_0(1+A_{n-1})$ with $b_0=1$.
The maximal value of $S(n)$ is thus
$$\max(S(n))=\max\left(\frac{a_n}{(1+A_{n-1}+a_n)^2}\right)=\frac{b_0(1+A_{n-1})}{(1+A_{n-1}+b_0(1+A_{n-1}))^2}=\frac{b_0}{(1+b_0)^2(1+A_{n-1})}$$

We may now simplify $S(n-1)$, since we know that $a_n=b_0(1+A_{n-1})$:
$$S(n-1)=\frac{a_{n-1}}{(1+A_{n-2}+a_{n-1})^2}+S(n)=\frac{a_{n-1}}{(1+A_{n-2}+a_{n-1})^2}+\frac{b_0}{(b_0+1)^2(1+A_{n-2}+a_{n-1})}=\frac{((b_0+1)^2+b_0)a_{n-1}+b_0(1+A_{n-2})}{(b_0+1)^2(1+A_{n-2}+a_{n-1})^2}$$


So the partial derivative is:
$$\frac{\partial}{\partial a_{n-1}}S(n-1)=\frac{\partial}{\partial a_{n-1}}\frac{((b_0+1)^2+b_0)a_{n-1}+b_0(1+A_{n-2})}{(b_0+1)^2(1+A_{n-2}+a_{n-1})^2}=\frac{((b_0+1)^2-b_0)(1+A_{n-2})-((b_0+1)^2+b_0)a_{n-1}}{(1+b_0)^2(1+A_{n-2}+a_{n-1})^3}$$

Thus for the maxima to be attained, $a_{n-1}=b_1(1+A_{n-2})$, with $b_1=\frac{(b_0+1)^2-b_0}{(b_0+1)^2+b_0}$ (in particular, $b_1=\frac 3 5$).

The maximal value of $S(n-1)$ is then
$$\max(S(n-1))=\max\left(\frac{((b_0+1)^2+b_0)a_{n-1}+b_0(1+A_{n-2})}{(1+b_0)^2(1+A_{n-2}+a_{n-1})^2}\right)=\frac{((b_0+1)^2-b_0)(1+A_{n-2})+b_0(1+A_{n-2})}{(b_0+1)^2(1+A_{n-2}+b_1(1+A_{n-2}))^2}=\frac{(b_0+1)^2(1+A_{n-2})}{(b_0+1)^2(b_1+1)^2(1+A_{n-2})^2}=\frac{1}{(b_1+1)^2(1+A_{n-2})}$$

$S(n-2)$ can then be calculated as
$$S(n-2)=\frac{a_{n-2}}{(1+A_{n-3}+a_{n-2})^2}+S(n-1)=\frac{a_{n-2}}{(1+A_{n-3}+a_{n-2})^2}+\frac{1}{(b_1+1)^2(1+A_{n-3}+a_{n-2})}=\frac{1+A_{n-3}+(1+(b_1+1)^2)a_{n-2}}{(b_1+1)^2(1+A_{n-3}+a_{n-2})^2}$$

The partial derivative is thus:
$$\frac{\partial}{\partial a_{n-2}}S(n-2)=\frac{\partial}{\partial a_{n-2}}\frac{1+A_{n-3}+(1+(b_1+1)^2)a_{n-2}}{(b_1+1)^2(1+A_{n-3}+a_{n-2})^2}=\frac{((b_1+1)^2-1)(1+A_{n-3})-(1+(b_1+1)^2)a_{n-2}}{(b_1+1)^2(1+A_{n-3}+a_{n-2})^3}$$

So, once again, to attain the maxima, $a_{n-2}=b_2(1+A_{n-3})$ with $b_2=\frac{(b_1+1)^2-1}{(b_1+1)^2+1}$.
And the maximum of $S(n-2)$ is
$$\max(S(n-2))=\max\left(\frac{1+A_{n-3}+(1+(b_1+1)^2)a_{n-2}}{(b_1+1)^2(1+A_{n-3}+a_{n-2})^2}\right)=\frac{((b_1+1)^2-1)(1+A_{n-3})+1+A_{n-3}}{(b_1+1)^2(1+A_{n-3}+b_2(1+A_{n-3}))^2}=\frac{(b_1+1)^2(1+A_{n-3})}{(b_1+1)^2(b_2+1)^2(1+A_{n-3})^2}=\frac{1}{(b_2+1)^2(1+A_{n-3})}$$


Notice that the expression for the maximum value of $S(n-2)$ is almost identical to the one for $S(n-1)$, only with all the indices shifted by one; thus all the following operations on higher indices will be analogous, and we can say that in general
$$\max(S(n-i))=\frac{1}{(1+b_i)^2(1+A_{n-i-1})}$$
$$b_i=\frac{(b_{i-1}+1)^2-1}{(b_{i-1}+1)^2+1}$$
Now notice that
$$\sum\limits_{k=1}^n \frac{a_k}{(1+A_k)^2}=S(1)=S(n-(n-1))$$
So
$$\max\left(\sum\limits_{k=1}^n \frac{a_k}{(1+A_k)^2}\right)=\max(S(n-(n-1)))=\frac{1}{(1+b_{n-1})^2(1+A_0)}$$
$A_0=0$ since it is the sum of no variables; thus the maximum of $\sum\limits_{k=1}^n \frac{a_k}{(1+A_k)^2}$ can be expressed as
$$\max\left(\sum\limits_{k=1}^n \frac{a_k}{(1+A_k)^2}\right)=\frac{1}{(1+b_{n-1})^2}$$
Now let $m(n)=\frac{1}{(1+b_{n-1})^2}$. It can be shown that $m(n)$ follows the recurrence relation
$$m(n+1)=\frac{(m(n)+1)^2}{4}$$
So, if $c_n=1-\frac{\ln(n)}{2n}$ followed this recurrence, we would be done:
$$1-\frac{\ln(n+1)}{2(n+1)}=\frac{\left(2-\frac{\ln(n)}{2n}\right)^2}{4}$$
$$4-\frac{2\ln(n+1)}{n+1}=\left(2-\frac{\ln(n)}{2n}\right)^2$$
$$4-\frac{2\ln(n+1)}{n+1}=4-\frac{2\ln(n)}{n}+\frac{\ln(n)^2}{4n^2}$$
$$\frac{2\ln(n+1)}{n+1}=\frac{2\ln(n)}{n}-\frac{\ln(n)^2}{4n^2}$$
Now this almost holds: $\frac{2\ln(n+1)}{n+1} \sim \frac{2\ln(n)}{n}$, and $\frac{\ln(n)^2}{4n^2}$ is decreasing on $(e,+\infty)$ with a maximum on $(1,+\infty)$ of about $0.034$, in other words very small. So $\frac{2\ln(n+1)}{n+1}$ comes very close to $\frac{2\ln(n)}{n}-\frac{\ln(n)^2}{4n^2}$, which explains why $1-\frac{\ln(n)}{2n}$ is such a good bound. Additionally, $c_n$ is slightly bigger than the first few values of $m(n)$, and I think it stays just a little larger than $m(n)$ as $n$ approaches infinity.

Now, to get the super sharp boundary, we should find a closed-form function satisfying $m(n+1)=\frac{(m(n)+1)^2}{4}$.
(On a sidenote: Happy new year!)
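Out of curiosity, one can iterate the recurrence for $m(n)$ and compare it with $c_n=1-\frac{\ln n}{2n}$; for the first couple of hundred values, $c_n$ indeed stays above $m(n)$ (an experiment only, not a proof):

```python
import math

# Iterate m(n+1) = (m(n)+1)^2/4 starting from m(1) = 1/(1+b_0)^2 = 1/4
# and compare with c_n = 1 - ln(n)/(2n).

m = 0.25                                   # m(1), using b_0 = 1
ok = True
for n in range(1, 200):
    c = 1.0 - math.log(n) / (2 * n)        # c_1 = 1 since ln(1) = 0
    ok = ok and (m <= c)
    m = (m + 1.0) ** 2 / 4.0               # advance to m(n+1)
print(ok)  # expect True
```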
[1]: https://math.stackexchange.com/questions/998612/question-regarding-an-inequality

这里


For any $p > 1$ and for any sequence $\{a_j\}_{j=1}^\infty$ of nonnegative numbers, a classical inequality
of Hardy states that
$$ \sum\limits_{k=1}^n\left(\frac{\sum_{i=1}^ka_i}{k}\right)^p\le \left(\frac{p}{p-1}\right)^p \sum\limits_{k=1}^n a_k^p$$
for each $n\in \mathbb N$.

There are now many proofs of Hardy's inequality. Which proof is your favourite, and which would be the simplest? It would be preferable if you could present the detailed proof here so that everyone can share it.


Let $(\mathbb{R}^{+},\frac{dt}{t})$ be the multiplicative group of positive real numbers with the usual topology and Haar measure $\frac{dt}{t}$. Define functions $g:\mathbb{R}^{+}\to [0,\infty)$, $h:\mathbb{R}^{+}\to [0,\infty)$ by $g(x)=\left|f(x)\right|x^{1-\frac{b}{p}}$ and $h(x)=x^{-\frac{b}{p}}\chi_{[1,\infty)}(x)$. We will apply Minkowski's inequality to the convolution $F=g\star h$. Note that:

\begin{align*}
F(x)= &\int_{0}^{\infty} \left|f(t)\right|t^{1-\frac{b}{p}}\;\frac{t^{\frac{b}{p}}}{x^{\frac{b}{p}}}\;\chi_{[1,\infty)}\left(\frac{x}{t}\right)\frac{dt}{t} \\
=& \frac{1}{x^{\frac{b}{p}}}\int_{0}^{x} \left|f(t)\right| dt
\end{align*}

if $x\in\mathbb{R}^{+}$. Furthermore,

$$
\left\|h\right\|_{L^1(\mathbb{R}^{+},\frac{dt}{t})}= \int_{1}^{\infty} t^{-\frac{b}{p}-1}dt=\frac{p}{b}
$$

and

$$\left\|g\right\|_{L^p(\mathbb{R}^{+},\frac{dt}{t})}=\left(\int_{0}^{\infty} \left|f(t)\right|^p t^{p-b-1} dt\right)^{\frac{1}{p}}
$$

Minkowski's inequality thus implies that

\begin{align*}
\left(\int_{0}^{\infty} \left(\int_{0}^{x} \left|f(t)\right| dt\right)^p x^{-b-1} dx\right)^{\frac{1}{p}}\leq \frac{p}{b}\left(\int_{0}^{\infty} \left|f(t)\right|^p t^{p-b-1} dt\right)^{\frac{1}{p}}
\end{align*}

Let us now redefine the functions $g:\mathbb{R}^{+}\to [0,\infty), h:\mathbb{R}^{+}\to [0,\infty)$ by $g(x)=\left|f(x)\right|x^{1+\frac{b}{p}}$ and $h(x)=x^{\frac{b}{p}}\chi_{(0,1]}(x)$. We will apply Minkowski's inequality to the convolution $F=g\star h$. Note that,

$$
F(x)= \int_{0}^{\infty} \left|f(t)\right|t^{1+\frac{b}{p}}\;\frac{x^{\frac{b}{p}}}{t^{\frac{b}{p}}}\chi_{(0,1]}\left(\frac{x}{t}\right)\frac{dt}{t}
= x^{\frac{b}{p}}\int_{x}^{\infty} \left|f(t)\right| dt
$$

if $x\in\mathbb{R}^{+}$. Furthermore,

$$
\left\|h\right\|_{L^1(\mathbb{R}^{+},\frac{dt}{t})}=\int_{0}^{1} t^{\frac{b}{p}-1} dt
= \frac{p}{b}
$$

and

$$\left\|g\right\|_{L^p(\mathbb{R}^{+},\frac{dt}{t})}=\left(\int_{0}^{\infty} \left|f(t)\right|^p t^{p+b-1} dt\right)^{\frac{1}{p}}
$$

Minkowski's inequality thus implies that,

\begin{align*}
\left(\int_{0}^{\infty} \left(\int_{x}^{\infty} \left|f(t)\right| dt\right)^p x^{b-1} dx\right)^{\frac{1}{p}}\leq \frac{p}{b} \left(\int_{0}^{\infty} \left|f(t)\right|^p t^{p+b-1} dt\right)^{\frac{1}{p}}
\end{align*}



Given a sequence $(a_n)_{n=1}^\infty$ of positive reals, how do I prove that
$$\sum_{n=1}^\infty \frac{n}{a_1 + \ldots + a_n}\leqslant 2 \sum_{n=1}^\infty \frac{1}{a_n}$$
Of course if the right-hand side converges, then $a_n$ tends to $\infty$, but the difficulty for me arises from the fact that the behaviour of the first finitely many terms can be arbitrary...


This is based on Grahame Bennett's solution to American Mathematical Monthly problem 11145, published in April 2005. The solution appeared in the October 2006 issue.

The Cauchy-Schwarz inequality gives $\left(\sum_1^k j\right)^2\leq \sum^k_1 j^2/a_j\, \sum^k_1 a_j$, or equivalently,
$${k\over\sum_{j=1}^k a_j}\leq{4\over k(k+1)^2}\sum_{j=1}^k {j^2\over a_j}.$$
Summing over $k$ yields
\begin{eqnarray*}
\sum_{k=1}^\infty \frac{k}{a_1 + \cdots + a_k}&\leq&2 \sum_{j=1}^\infty{j^2\over a_j}\sum_{k=j}^\infty {2\over k(k+1)^2}\leq 2 \sum_{j=1}^\infty {j^2\over a_j}\sum_{k=j}^\infty{2k+1\over k^2(k+1)^2} \\[5pt]
& = & 2 \sum_{j=1}^\infty{j^2\over a_j}\sum_{k=j}^\infty\left({1\over k^2}-{1\over(k+1)^2}\right)
= 2 \sum_{j=1}^\infty {1\over a_j}.
\end{eqnarray*}



Prove that for real numbers $a_1 ,a_2 ,\ldots,a_n >0$ the following inequality holds:
$$\frac{1}{a_1 } +\frac{2}{a_1 +a_2 } +\ldots+\frac{n}{a_1 +a_2 +\ldots+a_n }\leq 4\cdot \left(\frac{1}{a_1} +\frac{1}{a_2 } +\ldots+\frac{1}{a_n}\right).$$


**AMM problem 11145** (April 2005)

Proposed by Joel Zinn, Texas A&M University, College Station, TX.

Find the least $c$ such that if $n\geq 1$ and $a_1,\dots,a_n>0$, then
$$\sum_{k=1}^n{k\over\sum_{j=1}^k 1/a_j}\leq c\sum_{k=1}^n a_k.$$


----------


**A Sum Inequality** (October 2006)

Solution by Grahame Bennett, Indiana University, Bloomington, IN.

Let $S_n$ denote the sum on the left-hand side of the proposed inequality. The Cauchy-Schwarz inequality gives $\left(\sum_1^k j\right)^2\leq \sum^k_1 j^2a_j\, \sum^k_1 1/a_j$, or equivalently,
$${k\over\sum_{j=1}^k 1/a_j}\leq{4\over k(k+1)^2}\sum_{j=1}^k j^2a_j.$$
Summing over $k$ yields
\begin{eqnarray*}
S_n&\leq&2 \sum_{j=1}^nj^2a_j\sum_{k=j}^n{2\over k(k+1)^2}\leq 2 \sum_{j=1}^nj^2a_j\sum_{k=j}^n{2k+1\over k^2(k+1)^2} \\
& = & 2 \sum_{j=1}^nj^2a_j\sum_{k=j}^n\left({1\over k^2}-{1\over(k+1)^2}\right)
= 2 \sum_{j=1}^n j^2 a_j \left({1\over j^2}-{1\over(n+1)^2}\right) < 2\sum_{j=1}^n a_j,
\end{eqnarray*}
from which it follows that the stated inequality holds with $c=2$.

To see that no smaller value of $c$ is possible, we set $a_j=1/j$, in which case the left side
is $\sum_{k=1}^n 2/(k+1)$ and the right side is $c\sum_{k=1}^n1/k$. Since the harmonic series diverges, we must have $c\geq 2$.

*Editorial comment.* The upper bound is a special case of a theorem in K. Knopp's article
"Über Reihen mit positiven Gliedern," *J. London Math. Soc.* **3** (1928), 205--211.
It states that for positive $p$,
$$\sum_{n=1}^\infty \left({n\over \sum_{j=1}^n1/a_j}\right)^p \leq\left({p+1\over p}\right)^p\sum_{n=1}^\infty a_n^p.$$
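A Knopp-type bound of this shape can be probed numerically; below it is tested for $p=2$ with the constant taken in the form $\left(\frac{p+1}{p}\right)^p$ (which reduces to the constant $2$ of the $p=1$ case above). Illustrative only:

```python
import random

# Randomized check of
#   sum (n / (1/a_1+...+1/a_n))^p <= ((p+1)/p)^p * sum a_n^p
# for p = 2 on finite truncations.

random.seed(4)
p = 2.0
ok = True
for _ in range(200):
    n = random.randint(1, 40)
    a = [random.uniform(0.1, 10.0) for _ in range(n)]
    recip = 0.0                            # 1/a_1 + ... + 1/a_k
    lhs = 0.0
    for k, ak in enumerate(a, start=1):
        recip += 1.0 / ak
        lhs += (k / recip) ** p
    rhs = ((p + 1) / p) ** p * sum(x ** p for x in a)
    ok = ok and (lhs <= rhs + 1e-9)
print(ok)  # expect True
```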


**Upper Bound**

By Cauchy-Schwarz, we have
$$
\begin{align}
\left(\sum_{j=1}^ka_j\right)\left(\sum_{j=1}^k\frac{j^2}{a_j}\right)
&\ge\left(\sum_{j=1}^kj\right)^2\\[3pt]
&=\frac{k^2(k+1)^2}{4}\tag{1}
\end{align}
$$
Thus,
$$
\begin{align}
\sum_{k=1}^n\frac{k}{\sum\limits_{j=1}^ka_j}
&\le\sum_{k=1}^n\frac4{k(k+1)^2}\sum_{j=1}^k\frac{j^2}{a_j}\\
&=\sum_{j=1}^n\frac{j^2}{a_j}\sum_{k=j}^n\frac4{k(k+1)^2}\\
&\le\sum_{j=1}^n\frac{j^2}{a_j}\sum_{k=j}^n2\left(\frac1{k^2}-\frac1{(k+1)^2}\right)\\
&\le\sum_{j=1}^n\frac{j^2}{a_j}\frac2{j^2}\\
&=2\sum_{j=1}^n\frac1{a_j}\tag{2}
\end{align}
$$
Thus, the ratio is at most $2$.
***
**Sharpness**

Set $a_k=k^\beta$ for $0\lt\beta\lt1$. First
$$
\begin{align}
\sum_{k=1}^n\frac{k}{\sum\limits_{j=1}^ka_j}
&=\sum_{k=1}^n\frac{k}{\frac1{1+\beta}k^{\beta+1}+O(k^\beta)}\\
&=(1+\beta)\sum_{k=1}^n\frac1{k^\beta+O(k^{\beta-1})}\\
&=\frac{1+\beta}{1-\beta}n^{1-\beta}+O(1)\\[3pt]
&\sim\frac{1+\beta}{1-\beta}n^{1-\beta}\tag{3}
\end{align}
$$
Next
$$
\begin{align}
\sum_{k=1}^n\frac1{a_k}
&=\sum_{k=1}^n\frac1{k^\beta}\\
&=\frac1{1-\beta}n^{1-\beta}+O(1)\\[3pt]
&\sim\frac1{1-\beta}n^{1-\beta}\tag{4}
\end{align}
$$
Thus, for large $n$ the ratio of $(3)$ to $(4)$ is $1+\beta$, which tends to $2$ as $\beta\to1^-$. Therefore, $2$ is sharp.
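The asymptotics $(3)$ and $(4)$ can be illustrated numerically: for $a_k=k^\beta$ the ratio of the two sides should be near $1+\beta$ for large $n$ (the cutoff $n$ and the tolerance below are ad hoc choices, since the $O(1)$ terms decay slowly):

```python
def ratio(beta, n):
    """(sum_{k<=n} k/(a_1+...+a_k)) / (sum_{k<=n} 1/a_k) for a_k = k**beta."""
    lhs = rhs = partial = 0.0
    for k in range(1, n + 1):
        partial += k ** beta      # a_1 + ... + a_k
        lhs += k / partial
        rhs += k ** (-beta)       # 1/a_k
    return lhs / rhs

# the ratio hugs 1 + beta, approaching 2 only as beta -> 1
for beta in (0.3, 0.5, 0.7):
    assert abs(ratio(beta, 200_000) - (1 + beta)) < 0.1
```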


The linked proof of Carleman's Inequality (in the comment) indicates the method of balancing coefficients in weighted mean inequalities. In the same spirit we can show
$$\frac{1}{a_1}+\frac{2}{a_1+a_2}+\cdots+\frac{n}{a_1+a_2+\cdots+a_n} < 2\left(\frac{1}{a_1}+\frac{1}{a_2}+\cdots+\frac{1}{a_n}\right)$$
for positive $a_i$'s.

We choose a set of positive real numbers $x_1,x_2,\ldots,x_n$ such that, by the Cauchy-Schwarz inequality:
$$(a_1+a_2+\cdots+a_k)\left(\dfrac{x_1^2}{a_1}+\dfrac{x_2^2}{a_2}+\cdots+\dfrac{x_k^2}{a_k}\right) \ge (x_1+x_2+\cdots+x_k)^2$$

$$\implies \dfrac{k}{a_1+a_2+\cdots+a_k} \le \dfrac{k}{(x_1+x_2+\cdots+x_k)^2}\left(\dfrac{x_1^2}{a_1}+\dfrac{x_2^2}{a_2}+\cdots+\dfrac{x_k^2}{a_k}\right)$$

for each $k=1,2,\ldots,n$.

Adding them up from $k=1$ to $n$, we get:

$$\frac{1}{a_1}+\frac{2}{a_1+a_2}+\cdots+\frac{n}{a_1+a_2+\cdots+a_n} \le \dfrac{c_1}{a_1}+\frac{c_2}{a_2}+\ldots+\frac{c_n}{a_n}$$

where $c_k = \dfrac{kx_k^2}{(x_1+x_2+\cdots+x_k)^2} + \dfrac{(k+1)x_k^2}{(x_1+x_2+\cdots+x_{k+1})^2}+\ldots+\dfrac{nx_k^2}{(x_1+x_2+\cdots+x_n)^2}$ for each $k=1,2,\ldots,n$.

It remains to choose a set of $\{x_k\}_{k=1}^n$ such that $\max\limits_{k\in\{1,2,\ldots,n\}} c_k$ is minimized.

For example if we plug in $x_k = k$, we have,

$$c_k = k^2\left(\sum\limits_{j=k}^n \dfrac{j}{(1+2+\cdots+j)^2}\right) = 4k^2\left(\sum\limits_{j=k}^n \dfrac{1}{j(j+1)^2}\right)$$

$$\le 2k^2\left(\sum\limits_{j=k}^n \dfrac{2j+1}{j^2(j+1)^2}\right) = 2k^2\left(\sum\limits_{j=k}^n \dfrac{1}{j^2} - \sum\limits_{j=k}^n \dfrac{1}{(j+1)^2}\right)$$

$$= 2k^2\left(\dfrac{1}{k^2} - \dfrac{1}{(n+1)^2}\right) < 2$$
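The bound $c_k<2$ for $x_k=k$ admits a quick numeric check (illustrative only; the function name is ad hoc):

```python
def max_c(n):
    """max_k c_k for x_k = k, where c_k = k^2 * sum_{j=k}^{n} j / (1+2+...+j)^2."""
    tail = 0.0
    best = 0.0
    for j in range(n, 0, -1):        # accumulate the tail sum from j = n down to k
        T = j * (j + 1) // 2         # triangular number 1 + 2 + ... + j
        tail += j / (T * T)
        best = max(best, j * j * tail)
    return best

for n in (1, 2, 5, 10, 100, 1000):
    assert max_c(n) < 2
```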



How can one prove that for any sequence of positive numbers $a_n$, $n\ge1$, we have
$$\sum_{n=1}^\infty \frac{n}{a_1+a_2+a_3+\cdots+a_n}\le 2\sum_{n=1}^\infty \frac{1}{a_n}\,?$$


----------
Added later:

Apparently, this is a version of [Hardy's inequality][1]. The above is the case $p=-1$. (See the wiki for what $p$ is).

The case $p=2$ appears here: https://math.stackexchange.com/questions/110963/proving-a-l-2-to-l-2-is-a-bounded-operator


[1]: http://en.wikipedia.org/wiki/Hardy%27s_inequality


Here is a proof.

We try using induction, but as usual, a direct approach seems to fail and we have to try and prove a stronger statement.

So we try and pick a positive function $f(n)$ such that

$$\frac{f(n)}{a_1 + a_2 + \dots + a_n} + \sum_{k=1}^{n} \frac{k}{a_1 + a_2 + \dots + a_k} \le \sum_{j=1}^{n} \frac{2}{a_j}$$

Let $S = a_1 + a_2 + \dots + a_n$ and let $x = a_{n+1}$.

In order to prove that $n$ implies $n+1$ it would be sufficient to prove

$$\frac{2}{x} + \frac{f(n)}{S} \ge \frac{f(n+1) + n+1}{S+x}$$

This can be rearranged to

$$2S^2 + f(n)\,x^2 + (f(n) + 2)\,Sx \ge (f(n+1) + n+1)\,Sx$$

Since $$2S^2 + f(n)\,x^2 \ge 2\sqrt{2 f(n)}\,Sx,$$

it is sufficient to prove that $f(n)$ satisfies

$$f(n) + 2 + 2\sqrt{2 f(n)} \ge f(n+1) + n + 1$$

Choosing $f(n) = \dfrac{n^2}{2}$ does the trick.

We can easily verify the base case for this choice of $f(n)$.

Thus we have:

$$\frac{n^2}{2(a_1 + a_2 + \dots + a_n)} + \sum_{k=1}^{n} \frac{k}{a_1 + a_2 + \dots + a_k} \le \sum_{j=1}^{n} \frac{2}{a_j}$$

In the infinite summation case, the constant $2$ is the best we can do. Taking $a_n = n$, the partial sums of the left side are $\sum_{k=1}^n \frac{2}{k+1}$ while $\sum_{k=1}^n \frac{1}{a_k} = \sum_{k=1}^n \frac{1}{k}$, and the ratio of these tends to $2$ as $n\to\infty$, so no constant smaller than $2$ works.
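The strengthened statement (with the extra $n^2/2$ term) and the sharpness of the constant lend themselves to a quick numeric test; this is an illustration under arbitrary random inputs, not a proof:

```python
import random

def sides(a):
    """Return (sum_k k/A_k + n^2/(2 A_n), 2 * sum_j 1/a_j) for A_k = a_1+...+a_k."""
    A = lhs = rhs = 0.0
    for k, x in enumerate(a, start=1):
        A += x
        lhs += k / A
        rhs += 2.0 / x
    n = len(a)
    return lhs + n * n / (2.0 * A), rhs

random.seed(1)
for _ in range(300):
    a = [random.uniform(0.01, 10.0) for _ in range(random.randint(1, 40))]
    l, r = sides(a)
    assert l <= r + 1e-9

# with a_k = k the plain ratio (without the n^2/2 term) approaches 2
A = lhs = rhs = 0.0
for k in range(1, 50001):
    A += k
    lhs += k / A
    rhs += 1.0 / k
assert 1.8 < lhs / rhs < 2.0
```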



It is tagged as an open problem in the book *Fractional Parts, Series and Integrals*. If this proof is valid, I don't have any idea how to get it published, so I posted it here.

$$\sum_{a_1,a_2,\cdots,a_n=1}^\infty \frac{a_1a_2\cdots a_n}{(a_1+a_2+\cdots+a_n)!} = \;?$$

I am posting a proof of a closed form for the above series; please let me know if there are flaws. I came across some special cases of the sum, namely the cases of $2$ and $3$ variables:

$$\sum_{a=1}^\infty \sum_{b=1}^\infty \frac{ab}{(a+b)!} \;=\; \frac{2}{3}e$$


$$\sum_{a=1}^\infty \sum_{b=1}^\infty \sum_{c=1}^\infty \frac{abc}{(a+b+c)!} \;=\; \frac{31}{120}e$$
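These two special cases are easy to confirm numerically; the factorial decay makes a modest truncation plenty (the cutoffs below are ad hoc):

```python
from math import e, factorial

# truncated double sum for sum ab/(a+b)!; tail terms are smaller than 1/40!
s2 = sum(a * b / factorial(a + b)
         for a in range(1, 40) for b in range(1, 40))
assert abs(s2 - 2 * e / 3) < 1e-9

# truncated triple sum for sum abc/(a+b+c)!
s3 = sum(a * b * c / factorial(a + b + c)
         for a in range(1, 25) for b in range(1, 25) for c in range(1, 25))
assert abs(s3 - 31 * e / 120) < 1e-9
```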

This led me to solve the general version of the sum for any number of variables. So if $S$ is our sum, then
$$S = \sum_{k=n}^\infty \frac{1}{k!}\left(\sum_{a_1+a_2+\cdots+a_n=k} a_1 a_2\cdots a_n\right)$$

This was achieved by setting $\sum_{i=1}^n a_i = k$, and what remains is to calculate the inner sum enclosed in brackets.

We start by investigating the lower cases. Suppose we have only two variables $a_1,a_2$ with $a_1+a_2=k$; then

$$\sum_{a_1+a_2=k} a_1 a_2 = \sum_{N=1}^{k-1} N(k-N) = \frac{k(k-1)(k+1)}{3!} = \binom{k+1}{3}$$

Now if we take the case of $3$ variables, where $a_1+a_2+a_3=k$, we can obtain the sum as:

$$\sum_{a_1+a_2+a_3=k} a_1 a_2 a_3 = \sum_{N=1}^{k-2} N\binom{k+1-N}{3} = \frac{k(k-1)(k+1)(k-2)(k+2)}{5!}$$

Similarly, for $4$ variables it turns out to be

$$\sum_{a_1+a_2+a_3+a_4=k} a_1 a_2 a_3 a_4 = \frac{k(k-1)(k+1)(k-2)(k+2)(k-3)(k+3)}{7!}$$

I believe, for the same reason, that

$$\sum_{a_1+a_2+\cdots+a_n=k} a_1 a_2\cdots a_n = \frac{k}{(2n-1)!}\prod_{m=1}^{n-1}(k-m)(k+m)$$
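The conjectured product can be spot-checked by brute force over all compositions of $k$ (function names below are ad hoc). Note also that the right-hand side is exactly $\binom{k+n-1}{2n-1}$, the coefficient of $x^k$ in $\left(x/(1-x)^2\right)^n$, which suggests one route to a proof by generating functions:

```python
from itertools import product
from math import factorial, prod

def brute(n, k):
    """Sum of a_1*...*a_n over all positive integers a_i with a_1+...+a_n = k."""
    return sum(prod(a) for a in product(range(1, k + 1), repeat=n)
               if sum(a) == k)

def conjectured(n, k):
    """k * prod_{m=1}^{n-1} (k-m)(k+m) / (2n-1)!  (an integer)."""
    num = k
    for m in range(1, n):
        num *= (k - m) * (k + m)
    return num // factorial(2 * n - 1)

for n in range(2, 5):
    for k in range(n, n + 8):
        assert brute(n, k) == conjectured(n, k)
```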

This is indeed tough to prove by induction, but I guess it can be proved thanks to the great symmetry and pattern this sequence follows. I haven't tried yet, but will update with a proof as soon as possible; until then it is reasonable to conjecture this.

Lastly, we have

$$\begin{align} S &= \sum_{k=n}^\infty \frac{1}{k!}\left(\frac{k}{(2n-1)!}\prod_{m=1}^{n-1}(k-m)(k+m)\right) \\ &= \frac{1}{(2n-1)!}\sum_{k=n}^\infty \frac{1}{k\cdot k!}\,(k)_n\,(k)^n \\ &= \frac{1}{(2n-1)!}\sum_{k=n}^\infty \frac{1}{k\cdot k!} \left(\sum_{r=1}^{n} s(n,r)\,k^r\right) \left(\sum_{t=1}^n {n\brack t}k^t\right) \\ &= \frac{1}{(2n-1)!}\sum_{r,t=1}^n (-1)^{n+r} {n\brack r}{n\brack t}\left(\sum_{k=n}^\infty \frac{k^{r+t-1}}{k!}\right) \end{align}$$

Now, using Dobinski's formula, we finally have

$$\sum_{a_1,a_2,\cdots,a_n=1}^\infty \frac{a_1a_2\cdots a_n}{(a_1+a_2+\cdots+a_n)!} = \frac{1}{(2n-1)!}\sum_{r=1}^n\sum_{t=1}^n (-1)^{n+r} {n\brack r}{n\brack t} \left[e B_{r+t-1} - \sum_{m=1}^{n-1}\frac{m^{r+t-1}}{m!}\right]$$

where $B_n$ is the $n$-th Bell number.

**Edit:**

After some investigation, it became clear that the constant term in the final closed form, which always disappears when you compute the sum for a specific $n$ (leaving a rational multiple of $e$), vanishes not by magic but by logic. I proved it by induction.

Firstly, if we separate the answer into two parts and take the constant term, the one without $e$, we get

$$\sum_{r=1}^n \sum_{t=1}^n \sum_{m=1}^{n-1} (-1)^{n-r}{n\brack r}{n\brack t}\frac{m^{r+t-1}}{m!}$$

A little modification and an interchange of sums gives the result in terms of the Pochhammer symbol:

$$\sum_{m=1}^{n-1} \frac{(m)_n\, m^{t-1}}{m!} = 0$$

This sum is equal to zero because each falling factorial $(m)_n = m(m-1)\cdots(m-n+1)$ vanishes for $1\le m\le n-1$, which is easy to verify.

Thus the answer is :

$$\sum_{a_1,a_2,\cdots,a_n=1}^\infty \frac{a_1a_2\cdots a_n}{(a_1+a_2+\cdots+a_n)!} = e\left[\frac{1}{(2n-1)!}\sum_{r=1}^n\sum_{t=1}^n (-1)^{n+r} {n\brack r}{n\brack t} B_{r+t-1}\right]$$


I did not check your computation thoroughly, but your idea of resummation followed by the use of Stirling numbers definitely seems to work.

Alternatively, here is another answer, as a finite sum:

$$ \sum_{a_1, \cdots, a_n \geq 1} \frac{a_1 \cdots a_n}{(a_1 + \cdots + a_n)!}
= \left( \sum_{k=0}^{n}\binom{n}{k} (-1)^{n+k-1} \sum_{j=0}^{n+k-1} \frac{(-1)^j}{j!} \right) e. \tag{1} $$

This formula also shows that the sum is always a rational multiple of $e$.
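Formula $(1)$ can be checked in exact arithmetic against the special cases quoted above (a verification sketch, not part of the proof):

```python
from fractions import Fraction
from math import comb, factorial

def coeff(n):
    """The rational coefficient of e in formula (1)."""
    total = Fraction(0)
    for k in range(n + 1):
        # inner alternating sum over j = 0, ..., n+k-1
        inner = sum(Fraction((-1) ** j, factorial(j)) for j in range(n + k))
        total += comb(n, k) * (-1) ** (n + k - 1) * inner
    return total

assert coeff(1) == 1                    # sum_{a>=1} a/a! = e
assert coeff(2) == Fraction(2, 3)       # matches 2e/3
assert coeff(3) == Fraction(31, 120)    # matches 31e/120
```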

---

**Proof of (1).** We begin by recalling the multivariate beta identity. Let

$$\Delta^{n-1} = \{ (x_1, \cdots, x_n) \in \mathbb{R}^n : x_i \geq 0 \text{ and } x_1 + \cdots + x_n = 1 \}$$

denote the [$(n-1)$-simplex](https://en.wikipedia.org/wiki/Simplex). Then

$$ \mathrm{B}(a_1,\cdots,a_n) := \int_{\Delta^{n-1}} \left( \prod_{k=1}^{n} x_k^{a_k-1} \right) dx_1 \cdots dx_{n-1} = \frac{\Gamma(a_1)\cdots\Gamma(a_n)}{\Gamma(a_1+\cdots+a_n)}. $$

This is essentially equivalent to the usual [beta identity](https://en.wikipedia.org/wiki/Beta_function#Relationship_between_gamma_function_and_beta_function). Now denoting the sum by $S_n$, we have

\begin{align*}
S_n
&= \sum_{a_1,\cdots,a_n \geq 1} \frac{1}{a_1+\cdots+a_n} \left( \prod_{k=1}^{n} \frac{a_k}{\Gamma(a_k)} \right) \mathrm{B}(a_1, \cdots, a_n) \\
&= \sum_{a_1,\cdots,a_n \geq 1} \left( \int_{0}^{1} u^{a_1+\cdots+a_n-1} \, du \right) \left( \prod_{k=1}^{n} \frac{a_k}{\Gamma(a_k)} \right) \left( \int_{\Delta^{n-1}} \left( \prod_{k=1}^{n} x_k^{a_k-1} \right) d\mathrm{x} \right) \\
&= \int_{\Delta^{n-1}} \int_{0}^{1} \left( \prod_{k=1}^{n} \sum_{a_k=1}^{\infty} \frac{(u x_k)^{a_k-1} a_k}{(a_k - 1)!} \right) u^{n-1} \, du \, d\mathrm{x}.
\end{align*}

The inner sum can be easily computed by the formula

$$ \sum_{a=0}^{\infty} \frac{a+1}{a!}z^a = e^z (1+z), $$

and hence we obtain

\begin{align*}
S_n
&= \int_{\Delta^{n-1}} \int_{0}^{1} u^{n-1} e^u (1 + ux_1)\cdots(1 + ux_n) \, du \, d\mathrm{x} \\
&= \sum_{I \subset \{1,\cdots,n\}} \int_{0}^{1} u^{n+|I|-1} e^u \left( \int_{\Delta^{n-1}} \prod_{i\in I} x_i \, d\mathrm{x} \right) du \\
&= \sum_{k=0}^{n} \binom{n}{k} \mathrm{B}(\underbrace{2,\cdots,2}_{k \text{ terms}}, \underbrace{1,\cdots,1}_{n-k \text{ terms}}) \int_{0}^{1} u^{n+k-1} e^u \, du.
\end{align*}

Evaluating the last sum gives the expression $(1)$ as desired.

---

**Addendum.** I checked the preface of the book and found the quote:

> *Each chapter contains a section of difficult problems, motivated by other problems in the book, which are collected in a special section entitled “Open problems”...*.

That being said, they are truly intended as exercises for readers!


Assume that $a_n>0$ is such that $\sum_{n=1}^{\infty} a_n$ converges.

>**Question:** For what values of $s\in\Bbb R$ does the series
$$ I_s = \sum_{n=1}^{\infty} \left(\frac{a_1^{1/s}+a_2^{1/s}+\cdots+a_n^{1/s}}{n}\right)^s $$
converge or diverge?

This question is partially motivated by some comments [on this post][1], where it is shown that $I_s$ converges for $s>1$. Moreover, [it is well known][n] that
$$\lim_{s\to\infty}\left(\frac{a_1^{1/s}+a_2^{1/s}+\cdots+a_n^{1/s}}{n}\right)^s = \left(a_1a_2\cdots a_n\right)^{1/n}$$

Accordingly, taking $b_n = 1/a_n$, one readily gets
$$\lim_{\color{red}{s\to-\infty}}\left(\frac{a_1^{1/s}+a_2^{1/s}+\cdots+a_n^{1/s}}{n}\right)^s = \left(a_1a_2\cdots a_n\right)^{1/n},$$
and it follows from [Carleman's inequality][a] that
$$\color{red}{I_{-\infty}} = I_\infty = \sum_{n=1}^{\infty}\left(a_1a_2\cdots a_n\right)^{1/n} \le e \sum_{n=1}^{\infty} a_n < \infty.$$
Patently, the convergence also holds for $s=-1$; this is proven [here][2]. Whereas the convergence fails for $0<s<1$:
indeed, $$\sum_{n=1}^{\infty} \left(\frac{a_1^{1/s}+a_2^{1/s}+\cdots+a_n^{1/s}}{n}\right)^s \ge \sum_{n=1}^{\infty} \frac{a_1}{n^s} = \infty$$

>**So we have that $I_s$ converges for $1<s\le\infty$ or $s\in\{-1,-\infty\}$, and diverges for $0<s<1$.**

>Hence the original question reduces to studying $I_s$ for $s\le 0$. Can anyone help?

Clearly the hope is that $I_s$ converges for $-\infty\le s\le -1$ and diverges for $-1<s<0$.

I don't know if one could infer some conjecture for the case $s=0$ since it seems pathological.

Obviously for $s=0^+$ and with $a_n=\frac{2}{n^2}$ we have
$$ \sum_{n=1}^{\infty} \left(\frac{a_1^{1/s}+a_2^{1/s}+\cdots+a_n^{1/s}}{n}\right)^s = \sum_{n=1}^{\infty} \left(\frac{2^{1/0^+}+a_2^{1/s}+\cdots+a_n^{1/s}}{n}\right)^s $$

[n]:https://math.stackexchange.com/questions/357138/if-a-1-a-2-dotsc-a-n0-then-lim-limits-x-to-infty-left-frac-a-11?rq=1
[a]:https://en.wikipedia.org/wiki/Carleman%27s_inequality
[2]:https://math.stackexchange.com/questions/599999/the-series-sum-limits-n-1-infty-frac-n-frac1a-1-frac1a-2-dotsb-fra
[1]:https://math.stackexchange.com/questions/2587408/proving-that-sum-n-1-infty-left-fraca-11-sa-21-s-cdots-a-n1


**Answer:** Using the [power mean (generalized mean) inequality][1] the convergence for $s<0$ follows easily from the convergence for $s>1$. This probably does not give optimal bounds for $I_s$.

---
It's more natural to set $s=1/t$, so that the summands are [generalized means][1] which, for fixed $(a_n)$, are increasing in $t\in\mathbb R$ by the power mean inequality (which says exactly that). We know that $I_{1/t}$ converges for $0<t<1$ (i.e. $s>1$, by Hardy's inequality, [see here][2]) and thus for all $-\infty\leq t<1$ by the comparison test.
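The monotonicity in $t$ that this argument leans on is just the power mean inequality; here is a tiny numeric illustration (the sample sequence is chosen arbitrarily):

```python
import math

def power_mean(a, t):
    """Generalized (power) mean M_t of positive numbers; t = 0 is the geometric mean."""
    n = len(a)
    if t == 0:
        return math.exp(sum(math.log(x) for x in a) / n)
    return (sum(x ** t for x in a) / n) ** (1.0 / t)

a = [0.5, 3.0, 1.25, 7.0]
means = [power_mean(a, t) for t in (-4, -1, -0.5, 0, 0.5, 1, 3)]
# M_t is non-decreasing in t (power mean inequality)
assert all(m1 <= m2 + 1e-12 for m1, m2 in zip(means, means[1:]))
```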

We also have that:

* the supremum $S_t=\sup(I_{1/t}/\sum a_n)$ over all sequences is finite for all $t<1$
* $S_t$ is (monotonically) increasing in $t$
* $S_{-\infty}=1$ (take a decreasing sequence)
* $S_{-1}=2$ ([see this question][3])
* $S_0 \leq e$ ([Carleman's inequality][4])
* $S_t \leq (1-t)^{-1/t}$ for $0<t<1$ (Hardy's inequality, [see this question][2])

It's natural to conjecture that $S_t=(1-t)^{-1/t}$ for all $t<1$, which is $1$ at $t=-\infty$ and $e$ at $t=0$.


[1]: https://en.wikipedia.org/wiki/Generalized_mean
[2]: https://math.stackexchange.com/questions/2587408
[3]: https://math.stackexchange.com/questions/599999
[4]: https://en.wikipedia.org/wiki/Carleman%27s_inequality


Original source: https://www.cnblogs.com/Eufisky/p/9778670.html