## On the Number of Primes less than a given Magnitude

The best way to express my gratitude for the honor which the Academy has conferred on me by selecting me as one of its correspondents is, I believe, to make immediate use of the privilege thus received and to communicate an investigation into the frequency of the prime numbers; a topic which, through the interest that Gauß and Dirichlet showed in it over many years, may perhaps not seem entirely unworthy of such a communication.

The starting point for this investigation is Euler's observation that the product

\begin{equation} \prod\frac1{1-\frac1{p^s}}=\sum\frac1{n^s}, \end{equation}

where $p$ ranges over all prime numbers and $n$ over all positive integers.
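As a purely numerical illustration (a modern sketch; all function names here are our own), the two sides of Euler's identity can be compared for a real $s>1$, say $s=2$, where both approach $\pi^2/6$:

```python
from math import isqrt, pi

def primes_up_to(n):
    """Simple sieve of Eratosthenes returning all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, isqrt(n) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return [q for q, flag in enumerate(sieve) if flag]

def euler_product(s, limit):
    """Truncated product over primes p <= limit of 1/(1 - p^-s)."""
    prod = 1.0
    for p in primes_up_to(limit):
        prod *= 1.0 / (1.0 - p ** (-s))
    return prod

def zeta_sum(s, limit):
    """Truncated sum over n <= limit of 1/n^s."""
    return sum(n ** (-s) for n in range(1, limit + 1))

# At s = 2 both truncations approach pi^2/6 = 1.6449...
print(euler_product(2, 10**5), zeta_sum(2, 10**5), pi**2 / 6)
```

The truncation error of the sum is of order $1/\text{limit}$, that of the product smaller still, so both agree with $\pi^2/6$ to several digits.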

The function of a complex variable $s$, which is represented by these two expressions wherever they converge, I denote by $\zeta(s).$ Both converge only as long as the real part of $s$ is greater than $1;$ however, one can easily find an expression for the function which is always valid. By applying the equation

\begin{equation} \int\limits_0^\infty e^{-nx}x^{s-1}dx =\frac{\Gamma(s)}{n^s} \end{equation}

one finds first

\begin{equation} \Gamma(s)\zeta(s) =\int\limits_0^\infty\frac{x^{s-1}dx}{e^x-1}. \end{equation}

If one now considers the integral

\begin{equation} \int\frac{(-x)^{s-1}dx}{e^x-1} \end{equation}

from $+\infty$ to $+\infty$ in a positive sense around a domain that includes the value 0, but does not contain any other discontinuity of the integrand in its interior, then this is easily seen to be equal to

\begin{equation} (e^{-\pi si}-e^{\pi si})\int\limits_0^\infty\frac{x^{s-1}dx}{e^x-1}, \end{equation}

provided that, in the many-valued function $(-x)^{s-1}=e^{(s-1)\log(-x)},$ the logarithm of $-x$ is determined so as to be real for negative $x$. Thus

\begin{equation} 2\sin\pi s \,\Gamma(s)\zeta(s) =i\int\limits_\infty^\infty\frac{(-x)^{s-1}dx}{e^x-1}, \end{equation}

where the integral is understood as defined above.

This equation gives the value of the function $\zeta(s)$ for all complex $s$ and shows that it is single-valued and finite for all values of $s$ except 1, and also that it vanishes when $s$ is a negative even integer.

When the real part of $s$ is negative, the integral may, instead of being taken in the positive sense around the boundary of the given domain, also be taken in the negative sense around the boundary of the complement of that domain, since the integral over values of infinitely large modulus is then infinitely small.

In the interior of this complementary domain, however, the integrand becomes discontinuous only if $x$ equals an integer multiple of $\pm2\pi i$. The integral is therefore equal to the sum of the integrals taken around these values in the negative sense. Since the integral around the value $n2\pi i$ equals $(-n2\pi i)^{s-1}(-2\pi i)$ this gives

\begin{equation} 2\sin\pi s\, \Gamma(s)\zeta(s) = (2\pi)^s\sum n^{s-1}\bigl((-i)^{s-1}+i^{s-1}\bigr). \end{equation}

This relation between $\zeta(s)$ and $\zeta(1-s)$ can also be stated using known properties of the function $\Gamma$ as the proposition that

\begin{equation} \Gamma( \frac{s}{2} )\,\pi^{-\frac s2}\zeta(s) \end{equation}

remains unchanged when $s$ is replaced by $1-s$.

This property of the function motivated me to introduce, in place of $\Gamma(s),$ the integral $\Gamma(\frac {s}{2})$ into the general term of the series $\sum {\frac {1}{n^{s}}}$, which leads to a very convenient expression for the function $\zeta(s).$ In fact one then has

\begin{equation} \frac1{n^s}\, \Gamma(\frac s2 )\,\pi^{-\frac s2} =\int\limits_0^\infty e^{-nn\pi x}\,x^{\frac s2-1}dx. \end{equation}

So, if one sets

\begin{equation} \sum_1^\infty e^{-nn\pi x}=\psi(x) \end{equation}

it follows that

\begin{equation} \Gamma(\frac s2 )\pi^{-\frac s2}\zeta(s) =\int\limits_0^\infty\psi(x)x^{\frac s2-1}dx. \end{equation}

Also, since (Jacobi. Fund. p. 184)

\begin{equation} 2\psi(x)+1=x^{-\frac12}\left(2\psi\left(\frac1x\right)+1\right), \end{equation}

one has

\begin{multline} \Gamma(\frac s2 )\pi^{-\frac s2}\zeta(s)\\ =\int\limits_1^\infty\psi(x) x^{\frac s2-1}dx +\int\limits_0^1\psi\left(\frac1x\right)x^{\frac s2-\frac32}dx\qquad\\ \quad +\frac12\int\limits_0^1\left(x^{\frac s2-\frac32} -x^{\frac s2-1}\right)dx\\ =\frac1{s(s-1)}+\int\limits_1^\infty\psi(x) \left(x^{\frac s2-1}+x^{-\frac12-\frac s2}\right)dx. \end{multline}
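Jacobi's transformation used above lends itself to a quick numerical check. In the sketch below (plain Python; names our own) the series is truncated, which is harmless since its terms decay like $e^{-n^2\pi x}$:

```python
from math import exp, pi, sqrt

def psi(x, terms=200):
    """Partial sum of psi(x) = sum_{n>=1} exp(-n^2 * pi * x); converges
    extremely fast for x bounded away from 0."""
    return sum(exp(-n * n * pi * x) for n in range(1, terms + 1))

def theta_identity_gap(x):
    """Difference between the two sides of Jacobi's transformation
    2*psi(x) + 1 = x^(-1/2) * (2*psi(1/x) + 1)."""
    lhs = 2 * psi(x) + 1
    rhs = (2 * psi(1 / x) + 1) / sqrt(x)
    return lhs - rhs

for x in (0.3, 1.0, 2.7):
    print(x, theta_identity_gap(x))  # gaps vanish to machine precision
```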

I now set $s = \tfrac12+ti$ and

\begin{equation} \Gamma\left(\frac s2 + 1 \right)(s-1) \pi^{-\frac s2}\zeta(s)=\xi(t) \end{equation}

so that

\begin{equation} \xi(t)=\frac12-\left(t^2+\frac14\right) \int\limits_1^\infty\psi(x)x^{-\frac34} \cos\left(\frac12t\log x\right)dx \end{equation}

or also

\begin{equation} \xi(t)=4\int\limits_1^\infty\frac{d\left(x^\frac32\psi'(x)\right)} {dx}x^{-\frac14}\cos\left(\frac12t\log x\right)dx. \end{equation}

This function is finite for all finite values of $t$ and can be developed into a very rapidly converging series in powers of $t^2.$ Since for values of $s$ with real part greater than 1, $\log\zeta(s)=-\sum\log(1-p^{-s})$ remains finite, and since the same is true for the logarithms of the other factors of $\xi(t),$ the function $\xi(t)$ can only vanish if the imaginary part of $t$ lies between $\tfrac12i$ and $-\tfrac12i.$

The number of roots of $\xi(t)=0$ whose real parts lie between 0 and $T$ is approximately

\begin{equation} \tfrac T{2\pi}\log\tfrac T{2\pi}-\tfrac T{2\pi}. \end{equation}

This is because the integral $\textstyle\int d\log\xi(t),$ taken in the positive sense around the region of all values of $t$ whose imaginary parts lie between $\tfrac12i$ and $-\tfrac12i$ and whose real parts lie between 0 and $T,$ is (up to a fraction of the order of magnitude of $\tfrac1T$) equal to $(T\log\tfrac T{2\pi}-T)i\,;$ on the other hand, this integral equals the number of roots of $\xi (t) = 0$ lying within this region, multiplied by $2\pi i.$
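The main term of this count is easy to evaluate. As a modern aside: the number of roots up to height $100$ is now well established to be $29$, and Riemann's leading term already lands close (a later constant correction of $+\tfrac78$ closes most of the remaining gap):

```python
from math import log, pi

def riemann_zero_count_estimate(T):
    """Riemann's asymptotic count of roots with real part (of t) in (0, T):
    (T / 2pi) * log(T / 2pi) - T / 2pi."""
    return (T / (2 * pi)) * log(T / (2 * pi)) - T / (2 * pi)

# The true count up to T = 100 is 29; the main term gives about 28.1.
print(riemann_zero_count_estimate(100))
```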

In fact, about this many real roots are found within these bounds, and it is very probable that all roots are real. One would certainly wish for a rigorous proof of this; I have, however, after some fleeting vain attempts, provisionally set aside the search for one, as it appeared dispensable for the immediate objective of my investigation.

If $\alpha$ denotes a root of the equation $\xi(\alpha)=0,$ then one can express $\log\xi(t)$ as

\begin{equation} \sum\log\left(1-\frac{t^2}{\alpha^2}\right)+\log\xi(0). \end{equation}

Since the density of the roots of size $t$ grows only as $\log\frac t{2\pi},$ this expression converges and for infinite $t$ becomes infinite only as $t\log t;$ thus it differs from $\log\xi(t)$ by a function of $t^2$ which remains continuous and finite for finite $t$ and, divided by $t^2,$ becomes infinitely small for infinite $t.$ This difference is therefore a constant, whose value can be determined by setting $t=0.$

The number of prime numbers smaller than $x$ can now be determined using these facts. Let $\pi(x),$ when $x$ is not a prime number, be equal to this number; but when $x$ is a prime number, let it be greater by $\tfrac12,$ so that for an $x$ at which the value of $\pi(x)$ jumps,

\begin{equation} \pi(x)=\frac{\pi(x+0)+\pi(x-0)}2. \end{equation}

If one now replaces in

\begin{align} \log\zeta(s)&=-\sum\log(1-p^{-s})\\ &=\sum p^{-s}+\frac12\sum p^{-2s}+\frac13\sum p^{-3s}+\ldots \end{align}

the terms

\begin{equation} p^{-s}\, \text{ by } s\int_p^\infty x^{-s-1}dx, \ p^{-2s}\, \text{ by } s\int_{p^2}^\infty x^{-s-1}dx,\ \ldots, \end{equation}

one obtains

\begin{equation} \frac{\log\zeta(s)}s=\int\limits_1^\infty \Pi(x)\,x^{-s-1}dx \end{equation}

if one writes $\Pi(x)$ for

\begin{equation} \pi(x)+\frac12\pi(x^\frac12)+\frac13\pi(x^\frac13)+\ldots\,. \end{equation}
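This sum is finite, since $x^{1/n}$ falls below 2 once $n>\log_2 x$, so $\Pi(x)$ can be evaluated directly from a prime count (a modern sketch; names our own, with $x$ chosen away from the primes so the half-jump convention plays no role):

```python
from math import isqrt

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, isqrt(n) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return [q for q, flag in enumerate(sieve) if flag]

def prime_pi(x):
    """pi(x): number of primes <= x."""
    n = int(x)
    return len(primes_up_to(n)) if n >= 2 else 0

def big_pi(x):
    """Pi(x) = pi(x) + (1/2) pi(x^(1/2)) + (1/3) pi(x^(1/3)) + ...;
    the series terminates once x^(1/n) drops below 2."""
    total, n = 0.0, 1
    while x ** (1.0 / n) >= 2.0:
        total += prime_pi(x ** (1.0 / n)) / n
        n += 1
    return total

print(big_pi(100.5))  # 25 + 4/2 + 2/3 + 2/4 + 1/5 + 1/6 = 28.533...
```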

This equation is valid for any complex value $a+bi$ of $s$ for which $a>1.$ But if, in the given range, the equation

\begin{equation} g(s)=\int\limits_0^\infty h(x)x^{-s}d\log x \end{equation}

holds, then Fourier's theorem allows one to express the function $h$ in terms of the function $g$. If $h(x)$ is real and

\begin{equation} g(a+bi)=g_1(b)+ig_2(b), \end{equation}

the equation decomposes into the following two:

\begin{align} g_1(b)&=\int\limits_0^\infty h(x)x^{-a}\cos(b\log x)d\log x,\\ ig_2(b)&=-i\int\limits_0^\infty h(x)x^{-a}\sin(b\log x)d\log x. \end{align}

If both equations are multiplied by

\begin{equation} \bigl(\cos(b\log y)+i\sin(b\log y)\bigr)\,db \end{equation}

and integrated from $-\infty$ to $+\infty,$ then by Fourier's theorem one obtains on the right-hand side of both $\pi h(y)y^{-a}.$ Thus, adding the two equations and multiplying by $iy^a,$ one obtains

\begin{equation} 2\pi ih(y)=\int\limits_{a-\infty i}^{a+\infty i}g(s)y^sds, \end{equation}

where the integration has to be carried out such that the real part of $s$ remains constant.

The integral represents, for a value of $y$ at which a jump of the function $h(y)$ occurs, the mean of the function values of $h$ on either side of the jump. By the way the function $\Pi(x)$ is defined it too has this property; therefore one has in full generality

\begin{equation} \Pi(y)=\frac1{2\pi i}\int\limits_{a-\infty i}^{a+\infty i} \frac{\log\zeta(s)}sy^sds. \end{equation}

For $\log\zeta$ one can substitute the expression found earlier

\begin{multline} \frac s2\log\pi-\log(s-1) - \log\Gamma\left( \frac s2 + 1 \right) \\ +\sum_\alpha\log\left(1+\frac{(s-\frac12)^2}{\alpha^2 }\right)+\log\xi(0). \end{multline}

But since the integrals of the individual terms of this expression do not converge when extended to infinity, it is desirable to first transform the equation by integration by parts into

\begin{equation} \Pi(x)=-\frac1{2\pi i}\frac1{\log x}\int \limits_{a-\infty i}^{a+\infty i}\frac{d\frac{\log\zeta(s)}s}{ds}x^sds . \end{equation}

Since

\begin{multline} -\log\Gamma\left( \frac s2 + 1 \right)\\ =\lim_{m\to\infty}\left(\sum_{n=1}^{m}\log\left(1+\frac s{2n}\right) -\frac s2\log m\right), \end{multline}

and therefore

\begin{equation} -\frac{d\frac1s \log\Gamma\left( \frac s2 + 1 \right) }{ds} =\sum_1^\infty\frac{d\frac1s\log\left(1+\frac s{2n}\right)}{ds}, \end{equation}

all terms of the expression for $\Pi(x)$ with the exception of

\begin{equation} \frac1{2\pi i}\frac1{\log x}\int\limits_{a-\infty i}^{a+\infty i} \frac1{s^2}\log\xi(0)x^sds=\log\xi(0) \end{equation}

take the form

\begin{equation} \pm\frac1{2\pi i}\frac1{\log x}\int\limits_{a-\infty i}^{a+\infty i} \frac{d\left(\frac1s\log\left(1-\frac s\beta\right)\right)}{ds}x^sds. \end{equation}

But now

\begin{equation} \frac{d\left(\frac1s\log\left(1-\frac s\beta\right)\right)}{d\beta} =\frac1{(\beta-s)\beta}. \end{equation}

Also, if the real part of $s$ is greater than the real part of $\beta,$ then one obtains

\begin{align} \frac1{2\pi i}&\int\limits_{a-\infty i}^{a+\infty i}\frac{x^sds}{(\beta-s)\beta} =\frac{x^\beta}\beta\\ &=\int\limits_\infty^x x^{\beta-1}dx\;\, \text{ or } \,=\int\limits_0^x x^{\beta-1}dx. \end{align}

depending on the real part of $\beta$ being negative or positive. As a result

\begin{align} \frac1{2\pi i}&\frac1{\log x}\int\limits_{a-\infty i}^{a+\infty i} \frac{d\left(\frac1s\log\left(1-\frac s\beta\right)\right)}{ds}x^sds\\ &= -\frac1{2\pi i}\int\limits_{a-\infty i}^{a+\infty i}\frac1s\log\left(1-\frac s\beta\right)x^sds\\ &= \int\limits_\infty^x\frac{x^{\beta-1}}{\log x}dx+\text{const.}\ \text{ or } \ =\int\limits_0^x\frac{x^{\beta-1}}{\log x}dx+\text{const.} \end{align}

In the first case the constant of integration can be determined by letting the real part of $\beta$ approach negative infinity.

In the second case the integral from $0$ to $x$ takes two values which differ by $2\pi i$, depending on whether the path of integration is taken through the complex values with positive or negative argument. If the path is in the upper halfplane the integral becomes infinitely small when the coefficient of $i$ in $\beta$ becomes infinite and positive; but if the path is in the lower halfplane the integral becomes infinitely small when this coefficient becomes infinite and negative.

This shows how on the left hand side $\log(1- s/\beta)$ has to be determined in order that the constant of integration vanish.

Inserting these values in the expression for $\Pi(x)$ gives

\begin{align} \Pi(x)=\Li(x)&-\sum_\alpha \left(\Li\left(x^{\frac12+\alpha i}\right) +\Li\left(x^{\frac12-\alpha i}\right)\right)\\ &+\int\limits_x^\infty\frac1{x^2-1}\frac{dx}{x\log x}-\log 2, \end{align}

where in $\sum_\alpha$ for $\alpha$ all positive roots (or all roots with positive real part) of the equation $\xi(\alpha)=0$ are taken, ordered by size. It can easily be shown, by means of a more detailed discussion of the function $\xi,$ that with this ordering the sum of the series

\begin{equation} \sum_\alpha\left(\Li\left(x^{\frac12+\alpha i}\right) +\Li\left(x^{\frac12-\alpha i}\right)\right)\log x \end{equation}

is the same as the limit of

\begin{equation} \frac1{2\pi i}\int\limits_{a-bi}^{a+bi}\frac{d\frac1s\sum\log\left(1 +\frac{\left(s-\frac12\right)^2}{\alpha^2}\right)}{ds}x^sds \end{equation}

as $b$ grows without bound; with a different ordering of the terms, however, it can approach any arbitrary real value.

From $\Pi(x)$ one can find $\pi(x)$ by inverting

\begin{equation} \Pi(x)=\sum\frac1n\pi\left(x^\frac1n\right)\,. \end{equation}

The resulting equation is

\begin{equation} \pi(x)=\sum(-1)^\mu\frac1m\Pi\left(x^\frac1m\right), \end{equation}

in which $m$ ranges over all positive integers which are not divisible by any square except 1 and $\mu$ denotes the number of prime factors of $m$.
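The coefficients $(-1)^\mu/m$ are the values of the Möbius function divided by $m$, and the inversion can be verified numerically: applying it to $\Pi$ recovers the prime count exactly, since every $\pi(x^{1/k})$ with $x^{1/k}<2$ vanishes. A self-contained modern sketch (names our own):

```python
from math import isqrt

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, isqrt(n) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return [q for q, flag in enumerate(sieve) if flag]

def prime_pi(x):
    """pi(x), with x chosen away from the primes themselves."""
    n = int(x)
    return len(primes_up_to(n)) if n >= 2 else 0

def big_pi(x):
    """Pi(x) = sum over n of (1/n) pi(x^(1/n))."""
    total, n = 0.0, 1
    while x ** (1.0 / n) >= 2.0:
        total += prime_pi(x ** (1.0 / n)) / n
        n += 1
    return total

def mobius(m):
    """(-1)^mu for squarefree m (mu = number of prime factors), else 0."""
    result, d = 1, 2
    while d * d <= m:
        if m % d == 0:
            m //= d
            if m % d == 0:
                return 0
            result = -result
        d += 1
    return -result if m > 1 else result

def pi_from_big_pi(x):
    """Recover pi(x) as the sum over squarefree m of (-1)^mu (1/m) Pi(x^(1/m));
    the truncation is exact because pi vanishes below 2."""
    total, m = 0.0, 1
    while x ** (1.0 / m) >= 2.0:
        total += mobius(m) * big_pi(x ** (1.0 / m)) / m
        m += 1
    return total

print(pi_from_big_pi(100.5), prime_pi(100.5))
```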

If one restricts $\sum_\alpha$ to a finite number of terms, then the derivative of the expression for $\Pi(x)$ or, up to a part which decreases very rapidly with increasing $x$,

\begin{equation} \frac1{\log x}-2\sum_\alpha\frac{\cos(\alpha\log x)x^{-\frac12}}{\log x} \end{equation}

gives an approximate expression for the density of the prime numbers, plus $1/2$ of the density of the prime squares, plus $1/3$ of the density of the prime cubes, etc., at the magnitude $x$.
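The paper gives no numerical roots $\alpha$; taking the first few ordinates as input (modern, well-established values, not from the text), the truncated density can be tabulated. The oscillatory part is bounded by $2k/(\sqrt x\log x)$ for $k$ roots, so for large $x$ the main term $1/\log x$ dominates:

```python
from math import cos, log, sqrt

# Ordinates of the first five roots alpha of xi(alpha) = 0 -- modern
# numerical values, assumed here, not stated in the paper.
ALPHAS = [14.134725, 21.022040, 25.010858, 30.424876, 32.935062]

def density_approx(x, alphas=ALPHAS):
    """Truncated approximation 1/log x - 2 sum cos(alpha log x) / (sqrt(x) log x)
    to the combined density of primes, half prime squares, etc."""
    main = 1.0 / log(x)
    oscillation = sum(cos(a * log(x)) for a in alphas)
    return main - 2.0 * oscillation / (sqrt(x) * log(x))

for x in (10.0, 100.0, 1000.0):
    print(x, density_approx(x))
```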

The known approximation $\pi(x)=\Li(x)$ is therefore valid only up to an order of magnitude of $x^{1/2}$ and gives a somewhat too large value, because the non-periodic terms in the expression of $\pi(x)$ are, apart from terms which remain bounded as $x$ increases,

\begin{multline} \Li(x)-\frac12\Li\left(x^\frac12\right)-\frac13 \Li\left(x^\frac13\right)- \frac15\Li\left(x^\frac15\right)\\ +\frac16\Li\left(x^\frac16\right)-\frac17 \Li\left(x^\frac17\right)+\ldots\,. \end{multline}
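These non-periodic terms can be evaluated numerically; the sketch below (plain Python; the constant $\Li(2)\approx1.04516$ is a quoted value, and all names are our own) computes $\Li$ by Simpson quadrature and truncates the series at six terms, the later terms being very small. For $x=1000$, where the prime count is $168$, plain $\Li(x)\approx177.6$ overshoots, while the corrected series comes down to roughly $168$–$169$:

```python
from math import log

# Principal value of the integral of dt/log t from 0 to 2; a well-known
# numerical constant, quoted rather than computed here.
LI_AT_2 = 1.04516378

def li(x, steps=10000):
    """Li(x) as Li(2) plus a composite Simpson quadrature of dt/log t
    over [2, x] (steps must be even)."""
    if x <= 2:
        raise ValueError("this sketch only handles x > 2")
    h = (x - 2.0) / steps
    total = 1.0 / log(2.0) + 1.0 / log(x)
    for i in range(1, steps):
        t = 2.0 + i * h
        total += (4 if i % 2 else 2) / log(t)
    return LI_AT_2 + total * h / 3.0

def mobius(m):
    """(-1)^(number of prime factors) for squarefree m, else 0."""
    result, d = 1, 2
    while d * d <= m:
        if m % d == 0:
            m //= d
            if m % d == 0:
                return 0
            result = -result
        d += 1
    return -result if m > 1 else result

def nonperiodic_terms(x, terms=6):
    """Truncation of Li(x) - (1/2)Li(x^(1/2)) - (1/3)Li(x^(1/3)) - ...,
    whose coefficients are the Moebius values divided by m."""
    return sum(mobius(m) * li(x ** (1.0 / m)) / m
               for m in range(1, terms + 1)
               if x ** (1.0 / m) > 2)

print(li(1000.0), nonperiodic_terms(1000.0))
```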

Indeed, in the comparison, carried out by Gauß and Goldschmidt for $x$ up to three million, of $\Li(x)$ with the number of prime numbers less than $x,$ this number has, from the first hundred thousand on, always been less than $\Li(x);$ in fact the difference grows gradually, with many fluctuations, as $x$ increases.

But also the local increase and decrease in the density of the prime numbers, which is represented by the periodic terms in the formula, was already observed in the counting, although no law for it could be established.

In the event of a new enumeration, it would be interesting to trace the influence of the individual periodic terms contained in the formula for the density of the prime numbers. A more regular behavior than that of $\pi(x)$ would be shown by the function $\Pi(x),$ which already in the first hundred is on average very nearly equal to $\Li(x)+\log\xi(0)$.
