%%
%% This is a LaTeX document generated by automatic conversion of a
%% notebook-format document using Publicon 1.0.
%%
\documentclass{article}
\usepackage{graphicx,amscd,amsmath,amssymb,verbatim}
\usepackage[dvips]{hyperref}
\usepackage[TS1,OT1,T1]{fontenc}
\begin{document}
\title{Mathematical Formulation of the Quantum Theory of Electromagnetic
Interaction}
\author{R.P. Feynman}
\date{\today}
\maketitle
\begin{abstract}
The validity of the rules given in previous papers for the solution
of problems in quantum electrodynamics is established. Starting
with Fermi's formulation of the field as a set of harmonic oscillators,
the effect of the oscillators is integrated out in the Lagrangian
form of quantum mechanics. There results an expression for the effect
of all virtual photons valid to all orders in $e^{2}/\hbar c$. It
is shown that evaluation of this expression as a power series in
$e^{2}/\hbar c$ gives just the terms expected by the aforementioned
rules.
In addition, a relation is established between the amplitude for
a given process in an arbitrary unquantized potential and in a quantum
electrodynamical field. This relation permits a simple general statement
of the laws of quantum electrodynamics.
A description, in Lagrangian quantum-mechanical form, of particles
satisfying the Klein-Gordon equation is given in an Appendix. It
involves the use of an extra parameter analogous to proper time
to describe the trajectory of the particle in four dimensions.
A second Appendix discusses in the special case of photons, the
problem of finding what real processes are implied by the formula
for virtual processes.
Problems of the divergences of electrodynamics are not discussed.
\end{abstract}
\section{Introduction}
\noindent In two previous papers\footnote{R. P. Feynman, Phys. Rev.
{\bfseries 76}, 749 (1949), hereafter called \textbf{I}, and Phys.
Rev. {\bfseries 76}, 769 (1949), hereafter called \textbf{II}.}
rules were given for the calculation of the matrix element for any
process in electrodynamics, to each order in $e^{2}/\hbar c$. No
complete proof of the equivalence of these rules to the conventional
electrodynamics was given in these papers. Secondly, no closed expression
was given valid to all orders in $e^{2}/\hbar c$. In this paper
these formal omissions will be remedied.\footnote{See in this connection
also the papers of S. Tomonaga, Phys. Rev. {\bfseries 74}, 224 (1948);
S. Kanesawa and S. Tomonaga, Prog. Theoret. Phys. {\bfseries 3},
101 (1948); J. Schwinger, Phys. Rev. {\bfseries 76}, 790 (1949);
F. Dyson, Phys. Rev. {\bfseries 75}, 1736 (1949); W. Pauli and F.
Villars, Rev. Mod. Phys. {\bfseries 21}, 434 (1949). The papers
cited give references to previous work.}
In paper \textbf{II} it was pointed out that for many problems in
electrodynamics the Hamiltonian method is not advantageous, and
might be replaced by the over-all space-time point of view of a
direct particle interaction. It was also mentioned that the Lagrangian
form of quantum mechanics\footnote{R. P. Feynman, Rev. Mod. Phys.
{\bfseries 20}, 367 (1948), hereafter called \textbf{C}.} was useful
in this connection. The rules given in paper \textbf{II} were, in
fact, first deduced in this form of quantum mechanics. We shall
give this derivation here.
The advantage of a Lagrangian form of quantum mechanics is that
in a system with interacting parts it permits a separation of the
problem such that the motion of any part can be analyzed or solved
first, and the results of this solution may then be used in the
solution of the motion of the other parts. This separation is especially
useful in quantum electrodynamics which represents the interaction
of matter with the electromagnetic field. The electromagnetic field
is an especially simple system and its behavior can be analyzed
completely. What we shall show is that the net effect of the field
is a delayed interaction of the particles. It is possible to do
this easily only if it is not necessary at the same time to analyze
completely the motion of the particles. The only advantage in our
problems of the form of quantum mechanics in \textbf{C} is to permit
one to separate these aspects of the problem. There are a number
of disadvantages, however, such as a lack of familiarity, the apparent
(but not real) necessity for dealing with matter in non-relativistic
approximation, and at times a cumbersome mathematical notation and
method, as well as the fact that a great deal of useful information
that is known about operators cannot be directly applied.
It is also possible to separate the field and particle aspects of
a problem in a manner which uses operators and Hamiltonians in a
way that is much more familiar. One abandons the notation that the
order of action of operators depends on their written position on
the paper and substitutes some other convention (such that the order
of operators is that of the time to which they refer). The increase
in manipulative facility which accompanies this change in notation
makes it easier to represent and to analyze the formal problems
in electrodynamics. The method requires some discussion, however,
and will be described in a succeeding paper. In this paper we shall
give the derivations of the formulas of \textbf{II} by means of
the form of quantum mechanics given in \textbf{C}.
The problem of interaction of matter and field will be analyzed
by first solving for the behavior of the field in terms of the coordinates
of the matter, and finally discussing the behavior of the matter
(by matter is actually meant the electrons and positrons). That
is to say, we shall first eliminate the field variables from the
equations of motion of the electrons and then discuss the behavior
of the electrons. In this way all of the rules given in the paper
\textbf{II} will be derived.
Actually, the straightforward elimination of the field variables
will lead at first to an expression for the behavior of an arbitrary
number of Dirac electrons. Since the number of electrons might be
infinite, this can be used directly to find the behavior of the
electrons according to hole theory by imagining that nearly all
the negative energy states are occupied by electrons. But, at least
in the case of motion in a fixed potential, it has been shown that
this hole theory picture is equivalent to one in which a positron
is represented as an electron whose space-time trajectory has had
its time direction reversed. To show that this same picture may
be used in quantum electrodynamics when the potentials are not fixed,
a special argument is made based on a study of the relationship
of quantum electrodynamics to motion in a fixed potential. Finally,
it is pointed out that this relationship is quite general and might
be used for a general statement of the laws of quantum electrodynamics.
Charges obeying the Klein-Gordon equation can be analyzed by a special
formalism given in Appendix \ref{XRef-AppendixSection-227155027}.
A fifth parameter is used to specify the four-dimensional trajectory
so that the Lagrangian form of quantum mechanics can be used. Appendix
\ref{XRef-AppendixSection-227155120} discusses in more detail the
relation of real and virtual photon emission. An equation for the
propagation of a self-interacting electron is given in Appendix
\ref{XRef-AppendixSection-227155141}.
In the demonstration which follows we shall restrict ourselves temporarily
to cases in which the particle's motion is non-relativistic, but
the transition of the final formulas to the relativistic case is
direct, and the proof could have been kept relativistic throughout.
The transverse part of the electromagnetic field will be represented
as an assemblage of independent harmonic oscillators each interacting
with the particles, as suggested by Fermi.\footnote{E. Fermi, Rev.
Mod. Phys. {\bfseries 4}, 87 (1932).} We use the notation of Heitler.\footnote{W.
Heitler, {\itshape The Quantum Theory of Radiation}, second edition
(Oxford University Press, London, 1944).}
\section{Quantum electrodynamics in Lagrangian form}
\noindent The Hamiltonian for a set of non-relativistic particles
interacting with radiation is, classically, $H=H_{p}+H_{I}+H_{c}+H_{tr}$,
where $H_{p}+H_{I}=\sum _{n}\frac{1}{2}m_{n}^{-1}( p_{n}-e_{n}A^{tr}(
x_{n}) ) ^{2}$ is the Hamiltonian of the particles of mass $m_{n}$,
charge $e_{n}$, coordinate $x_{n}$, and momentum $p_{n}$, and their
interaction with the transverse part of the electromagnetic field.
This field can be expanded into plane waves
\begin{equation}
A^{t r}( x) =\left( 8\pi \right) ^{\frac{1}{2}}\sum _{K}\left[ e_{1}(
q_{K}^{\left( 1\right) }\cos ( K\cdot x) +q_{K}^{\left( 3\right)
}\sin ( K\cdot x) ) +e_{2}( q_{K}^{\left( 2\right) }\cos ( K\cdot
x) +q_{K}^{\left( 4\right) }\sin ( K\cdot x) ) \right] %
\label{XRef-Equation-227163828}
\end{equation}
where $e_{1}$ and $e_{2}$ are two orthogonal polarization vectors
at right angles to the propagation vector $K$, magnitude $k$. The
sum over $K$ means, if normalized to unit volume, $\frac{1}{2}\int
d^{3}K/8\pi ^{3}$, and each $q_{K}^{(r)}$ can be considered as
the coordinate of a harmonic oscillator. (The factor $\frac{1}{2}$
arises because the mode corresponding to $K$ and that corresponding to $-K$ are the same.)
The Hamiltonian of the transverse field represented as oscillators
is
\[
H_{tr}=\frac{1}{2}\sum \limits_{K}^{ } \sum \limits_{r=1}^{4}\left(
\left( p_{K}^{\left( r\right) }\right) ^{2}+k^{2}( q_{K}^{\left(
r\right) }) ^{2}\right)
\]
where $p_{K}^{(r)}$ is the momentum conjugate to $q_{K}^{(r)}$.
The longitudinal part of the field has been replaced by the Coulomb
interaction\footnote{The term in the sum for $n=m$ is obviously
infinite but must be included for relativistic invariance. Our problem
here is to re-express the usual (and divergent) form of electrodynamics
in the form given in \textbf{II}. Modifications for dealing with
the divergences are discussed in \textbf{II} and we shall not discuss
them further here.} ,
\[
H_{c}=\frac{1}{2}\sum _{n}\sum _{m}e_{n}e_{m}/r_{nm}
\]
where $r_{nm}^{2}=(x_{n}-x_{m})^{2}$. As is well known, when this
Hamiltonian is quantized one arrives at the usual theory of quantum
electrodynamics. To express these laws of quantum electrodynamics
one can equally well use the Lagrangian form of quantum mechanics
to describe this set of oscillators and particles. The classical
Lagrangian equivalent to this Hamiltonian is $L=L_{p}+L_{I}+L_{
c}+L_{t r}$ where
\begin{equation}
\begin{array}{rl}
L_{p} & =\frac{1}{2}\sum _{n}m_{n}\dot{x}_{n}^{2} \\
L_{I} & =\sum _{n}e_{n}\dot{x}_{n}\cdot A^{t r}\left( x_{n}\right)
\\
L_{tr} & =\frac{1}{2}\sum _{K}\sum _{r}\left( \left( \dot{q}_{K}^{\left(
r\right) }\right) ^{2}-k^{2}\left( q_{K}^{\left( r\right) }\right)
^{2}\right) \\
L_{c} & =-\frac{1}{2}\sum _{n}\sum _{m}e_{n}e_{m}/r_{nm}.
\end{array}%
\label{XRef-Equation-227163810}
\end{equation}
When this Lagrangian is used in the Lagrangian forms of quantum
mechanics of \textbf{C}, what it leads to is, of course, mathematically
equivalent to the result of using the Hamiltonian $H$ in the ordinary
way, and is therefore equivalent to the more usual forms of quantum
electrodynamics (at least for non-relativistic particles). We may,
therefore, proceed by using this Lagrangian form of quantum electrodynamics,
with the assurance that the results obtained must agree with those
obtained from the more usual Hamiltonian form.
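One may verify directly that the Lagrangian (\ref{XRef-Equation-227163810})
is the Legendre transform of the Hamiltonian $H$ quoted above. The
momentum conjugate to $x_{n}$ is $p_{n}=\partial L/\partial \dot{x}_{n}=m_{n}\dot{x}_{n}+e_{n}A^{t r}( x_{n}) $,
while that conjugate to $q_{K}^{(r)}$ is simply $p_{K}^{(r)}=\dot{q}_{K}^{(r)}$,
so that
\[
H=\sum \limits_{n}p_{n}\cdot \dot{x}_{n}+\sum \limits_{K}\sum \limits_{r}p_{K}^{\left( r\right) }\dot{q}_{K}^{\left( r\right) }-L=\sum \limits_{n}\frac{1}{2}m_{n}^{-1}\left( p_{n}-e_{n}A^{t r}( x_{n}) \right) ^{2}+H_{c}+H_{tr},
\]
which is $H_{p}+H_{I}+H_{c}+H_{tr}$ as before.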
The Lagrangian enters through the statement that the functional
which carries the system from one state to another is $\exp ( i
S) $ where
\begin{equation}
S=\int L dt =S_{p}+S_{I}+S_{c}+S_{tr}.
\end{equation}
The time integrals must be written as Riemann sums with some care;
for example,
\begin{equation}
S_{I}=\sum _{n}\int e_{n}\dot{x}_{n}( t) \cdot A^{t r}( x_{n}( t)
) dt
\end{equation}
becomes according to \textbf{C}, Eq. (19)
\begin{equation}
S_{I}=\sum _{n}\sum _{i}\frac{1}{2}e_{n}( x_{n,i+1}-x_{n,i}) \cdot
\left( A^{t r}( x_{n,i+1}) +A^{t r}( x_{n,i}) \right)
\end{equation}
so that the velocity $\dot{x}_{n,i}$ which multiplies $A^{t r}( x_{n,i})
$ is
\begin{equation}
\dot{x}_{n,i}=\frac{1}{2}\epsilon ^{-1}( x_{n,i+1}-x_{n,i}) +\frac{1}{2}\epsilon
^{-1}( x_{n,i}-x_{n,i-1}) .%
\label{XRef-Equation-22716409}
\end{equation}
In the Lagrangian form it is possible to eliminate the transverse
oscillators as is discussed in \textbf{C}, Section 13. One must
specify, however, the initial and final state of all oscillators.
We shall first choose the special, simple case that all oscillators
are in their ground states initially and finally, so that all photons
are virtual. Later we do the more general case in which real quanta
are present initially or finally. We ask, then, for the amplitude
for finding no quanta present and the particles in state $\chi _{t^{{\prime\prime}}}$,
at time $t^{{\prime\prime}}$, if at time $t^{\prime }$ the particles
were in state $\psi _{t^{\prime }}$ and no quanta were present.
The method of eliminating field oscillators is described in Section
13 of \textbf{C}. We shall simply carry out the elimination here
using the notation and equations of \textbf{C}. To do this, for
simplicity, we first consider in the next section the case of a
particle or a system of particles interacting with a single oscillator,
rather than the entire assemblage of the electromagnetic field.
\section{Forced harmonic oscillator}
\noindent We consider a harmonic oscillator, coordinate $q$, Lagrangian
$L=\frac{1}{2}(\overset{.}{q}^{2}-\omega ^{2}q^{2})$ interacting
with a particle or system of particles, action $S_{p}$, through
a term in the Lagrangian $q( t) \gamma ( t) $ where $\gamma ( t)
$ is a function of the coordinates (symbolized as $x$) of the particle.
The precise form of $\gamma ( t) $ for each oscillator of the electromagnetic
field is given in the next section. We ask for the amplitude that
at some time $t^{{\prime\prime}}$ the particles are in state $\chi
_{t^{{\prime\prime}}}$, and the oscillator is in, say, an eigenstate
$m$ of energy $\omega ( m+\frac{1}{2}) $ (units are chosen such
that $\hbar =c=1$) when it is given that at a previous time $t^{\prime
}$ the particles were in state $\psi _{t^{\prime }}$ and the oscillator
in $n$. The amplitude for this is the transition amplitude [see
\textbf{C}, Eq. (61)]
\begin{multline}
\left\langle \chi _{t^{{\prime\prime}}}\varphi _{m}\left| 1\right|
\psi _{t^{\prime }}\varphi _{n}\right\rangle _{S_{p}+S_{0}+S_{I}}=\int
\int \chi _{t^{{\prime\prime}}}^{*}( x_{t^{{\prime\prime}}}) \varphi
_{m}^{*}( q_{t^{{\prime\prime}}}) \exp i( S_{p}+S_{0}+S_{I}) \\
\cdot \varphi _{n}( q_{t^{\prime }}) \psi _{t^{\prime }}( x_{t^{\prime
}}) dx_{t^{{\prime\prime}}}dx_{t^{\prime }}dq_{t^{{\prime\prime}}}dq_{t^{\prime
}}{\mathcal D} x( t) {\mathcal D} q( t)
\end{multline}
where $x$ represents the variables describing the particle, $S_{p}$
is the action calculated classically for the particles for a given
path going from coordinate $x_{t^{\prime }}$ at $t^{\prime }$ to
$x_{t^{{\prime\prime}}}$ at $t^{{\prime\prime}}$, $S_{0}$ is the
action $\int \frac{1}{2}(\overset{.}{q}^{2}-\omega ^{2}q^{2})dt$
for any path of the oscillator going from $q_{t^{\prime }}$ at $t^{\prime
}$ to $ q_{t^{{\prime\prime}}}$ at $t^{{\prime\prime}}$, while
\begin{equation}
S_{I}=\int q( t) \gamma ( t) dt,
\end{equation}
the action of interaction, is a functional of both $q( t) $ and
$x( t) $, the paths of oscillator and particles. The symbols ${\mathcal
D} x( t) $ and ${\mathcal D} q( t) $ represent a summation over
all possible paths of particles and oscillator which go between
the given end points in the sense defined in \textbf{C}, Eq. (9).
(That is, assuming time to proceed in infinitesimal steps, $\epsilon
$, an integral over all values of the coordinates $x$ and $q$ corresponding
to each instant in time, suitably normalized.)
The problem may be broken in two. The result can be written as an
integral over all paths of the particles only, of $(\exp i S_{p})\cdot
G_{mn}$ :
\begin{equation}
\left\langle \chi _{t^{{\prime\prime}}}\varphi _{m}\left| 1\right|
\psi _{t^{\prime }}\varphi _{n}\right\rangle _{S_{p}+S_{0}+S_{I}}=
\left\langle \chi _{t^{{\prime\prime}}}\left| G_{mn}\right| \psi
_{t^{\prime }}\right\rangle _{S_{p}}
\end{equation}
where $G_{mn}$ is a functional of the path of the particles alone
(since it depends on $\gamma ( t) $) given by
\begin{multline}
G_{mn }=\left\langle \varphi _{m}\left| \exp i \int q( t) \gamma
( t) dt\right| \varphi _{n}\right\rangle _{S_{0}}=\int \varphi
_{m}^{*}( q_{t^{{\prime\prime}}}) \exp i \left( S_{0}+S_{I}\right)
\varphi _{n}( q_{t^{\prime }}) dq_{t^{\prime }}dq_{t^{{\prime\prime}}}{\mathcal
D} q( t) \\
=\int \varphi _{m}^{*}( q_{j}) \exp i \epsilon \sum \limits_{i=0}^{j-1}\left[
\frac{1}{2}\epsilon ^{-2}( q_{i+1}-q_{i}) ^{2}-\frac{1}{2}\omega
^{2}q_{i}^{2}+q_{i}\gamma _{i}\right] \\
\cdot \varphi _{n}( q_{0}) dq_{0} a^{-1}dq_{1} a^{-1}dq_{2} \cdots
a^{-1}dq_{j}
\end{multline}
where we have written the $ {\mathcal D} q( t) $ out explicitly
(and have set $a=(2\pi i \epsilon )^{\frac{1}{2}}$, $t^{{\prime\prime}}-t^{\prime
}=j \epsilon $, $q_{t^{\prime }}=q_{0}$, $q_{t^{{\prime\prime}}}=q_{j}$).
The last form can be written as
\begin{equation}
G_{mn }= \int \varphi _{m}^{*}( q_{j}) k( q_{j},t^{{\prime\prime}};q_{0},t^{\prime
}) \varphi _{n}( q_{0}) dq_{0} dq_{j}%
\label{XRef-Equation-227163344}
\end{equation}
where $k( q_{j},t^{{\prime\prime}};q_{0},t^{\prime }) $ is the kernel
[as in \textbf{I}, Eq. (2)] for a forced harmonic oscillator giving
the amplitude for arrival at $q_{j}$ at time $t^{{\prime\prime}}$
if at time $t^{\prime }$ it was known to be at $q_{0}$. According
to \textbf{C} it is given by
\begin{equation}
k( q_{j},t^{{\prime\prime}};q_{0},t^{\prime }) =\left( 2 \pi i
\omega ^{-1}\sin \omega ( t^{{\prime\prime}}-t^{\prime }) \right)
^{-\frac{1}{2}}\exp i Q( q_{j},t^{{\prime\prime}};q_{0},t^{\prime
}) %
\label{XRef-Equation-227163312}
\end{equation}
where $Q( q_{j},t^{{\prime\prime}};q_{0},t^{\prime }) $ is the action
calculated along the classical path between the end points $q_{j},t^{{\prime\prime}};q_{0},t^{\prime
}$, and is given explicitly in \textbf{C}.\footnote{That (12) is
correct, at least insofar as it depends on $q_{0}$, can be seen
directly as follows. Let $\overset{\_}{q}( t) $ be the classical
path which satisfies the boundary condition $\overset{\_}{q}( t^{\prime
}) =q_{0}$, $\overset{\_}{q}( t^{{\prime\prime}}) =q_{j}$. Then
in the integral defining $k$ replace each of the variables $q_{i}$
by $q_{i}=\overset{\_}{q}_{i}+y_{i}$, ($\overset{\_}{q}_{i}=\overset{\_}{q}(
t_{i}) $) that is, use the displacement $y_{i}$ from the classical
path $\overset{\_}{q}_{i}$ as the coordinate rather than the absolute
position. With the substitution $q_{i}=\overset{\_}{q}_{i}+y_{i}$
in the action $ S_{0}+S_{I}=\int (\frac{1}{2}\overset{.}{q}^{2}-\frac{1}{2}\omega
^{2}q^{2}+\gamma q)dt=$$\int (\frac{1}{2}\overset{.}{\overset{\_}{q}}^{2}-\frac{1}{2}\omega
^{2}\overset{\_}{q}^{2}+\gamma \overset{\_}{q})dt+\int (\frac{1}{2}\overset{.}{y}^{2}-\frac{1}{2}\omega
^{2}y^{2})dt$ the terms linear in $y$ drop out by integrations by
parts using the equation of motion $\overset{..}{\overset{\_}{q}}=-\omega ^{2}\overset{\_}{q}+\gamma
( t) $ for the classical path, and the boundary conditions $y( t^{\prime
}) =y( t^{{\prime\prime}}) =0$. That this should occur should occasion
no surprise, for the action functional is an extremum at $q( t)
=\overset{\_}{q}( t) $ so that it will only depend to second order
in the displacements $y$ from this extremal orbit $\overset{\_}{q}(
t) $. Further, since the action functional is quadratic to begin
with, it cannot depend on $y$ more than quadratically. Hence $S_{0}+S_{I}=Q+\int
(\frac{1}{2}\overset{.}{y}^{2}-\frac{1}{2}\omega ^{2}y^{2})dt$ so
that since $dq_{i}= dy_{i}$, $k( q_{j},t^{{\prime\prime}};q_{0},t^{\prime
}) = \exp ( i Q) \int \exp ( i \int \frac{1}{2}(\overset{.}{y}^{2}-\omega
^{2}y^{2})dt) {\mathcal D} y( t) $. The factor following the $ \exp
iQ$ is the amplitude for a free oscillator to proceed from $y=0$ at
$t=t^{\prime }$ to $y=0$ at $t=t^{{\prime\prime}}$ and
does not therefore depend on $q_{0}$, $q_{j}$, or $\gamma ( t) $,
being a function only of $t^{{\prime\prime}}-t^{\prime }$. [That
it is actually $(2\pi i\omega ^{-1}\sin \omega ( t^{{\prime\prime}}-t^{\prime
}) )^{-\frac{1}{2}}$ can be demonstrated either by direct integration
of the $y$ variables or by using some normalizing property of the
kernels $k$, for example that $G_{0 0}$ for the case $\gamma =0$
must equal unity.] The expression for $Q$ given in \textbf{C} on
page 386 is in error; the quantities $q_{0}$ and $q_{j}$ should
be interchanged.} It is
{\rmfamily \begin{multline}
Q=\frac{\omega }{2 \sin \omega ( t^{{\prime\prime}}-t^{\prime })
}\left[ \left( q_{j}^{2}+q_{0}^{2}\right) \cos \omega ( t^{{\prime\prime}}-t^{\prime
}) -2q_{j}q_{0}+\frac{2q_{j}}{\omega }\int _{t^{\prime }}^{t^{{\prime\prime}}}\gamma
( t) \sin \omega ( t-t^{\prime }) dt\right. \\
\left. +\frac{2q_{0}}{\omega }\int _{t^{\prime }}^{t^{{\prime\prime}}}\gamma
( t) \sin \omega ( t^{{\prime\prime}}-t) dt-\frac{2}{\omega ^{2}}\int
_{t^{\prime }}^{t^{{\prime\prime}}}\int _{t^{\prime }}^{t}\gamma ( t) \gamma ( s) \sin \omega ( t^{{\prime\prime}}-t)
\sin \omega ( s-t^{\prime }) ds dt\right] %
\label{XRef-Equation-227163331}
\end{multline}}
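The $\gamma =0$ part of (\ref{XRef-Equation-227163331}) may be checked by integrating the Lagrangian numerically along the classical path between the end points; the sketch below (ours; all parameter values are arbitrary illustrative choices) reproduces the closed form.

```python
import math

# Check of the gamma = 0 part of the classical action Q: integrate the
# Lagrangian along the classical path and compare with the closed form.
# All parameter values below are arbitrary illustrative choices.
omega = 1.3            # oscillator frequency (units hbar = c = 1)
q0, qj = 0.7, -0.4     # end-point coordinates
T = 2.1                # t'' - t', chosen so that sin(omega*T) != 0

# Classical free-oscillator path with q(0) = q0 and q(T) = qj.
def q(t):
    return (q0 * math.sin(omega * (T - t)) + qj * math.sin(omega * t)) / math.sin(omega * T)

def qdot(t):
    return omega * (qj * math.cos(omega * t) - q0 * math.cos(omega * (T - t))) / math.sin(omega * T)

def lagrangian(t):
    return 0.5 * (qdot(t) ** 2 - omega ** 2 * q(t) ** 2)

# Simpson's rule for the action S0 = integral of L dt over [0, T].
N = 2000               # even number of subintervals
h = T / N
S = lagrangian(0.0) + lagrangian(T)
for i in range(1, N):
    S += (4 if i % 2 else 2) * lagrangian(i * h)
S *= h / 3

# The gamma = 0 part of the closed-form action Q.
Q = omega / (2 * math.sin(omega * T)) * ((qj ** 2 + q0 ** 2) * math.cos(omega * T) - 2 * qj * q0)

print(S, Q)            # the two values agree to numerical accuracy
```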
The solution of the motion of the oscillator can now be completed
by substituting (\ref{XRef-Equation-227163312}) and (\ref{XRef-Equation-227163331})
into (\ref{XRef-Equation-227163344}) and performing the integrals.
The simplest case is for $m=n=0$, for which case\footnote{It is
most convenient to define the state $\varphi _{n}$ with the phase
factor $\exp [ -i\omega ( n+\frac{1}{2}) t^{\prime }] $ and
the final state with the factor $\exp [ -i\omega ( m+\frac{1}{2})
t^{{\prime\prime}}] $ so that the results will not depend on the
particular times $t^{\prime }$, $t^{{\prime\prime}}$ chosen. }
{\rmfamily \[
\varphi _{0}\left( q_{0}\right) =\left( \omega /\pi \right) ^{\frac{1}{4}}\exp
\left( -\frac{1}{2}\omega q_{0}^{2}\right) \exp \left( -\frac{1}{2}i
\omega t^{\prime }\right)
\]}
so that the integrals on $q_{0}$, $q_{j}$ are just Gaussian integrals.
There results
{\rmfamily \[
G_{00}= \exp ( -\frac{1}{2 \omega }\int _{t^{\prime }}^{t^{{\prime\prime}}}%
\int _{t^{\prime }}^{t}\exp ( -i \omega ( t-s) ) \gamma ( t) \gamma
( s) dt ds)
\]}
a result of fundamental importance in the succeeding developments.
By replacing $t-s$ by its absolute value $|t-s|$ we may integrate
both variables over the entire range and divide by 2. We will henceforth
make the results more general by extending the limits on the integrals
from $-\infty $ to $+\infty $. Thus if one wishes to study the effect
on a particle of interaction with an oscillator for just the period
$t^{\prime }$ to $t^{{\prime\prime}}$ one may use
{\rmfamily \begin{equation}
G_{00}=\exp ( -\frac{1}{4 \omega }\int _{-\infty }^{\infty }\int
_{-\infty }^{\infty } \exp ( -i \omega \left| t-s\right| ) \gamma
( t) \gamma ( s) dt ds) %
\label{XRef-Equation-22814921}
\end{equation}}
imagining in this case that the interaction $\gamma ( t) $ is zero
outside these limits. We defer to a later section the discussion
of other values of $m$, $n$.
Since $G_{00}$ is simply an exponential, we can write it as $\exp
( i I) $, consider that the complete ``action'' for the system of
particles is $S=S_{p}+I$, and compute transition elements
with this ``action'' instead of $S_{p}$ (see \textbf{C}, Sec. 12).
The functional $I$, which is given by
{\rmfamily \begin{equation}
I=\frac{1}{4}i \omega ^{-1}\int \int \exp ( -i \omega \left| t-s\right|
) \gamma ( s) \gamma ( t) ds dt%
\label{XRef-Equation-227163755}
\end{equation}}
is complex, however; we shall speak of it as the complex action.
It describes the fact that the system at one time can affect itself
at a different time by means of a temporary storage of energy in
the oscillator. When there are several independent oscillators with
different interactions, the effect, if they are all in the lowest
state at $t^{\prime }$ and $t^{{\prime\prime}}$, is the product
of their separate $G_{00}$ contributions. Thus the complex action
is additive, being the sum of contributions like (\ref{XRef-Equation-227163755})
for each of the several oscillators.
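The physical content of the complex action may be illustrated numerically. Since $G_{00}=\exp ( i I) $ is the amplitude for the oscillator to remain in its ground state, it follows from (\ref{XRef-Equation-227163755}) that $\left| G_{00}\right| ^{2}=\exp ( -\left| \tilde{\gamma }( \omega ) \right| ^{2}/2\omega ) $, where $\tilde{\gamma }( \omega ) =\int \gamma ( t) e^{i \omega t}dt$, the familiar persistence probability of a forced oscillator. The sketch below (ours; the Gaussian pulse $\gamma ( t) =e^{-t^{2}}$ and all parameter values are illustrative choices) verifies this:

```python
import math, cmath

# Evaluate the complex action I (the double-integral formula of the
# text) for the illustrative pulse gamma(t) = exp(-t^2), and check
# |G00|^2 = exp(-|g(omega)|^2 / (2 omega)), g being the transform of gamma.
omega = 1.5
L, N = 6.0, 400                # integration range [-L, L], midpoint steps
h = 2 * L / N

def gamma(t):
    return math.exp(-t * t)

# Double integral over t and s by the midpoint rule.
double = 0j
for i in range(N):
    t = -L + (i + 0.5) * h
    for j in range(N):
        s = -L + (j + 0.5) * h
        double += cmath.exp(-1j * omega * abs(t - s)) * gamma(t) * gamma(s)
double *= h * h

I = 0.25j / omega * double     # the complex action
G00 = cmath.exp(1j * I)

# Exact transform of the Gaussian: g(omega) = sqrt(pi) * exp(-omega^2/4).
g = math.sqrt(math.pi) * math.exp(-omega ** 2 / 4)
P = math.exp(-g ** 2 / (2 * omega))

print(abs(G00) ** 2, P)        # ground-state persistence probability, two ways
```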
\section{Virtual transitions in the electromagnetic field}
\noindent We can now apply these results to eliminate the transverse
field oscillators of the Lagrangian (\ref{XRef-Equation-227163810}).
At first we can limit ourselves to the case of purely virtual transitions
in the electromagnetic field, so that there is no photon in the
field at $t^{\prime }$ and $t^{{\prime\prime}}$. That is, all of
the field oscillators are making transitions from ground state to
ground state.
The $\gamma _{K}^{(r)}$ corresponding to each oscillator $q_{K}^{(r)}$
is found from the interaction term $L_{I}$ of Eq. (\ref{XRef-Equation-227163810}),
substituting the value of $A^{t r}( x) $ given in (\ref{XRef-Equation-227163828}).
There results, for example,
\begin{equation}
\begin{array}{cc}
\gamma _{K}^{\left( 1\right) } & =\left( 8 \pi \right) ^{\frac{1}{2}}\sum
\limits_{n}e_{n} \left( e_{1}\cdot \dot{x}_{n}\right) \cos ( K\cdot
x_{n}) \\
\gamma _{K}^{\left( 3\right) } & =\left( 8 \pi \right) ^{\frac{1}{2}}\sum
\limits_{n}e_{n} \left( e_{1}\cdot \dot{x}_{n}\right) \sin ( K\cdot
x_{n})
\end{array}%
\label{XRef-Equation-228141022}
\end{equation}
the corresponding results for $\gamma _{K}^{(2)}$ and $\gamma _{K}^{(4)}$
replace $e_{1}$ by $e_{2}$.
The complex action resulting from the oscillator of coordinate $q_{K}^{(1)}$
is therefore
{\rmfamily \[
I_{K}^{\left( 1\right) }= \frac{8\pi i}{4 k}\sum \limits_{n}\sum
\limits_{m}\int \int e_{n}e_{m} \exp \left( -i k \left| t-s\right|
\right) \times \left( e_{1}\cdot \dot{x}_{n}( t) \right) \left( e_{1}\cdot
\dot{x}_{m}( s) \right) \cos ( K\cdot x_{n}( t) ) \cos ( K\cdot x_{m}(
s) ) ds dt
\]}
The term $I_{K}^{(3)}$ exchanges the cosines for sines, so in the
sum $I_{K}^{(1)}+I_{K}^{(3)}$ the product of the two cosines,
$\cos A\cos B$, is replaced by $(\cos A\cos B+\sin A
\sin B)$ or $\cos ( A-B) $. The terms $I_{K}^{(2)}+I_{K}^{(4)}$
give the same result with $e_{2}$ replacing $e_{1}$. The sum $(e_{1}
\cdot V)(e_{1} \cdot V^{\prime }) +(e_{2} \cdot V)(e_{2} \cdot V^{\prime
})$ is $V\cdot V^{\prime }-k^{-2}( K\cdot V) (K\cdot V^{\prime })$
since it is the sum of the products of vector components in two
orthogonal directions, so that if we add the product in the third
direction (that of \textbf{K}) we construct the complete scalar
product. Summing over all \textbf{K} then, since $\sum _{K}=\frac{1}{2}\int
d^{3}K/8\pi ^{3}$ we find for the total complex action of all of
the transverse oscillators,
{\rmfamily \begin{multline}
I_{t r}= i\sum \limits_{n}\sum \limits_{m}\int _{t^{\prime }}^{t^{{\prime\prime}}}dt\int
_{t^{\prime }}^{t^{{\prime\prime}}} ds\int e_{n}e_{m}\exp ( -i k
\left| t-s\right| ) \times \left[ \dot{x}_{n}( t) \cdot \dot{x}_{m}(
s) -k^{-2}( K\cdot \dot{x}_{n}( t) ) \left( K\cdot \dot{x}_{m}( s) \right)
\right] \\
\cdot \cos ( K\cdot \left( x_{n}( t) -x_{m}( s) \right)
) \frac{d^{3}K}{8 \pi ^{2}k}.%
\label{XRef-Equation-3214938}
\end{multline}}
This is to be added to $S_{p}+S_{c}$ to obtain the complete action
of the system with the oscillators removed.
The term in $(K\cdot \dot{x}_{n}( t) ) (K\cdot \dot{x}_{m}( s) )$ can
be simplified by integration by parts with respect to $t$ and with
respect to $s$ [note that $\exp ( -i k|t-s|) $ has a discontinuous
slope at $t=s$, or break the integration up into two regions]. One
finds
\begin{equation}
I_{t r}=R-I_{c}+I_{\textup{transient}}
\end{equation}
where
\begin{multline}
R=-i\sum \limits_{n}\sum \limits_{m}\int _{t^{\prime }}^{t^{{\prime\prime}}}dt\int
_{t^{\prime }}^{t^{{\prime\prime}}}ds\int e_{n}e_{m} \times \exp
( -i k \left| t-s\right| ) \left( 1-\dot{x}_{n}\left( t\right) \cdot
\dot{x}_{m}\left( s\right) \right) \\
\cdot \cos K\cdot \left( x_{n}\left( t\right) -x_{m}( s) \right)
d^{3}K/8\pi ^{2}k%
%
\label{XRef-Equation-227163856}
\end{multline}
and
\begin{equation}
I_{c}=-\sum \limits_{n}\sum \limits_{m}\int _{t^{\prime }}^{t^{{\prime\prime}}}dt\int
e_{n}e_{m}\cos K\cdot \left( x_{n}\left( t\right) -x_{m}( t) \right)
d^{3}K/4\pi ^{2}k^{2}
\end{equation}
comes from the discontinuity in slope of $\exp ( -i k|t-s|) $ at
$t=s$. Since
\[
\int \cos \left( K\cdot R\right) d^{3}K/4\pi ^{2}k^{2}=\int _{0}^{\infty
}\left( kr\right) ^{-1}\sin \left( kr\right) dk/\pi =\left( 2r\right)
^{-1}
\]
this term $I_{c}$ just cancels the Coulomb interaction term $S_{c}=\int
L_{c}dt$. The term
\begin{multline}
I_{\textup{transient}}=-\sum \limits_{n}\sum \limits_{m}e_{n}e_{m}\int
\frac{d^{3}K}{4 \pi ^{2}k^{2}}\times \left\{ \int _{t^{\prime }}^{t^{{\prime\prime}}}\left[
\exp ( -i k \left( t^{{\prime\prime}}-t\right) ) \cos K\cdot \left(
x_{n}\left( t^{{\prime\prime}}\right) -x_{m}( t) \right) \right.
\right. \\
\left. +\exp \left( -i k \left( t-t^{\prime }\right) \right) \cos
K\cdot \left( x_{n}\left( t\right) -x_{m}\left( t^{\prime }\right)
\right) \right] dt+\\
\left. \left( 2k\right) ^{-1}i[ \cos K\cdot \left( x_{n}( t^{{\prime\prime}})
-x_{m}( t^{{\prime\prime}}) \right) +\cos K\cdot \left( x_{n}(
t^{\prime }) -x_{m}( t^{\prime }) \right) -2 \exp ( -i k \left(
t^{{\prime\prime}}-t^{\prime }\right) ) \cos K\cdot \left( x_{n}(
t^{\prime }) -x_{m}( t^{{\prime\prime}}) \right) ] \right\}
\end{multline}
is one which comes from the limits of integration at $t^{\prime
}$ and $t^{{\prime\prime}}$, and involves the coordinates of the
particle at either one of these times or the other. If $t^{\prime
}$ and $t^{{\prime\prime}}$ are considered to be exceedingly far
in the past and future, there is no correlation to be expected between
these temporally distant coordinates and the present ones, so the
effects of $I_{\textup{transient}}$ will cancel out quantum mechanically
by interference. This transient was produced by the sudden turning
on of the interaction of field and particles at $t^{\prime }$ and
its sudden removal at $t^{{\prime\prime}}$. Alternatively we can
imagine the charges to be turned on after $t^{\prime }$ adiabatically
and turned off slowly before $t^{{\prime\prime}}$ (in this case,
in the term $L_{c}$, the charges should also be considered as varying
with time). In this case, in the limit, $I_{\textup{transient}}$
is zero.\footnote{One can obtain the final result, that the total
interaction is just $R$, in a formal manner starting from the Hamiltonian
from which the longitudinal oscillators have not yet been eliminated.
There are for each $K$ and $\cos $ or $\sin $, four oscillators
$q_{\mu K}$ corresponding to the three components of the vector
potential ($\mu =1,2,3$) and the scalar potential ($\mu =4$). It
must then be assumed that the wave function of the initial and
final state of the $K$ oscillators is the function $(k/\pi )\exp
[ -\frac{1}{2}k( q_{1 K}^{2}+q_{2 K}^{2}+q_{3 K}^{2}-q_{4 K}^{2})
] $. The wave function suggested here has only formal significance,
of course, because the dependence on $q_{4 K}$ is not square integrable,
and cannot be normalized. If each oscillator were assumed actually
in the ground state, the sign of the $q_{4 K}$ term would be changed
to positive, and the sign of the frequency in the contribution of
these oscillators would be reversed (they would have negative energy).}
Hereafter we shall drop the transient term and consider the range
of integration of $t$ to be from $-\infty $ to $+\infty $, imagining,
if one needs a definition, that the charges vary with time and vanish
in the direction of either limit.
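The radial integral used above in cancelling the Coulomb term, $\pi ^{-1}\int _{0}^{\infty }( kr) ^{-1}\sin ( kr) dk=( 2r) ^{-1}$, may also be checked numerically. The sketch below (ours; the cutoff and the convergence factor $e^{-\epsilon k}$ are regularization choices) relies on $\int _{0}^{\infty }e^{-\epsilon k}k^{-1}\sin ( kr) dk=\arctan ( r/\epsilon ) \rightarrow \pi /2$ as $\epsilon \rightarrow 0$:

```python
import math

# Numerical check: (1/pi) * integral of sin(k*r)/(k*r) dk over k in
# (0, inf) equals 1/(2r), using a convergence factor exp(-eps*k).
# The values of r, eps, and the cutoff are illustrative choices.
r, eps = 2.0, 1e-3
K, N = 4000.0, 800000          # upper cutoff and number of midpoint steps
h = K / N
total = 0.0
for i in range(N):
    k = (i + 0.5) * h
    total += math.exp(-eps * k) * math.sin(k * r) / (k * r)
total *= h / math.pi

print(total, 1 / (2 * r))      # both approximately 0.25
```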
To simplify $R$ we need the integral
\begin{equation}
J=\int \exp ( -i k \left| t\right| ) \cos ( K\cdot R) d^{3}K/8\pi
^{2}k=\int _{0}^{\infty }\exp ( -i k \left| t\right| ) \sin ( kr)
dk/2\pi r
\end{equation}
where $r$ is the length of the vector $R$. Now
\[
\int _{0}^{\infty }\exp ( -i k x) dk=\operatorname*{\lim }\limits_{\epsilon
\rightarrow 0}\left( -i( x-i \epsilon ) ^{-1}\right) =-i x^{-1}+\pi
\delta ( x) =\pi \delta _{+}( x)
\]
where the equation serves to define $\delta _{+}( x) $ [as in \textbf{II},
Eq. (3)]. Hence, expanding $\sin ( kr) $ into exponentials and applying
this formula to each, we find
\begin{multline}
J=-\left( 4 \pi r\right) ^{-1} \left( \left( \left| t\right| -r\right)
^{-1}-\left( \left| t\right| +r\right) ^{-1}\right) +\left( 4 i
r\right) ^{-1}\left( \delta ( \left| t\right| -r) -\delta ( \left|
t\right| +r) \right) \\
=-\left( 2 \pi \right) ^{-1}\left( t^{2}-r^{2}\right) ^{-1}+\left(
2 i\right) ^{-1} \delta ( t^{2}-r^{2}) \\
=-\frac{1}{2}i \delta _{+}( t^{2}-r^{2})
\end{multline}
where we have used the fact that
\[
\delta ( t^{2}-r^{2}) =\left( 2r\right) ^{-1}\left( \delta ( \left|
t\right| -r) +\delta ( \left| t\right| +r) \right)
\]
and that $\delta ( |t|+r) =0$ since both $| t|$ and $r$ are necessarily
positive.
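The limiting formula defining $\delta _{+}$ may also be checked directly: with a convergence factor $e^{-\epsilon k}$ the integral is elementary, and its real and imaginary parts approach $\pi \delta ( x) $ and $-x^{-1}$ as $\epsilon \rightarrow 0$. A short numerical sketch (a modern aside, not part of the original argument; the grid and cutoff values are illustrative choices):

```python
import numpy as np

# With a convergence factor exp(-eps*k), the integral over k from 0 to
# infinity of exp(-i*k*x) dk equals -i/(x - i*eps) = (eps - i*x)/(x^2 + eps^2);
# its real part tends to pi*delta(x) and its imaginary part to -1/x.
def regularized_integral(x, eps, k_max=2000.0, n=2_000_000):
    k = np.linspace(0.0, k_max, n)
    f = np.exp(-(eps + 1j * x) * k)
    dk = k[1] - k[0]
    return np.sum(0.5 * (f[1:] + f[:-1])) * dk  # trapezoid rule

x, eps = 0.7, 0.05
closed_form = -1j / (x - 1j * eps)
print(abs(regularized_integral(x, eps) - closed_form))  # close to zero
```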
Substitution of these results into (\ref{XRef-Equation-227163856})
gives finally,
\begin{equation}
R=-\frac{1}{2}\sum \limits_{n}\sum \limits_{m}\int _{-\infty }^{+\infty
}\int _{-\infty }^{+\infty }e_{n}e_{m}\left( 1-x^{.}_{n}( t) \cdot
x^{.}_{m}( s) \right) \times \delta _{+}( \left( t-s\right) ^{2}-\left(
x_{n}( t) -x_{m}( s) \right) ^{2}) dtds.%
\label{XRef-Equation-227163911}
\end{equation}
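It may be remarked, anticipating the four-vector notation used below (where $x_{4}( t) =t$, $\overset{.}{x}_{4}=1$, and $a_{\mu }b_{\mu }=a_{4}b_{4}-a_{1}b_{1}-a_{2}b_{2}-a_{3}b_{3}$), that the factor appearing in the integrand of (\ref{XRef-Equation-227163911}) may be written
\[
e_{n}e_{m}\left( 1-x^{.}_{n}( t) \cdot x^{.}_{m}( s) \right) =e_{n}e_{m}\overset{.}{x}_{\mu }^{\left( n\right) }( t) \overset{.}{x}_{\mu }^{\left( m\right) }( s) ,
\]
a form which will be convenient in the discussion of positron theory.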
The total complex action of the system is then\footnote{The classical
action for this problem is just $S p+R^{\prime }$ where $R^{\prime
}$ is the real part of the expression (24). In view of the generalization
of the Lagrangian formulation of quantum mechanics suggested in
Section 12 of \textbf{C}, one might have anticipated that $R$ would
have been simply $R^{\prime }$. This corresponds, however, to boundary
conditions other than no quanta present in past and future. It is
harder to interpret physically. For a system enclosed in a light
tight box, however, it appears likely that both $R$ and $R^{\prime
}$ lead to the same results.} $S_{p}+R$. Or, what amounts to the
same thing: to obtain transition amplitudes including the effects
of the field we must calculate the transition element of $\exp (
i R) $:
\begin{equation}
\left\langle \chi _{t^{{\prime\prime}}}\left| \exp i R\right|
\psi _{t^{\prime }}\right\rangle _{S_{p}}%
\label{XRef-Equation-227163942}
\end{equation}
under the action $S_{p}$ of the particles, excluding interaction.
Expression (\ref{XRef-Equation-227163911}) for $R$ must be considered
to be written in the usual manner as a Riemann sum and the expression
(\ref{XRef-Equation-227163942}) interpreted as defined in \textbf{C}
[Eq. (39)]. Expression (\ref{XRef-Equation-22716409}) must be used
for $x_{n}^{.}$ at time $t$.
Expression (\ref{XRef-Equation-227163942}), with (\ref{XRef-Equation-227163911}),
then contains all the effects of virtual quanta on a (at least non-relativistic)
system according to quantum electrodynamics. It contains the effects
to all orders in $e^{2}/\hbar c$ in a single expression. If expanded
in a power series in $e^{2}/\hbar c$, the various terms give the
expressions to the corresponding order obtained by the diagrams
and methods of \textbf{II}. We illustrate this by an example in
the next section.
\section{Example of application of expression (\ref{XRef-Equation-227163942})}
\noindent We shall not be much concerned with the non-relativistic
case here, as the relativistic case given below is as simple and
more interesting. It is, however, very similar and at this stage
it is worth giving an example to show how expressions resulting
from (\ref{XRef-Equation-227163942}) are to be interpreted according
to the rules of \textbf{C}. For example, consider the case of a
single electron, coordinate $x$, either free or in an external given
potential (contained for simplicity in\footnote{One can show from
(25) how the correlated effect of many atoms at a distance produces
on a given system the effects of an external potential. Formula
(24) yields the result that this potential is that obtained from
Li\'enard and Wiechert by retarded waves arising from the charges
and currents resulting from the distant atoms making transitions.
Assume the wave functions $\chi$ and $\psi$ can be split into products
of wave functions for system and distant atoms and expand $\exp
( iR) $ assuming the effect of any individual distant atom is small.
Coulomb potentials arise even from nearby particles if they are
moving slowly.} $S_{p}$, not in $R$). Its interaction with the field
produces a reaction back on itself given by $R$ as in (\ref{XRef-Equation-227163911})
but in which we keep only a single term corresponding to $m=n$.
Assume the effect of $R$ to be small and expand $\exp ( i R) $ as
$1+i R$. Let us find the amplitude at time $t^{{\prime\prime}}$
of finding the electron in a state $\psi $ with no quanta emitted,
if at time $t^{\prime }$ it was in the same state. It is
\[
\left\langle \psi _{t^{{\prime\prime}}}\left| 1+i R\right| \psi
_{t^{\prime }}\right\rangle _{S_{p}}=\left\langle \psi _{t^{{\prime\prime}}}\left|
1\right| \psi _{t^{\prime }}\right\rangle _{S_{p}}+i\left\langle
\psi _{t^{{\prime\prime}}}\left| R\right| \psi _{t^{\prime }}\right\rangle
_{S_{p}}
\]
where $\langle \psi _{t^{{\prime\prime}}}|1|\psi _{t^{\prime }}\rangle
_{S_{p}}=\exp [ -i E( t^{{\prime\prime}}-t^{\prime }) ] $ if $E$
is the energy of the state, and
\begin{equation}
\left\langle \psi _{t^{{\prime\prime}}}\left| R\right| \psi _{t^{\prime
}}\right\rangle _{S_{p}}=-\frac{1}{2}e^{2}\int _{t^{\prime }}^{t^{{\prime\prime}}}
dt \int _{t^{\prime }}^{t^{{\prime\prime}}}ds\left\langle \psi
_{t^{{\prime\prime}}}\left| \left( 1-x^{.}_{t}\cdot x^{.}_{s}\right)
\delta _{+}\left( \left( t-s\right) ^{2}-\left( x_{t}-x_{s}\right)
^{2}\right) \right| \psi _{t^{\prime }}\right\rangle _{S_{p}}.%
\label{XRef-Equation-227164053}
\end{equation}
Here $x_{s}=x( s) $, etc. In (\ref{XRef-Equation-227164053}) we
shall limit the range of integrations by assuming $s<t$, and double
the result, since the integrand is symmetric in $s$ and $t$. The
transition element of an arbitrary function of the coordinates at
several times is, as defined in \textbf{C},
{\rmfamily \begin{multline}
\left\langle \chi _{t^{{\prime\prime}}}\left| F( x_{1},t_{1};x_{2},t_{2};\cdots
x_{k},t_{k}) \right|
\psi _{t^{\prime }}\right\rangle _{S_{p}}=\int \chi ^{*}( x_{t^{{\prime\prime}}})
K( x_{t^{{\prime\prime}}},t^{{\prime\prime}};x_{1},t_{1}) \cdot
K( x_{1},t_{1};x_{2},t_{2}) \cdots \\
\times K( x_{k-1},t_{k-1};x_{k},t_{k}) K( x_{k},t_{k};x_{t^{\prime
}},t^{\prime }) \cdot F( x_{1},t_{1};x_{2},t_{2};\cdots x_{k},t_{k})
\\
\times \psi ( x_{t^{\prime }}) d^{3}x_{t^{{\prime\prime}}}d^{3}x_{1}d^{3}x_{2}\cdots
d^{3}x_{k}d^{3}x_{t^{\prime }}%
\label{XRef-Equation-227164143}
\end{multline}}
where $F$ is any function of the coordinate $x_{1}$, at time $t_{1}$,
$x_{2}$ at $t_{2}$ up to $x_{k}$, $t_{k}$, and, it is important
to notice, we have assumed $t^{{\prime\prime}}>t_{1}>t_{2}>\cdots
t_{k}>t^{\prime }$.
Expressions of higher order arising for example from $R^{2}$ are
more complicated as there are quantities referring to several different
times mixed up, but they all can be interpreted readily. One simply
breaks up the ranges of integrations of the time variables into
parts such that in each the order of time of each variable is definite.
One then interprets each part by formula (\ref{XRef-Equation-227164143}).
As a simple example we may refer to the problem of the transition
element
{\rmfamily \[
\left\langle \chi _{t^{{\prime\prime}}}\left| \int U( x( t) ,t)
dt\int V( x( s) ,s) ds\right| \psi _{t^{\prime }}\right\rangle
\]}
arising, say, in the cross term in $U$ and $V$ in an ordinary second
order perturbation problem (disregarding radiation) with perturbation
potential $U( x,t) +V( x,t) $. In the integration on $s$ and $t$,
which should include the entire range of time for each, we can split
the range of $s$ into two parts, $s<t$ and $s>t$. In the first case,
$s<t$, the potential $V$ acts earlier than $U$, and in the second,
$s>t$, it acts later; each part is then interpreted by formula
(\ref{XRef-Equation-227164143}) with the kernels taken in the
corresponding time order. When velocities $x^{.}( t) $ appear in
such transition elements they are to be replaced, as discussed in
\textbf{C}, by the operator $i( Hx-xH) $ operating at the corresponding
time. In
nonrelativistic mechanics $i( Hx -xH) $ is the momentum operator
$p_{x}$ divided by the mass $m$. Thus in (\ref{XRef-Equation-228135128})
the expression $\frac{1}{2}\left( (x_{i+1}-x_{i})+(x_{i}-x_{i-1})\right) \cdot
A( x_{i},t_{i}) $ becomes $\epsilon ( p\cdot A+A\cdot p) /2m$. Here
again we must split the sum into two regions, $j<i$ and $j>i$, so
the quantities in the usual notation will operate in the right order
such that eventually (\ref{XRef-Equation-228135128}) becomes identical
with the right-hand side of Eq. (\ref{XRef-Equation-22813526}) but
with $U( x_{t},t) $ replaced by the operator
{\rmfamily \[
\frac{1}{2m}\left( \frac{1}{i}\frac{\partial }{\partial x_{t}}\cdot
A\left( x_{t},t\right) +A\left( x_{t},t\right) \cdot \frac{1}{i}\frac{\partial
}{\partial x_{t}}\right)
\]}
standing in the same place, and with the operator
{\rmfamily \[
\frac{1}{2m}\left( \frac{1}{i}\frac{\partial }{\partial x_{s}}\cdot
B\left( x_{s},s\right) +\frac{1}{i}B\left( x_{s},s\right) \cdot
\frac{\partial }{\partial x_{s}}\right)
\]}
standing in the place of $V( x_{s},s) $. The sums and factors $\epsilon$
have now become $\int dt\int ds$.
This is nearly but not quite correct, however, as there is an additional
term coming from the terms in the sum corresponding to the special
values, $j=i$, $j=i+1$ and $j= i-1$. We have tacitly assumed from
the appearance of the expression (\ref{XRef-Equation-228135128})
that, for a given $i$, the contribution from just three such special
terms is of order $\epsilon ^{2}$. But this is not true. Although
the expected contribution of a term like $(x_{i+1}-x_{i})(x_{j+1}-x_{j})$
for $j\neq i$ is indeed of order $\epsilon ^{2}$, the expected contribution
of $(x_{i+1}-x_{i})^{2}$ is $+i\epsilon m^{-1}$ [\textbf{C}, Eq.
(50)], that is, of order $\epsilon$. In nonrelativistic mechanics
the velocities are unlimited and in very short times $\epsilon$
the amplitude diffuses a distance proportional to the square root
of the time. Making use of this equation then we see that the additional
contribution from these terms is essentially
{\rmfamily \[
im^{-1}\epsilon \sum \limits_{i}A\left( x_{i},t_{i}\right) \cdot
B\left( x_{i},t_{i}\right) =im^{-1}\int A\left( x\left( t\right)
,t\right) \cdot B\left( x\left( t\right) ,t\right) dt
\]}
when summed on all $i$. This has the same effect as a first-order
perturbation due to a potential $A\cdot B/m$. Added to the term
involving the momentum operators we therefore have an additional
term\footnote{The term corresponding to this for the self-energy
expression (26) would give an integral over $\delta _{+}((t-t^{\prime
})^{2}-(x_{t}-x_{t^{\prime }})^{2})$ which is evidently infinite
and leads to the quadratically divergent self-energy. There is no
such term for the Dirac electron, but there is for Klein-Gordon
particles. We shall not discuss the infinities in this paper as
they have already been discussed in \textbf{II}.}
{\rmfamily \begin{equation}
\frac{i}{m}\int _{t^{\prime }}^{t^{{\prime\prime}}}dt\int \chi ^{*}(
x_{t^{{\prime\prime}}}) K( x_{t^{{\prime\prime}}},t^{{\prime\prime}};x_{t},t)
A( x_{t},t) \cdot B( x_{t},t) \cdot K( x_{t},t;x_{t^{\prime }},t^{\prime
}) \psi ( x_{t^{\prime }}) d^{3}x_{t^{{\prime\prime}}}d^{3}x_{t}d^{3}x_{t^{\prime
}}.%
\label{XRef-Equation-22813560}
\end{equation}}
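The estimate quoted above, that the expected value of $(x_{i+1}-x_{i})^{2}$ is $i\epsilon m^{-1}$ [\textbf{C}, Eq. (50)], follows from the Gaussian form of the free-particle weight for a short time $\epsilon $. A numerical sketch (a modern aside, not part of the original argument; $m$ is given a small positive imaginary part purely for convergence, and the values are illustrative):

```python
import numpy as np

# Second moment of the short-time weight exp(i*m*(dx)^2 / (2*eps)):
# the Gaussian integral gives <(dx)^2> = i*eps/m exactly, for any m
# with a damping (positive imaginary) part.
eps = 0.1
m = 1.0 + 0.05j          # small imaginary part so the sums converge
x = np.linspace(-20.0, 20.0, 2_000_001)
w = np.exp(1j * m * x**2 / (2.0 * eps))
second_moment = np.sum(x**2 * w) / np.sum(w)
print(second_moment, 1j * eps / m)  # the two agree
```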
In the usual Hamiltonian theory this term arises, of course, from
the term $A^{2}/2m$ in the expansion of the Hamiltonian
\[
H=\left( 2m\right) ^{-1}\left( p-A\right) ^{2}=\left( 2m\right)
^{-1}\left( p^{2}-p\cdot A-A\cdot p+A^{2}\right)
\]
while the other term arises from the second-order action of $p\cdot
A+A\cdot p$. We shall not be interested in non-relativistic quantum
electrodynamics in detail. The situation is simpler for Dirac electrons.
For particles satisfying the Klein-Gordon equation (discussed in
Appendix A) the situation is very similar to a four-dimensional
analog of the non-relativistic case given here.
\section{Extension to Dirac particles}
\noindent Expressions (\ref{XRef-Equation-227163911}) and (\ref{XRef-Equation-227163942})
and their proof can be readily generalized to the relativistic case
according to the one electron theory of Dirac. We shall discuss
the hole theory later. In the non-relativistic case we began with
the proposition that the amplitude for a particle to proceed from
one point to another is the sum over paths of $\exp ( iS_{p}) $,
that is, we have for example for a transition element
{\rmfamily \begin{equation}
\left\langle \chi \left| 1\right| \psi \right\rangle =\operatorname*{\lim
}\limits_{\epsilon \rightarrow 0}\int \cdots \int \chi ^{*}( x_{N})
\Phi _{p}( x_{N},x_{N-1},\cdots x_{0}) \cdot \psi ( x_{0}) d^{3}x_{0}d^{3}x_{1}\cdots
d^{3}x_{N}%
\label{XRef-Equation-228135413}
\end{equation}}
where for $\exp ( iS_{p}) $ we have written $\Phi _{p}$, that is
more precisely,
{\rmfamily \[
\Phi _{p}=\Pi _{i}A^{-1}\exp iS\left( x_{i+1},x_{i}\right) .
\]}
As discussed in \textbf{C} this form is related to the usual form
of quantum mechanics through the observation that
{\rmfamily \begin{equation}
\left( x_{i+1}|x_{i}\right) _{\epsilon }=A^{-1}\exp [ iS( x_{i+1},x_{i})
] %
\label{XRef-Equation-228135448}
\end{equation}}
where $(x_{i+1}|x_{i})_{\epsilon }$ is the transformation matrix
from a representation in which $x$ is diagonal at time $t_{i}$ to
one in which $x$ is diagonal at time $t_{i+1}=t_{i}+\epsilon $ (so
that it is identical to $K_{0}( x_{i+1},t_{i+1};x_{i},t_{i}) $ for
the small time interval $\epsilon $). Hence the amplitude for a
given path can also be written
{\rmfamily \begin{equation}
\Phi _{p}=\Pi _{i}\left( x_{i+1}|x_{i}\right) _{\epsilon }%
\label{XRef-Equation-228135513}
\end{equation}}
for which form, of course, (\ref{XRef-Equation-228135413}) is exact
irrespective of whether $(x_{i+1}|x_{i})_{\epsilon }$ can be expressed
in the simple form (\ref{XRef-Equation-228135448}).
For a Dirac electron the $(x_{i+1}|x_{i})_{\epsilon }$ is a $4\times
4$ matrix (or $4^{N}\times 4^{N}$ if we deal with $N$ electrons)
but the expression (\ref{XRef-Equation-228135413}) with (\ref{XRef-Equation-228135513})
is still correct (as it is in fact for any quantum-mechanical system
with a sufficiently general definition of the coordinate $x_{i}$).
The product (\ref{XRef-Equation-228135513}) now involves operators;
the order in which the factors are to be taken is the order in which
the terms appear in time.
For a Dirac particle in a vector and scalar potential (times the
electron charge $e$) $A( x,t) $, $A_{4}( x,t) $, the quantity $(x_{i+1}|x_{i})_{\epsilon
}^{(A)}$ is related to that of a free particle to the first order
in $\epsilon $ as
{\rmfamily \begin{equation}
\left( x_{i+1}|x_{i}\right) _{\epsilon }^{\left( A\right) }=\left(
x_{i+1}|x_{i}\right) _{\epsilon }^{\left( 0\right) }\exp [ -i( \epsilon
A_{4}( x_{i},t_{i}) -\left( x_{i+1}-x_{i}\right) \cdot A( x_{i},t_{i})
) ] .%
\label{XRef-Equation-321412}
\end{equation}}
This can be verified directly by substitution into the Dirac equation.\footnote{Alternatively,
note that Eq. (37) is exact for arbitrarily large $\epsilon $ if
the potential $A_{\mu }$ is constant. For if the potential in the
Dirac equation is the gradient of a scalar function $A_{\mu }=\partial
\chi /\partial x_{\mu }$ the potential may be removed by replacing
the wave function by $\psi =\exp ( -i\chi ) \psi ^{\prime }$ (gauge
transformation). This alters the kernel by a factor $\exp [ -i(
\chi ( 2) -\chi ( 1) ) ] $ owing to the change in the initial and
final wave functions. A constant potential $A_{\mu }$ is the gradient
of $\chi =A_{\mu }x_{\mu }$ and can be completely removed by this
gauge transformation, so that the kernel differs from that of a
free particle by the factor $\exp [ -i( A_{\mu }x_{\mu 2}-A_{\mu
}x_{\mu 1}) ] $ as in (37).} It neglects the variation of $A$ and
$A_{4}$ with time and space during the short interval $\epsilon
$. This produces errors only of order $\epsilon ^{2}$ in the Dirac
case for the expected square velocity $(x_{i+1}-x_{i})^{2}/\epsilon
^{2}$ during the interval $\epsilon$ is finite (equaling the square
of the velocity of light) rather than being of order $1/\epsilon
$ as in the non-relativistic case. [This makes the relativistic
case somewhat simpler in that it is not necessary to define the
velocity as carefully as in (\ref{XRef-Equation-22716409}); $(x_{i+1}-x_{i})/\epsilon
$ is sufficiently exact, and no term analogous to (\ref{XRef-Equation-22813560})
arises.]
Thus $\Phi _{p}^{(A)}$ differs from that for a free particle, $\Phi
_{p}^{(0)}$, by a factor $\Pi _{i}\exp [ -i( \epsilon A_{4}( x_{i},t_{i})
-(x_{i+1}-x_{i})\cdot A( x_{i},t_{i}) ) ] $ which in the limit can
be written as
{\rmfamily \begin{equation}
\exp \left\{ -i \int \left[ A_{4}( x( t) ,t) -x^{.}( t) \cdot A(
x( t) ,t) \right] dt\right\} %
\label{XRef-Equation-22814144}
\end{equation}}
exactly as in the non-relativistic case.
The case of a Dirac particle interacting with the quantum-mechanical
oscillators representing the field may now be studied. Since the
dependence of $\Phi _{p}^{(A)}$ on $A$, $A_{4}$ is through the same
factor as in the non-relativistic case, when $A$, $A_{4}$ are expressed
in terms of the oscillator coordinates $q$, the dependence of $\Phi
$ on the oscillator coordinates $q$ is unchanged. Hence the entire
analysis of the preceding sections which concern the results of
the integration over oscillator coordinates can be carried through
unchanged and the results will be expression (\ref{XRef-Equation-227163942})
with formula (\ref{XRef-Equation-227163911}) for $R$. Expression
(\ref{XRef-Equation-227163942}) is now interpreted as
{\rmfamily \begin{equation}
\left\langle \chi _{t^{{\prime\prime}}}\left| \exp iR\right| \psi
_{t^{\prime }} \right\rangle =\operatorname*{\lim }\limits_{\epsilon
\rightarrow 0} \int \chi ^{*}( x_{t^{{\prime\prime}}}^{\left( 1\right)
},x_{t^{{\prime\prime}}}^{\left( 2\right) }\cdots ) \times \prod
\limits_{n}\left( \Phi _{p,n}^{\left( 0\right) }d^{3}x_{t^{{\prime\prime}}}^{\left(
n\right) }d^{3}x_{t^{{\prime\prime}}-\epsilon }^{\left( n\right)
}\cdots d^{3}x_{t^{\prime }}^{\left( n\right) }\right) \exp (
iR) \psi ( x_{t^{\prime }}^{\left( 1\right) },x_{t^{\prime }}^{\left(
2\right) }\cdots ) %
\label{XRef-Equation-22814059}
\end{equation}}
where $\Phi _{p,n}^{(0)}$, the amplitude for a particular path for
particle $n$, is simply the expression (\ref{XRef-Equation-228135513})
where $(x_{i+1}|x_{i})_{\epsilon }$ is the kernel $K_{0,n}( x_{i+1}^{(n)},
t_{i+1};x_{i}^{(n)}, t_{i}) $ for a free electron according to the
one electron Dirac theory, with the matrices which appear operating
on the spinor indices corresponding to particle ($n$) and the order
of all operations being determined by the time indices.
For calculational purposes we can, as before, expand $R$ as a power
series and evaluate the various terms in the same manner as for
the non-relativistic case. In such an expansion the quantity $x^{.}( t) $
is replaced, as we have seen in (\ref{XRef-Equation-228135940}),
by the operator $i( Hx-xH) $, that is, in this case by $\alpha $
operating at the corresponding time. There is no further complicated
term analogous to (\ref{XRef-Equation-22813560}) arising in this
case, for the expected value of $(x_{i+1}-x_{i})^{2}$ is now of
order $\epsilon ^{2}$ rather than $\epsilon $.
For example, for self-energy one sees that expression (\ref{XRef-Equation-22814043})
will be (with other terms coming from those with $x^{.}( t) $ replaced
by $\alpha $ and with the usual $\beta $ in back of each $K_{0}$
because of the definition of $K_{0}$ in relativity theory)
\begin{multline}
\left\langle \psi _{t^{{\prime\prime}}}\left| R\right| \psi _{t^{\prime
}}\right\rangle _{S_{p}}=-e^{2}\int \psi ^{*}( x_{t^{{\prime\prime}}})
K_{0}( x_{t^{{\prime\prime}}},t^{{\prime\prime}};x_{t},t) \beta
\alpha _{\mu }\cdot \delta _{+}( \left( t-s\right) ^{2}-\left( x_{t}-x_{s}\right)
^{2}) K_{0}\left( x_{t},t;x_{s},s\right) \beta \alpha
_{\mu }\\
\cdot K_{0}( x_{s},s;x_{t^{\prime }},t^{\prime }) \beta \psi ( x_{t^{\prime
}}) d^{3}x_{t^{{\prime\prime}}}d^{3}x_{t}d^{3}x_{s}d^{3}x_{t^{\prime
}}dtds,
\end{multline}
where $\alpha _{4}=1$, $\alpha _{1,2,3}=\alpha _{x,y,z}$, and a sum
on the repeated index $\mu $ is implied in the usual way: $a_{\mu
}b_{\mu }=a_{4}b_{4}-a_{1}b_{1}-a_{2}b_{2}-a_{3}b_{3}$. One can
change $\beta \alpha _{\mu }$ to $\gamma _{\mu }$ and $\psi ^{*}$
to $\overline{\psi }\beta $. In this manner all of the rules
referring to virtual photons discussed in {\bfseries II} are deduced;
but with the difference that $K_{0}$ is used instead of $K_{+}$
and we have the Dirac one electron theory with negative energy states
(although we may have any number of such electrons).
\section{Extension to positron theory}
\noindent Since in (\ref{XRef-Equation-22814059}) we have an arbitrary
number of electrons, we can deal with the hole theory in the usual
manner by imagining that we have an infinite number of electrons
in negative energy states.
On the other hand, in paper {\bfseries I} on the theory of positrons,
it was shown that the results of the hole theory in a system with
a given external potential $A_{\mu }$ were equivalent to those of
the Dirac one electron theory if one replaced the propagation kernel,
$K_{0}$, by a different one, $K_{+}$, and multiplied the resultant
amplitude by a factor $C_{\nu }$ involving $A_{\mu }$. We must now
see how this relation,
derived in the case of external potentials, can also be carried
over in electrodynamics to be useful in simplifying expressions
involving the infinite sea of electrons.
To do this we study in greater detail the relation between a problem
involving virtual photons and one involving purely external potentials.
In using (\ref{XRef-Equation-227163942}) we shall assume in accordance
with the hole theory that the number of electrons is infinite, but
that they all have the same charge, $e$. Let the states $\psi _{t^{\prime
}},\chi _{t^{{\prime\prime}}},$ represent the vacuum plus perhaps
a number of real electrons in positive energy states and perhaps
also some empty negative energy states. Let us call the amplitude
for the transition in an external potential $B_{\mu }$, but \textit{excluding
virtual photons}, $T_{0}[ B] $, a functional of $B_{\mu }( 1) $. We
have seen that
\begin{equation}
T_{0}[ B] =\left\langle \chi _{t^{{\prime\prime}}}\left| \exp ( iP) \right|
\psi _{t^{\prime }}\right\rangle
\end{equation}
where
\[
P=-\sum \limits_{n}\int \left[ B_{4}( x^{\left( n\right) }( t) ,t)
-x^{.\left( n\right) }( t) \cdot B( x^{\left( n\right) }( t) ,t)
\right] dt
\]
by (\ref{XRef-Equation-22814144}). We can write this as
\[
P=-\sum \limits_{n}\int B_{\mu }( x_{\nu }^{\left( n\right) }( t) )
\overset{.}{x}_{\mu }^{\left( n\right) }( t) dt
\]
where $x_{4}( t) =t$ and $\overset{.}{x}_{4}=1$, the other values
of $\mu $ corresponding to space variables. The corresponding amplitude
for the same process in the same potential, but {\itshape including}
all the virtual photons, we may call
\begin{equation}
T_{e^{2}}[ B] =\left\langle \chi _{t^{{\prime\prime}}}\left| \exp
( iR) \exp ( iP) \right| \psi _{t^{\prime }}\right\rangle
.%
\label{XRef-Equation-22814125}
\end{equation}
Now let us consider the effect on $T_{e^{2}}[ B] $ of changing the
coupling $e^{2}$ of the virtual photons. Differentiating (\ref{XRef-Equation-22814125})
with respect to $e^{2}$ which appears only\footnote{In changing
the charge $e^{2}$ we mean to vary only the degree to which virtual
photons are important. We do not contemplate changes in the influence
of the external potentials. If one wishes, as $e$ is raised the
strength of the potential is decreased proportionally so that $B_{\mu
}$, the potential times the charge $e$, is held constant.
} in $R$ we find
\begin{equation}
d T_{e^{2}}[ B] /d( e^{2}) = \left\langle \chi _{t^{{\prime\prime}}}\left|
-\frac{i}{2}\sum \limits_{n}\sum \limits_{m}\int \int dtds\overset{.}{x}_{\mu
}^{\left( n\right) }( t) \overset{.}{x}_{\mu }^{\left( m\right)
}( s) \cdot \delta _{+}( \left( x_{\nu }^{\left( n\right) }( t) -x_{\nu }^{\left(
m\right) }( s) \right) ^{2}) \exp i( R+P) \right| \psi _{t^{\prime
}}\right\rangle .%
\label{XRef-Equation-22814235}
\end{equation}
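In obtaining (\ref{XRef-Equation-22814235}) note that, all charges being equal to $e$, the quantity $R$ of (\ref{XRef-Equation-227163911}) is simply proportional to $e^{2}$, so that the differentiation merely brings down a factor:
\[
\frac{dT_{e^{2}}[ B] }{d( e^{2}) }=\left\langle \chi _{t^{{\prime\prime}}}\left| i\frac{R}{e^{2}}\exp i( R+P) \right| \psi _{t^{\prime }}\right\rangle ,
\]
which is (\ref{XRef-Equation-22814235}) when $R$ is written in the four-vector notation with $\overset{.}{x}_{4}=1$.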
We can also study the first-order effect of a change of $B_{\mu
}$:
{\rmfamily \begin{equation}
\delta T_{e^{2}}[ B] /\delta B_{\mu }( 1) =-i\left\langle \chi _{t^{{\prime\prime}}}\left|
\sum \limits_{n}\int dt\overset{.}{x}_{\mu }^{\left( n\right) }\delta
^{4}( x_{\alpha }^{\left( n\right) }( t) -x_{\alpha ,1}) \cdot \exp
i( R+P) \right| \psi _{t^{\prime }}\right\rangle %
\label{XRef-Equation-22814217}
\end{equation}}
where $x_{\alpha ,1}$ is the field point at which the derivative with
respect to $B_{\mu }$ is taken\footnote{The functional derivative
is defined such that if $T[ B] $ is a number depending on the functions
$B_{\mu }( 1) $, the first-order variation in $T$ produced by a change
from $B_{\mu }$ to $B_{\mu }+\Delta B_{\mu }$ is given by $T[
B+\Delta B] -T[ B] =\int (\delta T[ B] /\delta B_{\mu }( 1)
)\Delta B_{\mu }( 1) d\tau _{1}$, the integral extending over
all four-space $x_{\alpha ,1}$.} and the term (current density)
$-\sum _{n}\int dt\overset{.}{x}_{\mu }^{(n)}( t) \delta ^{4}(
x_{\alpha }^{(n)}( t) -x_{\alpha ,1}) $ is just $\delta P/\delta B_{\mu
}( 1) $. The function $\delta ^{4}( x_{\alpha }^{(n)}-x_{\alpha
,1}) $ means $\delta ( x_{4}^{(n)}-x_{4,1}) \cdot \delta ( x_{3}^{(n)}-x_{3,1})
\cdot \delta ( x_{2}^{(n)}-x_{2,1}) \cdot \delta ( x_{1}^{(n)}-x_{1,1})
$, that is, $\delta ( 2,1) $ with $x_{\alpha ,2}=x_{\alpha }^{(n)}(
t) $. A second variation of $T$ gives, by differentiation of (\ref{XRef-Equation-22814217})
with respect to $B_{\nu }( 2) $,
{\rmfamily \begin{multline*}
\delta ^{2}T_{e^{2}}[ B] /\delta B_{\mu }\left( 1\right) \delta
B_{\nu }\left( 2\right) =-\left\langle \chi _{t^{{\prime\prime}}}\left|
\sum \limits_{n}\sum \limits_{m}\int dt\int ds\overset{.}{x}_{\mu
}^{\left( n\right) }\left( t\right) \overset{.}{x}_{\nu }^{\left(
m\right) }\left( s\right) \right. \right. \\
\left. \left. \cdot \delta ^{4}\left( x_{\alpha }^{\left( n\right)
}\left( t\right) -x_{\alpha ,1}\right) \delta ^{4}\left( x_{\beta
}^{\left( m\right) }\left( s\right) -x_{\beta ,2}\right) \exp
i\left( R+P\right) \right| \psi _{t^{\prime }}\right\rangle .
\end{multline*}}
Comparison of this with (\ref{XRef-Equation-22814235}) shows that
{\rmfamily \begin{equation}
d T_{e^{2}}[ B] /d( e^{2}) =\frac{1}{2}i\int \int \left( \delta
^{2}T_{e^{2}}[ B] /\delta B_{\mu }( 1) \delta B_{\mu }( 2) \right)
\times \delta _{+}( s_{12}^{2}) d\tau _{1}d\tau _{2}%
\label{XRef-Equation-22814256}
\end{equation}}
where $s_{12}^{2}=(x_{\mu ,1}-x_{\mu ,2})(x_{\mu ,1}-x_{\mu ,2})$.
We now proceed to use this equation to prove the validity of the
rules given in \textbf{II} for electrodynamics. This we do by the
following argument. The equation can be looked upon as a differential
equation for $T_{e^{2}}[ B] $. It determines $T_{e^{2}}[ B] $ uniquely
if $T_{o}[ B] $ is known. We have shown it is valid for the hole
theory of positrons. But in \textbf{I} we have given formulas for
calculating $T_{o}[ B] $ whose correctness relative to the hole
theory we have there demonstrated. Hence we have shown that the
$T_{e^{2}}[ B] $ obtained by solving (\ref{XRef-Equation-22814256})
with the initial condition $T_{o}[ B] $ as given by the rules in
\textbf{I} will be equal to that given for the same problem by the
second quantization theory of the Dirac matter field coupled with
the quantized electromagnetic field. But it is evident (the argument
is given in the next paragraph) that the rules\footnote{That is,
of course, those rules of \textbf{II} which apply to the unmodified
electrodynamics of Dirac electrons. (The limitation excluding real
photons in the initial and final states is removed in Sec. 8.) The
same arguments clearly apply to nucleons interacting via neutral
vector mesons, vector coupling. Other couplings require a minor
extension of the argument. The modification to the $(x_{i+1}|x_{i})_{\epsilon
}$, as in (37), produced by some couplings cannot very easily be
written without using operators in the exponents. These operators
can be treated as numbers if their order of operation is maintained
to be always their order in time. This idea will be discussed and
applied more generally in a succeeding paper.} given in \textbf{II}
constitute a solution in power series in $e^{2}$ of the Eq. (\ref{XRef-Equation-22814256})
[which for $e^{2}=0$ reduces to the $T_{0}[ B] $ given in \textbf{I}].
Hence the rules in \textbf{II} must give, to each order in $e^{2}$,
the matrix element for any process that would be calculated by the
usual theory of second quantization of the matter and electromagnetic
fields. This is what we aimed to prove.
That the rules of \textbf{II} represent, in a power series expansion,
a solution of (\ref{XRef-Equation-22814256}) is clear. For the rules
there given may be stated as follows: Suppose that we have a process
to order $k$ in $e^{2}$ (i.e., having $k$ virtual photons) and order
$n$ in the external potential $B_{\mu }$. Then, {\itshape the matrix
element for the process with one more virtual photon and two less
potentials is that obtained from the previous matrix by choosing
from the }$n${\itshape potentials a pair, say }$B_{\mu }( 1) ${\itshape
acting at 1 and} $B_{\nu }( 2) ${\itshape acting at 2, replacing
them by }$i e^{2}\delta _{\mu \nu }\delta _{+}( s_{12}^{2}) ${\itshape
, adding the results for each way of choosing the pair, and dividing
by }$k+1${\itshape , the present number of photons}. The matrix
with no virtual photons $(k=0)$ being given to any $n$ by the rules
of \textbf{I}, this permits terms to all orders in $e^{2}$ to be
derived by recursion. It is evident that the rule in italics is
that of \textbf{II}, and equally evident that it is a word expression
of Eq. (\ref{XRef-Equation-22814256}). [The factor $\frac{1}{2}$
in (\ref{XRef-Equation-22814256}) arises since in integrating over
all $d\tau _{1}$ and $d\tau _{2}$ we count each pair twice. The division
by $k+1$ is required by the rules of \textbf{II} for, there, each
diagram is to be taken only once, while in the rule given above
we say what to do to add one extra virtual photon to $k$ others.
But which one of the $k+1$ is to be identified as the last photon
added is irrelevant. It agrees with (\ref{XRef-Equation-22814256})
of course for it is canceled on differentiating with respect to
$e^{2}$ the factor $(e^{2})^{k+1}$ for the $(k+1)$ photons.]
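The bookkeeping in the bracketed remark may be made concrete with a toy model (a modern aside; the operation $D$ below is a hypothetical linear stand-in for $\frac{1}{2}i\int \int (\delta ^{2}/\delta B_{\mu }( 1) \delta B_{\mu }( 2) )\delta _{+}( s_{12}^{2}) d\tau _{1}d\tau _{2}$): if each coefficient is obtained from the last by applying $D$ and dividing by $k+1$, the resulting series in $u=e^{2}$ satisfies $dT/du=D[ T] $ term by term, which is the content of Eq. (\ref{XRef-Equation-22814256}).

```python
# Toy check of the recursion: with t_{k+1} = D(t_k)/(k+1), the series
# T(u) = sum_k t_k u^k satisfies dT/du = D(T) coefficient by coefficient.
def D(t):
    return 0.3 * t   # hypothetical linear operation on "amplitudes"

t = [2.0]            # t_0: the no-photon amplitude, given by the rules of I
for k in range(10):
    t.append(D(t[k]) / (k + 1))

dT_coeffs = [(k + 1) * t[k + 1] for k in range(10)]  # coefficients of dT/du
DT_coeffs = [D(tk) for tk in t[:10]]                 # coefficients of D(T)
print(max(abs(a - b) for a, b in zip(dT_coeffs, DT_coeffs)))  # essentially zero
```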
\section{Generalized formulation of quantum electrodynamics}
\noindent The relation implied by (\ref{XRef-Equation-22814256})
between the formal solution for the amplitude for a process in an
arbitrary unquantized external potential to that in a quantized
field appears to be of much wider generality. We shall discuss the
relation from a more general point of view here (still limiting
ourselves to the case of no photons in initial or final state).
In earlier sections we pointed out that as a consequence of the
Lagrangian form of quantum mechanics the aspects of the particles'
motions and the behavior of the field could be analyzed separately.
What we did was to integrate over the field oscillator coordinates
first. We could, in principle, have integrated over the particle
variables first. That is, we first solve the problem with the action
of the particles and their interaction with the field and then multiply
by the exponential of the action of the field and integrate over
all the field oscillator coordinates. (For simplicity of discussion
let us put aside from detailed special consideration the questions
involving the separation of the longitudinal and transverse parts
of the field.) Now the integral over the particle coordinates for
a given process is precisely the integral required for the analysis
of the motion of the particles in an unquantized potential. With
this observation we may suggest a generalization to all types of
systems.
Let us suppose the formal solution for the amplitude for some given
process with matter in an external potential $B_{\mu }( 1) $ is
some numerical quantity $T_{0}$. We mean matter in a more general
sense now, for the motion of the matter may be described by the
Dirac equation, or by the Klein-Gordon equation, or may involve
charged or neutral particles other than electrons and positrons
in any manner whatsoever. The quantity $T_{0}$ depends of course
on the potential function $B_{\mu }( 1) $; that is, it is a functional
$T_{0}[ B] $ of this potential. We assume we have some expression
for it in terms of $B_{\mu }$ (exact, or to some desired degree
of approximation in the strength of the potential).
Then the answer $T_{e^{2}}[ B] $ to the corresponding problem in
quantum electrodynamics is $T_{0}[ A_{\mu }( 1) +B_{\mu }( 1) ]
\exp ( i S_{0}) $ summed over all possible distributions of field
$A_{\mu }( 1) $, wherein $S_{0}$ is the action for the field $S_{0}=-(8
\pi e^{2})^{-1}\sum _{\mu }\int ((\partial A_{\mu }/\partial t)^{2}-(\nabla
A_{\mu })^{2})d^{3}x\,dt$, the sum on $\mu $ carrying the usual minus
sign for space components.
If $F[ A] $ is any functional of $A_{\mu }( 1) $ we shall represent
by ${}_{0}| F[ A] |_{0}$ this superposition of $F[ A] \exp ( i
S_{0}) $ over distributions of $A_{\mu }$ for the case in which
there are no photons in initial or final state. That is, we have
\begin{equation}
T_{e^{2}}[ B] ={}_{0}\left| T_{0}[ A+B] \right| _{0}.%
\label{XRef-Equation-2281479}
\end{equation}
The evaluation of ${}_{0}| F[ A] |_{0}$ directly from the definition
of the operation ${}_{0}| |_{0}$ is not necessary. We can give the
result in another way. We first note that the operation is linear,
\begin{equation}
{}_{0}\left| F_{1}[ A] +F_{2}[ A] \right| _{0}={}_{0}\left| F_{1}[
A] \right| _{0}+{}_{0}\left| F_{2}[ A] \right| _{0}%
\label{XRef-Equation-22814356}
\end{equation}
so that if $F$ is represented as a sum of terms each term can be
analyzed separately. We have studied essentially the case in which
$F[ A] $ is an exponential function. In fact, what we have done
in Section 4 may be repeated with slight modification to show that
\begin{equation}
{}_{0}\left| \exp ( -i \int j_{\mu }( 1) A_{\mu }( 1) d\tau _{1})
\right| _{0}=\exp ( -\frac{1}{2} ie^{2} \int \int j_{\mu }( 1)
j_{\mu }( 2) \delta _{+}( s_{1 2}^{2}) d\tau _{1}d\tau _{2}) %
\label{XRef-Equation-22814535}
\end{equation}
where $j_{\mu }( 1) $ is an arbitrary function of position and time
for each value of $\mu $.
Although this gives the evaluation of ${}_{0}|\ \ |_{0}$ for only
a particular functional of $A_{\mu }$ the appearance of the arbitrary
function $j_{\mu }( 1) $ makes it sufficiently general to permit
the evaluation for any other functional. For it is to be expected
that any functional can be represented as a superposition of exponentials
with different functions $j_{\mu }( 1) $ (by analogy with the principle
of Fourier integrals for ordinary functions). Then, by (\ref{XRef-Equation-22814356}),
the result of the operation is the corresponding superposition of
expressions equal to the right-hand side of (\ref{XRef-Equation-22814535})
with the various $j$'s substituted for $j_{\mu }$.
In many applications $F[ A] $ can be given as a power series in
$A_{\mu }$:
\begin{equation}
F[ A] =f_{0}+\int f_{\mu }( 1) A_{\mu }( 1) d\tau _{1}+\int \int
f_{\mu \nu }( 1,2) A_{\mu }( 1) A_{\nu }( 2) d\tau _{1}d\tau
_{2}+\cdots
\end{equation}
where $f_{0},f_{\mu }( 1) ,f_{\mu \nu }( 1,2) \cdots $ are known
numerical functions independent of $A_{\mu }$. Then by (\ref{XRef-Equation-22814356})
\begin{equation}
{}_{0}\left| F[ A] \right| _{0}=f_{0}+\int f_{\mu }( 1) {}_{0}\left|
A_{\mu }( 1) \right| _{0}d\tau _{1}+\int \int f_{\mu \nu }( 1,2)
{}_{0}\left| A_{\mu }( 1) A_{\nu }( 2) \right| _{0}d\tau _{1}d\tau
_{2}+\cdots %
\label{XRef-Equation-22814748}
\end{equation}
where we set ${}_{0}|1|_{0}=1$ (from (\ref{XRef-Equation-22814535})
with $j_{\mu }=0$). We can work out expressions for the successive
powers of $A_{\mu }$ by differentiating both sides of (\ref{XRef-Equation-22814535})
successively with respect to $j_{\mu }$ and setting $j_{\mu }=0$
in each derivative. For example, the first variation (derivative)
of (\ref{XRef-Equation-22814535}) with respect to $j_{\mu }( 3)
$ gives
\begin{multline}
{}_{0}\left| -i A_{\mu }( 3) \exp ( -i \int j_{\nu }( 1) A_{\nu
}( 1) d\tau _{1}) \right| _{0}=-ie^{2} \int \delta _{+}( s_{3 4}^{2})
j_{\mu }( 4) d\tau _{4}\\
\times \exp ( -\frac{1}{2} ie^{2}\int \int j_{\nu }( 1) j_{\nu
}( 2) \delta _{+}( s_{1 2}^{2}) d\tau _{1}d\tau _{2}) .
\end{multline}
Setting $j_{\mu }=0$ gives
\[
{}_{0} \left| A_{\mu }( 3) \right| _{0}=0.
\]
Differentiating again with respect to $j_{\nu }( 4) $ and setting
$j_{\nu }=0$ shows
\begin{equation}
{}_{0}\left| A_{\mu }( 3) A_{\nu }( 4) \right| _{0}=ie^{2} \delta
_{\mu \nu } \delta _{+}( s_{3 4}^{2})
\end{equation}
and so on for higher powers. These results may be substituted into
(\ref{XRef-Equation-22814748}). Clearly therefore when $T_{0}[ B+A]
$ in (\ref{XRef-Equation-2281479}) is expanded in a power series
and the successive terms are computed in this way, we obtain the
results given in {\bfseries II}.
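These moment formulas can be illustrated numerically in a finite-dimensional model (an illustration only, not part of the original argument). The sketch below replaces the operation ${}_{0}|\ \ |_{0}$ applied to $\exp ( -i\int j_{\mu }A_{\mu }d\tau ) $ by a two-component Gaussian generating function $Z( j) =\exp ( -\frac{1}{2}ie^{2}j^{T}Dj) $, with a symmetric matrix $D$ standing in for $\delta _{+}( s_{1 2}^{2}) $, and checks by finite differences that the first moment vanishes while the second equals $ie^{2}D_{a b}$:

```python
import cmath

e2 = 0.7                      # plays the role of e^2
D = [[1.3, 0.4], [0.4, 0.9]]  # symmetric matrix standing in for delta_+(s_12^2)

def Z(j):
    # discrete analog of 0|exp(-i int j.A)|0 = exp(-(1/2) i e^2 j.D.j)
    q = sum(j[a] * D[a][b] * j[b] for a in range(2) for b in range(2))
    return cmath.exp(-0.5j * e2 * q)

h = 1e-4

def d2Z(a, b):
    # central finite difference for d^2 Z / d j_a d j_b at j = 0
    def at(ja, jb):
        j = [0.0, 0.0]
        j[a] += ja
        j[b] += jb
        return Z(j)
    return (at(h, h) - at(h, -h) - at(-h, h) + at(-h, -h)) / (4 * h * h)

# first moment 0|A_a|0 = i dZ/dj_a at j = 0; it vanishes since Z is even in j
dZ = (Z([h, 0.0]) - Z([-h, 0.0])) / (2 * h)
assert abs(dZ) < 1e-12

# second moment 0|A_a A_b|0 = -d^2 Z/dj_a dj_b at j = 0, which equals i e^2 D_ab
for a in range(2):
    for b in range(2):
        assert abs(-d2Z(a, b) - 1j * e2 * D[a][b]) < 1e-5
print("moments agree with i e^2 D_ab")
```

Here the two indices are an arbitrary stand-in; in the continuum an index becomes the pair (position $1$, $\mu $) and the sums become the integrals appearing in (\ref{XRef-Equation-22814748}).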
It is evident that (\ref{XRef-Equation-2281479}), (\ref{XRef-Equation-22814356}),
(\ref{XRef-Equation-22814535}) imply that $T_{e^{2}}[ B] $ satisfies
the differential equation (\ref{XRef-Equation-22814256}) and conversely
(\ref{XRef-Equation-22814256}) with the definition (\ref{XRef-Equation-2281479})
implies (\ref{XRef-Equation-22814356}) and (\ref{XRef-Equation-22814535}).
For if $T_{0}[ B] $ is an exponential
\begin{equation}
T_{0}[ B] =\exp ( -i \int j_{\mu }( 1) B_{\mu }( 1) d\tau _{1})
%
\label{XRef-Equation-2281482}
\end{equation}
we have from (\ref{XRef-Equation-2281479}), (\ref{XRef-Equation-22814535})
that
\begin{equation}
T_{e^{2}}[ B] =\exp [ -\frac{1}{2} ie^{2} \int \int j_{\mu }( 1)
j_{\mu }( 2) \delta _{+}( s_{1 2}^{2}) d\tau _{1}d\tau _{2}] \cdot
\exp [ -i \int j_{\nu }( 1) B_{\nu }( 1) d\tau _{1}] .%
\label{XRef-Equation-22814815}
\end{equation}
Direct substitution of this into Eq. (\ref{XRef-Equation-22814256})
shows it to be a solution satisfying the boundary condition (\ref{XRef-Equation-2281482}).
Since the differential equation (\ref{XRef-Equation-22814256}) is
linear, if $T_{0}[ B] $ is a superposition of exponentials, the
corresponding superposition of solutions (\ref{XRef-Equation-22814815})
is also a solution.
Many of the formal representations of the matter system (such as
that of second quantization of Dirac electrons) represent the interaction
with a fixed potential in a formal exponential form such as the
left-hand side of (\ref{XRef-Equation-22814535}), except that
$j_{\mu }( 1) $ is an operator instead of a numerical function.
Equation (\ref{XRef-Equation-22814535}) may still be used if care
is exercised in defining the order of the operators on the right-hand
side. The succeeding paper will discuss this in more detail.
Equation (\ref{XRef-Equation-22814256}) or its solution (\ref{XRef-Equation-2281479}),
(\ref{XRef-Equation-22814356}), (\ref{XRef-Equation-22814535}) constitutes
a very general and convenient formulation of the laws of quantum
electrodynamics for virtual processes. Its relativistic invariance
is evident if it is assumed that the unquantized theory giving $T_{0}[
B] $ is invariant. It has been proved to be equivalent to the usual
formulation for Dirac electrons and positrons (for Klein-Gordon
particles see Appendix A). It is suggested that it is of wide generality.
It is expressed in a form which has meaning even if it is impossible
to express the matter system in Hamiltonian form; in fact, it only
requires the existence of an amplitude for fixed potentials which
obeys the principle of superposition of amplitudes. If $T_{0}[ B]
$ is known as a power series in $B$, calculations of $T_{e^{2}}[ B]
$ in a power series of $e^{2}$ can be made directly using the italicized
rule of Sec. 7. The limitation to virtual quanta is removed in the
next section.
On the other hand, the formulation is unsatisfactory because for
situations of importance it gives divergent results, even if $T_{0}[
B] $ is finite. The modification proposed in \textbf{II} of replacing
$\delta _{+}( s_{12}^{2}) $ in (\ref{XRef-Equation-22814256}), (\ref{XRef-Equation-22814535})
by $f_{+}( s_{12}^{2}) $ is not satisfactory owing to the loss of
the theorems of conservation of energy or probability discussed
in \textbf{II} at the end of Sec. 6. There is the additional difficulty
in positron theory that even $T_{0}[ B] $ is infinite to begin with
(vacuum polarization). Computational ways of avoiding these troubles
are given in \textbf{II} and in the references of footnote 2.
\section{Case of real photons}
\noindent The case in which there are real photons in the initial
or the final state can be worked out from the beginning in the same
manner.\footnote{For an alternative method starting directly from
the formula (24) for virtual photons, see Appendix B.} We first
consider the case of a system interacting with a single oscillator.
From this result the generalization will be evident. This time we
shall calculate the transition element between an initial state
in which the particle is in state $\psi _{t^{\prime }}$ and the
oscillator is in its $n$th eigenstate (i.e., there are $n$ photons
in the field) to a final state with particle in $\chi _{t^{{\prime\prime}}}$,
oscillator in $m$th level. As we have already discussed, when the
coordinates of the oscillator are eliminated the result is the transition
element $\langle \chi _{t^{{\prime\prime}}}|G_{m n}|\psi _{t^{\prime
}}\rangle $ where
\[
G_{mn }= \int \varphi _{m}^{*}( q_{j}) k( q_{j},t^{{\prime\prime}};q_{0},t^{\prime
}) \varphi _{n}( q_{0}) dq_{0} dq_{j}
\]
where $\varphi _{m}$, $\varphi _{n}$ are the wave functions
[8] for the oscillator in states $m$, $n$ and $k$ is given in (\ref{XRef-Equation-227163312}).
The $G_{m n}$ can be evaluated most easily by calculating the generating
function
\begin{equation}
g( X,Y) =\sum \limits_{m}\sum \limits_{n}G_{m n}X^{m}Y^{n}( m!n!)
^{-\frac{1}{2}}%
\label{XRef-Equation-2281492}
\end{equation}
for arbitrary $X$, $Y$. If expression (\ref{XRef-Equation-227163344})
is substituted in the left-hand side of (\ref{XRef-Equation-2281492}),
the expression can be simplified by use of the generating function
relation for the eigenfunctions [8] of the harmonic oscillator
\[
\sum \limits_{n}\varphi _{n}\left( q_{0}\right) Y^{n}\left( n!\right)
^{-\frac{1}{2}}=\left( \omega /\pi \right) ^{\frac{1}{2}}\exp \left(
-\frac{1}{2}i\omega t^{\prime }\right) \times \exp \frac{1}{2}[
\omega q_{0}^{2}-\left( Y \exp [ -i\omega t^{\prime }] -\left(
2\omega \right) ^{\frac{1}{2}}q_{0}\right) ^{2}]
\]
Using a similar expansion for the $\varphi _{m}^{*}$ one is left
with the exponential of a quadratic function of $q_{0}$ and $q_{j}$.
The integration on $q_{0}$ and $q_{j}$ is then easily performed
to give
\begin{equation}
g( X,Y) =G_{0 0}\exp ( X Y+i\beta ^{*}X+i\beta Y) %
\label{XRef-Equation-22814120}
\end{equation}
from which expansion in powers of $X$ and $Y$ and comparison to
(\ref{XRef-Equation-2281492}) gives the final result
\begin{equation}
G_{m n}= G_{0 0}\left( m! n!\right) ^{-\frac{1}{2}}\sum \limits_{r}\frac{m!}{\left(
m-r\right) !r!}\frac{n!}{\left( n-r\right) !r!}\times r!\left( i\beta
^{*}\right) ^{m-r}\left( i\beta \right) ^{n-r}%
\label{XRef-Equation-22814938}
\end{equation}
where $G_{0 0}$ is given in (\ref{XRef-Equation-22814921}) and
\begin{equation}
\begin{array}{cc}
\beta & =\left( 2\omega \right) ^{-\frac{1}{2}}\int _{t^{\prime
}}^{t^{{\prime\prime}}}\gamma ( t) \exp ( -i\omega t) dt, \\
\beta ^{*} & =\left( 2\omega \right) ^{-\frac{1}{2}}\int _{t^{\prime
}}^{t^{{\prime\prime}}}\gamma ( t) \exp ( +i\omega t) dt,
\end{array}%
\label{XRef-Equation-22814107}
\end{equation}
and the sum on $r$ is to go from 0 to $m$ or to $n$ whichever is
the smaller. (The sum can be expressed as a Laguerre polynomial
but there is no advantage in this.)
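As a check on the algebra (no part of the original text), the expansion of the generating function (\ref{XRef-Equation-22814120}) can be compared numerically with formula (\ref{XRef-Equation-22814938}), taking $G_{0 0}=1$ and treating $i\beta ^{*}$ and $i\beta $ as two independent complex numbers $a$ and $b$ (for a complex $\gamma ( t) $ they are indeed independent):

```python
from math import factorial

a = 0.3 + 0.5j   # plays i beta*
b = -0.2 + 0.4j  # plays i beta

def G_formula(m, n):
    # the sum over r of the text, with G_00 = 1; r runs from 0 to min(m, n)
    total = 0j
    for r in range(min(m, n) + 1):
        total += (factorial(m) // (factorial(m - r) * factorial(r))) \
               * (factorial(n) // (factorial(n - r) * factorial(r))) \
               * factorial(r) * a ** (m - r) * b ** (n - r)
    return total / (factorial(m) * factorial(n)) ** 0.5

def G_series(m, n):
    # coefficient of X^m Y^n in exp(X Y + a X + b Y), times (m! n!)^(1/2):
    # exp(aX) exp(bY) exp(XY) = sum over p,q,k of a^p b^q X^(p+k) Y^(q+k)/(p! q! k!)
    coeff = 0j
    for k in range(min(m, n) + 1):
        coeff += a ** (m - k) * b ** (n - k) / (
            factorial(m - k) * factorial(n - k) * factorial(k))
    return coeff * (factorial(m) * factorial(n)) ** 0.5

for m in range(6):
    for n in range(6):
        assert abs(G_formula(m, n) - G_series(m, n)) < 1e-12
print("generating function reproduces G_mn")
```

In particular the code reproduces $G_{0 1}=i\beta $ and $G_{1 0}=i\beta ^{*}$, the one-photon amplitudes discussed next.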
Formula (\ref{XRef-Equation-22814938}) is readily understandable.
Consider first a simple case of absorption of one photon. Initially
we have one photon and finally none. The amplitude for this is the
transition element of $G_{0 1}=i\beta G_{0 0}$ or $\langle \chi
_{t^{{\prime\prime}}}|i\beta G_{0 0}|\psi _{t^{\prime }}\rangle
$. This is the same as would result if we asked for the transition
element for a problem in which all photons are virtual but there
was present a perturbing potential $-(2\omega )^{-\frac{1}{2}}\gamma
( t) \exp ( -i\omega t) $ and we required the first-order effect
of this potential. Hence photon absorption is like the first order
action of a potential varying in time as $\gamma ( t) \exp ( -i\omega
t) $, that is, with a positive frequency (i.e., the sign of the coefficient
of $t$ in the exponential corresponds to positive energy). The amplitude
for emission of one photon involves $G_{1 0}=i\beta ^{*}G_{0 0}$,
which is the same result except that the potential has negative
frequency. Thus we begin by interpreting $i\beta ^{*}$ as the amplitude
for emission of one photon, and $i\beta $ as the amplitude for absorption
of one.
Next for the general case of $n$ photons initially and finally we
may understand (\ref{XRef-Equation-22814938}) as follows. We first
neglect Bose statistics and imagine the photons as individual distinct
particles. If we start with $n$ and end with $m$ this process may
occur in several different ways. The particle may absorb in total
$n-r$ of the photons and the final $m$ photons will represent $r$
of the photons which were present originally plus $m-r$ new photons
emitted by the particle. In this case the $n-r$ which are to be
absorbed may be chosen from among the original $n$ in $n!/(n-r)!r!$
different ways, and each contributes a factor $i\beta $, the amplitude
for absorption of a photon. Which of the $m-r$ photons from among
the $m$ are emitted can be chosen in $m!/(m-r)!r!$ different ways
and each photon contributes a factor $i \beta ^{*}$ in amplitude.
The initial $r$ photons which do not interact with the particle
can be re-arranged among the final $r$ in $r!$ ways. We must sum
over the alternatives corresponding to different values of $r$.
Thus the form of $G_{m n}$ can be understood. The remaining factor
$(m!)^{-\frac{1}{2}}(n!)^{-\frac{1}{2}}$ may be interpreted as saying
that in computing probabilities (which therefore involves the square
of $G_{m n}$) the photons may be considered as independent but that
if $m$ are actually equal the statistical weight of each of the
states which can be made by rearranging the $m$ equal photons is only
$1/m!$. This is the content of Bose statistics: that $m$ equal particles
in a given state represents just one state, i.e., has statistical
weight unity, rather than the $m!$ statistical weight which would
result if it is imagined that the particles and states can be identified
and rearranged in $m!$ different ways. This holds for both the initial
and final states of course. From this rule about the statistical
weights of states the derivation of the blackbody distribution law
follows.
The actual electromagnetic field is represented as a host of oscillators
each of which behaves independently and produces its own factor
such as $G_{m n}$. Initial or final states may also be linear combinations
of states in which one or another oscillator is excited. The results
for this case are of course the corresponding linear combination
of transition elements.
For photons of a given direction of polarization and for sine or
cosine waves the explicit expression for $\beta$ can be obtained directly
from (\ref{XRef-Equation-22814107}) by substituting the formulas
(\ref{XRef-Equation-228141022}) for the $\gamma$'s for the corresponding
oscillator. It is more convenient to use the linear combination
corresponding to running waves. Thus we find the amplitude for absorption
of a photon of momentum {\bfseries K}, frequency $k=(K\cdot K)^{\frac{1}{2}}$,
polarized in direction {\bfseries e}, is given by including a factor
$i$ times
\begin{equation}
\beta _{K,e}=\left( 4 \pi \right) ^{\frac{1}{2}}\left( 2 k\right)
^{-\frac{1}{2}}\sum \limits_{n}e_{n}\int _{t^{\prime }}^{t^{{\prime\prime}}}\exp
( -ikt) \times \exp ( i K\cdot x_{n}( t) )\, e\cdot \dot{x}_{n}\left(
t\right) dt%
\label{XRef-Equation-228141048}
\end{equation}
in the transition element (\ref{XRef-Equation-227163942}). The density
of states in momentum space is now $(2\pi )^{-3}d^{3}K$. The amplitude
for emission is just $i$ times the complex conjugate of this expression,
or what amounts to the same thing, the same expression with the
sign of the four-vector $k_{\mu }$ reversed. Since the factor (\ref{XRef-Equation-228141048})
is exactly the first-order effect of a vector potential
{\rmfamily \[
A^{P H}= \left( 2 \pi /k\right) ^{\frac{1}{2}}e \exp \left( -i \left(
kt-K\cdot x\right) \right)
\]}
of the corresponding classical wave, we have derived the rules for
handling real photons discussed in \textbf{II}.
We can express this directly in terms of the quantity $T_{e^{2}}[
B] $, the amplitude for a given transition without emission of a
photon. What we have said is that the amplitude for absorption of
just one photon whose classical wave form is $A_{\mu }^{P H}( 1)
$ (time variation $\exp ( -ikt_{1}) $ corresponding to positive
energy $k$) is proportional to the first order (in $\epsilon$) change
produced in $T_{e^{2}}[ B] $ on changing $B$ to $B+\epsilon A^{P
H}$. That is, more exactly,
{\rmfamily \begin{equation}
\int \left( \delta T_{e^{2}}[ B] /\delta B_{\mu }( 1) \right) A_{\mu
}^{P H}( 1) d\tau _{1}%
\label{XRef-Equation-22814229}
\end{equation}}
is the amplitude for absorption by the particle system of one photon,
$A^{P H}$. (A superposition argument shows the expression to be
valid not only for plane waves, but for spherical waves, etc., as
given by the form of $A^{P H}$.) The amplitude for emission is the
same expression but with the sign of the frequency reversed in $A^{P
H}$. The amplitude that the system absorbs two photons with waves
$A_{\mu }^{P H_{1}}$ and $A_{\nu }^{P H_{2}}$ is obtained from the
next derivative,
{\rmfamily \[
\int \int \left( \delta ^{2}T_{e^{2}}[ B] /\delta B_{\mu }\left(
1\right) \delta B_{\nu }\left( 2\right) \right) A_{\mu }^{P H_{1}}\left(
1\right) A_{\nu }^{P H_{2}}\left( 2\right) d\tau _{1}d\tau _{2}
\]}
the same expression holding for the absorption of one and emission
of the other, or emission of both depending on the sign of the time
dependence of $A^{P H_{1}}$ and $A^{P H_{2}}$. Larger photon numbers
correspond to higher derivatives, absorption of $l_{1}$ and emission
of $l_{2}$ requiring the $(l_{1}+l_{2})$th derivative. When two or
more of the photons are exactly the same (e.g., $ A^{P H_{1}} =
A^{P H_{2}}$) the same expression holds for the amplitude that $l_{1}$
are absorbed by the system while $l_{2}$ are emitted. However, the
statement that initially $n$ of a kind are present and $m$ of this
kind are present finally, does not imply $l_{1}=n$ and $l_{2}= m$.
It is possible that only $n-r= l_{1}$ were absorbed by the system
and $m-r=l_{2}$ emitted, and that $r$ remained from initial to final
state without interaction. This term is weighted by the combinatorial
coefficient $(m!n!)^{-\frac{1}{2}}\binom{m}{r}\binom{n}{r}r!$
and summed over the possibilities for $r$ as explained
in connection with (\ref{XRef-Equation-22814938}). Thus once the
amplitude for virtual processes is known, that for real photon processes
can be obtained by differentiation.
It is possible, of course, to deal with situations in which the
electromagnetic field is not in a definite state after the interaction.
For example, we might ask for the total probability of a given process,
such as scattering, without regard for the number of photons emitted.
This is done of course by squaring the amplitude for the emission
of $m$ photons of a given kind and summing on all $m$. Actually
the sums and integrations over the oscillator momenta can usually
be performed analytically without difficulty. For example, the amplitude, starting
from vacuum and ending with $m$ photons of a given kind, is by (\ref{XRef-Equation-22814120})
just
\begin{equation}
G_{m 0}=\left( m!\right) ^{-\frac{1}{2}}G_{0 0}( i\beta ^{*}) ^{m}.%
\label{XRef-Equation-228142023}
\end{equation}
The square of the amplitude requires the product of
two such expressions (the $\gamma ( t) $ in the $\beta$ of one and
in the other will have to be kept separate), summed on $m$:
\[
\sum _{m}G_{m 0}^{*}G_{m 0}^{\prime }=\sum _{m}G_{0 0}^{*}G_{0 0}^{\prime
}( m!) ^{-1}\beta ^{m}( \beta ^{\prime *}) ^{m}=G_{0 0}^{*}G_{0
0}^{\prime }\exp ( \beta \beta ^{\prime *})
\]
In the resulting expression the sum over all oscillators is easily
done. Such expressions can be of use in the analysis in a direct
manner of problems of line width, of the Bloch-Nordsieck infra-red
problem, and of statistical mechanical problems, but no such applications
will be made here.
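The last step in the sum over photon number above is just the exponential series; a short numerical confirmation (an illustration only, with $G_{0 0}=G_{0 0}^{\prime }=1$ and $\beta $, $\beta ^{\prime *}$ taken as arbitrary complex numbers):

```python
import cmath
from math import factorial

beta = 0.6 - 0.3j          # the beta of one amplitude
beta_p_star = -0.1 + 0.8j  # the beta'* of the other (independent in general)

# truncated sum over photon number m versus the closed form exp(beta beta'*)
partial = sum((beta * beta_p_star) ** m / factorial(m) for m in range(40))
assert abs(partial - cmath.exp(beta * beta_p_star)) < 1e-12
print("sum over m gives exp(beta * beta'*)")
```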
The author appreciates his opportunities to discuss these matters
with Professor H. A. Bethe and Professor J. Ashkin, and the help
of Mr. M. Baranger with the manuscript.
\appendix
\section{The Klein-Gordon equation}\label{XRef-AppendixSection-227155027}
\noindent In this Appendix we describe a formulation of the equations
for a particle of spin zero which was first used to obtain the rules
given in \textbf{II} for such particles. The complete physical significance
of the equations has not been analyzed thoroughly so that it may
be preferable to derive the rules directly from the second quantization
formulation of Pauli and Weisskopf. This can be done in a manner
analogous to the derivation of the rules for the Dirac equation
given in \textbf{I} or from the Schwinger-Tomonaga formulation [2]
in a manner described, for example, by Rohrlich.\footnote{F. Rohrlich
(to be published). } The formulation given here is therefore not
necessary for a description of spin zero particles but is given
only for its own interest as an alternative to the formulation of
second quantization.
We start with the Klein-Gordon equation
\begin{equation}
\left( i\partial \left/ \partial x_{\mu }\right. -A_{\mu }\right)
^{2}\psi =m^{2}\psi %
\label{XRef-Equation-228141222}
\end{equation}
for the wave function $\psi$ of a particle of mass $m$ in a given
external potential $A_{\mu }$. We shall try to represent this in
a manner analogous to the formulation of quantum mechanics in \textbf{C}.
That is, we try to represent the amplitude for a particle to get
from one point to another as a sum over all trajectories of an amplitude
$\exp ( iS) $ where $S$ is the classical action for a given trajectory.
To maintain the relativistic invariance in evidence the idea suggests
itself of describing a trajectory in space-time by giving the four
variables $x_{\mu }( u) $ as functions of some fifth parameter $u$
(rather than expressing $x_{1}$, $x_{2}$, $x_{3}$ in terms of $x_{4}$).
As we expect to represent paths which may reverse themselves in
time (to represent pair production, etc., as in \textbf{I}) this
is certainly a more convenient representation, for all four functions
$x_{\mu }( u) $ may be considered as functions of a parameter $u$
(somewhat analogous to proper time) which increases as we go along
the trajectory, whether the trajectory is proceeding forward $(d
x_{4}/d u>0)$ or backward $(d x_{4}/d u<0)$ in time.\footnote{The
physical ideas involved in such a description are discussed in detail
by Y. Nambu, Prog. Theor. Phys. \textbf{5}, 82 (1950). An equation
of type (A2) extended to the case of Dirac electrons has been studied
by V. Fock, Physik Zeits. Sowjetunion \textbf{12}, 404 (1937).}
We shall then have a new type of wave function $\varphi ( x,u) $,
a function of five variables, $x$ standing for the four $x_{\mu
}$. It gives the amplitude for arrival at point $x_{\mu }$ with
a certain value of the parameter $u$. We shall suppose that this
wave function satisfies the equation
\begin{equation}
i\partial \varphi /\partial u=-\frac{1}{2}\left( i\partial \left/
\partial x_{\mu }\right. -A_{\mu }\right) ^{2}\varphi %
\label{XRef-Equation-228141319}
\end{equation}
which is seen to be analogous to the time-dependent Schr\"odinger
equation, $u$ replacing the time and the four coordinates of space-time
$x_{\mu }$ replacing the usual three coordinates of space.
Since the potentials $A_{\mu }( x) $ are functions only of coordinates
$x_{\mu }$, and are independent of $u$, the equation is separable
in $u$ and we can write a special solution in the form $\varphi
=\exp ( \frac{1}{2}im^{2}u) \psi ( x) $ where $\psi ( x) $, a function
of the coordinates $x_{\mu }$ only, satisfies (\ref{XRef-Equation-228141222})
and the eigenvalue $\frac{1}{2}m^{2}$ conjugate to $u$ is related
to the mass $m$ of the particle. Equation (\ref{XRef-Equation-228141319})
is therefore equivalent to the Klein-Gordon Eq. (\ref{XRef-Equation-228141222})
provided we ask in the end only for the solution of (\ref{XRef-Equation-228141222})
corresponding to the eigenvalue $\frac{1}{2}m^{2}$ for the quantity
conjugate to $u$.
We may now proceed to represent Eq. (\ref{XRef-Equation-228141319})
in Lagrangian form in general and without regard to this eigenvalue
condition. Only in the final solutions need we apply the eigenvalue
condition. That is, if we have some special solution $\varphi (
x,u) $ of (\ref{XRef-Equation-228141319}) we can select that part
corresponding to the eigenvalue $\frac{1}{2}m^{2}$ by calculating
\[
\psi ( x) =\int _{-\infty }^{\infty }\exp ( -\frac{1}{2}im^{2}u)
\varphi ( x,u) du
\]
and thereby obtain a solution $\psi$ of Eq. (\ref{XRef-Equation-228141222}).
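The selection can be made explicit (a step not written out above): if $\varphi $ is a superposition of separable solutions, $\varphi ( x,u) =\sum _{m^{\prime }}\exp ( \frac{1}{2}im^{\prime 2}u) \psi _{m^{\prime }}( x) $, then
\[
\int _{-\infty }^{\infty }\exp ( -\tfrac{1}{2}im^{2}u) \exp (
\tfrac{1}{2}im^{\prime 2}u) du=2\pi \delta ( \tfrac{1}{2}( m^{\prime
2}-m^{2}) ) ,
\]
so that, up to normalization, only the component with $m^{\prime 2}=m^{2}$ survives the integration.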
Since (\ref{XRef-Equation-228141319}) is so closely analogous to
the Schr\"odinger equation, it is easily written in the Lagrangian
form described in \textbf{C}, simply by working by analogy. For
example if $\varphi ( x,u) $ is known at one value of $u$ its value
at a slightly larger $u+\epsilon $ is given by
\begin{equation}
\varphi ( x,u+\epsilon ) =\int \exp \left( i\epsilon \left[ -\frac{\left(
x_{\mu }-x_{\mu }^{\prime }\right) ^{2}}{2\epsilon ^{2}}-\frac{1}{2}\left(
\frac{x_{\mu }-x_{\mu }^{\prime }}{\epsilon }\right)
\left( A_{\mu }( x) +A_{\mu }( x^{\prime }) \right) \right] \right) \cdot
\varphi ( x^{\prime },u)\, d^{4}\tau _{x^{\prime }}\left( 2\pi i\epsilon
\right) ^{-\frac{3}{2}}\left( -2\pi i\epsilon \right) ^{-\frac{1}{2}}%
\label{XRef-Equation-228141346}
\end{equation}
where $(x_{\mu }-x_{\mu }^{\prime })^{2}$ means $(x_{\mu }-x_{\mu
}^{\prime })$$(x_{\mu }-x_{\mu }^{\prime })$, $d^{4}\tau _{x^{\prime
}}=dx_{1}^{\prime }dx_{2}^{\prime }dx_{3}^{\prime }dx_{4}^{\prime
}$ and the sign of the normalizing factor is changed for the $x_{4}$
component since the component has the reversed sign in its quadratic
coefficient in the exponential, in accordance with our summation
convention $a_{\mu }b_{\mu }=a_{4}b_{4}-a_{1}b_{1}-a_{2}b_{2}-a_{3}b_{3}$.
Equation (\ref{XRef-Equation-228141346}), as can be verified readily
as described in \textbf{C}, Sec. 6, is equivalent to first order
in $\epsilon$ to Eq. (\ref{XRef-Equation-228141319}). Hence, by
repeated use of this equation the wave function at $u_{0}=n\epsilon
$ can be represented in terms of that at $u=0$ by:
\begin{multline}
\varphi ( x_{\nu ,n},u_{0}) =\int \exp -\frac{i\epsilon }{2}\sum
\limits_{i=1}^{n}\left[ \left( \frac{x_{\mu ,i}-x_{\mu ,i-1}}{\epsilon
}\right) ^{2}+\epsilon ^{-1}\left( x_{\mu ,i}-x_{\mu ,i-1}\right)
\left( A_{\mu }\left( x_{i}\right) +A_{\mu }\left( x_{i-1}\right)
\right) \right] \\
\cdot \varphi ( x_{\nu ,0},0) \prod \limits_{i=0}^{n-1}\left( d^{4}\tau
_{i}/4\pi ^{2}\epsilon ^{2}i\right) .%
\label{XRef-Equation-228141438}
\end{multline}
That is, roughly, the amplitude for getting from one point to another
with a given value of $u_{0}$ is the sum over all trajectories of
$\exp ( iS) $ where
\[
S=-\int \limits_{0}^{u_{0}}\left[ \frac{1}{2}\left(
dx_{\mu }/du\right) ^{2}+\left( dx_{\mu }/du\right) A_{\mu }( x)
\right] du,
\]
when sufficient care is taken to define the quantities, as in \textbf{C}.
This completes the formulation for particles in a fixed potential
but a few words of description may be in order.
In the first place in the special case of a free particle we can
define a kernel $k^{(0)}( x,u_{0};x^{\prime },0) $ for arrival from
$x_{\mu }^{\prime },0$ to $x_{\mu }$ at $u_{0}$ as the sum over
all trajectories between these points of $\exp -i\int _{0}^{u_{0}}\frac{1}{2}(d
x_{\mu }/d u)^{2}d u$. Then for this case we have
\[
\varphi ( x,u_{0}) =\int k^{\left( 0\right) }( x,u_{0};x^{\prime
},0) \varphi ( x^{\prime },0) d^{4}\tau _{x^{\prime }}
\]
and it is easily verified that $k^{(0)}$ is given by
\[
k^{\left( 0\right) }( x,u_{0};x^{\prime },0) =\left( 4\pi ^{2}u_{0}^{2}i\right)
^{-1}\exp -i( x_{\mu }-x_{\mu }^{\prime }) ^{2}/2u_{0}
\]
for $u_{0}>0$ and by $0$, by definition, for $u_{0}<0$. The corresponding
kernel of importance when we select the eigenvalue $\frac{1}{2}m^{2}$
is\footnote{The factor $2i$ in front of $I_{+}$ is simply to make
the definition of $I_{+}$ here agree with that in \textbf{I} and
\textbf{II}. In \textbf{II} it operates with $p\cdot A+A\cdot p$
as a perturbation. But the perturbation coming from (A3) in a natural
way by expansion of the exponential is $-\frac{1}{2}i( p\cdot A+A\cdot
p) $.}
\begin{equation}
\begin{array}{cc}
2iI_{+}( x,x^{\prime }) & =\int _{-\infty }^{\infty }k^{\left(
0\right) }( x,u_{0};x^{\prime },0) \exp ( -\frac{1}{2}im^{2}u_{0})
du_{0} \\
 & =\int _{0}^{\infty }du_{0}( 4 \pi ^{2}u_{0}^{2}i) ^{-1}\exp -\frac{1}{2}i(
m^{2}u_{0}+u_{0}^{-1}( x_{\mu }-x_{\mu }^{\prime }) ^{2})
\end{array}%
\label{XRef-Equation-228141418}
\end{equation}
(the last extends only from $u_{0}=0$ since $k^{(0)}$ is zero for
negative $u_{0}$) which is identical to the $I_{+}$ defined in \textbf{II}.\footnote{Expression
(A8) is closely related to Schwinger's parametric integral representation
of these functions. For example, (A8) becomes formula (45) of F.
Dyson, Phys. Rev. \textbf{75}, 486 (1949) for $\Delta _{F}\equiv
\Delta ^{(1)}-2i\overset{\_}{\Delta }\equiv 2iI_{+}$ if $(2\alpha
)^{-1}$ is substituted for $u_{0}$.} This may be seen readily by
studying the Fourier transform, for the transform of the integrand
on the right-hand side is
{\rmfamily \[
\int \left( 4 \pi ^{2}u_{0}^{2}i\right) ^{-1}\exp ( ip\cdot x)
\exp -\frac{1}{2}i( m^{2}u_{0}+x_{\mu }^{2}/u_{0}) d^{4}\tau _{x}=
\exp -\frac{1}{2}i u_{0}( m^{2}-p_{\mu }^{2})
\]}
so that the $u_{0}$ integration gives for the transform of $I_{+}$
just $1/(p_{\mu }^{2}-m^{2})$ with the pole defined exactly as in
\textbf{II}. [21] Thus we are automatically representing the positrons
as trajectories with the time sense reversed.
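It may be worth verifying directly that the free kernel has the property claimed for it. The short numerical check below (not part of the original text) confirms, by central differences at a sample point, that $k^{( 0) }=( 4\pi ^{2}u^{2}i) ^{-1}\exp ( -i( x_{\mu }-x_{\mu }^{\prime }) ^{2}/2u) $ satisfies the free case $i\partial \varphi /\partial u=\frac{1}{2}\partial _{\mu }\partial _{\mu }\varphi $ of the five-parameter wave equation, with the summation convention $a_{\mu }b_{\mu }=a_{4}b_{4}-a_{1}b_{1}-a_{2}b_{2}-a_{3}b_{3}$:

```python
import cmath

PI = 3.141592653589793

def k0(x, u):
    # free kernel (4 pi^2 u^2 i)^(-1) exp(-i s^2/2u), s^2 = x4^2 - x1^2 - x2^2 - x3^2
    s2 = x[3] ** 2 - x[0] ** 2 - x[1] ** 2 - x[2] ** 2
    return cmath.exp(-0.5j * s2 / u) / (4 * PI ** 2 * u ** 2 * 1j)

def pde_residual(x, u, h=1e-3):
    # residual of i dk/du = (1/2)(d^2/dx4^2 - d^2/dx1^2 - d^2/dx2^2 - d^2/dx3^2) k
    lhs = 1j * (k0(x, u + h) - k0(x, u - h)) / (2 * h)
    box = 0j
    for mu in range(4):
        sign = 1.0 if mu == 3 else -1.0   # + for the x4 term, - for space components
        xp = list(x); xp[mu] += h
        xm = list(x); xm[mu] -= h
        box += sign * (k0(xp, u) - 2 * k0(x, u) + k0(xm, u)) / h ** 2
    return abs(lhs - 0.5 * box)

res = pde_residual([0.3, -0.1, 0.2, 0.15], 1.0)  # x = (x1, x2, x3, x4), u = 1
assert res < 1e-4
print("free kernel satisfies the u-equation; residual", res)
```

The sample point and step are arbitrary (away from $u=0$); the residual vanishes as $h^{2}$.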
If $\Phi ^{(0)}[ x( u) ] =\exp -i\int _{0}^{u_{0}}\frac{1}{2}(dx_{\mu
}/du)^{2}du$ is the amplitude for a given trajectory $x_{\nu }(
u) $ for a free particle, then the amplitude in a potential is
\begin{equation}
\Phi ^{\left( A\right) }[ x( u) ] =\Phi ^{\left( 0\right) }[
x( u) ] \exp -i\int _{0}^{u_{0}}\left( dx_{\mu }/du\right) A_{\mu
}( x) du.%
\label{XRef-Equation-22814145}
\end{equation}
If desired this may be studied by perturbation methods by expanding
the exponential in powers of $A_{\mu }$.
For interpretation, the integral in (\ref{XRef-Equation-22814145})
must be written as a Riemann sum, and if a perturbation expansion
is made, care must be taken with the terms quadratic in the velocity,
for the effect of $(x_{\mu ,i+1}-x_{\mu ,i})(x_{\nu ,i+1}-x_{\nu
,i})$ is not of order $\epsilon ^{2}$ but is $i\delta _{\mu \nu
}\epsilon $. The ``velocity'' $dx_{\mu }/du$ becomes the momentum
operator $p_{\mu }=+i\partial /\partial x_{\mu }$ operating half
before and half after $A_{\mu }$, just as in the non-relativistic
Schr\"odinger equation discussed in Sec. 5. Furthermore, in exactly
the same manner as in that case, but here in four dimensions, a
term quadratic in $A_{\mu }$ arises in the second-order perturbation
terms from the coincidence of two velocities for the same value
of $u$.
As an example, the kernel $k^{(A)}( x,u_{0};x^{\prime },0) $ for
proceeding from $x_{\mu }^{\prime },0$ to $x_{\mu },u_{0}$ in a
potential $A_{\mu }$ differs from $k^{(0)}$ to first order in $A_{\mu
}$ by a term
\[
-i\int _{0}^{u_{0}}du k^{\left( 0\right) }( x,u_{0};y,u) \frac{1}{2}\left(
p_{\mu }A_{\mu }( y) +A_{\mu }( y) p_{\mu }\right) k^{\left( 0\right)
}( y,u_{0};x^{\prime },0) d\tau _{y}
\]
the $p_{\mu }$ here meaning $+i\partial /\partial y_{\mu }$. The
kernel of importance on selecting the eigenvalue $\frac{1}{2}m^{2}$
is obtained by multiplying this by $\exp ( -\frac{1}{2}im^{2}u_{0})
$ and integrating $u_{0}$ from 0 to $\infty $. The kernel $k^{(0)}(
x,u_{0};y,u) $ depends only on $u^{\prime }=u_{0}-u$, and the
integrals on $u$ and $u_{0}$, $\int _{0}^{\infty }du_{0}\int _{0}^{\infty
}du \exp ( -\frac{1}{2}i m^{2}u_{0}) \ldots $, can be written, on
interchanging the order of integration and changing variables to
$u$ and $u^{\prime }$, $\int _{0}^{\infty }du\int _{0}^{\infty }du^{\prime
}\exp ( -\frac{1}{2}im^{2}( u+u^{\prime }) ) \ldots $. Now the integral
on $u^{\prime }$ converts $k^{(0)}( x,u_{0};y,u) $ to $2iI_{+}(
x,y) $ by (\ref{XRef-Equation-228141418}), while that on $u$ converts
$k^{(0)}( y,u;x^{\prime },0) $ to $2iI_{+}( y,x^{\prime }) $, so
the result becomes
\[
\int 2 i I_{+}( x,y) \left( p_{\mu }A_{\mu }+A_{\mu }p_{\mu }\right)
I_{+}( y,x^{\prime }) d^{4}\tau _{y}
\]
as expected. The same principle works to any order so that the rules
for a single Klein-Gordon particle in external potentials given
in \textbf{II}, Section 9, are deduced.
The transition to quantum electrodynamics is simple, for in (\ref{XRef-Equation-228141438})
we already have a transition amplitude represented as a sum (over
trajectories, and eventually $u_{0}$) of terms, in each of which
the potential appears in exponential form. We may make use of the
general relation (\ref{XRef-Equation-22814815}). Hence, for example,
one finds for the case of no photons in the initial and final states,
in the presence of an external potential $B_{\mu }$, the amplitude
that a particle proceeds from $(x_{\mu }^{\prime },0)$ to $(x_{\mu
},u_{0})$ is the sum over all trajectories of the quantity
\begin{multline*}
\exp -i\left[ \frac{1}{2}\int _{0}^{u_{0}}\left( \frac{dx_{\mu }}{du}\right)
^{2}du+\int _{0}^{u_{0}}\frac{dx_{\mu }}{du}B_{\mu }( x( u) ) du+\frac{e^{2}}{2}\int
_{0}^{u_{0}}\int _{0}^{u_{0}}\frac{dx_{\mu }( u) }{du}
\frac{dx_{\mu }( u^{\prime }) }{du^{\prime }}\right. \\
\left. \times \delta _{+}( \left( x_{\mu }( u) -x_{\mu }( u^{\prime
}) \right) ^{2}) dudu^{\prime }\right] .
\end{multline*}
This result must be multiplied by $\exp ( -\frac{1}{2}im^{2}u_{0})
$ and integrated on $u_{0}$ from zero to infinity to express the
action of a Klein-Gordon particle acting on itself through virtual
photons. The integrals are interpreted as Riemann sums, and if perturbation
expansions are made, the necessary care is taken with the terms
quadratic in velocity. When there are several particles (other than
the virtual pairs already included) one uses a separate $u$ for each,
and writes the amplitude for each set of trajectories as the exponential
of $-i$ times
\begin{multline*}
\frac{1}{2}\sum \limits_{n}\int _{0}^{u_{0}^{\left( n\right) }}\left(
\frac{dx_{\mu }^{\left( n\right) }}{du}\right) ^{2}du+\sum \limits_{n}\int
_{0}^{u_{0}^{\left( n\right) }}\frac{dx_{\mu }^{\left( n\right)
}}{du}B_{\mu }( x^{\left( n\right) }( u) ) du\\
+\frac{e^{2}}{2}\sum \limits_{n}\sum \limits_{m}\int _{0}^{u_{0}^{\left(
n\right) }}\int _{0}^{u_{0}^{\left( m\right) }}\frac{dx_{\nu }^{\left(
n\right) }( u) }{du} \frac{dx_{\nu }^{\left( m\right) }( u^{\prime
}) }{du^{\prime }}\\
\times \delta _{+}( \left( x_{\mu }^{\left( n\right) }( u) -x_{\mu
}^{\left( m\right) }( u^{\prime }) \right) ^{2}) dudu^{\prime }
\end{multline*}
where $x_{\mu }^{(n)}( u) $ are the coordinates of the trajectory
of the \textit{n}th particle. The solution should depend on the $u_{0}^{(n)}$ as $\exp ( -\frac{1}{2}i
m^{2}\sum \limits_{n}u_{0}^{(n)}) $.
Actually, knowledge of the motion of a single charge implies a great
deal about the behavior of several charges. For a pair which eventually
may turn out to be a virtual pair may appear in the short run as
two ``other particles.'' As a virtual pair, that is, as the reverse
section of a very long and complicated single track, we know its
behavior by (\ref{XRef-Equation-228141728}). We can assume that
such a section can be looked at equally well, for a limited duration
at least, as being due to other unconnected particles. This then
implies a definite law of interaction of particles if the self-action
(\ref{XRef-Equation-228141728}) of a single particle is known. (This
is similar to the relation of real and virtual photon processes
discussed in detail in Appendix B.) It is possible that a detailed
analysis of this could show that (\ref{XRef-Equation-228141728})
implied that (\ref{XRef-Equation-228141750}) was correct for many
particles. There is even reason to believe that the law of Bose-Einstein
statistics and the expression for contributions from closed loops
could be deduced by following this argument. This has not yet been
analyzed completely, however, so we must leave this formulation
in an incomplete form. The expression for closed loops should come
out to be $C_{v}=\exp +L$ where $L$, the contribution from a
single loop, is
{\rmfamily \[
L=2\int _{0}^{\infty }l\left( u_{0}\right) \exp \left( -\frac{1}{2}i
m^{2}u_{0}\right) du_{0}/u_{0}
\]}
where $l( u_{0}) $ is the sum over all trajectories which close
on themselves $(x_{\mu }( u_{0}) =x_{\mu }( 0) )$ of $\exp ( i S)
$ with $S$ given in (\ref{XRef-Equation-228141438}), and a final
integration $d\tau _{x( 0) }$ on $x_{\mu }( 0) $ is made. This is
equivalent to putting
{\rmfamily \[
l\left( u_{0}\right) =\int \left( k^{\left( A\right) }\left( x,u_{0};x,0\right)
-k^{\left( 0\right) }\left( x,u_{0};x,0\right) \right) d\tau _{x}.
\]}
The term $k^{(0)}$ is subtracted only to simplify convergence problems
(as adding a constant independent of $A_{\mu }$ to $L$ has no effect).
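Expanding the exponential $\exp +L$ (written here for orientation) exhibits the contributions of several independent loops,
\[
\exp L=1+L+\frac{L^{2}}{2!}+\frac{L^{3}}{3!}+\cdots ,
\]
the term $L^{n}/n!$ representing the amplitude for $n$ independent closed loops, the $n!$ compensating for interchanges of identical loops.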
\section{The relation of real and virtual processes}\label{XRef-AppendixSection-227155120}
\noindent If one has a general formula for all virtual processes
he should be able to find the formulas and states involved in real
processes. That is to say, we should be able to deduce the formulas
of Section 9 directly from the formulation (\ref{XRef-Equation-227163911}),
(\ref{XRef-Equation-227163942}) (or its generalized equivalent such
as (\ref{XRef-Equation-2281479}), (\ref{XRef-Equation-22814535}))
without having to go all the way back to the more usual formulation.
We discuss this problem here.
That this possibility exists can be seen from the consideration
that what looks like a real process from one point of view may appear
as a virtual process occurring over a more extended time.
For example, if we wish to study a given real process, such as the
scattering of light, we can, if we wish, include in principle the
source, scatterer, and eventual absorber of the scattered light
in our analysis. We may imagine that no photon is present initially,
and that the source then emits light (the energy coming say from
kinetic energy in the source). The light is then scattered and eventually
absorbed (becoming kinetic energy in the absorber). From this point
of view the process is virtual; that is, we start with no photons
and end with none. Thus we can analyze the process by means of our
formula for virtual processes, and obtain the formulas for real
processes by attempting to break the analysis into parts corresponding
to emission, scattering, and absorption.\footnote{The formulas for
real processes deduced in this way are strictly limited to the case
in which the light comes from sources which are originally dark,
and that eventually all light emitted is absorbed again. We can
only extend it to the case for which these restrictions do not hold
by hypothesis, namely, that the details of the scattering process
are independent of these characteristics of the light source and
of the eventual disposition of the scattered light. The argument
of the text gives a method for discovering formulas for real processes
when no more than the formula for virtual processes is at hand.
But with this method belief in the general validity of the resulting
formulas must rest on the physical reasonableness of the above-mentioned
hypothesis.}
To put the problem in a more general way, consider the amplitude
for some transition from a state empty of photons far in the past
(time $t^{\prime }$) to a similar one far in the future ($t=t^{{\prime\prime}}$).
Suppose the time interval to be split into three regions $a$, $b$,
$c$ in some convenient manner, so that region $b$ is an interval
$t_{2}>t>t_{1}$ around the present time that we wish to study. Region
$a$, ($t_{1}>t>t^{\prime }$), precedes $b$, and $c$, ($t^{{\prime\prime}}>t>t_{2}$),
follows $b$. We want to see how it comes about that the phenomena
during $b$ can be analyzed by a study of transitions $g_{j i}( b)
$ between some initial state $i$ at time $t_{1}$ (which no longer
need be photon-free) and some other final state $j$ at time $t_{2}$.
The states $i$ and $j$ are members of a large class which we will
have to find out how to specify. (The single index $i$ is used to
represent a large number of quantum numbers, so that different values
of $i$ will correspond to having various numbers of various kinds
of photons in the field, etc.) Our problem is to represent the over-all
transition amplitude, $g( a,b,c) $, as a sum over various values
of $i$, $j$ of a product of three amplitudes,
{\rmfamily \[
g( a,b,c) =\sum _{i}\sum _{j}g_{0 j}( c) g_{j i}( b) g_{i 0}( a)
;
\]}
first the amplitude that during the interval $a$ the vacuum state
makes transition to some state $i$, then the amplitude that during
$b$ the transition to $j$ is made, and finally in $c$ the amplitude
that the transition from $j$ to some photon-free state 0 is completed.
The mathematical problem of splitting $g( a,b,c) $ is made definite
by the further condition that $g_{j i}( b) $ for given $i$, $j$
must not involve the coordinates of the particles for times corresponding
to regions $a$ or $c$, $g_{i 0}( a) $ must involve those only in
region $a$, and $g_{0 j}( c) $ only in $c$.
To become acquainted with what is involved, suppose first that we
do not have a problem involving virtual photons, but just the transition
of a one-dimensional Schr\"odinger particle going in a long time
interval from, say, the origin $o$ to the origin $o$, and ask what
states $i$ we shall need for intermediary time intervals. We must
solve the problem (\ref{XRef-Equation-228141931}) where $g( a,b,c)
$ is the sum over all trajectories going from $o$ at $t^{\prime
}$ to $o$ at $t^{{\prime\prime}}$ of $\exp i S$ where $S=\int L
dt$. The integral may be split into three parts $S=S_{a}+S_{b}+S_{c}$
corresponding to the three ranges of time. Then $\exp ( i S) =\exp
( i S_{a}) \exp ( i S_{b}) \exp ( i S_{c}) $ and the separation
(\ref{XRef-Equation-228141931}) is accomplished by taking for $g_{i
0}( a) $ the sum over all trajectories lying in $a$ from $o$ to
some end point $x_{t_{1}}$ of $\exp ( i S_{a}) $, for $g_{j i}(
b) $ the sum over trajectories in $b$ of $\exp ( i S_{b}) $ between
end points $x_{t_{1}}$ and $x_{t_{2}}$ and for $g_{0 j}( c) $ the
sum of $\exp ( i S_{c}) $ over the section of the trajectory lying
in $c$ and going from $x_{t_{2}}$ to $o$. Then the sum on $i$ and
$j$ can be taken to be the integrals on $x_{t_{1}}$, $x_{t_{2}}$,
respectively. Hence the various states $i$ can be taken to correspond
to particles being at various coordinates $x$. (Of course any other
representation of the states in the sense of Dirac's transformation
theory could be used equally well. Which one, whether coordinate,
momentum, or energy level representation, is of course just a matter
of convenience and we cannot determine that simply from (\ref{XRef-Equation-228141931}).)
We can consider next the problem including virtual photons. That
is, $g( a,b,c) $ now contains an additional factor $\exp ( i R)
$ where $R$ involves a double integral $\int \int $ over all time.
Those parts of the index $i$ which correspond to the particle states
can be taken in the same way as though $R$ were absent. We study
now the extra complexities in the states produced by splitting the
$R$. Let us first (solely for simplicity of the argument) take the
case that there are only two regions $a$, $c$ separated by time
$t_{0}$, and try to expand
{\rmfamily \[
g\left( a,c\right) =\sum _{i}g_{0 i}\left( c\right) g_{i 0}\left(
a\right) .
\]}
The factor $\exp ( iR) $ involves $R$ as a double integral which
can be split into three parts $\int _{a}\int _{a}+\int _{c}\int
_{c}+\int _{a}\int _{c}$ for the first of which both $t$, $s$ are
in $a$, for the second both are in $c$, for the third one is in
$a$ the other in $c$. Writing $\exp ( iR) $ as $\exp ( iR_{c c})
\exp ( iR_{a a}) \exp ( iR_{a c}) $ shows that the factors $R_{c
c}$ and $R_{a a}$ produce no new problems for they can be taken
bodily into $g_{ 0 i}( c) $ and $g_{i 0}( a) $ respectively. However,
we must disentangle the variables which are mixed up in $\exp (
iR_{a c}) $.
The expression for $R_{a c}$ is just twice (\ref{XRef-Equation-227163911})
but with the integral on $s$ extending over the range $a$ and that
for $t$ extending over $c$. Thus $\exp ( iR_{a c}) $ contains the
variables for times in $a$ and in $c$ in a quite complicated mixture.
Our problem is to write $\exp ( iR_{a c}) $ as a sum over possibly
a vast class of states $i$ of the product of two parts, like $h_{i}^{\prime
}( c) h_{i}( a) $, each of which involves the coordinates in one
interval alone.
This separation may be made in many different ways, corresponding
to various possible representations of the state of the electromagnetic
field. We choose a particular one. First we can expand the exponential,
$\exp ( iR_{a c}) $, in a power series, as $\sum \limits_{n}i^{n}(
n!) ^{-1}(R_{a c})^{n}$. The states $i$ can therefore be subdivided
into subclasses corresponding to an integer $n$ which we can interpret
as the number of quanta in the field at time $t_{0}$. The amplitude
for the case $n=0$ clearly just involves $\exp ( iR_{a a}) $ and
$\exp ( iR_{c c}) $ in the way that it should if we interpret these
as the amplitudes for regions $a$ and $c$, respectively, of making
a transition between a state of zero photons and another state of
zero photons.
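The factorization used here may be summarized (as a sketch following directly from the splitting of $R$ above) as
\[
\exp ( iR) =\exp ( iR_{c c}) \exp ( iR_{a a}) \sum \limits_{n}\frac{i^{n}}{n!}\left( R_{a c}\right) ^{n},
\]
the term in $n$ supplying the amplitude that $n$ quanta are present in the field at the time $t_{0}$.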
Next consider the case $n=1$. This implies an additional factor
in the transition element, the factor $R_{a c}$. The variables
are still mixed up. But an easy way to perform the separation suggests
itself. Namely, expand the $\delta _{+}( (t-s)^{2}-(x_{n}( t) -x_{m}(
s) )^{2}) $ in $R_{a c}$, as a Fourier integral as
{\rmfamily \[
i\int \exp \left( -ik\left| t-s\right| \right) \exp \left( -iK\cdot
\left( x_{n}\left( t\right) -x_{m}\left( s\right) \right) \right)
d^{3}K/4\pi ^{2}k.
\]}
For the exponential can be written immediately as a product of $\exp
+i( K\cdot x_{m}( s) ) $, a function only of coordinates for times
$s$ in $a$ (suppose $s*