
## Entries for items 1-10 on the Home page

## (1) Gravitational time dilation

This is a derivation of the time-dilation effect of a gravitational field (more precisely, of the ratio of the time rates at two spatial points due to the gravitational potential difference between them, the potential being assumed not to vary with time in the coordinate system used). It is based on just special relativity (SR) and the Equivalence Principle (EP) of general relativity (GR), which says that being accelerated in flat space-time by an amount a is equivalent to being unaccelerated in a gravitational field g = -a oriented in the opposite direction to the acceleration. [This derivation works only if there is such a thing as a gravitational potential p defined on the space-time, which requires that the energy required to go from one point to another be independent of the path taken. This is the case for some GR space-times, but I don't know whether it is true for all possible time-independent GR space-times; I don't even know whether the answer to this is known.]

I got the basic idea and part of the derivation for this from an Internet article that I can no longer find and whose author I don't know. The basic idea is to calculate the ratio of the time rates (more on this later) at spatial points x1 and x2 in a time-invariant gravitational field which are a fixed distance d apart by using EP together with the SR equation, from the Lorentz Transformations, which gives the time rate in an inertial coordinate system I of a clock in uniform motion in I, on the basis of its speed in I and its time rate when stationary.

For the derivation we need the relativistic Doppler shift (red- or blue-shift) equation, which can be obtained from the SR relative time rates equation as follows: Suppose an E-M wave is oscillating at frequency f at point p fixed in inertial frame I2, which is moving at speed v in I1 along an axis oriented in the direction of motion of the wave. In time 1/f in I2 the wave goes through a complete cycle, so has wavelength c/f in I2. Thus in time γ/f [γ ≡ 1/√(1-(v^2)/(c^2))] in I1 the wave at moving point p in I1 goes through a complete cycle, while the wave advances (γ/f)c and p advances (γ/f)v in I1. Thus the wave in I1 has wavelength (γ/f)(c-v) = (c/f)γ(1-v/c) = (c/f)[1/√(1-v^2/c^2)](1-v/c) = (c/f)(1/√[(1-v/c)(1+v/c)])(1-v/c) = (c/f)√[(1-v/c)/(1+v/c)], so the Doppler shift factor (wavelength in I1)/(wavelength in I2) is √[(1-v/c)/(1+v/c)], and that for frequency is its reciprocal √[(1+v/c)/(1-v/c)].
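As a quick numerical sketch of the result just derived (my own check, not part of the original derivation), the two Doppler factors can be evaluated directly:

```python
import math

def doppler_factors(v, c=1.0):
    """Relativistic Doppler factors for the setup above (source frame I2
    moving at speed v in I1 along the wave's direction of motion).

    Returns (wavelength factor, frequency factor):
    wavelength in I1 / wavelength in I2 = sqrt((1 - v/c)/(1 + v/c)),
    and the frequency factor is its reciprocal sqrt((1 + v/c)/(1 - v/c)).
    """
    b = v / c
    wl = math.sqrt((1 - b) / (1 + b))
    return wl, 1 / wl

# At v = 0 there is no shift; at v = 0.6c the frequency factor is
# sqrt(1.6/0.4) = 2 and the wavelength factor is 1/2.
print(doppler_factors(0.0))
print(doppler_factors(0.6))
```

At v = 0.6c the frequency factor comes out to exactly 2, a standard textbook check of the formula.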

The relative time-rate factor R(p(x1,x2)) between two spatial points x1 & x2 with a constant gravitational potential difference p(x1,x2) = the energy per unit mass necessary to move an object of mass M from x2 to x1 against the gravitational force F = Mg(x), so p(x1,x2) = -∫g(x)dx taken along any path from x2 to x1, where g(x) is the gravitational acceleration wrt the space-time geodesics at x, can be obtained from the EP and the Doppler shift formula above, if by R(p(x1,x2)) we mean the ratio f2/f1 of the frequency f2 at x2 to the frequency at x1 of an E-M signal sent from x2 whose frequency at x1 is f1 when received there, both measured by local clocks. We assume that p(x1,x2) is a continuously differentiable function of d = x1 - x2, so for small d, p(x1,x2) = -dg(x1) = -dg(x2) to first order. To first order, the time for the E-M wave to go from x2 to x1 is t = d/c, where t is measured at some point along the path between x2 and x1. Assuming the EP, the change in the frequency of the E-M signal between x2 and x1 due to the gravitational field of acceleration g is the same as it would be if g were due to an actual acceleration a = -g of x1 and x2 wrt an inertial coordinate system instead of being due to the gravitational field. Assuming an actual acceleration a = -g of x1 and x2 in a mutual inertial frame I in which they are at rest just before acceleration, during the transit time t, x1 would have changed its speed by Δv = at = -dg/c, so x2 will have changed its speed wrt x1 by -Δv = -at = dg/c, so the shift factor of the signal frequency, according to the Doppler shift formula, would, to first order, be (remember R is f2/f1, not f1/f2) R = √[(1+Δv/c)/(1-Δv/c)] = √[(1-dg/c^2)/(1+dg/c^2)] = √[(1+p/c^2)/(1-p/c^2)], so to first order, as t, d, & p → 0, R → √(1+2p/(c^2)) → 1 + p/(c^2). So far, the derivation is roughly as it was in the online article, except for the derivation of the Doppler shift formula, which is mine, but of course has been done in various ways by many other people.
However, the article took R to be exactly 1 + p/(c^2), maybe unintentionally ignoring the second and higher order inaccuracies involved in taking it to be exactly that, which are due to the uncertainties of t and g above; they are small for small p, but can be large when p is large. If the gravitational potential between x1 and x2 is p, and that between x2 and x3 is also p, the gravitational potential between x1 and x3 is 2p, so if R = 1 + p/(c^2) exactly, R(x1,x3) = 1 + 2p/(c^2), but it also must = R(x1,x2)R(x2,x3) = (1 + p/(c^2))(1 + p/(c^2)) = 1 + 2p/(c^2) + (p^2)/(c^4), which differs from the first by (p^2)/(c^4), which is large for large p. The exact expression (assuming that, as in the situation described above, the relative time-rate factor R is indeed a function of the gravitational potential difference between x1 and x2) can be obtained by noting that R(p1 + p2) = R(p1)R(p2), from the additivity of potentials and the multiplicativity of time rates, and the only continuous function which satisfies this is the exponential function, so R(p) = e^(kp) = 1 + kp + O(p^2) for some k. The first order term in this is kp, so k = 1/(c^2), and R(p) = e^[p/(c^2)] = exp[p/(c^2)], which is the correct general relativistic formula. Note that R(0) = 1. Einstein reportedly knew of the existence of gravitational time dilation before 1915, when he completed his final version of GR [except for his final view of what the Cosmological Constant was (0, which may or may not turn out to be the correct value)], but I don't know whether he knew this exact formula before then, or if he did, how he derived it.
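A small numerical sketch (my own, with arbitrarily chosen hypothetical potential values) of why additivity forces the exponential form: the exact R(p) = exp(p/c^2) multiplies exactly under composition of potentials, while the first-order form 1 + p/c^2 picks up the (p^2)/(c^4) discrepancy discussed above.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def R(p):
    """Exact relative time-rate factor R(p) = exp(p/c^2), p in J/kg."""
    return math.exp(p / C**2)

def R_first_order(p):
    """The online article's first-order approximation 1 + p/c^2."""
    return 1 + p / C**2

# Hypothetical large potential differences (J/kg), chosen for illustration:
p1, p2 = 6.0e15, 9.0e15

# Multiplicativity holds exactly for the exponential form...
print(R(p1 + p2), R(p1) * R(p2))
# ...but fails at second order for the linear form:
print(R_first_order(p1 + p2), R_first_order(p1) * R_first_order(p2))
```

The difference in the last pair is p1*p2/c^4, exactly the second-order term the text identifies.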

Numerous Internet discussions of gravitational time dilation say that it is caused, at some spatial point x, by the gravitational field at x, with a stronger field at x causing a greater time dilation. That this is the cause is entirely false. It is true that in the gravitational field of just one massive body, such as the Earth, clocks run slower nearer to it than farther away, as is shown by the above formula, and the gravitational field also is stronger nearer the body than it is farther away, but the correspondence stronger field ↔ smaller time rate does not hold in general. One can easily imagine an arrangement of masses for which the gravitational field at x1 is smaller than at x2, but time at x1 nevertheless proceeds more slowly than at x2, and another arrangement in which the fields at x2 and x3 are the same, but the time rate at x2 relative to some third point x1 is very different from that at x3 relative to x1. The relative time rate between two points is determined by the total space-time, not by just the local space-time at the two points.

That R ≠ 1 represents a real difference in time rates (and R = 1 a real equality of time rates), rather than being caused, e.g., by part of the E-M cycles getting delayed or lost on the way from x2 to x1, when R > 1, or extra cycles being somehow generated, when R < 1, is at least strongly suggested by the following: have the sender at x2 mark the individual cycles differently in some way so that they can be identified, or send individually identifiable pulses as timing marks; then travel from x1 to x2 through the known gravitational field, identifying all the cycles or pulses you pass and the times at which you pass them as you go, and determine at x2 what pulses have been transmitted so far. If these accountings are what my formula predicts, assuming that R actually indicates the relative time rates, that agreement is good evidence that it really does indicate the relative rates (whatever the exact meaning of that phrase is).

According to the above, local conditions at x1 and x2 can be identical, yet the time rates at the two points be quite different. This is not the only thing in GR that is not determined locally, but it is nevertheless puzzling. What determines the time-rate at a point, if not local conditions? Is this supposed question meaningful? A partial answer to what determines the time rate at a point is, of course, given by the above derivation, but this, to me at least, doesn't seem to be an entirely satisfactory explanation.

## (2) My calculation of the GW150914 black hole combined masses

LIGO, the Laser Interferometer Gravitational Wave Observatory, made the first-ever direct detection of gravitational waves (GWs), which are ripples in the fabric of space-time, on September 14, 2015; this was the GW150914 GW event. Three of the principals won a Nobel prize for this, together with their part in the development of LIGO, in 2017. (GWs had previously been indirectly detected by their effect on the orbital period of orbiting neutron stars, one of which was a pulsar. This detection also won a Nobel prize.) GWs were predicted possibly to exist by Albert Einstein's General Theory of Relativity (GR), but Einstein didn't think they would ever be detected, due (I think) to the extremely small magnitude, as received on Earth, of any GW likely ever to be generated. However, advances in technology and our understanding of the universe made some people think GWs could be directly detected, which would be a valuable confirmation of GR and also a new way to observe the distant universe, and by heroic effort, they and others in the LIGO collaboration managed to do so.

The LIGO GW detectors (there are two of them, one in Washington state at Hanford, the other in Louisiana at Livingston) are Michelson interferometers. Each detects passing GWs through the changes they cause in the relative lengths of its two 4 km arms, which are at right angles to each other: a change in relative arm length changes the relative phases of the two laser beams traveling the arms, which changes the magnitude of the combined beams where they interfere. There is a relative phase change for low frequency GWs, and so for low frequency arm length changes, because there is very little corresponding wavelength change in each photon of light while it is traversing an arm. With higher frequency GWs, it is more complicated. If the wavelength of the light were changed by the same factor as the arm lengths, there would be no change in the relative phases of the light beams where they were combined, and so no change in the interferometer detector output. Part of the complication arises from the fact that at the entrance to each arm is a partially reflecting mirror, so each photon of light bounces back and forth an average of 300 times before exiting through that same mirror. This is done to increase the effective arm length 300 times, to 1200 km, which increases the sensitivity by the same factor, but for GWs high enough in frequency that they change in wavelength a significant amount while a light photon is in the cavity, the output is the result of the combination of photons of different wavelengths, since different photons were stretched by different amounts. A computer is required to extract the relative arm length information from the combined beam amplitude; I haven't analyzed the difficulties fully, but there are limits to what the computer can do in this regard, and there is an upper limit to the frequency of a gravitational wave that can be usefully detected.
LIGO's statements for the general public about this wavelength change versus arm length change question are confused and in some cases stupidly incorrect; don't pay attention to them. (This problem has been somewhat corrected by LIGO since the foregoing was written.) I assume that the LIGO scientists and engineers directly involved with the question correctly understood it. (A Michelson interferometer was first used in the Michelson-Morley experiment to look for changes in the combined beams due to changes in the velocity and orientation of the interferometer relative to the supposed aether. In that experiment, changes looked for would have been due to changes in the relative speeds of light in the two arms rather than in the arms' lengths. No changes were found, which later was explained by Einstein's special theory of relativity.)

The sensitivity, SensA, of the Advanced LIGO interferometers, i.e., the smallest change in relative arm length that could be detected, was advertised by the LIGO organisation as 1/10,000 of the diameter of a proton, SensA = about 10^(-19) meters. I, as well as many other people, thought that the interferometer couldn't possibly be this sensitive, due to the magnitude of the several noise sources that would mask the GW signal. These sources are principally ground vibrations of many sorts, thermal noise of the interferometer components, and shot noise due to the random arrival times of the laser photons at the interferometer detector. That all these could be reduced to a total (RMS) noise effect of 1/10,000 a proton diameter seemed incredible, but the noise in the published GW150914 signal was only about 4 times that. (The 1/10,000 proton diameter sensitivity was for when LIGO reached design sensitivity, scheduled for 2020.) The maximum p-p amplitude of the GW150914 signal at the LIGO detectors output was at least 6 times the actual typical p-p noise, so the S/N ratio was good enough that the probability that the signal wasn't just noise was very high (or, strictly speaking, that if the signal was just noise the probability that the noise would be as large as the GW150914 signal would be very low), and the GW150914 signal waveform was close enough to that predicted by GR for two black holes of certain masses spiraling into each other and merging, and different enough from the waveform predicted for any other type of non-noise event, even two neutron stars merging, that the probability that the waveform was due to something other than a binary black hole merger was small. (The signal-to-noise ratios above are those for the waveform record of just one site.
However, noise at one site is mostly absent at the other site, so considering the waveforms together, using some sort of cross-correlation between the two, results in a considerably larger S/N ratio, as LIGO in fact reported.)

The black holes, however they are formed and come to be orbiting around each other, lose energy and angular momentum in their center-of-mass frame by radiating gravitational waves, due to their being accelerated in the orbit. As a result they come closer to each other, so revolve around each other faster, both in relative speed and in angular velocity (in inertial space), so radiate at a higher rate (in the GW150914 merger, almost all the GW radiation occurred between 0.2 seconds before and 0.03 seconds after merger), so spiral in faster, and finally merge with each other. (There is a well-known problem with this: objects falling into a black hole never quite reach it, from the point of view of a distant observer, because the gravitational time rate retarding effect approaches infinity as the object approaches the event horizon (boundary) of the BH, in such a way that the BHs never quite merge, in the coordinate system of a distant observer.) LIGO (and to some extent Wikipedia) literature indicates a method of approximately determining what is probably the most important feature of the black hole merger, the combined masses of the two black holes, before, during, or at the end of the inspiral, based on just the GW interferometer signal amplitude vs. time records produced by the two LIGO observatories, which are the only record of the GW produced by LIGO. (The masses vary as the gravitational potential energy of the BHs, which is energy of their gravitational fields, is converted into their kinetic energy, and some of that is radiated away as GWs.) To carry out this calculation accurately for the last part of the inspiral, when the BHs are each in the near and so very intense gravitational field of the other, requires that the Einstein equation of GR for the system be solved. The only known way to do this for the end of the inspiral is to solve the equation numerically, and a way to do that wasn't discovered until 2005.

It occurred to me that, since the radius r of an uncharged black hole with zero angular momentum in otherwise empty space, which each of the GW150914 pair approximately was, is, according to GR, directly proportional to its mass (r = 2Gm/(c^2)), and since the square of the (Newtonian) frequency f of two equal masses m (which the GW record showed the BHs approximately were) orbiting their center-of-mass in a circular orbit (LIGO says, p. 2, [1] below, that radiation reaction is efficient in circularizing orbits) is proportional to their mass divided by the cube of their distance d apart (f^2 = mG/[(d/2)(2πd)^2], where d/2 is each mass's orbital radius about the center-of-mass; this reduces to f^2 = mG/(2(π^2)(d^3))), then, assuming that the BHs at merger were each lengthened in the direction of the other BH by 25% of the diameter of an undistorted BH of equal mass, as suggested by LIGO literature, their masses m at merger, when d = 2(1.25r) = 2.5r, could be approximately calculated from their orbital frequency F at merger, and would be inversely proportional to that frequency. In detail, from the two equations above, F^2 = mG/[(1.25r)(2π(2.5r))^2] = mG/[(2.5Gm/(c^2))(4π(2.5Gm/c^2))^2] = mG/[2.5Gm(π^2)(100(G^2)(m^2))/(c^6)] = (c^6)/(250(π^2)(m^2)(G^2)), so F = (c^3)/[(5√10)πmG], so M ≡ 2m = (c^3)/[(2.5√10)πGF], r = c/(2.5πF√10), and 1.25r = c/(2πF√10). Some complications occur because the frequency at merger indicated in the LIGO record is not the actual frequency at merger in the reference frame of the BHs, because of cosmological redshift, GR gravitational time dilation produced by the gravitational fields of the BHs, and the fact that at merger each BH is moving at a considerable fraction of the speed of light wrt the pair's center-of-mass, so there is special relativistic time dilation also.
Also, for two BHs of equal mass, the waveform of the GW record of one BH's half orbit is nearly the same as that of the previous or next half orbit of the other BH, so the apparent period of the GW signal is just ½ the orbital period, and the LIGO record's frequency is twice the actual BH orbital frequency. I did this calculation, as well as I could (the tidal stretching and the gravitational time dilation were difficult to know how to do correctly), just to see how close the resulting M value would come to the probably more accurate LIGO value Msource for the combined BH rest masses. It came out to be just 7% bigger than the LIGO value (assuming certain things about the meaning of LIGO's "source frame", referring to the BHs' reference frame, elucidation of which I haven't been able to get from LIGO), which was undoubtedly to a certain extent a fluke, due to some of my approximations' errors cancelling others.
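The closed-form result above, M = (c^3)/[(2.5√10)πGF], is easy to evaluate numerically. This is a sketch of that arithmetic (my own, using standard values of the constants; it is not LIGO's calculation):

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m/s
SOLAR_MASS = 1.989e30  # kg

def combined_mass(F):
    """Combined BH mass M = 2m from the orbital frequency F (Hz) at merger,
    per the formula derived above: M = c^3 / (2.5*sqrt(10)*pi*G*F).
    Note M is inversely proportional to F."""
    return C**3 / (2.5 * math.sqrt(10) * math.pi * G * F)

# With the corrected orbital frequency F = 111.67 Hz used later in the text:
M = combined_mass(111.67)
print(M, M / SOLAR_MASS)  # about 1.455e32 kg, about 73.2 solar masses
```

This reproduces the 73.2 solar-mass figure (before the special-relativistic rest-mass correction) obtained below.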

The only LIGO GW parameter required for the above formula to calculate M, the combined BH masses just before they merged, is the revolution frequency F of the BHs' orbit around each other just before the merger. Which part of the LIGO interferometer signal record was produced just before, during, and after their merger is fairly clear, and its frequency can be measured from the published LIGO GW record shown in Figure 1. Similar diagrams, technical articles which include them, and other material are available at www.ligo.caltech.edu and www.ligo.mit.edu. I measured the displayed frequency at merger as about 153 Hz. The correction factors which have to be applied to get the actual revolution frequency F from this, as described above, are: ½; the cosmological redshift factor RS ≡ 1 + z; the gravitational time dilation factor; and the special relativistic mass dilation factor γ.

Figure 1: Graphs of the advanced LIGO waveforms for the GW150914 gravitational wave event, recorded September 14, 2015 [Image courtesy Caltech/MIT/LIGO Laboratory]

By a calculation I am not showing, since it turned out to be not all that useful for calculating F, I got RS (the redshift factor) ≡ 1 + z = e^(R/c), where R is the recession speed from Earth of the BH merger and remnant site, assuming a Friedmann universe without any gravitational deceleration or dark energy acceleration. Using the LIGO values of the merger distance from Earth, Dm = 410 Mpc (calculated, I imagine, from the calculated total GW energy, about 3 solar masses equivalent, and the received GW amplitude, with some LIGO assumptions or conclusions from the GW signals about the BHs' orbital inclination), and the present Hubble parameter Ho = 67.9 km/(s Mpc), with R = HoDm, I got z = 0.0973, whereas the LIGO value is 0.088. However, my assumption of no deceleration or acceleration was unrealistic. An accelerating expansion of the universe during the GW travel time from the merger to Earth would probably be more realistic, and would produce a smaller z. I did not try to calculate its value using the given LIGO value of Ωm = 0.306 and the dark energy/cosmological constant Λ, whatever its value is, which calculation would involve solving a nonlinear differential equation I didn't immediately know how to solve; instead I used the LIGO value of z for my calculation of M. The LIGO value of RS ≡ 1 + z is 1.088, which is just 0.99 of what mine would have been using my unrealistic cosmological assumptions, so if I had used mine, F would have been 1% larger and my M 1% smaller, so my error in M would have been about 1% (of M) less than it actually was, since my final calculated value for M was a few % bigger than LIGO's value, which was probably about right and more reliable than mine (see below).
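The no-deceleration redshift estimate described above can be reproduced in a few lines (a sketch under the same simplifying coasting-universe assumption, using the LIGO values quoted in the text):

```python
import math

C_KM_S = 299_792.458   # speed of light, km/s

def redshift_no_accel(H0, D):
    """z from 1 + z = e^(R/c) with R = H0*D, for a Friedmann universe
    with no gravitational deceleration or dark-energy acceleration,
    as assumed in the text. H0 in km/(s*Mpc), D in Mpc."""
    R = H0 * D                      # recession speed, km/s
    return math.exp(R / C_KM_S) - 1

# LIGO's values: H0 = 67.9 km/(s*Mpc), Dm = 410 Mpc
print(redshift_no_accel(67.9, 410))   # about 0.0973, vs LIGO's 0.088
```

The result, z ≈ 0.0973, matches the figure in the text; the gap to LIGO's 0.088 reflects the neglected cosmological dynamics.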

Calculating the gravitational redshift/time-rate reduction of F due to the BH masses was the most uncertain part of my calculation. I just used the Schwarzschild metric for a BH of radius r (even though a BH merger isn't a time-independent situation and neither BH is isolated, so the Schwarzschild solution didn't apply exactly to them), and assumed the principal observed GW frequency at merger was lowered by the gravitational field of one of the BHs as if that wave had been generated at a Schwarzschild coordinate (of that BH) of 2.25r, i.e., at the center of one of the tidally distorted BHs if its event horizon was just touching the other undistorted one's event horizon; that factor is √5/3. Thus, F = (153/2)(1.088)(3/√5) = 111.67 Hz. This gives M = 1.455x10^32 kg = 73.2 solar masses (M⨀ or SM). However, this is in a non-rotating reference frame stationary wrt the center of mass of the merging BHs. At coalescence, each of them is moving at a speed of 2πF(1.25r) = c/(√10) in that frame (this is oddly independent of all BH parameters--is it correct?), so their masses in that frame are √(10/9) times their rest masses, so M (the sum of the rest masses of the two BHs) is 73.2√(9/10) = 69.45 SM, which is just 7% bigger than the LIGO value of Msource, the "source-frame total mass", 65.0 SM, which itself is given as being about 7% (90% credible interval) uncertain. My calculation doesn't include the mass equivalent of the inspiraling BHs' final kinetic energy, which is considerable: 73.2 - 69.45 = 3.75 SM. [The LIGO values of Msource, DL, and z are from Table I, p. 2, and those of Ho and Ωm from p. 7, of [1] "Properties of the Binary Black Hole Merger GW150914", B. P. Abbott et al., Phys. Rev. Lett. 116, 241102 (2016).]
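Putting the correction factors together, the whole calculation of the last few paragraphs fits in a few lines. This is my own sketch; the 3/√5 tidal/time-dilation factor and the measured 153 Hz are the rough estimates described above, not LIGO values:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m/s
SOLAR_MASS = 1.989e30  # kg

f_record = 153.0   # apparent GW frequency at merger, read from the LIGO record, Hz
z = 0.088          # LIGO's cosmological redshift value

# Corrections: 1/2 (GW frequency is twice the orbital frequency), (1+z) for
# cosmological redshift, and 3/sqrt(5) for the Schwarzschild time dilation
# at coordinate 2.25r, as estimated in the text.
F = (f_record / 2) * (1 + z) * (3 / math.sqrt(5))   # about 111.67 Hz

M = C**3 / (2.5 * math.sqrt(10) * math.pi * G * F)  # mass sum in the CM frame
M_rest = M * math.sqrt(9 / 10)  # remove SR mass dilation at v = c/sqrt(10)

print(F)                    # about 111.7 Hz
print(M / SOLAR_MASS)       # about 73.2 solar masses
print(M_rest / SOLAR_MASS)  # about 69.4, vs LIGO's Msource = 65.0
```

This reproduces the 69.45 SM rest-mass sum, about 7% above LIGO's Msource, as stated in the text.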

I emailed some questions about GW150914 to LIGO in 2016, and got a reply from one of its employees, with whom I had an email conversation which answered some of them. However, in December of last year I emailed part of the above derivation, together with several different questions, to the Caltech LIGO office after being told by the woman who answered the LIGO publicly available contact phone number that she would forward this to the appropriate person or group for an answer, but I never received one. I will try again with still different questions, leaving out the derivation (which I have done, see below). The questions may include:

What is the meaning of "Detector-frame total mass M/M⨀" = 70.6, shown in Table I, p. 2, [1]? 70.6 is about (1+z)(65.0) = 70.72. Is the significance of M that it is the Msource value that would have been obtained if the GW signal received at LIGO had actually been the source-frame signal?

The LIGO value for the source-frame final mass Mf,source/M⨀ = 62.0, shown in Table I mentioned above, is 65.0 - 3.0, so Mf,source = Msource minus the mass-equivalent of the radiated GW energy. Why is the source-frame final mass equal to the initial total rest mass of the two BHs minus the mass-equivalent of the radiated GW energy? I would have thought that the radiated GW energy was part of the difference between the energy (referenced to that of the 2 BHs at infinity) of the initial gravitational field before inspiral, which was about zero, and its energy Ef at coalescence, when the field was considerably greater, so its energy was considerably negative, so the difference -Ef would be considerably positive, and would be equal to the BHs' kinetic energy KE plus the so-far-radiated GW energy, both at coalescence, both of which had come from the total gravitational field energy. From the GW graph, the GW energy received at LIGO after coalescence, during ringdown, was about 1/3 of the total GW energy received, so unless during ringdown the GW was for some reason more directed away from Earth than before ringdown (which it might have been), the radiated energy before ringdown, and approximately so before coalescence, was about 2/3 the total radiated GW energy, about 2 SM x c^2. However, on p. 8 of the cited paper it is indicated that the GW energy radiated during ringdown was about 1.5 SM equivalent, so the GW energy radiated before ringdown would also be about 1.5 SM equiv. At coalescence, the two BHs had a combined kinetic energy of about 3.5 solar masses equivalent, using my calculated CM frame speed for them of 0.316c and LIGO's value of Msource = 65 SM. Therefore, -Ef = at least 3.5 SM + 1.5 SM = 5 SM. The BH rest masses would not change during the inspiral. The 1.5 SM equiv. was radiated away as the energy of GWs before merger, the 3.5 SM would be added to the merged BH mass, and during ringdown an additional 1.5 SM equiv. would be radiated away as GW energy, with the 1.5 SM being subtracted from the mass of the final BH, so Mf,source would have been (at least) Msource + 3.5 SM - 1.5 SM = 67 SM. Was the difference Msource - Mf,source being 3 SM due to something other than subtraction of the radiated GW energy equiv., and what happened to the 3.5 SM equiv. kinetic energy of the BHs at coalescence; why wasn't it added to Msource to make Mf,source bigger than Msource, rather than smaller?
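The 3.5 SM kinetic-energy figure used in the question above can be checked directly from the special-relativistic γ factor at my calculated coalescence speed v = c/√10 ≈ 0.316c (my own arithmetic sketch, using LIGO's Msource = 65 SM):

```python
import math

M_SOURCE = 65.0                 # LIGO's combined rest mass, solar masses
v_over_c = 1 / math.sqrt(10)    # calculated CM-frame speed at coalescence, ~0.316

gamma = 1 / math.sqrt(1 - v_over_c**2)   # = sqrt(10/9), about 1.054
KE_solar = (gamma - 1) * M_SOURCE        # relativistic KE in solar-mass equivalents

print(gamma)      # about 1.054
print(KE_solar)   # about 3.5 solar masses, as used above
```

The relativistic kinetic energy (γ-1)Mc^2 gives about 3.5 SM, consistent with the figure in the question.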

To match the top orange Hanford trace to the orange Hanford trace in the bottom graph of Figure 1 (which also has the Livingston one), it is necessary to shift the top trace in time by about -7 milliseconds and invert it; the shift and inversion are needed because of the separation between the Hanford and Livingston sites and the differences in their interferometer arm orientations. But that doesn't exactly match it to the orange trace in the bottom graph. How was the original Hanford data further manipulated to create the Hanford graph in the composite picture? In some other sets of graphs showing the individual H & L waveforms and also an H & L composite, the shifted and inverted original H trace seems to exactly match the H trace in the composite.

I later emailed a question to Hanford asking about the peculiar LIGO stated energy/mass balance described above, and on November 4, 2020 I got a not entirely satisfactory reply from a LIGO senior scientist, who didn't give his name, supposedly explaining why Mf,source = Msource - 3SM. I replied to it with a request that the email conversation be forwarded to Kip Thorne together with a recommendation from the scientist that he study it and perhaps also this website entry, probably none of which will be done.

I finally found the reason that my calculated value for the final black hole mass (the mass of the remnant black hole resulting from the merger of the two inspiraling black holes) was larger than the sum of the original masses of those two at the beginning of their inspiral, due to the addition of the kinetic energy they acquired from their mutual gravitational field during the inspiral, rather than smaller than that sum, as LIGO said it was. The reason is described in the email conversation with Kip Thorne below. Briefly, LIGO and I were talking about different things when we used the term "mass" of one or more black holes. I meant by "mass" just the rest mass of the hole or holes, whereas LIGO meant that rest mass plus the mass of its or their gravitational field, which is always negative, so their "mass" was less than my "mass", and less by an amount that is approximately correctly given by the difference of our respective final mass values together with the mass equivalent of the radiated gravitational waves.

Re: Final black hole masses

KIP THORNE <kipst@me.com> Sun, Jul 31, 2022 at 3:04 PM

To: Michael Fox <cmichaelfox@gmail.com>

I have no objection.

Kip Thorne

On Jul 31, 2022, at 11:15 AM, Michael Fox <cmichaelfox@gmail.com> wrote:

I want to put this forwarded email from you on my personal website as part of my discussion of LIGO and my semi-Newtonian, semi-relativistic calculation of the sum of the GW150914 BH masses, but I thought I should first check that this is OK with you. I can't think of any likely reason that it wouldn't be OK, but if you do object to my publishing this email on my website, please let me know within a few days.

I finally resolved the apparent disagreement of LIGO and your value of the final mass of the GW150914 BH remnant with my larger value (even assuming LIGO & your value for the initial sum m1 + m2 of the two merging components; mine was about 7% larger), resolved on the basis of this email and later, beginning 2021-2-14, an email conversation with Nathan Johnson-McDaniel, a scientist with the LIGO Scientific Collaboration. The resolution, as you may have realized, and as I should have realized sooner, is that what I termed the "mass" of the remnant BH was what might be called, in analogy with the QM usage, the "bare" mass, the mass without the associated gravitational field, while by "mass" of the remnant you & LIGO meant its bare mass together with the mass of its gravitational field, what Nathan called the "astrophysical" mass. Since the energy, and so mass, of a gravitational field should be counted as negative, your calculated mass of the remnant would be less than my calculated mass of the remnant, by an amount equal to the (absolute value of the) mass-equivalent of the overall increase in strength of the binary BHs' (static) gravitational fields between the start of their inspiral and their merger, which should be equal to the mass-equivalent of the radiated gravitational waves, which it roughly is as judged by your/LIGO's & my calculated values.

Michael Fox

On Wed, Jan 20, 2021 at 2:22 PM KIP THORNE <kipst@me.com> wrote:

Dear Michael Fox,

The m1 that the LIGO team infers from the observational data is the mass of the heavier black hole as measured in its own rest frame when it is far from all other objects … i.e. when it is just beginning its long inspiral toward collision; and similarly for m2 - the mass of the lighter hole. Mf is the mass of the final black hole as measured in its own rest frame after all the radiation has gone away and it has finished vibrating. The final black hole will be given some (small) kick by the departing gravitational waves, so in the center of mass frame of the initial binary, it will have some (small) kinetic energy Ekinf = 1/2 Mf vf^2 (where vf is that small kick velocity).

We apply energy conservation to the transition from the initial binary when the holes are very far apart to the final black hole and the emitted gravitational waves. That conservation says m1 + m2 = Mf + Ekinf/c^2 + Erad/c^2

We do not attempt to compute or pay attention to what is happening just before collision … e.g. the kinetic energy of the two holes just before they collide. That is much too difficult to analyze. By contrast, the initial state long before collision and the final state long after collision are easy to analyze; so that is our focus.

We do not actually measure the kick velocity vf or the final kinetic energy; we rely on computer simulations of colliding black holes which show that the kick velocities are always less than or about 3000 km/sec and therefore Ekinf/c^2 is less than or about 1/2 Mf x (3 thousand / 300 thousand)^2 = Mf/20000 — which is far far smaller than the errors in our measurements, so we ignore Ekinf/c^2. This leads to the equation we use m1 + m2 = Mf + Erad/c^2

I hope this is helpful.

Best regards

Kip Thorne
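The energy bookkeeping in Thorne's email can be checked with a few lines of arithmetic. The sketch below is mine, not part of the correspondence; the GW150914 masses used (about 36, 29, and 62 solar masses) are the commonly quoted LIGO values:

```python
# Check of the kick-energy bound from the email above, plus the
# GW150914 energy balance. Masses in solar-mass units, speeds in km/s.
c = 300_000.0          # speed of light, km/s (the round value used in the email)
v_kick_max = 3_000.0   # upper bound on remnant kick speed from simulations, km/s

# Kinetic energy of the kicked remnant as a fraction of its mass Mf:
# Ekinf/(Mf c^2) = (1/2)(vf/c)^2
ekin_fraction = 0.5 * (v_kick_max / c) ** 2
print(ekin_fraction)   # 5e-05, i.e. Mf/20000, as in the email

# Energy balance m1 + m2 = Mf + Erad/c^2 for GW150914
# (commonly quoted values, in solar masses):
m1, m2, Mf = 36.0, 29.0, 62.0
Erad = m1 + m2 - Mf    # mass-equivalent of the radiated gravitational waves
print(Erad)            # 3.0 solar masses radiated
```

The roughly 3 solar masses of radiated energy is indeed vastly larger than the Mf/20000 ≈ 0.003 solar masses of kick energy that the approximation discards.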

## (3) The Ultraviolet Catastrophe

Max Planck supposedly formulated the quantum hypothesis, that electromagnetic radiation is emitted from heated bodies only in quanta of energy E = hf, where f is the frequency of the radiation and h is a constant now called "Planck's Constant", in order to solve the Ultraviolet Catastrophe: the prediction of classical Electromagnetic and Statistical Mechanical theory that a finite material body at any finite non-zero temperature would radiate infinite power, in the form of electromagnetic radiation with frequencies above any given value, which of course was observed not to occur.

For a long time I didn't know why this was supposed to occur; it seemed to me that classical EM and SM theory wouldn't predict it. My reasoning was this: in any finite body at a finite temperature there are only a finite number of charged particles, each moving at a finite speed relative to the others. If the particles were completely rigid, the collisions between them would involve infinite accelerations, which (according, perhaps, to some extension of Maxwell's EM theory) would make the power of the EM radiation emitted by such a body infinite, at least for an instant; but such a model is unrealistic. Rather than collisions with infinite accelerations, there would be interactions between particles caused by finite EM or other forces, so each particle's acceleration would be bounded above over time, so the power radiated by each particle would also be bounded above, and so only a finite total amount of power would be radiated.

When I recently looked up the reasoning that had led to the infinite-power prediction, I found that the finite body which was supposed to radiate infinite power was modeled as a finite amount of matter inside a cavity, supposedly to simulate a black body, and the matter and cavity were supposed to be at thermal equilibrium! In such a cavity there are infinitely many EM modes, all but finitely many of them above any given frequency, each of which counts as a degree of freedom of the body. According to the Equipartition Theorem of Statistical Mechanics, at equilibrium at any non-zero temperature T each degree of freedom accounts for an equal, non-zero amount kT/2 of energy, so there would be an infinite total amount of EM energy in the body, and an infinite amount of power radiated from it.
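The equipartition argument just described yields the classical Rayleigh-Jeans spectrum, whose unbounded growth with frequency is the "catastrophe"; Planck's formula removes it. A small numerical comparison (my own illustration, using the standard spectral energy density expressions, not anything from the text above):

```python
import math

h = 6.62607015e-34   # Planck's constant, J s
k = 1.380649e-23     # Boltzmann's constant, J/K
c = 2.99792458e8     # speed of light, m/s

def rayleigh_jeans(f, T):
    """Classical spectral energy density u(f,T) = 8 pi f^2 kT / c^3 (kT per mode)."""
    return 8 * math.pi * f**2 * k * T / c**3

def planck(f, T):
    """Planck spectral energy density u(f,T) = (8 pi h f^3 / c^3) / (e^(hf/kT) - 1)."""
    return (8 * math.pi * h * f**3 / c**3) / math.expm1(h * f / (k * T))

T = 5000.0  # kelvin
for f in (1e12, 1e14, 1e16):   # far infrared up to extreme ultraviolet
    print(f, rayleigh_jeans(f, T), planck(f, T))
# At low f (hf << kT) the two agree closely; as f grows the classical density
# keeps rising without bound, so its integral over all modes diverges, while
# the Planck density is exponentially suppressed and its integral is finite.
```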

While this infinite-radiated-power prediction was true according to classical theory under the assumed conditions, why the body was modeled as one inside a cavity was unclear. Much more importantly, the assumption that the material and the cavity were at thermal equilibrium was as unrealistic as the infinite-acceleration assumption, since it requires an infinite amount of energy to be in the matter and cavity, in the EM radiation in the infinitely many EM modes. That energy could be supplied to the matter/cavity, which initially would contain only a finite amount of energy, only from the environment, at a finite, bounded rate, so it would take an infinite amount of time for the matter/cavity to reach the assumed equilibrium. Thus classical theory does not predict an infinite amount of radiation at any (finite) time, and measurements made under the assumed condition, equilibrium, which supposedly contradicted the classical theory's prediction of infinite radiated power, have, according to that classical theory itself, obviously never been made. According to quantum mechanics and to actual experiments, however, equilibrium is reached within a finite time with a finite amount of energy in the matter/cavity, with the frequency-intensity curve, and so the wavelength-intensity curve, of the emitted radiation at that equilibrium being as predicted by Planck and quantum mechanics rather than by classical theory. This is the real conflict of thermal-radiation experiments with classical theory. (The theory-vs.-experiment situation, including what was theoretically considered to be a black body vs. what was used for one in supposed black-body radiation experiments, is more complicated than this brief discussion indicates, and I may post further comments about it later.)

The quantum mechanics that began with Planck's quantum hypothesis has been very successful in making true predictions about the world, and can hardly be doubted to be in many ways correct. But the supposed Ultraviolet Catastrophe, the claim that the infinite-radiated-power-at-thermal-equilibrium prediction of classical electromagnetic theory together with thermodynamic/statistical-mechanical theory was contradicted by observation and experiment, and that this led Planck to propose his quantum hypothesis, seems to be a misconception of even reputable physics textbooks and a factual error of historical sources. Does anyone have an answer opposing, or at least explaining, this?

## (5) Mass increase with speed increase

Several YouTube physics popularizers, including Sean Carroll, Don Lincoln, and Andrew Dotson, contend that it is a myth that the mass M of an object O, as measured in an inertial coordinate frame F, increases as its speed in F increases. How do they claim to show this, in seeming defiance of Einstein's special theory of relativity (SR)? [To be fair to YouTube and those who put videos on it, I should note that there are apparently a number of people who, in videos on YouTube, discuss mass increase with increase in relative speed without claiming that it doesn't occur. I mostly haven't watched those videos, since I don't obviously disagree with them.]

According to SR, both the length of a measuring rod moving with speed v lengthwise in F and the rate of a clock moving with speed v in F, both as measured in F, are smaller by (are divided by) a factor of γ ≡ 1/√(1-(v^2)/(c^2)) compared to their values in a reference frame in which they are stationary, their "rest" frames; γ increases with increasing v (for 0 ≤ v < c) and is always ≥ 1 (where c is the speed of light in vacuum). In SR, where M varies with the speed v of O, the "relativistic mass" of O is defined as Mr ≡ γMo, where Mo is the mass of O in its rest frame. The popularizers generally don't dispute any of this. Why they dispute the increase of mass with increase of v is not clear; two of their trivially defective arguments are discussed below, with the arguments' errors pointed out.

One argument some use is that before relativity, in Newtonian physics, M was O's inertia I, its resistance to acceleration a ≡ dv/dt, with M ≡ I ≡ f/a for a ≠ 0, where f is the force causing the acceleration a. [Whether this is an empirical fact, discovered by experiment, or is true by definition, is disputed.] It is found experimentally, in the speed range v << c in which Newtonian mechanics is close to being correct, that the ratio M = f/a is very close to being independent of a and f. Then f = Ma = M(dv/dt), and since in Newtonian physics M during the acceleration is independent of the speed and time, this is = d(Mv)/dt = dp/dt, where p ≡ Mv, which is what O's momentum is defined to be. However, they say, in SR, while (as determined by experiment) f = dP/dt ≡ d((Mr)v)/dt, during acceleration Mr varies with v and so with t, so d((Mr)v)/dt ≠ Mr(dv/dt). Instead, f = (d(Mr)/dt)v + Mr(dv/dt) = v(d(γMo)/dt) + (Mr)a = vMo(dγ/dt) + (Mr)a. Therefore Mr = [f - vMo(dγ/dt)]/a = f/a - vMo(dγ/dt)/a ≠ f/a for v, Mo > 0, so Mr ≠ I ≡ f/a, so Mr isn't the inertia of O and therefore, they conclude, shouldn't be considered O's mass. This is true, **but**: dγ/dt = d[1/√(1-(v^2)/(c^2))]/dt = [(1-(v^2)/(c^2))^(-3/2)](va/(c^2)), so f = (Mr)a + [(1-(v^2)/(c^2))^(-3/2)](a(v^2)/(c^2))Mo = (Mr)a + [(Mr)a(v^2)/(c^2)][1/(1-(v^2)/(c^2))] = (Mr)a + (Mr)(v^2)a/(c^2 - v^2) = (Mr)a[1+(v^2)/(c^2 - v^2)] = (Mr)a[(c^2)/(c^2 - v^2)] = (Mr)a[1/(1-(v^2)/(c^2))] = (γ^2)(Mr)a. Thus I ≡ f/a = (γ^2)Mr = (γ^3)Mo (for f along the velocity, this (γ^3)Mo is the classic "longitudinal mass"), so the inertia of O increases with its speed v even faster than its relativistic mass Mr does.
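The (γ^3)Mo result can be checked independently by differentiating the relativistic momentum numerically. The following sketch is my own, for straight-line motion in units with c = 1:

```python
import math

c = 1.0
m0 = 1.0  # rest mass Mo

def gamma(v):
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

def momentum(v):
    # relativistic momentum P = Mr v = gamma * Mo * v
    return gamma(v) * m0 * v

v = 0.6   # some speed, as a fraction of c
dv = 1e-6
# For straight-line motion, f = dP/dt = (dP/dv)(dv/dt) = (dP/dv) a,
# so the inertia f/a is just dP/dv (central finite difference):
inertia_numeric = (momentum(v + dv) - momentum(v - dv)) / (2 * dv)
inertia_formula = gamma(v) ** 3 * m0   # the gamma^3 Mo derived in the text
print(inertia_numeric, inertia_formula)  # both approximately 1.953125
```

At v = 0.6c, γ = 1.25, so the inertia γ^3·Mo ≈ 1.95 Mo, noticeably larger than the relativistic mass γ·Mo = 1.25 Mo, as the derivation says.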

The above demolishes the argument that the mass of an object doesn't increase with its relative speed. However, some popularizers further state that in the famous equation E = M(c^2), "M" stands for just the rest mass Mo of O, and the energy E is just the energy equivalent of that rest mass. They go on to say that the actual total energy of O, E = its rest-mass-equivalent energy + its kinetic energy (but not including its potential energy due to external fields, which is counted separately, as the energy of the fields), satisfies E^2 = (P^2)(c^2) + (Mo^2)(c^4) (Eq. 1), which (they say) gives E = M(c^2) only when P, and so v, is 0, i.e., for M = Mo, the rest mass, which they insist M is equal to. That is actually a very good reason to take M in M(c^2) to be Mr, the relativistic mass, which is what Einstein did and (I think) most physicists still do. This is shown by something those physics popularizers seem not to realize: from Eq. 1, using P = (Mr)v and Mo^2 = ((Mr)^2)(1 - (v^2)/(c^2)), E^2 = (P^2)(c^2) + (Mo^2)(c^4) = [((Mr)v)^2](c^2) + (Mo^2)(c^4) = ((Mr)^2)(v^2)(c^2) + ((Mr)^2)(1 - (v^2)/(c^2))(c^4) = ((Mr)^2)(c^2)(v^2 + c^2 - v^2) = ((Mr)^2)(c^4) ⇔ E = Mr(c^2). Thus what they declare to be the correct general formula for the total energy of O, the right-hand side of Eq. 1, rather than what they call the limited formula M(c^2) (which, they say, gives only the rest-mass-equivalent energy, because they, for some unaccountable reason, insist that M = Mo), is trivially equal to M(c^2) with M = Mr.
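The identity just derived, that the right-hand side of Eq. 1 equals (Mr(c^2))^2 at every speed, is easy to confirm numerically. A quick check of my own, in units with c = 1 and Mo = 1:

```python
import math

c = 1.0
m0 = 1.0  # rest mass Mo

for v in (0.0, 0.3, 0.6, 0.9, 0.99):
    g = 1.0 / math.sqrt(1.0 - (v / c) ** 2)       # gamma
    Mr = g * m0                                    # relativistic mass
    P = Mr * v                                     # momentum P = Mr v
    E_eq1 = math.sqrt(P**2 * c**2 + m0**2 * c**4)  # right-hand side of Eq. 1
    E_Mr = Mr * c**2                               # Mr c^2
    print(v, E_eq1, E_Mr)  # the two columns agree at every speed
```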

Three of the four most important conserved quantities in physics are energy, linear momentum, and angular momentum. The conserved energy of a closed (sometimes now called "isolated") system is E = Mr(c^2) (where Mr includes the mass of the system's fields), the conserved (vector) linear momentum is **P** = (Mr)**v**, and the conserved angular momentum is similarly computed using the relativistic Mr or corresponding relativistic mass densities (except for quantum mechanical spins). [In general relativity, there seems to be a problem with defining an exactly conserved energy density. I am ignoring this.]

In Albert Einstein's 1905 paper, "Does the inertia of a body depend upon its energy-content?", Einstein stated and demonstrated that a body's mass decreases by an amount ∆E/(c^2) when its energy decreases by ∆E.

There seem to be no significant justifications in physics for these people's insistence that the mass of a body isn't greater the greater its energy (rest mass equivalent + kinetic). They refer to "common misconceptions", or "popular myths", that its mass increases (or decreases) when its energy, including its kinetic energy of motion, increases (or decreases). The misconceptions and myths are theirs.

## (6) Wave-wave interactions

See (6) on the Home page for an introduction to this. The wave-wave interaction problem is discussed in (1) below.

Michael Fox <cmichaelfox@gmail.com> Tue, Dec 15, 2020 at 9:04 AM To: sabine.hossenfelder@gmail.com

Sabine Hossenfelder,

My email of the 7th was sent to your professional email address that I had. Not believing that you wouldn't reply to such a brilliant, fascinating email as mine if you had received it, I am resending it to the personal email address for you that I have. I don't mean to be pushy.

Michael Fox

---------- Forwarded message ---------

From: Michael Fox <cmichaelfox@gmail.com> Date: Mon, Dec 7, 2020 at 1:12 PM Subject: Comments on and questions about your book and one video To: <hossi@fias.uni-frankfurt.de>

Sabine Hossenfelder,

My name is Charles Michael Fox. I live in Clarksville, Indiana, USA. I have an AB in Mathematics with a minor in Physics, an MA in Mathematics, and one in Philosophy of Science, but no Ph.D. I am retired.

Below are (1) what I think is an interesting addition to what you said about the origin of the contradictions in a theory combining general relativity with quantum mechanics, (2) a reply, in fact I think clearly a correction, to your dismissal of the causality-violation objection to faster-than-light travel, and (3) a comment and question about the triviality of quantum field theory.

(1) On p. 179 of the paperback edition of *Lost in Math* you said that GR doesn't fit with QM because GR requires that, to determine the curvature of space-time, particles with non-zero rest mass, and radiation too, have a definite location, whereas in QM they usually don't. I had never before heard anyone give this as a reason for the GR & QM incompatibility, but it agrees with what I had believed to be a reason for, if not absolute GR & QM incompatibility, at least general QM incompleteness when combined with any of the 4 known force laws.

This problem had bothered me vaguely for some time after I took beginning QM, but came into focus only at a brief talk by A. S. Wightman at which he said, in connection with exactly what I don't remember, "You say we should be doing wave-wave interactions rather than wave-particle interactions? Just go to it, kid." (This may be an inexact quote.) He didn't describe the problem further, but right after that said he had almost gotten the solution to it while waiting in an airport. I decided that he had been referring to the fact that QM, for the Schrodinger equation of a system S, requires a Hamiltonian for S, and this cannot be obtained using the QM wave function of the rest of the universe, R; it requires that R be considered as a classical system. This is good enough in many cases, but it is not strictly correct, and might give very inaccurate results in the case of 2 charged particles with extended wave functions which are near each other and are interacting via their electromagnetic fields, if, say, one of them were treated as a point particle at its position expectation value in order to compute a Hamiltonian for the other, for the Schrodinger equation involving it, to compute its further time evolution. The problem is, at least in part, that you don't know where the particle's charge is. You can't just equate the charge density with |psi|^2, which is now considered to be the position probability density specified by the wave function; Schrodinger tried using it as the charge density, but gave that up, for reasons I don't know, except that the charge density is obviously not the position-upon-observation probability density. This charge-density problem is analogous to the GR mass-density problem, and both are obviously closely related to the QM measurement problem. To discover this problem's current status, I once Googled "wave-wave interactions in quantum mechanics", but got nothing useful. I also searched for that and for "wave-wave interactions" in Google Scholar, but got only references to papers about the nonlinear interactions of ocean waves.

You said we can't measure this, because the gravitational pull of an electron is too weak. The EM force is so much stronger than the gravitational force that it would be immensely easier, maybe even possible, to measure the EM interaction between 2 charged particles under the conditions described above, using particles with different wave functions, and get some insight into how their mutual interaction varies with their wave functions, and so some insight into the measurement problem. Perhaps similar experiments using the strong or weak force could be done. Whatever insights were gained would probably carry over to the gravitational wave-wave interaction problem.

Very many particle-particle interaction experiments have been done. Have the results of any involving close interactions of particles with extended wave functions been studied for their implications for the vexing measurement problem? The fact that wave-wave interaction calculations, which are really required to do quantum mechanics properly, are (probably) currently not doable, as well as other important calculations being not doable because the measurement problem hasn't been solved, shows that the measurement problem involves more than just a fuzzy wish by philosophers to know what is "really" represented by a quantum wave function and its "collapse" when a measurement or observation is made, and under what conditions a measurement happens; it also involves a hard computational problem. Oddly, the following is a consequence of facts accepted even by those in the "shut up and calculate" school of quantum mechanics, which, however, I haven't seen expressed in quite so extreme a form as this: **Quantum mechanics**, considered to be a theory of initial conditions consisting of the wavefunctions of all particles in the system in question, some version of the Schrodinger equation as the time-evolution law, and all the Hermitian operators representing observables, **cannot predict the probability of any experimental or observational outcome** (because all it can predict is the probability of experimental or observational outcomes **given** that the experiment or observation is carried out). Of course, if it could predict the probability of an experiment or observation being carried out, it could predict the probability of any outcome of such experiment or observation; but it can predict the probability of one of these first-order experiments or observations occurring only **given** that there occurs a second-order experiment or observation to determine whether the first-order experiment or observation occurs, and so on… This is related to the infinite regression of observers that John von Neumann described, and to the (supposed) division of the world into separate quantum and classical parts insisted upon, maybe reluctantly, by Niels Bohr.

BTW, in your description of this problem, you say that, according to the standard model, an electron can be in 2 places at once because it is described by a wave function. Come on, this is bad pop physics. Maybe you were deliberately using pop-physics language when you said it, and didn't really believe it. Otherwise, and if you still do believe it, you need a refresher in elementary QM. In bad pop physics, a particle is said to be in 2 places at once if its wave function is a superposition of 2 wave functions localized in 2 separate regions. However, the meaning, in standard physics terminology, of its being in location A is that if a measurement of its position is made, it will with probability 1 be found to be in location A; that is, it is in an eigenstate of the projection-onto-location-A operator observable. If the particle's wave function is a (nontrivial) superposition of its being in location A and in location B, the probability of its being found in location A, and that of its being found in location B, are both less than 1, so it isn't in locations A and B at the same time. (Schrodinger's cat isn't both alive and dead at the same time when its wave function gives it a probability of 1/2 of being found alive, and 1/2 of being found dead, when observed with an observable that will determine whether it is alive or dead. Even saying it is half alive and half dead isn't correct. It would, with probability 1, be in a half-alive-half-dead state if observed with an observable whose eigenfunction was such a state, but, for reasons unknown, just looking at it isn't such an observable, and no one seems to know of any such observable.)

(2) In your video "Is faster-than-light travel possible?", you say that the causality-violation, grandfather-paradox objection to such travel is nonsense, rubbish, since it is based on a confusion about the direction of time: if we have a consistent direction of time, there is no such paradox. You claimed the confusion is that the paradox says that by traveling FTL you are going backward in time in some inertial coordinate frame IF, but nevertheless are getting older, so your entropy is increasing. As far as I can tell, your argument involves this Claim: By traveling backward in time in any inertial frame IF you would be going in the time direction of decreasing entropy in IF, since entropy decreases in the backwards time direction in each IF. This Claim is self-contradictory, for somewhat the same reason that if you are traveling FTL in some frame IF1, you are, according to special relativity, traveling backward in time in some other frame IF2. Specifically, if at point p1 along your FTL path you are, in frame IF1's time, earlier than you are at point p2, so that the entropy (of some closed subsystem S) at p1 would, according to the Claim, be less than it is at p2, then in some other frame IF2's time p1 will be later than p2, so the entropy of S at p1 would, according to the Claim, be greater than it is at p2 (contradiction).

While, as shown above, the entropy (of S), or any other (single-valued) scalar function of all space-time points, cannot be an increasing function of (frame) time everywhere in each inertial frame, so it isn't true for all inertial frames IF that travel backward in time in IF would be travel in the direction of decreasing entropy everywhere (at the same frame time) in IF, the following is true, assuming that the 2nd law of thermodynamics holds everywhere and always along each time-like curve, so that the entropy of S in a past light cone is less than the entropy of S in the corresponding forward light cone: travel into your own past light cone would involve travel to a s-t region where your own entropy would be lower than before reaching the past light cone. However, this no more shows that travel into the past with FTL travel would be impossible than does the fact that in such FTL travel one's own proper time is going in the opposite direction to the frame time of each frame in which one is traveling backward in time. It is assumed that along your FTL path going backward in time in some frame, if you could travel such a path, in your own proper time, as your mental time increased, i.e., as the time seemed to get later (according to a popular theory of the relation of mental time direction to the temporal direction of entropy increase), your own entropy would increase, while the entropy of most other things, those not traveling with you, would decrease. This is just one of the peculiar aspects of time travel into the past, but it is not a strict contradiction of natural law even if the 2nd law applies with the time it refers to being the external frame's time, since, as Maxwell emphasized, the 2nd law is only a statistical law, holding only almost all the time, for large systems, but not always. More importantly, with FTL travel into the past, the time to use in the application of the 2nd law to the time traveler would clearly be her/his own proper time, not the time of the frame with respect to which he/she was traveling backward in time, so his/her entropy could increase while the entropy of the things relative to which he/she was traveling FTL was decreasing, without contradicting the 2nd law.

Your 2nd law argument that I just, I think, demonstrated to be faulty was an argument against the causality-violation/grandfather paradox's showing that FTL travel would result in time travel into the past in some inertial frame, so could lead to causal contradictions, and so FTL travel is impossible. The causality-violation/grandfather paradox does involve travel by something into the past light cone of something, neither necessarily a person, but the argument for it is based more-or-less rigorously on the generally agreed-upon local causal structure of space-time. It shows that FTL travel could result in travel into the past. If the argument conflicts with the 2nd law of thermodynamics, which it doesn't, so much the worse for the 2nd law.

The causality objection assumes a time-orientable (that there exists a continuous choice of light-cones to be the future light-cone) universe, which I think you would agree this universe is, at least locally. Also, it is stated for a region U of space-time which is causally iseomorphic (bicontinuously isomorphic) to a convex open region of Minkowski s-t. This could be time-oriented by the local temporal direction of the increase of entropy if one of the 2 Minkowski light-cone orientations is everywhere in U the same as that of the local entropy increase orientation. The causality-violation argument against FTL travel applies to FTL travel of any controllable signal, including sending a person, which could be sent; it doesn't apply to the signal sent by a spin or polarization measurement of one of the pair to the location of the other of the pair for an entangled pair which is in the singlet state used in tests of locality via the Bell inequality, since these signals are uncontrollable by the sender. (Many people deny that a signal is sent in this situation, even though there is a correlation between the results of the distant measurements which cannot be explained by a local theory (either deterministic or probabilistic). They are wrong. What signal is sent is decided by inanimate nature, however, rather than the experimenter. Probably relevant to this is a Japanese conference paper titled approximately "Controllable and uncontrollable signaling", by Abner Shimony, which I haven't been able to obtain, but might yet if I try harder.) All this is gone into in greater detail in my unpublished paper "A Possible Severe Conflict between Quantum Mechanics and Special Relativity", available at https://www.logic-physics-settheory-math.com by clicking on "Conflict between QM and SR" at the top of the page.

The causality argument against the possibility of FTL travel doesn't depend on any judgement of which of 2 spacelike separated s-t points **a** or **b** is earlier, except for using a time-oriented region U, or on whether in traveling a s-t path in one path-direction something is going forward or backward in time. According to special relativity, if **a** & **b** are 2 s-t points, whether **a** or **b** occurs earlier is not physically significant. The causality argument against FTL travel uses only the assumption that for some pair of spacelike separated s-t points, a controllable action at one of them guarantees a specified response at the other. This is what is meant by being able to send a controllable signal FTL. The argument proceeds as follows:

Suppose, for some inertial frame IF, signaling FTL wrt IF from one s-t point to some other space-like related s-t point were possible. Then, by the principle of special relativity and the Lorentz transformations, it would be possible, for each inertial frame IF, to signal instantaneously wrt IF from each s-t point to every other space-like related s-t point. In such a case, signaling from some s-t point (**x**,t) into the past of (**x**,t), and then preventing that signaling, could be arranged as follows: Send a signal instantaneously, in some inertial frame IF, from (**x**,t) to some (**y**,t), with **y** ≠ **x**; from (**y**,t) send, instantaneously in a frame IF' moving away from **x**, a signal to some (**z**,t') in the past light cone of (**x**,t), which is possible since a simultaneity slice in IF' through (**y**,t) intersects the past light cone of (**x**,t); then from (**z**,t'), by slower-than-light signaling, send a signal to a mechanism that will prevent the initial sending of the signal from (**x**,t) to (**y**,t).
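This construction can be made concrete with the Lorentz transformations. In the sketch below (my own, in units with c = 1), the second leg is aimed back at the spatial position of the original sender (taking **z** at the sender's location), so it lands directly on the sender's past worldline and the slower-than-light third leg becomes trivial:

```python
import math

# Frame IF coordinates (t, x). Leg 1: a signal instantaneous in IF,
# from event A = (t=0, x=0) to event B = (t=0, x=d).
d = 1.0
A = (0.0, 0.0)
B = (0.0, d)

# Frame IF' moves away from x = 0 at speed u (in the +x direction).
u = 0.5
g = 1.0 / math.sqrt(1.0 - u**2)   # gamma for the boost

def to_primed(t, x):
    """Lorentz transformation IF -> IF' (with c = 1)."""
    return (g * (t - u * x), g * (x - u * t))

# Leg 2: a signal instantaneous in IF', i.e. along the IF' simultaneity
# slice through B, sent back to spatial position x = 0. Its arrival event C
# has the same t' as B; solve g*(tC - u*0) = t'_B for tC:
tB_primed, _ = to_primed(*B)
tC = tB_primed / g                # works out to -u*d
C = (tC, 0.0)
print(A, C)   # C is approximately (-0.5, 0.0): the signal arrives at the
              # sender's location at t = -u*d, BEFORE the emission at t = 0
```

Larger boost speeds u or longer first legs d push the arrival further into the sender's past; any u > 0 already suffices.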

(3) In note 22 to Ch. 8 in *LiM*, you say that Haag's theorem, which states that all quantum field theories are trivial and physically irrelevant, is a math problem with quantum field theories generally. I know very little quantum field theory, much less than you do, but I thought that Haag's theorem, which according to Streater & Wightman's *PCT, Spin and Statistics, and All That* states roughly that the interaction picture for QFTs satisfying the Wightman axioms is nonexistent, that is, that there can be no interactions in such theories, so that they are indeed physically trivial, was gotten around in modern QFTs by modifying or deleting at least one Wightman axiom. Is this belief incorrect; does, for example, the standard model assume all of those axioms, and thus, strictly speaking, is it trivial, with no interactions? The actual calculational procedures used in QFT calculations, which I gather are mostly the Feynman path-integral method, with Feynman diagrams used for bookkeeping, are said to arrive at results agreeing very closely with experimental results when they give results, but are also said to be not mathematically rigorous. When they don't give results, are they just not applicable, or do they instead perhaps specify as the answer a series that doesn't converge, so that the standard model would be theoretically inadequate because it doesn't allow interactions, but calculationally either very accurate or very wrong?

Michael Fox