originally posted by: Arbitrageur
I just demonstrated nearly 200 sigma with a not so great measurement system giving us plus or minus 3 sigma of plus or minus 3%. Do you see this now?
There are several issues with this. Your first comment about infinite numbers between 0 and 1 was so abstract as to be almost meaningless; you've somewhat addressed that here, and maybe you should have used this phrasing to begin with. You have removed some of the abstraction by talking about "a indestructible machine could start counting from 0 and never reaches 1". It's still somewhat abstract, since such a machine is likely impossible to build and the numbers don't seem to correspond to anything in the real world, but it's a valid thought experiment, like Maxwell's demon, which is also unlikely to ever be built.
originally posted by: zundier
The fact that density can vary within random values from the exact number of molecules that you pointed, doesn't prove that a indestructible machine could start counting from 0 and never reaches 1.
My analysis is that you are using a mathematical abstraction to suggest math is somehow a problem when dealing with the real world. If you just want to play with math and see what it can do, abstractions are fine, but if you want to say it's a problem with how science describes the real world, then you have to abandon your abstractions and discuss some examples where you think the application of math is a problem in the real world.
Therefore, the suggestion of this dilemma is that math - the fundamental tool of physics - is explicitly contrary to its most essential objective, which is logic. Since science is mostly based on it, and math itself has this irrationality embedded within it, I wonder if we're missing an additional tool beyond mathematics?
In conversation with Nima Arkani-Hamed
41:50 How does your job work?
(Nima explains how it might seem like it would be easy to just dream up new stuff like the Higgs then wait around 50 years for the experiment to be conducted which proves 99% of the ideas wrong)
44:30 "things don't work that way...we don't know the answers to all the questions, in fact we have very profound mysteries. But what we already know about the way the world works is so constraining that it's almost impossible (since we have to change something...), it's almost impossible to have a new idea which doesn't destroy everything that came before it. Even without a single new experiment, just agreement with all the old experiments, is enough to kill almost every idea that you might have....
It's almost impossible to solve these problems, precisely because we know so much already that anything you do is bound to screw everything up. So if you manage to find one idea that's not obviously wrong, it's a big accomplishment. Now that's not to say that it's right. But not obviously being wrong is already a huge accomplishment in this field. That's the job of a theoretical physicist."
(ditto the galaxy, the local group)? Probably. You have to go outside the local group to measure cosmological expansion. I think using the 10 megaparsec distance as a starting point would be a good idea, which is way farther than Andromeda at only 0.45 megaparsecs away:
In Newtonian terms, one says that the Solar System is "gravitationally bound" (ditto the galaxy, the local group). So the Solar System is not expanding.
So you can apply Hubble's law at distances of not less than 10 megaparsecs and not more than a few hundred megaparsecs. Within that range my guess would be somewhere around 70 km/s per megaparsec as a value for "Hubble's constant", which is the ratio of recessional velocity to distance, but you can see a whole list of computed values at that link. 70 is about what WMAP was suggesting, but the Planck mission came up with a little less and the Hubble telescope came up with a little more. Any of those could be right, or they could all be wrong, but I don't think 70 is too far off if it's wrong. (There's a quick numerical sketch of how this scales with distance after the quoted definition below.)
Hubble's law is the name for the observation in physical cosmology that:
Objects observed in deep space (extragalactic space, 10 megaparsecs (Mpc) or more) are found to have a Doppler shift interpretable as relative velocity away from Earth;
This Doppler-shift-measured velocity, of various galaxies receding from the Earth, is approximately proportional to their distance from the Earth for galaxies up to a few hundred megaparsecs away.
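Here's the quick numerical sketch I mentioned, just plugging an assumed H0 of 70 km/s per Mpc into Hubble's law for a few distances in the range where the law applies (the exact value of H0 is still debated, as noted above):

```python
# Rough sketch: recessional velocity from Hubble's law, v = H0 * d,
# using an assumed H0 = 70 km/s/Mpc (roughly the WMAP-era value discussed above).
H0 = 70.0  # km/s per megaparsec (illustrative value, not a definitive one)

for d_mpc in (10, 50, 100, 300):  # distances within the ~10 to few-hundred Mpc range
    v = H0 * d_mpc  # recessional velocity in km/s
    print(f"d = {d_mpc:4d} Mpc  ->  v = {v:6.0f} km/s")
```

At 10 Mpc that's about 700 km/s, which is roughly why the peculiar velocities of individual galaxies swamp the expansion signal much closer in than that.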
I'm afraid your comprehension of what Hamed said is way off. He said theoreticians of a century or so ago realized that there was a problem if the electron was a point: the energy density of the point would tend toward infinity, so how could the electron move around if it's dragging this infinite energy around with it? He also said they were right to be concerned about that, because with the model they were using, it was a problem. They tried to solve the problem by considering the electron not to be a point-like particle, which would require its size to be on the order of 10^-13 cm to avoid the point-like problem, but they could never make that work, and further we now know that Penning trap experiments have constrained the electron size to less than 10^-22 m.
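For reference, that ~10^-13 cm scale is the order of the classical electron radius, r_e = e^2/(4πε0 m_e c^2). A quick back-of-the-envelope check (just a sketch with standard constants, not anything from the video):

```python
# Sketch: classical electron radius r_e = e^2 / (4*pi*eps0 * m_e * c^2),
# the ~10^-13 cm scale referred to above (standard CODATA-style constants).
import math

e    = 1.602176634e-19   # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
m_e  = 9.1093837015e-31  # electron mass, kg
c    = 2.99792458e8      # speed of light, m/s

r_e = e**2 / (4 * math.pi * eps0 * m_e * c**2)
print(f"r_e = {r_e:.3e} m = {r_e * 100:.3e} cm")  # about 2.8e-13 cm
```

Compare that to the Penning trap limit of less than 10^-22 m and you can see how far short the old picture falls.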
originally posted by: delbertlarson
1. Nima commented how the old thinking concerning the classical radius of the electron was wrong. I think he used terminology similar to "wrong", but in any event I was left with that impression. Not "experiment suggests that it is wrong", but rather, "wrong". The video is long, and I don't want to be a lawyer here, so others can correct me if I am a bit off base on that. He also said the classical theory was that the electron was a shell of charge. My thoughts have always been that experiment has simply indicated that the charge was more centralized than the classical radius of the electron. That would indicate that the electron might itself be made of something much smaller, and that something might have much more mass, than what the original classical model stated. Yes, it could be the B preon, but this isn't a plug for my model, the point is that it might be something else, perhaps even smaller and more massive than the B preon.
Your view is correct, if somewhat limited. The more comprehensive model which your view doesn't seem to include (correct me if I'm wrong) is that the Higgs boson is an excitation of the Higgs field, which theoretically has a non-zero vacuum expectation value of 246 GeV and underlies the Higgs mechanism of the standard model. So I suppose you could say that if you had a different model which predicts the Higgs boson observed at the LHC, doesn't include any such vacuum expectation value, and is consistent with other results, then Hamed's statement about "looking at the vacuum" through the "lens" of the LHC was not correct. But his statement was somewhat of a metaphor for how the experimental result can be viewed through the lens of the standard model which predicted the Higgs.
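For what it's worth, that 246 GeV figure isn't arbitrary; it follows from the measured Fermi constant through the standard relation v = (√2 G_F)^(-1/2). A minimal numerical check (a sketch using the textbook relation, nothing specific to Hamed's talk):

```python
# Sketch: Higgs field vacuum expectation value from the Fermi constant,
# v = (sqrt(2) * G_F)**(-0.5), with G_F expressed in GeV^-2.
import math

G_F = 1.1663787e-5  # Fermi coupling constant, GeV^-2
v = (math.sqrt(2.0) * G_F) ** -0.5
print(f"Higgs vacuum expectation value v = {v:.1f} GeV")  # about 246 GeV
```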
2. Nima made mention of how the "big microscope" of the LHC was looking at what the vacuum was made of. I have never looked at things that way at all. My view is that we are accelerating existing entities and smashing them off of one another. When we do so we create a lot of energy in very small volumes during collisions. We then see what that energy makes. It is not required that the vacuum is constantly making these things (even virtually) when the beams are not present. At least that is my view.
Going back to my earlier explanation of Newton's model, whether you consider that right or wrong depends on how you look at it, so I still prefer George Box's framing that all models are wrong. In that view we have no doubt that space-time and relativity, in addition to Newton's laws, are all wrong. Clearly Hamed thinks space-time is wrong, and to the extent it is part of both relativity and quantum mechanics, both relativity and quantum mechanics are wrong, or you could say they are using it as a crutch.
3. Space-time must not exist. One of the "wow" moments was the discussion involving an assertion that we now know that space-time does not exist. (Again, my recollection of the exact words might be a bit off, but I recall that being the gist of it.) From the discussion, what I believe was meant was that at some level, relativity must be wrong. But such statements are two entirely different things. To say a theory (relativity) must at some level be wrong is completely different from saying that space-time doesn't exist. In my view, this use of language is what a magician does, which is to distract so as to induce awe.
Thanks for the feedback, glad you find it interesting.
originally posted by: pfishy
a reply to: Arbitrageur
Thank you again for the response. As a quick side note, this thread is the only one I have continuously monitored and have interacted with consistently since joining 2 years ago. I absolutely love this discussion. Thank you.
I'd like to say I was just testing you to see if you were on your toes, but it was really just laziness on my part: I had already spent enough time on that reply and was too lazy to click another link. My off-the-top-of-my-head (and incorrect) idea of the distance was about 2 Mly, which I knew should have been well above 0.45 Mpc, so I probably should have investigated further, but I just copied the 0.45 from the search result without clicking the link. Now that you've pointed out the discrepancy, I can inform you the search box wasn't completely wrong; rather, the result was obsolete. 0.45 Mpc was actually the first distance estimate, published in 1922, but of course I concur with your corrected, more recent estimates. Anyway, the point I was trying to make was that anything under 10 Mpc should probably be avoided, and that point stands whether using the 1922 estimate or the more recent one.
The references I read listed Andromeda as .78 mpc
Which seems to equate nicely with the 2.5 mly distance it is normally measured as. But that is not the important point here.
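Just to sanity-check the unit conversion between those two numbers (a quick sketch, using 1 Mpc ≈ 3.26 million light years):

```python
# Sketch: converting the Andromeda distance between megalight-years and megaparsecs.
MLY_PER_MPC = 3.2616  # about 3.26 million light years per megaparsec

d_mly = 2.5                  # distance in Mly, as quoted above
d_mpc = d_mly / MLY_PER_MPC  # comes out near 0.77 Mpc, consistent with the 0.78 Mpc figure
print(f"{d_mly} Mly = {d_mpc:.2f} Mpc")
```

So yes, 2.5 Mly and 0.78 Mpc are the same distance to within rounding, and either way it's well under the 10 Mpc threshold.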
Yes, this view was apparent in the context of what you wrote, which is why I provided the link refuting it. So I figure one of three things might have happened regarding the link I included in my reply to try to help you correct that view:
From what I thought I knew, gravity will hold galaxies and structures together, but the fabric of spacetime expands regardless. Basically, that is to say that although likely impossible to discern at such scales, the length between your fingertip and wrist is expanding at the same rate as an equal length of empty space, glue, Axlotl tanks or fingers anywhere else in the Cosmos.
originally posted by: Arbitrageur
... but they could never make that work and further we now know that penning trap experiments have constrained the electron size to less than 10^-22m.
That size limit I referred to was derived from an experiment using an electron in a Penning trap. Maybe you can show me exactly where they use a proton to extract the size limit but I see no mention of it here:
originally posted by: moebius
From my understanding the 10^-22 m number is an extrapolation, derived from composite particles like the proton:
|g - 2| = radius / Compton wavelength
Note they say this result is 10^4 times smaller, so even if you question this result, the 10^4 larger result which preceded it is still a problem for the old model, since that's a limit of less than 10^-16 cm, and it was determined that 10^-13 cm was required to solve the energy density problem of the old model, according to Hamed.
Received 28 August 1987
The quantum numbers of the geonium "atom", an electron in a Penning trap, have been continuously monitored in a non-destructive way by the new "continuous" Stern-Gerlach effect. In this way the g-factors of electron and positron have been determined to unprecedented precision,
½g ≡ ν_s/ν_c ≡ 1.001 159 652 188(4),
providing the most severe tests of QED and of the CPT symmetry theorem, for charged elementary particles. From the close agreement of experimental and theoretical g-values a new, 10^4 × smaller, value for the electron radius, Rg < 10^-20 cm, may be extracted.
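For a rough sense of where a bound like that comes from, here's an order-of-magnitude sketch based on the radius/Compton-wavelength relation moebius quoted above. The δa value below is only an illustrative figure for the experiment-versus-theory agreement of that era, not a number taken from the paper:

```python
# Order-of-magnitude sketch (not the paper's actual derivation): electron size bound
# from g-2, assuming the composite-particle relation R ~ |delta(g/2)| * lambda_bar_C.
lambda_bar_C = 3.8616e-11  # reduced Compton wavelength of the electron, in cm
delta_a = 2.6e-10          # assumed illustrative experiment-vs-theory agreement in g/2

R_bound = delta_a * lambda_bar_C
print(f"R < ~{R_bound:.0e} cm")  # of order 1e-20 cm, the scale quoted in the abstract
```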
I'm afraid your comprehension of what Hamed said is way off.....
The more comprehensive model which your view doesn't seem to include (correct me if I'm wrong) is that the Higgs boson is an excitation of the Higgs field.....
Hamed and his colleagues feel that time might also be "emergent"....
Maybe not. Hamed's solution relies on the same model that, according to him, seems to suggest that you and I and everyone else should collapse into tiny black holes, and we don't understand exactly why we don't (do you remember that from his discussion?). So until that question is answered, I think it's fair to say more research is needed.
originally posted by: delbertlarson
I don't believe that question is answered yet.
We (you, I and ErosA433) discussed the state of that measurement some time ago, and I still think the standard deviation claimed is not correct. (There was never a further rebuttal of my last post on that. I tried to make my point clear, and don't know if it was accepted or not.)
I'm not saying Hamed is right, just that I'm open-minded about the line of research investigating whether time and/or space-time might be emergent. He explained his rationale for why he thinks space-time has to be emergent, and he's more convinced by it than I am. I'm unconvinced, just open-minded.
For me, time is the parameter that orders events. That's it. Purely classical. Space is Euclidean and three dimensional. Purely classical. It then becomes our job to explain things on that theatre.
Time is an emergent phenomenon that is a side effect of quantum entanglement, say physicists. And they have the first experimental results to prove it
originally posted by: Arbitrageur
Feel free now to ask any physics questions, and hopefully some of the people on ATS who know physics can help answer them.
No it doesn't and Einstein specifically said he thought that idea which was being promoted by some others was not the correct way to look at it.
originally posted by: AMPTAH
Einstein's special relativity claims the mass of an object increases with speed, as seen by an observer that sees it moving.
originally posted by: Arbitrageur
Here is the actual quote from Einstein:
Mass in special relativity
originally posted by: Arbitrageur
No it doesn't and Einstein specifically said he thought that idea which was being promoted by some others was not the correct way to look at it.
...
What increases at relativistic velocities are energy and momentum, not mass.
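To make that concrete, here's a minimal sketch using the standard special relativity formulas (not a quote from the Wikipedia article): the invariant mass m never changes, while energy E = γmc² and momentum p = γmv grow with the Lorentz factor γ:

```python
# Sketch: at relativistic speeds the invariant (rest) mass m stays fixed;
# energy E = gamma*m*c^2 and momentum p = gamma*m*v are what increase.
import math

c = 2.99792458e8      # speed of light, m/s
m = 9.1093837015e-31  # electron rest mass, kg (invariant)

for beta in (0.1, 0.9, 0.99, 0.999):
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    E = gamma * m * c**2      # total energy, J
    p = gamma * m * beta * c  # relativistic momentum, kg*m/s
    print(f"beta={beta:5.3f}  gamma={gamma:7.3f}  E={E:.3e} J  p={p:.3e} kg*m/s  m={m:.3e} kg")
```

The last column never budges, which is the whole point: what the old "relativistic mass" language was really tracking is the growth of E and p.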
If you know absolutely nothing about your data, this is probably the calculation method you would need to use, and the calculation is correct. However, as I already explained by painstakingly providing a detailed example where we know more about the data (which I think was a good analogy for the data on the Higgs), the problem with this method is that it ignores the additional information we have about the Higgs data, and it arrives at the wrong result.
originally posted by: delbertlarson
For A, the standard deviation is: sd_A = sqrt[[(1.1-1.05)^2+(1-1.05)^2]/2] = sqrt[[.05^2+.05^2]/2] = .05. For B, since I chose a simple case, sd_B also is .05.
Now the overall mean is 1.15, whether you take the mean of the means, or the mean of the data directly. And I assert that the overall standard deviation is sd_Total = sqrt[[(1.1-1.15)^2 + (1-1.15)^2 + (1.3-1.15)^2 + (1.2-1.15)^2]/4]
= sqrt[[.05^2 + .15^2 + .15^2 + .05^2]/4] = sqrt[[.0025 + .0225 + .0225 + .0025]/4] = sqrt[.05/4] = 0.1118.
Yes, of course I see a huge flaw: the assumption that we don't know more about the data, when we do know more about it. The variances in the data are similar, which certainly supports the idea that the data sets may be considered to have the same variance and are therefore candidates for pooled variance or some similar technique. One feature of that approach is that, unlike your second example where the combined standard deviation ends up the same as the individual deviations, the stated uncertainty on the combined result comes out lower, which is exactly how you described the published results in the paper: "The additional data results in a situation where the overall standard deviation they state is less than the standard deviations of any of the individual sets." That is exactly one of the features of pooling the data (there's a toy numerical sketch of this at the end of this post).
And then they calculate the standard deviation:
sd_Total' = sqrt[[(1.2-1.15)^2 + (1.1-1.15)^2 + (1.2-1.15)^2 + (1.1-1.15)^2]/4]
= sqrt[[.05^2 + .05^2 + .05^2 + .05^2]/4] = sqrt[[4 x .05^2]/4] = .05
Now in the collaboration case there are four data sets, and vastly more data points in each set than I have here. The additional data results in a situation where the overall standard deviation they state is less than the standard deviations of any of the individual sets.
Do you see my point? Do you see a flaw? This seems pretty straightforward to me.
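Here's the toy numerical sketch I mentioned above, using the numbers from your example. This is just an illustration of pooled variance and the standard error of a combined mean, not the collaboration's actual procedure, and note it uses the unbiased n-1 convention, so the individual spreads come out as about 0.0707 rather than 0.05:

```python
# Toy sketch (not the collaboration's actual analysis): pooled standard deviation
# versus the standard error of the combined mean, for the two quoted data sets.
import math

A = [1.1, 1.0]
B = [1.3, 1.2]

def sample_var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # unbiased (n-1) variance

nA, nB = len(A), len(B)
pooled_var = ((nA - 1) * sample_var(A) + (nB - 1) * sample_var(B)) / (nA + nB - 2)
pooled_sd = math.sqrt(pooled_var)                  # stays near the individual spreads (~0.07)

se_combined_mean = pooled_sd / math.sqrt(nA + nB)  # shrinks as more data is pooled (~0.035)

print(f"pooled sd             = {pooled_sd:.4f}")
print(f"std error of the mean = {se_combined_mean:.4f}")
```

The second number is the kind of quantity that keeps getting smaller as the four data sets and their many events are combined, which is why the overall uncertainty a collaboration quotes can be smaller than the spread of any individual set.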