Originally posted by @sonhouse
https://phys.org/news/2018-05-proof-reveals-fundamental-limits-scientific.html
Goes way past the limits of math proofs.

There's a certain amount of "so what" to this. It puts limits on omniscience, which might be a problem for some theists, but science is more concerned with predicting bulk behaviour rather than trying to know every detail of a system.

05 May 18
Originally posted by @deepthought
There's a certain amount of "so what" to this. ...[text shortened]...

I wonder what it means in a practical sense, like a limit to the sensitivity of sensors? We already know some sensors perform better close to 0 K, but what does it mean in the real world?
Originally posted by @sonhouse
I wonder what it means in a practical sense ...[text shortened]...

It depends on the system and what you are testing, I suppose.

In biology, there have been a number of advances that allow for detailed molecular analysis of a single mammalian cell. Prior to this, we could easily chop up a tissue, extract the RNA, and run comparative analyses on the bulk tissue. This was extremely precise and highly reproducible, because you were essentially measuring the "average" gene expression across a million+ cells. With single-cell methods, we can now visualize the complete cellular heterogeneity of a tissue at the molecular level, since one cell can look very different from another genetically identical cell in the same tissue.

This is a huge technical advance for developmental biology, cancer biology, etc., but the sensitivity of the assay creates a lot of problems too. You can find hugely statistically significant differences of just 1 or 2 molecules between one cell and another. If there is a single detectable molecule in a single cell, is this practically any different from having 2 molecules? Is it natural fluctuation over time, or is it error/noise in the assay? And since you have already lysed the cells for the assay, how would you test for a functional difference?
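To see why a 1-molecule-vs-2-molecule "difference" can be indistinguishable from counting noise, here is a minimal sketch in Python. It is purely illustrative, not anything from the article or a real assay pipeline: it assumes molecule counts follow simple Poisson statistics, and the true mean of 1.5 molecules per cell is made up for the example.

```python
import math
import random

random.seed(0)

TRUE_MEAN = 1.5  # hypothetical true expression level, molecules per cell


def poisson(lam):
    """Draw one Poisson-distributed count (Knuth's algorithm, stdlib only)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1


# Bulk assay: averaging over ~a million cells is extremely stable.
n_bulk = 1_000_000
bulk_avg = sum(poisson(TRUE_MEAN) for _ in range(n_bulk)) / n_bulk
print(f"bulk average over {n_bulk} cells: {bulk_avg:.3f}")  # ~1.500

# Single-cell assay: the same underlying biology, one cell at a time.
print("ten individual cells:", [poisson(TRUE_MEAN) for _ in range(10)])

# How surprising is seeing 1 molecule in one cell and 2 in another?
# Not at all: both counts are among the most likely outcomes.
for k in range(4):
    prob = math.exp(-TRUE_MEAN) * TRUE_MEAN**k / math.factorial(k)
    print(f"P(count = {k}) = {prob:.2f}")  # 0.22, 0.33, 0.25, 0.13
```

Under these assumed statistics, a cell showing 1 molecule and a neighbour showing 2 are both entirely typical of the same underlying expression level, which is exactly why such differences are so hard to call as biology rather than noise.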
Originally posted by @wildgrass
It depends on the system and what you are testing, I suppose. ...[text shortened]...

Sounds like a Nobel prize in the making 🙂 It sounds like there are cases where a small change in molecular structure doesn't change functionality, so a bit of a window of error is allowed. I suppose that will be figured out: just how many molecules can you slide around and still maintain function?

I think in the overall picture, the limits they are talking about would be more at the subtle quantum-physics level, down at the Planck length, so we probably won't have to worry about that kind of thing for a thousand years.

07 May 18
Originally posted by @sonhouse
I think in the overall picture, the limits they are talking about ...[text shortened]...

Yeah, I (naively) thought this example of scientific limitations was one we are dealing with currently. With the reductionist approach, it seems like we know a lot more about the system, but we know very little about the functional relevance of the measurement, and since it is coincident with thousands of other "differences" between the two cells, we cannot simply test it. We just can't know everything about the biological system we are studying.