This is about using AI to evolve better superconductors: the AI evolves a model that optimises where to position deliberately introduced defects in the superconductor lattice, so as to maximise the pinning of magnetic vortices whose motion would otherwise degrade the superconductor's critical current.
https://phys.org/news/2019-05-power-ai-high-performance-evolution-superconductors.html
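A hypothetical sketch of the kind of evolutionary loop this involves (the lattice size, defect count and fitness function below are all made-up placeholders, not the actual simulation from the paper):

import math
import random

LATTICE = 1.0        # normalised lattice edge length (assumption)
N_DEFECTS = 8        # number of pinning defects to place (assumption)
POP, GENS = 50, 200  # population size and generations (assumptions)

def fitness(layout):
    # Placeholder objective: in the real work this would be a vortex-dynamics
    # simulation returning e.g. the critical current for this defect layout.
    # Here we simply reward layouts whose defects are well spread out.
    return min(math.dist(p, q)
               for i, p in enumerate(layout)
               for q in layout[i + 1:])

def random_layout():
    return [(random.random() * LATTICE, random.random() * LATTICE)
            for _ in range(N_DEFECTS)]

def mutate(layout, sigma=0.05):
    # Jitter each defect position slightly, clamped to the lattice.
    clamp = lambda v: min(max(v, 0.0), LATTICE)
    return [(clamp(x + random.gauss(0, sigma)), clamp(y + random.gauss(0, sigma)))
            for x, y in layout]

population = [random_layout() for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness, reverse=True)  # best layouts first
    survivors = population[:POP // 2]           # selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP - len(survivors))]

best = max(population, key=fitness)
print(best)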
@humy said
This is about using AI to evolve better superconductors ...[text shortened]...

It'll be interesting to see whether any of their hits can be validated experimentally.
Here is another AI scientific application;
https://medicalxpress.com/news/2019-05-artificial-intelligence-class-mutations-autism.html
In this case, the algorithm teaches itself how to identify biologically relevant sections of DNA and predicts whether those snippets play a role in any of more than 2,000 protein interactions that are known to affect the regulation of genes. The system also predicts whether disrupting a single pair of DNA units would have a substantial effect on those protein interactions.
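As a very rough illustration of what such a deep-learning model looks like in code (this is a generic toy, not the actual architecture from the study; the window size, layer sizes and the mutation example are all assumptions):

import torch
import torch.nn as nn

N_TASKS = 2000   # one output per protein interaction (assumption)
WINDOW = 1000    # length of the DNA snippet in base pairs (assumption)

class DnaNet(nn.Module):
    # Toy model: one-hot DNA snippet in, one probability per interaction out.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(4, 64, kernel_size=8), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1), nn.Flatten(),
            nn.Linear(64, N_TASKS), nn.Sigmoid(),
        )

    def forward(self, x):  # x: (batch, 4, WINDOW) one-hot A/C/G/T
        return self.net(x)

model = DnaNet()
snippet = torch.zeros(1, 4, WINDOW)
snippet[0, 0, :] = 1.0              # fake all-'A' snippet, just for the shapes
scores = model(snippet)             # (1, N_TASKS) predicted probabilities

# "Disrupting a single pair of DNA units" can be probed by mutating one
# position and re-scoring:
mutated = snippet.clone()
mutated[0, :, 500] = 0.0
mutated[0, 2, 500] = 1.0            # A -> G at position 500
effect = (model(mutated) - scores).abs().max()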
I have a basic question: Once the data output is generated from a self-taught computer, do the researchers know HOW the computer reached its conclusions as to what regions of DNA are biologically relevant?
@wildgrass said
I have a basic question: Once the data output is generated from a self-taught computer, do the researchers know HOW the computer reached its conclusions as to what regions of DNA are biologically relevant?

Unfortunately I think the answer to your question is simply "No".
This is because that link says;
"The system uses an artificial intelligence technique called deep learning ..."
And what is meant by "deep learning" in AI terminology is learning with "deep neural networks", which are neural networks with multiple hidden layers. One downside of using neural networks, as opposed to, say, an AI knowledge-based system, is that it is generally very difficult to know HOW the network reached its conclusion, because that information is highly implicit and, especially in an extremely complex network like a deep neural network, tends to be 'hidden' in the weights.
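To make that concrete: even a tiny multi-layer network boils down to nothing but matrices of learned weights. A minimal sketch (all sizes and values here are arbitrary):

import numpy as np

rng = np.random.default_rng(0)

# A toy "deep" network: 10 inputs -> two hidden layers -> 1 output.
W1 = rng.normal(size=(10, 16))
W2 = rng.normal(size=(16, 16))
W3 = rng.normal(size=(16, 1))

def predict(x):
    h1 = np.maximum(x @ W1, 0)            # hidden layer 1 (ReLU)
    h2 = np.maximum(h1 @ W2, 0)           # hidden layer 2 (ReLU)
    return 1 / (1 + np.exp(-(h2 @ W3)))   # output 'probability'

x = rng.normal(size=(1, 10))
print(predict(x))  # a confident-looking score...
print(W2)          # ...but the 'reasoning' behind it is just this wall of numbers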
In general, if you want to know HOW the computer reached its conclusions, you must avoid using neural networks altogether; but avoiding neural networks generally makes it a lot harder to get your computer to recognize complex patterns in noisy and fuzzy data.
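By contrast, a more transparent (if often less powerful) method such as a decision tree can print out exactly how it reached each conclusion. A minimal scikit-learn sketch on made-up data (the feature names are invented for illustration):

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                   # three made-up features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # made-up rule to learn

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

# Unlike a neural network, the fitted model IS its own explanation:
print(export_text(tree, feature_names=["gc_content", "length", "motif_score"]))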
One of the things I am currently researching is a way around that, using a new kind of AI 'logic' (too complicated to explain here) that should allow an AI represented as software only on a conventional computer, so NOT a neural network, to still efficiently recognize complex patterns in noisy and fuzzy data AND then tell you clearly HOW it reached its conclusions. If you know anything about AI you will know that is a highly difficult, non-trivial task, and you may well be very sceptical that I could achieve this.
@humy said
This is about using AI to evolve better superconductors ...[text shortened]...

You mean to design better superconductors, not evolve them; unless you are making the claim that, just as cars improve over time, cars can also be said to have 'evolved' over time.
@humy said
Unfortunately I think the answer to your question is simply "No".
This is because that link says;
"The system uses an artificial intelligence technique called deep learning ..."
And what is meant by "deep learning" in AI terminology is learning with "deep neural networks", which are neural networks with multiple hidden layers. One downside of using neural networks, as oppo ...[text shortened]... that is a highly difficult, non-trivial task, and you may well be very sceptical that I could achieve this.

How could you know if the results were logical or gibberish?
@wildgrass said
How could you know if the results were logical or gibberish?

I would say that unless you could find a way to directly or indirectly test the result empirically, you couldn't know that. But EVEN if you empirically tested the result and found it seemed to be vindicated, you cannot rule out the possibility that the neural network made a correct prediction despite using seriously flawed implicit 'logic' to reach that conclusion. That can happen: sometimes a bad model, despite being bad, makes a correct prediction, so it ends up being right for the wrong reason.
That is one of the significant weaknesses of neural networks: there is no easy way to inspect their highly implicit, opaque inferences to see whether they were sound. But neural networks also have certain significant strengths, which is why they are still used and have proven to be very useful.
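A small sketch of being 'right for the wrong reason' (the data here is fabricated purely for illustration): a model leans on a spurious correlate, scores well, then falls apart when that correlation breaks.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=500)
real = y + 2.0 * rng.normal(size=500)       # weak true signal
spurious = y + 0.05 * rng.normal(size=500)  # strong but incidental signal
X = np.column_stack([real, spurious])

model = LogisticRegression().fit(X, y)
print(model.score(X, y))    # near-perfect accuracy, for the wrong reason

# If the spurious correlation breaks (new lab, new population, etc.),
# the seemingly 'vindicated' model collapses:
X_new = np.column_stack([real, rng.normal(size=500)])
print(model.score(X_new, y))  # accuracy drops sharply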
@humy said
I would say that unless you could find a way to directly or indirectly test the result empirically, you couldn't know that. ...[text shortened]... have certain significant strengths, which is why they are still used and have proven to be very useful.

Could you build in some sort of positive control that was previously tested? Sort of like a training set, but you would not inform the computer ahead of time that this gene locus represented a known autism-susceptibility region. If the neural network doesn't find it, this might indicate a flaw in its logic.
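A rough sketch of that blind positive-control idea (everything here is hypothetical: the locus names, the stand-in model, and the threshold):

KNOWN_POSITIVE = "locus_42"   # a known susceptibility region, hidden from training

class DummyModel:
    # Stand-in for the trained network: scores a locus between 0 and 1.
    def score_locus(self, locus):
        return 0.95 if locus == "locus_42" else 0.1

def blind_control_check(model, loci, threshold=0.9):
    # Score every candidate locus; the known positive is mixed in unlabelled.
    scores = {locus: model.score_locus(locus) for locus in loci}
    ranked = sorted(scores, key=scores.get, reverse=True)
    rank = ranked.index(KNOWN_POSITIVE) + 1
    # If the model fails to flag a region we already know matters,
    # that suggests a flaw in its internal 'logic'.
    return scores[KNOWN_POSITIVE] >= threshold, rank

loci = ["locus_%d" % i for i in range(100)]
print(blind_control_check(DummyModel(), loci))   # (True, 1) for this dummy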
@wildgrass said
Could you build in some sort of positive control that was previously tested? ...[text shortened]... If the neural network doesn't find it, this might indicate a flaw in its logic.

Yes, that would be an example of an indirect way to empirically test its 'logic'. I guess you could call that a 'calibration' of sorts.
@kellyjay said
You mean to design better superconductors, not evolve them; unless you are making the claim that, just as cars improve over time, cars can also be said to have 'evolved' over time.

Do you have to put a religious spin on every aspect of science? Is your main argument that humans are just too stupid to ever figure things out?