using AI to evolve better superconductors

Science

h

Joined
06 Mar 12
Moves
642
Clock
24 May 19
2 edits

This is about using AI to evolve better superconductors: the AI evolves a model that optimises where to position purposely made defects in the superconductor lattice so as to maximise the trapping (pinning) of magnetic vortices that would otherwise tend to reduce the superconductor's critical current.

https://phys.org/news/2019-05-power-ai-high-performance-evolution-superconductors.html
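Roughly, the evolutionary approach the article describes can be sketched like this. Note this is a toy illustration, not the researchers' actual code: they score candidate defect layouts with large-scale vortex-dynamics simulations, whereas the fitness function below is a made-up stand-in that merely rewards well-spread defects.

```python
import random

# Toy sketch: evolve (x, y) positions for a handful of artificial pinning
# defects on a unit square. The fitness function is a hypothetical proxy
# (sum of pairwise distances, so defects spread out), standing in for a
# real vortex-pinning simulation.

N_DEFECTS = 8
POP_SIZE = 30
GENERATIONS = 40

def fitness(layout):
    score = 0.0
    for i in range(len(layout)):
        for j in range(i + 1, len(layout)):
            dx = layout[i][0] - layout[j][0]
            dy = layout[i][1] - layout[j][1]
            score += (dx * dx + dy * dy) ** 0.5
    return score

def random_layout():
    return [(random.random(), random.random()) for _ in range(N_DEFECTS)]

def mutate(layout, sigma=0.05):
    # Jitter each defect position slightly, clamped to the unit square.
    return [(min(1.0, max(0.0, x + random.gauss(0, sigma))),
             min(1.0, max(0.0, y + random.gauss(0, sigma))))
            for x, y in layout]

random.seed(0)
population = [random_layout() for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]                  # keep the fitter half
    population = parents + [mutate(p) for p in parents]   # refill by mutation

best = max(population, key=fitness)
```

The real method differs in the representation and the simulator, but the loop is the same shape: score layouts, keep the best, perturb, repeat.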

h

Joined
06 Mar 12
Moves
642
Clock
27 May 19

Here is another scientific application of AI:

https://medicalxpress.com/news/2019-05-artificial-intelligence-class-mutations-autism.html

w

Joined
20 Oct 06
Moves
9627
Clock
27 May 19
1 edit

@humy said
Here is another AI scientific application;

https://medicalxpress.com/news/2019-05-artificial-intelligence-class-mutations-autism.html
It'll be interesting to see whether any of their hits can be validated experimentally.
In this case, the algorithm teaches itself how to identify biologically relevant sections of DNA and predicts whether those snippets play a role in any of more than 2,000 protein interactions that are known to affect the regulation of genes. The system also predicts whether disrupting a single pair of DNA units would have a substantial effect on those protein interactions.

I have a basic question: Once the data output is generated from a self-taught computer, do the researchers know HOW the computer reached its conclusions as to what regions of DNA are biologically relevant?

h

Joined
06 Mar 12
Moves
642
Clock
27 May 19
3 edits

@wildgrass said
I have a basic question: Once the data output is generated from a self-taught computer, do the researchers know HOW the computer reached its conclusions as to what regions of DNA are biologically relevant?
Unfortunately I think the answer to your question is simply "No".
This is because that link says;
"The system uses an artificial intelligence technique called deep learning ..."
And what is meant by "deep learning" in AI terminology is learning with "deep neural networks", i.e. neural networks with multiple hidden layers. One downside of using neural networks, as opposed to, say, an AI knowledge-based system, is that it is generally very difficult to know HOW the network reached its conclusion, because that information is highly implicit and, especially in an extremely complex network like a deep neural network, tends to be 'hidden' in the weights.
In general, if you want to know HOW the computer reached its conclusions you must avoid neural networks altogether, but avoiding them generally makes it a lot harder to get your computer to recognize complex patterns in noisy and fuzzy data.
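To see why the 'reasoning' is hidden, here is a minimal sketch of an untrained deep network (made-up layer sizes): a "deep" network is just several layers of weight matrices composed together, and after training, whatever it has learned is smeared across those numbers rather than stated as any human-readable rule.

```python
import numpy as np

# Minimal deep-network forward pass: input layer, two hidden layers, output.
rng = np.random.default_rng(0)
layer_sizes = [10, 32, 32, 1]

# One weight matrix per pair of adjacent layers (~1400 numbers in total).
weights = [rng.normal(size=(m, n))
           for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    for W in weights[:-1]:
        x = np.maximum(0, x @ W)   # ReLU hidden layers
    return x @ weights[-1]         # linear output layer

x = rng.normal(size=(1, 10))
y = forward(x)
# The output y is produced by all ~1400 weights acting together;
# no individual weight corresponds to an inspectable 'reason'.
```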

One of the things I am currently researching is a way around that: a new kind of AI 'logic' (too complicated to explain here) that should allow an AI implemented as conventional software, NOT a neural network, to still efficiently recognize complex patterns in noisy and fuzzy data AND then tell you clearly HOW it reached its conclusions. If you know anything about AI you will know that is a highly difficult, non-trivial task, and you may well be very sceptical that I could achieve it.

KellyJay
Walk your Faith

USA

Joined
24 May 04
Moves
160375
Clock
28 May 19

@humy said
This is about using AI to evolve better superconductors: the AI evolves a model that optimises where to position purposely made defects in the superconductor lattice so as to maximise the trapping (pinning) of magnetic vortices that would otherwise tend to reduce the superconductor's critical current.

https://phys.org/news/2019-05-power-ai-high-performance-evolution-superconductors.html
They design better superconductors, they don't evolve them; unless you are using "evolve" loosely, the way saying cars have improved over time can also be put as cars having evolved over time.

w

Joined
20 Oct 06
Moves
9627
Clock
28 May 19

@humy said
Unfortunately I think the answer to your question is simply "No".
This is because that link says;
"The system uses an artificial intelligence technique called deep learning ..."
And what is meant by "deep learning" in AI terminology is AI learning using "deep neural networks" which are neural networks with multiple hidden layers. One down side of using neural networks as oppo ...[text shortened]... that is a highly difficult non-trivial task and you may well be very sceptical I could achieve this.
How could you know if the results were logical or gibberish?

h

Joined
06 Mar 12
Moves
642
Clock
28 May 19
5 edits

@wildgrass said
How could you know if the results were logical or gibberish?
I would say that unless you could directly or indirectly find a way to empirically test the result, you couldn't know that. But EVEN if you empirically tested the result and it seemed to be vindicated, you cannot rule out the possibility that the neural network made a correct prediction despite using seriously flawed implicit 'logic' to reach that conclusion. That can happen; sometimes a bad model, despite being bad, makes a correct prediction, so it ends up being right for the wrong reason.
That is one of the significant weaknesses of neural networks: there is no easy way to inspect their highly implicit, opaque inferences to see whether they were sound. But neural networks also have certain significant strengths, which is why they are still used and have proven very useful.

w

Joined
20 Oct 06
Moves
9627
Clock
28 May 19

@humy said
I would say that unless you could directly/indirectly find a way to empirically test the result, you couldn't know that. But EVEN if you directly/indirectly empirically tested the result and found it seemed to be vindicated, you cannot rule out the possibility that the neural network made a correct prediction despite using seriously flawed implicit neural network 'logic' to rea ...[text shortened]... ve certain significant strengths which is why they are still used and have proven to be very useful.
Could you build in some sort of positive control that was previously tested? Sort of like a training set, but you would not inform the computer ahead of time that this gene locus represented a known autism-susceptibility region. If the neural network doesn't find it, this might indicate a flaw in its logic.

h

Joined
06 Mar 12
Moves
642
Clock
28 May 19
1 edit

@wildgrass said
Could you build in some sort of positive control that was previously-tested? Sort of like a training set, but you would not inform the computer ahead of time that this gene locus represented a known autism-susceptibility region. If the neural network doesn't find it, this might indicate a flaw in its logic.
Yes, that would be an example of an indirect way to empirically test its 'logic'. I guess you could call that a 'calibration' of sorts.
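The positive-control idea can be sketched very simply. All names and scores below are hypothetical, made up for illustration: hold out a locus already known to be disease-associated, never label it for the model, and after training check whether the model independently flags it.

```python
# Sketch of a positive-control check on a trained model's per-locus scores.
# model_scores maps locus -> the model's predicted relevance score (0 to 1).

def positive_control_passes(model_scores, control_locus, threshold=0.9):
    """Pass if the held-out known-positive locus scores above threshold."""
    return model_scores.get(control_locus, 0.0) >= threshold

# Toy usage with made-up loci and scores:
scores = {"chr7:146000000": 0.95, "chr2:51000000": 0.12}
ok = positive_control_passes(scores, "chr7:146000000")
```

A single control proves little on its own; in practice one would hold out many known positives (and known negatives) and look at the hit rate, which is essentially the 'calibration' described above.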

s
Fast and Curious

slatington, pa, usa

Joined
28 Dec 04
Moves
53321
Clock
29 May 19

@kellyjay said
To design better superconductors, not evolve unless you are making the claim like cars improve over time, can also mean cars have evolved over time.
Do you have to put a religious spin on every aspect of science? Is your main argument humans are just too stupid to ever figure things out?
