1. humy (joined 06 Mar '12, 642 moves), 24 May '19 18:56 (2 edits)
    This is about using AI to evolve better superconductors: the AI evolves a model to optimise where to position purposely made defects in the superconductor lattice so as to maximise the trapping of magnetic vortices that would otherwise tend to reduce the superconductor's critical current.

    https://phys.org/news/2019-05-power-ai-high-performance-evolution-superconductors.html
  2. humy (joined 06 Mar '12, 642 moves), 27 May '19 16:29
    Here is another AI scientific application:

    https://medicalxpress.com/news/2019-05-artificial-intelligence-class-mutations-autism.html
  3. wildgrass (joined 20 Oct '06, 9548 moves), 27 May '19 16:55 (1 edit)
    @humy said
    Here is another AI scientific application;

    https://medicalxpress.com/news/2019-05-artificial-intelligence-class-mutations-autism.html
    It'll be interesting to see whether any of their hits can be validated experimentally.
    In this case, the algorithm teaches itself how to identify biologically relevant sections of DNA and predicts whether those snippets play a role in any of more than 2,000 protein interactions that are known to affect the regulation of genes. The system also predicts whether disrupting a single pair of DNA units would have a substantial effect on those protein interactions.

    I have a basic question: Once the data output is generated from a self-taught computer, do the researchers know HOW the computer reached its conclusions as to what regions of DNA are biologically relevant?
  4. humy (joined 06 Mar '12, 642 moves), 27 May '19 22:32 (3 edits)
    @wildgrass said
    I have a basic question: Once the data output is generated from a self-taught computer, do the researchers know HOW the computer reached its conclusions as to what regions of DNA are biologically relevant?
    Unfortunately, I think the answer to your question is simply "no".
    This is because the link says:
    "The system uses an artificial intelligence technique called deep learning ..."
    In AI terminology, "deep learning" means AI learning using "deep neural networks", which are neural networks with multiple hidden layers. One downside of using neural networks, as opposed to, say, a knowledge-based AI system, is that it is generally very difficult to know HOW the network reached its conclusion, because that information is highly implicit and, especially in an extremely complex network such as a deep neural network, tends to be 'hidden' in the network's weights.
    In general, if you want to know HOW the computer reached its conclusions you must avoid using neural networks altogether, but avoiding neural networks generally makes it a lot harder to get your computer to recognise complex patterns in noisy, fuzzy data.

    One of the things I am currently researching is a way around that, using a new kind of AI 'logic' (too complicated to explain here) that should allow an AI implemented as conventional software, NOT as a neural network, to still efficiently recognise complex patterns in noisy, fuzzy data AND then tell you clearly HOW it reached its conclusions. If you know anything about AI you will know that is a highly difficult, non-trivial task, and you may well be very sceptical that I could achieve it.
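    To make the contrast above concrete, here is a minimal, hypothetical sketch in Python (not from the article; the 'TATA box' rule, the features, and the weights are all made up for illustration) of why a knowledge-based classifier can report HOW it decided, while even a one-neuron network gives only an answer:

    ```python
    # Hypothetical sketch: interpretable rule-based classification vs. a
    # tiny neural 'network' whose reasoning is buried in numeric weights.

    def rule_based_classify(snippet):
        """Knowledge-based classifier: its reasoning can be reported directly."""
        if "TATA" in snippet:          # explicit, inspectable rule
            return 1, "fired rule: contains TATA box motif"
        return 0, "fired rule: no known motif"

    def tiny_network_classify(snippet, w, b):
        """One 'neuron': the decision lives in the weights, not in rules."""
        # encode the snippet as crude numeric features (length, GC count)
        x = [len(snippet), sum(c in "GC" for c in snippet)]
        score = sum(wi * xi for wi, xi in zip(w, x)) + b
        return 1 if score > 0 else 0   # WHY it fired is hidden in w and b

    label, reason = rule_based_classify("GGTATAACC")
    print(label, reason)   # the rule system explains itself

    nn_label = tiny_network_classify("GGTATAACC", w=[-0.5, 2.0], b=-3.0)
    print(nn_label)        # the network gives only a bare verdict
    ```

    A real deep network has millions of such weights across many layers, which is why its 'reasoning' is so much harder to extract than a rule trace.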
  5. KellyJay, Standard member (Walk your Faith, USA; joined 24 May '04, 157803 moves), 28 May '19 00:52
    @humy said
    This is about using AI to evolve better superconductors: the AI evolves a model to optimise where to position purposely made defects in the superconductor lattice so as to maximise the trapping of magnetic vortices that would otherwise tend to reduce the superconductor's critical current.

    https://phys.org/news/2019-05-power-ai-high-performance-evolution-superconductors.html
    You mean to design better superconductors, not to evolve them; unless you are making a claim like "cars improve over time", which can also be put as "cars have evolved over time".
  6. wildgrass (joined 20 Oct '06, 9548 moves), 28 May '19 18:08
    @humy said
    Unfortunately I think the answer to your question is simply "No".
    This is because that link says;
    "The system uses an artificial intelligence technique called deep learning ..."
    And what is meant by "deep learning" in AI terminology is AI learning using "deep neural networks" which are neural networks with multiple hidden layers. One down side of using neural networks as oppo ...[text shortened]... that is a highly difficult non-trivial task and you may well be very sceptical I could achieve this.
    How could you know if the results were logical or gibberish?
  7. humy (joined 06 Mar '12, 642 moves), 28 May '19 19:04 (5 edits)
    @wildgrass said
    How could you know if the results were logical or gibberish?
    I would say that unless you could find a way, directly or indirectly, to empirically test the result, you couldn't know. But EVEN if you empirically tested the result and found it seemed to be vindicated, you cannot rule out the possibility that the neural network made a correct prediction despite using seriously flawed implicit 'logic' to reach that conclusion. That can happen; sometimes a bad model, despite being bad, makes a correct prediction, so it ends up being right for the wrong reason.
    That is one of the significant weaknesses of neural networks: there is no easy way to examine their very implicit, opaque inferences to see whether they were sound. But neural networks also have certain significant strengths, which is why they are still used and have proven to be very useful.
  8. wildgrass (joined 20 Oct '06, 9548 moves), 28 May '19 19:20
    @humy said
    I would say that unless you could directly/indirectly find a way to empirically test the result, you couldn't know that. But EVEN if you directly/indirectly empirically tested the result and found it seemed to be vindicated, you cannot rule out the possibility that the neural network made a correct prediction despite using seriously flawed implicit neural network 'logic' to rea ...[text shortened]... ve certain significant strengths which is why they are still used and have proven to be very useful.
    Could you build in some sort of positive control that was previously-tested? Sort of like a training set, but you would not inform the computer ahead of time that this gene locus represented a known autism-susceptibility region. If the neural network doesn't find it, this might indicate a flaw in its logic.
  9. humy (joined 06 Mar '12, 642 moves), 28 May '19 20:36 (1 edit)
    @wildgrass said
    Could you build in some sort of positive control that was previously-tested? Sort of like a training set, but you would not inform the computer ahead of time that this gene locus represented a known autism-susceptibility region. If the neural network doesn't find it, this might indicate a flaw in its logic.
    Yes, that would be an example of an indirect way to empirically test its 'logic'. I guess you could call that a 'calibration' of sorts.
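    The positive-control idea above can be sketched in a few lines of Python. Everything here is a hypothetical stand-in, not the article's actual pipeline: the 'model' just counts motifs it supposedly learned during training, and the locus sequences are invented. The point is only the validation harness: withhold a known positive, then check the trained model still ranks it highly.

    ```python
    # Hedged sketch of a positive control for a trained model: withhold a
    # known positive from training, then check the model still ranks it at
    # the top. Model and sequences are hypothetical stand-ins.

    LEARNED_MOTIFS = {"TATA", "CAAT"}   # pretend these were learned in training

    def toy_score(sequence):
        """Toy model: count how many 'learned' motifs the sequence contains."""
        return sum(motif in sequence for motif in LEARNED_MOTIFS)

    def passes_positive_control(score, held_out_positive, decoys, top_k=1):
        """True if the withheld known positive ranks in the top-k predictions.
        False would flag a flaw in the model's internal 'logic'."""
        ranked = sorted(decoys + [held_out_positive], key=score, reverse=True)
        return held_out_positive in ranked[:top_k]

    held_out = "GGTATACAATCC"                      # known positive, never trained on
    decoys = ["GGGGCCCC", "ATATGCGC", "CCCCAAAA"]  # presumed-negative loci

    print(passes_positive_control(toy_score, held_out, decoys))  # → True
    ```

    With a real deep-learning model this same harness works unchanged: only `toy_score` is swapped for the network's prediction function, which is exactly why it is a useful 'calibration' even when the model's internals are opaque.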
  10. sonhouse, Subscriber (Fast and Curious; Slatington, PA, USA; joined 28 Dec '04, 53223 moves), 29 May '19 09:38
    @kellyjay said
    To design better superconductors, not evolve unless you are making the claim like cars improve over time, can also mean cars have evolved over time.
    Do you have to put a religious spin on every aspect of science? Is your main argument humans are just too stupid to ever figure things out?