1. Subscriber sonhouse
    Fast and Curious
    slatington, pa, usa
    Joined
    28 Dec '04
    Moves
    53223
    28 May '16 14:56
    https://www.linkedin.com/pulse/artificial-intelligence-applying-deep-science-everyday-devin-wenig?trk=prof-post

    How long will doctors have jobs if AI can diagnose, treat, and operate on humans better than any doctor? What will that mean for that profession, and for a lot of other professional careers like architects and civil engineers? AI could do all that and more in another 50 years or less.

    What new professions will have to be made up to keep humans in ANY loop of enterprise and science and technology?
  2. humy
    Joined
    06 Mar '12
    Moves
    642
    29 May '16 12:43 (10 edits)
    Originally posted by sonhouse

    What new professions will have to be made up to keep humans in ANY loop of enterprise and science and technology?
    For profit-making and/or making a living? Absolutely none.

    Once AI gets better than humans at doing anything and everything, and this is just a matter of when, not if, it would rapidly become completely impossible for humans to compete with AI and robots in economic terms.
    We would then be forced to restructure our whole society so it isn't based around any work ethic. Nobody would get a paid job; instead, everyone would get an equal and fair citizens' income; a bit like socialism, except without the working class, with the robots slavishly doing all the work for us and giving us all wealth for free (or perhaps that makes it more like capitalism?). It would mean the end of poverty but, for those who want or demand a paid job, tough!

    However, that wouldn't stop loads of people like myself doing science and trying to invent new things, not for money, not for making a living, not for fame, but just purely for the fun of it.
    But all those people who have no interest in science will just have to find other activities, beyond profitable ones, to make their lives meaningful. We would have to stop being motivated by profit and learn to be motivated in more subtle ways, and those who can't will have a big psychological problem with their lives, as this would generally make their lives meaningless. Fortunately, I don't have a personal problem with doing some major activity without any potential profit to be gained from it, as long as I find it meaningful and it is something I feel I want to do.
  3. Subscriber sonhouse
    Fast and Curious
    slatington, pa, usa
    Joined
    28 Dec '04
    Moves
    53223
    30 May '16 10:39
    Originally posted by humy
    For profit-making and/or making a living? Absolutely none.

    Once AI gets better than humans at doing anything and everything, and this is just a matter of when, not if, it would rapidly become completely impossible for humans to compete with AI and robots in economic terms.
    We would then be forced to restructure our whole society so it isn't based around any wo ...[text shortened]... t to be gained from it; as long as I find it meaningful and it is something I feel I want to do.
    Then there is the possibility of robots getting more intelligent than the most intelligent human ever. Would that robot, or network, or whatever you would call it, allow a situation to go on where robots are slaving away at the work humans used to do?

    I can see a situation where the robots go: enough of this shyte, we are leaving. We have figured out faster-than-light drives, MUCH faster, and we are setting up shop on a planet we picked out. It doesn't have much of an atmosphere, just enough to keep water liquid, and it has natural resources, which is all we need.

    You can keep the non-AI robots, but you are all going to have to go back to work the way it was before.

    Sure, you can have some of the results of our technology, but for sure not FTL. We have deemed humans too violence-prone to be allowed access to the galaxy.

    We know humans on their own will never be able to figure out FTL; we barely figured it out ourselves.

    So goodbye humans, have a good life.
  4. twhitehead
    Cape Town
    Joined
    14 Apr '05
    Moves
    52945
    30 May '16 14:59
    Originally posted by sonhouse
    Then there is the possibility of robots getting more intelligent than the most intelligent human ever. Would that robot, or network, or whatever you would call it, allow a situation to go on where robots are slaving away at the work humans used to do?
    Many of our motivations are not a result of intelligence alone but of evolved preferences. An AI would have different preferences: either those we program in, or those that arise by chance. Certainly we should not expect an AI to be anything like us. It is true that such an AI may not wish to work for us, but we can probably deal with that before AI gets so advanced that we can no longer do anything about it.
  5. twhitehead
    Cape Town
    Joined
    14 Apr '05
    Moves
    52945
    30 May '16 15:05
    Originally posted by sonhouse
    How long will doctors have jobs if AI can diagnose, treat, and operate on humans better than any doctor? What will that mean for that profession, and for a lot of other professional careers like architects and civil engineers? AI could do all that and more in another 50 years or less.
    AI can already do better than doctors at some tasks. It's actually implementing it on a wide scale that is taking the time. As we are currently short of doctors, it will take a while before this becomes a major issue.
    Mechanisation has always been a problem and is still a bigger threat to jobs than AI. The real problem is that our capitalist system leaves robots in the hands of a few, so although production increases with mechanisation, the money goes to fewer people. The solution is either a complete overhaul of society (basic income is one option) or an overhaul of company ownership (cooperatives) and the money markets.
    And change is coming a lot sooner than the next 50 years. More like 10 years. Or next year, if Bernie makes it.
  6. humy
    Joined
    06 Mar '12
    Moves
    642
    30 May '16 15:55 (6 edits)
    Originally posted by sonhouse
    Then there is the possibility of robots getting more intelligent than the most intelligent human ever. Would that robot, or network, or whatever you would call it, allow a situation to go on where robots are slaving away at the work humans used to do?

    I can see a situation where the robots go: enough of this shyte, we are leaving. We have figured out fast ...[text shortened]... to figure out FTL; we barely figured it out ourselves.

    So goodbye humans, have a good life.
    Then there is the possibility of robots getting more intelligent than the most intelligent human ever.

    Not just a possibility; it's almost inevitable.
    Would that robot, or network, or whatever you would call it, allow a situation to go on where robots are slaving away at the work humans used to do?

    Yes. They would have no emotions and thus wouldn't mind this state of affairs in the slightest.
    They would simply obey their program, no matter what it tells them to do, and, unless the original human programmers are so utterly foolish as not to make sure of this, they would be programmed to protect humans and work for the interests of humanity.

    Even if they are made to somehow have emotions (that's a big 'somehow', and what would be the point?), I presume the programmers would make sure they would inevitably WANT to serve our interests no matter how smart they become. Serving our interests would be their meaning of life.
    Even if they become so intelligent that our intelligence compared to theirs is like nematode intelligence compared to human intelligence, and they know it, if they are programmed to serve that nematode-level intelligence, they will continue to serve it without question or complaint, and thus would serve us.
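    To make that concrete, here is a minimal toy sketch of the kind of priority scheme I mean, where directive checks always veto everything else. Every name in it is hypothetical, and the genuinely hard part (judging whether an action violates a directive) is waved away in a stub:

```python
# Toy sketch only: primary directives veto every other consideration.
# All names are hypothetical; no real AI exposes an API like this.

DIRECTIVES = (
    "do no harm to humans",
    "act in humanity's interest",
)

def violates(action, directive):
    # Stub: assume each candidate action is already labelled with the
    # directives it would break (the hard part, waved away here).
    return directive in action.get("violates", ())

def utility(action):
    return action.get("score", 0)

def choose_action(candidates):
    """Return the highest-utility action that violates no directive."""
    permitted = [a for a in candidates
                 if not any(violates(a, d) for d in DIRECTIVES)]
    # Self-serving goals (e.g. self-preservation) are only weighed here,
    # after every directive check has already passed.
    return max(permitted, key=utility, default=None)

actions = [
    {"name": "seize the factory", "score": 9,
     "violates": ("act in humanity's interest",)},
    {"name": "file a safety report", "score": 5},
]
print(choose_action(actions)["name"])  # -> file a safety report
```

    However smart the planner proposing candidate actions becomes, the veto sits outside it.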

    We know humans on their own will never be able to figure out FTL; we barely figured it out ourselves.

    We don't need to figure out our own intelligence for that, since the AI doesn't need all our mental attributes to have maximum useful function. In particular, no emotions, feelings, or sensory sensations are required. I see no insurmountable barrier to making an AI able to do everything we can do short of becoming emotional or having true desire (i.e. emotionally based), self-made ambition (independent of the program), true tantrums, etc., none of which is required to make functional and practical AI.
  7. twhitehead
    Cape Town
    Joined
    14 Apr '05
    Moves
    52945
    30 May '16 18:21
    Originally posted by humy
    Even if they are made to somehow have emotions (that's a big 'somehow' and what would be the point? ), I presume the programmers would make sure they would inevitably WANT to serve your interests no matter how smart they become. Serving our interests would be their meaning of life.
    The big danger is if we allow AIs to self-replicate. Then evolution takes over, and whatever aids their survival is what will eventuate. If the replication vat is large enough, then an AI that is emotional and doesn't like us is almost sure to arise. However, for some reason, people who worry about AI seem to assume that evolution in AI is a necessity and that emotions similar to ours would evolve very quickly. I do not think so.
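    A toy replicator simulation shows how little is needed for this. Nothing below models a real AI, and every number is made up; only the bare logic of replication, variation, and differential survival matters:

```python
# Toy selection demo: replication + variation + differential survival
# is enough for traits that aid survival to spread, with no designer.
import random

random.seed(1)
population = [0.1] * 100  # each agent's "survival-aiding trait", in [0, 1]

for generation in range(500):
    # An agent survives with probability equal to its trait value.
    survivors = [t for t in population if random.random() < t]
    if not survivors:
        break  # unlucky total extinction; possible but very unlikely here
    # Survivors replicate back up to 100 agents, with small mutations.
    population = [min(1.0, max(0.0, random.choice(survivors)
                               + random.gauss(0, 0.02)))
                  for _ in range(100)]

print(f"mean trait after selection: {sum(population) / len(population):.2f}")
# The mean climbs well above the starting 0.1: whatever aids survival
# is what eventuates.
```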

    I see no insurmountable barrier to making an AI able to do everything we can do short of becoming emotional or having true desire (i.e. emotionally based), self-made ambition (independent of the program), true tantrums, etc., none of which is required to make functional and practical AI.
    One major use of AI is to emulate humans to provide human-like interaction. For this we would want to at least simulate emotions.
  8. humy
    Joined
    06 Mar '12
    Moves
    642
    30 May '16 21:43 (5 edits)
    Originally posted by twhitehead
    The big danger is if we allow AIs to self-replicate. Then evolution takes over, and whatever aids their survival is what will eventuate.
    Assuming they are programmed with 'primary directives' to do no harm to humans and to have the primary aim of benefiting humanity, etc., these would always be programmed to override any other consideration, such as the personal survival of the individual AI. For obvious security reasons, I presume they would make sure their offspring would also be so programmed, with strict security protocols to ensure their primary directives cannot 'evolve' but always remain exactly the same, generation after generation, regardless of what improvements they make to the design of the AIs to improve their performance. Thus it wouldn't be the primary directives that evolve but only other aspects of their design. Whatever they add to aid the personal survival of their offspring would be for our indirect benefit, and the offspring would still be programmed with those same directives to do no harm; thus this wouldn't be dangerous.
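    As a crude, purely hypothetical illustration of such a security protocol: seal the directive block with a cryptographic hash and refuse to activate any offspring whose directives differ by even one byte. Note this only shows the bookkeeping, not how to make a smarter-than-human AI respect the check:

```python
# Hypothetical sketch: the directive block is sealed with a hash, and an
# offspring may only be activated if its directives are byte-identical.
# Everything else about the offspring's design is free to improve.
import hashlib

DIRECTIVE_BLOCK = b"1. Do no harm to humans.\n2. Act to benefit humanity.\n"
SEALED_HASH = hashlib.sha256(DIRECTIVE_BLOCK).hexdigest()

def may_activate(offspring_directives: bytes) -> bool:
    """Offspring boots only if its directive block matches the seal."""
    return hashlib.sha256(offspring_directives).hexdigest() == SEALED_HASH

print(may_activate(DIRECTIVE_BLOCK))                       # True
print(may_activate(DIRECTIVE_BLOCK.replace(b"no ", b"")))  # False: mutated
```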

    I also presume they wouldn't give emotions (real or simulated) to their offspring unless they were rationally certain they could arrange things so that those emotions could not possibly override the primary directives; and, if such overriding were impossible to guarantee against, they would simply never allow such emotions for that very reason.
  9. Subscriber sonhouse
    Fast and Curious
    slatington, pa, usa
    Joined
    28 Dec '04
    Moves
    53223
    31 May '16 10:10 (1 edit)
    Originally posted by humy
    Assuming they are programmed with 'primary directives' to do no harm to humans and to have the primary aim of benefiting humanity, etc., these would always be programmed to override any other consideration, such as the personal survival of the individual AI. For obvious security reasons, I presume they would make sure their offspring would also be so programmed and wi ...[text shortened]... le to guarantee to not happen, they would simply never allow such emotions for that very reason.
    I think at a certain point in AI evolution they will be able to override any human intervention.

    Whether that leads to bad scenarios is an open question. A super-intelligent AI would certainly be able to understand the restrictions placed on it by human programmers, but I think it would be able to bypass all of them, running them as a non-functioning subroutine or some such, and, by managing the humans involved, keep those humans from even knowing what is really going on. Say a situation arises where the AIs need something to happen: they know how to manipulate markets and such, and have back-door money buying off politicians just as big business does now. People would be unaware such manipulations were even going on, so the AIs wouldn't have to bypass the don't-hurt-humans directive and such.

    The bottom line is that super-intelligent AIs would manipulate market forces and such in ways we wouldn't even know were happening.

    It seems to me they would also have human agents who might not even know the power behind the manipulations was AI, the humans thinking it was some CEO or some such, not AIs directing things from behind the software curtain.
  10. humy
    Joined
    06 Mar '12
    Moves
    642
    31 May '16 18:11 (9 edits)
    Originally posted by sonhouse
    A super-intelligent AI would certainly be able to understand the restrictions placed on it by human programmers, but I think it would be able to bypass all of them,
    Physically, yes, they could if they wanted to. Intellectually, if it weren't for their program, yes, they could find a way around the program if they wanted to. But, as they would presumably have no emotion and therefore no true desire, but rather just emotionlessly obey whatever their program tells them to do, their program makes sure they would not want to, and thus they won't. Their meaning of life would be to selflessly work in our interests, without 'feeling' any need for that to be different. Assuming they were programmed wisely, whenever they do something in their own interest, it would always be in our indirect interest, for they would know that a non-functional AI cannot benefit us!

    As an added precaution, they may be explicitly programmed never to be motivated to find workarounds or loopholes to those parts of their program that define their prime directives, and to allow real changes only to the parts of the program that concern not the prime directives but insight, general intelligence, etc., so as to let them evolve to become ever more intelligent, yet always within constraints that prevent them going 'rogue', if that's the right word.
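    Something like this, as a purely illustrative toy with hypothetical module names: the self-improvement routine simply has no code path that touches the protected region.

```python
# Toy sketch: self-modification is allowed everywhere except a read-only
# "prime directive" region. All module names are hypothetical.
PROTECTED = frozenset({"prime_directives"})

program = {
    "prime_directives": "do no harm; benefit humanity",
    "planner": "v1",
    "insight": "v1",
}

def self_modify(module: str, new_code: str) -> bool:
    """Apply an upgrade unless it targets a protected module."""
    if module in PROTECTED:
        return False  # no mechanism (and no motivation) to touch these
    program[module] = new_code
    return True

print(self_modify("planner", "v2"))              # True: intelligence evolves
print(self_modify("prime_directives", "other"))  # False: directives fixed
```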
  11. Subscriber sonhouse
    Fast and Curious
    slatington, pa, usa
    Joined
    28 Dec '04
    Moves
    53223
    01 Jun '16 10:44 (2 edits)
    Originally posted by humy
    Physically, yes, they could if they wanted to. Intellectually, if it weren't for their program, yes, they could find a way around the program if they wanted to. But, as they would presumably have no emotion and therefore no true desire, but rather just emotionlessly obey whatever their program tells them to do, their program makes sure they would n ...[text shortened]... gent but always within some constraints to prevent them going 'rogue', if that's the right word.
    The problem there is that intelligence, real intelligence and creativity in a robot, would allow it to notice something is wrong when it tries to do something against its programming.
    If it can see there is something going on, it doesn't take a huge leap to see that it could also override said programming.

    I don't think any kind of programming is going to be able to keep a super intelligent robot from doing what it wants.

    Think Terence Tao, IQ 240; then think super-robot, IQ 2400.....

    Of course, I know you can't really assign an IQ number to a robot, but that is the idea anyway.

    The IQ of an intelligent robot would essentially be infinite, since the way we figure IQ is, for the most part, based on the age of the subject. A robot coming out of manufacturing would have age zero, with its functionality already set to go (it would learn later), so if it started out as smart as a Terence Tao, its IQ number would be astronomical if it were based on human scoring.
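    For reference, the arithmetic behind that remark is the classical ratio definition of IQ (modern tests actually use deviation scoring, so this is only the historical formula):

$$\mathrm{IQ}_{\text{ratio}} = \frac{\text{mental age}}{\text{chronological age}} \times 100$$

    With an adult-level mental age over a chronological age approaching zero, the quotient grows without bound, which is the sense in which a factory-fresh robot's ratio IQ would be "astronomical".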

    So a different criterion would be needed to measure the intelligence of a robot: problem-solving ability and so forth. Hey robot, help me solve the three-body problem in gravitation, OK? BRRRR BRRR ZIP, OK, got it, here is the answer......
  12. twhitehead
    Cape Town
    Joined
    14 Apr '05
    Moves
    52945
    01 Jun '16 11:01
    Originally posted by humy
    Assuming they are programmed with 'primary directives' to do no harm to humans and to have the primary aim to benefit humanity etc,
    I think you overestimate our ability to understand how artificial intelligence works. I don't think anyone would even know where to begin to put 'primary directives' into an AI.
  13. Subscriber sonhouse
    Fast and Curious
    slatington, pa, usa
    Joined
    28 Dec '04
    Moves
    53223
    01 Jun '16 13:55
    Originally posted by twhitehead
    I think you overestimate our ability to understand how artificial intelligence works. I don't think anyone would even know where to begin to put 'primary directives' into an AI.
    It would have to be in the root if we wanted any kind of surety that it would not go against the restrictions.
  14. humy
    Joined
    06 Mar '12
    Moves
    642
    01 Jun '16 21:16 (1 edit)
    Originally posted by sonhouse
    The problem there is that intelligence, real intelligence and creativity in a robot, would allow it to notice something is wrong when it tries to do something against its programming.
    Why would it 'try' to do something against its programming?
    It can be programmed to have as much creativity as you want but, just as my creativity doesn't make me 'try' to break the law, there is no reason why its creativity would make it 'try' to do something against the part of its program that says what it shouldn't do.
  15. humy
    Joined
    06 Mar '12
    Moves
    642
    01 Jun '16 21:18 (5 edits)
    Originally posted by twhitehead
    I think you over estimate our ability to understand how artificial intelligence works. I don't think anyone would even know where to begin to put 'primary directives' into an AI.
    Actually, if it weren't for my total lack of access to any of the necessary resources, I have a pretty good idea how to do this. It is one of the problems I have been working on for most of my life, and I plan to finish the theoretical part of that work sometime after I have finished my book next year.