  1. Subscriber C J Horse
    A stable personality
    Near my hay.
    Joined
    27 Apr '06
    Moves
    64129
    06 Feb '16 13:06
    In the book I'm currently reading, a character makes the statement (surprising to me) that her tablet computer has more computing power than existed in the whole world in 1984. That part of the book is set in 2025.

    Is this a reasonable claim for tablets now? If not, is it likely to be true in 2025?
  2. Cape Town
    Joined
    14 Apr '05
    Moves
    52945
    06 Feb '16 13:33
    In 1984 a typical home PC might have had an 80286 or worse:
    https://en.wikipedia.org/wiki/Intel_80286

    134,000 transistors
    Speeds varied, but let's assume 12.5 MHz.

    Tablets vary as to what they have in them. Let me first look at a high end PC instead:
    http://ark.intel.com/products/82930/Intel-Core-i7-5960X-Processor-Extreme-Edition-20M-Cache-up-to-3_50-GHz
    http://www.anandtech.com/show/8426/the-intel-haswell-e-cpu-review-core-i7-5960x-i7-5930k-i7-5820k-tested

    8 cores
    up to 3.5 GHz
    2.6 billion transistors

    So maybe 1,000 times the clock speed and 1,000 times the transistors. More transistors mean more processing power, as does a higher clock speed.
    It's very hard to translate transistor count into processing power.
    But let's say 1,000,000 times more computing power.
    Were there fewer than 1,000,000 PCs in 1984?
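    Putting the quoted figures into a quick back-of-the-envelope calculation (a sketch only; the exact ratios come out a little different from the round 1,000x figures but land in the same ballpark, and treating clock x transistors as 'computing power' is of course a big simplification):

    # Back-of-the-envelope comparison of a 1984-era 80286 with the i7-5960X,
    # using only the figures quoted above. Clock ratio x transistor ratio is
    # used as a crude proxy for "computing power".
    i286_transistors = 134_000
    i286_clock_hz = 12.5e6           # 12.5 MHz
    i7_transistors = 2_600_000_000   # 2.6 billion
    i7_clock_hz = 3.5e9              # 3.5 GHz boost clock

    clock_ratio = i7_clock_hz / i286_clock_hz              # ~280x
    transistor_ratio = i7_transistors / i286_transistors   # ~19,000x
    rough_power_ratio = clock_ratio * transistor_ratio     # ~5 million

    print(f"clock ratio:         {clock_ratio:,.0f}x")
    print(f"transistor ratio:    {transistor_ratio:,.0f}x")
    print(f"crude 'power' ratio: {rough_power_ratio:,.0f}x")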

    Ideally one would want to find the FLOPS rating of various processors, but even that could be deceptive.
    Add to all this the fact that most CPUs are idle most of the time, and that other factors such as RAM and hard disk speeds are critical.

    I have ignored supercomputers.

    Add to this the fact that modern software can do an awful lot more.

    Add to this the fact that large projects benefit from larger storage, including RAM and hard disk.

    My first PC in 1989 had no hard disk, so the slowest component was the floppy disk. If you took one photo on your tablet, you could not have saved that photo onto one floppy disk, let alone edited it on my first PC. It would have had to be divided up into sections so you could edit it bit by bit.
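    For a sense of scale, a minimal sketch (the 8-megapixel camera and ~3 MB JPEG size are illustrative assumptions, not figures from the post):

    # How many 1.44 MB floppy disks would one tablet photo need?
    # The camera resolution and file sizes here are illustrative assumptions.
    import math

    floppy_mb = 1.44                  # high-density 3.5" floppy capacity
    jpeg_photo_mb = 3.0               # a typical 8-megapixel JPEG, roughly
    raw_photo_mb = 8e6 * 3 / 1e6      # 8 MP x 3 bytes/pixel uncompressed ~ 24 MB

    print(f"floppies per JPEG photo:         {math.ceil(jpeg_photo_mb / floppy_mb)}")
    print(f"floppies per uncompressed photo: {math.ceil(raw_photo_mb / floppy_mb)}")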
  3. Joined
    31 May '06
    Moves
    1795
    06 Feb '16 15:03
    Taking a more abstract approach.

    Moore's law basically states that computational power [per chip / per $1000 / per PC / whatever]
    doubles roughly every 18 months [this is a slight underestimate of the overall trend].
    Assume it continues to hold over the relevant time span.

    2015 - 1984 = 31 years, or 372 months, or ~20 doubling periods.

    2^20 = 1,048,576 [which fits with twhitehead's estimate, which gives us a reality check]

    2025 - 1984 = 41 years, or 492 months, or ~27 doubling periods.

    2^27 = 134,217,728 [or 100 million to first order of magnitude]
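    The doubling arithmetic above as a quick sketch (assuming a fixed 18-month doubling period; not rounding the doubling counts down gives slightly larger factors, but the same orders of magnitude):

    # Number of 18-month doubling periods between 1984 and a later year,
    # and the implied growth factor in computing power.
    DOUBLING_MONTHS = 18

    def growth_factor(from_year, to_year, doubling_months=DOUBLING_MONTHS):
        months = (to_year - from_year) * 12
        doublings = months / doubling_months
        return doublings, 2 ** doublings

    for year in (2015, 2025):
        doublings, factor = growth_factor(1984, year)
        print(f"1984 -> {year}: ~{doublings:.1f} doublings, factor ~{factor:,.0f}")

    # 1984 -> 2015: ~20.7 doublings, factor ~1.6 million
    # 1984 -> 2025: ~27.3 doublings, factor ~170 million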

    So the question is: did the world's total computing power in 1984 add up to more than ~100 million 1984 desktop PCs?

    My intuitive first answer is probably no, as supercomputers at the time were very few in number, and the
    number of PCs was quite limited [the internet wasn't really a thing, etc.].

    But let's look at some more info...

    https://en.wikipedia.org/wiki/Computer_performance_by_orders_of_magnitude

    This shows that the 1972 supercomputer ILLIAC IV had about 1/100th of the processing power of a ~$1000 PC
    from 2010.
    2010 - 1972 = 38 years, or 456 months, or ~25 doubling periods.

    2^25 = 33,554,432 [~30 million]
    So we should expect that $1000 worth of computing power [PCs didn't exist then] got you ~1/300,000
    of the power of the world's most powerful computer at the time.
    Let's take that as the separation between the PCs of a given time and the supercomputers of that time.
    So a 1984 top supercomputer could be expected to be ~300,000 times as powerful as a 1984 $1000 PC equivalent.

    Let's say that the sum total of all the world's supercomputers is roughly ~3 times the power of the most powerful one:

    https://en.wikipedia.org/wiki/TOP500

    So we would expect the supercomputers of 1984 to sum to about ~1 million 1984 PCs' worth of computing power.
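    Stringing those steps together (a sketch of the same chain of estimates; the 1/100 ILLIAC-vs-2010-PC figure and the x3 supercomputer-total assumption are the ones used above):

    # Chain of estimates: how many 1984 "$1000 PC equivalents" do the 1984
    # supercomputers add up to?
    DOUBLING_MONTHS = 18

    def whole_doublings(from_year, to_year):
        # Round down to whole doubling periods, as in the estimate above.
        return (to_year - from_year) * 12 // DOUBLING_MONTHS

    # A ~$1000 PC from 2010 is ~100x an ILLIAC IV (the 1972 top supercomputer).
    pc_2010_vs_illiac = 100

    # Growth of "$1000 worth of computing" from 1972 to 2010: ~2^25.
    growth_1972_2010 = 2 ** whole_doublings(1972, 2010)

    # So $1000 of 1972-era computing was roughly this fraction of the top
    # supercomputer of its day:
    separation = growth_1972_2010 / pc_2010_vs_illiac       # ~335,000

    # Assume the same PC-vs-supercomputer separation held in 1984, and that all
    # the world's supercomputers together were ~3x the most powerful one.
    supercomputers_1984_in_pcs = 3 * separation              # ~1 million 1984 PCs

    print(f"separation factor: ~{separation:,.0f}")
    print(f"1984 supercomputers ~ {supercomputers_1984_in_pcs:,.0f} 1984-PC equivalents")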

    Even if we allow all the mainframes in the world to sum up to the same amount [highly unlikely], we would still need the total
    number of $1000 PC equivalents in 1984 to add up to over 100 million.

    http://www.computerhope.com/history/1984.htm

    This still seems improbable to me, as people hadn't really worked out what personal computers were for beyond fancier
    typewriters and early enthusiast gaming.


    So I would say that it is most likely true that a tablet device in 2025 would have more computing power than existed in the
    world in 1984 if Moore's law continues to hold over that period.
  4. Cape Town
    Joined
    14 Apr '05
    Moves
    52945
    06 Feb '16 15:31 (1 edit)
    Originally posted by googlefudge
    Moore's law basically states that computational power [per chip/per $1000/per pc/whatever] doubles roughly every 18 months.
    Actually Moore's law originally stated that the number of transistors on a die would double every period (initially a year then later revised to two years).
    https://en.wikipedia.org/wiki/Moore%27s_law

    I am doubtful as to whether 'computing power' equates to transistor count.
    Smaller transistors result in higher clock speeds and for a long time clock speeds also followed a kind of Moore's law.
    Now if we double the transistors by simply doubling the number of processors on a die, then we double computing power without any change to the clock speed.
    So theoretically a doubling of transistors plus a doubling of clock speed should give us 4 times the computing power.
    The question is whether or not a monolithic processor with twice the transistor count is faster or slower than a dual processor.
    It is a fact that processors have significantly increased the number of instructions they can execute per clock cycle, so pure clock speed is not an accurate measure of computing power.

    [edit]
    I see the 18-month figure comes from an estimate of the combined effect, so maybe your calculations already include it.

    The period is often quoted as 18 months because of Intel executive David House, who predicted that chip performance would double every 18 months (being a combination of the effect of more transistors and the transistors being faster).
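    One way to see how the combined effect could come out at roughly 18 months (a sketch; the 24-month transistor doubling and the assumption that performance scales as transistor count times per-transistor speed are illustrative, not figures from the posts):

    # If performance ~ (transistor count) x (per-transistor speed), the doubling
    # *rates* (in doublings per month) add. How fast must per-transistor speed
    # improve for a 24-month transistor doubling to become an 18-month
    # performance doubling? (Illustrative assumption, not a quoted figure.)
    transistor_doubling_months = 24
    performance_doubling_months = 18

    rate_transistors = 1 / transistor_doubling_months      # doublings per month
    rate_performance = 1 / performance_doubling_months

    rate_speed = rate_performance - rate_transistors
    speed_doubling_months = 1 / rate_speed                  # 72 months

    print(f"per-transistor speed must double every ~{speed_doubling_months:.0f} months")

    # Sanity check: combining the two rates recovers the 18-month figure.
    combined = 1 / (rate_transistors + rate_speed)
    print(f"combined performance doubling: ~{combined:.0f} months")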
  5. Joined
    31 May '06
    Moves
    1795
    06 Feb '16 16:07
    Originally posted by twhitehead
    Actually Moore's law originally stated that the number of transistors on a die would double every period (initially a year then later revised to two years).
    https://en.wikipedia.org/wiki/Moore%27s_law

    I am doubtful as to whether 'computing power' equates to transistor count.
    Smaller transistors result in higher clock speeds and for a long time clock ...[text shortened]... being a combination of the effect of more transistors and the transistors being faster).
    Yeah, the ~18-month period refers to computing power and not transistor count.
    And my estimate does rely on this holding over the period in question, which is probably not true.

    But while Moore's law holds fairly well over the 1984 to 2015 period, it doesn't hold over
    the 2012 to 2025 period and into the future, as current progress is slowing down [we will soon run out
    of atoms to remove between the conductors in the transistors].
    There are other potential ways of increasing computing power [which is what we care about,
    and is why I talked about computing power and not transistor size] for a given amount of money
    or per device.

    You can add more processors [stack them on top of each other, for example], or increase the clock
    speed [people have clocked graphene field-effect transistors [FETs] in the lab at THz speeds] by changing
    the materials used, and/or switch to optical computing [quantum computing makes SOME
    applications faster, but is no faster or slower for others, so quantum computing would augment but
    not replace classical computation],
    which would bring speed benefits both in terms of clock speeds
    and in terms of the latency of moving data across the chip and between memory and the CPU.
    Indeed, there are efforts to combine the RAM and the CPU so that memory and processing are done
    by the same structures, so that there is no shunting of data back and forth, thus increasing processing
    speeds.

    Such advances, when they come [they are being demonstrated in the lab, so it should be when, not if],
    are likely to create a sudden jump in computing power, producing a discontinuity in the progression
    of computing power over time.

    So if [for example] we hit a limit in transistor size at about 1/2 the current size [so about 5 nm], which is
    quite possible, then we might stagnate at about 4 times the processing power of
    the present [in ~4-6 years' time].
    But if, a few years later, they introduce FETs based on graphene that clock in at 4 THz instead of 4 GHz, then
    computing power might suddenly jump by a factor of 1,000 essentially overnight...
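    The two factors in that scenario, spelled out (a sketch; assuming transistor count scales with the inverse square of the feature size, and taking the THz figure quoted above at face value):

    # Two hypothetical jumps mentioned above:
    # 1) halving the feature size (~10 nm -> ~5 nm): if transistor count scales
    #    roughly as 1/(feature size)^2, that is a ~4x density gain.
    # 2) graphene FETs clocking at 4 THz instead of 4 GHz: a ~1000x clock jump.
    current_feature_nm = 10
    limit_feature_nm = 5
    density_gain = (current_feature_nm / limit_feature_nm) ** 2   # 4x

    current_clock_hz = 4e9     # 4 GHz
    graphene_clock_hz = 4e12   # 4 THz (the figure quoted in the post above)
    clock_jump = graphene_clock_hz / current_clock_hz             # 1000x

    print(f"density gain from feature-size halving: {density_gain:.0f}x")
    print(f"clock jump from GHz to THz FETs:        {clock_jump:,.0f}x")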

    It's quite possible that something along those lines might happen in the next 10 years [then again, it might not],
    which would mean that a 2025 tablet easily out-computes the 1984 world total.

    As it stands it's still possible without such a jump, but to know for sure we would need a much more accurate
    assessment of the number and type of computers around in 1984.
  6. Cape Town
    Joined
    14 Apr '05
    Moves
    52945
    06 Feb '16 17:03
    Because of Moore's Law, I have often argued that any large, costly computing project that does not have a strict deadline can benefit from merely sitting back and waiting a few years.
    The Seti@home project, for example, can now process in one hour what took a month to do when it started.
    If they had simply archived their data during the first year of operation, they could now run that whole year's worth in one day. It could be argued that the early processing was a waste of resources.
    On the other hand, a project that is helping to cure cancer we want done as soon as possible.
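    A quick check of that claim (a sketch; the month-to-hour speedup is the figure from the post, the rest is just arithmetic):

    # If one hour of processing today does what took a month at launch,
    # how long would a whole archived year of the original workload take now?
    hours_per_month_then = 30 * 24           # ~720 hours of launch-era processing
    speedup = hours_per_month_then / 1       # "a month of work in one hour" ~ 720x

    original_year_hours = 365 * 24
    hours_now = original_year_hours / speedup
    print(f"speedup: ~{speedup:.0f}x")
    print(f"a year's worth of launch-era work now takes ~{hours_now:.0f} hours "
          f"(~{hours_now/24:.1f} days)")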
  7. Joined
    31 May '06
    Moves
    1795
    06 Feb '16 17:20
    Originally posted by twhitehead
    Because of Moore's Law, I have often argued that any large costly computing project that does not have a strict deadline can benefit from merely sitting back and waiting a few years.
    The Seti@home project for example can now process in one hour what it took a month to do when it started.
    If they had simply archived their data during the first year of op ...[text shortened]... .
    On the other hand a project that is helping to cure cancer, we want done as soon as possible.
    My response is always that there will always be better technology tomorrow and that
    your logic results in never doing anything and always waiting for the next advance.

    Also, the constant need for more computing power, with applications fighting over resources, is
    the driving force that keeps computing power increasing.

    Indeed, PC processing power [in particular graphics power] has stagnated of late precisely
    because the vast majority of users had enough processing power and didn't need more.

    Now that high-end VR needs significantly more graphics power than the general PC has,
    it's likely and expected to provide a big boost to what the typical PC is capable of.

    For a number of years, high-end consumer Intel CPUs have stayed at 4 cores rather than 6 or 8, because no
    regular consumer applications existed that needed more cores. Their 8+ core models were server
    grade only. The rest of their consumer chips feature a relatively weak on-chip GPU in the space
    that the extra CPU cores would otherwise occupy [helpful in laptops, mostly worthless in desktops].

    Having a constant driving force for more computing power is what makes the manufacturers keep investing
    in the technology development needed to achieve that greater computing power.

    The Human Genome Project was completed years ahead of schedule because computing power grew
    exponentially from the start of the project, making what would have taken a decade at the start take only
    a few years; it can now be done in hours or days.

    But the technology we have now that allows us to do that was developed by doing that project.
    If we didn't do that project we wouldn't have that tech.

    Certainly, ever-increasing computing power makes many tasks easier.

    But equally important are all the other technology, knowledge, experience and research that go into a
    project. And if you wait for the computing power to increase before you start, then none of that gets
    done. And the people who might be interested in doing it at the start have gone off to do something else.
    They have short lifespans that they don't want to waste waiting around; they want to do it now or they are
    gone.

    On top of that, extreme computing power can make you lazy: you can get away with inefficiency.
    If you plan your project for now and today's computing power, and you optimise your code to be as
    efficient as possible because you don't have unlimited computing power, then your
    project will go that much faster when increased computing power does arrive.
  8. Standard member DeepThought
    Losing the Thread
    Quarantined World
    Joined
    27 Oct '04
    Moves
    87415
    06 Feb '16 21:48
    Originally posted by C J Horse
    In the book I'm currently reading, a character makes the statement (surprising to me) that her tablet computer has more computing power than existed in the whole world in 1984. That part of the book is set in 2025.

    Is this a reasonable claim for tablets now? If not, is it likely to be true in 2025?
    Given it is a work of fiction, it is not unreasonable. The arguments above show it's at least plausible.
  9. Cape Town
    Joined
    14 Apr '05
    Moves
    52945
    06 Feb '16 22:00
    Originally posted by googlefudge
    My response is always that there will always be better technology tomorrow and that
    your logic results in never doing anything and always waiting for the next advance.
    Actually, no. There should always come a time when the computing cost becomes negligible or effectively free.

    I agree with some of your argument, but not all of it. There are sometimes good reasons to spend the money now to get the results sooner, but that doesn't apply to all projects. Simply saying 'anything we do is for the good of mankind as it helps bring the costs down' doesn't really cut it for all projects.
    That argument was used for things like NASA as well as spending on the military. Guess which one I would have picked? Even better, we should have put most of that money into more immediately urgent and useful research.

    The human genome project will lead to many breakthroughs in medicine so was almost certainly a good thing. However you are not entirely correct that all the advances in sequencing resulted from it. Many of the advances have come from the brilliant ideas of scientists that would almost certainly have been working even if that project was never attempted.

    Suppose they had delayed that project by 5 years and saved maybe half the costs (over a billion dollars). Could they have put that money to better use than the benefit of finishing 5 years earlier?

    It's not an open-and-shut case.

    And I am still waiting for sequencing to come to South Africa and be available to the public at an affordable cost. I keep hearing that it is now possible for under a thousand dollars in select locations in the US, but as far as I know it just can't be done here unless you have the right contacts.
  10. Cape Town
    Joined
    14 Apr '05
    Moves
    52945
    07 Feb '16 07:40
    YouTube

    And yet we have still not been able to match the computing power of a single human brain.

    1 million ARM cores = 10 mouse brains.
  11. R
    Standard member Removed
    Joined
    10 Dec '06
    Moves
    8528
    07 Feb '16 14:06
    I suppose that the field is too young for specific standards to be set based on the laws of physics as they pertain to computation? That is, has anyone developed, or is anyone developing, the "mechanics of computation" such that it provides usable tools for determining the characteristics of a computational machine against ideals, etc.?
  12. Standard member DeepThought
    Losing the Thread
    Quarantined World
    Joined
    27 Oct '04
    Moves
    87415
    08 Feb '16 02:33
    Originally posted by joe shmo
    I suppose that the field is too young for specific standards to be set based on the laws of physics as they pertain to computation? That is, has anyone developed/developing the "mechanics of computation" such that they are usable tools for determining the characteristics of a computational machine against ideals etc..?
    I'm not absolutely sure what you are asking. Basic questions of what is computable were attacked by Turing and others before and after the war. You mentioned physics, so I'm thinking you mean the science of physical limitations on how fast electronics can be run and so forth. They are running up against some fundamental limits, but in the past they have always got round them using one trick or another. For example, the speed at which a processor can be run is limited by the speed of light: basically, the clock timing signal has to be able to get across the processor. To increase the speed of processors they use things called phase-locked loops to ensure that parts of the processor that have space-like separation over the time-scale of a clock cycle are kept in synchrony with each other. This increases the theoretically achievable clock speed by a few orders of magnitude. The main limitation is how narrow one can make the channels, and on silicon they are pretty much at the limit. So yes, this is of great interest to processor manufacturers and is a fairly mature field of study (although not one I know a vast amount about).
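    For a rough sense of the scale involved (a sketch, using the vacuum speed of light as an upper bound; real on-chip signals are considerably slower):

    # How far can a signal travel in one clock cycle, at most?
    c = 3.0e8            # speed of light in vacuum, m/s
    clock_hz = 4.0e9     # a 4 GHz clock

    cycle_s = 1 / clock_hz
    max_distance_mm = c * cycle_s * 1000

    print(f"clock period: {cycle_s*1e12:.0f} ps")
    print(f"upper bound on distance per cycle: {max_distance_mm:.0f} mm")
    # ~75 mm in vacuum, only a few times the width of a large die; real on-chip
    # propagation is much slower, which is why clock distribution across the
    # chip needs tricks like the phase-locked loops mentioned above.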
  13. Cape Town
    Joined
    14 Apr '05
    Moves
    52945
    08 Feb '16 07:13
    Over the past 10 years or so, raw processor speed has become less important, partly because other parts such as hard disks have failed to keep up. I dramatically increased the overall speed of my PC by purchasing an SSD drive. I believe Intel is working on something akin to SSD but with RAM speeds, so it will be possible to have a terabyte of RAM.

    Also very important is power consumption. Faster and faster computation came with higher and higher power consumption, until processor heat sinks and fans were struggling. Then mobile came along and manufacturers concentrated on bringing power consumption down, and now we have fast processors in mobile phones with no cooling fans at all.

    There is still a lot of room for growth in software, including programmable chips, which basically take software and make it part of the processor, resulting in a processor specialised for a particular task. This can also lead to highly parallel computing. Graphics cards are an example of specialised processors, and they can have over a thousand cores on one chip. Programmable chips similarly have the potential of running thousands of computations simultaneously on a single chip.
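    A toy comparison of aggregate throughput (a sketch; the core counts, clocks and ops-per-cycle are illustrative assumptions, not measured figures):

    # Crude aggregate-throughput comparison: a single CPU core vs a
    # many-core GPU-style chip. All figures are illustrative assumptions.
    cpu_cores, cpu_clock_hz, cpu_ops_per_cycle = 1, 4.0e9, 4
    gpu_cores, gpu_clock_hz, gpu_ops_per_cycle = 1024, 1.0e9, 2

    cpu_ops = cpu_cores * cpu_clock_hz * cpu_ops_per_cycle   # ~1.6e10 ops/s
    gpu_ops = gpu_cores * gpu_clock_hz * gpu_ops_per_cycle   # ~2.0e12 ops/s

    print(f"single core:    ~{cpu_ops:.1e} ops/s")
    print(f"1024-core chip: ~{gpu_ops:.1e} ops/s  (~{gpu_ops/cpu_ops:.0f}x)")
    # The catch, of course, is that the workload has to parallelise well enough
    # to keep all those cores busy.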

    Quantum computing will also be able to tackle some hard computing problems.

    Finally, many computation problems will be solved with better AI techniques, which often involve highly parallel computing as opposed to the current paradigm of single-threaded computing. Again, specialised chips could result in massive advances in effective computing power without requiring more or smaller transistors or higher clock speeds.

    Also very important in the modern world are storage capacity (some projects can potentially generate petabytes of data per day, and many, many projects generate very large amounts of data) and communication speeds (faster internet), and both of these have a lot of room for growth.
  14. R
    Standard member Removed
    Joined
    10 Dec '06
    Moves
    8528
    09 Feb '16 01:42 (2 edits)
    Originally posted by DeepThought
    I'm not absolutely sure what you are asking. Basic questions of what is computable were attacked by Turing and others before and after the war. You mentioned physics, so I'm thinking you mean the science of physical limitations on how fast electronics can be run and so forth. They are running up to some fundamental limits. But in the past have always ...[text shortened]... nufacturers and is a fairly mature field of study (although not one I know a vast amount about).
    I'm sorry if what I ask is a bit underdeveloped. I just mean to inquire that, since computers are physical machines, their principle of operation should be describable by a suitable form of mechanics. For instance, I could draw an analogy that a processor is a computational pump and thus might behave the way fluid pumps behave. Might a given processor have some computational head curve or limitation in the way mechanical pumps do? Is the computational architecture within the computer akin to a system head, and are computations effectively a network "flow", such that a processor combined with a particular system fixes the computational flow state? Are there series-parallel relationships for computers similar to those found for fluid-mechanical or electrical circuit elements when dealing with multiple pumps or systems and their arrangement? Basically, I wonder who is writing the book on "The Mechanics of Computation", and what do the mechanics look like?
  15. Cape Town
    Joined
    14 Apr '05
    Moves
    52945
    09 Feb '16 06:43
    Originally posted by joe shmo
    Might a given processor have some computational head curve or limitation in the way mechanical pumps do?
    Ultimately a processor contains flowing electrons. So the absolute limits are the speed of electricity (which is a little slower than the speed of light) and the frequency with which pulses (changes in the voltage) can occur and still switch a transistor.
    Making faster transistors has been one of the key developments towards faster computers, as has making them smaller and thus reducing the distances between them. The current limit, however, has to do with how much heat is produced by the flowing electrons. That is why cooling your processor allows you to over-clock it dramatically. That is also why other posters have suggested using, say, graphene for the wires, as it has lower resistance than current wires and thus produces less heat.