1. sonhouse
    Fast and Curious
    slatington, pa, usa
    Joined 28 Dec '04
    Moves 53223
    20 Jun '16 13:34 (1 edit)
    http://techxplore.com/news/2016-06-world-processor-chip.html

    It can execute 1.8 trillion instructions per second and is 100 times more energy efficient; it can run 115 billion instructions per second on a power budget of 0.7 watts, which puts it in range of running off an AA battery or two.

    It was designed by engineers at the University of California, Davis, and fabricated by IBM using 32 nm design rules, so it could lead to 4,000-core machines if they use 16 nm rules.

    It is also 3 to 18 times faster than equivalent multi-core systems.
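
    Rough arithmetic behind those figures (a back-of-the-envelope C sketch; the 115 billion instructions/s, 0.7 W, and 32 nm -> 16 nm numbers are from the article, while the 10 kJ AA-cell capacity is my own assumption):

        #include <stdio.h>

        int main(void) {
            /* Figures quoted in the article */
            double instr_per_sec = 115e9;  /* instructions per second */
            double power_watts   = 0.7;    /* power budget in watts */

            /* Energy efficiency: instructions per joule */
            printf("~%.0f billion instructions per joule\n",
                   instr_per_sec / power_watts / 1e9);

            /* Assumed: an alkaline AA cell stores roughly 10 kJ */
            double aa_joules = 10e3;
            printf("~%.1f hours on one AA cell\n",
                   aa_joules / power_watts / 3600.0);

            /* Halving the feature size (32 nm -> 16 nm) quarters the
               area per core, hence the 1,000 -> 4,000 core estimate */
            double shrink = 32.0 / 16.0;
            printf("core-count multiplier: %.0fx\n", shrink * shrink);
            return 0;
        }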
  2. KazetNagorra
    Germany
    Joined 27 Oct '08
    Moves 3118
    20 Jun '16 15:43
    Not as radical as you think - a typical GPU (which is very similar to a CPU) on the market today already has hundreds of cores. What Intel has been trying to do is develop CPU architecture that combines the benefits of GPUs and multi-core CPUs.
  3. sonhouse
    Fast and Curious
    slatington, pa, usa
    Joined 28 Dec '04
    Moves 53223
    20 Jun '16 16:03 (1 edit)
    Originally posted by KazetNagorra
    Not as radical as you think - a typical GPU (which is very similar to a CPU) on the market today already has hundreds of cores. What Intel has been trying to do is develop CPU architecture that combines the benefits of GPUs and multi-core CPUs.
    The difference with this one is the speed compared to other GPUs: 3 to 18 times faster, thanks to the architecture of the device, and the power savings as well.

    But like I said, it is done with older tech, 32 nm structures, and now we are in the 16 nm era, which could quadruple the number of cores on the same size die.
  4. NoEarthlyReason
    Joined 10 Nov '12
    Moves 6889
    20 Jun '16 16:48
    I thought we were down to 4-7 nanometers (i.e. the point below which it's impossible to go). Maybe that's just in the lab.
  5. googlefudge
    Joined 31 May '06
    Moves 1795
    20 Jun '16 17:06
    Originally posted by NoEarthlyReason
    I thought we were down to 4-7 nanometers (I.e. below which it's impossible to go). Maybe that's just in the lab.
    14nm is the current smallest commercial scale. [what they are doing in labs or for the military...]
  6. sonhouse
    Fast and Curious
    slatington, pa, usa
    Joined 28 Dec '04
    Moves 53223
    20 Jun '16 17:59
    Originally posted by googlefudge
    14nm is the current smallest commercial scale. [what they are doing in labs or for the military...]
    Even at 14 nm you are starting to count individual atoms. 4 nm is 40 angstroms, so that is about 10 silicon atoms wide. Sounds like the limit to me.
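
    A quick sanity check on that estimate (a small C sketch; the silicon spacings are my assumed figures, not from the thread):

        #include <stdio.h>

        int main(void) {
            double feature_nm = 4.0;    /* the 4 nm figure above */
            /* Assumed atomic scales for silicon: */
            double lattice_nm = 0.543;  /* crystal lattice constant */
            double bond_nm    = 0.235;  /* Si-Si covalent bond length */

            printf("~%.0f unit cells across\n", feature_nm / lattice_nm);
            printf("~%.0f bond lengths across\n", feature_nm / bond_nm);
            /* ~7 cells or ~17 bonds: "about 10 atoms wide" is the
               right order of magnitude either way. */
            return 0;
        }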
  7. DeepThought
    Losing the Thread
    Quarantined World
    Joined 27 Oct '04
    Moves 87415
    24 Jun '16 08:00
    Originally posted by KazetNagorra
    Not as radical as you think - a typical GPU (which is very similar to a CPU) on the market today already has hundreds of cores. What Intel has been trying to do is develop CPU architecture that combines the benefits of GPUs and multi-core CPUs.
    In at least the Radeon GPUs, each execution unit is part of a wide SIMD unit (16 floats wide), and there are up to around 64 of these. Each thread has its own execution unit, but can only proceed while every thread in a given SIMD unit is executing the same instruction; so if half the threads are doing one thing and half the other, it has to execute each different instruction for all the threads, one after the other, and then select the thread-appropriate output (in other words, it loses parallelism).

    In the IBM chip (if I understood the blurb right), each execution unit is genuinely independent and can execute a completely different program without the associated penalty of the GPU-type configuration, which basically works best when every execution unit is executing exactly the same code at every step. So I think this is radical: the largest chip I'd heard of doing that before is the Xeon Phi, with about 60 processing cores, but the Xeon Phi's cores are very powerful (with 512-bit SIMD units), while I think the ones in the IBM box are much simpler and do not contain SIMD units.
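
    A toy illustration of that divergence penalty (plain C standing in for the hardware; the 16-lane width matches the Radeon figure above, everything else is invented for illustration):

        #include <stdio.h>

        #define LANES 16  /* one SIMD unit, 16 lanes wide */

        int main(void) {
            int data[LANES], out[LANES];
            for (int i = 0; i < LANES; i++) data[i] = i;

            /* On a divergent branch the unit runs BOTH paths over all
               lanes, with a mask keeping each lane's correct result. */

            /* Pass 1: "then" path, even lanes active */
            for (int i = 0; i < LANES; i++)
                if (data[i] % 2 == 0)
                    out[i] = data[i] * 2;

            /* Pass 2: "else" path, odd lanes active */
            for (int i = 0; i < LANES; i++)
                if (data[i] % 2 != 0)
                    out[i] = data[i] + 100;

            /* Two full passes for one branch: half the throughput.
               Fully independent cores would not pay this cost. */
            for (int i = 0; i < LANES; i++) printf("%d ", out[i]);
            printf("\n");
            return 0;
        }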
  8. Cape Town
    Joined 14 Apr '05
    Moves 52945
    24 Jun '16 12:42
    What really matters are:
    1. Power per unit computation. People doing serious computations don't really care about it all being on one chip; what they care about is the cost to run it, which is mostly about power consumption (see the sketch after this list).
    2. Reliability. Graphics cards in the past simply were not reliable enough for serious computation outside graphics. This has improved with time.
    3. Ease of programming. Distributed projects such as Seti@home took a while to be converted to versions that would run on graphics cards because of the difficulty of programming them, though the benefits once converted were significant. I believe, however, that some projects, such as Rosetta@home, have never had a GPU version, either because of the difficulty of programming them or because they simply aren't suited to the task.
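
    To put point 1 in numbers (a back-of-the-envelope C sketch; the wattage and electricity price are assumed figures, not from the thread):

        #include <stdio.h>

        int main(void) {
            /* Assumed figures for illustration only */
            double watts      = 300.0;  /* hypothetical compute node draw */
            double price_kwh  = 0.12;   /* assumed $ per kWh */
            double hours_year = 24.0 * 365.0;

            double kwh_year  = watts * hours_year / 1000.0;
            double cost_year = kwh_year * price_kwh;
            printf("%.0f W around the clock = %.0f kWh = $%.0f per year\n",
                   watts, kwh_year, cost_year);

            /* Cost per computation is dollars divided by instructions
               executed, which is why instructions-per-joule is the
               figure of merit for chips like the one in the OP. */
            return 0;
        }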