PCIe 5.0 coming very soon with 4x the speed, Apple could be first to use it
  • This situation came about due to the extreme delay of PCIe 4.0, and it will be the upcoming AMD 3000 series CPUs that will be first to implement the standard (even on older motherboards some slots will start to work in this mode, same as happened with HDMI after the 2200G/2400G introduction).

    Industry rumors are that Apple aims to use PCIe 5.0 in all of its upcoming 8/16-core ARM CPUs made for its upcoming iOS-based notebook lines. Of course, Thunderbolt via USB-C ports will also greatly benefit from this change, with corresponding speed increases.
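
    For scale, here is a quick back-of-the-envelope comparison of the generations (a minimal Python sketch; the per-lane transfer rates and 128b/130b encoding are the published PCI-SIG figures, and the "4x" in the title is PCIe 5.0 versus 3.0):

        # Nominal per-lane transfer rate (GT/s) and line encoding per PCIe generation
        GENERATIONS = {
            "PCIe 3.0": (8.0, 128 / 130),    # 8 GT/s, 128b/130b encoding
            "PCIe 4.0": (16.0, 128 / 130),   # 16 GT/s, 128b/130b encoding
            "PCIe 5.0": (32.0, 128 / 130),   # 32 GT/s, 128b/130b encoding
        }

        def bandwidth_gb_s(rate_gt_s, encoding, lanes):
            """Usable bandwidth in GB/s: GT/s * encoding efficiency / 8 bits * lanes."""
            return rate_gt_s * encoding / 8 * lanes

        for name, (rate, enc) in GENERATIONS.items():
            print(f"{name}: {bandwidth_gb_s(rate, enc, 1):.2f} GB/s per lane, "
                  f"{bandwidth_gb_s(rate, enc, 16):.1f} GB/s for x16")
        # PCIe 3.0: ~0.98 GB/s per lane, ~15.8 GB/s for x16
        # PCIe 5.0: ~3.94 GB/s per lane, ~63.0 GB/s for x16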

  • 7 Replies
  • New TB, new TB, new TB. :). 10 lane TB plug please, for extra wide data rate.

  • The big tradeoff of the higher speeds is that signals won’t travel as far on existing designs. In the days of PCIe 1.0, the spec sent signals as much as 20 inches over traces in mainstream FR4 boards, even passing through two connectors. The fast 4.0 signals will peter out before they travel a foot without going over any connectors.

    System makers are sharpening their pencils on the costs of upgrading boards and connectors, adding chips to amplify signals, or redesigning their products to be more compact.

    Retimer chips for a full 16-lane PCIe 4.0 slot could cost $15 to $25, if you can find them. Upgrading an adapter card from Megtron-2 to Megtron-4 materials might only add a dollar or so. However, the cost of a similar upgrade for a motherboard is about $100, and if the upgrade is to even higher quality Megtron-6 it would cost about $300.

    PCIe 4.0 signals are only travelling three to five inches, said the CEO of PLDA Inc., a designer of PCIe controller cores that came out with 4.0 products three years ago.

    “We cannot use FR4. We needed to move to Megtron-6; that’s really clear to me. If you want to get more distance, you need to use retimers, and that’s a bit expensive and tricky.”

    “What we have been using for 4.0 and expect to use for 5.0 is twinax cables and FireFly connectors. The cost is very low compared to retimers, you can get whatever you want in distance, and the latency is really good.”

    The industry has thrived for nearly 18 years on the PCIe mechanicals, but it's time for a change.

    The Gen-Z interconnect group is already exploring new designs that put a connector 60 mm closer to a processor.

    The 4.0 standard supports FR4 boards for all uses but the longest reaches, where retimers are needed, said Al Yanes, the PCI-SIG chairman. Whether boards will have to move off of FR4 to get the 5.0 speeds “is still part of the discussion … some motherboards may only have a couple of 5.0 slots or attach points.”

    So, expect all consumer platforms to have one fast PCIe slot, and maybe one extra M.2 slot; all else will be the same slow PCIe 3.0.

  • I'm not sure why anyone would want PCIe 4.0 or 5.0, as modern graphics cards are not even saturating PCIe 3.0 16x. Where would anybody need more data throughput (maybe except for servers)?

  • @Psyco

    It is simple.

    • SSD drives are now limited by PCIe 3.0 x4
    • Chipset link is PCIe x4 only now, so copying from one fast SSD to another is heavily limited
    • InfiniBand and high-speed Ethernet cards now require x8-x16 slots

    This year we will see AMD adding PCIe 4.0, mostly due to the first two points (a rough calculation of the chipset bottleneck follows below).
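
    A rough illustration of the second point, assuming nominal PCIe 3.0 per-lane figures and ~3.5 GB/s sustained per fast NVMe drive (both numbers are assumptions for the sketch):

        # Why a PCIe 3.0 x4 chipset uplink limits fast NVMe drives behind the chipset
        PCIE3_LANE_GB_S = 8.0 * (128 / 130) / 8       # ~0.985 GB/s per PCIe 3.0 lane

        ssd_link = 4 * PCIE3_LANE_GB_S                # each NVMe drive sits on a x4 link (~3.9 GB/s)
        chipset_uplink = 4 * PCIE3_LANE_GB_S          # the chipset talks to the CPU over x4 as well

        # Two fast drives can each sustain ~3.5 GB/s, but together they (plus USB,
        # SATA and network controllers) share the single x4 uplink per direction.
        demand = 2 * 3.5
        print(f"Combined SSD demand: ~{demand:.1f} GB/s")
        print(f"Chipset uplink:      ~{chipset_uplink:.1f} GB/s per direction")
        print(f"Shortfall:           ~{demand - chipset_uplink:.1f} GB/s -> the uplink, not the drives, is the limit")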

  • @Vitaliy_Kiselev

    Who needs that much data moved that quickly?

    "Office"-User - no.

    The usual "high end" gamer - not really.

    Videoblogger - also no (I shoot BMD cameras in RAW and it piles up to 1 TB quite fast, but even that amount of data is copied before I finish my coffee).

    More professional production - yes, but they can afford an HEDT platform like Threadripper or Xeon with plenty of PCIe lanes. Just put NVMe RAID cards or the fast network cards in x16 slots.

    Don't get me wrong: progress is good and faster PCIe lanes are a step in the right direction. But the drawbacks (extremely short wires) of PCIe 4.0/5.0 are really bad... and expensive. So, why would anybody throw money at it?

    It really only makes sense to have PCIe 4.0/5.0 between the CPU and chipset (as they are really close together anyway) and then have MANY MORE PCIe 3.0 x16 slots on the mainboard.
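
    For scale, a quick look at how long 1 TB actually takes at different sustained copy speeds (the throughput figures are assumed round numbers for illustration):

        # How long does copying 1 TB take at various sustained throughputs?
        DATA_GB = 1000.0    # 1 TB, decimal

        scenarios = {
            "SATA SSD (~0.5 GB/s)":               0.5,
            "NVMe over PCIe 3.0 x4 (~3.5 GB/s)":  3.5,
            "NVMe over PCIe 4.0 x4 (~7 GB/s)":    7.0,
        }

        for name, gb_s in scenarios.items():
            minutes = DATA_GB / gb_s / 60
            print(f"{name}: ~{minutes:.1f} minutes for 1 TB")
        # ~33 min on SATA, ~5 min on PCIe 3.0 x4 NVMe, ~2.4 min on PCIe 4.0 x4 NVMe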

  • @Psyco

    No one asks the office user, as it is up to the department to decide.

    "Videoblogger - also no (I shoot BMD cameras in RAW and it piles up to 1 TB quite fast, but even that amount of data is copied before I finish my coffee)."

    People are different, and not everyone wants to take a coffee break during the copy.

    On any consumer platform, if you use two GPUs you can't use InfiniBand anymore; for example, to edit heavy footage directly on server SSD storage.
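
    To make the lane shortage concrete, here is a rough lane-budget sketch (the lane counts are typical for mainstream desktop platforms and are assumed for illustration):

        # Lane budget on a typical consumer platform with two GPUs plus a fast NIC
        CPU_GPU_LANES = 16          # mainstream desktop CPUs expose 16 lanes for add-in cards
        CHIPSET_UPLINK_LANES = 4    # everything else funnels through a x4 uplink

        gpus = 2
        lanes_per_gpu = CPU_GPU_LANES // gpus               # two GPUs -> x8 + x8
        remaining = CPU_GPU_LANES - gpus * lanes_per_gpu    # 0 CPU lanes left over

        NIC_LANES_NEEDED = 8        # InfiniBand / high-speed Ethernet cards typically want x8
        print(f"Two GPUs take x{lanes_per_gpu} each, leaving {remaining} CPU lanes")
        print(f"A x{NIC_LANES_NEEDED} NIC must then sit behind the x{CHIPSET_UPLINK_LANES} chipset uplink,")
        print("so its bandwidth is capped well below what the card could deliver.")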

    "More professional production - yes, but they can afford an HEDT platform like Threadripper or Xeon with plenty of PCIe lanes. Just put NVMe RAID cards or the fast network cards in x16 slots."

    It is consumer products that need PCIe 4.0 and 5.0 the most, precisely because there is a total shortage of lanes. And yes, you can't add more PCIe lanes while keeping the same price.
    They could remove the RGB LEDs, of course, but no sane marketing manager will do it, as RGB LEDs sell better :-)

    "It really only makes sense to have PCIe 4.0/5.0 between the CPU and chipset (as they are really close together anyway) and then have MANY MORE PCIe 3.0 x16 slots on the mainboard."

    This comment makes no sense. Open any of my recent topics in Storage and Computer and look at the real chipset limits and such.

  • Intel plans to have it working in 2021 for the server segment; most probably we will see it in the consumer segment around 2022-24.
