
That switch appears to have 2x 400G ports, 2x 200G ports, 8x 50G ports, and a pair of 10G ports. So unless it allows bonding together the 50G ports (which the switch silicon probably supports at some level), it's not going to get you more than four machines connected at 200+ Gbps.




As with most ports of 40GbE and above, the 400Gbit ports can be split into 2x 200Gbit ports with special breakout cables. So you can connect a total of 6 machines at 200Gbit.

Ah, good point. Though if splitter cables are an option, then it seems more likely that the 50G ports could be combined into a 200G cable. Marvell's product brief for that switch chip does say it's capable of operating as an 8x 200G or 4x 400G switch, but Mikrotik may need to do something on their end to enable that configuration.

I'm not trolling here: Do you think that Marvell sells the chips wholesale but the vendor buys the feature set (IP/drivers/whatever)? That would allow Marvell to effectively sell the same silicon but segment the market depending upon what buyers need. Example: one buyer might need a config that is just a bunch of 50Gb/s ports, another only 100Gb/s ports, and another a mix. (I'm thinking of blowing fuses in the manufacturing phase, similar to what AMD and Intel do.) I write this as a complete noob in switching hardware.

The Marvell 98DX7335 switch ASIC has 32 lanes that can be configured any way the vendor wants. There aren't any fuses and it can even be reconfigured at runtime (e.g. a 400G port can be split into 2x200G).
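For concreteness, here's a toy Python sketch (not anything from Marvell or MikroTik) that just checks whether a port layout fits in the 32 lanes, assuming 50G per lane so that a 400G port takes 8 lanes and a 200G port takes 4:

    # Sketch only: models the 98DX7335's 32 SerDes lanes at an assumed
    # 50G per lane, so a 400G port consumes 8 lanes and a 200G port 4.
    LANES_TOTAL = 32
    LANES_PER_PORT = {400: 8, 200: 4, 100: 2, 50: 1}

    def fits(port_speeds):
        """port_speeds: list of per-port speeds in Gbps, e.g. [400, 200, 50]."""
        used = sum(LANES_PER_PORT[s] for s in port_speeds)
        return used, used <= LANES_TOTAL

    print(fits([400] * 4))                          # (32, True) -> 4x 400G
    print(fits([200] * 8))                          # (32, True) -> 8x 200G
    print(fits([400] * 2 + [200] * 2 + [50] * 8))   # (32, True) -> this switch's
    # SFP/QSFP layout; the two 10G ports presumably hang off a separate interface.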

Are the smaller 98DX7325 and 98DX7321 the same chip with fuses blown? I wouldn't be surprised.


I think if Marvell were doing that, they would have more part numbers in their catalog.

You’re talking about link aggregation (LACP) here, which requires specific settings on both the switch and client machine to enable, as well as multiple ports on the client machine (in your example, multiple 50Gbps ports). So while it’s likely possible to combine 50Gbps ports like you describe, that’s not what I was referring to.
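To illustrate why the distinction matters, here's a toy sketch (not real bonding code): an LACP bond hashes each flow onto a single member link, so one flow never exceeds one member's speed, whereas a lane-grouped 200G port is a single link at full rate.

    # Toy illustration: 4x 50G in a LACP bond is not the same as one 200G port,
    # because the bond hashes each flow onto a single member link.
    MEMBER_SPEED_GBPS = 50
    MEMBERS = 4

    def lacp_member_for(flow_id: int) -> int:
        # Real bonds hash on MAC/IP/port tuples; a modulo stands in here.
        return hash(flow_id) % MEMBERS

    flows = [101, 202, 303, 404]
    print({f: lacp_member_for(f) for f in flows})   # each flow pinned to one member
    print(f"single flow max: {MEMBER_SPEED_GBPS} Gbps")
    print(f"aggregate max:   {MEMBER_SPEED_GBPS * MEMBERS} Gbps")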

No, I'm not talking about LACP, I'm talking about configuring four 50Gb links on the switch to operate as a single 200Gb link as if those links were wired up to a single QSFP connector instead of four individual SFP connectors.

The switch in question has eight 50Gb ports, and the switch silicon apparently supports configurations that use all of its lanes in groups of four to provide only 200Gb ports. So it might be possible with the right (non-standard) configuration on the switch to be able to use a four-way breakout cable to combine four of the 50Gb ports from the switch into a single 200Gb connection to a client device.


Ok. I’ve never seen a configuration like this; using breakout cables to go from a higher-bandwidth port to multiple lower-bandwidth clients is common, but not the reverse. So I still disagree with your assertion that it seems “more likely” that this would be supported.

Breakout cables typically split to 4.

e.g. QSFP28 (100GbE) splits into 4x SFP28s (25GbE each), because QSFP28 is just 4 lanes of SFP28.

Same goes for QSFP112 (400GbE). Splits into SFP112s.

It’s OSFP that can be split in half, i.e. into QSFPs.
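For reference, the usual lane counts behind those form factors, as a quick sketch (these are the nominal configurations from memory, not spec quotes):

    # Sketch of nominal (per-lane Gbps, lane count) for the form factors above.
    FORM_FACTORS = {
        "SFP28":     (25, 1),    # 25G
        "QSFP28":    (25, 4),    # 100G = 4x 25G lanes
        "SFP56":     (50, 1),    # 50G
        "QSFP56":    (50, 4),    # 200G = 4x 50G lanes
        "SFP112":    (100, 1),   # 100G
        "QSFP112":   (100, 4),   # 400G = 4x 100G lanes
        "QSFP56-DD": (50, 8),    # 400G = 8x 50G lanes
        "OSFP":      (100, 8),   # 8 lanes: 800G at 100G/lane, 400G at 50G/lane
    }

    for name, (per_lane, lanes) in FORM_FACTORS.items():
        print(f"{name:10s} {lanes} x {per_lane}G = {per_lane * lanes}G")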


This is incorrect - they can split however the driving chip supports. (Q|O)SFP(28|56|112|+) can all be split down to a single differential lane. All (Q|O)SFP(28|56|112|+) does is provide basically direct, high-quality links to whatever your chip's SERDES interfaces can do. It doesn't even have to be Ethernet/IB data - I have an SFP module that has a SATA port lol.

There's also splitting at the module level. For example, I have a PCIe card that is actually a fully self-hosted 6-port 100GbE switch with its own onboard Atom management processor. The card only has 2 MPO fiber connectors - but each has 12 fibers, which can each carry 25Gbps. You need a special fiber breakout cable, but you can mix anywhere between 6 100GbE ports and 24 25GbE ports.

https://www.silicom-usa.com/pr/server-adapters/switch-on-nic...
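Taking those numbers at face value (24 fibers total at 25Gbps each, a 100GbE port using 4 of them, a 25GbE port using 1), the possible mixes are just a constraint on 24 lanes - a back-of-the-envelope sketch:

    # Back-of-the-envelope sketch using the numbers above: 2 MPO connectors x
    # 12 fibers at 25 Gbps each; 100GbE uses 4 fibers, 25GbE uses 1.
    TOTAL_FIBERS = 2 * 12

    valid_mixes = [
        (n100, n25)
        for n100 in range(TOTAL_FIBERS // 4 + 1)
        for n25 in range(TOTAL_FIBERS + 1)
        if n100 * 4 + n25 * 1 == TOTAL_FIBERS
    ]
    # Yields (0, 24) through (6, 0): anywhere from twenty-four 25GbE ports
    # to six 100GbE ports.
    print(valid_mixes)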


Here’s an example of the cables I was referring to that can split a single 400Gbit QSFP56-DD port into two 200Gbit ports:

https://www.fs.com/products/101806.html

But all of this is pretty much irrelevant to my original point.



