Cat 7 vs Cat 6e

Just throwing this out there... Cat7 is NOT TIA/EIA compliant for 8P8C connections. TIA and EIA have both decided to skip Cat7 as a standard for typical wiring scenarios. You're best off with Cat6a STP.

Cat8 is the only newly recognized standard beyond 6a, and it's rated for 40Gbit (or 25Gbit) at up to 30m.
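For anyone keeping score across the thread, the commonly cited per-category spec limits can be sketched as a quick lookup. The figures below are the usual TIA numbers, not something from this thread, and real-world results obviously vary with install quality:

```python
# Commonly cited TIA category limits (rough spec numbers, not guarantees).
categories = {
    # name:    (bandwidth MHz, max speed Gb/s, distance m at that speed)
    "Cat5e": (100,  1, 100),
    "Cat6":  (250, 10,  55),   # 10Gbit only over short, clean runs
    "Cat6a": (500, 10, 100),
    "Cat8": (2000, 40,  30),   # also 25Gbit at the same 30m
}

def supports(category, gbps_needed, run_m):
    """True if the category is specified for that speed at that run length."""
    mhz, gbps, dist = categories[category]
    return gbps >= gbps_needed and run_m <= dist

print(supports("Cat6a", 10, 100))  # → True
print(supports("Cat8", 40, 90))    # → False: 40Gbit is only specified to 30m
```

Which is the whole argument in one place: 6a already covers 10Gbit at the full 100m, and Cat8's headline speeds only exist inside 30m.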

Also, make sure you're not buying CCA... That stuff has flooded the market...

The problem is that Cat8 is super expensive and terribly difficult to find. A lot of what I'm finding labeled as Cat8 is really running Cat6/7 specs and, as such, is way overpriced. It's a total ripoff arena right now.

Lastly... +1 to Cable Matters. I bought a 1000ft spool of yellow Cat6 from them a few years ago, and it's some pretty good stuff, and proper Cat6. I think I paid $140 for it, whereas most of the others near that price range seemed to be using CCA, which would be horrible for network cable.

I've never heard of Cable Matters, but a quick comparison shows they're priced higher than Monoprice, at least with regard to the cable we're talking about. As an apples-to-apples comparison (neither of these is CCA), links are below:

https://www.monoprice.com/product?c_id=102&cp_id=10234&cs_id=1023401&p_id=8103&seq=1&format=2

1000FT Cat 6 Bulk Bare Copper Ethernet Network Cable UTP, Solid, Plenum Jacket (CMP), 550MHz, 23AWG
 
Cat8 is the only newly recognized standard beyond 6a, and it's rated for 40Gbit (or 25Gbit) at up to 30m.

Also, I would throw in that since Cat8 only has a 30 meter (~100ft) range for 40G, it makes a poor choice for cabling anything longer. WAY too expensive to waste like that right now.
 
The funny thing about the Cat7 BS is that they didn't want to ratify a Class F cable because they didn't want to use the TERA connector. It's BS because the cable works just fine with an 8P8C connector, and it also uses the ARJ45 (which comes standard on all Cat6a and Cat7 cabling), which happens to be compliant with IEC 61076-3-110. 7 and 7a may not be "recognized," but they're still compliant with every other standard, including UL 1581. The other hilarious thing is that nearly every cable source claims it meets all standards. Just check this out.
https://docs.google.com/viewer?url=...m/files/specsheets/30667_Specsheet_180420.pdf

The only thing this thread really hinges on is the CMR rating. A CMR-rated cable is a higher grade than CMX, which is the residential standard, aka almost literally any cable you can buy on a spool.
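For anyone skimming, the jacket-rating hierarchy that point rests on (from the NEC's communications-cable rules: plenum, then riser, then general purpose, then residential) can be sketched as a simple substitution check:

```python
# NEC communications jacket ratings, most to least stringent.
# A cable with a more stringent rating may substitute for a lesser one.
ratings = ["CMP", "CMR", "CM", "CMX"]  # plenum, riser, general, residential

def can_substitute(have, need):
    """True if a cable rated `have` is acceptable where `need` is required."""
    return ratings.index(have) <= ratings.index(need)

print(can_substitute("CMR", "CMX"))  # → True: riser-rated is fine in a residential run
print(can_substitute("CMX", "CMR"))  # → False: residential jacket can't go in a riser
```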

The other hilarious thing is that nearly all data centers use Cat7 cabling because it's more flexible than 6a with that stupid plastic piece.
 
Also, I would throw in that since Cat8 only has a 30 meter (~100ft) range for 40G, it makes a poor choice for cabling anything longer. WAY too expensive to waste like that right now.

How does that make it a poor choice for cabling anything longer? You realize 40Gbit isn't going to be in a home, at all, ANYTIME soon, right? We are just now seeing people get a trickle of 10Gbit, and that is a rarity. It'll probably be another 20 years before we see 40Gbit get into homes.


@PP; Most data centers that I have been in have stuck with 6a and fiber; I haven't seen one with Cat7 using 8P8C connections *yet*. It's 8P8C on Cat6a and QSFP-based fiber connections.


What I am saying in the end is that Cat7 (and 8 especially) is pointless in the home, unless you happen to have a real need to run your cable along a pair of 240V lines at high amperage. Cat6a is sufficient for 10Gbit @ 100m, provided you have a proper installation.

It's almost like when everyone demanded you must use Cat6 for 1Gbit instead of 5e; it's not a requirement if you're installing everything within what the standard allows.
 
You realize 40Gbit isn't going to be in a home, at all, ANYTIME soon, right?
I have 20Gb bonded, does that count? I have also been looking at the Intel 710 cards.

What I am saying in the end is that Cat7 (and 8 especially) is pointless in the home, unless you happen to have a real need to run your cable along a pair of 240V lines at high amperage. Cat6a is sufficient for 10Gbit @ 100m, provided you have a proper installation.
Yea, I clarified that already. Cat7 really isn't all that expensive unless you're doing a very large home with like 4 ports per wall or something. It's easier to work with and shielded by default. At this rate it makes no sense to use 6a if you can afford 7, simply because of the aforementioned point. My 6a was properly installed and there was still signal degradation due to it being UTP, and the portion close to AC was a good 10-15ft away. Replaced it with Cat7 and have had zero issues. Cat7 patch cables from a foot to 100 are even the same price. (I should also mention I'm using a 6a keystone.) This would have been rectified if I had used some of the FTP 6a on the spool, but tbh F*CK that cable and F*CK that inner plastic piece.

@PP; Most data centers that I have been in have stuck with 6a and fiber, haven't seen one with any Cat7 using 8P8C connections *yet*. 8P8C on Cat6a, and QSFP based fiber connections.
We have DAC, fiber, 7 and 6a. The 6a is using unshielded 8P8C connectors and the 7 is using shielded. All but one node (2 racks) have been moved to 7; our oldest VSOU is using the 6a. Our cabinet of patch cables is a combo of 6a and 7, but most are 7 now. To be absolutely fair, the comms engineers that took over are total nimrods and don't know the difference between their *** and a hole in the ground. I have a very good feeling that when JOTT comes by for the audit, they'll get slammed for not being 150% within code on those cables lol. The ALIS baseline calls for 6a.
 
I am wary of most premade patch cables. I've been bitten too many times by cables that fall short of the spec they claim to meet.

As for audits, that's why the data centers I get to poke around in haven't accepted Cat7 for normal runs: costs may be similar, and it's supposedly a superior cable, but because it isn't accepted by a specific entity it just can't be used yet, and most likely never will be.

I would hate to be the one that installs Cat7, or even Cat8, at this point in an attempt to future-proof anything, unless the cable is absolutely required (such as in your case, due to the interference). Even more so in any enterprise environment, where it could cost my *** a job.

As far as the 20Gbit bonded... Not sure that counts since it's bonded. :p

BTW, X710 or XL710? I rather like the looks of these XL710 cards
 
All of my patch cables are from Cable Matters, no complaints here.

It's superior in the fact that it doesn't have that annoying *** plastic strip in it. Makes it a lot easier to cable manage. IMO they should have done that with 6 to begin with, and then 7 wouldn't exist the way it does. As for their audits, I don't really care. Those stupid ratifications and certifications are as pointless as other certifications in IT. Just more BS. The only thing that should matter is CMR, and if it has that, it's good by me. They probably don't even know that it isn't properly ratified and went with the "best they could find" rather than actually knowing a thing or two about their damn job. *shrug*

X710 QDA4, because it's 4 10Gb ports, which I can use, and it's the same price as the XL710 QDA2. I don't have a QSFP+ switch and probably won't any time in the near future on sheer cost, so for my situation DAC between servers and 2 ports to LAN makes more sense. Although the XL can use breakout cables, so that'd be 8 10Gb ports for only 8 lanes, but that's double the data, and 8 DACs would get messy. I just feel that'd bottleneck, but tbh I haven't looked into it too heavily yet.
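The "I just feel that'd bottleneck" hunch checks out on paper: these 710-series cards sit in a PCIe 3.0 x8 slot. Assuming the usual PCIe 3.0 figures (8 GT/s per lane, 128b/130b encoding), the back-of-envelope math looks like this:

```python
# Does 8x 10GbE via breakout cables exceed what a PCIe 3.0 x8 slot can feed?
lanes = 8
gt_per_s = 8                 # PCIe 3.0 transfer rate per lane
encoding = 128 / 130         # 128b/130b line-code overhead
pcie_gbps = lanes * gt_per_s * encoding  # ~63 Gb/s of usable slot bandwidth

ports_gbps = 8 * 10          # eight 10Gb ports via two 4x breakout cables
print(ports_gbps > pcie_gbps)  # → True: the slot caps out before the ports do
```

So fully loading both QSFP+ ports with breakouts would indeed oversubscribe the slot by roughly 17 Gb/s, though only under simultaneous full load on all eight ports.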

Edit: Those wazoos didn't even recognize a flooding flag in our firewall before it died.
 
You mean X710-DA4, right? I can't find a QDA4 in the X710 series.

From my understanding you wouldn't need a QSFP switch when using the XL710 if you're using breakouts, or am I mistaken?
 
Yes a DA4.

DAC between servers and 2 ports to LAN makes more sense
8 DACs would get messy

I'd want a switch instead of breakout cables so the rest of the network can utilize the bandwidth and I don't have 8 thick *** DAC cables coming out of one card. I could technically use only one breakout cable, with all 4 going to an SFP+ switch and a DAC to my big server, but then I'd need an XL in that one, and my other machines still wouldn't get 20-40Gb. It would get very costly going either way with the XLs. Instead I'd rather just have one DA2 in one server and a DA4 in my NAS: 2 fiber to a switch, and the current fiber solution to my main server. That way the backbone has 20Gb and the main server also keeps the 20Gb direct link it currently has. Tertiary machines will just connect to the switch due to lack of extra PCI-E lanes, while still relieving the single 10Gb connection that's bottlenecked.
Even cheap Mellanox 12-port QSFP+ switches are 11 grand, and other switches with QSFP+ uplinks are 3 grand+.

The sad thing is, even 20Gb bottlenecks NVMe caching.
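A rough sanity check on that claim; the NVMe figure here is an assumed typical Gen3 x4 sequential-read rate, not a measured number from my setup:

```python
# Why a 20 Gb/s bonded link can still bottleneck NVMe caching.
link_gbps = 20                       # two bonded 10GbE ports
link_gbyte_s = link_gbps / 8         # 2.5 GB/s of raw line rate
nvme_gbyte_s = 3.5                   # assumed Gen3 x4 NVMe sequential read

print(link_gbyte_s < nvme_gbyte_s)   # → True: the network, not the SSD, is the ceiling
```

And that's before protocol overhead, so the real gap is a bit wider than the raw numbers suggest.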
 