[AusNOG] Copper versus fibre in the DC

John Edwards jaedwards at gmail.com
Sun Oct 13 10:35:43 EST 2013


If operational considerations factor at all into your decision-making, use SMF. It offers flexibility that smart operators can take advantage of to solve problems quickly.

The only reasonable excuse for more multimode is compatibility with existing MMF infrastructure.

- SMF supports single-fibre optics such as BX or GPON; MMF does not.
- SMF supports multiple wavelengths for xWDM solutions; MMF does not.
- MMF comes in multiple flavours. SMF does too, but it is generally compatible with fibre installed 15 years ago.
- SMF between racks means you can patch an incoming service to any rack in your facility.
- That means your border routers/switches don't all have to sit in the same physical rack.
- It can also save you a piece of powered active equipment converting between SMF and MMF where your racks meet the outside world.
- It may even mean you can patch services through to a different datacentre in the event of a failure or migration; you're not going to get that benefit with MMF (see the loss-budget sketch below).
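
To put rough numbers on that reach argument, here's a quick loss-budget sketch in Python. Every figure is a ballpark assumption (0.35 dB/km SMF attenuation at 1310 nm, 0.5 dB per mated connector, roughly a 6.2 dB channel budget for 10GBASE-LR), so check your optics' datasheets before leaning on it:

    # Rough SMF loss-budget sketch: can a patched-through service still
    # close the link? All figures are assumed ballparks, not datasheet values.

    ATTEN_DB_PER_KM = 0.35    # typical SMF attenuation at 1310 nm
    CONNECTOR_LOSS_DB = 0.5   # per mated connector pair
    SPLICE_LOSS_DB = 0.1      # per fusion splice
    LR_BUDGET_DB = 6.2        # approx. 10GBASE-LR channel insertion loss budget

    def link_loss_db(km, connectors, splices=0):
        """Total insertion loss of a simple SMF channel."""
        return (km * ATTEN_DB_PER_KM
                + connectors * CONNECTOR_LOSS_DB
                + splices * SPLICE_LOSS_DB)

    # In-building patch: 200 m through four patch panels (8 mated connectors).
    print(link_loss_db(0.2, connectors=8), "dB")             # ~4.07 dB: fits

    # Patch across to a nearby DC: 5 km, 4 connectors, 2 splices.
    print(link_loss_db(5.0, connectors=4, splices=2), "dB")  # ~3.95 dB: fits

Both scenarios land comfortably inside the LR budget; the same exercise with MMF's modal-bandwidth-limited reach (tens to a few hundred metres) rules the second one out entirely.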

As someone who manages datacentres for an employer who may one day buy more in Australia - everyone please use SMF between racks :)

If you must save $200 on a pair of optics, just keep it to patching within the same rack, and use SMF for all structured cabling!

John





On 12/10/2013, at 6:26 PM, Peter Tiggerdine wrote:

> SMF for carrier-to-rack, but I don't see any problems with MMF inter-rack. The delta in SFP cost is well worth it.
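
For a rough sense of how that optics delta stacks up against re-cabling later, a back-of-envelope sketch; every number in it is a hypothetical placeholder, not a real price:

    # Back-of-envelope: one-off optics premium vs. re-pulling cable later.
    # Every number here is a hypothetical placeholder, not a real price.

    SR_OPTIC = 60            # assumed price of a 10G-SR (MMF) module
    LR_OPTIC = 160           # assumed price of a 10G-LR (SMF) module
    LINKS = 40               # inter-rack links in the facility
    RECABLE_PER_LINK = 250   # assumed cost to replace one MMF run with SMF

    optics_premium = 2 * (LR_OPTIC - SR_OPTIC) * LINKS   # two modules per link
    recable_cost = RECABLE_PER_LINK * LINKS

    print(f"SMF optics premium up front: ${optics_premium}")   # $8000
    print(f"Re-cabling MMF to SMF later: ${recable_cost}")     # $10000

With these placeholder figures the up-front optics premium and the eventual re-cable land in the same ballpark, which is why opinions differ on where the delta is "worth it".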
> 
> On 12/10/2013 5:51 PM, "James Braunegg" <james.braunegg at micron21.com> wrote:
> Dear Alastair,
> 
> I would recommend:
> 
> Single-mode fibre for any rack-to-rack communications, or rack-to-carrier communication.
> 
> Today the same single-mode fibre will run 1Gbit, 10Gbit, 40Gbit and 100Gbit … and I'm sure it will run 400Gbit in years to come, and that's before you look at wavelength technology.
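
The speed ladder James describes works because each generation defines duplex-SMF optics for the same fibre pair. A sketch of the common ones; nominal reaches are quoted from memory, so verify against the relevant datasheets:

    # Common duplex-SMF optics per speed on the same fibre pair.
    # Nominal reaches quoted from memory -- confirm against datasheets.
    SMF_OPTICS = {
        "1G":   ("1000BASE-LX",  "1310 nm, 5 km (LX/LH parts commonly 10 km)"),
        "10G":  ("10GBASE-LR",   "1310 nm, 10 km"),
        "40G":  ("40GBASE-LR4",  "4 x 10G CWDM lanes, 10 km"),
        "100G": ("100GBASE-LR4", "4 x 25G LAN-WDM lanes, 10 km"),
    }
    for speed, (standard, notes) in SMF_OPTICS.items():
        print(f"{speed:>5}: {standard:<14} {notes}")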
> 
> I also find fibre has a placebo effect: people think it's more important than copper, so they take more care when touching it.
> 
> I would only use copper for switch-to-server communication within a rack.
> 
> Kindest Regards
> 
> James Braunegg
> P:  1300 769 972  |  M:  0488 997 207 |  D:  (03) 9751 7616
> 
> E:   james.braunegg at micron21.com  |  ABN:  12 109 977 666   
> W:  www.ddosprotection.com.au  T: @micron21
> 
> From: AusNOG [mailto:ausnog-bounces at lists.ausnog.net] On Behalf Of Alastair Waddell
> Sent: Friday, October 11, 2013 9:13 PM
> To: ausnog at ausnog.net
> Subject: [AusNOG] Copper versus fibre in the DC
> 
> Hi AusNOG,
> 
> I expect there are strong opinions about this.
> 
> As I'm relocating DCs, it's an opportunity to re-assess carrier interconnect terminations.
> 
> I've been reading that copper (Cat 7) is still valid for 10Gb/s Ethernet, and at the same time that the transceiver is a point of latency where the optical signal must be converted to an electrical one.
> 
> I figure the transceiver is also a point of failure that's absent in copper, although such an argument must surely factor in the quality of the cable/RJ and its subsequent handling (but how hard can it be!).
> 
> So: 
> 
> * Is copper a valid or even a 'better' choice to terminate carriers in the DC for 1Gb/s and beyond to 10Gb/s? *
> 
> PS: KISS and risk mitigation rule in my little world. My fallback position is that fibre is still preferred as the 'safe' option, especially wrt 10Gb/s. I just want to canvass all options. I don't want to repeat the exercise with the carriers at some future date if I can avoid it. It probably means sub-1Gb/s top-of-rack kit today (looking at 4948/4900M or Juniper equivalents) and new kit at somewhere near 1Gb/s throughput, with a preference to avoid carrier re-cabling.
> 
> "With the release of the IEEE 802.3an standard, 10 Gb/s over balanced twisted-pair cabling (10GBASE-T) is the fastest growing and is expected to be the most widely adopted 10GbE option. "
> 
> "At 1 Gb/s speeds, balanced twisted-pair compatible electronics offer better latency performance than fibre; however, considering latency at 10 Gb/s, currently fibre components perform better than balanced twisted-pair compatible 10GBASE-T electronics"
> 
> "Since optical fibre electronics cannot autonegotiate, a move from 1000BASE-xx to 10GBASE-xx requires a hardware change. In contrast, both 1GbE and 10GbE can be supported by 10GBASE-T balanced twisted-pair compatible equipment."
> 
> http://www.siemon.com/uk/white_papers/08-07-10-copper-fiber-options-data-center.asp
> 
> Regards,
> 
> -- 
> 
> Alastair Waddell
> Legion Internet
> Australia
> 
> 
> _______________________________________________
> AusNOG mailing list
> AusNOG at lists.ausnog.net
> http://lists.ausnog.net/mailman/listinfo/ausnog
