Time Warner Hybrid-Fiber-Coaxial (HFC) Network

Time Warner Cable traditionally used an all-coaxial network to deliver television channels to subscribers. This cable system is itself a type of network: a network is simply an interconnected system of things, in our case televisions.

There are a wide variety of networks in use today. The most basic network, the bus, connects each node (e.g. a workstation) to a common backbone. Time Warner’s old network was entirely a bus network; now only the coaxial run to each neighborhood is. Most bus networks are very unreliable because if the backbone is broken at any point, the entire network fails.

Figure 1 – TWC Coaxial Bus network after leaving node

As channel availability increased and cable services broadened, the cable system had to be redesigned. Time Warner chose a Hybrid Fiber Coax (HFC) system to provide the bandwidth and scalability they need now and will need in the future. The company chose high-speed optical fiber to connect each of their hubs and nodes, and regular coaxial cable to connect the customers to the nodes and hubs.

Time Warner’s fiber network is a SONET (Synchronous Optical Network) ring. This network is fully redundant: it uses fiber both above and below ground to provide diverse-route connections. This type of network is not only stable but very fast. If someone were to break one of the fiber routes, the system automatically re-routes traffic onto the diverse route.

For scalability, Time Warner has ninety-six-count fiber in place, which allows them to run many networks within one fiber sheath. Each service requires four fibers: two to carry traffic clockwise around the ring and two to carry traffic counter-clockwise, plus a matching set of fibers to carry the diverse-path (underground or overhead) traffic. They currently run services such as Video On Demand (VOD), cable TV and high-speed cable internet.

The signals are transported hub to hub on single-mode fiber (for long-haul applications). At the end of each multi-mode fiber run (short-distance applications) there is an optical-to-electrical converter that moves the signal from fiber onto coaxial cable. The coaxial lines have to be amplified wherever the signal would otherwise be too weak to receive at the other end, and each time the signal is amplified, more noise is introduced onto the line. Time Warner keeps its cable network no deeper than six amplifiers to keep noise to a minimum. Fiber has the same issue, but its amplifiers are regenerators called Erbium-Doped Fiber Amplifiers (named for the technology they use), which are very expensive.

The Time Warner network is a MAN (Metropolitan Area Network), a network that spans an entire city. The entire Time Warner network consists of many different MANs; other locations, such as Milwaukee and Tennessee, are linked together to form a WAN (Wide Area Network).

Figure 2 – SDTV Channel

To fit all the data onto one single coaxial cable, TWC uses multiple signaling methods and multiplexing. The coaxial cable network uses FDM (Frequency Division Multiplexing) to divide the available bandwidth into “chunks” of spectrum for each television channel, data channel or high-definition digital channel. The coaxial spectrum has a bandwidth of 750MHz. Each channel of SDTV (standard-definition television) occupies 6MHz of bandwidth (see Figure 2). The coax contains a sub-band from 0-55MHz which carries return traffic and is divided into 1.5MHz “chunks”. Usable downstream bandwidth on the cable spectrum starts at 55MHz: from 55MHz to 550MHz is where all the analog signaling lives, and the space from 550MHz to 750MHz is all digital data using 256 QAM (Quadrature Amplitude Modulation with 256 constellation points, i.e. 8 bits per symbol). QAM is a combination of two simpler encoding methods called ASK (Amplitude Shift Keying) and PSK (Phase Shift Keying). In ASK, the amplitude of the signal is varied to transmit data; in PSK, the signal’s phase is varied for each bit or group of bits.
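The spectrum arithmetic above can be sketched in a few lines of shell (all figures in MHz, taken from the paragraph):

```shell
# How many "chunks" fit in each band of the 750 MHz coax spectrum?
return_chunks=$(( 55 * 10 / 15 ))      # 1.5 MHz return chunks in 0-55 MHz
analog_channels=$(( (550 - 55) / 6 ))  # 6 MHz analog channels in 55-550 MHz
digital_slots=$(( (750 - 550) / 6 ))   # 6 MHz digital slots in 550-750 MHz
echo "return=$return_chunks analog=$analog_channels digital=$digital_slots"
# → return=36 analog=82 digital=33
```

So the plant carries roughly 82 analog channel slots and 33 digital slots alongside the 36 return-path chunks.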

Digital channels can fit roughly ten channels into the same 6MHz space as one SDTV chunk. They use MPEG-2 compression to compress the video and then 256 QAM signaling to deliver the information to the viewer.
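As a rough sanity check of that ten-to-one claim, assuming the ~5.36 Msym/s symbol rate commonly used for 256 QAM in a 6 MHz North American cable channel (ITU-T J.83 Annex B; the post itself gives no symbol rate):

```shell
# Approximate raw capacity of one 6 MHz slot carrying 256 QAM.
awk 'BEGIN {
    bits_per_symbol = 8     # 2^8 = 256 constellation points
    msym_per_sec    = 5.36  # assumed J.83 Annex B symbol rate, Msym/s
    raw = bits_per_symbol * msym_per_sec
    printf "raw: %.1f Mbit/s, per stream (10 streams): %.1f Mbit/s\n", raw, raw / 10
}'
# → raw: 42.9 Mbit/s, per stream (10 streams): 4.3 Mbit/s
```

About 4.3 Mbit/s per stream before FEC overhead, which is a comfortable fit for an MPEG-2 SD stream.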

Figure 3 – Capture of a video from Paramount

One main service that requires a very elaborate process to provide to customers is VOD (Video On Demand). This service requires TWC to receive a signal from the “pitcher” via satellite as a C-band transmission in the 3.7 to 4.2GHz range. The “catcher” receives the movie and sends it to the “short stop,” which buffers the movie until it is fully received. The movie is then sent to the Asset Management System (AMS), which verifies the movie’s validity, size and other details. Then, when the time is right, the AMS sends the movie out to the propagation server, which distributes it to the SeaChange servers located in nine of the twenty hub sites.

Figure 4 – Propagation of a movie

A SeaChange server is basically a large storage center for movies that allows the movie to be delivered to a VOD customer requesting a movie. The request is sent from your cable box through the return band on your coax back through a series of checks for account payment and permission to request the movie. The movie is then sent back to you on your own personal “chunk” of a 256 QAM signal and then appears on your television.

Time Warner Cable uses a wide variety of technology to deliver the best in entertainment and maximize productivity. Their extensive use of an HFC (Hybrid Fiber Coax) network is very efficient and robust. All of this is possible because of the fundamentals of data communication: signaling methods, multiplexing, and networks.

The entire Time Warner network (In Kansas City)


Video On Demand


The Cable Spectrum

Disabling performance counters (Windows)

  1. Back up your registry
  2. Open “regedit”
  3. Navigate to “HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services”
  4. Find the service you wish to disable performance counters for
  5. Navigate to the “Performance” key. If the key does not exist, create it.
  6. Create a DWORD entry called “Disable Performance Counters” and set the value to 1
  7. Enjoy a cool beverage.
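Steps 3–6 can also be captured in a .reg file and imported with a double-click; “MyService” below is a placeholder, so substitute the service name you found in step 4:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MyService\Performance]
"Disable Performance Counters"=dword:00000001
```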

Amazingly large null file (/var/log/lastlog)

I’ve been seeing a lot of strange size reports on some Linux machines… specifically 64-bit systems.

The reason you see a 1.2TB file full of null data is that the “nfsnobody” user is created with a UID of “-1”, which maps to the highest UID available. On a 32-bit system this is “65534”, but on a 64-bit system it’s a staggering “4294967294”. Lastlog pre-allocates space for every UID it observes, and so reserves room for 4.2 billion users just to accommodate the user “nfsnobody”. It really doesn’t use _that_ much space (the file is sparse), but most backup utilities (e.g. EMC Retrospect) don’t know how to handle null/sparse files and will hang almost indefinitely when trying to back up that file. Here’s the quick and dirty solution:

# usermod -u 65533 nfsnobody
# groupmod -g 65533 nfsnobody
# echo "" > /var/log/lastlog
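Two quick sanity checks (assuming bash and GNU coreutils on Linux): the UID arithmetic described above, and a demonstration that a sparse file’s apparent size dwarfs what is actually allocated on disk:

```shell
# Top of the UID range (the values quoted above):
echo $(( (1 << 16) - 2 ))   # → 65534      (16-bit UIDs)
echo $(( (1 << 32) - 2 ))   # → 4294967294 (32-bit UIDs)

# A sparse file: huge apparent size, almost nothing allocated.
truncate -s 1G sparse.demo
stat -c 'apparent: %s bytes, allocated: %b blocks' sparse.demo
rm -f sparse.demo
```

The `du` of such a file is tiny even though `ls -l` reports gigabytes, which is exactly why naive backup tools choke on lastlog.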

Happy Linuxing!

Partition Manipulation (LVM … yum.)

A while back I faced a difficult issue concerning a full partition that needed to be expanded – logical volume management was in use, but there was no extra physical disk space to be partitioned. Here’s what I did. The solution/information below pertains to a CentOS/Red Hat EL box, release 4 or higher. BACK UP YOUR DATA! :-)

Real World Problem:
Server "foo" has 50KB free on /home and no additional disks to grow onto. All drive slots are full and the server is running RAID5.

Real World Solution:
Copy the partition to another new disk (or disk array), pop into cfdisk, create a new partition in the extra free space, grow the logical volume, and resize with ext2online. Here’s a step-by-step:

  1. Use dd (or any disk-imaging tool) to copy the disk to the new disk
  2. Boot from the new disk
  3. Download the CFDisk RPM
  4. Download ncurses-4 and ncurses-5
  5. Install above mentioned RPM’s
    (rpm -ivh cfdisk-glibc-0.8g-1.i386.rpm; rpm --force -ivh ncurses4-5.0-12.i386.rpm; rpm --force -ivh ncurses-devel-5.4-13.i386.rpm)
  6. Run cfdisk /dev/device (probably /dev/sda or sdb .. you should know the device!)
  7. Create new partition and write changes – reboot.
  8. Create pv (pvcreate /dev/sda6) this creates the pv on sda6
  9. Add the pv to the volume group (vgextend VolGroup00 /dev/sda6)
  10. vgdisplay will show volume group’s total size, now we can extend our lv
  11. Unmount the filesystem being grown (in our case, /home): umount /home
  12. lvextend -L+100GB /dev/mapper/VolGroup00/LogVol00-Home – This extends the current lv by 100GB
  13. Mount the partition back to its mountpoint (as specified in fstab): mount /home
  14. Use ext2online to grow the filesystem to its full size – yes, while it’s mounted! (ext2online /home &)
    "df -h" will show the partition growing before your eyes!
  15. Enjoy a cool beverage.

The reason for using cfdisk and not fdisk is that fdisk will not recognize the disk-size change, because dd copies everything – all structures. cfdisk is the only utility (at least that I found) that can resize in this type of situation.
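For reference, here are steps 8–14 condensed into a dry-run sketch. The device and volume names are the examples from the list above, so substitute your own; the run() wrapper only prints each command, so remove it to execute for real:

```shell
# Dry-run of the LVM grow sequence (example names from the post).
DEV=/dev/sda6                  # new partition created with cfdisk
VG=VolGroup00                  # volume group to extend
LV=/dev/VolGroup00/LogVol00    # logical volume backing /home
run() { echo "+ $*"; }         # print commands instead of executing them

run pvcreate "$DEV"            # step 8: initialize the partition as a PV
run vgextend "$VG" "$DEV"      # step 9: add the PV to the volume group
run vgdisplay "$VG"            # step 10: confirm the new total size
run umount /home               # step 11: unmount the filesystem
run lvextend -L +100G "$LV"    # step 12: grow the LV by 100GB
run mount /home                # step 13: remount per fstab
run ext2online /home           # step 14: grow the filesystem online
```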

Oracle and RHCS – Bad Idea!

A long time ago I performed an installation of an Oracle cluster. This cluster, unlike a true Oracle cluster, was built with Red Hat Cluster Suite (RHCS) on two Dell 1950s, each with 8GB RAM and two 146GB 15K SAS drives. Each of these machines is connected to a CLARiiON SAN via QLogic FC cards through a pair of McDATA FC switches.

This cluster is more of an active/active cluster, but unlike Oracle’s RAC clustering software, there are four instances running on one machine and four running on the other, effectively evening out the load. There are quite a few downsides to this method, which I outline below; there are, however, a few pros as well.


Pros:

  • Allows applications not written for a RAC cluster to still be used in a clustered environment for high availability.

Cons:

  • Expensive! Both RHCS and GFS ($2,000 per node per year) were needed for this setup. This company chose supported versions of Linux, Cluster Suite and GFS; of course, you can do this freely, as the OS, file system and cluster utility are all freely available.
  • Not truly load balanced. This setup was meant to be load balanced, but these are still two separate Oracle installations on two machines; the clustering suite just allows for proper failover.
  • RHCS (Red Hat Cluster Suite) was never made to support Oracle on both systems. Oracle’s Enterprise Manager will not start for both installations on the same Ethernet adapter, a situation encountered when both installations fail over to one machine, leaving EM inaccessible for the failed-over databases.
  • Lots and lots of editing was needed to make Oracle work properly: editing the startup scripts and oratab, creating more custom startup scripts, and more. In general, this was a messy install.

In summary, I would not recommend using Oracle with RHCS. I would highly recommend that the application be re-written to be “RAC Compatible” so that you can fully utilize the power of Oracle load-balancing. This will save time and money in the short run, and possibly the long run, depending on what kind of support issues you encounter. RHCS is great, but I wouldn’t recommend it to anyone for Oracle.