As NASA contemplated longer space missions involving multiple spacecraft, it foresaw the need for network automation, much like what exists on the terrestrial Internet. Missions had to include not only the application code to perform navigation and scientific observations—they also had to support communications with earth. NASA wanted to standardize and automate network communications in space, and at interplanetary distances.
Network automation is a huge advantage of the terrestrial Internet. Application developers in general do not have to worry about the performance of the network, and TCP/IP requires relatively little centralized administration. The Transmission Control Protocol (TCP) in particular ensures reliable communication for applications (almost anything of significance on the Internet uses TCP). TCP does this through a series of back-and-forth exchanges between the sending and receiving hosts: it establishes a communication session, negotiates communication speed, and verifies that all of the disparate pieces of a message have been properly received and reassembled at the destination in the right order. If anything is missing, it gets retransmitted.
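As a concrete illustration (not NASA code), the sketch below uses Python's standard `socket` library on the local loopback interface: the `accept`/`connect` pair completes TCP's session setup, and the data—sent in two separate pieces—arrives complete and in order, with retransmission handled invisibly by the protocol.

```python
# Toy loopback demo of TCP's session setup and ordered, reliable delivery.
import socket
import threading

def run_server(server_sock, received):
    conn, _ = server_sock.accept()        # completes TCP's session handshake
    with conn:
        while True:
            chunk = conn.recv(1024)       # TCP delivers bytes in order
            if not chunk:
                break                     # client closed the session
            received.append(chunk)

server = socket.create_server(("127.0.0.1", 0))  # OS picks a free port
port = server.getsockname()[1]
received = []
t = threading.Thread(target=run_server, args=(server, received))
t.start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"part1-")  # sendall keeps retransmitting until acknowledged
    client.sendall(b"part2")
t.join()
server.close()

print(b"".join(received))  # b'part1-part2' -- reassembled in the right order
```

All of this reliability machinery works only because acknowledgments flow back to the sender quickly—the assumption that breaks at interplanetary distances.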
All of these behind-the-scenes messages between sender and receiver assume that a communication pathway exists between the two that is almost instantaneous in nature. If that assumption breaks, TCP breaks. The session times out and must be re-established.
TCP works relatively well on planet Earth because the velocity of light provides almost instantaneous bidirectional communication given the relatively short distances involved (after all, a photon could circumnavigate the globe about 7.5 times in a second). However, NASA was contemplating traveling very long distances indeed. The minimum distance from Earth to Mars is about 54.6 million kilometers (about 33.9 million miles). The farthest apart they can be is about 401 million km (about 249 million miles). The average distance is about 225 million km (about 140 million miles). This means round-trip times for radio frequency communications ranging from roughly 6 minutes at closest approach to roughly 45 minutes at maximum distance.
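The round-trip figures follow directly from the distances and the speed of light; a quick back-of-the-envelope calculation:

```python
# Round-trip light times for the approximate Earth-Mars distances quoted above.
C_KM_PER_S = 299_792  # speed of light in vacuum, km/s

distances_km = {
    "closest": 54.6e6,
    "average": 225e6,
    "farthest": 401e6,
}

rtt_minutes = {
    label: 2 * d / C_KM_PER_S / 60  # there and back, in minutes
    for label, d in distances_km.items()
}

for label, rtt in rtt_minutes.items():
    print(f"{label}: ~{rtt:.0f} minutes round trip")
# closest: ~6 minutes, average: ~25 minutes, farthest: ~45 minutes
```

Compare that with the terrestrial Internet, where round trips are measured in tens of milliseconds.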
Not only would communications at interplanetary distances involve many, many times the signal delay experienced on earth, but they would also experience significant periods of disruption. Planets have this pesky habit of rotating. Anything on the surface of Mars would experience long communication blackouts whenever the planet itself blocked the line of sight to earth.
While other factors also constrained data communications at interplanetary distances, delay and disruption were the two primary limitations preventing the architecture of the terrestrial TCP/IP Internet from working in that environment. Which is how Delay/Disruption Tolerant Networking (DTN) got its name.
How to deal with this extreme environment? Change the store-and-forward architecture of the Internet (where data is stored for only a few milliseconds) to what has come to be called a “store-carry-and-forward” architecture. This uses local storage in the network devices themselves to hold onto a much larger “bundle” of data than an IP packet, and only forwards that bundle on toward its destination when an opportunity to do so presents itself.
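The idea can be sketched in a few lines. This is a toy model, not the actual DTN Bundle Protocol: a node keeps bundles in its own storage, possibly for hours, and forwards them only when a contact (a communication opportunity) with the next hop exists. The node and method names here are illustrative inventions.

```python
# Toy sketch of store-carry-and-forward: hold bundles locally, forward on contact.
from collections import deque

class DtnNode:
    def __init__(self, name):
        self.name = name
        self.storage = deque()  # bundles held locally, possibly for a long time

    def receive(self, bundle):
        self.storage.append(bundle)  # "store": take custody of the bundle

    def contact(self, next_hop):
        # "forward": drain storage only while a link to next_hop is available
        while self.storage:
            next_hop.receive(self.storage.popleft())

relay = DtnNode("orbiter")
ground = DtnNode("earth-station")

relay.receive({"dest": "earth", "payload": b"science data"})
# ... the orbiter "carries" the bundle until earth is in view ...
relay.contact(ground)       # a contact window opens
print(len(ground.storage))  # 1 -- the bundle has arrived
```

Unlike a TCP sender, the relay never needs an end-to-end session with the final destination; each hop takes responsibility for the bundle until the next opportunity to pass it along.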
We’ll be explaining this architecture in more detail in future blogs, but first, we’ll explain how DTN, which was designed for use in outer space, has very useful applications in network environments displaying very similar constraints right here on planet earth. That will be the topic of our next blog…
This blog is a product of the usual suspects: Scott Burleigh (NASA/JPL), Keith Scott (Mitre Corp./CCSDS), and Mike Snell (IPNSIG).