A performance comparison of DTN protocols for high delay optical channels

Designed for long propagation delays and frequent channel disruptions, delay tolerant networking (DTN) protocols support the latest proposed optical space internetworking communication missions. While previous DTN-based testbeds experimented with bandwidth capacities in the megabit range, this paper presents a channel emulation testbed that supports multiple gigabit-bandwidth Ethernet clients and servers simultaneously. In addition to its capacity, the testbed modeled optical link properties with channel emulation and provided high flexibility by utilizing virtual LANs and link aggregation connected through a gigabit switch. The testbed emulated propagation delay, asymmetric channel rates, and bit errors, including free-space optical bit errors that varied as a function of time. We emulated an optical flight terminal relaying data between ground stations and measured the maximum goodput of various DTN configurations while increasing the relay's round trip time. Results showed that for round trip times from 200 ms to one second, BP/TCPCL/TCP achieved higher goodput than BP/LTP/UDP and TCP/IP. For round trip times longer than one second, BP/LTP/UDP achieved higher goodput than BP/TCPCL/TCP and TCP/IP.

SECTION I

INTRODUCTION

As missions in space expand from space-to-ground station links to multiple relay spacecraft orbiting the Moon or Mars, the Consultative Committee for Space Data Systems (CCSDS) [1] has begun to standardize Delay Tolerant Networking (DTN) protocols to support store-and-forward space communication missions. CCSDS is currently standardizing the Bundle Protocol (BP) and the Licklider Transmission Protocol (LTP) [1]. These DTN protocols address issues that occur in store-and-forward topologies, including high latency and lack of end-to-end connectivity.

DTN protocols have been used for past space demonstration missions, including the Deep Impact Network Experiment (DINET) on the EPOXI spacecraft (2008), the Cisco router in Low Earth Orbit (CLEO) on the UK-DMC (2008), the International Space Station (2009), and the Internet Router in Space (IRIS) on Intelsat-14 (2011) [2]. Advanced DTN concepts, such as reactive bundle fragmentation, which breaks data into independent fragments when encountering disruptions, proved beneficial as well. However, these past missions were limited to data rates under tens of megabits per second because of the transmitters' channel bandwidth and power constraints.

Several current missions, such as NASA's Lunar Laser Communication Demonstration (LLCD), and planned missions, such as NASA's Laser Communications Relay Demonstration (LCRD, 2017), will rely on optical communication, which raises the requirements on DTN networking protocols. Compared to radio frequency (RF) communications, optical systems allow for higher-bandwidth, power-efficient links. For example, the LLCD mission downlinks at 622 Mbit/second over a range of approximately 400 thousand kilometers [3]. Optical links have the same speed-of-light delays but are more vulnerable to disruptions than RF. Applying DTN protocols to optical communications eases the potential problem of intermittent connections. LCRD adds DTN's Bundle Protocol to LLCD's hardware. Table I lists past and proposed future DTN-related space missions. We note that the future proposed missions will use DTN in the optical spectrum.

TABLE I. DTN SPACE MISSION DEMONSTRATIONS

While previous DTN-based testbeds were only able to experiment with maximum bandwidth capacities in the megabit range, this paper presents a channel emulation testbed that supports multiple gigabit-bandwidth Ethernet clients and servers simultaneously through a separate channel emulator. We designed a testbed with high bandwidth capacity to support multiple client machines using gigabit Ethernet links. When placed in between client machines, the testbed emulated optical channels with high propagation delay, and bit errors for free-space optical uplinks and downlinks that vary as a function of time. Clients transmitted using the Bundle Protocol, the Licklider Transmission Protocol, and TCP. We observed goodput to compare the different protocol configurations over several iterations of round trip times by increasing the propagation delay. Additionally, for flexibility we used virtual LANs and link aggregation, which allowed configuration of various network topologies.

First, section II details previous DTN protocol performance studies. Section III presents a theoretical analysis of TCP throughput for increasing round trip times. Section IV describes the testbed design and experiments. Section V presents the testbed experiment results. Section VI discusses lessons learned from the testbed and solutions for future work.

SECTION II

DTN BACKGROUND AND PERFORMANCE STUDIES

This section describes DTN protocol implementations, simulators, and emulation testbeds for validating DTN protocol feasibility in challenged environments. For an overlay store-and-forward network, RFC 5050 specifies the Bundle Protocol as a layer that lies between the application and transport layers. The BP layer encapsulates bundles, a series of contiguous data blocks, and passes them to an underlying convergence layer (CL) adapter.

Common convergence layer adapters for DTN include the Transmission Control Protocol CL (TCPCL) [4], the User Datagram Protocol CL (UDPCL) [5], and the Licklider Transmission Protocol (LTP) [6]. TCPCL allows for reliable transfer of frames through a TCP/IP network [4], while UDPCL provides unreliable transport through the User Datagram Protocol [5]. Applied to space links, LTP supports long-delay, point-to-point channels [6].

A. Protocol Implementations

There are three common Linux-based DTN implementations for the Bundle Protocol: DTN2, ION, and IBR-DTN.

1) DTN2 [7]

The codebase from the reference implementation built at Trinity College, Dublin, Ireland evolved into DTN2 [7]. The implementation is open source, hosted on SourceForge, and built primarily in C++. DTN2 includes built-in applications; users can run a wide range of commands such as dtnping, dtnsend, dtnrecv, and dtnperf.

2) ION [8]

NASA and Ohio University built the Interplanetary Overlay Network (ION) as a space-oriented implementation. For functionality in space, ION runs as flight software on VxWorks. DTN protocols including BP, LTP, the CCSDS File Delivery Protocol (CFDP), and the Asynchronous Message Service (AMS) are implemented for ION in the C language [8]. For routing, ION supports contact graph routing (CGR) [8], considered most suitable for routing among nodes in space.

3) IBR-DTN [9]

Technische Universität Braunschweig built IBR-DTN as a DTN implementation targeting embedded systems [9]. To run efficiently, the developers compiled it using uClibc++, a C++ library optimized for embedded systems. In addition to installing from source on Linux, IBR-DTN has packages for devices such as the Raspberry Pi running Debian Linux, Android-based smartphones, and wireless access points (APs) running OpenWrt.

B. DTN Simulators

Researchers have tested routing algorithms in challenged networks by building packages for general-purpose simulators, including ns-2/3 [10] and OMNeT++ [11], to measure DTN performance. Researchers have also developed custom software simulators, DTNSim2 and the Opportunistic Network Environment (ONE) [12]. No longer supported, DTNSim2 tested routing in underdeveloped regions and a vehicular infrastructure. Developed between 2008 and 2011, ONE [12] lacked support for lower-layer protocols and highly accurate timing. Thus, these simulators cannot validate the DTN performance of particular implementations on varied hardware for different topologies.

C. DTN Emulation Testbeds

Various testbeds have emulated space communication channels to measure DTN protocol performance. We discuss the advantages and disadvantages of testbeds that characterize DTN, including the Space Communication and Networking Testbed (SCNT) [13], the Space Internetworking Center (SPICE) [14], DTNRG's DTNbone [15], and the Jet Propulsion Lab's optical testbed [16].

Wang at Lamar University designed SCNT as a PC-based testbed to characterize DTN protocol goodput with latencies up to five seconds [13]. The SCNT refers to the entire testbed, including clients equipped with the ION implementation of DTN. A single centralized machine called the space link simulator (SLS) emulates the wireless channels for the entire network through virtual instrumentation in LabVIEW software. The SLS emulates channel delay variations, asymmetric link ratios, and bit error rates (BERs) through additive white Gaussian noise generation. The SCNT has successfully tested many DTN protocols over cislunar delay and asymmetric link simulations. However, the maximum baud rate for the testbed is 115,200 bits per second [17]. Inducing any delay or asymmetric link lowers the maximum data rate further according to the bandwidth-delay product, as illustrated below. While the SCNT's bandwidth capacity met the requirements of current satellite system simulations, the testbed would need further development to simulate future satellite communications.
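
To make the window and delay limitation concrete, the sketch below (our own illustration, assuming a fixed 64 KB sender window, a value the paper does not specify) shows how increasing the RTT caps achievable throughput at or below the SCNT's 115,200 bit/second link rate.

```python
def window_limited_throughput_bps(window_bytes, rtt_s, link_rate_bps):
    """Throughput is capped by both the link rate and window / RTT."""
    return min(link_rate_bps, window_bytes * 8 / rtt_s)

LINK_RATE = 115_200          # SCNT maximum baud rate, bits per second
WINDOW = 64 * 1024           # assumed 64 KB sender window (illustrative only)

for rtt in (0.5, 5.0, 10.0):  # cislunar-like round trip times in seconds
    bps = window_limited_throughput_bps(WINDOW, rtt, LINK_RATE)
    print(f"RTT {rtt:4.1f} s -> at most {bps / 1000:.1f} kbit/s")
# At 0.5 s the link rate dominates; at 5 s the window limit (~104.9 kbit/s)
# takes over, and at 10 s it halves again (~52.4 kbit/s).
```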

The Space Internetworking Center (SPICE) in Thrace, Greece built a DTN testbed of 12 nodes in three different lab locations: Democritus University of Thrace, the Hellenic Aerospace Industry in Athens, and the Massachusetts Institute of Technology in Cambridge, Massachusetts [14]. SPICE experiments with CFDP, AMS, and the space packet protocol by transmitting files between these labs. SPICE also communicates with HellasSat, a geosynchronous telecommunications satellite, as a relay node. As opposed to SCNT's centralized space link simulator, SPICE distributes channel emulation throughout the testbed. Each of the 12 nodes that connect to the testbed utilizes the network emulation functionality (netem) included in Linux kernels. The command line tool, traffic control (tc), in the IProute2 package, configures network emulation. With these tools, each node in the testbed can emulate bandwidth, packet error rate, corruption, duplication, re-ordering, and delay. Distributing emulation across the testbed eliminates the processing overhead that could occur on a centralized channel-emulating machine when the number of network nodes increases. This allows for a highly scalable testbed. However, netem and tc are only available on particular Linux distributions. Thus, the testbed lacks compatibility with client devices running other operating systems such as RTEMS and VxWorks, real-time operating systems used on most spacecraft. In addition, SPICE designed a graphical user interface (GUI) to track and control the link emulation parameters for each of the nodes. However, the GUI currently supports only the ION implementation.
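
As a rough illustration of the netem/tc mechanism SPICE relies on (our own sketch, not taken from [14]), a node could attach a netem queueing discipline to one of its interfaces as follows; the interface name and parameter values are placeholders.

```python
import subprocess

def emulate_channel(dev="eth1", delay_ms=250, jitter_ms=5,
                    loss_pct=0.01, corrupt_pct=0.001):
    """Attach a netem qdisc adding delay/jitter, random loss, and corruption."""
    cmd = [
        "tc", "qdisc", "replace", "dev", dev, "root", "netem",
        "delay", f"{delay_ms}ms", f"{jitter_ms}ms",
        "loss", f"{loss_pct}%",
        "corrupt", f"{corrupt_pct}%",
    ]
    subprocess.run(cmd, check=True)

def clear_channel(dev="eth1"):
    """Remove the emulation so the interface behaves normally again."""
    subprocess.run(["tc", "qdisc", "del", "dev", dev, "root"], check=True)

if __name__ == "__main__":
    emulate_channel()  # e.g., a GEO-like one-way delay with light loss
```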

DTNbone defines itself as a collection of nodes worldwide running DTN bundle agents and applications [15]. DTNbone users can test connections using Ohio University's and Glenn Research Center's Always On networks of ION and DTN2 nodes, which have a defined schedule of disconnections in the topology, described in [15]. However, Beuran characterizes DTNbone as an interoperability testbed, given that it contains five different DTN implementations [18], as opposed to a DTN performance testbed. For performance testing, the testbed would most likely need to be local.

Schoolcraft first characterized DTN protocols for optical communication in a testbed of two PCs [16]. The PCs used ION with LTP's Datagram Retransmission (DGR) [16]. Similar to TCP, DGR uses an adaptive timeout interval for congestion control, but without the round trip frequency of TCP. The testbed created a unidirectional forward optical link using two Perle media converter systems to convert gigabit Ethernet links into fiber and free-space mediums. For free-space optical links, a variable rotating attenuator could reduce the signal by 0 to 10 dB, which simulated the intermittent connections experienced. To create an asymmetric link, the testbed rate-limited Ethernet at the Internet Protocol (IP) layer by modifying Linux kernel network queueing settings. The testbed successfully emulated an optical link with channel disruptions and asymmetric bandwidth. However, the testbed did not examine long propagation delays to validate DTN performance over high-latency channels.

In [19], Pöttner compared the performance of the three common DTN bundle protocol implementations discussed in Section II-A between two PCs. The experiment compared implementation throughput for memory-based versus persistent disk-based storage and varied bundle sizes. When using DTN2 as both the transmitting and receiving implementation, the highest throughput was 687.329 Mbit/second with a 1 MB payload size. However, the experiment did not emulate channel latency, rate, or BER.

SECTION III

THEORETICAL ANALYSIS

To model the upper bound of the bundle protocol with the TCP convergence layer throughput, we examined how the theoretical limitations of TCP throughput are calculated. Equation (1) shows the Mathis equation [20], which sets the upper bound for the theoretical throughput of TCP with a small packet loss rate of less than 1%.

$$\text{Throughput} \le \frac{MSS}{RTT} \cdot \frac{C}{\sqrt{P}} \qquad (1)$$

The equation leverages the maximum segment size (MSS), the round trip time (RTT), the packet loss rate (P), and a constant, C, that incorporates a random or periodic loss model and an acknowledgement strategy. The maximum segment size derives from the Maximum Transmission Unit (MTU). For Internet applications, the common MTU is 1500 bytes; to calculate the MSS, the TCP overhead of 40 bytes for IP and TCP header data is subtracted from the 1500-byte MTU. Assuming an average random loss of 10−6 for P, with delayed acknowledgments, the value of the Mathis constant, C, is 0.93.

Figure 1 shows the theoretical bandwidth for TCP with a BER of 10−6 over various round trip times. To allow for higher rates over mediums with increasing RTTs, we configured network interfaces for jumbo frames with an MTU of 9000 bytes. Increasing the frame size reduces the interrupts processed by the CPU. In addition, the required overhead, the header-to-data ratio, decreases from 2.6% to 0.44%. With a typical geosynchronous round trip time of 500 ms, the maximum throughput of a 1460-byte and an 8960-byte MSS calculates to 21.72 Mbit/second and 133.325 Mbit/second, respectively. Thus, the expected upper limit of the bundle protocol using the TCP convergence layer is the theoretical maximum TCP throughput with a 9000-byte MTU.
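
As a sanity check on these numbers, the Mathis bound can be evaluated directly; the short sketch below (our own illustration, not code from the paper) reproduces the 21.72 and 133.3 Mbit/second values for 1460-byte and 8960-byte segments at a 500 ms RTT with P = 10−6 and C = 0.93.

```python
from math import sqrt

def mathis_throughput_mbps(mss_bytes, rtt_s, loss_rate, c=0.93):
    """Upper bound on TCP throughput (Mbit/s) from the Mathis equation (1)."""
    bits_per_segment = mss_bytes * 8
    return bits_per_segment * c / (rtt_s * sqrt(loss_rate)) / 1e6

if __name__ == "__main__":
    for mss in (1460, 8960):          # standard MTU vs. 9000-byte jumbo frames
        bound = mathis_throughput_mbps(mss, rtt_s=0.5, loss_rate=1e-6)
        print(f"MSS {mss} bytes, 500 ms RTT: {bound:.2f} Mbit/s")
    # Prints roughly 21.72 and 133.32 Mbit/s, matching the values above.
```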

Fig. 1. Theoretical Maximum Throughput (Mbit/sec) over RTT (ms)
SECTION IV

TESTBED DESIGN

After surveying previous DTN protocol testbeds, we chose certain characteristics from each study. We designed a centralized emulator similar to SCNT [17] to allow simple system configuration and control, but added an Ethernet switch for a more flexible and scalable system, shown in Figure 2. The testbed's centralized channel emulator allows any type of client device operating system to connect. Our testbed used the network emulation functionality in Linux, similar to SPICE [14], to natively emulate wide area network properties such as delay and loss, but our parameters could vary as a function of time. Last, we developed a testbed for demonstrating BP's maximum throughput, compatible with the three common DTN implementations (ION, DTN2, and IBR-DTN). This is similar to Pöttner's study [19], but with added wireless channel emulation features such as variable delay and BER modeled over an optical channel as a function of time.

Fig. 2. Testbed System Diagram

A. Architecture

The testbed's centralized network emulation software ran on a Concurrent ImaGen server with a 2.8 GHz quad-core Xeon 5600 running CentOS 6.2. The theoretical maximum throughput, shown in Figure 1 for 9000-byte MTUs and less than 500 ms RTT, requires gigabit bandwidth capacity for each channel link. Therefore, another requirement for the system was to support multiple clients and servers simultaneously. To allow gigabit links and multiple devices, we installed two four-port gigabit Ethernet network interface cards in the ImaGen server. We connected the eight Ethernet ports on the ImaGen server to a 24-port Cisco Catalyst 3560G switch.

The ports on the ImaGen server were channel bonded, as shown in Figure 3. Channel bonding, also known as link aggregation (LAG), is a technique in which multiple network interfaces are combined on a host computer for redundancy or increased throughput. Linux kernels now come equipped with a channel-bonding driver that supports several bonding modes. For increased throughput, we chose the balanced round-robin policy, in which packets are transmitted in sequential order from the first available slave interface to the last. Balanced round robin also provides load balancing and fault tolerance. All ports on the host machine were set to 9000-byte MTUs and promiscuous mode, which mandates that all traffic the port receives passes to the central processing unit, as opposed to only the frames addressed to the port.
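
For illustration, the sketch below configures an equivalent balance-rr bond with today's iproute2 commands (the paper's CentOS 6.2 system would more likely have used the bonding driver's ifcfg/sysfs interface); the interface names are hypothetical and only four of the eight ports are shown.

```python
import subprocess

def run(*cmd):
    """Run an iproute2 command, raising if it fails."""
    subprocess.run(cmd, check=True)

# Hypothetical interface names; the actual testbed bonded eight ports.
SLAVES = ["eth1", "eth2", "eth3", "eth4"]

# Create a balance-rr (round-robin) bond and enslave the physical ports.
run("ip", "link", "add", "bond0", "type", "bond", "mode", "balance-rr")
for dev in SLAVES:
    run("ip", "link", "set", dev, "down")
    run("ip", "link", "set", dev, "master", "bond0")
    # Jumbo frames and promiscuous mode, as described above.
    run("ip", "link", "set", dev, "mtu", "9000")
    run("ip", "link", "set", dev, "promisc", "on")
    run("ip", "link", "set", dev, "up")
run("ip", "link", "set", "bond0", "mtu", "9000")
run("ip", "link", "set", "bond0", "up")
```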

Fig. 3. Testbed Architecture

On the Cisco switch, channel bonding is configured through the Link Aggregation Control Protocol (LACP). Once we configured ports 1–8 on the Cisco switch for LACP, the testbed had eight gigabits of total bandwidth capacity. For clients, we used Dell PowerEdge 2950s with quad-core Xeon 5300 2.8 GHz CPUs. These clients connected to the testbed through ports 17–24 on the Cisco switch. The switch mapped the 16 ports as virtual LANs (VLANs). With VLANs, devices behave as if they connect through a single network segment. This allows for a flexible and fast testbed with up to 16 users and eight gigabits of bandwidth capacity.

B. Channel Emulation Software Parameters

There are three basic channels for communication between a flight terminal and a ground station. Table II summarizes the effects of the ground station channel, space channel, and payload channel. In the flight terminal's payload channel, BER is negligible, but delay and jitter change over time depending on the flight hardware. When the signal moves through the wireless space channel, BER changes over time with atmospheric changes for either RF or optical signals. For example, a geosynchronous (GEO) orbit fixes the delay at approximately 500 ms with small perturbations. Jitter for the space channel would be negligible. Similar to the payload channel, the ground station channel has negligible BER, but delay and jitter have small changes over time.

For our testbed's purpose of simulating a ground station to flight terminal channel, we concerned ourselves with emulating bit error rate, rate limit, delay, and jitter as functions of time. We define jitter, in our case, as a change in delay brought on by hardware processing at the transmitting and receiving terminals.

The channel emulator operates as a link layer bridge (not as an IP router). Intercepting data at the link layer requires no special IP addressing in the network layer; the source and destination devices behave as if they are directly connected. The Linux network emulator provides the basic emulation, and we used scripts to change the emulation parameters over time at a resolution of one millisecond.
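
A minimal sketch of how such a script might drive netem at 1 ms resolution is shown below (our own illustration, not the authors' code); the interface name, the example profile, and the mapping from BER to netem's per-packet corrupt percentage are assumptions.

```python
import subprocess
import time

DEV = "eth1"            # placeholder: bridge interface facing the destination
PKT_BITS = 9000 * 8     # jumbo frame size used to map BER to a per-packet rate

def corrupt_pct_from_ber(ber, pkt_bits=PKT_BITS):
    """Probability (in percent) that a packet contains at least one bit error."""
    return 100.0 * (1.0 - (1.0 - ber) ** pkt_bits)

# Hypothetical 1 ms resolution profile: (one-way delay in ms, BER) for one second.
profile = [(250.0, 1e-9) if t < 500 else (250.0, 1e-6) for t in range(1000)]

subprocess.run(["tc", "qdisc", "add", "dev", DEV, "root", "netem",
                "delay", "250ms"], check=True)

start = time.monotonic()
for i, (delay_ms, ber) in enumerate(profile):
    # Update the existing netem qdisc in place with the current delay and BER.
    subprocess.run(["tc", "qdisc", "change", "dev", DEV, "root", "netem",
                    "delay", f"{delay_ms}ms",
                    "corrupt", f"{corrupt_pct_from_ber(ber):.6f}%"], check=True)
    # Sleep until the next 1 ms tick of the profile.
    time.sleep(max(0.0, start + (i + 1) / 1000.0 - time.monotonic()))
```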

TABLE II. GROUND STATION CHANNEL, SPACE CHANNEL, AND PAYLOAD CHANNEL PARAMETERS

C. Modeling Optical Atmospheric Conditions

There are several dynamic effects on free-space optical channels through the atmosphere. Atmospheric scintillation is the first and fastest effect. Varying temperatures through the atmosphere, and therefore a varying index of refraction, cause the light to focus and defocus in random, time-varying ways. In this case, the effective received power can vary over a range of 20 dB, and the time scale of the variation is tens of milliseconds [21]. This effect never reaches the one-second range. Employing coding and interleaving helps mitigate this effect.

Another dynamic element is the pointing and tracking system on each of the terminals. For an optical link, the narrow beams require high-fidelity pointing of both telescopes. Even small perturbations in a payload, such as vibrations due to the reaction wheels or maneuvering of the solar panels, can result in mispointing, which causes the received optical power to dip. The time scale here is similar to that of scintillation, though in worst-case analyses there may be resonances that allow some mispointing effects to last for a period of a second. Coding and interleaving help here, but the longer-term resonances may overwhelm the interleaver's ability to whiten the effect. In addition, the spatial tracking loop can fail, in which case the telescope's control electronics perform re-acquisition, which may take tens of seconds [21]. If clock synchronization fails, recovery could require seconds, which is too long for coding and interleaving to handle the majority of it.

Longer-term outages will likely occur due to cloud blockages, for example. These likely occur on the time scale of minutes. Coding and interleaving are completely ineffective in these cases. This is where networking protocols, handover to alternate ground stations, and store-and-forward techniques have a significant impact.

Thus, for the testbed, we created models for cloud blockages. Our main concern was modeling longer-term outages for uplinks and downlinks. To start, we used a 2-state Markov model with a good state of 10–9 BER and a bad state of 10–6 BER. Later, we modeled BER from commercial optical hardware parameters in RSoft OptSim with 1 ms resolution. Turbulence was moderately strong, with a measured optical turbulence (Cn2) profile of 2x Clear 1 [21]. BER ranged from 10–9 to 10–2 over this time. The BER modeling included FEC and interleaving corrections.
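
The sketch below illustrates such a two-state Markov (Gilbert-Elliott style) BER generator; the good and bad BER values come from the text, while the per-millisecond transition probabilities are invented placeholders, since the paper does not specify them.

```python
import random

GOOD_BER, BAD_BER = 1e-9, 1e-6
P_GOOD_TO_BAD = 0.001   # assumed chance per 1 ms step of entering an outage
P_BAD_TO_GOOD = 0.01    # assumed chance per 1 ms step of recovering

def ber_profile(steps_ms, seed=0):
    """Generate a per-millisecond BER trace from the two-state Markov chain."""
    rng = random.Random(seed)
    state_good = True
    trace = []
    for _ in range(steps_ms):
        trace.append(GOOD_BER if state_good else BAD_BER)
        if state_good and rng.random() < P_GOOD_TO_BAD:
            state_good = False
        elif not state_good and rng.random() < P_BAD_TO_GOOD:
            state_good = True
    return trace

if __name__ == "__main__":
    trace = ber_profile(60_000)   # one minute of channel at 1 ms resolution
    degraded = sum(b == BAD_BER for b in trace) / len(trace)
    print(f"Fraction of time in the degraded state: {degraded:.2%}")
```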

D. Experiment Setup

We installed the ION implementation on a client machine and used the bpsendfile executable to send files through the channel emulator testbed to another machine. BP/TCPCL over TCP/IP sent files of 10 MB, 100 MB, and 1 GB, transferring bundles of 1 MB, 10 MB, and 100 MB respectively, so each file transmission sent ten bundles. The channel emulator intercepted, queued, and corrupted bits according to the BER profile, then passed the data through to the destination. The channel emulator created a relayed optical communication scenario: a ground station uplinked data to a flight terminal, and the flight terminal downlinked the data back to the ground station.

We increased the round trip time to a maximum of one second to compare varying bundle sizes. We calculated goodput at the destination node's application layer using Wireshark, a common network protocol analyzer. As a benchmark, we sent standard TCP/IP packets through the channel emulator using iperf, a tool for measuring maximum TCP and UDP bandwidth performance. We then compared the DTN implementation to TCP/IP alone. In another experiment, we compared BP/TCPCL over TCP/IP and BP/LTP over UDP/IP, with 1 MB bundle sizes, to TCP/IP. In this comparison, we increased the RTT to a maximum of 16 seconds.
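
For reference, the TCP/IP baseline can be reproduced with a standard iperf client/server pair pointed through the emulator; the sketch below is our own illustration, and the server address, test duration, and window size are placeholders.

```python
import subprocess

SERVER = "192.168.10.2"   # placeholder address of the receiving client

def run_server():
    """Start an iperf TCP server on the destination machine."""
    return subprocess.Popen(["iperf", "-s"])

def run_client(duration_s=60, window="4M"):
    """Send TCP traffic through the channel emulator and report bandwidth."""
    result = subprocess.run(
        ["iperf", "-c", SERVER, "-t", str(duration_s), "-w", window],
        capture_output=True, text=True, check=True)
    print(result.stdout)

if __name__ == "__main__":
    run_client()
```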

SECTION V

RESULTS

A. Network Goodput for Varying Bundle Size

Figure 4 presents the mean goodput observed at the destination as the RTT increased. Results showed that BP/TCPCL/TCP with a 100 MB bundle size surpassed TCP/IP at 25 ms of delay, with a goodput of 445.3 Mbit/second compared to TCP's 432 Mbit/second. At 50 ms of delay, TCP/IP goodput fell to 193 Mbit/second while BP with a bundle size of 10 MB transmitted at 340.9 Mbit/second. Then at 200 ms, 1 MB bundle sizes surpassed TCP/IP with a goodput of 61.1 Mbit/second compared to TCP's 47.7 Mbit/second. At a full second of RTT, TCP dropped to 1.46 Mbit/second, while the 1, 10, and 100 MB bundle sizes transferred at 18.8, 27.4, and 37.1 Mbit/second respectively. The measured goodput stayed within 10 Mbit/second between bundle sizes of 1 MB and 10 MB after 300 ms RTT, showing little difference in goodput for long RTTs. 100 MB bundles had more significant goodput increases over longer RTTs and came the closest to the calculated theoretical maximum bandwidth.

Fig. 4. BP/TCPCL with varying bundle sizes vs. TCP/IP

B. BP/LTP for High Latency

Figure 5 demonstrates the performance of BP/LTP/UDP, BP/TCPCL/TCP, and TCP/IP for round trip times increasing up to 16 seconds. TCP and BP/TCPCL dropped below one megabit per second of goodput at 1.25 and 5 seconds of delay, respectively. BP/LTP surpassed BP/TCPCL in goodput when the delay was set to one second or longer. BP/LTP stayed above eight Mbit/second of mean goodput up to the maximum tested delay.

SECTION VI

CONCLUSION AND FUTURE WORK

In conclusion, we designed a testbed that can measure the performance of DTN protocols through an optical, gigabit-bandwidth channel. The testbed contained a centralized channel emulator that created optical and RF propagation delay, asymmetric channel rates, and bit errors over the ground station channel, space channel, and payload channel. The channel emulator modeled delay and bit errors as a function of time on separate channels for uplinks and downlinks. To experiment, we measured the maximum goodput of various DTN protocols and bundle sizes.

As a result, the testbed demonstrated that using the bundle protocol on top of TCP surpassed the goodput of standard TCP/IP when we increased the RTT beyond 200 ms. Large bundle sizes of 10 MB and 100 MB transmitted most efficiently and came closest to the theoretical TCP bandwidth. The theoretical calculation from the Mathis equation did not account for retransmission timeouts, which is one of the main reasons the results diverged from theory. We observed that every DTN implementation, DTN2, ION, and IBR-DTN, would use most of the CPU resources when transmitting large bundles at high rates. The channel emulator machine was also CPU constrained when creating long delays at high rates.

Fig. 5. TCP/IP, BP/TCPCL, and BP/LTP goodput over added delay ranging from 0 to 16 seconds

Future work includes lowering the required processing speeds of the DTN implementations, because spacecraft have tighter CPU constraints than the DTN testbed's hardware. Pöttner's study [19] had similar issues with DTN implementations, but showed that configuring memory-based as opposed to disk-based storage improved all three implementations' performance. Incorporating LTP's Datagram Retransmission (DGR), as used in Schoolcraft's implementation [16], could also reduce the need for CPU resources.

ACKNOWLEDGMENT

This work was supported by a NASA Office of the Chief Technologist's Space Technology Research Fellowship (NSTRF) grant number NNX11AM73H. We would like to thank Dave Israel, Jane Marquart, Greg Menke, and Faith Davis at NASA Goddard Space Flight Center for providing expertise for the testbed development.

References

1. "Rationale, Scenarios, and Requirements for DTN in Space, " Report Concerning Space Data System Standard, CCSDS 743. 0-G-1. Green Book. Issue 1. Washington, D. C., August 2012. [Online]. Available: public. ccsds. org/publications/archive/734x0g1e1.PDF

2. P. Muri and J. McNair, "A Survey of Communication Sub-systems for Intersatellite Linked Systems and CubeSat Missions, " Journal of Communications, vol. 7, no. 4, 2012. [Online]. Available: ojs. academypublisher.com/index. php/jcm/article/view/jcm0704290308

3. E. A. Willis, "Downlink synchronization for the lunar laser communications demonstration, " in Space Optical Systems and Applications (ICSOS), 2011 International Conference on, may 2011, pp. 83-87.

4. M. Demmer, J. Ott, and S. Perreault, "Delay Tolerant Networking TCP Convergence Layer Protocol, " IRTF Delay Tolerant Networking Research Group, no. 4, August 2012. [Online]. Available: tools. ietf. org/html/draft-irtf-dtnrg-tcp-clayer-04

5. H. Kruse and S. Ostermann, "UDP Convergence Layers for the DTN Bundle and LTP Protocols, " IRTF Delay Tolerant Networking Research Group, no. 1, May 2009. [Online]. Available: tools. ietf. org/html/draftirtf-dtnrg-udp- clayer-00

6. M. Ramadas, S. Burleight, and S. Farrell, "Licklider Transmission Protocol-Specification, " IRTF Delay Tolerant Networking Research Group, no. 10, June 2008. [Online]. Available: tools. ietf. org/html/draftirtf-dtnrg- ltp-10

7. M. Demmer, "The DTN reference Implmentation, " presentation at the IETF DTNRG Meeting, March 2005. [Online]. Available: http://www.dtnrg. org/docs/presentations/IETF62/dtn-implietf-mar05-demmer.PDF

8. S. Burleigh, "Interplanetary Overlay Network: An Implementation of the DTN Bundle Protocol, " in Consumer Communications and Networking Conference, 2007. CCNC 2007. 4th IEEE, jan. 2007, pp. 222-226.

9. M. Doering, S. Lahde, J. Morgenroth, and L. Wolf, "IBR-DTN: an efficient implementation for embedded systems, " in Proceedings of the third ACM workshop on Challenged networks, ser. CHANTS '08. New York, NY, USA: ACM, 2008, pp. 117-120. [Online]. Available: http://doi. acm. org/10. 1145/1409985. 1410008

10. "ns-3, " January 2013. [Online]. Available: www.nsnam. org

11. "OMNeT++ Network Simulation Framework, " January 2013. [Online]. Available: www.omnetpp. org

12. A. Ker̈anen, J. Ott, and T. K̈arkk̈ainen, "The ONE simulator for DTN protocol evaluation, " in Proceedings of the 2nd International Conference on Simulation Tools and Techniques, ser. Simutools '09. ICST, Brussels, Belgium, Belgium: ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering), 2009, pp. 55:1-55:10. [Online]. Available: http://dx. doi. org/10. 4108/ICST. SIMUTOOLS2009. 5674

13. R. Wang, S. Burleigh, P. Parikh, C.-J. Lin, and B. Sun, "Licklider Transmission Protocol (LTP)-Based DTN for Cislunar Communications, " Networking, IEEE/ACM Transactions on, vol. 19, no. 2, pp. 359-368, April 2011.

14. D. Koutsogiannis, S. Diamantopoulos, G. Papastergiou, I. Komnios, A. Aggelis, and N. Peccia, "Experiences from architecting a DTN testbed, " Journal of Internet Engineering, vol. 2, no. 1, pp. 219-229, December 2009.

15. "DtnBone/grc-dtnbone-Delay Tolerant Networking Research Group (DTNRG), " October 2012. [Online]. Available: http://www.dtnrg. org/wiki/DtnBone

16. J. Schoolcraft and K. Wilson, "Experimental characterization of space optical communications with disruption-tolerant network protocols, " in Space Optical Systems and Applications (ICSOS), 2011 International Conference on, may 2011, pp. 248-252.

17. S. Horan and R. Wang, "Design of a space channel simulator using virtual instrumentation software, " Instrumentation and Measurement, IEEE Transactions on, vol. 51, no. 5, pp. 912-916, oct 2002.

18. R. Beuran, S. Miwa, and Y. Shinoda, "Performance evaluation of dtn implementations on a large-scale network emulation testbed, " in Proceedings of the seventh ACM international workshop on Challenged networks, ser. CHANTS '12. New York, NY, USA: ACM, 2012, pp. 39-42. [Online]. Available: http://doi.acm.org/10.1145/2348616. 2348624

19. W.-B. Pöttner, J. Morgenroth, S. Schildt, and L. Wolf, "Performance comparison of dtn bundle protocol implementations, " in Proceedings of the 6th ACM workshop on Challenged networks, ser. CHANTS '11. New York, NY, USA: ACM, 2011, pp. 61-64. [Online]. Available: http://doi.acm.org/ 10.1145/2030652. 2030670

20. M. Mathis, J. Semke, J. Mahdavi, and T. Ott, "The macroscopic behavior of the tcp congestion avoidance algorithm, " SIGCOMM Comput.Commun. Rev., vol. 27, no. 3, pp. 67-82, Jul. 1997. [Online]. Available: http://doi.acm.org/10.1145/263932. 264023

21. R. Quaale, B. Hindman, B. Engberg, and P. Collier, "Mitigating environmental effects on free-space laser communications, " in Aerospace Conference, 2005 IEEE, march 2005, pp. 1-6.

Authors

Paul Muri

Janise McNair