Spines

Documentation

Spines is a messaging infrastructure that allows multi-hop communication (unicast, multicast, and anycast) and the deployment of virtual topologies on real networks. Spines instantiates a virtual router node on every participating computer and creates virtual links between these nodes. Packets are routed automatically through the virtual network topology. Many Spines topologies can coexist in the same physical network, and can even overlap on some of the nodes or links.

Client applications connect to one of the router nodes (usually the closest) and send and receive messages through that node. Spines is responsible for forwarding application messages toward the nodes to which the destination applications are connected. If multiple applications intend to communicate using Spines, they must connect to nodes of the same Spines network.

Spines runs a software daemon on each of the router nodes. The daemon acts both as a router, forwarding packets toward other nodes, and as a server, providing network services to client applications. Clients use a library to connect to a Spines daemon and send and receive messages. The API is almost identical to the Unix socket interface, so virtually any socket-based application can be easily adapted to work with Spines. The Spines API provides TCP- and UDP-like functions with similar semantics for reliable and best-effort communication, respectively.

A client can communicate with a daemon via TCP, UDP, or IPC (Inter-Process Communication). IPC uses Unix domain sockets (not available on Windows-based systems) and was added to Spines in version 5.1.

For IPC communication, the Spines daemon binds to two file paths, one for the control channel and one for the data channel. The control channel binds to the default or user-specified path (e.g., /tmp/spines8100), and the data channel binds to the control channel path with a "data" suffix (e.g., /tmp/spines8100data). Clients only need to specify the (normal) control channel path; the data channel path is handled automatically. Normally, the Spines daemon unlinks and cleans up the paths it creates. However, after a hard daemon crash that is not handled, the files must be cleaned up manually (e.g., by removing /tmp/spines8100 and /tmp/spines8100data).
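
As a hedged sketch of what selecting a daemon over IPC could look like, assuming the daemon address may be supplied to spines_socket() (described below) as a sockaddr_un carrying the control channel path; the actual convention is defined in the spines_socket() specification:

    /* Sketch: select a daemon over IPC by passing the control channel path.
     * Assumptions: spines_socket() accepts the daemon address as a
     * sockaddr_un, and the data channel path (/tmp/spines8100data) is
     * derived internally by the library. Requires <sys/un.h>. */
    struct sockaddr_un daemon_addr;

    memset(&daemon_addr, 0, sizeof(daemon_addr));
    daemon_addr.sun_family = AF_UNIX;
    strncpy(daemon_addr.sun_path, "/tmp/spines8100",
            sizeof(daemon_addr.sun_path) - 1);

    int sk = spines_socket(AF_INET, SOCK_DGRAM, 0,
                           (struct sockaddr *)&daemon_addr);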

A spines_socket() call returns a socket, which is actually a connection to the daemon. The application can use that socket to bind, listen, connect, send, and receive, using Spines library calls (e.g., spines_send() is the equivalent of the regular send() call, spines_recv() is the equivalent of recv(), etc.).
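
The following minimal sketch illustrates this flow for UDP-like, best-effort communication. It assumes the calls mirror their Unix counterparts, that spines_socket() additionally takes the address of the daemon to connect to (with NULL assumed to select a default local daemon), that protocol 0 selects best-effort service, and that the library header is named spines_lib.h; consult the spines_socket() specification for the actual constants and signatures.

    /* Sketch: send one best-effort datagram through a local Spines daemon.
     * Assumptions (see above): Unix-like signatures, NULL = local daemon,
     * protocol 0 = best-effort, header name spines_lib.h. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include "spines_lib.h"

    int main(void)
    {
        struct sockaddr_in dest;
        const char msg[] = "hello";

        int sk = spines_socket(AF_INET, SOCK_DGRAM, 0, NULL);
        if (sk < 0) {
            perror("spines_socket");
            return 1;
        }

        /* Destination = (logical ID, virtual port): the IPv4 address of the
         * receiver's daemon and the virtual port the receiver bound to. */
        memset(&dest, 0, sizeof(dest));
        dest.sin_family      = AF_INET;
        dest.sin_port        = htons(5555);                /* example virtual port */
        dest.sin_addr.s_addr = inet_addr("192.168.1.20");  /* example daemon IP */

        spines_sendto(sk, msg, strlen(msg), 0,
                      (struct sockaddr *)&dest, sizeof(dest));
        spines_close(sk);
        return 0;
    }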

When connecting to a daemon using spines_socket(), the client sets several options (as part of the protocol parameter) that determine the type of connection. For example, the client can specify the link protocol used to send its messages between daemons in the Spines network, the dissemination semantics to use, and the session semantics to apply to its messages (currently, this last option only applies to intrusion-tolerant reliable communication). Full details of these options can be found in the spines_socket() specification.
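
As an illustration of the shape of that parameter, a sketch assuming the protocol argument is composed by OR-ing one option from each class; all three macro names below are hypothetical placeholders, not the library's actual identifiers, which are listed in the spines_socket() specification:

    /* Sketch: compose the protocol argument from one option per class.
     * All three macro names are hypothetical stand-ins. */
    int protocol = SPINES_LINK_RELIABLE       /* link protocol between daemons */
                 | SPINES_DISSEM_UNICAST      /* dissemination semantics       */
                 | SPINES_SESSION_RELIABLE;   /* session semantics             */

    int sk = spines_socket(AF_INET, SOCK_DGRAM, protocol, NULL);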

Each application is uniquely identified in the topology by the pair (Logical ID, Virtual Port). Currently, the logical ID of an application is the IPv4 address of the daemon it is connected to, and the virtual port is an identifier at that daemon. The virtual port of an application is either assigned automatically by the server node when the application connects, or set explicitly by the application in a call similar to the Unix bind(). Both reliable and best-effort communication between two applications connected to the Spines network use the IP address and virtual port described above, in a way similar to TCP and UDP. Note that the virtual port of an application is only defined in relation to a Spines node, and is not related to an operating system port on the computer where the application or the daemon is running.
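
A receiver that wants a well-known virtual port can bind it explicitly, in analogy to bind(). A hedged sketch under the same assumptions as the sender sketch above, with 5555 as an arbitrary example port:

    /* Sketch: a receiver binds an explicit virtual port and waits for a
     * datagram. The virtual port lives at the Spines daemon, not in the OS. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include "spines_lib.h"

    int main(void)
    {
        struct sockaddr_in my_addr, from;
        socklen_t from_len = sizeof(from);
        char buf[1400];

        int sk = spines_socket(AF_INET, SOCK_DGRAM, 0, NULL);
        if (sk < 0) {
            perror("spines_socket");
            return 1;
        }

        memset(&my_addr, 0, sizeof(my_addr));
        my_addr.sin_family = AF_INET;
        my_addr.sin_port   = htons(5555);   /* example virtual port */

        spines_bind(sk, (struct sockaddr *)&my_addr, sizeof(my_addr));

        int n = spines_recvfrom(sk, buf, sizeof(buf), 0,
                                (struct sockaddr *)&from, &from_len);
        if (n > 0)
            printf("received %d bytes\n", n);

        spines_close(sk);
        return 0;
    }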

A multicast group is defined as a class D multicast address, and an anycast group as a class E address. If an application intends to join a group, it informs its server (router node) with a spines_setsockopt() call. From then on, the server passes to the application the messages sent to that group. Leaving a group follows a similar procedure. To multicast/anycast a message to a group, an application simply sends the message (through its server) to the multicast/anycast address representing the group. The Spines network routes the multicast message according to the current membership of the group it is sent to. Applications can join, leave, and send and receive messages to and from multicast groups at any time. An application can join multiple groups, and thus be a member of more than one group at the same time. An application does not need to be a member of a group in order to send messages to that group.
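
A hedged sketch of joining a group and sending to it, reusing sk, dest, and msg from the sketches above. SPINES_ADD_MEMBERSHIP is our assumed option name, modeled on the Unix IP_ADD_MEMBERSHIP idiom; the actual constant is given in the spines_setsockopt() specification:

    /* Sketch: join a multicast group, then send to it.
     * SPINES_ADD_MEMBERSHIP is an assumed name modeled on IP_ADD_MEMBERSHIP;
     * struct ip_mreq comes from <netinet/in.h>. */
    struct ip_mreq mreq;

    memset(&mreq, 0, sizeof(mreq));
    mreq.imr_multiaddr.s_addr = inet_addr("225.1.1.1");  /* class D group */
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);

    spines_setsockopt(sk, IPPROTO_IP, SPINES_ADD_MEMBERSHIP,
                      (void *)&mreq, sizeof(mreq));

    /* Membership is not required for sending: any application may
     * address a message to the group. */
    dest.sin_addr.s_addr = inet_addr("225.1.1.1");
    spines_sendto(sk, msg, strlen(msg), 0,
                  (struct sockaddr *)&dest, sizeof(dest));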

Spines can also manage kernel routing tables by updating native kernel routes with those determined by the overlay topology and the chosen metric. In this case, Spines acts as an overlay routing daemon, and regular user packets (without any knowledge of Spines) are routed seamlessly at the kernel level between overlay nodes. Because data packets are not processed by Spines, they are not copied to user space, and the routing overhead is therefore substantially reduced. This reduction in CPU consumption can greatly benefit low-cost routers, such as the Linksys WRT54G. Note that, in this mode, certain Spines protocols cannot be activated, as packets are routed by the underlying kernel services. In addition, kernel-routing services are available in Spines to support anypath and multipath routing based on group membership.

Documentation about the new intrusion tolerance capabilities added in version 5.0 (and refined in versions 5.1 and 5.2) can be found here.

The current version of Spines has been tested on Linux x86 systems.

The Spines Daemon

spines [-p spines_port] [-l logical_id] [-I local_address] [[-a destination]*]
       [[-d discovery_address]*] [-w Route_Type] [-tf] [-sf] [-m] [-x time_to_live]
       [-U] [-W] [-k level] [-lf log_file] [-ud unix_domain_path] [-pc]
       [-rl] [-c config_file]
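
For example, an illustrative invocation (the port, logical ID, and addresses below are placeholders):

    spines -p 8100 -l 1 -I 192.168.1.10 -a 192.168.1.20 -a 192.168.1.30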

Setting topology parameters

setlink bandwidth(kbps) latency(ms) loss_rate(%) burst_rate(%) source_ip destination_ip [port]
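
For example, to set the link from 192.168.1.10 to 192.168.1.20 to 1000 kbps bandwidth, 20 ms latency, 1% loss rate, and 0% burst rate (all values are placeholders):

    setlink 1000 20 1 0 192.168.1.10 192.168.1.20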

The Spines API

spines_init()
spines_socket()
spines_close()
spines_bind()
spines_sendto()
spines_recvfrom()
spines_connect()
spines_send()
spines_recv()
spines_listen()
spines_accept()
spines_setsockopt()
spines_ioctl()