                                ==Phrack Inc.==

                     Volume Three, Issue 25, File 4 of 11

              =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
              =-=                                             =-=
              =-=                  S P A N                    =-=
              =-=                                             =-=
              =-=        Space Physics Analysis Network       =-=
              =-=                                             =-=
              =-=      Brought To You by Knight Lightning     =-=
              =-=                                             =-=
              =-=               March 15, 1989                =-=
              =-=                                             =-=
              =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

Preface
~~~~~~~
In the spirit of the Future Transcendent Saga, I continue to bring forth
information about the wide area networks.  The information presented in this
file is based primarily on research.  I do not have direct access to SPAN
other than through TCP/IP links, but this file should provide you with
general information with which to properly use the Space Physics Analysis
Network.


Introduction
~~~~~~~~~~~~
The Space Physics Analysis Network (SPAN) has rapidly evolved into a broadly
based network for cooperative, interdisciplinary and correlative space and
Earth science data analysis that is spaceflight mission independent.  The
disciplines originally supported by SPAN were Solar-Terrestrial and
Interplanetary Physics.  This support has since been expanded to include
Planetary, Astrophysics, Atmospherics, Oceans, Climate, and Earth Science.

SPAN utilizes up-to-date hardware and software for computer-to-computer
communications, allowing binary file transfer, mail, and remote log-on
capability to over 1200 space and Earth science computer systems in the
United States, Europe, and Canada.  SPAN has been reconfigured to take
maximum advantage of NASA's Program Support Communication Network (PSCN)
high speed backbone highway that has been established between its field
centers.  In addition to the computer-to-computer communications, which
utilize DECnet, SPAN provides gateways to the NASA Packet Switched System
(NPSS), GTE/Telenet, JANET, ARPANET, BITNET, and CSNET.  A major extension
for SPAN using the TCP/IP suite of protocols has also been developed.

This file provides basic information on SPAN, its history, architecture, and
present guidelines for its use.  It is anticipated that SPAN will continue
to grow very rapidly over the next few years.  Several existing wide-area
DECnet networks have joined with SPAN to provide a uniform internetwork
structure, and more will follow.


History Of The SPAN and the Data Systems Users Working Group (DSUWG)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A considerable evolution has occurred in the past two decades in the way
scientific research in all disciplines is done.  This is particularly true
of NASA, where early research was centered around exploratory missions in
which measurements from individual scientific instruments could be
meaningfully employed to advance the state of knowledge.  As these
scientific disciplines have progressed, a much more profound and
interrelated set of questions is being posed by researchers.  The result is
that present-day investigations are generally much more complex.  For
example, within the space science community, large volumes of data are
acquired from multiple sensors on individual spacecraft or ground-based
systems and, quite often, data are needed from many institutions scattered
across the country in order to address particular physical problems.  It is
clear that scientific research during the late 1980s and beyond will be
devoted to intense multi-disciplinary studies aimed at exploring very
complex physical questions.  In general, the need for researchers to
exchange data and technical information in a timely and interactive way has
been increasing.

The problems of data exchange are exacerbated by the lack of standards for
scientific data bases.  The net result is that, at present, most researchers
recognize the value of multi-disciplinary studies, but the cost in time and
effort is devastating to their research efforts.  This trend is antithetical
to the needs of the NASA research community.  SPAN is only one of many
research networks that are just beginning to fill a need for access to
remote capabilities that are not obtainable locally.

In May of 1980 the Space Plasma Physics Branch of the Office of Space
Science of NASA Headquarters funded a project at Marshall Space Flight
Center (MSFC) to investigate ways of performing correlative space plasma
research nationwide on a daily basis.  As a first step, a user group called
the Data Systems Users Working Group (DSUWG) was formed to provide the space
science community with interaction and direction in the project.  After the
first meeting of the DSUWG in September 1980, it was decided that the
approach would be to design, build, and operate a spacecraft mission
independent science network as a test case.  In addition, the system would
be designed to use existing data analysis computer systems at space physics
institutions and to take full advantage of "off-the-shelf" software and
hardware.

The Space Physics Analysis Network (SPAN) first became operational in
December 1981 with three major nodes:

     o  University of Texas at Dallas
     o  Utah State University
     o  MSFC

Since that time it has grown rapidly.  Once operational, SPAN immediately
started to facilitate space-data analysis by providing electronic mail,
document browsing, access to distributed data bases, facilities for numeric
and graphic data transfer, access to Class VI machines, and entry to
gateways for other networks.

The DSUWG continues to provide guidance for SPAN growth and seeks to
identify, promote, and implement appropriate standards for the efficient
management and exchange of data, related information, and graphics.  All
SPAN member organizations are expected to participate in the DSUWG.  The
basic composition of the DSUWG is a representative scientist and a computer
systems manager (who has the networking responsibility) at each of the
member institutions.  DSUWG meetings are held regularly at approximately
nine-month intervals.

The DSUWG is structured along lines conducive to addressing major
outstanding problems of scientific data exchange and correlation.  There is
a chairman for each subgroup to coordinate and focus the group's activities
and a project scientist to oversee the implementation of the DSUWG
recommendations and policies.  The working group itself is divided into
several subgroups which address issues of policy, networking and hardware,
software and graphics standards, and data base standards.

The DSUWG is a dynamic, evolving organization.  We expect members to move in
(or out) as appropriate to their active involvement in data related issues.
We also realize that at present SPAN and the DSUWG are dealing with only a
limited portion of the whole spectrum of problems facing the NASA research
community.  As present problems are solved, as the network evolves, and as
new issues arise, we look to the DSUWG to reflect these changes in its
makeup, structure, and focus.

SPAN is currently managed by the National Space Science Data Center (NSSDC)
located at Goddard Space Flight Center (GSFC).  All SPAN physical circuits
are funded by the Communication and Data Systems Division at NASA
Headquarters.  Personnel at the NSSDC facility, at the NASA SPAN centers,
and at the remote institutions work in unison to manage and maintain the
network.


Network Configuration and Evolution
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The initial topology for SPAN was a modified star in which all communication
with the remote institutions came to a major central switching or message
routing node at MSFC.  This topology served the network well until many new
nodes were added and more scientists became accustomed to using the network.
As data rate demands on the network increased, it became apparent that a new
topology using lines with higher data rates was needed.  Toward this end, a
new communication architecture for SPAN was constructed and implemented.

The current structure of SPAN in the United States is an interconnected
four-star, mesh topology.  Each star has, as its nucleus, a SPAN routing
center.  The routing centers are located at GSFC, MSFC, Jet Propulsion
Laboratory (JPL), and Johnson Space Center (JSC).  The routing centers are
linked together by a set of redundant 56 kbps backbone circuits.  Tail
circuits, at speeds of 9.6 kbps (minimum line speed), are connected to each
routing center and thus into the SPAN backbone.

Most remote institutions have local area networks that allow a number of
different machines to be connected to SPAN.  Regardless of a machine's
position in the network, all computers on SPAN are treated as logical
equals.  The main goal of the new SPAN architecture is for a node located
across the country, through two routing centers, to be as transparently
accessible as a SPAN node sharing the same machine room with the originating
system.  This ease of use and network transparency is one of SPAN's greatest
assets.

The new configuration allows for rapid expansion of the network via the
addition of new tail circuits, upgrades to existing tail circuits, and
dynamic dialing of higher data-rate backbone circuits.  Implementation of
this new configuration began in July 1986, and the new topology was
completed in November 1986, although new circuits are being added on a
continuing basis.  It is expected that a fifth routing center will be
established at Ames Research Center.

Nearly all of the machines on SPAN are linked together using the
commercially available software package DECnet.  DECnet allows suitably
configured computers (IBM-PCs and mainframes, SUN/UNIX workstations,
DEC/PROs, PDPs, VAXs, and DECSYSTEMs) to communicate across a variety of
media (fiber optics, coax, leased telephone lines, etc.) utilizing a variety
of low level protocols (DDCMP, Ethernet, X.25).  There are also several
institutions that are connected through Janus hosts, which run more than
one protocol.

SPAN links computers together and touches several other networks in the
United States, Europe, and Canada that are used for data analysis on NASA
spaceflight missions and other NASA related projects.  At this time, well
over 1200 computers are accessible through SPAN.

The joining of these previously separate wide area DECnet networks has been
accomplished through the unprecedented, successful cooperation of their
network managements.  For example, the International High Energy Physics
Network (HEPNET), the Canadian Data Analysis Network (DAN), and the Texas
University Network (TEXNET) now have nonconflicting network addresses.
Every node on each of these networks is as accessible to SPAN users as any
other SPAN node.  The mutual cooperation of these WANs has given enhanced
capabilities to all.

There are several capabilities and features under development that make SPAN
unique within the NASA science community.  The SPAN system provides remote
users with access to science data bases and brings scientists throughout the
country together in a common working environment.  Unlike past NASA mission
networks, where the remote sites had only remote terminals (supporting one
person at the remote site at a time), SPAN supports many users
simultaneously at each remote node through computer-to-remote computer
communications software.  Users at their institutions can participate in a
number of network functions involving other remote computer facilities.
Scientific papers, data, and graphics files can easily be transferred
between network nodes.  This significantly reduces the time it takes to
perform correlative work when authors are located across the country or
ocean.  This file serves only as an introduction to SPAN's network wide
capabilities; more advanced users are referred to the DEC DECnet User's
Manual.

SPAN will continue to be used as a test case between NASA science
investigators with the intent of exploring and employing modern computer and
communication technology as a tool for doing NASA science research.  This
can be accomplished because SPAN is not a project dependent system that
requires a static hardware and software configuration for the duration of a
mission.  SPAN has provided a quick reaction capability for several NASA and
ESA missions.  Each of these missions needed to rapidly move near real-time
ground and spacecraft observations to a variety of destinations for analysis
and mission planning.  Because of SPAN's great success, new NASA spaceflight
missions are seriously looking into creating networks with similar
capabilities that are internetworked with SPAN.

Within the next few years, new developments in software and hardware will be
implemented on SPAN that will continue to aid NASA science research.  It is
anticipated that SPAN will greatly improve its access to gateways into
Europe and other locations throughout the world.  As a natural evolution,
SPAN will migrate toward the International Standards Organization's (ISO)
Open Systems Interconnect (OSI) protocol as the software becomes available.
It is expected that the ISO/OSI protocol will greatly enhance SPAN and
increase the number of heterogeneous computer systems accessible.


Security And Conduct On The Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Misconduct is defined as:

     1. Any unauthorized access or use of computers on the network,
     2. Attempts to defeat computer security systems (e.g., violating a
        captive account),
     3. Repeated login failures to computers or privileged accounts which
        the user is not authorized to use,
     4. Massive file transfers from a given site without prior consent and
        coordination with the appropriate SPAN routing centers.

The network is monitored very closely, and it is relatively simple to spot
an attempted break-in and then track down the source.  When a violation is
found, the matter will be reported to the DSUWG steering committee and the
SPAN line will be in immediate danger of being disconnected.  If the
situation cannot be resolved to the satisfaction of both the DSUWG steering
committee and network management, the SPAN line to the offending site will
be reviewed for the possibility of permanent disconnection.  In short, NASA
pays for the communications lines and will not tolerate misconduct on the
network.


SPAN Network Information Center (SPAN-NIC)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The SPAN-NIC is located at the National Space Science Data Center in
Greenbelt, Maryland.  The purpose of the SPAN-NIC is to provide general user
services and technical support to SPAN users via telephone, electronic mail,
and postal mail.

As SPAN has grown exponentially over recent years, it was realized that a
central organization had to be developed to provide users with technical
assistance to better utilize the resources that the network provides.  This
is accomplished by maintaining and distributing relevant technical
documents, providing user assistance on DECnet related questions, monitoring
traffic on the network, and maintaining an online data base of SPAN node
information.  More specific information on becoming a SPAN site, beyond that
provided in this document, can also be obtained through SPAN-NIC.

The SPAN-NIC uses a VAX 8650 running VMS as its host computer.  Users
wishing to use the online information services can use the account with the
username SPAN_NIC.  Remote logins are possible via SET HOST from SPAN, via
TELNET from the ARPANET, and by other procedures detailed later.

     SPAN-NIC DECnet host address:        NSSDCA or 6.133

     SPAN-NIC ARPANET host address:       NSSDC.ARPA or 128.183.10.4

     SPAN-NIC GTE/TELENET DTE number:     311032107035

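As an aside, the dotted DECnet host address above (6.133) is just the
area.node rendering of a single 16-bit DECnet Phase IV node address, with
the area number in the high 6 bits and the node number in the low 10.  A
short Python sketch of that packing (the function names are our own, not
part of any DEC software):

```python
def pack_decnet(area, node):
    # DECnet Phase IV: area 1-63 in the high 6 bits, node number
    # 1-1023 in the low 10 bits of a 16-bit node address.
    if not (1 <= area <= 63 and 1 <= node <= 1023):
        raise ValueError("not a valid DECnet Phase IV address")
    return (area << 10) | node

def unpack_decnet(address):
    # Inverse operation: recover the dotted area.node form.
    return address >> 10, address & 0x3FF

# SPAN-NIC's DECnet address 6.133 as a single number:
print(pack_decnet(6, 133))      # 6277
print(unpack_decnet(6277))      # (6, 133)
```

This is why some utilities report the same node as 6.133 and others as
node number 6277.
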
An alternative to remote login is to access the online text files that are
available.  These text files reside in a directory that is pointed to by the
logical name "SPAN_NIC:".  Example commands for listing this directory
follow:

     From SPAN:  $ DIRECTORY NSSDCA::SPAN_NIC:
     From ARPA:  FTP> ls SPAN_NIC:

The available files and a synopsis of their contents can be found in the
file "SPAN_NIC:SPAN_INDEX.TXT".  Once a file is identified, it can be
transferred to the remote host using the VMS COPY command or the FTP GET
command.  It is important to note that this capability will be growing
significantly, not only to catch up to the current SPAN configuration but
also to keep current with its growth.


DECnet Primer
~~~~~~~~~~~~~
The purpose of the SPAN is to support communications between users on
network nodes.  This includes data access and exchange, electronic mail
communication, and sharing of resources among members of the space science
community.

Communication between nodes on the SPAN is accomplished by means of DECnet
software.  DECnet software creates and maintains logical links between
network nodes with different or similar operating systems.  The operating
systems currently in use on SPAN are VAX/VMS, RSX, and IAS.  DECnet provides
network control, automatic routing of messages, and a user interface to the
network.  The DECnet user interface provides commonly needed functions for
both terminal users and programs.  The purpose of this section of the file
is to provide a guide to the specific implementation of DECnet on SPAN; it
is not intended to supersede the extensive manuals on DECnet already
produced by DEC.

DECnet supports the following functions for network users:

1. TASK-TO-TASK COMMUNICATIONS: User tasks can exchange data over a network
   logical link.  The communicating tasks can be on the same or different
   nodes.  Task-to-task communication can be used to initiate and control
   tasks on remote nodes.

2. REMOTE FILE ACCESS: Users can access files on remote nodes at a terminal
   or within a program.  At a terminal, users can transfer files between
   nodes, display files and directories from remote nodes, and submit files
   containing commands for execution at a remote node.  Inside a program,
   users can read and write files residing at a remote node.

3. TERMINAL COMMUNICATIONS: RSX and IAS users can send messages to terminals
   on remote RSX or IAS nodes.  This capability is available on VMS nodes by
   using the PHONE utility.

4. MAIL FACILITY: VMS users can send mail messages to accounts on remote VMS
   nodes.  This capability is currently available for RSX and IAS nodes but
   is not supported by DEC.  There are slight variations for RSX and IAS
   network mail compared to VMS mail.

5. REMOTE HOST: VMS, RSX, and IAS users can log on to a remote host as if
   their terminals were local.


Network Implementations For DECnet
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The SPAN includes implementations for the RSX, IAS, and VAX/VMS operating
systems.  DECnet software exists at all the SPAN nodes, and it allows for
the communication of data and messages between any of the nodes.  Each of
the network nodes has a version of DECnet that is compatible with the
operating system of that node.  These versions of DECnet have been developed
to different extents, causing some nodes to have more or fewer capabilities
than other nodes.  The version, or "phase," of DECnet indicates the
capability of that node to perform certain levels of communication.  Since
the RSX and IAS implementations are almost identical, they are described
together.

Users need not have any special privileges (VAX/VMS users will need the
NETMBX privilege on their accounts) to run network tasks or create programs
which access the network.  However, users must supply valid access control
information to be able to use resources.  The term "access control" refers
to the user name and password of an account (local or on a remote node).

Online system documentation is a particularly important and valuable
component of DEC systems.  At present, SPAN consists almost completely of
DEC systems.  An extensive set of system help files and libraries exists on
all the SPAN DEC nodes.  The HELP command invokes the HELP utility to
display information about a particular topic.  The HELP utility retrieves
help available in the system help files or in any help library that you
specify.  You can also specify a set of default help libraries for HELP to
search in addition to the system libraries.

     Format:  HELP [keyword [...]]

On many systems, new users can display a tutorial explanation of HELP by
typing TUTORIAL in response to the "HELP Subtopic?" prompt and pressing the
RETURN key.


Utilities for DECnet-VAX
~~~~~~~~~~~~~~~~~~~~~~~~
VAX terminal users have several utility programs for network communications
available from the VMS operating system.  Documentation for most of these
utilities can be found in the Utility Reference Manual of the VAX/VMS manual
set, and each utility has extensive online help available.  The following
descriptions offer a brief introduction to these utilities:

MAIL:    The VAX/VMS mail utility allows you to send a message to any
         account or to a series of accounts on the network.  To send a
         message, you must know the account name of the person you wish to
         contact and his node name or node number.  (This will be covered
         more extensively later in this file.)

FINGER:  The DECUS VAX/VMS Finger utility has been installed on a number of
         SPAN VAX/VMS systems.  Finger allows a user to see who is doing
         what, both on his machine and on other machines on the network that
         support Finger.  Finger also allows a user to find information
         about the location and accounts used by other users, both locally
         and on the network.  The following is an example session using the
         FINGER utility:

$ FINGER

NSSDCA VAX 8600, VMS V4.3.  Sunday, 28-Sep-1986 19:55   4 Users, 0 Batch.
Up since Sunday, 28-Sep-1986 14:28

Process       Personal name        Program    Login  Idle  Location

HILLS         H. Kent Hills        Tm         19:02        NSSDC.DECnet
_RTA4:        Dr. Ken Klenk        Tm         17:55        NSSDC.DECnet
_NVA1:        Michael L. Gough     Mail       15:13
SPAN Man      Joe Hacker           Finger     17:33        bldg26/111

$ FINGER SWAFFORD@NSSDCA

[NSSDCA.DECnet]

NSSDCA VAX/VMS, Sunday, 28-Sep-1986 19:55

Process       Personal name        Program    Login  Idle  Location

SPAN Man                           Finger     17:33

Logged in since: Sunday, 28-Sep-1986 17:33

Mail: (no new mail)

Plan:

Joe Hacker, SPAN Hackers Guild

Telephone: (800)555-6000

If your VAX supports VMS Finger, further information can be found by typing
HELP FINGER.  If your system does not currently have the FINGER utility, a
copy of it is available in the form of a BACKUP save set in the file:
NSSDCA::SPAN_NIC:FINGER.BCK

PHONE:   The VAX/VMS PHONE utility allows you to have an interactive
         conversation with any current user on the network.  This utility
         can only be used on video terminals which support direct cursor
         positioning.  The local system manager should know if your terminal
         can support this utility.  To initiate a phone call, enter the DCL
         command PHONE.  This should clear the screen and set up the phone
         screen format.  The following commands can be executed:

DIAL nodename::username

         Places a call to another user.  You must wait for a response from
         that user to continue.  DIAL is the default command if just
         nodename::username is entered.

ANSWER   Answers the phone when you receive a call.

HANGUP   Ends the conversation (you could also enter a CTRL/Z).

REJECT   Rejects the phone call that has been received.

DIR nodename::

         Displays a list of all current users on the specified node.  This
         command is extremely useful for listing the current users on other
         nodes of the network.

FACSIMILE filename

         Sends the specified file to your listener as part of your
         conversation.

To execute any of these commands during a conversation, the switch hook
character must be entered first.  By default, that character is the percent
sign (%).

REMOTE FILE ACCESS:  DCL commands that access files will act transparently
                     over the network.  For example, to copy a file from a
                     remote node:

$ COPY
From:  node"username password"::disk:[directory]file.lis
To:    newfile.lis

This will copy "file.lis" in "directory" on "node" to the account the
command was issued in and name it "newfile.lis".  The access information
(user name and password of the remote account) is enclosed in quotes.  Note
that you can also copy that same file to any other node and account you
desire.  For another example, to obtain a directory listing from a remote
node, use the following command:

$ DIR node::[directory]      (if on the default disk)

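The network file specification in the COPY example has a fixed shape: node,
quoted user name and password, double colon, then device, directory, and
file.  A hypothetical Python helper that splits such a specification into
its parts (the regular expression and field names are ours, for
illustration only; real DCL file specifications have more variants):

```python
import re

# Illustrative pattern for a DCL network file specification of the form
#   node"username password"::disk:[directory]file.lis
FILESPEC = re.compile(
    r'^(?P<node>[A-Za-z0-9]+)'            # node name
    r'(?:"(?P<user>\S+) (?P<password>\S+)")?'  # optional access control
    r'::'
    r'(?:(?P<disk>[A-Za-z0-9$_]+):)?'     # optional device
    r'(?:\[(?P<directory>[^\]]+)\])?'     # optional directory
    r'(?P<file>.+)$'                      # file name and type
)

m = FILESPEC.match('node"username password"::disk:[directory]file.lis')
print(m.group('node'), m.group('user'), m.group('file'))
```

A parser like this is handy when building such specifications in scripts,
since a malformed quote or missing "::" fails to match at all.
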

Utilities for DECnet-11M/DECnet-IAS
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
There are certain DECnet functions that can only be done between nodes that
have the same type of operating system, such as the MPB, TRW, SPRL, LASR,
and UTD nodes, all with the RSX-11M operating system.  The capabilities
offered to the RSX DECnet user can be broken down into two major categories:
those functions for terminal users and those functions for FORTRAN
programmers.

DECnet-11M terminal users have several utility programs available to them
which allow logging on to other machines in the network, file transfers,
message communication, and network status information.

REMOTE-LOGON:  The REMOTE-LOGON procedure allows a user at a node to log on
               to another node in the network.  This capability is also
               called virtual terminal.  The "SET /HOST=nodename" command
               allows the user to log on to adjacent nodes in the network
               from a DECnet-11M node.  This command is initiated by simply
               typing "SET /HOST=nodename".  The "SET HOST" command on the
               SPAN-VAX also allows you to log on to adjacent nodes.

NETWORK FILE TRANSFER:  NFT is the Network File Transfer program and is part
               of the DECnet software.  It is invoked by typing NFT <CR> and
               responding to the prompts, or by typing NFT tofile=fromfile
               directly.  Embedded in the file names must be the node name,
               access information, and directory if it is different from the
               default conventions.  Also note that file names can only be 9
               (nine) characters long on RSX systems.

               Therefore, VAX/VMS files with names of more than 9 characters
               will not copy with default file naming.  In such a case you
               must explicitly name the file being copied to an RSX system.
               The following structure for the file names must be used when
               talking to the SPAN nodes with NFT:

               NODE/username/password::Dev:[dir.sub-dir]file.type

               The following NFT switches are very useful:

               /LI  Directory listing switch.
               /AP  Appends/adds files to end of existing file.
               /DE  Deletes one or more files.
               /EX  Executes command file stored on remote/local node.
               /SB  Submits command file for execution (remote/local).
               /SP  Spools files to the line printer (works only with
                    "like" nodes).

               A particular use for NFT is the display of graphics files on
               the network.  It is important to note, however, that some
               device-dependent graphics files, such as those generated by
               IGL software, are not displayable this way.  Graphics files
               generated by graphics packages that are displayable while
               residing at other nodes may be displayed by using the
               following input:

               NFT> TI:=SPAN/NET/NET::[NETNET.RIMS]D1364.COL

               Graphics files generated by IGL can be displayed by running
               either the REPLAY or NETREP programs (see the net-library
               documentation).

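The nine-character RSX file name limit noted in the NFT description can be
checked mechanically before a transfer.  A rough Python sketch, assuming
Files-11 ODS-1 style limits of nine alphanumeric characters for the name
and three for the type (the helper is invented for illustration):

```python
def fits_rsx(filespec):
    # RSX (Files-11 ODS-1) volumes allow at most 9 characters in the
    # file name and 3 in the file type, alphanumeric only.
    name, _, ftype = filespec.partition('.')
    return (len(name) <= 9 and len(ftype) <= 3
            and name.isalnum()
            and (ftype == '' or ftype.isalnum()))

print(fits_rsx('D1364.COL'))            # True: fits on an RSX node
print(fits_rsx('LONGVMSFILENAME.DAT'))  # False: name longer than 9
```

A VMS file that fails this check would need an explicit, shorter name on
the RSX side of the copy.
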
TERMINAL COMMUNICATIONS:  TLK is the Terminal Communications utility, which
               allows users to exchange messages through their terminals.
               TLK somewhat resembles the RSX broadcast command but with
               more capabilities.  TLK currently works only between RSX-11
               nodes and within an RSX-11 node.  There are two basic modes
               of operation for TLK: the single message mode and the
               dialogue mode.

               The single message mode conveys short messages to any
               terminal in the same node or a remote node.  The syntax for
               this operation is:

               >TLK TARGETNODE::TTn:--Message--

               To initiate the dialogue mode, type:

               >TLK TARGETNODE::TTn<cr>

               When you receive the TLK> prompt, you can enter a new
               message line.


Graphics Display Utilities
~~~~~~~~~~~~~~~~~~~~~~~~~~
One of the main objectives of the SPAN system project is to accommodate
coordinated data analysis without leaving one's institution.  Therefore,
there is a strong need to develop the ability to have graphic images of data
from any node displayed by any other node.  The current inability to display
data on an arbitrary graphics device at any node was quickly recognized.  As
general network utilities are developed to support the display of device
dependent and independent graphic images, the SPAN Graphics Display
Utilities Handbook will serve to document their use and limitations.  The
graphics handbook is a practical guide to those common network facilities
which will be used to support network correlative studies from the
one-to-one to the workshop levels.  For each graphics software utility the
handbook contains the information necessary to obtain, use, and implement
the utility.

| | Network Control Program |
| | ~~~~~~~~~~~~~~~~~~~~~~~ |
| | NCP is the Network Control Program and is designed primarily to help the |
| | network manager. However, there are some NCP commands which are useful for the |
| | general user. With these commands, the user can quickly determine node names |
| | and whether nodes are reachable or not. Help can be obtained by entering |
| | NCP>HELP and continuing from there. For a complete listing of all the NCP |
| | commands that are available to nonprivileged users, refer to the NCP Utility |
| | manual on VAXs, and the NCP appendix of the DECnet-11M manual for PDPs. The |
| | following two commands are probably the most beneficial to users: |
| |
|
| | $ RUN SYS$SYSTEM:NCP !on VAXs |
| |
|
| | -or- |
| |
|
| | > RUN $NCP !on PDPs |
| |
|
| | NCP> SHOW KNOWN NODES !show a list of all nodes |
| | ! defined in the volatile data base |
| | NCP> SHOW ACTIVE NODES !show a list of only currently reachable |
| |
|
| | Please note that the second command cannot be used on "end nodes", that is, |
| | nodes that do not perform at least DECnet Level I routing. In addition, only |
| | nodes in the user's area will be displayed on either Level I or Level II |
| | routers. In the case of end nodes, users should find out the name of the |
| | nearest Level I or II routing node and issue the following command: |
| |
|
| | NCP> TELL GEORGE SHOW ACTIVE NODES |
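To illustrate how the output of these commands might be post-processed, here is a sketch that extracts node names from a SHOW ACTIVE NODES listing. The sample listing format below is an assumption for illustration; real NCP output differs between VMS and RSX versions.

```python
# Sketch: pick out reachable node names from hypothetical NCP
# "SHOW ACTIVE NODES" output.  The sample format is an assumption;
# consult your system's NCP documentation for the real layout.

SAMPLE_OUTPUT = """\
Node              State
 5.14 (NSSDC)     reachable
 5.22 (SSL)       reachable
"""

def active_nodes(listing):
    """Return node names found in parentheses on 'reachable' lines."""
    names = []
    for line in listing.splitlines():
        if "reachable" in line and "(" in line:
            names.append(line.split("(")[1].split(")")[0])
    return names

print(active_nodes(SAMPLE_OUTPUT))  # → ['NSSDC', 'SSL']
```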
| |
|
| |
|
| | Mail |
| | ~~~~ |
| | As briefly discussed earlier, all SPAN DEC nodes have a network mail utility. |
| | Before sending a mail message, the node name and user name must be known. To |
| | send a message to the project manager, you would enter the following commands: |
| |
|
| | $ MAIL |
| |
|
| | MAIL> SEND |
| |
|
| | To: NSSDCA::THOMAS |
| | Subj: MAIL UTILITY TEST |
| | Enter your message below. Press ctrl/z when complete |
| | ctrl/c to quit: |
| |
|
| | VALERIE, |
| | OUR NETWORK CONNECTION IS NOW AVAILABLE AT ALL TIMES. WE ARE LOOKING |
| | FORWARD TO WORKING FULL TIME ON SPAN. THANKS FOR ALL YOUR HELP. |
| |
|
| | FRED |
| | <CTRL/Z> |
| |
|
| | MAIL>EXIT |
| |
|
| | In order to send mail to more than one user, list the desired network users on |
| | the same line as the TO: command, separating each with a comma. Another way to |
| | accomplish this is to use a file of names. For example, in the file SEPAC.DIS, |
| | all SEPAC investigators on SPAN are listed: |
| |
|
| | SSL::ROBERTS |
| | SSL::REASONER |
| | SSL::CHAPPELL |
| | SWRI::JIM |
| | TRW::TAYLOR |
| | STAR::WILLIAMSON |
| |
|
| | The network mail utility will send duplicate messages to all those named in the |
| | above file by putting the file name on the TO: command line (TO: @SEPAC). A |
| | second option for the SEND command is to include a file name that contains the |
| | text to be sent. You will still be prompted for the To: and Subject: |
| | information. The following statements give a brief description of other |
| | functions of the MAIL utility: |
| |
|
| | READ n Will list, on the terminal, the mail message corresponding to |
| | number n. If n is not entered, new mail messages will be listed. |
| |
|
| | EXTRACT Saves a copy of the current message to a designated file. |
| |
|
| | FORWARD Sends a copy of the current message to other users. |
| |
|
| | REPLY Allows you to send a message to the sender of the current message. |
| |
|
| | DIR Lists all messages in the current folder that you have selected. |
| | The sequence numbers can then be used with the READ command. |
| |
|
| | DEL      Deletes the message just read.  The message is moved to the     |
| |          WASTEBASKET folder and is not destroyed until you exit the      |
| |          utility.  Therefore, you can retrieve a message that you have   |
| |          "deleted" up until you enter EXIT or ^Z at the MAIL> prompt.    |
| |
|
| | HELP Always useful if you're lost. |
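The distribution-list mechanics above (TO: @SEPAC) can be sketched in a few lines: this simply expands a .DIS file into the comma-separated To: line that MAIL builds for itself. The addresses are the SEPAC.DIS example from the text.

```python
# Sketch: expand a distribution file (like SEPAC.DIS above) into the
# comma-separated To: line that MAIL accepts.  The addresses are the
# example entries shown in the text.

SEPAC_DIS = """\
SSL::ROBERTS
SSL::REASONER
SSL::CHAPPELL
SWRI::JIM
TRW::TAYLOR
STAR::WILLIAMSON
"""

def to_line(dis_text):
    """Join the node::user addresses with commas, as on a To: line."""
    addresses = [ln.strip() for ln in dis_text.splitlines() if ln.strip()]
    return "To: " + ",".join(addresses)

print(to_line(SEPAC_DIS))
```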
| |
|
| |
|
| | Remote Node Information Files |
| | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
| | All nodes on SPAN are required to maintain two node-specific information |
| | files in their DECnet default directories.                               |
| |
|
| | The first file is a network user list file that contains specific information |
| | on each network user who has an account on the machine. At a minimum, the user |
| | list file should contain the name of the user, his electronic mail address, his |
| | account/project identifier, and his default directory. All of this information |
| | is easily obtained on VAX/VMS systems from the SYS$SYSTEM:SYSUAF.DAT file. |
| | (Note that the SYSUAF.DAT file is (and should be) only readable by the system |
| | manager.) The file is called USERLIST.LIS and resides in the node's DECnet |
| | default directory. A command procedure for creating this file is available in |
| | NSSDCA::SPAN_NIC:USERLIST.COM. This procedure should be executed from the |
| | SYSTEM account on the remote node for which it is to be compiled. Following is |
| | an example of displaying the USERLIST.LIS file on NSSDCA from a VAX/VMS system. |
| |
|
| | $ TYPE NSSDCA::USERLIST |
| |
|
| | Userlist file created at : 28-SEP-1986 22:06:01.71 |
| |
|
| | Owner Mail Address Project Default Directory |
| | ---------------- ----------------- --------- ----------------- |
| | ROBERT HOLZER NSSDCA::HOLZER CD8UCLGU CDAW_C8USER:[HOLZER] |
| | RICHARD HOROWITZ NSSDCA::HOROWITZ ACQ633GU ACQ_USER:[HOROWITZ] |
| | CHERYL HUANG NSSDCA::HUANG CD8IOWGU CDAW_C8USER:[HUANG] |
| | DOMINIK P. IASCO NSSDCA::IASCONE PCDCDWPG CDAW_DEV:[IASCONE] |
| | ISADARE BRADSKY NSSDCA::IZZY DVDSARPG DAVID_DEV:[IZZY] |
| | WENDELL JOHNSON NSSDCA::JOHNSON DCSSARPG CODD_DEV:[JOHNSON] |
| | DAVID JOSLIN NSSDCA::JOSLIN SYSNYMOP OPERS_OPER:[JOSLIN] |
| | JENNIFER HYESONG NSSDCA::JPARK CAS130GU CAS_USER:[JPARK] |
| | HSIAOFANG HU NSSDCA::JUDY DVDSARPG DAVID_DEV:[JUDY] |
| | YOUNG-WOON KANG NSSDCA::KANG ADCSARGU ADC_USER:[KANG] |
| | SUSAN E. KAYSER NSSDCA::KAYSER ACQSARGU ACQ_USER:[KAYSER] |
| | DR. JOSEPH KING NSSDCA::KING ADM633MG ADM_USER:[KING] |
| | BERNDT KLECKER NSSDCA::KLECKER CD8MAXGU CDAW_C8USER:[KLECKER] |
| | KENNETH KLENK NSSDCA::KLENK PCDSARPG ADM_USER:[KLENK] |
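Because the Owner column in a USERLIST.LIS file like the one above may itself contain spaces (e.g. "DR. JOSEPH KING"), a naive whitespace split misparses it. A sketch that instead splits each row from the right, using two rows from the NSSDCA listing above:

```python
# Sketch: split USERLIST.LIS rows into fields.  The Owner column may
# contain spaces, so split each row from the right into the three
# space-free fields (mail address, project, directory); whatever is
# left on the left is the owner.  Sample rows are from the listing above.

SAMPLE = """\
ROBERT HOLZER     NSSDCA::HOLZER     CD8UCLGU  CDAW_C8USER:[HOLZER]
DR. JOSEPH KING   NSSDCA::KING       ADM633MG  ADM_USER:[KING]
"""

def parse_userlist(text):
    records = []
    for line in text.splitlines():
        if not line.strip():
            continue
        owner, mail, project, directory = line.rsplit(None, 3)
        records.append({"owner": owner, "mail": mail,
                        "project": project, "directory": directory})
    return records

for rec in parse_userlist(SAMPLE):
    print(rec["owner"], "->", rec["mail"])
```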
| |
|
| | Much like the user list, a node information file, NODEINFO.LIS, resides  |
| | in each node's DECnet default directory.  The following example is for   |
| | the SSL node and should be taken as a template for the NODEINFO.LIS      |
| | file that should be on every node in SPAN.                               |
| |
|
| | $ TYPE SSL::NODEINFO |
| |
|
| |
|
| | Telenet Access To SPAN |
| | ~~~~~~~~~~~~~~~~~~~~~~ |
| | As SPAN grows, the number of users wishing to make use of its capabilities |
| | increases dramatically. Now it is possible for any user with a terminal and a |
| | 0.3 or 1.2 kbps modem to access SPAN from anywhere in the U.S. simply by making |
| | a local telephone call. There exists an interconnection between SPAN and the |
| | NASA Packet Switched Service (NPSS). The NPSS in turn has a gateway to the |
| | public GTE Telenet network which provides the local call access facilities. |
| | The user dials into one of Telenet's local access facilities and connects |
| | to the NASA DAF (Data Access Facility) security computer.  The user is    |
| | then able to access SPAN transparently through the NSSDC or SSL machines. |
| |
|
| | To find the phone number of a PAD local to the area you are calling from, you |
| | can call the Telenet customer service office, toll free, at 1-800-TELENET. They |
| | will be able to provide you with the number of the nearest Telenet PAD. |
| |
|
| | The following outlines the steps that one must go through to gain access to |
| | SPAN through Telenet. |
| |
|
| | 1. First dial into the local Telenet PAD. |
| | 2. When the PAD answers, hit carriage return several times until the '@' |
| | prompt appears. |
| |
|
| | <CR><CR><CR> |
| |
|
| | @ |
| |
|
| | 3. Next enter the host identification address of the NASA DAF (security |
| | computer). This identification was not yet available at publication |
| | time, but will be made available to all users requesting this type of |
| | access. |
| |
|
| | @ID ;32100104/NASA |
| |
|
| | 4. You will then be prompted for a password (which will be made available |
| | with the identification above). |
| |
|
| | PASSWORD = 021075 |
| |
|
| | (Note: The password will not be echoed)                    |
| |
|
| | 5. Then type <CR>. You will be connected to the NASA DAF computer. The |
| | DAF will tell you which facility and port you succeeded in reaching, |
| | along with a "ready" and then an asterisk prompt: |
| |
|
| | NASA PACKET NETWORK - PSCN |
| |
|
| | TROUBLE 205/544(FTS 824)-1771 |
| |
|
| | PAD 311032115056 |
| |
|
| | *1 |
| |
|
| | ready |
| |
|
| | * |
| |
|
| | All entries to the DAF must be in capital letters, and the USERID and |
| | PASSWORD will undoubtedly be echoed on the screen. |
| |
|
| | *LOGON |
| | ENTER USERID> LPORTER |
| | ENTER PASSWORD> XXXXXXX |
| | ENTER SERVICE> SPANSSL |
| | NETWORK CONNECTION IN PROGRESS |
| | connected |
| |
|
| | Alternatively, you may enter NSSDC for the "Service>" request. |
| |
|
| | 6. You should now get the VMS "Username" prompt: |
| |
|
| | Username: SPAN |
| |
|
| | 7. You will then be prompted for the name of the SPAN host destination. |
| | For instance, if you are a Pilot Land Data System user on the NSSDC |
| | VAX 11/780, you would enter NSSDC and hit the carriage return in |
| | response to the prompt for host name. |
| |
|
| | SPAN host name? NSSDC |
| |
|
| | 8. Finally, continue with normal logon procedure for the destination host. |
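The eight steps above amount to a fixed prompt/response dialogue, and can be written down as a lookup table. In this sketch, the host identification is the one from step 3, but the password and userid are placeholders; as the text notes, the real values must be obtained from NASA.

```python
# Sketch: the DAF logon dialogue from the steps above as a
# prompt/response table.  The password and userid are placeholders;
# real values are issued by NASA, as noted in the text.

DAF_DIALOGUE = [
    ("@",               "ID ;32100104/NASA"),  # Telenet host id (step 3)
    ("PASSWORD =",      "<your-password>"),    # step 4, not echoed
    ("ENTER USERID>",   "<your-userid>"),      # DAF logon, capitals only
    ("ENTER SERVICE>",  "SPANSSL"),            # or NSSDC
    ("Username:",       "SPAN"),               # VMS prompt (step 6)
    ("SPAN host name?", "NSSDC"),              # destination host (step 7)
]

def response_for(prompt):
    """Look up the reply for a prompt; None means just press return."""
    for expected, reply in DAF_DIALOGUE:
        if prompt.startswith(expected):
            return reply
    return None

print(response_for("ENTER SERVICE> "))  # → SPANSSL
```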
| |
|
| |
|
| | The SPAN X.25 gateways have also been used extensively for internetwork |
| | communications to developing networks in Europe and Canada. |
| |
|
| | The traffic from the United States to Europe was so extensive that a      |
| | dedicated link was established between the GSFC and ESOC routing centers. |
| | This link became operational in January 1987.                             |
| |
|
| | Configuration Of SPAN/TELENET Gateway |
| |
|
| | ---------- |
| | | dial-up| |
| | | user | |
| | ---------- |
| | | |
| | ------------------------- |
| | | TELENET | |
| | ------------------------- |
| | | gateway |
| | ------------------------- |
| | | NPSS | |
| | ------------------------- |
| | | | |
| | ----------- ----------- |
| | | SSL | | NSSDC | |
| | | VAX 780 | | VAX 8650| |
| | ----------- ----------- |
| | | | |
| | ------------------------- |
| | | SPAN | |
| | ------------------------- |
| | | | | | |
| | ------ ------ ------ ------ |
| | |SPAN| |SPAN| |SPAN| |SPAN| |
| | |node| |node| |node| |node| |
| | ------ ------ ------ ------ |
| |
|
| |
|
| | SPAN/ARPANET/BITNET/Public Packet Mail Gateways |
| | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
| | SPAN supports gateways both to and from several major networks.  The     |
| | following gives the current syntax for forming an address to another user on |
| | another network. There are several similar gateways at other SPAN nodes that |
| | are not included in this list. Stanford is used here only as a typical |
| | example. If it is necessary for you to use the Stanford mail gateway on an |
| | occasional basis, you should obtain permission from the system manager on the |
| | STAR node (or any other non-NASA gateway node). Currently, there is no |
| | restriction on the NSSDCA gateway usage. |
| |
|
| |
|
| | SPAN-to-ARPANET: NSSDC Gateway . . To: NSSDCA::ARPA%"arpauser@arpahost" |
| | JPL Gateway . . . To: JPLLSI::"arpauser@arpahost" |
| | Stanford Gateway. To: STAR::"arpauser@arpahost" |
| |
|
| | ARPANET-to-SPAN: NSSDC Gateway . . To: spanuser%spanhost.SPAN@128.183.10.4 |
| | JPL Gateway . . . To: spanuser%spanhost.SPAN@JPL-VLSI.ARPA |
| | Stanford Gateway. To: spanuser%spanhost.SPAN@STAR.STANFORD.EDU |
| | [Note: 128.183.10.4 is MILNET/ARPANET address for the NSSDC] |
| |
|
| | SPAN-to-BITNET: |
| | NSSDC Gateway. . .To: NSSDCA::ARPA%"bituser%bithost.BITNET@CUNY.CUNYVM.EDU" |
| | JPL Gateway. . . .To: JPLLSI::"bituser%bithost.BITNET@CUNY.CUNYVM.EDU" |
| | Stanford Gateway .To: STAR::"bituser%bithost.BITNET@CUNY.CUNYVM.EDU" |
| |
|
| | BITNET-to-SPAN: Stanford Gateway. . . . To: spanuser%spanhost.SPAN@SU-STAR.ARPA |
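The address forms in the tables above are mechanical enough to generate. The sketch below builds the NSSDC-gateway variants; the user and host names are placeholders, and the gateway node names and syntax are taken directly from the tables.

```python
# Sketch: assemble gateway mail addresses in the forms tabulated above.
# User/host names are placeholders; syntax follows the NSSDC-gateway
# rows of the SPAN-to-ARPANET, ARPANET-to-SPAN, and SPAN-to-BITNET tables.

def span_to_arpa(user, host):
    """SPAN-to-ARPANET via the NSSDC gateway."""
    return 'NSSDCA::ARPA%"{0}@{1}"'.format(user, host)

def arpa_to_span(user, host):
    """ARPANET-to-SPAN via the NSSDC MILNET/ARPANET address."""
    return "{0}%{1}.SPAN@128.183.10.4".format(user, host)

def span_to_bitnet(user, host):
    """SPAN-to-BITNET, relayed through the CUNY gateway."""
    return ('NSSDCA::ARPA%"{0}%{1}.BITNET'
            '@CUNY.CUNYVM.EDU"').format(user, host)

print(span_to_arpa("arpauser", "arpahost"))
print(arpa_to_span("spanuser", "spanhost"))
print(span_to_bitnet("bituser", "bithost"))
```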
| |
|
| |
|
| | The following gateways allow users on a VAX that supports a connection to a |
| | public packet switch system (virtually anywhere in the world) to reach SPAN |
| | nodes and vice-versa.  Note that this will transmit mail only to and from |
| | VAXs that run DEC PSI with support for incoming and outgoing PSI mail.    |
| |
|
| | SPAN-to-Public Packet VAX |
| | NSSDC Gateway. To: NSSDCA::PSI%dte_number::username |
| | SSL Gateway. . To: SSL::PSI%dte_number::username |
| |
|
| | Public Packet VAX-to-SPAN node |
| | NSSDC Gateway. To: PSI%311032107035::span_node_name::username |
| | SSL Gateway. . To: PSI%311032100160::span_node_name::username |
| |
|
| |
|
| | Remote terminal access and mail are possible between users on the United |
| | Kingdom's Joint Academic Network (JANET) and SPAN.  JANET is a private   |
| | X.25 network used by the UK academic community and is accessible through |
| | the two SPAN public packet switched gateways at MSFC and at the NSSDC.   |
| |
|
| |
|
| | List Of Acronyms |
| | ~~~~~~~~~~~~~~~~ |
| | ARC - Ames Research Center |
| | ARPANET - Advanced Research Projects Agency network |
| | BITNET - Because It's Time Network |
| | CDAW - Coordinated Data Analysis Workshop |
| | CSNET - Computer Science Network |
| | DDCMP - DEC "level II" network protocol |
| | DEC - Digital Equipment Corporation |
| | DECnet - DEC networking products generic family name |
| | DSUWG - Data System Users Working Group |
| | ESOC - European Space Operations Center |
| | ESTEC - European Space Research and Technology Center |
| | GSFC - Goddard Space Flight Center |
| | GTE     - General Telephone and Electronics                              |
| | HEPNET - High Energy Physics Network |
| | INFNET  - Istituto Nazionale di Fisica Nucleare Network                  |
| | ISAS - Institute of Space and Astronautical Science |
| | ISO/OSI - International Standards Organization/Open Systems Interconnection |
| | (network protocol) |
| | ISTP - International Solar Terrestrial Physics |
| | JANET - Joint Academic Network (in United Kingdom) |
| | JPL - Jet Propulsion Laboratory |
| | JSC - Johnson Space Center |
| | kbps - Kilobit per second |
| | LAN - Local area network |
| | LANL - Los Alamos National Laboratory |
| | MFENET  - Magnetic Fusion Energy Network                                 |
| | MILNET  - Defense Data Network (originally part of ARPANET)              |
| | MSFC - Marshall Space Flight Center |
| | NCAR - National Center for Atmospheric Research |
| | NFT - Network File Transfer (program on RSX/IAS systems) |
| | NIC - Network Information Center |
| | NPSS - NASA Packet Switched System (using X.25 protocol) |
| | NSSDC - National Space Science Data Center (at GSFC) |
| | PDS - Planetary Data System |
| | PSCN - Program Support Communications Network |
| | SESNET - Space and Earth Science Network (at GSFC) |
| | SPAN - Space Physics Analysis Network |
| | SSL - Space Science Laboratory (at MSFC) |
| | RVT - Remote virtual terminal program for RSX or IAS systems |
| | TCP/IP - Transmission Control Protocol/Internet Protocol |
| | Telenet - A public packet switched network owned by GTE                  |
| | TEXNET - Texas Network (Academic network) |
| | WAN - Wide area network |
| | X.25 - A "level II" communication protocol for packet switched networks |
| | _______________________________________________________________________________ |
| |
|