Session: Distributed Simulation Fundamentals
One of the Monday tutorials was a history and overview of distributed simulation. Margaret L. Loper, Ph.D., from Georgia Tech, was the presenter. I took notes, but it was impossible to keep up. Loper is one of the instructors at the Binghamton University Flight and Ground Vehicle Simulation Update 2008.
The history went all the way back to 1929 and Edwin Link and included SAGE, Spacewar, Maze War, TACDEW, B-52/KC-135, the development of synchronization, SIMNET, and the development of DIS.
She said that SIMNET training was a factor in the decisive Gulf War Battle of 73 Easting, in which an inexperienced tank force successfully engaged and destroyed a unit of the Iraqi Republican Guard.
The goal of a distributed system, according to Loper, is to create the illusion that everything is running on a single machine.
She listed the following elements of a distributed computing system:
- communication techniques
- computational network
- time and coordination
- naming
- security
- resource management
- process management
- distributed file system
She also described the design principles underlying DIS:
- Object / event architecture: Information about static, non-changing objects is assumed to be known by all nodes; dynamic objects pass messages about events and movements.
- Autonomous simulation nodes: All events are broadcast to everyone; nodes can join and leave at any time and don't have to know who needs a given piece of information.
- Transmission of "ground truth": Each node transmits absolute truth; each node is responsible for determining visibility and impact on itself, and for degrading information as appropriate, e.g. sensor info.
- Transmission of state change: The update rate drops as the rate of change drops.
- Extrapolation (dead reckoning): Each node is responsible for comparing its detailed position model against the dead-reckoned estimate, sending data only when the error gets too large. Dead reckoning is a trade-off among communications, computation, and precision.
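The dead-reckoning idea above can be sketched in a few lines of code. This is a minimal illustration, not anything Loper presented: it assumes simple linear extrapolation (position plus velocity) and an invented error threshold, whereas real DIS implementations define several dead-reckoning algorithms and also use heartbeat timeouts.

```python
import math

class DeadReckoner:
    """Decides when a node must broadcast a state update.

    The node runs its detailed model locally; remote nodes
    extrapolate from the last broadcast state (position + velocity).
    An update is sent only when the extrapolation error exceeds a
    threshold -- the communication/computation/precision trade-off.
    """

    def __init__(self, threshold):
        self.threshold = threshold  # max tolerated position error
        self.last_pos = None
        self.last_vel = None
        self.last_time = None

    def update(self, t, pos, vel):
        """Return True if a state update must be broadcast at time t."""
        if self.last_pos is None:
            self._broadcast(t, pos, vel)  # first state is always sent
            return True
        # What remote nodes currently believe (linear extrapolation)
        dt = t - self.last_time
        predicted = tuple(p + v * dt
                          for p, v in zip(self.last_pos, self.last_vel))
        if math.dist(predicted, pos) > self.threshold:
            self._broadcast(t, pos, vel)
            return True
        return False  # extrapolation is close enough; stay quiet

    def _broadcast(self, t, pos, vel):
        # Stand-in for actually sending a state PDU on the network
        self.last_pos, self.last_vel, self.last_time = pos, vel, t
```

A vehicle moving in a straight line generates no traffic after its first update; only when it maneuvers (so the linear prediction diverges) does another update go out.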
DIS has scaling problems, documented in a 1995 paper by Macedonia, Zyda, Pratt, Brutzman, and Barham titled Exploiting Reality with Multicast Groups: A Network Architecture for Large-scale Virtual Environments.
Other random bits from my notes:
Other protocols and projects in the 1990s: Aggregate Level Simulation Protocol (ALSP), MODSIM, JMASS. TENA was developed for range systems. Bossi (1997) on Data Volume Reduction Techniques; Russo on the quiescent object model. Multicast.
Ed Powell gave a paper in June 2007 on four approaches to interoperability between distributed simulation protocols:
- Central broker
- UN approach
- One language
- Heterogeneous gateway
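The heterogeneous gateway approach can be sketched as a translator sitting between two protocols, converting each message into the other's native form. This is purely illustrative: the field names below (a "DIS-like" entity-state record and an "HLA-like" attribute update) are invented for the sketch, not actual protocol definitions from Powell's paper.

```python
def dis_to_hla(pdu: dict) -> dict:
    """Translate a DIS-style entity-state record to an HLA-style update.

    Hypothetical field names; a real gateway maps standardized PDU
    fields to federation object-model attributes.
    """
    return {
        "objectName": f"entity-{pdu['entity_id']}",
        "attributes": {
            "position": pdu["location"],
            "velocity": pdu["velocity"],
        },
    }

def hla_to_dis(update: dict) -> dict:
    """Translate back; the gateway must keep both mappings consistent
    so that information survives a round trip."""
    return {
        "entity_id": int(update["objectName"].split("-")[1]),
        "location": update["attributes"]["position"],
        "velocity": update["attributes"]["velocity"],
    }
```

The appeal of the gateway is that neither side changes: each protocol keeps its own native representation, and only the gateway knows both. The cost is N-squared growth in translators as protocols multiply, which is what the "one language" and "central broker" approaches try to avoid.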