2.3 OpenVMS Galaxy Components and Concepts
To appreciate how OpenVMS Galaxy uses APMP to run
multiple instances of OpenVMS in a single computer, it is
important to understand this new computing model.
The Galaxy Software Architecture on OpenVMS includes the
following hardware and software components:
Console
The console on an OpenVMS system consists of an attached
terminal and a firmware program that performs power-up
self-tests, initializes hardware, initiates system booting,
and performs I/O services during system booting and
shutdown. The console program also provides run-time
services to the operating system for console terminal I/O,
environment variable retrieval, NVRAM (nonvolatile random
access memory) saving, and other miscellaneous services.
In an OpenVMS Galaxy computing environment, the console
plays a critical role in partitioning hardware resources. It
maintains the permanent configuration in NVRAM and the
running configuration in memory. The console provides each
instance of the OpenVMS operating system with a pointer to
the running configuration data.
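To make the distinction between the two configurations
concrete, the following C sketch models a permanent
configuration held in "NVRAM," a running configuration copied
into memory at power-up, and a per-instance pointer to the
running copy. The structure and function names (galaxy_config,
partition_desc, console_get_running_config) are invented for
illustration; they are not the console firmware's actual data
layout or interfaces.

    /*
     * Illustrative only -- not the console firmware's data layout.
     * The console keeps a permanent configuration in NVRAM and a
     * running configuration in memory; each instance gets a pointer
     * to the running copy.
     */
    #include <stdio.h>
    #include <string.h>

    #define MAX_PARTITIONS 4

    struct partition_desc {            /* one hardware partition          */
        int           cpu_count;       /* CPUs assigned to this partition */
        unsigned long private_mem_mb;  /* private memory, in megabytes    */
        unsigned long io_mask;         /* bit mask of I/O buses owned     */
    };

    struct galaxy_config {             /* hypothetical configuration block */
        int                   partition_count;
        unsigned long         shared_mem_mb;
        struct partition_desc partitions[MAX_PARTITIONS];
    };

    /* "NVRAM": the permanent configuration survives power cycles. */
    static const struct galaxy_config permanent_config = {
        2, 512,
        { { 2, 1024, 0x1 },
          { 2, 1024, 0x2 } }
    };

    /* "Memory": the running configuration is a working copy. */
    static struct galaxy_config running_config;

    /* Each instance receives only a pointer to the running configuration. */
    static const struct galaxy_config *console_get_running_config(void)
    {
        return &running_config;
    }

    int main(void)
    {
        /* Console power-up: seed the running configuration from NVRAM. */
        memcpy(&running_config, &permanent_config, sizeof running_config);

        const struct galaxy_config *cfg = console_get_running_config();
        for (int i = 0; i < cfg->partition_count; i++)
            printf("partition %d: %d CPUs, %lu MB private memory\n",
                   i, cfg->partitions[i].cpu_count,
                   cfg->partitions[i].private_mem_mb);
        return 0;
    }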
Shared memory
Memory is logically partitioned into private and shared
sections. Each operating system instance has its own private
memory; that is, no other instance maps those physical pages.
Some of the shared memory is available for instances of
OpenVMS to communicate with one another, and the rest of
the shared memory is available for applications.
The Galaxy Software Architecture is prepared for a
nonuniform memory access (NUMA) environment and, if
necessary, will provide special services for such systems to
achieve maximum application performance.
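A small C fragment can illustrate the private/shared split
described above. The fragment table, page-frame numbers, and
names below are assumptions made for this example rather than
OpenVMS data structures; the point is simply that each physical
range is either mapped by exactly one instance or mapped by
all of them.

    /*
     * Illustrative only -- not OpenVMS data structures.  Physical
     * memory is described as ranges that are either private to one
     * instance or shared by every instance.
     */
    #include <stdio.h>

    enum owner { PRIVATE_TO_INSTANCE_0, PRIVATE_TO_INSTANCE_1, SHARED };

    struct mem_fragment {
        unsigned long base_pfn;    /* first page frame number in the range */
        unsigned long page_count;  /* number of physical pages             */
        enum owner    owner;       /* which instance maps it, or shared    */
    };

    static const struct mem_fragment layout[] = {
        { 0x00000, 0x20000, PRIVATE_TO_INSTANCE_0 }, /* only instance 0 maps these */
        { 0x20000, 0x20000, PRIVATE_TO_INSTANCE_1 }, /* only instance 1 maps these */
        { 0x40000, 0x10000, SHARED },                /* mapped by every instance   */
    };

    int main(void)
    {
        for (size_t i = 0; i < sizeof layout / sizeof layout[0]; i++)
            printf("PFN %#lx..%#lx : %s\n",
                   layout[i].base_pfn,
                   layout[i].base_pfn + layout[i].page_count - 1,
                   layout[i].owner == SHARED ? "shared" : "private");
        return 0;
    }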
CPUs
In an OpenVMS Galaxy computing environment, CPUs can
be reassigned between instances.
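The following few lines of C suggest what such a reassignment
amounts to conceptually: the CPU leaves one instance's active
set and joins another's while both instances continue to run.
The bit-mask representation and the reassign_cpu function are
assumptions made for this sketch, not the mechanism OpenVMS
actually uses.

    /*
     * Illustrative only -- the bit-mask representation below is an
     * assumption for this sketch, not the mechanism OpenVMS uses.
     */
    #include <stdio.h>

    /* instance 0 owns CPUs 0-3, instance 1 owns CPUs 4-7 */
    static unsigned int cpu_set[2] = { 0x0F, 0xF0 };

    static void reassign_cpu(int cpu, int from, int to)
    {
        cpu_set[from] &= ~(1u << cpu);  /* remove the CPU from the old instance's set */
        cpu_set[to]   |=  (1u << cpu);  /* add it to the new instance's set           */
    }

    int main(void)
    {
        reassign_cpu(3, 0, 1);          /* move CPU 3 from instance 0 to instance 1 */
        printf("instance 0 CPUs: %#x, instance 1 CPUs: %#x\n",
               cpu_set[0], cpu_set[1]);
        return 0;
    }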
I/O
An OpenVMS Galaxy has a highly scalable I/O subsystem
because there are multiple primary CPUs in the system, one
for each instance. Also, OpenVMS currently has features for
distributing some I/O to secondary CPUs in an SMP system.
Independent instances
One or more OpenVMS instances can execute without sharing
any resources in an OpenVMS Galaxy. An OpenVMS instance
that does not share resources is called an independent
instance.
An independent instance of OpenVMS does not participate in
shared memory use. Neither the base operating system nor
its applications access shared memory.
An OpenVMS Galaxy can consist solely of independent
instances; such a system would resemble traditional
mainframe-style partitioning.
2.3.1 APMP Concepts
Architecturally, APMP is based on an SMP hardware
architecture. It assumes that CPU, memory, and I/O have
full connectivity within the machine and that the memory is
cache coherent. Each subsystem has full access to all other
subsystems.
As shown in the Figure 2-1 diagram, APMP then looks at the
resources as if they were a pie. The various resources (CPUs,
private memory, shared memory, and I/O) are arranged
as concentric bands within the pie in a specific hierarchy.
Shared memory is at the center.
APMP supports the ability to divide the pie into multiple
slices, each of a different size. Each slice, regardless of size,
has access to all of shared memory. Furthermore, because
software partitions the pie, you can vary the number and size
of slices dynamically.
In summary, each slice of the pie is a separate and complete
instance of the operating system. Each instance has some
amount of dedicated private memory, a number of CPUs,
and the necessary I/O. Each instance can see all of shared
memory, which is where the application data resides. System
resources can be reassigned between the instances of the
operating system without rebooting.
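As a rough mental model, this slice-of-the-pie view can be
captured in a short C sketch: every instance owns private
memory, CPUs, and I/O, while a single shared-memory region is
visible to all of them. The types, names, and values are
illustrative assumptions, not OpenVMS structures.

    /*
     * Illustrative only -- types and values are invented.  Every slice
     * has dedicated private memory, CPUs, and I/O, and every slice
     * points at the same shared-memory pool.
     */
    #include <stdio.h>

    static char shared_memory[64 * 1024];  /* one pool visible to all slices */

    struct slice {                    /* one instance of the operating system */
        const char *name;
        unsigned    cpu_count;        /* dedicated CPUs                       */
        unsigned    private_mem_mb;   /* dedicated private memory             */
        unsigned    io_buses;         /* I/O buses owned by this instance     */
        char       *shared;           /* all slices share this pointer        */
    };

    int main(void)
    {
        struct slice slices[] = {
            { "Instance 1", 4, 2048, 2, shared_memory },
            { "Instance 2", 2, 1024, 1, shared_memory },
            { "Instance 3", 2,  512, 1, shared_memory },
        };

        for (size_t i = 0; i < sizeof slices / sizeof slices[0]; i++)
            printf("%s: %u CPUs, %u MB private, %u I/O buses, shared pool at %p\n",
                   slices[i].name, slices[i].cpu_count,
                   slices[i].private_mem_mb, slices[i].io_buses,
                   (void *)slices[i].shared);
        return 0;
    }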
2.3.2 Another Possible Picture
Another way to look at the APMP computing model is to
think about how the resources could be divided.
The overall sense of the diagram is that the proportion in
which one resource is divided among the instances is the
proportion in which each of the other resources must be
divided. Some resources would have varying proportions per
instance.
[Figure: Physical memory (M1, M2, M3), CPUs (C1, C2, C3),
and I/O (I1, I2, I3) divided in differing proportions among
Instance 1, Instance 2, and Instance 3, with shared memory
spanning all instances.]