So far, UML has been used solely in a standalone configuration, as a traditional virtual machine. However, the fact that it is normal userspace code makes it possible to build it as a library that could be linked into other applications. This would require some structural changes, which are also needed for the shared subsystem configuration described in section 3.4. The kernel's initialization code would be required to initialize its data and nothing else. It would not then exec init in order to boot a system. Instead, it would behave like any other library initialization code: initialize the library and return to the application.
The most obvious implication of this is that user-level applications could link against the kernel and gain access to the kernel's facilities, such as memory management and allocation, threads, filesystems, and networking.
The kernel's memory allocation facilities include the slab allocator, which allocates uniform-sized objects, and the page allocator, which allocates memory in units of pages using a buddy system that provides defragmentation.
The thread facilities offer a very efficient scheduler and a full set of spinlock and semaphore primitives.
These are all well-tested, debugged, efficient, and scalable. For these reasons alone, they make attractive replacements for their libc equivalents. In particular, the ongoing scalability work that's intended to make Linux scale to high-end SMP machines will translate directly into allowing applications to link against the UML library and gain the same scalability to a similar number of threads.
There are also a number of facilities in the kernel which do not have any equivalent in libc. The filesystems supported by Linux can be viewed as hierarchical data stores. A filesystem stored on a ramdisk is a temporary data store which will go away when the process exits. In order to make the data persistent, it would simply be stored on a device backed by a file on the host.
The network subsystem provides a complete private TCP/IP stack and network interface. This would allow an application to be a full-fledged, if somewhat specialized, network node.
Combined, these facilities offer some interesting possibilities to an application. It could store some of its data in an internal filesystem and export it to the rest of the world via NFS or another remote filesystem. External processes could mount this filesystem and gain access to this internal data. If it's read-only, this would allow transparent monitoring of the application's internal state. If it's writable as well, external monitors could change that state.
This would be useful for managing the configuration of a complex server such as Apache. It could export its configuration to an external monitor, which could tweak it as needed, without needing to change the configuration files and have Apache restart or reread those files. A sophisticated monitor could keep track of the state of the machines running Apache and change limits so that it continues to run efficiently. For example, an Apache instance on a fully-loaded machine could be limited to not accepting further requests until it has reduced its backlog.
Another possibility would be to store the configuration in a filesystem outside the application and have it import it via NFS. This would enable changing the configuration of a large number of instances of this application at once, from a central location.
A different sort of application of this capability is to have an interactive application export its user interface (UI) to the host as a filesystem. External processes could then examine and manipulate the UI, allowing them to do such things as