Tuesday, August 08, 2017

When is a hypervisor 'not' a hypervisor?

Abstract: Microsoft Hyper-V does not reliably host Ubuntu GNU/Linux virtual machines -- often not responding to network requests -- until a set of Hyper-V integration services is installed on the Linux machine.

For search discovery purposes: What's wrong with the GNU/Linux virtual machines on my Microsoft Windows Hyper-V virtual host? Why do the Linux machines not respond to network requests (ping, ssh, etc.) for a few minutes, while the Windows virtual machines do? It's a virtualized network card on a virtual machine -- what could go wrong? Is the hypervisor, the system responsible for producing and maintaining the virtual environments, not working? Is it broken? Why is it failing? This is ancient technology: how could the hypervisor, the VMM (Virtual Machine Manager), be so broken? How could the culprit be Hyper-V? I would think that's the most stable technology imaginable.

Having built and worked with hypervisors since the early 1980s, I believed that a hypervisor was a hypervisor. It created a reliable virtualized environment.

But virtual machines have competed with much lighter-weight sandboxing and namespace isolation solutions for many years, especially with the advent of containers. That means that a modern hypervisor is not a traditional hypervisor. It's trying to optimize and pipeline resources, and it needs cooperation from the operating system on the virtual machine to even begin to compete with containers, which have no separate operating system at all.

That means: if you just install an operating system on a virtual machine, you need to make sure the OS knows which hypervisor it's working with. These are integration adaptations. There's no need to be an isolation purist. In fact, isolation is always virtualized in the computing world. We build computers to give us reasonably reliable isolation of logical spaces when we want them. That doesn't always work, but it works most of the time. But in reality there are all sorts of interconnections that are working hard to maintain this illusion of separation for us. Consider a web application with separate user accounts. That isolation is a construction. The exceptions and actual connections and commonalities (a user table, for example) are well-understood.

For Ubuntu GNU/Linux server 16.04 (Xenial) running on Hyper-V, if you want your virtual network to run reliably, you need to activate the Hyper-V ("hv") kernel modules, which are already installed in the OS:

Add the following lines to the /etc/initramfs-tools/modules file:

hv_vmbus
hv_storvsc
hv_blkvsc
hv_netvsc

Then run:

update-initramfs -u

... and reboot.
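The same edit can be scripted. Here's a minimal sketch that appends the modules idempotently -- against a scratch file, so it's safe to run anywhere; on a real guest you'd point MODULES_FILE at /etc/initramfs-tools/modules and run the final commands with sudo:

```shell
# Sketch only: writes to a scratch file here, not the real modules file.
MODULES_FILE="$(mktemp)"
for m in hv_vmbus hv_storvsc hv_blkvsc hv_netvsc; do
    # append each module name only if it isn't already listed
    grep -qxF "$m" "$MODULES_FILE" || printf '%s\n' "$m" >> "$MODULES_FILE"
done
cat "$MODULES_FILE"
# on a real guest, finish with:
#   sudo update-initramfs -u && sudo reboot
```

Running it twice leaves the file unchanged the second time, which is the point of the grep guard.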

Tuesday, May 26, 2015

Fixing the resizing problem with Ubuntu on VirtualBox on Windows

Here's an interesting moment in the history of computing: there are so many conflicting web-answers to a problem, a problem that shouldn't even arise in the first place, that everyone needs to spend an hour, in a constrained trial-and-error exploration, to install the simplest and most-likely virtual machine combination. A million person-hours were probably wasted on this. So, this bug is responsible for the death of two lifetimes of work. Is our current era of extreme technical dysfunction really the right time to train young computer people? Isn't that somewhat sadistic?

Oracle supports a free VM hypervisor called VirtualBox. I'm using version 4.3.28 on Windows 7. I want to put Linux on it. Ubuntu is one of the few Linux distros that offers an .iso image download. I download Ubuntu 14.04.2.

Out-of-the-box, the install comes with a problem. The display is too small, 640 x 480, to even see the entire display settings screen. There are no options for making it bigger, or for making it resize automatically.

Systems are so unbundled, dependency building is so unreliable, conflicts are so common, that it's unclear if my solution would work for even a slight variation on the above situation.

But here's my solution. When I found it, I deleted the virtual machine and tried the same installation procedure again, just to be certain.

  1. install Ubuntu on the virtual machine
  2. at the top of the VirtualBox guest operating system desktop window, click Devices->Insert Guest Additions CD Image … 
  3. you will be prompted to run the Ubuntu Guest Additions CD. Do it. 
  4. then shutdown from within the virtual desktop, and reset from the dashboard. Resizing should now work.

The dozen other solutions are probably just out-of-date. But, again, computing is in such an irresponsible state that it is not possible to know this for sure, without an extensive research initiative.

Thursday, October 12, 2006

Installing Xen on Ubuntu 6.06 (based on Debian)

Just a note on the resources that worked for me.

I installed on a really cheap (sub-$400) Compaq Presario AMD Sempron 3400+. Used the Ubuntu distro in the Linux Format 'Ubuntu special', and followed the update instructions in that mag.

Then used this great cut-and-paste stepwise guide:

How To Set Up Xen 3.0 From Binaries In Ubuntu 6.06 LTS (Dapper Drake)

It worked. There's no greater praise. A long sequence, but certainly educational.

Additionally, this was a useful read:

Create a Debian VM with debootstrap

And this is essential reading for actual use of Xen:

Xen 3.0 User's Manual

Running Linux under Xen: VT hardware harmful?

If I add up everything I read, the new VT chips from Intel (featuring new hardware virtualization technology) do not enhance the speed of Xen's hosting of Linux virtual machines ... beyond the fact that the chips themselves are faster. In fact, the performance is allegedly worse, so the price/performance is hypothetically much worse.

In the October 2006 issue of the UK's Linux Format magazine, their benchmarks show Linux with Xen on hardware-virtualization chips running at about 50% of native Linux speed! But in Steve Hand's dynamic BayLISA lecture on Xen last year, he benchmarked Xen domains without virtualization hardware at close to 100% of native speed.

So what's up?

I can only guess that using the VT hardware takes significant overhead.

It offers benefits, of course! You can run unmodified Windows images, trapping the infamous "odd opcodes" and passing control to the hypervisor (the virtual machine manager). You also get more flexibility to run simultaneous x86 modes.

I have to look into this more.

Tuesday, October 03, 2006

On Writing Simulators and the Use of Macros

Greg Bryant and Josh Gordon
Conference Proceedings of the Eighth West Coast Computer Faire
March, 1983

Macros allow you to take a big, difficult, repetitive program and break it down into interesting, easy to work with chunks. But more important, if you're going to transport your code to a different system, macros let you reduce the system dependent part to its smallest denominator, making portability problems seemingly disappear. Reading this note, we hope, will help you understand what macros are and how to use them.

Our case is of particular interest to microcomputer users who are considering changing processors or are transporting assembly-level software between two microprocessors. There are two ways to go: 1) auto translation, where you actually convert code pound for pound so that it runs under the target machine, or 2) simulation, where you write a program that will make believe it's another machine, and will perform instructions one at a time as given. When dealing with higher-level languages, these two ideas are called compiling and interpreting, respectively.

We will write the simulator in the assembly language of the target machine. Working in assembly language lets us concentrate on the speed of the simulator and occasionally, depending on the machine, makes the translation process easier. This of course makes some sense, since machine talk is pretty much at one level. Besides, simulation in a higher-level language is another ball of wax, where variables and subroutines take the place of the macros we're about to talk about.

A simulator has as its structure the instructions of the machine you're going to simulate. That is, the bulk of the program, the simulation code itself, is merely a bunch of labels named after instructions. When you've fetched the next line of code to be simulated, you jump to the label where you'll find the simulation of that instruction.

The mechanism we use to help us go quickly to the correct label in our simulator is called a jump table. Machine-level instructions read as numbers when you go to simulate them, so you'll have an area in your program where all the labels that match a particular instruction are arranged in the order of these numbers. In other words (using the 8080 as an example):

number    instruction    label
  00      nop            nop80
  01      lxi b          lxib80
  02      stax b         staxb80
(and so on) ...

So, when next you fetch an instruction (a numerical value that you've previously loaded into your simulator's program memory) you add the address of the start of your jump table to the instruction itself, and jump to the label stored at that location.

Threaded code is the most efficient in time: a section of code called 'the thread' is provided to fetch the next instruction and jump to the appropriate code. This 'thread' is put at the end of each section of the simulator, so that the next instruction will be fetched right away. You can save space by putting the thread in one place and jumping to it each time, but it runs faster if you duplicate the code.

The following is a bit of an 8080 simulator written for a fairly compatible machine, the Z8000. Don't try to read the code (it's unreadable to make a point) just notice that the thread follows the simulation of the instruction:

adca80:
        ldctlb  flags,rh3
        adcb    rl3,rl3
        ldctlb  rh3,flags
        inc     r2,$1
/* thread */
        ld      r0,$0
        clr     r1
        ldb     rl1,*r2
        add     r1,r1
        ld      r12,inst(r1)
        jp      *r12
subb80:
        ldctlb  flags,rh3
        subb    rl3,rh4
        ldctlb  rh3,flags
        inc     r2,$1
/* thread */
        ld      r0,$0
// etc ...

As you can see, simple though this program is in design, typing in 256 sections of almost identical code is no trivial task. Also notice that all the 8080 registers are represented by Z8000 registers, rh3, rl1 and so on. The potential for typos when typing these in, as well as for human error during debugging, is very high. A macro pre-processor can be used to give these registers meaningful names such as a_reg and b_reg, so that the programmer needn't do any translation. Also, if for some unforeseen reason of system quirkiness he has to change register representation, he only has to change it in his macro definition.

When building a simulator, you must become completely familiar with (in fact, omniscient of) the simulated machine's structure, but you needn't be as familiar with the target machine's architecture. Clearly much of the coding involved in this 'brute force' simulation of instructions for an entire processor is repetitive, and often only the operands change. Designing with the use of macros is the most efficient and intelligent way to begin the coding stage of this type of project.

The primary idea of a macro is to replace keywords with text, much as in the fashion of creating a language specifically to write your program in. At one level this means replacing the word 'thread' in your program with the six lines of code above, and at another level it means replacing the word a_reg (which is meaningful to us) with 'rl3' (which is meaningful to the machine). Doing this provides the opportunity to construct portable code in the same sense as code written in a higher level language is portable -- the defined macros, like the implementations of instructions in an interpreter, will contain all the system dependent code that is necessary.

While writing your programs you get a good idea of what 'instructions' you'd like to have around, and they'll be different than those in a programming language where the instructions need to be very general in nature. The more general the instruction, the harder it is to implement, but most of your instructions will be quite specific, and therefore easily re-written if the need arises.

The program that takes your macro calls and replaces them with their definitions is called a macro pre-processor. Although there are many macro pre-processors used today, we will use as an example the program M4, developed at Bell Laboratories in association with the increasingly popular UNIX operating system. This important utility can be constructed on your system in a very short time (say a half hour) using as a guide the fine book 'Software Tools' by Kernighan and Plauger (1). You may then add features to this utility as you deem necessary, following the idea that you are constructing a tool that you may use over and over again in the building of your programs.

For the purposes of this paper we will use only one function from M4: 'define'. It works much like a function definition, except that you're defining text instead of code. That is, you treat it as text (although it probably IS code). You may define a string to be replaced by a set of instructions thusly:

define(THREAD,[
clr r1
ldb rl1,*r2
add r1,r1
ld r12,inst(r1)
jp *r12])dnl

From then on, every time the word THREAD appears it will be replaced by the indicated text. Nothing spectacular so far, but definitions, like functions, may also take arguments. The arguments are passed in the form of a function call, say 'arith_gen(adc,a)', and the defined text uses the variables in the form $1, $2, $3 etc. This function may call other macros, passing some of these variables along as parameters. At the logical bottom, there should be system dependent code.

There should be two files involved in the construction of any final code: 1) the macro file, which holds all your definitions and is read at run time by m4, and 2) the source file, which contains all the calls to the macro definitions and looks like an outline of your program.

Since your simulator's outline looks rather like an instruction set, you should divide up that instruction set into similar parts so that you can create macros that will be passed parameters to construct the code. An example is a macro that creates all the code necessary to simulate the 8080 arithmetic instructions ADD, ADC, SUB, SBB and CMP (complicated macros can encompass many more instructions, but for simplicity's sake, we'll limit the scope of this one).

Let's start with the hard part -- the actual simulation of the instruction. The funny part is that we don't have to do any work now. We'll just assume that a bunch of macros have been defined called add_implementation, adc_implementation and so on. These can be very simple, such as:

define(adc_implementation,
[adcb a_reg,$1])

They might be more complex, but we don't need to worry about that now -- this is the system code at the logical bottom we were talking about. These implementation macros will be called by other macros in the form: '$1_implementation($2_reg)', where $1 will be the word 'adc' passed in when the macro was called, and $2 will be the register to 'add-with-carry' with the 'a' register. More on this in a moment.

The structure of an instruction simulation looks like this:

LABEL:
Load status flags
Instruction implementation
Store status flags
Adjust the program counter
Thread

Our instruction generator for these 5 8080 instructions looks like this:

define(arith_gen,[$1$280:
load_flags
$1_implementation($2_reg)
store_flags
program_counter_+1
thread])

The first line defines the name (arith_gen) and creates the LABEL, which is 'adca80:' in the case of arith_gen(adc,a). The second line is a stand-alone macro called load_flags that does whatever is necessary, before the instruction is executed, to prepare the simulated status flags to be modified.

The third line is the implementation macro discussed before, and now what it does is clearer. When called with arith_gen(adc,a):

$1_implementation($2_reg)

becomes:

adc_implementation(a_reg)

a function performed by your macro pre-processor. The fourth line stores the resulting flags in some manner, and the fifth line increments the program counter by one in this case, although the amount varies for different instructions, of course. The last line is the threading function, or a call to it, which jumps to the next instruction.

In order for all this to work, we must define macros for each possible case. So for the '$1_reg' macro:

define(a_reg,[rl3])
define(b_reg,[r14])
define(c_reg,[rh4])
etc ...

However this little bit of work is nothing compared to the trouble you'll get into trying to remember what target machine registers you are using for each 8080 register. And when you write a simulator for yet another machine with completely different register mnemonics, you will only need to type them in once. From there, all your higher level macros, such as 'arith_gen' above, are still useful.

The primary power of macro pre-processing comes from one idea, substitution, in two forms: replacement and parameter incorporation. The system can be brought up quickly, as mentioned before, and when used properly can cut a very large program down to size. The success of the system is unquestionable, since implementations of some of our largest programs, which used to take a month to produce, may now be transported to a new machine in about a week. That's a tremendous time savings, and well worth the time invested in creating the macro structure (one to two weeks).

This technique is not just for simulators, but for any large program that is system dependent and needs transport. In the computer industry more and more programmers are discovering the painless process of writing macro implementations.

Acknowledgements: We'd like to thank Lance Batten for the time, Mike Higgens for the arrangement, and G.B. Shaw for the clarification.

References:

(1) Software Tools, B.W. Kernighan, P.J. Plauger; Addison-Wesley, 1976
(2) Z8000 Assembly Language Programming, Lance Leventhal, Adam Osborne, Chuck Collins; Osborne/McGraw-Hill, 1980