beowulf Usage Notes

For beowulf version 3.0
27 June 2003

About the code

beowulf version 3.0 calculates infrared-safe three-jet observables in electron-positron annihilation at next-to-leading order in the strong coupling, while adding parton showers to the outgoing partons. The code comes in two files. The first file contains routines that may be altered by the user. For example, the program as distributed calculates the thrust distribution and moments of the thrust distribution. If you would like, say, the three-jet cross section for a given ycut, all you need to do is provide subroutines that determine whether a given final state has three jets and call them from the subroutine hrothgar, following the pattern that you will see in the current hrothgar. The second file contains subroutines that should not be modified, except at extreme peril of getting the wrong answer.
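If you do want a jet-counting observable, a standard choice for e+e- events is the Durham (kt) clustering algorithm. Below is a sketch of the logic such a user-supplied routine would implement; the actual routine would be written in Fortran and called from hrothgar, so the Python here and its names are purely illustrative.

```python
import math

def durham_njets(event, ycut):
    """Cluster four-momenta (E, px, py, pz) with the Durham (kt) jet
    algorithm and return the number of jets at resolution ycut."""
    jets = [list(p) for p in event]
    s = sum(p[0] for p in jets) ** 2  # (total energy)^2 in the CM frame
    while len(jets) > 1:
        ymin, pair = None, None
        # find the pair with the smallest y_ij
        for i in range(len(jets)):
            for j in range(i + 1, len(jets)):
                ni = math.sqrt(sum(c * c for c in jets[i][1:]))
                nj = math.sqrt(sum(c * c for c in jets[j][1:]))
                dot = sum(a * b for a, b in zip(jets[i][1:], jets[j][1:]))
                costh = dot / (ni * nj)
                y = 2.0 * min(jets[i][0], jets[j][0]) ** 2 * (1.0 - costh) / s
                if ymin is None or y < ymin:
                    ymin, pair = y, (i, j)
        if ymin > ycut:
            break  # all remaining pairs are resolved at this ycut
        i, j = pair
        jets[i] = [a + b for a, b in zip(jets[i], jets[j])]  # recombine
        del jets[j]
    return len(jets)
```

For a symmetric three-parton "Mercedes" event, this counts three jets at small ycut; raising ycut merges the jets.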

The code is written in standard Fortran 90 with one exception: the subroutine timing(deltatime) calls a subroutine cpu_time(t2) that is standard in Fortran 95 but not in earlier versions of Fortran. This subroutine is present so that you can instruct the program to run for x cpu hours. You may have to replace cpu_time(t2) by whatever is appropriate for your system.

Compiling

lepton% f90 -u beowulf.v3_0.f90 beowulfsubs.v3_0.f90 -o beowulf

Running

The program asks for the run parameters interactively when you start it. If you would like to reproduce the graph in "Next-to-leading order QCD calculations with parton showers I: collinear singularities," you should modify the main program in beowulf.v3_0.f90 by setting the switch dosoft to .false..

There are quite a number of possible error messages. Error conditions that are not supposed to happen will generally stop the program.

The message

 Primary splitting in splitI aborted because of
 small qbarsq/calqsq,  4.397527252492080E-011
 Further such messages will be suppressed.
is harmless. What has happened is that in the first splitting of one of the partons emerging from a Born graph, the virtuality was tiny. This can cause numerical problems, so the program sets the virtuality to a very small number and continues. This situation occurs because there is no cutoff on how small the virtuality in a primary parton splitting can be (at least until the virtuality is so small that we approach the precision of double precision numbers). In contrast, when the virtuality in a secondary splitting is too small (qbar^2 smaller than 0.0003 times s by default) the splitting is cancelled and no further splitting of the parton is attempted. That is what stops showering from going on forever.
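The two behaviors can be summarized in a small sketch. This is Python pseudocode of the logic described above; the function name and the primary-splitting floor value are illustrative, and only the 0.0003 secondary cutoff comes from the text.

```python
TINY_FLOOR = 1.0e-10   # illustrative floor for a primary splitting
CUTOFF = 3.0e-4        # default secondary cutoff, qbar^2/s (from the text)

def handle_virtuality(qbarsq_over_s, primary):
    """Return the virtuality (as a fraction of s) to use for a splitting,
    or None if the splitting is cancelled.

    A primary splitting has no lower cutoff: a tiny virtuality is clamped
    to a small floor and the shower continues (the harmless warning above).
    A secondary splitting below the cutoff is cancelled, which is what
    stops the shower from going on forever."""
    if primary:
        return max(qbarsq_over_s, TINY_FLOOR)
    return qbarsq_over_s if qbarsq_over_s > CUTOFF else None
```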

The message

 deltatime =  -4226.57031250000       replaced by 0.0d0.

is also harmless. It has to do with the behavior of the system clock.

Output

Here is an explanation of the program's output.

Statement of parameters

When the program starts, it will write the values of the parameters that it is using.
 beowulf version 3.0 20030616 172910.232       
--------------------------------------------------
 Latest revision 12 June 2003        
 beowulf 3.0 subroutines 16 June 2003
 Coulomb gauge and Feynman gauge
  
 I put as much faith in my martial
 might and power as Grendel puts in his.
 Therefore I'll not slay him with a sword...
 ... no sword on earth
 even the best of iron war-blades,
 could make a dent in that miscreant; 
 for he had worked a spell on weapons
 to blunt their edge...
   - Beowulf, translated by Stanley B. Greenfield
  
beowulf will work for   10.00 hours
using groups of  21 sets of points.
The seed is    7342.
beowulf will use the Coulomb gauge and calculate
observable/sigma_0 at next-to-leading order with showers.
alpha_s(mz) is 0.1180 with mz = 91.1876.
sqrt(s) is 91.1876    .
beowulf will use  3.0    colors and  5.0    flavors.
Renormalization parameter:
                   mu /sqrt(s): 0.1667    
                   alpha_s(mu): 0.1623    
Cutoff parameters:
                 badness limit: 1.00E+04
            cancellation limit: 1.00E+04

Secondary showering stops at virtuality qbar^2/rts0^2 = 3.00E-04
All relevant graphs are used.
Most of this should be self-explanatory, but two items deserve discussion.

1) The program chooses points in sets. Each set consists of roughly 3000 points distributed over the ten graph topologies. The sets are assembled into groups. Here we learn how many sets there will be in a group; when the program is done, we will learn how many groups have been used. The division into sets and groups is relevant for the error analysis, which is based on the fluctuations of the results among the groups.

2) A small region is excluded from the integration according to the values of two parameters, the badness limit and the cancellation limit. A point has large badness if it is very near one of the collinear or soft singularities, so that the program is likely to mess up in calculating the kinematics. The program calculates, as a kind of side product, a sample integral (discussed below). A point has large cancellation if the largest contribution to the sample integral from a point is large compared to the net contribution, so that the calculation of the net contribution may not be accurate. If the badness is bigger than the badness limit OR if the cancellation is bigger than the cancellation limit, then the point is excluded from whatever quantities are being calculated. The program lists the values of the limits that it is using.
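The exclusion test amounts to a simple OR of two thresholds. A sketch, with the default limits taken from the sample output above (1.00E+04 for both):

```python
def point_excluded(badness, cancellation,
                   badness_limit=1.0e4, cancellation_limit=1.0e4):
    """A point is excluded from the calculated quantities if EITHER its
    badness or its cancellation exceeds the corresponding limit."""
    return badness > badness_limit or cancellation > cancellation_limit
```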

Standard results

As mentioned above, the program calculates a sample integral. By default, this is the average value of (1-thrust)^2 times sigma/sigma_0. That is, the average is normalized to the Born-level total cross section sigma_0, not the full total cross section sigma.

After    36002.7 cpu seconds, beowulf is done.
beowulf used    893 groups of points.

Results for average of (1 - thrust)**2:
  
     points included in result:     5644533
   points in cutoff correction:       82235
       points dropped entirely:       49156
  
         re(result) =  7.34636E-03 +/-  3.64E-05
         im(result) = -2.76228E-06 +/-  2.02E-05
  cutoff correction =  1.48764E-05 +/-  1.22E-05
For the sample integral, the report gives the number of points included in the result, the number of points assigned to the cutoff correction, and the number of points dropped entirely, followed by the real and imaginary parts of the result and by the cutoff correction, each with its statistical error.

Hrothgar's results

The main point of the program is to send events with weights to a subroutine Hrothgar. This subroutine, which can be easily modified by the user, can use the events and their weights to calculate whatever quantities may be desired. (Hrothgar was the king for whom Beowulf rendered mighty service by eliminating a monster, Grendel, that had been terrorizing the neighborhood, eating the king's loyal subjects, etc.)

The main program sends a signal to Hrothgar that now is the time to report on the status of all monsters in the kingdom, and Hrothgar issues his report.

For each item in Hrothgar's report below, the program gives the result with its statistical error, the cutoff correction with its error, and an alternative error estimate.

The first part of the report below concerns the thrust distribution divided by a function that fits the results for this quantity reported by Kunszt and Nason (Z Physics at LEP I, CERN Yellow Report). This ratio is averaged over a region centered on the stated value of the thrust T, using a smooth weighting function. The next two parts of the report give moments of the thrust distribution and moments of the derivative of the three-jet cross section with respect to ycut.
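The smoothing can be pictured as a weighted local average over the Monte Carlo events. The sketch below uses a Gaussian window as a stand-in, since these notes do not specify beowulf's actual weighting function; the names and the window width are illustrative.

```python
import math

def smoothed_ratio(events, t0, width=0.01):
    """Weighted average of ratio values near thrust t0.

    events: list of (t, w, r) tuples: the thrust of the event, its
    Monte Carlo weight, and the ratio of the measured to the fitted
    thrust distribution at that event."""
    num = den = 0.0
    for t, w, r in events:
        # smooth window centered on t0 (Gaussian used for illustration)
        g = math.exp(-0.5 * ((t - t0) / width) ** 2)
        num += w * g * r
        den += w * g
    return num / den
```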

Of course, you are invited to modify the Hrothgar subroutine to calculate whatever you want.

       ---   Hrothgar reports   ---

First, the average near the thrust t of the thrust 
distribution ((1-t)/sigma_0) d sigma /d t divided by
an approximation to the same quantity.

t = 0.710    approximation function = 6.074E-02
             result =  0.99547     +/-  2.52E-02
  cutoff correction = -1.72843E-03 +/-  2.23E-03
  alternative error =  8.35E-02

t = 0.740    approximation function = 8.668E-02
             result =  0.99240     +/-  1.51E-02
  cutoff correction = -1.98449E-03 +/-  1.46E-03
  alternative error =  3.18E-02

t = 0.770    approximation function = 0.113    
             result =  0.97173     +/-  1.34E-02
  cutoff correction =  2.42629E-04 +/-  7.22E-04
  alternative error =  1.41E-02

t = 0.800    approximation function = 0.140    
             result =  0.98232     +/-  1.40E-02
  cutoff correction =  3.38825E-03 +/-  3.78E-03
  alternative error =  2.36E-02

t = 0.830    approximation function = 0.172    
             result =  0.97912     +/-  1.31E-02
  cutoff correction = -5.91181E-04 +/-  1.09E-03
  alternative error =  1.31E-02

t = 0.860    approximation function = 0.209    
             result =  0.96454     +/-  1.28E-02
  cutoff correction = -6.28018E-04 +/-  8.51E-04
  alternative error =  1.34E-02

t = 0.890    approximation function = 0.254    
             result =  0.97665     +/-  1.28E-02
  cutoff correction = -1.41986E-04 +/-  3.60E-04
  alternative error =  1.35E-02

t = 0.920    approximation function = 0.311    
             result =  0.99905     +/-  1.41E-02
  cutoff correction =  3.57279E-03 +/-  4.24E-03
  alternative error =  1.40E-02

t = 0.950    approximation function = 0.381    
             result =   1.0449     +/-  1.50E-02
  cutoff correction =  1.92039E-02 +/-  1.83E-02
  alternative error =  1.61E-02


Next, moments <(1-t)^n>.

n = 1.500
             result =  2.09030E-02 +/-  1.07E-04
  cutoff correction =  7.43689E-05 +/-  5.21E-05
  alternative error =  1.28E-04

n = 2.000
             result =  7.34636E-03 +/-  3.64E-05
  cutoff correction =  1.48764E-05 +/-  1.22E-05
  alternative error =  6.10E-05

n = 2.500
             result =  2.91412E-03 +/-  1.56E-05
  cutoff correction =  3.08063E-06 +/-  3.21E-06
  alternative error =  3.25E-05

n = 3.000
             result =  1.24721E-03 +/-  7.35E-06
  cutoff correction =  5.46709E-07 +/-  1.06E-06
  alternative error =  1.75E-05

n = 3.500
             result =  5.62509E-04 +/-  3.64E-06
  cutoff correction =  2.29888E-08 +/-  4.34E-07
  alternative error =  9.38E-06

n = 4.000
             result =  2.63617E-04 +/-  1.86E-06
  cutoff correction = -5.93463E-08 +/-  2.01E-07
  alternative error =  5.01E-06

n = 4.500
             result =  1.27214E-04 +/-  9.65E-07
  cutoff correction = -5.28655E-08 +/-  9.74E-08
  alternative error =  2.67E-06

n = 5.000
             result =  6.28275E-05 +/-  5.09E-07
  cutoff correction = -3.52943E-08 +/-  4.86E-08
  alternative error =  1.42E-06

n = 5.500
             result =  3.16175E-05 +/-  2.71E-07
  cutoff correction = -2.14980E-08 +/-  2.47E-08
  alternative error =  7.58E-07

n = 6.000
             result =  1.61618E-05 +/-  1.45E-07
  cutoff correction = -1.25745E-08 +/-  1.27E-08
  alternative error =  4.03E-07

Diagnostics

The last part of the output is diagnostic information from the calculation of the sample integral. This is for users who want to know how well the program is doing. The first item gives the distribution of the contributions v to the sample integral. The average value of v is the result for the sample integral, e.g. 7.34636E-03. For this average value of v, it would be nice if all of the contributions had log10(|v|) in the range -4 to -2. Points with log10(|v|) < -4 are essentially wasted because they do not contribute significantly to the average. Points with log10(|v|) > -2 are a problem because they make the statistical error large. Alas, there are points with log10(|v|) approximately +1. Each of these points makes a very big contribution to the statistical error. The largest |v| in this run is reported as 92.4; values of |v| about this big are occasionally encountered. Evidently, there is room for improvement in the code.
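The binning in the diagnostic table is simply a histogram in integer bins of log10(|v|). A sketch (Python, hypothetical names):

```python
import math

def v_histogram(values, lo=-9, hi=7):
    """Count contributions v by integer bins of log10(|v|), mirroring the
    diagnostic table in the output below. The bin labeled b holds points
    with b < log10(|v|) < b+1."""
    counts = {b: 0 for b in range(lo, hi)}
    for v in values:
        if v == 0.0:
            continue  # zero contributions have no logarithm
        b = math.floor(math.log10(abs(v)))
        if lo <= b < hi:
            counts[b] += 1
    return counts
```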

The rest of the diagnostic report concerns the characteristics of the point with the worst value of |v|.

***********************
diagnostic information: 
  
Number of points with -9 < log_10(|v|) <-8 is     103327
Number of points with -8 < log_10(|v|) <-7 is     177615
Number of points with -7 < log_10(|v|) <-6 is     302821
Number of points with -6 < log_10(|v|) <-5 is     501206
Number of points with -5 < log_10(|v|) <-4 is     768401
Number of points with -4 < log_10(|v|) <-3 is    1026877
Number of points with -3 < log_10(|v|) <-2 is    1140016
Number of points with -2 < log_10(|v|) <-1 is     996015
Number of points with -1 < log_10(|v|) < 0 is     314019
Number of points with  0 < log_10(|v|) < 1 is       4125
Number of points with  1 < log_10(|v|) < 2 is          4
Number of points with  2 < log_10(|v|) < 3 is          0
Number of points with  3 < log_10(|v|) < 4 is          0
Number of points with  4 < log_10(|v|) < 5 is          0
Number of points with  5 < log_10(|v|) < 6 is          0
Number of points with  6 < log_10(|v|) < 7 is          0
  
Biggest contribution was    92.4    
from graph  7, point choice method 12
  
 Analysis by subroutine diagnostic
  
graph number   7
 Point:
p = 1  k =    -1.25       0.762       -1.06    
p = 2  k =     1.25      -0.762        1.06    
p = 3  k =     1.83       0.342       0.336    
p = 4  k =    -1.83      -0.342      -0.336    
p = 5  k =    0.578        1.10      -0.729    
p = 6  k =     1.25      -0.762        1.06    
p = 7  k =    2.289E-04   2.719E-04   2.342E-04
p = 8  k =   -0.579       -1.10       0.729    
  
 Softness:
p = 1  |k| =     1.81    
p = 2  |k| =     1.81    
p = 3  |k| =     1.89    
p = 4  |k| =     1.89    
p = 5  |k| =     1.44    
p = 6  |k| =     1.81    
p = 7  |k| =    4.256E-04
p = 8  |k| =     1.44    
  
 Collinearity:
v = 3 ps = 1 3 5  sines =   0.71646   0.89993   0.94022
v = 4 ps = 2 6 7  sines =   0.00021   0.90447   0.90438
v = 5 ps = 4 6 8  sines =   0.71656   0.94024   0.89985
v = 6 ps = 5 7 8  sines =   0.90470   0.90459   0.00027
  
Badness of this point is  5.63E+03
  
 Calculate finds the following:
valuept =   92.35      -28.61     abs(valuept) =    96.68    
biggest contribution was    3597.    

Done 20030617 064503.673  

Davison E. Soper
Institute of Theoretical Science
University of Oregon
Eugene OR 97403 USA
soper@physics.uoregon.edu