These files store reduced and partially reduced normal equation elements and associated information. Their purpose is to allow multiple files of this kind to be combined into a global GPS least squares adjustment involving a large number of parameters and a large amount of data.
The structure of all files of this type is the same, except that record 7 is absent if its length is zero; the contents of the files may differ. There are files for initial level data sets coming directly from PAGES and files for combined data sets. Each of these files may also have a counterpart which holds the solution and inverse elements from a global adjustment.
The normal equations are stored and used in memory as an upper triangular matrix. The "right hand side" or "absolute column" is maintained in this matrix as one additional column. Thus if there are N parameters in the adjustment there are N+1 rows and columns in the matrix. The elements of the matrix are stored sequentially by rows. Each row begins with the diagonal term and ends with the element of the right hand side; no space is allocated for the unused lower triangular elements.
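As an illustration of this packed row-wise storage, the following Python sketch computes the position of an element in the packed array (the programs themselves are Fortran; the function name and the 0-based indexing are choices made here, not taken from the code):

```python
def packed_index(i, j, n):
    # 0-based position of element (i, j) in the row-wise packed upper
    # triangle for n parameters, where column n holds the right hand side.
    # Row i starts at its diagonal term, so it has (n + 1 - i) elements.
    assert 0 <= i <= j <= n
    row_start = i * (n + 1) - i * (i - 1) // 2
    return row_start + (j - i)
```

For n = 3 parameters the packed array holds rows of length 4, 3 and 2, so the element (2, 3), the right hand side term of the last parameter row, lands at the final position, index 8.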
These files are written by the pages subroutine wrtnrm.f and by the Gpscom subroutine wneqf.f, and the header record is defined in the included common block Gpscom/hlhrc.cm. These sources should be treated as the final authority on all matters, as this discussion is likely to become out of date quickly as program improvements are made. Currently these are sequential binary files written by standard FORTRAN. There are eight record types defined for this file, in the following order. The Fortran variable names used in the following discussion are those used in the Gpscom program.
This is a fixed length record with a relatively small number of constants which define this file and the set of normal equations included. These constants are as follows:
len1   i*4   length of record type 1 (bytes), the header record (this record)
len3   i*4   length of record type 3 (bytes), equal to 4*(nlv + ngv)
len4   i*4   length of record type 4 (bytes); record not present if len4 = 0
len5   i*4   length of record type 5 (bytes), ngblfs*(4+4+4+8)
len6   i*4   length of record type 6 (bytes), (nlv + ngv + 1)*8
len7   i*4   length of record type 7 (bytes); record not present if len7 = 0
len8   i*4   length of record type 8 (bytes)
nlv    i*4   number of local variables for which reduced normal equation elements are saved in record 7
ngv    i*4   number of global variables for which partially reduced normal equation elements are saved in record 8
nlp    i*4   total number of local parameters included in the adjustment up to the establishment of this file, excluding those defined in this file in record 7 and the variable nlv
neqc   i*4   number of constraint equations that have been added to the matrix
nobs   i*4   number of observation equations included (NOT including neqc)
ntds   i*4   total number of initial level data sets included in these normal equations
tminh  i*4   minimum time for the data set, high part
tmaxh  i*4   maximum time for the data set, high part
typel  c*2   type of elements in record 7: ' R' = reduced normal eq. elements, ' I' = inverse normal eq. elements, ' N' or blank = no record
typeg  c*2   type of elements in record 8: 'PR' = partially reduced, ' I' = inverse elements
cdat   c*8   date of the creation of this file
ctim   c*8   time of the creation of this file
fdef   c*64  character variable defining the contents of this data set. For initial data sets each contains the name of the computer on which the adjustment was made, the original directory path name for the project of the data processing, and the yr_day of the particular data set.
tminl  r*8   minimum time for the data set, low part
tmaxl  r*8   maximum time for the data set, low part
ssqr   r*8   sum of squares of residuals
seuw   r*8   std. error of an observation of unit weight (chi**2)
nt2r   i*4   number of type 2 records in this file
len9     i*4         extra dummy variable to allow adding one additional record to this file without changing the definition of this header record. For a while this parameter was used to define the version of this file type; that use has since been superseded by "ihrver" defined below.
fanth    c*(maxpat)  name of antenna phase correction file
ffsih(5) c*(maxpat)  five locations for names of station information files
fpolh    c*(maxpat)  name of file with a priori pole positions
fsvih    c*(maxpat)  name of satellite information file
forbh    c*(maxpat)  name of broadcast orbit file
fephh    c*(maxpat)  name of a priori ephemeris

Of the above character variables for which space has been set aside, some have meaning only in the context of the initial run of program Pages and will thus be defined only in the output files from that program. (See the parameter block sizes.par for the definition of maxpat.)

ihrver   i*4   version number for this file type
noofl    i*4   overflow count for observation count
lmbuf    i*4   buffer length in real*8 words for the type 7 & 8 records
idumm1   i*4   dummy place holder for future use
idumm2   i*4   dummy place holder for future use
idumm3   i*4   dummy place holder for future use
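For illustration only, the following Python sketch unpacks the leading integer fields of the type 1 header in the order listed above. It assumes 4-byte little-endian integers with no padding and ignores the record markers that Fortran sequential files carry, so it is a schematic reader, not a drop-in replacement for the Fortran routines:

```python
import struct

# Order of the leading i*4 fields, taken from the description above
# (note there is no len2; record type 2 is a variable set of text lines).
LEAD_FIELDS = ('len1', 'len3', 'len4', 'len5', 'len6', 'len7', 'len8',
               'nlv', 'ngv', 'nlp', 'neqc', 'nobs', 'ntds',
               'tminh', 'tmaxh')

def parse_header_lead(buf, offset=0):
    # Unpack the leading integer fields of a type 1 header record into
    # a dict keyed by the Fortran variable names.
    vals = struct.unpack_from('<%di' % len(LEAD_FIELDS), buf, offset)
    return dict(zip(LEAD_FIELDS, vals))
```

The character and real*8 fields that follow would be unpacked the same way once maxpat and the exact byte layout are taken from sizes.par and hlhrc.cm.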
This record type comprises multiple records which provide a copy of the OPTIONS section of the pages.skl or gpscom.pdf input file. These are ASCII text records, one for each line of the input data file. The number of records is given in the header record by the parameter nt2r.
pid() c*12 Each local parameter is identified by a 12 character ID, and each global parameter by a 12 character name. The number of elements defined in this array is nlv + ngv.

Troposphere parameters are identified by the site identifier in columns one to nine, a 't' in column 10, and a sequential number in columns 11 and 12. Since the troposphere model can be defined in pages as piecewise linear, the sequential number separates each successive troposphere parameter for the given site. This unique character identifier is established by pages for use in the program Gpscom. It is based on the time span of the data, particularly the beginning time, and on the interval of time for the piecewise linear troposphere model. These definitions must therefore be the same for each run of pages for which common junction sites are defined and which are to be combined together with Gpscom.

For the satellite parameters the first three characters of the identifier are the letters "PRN" followed by two integers which are the PRN number of the satellite. This is followed by 4 blank characters and then a 3 letter code as shown in the following list. The model defined in the pages run determines which parameters are actually used.

'oc1' Satellite clock parameter
'oc2' Satellite clock parameter
'oc3' Satellite clock parameter
'oyb' Satellite y-bias
'orp' Radiation pressure
'or2' Radiation pressure
'o x' Satellite position
'o y' Satellite position
'o z' Satellite position
'ovx' Satellite velocity
'ovy' Satellite velocity
'ovz' Satellite velocity

Recent additions to the radiation pressure models in Pages have added radiation terms with the following designations.
'or1' Berne radiation pressure model
'or2' Berne radiation pressure model
'or3' Berne radiation pressure model
'ora' Berne radiation pressure model
'orb' Berne radiation pressure model
'orc' Berne radiation pressure model
'ord' Berne radiation pressure model
'ore' Berne radiation pressure model
'orf' Berne radiation pressure model

When EOP identifiers are generated by the pages program, they include an indication of the parameter type as well as the time with which the parameter is associated, to the nearest hour. This requires that the time span for EOP parameters be some multiple of an hour, and it is probably best to arrange for the data set to begin on the hour as well. The first 7 characters of the identifier are the last two digits of the year, followed by 3 digits for the day of the year and two digits for the hour. The eighth and ninth characters are blanks, and the last 3 characters are "p x" for the polar x component, "p y" for the polar y component and "ut1" for the time element UT1.

For site parameters the first four characters of the identifier are the site code generally used, followed by a blank, then one character which is the "point code" as defined in the site information file for use in the Sinex file, then 3 digits for an occupation number which is also defined in the site information file. The last 3 characters of the name are assigned as follows:

s x = x coordinate
s y = y coordinate
s z = z coordinate
svx = x component of velocity
svy = y component of velocity
svz = z component of velocity

If tidal terms are being computed they are expressed as the amplitude of the term times the sine or cosine of the phase of the term. In the following, a capital letter refers to the amplitude times the cosine of the phase, A*cos(p), while lower case refers to the amplitude times the sine of the phase, A*sin(p).
sSa = Solar semiannual 182.62 d A*cos(p)
ssa = Solar semiannual 182.62 d A*sin(p)
sMm = Lunar monthly 27.55 d A*cos(p)
smm = Lunar monthly 27.55 d A*sin(p)
sMf = Lunar fortnightly 13.66 d A*cos(p)
smf = Lunar fortnightly 13.66 d A*sin(p)
sQ1 = Major Lunar elliptic 26.87 h A*cos(p)
sq1 = Major Lunar elliptic 26.87 h A*sin(p)
sO1 = Principal Lunar 25.82 h A*cos(p)
so1 = Principal Lunar 25.82 h A*sin(p)
sP1 = Principal Solar 24.07 h A*cos(p)
sp1 = Principal Solar 24.07 h A*sin(p)
sK1 = Luni-solar declinational 23.93 h A*cos(p)
sk1 = Luni-solar declinational 23.93 h A*sin(p)
sN2 = Major lunar elliptic 12.66 h A*cos(p)
sn2 = Major lunar elliptic 12.66 h A*sin(p)
sM2 = Principal Lunar 12.42 h A*cos(p)
sm2 = Principal Lunar 12.42 h A*sin(p)
sS2 = Principal Solar 12.00 h A*cos(p)
ss2 = Principal Solar 12.00 h A*sin(p)
sK2 = Luni-solar declinational 11.97 h A*cos(p)
sk2 = Luni-solar declinational 11.97 h A*sin(p)
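As a rough illustration of the identifier conventions described above, the following Python sketch classifies a 12-character pid. The heuristic and the function name are inventions of this discussion; the sources remain the authority on the actual encoding:

```python
def classify_pid(pid):
    # Heuristic classifier for the 12-character parameter IDs,
    # following the column conventions described in the text.
    assert len(pid) == 12
    if pid.startswith('PRN'):
        return 'satellite'          # 'PRN' + PRN number + 4 blanks + code
    if pid[9] == 't' and pid[10:12].strip().isdigit():
        return 'troposphere'        # site id (cols 1-9), 't', sequence no.
    if pid[9:12] in ('p x', 'p y', 'ut1'):
        return 'eop'                # yydddhh + 2 blanks + component code
    return 'site'                   # code, point code, occupation, term
```

Site tidal identifiers such as 'sM2' fall under the site branch along with the position and velocity terms.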
This list contains the names of data files included in these normal equations. This record contains an array of character variables. If this is an initial level data set this record will not exist.
fnml() c*64 There are 'len3' elements, each character*64. This list of names can include initial level data sets or combined data sets. The length of the file name is restricted to 60 characters, as the last 4 will be replaced with the number of data sets included in the file. Thus the distinction between an initial level data set and a combined data set can be made on the basis of the number of included data sets.
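A minimal Python sketch of how the last 4 characters of an fnml() entry could be split off to recover the included data set count (the function name is hypothetical):

```python
def split_fnml(entry):
    # Split a 64-character fnml() entry into the (up to 60 character)
    # file name and the count of included data sets held in the last
    # 4 characters.
    name = entry[:60].rstrip()
    count = int(entry[60:64])
    return name, count
```

A count of 1 would then mark an initial level data set, while a larger count marks a combined data set.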
This record was initially intended to contain an array of counts of the observations at each site and an array of the mean times of those observations. It has been expanded to include a set of variables for each parameter because of the need to provide a priori values and a reference time; an array in the record gives the a priori values of the parameters. For different parameters the count and time variables may have somewhat different meanings. For time dependent parameters such as orbits, EOP or tropospheres, the times recorded are the reference time for the parameter. Observation counts are not currently available for the orbit parameters, so for them the count variable is not meaningful. In general the arrays defined in this record have the following definitions. As for the pid() variable in record type 3 above and the googe() variable in record 6 below, the subscript i in these arrays indicates the row of the normal equation matrix where that parameter is allocated.
kfswmt(i)  parameter observation count
fswmth(i)  parameter reference time, full day part
fswmtl(i)  parameter reference time, fractional day part
apval(i)   a priori value of the parameter being estimated
The station parameters for site position, velocity and ocean tidal terms are arranged together in the order in which they are described in record type 3 above. For all parameters the a priori value term apval() is appropriate. For the station x position parameter, noted "s x" in record type 3 above, located in row i of the normal equation matrix, the associated information is in position i of the data arrays in this record: kfswmt(i) contains the total count of observations for the site, and the time variables fswmth(i), fswmtl(i) contain the mean epoch of all observations that include this site. At location i+1 are the variables associated with the y position of the site ('s y'); kfswmt(i+1) is set to zero and the time variables have special meanings: fswmth(i+1) is the date of the first observations from this site and fswmtl(i+1) is the length, in days, of the observation interval for this site (actually last day minus first day, i.e. length minus one day). At location i+2 are the variables associated with the z position of the site ('s z'); kfswmt(i+2), like the count variables associated with the site velocity terms, is not clearly defined and by happenstance contains the site observation count for a single day. However the time terms fswmth(i+2) and fswmtl(i+2) contain the reference time for the a priori site coordinates. For all site velocity terms the variables kfswmt, fswmth and fswmtl are not explicitly defined.
For an initial level data set this array contains the Googe number for those rows of the matrix which have been fully reduced (record 7) and the original diagonal term of the matrix for those rows that are only partially reduced (record 8). If this file is for the final solution and inverse then final Googe numbers will be in all elements.
Googe numbers are used in the forward reduction, or triangularization, of the normal equation matrix to check for rows of the matrix which are singular. In the triangularization process, when i-1 rows (or columns) have been processed and one starts row i, the value of the diagonal term must be tested. Since row i must be divided by the diagonal term of the row (or by its square root in Cholesky), the term must be tested to ensure it is non-zero. A zero value would indicate a pure algebraic singularity, but since round-off errors can produce small non-zero values which should algebraically be zero, one must test against some tolerance to decide whether the row should be considered singular.
The use of Googe numbers is an improvement on the classical method described above. The Googe number is defined as follows: in the triangularization of the normal equations, when, as above, i-1 rows have been reduced, the terms of the subsequent rows will have been modified by that process. The Googe number for row i is then the diagonal term of row i at that point divided by the original diagonal term of row i in the unreduced normal equations. As in the classic test above, the Googe number is tested against a tolerance to determine whether the row should be considered non-singular, that is, sufficiently independent of the i-1 rows of the matrix previously reduced.
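The computation described above can be sketched as follows in Python, using plain Gaussian forward reduction on a small dense symmetric matrix. This is for illustration only; the production code works on the packed upper triangle, and the function name is an invention of this discussion:

```python
def forward_reduce_with_googe(N, tol=1e-6):
    # Gaussian forward reduction of a symmetric normal matrix N,
    # computing each row's Googe number (current diagonal divided by
    # the original diagonal) as the reduction reaches that row.
    A = [row[:] for row in N]
    n = len(A)
    orig_diag = [A[i][i] for i in range(n)]
    googe, singular = [0.0] * n, []
    for i in range(n):
        googe[i] = A[i][i] / orig_diag[i]
        if googe[i] < tol:
            # Flagged singular; the programs would apply an infinite
            # constraint to this parameter and continue to a solution.
            singular.append(i)
            continue
        for k in range(i + 1, n):
            f = A[k][i] / A[i][i]
            for j in range(i, n):
                A[k][j] -= f * A[i][j]
    return googe, singular
```

For example, the normal matrix formed from a design matrix whose third column is the sum of the first two is algebraically singular; the reduction drives the third diagonal to zero and its Googe number falls below the tolerance, flagging row 3 while the first two rows pass.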
The use of this method was proposed by William D. Googe of the Defense Mapping Agency Topographic Center. A more thorough discussion of the Googe number is given by Charles R. Schwarz, NOAA Technical Memorandum NOS NGS-12, Trav10 Horizontal Network Adjustment Program, April 1978. In his discussion he points out that "This normalization ... allows selection of a tolerance which is independent of network size, observation types, or weights." He reports that the tolerance used effectively for the adjustment of classical horizontal survey networks is .000001, and we have found the same value to be useful in the adjustment of GPS networks and orbits using double difference phase observations.
It should be pointed out that the tolerance values used in the Googe number test are determined experimentally, and there is some flexibility in the value used. So in spite of the equivalence provided by the normalization mentioned above, some variation may be needed from one problem to another. In previous work with VLBI data a tolerance of .00001 was used. When the tolerance is relaxed, the least squares adjustment will often proceed to completion without detecting a singularity which would otherwise have been flagged. This indicates that the matrix was not truly singular, and sometimes these results will be useful. However, often when the tolerance is relaxed the solution will be obviously nonsensical, so as in all tolerance testing some experience needs to be applied. In our work this testing goes beyond singularity testing and is used to detect problems in the least squares adjustment. When an apparent "singularity" is discovered, the programs apply an infinite constraint on that parameter and proceed to a solution on that basis, flagging for the analyst which parameter was indeterminate. This usually allows the analyst to discover and correct the cause of the problem.
These records hold rows of the reduced normal equation elements for this data set for which a final solution based on the final global solution is desired. These parameters have been "eliminated" from the matrix by the forward reduction process and thus do not need to be carried forward into a more global combination. The saved rows of the reduced normal equation matrix allow a back substitution, once the combined global adjustment is complete, to determine these local parameters.
There is a potentially large number of elements to be saved in this part of the file. Therefore, to ease the memory requirements when the data are read to be combined into another, even larger matrix, the elements of these rows are buffered, that is, written out as sequential records in groups whose size is defined in the header record by the variable "lmbuf". The total length, in bytes, of the information written is saved as "len7".
NOTE: In the context of program Gpscom, the embedded comments in that code, and this file, the normal equations may be considered to have three different groups of parameters. The first group may be called "nuisance" parameters: while they are needed in the adjustment, their final values are not of any real interest, so the adjusted values do not need to be computed. The solution and inverse terms for these parameters are therefore not computed, and the program Pages, which establishes the first level of normal equation files, does not include these parameters in the normal equation file. The nuisance parameters appear first in the matrix so that, after those rows are reduced, they can be dropped from the problem. Following the nuisance parameters is a group of unknowns here called "local" unknowns. These are parameters for which the solution values are needed. The rows of the normal equation matrix for these parameters are first forward reduced and are saved in that state in the type 7 records of this file. The third group of parameters is here called "global" parameters. These follow the local parameters in the normal equation matrix, and after the local parameters have been forward reduced, the rows of the normal equation matrix holding the global parameters are saved in the type 8 records of this file.
These records hold rows of the partially reduced normal equation elements for the global parameters, that is, those parameters being carried forward into the next level global adjustment. These are usually site parameters and may optionally include earth orientation parameters, orbital parameters and troposphere parameters for junction sites.
These records are buffered as discussed for the type 7 records above. The length of the information in bytes is saved in the variable "len8".