Add files using upload-large-folder tool
- samples/texts/1169714/page_1.md +29 -0
- samples/texts/1169714/page_10.md +5 -0
- samples/texts/1169714/page_11.md +13 -0
- samples/texts/1169714/page_12.md +57 -0
- samples/texts/1169714/page_2.md +96 -0
- samples/texts/1169714/page_3.md +125 -0
- samples/texts/1169714/page_4.md +48 -0
- samples/texts/1169714/page_5.md +19 -0
- samples/texts/1169714/page_6.md +5 -0
- samples/texts/1169714/page_7.md +747 -0
- samples/texts/1169714/page_8.md +13 -0
- samples/texts/1169714/page_9.md +1 -0
- samples/texts/1285480/page_1.md +17 -0
- samples/texts/1285480/page_2.md +33 -0
- samples/texts/1285480/page_3.md +82 -0
- samples/texts/1285480/page_4.md +80 -0
- samples/texts/1285480/page_5.md +53 -0
- samples/texts/1285480/page_6.md +82 -0
- samples/texts/1364076/page_1.md +34 -0
- samples/texts/1364076/page_2.md +37 -0
- samples/texts/1364076/page_3.md +17 -0
- samples/texts/1660153/page_1.md +27 -0
- samples/texts/1660153/page_10.md +33 -0
- samples/texts/1660153/page_11.md +34 -0
- samples/texts/1660153/page_12.md +21 -0
- samples/texts/1660153/page_13.md +37 -0
- samples/texts/1660153/page_14.md +35 -0
- samples/texts/1660153/page_15.md +43 -0
- samples/texts/1660153/page_16.md +64 -0
- samples/texts/1660153/page_17.md +87 -0
- samples/texts/1660153/page_18.md +27 -0
- samples/texts/1660153/page_19.md +39 -0
- samples/texts/1660153/page_2.md +46 -0
- samples/texts/1660153/page_20.md +53 -0
- samples/texts/1660153/page_21.md +33 -0
- samples/texts/1660153/page_22.md +57 -0
- samples/texts/1660153/page_23.md +29 -0
- samples/texts/1660153/page_3.md +112 -0
- samples/texts/1660153/page_4.md +17 -0
- samples/texts/1660153/page_5.md +19 -0
- samples/texts/1660153/page_6.md +77 -0
- samples/texts/1660153/page_7.md +35 -0
- samples/texts/1660153/page_8.md +19 -0
- samples/texts/1660153/page_9.md +73 -0
- samples/texts/1749790/page_1.md +25 -0
- samples/texts/1749790/page_10.md +45 -0
- samples/texts/1749790/page_11.md +41 -0
- samples/texts/1749790/page_12.md +39 -0
- samples/texts/1749790/page_13.md +47 -0
- samples/texts/1749790/page_14.md +39 -0
samples/texts/1169714/page_1.md
ADDED
@@ -0,0 +1,29 @@
# Spatial and Temporal Variability of Soil Moisture

Vanita Pandey¹, Pankaj K. Pandey²*

¹Department of Soil and Water Engineering, CAEPHT, Central Agricultural University, Gangtok, India

²Department of Agricultural Engineering, North Eastern Regional Institute of Science & Technology, Nirjuli, Itanagar, Arunachal Pradesh, India

E-mail: pandeypk@gmail.com

Received February 11, 2010; revised March 15, 2010; accepted April 20, 2010

## Abstract

Characterizing the temporal and spatial variability of soil moisture is highly relevant for understanding hydrological processes, modelling them better, and applying them to conservation planning. Considerable variability in space and time, coupled with inadequate and uneven distribution of irrigation water, results in uneven yield across an area. Spatial and temporal variability strongly affect the heterogeneity of soil water, solute transport and the leaching of chemicals to groundwater. Spatial variability of soil moisture helps in mapping soil properties across the field and in assessing variability in irrigation requirement, while temporal variability of water content and infiltration helps in irrigation management, and the temporal correlation structure helps in forecasting the next irrigation. Kriging is a geostatistical interpolation technique that takes into account the spatial autocorrelation of a variable to produce the best linear unbiased estimate. It has been used here for data interpolation for the C.T.A.E., Udaipur, India. The interpolated data were plotted against distance to show the variability between the kriged and observed values; the range of the kriged soil moisture values was smaller than that of the observed ones. The goal of this study was to map soil moisture layer-wise up to 60 cm depth, which is useful for irrigation planning.

**Keywords:** Soil Moisture, Spatial & Temporal Variability, Kriging

## 1. Introduction

Spatially and temporally varying soil moisture is increasingly used as input to hydrological and meteorological models. Knowledge of the spatial and temporal variability of field soil helps in characterizing the soil. The use of mathematical models to simulate water and solute movement in field soil has accelerated the need to understand the variability of soil properties, which affects the interpretation of model output variability.

Soil moisture varies both vertically and laterally due to evapotranspiration and precipitation, influenced by topography, soil texture, and vegetation. While small-scale spatial variations are influenced by soil texture, larger scales are influenced by precipitation and evaporation [1]. Field soils encompass considerable inherent variability in their texture, structure and physical and chemical properties due to variability in parent material and other soil-forming factors. Variability in the water-holding capacity of the soil can adversely affect yield and complicates irrigation scheduling. Thus,

variability has been found to have a significant effect on moisture movement and the parameters associated with this process. The characterization of soil moisture variability is essential for understanding and predicting land surface processes, which vary with topography, soil texture, and vegetation at different spatial and temporal scales [2]. The spatial characteristics are thus a key input to the background statistical error models as well as the dynamic propagation of modeled state uncertainty in data assimilation systems [3-7].

Temporal variability of soil water properties is induced by tillage, cropping and other management practices. Surface sealing and compaction of soil are predominant phenomena that affect water flow.

Geostatistical studies of soil moisture variability [8-13] have been carried out at the scale of small catchments (1-5 km²). S. G. Reynolds [14] found a close relation between area size and soil moisture variability, with an R² of 0.7 considered the best. To reduce uncertainty, O. R. Dani and R. J. Hanks [15] used state space models for soil water
samples/texts/1169714/page_10.md
ADDED
@@ -0,0 +1,5 @@
Figure 8. (a) and (b) measured and kriged value of soil moisture at 30 cm depth for dry condition;
(c) and (d) measured and kriged value of soil moisture at 30 cm depth for wet condition.

Figure 9. (a) and (b) measured and kriged value of soil moisture at 45 cm depth for dry condition;
(c) and (d) measured and kriged value of soil moisture at 45 cm depth for wet condition.
samples/texts/1169714/page_11.md
ADDED
@@ -0,0 +1,13 @@
Figure 10. (a) and (b) measured and kriged value of soil moisture at 60 cm depth for dry condition; (c) and (d) measured and kriged value of soil moisture at 60 cm depth for wet condition.

at 30 cm depth, as in **Figure 8(a)**, and comparatively less variability is observed at 7.5 cm depth, as in **Figure 6(a)**, due to lack of moisture. After kriging, the variability was reduced, as shown in the kriged map. Also, the range of the kriged values is less than that of the observed ones (**Table 3**).

For the second observation, the largest variability was observed at 15 cm depth, as in **Figure 7(c)**; it was reduced drastically in the kriged estimate, whereas the least variability was observed at 7.5 cm depth. The range of the second observation indicates that the water is not uniformly distributed over the whole field. For the lower depths, the variability is almost the same for both sets of observations, for measured as well as kriged estimates, because some time is required for water to percolate down, and during that time considerable variability is observed at the lower depths.

From the maps, it was observed that the spatial variability is considerably reduced after kriging at each depth, especially for the second observation.

Contour maps of soil moisture were plotted for each depth. Careful comparison of the contour maps of soil moisture showed a considerable reduction in spatial variability for the kriged estimates. On the basis of the above discussion, it can be said that the kriged values are more consistent and a true representation of the soil moisture values. Therefore, contour maps developed from kriged estimates would be more precise than those developed from measured values.

## 4. Conclusions

Based on the findings it can be said that the kriged val-
samples/texts/1169714/page_12.md
ADDED
@@ -0,0 +1,57 @@
ues are more consistent and a true representation of soil moisture values. Hence, on the basis of past soil moisture values, the kriged value of soil moisture at a particular time and space may be estimated. Statistical parameters reflect that the variability of soil moisture reduces significantly after kriging. These estimated values help in proper irrigation scheduling, along with necessary information on the crops to be grown and their expected yield.

## 5. References

[1] A. Oldak, T. Jackson and Y. Pachepsky, “Using GIS in Passive Microwave Soil Mapping and Geostatistical Analysis,” *International Journal of Geographical Information Science*, Vol. 16, No. 7, 2002, pp. 681-689.

[2] A. J. Teuling and P. A. Troch, “Improved Understanding of Soil Moisture Variability Dynamics,” *Geophysical Research Letters*, Vol. 32, No. 5, 2005, p. 4.

[3] M. Durand and S. Margulis, “Effects of Uncertainty Magnitude and Accuracy on Assimilation of Multi-Scale Measurements for Snowpack Characterization,” *Journal of Geophysical Research*, Vol. 113, No. 16, 2008, p. 17.

[4] M. Zupanski, D. Zupanski, D. F. Parrish, E. Rogers and G. DiMego, “Four-Dimensional Variational Data Assimilation for the Blizzard of 2000,” *Monthly Weather Review*, Vol. 130, No. 8, 2002, pp. 1967-1988.

[5] T. Vukicevic, M. Sengupta, A. S. Jones and T. Vonder Haar, “Cloud-Resolving Satellite Data Assimilation: Information Content of IR Window Observations and Uncertainties in Estimation,” *Journal of the Atmospheric Sciences*, Vol. 63, No. 3, 2006, pp. 901-919.

[6] W. T. Crow, “Correcting Land Surface Model Predictions for the Impact of Temporally Sparse Rainfall Rate Measurements Using an Ensemble Kalman Filter and Surface Brightness Temperature Observations,” *Journal of Hydrometeorology*, Vol. 4, No. 5, 2003, pp. 960-973.

[7] M. Zupanski, S. J. Fletcher, I. M. Navon, B. Uzunoglu, R. P. Heikes, D. A. Randall, T. D. Ringler and D. N. Daescu, “Initiation of Ensemble Data Assimilation,” *Tellus A*, Vol. 58, No. 2, 2006, pp. 159-170.

[8] F. Anctil, R. Mathieu, L.-E. Parent, A. A. Viau, M. Sbih and M. Hessami, “Geostatistics of Near-Surface Moisture in Bare Cultivated Organic Soils,” *Journal of Hydrology*, Vol. 260, No. 1, 2002, pp. 30-37.

[9] A. Bardossy and W. Lehmann, “Spatial Distribution of Soil Moisture in a Small Catchment. Part 1: Geostatistical Analysis,” *Journal of Hydrology*, Vol. 206, No. 1, 1998, pp. 1-15.

[10] M. Herbst and B. Diekkruger, “Modelling the Spatial Variability of Soil Moisture in a Micro-Scale Catchment and Comparison with Field Data Using Geostatistics,” *Physics and Chemistry of the Earth, Parts A/B/C*, Vol. 28, No. 6, 2003, pp. 239-245.

[11] J. Wang, B. Fu, Y. Qiu, L. Chen and Z. Wang, “Geostatistical Analysis of Soil Moisture Variability on Da Nangou Catchment of the Loess Plateau, China,” *Environmental Geology*, Vol. 41, No. 1-2, 2001, pp. 113-120.

[12] A. W. Western, R. B. Grayson and T. R. Green, “The Tarrawarra Project: High Resolution Spatial Measurement, Modelling and Analysis of Hydrological Response,” *Hydrological Processes*, Vol. 13, No. 5, 1999, pp. 633-652.

[13] A. W. Western, G. Blöschl and R. B. Grayson, “Geostatistical Characterization of Soil Moisture Patterns in the Tarrawarra Catchment,” *Journal of Hydrology*, Vol. 217, No. 3, 1999, pp. 203-224.

[14] S. G. Reynolds, “A Note on the Relationship between the Size of Area and Soil Moisture Variability,” *Journal of Hydrology*, Vol. 22, 1974, pp. 71-76.

[15] O. R. Dani and R. J. Hanks, “Spatial and Temporal Soil Moisture Estimation Considering Soil Variability and Evapotranspiration Uncertainty,” *Water Resources Research*, Vol. 28, No. 3, 1992, pp. 803-814.

[16] B. P. Mohanty, T. H. Skaggs and J. S. Famiglietti, “Analysis and Mapping of Field Scale Soil Moisture Variability Using High Resolution Ground Based Data during the Southern Great Plains,” *Water Resources Research*, Vol. 36, No. 4, 2000, pp. 1023-1031.

[17] J. A. Huisman, C. Sperl, W. Bouten and J. M. Verstraten, “Soil Water Content Measurement at Different Scales: Accuracy of Time Domain Reflectometry and Ground Penetrating Radar,” *Journal of Hydrology*, Vol. 245, No. 1, 2001, pp. 48-58.

[18] G. Matheron, “La théorie des variables régionalisées et ses applications,” Masson et Cie, Paris, 1965.

[19] E. H. Isaaks and R. M. Srivastava, “Applied Geostatistics,” Oxford University Press, New York, 1989.

[20] R. E. Rossi, J. L. Dungan and L. R. Beck, “Kriging in the Shadows: Geostatistical Interpolation for Remote Sensing,” *Remote Sensing of Environment*, Vol. 49, No. 1, 1994, pp. 32-40.

[21] A. U. Bhatti, D. J. Mulla and B. E. Frazier, “Estimation of Soil Properties and Wheat Yields on Complex Eroded Hills Using Geostatistics and Thematic Mapper Images,” *Remote Sensing of Environment*, Vol. 37, No. 3, 1991, pp. 181-191.

[22] R. Webster and M. A. Oliver, “Geostatistics for Environmental Scientists,” John Wiley & Sons Ltd, Chichester, 2001.
samples/texts/1169714/page_2.md
ADDED
@@ -0,0 +1,96 @@
balance and evapotranspiration in application to spatial and temporal estimation methods. B. P. Mohanty et al. [16] analyzed and mapped field-scale soil moisture variability using high-resolution ground-based data. J. A. Huisman et al. [17] studied ground penetrating radar for mapping soil water content at intermediate scales between point and remote sensing measurements.

A variogram, a central concept in geostatistics, is used to analyze the structure of spatial variation of soil moisture. The experimental variogram characterizes the spatial variability in the measured data and is used in kriging to determine soil moisture values at unsampled locations. This section first describes the effect of the selection of the active separation distance on best model fitting. The variogram structure consists of the nugget (the variance at zero lag distance), the sill (the variance to which the variogram asymptotically rises), and the decorrelation length (range of spatial dependence). The decorrelation length varies with the minimum distance between sampling locations and the size of the sampled area [13]. In this study, soil moisture data on an average grid resolution of 40 m × 40 m were analysed layer-wise up to 60 cm depth. One of the major issues in variographic analysis is the selection of the total lag distance over which the variogram is fitted to the experimental data. Beyond about half of the total separation distance, the variogram starts to decompose owing to the reduced availability of pairs. Thus, to obtain a robust estimate of the variogram, we ignored pairs at larger separation distances, which usually have smaller variance. The separation distance was selected on the criterion that 95% of the pairs should be used for variogram model fitting. Kriging is an interpolation technique based on the theory of regionalized variables developed by G. Matheron [18]. Kriging offers a wide and flexible variety of tools that provide estimates for unsampled locations using a weighted average of neighboring field values falling within a certain distance, called the range of influence. Kriging requires a variogram model to compute variable values for any possible sampling interval. The variogram used in conjunction with kriging allows us to estimate the accuracy with which a value at an unsampled location can be predicted, given the sample values at other locations [19-21]. Kriging provides optimal interpolation of soil moisture at grid points in a spatial domain based on the autocorrelation in the variograms. The theoretical variogram model (Gaussian, spherical, exponential, or linear) that best fitted the experimental variogram was selected for soil moisture mapping using the block kriging technique [22].

We are not aware of any documented study on layer-wise soil moisture mapping for the study area. The study of the spatial and temporal distribution of soil moisture helps in mapping soil properties across the field and the variability in irrigation requirement, which is helpful in real-time management at field scale.

## 2. Material and Methods

## 2.1. Measurement of Soil Moisture

The instructional farm of CTAE, Udaipur, with an area of 2.16 hectares, was selected for soil sampling. The area was surrounded by vegetation and the field was bare at the time of observation. A grid of size 40 m × 40 m was laid out over the area, and soil samples were taken at each grid point at depths of 7.5 cm, 15 cm, 30 cm, 45 cm and 60 cm. These soil moisture data were then used for analysis of spatial and temporal variability. One set of soil moisture readings was recorded prior to irrigation and the other after irrigation.

## 2.2. Autocovariance and Autocorrelation

Representation of the variability of soil properties by a frequency distribution does not assume that the values are random and independent. Physically, it is expected that the values of any property at two neighboring points will be closer to each other than those at two distant points.

Limiting consideration of the spatial relationship to the second order, i.e., the second moment, the relationship expressed by the autocovariance as a function of the separation distance, C(h), can be presented mathematically as

$$
\begin{align*}
C(h) &= E\{\left[Z(X) - \mu\right]\left[Z(X+h) - \mu\right]\} \\
&= E\{Z(X)\,Z(X+h)\} - \mu^2
\end{align*}
\tag{1}
$$

The corresponding experimental semivariance computed from the data is

$$ \gamma(h) = \frac{1}{2N(h)} \sum_{i=1}^{N(h)} \left[ z(x_i) - z(x_i+h) \right]^2 \quad (2) $$

where $\gamma(h)$ = semivariance for the distance interval class $h$, $z(x_i)$ = measured sample value at point $x_i$, $z(x_i+h)$ = measured sample value at point $x_i + h$, and $N(h)$ = total number of sample pairs for the separation interval $h$.

A least-squares best-fit criterion is used to fit a model to the experimental semivariance data, from which the nugget ($C_0$), sill ($C_0 + C$), and decorrelation length or range of spatial dependence ($A_0$) are obtained. From Equations (1) and (2), it is clear that $C(0)$ is the variance $\sigma^2$, or its sample estimate, of the variable. The ratio $C(h)/C(0)$ is the autocorrelation, $\rho(h)$, having values between $+1$ and $-1$. It is also apparent that for $C(h)$ and $\rho(h)$ to exist, the mean and
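The semivariance estimator of Equation (2) can be sketched numerically by binning all point pairs by separation distance and halving the mean squared difference in each bin. The code below is a minimal illustration, not the authors' software; the grid coordinates, moisture values and lag tolerance are hypothetical.

```python
import numpy as np

def experimental_semivariogram(coords, values, lags, tol):
    """gamma(h) = (1 / 2N(h)) * sum over pairs [z(x_i) - z(x_i + h)]^2,
    estimated by grouping point pairs into distance bins around each lag."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    n = len(values)
    # pairwise separation distances and squared value differences
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(n, k=1)          # count each pair once
    d, sq = d[iu], sq[iu]
    gamma = []
    for h in lags:
        mask = np.abs(d - h) <= tol       # pairs whose distance falls in this bin
        gamma.append(0.5 * sq[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

# toy example: a 4 x 4 grid at 40 m spacing with hypothetical moisture values (%)
xy = [(i * 40.0, j * 40.0) for i in range(4) for j in range(4)]
z = [8.1, 8.3, 8.0, 7.9, 8.4, 8.6, 8.2, 8.1,
     8.9, 9.1, 8.8, 8.7, 9.3, 9.4, 9.0, 9.2]
g = experimental_semivariogram(xy, z, lags=[40.0, 80.0, 120.0], tol=20.0)
```

In practice the fitted model parameters (nugget, sill, range) would then be estimated by least squares against these binned values, as the text describes.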
@@ -0,0 +1,125 @@
variance of the population must be finite and constant in the area of consideration.

## 2.3. Semivariance and Semivariogram

The quantification of spatial dependence is called semivariance. The intrinsic hypothesis states that, for any two locations separated by a lag distance h, the variance of the differences of the measured property is finite and independent of the (x, y) position; the variance depends only on the lag distance h:

$$
\mathrm{Var}[z(x) - z(x+h)] = 2\gamma(h) \quad (3)
$$

Therefore, the model of soil variance is

$$
Z(x) = \mu + \varepsilon(x) \tag{4}
$$

where $Z(x)$ = value of the variable at $x$, $\mu$ = mean, and $\varepsilon(x)$ = residual value.

The semivariance $\gamma(h)$ is a function of $h$; this equation is similar to Equation (1). $\varepsilon(x)$ is a spatially dependent random component with zero mean and variance defined by $\gamma(h)$. The semivariance may be estimated as a function of h by Equation (2). The semivariogram is a graphical model that indicates the spatial relationship between measured values. The common characteristic of semivariograms is that they increase from some minimum value, called the nugget, at zero separation distance to some finite maximum value as the separation distance increases. The maximum value of a variogram is typically an estimate of the variance. The distance at which the variogram first reaches the sill is termed the range. The range is an estimate of the distance over which the measurements are spatially correlated and may reflect the physical extent of similar soil bodies.

## 2.4. Selection of Model

Before estimation of soil moisture, a mathematical model has to be fitted to the experimental semivariogram. Selection of the model is based on its sill, range and nugget. Of the various models considered for the observations of the present study, i.e., the random, spherical, linear, logarithmic and parabolic models, the spherical model was found to be best suited for the purpose.

The spherical model is probably the most commonly used model. It has a simple polynomial expression and its shape matches the observed data well. The model grows almost linearly up to a certain distance before stabilizing. The tangent at the origin has slope 3C/2a and intersects the sill at a point with abscissa (2/3)a, which can be useful while fitting models. The model is characterized by two parameters, C and a, and is mathematically represented as:

$$
\begin{align*}
\gamma(h) &= C \left( \frac{3h}{2a} - \frac{h^3}{2a^3} \right) && \text{for } h < a \\
\gamma(h) &= C && \text{for } h \ge a
\end{align*}
$$

After selecting the model, the kriging technique was used for interpolation of values; in the present study, block kriging is used for estimating the kriged values. A grid system was developed to create a common reference point for all information.

Kriging is the application of geostatistical techniques for interpolation. The estimation fulfills the following conditions (Cuenca and Amegee, 1987).

1) Linearity: the kriging estimate is formed from a linear combination of measured data at surrounding points. It can be expressed as:

$$
K(X_p) = \lambda_1 K(X_1) + \lambda_2 K(X_2) + \lambda_3 K(X_3) + \dots + \lambda_n K(X_n)
$$

where $K(X_p)$ = value of the parameter at $X_p$, and $\lambda_i$ = weight assigned to each measured value used in the estimation.

2) Unbiasedness: the condition of unbiasedness is that the mean of the estimates should equal the mean of the measured values, that is:

$$
E[K'(X)] = E[K(X)] = m \\
\text{But } E[K'(X)] = m \sum_{i=1}^{n} \lambda_i
$$

where $m$ = mean. The above equations give $\sum \lambda_i = 1$, that is, the sum of the individual weights $\lambda_i$ must equal unity.

3) Best criterion: as per the third constraint, the variance should be minimum, i.e., the error between the kriging estimate and the true value is minimized. To fulfill this condition, the derivatives of the variance with respect to the weights $\lambda_i$ are set to zero. As there are n weights, this procedure produces n equations in the n unknown weights. Since there is one more equation, i.e.,

$$
\sum_{i=1}^{n} \lambda_i = 1
$$

there are n + 1 equations with n unknowns. To resolve the situation, one new unknown, a Lagrangian multiplier, is added; for convenience the Lagrangian multiplier is denoted μ.

Therefore, the kriging system of equations in terms of the semivariance function appears as:

$$
\begin{align*}
\lambda_1 \gamma(h_{11}) + \lambda_2 \gamma(h_{12}) + \lambda_3 \gamma(h_{13}) + \dots + \lambda_n \gamma(h_{1n}) + \mu &= \gamma(h_{1p}) \\
\lambda_1 \gamma(h_{21}) + \lambda_2 \gamma(h_{22}) + \lambda_3 \gamma(h_{23}) + \dots + \lambda_n \gamma(h_{2n}) + \mu &= \gamma(h_{2p}) \\
& \vdots \\
\lambda_1 \gamma(h_{n1}) + \lambda_2 \gamma(h_{n2}) + \lambda_3 \gamma(h_{n3}) + \cdots + \lambda_n \gamma(h_{nn}) + \mu &= \gamma(h_{np})
\end{align*}
$$
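In matrix form, this is a system of n + 1 linear equations in the n weights plus μ, and it can be solved directly. The sketch below is an illustration of ordinary kriging with a spherical semivariogram, not the study's actual computation; the variogram parameters, sample coordinates and moisture values are all hypothetical.

```python
import numpy as np

def spherical_gamma(h, c=1.0, a=100.0):
    """Spherical semivariogram: gamma(h) = C(3h/2a - h^3/2a^3) for h < a, else C."""
    h = np.asarray(h, dtype=float)
    return np.where(h < a, c * (1.5 * h / a - 0.5 * (h / a) ** 3), c)

def ordinary_kriging_weights(coords, target, c=1.0, a=100.0):
    """Solve [Gamma 1; 1^T 0] [lambda; mu] = [gamma_p; 1] for the weights."""
    coords = np.asarray(coords, dtype=float)
    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = spherical_gamma(d, c, a)      # gamma(h_ij) block
    A[n, n] = 0.0                             # zero in the corner
    b = np.ones(n + 1)
    b[:n] = spherical_gamma(
        np.linalg.norm(coords - np.asarray(target, dtype=float), axis=1), c, a)
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n]                    # weights lambda_i, multiplier mu

# estimate at the centre of a square of four samples (hypothetical data)
pts = [(0.0, 0.0), (40.0, 0.0), (0.0, 40.0), (40.0, 40.0)]
w, mu = ordinary_kriging_weights(pts, target=(20.0, 20.0))
z_hat = float(np.dot(w, [8.2, 8.6, 8.4, 9.0]))   # K(X_p) = sum lambda_i K(X_i)
```

By symmetry the four weights come out equal, and the constraint row guarantees they sum to one, matching the unbiasedness condition above.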
samples/texts/1169714/page_4.md
ADDED
@@ -0,0 +1,48 @@
By solving the above system, all the assigned weights are determined, and the value of soil moisture is estimated at any point p.

## 2.5. Cross Validation

The estimated kriged values are subjected to cross validation. At each point, the reduced estimation error is obtained by dividing the estimation error by the standard deviation of estimation. The goodness of estimation is expressed by two conditions on the reduced estimation errors: 1) a minimum, nearly zero mean, and 2) a variance near unity.
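Assuming cross-validation has already produced, at every sampled point, an estimate and its kriging standard deviation, the two goodness conditions can be checked as sketched below (all arrays are hypothetical, for illustration only).

```python
import numpy as np

def reduced_errors(observed, estimated, krig_std):
    """Reduced estimation error at each point: (estimate - observation) / kriging std.
    A well-calibrated model gives mean near 0 and variance near 1."""
    observed = np.asarray(observed, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    krig_std = np.asarray(krig_std, dtype=float)
    r = (estimated - observed) / krig_std
    return r, r.mean(), r.var()

# hypothetical cross-validation output for five sample points
obs = [8.2, 9.1, 7.8, 8.9, 8.4]
est = [8.4, 8.9, 8.0, 8.7, 8.5]
sd  = [0.2, 0.2, 0.2, 0.2, 0.1]
r, m, v = reduced_errors(obs, est, sd)
```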
|
| 6 |
+
|
| 7 |
+
# 3. Results and Discussion
|
| 8 |
+
|
| 9 |
+
## 3.1. Statistical Analysis of Observed Data
|
| 10 |
+
|
| 11 |
+
This was conducted in two stages
|
| 12 |
+
|
| 13 |
+
1) The traditional summary of statistics, i.e., mean, standard deviation, coefficient of variation, skewness and kurtosis were estimated
|
| 14 |
+
|
| 15 |
+
2) Semi variance was defined and difference in nugget, sill and range were examined for each depth.
|
| 16 |
+
|
| 17 |
+
## 3.2. Summary Statistics of the Soil Moisture Data
|
| 18 |
+
|
| 19 |
+
The different statistical properties for the soil moisture at different depths and for both observations were calculated. It was observed that before irrigation the soil was very dry with mean moisture content of 2.89% to 8.68%. After irrigation, this has been a drastic increase from 10.16% to 11.88%. At lower depths, the variation in
|
| 20 |
+
|
| 21 |
+
mean is almost constant before and after irrigation.
|
| 22 |
+
|
| 23 |
+
## 3.3. Estimation of Geostatistical Parameters
|
| 24 |
+
|
| 25 |
+
The semi variance was computed using Equation (3) for different lag distance. Using the data (Table 1) experimental semivariograms for different lags was plotted. A model was fitted to the plotted experimental semivariograms. The parameters of the fitted model are given in Table 2. The fitted spherical model can be expressed as:
$$
\begin{align}
\gamma(h) &= C_0 + C_1 \left[ \frac{3}{2} \left( \frac{h}{r} \right) - \frac{1}{2} \left( \frac{h}{r} \right)^3 \right] && \text{for } 0 < h \le r \tag{5} \\
\gamma(h) &= C_0 + C_1 && \text{for } h > r
\end{align}
$$
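As an illustrative sketch (not part of the original analysis), Equation (5) can be coded directly; the parameter names `c0` (nugget), `c1` (partial sill, so that `c0 + c1` equals the sill reported in Table 2) and `r` (range) follow the notation above.

```python
def spherical_model(h, c0, c1, r):
    """Spherical semivariogram of Equation (5): nugget c0, partial sill c1, range r."""
    if h <= 0:
        return 0.0  # gamma(0) = 0 by convention; the nugget appears as h -> 0+
    if h <= r:
        # Rising branch: c0 + c1 * [ (3/2)(h/r) - (1/2)(h/r)^3 ]
        return c0 + c1 * (1.5 * (h / r) - 0.5 * (h / r) ** 3)
    return c0 + c1  # the sill is reached at and beyond the range
```

Note that the two branches agree at `h = r`, where the bracketed term equals one.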
The experimental semivariograms and fitted models for each layer up to 60 cm depth are shown in Figures 1(a) to 5(b). The nugget indicates the estimation error of the parameter at the smallest sampling interval. If the nugget, expressed as a percentage of the sill, is less than 25 percent, the variable may be considered strongly spatially dependent; if it is between 25 and 75 percent, it is considered moderately dependent; and if it exceeds 75 percent, the variable is said to have poor spatial dependence.
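The classification above can be expressed as a small helper; this is an illustrative sketch using the customary 25 and 75 percent thresholds, with nugget and sill values such as those in Table 2.

```python
def spatial_dependence(nugget, sill):
    """Classify spatial dependence from the nugget expressed as a percentage of the sill."""
    pct = 100.0 * nugget / sill
    if pct < 25:
        return "strong"    # nugget is a small fraction of the sill
    if pct <= 75:
        return "moderate"
    return "weak"          # nugget dominates: poor spatial dependence
```

For example, the 7.5 cm first-observation values in Table 2 (nugget 0.45, sill 3.05, i.e. 14.75%) classify as strongly dependent.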
## 3.4. Contour Maps of Soil Moisture Variability
Contour maps were prepared for all depths and for both the dry and wet conditions using Surfer. Surfer uses the inverse distance technique to interpolate irregularly spaced measured data onto a specified grid.
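The inverse distance technique can be sketched as below for a single grid node; this is an illustrative sketch of the general method, not Surfer's implementation, and the power parameter (here 2) is an assumption.

```python
def idw(x, y, points, values, power=2):
    """Inverse-distance-weighted estimate at grid node (x, y) from scattered data."""
    num, den = 0.0, 0.0
    for (px, py), v in zip(points, values):
        d2 = (x - px) ** 2 + (y - py) ** 2
        if d2 == 0:
            return v  # grid node coincides with a measurement point
        w = d2 ** (-power / 2)  # weight = 1 / distance**power
        num += w * v
        den += w
    return num / den  # weighted average of the measured values
```

Evaluating `idw` at every node of a regular grid produces the field that is then contoured.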
In the first set of observations (dry condition), the maximum variability at 7.5 cm depth was observed at the corners of the field, mainly due to vegetation, which reduces evaporation. In the centre of
**Table 1.** Lag, average distance and semivariance for soil moisture at different depths (cm).
<table><thead><tr><th rowspan="3">Lag</th><th rowspan="3">Average Distance (m)</th><th colspan="10">Semivariance</th></tr><tr><th colspan="5">First Observation</th><th colspan="5">Second Observation</th></tr><tr><th>7.5</th><th>15</th><th>30</th><th>45</th><th>60</th><th>7.5</th><th>15</th><th>30</th><th>45</th><th>60</th></tr></thead><tbody><tr><td>1</td><td>40.00</td><td>2.8</td><td>2.45</td><td>3.59</td><td>8.46</td><td>15.87</td><td>16.73</td><td>3.62</td><td>5.79</td><td>9.74</td><td>8.04</td></tr><tr><td>2</td><td>56.57</td><td>2.04</td><td>2.34</td><td>2.72</td><td>6.43</td><td>11.64</td><td>20.32</td><td>3.54</td><td>6.03</td><td>8.05</td><td>8.58</td></tr><tr><td>3</td><td>80.00</td><td>3.04</td><td>2.33</td><td>2.90</td><td>5.32</td><td>11.35</td><td>17.57</td><td>4.37</td><td>6.08</td><td>10.07</td><td>7.85</td></tr><tr><td>4</td><td>89.44</td><td>2.41</td><td>1.85</td><td>2.17</td><td>3.81</td><td>7.68</td><td>14.42</td><td>3.95</td><td>6.85</td><td>8.32</td><td>9.86</td></tr><tr><td>5</td><td>113.14</td><td>5.86</td><td>5.24</td><td>7.21</td><td>15.70</td><td>30.01</td><td>4.92</td><td>4.12</td><td>5.46</td><td>11.82</td><td>9.46</td></tr><tr><td>6</td><td>120.00</td><td>4.47</td><td>5.17</td><td>6.09</td><td>10.84</td><td>17.54</td><td>15.24</td><td>6.11</td><td>11.69</td><td>6.94</td><td>9.46</td></tr><tr><td>7</td><td>126.49</td><td>3.63</td><td>2.46</td><td>3.37</td><td>5.95</td><td>9.68</td><td>26.00</td><td>7.05</td><td>14.09</td><td>7.47</td><td>9.05</td></tr><tr><td>8</td><td>144.22</td><td>1.23</td><td>5.24</td><td>3.51</td><td>2.50</td><td>3.89</td><td>7.37</td><td>6.35</td><td>9.18</td><td>8.51</td><td>3.777</td></tr><tr><td>9</td><td>162.81</td><td>4.82</td><td>4.39</td><td>3.36</td><td>2.42</td><td>7.41</td><td>19.93</td><td>11.31</td><td>14.74</td><td>7.47</td><td>7.33</td></tr><tr><td>10</td><td>178.89</td><td>7.14</td><td>10.04</td><td>8.22</td><td>11.57</td><td>14.69</td><td>8.35</td><td>9.24</td><td>13.37</td><td>44.14</td><td>8.62</td></tr><tr><td>11</td><td>202.26</td><td>6.91</td><td>4.43</td><td>3.06</td><td>4.99</td><td>5.27</td><td>6.09</td><td>8.04</td><td>7.45</td><td>2.39</td><td>1.17</td></tr><tr><td>12</td><td>215.41</td><td>9.22</td><td>7.95</td><td>4.48</td><td>0.38</td><td>0.56</td><td>11.55</td><td>13.83</td><td>14.83</td><td>6.22</td><td>4.08</td></tr></tbody></table>
Copyright © 2010 SciRes.
IJG
samples/texts/1169714/page_5.md
ADDED
|
@@ -0,0 +1,19 @@
**Table 2.** Different parameters of the fitted model of semivariogram.
<table><thead><tr><th>S.No</th><th>Depth (cm)</th><th>Type of Model</th><th>Range</th><th>Nugget</th><th>Sill</th><th>Nugget as % of Sill</th><th>Spatial Dependence</th></tr></thead><tbody><tr><td colspan="8">First Observation</td></tr><tr><td>1</td><td>7.5</td><td>Spherical</td><td>125</td><td>0.45</td><td>3.05</td><td>14.75</td><td>Strongly dependent</td></tr><tr><td>2</td><td>15</td><td>Spherical</td><td>113.1</td><td>0.327</td><td>2.947</td><td>11.09</td><td>Strongly dependent</td></tr><tr><td>3</td><td>30</td><td>Spherical</td><td>110</td><td>0.46</td><td>3.1</td><td>14.84</td><td>Strongly dependent</td></tr><tr><td>4</td><td>45</td><td>Spherical</td><td>113.09</td><td>0.638</td><td>5.748</td><td>11.09</td><td>Strongly dependent</td></tr><tr><td>5</td><td>60</td><td>Spherical</td><td>113.09</td><td>1.1787</td><td>10.61</td><td>11.10</td><td>Strongly dependent</td></tr><tr><td colspan="8">Second Observation</td></tr><tr><td>1</td><td>7.5</td><td>Spherical</td><td>125</td><td>2.1</td><td>14</td><td>15</td><td>Strongly dependent</td></tr><tr><td>2</td><td>15</td><td>Spherical</td><td>113.09</td><td>0.548</td><td>4.94</td><td>11.09</td><td>Strongly dependent</td></tr><tr><td>3</td><td>30</td><td>Spherical</td><td>113.09</td><td>0.836</td><td>7.525</td><td>11.10</td><td>Strongly dependent</td></tr><tr><td>4</td><td>45</td><td>Spherical</td><td>113.09</td><td>0.848</td><td>7.634</td><td>11.10</td><td>Strongly dependent</td></tr><tr><td>5</td><td>60</td><td>Spherical</td><td>113.09</td><td>0.804</td><td>7.244</td><td>11.09</td><td>Strongly dependent</td></tr></tbody></table>
the field, soil moisture is almost constant (**Figure 6(a)**), because the bare field provides a free evaporation surface.
As shown in **Figure 7(a)**, the variability at 15 cm depth is larger than at 7.5 cm depth, but the same pattern is observed, i.e., maximum in the corners and minimum at the centre. The mean moisture content at 15 cm depth is higher than at 7.5 cm depth.
**Figures 8(a), 9(a) and 10(a)** show a uniform spatial variability at 30 cm, 45 cm and 60 cm depth, respectively. The mean moisture content was found to be higher at 60 cm depth than at the other depths.
In the second set of observations, it was observed that at 7.5 cm depth the moisture content at the corners of the field is almost constant, whereas in the middle it is very high, which indicates a non-uniform supply of water, as shown in **Figure 6(c)**.
At 15 cm depth, higher spatial variability is obtained than at 7.5 cm depth (**Figure 6(c)**). In the middle, the moisture content is very high, as at 7.5 cm depth, for the same reason of non-uniform water supply, as shown in **Figure 7(c)**.
At 30 cm, 45 cm and 60 cm depth the variability is almost the same (**Figures 8(c), 9(c) and 10(c)**, respectively).
From the maps, it was observed that the first set of observations showed higher spatial variability than the second set. This is because the second set of observations was taken after irrigation, when the soil profile was almost saturated with water. Some variability was still observed, which reflects the non-uniform supply of irrigation water. The maximum variability occurs down to 30 cm depth, whereas at 45 cm and 60 cm depths the variability is almost constant in both sets of observations. In both sets, the maximum spatial variability was observed at 30 cm depth (**Figures 8(a) and 8(c)**).
Figure 1. (a) Experimental semivariogram of first observation at 7.5 cm depth and fitted model; (b) Experimental semivariogram of second observation at 7.5 cm depth and fitted model.
samples/texts/1169714/page_6.md
ADDED
|
@@ -0,0 +1,5 @@
Figure 2. (a) Experimental semivariogram of first observation at 15 cm depth and fitted model; (b) Experimental semivariogram of second observation at 15 cm depth and fitted model.
Figure 3. (a) Experimental semivariogram of first observation at 30 cm depth and fitted model; (b) Experimental semivariogram of second observation at 30 cm depth and fitted model.

Figure 4. (a) Experimental semivariogram of first observation at 45 cm depth and fitted model; (b) Experimental semivariogram of second observation at 45 cm depth and fitted model.
samples/texts/1169714/page_7.md
ADDED
|
@@ -0,0 +1,747 @@
Figure 5. (a) Experimental semivariogram of first observation at 60 cm depth and fitted model; (b) Experimental semivariogram of second observation at 60 cm depth and fitted model.
**Table 3.** Statistical parameters of measured and kriged values of soil moisture.
<table><thead><tr><th>Depth</th><th>Observation</th><th>Max. (%)</th><th>Min (%)</th><th>Range (%)</th><th>Mean (%)</th><th>Var.</th><th>S.D (%)</th><th>C.V</th></tr></thead><tbody><tr><td rowspan="4">7.5</td><td colspan="8" style="text-align:center;">First observation</td></tr><tr><td>Measured</td><td>7.89</td><td>0.92</td><td>6.97</td><td>2.93</td><td>3.50</td><td>1.87</td><td>0.64</td></tr><tr><td>Kriged</td><td>5.29</td><td>1.12</td><td>4.17</td><td>2.87</td><td>1.55</td><td>1.25</td><td>0.43</td></tr><tr><td colspan="8" style="text-align:center;">Second observation</td></tr><tr><td rowspan="4">15</td><td>Measured</td><td>18.98</td><td>7.16</td><td>11.82</td><td>10.54</td><td>8.11</td><td>2.85</td><td>0.27</td></tr><tr><td>Kriged</td><td>14.55</td><td>5.89</td><td>8.66</td><td>10.48</td><td>5.90</td><td>2.43</td><td>0.23</td></tr><tr><td colspan="8" style="text-align:center;">First observation</td></tr><tr><td>Measured</td><td>7.67</td><td>1.44</td><td>6.23</td><td>4.07</td><td>3.27</td><td>1.81</td><td>0.44</td></tr><tr><td rowspan="4">30</td><td>Kriged</td><td>6.16</td><td>2.30</td><td>3.86</td><td>4.05</td><td>1.64</td><td>1.28</td><td>0.32</td></tr><tr><td colspan="8" style="text-align:center;">Second observation</td></tr><tr><td>Measured</td><td>15.12</td><td>7.26</td><td>7.86</td><td>10.16</td><td>4.90</td><td>2.21</td><td>0.22</td></tr><tr><td>Kriged</td><td>13.53</td><td>7.70</td><td>5.83</td><td>10.15</td><td>2.91</td><td>1.71</td><td>0.17</td></tr><tr><td colspan="8" style="text-align:center;">First observation</td></tr><tr><td rowspan="4">45</td><td>Measured</td><td>9.30</td><td>2.63</td><td>6.67</td><td>5.38</td><td>3.56</td><td>1.89</td><td>0.35</td></tr><tr><td>Kriged</td><td>7.68</td><td>3.33</td><td>4.35</td><td>5.33</td><td>1.91</td><td>1.38</td><td>0.26</td></tr><tr><td colspan="8" style="text-align:center;">Second observation</td></tr><tr><td>Measured</td><td>12.84</td><td>6.61</td><td>6.23</td><td>10.21</td><td>4.68</td><td>2.16</td><td>0.21</td></tr><tr><td colspan="8" style="text-align:center;">First observation</td></tr><tr><td rowspan="4">60</td><td>Measured</td><td>12.36</td><td>3.28</td><td>9.08</td><td>7.20</td><td>6.39</td><td>2.53</td><td>0.35</td></tr><tr><td>Kriged</td><td>11.29</td><td>4.37</td><td>6.92</td><td>7.22</td><td>3.60</td><td>1.90</td><td>0.26</td></tr><tr><td colspan="8" style="text-align:center;">Second observation</td></tr><tr><td>Measured</td><td>16.10</td><td>7.45</td><td>8.65</td><td>11.70</td><td>7.63</td><td>2.76</td><td>0.24</td></tr></tbody></table>
|
| 6 |
+
|
| 7 |
+
<table>
|
| 8 |
+
<thead>
|
| 9 |
+
<tr>
|
| 10 |
+
<th>
|
| 11 |
+
Depth
|
| 12 |
+
</th>
|
| 13 |
+
<th>
|
| 14 |
+
Observation
|
| 15 |
+
</th>
|
| 16 |
+
<th>
|
| 17 |
+
Max. (%)
|
| 18 |
+
</th>
|
| 19 |
+
<th>
|
| 20 |
+
Min (%)
|
| 21 |
+
</th>
|
| 22 |
+
<th>
|
| 23 |
+
Range (%)
|
| 24 |
+
</th>
|
| 25 |
+
<th>
|
| 26 |
+
Mean (%)
|
| 27 |
+
</th>
|
| 28 |
+
<th>
|
| 29 |
+
Var.
|
| 30 |
+
</th>
|
| 31 |
+
<th>
|
| 32 |
+
S.D (%)
|
| 33 |
+
</th>
|
| 34 |
+
<th>
|
| 35 |
+
C.V
|
| 36 |
+
</th>
|
| 37 |
+
</tr>
|
| 38 |
+
</thead>
|
| 39 |
+
<tbody>
|
| 40 |
+
<tr>
|
| 41 |
+
<th rowspan="4">
|
| 42 |
+
7.5
|
| 43 |
+
</th>
|
| 44 |
+
<th colspan="8">
|
| 45 |
+
First observation
|
| 46 |
+
</th>
|
| 47 |
+
</tr>
|
| 48 |
+
<tr>
|
| 49 |
+
<th>
|
| 50 |
+
Measured
|
| 51 |
+
</th>
|
| 52 |
+
<td>
|
| 53 |
+
7.89
|
| 54 |
+
</td>
|
| 55 |
+
<td>
|
| 56 |
+
0.92
|
| 57 |
+
</td>
|
| 58 |
+
<td>
|
| 59 |
+
6.97
|
| 60 |
+
</td>
|
| 61 |
+
<td>
|
| 62 |
+
2.93
|
| 63 |
+
</td>
|
| 64 |
+
<td>
|
| 65 |
+
3.50
|
| 66 |
+
</td>
|
| 67 |
+
<td>
|
| 68 |
+
1.87
|
| 69 |
+
</td>
|
| 70 |
+
<td>
|
| 71 |
+
0.64
|
| 72 |
+
</td>
|
| 73 |
+
</tr>
|
| 74 |
+
<tr>
|
| 75 |
+
<th>
|
| 76 |
+
Krigged
|
| 77 |
+
</th>
|
| 78 |
+
<td>
|
| 79 |
+
5.29
|
| 80 |
+
</td>
|
| 81 |
+
<td>
|
| 82 |
+
1.12
|
| 83 |
+
</td>
|
| 84 |
+
<td>
|
| 85 |
+
4.17
|
| 86 |
+
</td>
|
| 87 |
+
<td>
|
| 88 |
+
2.87
|
| 89 |
+
</td>
|
| 90 |
+
<td>
|
| 91 |
+
1.55
|
| 92 |
+
</td>
|
| 93 |
+
<td>
|
| 94 |
+
1.25
|
| 95 |
+
</td>
|
| 96 |
+
<td>
|
| 97 |
+
0.43
|
| 98 |
+
</td>
|
| 99 |
+
</tr>
|
| 100 |
+
<tr>
|
| 101 |
+
<th colspan="8">
|
| 102 |
+
Second observation
|
| 103 |
+
</th>
|
| 104 |
+
</tr>
|
| 105 |
+
<tr>
|
| 106 |
+
<th>
|
| 107 |
+
Measured
|
| 108 |
+
</th>
|
| 109 |
+
<td>
|
| 110 |
+
18.98
|
| 111 |
+
</td>
|
| 112 |
+
<td>
|
| 113 |
+
7.16
|
| 114 |
+
</td>
|
| 115 |
+
<td>
|
| 116 |
+
11.82
|
| 117 |
+
</td>
|
| 118 |
+
<td>
|
| 119 |
+
10.54
|
| 120 |
+
</td>
|
| 121 |
+
<td>
|
| 122 |
+
8.11
|
| 123 |
+
</td>
|
| 124 |
+
<td>
|
| 125 |
+
2.85
|
| 126 |
+
</td>
|
| 127 |
+
<td>
|
| 128 |
+
0.27
|
| 129 |
+
</td>
|
| 130 |
+
</tr>
|
| 131 |
+
<tr>
|
| 132 |
+
<th>
|
| 133 |
+
Krigged
|
| 134 |
+
</th>
|
| 135 |
+
<td>
|
| 136 |
+
14.55
|
| 137 |
+
</td>
|
| 138 |
+
<td>
|
| 139 |
+
5.89
|
| 140 |
+
</td>
|
| 141 |
+
<td>
|
| 142 |
+
8.66
|
| 143 |
+
</td>
|
| 144 |
+
<td>
|
| 145 |
+
10.48
|
| 146 |
+
</td>
|
| 147 |
+
<td>
|
| 148 |
+
5.90
|
| 149 |
+
</td>
|
| 150 |
+
<td>
|
| 151 |
+
2.43
|
| 152 |
+
</td>
|
| 153 |
+
<td>
|
| 154 |
+
0.23
|
| 155 |
+
</td>
|
| 156 |
+
</tr>
|
| 157 |
+
<tr>
|
| 158 |
+
<th rowspan="4">
|
| 159 |
+
15
|
| 160 |
+
</th>
|
| 161 |
+
<th colspan="8">
|
| 162 |
+
First observation
|
| 163 |
+
</th>
|
| 164 |
+
</tr>
|
| 165 |
+
<tr>
|
| 166 |
+
<th>
|
| 167 |
+
Measured
|
| 168 |
+
</th>
|
| 169 |
+
<td>
|
| 170 |
+
7.67
|
| 171 |
+
</td>
|
| 172 |
+
<td>
|
| 173 |
+
1.44
|
| 174 |
+
</td>
|
| 175 |
+
<th scope="col">
|
| 176 |
+
6.23
|
| 177 |
+
</th>
|
| 178 |
+
<th scope="col">
|
| 179 |
+
4.07
|
| 180 |
+
</th>
|
| 181 |
+
<th scope="col">
|
| 182 |
+
3.27
|
| 183 |
+
</th>
|
| 184 |
+
<th scope="col">
|
| 185 |
+
1.81
|
| 186 |
+
</th>
|
| 187 |
+
<th scope="col">
|
| 188 |
+
0.44
|
| 189 |
+
</th>
|
| 190 |
+
</tr>
|
| 191 |
+
<tr>
|
| 192 |
+
<th>
|
| 193 |
+
Krigged
|
| 194 |
+
</th>
|
| 195 |
+
<th>
|
| 196 |
+
6.16
|
| 197 |
+
</th>
|
| 198 |
+
<th scope="col">
|
| 199 |
+
2.30
|
| 200 |
+
</th>
|
| 201 |
+
<th scope="col">
|
| 202 |
+
3.86
|
| 203 |
+
</th>
|
| 204 |
+
<th scope="col">
|
| 205 |
+
4.05
|
| 206 |
+
</th>
|
| 207 |
+
<th scope="col">
|
| 208 |
+
1.64
|
| 209 |
+
</th>
|
| 210 |
+
<th scope="col">
|
| 211 |
+
1.28
|
| 212 |
+
</th>
|
| 213 |
+
<th scope="col">
|
| 214 |
+
0.32
|
| 215 |
+
</th>
|
| 216 |
+
</tr>
|
| 217 |
+
<tr>
|
| 218 |
+
<th colspan="8">
|
| 219 |
+
Second observation
|
| 220 |
+
</th>
|
| 221 |
+
</tr>
|
| 222 |
+
<tr>
|
| 223 |
+
<th>
|
| 224 |
+
Measured
|
| 225 |
+
</th>
|
| 226 |
+
<th scope="col">
|
| 227 |
+
15.12
|
| 228 |
+
</th>
|
| 229 |
+
<th scope="col">
|
| 230 |
+
7.26
|
| 231 |
+
</th>
|
| 232 |
+
<th scope="col">
|
| 233 |
+
7.86
|
| 234 |
+
</th>
|
| 235 |
+
<th scope="col">
|
| 236 |
+
10.16
|
| 237 |
+
</th>
|
| 238 |
+
<th scope="col">
|
| 239 |
+
4.90
|
| 240 |
+
</th>
|
| 241 |
+
<th scope="col">
|
| 242 |
+
2.21
|
| 243 |
+
</th>
|
| 244 |
+
<th scope="col">
|
| 245 |
+
0.22
|
| 246 |
+
</th>
|
| 247 |
+
</tr>
|
| 248 |
+
<tr>
|
| 249 |
+
<th>
|
| 250 |
+
Krigged
|
| 251 |
+
</th>
|
| 252 |
+
<th scope="col">
|
| 253 |
+
13.53
|
| 254 |
+
</th>
|
| 255 |
+
<th scope="col">
|
| 256 |
+
7.70
|
| 257 |
+
</th>
|
| 258 |
+
<th scope="col">
|
| 259 |
+
5.83
|
| 260 |
+
</th>
|
| 261 |
+
<th scope="col">
|
| 262 |
+
10.15
|
| 263 |
+
</th>
|
| 264 |
+
<th scope="col">
|
| 265 |
+
2.91
|
| 266 |
+
</th>
|
| 267 |
+
<th scope="col">
|
| 268 |
+
1.71
|
| 269 |
+
</th>
|
| 270 |
+
<th scope="col">
|
| 271 |
+
0.17
|
| 272 |
+
</th>
|
| 273 |
+
</tr>
|
| 274 |
+
<tr>
|
| 275 |
+
<th rowspan="4">
|
| 276 |
+
30
|
| 277 |
+
</th>
|
| 278 |
+
<th colspan="8">
|
| 279 |
+
First observation
|
| 280 |
+
</th>
|
| 281 |
+
</tr>
|
| 282 |
+
<tr>
|
| 283 |
+
<th>
|
| 284 |
+
Measured
|
| 285 |
+
</th>
|
| 286 |
+
<th scope="col">
|
| 287 |
+
9.30
|
| 288 |
+
</th>
|
| 289 |
+
<th scope="col">
|
| 290 |
+
2.63
|
| 291 |
+
</th>
|
| 292 |
+
<th scope="col">
|
| 293 |
+
6.67
|
| 294 |
+
</th>
|
| 295 |
+
<th scope="col">
|
| 296 |
+
5.38
|
| 297 |
+
</th>
|
| 298 |
+
<th scope="col">
|
| 299 |
+
3.56
|
| 300 |
+
</th>
|
| 301 |
+
<th scope="col">
|
| 302 |
+
1.89
|
| 303 |
+
</th>
|
| 304 |
+
<th scope="col">
|
| 305 |
+
0.35
|
| 306 |
+
</th>
|
| 307 |
+
</tr>
|
| 308 |
+
<tr>
|
| 309 |
+
<th>Krigged</th>
|
| 310 |
+
<th scope="col">
|
| 311 |
+
7.68
|
| 312 |
+
</th>
|
| 313 |
+
<th scope="col">
|
| 314 |
+
3.33
|
| 315 |
+
</th>
|
| 316 |
+
<th scope="col">
|
| 317 |
+
4.35
|
| 318 |
+
</th>
|
| 319 |
+
<th scope="col">
|
| 320 |
+
5.33
|
| 321 |
+
</th>
|
| 322 |
+
<th scope="col">
|
| 323 |
+
1.91
|
| 324 |
+
</th>
|
| 325 |
+
<th scope="col">
|
| 326 |
+
1.38
|
| 327 |
+
</th>
|
| 328 |
+
<th scope="col">
|
| 329 |
+
0.26
|
| 330 |
+
</th>
|
| 331 |
+
</tr>
|
| 332 |
+
<tr>
|
| 333 |
+
<th colspan="8">
|
| 334 |
+
Second observation
|
| 335 |
+
</th>
|
| 336 |
+
</tr>
|
| 337 |
+
<tr>
|
| 338 |
+
<th colspan="8">
|
| 339 |
+
First observation
|
| 340 |
+
</th>
|
| 341 |
+
</tr>
|
| 342 |
+
<tr>
|
| 343 |
+
<th rowspan="4">
|
| 344 |
+
45
|
| 345 |
+
</th>
|
| 346 |
+
<th colspan="8">
|
| 347 |
+
Second observation
|
| 348 |
+
</th>
|
| 349 |
+
</tr>
|
| 350 |
+
<tr>
|
| 351 |
+
<th colspan="8">
|
| 352 |
+
First observation
|
| 353 |
+
</th>
|
| 354 |
+
</tr>
|
| 355 |
+
<tr>
|
| 356 |
+
<th colspan="8">
|
| 357 |
+
Second observation
|
| 358 |
+
</th>
|
| 359 |
+
</tr>
|
| 360 |
+
<tr>
|
| 361 |
+
<th colspan="8">
|
| 362 |
+
First observation
|
| 363 |
+
</th>
|
| 364 |
+
</tr>
|
| 365 |
+
<tr>
|
| 366 |
+
<!-- No data points for depths other than those shown -->
|
| 367 |
+
<!-- The table ends here -->
|
| 368 |
+
<!-- No text in the image -->
|
| 369 |
+
<!-- Figure is not fully visible in the provided image -->
|
| 370 |
+
<!--Caption is provided but not clear what it represents or how to interpret it -->
|
| 371 |
+
<!--Text is in the image but not legible due to low resolution and quality -->
|
| 372 |
+
<!--No specific format or style can be identified -->
|
| 373 |
+
<!--The image appears to be a scatter plot with some trend lines -->
|
| 374 |
+
<!--The text in the image is not legible -->
|
| 375 |
+
<!--The image does not contain any recognizable patterns or structures -->
|
| 376 |
+
<!--The text in the image is not legible -->
|
| 377 |
+
<!--The image is not legible due to low resolution and quality -->
|
| 378 |
+
<!--The text in the image is not legible -->
|
| 379 |
+
<!--The image contains some random points and lines -->
|
| 380 |
+
<!--The text in the image is not legible -->
|
| 381 |
+
<!--The image does not contain any recognizable patterns or structures -->
|
| 382 |
+
<!--The text in the image is not legible -->
|
| 383 |
+
<!--The image is not legible due to low resolution and quality -->
|
| 384 |
+
<!--The text in the image is not legible -->
|
| 385 |
+
<!--The image contains some random points and lines -->
|
| 386 |
+
<!--The text in the image is not legible -->
|
| 387 |
+
<!--The image does not contain any recognizable patterns or structures -->
|
| 388 |
+
<!--The text in the image is not legible -->
|
| 389 |
+
<!--The image is not legible due to low resolution and quality -->
|
| 390 |
+
<!--The text in the image is not legible -->
|
| 391 |
+
<!--The image contains some random points and lines -->
|
| 392 |
+
<!--The text in the image is not legible -->
|
| 393 |
+
<!--The image does not contain any recognizable patterns or structures -->
|
| 394 |
+
<!--The text in the image is not legible -->
|
| 395 |
+
<!--The image is not legible due to low resolution and quality -->
|
| 396 |
+
<!--The text in the image is not legible -->
|
| 397 |
+
<!--The image contains some random points and lines -->
|
| 398 |
+
<!--The text in the image is not legible -->
|
| 399 |
+
<!--The image does not contain any recognizable patterns or structures -->
|
| 400 |
+
<!--The text in the image is not legible -->
|
| 401 |
+
<!--The image is not legible due to low resolution and quality -->
|
| 402 |
+
<!--The text in the image is not legible -->
|
| 403 |
+
<!--The image contains some random points and lines -->
|
| 404 |
+
<!--The text in the image is not legible -->
|
| 405 |
+
<!--The image does not contain any recognizable patterns or structures -->
|
| 406 |
+
<!--The text in the image is not legible -->
|
| 407 |
+
<!--The image is not legible due to low resolution and quality -->
|
| 408 |
+
<!--The text in the image is not legible -->
|
| 409 |
+
<!--The image contains some random points and lines -->
|
| 410 |
+
<!--The text in the image is not legible -->
|
| 411 |
+
<!--The image does not contain any recognizable patterns or structures -->
|
| 412 |
+
<!--The text in the image is not legible -->
|
| 413 |
+
<!--The image is not legible due to low resolution and quality -->
|
| 414 |
+
<!--The text in the image is not legible -->
|
| 415 |
+
<!--The image contains some random points and lines -->
|
| 416 |
+
<!--The text in the image is not legible -->
|
| 417 |
+
<!--The image does not contain any recognizable patterns or structures -->
|
| 418 |
+
<!--The text in the image is not legible -->
|
| 419 |
+
<!--The image is not legible due to low resolution and quality -->
|
| 420 |
+
<!--The text in the image is not legible -->
|
| 421 |
+
<!--The image contains some random points and lines -->
|
| 422 |
+
<!--The text in the image is not legible -->
|
| 423 |
+
<!--The image does not contain any recognizable patterns or structures -->
|
| 424 |
+
<!--The text in the image is not legible -->
|
| 425 |
+
<!--The image is not legible due to low resolution and quality -->
|
| 426 |
+
<!--The text in the image is not legible -->
|
| 427 |
+
<!--The image contains some random points and lines -->
|
| 428 |
+
<!--The text in the image is not legible -->
|
| 429 |
+
<!--The image does not contain any recognizable patterns or structures -->
|
| 430 |
+
<!--The text in the image is not legible -->
|
| 431 |
+
<!--The image is not legible due to low resolution and quality -->
|
| 432 |
+
<!--The text in the image is not legible -->
|
| 433 |
+
<!--The image contains some random points and lines -->
|
| 434 |
+
<!--The text in the image is not legible -->
|
| 435 |
+
<!--The image does not contain any recognizable patterns or structures -->
|
| 436 |
+
<!--The text in the image is not legible -->
|
| 437 |
+
<!--The image is not legible due to low resolution and quality -->
|
| 438 |
+
<!--The text in the image is not legible -->
|
| 439 |
+
<!--The image contains some random points and lines -->
|
| 440 |
+
<!--The text in the image is not legible -->
|
| 441 |
+
<!--The image does not contain any recognizable patterns or structures -->
|
| 442 |
+
<!--The text in the image is not legible -->
|
| 443 |
+
<!--The image is not legible due to low resolution and quality -->
|
| 444 |
+
<!--The text in the image is not legible -->
|
| 445 |
+
<!--The image contains some random points and lines -->
|
| 446 |
+
<!--The text in the image is not legible -->
|
| 447 |
+
<!--The image does not contain any recognizable patterns or structures -->
|
| 448 |
+
<!--The text in the image is not legible -->
|
| 449 |
+
<!--The image is not legible due to low resolution and quality -->
|
| 450 |
+
<!--The text in the image is not legible -->
|
| 451 |
+
<!--The image contains some random points and lines -->
|
| 452 |
+
<!--The text in the image is not legible -->
|
| 453 |
+
<!--The image does not contain any recognizable patterns or structures -->
|
| 454 |
+
<!--The text in the image is not legible -->
|
| 455 |
+
<!--The image is not legible due to low resolution and quality -->
|
| 456 |
+
<!--The text in the image is not legible -->
|
| 457 |
+
<!--The image contains some random points and lines -->
|
| 458 |
+
<!--The text in the image is not legible -->
|
| 459 |
+
<!--The image does not contain any recognizable patterns or structures -->
|
| 460 |
+
<!--The text in the image is not legible -->
|
| 461 |
+
<!--The image is not legible due to low resolution and quality -->
|
| 462 |
+
<!--The text in the image is not legible -->
|
| 463 |
+
<!--The image contains some random points and lines -->
|
| 464 |
+
<!--The text in the image is not legible -->
|
| 465 |
+
<!--The image does not contain any recognizable patterns or structures -->
|
| 466 |
+
<!--The text in the image is not legible -->
|
| 467 |
+
<!--The image is not legible due to low resolution and quality -->
|
| 468 |
+
<!--Source lines 469–500: repeated "image/text not legible" placeholders; no recoverable content-->
|
| 501 |
+
<table style="border-collapse: collapse;">
|
| 502 |
+
<thead style="border-top: 2px solid black; border-bottom: 1px solid black;">
|
| 503 |
+
<tr style="border-bottom: 1px solid black;">
|
| 504 |
+
<th style="padding: 5px; font-weight: bold; vertical-align: top;">Depth</th>
|
| 505 |
+
<th style="padding: 5px; font-weight: bold; vertical-align: top;">Observation</th>
|
| 506 |
+
<th style="padding: 5px; font-weight: bold; vertical-align: top;">Max. (%)</th>
|
| 507 |
+
<th style="padding: 5px; font-weight: bold; vertical-align: top;">Min. (%)</th>
|
| 508 |
+
<th style="padding: 5px; font-weight: bold; vertical-align: top;">Range (%)</th>
|
| 509 |
+
<th style="padding: 5px; font-weight: bold; vertical-align: top;">Mean (%)</th>
|
| 510 |
+
<th style="padding: 5px; font-weight: bold; vertical-align: top;">Var.</th>
|
| 511 |
+
<th style="padding: 5px; font-weight: bold; vertical-align: top;">S.D (%)</th>
|
| 512 |
+
<th style="padding: 5px; font-weight: bold; vertical-align: top;">C.V</th>
|
| 513 |
+
</tr>
|
| 514 |
+
</thead>
|
| 515 |
+
|
| 516 |
+
<tbody style="border-bottom: 2px solid black;">
|
| 517 |
+
<tr style="border-bottom: 1px solid black;">
|
| 518 |
+
<th rowspan="2" style="vertical-align: middle; padding: 5px; border-right: 2px solid black; padding-top: 5px; padding-bottom: 5px;">7.5</th>
|
| 519 |
+
|
| 520 |
+
<div style="text-align: center; margin-top: 5px; margin-bottom: 5px;">
First observation<br/>
<!--The fitted semivariogram expressions and plot for this row were not recoverable: the transcription degenerated into repeated, garbled formulas (axis ranges approximately -4 to 4)-->
</div>
|
| 601 |
+
|
| 602 |
+
Kriged
|
| 603 |
+
|
| 604 |
+
<!--Figure panels and row values not recoverable: the transcription degenerated into repeated text-->
</tr>
</tbody>
</table>
|
| 613 |
+
|
| 614 |
+
<table style="border-collapse: collapse;">
|
| 615 |
+
<thead style="border-top: 2px solid black; border-bottom: 1px solid black;">
|
| 616 |
+
<tr style="border-bottom: 1px solid black;">
|
| 617 |
+
<th style="padding: 5px; font-weight: bold; vertical-align: top;">Depth</th>
|
| 618 |
+
|
| 619 |
+
<th style="padding: 5px; font-weight: bold; vertical-align: top;">Observation</th>
|
| 620 |
+
|
| 621 |
+
<th style="padding: 5px; font-weight: bold; vertical-align: top;">Max.(%)</th>
|
| 622 |
+
|
| 623 |
+
<th style="padding: 5px; font-weight: bold; vertical-align: top;">Min.(%)</th>
|
| 624 |
+
|
| 625 |
+
<th style="padding: 5px; font-weight: bold; vertical-align: top;">Range.(%)</th>
|
| 626 |
+
|
| 627 |
+
<th style="padding: 5px; font-weight: bold; vertical-align: top;">Mean.(%)</th>
|
| 628 |
+
|
| 629 |
+
<th style="padding: 5px; font-weight: bold; vertical-align: top;">Var.</th>
|
| 630 |
+
|
| 631 |
+
<th style="padding: 5px; font-weight: bold; vertical-align: top;">S.D.(%)</th>
|
| 632 |
+
|
| 633 |
+
<th style="padding: 5px; font-weight: bold; vertical-align: top;">C.V.</th>
|
| 634 |
+
|
| 635 |
+
</tr>
|
| 636 |
+
|
| 637 |
+
</thead>
|
| 638 |
+
|
| 639 |
+
<tbody style="border-bottom: 2px solid black;">
|
| 640 |
+
<tr style="border-bottom: 1px solid black;">
|
| 641 |
+
<td colspan="9"><!--Table body and accompanying figure description not recoverable: the transcription degenerated into repeated text--></td>
</tr>
</tbody>
</table>
|
samples/texts/1169714/page_8.md
ADDED
|
@@ -0,0 +1,13 @@
|
| 1 |
+
## 3.5. Kriging of Soil Moisture
|
| 2 |
+
|
| 3 |
+
To estimate the soil moisture value at an observation point, a search radius was determined based on the spatial range of the fitted semivariogram model. The search radius defines the neighborhood: only measurements lying within the range are considered when estimating the soil moisture at that point. The range for each depth is shown in Table 2.
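The neighbor-selection rule implied by the search radius can be sketched as follows. This is a minimal illustration, not the paper's Datamine workflow: the helper name, the sample coordinates, and the 40 m radius are all invented for the example (in the paper the radius comes from the semivariogram range).

```python
import math

def neighbors_within_range(target, points, search_radius):
    """Indices and distances of the sample points lying inside the
    search radius around the target location (hypothetical helper)."""
    tx, ty = target
    return [(i, math.hypot(x - tx, y - ty))
            for i, (x, y) in enumerate(points)
            if math.hypot(x - tx, y - ty) <= search_radius]

# Toy sample coordinates (metres) and an illustrative 40 m radius.
samples = [(0.0, 0.0), (30.0, 0.0), (0.0, 50.0), (60.0, 60.0)]
print(neighbors_within_range((10.0, 10.0), samples, 40.0))
```

Only the first two samples fall within the radius here; the others are excluded from the estimate at that point.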
|
| 4 |
+
|
| 5 |
+
Using the sill, range and nugget shown in Table 1, the kriged values of soil moisture were determined at each depth with the help of Datamine software. Statistical parameters were calculated for the measured and kriged soil moisture values and are presented in Table 3. The means of the measured and kriged values are almost the same. On analyzing the statistical parameters, i.e., variance, standard deviation and coefficient of variation, it was found that they have consistently been on the lower side
|
| 6 |
+
|
| 7 |
+
for the kriged estimates as compared to those obtained from the measured values. This trend indicates lesser variability in the kriged estimates of soil moisture, and hence greater consistency and reliability. The lower coefficient of variation of the kriged values likewise indicates that the kriged estimates are consistent.
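The comparison of spread statistics described here can be reproduced in miniature. The numbers below are made-up illustrations, not the paper's Table 3 data; they only demonstrate that a smoother (kriged-like) series yields lower variance, standard deviation and coefficient of variation.

```python
import statistics

def summary(values):
    """Spread statistics of the kind compared in Table 3."""
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)  # population standard deviation
    return {
        "max": max(values), "min": min(values),
        "range": max(values) - min(values),
        "mean": mean, "var": sd ** 2, "sd": sd,
        "cv": sd / mean,  # coefficient of variation
    }

# Illustrative numbers only (not the paper's data): kriged estimates are
# smoother, so their spread statistics come out lower than the measured ones.
measured = [18.0, 22.0, 25.0, 15.0, 20.0]
kriged = [19.0, 21.0, 23.0, 17.0, 20.0]
assert summary(kriged)["cv"] < summary(measured)["cv"]
```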
|
| 8 |
+
|
| 9 |
+
## 3.6. Comparison of Spatial Structure of Measured and Kriged Soil Moisture
|
| 10 |
+
|
| 11 |
+
Contour maps of soil moisture variability were plotted for the measured and kriged values at each depth for both observations, and their variability and spatial structure were compared (Figures 6(b), 6(d), 7(b), 7(d), 8(b), 8(d), 9(b), 9(d), 10(b), and 10(d)). Larger variability is observed in the first observation, i.e., before irrigation, than in the second observation, which is spatially less variable. For the first observation, large variability is observed
|
| 12 |
+
|
| 13 |
+
Figure 6. (a) and (b) measured and kriged value of soil moisture at 7.5 cm depth for dry condition, (c) and (d) measured and kriged value of soil moisture at 7.5 cm depth for wet condition.
|
samples/texts/1169714/page_9.md
ADDED
|
@@ -0,0 +1 @@
|
| 1 |
+
Figure 7. (a) and (b) measured and kriged value of soil moisture at 15 cm depth for dry condition; (c) and (d) measured and kriged value of soil moisture at 15 cm depth for wet condition.
|
samples/texts/1285480/page_1.md
ADDED
|
@@ -0,0 +1,17 @@
|
| 1 |
+
# A New Error Assessment Method in Isogeometric Analysis of 2D Heat Conduction Problems
|
| 2 |
+
|
| 3 |
+
Gang Xu, Bernard Mourrain, Régis Duvigneau, André Galligo
|
| 4 |
+
|
| 5 |
+
► To cite this version:
|
| 6 |
+
|
| 7 |
+
Gang Xu, Bernard Mourrain, Régis Duvigneau, André Galligo. A New Error Assessment Method in Isogeometric Analysis of 2D Heat Conduction Problems. Advanced Science Letters, American Scientific Publishers, 2012, 10 (1), pp.508-512. 10.1166/asl.2012.3321. hal-00742955
|
| 8 |
+
|
| 9 |
+
HAL Id: hal-00742955
|
| 10 |
+
|
| 11 |
+
https://hal.inria.fr/hal-00742955
|
| 12 |
+
|
| 13 |
+
Submitted on 17 Oct 2012
|
| 14 |
+
|
| 15 |
+
**HAL** is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
|
| 16 |
+
|
| 17 |
+
The multidisciplinary open access archive **HAL** is intended for the deposit and dissemination of scientific research documents at the research level, whether published or not, originating from French or foreign teaching and research institutions, or from public or private laboratories.
|
samples/texts/1285480/page_2.md
ADDED
|
@@ -0,0 +1,33 @@
|
|
| 1 |
+
# A New Error Assessment Method in Isogeometric Analysis of 2D Heat Conduction Problems
|
| 2 |
+
|
| 3 |
+
Gang Xu<sup>*1*</sup>, Bernard Mourrain<sup>*2*</sup>, Régis Duvigneau<sup>*3*</sup>, André Galligo<sup>*4*</sup>
|
| 4 |
+
|
| 5 |
+
<sup>*1*</sup>College of Computer, Hangzhou Dianzi University, Hangzhou, P.R. China
|
| 6 |
+
|
| 7 |
+
<sup>*2*</sup>Bernard.Mourrain@inria.fr
|
| 8 |
+
|
| 9 |
+
<sup>*3*</sup>Regis.Duvigneau@inria.fr
|
| 10 |
+
|
| 11 |
+
<sup>*4*</sup>University of Nice Sophia-Antipolis, 06108 Nice Cedex 02, France
|
| 12 |
+
|
| 13 |
+
<sup>*4*</sup>galligo@unice.fr
|
| 14 |
+
|
| 15 |
+
**Abstract**— In this paper, we propose a new error assessment method for isogeometric analysis of 2D heat conduction problems. The a posteriori error estimation is obtained by resolving the isogeometric analysis problem with several *k*-refinement steps. The main feature of the proposed method is that the resulting error-estimation surface has a B-spline form, in keeping with the main idea of isogeometric analysis. Though the error estimation method is expensive, it can be used as an error assessment method for isogeometric analysis. Two comparison examples are presented to show the efficiency of the proposed method.
|
| 16 |
+
|
| 17 |
+
**Keywords**— Isogeometric analysis, posteriori error estimation, *k*-refinement, heat conduction
|
| 18 |
+
|
| 19 |
+
## I. INTRODUCTION
|
| 20 |
+
|
| 21 |
+
Usually, CAD modeling software relies on splines or NURBS representations, while the CAE software for CAD object uses mesh-based geometric descriptions (structured or unstructured). Therefore, in conventional approaches, several information transfers occur during the design phase, yielding approximations and non-linear transformations that can significantly deteriorate the overall efficiency of the design optimization procedure.
|
| 22 |
+
|
| 23 |
+
The isogeometric analysis (IGA for short) method proposed by Hughes et al. in [12] can be employed to overcome the gap between CAD and CAE. The approach uses the same type of mathematical representation (spline representation) both for the geometry and for the physical solutions, and thus avoids data transfers between the design and analysis phases. Moreover, it reduces the number of parameters needed to describe the geometry, which is of particular interest for shape optimization. This framework allows one to compute the analysis solution on the exact geometry (not a discretized geometry), obtain a more accurate solution (high-order approximation), and reduce spurious numerical sources of noise that deteriorate convergence. Moreover, the NURBS representation is naturally hierarchical and allows refinement operations to be performed to improve the analysis result.
|
| 24 |
+
|
| 25 |
+
Since the concept of isogeometric analysis was proposed, many researchers in the fields of computational mechanics and geometric modeling have been involved in this topic. The current work on isogeometric analysis can be classified into three categories: (1) application of IGA to various simulation and analysis problems [2][5][9][11]; (2) application of various modeling tools from geometric computation to IGA [6][10][7][14]; (3) error estimation, accuracy and efficiency improvement of the IGA framework by reparameterization and refinement operations [1][4][3][8][13][15][16].
|
| 26 |
+
|
| 27 |
+
The topic of this paper belongs to the third category. As far as we know, there are few works on error estimation methods in isogeometric analysis. Bazilevs et al. studied the error estimation for *h*-refined meshes in isogeometric analysis [4]; some estimates for *h*-*p*-*k*-refinement in isogeometric analysis are investigated in [3]; Dörfel et al. proposed a posteriori error estimation for local *h*-refinement with T-splines [10]. In [16], an error assessment method is proposed based on the *h*-refinement operation. In this paper, we propose a new error assessment method for isogeometric analysis of two-dimensional heat conduction problems, which is obtained by resolving the isogeometric analysis problem with several *k*-refinement steps.
|
| 28 |
+
|
| 29 |
+
The remainder of the paper is organized as follows. Section II introduces the isogeometric analysis and *k*-refinement for two dimensional heat conduction problems. Section III presents the error assessment method based on *k*-refinement for isogeometric analysis of two dimensional heat conduction problems. Some examples and comparisons with *h*-refinement method are presented in Section IV. Finally, we conclude this paper and outline future works in Section V.
|
| 30 |
+
|
| 31 |
+
## II. ISOGEOMETRIC ANALYSIS OF HEAT CONDUCTION PROBLEM
|
| 32 |
+
|
| 33 |
+
Given a domain $\Omega$ with $\Gamma = \partial\Omega_D \cup \partial\Omega_N$, for ease of presentation, we consider the two dimensional second order elliptic PDE with homogeneous Dirichlet boundary condition as an illustrative model problem :
|
samples/texts/1285480/page_3.md
ADDED
|
@@ -0,0 +1,82 @@
|
|
| 1 |
+
$$
|
| 2 |
+
\begin{align}
|
| 3 |
+
\Delta U(\mathbf{x}) &= f(\mathbf{x}) \quad \text{in } \Omega \\
|
| 4 |
+
U(\mathbf{x}) &= U_0(\mathbf{x}) \quad \text{on } \partial\Omega \tag{1}
|
| 5 |
+
\end{align}
|
| 6 |
+
$$
|
| 7 |
+
|
| 8 |
+
where **x** are the Cartesian coordinates, Ω is a Lipschitz
|
| 9 |
+
domain with boundary ∂Ω, f(**x**) ∈ L²(Ω) : Ω ↦ R is a
|
| 10 |
+
given source term, and U(**x**): Ω ↦ R is the unknown
|
| 11 |
+
solution.
|
| 12 |
+
|
| 13 |
+
According to a classical variational approach, we seek a
|
| 14 |
+
solution $U \in H^1(\Omega)$, such as $U(\mathbf{x}) = U_0(\mathbf{x})$ on $\partial\Omega$ and:
|
| 15 |
+
|
| 16 |
+
$$
|
| 17 |
+
\int_{\Omega} \nabla \cdot (\nabla U(\mathbf{x})) \psi(\mathbf{x}) d\Omega = \int_{\Omega} f(\mathbf{x}) \psi(\mathbf{x}) d\Omega \quad \forall \psi \in H_{\partial\Omega_D}^{1}(\Omega),
|
| 18 |
+
$$
|
| 19 |
+
|
| 20 |
+
where $\psi(\mathbf{x})$ are test functions. After integrating by parts and using boundary conditions, we obtain:
|
| 21 |
+
|
| 22 |
+
$$
|
| 23 |
+
-\int_{\Omega} \nabla U(\mathbf{x}) \nabla \psi(\mathbf{x}) d\Omega = \int_{\Omega} f(\mathbf{x}) \psi(\mathbf{x}) d\Omega. \quad (2)
|
| 24 |
+
$$
|
| 25 |
+
|
| 26 |
+
According to the IGA paradigm, the temperature field is
|
| 27 |
+
represented using B-spline basis functions. For a 2D problem,
|
| 28 |
+
we have:
|
| 29 |
+
|
| 30 |
+
$$
|
| 31 |
+
U(\mathbf{x}) = T(\xi, \eta) = \sum_{i=1}^{n_i} \sum_{j=1}^{n_j} \hat{N}_i^{p_i}(\xi) \hat{N}_j^{p_j}(\eta) T_{ij},
|
| 32 |
+
$$
|
| 33 |
+
|
| 34 |
+
where $\hat{N}_i$ functions are B-Spline basis functions and $\mathbf{u} = (\xi, \eta) \in \mathcal{P}$ are domain parameters. Then, we define the test functions $\psi(\mathbf{x})$ in the physical domain such as:
|
| 35 |
+
|
| 36 |
+
$$
|
| 37 |
+
N_{ij}(\mathbf{x}) = N_{ij}(x, y) = N_{ij}(\mathcal{T}(\xi, \eta)) = \hat{N}_{ij}(\xi, \eta) = \hat{N}_{i}^{p_i}(\xi) \hat{N}_{j}^{p_j}(\eta).
|
| 38 |
+
$$
|
| 39 |
+
|
| 40 |
+
The weak formulation Eq. (2) reads:
|
| 41 |
+
|
| 42 |
+
$$
|
| 43 |
+
\sum_{k=1}^{n_k} \sum_{l=1}^{n_l} T_{kl} \int_{\Omega} \nabla N_{kl}(\mathbf{x}) \nabla N_{ij}(\mathbf{x}) d\Omega = \int_{\Omega} f(\mathbf{x}) N_{ij}(\mathbf{x}) d\Omega.
|
| 44 |
+
$$
|
| 45 |
+
|
| 46 |
+
Finally, we obtain a linear system similar to that resulting
|
| 47 |
+
from the classical finite-element methods, with a matrix and a
|
| 48 |
+
right-hand side defined as:
|
| 49 |
+
|
| 50 |
+
$$
|
| 51 |
+
M_{ij,kl} = \int_{\Omega} \nabla N_{kl}(\mathbf{x}) \nabla N_{ij}(\mathbf{x}) d\Omega \\
|
| 52 |
+
= \int_{\mathcal{P}} \nabla_u \tilde{N}_{kl}(\mathbf{u})^T B(\mathbf{u})^T B(\mathbf{u}) \nabla_u \tilde{N}_{ij}(\mathbf{u}) J(\mathbf{u}) d\mathcal{P}
|
| 53 |
+
$$
|
| 54 |
+
|
| 55 |
+
where *J* is the Jacobian of the transformation, *B*<sup>T</sup> is the
|
| 56 |
+
transpose of the inverse of the Jacobian matrix. The above
|
| 57 |
+
integrations are performed in the parameter space using
|
| 58 |
+
classical Gauss quadrature rules.
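As a rough sketch of this element-wise Galerkin assembly with Gauss quadrature, the snippet below builds the 1D analogue of the stiffness matrix for degree-1 B-splines (hat functions) on a uniform grid. The function name and grid are assumptions for illustration, not the AXEL implementation; for degree 1 the basis gradients are piecewise constant, so 2-point quadrature is exact.

```python
import math

def assemble_stiffness_1d(n_elems, length=1.0):
    """Assemble K[i][j] = integral of N_i'(x) N_j'(x) dx for degree-1
    B-splines (hat functions) on a uniform grid, element by element,
    with 2-point Gauss-Legendre quadrature (weights are 1)."""
    h = length / n_elems
    n_dof = n_elems + 1
    K = [[0.0] * n_dof for _ in range(n_dof)]
    gauss_pts = (-1.0 / math.sqrt(3.0), 1.0 / math.sqrt(3.0))
    for e in range(n_elems):
        grads = (-1.0 / h, 1.0 / h)  # constant gradients for degree 1
        for _gp in gauss_pts:  # for higher degree, evaluate grads at _gp
            jac = h / 2.0      # Jacobian of the map from [-1, 1]
            for a in range(2):
                for b in range(2):
                    K[e + a][e + b] += grads[a] * grads[b] * jac
    return K

K = assemble_stiffness_1d(4)
print(K[1][1], K[1][2])  # → 8.0 -4.0  (interior stencil -4, 8, -4 for h = 1/4)
```

In 2D, the same loop structure runs over tensor-product basis functions and multiplies in the Jacobian terms of the geometry mapping, as in the expression for M above.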
|
| 59 |
+
|
| 60 |
+
Starting from a planar B-spline surface as computational domain, an isogeometric solver for the thermal conduction problem (1) has been implemented in the AXEL¹ platform, yielding a B-spline surface as the solution field. A Gauss–Seidel algorithm is employed to solve the linear system. Fig. 1 shows an example of a planar B-spline surface as computational domain and the corresponding isogeometric analysis results for a two-dimensional heat conduction problem.

In order to improve the simulation results, refinement operations can be performed in the two parametric directions. There are three kinds of refinement operations in isogeometric analysis: *h*-refinement by knot insertion, *p*-refinement by

Fig. 1 An example of isogeometric analysis for a two-dimensional heat conduction problem: (a) computational domain with control points and iso-parametric curves; (b) isogeometric simulation result.

Fig. 2 Comparison of three kinds of refinement methods for the computational domain in Fig. 1: (a) *h*-refinement; (b) *p*-refinement; (c) *k*-refinement.

¹ http://axel.inria.fr/
samples/texts/1285480/page_4.md
ADDED
|
@@ -0,0 +1,80 @@
degree elevation operation, and *k*-refinement combining knot insertion and degree elevation. The *k*-refinement operation is performed by first elevating the degree of the basis functions to the desired order and then inserting knots, thus obtaining the maximum available continuity. Compared with finite element analysis, the main advantage of the refinement operations in isogeometric analysis is that the geometry of the computational domain is preserved while the number of degrees of freedom increases. Fig. 2 presents an example comparing *h*-refinement, *p*-refinement and *k*-refinement in isogeometric analysis. Note that the number of control points increases during the refinement operation, and more degrees of freedom can be achieved by *k*-refinement.

### III. ERROR ASSESSMENT METHOD BASED ON *k*-REFINEMENT

Suppose that $U(\mathbf{x})$ is the exact solution and $U_h(\mathbf{x})$ is the approximate solution obtained by the isogeometric method in Section II; then the discrete error $e$ can be written as

$$e = U - U_h. \quad (3)$$

Applying the Laplacian operator $\Delta$ to each side of (3), a posteriori error assessment can be obtained by solving the following problem,

$$\begin{align}
\Delta e &= f - \Delta U_h && \text{in } \Omega \\
e &= 0 && \text{on } \partial\Omega \tag{4}
\end{align}$$

From (4), the crucial point of a posteriori error estimation is the computation of $\Delta U_h(\mathbf{x})$. The following proposition shows how $\Delta U_h(\mathbf{x})$ can be computed directly on the parametric domain:

**Proposition 1.** Given a B-spline parameterization $\sigma(\xi, \eta) = (x(\xi, \eta), y(\xi, \eta))$ of the computational domain and the solution field $U_h(\mathbf{x}) = U_h(x(\xi, \eta), y(\xi, \eta)) = T(\xi, \eta)$ over $\sigma(\xi, \eta)$, $\Delta U_h(\mathbf{x})$ has the following form,

$$\begin{align*}
\Delta U_h &= \frac{\partial^2 U_h}{\partial x^2} + \frac{\partial^2 U_h}{\partial y^2} \\
&= \frac{J L_{\xi} T_{\eta\eta} - J L_{\eta} T_{\xi\xi} + L_{\xi\xi} T_{\eta} - L_{\eta\eta} T_{\xi}}{J K},
\end{align*}$$

where

$$\begin{align*}
J &= x_\xi y_\eta - x_\eta y_\xi, & K &= (x_\xi y_\eta)^2 - (x_\eta y_\xi)^2, \\
L_\xi &= x_\xi^2 - y_\xi^2, & L_\eta &= x_\eta^2 - y_\eta^2, \\
L_{\xi\xi} &= (L_\eta y_{\xi\xi} - L_\xi y_{\eta\eta}) x_\xi - (L_\eta x_{\xi\xi} - L_\xi x_{\eta\eta}) y_\xi, \\
L_{\eta\eta} &= (L_\eta y_{\xi\xi} - L_\xi y_{\eta\eta}) x_\eta - (L_\eta x_{\xi\xi} - L_\xi x_{\eta\eta}) y_\eta.
\end{align*}$$

**Proof.** The idea of isogeometric analysis is to use the same mathematical representation for the computational domain and the solution field. Suppose that the computational domain $\Omega$ is parameterized by the following planar B-spline surface:

$$\sigma(\xi, \eta) = (x(\xi, \eta), y(\xi, \eta)) = \sum_{i=1}^{n_i} \sum_{j=1}^{n_j} N_i^{d_i}(\xi) N_j^{d_j}(\eta) c_{ij}.$$

In isogeometric analysis, the solution field of the heat conduction problem (1) over the computational domain $\Omega$ has the following form,

$$U_h(x(\xi, \eta), y(\xi, \eta)) = T(\xi, \eta) = \sum_{i=1}^{n_i} \sum_{j=1}^{n_j} N_i^{d_i}(\xi) N_j^{d_j}(\eta) T_{ij}.$$

Here the $T_{ij}$ are the unknown variables in isogeometric analysis, to be solved for from the boundary condition and Eq. (1).

From $U_h(x(\xi, \eta), y(\xi, \eta)) = T(\xi, \eta)$, we have

$$\begin{align*}
\frac{\partial T}{\partial \xi} &= \frac{\partial U_h}{\partial x} \frac{\partial x}{\partial \xi} + \frac{\partial U_h}{\partial y} \frac{\partial y}{\partial \xi}, \\
\frac{\partial T}{\partial \eta} &= \frac{\partial U_h}{\partial x} \frac{\partial x}{\partial \eta} + \frac{\partial U_h}{\partial y} \frac{\partial y}{\partial \eta}.
\end{align*}$$

Then we can obtain

$$\begin{align*}
\frac{\partial U_h}{\partial x} &= \Big(\frac{\partial T}{\partial \xi} y_{\eta} - \frac{\partial T}{\partial \eta} y_{\xi}\Big) \Big/ J, \\
\frac{\partial U_h}{\partial y} &= \Big(\frac{\partial T}{\partial \eta} x_{\xi} - \frac{\partial T}{\partial \xi} x_{\eta}\Big) \Big/ J,
\end{align*}$$

where $J = x_\xi y_\eta - x_\eta y_\xi$.
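The inversion of this 2×2 chain-rule system can be checked symbolically. The SymPy sketch below verifies the two closed forms; the concrete polynomial parameterization is an illustrative assumption chosen only to make the check concrete.

```python
import sympy as sp

xi, eta = sp.symbols('xi eta')
T_xi, T_eta, Ux, Uy = sp.symbols('T_xi T_eta U_x U_y')

# an illustrative smooth parameterization x(xi, eta), y(xi, eta)
x = xi + sp.Rational(1, 10) * eta**2
y = eta + sp.Rational(1, 5) * xi * eta

# chain rule: T_xi = U_x x_xi + U_y y_xi,  T_eta = U_x x_eta + U_y y_eta
sol = sp.solve(
    [sp.Eq(T_xi, Ux * sp.diff(x, xi) + Uy * sp.diff(y, xi)),
     sp.Eq(T_eta, Ux * sp.diff(x, eta) + Uy * sp.diff(y, eta))],
    [Ux, Uy])

J = sp.diff(x, xi) * sp.diff(y, eta) - sp.diff(x, eta) * sp.diff(y, xi)
# closed forms stated in the proof: U_x = (T_xi y_eta - T_eta y_xi)/J, etc.
assert sp.simplify(sol[Ux] - (T_xi * sp.diff(y, eta) - T_eta * sp.diff(y, xi)) / J) == 0
assert sp.simplify(sol[Uy] - (T_eta * sp.diff(x, xi) - T_xi * sp.diff(x, eta)) / J) == 0
```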
Similarly,

$$\begin{align*}
\frac{\partial^2 T}{\partial \xi^2} &= \frac{\partial^2 U_h}{\partial x^2} \left( \frac{\partial x}{\partial \xi} \right)^2 + \frac{\partial U_h}{\partial x} \frac{\partial^2 x}{\partial \xi^2} + \frac{\partial^2 U_h}{\partial y^2} \left( \frac{\partial y}{\partial \xi} \right)^2 + \frac{\partial U_h}{\partial y} \frac{\partial^2 y}{\partial \xi^2}, \\
\frac{\partial^2 T}{\partial \eta^2} &= \frac{\partial^2 U_h}{\partial x^2} \left( \frac{\partial x}{\partial \eta} \right)^2 + \frac{\partial U_h}{\partial x} \frac{\partial^2 x}{\partial \eta^2} + \frac{\partial^2 U_h}{\partial y^2} \left( \frac{\partial y}{\partial \eta} \right)^2 + \frac{\partial U_h}{\partial y} \frac{\partial^2 y}{\partial \eta^2}.
\end{align*}$$

From the above two equations, we have

$$\begin{align*}
\frac{\partial^2 U_h}{\partial x^2} &= [y_{\eta}^2 (T_{\xi\xi} - \frac{\partial U_h}{\partial x} x_{\xi\xi} - \frac{\partial U_h}{\partial y} y_{\xi\xi}) - \\
&\qquad y_{\xi}^2 (T_{\eta\eta} - \frac{\partial U_h}{\partial x} x_{\eta\eta} - \frac{\partial U_h}{\partial y} y_{\eta\eta})] / K, \\
\frac{\partial^2 U_h}{\partial y^2} &= [x_{\xi}^2 (T_{\eta\eta} - \frac{\partial U_h}{\partial x} x_{\eta\eta} - \frac{\partial U_h}{\partial y} y_{\eta\eta}) - \\
&\qquad x_{\eta}^2 (T_{\xi\xi} - \frac{\partial U_h}{\partial x} x_{\xi\xi} - \frac{\partial U_h}{\partial y} y_{\xi\xi})] / K,
\end{align*}$$

where $K = (x_\xi y_\eta)^2 - (x_\eta y_\xi)^2$.

Hence, we can obtain
samples/texts/1285480/page_5.md
ADDED
|
@@ -0,0 +1,53 @@
Fig. 3 Comparison of the error assessment methods based on *h*-refinement and *k*-refinement: (a) isogeometric solution surface with control points; (b) error surface obtained by three *h*-refinement steps; (c) error surface obtained by three *k*-refinement steps; (d) exact error color map; (e) color map of the error surface in Fig. 3(b); (f) color map of the error surface in Fig. 3(c).

$$
\begin{align*}
\Delta U_h &= \frac{\partial^2 U_h}{\partial x^2} + \frac{\partial^2 U_h}{\partial y^2} \\
&= [(x_{\xi}^2 - y_{\xi}^2)(T_{\eta\eta} - \frac{\partial U_h}{\partial x}x_{\eta\eta} - \frac{\partial U_h}{\partial y}y_{\eta\eta}) - \\
& \qquad (x_{\eta}^2 - y_{\eta}^2)(T_{\xi\xi} - \frac{\partial U_h}{\partial x}x_{\xi\xi} - \frac{\partial U_h}{\partial y}y_{\xi\xi})] / K \\
&= (JL_{\xi}T_{\eta\eta} - JL_{\eta}T_{\xi\xi} + L_{\xi\xi}T_{\eta} - L_{\eta\eta}T_{\xi}) / (JK),
\end{align*}
$$

where

$$
\begin{align*}
L_{\xi} &= x_{\xi}^{2} - y_{\xi}^{2}, & L_{\eta} &= x_{\eta}^{2} - y_{\eta}^{2}, \\
L_{\xi\xi} &= (L_{\eta} y_{\xi\xi} - L_{\xi} y_{\eta\eta}) x_{\xi} - (L_{\eta} x_{\xi\xi} - L_{\xi} x_{\eta\eta}) y_{\xi}, \\
L_{\eta\eta} &= (L_{\eta} y_{\xi\xi} - L_{\xi} y_{\eta\eta}) x_{\eta} - (L_{\eta} x_{\xi\xi} - L_{\xi} x_{\eta\eta}) y_{\eta}.
\end{align*}
$$

This completes the proof. □

The approximation error surface *e* from (4) also has a B-spline form. Unlike the method in [16], we perform several *k*-refinement operations to achieve more accurate results. Though it is much more expensive, we can use it as an error assessment method for isogeometric simulation solutions.

In summary, the procedure of the error assessment method for the model problem (1) can be described as follows:

**Input:** the isogeometric solution $U_h(\mathbf{x})$ over the computational domain $\Omega$

**Output:** the error surface *e*

1. Compute $\Delta U_h(\mathbf{x})$ according to Proposition 1;

2. Solve the isogeometric analysis problem (4) with several *k*-refinement steps;

3. Output the error surface $e$.

### IV. EXAMPLES AND COMPARISONS

In this paper, we test the error assessment methods based on *h*-refinement and *k*-refinement for the heat conduction problem (1) with source term

$$
f(x, y) = - \frac{4\pi^2}{9} \sin\left(\frac{\pi x}{3}\right) \sin\left(\frac{\pi y}{3}\right).
$$

For this problem with boundary condition $U_0(x) = 0$, the exact solution over the computational domain $[0, 3] \times [0, 3]$ is

$$
U(x, y) = 2 \sin\left(\frac{\pi x}{3}\right) \sin\left(\frac{\pi y}{3}\right).
$$
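That this $U$ is indeed the exact solution can be checked directly. The quick SymPy verification below assumes the model problem is $\Delta U = f$ with homogeneous Dirichlet data, which is consistent with Eq. (4).

```python
import sympy as sp

x, y = sp.symbols('x y')
U = 2 * sp.sin(sp.pi * x / 3) * sp.sin(sp.pi * y / 3)
f = -sp.Rational(4, 9) * sp.pi**2 * sp.sin(sp.pi * x / 3) * sp.sin(sp.pi * y / 3)

# the Laplacian of U matches the source term f
assert sp.simplify(sp.diff(U, x, 2) + sp.diff(U, y, 2) - f) == 0
# U vanishes on the boundary of [0, 3] x [0, 3]
assert all(U.subs(s, v) == 0 for s in (x, y) for v in (0, 3))
```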
Fig. 3 illustrates an example over the computational domain $[0,3] \times [0,3]$, which has an exact solution for problem (1). The isogeometric solution surface with control points is shown in Fig. 3(a), the error surface obtained by three *h*-refinement steps is illustrated in Fig. 3(b), and Fig. 3(c) shows the error
samples/texts/1285480/page_6.md
ADDED
|
@@ -0,0 +1,82 @@
Fig. 4 Error assessment for the simulation result in Fig. 1: (a) *h*-refinement; (b) *k*-refinement.

surface obtained by three *k*-refinement steps. The exact error color map for this example is shown in Fig. 3(d), Fig. 3(e) illustrates the color map of the error surface in Fig. 3(b), and the color map of the error surface in Fig. 3(c) is shown in Fig. 3(f). From this example, we can see that the *k*-refinement method achieves a better approximation of the exact error surface than the *h*-refinement method.

As an example with unknown exact solution, the error color maps of the simulation result in Fig. 1 obtained by the *h*-refinement method and the *k*-refinement method are shown in Fig. 4. Note that the original design of the computational domain has a significant impact on the simulation results. Hence, for different parameterizations of the computational domain, the error surface obtained by the proposed method is also different.

### V. CONCLUSIONS

A new error assessment method for isogeometric analysis of two-dimensional heat conduction problems is proposed in this paper. The basic idea is to solve the isogeometric analysis problem with several *k*-refinement steps. The main feature of the proposed method is that the resulting error estimation surface also has a B-spline representation. It can be used as an error assessment method for isogeometric analysis results. The efficiency of the proposed method is demonstrated by several comparison examples.

In the future, we will generalize the proposed method to the 3D case and employ it to validate the *r*-refinement method in 3D isogeometric analysis as in [16].

### ACKNOWLEDGMENT

The authors are partially supported by the 7th Framework Program of the European Union, project SCP8-218536 "EXCITING". The first author is partially supported by the National Nature Science Foundation of China (No. 61004117), the Zhejiang Provincial Natural Science Foundation of China under Grant No. Y1090718, the Defence Industrial Technology Development Program (No. A3920110002), and the Open Project Program of the State Key Lab of CAD&CG (A1105), Zhejiang University.

### REFERENCES

[1] M. Aigner, C. Heinrich, B. Jüttler, E. Pilgerstorfer, B. Simeon and A.-V. Vuong. Swept volume parametrization for isogeometric analysis. In E. Hancock and R. Martin (eds.), The Mathematics of Surfaces (MoS XIII 2009), LNCS vol. 5654, Springer, 19-44, 2009.

[2] F. Auricchio, L.B. da Veiga, A. Buffa, C. Lovadina, A. Reali, and G. Sangalli. A fully locking-free isogeometric approach for plane linear elasticity problems: A stream function formulation. Computer Methods in Applied Mechanics and Engineering, 197:160-172, 2007.

[3] L. Beirao da Veiga, A. Buffa, J. Rivas and G. Sangalli. Some estimates for $h$-$p$-$k$-refinement in isogeometric analysis. Numerische Mathematik, 118(2):271-305, 2011.

[4] Y. Bazilevs, L. Beirao da Veiga, J.A. Cottrell, T.J.R. Hughes, and G. Sangalli. Isogeometric analysis: approximation, stability and error estimates for h-refined meshes. Mathematical Models and Methods in Applied Sciences, 6:1031-1090, 2006.

[5] Y. Bazilevs, V.M. Calo, T.J.R. Hughes, and Y. Zhang. Isogeometric fluid-structure interaction: Theory, algorithms, and computations. Computational Mechanics, 43:3-37, 2008.

[6] Y. Bazilevs, V.M. Calo, J.A. Cottrell, J. Evans, T.J.R. Hughes, S. Lipton, M.A. Scott, and T.W. Sederberg. Isogeometric analysis using T-splines. Computer Methods in Applied Mechanics and Engineering, 199(5-8):229-263, 2010.

[7] D. Burkhart, B. Hamann and G. Umlauf. Iso-geometric analysis based on Catmull-Clark subdivision solids. Computer Graphics Forum, 29(5):1575-1584, 2010.

[8] J.A. Cottrell, T.J.R. Hughes, and A. Reali. Studies of refinement and continuity in isogeometric analysis. Computer Methods in Applied Mechanics and Engineering, 196:4160-4183, 2007.

[9] J.A. Cottrell, A. Reali, Y. Bazilevs, and T.J.R. Hughes. Isogeometric analysis of structural vibrations. Computer Methods in Applied Mechanics and Engineering, 195:5257-5296, 2006.

[10] M. Dörfel, B. Jüttler, and B. Simeon. Adaptive isogeometric analysis by local h-refinement with T-splines. Computer Methods in Applied Mechanics and Engineering, 199(5-8):264-275, 2010.

[11] H. Gomez, V.M. Calo, Y. Bazilevs, and T.J.R. Hughes. Isogeometric analysis of the Cahn-Hilliard phase-field model. Computer Methods in Applied Mechanics and Engineering, 197:4333-4352, 2008.

[12] T.J.R. Hughes, J.A. Cottrell, Y. Bazilevs. Isogeometric analysis: CAD, finite elements, NURBS, exact geometry, and mesh refinement. Computer Methods in Applied Mechanics and Engineering, 194(39-41):4135-4195, 2005.

[13] T. Nguyen, B. Jüttler. Using approximate implicitization for domain parameterization in isogeometric analysis. International Conference on Curves and Surfaces, Avignon, France, 2010.

[14] N. Nguyen-Thanh, H. Nguyen-Xuan, S.P.A. Bordas and T. Rabczuk. Isogeometric analysis using polynomial splines over hierarchical T-meshes for two-dimensional elastic solids. Computer Methods in Applied Mechanics and Engineering, 200(21-22):1892-1908, 2011.

[15] G. Xu, B. Mourrain, R. Duvigneau, A. Galligo. Optimal analysis-aware parameterization of computational domain in isogeometric analysis. Proc. of Geometric Modeling and Processing (GMP 2010), 236-254, 2010.

[16] G. Xu, B. Mourrain, R. Duvigneau, A. Galligo. Parameterization of computational domain in isogeometric analysis: methods and comparison. Computer Methods in Applied Mechanics and Engineering, 200(23-24):2021-2031, 2011.
samples/texts/1364076/page_1.md
ADDED
|
@@ -0,0 +1,34 @@
und.1 Trakhtenbrot's Theorem

In ?? we defined sentences $\tau(M, w)$ and $\alpha(M, w)$ for a Turing machine $M$ and input string $w$. Then we showed in ?? and ?? that $\tau(M, w) \rightarrow \alpha(M, w)$ is valid iff $M$ started on input $w$ eventually halts. Since the Halting Problem is undecidable, this implies that validity and satisfiability of sentences of first-order logic are undecidable (????).

But validity and satisfiability of sentences are defined for arbitrary *structures*, finite or infinite. You might suspect that it is easier to decide whether a sentence is satisfiable in a finite *structure* (or valid in all finite *structures*). We can improve the proof of the unsolvability of the decision problem so that it shows this is not the case.

First, if you go back to the proof of ??, you'll see that what we did there is produce a model $\mathfrak{M}$ of $\tau(M, w)$ which describes exactly what machine $M$ does when started on input $w$. The domain of that model was $\mathbb{N}$, i.e., infinite. But if $M$ actually halts on input $w$, we can build a finite model $\mathfrak{M}'$ in the same way. Suppose $M$ started on input $w$ halts after $k$ steps. Take as domain $|\mathfrak{M}'|$ the set $\{0, \dots, n\}$, where $n$ is the larger of $k$ and the length of $w$, and let

$$
\mathfrak{M}'(x) = \begin{cases} x+1 & \text{if } x < n \\ n & \text{otherwise.} \end{cases}
$$
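A quick sanity check of this interpretation of the successor symbol: the capped map keeps the finite domain $\{0, \dots, n\}$ closed under successor, with $n$ as a fixed point. (The function name below is just an illustration, not notation from the text.)

```python
def capped_succ(x, n):
    """Interpretation of ' in the finite structure M': x + 1 below n, then n."""
    return x + 1 if x < n else n

n = 5
domain = range(n + 1)
assert all(capped_succ(x, n) in domain for x in domain)  # domain closed under '
assert capped_succ(n, n) == n                            # n is a fixed point
```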
Otherwise $\mathfrak{M}'$ is defined just like $\mathfrak{M}$. By the definition of $\mathfrak{M}'$, just like in the proof of ??, $\mathfrak{M}' \models \tau(M, w)$. And since we assumed that $M$ halts on input $w$, $\mathfrak{M}' \models \alpha(M, w)$. So, $\mathfrak{M}'$ is a finite model of $\tau(M, w) \land \alpha(M, w)$ (note that we've replaced $\rightarrow$ by $\land$). We are halfway to a proof: if $M$ halts on input $w$, then $\tau(M, w) \land \alpha(M, w)$ has a finite model. Unfortunately, the "only if" direction does not hold. For instance, if $M$ after $n$ steps is in state $q$ and reads a symbol $\sigma$, and $\delta(q, \sigma) = \langle q, \sigma, N \rangle$, then the configuration after $n+1$ steps is exactly the same as the configuration after $n$ steps (same state, same head position, same tape contents). But the machine never halts; it's in an infinite loop. The corresponding *structure* $\mathfrak{M}'$ above satisfies $\tau(M, w)$ but not $\alpha(M, w)$. (In it, the values of $\overline{n+l}$ are all the same, so it is finite.) But by changing $\tau(M, w)$ suitably we can rule out *structures* like this.

Consider the **sentences** describing the operation of the Turing machine $M$ on input $w = \sigma_{i_1} \dots \sigma_{i_k}$:

1. Axioms describing numbers and $<$ (just like in the definition of $\tau(M, w)$ in ??).

2. Axioms describing the input configuration: just like in the definition of $\tau(M, w)$.

3. Axioms describing the transition from one configuration to the next:

For the following, let $\varphi(x, y)$ be as before, and let

$$
\psi(y) \equiv \forall x (x < y \rightarrow x \neq y).
$$
samples/texts/1364076/page_2.md
ADDED
|
@@ -0,0 +1,37 @@
a) For every instruction $\delta(q_i, \sigma) = \langle q_j, \sigma', R \rangle$, the sentence:

$$ \forall x \forall y ((Q_{q_i}(x, y) \land S_\sigma(x, y)) \rightarrow \\ (Q_{q_j}(x', y') \land S_{\sigma'}(x, y') \land \varphi(x, y) \land \psi(y'))) $$

In other words, the same as the corresponding **sentence** in $\tau(M, w)$, except we add $\psi(y')$ at the end. ($\psi(y')$ ensures that the number $y'$ of the "next" configuration is different from all previous numbers $0, 0', \dots$)

b) For every instruction $\delta(q_i, \sigma) = \langle q_j, \sigma', L \rangle$, the sentence:

$$ \forall x \forall y ((Q_{q_i}(x, y) \land S_\sigma(x, y)) \rightarrow \\ (Q_{q_j}(x, y') \land S_{\sigma'}(x', y') \land \varphi(x, y))) \land \\ \forall y ((Q_{q_i}(0, y) \land S_\sigma(0, y)) \rightarrow \\ (Q_{q_j}(0, y') \land S_{\sigma'}(0', y') \land \varphi(0, y) \land \psi(y'))) $$

c) For every instruction $\delta(q_i, \sigma) = \langle q_j, \sigma', N \rangle$, the sentence:

$$ \forall x \forall y ((Q_{q_i}(x, y) \land S_\sigma(x, y)) \rightarrow \\ (Q_{q_j}(x, y') \land S_{\sigma'}(x', y') \land \varphi(x, y) \land \psi(y'))) $$

Let $\tau'(M, w)$ be the conjunction of all the above **sentences** for Turing machine $M$ and input $w$.

**Lemma und.1.** If $M$ started on input $w$ halts, then $\tau'(M, w) \land \alpha(M, w)$ has a finite model.

*Proof.* Let $\mathfrak{M}'$ be as in the proof of ??, except

$$ |\mathfrak{M}'| = \{0, \dots, n\} $$

and

$$ \mathfrak{M}'(x) = \begin{cases} x+1 & \text{if } x < n \\ n & \text{otherwise,} \end{cases} $$

where $n = \max(k, \text{len}(w))$ and $k$ is the least number such that $M$ started on input $w$ has halted after $k$ steps. We leave the verification that $\mathfrak{M}' \models \tau'(M, w) \land \alpha(M, w)$ as an exercise. $\square$

**Problem und.1.** Complete the proof of Lemma und.1 by proving that $\mathfrak{M}' \models \tau'(M, w) \land \alpha(M, w)$.

**Lemma und.2.** If $\tau'(M, w) \land \alpha(M, w)$ has a finite model, then $M$ started on input $w$ halts.

*Proof.* We show the contrapositive. Suppose that $M$ started on $w$ does not halt. If $\tau'(M, w) \land \alpha(M, w)$ has no model at all, we are done. So assume $\mathfrak{M}$ is a model of $\tau'(M, w) \land \alpha(M, w)$. We have to show that it cannot be finite.
samples/texts/1364076/page_3.md
ADDED
|
@@ -0,0 +1,17 @@
We can prove, just like in ??, that if $M$, started on input $w$, has not halted after $n$ steps, then $\mathfrak{M} \models \chi(M, w, n) \wedge \psi(\bar{n})$. Since $M$ started on input $w$ does not halt, $\mathfrak{M} \models \chi(M, w, n) \wedge \psi(\bar{n})$ for all $n \in \mathbb{N}$. Note that by ??, $\mathfrak{M} \models \bar{k} < \bar{n}$ for all $k < n$. Also $\psi(\bar{n}) \models \bar{k} < \bar{n} \rightarrow \bar{k} \neq \bar{n}$. So, $\mathfrak{M} \models \bar{k} \neq \bar{n}$ for all $k < n$, i.e., the infinitely many terms $\bar{k}$ must all have different values in $\mathfrak{M}$. But this requires that $|\mathfrak{M}|$ be infinite, so $\mathfrak{M}$ cannot be a finite model of $\tau'(M, w) \wedge \alpha(M, w)$. $\square$

**Problem und.2.** Complete the proof of Lemma und.2 by proving that if $M$, started on input $w$, has not halted after $n$ steps, then $\mathfrak{M} \models \psi(\bar{n})$.

**Theorem und.3 (Trakhtenbrot's Theorem).** It is undecidable whether an arbitrary sentence of first-order logic has a finite model (i.e., is finitely satisfiable).

*Proof.* Suppose there were a Turing machine $F$ that decides the finite satisfiability problem. Then given any Turing machine $M$ and input $w$, we could compute the sentence $\tau'(M, w) \wedge \alpha(M, w)$, and use $F$ to decide if it has a finite model. By Lemmata und.1 and und.2, it does iff $M$ started on input $w$ halts. So we could use $F$ to solve the halting problem, which we know is unsolvable. $\square$

**Corollary und.4.** There can be no *derivation system* that is *sound* and *complete* for finite validity, i.e., a *derivation system* which has $\vdash \psi$ iff $\mathfrak{M} \models \psi$ for every finite structure $\mathfrak{M}$.

*Proof.* Exercise. $\square$

**Problem und.3.** Prove Corollary und.4. Observe that $\psi$ is satisfied in every finite structure iff $\neg\psi$ is not finitely satisfiable. Explain why finite satisfiability is semi-decidable in the sense of ???. Use this to argue that if there were a *derivation system* for finite validity, then finite satisfiability would be decidable.

Photo Credits

Bibliography
samples/texts/1660153/page_1.md
ADDED
|
@@ -0,0 +1,27 @@
Toward more localized local algorithms: removing assumptions concerning global knowledge

Amos Korman, Jean-Sébastien Sereni, Laurent Viennot

► To cite this version:

Amos Korman, Jean-Sébastien Sereni, Laurent Viennot. Toward more localized local algorithms: removing assumptions concerning global knowledge. Distributed Computing, Springer Verlag, 2013, 26 (5-6), 10.1007/s00446-012-0174-8. hal-01241086

HAL Id: hal-01241086

https://hal.inria.fr/hal-01241086

Submitted on 9 Dec 2015

**HAL** is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

The multidisciplinary open archive **HAL** is intended for the deposit and dissemination of research-level scientific documents, whether published or not, originating from French or foreign teaching and research institutions, or from public or private laboratories.
samples/texts/1660153/page_10.md
ADDED
|
@@ -0,0 +1,33 @@
| 1 |
+
running time of the Monte-Carlo algorithm provided that the checking mechanism used is efficient.
|
| 2 |
+
|
| 3 |
+
If we wish to come up with a similar transformation in the context of locality, a first idea would be to consider a local algorithm that checks the validity of a tentative output vector. This concept has been studied from various perspectives (cf., e.g., [16, 21, 32]). However, such fast local checking procedures can only guarantee that faults are detected by at least one node, whereas to restart the Monte-Carlo algorithm, all nodes should be aware of a fault. This notification can take diameter time and will thus violate the locality constraint (i.e. running in a bounded number of rounds).
|
| 4 |
+
|
| 5 |
+
Instead of using local checking procedures, we introduce the notion of *pruning algorithms*. Informally, this is a mechanism that identifies “valid areas” where the tentative output vector $\hat{y}$ is valid and *prunes* these areas, i.e., takes them out of further consideration. A pruning algorithm $\mathcal{P}$ must satisfy two properties, specifically, (1) *gluing*: $\mathcal{P}$ must make sure that the current solution on these “pruned areas” can be extended to a valid solution for the remainder of the graph, and (2) *solution detection*: if $\hat{y}$ is a valid global solution to begin with then $\mathcal{P}$ should prune all nodes. Observe that since the empty output vector is a solution for the empty input graph then (1) implies the converse of (2), that is, if $\mathcal{P}$ prunes all nodes, then $\hat{y}$ is a valid global solution.
|
| 6 |
+
|
| 7 |
+
Now, given a Monte-Carlo algorithm $\mathcal{A}$ and a pruning algorithm $\mathcal{P}$ for the problem, we can transform $\mathcal{A}$ into a Las Vegas algorithm by executing the pair of algorithms $(\mathcal{A}; \mathcal{P})$ in iterations, where each iteration $i$ is executed on the graph $G_i$ induced by the set of nodes that were not pruned in previous iterations ($G_1$ is the initial graph $G$). If, in some iteration $i$, Algorithm $\mathcal{A}$ solves the problem on the graph $G_i$, then the solution detection property guarantees that the subsequent pruning algorithm will prune all nodes in $G_i$ and hence at that time all nodes are pruned and the execution terminates. Furthermore, using induction, it can be shown that the gluing property guarantees that the correct solution to $G_i$ combined with the outputs of the previously pruned nodes forms a solution to $G$.
### 3.2 Pruning Algorithms: Definition and Examples
We now formally define pruning algorithms. Fix a problem $\Pi$ and a family of instances $\mathcal{F}$ for $\Pi$. A pruning algorithm $\mathcal{P}$ for $\Pi$ and $\mathcal{F}$ is a uniform algorithm that takes as input a triplet $(G, \mathbf{x}, \hat{\mathbf{y}})$, where $(G, \mathbf{x}) \in \mathcal{F}$ and $\hat{\mathbf{y}}$ is some tentative output vector (i.e. an output vector that may be incorrect), and returns a configuration $(G', \mathbf{x}')$ such that $G'$ is an induced subgraph of $G$ and $(G', \mathbf{x}') \in \mathcal{F}$. Thus, at each node $v$ of $G$, the pruning
algorithm $\mathcal{P}$ returns a bit $b(v)$ that indicates whether $v$ belongs to some selected subset $W$ of nodes of $G$ to be pruned. (Recall that the idea is to assume that nodes in $W$ have a satisfying tentative output value and that they can be excluded from further computations.) Note that $\mathbf{x}'$ may be different than $\mathbf{x}$ restricted to the nodes outside $W$.
Consider now an output vector $\mathbf{y}'$ for the nodes in $V(G')$. The combined output vector $\mathbf{y}$ of the vectors $\hat{\mathbf{y}}$ and $\mathbf{y}'$ is the output vector that is a combination of $\hat{\mathbf{y}}$ restricted to the nodes in $W$ and $\mathbf{y}'$ restricted to the nodes in $G'$, i.e., $\mathbf{y}(v) = \hat{\mathbf{y}}(v)$ if $v \in W$ and $\mathbf{y}(v) = \mathbf{y}'(v)$ otherwise. A pruning algorithm $\mathcal{P}$ for a problem $\Pi$ must satisfy the following properties.
- **Solution detection:** if $(G, \mathbf{x}, \hat{\mathbf{y}}) \in \Pi$, then $W = V(G)$, that is, $\mathcal{P}(G, \mathbf{x}, \hat{\mathbf{y}}) = (\emptyset, \emptyset)$.
- **Gluing:** if $\mathcal{P}(G, \mathbf{x}, \hat{\mathbf{y}}) = (G', \mathbf{x}')$ and $\mathbf{y}'$ is a solution for $(G', \mathbf{x}')$, i.e., $(G', \mathbf{x}', \mathbf{y}') \in \Pi$, then the combined output vector $\mathbf{y}$ is a solution for $(G, \mathbf{x})$, i.e., $(G, \mathbf{x}, \mathbf{y}) \in \Pi$.
As mentioned earlier, it follows from the gluing property that if the pruning algorithm $\mathcal{P}$ returns $(\emptyset, \emptyset)$ (i.e., all nodes are pruned) then $(G, \mathbf{x}, \hat{\mathbf{y}}) \in \Pi$.
The pruning algorithm $\mathcal{P}$ is monotone with respect to a parameter $\mathbf{p}$ if $\mathbf{p}(G, \mathbf{x}) \ge \mathbf{p}(\mathcal{P}(G, \mathbf{x}, \hat{\mathbf{y}}))$ for every $(G, \mathbf{x}) \in \mathcal{F}$ and every tentative output vector $\hat{\mathbf{y}}$. The pruning algorithm $\mathcal{P}$ is monotone with respect to a collection of parameters $\Gamma$ if $\mathcal{P}$ is monotone with respect to every parameter $\mathbf{p} \in \Gamma$. In such a case, we may also say that $\mathcal{P}$ is $\Gamma$-monotone. The following assertions follow from the definitions.
**Observation 3.1** Let $\mathcal{P}$ be a pruning algorithm.
1. Algorithm $\mathcal{P}$ is monotone with respect to any non-decreasing graph-parameter.
2. If, for every configuration $(G, \mathbf{x})$, the configuration $(G', \mathbf{x}')$ returned by $\mathcal{P}$ satisfies $\mathbf{x}'(v) = \mathbf{x}(v)$ for every $v \in V(G) \setminus W$, then $\mathcal{P}$ is monotone with respect to any non-decreasing parameter.
For simplicity, we impose that the running time of a pruning algorithm $\mathcal{P}$ be constant. We shall elaborate on general pruning algorithms at the end of the paper.
We now give examples of pruning algorithms for several problems, namely, (2, β)-Ruling set for a constant integer β (recall that MIS is precisely (2, 1)-Ruling set), and maximal matching. These pruning algorithms ignore the input of the nodes. Thus, by Observation 3.1, they are monotone with respect to any non-decreasing parameter.

samples/texts/1660153/page_11.md

**The (2, β)-ruling set pruning algorithm:** Let $\beta$ be a constant integer. We define a pruning algorithm $P_{(2,\beta)}$ for the (2, $\beta$)-ruling set problem as follows. Given a triplet $(G, \mathbf{x}, \hat{\mathbf{y}})$, let $W$ be the set of nodes $u$ satisfying one of the following two conditions.
- $\hat{\mathbf{y}}(u) = 1$ and $\hat{\mathbf{y}}(v) = 0$ for all $v \in N(u)$, or
- $\hat{\mathbf{y}}(u) = 0$ and $\exists v \in B_G(u, \beta)$ such that $\hat{\mathbf{y}}(v) = 1$ and
$\hat{\mathbf{y}}(w) = 0$ for all $w \in N(v)$.
The question of whether a node $u$ belongs to $W$ can be determined by inspecting $B_G(u, 1 + \beta)$, the ball of radius $1 + \beta$ around $u$. Hence, we obtain the following.
**Observation 3.2** *Algorithm* $P_{(2,\beta)}$ is a pruning algorithm for the (2, $\beta$)-ruling set problem, running in time $1 + \beta$. (In particular, $P_{(2,1)}$ is a pruning algorithm for the MIS problem running in time 2.) Furthermore, $P_{(2,\beta)}$ is monotone with respect to any non-decreasing parameter.
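The two membership conditions can be checked centrally as follows. This is an illustrative sketch (the helper names are ours): graphs are adjacency-set dictionaries, and the ball $B_G(u, \beta)$ is computed by a breadth-first search of depth $\beta$.

```python
from collections import deque

def prune_ruling_set(graph, y_hat, beta=1):
    """Membership test for the set W of P_(2,beta).  graph maps each node
    to its set of neighbours; y_hat is the tentative 0/1 output vector."""
    def locally_valid(v):
        # y_hat(v) = 1 and y_hat(w) = 0 for every neighbour w of v
        return y_hat[v] == 1 and all(y_hat[w] == 0 for w in graph[v])
    def ball(u, radius):
        # nodes at distance <= radius from u (breadth-first search)
        dist, queue = {u: 0}, deque([u])
        while queue:
            v = queue.popleft()
            if dist[v] < radius:
                for w in graph[v]:
                    if w not in dist:
                        dist[w] = dist[v] + 1
                        queue.append(w)
        return set(dist)
    W = set()
    for u in graph:
        if locally_valid(u):
            W.add(u)                       # first condition
        elif y_hat[u] == 0 and any(locally_valid(v) for v in ball(u, beta)):
            W.add(u)                       # second condition
    return W
```

With `beta=1` this is exactly the membership test of the MIS pruning algorithm $P_{(2,1)}$, and deciding membership indeed only inspects nodes within distance $1 + \beta$ of $u$.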
**The maximal matching problem:** We define a pruning algorithm $P_{MM}$ as follows. Given a tentative output vector $\hat{\mathbf{y}}$, recall that $u$ and $v$ are matched when $u$ and $v$ are neighbors, $\hat{\mathbf{y}}(u) = \hat{\mathbf{y}}(v)$ and $\hat{\mathbf{y}}(w) \neq \hat{\mathbf{y}}(u)$ for every $w \in (N_G(u) \cup N_G(v)) \setminus \{u, v\}$. Set $W$ to be the set of nodes $u$ satisfying one of the following conditions.
- $\exists v \in N(u)$ such that $u$ and $v$ are matched, or
- $\forall v \in N(u), \exists w \neq u$ such that $v$ and $w$ are matched.
**Observation 3.3** *Algorithm* $P_{MM}$ is a pruning algorithm for MM whose running time is 3. Furthermore, $P_{MM}$ is monotone with respect to any non-decreasing parameter.
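A sketch of the corresponding membership test, with graphs again represented as adjacency-set dictionaries (the helper names are ours, not the paper's):

```python
def prune_mm(graph, y_hat):
    """Membership test for the set W of P_MM.  Two neighbours u, v are
    'matched' when y_hat(u) = y_hat(v) and every other node in
    N(u) ∪ N(v) carries a different value."""
    def matched(u, v):
        if v not in graph[u] or y_hat[u] != y_hat[v]:
            return False
        others = (graph[u] | graph[v]) - {u, v}
        return all(y_hat[w] != y_hat[u] for w in others)
    W = set()
    for u in graph:
        if any(matched(u, v) for v in graph[u]):
            W.add(u)                  # first condition: u itself is matched
        elif all(any(matched(v, w) for w in graph[v] if w != u)
                 for v in graph[u]):
            W.add(u)                  # second condition: every neighbour of u
        # is matched to some node other than u
    return W
```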
We exhibit several applications of pruning algorithms. The main application appears in the next section, where we show how pruning algorithms can be used to transform non-uniform algorithms into uniform ones. Before we continue, we need the concept of alternating algorithms.
### 3.3 Alternating Algorithms
A pruning algorithm can be used in conjunction with a sequence of algorithms as follows. Let $\mathcal{F}$ be a collection of instances for some problem $\Pi$. For each $i \in \mathbb{N}$, let $\mathcal{A}_i$ be an algorithm defined on $\mathcal{F}$. Algorithm $\mathcal{A}_i$ does not necessarily solve $\Pi$; it is only assumed to produce some output.
Let $\mathcal{P}$ be a pruning algorithm for $\Pi$ and $\mathcal{F}$, and for $i \in \mathbb{N}$, let $\mathcal{B}_i = (\mathcal{A}_i; \mathcal{P})$, that is, given an instance
$(G, \mathbf{x})$, Algorithm $\mathcal{B}_i$ first executes $\mathcal{A}_i$, which returns an output vector $\mathbf{y}$ for the nodes of $G$ and, subsequently, Algorithm $\mathcal{P}$ is executed over the triplet $(G, \mathbf{x}, \mathbf{y})$. We define the alternating algorithm $\pi$ for $(\mathcal{A}_i)_{i \in \mathbb{N}}$ and $\mathcal{P}$ as follows. The alternating algorithm $\pi = \pi((\mathcal{A}_i)_{i \in \mathbb{N}}, \mathcal{P})$ executes the algorithms $\mathcal{B}_i$ for $i = 1, 2, 3, \dots$ one after the other: let $(G_1, \mathbf{x}_1) = (G, \mathbf{x})$ be the initial instance given to $\pi$; for $i \in \mathbb{N}$, Algorithm $\mathcal{A}_i$ is executed on the instance $(G_i, \mathbf{x}_i)$ and returns the output vector $\mathbf{y}_i$. The subsequent pruning algorithm $\mathcal{P}$ takes the triplet $(G_i, \mathbf{x}_i, \mathbf{y}_i)$ as input and produces the instance $(G_{i+1}, \mathbf{x}_{i+1})$. See Figure 1 for a schematic view of an alternating algorithm. The definition extends to a finite sequence $(\mathcal{A}_i)_{i=1}^k$ of algorithms in a natural way; the alternating algorithm for $(\mathcal{A}_i)_{i=1}^k$ and $\mathcal{P}$ being $\mathcal{A}_1; \mathcal{P}; \mathcal{A}_2; \mathcal{P}; \dots; \mathcal{A}_k; \mathcal{P}$.
The alternating algorithm $\pi$ terminates on an instance $(G, \mathbf{x}) \in \mathcal{F}$ if there exists $k$ such that $V(G_k) = \emptyset$. Observe that in such a case, the tail $\mathcal{B}_k; \mathcal{B}_{k+1}; \dots$ of $\pi$ is trivial. The output vector $\mathbf{y}$ of a terminating alternating algorithm $\pi$ is defined as the combination of the output vectors $\mathbf{y}_1, \mathbf{y}_2, \mathbf{y}_3, \dots$. Specifically, for $s \in [1, k-1]$, let $W_s = V(G_s) \setminus V(G_{s+1})$. (Observe that $W_s$ is precisely the set of nodes pruned by the execution of the pruning algorithm $\mathcal{P}$ in $\mathcal{B}_s$.) Then, the collection $\{W_s : 1 \le s \le k-1\}$ forms a partition of $V(G)$, i.e., $W_s \cap W_{s'} = \emptyset$ if $s \ne s'$, and $\cup_{s=1}^{k-1} W_s = V(G)$. Observe that the final output $\mathbf{y}$ of $\pi$ satisfies $\mathbf{y}(u) = \mathbf{y}_s(u)$ for every node $u$, where $s$ is such that $u \in W_s$. In other words, the output of $\pi$ restricted to the nodes in $W_s$ is precisely the corresponding output of Algorithm $\mathcal{A}_s$. The next observation readily follows from the definition of pruning algorithms.
**Observation 3.4** *Consider a problem* $\Pi$, a collection of instances $\mathcal{F}$, a sequence of algorithms $(\mathcal{A}_i)_{i \in \mathbb{N}}$ defined on $\mathcal{F}$ and a pruning algorithm $\mathcal{P}$ for $\Pi$ and $\mathcal{F}$. Consider the alternating algorithm $\pi = \pi((\mathcal{A}_i)_{i \in \mathbb{N}}, \mathcal{P})$ for $(\mathcal{A}_i)_{i \in \mathbb{N}}$ and $\mathcal{P}$. If $\pi$ terminates on an instance $(G, \mathbf{x}) \in \mathcal{F}$ then it produces a correct output $\mathbf{y}$, that is, $(G, \mathbf{x}, \mathbf{y}) \in \Pi$.
In what follows, we often produce a sequence of algorithms $(\mathcal{A}_i)_{i \in \mathbb{N}}$ from an algorithm $\mathcal{A}^\Gamma$ requiring a collection $\Gamma$ of non-decreasing parameters. The general idea is to design a sequence of guesses $\tilde{\Gamma}_i$ and let $\mathcal{A}_i$ be algorithm $\mathcal{A}^\Gamma$ provided with guesses $\tilde{\Gamma}_i$. Given a pruning algorithm $\mathcal{P}$, we obtain a uniform alternating algorithm $\pi = \pi((\mathcal{A}_i)_{i \in \mathbb{N}}, \mathcal{P})$. The sequence of guesses is designed such that for any configuration $(G, \mathbf{x}) \in \mathcal{F}$, there exists some $i$ for which $\tilde{\Gamma}_i$ is a collection of good guesses for $(G, \mathbf{x})$. The crux is to obtain an execution time for $\mathcal{A}_1; \mathcal{P}; \dots; \mathcal{A}_i; \mathcal{P}$ of the same order as the execution time of $\mathcal{A}^{\Gamma}$ provided with the collection $\Gamma^*(G, \mathbf{x})$ of correct guesses.

samples/texts/1660153/page_12.md

Fig. 1 Schematic view of an alternating algorithm for $(\mathcal{A}_i)_{i \in \mathbb{N}}$ and $\mathcal{P}$.
## 4 The General Method
We now turn to the main application of pruning algorithms discussed in this paper, that is, the construction of a transformer taking a non-uniform algorithm $\mathcal{A}^\Gamma$ as a black box and producing a uniform one that enjoys the same (asymptotic) time complexity as the original non-uniform algorithm.
We begin with a few illustrative examples of our method in Subsection 4.1. Then, the general framework of our transformer is given in Subsection 4.2. This subsection introduces the concept of “sequence-number functions”, as well as a fundamental construction used in our forthcoming algorithms.
Then, in Subsection 4.3, we consider the deterministic setting: a somewhat restrictive, yet useful, transformer is given in Theorem 1. This transformer considers a single set $\Gamma$ of non-decreasing parameters $p_1, \dots, p_\ell$, and assumes that (1) the given non-uniform algorithm $\mathcal{A}^\Gamma$ depends on $\Gamma$ and (2) the running time of $\mathcal{A}^\Gamma$ is evaluated with respect to the parameters in $\Gamma$. Such a situation is customary, and occurs for instance for the best currently known MIS algorithms [4, 22, 34] as well as for the maximal matching algorithm of Hanckowiak et al. [19]. As a result, the transformer given by Theorem 1 can be used to transform each of these algorithms into a uniform one with asymptotically the same time complexity.
The transformer of Theorem 1 is extended to the randomized setting in Subsection 4.4. In Subsection 4.5, we establish Theorem 3, which generalizes both Theorem 1 and Theorem 2. Finally, we conclude the section with Theorem 4 in Subsection 4.6, which shows how to manipulate several uniform algorithms that run in unknown times to obtain a uniform algorithm that runs as fast as the fastest algorithm among those given algorithms.
### 4.1 Some Illustrative Examples
The basic idea is very simple. Consider a problem for which we have a pruning algorithm $\mathcal{P}$, and a non-uniform algorithm $\mathcal{A}$ that requires the upper bounds on some parameters to be part of the input. To obtain a uniform algorithm, we execute the pair of algorithms $(\mathcal{A}; \mathcal{P})$ in iterations, where each iteration executes $\mathcal{A}$ using a specific set of guesses for the parameters. Typically, as iterations proceed, the guesses for the parameters grow larger and larger until we reach an iteration $i$ where all the guesses are larger than the actual value of the corresponding parameters. In this iteration, the operation of $\mathcal{A}$ on $G_i$ using such guesses guarantees a correct solution on $G_i$ ($G_i$ is the graph induced by the set of nodes that were not pruned in previous iterations). The solution detection property of the pruning algorithm then guarantees that the execution terminates in this iteration and hence, Observation 3.4 guarantees that the output of all nodes combines to a global solution on $G$. To bound the running time, we shall make sure that the total running time is dominated by the running time of the last iteration, and that this last iteration is relatively fast.
There are various delicate points when using this general strategy. For example, in iterations where incorrect guesses are used, we have no control over the behavior of the non-uniform algorithm $\mathcal{A}$ and, in particular, it may run for too many rounds, perhaps even indefinitely. To overcome this obstacle, we allocate a prescribed number of rounds for each iteration; if Algorithm $\mathcal{A}$ reaches this time bound without outputting at some node $u$, then we force it to terminate with an arbitrary output. Subsequently, we run the pruning algorithm and proceed to the next iteration.
Obviously, this simple approach of running in iterations and increasing the guesses from iteration to iteration is hardly new. It was used, for example, in the context of wireless networks to compute estimates of parameters (cf., e.g., [8,31]), or to estimate the number of faults [25]. It was also used by Barenboim and Elkin [6] to avoid the necessity of having an upper bound on the arboricity $a$ in one of their MIS algorithms, although their approach increases the running time by $\log^* n$.

samples/texts/1660153/page_13.md

One of the main contributions of the current paper is the formalization and generalization of this technique, allowing it to be used for a wide variety of problems and applications. Interestingly, note that we are only concerned with getting rid of the use of some global parameters in the code of local algorithms, and not with obtaining estimates for them (in particular, when our algorithms terminate, a node has no guarantee to have upper bounds on these global parameters).
To illustrate the method, let us consider the non-uniform MIS algorithm of Panconesi and Srinivasan [34]. The code of Algorithm $\mathcal{A}$ uses an upper bound $\tilde{n}$ on the number of nodes $n$, and runs in time at most $f(\tilde{n}) = 2^{O(\sqrt{\log \tilde{n}})}$. Consider a pruning algorithm $\mathcal{P}_{\text{MIS}}$ for MIS (such an algorithm is given by Observation 3.2). The following sketches our technique for obtaining a uniform MIS algorithm. For each integer $i$, set $n_i = \max \{a \in \mathbf{N} : f(a) \le 2^i\}$.
In Iteration $i$, for $i = 1, 2, \dots$, we first execute Algorithm $\mathcal{A}$ using the guess $n_i$ (as an input serving as an upper bound for the number of nodes) for precisely $2^i$ rounds. Subsequently, we run the pruning algorithm $\mathcal{P}_{\text{MIS}}$. When the pruning algorithm terminates, we execute the next iteration on the non-pruned nodes. Let $s$ be the integer such that $2^{s-1} < f(n) \le 2^s$, where $n$ is the number of nodes of the input graph. By the definition, $n \le n_s$. Therefore, the application of $\mathcal{A}$ in Iteration $s$ uses a guess $n_s$ that is indeed good, i.e., larger than the number of nodes. Moreover, this execution of $\mathcal{A}$ is completed before the prescribed deadline of $2^s$ rounds expires because its running time is at most $f(n_s) \le 2^s$. Hence, we are guaranteed to have a correct solution by the end of Iteration $s$. The running time is thus at most $\sum_{i=1}^s 2^i = O(f(n))$.
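The accounting behind this sketch can be checked with a toy computation. The following helper is ours (not the paper's): it computes the guesses $n_i = \max\{a \in \mathbf{N} : f(a) \le 2^i\}$ by a naive linear scan, runs the schedule until the first iteration whose guess is good, and returns that iteration index together with the total round budget $\sum_{i=1}^{s} 2^i$.

```python
def doubling_schedule(f, n):
    """Simulate the iteration schedule: iteration i allots 2**i rounds and
    uses the guess n_i = max{a : f(a) <= 2**i}.  Returns (s, total) where s
    is the first iteration whose guess satisfies n_s >= n and total is the
    sum of all deadlines spent so far."""
    total, i = 0, 1
    while True:
        deadline = 2 ** i
        total += deadline
        # n_i = max{a : f(a) <= 2**i}; a linear scan is enough for a sketch
        n_i = 0
        while f(n_i + 1) <= deadline:
            n_i += 1
        if n_i >= n:          # the guess is good: A runs correctly and
            return i, total   # finishes within f(n_i) <= 2**i rounds
        i += 1
```

For instance, with the artificial bound $f(a) = a$ and $n = 100$, the schedule stops at iteration $s = 7$ (guess $n_7 = 128 \ge 100$) having spent $2 + 4 + \dots + 128 = 254 \le 4 f(n)$ rounds in total, matching the $O(f(n))$ bound.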
This method can sometimes be extended to simultaneously remove the use of several parameters in the code of a local algorithm. For example, consider the MIS algorithm of Barenboim and Elkin [4] (or that of Kuhn [22]), which uses upper bounds $\tilde{n}$ and $\tilde{\Delta}$ on $n$ and $\Delta$, respectively, and runs in time $f(\tilde{n}, \tilde{\Delta}) = f_1(\tilde{n}) + f_2(\tilde{\Delta})$, where $f_1(\tilde{n}) = O(\log^* \tilde{n})$ and $f_2(\tilde{\Delta}) = O(\tilde{\Delta})$. The following sketches our method for obtaining a corresponding uniform MIS algorithm that runs in time $O(f(n, \Delta))$. For each integer $i$, set $n_i = \max \{a \in \mathbf{N} : f_1(a) \le 2^i\}$ and $\Delta_i = \max \{a \in \mathbf{N} : f_2(a) \le 2^i\}$. In Iteration $i$, for $i = 1, 2, \dots$, we first execute Algorithm $\mathcal{A}$ using the guesses $n_i$ and $\Delta_i$, but this time the execution lasts for precisely $2 \cdot 2^i$ rounds. (The factor 2 in the running time of an iteration follows from the fact that the running time is the sum of two non-negative ascending functions of two different parameters, namely $f_1(n)$ and $f_2(\Delta)$.) Subsequently, we run the pruning algorithm $\mathcal{P}_{\text{MIS}}$, and as before, when the pruning algorithm terminates, we execute the next iteration on the non-pruned nodes.
Now, let $s$ be the integer such that $2^{s-1} < f(n, \Delta) \le 2^s$. By the definition, $n \le n_s$ and $\Delta \le \Delta_s$. Hence, the application of $\mathcal{A}$ in Iteration $s$ uses guesses that are indeed good. This execution of $\mathcal{A}$ is completed before the prescribed deadline of $2^{s+1}$ rounds expires because its running time is at most $f_1(n_s) + f_2(\Delta_s) \le 2^{s+1}$. Thus, the algorithm consists of at most $s$ iterations. Since the running time of the whole execution is dominated by the running time of the last iteration, the total running time is $O(2^{s+1}) = O(f(n, \Delta))$.
The above discussion shall be formalized in Theorem 1. Before stating and proving it, though, we need one more concept, called “sequence-number function”, which gives a certain measure for the “separation” between the variables in a function defined over $\mathbf{N}^\ell$.
### 4.2 The General Framework
Consider a function $f: \mathbf{N}^\ell \to \mathbf{R}^+$. A set-sequence for $f$ is a sequence $(S_f(i))_{i \in \mathbb{N}}$ such that for every $i \in \mathbb{N}$,
(i) $S_f(i)$ is a finite subset (possibly empty) of $\mathbf{N}^\ell$; and
(ii) if $\underline{y} \in \mathbf{N}^\ell$ and $f(\underline{y}) \le i$, then $\underline{y}$ is dominated by a vector $\underline{x}$ that belongs to $S_f(i)$.
The set-sequence $(S_f(i))_{i \in \mathbb{N}}$ is bounded if there exists a positive number $c$ such that
$$\forall i \in \mathbb{N}, \quad \forall \underline{x} \in S_f(i), \quad f(\underline{x}) \le c \cdot i.$$
The constant $c$ is referred to as the bounding constant of $(S_f(i))_{i \in \mathbb{N}}$. Note that a set-sequence may contain empty sets.
A function $s_f: \mathbb{N} \to \mathbb{N}$ is a sequence-number function for $f$ if
(1) $s_f$ is moderately-slow; and
(2) there exists a bounded set-sequence $(S_f(i))_{i \in \mathbb{N}}$ for $f$ such that
$$\forall i \in \mathbb{N}, \quad |S_f(i)| \le s_f(i).$$
For example, consider the case where $f: \mathbf{N}^\ell \to \mathbf{R}$ is additive, i.e., $f(x_1, \dots, x_\ell) = \sum_{k=1}^\ell f_k(x_k)$, where $f_1, \dots, f_\ell$ are non-negative ascending functions. Here, the constant function 1 is a sequence-number function for $\sum_{k=1}^\ell f_k$. Indeed, for $i \in \mathbb{N}$, let $S_f(i) = \{\underline{x}\}$, where the k-th coordinate of $\underline{x}$ is defined to be the largest integer $y$ such that $f_k(y) \le i$ (if such an integer $y$ exists, otherwise, $S_f(i)$ is empty). Hence, if $f(\underline{y}) \le i$ then we deduce that $f_k(\underline{y}_k) \le i$ as each of the functions $f_1, \dots, f_\ell$ is non-negative. Therefore, $\underline{x}$ dominates $\underline{y}$. Consequently, $(S_f(i))_{i \in \mathbb{N}}$ is a set-sequence for $f$, which is bounded since
$$f(\underline{x}) \le \sum_{k=1}^\ell f_k(\underline{x}_k) \le \ell \cdot i,$$

samples/texts/1660153/page_14.md

and $\ell$ does not depend on $i$ (the bounding constant $c$ is equal to $\ell$ in this case).
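The single-vector set-sequence for an additive $f$ can be written out explicitly. The following is a small sketch of ours, assuming each $f_k$ is ascending, non-negative, and defined on the non-negative integers (and scanning coordinates naively):

```python
def additive_set_sequence(fs, i):
    """S_f(i) for an additive f(x_1, ..., x_l) = sum_k f_k(x_k): a single
    vector x whose k-th coordinate is the largest y with f_k(y) <= i.
    Every y with f(y) <= i is then dominated by x, and f(x) <= l * i."""
    x = []
    for fk in fs:
        if fk(0) > i:
            return []        # S_f(i) is empty: no coordinate value fits
        y = 0
        while fk(y + 1) <= i:
            y += 1           # largest y with fk(y) <= i, by linear scan
        x.append(y)
    return [tuple(x)]
```

For instance, with $f(x_1, x_2) = x_1 + 2x_2$ and $i = 10$, the construction returns the single vector $(10, 5)$, which dominates every $(y_1, y_2)$ with $y_1 + 2y_2 \le 10$ and satisfies $f(10, 5) = 20 \le 2i$.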
As another example, consider the case where $f: \mathbb{N}^2 \rightarrow \mathbf{R}$ is given by $f(x_1, x_2) = f_1(x_1) \cdot f_2(x_2)$, where $f_1$ and $f_2$ are ascending functions taking values at least 1. Then, the function $s_f(i) = \lfloor \log i \rfloor + 1$ is a sequence-number function for $f$. Indeed, for $i \in \mathbb{N}$ let $S_f(i) = \{(x_1^j, x_2^j) : j \in [0, \lfloor \log i \rfloor]\}$ where $x_1^j$ is the largest integer $y_1$ such that $f_1(y_1) \le 2^j$ and $x_2^j$ is the largest integer $y_2$ such that $f_2(y_2) \le 2^{\log i - j + 1}$ for each $j \in [0, \lfloor \log i \rfloor]$ (if such integers $y_1$ and $y_2$ exist, otherwise we do not define the pair $(x_1^j, x_2^j)$). Again, a straightforward check ensures that $(S_f(i))_{i \in \mathbb{N}}$ is a bounded set-sequence for $f$ with bounding constant 2. On the other hand, it is interesting to note that not all functions admit a sequence-number function, as one can see by considering the min function over $\mathbb{N}^2$. The following observation summarizes the two aforementioned examples.
**Observation 4.1**
* The constant function 1 is a sequence-number function for any additive function.
* Let $f: \mathbb{N}^2 \to \mathbf{R}$ be a function given by $f(x_1, x_2) = f_1(x_1) \cdot f_2(x_2)$, where $f_1 \ge 1$ and $f_2 \ge 1$ are ascending functions. Then, the function $s_f(i) = \lfloor \log i \rfloor + 1$ is a sequence-number function for $f$.
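For the multiplicative case, the set $S_f(i)$ contains one pair per scale $j \in [0, \lfloor \log i \rfloor]$. A sketch of its construction (ours, assuming $f_1, f_2$ are ascending with values at least 1, and using naive scans):

```python
from math import floor, log2

def product_set_sequence(f1, f2, i):
    """S_f(i) for f(x1, x2) = f1(x1) * f2(x2): one pair per scale
    j = 0..floor(log i), so |S_f(i)| <= floor(log i) + 1 = s_f(i)."""
    def largest(fk, bound):
        # largest y >= 1 with fk(y) <= bound, or None if there is none
        if fk(1) > bound:
            return None
        y = 1
        while fk(y + 1) <= bound:
            y += 1
        return y
    S = []
    for j in range(floor(log2(i)) + 1):
        x1 = largest(f1, 2 ** j)                   # f1 budget: 2^j
        x2 = largest(f2, 2 ** (log2(i) - j + 1))   # f2 budget: 2^(log i - j + 1)
        if x1 is not None and x2 is not None:
            S.append((x1, x2))
    return S
```

Each pair satisfies $f(x_1^j, x_2^j) \le 2^j \cdot 2^{\log i - j + 1} = 2i$, giving the bounding constant 2, and any $(y_1, y_2)$ with $f(y_1, y_2) \le i$ is dominated by the pair at scale $j = \lceil \log f_1(y_1) \rceil$.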
We now give an explicit construction of a local algorithm $\pi$, which will be used to prove the forthcoming theorems.
Consider a problem $\Pi$ and a family of instances $\mathcal{F}$. Assume that $\mathcal{P}$ is a pruning algorithm for $\Pi$. Let $\mathcal{A}^\Gamma$ be a deterministic algorithm for $\Pi$ and $\mathcal{F}$ depending on a set $\Gamma$ of parameters $\mathbf{p}_1, \dots, \mathbf{p}_\ell$. In addition, fix an integer $c$ and let $(S_i)_{i \in \mathbb{N}}$ be a family of (possibly empty) subsets of $\mathbb{N}^\ell$.
The algorithm $\pi$ runs in iterations, each of which can be seen as a uniform alternating algorithm that operates on the configurations in $\mathcal{F}$.
Fix $i \in \mathbb{N}$ and let us write $S_i = \{\underline{x}^1, \dots, \underline{x}^{J_i}\}$. For every $j \in [1, J_i]$, consider the uniform algorithm $\mathcal{A}_{j,i}$ that consists of running $\mathcal{A}^\Gamma$ with the vector of guesses $\underline{x}^j$ of $S_i$. More precisely, the $k$-th coordinate of $\underline{x}^j$ is used as a guess for $\mathbf{p}_k$ for $k \in \{1, \dots, \ell\}$. Now, we define $\mathcal{A}'_{j,i}$ to be the algorithm $\mathcal{A}_{j,i}$ restricted to $c \cdot 2^i$ rounds.
An iteration of $\pi$ consists of running the uniform alternating algorithm for the sequence of uniform algorithms $\{\mathcal{A}'_{j,i}\}_{j \in [1, J_i]}$ and the pruning algorithm $\mathcal{P}$. A pseudocode description of Algorithm $\pi$ is given by Algorithm 1.
**Algorithm 1:** The algorithm $\pi$.
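The pseudocode of Algorithm 1 appears as a figure and is not reproduced here; the following is a reconstruction of its control flow from the surrounding text, as a sequential sketch in which the callables `S`, `run_A`, and `prune` stand in for the guess sets $S_i$, the truncated runs $\mathcal{A}'_{j,i}$, and the pruning algorithm $\mathcal{P}$.

```python
def algorithm_pi(nodes, S, run_A, prune, c=1):
    """Outer loop: iterations i = 1, 2, ...; inner loop: sub-iterations
    over the guess vectors of S_i, each allotted c * 2**i rounds."""
    output, i = {}, 1
    while nodes:
        for guess in S(i):                           # sub-iterations j = 1..J_i
            y_hat = run_A(nodes, guess, c * 2 ** i)  # A' truncated to c*2^i rounds
            nodes, partial = prune(nodes, y_hat)     # P prunes the valid areas
            output.update(partial)                   # pruned nodes keep their output
            if not nodes:
                break
        i += 1
    return output
```

As a toy instantiation, take an algorithm that answers correctly only when its guess is at least $n$ and a pruning step that removes all nodes exactly when the tentative output is globally correct; the loop then terminates at the first iteration whose guess is good.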
We are now ready to state and prove Theorem 1, which deals with deterministic local algorithms.
### 4.3 The Deterministic Case
Theorem 1 considers a single set $\Gamma$ of non-decreasing parameters $\mathbf{p}_1, \dots, \mathbf{p}_\ell$, and assumes that (1) the given non-uniform algorithm $\mathcal{A}^\Gamma$ depends on $\Gamma$ and (2) the running time of $\mathcal{A}^\Gamma$ is evaluated according to the parameters in $\Gamma$. Recall that in such a case, we say that a function $f: \mathbb{N}^\ell \to \mathbf{R}^+$ upper bounds the running time of $\mathcal{A}^\Gamma$ with respect to $\Gamma$ if the running time $T_{\mathcal{A}^\Gamma}(G, \mathbf{x})$ of $\mathcal{A}^\Gamma$ for every $(G, \mathbf{x}) \in \mathcal{F}$ using a collection of good guesses $\tilde{\Gamma} = \{\tilde{\mathbf{p}}_1, \dots, \tilde{\mathbf{p}}_\ell\}$ for $(G, \mathbf{x})$ is at most $f(\tilde{\mathbf{p}}_1, \dots, \tilde{\mathbf{p}}_\ell)$.
**Theorem 1** Consider a problem $\Pi$ and a family of instances $\mathcal{F}$. Let $\mathcal{A}^\Gamma$ be a deterministic algorithm for $\Pi$ and $\mathcal{F}$ depending on a set $\Gamma$ of non-decreasing parameters. Suppose that the running time of $\mathcal{A}^\Gamma$ is bounded from above by some function $f: \mathbb{N}^\ell \to \mathbf{R}^+$ where $\ell = |\Gamma|$. Assume that there exists a sequence-number function $s_f$ for $f$, and a $\Gamma$-monotone pruning algorithm $\mathcal{P}$ for $\Pi$ and $\mathcal{F}$. Then there exists a uniform deterministic algorithm for $\Pi$ and $\mathcal{F}$ whose running time is $O(f^* \cdot s_f(f^*))$, where $f^* = f(\Gamma^*)$.
*Proof* Let $\mathbf{p}_1, \dots, \mathbf{p}_\ell$ be the parameters in $\Gamma$. Fix a bounded set-sequence $(S_f(i))_{i \in \mathbb{N}}$ for $f$ corresponding to $s_f$ and let $c$ be the bounding constant of $(S_f(i))_{i \in \mathbb{N}}$. Set $S_i = S_f(2^i)$ and $J_i = |S_i|$, hence $J_i \le s_f(2^i)$.
The desired uniform algorithm is the algorithm $\pi$ (Algorithm 1). We shall prove that $\pi$ is correct and runs in time $O(s_f(2^m) \cdot 2^m)$ over every configuration in $\mathcal{F}$, where $m = \lfloor \log f^* \rfloor$.
Fix $i \in \mathbb{N}$ and let us write $S_i = \{\underline{x}^1, \dots, \underline{x}^{J_i}\}$. Each iteration of the inner loop of $\pi$ is called *Sub-iteration*,

samples/texts/1660153/page_15.md

while *Iteration* is reserved for iterations of the outer loop. As written in the pseudocode description of $\pi$ given by Algorithm 1, $(G_{j,i}, \mathbf{x}_{j,i})$ is the configuration over which $\pi$ operates during Sub-iteration $j$ of Iteration $i$, for $j \in [1, J_i]$.
Let us prove that Algorithm $\pi$ is correct. Fix a configuration $(G, \mathbf{x})$ and set $\mathbf{p}_r^* = \mathbf{p}_r(G, \mathbf{x})$ for $r \in [1, \ell]$. We consider the operation of $\pi$ on $(G, \mathbf{x})$. Setting $f^* = f(\mathbf{p}_1^*, \dots, \mathbf{p}_\ell^*)$, we know that $f^*$ is an upper bound on the running time of $\mathcal{A}^\Gamma$ over $(G, \mathbf{x})$, assuming that $\mathcal{A}^\Gamma$ uses the vector $\Gamma^*$ of correct guesses $\mathbf{p}_1^*, \dots, \mathbf{p}_\ell^*$. Let $s$ be the least integer such that $f^* \le 2^s$. By the definition, there exists $j^* \in [1, J_s]$, such that $\underline{x}^{j^*}$ dominates $(\mathbf{p}_1^{*}, \dots, \mathbf{p}_\ell^{*})$.
The monotonicity property of $\mathcal{P}$ implies that $\mathbf{p}_r(G_{j-1,i}, \mathbf{x}_{j-1,i}) \ge \mathbf{p}_r(G_{j,i}, \mathbf{x}_{j,i})$ for every $r \in [1, \ell]$. Thus, we infer by induction on the sub-iterations that $\mathbf{p}_r^* = \mathbf{p}_r(G, \mathbf{x}) \ge \mathbf{p}_r(G_{j,i}, \mathbf{x}_{j,i})$ for every $i \in \mathbb{N}$, $j \in [1, J_i]$ and $r \in [1, \ell]$.
Now, let us consider Iteration $s$ of $\pi$. Assume that some nodes are still active during Iteration $s$ of $\pi$, that is, $V(G_s)$ is not empty. Iteration $s$ of $\pi$ is composed of $J_s$ sub-iterations. During Sub-iteration $j$, the algorithm $\mathcal{A}'_{j,s}; \mathcal{P}$ is executed over $(G_{j,s}, \mathbf{x}_{j,s})$. We know that $\mathbf{p}_r^* \ge \mathbf{p}_r(G_{j,s}, \mathbf{x}_{j,s})$ for every $j \in [1, J_s]$, and every $r \in [1, \ell]$. So, in Sub-iteration $j^*$ of Iteration $s$, we have $\underline{x}^{j^*}_r \ge \mathbf{p}_r^* \ge \mathbf{p}_r(G_{j^*,s}, \mathbf{x}_{j^*,s})$ for every $r \in [1, \ell]$.
Sub-iteration $j^*$ consists of first running Algorithm $\mathcal{A}'_{j^*,s}$, which amounts to running $\mathcal{A}^\Gamma$ for $c \cdot 2^s$ rounds using the vector of guesses $\underline{x}^{j^*}$. By the definition of $S_f(2^s)$, it follows that $f(\underline{x}^{j^*}) \le c \cdot 2^s$. Hence, this execution of Algorithm $\mathcal{A}^\Gamma$ is actually completed by time $c \cdot 2^s$. Furthermore, since $\underline{x}^{j^*}$ dominates $(\mathbf{p}_1(G_{j^*,s}, \mathbf{x}_{j^*,s}), \dots, \mathbf{p}_\ell(G_{j^*,s}, \mathbf{x}_{j^*,s}))$, the vector of guesses used by Algorithm $\mathcal{A}^\Gamma$ is good, and hence the algorithm outputs a vector $\mathbf{y}_{j^*,s}$ satisfying $(G_{j^*,s}, \mathbf{x}_{j^*,s}, \mathbf{y}_{j^*,s}) \in \Pi$. By the solution detection property, the subsequent pruning algorithm (still in Sub-iteration $j^*$ of Iteration $s$) selects $W_{j^*,s} = V(G_{j^*,s})$. By Observation 3.4, it follows that $\pi$ is correct.
|
| 10 |
+
|
| 11 |
+
It remains to prove that the running time is $O(s_f(f^*) \cdot f^*)$. Let $T_0$ be the running time of $\mathcal{P}$. Observe that Iteration $i$ of $\pi$ takes at most $J_i(c \cdot 2^i + T_0)$ rounds, which is $O(s_f(2^i) \cdot 2^i)$ rounds. Since $\pi$ consists of at most $s$ iterations, the running time of $\pi$ is bounded by $\sum_{i=1}^s s_f(2^i) \cdot 2^i$, which is $O(s_f(2^s) \cdot 2^s)$ because $s_f$ is non-decreasing. Moreover,
|
| 12 |
+
|
| 13 |
+
$$O(s_f(2^s) \cdot 2^s) = O(s_f(2 \cdot f^*) \cdot 2^s) = O(s_f(f^*) \cdot f^*)$$
|
| 14 |
+
|
| 15 |
+
since $2^{s-1} < f^* \le 2^s$ and $s_f$ is moderately-slow (hence, in particular, non-decreasing). Therefore, the running time of $\pi$ is bounded by $O(s_f(f^*) \cdot f^*)$. $\square$
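To make the constant-factor overhead of the doubling schedule concrete, here is a small numeric sketch (ours, not part of the paper) for the additive case, where $s_f \equiv 1$, each iteration consists of a single sub-iteration, and Iteration $i$ runs $\mathcal{A}^\Gamma$ for $c \cdot 2^i$ rounds; the function name `doubling_rounds` is hypothetical.

```python
# Hypothetical sketch: simulate the doubling schedule of Theorem 1 in the
# additive case, where the set-sequence S_f(2^i) has a single element
# (s_f = 1) and each iteration runs A^Gamma for c * 2^i rounds.

def doubling_rounds(f_star, c=1):
    """Total rounds spent until the first Iteration s with 2**s >= f_star."""
    total, i = 0, 1
    while True:
        total += c * 2 ** i        # one sub-iteration of c * 2^i rounds
        if 2 ** i >= f_star:       # the guess now dominates, so A^Gamma succeeds
            return total
        i += 1

# The overhead over the non-uniform time f* is a constant factor (< 4).
for f_star in (3, 10, 1000, 12345):
    assert doubling_rounds(f_star) < 4 * f_star
```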
By Observation 4.1, the constant function $s_f = 1$ is a sequence number function for any additive function $f$. Hence, Corollary 1(vi) follows directly by applying Theorem 1 to the maximal matching algorithm of Hanckowiak et al. [19], and using Observation 3.3.

In addition, using Observation 3.2, Theorem 1 allows us to transform each of the MIS algorithms in [4, 22, 34] into a uniform one with asymptotically the same time complexity. We thus obtain the following corollary.

**Corollary 2** Consider the family $\mathcal{F}$ of all graphs.

– There exists a uniform deterministic MIS algorithm for $\mathcal{F}$ running in time $O(\Delta + \log^* n)$.

– There exists a uniform deterministic MIS algorithm for $\mathcal{F}$ running in time $2^{O(\sqrt{\log n})}$.

Recall that Barenboim and Elkin [5] devised, for every $\delta > 0$, a (non-uniform) deterministic MIS algorithm for the family of all graphs running in time $f(a,n) = O(a + a^\delta \log n)$. Fix $\epsilon \in (0,1)$ and consider the family $F_{\text{large}}$ of graphs with arboricity $a > \log^{1+\epsilon/2} n$. It follows from [5] (applied with, e.g., $\delta = \epsilon/3$) that there exists a (non-uniform) deterministic MIS algorithm for $F_{\text{large}}$ running in time $O(a)$. Hence, using Observation 3.2 and Theorem 1, we obtain a uniform deterministic MIS algorithm for $F_{\text{large}}$ running in $O(a)$ time.

Next, let $F_{\text{med}}$ be the family of graphs with arboricity $a$ such that $\log^{1/3} n < a \le \log^{1+\epsilon/2} n$. Since $a \le \log^{1+\epsilon/2} n$, it follows that $a^{1-\epsilon/2} < \log n$, and hence, $a < a^{\epsilon/2} \log n$. By [5], applied with $\delta = \epsilon/2$, there exists a deterministic MIS algorithm for $F_{\text{med}}$ running in time $f_{\text{med}} = O(a^{\epsilon/2} \log n)$. Note that by Observation 4.1, the sequence number for $f_{\text{med}}$ is $s_{f_{\text{med}}}(f_{\text{med}}) = O(\log f_{\text{med}}) = O(\log \log n)$. Hence, by combining Observation 3.2 and Theorem 1, we obtain a uniform MIS algorithm for $F_{\text{med}}$ running in time $O(a^{\epsilon/2} (\log n) (\log \log n)) = O(a^\epsilon \log n)$. (This last equality follows from the fact that $\log^{1/3} n < a$.)¹ Summarizing the above discussion, we obtain the following.

¹ In fact, we could have used in the definition of $F_{\text{med}}$ any small constant instead of $1/3$, but $1/3$ is sufficiently good for our purposes as, anyway, this result will be combined with better results for $a = o(\sqrt{\log n})$, which shall be established later on, in Corollary 4.

**Corollary 3** For every $\epsilon > 0$, there exist the following uniform deterministic MIS algorithms:

– For the family $F_{\text{large}}$, running in $O(a)$ time,

– For the family $F_{\text{med}}$, running in $O(a^\epsilon \log n)$ time.

## 4.4 The Randomized Case

We now show how to extend Theorem 1 to the randomized setting. More specifically, we replace the given
non-uniform deterministic algorithm of Theorem 1 by a non-uniform weak Monte-Carlo algorithm $\mathcal{A}^\Gamma$ and produce a uniform Las Vegas one. This transformer is more sophisticated than the one given in Theorem 1, and requires the use of sub-iterations for bounding the expected running time and the probability of success of the resulting Las Vegas algorithm.

**Theorem 2** Consider a problem $\Pi$ and a family of instances $\mathcal{F}$. Let $\mathcal{A}^\Gamma$ be a weak Monte-Carlo algorithm for $\Pi$ and $\mathcal{F}$ depending on a set $\Gamma$ of non-decreasing parameters. Suppose that the running time of $\mathcal{A}^\Gamma$ is bounded from above by some function $f: \mathbb{N}^\ell \to \mathbb{R}^+$, where $\ell = |\Gamma|$. Assume that there exists a sequence-number function $s_f$ for $f$, and a $\Gamma$-monotone pruning algorithm $\mathcal{P}$ for $\Pi$ and $\mathcal{F}$. Then there exists a uniform Las Vegas algorithm for $\Pi$ and $\mathcal{F}$ whose expected running time is $O(f^* \cdot s_f(f^*))$, where $f^* = f(\Gamma^*)$.

*Proof* Let $\mathbf{p}_1, \dots, \mathbf{p}_\ell$ be the parameters in $\Gamma$. Let $T_0$ be the running time of the pruning algorithm $\mathcal{P}$, and let $\mathcal{A}^\Gamma$ be the given weak Monte-Carlo algorithm. To simplify the notation, we assume that the success guarantee $\rho$ of $\mathcal{A}^\Gamma$ is $1/2$.

    begin
        (S_f(i))_{i in N} <- bounded set-sequence for f corresponding to s_f;
        c <- bounding constant of (S_f(i))_{i in N};
        (G_1, x_1) <- (G, x);
        for i from 1 to infinity do
            for j from 1 to i do
                S_j <- S_f(2^j);
                J_j <- |S_j|;
                (G_{1,j}, x_{1,j}) <- (G_j, x_j);
                for k from 1 to J_j do
                    A'_{k,j} <- A^Gamma restricted to c * 2^j rounds,
                               run with the vector of guesses x^k of S_j;
                    y_{k,j} <- A'_{k,j}(G_{k,j}, x_{k,j});
                    (G_{k+1,j}, x_{k+1,j}) <- P(G_{k,j}, x_{k,j}, y_{k,j});
                end
                (G_{j+1}, x_{j+1}) <- (G_{J_j+1,j}, x_{J_j+1,j});
            end
        end
    end

**Algorithm 2:** The algorithm $\tau$ in the proof of Theorem 2.

The desired uniform algorithm $\tau$ runs in iterations, where Iteration $i$ consists of running the first $i$ iterations of the algorithm $\pi$ defined in Subsection 4.2. A pseudocode description of Algorithm $\tau$ is given by Algorithm 2. As in the proof of Theorem 1, the word “Iteration” is reserved for the iterations of the outer loop of $\tau$, while “Sub-iteration” is used for the iterations of the middle loop of $\tau$.

For each positive integer $i$, let $\beta_i$ be the number of rounds used in Iteration $i$ of $\tau$. Analogously to the proof of Theorem 1, we infer that $\beta_i = O(s_f(2^i) \cdot 2^i)$. Let $\alpha_i$ be the number of rounds used during the first $i$ iterations of $\tau$. We thus have $\alpha_i = \sum_{k=1}^i \beta_k$, which is $O(s_f(2^i) \cdot 2^i)$.

It follows, using arguments similar to those given in the proof of Theorem 1, that if $\tau$ outputs, then the output vector $\mathbf{y}$ is a solution, i.e., $(G, \mathbf{x}, \mathbf{y}) \in \Pi$.

It remains to bound the running time of $\tau$. We consider the random variable $T_\tau(G, \mathbf{x})$ that stands for “the running time of $\tau$ on $(G, \mathbf{x})$”. For every integer $i$, let $\rho_i$ be the probability that $V(G_i) \neq \emptyset$ and $V(G_{i+1}) = \emptyset$, that is, $\rho_i$ is the probability that the last active node becomes inactive precisely during Iteration $i$ of $\tau$. In other words,

$$\rho_i = \mathbf{Pr}(T_\tau(G, \mathbf{x}) \in [\alpha_{i-1} + 1, \alpha_i]).$$

Setting $f^* = f(\mathbf{p}_1^*, \dots, \mathbf{p}_\ell^*)$, we know that $f^*$ is an upper bound on the running time of $\mathcal{A}^\Gamma$ over $(G, \mathbf{x})$, assuming that $\mathcal{A}^\Gamma$ uses the collection $\Gamma^*$ of correct guesses $\mathbf{p}_1^*, \dots, \mathbf{p}_\ell^*$. Consider the smallest integer $s$ such that $f^* \le 2^s$.

Since $s_f$ is moderately-slow, there is a constant $K$ such that $\alpha_{i+1} \le K \cdot \alpha_i$ for every positive integer $i$. In particular, $\alpha_{s+i} \le K^i \cdot \alpha_s$, and hence

$$
\begin{align*}
\mathbf{E}(T_\tau(G, \mathbf{x})) &\le \alpha_s \cdot \mathbf{Pr}(T_\tau(G, \mathbf{x}) \le \alpha_s) + \sum_{i=1}^\infty \alpha_{s+i} \cdot \rho_{s+i} \\
&\le \alpha_s + \alpha_s \sum_{i=1}^\infty K^i \cdot \rho_{s+i}.
\end{align*}
$$

Our next goal is to bound $\rho_{s+i}$ from above. For a positive integer $r$, let $\chi_r$ be the event that $V(G_{r+1}) \neq \emptyset$, that is, none of the first $r$ iterations of $\tau$ deactivates all the nodes and thus there is still an active node at the beginning of Iteration $r+1$ of $\tau$. Thus, $\rho_{s+i} \le \mathbf{Pr}(\chi_{s+i-1})$.

Recall that we assume that the success guarantee of $\mathcal{A}^\Gamma$ is $1/2$. Therefore, using an analysis similar to that in the proof of Theorem 1, it follows that for every positive integer $k$, the probability that an application of $B_{s+k-1}$ (in particular, during Iteration $s+i-1$) does not output the empty configuration is at most $1/2$. As a result,

$$\rho_{s+i} \leqslant \mathbf{Pr}(\chi_{s+i-1}) \leqslant \prod_{j=1}^{i} 2^{-j} = 2^{-(i^2+i)/2}.$$

Therefore,

$$
\begin{align*}
\mathbf{E}(T_\tau(G, \mathbf{x})) &\leqslant \alpha_s \left( 1 + \sum_{i=1}^{\infty} K^i \cdot 2^{-(i^2+i)/2} \right) \\
&= O(\alpha_s) = O(f^* \cdot s_f(f^*)). \qquad \square
\end{align*}
$$
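The last step uses the fact that $\sum_{i \ge 1} K^i \cdot 2^{-(i^2+i)/2}$ converges to a constant for every fixed $K$, because the quadratic exponent eventually dominates the geometric growth. A quick numeric check (ours, not part of the paper; the name `tail_sum` is hypothetical):

```python
# Numeric sanity check: the quadratic exponent -(i^2+i)/2 eventually
# beats the geometric factor K^i, so the series converges for any K.
import math

def tail_sum(K, terms=200):
    return sum(K ** i * 2.0 ** (-(i * i + i) / 2) for i in range(1, terms + 1))

for K in (2, 4, 8, 16):
    assert math.isfinite(tail_sum(K))
    # the partial sums stabilize quickly: the tail beyond 50 terms is negligible
    assert abs(tail_sum(K, 50) - tail_sum(K, 200)) < 1e-9
```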
Corollary 1(vii) follows by applying Theorem 2 to the ruling set algorithm of Schneider and Wattenhofer [36], and using the pruning algorithm given by Observation 3.2.

## 4.5 The General Theorem

Some complications arise when the correctness of the given non-uniform algorithm relies on the use of a set of parameters $\Gamma$ while its running time is evaluated with respect to another set of parameters $\Lambda$. For example, it may be the case that an upper bound on a parameter $\mathbf{p}$ is required for the correct operation of an algorithm, yet the running time of the algorithm does not depend on $\mathbf{p}$. In this case, it may not be clear how to choose the guesses for $\mathbf{p}$. (This occurs, for example, in the MIS algorithms of Barenboim and Elkin [6], where the knowledge of $n$ and the arboricity $a$ are required, yet the running time $f$ is a function of $n$ only.) Such complications can be solved when there is some relation between the parameters in $\Gamma$ and those in $\Lambda$; specifically, when $\Gamma$ is weakly-dominated by $\Lambda$. (The definition of weak domination is given in Section 2.) This issue is handled in the following theorem, which extends both Theorem 1 and Theorem 2.

**Theorem 3** Consider a problem $\Pi$, a family of instances $\mathcal{F}$ and two sets of non-decreasing parameters $\Gamma$ and $\Lambda$, where $\Gamma$ is weakly-dominated by $\Lambda$. Let $\mathcal{A}^\Gamma$ be a deterministic (respectively, weak Monte-Carlo) algorithm depending on $\Gamma$ whose running time is upper bounded by some function $f: \mathbb{N}^\ell \to \mathbb{R}^+$, where $\ell = |\Lambda|$. Assume that there exists a sequence-number function $s_f$ for $f$, and a $\Lambda \cup \Gamma$-monotone pruning algorithm $\mathcal{P}$ for $\Pi$ and $\mathcal{F}$. Then there exists a uniform deterministic (resp., Las Vegas) algorithm for $\Pi$ and $\mathcal{F}$ whose running time on every configuration $(G, \mathbf{x}) \in \mathcal{F}$ is $O(f^* \cdot s_f(f^*))$, where $f^* = f(\Lambda^*(G, \mathbf{x}))$.

*Proof* First, we consider the case where $\Gamma \subseteq \Lambda$ and next the general case.

Assume that $\Lambda = \{\mathbf{p}_1, \dots, \mathbf{p}_\ell\}$ and $\Gamma = \{\mathbf{p}_1, \dots, \mathbf{p}_r\}$, where $r \le \ell$. Then, let us simply impose that $\mathcal{A}^\Gamma$ also requires estimates for the parameters $\mathbf{p}_{r+1}, \dots, \mathbf{p}_\ell$, that is, the operation of $\mathcal{A}^\Gamma$ requires such estimates but actually ignores them after obtaining them. This way, we obtain an algorithm $\mathcal{A}^\Lambda$ depending on $\Lambda$. Since $f$ is non-decreasing, $f(\mathbf{p}_1^*, \dots, \mathbf{p}_\ell^*) \le f(\mathbf{p}_1^*, \dots, \mathbf{p}_r^*, \tilde{\mathbf{p}}_{r+1}, \dots, \tilde{\mathbf{p}}_\ell)$, where $\tilde{\mathbf{p}}_i$ is a good guess for every $i \in [r+1, \ell]$. Hence, the running time of Algorithm $\mathcal{A}^\Lambda$ is also bounded by $f$, so the conclusion follows by applying Theorems 1 and 2.

Now, let $\mathbf{p}_1, \dots, \mathbf{p}_r$ and $\mathbf{q}_1, \dots, \mathbf{q}_\ell$ be the parameters in $\Gamma$ and $\Lambda$, respectively. Recall that $r' \in [0, \min\{r, \ell\}]$ is such that $\{\mathbf{p}_{r'+1}, \mathbf{p}_{r'+2}, \dots, \mathbf{p}_r\} \cap \{\mathbf{q}_{r'+1}, \mathbf{q}_{r'+2}, \dots, \mathbf{q}_\ell\} = \emptyset$ and $\mathbf{p}_i = \mathbf{q}_i$ for every $i \in [1, r']$. Set $t = r - r'$. As $\Gamma$ is weakly-dominated by $\Lambda$, there exists a function $h: [1, t] \to [1, \ell]$ and, for each $j \in [1, t]$, an ascending function $g_j$ such that $g_j(\mathbf{p}_{r'+j}(G, \mathbf{x})) \le \mathbf{q}_{h(j)}(G, \mathbf{x})$ for every configuration $(G, \mathbf{x}) \in \mathcal{F}$. For every real number $x$, we set $g_j^{-1}(x) = \min g_j^{-1}(\{x\})$. Since $g_j$ is ascending, $g_j^{-1}(x) \ge g_j^{-1}(y)$ whenever $x \ge y$.

Let $\Lambda' = \Lambda \cup \Gamma = \{\mathbf{q}_1, \dots, \mathbf{q}_\ell, \mathbf{p}_{r'+1}, \dots, \mathbf{p}_r\}$, and recall that $f: \mathbb{N}^\ell \to \mathbb{R}^+$ is the (non-decreasing) function bounding the running time of $\mathcal{A}^\Gamma$. We define a new function $f': \mathbb{N}^{\ell+t} \to \mathbb{R}^+$ by setting

$$f'(x_1, \ldots, x_\ell, y_1, \ldots, y_t) = f(z_1, \ldots, z_\ell),$$

where for each $i \in [1, \ell]$,

$$z_i = \max (\{x_i\} \cup \{g_k(y_k) : k \in h^{-1}(\{i\})\}).$$

Let $s_f$ be a sequence-number function for $f$ and let $(S_f(i))_{i \in \mathbb{N}}$ be a corresponding bounded set-sequence with bounding constant $c$.

We assert that $s_f$ is also a sequence-number function of $f'$ and admits a corresponding bounded set-sequence with bounding constant $c$. To see this, we first define for $i \in \mathbb{N}$ a set $S_{f'}(i)$ with $|S_{f'}(i)| = |S_f(i)|$ as follows. For each $(x_1, \dots, x_\ell) \in S_f(i)$, let $S_{f'}(i)$ contain $(x_1, \dots, x_\ell, y_1, \dots, y_t)$, where $y_j = g_j^{-1}(x_{h(j)})$ for $j \in [1, t]$. Observe that $g_j(y_j) = x_{h(j)}$ for every $j \in [1, t]$. Hence, $f'(x_1, \dots, x_\ell, y_1, \dots, y_t) = f(x_1, \dots, x_\ell)$ if $(x_1, \dots, x_\ell, y_1, \dots, y_t) \in S_{f'}(i)$.

This observation directly implies that $f'(\underline{x}') \le c \cdot i$ if $\underline{x}' \in S_{f'}(i)$, since $f(\underline{x}) \le c \cdot i$ if $\underline{x} \in S_f(i)$. Now, assume that $f'(\underline{x}) \le i$ for some $\underline{x} = (x_1, \dots, x_\ell, y_1, \dots, y_t) \in \mathbb{N}^{\ell+t}$. Then, $f(z_1, \dots, z_\ell) \le i$, where $z_i$ is given by the definition of $f'$. Consequently, there exists a vector $\tilde{\underline{x}} \in S_f(i)$ that dominates $(z_1, \dots, z_\ell)$. Moreover,

$$\tilde{\underline{x}}' = (\tilde{x}_1, \dots, \tilde{x}_\ell, g_1^{-1}(\tilde{x}_{h(1)}), \dots, g_t^{-1}(\tilde{x}_{h(t)})) \in S_{f'}(i).$$

Therefore, if $j \in [1, \ell]$ then $(\tilde{x}')_j = \tilde{x}_j \ge z_j \ge x_j$, and if $j \in [1, t]$ then $g_j((\tilde{x}')_{\ell+j}) = \tilde{x}_{h(j)} \ge z_{h(j)} \ge g_j(y_j)$, so $(\tilde{x}')_{\ell+j} \ge y_j$, as $g_j$ is ascending. Hence, $\tilde{\underline{x}}'$ dominates $\underline{x}$, which finishes the proof of the assertion.
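A toy instantiation of this lifting (ours; the functions `f`, `g`, `h` and the sample set below are illustrative assumptions, not from the paper), with $\ell = 2$, $t = 1$ and $h(1) = 1$:

```python
# Toy illustration of lifting S_f(i) to S_{f'}(i); f, g, h are our own choices.

def g(y):                 # an ascending function with g(p) <= q on all instances
    return y + 3

def g_inv(x):             # min of the preimage g^{-1}({x}), as in the proof
    return x - 3

def f(x1, x2):            # a non-decreasing running-time bound
    return x1 + x2

def f_prime(x1, x2, y1):  # f'(x, y) = f(z) with z_1 = max(x_1, g(y_1)), z_2 = x_2
    return f(max(x1, g(y1)), x2)

S_f_8 = [(3, 5), (4, 4), (5, 3)]                      # some vectors with f <= 8
S_fp_8 = [(x1, x2, g_inv(x1)) for (x1, x2) in S_f_8]  # lift: y_1 = g^{-1}(x_{h(1)})

# On each lifted vector, g(y_1) = x_1, so f' agrees with f and the
# bound c * i carries over to the lifted set-sequence.
for (x1, x2, y1) in S_fp_8:
    assert f_prime(x1, x2, y1) == f(x1, x2) <= 8
```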
Since $\Gamma \subseteq \Lambda'$, we know that there exists a uniform local deterministic (respectively, randomized Las Vegas) algorithm $\mathcal{A}$ for $\Pi$ and $\mathcal{F}$ such that the (respectively, expected) running time of $\mathcal{A}$ over any configuration $(G,\mathbf{x}) \in \mathcal{F}$ is $O(f'^* \cdot s_{f'}(f'^*)) = O(f'^* \cdot s_f(f'^*))$, where $f'^* = f'(\mathbf{q}_1^*, \dots, \mathbf{q}_\ell^*, \mathbf{p}_{r'+1}^*, \dots, \mathbf{p}_r^*)$. The fact that $f'$ is non-decreasing implies that

$$f'^* \le f'(\mathbf{q}_1^*, \dots, \mathbf{q}_\ell^*, g_1^{-1}(\mathbf{q}_{h(1)}^*), \dots, g_t^{-1}(\mathbf{q}_{h(t)}^*)) = f^*.$$

As $s_f$ is non-decreasing, the (respectively, expected) running time of $\mathcal{A}$ is bounded by $O(f^* \cdot s_f(f^*))$, as desired. $\square$

Applying Theorem 3 to the work of Barenboim and Elkin [6] (see Theorem 6.3 therein) with $\Gamma = \{a, n\}$ and $\Lambda = \{n\}$ yields the following result, since $a \le n$.

**Corollary 4** The following uniform deterministic algorithms solving MIS exist:

- For the family of graphs with arboricity $a = o(\sqrt{\log n})$, running in time $o(\log n)$,

- For any constant $\delta \in (0, 1/2)$, for the family of graphs with arboricity $a = O(\log^{1/2-\delta} n)$, running in time $O(\log n / \log \log n)$.

## 4.6 Running as Fast as the Fastest Algorithm

To illustrate the topic of the next theorem, consider the non-uniform algorithms for MIS for general graphs, namely, the algorithms of Barenboim and Elkin [4] and of Kuhn [22], which run in time $O(\Delta + \log^* n)$ and use the knowledge of $n$ and $\Delta$, and the algorithm of Panconesi and Srinivasan [34], which runs in time $2^{O(\sqrt{\log n})}$ and requires the knowledge of $n$. Furthermore, consider the MIS algorithms of Barenboim and Elkin in [5,6], which are very efficient for graphs with a small arboricity $a$. If $n$, $\Delta$ and $a$ are contained in the inputs of all nodes, then one can compare the running times of these algorithms and use the fastest one. That is, there exists a non-uniform algorithm $\mathcal{A}^{\{n,\Delta,a\}}$ that runs in time $T(n, \Delta, a) = \min\{g(n), h(\Delta, n), f(a,n)\}$, where $g(n) = 2^{O(\sqrt{\log n})}$, $h(\Delta, n) = O(\Delta + \log^* n)$, and $f(a,n)$ is defined as follows: $f(a,n) = o(\log n)$ for graphs of arboricity $a = o(\sqrt{\log n})$; $f(a,n) = O(\log n / \log \log n)$ for arboricity $a = O(\log^{1/2-\delta} n)$, for some constant $\delta \in (0, 1/2)$; and otherwise $f(a,n) = O(a + a^\epsilon \log n)$, for an arbitrarily small constant $\epsilon > 0$.

Unfortunately, the theorems established so far do not allow us to transform $\mathcal{A}^{\{n,\Delta,a\}}$ into a uniform algorithm, the reason being that the function $T(n, \Delta, a)$ bounding the running time does not have a sequence number. On the other hand, as mentioned in Corollary 2, Theorem 1 does allow us to transform each of the algorithms in [4, 22, 34] into a uniform MIS algorithm, with time complexity $O(\Delta + \log^* n)$ and $2^{O(\sqrt{\log n})}$, respectively. Moreover, Corollaries 3 and 4 show that Theorems 1 and 3 allow us to transform the algorithms in [5,6] into uniform algorithms that (over the appropriate graph families) run as fast as the corresponding non-uniform algorithms. Nevertheless, unless $n$, $\Delta$ and $a$ are provided as inputs to the nodes, it is not clear how to obtain from these transformed algorithms a uniform algorithm running in time $T(n, \Delta, a)$. The following theorem solves this problem.

**Theorem 4** Consider a problem $\Pi$ and a family of instances $\mathcal{F}$. Let $k$ be a positive integer and let $\Lambda_1, \dots, \Lambda_k$ be $k$ sets of non-decreasing parameters. Let $\mathcal{P}$ be a $(\Lambda_1 \cup \dots \cup \Lambda_k)$-monotone pruning algorithm for $\Pi$ and $\mathcal{F}$. For $i \in \{1, 2, \dots, k\}$, consider a uniform algorithm $\mathcal{U}_i$ whose running time is bounded with respect to $\Lambda_i$ by a function $f_i$. Then there is a uniform algorithm with running time $O(f_{\min})$, where $f_{\min} = \min\{f_1(\Lambda_1^*), \dots, f_k(\Lambda_k^*)\}$.

*Proof* Clearly, it is sufficient to prove the theorem for the case $k=2$. The basic idea behind the proof is to run in iterations, such that each Iteration $i$ consists of running the quadruple $(\mathcal{U}_1; \mathcal{P}; \mathcal{U}_2; \mathcal{P})$, where $\mathcal{U}_1$ and $\mathcal{U}_2$ are executed for precisely $2^i$ rounds each. Hence, a correct solution will be produced in Iteration $s = \lceil \log f_{\min} \rceil$ or before. Since each Iteration $i$ takes at most $O(2^i)$ rounds (recall that the running time of $\mathcal{P}$ is constant), the running time is $O(f_{\min})$.

Formally, we define a sequence of uniform algorithms $(\mathcal{A}_i)_{i \in \mathbb{N}}$ as follows. For $i \in \mathbb{N}$, set $\mathcal{A}_{2i+1} = \tilde{\mathcal{U}}_1$ and $\mathcal{A}_{2i+2} = \tilde{\mathcal{U}}_2$, where $\tilde{\mathcal{U}}_j$ is $\mathcal{U}_j$ restricted to $2^i$ rounds for $j \in \{1, 2\}$. Let $\pi$ be the uniform alternating algorithm with respect to $(\mathcal{A}_i)_{i \in \mathbb{N}}$ and $\mathcal{P}$, that is, $\pi = \mathcal{B}_1; \mathcal{B}_2; \mathcal{B}_3; \dots$ where $\mathcal{B}_{2i+j} = \tilde{\mathcal{U}}_j; \mathcal{P}$ for every $i \in \mathbb{N}$ and every $j \in \{1, 2\}$. Letting $T_0$ be the running time of $\mathcal{P}$, the running time of $\mathcal{B}_{2i+j}$ is at most $2^i + T_0$, for every $i \in \mathbb{N}$ and every $j \in \{1, 2\}$.

Consider an instance $(G, \mathbf{x}) \in \mathcal{F}$. For each $(\mathbf{p}, \mathbf{q}) \in \Lambda_1 \times \Lambda_2$, let $\mathbf{p}^* = \mathbf{p}(G, \mathbf{x})$ and $\mathbf{q}^* = \mathbf{q}(G, \mathbf{x})$. Algorithm $\mathcal{B}_i$ operates on the configuration $(G_i, \mathbf{x}_i)$. Let $\mathbf{p} \in \Lambda_1 \cup \Lambda_2$. Because $\mathcal{P}$ is monotone with respect to $\Lambda_1 \cup \Lambda_2$, it follows by induction on $i$ that $\mathbf{p}^* \ge \mathbf{p}(G_i, \mathbf{x}_i)$. Hence, the running time of $\mathcal{U}_j$ over $(G_i, \mathbf{x}_i)$ is bounded from above by $f_j(\Lambda_j^*)$ for every $i \in \mathbb{N}$ and each $j \in \{1, 2\}$. Thus, $V(G_{2s+2}) = \emptyset$ for the smallest $s$ such that $2^s \ge f_{\min}$. In other words, $\pi = \mathcal{B}_1; \mathcal{B}_2; \dots; \mathcal{B}_{2s+1}$. Consequently, by Observation 3.4, Algorithm $\pi$ correctly solves $\Pi$ on $\mathcal{F}$ and, since $\mathcal{B}_i$ runs in at most $2^{i/2} + T_0$ rounds, the running time of $\pi$ is $O(2^s) = O(f_{\min})$, as asserted. $\square$
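The round count behind this proof can be illustrated by the following sketch (ours, not the paper's code; `dovetail_rounds` is a hypothetical name): alternating $\mathcal{U}_1$ and $\mathcal{U}_2$ with doubled budgets costs only a constant factor over $f_{\min}$, even though neither $f_1$ nor $f_2$ is known in advance.

```python
# Count the rounds spent by the alternation B_1; B_2; ... of Theorem 4 (k = 2),
# ignoring the constant-time pruning steps.

def dovetail_rounds(f1, f2):
    """Rounds until some U_j, run for 2^i rounds with 2^i >= f_j, completes."""
    total, i = 0, 0
    while True:
        for f in (f1, f2):         # run U1 for 2^i rounds, then U2
            total += 2 ** i
            if 2 ** i >= f:        # this run completes; P then prunes every node
                return total
        i += 1

# Even when f1 is huge, the total stays within a constant factor of f2 = f_min.
for f_min in (5, 37, 1000):
    assert dovetail_rounds(10 ** 9, f_min) < 8 * f_min
```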
Now, we can combine Theorem 4 with Corollaries 3 and 4, and establish a uniform algorithm for MIS that runs in time $f(a,n)$. Combining this algorithm with Corollary 2, and applying Theorem 4 once more, yields Corollary 1(i).

## 5 Uniform Coloring Algorithms

In general, we could not find a way to directly apply our transformers (e.g., the one given by Theorem 3) to the coloring problem. The main reason is that we could not find an efficient pruning algorithm for the coloring problem. Indeed, consider for example the $O(\Delta)$-coloring problem. The checking property of a pruning
algorithm requires that, in particular, the nodes can locally decide whether they belong to a legal configuration. While locally checking that neighboring nodes have distinct colors is easy, knowing whether a color is in the required range, namely $[1, O(\Delta)]$, seems difficult as the nodes do not know $\Delta$. Moreover, the gluing property also seems difficult to tackle: after pruning a node with color $c$, none of its unpruned neighbors can be colored with color $c$. In other words, a correct solution on the non-pruned subgraph may not glue well with the pruned subgraph.

Nevertheless, we show in this section that several relatively general transformers can be used to obtain uniform coloring algorithms from non-uniform ones. We focus on standard coloring problems in which the required number of colors is given as a function of $\Delta$.

## 5.1 Uniform ($\Delta + 1$)-coloring Algorithms

A standard trick (cf. [28,30]) allows us to transform an efficient (with respect to $n$ and $\Delta$) MIS algorithm for general graphs into one for ($\Delta + 1$)-coloring (and, actually, for the more general maximal coloring problem defined by Luby [30]). The general idea is based on the observation that ($\Delta + 1$)-colorings of $G$ and maximal independent sets of $G' = G \times K_{\Delta+1}$ are in one-to-one correspondence. More precisely, and avoiding the use of $\Delta$, the graph $G'$ is constructed from $G$ as follows. For each node $u \in V(G)$, take a clique $C_u$ of size $\deg_G(u)+1$ with nodes $u_1, \dots, u_{\deg_G(u)+1}$. Now, for each $(u,v) \in E(G)$ and each $i \in [1, 1+\min\{\deg_G(u), \deg_G(v)\}]$, let $(u_i, v_i) \in E(G')$. The graph $G'$ can be constructed by a local algorithm without using any global parameter. It remains to observe the existence of a natural one-to-one correspondence between the maximal independent sets of $G'$ and the ($\deg_G+1$)-colorings of $G$, that is, the colorings of $G$ such that each node $u$ is assigned a color in $[1, \deg_G(u)+1]$.
To see this, first consider a ($\deg_G+1$)-coloring $c$ of $G$. Set

$$X = \{u_i \in V(G') : c(u) = i\}.$$

Then, no two nodes in $X$ are adjacent in $G'$. Moreover, a node that does not belong to $X$ has a neighbor in $X$, since $X$ contains a vertex from each clique $C_u$ for $u \in V(G)$. Therefore, $X$ is a MIS of $G'$.

Conversely, let $X$ be a MIS of $G'$. We assert that $X$ contains a node from every clique $C_u$ for $u \in V(G)$. Indeed, suppose on the contrary that $X \cap V(C_u) = \emptyset$ for a node $u \in V(G)$. By the definition of a MIS, every vertex $u_i \in V(C_u)$ has a neighbor $v(u_i)$ that belongs to $X$. Since a clique can contain at most one node in $X$ and $v(u_i) \neq v(u_j)$ whenever $i \neq j$, we deduce that at least $|C_u|$ cliques $C_v$ with $v \neq u$ contain a node that has a neighbor in $C_u$. This contradicts the definition of $G'$, since $|C_u| = \deg_G(u) + 1$. Thus, setting $c(u)$ to be the index $i \in \{1, \dots, \deg_G(u) + 1\}$ such that $u_i \in X$ yields a ($\deg_G+1$)-coloring of $G$.
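The construction of $G'$ and the MIS-to-coloring direction can be sketched as follows (our illustrative code, not from the paper; the sequential greedy MIS below merely stands in for any MIS algorithm):

```python
# Build G' from G: a clique C_u of size deg(u)+1 per node u, plus cross
# edges (u_i, v_i) for each edge (u, v) and i in [1, 1 + min(deg u, deg v)].
from itertools import combinations

def build_g_prime(adj):
    """adj: dict node -> set of neighbours of a simple graph G."""
    deg = {u: len(nb) for u, nb in adj.items()}
    gp = {(u, i): set() for u in adj for i in range(1, deg[u] + 2)}
    for u in adj:
        for i, j in combinations(range(1, deg[u] + 2), 2):   # clique C_u
            gp[(u, i)].add((u, j))
            gp[(u, j)].add((u, i))
    for u, nb in adj.items():
        for v in nb:
            for i in range(1, min(deg[u], deg[v]) + 2):      # cross edges
                gp[(u, i)].add((v, i))
                gp[(v, i)].add((u, i))
    return gp

def mis_to_coloring(mis):
    """Every MIS hits each clique C_u exactly once; colour u by that index."""
    return {u: i for (u, i) in mis}

# Toy run on a triangle, using a sequential greedy MIS of G'.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
gp = build_g_prime(adj)
mis = set()
for w in sorted(gp):
    if not gp[w] & mis:
        mis.add(w)
colour = mis_to_coloring(mis)
assert all(colour[u] != colour[v] for u in adj for v in adj[u])   # proper
assert all(1 <= colour[u] <= len(adj[u]) + 1 for u in adj)        # deg+1 range
```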
Therefore, we obtain Corollary 1(ii) as a direct consequence of Corollary 1(i).

## 5.2 Uniform Coloring with More than $\Delta + 1$ Colors

We now aim to provide a transformer taking as input an efficient non-uniform coloring algorithm that uses $g(\Delta)$ colors (where $g(\Delta) > \Delta$) and produces an efficient uniform coloring algorithm that uses $O(g(\Delta))$ colors. We begin with the following definitions.

An instance for the coloring problem is a pair $(G, \mathbf{x})$ where $G$ is a graph and $\mathbf{x}(v)$ contains a color $c(v)$ such that the collection $\{c(v) : v \in V(G)\}$ forms a coloring of $G$. (The color $c(v)$ can be the identity $\mathrm{Id}(v)$.) For a given family $\mathcal{G}$ of graphs, we define $\mathcal{F}(\mathcal{G})$ to be the collection of instances $(G, \mathbf{x})$ for the coloring problem, where $G \in \mathcal{G}$.

Many coloring algorithms consider the identities as colors, and relax the assumption that the identities are unique by replacing it with the weaker requirement that the set of initial colors forms a coloring. Given an instance $(G, \mathbf{x})$, let $m = m(G, \mathbf{x})$ be the maximal identity. Note that $m$ is a graph-parameter.

Recall the $\lambda(\tilde{\Delta}+1)$-coloring algorithms designed by Barenboim and Elkin [4] and Kuhn [22] (which generalize the $O(\tilde{\Delta}^2)$-coloring algorithm of Linial [28]). We would like to point out that, in fact, everything works similarly in these algorithms if one replaces $n$ with $m$. That is, these $\lambda(\tilde{\Delta}+1)$-coloring algorithms can be viewed as requiring $m$ and $\Delta$ and running in time $O(\tilde{\Delta}/\lambda + \log^* \tilde{m})$. The same is true for the edge-coloring algorithms of Barenboim and Elkin [7].

The following theorem implies that these algorithms can be transformed into uniform ones. In the theorem, we consider two sets $\Gamma$ and $\Lambda$ of non-decreasing graph-parameters such that

(1) $\Gamma$ is weakly-dominated by $\Lambda$; and

(2) $\Gamma \subseteq \{\Delta, m\}$.

Two such sets of parameters are said to be *related*. The notion of moderately-fast function (defined in Section 2) will be used to govern the number of colors used by the coloring algorithms.

**Theorem 5** Let $\Gamma$ and $\Lambda$ be two related sets of non-decreasing graph-parameters and let $\mathcal{A}^\Gamma$ be a $g(\tilde{\Delta})$-coloring algorithm with running time bounded with respect to $\Lambda$ by some function $f$. If

samples/texts/1660153/page_2.md

# Toward More Localized Local Algorithms: Removing Assumptions Concerning Global Knowledge

Amos Korman · Jean-Sébastien Sereni · Laurent Viennot

Received: date / Accepted: date

**Abstract** Numerous sophisticated local algorithms have been suggested in the literature for various fundamental problems. Notable examples are the MIS and $(\Delta+1)$-coloring algorithms by Barenboim and Elkin [6], by Kuhn [22], and by Panconesi and Srinivasan [34], as well as the $O(\Delta^2)$-coloring algorithm by Linial [28]. Unfortunately, most known local algorithms (including, in particular, the aforementioned algorithms) are *non-uniform*, that is, local algorithms generally use good estimations of one or more global parameters of the network, e.g., the maximum degree $\Delta$ or the number of nodes $n$.

This paper provides a method for transforming a non-uniform local algorithm into a *uniform* one. Furthermore, the resulting algorithm enjoys the same asymptotic running time as the original non-uniform algorithm. Our method applies to a wide family of both deterministic and randomized algorithms. Specifically, it applies to almost all state-of-the-art non-uniform algorithms for MIS and Maximal Matching, as well as to many results concerning the coloring problem. (In particular, it applies to all aforementioned algorithms.)

To obtain our transformations we introduce a new distributed tool called *pruning algorithms*, which we believe may be of independent interest.

**Keywords** distributed algorithm · global knowledge · parameters · MIS · coloring · maximal matching

Amos Korman is supported in part by a France-Israel cooperation grant ("Multi-Computing" project) from the France Ministry of Science and Israel Ministry of Science, by the ANR projects ALADDIN and PROSE, and by the INRIA project GANG. Jean-Sébastien Sereni is partially supported by the French *Agence Nationale de la Recherche* under reference ANR 10 JCJC 0204 01. Laurent Viennot is supported by the European STREP project EULER and the INRIA project-team GANG.

Amos Korman
CNRS and University Paris Diderot
LIAFA Case 7014
Université Paris Diderot – Paris 7
F-75205 Paris Cedex 13, France.
Tel.: +33-1-57-27-92-56
Fax: +33-1-57-27-94-09
E-mail: Amos.Korman@liafa.jussieu.fr

Jean-Sébastien Sereni
CNRS (LIAFA, Université Denis Diderot), Paris, France
and Department of Applied Mathematics (KAM), Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic
E-mail: sereni@kam.mff.cuni.cz

Laurent Viennot
INRIA and University Paris Diderot
LIAFA Case 7014
Université Paris Diderot – Paris 7
F-75205 Paris Cedex 13, France.
E-mail: Laurent.Viennot@inria.fr

# 1 Introduction

## 1.1 Background and Motivation

Distributed computing concerns environments in which many processors, located at different sites, must collaborate in order to achieve some global task. One of the main themes in distributed network algorithms concerns the question of how to cope with *locality* constraints, that is, the lack of knowledge about the global structure of the network (cf., [35]). On the one hand, information about the global structure may not always be accessible to individual processors and the cost of computing it from scratch may overshadow the cost of the algorithm using it. On the other hand, global knowledge is not always essential, and many seemingly global tasks can be efficiently achieved by letting processors

samples/texts/1660153/page_20.md

1. there exists a sequence-number function $s_f$ for $f$;

2. $g$ is moderately-fast;

3. the dependence of $f$ on $m$ is bounded by a polylog; and

4. the dependence of $f$ on $\Delta$ is moderately-slow;

then there exists a uniform $O(g(\Delta))$-coloring algorithm running in time $O(f(\Lambda^*) \cdot s_f(f(\Lambda^*)))$.

**Proof** Our first goal is to obtain a coloring algorithm that does not require $m$ (and thus requires only $\Delta$). For this purpose we first define the following problem. The strong list-coloring (SLC) problem: a configuration for the SLC problem is a pair $(G, \mathbf{x}) \in \mathcal{F}(\mathcal{G})$ such that

(1) there exists an integer $\hat{\Delta}$ in $\bigcap_{v \in V(G)} \mathbf{x}(v)$ such that $\hat{\Delta} \ge \Delta$; and

(2) the input $\mathbf{x}(v)$ of every vertex $v \in V(G)$ contains a list $L(v)$ of colors contained in $[1, g(\hat{\Delta})] \times [1, \hat{\Delta}+1]$ such that

$$\forall k \in [1, g(\hat{\Delta})], \; | \{j : (k, j) \in L(v)\} | \ge \deg_G(v) + 1.$$

Given a configuration $(G, \mathbf{x}) \in \mathcal{F}(\mathcal{G})$, an output vector $\mathbf{y}$ is a solution to SLC if it forms a coloring and if $\mathbf{y}(v) \in L(v)$ for every node $v \in V(G)$. Condition (1) above implies that a local algorithm for SLC can use an upper bound on $\Delta$, which is the same for all nodes. Informally, Condition (2) above implies that the list $L(v)$ of colors available to each node $v$ contains at least $\deg_G(v)+1$ copies of each color in the range $[1, g(\hat{\Delta})]$.
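
The counting constraint in Condition (2) is easy to verify mechanically. The following Python sketch does so for a small example; the function names, the adjacency-map representation, and the concrete choice $g(x) = x^2$ (the paper leaves $g$ abstract) are illustrative assumptions, not part of the paper.

```python
# Sketch: check that (adj, lists) forms a valid SLC configuration for a
# given common degree bound delta_hat, under an assumed example g.

def g(x):
    return x * x  # assumed example; the paper keeps g abstract

def is_slc_configuration(adj, lists, delta_hat):
    for v, nbrs in adj.items():
        if len(nbrs) > delta_hat:  # condition (1): delta_hat >= Delta
            return False
        for (k, j) in lists[v]:    # colors must lie in [1, g] x [1, delta_hat+1]
            if not (1 <= k <= g(delta_hat) and 1 <= j <= delta_hat + 1):
                return False
        for k in range(1, g(delta_hat) + 1):  # condition (2): >= deg+1 copies
            if sum(1 for (kk, _) in lists[v] if kk == k) < len(nbrs) + 1:
                return False
    return True

# A triangle with delta_hat = 2: each node needs >= 3 copies of each color class.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
full = [(k, j) for k in range(1, g(2) + 1) for j in range(1, 4)]
print(is_slc_configuration(adj, {v: full for v in adj}, 2))  # True
```

Dropping a single pair from one node's list already violates Condition (2) on the triangle, since every color class then has only $\deg_G(v)$ copies left for that node.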

We now design a pruning algorithm $\mathcal{P}$ for SLC. Consider a triplet $(G, \mathbf{x}, \hat{\mathbf{y}})$, where $(G, \mathbf{x})$ is a configuration for SLC and $\hat{\mathbf{y}}$ is some tentative assignment of colors. The set $W$ of nodes to be pruned is composed of all nodes $u$ satisfying $\hat{\mathbf{y}}(u) \in L(u)$ and $\hat{\mathbf{y}}(u) \neq \hat{\mathbf{y}}(v)$ for all $v \in N_G(u)$. For each node $u \in V \setminus W$, set

$$L'(u) = L(u) \setminus \{\hat{\mathbf{y}}(v) : v \in N_G(u) \cap W\}.$$

In other words, $L'(u)$ contains all the colors in $L(u)$ that are not assigned to a neighbor of $u$ belonging to $W$. Algorithm $\mathcal{P}$ returns the configuration $(G', \mathbf{x}')$, where $G'$ is the subgraph of $G$ obtained by removing the nodes in $W$ and

$$\mathbf{x}'(u) = (\mathbf{x}(u) \setminus L(u)) \cup L'(u), \quad \text{for } u \in V \setminus W.$$

Observe that if we start with a configuration $(G, \mathbf{x})$ for SLC, then the output $(G', \mathbf{x}')$ of the pruning algorithm $\mathcal{P}$ is also a configuration for SLC. Indeed, for every node $v$ and every integer $k$, at most $\deg_W(v)$ pairs $(k, j)$ are removed from the list $L(v)$ of $v$, where $\deg_W(v)$ is the number of neighbors of $v$ that belong to $W$. On the other hand, the degree of $v$ in $G'$ is reduced by $\deg_W(v)$. Note also that the input vectors of all nodes still contain $\hat{\Delta}$, which is an upper bound for the maximum degree of $G'$.
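
The pruning step can be mimicked by a centralized sketch; the dictionary-based graph representation and the three-node path example below are illustrative assumptions.

```python
# Sketch of the pruning algorithm P for SLC: nodes whose tentative color is
# legal (in their own list and conflict-free) are pruned into W, and their
# colors are removed from the lists of their non-pruned neighbors.

def prune(adj, lists, tentative):
    # W: nodes that keep their tentative color and leave the graph
    W = {u for u, nbrs in adj.items()
         if tentative[u] in lists[u]
         and all(tentative[u] != tentative[v] for v in nbrs)}
    new_adj, new_lists = {}, {}
    for u, nbrs in adj.items():
        if u in W:
            continue
        pruned_colors = {tentative[v] for v in nbrs if v in W}
        new_adj[u] = [v for v in nbrs if v not in W]
        new_lists[u] = [c for c in lists[u] if c not in pruned_colors]
    return W, new_adj, new_lists

# Path 0-1-2: nodes 0 and 1 both tentatively picked (1,1) and conflict,
# node 2 picked (2,1), which is legal, so only node 2 is pruned.
adj = {0: [1], 1: [0, 2], 2: [1]}
lists = {u: [(1, 1), (1, 2), (2, 1), (2, 2)] for u in adj}
tentative = {0: (1, 1), 1: (1, 1), 2: (2, 1)}
W, new_adj, new_lists = prune(adj, lists, tentative)
print(sorted(W))  # [2]
```

Note that node 1 loses exactly one pair, $(2,1)$, while its degree drops by one, which matches the counting argument above.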

Starting with $\mathcal{A}^\Gamma$, it is straightforward to design a local algorithm $\mathcal{B}^{\Gamma'}$ for SLC that depends on $\Gamma' = \Gamma \setminus \{\Delta\}$. Specifically, $\mathcal{B}^{\Gamma'}$ executes $\mathcal{A}^\Gamma$ using the guess $\tilde{\Delta} = \hat{\Delta}$ for the parameter $\Delta$. Furthermore, if $\mathcal{A}^\Gamma$ outputs at $v$ a color $c$, then $\mathcal{B}^{\Gamma'}$ outputs the color $(c, j)$ where $j = \min\{s : (c, s) \in L(v)\}$.

Given an instance for SLC, we view $\hat{\Delta}$ as a non-decreasing parameter, and convert $\Lambda$ to a new set of non-decreasing parameters $\Lambda'$ by replacing $\Delta$ with $\hat{\Delta}$. Formally, if $\Delta \in \Lambda$ then set $\Lambda' = (\Lambda \setminus \{\Delta\}) \cup \{\hat{\Delta}\}$, and otherwise, set $\Lambda' = \Lambda$. Since $\Gamma$ and $\Lambda$ contain only non-decreasing graph-parameters—and since $\hat{\Delta}$ is contained in all the inputs—we deduce that the pruning algorithm $\mathcal{P}$ is $(\Gamma' \cup \Lambda')$-monotone.

Now, we apply Theorem 3 to Algorithm $\mathcal{B}^{\Gamma'}$, the sets $\Gamma'$ and $\Lambda'$ of non-decreasing parameters and the aforementioned pruning algorithm $\mathcal{P}$ for SLC. We obtain a uniform algorithm $\mathcal{B}$ for SLC and $\mathcal{F}(\mathcal{G})$, whose running time is $O(f(\Lambda'^*) \cdot s_f(f(\Lambda'^*)))$.

We are ready to specify the desired uniform $O(g(\Delta))$-coloring algorithm. We define inductively a sequence $(D_i)_{i \in \mathbb{N}}$ by setting $D_1 = 1$ and

$$D_{i+1} = \min \{l : g(l) \ge 2g(D_i)\}$$

for $i \ge 1$. As $g$ is moderately-fast, there is a constant $\alpha$ such that for each integer $i \ge 1$,

1. $D_{i+1} \ge \alpha D_i$ and

2. $g(D_{i+1}) \le \alpha^{\log \alpha} g(D_i)$.
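
The threshold sequence $(D_i)$ is straightforward to compute. Below is a small sketch for the assumed example $g(x) = x^2$ (a Linial-style color count; the paper leaves $g$ abstract), showing the thresholds at which $g$ at least doubles.

```python
# Sketch: compute the layer thresholds D_1 = 1 and
# D_{i+1} = min{ l : g(l) >= 2 g(D_i) } for an assumed example g.

def g(x):
    return x * x  # assumed example

def thresholds(count):
    D = [1]
    while len(D) < count:
        l = D[-1]
        while g(l) < 2 * g(D[-1]):  # smallest l at which g has doubled
            l += 1
        D.append(l)
    return D

print(thresholds(5))  # [1, 2, 3, 5, 8] for g(x) = x^2
```

By construction $g(D_{i+1}) \ge 2 g(D_i)$, which is the geometric growth the proof relies on.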

Given an initial configuration $(G, \mathbf{x})$, we partition it according to the node degrees. For $i \in \mathbb{N}$, let $G_i$ be the subgraph of $G$ induced by the set of nodes $v$ of $G$ with $\deg_G(v) \in [D_i, D_{i+1}-1]$. Let $\mathbf{x}_i$ be the input $\mathbf{x}$ restricted to the nodes in $G_i$. The configuration $(G_i, \mathbf{x}_i)$, which belongs to $\mathcal{F}(\mathcal{G})$, is referred to as *layer i*. Note that nodes can figure out locally which layer they belong to. Observe also that $D_{i+1}-1$ is an upper bound on node degrees in layer *i*.

The algorithm proceeds in two phases. In the first phase, each node in layer *i* is assigned the list of colors $L_i'' = [1, g(D_{i+1})] \times [1, D_{i+1}+1]$, and the degree estimation $\hat{\Delta}_i = D_{i+1}$. Each layer is now an instance of SLC and we execute Algorithm $\mathcal{B}$ in parallel on all layers. If Algorithm $\mathcal{B}$ assigns a color $(c, j)$ to a node $v$ in layer *i* then we change this color to $(g(D_{i+1}) + c, j)$. Hence, for each *i*, layer *i* is colored with colors taken from $L_i' = [g(D_{i+1}) + 1, 2g(D_{i+1})] \times [1, D_{i+1}+1]$.

Note that nodes in different layers have disjoint color lists, and hence we obtain a coloring of the whole graph $G$. The number of colors in $L_i'$ is at most $2D_{i+1}g(D_{i+1})$.
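
The two local computations of the first phase can be sketched as follows: a node reads off its layer from its own degree, and the shifted color ranges $(g(D_{i+1}), 2g(D_{i+1})] \times [1, D_{i+1}+1]$ of distinct layers never clash because $g(D_{i+1}) \ge 2g(D_i)$. The concrete $g$ and threshold list are assumed examples.

```python
# Sketch: layer assignment from the degree, and the phase-one color list
# L'_i of each layer, for an assumed example g and its thresholds D.

def g(x):
    return x * x  # assumed example

D = [1, 2, 3, 5, 8]  # thresholds for the example g (0-indexed layers here)

def layer(degree):
    # layer i holds the nodes with degree in [D_i, D_{i+1} - 1]
    for i in range(len(D) - 1):
        if D[i] <= degree <= D[i + 1] - 1:
            return i
    raise ValueError("degree exceeds the last threshold")

def phase_one_colors(i):
    lo, hi = g(D[i + 1]), 2 * g(D[i + 1])
    return {(c, j) for c in range(lo + 1, hi + 1)
            for j in range(1, D[i + 1] + 2)}

print(layer(4))                                             # 2
print(phase_one_colors(0).isdisjoint(phase_one_colors(1)))  # True
```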

samples/texts/1660153/page_21.md

Let $i_{\max}$ be the maximal integer $i$ such that layer $i$ is non-empty. The total number of colors used in the first phase is at most $2D_{i_{\max}+1}g(D_{i_{\max}+1})$, which is $O(\Delta g(\Delta))$ by Properties 1 and 2 above.

Furthermore, the running time of the first phase of the algorithm is dominated by the running time of the algorithm on layer $i_{\max}$. That is, the running time is at most $O(f(\Lambda'^*) \cdot s_f(f(\Lambda'^*)))$, where $\Lambda'^*$ is the collection of correct parameters in $\Lambda'$ for layer $i_{\max}$. Since $D_{i_{\max}+1} = O(\Delta)$ and the dependence of $f$ on $\Delta$ is moderately-slow, we infer that $f(\Lambda'^*) = O(f(\Lambda^*))$. As $s_f$ is moderately-slow too (by the definition), we deduce that the running time is $O(f(\Lambda^*) \cdot s_f(f(\Lambda^*)))$.

The second phase consists of running a second algorithm to change the set of possible colors of nodes in layer $i$ from $L'_i$ to $L_i = [g(D_{i+1}) + 1, 2g(D_{i+1})]$. Specifically, on layer $i$, we execute $\mathcal{A}^\Gamma$ using the guess $\tilde{\Delta} = D_{i+1}$ for the parameter $\Delta$ and the guess $\tilde{m} = 2D_{i+1}g(D_{i+1})$ for the parameter $m$ (recall that $\Gamma \subseteq \{\Delta, m\}$). This procedure colors each layer with colors taken from the range $[1, g(D_{i+1})]$. Let $v$ be in layer $i$ and let $c(v)$ be the color assigned to $v$ by $\mathcal{A}^\Gamma$. The final color of $v$ given by our desired algorithm $\mathcal{A}$ is $g(D_{i+1}) + c(v)$. Thus, the colors assigned to the nodes in layer $i$ belong to $[g(D_{i+1}) + 1, 2g(D_{i+1})]$. Therefore, nodes in different layers are assigned distinct colors. The algorithm is executed on each layer independently, all in parallel. Hence, we obtain a coloring of $G$. Moreover, since $g$ is moderately-fast, the total number of colors used is $O(g(\Delta))$.
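
The disjointness of the final color ranges can be checked numerically: layer $i$ ends up in $[g(D_{i+1})+1, 2g(D_{i+1})]$, and since $g(D_{i+1}) \ge 2g(D_i)$ these intervals never overlap. The concrete $g$ and thresholds are assumed examples.

```python
# Sketch: the second-phase color range of layer i, and a check that the
# ranges of distinct layers are pairwise disjoint, for an assumed example g.

def g(x):
    return x * x  # assumed example

D = [1, 2, 3, 5, 8]  # thresholds for the example g

def final_range(i):
    # a node colored c by the non-uniform algorithm takes g(D_{i+1}) + c
    return range(g(D[i + 1]) + 1, 2 * g(D[i + 1]) + 1)

ranges = [set(final_range(i)) for i in range(len(D) - 1)]
print(all(ranges[i].isdisjoint(ranges[j])
          for i in range(len(ranges))
          for j in range(i + 1, len(ranges))))  # True
```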

Recall that $D_{i+1} = O(\Delta)$ and $g(D_{i+1}) = O(g(\Delta))$ for all $i$ such that $G_i$ is not empty. Hence, we deduce that the running time of the second phase of the algorithm is bounded from above by the running time of $\mathcal{A}^\Gamma$ on $(G, \mathbf{x})$ using the guesses $\tilde{\Delta} = O(\Delta)$ and $\tilde{m} = O(\Delta g(\Delta))$. Moreover, the fact that $g(x)$ is bounded by a polynomial in $x$ implies that $\tilde{m}$ is at most polynomial in $\Delta$, and hence in $m$.

Now, as the dependence of $f$ on $\Delta$ is moderately-slow and the dependence of $f$ on $m$ is polylogarithmic, the running time of the second phase of $\mathcal{A}$ is $O(f(\Lambda))$. Combining this with the running time of the first phase concludes the proof. $\square$

By Observation 4.1, the constant function $s_f = 1$ is a sequence-number function for every additive function $f$. Hence, Corollary 1(iii) directly follows from Theorem 5. Regarding edge-coloring, observe that Barenboim and Elkin [7] obtain their algorithm for general graphs by running a vertex-coloring algorithm $\mathcal{A}$ on the line-graph of the given graph. This algorithm $\mathcal{A}$ uses $m$ and $\Delta$, and the number of colors and the time complexity of the resulting edge-coloring algorithm are those of $\mathcal{A}$. Using Theorem 5, one can transform the algorithm $\mathcal{A}$ designed for the family of line graphs into a uniform one, having asymptotically the same number of colors and running time. Hence, Theorem 1(v) follows.

Let $f: \mathbb{N}^2 \rightarrow \mathbb{R}$ be given by $f(x_1, x_2) = f_1(x_1) \cdot f_2(x_2)$, where $f_1$ and $f_2$ are ascending functions. By Observation 4.1, the function $s_f(i) = \lceil \log i \rceil + 1$ is a sequence-number function for $f$. Therefore, Corollary 1(iv) now follows by applying Theorem 5 to the coloring algorithms of Barenboim and Elkin [5].

# 6 Conclusion and Further Research

## 6.1 Pruning Algorithms

This paper focuses on removing assumptions concerning global knowledge in the context of local algorithms. We provide transformers taking a non-uniform local algorithm as a black box and producing a uniform algorithm running in asymptotically the same number of rounds. This is established via the notion of pruning algorithms. We believe that this novel notion is of independent interest and can be used for other purposes too, e.g., in the context of fault tolerance or dynamic settings.

We remind the reader that we restricted the running time of a pruning algorithm to be constant. This is because in all our applications we use constant-time pruning algorithms. In fact, our transformers extend to the case where the given *uniform* pruning algorithm $\mathcal{P}$

- has running time bounded with respect to a set $\mathcal{S}$ of non-decreasing parameters by a (non-decreasing) function $h$; and

- is $\mathcal{S}$-monotone.

However, the transformer may incur an additive overhead in the running time of the obtained uniform algorithms, as these repeatedly use $\mathcal{P}$. Specifically, the overhead will be $h(\mathcal{S}^*)$ times the number of iterations used by the transformer, which is typically logarithmic in the running time of the non-uniform algorithm. It would be interesting to have an example of a problem that admits a fast non-trivial uniform pruning algorithm but does not admit a constant-time one.

## 6.2 Bounded Message Size

This paper focuses on the *LOCAL* model, which does not restrict the number of bits used in messages. Ideally, messages should be short, i.e., using $O(\log n)$ bits. We found it difficult to obtain a general transformer that takes an arbitrary non-uniform algorithm using short

samples/texts/1660153/page_22.md

messages and produces a uniform one having asymptotically the same running time and message size. The reason is that techniques similar to those used in this paper require guesses that fit both the function bounding the running time and the function bounding the message size. Nevertheless, maintaining the same message size may still be possible given particular non-uniform algorithms that use messages whose content does not depend on the guessed upper bounds, such as algorithms that encode in the messages only identifiers, colors, or degrees.

## 6.3 Coloring

Recall that one of the difficulties in obtaining a pruning algorithm for coloring problems lies in the fact that the gluing property may not hold, that is, a pruned node $v$ with color $c$ may have a non-pruned neighbor $u$ which is also colored $c$ in some correct coloring of the non-pruned subgraph. In the context of running in iterations, in which one invokes a pruning algorithm and subsequently an algorithm $\mathcal{A}$ on the non-pruned subgraph (similarly to Theorem 3), the aforementioned undesired phenomenon could be prevented if the algorithm $\mathcal{A}$ would avoid coloring node $u$ with color $c$. In this respect, we believe that it would be interesting to investigate connections between $g$-coloring problems and strong $g$-coloring problems, in which each node $v$ is given as an input a list of forbidden colors $F(v)$. In a correct solution, each node $v$ must color itself with a color not in $F(v)$ such that the final configuration is a coloring using at most $g$ colors.

Finally, recall that our transformer for coloring applies to deterministic algorithms only. It would be interesting to design a general transformer that takes non-uniform randomized coloring algorithms (e.g., the ones by Schneider and Wattenhofer [36]) and transforms them into uniform ones with asymptotically the same running time.

**Acknowledgements** The authors thank Boaz Patt-Shamir and the anonymous referees for their careful reading and thoughtful suggestions. Their comments helped to considerably improve the presentation of the paper.

## References

1. Alon, N., Babai, L., Itai, A.: A fast and simple randomized parallel algorithm for the maximal independent set problem. J. Algorithms **7**, 567–583 (1986)

2. Awerbuch, B.: Complexity of network synchronization. J. ACM **32**, 804–823 (1985)

3. Awerbuch, B., Luby, M., Goldberg, A.V., Plotkin, S.A.: Network decomposition and locality in distributed computation. In: Proc. 30th IEEE Symp. Found. Comput. Sci. (FOCS), pp. 364–369 (1989)

4. Barenboim, L., Elkin, M.: Distributed $(\Delta + 1)$-coloring in linear (in $\Delta$) time. In: Proc. 41st ACM Symp. Theor. Comput. (STOC), pp. 111–120 (2009)

5. Barenboim, L., Elkin, M.: Deterministic distributed vertex coloring in polylogarithmic time. In: Proc. 29th ACM Symp. Principles Distrib. Comput. (PODC), pp. 410–419 (2010)

6. Barenboim, L., Elkin, M.: Sublogarithmic distributed MIS algorithm for sparse graphs using Nash-Williams decomposition. Distrib. Comput. **22**(5-6), 363–379 (2010)

7. Barenboim, L., Elkin, M.: Distributed deterministic edge coloring using bounded neighborhood independence. In: Proc. 30th ACM Symp. Principles Distrib. Comput. (PODC) (2011)

8. Bentley, J.L., Yao, A.C.C.: An almost optimal algorithm for unbounded searching. Information Processing Lett. **5**(3), 82–87 (1976)

9. Cohen, R., Fraigniaud, P., Ilcinkas, D., Korman, A., Peleg, D.: Label-guided graph exploration by a finite automaton. ACM Trans. Algorithms **4**, 42:1–42:18 (2008)

10. Cole, R., Vishkin, U.: Deterministic coin tossing and accelerating cascades: micro and macro techniques for designing parallel algorithms. In: Proc. 18th ACM Symp. Theor. Comput. (STOC), pp. 206–219 (1986)

11. Derbel, B., Gavoille, C., Peleg, D., Viennot, L.: On the locality of distributed sparse spanner construction. In: Proc. 27th ACM Symp. Principles Distrib. Comput. (PODC), pp. 273–282 (2008)

12. Dereniowski, D., Pelc, A.: Drawing maps with advice. In: Proc. 24th Int. Symp. Distrib. Comput. (DISC), pp. 328–342. Springer-Verlag (2010)

13. Fraigniaud, P., Gavoille, C., Ilcinkas, D., Pelc, A.: Distributed computing with advice: information sensitivity of graph coloring. Distrib. Comput. **21**, 395–403 (2009)

14. Fraigniaud, P., Ilcinkas, D., Pelc, A.: Communication algorithms with advice. J. Comput. Syst. Sci. **76**, 222–232 (2010)

15. Fraigniaud, P., Korman, A., Lebhar, E.: Local MST computation with short advice. In: Proc. 19th ACM Symp. Parallelism Algo. Archit. (SPAA), pp. 154–160 (2007)

16. Fraigniaud, P., Korman, A., Peleg, D.: Local distributed decision. Submitted for publication

17. Goldberg, A.V., Plotkin, S.A.: Efficient parallel algorithms for $(\Delta+1)$-coloring and maximal independent set problem. In: Proc. 19th ACM Symp. Theor. Comput. (STOC), pp. 315–324 (1987)

18. Goldberg, A.V., Plotkin, S.A., Shannon, G.E.: Parallel symmetry-breaking in sparse graphs. SIAM J. Discrete Math. **1**(4), 434–446 (1988)

19. Hańckowiak, M., Karoński, M., Panconesi, A.: On the distributed complexity of computing maximal matchings. SIAM J. Discrete Math. **15**(1), 41–57 (electronic) (2001/02)

20. Korman, A., Kutten, S.: Distributed verification of minimum spanning trees. Distrib. Comput. **20**, 253–266 (2007)

21. Korman, A., Kutten, S., Peleg, D.: Proof labeling schemes. Distrib. Comput. **22**, 215–233 (2010)

22. Kuhn, F.: Weak graph colorings: distributed algorithms and applications. In: Proc. 21st ACM Symp. Parallelism Algo. Archit. (SPAA), pp. 138–144 (2009)

23. Kuhn, F., Moscibroda, T., Wattenhofer, R.: What cannot be computed locally! In: Proc. 23rd ACM Symp. Principles Distrib. Comput. (PODC), pp. 300–309 (2004)
samples/texts/1660153/page_23.md
ADDED
|
@@ -0,0 +1,29 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
24. Kuhn, F., Wattenhofer, R.: On the complexity of distributed graph coloring. In: Proc. 25th ACM Symp. Principles Distrib. Comput. (PODC), pp. 7–15 (2006)

25. Kutten, S., Peleg, D.: Tight fault locality. SIAM J. Comput. **30**(1), 247–268 (electronic) (2000)

26. Lenzen, C., Oswald, Y., Wattenhofer, R.: What can be approximated locally?: case study: dominating sets in planar graphs. In: Proc. 20th ACM Symp. Parallelism Algo. Archit. (SPAA), pp. 46–54 (2008)

27. Linial, N.: Distributive graph algorithms: global solutions from local data. In: Proc. 28th IEEE Symp. Found. Comput. Sci. (FOCS), pp. 331–335 (1987)

28. Linial, N.: Locality in distributed graph algorithms. SIAM J. Comput. **21**(1), 193–201 (1992)

29. Lotker, Z., Patt-Shamir, B., Rosén, A.: Distributed approximate matching. SIAM J. Comput. **39**(2), 445–460 (2009)

30. Luby, M.: A simple parallel algorithm for the maximal independent set problem. SIAM J. Comput. **15**, 1036–1053 (1986)

31. Nakano, K., Olariu, S.: Uniform leader election protocols for radio networks. IEEE Trans. Parallel Distrib. Syst. **13**(5), 516–526 (2002)

32. Naor, M., Stockmeyer, L.: What can be computed locally? SIAM J. Comput. **24**(6), 1259–1277 (1995)

33. Panconesi, A., Rizzi, R.: Some simple distributed algorithms for sparse networks. Distrib. Comput. **14**, 97–100 (2001)

34. Panconesi, A., Srinivasan, A.: On the complexity of distributed network decomposition. J. Algorithms **20**(2), 356–374 (1996)

35. Peleg, D.: Distributed computing. A locality-sensitive approach. SIAM Monographs on Discrete Mathematics and Applications, 5. Philadelphia, PA: SIAM, Society for Industrial and Applied Mathematics. xvi, 343 p. (2000)

36. Schneider, J., Wattenhofer, R.: A new technique for distributed symmetry breaking. In: Proc. 29th ACM Symp. Principles Distrib. Comput. (PODC), pp. 257–266 (2010)

37. Schneider, J., Wattenhofer, R.: An optimal maximal independent set algorithm for bounded-independence graphs. Distrib. Comput. **22**(5-6), 1–13 (2010)

38. Szegedy, M., Vishwanathan, S.: Locality based graph coloring. In: Proc. 25th ACM Symp. Theor. Comput. (STOC), pp. 201–207 (1993)
samples/texts/1660153/page_3.md
ADDED
|
@@ -0,0 +1,112 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
know more about their immediate neighborhoods and less about the rest of the network.

A standard model for capturing the essence of locality is the *LOCAL* model (cf., [35]). In this model, the network is modeled by a graph $G$, where the nodes of $G$ represent the processors and the edges represent the communication links. To perform a task, nodes are woken up simultaneously, and computation proceeds in fault-free synchronous rounds during which every node exchanges messages with its neighbors and performs arbitrary computations on its data. Since many tasks cannot be solved distributively in an anonymous network in a deterministic way, symmetry breaking must be addressed. Arguably, there are two typical ways to address this issue: the first one is to use randomized algorithms, while the second one is to assume that each node $v$ in the network is initially provided a unique identity $\mathrm{Id}(v)$. A local algorithm operating in such a setting must return an output at each node such that the collection of outputs satisfies the required task. For example, a Maximal Independent Set (MIS) of a graph $G$ is a set $S$ of nodes of $G$ such that every node not in $S$ has a neighbor in $S$ and no two nodes of $S$ are adjacent. In a local algorithm for the MIS problem, the output at each node $v$ is a bit $b(v)$ indicating whether $v$ belongs to a selected set $S$ of nodes, and it is required that $S$ forms a MIS of $G$. The running time of a local algorithm is the number of rounds needed for the algorithm to complete its operation at each node, taken in the worst case scenario. This is typically evaluated with respect to some parameters of the underlying graph. The common parameters used are the number of nodes $n$ in the graph and the maximum degree $\Delta$ of a node in the graph.
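
The MIS definition above amounts to two checks, independence and domination, which the following minimal Python sketch verifies; the adjacency-map representation and the 4-cycle example are illustrative assumptions.

```python
# A minimal checker for the MIS definition: S is an MIS of G when no two
# nodes of S are adjacent (independence) and every node outside S has a
# neighbor in S (domination, i.e. maximality).

def is_mis(adj, S):
    independent = all(v not in S for u in S for v in adj[u])
    dominating = all(any(v in S for v in adj[u]) for u in adj if u not in S)
    return independent and dominating

# A 4-cycle: {0, 2} is an MIS, while {0} alone leaves node 2 uncovered.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(is_mis(adj, {0, 2}), is_mis(adj, {0}))  # True False
```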
To ease the computation, it is often assumed that some kind of knowledge about the global network is provided to each node a priori. A typical example of such knowledge is the number of nodes $n$ in the network. It turns out that in some cases, this (common) assumption can give a lot of power to the distributed algorithm. This was observed by Fraigniaud et al. [16] in the context of local decision: they introduced the complexity class of decision problems NLD, which contains all decision problems that can be verified in constant time with the aid of a certificate. They proved that, although there exist decision problems that do not belong to NLD, every (computable) decision problem falls in NLD if it is assumed that each node is given the value of $n$ as an input.

In general, the amount and type of such information may have a profound effect on the design of the distributed algorithm. Obviously, if the whole graph is contained in the input of each node, then the distributed algorithm can be reduced to a central one. In fact, the whole area of computation with advice [9,12–15,20,21] is dedicated to studying the amount of information contained in the inputs of the nodes and its effect on the performances of the distributed algorithm. For instance, Fraigniaud et al. [15] showed that if each node is provided with only a constant number of bits then one can locally construct a BFS-tree in constant time, and can locally construct a MST in $O(\log n)$ time, while both tasks require diameter time if no knowledge is assumed. As another example, Cohen et al. [9] proved that $O(1)$ bits, judiciously chosen at each node, can allow a finite automaton to distributively explore every graph. As a matter of fact, from a radical point of view, for many questions (e.g., MIS and Maximal Matching), additional information may push the question at hand into absurdity: even a constant number of bits of additional information per node is enough to compute a solution—simply let the additional information encode the solution!

When dealing with locality issues, it is desired that the amount of information regarding the whole network contained in the inputs of the nodes is minimized. A local algorithm that assumes that each node is initially given merely its own identity is often called *uniform*. Unfortunately, there are only few local algorithms in the literature that are uniform (e.g., [11,26,29,30,37]). In contrast, most known local algorithms assume that the inputs of all nodes contain upper bounds on the values of some global parameters of the network. Moreover, it is often assumed that all inputs contain the same upper bounds on the global parameters. Furthermore, typically, not only the correct operation of the algorithm requires that upper bounds be contained in the inputs of all nodes, but also the running time of the algorithm is actually a function of the upper bound estimations and not of the actual values of the parameters. Hence, it is desired that the upper bounds contained in the inputs are not significantly larger than the real
|
| 98 |
+
values of the parameters.
|
| 99 |
+
|
| 100 |
+
Some attempts to transform a non-uniform local al-
|
| 101 |
+
gorithm into a uniform one were made by examining
|
| 102 |
+
the details of the algorithm at hand and modifying it
|
| 103 |
+
appropriately. For example, Barenboim and Elkin [6]
|
| 104 |
+
first gave a non-uniform MIS algorithm for the family of
|
| 105 |
+
graphs with arboricity $a = O(\log^{1/2-\delta} n)$, for any con-
|
| 106 |
+
stant $\delta \in (0, 1/2)$, running in time $O(\log n / \log \log n)$.
|
| 107 |
+
(The arboricity of a graph being the smallest number of
|
| 108 |
+
acyclic subgraphs that together contain all the edges of
|
| 109 |
+
the graph.) At the cost of increasing the running time
|
| 110 |
+
to $O(\frac{\log n}{\log \log n} \log^*\* n)$, the authors show how to modify
|
| 111 |
+
their algorithm so that the value of *a* need not be part
|
| 112 |
+
of the inputs of nodes. In addition to the MIS algo-
|
samples/texts/1660153/page_4.md
ADDED
@@ -0,0 +1,17 @@
rithms, the work of [6] also contains algorithms that do not require the knowledge of the arboricity, but have the same asymptotic running time as the ones that require it. For example, this corresponds to algorithms computing forests-decomposition and $O(a)$-coloring. Nevertheless, all their algorithms still require the inputs of all nodes to contain a common upper bound on $n$.

We present general methods for transforming a non-uniform local algorithm into a uniform one without increasing the asymptotic running time of the original algorithm. Our method applies to a wide family of both deterministic and randomized algorithms. In particular, our method applies to all state-of-the-art non-uniform algorithms for MIS and Maximal Matching, as well as to several of the best known results for $(\Delta+1)$-coloring.

Our transformations are obtained using a new type of local algorithms termed *pruning algorithms*. Informally, the basic property of a pruning algorithm is that it allows one to iteratively apply a sequence of local algorithms (whose output may not form a correct global solution) one after the other, in a way that “always progresses” toward a solution. In a sense, a pruning algorithm is a combination of a gluing mechanism and a *local checking* algorithm (cf., [16,32]). A local checking algorithm for a problem $\Pi$ runs on graphs with an output value at each node (and possibly an input too), and can locally detect whether the output is “legal” with respect to $\Pi$. That is, if the instance is not legal then at least one node detects this, and raises an alarm. (For example, a local checking algorithm for MIS is trivial: each node in the set $S$, which is suspected to be a MIS, checks that none of its neighbors belongs to $S$, and each node not in $S$ checks that at least one of its neighbors belongs to $S$. If the check fails, then the node raises an alarm.) A pruning algorithm needs to satisfy an additional *gluing* property not required by local checking algorithms. Specifically, if the instance is not legal, then the pruning algorithm must carefully choose the nodes raising the alarm (and possibly modify their input too), so that a solution for the subgraph induced by those alarming nodes can be well glued to the previous output of the non-alarming nodes, in a way such that the combined output is a solution to the problem for the whole initial graph.
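The MIS local-checking rule just described is easy to make concrete. The sketch below is our own illustration (the paper gives no code); it represents the graph as an adjacency dictionary and returns the set of nodes that raise an alarm:

```python
def mis_alarms(adj, in_set):
    """Local checking for MIS: return the set of alarming nodes.

    adj: dict mapping each node to the set of its neighbors.
    in_set: dict mapping each node to True iff it belongs to the candidate set S.
    Each node inspects only its own membership and that of its neighbors.
    """
    alarms = set()
    for v, neighbors in adj.items():
        if in_set[v]:
            # A node in S alarms if some neighbor is also in S (independence violated).
            if any(in_set[u] for u in neighbors):
                alarms.add(v)
        else:
            # A node outside S alarms if no neighbor is in S (maximality violated).
            if not any(in_set[u] for u in neighbors):
                alarms.add(v)
    return alarms
```

On the path a–b–c, the set {a, c} raises no alarm, while {a, b} makes both a and b alarm.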

We believe that this new type of algorithm may be of independent interest. Indeed, as we show, pruning algorithms have several other applications in the theory of local computation, besides the aforementioned issue of designing uniform algorithms. Specifically, they can also be used to transform a local Monte-Carlo algorithm into a Las Vegas one, as well as to obtain an algorithm that runs in the minimum running time of a given (finite) set of uniform algorithms.

## 1.2 Previous Work

*MIS and coloring:* There is a long line of research concerning the two related problems of $(\Delta+1)$-coloring and MIS [3, 10, 17, 18, 23, 24, 28]. A *k*-coloring of a graph is an assignment of an integer in $\{1, \dots, k\}$ to each node such that no two adjacent vertices are assigned the same integer. Recently, Barenboim and Elkin [4] and, independently, Kuhn [22] presented two elegant $(\Delta+1)$-coloring and MIS algorithms running in $O(\Delta + \log^* n)$ time on general graphs. This is the best currently known bound for these problems on low-degree graphs. For graphs with a large maximum degree $\Delta$, the best bound is due to Panconesi and Srinivasan [34], who devised an algorithm running in $2^{O(\sqrt{\log n})}$ time. The aforementioned algorithms are not uniform. Specifically, all three algorithms require that the inputs of all nodes contain a common upper bound on $n$, and the first two also require a common upper bound on $\Delta$.

For bounded-independence graphs, Schneider and Wattenhofer [37] designed uniform deterministic MIS and $(\Delta + 1)$-coloring algorithms running in $O(\log^* n)$ time. Barenboim and Elkin [6] devised a deterministic algorithm for the MIS problem on graphs of bounded arboricity that requires time $O(\log n / \log \log n)$. More specifically, for graphs with arboricity $a = o(\sqrt{\log n})$, they show that a MIS can be computed deterministically in $o(\log n)$ time, and whenever $a = O(\log^{1/2-\delta} n)$ for some constant $\delta \in (0, 1/2)$, the same algorithm runs in time $O(\log n / \log \log n)$. At the cost of increasing the running time by a multiplicative factor of $O(\log^* n)$, the authors show how to modify their algorithm so that the value of $a$ need not be part of the inputs of nodes. Nevertheless, all their algorithms require the inputs of all nodes to contain a common upper bound on the value of $n$. Another MIS algorithm which is efficient for graphs with low arboricity was devised by Barenboim and Elkin [5]; this algorithm runs in time $O(a+a^\epsilon \log n)$ for arbitrary constant $\epsilon > 0$.

Concerning the problem of coloring with more than $\Delta + 1$ colors, Linial [27, 28], and subsequently Szegedy and Vishwanathan [38], described $O(\Delta^2)$-coloring algorithms with running time $\Theta(\log^* n)$. Barenboim and Elkin [4] and, independently, Kuhn [22] generalized this by presenting a tradeoff between the running time and the number of colors: they devised a $\lambda(\Delta + 1)$-coloring algorithm with running time $O(\Delta/\lambda + \log^* n)$, for any $\lambda \ge 1$. All these algorithms require the inputs of all nodes to contain common upper bounds on both $n$ and $\Delta$.

Barenboim and Elkin [5] devised a $\Delta^{1+o(1)}$-coloring algorithm running in time $O(f(\Delta) \log \Delta \log n)$, for an arbitrarily slow-growing function $f = \omega(1)$. They
samples/texts/1660153/page_5.md
ADDED
@@ -0,0 +1,19 @@
<table><thead><tr><th>Problem</th><th>Parameters</th><th>Time</th><th>Ref.</th><th>This paper (uniform)</th><th>Corollary 1</th></tr></thead><tbody><tr><td rowspan="2">Det. MIS and (Δ+1)-coloring</td><td>n, Δ</td><td>O(Δ + log<sup>*</sup> n)</td><td>[4,22]</td><td rowspan="2">min {O(Δ + log<sup>*</sup> n), 2<sup>O(√log n)</sup>}</td><td>(i)</td></tr><tr><td>n</td><td>2<sup>O(√log n)</sup></td><td>[34]</td><td>(ii)</td></tr><tr><td>Det. MIS (arboricity a = o(√log n))</td><td>n, a</td><td>o(log n)</td><td>[6]</td><td>o(log n)</td><td>(i)</td></tr><tr><td>Det. MIS (arboricity a = O(log<sup>1/2-δ</sup> n))</td><td>n, a</td><td>O(log n / log log n)</td><td>[6]</td><td>O(log n / log log n)</td><td>(i)</td></tr><tr><td>Det. λ(Δ + 1)-coloring</td><td>n, Δ</td><td>O(Δ/λ + log<sup>*</sup> n)</td><td>[4,22]</td><td>O(Δ/λ + log<sup>*</sup> n)</td><td>(iii)</td></tr><tr><td>Det. O(Δ)-edge-coloring</td><td>n, Δ</td><td>O(Δ<sup>ε</sup> + log<sup>*</sup> n)</td><td>[7]</td><td>O(Δ<sup>ε</sup> + log<sup>*</sup> n)</td><td>(v)</td></tr><tr><td>Det. O(Δ<sup>1+ε</sup>)-edge-coloring</td><td>n, Δ</td><td>O(log Δ + log<sup>*</sup> n)</td><td>[7]</td><td>O(log Δ + log<sup>*</sup> n)</td><td>(v)</td></tr><tr><td>Det. Maximal Matching</td><td>n or Δ</td><td>O(log<sup>4</sup> n)</td><td>[19]</td><td>O(log<sup>4</sup> n)</td><td>(vi)</td></tr><tr><td>Rand. (2, 2(c + 1))-ruling set</td><td>n</td><td>O(2<sup>c</sup> log<sup>1/c</sup> n)</td><td>[36]</td><td>O(2<sup>c</sup> log<sup>1/c</sup> n)</td><td>(vii)</td></tr><tr><td>Rand. MIS</td><td>uniform</td><td>O(log n)</td><td>[1,30]</td><td></td><td></td></tr></tbody></table>

**Table 1** Comparison of *LOCAL* algorithms with respect to the use of global parameters. “Det.” stands for deterministic, and “Rand.” for randomized.

also produced an $O(\Delta^{1+\epsilon})$-coloring algorithm running in $O(\log \Delta \log n)$ time, for an arbitrarily small constant $\epsilon > 0$, and an $O(\Delta)$-coloring algorithm running in $O(\Delta^{\epsilon} \log n)$ time, for an arbitrarily small constant $\epsilon > 0$. All these coloring algorithms require the inputs of all nodes to contain the values of both $\Delta$ and $n$. Other deterministic non-uniform coloring algorithms, with numbers of colors and running times depending on the arboricity parameter, were given by Barenboim and Elkin [5, 6].

Efficient deterministic algorithms for the edge-coloring problem can be found in several papers [5, 7, 33]. In particular, Panconesi and Rizzi [33] designed a simple deterministic local algorithm that finds a $(2\Delta - 1)$-edge-coloring of a graph in time $O(\Delta + \log^* n)$. Recently, Barenboim and Elkin [7] designed an $O(\Delta)$-edge-coloring algorithm running in time $O(\Delta^{\epsilon} + \log^* n)$, for any $\epsilon > 0$, and an $O(\Delta^{1+\epsilon})$-edge-coloring algorithm running in time $O(\log \Delta + \log^* n)$, for any $\epsilon > 0$. All these algorithms require the inputs of all nodes to contain common upper bounds on both $n$ and $\Delta$.

Randomized algorithms for MIS and $(\Delta+1)$-coloring running in expected time $O(\log n)$ were initially given by Luby [30] and, independently, by Alon et al. [1].

Recently, Schneider and Wattenhofer [36] constructed the best currently known non-uniform $(\Delta+1)$-coloring algorithm, which runs in time $O(\log \Delta + \sqrt{\log n})$. They also provided randomized algorithms for coloring using more colors. For every positive integer $c$, a randomized algorithm for the $(2, 2(c+1))$-ruling set problem running in time $O(2^c \log^{1/c} n)$ is also presented. (A set $S$ of nodes in a graph is $(\alpha, \beta)$-ruling if every node not in $S$ is at distance at most $\beta$ from a node in $S$, and no two nodes in $S$ are at distance less than $\alpha$.) All these algorithms of Schneider and Wattenhofer [36] are not uniform and require the inputs of all nodes to contain a common upper bound on $n$.

**Maximal Matching:** A maximal matching of a graph $G$ is a set $M$ of edges of $G$ such that every edge not in $M$ is incident to an edge in $M$ and no two edges in $M$ are incident. Schneider and Wattenhofer [37] designed a uniform deterministic maximal matching algorithm on bounded-independence graphs running in $O(\log^* n)$ time. For general graphs, however, the state-of-the-art maximal matching algorithm is not uniform: Hanckowiak et al. [19] presented a non-uniform deterministic algorithm for maximal matching running in time $O(\log^4 n)$. This algorithm assumes that the inputs of all nodes contain a common upper bound on $n$ (this assumption can be omitted for some parts of the algorithm under the condition that the inputs of all nodes contain the value of $\Delta$).
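The two conditions in this definition of a maximal matching can be checked mechanically. The verifier below is our own illustration (not from the paper), representing each edge as a frozenset of its two endpoints:

```python
def is_maximal_matching(edges, matching):
    """Check that `matching` is a maximal matching of the graph with edge set `edges`.

    edges: iterable of frozensets {u, v} -- the edges of G.
    matching: set of frozensets -- the candidate set M (a subset of `edges`).
    """
    edges = set(edges)
    assert matching <= edges, "M must consist of edges of G"
    # M is a matching: no two edges of M share an endpoint.
    matched_nodes = [v for e in matching for v in e]
    if len(matched_nodes) != len(set(matched_nodes)):
        return False
    covered = set(matched_nodes)
    # M is maximal: every edge outside M is incident to an edge of M.
    return all(e & covered for e in edges - matching)
```

On the path a–b–c–d, both {bc} and {ab, cd} are maximal matchings, while {ab} alone is not (the edge cd could still be added).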

## 1.3 Our Results

The main conceptual contribution of the paper is the introduction of a new type of algorithm called *pruning algorithms*. Informally, the fundamental property of this type of algorithm is to allow one to iteratively run a sequence of algorithms (whose output may not necessarily be correct everywhere) so that the global output does not deteriorate, and always progresses toward a solution.

Our main application of pruning algorithms concerns the problem of locally computing a global solution while minimizing the necessary global information contained in the inputs of the nodes. Addressing this, we provide a method for transforming a non-uniform local algorithm into a uniform one without increasing the asymptotic running time of the original algorithm. Our method applies to a wide family of both deterministic and randomized algorithms; in particular, it applies to many of the best known results concerning classical problems such as MIS, Coloring, and Maximal Matching. (See Table 1 for a summary of some of the uni-
samples/texts/1660153/page_6.md
ADDED
@@ -0,0 +1,77 @@
form algorithms we obtain and the corresponding state-of-the-art non-uniform algorithms.)

In another application, we show how to transform a Monte-Carlo local algorithm into a Las Vegas one. Finally, given a constant number of uniform algorithms for the same problem whose running times depend on different parameters (which are unknown to the nodes), we show a method for constructing a uniform algorithm solving the problem that, on every instance, runs asymptotically as fast as the fastest among those given algorithms.

Stating our main results requires a number of formal definitions, so we defer the precise statements to later parts of the paper. Rather, we provide here some interesting corollaries of our results. References for the corresponding non-uniform algorithms are provided in Table 1. (The notion of “moderately-slow function” used in item (iii) below is defined in Section 2.)

**Corollary 1**

(i) There exists a uniform deterministic algorithm solving MIS on general graphs in time

$$
\min \{g(n), h(\Delta, n), f(a, n)\}, \tag{2}
$$

where $g(n) = 2^{O(\sqrt{\log n})}$, $h(\Delta, n) = O(\Delta + \log^* n)$, and $f(a, n)$ is bounded as follows: $f(a, n) = o(\log n)$ for graphs of arboricity $a = o(\sqrt{\log n})$; $f(a, n) = O(\log n / \log \log n)$ for arboricity $a = O(\log^{1/2-\delta} n)$, for some constant $\delta \in (0, 1/2)$; and otherwise $f(a, n) = O(a + a^{\epsilon} \log n)$, for an arbitrarily small constant $\epsilon > 0$.

(ii) There exists a uniform deterministic algorithm solving the $(\Delta + 1)$-coloring problem on general graphs in time $\min\{O(\Delta + \log^* n), 2^{O(\sqrt{\log n})}\}$.

(iii) There exists a uniform deterministic algorithm solving the $\lambda(\Delta+1)$-coloring problem on general graphs and running in time $O(\Delta/\lambda + \log^* n)$, for any $\lambda \ge 1$ such that $\Delta/\lambda$ is a moderately-slow function. In particular, there exists a uniform deterministic algorithm solving the $O(\Delta^2)$-coloring problem in time $O(\log^* n)$.

(iv) The following uniform deterministic coloring algorithms exist.

- A uniform $\Delta^{1+o(1)}$-coloring algorithm running in time $O(f(\Delta) \log \Delta \log n \log \log n)$, for an arbitrarily slow-growing function $f = \omega(1)$.

- A uniform $O(\Delta^{1+\epsilon})$-coloring algorithm running in $O(\log \Delta \log n \log \log n)$ time, for any constant $\epsilon > 0$.

- A uniform $O(\Delta)$-coloring algorithm running in $O(\Delta^{\epsilon} \log n \log \log n)$ time, for any constant $\epsilon > 0$.

(v) The following uniform deterministic edge-coloring algorithms exist.

- A uniform $O(\Delta)$-edge-coloring algorithm for general graphs running in time $O(\Delta^{\epsilon} + \log^* n)$, for any constant $\epsilon > 0$.

- A uniform $O(\Delta^{1+\epsilon})$-edge-coloring algorithm for general graphs running in time $O(\log \Delta + \log^* n)$, for any constant $\epsilon > 0$.

(vi) There exists a uniform deterministic algorithm solving the maximal matching problem in time $O(\log^4 n)$.

(vii) For a constant integer $c \ge 1$, there exists a uniform randomized algorithm solving the $(2, 2(c+1))$-ruling set problem in time $O(2^c \log^{1/c} n)$.

## 2 Preliminaries

*General definitions:* For two integers $a$ and $b$, we let $[a, b] = \{a, a+1, \dots, b\}$. A vector $\underline{x} \in \mathbf{R}^\ell$ is said to *dominate* a vector $\underline{y} \in \mathbf{R}^\ell$ if $\underline{x}$ is coordinate-wise greater than or equal to $\underline{y}$, that is, $x_k \ge y_k$ for each $k \in [1, \ell]$.

For a graph $G$, we let $V(G)$ and $E(G)$ be the sets of nodes and edges of $G$, respectively. (Unless mentioned otherwise, we consider only undirected and unweighted graphs.) The degree $\deg_G(v)$ of a node $v \in V(G)$ is the number of neighbors of $v$ in $G$. The maximum degree of $G$ is $\Delta_G = \max\{\deg_G(v) : v \in V(G)\}$.

Let $u$ and $v$ be two nodes of $G$. The distance $\mathrm{dist}_G(u, v)$ between $u$ and $v$ is the number of edges on a shortest path connecting them. Given an integer $r \ge 0$, the ball of radius $r$ around $u$ is the subgraph $B_G(u, r)$ of $G$ induced by the collection of nodes at distance at most $r$ from $u$. The neighborhood $N_G(u)$ of $u$ is the set of neighbors of $u$, i.e., $N_G(u) = B_G(u, 1) \setminus \{u\}$. In what follows, we may omit the subscript $G$ from the previous notations when there is no risk of confusion.
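For illustration, these notions can be computed directly by breadth-first search. The sketch below is our own (centralized, with an adjacency-dictionary representation); it returns the node set of the ball $B_G(u, r)$:

```python
from collections import deque

def ball(adj, u, r):
    """Node set of B_G(u, r): all nodes at distance at most r from u (BFS)."""
    dist = {u: 0}
    queue = deque([u])
    while queue:
        v = queue.popleft()
        if dist[v] == r:
            continue  # do not expand past radius r
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return set(dist)

def neighborhood(adj, u):
    """N_G(u) = B_G(u, 1) \\ {u}."""
    return ball(adj, u, 1) - {u}
```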

**Functions:** A function $f: \mathbf{R}^{\ell} \rightarrow \mathbf{R}$ is *non-decreasing* if for every two vectors $\underline{x}$ and $\underline{y}$ such that $\underline{x}$ dominates $\underline{y}$,

$f(\underline{y}) \le f(\underline{x}).$

A function $f: \mathbf{R}^+ \to \mathbf{R}^+$ is *moderately-slow* if it is non-decreasing and there exists a positive integer $\alpha$ such that

$\forall i \in \mathbb{N} \setminus \{1\}, \quad \alpha \cdot f(i) \ge f(2i).$
samples/texts/1660153/page_7.md
ADDED
@@ -0,0 +1,35 @@
In other words, $f(c \cdot i) = O(f(i))$ for every constant $c$ and every integer $i$, where the constant hidden in the $O$ notation depends only on $c$. An example of a moderately-slow function is given by the logarithm.

A function $f: \mathbf{R}^+ \rightarrow \mathbf{R}^+$ is *moderately-increasing* if it is non-decreasing and there exists a positive integer $\alpha$ such that

$$\forall i \in \mathbb{N} \setminus \{1\}, \quad f(\alpha \cdot i) \ge 2f(i) \text{ and } \alpha \cdot f(i) \ge f(2i).$$

Note that $f(x) = x^{k_1} \log^{k_2}(x)$ is a moderately-increasing function for every two reals $k_1 \ge 1$ and $k_2 \ge 0$. Moreover, every moderately-increasing function is moderately-slow. On the other hand, some functions (such as the constant functions or the logarithm) are moderately-slow but not moderately-increasing.
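These two notions can be probed numerically. The finite checks below are our own sketch (they test the defining inequalities on a bounded range only, so they are evidence rather than a proof); they confirm, for instance, that $\log_2$ is moderately-slow with $\alpha = 2$ but not moderately-increasing, while $f(x) = x^2$ is moderately-increasing with $\alpha = 4$:

```python
import math

def check_moderately_slow(f, alpha, up_to=10_000):
    """Finite check of: alpha * f(i) >= f(2i) for all i in [2, up_to]."""
    return all(alpha * f(i) >= f(2 * i) for i in range(2, up_to + 1))

def check_moderately_increasing(f, alpha, up_to=10_000):
    """Additionally requires f(alpha * i) >= 2 * f(i) on the same range."""
    return check_moderately_slow(f, alpha, up_to) and all(
        f(alpha * i) >= 2 * f(i) for i in range(2, up_to + 1)
    )
```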

A function $f: \mathbf{R}^+ \to \mathbf{R}^+$ is *moderately-fast* if it is moderately-increasing and there exists a polynomial $P$ such that

$$\forall x \in \mathbf{R}^{+}, \quad x < f(x) < P(x).$$

A function $f: \mathbf{R}^+ \to \mathbf{R}^+$ *tends to infinity* if

$$\limsup_{x \to \infty} f(x) = \infty,$$

and $f$ is *ascending* if it is non-decreasing and it tends to infinity. (Note that in this case $\lim_{x \to \infty} f(x) = \infty$.)

A function $f: (\mathbf{R}^+)^{\ell} \to \mathbf{R}^+$ is *additive* if there exist $\ell$ ascending functions $f_1, \dots, f_\ell$ such that

$$f(x_1, \dots, x_\ell) = \sum_{i=1}^{\ell} f_i(x_i).$$

**Problems and instances:** Given a set $V$ of nodes, a vector for $V$ is an assignment $\mathbf{x}$ of a bit string $\mathbf{x}(v)$ to each $v \in V$, i.e., $\mathbf{x}$ is a function $\mathbf{x}: V \rightarrow \{0, 1\}^*$. A problem is defined by a collection of triplets: $\Pi = \{(G, \mathbf{x}, \mathbf{y})\}$, where $G$ is a (not necessarily connected) graph, and $\mathbf{x}$ and $\mathbf{y}$ are input and output vectors for $V$, respectively. We consider only problems that are closed under disjoint union, i.e., if $G_1$ and $G_2$ are two vertex-disjoint graphs and $(G_1, \mathbf{x}_1, \mathbf{y}_1), (G_2, \mathbf{x}_2, \mathbf{y}_2) \in \Pi$ then $(G, \mathbf{x}, \mathbf{y}) \in \Pi$, where $G = G_1 \cup G_2$, $\mathbf{x} = \mathbf{x}_1 \cup \mathbf{x}_2$ and $\mathbf{y} = \mathbf{y}_1 \cup \mathbf{y}_2$.

An instance, with respect to a given problem $\Pi$, is a pair $(G, \mathbf{x})$ for which there exists an output vector $\mathbf{y}$ satisfying $(G, \mathbf{x}, \mathbf{y}) \in \Pi$. In what follows, whenever we consider a collection $\mathcal{F}$ of instances, we always assume that $\mathcal{F}$ is closed under inclusion. That is, if $(G, \mathbf{x}) \in \mathcal{F}$ and $(G', \mathbf{x}') \subseteq (G, \mathbf{x})$ (i.e., $G'$ is a subgraph of $G$ and $\mathbf{x}'$ is the input vector $\mathbf{x}$ restricted to $V(G')$), then $(G', \mathbf{x}') \in \mathcal{F}$. Informally, given a problem $\Pi$ and a collection of instances $\mathcal{F}$, the goal is to design an efficient distributed algorithm that takes an instance $(G, \mathbf{x}) \in \mathcal{F}$ as input, and produces an output vector $\mathbf{y}$ satisfying $(G, \mathbf{x}, \mathbf{y}) \in \Pi$. The reason to require problems to be closed under disjoint union is that a distributed algorithm operating on an instance $(G, \mathbf{x})$ runs separately and independently on each connected component of $G$. Let $\mathcal{G}$ be a family of graphs closed under inclusion. We define $\mathcal{F}(\mathcal{G})$ to be $\{\mathcal{G}\} \times \{0, 1\}^*$.

We assume that each node $v \in V$ is provided with a unique integer referred to as the *identity* of $v$, and denoted $\mathrm{Id}(v)$; by unique identities, we mean that $\mathrm{Id}(u) \neq \mathrm{Id}(v)$ for every two distinct nodes $u$ and $v$. For ease of exposition, we consider the identity of a node to be part of its input.

We consider classical problems such as coloring, maximal matching (MM), Maximal Independent Set (MIS) and the $(\alpha, \beta)$-ruling set problem. Informally, viewing the output of a node as a *color*, the requirement of *coloring* is that the colors of two neighboring nodes must be different. In the $(\alpha, \beta)$-ruling set problem, the output at each node is Boolean, and indicates whether the node belongs to a set $S$ that must form an $(\alpha, \beta)$-ruling set. That is, the set $S$ of selected nodes must satisfy: (1) two nodes that belong to $S$ must be at distance at least $\alpha$ from each other, and (2) if a node does not belong to $S$, then there is a node in the set at distance at most $\beta$ from it. MIS is a special case of the ruling set problem; specifically, MIS is precisely the $(2, 1)$-ruling set problem. Finally, given a triplet $(G, \mathbf{x}, \mathbf{y})$, two nodes $u$ and $v$ are said to be *matched* if $(u, v) \in E$, $\mathbf{y}(u) = \mathbf{y}(v)$ and $\mathbf{y}(w) \neq \mathbf{y}(u)$ for every $w \in (N_G(u) \cup N_G(v)) \setminus \{u, v\}$. Thus, the MM problem requires that each node $u$ be either matched to one of its neighbors, or that every neighbor $v$ of $u$ be matched to one of $v$'s neighbors.
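As a sanity check, the two ruling-set conditions can be verified directly. The centralized verifier below is our own illustration (the distance computation is plain BFS, not a local algorithm); it also confirms that a MIS passes the $(2, 1)$ test:

```python
from collections import deque
from itertools import combinations

def is_ruling_set(adj, S, alpha, beta):
    """Check that S is an (alpha, beta)-ruling set of the graph given by adj."""
    def dist(u, v):
        # BFS distance from u to v; infinity if v is unreachable.
        d = {u: 0}
        q = deque([u])
        while q:
            x = q.popleft()
            if x == v:
                return d[x]
            for y in adj[x]:
                if y not in d:
                    d[y] = d[x] + 1
                    q.append(y)
        return float("inf")

    # (1) Any two selected nodes are at distance at least alpha.
    if any(dist(u, v) < alpha for u, v in combinations(S, 2)):
        return False
    # (2) Every unselected node has a selected node within distance beta.
    return all(any(dist(v, s) <= beta for s in S) for v in set(adj) - set(S))
```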

**Parameters:** Fix a problem $\Pi$ and let $\mathcal{F}$ be a collection of instances for $\Pi$. A parameter $\mathbf{p}$ is a positive valued function $\mathbf{p}: \mathcal{F} \to \mathbb{N}$. The parameter $\mathbf{p}$ is non-decreasing if $\mathbf{p}(G', \mathbf{x}') \leq \mathbf{p}(G, \mathbf{x})$ whenever $(G', \mathbf{x}') \in \mathcal{F}$ and $(G', \mathbf{x}') \subseteq (G, \mathbf{x})$.

Let $\mathcal{F}$ be a collection of instances. A parameter $\mathbf{p}$ for $\mathcal{F}$ is a *graph-parameter* if $\mathbf{p}$ is independent of the input, that is, if $\mathbf{p}(G, \mathbf{x}) = \mathbf{p}(G, \mathbf{x}')$ for every two instances $(G, \mathbf{x}), (G, \mathbf{x}') \in \mathcal{F}$ such that the input assignments $\mathbf{x}$ and $\mathbf{x}'$ preserve the identities, i.e., the inputs $\mathbf{x}(v)$ and $\mathbf{x}'(v)$ contain the same identity $\mathrm{Id}(v)$ for every $v \in V(G)$. In what follows, we will consider only non-decreasing graph-parameters (note, not all graph-parameters are non-decreasing, an example being the diameter of a graph). More precisely, we will primarily focus on the following non-decreasing graph-parameters: the number $n$ of nodes of the graph $G$, i.e., $|V(G)|$, the maximum degree $\Delta = \Delta(G)$ of $G$, i.e., $\max\{\deg_G(u) : u \in V(G)\}$
samples/texts/1660153/page_8.md
ADDED
@@ -0,0 +1,19 @@
and the arboricity $a = a(G)$ of $G$, i.e., the least number of acyclic subgraphs of $G$ whose union is $G$.
|
| 2 |
+
|
| 3 |
+
**Local algorithms:** Consider a problem $\Pi$ and a collection of instances $\mathcal{F}$ for $\Pi$. An algorithm for $\Pi$ and $\mathcal{F}$ takes as input an instance $(G, \mathbf{x}) \in \mathcal{F}$ and must terminate with an output vector $\mathbf{y}$ such that $(G, \mathbf{x}, \mathbf{y}) \in \Pi$. We consider the *LOCAL* model (cf., [35]). During the execution of a *local* algorithm $\mathcal{A}$, all processors are woken up simultaneously and computation proceeds in fault-free synchronous rounds. In each round, every node may send messages of unrestricted size to its neighbors and may perform arbitrary computations on its data. A message that is sent in a round $r$, arrives at its destination before the next round $r+1$ starts. It must be guaranteed that after a finite number of rounds, each node $v$ terminates by writing some final output value $\mathbf{y}(v)$ in its designated output variable (informally, this means that we may assume that a node "knows" that its output is indeed its final output.) The algorithm $\mathcal{A}$ is *correct* if for every instance $(G, \mathbf{x}) \in \mathcal{F}$, the resulting output vector $\mathbf{y}$ satisfies $(G, \mathbf{x}, \mathbf{y}) \in \Pi$.
|
| 4 |
+
|
| 5 |
+
Let $\mathcal{A}$ be a local deterministic algorithm for $\Pi$ and $\mathcal{F}$. The *running time* of $\mathcal{A}$ over a particular instance $(G, \mathbf{x}) \in \mathcal{F}$, denoted $T_{\mathcal{A}}(G, \mathbf{x})$, is the number of rounds from the beginning of the execution of $\mathcal{A}$ until all nodes terminate. The running time of $\mathcal{A}$ is typically evaluated with respect to a collection $\Lambda$ of parameters $\mathbf{q}_1, \dots, \mathbf{q}_{\ell}$. Specifically, it is compared to a non-decreasing function $f: \mathbb{N}^{\ell} \to \mathbb{R}^{+}$; we say that $f$ is an upper bound for the running time of $\mathcal{A}$ with respect to $\Lambda$ if $T_{\mathcal{A}}(G, \mathbf{x}) \le f(\mathbf{q}_1^*, \dots, \mathbf{q}_{\ell}^*)$ for every instance $(G, \mathbf{x}) \in \mathcal{F}$ with parameters $\mathbf{q}_i^* = \mathbf{q}_i(G, \mathbf{x})$ for $i \in [1, \ell]$. Let us stress that we assume throughout the paper that all the functions bounding running times of algorithms are non-decreasing.
|
| 6 |
+
|
| 7 |
+
For an integer $i$, the algorithm $\mathcal{A}$ restricted to $i$ rounds is the local algorithm $\mathcal{B}$ that consists of running $\mathcal{A}$ for precisely $i$ rounds. The output $\mathbf{y}(u)$ of $\mathcal{B}$ at a vertex $u$ is defined as follows: if, during the $i$ rounds, $\mathcal{A}$ outputs a value $y$ at $u$ then $\mathbf{y}(u) = y$; otherwise we let $\mathbf{y}(u)$ be an arbitrary value, e.g., “0”.
|
| 8 |
+
|
| 9 |
+
A *randomized* local algorithm is a local algorithm that allows each node to use random bits in its local computation—the random bits used by different nodes being independent. A randomized (local) algorithm $\mathcal{A}$ is *Las Vegas* if its correctness is guaranteed with probability 1. The *running time* of a Las Vegas algorithm $\mathcal{A}_{LV}$ over a particular configuration $(G, \mathbf{x}) \in \mathcal{F}$, denoted $T_{\mathcal{A}_{LV}}(G, \mathbf{x})$, is a random variable, which may be unbounded. However, the expected value of $T_{\mathcal{A}_{LV}}(G, \mathbf{x})$
|
| 10 |
+
|
| 11 |
+
is bounded. A Monte-Carlo algorithm $\mathcal{A}_{MC}$ with guarantee $\rho \in (0, 1]$ is a randomized algorithm that takes a configuration $(G, \mathbf{x}) \in \mathcal{F}$ as input and terminates before a predetermined time $T_{\mathcal{A}_{MC}}(G, \mathbf{x})$ (called the *running time* of $\mathcal{A}_{MC}$). It is guaranteed that the output vector produced by Algorithm $\mathcal{A}_{MC}$ is a solution to $\Pi$ with probability at least $\rho$. Finally, a weak Monte-Carlo algorithm $\mathcal{A}_{WMC}$ with guarantee $\rho \in (0, 1]$ guarantees that with probability at least $\rho$, the algorithm outputs a correct solution by its running time $T_{\mathcal{A}_{WMC}}(G, \mathbf{x})$. (Observe that it is not certain that any execution of the weak Monte-Carlo algorithm will terminate by the prescribed time $T_{\mathcal{A}_{WMC}}(G, \mathbf{x})$, or even terminate at all.) Note that a Monte-Carlo algorithm is in particular a weak Monte-Carlo algorithm, with the same running time and guarantee. Moreover, for any constant $\rho \in (0, 1]$, a Las Vegas algorithm running in expected time $T$ is a weak Monte-Carlo algorithm with guarantee $\rho$ running in time $\frac{T}{1-\rho}$, by Markov's inequality.
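The Markov-inequality conversion at the end of this paragraph is easy to check empirically. The geometric trial process below is an assumed toy stand-in for a Las Vegas running time, not anything from the paper:

```python
import random

def las_vegas_rounds(p_success=0.5):
    # toy stand-in for a Las Vegas running time: repeat a trial until
    # it succeeds, so the expected number of rounds is 1/p_success
    rounds = 1
    while random.random() >= p_success:
        rounds += 1
    return rounds

def weak_monte_carlo_bound(expected_time, rho):
    # Markov's inequality: P(T > E[T]/(1-rho)) <= 1-rho, so truncating
    # the run at this time gives a weak Monte-Carlo guarantee of rho
    return expected_time / (1 - rho)

random.seed(0)
rho = 0.9
cutoff = weak_monte_carlo_bound(2.0, rho)   # E[T] = 2 here, cutoff ~ 20
trials = 100_000
ok = sum(las_vegas_rounds() <= cutoff for _ in range(trials))
print(ok / trials >= rho)                   # True
```

The empirical success rate is far above the Markov guarantee here, as expected: Markov's bound is loose for the geometric distribution.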
|
| 12 |
+
|
| 13 |
+
*Synchronicity and time complexity:* Many *LOCAL* algorithms happen to have different termination times at different nodes. On the other hand, most of the algorithms rely on a simultaneous wake-up time for all nodes. This becomes an issue when one wants to run an algorithm $\mathcal{A}_1$ and subsequently an algorithm $\mathcal{A}_2$ taking the output of $\mathcal{A}_1$ as input. Indeed, this problem amounts to running $\mathcal{A}_2$ with non-simultaneous wake-up times: a node $u$ starts $\mathcal{A}_2$ when it terminates $\mathcal{A}_1$.
|
| 14 |
+
|
| 15 |
+
As observed (e.g., by Kuhn [22]), the concept of a synchronizer [2], used in the context of local algorithms, allows one to transform an asynchronous local algorithm into a synchronous one that runs in the same asymptotic time complexity. Hence, the synchronicity assumption can actually be removed. Although the standard asynchronous model still assumes a simultaneous wake-up time, it can easily be verified that the technique also applies with non-simultaneous wake-up times, provided that a node can buffer messages received before it wakes up, which is the case when running one algorithm after another.
|
| 16 |
+
|
| 17 |
+
However, we have to adapt the notion of running time. The computation that a node performs in time $t$ depends on its interactions with nodes at distance at most $t$ in the network. More precisely, we say that a node $u$ terminates in time $t$ if it terminates at most $t$ rounds after all nodes in $B_G(u,t)$ have woken up. The termination time of $u$ is the least $t$ such that $u$ terminates in time $t$. We finally define the running time of an algorithm as the maximum termination time over all nodes and all wake-up patterns.
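This definition can be made concrete with a small sketch (a hypothetical three-node example, where `finish` holds absolute termination times):

```python
from collections import deque

def ball(adj, u, t):
    # B_G(u, t): all vertices within distance t of u (BFS)
    dist = {u: 0}
    q = deque([u])
    while q:
        v = q.popleft()
        if dist[v] == t:
            continue
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                q.append(w)
    return set(dist)

def termination_time(adj, wake, finish, u):
    # least t such that u finishes at most t rounds after the last
    # wake-up time of a node in B_G(u, t)
    t = 0
    while finish[u] > max(wake[v] for v in ball(adj, u, t)) + t:
        t += 1
    return t

adj = {0: [1], 1: [0, 2], 2: [1]}    # path on three nodes
wake = {0: 0, 1: 5, 2: 0}            # node 1 wakes up late
finish = {0: 6, 1: 6, 2: 6}          # absolute finishing times
print([termination_time(adj, wake, finish, u) for u in adj])   # [1, 1, 1]
```

Note that the termination time is 1 for every node even though the absolute finishing times are 6: the late wake-up of node 1 is charged to the wake-up pattern, not to the algorithm.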
|
| 18 |
+
|
| 19 |
+
Given two local algorithms $\mathcal{A}_1$ and $\mathcal{A}_2$, we let $\mathcal{A}_1; \mathcal{A}_2$ be the process of running $\mathcal{A}_2$ after $\mathcal{A}_1$. It turns out that
|
samples/texts/1660153/page_9.md
ADDED
|
@@ -0,0 +1,73 @@
|
|
|
| 1 |
+
the running time of $\mathcal{A}_1$; $\mathcal{A}_2$ is bounded from above by
|
| 2 |
+
the sum of the running times of $\mathcal{A}_1$ and $\mathcal{A}_2$. This can be
|
| 3 |
+
shown as follows. Let $t_1$ and $t_2$ be the running times of
|
| 4 |
+
$\mathcal{A}_1$ and $\mathcal{A}_2$ respectively. Consider a node $u$ and let $t_0$ be
|
| 5 |
+
the last wake-up time of a node in the ball $B_G(u, t_1+t_2)$.
|
| 6 |
+
At $t_0 + t_1$, all nodes in $B_G(u, t_2)$ have terminated $\mathcal{A}_1$
|
| 7 |
+
and are thus considered as woken up for the execution
|
| 8 |
+
of $\mathcal{A}_2$. Node $u$ thus terminates before $(t_0+t_1)+t_2$. As
|
| 9 |
+
this is true for any node $u$ independently of the wake-up
|
| 10 |
+
pattern, $\mathcal{A}_1$; $\mathcal{A}_2$ has running time at most $t_1+t_2$. This
|
| 11 |
+
establishes the following observation.
|
| 12 |
+
|
| 13 |
+
**Observation 2.1** For any two local algorithms $\mathcal{A}_1$ and $\mathcal{A}_2$, the running time of $\mathcal{A}_1$; $\mathcal{A}_2$ is bounded by the sum of the running times of $\mathcal{A}_1$ and $\mathcal{A}_2$.
|
| 14 |
+
|
| 15 |
+
Another useful remark is that a simultaneous wake-up
|
| 16 |
+
algorithm running in time $t$ can be emulated in a
|
| 17 |
+
non-simultaneous wake-up environment with running
|
| 18 |
+
time at most $t$ using the simple $\alpha$ synchronizer. Indeed,
|
| 19 |
+
consider a node $u$ and let $t_0$ be the last wake-up time
|
| 20 |
+
of a node in the ball $B_G(u,t)$. At time $t_0$, all nodes in
|
| 21 |
+
$B_G(u,t)$ perform (or have performed) round 0. Using
|
| 22 |
+
the $\alpha$ synchronizer a node can perform round $i$ when
|
| 23 |
+
all its neighbors have performed round $i-1$. We can
|
| 24 |
+
thus show by induction on $i$ that all nodes in $B_G(u,t-i)$
|
| 25 |
+
perform (or have performed) round $i$ at time $t_0+i$. The
|
| 26 |
+
node $u$ thus terminates in time $t$. This implies that the
|
| 27 |
+
running time of the emulation of the algorithm with the
|
| 28 |
+
$\alpha$ synchronizer is at most $t$. Therefore, in the remainder
|
| 29 |
+
of the paper we may assume without loss of generality
|
| 30 |
+
that all nodes wake up simultaneously at time 0.
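The induction above can be simulated with a minimal sketch of the $\alpha$ synchronizer timing recurrence (assuming unit message delays; the three-node path is an illustrative example):

```python
def alpha_synchronizer_finish_times(adj, wake, rounds):
    # a node may perform round i once it is awake and all its
    # neighbors have performed round i-1 (the alpha synchronizer rule);
    # waking up counts as performing "round 0"
    done = dict(wake)
    for _ in range(rounds):
        done = {u: max([done[u]] + [done[v] for v in adj[u]]) + 1
                for u in adj}
    return done

# path on three nodes, node 1 wakes at time 3: every node finishes
# round t = 2 by time t0 + t = 3 + 2, matching the bound argued above
print(alpha_synchronizer_finish_times({0: [1], 1: [0, 2], 2: [1]},
                                      {0: 0, 1: 3, 2: 0}, 2))
# {0: 5, 1: 5, 2: 5}
```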
|
| 31 |
+
|
| 32 |
+
Local algorithms requiring parameters: Fix a problem $\Pi$ and let $\mathcal{F}$ be a collection of instances for $\Pi$. Let $\Gamma$ be a collection of parameters $\mathbf{p}_1, \dots, \mathbf{p}_r$ and let $\mathcal{A}$ be a local algorithm. We say that $\mathcal{A}$ requires $\Gamma$ if the code of $\mathcal{A}$, which is executed by each node of the input configuration, uses a value $\tilde{\mathbf{p}}$ for each parameter $\mathbf{p} \in \Gamma$. (Note that this value is thus the same for all nodes.) The value $\tilde{\mathbf{p}}$ is a *guess* for $\mathbf{p}$. A collection of guesses for the parameters in $\Gamma$ is denoted by $\tilde{\Gamma}$ and an algorithm $\mathcal{A}$ that requires $\Gamma$ is denoted by $\mathcal{A}^\Gamma$. An algorithm that does not require any parameter is called *uniform*.
|
| 33 |
+
|
| 34 |
+
Consider an instance $(G, \mathbf{x}) \in \mathcal{F}$, a collection $\Gamma$ of parameters and a parameter $\mathbf{p} \in \Gamma$. A guess $\tilde{\mathbf{p}}$ for $\mathbf{p}$ is termed *good* if $\tilde{\mathbf{p}} \ge \mathbf{p}(G, \mathbf{x})$, and the guess $\tilde{\mathbf{p}}$ is called *correct* if $\tilde{\mathbf{p}} = \mathbf{p}(G, \mathbf{x})$. We typically write correct guesses and collections of correct guesses with a star superscript, as in $\mathbf{p}^*$ and $\Gamma^*(G, \mathbf{x})$, respectively. When $(G, \mathbf{x})$ is clear from the context, we may use the notation $\Gamma^*$ instead of $\Gamma^*(G, \mathbf{x})$.
|
| 35 |
+
|
| 36 |
+
An algorithm $\mathcal{A}^\Gamma$ depends on $\Gamma$ if for every instance $(G, \mathbf{x}) \in \mathcal{F}$, the correctness of $\mathcal{A}^\Gamma$ over $(G, \mathbf{x})$ is guaranteed only when $\mathcal{A}^\Gamma$ uses a collection $\tilde{\Gamma}$ of good guesses.
|
| 37 |
+
|
| 38 |
+
Consider an algorithm $\mathcal{A}^\Gamma$ that depends on a collection $\Gamma$ of parameters $\mathbf{p}_1, \dots, \mathbf{p}_r$ and fix an instance $(G, \mathbf{x})$. Observe that the running time of $\mathcal{A}^\Gamma$ over $(G, \mathbf{x})$ may be different for different collections of guesses $\tilde{\Gamma}$, in other words, the running time over $(G, \mathbf{x})$ may be a function of $\tilde{\Gamma}$. Recall that when we consider an algorithm that does not require parameters, we still typically evaluate its running time with respect to a collection of parameters $\Lambda$. We generalize this to the case where the algorithm depends on $\Gamma$ as follows.
|
| 39 |
+
|
| 40 |
+
Consider two collections $\Gamma$ and $\Lambda$ of parameters
|
| 41 |
+
$\mathbf{p}_1, \dots, \mathbf{p}_r$ and $\mathbf{q}_1, \dots, \mathbf{q}_\ell$, respectively. Some parameters
|
| 42 |
+
may belong to both $\Gamma$ and $\Lambda$. Without loss of generality,
|
| 43 |
+
we shall always assume that $\{\mathbf{p}_{r'+1}, \dots, \mathbf{p}_r\} \cap$
|
| 44 |
+
$\{\mathbf{q}_{r'+1}, \dots, \mathbf{q}_\ell\} = \emptyset$ for some $r' \in [0, \min\{r, \ell\}]$ and
|
| 45 |
+
$\mathbf{p}_i = \mathbf{q}_i$ for every $i \in [1, r']$. Notice that $\Gamma \setminus \Lambda =$
|
| 46 |
+
$\{\mathbf{p}_{r'+1}, \mathbf{p}_{r'+2}, \dots, \mathbf{p}_r\}$. A function $f: (\mathbb{R}^+)^{\ell} \to \mathbb{R}^+$ upper
|
| 47 |
+
bounds the running time of $\mathcal{A}^\Gamma$ with respect to $\Gamma$
|
| 48 |
+
and $\Lambda$ if the running time $T_{\mathcal{A}^\Gamma}(G, \mathbf{x})$ of $\mathcal{A}^\Gamma$ for $(G, \mathbf{x}) \in$
|
| 49 |
+
$\mathcal{F}$ using a collection of good guesses $\tilde{\Gamma} = \{\tilde{\mathbf{p}}_1, \dots, \tilde{\mathbf{p}}_r\}$
|
| 50 |
+
is at most $f(\tilde{\mathbf{p}}_1, \dots, \tilde{\mathbf{p}}_{r'}, \mathbf{q}_{r'+1}^*, \dots, \mathbf{q}_\ell^*)$, where $\mathbf{q}_i^* = \mathbf{q}_i(G, \mathbf{x})$
|
| 51 |
+
for $i \in [r' + 1, \ell]$. Note that we do not put any restriction
|
| 52 |
+
on the running time of $\mathcal{A}^\Gamma$ over $(G, \mathbf{x})$ if some of
|
| 53 |
+
the guesses in $\tilde{\Gamma}$ are not good. In fact, in such a case,
|
| 54 |
+
the algorithm may not even terminate and it may also
|
| 55 |
+
produce wrong results.
|
| 56 |
+
|
| 57 |
+
For simplicity of notation, when $\Gamma$ and $\Lambda$ are clear
|
| 58 |
+
from the context, we say that $f$ upper bounds the running
|
| 59 |
+
time of $\mathcal{A}^\Gamma$, without writing that it is with respect
|
| 60 |
+
to $\Gamma$ and $\Lambda$.
|
| 61 |
+
|
| 62 |
+
The set $\Gamma$ is *weakly-dominated* by $\Lambda$ if for each
|
| 63 |
+
$j \in [r'+1, r]$, there exists an index $i_j \in [1, \ell]$ and an
|
| 64 |
+
ascending function $g_j$ such that $g_j(\mathbf{p}_j(G, \mathbf{x})) \le \mathbf{q}_{i_j}(G, \mathbf{x})$
|
| 65 |
+
for every instance $(G, \mathbf{x}) \in \mathcal{F}$. (For example, $\Gamma = \{\Delta\}$ is
|
| 66 |
+
weakly-dominated by $\Lambda = \{n\}$, since $\Delta(G, \mathbf{x}) \le n(G, \mathbf{x})$
|
| 67 |
+
for any $(G, \mathbf{x})$.)
|
| 68 |
+
|
| 69 |
+
3 Pruning Algorithms
|
| 70 |
+
|
| 71 |
+
3.1 Overview
|
| 72 |
+
|
| 73 |
+
Consider a problem $\Pi$ in the centralized setting and an efficient randomized Monte-Carlo algorithm $\mathcal{A}$ for $\Pi$. A known method for transforming $\mathcal{A}$ into a Las Vegas algorithm is based on repeatedly doing the following. Execute $\mathcal{A}$ and, subsequently, execute an algorithm that checks the validity of the output. If the checking fails then continue, and otherwise, terminate, i.e., break the loop. This transformation can yield a Las Vegas algorithm whose expected running time is similar to the
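The repeat-and-check transformation described in this paragraph can be sketched in a few lines. The bit-guessing "algorithm" below is an assumed toy instance, chosen only so each attempt succeeds with probability 1/2:

```python
import random

def to_las_vegas(monte_carlo, check, rng):
    # repeat-and-check: rerun the Monte-Carlo algorithm until the
    # checker accepts its output; correctness then holds with
    # probability 1, at the cost of a random number of attempts
    attempts = 0
    while True:
        attempts += 1
        out = monte_carlo(rng)
        if check(out):
            return out, attempts

# toy stand-in (assumption): guess a bit, the checker accepts only 1
rng = random.Random(0)
out, attempts = to_las_vegas(lambda r: r.getrandbits(1),
                             lambda b: b == 1, rng)
print(out)    # 1: the returned output always passes the check
```

If each attempt succeeds with probability $\rho$ and checking is cheap, the expected number of iterations is $1/\rho$, which is the source of the "similar expected running time" claim.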
|
samples/texts/1749790/page_1.md
ADDED
|
@@ -0,0 +1,25 @@
|
|
|
| 1 |
+
# Analytic Continuation of Group Representations. III*
|
| 2 |
+
|
| 3 |
+
ROBERT HERMANN
|
| 4 |
+
|
| 5 |
+
Applied Mathematics Division, Argonne National Laboratory
|
| 6 |
+
|
| 7 |
+
Received April 22, 1966
|
| 8 |
+
|
| 9 |
+
**Abstract.** The connection between the ideas of "contraction" and "analytic continuation" of Lie algebras and their representations is discussed, with particular emphasis on the contraction of the Poincaré to the Galilean group.
|
| 10 |
+
|
| 11 |
+
## 1. Introduction
|
| 12 |
+
|
| 13 |
+
We continue the study of the relation between analytic continuation of Lie algebras, their representations and Lie algebra cohomology. The first topic we will treat will be a further development of the formalism when a Lie algebra structure is fixed, and an irreducible representation of it is analytically continued. In [4] we showed that, if the relevant first cohomology group vanishes, then the Casimir operators of the Lie algebra are constants of the deformation parameter. Here, we will study the formalism for the case where the first cohomology group does not vanish. We will obtain some insight into one of the main problems, namely, discovering when the first cohomology group is finite dimensional.
|
| 14 |
+
|
| 15 |
+
Our next topic will be to continue both the Lie algebra structure and the representation. This will provide a tie-up between Lie algebra cohomology theory and the Gell-Mann formula for the representations of Lie algebras. Again, we will find that the beautiful ideas of the Kodaira-Spencer theory of deformation of structure provide us with a deep insight into the already known situation, and should be an invaluable guide to extending the existing theory to new situations. The case of the contraction of the Poincaré to the Galilean group will be treated in some detail.
|
| 16 |
+
|
| 17 |
+
I would like to thank R. KALMAN for his hospitality at Stanford University while this paper was written.
|
| 18 |
+
|
| 19 |
+
## 2. The effect of continuation of representations on the universal enveloping algebra
|
| 20 |
+
|
| 21 |
+
Let $G$ be a Lie algebra, with $X, Y, \dots$ denoting its typical elements, and $[X, Y]$ its bracket. Recall that $U(G)$, the universal associative enveloping algebra of $G$, is defined in the following way [3, 5]:
|
| 22 |
+
|
| 23 |
+
* This research was supported in part by the Office of Air Force Scientific Research AF 49 (638)-1440.
|
| 24 |
+
|
| 25 |
+
6 Commun. math. Phys., Vol. 3
|
samples/texts/1749790/page_10.md
ADDED
|
@@ -0,0 +1,45 @@
|
|
|
| 1 |
+
Form operators
|
| 2 |
+
|
| 3 |
+
$$X_{\lambda} := 1/2 i [Z^2, X'] + \lambda X'$$
|
| 4 |
+
|
| 5 |
+
$$Y_{\lambda} = 1/2 i [Z^2, Y'] + \lambda Y'$$
|
| 6 |
+
|
| 7 |
+
$$Z_{\lambda} = Z.$$
|
| 8 |
+
|
| 9 |
+
Then, as was shown in [4],
|
| 10 |
+
|
| 11 |
+
$$[X_{\lambda}, Y_{\lambda}] = -Z$$
|
| 12 |
+
|
| 13 |
+
$$[Z, X_{\lambda}] = Y_{\lambda}$$
|
| 14 |
+
|
| 15 |
+
$$[Z, Y_{\lambda}] = -X_{\lambda}$$
|
| 16 |
+
|
| 17 |
+
these operators $(X_\lambda, Y_\lambda, Z)$ form a representation of the Lie algebra of $SL(2, R)$.
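Two of these relations, $[Z, X_\lambda] = Y_\lambda$ and $[Z, Y_\lambda] = -X_\lambda$, hold for any realization satisfying $[X', Y'] = 0$, $[Z, X'] = Y'$, $[Z, Y'] = -X'$, and can be checked symbolically. The concrete realization below ($Z$ as $d/d\theta$, $X'$ and $Y'$ as multiplication by $\cos\theta$ and $-\sin\theta$) is an illustrative assumption, not taken from the paper, and the check is independent of the convention chosen for the coefficient $1/2i$:

```python
import sympy as sp

th, lam = sp.symbols('theta lam')
f = sp.Function('f')(th)

# assumed realization of the rigid-motions algebra (illustrative):
# Z = d/dtheta, X' = mult. by cos, Y' = mult. by -sin, so that
# [X', Y'] = 0, [Z, X'] = Y', [Z, Y'] = -X'
Z  = lambda g: sp.diff(g, th)
Xp = lambda g: sp.cos(th) * g
Yp = lambda g: -sp.sin(th) * g

def comm(A, B):
    return lambda g: A(B(g)) - B(A(g))

c = sp.Rational(1, 2) / sp.I     # the coefficient "1/2 i" of the text
Z2 = lambda g: Z(Z(g))
Xlam = lambda g: c * comm(Z2, Xp)(g) + lam * Xp(g)
Ylam = lambda g: c * comm(Z2, Yp)(g) + lam * Yp(g)

# verify [Z, X_lam] = Y_lam and [Z, Y_lam] = -X_lam on a generic f
print(sp.simplify(comm(Z, Xlam)(f) - Ylam(f)))   # 0
print(sp.simplify(comm(Z, Ylam)(f) + Xlam(f)))   # 0
```

Both differences simplify to zero because $[Z, [Z^2, X']] = [Z^2, [Z, X']]$ by the Jacobi identity, so the bracket with $Z$ simply exchanges $X_\lambda$ and $Y_\lambda$ (with a sign) whatever the coefficient.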
|
| 18 |
+
|
| 19 |
+
We now want to investigate more precisely what happens as $\lambda \to \infty$. Let us set $\varepsilon = 1/\lambda$. Define
|
| 20 |
+
|
| 21 |
+
$$\varphi_\varepsilon(X') = 1/2\, \varepsilon i [Z^2, X'] + X' = \varepsilon X_\lambda$$
|
| 22 |
+
|
| 23 |
+
$$\varphi_\varepsilon(Y') = \varepsilon Y_\lambda \quad (5.2)$$
|
| 24 |
+
|
| 25 |
+
$$\varphi_\varepsilon(Z') = Z.$$
|
| 26 |
+
|
| 27 |
+
Then,
|
| 28 |
+
|
| 29 |
+
$$[\varphi_\varepsilon(X'), \varphi_\varepsilon(Y')] = -\varepsilon^2 Z$$
|
| 30 |
+
|
| 31 |
+
$$[Z, \varphi_\varepsilon(X')] = \varepsilon Y_\lambda = \varphi_\varepsilon(Y')$$
|
| 32 |
+
|
| 33 |
+
$$[Z, \varphi_\varepsilon(Y')] = -\varphi_\varepsilon(X').$$
|
| 34 |
+
|
| 35 |
+
These formulas can be interpreted as follows:
|
| 36 |
+
|
| 37 |
+
Let $\mathfrak{G}$ be the *vector space* spanned by the elements $X', Y', Z'$. For each $\epsilon$, define a Lie algebra structure $[\cdot, \cdot]_\epsilon$ on $\mathfrak{G}$ by the following formulas:
|
| 38 |
+
|
| 39 |
+
$$[X', Y']_\epsilon = -\epsilon^2 Z' \quad (5.3)$$
|
| 40 |
+
|
| 41 |
+
$$[Z', X']_\epsilon = Y', \quad [Z', Y']_\epsilon = -X'.$$
|
| 42 |
+
|
| 43 |
+
Define $\varphi_\epsilon$ as above. Then, for each $\epsilon$, the above formulas define $\varphi_\epsilon$ as a linear representation of the $[\cdot, \cdot]_\epsilon$ Lie algebra. There is no longer any singularity at $\epsilon = 0$ or $\lambda = \infty$. Thus, passing from the “Inonu-Wigner” picture with which we began (where the Lie algebra structure remains fixed, and the representation is continued and the basis of the algebra is changed simultaneously) to the “Kodaira-Spencer” picture (where the Lie algebra and representation are continued simultaneously) is an enormous aid to a proper mathematical understanding of the situation.
|
| 44 |
+
|
| 45 |
+
Thus, we can look at the Gell-Mann formula (5.1) in the following way: start off with the Lie algebra defined by (5.1), which is the Lie algebra of the group of rigid motions of the plane. Define an analytic continuation of the Lie algebra structure by the formulas (5.3). This continuation is nonrigid in the Kodaira-Spencer sense, since for $\epsilon > 0$
|
samples/texts/1749790/page_11.md
ADDED
|
@@ -0,0 +1,41 @@
|
|
|
| 1 |
+
the algebra is not isomorphic to the one with which we started at $\varepsilon = 0$. The Gell-Mann formula itself, i.e., (5.2), now provides an analytic continuation of the representation of the $[\cdot, \cdot]_0$ structure that is given, each representation for $\varepsilon$ being a representation of the $[\cdot, \cdot]_\varepsilon$ structure.
|
| 2 |
+
|
| 3 |
+
Let us now look for the interpretation of this in terms of cohomology. Let us change notations to conform with our earlier work. Suppose $G$ and $L$ are Lie algebras, with the bracket in $G$ given by $[X, Y]$, and suppose $\varphi$ is a homomorphism $G \to L$. Again, let $\varphi'$ be the homomorphism of $G$ into the linear transformations on $L$ given by:
|
| 4 |
+
|
| 5 |
+
$$ \varphi'(X)(Z) = [\varphi(X), Z] \quad \text{for} \ X \in G, Z \in L. $$
|
| 6 |
+
|
| 7 |
+
Suppose a one-parameter family
|
| 8 |
+
|
| 9 |
+
$$ [X, Y]_{\lambda} $$
|
| 10 |
+
|
| 11 |
+
of Lie algebra structures is given on $G$, reducing to the given one for $\lambda = 0$. Let $\gamma: G \to (\text{linear maps on } G)$ be the adjoint representation of the $\lambda = 0$ Lie algebra on $G$, i.e.,
|
| 12 |
+
|
| 13 |
+
$$ \gamma(X)(Y) = [X, Y] \quad \text{for } X, Y \in G. $$
|
| 14 |
+
|
| 15 |
+
Then, we know that the formula:
|
| 16 |
+
|
| 17 |
+
$$ \omega(X, Y) = \frac{d}{d\lambda} [X, Y]_{\lambda}|_{\lambda=0} $$
|
| 18 |
+
|
| 19 |
+
defines $\omega$ as a two-cocycle relative to $\gamma$, i.e., an element in $Z^2(\gamma)$, whose cohomology class in $H^2(\gamma)$ measures the "nonisomorphism" of the structure at $\lambda = 0$ and that for small, but nonzero $\lambda$.
|
| 20 |
+
|
| 21 |
+
Suppose further that, for each $\lambda$, $\varphi_\lambda$ is a linear mapping of $G \to L$ reducing to $\varphi$ for $\lambda = 0$, such that:
|
| 22 |
+
|
| 23 |
+
$$ \varphi_{\lambda}([X, Y]_{\lambda}) = [\varphi_{\lambda}(X), \varphi_{\lambda}(Y)] \quad \text{for } X, Y \in G. \quad (5.4) $$
|
| 24 |
+
|
| 25 |
+
Define $\theta: G \to L$ by the formula:
|
| 26 |
+
|
| 27 |
+
$$ \theta(X) = \frac{d}{d\lambda} \varphi_{\lambda}(X)|_{\lambda=0} $$
|
| 28 |
+
|
| 29 |
+
$\theta$ is a one-cochain in $C^1(\varphi')$. However, it is not a cocycle. In fact, let us differentiate (5.4) and set $\lambda = 0$:
|
| 30 |
+
|
| 31 |
+
$$ \theta([X, Y]) + \varphi(\omega(X, Y)) = [\theta(X), \varphi(Y)] + [\varphi(X), \theta(Y)]. $$
|
| 32 |
+
|
| 33 |
+
This gives the formula:
|
| 34 |
+
|
| 35 |
+
$$ \varphi(\omega) = d\theta \qquad (5.5) $$
|
| 36 |
+
|
| 37 |
+
where $\varphi(\omega)$ is the two-chain in $C^2(\varphi')$ given by:
|
| 38 |
+
|
| 39 |
+
$$ \varphi(\omega)(X, Y) = \varphi(\omega(X, Y)). $$
|
| 40 |
+
|
| 41 |
+
Thus, $\omega$ considered as a cocycle in $C^2(\gamma)$ is not necessarily a coboundary, but its image under $\varphi$, $\varphi(\omega)$, is a coboundary, and the element $\theta$ in $C^1(\varphi')$ is the first term in the analytic continuation of $\varphi$.
|
samples/texts/1749790/page_12.md
ADDED
|
@@ -0,0 +1,39 @@
|
|
|
|
| 1 |
+
Now, this does not quite reflect the situation in the case developed above; $\omega$ defined as the first derivative is zero, since the parameter $\lambda$ occurs to different order in the continuation of the representation and the Lie algebra structure. Suppose then that
|
| 2 |
+
|
| 3 |
+
$$ \frac{d}{d\lambda} [X, Y]_{\lambda}\Big|_{\lambda=0} = 0 \quad \text{for } X, Y \in G. $$
|
| 4 |
+
|
| 5 |
+
Define now
|
| 6 |
+
|
| 7 |
+
$$ \omega_2(X, Y) = \frac{d^2}{d\lambda^2} [X, Y]_{\lambda}\Big|_{\lambda=0} \quad \text{for } X, Y \in G. $$
|
| 8 |
+
|
| 9 |
+
Since the first derivatives are zero, it is readily seen that $\omega_2$ so defined also satisfies the cocycle condition. Then,
|
| 10 |
+
|
| 11 |
+
$$ d\theta = 0, $$
|
| 12 |
+
|
| 13 |
+
i.e., $\theta$ itself is a cocycle; we also write $\theta_1$ for it below. Let
|
| 14 |
+
|
| 15 |
+
$$ \theta_2(X) = \frac{d^2}{d\lambda^2} \varphi_\lambda(X)|_{\lambda=0}. $$
|
| 16 |
+
|
| 17 |
+
Differentiating (5.4) twice gives now:
|
| 18 |
+
|
| 19 |
+
$$ \begin{aligned} & \theta_2([X, Y]) + \varphi\omega_2(X, Y) \\ &= [\theta_2(X), \varphi(Y)] + [\varphi(X), \theta_2(Y)] + 2[\theta_1(X), \theta_1(Y)]. \end{aligned} $$
|
| 20 |
+
|
| 21 |
+
This can be rewritten as
|
| 22 |
+
|
| 23 |
+
$$ -d\theta_2(X, Y) + \varphi\omega_2(X, Y) = 2[\theta_1(X), \theta_1(Y)]. $$
|
| 24 |
+
|
| 25 |
+
Now, the right-hand side obviously is a two-cocycle in $C^2(\varphi')$ since the left-hand side is such a cocycle. Let us denote this cocycle by
|
| 26 |
+
|
| 27 |
+
$$ [\theta_1, \theta_1]. $$
|
| 28 |
+
|
| 29 |
+
(This operation is discussed in the review article by NIJENHUIS and RICHARDSON [6]. It turns out to depend only on the cohomology class determined by $\theta_1$ in $H^1(\varphi')$). Then, we can write the relation as:
|
| 30 |
+
|
| 31 |
+
$$ \varphi\omega_2 = d\theta_2 + 2[\theta_1, \theta_1] $$
|
| 32 |
+
|
| 33 |
+
i.e., the cohomology class determined by $\varphi\omega_2$ in $H^2(\varphi')$ can be written as a "square" of an element of $H^1(\varphi')$.
|
| 34 |
+
|
| 35 |
+
In summary, we have shown that there are interesting relations between the deformation theory and the analytic continuation problems that are of importance for the application of group-theoretical ideas to elementary particle physics. Before proceeding further with the general theory (in a later paper) it is appropriate to work out a further example that is of the greatest importance for physics.
|
| 36 |
+
|
| 37 |
+
## 6. Contraction of the Poincaré group into the Galilean group
|
| 38 |
+
|
| 39 |
+
Let $T$ be a vector space over the real numbers, considered as an Abelian Lie algebra. (One might think of $T$ as the Lie algebra of the group of space-time translations.) Denote elements of $T$ by such letters
|
samples/texts/1749790/page_13.md
ADDED
|
@@ -0,0 +1,47 @@
|
|
|
| 1 |
+
as $X, Y$, etc. Suppose $(X, Y) \to Q(X, Y)$ is a nondegenerate, symmetric bilinear form on $T$. Let $K(Q)$ be the Lie algebra (under commutator) of all linear transformations $A : T \to T$ that satisfy:
|
| 2 |
+
|
| 3 |
+
$$Q(A X, Y) + Q(X, A Y) = 0.$$
|
| 4 |
+
|
| 5 |
+
Thus, each such $A$ is the infinitesimal generator of a one-parameter group of linear transformations on $T$ that preserve the form $Q(\cdot, \cdot)$. Form the Lie algebra $\mathfrak{G}(Q)$ as the semidirect sum of $K(Q)$ and $T$, i.e., as a vector space $\mathfrak{G}(Q)$ is the direct sum of $K(Q)$ and $T$ with the bracket defined as follows:
|
| 6 |
+
|
| 7 |
+
$$[X, Y] = 0 \quad \text{for } X, Y \in T$$
|
| 8 |
+
|
| 9 |
+
$$[A_1, A_2] = A_1 A_2 - A_2 A_1 \quad \text{for } A_1, A_2 \in K(Q)$$
|
| 10 |
+
|
| 11 |
+
$$[A, X] = A(X) \quad \text{for } A \in K(Q), X \in T.$$
|
| 12 |
+
|
| 13 |
+
Now suppose $Q_\lambda$ is a one-parameter family of such bilinear forms, reducing to the given one at $\lambda = 0$. We can, of course, form $\mathfrak{G}(Q_\lambda)$ for every value of $\lambda$. In what sense can this be considered an analytic continuation of $\mathfrak{G}(Q)$, and how can we investigate the limit as $\lambda \to \infty$?
|
| 14 |
+
|
| 15 |
+
Since $Q_\lambda$ is nondegenerate, for each $\lambda$ there is a linear transformation $B_\lambda : T \to T$ with nonzero determinant such that
|
| 16 |
+
|
| 17 |
+
$$Q_\lambda(X, Y) = Q(B_\lambda X, Y) \quad \text{for } X, Y \in T.$$
|
| 18 |
+
|
| 19 |
+
Thus,
|
| 20 |
+
|
| 21 |
+
$$Q_\lambda(X, Y) = Q_\lambda(Y, X) \quad \text{forces} \quad Q(B_\lambda X, Y) = Q(B_\lambda Y, X) = Q(X, B_\lambda Y),$$
|
| 22 |
+
|
| 23 |
+
i.e., $B_\lambda^* = B_\lambda$, where $B_\lambda^*$ denotes the adjoint of $B_\lambda$ with respect to the form $Q$.
|
| 24 |
+
|
| 25 |
+
Suppose $A \in K(Q_\lambda)$:
|
| 26 |
+
|
| 27 |
+
$$Q_\lambda(AX, Y) + Q_\lambda(X, AY) = 0,$$
|
| 28 |
+
|
| 29 |
+
or
|
| 30 |
+
|
| 31 |
+
$$0 = Q(B_\lambda AX, Y) + Q(B_\lambda X, AY) = Q(B_\lambda AX, Y) + Q(X, B_\lambda AY)$$
|
| 32 |
+
|
| 33 |
+
Hence,
|
| 34 |
+
|
| 35 |
+
$$B_\lambda A \in K(Q).$$
|
| 36 |
+
|
| 37 |
+
Thus, there is a map $A \to B_\lambda A = \alpha_\lambda(A)$ from $K(Q_\lambda)$ to $K(Q)$ that is a linear isomorphism, though not a Lie algebra isomorphism. Thus, we can define a one-parameter family $[\cdot, \cdot]_\lambda$ of Lie algebra structures on $\mathfrak{G}(Q)$ by carrying over the Lie algebra structure on $\mathfrak{G}(Q_\lambda)$ via this isomorphism:
|
| 38 |
+
|
| 39 |
+
$$[X, Y]_\lambda = 0 \quad \text{for } X, Y \in T$$
|
| 40 |
+
|
| 41 |
+
$$[A, Y]_\lambda = \alpha_\lambda^{-1}(A)\, Y = B_\lambda^{-1} A Y \quad \text{for } A \in K(Q),\ Y \in T$$
|
| 42 |
+
|
| 43 |
+
$$\begin{align}
|
| 44 |
+
[A_1, A_2]_\lambda &= \alpha_\lambda [\alpha_\lambda^{-1} A_1, \alpha_\lambda^{-1} A_2] \tag{6.1} \\
|
| 45 |
+
&= B_\lambda (B_\lambda^{-1} A_1 B_\lambda^{-1} A_2 - B_\lambda^{-1} A_2 B_\lambda^{-1} A_1) \nonumber \\
|
| 46 |
+
&= A_1 B_\lambda^{-1} A_2 - A_2 B_\lambda^{-1} A_1. \nonumber
|
| 47 |
+
\end{align}$$
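The map $A \mapsto B_\lambda A$ and the carried-over bracket (6.1) can be sanity-checked numerically. The concrete forms $Q$ and $Q_\lambda$ below are illustrative assumptions, not the paper's specific forms:

```python
import numpy as np

# illustrative data (assumed): a fixed nondegenerate symmetric Q and
# a deformed family Q_lam on R^4
Q = np.diag([1.0, -1.0, 1.0, 1.0])
Qlam = np.diag([3.0, -1.0, 1.0, 1.0])      # "lambda = 3"
Blam = np.linalg.solve(Q, Qlam)             # Q_lam(X, Y) = Q(Blam X, Y)
Binv = np.linalg.inv(Blam)

# Blam is self-adjoint with respect to Q:  Q Blam = Blam^T Q
print(np.allclose(Q @ Blam, Blam.T @ Q))    # True

def in_K(Qf, seed):
    # random element of K(Qf), i.e. A with A^T Qf + Qf A = 0
    r = np.random.default_rng(seed)
    N = r.normal(size=(4, 4))
    return np.linalg.solve(Qf, N - N.T)

# A in K(Q_lam) implies Blam A in K(Q)
BA = Blam @ in_K(Qlam, 0)
print(np.allclose(BA.T @ Q + Q @ BA, 0.0))  # True

# bracket (6.1): [A1, A2]_lam = A1 Binv A2 - A2 Binv A1 agrees with
# the commutator in K(Q_lam) carried over by alpha(A) = Blam A
A1, A2 = Blam @ in_K(Qlam, 1), Blam @ in_K(Qlam, 2)
lhs = A1 @ Binv @ A2 - A2 @ Binv @ A1
rhs = Blam @ ((Binv @ A1) @ (Binv @ A2) - (Binv @ A2) @ (Binv @ A1))
print(np.allclose(lhs, rhs))                # True
```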
|
samples/texts/1749790/page_14.md
ADDED
|
@@ -0,0 +1,39 @@
|
|
|
| 1 |
+
Now, we can pass to the limit as $\lambda \to \infty$: If
|
| 2 |
+
|
| 3 |
+
$$B = \lim_{\lambda \to \infty} B_{\lambda}^{-1},$$
|
| 4 |
+
|
| 5 |
+
the limiting algebra has the structure
|
| 6 |
+
|
| 7 |
+
$$ \begin{align} [X, Y]_{\infty} &= 0 \quad \text{for } X, Y \in T \nonumber \\ [A, Y]_{\infty} &= B A Y \quad \text{for } A \in K(Q), Y \in T \tag{6.2} \\ [A_1, A_2]_{\infty} &= A_1 B A_2 - A_2 B A_1 \quad \text{for } A_1, A_2 \in K(Q). \nonumber \end{align} $$
|
| 8 |
+
|
| 9 |
+
Further, if $B_{\lambda}^{-1}$ is an analytic function of $\lambda$ in a neighborhood of infinity, then the formulas (6.1) show that the algebra for large $\lambda$ is a perfectly smooth deformation, in the Kodaira-Spencer sense, of the limiting algebra, which we denote by $G_{\infty}$.
|
| 10 |
+
|
| 11 |
+
The structure of $G_{\infty}$ can be exhibited quite nicely if $B$ is a projection operator, $B^2 = B$, as it is for the case where $\mathfrak{G}(Q)$ is the Poincaré group and $G_{\infty}$ is the Galilean group. (There, $B_{\lambda}$ is the diagonal matrix
|
| 12 |
+
|
| 13 |
+
$$ \begin{pmatrix} \lambda & & \\ & 1 & \\ & & 1 \end{pmatrix}. $$
|
| 14 |
+
|
| 15 |
+
$\lambda = c^2; c = \text{velocity of light}$ and $B$ is the matrix
|
| 16 |
+
|
| 17 |
+
Then
|
| 18 |
+
|
| 19 |
+
$$ T = BT \oplus (I - B)T, \qquad Q(BT, (I - B)T) = Q(T, B(I - B)T) = 0 $$ (since $B^2 = B$ and $B = B^*$).
|
| 20 |
+
|
| 21 |
+
Let
|
| 22 |
+
|
| 23 |
+
$$ s = I - 2B. $$
|
| 24 |
+
|
| 25 |
+
Then, $s^2 = I + 4B^2 - 4B = I$.
|
| 26 |
+
|
| 27 |
+
$$ Q(sX, sY) = Q(X, s^2Y) = Q(X, Y). $$
|
| 28 |
+
|
| 29 |
+
Thus, $s$ is an automorphism of $T$ whose square is the identity and which preserves the form $Q$; $s$ defines a symmetric automorphism of $K(Q)$ by the formula:
|
| 30 |
+
|
| 31 |
+
$$ s(A) = sAs \quad \text{for} \quad A \in K(Q). $$
|
| 32 |
+
|
| 33 |
+
Let $L$ be the set of all $A \in K(Q)$ such that
|
| 34 |
+
|
| 35 |
+
$$ s(A) = A. $$
|
| 36 |
+
|
| 37 |
+
Let $P$ be the set of all $A \in K(Q)$ such that
|
| 38 |
+
|
| 39 |
+
$$ s(A) = -A. $$
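Numerically, one can check that $s = I - 2B$ is an involution and that conjugation by $s$ splits an arbitrary matrix into its $+1$ and $-1$ parts, which is the eigenspace decomposition underlying $L$ and $P$. The projection $B$ below is an assumed toy example, not the paper's specific matrix:

```python
import numpy as np

# assumed toy projection B (illustrative): B @ B == B
B = np.diag([1.0, 0.0, 0.0, 0.0])
s = np.eye(4) - 2 * B
print(np.allclose(s @ s, np.eye(4)))        # True: s^2 = I

def sigma(A):
    # the induced involution s(A) = s A s on matrices
    return s @ A @ s

rng = np.random.default_rng(3)
A = rng.normal(size=(4, 4))
L = (A + sigma(A)) / 2    # fixed part:      sigma(L) = L
P = (A - sigma(A)) / 2    # anti-fixed part: sigma(P) = -P
print(np.allclose(sigma(L), L), np.allclose(sigma(P), -P))   # True True
print(np.allclose(L + P, A))                                 # True
```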
|