Title: Non-Parametric Estimation of Spatial Segregation in a Multivariate Point Process
Description: Edge-corrected kernel density estimation and binary kernel regression estimation for multivariate spatial point process data. For details, see Diggle, P.J., Zheng, P. and Durr, P. A. (2005) <doi:10.1111/j.1467-9876.2005.05373.x>.
Authors: Virgilio Gómez-Rubio [aut, cre], Pingping Zheng [aut], Peter Diggle [aut, cph], Richard Cotton [ctb], Alan Murta [ctb, cph], Jonathan Richard Shewchuk [ctb, cph], Barry Rowlingson [aut, ctb, cph], Wm. Randolph Franklin [ctb, cph], David C. Sterratt [aut, cph], Elias Pipping [ctb], Roger D. Peng [aut], Duncan Murdoch [aut]
Maintainer: Virgilio Gómez-Rubio <[email protected]>
License: CC BY-NC-SA 4.0
Version: 0.4-25
Built: 2024-11-06 03:56:12 UTC
Source: https://github.com/barryrowlingson/spatialkernel
An R package for spatial point process analysis.
This package contains functions for spatial point process analysis using kernel smoothing methods. It has been written to be compatible with the splancs package, which is available on CRAN (the Comprehensive R Archive Network).
For a complete list of functions with individual help pages, use library(help = "spatialkernel").
Pingping Zheng [email protected]
For the convenience of the user, we present here examples which show how to use some of the functions in the package.
Pingping Zheng and Peter Diggle
P. Zheng, P.A. Durr and P.J. Diggle (2004) Edge-correction for Spatial Kernel Smoothing — When Is It Necessary? Proceedings of the GisVet Conference 2004, University of Guelph, Ontario, Canada, June 2004.
Diggle, P.J., Zheng, P. and Durr, P. A. (2005) Nonparametric estimation of spatial segregation in a multivariate point process: bovine tuberculosis in Cornwall, UK. J. R. Stat. Soc. C, 54, 3, 645–658.
cvloglk, phat, mcseg.test, plotphat, plotmc, pinpoly, risk.colors, metre
## An example of spatial segregation analysis
## Not run:
## source in Lansing Woods tree data within a polygon boundary
data(lansing)
data(polyb)
## select data points within the polygon
ndx <- which(pinpoly(polyb, as.matrix(lansing[c("x", "y")])) > 0)
pts <- as.matrix(lansing[c("x", "y")])[ndx, ]
marks <- lansing[["marks"]][ndx]
## select bandwidth
h <- seq(0.02, 0.1, length = 101)
cv <- cvloglk(pts, marks, h = h)$cv
hcv <- h[which.max(cv)]
plot(h, cv, type = "l")
## estimate type-specific probabilities and do segregation tests
## with one integrated function
sp <- spseg(pts, marks, hcv, opt = 3, ntest = 99, poly = polyb)
## plot estimated type-specific probability surfaces
plotphat(sp)
## additionally add pointwise significance contour lines
plotmc(sp, quan = c(0.025, 0.975))
## p-value of the Monte Carlo segregation test
cat("\np-value of the Monte Carlo segregation test", sp$pvalue)
## estimate the intensity function at grid points for presentation,
## with bandwidth hcv
gridxy <- as.matrix(expand.grid(x = seq(0, 1, length = 101), y = seq(0, 1, length = 101)))
ndx <- which(pinpoly(polyb, gridxy) > 0)  ## inside point index
lam <- matrix(NA, ncol = 101, nrow = 101)
lam[ndx] <- lambdahat(pts, hcv, gpts = gridxy[ndx, ], poly = polyb)$lambda
brks <- pretty(range(lam, na.rm = TRUE), n = 12)
plot(0, 0, xlim = 0:1, ylim = 0:1, xlab = "x", ylab = "y", type = "n")
image(x = seq(0, 1, length = 101), y = seq(0, 1, length = 101), z = lam,
      add = TRUE, breaks = brks, col = risk.colors(length(brks) - 1))
polygon(polyb)
metre(0, 0.01, 0.05, 0.51, lab = brks, cols = risk.colors(length(brks) - 1), cex = 1)
## An example of the inhomogeneous intensity function and K function
## estimated with the same data
s <- seq(0, 0.06, length = 101)
lam <- lambdahat(pts, hcv, poly = polyb)$lambda
kin <- kinhat(pts, lam, polyb, s)
plot(kin$s, kin$k - pi * (kin$s)^2, xlab = "s", ylab = "k-pi*s^2", type = "l")
## End(Not run)
Calculate the area of a polygon and its boundary direction.
areapoly(poly)
poly: matrix containing the x,y-coordinates of the vertices of the polygon.
The returned value has components:
positive numeric, the area of the polygon;
integer, 1 if the polygon orientation is anticlockwise, -1 otherwise;
a copy of the passed argument poly.
This function is provided here so that users do not need to load other packages, as it is not available in the base R packages.
Joseph O'Rourke, Computational Geometry in C (2nd Edition), Cambridge University Press.
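A minimal sketch of how areapoly might be called; the unit square and its anticlockwise vertex order are illustrative assumptions of this example, not part of the original documentation:
## area and orientation of a unit square traversed anticlockwise
library(spatialkernel)
sq <- cbind(x = c(0, 1, 1, 0), y = c(0, 0, 1, 1))
areapoly(sq)  ## expect an area of 1 and an anticlockwise (1) orientation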
Select a common bandwidth for the kernel regression estimation of type-specific probabilities of a multivariate Poisson point process, with independent component processes of each categorical type, by maximising the cross-validated log-likelihood function.
cvloglk(pts, marks, t = NULL, h)
pts: matrix containing the x,y-coordinates of the data point locations.
marks: numeric/character vector of the marked labels of the type of each point.
t: numeric vector of the associated time-periods; default NULL.
h: numeric vector of the kernel smoothing bandwidths at which to calculate the cross-validated log-likelihood function.
A common bandwidth is selected for the kernel regression of type-specific probabilities for all time-periods when the argument t is not NULL, in which case the data come from a multivariate spatial-temporal point process and t gives the values of the associated time-periods.
A list with components:
vector of the values of the cross-validated log-likelihood function;
numeric value of the bandwidth which maximises the cross-validated log-likelihood function;
a copy of the arguments pts, marks, h.
Diggle, P.J., Zheng, P. and Durr, P. A. (2005) Nonparametric estimation of spatial segregation in a multivariate point process: bovine tuberculosis in Cornwall, UK. J. R. Stat. Soc. C, 54, 3, 645–658.
phat, mcseg.test, and mcpat.test
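A short bandwidth-selection sketch adapted from the package-level example above; the coarser bandwidth grid is an arbitrary choice to keep the run time down:
library(spatialkernel)
data(lansing); data(polyb)
ndx <- which(pinpoly(polyb, as.matrix(lansing[c("x", "y")])) > 0)
pts <- as.matrix(lansing[c("x", "y")])[ndx, ]
marks <- lansing[["marks"]][ndx]
h <- seq(0.02, 0.1, length = 26)
cv <- cvloglk(pts, marks, h = h)$cv
hcv <- h[which.max(cv)]  ## bandwidth maximising the cross-validated log-likelihood
plot(h, cv, type = "l"); abline(v = hcv, lty = 2)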
This is the Lansing Woods tree data set and an arbitrary polygon boundary within the unit square.
lansing is an R data.frame with columns x, y and marks; polyb is an R matrix with columns x and y.
The Lansing Woods tree data set and the arbitrary polygon are provided here to demonstrate the basic usage of this package. See the Examples section in spatialkernel-package.
Lansing Woods trees from Lansing Woods, Clinton County, Michigan USA, including hickories, maples, and oaks.
Gerrard, D.J. (1969) Competition quotient: a new measure of the competition affecting individual forest trees. Research Bulletin 20, Agricultural Experiment Station, Michigan State University.
Level (Contour) Plots with Polygonal Boundary
This is a revised version of the base R function filled.contour. It additionally plots a polygonal boundary.
filled.contour.poly(
  x = seq(min(poly[, 1]), max(poly[, 1]), len = nrow(z)),
  y = seq(min(poly[, 2]), max(poly[, 2]), len = ncol(z)),
  z,
  poly,
  xlim = range(x, finite = TRUE),
  ylim = range(y, finite = TRUE),
  zlim = range(z, finite = TRUE),
  levels = pretty(zlim, nlevels),
  nlevels = 10,
  color.palette = risk.colors,
  col = color.palette(length(levels) - 1),
  llevels = levels,
  labels = NULL,
  labcex = 0.6,
  drawlabel = TRUE,
  method = "flattest",
  vfont = c("sans serif", "plain"),
  lcol = par("fg"),
  lty = par("lty"),
  lwd = par("lwd"),
  plot.title,
  plot.axes,
  key.title,
  key.axes,
  asp = NA,
  xaxs = "i",
  yaxs = "i",
  las = 1,
  axes = TRUE,
  ...
)
x: locations of grid lines at which the values in z are measured; these must be in ascending order.
y: see x.
z: a matrix containing the values to be plotted (NAs are allowed).
poly: a matrix containing the x,y-coordinates of the vertices of the polygonal boundary.
xlim: x limits for the plot.
ylim: y limits for the plot.
zlim: z limits for the plot.
levels: a set of levels used to partition the range of z.
nlevels: if levels is not specified, the range of z is divided into approximately this many levels.
color.palette: a color palette function used to assign colors in the plot.
col: an explicit set of colors to be used in the plot. This argument overrides any palette function specification.
llevels: numeric vector of levels at which to draw contour lines; the default is the same as levels.
labels: a vector giving the labels for the contour lines. If NULL, the levels themselves are used as labels.
labcex: cex for contour labelling.
drawlabel: logical; contour lines are labelled if TRUE.
method: character string specifying where the labels will be located. Possible values are "simple", "edge" and "flattest" (the default); see contour for details.
vfont: if a character vector of length 2 is specified, then Hershey vector fonts are used for the contour labels. The first element of the vector selects a typeface and the second element selects a fontindex.
lcol: color for the lines drawn.
lty: line type for the lines drawn.
lwd: line width for the lines drawn.
plot.title: statement which adds a title to the main plot.
plot.axes: statement which draws axes on the main plot. This overrides the default axes.
key.title: statement which adds a title to the plot key.
key.axes: statement which draws axes on the plot key. This overrides the default axis.
asp: the y/x aspect ratio; see plot.window.
xaxs: the x axis interval calculation style; see par.
yaxs: the y axis interval calculation style; see par.
las: the style of labeling to be used. The default is to use horizontal labeling.
axes: logical; should axes be drawn?
...: additional graphical parameters.
By defining z values as NA at points outside the polygonal boundary, filled.contour.poly produces a contour plot within the polygonal boundary.
filled.contour, contour and pinpoly
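A hedged sketch of the NA-masking idea described above: intensity values are estimated only at grid points inside polyb and left as NA elsewhere. The 51 x 51 grid and the bandwidth 0.05 are arbitrary choices made for this illustration:
library(spatialkernel)
data(lansing); data(polyb)
pts <- as.matrix(lansing[c("x", "y")])
pts <- pts[pinpoly(polyb, pts) > 0, ]
gx <- seq(min(polyb[, 1]), max(polyb[, 1]), length = 51)
gy <- seq(min(polyb[, 2]), max(polyb[, 2]), length = 51)
grid <- as.matrix(expand.grid(x = gx, y = gy))
z <- matrix(NA, nrow = 51, ncol = 51)
inside <- which(pinpoly(polyb, grid) > 0)
z[inside] <- lambdahat(pts, h = 0.05, gpts = grid[inside, ], poly = polyb)$lambda
filled.contour.poly(gx, gy, z, poly = polyb)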
Inhomogeneous K-function Estimation
Estimate the inhomogeneous K function of a non-stationary point pattern.
kinhat(pts, lambda, poly, s)
pts: matrix of the x,y-coordinates of the data point locations.
lambda: intensity function evaluated at the above data point locations.
poly: matrix of the x,y-coordinates of the polygonal boundary vertices.
s: vector of distances at which to calculate the K function.
The inhomogeneous K function is a generalization of the usual K function, defined for a second-order intensity-reweighted stationary point process, as proposed by Baddeley et al. (2000).
When the true intensity function is unknown and is to be estimated from the same data as have been used to estimate the K function, the modified kernel density estimation implemented in lambdahat with argument gpts = NULL can be used to calculate the estimated intensity at the data points. See Baddeley et al. (2000) for details, and Diggle et al. (2006) for a cautionary note.
A list with components:
values of the estimated K function at the distances s;
a copy of s.
This code is adapted from the splancs (Rowlingson and Diggle, 1993) Fortran code for the estimation of the homogeneous K function, khat, with the edge correction inherited for a general polygonal area.
Baddeley, A. J., Møller, J. and Waagepetersen, R. (2000) Non- and semi-parametric estimation of interaction in inhomogeneous point patterns, Statistica Neerlandica, 54, 3, 329–350.
Diggle, P.J., V. Gómez-Rubio, P.E. Brown, A.G. Chetwynd and S. Gooding (2006) Second-order analysis of inhomogeneous spatial point processes using case-control data, submitted to Biometrics.
Rowlingson, B. and Diggle, P. (1993) Splancs: spatial point pattern analysis code in S-Plus. Computers and Geosciences, 19, 627–655.
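A short sketch mirroring the package-level example: the intensity is first estimated at the data points by the leave-one-out estimator (gpts = NULL) and then plugged into kinhat. The bandwidth 0.05 is an arbitrary illustrative value where the cross-validated bandwidth would normally be used:
library(spatialkernel)
data(lansing); data(polyb)
pts <- as.matrix(lansing[c("x", "y")])
pts <- pts[pinpoly(polyb, pts) > 0, ]
lam <- lambdahat(pts, h = 0.05, poly = polyb)$lambda  ## leave-one-out at the data points
s <- seq(0, 0.06, length = 101)
kin <- kinhat(pts, lam, polyb, s)
plot(kin$s, kin$k - pi * kin$s^2, type = "l", xlab = "s", ylab = "K(s) - pi*s^2")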
Kernel density estimation of the intensity function of a two-dimensional point process.
lambdahat(pts, h, gpts = NULL, poly = NULL, edge = TRUE)
pts: matrix containing the x,y-coordinates of the data point locations.
h: numeric value of the bandwidth used in the kernel smoothing.
gpts: matrix containing the x,y-coordinates of the grid points at which to estimate the intensity; default NULL.
poly: matrix containing the x,y-coordinates of the polygonal boundary vertices; default NULL.
edge: logical, with default TRUE, indicating whether to apply the edge correction.
Kernel smoothing methods are widely used to estimate the intensity of a spatial point process. One problem which arises is the need to handle edge effects, and several methods of edge-correction have been proposed. The adjustment factor proposed in Berman and Diggle (1989) is the double integral $\int_A k_h(x - u)\,du$, where $A$ is the polygonal area, $k_h$ is the smoothing kernel and $h$ is the bandwidth used for the smoothing. Zheng, P. et al. (2004) proposed an algorithm for fast calculation of Berman and Diggle's adjustment factor.
When gpts is NULL, lambdahat uses a leave-one-out estimator for the intensity at each of the data points, as suggested in Baddeley et al. (2000). This leave-one-out estimate at the data points can then be used in the inhomogeneous K function estimation kinhat when the true intensity function is unknown.
The default kernel is the Gaussian. The kernel function is selected by calling setkernel.
A list with components:
numeric vector of the estimated intensity function;
a copy of the arguments pts, gpts, h, poly, edge.
In principle, the adaptive double integration algorithm of Zheng, P. et al. (2004) can be applied to other kernel functions. Furthermore, the area is at present enclosed by a simple polygon; this could be generalized to a complex area with polygonal holes inside, for instance a large lake lying within the land area under study.
Other source code used in the implementation of the double integration algorithm includes:
Laurie, D.P. (1982) adaptive cubature code in Fortran;
Shewchuk, J.R. triangulation code in C;
Alan Murta's polygon intersection code in C (Project: Generic Polygon Clipper).
M. Berman and P. Diggle (1989) Estimating weighted integrals of the second-order intensity of a spatial point process, J. R. Stat. Soc. B, 51, 81–92.
P. Zheng, P.A. Durr and P.J. Diggle (2004) Edge–correction for Spatial Kernel Smoothing — When Is It Necessary? Proceedings of the GisVet Conference 2004, University of Guelph, Ontario, Canada, June 2004.
Baddeley, A. J., Møller, J. and Waagepetersen, R. (2000) Non- and semi-parametric estimation of interaction in inhomogeneous point patterns, Statistica Neerlandica, 54, 3, 329–350.
Laurie, D.P. (1982). Algorithm 584 CUBTRI: Adaptive Cubature over a Triangle. ACM–Trans. Math. Software, 8, 210–218.
Jonathan R. Shewchuk, Triangle, a Two-Dimensional Quality Mesh Generator and Delaunay Triangulator at http://www-2.cs.cmu.edu/~quake/triangle.html.
Alan Murta, General Polygon Clipper at http://www.cs.man.ac.uk/~toby/gpc.
NAG's Numerical Library. Chapter 11: Quadrature, NAG's Fortran 90 Library. https://www.nag.co.uk/numeric/fn/manual/html/c11_fn04.html
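A hedged sketch comparing edge-corrected and uncorrected estimates at the data points; it assumes that edge = FALSE simply skips the correction, and the bandwidth 0.05 is chosen only for illustration:
library(spatialkernel)
data(lansing); data(polyb)
pts <- as.matrix(lansing[c("x", "y")])
pts <- pts[pinpoly(polyb, pts) > 0, ]
lam.ec  <- lambdahat(pts, h = 0.05, poly = polyb, edge = TRUE)$lambda
lam.raw <- lambdahat(pts, h = 0.05, poly = polyb, edge = FALSE)$lambda
summary(lam.ec / lam.raw)  ## ratios above 1 reflect the edge correction near the boundary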
Monte Carlo Inference of Temporal Changes in Spatial Segregation
An approximate Monte Carlo test of temporal changes in a multivariate spatial-temporal point process.
mcpat.test(pts, marks, t, h, ntest = 100, proc = TRUE)
pts: matrix containing the x,y-coordinates of the data point locations.
marks: numeric/character vector of the marked type labels of the data points.
t: numeric vector of the associated time-periods.
h: numeric vector of the bandwidths at which to calculate the cross-validated log-likelihood function pooled over times.
ntest: integer with default 100, the number of simulations for the Monte Carlo test.
proc: logical, default TRUE, to print the processing messages.
The spatial-temporal data are denoted as $(x_i, m_i, t_i)$, where $x_i$ are the spatial locations, $m_i$ are the categorical mark sequence numbers, and $t_i$ are the associated time-periods. The null hypothesis is that the type-specific probability surfaces are constant over the time-periods, i.e., $p_k^{(t)}(x) = p_k(x)$ for any $t$, where $p_k^{(t)}(x)$ are the type-specific probabilities for the $k$th category within time-period $t$.
Each Monte Carlo simulation is sampled from an approximation to the true type-specific probability surfaces, namely the surfaces estimated from the data. Under the null hypothesis the simulated data and the original data are therefore, approximately, samples from the same probability distribution. See Diggle, P.J. et al. (2005) for more details.
A list with components:
the p-value of the approximate Monte Carlo test;
a copy of pts, marks, t, h, ntest.
Diggle, P. J. and Zheng, P. and Durr, P. A. (2005) Nonparametric estimation of spatial segregation in a multivariate point process: bovine tuberculosis in Cornwall, UK. J. R. Stat. Soc. C, 54, 3, 645–658.
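A purely illustrative sketch: the lansing data carry no time-periods, so two artificial periods are assigned at random here, and a single fixed bandwidth and a small ntest are used to keep the run short. None of these choices come from the original documentation:
library(spatialkernel)
data(lansing); data(polyb)
ndx <- which(pinpoly(polyb, as.matrix(lansing[c("x", "y")])) > 0)
pts <- as.matrix(lansing[c("x", "y")])[ndx, ]
marks <- lansing[["marks"]][ndx]
set.seed(1)
tp <- sample(1:2, nrow(pts), replace = TRUE)  ## artificial time-periods
mc <- mcpat.test(pts, marks, t = tp, h = 0.05, ntest = 19, proc = FALSE)
str(mc)  ## the returned list includes the Monte Carlo p-value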
Monte Carlo test of spatial segregation in a multivariate point process by simulating data from random re-labelling of the categorical marks.
mcseg.test(pts, marks, h, stpts = NULL, ntest = 100, proc = TRUE)
pts: matrix containing the x,y-coordinates of the data point locations.
marks: numeric/character vector of the marked type labels of the point pattern.
h: numeric vector of the bandwidths at which to calculate the cross-validated likelihood function.
stpts: matrix containing the x,y-coordinates of the points at which to carry out the pointwise segregation test; default NULL.
ntest: integer with default 100, the number of simulations for the Monte Carlo test.
proc: logical, with default TRUE, to print the processing messages.
The null hypothesis is that the estimated risk surface is spatially constant, i.e., the type-specific probabilities are $p_k(x) = p_k$ for all $x$; see phat. Each Monte Carlo simulation is done by relabelling the data categorical marks at random whilst preserving the observed number of cases of each type.
The segregation test can also be done pointwise, usually at a fine grid of points, to mark the areas where the estimated type-specific probabilities are significantly greater or smaller than the spatial average.
A list with components:
numeric, the p-value of the Monte Carlo test;
matrix of p-values of the test at each point in stpts (if stpts is not NULL), with each column corresponding to one type;
a copy of the arguments pts, marks, h, stpts, ntest, proc.
Kelsall, J. E. and Diggle, P. J. (1998) Spatial variation in risk: a nonparametric binary regression approach, Applied Statistics, 47, 559–573.
Diggle, P. J. and Zheng, P. and Durr, P. A. (2005) Nonparametric estimation of spatial segregation in a multivariate point process: bovine tuberculosis in Cornwall, UK. J. R. Stat. Soc. C, 54, 3, 645–658.
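A hedged sketch of the global and pointwise tests; the coarse grid, the single bandwidth 0.05 and the small ntest are arbitrary choices made to keep the example light:
library(spatialkernel)
data(lansing); data(polyb)
ndx <- which(pinpoly(polyb, as.matrix(lansing[c("x", "y")])) > 0)
pts <- as.matrix(lansing[c("x", "y")])[ndx, ]
marks <- lansing[["marks"]][ndx]
grid <- as.matrix(expand.grid(x = seq(0, 1, length = 21), y = seq(0, 1, length = 21)))
grid <- grid[pinpoly(polyb, grid) > 0, ]  ## keep only grid points inside the polygon
mc <- mcseg.test(pts, marks, h = 0.05, stpts = grid, ntest = 19, proc = FALSE)
str(mc)  ## global p-value plus pointwise p-values at the stpts grid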
This is a simple function provided here for the convenience of users. It adds a key showing how the colors map to the values of an image plot.
metre(xl, yb, xr, yt, lab, cols = risk.colors(length(lab) - 1), shift = 0, cex = 1)
xl: left coordinate of the color level metre bar.
yb: bottom coordinate of the color level metre bar.
xr: right coordinate of the color level metre bar.
yt: top coordinate of the color level metre bar.
lab: level labels marked on the metre bar.
cols: associated colours; defaults to risk.colors(length(lab) - 1).
shift: distance to shift the label texts away from the metre bar.
cex: numeric character expansion factor.
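A small sketch of adding a metre key beside an image of random values; the layout coordinates and the random data are arbitrary choices for this illustration:
library(spatialkernel)
set.seed(1)
z <- matrix(runif(100), 10, 10)
brks <- seq(0, 1, by = 0.1)
image(z, breaks = brks, col = risk.colors(length(brks) - 1), xlim = c(0, 1.25))
metre(1.05, 0.1, 1.1, 0.9, lab = brks, cols = risk.colors(length(brks) - 1))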
Listing Loaded/Installed Package Versions
List version numbers of loaded or installed packages.
package.version(all.available = FALSE, lib.loc = NULL)
all.available: logical; if TRUE, all available (installed) packages are listed, otherwise only those currently loaded.
lib.loc: character vector describing the location of R library trees to search through, or NULL (the default), corresponding to all libraries currently known.
This is a revised version of the base R function .packages. It gives both package names and their version numbers.
A list with components:
names of the loaded packages, or of all available packages if all.available is TRUE;
the associated package versions.
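A minimal sketch using the defaults, which lists the currently loaded packages and their versions:
library(spatialkernel)
package.version()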
Estimate the type-specific probabilities for a multivariate Poisson point process with independent component processes of each type.
phat(gpts, pts, marks, h)
gpts: matrix containing the x,y-coordinates of the locations at which to estimate the type-specific probabilities.
pts: matrix containing the x,y-coordinates of the data point locations.
marks: numeric/character vector of the types of the points in the data.
h: numeric value of the bandwidth used in the kernel regression.
The type-specific probabilities for data $(x_i, m_i)$, $i = 1, \ldots, n$, where $x_i$ are the spatial point locations and $m_i$ are the categorical mark sequence numbers, $m_i \in \{1, \ldots, c\}$, are estimated using the kernel smoothing methodology
$$\hat{p}_k(x) = \frac{\sum_{i=1}^{n} I(m_i = k)\, w_h(x - x_i)}{\sum_{i=1}^{n} w_h(x - x_i)},$$
where $I(\cdot)$ is the indicator function, $w_h(\cdot)$ is the kernel function with bandwidth $h$, $w_h(x) = w(x/h)/h^2$, and $w(\cdot)$ is the standardised form of the kernel function.
The default kernel is the Gaussian. Different kernels can be selected by calling setkernel.
A list with components:
matrix of the type-specific probabilities for all types, with the type marks as the matrix row names;
a copy of the arguments gpts, pts, marks, h.
Diggle, P. J. and Zheng, P. and Durr, P. A. (2005) Nonparametric estimation of spatial segregation in a multivariate point process: bovine tuberculosis in Cornwall, UK. J. R. Stat. Soc. C, 54, 3, 645–658.
cvloglk, mcseg.test, and setkernel
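A hedged sketch of estimating type-specific probabilities on a coarse grid; the bandwidth 0.05 is an arbitrary choice that a cvloglk selection would normally replace:
library(spatialkernel)
data(lansing); data(polyb)
ndx <- which(pinpoly(polyb, as.matrix(lansing[c("x", "y")])) > 0)
pts <- as.matrix(lansing[c("x", "y")])[ndx, ]
marks <- lansing[["marks"]][ndx]
grid <- as.matrix(expand.grid(x = seq(0, 1, length = 21), y = seq(0, 1, length = 21)))
grid <- grid[pinpoly(polyb, grid) > 0, ]
est <- phat(grid, pts, marks, h = 0.05)
str(est)  ## contains the matrix of type-specific probabilities (type marks as row names)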
Check the location of point(s) with respect to a polygon.
pinpoly(poly, pts)
poly: matrix containing the x,y-coordinates of the polygon vertices.
pts: matrix containing the x,y-coordinates of the points to be checked.
An integer vector of location indicators, one for each point in pts:
-1: error, the number of polygon vertices exceeds 3000;
0: outside the polygon;
1: on the polygon boundary;
2: inside the polygon.
This function is provided here so that users do not need to load other packages, as it is not available in the base R packages. The vertices may be listed either clockwise or anticlockwise.
The return values have been changed from the original ones so that the point is inside (including at the boundary) if positive.
This Fortran code comes from Wm Randolph Franklin, Electrical, Computer, and Systems Engineering Department, Rensselaer Polytechnic Institute, Troy, New York, at website https://wrf.ecse.rpi.edu/nikola/pages/.
phat and mcseg.test
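A minimal sketch of the return convention (positive means inside or on the boundary); the unit square and the three test points are illustrative assumptions of this example:
library(spatialkernel)
sq <- cbind(c(0, 1, 1, 0), c(0, 0, 1, 1))
pts <- rbind(c(0.5, 0.5),  ## inside
             c(0.5, 0),    ## on the boundary
             c(2, 2))      ## outside
pinpoly(sq, pts)  ## positive for the first two points, not positive for the last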
This color palette is designed to show risk levels with different colours, from low risk (bright orange) to high risk (dark red).
risk.colors(n)
n: number of colors (>= 1) to be in the palette.
## risk pie with ten levels
pie(rep(1, 10), labels = seq(0.1, 1, 0.1), col = risk.colors(10))
Select a kernel function for kernel regression and kernel smoothing.
setkernel(kernel = NULL)
kernel: character string giving the smoothing kernel to be used. This must be one of "gaussian", "epanechnikov", "quadratic", "quartic", or NULL (the default), and may be abbreviated.
A character string naming the kernel function selected, or the kernel function currently being used when kernel is NULL.
The default kernel is the Gaussian. Unless users want to use a non-default kernel, there is no need to call setkernel. "quadratic" is an alias for "epanechnikov". setkernel sets up the kernel function for both the kernel regression in the type-specific probability estimation and the kernel smoothing in the intensity function estimation.
## Not run:
setkernel("e")  ## select the "epanechnikov" kernel
setkernel()     ## show the kernel currently being used
## End(Not run)
Run the Spatial Segregation Analysis
spseg runs a complete spatial segregation analysis on point coordinate matrices or on marked ppp objects from spatstat, and is accompanied by plot functions for presenting the results.
## S3 method for class 'matrix'
spseg(
  pts,
  marks,
  h,
  opt = 2,
  ntest = 100,
  poly = NULL,
  delta = min(apply(apply(pts, 2, range), 2, diff))/100,
  proc = TRUE,
  ...
)

plotcv(obj, ...)

plotphat(
  obj,
  types = unique(obj$marks),
  sup = TRUE,
  col = risk.colors(10),
  breaks = seq(0, 1, length = length(col) + 1),
  ...
)

plotmc(
  obj,
  types = unique(obj$marks),
  quan = c(0.05, 0.95),
  sup = FALSE,
  col = risk.colors(10),
  breaks = seq(0, 1, length = length(col) + 1),
  ...
)

spseg(pts, ...)

## S3 method for class 'ppp'
spseg(pts, h, opt, ...)
pts: an object that contains the points. This can be a two-column matrix or a ppp object from spatstat.
marks: numeric/character vector of the types of the points in the data.
h: numeric vector of the kernel smoothing bandwidths at which to calculate the cross-validated log-likelihood function.
opt: integer; 1 to select the bandwidth, 2 to calculate the type-specific probabilities, and 3 to do the Monte Carlo segregation test.
ntest: integer with default 100, the number of simulations for the Monte Carlo test.
poly: matrix containing the x,y-coordinates of the polygonal boundary vertices; default NULL.
delta: spacing distance of the grid points at which to calculate the estimated type-specific probabilities.
proc: logical, with default TRUE, to print the processing messages.
...: additional arguments passed on to the underlying methods.
obj: the list returned by spseg.
types: numeric/character types of the marks of the data points for which to plot the estimated type-specific probabilities; the default is to plot all types.
sup: logical, with defaults as shown in the usage above.
col: list of colors, such as that generated by risk.colors.
breaks: a set of breakpoints for col; there must be one more breakpoint than colors.
quan: numeric, the pointwise significance levels at which to add contour lines to the plots of the estimated type-specific probabilities.
spseg implements a complete spatial segregation analysis by selecting the bandwidth, calculating the type-specific probabilities, and then carrying out the Monte Carlo test of spatial segregation and of pointwise significance. Some plot functions are also provided here so that users can easily present the results.
These functions are provided only for the convenience of users. Users can instead use individual functions to implement the analysis step by step and plot the diagrams as they wish.
Examples of how to use spseg and present the results with the plot functions are given in spatialkernel-package.
spseg is an S3 generic; the ppp method runs the analysis for marked ppp objects from spatstat.
A list with components:
the bandwidth selected by the cross-validated log-likelihood function;
gridx, gridy: the x, y coordinate vectors from which the grid points are generated, at which the type-specific probabilities and the pointwise segregation test p-values are calculated;
the estimated type-specific probabilities at the grid points generated by the vectors gridx, gridy;
the p-value of the Monte Carlo spatial segregation test;
the pointwise p-values of the Monte Carlo spatial segregation test;
a copy of pts, marks, h, opt.
Both the generic and the ppp method return an spseg object of this form.
Setting h to a single value makes spseg skip the bandwidth selection step and go straight to calculating the type-specific probabilities and then testing the spatial segregation with this fixed value of the bandwidth.
Barry Rowlingson
cvloglk, phat, mcseg.test, pinpoly, risk.colors, and metre