ssanova {gss}                                                R Documentation
Fit smoothing spline ANOVA models in Gaussian regression. The symbolic model specification via formula follows the same rules as in lm.
ssanova(formula, type=NULL, data=list(), weights, subset, offset,
        na.action=na.omit, partial=NULL, method="v", alpha=1.4, varht=1,
        id.basis=NULL, nbasis=NULL, seed=NULL, random=NULL, skip.iter=FALSE)
formula
    Symbolic description of the model to be fit.
type
    List specifying the type of spline for each variable. See mkterm for details.
data
    Optional data frame containing the variables in the model.
weights
    Optional vector of weights to be used in the fitting process.
subset
    Optional vector specifying a subset of observations to be used in the fitting process.
offset
    Optional offset term with known parameter 1.
na.action
    Function which indicates what should happen when the data contain NAs.
partial
    Optional symbolic description of parametric terms in partial spline models.
method
    Method for smoothing parameter selection. Supported are method="v" for GCV, method="m" for GML (REML), and method="u" for Mallows' CL.
alpha
    Parameter modifying GCV or Mallows' CL; larger absolute values yield smoother fits; a negative value invokes a stable and more accurate GCV/CL evaluation algorithm but may take two to five times as long. Ignored when method="m" is specified.
varht
    External variance estimate needed for method="u". Ignored when method="v" or method="m" is specified.
id.basis
    Index designating selected "knots".
nbasis
    Number of "knots" to be selected. Ignored when id.basis is specified.
seed
    Seed to be used for the random generation of "knots". Ignored when id.basis is specified.
random
    Input for parametric random effects in nonparametric mixed-effect models. See mkran for details.
skip.iter
    Flag indicating whether to use initial values of theta and skip the theta iteration. See the notes on skipping theta iteration below.
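As an illustration (not from the package documentation), a call combining several of these arguments might look as follows; x1, x2, and y stand for hypothetical numeric vectors of equal length:

## Sketch only: x1, x2, y are assumed numeric vectors of equal length.
## method="v" (the default) selects smoothing parameters by GCV;
## nbasis and seed control the random selection of "knots".
fit <- ssanova(y~x1*x2, method="v", alpha=1.4, nbasis=50, seed=5732)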
The model specification via formula is intuitive. For example, y~x1*x2 yields a model of the form

    y = C + f_{1}(x1) + f_{2}(x2) + f_{12}(x1,x2) + e

with the terms denoted by "1", "x1", "x2", and "x1:x2".
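A minimal simulated sketch of such a fit (the data are made up for illustration and are not from the package):

## Simulated two-covariate example with an interaction
set.seed(1)
x1 <- runif(100); x2 <- runif(100)
y <- sin(2*pi*x1) + x2^2 + 0.5*x1*x2 + rnorm(100, sd=0.2)
fit <- ssanova(y~x1*x2)
## the fit contains the terms "1", "x1", "x2", and "x1:x2"
summary(fit)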
The model terms are sums of unpenalized and penalized terms. Attached to every penalized term there is a smoothing parameter, and the model complexity is largely determined by the number of smoothing parameters.
A subset of the observations are selected as "knots." Unless specified via id.basis or nbasis, the number of "knots" q is determined by max(30, 10n^{2/9}), which is appropriate for the default cubic splines for numerical vectors.

Using q "knots," ssanova calculates an approximate solution to the penalized least squares problem using algorithms of the order O(nq^{2}), which for q<<n scale better than the O(n^{3}) algorithms of ssanova0. For the exact solution, one may set q=n in ssanova, but ssanova0 would be much faster.
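For a feel of how the default q grows with n, here is a small sketch; the helper name and the ceiling rounding are assumptions for illustration, not the package's internal code:

## Hypothetical helper mirroring the max(30, 10*n^(2/9)) rule
default.nbasis <- function(n) max(30, ceiling(10*n^(2/9)))
sapply(c(100, 1000, 10000), default.nbasis)  # roughly 30, 47, 78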
ssanova returns a list object of class "ssanova".

The method summary.ssanova can be used to obtain summaries of the fits. The method predict.ssanova can be used to evaluate the fits at arbitrary points along with standard errors. The method project.ssanova can be used to calculate the Kullback-Leibler projection for model selection. The methods residuals.ssanova and fitted.ssanova extract the respective traits from the fits.
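A self-contained sketch of these methods on a toy fit (simulated data; the include argument of project is an assumption about its interface):

## Simulated one-covariate example
set.seed(2)
x <- runif(80); y <- sin(2*pi*x) + rnorm(80, sd=0.3)
fit <- ssanova(y~x)
summary(fit)                                                   # summary of the fit
est <- predict(fit, data.frame(x=seq(0, 1, len=11)), se=TRUE)  # estimates and standard errors
r <- residuals(fit); f <- fitted(fit)                          # residuals and fitted values
## project(fit, include=c("x"))   # Kullback-Leibler projection (argument name assumed)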
For the selection of multiple smoothing parameters, nlm is used to minimize the selection criterion such as the GCV score. When the number of smoothing parameters is large, the process can be time-consuming due to the great number of function evaluations involved.

The starting values for the nlm iteration are obtained using Algorithm 3.2 in Gu and Wahba (1991). These starting values usually yield good estimates themselves, leaving the subsequent quasi-Newton iteration to pick up the "last 10%" of performance at a cost many times that of the initial step. Thus, it is often a good idea to skip the iteration by specifying skip.iter=TRUE, especially in high dimensions and/or with multi-way interactions. skip.iter=TRUE could be made the default in future releases.

To use GCV and Mallows' CL unmodified, set alpha=1.
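A simulated sketch of these two options (illustrative data only):

## Simulated three-covariate example
set.seed(3)
x1 <- runif(100); x2 <- runif(100); x3 <- runif(100)
y <- sin(2*pi*x1) + x2 + x3 + rnorm(100, sd=0.3)
## skip the theta iteration, keeping the Algorithm 3.2 starting values
fit.skip <- ssanova(y~x1*x2*x3, skip.iter=TRUE)
## use GCV unmodified
fit.gcv <- ssanova(y~x1*x2, alpha=1)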
For simpler models and moderate sample sizes, the exact solution of ssanova0 can be faster.

The results may vary from run to run. For consistency, specify id.basis or set seed.
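A short sketch of reproducible knot selection, assuming the fitted object stores the selected indices in an id.basis component:

## Simulated illustration of reproducible fits
set.seed(4)
x <- runif(100); y <- 5 + 3*sin(2*pi*x) + rnorm(100)
fit1 <- ssanova(y~x, seed=5732)               # fix the knot-selection seed
fit2 <- ssanova(y~x, seed=5732)               # same seed, same "knots"
fit3 <- ssanova(y~x, id.basis=fit1$id.basis)  # reuse the knots directly (component name assumed)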
In gss versions earlier than 1.0, ssanova was under the name ssanova1.
Chong Gu, chong@stat.purdue.edu
Gu, C. (2002), Smoothing Spline ANOVA Models. New York: Springer-Verlag.
Gu, C. and Wahba, G. (1991), Minimizing GCV/GML scores with multiple smoothing parameters via the Newton method. SIAM Journal on Scientific and Statistical Computing, 12, 383–398.
Kim, Y.-J. and Gu, C. (2004), Smoothing spline Gaussian regression: more scalable computation via efficient approximation. Journal of the Royal Statistical Society, Ser. B, 66, 337–356.
Wahba, G. (1990), Spline Models for Observational Data. Philadelphia: SIAM.
## Fit a cubic spline
x <- runif(100); y <- 5 + 3*sin(2*pi*x) + rnorm(x)
cubic.fit <- ssanova(y~x)
## Obtain estimates and standard errors on a grid
new <- data.frame(x=seq(min(x),max(x),len=50))
est <- predict(cubic.fit,new,se=TRUE)
## Plot the fit and the Bayesian confidence intervals
plot(x,y,col=1); lines(new$x,est$fit,col=2)
lines(new$x,est$fit+1.96*est$se,col=3)
lines(new$x,est$fit-1.96*est$se,col=3)
## Clean up
## Not run:
rm(x,y,cubic.fit,new,est)
dev.off()
## End(Not run)
## Fit a tensor product cubic spline
data(nox)
nox.fit <- ssanova(log10(nox)~comp*equi,data=nox)
## Fit a spline with cubic and nominal marginals
nox$comp <- as.factor(nox$comp)
nox.fit.n <- ssanova(log10(nox)~comp*equi,data=nox)
## Fit a spline with cubic and ordinal marginals
nox$comp <- as.ordered(nox$comp)
nox.fit.o <- ssanova(log10(nox)~comp*equi,data=nox)
## Clean up
## Not run:
rm(nox,nox.fit,nox.fit.n,nox.fit.o)