This function fits the single-season N-mixture model of Royle (2004).

stan_pcount(
  formula,
  data,
  K = NULL,
  mixture = "P",
  prior_intercept_state = normal(0, 5),
  prior_coef_state = normal(0, 2.5),
  prior_intercept_det = logistic(0, 1),
  prior_coef_det = logistic(0, 1),
  prior_sigma = gamma(1, 1),
  log_lik = TRUE,
  ...
)

Arguments

formula

Double right-hand side formula describing covariates of detection and abundance, in that order (see the sketch following this argument list)

data

An unmarkedFramePCount object

K

Integer upper index of integration for N-mixture. This should be set high enough so that it does not affect the parameter estimates. Note that computation time will increase with K.

mixture

Character specifying the mixture: "P" (Poisson) is currently the only option.

prior_intercept_state

Prior distribution for the intercept of the state (abundance) model; see ?priors for options

prior_coef_state

Prior distribution for the regression coefficients of the state model

prior_intercept_det

Prior distribution for the intercept of the detection probability model

prior_coef_det

Prior distribution for the regression coefficients of the detection model

prior_sigma

Prior distribution on random effect standard deviations

log_lik

If TRUE, Stan will save pointwise log-likelihood values in the output, which can greatly increase the size of the model object. If FALSE, the values are calculated post hoc from the posteriors

...

Arguments passed to the stan call, such as the number of chains (chains) or iterations (iter)
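
The sketch below (not run) illustrates the argument list: detection covariates go on the first right-hand side of the formula and abundance covariates on the second, and priors are specified with the constructors documented in ?priors. It uses the mallard data from the Examples section; the K heuristic and the prior values are illustrative assumptions, not recommendations from this page.

fit <- stan_pcount(~1 ~ elev + forest,          # detection ~ abundance
                   data = mallardUMF,
                   # assumption: set K well above the largest observed count
                   K = max(mallard.y, na.rm = TRUE) + 50,
                   # custom priors via the constructors from ?priors
                   prior_intercept_state = normal(0, 2),
                   prior_coef_state = normal(0, 1),
                   # remaining arguments are passed on to Stan
                   chains = 4, iter = 2000)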

Value

ubmsFitPcount object describing the model fit.
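
The returned object can be queried with the usual ubms fit methods. The calls below are a hedged sketch using fm_mallard from the Examples section; the method names and the submodel argument are assumed from the general ubms fit interface rather than documented on this page.

summary(fm_mallard, "state")            # posterior summaries for the abundance submodel
predict(fm_mallard, submodel = "state") # posterior predictions of expected abundance per site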

References

Royle JA. 2004. N-mixture models for estimating population size from spatially replicated counts. Biometrics 60: 105-108.

Examples

# \donttest{
data(mallard)
mallardUMF <- unmarkedFramePCount(mallard.y, siteCovs=mallard.site)

(fm_mallard <- stan_pcount(~1~elev+forest, mallardUMF, K=30,
                           chains=3, iter=300))
#> 
#> SAMPLING FOR MODEL 'pcount' NOW (CHAIN 1).
#> Chain 1: 
#> Chain 1: Gradient evaluation took 0.005243 seconds
#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 52.43 seconds.
#> Chain 1: Adjust your expectations accordingly!
#> Chain 1: 
#> Chain 1: 
#> Chain 1: Iteration:   1 / 300 [  0%]  (Warmup)
#> Chain 1: Iteration:  30 / 300 [ 10%]  (Warmup)
#> Chain 1: Iteration:  60 / 300 [ 20%]  (Warmup)
#> Chain 1: Iteration:  90 / 300 [ 30%]  (Warmup)
#> Chain 1: Iteration: 120 / 300 [ 40%]  (Warmup)
#> Chain 1: Iteration: 150 / 300 [ 50%]  (Warmup)
#> Chain 1: Iteration: 151 / 300 [ 50%]  (Sampling)
#> Chain 1: Iteration: 180 / 300 [ 60%]  (Sampling)
#> Chain 1: Iteration: 210 / 300 [ 70%]  (Sampling)
#> Chain 1: Iteration: 240 / 300 [ 80%]  (Sampling)
#> Chain 1: Iteration: 270 / 300 [ 90%]  (Sampling)
#> Chain 1: Iteration: 300 / 300 [100%]  (Sampling)
#> Chain 1: 
#> Chain 1:  Elapsed Time: 9.529 seconds (Warm-up)
#> Chain 1:                9.335 seconds (Sampling)
#> Chain 1:                18.864 seconds (Total)
#> Chain 1: 
#> 
#> SAMPLING FOR MODEL 'pcount' NOW (CHAIN 2).
#> Chain 2: 
#> Chain 2: Gradient evaluation took 0.005072 seconds
#> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 50.72 seconds.
#> Chain 2: Adjust your expectations accordingly!
#> Chain 2: 
#> Chain 2: 
#> Chain 2: Iteration:   1 / 300 [  0%]  (Warmup)
#> Chain 2: Iteration:  30 / 300 [ 10%]  (Warmup)
#> Chain 2: Iteration:  60 / 300 [ 20%]  (Warmup)
#> Chain 2: Iteration:  90 / 300 [ 30%]  (Warmup)
#> Chain 2: Iteration: 120 / 300 [ 40%]  (Warmup)
#> Chain 2: Iteration: 150 / 300 [ 50%]  (Warmup)
#> Chain 2: Iteration: 151 / 300 [ 50%]  (Sampling)
#> Chain 2: Iteration: 180 / 300 [ 60%]  (Sampling)
#> Chain 2: Iteration: 210 / 300 [ 70%]  (Sampling)
#> Chain 2: Iteration: 240 / 300 [ 80%]  (Sampling)
#> Chain 2: Iteration: 270 / 300 [ 90%]  (Sampling)
#> Chain 2: Iteration: 300 / 300 [100%]  (Sampling)
#> Chain 2: 
#> Chain 2:  Elapsed Time: 8.751 seconds (Warm-up)
#> Chain 2:                9.323 seconds (Sampling)
#> Chain 2:                18.074 seconds (Total)
#> Chain 2: 
#> 
#> SAMPLING FOR MODEL 'pcount' NOW (CHAIN 3).
#> Chain 3: 
#> Chain 3: Gradient evaluation took 0.004808 seconds
#> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 48.08 seconds.
#> Chain 3: Adjust your expectations accordingly!
#> Chain 3: 
#> Chain 3: 
#> Chain 3: Iteration:   1 / 300 [  0%]  (Warmup)
#> Chain 3: Iteration:  30 / 300 [ 10%]  (Warmup)
#> Chain 3: Iteration:  60 / 300 [ 20%]  (Warmup)
#> Chain 3: Iteration:  90 / 300 [ 30%]  (Warmup)
#> Chain 3: Iteration: 120 / 300 [ 40%]  (Warmup)
#> Chain 3: Iteration: 150 / 300 [ 50%]  (Warmup)
#> Chain 3: Iteration: 151 / 300 [ 50%]  (Sampling)
#> Chain 3: Iteration: 180 / 300 [ 60%]  (Sampling)
#> Chain 3: Iteration: 210 / 300 [ 70%]  (Sampling)
#> Chain 3: Iteration: 240 / 300 [ 80%]  (Sampling)
#> Chain 3: Iteration: 270 / 300 [ 90%]  (Sampling)
#> Chain 3: Iteration: 300 / 300 [100%]  (Sampling)
#> Chain 3: 
#> Chain 3:  Elapsed Time: 9.591 seconds (Warm-up)
#> Chain 3:                8.742 seconds (Sampling)
#> Chain 3:                18.333 seconds (Total)
#> Chain 3: 
#> Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
#> Running the chains for more iterations may help. See
#> https://mc-stan.org/misc/warnings.html#bulk-ess
#> Warning: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
#> Running the chains for more iterations may help. See
#> https://mc-stan.org/misc/warnings.html#tail-ess
#> 
#> Call:
#> stan_pcount(formula = ~1 ~ elev + forest, data = mallardUMF, 
#>     K = 30, chains = 3, iter = 300)
#> 
#> Abundance (log-scale):
#>             Estimate    SD  2.5%  97.5% n_eff  Rhat
#> (Intercept)    -1.95 0.230 -2.46 -1.536   308 0.998
#> elev           -1.35 0.222 -1.78 -0.923   304 0.995
#> forest         -0.74 0.162 -1.08 -0.445   231 1.006
#> 
#> Detection (logit-scale):
#>  Estimate   SD   2.5% 97.5% n_eff Rhat
#>     0.466 0.19 0.0927 0.803   313    1
#> 
#> LOOIC: 535.921
#> Runtime: 55.271 sec
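# Hedged follow-up (not run; output omitted). These calls assume the
# standard ubms diagnostic methods for fitted models:
# traceplot(fm_mallard)  # MCMC trace plots for the fitted parameters
# loo(fm_mallard)        # leave-one-out cross-validation (basis of the LOOIC above)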
# }