
Add statistics functions #836

Draft

jzemmels wants to merge 33 commits into DOI-USGS:develop from jzemmels:statistics

Conversation

@jzemmels (Collaborator) commented Dec 29, 2025

Still to-do:

  • unpack "values" and "percentiles" column when computation_type includes "percentile"
  • function argument descriptions -- do we want to generalize the get_params, get_description, and base_url functions to pull the docs for /statistics too?
  • arg type checking (is that done in other waterdata functions?)
  • pull apart get_statistics_data into separate functions to match other read_waterdata patterns
  • Figure out if other read_ogc_data helpers are needed:
    • rejigger_cols
    • deal_with_empty
    • switch_arg_id
    • switch_properties_id
  • resolve merge conflicts
  • (Nice to have) Determine if there's a more efficient way to join everything together than the final return_list <- do.call(rbind, return_list_tmp), as it's a slight bottleneck (see the sketch below).
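For that last item, a minimal sketch of one option, assuming data.table ends up as a dependency anyway (the variable names just mirror the to-do above; nothing here is committed code):

# Hedged sketch: data.table::rbindlist() as a drop-in for do.call(rbind, ...).
# rbindlist() is typically much faster for long lists of data frames;
# fill = TRUE tolerates elements whose columns don't match exactly.
return_list <- data.table::rbindlist(return_list_tmp, fill = TRUE)
# convert back if downstream code expects a plain data.frame
return_list <- as.data.frame(return_list)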

And there's Laura's list of to-dos here: https://code.usgs.gov/water/dataRetrieval/-/issues/450

jzemmels self-assigned this Dec 29, 2025
@jzemmels (Collaborator, Author):

To the discussion of whether base R or data.table is faster: I've implemented a data.table version of the get_statistics_data function and temporarily put it in the read_waterdata_stats_datatable.R script until we decide if we want to add data.table as a dependency in this PR.

I've profiled the rbind-y base R version of the function against the data.table version, and the comparison is noteworthy: the data.table version is much faster and uses way, way less memory.

[benchmark screenshot: bench::mark comparison showing the data.table version with lower runtime and memory allocation]

Here's the benchmark code I ran:

bench::mark(
  "data.table" = {
    dat1 <- get_statistics_data_data_table(
      args = list(approval_status = "approved",
                  # state_code = "US:42",
                  county_code = "US:42:103",
                  parameter_code = "00095",
                  mime_type = "application/json",
                  computation_type = c("arithmetic_mean", "percentile")),
      service = "Intervals")

    list(
      non_missing_ave = mean(dat1$value, na.rm = TRUE),
      missing_obs = sum(is.na(dat1$value))
    )
  },
  "base R" = {
    dat2 <- get_statistics_data(
      args = list(approval_status = "approved",
                  # state_code = "US:42",
                  county_code = "US:42:103",
                  parameter_code = "00095",
                  mime_type = "application/json",
                  computation_type = c("arithmetic_mean", "percentile")),
      service = "Intervals")

    list(
      non_missing_ave = mean(dat2$value, na.rm = TRUE),
      missing_obs = sum(is.na(dat2$value))
    )
  },
  min_iterations = 5
)

@ldecicco-USGS (Collaborator), replying to the benchmark comment above:


No surprises there; if it wasn't for RDB I would have switched many years ago. Let's go for it. We can start swapping out other code later (off the top of my head I can think of several places where I'd been tempted in the past). Let's leave those types of edits for a dedicated PR later, though.

combined <- data.table::rbindlist(combined_list)
combined <- combined[return_list, on = "rid"]

col_order <- c("parent_time_series_id", "monitoring_location_id", "monitoring_location_name", "parameter_code",
Collaborator:

Ideally we can figure out a way to not write these out by hand. On the off-chance they add some columns or change one, the function will start to fail (and yes, similar hard-coding has caused some headaches in the past with dataRetrieval). We added some logic (I think just in the current PR that removes max_results) to move all "id" columns to the far right (because users don't want to see those big ol' hashes first) UNLESS they are special like monitoring_location_id.

Collaborator (Author):

The column order is no longer hard-coded in the function, but I don't think the current order we get from unpacking the nested JSON is ideal:

c("parameter_code", "unit_of_measure", "parent_time_series_id", 
"value", "percentile", "sample_count", "approval_status", "computation_id", 
"computation", "time_of_year", "time_of_year_type", "monitoring_location_id", 
"monitoring_location_name", "site_type", "site_type_code", "country_code", 
"state_code", "county_code", "geometry") 

I can merge the newest changes into this branch to see if what you mentioned changes anything. Otherwise, what would you recommend?

@jzemmels (Collaborator, Author):

Tests appear to be failing when installing vctrs 0.7.0: https://github.com/DOI-USGS/dataRetrieval/actions/runs/21305229784/job/61331526434#step:5:3135. That isn't an explicit dependency of data.table, so I'm not sure why it's being installed in the first place.

@jzemmels (Collaborator, Author) commented Jan 26, 2026

I merged the main branch updates into /statistics. Not sure what substantively changed, but the GH action passed successfully 🤷🏻.

I think this is ready for a re-review to verify it's matching dR conventions. For example, the columns are probably not in the order we'd like, but I think they're in the de facto order that comes about from unpacking the nested JSON output. I'm not sure how to deal with this other than hard-coding the desired order, which we don't want.

Edit: here's the current column order if you don't want to install my version of the package:

c("parameter_code", "unit_of_measure", "parent_time_series_id", 
"value", "percentile", "sample_count", "approval_status", "computation_id", 
"computation", "time_of_year", "time_of_year_type", "monitoring_location_id", 
"monitoring_location_name", "site_type", "site_type_code", "country_code", 
"state_code", "county_code", "geometry")

In my mind, the ideal order would be something like this:

c("monitoring_location_id", "monitoring_location_name", "parameter_code", "computation", 
"time_of_year", "time_of_year_type", "value", "percentile", "sample_count", "unit_of_measure", 
"approval_status", "site_type", "site_type_code", "county_code", "state_code", "country_code", 
"parent_time_series_id", "computation_id", "geometry")


httr2::request("https://api.waterdata.usgs.gov/statistics/") |>
httr2::req_url_path_append(paste0("v", version)) |>
httr2::req_url_path_append(paste0("observation", service))
Collaborator:

Add in the user agent info, and probably the gzip encoding, and bump up the timeout.

    httr2::req_user_agent(default_ua()) |> 
    httr2::req_headers(`Accept-Encoding` = "gzip") |> 
    httr2::req_timeout(seconds = 180)
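For reference, a minimal sketch of the full chain with those suggestions folded in, assuming version and service are in scope and default_ua() is the package's existing user-agent helper:

req <- httr2::request("https://api.waterdata.usgs.gov/statistics/") |>
  httr2::req_url_path_append(paste0("v", version)) |>
  httr2::req_url_path_append(paste0("observation", service)) |>
  httr2::req_user_agent(default_ua()) |>              # identify the client
  httr2::req_headers(`Accept-Encoding` = "gzip") |>   # ask for compressed responses
  httr2::req_timeout(seconds = 180)                   # allow slow, large queries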

@ldecicco-USGS (Collaborator):

A few minor things that come out during CHECK:

  1. Add the vignette to .Rbuildignore so we don't send it to CRAN. CRAN doesn't like to have vignettes that make calls to the internet. (and then we don't need to add tidyr to Suggests - it is being installed in the pkgdown GH workflow, so we should be good).
  2. There's a new WARNING:
checking Rd metadata ... WARNING
  Rd files with duplicated alias 'dataRetrieval-package':
    'dataRetrieval-package.Rd' 'dataRetrieval.Rd'

I'm 99% sure it's because the ancient file "dataRetrievals-package.R" has that wayward s. You could first rename it to dataRetrieval-package.R and then re-run the "usethis" code to get the data.table imports, or just paste those @importFrom tags into the existing dataRetrievals-package.R file.
  3. A new NOTE:

❯ checking R code for possible problems ... NOTE
  get_statistics_data: no visible binding for global variable 'rid'
  get_statistics_data: no visible binding for global variable 'data'
  get_statistics_data: no visible binding for global variable 'ts_id'
  get_statistics_data: no visible binding for global variable '.j'
  get_statistics_data: no visible binding for global variable 'values'
  Undefined global functions or variables:
    .j data rid ts_id values

Much like coding with dplyr, we have to do some tricks to get rid of that:
https://cran.r-project.org/web/packages/data.table/vignettes/datatable-importing.html
The easiest is:

rid <- data <- ts_id <- values <- NULL

I'm not entirely sure I understand what the .j is doing in the code. Maybe double check that line to make sure you understand it, and figure out if it's just another variable like the others, or if it's special.
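For example, a minimal sketch of the easy fix at the top of the function body (the signature here is just illustrative):

get_statistics_data <- function(args, service) {
  # dummy bindings so R CMD check doesn't flag data.table's
  # non-standard evaluation columns as undefined global variables
  rid <- data <- ts_id <- .j <- values <- NULL
  # ... rest of the function unchanged ...
}

Another common option is a package-level utils::globalVariables(c(".j", "data", "rid", "ts_id", "values")) call.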

  4. Another new NOTE:
❯ checking Rd line widths ... NOTE
  Rd file 'read_waterdata_stats.Rd':
    \examples lines wider than 100 characters:
       # - Month-of-year percentiles for June, computed using all June data (not just June 1 through June 15).
       # Note: the month-of-year percentiles are returned only when the month-day range includes the beginning of the month (e.g., "06-01") 

I've been trying to keep the checks NOTE-free (it's never a good idea to threaten CRAN with anything but a perfectly clean submission), so just split those lines up:

# - Month-of-year percentiles for June, computed using
#   all June data (not just June 1 through June 15).
# Note: the month-of-year percentiles are returned only when the month-day
#   range includes the beginning of the month (e.g., "06-01")

OK, so those seem like the straightforward things to update from running the package check.

  5. Add the user agent (and probably Accept-Encoding: gzip) to the request. I'll plop that in an inline comment.

I'm playing around now with the clean_value_cols function; it's working as expected, and now I need to learn how to debug it a bit better.

{
out <- get_statistics_data(list(), "Normals")

expect_s3_class(out, "sf")
Collaborator:

Since you set up the mocking, this seems like a great time to actually test that the data frame out is exactly what you expect (like, the value column is numeric and exactly equal to c(8032.903, 12521.667, 1, 2) or whatever it's supposed to be). We can't do that for actual dataRetrieval calls since the data changes - but with mocking that would be fantastic.
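Something like the following, where the expected numbers are placeholders to swap for whatever the mocked fixture actually contains:

# Sketch of exact-value expectations against the mocked response;
# the values below are placeholders, not the real fixture numbers.
expect_type(out$value, "double")
expect_equal(out$value, c(8032.903, 12521.667, 1, 2), tolerance = 1e-6)
expect_equal(nrow(out), 4L)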

@jzemmels (Collaborator, Author) commented Feb 4, 2026

Thanks, @ldecicco-USGS. I believe I've addressed all of the points from R CMD check. I also replaced my clean-stats-columns function with yours, which ended up being faster and more readable, IMO.

