Commit e39fa92 (1 parent: bfed982)

add earthaccess login to all notebooks

7 files changed: 211 additions & 8 deletions

examples/docs_2_pace_l2.ipynb

Lines changed: 16 additions & 4 deletions
@@ -12,9 +12,11 @@
 "\n",
 "* Create a plan for files to use `pc.plan()`\n",
 "* Print the plan to check it `print(plan.summary())`\n",
-"* Do the plan and get matchups `pc.matchup(plan, spatial_method=\"xoak\")`\n",
+"* Do the plan and get matchups `pc.matchup(plan)`\n",
 "\n",
-"## Prerequisite -- Login to EarthData\n",
+"*Note: In a virtual machine in AWS us-west-2, where the NASA cloud data lives, point matchups are fast. In Colab, say, where your compute is in a different region and provider, the same matchups might take 10x longer.*\n",
+"\n",
+"## Prerequisites\n",
 "\n",
 "The examples here use NASA EarthData and you need to have an account with EarthData. Make sure you can log in."
 ]
@@ -28,7 +30,7 @@
 {
 "data": {
 "text/plain": [
-"<earthaccess.auth.Auth at 0x7ff7429e3320>"
+"<earthaccess.auth.Auth at 0x7fd752432d20>"
 ]
 },
 "execution_count": 1,
@@ -38,10 +40,20 @@
 ],
 "source": [
 "import earthaccess\n",
-"import xoak\n",
 "earthaccess.login()"
 ]
 },
+{
+"cell_type": "code",
+"execution_count": null,
+"id": "78b37622-212b-4156-8f26-28af653c954d",
+"metadata": {},
+"outputs": [],
+"source": [
+"# if needed\n",
+"pip install point-collocation"
+]
+},
 {
 "cell_type": "markdown",
 "id": "ba4ef7b4-7569-4da4-8956-27f8b060c5b2",

examples/docs_3_many_points.ipynb

Lines changed: 29 additions & 1 deletion
@@ -10,7 +10,35 @@
 "\n",
 "When we have 10s of thousands of points for matchups, we need to be careful with memory. `point-collocation` will try to minimize memory accumulation, but you may still need to use a machine with more RAM to handle the task. 5 GB of RAM is needed for the example with 15k+ points here. The amount of memory needed depends a bit on how your points are distributed across the grid and how they match up with the underlying netcdf chunking. The user doesn't have to worry about that, however; the package is designed to use memory-efficient approaches.\n",
 "\n",
-"Also keeping a very large dataframe in memory will consume RAM. So if RAM is limited, or you are concerned about your machine crashing and losing work, you can save your matchups as you go along."
+"Also keeping a very large dataframe in memory will consume RAM. So if RAM is limited, or you are concerned about your machine crashing and losing work, you can save your matchups as you go along.\n",
+"\n",
+"*Note: In a virtual machine in AWS us-west-2, where the NASA cloud data lives, point matchups are fast. In Colab, say, where your compute is in a different region and provider, the same matchups might take 10x longer.*\n",
+"\n",
+"## Prerequisites\n",
+"\n",
+"The examples here use NASA EarthData and you need to have an account with EarthData. Make sure you can log in."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"id": "52d6f342-2da3-4a9a-8c5d-92b418d7f722",
+"metadata": {},
+"outputs": [],
+"source": [
+"import earthaccess\n",
+"earthaccess.login()"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"id": "45bfd84e-34af-4914-b9f2-c8099d04de19",
+"metadata": {},
+"outputs": [],
+"source": [
+"# if needed\n",
+"pip install point-collocation"
 ]
 },
 {
examples/docs_4_ecco.ipynb

Lines changed: 47 additions & 1 deletion
@@ -14,7 +14,53 @@
 "* ECCO_L4_MIXED_LAYER_DEPTH_05DEG_DAILY_V4R4: mixed layer depth\n",
 "* ECCO_L4_SSH_05DEG_DAILY_V4R4: sea surface height\n",
 "* ECCO_L4_TEMP_SALINITY_05DEG_DAILY_V4R4\n",
-" "
+"\n",
+"*Note: In a virtual machine in AWS us-west-2, where the NASA cloud data lives, point matchups are fast. In Colab, say, where your compute is in a different region and provider, the same matchups might take 10x longer.*\n",
+"\n",
+"## Prerequisites\n",
+"\n",
+"The examples here use NASA EarthData and you need to have an account with EarthData. Make sure you can log in."
+]
+},
+{
+"cell_type": "code",
+"execution_count": 1,
+"id": "da47feb3-0273-4c22-aa24-61c05be2ddaf",
+"metadata": {},
+"outputs": [
+{
+"data": {
+"text/plain": [
+"<earthaccess.auth.Auth at 0x7fb0cbd1bc20>"
+]
+},
+"execution_count": 1,
+"metadata": {},
+"output_type": "execute_result"
+}
+],
+"source": [
+"import earthaccess\n",
+"earthaccess.login()"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"id": "89758143-6b4f-476d-9219-56bc1cabad02",
+"metadata": {},
+"outputs": [],
+"source": [
+"# if needed\n",
+"pip install point-collocation"
+]
+},
+{
+"cell_type": "markdown",
+"id": "538f0232-2879-4bb7-96e0-ee1e0505e625",
+"metadata": {},
+"source": [
+"## Let's see what ECCO collections are available"
 ]
 },
 {

examples/docs_4_icesat2.ipynb

Lines changed: 36 additions & 1 deletion
@@ -6,6 +6,7 @@
 "metadata": {},
 "source": [
 "# ICESat-2 ATL21\n",
+"<br>\n",
 "\n",
 "ATL21 is the gridded sea surface height anomaly (SSHA) product derived from ICESat-2 sea-ice measurements. Because it is a gridded product, we can use `point-collocation` to do matchups. Other ICESat-2 products like ATL07 are along track (lines) and `point-collocation` will not work for those data.\n",
 "\n",
@@ -15,7 +16,41 @@
 "\n",
 "The granules are h5 grouped netcdf files. Each file has monthly, daily, and metadata groups in one netcdf.\n",
 "\n",
-"## First generate some points over the arctic"
+"*Note: In a virtual machine in AWS us-west-2, where the NASA cloud data lives, point matchups are fast. In Colab, say, where your compute is in a different region and provider, the same matchups might take 10x longer.*\n",
+"\n",
+"## Prerequisites\n",
+"\n",
+"The examples here use NASA EarthData and you need to have an account with EarthData. Make sure you can log in."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"id": "c70e56df-c0d7-46d1-a855-77facca171c1",
+"metadata": {},
+"outputs": [],
+"source": [
+"import earthaccess\n",
+"earthaccess.login()"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"id": "39d9b6af-5e2a-46c3-ae6a-b9e1d3e99c67",
+"metadata": {},
+"outputs": [],
+"source": [
+"# if needed\n",
+"pip install point-collocation"
+]
+},
+{
+"cell_type": "markdown",
+"id": "8d0c5e94-2ebd-4f19-864b-dbd0df8715e3",
+"metadata": {},
+"source": [
+"## Generate some points over the arctic"
 ]
 },
 {
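The grouped-h5 layout this notebook describes (monthly, daily, and metadata groups in one file) can be mimicked on a toy file with h5py. This is an illustrative sketch only; the real ATL21 group and variable names differ:

```python
import h5py
import numpy as np

# Write a toy grouped HDF5 file with hypothetical group/variable names.
with h5py.File("toy_atl21.h5", "w") as f:
    f.create_dataset("daily/ssha", data=np.zeros((3, 4)))
    f.create_dataset("monthly/ssha", data=np.zeros((1, 4)))

# Reading it back, the top level is groups, not variables.
with h5py.File("toy_atl21.h5", "r") as f:
    groups = sorted(f.keys())
    shape = f["daily/ssha"].shape
print(groups, shape)
```

Tools that open one group at a time (e.g. passing a group name when opening the dataset) are what make these files workable.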

examples/docs_4_mur.ipynb

Lines changed: 34 additions & 0 deletions
@@ -13,6 +13,40 @@
 "* MUR-JPL-L4-GLOB-v4.1 (~1 km resolution) High-resolution SST ~2002–present\n",
 "* MUR25-JPL-L4-GLOB-v04.2 (~25 km resolution) Coarse global SST analysis ~1992–2017 (retrospective)\n",
 "\n",
+"*Note: In a virtual machine in AWS us-west-2, where the NASA cloud data lives, point matchups are fast. In Colab, say, where your compute is in a different region and provider, the same matchups might take 10x longer.*\n",
+"\n",
+"## Prerequisites\n",
+"\n",
+"The examples here use NASA EarthData and you need to have an account with EarthData. Make sure you can log in."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"id": "0dd6737e-c9e1-44a5-83c8-bd706fed1386",
+"metadata": {},
+"outputs": [],
+"source": [
+"import earthaccess\n",
+"earthaccess.login()"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"id": "edd76e93-6eef-450e-a59a-c77ed2e72dd5",
+"metadata": {},
+"outputs": [],
+"source": [
+"# if needed\n",
+"pip install point-collocation"
+]
+},
+{
+"cell_type": "markdown",
+"id": "1b4a87eb-6c35-4ece-9a1a-c351a8c87280",
+"metadata": {},
+"source": [
 "## Create some points\n",
 "\n",
 "Random global over ocean."

examples/docs_4_tempo.ipynb

Lines changed: 34 additions & 0 deletions
@@ -13,6 +13,40 @@
 "* TEMPO_NO2_L3 this is level 3 on a grid\n",
 "* TEMPO_O3TOT_L2 this is level 2 swath data\n",
 "\n",
+"*Note: In a virtual machine in AWS us-west-2, where the NASA cloud data lives, point matchups are fast. In Colab, say, where your compute is in a different region and provider, the same matchups might take 10x longer.*\n",
+"\n",
+"## Prerequisites\n",
+"\n",
+"The examples here use NASA EarthData and you need to have an account with EarthData. Make sure you can log in."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"id": "e9b6e22e-f993-4aa0-a03b-6c310cbe11be",
+"metadata": {},
+"outputs": [],
+"source": [
+"import earthaccess\n",
+"earthaccess.login()"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"id": "69c969a2-cad8-41d0-b1cd-f78228b188b9",
+"metadata": {},
+"outputs": [],
+"source": [
+"# if needed\n",
+"pip install point-collocation"
+]
+},
+{
+"cell_type": "markdown",
+"id": "cd008cf7-3bcb-4bf4-8535-7f0e142170bb",
+"metadata": {},
+"source": [
 "## TEMPO_NO2_L3\n",
 "\n",
 "### Create some points\n",

examples/docs_quickstart.ipynb

Lines changed: 15 additions & 1 deletion
@@ -6,14 +6,17 @@
 "metadata": {},
 "source": [
 "# Quickstart\n",
+"<br>\n",
 "\n",
 "`point-collocation` gets matchups to lat/lon using the pixel center that is closest to the lat/lon point (equivalent to method=\"nearest\"). For time, you can select a buffer of 0, which means the time of the point must be within the time range of the file, or a buffer like buffer=\"1D\" to find files within 1 day of the point. Using a buffer can help for L2 files with short windows (minutes) or collections with infrequent files.\n",
 "\n",
 "* Create a plan for files to use `pc.plan()`\n",
 "* Print the plan to check it `plan.summary()`\n",
 "* Do the plan and get matchups for variables `pc.matchup(plan, variables=['var'])`\n",
 "\n",
-"## Prerequisite -- Login to EarthData\n",
+"*Note: In a virtual machine in AWS us-west-2, where the NASA cloud data lives, point matchups are fast. In Colab, say, where your compute is in a different region and provider, the same matchups might take 10x longer.*\n",
+"\n",
+"## Prerequisites\n",
 "\n",
 "The examples here use NASA EarthData and you need to have an account with EarthData. Make sure you can log in."
 ]
@@ -40,6 +43,17 @@
 "earthaccess.login()"
 ]
 },
+{
+"cell_type": "code",
+"execution_count": null,
+"id": "d69b9b2f-4d77-4e21-8d3a-0216f9b3cd46",
+"metadata": {},
+"outputs": [],
+"source": [
+"# if needed\n",
+"pip install point-collocation"
+]
+},
 {
 "cell_type": "markdown",
 "id": "83d8e05a-3219-46e9-90c0-3847bf89750a",
