Task:
Make matplotlib plots look nicer
Solution:
Seaborn
We all know and love matplotlib, but most of us would probably agree that its default output is ugly. You can spend hours tweaking your plots, and in the end you will get very nice results; customization is one of matplotlib's great strengths, after all. However, there is another way: simply rely on beautiful defaults created by someone else. Below I will show a couple of examples using the Seaborn library, which is built on top of matplotlib but makes figures look much better. It also provides a very simple way to draw statistical graphs, which I will also demonstrate.
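As a minimal sketch (the data and styling choices here are made up for illustration, not taken from the original post), importing Seaborn and switching on its default theme is enough to restyle ordinary matplotlib code:

```python
import matplotlib
matplotlib.use("Agg")               # headless backend, safe for scripts
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

sns.set_theme()                     # activate Seaborn's defaults for all plots

x = np.linspace(0, 10, 200)
fig, ax = plt.subplots()
for phase in range(4):
    # plain matplotlib calls; only the defaults have changed
    ax.plot(x, np.sin(x + phase), label=f"phase {phase}")
ax.legend()
fig.savefig("sine_seaborn.png")
```

Everything after `sns.set_theme()` is untouched matplotlib; the nicer grid, colors and fonts all come from the theme.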
SMOS sea ice thickness
Task:
Access SMOS Sea Ice Thickness data
Solution:
pydap
Sea ice thickness is one of the most important environmental variables in the Arctic, but unfortunately also one of the hardest to measure. Unlike sea ice concentration, which satellites have now measured operationally for more than three decades, we have only recently begun to obtain limited satellite sea ice thickness information from missions like ICESat, CryoSat-2 and SMOS. The latter is not specifically dedicated to cryospheric applications, but it turns out that its measurements can be used to retrieve data about thin sea ice.
Here you can learn more about the SMOS sea ice thickness project.
Northern Cryosphere Metrics rendered with Colors
Arctic sea ice and snow cover are two of the most prominent features of the northern hemisphere cryosphere and can be seen with the naked eye from the Moon. Measurements of sea ice area started around 1979, and of snow cover a bit earlier. Both show a strong seasonal signal, and sea ice area also shows a decline over the three decades. Even stronger is the decline calculated by a sea ice model run by the Polar Science Center, Washington: PIOMAS outputs daily sea ice volume for the same period, allowing a good comparison of the three data sets. Instead of the usual line charts, this notebook translates the daily data into a color range. Changes within a dataset become far more visible, while comparability is maintained.
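The idea of turning daily series into a color range can be sketched with plain matplotlib; the data below are synthetic, standing in for the real ice, snow and volume records:

```python
import matplotlib
matplotlib.use("Agg")
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
years = np.arange(1979, 2014)
days = np.arange(365)

# Synthetic "sea ice area": seasonal cycle plus a slow decline plus noise
seasonal = 8 + 4 * np.cos(2 * np.pi * (days - 60) / 365)
trend = -0.05 * (years - years[0])
data = seasonal[None, :] + trend[:, None] + rng.normal(0, 0.2, (years.size, days.size))

# One row per year, one column per day: trends and anomalies show up as color bands
fig, ax = plt.subplots()
mesh = ax.pcolormesh(days, years, data, cmap="viridis")
fig.colorbar(mesh, label="area (million km$^2$)")
ax.set_xlabel("day of year")
ax.set_ylabel("year")
fig.savefig("daily_colors.png")
```

Compared with 35 overlapping line plots, the year-by-day grid makes the slow decline visible at a glance.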
Near realtime data from Arctic ice mass balance buoys
Arctic sea ice thickness is a very important quantity and tells a lot about the state of the floating ice cover. Unfortunately, direct measurements are rare, and an area-wide assessment from the ground is too costly. Satellites can fill the gap by using freeboard as a proxy, but there are still some obstacles, e.g. determining snow cover. Mass balance buoys offer a near-realtime view at a few sites and show the characteristics of sea ice melting in summer and freezing in winter. This notebook accesses the latest available data and plots the daily thickness, temperature, snow cover and drift of the buoys.
Interpolation between grids with Basemap
Task:
Interpolate data from regular to curvilinear grid
Solution:
Basemap.interp function
Unfortunately, geophysical data are distributed on a large variety of grids, and from time to time we have to compare our variables to each other. Often plotting a simple map is enough, but if you want to go beyond qualitative comparison, you have to interpolate data from one grid to another. One of the easiest ways to do this is the interp function from the Matplotlib Basemap toolkit. Here I will show how to prepare your data and how to perform the interpolation.
Some necessary imports:
Use of Basemap for Amazon river discharge visualization
Task:
Show how to work with river discharge data.
Also show a couple of ways to visualise this data with Basemap.
Solution:
Pandas, Basemap
This notebook was originally created for the Marinexplore Earth Data Challenge, to show how the data for the submission were processed. I think it might also be interesting for those who are beginning to use Python in geoscience, because it demonstrates a couple of ways to handle CSV and netCDF data, as well as the plotting capabilities of the Basemap module. There will be no extensive explanations, though; mostly the code.
I want to show a small example of a workflow that is more or less typical of the research process. Often you see some interesting feature in your data and want to investigate it in more detail. If you are not lucky enough to work with model data, this requires dealing with multiple data sources, and possibly multiple file formats. Having all data sets in one place, in a consistent format, becomes very handy for this kind of application.
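As a tiny illustration of the CSV side of such a workflow (the values and column names below are made up, not the actual challenge data), pandas reads and aggregates a discharge series in a few lines:

```python
import io
import pandas as pd

# A made-up snippet of daily river discharge data in CSV form
csv_text = io.StringIO(
    "date,discharge_m3s\n"
    "2010-01-01,110000\n"
    "2010-01-02,112500\n"
    "2010-01-03,108900\n"
)

# Parse dates and use them as the index, which enables time-based operations
df = pd.read_csv(csv_text, parse_dates=["date"], index_col="date")

# Aggregate the daily values to monthly means
monthly_mean = df["discharge_m3s"].resample("MS").mean()
print(monthly_mean)
```

With the time series in a DataFrame, resampling, plotting and joining against other sources (e.g. gridded netCDF fields) all become one-liners.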