Other articles

  1. Near realtime data from Arctic ice mass balance buoys

    Notebook file

    Author's twitter

    Arctic sea ice thickness is an important quantity that tells a lot about the state of the floating ice. Unfortunately, direct measurements are rare, and an area-wide assessment from the ground is too costly. Satellites can fill the gap by using freeboard as a proxy, but some obstacles remain, e.g. determining snow cover. Mass balance buoys offer a near-realtime view at a few sites and show the characteristics of sea ice melting in summer and freezing in winter. This notebook accesses the latest available data and plots daily thickness, temperature, snow cover and drift of the buoys.
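    The daily plots described above start from sub-daily buoy records averaged to daily values. A minimal sketch of that step with pandas, using made-up column names and values (the real buoy files and their format are not shown here):

    ```python
    from io import StringIO

    import pandas as pd

    # Hypothetical sample in the spirit of an ice mass balance buoy record:
    # timestamped ice thickness and air temperature readings.
    raw = StringIO(
        "time,thickness_m,air_temp_c\n"
        "2013-06-01 00:00,1.80,-2.0\n"
        "2013-06-01 12:00,1.79,0.5\n"
        "2013-06-02 00:00,1.77,1.1\n"
        "2013-06-02 12:00,1.75,2.3\n"
    )

    df = pd.read_csv(raw, parse_dates=["time"], index_col="time")

    # Daily means smooth out the sub-daily sampling before plotting,
    # e.g. with daily["thickness_m"].plot().
    daily = df.resample("D").mean()
    print(daily["thickness_m"].tolist())
    ```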

    read more


  2. Interpolation between grids with Basemap


    Interpolate data from regular to curvilinear grid


    Basemap.interp function

    Unfortunately, geophysical data are distributed on a large variety of grids, and from time to time we have to compare our variables to each other. Often plotting a simple map is enough, but if you want to go beyond qualitative comparison, you have to interpolate data from one grid to another. One of the easiest ways to do this is to use the basemap.interp function from the Matplotlib Basemap library. Here I will show how to prepare your data and how to perform the interpolation.
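    With its default order=1, basemap.interp performs bilinear interpolation from a rectilinear source grid to arbitrary target points. A pure-NumPy equivalent of that step (a sketch of the idea, not the Basemap call itself, and with toy data) could look like:

    ```python
    import numpy as np

    def bilinear_interp(datain, xin, yin, xout, yout):
        """Bilinear interpolation from a regular grid (xin, yin) to target
        points (xout, yout), mimicking basemap.interp with order=1.
        datain has shape (len(yin), len(xin))."""
        # Fractional indices of the target points in the source grid.
        ix = np.interp(xout, xin, np.arange(len(xin)))
        iy = np.interp(yout, yin, np.arange(len(yin)))
        # Lower-left corner of the enclosing grid cell and the weights.
        x0 = np.clip(np.floor(ix).astype(int), 0, len(xin) - 2)
        y0 = np.clip(np.floor(iy).astype(int), 0, len(yin) - 2)
        wx = ix - x0
        wy = iy - y0
        return ((1 - wy) * (1 - wx) * datain[y0, x0]
                + (1 - wy) * wx * datain[y0, x0 + 1]
                + wy * (1 - wx) * datain[y0 + 1, x0]
                + wy * wx * datain[y0 + 1, x0 + 1])

    # Regular 3x3 source grid with values = x + y, sampled at cell midpoints.
    xin = np.array([0.0, 1.0, 2.0])
    yin = np.array([0.0, 1.0, 2.0])
    data = yin[:, None] + xin[None, :]
    xout = np.array([0.5, 1.5])
    yout = np.array([0.5, 1.5])
    print(bilinear_interp(data, xin, yin, xout, yout))  # → [1. 3.]
    ```

    The Basemap version of the same call is simply `basemap.interp(datain, xin, yin, xout, yout, order=1)`, where xin and yin must be increasing 1-D coordinate arrays of the source grid.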

    Some necessary imports:

    read more


  3. Use of Basemap for Amazon river discharge visualization


        Show how to work with river discharge data.
        Also show a couple of ways to visualise these data with Basemap. 


        Pandas, Basemap

    This notebook was originally created for the Marinexplore Earth Data Challenge, in order to show how data for the submission were processed. I think it might also be interesting for those who are beginning to use Python in geoscience, because it demonstrates a couple of ways to handle CSV and netCDF data, as well as the plotting capabilities of the Basemap module. There will be no extensive explanations, though; mostly just the code.

    I want to show a small example of the workflow that is more or less typical during the research process. Often you see some interesting feature in your data and want to investigate it in more detail. If you are not lucky enough to work with model data, this requires dealing with multiple data sources, and possibly multiple file formats. Having all data sets in one place in a consistent format becomes very handy for this type of application.
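    For the CSV side of such a workflow, the usual first move with pandas is to load the station record and build a monthly climatology before mapping anything. A minimal sketch with made-up discharge values (the real station file and its columns are not shown here):

    ```python
    from io import StringIO

    import pandas as pd

    # Hypothetical river discharge series (date, discharge in m^3/s),
    # standing in for the station CSV used in the notebook.
    raw = StringIO(
        "date,discharge\n"
        "2012-01-15,150000\n"
        "2012-02-15,180000\n"
        "2013-01-15,160000\n"
        "2013-02-15,170000\n"
    )

    df = pd.read_csv(raw, parse_dates=["date"], index_col="date")

    # Averaging by calendar month gives the seasonal cycle, which is the
    # natural starting point when hunting for a feature in the record.
    clim = df.groupby(df.index.month)["discharge"].mean()
    print(clim.tolist())  # → [155000.0, 175000.0]
    ```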

    read more
