Release | Date
---|---|
1.6.1 | 18th February 2014
This document explains the new/changed features of Iris in version 1.6. (View all changes.)
Showcase Feature - Back to the future ...
The new iris.FUTURE global variable is an iris.Future instance that controls the run-time behaviour of Iris.
By setting iris.FUTURE.cell_datetime_objects to True, a time reference coordinate will return datetime-like objects when invoked with iris.coords.Coord.cell() or iris.coords.Coord.cells().
>>> from iris.coords import DimCoord
>>> iris.FUTURE.cell_datetime_objects = True
>>> coord = DimCoord([1, 2, 3], 'time', units='hours since epoch')
>>> print([str(cell) for cell in coord.cells()])
['1970-01-01 01:00:00', '1970-01-01 02:00:00', '1970-01-01 03:00:00']
Note that either a datetime.datetime or a netcdftime.datetime instance will be returned, depending on the calendar of the time reference coordinate.
This makes it possible to express time constraints more naturally when each cell represents a datetime-like object.
# Ignore the 1st of January.
iris.Constraint(time=lambda cell: not (cell.point.month == 1 and cell.point.day == 1))
Note that iris.Future also provides a context manager, which allows multiple sections of code to execute with different run-time behaviour.
>>> print(iris.FUTURE)
Future(cell_datetime_objects=False)
>>> with iris.FUTURE.context(cell_datetime_objects=True):
... # Code that expects to deal with datetime-like objects.
... print(iris.FUTURE)
...
Future(cell_datetime_objects=True)
>>> print(iris.FUTURE)
Future(cell_datetime_objects=False)
Showcase Feature - Partial date/time ...
The iris.time.PartialDateTime class provides the ability to perform comparisons with other datetime-like objects such as datetime.datetime or netcdftime.datetime.
The year, month, day, hour, minute, second and microsecond attributes of an iris.time.PartialDateTime object may be fully or partially specified for any given comparison.
This is particularly useful for time-based constraints when iris.FUTURE.cell_datetime_objects is enabled (see the showcase feature above for further details).
import iris
from iris.time import PartialDateTime
# Ignore the 1st of January.
iris.Constraint(time=lambda cell: cell != PartialDateTime(month=1, day=1))
# Constrain by a specific year.
iris.Constraint(time=PartialDateTime(year=2013))
Also see the User Guide Constraining on Time section for further commentary.
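The comparison behaviour can be pictured without Iris. The following is a minimal, hypothetical sketch of partial matching against datetime.datetime objects; the real iris.time.PartialDateTime supports a much richer set of comparisons:

```python
from datetime import datetime

def partial_match(dt, **specified):
    """Return True if dt matches every explicitly specified field.

    A hypothetical sketch of partial date/time comparison; unspecified
    fields (e.g. the year below) are simply ignored, which is the idea
    behind PartialDateTime equality.
    """
    return all(getattr(dt, field) == value
               for field, value in specified.items())

# Matches any 1st of January, regardless of year.
print(partial_match(datetime(2013, 1, 1), month=1, day=1))  # True
print(partial_match(datetime(2013, 1, 2), month=1, day=1))  # False
```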
The experimental ‘concatenate’ function is now a method of iris.cube.CubeList; see iris.cube.CubeList.concatenate(). The functionality is unchanged.
iris.cube.Cube.extract_by_trajectory() has been removed. Instead, use iris.analysis.trajectory.interpolate().
iris.load_strict() has been removed. Instead, use iris.load_cube() and iris.load_cubes().
iris.coords.Coord.cos() and iris.coords.Coord.sin() have been removed.
iris.coords.Coord.unit_converted() has been removed. Instead, make a copy of the coordinate using iris.coords.Coord.copy() and then call the iris.coords.Coord.convert_units() method of the new coordinate.
Iteration over a Cube has been removed. Instead, use iris.cube.Cube.slices().
The following deprecated Unit methods/properties have been removed.
Removed property/method | New method
---|---|
convertible() | is_convertible()
dimensionless | is_dimensionless()
no_unit | is_no_unit()
time_reference | is_time_reference()
unknown | is_unknown()
iris.cube.Cube.add_history() has been deprecated, and operations such as cube arithmetic, collapsing, and aggregating no longer automatically append to the history attribute. As a result, the signatures of a number of functions within iris.analysis.maths have been modified, along with those of iris.analysis.Aggregator and iris.analysis.WeightedAggregator.
The experimental ABF and ABL functionality has now been promoted to core functionality in iris.fileformats.abf.
The following deprecated iris.coord_categorisation functions have been removed.
Removed function | New function
---|---|
add_custom_season() | add_season()
add_custom_season_number() | add_season_number()
add_custom_season_year() | add_season_year()
add_custom_season_membership() | add_season_membership()
add_month_shortname() | add_month()
add_weekday_shortname() | add_weekday()
add_season_month_initials() | add_season()
When a cube is loaded from PP or GRIB and it has both time and forecast period coordinates, and the time coordinate has bounds, the forecast period coordinate will now also have bounds. These bounds will be aligned with the bounds of the time coordinate taking into account the forecast reference time. Also, the forecast period point will now be aligned with the time point.
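The alignment can be pictured numerically. Assuming (hypothetically) that all values are expressed in hours since a common epoch, the forecast period point and bounds are simply the time point and bounds minus the forecast reference time:

```python
# Hypothetical values, all in hours since a common epoch.
forecast_reference_time = 6.0
time_point = 12.0
time_bounds = (9.0, 15.0)

# Forecast period is the time elapsed since the reference time,
# so its point and bounds track the time coordinate exactly.
fp_point = time_point - forecast_reference_time
fp_bounds = tuple(t - forecast_reference_time for t in time_bounds)

print(fp_point)   # 6.0
print(fp_bounds)  # (3.0, 9.0)
```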
Congratulations and thank you to felicityguest, jkettleb, kwilliams-mo and shoyer who all made their first contribution to Iris!
To assist with managing results cached to file, the new utility function iris.util.file_is_newer_than() may be used to determine whether the modification time of a specified cache file is newer than that of one or more other files.
Typically, caching is a means to avoid the cost of repeating time-consuming processing, or to reap the benefit of fast-loading a pickled cube.
import cPickle

import iris
import iris.util

# Determine whether to load from the cache or source.
if iris.util.file_is_newer_than(cache_file, source_file):
    with open(cache_file, 'rb') as fh:
        cube = cPickle.load(fh)
else:
    cube = iris.load_cube(source_file)
    # Perhaps perform some intensive processing ...
    # Create the cube cache.
    with open(cache_file, 'wb') as fh:
        cPickle.dump(cube, fh)
The iris.analysis.RMS aggregator has been extended to support weighting, via the new weights keyword argument.
For example, an RMS weighted cube collapse is performed as follows:
from iris.analysis import RMS
collapsed_cube = cube.collapsed('height', RMS, weights=weights)
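The quantity computed follows the standard weighted root-mean-square formula, sqrt(sum(w * x**2) / sum(w)). A plain-Python sketch of that formula (an illustration only, not the Iris implementation, which operates on cube data):

```python
import math

def weighted_rms(values, weights):
    """Weighted root mean square: sqrt(sum(w * x**2) / sum(w))."""
    numerator = sum(w * x * x for x, w in zip(values, weights))
    return math.sqrt(numerator / sum(weights))

# With equal weights this reduces to the ordinary RMS.
print(weighted_rms([1.0, 2.0, 3.0], [1.0, 1.0, 2.0]))
```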
To assist with iris.cube.Cube merging, the new experimental in-place function iris.experimental.equalise_cubes.equalise_attributes() ensures that a sequence of cubes shares a common set of iris.cube.Cube.attributes, removing any attributes that are not identical across all the cubes.
This smooths the merging process by ensuring that all candidate cubes have the same attributes.
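The idea can be sketched on plain dictionaries standing in for cube attribute sets (a simplified, hypothetical illustration, not the Iris implementation):

```python
def equalise_attributes(attr_dicts):
    """Remove, in place, any attribute not identical across all dicts.

    A simplified sketch of the idea behind
    iris.experimental.equalise_cubes.equalise_attributes().
    """
    common = {key: value for key, value in attr_dicts[0].items()
              if all(d.get(key) == value for d in attr_dicts[1:])}
    for d in attr_dicts:
        d.clear()
        d.update(common)

a = {'source': 'model', 'run': 1}
b = {'source': 'model', 'run': 2}
equalise_attributes([a, b])
print(a, b)  # {'source': 'model'} {'source': 'model'}
```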
The result from collapsing masked cube data may now be masked by providing the mdtol missing-data tolerance keyword to iris.cube.Cube.collapsed().
This tolerance provides a threshold: a collapsed result is masked whenever the fraction of contributing data that is missing exceeds the given tolerance.
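A plain-Python sketch of the tolerance behaviour for a single collapsed element, using a mean as the example aggregation (an illustration of the idea only, not Iris code):

```python
def collapse_mean(values, mask, mdtol):
    """Mean of the unmasked values, or None (a masked result) when
    the fraction of masked data exceeds the mdtol tolerance.
    """
    masked_fraction = sum(mask) / float(len(values))
    if masked_fraction > mdtol:
        return None  # completely masked result
    valid = [v for v, m in zip(values, mask) if not m]
    return sum(valid) / len(valid)

data = [1.0, 2.0, 3.0, 4.0]
mask = [False, True, True, False]       # half the data is missing

print(collapse_mean(data, mask, mdtol=0.75))  # 2.5
print(collapse_mean(data, mask, mdtol=0.25))  # None (masked)
```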
The new utility function iris.util.new_axis() creates a new cube with a new leading dimension of size unity. If a scalar coordinate is provided, then the scalar coordinate is promoted to be the dimension coordinate for the new leading dimension.
Note that this function will load the data payload of the cube.
The new iris.analysis.PEAK aggregator calculates the global peak value from a spline interpolation of the iris.cube.Cube data payload along a nominated coordinate axis.
For example, to calculate the peak value along the time axis:
from iris.analysis import PEAK
collapsed_cube = cube.collapsed('time', PEAK)
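The principle behind PEAK, finding a peak that may lie between sample points, can be sketched with a simple quadratic interpolation through the discrete maximum and its neighbours. This is an illustration of the idea only; Iris itself uses a spline interpolation:

```python
def quadratic_peak(x, y):
    """Estimate the peak location by fitting a parabola through the
    discrete maximum and its two neighbours (assumes the maximum is
    interior and the x samples are uniformly spaced).
    """
    i = max(range(1, len(y) - 1), key=y.__getitem__)
    y0, y1, y2 = y[i - 1], y[i], y[i + 1]
    h = x[i] - x[i - 1]
    # Vertex offset of the parabola through the three points.
    delta = (y0 - y2) / (2.0 * (y0 - 2.0 * y1 + y2))
    return x[i] + h * delta

# Samples of y = 5 - (x - 2)**2: the peak is exactly at x = 2.
print(quadratic_peak([0, 1, 2, 3, 4], [1, 4, 5, 4, 1]))  # 2.0
```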