A comparison between WCS and OPeNDAP for making model results available through the internet

Fedor Baart (1,2), Gerben de Boer (1,2), Wim de Haas (3), Gennadii Donchyts (2), Marc Philippart (3), Maarten Plieger (4)

(1) Delft University of Technology, Delft, The Netherlands ([email protected], [email protected])
(2) Deltares, Delft, The Netherlands ([email protected], [email protected], [email protected])
(3) Ministry of Infrastructure and the Environment ([email protected], [email protected])
(4) Royal Netherlands Meteorological Institute ([email protected])
Abstract

Numerical models, such as hydrodynamic and climate models, produce output with a large number of variables and a large number of grid cells and time steps. Further use of the resulting data products has been challenging, especially outside the institute of origin. Because of the sheer volume of the data, simply downloading copies is impractical. Web services are therefore used to access subsets of the data. The most mature candidates for working with gridded data are OPeNDAP and WCS. Here we compare these two protocols for serving gridded data through the web. In the framework of the new Dutch National Model and Data Centre (NMDC) a distributed data storage has been realized by coupling OPeNDAP servers. A WCS service layer is provided for the same data. This allows us to compare OPeNDAP and WCS. Using several use cases we compare the usability, performance and features of the two protocols.
1. Introduction

1.1. Getting the data out there
Scientists and government agencies run numerical models for hydrodynamic, atmospheric or subsoil predictions on a routine basis. The results of these models were traditionally stored in safe data centers, and the data were mainly used by the very communities that ran the models. Currently there is a bottom-up trend of making these results available on the internet for a larger audience. Research on the interfaces between disciplines fosters this bottom-up approach of opening up data. Another bottom-up push is the recognition of the sheer need to share the infrastructures required to work with increasingly large datasets. The recent establishment of NMDC in the Netherlands is an example of this. In addition, the trend to open up data is pushed top-down by slow but irreversible international legislation. Examples are the EU INSPIRE directive and the digital agenda of Neelie Kroes (European Commissioner for Digital Agenda).
For scientists and governments the motivation to expose data is often based on efficiency: the chance of co-operation increases and the chance of duplication decreases. If data is put on the internet in an accessible form, i.e. through an API, then other parties can join in the analysis and further dissemination of the data. Good examples of this are the "hack the government" and OpenEarth (Van Koningsveld 2010) initiatives, where developers take the available government datasets and try to build better applications on top of them (http://www.hackdeoverheid.nl, http://rewiredstate.org/, http://www.openearth.eu). Furthermore, there often is a formal obligation for governments and scientists to provide their data on request (freedom of information acts). By making the data available in a convenient way, governments can reduce the number of requests and answer them more easily.
1.2. Model results on the internet
In the last decade several methods have been developed to make numerical model results available through the internet. The same trend seen in general IT, a shift from client-server applications to service-based applications, can be seen in the applications that expose numerical model results on the internet. Geospatial datasets have benefited from the development of web services such as the Web Feature Service (WFS) and the Web Map Service (WMS) for exchanging information about geospatial features and rendered maps. Rendered maps are already widely used and serve a large part of the potential audience. They also circumvent the issue that some agencies are reluctant to share the actual numbers behind their data. Fellow researchers, however, would rather access the actual data than mere visualizations of it. Here protocols other than WMS are required. Web services for data exchange are not new, though. The World Wide Web was conceived to exchange data between large-scale physics laboratories (CERN). In the early days data exchange consisted of file transfer between computers, monitored by people behind terminals (thin clients). Next, data exchange shifted towards the PC of the researcher. Now there is a tendency for data processing to shift back to data centers, with users again behind thin clients, this time with color screens instead of green letters (Carr, 2008). The main arguments to keep data at data centers now are energy efficiency and cheap availability due to redundancy in commercial data centers.
2. Services for providing model results

In this paper we discuss the two best candidates for exchanging gridded model results: the Web Coverage Service (WCS) and the Open Source Project for a Network Data Access Protocol (OPeNDAP).
2.1. OPeNDAP
The development of OPeNDAP started in 2000. The goal was to develop and promote software that facilitates access to data via the network (Cornillon 2003). The OPeNDAP standard has led to a high level of interoperability for "gridded" data such as model outputs (Hankin 2009). It started as a way to make federal oceanographic data arrays available to scientists. GIS concepts such as georeferencing were added later through optional conventions. The OPeNDAP data model is closely related to that of NetCDF. The service is built as a layer on top of HTTP: a request takes the form of a URL with query parameters. The data store typically consists of files that are introspected by an OPeNDAP server. Data is returned as binary (dods), ASCII or as a file (different servers support different formats). The service itself is based on data types such as grids, variables and attributes. No specific knowledge about geospatial or other physical aspects is part of the protocol; these aspects are implemented through additional conventions.
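As a minimal sketch of such a request from a scripting language (the URL and variable name below are hypothetical), the netcdf4-python library accepts OPeNDAP URLs directly, and only the requested index range is transferred:

```python
from netCDF4 import Dataset  # netcdf4-python, built with OPeNDAP (DAP) support

# Hypothetical OPeNDAP endpoint and variable name, for illustration only.
ds = Dataset("http://example.org/opendap/model_output.nc")

# Only this index range crosses the network: the first time step of a
# 100 x 100 block of grid cells.
precip = ds.variables["precipitation"][0, 0:100, 0:100]

# Attributes, such as units, travel along with the variable.
print(ds.variables["precipitation"].units)
```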
2.2. Web Coverage Service
The Open Geospatial Consortium (OGC) developed the Web Coverage Service. The WCS standard is defined by the OGC as a web service interface that enables interoperable access to geospatial "coverages" (OGC 1999). The WCS supports electronic retrieval of geospatial data as "coverages" – that is, digital geospatial information representing space-varying phenomena (Whiteside 2008). The latest specification defines WCS as follows: "WCS allows clients to choose portions of a server's information holdings based on spatial constraints and other query criteria. (...) WCS provides available data together with their detailed descriptions; defines a rich syntax for requests against these data; and returns data with its original semantics (instead of pictures) which may be interpreted, extrapolated, etc., and not just portrayed." WCS started as a web-enabled version of traditional GIS coverages and was later extended with time concepts for modelers.
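A WCS request is expressed as key-value pairs in a URL. As a sketch (the endpoint and coverage name below are hypothetical), a GetCoverage request for a global grid can be composed as follows:

```python
from urllib.parse import urlencode

# Hypothetical WCS endpoint and coverage name, for illustration only.
endpoint = "http://example.org/wcs"
params = {
    "service": "WCS",
    "version": "1.0.0",
    "request": "GetCoverage",
    "coverage": "precipitation",
    "format": "GeoTIFF",
    "crs": "EPSG:4326",
    "bbox": "-180,-90,180,90",  # spatial constraint in coordinate space
    "width": "720",             # size of the returned grid
    "height": "360",
}
url = endpoint + "?" + urlencode(params)
print(url)
```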
2.3. Efforts to combine WCS and OPeNDAP
Domenico et al. (2006) noted the difference between the standard GIS approach (everything is a feature) and the "Fluid Earth Science" approach (everything is a function). They performed two experiments in the context of the Galeon project. The first was to find out whether a Web Coverage Service can serve data that is typically stored in NetCDF. The second interoperability experiment tested whether a WCS server could be built on top of an OPeNDAP server. This project, based on regular gridded datasets, resulted in an extensive list of feature requests for the next WCS specification. The Common Data Model (CDM) was extended to conform to the recent geospatial coverage OGC/ISO standards (Nativi 2008).
2.4. End users
The different users of the datasets have different expectations of usability and fitness for a particular purpose. This is discussed by Blower et al. (2009), who describe six different levels (0-5) of data products that meet the requirements of different end user populations. In this paper we focus on making results from numerical geospatial models (hydrodynamic, atmospheric and subsoil) available. There are several different audiences for such online hindcast, nowcast and forecast data. In this study we focus on the following sets of possible end users:
- Scientists
- Analysts
- Government agencies
We do not focus on making the data available to the general public. Providing information to the general public usually involves an extra step of making visualizations and key indicators that fit specific purposes. An example of this is the categorization of forecast weather data into icons showing a sun and a little cloud.
Figure 1 shows the range of web services that exist to serve data via the web. Scientists and analysts work with data in scientific file formats such as NetCDF and HDF, although some still prefer wasting time with flat binary files. These data were traditionally transferred via FTP or HTTP download. Servers were equipped with delayed-mode subsetting services to cut the data to the required region. For new data delivery services this approach is now replaced by OPeNDAP and WCS, where subsetting can be done in real time. These protocols can be adopted for all data enrichment levels of Blower et al. (2009). Government agencies work with visualizations of the highest data product levels. These are already provided through portals built on top of OPeNDAP servers (NOAA DCHART, ncWMS, Pydap WMS responses) or WMS servers (EMODNET portals).
Figure 1: Overview of different levels for providing results from numerical models
3. Challenges

When working with these approaches in different national and EU projects (MICORE, NMDC) several challenges have arisen. Many aspects are already solved or can be based on more general solutions. For example, the communication between a server providing data and a client making requests can be based on one of the many service protocols, such as URL query strings, SOAP, XML-RPC, or on an easier-to-use REST-like approach. Other aspects that are more specific to the field of hydrodynamic, meteorological and subsoil models still require extra effort and attention:
- Time dependency
- Non-rectilinear grids
- Multidimensional data
- Information and retention of the physical quantities
- Indices, queries and interpolation, performance of interpolation (stateless/stateful)
In the results section we will elaborate on the experiences relating to these challenges while providing our data through the two service types.
3.1. Time and sparsity
Time is often referred to as the 4th dimension in GIS. There are different strategies to store information about time together with spatial information (see for example Langran 1990). In numerical models the information can be quite dense: for a hydrodynamic model all the wet grid cells have a temperature and volume for all time steps. Some information can be quite sparse: for example, a rain volume field can be empty for a whole model run. Exchanging both sparse and dense arrays in a uniform way is one of the ways in which the exchange of model results through the web could be improved. Another topic relating to time arises when numerical models are used for forecasting, for example weather prediction, where one often uses a time window. This time window usually starts a few hours in the past, and predictions range from one to several days. These windows from multiple model runs overlap and are usually stored next to each other. Easily providing and combining these model runs is an important challenge.
3.2. Non-rectilinear grids
Numerical models subdivide the real world into small areas (points, cells, volumes or elements) and layers. The collection of cells covering an area is called the computational grid. One of the challenges in numerical modeling is to obtain a high precision in areas of interest. Grids with complex geometries are employed to get a high precision without sacrificing computation time, which is a function of the number of grid cells. An example of such a grid, taken from Kernkamp et al. (2011), can be seen in Figure 2. Another aspect that deserves special attention is the position of the quantities. For coastal applications vector fields are located on the interfaces between the cells and the scalar quantities at the cells; for oceanographic sets both vectors and scalars are located at the corners; while in some finite element approaches quantities can even be distributed rather than localized.
Figure 2: Examples of non-rectilinear grids
3.3. Vertical coordinates
Results from numerical models in the field of atmospheric and hydrodynamic modeling can have some reference to a vertical coordinate. The vertical coordinate system can be based on a representation of the globe, but can also be based on a physical quantity such as pressure or temperature.
The most common vertical coordinates are relative to a defined geoid, such as WGS84. In modeling, other vertical reference levels are often used, for example the ocean bottom, the terrain or pressure levels. For offshore sets local reference fields with respect to LAT (Lowest Astronomical Tide) are used, which are a result of model simulations themselves. Figure 3 gives an overview of some of the different types of coordinate systems that are used in hydrodynamic (z, sigma), atmospheric and subsoil modeling (theta, eta and sigma). Combinations of vertical coordinate systems (hybrid) can also be used, for example a fixed-height layer at the bottom of the ocean followed by several relative sigma layers. It is often difficult to store information while keeping track (in a standardized way) of the properties of vertical coordinates. The CF convention has standardized these coordinates for ocean and atmospheric models (Appendix D). Storing the data in their original coordinates and providing a formula to transform to a different vertical coordinate system is the preferred way. This is comparable to how we deal with geographic and projected coordinates, where the transformation is usually well defined.
Figure 3: Example of different vertical coordinate systems in use in hydrodynamic and meteorological modeling
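As an example of such a formula-based transformation, the CF convention (Appendix D) defines the ocean sigma coordinate through z(n,k,j,i) = eta(n,j,i) + sigma(k) * (depth(j,i) + eta(n,j,i)). A minimal sketch of evaluating this with numpy, using made-up array shapes:

```python
import numpy as np

# Shapes following CF Appendix D: sigma(k), eta(time, y, x), depth(y, x).
sigma = np.linspace(0.0, -1.0, 10)   # dimensionless layer coordinate
eta = np.zeros((24, 50, 50))         # free-surface elevation above datum
depth = np.full((50, 50), 20.0)      # distance from datum to the sea floor

# z(n, k, j, i) = eta(n, j, i) + sigma(k) * (depth(j, i) + eta(n, j, i))
z = eta[:, np.newaxis, :, :] + sigma[np.newaxis, :, np.newaxis, np.newaxis] * (
    depth[np.newaxis, np.newaxis, :, :] + eta[:, np.newaxis, :, :]
)
print(z.shape)  # (24, 10, 50, 50): time, layer, y, x
```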
3.4. Multidimensional data
One of the noticeable aspects of model results is that they have more than just the regular three or four spatio-temporal dimensions. For example, to properly model suspended sediment in water, the sediment concentration is stored per time, vertical layer, x, y and per bin of grain size. Other common extra dimensions are spectral (wave length) and direction bins. Viewed from a relational database perspective these are extra properties that result in more rows; viewed from a multidimensional data perspective these properties generate an extra dimension. It is often relevant to be able to query data by index along these extra dimensions. For example, we might be interested in the transport of the finest sediments because they might contain more pollution, or in the long waves from the north because they result in a high surge at the southern coast. Being able to query data across the different dimensions is thus one of the challenges.
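As an illustration of such an index-based query along an extra dimension (the dataset, variable name and bin layout below are hypothetical), selecting the finest grain-size fraction is a plain slicing operation:

```python
from netCDF4 import Dataset

# Hypothetical dataset with dimensions (time, layer, y, x, grainsize).
ds = Dataset("http://example.org/opendap/sediment.nc")
conc = ds.variables["sediment_concentration"]

# Assuming bin 0 holds the finest fraction: one index selection along the
# extra dimension returns a regular 4-D (time, layer, y, x) array.
finest = conc[:, :, :, :, 0]
```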
3.5. Physical quantities
Models in a geophysical context often produce output variables that correspond to physical quantities. For these quantities it is important that at least the following two aspects are provided with the results: the unit of measurement and the measured variable, preferably according to a controlled vocabulary. For example, in sea waves a distinction is made between different types of waves depending on their wave period. These longer and shorter wave energies are often modeled and output separately. It is important that the physical quantities and other metadata are propagated throughout the whole chain of data provision.
3.6. Indices, queries and interpolation
One of the stepping stones to be taken when turning model outputs into consumable data is to go from the integer index space that is usually present in model output to coordinate space. Inside numerical models the world usually consists of index space. The index-based approach is familiar to people who program with arrays or know linear algebra, such as scientists; the rest of the possible users are more familiar with coordinate space. People from a scientific background can therefore easily work with the index-based OPeNDAP protocol; in fact, OPeNDAP was conceived by people from this background.
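A minimal sketch of this translation (the URL and variable names are hypothetical): the small coordinate variables are fetched first, the bounding box is converted to index ranges locally, and only then is the data itself requested.

```python
import numpy as np
from netCDF4 import Dataset

# Hypothetical OPeNDAP dataset with one-dimensional lat/lon coordinates.
ds = Dataset("http://example.org/opendap/model_output.nc")
lat = ds.variables["lat"][:]   # coordinate arrays are small and cheap to fetch
lon = ds.variables["lon"][:]

# Translate a spatial bounding box into integer index ranges.
j = np.flatnonzero((lat >= 51.0) & (lat <= 54.0))
i = np.flatnonzero((lon >= 3.0) & (lon <= 7.0))

# Query again by index; the server subsets the data before transfer.
water_level = ds.variables["water_level"][0, j.min():j.max() + 1,
                                          i.min():i.max() + 1]
```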
4. Questions

Using several use cases we compare the different techniques for suitability for analysts, government agencies and scientists. In addition we look at performance and usability. Our goal here is to answer the following questions. What is the best way to compare the different methods for making numerical model results available through the internet? How can the two methods best be improved when considering the goal of making numerical model results available?
5. Methods

To assess the suitability of the WCS and OPeNDAP services for the provision of numerical model results, we carry out some simple tests in addition to comparing features, implementations and specifications. We test usability to see whether the current implementations of the services are easy enough to be used by untrained end users. We run some simple performance tests to get a rough estimate of a good balance between client-side effort and server-side effort.
5.1. Performance measurements
To roughly assess the performance of the two different services we used a dedicated machine with the following services installed:
- THREDDS 4.2 as an OPeNDAP server
- GeoServer 2.1 RC5 as a WCS server
The same file, a 720 by 360 grid containing unspecified information about precipitation, was used for both tests. For the WCS test the original ASCII grid format was used; for the THREDDS server the file was converted to a NetCDF file. For both services the whole dataset was extracted, so no interpolation should be required. For the WCS server the interpolation was set to nearest neighbor, the least computationally intensive option. A test plan was created using the Apache JMeter software. Tests were done on the loopback network interface to avoid measuring network congestion. Before each test the Tomcat server was restarted and given time to properly start and load applications; this was done to avoid the applications caching requests. The timings were checked with a script using curl. Tests were carried out on a MacBook Pro with 3 GB of memory and an Intel Core 2 Duo 2.4 GHz processor, running Mac OS X Snow Leopard. For the servers, Java memory was increased to 1 GB.
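The timing check we ran with curl can be expressed along the following lines. This is a sketch, assuming the Python requests library; the request URLs shown are placeholders for the actual test requests described above.

```python
import time
import statistics
import requests

# Placeholder URLs; the actual tests requested the full 720 x 360 grid.
URLS = {
    "wcs": "http://localhost:8080/geoserver/wcs?service=WCS&version=1.0.0"
           "&request=GetCoverage&coverage=precip&format=GeoTIFF"
           "&crs=EPSG:4326&bbox=-180,-90,180,90&width=720&height=360",
    "opendap": "http://localhost:8080/thredds/dodsC/precip.nc.dods?precip",
}

for name, url in URLS.items():
    timings = []
    for _ in range(10):
        start = time.perf_counter()
        response = requests.get(url)
        response.raise_for_status()
        timings.append(time.perf_counter() - start)
    print(f"{name}: {statistics.mean(timings):.2f} "
          f"+- {statistics.stdev(timings):.2f} s")
```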
5.2. Usability test
To test the usability of both standards we defined two relatively simple assignments. We asked five engineering MSc students with medium-level scripting skills to get data from both the OPeNDAP service and the WCS service. Since our main goal is to make the data generated by numerical models available to others, we test how well potential end users are able to work with existing services. The group of five students and interns is based on the following selection and exclusion criteria:
- Has between 6 months and 2 years of experience in at least one of the languages Matlab, Python or Ruby
- Has not created a script or run a program before to extract data from an OPeNDAP or a WCS server
Two datasets were provided for testing: a dataset of time-varying cloud cover (MSGCPP) and a dataset of surface altitudes (AHN). We used measured data products instead of model data because these datasets were readily available, easy to understand and structurally the same as model results; in fact, quite complex computer codes were executed to produce these data products. For both datasets both a WCS URL and an OPeNDAP URL were provided to the subjects, together with instructions. The order in which the students used the two services was varied to correct for learning effects.
6. Results

6.1. Time and sparsity
One of the aspects that is only partially covered in both specifications is support for sparse data. The CF convention describes the compression-by-gathering strategy to store sparse arrays, and compression for variables is available. But data types like sparse and masked arrays or tree-based structures, which are useful for exchanging some model datasets, are not available. It is possible to define data on specific points in the coverage specification, but implementations expect grids to be fully filled. The aspect of forecast times can be solved quite flexibly in the OPeNDAP approach: there can be more than one time variable per computation, allowing selection on both the valid time and the forecast reference time.
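A sketch of such a selection (the dataset layout, variable and coordinate names are assumptions): with one coordinate for the forecast reference time and one for the valid time, picking the latest run and its first day of forecasts is two index operations.

```python
import numpy as np
from netCDF4 import Dataset

# Hypothetical forecast archive with two time coordinates.
ds = Dataset("http://example.org/opendap/forecast_runs.nc")
ref_time = ds.variables["forecast_reference_time"][:]  # one value per run
valid_time = ds.variables["time"][:]                   # lead times within a run

# Select the most recent run, then the first 24 forecast steps of that run.
run = int(np.argmax(ref_time))
water_level = ds.variables["water_level"][run, :24, :, :]
```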
6.2. Non-rectilinear grids
Although the specifications of both OPeNDAP and WCS allow for exposing model results based on complex geometries, practical implementations are currently limited. The OPeNDAP data model allows for many types of applications, but without a standard or convention it is unlikely that complex geometries can be visualized. The gridspec of Balaji (2007) and the proposal of Jagers and van Dam (2010) provide good directions for implementations, as do the different unstructured models that already provide NetCDF output, such as ADCIRC and FVCOM. The different approaches are actively discussed in the ugrid group. The next step is to extend visualization tools such as VTK with support for unstructured grid data structures.
Figure 4: Example of non-rectilinear grid structure with values on nodes, faces and edges.

6.3. Multidimensional data
The extension of data structures to n dimensions is properly implemented in the OPeNDAP protocol. Client implementations, especially those based on languages that do not support n-dimensional arrays, sometimes lack support for arbitrary n, but with up to 6 dimensions we have not run into any problems. The WCS abstract specification limits the number of dimensions to four, which is not enough for some geophysical applications. The WCS specification does support the concept of dimensions, but it comes with limitations: for example, only one variable can be selected, the spatial grid has to be rectilinear, and there are a few other drawbacks.
6.4. Physical quantities
Both services provide the ability to work with units. In the OPeNDAP approach the units are based on the UDUNITS package; in WCS, units are available for an "Axis" through the ValuesUnits data structure. It is important that implementations retain the units from the model result files; otherwise these have to be configured manually. The retention of metadata is a missing feature in most WCS implementations.
6.5. Indices, queries and interpolation
In numerical models the variables are all read and written by index. This makes the matrix or array style approach that is used to query OPeNDAP data very familiar to scientists, who often work script-based in, for example, R, Python and Matlab. A typical approach to a spatial query on an OPeNDAP dataset is to start by getting the spatial variables: latitude, longitude and possibly a vertical coordinate. Based on these variables a spatial extent of interest is determined locally. With this extent the server is queried again, this time for the relevant variables. It is also possible to do the queries on the server side with OPeNDAP, but this is not a common feature of OPeNDAP implementations. One advantage is that caching of the search index can be done on the client. This allows for custom indexing strategies and subsetting, which is often helpful for quickly visualizing a dataset. For example, the first and last few columns are sometimes left out of the analysis because they can contain boundary effects that are not representative. The index-based approach is not the query type that is most common for map-based applications. In a map there are generally two main query parameters: the bounding box and the resolution. If we look at the query parameters of a WCS request URL, we see that these are also the obligatory parameters for a WCS GetCoverage query:

http://geoservices.knmi.nl/cgi-bin/NMDC_TEST_OPENDAP.cgi?&service=wcs&version=1.0.0&request=getcoverage&coverage=OpenEarth/OPeNDAP/tno/ahn100m/mv100&format=NetCDF4&crs=EPSG:28992&bbox=0,300000,280000,625000&resx=100&resy=100
The part after the question mark represents the (key-value pair) query parameters. One thing to note is that the response file is stored locally before it is read. This can be avoided by using the XML response, but we have not seen that in common use.
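The same GetCoverage request can be issued from a script. This is a sketch, assuming the Python requests library; it rebuilds the query above as key-value pairs and stores the returned NetCDF file, matching the store-then-read pattern noted above.

```python
import requests

# The GetCoverage request from above, rebuilt as key-value pairs.
params = {
    "service": "wcs",
    "version": "1.0.0",
    "request": "getcoverage",
    "coverage": "OpenEarth/OPeNDAP/tno/ahn100m/mv100",
    "format": "NetCDF4",
    "crs": "EPSG:28992",
    "bbox": "0,300000,280000,625000",
    "resx": "100",
    "resy": "100",
}
response = requests.get(
    "http://geoservices.knmi.nl/cgi-bin/NMDC_TEST_OPENDAP.cgi", params=params
)
response.raise_for_status()

# The coverage arrives as a NetCDF file that is stored before reading.
with open("ahn_subset.nc", "wb") as f:
    f.write(response.content)
```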
6.6. Feature summary
As an overview of the features discussed so far we present Table 1. This table shows that the main differences between the services are the default way of querying, the limits in dimensionality and the default response type.
Table 1: Feature overview for WCS and OPeNDAP

| Feature | OPeNDAP | WCS |
| --- | --- | --- |
| Querying | by index | by coordinates |
| Reprojection | No | reprojection on request |
| Multiple projections | One dataset can contain multiple projections | Projections on the fly |
| Interpolation | No | Yes |
| Dimensionality | n | 4 (x, y, z, t) |
| Units | Yes | Yes |
| Metadata | CF convention | OWS Common |
| Unstructured grids | Possible but not standardized | Standardized but not possible |
| Response type | Arrays + attributes (dods) | XML + file |
| Multiple file formats | Yes (optional, depending on implementation) | Yes |

6.7. Performance
When optimizing the performance of a service that exposes data over the internet, one needs to balance data size, response time, memory use (caching) and processor time. To give a very rough estimate of the difference in performance between the two types of services, we ran the simple performance benchmark described in the methods section. The requests we tested on the GeoServer WCS service resulted in a response time of 2.4 ± 0.6 seconds. The THREDDS OPeNDAP server yielded a response time of 0.15 ± 0.2 seconds for the same dataset. Clearly, for this specific case the THREDDS OPeNDAP service gives a much better response time than the reference WCS implementation. For many purposes it is relatively easy to improve performance with appropriate caching strategies. The throughput of both services, once data transfer started, was the same. We have not looked in detail at the causes and generalizability of this difference in performance, but the extra work that the WCS server has to do, in reading (in this case an ASCII grid versus a binary file), interpolation (although not required), possibly reprojection, and generating a new output file, are likely among the causes.
6.8. Usability
To test whether the services are "usable" enough for possible users to be able to extract data from them, we conducted a small usability experiment, as described in the methods section. We found that all subjects had trouble extracting the data. Our first approach was to give the subjects only the URL of the dataset and the service name. This resulted in an annoyed subject, who spent over an hour looking for proper libraries and clients to be able to extract the data. We redefined our experiment to be a bit more helpful: we pointed the subjects to the libraries and were available to assist on request. Most subjects made use of this by asking questions about several problems they ran into. The first problem was with the general concept of a web service. The users had a strong association between the URL of the dataset and the browser: they put the URL in a web browser, where we expected them to put the URL in their favorite scripting language. When the web page that users find when accessing the URL with a browser (i.e. without request parameters) provides a short usage guide, the likelihood of a user being able to get results from the dataset increases significantly. Good examples of this are the WCS coverage builders in ADAGUC and GeoServer, which provide step-by-step URL building, and the selection tool in Pydap, which provides examples for different scripting languages (see Figure 5).
Figure 5: Example of the web interface to WCS builder from ADAGUC (left) and Pydap (right)
Another problem was that the subjects got different answers from the different methods. For the assignment to find altitude information, for example, the subjects all found different answers. For the WCS the subjects used a visualization to get a height, but the height was based on a too coarse grid (high resx), so the value that was extracted was the average over a wider area than necessary. For the OPeNDAP service the subjects got wrong answers or no answers at all: the spatial querying by hand (using Matlab) proved too tough a task. For the mean thickness of liquid water, the subjects did find consistent answers, but these answers were consistently incorrect. The confrontation with new tools and the pressure to give an answer were a good basis for wrong answers. Most subjects spent over 10 minutes per subtask. Most subjects preferred the WCS service; this was mainly due to the ability of the KNMI ADAGUC system to do the spatial queries. They also found the forms that the THREDDS server provides by manually changing the URLs (dodsC to catalog), but they were not able to use these forms to extract the relevant data.
7. Conclusions

One size does not fit all when it comes to providing model results through the web. The different user groups have different expectations and skill levels when it comes to consuming the results of numerical models. WCS and OPeNDAP operate in a narrow zone of functional requirements, yet this zone is wide enough to warrant using OPeNDAP next to WCS. WCS needs a base layer for data storage; this could well be OPeNDAP, as for example in ADAGUC, where WCS acts as a thin processing service on top of OPeNDAP. WCS is a layer that offers more, but it should not be used to hide an underlying OPeNDAP layer. It is interesting to see how the two different approaches, implementation before specification for OPeNDAP and specification before implementation for WCS, lead to different types of solutions to similar problems. One of the issues resulting from this difference in approach is that although the WCS specification is suitable for most of our purposes, the implementations are lagging behind in some aspects. For instance, we do not know of any WCS implementation that can handle curvilinear grids and unstructured grids. In contrast, OPeNDAP can handle curvilinear grids, although the CF convention does not yet cover this. OPeNDAP can handle them because grid geometry is outside the functional realm of OPeNDAP. The fact that users can easily extend OPeNDAP with their own conventions benefits OPeNDAP use. However, it limits convergence of ideas to agreed-upon conventions like CF when there is no active community driving the convergence.
WCS is an excellent protocol for analysts and government agencies that want to download part of a large dataset occasionally, especially if they do not want to get into the details of a particular dataset. In contrast, when a dataset is accessed often and the user becomes an expert on it, the rationale of WCS is less efficient than OPeNDAP. For a dataset with which a user is familiar, he can cache the indices required to access a specific part of the dataset, or he can automate the calculation of the indices. When indices can be reused, OPeNDAP is a more efficient protocol than WCS. OPeNDAP allows a kind of "client state" on a dataset; for WCS the maximum possible "client state" is the bounding box. The main advantage of using a WCS service is the binning/gridding to rectangular grids in a different coordinate system. Scientists who are interested in the "exact data" can bypass the WCS and use OPeNDAP directly. Because spatial and temporal queries, reprojection and interpolation are such common operations, having them implemented on the server side can be convenient. The interoperability between the two services is an important aspect: we hope to see NetCDF become one of the obligatory return formats to bridge the gap between WCS and OPeNDAP. Our indicative performance test showed that there is room for performance improvement in the WCS service we tested. For performance, the layer-on-layer approach might not always be optimal: having custom WMS services directly on an OPeNDAP service or directly on the raw data can still serve a purpose. But for openness, having access to the lower layer in a convenient way is preferable. Our usability test showed that there are a lot of usability issues with providing access to the services without guidance. Some of these issues can be solved easily, but one should be aware that providing datasets without proper guidance or training is likely to result in incorrect analyses. Even users with some experience in programming need quite a bit of help to get started. Writing better tutorials, examples and blog posts about using the services should help starting service users a lot.
References

R. Bellman. Dynamic programming. Dover Publications, Mineola, New York, 2003.
J. D. Blower, F. Blanc, M. Clancy, P. Cornillon, C. Donlon, P. Hacker, K. Haines, S. C. Hankin, T. Loubrieu, S. Pouliquen, M. Price, T. F. Pugh, and A. Srinivasan. Serving GODAE data and products to the ocean community. Oceanography, 22(3):70–79, September 2009.
E. P. Chassignet, H. E. Hurlburt, O. M. Smedstad, G. R. Halliwell, A. J. Wallcraft, E. J. Metzger, B. O. Blanton, C. Lozano, D. B. Rao, P. J. Hogan, and A. Srinivasan. Generalized vertical coordinates for eddy-resolving global and coastal ocean forecasts. Oceanography, 19(1):119–129, 2006.
P. Cornillon, J. Gallagher, and T. Sgouros. OPeNDAP: Accessing data in a distributed, heterogeneous environment. Data Science Journal, 2(0):164–174, 2003.
B. Domenico and S. Nativi. Web coverage service (WCS) 1.1 extension for CF-NetCDF 3.0 encoding. Discussion Paper 09-018, Open Geospatial Consortium, 2009.
S. Hankin, J. Blower, T. Carval, K. Casey, C. Donlon, O. Lauret, T. Loubrieu, A. Srinivasan, J. Trinanes, O. Godoy, et al. NetCDF-CF-OPeNDAP: Standards for ocean data interoperability and object lessons for community data standards processes. In OceanObs 2009, Venice, September 2010.
H. Jagers and A. van Dam. Deltares CF proposal for unstructured grid data model, July 2011.
M. van Koningsveld, G. J. de Boer, F. Baart, T. Damsma, C. den Heijer, P. van Geer, and B. de Sonneville. OpenEarth - Inter-Company Management of: Data, Models, Tools & Knowledge. In Proceedings WODCON XIX Conference, Beijing, China, 2010.
G. Langran. Temporal GIS design tradeoffs. Journal of the Urban and Regional Information Systems Association, 2(2):16, 1990.
S. Nativi, J. Caron, B. Domenico, and L. Bigagli. Unidata's common data model mapping to the ISO 19123 data model. Earth Science Informatics, 1(2):59–78, September 2008.
A. Whiteside and J. D. Evans. Web coverage service (WCS) implementation standard. Implementation Standard 07-067r5, Open Geospatial Consortium, March 2008.
Ugrid discussion group, July 2011.
Acknowledgements

The research has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement 202798 and the Cornelis Lely Foundation.