JOINT WMO TECHNICAL PROGRESS REPORT ON THE GLOBAL DATA PROCESSING AND
FORECASTING SYSTEM AND NUMERICAL WEATHER PREDICTION RESEARCH ACTIVITIES FOR
2010
Country: Germany Centre: NMC Offenbach
1. Summary of highlights
The operational deterministic modelling suite of DWD consists of three
models, namely the global icosahedral-hexagonal grid point model GME (grid
spacing 30 km, i.e. 655362 grid points/layer, 60 layers), the
non-hydrostatic regional model COSMO-EU (COSMO model Europe, grid spacing
7 km, 665x657 grid points/layer, 40 layers), and finally the
convection-resolving model COSMO-DE, covering Germany and its surroundings
with a grid spacing of 2.8 km, 421x461 grid points/layer and 50 layers.
(A consistency check of the GME grid-point count is sketched after the
list of GME changes below.)

The COSMO model (http://cosmo-model.cscs.ch/) is used operationally at the
national meteorological services of Germany, Greece, Italy, Poland,
Romania, Russia and Switzerland, and at the regional meteorological service
in Bologna (Italy). The military weather service of Germany operates a
relocatable version of the COSMO model for worldwide applications. Three
national meteorological services, namely INMET (Brazil), DGMAN (Oman) and
NCMS (United Arab Emirates) use the COSMO model in the framework of an
operational licence agreement. The high-resolution regional model HRM
(http://www.met.gov.om/hrm/index.html) of DWD is being used as operational
model with a grid spacing between 7 and 25 km and 40 to 60 layers at 26
national/regional meteorological services, namely Armenia, Bosnia-
Herzegovina, Botswana, Brazil-INMET, Brazil-Navy, Bulgaria, Georgia,
Indonesia, Iran, Israel, Italy, Jordan, Kenya, Libya, Madagascar, Malaysia,
Mozambique, Nigeria, Oman, Pakistan, Philippines, Romania, Spain, Tanzania,
United Arab Emirates and Vietnam. For lateral boundary conditions, GME data
are sent via the internet to the HRM and COSMO model users up to four times
per day.

A new probabilistic ensemble prediction system on the convective scale,
called COSMO-DE-EPS, has been in a pre-operational trial with 20 EPS
members since 9 December 2010. It is based on COSMO-DE with a grid spacing
of 2.8 km, 421x461 grid points/layer and 50 layers. Four global models,
namely GME (DWD), IFS (ECMWF), GFS (NOAA-NCEP) and GSM (JMA), provide
lateral boundary conditions to intermediate 7-km COSMO models, which in
turn provide lateral boundary conditions to COSMO-DE-EPS. To sample the
PDF and estimate forecast uncertainty, variations of the initial state and
the physical parameterizations are used to generate additional EPS members
(a schematic enumeration of the member configurations is sketched after
the list of model changes below). The forecast range of COSMO-DE-EPS is
21 h, with new forecasts every three hours.

The main improvements of DWD's modelling suite included:

For GME:
27/04/2010: Additional data used in the 3D-Var data assimilation: AMVs
from GOES-13; aircraft data over Japan, China and South Korea; additional
buoy data in the Mediterranean Sea and the Southern Hemisphere.
08/09/2010: Speed-up of the operational production schedule; 174-h
forecasts of GME are now ready at 03:30 UTC for 00 UTC and at 15:30 UTC
for 12 UTC.
20/10/2010: Use of observations (stratospheric balloons and drop sondes)
taken near Antarctica during the international field experiment
CONCORDIASI (Sept. 2010 - March 2011).
02/02/2011: Increase of the maximum solar albedo in forest-free regions
from 0.70 to 0.85. This results in a pronounced reduction of the positive
temperature bias in the lower troposphere during the southern summer in
Antarctica.
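
As a consistency check of the GME figures quoted in the summary (655362
grid points per layer at a grid spacing of about 30 km): for an
icosahedral-hexagonal grid the number of grid points per layer is
N = 10*ni^2 + 2. The short Python sketch below is only an illustration,
not part of the operational codes; the resolution parameter ni = 256 and
the mean Earth radius are assumed values used for this check.

    import math

    ni = 256                   # assumed GME resolution parameter
    earth_radius_km = 6371.0   # assumed mean Earth radius

    # Grid points per layer of the icosahedral-hexagonal grid.
    n_points = 10 * ni**2 + 2

    # Mean mesh size: sphere surface area shared equally among grid points.
    mean_spacing_km = math.sqrt(4.0 * math.pi * earth_radius_km**2 / n_points)

    print(n_points)                    # 655362, as quoted in the summary
    print(round(mean_spacing_km, 1))   # about 27.9 km, consistent with ~30 km
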
For COSMO-EU:
02/02/2010: Specific rain and snow water contents from GME serve as
lateral boundary conditions in COSMO-EU.
29/06/2010: New dynamical core: Runge-Kutta time stepping and a
higher-order horizontal advection scheme (3rd-order upwind). For all
moisture variables, a fully 3D semi-Lagrangian advection scheme is used
(a minimal 1D illustration of the Runge-Kutta/upwind combination is
sketched after this list).
08/09/2010: Speed-up of the operational production schedule; 78-h
forecasts of COSMO-EU are now ready at 03:05 UTC for 00 UTC and at
15:05 UTC for 12 UTC.
15/12/2010: Introduction of FLake (http://www.flake.igb-berlin.de/index.shtml),
a lake parameterization scheme which allows freezing and melting of inland
lakes. FLake is being implemented in several European NWP models, e.g. IFS
(ECMWF) and AROME (Météo France).
02/02/2011: Introduction of a sea ice model to provide a better simulation
of the temperature at the interface between the ice surface and the
atmosphere. The sea ice fraction is analyzed once a day based on satellite
data.
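
The 29/06/2010 change above combines Runge-Kutta time stepping with a
third-order upwind horizontal advection scheme. The following 1D sketch is
a generic textbook illustration of this type of scheme, not the COSMO-EU
dynamical core; the grid, wind speed and time step are arbitrary
demonstration values.

    import numpy as np

    nx, dx, dt, c = 200, 1.0, 0.4, 1.0   # grid points, spacing, time step, wind speed (demo values)

    def ddx_upwind3(q):
        """Third-order upwind-biased approximation of dq/dx for c > 0 (periodic)."""
        return (2.0 * np.roll(q, -1) + 3.0 * q
                - 6.0 * np.roll(q, 1) + np.roll(q, 2)) / (6.0 * dx)

    def tendency(q):
        """Advection tendency -c * dq/dx."""
        return -c * ddx_upwind3(q)

    def rk3_step(q):
        """Three-stage Runge-Kutta step of the kind used in split-explicit NWP cores."""
        q1 = q + dt / 3.0 * tendency(q)
        q2 = q + dt / 2.0 * tendency(q1)
        return q + dt * tendency(q2)

    x = np.arange(nx) * dx
    q = np.exp(-((x - 50.0) / 10.0) ** 2)     # smooth initial bump
    for _ in range(int(nx * dx / (c * dt))):  # advect once around the periodic domain
        q = rk3_step(q)
    print(round(float(q.max()), 3))           # stays close to 1.0 (only weak damping)
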
For COSMO-DE:
31/03/2010: Low-elevation precipitation scans of 16 additional radar sites
in France, Belgium, the Netherlands and Switzerland are included during
the latent heat nudging data assimilation steps.
08/09/2010: Speed-up of the operational production schedule; 21-h
forecasts of COSMO-DE are now ready 55 minutes past 00, 03, 06, ..., 18,
21 UTC.
02/02/2011: New bias correction for Vaisala RS92 sounding humidity data.
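
As noted in the summary, the 20 COSMO-DE-EPS members combine the four
driving global models with variations of the initial state and the
physical parameterizations. The exact grouping of these variations is not
detailed in this report; the sketch below is a purely hypothetical
bookkeeping, assuming five variants per driving model, and only
illustrates how 4 x 5 = 20 member configurations could be enumerated.

    from itertools import product

    driving_models = ["GME", "IFS", "GFS", "GSM"]   # the four global models named above
    variants = ["v1", "v2", "v3", "v4", "v5"]       # hypothetical initial-state/physics variants

    members = [
        {"member": i + 1, "driving_model": model, "variant": variant}
        for i, (model, variant) in enumerate(product(driving_models, variants))
    ]

    assert len(members) == 20   # matches the 20 EPS members quoted in the summary
    print(members[0])           # {'member': 1, 'driving_model': 'GME', 'variant': 'v1'}
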
2. Equipment in use
2.1 Main computers
2.1.1 Two identical NEC SX-8R Clusters
Each cluster:
Operating System NEC Super-UX 17.1
7 NEC SX-8R nodes (8 processors per node, 2.2 GHz, 35.2 GFlops/s peak
processor performance, 281.6 GFlops/s peak node performance)
1.97 TFlops/s peak system performance
64 GiB physical memory per node, complete system 448 GiB physical memory
NEC Internode crossbar switch IXS (bandwidth 16 GiB/s bidirectional)
FC SAN attached global disk space (NEC GFS), see 2.1.4
Both NEC SX-8R clusters are used for climate modelling and research.
2.1.2 Two NEC SX-9 Clusters
Each cluster:
Operating System NEC Super-UX 18.1
14 NEC SX-9 nodes (16 processors per node, 3.2 GHz, 102.4 GFlops/s peak
processor performance, 1638.4 GFlops/s peak node performance)
22.93 TFlops/s peak system performance
512 GiB physical memory per node, complete system 7 TiB physical memory
NEC Internode crossbar switch IXS (bandwidth 128 GiB/s bidirectional)
FC SAN attached global disk space (NEC GFS), see 2.1.4
One NEC SX-9 cluster is used to run the operational weather forecasts;
the second one serves as research and development system.
2.1.3 Two SUN X4600 Clusters
Each cluster:
Operating System SuSE Linux SLES 10
15 SUN X4600 nodes (8 AMD Opteron quad-core CPUs per node, 2.3 GHz,
36.8 GFlops/s peak processor performance, 294.4 GFlops/s peak node
performance)
4.4 TFlops/s peak system performance
128 GiB physical memory per node, complete system 1.875 TiB physical memory
Voltaire Infiniband Interconnect for multinode applications (bandwidth
10 GBit/s bidirectional)
Network connectivity 10 Gbit Ethernet
FC SAN attached global disk space (NEC GFS), see 2.1.4
One SUN X4600 cluster is used to run operational tasks (pre-/post-processing,
special product applications), the other one research and development tasks.
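
The peak-performance and memory figures given in 2.1.1 to 2.1.3 are simple
products of the per-processor performance, the number of processors per
node and the number of nodes. The short sketch below only re-derives the
quoted numbers from the values listed above; no additional hardware data
are assumed.

    clusters = {
        # name: (nodes, processors per node, GFlops/s per processor, GiB memory per node)
        "NEC SX-8R": (7, 8, 35.2, 64),
        "NEC SX-9": (14, 16, 102.4, 512),
        "SUN X4600": (15, 8, 36.8, 128),   # 8 quad-core Opteron CPUs, 36.8 GFlops/s per CPU
    }

    for name, (nodes, procs, gflops_per_proc, mem_per_node) in clusters.items():
        node_peak = procs * gflops_per_proc        # GFlops/s per node
        system_peak = nodes * node_peak / 1000.0   # TFlops/s per cluster
        total_mem = nodes * mem_per_node           # GiB per cluster
        print(f"{name}: {node_peak:.1f} GFlops/s per node, "
              f"{system_peak:.2f} TFlops/s system, {total_mem} GiB memory")

For the SX-9 the product gives 22.94 TFlops/s; the 22.93 TFlops/s quoted
above is the same figure up to rounding.
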
2.1.4 NEC Global Disk Space
Three storage clusters: 16 TiB + 80 TiB + 160 TiB
SAN based on 4 GBit/s FC-AL technology
4 GiB/s sustained aggregate performance
Software: NEC global filesystem GFS-II
Hardware components: NEC NV7300G High redundancy metadata server,
NEC Storage D3-10
The three storage clusters are accessible from the systems in 2.1.1,
2.1.2 and 2.1.3.
2.1.5 Three SGI Altix 4700 systems
SGI Altix 4700 systems are used as data handling systems for
meteorological data.
Redundancy Cluster SGI_1, consisting of 2 SGI Altix 4700 for operational
tasks and research/development, each with:
Operating System SuSE Linux SLES 10
92 Intel Itanium dual core processors 1.6 GHz
1104 GiB physical memory
Network connectivity 10 Gbit Ethernet
290 TiB (SATA) and 30 TiB (SAS) disk space on redundancy cluster SGI_1
for meteorological data
Backup System SGI_B: one SGI Altix 4700 for operational tasks with
Operating System SuSE Linux SLES 10
24 Intel Itanium dual core processors 1.6 GHz
288 GiB physical memory
Network connectivity 10 Gbit Ethernet
70 TiB (SATA) and 10 TiB (SAS) disk space for meteorological data
2.1.6 Sun Fire 4900 Server
Operating System Solaris 9
2 Sun Fire 4900 Server (8 dual core processors, 1.2 GHz)
32 GB of physical memory
40 TB of disk space for SAM-QFS filesystems
50 Archives (currently 2600 TB)
connected to 2 Storage-Tek Tape Libraries via SAN
This failover cluster is used for HSM-based archiving of meteorological
data and forecasts.
2.1.7 Storage-Tek SL8500 Tape Library
Attached are 36 Sun STK FC tape drives:
16 x T10000A (500 GB, 120 MB/s)
20 x T10000B (1000 GB, 120 MB/s)
2.2 Networks
The main computers are interconnected via Gigabit Ethernet (Etherchannel)
and connected to the LAN via Fast Ethernet.
2.3 Special systems
2.3.1 RTH Offenbach Telecommunication systems
The Message Switching System (MSS) in Offenbach acts as RTH on the MTN
within the WMO GTS. It is called Meteorological Telecommunications System
Offenbach (MTSO) and is based on a high-availability Primecluster with two
Primergy RX300 S3 computers (Fujitsu Siemens Computers) running Novell
Linux SLES10 SP3 system software and Primecluster cluster software.
The MSS software is a commercial software package (MovingWeather by
IBLsoft). Applications communicate in real time via the GTS (RMDCN and
leased lines), national and international PTT networks and the Internet
with WMO partners and global customers such as EUMETSAT, ECMWF and DFS.
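
Bulletins exchanged over the GTS are identified by the WMO abbreviated
heading of the form TTAAii CCCC YYGGgg, optionally followed by a BBB
indicator (WMO-No. 386). The sketch below is not part of the
MTSO/MovingWeather software described above; it is only a generic
illustration of how such a heading line can be decomposed, using a made-up
example bulletin originated by Offenbach (EDZW).

    import re

    # WMO abbreviated heading: TTAAii CCCC YYGGgg [BBB]
    HEADING_RE = re.compile(
        r"^(?P<TT>[A-Z]{2})(?P<AA>[A-Z]{2})(?P<ii>\d{2})\s+"   # data type, area, bulletin number
        r"(?P<CCCC>[A-Z]{4})\s+"                               # originating centre
        r"(?P<day>\d{2})(?P<hour>\d{2})(?P<minute>\d{2})"      # day of month and time (UTC)
        r"(?:\s+(?P<BBB>[A-Z]{3}))?$"                          # optional correction/amendment indicator
    )

    def parse_heading(line):
        """Return the heading fields as a dict, or None if the line does not match."""
        match = HEADING_RE.match(line.strip())
        return match.groupdict() if match else None

    # Made-up example: a bulletin from Offenbach (EDZW), day 12, 12:00 UTC.
    print(parse_heading("SMDL01 EDZW 121200"))
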
2.3.2 Other Dat