EumetCast reception (20170913) updated
An intervention (an extension of the system) will be performed on our EumetCast reception system around 08:00 UTC.
During this time, no data will be received. Downtime should be relatively short, although no precise estimate can be given yet.
The intervention lasted about one hour and allowed the successful installation of a new modem for receiving the second transponder. The presence of a splitter on the single cable arriving from the LNB reduced the signal level on transponder 1 by about 4 dB (from -41 dBm to -45 dBm), which corresponds to a new power level at about 40% of the preceding one (10 points below the 50/50 of an ideal splitter). More analysis on transponder 2 later next week.
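The figures quoted above are consistent; a quick sketch of the standard dB-to-linear conversion (the levels are those reported in the entry):

```python
# A 4 dB drop (-41 dBm -> -45 dBm) corresponds to a power ratio of
# 10**(-4/10) ~ 0.40, i.e. about 40% of the original power
# (an ideal 2-way splitter would keep 50%).
def db_to_power_ratio(delta_db):
    """Convert a level difference in dB to a linear power ratio."""
    return 10 ** (delta_db / 10)

ratio = db_to_power_ratio(-45 - (-41))  # 4 dB loss
print(f"remaining power: {ratio:.0%}")  # -> remaining power: 40%
```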
SO2 Alerts (20170610)
A recent update introduced a bug in the mail alerting system, leading to an incorrect link to the alert picture.
This should be corrected for future alerts.
All HDDs have been upgraded to better disks. This took about two months!
The system now seems stable, and data are being transferred from other storage units for the sake of better unification and readiness for the long-awaited L1/L2 reprocessing.
All data are now available again. Please report any suspicious behaviour or file.
Power Outages (20170425)
Two micro-outages occurred last night. All systems are down until further notice.
Brescia (20170303) -- SOLVED (20170304)
Since Feb 22, Brescia has been crashing. The patch applied to use the corrected version of the TWT files introduced a severe "protection fault" issue.
So far no solution has been found.
The system is still unavailable, and negotiations are ongoing with Western Digital to possibly replace the 36 drives, which may misbehave with the Synology hardware.
On Feb 16th, Eumetsat changed the encoding of surface pressure without any prior warning. This broke the processing of all products.
The software is now patched accordingly and correct processing is ongoing. Back-processing of the corrupted data will be launched as soon as possible.
Plots (20170216) -- SOLVED
The plot processing queue is broken. For an unknown reason, jobs remain stuck in the queue and must be launched manually.
This may cause the daily and alert plots to be delivered late.
The server was overloaded by dead-loop processes generated by the Eumetsat update of the 16th.
Another recovery for nothing: another crash occurred while storing data on the unit. No L1 or L2 data are available from 2007 to April 2013. Forli/Brescia results are also partly unavailable.
Just after recovery from the incident that occurred on Jan 23, another disk gave up. Interactions with Synology have resumed.
No L1 or L2 data are available from 2007 to April 2013. Forli/Brescia results are also partly unavailable.
Three disks simultaneously disappeared from the controller, leading to a crash of the RAID structure and a total loss of about 32 TB of data.
This means that no L1 or L2 data are available from 2007 to April 2013.
A ticket has been opened with Synology, in the hope of recovering at least part of the lost data. Otherwise, a full download from Ether will have to be performed, which would take around 40 to 50 days.
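The 40 to 50 day figure implies a sustained download rate in the single-digit MB/s range; a rough sketch of that arithmetic (the 32 TB volume is taken from the entry above, the rate is derived, not measured):

```python
# Sustained rate needed to fetch ~32 TB from Ether in the quoted window.
volume_bytes = 32e12  # ~32 TB (decimal terabytes)
for days in (40, 50):
    rate_mbps = volume_bytes / (days * 86400) / 1e6
    print(f"{days} days -> ~{rate_mbps:.1f} MB/s sustained")
```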
Mail server (20161028) Update
Apparently someone deliberately cut the power to the server without any permission.
The server had difficulty restarting after this particularly brutal event. Things now seem to be resuming slowly.
My mail server went down... I will not be able to answer any mail, and no alert services will be available until Thursday, November 3.
We've just received the new data server (116TB!). Set-up is ongoing.
We are facing mechanical problems integrating it into the 19" rack.
The disk has been replaced and the RAID partition is rebuilding. Access should be available tomorrow.
One disk of the RAID is failing. All services are down until a new disk is plugged in.
The timeline service has eventually resumed, in a relatively elementary mode.
Power Outage follow-up (20160704)
HESIONE has been fixed with old spare components. This set-up is not guaranteed to work in the long term.
The PRIAM error came from a misconfigured switch, which lost its configuration at shutdown.
Power Outage results
HESIONE does not boot anymore. This is unfortunately a definitive failure, which means no external services (i.e. rsync and reports) until further notice.
The PRIAM network interface seems damaged. Tests will be performed next week to add a new interface if possible.
Power Outage (20160630-20160701)
There will be a power interruption from June 30th to July 1st. No operation will be available during that time.
Services will stop from 1400 UTC to around 0830 UTC, sorry for any inconvenience.
Processing and data access are down. The problem will be investigated on Monday.
Processing will restart on Tuesday after the last viability checks. Processing restarted on Tuesday at around 10:00 Zulu. Missing data will be reprocessed once normal operation reaches a steady state.
An unexpected maintenance has to be performed on AJAX. The outage should be short.
Operations have now resumed. (14:21Z)
Back Processing (20160106)
Back-processing is now running in a steady regime. The latest estimate of processing speed is 8 platform-days per day.
This means that about 15 months of uninterrupted computation remain from today (provisional end date: March 2017)!
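As a sanity check on the schedule (the backlog length below is an assumption for illustration, not a figure from the entry): at 8 platform-days of data per wall-clock day, roughly ten years of archive indeed take about 15 months.

```python
# Back-of-the-envelope check of the back-processing schedule.
rate = 8                 # platform-days of data processed per wall-clock day
archive_days = 10 * 365  # assumed backlog: ~10 years of data
wallclock_days = archive_days / rate
print(f"~{wallclock_days / 30.4:.0f} months remaining")  # -> ~15 months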
Network servers were successfully updated. Apparently most services are operational. So far only minor problems are visible on some web pages, due to the deprecation of PHP 5 in favor of PHP 7.
V20151001 selection tools have been updated here and are now able to read previous file format.
Forli version 20151001 is now operational. Back-processing has started and should last about 417 days (finishing around Jan 2017)!
New selection tools are available here.
Server outage (20150927)
New server is almost operational.
Processing has restarted, and data are available.
Back-processing will start as soon as possible.
Server outage (20150923)
New server was delivered yesterday.
Installation has started.
Provisional restart date is Thursday Oct 1.
Server outage (20150902)
New server order has been made. Expect delivery delay of about 3 weeks.
Installation should last about one week.
Provisional restart date is Thursday Oct 1.
Server outage (20150824)
Main server is DEAD (hardware failure). No processing, reception, or data will be available until it is replaced.
(20150825) An offer has been requested. The order will be placed ASAP. Expect a delay of 4 to 5 weeks between order and delivery.
Server outage update (20150823)
Main server will be offline for investigations (as well as all other local services) Monday Aug 24th from 08:00 UTC until further notice.
Due to another crash, the date has been advanced.
Server crash (20150820)
Main server crashed again. NFS daemon generates a "general protection fault" leading to a kernel panic.
In order to investigate the problem a maintenance shutdown will be performed next week.
SO2 alerts (20150504) Update
Mailing service has now resumed.
Outage (20150415) Updated
Main server unexpectedly died (the root partition was full).
All processes have now recovered... No apparent loss.
SO2 alerts (20150317) Update
The mailing service has been broken since February (due to a security patch in glibc).
As the current compiler and library combination is unable to compile Brescia successfully, I have no idea when the service can resume.
Plots are still available on the usual webpage.
NPP-CrIS data from 2013 were purged to free space on storage.
New https (20150125)
Changed the server certificates (to more secure ones) and removed SSL to keep only TLS.
New design (20150119)
A new design for the website has been implemented. Don't hesitate to send your comments and any bug reports.
Backprocessing to version 20140922 has resumed.
Migration from DVB-S to DVB-S2 is now complete. All reception parameters seem correct.
Only a few PDUs were definitively lost during the migration.
New BUFR extraction are available here.
This is a preliminary version to be tested. As the new TWTs are on a 110-point grid instead of 90, an interpolation is performed to fit into the amp file structure.
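A minimal sketch of such a regridding, in pure Python with made-up grids and values (the real TWT grids and the amp file layout are not shown here):

```python
def interp(x, xp, fp):
    """Piecewise-linear interpolation of samples (xp, fp) at points x."""
    out = []
    j = 0
    for xv in x:  # x and xp are assumed sorted in ascending order
        while j < len(xp) - 2 and xp[j + 1] < xv:
            j += 1
        t = (xv - xp[j]) / (xp[j + 1] - xp[j])
        out.append(fp[j] + t * (fp[j + 1] - fp[j]))
    return out

fine = [i / 109 for i in range(110)]   # 110-point grid of the new TWTs
coarse = [i / 89 for i in range(90)]   # 90-point grid of the amp files
values_110 = [x * x for x in fine]     # dummy profile on the fine grid
values_90 = interp(coarse, fine, values_110)
print(len(values_90))  # -> 90
```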
New selection tools are available here for FORLI.
A preliminary COX select tool is available here. Its functionalities are almost identical to those of the usual select tools.
Data skimming <sticky>
New plot selection criteria. Based upon a statistical analysis of the residuals, the following recommended values are used to avoid partly cloudy scenes:
|CO|-0.15 / 0.25 x 10^-9|2.7 x 10^-9|
|HNO3|-0.60 / 0.40 x 10^-9|3.0 x 10^-8|
|O3|-0.75 / 1.25 x 10^-9|3.5 x 10^-8|
(* insufficient statistics)
Ozone is the most affected, as the standard flags are normally sufficient for CO and HNO3.
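Reading the two numeric columns as a bias interval and an RMS ceiling on the residual (an assumption; the table above does not label its columns), the screening could be sketched as:

```python
# Hypothetical residual screening: a scene is kept only if its residual
# bias lies inside the per-species interval and its RMS is below the cap.
# The numbers are those of the table above; the column meanings are assumed.
CRITERIA = {
    # species: (bias_min, bias_max, rms_max)
    "CO":   (-0.15e-9, 0.25e-9, 2.7e-9),
    "HNO3": (-0.60e-9, 0.40e-9, 3.0e-8),
    "O3":   (-0.75e-9, 1.25e-9, 3.5e-8),
}

def keep_scene(species, bias, rms):
    lo, hi, rms_max = CRITERIA[species]
    return lo <= bias <= hi and rms <= rms_max

print(keep_scene("O3", 0.5e-9, 1.0e-8))  # -> True
print(keep_scene("CO", 0.5e-9, 1.0e-9))  # -> False (bias above the interval)
```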