[News] WP1 - news Leiden/Groningen
Roeland Rengelink
rengelin@strw.LeidenUniv.nl
Thu, 20 Feb 2003 11:16:18 +0100
Recently completed activities WP1:
o Rewrite of the Python eclipse interface
o Implementation of a preliminary astrometric correction for
WFI data with large offsets.
Current Activities Leiden WP1:
o Parallelization Framework v0.2 -- a major refactoring and
update of opipeclient and opipeserver.
o Implementation of the Pipeline DFS, using the query functionality
recently implemented in WP3 and the parallelization framework
Status AIs
o QC-meeting
NOTE: I received the IVO paper on EIS from Luiz, which I will
distribute as soon as I have a PS version
1. Quality Control -- some clarification was sent around,
and is appended to this message
3. LDAC in CVS -- Will be announced separately
4. Create mailserver -- you're reading it. A bug reporting
facility will be announced separately
5. Improve the functionality of eclipse -- interface done,
fft deferred, sigma-clipping, see OAC
6. Creation of a data flow system -- work in progress (see Current
Activities above)
--
Clarification of some QC issues (included for the public record)
To Do
=====
Since a framework for doing Quality Assessment (QA) and Trend Analysis
(TA) is now in place, it is a good time to start thinking about how we
want to use this framework to implement sophisticated QA and TA
procedures.
o Quality Assessment
For each ProcessTarget in astro/main (i.e. each class that has or should
have make(), verify() and compare() methods) we should determine
1. What verification operations are needed?
- Are the operations in the CP sufficient? If not, what additional
operations are required?
- What are the necessary measurements that need to be made?
2. What comparison operations are needed?
- What measures can be used to quantify the differences between
subsequent objects?
- Can these measures be derived separately, or do we have to compare
the data directly to derive new measures? (E.g., can we compare the means
of the two images, or do we have to compute the mean of the difference
of the two images?)
3. Are all measurements specified under 1. and 2. actually carried out
in the make() method?
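The question raised under 2. can be made concrete with a small example:
for a linear statistic such as the mean, comparing the per-image measures
is equivalent to measuring the difference image, so the measures can be
derived separately; for a non-linear statistic such as the standard
deviation this equivalence breaks down, and the data must be compared
directly. A minimal sketch with toy pixel values (the numbers are
hypothetical, standard library only):

```python
# For a linear statistic (mean), per-image measures suffice; for a
# non-linear statistic (standard deviation), they do not.
from statistics import mean, pstdev

# two toy "images" as flat pixel lists (hypothetical values)
img_a = [10.0, 12.0, 11.0, 13.0]
img_b = [9.0, 12.5, 10.0, 14.0]
diff = [a - b for a, b in zip(img_a, img_b)]

# the mean is linear: mean(a) - mean(b) == mean(a - b)
assert abs((mean(img_a) - mean(img_b)) - mean(diff)) < 1e-12

# the standard deviation is not: pstdev(a) - pstdev(b) != pstdev(a - b)
assert abs((pstdev(img_a) - pstdev(img_b)) - pstdev(diff)) > 0.1
```

So whether a comparison can reuse stored measurements or needs the
pixel data depends on which statistic is being compared.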
o Trend Analysis
For each ProcessTarget object, we should determine what the trend
analysis operations could be. For each trend analysis we should then
determine:
1. What measure are we going to trend? Has this measurement been made?
2. What measure do we use to select the objects for the TA, and are
these measures available?
3. How do we quantify the trend? I.e., what measurements does the
trend analysis make?
(Implementation note: We could define TrendAnalysis objects which are
themselves ProcessTargets (have make() and verify() methods), with
dependencies that are queries into the database)
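The implementation note above can be sketched in a few lines: a
TrendAnalysis object that looks like a ProcessTarget (make() and
verify() methods), whose dependency is a query into the database. All
names here (TrendAnalysis, mean_level, the tolerance) are hypothetical
illustrations, not the actual astro-wise API:

```python
# Sketch: a TrendAnalysis that behaves like a ProcessTarget, with a
# query standing in for the database dependency. Names are hypothetical.
class TrendAnalysis:
    def __init__(self, measure, query, tolerance=1.0):
        self.measure = measure      # attribute name of the trended measure
        self.query = query          # callable standing in for a DB query
        self.tolerance = tolerance  # hypothetical acceptance threshold
        self.values = []

    def make(self):
        # collect the trended measure from the objects the query selects
        self.values = [getattr(obj, self.measure) for obj in self.query()]

    def verify(self):
        # quantify the trend; here simply the drift from first to last
        if len(self.values) < 2:
            return True
        return abs(self.values[-1] - self.values[0]) < self.tolerance


# toy usage: three frames with a slowly drifting mean level
class Frame:
    def __init__(self, mean_level):
        self.mean_level = mean_level

frames = [Frame(x) for x in (100.0, 100.2, 100.3)]
ta = TrendAnalysis('mean_level', lambda: frames)
ta.make()
assert ta.verify()  # a drift of 0.3 stays within the tolerance
```

The attraction of this design is that the existing make()/verify()
machinery would drive trend analyses for free, with the query replacing
the usual file dependencies.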
To summarize: It should be possible to come up with a list of QA and TA
procedures for BiasFrame, DomeFlatFrame, TwilightFlatFrame,
MasterFlatFrame, FringeFlatFrame, etc., etc. If these procedures do
indeed fit within the current framework, then it should be relatively
straightforward to implement them. If these procedures do not fit within
the current framework, then this list will provide a clear specification
of the additions and/or modifications that are needed.
What Next
=========
We want to find out soon whether the current framework does indeed
support the different understandings and various ideas people have about QC. We
also want people to actually start using and contributing to the
astro-wise code.
To start this process, I would suggest that we first try to list every
QC procedure (measurements, visualization, trend-analysis, you-name-it)
that we could possibly think of for biases. We can then see how all
these ideas fit into the current framework.
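As a seed for that exercise, the candidate procedures could be collected
in a simple structure grouped by the categories named above; every entry
below is an illustrative guess, not an agreed inventory:

```python
# Hedged sketch of how the list of bias QC procedures might be
# organized; the procedure names are illustrative, not definitive.
bias_qc_procedures = {
    'measurements': [
        'mean and median bias level',
        'readout noise estimated from the overscan region',
        'count of outlier (hot/cold) pixels',
    ],
    'visualization': [
        'histogram of pixel values',
        'row- and column-averaged profiles',
    ],
    'trend_analysis': [
        'bias level versus time',
        'readout noise versus time',
    ],
}

for category, procedures in bias_qc_procedures.items():
    for name in procedures:
        print(f'{category}: {name}')
```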
I suggest that OAC takes the lead in assembling this list for
BiasFrame.py. I would love to see an estimate of the amount of time
Agnello thinks it will take to make this list.