# Pipeline for creating NB422-detected (ODI) catalog
The repository contains a Docker-based pipeline for preprocessing observational data. The pipeline creates a "master catalog" that combines SExtractor catalogs from the NB422 (ODI) image and other broadband images in the COSMOS field. The pipeline is fully automatic except for certain tasks that require user input (e.g. the IRAF `msccmatch` and `imexamine` tasks, and SExtractor).
## Features
- run environment is independent of the local machine
- only re-runs the necessary steps after changing input data or parameters
- automatically analyzes stdout from the IRAF `iterstat` and `imexamine` tasks and uses the results in the following steps
- supports multiple host operating systems
The pipeline is not meant to be run as is; it should be modified to suit your specific data analysis routine.
## License
Licensed under the MIT license; see LICENSE.
## Software used in the example
SAOImageDS9, Python/AstroPy, IRAF/PyRaf, IDL, SExtractor
## Prerequisites
- Docker
- IDL, Python 3 (only needed if you want to run the IDL-based `generate_kernel` task)
## Instructions
- prepare mosaic images and name them `NB422.fits`, `g.fits`, etc. under one folder (rms images as `g.rms.fits`); download the coordinate file from the Gaia server to the same folder and name it `gaia.coo`
- make any necessary changes to adapt the code to your workflow
- edit `docker-compose.yml` and change `/path/to/data_folder` to the actual location of the data folder on the host machine
- (steps 4-6 apply only if you want to run the `generate_kernel` task) open a new terminal, then run `export REQUEST_PORT=8088` and `export RESPONSE_PORT=8089` (or any other ports that you choose) in both terminals
- copy `server.py` and `generate_kernel.pro` to the data folder
- run `python server.py`
- back in the repository, run `docker build -t odi_pipeline .` to build the image
- after the image is built successfully, run `docker-compose run --service-ports pipeline` to create a container
- inside the container, run `python3 -m py_programs.tasks.runner create_makefile`
- go to the `/mnt/data` folder and run `make master_catalog.csv`
- caution: for Mac and Windows users, check Additional Notes
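The `REQUEST_PORT`/`RESPONSE_PORT` variables exported in step 4 must match between the host terminal (running `server.py`) and the container. The actual `server.py` in the repository may read them differently; as a minimal sketch, picking them up with the defaults suggested above could look like this (`get_ports` is a hypothetical helper, not part of the repository):

```python
import os

def get_ports():
    """Read the kernel-server ports from the environment, falling back to
    the defaults suggested in the instructions (8088/8089)."""
    request_port = int(os.environ.get("REQUEST_PORT", "8088"))
    response_port = int(os.environ.get("RESPONSE_PORT", "8089"))
    return request_port, response_port

if __name__ == "__main__":
    print(get_ports())
```

Because both sides read the same variables, changing the ports only requires re-running the two `export` commands.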
## Brief explanation of data processing in the pipeline
- calibrate astrometry against `gaia.coo` using the IRAF `msccmatch` task
- copy the new WCS in the mosaic image headers to their rms images using IRAF `wcscopy`
- (steps 3-4 apply only to broadband images) reproject the broadband images to the same tangent point and pixel scale as `NB422.fits` using IRAF `wregister`
- caution: turn on `flux_conserve` when dealing with the mosaic images; this is not needed for the rms maps
- match the reprojected rms map to the sky noise of the reprojected mosaic image using IRAF `iterstat`
- make a flag map of the NB image using Python
- measure the image PSFs using IRAF `imexamine`
- make Moffat PSFs using Python
- generate kernels that transform the original (broadband) PSFs to the NB PSF using IDL `max_entropy`
- convolve the broadband images so that all images have the same PSF, using Python
- run SExtractor and make a master catalog
- All done!
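The repository's actual PSF code is not shown here, but the "make Moffat PSFs, use Python" step can be sketched in plain NumPy. The grid size and the `alpha`/`beta` parameters below are illustrative values, not the pipeline's real settings:

```python
import numpy as np

def moffat_psf(size, alpha, beta):
    """Build a normalized 2-D Moffat PSF on a size x size grid.

    The Moffat profile is I(r) ~ (1 + (r/alpha)^2)^(-beta): alpha sets
    the core width and beta controls how strong the wings are.
    """
    y, x = np.mgrid[:size, :size]
    r2 = (x - size // 2) ** 2 + (y - size // 2) ** 2  # squared distance from center
    psf = (1.0 + r2 / alpha**2) ** (-beta)
    return psf / psf.sum()  # normalize to unit total flux

psf = moffat_psf(size=25, alpha=3.0, beta=2.5)
```

A kernel like the one produced by IDL `max_entropy` is then what maps each broadband Moffat PSF onto the (broader or narrower) NB PSF before the final convolution step.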
## Additional notes
- IDL is required for creating the kernels `bb_to_nb.fits`. The command is run on the host machine to avoid the complexity of installing IDL and setting up its license. The host communicates with the Docker container via a basic TCP connection.
- The pipeline has been thoroughly tested on a 64-bit Ubuntu host.
- The `touch` command is used after some IRAF/PyRAF tasks because IRAF changes the modification time of input files in an unexpected way, which would otherwise make the timestamp-based make system unusable.
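The host-container handshake is just a plain TCP request/reply exchange. The sketch below mimics that pattern with Python's standard `socket` module; the function names, message format, and use of an OS-assigned port are all illustrative (the real `server.py` listens on `REQUEST_PORT` and launches IDL on the host):

```python
import socket
import threading

def serve_once(ready, addr):
    """Host side: accept one TCP request and acknowledge it.
    The real server would run IDL here instead of echoing."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", 0))         # port 0: let the OS pick a free port
        srv.listen(1)
        addr.append(srv.getsockname()[1])  # publish the chosen port
        ready.set()
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024).decode()
            conn.sendall(("done:" + request).encode())

def request_kernel(port, name):
    """Container side: send a request string and wait for the reply."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect(("127.0.0.1", port))
        cli.sendall(name.encode())
        return cli.recv(1024).decode()

ready, addr = threading.Event(), []
t = threading.Thread(target=serve_once, args=(ready, addr))
t.start()
ready.wait()
reply = request_kernel(addr[0], "generate_kernel")
t.join()
```

In the actual setup the client runs inside the container and reaches the host via `127.0.0.1` (Linux) or `host.docker.internal` (Windows/Mac), as described under Operating system support below.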
## Operating system support
- For both Windows and Mac systems:
  - edit `docker-compose.yml`: change the `environment` entry to `DISPLAY=host.docker.internal:0`
  - edit `server.py` (line 29) and `py_programs/func/generate_kernel.py` (line 8): change `127.0.0.1` to `host.docker.internal`
- Windows:
  - must have WSL 2 installed, WSL integration enabled, and WSLg support (generally this means Windows 11). If there is an issue with opening the display, refer to the Wiki.
- Mac:
  - must have XQuartz installed
  - launch XQuartz; under the XQuartz menu, select Preferences
  - go to the Security tab and ensure "Allow connections from network clients" is checked
  - run `xhost + ${hostname}` to allow connections to the macOS host
  - for Macs with M1 processors, run `export DOCKER_DEFAULT_PLATFORM=linux/amd64` before `docker build`
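For orientation, the Windows/Mac display change described above sits in the `environment` section of `docker-compose.yml`. The fragment below is only a sketch of how such a file might look, assuming the `pipeline` service name and `odi_pipeline` image from the instructions and the `/mnt/data` mount point used by the Makefile; your actual file may differ:

```yaml
# Illustrative docker-compose.yml fragment -- only the DISPLAY line is the
# change described above; other fields come from your own configuration.
services:
  pipeline:
    image: odi_pipeline
    environment:
      - DISPLAY=host.docker.internal:0   # instead of the Linux host's DISPLAY
    volumes:
      - /path/to/data_folder:/mnt/data   # host data folder from step 3
```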