Here is a list of research projects that I have been involved in.
Two wide-field imaging algorithms are proposed to deal with the non-coplanar effects introduced by the so-called "w-term", which becomes significant only when the field of view is large or the antennas are not coplanar.
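As a concrete illustration, the w-term enters the interferometric measurement equation as an extra phase on each visibility. A minimal sketch in Python (the function name and coordinate conventions here are illustrative, not from the papers):

```python
import numpy as np

def fringe_phase(u, v, w, l, m):
    """Phase of the full 3D measurement equation for direction cosines (l, m).

    The w term 2*pi*w*(n - 1) vanishes near the phase centre, where
    n = sqrt(1 - l^2 - m^2) is close to 1, but grows with the field of view.
    """
    n = np.sqrt(1.0 - l**2 - m**2)
    return 2.0 * np.pi * (u * l + v * m + w * (n - 1.0))
```

Evaluating this at the phase centre (l = m = 0) gives zero w-term phase, while a source a tenth of a radian away with w of order a thousand wavelengths picks up many radians of extra phase, which is why the effect cannot be ignored for wide fields.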
The first is the N-Faceting method, in which the celestial sphere is cut from top to bottom into facet slices, much as a potato is cut into chips. Each facet can be imaged and CLEANed, and the results are then projected onto the tangent plane to form the final reconstructed sky image. Local beams can therefore be used on each facet. Although multiple facets increase the computational cost, this cost can be greatly reduced by imaging the facets in parallel.
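The assignment of sky directions to facets can be sketched as follows. This is an illustrative reimplementation, not the project's code: the function name and the convention of binning directions by their n coordinate are our assumptions.

```python
import numpy as np

def facet_index(l, m, n_facets):
    """Assign each direction (l, m) on the sky to one of n_facets slices.

    Slicing the sphere "from top to bottom" corresponds to binning in
    n = sqrt(1 - l^2 - m^2), the direction cosine toward the phase centre.
    """
    n = np.sqrt(1.0 - l**2 - m**2)
    # Divide the range of n covered by this field of view into equal bins.
    edges = np.linspace(n.min(), n.max(), n_facets + 1)
    return np.clip(np.digitize(n, edges) - 1, 0, n_facets - 1)
```

Each facet's sources can then be gridded and imaged independently, which is what makes the per-facet parallelisation mentioned above straightforward.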
Figure 1. The field of view on the celestial sphere can be cut from top to bottom into 6 facets.
Figure 2. (a) Dirty images of the 16 facets, arranged from left to right and top to bottom. The top facets contain the sources lying within the central field of view, which appear out of focus on the lower facets. The simulated data contains 6 point sources in a 30-degree field. (b) Dirty beam on the central facet. (c) Projection of the reconstructed images onto the tangent plane.
The second method is an improved W-Stacking method, whose modifications include performing the gridding in three dimensions.
Although three-dimensional gridding incurs extra computational cost, the increase is compensated by the reduction in FFT computations that comes from the decreased number of W-planes when the field of view is considerably large. The improved W-Stacking method makes the difference between the DFT and FFT dirty images of wide-field observations negligible at single precision using the least-misfit gridding function with a window width of 7, and at double precision with a window width of 14.
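A heavily simplified sketch of the W-Stacking loop itself may make the structure clearer: FFT each w-plane of gridded visibilities to the image domain, undo that plane's w-term phase, and accumulate. All names are ours, and the sketch omits the gridding correction, normalisation, and the three-dimensional gridding refinement described above.

```python
import numpy as np

def w_stack_dirty_image(planes, w_values, l, m):
    """Sum per-w-plane FFT images into one dirty image.

    planes   : sequence of (N, N) complex grids of visibilities, one per w-plane
    w_values : the w value associated with each plane
    l, m     : (N, N) grids of image-plane direction cosines
    """
    n = np.sqrt(1.0 - l**2 - m**2)
    image = np.zeros_like(l)
    for grid, w in zip(planes, w_values):
        # FFT this plane's grid into the image domain.
        img = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(grid)))
        # Remove the w-term phase exp(-2*pi*i*w*(n-1)) before summing.
        image = image + (img * np.exp(2j * np.pi * w * (n - 1.0))).real
    return image
```

The per-plane FFTs are independent, so the cost grows with the number of w-planes, which is the term the improved method reduces.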
Figure 3. Left: DFT dirty image of simulated VLA A-array data containing 34 point sources. Right: the difference between the DFT and FFT dirty images is negligible at double precision. The FFT dirty image is made with the improved W-Stacking method and our least-misfit gridding function, using a window width of 14 and discarding the outer half of the image.
A paper on this work is in preparation:
H Ye, S F Gull, S M Tan, B Nikolic, Accurate wide-field imaging: N-Faceting and the improved W-Stacking method
A novel gridding function is proposed, derived by minimising an upper bound on the difference between the DFT and FFT dirty images in the radio interferometric imaging process. We call it the "least-misfit gridding function". The image cropping ratio is introduced as a parameter in the optimisation, controlling how much of the outer edge of the image is discarded to obtain the usable FFT dirty image.
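The quantity being minimised can be demonstrated numerically: grid the visibilities with some kernel, FFT, apply the grid correction, and compare against the direct DFT dirty image. The one-dimensional sketch below uses a placeholder Gaussian kernel purely for illustration; the actual least-misfit function is the output of the optimisation and is not reproduced here.

```python
import numpy as np

def taper(x, sigma=1.0):
    # Placeholder Gaussian kernel; NOT the least-misfit gridding function.
    return np.exp(-0.5 * (x / sigma) ** 2)

def dirty_images_1d(u, vis, N, support=7):
    """Return (direct-DFT dirty image, grid-corrected FFT dirty image) in 1D."""
    l = (np.arange(N) - N // 2) / N          # image coordinates in [-1/2, 1/2)
    # Direct transform: D(l) = sum_j V_j exp(2*pi*i*u_j*l)
    dft = np.real(np.exp(2j * np.pi * np.outer(l, u)) @ vis)
    # Convolutional gridding onto integer u cells.
    grid = np.zeros(N, dtype=complex)
    for uu, vv in zip(u, vis):
        k0 = int(round(uu))
        for d in range(-(support // 2), support // 2 + 1):
            grid[(k0 + d) % N] += vv * taper((k0 + d) - uu)
    fft_img = N * np.real(np.fft.fftshift(np.fft.ifft(grid)))
    # Grid correction: divide by the Fourier transform of the kernel.
    x = np.linspace(-support / 2, support / 2, 2001)
    corr = (taper(x)[None, :] * np.cos(2 * np.pi * np.outer(l, x))).sum(axis=1)
    corr *= x[1] - x[0]
    return dft, fft_img / corr
```

The misfit is the difference between the two returned images over the retained (central) portion; the least-misfit function is the kernel that makes this difference as small as possible for a given window width and cropping ratio.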
Compared with the widely used spheroidal function, the least-misfit gridding function makes the difference between the DFT and FFT dirty images at least 100 times smaller, and it suppresses aliasing at least 100 times more strongly.
Figure 4. The least-misfit gridding function for different window widths W. x0 is the parameter controlling how much of the outer edge of the image is discarded; when x0 = 0.25, the outer half of the FFT image is thrown away.
Figure 5. Left: the root mean square (rms) of the difference between the DFT and FFT dirty images within a square region extending from the image centre to the given standardised coordinate, for the least-misfit gridding function with different window widths W. The simulated data contains 34 point sources scattered across the field of view. Right: the same experiment using the spheroidal function as the gridding function.
Figure 6. A point source is placed outside the field of view so that its alias appears within it. The x-axis gives the number of pixels from the source to the field edge; the y-axis gives the normalised brightness of the alias. Left: the least-misfit function with different window widths W. Right: the spheroidal function.
With the least-misfit gridding function, the errors introduced by gridding and degridding can be reduced to single precision with a window width of 7 and to double precision with a window width of 14. To match the accuracy achieved by CASA, the least-misfit gridding function requires only a lookup table with 300 entries and a support width of 3, greatly reducing the computational cost for a given performance.
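At imaging time the tabulated function is simply looked up rather than re-evaluated for every visibility. A sketch of lookup-table gridding with a support width of 3 follows; the Gaussian table in the test is a placeholder, not the actual least-misfit values, and the function assumes the visibility lies well inside the grid.

```python
import numpy as np

def grid_with_table(grid, u, v, vis, table, support=3):
    """Spread one visibility over the support x support nearest cells,
    reading kernel values from a precomputed lookup table."""
    n_entries = len(table)
    iu, iv = int(round(u)), int(round(v))
    for du in range(-(support // 2), support // 2 + 1):
        for dv in range(-(support // 2), support // 2 + 1):
            # Offset from the visibility to this grid cell, in cells.
            dx, dy = (iu + du) - u, (iv + dv) - v
            # Offsets lie in [-support/2, support/2]; map them to table entries.
            ix = int((dx / support + 0.5) * (n_entries - 1))
            iy = int((dy / support + 0.5) * (n_entries - 1))
            grid[iv + dv, iu + du] += vis * table[ix] * table[iy]
```

With only 300 entries the table fits comfortably in cache, which is where the reduced computational cost for a given accuracy comes from.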
H Ye, S F Gull, S M Tan, B Nikolic, Optimal gridding and degridding in radio interferometry imaging, Preprint
A novel Bayesian approach to source extraction in radio interferometry is proposed and implemented. It does not require reconstructed images (e.g. CLEANed images) but works directly on intermediate image products such as dirty images.
We build a model that takes the visibility data into account while working in practice with the more computationally manageable image products. From this model, Bayes' theorem gives the probability of the sources' positions and fluxes. A Markov chain Monte Carlo (MCMC) process then explores this distribution to find the most probable source positions, and a clustering step extracts the most likely positions from the MCMC samples as the output. The method is implemented in a software package called BaSC.
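The sampling step can be illustrated with a toy random-walk Metropolis sampler over a single source position. The simplistic log-likelihood below, taken proportional to the dirty-image value, stands in for BaSC's actual visibility-based likelihood, and all names are ours.

```python
import numpy as np

def sample_positions(dirty, n_steps=3000, step=1.5, seed=0):
    """Random-walk Metropolis sampling of one source position on a dirty image.

    Toy stand-in for BaSC's sampler: log-likelihood is crudely taken as the
    dirty-image value at the (truncated) pixel position.
    """
    rng = np.random.default_rng(seed)
    ny, nx = dirty.shape
    pos = np.array([ny / 2.0, nx / 2.0])          # start at the image centre
    samples = np.empty((n_steps, 2))
    for i in range(n_steps):
        prop = pos + rng.normal(0.0, step, 2)     # Gaussian random-walk proposal
        if 0.0 <= prop[0] < ny and 0.0 <= prop[1] < nx:
            log_a = (dirty[int(prop[0]), int(prop[1])]
                     - dirty[int(pos[0]), int(pos[1])])
            if np.log(rng.uniform()) < log_a:     # Metropolis accept/reject
                pos = prop
        samples[i] = pos
    return samples
```

The returned samples concentrate around high-likelihood positions; grouping them with a clustering algorithm, as BaSC does, turns the sample cloud into a list of discrete source detections.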
Figure 7. Left: BaSC takes a dirty beam, a dirty image and a primary beam as input, and outputs source positions and fluxes. Right: the MCMC samples are grouped into two clusters (green and blue), each representing a point source; the black dots are outliers.
By comparison, most existing source-extraction packages (e.g. SExtractor) were originally designed for CCD images or photographic scans; they therefore operate only on reconstructed images, treating them as optical images. However, CLEAN produces an inaccurate model of the sky and, based on that model, inserts a Gaussian restoring beam at every point suspected to be a source. The resolution of packages such as SExtractor is consequently limited by the restoring beam, a limitation that does not apply to BaSC. Indeed, BaSC is better at distinguishing two nearby point sources.
Figure 8. Simulated datasets containing two point sources at various separations are processed with both BaSC and SExtractor. BaSC can distinguish two sources separated by less than the size of the restoring beam, which is the resolution limit of SExtractor.