From a6197c288b70fc297e91196e60e74ccee41d4c09 Mon Sep 17 00:00:00 2001 From: DhruvrajSinhZala24 Date: Mon, 23 Mar 2026 16:43:31 +0530 Subject: [PATCH] Improved documentation, added comments, and enhanced usability for beginners --- .../README.md | 28 +++--- .../vanilla/Evaluation.ipynb | 86 ++++++++++++++++- .../vanilla/Train_3.ipynb | 2 +- .../vanilla/data.py | 27 +++++- README.md | 94 ++++++++++++++++--- 5 files changed, 201 insertions(+), 36 deletions(-) diff --git a/DeepLense_Physics_Informed_Super_Resolution_Anirudh_Shankar/README.md b/DeepLense_Physics_Informed_Super_Resolution_Anirudh_Shankar/README.md index 548ff704..f61a30b2 100644 --- a/DeepLense_Physics_Informed_Super_Resolution_Anirudh_Shankar/README.md +++ b/DeepLense_Physics_Informed_Super_Resolution_Anirudh_Shankar/README.md @@ -21,21 +21,21 @@ Find below the list of contents of this description: 11. [Appendix](#11-appendix) ## 1. Replication and set-up -The Git repository can be accessed from here, as part of the parent ML4Sci repository. +This project lives inside the parent ML4Sci DeepLense repository. It contains all the Python Notebooks used in training and testing, the trained model weights, dataset simulation scripts, set-up instructions and some examples. -Requirements are divided into two, for each of the tasks: +The dependencies are split into two groups based on the workflow: -**1. Image simulation**: Use the following in an environment of your choice to install the libraries required for simulating the datasets: +**1. Image simulation**: Use the following command in your environment to install the libraries required for simulating datasets: -`pip install simulations.txt` +`pip install -r simulations.txt` The Simulations directory contains the code used to create the dataset, and is adopted from [Michael Toomey’s work](https://github.com/mwt5345/DeepLenseSim/tree/main/Model_I), one of my project mentors. -**2. 
Super-resolution and related tasks (basically, everything else)**: Use the following in an environment of your choice to install the libraries required for everything else presented in this project: +**2. Super-resolution and related tasks (everything else)**: Use the following command to install the libraries required for the training and evaluation notebooks: -`pip install requirements.txt` +`pip install -r requirements.txt` ## 2. What is gravitational lensing ? @@ -77,7 +77,7 @@ Training is done on low-resolution lensing images alone, as described in the [fo ## 4. Datasets Three datasets are used, each of which mimics a telescope: -* Model 1 dataset: mimics an artifically constructed telescope +* Model 1 dataset: mimics an artificially constructed telescope * Model 2 dataset: mimics Euclid * Model 3 dataset: mimics the Hubble Space Telescope @@ -100,7 +100,7 @@ The model is trained in an unsupervised fashion using a multi-faceted physics-ba * Sub-low-resolution scale (0.5x): At dimensions that are further lower than the low-resolution input images * Mean squared error (MSE) between the Sérsic profile and the reconstructed source image * MSE between the interpolated images and the images produced by the model -* Intensity constraints between the Sérsic source and the upscaled lensing images. This is elabored on in the [Appendix](#11-appendix) +* Intensity constraints between the Sérsic source and the upscaled lensing images. This is elaborated on in the [Appendix](#11-appendix) * The deflection angle is ensured to be > 0, as this is a physical constraint on the system owing to the non-negativity of the mass distribution * A variation density loss (VDL) that restricts the local variability of the deflection angle. This loss ensures that the produced deflection angles remain physical, and without any artifacts or aphysical fluctuations. More in the [Appendix](#11-appendix). 
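The VDL term in the list above amounts to a finite-difference penalty on neighbouring pixels of the deflection-angle map. A minimal sketch in PyTorch, assuming maps shaped `[N, C, H, W]` (an illustration of the idea, not the exact training code):

```python
import torch

def variation_density_loss(alpha: torch.Tensor) -> torch.Tensor:
    """Penalize strong local variation in a deflection-angle map.

    `alpha` is assumed to be a batch of maps shaped [N, C, H, W];
    a smoother map yields a smaller loss, discouraging aphysical
    pixel-to-pixel fluctuations.
    """
    # Absolute finite differences between adjacent pixels along each axis.
    diff_h = torch.abs(alpha[:, :, 1:, :] - alpha[:, :, :-1, :])
    diff_w = torch.abs(alpha[:, :, :, 1:] - alpha[:, :, :, :-1])
    # Average each difference map over all of its elements.
    return diff_h.mean() + diff_w.mean()
```

A perfectly flat map scores zero, so minimizing this term alongside the reconstruction losses pushes the predicted deflection angles toward locally smooth, physical solutions.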
@@ -122,7 +122,7 @@ Models are trained on the training dataset described in [Section 4](#4-datasets) Performance of the models are evaluated using the following metrics: * **MSE**: The MSE between the SR and true HR images acts as a simple estimate of closeness of individual pixels in both images. -* **Structural Similarity Index Measure (SSIM)**: This metric also evalutates how different the relation between pixels and their neighbours are, in the SR and true HR images. +* **Structural Similarity Index Measure (SSIM)**: This metric also evaluates how different the relationships between neighbouring pixels are in the SR and true HR images. * **Peak Signal-to-Noise Ratio (PSNR)**: This metric acts as a measure of the amount of noise present in the image. A higher value corresponds to better image quality (lesser noise). Model 1: @@ -147,7 +147,7 @@ Model 3: | CDM (sub-halos) | 0.001684 | 0.809 | 28.636 | ### Performance on the model-2 dataset -As seen above, the performance of the model on the model-2 dataset (limited to the SSIM) appears to be significantly worse. Model-2, mimicing Euclid, has a much lower PSF (point spread function) when compared to the other models. This causes broader and more blurred local features in the images, with which both the model. The worse performance can thus be associated for the most part with the quality issues imposed by the model-2 PSF. +As seen above, the performance of the model on the Model 2 dataset, especially in terms of SSIM, appears to be significantly worse. Model 2, which mimics Euclid, has a much lower PSF (point spread function) than the other models. This leads to broader and blurrier local features in the images, making reconstruction harder for the model. Most of the performance gap can therefore be attributed to the image-quality limitations imposed by the Model 2 PSF. ## 7. 
Auxiliary studies @@ -185,15 +185,15 @@ This direction tests the performance of the model when trained with a much small ![Performance with sparsity](Readme/sparse.jpg) -The model trained on fewer samples again, show at most a 10% degradation in the SSIM and PSNR scores. Again, this could imply minute structural improvements, but overall similariy in performance. +The model trained on fewer samples again shows at most a 10% degradation in the SSIM and PSNR scores. This suggests small structural differences, but broadly similar overall performance. This result also falls in line with several PINN studies, suggesting that a strength of PINNs is their ability to function effectively with sparse datasets. ### Quality verification -This final direction aims at ensuring the quality of images produed by the SR architecture proposed. While the SSIM and PSNR scores can ensure perceptual quality, their true quality that one can see directly, must also be ensured. +This final direction aims to ensure the quality of the images produced by the proposed SR architecture. While SSIM and PSNR help quantify perceptual quality, the visual and downstream usefulness of the outputs must also be checked directly. -For this purpose, a downstream classification into the three DM sub-structure classes is performed by similar classification networks on the two sets of images: the LR images, and the SR architecture's outputs. There was initially a 10% worse perofrmance by the SR model in downsteam classification accuracy, and small adjustments to the architecture was made to preserve SR image quality. +For this purpose, a downstream classification into the three DM sub-structure classes is performed using similar classification networks on two sets of images: the LR images and the SR model outputs. There was initially a roughly 10% drop in downstream classification accuracy for the SR outputs, so small adjustments were made to the architecture to preserve SR image quality. 
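The quality-verification step described above boils down to comparing downstream classification accuracy on the two image sets. A hypothetical sketch (the helper name, classifier interface, and tensor shapes are assumptions, not the project's actual code):

```python
import torch

@torch.no_grad()
def accuracy_gap(classifier, lr_images, sr_images, labels):
    """Accuracy on LR inputs minus accuracy on SR outputs.

    Hypothetical helper: `classifier` maps image batches to class
    logits; a positive return value means the SR images classify worse.
    """
    lr_acc = (classifier(lr_images).argmax(dim=1) == labels).float().mean().item()
    sr_acc = (classifier(sr_images).argmax(dim=1) == labels).float().mean().item()
    return lr_acc - sr_acc
```

In the study above, this gap was initially roughly 10%, which is what motivated the small architecture adjustments.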
![DC model adjustments](Readme/Alpha_2.png) @@ -385,4 +385,4 @@ Model 3: |-----------------|----------|-------|--------| | No_substructure | 0.004156 | 0.786 | 25.519 | | Axion (vortex) | 0.003884 | 0.790 | 26.537 | -| CDM (sub-halos) | 0.003883 | 0.786 | 25.986 | \ No newline at end of file +| CDM (sub-halos) | 0.003883 | 0.786 | 25.986 | diff --git a/DeepLense_Physics_Informed_Super_Resolution_Anirudh_Shankar/vanilla/Evaluation.ipynb b/DeepLense_Physics_Informed_Super_Resolution_Anirudh_Shankar/vanilla/Evaluation.ipynb index 30c0ccb8..f0c61174 100755 --- a/DeepLense_Physics_Informed_Super_Resolution_Anirudh_Shankar/vanilla/Evaluation.ipynb +++ b/DeepLense_Physics_Informed_Super_Resolution_Anirudh_Shankar/vanilla/Evaluation.ipynb @@ -1,5 +1,14 @@ { "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Physics-Informed Super-Resolution Evaluation\n", + "\n", + "This notebook evaluates the vanilla physics-informed super-resolution model on the DeepLense test split. It loads a trained model, reconstructs higher-resolution lensing images, and reports image-quality metrics for each dark matter substructure class.\n" + ] + }, { "cell_type": "code", "execution_count": 1, @@ -27,13 +36,22 @@ "print(device)" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Load Configuration\n", + "\n", + "This section imports the experiment settings and selects which trained model checkpoint to evaluate. 
The `model_number` controls the telescope-style dataset variant used during testing.\n" + ] + }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ - "with open('config.json') as config_file:\n", + "with open('../config.json') as config_file:\n", " config = json.load(config_file)\n", "\n", "model_number = 1" @@ -82,6 +100,15 @@ " return torch.sum(diff_x)/(diff_x.shape[2]*diff_x.shape[3]) + torch.sum(diff_y)/(diff_y.shape[2]*diff_y.shape[3])" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Prepare Evaluation Datasets\n", + "\n", + "The notebook builds test datasets for the three substructure categories and their corresponding high-resolution targets. These pairs are used to compare the model output against the reference images.\n" + ] + }, { "cell_type": "code", "execution_count": 5, @@ -97,6 +124,15 @@ "HR_cdm = data.LensingDataset('../Simulations/test/data_model_%d/'%model_number,['cdm_HR'],5000)" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Load the Trained Model\n", + "\n", + "The super-resolution model predicts deflection-angle information that is then used to reconstruct sharper lensing images. Here we restore the saved weights for the selected dataset configuration.\n" + ] + }, { "cell_type": "code", "execution_count": 6, @@ -119,6 +155,15 @@ "alpha_model.load_state_dict(torch.load('Weights_%d.pt'%(model_number), weights_only=True))" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Visualize a Sample Reconstruction\n", + "\n", + "Before aggregating metrics across the full test set, the notebook defines a helper to inspect one reconstructed example. 
This makes it easier to confirm that the model output looks reasonable.\n" + ] + }, { "cell_type": "code", "execution_count": 7, @@ -239,6 +284,19 @@ "give_image(dataset_cdm, HR_cdm, alpha_model, len(dataset_cdm), plot=True)" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Define Evaluation Metrics\n", + "\n", + "The reported metrics capture complementary aspects of image quality:\n", + "\n", + "- `MSE` measures average pixel-level error.\n", + "- `SSIM` measures perceptual and structural similarity.\n", + "- `PSNR` summarizes reconstruction quality relative to noise.\n" + ] + }, { "cell_type": "code", "execution_count": 9, @@ -265,6 +323,15 @@ "history_cdm = {'loss':[],'SSIM':[], 'PSNR':[], 'vdl':[]}" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Run Evaluation Across All Classes\n", + "\n", + "The next cells iterate through the test samples, compute reconstruction metrics, and store per-class histories for later reporting.\n" + ] + }, { "cell_type": "code", "execution_count": 10, @@ -274,9 +341,9 @@ "name": "stderr", "output_type": "stream", "text": [ - "Evaluating CDM sub-structure images: 100%|██████████| 5000/5000 [00:35<00:00, 141.76it/s]\n", - "Evaluating no sub-structure images: 100%|██████████| 5000/5000 [00:32<00:00, 154.52it/s]\n", - "Evaluating axion DM sub-structure images: 100%|██████████| 5000/5000 [00:33<00:00, 148.73it/s]\n" + "Evaluating CDM sub-structure images: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5000/5000 [00:35<00:00, 141.76it/s]\n", + "Evaluating no sub-structure images: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5000/5000 [00:32<00:00, 154.52it/s]\n", + "Evaluating axion DM sub-structure images: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5000/5000 [00:33<00:00, 148.73it/s]\n" ] } ], @@ -339,6 +406,15 @@ "print(f\"cdm: Evaluation completed with \\nMSE: {np.mean(history_cdm['loss'])} 
({np.std(history_cdm['loss'])})\\nSSIM: {np.mean(history_cdm['SSIM'])} ({np.std(history_cdm['SSIM'])})\\nPSNR: {np.mean(history_cdm['PSNR'])} ({np.std(history_cdm['PSNR'])})\")" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Summarize Results\n", + "\n", + "The final table presents the average scores for each dark matter substructure class so the model can be compared across datasets.\n" + ] + }, { "cell_type": "code", "execution_count": 12, @@ -438,4 +514,4 @@ }, "nbformat": 4, "nbformat_minor": 2 -} +} \ No newline at end of file diff --git a/DeepLense_Physics_Informed_Super_Resolution_Anirudh_Shankar/vanilla/Train_3.ipynb b/DeepLense_Physics_Informed_Super_Resolution_Anirudh_Shankar/vanilla/Train_3.ipynb index addb4b22..622bd9ca 100755 --- a/DeepLense_Physics_Informed_Super_Resolution_Anirudh_Shankar/vanilla/Train_3.ipynb +++ b/DeepLense_Physics_Informed_Super_Resolution_Anirudh_Shankar/vanilla/Train_3.ipynb @@ -39,7 +39,7 @@ "metadata": {}, "outputs": [], "source": [ - "with open('config.json') as config_file:\n", + "with open('../config.json') as config_file:\n", " config = json.load(config_file)" ] }, diff --git a/DeepLense_Physics_Informed_Super_Resolution_Anirudh_Shankar/vanilla/data.py b/DeepLense_Physics_Informed_Super_Resolution_Anirudh_Shankar/vanilla/data.py index 6fc22b22..d9ef3d17 100755 --- a/DeepLense_Physics_Informed_Super_Resolution_Anirudh_Shankar/vanilla/data.py +++ b/DeepLense_Physics_Informed_Super_Resolution_Anirudh_Shankar/vanilla/data.py @@ -1,5 +1,6 @@ import torch import numpy as np +from pathlib import Path class LensingDataset(torch.utils.data.Dataset): def __init__(self, directory, classes, num_samples, aux='_sim_'): @@ -12,10 +13,11 @@ def __init__(self, directory, classes, num_samples, aux='_sim_'): :param aux: Used to indicate whether the dataset contains image sumulations ('_sim_') or deflection angles ('_alpha_') """ super(LensingDataset, self).__init__() - self.directory = directory + self.directory = 
Path(directory) self.classes = classes self.num_samples = num_samples self.aux = aux + def __len__(self): """ :return: Returns the length of the dataset @@ -30,8 +32,23 @@ def __getitem__(self, index): :return: LR image, min-max normalized """ selected_class = self.classes[index//self.num_samples] - class_index = index%self.num_samples - image = torch.tensor(np.array([np.load(self.directory+selected_class+'/%s'%selected_class+self.aux+'%d.npy'%(class_index))])) + sample_index = index%self.num_samples + image_path = self.directory / selected_class / f"{selected_class}{self.aux}{sample_index}.npy" + + if not image_path.exists(): + raise FileNotFoundError(f"Expected sample not found: {image_path}") + + # Keep the returned tensor in the original single-channel format: [1, H, W]. + image = torch.tensor(np.array([np.load(image_path)])) + if self.aux == '_sim_': - image = (image - torch.min(image))/(torch.max(image)-torch.min(image)) - return image \ No newline at end of file + image_min = torch.min(image) + image_max = torch.max(image) + + # Avoid dividing by zero if a sample is constant-valued. + if torch.isclose(image_max, image_min): + return torch.zeros_like(image) + + image = (image - image_min)/(image_max - image_min) + + return image diff --git a/README.md b/README.md index f0f50e60..e4b20057 100644 --- a/README.md +++ b/README.md @@ -1,25 +1,97 @@ ![ML4Sci x DeepLense](/Images_for_README/DEEPLENSE.png) +## Project Overview + +DeepLense is a research repository from ML4SCI focused on applying machine learning to strong gravitational lensing. The repository brings together multiple independent projects covering classification, regression, domain adaptation, self-supervised learning, and super-resolution for lensing data. + +In practice, this repository works as a collection of project folders rather than a single Python package with one entry point. Each subdirectory usually contains its own notebooks, scripts, trained weights, and dependency list. 
+ +## Repository Structure + +- `README.md`: top-level overview of the datasets and research directions in DeepLense. +- `Images_for_README/`: figures used by the root documentation. +- Project directories such as `DeepLense_Physics_Informed_Super_Resolution_Anirudh_Shankar/`, `DeepLense_Diffusion_Rishi/`, and `Transformers_Classification_DeepLense_Kartik_Sachdev/`: self-contained research efforts with their own code and notebooks. + +## How to Run + +Because DeepLense is a multi-project repository, you should run one subproject at a time. + +### 1. Clone the repository + +```bash +git clone https://github.com//DeepLense.git +cd DeepLense +``` + +### 2. Create and activate a virtual environment + +```bash +python3 -m venv .venv +source .venv/bin/activate +python -m pip install --upgrade pip +``` + +### 3. Choose a project directory + +Examples: + +- `DeepLense_Physics_Informed_Super_Resolution_Anirudh_Shankar/` +- `DeepLense_Diffusion_Rishi/` +- `DeepLense_Gravitational_Lensing_Mriganka_Nath/` + +### 4. Install that project's dependencies + +Many subprojects include their own `requirements.txt`. Install dependencies from the directory you want to work in. + +Example: + +```bash +cd DeepLense_Physics_Informed_Super_Resolution_Anirudh_Shankar +pip install -r requirements.txt +``` + +Some projects also include extra dependency files for specific tasks. For example, the simulation workflow in `DeepLense_Physics_Informed_Super_Resolution_Anirudh_Shankar/` uses: + +```bash +pip install -r simulations.txt +``` + +### 5. Run notebooks or project-specific scripts from the selected project + +Typical workflows in this repository are notebook-driven: + +```bash +jupyter lab +``` + +Then open the notebook for the subproject you want to explore. + +Some projects also provide helper scripts under folders such as `scripts/` or `train/`. Use the commands documented in that project's README rather than assuming a shared entry point across the repository. + +### 6. 
Check project-specific README files + +The root README gives a repository-level overview, but setup details can differ across projects. Before running a specific workflow, read the README inside that project folder if one is available. + ## 1. Background -We at DeepLense explore cutting-edge Machine Learning techniques for the study of Strong Gravitational Lensing and Dark Matter Sub-structure. We use both simulated and real lensing images, for a variety of tasks, using a variety of techniques. +At DeepLense, we explore machine learning techniques for studying strong gravitational lensing and dark matter substructure. We use both simulated and real lensing images across several tasks and model families. -We also actively mentor [Google Summer of Code (GSoC)](https://summerofcode.withgoogle.com/) projects, that you can find listed [here](#3-projects). +We also actively mentor [Google Summer of Code (GSoC)](https://summerofcode.withgoogle.com/) projects, many of which are listed [here](#3-projects). -1. Find below a description of [gravitational lensing](#11-gravitational-lensing) and [dark matter sub-structure](#12-dark-matter-and-sub-structure). -2. [Section 2](#2-models) contains a detailed description of the datasets used in the various projects -3. [Section 3](#3-projects) beins with a short description followed by an expansion on the various (GSoC) projects conducted at DeepLense +1. A short introduction to [gravitational lensing](#11-gravitational-lensing) and [dark matter substructure](#12-dark-matter-and-sub-structure) is provided below. +2. [Section 2](#2-datasets) describes the datasets used across the projects. +3. [Section 3](#3-projects) starts with a summary and then expands on the various DeepLense and GSoC projects. 
### 1.1 Gravitational Lensing -Gravitational lensing is the phenomenon of the bending of light in the gravity of a massive celestial object (such as a massive galaxy or a group of galaxies); the object essentially behaving as a cosmic lens. We, as a result see the distorted image(s) of light sources (typically another galaxy) behind it. +Gravitational lensing is the bending of light by the gravity of a massive celestial object, such as a galaxy or galaxy cluster. The object effectively acts as a cosmic lens, so we observe distorted images of background light sources, typically other galaxies. ![Lensing Schematic](https://upload.wikimedia.org/wikipedia/commons/thumb/0/02/Gravitational_lens-full.jpg/330px-Gravitational_lens-full.jpg) The dynamics of lensing depends on both the composition of the lens and the nature of the source. We explore different lens models and source light profiles including real galaxy images in DeepLense, that you can find [here](#2-datasets). ### 1.2 Dark Matter and Sub-structure -In DeepLense, we’re mainly dealing with three kinds of simulated Dark Matter: +In DeepLense, we mainly work with three kinds of simulated dark matter: * **Axion Dark Matter (Vortex)**: Axions are hypothetical particles that are considered as candidates for dark matter. In the context of axion dark matter, vortex substructures refer to specific topological features that can form in the distribution of axion fields. * **Cold Dark Matter (Subhalo)**: This model suggests that dark matter consists of slow-moving particles. In the CDM paradigm, smaller clusters of dark matter, known as subhalos, are approximated as “point masses.” This simplification facilitates computational modeling by treating these subhalos as singular points in the overall distribution of dark matter. * **No-Substructure Dark Matter**: Unlike the CDM model, the “no-substructure” approach assumes that dark matter is evenly spread out, devoid of any smaller-scale clusters or sub-halos. 
This stands in stark contrast to the hierarchical structuring and layering of sub-halos within larger halos as predicted by CDM models. @@ -83,7 +155,7 @@ Classification of lensing images into their intrinsic dark matter sub-structure #### 3.1.6 Contrastive Learning vs BYOL -**Yashwardhan Deshmukh** compares the performance of the self-supervised learning techniques in their [GSoC 2023 project](https://summerofcode.withgoogle.com/archive/2023/projects/TBOsy4MA), Contranstive Learning and Bootstrap Your Own Latent (BYOL) on the three datasets, Model 1, 2 and 3. +**Yashwardhan Deshmukh** compares the performance of the self-supervised learning techniques in their [GSoC 2023 project](https://summerofcode.withgoogle.com/archive/2023/projects/TBOsy4MA), Contrastive Learning and Bootstrap Your Own Latent (BYOL), on the three datasets: Model 1, 2, and 3. #### 3.1.7 Domain Adaptation for Simulation-Based Dark Matter Searches Using Strong Gravitational Lensing @@ -102,11 +174,11 @@ Their work has been published as a [paper](https://iopscience.iop.org/article/10 ### 3.2 Dark matter property estimation through regression Another means of dark matter study through strong lensing is through the approximation of their properties. **Yurii Halychanskyi** and **Zhongchao Guan** approximate the mass density of vortex substructure of dark matter condensates on the three datasets, Model 1, 2 and 3. -Yurii uses the ResNet18Hybrid and CmtTi architectures in their [GSoc 2021](https://summerofcode.withgoogle.com/archive/2021/projects/5719965138681856) and [2022](https://summerofcode.withgoogle.com/archive/2022/projects/58Y5QOU4) projects, while Zhongchao demonstres with ResNet18, ViT, CNN-T, MobileNet V2 and CvT-13, in their [GSoc 2022 project](https://summerofcode.withgoogle.com/archive/2022/projects/lnptRFqq). 
+Yurii uses the ResNet18Hybrid and CmtTi architectures in their [GSoC 2021](https://summerofcode.withgoogle.com/archive/2021/projects/5719965138681856) and [2022](https://summerofcode.withgoogle.com/archive/2022/projects/58Y5QOU4) projects, while Zhongchao demonstrates results with ResNet18, ViT, CNN-T, MobileNet V2, and CvT-13 in their [GSoC 2022 project](https://summerofcode.withgoogle.com/archive/2022/projects/lnptRFqq). ### 3.3 Super-resolution of lensing images -Finally, DeepLense help combat the problem of noisy and low-resolution of real lensing images through various super-resolution techniques. Denoising and upscaling of lensing images can help make their study more accurate. +Finally, DeepLense helps address the problem of noisy, low-resolution real lensing images through various super-resolution techniques. Denoising and upscaling these images can make scientific analysis more accurate. #### 3.3.1 Single Image Super-Resolution with Diffusion Models **Atal Gupta** achieves super-resolution of the real-galaxy lensing dataset, in their [GSoC 2024 project](https://summerofcode.withgoogle.com/programs/2024/projects/3YAQgkHr), Model 4 using a variety of Diffusion Models (DDPM, SR3, SRDiff, ResShift and CG-DPM). @@ -115,4 +187,4 @@ Finally, DeepLense help combat the problem of noisy and low-resolution of real l **Pranath Reddy** performs a comparative study of the super-resolution of strong lensing images in their [GSoC 2023 project](https://summerofcode.withgoogle.com/archive/2023/projects/Rh8kJLr4), using Residual Models with Content Loss and Conditional Diffusion Models, on the Model 1 dataset. #### 3.3.3 Physics-Informed Unsupervised Super-Resolution of Strong Lensing Images -**Anirudh Shankar** explores the unsupervised super-resolution of strong lensing images through a Physics-Informed approach in his [GSoC 2024 project](https://summerofcode.withgoogle.com/programs/2024/projects/AvlaMMJJ), built to handle sparse datasets. 
They use custom datasets using different lens models and light profiles. \ No newline at end of file +**Anirudh Shankar** explores the unsupervised super-resolution of strong lensing images through a Physics-Informed approach in his [GSoC 2024 project](https://summerofcode.withgoogle.com/programs/2024/projects/AvlaMMJJ), built to handle sparse datasets. He builds custom datasets using different lens models and light profiles.
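The normalization guard this patch adds to `data.py` can be exercised in isolation. A minimal sketch of the same logic, assuming single-channel `[1, H, W]` samples:

```python
import torch

def minmax_normalize(image: torch.Tensor) -> torch.Tensor:
    """Min-max normalize a sample into [0, 1], mirroring the data.py guard.

    Constant-valued samples would produce a zero denominator, so they
    are returned as all-zero tensors instead.
    """
    image_min, image_max = torch.min(image), torch.max(image)
    # Guard against division by zero for constant-valued samples.
    if torch.isclose(image_max, image_min):
        return torch.zeros_like(image)
    return (image - image_min) / (image_max - image_min)
```

With this guard in place, a degenerate simulation sample no longer produces NaN pixels downstream; it simply contributes a blank image.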