We present Isabellm, an LLM-powered theorem prover for Isabelle/HOL that performs fully automatic proof synthesis. Isabellm works with any local LLM served via Ollama, as well as APIs such as the Gemini CLI, and it is designed to run on consumer-grade computers. The system combines a stepwise prover, which uses large language models to propose proof commands validated by Isabelle in a bounded search loop, with a higher-level proof planner that generates structured Isar outlines and attempts to fill and repair the remaining gaps. The framework includes beam search over tactics, ML- and RL-based tactic rerankers, premise selection with small transformer models, micro-RAG for Isar proofs built from the Archive of Formal Proofs (AFP), and counterexample-guided proof repair. All of the code was implemented by GPT 4.1-5.2, Gemini 3 Pro, and Claude 4.5. Empirically, Isabellm can prove certain lemmas that defeat Isabelle's standard automation, including Sledgehammer, demonstrating the practical value of LLM-guided proof search. At the same time, we find that even state-of-the-art LLMs, such as GPT 5.2 Extended Thinking and Gemini 3 Pro, struggle to reliably implement the intended fill-and-repair mechanisms with complex algorithmic designs, highlighting fundamental challenges in LLM code generation and reasoning. The code of Isabellm is available at https://github.com/zhehou/llm-isabelle
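To make the bounded, Isabelle-validated search loop concrete, here is a minimal sketch of an LLM-guided beam search over proof commands. This is not Isabellm's actual implementation: `llm_propose` (returns candidate commands with log-probabilities) and `isabelle_apply` (returns the new goal state, None when the goal is closed, or "error" when the command is rejected) are hypothetical placeholders.

```python
# Minimal sketch of an LLM-guided, Isabelle-validated beam search.
# `llm_propose` and `isabelle_apply` are hypothetical stand-ins,
# not the actual Isabellm components.
import heapq

def prove(goal, llm_propose, isabelle_apply, beam_width=4, max_steps=50):
    """Beam search over LLM-proposed proof commands, validated by Isabelle."""
    # Each beam entry: (cost, proof-so-far, current goal state)
    beam = [(0.0, [], goal)]
    for _ in range(max_steps):
        candidates = []
        for cost, proof, state in beam:
            if state is None:            # no subgoals left: proof complete
                return proof
            for cmd, logprob in llm_propose(state, n=beam_width):
                new_state = isabelle_apply(state, cmd)   # None if goal closed
                if new_state != "error":                 # keep only valid steps
                    candidates.append((cost - logprob, proof + [cmd], new_state))
        if not candidates:
            break
        beam = heapq.nsmallest(beam_width, candidates, key=lambda c: c[0])
    return None  # search budget exhausted
```

Ranking by accumulated negative log-probability is one plausible scoring choice; the reranker models mentioned above would replace this score in practice.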
With the rapidly growing population of resident space objects (RSOs) in the near-Earth space environment, detailed information about their condition and capabilities is needed to provide Space Domain Awareness (SDA). Space-based sensing will enable inspection of RSOs at shorter ranges, independent of atmospheric effects, and from all aspects. The use of a sub-THz inverse synthetic aperture radar (ISAR) imaging and sensing system for SDA has been proposed in previous work, demonstrating sub-cm image resolution at ranges of up to 100 km. This work focuses on the recognition of external satellite structures through sequential feature detection and tracking across aligned ISAR images. The Hough transform is employed to detect linear features, which are then tracked throughout the sequence. ISAR imagery is generated via a metaheuristic simulator capable of modelling encounters for a variety of deployment scenarios. Initial frame-to-frame alignment is achieved through a series of affine transformations to facilitate later association between image features. A gradient-by-ratio method is used for edge detection within individual ISAR images, and edge magnitude and direction are subsequently used to inform a double-weighted Hough transform that detects features with high accuracy. Feature evolution across sequences of frames is analysed. It is shown that feature tracking within sequences using the proposed approach increases confidence in feature detection and classification, and an example use case of robust shadowing detection as a feature is presented.
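The following is an illustrative sketch (not the authors' code) of a double-weighted Hough transform in the spirit described above: each edge pixel votes with its gradient magnitude, and only into accumulator bins whose line orientation is consistent with the local edge direction.

```python
# Illustrative double-weighted Hough transform: votes are weighted by edge
# magnitude and restricted to orientations matching the edge direction.
import numpy as np

def weighted_hough(edge_mag, edge_dir, n_theta=180, dir_tol=np.deg2rad(10)):
    h, w = edge_mag.shape
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    diag = int(np.ceil(np.hypot(h, w)))
    acc = np.zeros((2 * diag, n_theta))
    ys, xs = np.nonzero(edge_mag)          # assumes a thresholded edge map
    for y, x in zip(ys, xs):
        # A line's normal is parallel to the local gradient, so compare
        # theta with the edge direction modulo pi.
        dtheta = np.abs(np.angle(np.exp(1j * 2 * (thetas - edge_dir[y, x])))) / 2
        ok = dtheta < dir_tol
        rho = (x * np.cos(thetas[ok]) + y * np.sin(thetas[ok])).astype(int) + diag
        acc[rho, np.nonzero(ok)[0]] += edge_mag[y, x]   # magnitude-weighted vote
    return acc, thetas
```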
Earlier works on low-rank-sparse-decomposition (LRSD)-driven stationary synthetic aperture radar (SAR) imaging have shown significant improvement in the reconstruction-decomposition process. None of the proposed frameworks, however, achieves satisfactory performance when facing the platform residual phase error (PRPE) arising from the instability of airborne platforms. More importantly, despite the significance of real-time processing requirements in remote sensing applications, these prior works have focused only on enhancing the quality of the formed image, not on reducing the computational burden. To address these two concerns, this article presents a fast and unified joint SAR imaging framework in which the dominant sparse objects and the low-rank features of the image background are decomposed and enhanced through a robust LRSD. In particular, our unified algorithm circumvents the tedious task of computing the inverses of large matrices for image formation and takes advantage of recent advances in constrained quadratic programming to handle the unimodular constraint imposed by the PRPE. Furthermore, we extend our approach to ISAR autofocusing and imaging. Specifically, due to the intrinsic sparsity of ISAR images, the LRSD framework is essentially tasked with the recovery of a sparse image. Several experiments based on synthetic and real data validate the superiority of the proposed method in terms of imaging quality and computational cost compared to state-of-the-art methods.
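For readers unfamiliar with LRSD, the sketch below shows a generic low-rank plus sparse decomposition (robust PCA) via the inexact augmented Lagrangian method. It is a baseline illustration only; the paper's unified framework additionally handles the PRPE phase constraint and avoids the large matrix inversions, both omitted here.

```python
# Generic LRSD (robust PCA) sketch: decompose a real-valued matrix X into a
# low-rank part L and a sparse part S by minimizing ||L||_* + lam*||S||_1
# subject to X = L + S, using the inexact augmented Lagrangian method.
import numpy as np

def lrsd(X, lam=None, mu=None, n_iter=100):
    m, n = X.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))        # standard RPCA weight
    mu = mu or 0.25 * m * n / np.abs(X).sum()    # common step-size heuristic
    L = np.zeros_like(X); S = np.zeros_like(X); Y = np.zeros_like(X)
    for _ in range(n_iter):
        # Singular-value thresholding step for the low-rank part
        U, sig, Vt = np.linalg.svd(X - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0)) @ Vt
        # Soft-thresholding step for the sparse part
        R = X - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0)
        Y += mu * (X - L - S)                    # dual update enforcing X = L + S
    return L, S
```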
Background. Dengue outbreaks are a major public health issue, with Brazil reporting 71% of global cases in 2024. Purpose. This study aims to describe the profile of severe dengue patients admitted to Brazilian intensive care units (ICUs) (2012-2024), assess trends over time, describe new-onset complications during the ICU stay, and determine the admission risk factors for developing complications during the ICU stay. Methods. We performed a prospective study of dengue patients from 253 ICUs across 56 hospitals. We used descriptive statistics to characterise the dengue ICU population, logistic regression to identify risk factors for complications during the ICU stay, and a machine learning framework to predict the risk of evolving to complications. Visualisations were generated using ISARIC VERTEX. Results. Of 11,047 admissions, 1,117 (10.1%) evolved to complications, including non-invasive ventilation (437 admissions), invasive ventilation (166), vasopressors (364), blood transfusion (353), and renal replacement therapy (103). Age >80 (OR: 3.10, 95% CI: 2.02-4.92), chronic kidney disease (OR: 2.94, 2.22-3.89), liver cirrhosis (OR: 3.65, 1.82-7.04), low platelets (<50,000 cells/mm3; OR: 2.25, 1.89-2.68), and high leukocytes (>7,000 cells/mm3; OR: 2.47, 2.02-3.03) were significant risk factors for complications. A machine learning tool for predicting complications was proposed, showing good discrimination and calibration. Conclusion. We described a large cohort of dengue patients admitted to ICUs and identified key risk factors for severe dengue complications, such as advanced age, presence of comorbidities, higher leukocyte counts, and lower platelet counts. The proposed prediction tool can be used for early identification and targeted interventions to improve outcomes in dengue-endemic regions.
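As a schematic illustration of the odds-ratio analysis described in the Methods, the snippet below fits a logistic regression and reports ORs with 95% confidence intervals. The data are synthetic, the variable names are our own, and the coefficients are arbitrary; this is not the study's code or data.

```python
# Schematic odds-ratio estimation via logistic regression (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "age_over_80":    rng.integers(0, 2, n),
    "chronic_kidney": rng.integers(0, 2, n),
    "low_platelets":  rng.integers(0, 2, n),   # e.g. <50,000 cells/mm3
})
# Simulate outcomes from an assumed logistic model (coefficients arbitrary)
logit = -2.3 + 1.1 * df["age_over_80"] + 1.0 * df["chronic_kidney"] + 0.8 * df["low_platelets"]
df["complication"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["age_over_80", "chronic_kidney", "low_platelets"]])
fit = sm.Logit(df["complication"], X).fit(disp=0)
odds = np.exp(fit.params)       # odds ratios = exponentiated coefficients
ci = np.exp(fit.conf_int())     # 95% confidence intervals
print(pd.concat([odds.rename("OR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```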
This study investigates the effect of various beam geometries and input-data dimensionalities on the sparse-sampling streak-artifact correction task with U-Nets for clinical CT scans, as a means of incorporating volumetric context into artifact reduction to improve model performance. A total of 22 subjects were retrospectively selected (01.2016-12.2018) from the Technical University of Munich's research hospital, TUM Klinikum rechts der Isar. Sparsely sampled CT volumes were simulated with the Astra toolbox for parallel-, fan-, and cone-beam geometries; 2048 views were taken as full-view scans. 2D and 3D U-Nets were trained and validated on 14 subjects and tested on the remaining 8. For the dimensionality study, in addition to the 512x512 2D CT images, the CT scans were further pre-processed to generate so-called '2.5D' and 3D data: each CT volume was divided into 64x64x64-voxel blocks. The 3D data refers to the individual 64x64x64 blocks. An axial, a coronal, and a sagittal cut through the center of each block yielded three 64x64 2D patches that were rearranged into a single 64x64x3 image, proposed as the 2.5D data. Model performance was assessed with the mean squared error (MSE) and the structural similarity index measure (SSIM). For all geometries, the 2D U-Net trained on axial 2D slices achieved the best MSE and SSIM values, outperforming the 2.5D and 3D input-data dimensions.
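The '2.5D' construction described above is straightforward to express in code. Here is a minimal sketch, assuming a volume whose dimensions are multiples of the block size; function and variable names are our own.

```python
# Sketch of the '2.5D' construction: for each 64x64x64 block, take the
# three orthogonal central slices and stack them as a 3-channel image.
import numpy as np

def to_2p5d(volume, b=64):
    """Split a CT volume into b^3 blocks and build bxbx3 2.5D patches."""
    patches = []
    zs, ys, xs = (s // b for s in volume.shape)
    for i in range(zs):
        for j in range(ys):
            for k in range(xs):
                block = volume[i*b:(i+1)*b, j*b:(j+1)*b, k*b:(k+1)*b]
                axial    = block[b // 2, :, :]   # central axial cut
                coronal  = block[:, b // 2, :]   # central coronal cut
                sagittal = block[:, :, b // 2]   # central sagittal cut
                patches.append(np.stack([axial, coronal, sagittal], axis=-1))
    return np.asarray(patches)                   # (n_blocks, 64, 64, 3)
```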
Improving automatic target recognition (ATR) of ships at sea requires an advanced ISAR processor - one that not only provides focused images but can also determine the pose of the ship. This tells us whether the image shows a profile (vertical-plane) view, a plan (horizontal-plane) view, or some view in between. If the processor can provide this information, then the ATR processor can try to match the images against known vertical or horizontal features of ships and, in conjunction with the estimated ship length, narrow the set of possible identifications. This paper extends the work of Melendez and Bennett [M-B, Ref. 1] by combining a focus algorithm with a method that models the angles of the ship relative to the radar. In M-B the algorithm was limited to a single angle, and the plane of rotation was not determined. That assumption may be adequate for a short-time image, where limited data are available to determine the pose. The present paper, however, models the ship rotation with two angles: an aspect angle, representing rotation in the horizontal plane, and a tilt angle, representing variations in the effective grazing angle to the ship.
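In standard ISAR theory, the two-angle model can be summarized as follows; the notation below is ours, not the paper's, and is offered only as a sketch of how aspect and tilt rates combine.

```latex
% Sketch (our notation): aspect rate \omega_a about the vertical axis and
% tilt rate \omega_t about the horizontal cross-range axis combine into an
% effective rotation vector, which sets each scatterer's Doppler shift.
\[
\boldsymbol{\omega} = \omega_a \,\hat{\mathbf{z}} + \omega_t \,\hat{\mathbf{h}},
\qquad
f_d(\mathbf{r}) = \frac{2}{\lambda}\,
  \left(\boldsymbol{\omega} \times \mathbf{r}\right) \cdot \hat{\mathbf{u}}_{\mathrm{LOS}},
\]
% so the ratio \omega_t / \omega_a orients the cross-range (image) plane
% between a plan view (\omega_t = 0) and a profile view (\omega_a = 0).
```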
The aim of this work is to develop an approach that enables Unmanned Aerial Systems (UAS) to efficiently learn to navigate in large-scale urban environments and transfer their acquired expertise to novel environments. To achieve this, we propose a meta-curriculum training scheme. First, meta-training allows the agent to learn a master policy that generalizes across tasks. The resulting model is then fine-tuned on the downstream tasks. We organize the training curriculum hierarchically, so that the agent is guided from coarse to fine towards the target task. In addition, we introduce Incremental Self-Adaptive Reinforcement learning (ISAR), an algorithm that combines the ideas of incremental learning and meta-reinforcement learning (MRL). In contrast to traditional reinforcement learning (RL), which focuses on acquiring a policy for a specific task, MRL aims to learn a policy with fast transfer ability to novel tasks. However, the MRL training process is time-consuming, whereas our proposed ISAR algorithm achieves faster convergence than conventional MRL. We evaluate the proposed methodologies in simulated environments and demonstrate that this training philosophy, in conjunction with the ISAR algorithm, significantly improves the convergence speed for navigation in large-scale cities and the adaptation proficiency in novel environments.
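To ground the meta-training stage, here is a first-order meta-learning loop in the Reptile style, assuming a PyTorch-style policy. This is a generic MRL illustration, not the ISAR algorithm itself; `sample_task` and `inner_update` are hypothetical placeholders for the navigation scenarios and task-specific RL updates.

```python
# Generic first-order meta-training loop (Reptile-style), shown only as an
# illustration of MRL; not the paper's ISAR algorithm.
import copy

def meta_train(policy, sample_task, inner_update, meta_lr=0.1,
               meta_steps=1000, inner_steps=10):
    for _ in range(meta_steps):
        task = sample_task()                  # e.g. one urban navigation scenario
        adapted = copy.deepcopy(policy)
        for _ in range(inner_steps):          # task-specific RL updates
            inner_update(adapted, task)
        # Move the master policy toward the adapted weights (Reptile rule)
        for p, q in zip(policy.parameters(), adapted.parameters()):
            p.data += meta_lr * (q.data - p.data)
    return policy                             # master policy for fine-tuning
```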
Inverse synthetic aperture radar (ISAR) super-resolution imaging technology is widely applied to space-target imaging. However, the performance limits of super-resolution imaging algorithms remain largely unexplored. This paper investigates these limits by analyzing the bounds attainable by super-resolution algorithms for space targets and examines the relationships between the key contributing factors. In particular, drawing on the established mathematical theory of computational resolution limits (CRL) for line-spectrum reconstruction, we derive mathematical expressions for the upper and lower bounds of cross-range super-resolution imaging, based on transformations of the ISAR imaging model. Leveraging these explicit expressions, we first explore the factors influencing the bounds, such as the traditional Rayleigh limit, the number of scatterers, and the peak signal-to-noise ratio (PSNR) of the scatterers. We then elucidate the minimum resource requirements that CRL theory imposes on ISAR imaging to meet a desired cross-range resolution; when these requirements cannot be met, studying super-resolution algorithms becomes unnecessary in practice. Furthermore, the tradeoffs between the cumulative rotation angle, the radar transmit energy, and other contributing factors in optimizing the resolution are analyzed. Simulations demonstrate these tradeoffs across various ISAR imaging scenarios, revealing their strong dependence on the specific imaging targets.
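For reference, the classical Rayleigh limit against which the super-resolution bounds are measured is the standard ISAR cross-range resolution; the notation below is the conventional one, not necessarily the paper's.

```latex
% Classical cross-range (Rayleigh) resolution of ISAR: wavelength \lambda
% and cumulative rotation angle \Delta\theta set the conventional limit
\[
\rho_{\mathrm{cr}} = \frac{\lambda}{2\,\Delta\theta},
\]
% so super-resolution factors are expressed relative to \rho_{\mathrm{cr}},
% and the derived bounds tighten or relax with scatterer count and PSNR.
```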
Inverse Synthetic Aperture Radar (ISAR) imaging presents a formidable challenge for small everyday objects due to their limited Radar Cross-Section (RCS) and the inherent resolution constraints of radar systems. Existing ISAR reconstruction methods, including backprojection (BP), often require complex setups and controlled environments, rendering them impractical for many real-world noisy scenarios. In this paper, we propose a novel Analysis-through-Synthesis (ATS) framework enabled by Neural Radiance Fields (NeRF) for high-resolution coherent ISAR imaging of small objects using sparse and noisy Ultra-Wideband (UWB) radar data with an inexpensive and portable setup. Our end-to-end framework integrates ultra-wideband radar wave propagation, reflection characteristics, and scene priors, enabling efficient 2D scene reconstruction without the need for costly anechoic chambers or complex measurement test beds. Through qualitative and quantitative comparisons, we demonstrate that the proposed method outperforms traditional techniques and generates ISAR images of complex scenes with multiple targets and intricate structures in Non-Line-of-Sight (NLOS) and noisy scenarios, particularly with a limited number of views and sparse UWB radar scans. This work represents a significant step towards practical, cost-effective ISAR imaging of small everyday objects, with broad implications for robotics and mobile sensing applications.
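The analysis-through-synthesis idea can be sketched as a gradient-based fitting loop: synthesize radar returns from the current scene estimate, compare against measurements, and update the scene. The sketch below assumes a PyTorch NeRF-style scene model `scene` and a hypothetical differentiable forward model `render_radar`; it is a conceptual outline, not the authors' pipeline.

```python
# Conceptual analysis-through-synthesis loop for radar scene reconstruction.
# `scene` and `render_radar` are assumed placeholders, not the paper's API.
import torch

def ats_reconstruct(scene, render_radar, measurements, poses,
                    n_iter=2000, lr=1e-3):
    opt = torch.optim.Adam(scene.parameters(), lr=lr)
    for _ in range(n_iter):
        opt.zero_grad()
        loss = 0.0
        for y, pose in zip(measurements, poses):   # sparse radar views
            y_hat = render_radar(scene, pose)      # synthesize UWB return
            loss = loss + torch.mean(torch.abs(y_hat - y) ** 2)
        loss.backward()                            # analysis step via gradients
        opt.step()
    return scene                                   # query for the ISAR image
```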
The effect of noise on Inverse Synthetic Aperture Radar (ISAR) with sparse apertures is a challenging issue for high-resolution image reconstruction at low Signal-to-Noise Ratios (SNRs). It is well known that image resolution is governed by the bandwidth of the transmitted signal and the Coherent Processing Interval (CPI) in the range and azimuth dimensions, respectively. To reduce the noise effect and thus increase the two-dimensional resolution of Unmanned Aerial Vehicle (UAV) images, we propose the Fast Reweighted Atomic Norm Denoising (FRAND) algorithm, which incorporates weighted atomic norm minimization. To solve the resulting problem, a Two-Dimensional Alternating Direction Method of Multipliers (2D-ADMM) algorithm is developed to speed up the implementation. Assuming sparse apertures for ISAR images of UAVs, we compare the proposed method with the MUltiple SIgnal Classification (MUSIC), Cadzow, and SL0 methods at different SNRs. Simulation results show the superiority of FRAND at low SNRs in terms of the Mean-Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index Measure (SSIM) criteria.
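For orientation, the generic ADMM scaffold underlying this kind of denoising problem, min_x 0.5||x - y||^2 + tau*g(x), is shown below. The regularizer prox `prox_g` is left as a placeholder (FRAND realizes the weighted atomic norm via a semidefinite formulation); this is not the paper's 2D-ADMM derivation.

```python
# Generic scaled-form ADMM for min_x 0.5*||x - y||^2 + tau*g(x),
# splitting x = z; `prox_g` is a placeholder for the regularizer prox.
import numpy as np

def admm_denoise(y, prox_g, tau=1.0, rho=1.0, n_iter=200):
    x = y.copy()
    z = y.copy()
    u = np.zeros_like(y)                         # scaled dual variable
    for _ in range(n_iter):
        x = (y + rho * (z - u)) / (1 + rho)      # prox of the quadratic term
        z = prox_g(x + u, tau / rho)             # prox of the regularizer
        u += x - z                               # dual ascent step
    return z
```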