Abstract: We search for efficient disentanglers on random Clifford circuits of two-qubit gates arranged in a brick-wall pattern, using the proximal policy optimization (PPO) algorithm \cite{schulman2017proximalpolicyoptimizationalgorithms}. Disentanglers are defined as a set of projective measurements inserted between consecutive entangling layers. An efficient disentangler is a set of projective measurements that minimizes the averaged von Neumann entropy of the final state using the fewest possible projections. The problem is naturally amenable to reinforcement learning: the state is the binary matrix representing the projective measurements along the circuit, and the actions are bit-flip operations on this matrix that add or delete measurements at specified locations. The reward depends on the averaged von Neumann entropy of the final state and on the measurement configuration, so that the agent learns the optimal policy taking it from the initial state with no measurements to the measurement configuration that minimizes the entanglement entropy. Our results indicate that the number of measurements required to disentangle a random quantum circuit is drastically smaller than what numerical studies of measurement-induced phase transitions suggest. Additionally, the reinforcement learning procedure enables us to characterize the pattern of optimal disentanglers, which is not accessible in studies of measurement-induced phase transitions.
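To make the formulation above concrete, the following is a minimal sketch of the measurement-placement environment as a Markov decision process: the state is a binary matrix over measurement layers and qubits, an action toggles one entry, and the reward combines the averaged final-state entropy with a penalty on the number of measurements. The class name, the penalty weight, and the helper \texttt{average\_final\_entropy} are illustrative assumptions, not the authors' implementation; in practice the helper would run the random Clifford brick-wall circuit with a stabilizer simulator.

\begin{verbatim}
import numpy as np

class DisentanglerEnv:
    """Sketch of the RL environment: place/remove projective measurements."""

    def __init__(self, num_layers, num_qubits, measurement_penalty=0.01):
        # Binary matrix: rows = measurement layers between entangling layers,
        # columns = qubits; a 1 means "project this qubit at this layer".
        self.shape = (num_layers, num_qubits)
        self.penalty = measurement_penalty
        self.mask = np.zeros(self.shape, dtype=np.int8)

    def reset(self):
        # Start from the configuration with no measurements.
        self.mask[:] = 0
        return self.mask.copy()

    def step(self, action):
        # `action` is a flat index into the binary matrix; flip that bit,
        # i.e. add or delete a measurement at the specified location.
        layer, qubit = np.unravel_index(action, self.shape)
        self.mask[layer, qubit] ^= 1
        entropy = average_final_entropy(self.mask)
        # Reward favours a low averaged von Neumann entropy of the final
        # state while penalizing the total number of measurements.
        reward = -entropy - self.penalty * float(self.mask.sum())
        done = entropy == 0.0
        return self.mask.copy(), reward, done, {}

def average_final_entropy(mask):
    # Placeholder (hypothetical): the actual study would simulate the random
    # Clifford circuit with projective measurements at the 1-entries of `mask`
    # and average the final-state von Neumann entropy over realizations.
    return float(np.random.rand())
\end{verbatim}

A PPO agent (e.g. from a standard RL library) can then be trained on this environment; the sketch only fixes the state, action, and reward interfaces described in the abstract.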