Abstract:
Deep Neural Networks (DNNs) have shown early promise for inverse design through their ability to arrive at working designs much faster than conventional optimization techniques. Current approaches, however, require complicated workflows that train more than one DNN to address the problem of non-uniqueness in the inversion, and the emphasis on speed has overshadowed the far more important consideration of solution optimality. We propose and demonstrate a simplified workflow that pairs a forward-model DNN with evolutionary algorithms, which are widely used for inverse design. Our evolutionary search in forward-model space is global and exploits the massive parallelism of modern GPUs for speedy inversion. We propose a hybrid approach in which the DNN is used only for preselection and initialization; this approach is more effective at optimization than a standalone DNN and performs nearly as well as a vanilla evolutionary search with a significantly reduced function-evaluation budget. We finally show the utility of an iterative procedure for building the training dataset, which further boosts the effectiveness of this approach.