Abstract

We describe attempts to design protein sequences by inverting the protein structure prediction algorithm ESMFold. State-of-the-art protein structure prediction methods achieve high accuracy by relying on evolutionary patterns derived from either multiple sequence alignments (AlphaFold, RoseTTAFold) or pretrained protein language models (PLMs; ESMFold, OmegaFold). In principle, inverting these networks allows protein sequences to be designed to satisfy one or more design objectives, such as high prediction confidence, predicted protein binding, or other geometric constraints that can be expressed as loss functions. In practice, sequences designed with an inverted AlphaFold model, termed AFDesign, contain unnatural sequence profiles and have been shown to express poorly, whereas an inverted RoseTTAFold network has been shown to be sensitive to adversarial sequences. Here, we demonstrate that these limitations do not extend to prediction networks that incorporate PLMs, such as ESMFold. Using an inverted ESMFold model, termed ESM-Design, we generated sequences with profiles that are both more native-like and more likely to express than sequences generated with AFDesign, although less likely to express than sequences rescued by the structure-based design method ProteinMPNN. However, the safeguard offered by the PLM came with a steep increase in memory consumption, preventing proteins longer than 150 residues from being modeled on a single GPU with 80 GB of VRAM. During this investigation, we also examined the effect of different sequence initialization schemes, finding that random sampling of discrete amino acids improves convergence and model quality over continuous random initialization. Finally, we showed how this approach can be used to introduce sequence and structure diversification into small proteins such as ubiquitin while respecting the sequence conservation of active-site residues. Our results highlight the effects of architectural differences between structure prediction networks on zero-shot protein design.
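To make the inversion idea concrete, the following is a minimal, self-contained sketch of gradient-based sequence design through a frozen, differentiable predictor; it is not the authors' ESM-Design implementation. The function `predict_confidence` is a hypothetical stand-in for an ESMFold-style forward pass returning per-residue confidence (pLDDT), replaced here by a fixed random projection so the example runs end to end, and the discrete one-hot initialization mirrors the scheme reported above to improve convergence over continuous random starts.

```python
import torch

AA = "ACDEFGHIKLMNPQRSTVWY"
LENGTH = 50  # length of the designed protein

# Hypothetical stand-in for a frozen, differentiable structure predictor
# (e.g. an ESMFold-style model returning per-residue confidence in [0, 1]).
# A fixed random projection keeps the sketch runnable; a real design run
# would substitute the predictor's actual forward pass here.
_proxy_weights = torch.randn(len(AA), 1)

def predict_confidence(probs: torch.Tensor) -> torch.Tensor:
    """Map an (L, 20) amino-acid probability matrix to per-residue confidence."""
    return torch.sigmoid(probs @ _proxy_weights).squeeze(-1)

# Discrete initialization: sample random amino acids and start the logits
# at their one-hot encoding.
start = torch.randint(len(AA), (LENGTH,))
logits = torch.nn.functional.one_hot(start, len(AA)).float().requires_grad_()

optimizer = torch.optim.Adam([logits], lr=0.1)
for step in range(200):
    probs = torch.softmax(logits, dim=-1)     # relaxed ("soft") sequence
    loss = -predict_confidence(probs).mean()  # maximize predicted confidence
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Discretize the optimized sequence by taking the most probable amino acid.
design = "".join(AA[i] for i in logits.argmax(dim=-1))
print(design)
```

In an actual design run, the loss could combine prediction confidence with binding or geometric terms as described above, and the predictor forward pass, rather than the optimization loop, is where the memory constraints noted in the abstract arise.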