To characterize cell types, cellular functions, and intracellular processes, an understanding of the differences between individual cells is required. Although microscopy approaches have made tremendous progress in imaging cells in different contexts, the analysis of the resulting image data sets remains a long-standing, unsolved problem. The few robust cell segmentation approaches that exist often rely on multiple cellular markers and on complex, time-consuming image analysis. Recently developed deep learning approaches can address some of these challenges, but they require vast amounts of data and well-curated reference data sets for algorithm training. We propose an alternative experimental and computational approach, called CellDissect, in which specimen preparation and data acquisition are first optimized, before image processing, to generate high-quality images that are easier to analyze computationally. By focusing on fixed suspension cells and dissociated adherent cells, CellDissect relies only on widefield images to identify cell boundaries and on nuclear staining to automatically segment cells in two dimensions and nuclei in three dimensions. This segmentation can be performed on a desktop computer or, for higher throughput, on a computing cluster. We compare and evaluate the accuracy of different nuclear segmentation approaches against manual expert segmentation for multiple cell lines acquired with different imaging modalities.
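The abstract does not describe CellDissect's algorithms in detail. As a loose illustration of the kind of processing involved in automated three-dimensional nuclear segmentation from a nuclear stain, the following Python sketch uses a generic scikit-image pipeline (Gaussian smoothing, Otsu thresholding, distance-transform watershed). It is an assumption-labeled example, not CellDissect's actual method; the function `segment_nuclei_3d` and all parameter values are hypothetical.

```python
"""Minimal sketch of 3-D nuclear segmentation from a nuclear stain.

NOT the CellDissect implementation: a generic scikit-image pipeline
illustrating the processing the abstract describes. All names and
parameter values below are illustrative assumptions.
"""
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gaussian, threshold_otsu
from skimage.morphology import remove_small_objects
from skimage.feature import peak_local_max
from skimage.segmentation import watershed


def segment_nuclei_3d(stack: np.ndarray, sigma: float = 2.0,
                      min_voxels: int = 500) -> np.ndarray:
    """Return a labeled volume of nuclei from a (z, y, x) stack."""
    # Smooth, then separate foreground from background globally.
    smoothed = gaussian(stack.astype(np.float32), sigma=sigma)
    mask = smoothed > threshold_otsu(smoothed)
    mask = remove_small_objects(mask, min_size=min_voxels)

    # Split touching nuclei with a distance-transform watershed:
    # local maxima of the distance map seed one marker per nucleus.
    distance = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(distance, labels=mask, min_distance=10)
    markers = np.zeros(mask.shape, dtype=np.int32)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=mask)
```

A pipeline along these lines runs equally well on a desktop or, applied per image in parallel, on a computing cluster, which is consistent with the deployment options the abstract mentions.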