We introduce TADA, a simple yet effective approach that takes textual descriptions and produces expressive 3D avatars with high-quality geometry and lifelike textures that can be animated and rendered with traditional graphics pipelines. Existing text-based character generation methods are limited in terms of geometry and texture quality and cannot be realistically animated due to misalignment between the geometry and the texture, particularly in the face region. To address these limitations, TADA leverages the synergy of a 2D diffusion model and a parametric body model. Specifically, we derive a high-resolution upsampled version of SMPL-X with a displacement layer and a texture map, and use hierarchical rendering with score distillation sampling (SDS) to create high-quality, detailed, holistic 3D avatars from text. To ensure alignment between the geometry and the texture, we render normal and RGB images of the generated character and exploit their latent embeddings during the SDS optimization. We further drive the character's face with multiple expressions during optimization, ensuring that its semantics remain consistent with the original SMPL-X model. Both qualitative and quantitative evaluations show that TADA significantly surpasses existing approaches. TADA enables the large-scale creation of digital characters that are ready for animation and rendering, and also supports text-guided editing. The code is publicly available for research purposes at tada.is.tue.mpg.de.
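For reference, the score distillation sampling objective mentioned above follows the standard formulation from prior text-to-3D work; the notation below (timestep weighting $w(t)$, diffusion noise prediction $\hat{\epsilon}_\phi$, noised rendering $x_t$, text prompt $y$) is assumed from that literature rather than defined here, and is only a sketch of how the SDS gradient is typically written:

\[
\nabla_\theta \mathcal{L}_{\text{SDS}}\big(\phi, x = g(\theta)\big)
\;=\;
\mathbb{E}_{t,\epsilon}\!\left[\, w(t)\,\big(\hat{\epsilon}_\phi(x_t;\, y,\, t) - \epsilon\big)\, \frac{\partial x}{\partial \theta} \right],
\]

where $g(\theta)$ denotes a rendering (normal or RGB image) of the avatar parameters $\theta$ and $\epsilon \sim \mathcal{N}(0, I)$ is the injected noise.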