ChatGPT, an OpenAI language model, presents a potential opportunity to improve health literacy by generating accessible and understandable health information. We assessed the readability, a key component of health literacy, of dermatology health information generated by ChatGPT compared with public educational dermatology resources from the American Academy of Dermatology Association (AAD) website, and whether supplemental prompting could improve the readability of ChatGPT's initial outputs. Sunscreen FAQs and Melanoma FAQs from the AAD website were entered into ChatGPT, and we introduced two supplemental prompts: "I don't understand, please clarify" (first prompt) and "I still don't understand, please clarify" (second prompt). The "Average Readability" score was computed by averaging the results of five readability tests that report grade levels. The AAD Sunscreen FAQs and Melanoma FAQs had readability levels of 9.2 and 9.4 (both 9th grade), respectively, and the original ChatGPT outputs had readability levels of 9.6 and 10.4 (9th and 10th grade), respectively, with no significant difference in readability between the AAD and ChatGPT materials for either question set (p=0.315 and p=0.153). For the sunscreen FAQs, the first and second prompts generated material with significantly lower readability levels than the AAD material (6.0, p=0.005, and 4.4, p<0.001, respectively). For the melanoma FAQs, prompting lowered readability levels to 8.0 (first prompt, p=0.079) and 7.4 (second prompt, p=0.007), with only the second prompt reaching statistical significance relative to the AAD material. Overall, we found that the readability of dermatology health information is often inadequate, and we demonstrate that ChatGPT may enhance the readability and accessibility of public dermatology educational materials, especially when prompted to improve readability.
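
The two-step clarification protocol can be reproduced programmatically. The sketch below uses the OpenAI Python client; note that the abstract does not specify how ChatGPT was accessed, so the client usage and the "gpt-3.5-turbo" model name are assumptions, not the study's method.

```python
# Minimal sketch of the two-step clarification prompting protocol.
# Assumptions (not stated in the abstract): access via the OpenAI Python
# client and the "gpt-3.5-turbo" model; the study itself used ChatGPT.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(messages):
    """Send the running conversation and return the assistant's reply."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumption: no model is named in the abstract
        messages=messages,
    )
    return resp.choices[0].message.content


def generate_with_prompting(faq_text):
    """Return the original output plus the two supplemental-prompt outputs."""
    messages = [{"role": "user", "content": faq_text}]
    outputs = []
    for follow_up in (None,
                      "I don't understand, please clarify",
                      "I still don't understand, please clarify"):
        if follow_up is not None:
            messages.append({"role": "user", "content": follow_up})
        reply = ask(messages)
        messages.append({"role": "assistant", "content": reply})
        outputs.append(reply)
    return outputs  # [original output, first-prompt output, second-prompt output]
```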
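
The "Average Readability" computation itself is a simple mean of grade-level scores. The abstract does not name the five readability tests, so the sketch below assumes five widely used grade-level formulas as implemented in the textstat library; the specific choice of formulas is an assumption.

```python
# Sketch of the "Average Readability" score: the mean of five readability
# formulas that report U.S. grade levels. The particular five tests are an
# assumption; the abstract does not name them.
import textstat

GRADE_LEVEL_TESTS = [
    textstat.flesch_kincaid_grade,        # Flesch-Kincaid Grade Level
    textstat.gunning_fog,                 # Gunning Fog Index
    textstat.smog_index,                  # SMOG Index
    textstat.coleman_liau_index,          # Coleman-Liau Index
    textstat.automated_readability_index, # Automated Readability Index
]


def average_readability(text):
    """Mean grade level across the five tests (e.g., 9.2 ~ 9th grade)."""
    return sum(test(text) for test in GRADE_LEVEL_TESTS) / len(GRADE_LEVEL_TESTS)
```

Under this scheme, a score of 9.2 corresponds roughly to a 9th-grade reading level, matching how the abstract reports the AAD and ChatGPT scores.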