With misinformation on the rise and social platforms accelerating its spread, it is crucial to design technology that can dissuade people from believing misinformation. At a time of great uncertainty about the role Generative AI will play in amplifying misinformation, we believe the same technology can be leveraged to correct misinformed beliefs. Large Language Models (LLMs) can access and process far more information, far faster, than humans, and can therefore substantially reduce the cognitive load of fact-checking. However, research shows that people are often unwilling to change their beliefs even when presented with contradictory evidence. Our goal is therefore to study how an LLM can best present information that helps a person abandon a misinformed belief: given that a person is misinformed, what information should we offer, and in what form, to correct their belief? Because people on social platforms may not engage in a back-and-forth discussion, we design the pipeline so that the model gets only one attempt to persuade the person.
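To make the one-shot constraint concrete, the sketch below shows one way such a pipeline might be structured. It is a minimal illustration, not our actual implementation: the `Claim` structure, the prompt wording, and the `llm_generate` wrapper are all hypothetical placeholders for whatever retrieval and chat-model components are ultimately used.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    """A misinformed belief as stated by the user (illustrative structure)."""
    statement: str        # the claim the person believes
    evidence: list[str]   # fact-checked sources retrieved for the claim

def llm_generate(prompt: str) -> str:
    """Hypothetical wrapper around the underlying chat model; replace with
    the actual LLM API in use."""
    raise NotImplementedError

def one_shot_correction(claim: Claim) -> str:
    """Produce a single persuasive reply: the model gets no follow-up turns,
    so all evidence and framing must fit in this one message."""
    sources = "\n".join(f"- {s}" for s in claim.evidence)
    prompt = (
        "A person believes the following claim, which is misinformation:\n"
        f"{claim.statement}\n\n"
        "Using only the evidence below, write ONE respectful, self-contained "
        "reply that corrects the belief. You will not get a second turn.\n\n"
        f"Evidence:\n{sources}"
    )
    return llm_generate(prompt)
```

The single-turn design mirrors the social-platform setting described above: since the reader may never respond, the entire corrective argument, including evidence and framing, must be packed into one message.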