While research on flagging misinformation and disinformation has received much attention, little is known about how flagging propaganda sources affects news sharing on social media. Using a quasi-experimental design, we test the effect of source flagging on users' actual sharing behaviors. Analyzing tweets (N = 49,126) posted by 30 Chinese state media accounts before and after Twitter began labeling state-affiliated media, we reveal the corrective role that flagging plays in deterring the sharing of information from propaganda sources. The findings show that the corrective effect occurs immediately after these accounts are labeled as state-affiliated media and leads to a long-term reduction in news sharing, particularly for political content. The results advance our understanding of how flagging efforts affect user engagement in real-world conversations and highlight that corrective measures take effect through a dynamic process.