Crowdsourcing systems have gained considerable interest and adoption in recent years. In these systems, workers contribute answers to tasks in exchange for a reward. One important research problem for crowdsourcing systems is truth discovery, which aims to aggregate noisy answers contributed by the workers to obtain the correct answer (truth) of each task. However, since the collected answers are highly prone to the workers' biases, aggregating these biased answers without proper treatment will unavoidably lead to discriminatory truth discovery results for particular race, gender, and political groups. In this paper, we address the fairness issue for truth discovery from biased crowdsourced answers. First, we define a new fairness notion named θ-disparity for truth discovery. Intuitively, θ-disparity bounds the difference in the positive rate of the inferred truth between protected and unprotected groups. Second, we design three fairness-enhancing methods, including two straw-man methods (Pre-TD and Post-TD) and an in-processing method named FairTD, to discover fair truth from crowdsourced answers with bias. Pre-TD is a pre-processing method that removes the bias in workers' answers before truth discovery. Post-TD is a post-processing method that applies additional treatment to the inferred truth to make it satisfy θ-disparity. FairTD incorporates fairness into truth discovery: it estimates both worker bias and truth iteratively, and dynamically selects the bias to be removed from the answers during truth inference. We perform an extensive set of experiments on both synthetic and real-world crowdsourcing datasets. Our results demonstrate that, among all these approaches, FairTD achieves the best trade-off between fairness and accuracy.
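The abstract describes θ-disparity only informally, as a bound on the difference in positive rates between groups. A minimal sketch of what such a check could look like is below; the function names and the exact form of the condition (absolute difference of positive rates at most θ) are assumptions for illustration, not the paper's actual definition or API:

```python
# Hedged sketch of a theta-disparity check: the inferred truths are binary
# labels (1 = positive), and the condition is assumed to hold when the
# absolute difference between the groups' positive rates is at most theta.

def positive_rate(truths):
    """Fraction of tasks whose inferred truth is the positive label (1)."""
    return sum(truths) / len(truths)

def satisfies_theta_disparity(protected_truths, unprotected_truths, theta):
    """True iff |positive_rate(protected) - positive_rate(unprotected)| <= theta."""
    gap = abs(positive_rate(protected_truths) - positive_rate(unprotected_truths))
    return gap <= theta

# Example: inferred binary truths for tasks concerning two demographic groups.
protected = [1, 0, 0, 1, 0]    # positive rate 0.4
unprotected = [1, 1, 0, 1, 0]  # positive rate 0.6
print(satisfies_theta_disparity(protected, unprotected, theta=0.25))  # prints True
```

Under this reading, a pre-processing method like Pre-TD would alter the answers so the inferred truths pass this check, while a post-processing method like Post-TD would adjust the inferred truths directly until the check passes.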