Algorithmic bias and fairness in the context of graph mining have largely remained nascent. The sparse literature on fair graph mining has almost exclusively focused on group-based fairness notions. However, the notion of individual fairness, which promises fairness at a much finer granularity, has not been well studied. This paper presents the first principled study of Individual Fairness on gRaph Mining (InFoRM). First, we present a generic definition of individual fairness for graph mining, which naturally leads to a quantitative measure of the potential bias in graph mining results. Second, we propose three mutually complementary algorithmic frameworks to mitigate the proposed individual bias measure, namely debiasing the input graph, debiasing the mining model, and debiasing the mining results. Each algorithmic framework is formulated from the optimization perspective, admits effective and efficient solvers, and is applicable to multiple graph mining tasks. Third, since accommodating individual fairness is likely to change the original graph mining results obtained without the fairness consideration, we conduct a thorough analysis and develop an upper bound to characterize the cost (i.e., the difference between the graph mining results with and without the fairness consideration). We perform extensive experimental evaluations on real-world datasets to demonstrate the efficacy and generality of the proposed methods.
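To make the "quantitative measure of potential bias" concrete, the sketch below computes a Laplacian-based individual bias score, Tr(Y^T L_S Y), where Y holds the mining results (one row per node) and L_S is the graph Laplacian of a node-to-node similarity matrix S. This specific formulation is an assumption for illustration (the abstract does not give the formula); the function names and the choice of NumPy are ours. The score equals a weighted sum of squared result differences over all node pairs, so it is zero exactly when similar nodes receive identical results.

```python
import numpy as np

def individual_bias(Y, S):
    """Illustrative individual-bias score Tr(Y^T L_S Y).

    Y : (n, d) array of graph mining results, one row per node.
    S : (n, n) symmetric nonnegative node-similarity matrix.
    """
    # Graph Laplacian of the similarity matrix: L_S = D - S,
    # where D is the diagonal degree matrix of S.
    L = np.diag(S.sum(axis=1)) - S
    return float(np.trace(Y.T @ L @ Y))

def individual_bias_pairwise(Y, S):
    """Equivalent pairwise form: (1/2) * sum_ij S_ij * ||y_i - y_j||^2.

    Same value as individual_bias; written out to show that the score
    penalizes similar nodes (large S_ij) receiving dissimilar results.
    """
    n = Y.shape[0]
    total = 0.0
    for i in range(n):
        for j in range(n):
            diff = Y[i] - Y[j]
            total += S[i, j] * float(diff @ diff)
    return 0.5 * total
```

Under this formulation, "debiasing the mining results" can be read as minimizing a loss of the form ||Y - Y0||^2 + alpha * Tr(Y^T L_S Y), trading closeness to the original results Y0 against the bias score; the other two frameworks move the same regularization into the input graph or the mining model, respectively.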