Authors: Zixuan Wang, Zhicheng Zhao, Fei Su Description: Deep learning methods have dramatically increased tracking accuracy, benefiting from powerful feature extractors. Among these methods, Siamese-based trackers perform well. However, when the camera shakes, the target is easily lost because camera judder is not taken into account and the position of each pixel changes drastically between frames. In particular, tracking performance degrades dramatically when the target is small and fast-moving, as in UAV tracking. In this paper, the S-Siam framework is proposed to address this problem and improve real-time tracking performance. By estimating where the object is going to move and stabilizing each frame accordingly, the camera view is adjusted adaptively to keep the object at its original position. Experimental results on the VOT2018 dataset show that the proposed method obtains an EAO score of 0.449 and achieves a 10% robustness improvement over three existing trackers, i.e., SiamFC, SiamMask, and SiamRPN++, which demonstrates the effectiveness of the proposed algorithm.
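The abstract's core idea — predict where the object will move, then shift the frame so the object stays near its original position — can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's actual method: it uses a simple constant-velocity motion model and a plain array shift (`numpy.roll`) in place of the paper's adaptive camera adjustment, and the function names `predict_displacement` and `stabilize` are hypothetical.

```python
import numpy as np

def predict_displacement(prev_positions):
    # Constant-velocity assumption: the object keeps moving with the
    # same (dx, dy) it showed between the last two frames.
    (x0, y0), (x1, y1) = prev_positions[-2], prev_positions[-1]
    return (x1 - x0, y1 - y0)

def stabilize(frame, displacement):
    # Shift the frame opposite to the predicted motion so the object
    # stays near its original position in the stabilized view.
    dx, dy = displacement
    return np.roll(frame, shift=(-dy, -dx), axis=(0, 1))

# Toy example: a single bright "object" pixel moving right 2 px/frame.
frame = np.zeros((8, 8), dtype=np.uint8)
frame[4, 6] = 255                              # object now at column 6
disp = predict_displacement([(2, 4), (4, 4)])  # predicted (dx, dy) = (2, 0)
stab = stabilize(frame, disp)
print(stab[4, 4])                              # object back at column 4
```

A real tracker would crop a search window at the compensated position rather than roll the whole frame, but the sketch shows why compensation keeps the per-pixel displacement between frames small, which is the property Siamese matching relies on.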