Training a deep neural network model usually requires multiple iterations, or epochs, over the training data set in order to better estimate the parameters of the model. In continual learning, however, this process results in catastrophic forgetting, one of the core issues of the domain. Most approaches proposed for this issue try to compensate for the effects of parameter updates in the batch incremental setup, in which the model visits many samples over several epochs. However, it is not realistic to expect that training data will always be fed to the model in a batch incremental setup. This paper proposes a chaotic stream learner that mimics the chaotic behavior of biological neurons and does not update network parameters. In addition, it can work with fewer samples than deep learning models in the stream learning setup. Our experiments on MNIST, CIFAR10, and Omniglot show that, by its nature, the chaotic stream learner suffers less catastrophic forgetting than a CNN model in continual learning.