Conversational information access is an emerging research area. Currently, human evaluation is used for end-to-end system evaluation, which is both very time- and resource-intensive at scale, and thus becomes a bottleneck to progress. As an alternative, we propose automated evaluation by means of simulating users. Our user simulator aims to generate responses that a real human would give by considering both individual preferences and the general flow of interaction with the system. We evaluate our simulation approach on an item recommendation task by comparing three existing conversational recommender systems. We show that preference modeling and task-specific interaction models both contribute to more realistic simulations, and can help achieve high correlation between automatic evaluation measures and manual human assessments.
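
To make the idea concrete, the following is a minimal, purely illustrative Python sketch (all class, method, and attribute names are hypothetical and not taken from our implementation) of a simulated user whose replies combine a preference model, capturing what the user likes, with a simple interaction model, capturing which kind of user turn follows the system's last action.

import random

class SimulatedUser:
    """Toy simulated user: an interaction model decides *what kind* of turn
    to produce next; a preference model decides *its content*.
    Illustrative sketch only, under assumed dialogue acts and data structures."""

    # Assumed interaction model: which user acts may follow each system act.
    NEXT_ACTS = {
        "ELICIT": ["DISCLOSE"],     # system asks about preferences -> user discloses
        "RECOMMEND": ["JUDGE"],     # system recommends an item -> user accepts/rejects
        "OTHER": ["CHITCHAT"],
    }

    def __init__(self, preferences):
        # Assumed preference model: topic/attribute -> +1 (like) or -1 (dislike).
        self.preferences = preferences

    def respond(self, system_act, topic=None):
        """Generate the user's response to a single system turn."""
        act = random.choice(self.NEXT_ACTS.get(system_act, self.NEXT_ACTS["OTHER"]))
        if act == "DISCLOSE" and topic:
            liked = self.preferences.get(topic, 0) > 0
            return f"{'I really like' if liked else 'I am not into'} {topic}."
        if act == "JUDGE" and topic:
            accepted = self.preferences.get(topic, 0) > 0
            return "Sounds great, I'll take it!" if accepted else "Not for me, anything else?"
        return "Tell me more."

if __name__ == "__main__":
    user = SimulatedUser({"comedies": 1, "horror movies": -1})
    print(user.respond("ELICIT", topic="comedies"))          # discloses a preference
    print(user.respond("RECOMMEND", topic="horror movies"))  # rejects the recommendation

A simulator of this kind can be run against a conversational recommender system in place of human subjects; aggregating the simulated users' acceptance decisions then yields the automatic evaluation measures whose correlation with manual human assessments is reported above.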