Advanced information systems that use human information, such as safe-driving support vehicles and communication robots, have attracted attention. Designing such systems requires a model of how people search for visual information. We focus on human visual attention, which is closely related to visual search behavior, and propose a computational model that estimates visual attention while a person carries out a visual search task. Existing models estimate visual attention from the mean difference between the visual feature distribution of the target stimulus and those of the other stimuli, so they perform poorly when the task is difficult. To handle difficult tasks, we draw on the conjunction search of feature integration theory and estimate visual attention using the variance ratio between the local visual feature distribution of the target stimulus and that of each of the other stimuli. A visual search experiment confirmed the effectiveness of our computational model.
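The abstract does not give the exact formula for the variance ratio, so the following is only an illustrative sketch under an assumption: that the score resembles a Fisher-style ratio of between-distribution variance to pooled within-distribution variance, computed between the target's local feature samples and one distractor's samples. The function name and the toy feature values are hypothetical.

```python
import numpy as np

def variance_ratio(target_feats, distractor_feats):
    """Fisher-style variance ratio between the target's local feature
    samples and one distractor's samples (illustrative assumption;
    the paper's exact definition may differ).

    Between-class variance of the two means divided by the pooled
    within-class variance: a large value means the target's feature
    distribution is well separated from the distractor's.
    """
    t = np.asarray(target_feats, dtype=float)
    d = np.asarray(distractor_feats, dtype=float)
    between = (t.mean() - d.mean()) ** 2
    within = (t.var(ddof=1) + d.var(ddof=1)) / 2.0
    return between / within

# Toy example: orientation samples (degrees) for a target and two distractors.
rng = np.random.default_rng(0)
target = rng.normal(loc=45.0, scale=2.0, size=50)
near_distractor = rng.normal(loc=46.0, scale=2.0, size=50)  # hard, conjunction-like
far_distractor = rng.normal(loc=90.0, scale=2.0, size=50)   # easy, pop-out-like

# A hard (similar) distractor yields a much smaller ratio than an easy one.
print(variance_ratio(target, near_distractor))
print(variance_ratio(target, far_distractor))
```

A mean-difference score, by contrast, ignores the spread of each distribution; the variance ratio stays informative when target and distractors share a similar mean on one feature but differ in how tightly that feature is distributed, which is the regime of difficult conjunction search.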