Face, Content, and Construct Validation of the da Vinci Skills Simulator
Abstract

Objective

To report on assessments of face, content, and construct validity for the commercially available da Vinci Skills Simulator (dVSS).

Methods

A total of 38 subjects participated in this prospective study. Participants were classified as novice (0 robotic cases performed), intermediate (1-74 robotic cases), or expert (≥75 robotic cases). Each subject completed 5 exercises. Using the metrics available in the simulator software, the performances of each group were compared to evaluate construct validation. Immediately after completion of the exercises, each subject completed a questionnaire to evaluate face and content validation.

Results

The novice group consisted of 18 medical students and 1 resident. The intermediate group included 6 residents, 1 fellow, and 2 faculty urologists. The expert group consisted of 2 residents, 1 fellow, and 7 faculty surgeons. The mean number of robotic cases performed by the intermediate and expert groups was 29.2 and 233.4, respectively. An overall significant difference was observed in favor of the more experienced group in 4 skill sets. When intermediates and experts were combined into a single "experienced" group, they significantly outperformed novices in all 5 exercises. Intermediates and experts rated various elements of the simulator's realism at an average of 4.1/5 and 4.3/5, respectively. All intermediate and expert participants rated the simulator's value as a training tool as 4/5 or 5/5.

Conclusion

Our study supports the face, content, and construct validity of the dVSS. These results indicate that the simulator may be most useful to novice surgeons seeking basic robotic skills acquisition.
