
Papers Related to WorldViz Virtual Reality Technology


Virtual Reality Related Papers (1)

2016-02-21

1) Research Center for Virtual Environments and Behavior, University of California, Santa Barbara

The laboratory focuses on scientific research in psychology and cognition, including social psychology, vision, and spatial cognition, and has published extensively in leading international journals; see the publication list for details.

2) Psychology and Computer Science Laboratory, Miami University (Ohio)

Research area: spatial cognition

Human Spatial Cognition

In his research, Professor David Waller investigates how people learn and mentally represent spatial information about their environment. Wearing a head-mounted display and carrying a laptop-based dual-pipe image generator in a backpack, users can wirelessly walk through extremely large computer-generated virtual environments.
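Setups like this are usually scripted with a VR toolkit such as WorldViz Vizard, the package featured on this page. The sketch below is a generic illustration rather than the lab's actual code; it assumes a vizconnect configuration file (here hypothetically named vizconnect_config.py, generated beforehand with Vizard's vizconnect tool) that maps the head tracker and HMD onto the display, so that physical walking in the tracked area translates directly into movement through the model.

import viz
import vizconnect

# Load a pre-built hardware configuration (tracker, HMD, display).
# 'vizconnect_config.py' is a hypothetical file name; it would be generated
# with Vizard's vizconnect tool for whatever tracker and HMD are in use.
vizconnect.go('vizconnect_config.py')

# A large environment model that ships with Vizard, standing in for the
# "extremely large" virtual environments described above.
env = viz.addChild('piazza.osgb')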

Research Project Examples

Specificity of Spatial Memories: When people learn about the locations of objects in a scene, what information gets represented in memory? For example, do people remember only what they saw, or do they commit more abstract information to memory? In two projects, we address these questions by examining how well people recognize perspectives of a scene that are similar, but not identical, to the views they have learned. In a third project, we examine the reference frames used to code spatial information in memory. In a fourth project, we investigate whether the biases people show in their memory for pictures also occur when they remember three-dimensional scenes.

Nonvisual Egocentric Spatial Updating: When we walk through the environment, we realize that the objects we pass do not cease to exist just because they are out of sight (e.g., behind us). We stay oriented in this way because we spatially update, i.e., keep track of changes in our position and orientation relative to the environment.
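The bookkeeping behind spatial updating can be made concrete with a toy example. The Python sketch below is purely illustrative and is not the lab's model: it keeps a single remembered object in observer-centred coordinates (x to the right, y straight ahead, in metres) and updates it after the observer walks forward and then turns in place.

import math

def update_egocentric(obj_xy, step_forward, turn_deg):
    # obj_xy: (x, y) position of a remembered object in observer-centred
    # coordinates (x = right, y = ahead), in metres.
    x, y = obj_xy
    # Walking forward shifts every remembered object backwards in egocentric space.
    y -= step_forward
    # Turning the body by turn_deg rotates the remembered layout the opposite way.
    a = math.radians(-turn_deg)
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

# An object 2 m straight ahead, after walking 3 m past it and turning around,
# ends up about 1 m ahead again:
print(update_egocentric((0.0, 2.0), step_forward=3.0, turn_deg=180.0))  # ~(0.0, 1.0)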

Website: http://www.users.muohio.edu/wallerda/spacelab/spacelabproject.html

 

3) Department of Psychology, University of Waterloo, Canada

Equipment: WorldViz Vizard 3D software toolkit, WorldViz PPT H8 optical-inertial hybrid wide-area tracking system, NVIS nVisor SX head-mounted display, Arrington eye tracker

Research area: behavioral science

Professor Colin Ellard on his research: I am interested in how the organization and appearance of natural and built spaces affect movement, wayfinding, emotion, and physiology. My approach to these questions is strongly multidisciplinary and is informed by collaborations with architects, artists, planners, and health professionals. Current studies include investigations of the psychology of residential design, wayfinding at the urban scale, the restorative effects of exposure to natural settings, and comparative studies of defensive responses. My research methods include both field investigations and studies of human behavior in immersive virtual environments.

Websites: http://www.psychology.uwaterloo.ca/people/faculty/cellard/index.html  http://virtualpsych.uwaterloo.ca/research.htm  http://www.colinellard.com/

 

Selected publications:

Book: Colin Ellard (2009). Where am I? Why we can find our way to the Moon but get lost in the mall. Toronto: Harper Collins Canada.

Journal Articles:

Colin Ellard and Lori Wagar (2008). Plasticity of the association between visual space and action space in a blind-walking task. Perception, 37(7), 1044-1053.

Colin Ellard and Meghan Eller (2009). Spatial cognition in the gerbil: Computing optimal escape routes from visual threats. Animal Cognition, 12(2), 333-345.

Posters:

Kevin Barton and Colin Ellard (2009). Finding your way: The influence of global spatial intelligibility and field-of-view on a wayfinding task. Poster session presented at the 9th annual meeting of the Vision Sciences Society, Naples, FL.

Brian Garrison and Colin Ellard (2009). The connection effect in the disconnect between peripersonal and extrapersonal space. Poster session presented at the 9th annual meeting of the Vision Sciences Society, Naples, FL.

 

4) Virtual Human Interaction Lab, Stanford University

Equipment: WorldViz Vizard 3D software toolkit, WorldViz PPT X8 optical-inertial hybrid wide-area tracking system, NVIS nVisor SX head-mounted display, Complete Characters avatar package

The mission of the Virtual Human Interaction Lab is to understand the dynamics and implications of interactions among people in immersive virtual reality simulations (VR), and other forms of human digital representations in media, communication systems, and games. Researchers in the lab are most concerned with understanding the social interaction that occurs within the confines of VR, and the majority of our work is centered on using empirical, behavioral science methodologies to explore people as they interact in these digital worlds. However, oftentimes it is necessary to develop new gesture tracking systems, three-dimensional modeling techniques, or agent-behavior algorithms in order to answer these basic social questions. Consequently, we also engage in research geared towards developing new ways to produce these VR simulations.
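As a purely illustrative example of the agent-behavior algorithms mentioned above (not code from the Virtual Human Interaction Lab), the Vizard sketch below loads an avatar from the Complete Characters package listed in the equipment and has it turn to face the tracked user whenever the user comes within two metres.

import viz
import vizact
import vizmat

viz.go()
viz.addChild('ground.osgb')            # simple ground plane that ships with Vizard

agent = viz.addAvatar('vcc_male.cfg')  # Complete Characters avatar bundled with Vizard
agent.setPosition([0, 0, 3])
agent.state(1)                         # play one of the avatar's built-in animation loops (assumed to be an idle)

def face_user_if_close():
    user = viz.MainView.getPosition()
    if vizmat.Distance(user, agent.getPosition()) < 2.0:
        # Keep the avatar upright by looking at a point at ground height.
        agent.lookAt([user[0], 0, user[2]])

vizact.ontimer(0, face_user_if_close)  # run the proximity check every frame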

Our research programs tend to fall under one of three larger questions:

      1. What new social issues arise from the use of immersive VR communication systems?

      2. How can VR be used as a basic research tool to study the nuances of face-to-face interaction?

      3. How can VR be applied to improve everyday life, such as legal practices and communication systems?

 

Website: http://vhil.stanford.edu/

 

5) Neuroscience Laboratory, University of California, San Diego

Equipment: WorldViz Vizard 3D software toolkit, WorldViz PPT X8 optical-inertial hybrid wide-area tracking system, NVIS nVisor SX head-mounted display

The long-range objective of the laboratory is to better understand the neural bases of human sensorimotor control and learning. Our approach is to analyze normal motor control and learning processes, and the nature of the breakdown in those processes in patients with selective failure of specific sensory or motor systems of the brain. Toward this end, we have developed novel methods of imaging and graphic analysis of spatiotemporal patterns inherent in digital records of movement trajectories. We monitor movements of the limbs, body, head, and eyes, both in real environments and in 3D multimodal, immersive virtual environments, and recently have added synchronous recording of high-definition EEG.

One domain of our studies is Parkinson's disease. Our studies have been dissecting out those elements of sensorimotor processing which may be most impaired in Parkinsonism, and those elements that may most crucially depend upon basal ganglia function and cannot be compensated for by other brain systems. Since skilled movement and learning may be considered opposite sides of the same coin, we also are investigating learning in Parkinson's disease: how Parkinson's patients learn to adapt their movements in altered sensorimotor environments; how their eye-hand coordination changes over the course of learning sequences; and how their neural dynamics are altered when learning to make decisions based on reward. Finally, we are examining the ability of drug versus deep brain stimulation therapies to ameliorate deficits in these functions.
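Purely as an illustration of what a "digital record of movement trajectories" can look like in practice (this is not the laboratory's pipeline), the Vizard sketch below writes a timestamped sample of the tracked viewpoint's position and orientation to a text file once per rendered frame; the same pattern extends to other tracked nodes such as the hands or limbs.

import viz
import vizact

viz.go()
viz.addChild('piazza.osgb')                 # placeholder environment that ships with Vizard

log_file = open('head_trajectory.csv', 'w') # hypothetical output file name
log_file.write('time_s,x,y,z,yaw,pitch,roll\n')

def sample():
    x, y, z = viz.MainView.getPosition()
    yaw, pitch, roll = viz.MainView.getEuler()
    log_file.write('%.4f,%.3f,%.3f,%.3f,%.2f,%.2f,%.2f\n'
                   % (viz.tick(), x, y, z, yaw, pitch, roll))

def close_log():
    log_file.close()

vizact.ontimer(0, sample)                   # one sample per rendered frame
viz.callback(viz.EXIT_EVENT, close_log)     # close the record when the script exits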

Website: http://inc2.ucsd.edu/poizner/index.html

Publication list: http://inc2.ucsd.edu/poizner/publications.html
