
Building a LiDAR-based Position Visualizer—Rationale

Rebecca Vieyra, Chrystian Vieyra, Mina Johnson-Glenberg, Daniel O'Brien, and Colleen Megowan-Romanowicz


Imagine that your smartphone could project a set of invisible light beams forward and use that information to tell you where you are with respect to your surroundings. For many owners of modern phones, this is already a reality. (Got an iPhone Pro? See the video below to learn about LiDAR. Many Android phones also have this capability.) However, the hardware is currently underutilized, serving primarily to improve photography by adding a sense of depth to portrait photos.

Our team aims to make the most of these newly available smartphone sensors to help learners understand motion concepts by creating graphs of their position and velocity. (If you are a physics teacher, think about the possibility of having students engage in activities like sonic ranger body movement graphs as individuals, all at once during class, or even from their own homes!) We also hope that advancements in this technology will inspire developers from other fields to explore the use of LiDAR for precise 3-D position awareness, such as for robots in indoor environments where GPS positioning is not feasible.
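To make this concrete, here is a minimal sketch in Swift of how a LiDAR-equipped iPhone can act like a sonic ranger: it reads ARKit's per-frame depth map and samples the distance to whatever is straight ahead. The class name and structure are our own illustration, not code from Physics Toolbox:

```swift
import ARKit

// Hypothetical helper (not the actual Physics Toolbox implementation):
// stream LiDAR depth frames and report the distance straight ahead.
class DepthRanger: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        // sceneDepth is only available on LiDAR-equipped devices
        guard ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) else { return }
        let config = ARWorldTrackingConfiguration()
        config.frameSemantics.insert(.sceneDepth)
        session.delegate = self
        session.run(config)
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let depthMap = frame.sceneDepth?.depthMap else { return }
        // depthMap is a CVPixelBuffer of Float32 distances in meters
        CVPixelBufferLockBaseAddress(depthMap, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }
        guard let base = CVPixelBufferGetBaseAddress(depthMap) else { return }
        let width = CVPixelBufferGetWidth(depthMap)
        let height = CVPixelBufferGetHeight(depthMap)
        let rowBytes = CVPixelBufferGetBytesPerRow(depthMap)
        // Sample the center pixel: the distance to whatever is straight ahead
        let row = base.advanced(by: (height / 2) * rowBytes)
            .assumingMemoryBound(to: Float32.self)
        let distance = row[width / 2]
        print("t = \(frame.timestamp) s, distance ahead = \(distance) m")
    }
}
```

Plotting that center-pixel distance against the frame timestamp already yields a live position-time graph.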


Data Visualization Literacy for All


A critical component of learning STEM is understanding how to display and interpret data. While math classes often bear the greatest responsibility for helping students frame data analysis, most typically through a Cartesian coordinate system, physics classes offer an additional opportunity to put meaning to graphs. Kinematics—the study of motion—is especially helpful for learning about graphs in the context of movement, and for making sense of the mathematical relationships between the position, velocity, and acceleration of objects. Once students have a good sense of how graphs illustrate relationships between two or more variables, the goal is for them to transfer this understanding to wider topics, such as understanding economic trends, climate patterns, and more. So important is this topic that data science is now widely recognized as its own discipline, and it extends far beyond Cartesian graphs. (See some great data sets from ESRI for examples.)
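As a quick worked example of how these quantities relate, a timestamped position series can be turned into velocity (and then acceleration) with finite differences. The snippet below is a minimal illustration; the type and function names are hypothetical:

```swift
import Foundation

// Purely illustrative: a timestamped 1-D position reading,
// e.g. the distance between a student and a wall.
struct Sample { let t: Double; let x: Double }

// Central differences: v(t_i) ≈ (x_{i+1} - x_{i-1}) / (t_{i+1} - t_{i-1}).
// Applying the same operation to the velocity series yields acceleration.
func derivative(of samples: [Sample]) -> [Sample] {
    guard samples.count >= 3 else { return [] }
    return (1..<samples.count - 1).map { i in
        Sample(t: samples[i].t,
               x: (samples[i + 1].x - samples[i - 1].x) /
                  (samples[i + 1].t - samples[i - 1].t))
    }
}

// Example: walking away from a wall at a steady 0.5 m/s, sampled at 10 Hz
let position = (0..<50).map { Sample(t: Double($0) / 10, x: 1.0 + 0.05 * Double($0)) }
let velocity = derivative(of: position)      // ≈ 0.5 m/s everywhere
let acceleration = derivative(of: velocity)  // ≈ 0 m/s² everywhere
```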


The need to model data mathematically with graphs is ubiquitous in STEM learning and careers, and learners need to practice the skills associated with Data Visualization Literacy (DVL). A study from the National Research Council (1991) emphasized that “mathematical modeling [has] an importance to economic competitiveness that is very large relative to the recognition given to these activities by the academic mathematical sciences community” (p. 89). DVL can include the domain of mathematical modeling as it pertains to students’ ability to make sense of visual data (Börner et al., 2019), and it includes an understanding of graphing and pictorial symbols—skills that are inherent to understanding motion and are necessary for everyone in the digital age.


Researchers have long documented students’ difficulties in learning how to model data with graphs specifically within the context of physics, as the discipline provides ample opportunity for connecting physical experiences with mathematical models. McDermott et al. (1987) were among the seminal researchers to identify students’ common difficulties connecting graphs to physical motion and to real-world experiences. They recommended that teachers provide opportunities for students to relate different graphs of the same kind of motion (such as position-time, velocity-time, and acceleration-time graphs), as well as encourage students to use provided graphs to predict motion. In tandem, Thornton (1987) began exploring the role of microcomputer-based labs (typically a commercial sonic ranger paired with a computer display) in understanding motion graphs, and posited that real-time motion graph visualization supported student learning.


[Image above: Examples of various representations of motion. No, these don't correspond perfectly to one another! Can you figure out what is a bit problematic with these?]


Educational researchers have uncovered numerous approaches to support sense-making with mathematical models of motion, but teachers often struggle to enact them. Sense-making with mathematical models of motion requires ample opportunity for student collaboration (Simpson, 2006; Marshall & Carrejo, 2008) and real-time feedback (Vitale, 2015), as well as scaffolded opportunities to embody the motion that is being modeled (Duijzer et al., 2019a). Researchers have explored the role of embodied cognition—the idea that bodily experiences influence cognition—in various STEM domains, including mathematics (Alibali & Nathan, 2012), astronomy (Lindgren et al., 2011, 2016), and electric fields (Johnson-Glenberg & Megowan-Romanowicz, 2017). Students’ conceptualizations of motion have been studied using augmented and mixed reality with a variety of technologies, including the Kinect for Xbox One (Anderson & Wall, 2016; Johnson-Glenberg & Megowan-Romanowicz, 2017) and the Situated Multimedia Arts Learning Lab (SMALLab; Johnson-Glenberg et al., 2009, 2014). However, even when teachers possess the pedagogical content knowledge to teach with these tools (including knowledge of the importance of students’ prior ideas, of what is hard to teach, etc.), they struggle to adopt effective technological approaches that require extra peripherals (Johnson-Glenberg et al., 2015). Mazibe et al. (2020) suggest that teaching motion may be made difficult by instructors’ failure to reinforce important ideas sufficiently, by misleading representations in support texts, and perhaps even because teachers’ own “poor conceptual understanding can be obscured in reporting [pedagogical content knowledge]” alone (p. 962).


The Importance of Developing Position-Based Sensor Technology for Smartphones


LiDAR-enhanced visualizations have the potential to provide the kind of scaffolding, embodied experience, and real-time visualization and feedback needed for teaching about modeling motion with graphs and vectors, especially in the context of COVID-19. Remote learning due to COVID-19 has dramatically—and perhaps permanently—changed the way educators provide instruction, and it threatens teachers’ ability to place science investigation as “the central approach for teaching and learning science” (NASEM, 2019, Summary). At the peak of school closures in the U.S. during the 2019-2020 academic year, remote learning affected up to 55.1 million K-12 students (EdWeek, 2020). Particularly concerning is the fact that “only 38% of science teachers reported that students had been engaged in experiments or investigations through remote learning” (NASEM, 2020, p. 1-1), which suggests that students in these contexts do not have the opportunity to learn science experientially. In response, numerous science teaching professional societies and educators worked to support teachers by assembling guidelines (Council of State Science Supervisors, 2020) and resources for distance teaching and learning of lab courses (Strubbe & McKagan, 2020), including a meta-analysis of the capabilities of smartphones to fulfill instructional needs (O’Brien, 2021). These educational contingency measures, and the resulting challenge to hands-on, data-driven learning, are likely to persist as the pandemic continues.


The ability to independently sense and visualize high-precision position data in both indoor and outdoor environments is becoming a mainstream need for technologists as well as everyday individuals. In the past few decades, technologists have relied on the Global Positioning System (GPS) to determine the position of objects around the Earth. However, the high-precision measurements (with confidence intervals of a few centimeters) available through real-time kinematic (RTK) GPS technologies are not typically accessible to those without highly specialized knowledge or equipment. Even RTK GPS works poorly indoors and underground, where the line of sight to GPS satellites or ground-based networks is often obstructed. Further, RTK GPS anchors position relative to satellite or ground-based networks, whereas what everyday users often need are position measurements relative to surrounding nearby objects, such as walls. In contrast, LiDAR can provide high-precision position data relative to a map of the environment.
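For readers curious how this looks in practice on LiDAR-equipped iPhones, ARKit's world tracking reports exactly this kind of environment-relative pose. A minimal sketch, assuming a running ARSession with world tracking:

```swift
import ARKit
import simd

// With world tracking running, each ARFrame carries the device's pose
// relative to the session's world origin (the mapped environment),
// not relative to any satellite-based coordinate system.
func devicePosition(in frame: ARFrame) -> SIMD3<Float> {
    // The translation column of the 4x4 camera transform is the
    // device's position, in meters, within the mapped environment.
    let t = frame.camera.transform.columns.3
    return SIMD3<Float>(t.x, t.y, t.z)
}
```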


Technological improvements in LiDAR and time-of-flight (ToF) sensing have enabled 3D depth mapping for many applications. These mapping devices can be divided into multiple categories; for example, one can compare the illumination method (flash vs. scanning) or the measurement mechanism (indirect “iToF” vs. direct “dToF”). Traditional LiDAR scanners have used laser pulsing for archeological (Chase et al., 2011), atmospheric (Papayannis et al., 2008), geographical (Lee & Shan, 2003), and forestry mapping (Lim et al., 2003), but the necessary technology was too bulky and expensive to integrate into personal mobile devices (PMDs). As a result, most early mobile applications were restricted to flash iToF mechanisms. These technologies can yield high-resolution depth maps, but with very limited range for smartphone applications (approximately two meters; Cambridge, 2020). Accordingly, a shift to dToF imagers with longer range and faster response time seemed to represent the next step.
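The difference between the two mechanisms is easy to see in formulas: dToF times a pulse's round trip directly (d = ct/2), while iToF infers distance from the phase shift of a modulated wave and therefore wraps beyond half a modulation wavelength. A back-of-the-envelope sketch:

```swift
import Foundation

let c = 299_792_458.0  // speed of light, m/s

// dToF: time the round trip of a light pulse directly.
func dToFDistance(roundTripTime t: Double) -> Double {
    c * t / 2
}

// iToF: infer distance from the phase shift of a modulated wave.
// The result wraps every half modulation wavelength, which is one
// reason flash iToF modules have such limited unambiguous range.
func iToFDistance(phaseShift phi: Double, modulationFrequency f: Double) -> Double {
    (c / (2 * f)) * (phi / (2 * .pi))
}

// A target 2 m away returns a pulse after about 13.3 ns
print(dToFDistance(roundTripTime: 13.34e-9))  // ≈ 2.0 m
```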


Apple’s introduction of scanning LiDAR will boost the market for dToF sensors, expanding access and incentivizing software and app development. The LiDAR market is expected to expand at a compound annual growth rate of 20.1% over the next seven years (Allied, 2020). This expansion may be augmented by the iPhone’s dToF module, as smartphone cameras have already been paired with external LiDAR data for applications like urban building modeling (Zhang et al., 2016). Once an application infrastructure for the iPhone 12 Pro is developed, integrating LiDAR into future phones will be seamless. Future hardware will need to operate with greater range and in outdoor environments—two shortcomings of current small-scale LiDAR systems (Gao & Peh, 2016). Infineon and PMD Technologies recently announced a ten-meter-range 3D imager—double the range of the iPhone system—with volume delivery starting Q2 2021 (PMD, 2020). This and other competitive technologies will help expand the smartphone LiDAR market and broaden access to a wider audience in the coming years.


Our Goal: Physics Toolbox Motion Visualizer


Over the next year, expect to see new features within Physics Toolbox Sensor Suite that include a position detector based on LiDAR. Expect to be able to measure distances between yourself and external objects up to 8 or 9 meters away, and to plot both your relative position and velocity. We also hope to include object detection to allow tracking and 3-D position information for objects within the field of view. In a future blog post, we will detail these features and seek input from users!

Curious to learn more? Reach out to the developers at support@vieyrasoftware.net

This work is funded by NSF Grant #2114586. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
