There is growing evidence that mixed reality visualisation methods improve learning outcomes, especially in spatial disciplines. However, these studies typically focus on post-hoc evaluation of a single user's visualisation experience and rely on self-reporting. Because almost all interactions within mixed reality environments are never recorded or reflected upon, vital analytics of the learning process are lost to learning stakeholders. This is especially true when trying to understand how learners navigate, interact and communicate within mixed reality learning environments. Compounding this is the increasing need for synchronous communication between learning stakeholders in mixed reality environments, and the growing importance of multimodal data recording. As education shifts from an educator-learner centric model to a multi-disciplinary stakeholder team, it will become increasingly important to enable biometric, spatial and reflective multimodal analytics that can be evaluated not only by the humans within the stakeholder team but also by artificial intelligence agents. Recording multimodal data will also raise important ethical and legal issues around privacy, security and machine learning bias. This chapter contributes to the theme of immersive technologies and mixed reality innovation in education by exploring current methods that can enable multimodal learning analytics within mixed reality learning applications, while also reflecting on the issues we currently face with respect to data storage, privacy, security, and machine learning capability and bias.