Amir Amedi

Tel Aviv-Yafo, Tel Aviv District, Israel
6K followers · 500+ connections

About

Prof. Amir Amedi
Biography in English and Hebrew
With multidisciplinary education…

Experience & Education

  • Reichman University (IDC Herzliya)

Publications

  • Activation of human visual area V6 during egocentric navigation with and without visual experience

    Current Biology

    V6 is a retinotopic area located in the dorsal visual stream that integrates eye movements with retinal and visuo-motor signals. Despite the known role of V6 in visual motion, it is unknown whether it is involved in navigation and how sensory experiences shape its functional properties. We explored the involvement of V6 in egocentric navigation in sighted and in congenitally blind (CB) participants navigating via an in-house distance-to-sound sensory substitution device (SSD), the EyeCane. We performed two fMRI experiments on two independent datasets. In the first experiment, CB and sighted participants navigated the same mazes. The sighted performed the mazes via vision, while the CB performed them via audition. The CB performed the mazes before and after a training session, using the EyeCane SSD. In the second experiment, a group of sighted participants performed a motor topography task. Our results show that right V6 (rhV6) is selectively involved in egocentric navigation independently of the sensory modality used. Indeed, after training, rhV6 of CB is selectively recruited for auditory navigation, similarly to rhV6 in the sighted. Moreover, we found activation for body movement in area V6, which can putatively contribute to its involvement in egocentric navigation. Taken together, our findings suggest that area rhV6 is a unique hub that transforms spatially relevant sensory information into an egocentric representation for navigation. While vision is clearly the dominant modality, rhV6 is in fact a supramodal area that can develop its selectivity for navigation in the absence of visual experience.

    Other authors
    See publication
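
    The EyeCane used in the study above is described only as a distance-to-sound SSD. Below is a minimal sketch of that idea, assuming, purely for illustration, that shorter measured distances map to higher-pitched, faster-repeating auditory cues; the sensing range, pitch span and cue rates are hypothetical, not the published device parameters.

    # Illustrative sketch of a distance-to-sound mapping in the spirit of the EyeCane.
    # Every numeric value (sensing range, pitch span, cue rate) is an assumption for
    # demonstration, not a parameter of the published device.

    def distance_to_cue(distance_m, max_range_m=5.0,
                        f_low=300.0, f_high=1500.0,
                        rate_slow=2.0, rate_fast=20.0):
        """Map a measured distance to (beep_frequency_hz, beeps_per_second)."""
        if distance_m >= max_range_m:
            return None                                      # nothing within sensing range
        proximity = 1.0 - distance_m / max_range_m           # 0 (far) .. 1 (touching)
        freq = f_low + proximity * (f_high - f_low)          # closer -> higher pitch
        rate = rate_slow + proximity * (rate_fast - rate_slow)  # closer -> faster beeps
        return freq, rate

    if __name__ == "__main__":
        for d in (4.5, 2.0, 0.3):
            print(d, distance_to_cue(d))
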
  • Persuasive Vibrations: Effects of Speech-Based Vibrations on Persuasion, Leadership, and Co-Presence During Verbal Communication in VR

    2023 IEEE Conference on Virtual Reality and 3D User Interfaces

    In Virtual Reality (VR), a growing number of applications involve verbal communications with avatars, such as for teleconference, entertainment, virtual training, social networks, etc. In this context, our paper aims to investigate how tactile feedback consisting in vibrations synchronized with speech could influence aspects related to VR social interactions such as persuasion, co-presence and leadership. We conducted two experiments where participants embody a first-person avatar attending a virtual meeting in immersive VR. In the first experiment, participants were listening to two speaking virtual agents and the speech of one agent was augmented with vibrotactile feedback. Interestingly, the results show that such vibrotactile feedback could significantly improve the perceived co-presence but also the persuasiveness and leadership of the haptically-augmented agent. In the second experiment, the participants were asked to speak to two agents, and their own speech was augmented or not with vibrotactile feedback. The results show that vibrotactile feedback had again a positive effect on co-presence, and that participants perceive their speech as more persuasive in presence of haptic feedback. Taken together, our results demonstrate the strong potential of haptic feedback for supporting social interactions in VR, and pave the way to novel usages of vibrations in a wide range of applications in which verbal communication plays a prominent role.

    Other authors
    See publication
  • Shape detection beyond the visual field using a visual-to-auditory sensory augmentation device

    Frontiers in Human Neuroscience

    Current advancements in both technology and science allow us to manipulate our sensory modalities in new and unexpected ways. In the present study, we explore the potential of expanding what we perceive through our natural senses by utilizing a visual-to-auditory sensory substitution device (SSD), the EyeMusic, an algorithm that converts images to sound. The EyeMusic was initially developed to allow blind individuals to create a spatial representation of information arriving from a video feed at a slow sampling rate. In this study, we aimed to use the EyeMusic for the blind areas of sighted individuals. We use it in this initial proof-of-concept study to test the ability of sighted subjects to combine visual information with surrounding auditory sonification representing visual information. Participants in this study were tasked with recognizing and adequately placing the stimuli, using sound to represent the areas outside the standard human visual field. As such, the participants were asked to report shapes’ identities as well as their spatial orientation (front/right/back/left), requiring combined visual (90° frontal) and auditory input (the remaining 270°) for the successful performance of the task (content in both vision and audition was presented in a sweeping clockwise motion around the participant). We found that participants were successful at a highly above chance level after a brief 1-h-long session of online training and one on-site training session of an average of 20 min. They could even draw a 2D representation of this image in some cases. Participants could also generalize, recognizing new shapes they were not explicitly trained on. Our findings provide an initial proof of concept indicating that sensory augmentation devices and techniques can potentially be used in combination with natural sensory information in order to expand the natural fields of sensory perception.

    Other authors
    See publication
  • The Topo-Speech sensory substitution system as a method of conveying spatial information to the blind and vision impaired

    Frontiers in Human Neuroscience

    This study evaluates the Topo-Speech algorithm in the blind and visually impaired. The system provides spatial information about the external world by applying sensory substitution alongside symbolic representations in a manner that corresponds with the unique way our brains acquire and process information. This is done by conveying spatial information, customarily acquired through vision, through the auditory channel, in a combination of sensory (auditory) features and symbolic language (named/spoken) features. The Topo-Speech sweeps the visual scene or image and represents objects’ identity by employing naming in a spoken word and simultaneously conveying the objects’ location by mapping the x-axis of the visual scene or image to the time it is announced and the y-axis by mapping the location to the pitch of the voice. This proof of concept study primarily explores the practical applicability of this approach in 22 visually impaired and blind individuals. The findings showed that individuals from both populations could effectively interpret and use the algorithm after a single training session. The blind showed an accuracy of 74.45%, while the visually impaired had an average accuracy of 72.74%. These results are comparable to those of the sighted, as shown in previous research, with all participants above chance level. As such, we demonstrate practically how aspects of spatial information can be transmitted through non-visual channels. To complement the findings, we weigh in on debates concerning models of spatial knowledge (the persistent, cumulative, or convergent models) and the capacity for spatial representation in the blind. We suggest the present study’s findings support the convergence model and the scenario that posits the blind are capable of some aspects of spatial representation as depicted by the algorithm comparable to those of the sighted.

    Other authors
    See publication
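
    A minimal sketch of the Topo-Speech mapping described in the abstract above: an object's horizontal position determines when its spoken name is announced during a sweep, and its vertical position determines the pitch of the voice. The grid, sweep duration and pitch range below are illustrative assumptions, not the published parameters.

    # Sketch (not the published implementation) of the Topo-Speech mapping:
    # x-axis -> time of announcement within a left-to-right sweep,
    # y-axis -> pitch of the spoken word.

    from dataclasses import dataclass

    @dataclass
    class DetectedObject:
        name: str   # spoken label, e.g. "cup"
        x: float    # horizontal position, 0.0 (left) .. 1.0 (right)
        y: float    # vertical position, 0.0 (bottom) .. 1.0 (top)

    def topo_speech_schedule(objects, sweep_s=2.0, f_min=110.0, f_max=440.0):
        """Return (onset_seconds, voice_pitch_hz, name) triplets for one sweep."""
        events = []
        for obj in objects:
            onset = obj.x * sweep_s                      # x-axis -> announcement time
            pitch = f_min + obj.y * (f_max - f_min)      # y-axis -> voice pitch
            events.append((onset, pitch, obj.name))
        return sorted(events)                            # play in left-to-right order

    if __name__ == "__main__":
        scene = [DetectedObject("chair", x=0.2, y=0.1),
                 DetectedObject("lamp", x=0.8, y=0.9)]
        for onset, pitch, name in topo_speech_schedule(scene):
            print(f"t={onset:.2f}s  pitch={pitch:.0f}Hz  say '{name}'")
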
  • Testing geometry and 3D perception in children following vision restoring cataract-removal surgery

    Frontiers in Neuroscience

    In this study, we employ a battery of visual perception tasks to study the unique experience of a small group of children who have undergone vision-restoring cataract removal surgery as part of the Himalayan Cataract Project. We tested their abilities to perceive in three dimensions (3D) using a binocular rivalry task and the Brock string task, perceive visual illusions, use cross-modal mappings between touch and vision, and spatially group based on geometric cues. Some of the children in this study gained a sense of sight for the first time in their lives, having been born with bilateral congenital cataracts, while others suffered late-onset blindness in one eye alone. This study simultaneously supports yet raises further questions concerning Hubel and Wiesel’s critical periods theory and provides additional insight into Molyneux’s problem, the ability to correlate vision with touch quickly. We suggest that our findings present a relatively unexplored intermediate stage of 3D vision development. Importantly, we spotlight some essential geometrical perception visual abilities that strengthen the idea that spontaneous geometry intuitions arise independently from visual experience (and education), thus replicating and extending previous studies. We incorporate a new model, not previously explored, of testing children with congenital cataract removal surgeries who perform the task via vision. In contrast, previous work has explored these abilities in the congenitally blind via touch. Taken together, our findings provide insight into the development of what is commonly known as the visual system in the visually deprived and highlight the need to further empirically explore an amodal, task-based interpretation of specializations in the development and structure of the brain. Moreover, we propose a novel objective method for determining congenital vs. late blindness where medical history and records are partial or lacking

    Other authors
    See publication
  • Gyrification in relation to cortical thickness in the congenitally blind

    Frontiers in Neuroscience

    Greater cortical gyrification (GY) is linked with enhanced cognitive abilities and is also negatively related to cortical thickness (CT). Individuals who are congenitally blind (CB) exhibit remarkable functional brain plasticity which enables them to perform certain non-visual and cognitive tasks with supranormal abilities. For instance, extensive training using touch and audition enables CB people to develop impressive skills and there is evidence linking these skills to cross-modal activations of primary visual areas. There is a cascade of anatomical, morphometric and functional-connectivity changes in non-visual structures, volumetric reductions in several components of the visual system, and CT is also increased in CB. No study to date has explored GY changes in this population, and no study has explored how variations in CT are related to GY changes in CB. T1-weighted 3D structural magnetic resonance imaging scans were acquired to examine the effects of congenital visual deprivation in cortical structures in a healthy sample of 11 CB individuals (6 male) and 16 age-matched sighted controls (SC) (10 male). In this report, we show for the first time an increase in GY in several brain areas of CB individuals compared to SC, and a negative relationship between GY and CT in the CB brain in several different cortical areas. We discuss the implications of our findings and the contributions of developmental factors and synaptogenesis to the relationship between CT and GY in CB individuals compared to SC.

    Other authors
    See publication
  • Face shape processing via visual-to-auditory sensory substitution activates regions within the face processing networks in the absence of visual experience

    Frontiers in Neuroscience

    Previous evidence suggests that visual experience is crucial for the emergence and tuning of the typical neural system for face recognition. To challenge this conclusion, we trained congenitally blind adults to recognize faces via visual-to-auditory sensory-substitution (SSD). Our results showed a preference for trained faces over other SSD-conveyed visual categories in the fusiform gyrus and in other known face-responsive-regions of the deprived ventral visual stream. We also observed a parametric modulation in the same cortical regions, for face orientation (upright vs. inverted) and face novelty (trained vs. untrained). Our results strengthen the conclusion that there is a predisposition for sensory-independent and computation-specific processing in specific cortical regions that can be retained in life-long sensory deprivation, independently of previous perceptual experience. They also highlight that if the right training is provided, such cortical preference maintains its tuning to what were considered visual-specific face features.

    Other authors
    See publication
  • A case study in phenomenology of visual experience with retinal prosthesis versus visual-to-auditory sensory substitution

    Neuropsychologia

    The phenomenology of the blind has provided an age-old, unparalleled means of exploring the enigmatic link between the brain and mind. This paper delves into the unique phenomenological experience of a man who became blind in adulthood. He subsequently underwent both an Argus II retinal prosthesis implant and training, and extensive training on the EyeMusic visual to auditory sensory substitution device (SSD), thereby becoming the first reported case to date of dual proficiency with both devices. He offers a firsthand account into what he considers the great potential of combining sensory substitution devices with visual prostheses as part of a complete visual restoration protocol. While the Argus II retinal prosthesis alone provided him with immediate visual percepts by way of electrically stimulated phosphenes elicited by the device, the EyeMusic SSD requires extensive training from the onset. Yet following the extensive training program with the EyeMusic sensory substitution device, our subject reports that the sensory substitution device allowed him to experience a richer, more complex perceptual experience, that felt more “second nature” to him, while the Argus II prosthesis (which also requires training) did not allow him to achieve the same levels of automaticity and transparency. Following long-term use of the EyeMusic SSD, our subject reported that visual percepts representing mainly, but not limited to, colors portrayed by the EyeMusic SSD are elicited in association with auditory stimuli, indicating the acquisition of a high level of automaticity. Finally, the case study indicates an additive benefit to the combination of both devices on the user's subjective phenomenological visual experience.

    Other authors
    See publication
  • Congenitally blind adults can learn to identify face-shapes via auditory sensory substitution and successfully generalize some of the learned features

    Scientific Reports

    Unlike sighted individuals, congenitally blind individuals have little to no experience with face shapes. Instead, they rely on non-shape cues, such as voices, to perform character identification. The extent to which face-shape perception can be learned in adulthood via a different sensory modality (i.e., not vision) remains poorly explored. We used a visual-to-auditory Sensory Substitution Device (SSD) that enables conversion of visual images to the auditory modality while preserving their visual characteristics. Expert SSD users were systematically taught to identify cartoon faces via audition. Following a tailored training program lasting ~ 12 h, congenitally blind participants successfully identified six trained faces with high accuracy. Furthermore, they effectively generalized their identification to the untrained, inverted orientation of the learned faces. Finally, after completing the extensive 12-h training program, participants learned six new faces within 2 additional hours of training, suggesting internalization of face-identification processes. Our results document for the first time that facial features can be processed through audition, even in the absence of visual experience across the lifespan. Overall, these findings have important implications for both non-visual object recognition and visual rehabilitation practices and prompt the study of the neural processes underlying auditory face perception in the absence of vision.

    Other authors
    See publication
  • Effects of training and using an audio-tactile sensory substitution device on speech-in-noise understanding

    Scientific Reports

    Understanding speech in background noise is challenging. Wearing face-masks, imposed by the COVID19-pandemics, makes it even harder. We developed a multi-sensory setup, including a sensory substitution device (SSD) that can deliver speech simultaneously through audition and as vibrations on the fingertips. The vibrations correspond to low frequencies extracted from the speech input. We trained two groups of non-native English speakers in understanding distorted speech in noise.

    Other authors
    See publication
  • Applying a novel visual‑to‑touch sensory substitution for studying tactile reference frames

    Scientific Reports

    Perceiving the spatial location and physical dimensions of touched objects is crucial for goal-directed actions. To achieve this, our brain transforms skin-based coordinates into a reference frame by integrating visual and posture information. In the current study, we examine the role of posture in mapping tactile sensations to a visual image. We developed a new visual-to-touch sensory substitution device that transforms images into a sequence of vibrations on the arm. 52 blindfolded participants performed spatial recognition tasks in three different arm postures and had to switch postures between trial blocks. As participants were not told which side of the device is down and which is up, they could choose how to map its vertical axis in their responses. Contrary to previous findings, we show that new proprioceptive inputs can be overridden in mapping tactile sensations. We discuss the results within the context of the spatial task and the various sensory contributions to the process.

    Other authors
    See publication
  • A self-training program for sensory substitution devices

    PLOS ONE

    Sensory Substitution Devices (SSDs) convey visual information through audition or touch, targeting blind and visually impaired individuals. One bottleneck towards adopting SSDs in everyday life by blind users, is the constant dependency on sighted instructors throughout the learning process. Here, we present a proof-of-concept for the efficacy of an online self-training program developed for learning the basics of the EyeMusic visual-to-auditory SSD tested on sighted blindfolded participants. Additionally, aiming to identify the best training strategy to be later re-adapted for the blind, we compared multisensory vs. unisensory as well as perceptual vs. descriptive feedback approaches.

    Other authors
    See publication
  • Core knowledge of geometry can develop independently of visual experience

    Cognition

    Geometrical intuitions spontaneously drive visuo-spatial reasoning in human adults, children and animals. Is their emergence intrinsically linked to visual experience, or does it reflect a core property of cognition shared across sensory modalities? To address this question, we tested the sensitivity of blind-from-birth adults to geometrical-invariants using a haptic deviant-figure detection task. Blind participants spontaneously used many geometric concepts such as parallelism, right angles and geometrical shapes to detect intruders in haptic displays, but experienced difficulties with symmetry and complex spatial transformations. Across items, their performance was highly correlated with that of sighted adults performing the same task in touch (blindfolded) and in vision, as well as with the performances of uneducated preschoolers and Amazonian adults. Our results support the existence of an amodal core-system of geometry that arises independently of visual experience. However, performance at selecting geometric intruders was generally higher in the visual compared to the haptic modality, suggesting that sensory-specific spatial experience may play a role in refining the properties of this core-system of geometry

    Other authors
    See publication
  • Topographic maps and neural tuning for sensory substitution dimensions learned in adulthood in a congenital blind subject

    NeuroImage

    Topographic maps, a key principle of brain organization, emerge during development. It remains unclear, however, whether topographic maps can represent a new sensory experience learned in adulthood. MaMe, a congenitally blind individual, has been extensively trained in adulthood for perception of a 2D auditory-space (soundscape) where the y- and x-axes are represented by pitch and time, respectively. Using population receptive field mapping we found neural populations tuned topographically to pitch, not only in the auditory cortices but also in the parietal and occipito-temporal cortices. Topographic neural tuning to time was revealed in the parietal and occipito-temporal cortices. Some of these maps were found to represent both axes concurrently, enabling MaMe to represent unique locations in the soundscape space. This case study provides proof of concept for the existence of topographic maps tuned to the newly learned soundscape dimensions. These results suggest that topographic maps can be adapted or recycled in adulthood to represent novel sensory experiences.

    Other authors
    See publication
  • The sound of reading: Color-to-timbre substitution boosts reading performance via OVAL, a novel auditory orthography optimized for visual-to-auditory mapping

    PLOS ONE

    Differently from western sighted individuals, literacy rates via tactile reading systems, such as Braille, are declining, thus imposing an alarming threat to literacy among non-visual readers. This decline is due to many reasons including the length of training needed to master Braille, which must also include extensive tactile sensitivity exercises, the lack of proper Braille instruction and the high costs of Braille devices. The far-reaching consequences of low literacy rates, raise the need to develop alternative, cheap and easy-to-master non-visual reading systems. To this aim, we developed OVAL, a new auditory orthography based on a visual-to-auditory sensory-substitution algorithm. Here we present its efficacy for successful words-reading, and investigation of the extent to which redundant features defining characters (i.e., adding specific colors to letters conveyed into audition via different musical instruments) facilitate or impede auditory reading outcomes. Thus, we tested two groups of blindfolded sighted participants who were either exposed to a monochromatic or to a color version of OVAL. First, we showed that even before training, all participants were able to discriminate between 11 OVAL characters significantly more than chance level. Following 6 hours of specific OVAL training, participants were able to identify all the learned characters, differentiate them from untrained letters, and read short words/pseudo-words of up to 5 characters. The Color group outperformed the Monochromatic group in all tasks, suggesting that redundant characters’ features are beneficial for auditory reading. Overall, these results suggest that OVAL is a promising auditory-reading tool that can be used by blind individuals, by people with reading deficits as well as for the investigation of reading specific processing dissociated from the visual modality.

    Other authors
    See publication
  • Decoding Natural Sounds in Early "Visual" Cortex of Congenitally Blind Individuals

    Current Biology

    Complex natural sounds, such as bird singing, people talking, or traffic noise, induce decodable fMRI activation patterns in early visual cortex of sighted blindfolded participants [1]. That is, early visual cortex receives non-visual and potentially predictive information from audition. However, it is unclear whether the transfer of auditory information to early visual areas is an epiphenomenon of visual imagery or, alternatively, whether it is driven by mechanisms independent from visual experience. Here, we show that we can decode natural sounds from activity patterns in early "visual" areas of congenitally blind individuals who lack visual imagery. Thus, visual imagery is not a prerequisite of auditory feedback to early visual cortex. Furthermore, the spatial pattern of sound decoding accuracy in early visual cortex was remarkably similar in blind and sighted individuals, with an increasing decoding accuracy gradient from foveal to peripheral regions. This suggests that the typical organization by eccentricity of early visual cortex develops for auditory feedback, even in the lifelong absence of vision. The same feedback to early visual cortex might support visual perception in the sighted [1] and drive the recruitment of this area for non-visual functions in blind individuals [2, 3].

    Other authors
    See publication
  • Are critical periods reversible in the adult brain? Insights on cortical specializations based on sensory deprivation studies

    Neuroscience & Biobehavioral Reviews

    We review here studies with visual and auditory deprived/recovery populations to argue for the need of a redefinition of the crucial role of unisensory-specific experiences during critical periods (CPs) on the emergence of sensory specializations. Specifically, we highlight that these studies, with emphasis on results with congenitally blind adults using visual sensory-substitution devices, consistently document that typical specializations (e.g., in visual cortex) could arise also in adulthood via other sensory modalities (e.g., audition), even after relatively short (tailored) trainings. Altogether, these studies suggest that 1) brain specializations are driven by sensory-independent computations rather than by unisensory-specific inputs and that 2) specific computation-oriented trainings, even if executed during adulthood, can guide the sensory brain to display/recover core properties of brain specializations. We thus introduce here the concept of a reversible plasticity gradient, namely that brain plasticity spontaneously decreases with age in line with CPs theory, but it nonetheless can be reignited across the lifespan, even without any exposure to unisensory (e.g., visual) experiences during childhood, thus diverging dramatically from CPs assumptions.

    Other authors
    See publication
  • A Whole-Body Sensory-Motor Gradient is Revealed in the Medial Wall of the Parietal Lobe

    Journal of Neuroscience

    In 1954, Penfield and Jasper's findings based on electric stimulation of epileptic patients led them to hypothesize that a sensory representation of the body should be found in the precuneus. They termed this representation the “supplementary sensory” area and emphasized that the exact form of this homunculus could not be specified on the basis of their results. In the decades that followed, their prediction was neglected. The precuneus was found to be involved in numerous motor, cognitive and visual processes, but no work was done on its somatotopic organization. Here, we used a periodic experimental design in which 16 human subjects (eight women) moved 20 body parts to investigate the possible body part topography of the precuneus. We found an anterior-to-posterior, dorsal-to-ventral, toes-to-tongue gradient in a mirror orientation to the SMA. When inspecting body-part-specific functional connectivity, we found differential connectivity patterns for the different body parts to the primary and secondary motor areas and parietal and visual areas, and a shared connectivity to the extrastriate body area, another topographically organized area. We suggest that a whole-body gradient can be found in the precuneus and is connected to multiple brain areas with different connectivity for different body parts. Its exact role and relations to the other known functions of the precuneus such as self-processing, motor imagery, reaching, visuomotor and other body–mind functions should be investigated.

    Other authors
    See publication
  • Task-selectivity in the sensory deprived brain and sensory substitution approaches for clinical practice: evidence from blindness

    Multisensory Perception, Academic Press

    What are the principles underlying the emergence of specializations in the human brain? In the last few decades, convergent evidence from studies with sensory deprived populations such as blind and deaf adults showed that most of the known specialized regions in higher-order “visual” and “auditory” cortices maintained their anatomically consistent category-selective properties in the absence of visual/auditory experience when input was provided by other senses carrying category-specific information. Here, we will review the main results of such studies with a special focus on results obtained with sensory substitution devices and their implications for sensory restoration. Our emphasis will be on the proposed brain mechanisms underlying this type of (re)-organization that we term task-selective and sensory-independent (TSSI), on the (re)-definition of the classic assumptions regarding critical periods of development, and on the crucial role of multisensory training to maximize sensory restoration outcomes.

    Other authors
    See publication
  • A systematic computerized training program for using Sensory Substitution Devices in real-life

    2019 International Conference on Virtual Rehabilitation (ICVR) IEEE

    In the past decades, Sensory Substitution Devices (SSDs) have been widely used as a research tool to unravel the properties of the sensory brain. Although their rehabilitation potential is repeatedly demonstrated, SSDs were never widely adopted by blind individuals in everyday life, except for a few super-users' cases. One reason explaining this gap is the lack of structured SSD training programs for everyday use. We thus developed an ambitious computerized SSD training program using the EyeMusic visual-to-auditory SSD and gathered 10 blind participants to test its efficiency. Participants were trained to identify pictures of real objects from different categories (e.g., furniture). For each category, we tested the performance of participants before training and again after 10 hours of dedicated training. The test included both trained and untrained stimuli. The 10 hours training program involved a combination of static stimuli and dynamic computer games and was individually adapted to participants' learning pace. Initial results show that after training, participants achieved significantly higher accuracy rates in object recognition compared to baseline for trained and most importantly, for untrained objects from the same category. This further supports the suitability of SSDs in everyday life, and thus propels their adoption for this purpose.

    Other authors
    See publication
  • Standardizing Visual Rehabilitation using Simple Virtual Tests

    2019 International Conference on Virtual Rehabilitation (ICVR) IEEE

    Many different visual rehabilitation approaches are being utilized to offer visual information to the blind. User proficiency and functional ability are currently evaluated either via ad-hoc tests or via standardized visual tests which are not sensitive enough in the range of extreme low vision. Unfortunately, this is the functional level that these approaches typically offer. This is especially important as the main criteria by which most users will judge the efficacy of these rehabilitation approaches is by the functional benefits it grants them. Furthermore, currently, there are no accepted benchmarks or clear comparative testing of each rehabilitation approach, leading to the development of many new aids but the practical adoption of few. Combined these indicate a need to add standardized functional tests to this evaluation toolbox. Indeed, several functional tests have recently been suggested but their adoption has been very limited. Here, we review current tests and then conduct a formative study consulting experts in the field to map issues with current standardization attempts. This formative study offered a list of practical design suggestions for functional standardization tests. We then suggest using simple virtual environments as one such family of tests. Virtual scenarios meet many of the experts’ suggestions - they are easy to share, flexible, affordable, safe, identical wherever run, can be run by a single operator and offer control over external parameters enabling a focus on the offered visual information. Finally, we demonstrate this approach via a freely available virtual version of a relatively standard functional test - finding a door - in a 10-minute paradigm which includes 30 trials. We find that congenitally-blind and sighted-blindfolded subjects cannot perform this task without the device, but that they perform it successfully with it, demonstrating the tests’ potential viability.

    Other authors
    See publication
  • The Topo-Speech Algorithm: An intuitive Sensory Substitution for Spatial Information

    2019 International Conference on Virtual Rehabilitation (ICVR) IEEE

    Is it possible to quickly and reliably understand the position of objects in space without vision? This is one of the biggest challenges blind people face daily. We developed a novel algorithm called the Topo-Speech which conveys the spatial position of objects via speech manipulations. We ran a pilot study on blindfolded sighted adults (n=5) to test the extent to which users can locate objects' spatial positions after short training with the Topo-Speech, as well as their ability to locate untrained spatial positions. Participants were trained for ~30 minutes on the detection of objects' positions on a 3×3 grid. Then they were tested on the same spatial locations (though using different stimuli). Finally, participants were tested on identifying the positions of objects on a 5×5 grid (i.e. additional spatial locations) without any specific training. Our results showed that participants performed significantly above chance for both trained and untrained spatial positions. This in turn suggests the feasibility of the Topo-Speech to convey spatial related information via a non-visual channel and prompts us to quickly test such an approach with people who are visually impaired.

    Other authors
    See publication
  • Virtual Self-Training of a Sensory Substitution Device for Blind Individuals

    2019 International Conference on Virtual Rehabilitation (ICVR) IEEE

    One of the main bottlenecks to the adoption of Sensory Substitution Devices (SSDs) in everyday life by blind users is the difficulty in learning to interpret their algorithms and the consequent dependency on sighted instructors throughout the learning-process. Here we test the efficacy of a virtual online self-training program we developed for learning the basics of the EyeMusic, a visual-to-auditory SSD. Furthermore, to better understand the properties of self-training we tested the intuitiveness of the device, based on performance after a brief explanation but no exposure, and tested several variations on feedback during self-training. We tested the performance of two groups of sighted participants via pre-post identical exams, with intermediate training lessons. These groups were offered different feedback after experiencing the auditory stimuli - either a visual version of the stimuli or a textual description. After a brief explanation of the EyeMusic algorithm, and before training, participants scored 41±10.6% in the exam, significantly above chance. Self-training led to a highly significant improvement with a 59±12% score in the post-exam. No significant difference was found between the post-exam results of the two different feedback groups. These results demonstrate the possibility to self-train on the basics of a whole-scene visual-to-auditory SSD. Furthermore, visual access to the images during training did not seem to improve the final score, demonstrating the potential efficacy of such self-training method also for blind users.

    Other authors
    See publication
  • Immediate improvement of speech-in-noise perception through multisensory stimulation via an auditory to tactile sensory substitution

    Restorative Neurology and Neuroscience

    In the current proof-of-concept study we tested whether multisensory stimulation, pairing audition and a minimal-size touch device would improve intelligibility of speech in noise. To this aim we developed an audio-to-tactile sensory substitution device (SSD) transforming low-frequency speech signals into tactile vibrations delivered on two finger tips. Based on the inverse effectiveness law, i.e., multisensory enhancement is strongest when signal-to-noise ratio is lowest between senses, we embedded non-native language stimuli in speech-like noise and paired it with a low-frequency input conveyed through touch. We found immediate and robust improvement in speech recognition (i.e. in the Signal-To-Noise-ratio) in the multisensory condition without any training, at a group level as well as in every participant. The reported improvement at the group-level of 6 dB was indeed major considering that an increase of 10 dB represents a doubling of the perceived loudness. These results are especially relevant when compared to previous SSD studies showing effects in behavior only after a demanding cognitive training. We discuss the implications of our results for development of SSDs and of specific rehabilitation programs for the hearing impaired either using or not using HAs or CIs. We also discuss the potential application of such a set-up for sense augmentation, such as when learning a new language.

    Other authors
    See publication
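
    The audio-to-tactile SSD above extracts low-frequency speech content and delivers it as fingertip vibrations. Below is a minimal sketch of that extraction step only, assuming a simple low-pass filter; the 250 Hz cutoff, filter order and normalization are illustrative assumptions, not the published signal chain.

    # Sketch of the low-frequency extraction behind the audio-to-tactile idea above.
    # The 250 Hz cutoff and 4th-order Butterworth filter are illustrative assumptions.

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def speech_to_vibration(speech, fs, cutoff_hz=250.0):
        """Low-pass the speech waveform and normalize it as a vibration drive signal."""
        sos = butter(4, cutoff_hz, btype="low", fs=fs, output="sos")
        low = sosfiltfilt(sos, speech)          # keep only low-frequency speech content
        peak = np.max(np.abs(low))
        return low / peak if peak > 0 else low  # -1..1 drive signal for the actuators

    if __name__ == "__main__":
        fs = 16000
        t = np.arange(0, 1.0, 1 / fs)
        # toy "speech": a 150 Hz fundamental plus higher-frequency energy
        toy = np.sin(2 * np.pi * 150 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)
        print(speech_to_vibration(toy, fs).shape)
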
  • The development of white matter structural changes during the process of deterioration of the visual field

    Scientific Reports

    Emerging evidence suggests that white matter plasticity in the adult brain is preserved after sensory and behavioral modifications. However, little is known about the progression of structural changes during the process of decline in visual input. Here we studied two groups of patients suffering from advanced retinitis pigmentosa with specific deterioration of the visual field: patients who had lost their peripheral visual field, retaining only central (“tunnel”) vision, and blind patients with complete visual field loss. Testing of these homogeneous groups made it possible to assess the extent to which the white matter is affected by loss of partial visual input and whether partially preserved visual input suffices to sustain stability in tracts beyond the primary visual system. Our results showed gradual changes in diffusivity that are indicative of degenerative processes in the primary visual pathway comprising the optic tract and the optic radiation. Interestingly, changes were also found in tracts of the ventral stream and the corticospinal fasciculus, depicting a gradual reorganisation of these tracts consequentially to the gradual loss of visual field coverage (from intact perception to partial vision to complete blindness). This reorganisation may point to microstructural plasticity underlying adaptive behavior and cross-modal integration after partial visual deprivation.

    Other authors
    See publication
  • The Effect of Irrelevant Environmental Noise on the Performance of Visual-to-Auditory Sensory Substitution Devices Used by Blind Adults

    Multisensory Research

    Visual-to-auditory Sensory Substitution Devices (SSDs) are a family of non-invasive devices for visual rehabilitation aiming at conveying whole-scene visual information through the intact auditory modality. Although proven effective in lab environments, the use of SSDs has yet to be systematically tested in real-life situations. To start filling this gap, in the present work we tested the ability of expert SSD users to filter out irrelevant background noise while focusing on the relevant audio information. Specifically, nine blind expert users of the EyeMusic visual-to-auditory SSD performed a series of identification tasks via SSDs (i.e., shape, color, and conjunction of the two features). Their performance was compared in two separate conditions: silent baseline, and with irrelevant background sounds from real-life situations, using the same stimuli in a pseudo-random balanced design. Although the participants described the background noise as disturbing, no significant performance differences emerged between the two conditions (i.e., noisy; silent) for any of the tasks. In the conjunction task (shape and color) we found a non-significant trend for a disturbing effect of the background noise on performance. These findings suggest that visual-to-auditory SSDs can indeed be successfully used in noisy environments and that users can still focus on relevant auditory information while inhibiting irrelevant sounds. Our findings take a step towards the actual use of SSDs in real-life situations while potentially impacting rehabilitation of sensory deprived individuals.

    Other authors
    See publication
  • What Can Sensory Substitution Tell Us about the Organization of the Brain?

    Sensory Substitution and Augmentation, Proceedings of the British Academy

    We discuss how sensory substitution devices (SSDs) can be used to study the organization of the brain. To do so we look at the use of SSDs in the blind and how SSDs can be used to identify sensory-dependent and sensory-independent brain function. Cross-modal interactions may represent new patterns of connectivity or the unmasking of pre-existing associations. We show how the blind brain can be a window into cross-modal plasticity and can dissociate intrinsic and experience-dependent brain functions. We argue that the brain is a sensory-independent task machine and explain the implications for the rehabilitation of blind people.

    Other authors
    See publication
  • Human Navigation Without and With Vision - the Role of Visual Experience and Visual Regions

    bioRxiv

    Human navigation relies on a wide range of visual retinotopic cortical regions yet the precise role that these regions play in navigation remains unclear. Are these regions mainly sensory input channels or also modality-independent spatial processing centers? Accordingly, will they be recruited for navigation also without vision, such as via audition? Will visual experience, or the lack thereof, affect this recruitment? Sighted, congenitally blind and sighted-blindfolded participants actively navigated virtual mazes during fMRI scanning before and after navigating them in the real world. Participants used the EyeCane visual-to-auditory navigation aid for non-visual navigation.

    We found that retinotopic regions, including both dorsal-stream regions (e.g., V6) and primary regions (e.g., peripheral V1), were selectively recruited for non-visual navigation only after the participants mastered the EyeCane, demonstrating rapid plasticity for non-visual navigation. The hippocampus, considered the navigation network's core, displayed negative BOLD in all groups.

    Our results demonstrate that the retinotopic nodes' modality-independent spatial role in non-visual human navigation is robust to lifelong visual deprivation, showing that visual input during development is not required for their recruitment. Furthermore, our results with the blindfolded group show that this recruitment survives even brief blindfolding, although only after brief training, demonstrating rapid task-based plasticity. These results generalize the framework of task selectivity, rather than input modality, as a brain organization principle to dorsal-stream retinotopic areas and, for the first time, to the primary visual cortex.

    Other authors
    See publication
  • Implications of cross-modal and intra-modal plasticity for the education and rehabilitation of deaf children and adults

    Evidence-Based Practices in Deaf Education

    Exploring the environment without the auditory modality elicits wholesale reorganizations at both the behavioral and the neural levels throughout life. This chapter reviews changes in brain organization and behavior arising from early deafness. It depicts a multifaceted framework in both domains: the performance of deaf persons has been shown to be comparable to, better than, as well as worse than that of hearing participants. They also show brain modifications ascribable both to intramodal (within the visual system) and cross-modal plasticity (the recruitment of the deprived auditory cortex by intact sensory modalities). The authors discuss the implications of these results for sensory rehabilitation and highlight the benefits of multisensory systematic training programs to boost recovery.

    Other authors
    See publication
  • The mapping and reconstruction of the brain's mind eye in the absence of visual experience: a population receptive field mapping of soundscape space

    Journal of Vision

    Studies have shown that blind participants trained with visual-to-auditory sensory substitution devices (SSDs) can recognize various 'visual' objects and even body shapes and faces, and that doing so specifically activates many of the known category-selective regions in the high-order visual streams [1]. But how is this learned experience integrated in the brain? Does the visual-to-auditory input follow similar organizational principles as the natural senses? Here we studied a proficient congenitally blind user of the EyeMusic SSD [2] using population receptive field (pRF) mapping [3], a method for imaging visual retinotopic maps. The EyeMusic algorithm reads the image from left to right and forms a soundscape of the image in which the X and Y axes are represented by time and by pitch on a pentatonic scale, respectively [2]. Full tonotopic maps of musical pitch-elevation (Y axis) were found in bilateral A1, showing organized maps of the EyeMusic's notes. Moreover, topographical maps of the soundscape field were found in the right lateral occipital (LO), right medial occipital, and right parieto-occipital (PO) cortex. Full topographic maps of stimulus timing (X axis) were found in the same regions in the right LO and right PO. Notably, in the right PO the maps of the two axes overlapped. Conceptually, this suggests that the learned soundscape field may be analyzed in a way similar to how the two dimensions of retinotopy, eccentricity and polar angle, span the visual field. We were also able to reconstruct and predict the perceived stimuli in the soundscape field. This case study suggests that in adulthood novel topographic maps can develop following extensive training on novel topographic sensory experiences.
    References: [1] Amedi, A. et al., Trends Cogn. Sci., 2017. [2] Abboud, S. et al., Restor. Neurol. Neurosci., 2014. [3] Dumoulin, S.O. et al., NeuroImage, 2008.

    Other authors
    See publication
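
    As a rough illustration of the left-to-right sonification principle described in the abstract above (X axis as time, Y axis as pitch on a pentatonic scale), the following Python sketch maps a grayscale image to a soundscape. The scale, frequencies, column duration and normalization are illustrative assumptions, not the actual EyeMusic parameters.

        import numpy as np

        def image_to_soundscape(img, col_dur=0.05, sr=22050, base_freq=220.0):
            # Sketch only: columns -> time, rows -> pentatonic pitch, brightness -> loudness.
            n_rows, n_cols = img.shape
            pentatonic = [0, 2, 4, 7, 9]                       # semitone steps of a major pentatonic scale
            semis = [pentatonic[i % 5] + 12 * (i // 5) for i in range(n_rows)]
            freqs = (base_freq * 2 ** (np.array(semis) / 12.0))[::-1]   # top row = highest pitch
            t = np.arange(int(col_dur * sr)) / sr
            cols = []
            for c in range(n_cols):                            # scan the image left to right
                amps = img[:, c] / max(img.max(), 1e-9)        # per-row loudness from pixel brightness
                tones = amps[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)
                cols.append(tones.sum(axis=0))                 # mix all active rows of this column
            sound = np.concatenate(cols)
            return sound / max(np.abs(sound).max(), 1e-9)      # normalize to [-1, 1]

        # usage: sonify a simple diagonal stroke in a 20x40 image
        demo = np.zeros((20, 40))
        demo[np.arange(20), np.arange(0, 40, 2)] = 1.0
        waveform = image_to_soundscape(demo)                   # write to a WAV file to listen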
  • The Implications of Brain Plasticity and Task Selectivity for Visual Rehabilitation of Blind and Visually Impaired Individuals

    The Neuroimaging of Brain Diseases

    The human brain is a formidably complex and adaptable organ capable of rewiring itself or adjusting existing connections in order to learn and to maximize its survival edge. Studies using sensory substitution devices have had a big impact on the uncovering of the mechanisms subtending brain organization. Sensory substitution devices are capable of conveying information typically received through a specific sensory modality (e.g., vision) and transferring it to the user via a different sense (e.g., audition or touch). Experimental research exploring the perceptual learning of sensory substitution devices has revealed the ability of users to recognize movement and shapes, to navigate routes, to detect and avoid obstacles, and to perceive colors or depth via touch or sound, even in cases of full and congenital blindness. Using a combination of functional and anatomical neuroimaging techniques, the comparisons of performances between congenitally blind people and sighted people using sensory substitution devices in perceptual and sensory-motor tasks as well as in several recognition tasks uncovered the striking ability of the brain to rewire itself during perceptual learning and to learn to interpret novel sensory information even during adulthood. This review discusses the impact of invasive and noninvasive forms of artificial vision on brain organization with a special emphasis on sensory substitution devices and also discusses the implications of these findings for the visual rehabilitation of congenitally and late blind and partially sighted individuals while applying insights from neuroimaging and psychophysics.

    Other authors
    See publication
  • Sensory Substitution and the Neural Correlates of Navigation in Blindness

    Mobility of Visually Impaired People (Springer)

    This chapter reviews the most recent advances in sensory substitution and the neural correlates of navigation in congenital blindness. Studies have established the superior ability of congenitally blind (CB) participants, with the aid of Sensory Substitution Devices (SSDs), to navigate new environments and detect the size and shape of obstacles in order to avoid them. These studies suggest that with training, CB can achieve a representation of space that is equivalent to that of the sighted. From a phenomenological point of view, sensation and perception provided by SSDs have been likened to real vision, but the question remains as to the subjective sensations (qualia) felt by users. We review recent theories on the phenomenological properties of sensory substitution and the recent literature on spatial abilities of participants using SSDs. From these different sources of research, we conclude that training-induced plastic changes enable task-specific brain activations. The recruitment of the primary visual cortex by nonvisual SSD stimulations and the subsequent activations of associative visual cortices in the congenitally blind suggest that the sensory information is treated in an amodal fashion, i.e., in terms of the task being performed rather than the sensory modality. These anatomical changes enable the embodiment of nonvisual information, allowing SSD users to accomplish a multitude of "visual" tasks. We emphasize here the ability of CB individuals to navigate in real and virtual environments in spite of a large volumetric reduction in the posterior segment of the hippocampus, a key area involved in navigation. In addition, the superior behavioral performance of CB in a variety of sensory and cognitive tasks, combined with anatomical and functional MRI, underlines the susceptibility of the brain to training-induced plasticity.

    Other authors
    See publication
  • The transfer of non-visual spatial knowledge between real and virtual mazes via sensory substitution

    2017 International Conference on Virtual Rehabilitation (ICVR)

    Many attempts are being made to ease navigation for people who are blind, both in terms of spatial learning and of navigation itself. One promising approach is the use of virtual environments for safe and versatile training. While it is known that humans can transfer non-visual spatial knowledge between real and virtual environments, previous studies have typically been limited to simple environments, mainly blindfolded-sighted participants, and different methods of sensory input for the real and virtual environments. In this study, participants with a wide range of visual experience use the EyeCane and Virtual EyeCane to solve complex Hebb-Williams mazes in real and virtual environments. The EyeCane and its virtual counterpart are minimalistic sensory substitution devices that code single-point distance information into sound. We test whether participants who solve a real maze subsequently improve their performance in the virtual maze (the real-to-virtual sequence), and whether participants who solve a virtual maze subsequently improve their performance in the real world (the virtual-to-real sequence). We find that participants can use sensory-substitution-guided navigation to extract spatial information from the virtual world and apply it to significantly improve their behavioral performance in the real world, and vice versa. Our results demonstrate transfer in both directions, strengthening and extending the existing literature in terms of complexity, parameters, input-matching and varying levels of visual experience.

    Other authors
    See publication
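
    A minimal sketch of the single-point distance-to-sound coding attributed to the EyeCane in the abstract above. The device's actual mapping is not reproduced here; the beep-rate and pitch mappings below are purely illustrative assumptions.

        import numpy as np

        def distance_to_beeps(distance_m, max_range=5.0, sr=22050, window=1.0):
            # Illustrative mapping: nearer obstacles -> faster and higher-pitched beeps.
            d = np.clip(distance_m, 0.0, max_range)
            rate = 2.0 + 10.0 * (1.0 - d / max_range)      # beeps per second: ~2 (far) to ~12 (near)
            freq = 300.0 + 600.0 * (1.0 - d / max_range)   # beep pitch in Hz
            t = np.arange(int(window * sr)) / sr
            gate = (np.sin(2 * np.pi * rate * t) > 0).astype(float)   # on/off beep envelope
            return gate * np.sin(2 * np.pi * freq * t)

        # usage: one second of auditory feedback for an obstacle 1.5 m away
        signal = distance_to_beeps(1.5)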
  • Task Selectivity as a Comprehensive Principle for Brain Organization

    Trends in Cognitive Sciences

    How do the anatomically consistent functional selectivities of the brain emerge? A new study by Bola and colleagues reveals task selectivity in auditory rhythm-selective areas in congenitally deaf adults perceiving visual rhythm sequences. Here, we contextualize this result with accumulating evidence from animal and human studies supporting sensory-independent task specializations as a comprehensive principle shaping brain (re)organization.

    Other authors
    See publication
  • Positive and Negative Somatotopic BOLD Responses in Contralateral Versus Ipsilateral Penfield Homunculus

    Cerebral Cortex

    One of the basic properties of sensory cortices is their topographical organization. Most imaging studies explored this organization using the positive blood oxygenation level-dependent (BOLD) signal. Here, we studied the topographical organization of both positive and negative BOLD in contralateral and ipsilateral primary somatosensory cortex (S1). Using phase-locking mapping methods, we verified the topographical organization of contralateral S1, and further showed that different body segments elicit pronounced negative BOLD responses in both hemispheres. In the contralateral hemisphere, we found a sharpening mechanism in which stimulation of a given body segment triggered a gradient of activation with a significant deactivation in more remote areas. In the ipsilateral cortex, deactivation was not only located in the homolog area of the stimulated parts but rather was widespread across many parts of S1. Additionally, analysis of resting-state functional magnetic resonance imaging signal showed a gradient of connectivity to the neighboring contralateral body parts as well as to the ipsilateral homologous area for each body part. Taken together, our results indicate a complex pattern of baseline and activity-dependent responses in the contralateral and ipsilateral sides. Both primary sensory areas were characterized by unique negative BOLD responses, suggesting that they are an important component in topographic organization of sensory cortices.

    Other authors
    See publication
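
    The phase-locking mapping referred to above is, in its standard form, a phase-encoded (traveling-wave) analysis: stimulation sweeps periodically across body segments, and each voxel's preferred segment is read off the phase of its response at the stimulation frequency. The sketch below is a generic, simplified version of that analysis, not the exact pipeline used in the paper.

        import numpy as np

        def phase_encoded_map(voxel_ts, n_cycles):
            # voxel_ts: (n_voxels, n_timepoints) BOLD time courses from a periodic-sweep run;
            # n_cycles: number of stimulation cycles in the run.
            demeaned = voxel_ts - voxel_ts.mean(axis=1, keepdims=True)
            spectra = np.fft.rfft(demeaned, axis=1)
            comp = spectra[:, n_cycles]                  # Fourier bin of the stimulation frequency
            phase = np.angle(comp)                       # preferred position within the sweep cycle
            # coherence: energy at the stimulation frequency relative to the whole spectrum
            coherence = np.abs(comp) / np.sqrt((np.abs(spectra) ** 2).sum(axis=1))
            return phase, coherence

        # usage with synthetic data: 100 voxels, 200 time points, 10 stimulation cycles
        phase, coh = phase_encoded_map(np.random.randn(100, 200), n_cycles=10)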
  • Waist-up protection for blind individuals using the EyeCane as a primary and secondary mobility aid

    Restorative Neurology and Neuroscience

    One of the most stirring statistics in relation to the mobility of blind individuals is the high rate of upper-body injuries, even when using the white cane. We here addressed a rehabilitation-oriented challenge: providing a reliable tool for blind people to avoid waist-up obstacles, one of the impediments to their successful mobility with currently available methods (e.g., the white cane). We used the EyeCane, a device we developed which translates distances from several angles into haptic and auditory cues in an intuitive and unobtrusive manner, serving both as a primary and a secondary mobility aid. We investigated the rehabilitation potential of such a device in facilitating visionless waist-up body protection. After ∼5 minutes of training with the EyeCane, blind participants were able to successfully detect and avoid obstacles waist-high and up. This was significantly higher than their success when using the white cane alone. As avoidance of obstacles required participants to perform an additional cognitive process after detection, the avoidance rate was significantly lower than the detection rate. Our work has demonstrated that the EyeCane has the potential to extend the sensory world of blind individuals by expanding their currently accessible inputs, and has offered them a new practical rehabilitation tool.

    Other authors
    See publication
  • Reorganization of early visual cortex functional connectivity following selective peripheral and central visual loss

    Scientific Reports

    Behavioral alterations emerging after central or peripheral vision loss suggest that cerebral reorganization occurs for both the afferented and deafferented early visual cortex (EVC). We explored the functional reorganization of the central and peripheral EVC following visual field defects specifically affecting central or peripheral vision. Compared to normally sighted, afferented central and peripheral EVC enhance their functional connectivity with areas involved in visual processing, whereas deafferented central and peripheral EVC increase their functional connectivity with more remote regions. The connectivity pattern of afferented EVC suggests adaptive changes that might enhance the visual processing capacity whereas the connectivity pattern of deafferented EVC may reflect the involvement of these regions in high-order mechanisms. Characterizing and understanding the plastic changes induced by these visual defects is essential for any attempt to develop efficient rehabilitation strategies.

    Other authors
    See publication
  • Increased functional connectivity between language and visually deprived areas in late and partial blindness

    NeuroImage

    In the congenitally blind, language processing involves visual areas. In the case of normal visual development however, it remains unclear whether later visual loss induces interactions between the language and visual areas. This study compared the resting-state functional connectivity (FC) of retinotopic and language areas in two unique groups of late visually deprived subjects: (1) blind individuals suffering from retinitis pigmentosa (RP), (2) RP subjects without a visual periphery but with preserved central “tunnel vision”, both of whom were contrasted with sighted controls. The results showed increased FC between Broca's area and the visually deprived areas in the peripheral V1 for individuals with tunnel vision, and both the peripheral and central V1 for blind individuals. These findings suggest that FC can develop in the adult brain between the visual and language systems in the completely and partially blind. These changes start in the deprived areas and increase in size (involving both foveal and peripheral V1) and strength (from negative to positive FC) as the disease and sensory deprivation progress. These observations support the claim that functional connectivity between remote systems that perform completely different tasks can change in the adult brain in cases of total and even partial visual deprivation.

    Other authors
    See publication
  • Multisensory Processes: A Balancing Act across the Lifespan

    Trends in Neurosciences

    Multisensory processes are fundamental in scaffolding perception, cognition, learning, and behavior. How and when stimuli from different sensory modalities are integrated rather than treated as separate entities is poorly understood. We review how the relative reliance on stimulus characteristics versus learned associations dynamically shapes multisensory processes. We illustrate the dynamism in multisensory function across two timescales: one long term that operates across the lifespan and one short term that operates during the learning of new multisensory relations. In addition, we highlight the importance of task contingencies. We conclude that these highly dynamic multisensory processes, based on the relative weighting of stimulus characteristics and learned associations, provide both stability and flexibility to brain functions over a wide range of temporal scales.

    Other authors
    See publication
  • fMRI Dependent Components Analysis Reveals Dynamic Relations Between Functional Large Scale Cortical Networks

    bioRxiv

    One of the major advantages of whole brain fMRI is the detection of large scale cortical networks. Dependent Components Analysis (DCA) is a novel approach designed to extract both cortical networks and their dependency structure. DCA is fundamentally different from prevalent data driven approaches, i.e. spatial ICA, in that instead of maximizing the independence of components it optimizes their dependency (in a tree graph structure, tDCA) depicting cortical areas as part of multiple cortical networks. Here tDCA was shown to reliably detect large scale functional networks in single subjects and in group analysis, by clustering non-noisy components on one branch of the tree structure. We used tDCA in three fMRI experiments in which identical auditory and visual stimuli were presented, but novelty information and task relevance were modified. tDCA components tended to include two anticorrelated networks, which were detected in two separate ICA components, or belonged in one component in seed functional connectivity. Although sensory components remained the same across experiments, other components changed as a function of the experimental conditions. These changes were either within component, where it encompassed other cortical areas, or between components, where the pattern of anticorrelated networks and their statistical dependency changed. Thus tDCA may prove to be a useful, robust tool that provides a rich description of the statistical structure underlying brain activity and its relationships to changes in experimental conditions. This tool may prove effective in detection and description of mental states, neural disorders and their dynamics.

    Other authors
    See publication
  • Intensity-Based Masking: A Tool to Improve Functional Connectivity Results of Resting-State fMRI

    Human Brain Mapping

    Seed‐based functional connectivity (FC) of resting‐state functional MRI data is a widely used methodology, enabling the identification of functional brain networks in health and disease. Based on signal correlations across the brain, FC measures are highly sensitive to noise. A somewhat neglected source of noise is the fMRI signal attenuation found in cortical regions in close vicinity to sinuses and air cavities, mainly in the orbitofrontal, anterior frontal and inferior temporal cortices. BOLD signal recorded at these regions suffers from dropout due to susceptibility artifacts, resulting in an attenuated signal with reduced signal‐to‐noise ratio in as many as 10% of cortical voxels. Nevertheless, signal attenuation is largely overlooked during FC analysis. Here we first demonstrate that signal attenuation can significantly influence FC measures by introducing false functional correlations and diminishing existing correlations between brain regions. We then propose a method for the detection and removal of the attenuated signal (“intensity‐based masking”) by fitting a Gaussian‐based model to the signal intensity distribution and calculating an intensity threshold tailored per subject. Finally, we apply our method on real‐world data, showing that it diminishes false correlations caused by signal dropout, and significantly improves the ability to detect functional networks in single subjects. Furthermore, we show that our method increases inter‐subject similarity in FC, enabling reliable distinction of different functional networks. We propose to include the intensity‐based masking method as a common practice in the pre‐processing of seed‐based functional connectivity analysis, and provide software tools for the computation of intensity‐based masks on fMRI data.

    Other authors
    See publication
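
    The core of the intensity-based masking idea described above is fitting a model to the distribution of voxel intensities and deriving a per-subject cutoff below which voxels are treated as dropout. The sketch below is a simplified stand-in that uses a robust Gaussian estimate (median/MAD) rather than the paper's exact Gaussian-based model; the threshold factor is an illustrative assumption.

        import numpy as np

        def intensity_mask(mean_epi, k=2.5):
            # mean_epi: voxelwise temporal mean of a 4D fMRI run (any array shape).
            vals = mean_epi[mean_epi > 0]
            mu = np.median(vals)                           # robust centre of the in-brain intensity mode
            sigma = 1.4826 * np.median(np.abs(vals - mu))  # MAD-based spread, less biased by the dropout tail
            threshold = mu - k * sigma                     # per-subject intensity cutoff
            return mean_epi > threshold, threshold

        # usage: mask out signal-dropout voxels before seed-based connectivity analysis
        mean_epi = np.random.gamma(9.0, 100.0, size=(64, 64, 40))   # synthetic stand-in data
        mask, thr = intensity_mask(mean_epi)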
  • Aging and Sensory Substitution in a Virtual Navigation Task

    PLOS ONE

    Virtual environments are becoming ubiquitous, and are used in a variety of contexts, from entertainment to training and rehabilitation. Recently, technology for making them more accessible to blind or visually impaired users has been developed, using sound to represent visual information. The ability of older individuals to interpret these cues has not yet been studied. In this experiment, we studied the effects of age and sensory modality (visual or auditory) on navigation through a virtual maze. We added a layer of complexity by conducting the experiment in a rotating room, in order to test the effect of the spatial bias induced by the rotation on performance. Results from 29 participants showed that with the auditory cues, participants took longer to complete the mazes, took a longer path through the maze, paused more, and had more collisions with the walls, compared to navigation with the visual cues. The older group took longer to complete the mazes, paused more, and had more collisions with the walls, compared to the younger group. There was no effect of room rotation on performance, nor were there any significant interactions among age, feedback modality and room rotation. We conclude that there is a decline in performance with age, and that while navigation with auditory cues is possible even at an old age, it presents more challenges than visual navigation.

    Other authors
    See publication
  • Massive cortical reorganization in sighted Braille readers

    eLife

    The brain is capable of large-scale reorganization in blindness or after massive injury. Such reorganization crosses the division into separate sensory cortices (visual, somatosensory, etc.). As a result, the visual cortex of the blind becomes active during tactile Braille reading. Although the possibility of such reorganization in the normal, adult brain has been raised, definitive evidence has been lacking. Here, we demonstrate such extensive reorganization in normal, sighted adults who learned Braille while their brain activity was investigated with fMRI and transcranial magnetic stimulation (TMS). Subjects showed enhanced activity for tactile reading in the visual cortex, including the visual word form area (VWFA), which was modulated by their Braille reading speed, as well as strengthened resting-state connectivity between visual and somatosensory cortices. Moreover, TMS disruption of VWFA activity decreased their tactile reading accuracy. Our results indicate that large-scale reorganization is a viable mechanism recruited when learning complex skills.

    See publication
  • Social Sensing: a Wi-Fi based Social Sense for Perceiving the Surrounding People

    Augmented Human

    People who are blind or have social disabilities can encounter difficulties in properly sensing and interacting with surrounding people. We suggest here the use of a sensory augmentation approach, which offers the user perceptual input via properly functioning sensory channels (e.g., visual, tactile) for this purpose. Specifically, we created a Wi-Fi-signal-based system to help the user determine the presence of one or more people in the room. The signal's strength indicates the distance of the people in near proximity. These distances are sonified and played sequentially. The Wi-Fi signal arises from common smartphones, and the system can therefore be adapted for everyday use in a simple manner.

    We demonstrate the use of this system by showing its effectiveness in determining the presence of others. Specifically, we show that it allows the user to determine the location (i.e., close, inside or outside) and the number of people at each distance. This system can be further adopted for purposes such as locating one's group in a crowd, following a group in a new location, enhancing identification for people with prosopagnosia, raising awareness of the presence of others as part of a rehabilitation behavioral program for people with ASD, or real-life social networking.

    Other authors
    See publication
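
    The system above converts Wi-Fi signal strength into approximate distances and then sonifies those distances sequentially. The sketch below illustrates one plausible realization using a log-distance path-loss model and a simple pitch mapping; the constants and the mapping are assumptions, not the published implementation.

        import numpy as np

        def rssi_to_distance(rssi_dbm, tx_power=-40.0, path_loss_exp=2.5):
            # Log-distance path-loss estimate of distance in metres; constants are environment-dependent.
            return 10 ** ((tx_power - rssi_dbm) / (10 * path_loss_exp))

        def sonify_people(rssi_list, sr=22050, tone_dur=0.4):
            # One tone per detected device, played sequentially; nearer people -> higher pitch.
            t = np.arange(int(tone_dur * sr)) / sr
            tones = []
            for rssi in rssi_list:
                d = rssi_to_distance(rssi)
                freq = 1000.0 / (1.0 + d)                  # illustrative nearer-is-higher mapping
                tones.append(0.5 * np.sin(2 * np.pi * freq * t))
            return np.concatenate(tones) if tones else np.zeros(0)

        # usage: three phones detected at different signal strengths
        audio = sonify_people([-45, -60, -75])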
  • Perception of Graphical Virtual Environments by Blind Users via Sensory Substitution

    PLOS ONE

    Visual-to-audio Sensory Substitution Devices (SSDs) can increase accessibility for blind individuals generically by sonifying on-screen content regardless of the specific environment, and offer increased accessibility without the use of expensive dedicated peripherals like electrode/vibrator arrays. Using SSDs virtually calls on similar skills as using them in the real world, enabling both training on the device and training on environments virtually before real-world visits. This could enable more complex, standardized and autonomous SSD training and new insights into multisensory interaction and the visually deprived brain. However, whether congenitally blind users, who have never experienced virtual environments, will be able to use this information for successful perception and interaction within them is currently unclear. We tested this using the EyeMusic SSD, which conveys whole-scene visual information, to perform virtual tasks otherwise impossible without vision. Congenitally blind users had to navigate virtual environments and find doors, differentiate between them based on their features and surroundings, and walk through them; these tasks were accomplished with 95% and 97% success rates, respectively. We further explored the reactions of congenitally blind users during their first interaction with a more complex virtual environment than in the previous tasks: walking down a virtual street, recognizing different features of houses and trees, navigating to cross-walks, etc. Users reacted enthusiastically and reported feeling immersed within the environment. They highlighted the potential usefulness of such environments for understanding what visual scenes are supposed to look like and their potential for complex training, and suggested many future environments they wished to experience.

    Other authors
    See publication
  • The origins of metamodality in visual object area LO: Bodily topographical biases and increased functional connectivity to S1

    NeuroImage

    Recent evidence from blind participants suggests that visual areas are task-oriented and sensory-modality-input independent rather than sensory-specific to vision. Specifically, visual areas are thought to retain their functional selectivity when using non-visual inputs (touch or sound) even without having any visual experience. However, this theory is still controversial, since it is not clear whether this also characterizes the sighted brain, and whether the reported results in the sighted reflect basic fundamental a-modal processes or are largely an epiphenomenon. In the current study, we addressed these questions using a series of fMRI experiments aimed at exploring visual cortex responses to passive touch on various body parts and the coupling between the parietal and visual cortices as manifested by functional connectivity. We show that passive touch robustly activated the object-selective parts of the lateral-occipital (LO) cortex while deactivating almost all other occipital retinotopic areas. Furthermore, passive touch responses in the visual cortex were specific to hand and upper-trunk stimulations. Psychophysiological interaction analysis suggests that LO is functionally connected to the hand area in the primary somatosensory homunculus (S1) during hand and shoulder stimulations, but not to any of the other body parts. We suggest that LO is a fundamental hub that serves as a node between visual-object-selective areas and the S1 hand representation, probably due to the critical evolutionary role of touch in object recognition and manipulation. These results might also point to a more general principle suggesting that recruitment or deactivation of the visual cortex by other sensory input depends on the ecological relevance of the information conveyed by this input to the task/computations carried out by each area or network. This is likely to rely on the unique and differential pattern of connectivity of each visual area with the rest of the brain.

    See publication
  • Integration and binding in rehabilitative sensory substitution: Increasing resolution using a new Zooming-in approach

    Restorative Neurology and Neuroscience

    To visually perceive our surroundings we constantly move our eyes and focus on particular details, and then integrate them into a combined whole. Current visual rehabilitation methods, both invasive, like bionic eyes, and non-invasive, like Sensory Substitution Devices (SSDs), down-sample visual stimuli into low-resolution images. Zooming in to sub-parts of the scene could potentially improve detail perception. Can congenitally blind individuals integrate a 'visual' scene when offered this information via different sensory modalities, such as audition? Can they integrate visual information, perceived in parts, into larger percepts despite never having had any visual experience?

    Other authors
    See publication
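
    The zooming-in approach described above trades field of view for detail: instead of down-sampling the whole scene to the SSD's low resolution, a sub-region around a point of interest is cropped and mapped to the same output resolution. The sketch below illustrates the idea; the window size and output resolution are illustrative assumptions, not the published settings.

        import numpy as np

        def downsample(img, out_rows, out_cols):
            # Nearest-neighbour resampling to a low SSD resolution (illustrative).
            r_idx = np.arange(out_rows) * img.shape[0] // out_rows
            c_idx = np.arange(out_cols) * img.shape[1] // out_cols
            return img[np.ix_(r_idx, c_idx)]

        def zoom_in(img, center, zoom=2.0, out_shape=(30, 50)):
            # Crop a window around 'center' and map it to the same low resolution,
            # so each output pixel covers a smaller patch of the scene.
            h, w = img.shape
            win_h, win_w = int(h / zoom), int(w / zoom)
            r0 = int(np.clip(center[0] - win_h // 2, 0, h - win_h))
            c0 = int(np.clip(center[1] - win_w // 2, 0, w - win_w))
            crop = img[r0:r0 + win_h, c0:c0 + win_w]
            return downsample(crop, *out_shape)

        # usage: compare the whole-scene view with a 2x zoom centred on a detail
        scene = np.random.rand(240, 320)
        whole = downsample(scene, 30, 50)
        detail = zoom_in(scene, center=(120, 160), zoom=2.0)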
  • Discontinuity of cortical gradients reflects sensory impairment

    PNAS

    Topographic maps and their continuity constitute a fundamental principle of brain organization. In the somatosensory system, whole-body sensory impairment may be reflected either in cortical signal reduction or disorganization of the somatotopic map, such as disturbed continuity. Here we investigated the role of continuity in pathological states. We studied whole-body cortical representations in response to continuous sensory stimulation under functional MRI (fMRI) in two unique patient populations—patients with cervical sensory Brown-Séquard syndrome (injury to one side of the spinal cord) and patients before and after surgical repair of cervical disk protrusion—enabling us to compare whole-body representations in the same study subjects. We quantified the spatial gradient of cortical activation and evaluated the divergence from a continuous pattern. Gradient continuity was found to be disturbed at the primary somatosensory cortex (S1) and the supplementary motor area (SMA), in both patient populations: contralateral to the disturbed body side in the Brown-Séquard group and before repair in the surgical group, which was further improved after intervention. Results corresponding to the nondisturbed body side and after surgical repair were comparable with control subjects. No difference was found in the fMRI signal power between the different conditions in the two groups, as well as with respect to control subjects. These results suggest that decreased sensation in our patients is related to gradient discontinuity rather than signal reduction. Gradient continuity may be crucial for somatotopic and other topographical organization, and its disruption may characterize pathological processing.

    Other authors
    See publication
  • Origins of task-specific sensory-independent organization in the visual and auditory brain: neuroscience evidence, open questions and clinical implications

    Current Opinion in Neurobiology

    Evidence of task-specific sensory-independent (TSSI) plasticity from blind and deaf populations has led to a better understanding of brain organization. However, the principles determining the origins of this plasticity remain unclear. We review recent data suggesting that a combination of the connectivity bias and sensitivity to task-distinctive features might account for TSSI plasticity in the sensory cortices as a whole, from the higher-order occipital/temporal cortices to the primary sensory cortices. We discuss current theories and evidence, open questions and related predictions. Finally, given the rapid progress in visual and auditory restoration techniques, we address the crucial need to develop effective rehabilitation approaches for sensory recovery.

    Other authors
    See publication
  • Reading in the dark: neural correlates and cross-modal plasticity for learning to read entire words without visual experience

    Neuropsychologia

    Using fMRI and a visual-to-auditory sensory substitution device that transfers the topographical features of the letters, we compared reading with semantic and scrambled conditions in a group of congenitally blind (CB) individuals. We found activation in early auditory and visual cortices during the early processing phase (letter), while the later phase (word) showed VWFA and bilateral dorsal-intraparietal activations for words. This further supports the notion that many visual regions in general, even early visual areas, maintain a predilection for task processing even when the modality is variable and in spite of putative lifelong linguistic cross-modal plasticity.

    Furthermore, we find that the VWFA is recruited preferentially for letter and word form, while it was not recruited, and even exhibited deactivation, for an immediately subsequent semantic task, suggesting that despite only short sensory substitution experience, orthographic task processing can dominate semantic processing in the VWFA. On a wider scope, this implies that at least in some cases the cross-modal plasticity that enables the recruitment of areas for new tasks may be dominated by sensory-independent, task-specific activation.

    Other authors
    See publication
  • 'Visual' parsing can be taught quickly without visual experience during critical periods

    Scientific Reports

    Cases of invasive sight restoration in congenitally blind adults have demonstrated that acquiring visual abilities is extremely challenging, presumably because visual experience during critical periods is crucial for learning visual-unique concepts (e.g., size constancy). Visual rehabilitation can also be achieved using sensory substitution devices (SSDs), which convey visual information non-invasively through sounds. We tested whether one critical concept, visual parsing, which is highly impaired in sight-restored patients, can be learned using an SSD. To this end, congenitally blind adults participated in a unique, relatively short (~70 hours) SSD-'vision' training. Following this, participants successfully parsed 2D and 3D visual objects. Control individuals naïve to SSDs demonstrated that while some aspects of parsing with an SSD are intuitive, the blind participants' success could not be attributed to auditory processing alone. Furthermore, we had a unique opportunity to compare the SSD users' abilities to those reported for sight-restored patients who performed similar tasks visually and who had months of eyesight. Intriguingly, the SSD users outperformed the patients on most criteria tested. These results suggest that with adequate training and technologies, key high-order visual features can be quickly acquired in adulthood, and that the lack of visual experience during critical periods can be somewhat compensated for. Practically, these findings highlight the potential of SSDs as standalone aids or in combination with invasive restoration approaches.

    Other authors
    See publication
  • Neuroplasticity: Unexpected Consequences of Early Blindness

    Current Biology

    A pair of recent studies shows that congenital blindness can have significant consequences for the functioning of the visual system after sight restoration, particularly if that restoration is delayed.

    Other authors
    See publication
  • Origins of the specialization for letters and numbers in ventral occipitotemporal cortex

    Trends in Cognitive Sciences

    Deep in the occipitotemporal cortex lie two functional regions, the visual word form area (VWFA) and the number form area (NFA), which are thought to play a special role in letter and number recognition, respectively. We review recent progress made in characterizing the origins of these symbol form areas in children or adults, sighted or blind subjects, and humans or monkeys. We propose two non-mutually-exclusive hypotheses on the origins of the VWFA and NFA: the presence of a connectivity bias, and a sensitivity to shape features. We assess the explanatory power of these hypotheses, describe their consequences, and offer several experimental tests.

    See publication
  • Navigation Using Sensory Substitution in Real and Virtual Mazes

    PLOS ONE

    Under certain specific conditions, people who are blind have a perception of space that is equivalent to that of sighted individuals. However, in most cases their spatial perception is impaired. Is this simply due to their current lack of access to visual information, or does the lack of visual information throughout development prevent the proper integration of the neural systems underlying spatial cognition? Sensory Substitution Devices (SSDs) can transfer visual information via other senses and provide a unique tool to examine this question. We hypothesize that the use of our SSD (the EyeCane, a device that translates distance information into sounds and vibrations) can enable blind people to attain a performance level similar to that of the sighted in a spatial navigation task. We gave fifty-six participants training with the EyeCane. They navigated in real life-size mazes using the EyeCane SSD and in virtual renditions of the same mazes using a Virtual EyeCane. The participants were divided into four groups according to visual experience: congenitally blind, low vision and late blind, blindfolded sighted, and sighted visual controls. We found that with the EyeCane participants made fewer errors in the maze, had fewer collisions, and completed the maze in less time on the last session compared to the first. By the third session, participants improved to the point where individual trials were no longer significantly different from the initial performance of the sighted visual group in terms of errors, time and collisions.

    Other authors
    See publication
  • Functional connectivity of visual cortex in the blind follows retinotopic organization principles

    Brain : a journal of neurology

    We used functional connectivity magnetic resonance imaging based on intrinsic blood oxygen level-dependent fluctuations to investigate whether significant traces of topographical mapping of the visual scene in the form of retinotopic organization, could be found in congenitally blind adults. A group of 11 fully and congenitally blind subjects and 18 sighted controls were studied. The blind demonstrated an intact functional connectivity network structural organization of the three main retinotopic mapping axes: eccentricity (centre-periphery), laterality (left-right), and elevation (upper-lower) throughout the retinotopic cortex extending to high-level ventral and dorsal streams, including characteristic eccentricity biases in face- and house-selective areas. Functional connectivity-based topographic organization in the visual cortex was indistinguishable from the normally sighted retinotopic functional connectivity structure as indicated by clustering analysis, and was found even in participants who did not have a typical retinal development in utero (microphthalmics). While the internal structural organization of the visual cortex was strikingly similar, the blind exhibited profound differences in functional connectivity to other (non-visual) brain regions as compared to the sighted, which were specific to portions of V1. Central V1 was more connected to language areas but peripheral V1 to spatial attention and control networks. These findings suggest that current accounts of critical periods and experience-dependent development should be revisited even for primary sensory areas, in that the connectivity basis for visual cortex large-scale topographical organization can develop without any visual experience and be retained through life-long experience-dependent plasticity.

    Other authors
    See publication
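
    The study above rests on correlating spontaneous BOLD fluctuations between regions. As a rough illustration only, not the authors' pipeline, the sketch below computes seed-based connectivity as a Pearson correlation between a seed time course and every voxel; the array shapes, the synthetic data, and the function name seed_connectivity are assumptions introduced here.

        # Generic sketch of seed-based functional connectivity from resting-state
        # BOLD time courses; shapes and synthetic data are illustrative assumptions,
        # not the authors' actual analysis.
        import numpy as np

        def seed_connectivity(voxel_timecourses, seed_timecourse):
            """Pearson correlation between one seed time course and every voxel.

            voxel_timecourses: (n_voxels, n_timepoints) BOLD signals.
            seed_timecourse:   (n_timepoints,) mean signal of a seed region.
            Returns an (n_voxels,) vector of correlation coefficients.
            """
            v = voxel_timecourses - voxel_timecourses.mean(axis=1, keepdims=True)
            s = seed_timecourse - seed_timecourse.mean()
            return (v @ s) / (np.linalg.norm(v, axis=1) * np.linalg.norm(s))

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            data = rng.standard_normal((1000, 200))   # 1000 voxels, 200 timepoints
            seed = data[:50].mean(axis=0)             # pretend the first 50 voxels form the seed
            print(seed_connectivity(data, seed)[:5])
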
  • Non-visual virtual interaction: Can Sensory Substitution generically increase the accessibility of Graphical virtual reality to the blind?

    Virtual and Augmented Assistive Technology (VAAT), 3rd IEEE VR International Workshop 2015

    Most of the content of Graphical virtual environments is currently visual, severely limiting their accessibility to the blind population. While several steps improving this situation have been made in recent years, they are mainly environment-specific and there is still much more to be done. This is especially unfortunate, as VR holds great potential for the blind, e.g., for safe orientation and learning. We suggest in this position paper that Visual-to-audio Sensory Substitution Devices (SSDs) can potentially increase their accessibility in a generic fashion by sonifying the on-screen content regardless of the specific environment, while allowing the user to capitalize on their experience from other uses of the device, such as in the real world. We will demonstrate the potential of this approach using several recent examples from the literature and from our own work.

    Other authors
    See publication
  • Augmented non-visual distance sensing with the EyeCane

    AH '15: Proceedings of the 6th Augmented Human International Conference

    How can we sense distant objects without vision?

    Vision is the main distal sense used by humans, so its absence impairs distance and spatial perception, whether for sighted individuals in the dark or for people with visual impairments.

    We suggest augmenting distance perception via other senses, using auditory or haptic cues, and have created the EyeCane for this purpose. The EyeCane is a minimal Sensory Substitution Device that enables users to perform tasks such as distance estimation, and obstacle detection and avoidance up to 5 m away, non-visually.

    In the demonstration, visitors will receive a brief training with the device, and then use it to detect objects and estimate distances while blindfolded.

    Other authors
    See publication
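
    The EyeCane described in the entry above converts distance readings into auditory and vibrotactile cues. The device's actual mapping is not specified in these summaries, so the sketch below is only a hypothetical illustration of the idea: an inverse-distance rule over the roughly 5 m range mentioned above, with the pitch range, the vibration scaling, and the function distance_to_feedback all invented for clarity.

        # Hypothetical illustration only: the EyeCane's real distance-to-cue mapping
        # is not described here, so the ranges and inverse-distance rule are assumptions.
        MAX_RANGE_M = 5.0  # the entry above mentions sensing up to ~5 m

        def distance_to_feedback(distance_m):
            """Map a measured distance to an auditory pitch and a vibration level.

            Assumed convention: closer obstacles give a higher pitch and stronger vibration.
            """
            d = max(0.0, min(distance_m, MAX_RANGE_M))
            proximity = 1.0 - d / MAX_RANGE_M        # 1.0 = touching, 0.0 = out of range
            pitch_hz = 200.0 + 800.0 * proximity     # invented 200-1000 Hz sweep
            vibration = proximity                    # normalised 0-1 motor intensity
            return {"pitch_hz": pitch_hz, "vibration": vibration}

        if __name__ == "__main__":
            for d in (0.5, 2.0, 4.5):
                print(d, distance_to_feedback(d))
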
  • New Whole-Body Sensory-Motor Gradients Revealed Using Phase-Locked Analysis and Verified Using Multivoxel Pattern Analysis and Functional Connectivity

    The Journal of Neuroscience

    Topographic organization is one of the main principles of organization in the human brain. Specifically, whole-brain topographic mapping using spectral analysis is responsible for one of the greatest advances in vision research. Thus, it is intriguing that although topography is a key feature also in the motor system, whole-body somatosensory-motor mapping using spectral analysis has not been conducted in humans outside M1/SMA. Here, using this method, we were able to map a homunculus in the globus pallidus, a key target area for deep brain stimulation, which has not been mapped noninvasively or in healthy subjects. The analysis clarifies contradictory and partial results regarding somatotopy in the caudal-cingulate zone and rostral-cingulate zone in the medial wall and in the putamen. Most of the results were confirmed at the single-subject level and were found to be compatible with results from animal studies. Using multivoxel pattern analysis, we could predict movements of individual body parts in these homunculi, thus confirming that they contain somatotopic information. Using functional connectivity, we demonstrate interhemispheric functional somatotopic connectivity of these homunculi, such that the somatotopy in one hemisphere could have been found given the connectivity pattern of the corresponding regions of interest in the other hemisphere. When inspecting the somatotopic and nonsomatotopic connectivity patterns, a similarity index indicated that the pattern of connected and nonconnected regions of interest across different homunculi is similar for different body parts and hemispheres. The results show that topographical gradients are even more widespread than previously assumed in the somatosensory-motor system. Spectral analysis can thus potentially serve as a gold standard for defining somatosensory-motor system areas for basic research and clinical applications.

    Other authors
    See publication
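
    One step in the study above uses multivoxel pattern analysis (MVPA) to test whether the newly mapped homunculi carry somatotopic information, by predicting the moved body part from voxel activity patterns. The sketch below illustrates that general idea on synthetic data; the linear classifier, the trial counts, and the body-part labels are assumptions, not the authors' actual analysis.

        # Hedged sketch of an MVPA decoding step on synthetic data: above-chance
        # accuracy when predicting the moved body part from voxel patterns would
        # indicate somatotopic information. Classifier choice and data are assumptions.
        import numpy as np
        from sklearn.svm import LinearSVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_trials, n_voxels = 40, 120
        body_parts = ["hand", "foot", "lips", "trunk"]

        # Each body part gets its own mean voxel pattern plus trial-by-trial noise.
        X = np.vstack([rng.standard_normal(n_voxels)
                       + rng.standard_normal((n_trials, n_voxels))
                       for _ in body_parts])
        y = np.repeat(body_parts, n_trials)

        scores = cross_val_score(LinearSVC(dual=False), X, y, cv=5)
        print("mean decoding accuracy:", scores.mean())   # chance level is 0.25 here
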
  • A number-form area in the blind

    Nature Communications

    Distinct preference for visual number symbols was recently discovered in the human right inferior temporal gyrus (rITG). It remains unclear how this preference emerges, what is the contribution of shape biases to its formation and whether visual processing underlies it. Here we use congenital blindness as a model for brain development without visual experience. During fMRI, we present blind subjects with shapes encoded using a novel visual-to-music sensory-substitution device (The EyeMusic). Greater activation is observed in the rITG when subjects process symbols as numbers compared with control tasks on the same symbols. Using resting-state fMRI in the blind and sighted, we further show that the areas with preference for numerals and letters exhibit distinct patterns of functional connectivity with quantity and language-processing areas, respectively. Our findings suggest that specificity in the ventral ‘visual’ stream can emerge independently of sensory modality and visual experience, under the influence of distinct connectivity patterns.

    Other authors
    See publication
  • Visual cortex extrastriate body-selective area activation in congenitally blind people “seeing” by using sounds

    Current Biology

    Vision is by far the most prevalent sense for experiencing others’ body shapes, postures, actions, and intentions, and its congenital absence may dramatically hamper body-shape representation in the brain. We investigated whether the absence of visual experience and limited exposure to others’ body shapes could still lead to body-shape selectivity. We taught congenitally fully-blind adults to perceive full-body shapes conveyed through a sensory-substitution algorithm topographically translating images into soundscapes [1]. Despite the limited experience of the congenitally blind with external body shapes (via touch of close-by bodies and for ∼10 hr via soundscapes), once the blind could retrieve body shapes via soundscapes, they robustly activated the visual cortex, specifically the extrastriate body area (EBA; [2]). Furthermore, body selectivity versus textures, objects, and faces in both the blind and sighted control groups was not found in the temporal (auditory) or parietal (somatosensory) cortex but only in the visual EBA. Finally, resting-state data showed that the blind EBA is functionally connected to the temporal cortex temporal-parietal junction/superior temporal sulcus Theory-of-Mind areas [3]. Thus, the EBA preference is present without visual experience and with little exposure to external body-shape information, supporting the view that the brain has a sensory-independent, task-selective supramodal organization rather than a sensory-specific organization.

    Other authors
    See publication
  • Reading with Sounds: Sensory Substitution Selectively Activates the Visual Word Form Area in the Blind

    Neuron

    Using a visual-to-auditory sensory-substitution algorithm, congenitally fully blind adults were taught to read and recognize complex images using “soundscapes”—sounds topographically representing images. fMRI was used to examine key questions regarding the visual word form area (VWFA): its selectivity for letters over other visual categories without visual experience, its feature tolerance for reading in a novel sensory modality, and its plasticity for scripts learned in adulthood. The blind activated the VWFA specifically and selectively during the processing of letter soundscapes relative to both textures and visually complex object categories and relative to mental imagery and semantic-content controls. Further, VWFA recruitment for reading soundscapes emerged after 2 hr of training in a blind adult on a novel script. Therefore, the VWFA shows category selectivity regardless of input sensory modality, visual experience, and long-term familiarity or expertise with the script. The VWFA may perform a flexible task-specific rather than sensory-specific computation, possibly linking letter shapes to phonology.

    Other authors
    See publication
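
    Several of the studies above rely on visual-to-auditory sensory substitution, in which images are translated topographically into "soundscapes". As a minimal sketch of that general principle, and not the published EyeMusic parameters, the code below scans a grayscale image column by column, mapping row position to pitch and pixel brightness to loudness; the frequency range, the scan timing, and the function image_to_soundscape are assumptions.

        # Minimal sketch of a topographic image-to-soundscape mapping; scan direction,
        # frequency range and timing are assumptions, not the published device settings.
        import numpy as np

        SAMPLE_RATE = 22050
        COLUMN_DURATION_S = 0.05                 # assumed time spent on each image column
        FREQS_HZ = np.geomspace(200, 4000, 64)   # one carrier frequency per image row

        def image_to_soundscape(image):
            """Convert a grayscale image (64 rows x N cols, values in [0, 1]) to audio.

            Columns are scanned left to right; each row drives a fixed sinusoid whose
            amplitude follows that pixel's brightness (top row = highest pitch).
            """
            rows, cols = image.shape
            assert rows == len(FREQS_HZ), "expects one image row per carrier frequency"
            t = np.arange(int(SAMPLE_RATE * COLUMN_DURATION_S)) / SAMPLE_RATE
            carriers = np.sin(2 * np.pi * FREQS_HZ[::-1, None] * t)
            audio = np.concatenate([(image[:, c:c + 1] * carriers).sum(axis=0)
                                    for c in range(cols)])
            return audio / np.max(np.abs(audio))

        if __name__ == "__main__":
            demo = np.zeros((64, 32)); demo[20:44, 10:22] = 1.0   # a bright rectangle
            print(image_to_soundscape(demo).shape)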

Honors & Awards

  • The Israel Science Foundation Research Workshops Grant for the Sensory Substitution, Brain Plasticity and Visual Rehabilitation Workshop

    -

  • The Krill Prize for Excellence in Scientific Research, the Wolf Foundation

    -

  • Young Investigator Award in memory of Professor Yaacov Matzner

    -

  • The Avraham Shalmon ‘Teva’ company founder award

    -

  • The Sieratzki-Korczyn Prize for advances in Neuroscience

    -

  • International Human Frontiers Science Program, Career Development Award

    -

  • Award for outstanding Israeli project proposals in the “EU 7th Framework Program for Research & Technological Development”

    -
