While at the Pediatric Cortical Visual Impairment Society’s conference in Omaha, Nebraska, covering the current research in the field, I spoke with a parent whose child has been diagnosed with CVI. She told me how her child got a fishing pole this summer and caught his first fish.
He also had the opportunity to try a ropes course designed for those with difficulty walking or seeing. She moved from storytelling to utmost pride as she reflected on how these opportunities had opened her child’s perception of what is possible in the world. It was Brave New World sort of talk.
We also spoke about what a unique sensory experience it would be to feel a fish biting and pulling on a fishing line, or to feel himself falling through the air guided by a rope. One can only imagine how such a child’s brain, if analyzed under MRI by one of the very many doctors at this conference, would light up like a Christmas tree with unique patterns and communication pathways to interpret what in the world just happened.
“You know what is funny?” she said.
“We were walking at an event that was in a cornfield and we passed a porta-potty. My son immediately says, ‘What is a vending machine doing in a cornfield?’ Isn’t it amazing what he can see and how he sees it!?”
It really is. As she said this, I thought to myself that if I were not paying attention to my surroundings and quickly passed a porta-potty, I too could easily mistake that visual outline for a vending machine. But with 20/20 vision and healthy visual pathways, my life’s experience does not associate a cornfield with vending machines; my brain would probably never have conjured the association.
But what could make this association? What is not trained, as my brain is, to rule out certain possibilities?

Text-to-image neural networks can do just this.
Here is a potential example of how this parent’s child saw the cornfield via an image rendered by a neural network given the following instructions: “a blurred image of a vending machine in a corn field as seen by someone with poor vision.”
Over time, once the child is taught to recognize the context and associations of a porta-potty, this is potentially what they could see via an image rendered by a neural network given the following instructions: “a foggy image of a porta-potty in a corn field as seen by someone with poor vision.”
Then, hopefully with even more training and life experiences with a porta-potty, the sharpness and clarity could improve to something like this via an image rendered by a neural network given the following instructions: “a realistic photo of a blue porta-potty in a cornfield.”
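The three prompt stages above could be wired into any modern text-to-image model. Below is a minimal sketch in Python: the prompt strings come straight from the article, while the pipeline plumbing is an assumption on my part, shaped like the Hugging Face `diffusers` API (a callable that returns an object with an `.images` list). The commented-out usage shows how one might plug in an actual Stable Diffusion checkpoint.

```python
# The three prompts from the progression above, labeled by stage.
STAGES = [
    ("initial perception",
     "a blurred image of a vending machine in a corn field "
     "as seen by someone with poor vision"),
    ("learned context",
     "a foggy image of a porta-potty in a corn field "
     "as seen by someone with poor vision"),
    ("practiced recognition",
     "a realistic photo of a blue porta-potty in a cornfield"),
]

def generate_stage_images(pipeline):
    """Render one image per stage.

    `pipeline` is any callable mapping a prompt string to an object
    with an `.images` list (the shape of a diffusers text-to-image
    pipeline). Returns a dict of stage label -> first generated image.
    """
    return {label: pipeline(prompt).images[0] for label, prompt in STAGES}

# Hypothetical usage, assuming `diffusers` is installed and a
# Stable Diffusion checkpoint is available locally or via download:
#
# from diffusers import StableDiffusionPipeline
# pipe = StableDiffusionPipeline.from_pretrained(
#     "runwayml/stable-diffusion-v1-5")
# for label, img in generate_stage_images(pipe).items():
#     img.save(f"{label.replace(' ', '_')}.png")
```

The stage labels and the helper name are illustrative, not part of any real tool; the point is only that the same scene, re-prompted as understanding grows, yields a sequence of progressively clearer renderings.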
The potential here is having a tool that could, for the first time, create images for parents, caregivers, and educators to better visualize the world that a child with cortical visual impairment lives in. Each child is unique, and each experiences their own representation of the world, one that is often beyond our grasp.
The better we understand that worldview, the more accurately we can work with children with CVI to build bridges to the generally accepted societal world. A better understanding of the world as general society experiences it helps make everyday independent living safer for the child.