Pursuing A Future For STEM Equality

05.09.2012

How one blind researcher is using his STEM expertise to improve access to visual data.

Editor’s Note: This is a guest post by Sina Bahram, a Ph.D. student in computer science at NC State who was honored as a “Champion of Change” by the White House May 7 for his efforts to make STEM accessible to people with disabilities. This post first ran on the White House blog.

I would like to thank the administration, those who serve in public office, and everyone else involved with the Champions of Change program for two reasons. First, thank you so much for recognizing and bringing attention to the projects, people, and problems that are a part of science, technology, engineering, and math (STEM). I believe that this recognition can go a long way in effecting an important and lasting positive change for STEM issues faced by individuals with disabilities. Second, I would like to simply express what a singularly humbling and profound honor it is for me to accept this recognition as a White House Champion of Change.

I’ll begin by stating that I happen to be blind. I have some light perception and some shape recognition, but for all intents and purposes, I am blind. When I was presented with maps in geography, biology, and civics classes at school, I would have to either cue off of verbal descriptions, intuit the relationships among important locations, or rely on expensive, much lower-resolution tactile copies of the graphics my classmates used. When I was given a flowchart to analyze in biology, economics, or computer science courses, I either asked someone to translate it into Braille or read its contents to me.

I have been fortunate to have some amazing teachers, mentors, and family members who went above and beyond the call of duty. I had a chemistry professor in college who, every day after class, would bring rocks from her garden and handmade wax relief molds of electrons’ orbits to illustrate what she had spoken about in class; my father spent long hours when I was in high school explaining various aspects of calculus and physics to me because they were so often explained pictorially in school.

For the most part, however, students with visual impairment rely upon expensive, proprietary, and over-simplified models of graphical information. I wanted to change this. I wanted to have the ability to independently explore any location on our planet simply by asking a computer. I wanted to read about a place in a novel, hear about it on the news, or discuss it with friends, and be able to appreciate the local geography of that place. I wanted to analyze flowcharts without having to rely upon a sighted colleague who might not have time for another half hour or a well-intentioned friend who might have no contextual understanding of the information being presented.

When I began doing research for my doctoral degree in computer science, I realized that I had an opportunity to effect the kinds of changes I had wished for when struggling to learn STEM topics as a younger student. Furthermore, I had talked with hundreds of individuals across the country, and even the world, who shared the same frustration I felt about true systemic access to highly graphical information. From my desire to change the status quo, along with an amazing research advisor, was born a system I call TIKISI, which stands for Touch It, Key It, Speak It. It’s a framework, or approach, to accessing graphical information in an eyes-free fashion.

I first decided to tackle the most famous of map systems, Google Maps, and after some time, I had a working prototype. I could drop my finger anywhere on a touch screen tablet computer and have it announce where on the Earth I was touching. I could zoom in and out, and I could even talk with the map, asking it to “take me to the capital of the United States,” and it would center Washington, D.C. on my screen. From there, I was able to understand what streets I would cross if I walked south from Pennsylvania Avenue, and what cities I could visit nearby. Furthermore, all you need to have all that at your fingertips is a smart phone or tablet, a far cry from the expensive, proprietary technologies that often prevent low-income and under-privileged individuals from accessing such educational tools.
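The core interaction described above, touching a point on a rendered map and hearing where on Earth it falls, can be sketched in a few lines. This is not the TIKISI implementation, just a hypothetical illustration assuming the map uses the standard Web Mercator tile projection; the function names and the announcement format are invented for this example.

```python
import math

TILE_SIZE = 256  # standard Web Mercator tile width/height in pixels


def touch_to_latlon(px, py, zoom):
    """Convert a touch point in world-pixel space (Web Mercator) to lat/lon degrees."""
    world = TILE_SIZE * (2 ** zoom)  # total pixel width of the world at this zoom
    lon = px / world * 360.0 - 180.0
    # Inverse Mercator for latitude
    lat = math.degrees(math.atan(math.sinh(math.pi * (1.0 - 2.0 * py / world))))
    return lat, lon


def announce(px, py, zoom):
    """Build the text a screen reader or TTS engine would speak for a touch point."""
    lat, lon = touch_to_latlon(px, py, zoom)
    ns = "N" if lat >= 0 else "S"
    ew = "E" if lon >= 0 else "W"
    return f"You are touching {abs(lat):.2f} degrees {ns}, {abs(lon):.2f} degrees {ew}"
```

In a real system the coordinates would then be reverse-geocoded into a place name before being spoken, but even this bare version shows why the approach needs nothing beyond a commodity touch screen and a speech engine.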

In 2011, I was invited to help mentor blind and low-vision children at a programming camp, and I showed them a prototype of the new technology. It was fascinating to see them interacting with the map and being able to understand what was being displayed on the screen, but what really spoke to me was something I never expected: the children were interacting with each other. Because it was all done by voice, it didn’t matter if one of the children was sitting across a round table from three others or even at an adjacent table several feet away; they could still all hear what was going on, and therefore help guide the young person currently holding the tablet on where to go, or what to explore next.

This excitement about exploring what was previously unavailable to them was quite infectious. All of a sudden I wasn’t pursuing this research for myself to solve problems in my own studies and life, but so that one of those kids could explore the solar system with the same excitement as one of his similarly space-obsessed peers, so that perhaps their teachers would not have to sacrifice the long hours that mine sacrificed for me.

After I was happy with maps, I began working with several fellow researchers in the department at NC State to tackle something much more difficult. Could we use computers to scan, recognize, and make flowcharts fully interactive? Could we take a visual diagram such as a flowchart and allow a blind user to touch it, explore it, speak to it, and in turn have it speak to them? This would involve machine learning, artificial intelligence, computer vision, human computer interaction, and a whole host of other technical disciplines that all too often don’t collaborate, but fortunately the right group was formed.

We’ve made a great deal of progress on this project, and as I write this we are putting the finishing touches on a prototype that does exactly that. The blind user can navigate a flowchart, touch anywhere on the screen, ask the computer to explain particular relationships on the diagram, and more. I can’t wait to find another group of users who will immediately show us the hidden and non-obvious benefits of opening up another visual domain to eyes-free exploration and access.
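Once a flowchart has been recognized, the navigation step described above reduces to walking a labeled graph and rendering each node’s relationships as speech-friendly text. The sketch below is a hypothetical illustration of that idea, not the project’s actual data model; the example flowchart and the `describe` helper are invented for this post.

```python
# A recognized flowchart as a graph: node -> list of (edge label, target node).
# An empty edge label means an unlabeled arrow.
flowchart = {
    "Start": [("", "Read input")],
    "Read input": [("", "Valid?")],
    "Valid?": [("yes", "Process"), ("no", "Show error")],
    "Show error": [("", "Read input")],
    "Process": [("", "End")],
    "End": [],
}


def describe(node, graph):
    """Render a node's outgoing relationships as a sentence for a speech engine."""
    edges = graph[node]
    if not edges:
        return f"{node}: no outgoing steps; this is an end point."
    parts = [
        f"on {label}, go to {target}" if label else f"go to {target}"
        for label, target in edges
    ]
    return f"{node}: " + "; ".join(parts) + "."
```

Touching a node on the screen would call `describe` for that node; asking about a relationship would follow the matching edge, so the user can traverse the whole diagram by touch and voice alone.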

I fervently believe that technology possesses an untapped, awesome potential to make a difference in people’s lives. It is my dream that such technology be utilized to solve the real-world issues facing all people, not just users with disabilities.

Thank you to the teachers, mentors, and role models in my life. From family to professors, friends to colleagues, I know that my achievements are only possible because of their love, support, effort, and sacrifice. Thank you again to the Champions of Change program for all you’ve done, are doing, and will continue to do to help make this dream a reality.