My Interest In AI

Yesterday a friend and I sat down to have brunch at a local Seattle cafe. Not 15 minutes later, Josh Lovejoy walked into the room.

Josh is someone whose work I follow and deeply respect. Currently he works on AI Ethics and Society at Microsoft, and before that he worked with Google on PAIR.

My friend encouraged me to go introduce myself. It could be a professional opportunity, if you’re interested. At 27 years old, and this not being my first fangirl moment, I chose not to introduce myself. I was too excited and knew I would fuck up the opportunity by being socially awkward. He was also with family and friends, and I didn’t want to bring up work at a fun family brunch. Respect for privacy and family time.

After we left the cafe I thought: what is it about his work that interests me? If I were to introduce myself, what would I have to say that’s thoughtful and could add to a conversation?

Probably best to start with what I think AI is and why I’m interested.

Artificial intelligence is essentially what we teach. Intelligence meaning the ability to behave appropriately to context: to transform information into an awareness of the current situation.

Underlying this intelligence is machine learning: a way we help computers recognize patterns in data.
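
To show myself how thin that “pattern recognition” can be, here’s a toy sketch. It’s my own illustration, not anyone’s real system: the features, numbers, and season labels are all made up, and the “learning” is nothing more than comparing a new observation to examples someone already chose to record and label.

```python
# A toy illustration: the data, the features (daylight hours, average
# temp in F), and the season labels are all choices I made up here.
examples = [
    ((8.5, 42.0), "winter"),
    ((9.0, 45.0), "winter"),
    ((15.5, 68.0), "summer"),
    ((16.0, 72.0), "summer"),
]

def predict(observation):
    """Guess a label by finding the most similar labeled example."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    closest = min(examples, key=lambda ex: distance(ex[0], observation))
    return closest[1]

print(predict((15.0, 70.0)))  # prints "summer"
```

Even in something this small, a person decided which features to record and what the labels mean, which is exactly where the choices I’m curious about sneak in.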

My interest is in understanding how, as designers, our choices get embodied in and reflected back to us by these machines, starting with the thought processes we bring. How are we intentionally and unintentionally shaping AI/ML through labeling, the data we collect, and the metrics we define?

Will performance be measured by direct deliverables A, B, and C, asking how a system made a decision? Or should performance be measured by how a design behaves under conditions X, Y, or Z? Is the question “how did you make this decision?” or “how would you perform under these conditions, in this context?” And how does the difference between those two questions change the earlier questions about labeling, the data we collect, and the metrics we define? How does scale change use and maintenance?

Which brings me to another part of Josh’s work: data collection. The data collected by technology companies isn’t who people are; it’s what those companies chose to observe, interpret, and remember about us. Often it’s an inference drawn from that data, one people are never given access to.

“Our data is comparable to the shadows we cast as we move through the world. They are not *us*, yet they are made possible by us. They’re visible for all to see, leaving a fleeting impression in visual space. Our shadows, like our data, change shape depending on time of day, environment, and our position and orientation.” – Josh Lovejoy

 

Josh’s work initially caught my interest at an interesting inflection point. At the time I was taking an art class on the basics of drawing. For much of the class we focused on the bigger concepts of drawing, like shapes, grids, and item placement, letting the brain fill in the rest. At the same time I had come across a podcast featuring Kenneth Stanley on Neuroevolution and Evolving Novel Neural Network Architectures.

The point of novelty search is that you focus on exploration and novelty over objective gradients, or expected outcomes. You don’t let what you measure become the target.

Instead of focusing on optimization, expecting probabilistic systems to produce fully formed answers to the question “how did you make this decision?”, you pay attention to what you weren’t explicitly after, because chasing the objective directly can lead you away from it. Stanley calls that trap deception.

“The best way to get something is not trying to get it.” Focusing on what is new rather than what is expected fills gaps in our knowledge with things we didn’t expect.
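
To make that contrast concrete for myself, here’s a toy sketch of the selection rule as I understand it from Stanley’s novelty search. This is my own Python, not his code or anything from the podcast; the “behaviors” are just random numbers, and a candidate’s score and its behavior are the same number to keep the mechanics visible.

```python
import random

random.seed(0)

def novelty(behavior, others, k=3):
    """Average distance from `behavior` to its k nearest neighbors in `others`."""
    dists = sorted(abs(behavior - b) for b in others)
    nearest = dists[:k]
    return sum(nearest) / len(nearest)

archive = []                                    # behaviors already explored
population = [random.uniform(0, 100) for _ in range(10)]

# Objective-driven selection: chase the score directly.
objective_pick = max(population)

# Novelty-driven selection: ignore the score, reward being unlike
# everything seen so far (the rest of the population plus the archive).
novelty_pick = max(
    population,
    key=lambda b: novelty(b, [x for x in population if x is not b] + archive),
)
archive.append(novelty_pick)

print("objective pick:", round(objective_pick, 1))
print("novelty pick:  ", round(novelty_pick, 1))
```

The only point of the toy is the difference in the selection rule: the objective pick is always the current best score, while the novelty pick can be a low scorer that happens to sit somewhere unexplored.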

The video above, for example. When asked to imagine a shoe, my first thoughts were a trawl door shoe and a horseshoe. Surprised by my own answers, given no context at all, my mind then shot to horseshoe crabs and the wheel chocks we use to stop large trucks from moving, which then made me think of the wheel lock clamps police use.

What sparked my interest was the lack of context. Had the video said “define and draw a human shoe,” I would have said: oh! A dressage boot!

Automated systems rely on the historical data we’ve chosen to collect. To design systems that predict future states is to be accountable for making sure they continue to learn and, going forward, let people explore. If personalization is about understanding people, we have to give them the space to explore and not let what we collect and measure about them become the target.

Which lastly brings me to data ownership and ethics, or privacy and regulation.

Because the truth is, how do you regulate something if the knowledge and processes are implemented by the very organizations you would regulate?

 

A people’s Bill of Rights for personalization

Neuroevolution: Evolving Novel Neural Network Architectures with Kenneth Stanley

The problem with AI ethics

Biases can be subtle. After my friend and I left the cafe we walked through the Ballard farmers market, where at one point I found us both trying to walk to the left of each other.

Stopping for coffee at a noisy Starbucks, I apologized to my friend for the poor acoustics. “I wish people would think about that stuff, the word you said. Acoustics. Because I’m deaf in one ear.”

Cocking my head, I replied that I didn’t know that. She told me it’s her left ear, and that’s why she prefers to walk to the left of people, so she can listen with her right ear and read lips, which also makes me the hardest person to walk with. “Yeah, every time you change sides and I can’t figure out why.”

“I just like to walk to the left and a little behind. I don’t know why. I just do. Always have.”

While thinking of what I would say to Josh if I were to introduce myself, I had the aha moment of why I walk the way I do. Horses. Almost every interaction with a horse happens on the left, at or a little behind the shoulder. From the side you walk on, to the bridle and blanket buckles, to the side you mount from. It’s the left.

Ah. I get why the bias thing could be a problem.