My Interest In AI

Yesterday a friend and I sat down to have brunch at a local Seattle cafe. Not 15 minutes later, Josh Lovejoy walked into the room.

Josh is someone whose work I follow and deeply respect. Currently he works on AI Ethics and Society at Microsoft, and before that he worked with Google on People and Artificial Intelligence Research, also called PAIR.

My friend encouraged me to go introduce myself: it could be a professional opportunity, if you’re interested. At 27 years old, and this not being my first fangirl moment, I chose not to introduce myself. I was too excited and knew I would fuck up the opportunity by being socially awkward. He was also with family and friends, and I didn’t want to bring up work at a fun family brunch. Respect for privacy and family time.

After we left the cafe I thought: what is it about his work that I’m interested in? If I were to introduce myself, what would I have to say that’s thoughtful and could add to a conversation?

Probably best to start with what I think Artificial Intelligence is and why I’m interested.

Artificial Intelligence is essentially what we teach. My interest is in understanding the mechanisms underlying thought and intelligent behavior in the machines people design: how, as designers, our choices get embodied and reflected back to us by these machines, starting with our own thought processes.

What’s the system or thought process beneath it all? What’s the data input, how are we collecting it, and how are we choosing what to collect? What are the standards and ethical checkpoints?

Is performance going to be measured by direct deliverables of A, B, and C, asking how a system made a decision? Or should performance be measured by how a design performs under conditions X, Y, or Z? Is it “how did you make this decision?” or “how would you perform under these conditions, in this context?” And how does the difference between those two questions change the data we collect? Which brings me to another point of Josh’s work: data collection.

Data collected by technology companies isn’t who people are; it’s what those companies chose to observe, interpret, and remember about us.

Which is perhaps where we as designers step in. If our obligation is to care for the future, part of our job is to understand research and innovation in the present, including data privacy, ethics, and underlying business models like ad targeting, which rest on the assumption that it’s possible to know people better than they know themselves.

“Our data is comparable to the shadows we cast as we move through the world. They are not *us*, yet they are made possible by us. They’re visible for all to see, leaving a fleeting impression in visual space.

Our shadows, like our data, change shape depending on time of day, environment, and our position and orientation.” – Josh Lovejoy

Josh’s work initially caught my interest at an interesting inflection point. At the time I was taking an art class on the basics of drawing. Much of the class focused on the bigger concepts of drawing, like shapes, grids, and item placement, and letting the brain fill in the rest. At the same time I had come across a podcast featuring Kenneth Stanley on Neuroevolution and Evolving Novel Neural Network Architectures.

The point of novelty search is that you focus on exploration and novelty over objective gradients, or expected outcomes: not letting what you measure become the target.

Instead of focusing on optimization and expecting probabilistic systems to produce fully formed answers to the question “how did you make this decision?”, you focus on what you weren’t after. Chasing the objective directly can mislead the search into dead ends, which is what Stanley calls deception.

“The best way to get something is not trying to get it.” Focusing on what is new rather than what is expected fills the gaps in our knowledge with things we didn’t expect to find.
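Roughly, the difference looks like the toy sketch below. The number-line “behavior space,” the names, and the selection scheme are my own illustration, not code from Stanley’s work: objective-driven selection rewards closeness to a goal, while novelty-driven selection rewards distance from what the search has already seen.

```python
import random

# Toy sketch of novelty-driven vs. objective-driven selection.
# A "behavior" here is just a point on a number line; in real
# neuroevolution it would be richer, e.g. a robot's final position.
# All names and numbers are illustrative assumptions.

GOAL = 100.0  # the objective an objective-driven search would climb toward


def objective_score(behavior):
    # Objective-driven: reward closeness to the goal.
    return -abs(GOAL - behavior)


def novelty_score(behavior, archive, k=5):
    # Novelty-driven: reward average distance to the k nearest behaviors
    # already seen, ignoring the goal entirely.
    if not archive:
        return float("inf")
    nearest = sorted(abs(behavior - b) for b in archive)[:k]
    return sum(nearest) / len(nearest)


def evolve(select_by, generations=50, pop_size=20):
    population = [random.uniform(-10, 10) for _ in range(pop_size)]
    archive = []  # novelty search keeps a memory of where it has been
    for _ in range(generations):
        if select_by == "novelty":
            scored = [(novelty_score(b, archive), b) for b in population]
            archive.extend(population)
        else:
            scored = [(objective_score(b), b) for b in population]
        # Keep the better half and mutate it to form the next generation.
        survivors = [b for _, b in sorted(scored, reverse=True)[: pop_size // 2]]
        population = [b + random.gauss(0, 1) for b in survivors for _ in (0, 1)]
    return population


print(max(evolve("objective")), max(evolve("novelty")))
```

The objective-driven run beelines for the goal; the novelty-driven run wanders, and the wandering is exactly where the unexpected answers come from.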

The video above, for example. When asked to imagine a shoe, my first thoughts were a trawl door shoe and a horseshoe. Surprised by my own answers, given the lack of context, my mind then shot to horseshoe crabs and the wheel chocks we use to stop large trucks from moving, which then made me think of the wheel lock clamps police use.

What sparked my interest was the lack of context.

Automated systems rely on historical data we’ve chosen to collect. To design systems that predict future states is to be held accountable for making sure they continue to learn, and, moving forward, for letting people explore. If personalization is about understanding people, we have to give them the space to explore and not let what we collect and measure about them become the target.
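One blunt way to build that space in, sketched below with made-up item names, engagement scores, and a 20% figure: reserve an explicit exploration budget so the system sometimes shows people things its own measurements wouldn’t pick.

```python
import random

# Hedged sketch of "leaving space to explore" in a personalization loop.
# The item names, engagement scores, and 20% exploration budget are
# made-up assumptions, purely for illustration.

measured_engagement = {"news": 0.9, "cooking": 0.4, "poetry": 0.1}
EXPLORE_RATE = 0.2  # the space deliberately reserved for exploration


def recommend():
    if random.random() < EXPLORE_RATE:
        # Exploration: surface something the metrics don't favor, so what
        # we've measured about a person doesn't become the target.
        return random.choice(list(measured_engagement))
    # Exploitation: surface what the historical data says performs best.
    return max(measured_engagement, key=measured_engagement.get)


print([recommend() for _ in range(10)])
```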

Which lastly brings me to data ownership and ethics. Or privacy and regulation.

Because the truth is, how do you regulate something if the knowledge and processes are implemented by the very organizations you would regulate?

As designers, if we’re helping to create our own regulations, how do we coat-check our biases, and our potential conflicts of interest with the outside stakeholders of publicly traded firms, at the door?

A people’s Bill of Rights for personalization

Neuroevolution: Evolving Novel Neural Network Architectures with Kenneth Stanley

The problem with AI ethics