AI with Better Accuracy, Lower Cost, and More Social Justice

People often say, "technology is neutral; the problem is how it gets applied." But Conway's law teaches us otherwise. Conway's law is one of only 10 computer science laws listed on Wikipedia. In 1967, Melvin Conway wrote, "organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations."

For example, compare an aerial photograph of the buildings on the old Digital Equipment Corporation (DEC) campus in Boston with a photograph of the motherboard of the DEC Alpha computer designed there. The chips on the motherboard could be labeled with the names of the buildings on the campus, and the circuit traces correspond to the walkways and cafeterias that connected the engineers. In short, the engineers who talked to each other and knew each other made chips that also communicated with each other.

Similarly, our current deep learning architecture is a recapitulation of our current societal conditions. Ilya Sutskever's annual compensation at OpenAI is public: $7,300 USD per day.[1] Data labelers in Kenya who label data to help train the models that Ilya designs are paid $9 per day.[2] There is also a "middle class" of data scientists with a median base starting salary of $95,000.[3] Leila Janah, founder of Samasource, one of the companies employing data labelers in Kenya, said:

"Yes, it's cost effective," Janah said. "But one thing that's critical in our line of work is to not pay wages that would distort local labour markets. If we were to pay people substantially more than that, we would throw everything off. That would have a potentially negative impact on the cost of housing, the cost of food in the communities in which our workers thrive."

AI is shaping itself to fit our existing economic structures and conditions.

Some may argue that deep learning inherently requires a few brilliant experts to design the models and a very large number of far less skilled data labelers to train them, and that this is just how the technology works. But try this experiment at your company: tell your engineers that if they reduce your data labeling cost without increasing the error rate, they can keep 10% of the savings as a take-home bonus. Your data labeling costs will drop by a huge multiple.
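As a minimal sketch of that incentive rule (the dollar figures and function names below are hypothetical, purely for illustration):

    def labeling_bonus(old_cost, new_cost, old_error, new_error, share=0.10):
        """Pay engineers a share of the data-labeling savings, but only
        if the model's error rate did not get worse."""
        if new_error > old_error:
            return 0.0                      # no bonus if accuracy regressed
        savings = max(0.0, old_cost - new_cost)
        return share * savings

    # Example: cutting annual labeling spend from $500k to $150k at the
    # same error rate pays the team a $35,000 bonus.
    print(labeling_bonus(500_000, 150_000, old_error=0.08, new_error=0.08))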

The reason is that these engineers have two choices. They can either (A) build a simple black-box model with a very large number of parameters, go home early, and leave it to the data labelers to train the extra parameters, or (B) work harder to tighten their model, reduce the number of parameters to train, and therefore reduce the data labeling needs. Since every parameter is an additional axis in the search space, reducing the number of parameters even a little has a massive positive impact on the data labeling needs.
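A toy experiment makes this concrete. The sketch below is my own illustration, not a real system: it fits polynomial models of increasing parameter count to a noisy signal and measures how many "labels" each model needs before it reaches the same test error. The sinusoid, noise level, and target error are arbitrary choices.

    import numpy as np
    from numpy.polynomial import Polynomial

    rng = np.random.default_rng(0)

    def truth(x):
        return np.sin(2 * np.pi * x)                 # the signal to be learned

    def test_error(degree, n_labels):
        """Test MSE of a degree-`degree` polynomial fit to n_labels noisy labels."""
        x = rng.uniform(0, 1, n_labels)
        y = truth(x) + rng.normal(0, 0.1, n_labels)  # noisy human "labels"
        model = Polynomial.fit(x, y, degree)
        x_test = np.linspace(0, 1, 200)
        return np.mean((model(x_test) - truth(x_test)) ** 2)

    def labels_needed(degree, target=0.01, max_n=50_000):
        """Smallest label budget at which the typical fit reaches the target error."""
        n = degree + 2
        while n <= max_n:
            errs = [test_error(degree, n) for _ in range(25)]
            if np.median(errs) <= target:            # median damps the wild fits
                return n
            n = int(n * 1.5) + 1                     # grow the label budget geometrically
        return None

    for degree in (3, 7, 15, 30):
        print(degree + 1, "parameters ->", labels_needed(degree), "labels")

The expected pattern is a label budget that climbs steeply with the parameter count, which is the point: the engineers, not the labelers, control that dial.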

If you incentivize your engineers to increase accuracy at the same time, they will do it by creating interfaces to your system through which a rising middle class of professionals can work with the AI, guiding it at a more conceptual level.

For example, instead of trying to make a completely autonomous self-driving truck using data labeling, they will create a long-distance wireless connection so that middle-class workers can go to work in their local communities (and not drive away from their families for weeks on end), make a better salary in a less difficult and dangerous job than long-haul trucking, and guide a group of "self-driving" trucks remotely, intervening only when a robot truck needs help from a human expert.[4]

The same is true at my company, Gamalon, where we create AI Assistants for professionals who manage websites and other digital communications with their customers. Rather than trying to replace humans with "chatbots," we have architected our system as a prosthetic for marketing professionals. It learns like a new employee: first it reads the website and other company documents, then it starts conversing with customers. When it runs into something it doesn't understand, it asks the marketer for help, and it learns after being told the answer only once. This leverages and amplifies the reach of the professional it works for. When the company offers new products or services, the system reads about them and absorbs the new information so that it can discuss them with customers. At any time, the marketer can examine what the AI knows and view a summary of the conversations it is having with customers. We call it AI you can trust. It is designed to deliver value to the end customer by leveraging the expertise of its employer, so that over time it can handle an increasing number of conversations and dialogues by itself.
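A minimal sketch of that escalate-once, remember-forever loop might look like the following. All names, thresholds, and canned replies here are illustrative stubs of my own, not Gamalon's actual API.

    CONFIDENCE_THRESHOLD = 0.8
    learned_answers = {}                    # questions the marketer has resolved once

    def model_answer(question):
        """Stand-in for model inference; returns (answer, confidence in [0, 1])."""
        return "Our standard plan is $20/month.", 0.4    # dummy low-confidence reply

    def ask_marketer(question):
        """Stand-in for routing the question to the marketer's review queue."""
        return "Yes, every plan includes a 30-day free trial."

    def handle_customer_question(question):
        # 1. Reuse anything the marketer has already taught the assistant.
        if question in learned_answers:
            return learned_answers[question]
        # 2. Try to answer autonomously.
        answer, confidence = model_answer(question)
        if confidence >= CONFIDENCE_THRESHOLD:
            return answer
        # 3. Escalate to the human expert once, then remember the answer
        #    so this question never needs the expert again.
        answer = ask_marketer(question)
        learned_answers[question] = answer
        return answer

    print(handle_customer_question("Do you offer a free trial?"))   # escalates once
    print(handle_customer_question("Do you offer a free trial?"))   # answered from memory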

To summarize, large tech companies chose today's deep learning technology over the alternatives not because it is the best, but because it recapitulates and perpetuates the income inequality of our society and the way our companies are organized: a few superstar engineers in Silicon Valley and a large number of very low-paid workers far from there.

There is another way to architect this human-computer system, one that delivers more accuracy, lower cost, and a rising middle class of professionals with AI Assistants. All we need to do is incentivize our engineering organizations to solve for the right objective function. It turns out that AI ethics is not about the objective functions for our AIs; it is about the objective functions for our engineers.

Citations:

  1. https://www.theregister.co.uk/2018/04/21/ai_roundup/
  2. https://www.bbc.com/news/technology-46055595
  3. https://datasciencedegree.wisconsin.edu/data-science/data-scientist-salary/
  4. https://www.starsky.io/about


