I’m obsessed with the transition between labor and technology. For a long time, I saw it as a simple case of cause and effect: automation happens when people become too expensive. There is an exact tipping point, a financial cliff where the ROI on technology outruns the cost and social benefits of hiring employees.
But is there more to the story?
In a previous article for TechCrunch, I took a deeper look at the subtle creep of automation, making the case that even high-paying jobs are under siege. Whether we’re talking about accountants, doctors, or lawyers, the job is pretty much the same: diagnose a problem and fix it. In some cases, artificial intelligence already does a better job of the analysis. As it improves, and people focus more on applying the cure, will we need as many of these professionals?
Of course, there are also opportunities for AI where labor was never an option. For example, consider DoNotPay. It provides legal advice that helps people contest parking tickets. The free service, offered in New York and London, employs an artificial intelligence chatbot. So far, it has taken more than 250,000 cases and won more than 160,000 of them, saving its users more than $4 million in fines.
Is DoNotPay scratching the surface of an untapped market? “I feel like there’s a gold mine of opportunities because so many services and information could be automated using AI, and bots are a perfect way to do that,” said DoNotPay’s creator, Joshua Browder.
So it seems AI can directly challenge a labor-based model and it can create opportunities in new market niches. But is it always so obvious?
A few years ago, crowdsourcing (along with crowdfunding) was a big buzzword. Crowdsourcing is the practice of outsourcing tasks to a distributed group of people. The difference between crowdsourcing and ordinary outsourcing is that a task or problem goes out to a large group of free agents rather than to one company’s limited pool of paid employees.
Of course, there were problems with the model. You have much more control over what employees produce, and you can decide when, where, and how the work is done. With crowdsourced work, quality is harder to obtain and maintain.
But what if that’s not really the point? What if crowdsourcing is just a means to an automated end?
One of the reasons artificial intelligence is better at diagnosing problems is that it can quickly access and sift through huge piles of data. The human brain can’t find and absorb that much information that quickly, so we rely more on our limited experience.
Doctors and other healthcare professionals rely on a process to identify and treat an illness. They ask questions, take measurements, and assess the situation, working through a list of possible diagnoses, usually in order from most likely to least likely. Finally, they create a plan to treat the patient’s concerns.
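That differential-diagnosis routine can be reduced to a few lines of code. This is a minimal sketch of the idea only, not anything a clinician would use; the condition names, symptom lists, and matching rule are invented for illustration.

```python
def rank_diagnoses(findings, conditions):
    """Score each candidate condition by the share of its known symptoms
    present in the observed findings, then order most to least likely."""
    scored = []
    for name, symptoms in conditions.items():
        overlap = len(findings & symptoms) / len(symptoms)
        scored.append((name, overlap))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Illustrative placeholder data only
observed = {"fever", "cough", "fatigue"}
candidates = {
    "influenza": {"fever", "cough", "fatigue", "aches"},
    "common cold": {"cough", "sore throat"},
    "allergies": {"sneezing", "itchy eyes"},
}

ranking = rank_diagnoses(observed, candidates)
# The highest-scoring condition comes first in the ranking
```

The point of the sketch is that the process is mechanical: gather findings, score every candidate, sort. What a machine adds is the ability to score thousands of candidates against far more data than any one person has seen.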
Too often, though, patients are misdiagnosed. One study suggested it could happen as often as 10 to 20 percent of the time.
Researchers at Indiana University found that machine learning could improve patient outcomes by 50 percent at about half the cost. Using sophisticated models, the software compares multiple diagnoses and maps out their impact over time.
As researcher Kris Hauser put it, “Modeling lets us see more possibilities out to a further point, which is something that is hard for a doctor to do. They just don’t have all of that information available to them.”
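To make “mapping out impact over time” concrete, here is a toy sketch of the general idea: simulate each candidate treatment plan forward step by step and compare expected cost against the chance of recovery. The plan names, recovery rates, costs, and the selection rule are all made up for illustration; they are not from the Indiana University study.

```python
def evaluate_plan(recovery_rate, cost_per_step, steps=10):
    """Simulate a simple per-step recovery model.
    Returns (expected cumulative cost, probability of recovery)."""
    p_sick = 1.0
    total_cost = 0.0
    for _ in range(steps):
        total_cost += p_sick * cost_per_step  # only the still-sick incur cost
        p_sick *= (1 - recovery_rate)         # some fraction recovers each step
    return total_cost, 1 - p_sick

# Hypothetical plans: one fast but expensive, one slow but cheap
plans = {
    "aggressive": evaluate_plan(recovery_rate=0.4, cost_per_step=500),
    "conservative": evaluate_plan(recovery_rate=0.2, cost_per_step=150),
}

# One possible rule: best recovery probability per dollar spent
best = max(plans, key=lambda name: plans[name][1] / plans[name][0])
```

A doctor weighing two plans holds a version of this comparison in their head; software can run it exhaustively across many plans and many more time steps, which is exactly the advantage Hauser describes.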
Where does AI get the data? Typically through all of the available research on a specific topic. In today’s era of the Internet and distributed computing, it’s certainly getting easier to collect large amounts of data.
But until recently, AI was just another consumer. What happens when it becomes the intended user?
A recent article from TechRepublic addresses this topic, suggesting that the thousands of people who plug in and work for Amazon’s Mechanical Turk crowdsourcing service are actually building libraries of data for AI to consume.
Like a child, AI must be taught the basics before it can become more sophisticated. When training AI, the bigger the data set, the better. As the article notes:
“Training datasets are huge and growing in size—Google’s recently announced Open Images Dataset has about nine million images, while its labeled video repository YouTube-8M links to eight million labeled videos. ImageNet, one of the early databases of this kind, has more than 14 million categorized images. Compiled over two years, it was put together by nearly 50,000 people—most of whom were recruited through Amazon Mechanical Turk—who checked, sorted and labeled almost one billion candidate pictures.”
So maybe the relationship between tech and labor isn’t so simple. A few years ago, many of us thought crowdsourcing would be the future of work. As we continue to feed artificial intelligence more information, it’s starting to feel more like the end of work.
Are we crowdsourcing the pathway to automation? If so, what will people do and how will we pay for it? With the post-labor era quickly approaching, maybe those are the questions everyone should be asking.