Facebook has always been creepy, of course, with its reliance on selling your personality to advertisers. People are shocked, shocked, that this would be used for political ends by Cambridge Analytica and other manipulators, but that’s basic to what they do.
But Google has been getting creepier with time too, and not just because they also try to infer your preferences from your search choices. They’re also using the new tech of machine learning to do creepy things.
I saw this directly at a talk last month by Olivier Temam of Google Paris called “A Shift Towards Edge Machine-Learning Processing”. This was at ISSCC 2018; the slides are here and the abstract is here. The talk started by describing the recent successes of machine learning, and those are impressive and uncontroversial. It has now gotten quite good at difficult tasks like language translation and image recognition, even of things like cancer cells. The rest was about how to do machine learning on small systems, ones that could go into gadgets, instead of having to communicate with huge servers in distant buildings. These need interesting hardware techniques to run fast and at low power, and are now the subject of massive research efforts.
But they’re getting applied to hackle-raising things. Temam talked about occupancy detection for offices, where a camera tries to count the number of people in a room in order to control the temperature and ventilation. They want to do the counting in the camera itself for “privacy reasons”, so that the whole video stream does not get uploaded to some server. But who would believe that it isn’t being uploaded? Or that the camera isn’t looking at you or your screen to monitor what you’re doing? This kind of counting can be done much more easily and cheaply with an infrared sensor, with no such privacy concerns.
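For the record, on-device counting itself is no great trick. Here’s a rough sketch of the idea using OpenCV’s stock pedestrian detector as a stand-in for Google’s neural net (nothing here is from the talk); the point is that only the count ever needs to leave the device:

```python
# Rough sketch of on-device occupancy counting. OpenCV's built-in HOG
# pedestrian detector stands in for Google's neural net.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)  # the room camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    occupancy = len(boxes)
    # In a design you could trust, this count would be the only thing
    # transmitted; the frames themselves would never leave the camera.
    print("occupancy:", occupancy)
```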
Then there’s Google Clips, a new camera they’ve developed that can run their machine learning package. It uses a brilliant new chip, the Movidius Myriad 2 (now owned by Intel), that can do huge amounts of work at low power. It has 16 GB of internal storage, but links wirelessly to your phone to upload everything.
So do they have it cleaning up pictures, allowing you to get the best shot no matter what the lighting? No, they want it to take the video, not you. They got a team of professional photographers to work with a crew of babies and pets. They captured the entire video stream from their cameras, and looked at when the pros actually pressed the shutter button to capture a clip. Then they set their neural nets to work on the stream, trying to determine just what the cutest moments were. Should it capture when the baby is facing you – which to the net is just a large round blob in the middle of the image? When it’s smiling? When it’s raising its arms in glee? When it’s rolling over? The net doesn’t care – it’s just trying to predict when the professional would push the button. It knows when the actual push happened, and adjusts the synaptic weights on all of the filters it runs on the images to generate features that map to cuteness.
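Mechanically there’s nothing exotic about that training setup. Here’s a toy sketch of the idea in PyTorch, with dummy tensors standing in for the pros’ footage and a stock MobileNet standing in for Google’s network, which isn’t public:

```python
# Toy sketch of the Clips training idea: label each frame 1 if a pro
# pressed the shutter near it, 0 otherwise, then train a small net to
# predict that label from the pixels alone.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import mobilenet_v2

frames = torch.randn(64, 3, 224, 224)            # stand-in video frames
pressed = torch.randint(0, 2, (64, 1)).float()   # 1 = shutter pressed nearby
loader = DataLoader(TensorDataset(frames, pressed), batch_size=16)

model = mobilenet_v2(num_classes=1)              # small enough for a gadget
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

for x, y in loader:
    loss = loss_fn(model(x), y)   # learn to predict the button press
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The net never needs a concept of “cute”; the shutter presses are the only supervision it gets.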
As Elon Musk said, “This doesn’t even *seem* innocent.” This widget is watching and judging your baby constantly. It’s assuming that you’re too busy or stupid to film your own baby. God knows what it actually does with the video, but somewhere a Facebook type is thinking about how to monetize your baby videos.
OK, but creepier still is their AIY camera kit. It contains a lens, image sensor, button, and a cardboard box for the body. You supply a Raspberry Pi processing board and load their software onto it. The demo is, and I’m not kidding, a joyfulness detector. You point it at someone’s face, and it gives you a measure of how joyful their expression is. An LED turns yellow for joy and blue for sad, just like the emotions in the Pixar “Inside Out” movie. “And if your joy score exceeds 85%, an 8-bit sound will play. Cool!” That’s one reaction, but not the one I would have.
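The reaction logic is almost insultingly simple. Here’s a sketch of just that part, with print statements standing in for the kit’s LED and speaker; the joy score itself comes from Google’s face-detection model, which I’m not reproducing here:

```python
# Sketch of the joy demo's reaction logic only. The LED and sound calls
# are hypothetical stand-ins; the real kit drives hardware through
# Google's AIY libraries.
def set_led(r, g, b):
    print(f"LED -> ({r}, {g}, {b})")   # stand-in for the kit's RGB LED

def play_8bit_sound():
    print("beep!")                     # stand-in for the chiptune

def react(joy_score):
    """joy_score in [0, 1]: blend the LED from blue (sad) to yellow (joy)."""
    level = int(255 * joy_score)
    set_led(level, level, 255 - level)
    if joy_score > 0.85:               # the threshold from the demo blurb
        play_8bit_sound()

react(0.9)   # a very joyful face: yellowish LED, then the sound plays
```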
This is still all kind of minor, though. Where this attitude starts to matter is in their self-driving cars. For the last ten years they’ve been saying how wonderful it will be when driving is taken away from fallible humans. 30,000 people a year are killed on the road in the US! If you’re skeptical about this, you’re some Luddite delaying the self-driving millennium, and costing thousands of lives in the meantime. Cars shouldn’t even have steering wheels! Trust the machine! Put your lives in our hands!
I would believe more of this if Google (now spun off into Waymo) were actually selling car safety systems. They’ve spent billions on this by now, but haven’t offered a single product. Real car companies are steadily adding safety features: blind spot detection, back-up collision alerts, and automatic forward braking. I have them on my 2017 Chevy Volt, since they really do make a difference in accident rates. I find the braking to be annoying, to be honest, since the alert goes off constantly in harmless situations, and every few months it applies the brakes when it shouldn’t. But Google isn’t doing any of this.
I think the reason is money. The Volt’s collision detector is based on a camera built by an Israeli company called Mobileye.
They were acquired by Intel in 2017 for $15B, but in 2016 they sold about 6 million systems for $400 million. That’s terrific for a small company, but chump change to Google. Even if they sold ten times as many systems, 60 million a year, enough for 75% of the cars built each year, that’s still only $4 billion. Google makes over $100 billion a year.
No, serious money in self-driving cars only comes when they can sell car-when-you-want-it subscriptions. Charge $500 per month, and have one car handle four or five subscribers, since each one only uses it for an hour or two a day. Now you’re making $25K / car / year. Run a million of those and it’s $25 billion. When the tech really works, run 10 million of them, and you’re making $250 billion.
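Here are both revenue arguments as a quick back-of-the-envelope check (all round numbers from above):

```python
# Back-of-the-envelope check on the two revenue arguments.

# Mobileye-style hardware: about $400M for ~6M units in 2016.
per_unit = 400e6 / 6e6                       # ~$67 per system
print(10 * 6e6 * per_unit / 1e9, "billion")  # ten times the volume: ~$4B

# Subscription model: $500/month, four or five subscribers per car.
for riders in (4, 5):
    per_car = 500 * 12 * riders              # $24K-$30K per car per year
    print(riders, per_car, per_car * 1_000_000 / 1e9, "billion")  # ~$25B for 1M cars
```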
That’s what this is about, not safety. It’s certainly what Uber is going for, since they’re presently losing money on every ride. Actually doing full autonomous driving (called Level 5) is an enormously difficult problem, because the driving environment is really ill-defined. Just because Google’s AlphaGo program can beat a human champion doesn’t mean it can handle situations without fixed rules. No one cares if it makes a bad Go move, but people care a lot when your software kills someone. The current accident rate in the US is about one fatality per hundred million miles, or several million hours of driving. It’s extraordinary hubris to think that computers can do this a lot better, and that hubris is going to kill people.
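For scale, here’s that fatality rate converted into hours (the 30 mph average speed is an assumption, just to get the order of magnitude):

```python
# One fatality per hundred million miles, converted to driving hours.
miles_per_fatality = 100e6
avg_speed_mph = 30           # assumed average speed, mixing city and highway
print(f"{miles_per_fatality / avg_speed_mph:.1e} hours")  # ~3.3e6 hours
```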