In an August 6th article, “Uber’s Festering Sexual Assault Problem,” the New York Times attacks Uber for holding back crime-prevention tools and putting business ahead of safety.
This serious accusation hit close to home. From 2018 to 2019, I was a software engineer on the Safety team at Uber. I worked on some of the projects mentioned and am familiar with many others. We deployed numerous systems. While they cannot prevent 100% of crime, they have made an impact. They were the product of deep research and tireless labor from many talented teams, and I am proud to have been a part of that work.
There is another side of this story, one the New York Times minimized in favor of a greedy-corporation narrative. I want to tell you that side.
In 2017, Dara Khosrowshahi became Uber’s new CEO. He raised safety as one of the company’s top priorities. I wasn’t naive; I regarded these messages in part as corporate propaganda. Nevertheless, I decided to answer the call and transferred to the Safety team.
Once I joined, I was quickly convinced that we did in fact take safety very seriously. Our group was officially the Safety and Insurance Group. From the outside, it is easy to portray Uber as growth-obsessed, chasing those $10 fares at any cost. But we had another important priority: minimizing those $1 million insurance claims. For a company as big as Uber, insurance turns out to be a significant expense (Uber uses a combination of self-insurance and commercial insurance).
This corporate structure aligns business interests with safety. You may not believe it, but even at your health insurance or car insurance company, people are working to improve safety on your behalf. Their financial outcome depends on it.
Journalists emphasize concrete personal stories to build narratives. We data scientists, statisticians, and actuaries work in a world of uncertainties, of probabilities. We tune the system to improve the odds: a few percent here, a few percent there. The gains add up to a better outcome overall, but we have little control over any specific incident.
The article approaches risk management from a more black-and-white perspective. Uber is accused of hindering the use of effective matching algorithms, presumably to protect profit. Saying that Uber still dispatched trips identified as high-risk sounds very irresponsible. But take a closer look: suppose a regular trip has a 0.01% chance of resulting in sexual assault. A high-risk dispatch may be ten times riskier, i.e. a 0.1% chance that things go wrong. 99.9% of the time, it still turns out fine. Given these odds, the right decision is not at all obvious.
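To make the tradeoff concrete, here is a back-of-the-envelope sketch using only the illustrative rates above. These are not Uber’s actual numbers, and the trip volume is hypothetical.

```python
# Back-of-the-envelope math for the dispatch dilemma above.
# All rates are the illustrative numbers from the text, not Uber's data.

BASELINE_RISK = 0.0001   # 0.01%: chance a regular trip results in an incident
HIGH_RISK = 0.001        # 0.1%: ten times riskier

trips = 1_000_000  # hypothetical batch of dispatches flagged as high-risk

expected_incidents = trips * HIGH_RISK
safe_trips = trips - expected_incidents

print(f"Out of {trips:,} flagged trips:")
print(f"  expected incidents if dispatched: {expected_incidents:,.0f}")
print(f"  trips completing without incident: {safe_trips:,.0f}")
# Blocking every flagged trip would prevent roughly 1,000 incidents,
# but it would also deny roughly 999,000 rides that would have ended
# fine, possibly pushing those riders toward less safe alternatives.
```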
We faced dilemmas like this all the time. Algorithms are repeatedly tested and fine-tuned. An algorithm is used only when it can be shown to do far more good than harm. Not every algorithm makes the cut.
The Women Driving Women program seems like a brilliant idea. By matching women riders only with women drivers, it protects them from unscrupulous men. The New York Times holds it against Uber for not adopting it. But there is a fatal flaw: in reality, driving is a predominantly male occupation. Men make up over 90% of the driver base in many cities. With so few women drivers, women riders would have a hard time finding a ride at all if this were a hard requirement, as the rough estimate below suggests. A program that appeases the safety critics would enrage the gender critics.
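Here is a rough availability estimate under a women-only matching constraint. The driver counts and shares are hypothetical, chosen only to illustrate the scale of the problem, and modeling nearby supply as Poisson is my own simplification.

```python
import math

# Rough availability estimate under a women-only matching requirement.
# Hypothetical numbers for illustration; not Uber's actual supply data.

nearby_drivers = 20          # typical drivers within pickup range of a request

for women_share in (0.10, 0.05):
    # Model the number of nearby women drivers as Poisson with this mean:
    mean_women = nearby_drivers * women_share
    p_no_match = math.exp(-mean_women)   # P(zero eligible drivers nearby)
    print(f"{women_share:.0%} women drivers -> "
          f"{p_no_match:.0%} of requests find no eligible driver nearby")

# With a >90% male driver base, a hard women-only constraint leaves a
# meaningful share of riders with no match at all, before even counting
# the longer waits when a match does exist.
```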
The biggest misconception is that by rejecting these projects, Uber disregarded safety completely. In reality, these are only a few of the many projects developed. It is inevitable that some end up delayed or rejected, but many more are successfully running in production. Together, they target safety issues from multiple angles. Even a rejected matching algorithm can become the foundation of the next generation, with each generation improving on the last.
The article touches on several of those tools but dismisses them because they did not help in the cases it describes. I want to clear this up. While the tools cannot eliminate 100% of the problem, they all contribute. Something as simple as driver reviews provides genuinely useful information. Every trip is also tracked and recorded in a database. These trip records are so valuable that law enforcement routinely requests them to help fight crime.
One tool that left a particular impression on me is RideCheck. When the system detects an unusual event, such as a possible car crash, it proactively reaches out to the rider to check on them. This was a first in the industry. Anywhere else, if something goes wrong during a taxi ride, you call the police yourself. No RideCheck. No one looks out for you. Considerable effort went into its development: the system must not only detect possible problems, it must also avoid raising so many false alarms that it undermines its own credibility.
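Uber has not published RideCheck’s detection logic, so what follows is purely my own illustrative sketch of one signal such a system might plausibly use, a long unexpected mid-trip stop, with made-up thresholds.

```python
from dataclasses import dataclass

# Illustrative sketch only: one plausible signal for a RideCheck-style
# system. The actual detection logic is not public; thresholds are
# invented for this example.

@dataclass
class Ping:
    timestamp: float   # seconds since trip start
    speed_mps: float   # vehicle speed from GPS, meters per second

STOP_SPEED = 0.5       # below this, the car is effectively stationary
ALERT_AFTER = 240.0    # seconds stopped before considering a check-in

def long_stop_detected(pings: list[Ping]) -> bool:
    """Return True if the vehicle has been stationary long enough to
    warrant a proactive check-in. A real system would also need to
    exclude benign stops (traffic, pickups) to limit false alarms."""
    stop_start = None
    for p in pings:
        if p.speed_mps < STOP_SPEED:
            if stop_start is None:
                stop_start = p.timestamp
            elif p.timestamp - stop_start >= ALERT_AFTER:
                return True
        else:
            stop_start = None
    return False
```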
The article mentions this feature too. Alas, it did not change the outcome there. But I remember when it was showcased internally after one of its first uses in the field. A real trip ran into minor distress. No harm was done, but it triggered a RideCheck, and an agent reached out. The recorded conversation was replayed in the demonstration (with the rider’s consent).
We heard the rider, initially confused to receive a call, then pleasantly surprised that Uber had noticed something happen from afar, and finally very grateful that the agent checked on her. The agent was relieved that she was all right. The case closed on a happy note.
I saw the tool’s ability to reach people at the very moment they are in need. I know the recording was chosen for internal PR purposes. Still, hearing it work in a real-life incident made me a little emotional.
I was only building software at Uber, nothing heroic. But if my work helps save even one life, it is a very meaningful job.
(My letter, originally submitted to the New York Times on August 10th, 2025)


