China, America and Outlier Farming

Einstein farming in Civ

People are talking about how China’s tech sector is going to win because they work harder. Perhaps that’s true. GDP growth is impressive. People work through the weekend. A competitive generation that yearns to give their children a life that’s better than what they had.

Meanwhile, complacent millennials in Silicon Valley are getting avocado toast and cocktails at 5pm. We’re screwed, right? I’m not so certain. I think American technology has its best days ahead of it. Not because we work harder, but because we’re more optimistic. We’re extremely open-minded.

I grew up in Israel, which produces more startups per capita than China or America. It’s an amazing ecosystem that’s crippled by a culture of hyper-realism. Successful Israeli companies take on execution risk, not market risk. Security (Check Point), hardware (Anobit, Mobileye) and complex software (Waze) are the common categories. Not Facebook in 2006. Or Ethereum in 2014. Market-risk ideas are met with the Seinfeld curse of “really”: “Really? You’re going to build a website where every person sets up a profile? To do what, again? Why can’t you just be a doctor or a lawyer?”

I don’t think Israel is unique. As far as I can tell this problem is global, with one exception: Silicon Valley, where people are willing to truly believe. Believe that small, weird ideas could become big. This is due to two factors. First, we’re not sprinting for survival. The same force that creates employee demands for a “music room” is what allows us to dream big! We wouldn’t be discussing paternity leave if we had lethally toxic air quality. Second is a positive feedback loop: early believers in the Internet, Facebook or Bitcoin were heavily rewarded. Now many are looking for small, weird ideas and people that can get big.

“He hated the strict protocols followed by teachers and rote learning demanded of students, which explains his disdain for school.”

That’s Albert Einstein. How would that person fare in China? I don’t think a hard-driving, highly competitive culture can afford to support ill-fitting rebels who drop out of the traditional curriculum, or ideas that seem ludicrously small at first.

China will win at relentlessly optimizing everything, but America will be a far better outlier farm. As long as there are nonlinear, big ideas to be had, America stands to win.

If you’re reading this and are thinking of pursuing a small or weird idea of your own, do it. Going down the traditional path (e.g. climbing your way to a job at Google) is an implicit refusal of a golden opportunity. There’s an ecosystem of people desperate to give you money and advice, within the US or remotely, wherever you are. And hey, if it doesn’t work out — at least you’ll know you tried. Google isn’t going to disappear. Your idea might.

If you have any questions or comments, please email me at d@dcgross.com or reach out on Twitter.

Business questions engineers should ask when interviewing at ML/AI companies


A few folks have been asking me if such-and-such would be a good AI/ML company to work at. If you’re a data scientist or engineer and are considering a job, here are some interesting questions to ask during the interview. Note: these are focused on the business, not the technology.

  1. Why does anyone need this? Like all advice, this sounds deceptively simple. But make sure you get a very compelling answer here. Many AI companies are a solution-in-search-of-a-problem. Reverse engineering from the technology to the market almost never works.
  2. How was this problem being solved before AI came around? Was the pre-AI “manual” solution good enough? Common answer: “we’re replacing humans.” That isn’t enough. Often having a human is desirable (bedside manner, dexterity, or when perfection is a requirement). Often a human is affordable due to the margin structure. You’re looking to get a sense that the product is something that was never possible before, 10X better, or just-as-good but 10X cheaper. Not 20% cheaper. 10X.
  3. How many users have you spoken to? What have you learned from them? All founders talk to some users, but few talk to enough users. Too often I meet founders who are convinced people will want their solution based on limited data points. The best founders are endlessly talking to their customers. Importantly, they have intimate knowledge of the underlying problems users have, as opposed to a collection of anecdotes about the specific solution being offered in the product today. This expertise is important when building a stochastic product (“how much recall/precision do we need to launch?”).
  4. How do you make money? Be on the lookout for what I call “multistage rockets”: “Today, we’re doing X. But our grand plan is to do Y, which will be really profitable”. These usually fail.
  5. How will you grow? How will anyone find out about you? Bad answer: word of mouth. Everyone wants to have a positive k-factor. Sometimes it works out (I’m sure you’d love to have been early at Facebook). Making a viral product demands striking gold or possessing incredible artistic finesse about what makes humans tick. Unless you’re seeing either one of those, I’d suggest looking for the time-tested alternative: paid marketing. A great answer includes the cost of acquiring a customer, the lifetime value of a customer, marketing channels used, etc.
  6. How big is this market? I suggest this only as a founder-mentality canary test. Are they focused on making a massive company, or doing research? A bad answer is just saying a really big number. “$400B”. A better approach will have a back-of-the-envelope calculation which, once multiplied out, paints a picture: “We make $10 per customer per month. We think there are about 150,000,000 people in this market, so that’s $18B of annual revenue.” (For the arithmetic, see the sketch after this list.)
  7. What is defensible about the business? Bad answer: an algorithm. In software, algorithms are rarely sustainable moats. Google got great because of PageRank, but it stayed great due to network effects.
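To make the arithmetic behind questions 5 and 6 concrete, here is a minimal back-of-the-envelope sketch in Python. The $10/month price and 150M customers come from the example above; the CAC, gross margin and customer lifetime are made-up assumptions, not benchmarks:

```python
# Back-of-the-envelope unit economics (question 5) and market sizing
# (question 6). All inputs are illustrative assumptions.

def ltv(arpu_per_month: float, gross_margin: float, lifetime_months: float) -> float:
    """Lifetime value: margin-adjusted revenue over a customer's lifetime."""
    return arpu_per_month * gross_margin * lifetime_months

def annual_tam(price_per_month: float, addressable_customers: int) -> float:
    """Top-down annual revenue if every addressable customer paid."""
    return price_per_month * 12 * addressable_customers

cac = 50.0  # assumed cost to acquire one customer via paid marketing
customer_ltv = ltv(arpu_per_month=10.0, gross_margin=0.7, lifetime_months=24)
print(f"LTV = ${customer_ltv:.0f}, LTV/CAC = {customer_ltv / cac:.1f}x")
# -> LTV = $168, LTV/CAC = 3.4x

# The $18B example from question 6: $10/customer/month x 150M customers.
print(f"TAM = ${annual_tam(10.0, 150_000_000) / 1e9:.0f}B per year")
# -> TAM = $18B per year
```

If the LTV/CAC ratio comes out below the commonly cited ~3x rule of thumb, the “paid marketing” answer to question 5 gets much weaker.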

There are many other factors to optimize for, like the people you’ll work with, the technologies you’ll work on, commute, etc. I hope this is a helpful guide for sizing up the market elements of the decision. I’d be happy to help with any personalized advice. My email is daniel@dcgross.com.

Managing Machine Learning


Say you’ve just started managing a team. You’re working on a stochastic product, like search or recommendations. You want to start instrumenting success. How should you set KPIs? What should you be doing in a metrics review? Here’s an overview of common pitfalls I’ve observed that you should avoid.

Bad KPIs, Unintended Consequences

Like an organism, teams evolve a culture and product in response to a KPI. If you’re not careful with the definition, you’ll produce a distorted product. LinkedIn looks like Minesweeper because the team is optimizing for clicks. A really good metric will have the opposite effect: it unleashes a tremendous amount of creativity (“We had to 10X ‘minutes of video watched’ so… we just started playing the next video in your queue automatically”).

Before solidifying a KPI, I try to imagine the “laziest” way to 10X the metric. If I suspect it will detract from a good product, I adjust. Train yourself by trying to find evidence of this in the products you use.

Example: Amazon Search

Amazon elevates sponsored search results over the organic best seller.

Someone is getting a raise — a revenue KPI is growing, in the short term. I’d argue the grating experience makes for an inferior product long term. As a leader, it’s your job to keep a 30,000-foot view and ensure the team is building something good.

Changing KPIs

Sometimes the opposite happens. Instead of the product changing, KPI definitions constantly shift. For example:

“We thought click-through rate was our KPI. Since we show an info-box, we’ve realized that a lot of sessions are ‘good’ even though you don’t click on anything. So we’re changing our metrics.”

This is fine and should be expected. Nevertheless, it can be frustrating to manage, as you lack a repeatable baseline. The only way I know of overcoming this is to imagine myself in the N+1 metrics review: what excuses will I hear? I then try to preemptively optimize for that.

Pre-Launch KPIs

Before a launch, managers will rally the team around made-up success metrics: “Our goal is 95% precision”. Why not 20%? Or 99%? Nobody on the team will respect a made-up number. Since you lack data, it’s not clear what success should look like.

Instead, try to simulate your anecdotal reaction using a real-world analogy. For example: “If I were to see an incorrect suggestion in this UI once a week, would that feel terrible? How about once a day?” You then back out what that translates to in metric terms.
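As a rough illustration of backing the number out, suppose a user sees the suggestion UI about 20 times a day and “one bad suggestion a week” is the level that feels tolerable. Both numbers are assumptions you would replace with your own:

```python
# Back out a precision target from an anecdotal tolerance.
# The usage and tolerance figures are illustrative assumptions.

suggestions_seen_per_day = 20     # how often a user sees the feature
tolerated_errors_per_week = 1     # "one bad suggestion a week feels OK"

suggestions_per_week = suggestions_seen_per_day * 7
required_precision = 1 - tolerated_errors_per_week / suggestions_per_week
print(f"Required precision = {required_precision:.1%}")  # -> 99.3%
```

A 99.3% target derived this way is one the team can argue with on the merits, which a number pulled from the air is not.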

Post-Launch Incrementalism

Once launched, managers have the opposite problem: how do you challenge the team to really grow their numbers? “We plan to grow search volume 10% this quarter.” Why not 20%? Or 5%? A good leader will provide a rationale about how they selected the goal. To ensure incrementalism doesn’t set in, I’ll brainstorm the following with the team:

“Drop everything you know about the business today. Let’s imagine we just read Google achieved 30% growth. Hypothetically. How did they do it?”

That format can breathe big-picture thinking into a team that’s been caught in a local maximum.

Memorable Metrics

Teams often opt for a technically correct but complex KPI. For example:

“The number of queries a user runs until they click on a result. And don’t return. For at least 5 minutes.”

What? This is confusing. A better KPI would be: “Search session length”. Sacrifice technical correctness for simplicity. Frequently a metric is more nuanced under the hood, but a key metric should be explainable just by saying it. You want these numbers to be something people discuss over lunch. Information won’t disseminate when it’s complex.
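To show how the nuance can live under the hood while the name stays simple, here is a sketch of computing “search session length” from a query log. The five-minute rule mirrors the definition above; the (timestamp, kind) event format is a hypothetical stand-in for a real logging pipeline:

```python
from datetime import datetime, timedelta

# "Search session length", with the nuance hidden under the hood: a session
# ends once a click is followed by >= 5 minutes of silence.

GAP = timedelta(minutes=5)

def session_lengths(events):
    """events: time-ordered (timestamp, kind) pairs, kind in {"query", "click"}.
    Returns the number of queries in each session."""
    sessions, queries, last_click = [], 0, None
    for ts, kind in events:
        if last_click is not None and ts - last_click >= GAP:
            sessions.append(queries)  # clicked and didn't return: session over
            queries, last_click = 0, None
        if kind == "query":
            queries += 1
        elif kind == "click":
            last_click = ts
    if queries:
        sessions.append(queries)
    return sessions

log = [
    (datetime(2018, 1, 1, 9, 0), "query"),
    (datetime(2018, 1, 1, 9, 1), "query"),
    (datetime(2018, 1, 1, 9, 2), "click"),
    (datetime(2018, 1, 1, 9, 30), "query"),  # 28 minutes later: a new session
]
print(session_lengths(log))  # -> [2, 1]
```

The function is fiddly, but nobody at lunch needs to know that; they just need “search session length”.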

Input versus Output

The team might suggest reporting metrics that are easy to measure, but wrong to manage by. High-level KPIs should describe the desired output the business needs (“ad revenue”), not the effort the team is putting in (“number of salespeople hired”). Capturing input metrics is important, but you should focus your attention on output.

Intellectually Cute Explanations

Let’s imagine you’ve built an app for hiking:

You: “Why did engagement crash in March?”

Team: “It’s seasonal. People don’t use our product as much when it rains.”

Actually, what happened was that we fixed a bug in the data in March. Engagement was always low. Damn. Good luck in the next board meeting.

When numbers move, teams will come up with rationales for why. Often leaders grasp at the first reason that makes intuitive sense. These reasons are almost always wrong. Since the excuse seems like it could be right, teams often don’t bother digging deeper.

Be suspicious of anything going horribly wrong. Be very, very suspicious of anything going too well. The nightmare scenario I always worry about is a tremendous growth spurt that turns out to be a bug in the data. Your goal as a manager is to be the Boston Globe “Spotlight” team during the review.
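One concrete “Spotlight” habit, sketched here under assumptions (the event format and the 2% tolerance are illustrative, not a standard): before explaining why a KPI moved, independently recompute it from raw events and confirm the number is real at all.

```python
# Before explaining a KPI move, verify the number itself.
# The event format and 2% tolerance are illustrative assumptions.

def recompute_engagement(raw_events) -> int:
    """Recompute the KPI straight from raw logs, sharing no code with the
    dashboard pipeline (shared code would reproduce the same bug)."""
    return sum(1 for e in raw_events if e["type"] == "hike_logged")

def audit(raw_events, dashboard_value: int, tolerance: float = 0.02) -> None:
    recomputed = recompute_engagement(raw_events)
    drift = abs(recomputed - dashboard_value) / max(dashboard_value, 1)
    if drift > tolerance:
        print(f"Suspect a data bug: raw logs say {recomputed}, "
              f"dashboard says {dashboard_value} ({drift:.1%} apart).")
    else:
        print("Numbers agree; now go look for a real-world explanation.")

# Hypothetical usage for the hiking app above:
audit([{"type": "hike_logged"}] * 95, dashboard_value=130)
# -> Suspect a data bug: raw logs say 95, dashboard says 130 (26.9% apart).
```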

Summary

Hopefully this was a helpful summary of some common pitfalls to avoid when defining or reviewing metrics. If you have other ideas, please let me know! For a broader primer on management by metrics, read High Output Management.

Thank you to Elad Gil, Jack Altman, and others for reading drafts of this post.