Algorithms to Live By demonstrates how computer algorithms can enhance human decision making. Surprisingly, this often involves clever simplification strategies. For example, when memory is full and new information needs to be stored, computers use the Least Recently Used (LRU) algorithm to decide which data to discard. If you accumulate an “unsorted” pile of papers on your desk, it is, by definition, sorted by recency of use from bottom to top, which is exactly the ordering LRU relies on (making it self-organizing!).
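As a rough illustration of that idea (my own Python sketch, not code from the book), here is a minimal LRU cache; the capacity and the paper names are invented for the example.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal least-recently-used cache: when full, evict whichever
    entry has gone unused the longest."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._items = OrderedDict()   # oldest (least recently used) first

    def get(self, key):
        if key not in self._items:
            return None
        self._items.move_to_end(key)  # touching an entry makes it most recent
        return self._items[key]

    def put(self, key, value):
        if key in self._items:
            self._items.move_to_end(key)
        self._items[key] = value
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)  # evict the least recently used entry

# The desk-pile analogy: whatever you touched last ends up on top.
cache = LRUCache(capacity=3)
for paper in ["taxes", "invoice", "draft", "taxes", "receipt"]:
    cache.put(paper, "contents")
print(list(cache._items))  # ['draft', 'taxes', 'receipt'], 'invoice' was evicted
```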
Brian Christian and Tom Griffiths also illustrate how computers handle overly challenging calculations by slightly relaxing the problem's constraints and solving a similar, easier version instead. Applying this in a work setting, imagine you need to convince a stubborn stakeholder of your proposal. Instead of focusing on them first, pretend they will be on board once you have built a compelling rationale and garnered others' support. By strengthening your proposal's viability and building alliances (easier problems to solve), you'll be better positioned to gain acceptance and move forward.
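Here is one generic, hedged sketch of that relaxation move (my own example, not the authors'): take a hard all-or-nothing packing problem, relax the all-or-nothing constraint so fractions are allowed, and the relaxed version becomes easy to solve and bounds the original. The items, weights, and values below are invented.

```python
def relaxed_knapsack(items, capacity):
    """Continuous relaxation of the (hard) 0/1 knapsack problem:
    allowing fractional items makes a simple greedy pass optimal,
    and its value is an upper bound on the true 0/1 optimum."""
    order = sorted(items, key=lambda it: it["value"] / it["weight"], reverse=True)
    remaining, relaxed_value, whole_items = capacity, 0.0, []
    for it in order:
        take = min(1.0, remaining / it["weight"])  # fraction of this item that still fits
        if take <= 0:
            break
        relaxed_value += take * it["value"]
        remaining -= take * it["weight"]
        if take == 1.0:
            whole_items.append(it["name"])         # whole items already form a feasible 0/1 answer
    return relaxed_value, whole_items

items = [
    {"name": "laptop", "weight": 3, "value": 9},   # value density 3.0
    {"name": "camera", "weight": 2, "value": 5},   # value density 2.5
    {"name": "books",  "weight": 4, "value": 6},   # value density 1.5
]
bound, feasible = relaxed_knapsack(items, capacity=4)
print(bound, feasible)  # 11.5 ['laptop'], the relaxed bound and an easy feasible start
```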
This book has deepened my understanding of the science of efficient decision making and helped me make better decisions with less effort.
You should read this book if you…
- want proven, systematic ways to strengthen your decision making
- are interested in how computers optimize resources
- want to simplify tasks
Additional Information
Year Published: 2016
Book Ranking (from 1-10): 8 – Very Good – In-depth insights on a specific topic
Ease of Read (from 1-5): 3 – Average
Key Highlights
- The optimal solution takes the form of what we’ll call the Look-Then-Leap Rule: You set a predetermined amount of time for “looking”—that is, exploring your options, gathering data—in which you categorically don’t choose anyone, no matter how impressive. After that point, you enter the “leap” phase, prepared to instantly commit to anyone who outshines the best applicant you saw in the look phase (see the first code sketch after these highlights)
- “The theory of optimal stopping is concerned with the problem of choosing a time to take a given action,” opens the definitive textbook on optimal stopping, and it’s hard to think of a more concise description of the human condition
- Simply put, exploration is gathering information, and exploitation is using the information you have to get a known good result
- When balancing favorite experiences and new ones, nothing matters as much as the interval over which we plan to enjoy them. Explore when you will have time to use the resulting knowledge, exploit when you’re ready to cash in. The interval makes the strategy
- Sorting something that you will never search is a complete waste; searching something you never sorted is merely inefficient
- The optimal cache eviction policy—essentially by definition—is, when the cache is full, to evict whichever item we’ll need again the longest from now
- The mathematics of self-organizing lists suggests something radical: the big pile of papers on your desk, far from being a guilt-inducing fester of chaos, is actually one of the most well-designed and efficient structures available. What might appear to others to be an unorganized mess is, in fact, a self-organizing mess. Tossing things back on the top of the pile is the very best you can do, shy of knowing the future
- What we call “cognitive decline”—lags and retrieval errors—may not be about the search process slowing or deteriorating, but (at least partly) an unavoidable consequence of the amount of information we have to navigate getting bigger and bigger
- In fact, the weighted version of Shortest Processing Time is a pretty good candidate for best general-purpose scheduling strategy in the face of uncertainty. It offers a simple prescription for time management: each time a new piece of work comes in, divide its importance by the amount of time it will take to complete. If that figure is higher than for the task you’re currently doing, switch to the new one; otherwise stick with the current task. This algorithm is the closest thing that scheduling theory has to a skeleton key or Swiss Army knife, the optimal strategy not just for one flavor of problem but for many (see the second sketch after these highlights)
- Another way to avert thrashing before it starts is to learn the art of saying no. Denning advocated, for instance, that a system should simply refuse to add a program to its workload if it didn’t have enough free memory to hold its working set
- This is a principle that can be transferred to human lives. The moral is that you should try to stay on a single task as long as possible without decreasing your responsiveness below the minimum acceptable limit. Decide how responsive you need to be—and then, if you want to get things done, be no more responsive than that
- The lesson is this: it is indeed true that including more factors in a model will always, by definition, make it a better fit for the data we have already. But a better fit for the available data does not necessarily mean a better prediction
- It seems like Markowitz should have been the one person perfectly equipped for the job. What did he decide to do? “I should have computed the historical covariances of the asset classes and drawn an efficient frontier. Instead, I visualized my grief if the stock market went way up and I wasn’t in it—or if it went way down and I was completely in it. My intention was to minimize my future regret. So I split my contributions fifty-fifty between bonds and equities.” Why in the world would he do that? The story of the Nobel Prize winner and his investment strategy could be presented as an example of human irrationality: faced with the complexity of real life, he abandoned the rational model and followed a simple heuristic. But it’s precisely because of the complexity of real life that a simple heuristic might in fact be the rational solution
- But for the computer scientists who wrestle with such problems, this verdict isn’t the end of the story. Instead, it’s more like a call to arms. Having determined a problem to be intractable, you can’t just throw up your hands. As scheduling expert Jan Karel Lenstra told us, “When the problem is hard, it doesn’t mean that you can forget about it, it means that it’s just in a different status. It’s a serious enemy, but you still have to fight it.” And this is where the field figured out something invaluable, something we can all learn from: how to best approach problems whose optimal answers are out of reach. How to relax
- But it lurched into practicality in the twentieth century, becoming pivotal in cryptography and online security. As it happens, it is much easier to multiply primes together than to factor them back out. With big enough primes—say, a thousand digits—the multiplication can be done in a fraction of a second while the factoring could take literally millions of years; this makes for what is known as a “one-way function.” In modern encryption, for instance, secret primes known only to the sender and recipient get multiplied together to create huge composite numbers that can be transmitted publicly without fear, since factoring the product would take any eavesdropper way too long to be worth attempting. Thus virtually all secure communication online—be it commerce, banking, or email—begins with a hunt for prime numbers
- For some other problems, randomness still provides the only known route to efficient solutions. One curious example from mathematics is known as “polynomial identity testing.” If you have two polynomial expressions, such as 2x³ + 13x² + 22x + 8 and (2x + 1) × (x + 2) × (x + 4), working out whether those expressions are in fact the same function—by doing all the multiplication, then comparing the results—can be incredibly time-consuming as the number of variables increases. Here again randomness offers a way forward: just generate some random xs and plug them in. If the two expressions are not the same, it would be a big coincidence if they gave the same answer for some randomly generated input. And an even bigger coincidence if they also gave identical answers for a second random input. And a bigger coincidence still if they did it for three random inputs in a row. Since there is no known deterministic algorithm for efficiently testing polynomial identity, this randomized method—with multiple observations quickly giving rise to near-certainty—is the only practical one we have (see the third sketch after these highlights)
- We use the idiom of “dropped balls” almost exclusively in a derogatory sense, implying that the person in question was lazy, complacent, or forgetful. But the tactical dropping of balls is a critical part of getting things done under overload. The most prevalent critique of modern communications is that we are “always connected.” But the problem isn’t that we’re always connected; we’re not. The problem is that we’re always buffered. The difference is enormous
- But by reducing the number of options that people have, behavioral constraints of the kind imposed by religion don’t just make certain kinds of decisions less computationally challenging—they can also yield better outcomes
- Revenge almost never works out in favor of the one who seeks it, and yet someone who will respond with “irrational” vehemence to being taken advantage of is for that very reason more likely to get a fair deal. As Cornell economist Robert Frank puts it, “If people expect us to respond irrationally to the theft of our property, we will seldom need to, because it will not be in their interests to steal it. Being predisposed to respond irrationally serves much better here than being guided only by material self-interest.”
- To a game theorist, a Vickrey auction has a number of attractive properties. And to an algorithmic game theorist in particular, one property especially stands out: the participants are incentivized to be honest. In fact, there is no better strategy than just bidding your “true value” for the item—exactly what you think the item is worth. Bidding any more than your true value is obviously silly, as you might end up stuck buying something for more than you think it’s worth. And bidding any less than your true value (i.e., shading your bid) risks losing the auction for no good reason, since it doesn’t save you any money—because if you win, you’ll only be paying the value of the second-highest bid, regardless of how high your own was. This makes the Vickrey auction what mechanism designers call “strategy-proof,” or just “truthful.” In the Vickrey auction, honesty is literally the best policy (see the last sketch after these highlights)
- One of the chief goals of design ought to be protecting people from unnecessary tension, friction, and mental labor
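To make a few of those highlights concrete, here are some minimal Python sketches of my own; they are illustrations, not the authors' code, and every number in them is invented. First, the Look-Then-Leap Rule: skip the first 37% of candidates, then take the first one who beats the best seen so far.

```python
import random

def look_then_leap(scores, look_fraction=0.37):
    """Skip the first `look_fraction` of candidates, then commit to the
    first one who outshines the best candidate seen during the look phase."""
    cutoff = int(len(scores) * look_fraction)
    benchmark = max(scores[:cutoff]) if cutoff else float("-inf")
    for score in scores[cutoff:]:
        if score > benchmark:
            return score
    return scores[-1]  # nobody beat the benchmark: we end up with the last option

random.seed(0)
trials, hits = 10_000, 0
for _ in range(trials):
    scores = [random.random() for _ in range(100)]
    if look_then_leap(scores) == max(scores):
        hits += 1
print(f"chose the single best candidate in {hits / trials:.0%} of trials")  # roughly 37%
```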
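Next, the weighted Shortest Processing Time prescription from the scheduling highlight: divide each task's importance by its estimated time and work on the highest ratio first.

```python
def weighted_spt_order(tasks):
    """Sort tasks by importance divided by time-to-complete, highest first;
    this minimizes the sum of importance-weighted completion times."""
    return sorted(tasks, key=lambda t: t["importance"] / t["hours"], reverse=True)

tasks = [
    {"name": "quarterly report", "importance": 8, "hours": 4},    # ratio 2.0
    {"name": "expense form",     "importance": 1, "hours": 0.5},  # ratio 2.0
    {"name": "customer bug",     "importance": 9, "hours": 2},    # ratio 4.5
    {"name": "slide polish",     "importance": 2, "hours": 3},    # ratio ~0.7
]
print([t["name"] for t in weighted_spt_order(tasks)])
# ['customer bug', 'quarterly report', 'expense form', 'slide polish']
```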
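Then the randomized polynomial identity test: plug random inputs into both expressions and compare. The number of trials and the sampling range here are arbitrary choices of mine.

```python
import random

def probably_identical(f, g, trials=20, bound=10**6):
    """Randomized identity test: if the two polynomials differ, a random
    input is overwhelmingly unlikely to make them agree, so repeated
    agreement gives near-certainty that they are the same function."""
    for _ in range(trials):
        x = random.randint(-bound, bound)
        if f(x) != g(x):
            return False   # one disagreement is conclusive: they differ
    return True            # not a proof, but astronomically likely

f = lambda x: 2 * x**3 + 13 * x**2 + 22 * x + 8
g = lambda x: (2 * x + 1) * (x + 2) * (x + 4)
print(probably_identical(f, g))                   # True: the same polynomial in disguise
print(probably_identical(f, lambda x: f(x) + 1))  # False: quickly caught by a random input
```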
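Finally, the Vickrey second-price auction: the highest bid wins, but the price paid is the runner-up's bid.

```python
def vickrey_auction(bids):
    """Second-price sealed-bid auction: the highest bidder wins
    but pays only the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]   # the winner's own bid never sets the price they pay
    return winner, price

bids = {"alice": 120, "bob": 95, "carol": 70}
print(vickrey_auction(bids))  # ('alice', 95): alice wins and pays bob's bid
```

Because your own bid never affects the price you pay, bidding anything other than your true value can only cost you the auction without saving you money, which is the incentive for honesty the highlight describes.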