Some notes on questions I'm interested in
Day 15 of Inkhaven!
Jotting down some notes on questions I’m interested in. Unfortunately, I ran out of time before investigating any of them in enough detail to write a full blog post.
I hope some of these ideas are interesting enough to spark a lively conversation in the comments, and/or in my DMs! :)
Hopefully, with more discussion, reading, and additional research, some of this could turn into genuinely interesting ideas!
Meta-Ideas
Are ideas important? How important are they? Can we quantify them?
Many people think ideas aren’t important. Dynomight and others I talk to claim that ideas are “cheap” for blog posts. Standard Silicon Valley advice is that ideas are a dime a dozen and execution is all that matters. The bitter lesson formalizes this general story in the context of ML, claiming that compute, not clever research and ideas, drives almost all progress in the field.
I think I probably disagree. But I’m not sure. How can we make progress on this question?
First of all, what is an idea? I’m not sure I have a precise operationalization. But I mean a concept that can be expressed simply in a relatively short string of words, or in some other symbolic form like equations, in a relatively context- and mind-independent way. I want to contrast ideas with things like hard work, or hard-to-transfer nuanced intuitions/intelligence/pattern-recognizing ability.
What are good metrics? If I want to answer a question like “to what extent is idea quality a bottleneck for X” (where X could be various things I’m interested in, like economic growth, child mortality, or a good blog post), how do I figure out good metrics for the % contribution of a good idea (within the distribution of realistic differences)?
Another angle is looking at what other people have tried to contribute to answering this question. E.g., what do economists say? What about historians? Maybe philosophers? What other groups try to do meta-idea study of the history, prevalence, and importance of ideas?
How do we get better ideas?
What are the generators of good ideas, other than the obvious ones (e.g. R&D spending, smart people)?
Location: It seems like some locations and times were more generative of ideas than others (ancient Greece especially; to a much lesser extent Enlightenment-era continental Europe and the UK, Bell Labs*, and academia and SV today).
Process: I seem to have better ideas when I go on walks. Is this generalizable? What other idea-generation processes am I missing?
Drugs: On priors, I’m very skeptical of the claims various people make that drugs really help with creativity. But is this prior justified? Unclear, needs investigating.
On a practical level, one reason I care so much about ideas is that many of my ideas are shit. A few of my ideas are okay, and dare I say even good. I think all my best substack posts have what-I-consider-to-be okay-to-good ideas (the rising premium of life, evolutionary approaches to bee welfare, the clean evolutionary/anthropics answer to Eugene Wigner, novel-ish interpretations of Ted Chiang), while my mediocre posts are competently executed and sometimes even cleverly written, but lack one central idea, or a few central ideas, that are both mostly novel and mostly interesting. I’d like more high-quality ideas!
*This might just be general intelligence; I’d be interested in a model that tries to factor that out.
Bifocal thinking
English doesn’t have a word for it: that thing where two perspectives on the same problem both have merit and you need to think from each angle, first one then the other, to deeply understand what’s going on. Friend-of-the-blog Katelynn Bennett suggests I use “bifocal thinking” as a stand-in, and I’m interested in trying that for now.
I’d like to make a substantive post on bifocal thinking one day, both trying to introduce the concept and to say nontrivial things about specific bifocals to think about.
Bias vs variance (underfitting vs overfitting): Bias is the error from a model being too simple to capture the true pattern (underfitting), while variance is the error from a model being too sensitive to noise in the training data (overfitting). (A minimal code sketch follows this list.)
Explore vs exploit: When do you explore (try out new things) vs exploit (keep hammering on things you know are good and can provide you high reward)?
Parsimony vs nuance: Briefly covered here, I’m interested in when we want more simplicity in our models (Occam’s razor) vs more complexity/nuance.
Forward vs backward chaining: A.k.a. theory of action vs theory of change, as formulated by Aaron Swartz: do you start with what you know and try to find your way to your end goal, or do you start with your end goal and build a plan backwards to achieve it?
Top-down vs bottom-up: Do you start with a high-level model of something and bubble your thoughts downwards, or do you start with a ground-level view and abstract your way upwards?
Explorer vs Assessor (generator vs discriminator, divergent vs convergent, babble and prune): When is it good to generate new/many ideas, vs try to select and be discerning among the ideas you already have access to?
Game theory vs mechanism design: More technical than some of the others: do you start with a set of rules and analyze how rational agents respond to them, or do you start with a set of rational agents with private information and try to design rules and incentives that optimize outcomes given their strategic behavior?
Marginal vs total effect: Do you care more about whether something is good/great/terrible in aggregate, or how it performs on the margin?
Bias vs heuristic: Pretty self-explanatory
High agency vs seeing self as process: Is it more helpful to model yourself as a rational agent who aims to never fail, or as a process shaped by your environment, such that your rational mind should mostly be pointed towards engineering that environment to condition yourself?
Historical inevitability vs contingency: One of the most important questions. Fortunately other Inkhaveners are investigating the same question!
Tradeoffs vs non-tradeoffs: Should we typically view questions in terms of real tradeoffs (like a typical economist), or is there a different view where you can extend the Pareto frontier (like Elon Musk)?
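To make the first pair concrete, here’s a minimal Python sketch of bias vs variance (a toy example of my own; the sine curve, noise level, and polynomial degrees are arbitrary choices, not anything canonical):

```python
# Toy illustration of bias (underfitting) vs variance (overfitting):
# fit a too-simple and a too-flexible polynomial to the same noisy data.
import numpy as np

rng = np.random.default_rng(0)

def true_fn(x):
    return np.sin(2 * np.pi * x)

# Small noisy training set, larger test set drawn from the same underlying curve.
x_train = rng.uniform(0, 1, 20)
y_train = true_fn(x_train) + rng.normal(0, 0.2, 20)
x_test = rng.uniform(0, 1, 200)
y_test = true_fn(x_test) + rng.normal(0, 0.2, 200)

for degree in (1, 12):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    # Degree 1 underfits (high bias: poor on both sets); degree 12 overfits
    # (high variance: near-perfect on the training set, worse on the test set).
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```

Nothing deep here; it just makes the two failure modes easy to see side by side.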
Challenges with writing this post well include deciding whether to include all 12 bifocal pairs (24 ideas total) in the core post, vs splitting some of them into other posts, vs something else entirely (a 13-post sequence?).
Another challenge is saying novel and interesting things about each bifocal. My guess is that at least 3 of the ideas here are novel or interesting to most readers, but different readers will have a different conception of which 3!
Politics
How responsible was Fairshake for Katie Porter’s loss?
Fairshake is a new crypto super PAC with >$100M in political spending (c. 2024). They threw their weight around a bunch with electioneering, making politicians fear the power of crypto. But is this fear justified?
ACX:
Anyway, they won overwhelmingly. They combined the business-as-usual strategy of donating to safe incumbents and both sides of close races, with the AIPAC strategy of picking a few big opponents of their cause and airdropping massive sums on their rivals. For example, Representative Katie Porter (D-California) was an Elizabeth Warren ally and cryptocurrency critic. When she ran for Senate, Fairshake dropped $10 million into attack ads against her in the primaries - more than most candidates’ total spending. The attack ads didn’t say she was bad on crypto - something that approximately no voters care about. They were just normal attack ads on whatever aspect of her policy and personality focus groups said she was most vulnerable on (in practice, an accusation that she mistreated her Congressional staff). She lost badly, coming in third place. Although nobody can prove she wouldn’t have lost anyway, conventional wisdom was that crypto had successfully made its point. According to SFGate:
An unnamed political operative told the magazine: “Porter was a perfect choice because she let crypto declare, ‘If you are even slightly critical of us, we won’t just kill you—we’ll kill your f—king family, we’ll end your career.’ From a political perspective, it was a masterpiece.” The scare campaign appears to have worked. The House of Representatives passed a pro-crypto bill, with bipartisan support, in May. Candidates with Fairshake’s support won their primaries in 85% of cases, the New Yorker wrote. Now, neither presidential candidate wants to run astray of the industry: Donald Trump spoke at a crypto conference, and Kamala Harris signaled her support. And Porter is forced out of Congress.
According to smart, savvy people I know, this conventional wisdom is very likely false. Katie Porter very likely would have lost anyway. But the false narrative that crypto tanked Porter’s candidacy has spread far and wide in DC. This seems bad!
This is actually a pretty important thing to argue against! Conventional wisdom in DC is that her loss was a result of Fairshake, and correspondingly everybody’s afraid of crypto. So if we can demonstrate that it wasn’t, we can maybe steer DC to be slightly less afraid of the relevant tech lobbies, which is probably good for democracy.
What are non-partisan ways to preserve democracy?
Relatedly, what are non-partisan ways to reduce democratic backsliding and help preserve US democracy? Seems like a worthwhile project, maybe worth a few hours of investigation and a Substack post or two.
Aging
How related are Aubrey de Grey’s/SENS’ ideas to my own?
What percentage of current gerontology/anti-aging research (by funding, citations, etc.) works off of the (wrong, imo) biomarkers/root-cause model?
How can I get further clarity on these questions before I pitch an editor again? They were interested but wanted more details.
Antimemes/censorship
Mapping the unknown
How can we map the unknown and see where the silences are? Like inferring a black hole from where light doesn’t escape, can we do the same with information in the social world?
Recent examples/subproblems of mapping the unknown that interest me:
In philosophy of mind and philosophy of science, the concept of “cognitive closure”: problems that human minds are incapable of solving or, in some cases, even comprehending. See both Daniel Muñoz’s excellent article on related concepts and my comment here: bigifftrue.substack.com…
In ethics and meta-ethics, the possibility of an ongoing moral catastrophe: arguments for believing that our society is unknowingly guilty of serious, large-scale wrongdoing. I wrote a summary here.
In philosophy of science and existential risk analysis, the possibility of an “anthropic shadow”: catastrophes that fully destroy humanity (or at least significantly reduce the number of observers) by definition limit the number of survivors, thus artificially biasing our observations towards believing the world is safer than it is. (A toy simulation follows this list.)
In existential risk analysis, trying to map out the risk of “unknown existential risks” that people haven’t discovered or thought of yet, e.g. via inductive arguments from the rate of discovery of new risks over the last century.
In science fiction, the concept of the “antimeme”: concepts/ideas that are inherently difficult or impossible to think of/recall/remember, either because of self-sealing properties or because remembering/connecting those ideas together is catastrophically bad for you.
Information control in an autocracy: What can intelligent citizens reasonably infer in an autocracy that actively suppresses information and diverts attention away from events/facts that are embarrassing for the regime?
Company NDAs and other legal information hiding: In the shadow of NDAs, bans on trading on insider information, etc., what can companies reasonably infer from the silences of their competitors?
Recent OpenAI stuff.
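To make the anthropic-shadow worry concrete, here’s a toy Monte Carlo sketch (all numbers invented for illustration): histories where a catastrophe wipes out the observers leave nobody around to count catastrophes, so the surviving observers’ naive estimate of the catastrophe rate comes out below the true rate.

```python
# Toy Monte Carlo sketch of the "anthropic shadow": survivors necessarily have
# quiet-looking histories, so their naive estimate of catastrophe frequency is
# biased low. Parameters are made up purely for illustration.
import numpy as np

rng = np.random.default_rng(42)

p_event = 0.10   # true per-century chance of a potentially civilization-ending event
p_fatal = 0.50   # chance such an event actually eliminates all observers
centuries = 50
n_histories = 100_000

# For each history and century: did an event occur, and was it fatal?
events = rng.random((n_histories, centuries)) < p_event
fatal = events & (rng.random((n_histories, centuries)) < p_fatal)

# A history produces observers only if no fatal event ever occurred.
survived = ~fatal.any(axis=1)

# Survivors estimate the event rate from their own (necessarily non-fatal) record.
naive_estimate = events[survived].mean()

print(f"True event rate:                 {p_event:.3f}")
print(f"Fraction of histories surviving: {survived.mean():.3f}")
print(f"Survivors' naive estimate:       {naive_estimate:.3f}")  # biased below 0.10
```

Under these made-up parameters the survivors’ estimate comes out at roughly half the true rate, which is the qualitative point.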
The Emperor Has No Clothes
Related to the censorship point earlier, I suspect a lot of the value/harm of these egregious activities (Fairshake scaring crypto-hating politicians, OpenAI scaring nonprofits and ex-employees, “peak woke”, actual authoritarian censorship) comes in part from the direct effect but largely from the chilling effects created by people thinking these actors are scarier/more powerful than they actually are.
I’d like to figure out whether this suspicion is correct! And if it is, how do we quantify the chilling effects and counteract them? Moral suasion most likely isn’t enough!
…Please Comment!
If you have thoughts on any of these questions, please consider commenting! I’d be actively interested in greater engagement with these questions, and perhaps your early engagement would make it into my more substantive final posts!


