Reading suggestions
Someone asked for a book suggestion. My reply…
It’s hard to think about anything other than AI these days, and the books on AI are mostly out of date.
For AI, I suggest:
- Samuel Hammond’s blog
- LLM explainer
- Sparks of Intelligence
- We can’t compete
- A rehash of Bostrom’s Vulnerable World paper, notably written by an influential SV figure with a background in the Open Science movement. He’s late to the party, but at least he’s here now.
Things on my mind:
- How do we regulate and police our way to a better offence/defence balance, without wrecking civil liberties? We can surely do better than totalitarianism.
- Will the policy response in 2023/24/25 slow things down for several decades? Hard to tell.
- Should we think of digital minds as alien invaders, as our children, or as something else? Perhaps we should cede the future to them as we do to our children, rather than see them as alien invaders.
- Very new politics coming soon. How to prepare?
Big news of the year: most world leaders basically get it now. This has happened faster than I expected.
In the “not-AI” section, I liked Professor of Apocalypse and The Other God That Failed, both by Jerry Z. Muller.
More and more, I read with GPT-4 open, asking questions as I read, as I might with a private tutor. Pay for ChatGPT if you haven’t already.
Robin Hanson on the future as reality (read by my digital avatar)
The audio and video are generated from a text input by HeyGen.
Their system was trained with 30 seconds of video from my webcam.
Here’s Nietzsche on philosophers:
ChatGPT is a junior engineer today; a senior designer and engineer next year
If you use several Google accounts, Google Meet does not automatically switch to whatever account has permission to join the call. It’s annoying.
Today I hired a team and made a browser extension to solve this.
It took us less than 30 minutes to get a working prototype. It took a further 3-4 hours to make a “production-ready” version for release on the Chrome Web Store.
I served as UX designer, engineering manager and head of QA. GPT-4 was our lead developer, and wrote roughly all of the code, to a higher standard than I would have.[1]
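For flavour, here’s a minimal sketch of how such an extension could work. To be clear, this is my illustration, not the code GPT-4 wrote: a content script that, when Meet refuses entry, reloads the call under the next signed-in account via Google’s `authuser` URL parameter. The exact denial text and the account limit are assumptions.

```ts
// content-script.ts — a sketch only; names and the authuser trick are
// illustrative, not the extension's actual implementation.

const MAX_ACCOUNTS = 5; // try authuser=0..4, then give up (an assumed limit)

function currentAuthUser(): number {
  const param = new URL(window.location.href).searchParams.get("authuser");
  return param === null ? 0 : parseInt(param, 10);
}

function retryWithNextAccount(): void {
  const next = currentAuthUser() + 1;
  if (next >= MAX_ACCOUNTS) return; // no signed-in account worked
  const url = new URL(window.location.href);
  url.searchParams.set("authuser", String(next));
  window.location.replace(url.toString()); // reload the call as the next account
}

// Watch the page; if Meet says this account can't join, retry with the next one.
// (The denial string below is a guess at Meet's wording.)
const observer = new MutationObserver(() => {
  if (document.body.innerText.includes("You can't join this video call")) {
    observer.disconnect();
    retryWithNextAccount();
  }
});
observer.observe(document.body, { childList: true, subtree: true });
```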
How will GPT-5 help us make faster progress?
Here’s my guess:
- The workflow will be: I give an instruction, it updates the codebase, checks the result in the browser, fixes the codebase if necessary, then asks me to review (sketched in code after this list).[2]
- It will require less of my “expert direction” on how to approach some of the engineering tasks.
- It will reply as quickly as GPT-3.5.
- It will help me create the logo, screenshot and demo video for the Chrome Web Store.
- It will help me design the user interface.
- It will proactively suggest feature ideas and possible UX issues.
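To make the first bullet concrete, here’s the loop I’m imagining, as pseudocode. Every type and function in it is hypothetical, a stand-in for a future model API, not anything that exists today:

```ts
// A sketch of the predicted GPT-5 workflow. All names here are hypothetical.
interface BrowserCheck { ok: boolean; notes: string }

interface Model {
  updateCodebase(instruction: string): Promise<string>;  // returns a commit id
  checkInBrowser(commit: string): Promise<BrowserCheck>; // the model "sees" the result
  fixCodebase(commit: string, notes: string): Promise<string>;
}

async function buildFeature(model: Model, instruction: string): Promise<string> {
  let commit = await model.updateCodebase(instruction);
  let check = await model.checkInBrowser(commit);
  while (!check.ok) { // the model repairs its own work until the page looks right
    commit = await model.fixCodebase(commit, check.notes);
    check = await model.checkInBrowser(commit);
  }
  return commit; // only now does the human get asked to review
}
```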
I expect most of these improvements will be available by the end of 2024. With them, I’d be able to create this extension in less than an hour.
Today, my background in software design and engineering was a necessary condition to deliver a great result within a couple of hours (see the transcript). How much of this expertise will be needed in 2024? 2025?
Prediction: by December 2025, my wife (who has no software design or engineering experience) will be able to create the same extension in less than a day.
[1] The full codebase is some 500 lines. Fewer than 5 of these were entirely written by me. Perhaps 30-50 were started by me and completed by our intern (GitHub Copilot).
[2] Open Interpreter (released last week) now offers this workflow for writing blocks of code, but until these systems can “see”, I’ll need to do a bunch of the software testing in the browser myself. Multimodal LLMs are due before the end of this year.
Peter Singer on the non-natural origin of the axiom of impartial benevolence
According to Singer, to the extent that this intuition is not adaptive, we can think of it as “a truth of reason” rather than another value that emerges from competition and selection.
If we apply this to the dualism of practical reason, then what we have is, on the one hand, a response—the axiom of universal benevolence—a response that clearly would not be likely to have been selected by evolution, because to help unrelated strangers—even at some disadvantage to yourself, where there’s a greater benefit to the unrelated strangers—is not a trait that is likely to lead to your improved survival or the improved survival of your offspring. It’s rather going to benefit these unrelated strangers who are therefore more likely to survive and whose offspring are more likely to survive. So that doesn’t seem like it would have been selected for by evolution, which suggests that maybe it is a judgement of our reasoning capacities, in some way; we are seeing something through reason.
Now, if we compare that with the egoistic judgement that I have special reasons to prefer my own interests to those of strangers, it’s more plausible to think that that would have been selected by evolution. Because, after all, that does give you preference for yourself and your offspring, if you love your offspring and care for them.
So if we have these two conflicting judgements, then maybe we can choose which one by saying: just as in the case of adult sibling incest, we debunk the intuition by saying, “Well, that’s just something that evolved in our past and that doesn’t really give us reasons for thinking the same thing today,” maybe we can say that also about the intuition behind egoism, but not about the intuition behind universal benevolence, which therefore gives us a reason (not a conclusive or overriding reason) for thinking that it’s the axiom of universal benevolence that is the one that is most supported by reason.
Nope. The way we reason is a product of natural selection. You don’t get to pick some bits you like and say that these are universal truths.
AI will improve a lot, very soon
It is hard to feel “in my bones” just how much is now “baked in” and coming this year, next year, the year after. I am starting to write down concrete predictions, as part of the AI OODA loops I’m running this autumn.
Meantime: a few things have helped me “feel it” in the last week or so: v0.dev, Open Interpreter, and this note from Michael Nielsen:
As an aside on the short term — the next few years — I expect we’re going to see rapidly improving multi-modal foundation models which mix language, mathematics, images, video, sound, action in the world, as well as many specialized sources of data, things like genetic data about viruses and proteins, data from particle physics, sensor data from vehicles, from the oceans, and so on.
Such models will “know” a tremendous amount about many different aspects of the world, and will also have a raw substrate for abstract reasoning — things like language and mathematics; they will get at least some transfer between these domains, and will be far, far more powerful than systems like GPT-4.
This does not mean they will yet be true AGI or ASI! Other ideas will almost certainly be required; it’s possible those ideas are, however, already extant. No matter what, I expect such models will be increasingly powerful as aids to the discovery of powerful new technologies.
Regulation is likely to be a significant headwind, but a lot more “transformatively useful” stuff is landing soon regardless.
My software development workflow will speed up a lot, again, thanks to the tools that will drop within a year.
Peter Thiel on the Hegelian tension of AI
It’s deranged decentralisation vs totalitarian centralisation. We need to find a third way.