AI will improve a lot, very soon

It is hard to feel “in my bones” just how much is now “baked in” and coming this year, next year, the year after. I am starting to write down concrete predictions, as part of the AI OODA loops I’m running this autumn.

Meantime: a few things have helped me “feel it” in the last week or so: v0.dev, Open Interpreter, and this note from Michael Nielsen:

As an aside on the short term — the next few years — I expect we’re going to see rapidly improving multi-modal foundation models which mix language, mathematics, images, video, sound, action in the world, as well as many specialized sources of data, things like genetic data about viruses and proteins, data from particle physics, sensor data from vehicles, from the oceans, and so on.

Such models will “know” a tremendous amount about many different aspects of the world, and will also have a raw substrate for abstract reasoning — things like language and mathematics; they will get at least some transfer between these domains, and will be far, far more powerful than systems like GPT-4.

This does not mean they will yet be true AGI or ASI! Other ideas will almost certainly be required; it’s possible those ideas are, however, already extant. No matter what, I expect such models will be increasingly powerful as aids to the discovery of powerful new technologies.

Regulation is likely to be a significant headwind, but a lot more “transformatively useful” stuff is landing soon regardless.

My software development workflow will speed up a lot, again, with the tools that will drop within a year.

writing ai michael nielsen