Jascha’s blog

This blog is intended to be a place to share ideas and results that are too weird, incomplete, or off-topic to turn into an academic paper, but that I think may be important. Let me know what you think! Contact links to the left.
https://sohl-dickstein.github.io/ (RSS)
Neural network training makes beautiful fractals
12 Feb 2024 | original ↗

[^netdetails]: In more detail, the baseline neural network architecture, design, and training configuration is as follows: [^saturation]: The discerning reader may have noticed that training diverges when the output learning rate is made...

Brain dump on the diversity of AI risk
10 Sept 2023 | original ↗

The hot mess theory of AI misalignment: More intelligent agents behave less coherently
9 Mar 2023 | original ↗

[^katjagrace]: See Katja Grace's excellent [*Counterarguments to the basic AI x-risk case*](https://aiimpacts.org/counterarguments-to-the-basic-ai-x-risk-case/) for more discussion of the assumption of goal-direction, or coherence, in...

Too much efficiency makes everything worse: overfitting and the strong version of Goodhart’s law
6 Nov 2022 | original ↗

[Listing [greater-efficiency]: Some additional diverse things we are getting more efficient at. For most of these, initial improvements were broadly beneficial, but getting too good at them could cause profound negative consequences.]...
