Karnofsky, Has violence declined, when large-scale atrocities are systematically included?
Winners of the PROSE Awards look fascinating.
Five big myths about techies and philanthropy.
Debate on effective altruism at Boston Review.
The /r/AskHistorians master book list.
How Near-Miss Events Amplify or Attenuate Risky Decision Making.
How do types affect (programming) productivity and correctness? A review of the empirical evidence.
What is your software project’s truck factor? How does it compare to those of popular GitHub applications?
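(For the curious, here is a minimal sketch of how such an estimate can be computed. This is a simplified heuristic of my own, not the linked study's exact algorithm: credit each file to the author with the most commits touching it, then count how many of those top "owners" it takes to cover more than half of the repository's files.)

```python
# Rough truck-factor estimate for a local git repository.
# Caveat: this counts every path that ever appeared in the history,
# including deleted files, so treat the result as a ballpark figure.
import subprocess
from collections import Counter, defaultdict

def file_owners(repo="."):
    """Map each file to the author with the most commits touching it."""
    log = subprocess.run(
        ["git", "-C", repo, "log", "--name-only", "--pretty=format:@%an"],
        capture_output=True, text=True, check=True,
    ).stdout
    counts = defaultdict(Counter)  # file -> author -> commits touching it
    author = None
    for line in log.splitlines():
        if line.startswith("@"):      # our format marker: a commit's author
            author = line[1:]
        elif line.strip():            # a file path changed by that commit
            counts[line][author] += 1
    return {f: c.most_common(1)[0][0] for f, c in counts.items()}

def truck_factor(repo="."):
    """Smallest number of top owners whose files cover >50% of the repo."""
    owners = file_owners(repo)
    files_per_author = Counter(owners.values())
    covered, tf = 0, 0
    for _, n_files in files_per_author.most_common():
        covered += n_files
        tf += 1
        if covered > len(owners) / 2:
            break
    return tf

if __name__ == "__main__":
    print("estimated truck factor:", truck_factor("."))
```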
Hacker can send fatal doses to hospital drug pumps. Because by default, everything you connect to the internet is hackable.
Lessons from the crypto wars of the 1990s.
AI stuff
Jacob Steinhardt: Long-Term and Short-Term Challenges to Ensuring the Safety of AI Systems.
New MIRI-relevant paper from Hutter’s lab: Sequential Extensions of Causal and Evidential Decision Theory.
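(Background, not from the paper itself: the one-shot distinction the title refers to is that evidential decision theory ranks actions by conditioning on them, while causal decision theory ranks them by intervening, roughly

$$V_{\mathrm{EDT}}(a) = \sum_{o} P(o \mid a)\, U(o), \qquad V_{\mathrm{CDT}}(a) = \sum_{o} P(o \mid \mathrm{do}(a))\, U(o).$$

The paper extends these one-shot definitions to sequential decision-making.)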
An introduction to autonomy in weapons systems.
The winners of FLI’s grants competition for research on robust and beneficial AI have been announced.
Joshua Greene (Harvard) is seeking students who want to study AGI with him (presumably, AGI safety/values in particular, given Greene’s presence at FLI’s Puerto Rico conference).
New FLI open letter, this time on autonomous weapons.
New FLI FAQ on the AI open letter and the future of AI.
DeepMind runs its Atari player on a massively distributed computing architecture.