I enjoyed reading this article in Wired the other day on the downsides of that apparent Silicon Valley cult of Effective Altruism. EA (or Alt-Eff, as I like to shorten it, for no particular reason) is on the surface an attractive proposition: a well-informed and clever organisation works out the technical metrics that clearly identify the most effective ways of supporting humanity around the globe, and focusses its spending on those.
The trouble is that the whole culture of rationality and programmability pervading Silicon Valley, combined with the lack of time individuals take (or can take) to analyse the situation for themselves, means that the metrics tend towards the overly technical, rational and clear-cut - and are thereby prone to overlooking the humanity behind the sheer numbers of humans they purport to aid.
It reminds me of the Aristotelian modes of knowledge, techne and phronesis, which relate respectively to crafting and making (techne) and to practical, inter-relational politics (phronesis). If you are by nature and training a technical person revelling in programming or financial analysis, you are at great risk of missing the interpersonal, or even the overall human, aspect of your undertaking.
Tipping already barely stable relationships amongst societies and governments, providing targets for capture by the powerful and unscrupulous, misuse, and even environmental pollution are all possible dangers that Alt-Eff doesn't appear to have addressed in the way that traditional on-the-ground aid agencies try to.
Better than doing nothing? I'm not even so sure of that - perhaps it would be better for the hugely rich tech bros (and their finance cousins) to actually pay fair taxes to their own governments, funding inter-governmental overseas aid instead… Aid and charity are tricky, and that article goes to great lengths to show why: worth a read!