The not-for-profit sector in Australia is in crisis, as it invariably is. As in every year this century, donations are down, volunteering is down, and expenses are up. That analysis is a little too broad, though, because there are several sorts of not-for-profit - most generally, the small ones and the enormous ones. Half of our charities are entirely volunteer-run, while at the other end the big universities and public hospitals have billion-dollar budgets which are carefully kept in surplus. It doesn't really help our understanding to run these sorts together.

There are also overlaps between not-for-profits and obvious commerce. The Sanitarium breakfast foods company is owned by the Seventh-day Adventist Church. Ikea, of all things, is in theory owned by a charitable foundation, though that's more a matter of tax law than disinterested philanthropy. If you're a business that's dedicated to raising money for a charity, you count as charitable. In theory, that means the charity at the top shouldn't be particularly concerned with the market value of its subsidiary: all profits go to the good cause, but it's difficult to realise the underlying value.

Which brings us to the case of OpenAI, the company behind ChatGPT, which in the atmosphere of near-hysteria about AI has run right smack into the internal contradictions of doing evil, or at least business, that good may come. OpenAI is a not-for-profit, run by a not-for-profit board, dedicated to "building safe and beneficial artificial general intelligence for the benefit of humanity". Its CEO, Sam Altman, got together AI expertise that others valued at about AU$120 billion (about two-thirds of BHP, to indicate scale). Those others included Microsoft, which had tipped about AU$10 billion into the kitty.

At this point tension appears to have become evident between the two worlds of "not-for-profit" and "$120 billion". The board fired Sam Altman.
Its stated reasons weren't that clear, but looking at ChatGPT it's easy to see how one might disagree over details such as "safe" or "beneficial" or "for the benefit of humanity", and it's easy to see that someone like Mr Altman (or something like Microsoft), who would be up for a sizeable chunk of those billions, might exhibit a slight bias towards going for broke and letting humanity like it or lump it.

The OpenAI board was plainly trying to keep the organisation focused on its mission. What happened next is a useful illustration of how far that gets you when weighed against $120 billion in crisp notes.

It was pointed out to the board that, having spent some $10 billion on getting ChatGPT to its present (admittedly impressive) level, many more billions were going to be needed to get it to the stage where artificial "intelligence" was going to realise its full potential. Who was going to chip that in if they couldn't take it out?

More immediately, the brilliant and dedicated workforce behind the application made it clear that while it had nothing against humanity, charity began at home, and if they weren't allowed to dip a bucket in that money bin they'd take their brilliance elsewhere - to Microsoft, for example, which by an odd coincidence was in the market for an AI app of its own.

The OpenAI board was faced with a walkout that would leave it with neither the $120 billion nor a workable AI app. While that would certainly mean that the company wasn't contributing to our replacement by android overlords, it wouldn't be doing much else, either.

The board folded. It welcomed Sam Altman back, and made sweeping changes to board membership. While nothing's been finalised, it's hard to see that the board can do anything now other than turn to a more essentially financial model - though at the same time, it's not easy to see how it'll be able to do this. The constitution of OpenAI won't allow the board to think only of the money.
There'll always be the possibility in the background that a majority of the board will go to the latest Terminator remake, freak out, and decide to lower its AI model into a cauldron of molten steel, just to be on the safe side, and it's not easy to float a $120 billion company on that highly principled uncertainty.

One thing we can say, of course, is that if we are indeed to be ruled by inhuman advanced technologies concerned only with inflating their share value regardless of the harm caused - and after Microsoft, Amazon, Facebook, and Twitter we've had a bit of an indication of how that would look - they're not going to be Australian.

Combining commerce and the good of humanity is always tricky. We can only hope that Sam Altman is able to program our AI overlords with an unreasonable fondness for koalas.