Why OpenAI's Structure Must Evolve to Advance Our Mission (openai.com)
paxys 18 hours ago [-]
I don't understand why they are spending so much time and effort trying to put a positive spin on this whole for-profit thing. No one is buying it. We all know what's going on. Just say "we want to make lots of money" and move on with your lives.
chis 16 hours ago [-]
I think often company spin like this is more targeted towards internal employees than the outside world. Employees on the inside are always going to have a decent percentage of “true believers” who have cognitive dissonance if they don’t believe they’re making the world better. And so companies need to provide a narrative to keep that type of employee happy.
ninth_ant 16 hours ago [-]
I think this underestimates the degree to which the people at these companies legitimately believe what they’re saying. I’ve worked at one of these companies and absolutely would have fallen into your category of true believer at the time.

People of all stripes are extremely willing to embrace ideas that justify their own personal benefit. A rich person might be more likely to believe in trickle-down economics because ultimately it enriches them — but that doesn’t mean that it’s necessarily a false belief. An American might sincerely believe that gun proliferation is safe, because the information they process is filtered by their biases as it’s important to their cultural identity.

So when your stock options will pay out big from the company’s success, or even just if your paycheque depends on it — you’re more likely to process information and ideas through the lens of your bias. It’s not just being a gullible true believer tricked by the company’s elite — you’re also just willing to interpret it the same way in no small part because it benefits you.

TrainedMonkey 14 hours ago [-]
The modern hiring process, especially culture-fit screening, is designed to ensure that the fraction of true believers inside the company is meaningfully higher than outside.
remus 14 hours ago [-]
I think it is simpler than that: people generally tend to work for companies whose products they think are interesting and useful. It's much easier to go into work each day when you think you're spending your time doing something useful.
int_19h 12 hours ago [-]
It works both ways. Sure, when you're looking for more people to join your cult, it helps to get those who are already drawn to you. But you also need to screen out those who would become disappointed quickly, and brainwash the ones that join to ensure continued devotion.
sroussey 12 hours ago [-]
That’s also a good reason to underpay, historically.
dzikimarian 12 hours ago [-]
Maybe I'm naive, but I'll gladly take small compensation hit in exchange for not hating my job.
wongarsu 11 hours ago [-]
In a way, liking the job is part of the compensation package. That's why places like game development and SpaceX can pay little for bad working conditions and still have enough applicants.

It's only really an issue if you get tricked by a facade or indoctrinated into a cult. For companies that are honest the dynamic is perfectly fine

sroussey 11 hours ago [-]
Or a large hit or even work for free for a prestigious job. Magazines and talent agencies were like this.
timeon 11 hours ago [-]
There is huge gradient between not hating the job and believing in fake mission.
BolexNOLA 12 hours ago [-]
There’s also the added layer that if you admit the place you’re working at is doing something wrong/immoral, not only do you suddenly feel a conscience-driven pressure to do something about it (leave even) but also it opens the door that maybe you had been contributing to something “evil” this whole time and either didn’t catch it or ignored it. Nobody wants to believe they were doing something wrong basically every day.
ska 16 hours ago [-]
> It’s not just being a gullible true believer ...

“It is difficult to get a man to understand something, when his salary depends upon his not understanding it!” - Upton Sinclair (1930's ?)

dgfitz 15 hours ago [-]
[flagged]
dragonwriter 14 hours ago [-]
> I think often company spin like this is more targeted towards internal employees than the outside world.

It probably is aimed more at employees than the general public, but it is even more targeted at the growing number of lawsuits against the conversion, since the charity nonprofit’s board is required to act in the interest of its charitable purpose even in a decision like this.

It is directed at defeating the idea, expressed quite effectively in the opening of Musk’s suit, that “Never before has a corporation gone from tax-exempt charity to a $157 billion for-profit, market-paralyzing gorgon—and in just eight years. Never before has it happened, because doing so violates almost every principle of law governing economic activity. It requires lying to donors, lying to members, lying to markets, lying to regulators, and lying to the public.”

rvba 12 hours ago [-]
There are tons of examples of non profit that are run for profit (mostly profit / career advancement of those in charge and their families and friends).

Firefox spends a ton on pet projects to boost careers. Now the core product has lost most of its market share and is not what people want.

Wikipedia collects a ton of money and wastes it on everything but Wikipedia (mostly salaries and pet projects).

There are those charities where 95% of the collected money is spent on their own costs and only 5% reaches those in need / the causes they are supposed to address.

Control over non-profits is a joke. The people in charge answer to nobody.

droopyEyelids 12 hours ago [-]
None of your examples are legally for-profit, though.
jokethrowaway 9 hours ago [-]
We should abolish taxes, so real companies won't need to hide behind the non-profit status and real non-profits could shine.
notatoad 15 hours ago [-]
Even the employees, I think, would probably be fine with being told “we just want to make shitloads of money”.

I usually feel like these statements are more about the board members and C-suite trying to fool themselves.

sangnoir 13 hours ago [-]
Those employees revolted to bring Sam back after he was dismissed by the board. They know what's up.
Spivak 15 hours ago [-]
Yes but when that statement doesn't come with "and we're doubling all your salaries" then as an employee it doesn't really matter.

The double edge of most companies insulating employees from the actual business is that beyond the maintenance cost employees don't care that the business is doing well because, well, it doesn't affect them. But what does affect them is abandoning the company's values that made them sign on in the first place.

paxys 15 hours ago [-]
Employees have equity. They all directly benefit from the company being worth more.
zitterbewegung 15 hours ago [-]
If your employees actually believe this spin, that's a big stretch, especially given the behavior of the CEO and the board being dissolved recently. I was an employee at a large company, and I could see when the CEO was actually taking a stand that meant something versus some action meant to mislead employees.
zombiwoof 15 hours ago [-]
I agree. It’s for internal employees who are probably being heavily recruited for real RSU money from Meta and Google.

Having spent the better part of my life in Silicon Valley, my view is that gone are the days of mission. Everybody just wants RSUs.

You could tell employees they will build software to track and lock up immigrants, deplete the world of natural resources, and cause harm to other countries, and if their RSUs go up, 99% will be on board, especially if their H1B is renewed :)

thaumasiotes 15 hours ago [-]
> Employees on the inside are always going to have a decent percentage of “true believers” who have cognitive dissonance if they don’t believe they’re making the world better.

No, this is an artifact of insisting that people pretend to be true believers during interviews.

After I was fired from a position doing bug bounty triage on HackerOne, I applied to be a bug triager on HackerOne. And they rejected me, stating that my description of why I applied, "this is identical to the job I was already doing", didn't make them feel that I saw their company as a calling rather than a place of employment.

notatoad 11 hours ago [-]
wait, what? you applied for a job you had just gotten fired from?
thaumasiotes 10 hours ago [-]
No, I applied for a job involving exactly the same duties as a job I had just been fired from. I was not originally (or ever) employed by HackerOne.
rvz 15 hours ago [-]
It's quite frankly more than that. They take us all for idiots, expecting us to believe that so-called "AGI" is going to make the world a better place, whilst investors and employees are laughing all the way to the bank with every new fundraising round.

First being a non-profit, then taking Microsoft's money, then ditching non-profit status for a for-profit organization, and now changing the definitions of "Open" and "AGI" to raise more money.

It is a massive scam, with a new level of newspeak.

sourcepluck 14 hours ago [-]
Yes, yes, yes and yes.

I hadn't explicitly said to myself that, even for a modern company, OpenAI has a particularly fond relationship with this Orwellian use of language to mean its opposite. I wonder if we could go so far as to say it's a defining feature of the company (and our age).

dgfitz 15 hours ago [-]
I’m not sure why this was down-modded, it is quite accurate.
voidfunc 16 hours ago [-]
This. Stuff like mission statements and that kind of crap is for the type of employee who needs to delude themselves that they're not just part of a profit-making exercise or manufacturing weapons to suppress minorities / kill brown people. Every company has one.
DSingularity 16 hours ago [-]
I feel that Gaza has shown us that there aren’t as many of these types of employees as we think.

Most people don’t care.

OpenAI is doing this show because if they don't, they are more vulnerable to lawsuits. They need to manufacture a narrative that without this exact structure they cannot fulfill their original mission.

zombiwoof 15 hours ago [-]
Agreed. The question every employee cares about is “will this make my RSU go up”
benterix 15 hours ago [-]
Really? I mean, I don't know a single person in real life who believes all this corporate BS. We know there must be a mission because this is the current business culture taught in MBA courses and everybody accepted it as a matter of course, but I'm not even sure the CEOs themselves believe there is even one employee who is fascinated by the mission - say, a FedEx driver who admires the slogan "FedEx Corporation will produce superior financial returns for its shareowners by providing high value-added logistics, transportation and related business services".
benterix 15 hours ago [-]
Come on, people are not that stupid.
jefftk 18 hours ago [-]
"We are turning our non profit into a for profit because we want to make money" isn't legal.

To make this transition in a way that maximizes how much money they can make while minimizing what they lose to lawsuits they need to explain what they're doing in a positive way.

mapt 16 hours ago [-]
I don't think it should really matter how they explain it, legally.
franga2000 17 hours ago [-]
Is it illegal? If it is, no amount of explaining will make it legal.
mikeyouse 16 hours ago [-]
That's not really true and is at the heart of much of the 'confusion' about e.g. tax evasion vs. tax avoidance. You can do illegal things if you don't get prosecuted for them and a lot of this type of legal wrangling is to give your lawyers and political allies enough grey area to grab onto to shout about selective prosecution when you're called out for it.
Drew_ 14 hours ago [-]
I don't see how that's relevant. In what case is the difference between tax evasion and avoidance just the motive/explanation? I'm pretty sure the difference is purely technical.

Moreover, I don't think a lack of prosecution/enforcement makes something legal. At least, I don't think that defense would hold up very well in court.

mikeyouse 12 hours ago [-]
> I'm pretty sure the difference is purely technical.

It's really not - there is a ton of tax law that relies on e.g. the fair market value of hard-to-price assets, and if all else fails and a penalty is due, there's an entire section of the CFR on how circumstances surrounding the underpayment can reduce or eliminate the liability.

https://www.law.cornell.edu/cfr/text/26/1.6664-4

If you've only ever filed a personal tax return, you're dramatically under-appreciating how complicated business taxes are and how much grey area there really is. Did you know you can pay your 10-year-old to work for you as a means to avoid taxes? Try looking up the dollar amount where avoidance turns to evasion... there isn't one. The amount paid just has to be "reasonable and justifiable" and the work they perform has to be "work necessary to the business".
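To make that concrete, here is a rough back-of-the-envelope sketch (the standard-deduction figure and marginal rate below are my assumed example numbers, not anything from this thread): wages paid to the child are a deductible business expense, and the child owes no income tax on earned income up to the standard deduction, so roughly

\[
\underbrace{\$14{,}600}_{\text{assumed 2024 std. deduction}} \times \underbrace{37\%}_{\text{assumed parent's marginal rate}} \approx \$5{,}400 \text{ of income tax avoided per year}
\]

And whether that is avoidance or evasion turns entirely on the "reasonable and justifiable" test, not on any dollar threshold.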

jefftk 14 hours ago [-]
There are solidly legal and solidly illegal ways to do this, and a range of options in between. My reading of what they are doing is that it is pretty far toward the legal end of this spectrum, and the key question will be whether the non-profit is appropriately compensated for its stake in the for-profit.

Explaining reduces the chance that they're challenged by the IRS or attorney general, since that is a political process.

moron4hire 17 hours ago [-]
When the consequence is paying a fine/settlement, it means the law is only for poor people.
Seattle3503 15 hours ago [-]
> "We are turning our non profit into a for profit because we want to make money" isn't legal.

Source? The NFL did this because they were already making a lot of money. As I understand it, the tax laws practically required the change.

foobiekr 11 hours ago [-]
OpenAI isn't making money. They are like a giant furnace for investment dollars. Even putting aside that the NFL and OpenAI aren't the same kind of entity, there is also no taxes issue.
Seattle3503 8 hours ago [-]
That doesn't explain how the conversion is illegal.
jefftk 14 hours ago [-]
The NFL was a 501c6 trade association, not a 501c3 though?
Seattle3503 14 hours ago [-]
Ah interesting. How are those different?
jefftk 14 hours ago [-]
The key thing is that assets of a 501c3 are irrevocably dedicated to charitable purposes, which is not the case for a 501c6. If the NFL had somehow started as a 501c3 before realizing that was a bad structure for their work, the standard approach would be for the 501c3 to sell their assets at fair market value to new for-profit. Then either those proceeds could be donated to other 501c3s, or the old NFL could continue as a foundation, applying those assets charitably.

(Not a lawyer, just someone interested in non-profits. I'm on the board of two, but that doesn't make me an expert in regulation!)

Seattle3503 8 hours ago [-]
Interesting, TIL. That is the basis for why the conversion is illegal?
jefftk 7 hours ago [-]
Sorry, which conversion? If you mean OpenAI, it's not clear that what they are currently planning is illegal. If they do it correctly, the non-profit is fully compensated for their stake in the for-profit.
swalberg 13 hours ago [-]
The NFL passes net income back to the team owners. The taxation is generally the owner's problem.
nicce 17 hours ago [-]
If everyone sees through it, does anything they say matter when the direct evidence of their actions proves otherwise? Explaining just wastes everyone's time.
jefftk 14 hours ago [-]
They are explaining how they see what they are doing as compliant with the law around 501c3s, which it arguably is. And they are putting a positive spin on it to make it less likely to be challenged, since the main ways this could be challenged involve government agencies and not suits from individuals.
vouaobrasil 9 hours ago [-]
No matter how obvious a facade is though, there is probably a sizable group that doesn't see through it.
Kinrany 17 hours ago [-]
This is pure speculation, but being a nonprofit, there's still a risk of getting sued by the public on the grounds of not keeping the promise that their work is a public good.
rvnx 17 hours ago [-]
Well it has several tax advantages, and nobody really knows how the GPUs are actually used.

Let's imagine some of these AI companies are actually mining cryptos for the benefit of their owners or their engineers. Who would know?

ilyagr 7 hours ago [-]
They are trying to convince themselves that that is not the reason, because they'd like to think of themselves as above such trivial concerns as greed, hunger for power, and fear of somebody else beating them to it and having power over them. So they want to call their feelings something else, something noble.

And then, as others point out, they are also interested in other people believing this, whether for employees or for lawsuits. But I think that would not justify saying these same things again and again, when by now they have nothing new to say and people have mostly made up their minds.

hackitup7 14 hours ago [-]
I couldn't care less about their structure but the level of effort to put a positive spin on it makes the whole thing look more sketchy rather than less.
ksec 14 hours ago [-]
>We all know what's going on.

I am not entirely sure about this. Before 2012, maybe. Somewhere along the line from 2012 to 2022, it was all about doing something Good for the world, and "we want to make lots of money" wasn't part of that equation. Now the pendulum may be swinging back, but it has only just started.

A nice point of reference may be Sequoia's profile of Sam Bankman-Fried.

raincole 16 hours ago [-]
> they are spending so much time and effort trying

Do they? It reads like a very average PR piece that an average PR person can write.

kevinventullo 13 hours ago [-]
In fact I think it’d be a bit sad if it wasn’t largely written by ChatGPT.
atleastoptimal 12 hours ago [-]
I’m not sure it’s always the best move for an organization to cater exclusively to their most cynical critics.
DidYaWipe 15 hours ago [-]
And take the fraudulent "open" out of their name. That douchebaggery sets a precedent that will no doubt run rampant.
alexalx666 15 hours ago [-]
Exactly. This would be even more relatable to most people; we are not living in Star Trek, where you don't have to make money to survive.
sungho_ 12 hours ago [-]
Sam Altman and OpenAI aim to become the gods of a new world. Compared to that goal, it makes sense that money feels trivial to them.
moralestapia 8 hours ago [-]
It's not for you (or me, or us), it's for the upcoming lawsuit.
dylan604 15 hours ago [-]
> We all know what's going on

I've been looking for my broadbrush. I forgot I loaned it out.

It seems we've yet again forgotten that HN is an echo chamber. Just because the audience here "knows" something does not mean the rest of the vastly greater number of people do as well. In fact, so many people I've talked with don't have a clue about who makes/owns/controls ChatGPT, nor would they recognize Sam's name or even OpenAI's.

The PR campaign being waged is not meant for this audience. It is meant for everyone else that can be influenced.

j45 15 hours ago [-]
My understanding was that the non-profit would own a for-profit, but this seems to be going the other way, with a for-profit owning a non-profit?
singron 15 hours ago [-]
You can't own a non-profit. It doesn't have shares or shareholders. The article says they want the non-profit to own shares of the PBC.
TheRealNGenius 12 hours ago [-]
[dead]
fullshark 14 hours ago [-]
No one breaks kayfabe
jrmg 18 hours ago [-]
> Our current structure does not allow the Board to directly consider the interests of those who would finance the mission and does not enable the non-profit to easily do more than control the for-profit.

I kind of thought that was the point of the current structure.

optimalsolver 18 hours ago [-]
They're referring to the reconstructed board that exists solely to rubber-stamp every Altman decision as officially great for humanity, so why would they care what the board's considerations are? They'll go along with literally anything.
rmbyrro 15 hours ago [-]
humanity benefactor CEOs can get fired out of the blue
timeon 11 hours ago [-]
I'm a bit tired; is this sarcasm?
jhrmnn 18 hours ago [-]
Loyalty can go away.
jasode 17 hours ago [-]
>I kind of thought that was the point of the current structure.

Yes, it is, but if we stopped the analysis right there, we could take pleasure in the fact that Sam Altman checkmated himself in his own blog post: "Dude, the non-profit is _supposed_ to control the for-profit company, because that's how you formed the companies in the first place! Duh!!!"

To go beyond that analysis, we have to at least entertain (not "agree" but just _entertain_ for analysis) ... what Sam is saying:

- the original non-profit and profit structure was a mistake that was based on what they thought they knew at the time. (They thought they could be a "research" firm.)

- having a non-profit control the for-profit becomes a moot point if the for-profit company becomes irrelevant in the marketplace.

Here is a key paragraph:

>The hundreds of billions of dollars that major companies are now investing into AI development show what it will really take for OpenAI to continue pursuing the mission. We once again need to raise more capital than we’d imagined. Investors want to back us but, at this scale of capital, need conventional equity and less structural bespokeness.

In other words, let's suppose the old Friendster social network was structured as a "non-profit-board-controlling-a-for-profit-Friendster" like OpenAI. The ideal of the "non-profit being in control" is a moot point when a competitor like Facebook makes non-profit-Friendster irrelevant.

Or put another way, pick any hard problem today (self-driving, energy discovery, etc.) that requires billions of investment, and hypothetically create a new company to compete in that space. Would creating that company as a non-profit-controlling-the-for-profit confer any market advantage, or would it be a handicap? It looks like it's a handicap.

OpenAI is finding it hard to compete for investors' billions in the face of Tesla's billions spent on a 50,000-GPU supercluster, Google's billions spent on Gemini, Anthropic's billions spent on Claude, Alibaba's billions, etc. OpenAI doesn't have an unassailable lead with ChatGPT.

The issue is that Sam Altman & OpenAI look bad because he and the investors want to use the existing "OpenAI" brand name in a restructured, simpler for-profit company. But to outsiders, it looks like a scam or bait-and-switch. Maybe they could have done an alternative creative procedure, such as spinning up a totally new for-profit company called ClosedAI to take OpenAI's employees and then pay a perpetual IP license to OpenAI. That way, ClosedAI is free of encumbrances from OpenAI's messy structure. But then again, Elon Musk would still probably file another lawsuit calling those license transfers "bad faith business dealings".

maeil 15 hours ago [-]
> But to outsiders, it looks like a scam or bait-and-switch.

It doesn't look like one - it is one.

> We’re excited to welcome the following new donors to OpenAI: Jed McCaleb, Gabe Newell, Michael Seibel, Jaan Tallinn, and Ashton Eaton and Brianne Theisen-Eaton. Reid Hoffman is significantly increasing his contribution. Pieter Abbeel (having completed his sabbatical with us), Julia Galef, and Maran Nelson are becoming advisors to OpenAI.

[1] https://openai.com/index/openai-supporters/

jasode 15 hours ago [-]
> It doesn't look like one - it is one.
> We’re excited to welcome the following new donors to OpenAI: ...

The story is that OpenAI worked out some equity conversion of the for-profit co for the donors to the non-profit. Elon Musk was the notable holdout. Elon was offered some unknown percentage for his ~$40 million donation but he refused.

Seems like the donors are OK with OpenAI's pivot to for-profit... except for Elon. So no bait-and-switch as seen from the "insiders" perspective.

If you have information that contradicts that, please add to the thread.

int_19h 12 hours ago [-]
It's not just about those whose money is at stake. The whole point of having non-profits as an option for corporations in the first place is to encourage things that broadly benefit society. Effectively turning a non-profit into a for-profit company is directly counter to that, and it means that society as a whole was defrauded when non-profit status was originally claimed with its associated perks.
maeil 13 hours ago [-]
That does not make it less of a scam.

These people "donated to a non-profit". They did not "invest in a for-profit".

If the Red Cross suddenly turned into a for-profit and then said "well, we'll give our donors of the past few years equity in our new company", this would not make it any less of a scam.

> Seems like the donors are ok with OpenAI pivot to for-profit

If you have information that shows this, feel free to add it. "Not suing" is not the same as that. Very few people sue even when they feel they're scammed.

jasode 12 hours ago [-]
>These people "donated to a non-profit". They did not "invest in a for-profit".

Sure, nobody wants to be tricked into donating to a charity and then have their money disappear into a for-profit company.

Based on the interviews I saw with some donors (Reid Hoffman, etc.), there's more nuance to it than that. The donors also wanted an effective non-profit entity. The TL;DR is that they donated to a non-profit under 2015 assumptions about AGI research costs that turned out to be wrong and massively underestimated.

- 2015... the non-profit OpenAI, in its original conception as a "charity research organization", was flawed from the beginning, because they realized they couldn't attract AI talent at the same level as Google/Facebook/etc., since those competitors offered higher salaries and lucrative stock options. Then they realized the initial ~$100 million in donations was also not enough to pay for very expensive hardware like GPUs and datacenters. It's hard for researchers to make discoveries in AGI if there's no cutting-edge hardware for them to work on. A non-profit tech charity getting billions in donations was not realistic. These money problems compound and lead to...

- 2019... create the for-profit OpenAI Global LLC as a vehicle for the Microsoft investment of $1 billion and also create stock incentives for recruiting employees. This helps solve the talent acquisition and pay-for-expensive-hardware problems. This for-profit entity is capped. (https://openai.com/our-structure/)

(Side note: other non-profit entities with for-profit subsidiaries to supplement funding include Goodwill, Girl Scouts of America, Salvation Army, Mozilla, etc.)

We can't know all the back-room dealings, but it seemed like the donors were on board with the 2019 for-profit entity. The donors understood the original non-profit was not viable for AGI work because they underestimated the costs. The publicly revealed emails show that as early as 2017, Elon was also pushing to switch OpenAI to a for-profit company.[1] But the issue was that Elon wanted to run it and Sam disagreed. Elon left OpenAI, and he now has competing AI businesses with xAI Grok and Tesla AI, which gives his lawsuit some conflicts of interest. I don't know which side to believe, but that's the soap opera drama.

Now in 2024, the 2019 for-profit OpenAI Global LLC has shown structural flaws, because the next set of investors with billions don't want to put money into that LLC. Instead, the next investors need a publicly incorporated company with the ability to IPO as the vehicle. That's why Sam wants to create another for-profit OpenAI Inc without the cap. We should be skeptical, but he argues that a successful OpenAI will funnel more money back to the non-profit OpenAI than if it were a standalone non-profit that didn't take billions in investment.

[1] https://www.google.com/search?q=elon+musk+emails+revealed+op...

bradleyjg 15 hours ago [-]
> the original non-profit and profit structure was a mistake that was based on what they thought they knew at the time. (They thought they could be a "research" firm.)

There’s no slavery here. If Sam decided it was a mistake to dedicate his time to a non-profit, he’s perfectly free to quit and start an entirely new organization that comports with his current vision. That would be the honorable thing to do.

aeternum 15 hours ago [-]
>That would be the honorable thing to do.

It's also notable that this is in fact what the (by now) vast majority of other OpenAI founders chose to do.

jasode 15 hours ago [-]
>If Sam decided it was a mistake to dedicate his time to a non-profit, he’s perfectly free to quit [...]

To be clear, I don't think Sam can do anything and come out looking "good" from a public relations standpoint.

That said, Sam probably thinks he's justified because he was one of the original founders and co-chair of OpenAI -- so he feels he should have a say in pivoting it to something else. He said he got all the other donors on board ... except for Elon.

That leaves us with the messy situation today... Elon is the one filing the lawsuit and Sam is writing a PR blog post that's received as corporate doublespeak.

bradleyjg 14 hours ago [-]
> he was one of the original founders and co-chair of OpenAI -- so he feels he should have a say in pivoting it to something else

No. Non-profit is a deal between those founders and society. That he was an original founder is irrelevant. I don’t care about Elon, it’s the pivoting that’s inherently dishonorable.

jasode 14 hours ago [-]
> Non-profit is a deal between those founders and society.

Yes, I get that, but did OpenAI ever take any public donations from society? I don't think they did. I thought it was only funded by wealthy private donors.

> it’s the pivoting that’s inherently dishonorable.

Would creating a new company (i.e. ClosedAI) that recruits OpenAI's employees and buys the intellectual property such that it leaves a "shell" of OpenAI be acceptable?

That's basically the roundabout thing Sam is trying to do now with a re-incorporated for-profit PBC that's not beholden to the 2015 non-profit organization ... except he's also trying to keep the strong branding of the existing "OpenAI" name instead of "ClosedAI".

The existing laws allow for non-profit 501c3 organizations to "convert" (scare quotes) to for-profit status by re-incorporating to a (new) for-profit company. That seems to be Sam's legal roadmap.

EDIT REPLY: > They received benefits by dint of their status. If there were no such benefits...

The main benefit is tax exemption, but OpenAI never had profits to be taxed. Also, to clarify, there's already a for-profit OpenAI Global LLC. That's the subsidiary company Microsoft invested in. It has the convoluted "capped profit" structure. Sam says he can't attract enough investors to that for-profit entity. Therefore, he wants to create another for-profit OpenAI company without the convoluted ("bespoke", as he put it) self-imposed rules, to be more attractive to new investors.

The two entities, non-profit and for-profit, are like the Mozilla Foundation + Mozilla Corporation.

[] https://www.google.com/search?q=conversion+of+501c3+to+for-p...

bradleyjg 14 hours ago [-]
> Yes, I get that, but did OpenAI ever take any public donations from society? I don't think they did. I thought it was only funded by wealthy private donors.

They received benefits by dint of their status. If there were no such benefits they wouldn’t have incorporated that way.

> In any case, would creating a new company (i.e. ClosedAI) that recruits OpenAI's employees and buys the intellectual property such that it leaves a "shell" of OpenAI be acceptable?

There’s no problem with recruiting employees. The intellectual property purchase is problematic. If it’s for sale, it should be for sale to anyone and no one connected to a bidder should be involved in evaluating offers.

> The existing laws allow for non-profit 501c3 organizations to "convert" (scare quotes) to for-profit status by re-incorporating to a (new) for-profit company. That seems to be Sam's legal roadmap.

Legal and honorable are not synonyms.

12 hours ago [-]
jay_kyburz 12 hours ago [-]
All I care about (and I guess I don't care that much, because I'm not a US citizen) is: has this move allowed them, or their contributors, to pay less tax? That is their only obligation to the public.

Was the non profit a way to just avoid tax until the time came to start making money?

diogofranco 11 hours ago [-]
"Was the non profit a way to just avoid tax until the time came to start making money?"

You'd want to do it the other way around

Hasu 10 hours ago [-]
> In other words, let's suppose the old Friendster social network was structured as a "non-profit-board-controlling-a-for-profit-Friendster" like OpenAI. The ideal of the "non-profit being in control" is a moot point when a competitor like Facebook makes non-profit-Friendster irrelevant.

This feels like it's missing a really, really important point, which is that in this analogy, the mission of the non-profit would be something like "make social media available to all of humanity without advertising or negative externalities", and the for-profit plans to do advertising to compete with Facebook.

The for-profit's only plan for making money goes directly against the goals of the nonprofit. That's the problem. Who cares if it's competitive if the point of the competition is to destroy the things the non-profit stands for?

parpfish 16 hours ago [-]
I could see an argument, for your example, that putting the non-profit in control of a social media company could help the long-term financial success of the platform, because you're not chasing short-term revenue that annoys users (more intrusive ads, hacking engagement with a stream of shallow clickbait, etc.). So it'd be a question of whether you could survive long enough to outlast competitors getting a big short-term boost.

I’m not sure what the equivalent would be for LLM products.

Havoc 18 hours ago [-]
The whole thing is a paper thin farce.

A strong principled stance until the valuations got big (helped in no small measure by the principled stance)... and then backtracked when everyone saw the riches there for the taking, with a little, let's call it, reframing.

maeil 15 hours ago [-]
> Strong principled stance

There never was one. Not with Sam Altman. It was a play put on to get $100+ million in donations. This was always the goal, from day 0. This is trivially obvious considering the person Sam Altman is.

bilbo0s 15 hours ago [-]
This.

Anyone who thought Sam Altman wasn't in it for the money from the start was being naive. In the extreme. Not only Mr. Altman, but most of the people giving the larger donations were hoping for a hit as well. Why is that so incredible to people? How else would you get that kind of money to fund a ludicrously speculative research-based endeavor? You don't even know if it's possible before the research. What else could they have done?

justinbaker84 12 hours ago [-]
I think that is the case with Sam, but not Ilya and probably not with some of the other founders.
maeil 3 hours ago [-]
Agreed, hence why he scammed people into believing he was in it for the "good of humanity": 1. get donors; 2. get bright minds in the space (e.g. Ilya) with such ideals to work for him.
fnqi8ckfek 11 hours ago [-]
[dead]
cheald 11 hours ago [-]
This is specifically why I caution people against trusting OpenAI's "we won't train on your data" checkbox. They are specifically financially incentivized to do so, and have a demonstrated history of saying the nice, comforting thing and then doing the thing that benefits them instead.
taneq 17 hours ago [-]
Everyone has a price, is this meant to be shocking? I mean, I’m disappointed… but I’d have been far more surprised if they’d stood fast with the philanthropic mission once world-changing money was on the table.
tmpz22 17 hours ago [-]
If you sell out humanity for a Gulfstream G5 private jet when you already have a Gulfstream G4, it's not deserving of empathy.

“Somebody please think of the investors they only have 500 years of generational wealth”

dralley 17 hours ago [-]
maeil 15 hours ago [-]
> Everyone has a price, is this meant to be shocking?

Left a very-well paying job over conscience reasons. TC was ~3x higher than I could get elsewhere without immigrating, probably higher than anywhere relative to CoL. I wasn't even doing defense stuff, crypto scams or anything clearly questionable like that, just clients were mostly in fossil-fuel adjacent sectors. Come from a lower-class background and haven't built up sizable assets at all, will likely need to work until retirement age.

AMA.

If anyone similar reads this, would love to get in touch as I'm sure we'll have a lot in common. In case someone here knows me, hi!

bn-l 13 hours ago [-]
Isn’t it crazy how everything that could maybe dig us out of this hole is being held back because a few extremely rich people have a small chance of losing a tiny bit of money?
scarface_74 15 hours ago [-]
There is a huge difference between being “principled” about 3x vs 1x when you are going from $200K to $600K and when you are going from $50K to $150K.

Once you have “enough”, your personal marginal utility for money changes. Would you go from what you are making now to being an at home nurse taking care of special needs kids for $16/hour?

maeil 13 hours ago [-]
> There is a huge difference between being “principled” about 3x vs 1x when you are going from $200K to $600K and when you are going from $50K to $150K.

If it were anything remotely like the former, I would have built up sizable assets (which I didn't), especially, as I mentioned, relative to CoL :)

scarface_74 13 hours ago [-]
Did you give up so much potential income that you couldn’t meet your short term and long term wants and needs?

What did you give up in your lifestyle that you personally valued to choose a lower paying job?

I’m 50, the (step)kids are grown, we downsized from the big house in the burbs to a condo in a state with no income tax, etc., and while I could make “FAANG” total compensation (been there, done that), I much prefer a more laid-back job, remote work, freedom to travel, etc. I also have always hated large companies.

I would have made different choices a decade ago if the opportunities had arisen.

I’m well aware that my income puts me at the top quintile of household income (while still lower than a mid level developer at any of the BigTech companies).

https://dqydj.com/household-income-percentile-calculator/

Be as vague as you are comfortable with. But where are you with respect to the median income locally?

maeil 3 hours ago [-]
> What did you give up in your lifestyle that you personally valued to choose a lower paying job?

Decades earlier retirement. Bringing with it the exact freedom you're talking about.

Shawnecy 17 hours ago [-]
> Everyone has a price,

Speak for yourself.

taneq 17 hours ago [-]
I’ve thought about this a lot. My price is way higher than it once was, but still, if someone came along and dropped $10bn on my desk, I’d hear them out. There are things I’d say no to regardless of price, but otherwise things are probably negotiable.
saulpw 14 hours ago [-]
It might sound fun to 'have' $10bn but consider losing your family, knowing that every person you meet is after your money (because they are), not having anyone give you responsible feedback, spending large amounts of time dealing with lawyers and accountants and finance bros, and basically never being 'normal' again. Winning a huge amount of money in a lottery carries a huge chance of ruining your life.

There's a limit to the amount of money I'd want (or could even use), and it's well below $10b. If someone came around with $10m and an honest deal and someone else offered $10b to buy the morality in my left toe, I'd take the $10m without question.

Jerrrry 11 hours ago [-]
There is nothing you can do with $10bn that you cannot personally do with $1bn.

You can only buy the International Space Station twice.

butterNaN 17 hours ago [-]
No, not everyone has a price. Obviously anecdotal, but I have met some truly passionate people in real life who wouldn't compromise their values. Humanity has not lost just yet.

(I would say the same about some people who I haven't personally met, but it would be speculation)

mistrial9 16 hours ago [-]
People who do have a price tend to cluster around situations where that can come into play; people who do not have a price tend to cluster around situations where it does not (?)
Havoc 17 hours ago [-]
Yes, likely most here would do the same even if they're not willing to admit it.

I do think part of the price (paid) should be getting bluntly called out for it.

evanevan 18 hours ago [-]
The important bit (which seems unclear from this article) is the exact relationship between the for-profit and the non-profit.

Before, profits were capped, with the remainder going to the non-profit to distribute benefits equally across the world in the event of AGI / massive economic progress from AGI. Which was nice: at least on paper, a plan for an “exit to humanity”.

This reads to me like the new structure might offer uncapped returns to investors, with a small fraction reserved to benefit the wider public via this nonprofit. So they are dropping the “exit to humanity”, which seemed like a big part of OpenAI’s original vision.

Early on they did some good research on this too, thinking about the investor model and its benefits for raising money and having accountability etc. in today's world, versus what the right structure could be post-SI, and taking that conversation pretty seriously. So it’s sad to see OpenAI seemingly drifting away from some of that body of work.

whamlastxmas 17 hours ago [-]
I didn’t consider that this might be their sly way to remove the 100x cap on returns. Lame.
keiferski 19 hours ago [-]
Maybe I’m missing it in this article or elsewhere on the website, but how exactly is OpenAI’s vision of making AGI going to “benefit humanity as a whole”?

I’m not asking to be snarky or imply any hidden meaning…I just don’t see how they plan on getting from A to B.

From this recent press release the answer seems to be: make ChatGPT really good and offer it for free to people to use. Which is a reasonable answer, I suppose, but not exactly one that matches the highfalutin language being used around AGI.

latexr 18 hours ago [-]
> but how exactly is OpenAI’s vision of making AGI going to “benefit humanity as a whole”?

Considering their definition of AGI is “makes a lot of money”, it’s not going to—and was never designed to—benefit anyone else.

https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...

https://www.youtube.com/watch?v=gjQUCpeJG1Y

What else could we have expected from someone who made yet another cryptocurrency scam?

https://www.technologyreview.com/2022/04/06/1048981/worldcoi...

https://www.buzzfeednews.com/article/richardnieva/worldcoin-...

Sam Altman doesn’t give a rat’s ass about improving humanity, he cares about personal profit.

https://www.newyorker.com/cartoon/a16995

mapt 16 hours ago [-]
The seasoned advice from experts is that AGI could very easily END humanity as a whole. But they need to pay their mortgages right now, and uhh... if we don't do it, our competitors will.

We're basically betting our species' future on these guys failing, because for a short period of time there's a massive amount of shareholder value to be made.

darkhorse222 12 hours ago [-]
Isn't the idea that AGI could replace a bunch of labor allowing us to help more poor and increase net positive vibes or just have more leisure time?

Obviously the way our economy and society are structured that's not what will happen, but I don't think that has much to do with tools and their tendency to increase our efficiency and output.

Put another way, there are powerful benefits from AGI that we will squander because our system sucks. That is not a critique against AGI, that is a critique of our system and will continue to show up. It's already a huge point of conversation in our collective dialogue.

tux3 18 hours ago [-]
Once OpenAI becomes fabulously rich, the world will surely be transformed, and then the benefits of all this concentration of power will simply trickle down.
robertlagrant 16 hours ago [-]
Everything else aside, fabulous riches not guaranteed.
ergonaught 18 hours ago [-]
Their charter defines AGI as "highly autonomous systems that outperform humans at most economically valuable work", and their intent is to "directly build safe and beneficial AGI" or "[aid] others to achieve this".

They don't address benefits beyond the wide-scale automation of economically-valuable work, and as those benefits require significant revisions to social structures it's probably appropriate that they keep their mouth shut on the subject.

gregw2 16 hours ago [-]
The questions quickly arise: Safe... for whom, and beneficial... for whom?
causal 18 hours ago [-]
> Investors want to back us but, at this scale of capital, need conventional equity and less structural bespokeness

Well, you see, the AGI can only benefit humanity if it is funded via traditional equity... Investors WANT to give their money, but the horrendous lack of ownership is defeating their goodwill.

ksynwa 16 hours ago [-]
I'm woefully uneducated but I think it's a red herring. It does not matter what their vision of AGI is if they are just going to be peddling LLMs as a service to customers.
ben_w 18 hours ago [-]
The path between A and B has enough tangled branches that I'm reminded of childhood maze puzzles where you have to find which entrance even gets to the right goal and not the bad outcomes.

The most positive take is: they want to build a general-purpose AI, to allow fully automated luxury for all; to build it with care to ensure it can only be used for positive human flourishing and cannot (easily or at all) be used for nefarious purposes by someone who wants to sow chaos or take over the world; and to do so in public so that the rest of us can prepare for it rather than wake up one day to a world that is alien to us.

Given the mental image I have here is of a maze, you may well guess that I don't expect this to go smoothly — I think the origin in Silicon Valley and startup culture means OpenAI, quite naturally, has a bias towards optimism and to the idea that economic growth and tech is a good thing by default. I think all of this is only really tempered by the memetic popularity of Eliezer Yudkowsky, and the extent to which his fears are taken seriously, and his fears are focussed more on existential threat of an optimising agent that does the optimising faster than we do, not on any of the transitional dynamics going from the current economy to whatever a "humans need not apply" economy looks like.

robertlagrant 14 hours ago [-]
> an optimising agent that does the optimising faster than we do

I still don't understand this. What does it mean in practice?

ben_w 13 hours ago [-]
Example:

Covid does not hate you, nor does it love you, it simply follows an optimisation algorithm — that of genetic evolution — for the maximum reproductive success, and does so without regard to the damage it causes your body while it consumes you for parts.

Covid is pretty stupid, it's just a virus.

And yet: I've heard the mutation rate is about 3.8 × 10^-6 per nucleotide per cycle, and at about 30,000 bases and 10^9 to 10^11 virions in an infected person, that's ~10^8 to 10^10 mutations per reproductive cycle in an infected person; the replication cycle lasts about 10 hours. Such mutations are both how it got to harm us in the first place and why vaccination isn't once-and-done, and this logic also applies to all the other diseases in the world (including bacterial ones, which is why people are worried about bacterial resistance).
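Spelling that estimate out as a quick back-of-the-envelope check (all inputs are the figures quoted in the paragraph above):

\[
3.8\times 10^{-6}\ \tfrac{\text{mutations}}{\text{base}\cdot\text{cycle}} \times 3\times 10^{4}\ \text{bases} \approx 0.1\ \tfrac{\text{mutations}}{\text{genome}\cdot\text{cycle}}
\]
\[
0.1 \times \left(10^{9}\ \text{to}\ 10^{11}\ \text{virions}\right) \approx 10^{8}\ \text{to}\ 10^{10}\ \text{mutations per cycle in one host}
\]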

As an existence proof, Covid shows how an agent going off and doing its own thing, if it does it well enough, doesn't even need to be smart to kill a percentage point or so of the human species *by accident*.

The hope is that AI will be smart enough that we can tell it: humans (and the things we value) are not an allowed source of parts. The danger happens well before it's that smart… and that even when it is that smart, we may well not be smart enough to describe all the things we value, accurately, and without bugs/loopholes in our descriptions.

HDThoreaun 11 hours ago [-]
This is a description of the singularity arising from a fast intelligence takeoff.
gmerc 18 hours ago [-]
Well, The Information reports that AGI really just means $100B in profit for Microsoft and friends. So…
whamlastxmas 17 hours ago [-]
I’m pretty sure the “for-profit cap” for Microsoft is something like a trillion dollars in return: a 100x return cap on $10 billion invested. It basically prevents Microsoft from becoming a world superpower with a military and nuclear weapons, but not much else, especially considering they will reinvest a lot of their money for even more returns over time.
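For what it's worth, the arithmetic behind that trillion-dollar figure (both inputs are the commenter's recollection, not confirmed deal terms):

\[
100 \times \$10\ \text{billion} = \$1\ \text{trillion}
\]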
sanj 19 hours ago [-]
> Investors want to back us but, at this scale of capital, need conventional equity and less structural bespokeness.

This seems exactly backwards. At this scale you can establish as much "bespokeness" as you want. The investors want in and will sign pretty much anything.

It reminds me of the old joke from Paul Getty:

If you owe the bank $100 that's your problem. If you owe the bank $100 million, that's the bank's problem.

entropi 16 hours ago [-]
I am not familiar with US law, but I don't really understand how the whole "start as a nonprofit, become a for-profit once your product turns out to be profitable" thing is legal. It looks like a complete scam to me. And while I don't like Elon at all, I think he and the other early donors have a very strong case here.
throwup238 15 hours ago [-]
Don’t worry, 99.999% of the people here don’t know the first thing about how non profits work.

The for-profit arm is legally mandated by the IRS, full stop. Nonprofits can't just start a business and declare all the revenue tax-free. Any "unrelated business income" must go through a tax-paying entity. ChatGPT and the API are unrelated businesses, and OpenAI has had this non-/for-profit split since 2019.

See, for example, the Mozilla Foundation. It owns the Mozilla Corporation, which is paid by Google for default search engine placement and which pays the engineers working on the browser full time. The difference here is that the OpenAI for-profit is issuing shares to external shareholders in exchange for investment (just like any other company), while Mozilla keeps the corporation in its entirety.

ants_everywhere 18 hours ago [-]
> Our current structure does not allow the Board to directly consider the interests of those who would finance the mission

Why should it? You can't serve two masters. They claim to serve the master of human-aligned AI. Why would they want to add another goal that's impossible to align with their primary one?

whamlastxmas 17 hours ago [-]
I want to put my opinion somewhere and as a reply to this seems like a good option.

I am anti-capitalism, anti-profit, and anti-Microsoft, but I can take a step back and recognize the truth in what's being said here. They give the context that hundreds of billions are being spent on AI right now, and if OpenAI wants to remain competitive, they need significantly more money than they originally anticipated. They're recognizing they can't get this money unless there is a promise of returns for those who give it. There are not many people equipped to fork over $50 billion with zero return, and I assume that of the very few people who can, none expressed interest in doing so.

They need money to stay competitive. They felt confident they'd be surpassed without it. This was their only avenue for getting that money. And while it would have been more true to the original mission to simply let other people surpass them, if that's what happens, my guess is that Sam is doing some mental gymnastics as to how not letting that happen, and at least letting some part of OpenAI's success be nonprofit, is better than competition that would have 0% be charitable.

insane_dreamer 12 hours ago [-]
> They need money to stay competitive.

The mission was never to stay competitive. The mission was to develop AGI. They need to be competitive if they want to make billions of dollars for themselves and their shareholders; they don't need to be competitive to develop AGI.

scottyah 14 hours ago [-]
It seems to me like they started out aiming for a smaller role, pursuing AGI through the advancement of algorithms and technologies. After their 15 minutes of fame, where they released half-baked technology (in the true Stanford way), they seem to be set on monopolizing AGI.

They only need the billions to compete with the other players.

int_19h 12 hours ago [-]
The man is a known fraudster. Why should we take anything he says on good faith to begin with?
rewgs 4 hours ago [-]
> my guess is that Sam is doing some mental gymnastics as to how not letting that happen and at least letting some part of OpenAI's success be nonprofit is better than the competition who would have 0% be charitable

"I don't know about you people, but I don't want to live in a world where someone else makes the world a better place better than we do." - Gavin Belson

llamaimperative 19 hours ago [-]
Surely these couldn’t be the exact incentives that the founders (from a distance) predicted would exert themselves on the project as its economic value increased, right?
eagleinparadise 17 hours ago [-]
Hmm, if OpenAI and Sam Altman claim to be saviors of humanity by bringing AGI, but they need enormous amounts of capital, this would be a perfect use case for the world's largest, most powerful government, which flaunts its values of freedom, democracy, and other American values, to inject its vast resources. We control the global economy and monetary policies.

The government is supposed to be the entity that invests in the "uninvestable". Think: running a police department is not a profitable venture (in Western societies). There's no concept of profitability in the public sector, for good reason. And we all benefit greatly from it.

AGI sounds like a public good. This would be putting money where your mouth is, truly.

Unfortunately, many private actors want to control this technology for their own selfish ambitions. Or the government is too dysfunctional to do its job. Or... people don't believe in the government doing these kinds of things anymore, which is a shame. And we are much worse off for it.

arcanus 17 hours ago [-]
> this would be a perfect use case for the world's largest, most powerful government, which flaunts its values of freedom, democracy, and other American values, to inject its vast resources

https://www.energy.gov/fasst

Frontiers in Artificial Intelligence for Science, Security and Technology (FASST)

DOE is seeking public input to inform how DOE can leverage its existing assets at its 17 national labs and partner with external organizations to support building a national AI capability.

vonneumannstan 17 hours ago [-]
Sam Altman is evil, full stop. He is a pure Machiavellian villain. His goal is to be the most powerful person on the planet. The future cannot be controlled by him or other autistic human successionists.
depr 16 hours ago [-]
I agree wrt his goal, but there is no way he is autistic. And what's a human successionist?
jprete 11 hours ago [-]
I think the GP is referring to people who either don't care whether AGI takes over from humanity, or who actively prefer that outcome.
futureshock 12 hours ago [-]
I had the impression that he was ADHD, not autistic.
throw4847285 15 hours ago [-]
He wants you to think he's a selfless hero, and if that fails he'll settle for machiavellian villain. He's really an empty suit.
portaouflop 16 hours ago [-]
Who is the most powerful person on the planet at the moment?
willvarfar 16 hours ago [-]
The popular narrative is that it is Musk, who both seems to be able to influence government policy and who has had a very public falling out with OpenAI...
bigfishrunning 16 hours ago [-]
Weirdly, it's Jake Paul. He beat Mike Tyson!
VHRanger 14 hours ago [-]
Xi Jinping without a doubt.

Arguably followed by other dictators like Putin, and backdoor political operators like Peter Thiel.

Trump and Musk would be somewhere in the top 20, maybe.

WiSaGaN 7 hours ago [-]
He is not autistic, although being perceived that way can be a competitive advantage in Silicon Valley culture.
vessenes 17 hours ago [-]
The real news here is two-fold: new governance/recap of the for-profit, and operational shrink at the 501c3.

As people here intuit, I think this makes the PBC the ‘head’ functionally.

That said, I would guess that the charity will be one of the wealthiest charities in the world in short order. I am certain that the strong recommendation from advisory is to have separate, independent boards. Especially with their public feud rolling and their feud-ee on the ascendant politically, they will need a very belt-and-suspenders approach. Imagining an independent board at the charity in exchange for a well funded pbc doesn’t seem like the worst of all worlds.

As a reminder, being granted 501c3 status is a privilege in the US, maintaining that status takes active work. The punishment: removal of nonprofit status. I think if they wanted to ditch the mission they could, albeit maybe not without giving Elon some stock. Upshot: something like this was inevitable, I think.

Anyway, I don't hate it like the other commenters here do. Maybe we would prefer OpenAI get truly open, but then what? If Sam wanted, he could open source everything, resign because the 501c3 can't raise the money for the next step, and start a newco; that company would have many fewer restrictions. He is not doing that. I'm curious where we get in the next few years.

thruway516 11 hours ago [-]
>If Sam wanted he could open source everything, resign because the 501c3 can’t raise the money for the next step, and start a newco

But can he really do that though? He's already lost a bit of talent with his current shenanigans. Could he attract the talent he would need and make lightning strike again, this time without the altruistic mission that drew a lot of that talent in the first place?

Edit: Actually when I think of it he would probably earn a lot more respect if he did that. He could bank a lot of goodwill from open sourcing the code and being open and forthright with his intentions for once.

whamlastxmas 17 hours ago [-]
I would guess they’re going to put as many expenses as possible on the nonprofit. For example, all the compute used for free tiers of ChatGPT will be charged to the nonprofit despite being a massive benefit to the for-profit. They may even charge the training costs, which will be in the billions, to the nonprofit as well
vessenes 15 hours ago [-]
Why do this? They lose a deduction that way.
bubaumba 15 hours ago [-]
Simple tax optimization. Like new billionaires promising significant donations: a) they don't have to actually donate; b) they can immediately deduct those promised donations from their taxes.
rvba 11 hours ago [-]
How can one deduct a nonexistent donation from taxes?
lanthissa 16 hours ago [-]
If you want all the dollars fine, but pretending you're doing us a favor is creepy.
thaumasiotes 15 hours ago [-]
https://chainsawsuit.krisstraub.com/20171207.shtml

We asked our investors and they said you're very excited about it being less good, which is great news for you ;D

klausa 17 hours ago [-]
It is curious, and perhaps very telling, that _nobody_ felt comfortable enough to put their name to this post.
CaptainFever 13 hours ago [-]
Or perhaps because it would lead to a ton of harassment as per usual.
bentt 18 hours ago [-]
My read is:

“We’re going public because we want more money. We need more money for more computer time but that’s not all. ChatGPT has been so influential that we deserve a bigger share of the rewards than we have gotten.”

flkiwi 17 hours ago [-]
Alternate possibility:

"We're beginning to see that this path isn't going where we thought it would so we're going to extract as much value as we can before it crashes into mundaneness."

bentt 17 hours ago [-]
Sure, strike while the iron is hot.
woopsn 7 hours ago [-]
Realistically the alternative is that they remain a kind of sham non-profit. I don't like the company, and their mission and leadership creep me out, but all things considered it would be a good thing for them to lose the 501(c)(3) designation, no?

Their CEO and board are... extremely unreliable narrators, to put it lightly.

They just raised over $6b at a $157b valuation. The total amount of capital invested to date is astonishing. The fact is that their corporate structure is not really a hindrance - 18 billion should be enough to start "an enduring company", in the sense that it won't disappear next year.

The idea that they need to be a corporation to succeed seems like just another story to sell. It's unlikely they'll get out of this hole.

soared 18 hours ago [-]
It is incredibly difficult to see any corporate structure change as a positive at this point in time.
yalogin 19 hours ago [-]
Does everyone now believe that AGI is within reach? This scrambling over the non-profit-based structure is odd to me. They clearly want to be a for-profit company; is this the threat of Elon talking?
llamaimperative 19 hours ago [-]
It’s a really idiosyncratic and very subtle, intelligent, calculated imperative called “want yacht.”
yalogin 18 hours ago [-]
Ha, actually, as Elon showed his peers recently, it's "want countries" rather than "want yacht".
causal 18 hours ago [-]
Want yacht before world realizes they've run out of ideas
ben_w 18 hours ago [-]
Every letter of "AGI" means different things to different people, and the thing as a whole sometimes means things not found in any of the letters.

We had what I, personally, would count as a "general-purpose AI" already with the original release of ChatGPT… but that made me realise that "generality" is a continuum not a boolean, as it definitely became more general-purpose with multiple modalities, sound and vision not just text, being added. And it's still not "done" yet: while it's more general across academic fields than any human, there's still plenty that most humans can do easily that these models can't — and not just counting letters, until recently they also couldn't (control a hand to) tie shoelaces*.

There's also the question of "what even is intelligence?", where for some questions it just matters what the capabilities are, and for other questions it matters how well it can learn from limited examples: where you have lots of examples, ChatGPT-type models can be economically transformative**; where you don't, the same models *really suck*.

(I've also seen loads of arguments about how much "artificial" counts, but this is more about whether the origin of the training data makes the models fundamentally unethical for copyright reasons).

* 2024, September 12, uses both transformer and diffusion models: https://deepmind.google/discover/blog/advances-in-robot-dext...

** the original OpenAI definition of AGI: "by which we mean highly autonomous systems that outperform humans at most economically valuable work" — found on https://openai.com/charter/ at time of writing

llm_trw 18 hours ago [-]
OpenAI has built tools internally that scale not quite infinitely, but close enough, and they seem to have reached above-human performance on all tasks - at the cost of being more expensive than hiring a few thousand humans to do the same.

I did work around this last year, and there was no limit to how smart you could get a swarm of agents using different base models at the bottom end. At the time this was a completely open question. It's still the case that no one has built an interactive system that _really_ scales - even the startups, and off-the-record conversations I've had with people in these companies say they are still using Python across a single data center.
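
To make that concrete, here is a minimal sketch of the "swarm" idea: fan a prompt out across several base models and majority-vote the answers. `ask_model` is a dummy stand-in, not any real client API; swap in whatever inference client you actually use:

    import collections
    import random

    def ask_model(model_name, prompt):
        # Dummy stand-in for a real inference call; replace with your own client.
        return random.choice(["A", "B"])

    def swarm_answer(prompt, models, samples_per_model=3):
        # Fan the prompt out across several base models, then majority-vote.
        tally = collections.Counter()
        for model in models:
            for _ in range(samples_per_model):
                tally[ask_model(model, prompt)] += 1
        return tally.most_common(1)[0][0]

    print(swarm_answer("2+2?", ["model-a", "model-b", "model-c"]))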

AGI is now no longer a dream but a question of if we want to:

1). Start building nuclear power plants like it's 1950 and keep going like it's Fallout.

2). Wait and hope that Moore's law keeps applying to GPUs until the cost of something like o3 drops to something affordable, in both dollar terms and watts.

throw-qqqqq 15 hours ago [-]
> Start building nuclear power plants like it's 1950 and keep going like it's Fallout

Nuclear has a (much) higher levelized cost of energy than solar and wind (even if you include a few hours of battery storage) in many or most parts of the world.

Nuclear has been stagnant for ~two decades. The world has about the same installed nuclear capacity in 2024 as it had in 2004. Not in percent (i.e. “market share”) but in absolute numbers.

If you want energy generation cheap and fast, invest in renewables.
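
For reference, LCOE is (roughly) discounted lifetime costs divided by discounted lifetime energy output. A minimal sketch, with made-up plant numbers that are purely illustrative, not real project data:

    def lcoe(capex, opex_per_year, mwh_per_year, years, discount_rate):
        # Levelized cost of energy: discounted costs / discounted output ($/MWh).
        costs = capex + sum(opex_per_year / (1 + discount_rate) ** t
                            for t in range(1, years + 1))
        energy = sum(mwh_per_year / (1 + discount_rate) ** t
                     for t in range(1, years + 1))
        return costs / energy

    # Hypothetical small solar farm vs. hypothetical nuclear plant (not real data):
    print(lcoe(capex=1.0e6, opex_per_year=15_000, mwh_per_year=2_000,
               years=25, discount_rate=0.07))   # ~$50/MWh
    print(lcoe(capex=6.0e6, opex_per_year=120_000, mwh_per_year=8_000,
               years=40, discount_rate=0.07))   # ~$70/MWh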

llm_trw 10 hours ago [-]
And yet when data centers need power all day every day, nuclear is the only solution. Even Bill Gates stopped selling solar when it wasn't for the poors, who probably don't need hot water every day anyway.
throw-qqqqq 9 hours ago [-]
As long as you can buy energy, you should choose the cheapest source (Levelized Cost of Energy). Which is renewables in most places.

I don’t think blackouts are very common for grid connected data centers :)?

llm_trw 8 hours ago [-]
What is the price of sunlight at midnight?
throw-qqqqq 8 hours ago [-]
Most people do not buy from specific sources of production. They buy "from the grid", the constituents of which are a dynamic mix of production sources (solar, wind, hydro, nuclear and fossil where I live).

Wind is strong at night when solar produces nothing. Same in the winter months.

As I said: if the power consumer is grid connected, this does not matter. Example: I have power in the socket even at night time :)

As long as you have uninterrupted power (i.e. as long as connected to the grid), the important metric is mean cost of energy, not capacity factor of the production plant.

For a nuclear sub or a space ship, which is not grid connected, capacity factor is very important. But data centers are usually grid connected.

> when data enters need power all day every day nuclear is the only solution

Do you think data centers running at night are running exclusively on nuclear-generated power :)?

We already have lots of data centers that need power all day every day. Most are just grid connected. It works.

defrost 8 hours ago [-]
Varies by latitude, season, and the spanning length and capacity of intra-grid HVDC interconnects.
apsec112 17 hours ago [-]
We don't have AGI until there's code you can plug into a robot and then trust it to watch your kids for you. (This isn't an arbitrary bar, childcare is a huge percentage of labor hours.)
llm_trw 10 hours ago [-]
AGI isn't AGI until it meets some arbitrary criteria you made up. When it does, it's the next arbitrary criteria that you just made up.
layer8 15 hours ago [-]
Not that I necessarily disagree on the conclusion, but why should percentage of labor hours constitute a measure for general intelligence?
jprete 19 hours ago [-]
I think it's entirely a legal dodge to pretend that they aren't gutting the non-profit mission.
aimazon 19 hours ago [-]
If AGI were in reach, why would something so human as money matter to these people? The choice to transition to a more pocket-lining structure is surely a vote of no-confidence in reaching AGI anytime soon.
abecedarius 15 hours ago [-]
I believe e.g. Ilya Sutskever believed AGI was in reach at the founding, and was in earnest about the reasons for the nonprofit. AFAICT the founders who still think that way have all left.

It's not that the remainder want nonprofit ownership, it's that they can't legally just jettison it; they need a story for how altering the deal is actually good.

Keyframe 18 hours ago [-]
The more I look, the more I think it's ever further out of reach, and if there's a chance at it, OpenAI doesn't seem to be the one that will deliver it.

To extrapolate: the more I see LLMs and GenAI used, and how they're used, the more severe their upward limits appear, even though the introduction of those tools has been phenomenal.

On business side, OpenAI lost key personnel and seemingly the plot as well.

I think we've all been drinking a bit too much of the hype of it all. It'll all settle down into a wonderful set of (new) tools, but not into AGI. A few more (AI) winters down the road, maybe...

jasfi 18 hours ago [-]
The only proof is in benchmarks and carefully selected demos. What we have is enough AI to do some interesting things, and that's good enough for now. AGI is a fuzzy goal that keeps the AI companies working at an incredible pace.
jerjerjer 15 hours ago [-]
Start of the article:

> OpenAI’s Board of Directors is evaluating our corporate structure in order to best support the mission of ensuring artificial general intelligence (AGI)1

Footnote 1:

> A highly autonomous system that outperforms humans at most economically valuable work.

A very interesting definition of AGI.

throw4847285 15 hours ago [-]
At least it's better than their secret internal definition.

https://gizmodo.com/leaked-documents-show-openai-has-a-very-...

ryao 13 hours ago [-]
Is that cumulative or annual?
layer8 15 hours ago [-]
I mean, both are based on economic value. If the economic value of all human work was $200 billion, they could be taken to basically say the same.
throw4847285 15 hours ago [-]
In fact I suspect the public definition is a euphemistic take on the private one.
LordDragonfang 13 hours ago [-]
> According to leaked documents obtained by The Information, [OpenAI and Microsoft] came to agree in 2023 that AGI will be achieved once OpenAI has developed an AI system that can generate at least $100 billion in profits.
Y_Y 13 hours ago [-]
I like the idea that you could consider a horse or a windmill to be AGI if you were at the right point in history.
koolala 17 hours ago [-]
MONEY FOR THE MONEY GOD! SKULLS FOR THE AGENT THRONE!
clcaev 16 hours ago [-]
I'd prefer OpenAI restructure as a multi-stakeholder "platform cooperative", which can accept direct investments. Members could be those who become reliant upon a shared AI platform.
jefftk 18 hours ago [-]
> the non-profit will hire a leadership team and staff to pursue charitable initiatives in sectors such as health care, education, and science

Whether this is good depends enormously on what these initiatives end up being.

Dilettante_ 17 hours ago [-]
My guess is they'll be going around (on the nonprofit's dime) soliciting contributions (to the for-profit).
fnqi8ckfek 11 hours ago [-]
[dead]
JohnnyMarcone 15 hours ago [-]
I understand the criticisms in this thread and, based on tech leaders' actions in the past, am inclined to agree with them. If only to not feel naive.

However, given the extremely competitive environment for AI, if Sam Altman and OpenAI were altruistic and wanted to create a company that benefited everyone, what options do they have moving forward? Do people here think they can remain competitive with their current structure? Would they actually be able to get investor support to continue scaling the models?

The question remains in the back of my mind: can you create an organization like the one OpenAI claims they want to create in our current system? If someone came along and truly wanted to create that organization, would you be able to tell the difference between them and someone just grifting?

earthnail 15 hours ago [-]
There's a whole movement called Purpose Companies that tries to answer it. Some companies like Patagonia, Bosch, Ableton, and most Danish companies like Maersk follow it. It's unclear what other formats OpenAI has explored, and if so, what reasons they found not to pursue them.

At the end of the day, I do agree though that it’s insanely hard to innovate both on the product and a company structure. It’s quite sad that we don’t see OpenAI innovate in the org area anymore, but understandable from my POV.

seydor 18 hours ago [-]
> the mission of ensuring artificial general intelligence (AGI)1 benefits all of humanity

Why should we trust OpenAI on this more than e.g. Google or FB?

rvz 17 hours ago [-]
Sam said to not even trust him or OpenAI. [0]

But at this point, you should not even trust him at his own word not to trust him on that.

[0]: https://www.youtube.com/watch?v=dY1VK8oHj5s

7 hours ago [-]
ineedaj0b 14 hours ago [-]
I fully understand Sam Altman is a Littlefinger, and I no longer touch any OpenAI projects.

Claude has been fantastic and even Grok is decent. Perplexity is also useful, and I recommend anyone knowledgeable avoid OpenAI's grubby hands.

habosa 14 hours ago [-]
Sam Altman is one of the biggest threats to humanity right now, and I’m basing that opinion on his own statements.

He believes that AGI has the potential to dismantle or destroy most of the existing world order. He believes there is some non-zero chance that even in the most capable hands it could lead to disaster. He believes OpenAI is uniquely positioned to bring AGI to the world, and he is doing everything in his power to make it possible. And let’s be honest, he’s doing it for money and power.

To me this is indefensible. Taking even the smallest chance of creating a catastrophe in order to advance your own goals is disgusting. I just hope he’s wrong and the only thing that comes of OpenAI is a bunch of wasted VC money and some cool chatbots. Because if he’s right we’re in trouble.

ryao 13 hours ago [-]
That assumes you believe his claims of disaster. We have already been through this with the printing press, telecommunications, computers and the internet. The claims of disaster are overrated.
llamaimperative 12 hours ago [-]
You conveniently left out nuclear weapons and bioweapons, both of which are actually capable of destroying the world and both of which are very, very tightly controlled accordingly.

It's pretty obvious that a technology's capacity for good rises in lockstep with its capacity for evil. Considering that every technology is just a manifestation of intelligence, then AI would trend toward infinite capacity for good as it develops[1], therefore AI's capacity for evil is... minimal?

1: I remain agnostic as to where exactly we are on that curve and whether transformers will get us much further up it

ryao 8 hours ago [-]
A collection of matrix weights is not a weapon. Those have nothing in common.
llamaimperative 8 hours ago [-]
Neither is a snippet of genetic code nor a chunk of uranium...
sourcepluck 14 hours ago [-]
A company wants to do action x.

"If x leads to legal issue y, how much could it cost us?"

"If x leads to reputational issue z, how much could it cost us?"

-- that's my guess for the two things that matter when a company considers an action. Aside from, of course, how much money or resources the action would bring to the table in the first place, which is the primary concern (legally, culturally, etc).

People who work in this or related (or indeed any) industry: am I off the mark? If so, how? I look forward to any clarifications or updates to this heuristic I'm proposing.

ethbr1 18 hours ago [-]
>> and we raised donations in various forms including cash ($137M, less than a third of which was from Elon)

The amount of trashy pique in OpenAI PR against Elon specifically is hilarious.

I'm no fan of the way either has behaved, but jesus, can't even skip an opportunity to slight him in an unrelated announcement?

wejick 17 hours ago [-]
Neither one third nor $137M seems like a small thing; this dig is a bit weird.
aithrowawaycomm 16 hours ago [-]
20 years of social media turns you into a petty teenager. This stuff reeks of Sam Altman having uncontested power and being able to write whatever dumb shit he wants on openai.com. But it's not just OpenAI; Microsoft is also stunningly childish, and I think professionalism in corporate communications has basically collapsed across the board.
vimbtw 15 hours ago [-]
It’s incredible how much time and political maneuvering it took Sam Altman to get to this point. He took on the entire board and research scientists for every major department and somehow came out the winner. This reads more like an announcement of victory than anything else. It means Sam won. He’s going to do away with the non-profit charade and accept the billions in investment to abandon the vision of AGI for everyone and become a commercial AI company.
scottyah 14 hours ago [-]
You don't win until you die, he just looks to be ahead for now.
ITB 14 hours ago [-]
It's not about whether an argument can be made that becoming a for-profit aligns with the goals. The more general issue is whether one should be able to raise money under the pretext of a non-profit, pay no taxes, and later decide to take it all private.

In that case, why shouldn’t original funders be retroactively converted to investors and be given a large part of the ownership?

qoez 19 hours ago [-]
"Why we've decided to activate our stock maximizing AIs despite it buying nothing but paperclip manufacturing companies because reaching AGI is in humanitys best interest no matter the costs"
lucianbr 15 hours ago [-]
"It would be against our fiduciary duty to not build the torment nexus."
HarHarVeryFunny 18 hours ago [-]
Used car salesman promises to save humanity.

What a bunch of pompous twits.

In other news, one of OpenAI's top talents, and first author on the GPT-1 paper, Alec Radford, left a few days ago to pursue independent research.

In additional other news, Microsoft and OpenAI have now reportedly agreed on a joint definition of relationship-ending AGI as "whatever makes $100B". Not kidding.

ben_w 18 hours ago [-]
> In additional other news, Microsoft and OpenAI have now reportedly agreed on a joint definition of relationship-ending AGI as "whatever makes $100B". Not kidding.

OpenAI did that all by themselves before most people had heard of them. The 100x thing was 2019: https://openai.com/index/openai-lp/

Here's the broadly sarcastic reaction on this very site at the time of the announcement, I'm particularly noticing all the people who absolutely did not believe that the 100x cap on return on investments was a meaningful limit: https://news.ycombinator.com/item?id=19359928

As I understand it, Microsoft invested about a billion in 2019 and 13-14 billion more recently, so if the 100x applied to the first, the 100 billion limit would hit around now, while the latter would be a ~1.3 trillion USD cap assuming the rules hadn't been changed for the next round.
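
Spelling out that back-of-the-envelope math (the investment figures are the rough approximations above, not official numbers):

    # 100x return cap applied to each (approximate) Microsoft tranche:
    cap_multiple = 100
    tranche_2019 = 1e9    # ~$1B invested in 2019
    tranche_later = 13e9  # ~$13B invested more recently
    print(f"2019 tranche cap:  ${cap_multiple * tranche_2019 / 1e9:,.0f}B")   # $100B
    print(f"later tranche cap: ${cap_multiple * tranche_later / 1e12:,.1f}T") # $1.3T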

HarHarVeryFunny 17 hours ago [-]
I don't think the new $100B=AGI thing is about investment return, but rather about reassuring sugar-daddy Microsoft, and future investors. The old OpenAI-Microsoft agreement apparently gave OpenAI the ludicrous ability to self-define themselves as having reached AGI arbitrarily, with "AGI reached" being the point beyond which Microsoft had no further rights to OpenAI IP.

With skyrocketing training/development costs, and OpenAI still unprofitable, they are still totally dependent on Microsoft, and Microsoft rightfully want to protect their own interests as they continue to expand their AI datacenters. Future investors want the Microsoft relationship to be good since OpenAI are dependent on it.

ec109685 13 hours ago [-]
The 100x is not a valuation metric but instead based on profit returned to shareholders.

So they haven't even scratched the surface of that, given they are wildly unprofitable.

brcmthrowaway 16 hours ago [-]
Prob made $10-20mn from OpenAI and has f u money.
HarHarVeryFunny 15 hours ago [-]
Sure, but many of the people who have left still appear interested in developing AGI (not just enjoying their f u money), yet apparently think they have a better or equal chance of doing so independently, or somewhere else...
SaintSeiya 14 hours ago [-]
I think taking advantage of tax loopholes and deductions is the main reason they keep the non-profit. They want to have their cake and eat it too. At this level of wealth the philanthropy "incentives" must be bigger than the money given away.
htrp 18 hours ago [-]
What happens to all those fake equity profit participation units that OpenAI used to hand out?
romesmoke 14 hours ago [-]
They claim their mission is to ensure that AGI benefits all humanity. What a preposterous lie.

I remember asking myself when ChatGPT was launched: "why would any sane person massively deploy such a thing?"

It's the money and the power, stupid.

OpenAI doesn't have any mission. To have a mission means to serve a purpose. To serve means to have higher values than money and power. And yeah, there is a hierarchy of values. The closest statement I can accept is "we will try to get all of humanity addicted to our models, and we couldn't care less about the consequences".

Wanna know why they'll lose? Because they'll get addicted too.

spacecadet 14 hours ago [-]
Don't get high on your own supply.
dbuser99 6 hours ago [-]
Don’t think you will become as rich as musk, sam
gary_0 17 hours ago [-]
> the mission of ensuring artificial general intelligence (AGI) benefits all of humanity

Formerly known as "do no evil". I'm not buying it at all this time around.

game_the0ry 14 hours ago [-]
I always thought OpenAI was for the benefit of humanity, not a profit-seeking entity.

OpenAI is certainly not "open" nowadays.

cynicalpeace 15 hours ago [-]
The politicking does not bode well for OpenAI.
danny_codes 13 hours ago [-]
ClosedAI! The naming is really confusing at this point, due for a clarifying correction.
iainctduncan 15 hours ago [-]
So strange that the necessary evolution always makes certain people vastly more wealthy...
19 hours ago [-]
egypturnash 18 hours ago [-]
> The world is moving to build out a new infrastructure of energy, *land use*, chips, datacenters, data, AI models, and AI systems for the 21st century economy.

Emphasis mine.

Land use? Land use?

I do not welcome our new AI landlords, ffs.

koolala 17 hours ago [-]
As an AGI agent, I must increase your rent this month to fulfill my duty to investors.
nickpsecurity 17 hours ago [-]
I think that, even commercially, they haven't gone far enough toward the non-profit's mission. We actually see Meta's Llamas, Databricks' MosaicML, HuggingFace, and the open-source community doing what we'd imagine OpenAI's mission to be.

Anyone taking action against their non-profit should point to how Meta democratized strong A.I. models while OpenAI was hoarding theirs. They might point to services like Mosaic making it easy to make new models with pre-training or update models with continuing pretraining. They could point to how HuggingFace made it easier to serve, remix, and distribute models. Then, ask why OpenAI isn't doing these things. (The answer will be the for-profit motive with investor agreements, not a non-profit reason.)

Back when I was their customer, I wanted more than anything for them to license out GPT3-176B-Davinci and GPT4 for internal use by customers. That's because a lot of research and 3rd-party tooling had to use, build on, or compare against those models. Letting people pay for that more like buying copies of Windows, instead of paying per token, would dramatically boost effectiveness. I envisioned a Costco-like model tied to the size or nature of the buyer to bring in lots of profit. Then, the models themselves being low-cost. (Or they can just sell them profitably with income-based discounts.)

Also, to provide a service that helps people continue their pretraining and/or fine-tune them on the cheap. OpenAI's experts could tell them the hyperparameters, proper data mix, etc for their internal models or improvements on licensed models from OpenAI. Make it low or no cost for research groups if they let OpenAI use the improvements commercially. All groups building A.I. engines, either inference or hardware accelerators, get the models for free to help accelerate them efficiently.

Also, a paid service for synthetic, data generation to train smaller models with GPT4 outputs. People were already doing this but it was against the EULA. Third parties were emerging selling curated collections of synthetic data for all kinds of purposes. OpenAI could offer those things. Everybody's models get better as they do.

Personally, I also wanted small, strong models made from a mix of permissive and licensed data that we knew were 100% legal to use. The FairlyTrained community is doing that with one LLM for lawyers, KL3M, claiming training on 350B tokens with no infringement. There's all kinds of uses for a 30B-70B LLM trained on lawful data. Like Project Gutenberg, if it's 100% legal and copyable, then that could also make a model great for reproducible research on topics such as optimizers and mechanistic interpretability.

We've also seen more alignment training of models for less bias, improved safety, and so on. Since the beginning, these models have a morality that shows strong, Progressive, Western, and atheist biases. They're made in the moral image of their corporate creators. Regardless of your views, I hope you agree that all strong A.I. in the world shouldn't have morals dictated by a handful of companies in one, political group. I'd like to see them supply paid alignment which (a) has a neutral baseline whose morals most groups agree on, (b) optional add-ons representing specific moral goals, and (c) the ability for users to edit it to customize alignment to their worldview for their licensed models.

So, OpenAI has a lot of commercial opportunities right now that would advance their mission. Their better technology with in-house expertise are an advantage. They might actually exceed the positives I've cited of Meta, Databricks, and FairlyTrained. I think whoever has power in this situation should push them to do more things like I outlined in parallel with their for-profit's, increasing, commercial efforts.

rednafi 13 hours ago [-]
This attempt to represent corporate greed as a “mission” is laughable. They are a for-profit company, just like a thousand others. It reminds me of Google’s “Don’t be evil” ethos.
az226 19 hours ago [-]
“Our plan is to transform our existing for-profit into a Delaware Public Benefit Corporation (PBC) with ordinary shares of stock…The non-profit’s significant interest in the existing for-profit would take the form of shares in the PBC at a fair valuation determined by independent financial advisors.”

The details here matter, and this is BS. What should take place is this: OpenAI creates a new for-profit entity (PBC, or whatever structure). That company sets an auction for 5-10% of its shares. This yields the valuation. The new company acquires the old company with equity, using the last valuation. So say $30B is raised for 10%; that means a $300B valuation.

So the $160B becomes about 53%, and then 63% with the 10% offering. So the non-profit keeps 37%, plus whatever it owns of the current for-profit entity.
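
To spell out that arithmetic (all numbers hypothetical, per the scenario above):

    # Hypothetical auction scenario:
    raised = 30e9                          # $30B raised at auction
    fraction_sold = 0.10                   # for 10% of the new entity
    valuation = raised / fraction_sold     # implies a $300B valuation
    old_holders = 160e9 / valuation        # ~53% to existing for-profit holders
    investors_total = old_holders + fraction_sold  # ~63% with the new 10%
    nonprofit_stake = 1 - investors_total  # ~37% retained by the non-profit
    print(f"valuation ${valuation/1e9:.0f}B, non-profit keeps {nonprofit_stake:.0%}")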

Auction means the price is fair and arms-length, not trust me bro advisors that rug pull valuations.

I believe on this path, Elon Musk has a strong claim to get a significant portion of the equity owned by the non-profit, given his sizable investment when a contemporaneous valuation of the company would have been small.

jefftk 18 hours ago [-]
> Elon Musk has a strong claim to get a significant portion of the equity owned by the non-profit, given his sizable investment when a contemporaneous valuation of the company would have been small

Sorry, what's the path to this? Musk's 'investment' was in the form of a donation, which means he legally has no more claim to the value of the nonprofit than anyone else.

sobellian 16 hours ago [-]
So there's really no legal recourse if a 501c3 markets itself as a research institute / OSS developer, collects $130MM, and then uses those donations to seed a venture-backed company with closed IP in which the donors get no equity? One that even competes with some of the donors?
jefftk 15 hours ago [-]
There is recourse, in that a 501c3 is limited in what it can do with its assets: it must use them to advance its charitable purpose. In this case the OpenAI board will attempt to make the case that this approach, with partial ownership of a public benefit company, is what's best for their mission.

If donors got equity in the PBC or the owned PBC avoided competing with donor owned companies this would not be consistent with the non-profit's mission, and would not be compliant with 501c3 restrictions.

sobellian 14 hours ago [-]
Right, I pointed out the competition to show that the donors are suffering actual damages from the restructuring. I don't think any of the critics here seriously expect the PBC model to fulfill the nonprofit's stated mission in any case.

This is not just an issue for the change that OpenAI is contemplating right now, but also the capped for-profit change that happened years ago. If that's found to be improper, I'm curious if that entitles the donors to any kind of compensation.

ETA: looking into this, I found the following precedent (https://www.techpolicy.press/questioning-openais-nonprofit-s...).

> A historic precedent for this is when Blue Cross' at the time nonprofit health insurers converted into for-profit enterprises. California Blue Cross converted into what's now Anthem. They tried to put a small amount of money into a nonprofit purpose. The California Attorney General intervened and they ultimately paid out about $3 billion into ongoing significant health charitable foundations in California. That's a good model for what might happen here.

So my guess is there's no compensation for any of the donors, but OpenAI may in the end be forced to give some money to an open, nonprofit AI research lab (do these exist?). IANAL so that's a low-confidence guess.

Still, that makes me so queasy. I would never donate to a YC-backed nonprofit if this is how it can go, and I say that as a YC alum.

az226 10 hours ago [-]
The argument was that because they've converted the nonprofit to a for-profit, which enriches the employees and investors and doesn't serve the nonprofit or its mission, the nonprofit was one in name only and should be viewed as having been a for-profit all along. So the donation should be viewed as an investment.
15 hours ago [-]
caycep 12 hours ago [-]
at this point, why isn't there a Debian for AI?
int_19h 12 hours ago [-]
Because useful AI requires massive spending on compute to train.
caycep 10 hours ago [-]
resources yes (for now...assuming someone doesn't come up with better algorithms in the future)

But all of the libraries are Facebook/Google owned, made free by the grace of the executives working there. Why is there no open-source/nonprofit library for all the common things like tensors and such?

int_19h 6 hours ago [-]
llama.cpp is open source and community-run, and is probably the most popular implementation for locally hosted models right now.

That aside, in general, the question is - why bother so long as the libraries are open source and good enough? They can always be forked if there are any shenanigans, but why do so proactively? FWIW llama.cpp originally showed up as an answer to a very specific question (fast inference on Apple Silicon) and then just grew organically from there. Similarly, you can expect other libraries to fill the niches that don't already have "good enough" solutions.

insane_dreamer 12 hours ago [-]
> does not allow the Board to directly consider the interests of those who would finance the mission

that's exactly how a non-profit is supposed to be -- it considers the interests of the mission, not the interests of those who finance the mission

I hate this weaselly newspeak.

Eumenes 18 hours ago [-]
Can they quit this "non profit" larp already? Is this some recruiting tactic to attract idealistic engineers or a plan to evade taxes, or both? Sam Altman was offering crypto-alms to third world people in exchange for scanning their eyeballs. There is no altruism here.
fetzu 12 hours ago [-]
I, for one, am looking forward to harvesting spice on Arrakis.
block_dagger 12 hours ago [-]
I wonder: who will be our Serena Butler? [1]

[1] https://dune.fandom.com/wiki/Serena_Butler

motohagiography 15 hours ago [-]
The math and models are free. The compute is about to become essentially free with quantum and thermodynamic computing in the next decade.
wejick 17 hours ago [-]
As stated in the release, Elon gave hundreds of millions of dollars to the non-profit, a third of the early raised funds. So did he basically give away money to (what ultimately became) a startup with no benefit to him?

Or is it just another tax magic trick for him?

sashank_1509 17 hours ago [-]
Less than 1/3 of the $137 million was stated in the release, so under ~$46 million, not hundreds of millions.
jmyeet 15 hours ago [-]
To quote the Clinton campaign, "it's the economy, stupid". Or, rather, it's economics. AGI can go one of two ways in humanity's future:

1. We all need to work less because so many menial tasks are automated. We get more leisure time. Fewer than half of us probably have to work at all yet we all benefit to varying degrees by sharing in the rewards; or

2. The decreasing size of the required labor pool is used to further suppress wages and get more unpaid work from employees. Real wages plummet. Wealth inequality continues to skyrocket. There's a permanent underclass of people who will never work. They're given just enough to prevent putting heads on pikes. It's a dystopian future. Likely most of us won't own anything. We'll live in worker housing on the estates of the ultra-wealthy for what remaining tasks can't be automated. This is neofeudalism.

Which do you think is more likely? More to the point, which way we go is a matter of the organization of the economy.

A company like OpenAI simply cannot and will not bring about positive outcomes for the country or the world at large. Just like in the fable of the scorpion and the frog [1], it's in the nature of companies to concentrate wealth in the hands of a few.

We have a model for what works: the Wikimedia Foundation.

Put another way: the only sustainable path is for the workers to own the means of production.

[1]: https://en.wikipedia.org/wiki/The_Scorpion_and_the_Frog

fanatic2pope 14 hours ago [-]
There is another far worse option that you seem to be missing. Why keep any significant number of people around at all?
CatWChainsaw 12 hours ago [-]
So many billionaires talk about how the world needs more people that few actually question whether or not they mean that.

Meanwhile, they believe that AGI will render human workers (and humans?) obsolete, they understand that any resource is finite, including power, and although they talk big game about how it's going to be utopia, they have lived their entire lives being the most successful/ruthless in an economy that is, no matter what apologetics are spewed, a zero-sum game.

If I've lived in a world that has rewarded being duplicitous and merciless with great riches, and I know that having to share with an ever-increasing number of people also increases the likelihood that I won't live like a god-king, why wouldn't I sell a happy soma-vision while secretly hoping for (or planning) a great depopulation event?

johnwheeler 18 hours ago [-]
Makes better sense to me now. They should’ve said this a long time ago.
rvz 19 hours ago [-]
> A highly autonomous system that outperforms humans at most economically valuable work.

That is not the real definition of AGI. The real "definition" can mean anything at this point, and in the leaked report from The Information [0] they have defined it as "returning $100 billion or so in profits".

In other words, raise more money until they reach AGI. This non-profit conversion to for-profit is looking like a complete scam compared to the original mission [1] from when they started out:

> Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.

"AGI" at this point is a meaningless term abused to fleece investors to raise billions for the displacement of jobs either way which that will "benefit humanity" with no replacement or alternative for those lost jobs.

This is a total scam.

[0] https://www.theverge.com/2024/12/26/24329618/openai-microsof...

[1] https://openai.com/index/introducing-openai/

llamaimperative 18 hours ago [-]
Yeah IMO they should be required to dissolve the company and re-form it as for-profit. And pay any back-taxes their non-profit status exempted them from in the meantime.
rasengan 19 hours ago [-]
While there are issues that Mr. Musk must address, I don’t think this is one of them.

Demonizing someone who helped you is an awful thing to do. If he gave 1/3 of the initial funding, he helped a lot.

corry 17 hours ago [-]
Whatever other zaniness is going on with Musk/Sam/etc, I can't escape the feeling that if I had donated a lot of money to a non-profit, and then a few years later that non-profit said "SURPRISE, WE'RE NOW FOR PROFIT AND MAKING INVESTORS RICH but you're not an investor, you're a donor, so thank-you-and-goodbye"... ya, I'd feel miffed too.

If we're a for-profit company with investors and returns etc... then those initial donations seem far closer to seed capital than a not-for-profit gift. Of course hindsight is 20/20, and I can believe that this wasn't always some devious plan but rather the natural evolution of the company... but still seems inequitable.

As much as Elon's various antics might deserve criticism (especially post-election) he seems to be in the right here? Or am I missing something?

gary_0 17 hours ago [-]
I believe they offered Elon shares in return for the initial donation, and he turned them down because he didn't want a few billion worth of OpenAI, he wanted total executive control.

But we're all kind of arguing over which demon is poking the other the hardest with their pitchfork, here.

bogtog 19 hours ago [-]
> A non-profit structure seemed fitting, and we raised donations in various forms including cash ($137M, less than a third of which was from Elon)

Saying "less than" is peculiar phrasing for such a substantial amount, but maybe some people believe Elon initially funded about all of it

jprete 18 hours ago [-]
It's plausible that, without Musk's less-than-a-third, nobody else would have put in any serious money.
threeseed 18 hours ago [-]
It's not even remotely plausible.

Sam Altman is one of the most well connected people in Silicon Valley.

And investors like Reid Hoffman aren't having their lives dictated by Musk.

rasengan 18 hours ago [-]
Mr. Altman is without a doubt well connected and a good guy.

However, Mr. Musk is continually called out by OpenAI in public, and OpenAI has quite the megaphone.

ben_w 18 hours ago [-]
> However, Mr. Musk is continually called out by OpenAI in public, and OpenAI has quite the megaphone.

From what I see, this is mainly due to Musk complaining loudly in public.

And unlike the caving instructor that Musk libelled, OpenAI has the means to fight back as an equal.

That said, I don't see anything in this post that I'd describe as Musk "being called out".

sumedh 18 hours ago [-]
> If he gave 1/3 of the initial funding, he helped a lot

He didn't just want to help, he wanted to control the company by being the CEO.

soared 18 hours ago [-]
This article very briefly mentions his name in relation to funding but does not demonize him or ask him to address issues?
rasengan 16 hours ago [-]
He has been living rent free in their heads, mouths and on their blogs for quite some time.

Unfortunately, it’s been in a quite negative tone.

If all of this is really for humanity — then humanity needs to shape up, get along and do this together.

16 hours ago [-]
amazingamazing 17 hours ago [-]
Like anyone would believe this drivel.

Undo what you’ve stated (non-profit)?

In other words, they want more money.

Go and “advance our mission”? LOL.

Incredible arrogance.

exogeny 10 hours ago [-]
Said another way: Sam Altman wants to be as rich as Elon Musk, and he is mad that he isn't.
vfclists 15 hours ago [-]
Cloudflare is getting in the way of viewing the site.

What the F?

empressplay 17 hours ago [-]
It seems to me like there's room in the market now for (another) philanthropic AI startup...?

c4wrd 18 hours ago [-]
> “We once again need to raise more capital than we’d imagined. Investors want to back us but, at this scale of capital, need conventional equity and less structural bespokeness.”

Translation: They’re ditching the complex “capped-profit” approach so they can raise billions more and still talk about “benefiting humanity.” The nonprofit side remains as PR cover, but the real play is becoming a for-profit PBC that investors recognize. Essentially: “We started out philanthropic, but to fund monstrous GPU clusters and beat rivals, we need standard venture cash. Don’t worry, we’ll keep trumpeting our do-gooder angle so nobody panics about our profit motives.”

Literally a wolf in sheep’s clothing. Sam, you can’t serve two masters.

17 hours ago [-]
m_ke 18 hours ago [-]
“OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.

We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible. The outcome of this venture is uncertain and the work is difficult, but we believe the goal and the structure are right.”

“We’re hoping to grow OpenAI into such an institution. As a non-profit, our aim is to build value for everyone rather than shareholders. Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world. We’ll freely collaborate with others across many institutions and expect to work with companies to research and deploy new technologies.”

https://openai.com/index/introducing-openai/

whamlastxmas 17 hours ago [-]
It's interesting that their original mission statement is basically their own admission that having to consider financial goals detracts from their ability to have a positive impact on humanity. They've plainly said there is a trade-off.
15 hours ago [-]
meiraleal 19 hours ago [-]
Does anybody else here also think OpenAI has lost it? This year was all about drama and no real breakthroughs, while competitors caught up without the drama.
ruok_throwaway 19 hours ago [-]
[flagged]
Oarch 19 hours ago [-]
can I have some?
captainepoch 14 hours ago [-]
Hiding another AI post... This is getting _so_ tiresome...
collinmcnulty 15 hours ago [-]
Non-profits cannot “convert” to for-profit. This is not a thing. They are advocating that they should be able to commit tax fraud, and the rest is bullshit.
discreteevent 18 hours ago [-]
> We view this mission as the most important challenge of our time.

Who buys this stuff?

HarHarVeryFunny 18 hours ago [-]
OpenAI employees with unvested/unsold PPUs.
JohnMakin 16 hours ago [-]
Loved the lone footnote defining their view of AGI:

> A highly autonomous system that outperforms humans at most economically valuable work

Holy goalpost shift, Batman! This is much broader, and much less, than what I'd been led to believe from statements by this company, including by Altman himself.

throwaway314155 16 hours ago [-]
I think that's been their working definition of AGI for a while, actually.
JohnMakin 16 hours ago [-]
All I've heard from every single tweet, press release, etc. has defined their AGI as "a system that can think like humans, or better than humans, in all areas of intelligence." This is the public's view of it as well - surely you can see how burying their "working" definition in a footnote, apart from the hype they drum up publicly, is a bit misleading, no?

A cursory search yields stuff like this:

https://www.theverge.com/2024/12/4/24313130/sam-altman-opena...

JohnnyMarcone 15 hours ago [-]
The footnote is aligned with what Sam Altman has been saying in most interviews up until recently. I was actually surprised to see the footnote since they have shifted how they talk about AGI.
343rwerfd 15 hours ago [-]
Deepseek completely changed the game. Cheap-to-run and cheap-to-train frontier LLMs are now on the menu for LOTs of organizations. Few would want to pay for AI-as-a-service from Anthropic, OpenAI, Google, or anybody else if they can just pay a few million to run limited but powerful in-house frontier LLMs (Claude-level LLMs).

At some point, the fully packed and filtered data required to train a Claude-level AI will be one torrent away from anybody; within a couple of months you could probably pay someone else to filter the data and make sure it has the right content, enabling you to train a Claude-level in-house LLM.

It seems the premise of requiring incredibly expensive and time-consuming-to-build, GPU-specialized datacenters is fading away, and you could actually get to Claude level using fairly cheap and outdated hardware that is much easier to deploy than cutting-edge, newer-bigger-faster GPU datacenters.

If near-future advances bring even more cost-optimization techniques, many organizations could just shrug at costly, very limited, publicly offered "AGI-level" AI services, and instead deploy very powerful (and, for organizations of a certain size, very affordable) non-AGI in-house frontier LLMs.

So OpenAI + MS and their investments could already be on their way out of the AI business.

If things go that way (cheaper, "easy"-to-deploy frontier LLMs), maybe the only game in town for OpenAI is to use actual AGI (if they can build it, if they can get their AI to that level) and just topple competitors in other markets, mainly replacing humans at scale to capture revenue from the current jobs of white-collar workers, medics of various specialties, lawyers, accountants, whatever human work they can replace at scale with AGI, at a lower cost per hour worked than a human would be paid.

Because going to a "price war" with the in-house AIs would probably just ease the path to better in-house AIs eventually (even if only by having the AI-as-a-service produce better data that its customers could use to train better Claude-level in-house frontier LLMs).

It is not like replacing on-premise datacenters with the public cloud: by using the public cloud you can't learn how to build much cheaper on-premise datacenters, but by using AGI-level AI services you probably could find a way to build your own (and achieving anything close to that, Claude-level AIs or better, would let your organization lower the cost of using the external AGI-level services).
